An efficient approach to process an SSAS multidimensional OLAP cube

Introduction

While building and deploying an SSAS OLAP cube, there are two processing orders that you can choose from when you create a process operation:


  • Parallel (All objects will be processed in a single transaction): used for batch processing; all tasks run in parallel inside one transaction
  • Sequential: tasks run one after another, using one of two transaction modes:
    • One Transaction: all tasks are executed in a single transaction
    • Separate Transactions: each task is executed in its own transaction

As mentioned above, when choosing the parallel processing order, tasks are processed in parallel (with a specific degree of parallelism), but if one task fails, all operations are rolled back, even if the failing task was the last one. This can be a serious problem when processing a huge OLAP cube that contains many partitions.

On the other hand, using a sequential processing order decreases processing performance, since tasks are processed one at a time, which delays delivery.

One of the best workarounds is to process partitions with the parallel option, but in batches; each process operation (batch) contains a specific number of partitions (tasks). If an error occurs during the processing phase, only the tasks of the current batch are rolled back.
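The batching logic can be sketched as follows (pseudocode summarizing the package this article builds):

```
process all dimensions in one parallel XMLA batch
intCount = number of unprocessed partitions in the measure group
for (intCurrent = 0; intCurrent < intCount; intCurrent += p_MaxParallel):
    pick the first p_MaxParallel partitions that are still unprocessed
    process their data in one parallel XMLA batch    (ProcessData)
    process their indexes in one parallel XMLA batch (ProcessIndexes)
```

If a batch fails, only that batch is rolled back; batches that already committed keep their processed state.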

I first proposed this approach as a solution to a similar problem on stackoverflow.com. Since it may interest a wide audience, I decided to improve it and write it up as a separate article.

To learn how to build a cube from scratch using SSAS, I recommend the article How to build a cube from scratch using SQL Server Analysis Services (SSAS).

This article contains a step-by-step guide for implementing this processing logic within a SQL Server Integration Services package.


Prerequisites

In order to implement this process, you have to make sure that SQL Server Data Tools is installed, so that you can create Integration Services packages and use the Analysis Services tasks. More information can be found at Download and install SQL Server Data Tools (SSDT) for Visual Studio.

Also, the OLAP cube must be created and deployed (without processing the dimensions and the cube).

Creating a package and preparing the environment

Adding variables

First, we have to create a new Integration Services project using Visual Studio, and then define all the needed variables as described below:

Variable name         Data type   Description

intCount              Int32       Stores the unprocessed partitions count
intCurrent            Int32       Used within the For Loop container
p_Cube                String      The OLAP cube object ID
p_Database            String      The SSAS database ID
p_MaxParallel         Int32       Degree of parallelism
p_MeasureGroup        String      The measure group object ID
p_ServerName          String      The Analysis Services instance name
strProcessData        String      Stores the XMLA command to process partitions data
strProcessIndexes     String      Stores the XMLA command to process partitions indexes
strProcessDimensions  String      Stores the XMLA command to process dimensions

The following image shows the Variables Tab in Visual Studio.


Note that all variable names starting with “p_” can be considered parameters.

Adding Connection Managers

After defining the variables, we must create an OLE DB connection manager in order to connect to the SQL Server Analysis Services instance:

  1. First, we must open the connection manager and configure it manually
  2. Set the ServerName expression to @[User::p_ServerName] and the Initial Catalog expression to @[User::p_Database], as shown in the image below:
  3. Rename the OLE DB connection manager to “ssas”
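Assuming the standard OLE DB provider for Analysis Services, the ServerName and Initial Catalog expressions make the connection string evaluate at runtime to something like the following (server and database names are placeholders):

```
Provider=MSOLAP;Data Source=MYSERVER;Initial Catalog=MyOlapDatabase;
```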

Processing Dimensions

In order to process dimensions, we must use a Sequence Container to isolate the dimension processing within the package. Then we add a Script Task to prepare the processing command and an Analysis Services Processing Task to execute it:

In the Script Task configuration form, we must select @[User::p_Database] and @[User::p_MaxParallel] as ReadOnly variables and @[User::strProcessDimensions] as a ReadWrite variable, as shown in the image below:

Now, open the script editor and use the following C# code:

The following code prepares the XMLA command that processes the dimensions. We use the AMO libraries to read the SSAS database objects, loop over the dimensions, and generate the XMLA query to be used in the Analysis Services Processing Task:

#region Namespaces
using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Dts.Runtime;
using System.Linq;
using System.Windows.Forms;
using Microsoft.AnalysisServices;
#endregion

namespace ST_00ad89f595124fa7bee9beb04b6ad3d9
{
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            // Connect to the Analysis Services instance using the "ssas" connection manager
            Server myServer = new Server();
            string ConnStr = Dts.Connections["ssas"].ConnectionString;
            myServer.Connect(ConnStr);

            Database db = myServer.Databases.GetByName(Dts.Variables["p_Database"].Value.ToString());
            int maxparallel = (int)Dts.Variables["p_MaxParallel"].Value;
            var dimensions = db.Dimensions;

            // Open a parallel <Batch> and append one <Process> element per dimension
            string strData = "<Batch xmlns=\"http://schemas.microsoft.com/analysisservices/2003/engine\"> \r\n <Parallel MaxParallel=\"" + maxparallel.ToString() + "\"> \r\n";

            foreach (Dimension dim in dimensions)
            {
                strData +=
                    "    <Process xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:ddl2=\"http://schemas.microsoft.com/analysisservices/2003/engine/2\" xmlns:ddl2_2=\"http://schemas.microsoft.com/analysisservices/2003/engine/2/2\" xmlns:ddl100_100=\"http://schemas.microsoft.com/analysisservices/2008/engine/100/100\" xmlns:ddl200=\"http://schemas.microsoft.com/analysisservices/2010/engine/200\" xmlns:ddl200_200=\"http://schemas.microsoft.com/analysisservices/2010/engine/200/200\" xmlns:ddl300=\"http://schemas.microsoft.com/analysisservices/2011/engine/300\" xmlns:ddl300_300=\"http://schemas.microsoft.com/analysisservices/2011/engine/300/300\" xmlns:ddl400=\"http://schemas.microsoft.com/analysisservices/2012/engine/400\" xmlns:ddl400_400=\"http://schemas.microsoft.com/analysisservices/2012/engine/400/400\"> \r\n" +
                    "     <Object> \r\n" +
                    "       <DatabaseID>" + db.ID + "</DatabaseID> \r\n" +
                    "       <DimensionID>" + dim.ID + "</DimensionID> \r\n" +
                    "     </Object> \r\n" +
                    "     <Type>ProcessFull</Type> \r\n" +
                    "     <WriteBackTableCreation>UseExisting</WriteBackTableCreation> \r\n" +
                    "    </Process> \r\n";
            }

            strData += " </Parallel> \r\n</Batch>";
            Dts.Variables["strProcessDimensions"].Value = strData;
            Dts.TaskResult = (int)ScriptResults.Success;
        }

        #region ScriptResults declaration
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion
    }
}

After configuring the Script Task, open the Analysis Services Processing Task and define any valid task manually (just to validate the task). Then, from the Properties tab, open the expressions editor and set the ProcessingCommands property to the @[User::strProcessDimensions] variable, as shown in the image below:
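For reference, the command stored in @[User::strProcessDimensions] takes roughly the following shape (database and dimension IDs are placeholders, and the long list of namespace attributes on <Process> is trimmed for readability):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel MaxParallel="10">
    <Process>
      <Object>
        <DatabaseID>MyOlapDatabase</DatabaseID>
        <DimensionID>Dim Date</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    <!-- ... one <Process> element per remaining dimension ... -->
  </Parallel>
</Batch>
```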

Get the unprocessed partitions count

In order to process partitions in batches, we must first get the count of unprocessed partitions in the measure group. This can be done using a Script Task. Open the Script Task configuration form and select the @[User::p_Cube], @[User::p_Database], @[User::p_MeasureGroup], and @[User::p_ServerName] variables as ReadOnly variables and @[User::intCount] as a ReadWrite variable, as shown in the image below:

Open the Script Editor and write the following C# script:


This script reads the SSAS database objects using AMO libraries, and retrieves the number of unprocessed partitions within the OLAP cube Measure group, then stores this value within a variable to be used later.


#region Namespaces
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using Microsoft.AnalysisServices;
using System.Linq;
#endregion

namespace ST_e3da217e491640eca297900d57f46a85
{
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            // Connect to the Analysis Services instance using the "ssas" connection manager
            Server myServer = new Server();
            string ConnStr = Dts.Connections["ssas"].ConnectionString;
            myServer.Connect(ConnStr);

            Database db = myServer.Databases.GetByName(Dts.Variables["p_Database"].Value.ToString());
            Cube objCube = db.Cubes.FindByName(Dts.Variables["p_Cube"].Value.ToString());
            MeasureGroup objMeasureGroup = objCube.MeasureGroups[Dts.Variables["p_MeasureGroup"].Value.ToString()];

            // Count the partitions of the measure group that are not fully processed
            Dts.Variables["intCount"].Value = objMeasureGroup.Partitions
                .Cast<Partition>()
                .Where(x => x.State != AnalysisState.Processed)
                .Count();

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        #region ScriptResults declaration
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion
    }
}

Process partitions in batches

Finally, we have to create a For Loop container to loop over the OLAP cube partitions in chunks. In the For Loop editor, make sure to set the For Loop properties as follows:

  • InitExpression: @intCurrent = 0
  • EvalExpression: @intCurrent < @intCount
  • AssignExpression: @intCurrent = @intCurrent + @p_MaxParallel

Make sure the For Loop Editor form looks like the following image:

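As an illustration (the numbers are hypothetical), with 25 unprocessed partitions and @p_MaxParallel = 10, the loop runs three times:

```
intCount = 25, p_MaxParallel = 10

iteration 1: intCurrent = 0   → batch processes partitions 1–10
iteration 2: intCurrent = 10  → batch processes partitions 11–20
iteration 3: intCurrent = 20  → batch processes partitions 21–25
exit:        intCurrent = 30  → 30 < 25 is false, loop ends
```

Note that the script inside the loop re-queries the unprocessed partitions on every iteration, so each batch picks up where the previous one committed.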

Within the For Loop container, add a Script Task to prepare the XMLA commands needed to process the data and indexes of the partitions, and add two Analysis Services Processing Tasks to execute these commands, as shown in the image below:

Open the Script Task configuration form and select @[User::p_Cube], @[User::p_Database], @[User::p_MaxParallel], and @[User::p_MeasureGroup] as ReadOnly variables, and select @[User::strProcessData] and @[User::strProcessIndexes] as ReadWrite variables. The Script Task editor should look like the following image:

In the script editor, write the following script:


The script prepares the XMLA commands needed to process the partitions' data and indexes separately. We use the AMO libraries to read the SSAS database objects, loop over the OLAP cube partitions, and generate two XMLA queries, each executing n partitions in parallel (10 in this example) as a single batch: one query processes the data and the other processes the indexes. Each query is then stored in a variable to be used by an Analysis Services Processing Task.

#region Namespaces
using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Dts.Runtime;
using System.Linq;
using System.Windows.Forms;
using Microsoft.AnalysisServices;
#endregionnamespace ST_00ad89f595124fa7bee9beb04b6ad3d9
{[Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase{public void Main(){Server myServer = new Server();string ConnStr = Dts.Connections["ssas"].ConnectionString;myServer.Connect(ConnStr);Database db = myServer.Databases.GetByName(Dts.Variables["p_Database"].Value.ToString());Cube objCube = db.Cubes.FindByName(Dts.Variables["p_Cube"].Value.ToString());MeasureGroup objMeasureGroup = objCube.MeasureGroups[Dts.Variables["p_MeasureGroup"].Value.ToString()];int maxparallel = (int)Dts.Variables["p_MaxParallel"].Value;int intcount = objMeasureGroup.Partitions.Cast<Partition>().Where(x => x.State != AnalysisState.Processed).Count();if (intcount > maxparallel){intcount = maxparallel;}var partitions = objMeasureGroup.Partitions.Cast<Partition>().Where(x => x.State != AnalysisState.Processed).OrderBy(y => y.Name).Take(intcount);string strData, strIndexes;strData = "<Batch xmlns=\"http://schemas.microsoft.com/analysisservices/2003/engine\"> \r\n <Parallel MaxParallel=\"" + maxparallel.ToString() + "\"> \r\n";strIndexes = "<Batch xmlns=\"http://schemas.microsoft.com/analysisservices/2003/engine\"> \r\n <Parallel MaxParallel=\"" + maxparallel.ToString() + "\"> \r\n";string SQLConnStr = Dts.Variables["User::p_DatabaseConnection"].Value.ToString();foreach (Partition prt in partitions){strData +="    <Process xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:ddl2=\"http://schemas.microsoft.com/analysisservices/2003/engine/2\" xmlns:ddl2_2=\"http://schemas.microsoft.com/analysisservices/2003/engine/2/2\" xmlns:ddl100_100=\"http://schemas.microsoft.com/analysisservices/2008/engine/100/100\" xmlns:ddl200=\"http://schemas.microsoft.com/analysisservices/2010/engine/200\" xmlns:ddl200_200=\"http://schemas.microsoft.com/analysisservices/2010/engine/200/200\" 
xmlns:ddl300=\"http://schemas.microsoft.com/analysisservices/2011/engine/300\" xmlns:ddl300_300=\"http://schemas.microsoft.com/analysisservices/2011/engine/300/300\" xmlns:ddl400=\"http://schemas.microsoft.com/analysisservices/2012/engine/400\" xmlns:ddl400_400=\"http://schemas.microsoft.com/analysisservices/2012/engine/400/400\"> \r\n " +"      <Object> \r\n " +"        <DatabaseID>" + db.Name + "</DatabaseID> \r\n " +"        <CubeID>" + objCube.ID + "</CubeID> \r\n " +"        <MeasureGroupID>" + objMeasureGroup.ID + "</MeasureGroupID> \r\n " +"        <PartitionID>" + prt.ID + "</PartitionID> \r\n " +"      </Object> \r\n " +"      <Type>ProcessData</Type> \r\n " +"      <WriteBackTableCreation>UseExisting</WriteBackTableCreation> \r\n " +"    </Process> \r\n";strIndexes +="    <Process xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:ddl2=\"http://schemas.microsoft.com/analysisservices/2003/engine/2\" xmlns:ddl2_2=\"http://schemas.microsoft.com/analysisservices/2003/engine/2/2\" xmlns:ddl100_100=\"http://schemas.microsoft.com/analysisservices/2008/engine/100/100\" xmlns:ddl200=\"http://schemas.microsoft.com/analysisservices/2010/engine/200\" xmlns:ddl200_200=\"http://schemas.microsoft.com/analysisservices/2010/engine/200/200\" xmlns:ddl300=\"http://schemas.microsoft.com/analysisservices/2011/engine/300\" xmlns:ddl300_300=\"http://schemas.microsoft.com/analysisservices/2011/engine/300/300\" xmlns:ddl400=\"http://schemas.microsoft.com/analysisservices/2012/engine/400\" xmlns:ddl400_400=\"http://schemas.microsoft.com/analysisservices/2012/engine/400/400\"> \r\n " +"      <Object> \r\n " +"        <DatabaseID>" + db.Name + "</DatabaseID> \r\n " +"        <CubeID>" + objCube.ID + "</CubeID> \r\n " +"        <MeasureGroupID>" + objMeasureGroup.ID + "</MeasureGroupID> \r\n " +"        <PartitionID>" + prt.ID + "</PartitionID> \r\n " +"      </Object> \r\n " +"      <Type>ProcessIndexes</Type> \r\n " +"      
<WriteBackTableCreation>UseExisting</WriteBackTableCreation> \r\n " +"    </Process> \r\n";}strData += " </Parallel> \r\n</Batch>";strIndexes += " </Parallel> \r\n</Batch>";Dts.Variables["strProcessData"].Value = strData;Dts.Variables["strProcessIndexes"].Value = strIndexes;Dts.TaskResult = (int)ScriptResults.Success;}#region ScriptResults declarationenum ScriptResults{Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure};#endregion}
}

Now open both Analysis Services Processing Tasks and define any valid task manually (just to validate the task). Then, from the Properties tab, open the expressions editor and set the ProcessingCommands property to the @[User::strProcessData] variable in the first task and to @[User::strProcessIndexes] in the second one.

The package control flow should look like the following:

Now the package is ready. If an error occurs during processing, only the current batch will be rolled back.

Disadvantages and possible improvements

One of the most critical disadvantages of this approach is that it can lead to inconsistent values if the OLAP cube stays online while not all partitions are processed. Therefore, it has to be executed during off-hours, or on a separate server whose cube is then deployed to the production server after processing.

Besides this, several improvements can be made to this process:

  1. We can configure some logging tasks to track the package progress, especially when dealing with a huge number of partitions
  2. This example processes the partitions of one measure group; it can be extended to handle all measure groups defined in the OLAP cube. To do that, add a Script Task that gets all measure groups in the SSAS database, then add a Foreach Loop container (variable enumerator) that loops over the measure groups and contains the For Loop container we already created

Some helpful links

Finally, here are some external links with additional information that can help improve this solution:

  • SSAS: Are my Aggregations processed?
  • Improving cube processing time

Translated from: https://www.sqlshack.com/an-efficient-approach-to-process-a-ssas-multidimensional-olap-cube/
