It was a dynamic discussion on Wednesday, July 25 in Insider Learning Network’s BI-BW Forum, with practical advice and information from Penny Silvia and Dr. Bjarne Berg on implementing SAP HANA, based on their current HANA implementation project and book project – the upcoming SAP HANA: An Introduction, to be published this fall by SAP PRESS.
Scott Wallask, SAPexperts editor, asked his questions and moderated the questions coming in from Insider Learning Network members on everything from “Why do we need HANA?” and the basic components of HANA to the future of InfoCubes, database administrators, sizing considerations, DSOs, and more.
To view all the questions, you can review the Q&A Forum archives, or read the edited transcript below.
Scott Wallask, SAPexperts:
Hi everyone - Dr. Bjarne Berg of Comerit and Penny Silvia of IBM will take questions and share details from their current HANA implementation project, including their work with SAP HANA Studio.
Berg is a BI expert and at ComeritLabs has collaborated with IBM on testing, development, and benchmarking SAP HANA for BW solutions. He is joined today by his colleague at Comerit, Filip Lemmens.
Penny is with IBM’s Global Leadership Team for SAP Data and Analytics. She is also a longtime technical advisor for BI content on our sister SAPexperts.com site.
Together, Penny and Berg are spearheading a book on SAP HANA, which will be out this fall from SAP PRESS.
Scott Wallask: Penny and Berg, thanks for joining us. Before you start responding to member questions, could you give a quick update of where your HANA implementation project stands now and how your SAP PRESS book is coming along?
Dr. Berg: First, I want to state that the book we just finished will be available in September from SAP PRESS. In writing this book we worked with IBM Labs in Research Triangle Park to provide a high-end HANA system and Comerit Labs to develop step-by-step instructions on how to do over 80 tasks in SAP HANA. Technically, we used the high-end IBM x3950 X5 server.
Our System: Our x3950 box was a massive 4U rack-mounted server (4U = 7 inches high) that weighed 70.5 lbs. Inside, the main memory was split into two memory banks (the HANA DB resides here). The disk was stored on a GPFS file system: 3.3 TB of HDD. Actually, it was several HDDs acting as one virtual disk, plus one hot-swap spare. Inside the box, there were two 10-core Intel Xeon E7-series 2.40 GHz processors (20 cores total) and a 320 GB internal Fusion-io card, used as a separate GPFS file system for the HANA logs.
There is also a bunch of connectors. For example, there is one dedicated Ethernet connection for the IMM (IBM's Integrated Management Module, which manages the server) and two QPI ports, used to connect to a second x3950. Using this connection type, the two physical servers can be scaled up to act as one big server (important for those who want multi-terabyte HANA systems).
Our system also had two 10 Gb connections on an Emulex card, four 1 Gb Ethernet connections on a PCI card, and two 1 Gb Ethernet connections on the motherboard. The software in our HANA system was a bit simpler. The operating system was SUSE Linux (SLES 11 SP1), and we had installed the two IBM GPFS file systems as well as the SAP HANA software (the server components such as HANA Studio, XML and SQL parsers, logs, and many more sub-components). But remember, it is all included inside the appliance.
While this hardware example is specific to IBM's high-end x3950 x5 box, the components of other vendors are of a similar nature.
So now we can get started with the questions…
NancyDesRoches: Is HANA to be used as a database only or can it be used as a modeling tool (getting data from BW and reporting via Business Objects)? How do we decide its usage?
Penny Silvia: HANA is a database - but if you use the standalone version of HANA (the non-BW one) you can use the modeling tools within it to create your data model and then attach your BI tools on top. The decision point is really whether you are looking to accelerate your BW system or start with something fresh.
Scott Wallask: We were talking about this before the chat: What is the future of the InfoCube? Do you still need to compress InfoCubes?
Dr. Berg: Many have asked whether InfoCubes are needed with a HANA system. Currently, there is significant debate on blogs and forums on the Internet about this topic. However, for the interim period there are several reasons why InfoCubes are still needed:
First, transactional InfoCubes are needed for Integrated Planning and write-back options. InfoCubes are also needed to store and manage non-cumulative key figures and the RSDRI write interface only works for InfoCubes. In addition, the transition from SAP BW to HANA is simplified by allowing customers to move to the new platform without having to rewrite application logic, queries, MultiProviders and data transformations from DSOs to InfoCubes.
However, the continued use of InfoCubes has to be questioned. The purpose of introducing the star schema, snowflakes, and other Dimensional Data Modeling (DDM) techniques in the 1990s was to reduce costly table joins in relational databases, while avoiding the data redundancy of data stored in 1st normal form (1NF) in Operational Data Stores (ODSs).
The removal of the relational database from HANA's in-memory processing makes most of the benefits of DDM moot, and continued use of these structures is questionable. In the future, we may see multi-layered DSOs with different data retention and granularity instead. But for now, InfoCubes will serve a transitional data storage role for most companies.
After the optimization of HANA InfoCubes, you can no longer partition the fact tables semantically. This is simply not needed. However, there are still four partitions behind the scenes. The first partition is used for non-compressed requests, while the second contains the compressed requests. There is also a partition with reference points for the inventory data and a partition for historical inventory data movements. The last two partitions are empty if non-cumulative key figures are not used.
However, the first two partitions still require periodic compression to reduce the physical space used and improve load times during merge processing (very much like traditional BW maintenance). This has minor impact on small InfoCubes (less than 100 million records) and on InfoCubes that have not seen significant data reloads and do not have many requests. Since the compression is executed as a stored procedure inside HANA, it is very fast and should take no more than a few minutes even for very large InfoCubes.
Dr. Berg: Oh, and yes, you can continue using existing standard InfoCubes that do not have the SAP HANA-optimized property, or you can convert them. The core of the new SAP HANA-optimized InfoCube is that when you assign characteristics and/or key figures to dimensions, the system does not create any dimension tables except for the package dimension. Instead, the master data identifiers (SIDs) are simply written to the fact table and the dimensional keys (DIM IDs) are no longer used, resulting in faster data read execution and data loads. In short, dimensions become logical units instead of physical data tables. (PS! The logical concept of 'dimensions' is still used to simplify query development in the BEx Query Designer.)
To convert existing InfoCubes, simply go to the program RSDRI_CONVERT_CUBE_TO_INMEMORY and select the InfoCubes you want to convert. The job is executed in the background as a stored procedure and is extremely fast. Typically, you can expect 10-20 minutes even for very large InfoCubes with hundreds of millions of rows. During the conversion, users can even query the InfoCubes. However, data loads must be suspended. Currently, traditional InfoCubes with a maximum of 233 key figures and 248 characteristics can be converted to HANA-optimized InfoCubes. After the conversion, HANA-optimized InfoCubes are maintained in the SAP HANA database's column-based store and are assigned a logical index (CalculationScenario). However, if the InfoCube was stored only in BWA before the conversion, the InfoCube is set to inactive during the conversion and you will need to re-activate it and reload the data if you want to use it.
While HANA-optimized InfoCubes cannot be remodeled, you can still delete and add InfoObjects using the InfoCube maintenance option. This can be done even if you have already loaded data into the InfoCube.
M.S. Hein: Hi, Dr. Berg and Penny. I know we've heard this from our readers: What are the components of HANA, and what non-HANA components do you need for installation?
Dr. Berg: There are many internal software components of SAP HANA. The core components include:
BC-INS-NGP : HANA unified installer
BC-HAN-INS : HANA unified installer
BC-HAN-UPD : Software Update Manager
BC-DB-HDB-INS : HANA database installation
BC-DB-HDB-UPG : HANA database upgrade
BC-DB-HDB : HANA database
BC-DB-HDB-ENG : HANA database engine
BC-DB-HDB-PER : HANA database persistence
BC-DB-HDB-SYS : HANA database interface
BC-DB-HDB-DBA : HANA database / DBA cockpit
BC-HAN-MOD : HANA studio
BC-HAN-3DM : Information Composer
BC-HAN-SRC : Search
BC-HAN : HANA appliance software
BC-CCM-HAG : Host agent
BC-DB-HDB-MDX : MDX engine - MS Excel client
BC-DB-HDB-DEC : HANA Direct Extractor Connection
EIM-DS : Data Services - ETL-based
BC-HAN-LOA : HANA load controller -log-based
BC-HAN-LTR : Landscape Transformation (LT) - trigger-based
BC-HAN-REP : Sybase replication server - log-based
OTHER HANA COMPONENTS:
Other components that are required to be installed on the HANA box include:
- Java Runtime Environment - This is used by Java components inside HANA studio. The system needs at least version JRE 1.6.
- XULRunner - This is a runtime environment providing a common backend for XUL-based applications. The system needs at least version 1.9.2.
- Libicu - This is a set of international components for Unicode.
- Network Time Protocol (NTP) - While technically not required, it keeps timestamps consistent across trace files between HANA nodes and should be installed.
- Syslogd - This is a logging tool for system messages.
- GTK2 - This is a software component for building graphical user interfaces.
Scott Wallask: For folks who might not know, could either of you give a quick overview of what HANA Studio is?
Dr. Berg: Hi Scott,
HANA Studio is the core interface for admin and development in HANA. This includes the admin console, modeler, and much more. There is also an Information Composer component for more power-user tasks. For most companies, HANA Studio will be the most important interface for admins and employees.
In our book, we have 290 pages on step-by-step HANA studio tasks, so the tasks are quite numerous - from alerts, modeling, monitoring, admin, security and on.
ReneTschauder: With HANA as Sidecar (non BW) option - would we still be able to access BW Data (for example DSO tables)?
Penny Silvia: In that scenario you would be able to bring BW data in via the HANA data capabilities - most likely via Data Services
lemmens: Hi Rene, With the sidecar-BW config, there is no data in the sidecar BW itself, so there is nothing to access there. Not even the PSA is filled. Only the datasources are used to structure the data feed to HANA.
Dr. Berg: Just a note about DSOs, which are also improved in HANA.
When converting existing DSOs, you can either convert them automatically using the transaction RSMIGRHANADB or do this manually in the Data Warehousing Workbench. This migration does not require any changes to process chains, MultiProviders, queries, or data transformations.
The new HANA-optimized DSOs execute all data activations at the database layer instead of the application layer. This saves significant time in data loads and process chains, making data available to users much faster.
Behind the scenes, HANA maintains a future image of the recently uploaded data, stored in a columnar table called the 'activation queue'. The current image of the data is stored in a temporal table that contains the history, main, and delta indexes. Finally, to avoid data replication, the change log is now kept in a calculation view instead of a physical table. Since log data does not have to be written to disk at this stage (in a traditional BW this data is written to a log table in a relational database), this new HANA approach is much faster and also consumes less storage space. So, while logically the activation process is very similar to the current relational tables in SAP BW, the technical approach is quite different.
The data loads are also positively impacted. By not being constrained by I/O writes and reads to/from disk (data is loaded in-memory instead), and with a new optimized approach to internally generated keys (SIDs) that takes advantage of HANA's storage methods, a migrated BW system on HANA typically sees 2-3 times faster data loads overall. For many companies, this will be reason enough to make the transition to HANA.
For more information on new DSO activations on HANA see Note 1646723: “BW on SAP HANA DB: SAP HANA-opt. DSO Activation Parameters”.
mukundagattu: Why would we need BI over HANA, when we are able to do modeling in HANA?
KoenvanDijck: HANA modeling is possible, but some things (e.g. hierarchies) are better done in BW. When installing HANA (as a separate box), you have the box and software, but no content. This has to be modeled (or you buy an RDS - Rapid Deployment Solution).
Who wants to remodel their entire BW application on HANA?
Penny Silvia: You wouldn't NEED BW over HANA - that is your choice. Many customers want to retain their significant investments and intellectual capital they have built in BW and accelerate it via HANA. They will also get to use all of the BW pre-built content that way.
MonicaBittner: Since HANA as well as BW can support BI, what criteria should SAP BW customers use to evaluate where to do BI?
BW can handle complex analytical models as well as simple analytical marts. HANA is very immature, so it can really only handle simple data models unless you use complex SQL statements.
As well, with BW on the HANA database, ETL is very fast, so "near real-time" data loads/reporting is a real possibility with BW on the HANA database. Why would existing BW customers migrate to HANA for BI?
Dr. Berg: Hi Monica,
The benefits of HANA are much more than faster data access: faster data activation, smaller data systems, better compression, far more realistic 'real-time' data, and the ability to build complex applications on HANA.
Also, HANA is no longer immature. With SP4 we have high availability and backup, and complex queries are possible. So HANA is now ready for prime time in most organizations. Actually, I believe that within 3-5 years, 'legacy' non-HANA systems will be few and far between.
MonicaBittner: Thanks, Berg.
Another BW question: SAP has published their SAP Next Generation Real-time Data Platform with SAP Business Suite as well as SAP Business Warehouse on the HANA platform.
SAP states that they will continue to invest in the BW application, on HANA platform - indefinitely? Do you see this as a reality? Or do you foresee that the BW application will become moot, over time, as HANA matures?
Dr. Berg: Hi Monica,
Awesome questions. As you see, the BW InfoCubes and DSOs have changed, and while I believe we will still need some form of data warehouse in the next generation of tools, an argument can be made that it may reside on the same platform as the ERP system. But for now, I don't see SAP discontinuing the BW data warehouse idea, even though the underlying technology has changed for the better. But in 5-8 years, who knows?
NancyDesRoches: We are currently operating on BW 7.3. If we would like to have BW on top of HANA and at the same time we would like to continue to use the existing BW environment (rather than starting from scratch), what is involved in this (i.e. is there a migration tool-kit available)?
Penny Silvia: In that scenario you would "simply" do a database migration from your existing BW database over to a HANA database. You would also need to check your hardware to make sure it is HANA-compatible.
Dr. Berg: Hi Nancy, Are you moving to HANA, or do you want to keep two BW systems (one on HANA and one 'legacy')?
NancyDesRoches: We want to move to HANA and not keep a legacy db. We would like to have only one BW system on top of HANA. Is there a migration tool-kit for what we are looking to do?
Dr. Berg: Not really a 'migration kit', but the tasks are rather simple. For example, we spent 3 days on installing and moving a BW system over. The trick was naturally that we had the right people and the right configurations in place. Most of the next day was testing and more testing, but the migration for a single box was very smooth.
I would plan a 4-8 week migration project after the hardware is installed, but it depends on how much risk you can live with, how large the system is, what BW version you have (7.3), and how many boxes are involved. You also have to decide if you want to do any remodeling (not required, though).
Scott Wallask: Let's talk admin for a moment. How does HANA change the role of the typical database administrator? Or does it not change?
Dr. Berg: The admin functions are quite different in how you execute them.
Many administration activities for SAP HANA are performed via the Admin Console. There are three ways to gain access to the admin console: you can go to 'Window' and select 'Open Perspective' and then the administration console; you can select the perspective from the top-right corner of the screen; or you can select the 'Open Admin Console' option on the welcome screen. Each option will take you to the administration functions of HANA Studio.
You can add systems, users, and security; monitor system usage, queries, available disk space, and memory; and check the status of the system.
For example, alerts are a great way to keep track of your current and historical system performance.
You can create your own alerts and have them emailed to you when triggered. You can set up alerts for when servers stop, disks are reaching critical capacity, or when the system is experiencing high stress such as CPU bottle-necks.
Behind the scenes the statistics server is collecting information about the system status and events. This is stored inside HANA and you can access key alert information in the administrator editor under the overview tab, while the details are found under the alerts tab.
You can display all alerts grouped by different time periods, or you can select only 'current alerts'. If you do this, you will only see recent alerts that have not been resolved.
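The alerting described above boils down to threshold checks over metrics the statistics server collects. Here is a hypothetical sketch of that logic; the metric names and threshold values are ours for illustration, not HANA's actual configuration or API:

```python
# Hypothetical sketch of threshold-based alerting like that described above.
# Metric names and thresholds are illustrative, not HANA's actual API.

THRESHOLDS = {
    "disk_used_pct": 90,  # disks reaching critical capacity
    "cpu_used_pct": 95,   # high stress such as CPU bottlenecks
}

def check_alerts(metrics):
    """Return an alert message for every metric that exceeds its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name} = {value} exceeds {limit}")
    return alerts

print(check_alerts({"disk_used_pct": 93, "cpu_used_pct": 40}))
```

In HANA itself you would, of course, configure such thresholds in the administration editor and have triggered alerts emailed to you, rather than writing code.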
So lots of new tasks and tools to learn…
ReneTschauder: Do you have already experience how HANA could interact with other InMemory Solutions for example APO and TM1?
Dr. Berg: Hi Rene,
ODBC and JDBC access is available and the other in-memory platforms may be able to read the data in HANA (it is wide-open) based on the features of the system you are using. Think of HANA as just another database interacting with other databases.
The limitations will be on the TM1, PowerPlay and other tool side (i.e. old 'PP' compresses the data as well and may need to create a MOLAP cube to use the full tool features, but that is not a HANA limitation).
ReneTschauder: Thanks for this. We were more considering bringing TM1 data over into HANA to enable a joint BW and TM1 data reporting
Dr. Berg: Hi Rene,
Yes, you could also move the TM1 data to HANA using methods such as flatfile loads and BOBJ Data Services if you want to move the data that way instead.
TaniaTaylor: If we implement BO 4 on top of an Oracle DW and BW on HANA, BICS was supposed to replace the universe due to performance. Is this no longer an issue since BW is on HANA? Will the performance of a universe be comparable to BICS?
Dr. Berg: Hi Tania,
BICS connectors are available for BOBJ Analysis (Office), DBSQL for BO Explorer, and MDX for Microsoft Excel, as well as direct JDBC and ODBC for Crystal, Sybase Unwired, and 3rd-party tools. And finally, you can use JDBC/ODBC for Universes as well, with the IDT or the Universe Designer.
So there are lots of options and Universes may not be required for many of the BOBJ tools.
Dr. Berg: Almost forgot: yes, the data fetch and OLAP processing (i.e. the BI Analytical Engine in BW) will be much faster, but the other BOBJ BI 4.x tasks still happen in the BI stack. So overall, universes will be much faster, but the BI 4.0 stack is still there for most mid-size and large companies.
KoenvanDijck: What is the "common practice" when we use the SAP LT Replication Server (the only way that can provide us real-time data in HANA): do we install this on the ECC system, or do we need to install it on a separate NetWeaver server? Are there other approaches for real-time data loading in HANA? SBO scheduled loads can deliver (near) real-time data.
lemmens: Hello Koen,
As to where you install the LT server, the general recommendation is to have it as a separate server. This makes software updates easier to manage between the ECC and SLT components, and system resources are not interfering with each other. But as long as you scale the hardware and are willing to keep up with the latest kernel and SP updates, there is no problem installing on the ECC side.
As to other real-time feeds, the main option besides SLT is Sybase RS/RA (replication server/ replication agent) but that only works for DB2 for LUW.
KoenvanDijck: Is the Sybase RS/RA still on the roadmap?
lemmens: Well, the latest roadmap I have is several weeks old but yes, it's still on that one ;)
Bette Ferris: Thank you both for taking the time today to answer questions. Could you tell us what the key SAP Notes for HANA are and where we can find them?
Dr. Berg: Look for the central notes, like note 1514967 and 1523337 and 1600929.
In general you can browse for notes under the HANA components (note 1523337 lists them for you):
- BC-HAN: SAP High-Performance Analytic Appliance (SAP HANA)
- BC-HAN-MOD: SAP High-Performance Analytic Appliance Modeler
- BC-HAN-LOA: Load Controller
- BC-HAN-REP: Sybase Replication Server
- BC-DB-HDB: SAP In-Memory Computing Engine
- BC-DB-HDB-DBA: Database Administration for HDB
- BC-DB-HDB-INS: Installation HDB
- BC-DB-HDB-PER: Database Persistence for HDB
- BC-DB-HDB-SYS: Database Interface/DBMS for HDB
- BC-DB-HDB-UPG: Upgrade HDB
- BC-DB-HDB-ENG: SAP In-Memory Computing Engine
- BC-DB-HDB-MDX: MDX Engine/Excel Client
In addition, there are many HANA notes on SAP MarketPlace. Here are a few of the important ones to get you started:
1514966 Sizing SAP HANA Database
1514967 SAP HANA: Central Note
1523337 SAP HANA Database: Central Note
1545815 Software Update
1558791 Cumulative key figures
1577128 Supported clients for SAP HANA
1597355 Swap-space recommendation for Linux
1646723 BW on SAP HANA DB: SAP HANA-opt. DSO Activation Parameters
1661202 Support for multiple applications on SAP HANA
1637145 SAP BW on HANA: Sizing SAP HANA Database
1650394 Large table management
1681092 Support for multiple SAP HANA databases on a single SAP HANA appliance
1703675 SP4 note
1704499 License Keys
Laszlo Torok: Is it possible to install HANA for personal learning? Is there a test drive? Is it better on a home server or on AWS?
Scott Wallask: There's a demo and test drive on the Experience HANA site.
Penny Silvia: We do have a HANA 101 book being published in October ... :-) LOTS of great information and insight there!
lemmens: Or you can always get your own personal cloud-based sandbox on Amazon...
DaeJinSwope: With SAP BW on HANA, can you also create HANA views using HANA Modeler in the same Schema?
lemmens: The views are not created in a schema, they are created in a Package. Provided you have the authorization, you can use tables from any schema in your views and save them to a Package.
Sunil Kolhe: Hi Dr Berg,
We met at the Las Vegas BI conference in Nov 2011. I am interested in understanding the licensing and cost of HANA hardware. I understand the licensing works based on 64 GB units. Also, my understanding is that data gets compressed 4:1.
For a 30 TB database to migrate to HANA (standalone, not on BW), what HANA hardware will I need for a successful implementation, and what would the cost and license estimates be?
Dr. Berg: Hi Sunil,
The hardware costs for a 1 TB memory box with 20 cores and 3 TB of disk are around $50-90K, and there are different editions of HANA depending on what you want to do with it.
The first is called "HANA appliance software platform edition", the second is called "HANA appliance software enterprise edition" and the most complete solution is called the "HANA appliance software enterprise extended edition".
The difference between these editions is basically how you want to extract, move and replicate the data.
If you want to use HANA for classical ETL development using BusinessObjects Data Services, you should go for the "appliance software platform edition". This is great for non-SAP shops, or customers who want to accelerate any sources, such as custom-made data warehouses, data marts, or data from non-SAP ERP systems.
The "appliance software enterprise edition" is for companies who want to use their HANA system with trigger-based replication. The edition also includes the SAP BusinessObjects DataServices, so you can actually do both ETL and triggers.
The last edition, "HANA appliance software enterprise extended edition", is for those who want it all. This adds log-based replication of data to the other editions, and most large-scale organizations that already have SAP ERP or BI software in their landscapes should seriously consider this edition. There is also a license version for SAP BW that is much cheaper. So the software components depend on what you want to do with it and how large the system is.
However, for many BW systems it is very cost effective, and the price is not as high as some would have you believe.
Penny Silvia: Hi Sunil,
HANA licensing and pricing is controlled by SAP, so I cannot really comment on that part of it. As for the compression - the AVERAGE you are likely to see is a compression rate of 4:1, but others have seen as much as 10:1 ... it really depends on your model and how much you will be able to compress it.
Your hardware partner will be able to work with you to estimate and price the amount of hardware necessary.
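To make that compression range concrete for a 30 TB source database like the one in the question, here is a back-of-the-envelope sketch (illustrative only; it is not a substitute for a real sizing with your hardware partner):

```python
# Back-of-the-envelope estimate: in-memory data footprint for a 30 TB
# source database at the compression ratios mentioned above (4:1 to 10:1).

def compressed_size_tb(source_tb, ratio):
    """Estimated data footprint in memory after columnar compression."""
    return source_tb / ratio

source_tb = 30
for ratio in (4, 10):
    size = compressed_size_tb(source_tb, ratio)
    print(f"{ratio}:1 compression -> ~{size:.1f} TB of data in memory")
```

So the 30 TB system would land somewhere between roughly 3 TB and 7.5 TB of compressed data, before adding the working memory a sizing effort would put on top of that.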
Dr. Berg: Penny makes some great points, and the sizing of what you need can be complex (there is a quick sizer for HANA by SAP, and we show you how to use it step-by-step in our book). But here are some simple rules of thumb you can use to get a basic idea of what you need to buy before you go through a detailed sizing effort.
These rules of thumb are great for preliminary estimates and high-level budgeting, but to get exact numbers you should complete a real sizing effort as outlined above.
Memory = (Source Data Footprint / 5) * 2
Memory is what most people think of when considering HANA. It can be estimated by taking the current system size, estimating a 5-times compression, and multiplying this by two. For example, you could start with an SAP BW data warehouse system, clean the log files, remove aggregates (not needed with HANA), compress your InfoCubes, clean the PSA, and get rid of unused DSOs and InfoCubes. With this cleaned BW system, you have a starting point for your HANA sizing. For ECC systems, parts of the log files may be deleted and older tables may be archived (t-code: SARA), thereby reducing the size of the overall system.
In our rule-of-thumb example, if we had a 1 TB SAP BW system, we would divide it by 5, which gives us a 200 GB size. Then we multiply it by two, to allow for memory for internal processes, indexing, data movement, and end users. This gives us a rough estimate of 400 GB of memory needed.
The next item we need is disk space. This can be estimated by:
Disk for persistence layer = 4 * Memory
Disk for the log = 1 * Memory
In our example above, we would need 4 * 400 GB of disk for the persistence layer and about 400 GB for the logs. This would be around 2 TB (don't worry, disks of this size are now almost 'cheap'). The persistence layer is the disk that keeps the system secure and provides redundancy if there should be any memory failures, so it is important not to underestimate this.
You can always add more as the system grows, but it is better not to have so little disk that the first thing you have to do after go-live is add more. So 'over-sizing' is encouraged. PS! This disk should not be placed on a shared, high-usage Storage Area Network (SAN). It is HANA's disk, so keep it as dedicated as possible.
CPU = 0.2 CPU cores per active user
The CPUs are based on the number of cores that you include. For example, 8- and 10-core CPUs now exist (depending on when you bought your system). If we have an 8x8-core system, we would have 64 cores and could handle 320 active users. For SAP vendor sizing, it is common to add processors based on the HANA memory size, so your actual number will be somewhat different.
Depending on who you give access to, the concept of active users may be hard to pin down. In the past, we have seen 20% to 40% of named users being active in SAP BW, while a higher number is normal in SAP ECC/ERP systems. You can get the actual usage numbers from the EarlyWatch report in SAP Solution Manager. This will show you how many of your named users are actually using your system, and also show a breakdown of their activities as low, medium, and high. This can be great input in determining how many CPU cores you may need.
The CPU speed may vary a bit depending on when you bought your HANA system, but as of the summer of 2012, IBM's Intel Xeon E7 series had a clock speed of 2.40 GHz, and even faster processors are likely in the years to come.
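The rules of thumb above can be collected into a quick calculator. This is a sketch for preliminary budgeting only (the function and field names are ours); a real sizing should still go through SAP's quick sizer:

```python
# Sketch of the sizing rules of thumb discussed above:
#   Memory = (source data footprint / 5) * 2
#   Disk for persistence layer = 4 * memory
#   Disk for log = 1 * memory
#   CPU = 0.2 cores per active user

def hana_rough_sizing(source_gb, active_users):
    """Preliminary HANA sizing estimate; not a substitute for a real sizing."""
    memory_gb = (source_gb / 5) * 2  # 5x compression, doubled for working space
    return {
        "memory_gb": memory_gb,
        "persistence_disk_gb": 4 * memory_gb,
        "log_disk_gb": 1 * memory_gb,
        "cpu_cores": active_users * 0.2,
    }

# The cleaned 1 TB (1000 GB) BW system from the example, with 320 active users:
print(hana_rough_sizing(1000, 320))
```

This reproduces the numbers above: 400 GB of memory, 1.6 TB of persistence-layer disk, 400 GB of log disk, and 64 cores for 320 active users.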
So lots of options and items to consider BEFORE you budget the system as well.
Scott Wallask: Thank you again to Dr. Berg and Penny for your expertise and answers.
For more advice, check Dr. Berg’s regular blog on Insider Learning Network covering BI and SAP BusinessObjects, and his updates on his HANA projects.
Also, please visit the new SAPexpert Web site for the latest content and best-practice articles about BI and other SAP areas.
We also invite you to the upcoming HANA Seminars, which began their multi-city tour this week, and next go to San Jose in August. For more details, visit hanaseminar.com.
If you have a specific BI question, ask the entire community by selecting "New Thread" in the Forum
Thanks again for a great discussion!