SAP Professional Journal editor Scott Priest (@SAPProJournal) recently spoke with SAP's Ina Felsheim about the new book "Enterprise Information Management with SAP," published by SAP PRESS.
In this second installment of a three-part podcast series with SAPexperts, Ina covers how SAP EIM offerings can drive both proactive and reactive approaches to data management, gives an overview of the tools and their SAP HANA integration, and offers advice on setting timelines and leaving ample time to establish business definitions of what makes "good" data in the first place.
Listen to this interview or read the edited transcript below:
Scott Priest: Hello, everybody. Today I'm speaking with Ina Felsheim of SAP. She is a Director of Solution Management, and one of the coauthors of the book, "Enterprise Information Management with SAP," released this May from SAP PRESS.
This is part 2 of a podcast series that we at SAPexperts are doing on Enterprise Information Management, here on Insider Learning Network. Thanks for joining us today, Ina.
Ina Felsheim: Thanks for having me. It’s great to be here.
Scott Priest: Today we'll talk about how EIM deals with data processing, specifically with unstructured data. In the first podcast of this series, Ginger Gatling, one of your colleagues at SAP, introduced the topic of EIM generally. Here, I wanted to ask you specifically about EIM and data management.
What are the big challenges right now, in terms of improving data processing? And how are organizations using EIM solutions to address these and other data management issues?
Ina Felsheim: Great question. First of all, some of the challenges:
As Ginger outlined, people don't necessarily know that they have an information problem. If a truck is unable to ship a product out of a certain plant, they may immediately think that that's a people problem or a business problem.
Part of what information management tackles is a discipline called Information Governance: performing root-cause analysis of those business problems. In some cases, that is a business process problem; in some cases, that's a people training problem; and in some cases, that's an information problem.
The challenge is to know when you have an information problem. When you realize, all of a sudden, "I can't ship product out of this plant, and now that is costing me millions of dollars a day" – that tends to be how it starts.
Once you have that base understanding of why information is valuable to you -- and of the kinds of problems you can run into if you're not managing that information from an enterprise standpoint, as a true asset -- then you can really do some proactive analysis to identify and stop potential problems before they start impacting your business.
To work through that root-cause analysis I talked about, you're going to need people from the business, and from multiple disciplines inside your business: people who own the business process, people from a specific function in the line of business, people from IT, and probably someone from Sales, Marketing, or Finance, especially if you're changing the foundational definition of, say, what a "customer" means or what "revenue" means.
That's going to impact all of those different areas in your company, so you need extreme cross‑functional buy‑in across all those groups, as well as making sure that you can implement and facilitate a better solution from the IT side. It's a huge challenge.
Scott Priest: What specific solutions are a part of SAP and EIM offerings?
Ina Felsheim: As Ginger outlined, we have five main pillars of our EIM technology.
It starts with Information Steward to identify "What information do I have, is it even fit for use, and where is it used throughout the enterprise?"
Secondly, you have Data Services. Data Services can move the information that you have to any of the systems that need it, and can clean it and make it good as it moves it. This is huge: it supplies data quality technologies to make sure you're not moving 10 copies of the same customer with slight variations -- that you have one good customer record.
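The deduplication Ina describes -- collapsing slight variations of the same customer into one record -- can be illustrated outside any SAP tooling with a minimal fuzzy-matching sketch in Python. The records, threshold, and normalization rules here are all hypothetical; a real data quality tool uses far more sophisticated matching.

```python
from difflib import SequenceMatcher

def normalize(record):
    """Lowercase and strip basic punctuation so trivial variations compare equal."""
    return " ".join(record.lower().replace(".", "").replace(",", "").split())

def similarity(a, b):
    """Similarity ratio in [0, 1] between two normalized customer strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def dedupe(records, threshold=0.85):
    """Keep the first record of each group of near-identical records."""
    survivors = []
    for rec in records:
        if all(similarity(rec, kept) < threshold for kept in survivors):
            survivors.append(rec)
    return survivors

# Two spellings of the same customer collapse into one surviving record.
customers = [
    "ACME Corp., 10 Main St, Springfield",
    "Acme Corp, 10 Main Street, Springfield",
    "Globex Inc, 5 Oak Ave, Shelbyville",
]
print(dedupe(customers))
```

The same idea scales up in real tools with phonetic matching, address standardization, and survivorship rules for picking the best field values from each duplicate group.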
Then we have Master Data Governance, which acts as the central hub and repository for your best, cleaned records and can provision them to the systems that need them. We also have Information Lifecycle Management, which handles your retention and archiving policies for that master information once it's created.
Then Enterprise Content Management really takes care of all the rich content that goes hand‑in‑hand with your business process, like a sales contract. It doesn't do you a lot of good to have just the master data on who you're doing a sale with, unless you have the sales contract associated with it. Enterprise Content Management lets you tie that rich content with your master data, to have a much better understanding of your whole information system.
Those are the pillars of the Enterprise Information Management strategy at SAP.
Scott Priest: How do these offerings relate to something like HANA, which obviously is a big player right now in this whole space?
Ina Felsheim: HANA is, of course, about high-volume analytics. HANA uses Data Services because high data volumes mean moving lots of information into HANA. Data Integrator, which is part of Data Services, is the piece that moves high volumes of information very quickly, with great performance, into HANA.
Then you can also use Data Services to clean that information as it goes in. Because, of course, if you don't clean the information as it is going into HANA, you're just getting a quicker look at your really bad information.
We really say you need to have this Data Quality piece. You have just spent all of this time loading and setting up your HANA system. You’ll want to make sure that that information is good quality so you're getting the best results from your analytics.
Scott Priest: Earlier, you talked about a proactive approach, as it pertains to EIM. Can you explain how EIM can offer both proactive and reactive approaches to managing data?
Ina Felsheim: First, there's the reactive approach I talked about earlier. We see this all the time. Maybe a company is getting a lot of returned mail, so there's someone whose job it is to call the customer, get the correct address into the source system, and resend the mail. This is just part of their job, part of what they do.
That’s when you have to do that root‑cause analysis, peeling back the onion to say, "Where, exactly, does this problem happen?" and not just addressing the symptom.
That's the reactive approach, and these tools can help. Data Services can run a huge, massive cleanup on your ERP data, for example. Information Lifecycle Management can run a massive retention and archiving policy against your system. Those things will never go away.
But as organizations mature their information management strategy, they start to also implement some proactive processes. As information becomes defined -- as you say, "This is the sort of information I'm going to need" -- you're defining what makes that information good. What does a good customer element look like? Does it include a phone number? An email address? Can it include multiple addresses? Maybe it works if you're in marketing, but does it work if you're a utilities company, for example?
Once you’ve gathered buy‑in about what makes a “good” customer, how can you automate systems to make sure, whenever a new customer is entered, that data is complying with those rules?
Usually, companies have multiple points of entry for adding a new customer - maybe web forms, mobile applications, SAP systems. At any of those points of entry, when somebody adds a new customer, let's check if that customer already exists in our system. Let's make sure that that customer information is well‑formed enough for use in the business process or in analytics.
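As a rough illustration of the entry-point checks Ina describes -- completeness rules plus a duplicate lookup before a new customer is created -- here is a generic Python sketch. The required fields, email pattern, and "same email means same customer" rule are all illustrative assumptions, not SAP logic; real rules would come out of the cross-functional governance process she mentions.

```python
import re

# Hypothetical completeness rules for a "good" customer record.
REQUIRED_FIELDS = ("name", "email", "phone")

def is_well_formed(customer: dict) -> bool:
    """Check the record has every required field with a plausible value."""
    if any(not customer.get(field) for field in REQUIRED_FIELDS):
        return False
    # Very loose email shape check, purely for illustration.
    return bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", customer["email"]))

def find_existing(customer: dict, master: list) -> dict:
    """Naive duplicate check: treat a matching email as the same customer."""
    email = customer["email"].lower()
    return next((m for m in master if m["email"].lower() == email), None)

master = [{"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100"}]
new = {"name": "J. Doe", "email": "JANE@example.com", "phone": "555-0100"}

print(is_well_formed(new))          # the record is complete and well-formed
print(find_existing(new, master))   # the duplicate is caught before insert
```

The point is that the same checks run at every point of entry -- web form, mobile app, or SAP transaction -- so bad or duplicate records are stopped before they spread.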
That's the more proactive approach that we're talking about.
The same tools that we use for the reactive solutions ‑ Data Services, Information Lifecycle Management, Master Data Governance, all of those tools ‑ can also be used proactively.
Whether it's a proactive or reactive scenario, there's an underlying "design these policies once, then use them in multiple scenarios" approach -- you don't have to rewrite them. You're just plugging them in where you need them next.
Information Steward is also a great help proactively, because it can take a look at your information landscape and say, "What do I have, and what is complete enough to use? What is mostly null values or system default values? How many people are actually looking at this element in their analytics system?"
If you have an element that never shows up on a report at all, maybe it's being used by the business process at some other point -- that's worth investigating, and it's good information to have when you start trying to fix your data.
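The kind of completeness profiling Ina attributes to Information Steward -- "what is mostly null values or system default values?" -- can be sketched generically in Python. This is an illustrative toy, not Information Steward's API; the sample records and default-value markers are assumptions.

```python
def profile(rows, columns):
    """Report, per column, the fraction of missing or default values."""
    report = {}
    for col in columns:
        values = [row.get(col) for row in rows]
        missing = sum(1 for v in values if v in (None, "", "N/A"))
        report[col] = missing / len(values)
    return report

# Hypothetical customer extract with some incomplete fields.
customers = [
    {"name": "Jane Doe", "email": "jane@example.com", "phone": ""},
    {"name": "John Roe", "email": "", "phone": "555-0101"},
    {"name": "ACME Corp", "email": "info@acme.com", "phone": "555-0102"},
]
print(profile(customers, ["name", "email", "phone"]))
```

A report like this tells you which elements are fit for use before you build analytics on them, which is exactly the "is it even fit for use?" question from earlier in the conversation.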
Scott Priest: With this boom of unstructured data, can you offer some examples of how companies are using unstructured data to answer their business questions?
Ina Felsheim: Whenever I present at a conference, people are always saying, "I'm interested in social media." When we ask what they’re trying to do with it, they say, "I don't know, but I know it's important to my CIO."
We do run into companies who are a little bit further along, who have really tried to figure out how they can use social media to help inform their master data.
That's how those two relate: Social media is the raw form of that data, out on Twitter or Facebook. Then Text Data Processing pulls that information in and tries to parse it and make sense of it, so you can make decisions based on it.
We've talked to customers who are trying to identify how a specific product is being received in the market. They'll monitor Twitter or Facebook posts about how people are talking about the product.
I also talked to one customer who's using Text Data Processing to parse through the huge number of resumes they receive every year. It's a global company that does a lot of hiring, and resumes come in from job sites, from emails to a friend, and so on. They want to be able to look through what are often multi-page documents, pick out keywords that are important to them, understand at a very high level the qualifications and main qualities in those resumes, and store that information. They're using Text Data Processing to bring in those huge amounts of information and then understand the specific pieces of those resumes.
A third common example of Text Data Processing is around notes during repair calls. The technicians in the field are entering data, and a lot of that gets captured in unstructured ways, in a notes field. Companies are using Text Data Processing to parse through the information for some high‑level analytics, for example: "Of the service calls on refrigerators, how many of them had a problem with the compressor? Or with the ice maker? Was it a warranty problem? Was it an improper use problem?"
Then they can start to identify how they're going to stock and market their own materials. In all cases, Text Data Processing takes a large chunk of data to parse through it and gather the key elements to try to figure out what decisions can be made based on the information.
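The repair-notes analysis Ina describes -- counting how many service calls mention the compressor, the ice maker, or a warranty issue -- can be approximated with a minimal keyword-tagging sketch in Python. The categories and sample notes are hypothetical, and a real text data processing tool uses linguistic parsing rather than simple substring matching.

```python
from collections import Counter

# Hypothetical component and issue keywords.
CATEGORIES = {
    "compressor": ["compressor"],
    "ice maker": ["ice maker", "icemaker"],
    "warranty": ["warranty", "under warranty"],
}

def categorize(note: str) -> set:
    """Tag a free-text service note with every matching category."""
    text = note.lower()
    return {cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)}

notes = [
    "Compressor rattling, replaced under warranty",
    "Ice maker jammed; cleared blockage",
    "Customer left door open, no defect found",
]

# Aggregate tags across all notes for the high-level analytics Ina mentions.
counts = Counter(cat for note in notes for cat in categorize(note))
print(counts)
```

Even this crude version shows the shape of the workflow: unstructured notes go in, structured category counts come out, and those counts drive stocking and marketing decisions.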
Scott Priest: My last question is for organizations that are evaluating their data processing -- maybe they're looking to move to Data Services. Do you have any advice on the steps that they should be taking?
Ina Felsheim: Nearly all companies have some sort of data movement tool in place, somewhere. Maybe they're using SSIS because a guy in the IT department knew SQL really well, for example.
We would say that, in general, it's great to standardize on a data movement technology for exactly the reason I mentioned earlier: Once I establish my policy of what "good" information looks like, then I can implement it in many different use scenarios.
That is where you really start to see the benefit of a centralized, standardized, EIM tool like Data Services. We would say, start in one smaller use case. Maybe you're doing a data migration into SAP. Maybe you have a HANA project going on.
Start in that specific use case. Use Data Services and Data Quality or Information Steward, for example, so you can get a feel for what the tools can do for you. Once you have used it in one use case, it's going to be really easy for you to extend it into other scenarios.
On a side note, I would say make sure that you're talking to the business and to your stakeholders, because in nearly every case, people are spending way more time on the corporate alignment and decision-making phase. SAP StreamWork is a great help for structuring those decisions and making sure they're entirely auditable and captured. Make sure you're spending time in that phase. That can be up to 80% of your entire project time -- getting that corporate alignment, documenting those decisions, and refining your decisions as you look at the resultant information. Then only maybe 20% of the time might be in the actual implementation of the tool.
I would just say make sure that you're spending time on both sides of that fence, and be aware that a lot of it is going to be on corporate alignment.
Scott Priest: Great. Thanks a lot, Ina.
Ina Felsheim: Thank you.
Scott Priest: For those of you looking for a big-picture overview of EIM, check out the previous podcast that we mentioned on Insider Learning Network, with Ginger Gatling. And look for their new book "Enterprise Information Management with SAP" from SAP PRESS.
Be sure to look for upcoming podcasts from Insider Learning Network around EIM. There will also be an opportunity to post your own questions to the authors of this book in the coming months.
Thanks a lot, Ina.
Ina Felsheim: Thanks, Scott.