Most people think of data warehouses as databases that solve reporting problems. However, it’s more useful to think of them as addressing two sets of problems: 1) reporting, or data distribution, problems and 2) data integration problems. You probably already realize this. Heck, Bill Inmon’s original definition of the data warehouse said that it is a subject-oriented, integrated, time-variant, non-volatile collection of data in support of management decisions.
What you may not realize, however, is that the data structures best for data integration are not best for data distribution. Data integration works best with narrow, normalized tables. This format makes it easy to work with data at its atomic level, making small schema changes when required without causing major headaches. Just like when building a model of anything (and a database is, after all, a model of reality), you’ll get a more accurate representation when starting with small things (e.g. atoms) than when starting with large things (e.g. wooden boards).
Normalized tables, however, don’t always work well for data distribution because a number of expensive joins are frequently required to create reports. (More on why data warehouses are best modeled with normalized schemata in a future post.)
Data distribution, however, works best with non-traditional data structures. One example is the wide, denormalized table found in forms like star schemata. Other examples of non-traditional data structures include multidimensional databases, like Essbase, and various in-memory databases like those offered by QlikView and Tableau. These aren’t necessarily great for data integration but they are wonderful for distributing data really quickly. In fact, I used to call them “HPQSs,” for High Performance Query Structures (I think I stopped using that name because it takes too long to type).
Why are these non-traditional data structures so good at data distribution? Because, in essence, they anticipate what reporting users will want to see and they pre-process the data to support those needs. Think about it: if you stayed with a normalized schema, you could logically recreate a star schema dimension table by placing a view over a set of its tables. You would, though, lose the performance benefit of doing that view’s joins in advance, as a true star schema dimension table does.
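To make that concrete, here’s a minimal sketch of recreating a dimension as a view over normalized tables, using SQLite through Python. The table and column names here are hypothetical, invented purely for illustration:

```python
import sqlite3

# Two narrow, normalized tables plus a view that logically
# recreates a star-style employee dimension. The view's join
# runs at query time; a physical dimension table would have
# done that join in advance.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, emp_name TEXT,
                       dept_id INTEGER REFERENCES department(dept_id));
CREATE VIEW dim_employee AS
SELECT e.emp_id, e.emp_name, d.dept_name
FROM employee e JOIN department d ON e.dept_id = d.dept_id;
""")
conn.execute("INSERT INTO department VALUES (1, 'Finance')")
conn.execute("INSERT INTO employee VALUES (10, 'Pat', 1)")
rows = conn.execute("SELECT emp_name, dept_name FROM dim_employee").fetchall()
print(rows)  # [('Pat', 'Finance')]
```

Queries against `dim_employee` look exactly like queries against a denormalized dimension; the difference is purely where the join cost is paid.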
Now, some extremely bright people have come up with ways to replace normalized data warehouses with star schemata tied together via “conformed dimensions.” It sounds like a nice work-saver but, in actuality, it creates architectures that don’t really support the broader range of things that a well-designed data warehouse can handle. Things like serving as a source for master data.
Consider the following:
- SUBJECTIVITY vs. OBJECTIVITY: Star schemata are inherently subjective (e.g. is department a dimension, an attribute of employee, or both?). Normalized schemata, on the other hand, are less subjective. Since this data will eventually be used in a variety of ways, do you want the core place where all data is integrated to be highly subjective?
- BRITTLENESS: Star schemata are inherently brittle. For example, if you integrate in a star, when you add a column to the source system you have to figure out how to handle all the historical records in the related dimension table. Normalized schemata don’t suffer from those same effects.
In the end, yes, you can shoehorn a normalized schema into the reporting role or shoehorn a star schema into the data warehouse role but, really, they serve different purposes and are best at different things. As a result, our preferred architecture for big, complex reporting needs is a normalized data warehouse feeding out to denormalized, or alternative technology, tools.
Thoughts or comments? Please post them here.
Until next time…
Ok, there’s a ton of publicity right now around Amazon’s Redshift service. Every article I see calls it a software-as-a-service (SaaS) data warehouse. Really?
As we read in kindergarten, a data warehouse is a “subject oriented, integrated, non-volatile, time variant collection of data in support of management decisions” (Or something to that effect – I’ll have to see if I can dig up an old copy of Dick and Jane and Bill Inmon).
Now, Redshift is cool but it’s not a data warehouse. From what I can tell, it’s really a database (kind of like SQL Server, or Oracle, or MySQL …), optimized for data warehousing (kind of like InfoBright), running on servers at some Amazon facility, and accessible through the web. In other words, it really saves you the headaches of buying and managing database hardware and software.
On the other hand, it doesn’t come with user requirements, schema, ETL designs or programs, business intelligence tools or any of the stuff we work so hard to build in data warehousing.
So, is Redshift cool & valuable? Undeniably. Is it a data warehouse? Deniably (actually, the spell checker is telling me that “deniably” is not a word so let me change that to “No”).
In my last few posts I discussed how big data is really just data, how we’ve been in this situation before and how, over the next few years, the market will be ripe for consolidation. In this post I’ll make some recommendations for how to move forward. Now, remember, this is intended for readers in 2012. If you’ve happened on this post in some archive and it’s now 2017 or later, please contact me and let me know how close I came.
RECOMMENDATION 1: Don’t Avoid Big Data
Just because the path forward isn’t 100% clear doesn’t mean that you should avoid getting into big data. Work with the business to determine what they need. Help them figure out what questions to ask and then use your web logs, your Twitter feeds, and your machine-generated data to help them answer those questions – mine value from that data.
RECOMMENDATION 2: Avoid Making Big Bets That Assume Your Big Data Technology Will be Around Forever
In the old days we used to say that, “No one ever got fired for going with Big Blue.” In other words, so long as you chose IBM, no one could question your decision. But, in 2012, there is no “IBM” for big data (although IBM itself actually has some compelling ideas here). We just don’t know who’s going to win. So, be cautious with ‘betting’ the farm today. Instead, make small, manageable bets that get you into the field.
RECOMMENDATION 3: Consider Cloud-Based Options
As a follow-on to recommendation two, seriously consider cloud-based options. One analogy to help think about this is your housing situation just after college.
Most of us begin our post-college lives as renters. We don’t really know what we’re going to do with our lives or where we’re going to live in the long term (In my case, for example, my permanent home – Ann Arbor – really started as two years of graduate school which eventually went horribly awry). So, we rent.
As we get more experienced we settle down and many of us buy.
Today, most of us are the big data equivalent of new graduates. It’s very hard to tell what the future holds. And, in cases like that, you don’t want to sink your tent stakes too deeply into the ground (I’m not a camper but did I get that analogy right?). Thus, renting (i.e. housing our data in the cloud) is a great way to start.
RECOMMENDATION 4: Build Loosely-Coupled Architectures Between Data Warehouse & Big Data Stores
This is a good recommendation for any data warehouse / data integration architecture: loosely couple your ETL jobs. In other words, rather than writing ETL that brings data from your big data stores into your data warehouse, write two jobs: one to extract from your big data stores and the second to load into your data warehouse. This approach protects at least the load half of your ETL jobs should you decide to change your big data technologies. This, of course, assumes that we haven’t yet developed the technologies to make all this detail available directly in your data warehouse.
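A minimal sketch of that decoupling, with function names, the staging format, and the sample records all purely illustrative: the extract job writes to a neutral staging file, and the load job reads only that file, so swapping out the big data technology touches only the first job.

```python
import csv
import os
import tempfile

def extract_from_big_data_store(staging_path):
    # Job 1: pull detail from the big data store (simulated here with
    # hard-coded rows) and write it to a neutral staging format.
    detail = [("2012-06-01", "page_view", 3), ("2012-06-01", "click", 1)]
    with open(staging_path, "w", newline="") as f:
        csv.writer(f).writerows(detail)

def load_into_warehouse(staging_path):
    # Job 2: read the staging file and load the warehouse (here, just
    # return the rows). This job knows nothing about Hadoop, HBase,
    # or whatever produced the file.
    with open(staging_path, newline="") as f:
        return [tuple(row) for row in csv.reader(f)]

staging = os.path.join(tempfile.gettempdir(), "staging_events.csv")
extract_from_big_data_store(staging)
rows = load_into_warehouse(staging)
print(len(rows))  # 2
```

If the extract side moves from one big data technology to another, the load job and its schedule survive untouched, which is exactly the protection the recommendation is after.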
RECOMMENDATION 5: THE MOST IMPORTANT RECOMMENDATION – Make Sure you Really Have “Big Data”
The fact that you have social media data or machine-generated data doesn’t mean you have “Big Data”. It could mean, simply, that you have “Data”. We’re way past the time when a few million records a year is really anything special. We regularly handle data that size on laptop computers. So, before dumping a lot of time and money into big data technologies, make sure that your current technologies can’t already handle your needs.
It’s certainly fun to lead the technology curve but it’s even more fun to see big payoffs from small investments.
So, fight it all you like. Kick and scream about how “big data” has really become “big marketing”. The truth is that there is value in capturing and analyzing large volumes of machine generated data. Get started now but be smart, and be ready for a shakeout in the market for big data technologies.
Big Data & Data Warehousing: The Coming Shakeout – Part 2/3 – The RDBMS Shakeout as a Model for The Future of Big Data
In my last post I discussed how big data is really just data and, while it hasn’t happened yet, history shows that our ‘big data’ and our analytic data stores will eventually be integrated. This points to a situation where big data technologies are in flux and the eventual, long term, industry standard tool set isn’t yet known.
Another perspective on the big data / data warehouse situation points toward a coming big data shakeout.
In the early 1990s there were a lot of credible, relational database (RDBMS) choices. For example, back in ’90 you could have chosen Oracle or IBM DB2 or Informix or Sybase SQL Server (yes, it was called that) or Microsoft SQL Server or Digital Equipment’s RDB, or Ingres, or…
Most of these still exist in one form or another but we all know that the real winners were Oracle, IBM and Microsoft. And organizations that made the wrong choice back in 1990 were frequently forced to pivot to one of the winners.
Big Data 2012
Now look at big data in 2012. You can choose among Hadoop, HBase, BigTable, MongoDB, CouchDB…
To work with data you can choose Pig or Hive or MapReduce or YSmart or…
What Will Happen?
So, who’s going to win and who’s going to be left out in the cold? In all honesty, I don’t know. But, history does show that the market eventually declares winners and losers. The big data space will be no different. Perhaps the correct strategy for now is… caution.
Up Next Week
My next post will go over some recommendations on how to move forward with big data in 2012.
At this point in time it would be ludicrous to deny the truth: big data exists and it’s here to stay. (A quick definition, from Wikipedia: “In information technology, big data is a loosely-defined term used to describe data sets so large and complex that they become awkward to work with using on-hand database management tools.”) For the most part, big data is data generated by machines and programs. It includes things like social media data (like Twitter tweets and Facebook posts), website-generated data (like web logs), and machine-generated data (like intra-second readings taken by industrial equipment).
However, we are just at the beginning of the big data movement and, as I’ll discuss in this and my next few posts, experience with similar movements teaches us that, at this point, it is probably better to approach big data through small tactical projects rather than large strategic efforts.
What’s Happening Today
Looking at case studies you’ll see that most data warehousing applications of big data actually separate the data warehouse from the big-data data store. For example, companies will gather their detail data in Hadoop, or some similar technology, and then periodically aggregate it into their data warehouses.
Hey, Haven’t I Seen This Before?
I entered the IT field in 1989. Back then we experienced something very similar to today’s “Big Data” craze. We called it “A lot of Data”. Believe it or not, the bits we worked with were shaped exactly the same as they’re shaped today – nothing has changed except what was an enormous volume back then is minuscule by today’s standards (don’t even get me going on how cool it was to have 10k of hard disk storage on a PC in 1982).
In an effort to improve reporting performance, when data set sizes got too large, we’d summarize the data and move the summaries into summary tables and multidimensional databases for analysis.
Still, we yearned for and craved analytic access to the detail (yes, actually yearned AND craved – it was a simpler time). Ralph Kimball told us that “retail is detail” and taught us that, if we kept our detail, we could always roll up but, once we lost the detail, we couldn’t drill down.
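A toy illustration of Kimball’s point, with made-up sales data: summarizing is a one-way trip.

```python
from collections import defaultdict

# Hypothetical detail records: (store, sku, units) per transaction.
detail = [("A", "sku1", 2), ("A", "sku1", 3), ("B", "sku2", 1)]

# Build a summary table keyed by store. Rolling up from detail is
# always possible...
summary = defaultdict(int)
for store, sku, units in detail:
    summary[store] += units
print(dict(summary))  # {'A': 5, 'B': 1}

# ...but from the summary alone there is no way to drill back down
# to the SKU level -- once only the summary is kept, the detail is gone.
```

Keep the detail and you can always derive the summary; keep only the summary and the detail is unrecoverable.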
How Did We Get Back to Detail?
The data warehousing industry attacked the problem of detail in a variety of ways. These included techniques like summary tables with aggregate navigation; database technologies like partitioning, advanced indexing technologies and purpose-built databases like Red Brick; AND new hardware technologies like Teradata and data warehouse appliances.
Each of these tools gave us the ability to store our detail in the data warehouse while still providing rapid response to aggregate queries.
There is Nothing New Under the Sun
Thus, using history as a guide, this cycle will repeat itself. What we call “Big Data” today, we’ll just call “Data” tomorrow. In the next few years our industry will develop the tools and techniques necessary to store our detail data along with our summary data, removing the Big Data – Data Warehouse tier from our future architectures.
So, if you’re implementing big data today, there are some steps you should take to insulate yourself from these coming technology shifts. Let’s cover these in part 3 of this series of posts. Next week: Part 2 – The RDBMS Shakeout as a Model for The Future of Big Data.
I build data warehouses, I understand why they’re important, I make a living from them… I also see that traditional, relational data warehouses are on the way out. Their demise is coming from a few technological advances but the biggest one is the growing use of in-memory reporting technologies, like QlikView.
Attributes of New Reporting Technologies
I’ve been working with QlikView for some time now, as well as with some clients that are in the process of adopting it. Here are some attributes of QlikView, and of similar tools, that are killing the traditional data warehouse:
- They contain their own, non-relational, self-managing data stores.
- They can import data from multiple sources into a single, accessible data store.
- They join related data together, like a relational database.
- They provide predictable, blisteringly fast query performance.
- They provide very easy, user-friendly user interfaces.
- They can contain, and rapidly summarize, atomic-level, granular data.
- They can be incrementally refreshed, enabling the storage of history.
Attributes of a Data Warehouse
So, how does this lead to the demise of the data warehouse? Bill Inmon originally defined a data warehouse as a, “Subject-oriented, integrated, nonvolatile, time variant collection of data in support of management decisions.” In layman’s terms, a data warehouse is a database used for reporting and analysis containing data that’s been collected from various data sources.
More Importantly – Goals of a Data Warehouse
More important than definitions, however, are the goals of the data warehouse:
- To give business people speedy access to data for business intelligence.
- To eliminate the slowness that can be associated with reporting summary data out of complex, source databases.
- To protect the performance of the source databases by offloading compute-intensive reporting to other computers.
- To make reporting easy and user-friendly.
- To provide an integrated view of the organization; to make it appear as though its data weren’t spread across a bunch of separate systems; to make it look like the company was really operating from one, central database.
- To save and provide access to history that is frequently discarded or overwritten in source systems.
Have no doubt, a well-designed data warehouse can be great at doing these things – at great cost and with significant complexity.
Do New Reporting Technologies Meet These Goals?
So, can a tool like QlikView replace a traditional data warehouse? Well, I frankly see nothing on the above list of goals that these tools can’t do, especially if a company has a master data management program in place that ensures their systems already share common keys.
While these new reporting technologies can be pricey, the overall cost of implementing them is almost certainly going to be less than the cost of designing, building and maintaining data warehouses and then purchasing these same, or similar, tools to query from those warehouses.
Caveats – Why You May Still Need a Relational Data Warehouse
There are reasons why you might still need a traditional, relational data warehouse. These include:
- You have specific needs for specialized business intelligence tools that can only be used against SQL-based databases.
- You have a need for real-time reporting of transactions (although, in most cases, this reporting should be done out of operational systems or intermediate operational data stores anyway).
- Your data must support multiple BI tools. Right now, the databases behind tools like QlikView can only be accessed with their own, proprietary BI user interfaces. Thus you can’t, for example, access a QlikView associative database with tools like Business Objects, Cognos, MicroStrategy or Excel.
- Your source data is so massive that it will overwhelm the capabilities of your BI tool’s database. Applications like telephone call detail come to mind here.
- Your source data cannot be directly loaded into a new-generation BI tool and must be staged somewhere. An example of this is some Cloud-based systems that don’t provide strong programming interfaces for data access.
- Your source data does not share common keys and requires significant massaging to make it useful.
Is the Data Warehouse Dead – or Just Morphing?
Not all new BI technologies will kill the data warehouse. Some very powerful ones are SQL-based. SQL-based tools still need relational data sources, i.e. data warehouses.
Finally, it’s incorrect to say that the data warehouse is dead. It’s really just morphing, or better put, evolving. The definition of the data warehouse says nothing about the kind of storage technology that must be used. Thus, storing that data in an associative database, a multidimensional database, or even on punched cards doesn’t mean you don’t have a data warehouse. The trick, of course, is to make sure you’ve got something that supports your current, and future, needs at a reasonable cost.
I’d be very interested in your thoughts, post your comments below. Thanks!
EDITOR’S NOTE: Tom Carroll was a Dataspace consultant in the late 1990s. He left for an IT job at GM which quickly morphed into a finance job for GM’s OnStar subsidiary. Tom was, effectively, the person on the user side at OnStar who was responsible for delivering financial reports to management. We were thrilled when, in May, Tom came back to Dataspace as a lead consultant. Not only does he have about 20 years of BI experience but he now, also, has the perspective of a user.
I am excited to be back at Dataspace after an 11-year absence. While I was gone I learned a whole lot about what it means to be a business intelligence end user, and over the course of a few postings I’d like to share some of what I’ve learned. While much of what we read about data warehousing and business intelligence is focused on technology, it really is the end user who will determine whether your warehouse effort is successful or not.
Regardless of What IT Says, I Have a Job to Do
I guess the first dirty secret is that reporting end users really don’t care much about IT and its issues. Are you shocked or insulted by this? Don’t be. The end user has a job to do and is being evaluated on whether and how well they get that job done. If IT can help them do their job, that’s great, but if not the job still has to be done. Having trouble getting that tax data loaded to the warehouse? OLAP cube didn’t build last night due to database issues? Guess what, the books still have to be closed so as an end user I’m going to come up with some other way to get it done. It may not be 100% correct, but as an end user I don’t have time to wait for IT to figure out its problems.
The Tool I Use to do That Job? Excel
Second, all those crazy spreadsheets and Access databases that have popped up in Finance and other departments over the years? You know, the ones you’ve tried to analyze in order to ferret out reporting and data requirements? In almost all cases those are the result of end users coming up with the best solution they can muster using the tools they know best and whatever data they have access to. Not to disparage all end users, but when it comes to designing data stores, they wouldn’t be my first stop (big surprise, huh?). Not that they’re not smart people, but it’s just not their area of expertise. Given easy access to well-structured, integrated and more complete data (can you say data warehouse?) that solves their problem, they (or at least their management) would get rid of those spreadsheets and databases in a heartbeat.
So what do you think? What is the state of the relationship between IT and end users in your organization? Are there any processes or user groups in your organization that help to foster this relationship? I look forward to hearing your thoughts.
WARNING: This month’s quiz is harder than it seems at first! Are you up to it?
We recently worked with a client that does business all around the world. They capture millions of transactions which they then bring into a dimensional data warehouse for analysis.
The problem is that this company, and their customers, need to analyze their data indexed to any time zone. So, for example, if a transaction takes place in Mumbai at 0200, it might be analyzed in the Mumbai time zone as 0200 or the New York time zone as 1530 or as any other time zone in the world.
Now the question; assuming a standard relational database, what is the best way to model this data to provide correct time and date results as well as adequate query response time?
HINT: In formulating your answer, consider the impact of date, not just time.
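As one small sketch of the underlying arithmetic (not a schema design, and not the quiz answer), Python’s zoneinfo shows why date, and not just time, is affected: the same instant falls on different calendar days in different zones.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# A transaction captured at 0200 in Mumbai, viewed in other zones.
mumbai = ZoneInfo("Asia/Kolkata")     # UTC+5:30
new_york = ZoneInfo("America/New_York")

txn = datetime(2012, 1, 15, 2, 0, tzinfo=mumbai)  # 0200 in Mumbai
utc = txn.astimezone(ZoneInfo("UTC"))
ny = txn.astimezone(new_york)

print(utc.strftime("%Y-%m-%d %H:%M"))  # 2012-01-14 20:30
print(ny.strftime("%Y-%m-%d %H:%M"))   # 2012-01-14 15:30
```

Notice that the New York view lands not just at 1530 but on the previous day, January 14 – so any date dimension key joined to the fact is itself time-zone dependent.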
What parts of your data warehouse are really important and which are never touched? A few weeks ago I and some colleagues here at Dataspace saw a brief demo of Appfluent. Appfluent is a tool that monitors and reports on the use of database objects and, in some cases, BI tools. It can answer important questions such as:
- Which tables are being used and which aren’t?
- Which queries are run most often?
- Which queries gobble up the most resources?
Armed with information like this you can do things like:
- Eliminate unused tables
- Develop strategies to improve the performance of troublesome queries
- Determine if your warehouse is really being used at all
In the end, monitoring like this can help you improve and, perhaps, lower the cost of your data warehouse. You might want to give it a try.
Infinite MIPS, Or How Your Hardware Vendor Let you Down
The Concept of Data Warehousing is Fundamentally Flawed
Ever step back, think about what you’re doing and then ask yourself, “Why?” Ever ask it about the concept of data warehousing? Let’s grow up and face a fact here – while it may be necessary, the concept of data warehousing is flawed.
Think about it. We already have all the tasty data we need in our operational systems. So, let’s chow down. Hey, wait a minute… Y’know what would be great fun? Let’s design a completely new database called a data warehouse. Then, let’s write programs to bring all of that data into our warehouse. Along the way, let’s integrate it all so we get a business-view of it, rather than a source-specific view. Hey, let’s also make sure it’s clean. And, let’s make sure we’ve built all the infrastructure necessary to schedule jobs, trap errors, verify totals, … Oh, and let’s ask our managers and shareholders to pay for all of this.
OK, is it just me or, when you step back, does this sound insane?
What Is the “Right Solution”?
So, what’s better? Well, in a really good world, all your data and systems would be integrated from the start AND you’d be able to report directly from them.
In a perfect world you wouldn’t have to integrate the data from multiple systems, you would have only one system and it would support all of your operational and informational (i.e. reporting and analysis) needs. So, what, or who, is keeping us from this perfect world?
Who’s The Villain?
(I’m sure that Dataspace employees and alumni know where I’m heading here) Who’s letting us down? Who’s making us spend all that extra money and do all that extra work just so we can actually use the data we capture?
Hardware vendors… J’accuse!
Hardware vendors? Why? Because they haven’t figured out how to master the laws of physics to give us infinite MIPS (there it is, Dataspace folks) – infinite computing power.
Think about it; if we had infinite computing power we’d put all of our data into a single, enormous integrated, normalized database. That database would support both our operational and informational needs. It would be complex but it could be made to look simple by layering views on top of it. It would keep all the history we could ever want because, well, why not? Best of all, response time to any query, no matter how complex, would be instantaneous. Why? Because we’d have infinite computing power.
So, in the end, data warehousing is really just a way to make up for the fact that hardware (and maybe communication) vendors, with as many PhDs as they have, just haven’t done that one little thing we need them to do – create a computer with infinite MIPS. (C’mon guys, get your act together!)
Is Data Warehousing the Only Solution?
Given the fact that hardware providers are smart yet, clearly, clueless, we’ve come up with a ‘dirty’ solution to help us get at our data – we build a data warehouse. We, in essence, do a lot of pre-processing on data because we don’t have the horsepower to do it when queries are issued. Preprocessing like integrating, aggregating, and putting into user-friendly formats.
But, is this the only way to do the job? Perhaps, given our lack of infinite MIPS, it is. Still, the idea of a single, enterprise-wide database is enticing. And, actually, there is a partial solution that, while not eliminating the need for informational data stores (i.e. data warehouses and data marts), minimizes the effort required to build them. That partial solution is integrating operational systems or, in its more common form, master data management.
Integrating data before, or as, you build a data warehouse has a number of advantages:
- It makes building the warehouse easier and cheaper.
- It ensures that, operationally, the whole organization is seeing the same picture (unlike one client who called us after different data definitions led to a multi-million dollar ordering mistake).
- It creates a logical view of the single database concept, bringing you closer to that true picture of one, integrated database underlying your entire company.
- It opens you up to reporting out of a new generation of BI tools, ones that integrate data but don’t require traditional data warehouses yet don’t stress your operational systems each time a query is run. (more on this in a later posting)
Where Does This Leave Us?
So, data warehouses and data marts do accomplish a lot and, largely, are still necessary. But, integrating data between your operational systems will save you headaches, lower your cost of warehousing and, in some cases, maybe even eliminate the need for a data warehouse.
Where to start? Well, let’s leave that for a later post, too.
Any comments? I’d love to see them. Please submit them below.
When you look at how Business Intelligence tools are marketed, you’d think that the secret to a wildly successful operation is to simply have executives sit at their desks looking at beautifully laid out dashboards, clicking here and there on charts, graphs, and gauges, drilling down, rolling up, and slicing and dicing their data. After all, that’s what the vendors of Business Intelligence systems portray in their marketing communications (and we’re guilty of using eye candy in our own materials, too).
I’m the CEO of a Business Intelligence consultancy. Organizing and presenting data in ways that enable business decisions is all that we’ve done for the 15 years since I founded Dataspace. Before that, I did it at MicroStrategy. I’ve even co-authored three books on the topic. Of all people, you might expect me to be sitting at my desk, slicing and dicing to my heart’s content. But you know what? I have a business to run. I’ve got to spend my time attracting new clients, ensuring my team delivers flawlessly, and conducting a variety of back-office functions, from tracking payables and receivables to minimizing my overhead. And while we have implemented Business Intelligence tools at Dataspace to help me manage my operation, with the data collected, integrated and presented in a manner specific to my needs, I find I actually spend very little time using these systems. And typically for only two purposes: 1) to investigate a particular problem; 2) to check in once a week or so to see whether things are on track. I recently estimated how much time I spend using these systems, and found I don’t spend more than an hour a week in them.
Do successful managers spend their days clicking around in BI systems? I don’t think so. Successful managers spend their time managing: making decisions and interacting with people – customers, employees, partners, suppliers, etc. Well-designed BI systems quickly give managers a view of what’s going on – of what decisions they need to make and what conversations they need to have. Well-designed BI systems get the answer across quickly and then get out of the way.
I’m proud that I use my system less than 2% of the time. After all, well-designed BI systems enable use of that 2% to identify the decisions that need to be made, and the conversations that need to be had with the other 98%.
Want to discuss? Feel free to contact me at firstname.lastname@example.org.– Ben
If you check out the message boards and recent Gartner Magic Quadrants you’ll see that QlikView is the next hot thing in business intelligence. Some of our clients are using the tool and they are ecstatic. Applications are created far faster than with traditional BI tools and executive users eat it up. I can’t think of many other BI implementations where executives are eager to get on the computer.
In a stodgy BI space that is plagued by incremental upgrades and poor customer support, QlikView is BI’s battle of Midway – a point of inflection that changes the game. If you haven’t seen the tool, I urge you to check it out at www.qlikview.com. Run the demos, they give a good idea of what it can do.
So, what’s so good about QlikView? Well, once you see the tool in action you realize that it’s not about producing the next generation of pretty green bar reports. It is about giving users easy tools for rapidly slicing through data. The difference between QlikView and traditional BI tools can be summed up as follows: traditional BI tools are for people who need reports; QlikView is for people who need answers.
In future posts I’ll talk more about what’s so great about the tool, about how it crushes the traditional BI – DW development methodology, why most companies will still need a data warehouse and why, in the end, QlikView is complementary to, not a replacement for, current BI technologies.
Want more info before then? Drop me a line at email@example.com.
I’ve recently been working to help two clients comply with new Medicare reporting regulations. The regulation, MMSEA Section 111 – Medicare Secondary Payer Mandatory Reporting, requires anyone who pays for medical costs to report those payments to the federal government. The government will then compare the list of payees to their list of folks who receive Medicare payments. If there is an overlap, the government will look for refunds of the Medicare payments.
While you might think that this regulation only affects health insurers, the truth is that the scope is far larger. Our clients are not health insurers. One of them is a medical malpractice and workers compensation insurer. The other, believe it or not, is an auto manufacturer. The insurer is subject to this regulation because a portion of their settlements frequently covers the medical costs of the claimants. The auto manufacturer’s situation is more interesting. They are subject because, sometimes, when they pay out a product liability settlement, part of the amount paid is intended to cover the claimant’s medical costs.
Once you see how many companies are affected by this legislation you can get a sense of the total cost that its implementation will entail. Millions upon millions of dollars are being spent to make sure that affected companies are not subject to large penalties for noncompliance. In addition, it seems that many subject companies don’t yet realize that they are required to comply. For those who are subject, compliance is mandatory by Q1 of 2010 – so get to work!
How does Section 111 reporting relate to data warehousing? Well, in a couple of significant ways. First, complying with the regulation entails integrating data from a number of claims and payment systems into a single place from which it can be submitted to the government – just like a data warehouse integrates data from multiple sources. Second, if you already have a data warehouse, your compliance tasks may be much easier. One of our clients has a data warehouse in place that contains a lot of their claims data. We can, therefore, source this data from the warehouse and skip many of the integration and access issues we would otherwise encounter.
Are you under the gun for Section 111 compliance? Give me a call – we’ve got some cost effective ways to get you into compliance – quickly.
TIP: USING THE USER RESPONSE AND REPLACE FUNCTIONS TO FORMAT YOUR REPORT
When you include prompts in your Web Intelligence report, you are making your report dynamic so that each time it is run, you can retrieve the data you need to see at that time without modifying the query. Did you know that you can use the User Response function to capture the value(s) you select and then include that in a report title so that you can easily know what data the report includes?
View the User Response and Replace Function example >>