Archive | March 2012

Does Clinical Data qualify as “Big Data”?

I was at an analyst conference last week where I met a couple of analysts (no pun intended :-)) focused on Life Sciences who felt that “Big Data” is a tough sell in Life Sciences, except for genomic data. That made me think. I always associated “Big Data” with the size of the data sets, running into petabytes and zettabytes. What I have learned in my journey since then is that the characteristics of Big Data do not start and end with size.

This article on the Mike 2.0 blog by Robert Hillard, a Deloitte principal and author, titled “It’s time for a new definition of big data”, talks about why Big Data does not mean “datasets that grow so large that they become awkward to work with using on-hand database management tools”, as defined by Wikipedia. He goes on to illustrate three different ways that data could be considered “Big Data”. For more, please read the blog.

One quality he explained that is of interest to me is “the number of independent data sources, each with the potential to interact”. Why is it of interest to me? I think clinical data, in the larger context of research & development, commercialization and post-marketing surveillance, definitely fits this definition. In one of my previous posts, titled “Can Clinical Data Integration on the Cloud be a reality?“, I explain the diversity of clinical data in the R&D context. Now imagine including other data sources like longitudinal data (EMR/EHR, claims etc.), social media and pharmacovigilance: the complexity increases exponentially. Initiatives like the Observational Medical Outcomes Partnership (OMOP) have already proven that there is value in looking at data other than what is collected through the controlled clinical trial process. The same applies to initiatives under way with various sponsors and other organizations to make meaningful use of data from social media and other sources. You might be interested in my other post, titled “Social Media, Literature Search, Sponsor Websites – A Safety Source Data Integration Approach”, to learn more about such approaches that are being actively pursued by some sponsors.

All in all, I think that the complexities involved in making sense of disparate data sets from multiple sources, analyzing them meaningfully and ensuring that the benefits of medicinal products outweigh the risks definitely qualify clinical data as “Big Data”. Having said that, do I think organizations will be after this any time soon? My answer would be NO. Why? The industry is still warming up to the idea. Also, Life Sciences organizations are very conservative, especially when dealing with clinical data, which is considered intellectual property; with all the compliance and regulatory requirements that go with the domain, it is going to be a long time before this is adopted. This article titled “How to Be Ready for Big Data” by Thor Olavsrud on the CIO.com website outlines the current readiness and roadmap for adoption by the industry in general.

The next couple of years will see the evolution of tools and technology surrounding “Big Data”, which will definitely help organizations evolve their strategies and, in turn, result in an uptick in adoption.

As always, your feedback and comments are welcome.


SharePoint is not your problem; it probably is your People and Process

In a recent meeting with a colleague, we were discussing some of the challenges faced by customers who bring in certain tools that are easy to use and get adopted by multiple groups in the organization in a frenzy; before you know it, there is no method to that madness.

Yes, I am talking about SharePoint.

It was developed primarily as a tool to make it easy for IT teams to put together websites (intranet / extranet / internet) quickly, as well as to implement collaboration better, with the necessary plumbing pre-built. However, its ease of use drove adoption. Subsequent releases added features like document & records management, social media, insights and even the capability to integrate business data from other existing enterprise applications. The problem with such growth is that people use these tools in ways they were not originally designed for. Once the product teams realize these new ways of using it, they tweak the design or redesign the tool to fulfill such requirements. This cycle goes on and leads to the evolution of the product.

Anyway, coming back to the point: adoption without a strategy and governance leads to chaos. As one of my good friends, a SharePoint architect, puts it, “SharePoint is like a Virus !!!” and it needs to be stopped. While we can argue whether comparing it to a virus is the right way to describe it, his point is that the adoption rate in organizations is phenomenal. If not controlled, it will spin out of control, and in no time people will blame SharePoint for all the problems. To be honest, this is the case pretty much across the board wherever it is adopted without proper strategy and governance.

To derive the maximum benefit from a SharePoint implementation, one needs:

  • A good strategy before you bring in SharePoint, to ensure it serves the business purpose
  • A proper information architecture to implement and configure it the right way
  • A good application life cycle management process to ensure applications are created and managed the right way and, more importantly, retired once their purpose is served
  • A good process to increase adoption within the organization
  • A good training process to ensure that the IT and end-user communities are trained to use the tool the way it ought to be used and
  • Last but not least, a good governance process to keep it all in check

While the above points apply to new adoptions, the same goes for organizations struggling with problems resulting from unplanned adoption. They should take a step back and look at the problems they are facing. More often than not, the problems exist because one or more items from the list above have not been followed. While it could take a huge effort to clean up the mess already created, it is never too late to start adopting best practices that will steer them in the right direction over time.

A good resource to start with is provided by Microsoft as part of their TechNet Resource Center. Good luck with your efforts, and do let me know if I can help in any way.

Outcome-Based, Shared-Risk, Managed Support Service – Model for the Future?

With the advent of cloud computing and the wide adoption of outsourced and offshore service models, many organizations are relying on partners to provide managed services, more than ever. While this helps transfer capital expenditure to operational expenditure and allows organizations to focus on “core competencies”, the challenge remains: how efficient is the service model, and how satisfied are the customers? These models also increase risk, as the impact of failure is felt far more by the sourcing organization than by the managed service provider.

On the flip side, it is reasonable for the partner to expect incentives not only for delivering outstanding services but also for continually improving them. In this context it becomes very relevant to establish a relationship that constantly measures outcomes and provides an opportunity for both parties to share the risk. Such models give all stakeholders an incentive to define the business service better, identify outcomes that are objective, and operate a process of periodic review and improvement of the services.

Let’s briefly look at each of the aspects mentioned in the title of the post.

“Managed Support Service”:

The fundamental principle that drives this aspect of a service model is that the customer is not micro-managing the partner’s personnel by assigning specific tasks, but rather measuring the quality of service delivered and driving towards continuous improvement. The quality of service is to be defined in terms of specific outcomes.

“Outcomes”:

In my opinion, the outcomes should be defined in business terms. While it is possible to define business outcomes, it will be very hard to get partners to agree to specific outcomes and the associated incentives/penalties for (non-)performance in delivering them. From an IT perspective, however, it is a lot easier to leverage the commitments already made to the business, i.e., the service levels, to ensure business operations run uninterrupted.

“Shared Risk”:

When the customer and the partner agree to a managed service based on agreed-upon outcomes, there should be an incentive/penalty model that drives the overall engagement toward continual improvement. Thus, the partner gets penalized for non-performance but earns incentives for better performance.
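To make this concrete, here is a minimal sketch of how such a fee adjustment might be computed from a measured service level. All tiers, percentages and names here are hypothetical, purely for illustration, not taken from any real contract:

```python
# Hypothetical shared-risk fee adjustment: a penalty for missing the
# agreed service level, an incentive for exceeding it. Thresholds and
# percentages are illustrative only.

def fee_adjustment(base_fee: float, target_sla: float, achieved_sla: float) -> float:
    """Return the adjusted monthly fee given target vs. achieved SLA (in %)."""
    gap = achieved_sla - target_sla
    if gap >= 2.0:          # exceeded target by 2+ points: 5% incentive
        return base_fee * 1.05
    if gap >= 0.0:          # met target: full fee, no adjustment
        return base_fee
    if gap >= -2.0:         # narrow miss: 5% penalty
        return base_fee * 0.95
    return base_fee * 0.85  # significant miss: 15% penalty

# Example: target 99.5% availability, achieved 98.9% -> narrow miss
print(fee_adjustment(100_000.0, 99.5, 98.9))  # 95000.0
```

The exact tiers would be negotiated per engagement; the point is that both parties can compute the adjustment mechanically from measured outcomes.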

These elements ensure that the business gets what it wants for the money it spends, the customer IT organization manages the quality of service rather than the partner’s personnel, and the partner gets incentives to deliver on commitments and continuously improve. This would be a “win-win” proposition for all parties involved, in my opinion.

All in all, as noted in the article titled “What matters most in Outsourcing: Outcomes vs. Tasks” in CIO magazine, while outcome-based managed services are something of a holy grail, I have personally driven multiple contracts in this direction and even managed programs in this model. Given the direction we are all headed in terms of cloud-based services, I think this is a model that would work and should be used more often.

Can “Clinical Data Integration on the Cloud” be a reality?

The story I am about to tell is almost 8 years old. I was managing software services delivery for a global pharmaceutical company from India. This was a very strategic account, and the breadth of services covered diverse systems and geographies. It is very common for staff from the customer organization to visit our delivery centers (offsite locations) to perform process audits and governance reviews and to meet people in their extended organizations.

During one such visit, a senior executive noticed that two of my colleagues, sitting next to each other, supported their system (two different implementations of the same software) across two different geographies. They happened to have the names of the systems they supported pinned to boards at their desks. The executive wanted us to take a picture of the two cubicles and email it to him. We were quite surprised at the request. Before moving on to speak to other people, he asked a couple of questions and realized the two were sharing each other’s experiences and leveraging the lessons learned from one deployment for the other geography. It turned out that this does not happen in their organization; in fact, their internal teams hardly communicate, as they are part of different business units and geographies.

The story demonstrates how such organizations can become siloed due to distributed, outsourced and localized teams. Information integration has become the way of life for connecting the numerous silos that are created in the process. Clinical research is a complex world. While the players are limited, information silos multiply with the size of the organization and the distributed nature of the teams (including third parties), and with them the complexity of data integration increases. The result is very long cycle times from data “Capture” to “Submission”.

Clinical Data Integration Challenges

The challenges in integrating the clinical data sources are many. I will try to highlight some of the key ones here:

  • Study Data is Unique: Depending on the complexity of the protocol and the design of the study, the data collected varies. This makes it difficult to create a standardized integration of data coming in from multiple sources.
  • Semantic Context: While the data collected could be similar, unless the context is understood, it is very hard to integrate the data meaningfully. Hence, the integration process becomes complex, as semantics become a major part of it (a toy illustration follows this list).
  • Regulations and Compliance: Given the risks associated with clinical research, every phase of the data life cycle is expected to be auditable. This makes it very difficult to manage some of the integrations, as they may involve complex transformations along the way.
  • Disparate Systems: IT systems used by sponsors, CROs and other parties could be different. This calls for an extensive integration exercise, leading to large projects and, in turn, huge budgets.
  • Diverse Systems: IT systems used at each phase of the clinical data life cycle are different. This makes sense, as the systems are usually meant to fulfill a specific business need. Even the functional organizations within a business unit are organized to focus on a specific area of expertise. More often than not, these systems are a combination of home-grown and commercial off-the-shelf products from multiple vendors. Hence, the complexity of integration increases.
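As a toy illustration of the semantic-context point, two sites may report the same lab test under different names and units, so an explicit mapping is needed before the records can be combined. The records and term names below are hypothetical:

```python
# Hypothetical example: two sites report the same lab test under
# different local names and units; a mapping to a common term is
# needed before the records can be meaningfully integrated.

SITE_A = [{"test": "HGB", "value": 13.5, "unit": "g/dL"}]
SITE_B = [{"test": "Haemoglobin", "value": 8.4, "unit": "mmol/L"}]

# Illustrative mapping of local names onto one standard term.
TERM_MAP = {"HGB": "Hemoglobin", "Haemoglobin": "Hemoglobin"}
# Hemoglobin: 1 g/dL is approximately 0.6206 mmol/L.
TO_G_PER_DL = {"g/dL": 1.0, "mmol/L": 1 / 0.6206}

def harmonize(record: dict) -> dict:
    """Map a local record onto the standard term and unit."""
    return {
        "test": TERM_MAP[record["test"]],
        "value": round(record["value"] * TO_G_PER_DL[record["unit"]], 2),
        "unit": "g/dL",
    }

combined = [harmonize(r) for r in SITE_A + SITE_B]
print(combined)  # both records now share one term and one unit
```

Multiply this by hundreds of tests, visits and coding dictionaries per study, and it becomes clear why semantics dominate the integration effort.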

What is Integration on the Cloud?

As mentioned earlier, integration is a complex process. As cloud adoption increases, data may be distributed across public, private (including on-premise applications) and hybrid clouds. The primary objective of integration on the cloud is to provide software-as-a-service that integrates these diverse systems. This follows the same pattern as other cloud services and delivers a similar set of benefits.

The “Integration on Cloud” vendors typically offer three types of services:

  1. Out-of-the-Box Integrations: The vendor has pre-built point-to-point integrations between some of the most used enterprise software systems in the market (ERPs, CRMs etc.)
  2. Do-it-Yourself: The users have the freedom to design, build and operate their own integration processes and orchestrations. The service provider may offer professional services to support the users along the way.
  3. Managed Services: The vendor provides end-to-end development and support services

From a system design and architecture perspective, the vendors typically provide a web application to define the integration touch points and to orchestrate a workflow that mimics a typical Extract-Transform-Load (ETL) process. It has all the necessary plumbing to ensure that the defined process is executed successfully.
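For readers less familiar with the pattern, here is a minimal sketch of the ETL flow such a service orchestrates. The CSV source, the field names and the JSON target are hypothetical placeholders for whatever endpoints the integration actually connects:

```python
# Minimal Extract-Transform-Load sketch. The source file, field
# mappings and target are illustrative placeholders only.
import csv
import json

def extract(path: str) -> list[dict]:
    """Pull raw rows from a source system (here, a CSV export)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Apply cleansing and mapping rules on the way through."""
    return [
        {"subject_id": r["SUBJID"].strip(), "visit": r["VISIT"].upper()}
        for r in rows
        if r.get("SUBJID")  # drop rows with no subject identifier
    ]

def load(rows: list[dict], path: str) -> None:
    """Write the harmonized rows to the target system (here, a JSON file)."""
    with open(path, "w") as f:
        json.dump(rows, f, indent=2)

if __name__ == "__main__":
    load(transform(extract("site_export.csv")), "target_feed.json")
```

A cloud integration service wraps this same flow in a hosted designer plus scheduling, monitoring and retry plumbing, so users configure it rather than code it.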

Who are the players?

I thought it would be useful to look at some of the early movers in this space. The following is a list (not exhaustive and in no particular order, of course) of “Integration on Cloud” providers:

  1. Dell Boomi: AtomSphere
  2. Informatica: Informatica Cloud
  3. IBM: Cast Iron Cloud Integration
  4. Jitterbit: Enterprise Cloud Edition

These vendors have specific solution and service offerings. Most of them provide out-of-the-box point-to-point integration of enterprise applications like ERPs, CRMs etc. They also offer custom integrations to accomplish data migration, data synchronization, data replication and the like. One key aspect to look for is “standards-based integration”; I will explain later why that is important from a clinical data integration perspective. While this offering is still in its infancy, some customers already use these services and others are in the process of adopting them.

Clinical Data Integration on Cloud

Many of you dealing with clinical data integration may be wondering, “Why bother with integration on the cloud when we have enough trouble finding a viable solution in a much simpler environment?” I have spent the past 4 years either creating solutions and services to meet this requirement or selling partner solutions that do. I will confess that it has been a challenge, not just for me but for the customers too. There are many reasons: the need to streamline the clinical data life cycle and data management processes, retiring existing systems, bringing in new systems, organizational change and so on. Not to mention the cost associated with it.

So, why do we need integration on the cloud? I firmly believe that if a solution provides the features and benefits listed below, customers will be more than willing to give it strong consideration (“If you build it, they will come”). As with all useful ideas in the past, this too will be adopted. So, what features would make clinical data integration on the cloud palatable? The following are a few key ones:

  1. Configurable: The uniqueness of studies makes every new data set coming in from partners unique, and semantics are also key to integration. Hence, a system that makes it easy to configure the integrations, for literally every study, will be required.
  2. Standards: The key to solving integration problems (across systems or organizations) is reliance on standards. Standards proposed by bodies like CDISC and HL7, and widely accepted by the industry, will reduce the complexity. Hence, the messaging across the integration touch points on the cloud should rely heavily on standards (see the sketch after this list).
  3. Regulatory Compliance and GCP: As highlighted earlier, clinical research is a highly regulated environment. Hence, compliance with regulations like 21 CFR Part 11, as well as adherence to Good Clinical Practices, is a mandatory requirement.
  4. Authentication and Information Security: This would be one of the key concerns for all parties involved. Any compromise here would not only mean the loss of billions of dollars but also adversely impact patients who could potentially benefit from the product being developed. Even PII data could be compromised, which would be unacceptable.
  5. Cost: Given the economically lean period for the pharma industry, due to patent expiries and the macro-economic situation, this would be a key factor in the decision-making process. While a cloud service inherently converts CapEx to OpEx and thus makes costs more predictable, there will be pressure to keep costs low for add-on services like “new study data” integration.
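To hint at what standards-based, auditable messaging could look like, here is a minimal sketch that reads subjects out of a simplified, CDISC ODM-like XML payload and logs a rudimentary audit entry for each record touched. Real ODM carries namespaces, metadata versions and formal AuditRecord elements; the payload, actor name and log structure below are illustrative assumptions only:

```python
# Sketch: ingesting a simplified ODM-like payload with a basic audit trail.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

PAYLOAD = """
<ODM>
  <ClinicalData StudyOID="STUDY-001">
    <SubjectData SubjectKey="SUBJ-0001"/>
    <SubjectData SubjectKey="SUBJ-0002"/>
  </ClinicalData>
</ODM>
"""

audit_log: list[dict] = []  # a compliant system would use tamper-evident storage

def ingest(xml_text: str) -> list[str]:
    """Extract subject keys and record who/when/what for each one."""
    root = ET.fromstring(xml_text)
    subjects = []
    for clinical in root.iter("ClinicalData"):
        for subj in clinical.iter("SubjectData"):
            key = subj.get("SubjectKey")
            subjects.append(key)
            audit_log.append({
                "when": datetime.now(timezone.utc).isoformat(),
                "who": "integration-service",  # illustrative actor id
                "what": f"ingested {key} from study {clinical.get('StudyOID')}",
            })
    return subjects

print(ingest(PAYLOAD))  # ['SUBJ-0001', 'SUBJ-0002']
print(audit_log)
```

The point is that the audit trail is produced as part of the integration itself, not bolted on afterwards.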

Conclusion

All in all, I would say that it is technically and economically possible, and also a step in the right direction to overcome some existing challenges. Will it happen tomorrow or within the next year? My answer would be NO. In 2 to 3 years, probably YES. The key to making it happen is to try it on the cloud rather than on-premise. Some of the vendors offering integration on the cloud could be brought in as partners to solve this age-old problem.

Update on 03/27/2012:

This post has been picked up by the “Applied Clinical Trials Online” magazine and posted on their blog here.
