
Social Media, Literature Search, Sponsor Websites – A Safety Source Data Integration Approach

I have been part of my fair share of discussions on Drug Safety and Social Media. In fact, I have even written a blog post about how these two are being forced into an “arranged marriage”, which could be a good thing :-). While processing data from social media is very complex and often unreliable, there is an increasing push to process it anyway. Understandably, Marketing teams are the first to adopt social media channels in pharmaceutical organizations; now the drug safety teams are being forced to act, since these channels could end up generating adverse events that they are obligated to register, review and report.

As mentioned, processing data from social media can be complex and may yield very few cases (0.2% according to a Nielsen online survey of health-related social media content), but the high-level process is very similar to Literature Scanning. The latter is something that organizations already handle, and I think that Social Media content search and analysis can become an extension of this process. Now let’s look at both processes.

Literature Search:

Literature Search is used by BioPharmaceutical organizations to identify Adverse Events related to their medicinal products in medical and scientific journals published worldwide. This process was adopted as a result of multiple serious adverse events and the ensuing regulations and increased safety concerns. Many sponsor organizations have successfully built automated systems to speed up the overall process. These systems typically scan sources (Journals, Abstract Libraries and Reference Libraries) based on certain keywords, product names, Boolean expressions etc. and capture the mentions into a local database. These entries are then screened by trained professionals to either accept or reject them based on the required data elements to qualify as an adverse event. If additional details are required, the journals are purchased and reviewed to qualify the “hit” as an adverse event. Once identified, this becomes a case that will then be transferred manually or electronically (e.g. E2B) to an Adverse Event Management System and will follow the life cycle till it is reported as an expedited or periodic report to regulatory authorities.
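To make the screening step a bit more concrete, here is a minimal sketch of keyword-based capture into a local review queue. The product terms, safety keywords and table layout are purely illustrative assumptions on my part, not any vendor’s actual schema.

import sqlite3

PRODUCT_TERMS = ["productx", "examplamab"]               # hypothetical product/brand names
EVENT_TERMS = ["adverse event", "reaction", "toxicity"]  # hypothetical safety keywords

def is_hit(abstract_text):
    """Flag an abstract when a product term and a safety term co-occur."""
    text = abstract_text.lower()
    return any(p in text for p in PRODUCT_TERMS) and any(e in text for e in EVENT_TERMS)

def record_hits(abstracts, db_path="literature_hits.db"):
    """Store candidate mentions locally for manual screening (accept/reject)."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS hits (source_id TEXT, title TEXT, status TEXT)")
    for item in abstracts:  # each item: {'id': ..., 'title': ..., 'abstract': ...}
        if is_hit(item["abstract"]):
            conn.execute("INSERT INTO hits VALUES (?, ?, 'PENDING_REVIEW')",
                         (item["id"], item["title"]))
    conn.commit()
    conn.close()

Anything recorded this way still goes through the trained reviewers before it becomes a case; the automation only narrows down what they have to look at.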

Social Media:

This process can be very similar to Literature Search, except that the source of data is much more diverse and the data is far less structured. Depending on the source system, a manual or automated process can be adopted to monitor and record the “hits”. If the source is a blog, Twitter or Facebook, a tool can be built to continuously poll certain blogs, tweets or Facebook pages to scan for keywords, products/brands etc. The resulting “hits” can be processed to filter and aggregate the “trends”. These trends can then be reviewed by trained professionals to decide whether they qualify as “Safety Cases”, which will then be processed per the AE case management process.
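Here is a rough sketch of the polling-and-trending idea, assuming a generic fetch_posts() callable that returns post texts from a blog, Twitter or Facebook source; a real connector would use each platform’s own API or RSS feed, and the brand and symptom terms below are made up for illustration.

from collections import Counter
import time

PRODUCT_TERMS = ["productx"]                          # hypothetical brand name
SYMPTOM_TERMS = ["side effect", "rash", "headache"]   # hypothetical symptom keywords

def aggregate_trends(posts):
    """Count product/symptom co-mentions so reviewers can spot emerging trends."""
    trends = Counter()
    for post in posts:
        text = post.lower()
        for product in PRODUCT_TERMS:
            if product not in text:
                continue
            for symptom in SYMPTOM_TERMS:
                if symptom in text:
                    trends[(product, symptom)] += 1
    return trends

def poll(fetch_posts, interval_seconds=3600):
    """Periodically pull new posts via a source-specific fetch_posts() callable
    (the real connector would use the platform's API or an RSS feed)."""
    while True:
        trends = aggregate_trends(fetch_posts())
        for (product, symptom), count in trends.most_common():
            print(f"{product} + {symptom}: {count} mention(s) flagged for review")
        time.sleep(interval_seconds)

The aggregated counts are only a triage aid; each underlying post would still need a trained professional to decide whether it contains the data elements required to qualify as an adverse event.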

Enterprise Websites, Response Centers etc.:

The third variety that may be considered as source systems for safety cases are Brand Websites and other portals set up to increase brand awareness, help patients receive medicines faster, or address any questions and concerns. This may even include response centers set up for patients, pharmacists and physicians to reach the sponsors for information and advice. Depending on the nature of the inquiries, these could be potential sources of Adverse Events. This data, once screened and qualified, can also be fed into the AE Management System for subsequent review and reporting purposes.

Source Data Integration:

From a technology standpoint, the architecture and design for aggregation and analysis of data may differ for each of the datasets. However, an integrated approach to collecting, aggregating, analyzing and reporting Adverse Event data needs to be adopted by sponsor IT organizations. The diagram below depicts:

  1. Multiple Source Categories and Systems (Literature, Social Media, Enterprise Websites)
  2. Multiple Interfaces  (Manual, XML, Text, API, RSS, Web Services etc.)
  3. Simple, High Level process to screen, record, review and report the case
[Diagram: Literature Scanning, Social Media & Enterprise Websites – Safety Source Data Integration]
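As a simplified sketch of the “many interfaces, one screening pipeline” idea, each source adapter could convert its raw format (XML, JSON/Web Service, RSS, manual entry) into a common candidate record before the screen, record, review and report steps. The field and element names below are assumptions for illustration only, not an actual E2B or product schema.

import json
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class CandidateCase:
    source: str        # e.g. "literature", "social_media", "brand_website"
    reference_id: str  # identifier in the source system
    narrative: str     # free-text content to be screened
    status: str = "PENDING_REVIEW"

def from_xml(xml_text):
    """Adapter for an XML feed (element names are illustrative)."""
    root = ET.fromstring(xml_text)
    return CandidateCase("literature", root.findtext("id", ""), root.findtext("abstract", ""))

def from_json_api(payload):
    """Adapter for a JSON/web-service source (keys are illustrative)."""
    data = json.loads(payload)
    return CandidateCase("brand_website", str(data.get("inquiryId", "")), data.get("message", ""))

def enqueue_for_screening(case, queue):
    """Hand the normalized record to the common screening/review workflow."""
    queue.append(case)

Keeping the adapters thin and the downstream screening, review and reporting steps common is what makes the integrated approach manageable across such different sources.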

(To Be Continued…)

Jumpstart your PV Operational Efficiency Improvement Initiatives through Oracle Pharmacovigilance Operational Analytics (OPVA)

I attended Oracle Open World (OOW) recently, during the week of Oct 2nd, and got a sneak peek of the newly released Oracle Pharmacovigilance Operational Analytics tool. This tool is intended to give the Safety & Pharmacovigilance departments of MAHs and CROs a jumpstart in monitoring the operational performance of their organizations and identifying bottlenecks. The key point to note is that this tool provides only Operational Analytics and doesn’t include Scientific Analytics.

Some of the key business benefits it will deliver include:

  • Provide insights into case processing operations
  • Measure compliance and productivity across the organization
  • Monitor KPIs and metrics through dashboards in real time
  • Drill down to identify bottlenecks in the process
  • Improve productivity and save costs across the organization
  • Improve compliance and enable better patient safety

From an IT Systems perspective:

  • Single repository of operational safety data
  • Star schemas that could be populated with data from internal and external sources
  • Predefined reports, KPIs, Metrics and Dashboards
  • Ability to customize and extend the dashboards and reports
  • Ability to view current as well as historical operational performance of organization and individuals
  • Out-of-the-box integration with Oracle’s Argus Safety
  • Built using Oracle BI (OBIEE), which gives the ability to view dashboards on the web as well as on mobile devices (iPhones and iPads)

Some of the key dashboards that are provided out-of-the-box:

  • Case processing history in terms of volumes, compliance and repetition
  • Case processing management in terms of volumes, compliance and workflow state SLA management
  • Personal user dashboards with an individual’s case management and case history

Some of the reports include:

  • Completed case volume overview and trends
  • Pending case volume overview and trends
  • Case version line listings
  • Individual case workload reports and line listings

OPVA is compatible with Argus Safety 6.0.2 and Argus Safety 7.0.

One of the key uses of OPVA for organizations that want to define additional metrics, KPIs, dashboards and reports is the star schema that is provided with the product. It comes with a set of predefined dimensions and facts that can be leveraged to speed up the development of additional custom dashboards, KPIs, metrics and reports (a rough sketch of such a custom query follows the two lists below).

Dimensions provided by the OPVA schema are:

  • Period
  • Product
  • Study
  • List of Values
  • Users
  • User Group
  • Case Processing Site
  • Country
  • Enterprise (in a multi-tenant scenario more relevant for CROs)

The pre-built facts in the OPVA data mart are:

  • Case Version History
  • Case Routing History
  • Case Workflow History and
  • Pending Cases
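As a rough illustration of how the star schema could be leveraged for a custom metric, the sketch below joins a pending-case fact with the Case Processing Site dimension to produce a simple workload and aging report. The table and column names are assumptions made for illustration, not OPVA’s actual physical model.

import pandas as pd

# Fact: pending cases (one row per pending case) -- illustrative data
pending_fact = pd.DataFrame({
    "case_id": [101, 102, 103, 104],
    "site_key": [1, 1, 2, 2],
    "days_open": [3, 12, 5, 21],
})

# Dimension: case processing site -- illustrative data
site_dim = pd.DataFrame({
    "site_key": [1, 2],
    "site_name": ["Site A", "Site B"],
})

# Join the fact to the dimension and compute pending volume and average age per site
report = (
    pending_fact.merge(site_dim, on="site_key")
    .groupby("site_name")
    .agg(pending_cases=("case_id", "count"), avg_days_open=("days_open", "mean"))
    .reset_index()
)
print(report)

The same join-and-aggregate pattern would apply to any additional KPI an organization wants to layer on top of the prebuilt dimensions and facts, whether it is surfaced through OBIEE dashboards or elsewhere.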

I think this is a very good addition by Oracle to its stable of products, and it complements the existing tools quite well. It is also a very good tool for customers wanting to jumpstart their efforts to measure their safety operations (aka Case Processing), identify the bottlenecks in terms of compliance and productivity, and rein in costs through improvement plans that address these aspects.

Agile Waterfall Model – Agile for Validated Projects in Pharma?

Validation of IT Systems in Pharmaceutical Industry:

I have been working with a Global Pharmaceutical customer for the past 5 years. One visible difference I have noticed in the way projects are executed, specifically projects that need regulatory compliance, is the amount of documentation that the team has to generate in order to validate the project. Validation, in its simplest sense, is “being able to prove that the team has followed all the steps/processes that they are supposed to”. The IT system being developed should come clean during any audits by the FDA to ensure the system is built as per 21 CFR Part 11 – Electronic Records; Electronic Signatures. This typically adds, on average, about 20% to 30% additional effort due to the documentation required.

Agile Projects:

Agile Software Development is much more liberal in terms of documentation. In fact, the Agile Manifesto values “Working Software over comprehensive documentation”. While the manifesto acknowledges the value in documentation, its authors prefer to ‘Get it Done’ rather than documenting ‘How it will be done’.

David vs. Goliath:

Having looked at what is required by the FDA and what is valued by Agile, it is evident that using Agile for projects that are ‘Validated’ will be a challenge. However, the world has come to realize and acknowledge that delivering projects using Agile is a very good way to ensure customer satisfaction by adjusting to changing needs and delivering better quality. That said, there are a few people who claim that Agile is just a way for IT folks to get away with not documenting, and that it is only for programming-frenzy organizations.

Enter “Agile Waterfall Model”:

This is a fancy name that I just cooked up, but I will try and explain what we have done and will let the experts give it a name or trash it.

Step 1: We captured all the requirements in the form of Business Requirements (User Stories), the traditional way, through workshops and one-on-one meetings with business users.

Step 2: Provide estimates in terms of Cost, Schedule and Effort, with a 20% buffer, based on these requirements.

Step 3: Divide the functionality into buckets/groups (Themes).

Step 4: Through Usability Design, refine the requirements for the first bucket, which we called Iteration 1.

Step 5: Complete the SDLC (Design, Develop & Test) in the Waterfall model for Iteration 1 and release the application.

Step 6: Repeat steps 4 and 5 for the rest of the buckets.

While this can be looked at as a program with multiple smaller projects, we did employ some Agile techniques like daily stand-up meetings, Test-Driven Development, weekly builds, and user reviews of the weekly builds for early feedback. In the subsequent iterations we were able to overcome some of the pitfalls and issues that we noticed in Iteration 1. Whether or not this can be called Agile, trying to emulate Agile in a traditional Waterfall project helped us a great deal.

With that, I rest my case for naming me the father of the “Agile Waterfall Model”. Jokes apart, I would love to hear what you think about this model, or whether there is a better way to deal with “Validation”. I am also researching this a bit and will try to blog about the information I come across along the way.

Cheers,

Venu.
