MEDDEV 2.7/1 Revision 4: Clinical Evaluation
We spoke with over a dozen clients about how they currently conduct their clinical evaluations; here is what we found:
- Sources: Manual searches across PubMed, Cochrane, NIH, Google Scholar, ClinicalTrials.gov, and other sources.
- Documentation: Download the data from the different sources into an Excel sheet or Word document.
- Curation: Skim through the titles and abstracts to manually mark the relevance of each publication.
- MedDev Criteria: Manually create and respond to each of the MedDev criteria questionnaires, and manually move each publication to the included or excluded list.
- Publication Review: Search Google or PubMed for a free copy of each article.
- Adverse Event Database: Sift through the MAUDE database to manually create a pivot report showing the number of malfunctions, deaths, and other event types.
- Reports: Once again using manual processes, document all the search criteria and diagram the selection flow, showing the number of included, excluded, and state-of-the-art publications.
- Gather Publication Details: Using an Excel sheet with the full bibliography, gather the details of each publication, such as study design, study objective, safety outcomes, and more…
- State of the Art: Some used a Word document and others a tabular Excel sheet to document the state of the art.
Such manual processes are time-consuming and error-prone, which can lead to compliance issues.
There is a better way
NLP can help us sift through the mountains of data that exist across publications, clinical trials, and other digital sources. For the purposes of this blog, we use the NLP solution within Perta.io to demonstrate.
Let’s say a user is conducting a clinical evaluation of “Heart Attack” devices. How do you ensure that related terms such as “Myocardial Infarction” are also captured in the search? Most likely, the user will need to run several separate searches to make sure no data is left behind. And how can one ensure there are no duplicates in the output across those searches? It is hard, and it requires a lot of manual intervention.
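As a rough illustration of what an automated search layer must handle, here is a minimal Python sketch of both chores: expanding a lay term into known synonyms before searching, and de-duplicating results merged from several searches. The synonym map and result records are hypothetical examples, not Perta.io’s actual vocabulary or API.

```python
# Hypothetical synonym map: a real system would draw on a controlled
# vocabulary such as MeSH rather than a hand-written dictionary.
SYNONYMS = {
    "heart attack": ["heart attack", "myocardial infarction", "MI"],
}

def expand_terms(term: str) -> list[str]:
    """Return the query term plus any known synonyms."""
    return SYNONYMS.get(term.lower(), [term])

def dedupe_by_id(results: list[dict]) -> list[dict]:
    """Merge result lists from multiple searches, keeping one record per PMID."""
    seen = set()
    unique = []
    for record in results:
        if record["pmid"] not in seen:
            seen.add(record["pmid"])
            unique.append(record)
    return unique

queries = expand_terms("Heart Attack")
# → ["heart attack", "myocardial infarction", "MI"]

merged = dedupe_by_id([
    {"pmid": "123", "title": "Outcomes after MI"},
    {"pmid": "456", "title": "Stent trial"},
    {"pmid": "123", "title": "Outcomes after MI"},  # duplicate from a second search
])
# merged keeps two records: PMIDs 123 and 456
```

The point of the sketch is that each lay query fans out into several searches, and the combined results must be keyed on a stable identifier (here the PMID) before curation begins.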
The example below shows Natural Language Processing applied against millions of publications and hundreds of thousands of clinical trials. It uses Delve Health’s Perta solution, which structures the user’s question and quickly shows what Perta actually searched for.
In the search above, we only entered the condition “Heart Attack”, the intervention “Surgery”, and the outcome of interest, “Stenosis”.
THE RESULTS
The system translated the search using NLP algorithms, surfacing “Myocardial Infarction”, “Pathologic Constriction”, “Procedure”, and other related fields to make sure the user did not miss relevant results. This lets the user decide whether to eliminate or focus on a specific condition, event, or medical device.
WAS THIS HELPFUL?
Perta searches save hours, even days, of work. Imagine if you had to conduct such searches manually. How many hours would that have taken? Here is what one user told us:
“What I usually do in a day and a half, I was able to do in 4 hours.”
The example above used Natural Language Processing algorithms, translating what users are looking for into a more robust search and ensuring that pertinent information is found quickly and easily.
SEARCH ENGINE
Perta.io searches multiple sources of data, including PubMed, Medline, PLOS.org, Science Direct (subscription required), Embase (subscription required), ClinicalTrials.gov, adverse event databases (e.g. MAUDE), and more…
CLINICAL EVALUATION
Projects
Using Perta.io, users can create their own projects and structure folders for the different products they are preparing a CER for.
Search
Per the MedDev requirements, users can search multiple sources through Perta.io and save the output from all engines to their desired project folder.
Other Sources
Knowing that users might want to conduct their own searches on PubMed or Cochrane, Perta.io also allows users to upload their PubMed or Cochrane data to the desired project folder.
CER Process
Perta.io structures all the selected publications to guide users through a defined process, ensuring data compliance and reducing error:
- Selection based on title and abstract: Choose to include, exclude, or mark a publication as state of the art.
- Review full-text article: Mark whether a publication’s full text has been reviewed; based on that review, users can change their original selection, excluding the publication or marking it as state of the art.
- Upload publication: Upload the publication directly to Perta for easy access.
- MedDev Criteria: Once the full-text review is complete, the user responds to the MedDev criteria questions. Based on the responses, a publication either remains included or is excluded.
- Publication Details: After all publications have gone through the MedDev criteria, the user can add publication details such as study design, objective, outcomes, number of patients, conclusion, and more.
- State of the Art: Document whether an included publication still represents the state of the art.
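The screening steps above can be sketched as a simple per-publication record with a constrained status field that is revised as the review progresses. The class and field names below are illustrative assumptions, not Perta.io’s actual data model.

```python
from dataclasses import dataclass, field

# The three outcomes a publication can land in during screening.
STATES = {"included", "excluded", "state_of_the_art"}

@dataclass
class Publication:
    """One record in the CER screening workflow (hypothetical schema)."""
    pmid: str
    title: str
    status: str = "included"           # initial call from title/abstract screening
    fulltext_reviewed: bool = False
    meddev_responses: dict = field(default_factory=dict)

    def set_status(self, status: str) -> None:
        """Change the selection, rejecting anything outside the known states."""
        if status not in STATES:
            raise ValueError(f"unknown status: {status}")
        self.status = status

# A publication included at title/abstract stage, then reclassified
# after the full-text review.
pub = Publication(pmid="123", title="Outcomes after MI")
pub.fulltext_reviewed = True
pub.meddev_responses["data_source_adequate"] = "yes"
pub.set_status("state_of_the_art")
```

Constraining the status field to a fixed set is what keeps the final flowchart counts (included, excluded, state of the art) consistent, which is exactly the audit trail the MedDev criteria demand.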
ADVERSE EVENTS
Perta can connect to MAUDE directly, allowing users to search by their product codes and either download the entire data set or get a report showing the number of injuries, deaths, and malfunctions by product and/or manufacturer.
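As an illustration of that report, here is a minimal sketch that tallies adverse events by type for each product code. The records below are hypothetical; a real MAUDE export carries many more fields.

```python
from collections import Counter

# Hypothetical MAUDE-style event records, reduced to the two fields
# the pivot report needs.
maude_events = [
    {"product_code": "DYB", "event_type": "Malfunction"},
    {"product_code": "DYB", "event_type": "Injury"},
    {"product_code": "DYB", "event_type": "Malfunction"},
    {"product_code": "NIQ", "event_type": "Death"},
]

def pivot_by_event_type(events: list[dict]) -> dict[str, Counter]:
    """Group events by product code, then tally each event type."""
    report: dict[str, Counter] = {}
    for e in events:
        report.setdefault(e["product_code"], Counter())[e["event_type"]] += 1
    return report

report = pivot_by_event_type(maude_events)
# report["DYB"] → Counter({"Malfunction": 2, "Injury": 1})
```

This is the same pivot users described building by hand in Excel; automating it means the counts stay current as new events arrive.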
REPORTS
Once the entire CER process is complete, Perta users can simply download a set of reports ready to drop into the CER. The reports include a flowchart of the entire publication selection process: what was excluded, the responses justifying each exclusion, and more.