Meta-analysis is a crucial research method that synthesizes findings from multiple studies, yet it can be incredibly time-consuming and expensive: the process usually takes months or even years to complete and demands significant resources. However, I recently completed a meta-analysis of 480 papers in just three weeks! In this blog post, I will share my process, highlighting the tools and strategies I used to speed up the research and save valuable time and funds.
A meta-analysis involves several key steps: running keyword searches in multiple databases, removing duplicates, title/abstract screening, downloading full texts, full-text screening, data entry, and analysis. I will go through each of these steps and share how I saved time and resources along the way.
Running keyword searches in multiple databases can be tedious and time-consuming. To make this step more efficient, I used ChatGPT, an AI language model, to generate search codes for various databases, such as Scopus, Web of Science, and ProQuest. I provided the model with my keywords organized into condition sets, and it generated a search code tailored to each database. To extract the search results, I either used Python code to interact with the database APIs or downloaded the files manually. Alternatively, you can outsource this task on a platform like UpWork to save time.
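If you want to script the extraction yourself, here is a minimal sketch against the Scopus Search API. The API key, the example search string, and the record limit are placeholders; Web of Science and ProQuest have their own APIs and export formats, so adapt accordingly.

```python
import requests

# Minimal sketch: pull records from the Scopus Search API for one search string.
# SCOPUS_API_KEY and the query below are placeholders: swap in your own key
# and the ChatGPT-generated search code for each database.
SCOPUS_API_KEY = "YOUR-API-KEY"
SEARCH_URL = "https://api.elsevier.com/content/search/scopus"
query = 'TITLE-ABS-KEY("remote work" AND "well-being")'  # example search code

def fetch_scopus_records(query, page_size=25, max_records=200):
    records, start = [], 0
    while start < max_records:
        resp = requests.get(
            SEARCH_URL,
            headers={"X-ELS-APIKey": SCOPUS_API_KEY, "Accept": "application/json"},
            params={"query": query, "start": start, "count": page_size},
        )
        resp.raise_for_status()
        entries = resp.json().get("search-results", {}).get("entry", [])
        if not entries:
            break
        records.extend(entries)
        start += page_size
    return records

results = fetch_scopus_records(query)
print(len(results), "records retrieved")
```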
Removing duplicate results from multiple databases is essential to ensure a comprehensive and unique dataset. I used HubMeta, a cloud-based platform, to deduplicate my search results efficiently. It uses AI algorithms to find and remove duplicates while also adding complementary information, such as abstracts, to each record.
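HubMeta does this for you, but if you want a rough first pass before uploading, a simple sketch like the following (matching on DOI where available, otherwise on a normalized title) catches the obvious duplicates. It is far less sophisticated than HubMeta's AI matching and is shown only to illustrate the idea.

```python
import pandas as pd

# Rough first-pass deduplication: use the DOI when it exists, otherwise a
# normalized title. Much cruder than HubMeta's matching, but shows the idea.
def normalize_title(title):
    return "".join(ch.lower() for ch in str(title) if ch.isalnum())

# Assumes each database export was saved as a CSV with 'title' and 'doi' columns.
frames = [pd.read_csv(f) for f in ["scopus.csv", "wos.csv", "proquest.csv"]]
records = pd.concat(frames, ignore_index=True)

records["dedup_key"] = records["doi"].fillna("").str.lower().str.strip()
no_doi = records["dedup_key"] == ""
records.loc[no_doi, "dedup_key"] = records.loc[no_doi, "title"].map(normalize_title)

deduped = records.drop_duplicates(subset="dedup_key")
print(f"{len(records)} records in, {len(deduped)} unique records out")
```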
Title/abstract screening is often labor-intensive, but using HubMeta made this process more manageable. My research assistants (RAs) could quickly access the deduplicated records, evaluate them based on inclusion/exclusion criteria, and provide their input. HubMeta’s AI feature learns from the decisions made by the RAs and ranks the remaining articles based on their relevance. This helped streamline the screening process and, in my case, enabled me to complete this step in just four days.
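HubMeta's ranking model is proprietary, but the underlying idea can be illustrated in a few lines of scikit-learn: train a simple classifier on the abstracts your RAs have already labeled, then sort the unscreened queue by predicted relevance. The abstracts and labels below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative only: rank unscreened abstracts by predicted relevance,
# using the include/exclude decisions the RAs have already made as labels.
labeled_abstracts = ["...abstract text...", "...abstract text..."]  # already screened
labels = [1, 0]                                                     # 1 = include, 0 = exclude
unscreened_abstracts = ["...abstract text...", "...abstract text..."]

vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
X_train = vectorizer.fit_transform(labeled_abstracts)
X_new = vectorizer.transform(unscreened_abstracts)

model = LogisticRegression(max_iter=1000).fit(X_train, labels)
scores = model.predict_proba(X_new)[:, 1]  # predicted probability of inclusion

# Screen the most likely includes first.
ranked = sorted(zip(scores, unscreened_abstracts), reverse=True)
```

The practical benefit is that the likely includes surface early in the queue, so the long tail of clearly irrelevant records can be screened much faster.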
Downloading full-text articles can be a significant bottleneck in the meta-analysis process. I used EndNote, a reference manager that automatically downloads PDFs for a portion of the records, and outsourced the manual download of the remaining articles on UpWork. This approach allowed me to obtain nearly 1,000 full-text PDFs within a day.
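For the articles EndNote misses, open-access copies can sometimes be fetched automatically through the Unpaywall API before resorting to manual downloads. Here is a minimal sketch; the email address and DOI are placeholders, and paywalled papers will still need manual handling.

```python
import requests

# Sketch: try to grab open-access PDFs via the Unpaywall API.
# Unpaywall requires an email address; many paywalled papers will return nothing
# and still need to be downloaded manually or through your library.
EMAIL = "you@example.com"

def download_oa_pdf(doi, out_path):
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": EMAIL})
    resp.raise_for_status()
    location = resp.json().get("best_oa_location") or {}
    pdf_url = location.get("url_for_pdf")
    if not pdf_url:
        return False  # no open-access PDF found
    pdf = requests.get(pdf_url, timeout=60)
    pdf.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(pdf.content)
    return True

download_oa_pdf("10.xxxx/example-doi", "example.pdf")  # placeholder DOI
```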
Similar to the title/abstract screening step, HubMeta was used to expedite the full-text screening process. My RAs reviewed the full-text articles and provided their input on the platform. This step took approximately a week to complete.
The final and most challenging step in the meta-analysis was data entry. To speed it up, the work was broken down into four levels, with each level becoming more specialized and difficult.
Level 1: Correlation Tables
Correlation tables report the correlations between variables and form the basis of most calculations in the meta-analysis. This task was outsourced to researchers on UpWork, with more advanced researchers double-checking the data for accuracy. HubMeta's AI-enabled image processing was used to capture the numbers directly from the correlation tables, and the researchers verified and corrected any inconsistencies.
Level 2: Moderator Variables and General Information
At this level, trained RAs gathered moderator variables and general information about each paper, such as population type, country, and industry type, using a more detailed extraction form customized for the specific research question.
Level 3: Recording Measurements
RAs recorded the measurements used in each paper, either defining a new measurement and assigning it to the relevant variable in the correlation table, or reusing a measurement that had already been recorded from previous articles.
Level 4: Organizing Measurements and Creating Constructs
The principal investigators organized the measurements and created constructs for analysis. Similar measures were grouped under the same construct; for example, different depression scales were all coded as “Depression” for meta-analysis purposes.
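To make Levels 1 and 4 concrete, here is a small illustration of how a single study's correlation entries can be flattened into pairwise records and mapped onto broader constructs. The measure names, correlations, sample size, and construct mapping are invented for the example.

```python
import pandas as pd

# Illustrative sketch of Levels 1 and 4: turn one study's correlation-table
# entries into pairwise records, then map specific measures onto constructs.
construct_map = {
    "CES-D": "Depression",
    "PHQ-9": "Depression",
    "UWES": "Work Engagement",
}

# One study's correlation entries (measure_1, measure_2, r) plus its sample size.
study_correlations = pd.DataFrame(
    {"measure_1": ["CES-D", "CES-D"], "measure_2": ["UWES", "PHQ-9"], "r": [-0.32, 0.61]}
)
study_correlations["n"] = 215
study_correlations["construct_1"] = study_correlations["measure_1"].map(construct_map)
study_correlations["construct_2"] = study_correlations["measure_2"].map(construct_map)

# Drop within-construct correlations (e.g., CES-D with PHQ-9, both "Depression"),
# which are not effect sizes for a between-construct meta-analysis.
effects = study_correlations[
    study_correlations["construct_1"] != study_correlations["construct_2"]
]
print(effects[["construct_1", "construct_2", "r", "n"]])
```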
To meet the one-week deadline, tasks were outsourced and parallelized as much as possible. Level 1 tasks were completed by UpWork researchers, while Levels 2 and 3 tasks were performed by trained RAs. Level 4 tasks were done by the principal investigators themselves. The team went through the 480 papers in less than four working days, making compromises in the final step, but ensuring that the results remained largely intact.
After data entry, the team used tools like HubMeta or R to quickly build a meta-analysis model, a step that can take less than an hour. The quality of the work can then be improved further after submission, during the next round of reviews.
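If you want to sanity-check the pooled estimates outside HubMeta or R, a bare-bones random-effects model for correlations (Fisher z transformation with a DerSimonian-Laird heterogeneity estimate) takes only a few lines of Python. The correlations and sample sizes below are made up for illustration; dedicated packages such as metafor in R offer far more options and corrections.

```python
import numpy as np

# Bare-bones random-effects meta-analysis of correlations
# (Fisher z transformation + DerSimonian-Laird estimate of tau^2).
# The r and n values are placeholders; in practice they come from Level 1 data entry.
r = np.array([0.31, 0.22, 0.40, 0.18, 0.27])   # study correlations
n = np.array([120, 85, 240, 60, 150])          # study sample sizes

z = np.arctanh(r)          # Fisher z transformation
v = 1.0 / (n - 3)          # sampling variance of z
w = 1.0 / v

# Fixed-effect estimate and heterogeneity (Q, then tau^2 via DerSimonian-Laird)
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / C)

# Random-effects weights, pooled estimate, and back-transformation to r
w_star = 1.0 / (v + tau2)
z_pooled = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
ci = np.tanh([z_pooled - 1.96 * se, z_pooled + 1.96 * se])

print(f"pooled r = {np.tanh(z_pooled):.3f}, "
      f"95% CI [{ci[0]:.3f}, {ci[1]:.3f}], tau^2 = {tau2:.4f}")
```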
When you post your research tasks on UpWork, your goals should be to hire quality researchers, ensure they understand the task completely, and have them deliver the work on time with a high level of accuracy. Here are some tips to help you achieve these objectives:
By following these tips, you can have a successful and productive experience with researchers on UpWork, leading to a more efficient and accurate data entry process for your meta-analysis.
Conducting a meta-analysis of 480 papers in just 3 weeks might seem like a daunting task, but with the right combination of innovative AI tools, like HubMeta, and efficient outsourcing through platforms like UpWork, it becomes entirely possible. By focusing on each step of the process and using the right strategies, we managed to save hundreds of hours and thousands of research dollars.