Three Weeks to Code 480

Meta-analysis is a crucial research method that synthesizes the findings of multiple studies, yet it can be incredibly time-consuming and expensive. The process usually takes months or even years to complete and requires substantial resources. However, I recently completed a meta-analysis of 480 papers in just three weeks! In this blog post, I will share my process, highlighting the tools and strategies I used to expedite the research and save valuable time and funds.

Data Preparation: Breaking Down the Key Steps

The data preparation process for a meta-analysis involves several key steps: running keywords in multiple databases, removing duplicates, title/abstract screening, full-text downloads, full-text screening, data entry, and analysis. I will go through each of these steps and share how I managed to save time and resources in the process.

Step 1: Running Keywords

Running keywords in multiple search databases can be tedious and time-consuming. To make this process more efficient, I used ChatGPT, an AI language model, to generate search codes for various databases, such as Scopus, Web of Science, and ProQuest. I provided the model with my keywords categorized into different condition sets, and it generated search codes tailored to each database. To extract the search results, I either used Python code to interact with the APIs or manually downloaded the files. Alternatively, you can also outsource this task using platforms like UpWork to save time.
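To make the idea concrete, here is a minimal sketch of what "generating search codes from condition sets" can look like in Python. The keyword sets and the exact field-tag syntax below are illustrative assumptions, not the actual search strategy from my study; each database has its own query grammar, so the generated string still needs checking against that database's documentation.

```python
# Sketch: combine keyword condition sets into a database-specific search
# string. Condition sets are ANDed together; keywords within a set are ORed.
# The keywords and the "TITLE-ABS-KEY" field tag are hypothetical examples.

conditions = {
    "population": ["employee*", "worker*"],
    "outcome": ["burnout", "exhaustion"],
}

def build_query(conditions, field_tag):
    """Build a boolean search string from named condition sets."""
    groups = [
        "(" + " OR ".join(f'{field_tag}("{kw}")' for kw in kws) + ")"
        for kws in conditions.values()
    ]
    return " AND ".join(groups)

# Scopus-style query; Web of Science or ProQuest would use different tags.
print(build_query(conditions, "TITLE-ABS-KEY"))
```

Swapping the field tag (or the whole formatting function) per database is exactly the repetitive translation work that ChatGPT handled for me.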

Step 2: Deduplication

Removing duplicate results from multiple databases is essential to ensure a comprehensive and unique dataset. I used HubMeta, a cloud-based platform, to deduplicate my search results efficiently. It uses AI algorithms to find and remove duplicates while also adding complementary information, such as abstracts, to each record.
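For intuition, here is a toy version of title-based deduplication. This is not HubMeta's actual algorithm, just a sketch of the underlying idea: normalize titles and drop records whose titles are near-identical to one already kept.

```python
# Sketch: near-duplicate removal by fuzzy title matching (illustrative only).
from difflib import SequenceMatcher

def normalize(title):
    """Lowercase and strip punctuation so formatting differences don't matter."""
    return "".join(ch.lower() for ch in title if ch.isalnum() or ch == " ").strip()

def deduplicate(records, threshold=0.9):
    """Keep the first record of each group of near-identical titles."""
    kept = []
    for rec in records:
        t = normalize(rec["title"])
        if not any(
            SequenceMatcher(None, t, normalize(k["title"])).ratio() >= threshold
            for k in kept
        ):
            kept.append(rec)
    return kept

records = [
    {"title": "Burnout in Remote Workers", "source": "Scopus"},
    {"title": "Burnout in remote workers.", "source": "Web of Science"},
    {"title": "Job Satisfaction and Turnover", "source": "ProQuest"},
]
print(len(deduplicate(records)))  # prints 2: the first two titles merge
```

A production tool also compares authors, years, and DOIs, which is why I relied on HubMeta rather than rolling my own.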

Step 3: Title/Abstract Screening

Title/abstract screening is often labor-intensive, but using HubMeta made this process more manageable. My research assistants (RAs) could quickly access the deduplicated records, evaluate them based on inclusion/exclusion criteria, and provide their input. HubMeta’s AI feature learns from the decisions made by the RAs and ranks the remaining articles based on their relevance. This helped streamline the screening process and, in my case, enabled me to complete this step in just four days.
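To illustrate the relevance-ranking idea, here is a deliberately naive sketch: score unscreened titles by how much vocabulary they share with titles the RAs have already included, then screen the highest-scoring ones first. HubMeta's model is more sophisticated; the titles below are made up.

```python
# Sketch: prioritize unscreened records by similarity to included ones.
from collections import Counter

def tokens(text):
    """Crude tokenizer: lowercase words longer than three characters."""
    return [w for w in text.lower().split() if len(w) > 3]

def rank_by_relevance(included_titles, pending_titles):
    """Sort pending titles by overlap with the vocabulary of included titles."""
    vocab = Counter(w for t in included_titles for w in tokens(t))
    return sorted(
        pending_titles,
        key=lambda title: sum(vocab[w] for w in tokens(title)),
        reverse=True,
    )

included = ["Burnout and turnover intention", "Emotional exhaustion at work"]
pending = [
    "A meta-analysis of burnout antecedents",
    "Supply chain optimization methods",
]
print(rank_by_relevance(included, pending)[0])
```

Even this crude ranking pushes obviously irrelevant records to the bottom, which is why AI-assisted ordering shaves so much time off screening.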

Step 4: Downloading Full-Texts

Downloading full-text articles can be a significant bottleneck in the meta-analysis process. I used a combination of EndNote, a reference manager that automatically downloads PDFs for a portion of the database, and outsourcing the manual download of the remaining articles on UpWork. This approach allowed me to obtain nearly 1000 full-text PDFs within a day.

Step 5: Full-Text Screening

Similar to the title/abstract screening step, HubMeta was used to expedite the full-text screening process. My RAs reviewed the full-text articles and provided their input on the platform. This step took approximately a week to complete.

Step 6: Data Entry

The final and most challenging step in the meta-analysis process was data entry. To speed it up, data entry was broken down into four levels, each more specialized and difficult than the last.

Level 1: Correlation Tables

The correlation tables report the correlations between the variables and form the basis of most calculations in the meta-analysis. This task was outsourced to researchers on UpWork, with advanced researchers double-checking the data for accuracy. The AI-enabled platform HubMeta was used to capture data from correlation tables using image processing, and researchers verified and corrected any inconsistencies.
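Verification at this level is largely mechanical, so it can be partly automated. As a hedged sketch (this is not HubMeta's internal check, just an illustration), a script can flag the kinds of inconsistencies that image-based extraction produces, for researchers to then fix by hand:

```python
# Sketch: sanity checks on an extracted correlation matrix. The example
# values, including the deliberate OCR-style errors, are made up.

def validate_correlations(matrix):
    """Return a list of problems found in a square correlation matrix."""
    issues = []
    n = len(matrix)
    for i in range(n):
        if abs(matrix[i][i] - 1.0) > 1e-9:
            issues.append(f"diagonal [{i}][{i}] is {matrix[i][i]}, expected 1")
        for j in range(n):
            r = matrix[i][j]
            if not -1.0 <= r <= 1.0:
                issues.append(f"[{i}][{j}] = {r} is outside [-1, 1]")
            if j < i and abs(matrix[i][j] - matrix[j][i]) > 1e-9:
                issues.append(f"[{i}][{j}] != [{j}][{i}] (not symmetric)")
    return issues

table = [
    [1.00, 0.35, 0.12],
    [0.35, 1.00, 1.42],  # 1.42 looks like a misread of 0.42
    [0.12, 0.42, 1.00],
]
for problem in validate_correlations(table):
    print(problem)
```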

Level 2: Moderator Variables and General Information

At this level, trained RAs gathered moderator variables and general information about the papers, such as population type, country, and industry type. The RAs used a more detailed extraction form customized for the specific research question.

Level 3: Recording Measurements

RAs recorded the measurements used in each paper, either defining a new measurement and assigning it in the correlation table or using a measurement that had been used in previous articles.

Level 4: Organizing Measurements and Creating Constructs

The principal investigators organized the measurements and created constructs for analysis. Similar measures were grouped under the same construct, such as “Depression” for meta-analysis purposes.

To meet the one-week deadline, tasks were outsourced and parallelized as much as possible. Level 1 tasks were completed by UpWork researchers, Levels 2 and 3 by trained RAs, and Level 4 by the principal investigators themselves. The team went through the 480 papers in less than four working days, making some compromises in the final step while keeping the results largely intact.

After data entry, the team used tools such as HubMeta or R to build a meta-analysis model, which can take less than an hour. Any remaining rough edges can then be polished after submission, during the next round of reviews.
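For readers unfamiliar with what that model actually computes, here is a minimal sketch of a fixed-effect meta-analysis of correlations using the Fisher z transformation, the standard approach implemented by dedicated packages. The correlations and sample sizes are made up for illustration; a real analysis would also report confidence intervals and heterogeneity statistics.

```python
# Sketch: pool (r, n) pairs with inverse-variance weights in Fisher z space,
# then back-transform the weighted mean to a correlation. Illustrative only.
import math

def meta_analyze(studies):
    """studies: list of (correlation, sample_size) tuples."""
    num = den = 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z transform of r
        w = n - 3                              # inverse of var(z) = 1/(n - 3)
        num += w * z
        den += w
    z_bar = num / den                          # weighted mean in z space
    return math.tanh(z_bar)                    # back-transform to r

studies = [(0.30, 120), (0.25, 80), (0.40, 200)]
print(round(meta_analyze(studies), 3))
```

Larger studies get proportionally more weight, which is the core idea behind pooling effect sizes across papers.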

Note: Tips on Hiring Researchers on UpWork

When you post your research tasks on UpWork, your goals should be to hire quality researchers, ensure they understand the task completely, and have them deliver the work on time with a high level of accuracy. Here are some tips to help you achieve these objectives:

  1. Craft the perfect job posting: Utilize tools like ChatGPT to help you write an effective job posting based on similar successful postings. This will help you attract the right candidates for your project.
  2. Choose researchers with relevant expertise: Look for researchers with experience and skills that closely align with your specific research tasks. This increases the likelihood of high-quality work.
  3. Start with small tasks: Define smaller tasks and hire multiple researchers initially. This will allow you to evaluate their efficiency and work quality before selecting the best ones to continue with.
  4. Set a budget for each task: Opt for a fixed-price budget rather than hourly rates. This can be fairer to researchers, as those who complete the work faster with the same quality will earn more.
  5. Provide clear instructions: Create a step-by-step video tutorial and a detailed accompanying document to explain the task. Encourage questions and provide feedback to ensure they understand the task completely.
  6. Set an initial quality check milestone: Begin with a small milestone to review the quality of the researchers’ work and provide feedback. This will help them refine their understanding and improve the quality of the rest of the project.
  7. Cross-check their work: Implement a robust cross-checking system, either by having a trained research assistant perform random checks, having researchers check each other’s work, or creating a separate project to review everyone’s submissions. This helps ensure accuracy and can even enable double or triple checks for optimal results.
  8. Be prompt and fair with payments: Review and approve project payments quickly after the work has been submitted, but only after the proper checks have been done. This is fair to researchers and encourages them to maintain high-quality work.

By following these tips, you can have a successful and productive experience with researchers on UpWork, leading to a more efficient and accurate data entry process for your meta-analysis.

Conducting a meta-analysis of 480 papers in just 3 weeks might seem like a daunting task, but with the right combination of innovative AI tools, like HubMeta, and efficient outsourcing through platforms like UpWork, it becomes entirely possible. By focusing on each step of the process and using the right strategies, we managed to save hundreds of hours and thousands of research dollars.
