I'm planning a living systematic review on COVID-19 vaccine effectiveness that will be continuously updated. Guide me: 1) Set up infrastructure for ongoing searches and screening

Prompt
I'm planning a living systematic review on COVID-19 vaccine effectiveness that will be continuously updated. Guide me: 1) Set up infrastructure for ongoing searches and screening
Claude (Anthropic)

Infrastructure Setup for a Living Systematic Review on COVID-19 Vaccine Effectiveness

1. Search Infrastructure

A. Automated Database Searches

Set up automated alerts in:

  • PubMed/MEDLINE: Create saved searches with email alerts (daily/weekly)

  • Embase: Set up auto-alerts via Ovid or Embase.com

  • Cochrane COVID-19 Study Register: Subscribe to updates

  • WHO COVID-19 Database: Regular monitoring

  • medRxiv/bioRxiv: RSS feeds for preprints

  • ClinicalTrials.gov: Saved searches for new/updated trials


Technical implementation:
- Use IFTTT or Zapier to aggregate alerts
- Create a dedicated email folder/label
- Set up calendar reminders for manual checks
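
If the team would rather pull new records directly than rely only on e-mail alerts, the saved PubMed search can be re-run on a schedule against NCBI's E-utilities. A minimal sketch, assuming Python with the requests package; the query string and date window are illustrative and should be adapted to your own strategy:

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def fetch_new_pmids(term, mindate, maxdate):
    """Return PMIDs that entered PubMed in the given window (illustrative query)."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "edat",   # Entrez date = when the record entered PubMed
        "mindate": mindate,   # format YYYY/MM/DD
        "maxdate": maxdate,
        "retmax": 500,
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    query = ('("COVID-19 Vaccines"[MeSH Terms] OR "covid-19 vaccine"[Title/Abstract]) '
             'AND (effectiveness[Title/Abstract] OR efficacy[Title/Abstract])')
    pmids = fetch_new_pmids(query, "2024/01/01", "2024/01/07")
    print(f"{len(pmids)} new records this cycle")
```

The returned PMIDs can then be passed to efetch and exported in a format your screening tool accepts (e.g., MEDLINE/.nbib) before import and deduplication.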

B. Citation Management System

Recommended setup:

  • Covidence or DistillerSR (designed for living reviews)

  • Allows continuous import

  • Tracks updates automatically

  • Facilitates team collaboration


Alternative: Rayyan (free, good for continuous screening)

Configuration:

  • Create project with standardized search strings

  • Enable deduplication algorithms

  • Set up user permissions for team members


C. Grey Literature & Regulatory Monitoring

Establish monitoring for:

  • FDA, EMA, Health Canada, TGA (Australia) vaccine briefings

  • CDC MMWR reports

  • National immunization technical advisory groups (NITAGs)

  • Manufacturer press releases (Pfizer, Moderna, etc.)


Use:
  • Google Scholar alerts for key terms

  • RSS feeds from agency websites

  • Social media monitoring (Twitter lists of key researchers)
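
The RSS monitoring above can also be scripted so new items land in one place. A minimal sketch using the feedparser package; the feed URLs are placeholders to replace with the medRxiv and agency feeds you actually subscribe to:

```python
import feedparser

# Placeholder feed URLs; substitute the real medRxiv / agency feeds you monitor.
FEEDS = {
    "medRxiv (example)": "https://example.org/medrxiv.rss",
    "Agency reports (example)": "https://example.org/agency-reports.rss",
}

KEYWORDS = ("vaccine", "effectiveness", "sars-cov-2", "covid")

def poll_feeds():
    """Print feed entries whose titles mention any tracked keyword."""
    for name, url in FEEDS.items():
        feed = feedparser.parse(url)
        for entry in feed.entries:
            title = entry.get("title", "")
            if any(k in title.lower() for k in KEYWORDS):
                print(f"[{name}] {title} -> {entry.get('link', '')}")

if __name__ == "__main__":
    poll_feeds()
```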



2. Screening Infrastructure

A. Two-Stage Screening Process

Stage 1: Title/Abstract Screening

  • Dual independent screening for all new records

  • Use screening software with:

  • Conflict resolution workflows

  • Progress tracking

  • Inter-rater reliability calculations
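
If your screening tool does not report inter-rater reliability itself, Cohen's kappa for a batch of dual-screened records is straightforward to compute. A minimal sketch using scikit-learn; the decision lists are illustrative:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative include/exclude decisions for the same 10 records by two screeners.
screener_a = ["include", "exclude", "exclude", "include", "exclude",
              "include", "include", "exclude", "exclude", "include"]
screener_b = ["include", "exclude", "include", "include", "exclude",
              "include", "exclude", "exclude", "exclude", "include"]

kappa = cohen_kappa_score(screener_a, screener_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above ~0.6 are usually read as substantial agreement
```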


Stage 2: Full-Text Screening
  • Structured form with explicit inclusion/exclusion criteria

  • Documentation of exclusion reasons

  • Version control for screening forms


B. Machine Learning Integration (Optional but Recommended)

Tools to consider:

  • ASReview (free, open-source active learning)

  • Abstrackr

  • EPPI-Reviewer (has machine learning features)


Benefits:
  • Prioritizes relevant studies

  • Reduces screening burden over time

  • Maintains human oversight
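
The prioritization idea behind these tools can be illustrated with a generic sketch (this is not ASReview's actual API): train a simple classifier on records already screened and use its probabilities to order the remaining queue, re-training as new decisions accumulate. All texts and labels below are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder abstracts already screened (1 = included, 0 = excluded) ...
screened_texts = [
    "mRNA vaccine effectiveness against hospitalization in adults",
    "booster dose and symptomatic omicron infection cohort study",
    "in vitro neutralization assay of spike protein variants",
    "editorial on pandemic preparedness policy",
]
screened_labels = [1, 1, 0, 0]

# ... and placeholder unscreened abstracts to prioritize.
unscreened_texts = [
    "test-negative design estimating vaccine effectiveness against ICU admission",
    "commentary on vaccine hesitancy communication",
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(screened_texts)
model = LogisticRegression().fit(X_train, screened_labels)

# Rank unscreened records by predicted probability of inclusion (highest first).
scores = model.predict_proba(vectorizer.transform(unscreened_texts))[:, 1]
for text, score in sorted(zip(unscreened_texts, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {text}")
```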


C. Standardized Screening Forms

Create detailed forms including:

INCLUSION CRITERIA:
☐ Population: Any age group receiving COVID-19 vaccine
☐ Intervention: Any approved/investigational COVID-19 vaccine
☐ Comparator: Unvaccinated, placebo, or other vaccine
☐ Outcomes:
- Infection (PCR/RAT confirmed)
- Symptomatic disease
- Hospitalization
- ICU admission
- Death
- By variant (Alpha, Delta, Omicron, etc.)
☐ Study design: RCTs, cohort, case-control, test-negative design
☐ Minimum follow-up: [Define threshold]

EXCLUSION CRITERIA:
☐ Immunogenicity-only studies
☐ Commentaries, editorials without original data
☐ Case reports/series <[X] participants
☐ Non-human studies
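
Keeping these criteria in a machine-readable, version-controlled file makes amendments easy to track across update cycles. A minimal sketch of the same checklist as a Python dictionary; the field names are illustrative:

```python
# Illustrative machine-readable version of the screening criteria above;
# commit changes to version control so each update cycle has a dated snapshot.
SCREENING_CRITERIA = {
    "version": "2024-01-15",
    "inclusion": {
        "population": "any age group receiving a COVID-19 vaccine",
        "intervention": "any approved or investigational COVID-19 vaccine",
        "comparator": ["unvaccinated", "placebo", "other vaccine"],
        "outcomes": ["confirmed infection", "symptomatic disease",
                     "hospitalization", "ICU admission", "death"],
        "designs": ["RCT", "cohort", "case-control", "test-negative"],
    },
    "exclusion": [
        "immunogenicity-only studies",
        "commentaries or editorials without original data",
        "non-human studies",
    ],
}
```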


3. Team Structure & Workflow

A. Core Team Roles

Minimum team:

  • Project lead (1): Overall coordination, updates schedule

  • Information specialist (1): Search strategy management

  • Screeners (2-3): Independent screening with rotation

  • Data extractors (2): Quality assessment and extraction

  • Statistician (1): Meta-analysis updates

  • Clinical expert (1): Interpretation and context


B. Update Frequency

Decision framework:
High frequency (monthly): First 6-12 months or during:

  • Emergence of new variants

  • New vaccines approved

  • Major policy changes


Moderate frequency (quarterly): Stabilization phase

Low frequency (every six months): Mature evidence base

Triggers for ad-hoc updates:

  • Major new RCT published

Gemini (Google)

A living systematic review (LSR) on COVID-19 vaccine effectiveness is an ambitious and highly valuable undertaking, given the rapidly evolving evidence base. Setting up the infrastructure for continuous updates is paramount to its success. Here’s a comprehensive guide:


Setting Up Infrastructure for an Ongoing Living Systematic Review (LSR) on COVID-19 Vaccine Effectiveness

Phase 1: Foundation – Initial Setup & Protocol Development

Before continuous updates, you need a robust initial setup.

  • Develop a Comprehensive Protocol:

  • Register your protocol (e.g., PROSPERO). Clearly state it's an LSR.

  • Research Question(s): PICO format (e.g., "What is the effectiveness of COVID-19 vaccines against symptomatic infection, severe disease, and mortality in different populations and against different variants?").

  • Inclusion/Exclusion Criteria: Detail populations, interventions (specific vaccines/doses), comparators (unvaccinated, placebo, other vaccines), outcomes (infection, hospitalization, death, transmission, adverse events), study designs (RCTs, observational studies, test-negative designs), language, and publication dates (continuously updated).

  • Update Frequency: Define the target frequency for searches, screening, data extraction, and synthesis (e.g., weekly, bi-weekly, monthly).

  • Reporting Strategy: How will updates be published? (e.g., dedicated website, cumulative manuscript, specific journal format).

  • Team Formation & Roles:

  • Project Lead: Oversees the entire process, makes final decisions.

  • Search Strategist: Designs and manages search alerts, troubleshoots database issues.

  • Screeners: Two independent screeners per article for title/abstract and full-text.

  • Conflict Resolvers: A third person to resolve screening conflicts.

  • Data Extractors: Two independent extractors per study.

  • Methodologists/Statisticians: For quality assessment, synthesis, and statistical analysis.

  • Technology Manager: Troubleshoots software, manages data flow.

  • Key for LSR: Ensure clear handover processes and training for new team members.

  • Technology Stack Selection:

  • Reference Management: While good for storing, dedicated SR software is essential for screening.

  • Systematic Review Software (Mandatory for LSR):

  • Covidence: User-friendly, good for dual screening, deduplication, full-text retrieval, data extraction. Integrates with PubMed.

  • Rayyan QCRI: Free, intuitive, supports dual screening, deduplication, blinding, and conflict resolution. Good for initial setup.

  • DistillerSR: Highly customizable, powerful for complex reviews, excellent for data extraction, but can be more expensive/complex.

  • EPPI-Reviewer: Comprehensive, flexible, but has a steeper learning curve.

  • Choose one that supports continuous import of new records and tracks screening progress for only NEW records.



Phase 2: Ongoing Search Infrastructure

This is the backbone of your living review.

  • Develop a Highly Sensitive Search Strategy:

  • Core Terms: Combine MeSH/Emtree terms and free-text keywords for COVID-19, vaccines, effectiveness, efficacy, variants (e.g., "COVID-19," "SARS-CoV-2," "vaccine," "vaccination," "effectiveness," "efficacy," "real-world," "Omicron," "Delta").

  • Date Restriction: Crucially, design the search so you can easily restrict by last search date. This ensures you only retrieve new articles.

  • Pilot Testing: Run the initial search and check for sensitivity (do you find known key studies?) and specificity (is the volume manageable?).

  • Identify Core Databases & Grey Literature Sources:

  • Biomedical Databases (Essential for Alerts):

  • PubMed/MEDLINE: Set up "My NCBI" alerts.

  • Embase: Set up alerts (often via institution). Excellent for drug-related literature.

  • Web of Science/Scopus: Set up alerts. Good for broader coverage.

  • Preprint Servers (CRITICAL for COVID-19):

  • medRxiv/bioRxiv: Set up RSS feeds or use their search alert functions. Often have the newest evidence.

  • Challenge: Preprints are not peer-reviewed. You'll need a clear policy on their inclusion and how to update if a peer-reviewed version emerges.

  • Specialized COVID-19 Repositories:

  • LitCovid (NIH/NLM): Daily updates on COVID-19 literature. You can often set up alerts based on keywords.

  • WHO COVID-19 Global literature on coronavirus disease: Another excellent, curated resource.

  • CDC/ECDC databases: For surveillance reports, technical briefs.

  • Clinical Trial Registries:

  • ClinicalTrials.gov, WHO ICTRP: Set up alerts for relevant new trials or results postings.

  • Government/Public Health Websites:

  • CDC, WHO, UKHSA, ECDC: Regularly check their publications section for new reports on vaccine effectiveness. (Less automatable, may require manual checks).

  • Google Scholar Alerts: Can capture some grey literature and reports missed by other alerts, but often low specificity.

  • Implement Automated Search Alerts:

  • Database-Specific Alerts: Go into each database (PubMed, Embase, Web of Science, Scopus) and save your search strategy. Configure email alerts to be sent weekly or bi-weekly with new results only.

  • RSS Feeds: For preprint servers and some specialized repositories, subscribe to RSS feeds for your keywords. Use an RSS reader (e.g., Feedly) to aggregate these.

  • API Integrations (Advanced): For larger teams, consider using APIs (e.g., PubMed, Crossref) to programmatically fetch new articles. This requires coding skills but offers maximum control and automation.
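
As one concrete example of the API route, newly indexed records can be polled from the Crossref REST API, which supports filtering by index date. A minimal sketch assuming the requests package; the query terms and date are illustrative:

```python
import requests

def crossref_new_works(query, from_date, rows=100):
    """Fetch works indexed in Crossref since from_date matching an illustrative query."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={
            "query": query,
            "filter": f"from-index-date:{from_date}",  # only records indexed since this date
            "rows": rows,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    for item in crossref_new_works("COVID-19 vaccine effectiveness", "2024-01-01", rows=20):
        title = (item.get("title") or ["(no title)"])[0]
        print(item.get("DOI", ""), "-", title)
```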

  • Centralized Search Log & Import Process:

  • Search Log: Maintain a detailed log of every search run, including:

  • Database/Source

  • Date of search

  • Search string used

  • Date range covered (e.g., "since last search on YYYY-MM-DD")

  • Number of results retrieved

  • Number of results imported into SR software

  • Dedicated Folder for New Results: Create a shared drive/cloud folder (e.g., Google Drive, SharePoint) where all new search results (e.g., .nbib, .ris files) are saved by the search strategist.

  • Regular Import into SR Software: Establish a schedule (e.g., every Monday morning) for importing these new results into your chosen SR software (Covidence, Rayyan, etc.). Crucially, the software must support adding new records to an existing review.
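
The search log itself can be kept as a plain CSV so it stays readable and easy to diff under version control. A minimal sketch; the file name, fields, and example row are illustrative:

```python
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("search_log.csv")  # illustrative path on the shared drive
FIELDS = ["source", "search_date", "search_string",
          "covered_since", "n_retrieved", "n_imported"]

def log_search(row):
    """Append one search run to the shared log, writing the header on first use."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_search({
    "source": "PubMed",
    "search_date": date.today().isoformat(),
    "search_string": "covid-19 vaccine effectiveness ...",  # abbreviated placeholder
    "covered_since": "2024-01-01",
    "n_retrieved": 312,
    "n_imported": 298,
})
```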

  • Automated Deduplication:

  • Your chosen SR software (Covidence, Rayyan, DistillerSR) should have robust deduplication capabilities.

  • Run deduplication immediately after importing new records.

  • Manual checks of potential duplicates may still be necessary, especially for records from diverse sources (e.g., preprints vs. published articles).



Phase 3: Continuous Screening Workflow

This is where the team actively engages with the new literature.

  • Clear & Granular Inclusion/Exclusion Criteria:

  • Detailed Document: Have an accessible, detailed document outlining criteria, with examples. This is essential for consistency across screeners and over time.

  • Key for LSR: Clarify how to handle updates to previous studies (e.g., preprint now published, new results from an ongoing trial).

  • Scheduled Screening Sprints:

  • Regular Intervals: Based on your defined update frequency (e.g., weekly or bi-weekly), schedule dedicated "screening sprints" for your team.

  • Team Capacity: Ensure your team has sufficient capacity to handle the incoming volume. COVID-19 vaccine effectiveness literature is high volume.

  • Focused Work: Encourage screeners to focus on screening during these times to maintain momentum.

  • Screening Process within SR Software:

  • Title/Abstract Screening:

  • Two independent screeners review each new title/abstract.

  • The SR software should present only new, deduplicated records.

  • Decision: Include, Exclude, or "Maybe" (for uncertainty).

  • Conflict Resolution (Title/Abstract):

  • The SR software highlights conflicts.

  • Conflicts are resolved by a third reviewer or team discussion before proceeding.

  • Full-Text Retrieval:

  • For all "Include" and "Maybe" articles, retrieve the full text. Set up automated DOI lookup/link resolvers if possible.

  • Challenge: Many COVID-19 papers are behind paywalls. Ensure institutional access or utilize interlibrary loan services quickly.

  • Full-Text Screening:

  • Two independent screeners review the full text against the precise inclusion/exclusion criteria.

  • Record specific exclusion reasons for articles rejected at this stage. This is crucial for transparency.

  • Conflict Resolution (Full-Text):

  • Resolve conflicts, documenting decisions.

  • Calibration & Training:

  • Initial Calibration: Before starting, screen a small set of articles together to ensure consistent interpretation of criteria.

  • Ongoing Calibration: Periodically (e.g., quarterly or when criteria are updated) re-calibrate with a small sample of new articles, especially if new team members join.

  • Regular Check-ins: Hold brief team meetings after each screening sprint to discuss challenges, clarify criteria, and maintain morale.

  • Documentation of Screening Decisions:

  • The SR software automatically logs individual and final screening decisions.

  • Maintain a clear log of exclusion reasons for full-text articles. This will be part of your PRISMA flow diagram for each update cycle.



Phase 4: Supporting Infrastructure & Best Practices for LSR


  • Version Control & Audit Trail:

  • SR Software: Most SR software will track changes and screening decisions.

  • Protocol: Any amendments to the protocol must be dated, logged, and registered (e.g., PROSPERO amendment).

  • Search Strategies: Keep dated versions of all search strategies.

  • PRISMA Flow Diagram: You'll need a cumulative PRISMA diagram for the entire review, and potentially mini-diagrams for each update cycle showing new additions.

  • Data Extraction & Quality Assessment (Briefly Mention):

  • While not directly search/screening, this phase is the next step.

  • Standardized Forms: Use pre-piloted, standardized data extraction forms within your SR software (or custom forms).

  • Living Data Management: How will data be stored and updated? A relational database or highly structured spreadsheet (e.g., Google Sheets, Excel with version control) linked to your SR software is ideal.

  • Risk of Bias/Quality Assessment: Select appropriate tools (e.g., RoB 2.0 for RCTs, ROBINS-I for observational studies) and integrate into your workflow.
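
For the relational-database option mentioned above, even a lightweight SQLite file can serve as the living data store. A minimal sketch using Python's standard library; the table and column names are illustrative and should mirror your extraction form:

```python
import sqlite3

# Illustrative schema for a living extraction database; adjust columns to your own form.
conn = sqlite3.connect("lsr_extraction.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS studies (
        study_id     TEXT PRIMARY KEY,   -- e.g. first author + year
        design       TEXT,
        vaccine      TEXT,
        variant      TEXT,
        outcome      TEXT,
        ve_estimate  REAL,               -- vaccine effectiveness, %
        ci_lower     REAL,
        ci_upper     REAL,
        update_cycle TEXT                -- which review update added or revised this row
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO studies VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
    ("Example2024", "test-negative", "mRNA (any)", "Omicron",
     "hospitalization", 72.0, 65.0, 78.0, "2024-Q1"),  # placeholder values
)
conn.commit()
conn.close()
```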

  • Communication & Dissemination:

  • Dedicated Platform: Consider a website or online platform to host the LSR, showing the latest version, previous updates, and a transparent audit trail.

  • Alerts for Stakeholders: Provide an option for stakeholders (policymakers, clinicians, public) to subscribe to email alerts for new updates.

  • Publication Strategy: Decide if you will publish cumulative review updates in journals or maintain a continuously updated living document online.

  • Sustainability & Funding:

  • LSRs are resource-intensive. Plan for long-term funding, technology subscriptions, and team compensation.



Key Considerations for a Living Review on COVID-19 Vaccine Effectiveness:

  • Volume: The sheer volume of COVID-19 literature is immense. Be prepared for a high screening load. Prioritize core outcomes if volume becomes unmanageable.

  • Speed: Evidence changes rapidly. Aim for swift processing from search to synthesis.

  • Preprints: Embrace preprints but have clear rules for their inclusion, critical appraisal, and how to handle peer-reviewed versions.

  • Heterogeneity: Vaccine effectiveness varies by vaccine type, dose, population, variant, and outcome. Your extraction and synthesis need to account for this complexity.

  • Duplicate Reporting: Many studies come from the same datasets or groups. Be vigilant about identifying overlapping populations or analyses.


By meticulously setting up this infrastructure, you'll build a robust and sustainable living systematic review capable of providing timely, evidence-based insights into COVID-19 vaccine effectiveness. Good luck!

Grok (xAI)

1. Setting Up Infrastructure for Ongoing Searches and Screening

A living systematic review (LSR) on COVID-19 vaccine effectiveness requires scalable, automated, and collaborative tools to handle continuous data influx. The goal is to efficiently search for new evidence, screen it, and integrate updates without manual overload. Below, I'll outline a step-by-step setup, drawing from established methodologies like those from Cochrane or the Joanna Briggs Institute for LSRs. This focuses on practicality for a team (e.g., researchers, librarians).

Step 1: Define Search Scope and Parameters


  • Refine your PICO (Population, Intervention, Comparator, Outcome): For COVID-19 vaccines, specify e.g., Population: Adults/children; Intervention: mRNA, viral vector, etc.; Comparator: Placebo/unvaccinated; Outcome: Effectiveness against infection, hospitalization, variants (e.g., Omicron subvariants).

  • Establish update frequency: Aim for weekly or bi-weekly searches initially, scaling to monthly as the literature stabilizes. Document protocols in a living review plan (e.g., following PRISMA 2020 and Cochrane's guidance for living systematic reviews).

  • Inclusion/exclusion criteria: Pre-define (e.g., RCTs, cohort studies on VE ≥50%; exclude preprints unless peer-reviewed). Use tools like Covidence or Rayyan for shared criteria.


Step 2: Automate Literature Searches


Use databases and alerting systems to monitor new publications. Avoid manual PubMed checks—opt for automation.
  • Core Databases and Sources:

  • PubMed/MEDLINE (free, comprehensive for biomedical).

  • Embase (for international coverage; subscription needed).

  • Web of Science or Scopus (for broader indexing).

  • COVID-specific: WHO COVID-19 Research Database, medRxiv/bioRxiv for preprints (but verify before inclusion).

  • Grey literature: ClinicalTrials.gov, WHO ICTRP for trials; Google Scholar alerts for policy docs.

  • Search Automation Tools:

  • PubMed Alerts: Set up saved searches (e.g., "COVID-19 vaccine effectiveness" AND (RCT OR cohort) with date filters). Email alerts deliver new hits. Use MeSH terms like "COVID-19 Vaccines/therapeutic use" for precision.

  • EndNote Click or Zotero with RSS/Feeds: Integrate with databases for automated imports. Zotero's web translator pulls PDFs automatically.

  • Advanced Automation:

  • LitSuggest or ASReview: AI-driven tools for deduplication and relevance ranking. ASReview uses active learning to prioritize screening (free/open-source).

  • Europe PMC or Semantic Scholar APIs: Script weekly pulls using Python (e.g., via the Biopython library for PubMed) into a shared repository; see the Europe PMC sketch after this list.

  • Commercial Options: Covidence integrates searches from multiple sources with auto-deduplication (subscription ~$240/year for teams).

  • Tip: Export searches to RIS/EndNote XML format for easy import. Run pilot searches to test ~500-1000 hits/month volume.

  • Handling Updates: Tag new results by date (e.g., "post-2023"). Use version control (e.g., GitHub) for search strings to track changes.
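
For the Europe PMC route mentioned under "Advanced Automation", the public REST API can be queried directly. A minimal sketch assuming the requests package; the query string (including the date window) and result handling are illustrative:

```python
import requests

EPMC = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def europe_pmc_search(query, page_size=100):
    """Fetch the first page of Europe PMC results for an illustrative query."""
    resp = requests.get(
        EPMC,
        params={"query": query, "format": "json", "pageSize": page_size},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("resultList", {}).get("result", [])

if __name__ == "__main__":
    # Illustrative query: adapt the terms and the publication-date window to your strategy.
    q = '("COVID-19" AND vaccine AND effectiveness) AND FIRST_PDATE:[2024-01-01 TO 2024-01-31]'
    for rec in europe_pmc_search(q, page_size=25):
        print(rec.get("doi", "no-doi"), "-", rec.get("title", ""))
```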


Step 3: Implement Screening Workflow


Screening must be efficient, bias-minimized, and team-based to handle volume (expect 100-500 new records per update cycle).
  • Collaboration Platforms:

  • Covidence (Recommended): Cloud-based; automates title/abstract screening, full-text retrieval (via Unpaywall), risk-of-bias assessment (e.g., RoB 2 for RCTs). Supports blind duplicate screening by 2+ reviewers. Exports to RevMan for meta-analysis. Free for small teams up to 2 reviews.

  • Rayyan (Free Alternative): Web/app-based; AI prioritization (e.g., highlights similar abstracts). Great for crowdsourcing screening. Integrates with PubMed exports.

  • DistillerSR: Enterprise-level for large teams; tracks decisions and conflicts resolution.

  • Screening Process:

  • Level 1 (Title/Abstract): Two independent reviewers use the tool's blinding feature. Resolve conflicts via discussion or third reviewer (use tool's voting system).

  • Level 2 (Full-Text): Retrieve PDFs automatically (integrate with browser extensions like Unpaywall or Kopernio for paywalls). Assess eligibility against criteria.

  • Data Extraction: Use standardized forms in Excel/Google Sheets or integrated tools (e.g., Covidence templates for VE metrics like risk ratios, confidence intervals by variant/age group).

  • Efficiency Boosts: Train on 10% sample for calibration. Use machine learning in ASReview to reduce screening by 50-70% (it learns from your decisions).

  • Audit Trail: All tools log timestamps, decisions, and reasons for exclusion—essential for transparency.

  • Team Management: Assign roles (e.g., searcher, screener1, screener2) via shared dashboards. Use Slack/Teams integrations for notifications on new batches.


Step 4: Data Management and Integration


  • Repository: Store everything in a central, versioned system.

  • GitHub or OSF (Open Science Framework): For protocols, search logs, and datasets. Free and collaborative.

  • Google Drive/Dropbox: For raw exports and PDFs; ensure access controls.

  • Deduplication: Built into Covidence/Rayyan; otherwise, use EndNote's duplicate finder or Python scripts (e.g., with pandas; see the sketch after this list).

  • Scalability Considerations: Start small (manual for the first cycle), then automate. Budget for tools (~$500-2000/year for a team). Monitor workload: if it exceeds 20 hours/week, hire a librarian or consider outsourcing (e.g., via Evidence Prime).

  • Ethical/Compliance: Register on PROSPERO (adapt for living reviews). Comply with data privacy (e.g., GDPR for international teams) and report funding conflicts.
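
The pandas-based deduplication mentioned above can be as simple as normalizing titles and dropping repeats. A minimal sketch; the records are placeholders, and in practice you would read the RIS/CSV export from your databases:

```python
import pandas as pd

# Illustrative batch of newly imported records; replace with your actual export.
records = pd.DataFrame([
    {"title": "Vaccine effectiveness against Omicron hospitalization", "doi": "10.1000/abc"},
    {"title": "Vaccine Effectiveness Against Omicron Hospitalization.", "doi": "10.1000/abc"},
    {"title": "Booster doses and symptomatic infection", "doi": None},
])

# Normalize titles so trivial case/punctuation differences do not hide duplicates.
records["title_norm"] = (
    records["title"].str.lower().str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip()
)

# Keep the first record per normalized title; a second pass on DOI (for rows that have one)
# can catch records whose titles changed between preprint and journal versions.
deduped = records.drop_duplicates(subset="title_norm", keep="first").drop(columns="title_norm")
print(f"{len(records) - len(deduped)} duplicates removed")
print(deduped)
```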


This setup should launch your LSR infrastructure in 1-2 weeks with testing. Once running, it feeds directly into synthesis (e.g., updating meta-analyses in RevMan or R with metafor package). If you need guidance on synthesis, risk-of-bias, or dissemination (e.g., via a living website), let me know for the next steps!

Analysis

This comparison demonstrates the different approaches each AI model takes when responding to the same prompt. Here are the key differences observed:

Response Characteristics

Claude: Offers a compact, checklist-style setup covering search alerts, screening software, grey-literature monitoring, and team roles (its response ends partway through the update-frequency framework).

Gemini: Gives the longest and most exhaustive answer, organized into four phases running from protocol development through search and screening to dissemination and sustainability.

Grok: Takes a practical, tool-centered approach, naming specific software, costs, and automation options, and adding data-management and compliance considerations.

Key Takeaways

  • Each model brings unique strengths to this type of query
  • Response styles vary significantly between models
  • Consider your specific use case when choosing between these models

Try This Comparison Yourself

Want to test these models with your own prompts? Visit SNEOS.com to compare AI responses side-by-side in real-time.


This comparison was generated using the SNEOS AI Comparison Tool. Published: October 15, 2025 | Models: Claude, Gemini, Grok