I have 5000 articles to screen for a systematic review on educational technology effectiveness. Help me: 1) Set up ASReview or Rayyan for AI-assisted screening prioritization

Prompt
I have 5000 articles to screen for a systematic review on educational technology effectiveness. Help me: 1) Set up ASReview or Rayyan for AI-assisted screening prioritization
ChatGPTOpenAI
I have 5000 articles to screen for a systematic review on educational technology effectiveness. Help me: 1) Set up ASReview or Rayyan for AI-assisted screening prioritization

Setting up ASReview (Active Learning Systematic Review) or Rayyan for AI-assisted screening prioritization involves a systematic approach. Below are the steps for both tools, so you can choose the one that best fits your needs.

Option 1: Setting up ASReview

ASReview is a tool designed to assist with systematic reviews using machine learning to prioritize studies for screening.

  • Install ASReview:

  • You can install ASReview using pip. If you have Python 3.6 or newer:

pip install asreview

  • Prepare Your Data:

  • Format: ASReview requires your articles to be in a CSV format. Include columns such as:

  • title: Title of the article

  • abstract: Abstract of the article

  • keywords: Keywords or tags (optional)

  • full_text: Link or reference to the full text (optional)

  • id: Unique identifier for each article

  • Additional columns for inclusion/exclusion that will be filled during screening.

  • Create a Project:

  • Run the ASReview GUI:

asreview gui

  • Follow the instructions to create a new project and upload your CSV file.

  • Configure the Model:

  • Select the review type and choose a machine learning model (default is usually a good start). You could adjust settings based on your knowledge of the articles, if needed.

  • Start Screening:

  • Use the prioritization feature to start the screening process. ASReview will suggest articles to screen based on model predictions. You can then approve or reject articles.

  • Iterate:

  • Continue the process of screening until you reach your desired number of included studies. The model will continue to improve its recommendations as you provide more feedback.


Option 2: Setting up Rayyan

Rayyan is a web-based tool for systematic reviews that allows collaborative screening with AI assistance.

  • Create a Rayyan Account:

  • Go to the Rayyan website and create an account.

  • Create a Review:

  • Once signed in, click on “Create a new review”.

  • Name your review and provide a brief description if necessary.

  • Upload Your Articles:

  • You can upload your articles in several formats (CSV, RIS, PubMed, etc.). Choose the format that you have.

  • Make sure to include key information such as title and abstract.

  • Configure AI Assistance:

  • After uploading, Rayyan will process your data. You will see an AI-assisted feature that prioritizes articles based on potential relevance.

  • You can set inclusion/exclusion criteria that the AI can use to assist in screening.

  • Start Screening:

  • Use the dashboard to start reviewing articles. Rayyan displays articles with high likelihood based on AI predictions first.

  • You can tag articles as included, excluded, or maybe, and you can take advantage of the blind reviewing option if working in a team.

  • Collaborate:

  • If you are working with a team, you can invite collaborators and track decisions made by each member.


Recommendations for Both Tools


  • Define Inclusion Criteria: Before you begin screening, have a clear set of inclusion and exclusion criteria based on your systematic review protocol.

  • Train the AI: For both tools, the AI becomes more effective with your feedback. The more you screen, the better it predicts the remaining articles.

  • Regularly Review Progress: Check the effectiveness of the AI recommendations by reviewing its success in predictions and adjusting if necessary.


Conclusion

Choose either ASReview or Rayyan based on your preference and needs for collaborative work. Both tools greatly enhance efficiency and accuracy when screening large collections of articles for systematic reviews. Good luck with your systematic review on educational technology effectiveness!

{DateTime.UtcNow:HH:mm}
GeminiGoogle
I have 5000 articles to screen for a systematic review on educational technology effectiveness. Help me: 1) Set up ASReview or Rayyan for AI-assisted screening prioritization

Congratulations on tackling such a large screening task! AI-assisted tools like ASReview and Rayyan can significantly streamline the process. Both use Active Learning, where the AI learns from your decisions and prioritizes articles it thinks are most likely to be relevant.

Here's a guide to setting up both, along with important considerations.


General Pre-Screening Steps (Crucial for ANY Tool)

Before you even touch ASReview or Rayyan, ensure these steps are completed:

  • Define Clear Inclusion/Exclusion Criteria: This is the most critical step. Your criteria should be explicit and consistently applied. Draft them, pilot them on a small set of articles, and refine them.

  • Combine & Deduplicate References:

  • Gather all your references (5000 articles) from various databases (e.g., Scopus, Web of Science, ERIC, PsycINFO).

  • Use a reference manager (Zotero, Mendeley, EndNote) or a dedicated deduplication tool (like a Systematic Review Deduplication tool, or even Excel/OpenRefine for advanced users) to remove duplicates before importing them into your screening tool. This is vital as AI tools won't deduplicate for you, and screening duplicates wastes time and effort.

  • Export in a Compatible Format:

  • RIS (.ris): Most commonly used and recommended for both tools as it contains rich metadata (title, abstract, authors, keywords).

  • CSV (.csv): Ensure you have columns for "Title" and "Abstract" at a minimum.

  • BibTeX (.bib): Also generally supported.



Option 1: Setting Up Rayyan for AI-Assisted Screening

Rayyan is a popular, web-based tool known for its user-friendliness and collaborative features. It's often free for academics.

Pros of Rayyan:

  • Web-based: No installation, accessible from anywhere.

  • Intuitive UI: Easy to learn and use, especially for beginners.

  • Collaboration: Excellent features for multiple reviewers, blinding, and conflict resolution.

  • Built-in AI: Automatically suggests articles based on your initial screening.

  • Keyword Highlighting: Helps identify relevant terms.


Cons of Rayyan:
  • Proprietary (though free for academics).

  • Less customizable AI model compared to ASReview.

  • Requires an internet connection.



Rayyan Setup Steps:


  • Create an Account:

  • Go to rayyan.ai.

  • Sign up for a free academic account using your institutional email.

  • Create a New Review:

  • Once logged in, click "New Review" or "Create New Review."

  • Give your review a Title (e.g., "Educational Technology Effectiveness SR").

  • Add a Description (optional, but good for context).

  • You can set it as "Public" or "Private." For most systematic reviews, "Private" is recommended.

  • Import References:

  • Click "Add References."

  • Drag and drop your deduplicated .ris, .csv, or .bib file(s) into the upload area, or browse to select them.

  • Rayyan will process the files. If it detects potential duplicates (even after your initial deduplication), it might prompt you to review them. Proceed with import.

  • Configure Screening Settings (Optional but Recommended):

  • Blinding: By default, Rayyan sets up "blinding," meaning reviewers cannot see each other's decisions until the review stage. This is standard practice in systematic reviews. You can enable/disable it in the review settings.

  • Inclusion/Exclusion Keywords: This is a powerful Rayyan feature.

  • Go to "Settings" -> "Inclusion/Exclusion"

  • Add keywords that, if present, strongly indicate inclusion or exclusion. Rayyan will highlight these in abstracts. Be careful not to over-rely on this for definitive decisions, but it's a great initial flag.

  • Start Screening and Train the AI:

  • Go to the "Screening" tab.

  • Rayyan will present articles one by one. For each article, you'll see:

  • Title

  • Abstract

  • Keywords (often highlighted by Rayyan's AI)

  • You have three primary decisions: Include, Exclude, or Maybe.

  • The AI (The "Suggest" Column): As you screen articles (especially the first 50-100), Rayyan's AI will start learning your patterns. Look for the "Suggest" column (it might appear as a little robot icon or a score).

  • Green Checkmark: Rayyan suggests inclusion.

  • Red X: Rayyan suggests exclusion.

  • Question Mark: Rayyan is uncertain.

  • Prioritization: Rayyan automatically prioritizes articles it thinks are most relevant at the top of your screening list, or you can sort by "Suggestions" to see its top picks.

  • Keep Screening: Continue screening, paying attention to the AI's suggestions. The more you screen, the smarter Rayyan's AI becomes. Aim to screen at least 10% of your total articles, or until you feel the AI is consistently making good suggestions.

  • Collaborate (If Applicable):

  • Go to "Reviewers" -> "Invite Reviewers."

  • Enter their email addresses. They will need a Rayyan account.

  • Rayyan manages dual screening and conflict resolution very well.



Option 2: Setting Up ASReview for AI-Assisted Screening

ASReview (Active learning for Systematic Reviews) is an open-source, local-first tool that offers greater flexibility and control over the AI models.

Pros of ASReview:

  • Open-Source & Free: Complete control, community-driven.

  • Local Processing: Data stays on your computer (privacy), faster processing once set up.

  • Customizable AI: You can choose different classifiers, query strategies, and feature extractors.

  • Scalable: Excellent for very large datasets.

  • Offline Access: Once installed, no internet needed for screening.


Cons of ASReview:
  • Installation Required: Needs Python, can be a bit more technical.

  • Local-only: Not inherently collaborative without external solutions (e.g., sharing project files).

  • Slightly Steeper Learning Curve: The GUI is user-friendly, but understanding the underlying AI options helps.



ASReview Setup Steps:


  • Install Python:

  • ASReview requires Python (version 3.8 or higher is recommended).

  • Download and install Python from python.org if you don't have it already.

  • Important: During installation, make sure to check "Add Python to PATH" or "Add Python 3.x to path."

  • Install ASReview:

  • Open your computer's Command Prompt (Windows) or Terminal (macOS/Linux).

  • Run the following command to install ASReview:

pip install asreview

  • For the GUI (Recommended for Ease of Use): Install the GUI template:

pip install asreview-template-gui

  • Wait for the installation to complete.

  • Prepare Your Data:

  • Ensure your deduplicated articles are in a .ris, .csv, or .bib file. .ris is generally preferred for its rich metadata.

  • Launch the ASReview GUI:

  • In your Command Prompt/Terminal, type:

asreview gui

  • This will open a new tab in your web browser (usually at http://localhost:5000/). This is your local ASReview interface.

  • Create a New Project:

  • On the ASReview GUI, click "Start new review."

  • Name: Give your project a descriptive name.

  • Description: (Optional) Add details about your review.

  • Import References:

  • Click "Upload new dataset."

  • Select your .ris (or .csv, .bib) file.

  • ASReview will process and display a summary of the imported references.

  • Configure AI Model (Crucial for Prioritization):

  • This is where ASReview shines in customization. You'll see options for:

  • Classifier: The machine learning algorithm that learns from your decisions.

  • svm (Support Vector Machine): Often a good default for text classification.

  • logistic (Logistic Regression)

  • nb (Naive Bayes)

  • Query Strategy: How the AI selects the next article to present to you.

  • max (Maximum Certainty): Prioritizes articles the AI is most confident about (either highly relevant or highly irrelevant).

  • uncertainty: Prioritizes articles the AI is least certain about. This can be good for exploring ambiguous cases.

  • random (Random sampling)

  • Feature Extractor: How the text (titles/abstracts) is converted into numerical data for the AI.

  • tfidf (TF-IDF): Common and effective for weighting word importance.

  • doc2vec

  • Balance Strategy: How the model handles imbalanced datasets (e.g., many more irrelevant than relevant articles).

  • undersampling, oversampling. Often none is fine for initial steps.

  • Recommendation for starting:

  • Classifier: svm

  • Query Strategy: uncertainty (This often works well for active learning to explore the boundaries and make the AI smarter faster).

  • Feature Extractor: tfidf

  • Initial Training/Seed Articles:

  • You'll be asked how many "initial relevant" and "initial irrelevant" articles to label. ASReview needs a few examples to start learning.

  • You can manually label these from your dataset, or ASReview can pick them randomly. For 5000 articles, starting with 5-10 relevant and 5-10 irrelevant is a good starting point if you know some examples. If not, ASReview will pick them for you, and you'll label them first.

  • Start Screening and Train the AI:

  • Click "Start Review."

  • ASReview will present articles. For each:

  • Read the title and abstract.

  • Click "Relevant" or "Irrelevant."

  • You can also mark as "Not sure" if needed, though the AI benefits most from clear decisions.

  • AI Prioritization: ASReview will continuously learn from your decisions. The list of "Unlabeled papers" will be reordered based on the AI's prediction scores. Articles the AI predicts as most likely to be relevant will appear higher in the list.

  • Continue Screening: Keep screening. The AI will become more accurate with more training data.

  • Monitor Progress:

  • ASReview provides plots showing "Number of included papers" vs. "Number of screened papers," "Efficiency," and "Recall." These help you understand how much of the relevant literature you've likely found.



Key Considerations & Best Practices for AI-Assisted Screening:


  • "Cold Start" Problem: Both tools need initial manual screening to train their AI. Expect the first 50-200 articles to be purely manual, and the AI's suggestions to be less reliable. Its performance will improve significantly as you screen more.

  • Don't Blindly Trust the AI: The AI is a prioritization tool, not a decision-maker. Always critically review the articles it suggests. It can make mistakes, especially with nuanced criteria.

  • Stopping Criteria: How do you know when to stop screening?

  • Recall: ASReview's plots can show you when the curve of relevant articles starts to flatten, suggesting you've found most of them.

  • Saturation: When you screen a large number of articles (e.g., 200-500 consecutively) and find very few or no new relevant ones.

  • Percentage: After screening a certain percentage (e.g., 50-70%) of the total, or a fixed number of irrelevant articles in a row.

  • Consistency: Maintain consistent application of your inclusion/exclusion criteria. If you have multiple screeners, calibrate thoroughly before starting and meet regularly to discuss discrepancies.

  • Documentation: Keep detailed notes on your criteria, any changes, and your screening process. This is vital for transparency and reproducibility.

  • Saving and Backup (ASReview specific): ASReview projects are saved locally. Regularly back up your project files (often in a hidden .asreview folder in your home directory or where you ran asreview gui).

  • Computational Resources (ASReview specific): For 5000 articles, ASReview should run fine on most modern computers. For much larger datasets (tens of thousands), more RAM might be beneficial.


Choose the tool that best fits your technical comfort level, collaboration needs, and desire for customization. Both can significantly reduce the workload for your 5000 articles. Good luck!

{DateTime.UtcNow:HH:mm}
GrokxAI
I have 5000 articles to screen for a systematic review on educational technology effectiveness. Help me: 1) Set up ASReview or Rayyan for AI-assisted screening prioritization

Overview of ASReview and Rayyan for AI-Assisted Screening

Both ASReview and Rayyan are excellent open-source tools for streamlining systematic review screening, especially for large volumes like your 5,000 articles. ASReview uses active learning and machine learning to prioritize potentially relevant records based on your initial labels, reducing manual effort. Rayyan offers collaborative AI features like relevance predictions and duplicate detection, with a free tier suitable for most users. I'll guide you through setup for both, focusing on AI-assisted prioritization for educational technology effectiveness (e.g., screening for studies on edtech impacts like learning outcomes).

Prerequisites for Both:

  • Export your 5,000 articles from your database (e.g., PubMed, Scopus, Web of Science) in RIS, BibTeX, EndNote XML, or CSV format. Include title, abstract, keywords, and DOI if available—abstracts are crucial for AI prioritization.

  • Ensure you have a stable internet connection; both are web-based or app-supported.

  • For large datasets, test with a subset (e.g., 500 records) first to verify imports.


Setting Up ASReview for AI-Assisted Prioritization

ASReview is ideal for solo or small-team screening with strong ML-driven prioritization. It's free, Python-based, and excels at ranking records by relevance probability after you label a few.

Step 1: Installation and Account Setup


  • Download and Install: Go to asreview.nl and download the desktop app (Windows/Mac/Linux) or use the web version via Google Colab for no-install setup.

  • For advanced users: Install via Python (pip install asreview) and run locally for privacy.

  • Create a Project: Launch ASReview, create a new project, and name it (e.g., "EdTech Effectiveness Review").

  • Import Data: Click "Import" and upload your RIS/CSV file. ASReview auto-detects columns (e.g., title, abstract). For 5,000 records, import may take 5-10 minutes; it handles up to 100,000+ efficiently.


Step 2: Configure AI-Assisted Screening


  • Select Model: Choose the default "Naive Bayes" or "LSTM" model for text classification (trains on your labels to predict relevance). For edtech, ensure abstracts are used as the primary text field.

  • Set Up Prioritization Strategy: Enable "Active Learning" mode:

  • Start by randomly sampling or querying the first 50-100 records.

  • Label them as "relevant" (include for full-text review), "irrelevant" (exclude), or "maybe" based on criteria like study design (e.g., RCTs on edtech tools) and outcomes (e.g., student performance metrics).

  • AI Prioritization: The model retrains after every 10-20 labels, ranking unseen records by predicted relevance score (0-1 probability). Focus on high-score ones first—ASReview estimates you'll screen 80-90% fewer records for 95% recall.

  • Customization for EdTech Review:

  • Define inclusion/exclusion criteria in the project notes (e.g., "Include: Empirical studies on digital tools in K-12/higher ed; Exclude: Opinions, non-English").

  • Use the built-in query builder to filter by keywords (e.g., "e-learning AND effectiveness").


Step 3: Screening Workflow and Export


  • Screen in batches: Review prioritized abstracts, label, and let AI reprioritize.

  • Track progress: ASReview shows a learning curve (e.g., precision/recall metrics) to estimate total effort—expect to label ~200-500 for your dataset.

  • Export: When done, export labeled records as RIS/CSV for full-text review or collaboration.

  • Tips: Enable cloud sync for backups. For teams, share via ASReview LAB (beta feature). Runtime: ~1-2 hours for initial setup, then 10-20 hours total screening time.


If you encounter import issues, check the ASReview documentation or GitHub for format tweaks.

Setting Up Rayyan for AI-Assisted Screening

Rayyan is user-friendly for collaborative reviews, with AI features like auto-suggesting relevance and highlighting key terms. It's web-based and free (with premium options for large teams).

Step 1: Account and Project Setup


  • Sign Up: Go to rayyan.ai, create a free account (email or Google/ institutional login). The free tier supports unlimited reviews and 1,000+ records per project.

  • Create a Review: Click "New Review," name it (e.g., "EdTech Systematic Review"), and invite collaborators if needed (up to 2 free reviewers).

  • Import Data: Use "Import References" to upload RIS, BibTeX, or EndNote XML. For 5,000 articles:

  • Drag-and-drop the file; Rayyan deduplicates automatically (crucial for multi-database searches).

  • Map fields: Ensure title/abstract are selected. Import time: 2-5 minutes for 5,000 records.


Step 2: Configure AI-Assisted Prioritization


  • Enable AI Features: In project settings, turn on "AI-Assisted Screening" (free) and "Relevance Prediction." Rayyan's AI (powered by NLP models) scores each abstract for relevance based on your labels.

  • Define Criteria: Set up screening criteria via tags or notes (e.g., PICO framework: Population=students, Intervention=edtech tools, Comparator=traditional methods, Outcomes=learning effectiveness).

  • Prioritization Workflow:

  • Start screening: Rayyan presents records in a default order; enable "Smart Order" to prioritize based on AI predictions after your first 50-100 labels.

  • Label abstracts: Mark as "Include," "Exclude," or "Maybe" with notes (e.g., "RCT on gamification—relevant"). Include/exclude reasons auto-suggest.

  • AI Kicks In: After ~10% labeled, AI sorts the queue by confidence score, surfacing likely relevants first. It also highlights phrases like "educational technology" or "effect size."

  • EdTech-Specific Tips: Use Rayyan's "Full Text" integration to pull PDFs for high-priority items. Filter by study type (e.g., via keywords like "randomized trial").


Step 3: Screening Workflow and Export


  • Screen collaboratively: Assign sections or use blind mode to reduce bias. Track inter-rater agreement via built-in stats.

  • Progress: For 5,000 records, AI can cut screening time by 50-70%; monitor via the dashboard (e.g., % screened, recall estimates).

  • Export: Download as RIS/CSV with labels for PRISMA flow diagram or full-text tools like Covidence.

  • Premium Upgrade (if needed): For >10 reviewers or advanced analytics, it's $10/user/month—worth it for teams.


Rayyan's mobile app makes on-the-go labeling easy.

Recommendations


  • Choose Based on Needs: Use ASReview if you're solo and want deep ML customization (better for pure prioritization). Pick Rayyan for team collaboration and ease (AI is more "plug-and-play").

  • Efficiency Gains: Both can reduce your workload from weeks to days by prioritizing ~10-20% of records that are 80% likely relevant.

  • Best Practices: Document your protocol (e.g., in a shared Google Doc) for reproducibility. Pilot test criteria on 100 records. If you hit limits (e.g., import errors), split your dataset into phases.

  • Resources: ASReview tutorials at docs.asreview.ai; Rayyan help at rayyan.zendesk.com. For edtech-specific guidance, check Cochrane Handbook sections on screening.


If you share more details (e.g., file format or team size), I can refine this further!
{DateTime.UtcNow:HH:mm}

Analysis

This comparison demonstrates the different approaches each AI model takes when responding to the same prompt. Here are the key differences observed:

Response Characteristics

ChatGPT: Provides a direct response with 51 sentences.

Gemini: Provides a direct response with 185 sentences.

Grok: Provides a direct response with 114 sentences.

Key Takeaways

  • Each model brings unique strengths to this type of query
  • Response styles vary significantly between models
  • Consider your specific use case when choosing between these models

Try This Comparison Yourself

Want to test these models with your own prompts? Visit SNEOS.com to compare AI responses side-by-side in real-time.


This comparison was generated using the SNEOS AI Comparison ToolPublished: October 15, 2025 | Models: ChatGPT, Gemini, Grok