📚 Media Node Tutorials

Welcome! This page provides hands-on tutorials for using the Media Node in ChainForge, tailored for Social Sciences and Humanities (SSH) researchers working with Vision Language Models (VLMs). Each tutorial demonstrates a real-world use case and includes an evaluation step. Add your own screenshots or GIFs where indicated!


1️⃣ Analyzing Historical Photographs (Upload Local Image Files)

Use Case: A digital historian wants to analyze a collection of historical photographs from their local archive to extract descriptions and identify key themes using a VLM.

Steps:

  1. Add a Media Node:
     - Click Add Node ➕ and select 📺 Media Node from the Input Data section.
     - [Screenshot: Add Media Node]

  2. Upload Local Images:
     - Click the ➕ Add button in the Media Node.
     - Use the Drag & Drop area or the file picker to upload several historical photographs (e.g., berlin_wall_1989.jpg, civil_rights_march_1963.jpg).
     - [Screenshot: Upload Local Images]

  3. Connect to a Prompt Node:
     - Add a Prompt Node.
     - Connect the Media Node’s output to the Prompt Node’s input.
     - In the Prompt Node, use a template like:
       > "Describe the main event and people in this photograph. What historical context does it represent?"
     - [Screenshot: Connect to Prompt Node]

  4. Add an Evaluation Node:
     - Add a Multi-Evaluator Node.
     - Connect the Prompt Node’s output to the Evaluator.
     - Set up evaluation criteria, e.g., "Does the description correctly identify the event and people?" and "Is the historical context accurate?" (A code-based alternative is sketched after this list.)
     - [Screenshot: Add Evaluation Node]

  5. Run the Flow:
     - Execute the flow and review the VLM’s responses and evaluation results.
     - [Screenshot: Run and Inspect Results]
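
If you would like a programmatic check alongside (or instead of) the Multi-Evaluator’s criteria, ChainForge also offers a Python Evaluator node. Below is a minimal sketch, assuming that node’s convention of defining an evaluate(response) function and reading the model’s output from response.text; the keyword list is a hypothetical example, so replace it with terms a correct description of your photographs should contain.

```python
# Minimal sketch for a Python Evaluator node (a code-based alternative to the
# Multi-Evaluator criteria above). Assumes ChainForge's convention of defining
# evaluate(response) and reading the model's text from response.text.
# EXPECTED_TERMS is a hypothetical example list -- substitute terms you expect
# in a correct description of your own photographs.

EXPECTED_TERMS = ["berlin wall", "1989", "protest", "crowd"]

def evaluate(response):
    """Return the fraction of expected terms that appear in the description."""
    text = response.text.lower()
    hits = sum(1 for term in EXPECTED_TERMS if term in text)
    return hits / len(EXPECTED_TERMS)
```

Returning a number (rather than just True/False) makes it easier to compare how different VLMs or prompt variants score when you inspect the results.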

2️⃣ Analyzing Social Media Memes (Use Remote Image URLs)

Use Case: A media studies researcher wants to analyze the messaging and sentiment of viral memes circulating on social media by providing public image URLs to a VLM.

Steps:

  1. Add a Media Node:
     - Click Add Node ➕ and select 📺 Media Node.
     - [Screenshot: Add Media Node]

  2. Add Remote Image URLs:
     - Click the ➕ Add button in the Media Node.
     - Paste public URLs of memes (e.g., https://i.imgur.com/xyz123.jpg, https://pbs.twimg.com/media/abc456.jpg). A quick way to check that the URLs still resolve is sketched after this list.
     - [Screenshot: Add Remote URLs]

  3. Connect to a Prompt Node:
     - Add a Prompt Node.
     - Connect the Media Node’s output to the Prompt Node’s input.
     - Use a prompt like:
       > "Analyze the message and sentiment of this meme. What social or political commentary does it make?"
     - [Screenshot: Connect to Prompt Node]

  4. Add an Evaluation Node:
     - Add a Multi-Evaluator Node.
     - Connect the Prompt Node’s output to the Evaluator.
     - Set up evaluation criteria, e.g., "Does the analysis capture the meme’s intended message?" and "Is the sentiment correctly identified?"
     - [Screenshot: Add Evaluation Node]

  5. Run the Flow:
     - Execute the flow and inspect the VLM’s analyses and evaluation scores.
     - [Screenshot: Run and Inspect Results]
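
Remote meme URLs go stale quickly (images get deleted or moved). Before pasting them into the Media Node, you may want to confirm that each URL still resolves to an image. The stand-alone sketch below (run outside ChainForge) uses the third-party requests package; the two URLs are the placeholder examples from step 2, so substitute your own.

```python
# Stand-alone helper to verify that remote image URLs still resolve and return
# an image content type before pasting them into the Media Node.
# Requires the third-party `requests` package (pip install requests).
import requests

urls = [
    "https://i.imgur.com/xyz123.jpg",          # placeholder examples from the
    "https://pbs.twimg.com/media/abc456.jpg",  # tutorial -- replace with real memes
]

for url in urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        content_type = resp.headers.get("Content-Type", "")
        ok = resp.status_code == 200 and content_type.startswith("image/")
        print(f"{'OK ' if ok else 'BAD'} {url} ({resp.status_code}, {content_type})")
    except requests.RequestException as exc:
        print(f"BAD {url} ({exc})")
```

Some hosts reject HEAD requests, so treat a failed check as a prompt to inspect the URL manually rather than as proof that it is broken.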

3️⃣ Coding and Analyzing Survey Responses with Images (Import from Spreadsheet)

Use Case: A sociologist is studying how participants interpret ambiguous images. Survey responses and image references are stored in a TSV file.

Steps:

  1. Prepare a TSV File:
     - Create a TSV file with columns such as image_url and participant_response.
     - Example rows (\t denotes a tab character):

           image_url\tparticipant_response
           https://example.com/images/ambiguous1.jpg\t"The image looks like a family gathering."

     - A short script for generating such a file is sketched after this list.
     - [Screenshot: Prepare TSV File]

  2. Add a Media Node:
     - Click Add Node ➕ and select 📺 Media Node.
     - [Screenshot: Add Media Node]

  3. Import from Spreadsheet:
     - Click Import data in the Media Node.
     - Select your TSV file.
     - [Screenshot: Import TSV]

  4. Connect to a Prompt Node:
     - Add a Prompt Node.
     - Connect the Media Node’s output to the Prompt Node’s input.
     - Use a prompt like:
       > "Given the participant's response: '{participant_response}', does the image support this interpretation? Explain why or why not."
     - [Screenshot: Connect to Prompt Node]

  5. Add an Evaluation Node:
     - Add a Multi-Evaluator Node.
     - Connect the Prompt Node’s output to the Evaluator.
     - Set up evaluation criteria, e.g., "Does the explanation reference both the image and the participant’s response?" and "Is the reasoning sound?"
     - [Screenshot: Add Evaluation Node]

  6. Run the Flow:
     - Execute the flow and review the VLM’s explanations and evaluation results.
     - [Screenshot: Run and Inspect Results]
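
If your survey platform exports data in another format (e.g., CSV or Excel), a short script can produce the TSV described in step 1. The stand-alone sketch below (run outside ChainForge) writes the two columns used in this tutorial with Python's standard csv module; the example row and the output filename are hypothetical.

```python
# Stand-alone sketch that writes the two-column TSV described in step 1,
# using only Python's standard library. The example row and the output
# filename are hypothetical -- fill the rows from your own survey export.
import csv

rows = [
    {
        "image_url": "https://example.com/images/ambiguous1.jpg",
        "participant_response": "The image looks like a family gathering.",
    },
    # ... one dict per participant response
]

with open("survey_responses.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["image_url", "participant_response"],
        delimiter="\t",  # tab-separated, matching the format this tutorial imports
    )
    writer.writeheader()
    writer.writerows(rows)
```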

Add your own screenshots or GIFs in place of the placeholders above to illustrate each step!