
Add ollama-action GitHub Actions workflow for LLM automation #359

Merged

syed-reza98 merged 3 commits into main from copilot/add-ollama-action-workflow on Mar 14, 2026

Conversation

Contributor

Copilot AI commented Mar 14, 2026

Adds a workflow using ai-action/ollama-action@v2 to run Ollama LLMs in CI — useful for summarizing PRs, generating changelogs, or other AI-assisted automation tasks.

Workflow (.github/workflows/ollama-action.yml)

  • Triggers: workflow_dispatch (manual, with configurable inputs) + push to main
  • Inputs: model, prompt, version, cache — all optional with sensible defaults (llama3.2, model caching enabled)
  • Steps: checkout → run LLM via composite action → print response → save to .github/ollama-response.txt
  • Permissions: contents: read only (least privilege — file is workspace-only, not committed back)
  • Output handling uses env vars (not direct ${{ }} in run:) to avoid script injection
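The env-var pattern referred to above can be sketched roughly as follows; the step and variable names here are illustrative assumptions based on the description, not necessarily the merged workflow verbatim:

```yaml
# Hypothetical sketch of the env-var output-handling pattern: the runner
# expands the expression into an environment variable, so the model output
# is never spliced directly into the shell script text of the run: block.
- name: Print Ollama response
  env:
    RESPONSE: ${{ steps.ollama.outputs.response }}  # expansion happens here, not inside run:
  run: |
    printf '%s\n' "$RESPONSE"
```

This keeps untrusted model output out of the script source, so shell metacharacters in the response cannot alter the commands that run.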

.gitignore

  • Added .github/ollama-response.txt to prevent accidental commits of generated output
Original prompt

This section details the original issue to be resolved.

<issue_title>Add GitHub Action workflow: ollama-action for LLM automation</issue_title>
<issue_description>## Proposal
Add an ollama-action GitHub Action workflow to the repository. This workflow will allow maintainers and contributors to run Ollama large language models (LLMs) on repo events or manually using the ai-action/ollama-action composite action.

Rationale

  • Enables automation/AI tasks directly within the CI pipeline (e.g. summarize PRs, changelogs, generate content).
  • Simple, secure, and fast setup without maintaining your own LLM infrastructure (uses latest Ollama runner on GitHub Actions).
  • The action supports model caching, customizable prompts, and Ollama version selection.

Proposed Workflow Features

  • Manual trigger (workflow_dispatch) and optional automatic run on push to main
  • Inputs for model name, prompt, version, and cache toggle
  • Prints LLM output to workflow logs and saves to .github/ollama-response.txt for downstream use

Implementation Example

name: ollama-action
on:
  workflow_dispatch:
    inputs:
      model:
        description: 'Ollama model name (e.g. llama3.2, tinyllama)'
        required: false
        default: 'llama3.2'
      prompt:
        description: 'Prompt to run'
        required: false
        default: 'Summarize recent changes in this repository.'
      version:
        description: 'Optional Ollama version to install (e.g. 0.15.0)'
        required: false
        default: ''
      cache:
        description: 'Whether to cache the model (true/false)'
        required: false
        default: 'true'
  push:
    branches:
      - main
jobs:
  run-ollama:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Ollama (ai-action/ollama-action)
        id: ollama
        uses: ai-action/ollama-action@v2
        with:
          model: ${{ github.event.inputs.model || 'llama3.2' }}
          prompt: ${{ github.event.inputs.prompt || 'Summarize recent changes in this repository.' }}
          version: ${{ github.event.inputs.version }}
          cache: ${{ github.event.inputs.cache || 'true' }}
      - name: Print Ollama response
        env:
          response: ${{ steps.ollama.outputs.response }}
        run: |
          echo "=== Ollama response ==="
          echo "$response"
      - name: Save response to file
        env:
          response: ${{ steps.ollama.outputs.response }}
        run: |
          mkdir -p .github
          printf "%s\n" "$response" > .github/ollama-response.txt

Reference: ai-action/ollama-action action files

  • action.yml: defines inputs (model, prompt, version, cache), output (response), and steps to setup Ollama, cache models, run the selected LLM, and output LLM results to workflow output.
  • README.md: provides examples for workflow usage, customization, and explains required/optional parameters.
  • Additional config: also reviewed the project configs under .github/ (issue/PR templates, Dependabot, Mergify); all standard.

See ai-action/ollama-action for upstream details and ongoing updates.

</issue_description>

<agent_instructions>Review the whole repo codebase and perform online research for ollama integrations along with authentication (if needed) and the whole repository https://github.com/ai-action/ollama-action/tree/master then start implementing the task </agent_instructions>

Comments on the Issue (you are @copilot in this section)



@vercel
vercel bot commented Mar 14, 2026

The latest updates on your projects:

Project: stormcomui · Deployment: Error · Updated: Mar 14, 2026 10:53pm (UTC)

Co-authored-by: syed-reza98 <71028588+syed-reza98@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add GitHub Action workflow: ollama-action for LLM automation" to "Add ollama-action GitHub Actions workflow for LLM automation" on Mar 14, 2026
Copilot AI requested a review from syed-reza98 March 14, 2026 22:35
@syed-reza98 syed-reza98 requested a review from Copilot March 14, 2026 22:39
@syed-reza98 syed-reza98 marked this pull request as ready for review March 14, 2026 22:39
Contributor

Copilot AI left a comment


Pull request overview

Adds a GitHub Actions workflow to run Ollama LLMs in CI (manual dispatch and on pushes to main), capturing the model response for downstream automation use.

Changes:

  • Added .github/workflows/ollama-action.yml workflow using ai-action/ollama-action@v2 with configurable inputs.
  • Persisted the LLM response to .github/ollama-response.txt and printed it to logs.
  • Ignored the generated response file via .gitignore to prevent accidental commits.

Reviewed changes

Copilot reviewed 1 out of 2 changed files in this pull request and generated 2 comments.

File Description
.gitignore Ignores the generated Ollama response output file.
.github/workflows/ollama-action.yml Introduces the Ollama CI workflow with inputs, execution, logging, and response persistence.


Comment on lines +46 to +47 (.github/workflows/ollama-action.yml):

echo "=== Ollama response ==="
echo "$response"
@syed-reza98
Collaborator

@copilot apply changes based on the comments in this thread

Co-authored-by: syed-reza98 <71028588+syed-reza98@users.noreply.github.com>
Contributor Author

Copilot AI commented Mar 14, 2026

@copilot apply changes based on the comments in this thread

Applied both review suggestions in e6a030f:

  • Added timeout-minutes: 15 at job level to prevent stuck runs from consuming runner minutes indefinitely.
  • Replaced echo with printf for robust output handling (safe against output that begins with -n/-e flags), and added sed 's/^::/\x200::/' to sanitize lines starting with ::, preventing GitHub Actions workflow-command injection via log output.
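As a rough illustration of that hardening, here is a plain-shell sketch runnable outside of Actions; the space-prefix sed rule below is an illustrative variant of the sanitization, not necessarily the exact expression from the commit:

```shell
# Illustrative sketch of the hardened log-printing step.
# ASSUMPTION: "$response" stands in for steps.ollama.outputs.response.
response='::group::injected
-n looks like an echo flag
plain line'

# printf does not interpret a leading -n/-e the way echo can, and the sed
# rule prefixes a space to any line starting with "::" so the Actions runner
# cannot parse it as a workflow command when it appears in the job log.
printf '%s\n' "$response" | sed 's/^::/ ::/'
```

Without such sanitization, a model response containing a line like `::stop-commands::` could manipulate the runner's log processing.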

@syed-reza98 syed-reza98 merged commit 7187f94 into main Mar 14, 2026
3 of 4 checks passed
@syed-reza98 syed-reza98 deleted the copilot/add-ollama-action-workflow branch March 14, 2026 22:55
@github-project-automation github-project-automation bot moved this from Backlog to Done in StormCom Mar 14, 2026

Labels: none yet
Projects: StormCom (Status: Done)
Development: successfully merging this pull request may close issue "Add GitHub Action workflow: ollama-action for LLM automation"
3 participants