Column types: url, repository_url, labels_url, comments_url, events_url, html_url, and timeline_url are URL strings; id (1.78B-2.82B), number (1-8.69k), and comments (0-323) are int64; node_id and title are strings; body is a nullable string (2-118k chars); user, assignee, closed_by, sub_issues_summary, pull_request, and reactions are dicts; labels (0-5 items) and assignees (0-2 items) are lists; state (2 values), author_association (4 values), and state_reason (4 values) are categorical strings; locked, draft, and is_pull_request are bools; created_at, updated_at, and closed_at are timestamp[s]; milestone, active_lock_reason, and performed_via_github_app are always null.

| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/2979 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2979/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2979/comments | https://api.github.com/repos/ollama/ollama/issues/2979/events | https://github.com/ollama/ollama/issues/2979 | 2,173,683,721 | I_kwDOJ0Z1Ps6Bj8gJ | 2,979 | Starcoder2 crashing ollama docker version 0.1.28 | {
"login": "tilllt",
"id": 1854364,
"node_id": "MDQ6VXNlcjE4NTQzNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1854364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilllt",
"html_url": "https://github.com/tilllt",
"followers_url": "https://api.github.com/users/tilllt/foll... | [] | closed | false | null | [] | null | 2 | 2024-03-07T11:51:04 | 2024-03-07T12:49:55 | 2024-03-07T12:49:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I noticed that the Ollama version shipped as a Docker container has been updated to 0.1.28 and thus should run the starcoder2 and gemma models - I am still not having luck running those; Ollama just crashes... am I missing something?
https://pastebin.com/ALJRfZZ5 | {
"login": "tilllt",
"id": 1854364,
"node_id": "MDQ6VXNlcjE4NTQzNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1854364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilllt",
"html_url": "https://github.com/tilllt",
"followers_url": "https://api.github.com/users/tilllt/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2979/timeline | null | completed | false |
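For context on the report above, a minimal way to reproduce the setup it describes is to pin the Docker image to 0.1.28 and pull starcoder2. This is a sketch, assuming the NVIDIA container toolkit is installed for `--gpus=all`; the `starcoder2:3b` tag is illustrative.

```
# Pin the image version the issue mentions and start the server
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.28

# Pull and run the model inside the container (tag is illustrative)
docker exec -it ollama ollama run starcoder2:3b

# If the runner crashes, the server log usually shows the reason
docker logs ollama
```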
https://api.github.com/repos/ollama/ollama/issues/8660 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8660/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8660/comments | https://api.github.com/repos/ollama/ollama/issues/8660/events | https://github.com/ollama/ollama/issues/8660 | 2,818,256,756 | I_kwDOJ0Z1Ps6n-y90 | 8,660 | GPU Memory Not Released After Exiting deepseek-r1:32b Model | {
"login": "Sebjac06",
"id": 172889704,
"node_id": "U_kgDOCk4WaA",
"avatar_url": "https://avatars.githubusercontent.com/u/172889704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebjac06",
"html_url": "https://github.com/Sebjac06",
"followers_url": "https://api.github.com/users/Sebjac06/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-29T13:41:07 | 2025-01-29T13:51:19 | 2025-01-29T13:51:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
- Ollama Version: 0.5.7
- Model: deepseek-r1:32b
- GPU: NVIDIA RTX 3090 (24GB VRAM)
- OS: Windows 11 (include build version if known)
After running the `deepseek-r1:32b` model via `ollama run deepseek-r1:32b` and exiting with `/bye` in my terminal, the GPU's dedicated memory remains fully alloc... | {
"login": "Sebjac06",
"id": 172889704,
"node_id": "U_kgDOCk4WaA",
"avatar_url": "https://avatars.githubusercontent.com/u/172889704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebjac06",
"html_url": "https://github.com/Sebjac06",
"followers_url": "https://api.github.com/users/Sebjac06/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8660/timeline | null | completed | false |
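The behavior reported above is often the keep-alive window rather than a leak: by default Ollama keeps a model resident for about five minutes after the last request (configurable via OLLAMA_KEEP_ALIVE). A minimal sketch of how to inspect and force an unload, using the model name from the report:

```
# Show which models are still loaded and their memory footprint
ollama ps

# Unload the model immediately (available in recent versions)
ollama stop deepseek-r1:32b

# Equivalent over the REST API: a request with keep_alive set to 0
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:32b", "keep_alive": 0}'
```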
https://api.github.com/repos/ollama/ollama/issues/3362 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3362/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3362/comments | https://api.github.com/repos/ollama/ollama/issues/3362/events | https://github.com/ollama/ollama/issues/3362 | 2,208,545,720 | I_kwDOJ0Z1Ps6Do7u4 | 3,362 | Report better error on windows on port conflict with winnat | {
"login": "Canman1963",
"id": 133131797,
"node_id": "U_kgDOB-9uFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/133131797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Canman1963",
"html_url": "https://github.com/Canman1963",
"followers_url": "https://api.github.com/users/Can... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-03-26T15:15:16 | 2024-04-28T19:01:09 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi DevOps
My Ollama was working fine for me until I tried to use it today; I'm not sure what has happened. The logs show this repeated crash and reload attempt in app.log:
Time=2024-03-25T12:09:31.329-05:00 level=INFO source=logging.go:45 msg="ollama app started"
time=2024-03-25T12:09:31.389-05:00 level=INFO sou... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3362/timeline | null | null | false |
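For reference, the winnat conflict named in the title occurs when Windows reserves an excluded port range that covers Ollama's default port 11434. A hedged diagnostic sequence from an elevated PowerShell prompt:

```
# Check whether 11434 falls inside a reserved range
netsh interface ipv4 show excludedportrange protocol=tcp

# Restarting winnat often releases the reservation
net stop winnat
net start winnat
```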
https://api.github.com/repos/ollama/ollama/issues/5953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5953/comments | https://api.github.com/repos/ollama/ollama/issues/5953/events | https://github.com/ollama/ollama/issues/5953 | 2,430,295,149 | I_kwDOJ0Z1Ps6Q21xt | 5,953 | Who are you? | {
"login": "t7aliang",
"id": 11693120,
"node_id": "MDQ6VXNlcjExNjkzMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/11693120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t7aliang",
"html_url": "https://github.com/t7aliang",
"followers_url": "https://api.github.com/users/t7a... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-07-25T15:23:15 | 2024-07-26T14:01:03 | 2024-07-26T14:01:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2.8 | {
"login": "t7aliang",
"id": 11693120,
"node_id": "MDQ6VXNlcjExNjkzMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/11693120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t7aliang",
"html_url": "https://github.com/t7aliang",
"followers_url": "https://api.github.com/users/t7a... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5953/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5953/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5602 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5602/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5602/comments | https://api.github.com/repos/ollama/ollama/issues/5602/events | https://github.com/ollama/ollama/issues/5602 | 2,400,911,034 | I_kwDOJ0Z1Ps6PGv66 | 5,602 | Running latest version 0.2.1 running slowly and not returning output for long text input | {
"login": "jillvillany",
"id": 42828003,
"node_id": "MDQ6VXNlcjQyODI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/42828003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jillvillany",
"html_url": "https://github.com/jillvillany",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 3 | 2024-07-10T14:19:02 | 2024-10-16T16:18:22 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am running ollama on an AWS ml.p3.2xlarge SageMaker notebook instance.
When I install the latest version, 0.2.1, the response time for a LangChain chain running an extract-names prompt on a page of text with llama3:latest is about 8 seconds, and it doesn't return any names.
However, when I ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5602/timeline | null | null | false |
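One way to quantify a slowdown like the one reported is to read the timing fields the generate endpoint returns in its final response object. A minimal sketch; the model and prompt are placeholders:

```
# stream:false returns a single JSON object that includes timing fields
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Extract the names: ...", "stream": false}' \
  | python3 -c "import json,sys; r=json.load(sys.stdin); print(r['eval_count'], 'tokens in', r['eval_duration']/1e9, 'seconds')"
```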
https://api.github.com/repos/ollama/ollama/issues/1876 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1876/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1876/comments | https://api.github.com/repos/ollama/ollama/issues/1876/events | https://github.com/ollama/ollama/issues/1876 | 2,073,129,789 | I_kwDOJ0Z1Ps57kXM9 | 1,876 | ollama list flags help | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6960960225,
"node_id": ... | open | false | null | [] | null | 5 | 2024-01-09T20:35:48 | 2024-10-26T21:58:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There is no obvious way of seeing what flags are available for ollama list
```
ollama list --help
List models
Usage:
ollama list [flags]
Aliases:
list, ls
Flags:
-h, --help help for list
```
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1876/timeline | null | null | false |
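As the help output above shows, `ollama list` currently takes no flags beyond `--help`, so any filtering or projection has to happen in the shell; a trivial example:

```
# Filter the model table by name
ollama list | grep llama

# Print only the model names (skip the header row)
ollama list | awk 'NR>1 {print $1}'
```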
https://api.github.com/repos/ollama/ollama/issues/2787 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2787/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2787/comments | https://api.github.com/repos/ollama/ollama/issues/2787/events | https://github.com/ollama/ollama/issues/2787 | 2,157,567,148 | I_kwDOJ0Z1Ps6Amdys | 2,787 | bug? - session save does not save latest messages of the chat | {
"login": "FotisK",
"id": 7896645,
"node_id": "MDQ6VXNlcjc4OTY2NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7896645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FotisK",
"html_url": "https://github.com/FotisK",
"followers_url": "https://api.github.com/users/FotisK/foll... | [] | closed | false | null | [] | null | 1 | 2024-02-27T20:43:59 | 2024-05-17T01:50:42 | 2024-05-17T01:50:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was having a very long conversation with nollama/mythomax-l2-13b:Q5_K_S, saved the session and restored it and found that the latest 100-200 lines of the discussion were missing. I haven't tried to reproduce it (I don't have lengthy chats often), but I thought I'd report it. When I get another chance, I'll test it ag... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2787/timeline | null | completed | false |
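For anyone reproducing this, the interactive commands involved are the REPL's `/save` and `/load`, which persist a session as a local model. A minimal round trip, with an illustrative session name:

```
# Inside the REPL, save the conversation before exiting
ollama run mythomax
>>> /save my-session
>>> /bye

# Later, resume from the saved state
ollama run my-session
```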
https://api.github.com/repos/ollama/ollama/issues/5307 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5307/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5307/comments | https://api.github.com/repos/ollama/ollama/issues/5307/events | https://github.com/ollama/ollama/pull/5307 | 2,376,001,387 | PR_kwDOJ0Z1Ps5zq6Yg | 5,307 | Ollama Show: Check for Projector Type | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 1 | 2024-06-26T18:22:07 | 2024-06-28T18:30:19 | 2024-06-28T18:30:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5307",
"html_url": "https://github.com/ollama/ollama/pull/5307",
"diff_url": "https://github.com/ollama/ollama/pull/5307.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5307.patch",
"merged_at": "2024-06-28T18:30:17"
} | Fixes #5289
<img width="410" alt="Screenshot 2024-06-26 at 11 21 57 AM" src="https://github.com/ollama/ollama/assets/65097070/4ae18164-e5c2-453b-91d4-de54569b8e11">
| {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5307/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5307/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7499 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7499/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7499/comments | https://api.github.com/repos/ollama/ollama/issues/7499/events | https://github.com/ollama/ollama/pull/7499 | 2,634,169,544 | PR_kwDOJ0Z1Ps6A3i20 | 7,499 | build: Make target improvements | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 59 | 2024-11-05T00:47:49 | 2025-01-18T02:06:48 | 2024-12-10T17:47:19 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7499",
"html_url": "https://github.com/ollama/ollama/pull/7499",
"diff_url": "https://github.com/ollama/ollama/pull/7499.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7499.patch",
"merged_at": "2024-12-10T17:47:19"
} | Add a few new targets and help for building locally. This also adjusts the runner lookup to favor local builds, then runners relative to the executable.
Fixes #7491
Fixes #7483
Fixes #7452
Fixes #2187
Fixes #2205
Fixes #2281
Fixes #7457
Fixes #7622
Fixes #7577
Fixes #1756
Fixes #7817
Fixes #6857
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7499/reactions",
"total_count": 17,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7499/timeline | null | null | true |
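The local-build flow this PR reworks follows the repository's development docs; as a hedged sketch (targets have changed over time, so check the Makefile of your checkout):

```
git clone https://github.com/ollama/ollama.git
cd ollama

# Build the GPU runners, then the main binary
make -j 4
go build .

# Run the locally built server; per the PR, local builds are now preferred
./ollama serve
```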
https://api.github.com/repos/ollama/ollama/issues/3528 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3528/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3528/comments | https://api.github.com/repos/ollama/ollama/issues/3528/events | https://github.com/ollama/ollama/pull/3528 | 2,229,959,636 | PR_kwDOJ0Z1Ps5r8bOC | 3,528 | Update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` on Windows to avoid compiler errors | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-04-07T21:05:46 | 2024-04-07T23:29:52 | 2024-04-07T23:29:51 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3528",
"html_url": "https://github.com/ollama/ollama/pull/3528",
"diff_url": "https://github.com/ollama/ollama/pull/3528.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3528.patch",
"merged_at": "2024-04-07T23:29:51"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3528/timeline | null | null | true |
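For context, `LLAMA_CUDA` is the llama.cpp build flag that replaced the older `LLAMA_CUBLAS`, and `HIP_PLATFORM` tells the HIP toolchain which backend to target. A hedged sketch of how they would be set in a build environment:

```
# Linux/macOS shell: CUDA build flag after llama.cpp's rename
export LLAMA_CUDA=1

# Windows shell: pin the HIP platform to AMD to avoid compiler errors
set HIP_PLATFORM=amd
```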
https://api.github.com/repos/ollama/ollama/issues/6853 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6853/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6853/comments | https://api.github.com/repos/ollama/ollama/issues/6853/events | https://github.com/ollama/ollama/issues/6853 | 2,533,058,737 | I_kwDOJ0Z1Ps6W-2ix | 6,853 | Setting temperature on any llava model makes the Ollama server hangs on REST calls | {
"login": "jluisreymejias",
"id": 16193562,
"node_id": "MDQ6VXNlcjE2MTkzNTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/16193562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jluisreymejias",
"html_url": "https://github.com/jluisreymejias",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 4 | 2024-09-18T08:23:25 | 2025-01-06T07:33:52 | 2025-01-06T07:33:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When calling llava models from a REST client, setting the temperature causes the Ollama server to hang until the process is killed.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6853/timeline | null | completed | false |
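A minimal request matching the report, with llava and an explicit temperature in `options`; the base64 image payload is elided:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "Describe this image.",
  "images": ["<base64-encoded image>"],
  "options": { "temperature": 0.2 },
  "stream": false
}'
```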
https://api.github.com/repos/ollama/ollama/issues/5160 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5160/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5160/comments | https://api.github.com/repos/ollama/ollama/issues/5160/events | https://github.com/ollama/ollama/issues/5160 | 2,363,800,007 | I_kwDOJ0Z1Ps6M5LnH | 5,160 | Add HelpingAI-9B in it | {
"login": "OE-LUCIFER",
"id": 158988478,
"node_id": "U_kgDOCXn4vg",
"avatar_url": "https://avatars.githubusercontent.com/u/158988478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OE-LUCIFER",
"html_url": "https://github.com/OE-LUCIFER",
"followers_url": "https://api.github.com/users/OE-... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-06-20T08:05:57 | 2024-06-20T21:14:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | HelpingAI-9B is an advanced language model designed for emotionally intelligent conversational interactions. This model excels in empathetic engagement, understanding user emotions, and providing supportive dialogue. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5160/timeline | null | null | false |
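Until a requested model lands in the library, the usual route is importing a GGUF build by hand. A hedged sketch; the filename and quantization are assumptions:

```
# Modelfile pointing at a locally downloaded GGUF file
cat > Modelfile <<'EOF'
FROM ./HelpingAI-9B.Q4_K_M.gguf
EOF

ollama create helpingai-9b -f Modelfile
ollama run helpingai-9b
```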
https://api.github.com/repos/ollama/ollama/issues/3438 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3438/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3438/comments | https://api.github.com/repos/ollama/ollama/issues/3438/events | https://github.com/ollama/ollama/issues/3438 | 2,218,222,188 | I_kwDOJ0Z1Ps6EN2Js | 3,438 | Bug in MODEL download directory and launching ollama service in Linux | {
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | open | false | null | [] | null | 15 | 2024-04-01T13:06:01 | 2024-07-18T09:58:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I write this post to add more information:
1 - As you mentioned : I edited `sudo systemctl edit ollama.service`

And the /media/Samsung/ollama_models is empty....

if not ollama_api_key:
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3438/timeline | null | null | false |
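The systemd override being edited in that report normally looks like the following; the path is taken from the report, and a daemon reload plus restart is required before the directory is used (the `ollama` service user also needs write access to it):

```
# sudo systemctl edit ollama.service creates an override containing:
[Service]
Environment="OLLAMA_MODELS=/media/Samsung/ollama_models"

# then apply it:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```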
https://api.github.com/repos/ollama/ollama/issues/3427 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3427/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3427/comments | https://api.github.com/repos/ollama/ollama/issues/3427/events | https://github.com/ollama/ollama/issues/3427 | 2,217,052,006 | I_kwDOJ0Z1Ps6EJYdm | 3,427 | prompt_eval_count in api is broken | {
"login": "drazdra",
"id": 133811709,
"node_id": "U_kgDOB_nN_Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133811709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drazdra",
"html_url": "https://github.com/drazdra",
"followers_url": "https://api.github.com/users/drazdra/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-03-31T15:39:03 | 2024-06-04T06:58:20 | 2024-06-04T06:58:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The prompt_eval_count parameter is absent on some calls; on other calls it returns wrong information.
1. I tried /api/chat with "stablelm2", no system prompt, prompt="hi".
In the result there is no "prompt_eval_count" field most of the time. Sometimes it's there, randomly, but rarely.
2. when ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3427/timeline | null | completed | false |
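A minimal reproduction of the report: a non-streaming /api/chat call whose final object carries the token counters. Note that when the prompt is served from cache, prompt_eval_count can legitimately be small or omitted, which can look random across repeated calls:

```
curl -s http://localhost:11434/api/chat -d '{
  "model": "stablelm2",
  "messages": [{"role": "user", "content": "hi"}],
  "stream": false
}' | grep -o '"prompt_eval_count":[0-9]*'
```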
https://api.github.com/repos/ollama/ollama/issues/8366 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8366/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8366/comments | https://api.github.com/repos/ollama/ollama/issues/8366/events | https://github.com/ollama/ollama/issues/8366 | 2,778,471,158 | I_kwDOJ0Z1Ps6lnBr2 | 8,366 | deepseek v3 | {
"login": "Morrigan-Ship",
"id": 138357319,
"node_id": "U_kgDOCD8qRw",
"avatar_url": "https://avatars.githubusercontent.com/u/138357319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Morrigan-Ship",
"html_url": "https://github.com/Morrigan-Ship",
"followers_url": "https://api.github.com/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2025-01-09T18:07:28 | 2025-01-10T22:28:46 | 2025-01-10T22:28:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8366/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4260 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4260/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4260/comments | https://api.github.com/repos/ollama/ollama/issues/4260/events | https://github.com/ollama/ollama/issues/4260 | 2,285,566,329 | I_kwDOJ0Z1Ps6IOvl5 | 4,260 | Error: could not connect to ollama app, is it running? | {
"login": "starMagic",
"id": 4728358,
"node_id": "MDQ6VXNlcjQ3MjgzNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4728358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/starMagic",
"html_url": "https://github.com/starMagic",
"followers_url": "https://api.github.com/users/st... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-05-08T13:11:05 | 2024-05-21T18:34:09 | 2024-05-21T18:34:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When trying to run the command "ollama list", the following error occurs:
server.log
2024/05/08 20:50:26 routes.go:989: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4260/timeline | null | completed | false |
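When the client prints "could not connect to ollama app", the first checks are whether anything answers on the default port and whether the server starts cleanly in the foreground; a short sketch:

```
# A healthy instance answers "Ollama is running"
curl http://localhost:11434

# Start the server in the foreground to surface startup errors directly
ollama serve
```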
https://api.github.com/repos/ollama/ollama/issues/551 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/551/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/551/comments | https://api.github.com/repos/ollama/ollama/issues/551/events | https://github.com/ollama/ollama/issues/551 | 1,901,647,151 | I_kwDOJ0Z1Ps5xWNUv | 551 | Dockerfile.cuda fails to build server | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 7 | 2023-09-18T19:58:22 | 2023-09-26T22:29:49 | 2023-09-26T22:29:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On an AWS EC2 `g4dn.2xlarge` instance with Ollama https://github.com/jmorganca/ollama/tree/c345053a8bf47d5ef8f1fe15d385108059209fba:
```none
> sudo docker buildx build . --file Dockerfile.cuda
[+] Building 57.2s (7/16) ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/551/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/5990 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5990/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5990/comments | https://api.github.com/repos/ollama/ollama/issues/5990/events | https://github.com/ollama/ollama/issues/5990 | 2,432,539,183 | I_kwDOJ0Z1Ps6Q_Zov | 5,990 | Tools and properties.type Not Supporting Arrays | {
"login": "xonlly",
"id": 4999786,
"node_id": "MDQ6VXNlcjQ5OTk3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4999786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xonlly",
"html_url": "https://github.com/xonlly",
"followers_url": "https://api.github.com/users/xonlly/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-07-26T16:06:52 | 2024-10-18T14:29:13 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Title:** Issue with `DynamicStructuredTool` and `properties.type` Not Supporting Arrays in LangchainJS
**Description:**
I am encountering an issue when using `DynamicStructuredTool` in LangchainJS. Specifically, the `type` property within `properties` does not currently support arrays. T... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5990/timeline | null | null | false |
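To make the report concrete: JSON Schema allows `"type"` to be an array such as `["string", "null"]`, and that is the shape being rejected. A sketch of a tool definition using it; the model, function name, and fields are illustrative:

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "parameters": {
        "type": "object",
        "properties": {
          "city":  { "type": "string" },
          "units": { "type": ["string", "null"] }
        },
        "required": ["city"]
      }
    }
  }]
}'
```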
https://api.github.com/repos/ollama/ollama/issues/6953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6953/comments | https://api.github.com/repos/ollama/ollama/issues/6953/events | https://github.com/ollama/ollama/issues/6953 | 2,547,721,251 | I_kwDOJ0Z1Ps6X2yQj | 6,953 | AMD ROCm Card can not use flash attention | {
"login": "superligen",
"id": 4199207,
"node_id": "MDQ6VXNlcjQxOTkyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4199207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/superligen",
"html_url": "https://github.com/superligen",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": ... | open | false | null | [] | null | 4 | 2024-09-25T11:26:27 | 2024-12-19T19:36:01 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
My card is a W7900 and the ROCm driver is 6.3. I found that the llama-cpp server started by Ollama always runs without the -fa flag.
I checked the code and found:
// only cuda (compute capability 7+) and metal support flash attention
if g.Library != "metal" && (g.Library != "c... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6953/timeline | null | null | false |
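Flash attention in Ollama is opt-in through an environment variable, but as the quoted code shows it was additionally gated on the CUDA and Metal libraries, so ROCm cards were excluded regardless of the setting. The variable itself:

```
# Request flash attention; honored only where the gate in the quoted code allows it
OLLAMA_FLASH_ATTENTION=1 ollama serve
```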
https://api.github.com/repos/ollama/ollama/issues/6670 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6670/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6670/comments | https://api.github.com/repos/ollama/ollama/issues/6670/events | https://github.com/ollama/ollama/issues/6670 | 2,509,778,404 | I_kwDOJ0Z1Ps6VmC3k | 6,670 | expose slots data through API | {
"login": "aiseei",
"id": 30615541,
"node_id": "MDQ6VXNlcjMwNjE1NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/30615541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aiseei",
"html_url": "https://github.com/aiseei",
"followers_url": "https://api.github.com/users/aiseei/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-09-06T07:58:13 | 2024-09-06T15:38:13 | 2024-09-06T15:38:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
Can the information that can be seen in the logs be exposed through a /slots API per server/port? We need this to manage queuing in our load balancer. This has already been exposed by llama.cpp: https://github.com/ggerganov/llama.cpp/tree/master/examples/server#get-slots-returns-the-current-slots-processing-stat...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6670/timeline | null | completed | false |
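The llama.cpp endpoint the request points to returns per-slot processing state. Against a standalone llama.cpp server it is queried as below (8080 is llama.cpp's default port); Ollama does not expose it, which is what this issue asked for:

```
# llama.cpp server slot status (not available through Ollama)
curl http://localhost:8080/slots
```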
https://api.github.com/repos/ollama/ollama/issues/4248 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4248/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4248/comments | https://api.github.com/repos/ollama/ollama/issues/4248/events | https://github.com/ollama/ollama/issues/4248 | 2,284,620,426 | I_kwDOJ0Z1Ps6ILIqK | 4,248 | error loading model architecture: unknown model architecture: 'qwen2moe' | {
"login": "li904775857",
"id": 43633294,
"node_id": "MDQ6VXNlcjQzNjMzMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/43633294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li904775857",
"html_url": "https://github.com/li904775857",
"followers_url": "https://api.github.com/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-05-08T03:50:18 | 2024-07-25T17:43:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Qwen1.5-MoE-A2.7B-Chat was converted with convert-hf-to-gguf.py following the documented process. After 4-bit quantization, an Ollama Modelfile was created, but the model is not supported when loading. What is the cause of this?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4248/timeline | null | null | false |
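The flow described in the report, reconstructed as a hedged sketch; paths and the quantization type are assumptions, and the failure occurred at the final load because that Ollama build's bundled llama.cpp predated qwen2moe support:

```
# Convert the HF checkpoint to GGUF using llama.cpp tooling
python convert-hf-to-gguf.py ./Qwen1.5-MoE-A2.7B-Chat \
  --outtype f16 --outfile qwen-moe-f16.gguf

# 4-bit quantization (the binary name varies across llama.cpp versions)
./quantize qwen-moe-f16.gguf qwen-moe-q4_k_m.gguf Q4_K_M

# Import into Ollama; this is the step that reports the unknown architecture
echo 'FROM ./qwen-moe-q4_k_m.gguf' > Modelfile
ollama create qwen-moe -f Modelfile
```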
https://api.github.com/repos/ollama/ollama/issues/5188 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5188/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5188/comments | https://api.github.com/repos/ollama/ollama/issues/5188/events | https://github.com/ollama/ollama/pull/5188 | 2,364,778,595 | PR_kwDOJ0Z1Ps5zF3gJ | 5,188 | fix: skip os.removeAll() if PID does not exist | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [] | closed | false | null | [] | null | 0 | 2024-06-20T15:54:26 | 2024-06-20T17:40:59 | 2024-06-20T17:40:59 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5188",
"html_url": "https://github.com/ollama/ollama/pull/5188",
"diff_url": "https://github.com/ollama/ollama/pull/5188.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5188.patch",
"merged_at": "2024-06-20T17:40:59"
Previously, this deleted all directories in $TMPDIR starting with "ollama". Added a "continue" to skip the directory removal if a PID doesn't exist. We do this to prevent accidentally deleting directories in $TMPDIR that share the "ollama" name but weren't created by us for runner processes.
resolves: https://github.com/ollama/ollama/i... | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5188/timeline | null | null | true |
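The real change is in Go, but the cleanup idea reads naturally as a shell analogue; a hedged sketch in which the pid-file name and layout are assumptions:

```
# Only remove an ollama temp dir when it records a PID and that process is gone
for dir in "${TMPDIR:-/tmp}"/ollama*; do
  pid=$(cat "$dir/ollama.pid" 2>/dev/null) || continue  # no PID file: not ours, skip
  if kill -0 "$pid" 2>/dev/null; then
    continue  # process still alive: skip removal
  fi
  rm -rf "$dir"
done
```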
https://api.github.com/repos/ollama/ollama/issues/4204 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4204/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4204/comments | https://api.github.com/repos/ollama/ollama/issues/4204/events | https://github.com/ollama/ollama/issues/4204 | 2,281,198,575 | I_kwDOJ0Z1Ps6H-FPv | 4,204 | Support pull from harbor registry in proxy mode and push to harbor | {
"login": "ptempier",
"id": 6312537,
"node_id": "MDQ6VXNlcjYzMTI1Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6312537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ptempier",
"html_url": "https://github.com/ptempier",
"followers_url": "https://api.github.com/users/ptemp... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-05-06T15:49:57 | 2024-12-19T06:27:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Not sure why it's not working; maybe I'm doing something wrong.
From another ticket I understand it is supposed to work with an OCI registry.
What I tried:
ollama pull habor-server//ollama.com/library/llama3:text
Error: pull model manifest: 400
ollama pull habor-server/ollama.com/llama3:text
Error: pull model manifest: 4... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4204/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4204/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1223 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1223/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1223/comments | https://api.github.com/repos/ollama/ollama/issues/1223/events | https://github.com/ollama/ollama/pull/1223 | 2,004,804,614 | PR_kwDOJ0Z1Ps5gDJXg | 1,223 | Make alt+backspace delete word | {
"login": "kejcao",
"id": 106453563,
"node_id": "U_kgDOBlhaOw",
"avatar_url": "https://avatars.githubusercontent.com/u/106453563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kejcao",
"html_url": "https://github.com/kejcao",
"followers_url": "https://api.github.com/users/kejcao/follower... | [] | closed | false | null | [] | null | 0 | 2023-11-21T17:29:44 | 2023-11-21T20:26:47 | 2023-11-21T20:26:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1223",
"html_url": "https://github.com/ollama/ollama/pull/1223",
"diff_url": "https://github.com/ollama/ollama/pull/1223.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1223.patch",
"merged_at": "2023-11-21T20:26:47"
} | In GNU Readline you can press alt+backspace to delete word. I'm used to this behavior and so it's jarring not to be able to do it. This commit adds the feature. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1223/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3007 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3007/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3007/comments | https://api.github.com/repos/ollama/ollama/issues/3007/events | https://github.com/ollama/ollama/issues/3007 | 2,176,498,516 | I_kwDOJ0Z1Ps6BurtU | 3,007 | Search on ollama.com/library is missing lots of models | {
"login": "maxtheman",
"id": 2172753,
"node_id": "MDQ6VXNlcjIxNzI3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2172753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxtheman",
"html_url": "https://github.com/maxtheman",
"followers_url": "https://api.github.com/users/ma... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw... | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2024-03-08T17:46:15 | 2024-03-11T22:18:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Current behavior:
Using @ehartford as an example since he's a prolific ollama model contributor:
https://ollama.com/search?q=ehartford&p=1
Shows his models:
<img width="1276" alt="Screenshot 2024-03-08 at 9 45 33 AM" src="https://github.com/ollama/ollama/assets/2172753/08b9dc80-5d94-4b86-82dd-37c0dddac326">... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3007/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3227 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3227/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3227/comments | https://api.github.com/repos/ollama/ollama/issues/3227/events | https://github.com/ollama/ollama/issues/3227 | 2,192,660,065 | I_kwDOJ0Z1Ps6CsVZh | 3,227 | ollama/ollama Docker image: committed modifications aren't saved | {
"login": "nicolasduminil",
"id": 1037978,
"node_id": "MDQ6VXNlcjEwMzc5Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1037978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicolasduminil",
"html_url": "https://github.com/nicolasduminil",
"followers_url": "https://api.gith... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-03-18T16:14:49 | 2024-03-19T13:46:15 | 2024-03-19T08:48:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm using the Docker image `ollama/ollama:latest`. I'm running the image and, in the newly created container, I'm pulling `llama2`. Once the pull operation has finished, I check its success using the `ollama list` command.
Now, I commit the modification, tag the newly modified image, and push i...
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3227/timeline | null | not_planned | false |
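The likely mechanism behind the report: `docker commit` captures the container's filesystem but not volume contents, and the model blobs live under `/root/.ollama`, which is typically volume-mounted. A sketch of baking a model into an image instead; the image tag is illustrative, and if the base image declares that path as a volume even this will not survive a commit:

```
# Start WITHOUT mounting a volume over /root/.ollama, pull, then commit
docker run -d --name ollama-base ollama/ollama
docker exec ollama-base ollama pull llama2
docker commit ollama-base my-registry/ollama-with-llama2:latest
```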
https://api.github.com/repos/ollama/ollama/issues/3293 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3293/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3293/comments | https://api.github.com/repos/ollama/ollama/issues/3293/events | https://github.com/ollama/ollama/issues/3293 | 2,202,706,972 | I_kwDOJ0Z1Ps6DSqQc | 3,293 | ollama run in national user name | {
"login": "hgabor47",
"id": 1212585,
"node_id": "MDQ6VXNlcjEyMTI1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1212585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgabor47",
"html_url": "https://github.com/hgabor47",
"followers_url": "https://api.github.com/users/hgabo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-03-22T15:01:10 | 2024-05-04T22:03:44 | 2024-05-04T22:03:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

My username has international characters like á, and Ollama does not handle it.
### What did you expect to see?
RUN
### Steps to reproduce
1. Create a Windows user with international characters like: ...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3293/timeline | null | completed | false |
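Pending a fix, the usual workaround for non-ASCII profile paths on Windows was to point Ollama's writable directories at an ASCII-only location; a hedged sketch from PowerShell, with an illustrative path:

```
# Keep model storage on an ASCII-only path (persisted for future sessions)
setx OLLAMA_MODELS "C:\ollama\models"
```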
https://api.github.com/repos/ollama/ollama/issues/6946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6946/comments | https://api.github.com/repos/ollama/ollama/issues/6946/events | https://github.com/ollama/ollama/issues/6946 | 2,546,759,749 | I_kwDOJ0Z1Ps6XzHhF | 6,946 | llama runner process has terminated: exit status 0xc0000005 | {
"login": "viosay",
"id": 16093380,
"node_id": "MDQ6VXNlcjE2MDkzMzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/16093380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viosay",
"html_url": "https://github.com/viosay",
"followers_url": "https://api.github.com/users/viosay/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA... | open | false | null | [] | null | 5 | 2024-09-25T02:30:59 | 2024-11-02T17:12:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It's again the https://github.com/ollama/ollama/issues/6011 issue.
**The issue is with an embedding call using a model converted with convert_hf_to_gguf.py.**
litellm.llms.ollama.OllamaError: {"error":"llama runner process has terminated: exit status 0xc0000005"}
```
INFO [wmain] syst... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6946/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3955 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3955/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3955/comments | https://api.github.com/repos/ollama/ollama/issues/3955/events | https://github.com/ollama/ollama/pull/3955 | 2,266,430,367 | PR_kwDOJ0Z1Ps5t4K3P | 3,955 | return code `499` when user cancels request while a model is loading | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-04-26T20:03:32 | 2024-04-26T21:38:30 | 2024-04-26T21:38:29 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3955",
"html_url": "https://github.com/ollama/ollama/pull/3955",
"diff_url": "https://github.com/ollama/ollama/pull/3955.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3955.patch",
"merged_at": "2024-04-26T21:38:29"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3955/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5963 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5963/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5963/comments | https://api.github.com/repos/ollama/ollama/issues/5963/events | https://github.com/ollama/ollama/pull/5963 | 2,431,037,605 | PR_kwDOJ0Z1Ps52hIpP | 5,963 | Revert "llm(llama): pass rope factors" | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-25T21:53:31 | 2024-07-25T22:24:57 | 2024-07-25T22:24:55 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5963",
"html_url": "https://github.com/ollama/ollama/pull/5963",
"diff_url": "https://github.com/ollama/ollama/pull/5963.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5963.patch",
"merged_at": "2024-07-25T22:24:55"
} | Reverts ollama/ollama#5924 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5963/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6172 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6172/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6172/comments | https://api.github.com/repos/ollama/ollama/issues/6172/events | https://github.com/ollama/ollama/issues/6172 | 2,447,790,062 | I_kwDOJ0Z1Ps6R5k_u | 6,172 | .git file is missing | {
"login": "Haritha-Maturi",
"id": 100990846,
"node_id": "U_kgDOBgT_fg",
"avatar_url": "https://avatars.githubusercontent.com/u/100990846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Haritha-Maturi",
"html_url": "https://github.com/Haritha-Maturi",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-08-05T07:12:11 | 2024-08-05T08:14:46 | 2024-08-05T08:14:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As per the Dockerfile present in the repo, there needs to be a file named .git in the repo, but it is missing.

### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "Haritha-Maturi",
"id": 100990846,
"node_id": "U_kgDOBgT_fg",
"avatar_url": "https://avatars.githubusercontent.com/u/100990846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Haritha-Maturi",
"html_url": "https://github.com/Haritha-Maturi",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6172/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3656 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3656/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3656/comments | https://api.github.com/repos/ollama/ollama/issues/3656/events | https://github.com/ollama/ollama/issues/3656 | 2,244,166,564 | I_kwDOJ0Z1Ps6Fw0Ok | 3,656 | error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch | {
"login": "dpblnt",
"id": 13944122,
"node_id": "MDQ6VXNlcjEzOTQ0MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13944122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dpblnt",
"html_url": "https://github.com/dpblnt",
"followers_url": "https://api.github.com/users/dpblnt/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-04-15T16:48:36 | 2024-04-16T06:41:45 | 2024-04-15T19:06:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
gmake[3]: Leaving directory '/mnt/storage/tmp/ollama/ollama/llm/build/linux/x86_64/cpu'
[ 8%] Built target build_info
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/immintrin.h:109,
from /mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-impl.h:93,
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3656/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5582 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5582/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5582/comments | https://api.github.com/repos/ollama/ollama/issues/5582/events | https://github.com/ollama/ollama/pull/5582 | 2,399,160,110 | PR_kwDOJ0Z1Ps504gzi | 5,582 | Remove nested runner payloads from linux | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-07-09T20:29:57 | 2024-07-11T15:43:00 | 2024-07-11T15:43:00 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5582",
"html_url": "https://github.com/ollama/ollama/pull/5582",
"diff_url": "https://github.com/ollama/ollama/pull/5582.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5582.patch",
"merged_at": null
This adjusts Linux to follow the same model we use for Windows, with a discrete archive (zip/tgz) to carry the primary executable, subprocess runners, and dependent libraries.
Darwin retains the payload model where the Go binary is fully self-contained.
Marking as draft since it still needs some more testing, and CI will n... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5582/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6942 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6942/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6942/comments | https://api.github.com/repos/ollama/ollama/issues/6942/events | https://github.com/ollama/ollama/issues/6942 | 2,546,451,482 | I_kwDOJ0Z1Ps6Xx8Qa | 6,942 | Ollama bricks chromium based apps on mac | {
"login": "skakwy",
"id": 36933487,
"node_id": "MDQ6VXNlcjM2OTMzNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/36933487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skakwy",
"html_url": "https://github.com/skakwy",
"followers_url": "https://api.github.com/users/skakwy/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-09-24T21:46:30 | 2024-09-30T14:06:53 | 2024-09-25T09:25:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a weird issue where chromium-based apps stop working due to an ERR_ADDRESS_INVALID error. At first, I thought it was some kind of problem with chromium and tried out a few different things; however, nothing worked until I stopped ollama. Since I got ollama, that weird error started to ... | {
"login": "skakwy",
"id": 36933487,
"node_id": "MDQ6VXNlcjM2OTMzNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/36933487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skakwy",
"html_url": "https://github.com/skakwy",
"followers_url": "https://api.github.com/users/skakwy/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6942/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4003 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4003/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4003/comments | https://api.github.com/repos/ollama/ollama/issues/4003/events | https://github.com/ollama/ollama/issues/4003 | 2,267,566,677 | I_kwDOJ0Z1Ps6HKFJV | 4,003 | Ollama.com - Pull Statistics can be easily fooled | {
"login": "electricalgorithm",
"id": 27111270,
"node_id": "MDQ6VXNlcjI3MTExMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/27111270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/electricalgorithm",
"html_url": "https://github.com/electricalgorithm",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 3 | 2024-04-28T13:36:34 | 2024-05-09T21:20:50 | 2024-05-09T21:20:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama.com model statistics include a pull count. The pull count statistic gives users a sense of a model's "popularity", but it appears to be easy to inflate.
**Code to Reproduce**
By using the following Python code snippet, you can increase the amount of pulls a hundred ti... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4003/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4003/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1791 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1791/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1791/comments | https://api.github.com/repos/ollama/ollama/issues/1791/events | https://github.com/ollama/ollama/pull/1791 | 2,066,388,566 | PR_kwDOJ0Z1Ps5jQyk1 | 1,791 | update Dockerfile.build | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-04T21:55:46 | 2024-01-05T03:13:45 | 2024-01-05T03:13:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1791",
"html_url": "https://github.com/ollama/ollama/pull/1791",
"diff_url": "https://github.com/ollama/ollama/pull/1791.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1791.patch",
"merged_at": "2024-01-05T03:13:44"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1791/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4311 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4311/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4311/comments | https://api.github.com/repos/ollama/ollama/issues/4311/events | https://github.com/ollama/ollama/issues/4311 | 2,289,603,215 | I_kwDOJ0Z1Ps6IeJKP | 4,311 | Monetary Support / Donations | {
"login": "dylanbstorey",
"id": 6005970,
"node_id": "MDQ6VXNlcjYwMDU5NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6005970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dylanbstorey",
"html_url": "https://github.com/dylanbstorey",
"followers_url": "https://api.github.com... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-10T12:03:43 | 2024-05-11T02:24:12 | 2024-05-11T02:24:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How do I buy you a cup of coffee ? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4311/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5255 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5255/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5255/comments | https://api.github.com/repos/ollama/ollama/issues/5255/events | https://github.com/ollama/ollama/issues/5255 | 2,370,118,290 | I_kwDOJ0Z1Ps6NRSKS | 5,255 | Cannot run model imported from safetensors: byte not found in vocab | {
"login": "peay",
"id": 7261177,
"node_id": "MDQ6VXNlcjcyNjExNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7261177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peay",
"html_url": "https://github.com/peay",
"followers_url": "https://api.github.com/users/peay/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-06-24T12:24:56 | 2024-08-01T21:16:32 | 2024-08-01T21:16:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been trying to import Hugging Face safetensors models but am getting the following error when trying to use the model with `run`. This happens both with and without quantization.
For example, trying to reproduce the `tinyllama` model from the library:
```sh
git clone https://huggingfac... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5255/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6238 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6238/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6238/comments | https://api.github.com/repos/ollama/ollama/issues/6238/events | https://github.com/ollama/ollama/issues/6238 | 2,453,965,054 | I_kwDOJ0Z1Ps6SRIj- | 6,238 | Ollama server running out of memory when it didn't in previous version | {
"login": "MxtAppz",
"id": 121626118,
"node_id": "U_kgDOBz_eBg",
"avatar_url": "https://avatars.githubusercontent.com/u/121626118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MxtAppz",
"html_url": "https://github.com/MxtAppz",
"followers_url": "https://api.github.com/users/MxtAppz/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-08-07T17:17:58 | 2024-08-13T05:45:57 | 2024-08-13T05:45:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
I'm trying to run Llama 3.1 8b in Ollama 0.3.4 on a laptop (it has 8 GB RAM and 4 CPU cores; I'm running it on CPU as my GPU is integrated and not compatible). Maybe this sounds crazy, but it worked fine on Ollama 0.3.3, and just after updating to 0.3.4 and running it (I also tried updati... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6238/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7508 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7508/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7508/comments | https://api.github.com/repos/ollama/ollama/issues/7508/events | https://github.com/ollama/ollama/issues/7508 | 2,635,214,570 | I_kwDOJ0Z1Ps6dEi7q | 7,508 | Manual ollama update doesn't work | {
"login": "ExposedCat",
"id": 44642024,
"node_id": "MDQ6VXNlcjQ0NjQyMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44642024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ExposedCat",
"html_url": "https://github.com/ExposedCat",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVw... | closed | false | null | [] | null | 2 | 2024-11-05T11:38:29 | 2024-11-05T16:31:53 | 2024-11-05T16:31:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
According to the docs, after downloading a release, this is the last step to update ollama manually:
```bash
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
```
However, the ollama version remains unchanged even after an `ollama` service restart.
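A quick way to check where the mismatch comes from — a diagnostic sketch with illustrative paths (a stale binary earlier on the PATH can shadow the freshly extracted one in /usr/bin; note also that the *-rocm.tgz bundle may only carry the ROCm libraries, so the base ollama-linux-amd64.tgz may need extracting too):

```bash
which -a ollama                                          # every ollama on the PATH, in order
ollama -v                                                # version of the first one found
ls -l /usr/bin/ollama /usr/local/bin/ollama 2>/dev/null  # compare timestamps
```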
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama vers... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7508/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/885 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/885/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/885/comments | https://api.github.com/repos/ollama/ollama/issues/885/events | https://github.com/ollama/ollama/issues/885 | 1,957,895,148 | I_kwDOJ0Z1Ps50sxvs | 885 | Add Parameter Environment="OLLAMA_HOST=127.0.0.1:11434" to the ollama.service file | {
"login": "byteconcepts",
"id": 33394779,
"node_id": "MDQ6VXNlcjMzMzk0Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/33394779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/byteconcepts",
"html_url": "https://github.com/byteconcepts",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 0 | 2023-10-23T19:37:55 | 2023-10-24T23:02:36 | 2023-10-24T23:02:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For those who prefer to use ollama primarily via it's API it would be nice if the ollama.service file would already contain the line...
`Environment="OLLAMA_HOST=127.0.0.1:11434"`
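In the meantime, a minimal sketch of how to set this via a systemd override (assuming the stock unit name `ollama.service`):

```bash
sudo systemctl edit ollama.service
# in the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=127.0.0.1:11434"
sudo systemctl restart ollama
```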
Additionally, it would be nice if the info that the service's interface IP and port may be changed in this file were added to ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/885/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/885/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7086 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7086/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7086/comments | https://api.github.com/repos/ollama/ollama/issues/7086/events | https://github.com/ollama/ollama/pull/7086 | 2,563,131,754 | PR_kwDOJ0Z1Ps59c822 | 7,086 | doc: Adding docs on how to compile ollama to run on Intel discrete GPU platform | {
"login": "xiangyang-95",
"id": 18331729,
"node_id": "MDQ6VXNlcjE4MzMxNzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiangyang-95",
"html_url": "https://github.com/xiangyang-95",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 3 | 2024-10-03T05:04:52 | 2024-11-09T12:33:24 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7086",
"html_url": "https://github.com/ollama/ollama/pull/7086",
"diff_url": "https://github.com/ollama/ollama/pull/7086.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7086.patch",
"merged_at": null
} | - Adding the steps to compile Ollama to run on Intel(R) discrete GPU platform
- Adding the discrete GPU that have been verified in the GPU docs | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7086/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8648 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8648/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8648/comments | https://api.github.com/repos/ollama/ollama/issues/8648/events | https://github.com/ollama/ollama/issues/8648 | 2,817,169,473 | I_kwDOJ0Z1Ps6n6phB | 8,648 | ollama installer should ask which drive the user wants to install to | {
"login": "VikramNagwal",
"id": 123088024,
"node_id": "U_kgDOB1YsmA",
"avatar_url": "https://avatars.githubusercontent.com/u/123088024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VikramNagwal",
"html_url": "https://github.com/VikramNagwal",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2025-01-29T03:40:28 | 2025-01-29T17:20:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, Ollama Desktop is being installed on the C drive. However, if users prefer not to have it stored there, the system should offer an option to choose a different installation location during the setup process.
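As a possible stopgap — assuming the installer is Inno Setup-based, a standard switch of this shape may already work (switch name and path are illustrative):

```
OllamaSetup.exe /DIR="D:\Ollama"
```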
**Feature Request:**
The Ollama Desktop installation wizard should prompt users to choose their pr... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8648/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8219 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8219/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8219/comments | https://api.github.com/repos/ollama/ollama/issues/8219/events | https://github.com/ollama/ollama/issues/8219 | 2,756,036,629 | I_kwDOJ0Z1Ps6kRcgV | 8,219 | I built an ollama app for Android | {
"login": "echoo-app",
"id": 192385499,
"node_id": "U_kgDOC3eR2w",
"avatar_url": "https://avatars.githubusercontent.com/u/192385499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echoo-app",
"html_url": "https://github.com/echoo-app",
"followers_url": "https://api.github.com/users/echoo-... | [] | closed | false | null | [] | null | 3 | 2024-12-23T13:04:38 | 2024-12-29T19:09:05 | 2024-12-29T19:09:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The source code of the app is here
https://github.com/echoo-app/echoo
<img width="472" alt="settings" src="https://github.com/user-attachments/assets/2d420674-baa7-4644-b420-a5d44b3c3019" />
<img width="472" alt="home" src="https://github.com/user-attachments/assets/1e26dacf-6baa-4089-af82-c2cb86d4a8be" />... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8219/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8219/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1241 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1241/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1241/comments | https://api.github.com/repos/ollama/ollama/issues/1241/events | https://github.com/ollama/ollama/issues/1241 | 2,006,389,410 | I_kwDOJ0Z1Ps53lxKi | 1,241 | Multi-line prompting from CLI issue - not waiting for closing """ | {
"login": "ken-vat",
"id": 40846144,
"node_id": "MDQ6VXNlcjQwODQ2MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/40846144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ken-vat",
"html_url": "https://github.com/ken-vat",
"followers_url": "https://api.github.com/users/ken-va... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-11-22T13:50:10 | 2024-01-18T00:09:27 | 2024-01-18T00:09:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | THe engine started to spew out code before ending the multi-line with closing """. as an example -
>>> """
... can you give python code to use this API using the python requests package/library
... curl -X POST \
... --url "localhost:5000/v1/..." \
... --header "Content-Type: application/json" \
... ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1241/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1767 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1767/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1767/comments | https://api.github.com/repos/ollama/ollama/issues/1767/events | https://github.com/ollama/ollama/pull/1767 | 2,064,427,520 | PR_kwDOJ0Z1Ps5jKNmU | 1,767 | Update README.md - Terminal Integration - ShellOracle | {
"login": "djcopley",
"id": 4100965,
"node_id": "MDQ6VXNlcjQxMDA5NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4100965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djcopley",
"html_url": "https://github.com/djcopley",
"followers_url": "https://api.github.com/users/djcop... | [] | closed | false | null | [] | null | 2 | 2024-01-03T17:51:14 | 2024-02-20T03:18:06 | 2024-02-20T03:18:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1767",
"html_url": "https://github.com/ollama/ollama/pull/1767",
"diff_url": "https://github.com/ollama/ollama/pull/1767.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1767.patch",
"merged_at": "2024-02-20T03:18:05"
} | ShellOracle is a new ZSH Line Editor widget that uses Ollama for intelligent shell command generation! Ollama rocks!
 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1767/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1960 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1960/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1960/comments | https://api.github.com/repos/ollama/ollama/issues/1960/events | https://github.com/ollama/ollama/issues/1960 | 2,079,514,896 | I_kwDOJ0Z1Ps578uEQ | 1,960 | Feature Request : new flag of --benchmark | {
"login": "vincecate",
"id": 37512606,
"node_id": "MDQ6VXNlcjM3NTEyNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/37512606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vincecate",
"html_url": "https://github.com/vincecate",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 1 | 2024-01-12T19:06:13 | 2024-04-01T09:29:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
We are making a website for performance results using different hardware running ollama models at http://LLMPerformance.ai
It would be really helpful if ollama had a "--benchmark" flag which made it output everything that "--verbose" does,
but also added the following info:
CPU:
Memory:
GPU:
VRAM:
LLM Model... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1960/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1960/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4022 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4022/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4022/comments | https://api.github.com/repos/ollama/ollama/issues/4022/events | https://github.com/ollama/ollama/issues/4022 | 2,268,527,278 | I_kwDOJ0Z1Ps6HNvqu | 4,022 | cannot run moondream in Ollama (ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221225477 ") | {
"login": "prithvi151080",
"id": 157370999,
"node_id": "U_kgDOCWFKdw",
"avatar_url": "https://avatars.githubusercontent.com/u/157370999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prithvi151080",
"html_url": "https://github.com/prithvi151080",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-04-29T09:20:50 | 2024-05-04T16:13:15 | 2024-04-29T11:10:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have downloaded the moondream model from the official ollama site (https://ollama.com/library/moondream), but while running the model in ollama I get this error: ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221225477 "
 | {
"login": "erasmus74",
"id": 7828606,
"node_id": "MDQ6VXNlcjc4Mjg2MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7828606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erasmus74",
"html_url": "https://github.com/erasmus74",
"followers_url": "https://api.github.com/users/er... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2024-04-18T04:21:26 | 2024-04-23T17:42:48 | 2024-04-23T17:42:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to run my ollama:rocm docker image (pulled 4/16/24), and it offloads to the Nvidia M40 and the Ryzen 7900X CPU. I see full Nvidia VRAM usage, and the remaining layers offload to my CPU RAM.
However, I also have my 7900 XTX AMD card in there, and when I'm not passing the "--gpus ... | {
"login": "erasmus74",
"id": 7828606,
"node_id": "MDQ6VXNlcjc4Mjg2MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7828606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erasmus74",
"html_url": "https://github.com/erasmus74",
"followers_url": "https://api.github.com/users/er... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3723/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5651 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5651/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5651/comments | https://api.github.com/repos/ollama/ollama/issues/5651/events | https://github.com/ollama/ollama/issues/5651 | 2,405,465,712 | I_kwDOJ0Z1Ps6PYH5w | 5,651 | 2nd prompt never completes | {
"login": "Konuralpkilinc",
"id": 91570726,
"node_id": "U_kgDOBXVCJg",
"avatar_url": "https://avatars.githubusercontent.com/u/91570726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Konuralpkilinc",
"html_url": "https://github.com/Konuralpkilinc",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-07-12T12:31:14 | 2024-07-16T13:04:54 | 2024-07-16T13:04:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Whenever I try to give a second prompt on any GGUF model, ollama fails. Here are the logs:
time=2024-07-12T15:47:23.505Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/home/udemirezen/.ollama/models/blobs/sha256-c0dd304d761e8e05d082cc... | {
"login": "Konuralpkilinc",
"id": 91570726,
"node_id": "U_kgDOBXVCJg",
"avatar_url": "https://avatars.githubusercontent.com/u/91570726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Konuralpkilinc",
"html_url": "https://github.com/Konuralpkilinc",
"followers_url": "https://api.github.com... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5651/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3769 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3769/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3769/comments | https://api.github.com/repos/ollama/ollama/issues/3769/events | https://github.com/ollama/ollama/issues/3769 | 2,254,323,953 | I_kwDOJ0Z1Ps6GXkDx | 3,769 | An existing connection was forcibly closed by the remote host. Could you help me? | {
"login": "risingnew",
"id": 128674607,
"node_id": "U_kgDOB6trLw",
"avatar_url": "https://avatars.githubusercontent.com/u/128674607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/risingnew",
"html_url": "https://github.com/risingnew",
"followers_url": "https://api.github.com/users/rising... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 22 | 2024-04-20T02:07:16 | 2024-07-29T08:39:35 | 2024-05-02T00:24:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
PS C:\Users\Administrator\AppData\Local\Ollama> ollama run llama3
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=1AKxIvoajv-NPGYukzWJcA&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1713578711": read tcp 192.168.12... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3769/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1813 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1813/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1813/comments | https://api.github.com/repos/ollama/ollama/issues/1813/events | https://github.com/ollama/ollama/issues/1813 | 2,067,859,251 | I_kwDOJ0Z1Ps57QQcz | 1,813 | How to run Ollama only on a dedicated GPU? (Instead of all GPUs) | {
"login": "sthufnagl",
"id": 1492014,
"node_id": "MDQ6VXNlcjE0OTIwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1492014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sthufnagl",
"html_url": "https://github.com/sthufnagl",
"followers_url": "https://api.github.com/users/st... | [
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 37 | 2024-01-05T18:35:28 | 2024-11-21T19:33:14 | 2024-03-24T18:15:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I have 3x 3090 GPUs and I want to run each Ollama instance only on a dedicated GPU. The reason for this: to have 3 Ollama instances (with different ports) for use with Autogen.
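For illustration, this is the kind of setup I'm after — a minimal sketch, assuming the runtime honors `CUDA_VISIBLE_DEVICES` (ports are illustrative):

```bash
CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=127.0.0.1:11434 ollama serve &
CUDA_VISIBLE_DEVICES=1 OLLAMA_HOST=127.0.0.1:11435 ollama serve &
CUDA_VISIBLE_DEVICES=2 OLLAMA_HOST=127.0.0.1:11436 ollama serve &
```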
I also tried the "Docker Ollama" without luck.
Or is there another solution?
Let me know...
Thanks in advance
Steve | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1813/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1813/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3519 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3519/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3519/comments | https://api.github.com/repos/ollama/ollama/issues/3519/events | https://github.com/ollama/ollama/pull/3519 | 2,229,541,406 | PR_kwDOJ0Z1Ps5r7F9r | 3,519 | fix: close files in the CreateHandler func | {
"login": "testwill",
"id": 8717479,
"node_id": "MDQ6VXNlcjg3MTc0Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8717479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testwill",
"html_url": "https://github.com/testwill",
"followers_url": "https://api.github.com/users/testw... | [] | closed | false | null | [] | null | 0 | 2024-04-07T03:27:39 | 2024-05-09T07:25:09 | 2024-05-09T07:25:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3519",
"html_url": "https://github.com/ollama/ollama/pull/3519",
"diff_url": "https://github.com/ollama/ollama/pull/3519.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3519.patch",
"merged_at": null
} | null | {
"login": "testwill",
"id": 8717479,
"node_id": "MDQ6VXNlcjg3MTc0Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8717479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testwill",
"html_url": "https://github.com/testwill",
"followers_url": "https://api.github.com/users/testw... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3519/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7266 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7266/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7266/comments | https://api.github.com/repos/ollama/ollama/issues/7266/events | https://github.com/ollama/ollama/issues/7266 | 2,598,721,607 | I_kwDOJ0Z1Ps6a5VhH | 7,266 | Windows ARM64 fails when loading model, error code 0xc000001d | {
"login": "mikechambers84",
"id": 904313,
"node_id": "MDQ6VXNlcjkwNDMxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/904313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikechambers84",
"html_url": "https://github.com/mikechambers84",
"followers_url": "https://api.github... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-10-19T04:07:04 | 2024-10-25T03:59:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I installed the latest Ollama for Windows (ARM64 build) on my 2023 Windows Dev Kit, which has an 8-core ARM processor, a Snapdragon 8cx Gen 3. It's running Windows 11 Pro.
I can pull models, but when I go to run them, I get an error. It doesn't matter what model I run; I've tried several. H...
"url": "https://api.github.com/repos/ollama/ollama/issues/7266/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7266/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8082 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8082/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8082/comments | https://api.github.com/repos/ollama/ollama/issues/8082/events | https://github.com/ollama/ollama/pull/8082 | 2,737,564,400 | PR_kwDOJ0Z1Ps6FGxNy | 8,082 | Docs: Add /api/version endpoint to API documentation | {
"login": "anxkhn",
"id": 83116240,
"node_id": "MDQ6VXNlcjgzMTE2MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/83116240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anxkhn",
"html_url": "https://github.com/anxkhn",
"followers_url": "https://api.github.com/users/anxkhn/fo... | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 0 | 2024-12-13T06:44:22 | 2024-12-29T19:33:44 | 2024-12-29T19:33:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8082",
"html_url": "https://github.com/ollama/ollama/pull/8082",
"diff_url": "https://github.com/ollama/ollama/pull/8082.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8082.patch",
"merged_at": "2024-12-29T19:33:44"
} | This PR adds documentation for the `/api/version` endpoint to the Ollama API documentation. This endpoint allows clients to retrieve the Ollama server version, which is useful for ensuring compatibility with features that are dependent on specific server versions.
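For illustration, a request sketch against a default local server (the version value in the response is just an example):

```sh
curl http://localhost:11434/api/version
# {"version":"0.5.4"}
```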
The `/api/version` endpoint was added in a previous ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8082/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/992 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/992/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/992/comments | https://api.github.com/repos/ollama/ollama/issues/992/events | https://github.com/ollama/ollama/pull/992 | 1,977,056,120 | PR_kwDOJ0Z1Ps5elQcL | 992 | update langchainjs doc | {
"login": "aashish2057",
"id": 42164334,
"node_id": "MDQ6VXNlcjQyMTY0MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/42164334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aashish2057",
"html_url": "https://github.com/aashish2057",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2023-11-03T23:47:11 | 2023-11-09T13:08:31 | 2023-11-09T13:08:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/992",
"html_url": "https://github.com/ollama/ollama/pull/992",
"diff_url": "https://github.com/ollama/ollama/pull/992.diff",
"patch_url": "https://github.com/ollama/ollama/pull/992.patch",
"merged_at": "2023-11-09T13:08:31"
} | Updates docs/tutorials/langchainjs.md from issue #539
Adds the missing await in line 36 and adds instructions to install cheerio.
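For readers skimming the diff, the shape of the change is roughly `const docs = await loader.load();` on the loader call, plus a setup step (a sketch, not the exact patch):

```sh
npm install cheerio
```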
| {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/992/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1032 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1032/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1032/comments | https://api.github.com/repos/ollama/ollama/issues/1032/events | https://github.com/ollama/ollama/pull/1032 | 1,981,597,237 | PR_kwDOJ0Z1Ps5e0dAn | 1,032 | Update configuration instructions in README | {
"login": "joake",
"id": 11403993,
"node_id": "MDQ6VXNlcjExNDAzOTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11403993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joake",
"html_url": "https://github.com/joake",
"followers_url": "https://api.github.com/users/joake/follow... | [] | closed | false | null | [] | null | 1 | 2023-11-07T15:13:30 | 2023-11-15T16:33:39 | 2023-11-15T16:33:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1032",
"html_url": "https://github.com/ollama/ollama/pull/1032",
"diff_url": "https://github.com/ollama/ollama/pull/1032.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1032.patch",
"merged_at": null
} | Added in instructions on how to specify model file location, as it isn't mentioned in the docs. This should probably be handled by a `ollama config modelpath` function instead. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1032/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2730 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2730/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2730/comments | https://api.github.com/repos/ollama/ollama/issues/2730/events | https://github.com/ollama/ollama/issues/2730 | 2,152,306,215 | I_kwDOJ0Z1Ps6ASZYn | 2,730 | Langchain + Chainlit integration issue | {
"login": "Michelklingler",
"id": 63208430,
"node_id": "MDQ6VXNlcjYzMjA4NDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/63208430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Michelklingler",
"html_url": "https://github.com/Michelklingler",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 2 | 2024-02-24T13:14:18 | 2024-03-12T04:48:11 | 2024-03-12T04:48:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi!
I'm using Ollama on a local server with an RTX A6000 Ada, running Mixtral 8x7B.
I run Ollama locally and expose an API endpoint for multiple users to connect and use the LLM in a chat powered by Chainlit + Langchain.
There are 2 issues I want to solve:
1 - Ollama is serving, but the model seems to unload from the GPU... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2730/timeline | null | completed | false |
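For the first problem in the report above (the model dropping out of VRAM between requests), the API exposes a `keep_alive` field on generate/chat requests; a minimal sketch, with the host and model name as illustrative assumptions:

```py
import requests

# keep_alive controls how long the model stays resident after the request:
# a duration string such as "60m", 0 to unload immediately, or -1 to keep
# it loaded indefinitely.
requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mixtral:8x7b", "prompt": "warm-up", "keep_alive": -1},
)
```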
https://api.github.com/repos/ollama/ollama/issues/2391 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2391/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2391/comments | https://api.github.com/repos/ollama/ollama/issues/2391/events | https://github.com/ollama/ollama/issues/2391 | 2,123,479,904 | I_kwDOJ0Z1Ps5-kbtg | 2,391 | Ollama outputs endless stream of random words | {
"login": "ncarolan",
"id": 47750607,
"node_id": "MDQ6VXNlcjQ3NzUwNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/47750607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncarolan",
"html_url": "https://github.com/ncarolan",
"followers_url": "https://api.github.com/users/nca... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-02-07T16:59:40 | 2024-02-23T06:33:30 | 2024-02-07T22:14:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running a model with any prompt, the output is a constant stream of random characters and words from various languages. The nonsensical output will continue until ollama is terminated. An example prompt and output is included below:
```
$ ollama run llama2
>>> hi
alseularoured阳negneg вер VALUESalling阳 statem... | {
"login": "ncarolan",
"id": 47750607,
"node_id": "MDQ6VXNlcjQ3NzUwNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/47750607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncarolan",
"html_url": "https://github.com/ncarolan",
"followers_url": "https://api.github.com/users/nca... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2391/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3834 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3834/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3834/comments | https://api.github.com/repos/ollama/ollama/issues/3834/events | https://github.com/ollama/ollama/pull/3834 | 2,257,621,848 | PR_kwDOJ0Z1Ps5taLzO | 3,834 | Report errors on server lookup instead of path lookup failure | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-22T23:23:35 | 2024-04-24T17:50:51 | 2024-04-24T17:50:48 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3834",
"html_url": "https://github.com/ollama/ollama/pull/3834",
"diff_url": "https://github.com/ollama/ollama/pull/3834.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3834.patch",
"merged_at": "2024-04-24T17:50:48"
} | This should help identify more details on the failure mode for #3738 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3834/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1139 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1139/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1139/comments | https://api.github.com/repos/ollama/ollama/issues/1139/events | https://github.com/ollama/ollama/issues/1139 | 1,995,042,241 | I_kwDOJ0Z1Ps526e3B | 1,139 | Error while loading Nous-Capybara-34B | {
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 2 | 2023-11-15T15:48:08 | 2024-03-11T18:36:03 | 2024-03-11T18:36:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried to run https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF
using this modelfile
```
FROM ./nous-capybara-34b.Q3_K_S.gguf
TEMPLATE """USER: {{ .Prompt }} ASSISTANT:"""
PARAMETER num_ctx 200000
PARAMETER stop "USER"
PARAMETER stop "ASSISTANT"
```
I got this error
```
Error: llama runner process ha... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1139/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1600 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1600/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1600/comments | https://api.github.com/repos/ollama/ollama/issues/1600/events | https://github.com/ollama/ollama/issues/1600 | 2,048,028,242 | I_kwDOJ0Z1Ps56Em5S | 1,600 | Is there any option to unload a model from memory? | {
"login": "DanielMazurkiewicz",
"id": 2885673,
"node_id": "MDQ6VXNlcjI4ODU2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2885673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielMazurkiewicz",
"html_url": "https://github.com/DanielMazurkiewicz",
"followers_url": "http... | [] | closed | false | null | [] | null | 22 | 2023-12-19T06:45:11 | 2024-11-18T02:52:12 | 2024-01-28T22:34:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As in the title. I want to unload model, is there any option for it? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1600/timeline | null | completed | false |
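Since this issue was filed, the API gained a `keep_alive` parameter, and the documented way to unload a model immediately is a request with `keep_alive: 0`; a minimal sketch (model name illustrative):

```py
import requests

# A generate request with no prompt and keep_alive set to 0 asks the
# server to unload the model right away.
requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "keep_alive": 0},
)
```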
https://api.github.com/repos/ollama/ollama/issues/6261 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6261/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6261/comments | https://api.github.com/repos/ollama/ollama/issues/6261/events | https://github.com/ollama/ollama/issues/6261 | 2,456,527,800 | I_kwDOJ0Z1Ps6Sa6O4 | 6,261 | Offload a model command | {
"login": "stavsap",
"id": 4201054,
"node_id": "MDQ6VXNlcjQyMDEwNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4201054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stavsap",
"html_url": "https://github.com/stavsap",
"followers_url": "https://api.github.com/users/stavsap/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2024-08-08T19:59:34 | 2024-09-09T22:23:08 | 2024-09-09T22:23:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can we have an offload a model cli/api command to remove a model from memory/vram? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6261/timeline | null | completed | false |
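Complementing the unload sketch above, `/api/ps` reports what is currently in memory, and recent CLI releases pair `ollama ps` with an `ollama stop <model>` command. A minimal sketch of polling the endpoint; the host is an assumption:

```py
import json
import urllib.request

# GET /api/ps lists the models currently loaded into memory.
with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    loaded = json.load(resp)

for m in loaded.get("models", []):
    print(m["name"], m.get("expires_at"))
```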
https://api.github.com/repos/ollama/ollama/issues/4860 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4860/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4860/comments | https://api.github.com/repos/ollama/ollama/issues/4860/events | https://github.com/ollama/ollama/pull/4860 | 2,338,467,407 | PR_kwDOJ0Z1Ps5xsV2E | 4,860 | Update requirements.txt | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | [] | closed | false | null | [] | null | 1 | 2024-06-06T14:55:08 | 2024-06-09T20:41:04 | 2024-06-09T20:23:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4860",
"html_url": "https://github.com/ollama/ollama/pull/4860",
"diff_url": "https://github.com/ollama/ollama/pull/4860.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4860.patch",
"merged_at": null
} | - chromadb==0.5.0 fixes ingest.py issue "Cannot submit more than 5,461 embeddings at once"
- python-docx==1.1.2 fixes ingest.py issue "ModuleNotFoundError: No module named 'docx'"
- including langchain-huggingface, langchain-community and langchain-core prepares for modules being deprecated. Also, be aware of issue... | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4860/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2199 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2199/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2199/comments | https://api.github.com/repos/ollama/ollama/issues/2199/events | https://github.com/ollama/ollama/issues/2199 | 2,101,584,144 | I_kwDOJ0Z1Ps59Q6EQ | 2,199 | Go API client: make base URL and http client public fields | {
"login": "emidoots",
"id": 3173176,
"node_id": "MDQ6VXNlcjMxNzMxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3173176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emidoots",
"html_url": "https://github.com/emidoots",
"followers_url": "https://api.github.com/users/emido... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-01-26T04:36:41 | 2024-05-17T01:16:56 | 2024-05-17T01:16:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be nice if these were public fields:
https://github.com/ollama/ollama/blob/main/api/client.go#L23-L24
`ClientFromEnvironment` is good, but there are cases where supplying one's own HTTP client, transport, etc. may be quite useful. It's strange not to be able to supply these in a client library.
Just... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2199/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1949 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1949/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1949/comments | https://api.github.com/repos/ollama/ollama/issues/1949/events | https://github.com/ollama/ollama/issues/1949 | 2,078,718,320 | I_kwDOJ0Z1Ps575rlw | 1,949 | bad generation on multi-GPU setup | {
"login": "jerzydziewierz",
"id": 1606347,
"node_id": "MDQ6VXNlcjE2MDYzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1606347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerzydziewierz",
"html_url": "https://github.com/jerzydziewierz",
"followers_url": "https://api.gith... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | null | [] | null | 9 | 2024-01-12T12:14:39 | 2024-05-17T00:58:46 | 2024-05-17T00:57:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using `vast.ai` and image `nvidia/cuda:12.3.1-devel-ubuntu22.04`
and 4x RTX 3090 on an AMD EPYC 7302P 16-Core Processor,
trying any "small model" (I have not tried large models yet),
I get either an outright crash or a bad generation like the following,
and I quote:
```
############################
```
scree... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1949/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1949/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4454 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4454/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4454/comments | https://api.github.com/repos/ollama/ollama/issues/4454/events | https://github.com/ollama/ollama/issues/4454 | 2,298,206,546 | I_kwDOJ0Z1Ps6I-9lS | 4,454 | `OLLAMA_MODELS` environment variable is not respected | {
"login": "JLCarveth",
"id": 23156861,
"node_id": "MDQ6VXNlcjIzMTU2ODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/23156861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JLCarveth",
"html_url": "https://github.com/JLCarveth",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-05-15T15:15:45 | 2025-01-26T06:16:40 | 2024-05-15T15:23:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have followed the steps [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored) to change where Ollama stores the downloaded models.
I make sure to run `systemctl daemon-reload` and to restart the ollama service, and yet it is still storing the model blobs ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4454/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6905 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6905/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6905/comments | https://api.github.com/repos/ollama/ollama/issues/6905/events | https://github.com/ollama/ollama/pull/6905 | 2,540,549,031 | PR_kwDOJ0Z1Ps58QjCS | 6,905 | runner: Set windows above normal priority for consistent CPU inference performance | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-09-21T23:17:22 | 2024-09-21T23:54:52 | 2024-09-21T23:54:49 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6905",
"html_url": "https://github.com/ollama/ollama/pull/6905",
"diff_url": "https://github.com/ollama/ollama/pull/6905.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6905.patch",
"merged_at": "2024-09-21T23:54:49"
} | When running the subprocess as a background service, Windows may throttle it, which can lead to thrashing and a very poor token rate.
Fixes #3511
I've now reproduced the performance problem and understand what is going on that leads to poor CPU inference performance for some users on Windows.
Windows trea... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6905/timeline | null | null | true |
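The Windows mechanism the PR description refers to can be illustrated from Python with ctypes; this is a sketch of the underlying Win32 call, not the PR's actual Go code:

```py
import ctypes
import sys

ABOVE_NORMAL_PRIORITY_CLASS = 0x00008000  # value from winbase.h

if sys.platform == "win32":
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetCurrentProcess()  # pseudo-handle for this process
    # Raise the priority class so a background service is not starved
    # below interactive processes.
    kernel32.SetPriorityClass(handle, ABOVE_NORMAL_PRIORITY_CLASS)
```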
https://api.github.com/repos/ollama/ollama/issues/5693 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5693/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5693/comments | https://api.github.com/repos/ollama/ollama/issues/5693/events | https://github.com/ollama/ollama/issues/5693 | 2,407,657,391 | I_kwDOJ0Z1Ps6Pge-v | 5,693 | Per-Model Concurrency | {
"login": "ProjectMoon",
"id": 183856,
"node_id": "MDQ6VXNlcjE4Mzg1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/183856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ProjectMoon",
"html_url": "https://github.com/ProjectMoon",
"followers_url": "https://api.github.com/user... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-07-15T00:14:39 | 2024-09-17T01:44:17 | 2024-09-17T01:44:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I like the new concurrency features. I'm wondering if it would be possible to add a new Modelfile parameter to control parallel requests on a per-model basis. This would override `OLLAMA_NUM_PARALLEL` if set. The primary use-case for this idea is to allow small models like embedding models to serve many quick requests ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5693/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5693/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4298 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4298/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4298/comments | https://api.github.com/repos/ollama/ollama/issues/4298/events | https://github.com/ollama/ollama/pull/4298 | 2,288,521,622 | PR_kwDOJ0Z1Ps5vCOc- | 4,298 | log clean up | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-09T21:53:11 | 2024-05-09T23:20:58 | 2024-05-09T23:20:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4298",
"html_url": "https://github.com/ollama/ollama/pull/4298",
"diff_url": "https://github.com/ollama/ollama/pull/4298.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4298.patch",
"merged_at": "2024-05-09T23:20:57"
} | any debug logs are available with `--verbose` which was previously a noop since it's only set if also compiled with `SERVER_VERBOSE` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4298/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7341 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7341/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7341/comments | https://api.github.com/repos/ollama/ollama/issues/7341/events | https://github.com/ollama/ollama/issues/7341 | 2,611,011,341 | I_kwDOJ0Z1Ps6boN8N | 7,341 | Ollama forgets previous information in conversation if a prompt sent by the user is very large | {
"login": "robotom",
"id": 45123215,
"node_id": "MDQ6VXNlcjQ1MTIzMjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotom",
"html_url": "https://github.com/robotom",
"followers_url": "https://api.github.com/users/roboto... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-10-24T09:33:58 | 2024-10-24T23:20:15 | 2024-10-24T16:02:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am trying to figure out how to pass large bodies of text (or documents) to Ollama. I thought this was just an issue with my front end implementation, but it seems present in the command line interface as well.
I could have a basic conversation with the model about the weather (llama 3.2 ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7341/timeline | null | completed | false |
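The usual cause of the symptom above is the context window: input beyond `num_ctx` tokens is silently truncated, so earlier conversation falls out of scope. The limit can be raised per request through `options`; a sketch with illustrative model name and size:

```py
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Summarize this long document: ..."}],
        # The default context is small; long inputs need a larger num_ctx.
        "options": {"num_ctx": 8192},
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```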
https://api.github.com/repos/ollama/ollama/issues/4132 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4132/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4132/comments | https://api.github.com/repos/ollama/ollama/issues/4132/events | https://github.com/ollama/ollama/issues/4132 | 2,278,196,909 | I_kwDOJ0Z1Ps6Hyoat | 4,132 | model run command not rendered on mobile | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 5 | 2024-05-03T18:19:17 | 2024-05-04T00:14:25 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://ollama.com/library/phi3:3.8b
On the page, the installation command is not written like llama and gemma. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4132/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/6919 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6919/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6919/comments | https://api.github.com/repos/ollama/ollama/issues/6919/events | https://github.com/ollama/ollama/pull/6919 | 2,542,710,356 | PR_kwDOJ0Z1Ps58X6Zo | 6,919 | Add LLMChat to community apps | {
"login": "deep93333",
"id": 100652109,
"node_id": "U_kgDOBf_UTQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100652109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deep93333",
"html_url": "https://github.com/deep93333",
"followers_url": "https://api.github.com/users/deep93... | [] | closed | false | null | [] | null | 0 | 2024-09-23T13:45:39 | 2024-09-24T00:49:47 | 2024-09-24T00:49:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6919",
"html_url": "https://github.com/ollama/ollama/pull/6919",
"diff_url": "https://github.com/ollama/ollama/pull/6919.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6919.patch",
"merged_at": "2024-09-24T00:49:47"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6919/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3599 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3599/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3599/comments | https://api.github.com/repos/ollama/ollama/issues/3599/events | https://github.com/ollama/ollama/pull/3599 | 2,238,044,574 | PR_kwDOJ0Z1Ps5sYLK- | 3,599 | examples: add more Go examples using the API | {
"login": "eliben",
"id": 1130906,
"node_id": "MDQ6VXNlcjExMzA5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1130906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliben",
"html_url": "https://github.com/eliben",
"followers_url": "https://api.github.com/users/eliben/foll... | [] | closed | false | null | [] | null | 1 | 2024-04-11T15:44:09 | 2024-04-15T22:34:54 | 2024-04-15T22:34:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3599",
"html_url": "https://github.com/ollama/ollama/pull/3599",
"diff_url": "https://github.com/ollama/ollama/pull/3599.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3599.patch",
"merged_at": "2024-04-15T22:34:54"
} | Updates #2840
Followup on #2879
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3599/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3599/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/520/comments | https://api.github.com/repos/ollama/ollama/issues/520/events | https://github.com/ollama/ollama/pull/520 | 1,893,242,840 | PR_kwDOJ0Z1Ps5aKy9v | 520 | Fix ggml arm64 linux cuda build | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-12T20:49:57 | 2023-09-12T21:06:49 | 2023-09-12T21:06:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/520",
"html_url": "https://github.com/ollama/ollama/pull/520",
"diff_url": "https://github.com/ollama/ollama/pull/520.diff",
"patch_url": "https://github.com/ollama/ollama/pull/520.patch",
"merged_at": "2023-09-12T21:06:48"
} | Apply patch to support CUDA's half type for aarch64 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/520/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6219 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6219/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6219/comments | https://api.github.com/repos/ollama/ollama/issues/6219/events | https://github.com/ollama/ollama/pull/6219 | 2,452,132,353 | PR_kwDOJ0Z1Ps53o5Ky | 6,219 | llm: reserve required number of slots for embeddings | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-08-07T02:50:14 | 2024-08-07T03:20:51 | 2024-08-07T03:20:49 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6219",
"html_url": "https://github.com/ollama/ollama/pull/6219",
"diff_url": "https://github.com/ollama/ollama/pull/6219.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6219.patch",
"merged_at": "2024-08-07T03:20:49"
} | Fixes https://github.com/ollama/ollama/issues/6217 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6219/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5722 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5722/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5722/comments | https://api.github.com/repos/ollama/ollama/issues/5722/events | https://github.com/ollama/ollama/issues/5722 | 2,411,211,731 | I_kwDOJ0Z1Ps6PuCvT | 5,722 | Environment variable OLLAMA_NUM_PARALLEL is ignored (Linux) | {
"login": "boshk0",
"id": 75314475,
"node_id": "MDQ6VXNlcjc1MzE0NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/75314475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boshk0",
"html_url": "https://github.com/boshk0",
"followers_url": "https://api.github.com/users/boshk0/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-07-16T13:53:31 | 2024-07-22T08:12:52 | 2024-07-22T08:11:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After updating Ollama to the latest version (from 0.1.48) I noticed that the environment variable OLLAMA_NUM_PARALLEL is now ignored. Instead, the default value 4 is used.
My startup script:
export OLLAMA_MAX_LOADED_MODELS:2
export OLLAMA_KEEP_ALIVE:60m
export OLLAMA_NUM_PARALLEL:10
/b... | {
"login": "boshk0",
"id": 75314475,
"node_id": "MDQ6VXNlcjc1MzE0NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/75314475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boshk0",
"html_url": "https://github.com/boshk0",
"followers_url": "https://api.github.com/users/boshk0/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5722/timeline | null | completed | false |
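Note that the startup script in the report uses `export NAME:value`, while POSIX shells require `export NAME=value`, which may explain why the variable never reached the server. A sketch of setting the same variables when launching `ollama serve` from Python:

```py
import os
import subprocess

# In a shell this would be: export OLLAMA_NUM_PARALLEL=10  (note '=', not ':')
env = dict(
    os.environ,
    OLLAMA_MAX_LOADED_MODELS="2",
    OLLAMA_KEEP_ALIVE="60m",
    OLLAMA_NUM_PARALLEL="10",
)
subprocess.Popen(["ollama", "serve"], env=env)
```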
https://api.github.com/repos/ollama/ollama/issues/5143 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5143/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5143/comments | https://api.github.com/repos/ollama/ollama/issues/5143/events | https://github.com/ollama/ollama/issues/5143 | 2,362,575,347 | I_kwDOJ0Z1Ps6M0gnz | 5,143 | AMD iGPU works in docker with override but not on host | {
"login": "smellouk",
"id": 13059906,
"node_id": "MDQ6VXNlcjEzMDU5OTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13059906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smellouk",
"html_url": "https://github.com/smellouk",
"followers_url": "https://api.github.com/users/sme... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 21 | 2024-06-19T14:50:14 | 2024-10-26T21:04:15 | 2024-10-26T21:04:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama is failing to run on the GPU; instead it uses the CPU. If I force it using `HSA_OVERRIDE_GFX_VERSION=9.0.0` then I get `Error: llama runner process has terminated: signal: aborted error:Could not initialize Tensile host: No devices found`.
## ENV:
I'm using Proxmox LXC with Device Passthro... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5143/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1122 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1122/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1122/comments | https://api.github.com/repos/ollama/ollama/issues/1122/events | https://github.com/ollama/ollama/issues/1122 | 1,992,321,364 | I_kwDOJ0Z1Ps52wGlU | 1,122 | c | {
"login": "fbrad",
"id": 1182262,
"node_id": "MDQ6VXNlcjExODIyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1182262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fbrad",
"html_url": "https://github.com/fbrad",
"followers_url": "https://api.github.com/users/fbrad/follower... | [] | closed | false | null | [] | null | 0 | 2023-11-14T09:17:39 | 2023-11-14T09:19:43 | 2023-11-14T09:19:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "fbrad",
"id": 1182262,
"node_id": "MDQ6VXNlcjExODIyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1182262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fbrad",
"html_url": "https://github.com/fbrad",
"followers_url": "https://api.github.com/users/fbrad/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1122/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3739 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3739/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3739/comments | https://api.github.com/repos/ollama/ollama/issues/3739/events | https://github.com/ollama/ollama/pull/3739 | 2,251,838,454 | PR_kwDOJ0Z1Ps5tHLJa | 3,739 | draft: push | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-19T00:31:36 | 2024-06-05T20:12:07 | 2024-05-16T01:08:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3739",
"html_url": "https://github.com/ollama/ollama/pull/3739",
"diff_url": "https://github.com/ollama/ollama/pull/3739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3739.patch",
"merged_at": null
} | this (experimental) change updates the streaming mechanism for push to use rangefunc over callback
no non-streaming
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3739/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1726 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1726/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1726/comments | https://api.github.com/repos/ollama/ollama/issues/1726/events | https://github.com/ollama/ollama/issues/1726 | 2,057,065,772 | I_kwDOJ0Z1Ps56nFUs | 1,726 | Add GPT--Pilot to ollama docs under libraries | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 3 | 2023-12-27T08:21:41 | 2024-05-06T23:58:45 | 2024-05-06T23:58:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The repo is available [here](https://github.com/Pythagora-io/gpt-pilot/): which supports Ollama according to this wiki:
[GPT Pilot](https://github.com/Pythagora-io/gpt-pilot/wiki/Using-GPT%E2%80%90Pilot-with-Local-LLMs).
This will allow Ollama models to do full stack development for us. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1726/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1726/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8321 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8321/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8321/comments | https://api.github.com/repos/ollama/ollama/issues/8321/events | https://github.com/ollama/ollama/pull/8321 | 2,770,996,423 | PR_kwDOJ0Z1Ps6G1zVc | 8,321 | feat(openai): add optional authentication for OpenAI compatible API | {
"login": "myaniu",
"id": 1250454,
"node_id": "MDQ6VXNlcjEyNTA0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myaniu",
"html_url": "https://github.com/myaniu",
"followers_url": "https://api.github.com/users/myaniu/foll... | [] | closed | false | null | [] | null | 2 | 2025-01-06T16:21:54 | 2025-01-25T14:35:08 | 2025-01-12T21:31:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8321",
"html_url": "https://github.com/ollama/ollama/pull/8321",
"diff_url": "https://github.com/ollama/ollama/pull/8321.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8321.patch",
"merged_at": null
} | Add authentication middleware that checks API key when OPENAI_API_KEY environment variable is set. When OPENAI_API_KEY is not set, maintain current behavior of allowing all requests. When set, clients must provide matching Bearer token in Authorization header. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8321/timeline | null | null | true |
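The PR was closed unmerged, so the server-side key check is proposed behavior only; on the client side, though, the request shape is a standard Bearer token against Ollama's OpenAI-compatible endpoint. A sketch using the `openai` package, with the key and model as illustrative assumptions:

```py
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="my-secret-key",               # sent as "Authorization: Bearer my-secret-key"
)
resp = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```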
https://api.github.com/repos/ollama/ollama/issues/8333 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8333/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8333/comments | https://api.github.com/repos/ollama/ollama/issues/8333/events | https://github.com/ollama/ollama/issues/8333 | 2,772,247,248 | I_kwDOJ0Z1Ps6lPSLQ | 8,333 | GOT-OCR and voice model support | {
"login": "Elton-Yang",
"id": 93847642,
"node_id": "U_kgDOBZgAWg",
"avatar_url": "https://avatars.githubusercontent.com/u/93847642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elton-Yang",
"html_url": "https://github.com/Elton-Yang",
"followers_url": "https://api.github.com/users/Elton... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2025-01-07T08:50:19 | 2025-01-16T00:02:20 | 2025-01-16T00:02:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Could the GOT-OCR image model be supported by Ollama? I like the way ollama deploys and runs model very much, and really hope ollama can support more types of model including image and audio models, specifically two models I use a lot: GOT-OCR2.0 (image OCR) and SenseVoiceSmall (audio STT). Thanks all developers for yo... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8333/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8333/timeline | null | duplicate | false |
https://api.github.com/repos/ollama/ollama/issues/3534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3534/comments | https://api.github.com/repos/ollama/ollama/issues/3534/events | https://github.com/ollama/ollama/issues/3534 | 2,230,469,158 | I_kwDOJ0Z1Ps6E8kIm | 3,534 | When I ask a longer question, the output is arbitrary | {
"login": "0sengseng0",
"id": 73268510,
"node_id": "MDQ6VXNlcjczMjY4NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/73268510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0sengseng0",
"html_url": "https://github.com/0sengseng0",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-04-08T07:29:03 | 2024-04-19T15:41:10 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I ask longer questions, about 15K tokens or so, and I've set enough tokens (/set parameter num_ctx 4096), the model outputs a combination of numbers and symbols. Why?

### What did you expect to ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3534/timeline | null | null | false |
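The likely cause in the issue above is that a ~15K-token prompt overflows a 4096-token window, so the model effectively sees truncated garbage. A hedged sketch of requesting a larger window per call with the ollama Python client; the model name and sizes are assumptions:

```py
import ollama  # pip install ollama

# A ~15K-token prompt cannot fit in a 4096-token window; size num_ctx
# to cover the prompt plus the expected reply instead.
response = ollama.chat(
    model="llama3",  # illustrative model name
    messages=[{"role": "user", "content": "...a very long question..."}],
    options={"num_ctx": 16384},
)
print(response["message"]["content"])
```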
https://api.github.com/repos/ollama/ollama/issues/5012 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5012/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5012/comments | https://api.github.com/repos/ollama/ollama/issues/5012/events | https://github.com/ollama/ollama/issues/5012 | 2,350,047,630 | I_kwDOJ0Z1Ps6MEuGO | 5,012 | Seeded API request is returning inconsistent results | {
"login": "ScreamingHawk",
"id": 1460552,
"node_id": "MDQ6VXNlcjE0NjA1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1460552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScreamingHawk",
"html_url": "https://github.com/ScreamingHawk",
"followers_url": "https://api.github.... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-13T03:10:50 | 2024-06-14T16:21:47 | 2024-06-14T16:21:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using `seed: 42069` and `temperature: 0.0`, I get consistent results only for the first ~126 characters. When using a seed, I would expect a deterministic result for the entire interaction.
```py
from ollama import Client, Options
ollama_client = Client(host="http://lo... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5012/timeline | null | completed | false |
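To reproduce the determinism report above, a sketch along the lines of the truncated snippet in the issue; the host, model, and prompt are illustrative stand-ins, not the reporter's actual values:

```py
from ollama import Client

client = Client(host="http://localhost:11434")  # illustrative host

def generate(prompt: str) -> str:
    r = client.generate(
        model="llama3",  # illustrative model
        prompt=prompt,
        options={"seed": 42069, "temperature": 0.0},
    )
    return r["response"]

# With a fixed seed and temperature 0, two runs should match exactly,
# not just for the first ~126 characters.
a = generate("Write a haiku about the sea.")
b = generate("Write a haiku about the sea.")
print(a == b)
```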
https://api.github.com/repos/ollama/ollama/issues/1770 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1770/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1770/comments | https://api.github.com/repos/ollama/ollama/issues/1770/events | https://github.com/ollama/ollama/issues/1770 | 2,064,513,686 | I_kwDOJ0Z1Ps57DfqW | 1,770 | Pulling manifest error | {
"login": "buczekkruczek",
"id": 155578015,
"node_id": "U_kgDOCUXunw",
"avatar_url": "https://avatars.githubusercontent.com/u/155578015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buczekkruczek",
"html_url": "https://github.com/buczekkruczek",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 9 | 2024-01-03T19:05:12 | 2025-01-01T10:29:12 | 2024-01-04T16:31:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello Everyone,
I have a problem with pulling the manifest. While running "ollama run dolphin-mixtral:latest" for the first time I got "Error: max retries exceeded: unexpected EOF", and now I am unable to restart the download, getting "Error: pull model manifest: file does not exist".
I am grateful for all help or any k... | {
"login": "buczekkruczek",
"id": 155578015,
"node_id": "U_kgDOCUXunw",
"avatar_url": "https://avatars.githubusercontent.com/u/155578015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buczekkruczek",
"html_url": "https://github.com/buczekkruczek",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1770/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6145 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6145/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6145/comments | https://api.github.com/repos/ollama/ollama/issues/6145/events | https://github.com/ollama/ollama/pull/6145 | 2,445,778,816 | PR_kwDOJ0Z1Ps53TQpI | 6,145 | Fix crash on startup when trying to clean up unused files (#5840) | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-08-02T21:23:34 | 2024-08-07T18:24:17 | 2024-08-07T18:24:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6145",
"html_url": "https://github.com/ollama/ollama/pull/6145",
"diff_url": "https://github.com/ollama/ollama/pull/6145.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6145.patch",
"merged_at": "2024-08-07T18:24:15"
} | Improve validation and error handling of manifest files in the event of corruption. This prevents nil pointer errors and possible unintended deletion of data. | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6145/timeline | null | null | true |
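The PR above validates manifests before acting on them so corruption cannot turn into nil-pointer crashes or accidental deletion. A sketch of that defensive pattern in Python; the file layout and the `layers` field are assumptions for illustration, not Ollama's actual manifest schema:

```py
import json
from pathlib import Path

def load_manifest(path: Path) -> dict | None:
    """Return a parsed manifest, or None if it is corrupt or incomplete.
    Callers should skip, not delete, anything they cannot validate."""
    try:
        data = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return None
    # Validate before dereferencing; a blindly assumed key is exactly
    # the kind of thing that turns file corruption into a crash.
    if not isinstance(data, dict) or "layers" not in data:
        return None
    return data
```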
https://api.github.com/repos/ollama/ollama/issues/7594 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7594/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7594/comments | https://api.github.com/repos/ollama/ollama/issues/7594/events | https://github.com/ollama/ollama/pull/7594 | 2,646,964,227 | PR_kwDOJ0Z1Ps6BbAdK | 7,594 | Invalid OLLAMA_LLM_LIBRARY Error | {
"login": "jk-vtp-one",
"id": 173852691,
"node_id": "U_kgDOClzIEw",
"avatar_url": "https://avatars.githubusercontent.com/u/173852691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jk-vtp-one",
"html_url": "https://github.com/jk-vtp-one",
"followers_url": "https://api.github.com/users/jk-... | [] | open | false | null | [] | null | 0 | 2024-11-10T07:53:47 | 2024-11-10T08:18:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7594",
"html_url": "https://github.com/ollama/ollama/pull/7594",
"diff_url": "https://github.com/ollama/ollama/pull/7594.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7594.patch",
"merged_at": null
} | If the OLLAMA_LLM_LIBRARY environment variable is an invalid target, the server currently logs a message about the problem and continues to work using another available runner. This change causes it to raise an error instead. If you are using the OLLAMA_LLM_LIBRARY variable, you should intend for it to use that runner, an... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7594/timeline | null | null | true |
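The PR above argues for failing fast on a misconfigured OLLAMA_LLM_LIBRARY rather than silently falling back. The general pattern, sketched with an invented set of runner names:

```py
import os

VALID_RUNNERS = {"cpu", "cpu_avx2", "cuda_v12", "rocm"}  # illustrative set

def resolve_runner() -> str | None:
    requested = os.environ.get("OLLAMA_LLM_LIBRARY")
    if requested is None:
        return None  # not set: auto-detect as usual
    if requested not in VALID_RUNNERS:
        # Fail fast: the user explicitly asked for this runner, so a
        # silent fallback would hide the misconfiguration.
        raise ValueError(f"invalid OLLAMA_LLM_LIBRARY: {requested!r}")
    return requested
```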
https://api.github.com/repos/ollama/ollama/issues/6914 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6914/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6914/comments | https://api.github.com/repos/ollama/ollama/issues/6914/events | https://github.com/ollama/ollama/issues/6914 | 2,542,199,583 | I_kwDOJ0Z1Ps6XhuMf | 6,914 | Work done by CPU instead of GPU | {
"login": "Iliceth",
"id": 68381834,
"node_id": "MDQ6VXNlcjY4MzgxODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/68381834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iliceth",
"html_url": "https://github.com/Iliceth",
"followers_url": "https://api.github.com/users/Ilicet... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-09-23T10:23:42 | 2024-09-25T08:55:58 | 2024-09-25T00:41:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm aware this might not be a bug, but I'm trying to understand and figure out whether I can and/or should change something. When I try Reflection 70b:
CPU: 8 cores 100% utilization
RAM: 23 of 32 GB in use
GPU: on average 5% utilization
VRAM: 23 of 24 GB in use
I assume the model does not f... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6914/timeline | null | completed | false |
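The utilization pattern in the issue above (CPU pegged, GPU near idle, VRAM full) is what partial offload looks like when a model exceeds VRAM. A rough back-of-envelope check; every number here is an assumption:

```py
# Does a 70B model at ~4-bit quantization fit in 24 GB of VRAM?
params_b = 70           # billions of parameters (assumption)
bytes_per_param = 0.5   # ~4-bit quantization (assumption)
overhead_gb = 2.0       # KV cache + scratch, very rough (assumption)
need_gb = params_b * bytes_per_param + overhead_gb
print(f"~{need_gb:.0f} GB needed vs 24 GB VRAM")  # ~37 GB: spills to CPU
```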
https://api.github.com/repos/ollama/ollama/issues/2739 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2739/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2739/comments | https://api.github.com/repos/ollama/ollama/issues/2739/events | https://github.com/ollama/ollama/pull/2739 | 2,152,603,520 | PR_kwDOJ0Z1Ps5n1WXz | 2,739 | Require no extra disk space for windows installation | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-25T05:08:40 | 2024-02-25T05:20:36 | 2024-02-25T05:20:35 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2739",
"html_url": "https://github.com/ollama/ollama/pull/2739",
"diff_url": "https://github.com/ollama/ollama/pull/2739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2739.patch",
"merged_at": "2024-02-25T05:20:35"
} | Fixes https://github.com/ollama/ollama/issues/2734
Since quite a few folks won't install a model on `C:\`, and will instead use `OLLAMA_MODELS` to change where models are stored, lower the check for minimum disk space. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2739/timeline | null | null | true |
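The PR above lowers the installer's free-space requirement because models often live elsewhere via OLLAMA_MODELS. The underlying check is a simple free-space query; the 10 GB threshold below is illustrative:

```py
import os
import shutil

def enough_space(path: str, minimum_gb: float) -> bool:
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= minimum_gb

# Binaries need little space; the bulk goes wherever models are stored.
models_dir = os.environ.get("OLLAMA_MODELS", os.path.expanduser("~/.ollama"))
print(enough_space(models_dir, minimum_gb=10))  # 10 GB is illustrative
```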
https://api.github.com/repos/ollama/ollama/issues/2902 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2902/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2902/comments | https://api.github.com/repos/ollama/ollama/issues/2902/events | https://github.com/ollama/ollama/issues/2902 | 2,165,548,696 | I_kwDOJ0Z1Ps6BE6aY | 2,902 | feature request: support for conditional probability of next word (logprobs) | {
"login": "iurimatias",
"id": 176720,
"node_id": "MDQ6VXNlcjE3NjcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/176720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iurimatias",
"html_url": "https://github.com/iurimatias",
"followers_url": "https://api.github.com/users/i... | [] | closed | false | null | [] | null | 1 | 2024-03-03T20:44:12 | 2024-03-12T01:32:34 | 2024-03-12T01:32:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is required for libraries like LQML, and useful for some other tooling as well, debugging etc..
https://platform.openai.com/docs/api-reference/completions/create#completions-create-logprobs | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2902/timeline | null | completed | false |
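The request above asks for OpenAI-style logprobs. Against an endpoint that honors the parameter, the request shape looks roughly like this; note that Ollama did not support it when the issue was filed, and the host and model names are illustrative:

```py
import requests

resp = requests.post(
    "http://localhost:11434/v1/completions",  # OpenAI-compatible path
    json={
        "model": "llama3",                    # illustrative model
        "prompt": "The capital of France is",
        "max_tokens": 1,
        "logprobs": 5,                        # top-5 alternatives per token
    },
)
print(resp.json())
```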
https://api.github.com/repos/ollama/ollama/issues/3463 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3463/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3463/comments | https://api.github.com/repos/ollama/ollama/issues/3463/events | https://github.com/ollama/ollama/pull/3463 | 2,221,548,791 | PR_kwDOJ0Z1Ps5rfku5 | 3,463 | update graph size estimate | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-02T22:12:34 | 2024-04-03T21:27:31 | 2024-04-03T21:27:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3463",
"html_url": "https://github.com/ollama/ollama/pull/3463",
"diff_url": "https://github.com/ollama/ollama/pull/3463.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3463.patch",
"merged_at": "2024-04-03T21:27:30"
} | Precise scratch space estimates for gemma and llama models, with other models falling back to the previous algorithm. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3463/timeline | null | null | true |
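The PR above uses exact scratch-space formulas for known architectures and a generic heuristic otherwise. A sketch of that dispatch shape only; the formulas below are placeholders, not Ollama's real estimates:

```py
def graph_size_estimate(arch: str, n_ctx: int, n_embd: int) -> int:
    """Bytes of scratch space: exact for known families, heuristic otherwise.
    The constants here are placeholders, not Ollama's actual numbers."""
    if arch == "llama":
        return 4 * n_ctx * n_embd   # placeholder "exact" formula
    if arch == "gemma":
        return 6 * n_ctx * n_embd   # placeholder "exact" formula
    return 8 * n_ctx * n_embd       # conservative generic fallback

print(graph_size_estimate("llama", 4096, 4096) // 2**20, "MiB")
```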
https://api.github.com/repos/ollama/ollama/issues/272 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/272/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/272/comments | https://api.github.com/repos/ollama/ollama/issues/272/events | https://github.com/ollama/ollama/pull/272 | 1,835,825,101 | PR_kwDOJ0Z1Ps5XJlAE | 272 | Decode ggml 2: Use decoded values | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-08-03T22:48:36 | 2023-08-11T00:22:49 | 2023-08-11T00:22:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/272",
"html_url": "https://github.com/ollama/ollama/pull/272",
"diff_url": "https://github.com/ollama/ollama/pull/272.diff",
"patch_url": "https://github.com/ollama/ollama/pull/272.patch",
"merged_at": "2023-08-11T00:22:48"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/272/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1927 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1927/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1927/comments | https://api.github.com/repos/ollama/ollama/issues/1927/events | https://github.com/ollama/ollama/issues/1927 | 2,077,156,354 | I_kwDOJ0Z1Ps57zuQC | 1,927 | Handling High traffic | {
"login": "lauvindra",
"id": 82690315,
"node_id": "MDQ6VXNlcjgyNjkwMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/82690315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lauvindra",
"html_url": "https://github.com/lauvindra",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 3 | 2024-01-11T16:52:02 | 2024-01-26T23:52:33 | 2024-01-26T23:52:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Assume I have ollama on a server with a Tesla T4 GPU (16 GB VRAM) and 120 GB of RAM; how many requests can it handle in one second? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1927/timeline | null | completed | false |
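There is no single answer to the question above, but one rough way to reason about it is sequential token throughput divided by tokens per response; every number below is an assumption:

```py
# Very rough T4 throughput reasoning (all numbers are assumptions):
tokens_per_sec = 30          # small quantized model, sequential decoding
tokens_per_response = 150    # typical short answer
responses_per_sec = tokens_per_sec / tokens_per_response
print(f"~{responses_per_sec:.1f} responses/sec without batching")  # ~0.2
```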
https://api.github.com/repos/ollama/ollama/issues/3802 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3802/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3802/comments | https://api.github.com/repos/ollama/ollama/issues/3802/events | https://github.com/ollama/ollama/issues/3802 | 2,255,184,367 | I_kwDOJ0Z1Ps6Ga2Hv | 3,802 | Advertise existence to local network using NSD or mDNS SD | {
"login": "majestrate",
"id": 499653,
"node_id": "MDQ6VXNlcjQ5OTY1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/499653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/majestrate",
"html_url": "https://github.com/majestrate",
"followers_url": "https://api.github.com/users/m... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-04-21T17:26:49 | 2024-04-21T17:26:49 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be very cool if ollama could publish itself to the local network as a discoverable service, such that other devices on the network could use NSD to discover that an instance is locally accessible. This would greatly help unify how companion applications work (e.g. a mobile app that would let you rem... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3802/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3802/timeline | null | null | false |
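The request above is standard DNS-SD. A sketch of advertising an instance with the third-party zeroconf package; the `_ollama._tcp` service type and the LAN address are my inventions, not registered values:

```py
import socket
from zeroconf import ServiceInfo, Zeroconf  # pip install zeroconf

info = ServiceInfo(
    "_ollama._tcp.local.",                         # hypothetical service type
    "my-ollama._ollama._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.10")],  # illustrative LAN IP
    port=11434,
    properties={"version": "0.1.0"},
)
zc = Zeroconf()
zc.register_service(info)  # now discoverable via mDNS service discovery
# ... serve until shutdown, then:
# zc.unregister_service(info); zc.close()
```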
https://api.github.com/repos/ollama/ollama/issues/8477 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8477/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8477/comments | https://api.github.com/repos/ollama/ollama/issues/8477/events | https://github.com/ollama/ollama/issues/8477 | 2,796,598,168 | I_kwDOJ0Z1Ps6msLOY | 8,477 | Seems v0.5.7 Linux x86_64 release build in debug mode, and the openai endpoint broken | {
"login": "Dawn-Xu-helloworld",
"id": 48346940,
"node_id": "MDQ6VXNlcjQ4MzQ2OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48346940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dawn-Xu-helloworld",
"html_url": "https://github.com/Dawn-Xu-helloworld",
"followers_url": "ht... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2025-01-18T02:46:29 | 2025-01-24T09:30:42 | 2025-01-24T09:30:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
2025/01/18 10:21:45 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLL... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8477/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8385 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8385/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8385/comments | https://api.github.com/repos/ollama/ollama/issues/8385/events | https://github.com/ollama/ollama/issues/8385 | 2,781,798,559 | I_kwDOJ0Z1Ps6lzuCf | 8,385 | Cannot list or install models: Connection refused error on Windows 10 | {
"login": "inspector3535",
"id": 187630024,
"node_id": "U_kgDOCy8ByA",
"avatar_url": "https://avatars.githubusercontent.com/u/187630024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inspector3535",
"html_url": "https://github.com/inspector3535",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2025-01-11T12:07:45 | 2025-01-13T20:02:33 | 2025-01-13T19:28:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Greetings,
I'm unable to interact with Ollama on Windows 10. I cannot list models, nor can I install any of them.
Whatever command I try, I receive the following error; even when I type ollama list, the same error appears.
I initially suspected that my server configuration might be causing th... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8385/timeline | null | completed | false |
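For "connection refused" reports like the one above, the first useful probe is whether anything is listening on the default port. A minimal check against the /api/version endpoint, assuming the default host and port (adjust if OLLAMA_HOST is set):

```py
import requests

try:
    r = requests.get("http://127.0.0.1:11434/api/version", timeout=3)
    print("server up:", r.json())
except requests.ConnectionError:
    print("connection refused: the ollama server is not listening")
```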