| repo | pr_number | title | body | buggy_commit | fix_commit | buggy_distance | confidence | files |
|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 43,004 | Fix InvalidVersion error when flash_attn version cannot be determined | ## Problem
When `flash_attn_3` is installed without `flash_attn` (v2), calling `is_flash_attn_2_available()` raises:
```
packaging.version.InvalidVersion: Invalid version: 'N/A'
```
This happens because the `flash_attn` module is importable but its version cannot be determined, so `_is_package_available` returns `"N/... | 0642963ba13f2dae0596fe489415569e1d91fbda | 82c60184bf07db5659aa1d3bebd0690d15642721 | 8 | medium | [
{
"filename": "src/transformers/utils/import_utils.py",
"patch": "@@ -890,14 +890,17 @@ def is_flash_attn_2_available() -> bool:\n \n import torch\n \n- if torch.version.cuda:\n- return version.parse(flash_attn_version) >= version.parse(\"2.1.0\")\n- elif torch.version.hip:\n- # TODO... |
facebook/react | 33,141 | [Flight] Clarify that location field is a FunctionLocation not a CallSite | Follow up to #33136.
This clarifies in the types where the conversion happens from a CallSite which we use to simulate getting the enclosing line/col to a FunctionLocation which doesn't represent a CallSite but actually just the function which only has an enclosing line/col. | 4a702865dd0c5849c1b454091560c3ef26121611 | a437c99ff7a45025367571363653c2ad5db482a7 | 2 | medium | [
{
"filename": "packages/react-client/src/ReactFlightClient.js",
"patch": "@@ -15,7 +15,7 @@ import type {\n ReactAsyncInfo,\n ReactTimeInfo,\n ReactStackTrace,\n- ReactCallSite,\n+ ReactFunctionLocation,\n ReactErrorInfoDev,\n } from 'shared/ReactTypes';\n import type {LazyComponent} from 'react/s... |
ggml-org/llama.cpp | 20,177 | Autoparser: True streaming | In the final changes to the autoparser I modified the atomicity constraint to be pretty restrictive due to models such as GLM 4.7-Flash which put arguments directly after the function name, with no markers in between. Now I'm relaxing that constraint so people can look at their favorite assistant building the tool call... | 2f2923f89526d102bd3c29188db628a8dbf507b6 | c024d859082183bc4e9446e0a56c8299b23def0f | 1 | medium | [
{
"filename": "common/chat-auto-parser-generator.cpp",
"patch": "@@ -239,8 +239,6 @@ common_peg_parser analyze_tools::build_tool_parser_tag_json(parser_build_context\n if (!function.close.empty()) {\n func_parser = func_parser + function.close;\n }\n- func_parser = p.atomi... |
ollama/ollama | 9,024 | fix: harden backend loading | - wrap ggml_backend_load_best in try/catch to mitigate potential system level exceptions (such as permissions)[1] [2]
- expose some of the log messages in ggml_backend_load_best which should help debugging
- only try loading from known library paths
- known paths include the executable directory or any directories... | null | 49df03da9af6b0050ebbf50676f7db569a2b54d9 | null | low | [
{
"filename": "llama/patches/0017-try-catch-backend-load.patch",
"patch": "@@ -0,0 +1,69 @@\n+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001\n+From: Michael Yang <mxyng@pm.me>\n+Date: Tue, 11 Feb 2025 14:06:36 -0800\n+Subject: [PATCH] try/catch backend load\n+\n+---\n+ ggml/src/ggml-... |
facebook/react | 33,136 | [Flight] Encode enclosing line/column numbers and use it to align the fake function | Stacked on #33135.
This encodes the line/column of the enclosing function as part of the stack traces. When that information is available.
I adjusted the fake function code generation so that the beginning of the arrow function aligns with these as much as possible.
This ensures that when the browser tries to ... | 0ff1d13b8055c801d8b9b6779958c09fd0dc63e0 | 4a702865dd0c5849c1b454091560c3ef26121611 | 1 | medium | [
{
"filename": "packages/react-client/src/ReactFlightClient.js",
"patch": "@@ -2226,6 +2226,8 @@ function createFakeFunction<T>(\n sourceMap: null | string,\n line: number,\n col: number,\n+ enclosingLine: number,\n+ enclosingCol: number,\n environmentName: string,\n ): FakeFunction<T> {\n // Thi... |
vercel/next.js | 88,415 | Add .cursor/worktrees.json configuration | ## What?
Add a `.cursor/worktrees.json` configuration file that specifies setup commands for Cursor worktrees.
## Why?
This configuration allows Cursor to automatically run the necessary setup commands (`pnpm install` and `pnpm build`) when creating new worktrees, ensuring a consistent and ready-to-use development e... | null | 5df44d1356f329a48a58169c923e065d1c8e40dc | null | low | [
{
"filename": ".cursor/worktrees.json",
"patch": "@@ -0,0 +1,3 @@\n+{\n+ \"setup-worktree\": [\"pnpm install\", \"pnpm build\"]\n+}",
"additions": 3,
"deletions": 0
}
] |
facebook/react | 33,058 | Allow fragment refs to attempt focus/focusLast on nested host children | This enables `focus` and `focusLast` methods on FragmentInstances to search nested host components, depth first. It attempts focus on each child and bails once one succeeds. Previously, only the first level of host children would attempt focus.
Now if we have an example like
```
component MenuItem() {
return ... | e5a8de81e57181692d33ce916dfd6aa23638ec92 | 4206fe49825787eda57a5d142640a63772ccbf2b | 4 | medium | [
{
"filename": "fixtures/dom/src/components/fixtures/fragment-refs/FocusCase.js",
"patch": "@@ -43,11 +43,18 @@ export default function FocusCase() {\n </Fixture.Controls>\n <div className=\"highlight-focused-children\" style={{display: 'flex'}}>\n <Fragment ref={fragmentRef}>\n- ... |
electron/electron | 49,348 | fix: provide explicit cookie encryption provider for cookie encryption | #### Description of Change
This PR modifies our existing cookie encryption logic to match an upstream change here, which now requires a cookie encryption provider for the network service to use if cookie encryption is enabled: Reland "Port net::CookieCryptoDelegate to os_crypt async" | https://chromium-review.google... | null | 95f097a392dbdfd7c0e38a7d55bf14dad92de1fd | null | low | [
{
"filename": "shell/browser/net/network_context_service.cc",
"patch": "@@ -14,6 +14,7 @@\n #include \"net/http/http_util.h\"\n #include \"net/net_buildflags.h\"\n #include \"services/network/network_service.h\"\n+#include \"services/network/public/cpp/cookie_encryption_provider_impl.h\"\n #include \"servic... |
ggml-org/llama.cpp | 20,203 | Fix compile bug | Fixes #20187 | 566059a26b0ce8faec4ea053605719d399c64cc5 | 9b24886f78ce278b34186b47ed71a435f00d8d0d | 15 | medium | [
{
"filename": "common/chat-auto-parser-helpers.cpp",
"patch": "@@ -162,7 +162,7 @@ diff_split calculate_diff_split(const std::string & left, const std::string & ri\n right_fully_consumed = true;\n }\n \n- auto eat_segment = [](std::string & str, segment & seg) -> std::string { return str.appe... |
nodejs/node | 61,056 | deps: update simdjson to 4.2.4 | This is an automated update of simdjson to 4.2.4. | null | b8db64c18d8c4d8b6c968e85e0f227e41e3c7800 | null | low | [
{
"filename": "deps/simdjson/simdjson.cpp",
"patch": "@@ -1,4 +1,4 @@\n-/* auto-generated on 2025-11-11 14:17:08 -0500. version 4.2.2 Do not edit! */\n+/* auto-generated on 2025-12-17 20:32:36 -0500. version 4.2.4 Do not edit! */\n /* including simdjson.cpp: */\n /* begin file simdjson.cpp */\n #define SIM... |
vercel/next.js | 88,256 | [Turbopack] Move DirList to its own module | DirList is how require.context internally represents a list of files matching a requested pattern. I'd like to be able to re-use it for import.meta.glob as well, with a few tweaks. Most of this diff is just breaking it out into its own module.
The filter used to be an EsRegex. It's now an enum, with a possible Glob ... | 17acc812237db6eee64a13844beb45a11a5758b4 | d90847dead1f5c135741043f7720844583a26d0a | 4 | medium | [
{
"filename": "Cargo.lock",
"patch": "@@ -9867,6 +9867,7 @@ dependencies = [\n \"strsim 0.11.1\",\n \"swc_core\",\n \"swc_sourcemap\",\n+ \"tempfile\",\n \"tokio\",\n \"tracing\",\n \"turbo-bincode\",",
"additions": 1,
"deletions": 0
},
{
"filename": "turbopack/crates/turbopack-ecmascr... |
facebook/react | 33,135 | [Flight] Parse Stack Trace from Structured CallSite if available | This is first step to include more enclosing line/column in the parsed data.
We install our own `prepareStackTrace` to collect structured callsite data and only fall back to parsing the string if it was already evaluated or if `prepareStackTrace` doesn't work in this environment.
We still mirror the default V8 fo... | b9cfa0d3083f80bdd11ba76a55aa08fa659b7359 | 0ff1d13b8055c801d8b9b6779958c09fd0dc63e0 | 9 | medium | [
{
"filename": "packages/react-server/src/ReactFlightServer.js",
"patch": "@@ -152,11 +152,27 @@ function defaultFilterStackFrame(\n );\n }\n \n+// DEV-only cache of parsed and filtered stack frames.\n+const stackTraceCache: WeakMap<Error, ReactStackTrace> = __DEV__\n+ ? new WeakMap()\n+ : (null: any);\n... |
ollama/ollama | 8,973 | doc: add Abso SDK to the list of community integrations | Abso allows calling various LLMs with the same API as the OpenAI SDK. It provides a unified interface while maintaining full type safety and streaming capabilities.
It automatically converts, for example, tool calls to the right format depending on the provider used. | 38117fba83c8291e28abe564d56600c200ff4cab | 0189bdd0b79fceab9e801a63e1311d53f3784dbc | 2 | high | [
{
"filename": "README.md",
"patch": "@@ -494,6 +494,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [multi-llm-ts](https://github.com/nbonamy/multi-llm-ts) (A Typescript/JavaScript library allowing access to different LLM in unified API)\n - [LlmTornado](https://github.com/lofcz/llmto... |
huggingface/transformers | 43,014 | skip test_sdpa_can_dispatch_on_flash in tests/models/pe_audio/test_mo… | …deling_pe_audio.py::PeAudioEncoderTest
```
tests/models/pe_audio/test_modeling_pe_audio.py::PeAudioEncoderTest::test_sdpa_can_dispatch_on_flash
/mnt/disk3/wangyi/transformers/src/transformers/integrations/sdpa_attention.py:92: UserWarning: Memory efficient kernel not used because: (Triggered internally at /pyto... | 1b280f998a86a930db3ced9f7108316c671b2d8b | 37cd3240acbf5337c0939f4bc6a0a3074a971c9f | 29 | medium | [
{
"filename": "tests/models/pe_audio/test_modeling_pe_audio.py",
"patch": "@@ -158,6 +158,10 @@ def test_model_get_set_embeddings(self):\n def test_feed_forward_chunking(self):\n pass\n \n+ @unittest.skip(reason=\"SDPA can't dispatch on flash with not None `attention_mask`\")\n+ def test_s... |
electron/electron | 49,154 | build: use @electron-ci/dev-root for package.json default | > [!IMPORTANT]
> Please note that code reviews and merges will be delayed during our [quiet period in December](https://www.electronjs.org/blog/dec-quiet-period-25) and might not happen until January.
#### Description of Change
This PR changes the `name` field in package.json from `electron` to `@electron-ci/dev-r... | 8d5b104c17f9a4ceb7c031ffd2f1e8a7bac31545 | bab6bd3dae351d8f49203a26468d58482f754c84 | 6 | medium | [
{
"filename": "package.json",
"patch": "@@ -1,5 +1,5 @@\n {\n- \"name\": \"electron\",\n+ \"name\": \"@electron-ci/dev-root\",\n \"version\": \"0.0.0-development\",\n \"repository\": \"https://github.com/electron/electron\",\n \"description\": \"Build cross platform desktop apps with JavaScript, HTM... |
vuejs/vue | 7,861 | Adds missing `asyncMeta` during VNode cloning | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**What kind of change does this PR introduce?** (check at least one)
- [x] Bugfix... | null | 8227fb35240ab1f301c30a6ad5d4d25071fa7996 | null | low | [
{
"filename": "src/core/vdom/vnode.js",
"patch": "@@ -103,6 +103,7 @@ export function cloneVNode (vnode: VNode): VNode {\n cloned.fnContext = vnode.fnContext\n cloned.fnOptions = vnode.fnOptions\n cloned.fnScopeId = vnode.fnScopeId\n+ cloned.asyncMeta = vnode.asyncMeta\n cloned.isCloned = true\n ... |
ggml-org/llama.cpp | 20,223 | Fix structured outputs | Fixes support for structured outputs in autoparser. | c024d859082183bc4e9446e0a56c8299b23def0f | 62b8143ad217fc7d3404119a9440be4a18ce31e0 | 10 | medium | [
{
"filename": "common/chat-auto-parser-generator.cpp",
"patch": "@@ -1,6 +1,7 @@\n #include \"chat-auto-parser.h\"\n #include \"chat-peg-parser.h\"\n #include \"chat.h\"\n+#include \"common.h\"\n #include \"json-schema-to-grammar.h\"\n #include \"nlohmann/json.hpp\"\n \n@@ -51,13 +52,15 @@ common_chat_param... |
electron/electron | 49,326 | build: use @electron-ci/dev-root for package.json default | Backport of #49154
See that PR for details.
Notes: none
| 2515880814d713a7570381a357eb2fd7f453c91a | 75ee26902b658d74cc2e4419598446203272fc87 | 4 | medium | [
{
"filename": "package.json",
"patch": "@@ -1,5 +1,5 @@\n {\n- \"name\": \"electron\",\n+ \"name\": \"@electron-ci/dev-root\",\n \"version\": \"0.0.0-development\",\n \"repository\": \"https://github.com/electron/electron\",\n \"description\": \"Build cross platform desktop apps with JavaScript, HTM... |
ggml-org/llama.cpp | 20,183 | ggml-vulkan: Add ELU op support | Add support for the ELU unary operation in the Vulkan backend.
First contribution, so all feedback is appreciated!
Related: https://github.com/ggml-org/llama.cpp/issues/14909
| 213c4a0b81788e058c30479842954fb0815be61a | d088d5b74f1d63b9a345d1515ab9e3bb3bc81a10 | 6 | medium | [
{
"filename": "docs/ops.md",
"patch": "@@ -37,11 +37,11 @@ Legend:\n | CROSS_ENTROPY_LOSS | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |\n | CROSS_ENTROPY_LOSS_BACK | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |\n | CUMSUM | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ... |
electron/electron | 49,329 | build: use @electron-ci/dev-root for package.json default | Backport of #49154
See that PR for details.
Notes: none | c9a29e2568995b93510fda5ddf30ff905f539776 | 5dc3021d1ce14c9ad2be28b23ddcf6ee5ee0d6b6 | 4 | medium | [
{
"filename": "package.json",
"patch": "@@ -1,5 +1,5 @@\n {\n- \"name\": \"electron\",\n+ \"name\": \"@electron-ci/dev-root\",\n \"version\": \"0.0.0-development\",\n \"repository\": \"https://github.com/electron/electron\",\n \"description\": \"Build cross platform desktop apps with JavaScript, HTM... |
nodejs/node | 61,124 | tools: only report commit validation failure on Slack | Refs: https://github.com/nodejs/node/pull/61050#issuecomment-3674352808
<!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/nodejs/node/blob/HEAD/CONTRIBUTING.md
- the commit message formatting guidelines at
https://github.com/nodejs/node/blob/HEAD/doc/contribut... | 25947d6504c0e01d8897be5b78292e3e01d16fc9 | 607a741941994a5f27774a807ef36149963cb821 | 19 | medium | [
{
"filename": ".github/workflows/notify-on-push.yml",
"patch": "@@ -36,11 +36,12 @@ jobs:\n with:\n persist-credentials: false\n - name: Check commit message\n+ id: commit-check\n run: npx -q core-validate-commit \"$COMMIT\"\n env:\n COMMIT: ${{ githu... |
huggingface/transformers | 42,992 | Update cpu torchao usage | Update doc for CPU torchao int4 weight-only usage as api changed. | e9f0f8e0cb40be6c3addc88f1282723f6932813c | 24502729d5667b4480eb61c00bcbc0dfedd21947 | 20 | medium | [
{
"filename": "docs/source/en/quantization/torchao.md",
"patch": "@@ -387,16 +387,14 @@ print(tokenizer.decode(output[0], skip_special_tokens=True))\n <hfoption id=\"int4-weight-only\">\n \n > [!TIP]\n-> Run the quantized model on a CPU by changing `device_map` to `\"cpu\"` and `layout` to `Int4CPULayout()`... |
vuejs/vue | 7,887 | fix(codegen): support IE11 and Edge use of "Esc" key (close #7880) | Closes #7880
IE11 and Edge use "Esc" as the key name; other browsers use "Escape".
I was able to test on Edge 16/15/14 and IE11.
- More reference: https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/5290772/
- Reproduction link: http://codepen.io/anon/pen/QjxPGB
![screenshot 2018-03-23 13 05 4... | 4378fc5124067c2b3a3517dd7f527edd9be2ad37 | 1bd6196fb234c28754d9a27095afe0b5b84990ad | 21 | medium | [
{
"filename": "src/compiler/codegen/events.js",
"patch": "@@ -18,7 +18,8 @@ const keyCodes: { [key: string]: number | Array<number> } = {\n \n // KeyboardEvent.key aliases\n const keyNames: { [key: string]: string | Array<string> } = {\n- esc: 'Escape',\n+ // #7880: IE11 and Edge use `Esc` for Escape key ... |
vercel/next.js | 88,353 | Warn when overriding Cache-Control header on `/_next/` routes | Custom `Cache-Control` headers on `/_next/` routes can cause hard-to-debug development behavior. Normally the dev server sends no-cache values to ensure that assets are always fresh. If a developer sets the max-age or immutable flag, the browser can cache resources between requests, even if the source changes.
Add... | null | 818be9ae62a850dc60faf8f8dcb55c922f0a83ac | null | low | [
{
"filename": "packages/next/src/lib/load-custom-routes.ts",
"patch": "@@ -725,6 +725,27 @@ export default async function loadCustomRoutes(\n )\n }\n \n+ const cacheControlSources: string[] = []\n+ for (const headerRoute of headers) {\n+ if (!headerRoute.source.startsWith('/_next/')) {\n+ co... |
facebook/react | 32,614 | [DevTools] Use Popover API for TraceUpdates highlighting | ## Summary
When using React DevTools to highlight component updates, the highlights would sometimes appear behind elements that use the browser's [top-layer](https://developer.mozilla.org/en-US/docs/Glossary/Top_layer) (such as `<dialog>` elements or components using the Popover API). This made it difficult to see w... | null | 53c9f81049b4440a02b5ed3edb128516821c0279 | null | low | [
{
"filename": "packages/react-devtools-extensions/chrome/manifest.json",
"patch": "@@ -4,7 +4,7 @@\n \"description\": \"Adds React debugging tools to the Chrome Developer Tools.\",\n \"version\": \"6.1.1\",\n \"version_name\": \"6.1.1\",\n- \"minimum_chrome_version\": \"102\",\n+ \"minimum_chrome_ve... |
vercel/next.js | 88,434 | [test] Remove obsolete reference to `NEXT_TEST_SKIP_RETRY_MANIFEST` | The manifest is not used anymore as of #76361. | 7a2f4ad140b6e00d1d240ce84c6c9c7ca7550ba4 | e04dad29a9fc7fd187563ba1d3760393d801fa5f | 2 | medium | [
{
"filename": "run-tests.js",
"patch": "@@ -57,12 +57,6 @@ const isTestJob = !!process.env.NEXT_TEST_JOB\n const shouldContinueTestsOnError =\n process.env.NEXT_TEST_CONTINUE_ON_ERROR === 'true'\n \n-// Check env to load a list of test paths to skip retry. This is to be used in conjunction with NEXT_TEST_... |
ollama/ollama | 8,976 | ml/backend/ggml: fix crash on dlopen on machines without AVX | Removes the `iq4nlt` global variable in sgemm.cpp that causes a runtime crash when calling dlopen on ggml-cpu libraries as its initialization depends on AVX instructions the host machine may not have.
Thanks @rick-github @tris203 @creasyWinds for the help debugging this.
Fixes https://github.com/ollama/ollama/iss... | null | f4711da7bd88f237bec7a5271facb27a107726f0 | null | low | [
{
"filename": "llama/patches/0016-remove-sgemm-global-variables.patch",
"patch": "@@ -0,0 +1,55 @@\n+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001\n+From: jmorganca <jmorganca@gmail.com>\n+Date: Sun, 9 Feb 2025 17:22:15 -0800\n+Subject: [PATCH] remove sgemm global variables\n+\n+rem... |
huggingface/transformers | 43,033 | Fix Qwen3-VL typos and clarify image/video/vision token descriptions | # What does this PR do?
hey, this PR fixes several small docstring issues in Qwen3-VL and adds a few clarifications:
- correction for the `videos` argument description ("image" → "video") in `Qwen3VLProcessor.__call__()`.
- fixes various other typos in the processing & configuration files.
- the description was... | ccc7d90bcb4a94cd0888925c6abac941f3b8151d | 37c9c941664cc402ae381bbaeb2e8e1b1434af22 | 10 | medium | [
{
"filename": "src/transformers/models/qwen3_vl/configuration_qwen3_vl.py",
"patch": "@@ -110,7 +110,7 @@ class Qwen3VLTextConfig(PreTrainedConfig):\n Dictionary containing the configuration parameters for the RoPE embeddings. The dictionary should contain\n a value for `rope_theta` ... |
electron/electron | 49,319 | build: use @electron-ci/dev-root for package.json default | Backport of #49154
See that PR for details.
Notes: none
| 35a531953bafad7c8e7470bf234060660b80bf05 | 218300e57f0864940ddd091f370ab2311435f5fd | 7 | medium | [
{
"filename": "package.json",
"patch": "@@ -1,5 +1,5 @@\n {\n- \"name\": \"electron\",\n+ \"name\": \"@electron-ci/dev-root\",\n \"version\": \"0.0.0-development\",\n \"repository\": \"https://github.com/electron/electron\",\n \"description\": \"Build cross platform desktop apps with JavaScript, HTM... |
ollama/ollama | 8,975 | doc: add link to Lunary integration to the Observability section | 484a99e428bff772886f4585751f897db4ff7fbd | 38117fba83c8291e28abe564d56600c200ff4cab | 2 | high | [
{
"filename": "README.md",
"patch": "@@ -551,7 +551,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.\n \n ### Observability\n-\n+- [Lunary](https://lunary.ai/docs/integrations/ollama) is the leading... | |
vercel/next.js | 88,441 | Revert "[Turbopack] Move DirList to its own module" | Reverts vercel/next.js#88256 | d90847dead1f5c135741043f7720844583a26d0a | 608eec0504503b52a3df9d3c7c53c439d0c9c889 | 5 | medium | [
{
"filename": "Cargo.lock",
"patch": "@@ -9867,7 +9867,6 @@ dependencies = [\n \"strsim 0.11.1\",\n \"swc_core\",\n \"swc_sourcemap\",\n- \"tempfile\",\n \"tokio\",\n \"tracing\",\n \"turbo-bincode\",",
"additions": 0,
"deletions": 1
},
{
"filename": "turbopack/crates/turbopack-ecmascr... |
ggml-org/llama.cpp | 20,212 | Adding LLMKube to Infrastructure list on README | LLMKube is a Kubernetes operator for llama.cpp. It handles model downloads, GPU scheduling (Nvidia CUDA and Apple Silicon Metal), health probes, and Prometheus metrics through Metal and InferenceService CRDs.
Related: #6546 | 2b10b62677af99fcd07b69aa05561e499bb070dc | a95047979a1671be970398a7c8073159ac71013e | 25 | medium | [
{
"filename": "README.md",
"patch": "@@ -259,6 +259,8 @@ Instructions for adding support for new models: [HOWTO-add-model.md](docs/develo\n - [llama-swap](https://github.com/mostlygeek/llama-swap) - transparent proxy that adds automatic model switching with llama-server\n - [Kalavai](https://github.com/kala... |
huggingface/transformers | 42,967 | Make EoMT visible to type checkers | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this w... | a7f29523361b2cc12e51c1f5133d95f122f6f45c | b38375399cec29888108f09b093f2e6d86ee8bd5 | 8 | medium | [
{
"filename": "src/transformers/models/__init__.py",
"patch": "@@ -121,9 +121,11 @@\n from .emu3 import *\n from .encodec import *\n from .encoder_decoder import *\n+ from .eomt import *\n from .ernie import *\n from .ernie4_5 import *\n from .ernie4_5_moe import *\n+ from .ern... |
vercel/next.js | 88,439 | [scripts] Avoid conflicting type for `pack-next --compress` | `type: 'string'` conflicts with `choices: T[]`. It's basically `string | T` which just collapses to `string` if `T` is a union of string literals.
Before:
```console
$ p pack-next --help
...
--compress ... [string] [choices: "none", "strip"]
```
After:
```console
$ p pack-next --help
...
--compress .... | 7a2f4ad140b6e00d1d240ce84c6c9c7ca7550ba4 | 0fd7d90d6385fbf0822db50c919272fe72b86997 | 4 | medium | [
{
"filename": "scripts/pack-next.ts",
"patch": "@@ -24,14 +24,6 @@ const NEXT_BA_TARBALL = `${TARBALLS}/next-bundle-analyzer.tar`\n \n type CompressOpt = 'none' | 'strip' | 'objcopy-zlib' | 'objcopy-zstd'\n \n-interface CliOptions {\n- jsBuild?: boolean\n- project?: string\n- tar?: boolean\n- compress?:... |
vuejs/vue | 7,826 | Clean shell scripts | Hi,
I fixed various things spotted by shellcheck.
**What kind of change does this PR introduce?** (check at least one)
- [ ] Bugfix
- [ ] Feature
- [x] Code style update
- [ ] Refactor
- [ ] Build-related changes
- [ ] Other, please describe:
**Does this PR introduce a breaking change?** (check one)
... | 2c52c4265ba420ff47dc35eb1060a57c0813ee5d | 943e5c242b6c3a8cc92478dc95aba0a5b4b49588 | 27 | medium | [
{
"filename": "scripts/release-weex.sh",
"patch": "@@ -1,6 +1,7 @@\n+#!/bin/bash\n set -e\n-CUR_VERSION=`node build/get-weex-version.js -c`\n-NEXT_VERSION=`node build/get-weex-version.js`\n+CUR_VERSION=$(node build/get-weex-version.js -c)\n+NEXT_VERSION=$(node build/get-weex-version.js)\n \n echo \"Current:... |
nodejs/node | 61,123 | tools: use sparse-checkout in linter jobs | We can reduce our usage by avoiding downloading files we don't need. This seems to hold true only if the list of files is small; for jobs that need almost all files in the repo, it's still faster to download the full repo, so no sparse checkout for them.
<!--
Before submitting a pull request, please read:
- the CO... | null | 01ebb475a40f385ea4c241a04d0bfb599fefb64c | null | low | [
{
"filename": ".github/workflows/linters.yml",
"patch": "@@ -163,6 +163,16 @@ jobs:\n - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n with:\n persist-credentials: false\n+ sparse-checkout: |\n+ /Makefile\n+ /benchmark/\n+ ... |
ggml-org/llama.cpp | 20,226 | server : correct index on finish in OAI completion streams | When calling `/v1/chat/completions` with `"stream": true` and an `n` value greater than 1, the stop message would always have its `index` set to `0` instead of the correct index, making it impossible to tell which stream actually finished. | 566059a26b0ce8faec4ea053605719d399c64cc5 | ff52ee964d9051e07da0e89a66567f0b893129c6 | 8 | medium | [
{
"filename": "tools/server/server-task.cpp",
"patch": "@@ -827,7 +827,7 @@ json server_task_result_cmpl_final::to_json_oaicompat_chat_stream() {\n {\"choices\", json::array({\n json {\n {\"finish_reason\", nullptr},\n- {\"index\", 0},\n+ ... |
ollama/ollama | 8,452 | fix default modelfile for create | 32bd37adf805f5224a4fa18c2d4e7f33d9a972d8 | a420a453b4783841e3e79c248ef0fe9548df6914 | 13 | medium | [
{
"filename": "cmd/cmd.go",
"patch": "@@ -59,7 +59,7 @@ func getModelfileName(cmd *cobra.Command) (string, error) {\n \n \t_, err = os.Stat(absName)\n \tif err != nil {\n-\t\treturn filename, err\n+\t\treturn \"\", err\n \t}\n \n \treturn absName, nil",
"additions": 1,
"deletions": 1
},
{
"f... | |
huggingface/transformers | 43,105 | fix TP issue within the device mesh (Tensor Parallel group) |
- distributed: @3outeille @ArthurZucker
if you follow the readme in https://github.com/huggingface/accelerate/tree/main/examples/torch_native_parallelism to run nd_parallel.py,
it crashes like:
```
[rank2]: Traceback (most recent call last):
[rank2]: File "/accelerate/examples/torch_native_parallelism/nd_parallel.py"... | d6a6c82680cba9c51decdacac6dd6315ea4a766a | 314a45154650cc594d3ec9a8b1099e50daf17e5a | 9 | medium | [
{
"filename": "src/transformers/core_model_loading.py",
"patch": "@@ -1064,7 +1064,7 @@ def convert_and_load_state_dict_in_model(\n if getattr(mapping, \"distributed_operation\", None) is None:\n tp_layer = ALL_PARALLEL_STYLES[model.tp_plan[matched_tp_pattern]].__... |
electron/electron | 49,221 | build(deps): bump actions/download-artifact from 6.0.0 to 7.0.0 | Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6.0.0 to 7.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/download-artifact/releases">actions/download-artifact's releases</a>.</em></p>
<blockquote>
<h2>v7.0.0</h2>
<h2>v7 - Wha... | fe0caa0e0d9f5ff3f6238460d448870ed092e178 | a89b2cd9bcf80688f6d687e1ada305a3af50bac6 | 22 | medium | [
{
"filename": ".github/workflows/pipeline-segment-electron-test.yml",
"patch": "@@ -168,12 +168,12 @@ jobs:\n echo \"DISABLE_CRASH_REPORTER_TESTS=true\" >> $GITHUB_ENV\n echo \"IS_ASAN=true\" >> $GITHUB_ENV\n - name: Download Generated Artifacts\n- uses: actions/download-artifact@01... |
facebook/react | 31,711 | Clean up findFiberByHostInstance from DevTools Hook | The need for this was removed in https://github.com/facebook/react/pull/30831
Since the new DevTools version has been released for a while and we expect people to more or less auto-update. Future versions of React don't need this.
Once we remove the remaining uses of `getInstanceFromNode` e.g. in the deprecated i... | null | 3b597c0576977773910c77e075cc6d6308decb04 | null | low | [
{
"filename": "packages/react-reconciler/src/ReactFiberReconciler.js",
"patch": "@@ -42,7 +42,6 @@ import {enableSchedulingProfiler} from 'shared/ReactFeatureFlags';\n import ReactSharedInternals from 'shared/ReactSharedInternals';\n import {\n getPublicInstance,\n- getInstanceFromNode,\n rendererVersi... |
huggingface/transformers | 43,084 | Fix Qwen3OmniMoe Talker weight loading and config initialization | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this w... | 0642963ba13f2dae0596fe489415569e1d91fbda | 64a476b718728c76aea8a847aeabbf8074fe3202 | 4 | medium | [
{
"filename": "src/transformers/models/qwen3_omni_moe/configuration_qwen3_omni_moe.py",
"patch": "@@ -907,6 +907,7 @@ def __init__(\n self.audio_start_token_id = audio_start_token_id\n self.vision_start_token_id = vision_start_token_id\n self.speaker_id = speaker_id\n+ self.in... |
ggml-org/llama.cpp | 20,067 | cli: Don't clear system prompt when using '/clear' | Add system prompt to messages when clearing chat history.
| b5ed0e058c98645c1cab752d012fdc4be22bceef | f7db3f37895ad3a98405afdb84b42ed462542b12 | 6 | medium | [
{
"filename": "tools/cli/cli.cpp",
"patch": "@@ -382,12 +382,15 @@ int main(int argc, char ** argv) {\n modalities += \", audio\";\n }\n \n- if (!params.system_prompt.empty()) {\n- ctx_cli.messages.push_back({\n- {\"role\", \"system\"},\n- {\"content\", params.... |
huggingface/transformers | 43,108 | Improve `_init_weights` tests | # What does this PR do?
| 0642963ba13f2dae0596fe489415569e1d91fbda | 486a7229ccb6e8a7bedd3aa4de043e19ee0311fb | 3 | medium | [
{
"filename": "tests/test_modeling_common.py",
"patch": "@@ -1145,39 +1145,42 @@ def test_can_init_all_missing_weights(self):\n # For now, skip everything older than 2023 and \"important models\" (too much models to patch otherwise)\n # TODO: relax this as we patch more and more mode... |
ollama/ollama | 8,941 | ci: use windows-2022 to sign and bundle | ollama requires `vcruntime140_1.dll` which isn't found on 2019. previously the job used the windows runner (2019) but it explicitly installs 2022 to build the app. since the sign job doesn't actually build anything, it can use the windows-2022 runner instead.
resolves #8936 | 1c198977ecdd471aee827a378080ace73c02fa8d | 1f766c36fb61f7b1969664645bf38dae93f568a2 | 16 | medium | [
{
"filename": ".github/workflows/release.yaml",
"patch": "@@ -242,7 +242,7 @@ jobs:\n dist\\${{ matrix.os }}-${{ matrix.arch }}-app.exe\n \n windows-sign:\n- runs-on: windows\n+ runs-on: windows-2022\n environment: release\n needs: [windows-depends, windows-build]\n steps:",
... |
nodejs/node | 60,913 | process: preserve AsyncLocalStorage on queueMicrotask only when needed | Does the same optimization as https://github.com/nodejs/node/pull/59873 for `queueMicrotask`
branch:
```sh
./node benchmark/run.js --filter queue-microtask process
process/queue-microtask-breadth.js
process/queue-microtask-breadth.js n=400000: 18,938,347.488090977
process/queue-microtask-depth.js
process... | null | a65421a679a09fd29d9cc544d8718fb69ee99098 | null | low | [
{
"filename": "lib/internal/process/task_queues.js",
"patch": "@@ -25,6 +25,7 @@ const {\n \n const {\n getDefaultTriggerAsyncId,\n+ getHookArrays,\n newAsyncId,\n initHooksExist,\n emitInit,\n@@ -158,13 +159,18 @@ const defaultMicrotaskResourceOpts = { requireManualDestroy: true };\n function queu... |
vuejs/vue | 7,878 | fix: correct the `has` implementation in the `_renderProxy` | It's feasible that someone might ask if something other than a string is
in the proxy such as a `Symbol` that lacks a `charAt` method. This aligns
the implementation with the `getHandler`.
<!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pu... | null | 7b387390aa917edffc0eabce0b4186ea1ef40e2c | null | low | [
{
"filename": "src/core/instance/proxy.js",
"patch": "@@ -45,7 +45,7 @@ if (process.env.NODE_ENV !== 'production') {\n const hasHandler = {\n has (target, key) {\n const has = key in target\n- const isAllowed = allowedGlobals(key) || key.charAt(0) === '_'\n+ const isAllowed = allowedGl... |
vercel/next.js | 88,403 | Update Rspack production test manifest | This auto-generated PR updates the production integration test manifest used when testing Rspack. | 9857ed1e091cc39eefe2f756ee4ea16789f5ded0 | fa00f944482c8585a9c6e2f51b0cc62fd3f0e766 | 19 | medium | [
{
"filename": "test/rspack-build-tests-manifest.json",
"patch": "@@ -7308,9 +7308,9 @@\n \"runtimeError\": false\n },\n \"test/e2e/app-dir/static-shell-debugging/static-shell-debugging.test.ts\": {\n- \"passed\": [],\n+ \"passed\": [\"static-shell-debugging should render the full page\"],\n ... |
ollama/ollama | 8,953 | Add "LocalLLM" Repo link | ec6121c331e66b7e6abe42866002b03768c0db2c | 484a99e428bff772886f4585751f897db4ff7fbd | 1 | high | [
{
"filename": "README.md",
"patch": "@@ -379,6 +379,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [MinimalNextOllamaChat](https://github.com/anilkay/MinimalNextOllamaChat) (Minimal Web UI for Chat and Model Control)\n - [Chipper](https://github.com/TilmanGriesel/chipper) AI interfac... | |
facebook/react | 33,109 | [mcp] Add proper web-vitals metric collection | Multiple things here:
- Improve the mean calculation for metrics so we don't report 0 when web-vitals fail to be retrieved
- Improve the UI chaos monkey to use Puppeteer APIs, since only those trigger INP/CLS metrics (emulated mouse clicks are required)
- Add logic to navigate to a temp page after render since some web-vi... | 26ecc98a0014700524e78d938e3654c73213cf3b | 7a2c7045aed222b1ece44a18db6326f2f10c89e3 | 18 | medium | [
{
"filename": "compiler/packages/react-mcp-server/src/index.ts",
"patch": "@@ -22,6 +22,12 @@ import assertExhaustive from './utils/assertExhaustive';\n import {convert} from 'html-to-text';\n import {measurePerformance} from './tools/runtimePerf';\n \n+function calculateMean(values: number[]): string {\n+ ... |
ollama/ollama | 8,688 | Add library in Zig. | 7e402ebb8cc95e1fd2b59fe6d9ef9baf8972977e | ec6121c331e66b7e6abe42866002b03768c0db2c | 2 | high | [
{
"filename": "README.md",
"patch": "@@ -492,6 +492,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [Ollama for Haskell](https://github.com/tusharad/ollama-haskell)\n - [multi-llm-ts](https://github.com/nbonamy/multi-llm-ts) (A Typescript/JavaScript library allowing access to differen... | |
facebook/react | 33,094 | Remove useId semantics from View Transition name generation | Originally I thought it was important that SSR used the same View Transition name as the client so that the Fizz runtime could emit those names and then the client could pick up and take over. However, I no longer believe that approach is feasible. Instead, the names can be generated only during that particular animati... | 54a50729cc47a884c2110d7c59dd5f850748e142 | 845d93742fb090e7a35abea409a55e2a14613255 | 1 | medium | [
{
"filename": "packages/react-reconciler/src/ReactFiberBeginWork.js",
"patch": "@@ -36,8 +36,6 @@ import type {\n OffscreenQueue,\n OffscreenInstance,\n } from './ReactFiberOffscreenComponent';\n-import type {ViewTransitionState} from './ReactFiberViewTransitionComponent';\n-import {assignViewTransition... |
vuejs/vue | 7,802 | fix(ssr): add Fragment in RenderState typing |
**What kind of change does this PR introduce?** (check at least one)
- [ ] Bugf... | 7af77daf7761af7ad5550f300e1750bc667484c1 | e0222da95308c46b0093901639e112e2370eb1e0 | 1 | high | [
{
"filename": "src/server/render-context.js",
"patch": "@@ -6,7 +6,12 @@ type RenderState = {\n type: 'Element';\n rendered: number;\n total: number;\n+ children: Array<VNode>;\n endTag: string;\n+} | {\n+ type: 'Fragment';\n+ rendered: number;\n+ total: number;\n children: Array<VNode>;\n } |... |
electron/electron | 49,312 | fix: drag regions in child windows (manual backport 40-x-y) | #### Description of Change
Manual backport of #49231. See that PR for details.
#### Release Notes
Notes: Fixed drag regions in child windows
| f8d3e0f3cd7eff36e7e78392865a9f722b5d127c | ff237842462d0fda96bf8018d105561451544dcf | 27 | medium | [
{
"filename": "shell/browser/api/electron_api_web_contents.cc",
"patch": "@@ -754,6 +754,10 @@ WebContents::WebContents(v8::Isolate* isolate,\n script_executor_ = std::make_unique<extensions::ScriptExecutor>(web_contents);\n #endif\n \n+ // TODO: This works for main frames, but does not work for child fr... |
ggml-org/llama.cpp | 20,160 | opencl: add l2_norm | Add l2_norm. | ba2ff79e43bad81fd19ecc8324431c93499e459e | 6fce5c6a7dba6a3e1df0aad1574b78d1a1970621 | 10 | medium | [
{
"filename": "ggml/src/ggml-opencl/CMakeLists.txt",
"patch": "@@ -116,6 +116,7 @@ set(GGML_OPENCL_KERNELS\n neg\n norm\n relu\n+ l2_norm\n rms_norm\n rope\n scale",
"additions": 1,
"deletions": 0
},
{
"filename": "ggml/src/ggml-opencl/ggml-opencl.cpp",
"patch"... |
nodejs/node | 61,026 | test: make buffer sizes 32bit-aware in test-internal-util-construct-sab | Fixes: #61025
Refs: #60497
The range checks for bound SharedArrayBuffer construction weren't compatible with the maximum array buffer size on 32-bit architectures. | null | ed6ec9626eddee3ad00edfea4b0535bb9ee2638d | null | low | [
{
"filename": "test/parallel/test-internal-util-construct-sab.js",
"patch": "@@ -3,16 +3,20 @@\n \n require('../common');\n const assert = require('assert');\n+const { kMaxLength } = require('buffer');\n const { isSharedArrayBuffer } = require('util/types');\n const { constructSharedArrayBuffer } = require(... |
ggml-org/llama.cpp | 20,171 | Autoparser: add optional argument reshuffle capability | Requires #18675
Some of the models that use tagged parsers (i.e. those that use quasi-XML tags for the arguments and their values) prefer a certain argument order with certain common tools. However, currently the grammar enforces the order in which the arguments are declared. If the tool tries to fill one optional ... | 566059a26b0ce8faec4ea053605719d399c64cc5 | 2f2923f89526d102bd3c29188db628a8dbf507b6 | 3 | medium | [
{
"filename": "common/chat-auto-parser-generator.cpp",
"patch": "@@ -302,8 +302,9 @@ common_peg_parser analyze_tools::build_tool_parser_tag_tagged(parser_build_conte\n params.at(\"required\").get_to(required);\n }\n \n- // Build parser for each argument\n- std::vector<commo... |
huggingface/transformers | 43,095 | fix(qwen2vl): make Qwen2VLImageProcessor respect config['size'], align with fast version | …n with Fast version\n\nFixes: https://github.com/huggingface/transformers/issues/42910\nQwen2VLImageProcessor no longer overrides config["size"], aligning with the Fast processing for consistency and ensuring Qwen3VL works correctly.
# What does this PR do?
| null | b19834d8352ccd6bb34e280f94d376e2bfd3bf40 | null | low | [
{
"filename": "src/transformers/models/paddleocr_vl/image_processing_paddleocr_vl.py",
"patch": "@@ -165,8 +165,9 @@ def __init__(\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n- if size is not None and (\"shortest_edge\" not in size or \"longest_edge\" not in size):\n- ... |
vercel/next.js | 88,404 | Update Rspack development test manifest | This auto-generated PR updates the development integration test manifest used when testing Rspack. | null | 2b314e744a0d8035a2234f3359976689352d850e | null | low | [
{
"filename": "test/rspack-dev-tests-manifest.json",
"patch": "@@ -9379,9 +9379,9 @@\n \"runtimeError\": false\n },\n \"test/e2e/app-dir/static-shell-debugging/static-shell-debugging.test.ts\": {\n- \"passed\": [],\n+ \"passed\": [\"static-shell-debugging should render the full page\"],\n ... |
facebook/react | 33,059 | Add test for multiple form submissions | Test for #30041 and #33055 | null | 79586c7eb626c6b9362c308a54c9ee5b66e640e5 | null | low | [
{
"filename": "packages/react-dom/src/__tests__/ReactDOMForm-test.js",
"patch": "@@ -1670,6 +1670,37 @@ describe('ReactDOMForm', () => {\n expect(divRef.current.textContent).toEqual('Current username: acdlite');\n });\n \n+ it('parallel form submissions do not throw', async () => {\n+ const formRe... |
ollama/ollama | 8,905 | Update faq.md | Add the port number so people will fully understand the OLLAMA_HOST environment variable, especially when the port is not the default 11434. | null | a400df48c06f6526ed74bfa3fd1af783ed0b4899 | null | low | [
{
"filename": "docs/faq.md",
"patch": "@@ -66,7 +66,7 @@ If Ollama is run as a macOS application, environment variables should be set usi\n 1. For each environment variable, call `launchctl setenv`.\n \n ```bash\n- launchctl setenv OLLAMA_HOST \"0.0.0.0\"\n+ launchctl setenv OLLAMA_HOST \"0.0.0.0:... |
nodejs/node | 61,084 | crypto: move DEP0182 to End-of-Life | This commit moves support for implicitly short GCM authentication tags to End-of-Life status, thus requiring applications to explicitly specify the `authTagLength` for authentication tags shorter than 128 bits.
There is quite a bit of refactoring to be done in the C++ source code. This commit does not do that; inste... | e20175993a4dc7090ab5419edfb3c4c74cf826c8 | dff46c07c37f5cacc63451b84ba21478c4bbc45c | 15 | medium | [
{
"filename": "doc/api/crypto.md",
"patch": "@@ -925,6 +925,11 @@ When passing a string as the `buffer`, please consider\n <!-- YAML\n added: v1.0.0\n changes:\n+ - version: REPLACEME\n+ pr-url: https://github.com/nodejs/node/pull/61084\n+ description: Using GCM tag lengths other than 128 bits withou... |
vuejs/vue | 7,121 | feat(core): support KeyboardEvent.key for built in keyboard event modifiers, fix #6900 |
**What kind of change does this PR introduce?** (check at least one)
- [ ] Bugf... | null | 1c8e2e88ed2d74a02178217b318564b73a096c18 | null | low | [
{
"filename": "src/core/instance/render-helpers/check-keycodes.js",
"patch": "@@ -3,6 +3,26 @@\n import config from 'core/config'\n import { hyphenate } from 'shared/util'\n \n+const keyNames: { [key: string]: string | Array<string> } = {\n+ esc: 'Escape',\n+ tab: 'Tab',\n+ enter: 'Enter',\n+ space: ' '... |
electron/electron | 49,241 | fix: `webRequest.onBeforeSendHeaders` not being able to modify reserved headers | Backport of #49226
See that PR for details.
Notes: Requests sent via `net` are now capable of having their headers modified to use reserved headers via `webRequest` | null | c4bfc1491a797b7273d9ea79d7334b01ba62ce48 | null | low | [
{
"filename": "shell/browser/net/proxying_url_loader_factory.cc",
"patch": "@@ -55,9 +55,9 @@ ProxyingURLLoaderFactory::InProgressRequest::InProgressRequest(\n proxied_loader_receiver_(this, std::move(loader_receiver)),\n target_client_(std::move(client)),\n current_response_(network::mojo... |
huggingface/transformers | 43,021 | Move missing weights and non-persistent buffers to correct device earlier | # What does this PR do?
As per the title. Better to send everything directly to the correct device, rather than sending to CPU first, initializing, and then moving to the device | 315dcbe45cee1489a32fc228a80502b0a150936c | d6a6c82680cba9c51decdacac6dd6315ea4a766a | 1 | medium | [
{
"filename": "src/transformers/core_model_loading.py",
"patch": "@@ -31,7 +31,7 @@\n \n import torch\n \n-from .integrations.accelerate import offload_weight\n+from .integrations.accelerate import get_device, offload_weight\n from .integrations.tensor_parallel import ALL_PARALLEL_STYLES\n from .utils impor... |
ollama/ollama | 8,603 | Update the Documentation. | The DeepSeek model has also been added to the documentation section:
1. 671B
2. 70B
3. 1.5B | b901a712c6b0afe88aef7e5318f193d5b889cf34 | 7e402ebb8cc95e1fd2b59fe6d9ef9baf8972977e | 1 | high | [
{
"filename": "README.md",
"patch": "@@ -54,6 +54,8 @@ Here are some example models that can be downloaded:\n \n | Model | Parameters | Size | Download |\n | ------------------ | ---------- | ----- | -------------------------------- |\n+| DeepSeek-R1 | 7B ... |
vuejs/vue | 7,596 | fix(core): props getter shouldn't collect dependencies during initialization of data option (fix #7573) |
**What kind of change does this PR introduce?** (check at least one)
- [x] Bugf... | 6931a47c5c5664d4df2a99dd5c1b275ccef625bc | 318f29fcdf3372ff57a09be6d1dc595d14c92e70 | 7 | medium | [
{
"filename": "src/core/instance/lifecycle.js",
"patch": "@@ -7,6 +7,7 @@ import { createEmptyVNode } from '../vdom/vnode'\n import { observerState } from '../observer/index'\n import { updateComponentListeners } from './events'\n import { resolveSlots } from './render-helpers/resolve-slots'\n+import { push... |
ggml-org/llama.cpp | 19,861 | [ggml-quants] Add memsets and other fixes for IQ quants | While trying to stop my Qwen3.5 quants from getting a ton of "Oops: found point X not on grid ...", I (and claude) came across a potentially big issue
Using gdb, it seems that `L` is often initialized to non-zero memory, and so when it's read, it has garbage data in it that's causing the quantizations to go awry when ... | null | 649f06481e363fa02a53b89af9659645730c367b | null | low | [
{
"filename": "ggml/src/ggml-quants.c",
"patch": "@@ -3104,6 +3104,11 @@ static void quantize_row_iq2_xxs_impl(const float * GGML_RESTRICT x, void * GGML\n }\n float scale = make_qp_quants(32, kMaxQ+1, xval, (uint8_t*)L, weight);\n float eff_max = scale*kMaxQ;\n+ ... |
ollama/ollama | 8,933 | add gfx instinct gpus | add exclusion rule for windows to omit these targets since windows rocm does not support these gpus. this behavior can be disabled by setting `WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX` to an empty string (or some other regular expression) to force building for these targets on windows | ae7e368f75488a98ea7dae0131dfaf5963ad1a9c | abb8dd57f8a86a71b5f8fe1f059aee3636a658b1 | 4 | medium | [
{
"filename": "CMakeLists.txt",
"patch": "@@ -85,13 +85,20 @@ if(CMAKE_CUDA_COMPILER)\n )\n endif()\n \n+set(WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX \"^gfx(906|908|90a):xnack[+-]$\"\n+ CACHE STRING\n+ \"Regular expression describing AMDGPU_TARGETS not supported on Windows. Override to force building ... |
ollama/ollama | 8,877 | add React Native client Integrations | I saw there was no React Native client yet, so I built one for Ollama; I'd like to add the repo link to Community Integrations | e8d4eb3e68b222f930f7818744ac6f6084bce0a7 | 6ab4ba4c26afb61e14cb5fa16bd3401f9bfdb2e7 | 1 | high | [
{
"filename": "README.md",
"patch": "@@ -353,6 +353,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [Web management](https://github.com/lemonit-eric-mao/ollama-web-management) (Web management page)\n - [Promptery](https://github.com/promptery/promptery) (desktop client for Ollama.)\n ... |
facebook/react | 32,779 | [compiler][repro] Nested fbt test fixture |
Ideally we should detect and bail out on this case to avoid babel build failures.
| null | c129c2424b662a371865a0145c562a1cf934b023 | null | low | [
{
"filename": "compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/fbt/error.todo-fbt-param-nested-fbt.expect.md",
"patch": "@@ -0,0 +1,56 @@\n+\n+## Input\n+\n+```javascript\n+import fbt from 'fbt';\n+import {Stringify} from 'shared-runtime';\n+\n+/**\n+ * MemoizeFbtAndMacroOperan... |
nodejs/node | 61,097 | test_runner: fix lazy `test.assert` accessor | The intention was for this to be a define-on-access lazy property, but the object passed to `Object.defineProperty` isn't a descriptor, so this never happens. As a result, the `assert` object is currently reconstructed on every access; `test.assert === test.assert` is false at present.
Changed to a conventional lazy... | null | 3b7477c4d57862d587026594e2f1b5ba15a62a14 | null | low | [
{
"filename": "lib/test.js",
"patch": "@@ -62,14 +62,18 @@ ObjectDefineProperty(module.exports, 'snapshot', {\n },\n });\n \n+let lazyAssert;\n+\n ObjectDefineProperty(module.exports, 'assert', {\n __proto__: null,\n configurable: true,\n enumerable: true,\n get() {\n- const { register } = requ... |
vercel/next.js | 88,119 | Turbopack: make GraphTraversal deterministically calling all nodes before erroring | ### What?
The graph traversal used to exit early when there were errors. This led to repeatedly re-executing the traversal to discover more tasks one by one, resulting in bad incremental performance. | null | 2b04d2eecd5ac482157e8f69e781424d9d00b8e0 | null | low | [
{
"filename": "turbopack/crates/turbo-tasks/src/graph/control_flow.rs",
"patch": "@@ -8,6 +8,4 @@ pub enum VisitControlFlow {\n /// The edge is excluded, and the traversal should not continue on the outgoing edges of the\n /// given node.\n Exclude,\n- /// The traversal should abort and retur... |
electron/electron | 49,226 | fix: `webRequest.onBeforeSendHeaders` not being able to modify reserved headers | Fix `webRequest.onBeforeSendHeaders` not being able to modify headers like `Proxy-Authorization` for requests made via the `net` module.
When using `webRequest.onBeforeSendHeaders` to inject headers like `Proxy-Authorization` into requests made via `net.request` or `net.fetch`, the request fails with `net::ERR_INVAL... | null | 3df3a6a736b93e0d69fa3b0c403b33f201287780 | null | low | [
{
"filename": "shell/browser/net/proxying_url_loader_factory.cc",
"patch": "@@ -55,9 +55,9 @@ ProxyingURLLoaderFactory::InProgressRequest::InProgressRequest(\n proxied_loader_receiver_(this, std::move(loader_receiver)),\n target_client_(std::move(client)),\n current_response_(network::mojo... |
vuejs/vue | 7,671 | fix(model): direct access array with multiple checkboxes | Closes #7670
**What kind of change does this PR introduce?** (check at least on... | 1c0b4af5fd2f9e8173b8f4718018ee80a6313872 | 550c3c0d14af5485bb7e507c504664a7136e9bf9 | 22 | medium | [
{
"filename": "src/platforms/web/compiler/directives/model.js",
"patch": "@@ -86,8 +86,8 @@ function genCheckboxModel (\n 'if(Array.isArray($$a)){' +\n `var $$v=${number ? '_n(' + valueBinding + ')' : valueBinding},` +\n '$$i=_i($$a,$$v);' +\n- `if($$el.checked){$$i<0&&(${value}=$$a... |
ollama/ollama | 8,883 | readme: add ChibiChat to community integrations | 31acd1ebf97528932714619f7123eeff25e0e149 | e8d4eb3e68b222f930f7818744ac6f6084bce0a7 | 2 | high | [
{
"filename": "README.md",
"patch": "@@ -373,6 +373,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [AI Toolkit for Visual Studio Code](https://aka.ms/ai-tooklit/ollama-docs) (Microsoft-official VSCode extension to chat, test, evaluate models with Ollama support, and use them in your ... | |
huggingface/transformers | 42,920 | Fix audio pipelines | # What does this PR do?
Attribute `_processor_class` was deleted from subprocessors but there are still places where we try to access it. This PR deletes `_processor_class` everywhere and fixes failing audio pipeline tests | null | 7bbfc19a4583d951eaec058cf467e81e6c97ee32 | null | low | [
{
"filename": "src/transformers/models/wav2vec2_bert/convert_wav2vec2_seamless_checkpoint.py",
"patch": "@@ -150,7 +150,6 @@ def convert_wav2vec2_bert_checkpoint(\n \n # save feature extractor\n fe = SeamlessM4TFeatureExtractor(padding_value=1)\n- fe._processor_class = \"Wav2Vec2BertProcessor\"\n... |
vercel/next.js | 88,306 | Turbopack: more stale details for tracing | ### What?
Add the point of stale detection to the trace | a5f8baf4ddd7dffd00355696631fbed1dea6961a | 77071f90f61d7680f286dfe8e970f5f63cd2417e | 9 | medium | [
{
"filename": "turbopack/crates/turbo-tasks-backend/src/backend/mod.rs",
"patch": "@@ -1815,7 +1815,7 @@ impl<B: BackingStorage> TurboTasksBackendInner<B> {\n else {\n // Task was stale and has been rescheduled\n #[cfg(feature = \"trace_task_details\")]\n- span.rec... |
ollama/ollama | 8,900 | build(rocm): add numa, elf | add back libelf and libnuma for better compatibility
resolves #8884 | 9a4757ae6690605298fda69c2cc1909a60508d1f | ae7e368f75488a98ea7dae0131dfaf5963ad1a9c | 2 | high | [
{
"filename": "CMakeLists.txt",
"patch": "@@ -101,7 +101,7 @@ if(CMAKE_HIP_COMPILER)\n install(TARGETS ggml-hip\n RUNTIME_DEPENDENCIES\n DIRECTORIES ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR}\n- PRE_INCLUDE_REGEXES hipblas rocblas amdhip64 rocsolver amd_... |
huggingface/transformers | 43,019 | Do not use accelerate hooks if the device_map has only 1 device | # What does this PR do?
As per the title. In case of a single device, it's much easier and safer to simply move everything to the correct device, rather than to add hooks so that non-persistent buffers etc can be correctly used
From my benchmarks, this ends up being more efficient at forward-time, as expected (no... | 50ddcfd2b03dfd4ff03eacb1266d1439affecf53 | 315dcbe45cee1489a32fc228a80502b0a150936c | 2 | medium | [
{
"filename": "src/transformers/modeling_utils.py",
"patch": "@@ -3963,7 +3963,7 @@ def from_pretrained(\n weight_mapping=weight_conversions,\n )\n \n- model.eval() # Set model in evaluation mode to deactivate DropOut modules by default\n+ model.eval() # Set model in eval... |
ggml-org/llama.cpp | 20,120 | server : preserve anthropic thinking blocks in conversion | Fix Anthropic `/v1/messages` conversion to preserve assistant `thinking` blocks as `reasoning_content` when converting to internal OpenAI-compatible chat messages.
Fixes #20090.
AI usage disclosure:
- AI was used in an assistive role for code review suggestions, small implementation adjustments, and command exec... | null | e68f2fb894d890eeead6acf0cc3341478312f1fd | null | low | [
{
"filename": "tools/server/server-common.cpp",
"patch": "@@ -1463,13 +1463,16 @@ json convert_anthropic_to_oai(const json & body) {\n json tool_calls = json::array();\n json converted_content = json::array();\n json tool_results = json::array();\n+ std::string... |
facebook/react | 32,765 | [compiler][bugfix] Bail out when a memo block declares hoisted fns |
Note that bailing out adds false positives for hoisted functions whose only references are within other functions. For example, this rewrite would be safe.
```js
// source program
function foo() {
return bar();
}
function bar() {
return 42;
}
// compiler output
let bar;
if (/* deps changed */) {
fun... | 9d795d3808f3202b36740a7a8eb60567bd7f6d90 | 0c1575cee8a78dd097edcafc307522ad000e372c | 29 | medium | [
{
"filename": "compiler/packages/babel-plugin-react-compiler/src/ReactiveScopes/PruneHoistedContexts.ts",
"patch": "@@ -5,10 +5,13 @@\n * LICENSE file in the root directory of this source tree.\n */\n \n+import {CompilerError} from '..';\n import {\n convertHoistedLValueKind,\n IdentifierId,\n+ Instr... |
ollama/ollama | 8,084 | ollama webui for local docker deployment | I made a lightweight ollama-webui that:
- lets you select models and chats via the browser,
- supports importing and exporting records,
- and supports local Docker deployments.
{
"filename": "README.md",
"patch": "@@ -369,6 +369,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [Minima](https://github.com/dmayboroda/minima) (RAG with on-premises or fully local workflow)\n - [aidful-ollama-model-delete](https://github.com/AidfulAI/aidful-ollama-model-delete) (U... |
nodejs/node | 61,024 | build: update test-wpt-report to use NODE instead of OUT_NODE | #60850 updated a target that never runs with Debug builds; this broke the ability for CI to upload daily wpt.fyi reports
This PR reverts the `test-wpt-report` target to use NODE which is what CI consistently sets
Refs: https://openjs-foundation.slack.com/archives/C03BJP63CH0/p1765400746909869 | null | 41c507f56ddff2cd6e5a5509e61bfceab826d9b2 | null | low | [
{
"filename": "Makefile",
"patch": "@@ -655,11 +655,15 @@ test-test426: all ## Run the Web Platform Tests.\n test-wpt: all ## Run the Web Platform Tests.\n \t$(PYTHON) tools/test.py $(PARALLEL_ARGS) wpt\n \n+# https://github.com/nodejs/node/blob/a00d95c73dcac8fc2b316238fb978a7d5aa650c6/.github/workflows/dai... |
huggingface/transformers | 43,002 | Small fixes for `make fixup` to run without error | # What does this PR do?
Small changes so that `make fixup` runs without error
| e9f0f8e0cb40be6c3addc88f1282723f6932813c | dc06f2dd2e3b7a4ce3f49392ac405307a9634355 | 7 | medium | [
{
"filename": ".circleci/config.yml",
"patch": "@@ -183,6 +183,7 @@ jobs:\n - run: python utils/check_modular_conversion.py\n - run: python utils/check_dummies.py\n - run: python utils/check_repo.py\n+ - run: python utils/check_modeling_structure.py\n ... |
ggml-org/llama.cpp | 20,149 | cpu: skip redundant ROPE cache updates | I was updating ROPE for Hexagon while using the CPU implementation as the reference and noticed that some threads may end up doing redundant ROPE cache updates for the rows that are mapped to other threads.
Added a bit of instrumentation and sure enough I see something like this:
<details>
```
thread-0: compute... | 2850bc6a1324fb810aecaf236dc5955a1c142a15 | ba2fd11cdf2ac70093ec4287e34dcfffa004f171 | 8 | medium | [
{
"filename": "ggml/src/ggml-cpu/ops.cpp",
"patch": "@@ -5803,28 +5803,33 @@ static void ggml_compute_forward_rope_flt(\n \n const int32_t * pos = (const int32_t *) src1->data;\n \n+ int64_t last_i2 = -1;\n+\n for (int64_t i3 = 0; i3 < ne3; i3++) { // batch\n for (int64_t i2 = 0; i2 < ne2... |
vuejs/vue | 7,737 | Fix/7730 | Forked from https://github.com/vuejs/vue/pull/7735
**What kind of change does t... | 1c0b4af5fd2f9e8173b8f4718018ee80a6313872 | db584931e20f9ad4b423cfc14d587f9d0240a565 | 23 | medium | [
{
"filename": "src/compiler/directives/model.js",
"patch": "@@ -15,8 +15,8 @@ export function genComponentModel (\n if (trim) {\n valueExpression =\n `(typeof ${baseValueExpression} === 'string'` +\n- `? ${baseValueExpression}.trim()` +\n- `: ${baseValueExpression})`\n+ `? ${b... |
ollama/ollama | 8,899 | build(rocm): add tinfo | b698f9a0d89c0dffbe8912cbd8b85f43f768ec1e | 9a4757ae6690605298fda69c2cc1909a60508d1f | 2 | high | [
{
"filename": "CMakeLists.txt",
"patch": "@@ -101,7 +101,7 @@ if(CMAKE_HIP_COMPILER)\n install(TARGETS ggml-hip\n RUNTIME_DEPENDENCIES\n DIRECTORIES ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR}\n- PRE_INCLUDE_REGEXES hipblas rocblas amdhip64 rocsolver amd_... | |
electron/electron | 49,196 | test: remove outdated disabled test | #### Description of Change
Refs https://github.com/electron/electron/issues/49169
The test was originally added in https://github.com/electron/electron/pull/35421.
At some point, certain tests stopped running. When @ckerr fixed that bug in https://github.com/electron/electron/pull/46816, he disabled this test ... | null | 2ab56adbbddb38ff0fbbe36da95a5e1ef4bb7851 | null | low | [
{
"filename": "spec/api-browser-window-spec.ts",
"patch": "@@ -2135,12 +2135,6 @@ describe('BrowserWindow module', () => {\n expect(w.fullScreen).to.be.true();\n });\n \n- // FIXME: this test needs to be fixed and re-enabled.\n- it.skip('does not go fullscreen if roun... |
huggingface/transformers | 43,008 | Torch distrib smallfix | # What does this PR do?
Seems that the TP/dist integ refactor broke a method to count bytes, which is in from_pretrained. cc @3outeille as well if needed (I'm setting a 1 world size by default without initializing dist :shrug: ) | 42512f7956c7cea5810b2981513178553c299407 | 50ddcfd2b03dfd4ff03eacb1266d1439affecf53 | 4 | medium | [
{
"filename": "src/transformers/modeling_utils.py",
"patch": "@@ -4556,7 +4556,7 @@ def get_total_byte_count(\n \n total_byte_count = defaultdict(lambda: 0)\n tied_param_names = model.all_tied_weights_keys.keys()\n- tp_plan = model._tp_plan\n+ tp_plan = model._tp_plan if torch.distributed.is_a... |
vercel/next.js | 88,323 | Turbopack: add family to database read span | ### What?
This groups database read spans per family | null | b09df0067b2d1fd806635d222f696dbffef5f0dc | null | low | [
{
"filename": "turbopack/crates/turbo-persistence/src/db.rs",
"patch": "@@ -1338,9 +1338,12 @@ impl<S: ParallelScheduler, const FAMILIES: usize> TurboPersistence<S, FAMILIES>\n /// might hold onto a block of the database and it should not be hold long-term.\n pub fn get<K: QueryKey>(&self, family: u... |
vuejs/vue | 7,704 | fix(compiler): return handler value for event modifiers instead of un… | fix(compiler): return handler value for event modifiers instead of undefined
**... | dc97a39c2f41ce57431d42d8b41811866f8e105c | 6bc75cacb72c0cc7f3d1041b5d9ff447ac2f5f69 | 15 | medium | [
{
"filename": "src/compiler/codegen/events.js",
"patch": "@@ -132,9 +132,9 @@ function genHandler (\n code += genModifierCode\n }\n const handlerCode = isMethodPath\n- ? handler.value + '($event)'\n+ ? `return ${handler.value}($event)`\n : isFunctionExpression\n- ? `(${h... |
ggml-org/llama.cpp | 20,130 | ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) | This patch addresses an Internal Compiler Error (Segmentation fault) observed with gcc 15 by replacing the intrinsic + cast with casting the data first and then calling the intrinsic.
This bypasses the buggy compiler path while maintaining identical instruction selection.
Performance Verification:
Assembly analysi... | 92f7da00b49ad814b95832dd6610a825bbdd3033 | c6980ff29ddc8e59c9c002dcaeec14182d893ed7 | 16 | medium | [
{
"filename": "ggml/src/ggml-cpu/llamafile/sgemm.cpp",
"patch": "@@ -2497,7 +2497,7 @@ class tinyBLAS_Q0_PPC {\n for (int r = 0; r < 8; r++) {\n const block_q4_0 * current_blk = rows_base[r] + blk;\n vector float v_scale = vec_extract_fp32_from_shorth(... |
ollama/ollama | 8,897 | doc Update: Linux uninstall | The Linux instructions now correctly explain how to uninstall Ollama completely | bfdeffc375f27b04a4ae7eeb22af24643582fcea | 78140197088cdec181c354a5cc00be4a5f080468 | 22 | medium | [
{
"filename": "docs/linux.md",
"patch": "@@ -186,3 +186,9 @@ sudo rm -r /usr/share/ollama\n sudo userdel ollama\n sudo groupdel ollama\n ```\n+\n+Remove installed libraries:\n+\n+```shell\n+sudo rm -rf /usr/local/lib/ollama\n+```",
"additions": 6,
"deletions": 0
}
] |
facebook/react | 33,123 | [Flight] Don't increase serializedSize for every recursive pass | I noticed that we increase this in the recursive part of the algorithm. This would mean that we'd count a key more than once if it has Server Components inside it recursively resolving. This moves it out to where we enter from toJSON, which is called once per JSON entry (and therefore once per key). | null | 52ea641449570bbc32eb90fb1a76740249b6bcf5 | null | low | [
{
"filename": "packages/react-server/src/ReactFlightServer.js",
"patch": "@@ -2302,6 +2302,9 @@ function renderModel(\n key: string,\n value: ReactClientValue,\n ): ReactJSONValue {\n+ // First time we're serializing the key, we should add it to the size.\n+ serializedSize += key.length;\n+\n const ... |
vuejs/vue | 7,687 | fix(types): prefer normal component over functional one | **What kind of change does this PR introduce?** (check at least one)
- [x] Bugfix
**Does this PR introduce a breaking change?** (check one)
- [ ] Yes
- [x] No
**Other information:**
Background:
https://github.com/vuejs/vetur/issues/676
https://github.com/vuejs/vetur/issues/627#issuecomment-357124941
... | 6b8516b2dde52be643ee6855b45b253a17ed0461 | 144bf5a99e2ebd644f80bc8ab61cd1bf0366961a | 6 | medium | [
{
"filename": "types/options.d.ts",
"patch": "@@ -31,20 +31,21 @@ export type Accessors<T> = {\n [K in keyof T]: (() => T[K]) | ComputedOptions<T[K]>\n }\n \n+type DataDef<Data, Props, V> = Data | ((this: Readonly<Props> & V) => Data)\n /**\n * This type should be used when an array of strings is used fo... |
ggml-org/llama.cpp | 19,916 | ggml-cuda: add mem check for fusion | Fixes #19659 | 1e38a7a6fa115de0a2731cb67ce554b7df5e8e2c | d48e876467734ab9c4292340a32b52c92660111c | 3 | medium | [
{
"filename": "ggml/src/ggml-cuda/ggml-cuda.cu",
"patch": "@@ -3412,6 +3412,69 @@ static bool ggml_cuda_can_fuse(const struct ggml_cgraph * cgraph,\n return false;\n }\n \n+// returns whether the write (out) nodes overwrite the read nodes in operation\n+static bool ggml_cuda_check_fusion_... |
nodejs/node | 60,850 | build: run embedtest with node_g when BUILDTYPE=Debug | Tests should run with the `node` exe built in the specified `BUILDTYPE`.
This depends on https://github.com/nodejs/node/pull/60806.
Refs: https://github.com/nodejs/node/pull/60806#issuecomment-3575261641 | null | 28142a6106298ac8c0041b3f385e0d941305286c | null | low | [
{
"filename": "Makefile",
"patch": "@@ -78,11 +78,17 @@ EXEEXT := $(shell $(PYTHON) -c \\\n \t\t\"import sys; print('.exe' if sys.platform == 'win32' else '')\")\n \n NODE_EXE = node$(EXEEXT)\n-# Use $(PWD) so we can cd to anywhere before calling this\n-NODE ?= \"$(PWD)/$(NODE_EXE)\"\n NODE_G_EXE = node_g$(... |
electron/electron | 49,242 | fix: `webRequest.onBeforeSendHeaders` not being able to modify reserved headers | Backport of #49226
See that PR for details.
Notes: Requests sent via `net` are now capable of having their headers modified to use reserved headers via `webRequest` | null | ade4c009843c22c4d62ff014256c23f57ad149e5 | null | low | [
{
"filename": "shell/browser/net/proxying_url_loader_factory.cc",
"patch": "@@ -55,9 +55,9 @@ ProxyingURLLoaderFactory::InProgressRequest::InProgressRequest(\n proxied_loader_receiver_(this, std::move(loader_receiver)),\n target_client_(std::move(client)),\n current_response_(network::mojo... |
ggml-org/llama.cpp | 20,157 | ggml: update comments for backends which have no memory to report | Ref: #20150
This PR updates the comments for backends which have no memory to report. By default, llama.cpp falls back to host memory information if the backend reports `*free = 0; *total = 0;`.
This will help maintainers understand that reporting 0 bytes is not a bug, but rather, allow downstream GGML consumer... | 6c97bffd6508f4999d5bc292addd4f433a3648bc | ba2ff79e43bad81fd19ecc8324431c93499e459e | 9 | medium | [
{
"filename": "ggml/src/ggml-blas/ggml-blas.cpp",
"patch": "@@ -339,8 +339,8 @@ static const char * ggml_backend_blas_device_get_description(ggml_backend_dev_t\n }\n \n static void ggml_backend_blas_device_get_memory(ggml_backend_dev_t dev, size_t * free, size_t * total) {\n- // TODO\n- *free = 0;\n+ ... |