Columns: repo (string, 10 classes) | pr_number (int64, 7–155k) | title (string, length 4–137) | body (string, length 0–68.4k) | buggy_commit (string, length 40, nullable) | fix_commit (string, length 40) | buggy_distance (int64, 1–30, nullable) | confidence (string, 3 classes) | files (list, length 1–5)

| repo | pr_number | title | body | buggy_commit | fix_commit | buggy_distance | confidence | files |
|---|---|---|---|---|---|---|---|---|
ollama/ollama | 9,108 | test: add test cases for HumanNumber() | null | 716e36561530ce5d3f9fdc75d13cb95b37c87088 | null | low | [
{
"filename": "format/format_test.go",
"patch": "@@ -12,6 +12,9 @@ func TestHumanNumber(t *testing.T) {\n \n \ttestCases := []testCase{\n \t\t{0, \"0\"},\n+\t\t{999, \"999\"},\n+\t\t{1000, \"1K\"},\n+\t\t{1001, \"1K\"},\n \t\t{1000000, \"1M\"},\n \t\t{125000000, \"125M\"},\n \t\t{500500000, \"500.50M\"},",
... | |
huggingface/transformers | 43,111 | Fix test_all_tensors_are_parameter_or_buffer | # What does this PR do?
This PR fixes the test `test_all_tensors_are_parameter_or_buffer` across 5 models, effectively making the test pass for all models.
## Failure cases and fixes
There are 2 reasons this test used to fail:
- the model has some triton code which crashes when we try to do a forward pass on t... | 37cd3240acbf5337c0939f4bc6a0a3074a971c9f | 02ddf5d47abb61730d157ddf3cb75f76715d05f5 | 19 | medium | [
{
"filename": "tests/models/modernbert/test_modeling_modernbert.py",
"patch": "@@ -27,6 +27,7 @@\n require_flash_attn,\n require_torch,\n require_torch_accelerator,\n+ require_torch_gpu,\n slow,\n torch_device,\n )\n@@ -506,6 +507,10 @@ def flash_attn_inference_equivalence(\n ... |
facebook/react | 33,159 | Reset currentEventTransitionLane after flushing sync work | This keeps track of the transition lane allocated for this event. I want to be able to use the current one within sync work flushing to know which lane needs its loading indicator cleared.
It's also a bit weird that transition work scheduled inside sync updates in the same event aren't entangled with other transitio... | 4ca97e4891b6a664b4c3a183f16b81139655ff57 | 676f0879f315130309262ff3532707029f0288bb | 8 | medium | [
{
"filename": "packages/react-reconciler/src/ReactFiberRootScheduler.js",
"patch": "@@ -257,7 +257,6 @@ function processRootScheduleInMicrotask() {\n // preserve the scroll position of the previous page.\n syncTransitionLanes = currentEventTransitionLane;\n }\n- currentEventTransitionLane... |
vercel/next.js | 88,486 | Better typesafety for `interopDefault` | Make it actually typed instead of `any -> any` | null | ac5e593b04f4d37747ed687efdc2f8e9c7268b33 | null | low | [
{
"filename": "packages/next/src/lib/interop-default.ts",
"patch": "@@ -1,3 +1,4 @@\n-export function interopDefault(mod: any) {\n+export function interopDefault<T>(mod: { default: T } | T): T {\n+ // @ts-ignore\n return mod.default || mod\n }",
"additions": 2,
"deletions": 1
},
{
"filena... |
ggml-org/llama.cpp | 20,284 | metal : add upscale | Add implementation for `GGML_OP_UPSCALE` used by many vision models. The implementation is not optimized, but will at least avoid unnecessary graph splits.
```
UPSCALE(type=f32,ne=[512,512,3,2],scale_factor=2,mode=nearest,transpose=0): OK
UPSCALE(type=f32,ne=[512,512,3,2],scale_factor=2,mode=nearest,transpose=... | null | ed0007aa32b94f40e4f3ba0d0fe24d86a582232b | null | low | [
{
"filename": "ggml/src/ggml-metal/ggml-metal-device.cpp",
"patch": "@@ -1717,12 +1717,29 @@ ggml_metal_pipeline_with_params ggml_metal_library_get_pipeline_upscale(ggml_met\n char base[256];\n char name[256];\n \n- snprintf(base, 256, \"kernel_upscale_%s\", ggml_type_name(op->src[0]->type));\n- ... |
nodejs/node | 59,007 | doc: add a smooth scrolling effect to the sidebar | When searching for the API in the Quick Flower Cutting module, each time you switch to a different module, the content in the sidebar scrolls to the top, and the currently active link is not immediately visible.
<!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/no... | null | ae407c310989913f3faf86019b02fdbca217e333 | null | low | [
{
"filename": "doc/api_assets/api.js",
"patch": "@@ -197,7 +197,7 @@\n \n if (!link) return;\n \n- link.scrollIntoView({ block: 'center' });\n+ link.scrollIntoView({ behavior: 'smooth', block: 'center' });\n }\n \n ",
"additions": 1,
"deletions": 1
}
] |
ggml-org/llama.cpp | 20,279 | server : fix off-by-1 in server_tokens::size_up_to_pos() | fix https://github.com/ggml-org/llama.cpp/pull/20087#issuecomment-4021486697
`n_past` should reflect the number of tokens with position strictly smaller than the next position `pos_next`. | 107d5999520dd02195ebe05278752db9fd33c865 | d6e1556499814da42424e39397a9964a1bebbf00 | 3 | medium | [
{
"filename": "tools/server/server-common.cpp",
"patch": "@@ -276,7 +276,7 @@ llama_pos server_tokens::pos_next(int64_t n_tokens) const {\n \n size_t server_tokens::size_up_to_pos(llama_pos max_pos) const {\n if (!has_mtmd) {\n- return std::min((size_t)(max_pos + 1), tokens.size());\n+ ret... |
ollama/ollama | 9,195 | cmake: avoid building intel backends on linux | Resolves #9139 | a4f69a0191b304c204ef074ccd6523f121bfddfe | 08a299e1d0636056b09d669f9aa347139cde6ec0 | 27 | medium | [
{
"filename": "CMakeLists.txt",
"patch": "@@ -24,7 +24,7 @@ set(GGML_LLAMAFILE ON)\n set(GGML_CUDA_PEER_MAX_BATCH_SIZE 128)\n set(GGML_CUDA_GRAPHS ON)\n \n-if((NOT CMAKE_OSX_ARCHITECTURES MATCHES \"arm64\")\n+if((CMAKE_OSX_ARCHITECTURES AND NOT CMAKE_OSX_ARCHITECTURES MATCHES \"arm64\")\n OR (NOT CMAKE_... |
vuejs/vue | 7,938 | fix(e2e-todomvc-test): trigger click on .new-todo instead of footer, fix #7937 | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**What kind of change does this PR introduce?** (check at least one)
- [x] Bugf... | null | d280937045988097b816c4497322acc8aaca2d86 | null | low | [
{
"filename": "test/e2e/specs/todomvc.js",
"patch": "@@ -119,7 +119,7 @@ module.exports = {\n .assert.focused('.todo:nth-child(1) .edit')\n .clearValue('.todo:nth-child(1) .edit')\n .setValue('.todo:nth-child(1) .edit', 'edited!')\n- .click('footer') // blur\n+ .click('.new-todo'... |
huggingface/transformers | 43,127 | [Quantization] torchao serialization | # What does this PR do?
Fixes torchao serialization; some tests are still failing for Int8/Int4 quantization with:
```
ValueError: Unsupported tensor type: <class 'torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor'>
```
which seems to be a problem on torchao's end
| null | c8208185a738f5159d365d696a12cf740b678439 | null | low | [
{
"filename": "src/transformers/quantizers/quantizer_torchao.py",
"patch": "@@ -146,7 +146,7 @@ def get_state_dict_and_metadata(self, model):\n We flatten the state dict of tensor subclasses so that it is compatible with the safetensors format.\n \"\"\"\n if TORCHAO_VERSION >= versio... |
electron/electron | 49,360 | fix: reduce stack memory consumption in BytecodeGenerator | #### Description of Change
For https://github.com/microsoft/vscode/issues/283403
Backports
1) https://chromium-review.googlesource.com/c/v8/v8/+/7180480
2) https://chromium-review.googlesource.com/c/v8/v8/+/7160576
3) https://chromium-review.googlesource.com/c/v8/v8/+/7062734
2 and 3 are needed to cleanly... | 6ccee512e435abb6264e825353e159705106c62c | 744142fe5493f2adec64ec9f4522d532c8ca3023 | 5 | medium | [
{
"filename": "patches/v8/.patches",
"patch": "@@ -1,2 +1,5 @@\n chore_allow_customizing_microtask_policy_per_context.patch\n turboshaft_avoid_introducing_too_many_variables.patch\n+runtime_setprototypeproperties_handling_of.patch\n+runtime_correcting_setprototypeproperties.patch\n+reduce_stack_memory_consu... |
facebook/react | 33,143 | [DevTools] Get source location from structured callsites in prepareStackTrace | When we get the source location for "View source for this element" we should be using the enclosing function of the callsite of the child. So that we don't just point to some random line within the component.
This is similar to the technique in #33136.
This technique is now really better than the fake throw techn... | null | 997c7bc930304142b3af37bcb21599181124aeb4 | null | low | [
{
"filename": "packages/react-devtools-shared/src/backend/fiber/renderer.js",
"patch": "@@ -50,6 +50,7 @@ import {\n gt,\n gte,\n parseSourceFromComponentStack,\n+ parseSourceFromOwnerStack,\n serializeToString,\n } from 'react-devtools-shared/src/backend/utils';\n import {\n@@ -5805,15 +5806,13 @@... |
vuejs/vue | 8,037 | fix(ssr): fix double escaping of staticClass values (fix #7859) | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
This fixes a double escaping of literal class values in the SSR optimizing compiler by unescaping the value in `genClassSegments`.
This bugfix is very similar to ... | 3d36a443c755bf16f2656a8595dda9076f021a4a | c21b89ebeda4c45024c2a71bc7a292d47ebc7ee1 | 8 | medium | [
{
"filename": "src/server/optimizing-compiler/modules.js",
"patch": "@@ -92,7 +92,7 @@ export function genClassSegments (\n classBinding: ?string\n ): Array<StringSegment> {\n if (staticClass && !classBinding) {\n- return [{ type: RAW, value: ` class=${staticClass}` }]\n+ return [{ type: RAW, valu... |
vercel/next.js | 88,426 | Turbopack: use mimalloc on Linux musl | Closes #88174
With musl (i.e. Alpine), the system memory allocator can cause catastrophic slowdowns
- https://medium.com/p/stop-using-alpine-images-b51d12b0fde2
- https://nickb.dev/blog/default-musl-allocator-considered-harmful-to-performance/
We were seeing it cause a 10x slowdown, simply using mimalloc fixes ... | null | fe6a3385e6bfd2a8c5c1280dea532a2f0d6a6c55 | null | low | [
{
"filename": "turbopack/crates/turbo-tasks-malloc/Cargo.toml",
"patch": "@@ -12,13 +12,13 @@ bench = false\n [dependencies]\n \n \n-[target.'cfg(not(any(target_os = \"linux\", target_family = \"wasm\", target_env = \"musl\")))'.dependencies]\n+[target.'cfg(not(any(target_os = \"linux\", target_family = \"w... |
nodejs/node | 60,907 | stream: do not pass `readable.compose()` output via `Readable.from()` | `readable.compose()` was intended to return the Duplex constructed by `stream.compose()`, and is documented as such.
However, because it was added as a "stream-returning operator", its output is being passed via `Readable.from()`, which constructs a new object-mode Readable by wrapping the async iterator of the comp... | null | 5e677d6e7e4a04f217ffb79fcbcdadadb02e6fa0 | null | low | [
{
"filename": "doc/api/stream.md",
"patch": "@@ -2027,7 +2027,7 @@ changes:\n description: Marking the API stable.\n -->\n \n-* `stream` {Stream|Iterable|AsyncIterable|Function}\n+* `stream` {Writable|Duplex|WritableStream|TransformStream|Function}\n * `options` {Object}\n * `signal` {AbortSignal} allo... |
ggml-org/llama.cpp | 20,084 | vulkan: Fix data races in coopmat1 mul_mat(_id) | Add barriers between coopmat store and regular loads. We sort of got away with this because it was the same subgroup accessing the values, but it's still a race and may not work.
I added shared memory data race detection for coopmat1 (https://github.com/KhronosGroup/Vulkan-ValidationLayers/pull/11780) and this fixes... | null | cd18a50ea573c081e7fa5604c85e4d571fd6ae4f | null | low | [
{
"filename": "ggml/src/ggml-vulkan/vulkan-shaders/mul_mm.comp",
"patch": "@@ -377,6 +377,7 @@ void main() {\n [[unroll]] for (uint cm_col = 0; cm_col < cms_per_col; cm_col++) {\n coopMatStore(sums[cm_col * cms_per_row + cm_row], coopmat_stage, warp_i * TM * TN, TM, gl_CooperativeMatrixL... |
ollama/ollama | 9,128 | ci: set owner/group in tarball | set owner and group to 0 when building the linux tarball so extracted files are consistent. this is the behaviour of release tarballs in version 0.5.7 and lower | null | 7b5d916a9a85f37c199bf765ef85625945469165 | null | low | [
{
"filename": ".github/workflows/release.yaml",
"patch": "@@ -329,7 +329,9 @@ jobs:\n done\n working-directory: dist/${{ matrix.os }}-${{ matrix.arch }}\n - run: |\n- for ARCHIVE in dist/${{ matrix.os }}-${{ matrix.arch }}/*.tar.in; do tar c -C dist/${{ matrix.os }}-${{ matr... |
huggingface/transformers | 43,117 | [`Ernie 4.5 VL Moe`] Post merge adjustments | As per title, relevant PRs
- #42088
- #42697
Note that `grouped_mm` in moe leads to small fluctuations making the integration tests flaky | 0642963ba13f2dae0596fe489415569e1d91fbda | 52e9d05fde10cfccf5368c4403a206c7cbef8e6f | 26 | medium | [
{
"filename": "src/transformers/models/ernie4_5_vl_moe/modeling_ernie4_5_vl_moe.py",
"patch": "@@ -1705,6 +1705,8 @@ def prepare_inputs_for_generation(\n past_key_values=None,\n image_grid_thw=None,\n video_grid_thw=None,\n+ use_cache=True,\n+ is_first_iteration=False,\... |
ggml-org/llama.cpp | 20,278 | ggml-cuda: disable gdn for musa | | 5f4cdac3857ec3915c069caf3dec4f35af1691a1 | e8bbc736cbc5d945a0b26dadbd6224d0aeba7faa | 5 | medium | [
{
"filename": "ggml/src/ggml-cuda/ggml-cuda.cu",
"patch": "@@ -4992,9 +4992,15 @@ static bool ggml_backend_cuda_device_supports_op(ggml_backend_dev_t dev, const g\n case GGML_OP_LEAKY_RELU:\n case GGML_OP_RWKV_WKV6:\n case GGML_OP_GATED_LINEAR_ATTN:\n- case GGML_OP_GATED_DELTA... | |
huggingface/transformers | 43,120 | [`GPT OSS`] Fix false flag | #42736 removed flex attn support, but it turns out that #41083 added support for it in flex attention. There were typos that indicated wrong support for attn flavors; sorry about that, I should've noticed / checked it properly.
So gpt oss does not support sdpa but everything else (limited FA support to the specific k... | 68dcd13bfb67bb5b2b12a2f9502d31ab7ecbc434 | 5c68832efdcdc20a933aee105a0e2ddc7bf6c982 | 1 | medium | [
{
"filename": "src/transformers/models/gpt_oss/modeling_gpt_oss.py",
"patch": "@@ -434,7 +434,7 @@ class GptOssPreTrainedModel(PreTrainedModel):\n _skip_keys_device_placement = [\"past_key_values\"]\n _supports_flash_attn = True\n _supports_sdpa = False\n- _supports_flex_attn = False\n+ _s... |
electron/electron | 49,376 | build(deps): bump github/codeql-action from 4.31.7 to 4.31.10 | Bumps [github/codeql-action](https://github.com/github/codeql-action) from 4.31.7 to 4.31.10.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/github/codeql-action/releases">github/codeql-action's releases</a>.</em></p>
<blockquote>
<h2>v4.31.10</h2>
<h1>CodeQL Action Changelog... | null | 324eb0eb1c76bb197185ee3306130054fe01f38f | null | low | [
{
"filename": ".github/workflows/scorecards.yml",
"patch": "@@ -50,6 +50,6 @@ jobs:\n \n # Upload the results to GitHub's code scanning dashboard.\n - name: \"Upload to code-scanning\"\n- uses: github/codeql-action/upload-sarif@cf1bb45a277cb3c205638b2cd5c984db1c46a412 # v3.29.5\n+ ... |
vuejs/vue | 7,881 | feat(warn): make the warning messages more explicit (close #7764) | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**What kind of change does this PR introduce?** (check at least one)
- [x] Feat... | null | 59860b0a756526f37468655598c68d119f0e74bd | null | low | [
{
"filename": "src/core/util/props.js",
"patch": "@@ -127,11 +127,10 @@ function assertProp (\n valid = assertedType.valid\n }\n }\n+\n if (!valid) {\n warn(\n- `Invalid prop: type check failed for prop \"${name}\".` +\n- ` Expected ${expectedTypes.map(capitalize).join(', ')}` +\... |
huggingface/transformers | 42,737 | Fix shapes in modular_gpt_oss.py | # What does this PR do?
This PR fixes a comment in `modular_gpt_oss.py` which has the incorrect shape written in the description for GptOssExperts. I fix the annotated shape for routing_experts from `(batch_size * token_num, num_experts)` to `(batch_size * token_num, top_k)`. Looking at the git blame, I think the o... | e8c51d1848187b9e58d00bf7d638811686ab2a4b | 68dcd13bfb67bb5b2b12a2f9502d31ab7ecbc434 | 8 | medium | [
{
"filename": "src/transformers/models/gpt_oss/modeling_gpt_oss.py",
"patch": "@@ -88,8 +88,8 @@ def forward(self, hidden_states: torch.Tensor, router_indices=None, routing_weig\n \n Args:\n hidden_states (torch.Tensor): (batch_size, seq_len, hidden_size)\n- selected_experts (... |
vuejs/vue | 8,649 | Adding to backers | I am a bronze backer since today, just adding my info to the proper place.
<!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**Wh... | 25a688ee314d82b584ef0f192d4016b75664a616 | 9476d3e7428b6638d3243045d5a052ad5824d4e7 | 1 | high | [
{
"filename": "BACKERS.md",
"patch": "@@ -252,6 +252,11 @@ Funds donated via Patreon go directly to support Evan You's full-time work on Vu\n <img width=\"148px\" src=\"https://i.imgur.com/qX0SNa7.png\">\n </a>\n </td>\n+ <td align=\"center\" valign=\"middle\">\n+ <a href... |
ggml-org/llama.cpp | 20,270 | models : fix assert in mamba2 graph | cont #19802
fix #20268
This fixes the model loading, but the reasoning parsing seems to be broken because the post-reasoning contents are not being displayed in the WebUI:
<img width="887" height="1205" alt="image" src="https://github.com/user-attachments/assets/bc8a0725-8ad4-4fb0-969f-2bdac8649494" />
cc @p... | a976ff081b4657b67f48295bbefc030d9d899b17 | 43e1cbd6c1b407fcb1fb0196276265e774986035 | 17 | medium | [
{
"filename": "src/models/mamba-base.cpp",
"patch": "@@ -155,7 +155,6 @@ ggml_tensor * llm_build_mamba_base::build_mamba2_layer(llm_graph_input_rs * inp,\n \n const auto kv_head = mctx_cur->get_head();\n \n- const int64_t n_embd = hparams.n_embd;\n const int64_t d_conv = hparams.ssm_d_conv;\n... |
facebook/react | 22,114 | Remove the warning for setState on unmounted components | We have a warning that fires when you `setState` on an unmounted component. This is a proposal to remove it.
### Why was this warning added?
The warning states that it protects against memory leaks. The original use case (rewritten to Hooks) goes like this:
```js
useEffect(() => {
function handleChange() ... | null | 7ed0706d7ec9907e8fd19c4cf0e8625733cf2a1c | null | low | [
{
"filename": "packages/react-dom/src/__tests__/ReactCompositeComponent-test.js",
"patch": "@@ -307,7 +307,7 @@ describe('ReactCompositeComponent', () => {\n ReactDOM.render(<MyComponent />, container2);\n });\n \n- it('should warn about `forceUpdate` on unmounted components', () => {\n+ it('should ... |
ollama/ollama | 9,150 | llm_telegram_bot added to README.md | Another Telegram integration, primarily for RP.
- buttons for changing characters and saving history
- buttons for regenerating a phrase, impersonating, continuing, and stepping back through the chat
- if you are able to run Stable Diffusion, it can generate a picture
- change using model, par... | 8cf16063a52deb416e16039c73264e26f7e9a43a | 3b4424ff98a881597ec5b65869035d98ba222e11 | 20 | medium | [
{
"filename": "README.md",
"patch": "@@ -548,6 +548,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) (Alfred Workflow)\n - [TextLLaMA](https://github.com/adarshM84/TextLLaMA) A Chrome Extension that helps you write emails, cor... |
nodejs/node | 61,100 | build: add --debug-symbols to build with -g without enabling DCHECKs | This is useful when debugging release builds on Linux without enabling DCHECKs.
<!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/nodejs/node/blob/HEAD/CONTRIBUTING.md
- the commit message formatting guidelines at
https://github.com/nodejs/node/blob/HEAD/doc/c... | null | c837811bafa59896a3509d73c2dc57f3198835ca | null | low | [
{
"filename": "configure.py",
"patch": "@@ -107,6 +107,12 @@\n default=None,\n help='build the Node.js part of the binary with debugging symbols')\n \n+parser.add_argument('--debug-symbols',\n+ action='store_true',\n+ dest='debug_symbols',\n+ default=None,\n+ help='add debugging symbols ... |
vercel/next.js | 88,473 | Turbopack: don't cell in async map | This causes nondeterministic cell order | null | 58406e6f8d6374260d8e319f24e67aa75e92fda1 | null | low | [
{
"filename": "turbopack/crates/turbopack-core/src/chunk/chunk_item_batch.rs",
"patch": "@@ -280,13 +280,11 @@ impl ChunkItemBatchGroup {\n ChunkItemBatchGroup {\n items,\n chunk_groups: this.chunk_groups.clone(),\n- ... |
electron/electron | 49,371 | fix: fix cookie encryption provider loading on Windows and Linux | #### Description of Change
Fast follow to #49348.
This PR modifies our existing cookie encryption logic to match an upstream change here: "Port net::CookieCryptoDelegate to os_crypt async" | https://chromium-review.googlesource.com/c/chromium/src/+/6996667 However, the previous PR only fixed the issue properly f... | 809ab09b6f5ede0a9f9b0fcc0e9c33ff971277f2 | 0e4ee9f03a301557b500bd6d7b33321b89b6f3c9 | 23 | medium | [
{
"filename": "BUILD.gn",
"patch": "@@ -465,6 +465,8 @@ source_set(\"electron_lib\") {\n \"//components/os_crypt/async/browser\",\n \"//components/os_crypt/async/browser:key_provider_interface\",\n \"//components/os_crypt/sync\",\n+ \"//components/password_manager/core/browser:password_switch... |
ggml-org/llama.cpp | 20,277 | server : add kill switch when server is stuck | ref https://github.com/ggml-org/llama.cpp/pull/20087#issuecomment-4021486697
Sometimes the server enters an infinite loop of empty batches. This change makes it easier to debug such cases. | d417bc43dd29eab006a0da73afc7d610c9ebae7d | 107d5999520dd02195ebe05278752db9fd33c865 | 9 | medium | [
{
"filename": "tools/server/server-context.cpp",
"patch": "@@ -562,14 +562,15 @@ struct server_context_impl {\n \n llama_model_ptr model_dft;\n \n- bool add_bos_token = true;\n+ bool add_bos_token = true;\n \n int32_t n_ctx; // total context for all clients / slots\n \n // slots / clients... |
nodejs/node | 61,077 | util: fix nested proxy inspection | Fixes: https://github.com/nodejs/node/issues/61061 | 26b7fd2009c348a64674da78ddda1076e74595d2 | 9120924de1878e61d65eaf2d5d0d27cbe129224e | 10 | medium | [
{
"filename": "lib/internal/util/inspect.js",
"patch": "@@ -1118,17 +1118,29 @@ function formatValue(ctx, value, recurseTimes, typedArray) {\n \n // Memorize the context for custom inspection on proxies.\n const context = value;\n+ let proxies = 0;\n // Always check for proxies to prevent side effect... |
vuejs/vue | 7,868 | Add UMD global declaration to index.d.ts | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**What kind of change does this PR introduce?** (check at least one)
- [ ] Bugf... | null | 49385e1efab90f8c75087a731de2fcdcb367941e | null | low | [
{
"filename": "types/index.d.ts",
"patch": "@@ -2,6 +2,8 @@ import { Vue } from \"./vue\";\n \n export default Vue;\n \n+export as namespace Vue;\n+\n export {\n CreateElement,\n VueConstructor",
"additions": 2,
"deletions": 0
}
] |
huggingface/transformers | 42,879 | [CI] Fixing some AMD failures | This PR fixes several failures on AMD for Qwen2, Qwen2.5-Omni, and Qwen2.5-VL. | ccc7d90bcb4a94cd0888925c6abac941f3b8151d | c154b0218ab78c2df076ab8cf4a7bde1cd2bf40f | 30 | medium | [
{
"filename": "src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py",
"patch": "@@ -2459,7 +2459,11 @@ def forward(\n self.rope_deltas = rope_deltas\n \n else:\n- batch_size, seq_length, _ = inputs_embeds.shape\n+ if inputs_embeds is not Non... |
facebook/react | 33,028 | [ReactFlightWebpackPlugin] Add support for .mjs file extension | ## Summary
Our builds generate files with a `.mjs` file extension. These are currently filtered out by `ReactFlightWebpackPlugin` so I am updating it to support this file extension.
This fixes https://github.com/facebook/react/issues/33155
## How did you test this change?
I built the plugin with this change and... | null | 2bcf06b69254cad6f7e702bf7d65c4f30478668c | null | low | [
{
"filename": "packages/react-server-dom-webpack/src/ReactFlightWebpackPlugin.js",
"patch": "@@ -277,8 +277,14 @@ export default class ReactFlightWebpackPlugin {\n chunkGroup.chunks.forEach(function (c) {\n // eslint-disable-next-line no-for-of-loops/no-for-of-loops\n ... |
electron/electron | 49,375 | fix: fix cookie encryption provider loading on Windows and Linux | Backport of #49371
See that PR for details.
Notes: Fixed an issue on Windows and Linux where no cookie encryption key provider was passed into the network service when cookie encryption was enabled.
| e3cabb611958c16be9f5f5a2de8480b6b64ee495 | df4d0bef212be4c9c6c270296e03109b2f7c60c0 | 21 | medium | [
{
"filename": "BUILD.gn",
"patch": "@@ -464,6 +464,8 @@ source_set(\"electron_lib\") {\n \"//components/os_crypt/async/browser\",\n \"//components/os_crypt/async/browser:key_provider_interface\",\n \"//components/os_crypt/sync\",\n+ \"//components/password_manager/core/browser:password_switch... |
ollama/ollama | 9,098 | Update windows.md | Corrected typo from cmd to Ctrl. | null | 0667baddc658d3f556a369701819e7695477f59a | null | low | [
{
"filename": "docs/windows.md",
"patch": "@@ -55,7 +55,7 @@ Here's a quick example showing API access from `powershell`\n ## Troubleshooting\n \n Ollama on Windows stores files in a few different locations. You can view them in\n-the explorer window by hitting `<cmd>+R` and type in:\n+the explorer window ... |
facebook/react | 33,148 | [compiler][entrypoint] Fix edgecases for noEmit and opt-outs |
Title
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33148).
* #33149
* __->__ #33148 | 5069e18060e00d7c07b2b04ebc8a3fa21e2d810a | 3820740a7fbfc3b27a5127b43bdad44382ff3ce0 | 1 | medium | [
{
"filename": "compiler/packages/babel-plugin-react-compiler/src/Entrypoint/Imports.ts",
"patch": "@@ -59,6 +59,7 @@ type ProgramContextOptions = {\n opts: PluginOptions;\n filename: string | null;\n code: string | null;\n+ hasModuleScopeOptOut: boolean;\n };\n export class ProgramContext {\n /**\n... |
vercel/next.js | 88,474 | Turbopack: add lint rule to not cell in async map | <!-- Thanks for opening a PR! Your contribution is much appreciated.
To make sure your PR is handled as smoothly as possible we request that you follow the checklist sections below.
Choose the right checklist for the change(s) that you're making:
## For Contributors
### Improving Documentation
- Run `pnpm prettier-f... | null | 7b63bd360bfda310e7fcf508bf3c91c91e2cbccb | null | low | [
{
"filename": ".config/ast-grep/rule-tests/__snapshots__/no-map-async-cell-snapshot.yml",
"patch": "@@ -0,0 +1,282 @@\n+id: no-map-async-cell\n+snapshots:\n+ ? |\n+ items.into_iter()\n+ .map(async |item| Ok(item.process().await?.cell()))\n+ .try_join()\n+ .await\n+ : labels:\n+ ... |
ggml-org/llama.cpp | 20,036 | contributing: limit open PRs for new contributors to 1 | To avoid cases where a contributor is not aware of AI policies and creates PRs wholesale using their AI of choice | null | e2763a6723848d14e992c43410e2afcbe86446c7 | null | low | [
{
"filename": "CONTRIBUTING.md",
"patch": "@@ -39,6 +39,7 @@ Before submitting your PR:\n - For intricate features, consider opening a feature request first to discuss and align expectations\n - When adding support for a new model or feature, focus on **CPU support only** in the initial PR unless yo... |
nodejs/node | 61,138 | deps: update timezone to 2025c | This PR was generated by tools/timezone-update.yml.
Updates the ICU files as per the instructions present in https://github.com/nodejs/node/blob/main/doc/contributing/maintaining/maintaining-icu.md#time-zone-data
To test, build node off this branch & log the version of tz using
```js
console.log(process.versions.tz)
... | null | 55600e6153b9d8a23f6365737123ab26e54b12db | null | low | [
{
"filename": "test/fixtures/tz-version.txt",
"patch": "@@ -1 +1 @@\n-2025b\n+2025c",
"additions": 1,
"deletions": 1
}
] |
ollama/ollama | 9,122 | model: document high-level model interface | A documentation update that adds comments explaining the high-level model interface and core types in the Ollama model package. | ed443a03930a10bec6182c55091f0880baa1e620 | d006e1e09be4d3da3fb94ab683aa18822af4b956 | 6 | medium | [
{
"filename": "model/model.go",
"patch": "@@ -21,6 +21,7 @@ import (\n \t_ \"github.com/ollama/ollama/ml/backend\"\n )\n \n+// Options contains the inputs for a model forward pass\n type Options struct {\n \tInputs []int32\n \tPositions []int32\n@@ -34,11 +35,13 @@ type config struct {\n \tCache kvcache.... |
vercel/next.js | 88,455 | Turbopack: Remove dead generic_type_macro code | I deleted the code that used this a long time ago in https://github.com/vercel/next.js/pull/70817
The rationale for why I deleted the uses of this code is here (generics in turbo-tasks would be nice, but this implementation was too limited to be worth the complexity): https://github.com/vercel/turborepo/pull/8843#issue... | a6371ddc51a8ccc6cd2ab99a4bbb1fbb5b81e898 | 60b5df66f674713310bc19b758259d89a7ff84ef | 4 | medium | [
{
"filename": "turbopack/crates/turbo-tasks-macros/src/generic_type_input.rs",
"patch": "@@ -1,21 +0,0 @@\n-use syn::{\n- Generics, Result, Token, Type,\n- parse::{Parse, ParseStream},\n-};\n-\n-/// The input of the `generic_type` macro.\n-#[derive(Debug)]\n-pub struct GenericTypeInput {\n- pub gen... |
ollama/ollama | 9,123 | Wire up system info log for new engine | Example from the logs
```
time=2025-02-14T15:30:23.385-08:00 level=INFO source=runner.go:816 msg=system info="Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | cgo(clang)"
```
Threads aren't exposed yet since that wiring isn't complete yet for ... | ed443a03930a10bec6182c55091f0880baa1e620 | df2680b4b936de82ab8e86d3fe69524b872531c5 | 5 | medium | [
{
"filename": "kvcache/causal_test.go",
"patch": "@@ -305,6 +305,10 @@ func (b *testBackend) NewContext() ml.Context {\n \treturn &testContext{}\n }\n \n+func (b *testBackend) SystemInfo() string {\n+\treturn \"not implemented\"\n+}\n+\n type testContext struct{}\n \n func (c *testContext) Zeros(dtype ml.DT... |
huggingface/transformers | 43,115 | Fix inits in modernbert | # What does this PR do?
This PR fixes a few `modernbert` tests that are failing due to the new initialization scheme, only when `flash_attention` is available.
## Failure cause and fixes
The class `ModernBertUnpaddedRotaryEmbedding` was not included in `init_weights`, which led to random initialization of `inv_fr... | null | 491e0cd2345b0a43344d4c92c0f58c70088ceb89 | null | low | [
{
"filename": "src/transformers/models/modernbert/modeling_modernbert.py",
"patch": "@@ -685,6 +685,9 @@ def init_weight(module: nn.Module, std: float):\n curr_inv_freq, _ = rope_init_fn(module.config, layer_type=layer_type)\n init.copy_(getattr(module, f\"{layer_type}_inv_fr... |
ollama/ollama | 9,119 | llamarunner: Init GGML before printing system info | We currently print system info before the GGML backends are loaded. This results in only getting information about the default lowest common denominator runner. If we move up the GGML init then we can see what we are actually running.
Before:
time=2025-02-14T11:15:07.606-08:00 level=INFO source=runner.go:935 msg=sy... | ed443a03930a10bec6182c55091f0880baa1e620 | 010313bb63e73cce2b42d1eebaf8cea3eb529567 | 4 | medium | [
{
"filename": "runner/llamarunner/runner.go",
"patch": "@@ -845,8 +845,6 @@ func (s *Server) loadModel(\n \tthreads int,\n \tmultiUserCache bool,\n ) {\n-\tllama.BackendInit()\n-\n \tvar err error\n \ts.model, err = llama.LoadModelFromFile(mpath, params)\n \tif err != nil {\n@@ -932,6 +930,8 @@ func Execute... |
electron/electron | 49,366 | build: roll build-tools SHA to `4430e4a` | Backport of #49362
See that PR for details.
Notes: none | null | 9c4e03fd8a1e8754bfc07f2f77902ce0d425c10c | null | low | [
{
"filename": ".github/actions/install-build-tools/action.yml",
"patch": "@@ -15,7 +15,7 @@ runs:\n git config --global core.preloadindex true\n git config --global core.longpaths true\n fi\n- export BUILD_TOOLS_SHA=a5d9f9052dcc36ee88bef5c8b13acbefd87b7d8d\n+ export BUILD_TOO... |
ollama/ollama | 9,089 | llm: attempt to evaluate symlinks for os.Executable, but do not fail | This PR provides a better approach to https://github.com/ollama/ollama/pull/9088 that will attempt to evaluate symlinks (important for macOS where `ollama` is often a symlink), but use the result of `os.Executable()` as a fallback in scenarios where `filepath.EvalSymlinks` fails due to permission errors.
Fixes https... | f05774b04c5d3d30c6f3037a5a14595bf57a16ad | 5296f487a840b2b9ffc28ed9b45d223a32359973 | 1 | medium | [
{
"filename": "discover/path.go",
"patch": "@@ -19,6 +19,10 @@ var LibOllamaPath string = func() string {\n \t\treturn \"\"\n \t}\n \n+\tif eval, err := filepath.EvalSymlinks(exe); err == nil {\n+\t\texe = eval\n+\t}\n+\n \tvar libPath string\n \tswitch runtime.GOOS {\n \tcase \"windows\":",
"additions"... |
nodejs/node | 61,137 | tools: update nixpkgs-unstable to 7d853e518814cca2a657b72eeba67ae20eb | This is an automated update of nixpkgs-unstable to 7d853e518814cca2a657b72eeba67ae20eb. | null | c5d3f5f9c82455b4b6d8ff4204709017a47e1b4e | null | low | [
{
"filename": "tools/nix/pkgs.nix",
"patch": "@@ -1,10 +1,10 @@\n arg:\n let\n repo = \"https://github.com/NixOS/nixpkgs\";\n- rev = \"f997fa0f94fb1ce55bccb97f60d41412ae8fde4c\";\n+ rev = \"7d853e518814cca2a657b72eeba67ae20ebf7059\";\n nixpkgs = import (builtins.fetchTarball {\n url = \"${repo}/ar... |
ggml-org/llama.cpp | 19,360 | ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) | https://github.com/ggml-org/llama.cpp/pull/19356 but Q6_K.
PR contents:
- New generics for q6_K_8x4
- New repack implementations for ARM
- Templated generic impl (Will be discussed in #19356)
Same methodology for testing -> llama-cli output, outputs of gemm and gemvs and perplexity to double check prompt proce... | null | c03a5a46f0b9d05dd6099d64ab6ed091feabdb97 | null | low | [
{
"filename": "ggml/src/ggml-cpu/arch-fallback.h",
"patch": "@@ -43,6 +43,7 @@\n #define ggml_gemv_q4_K_8x4_q8_K_generic ggml_gemv_q4_K_8x4_q8_K\n #define ggml_gemv_q4_K_8x8_q8_K_generic ggml_gemv_q4_K_8x8_q8_K\n #define ggml_gemv_q5_K_8x8_q8_K_generic ggml_gemv_q5_K_8x8_q8_K\n+#define ggml_gemv_q6_K_8x4_q8... |
ollama/ollama | 9,088 | llm: do not evaluate symlinks for exe path lookup | In some cases, the directories in the executable path read by filepath.EvalSymlinks are not accessible, resulting in permission errors that in turn cause an error when running models. It also doesn't work well with long paths on Windows, likewise resulting in errors. This change removes filepath.EvalSymlinks when accessing os.... | ed443a03930a10bec6182c55091f0880baa1e620 | f05774b04c5d3d30c6f3037a5a14595bf57a16ad | 2 | medium | [
{
"filename": "discover/path.go",
"patch": "@@ -19,11 +19,6 @@ var LibOllamaPath string = func() string {\n \t\treturn \"\"\n \t}\n \n-\texe, err = filepath.EvalSymlinks(exe)\n-\tif err != nil {\n-\t\treturn \"\"\n-\t}\n-\n \tvar libPath string\n \tswitch runtime.GOOS {\n \tcase \"windows\":",
"addition... |
vercel/next.js | 88,497 | Turbopack: remove unused code | <!-- Thanks for opening a PR! Your contribution is much appreciated.
To make sure your PR is handled as smoothly as possible we request that you follow the checklist sections below.
Choose the right checklist for the change(s) that you're making:
## For Contributors
### Improving Documentation
- Run `pnpm prettier-f... | null | c1fc988f71f1ef52fa2e9f7512be3c7eeb175622 | null | low | [
{
"filename": "turbopack/crates/turbopack-core/src/module_graph/mod.rs",
"patch": "@@ -1,11 +1,10 @@\n use core::panic;\n use std::{\n- collections::{BinaryHeap, VecDeque, hash_map::Entry},\n+ collections::{BinaryHeap, VecDeque},\n future::Future,\n };\n \n use anyhow::{Context, Result, bail};\n-u... |
facebook/react | 33,144 | [compiler][be] repro edge cases for noEmit and module opt-outs |
see test fixtures
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33144).
* #33146
* #33145
* __->__ #33144 | null | fbe7bc21b9aa00afa230132b3f7eee6d2b5c94a7 | null | low | [
{
"filename": "compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/repro-bailout-nopanic-shouldnt-outline.expect.md",
"patch": "@@ -0,0 +1,30 @@\n+\n+## Input\n+\n+```javascript\n+// @panicThreshold(none)\n+'use no memo';\n+\n+function Foo() {\n+ return <button onClick={() => aler... |
ggml-org/llama.cpp | 20,219 | ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md | Hello, first time contributor here; just wanted to pick one of the easy missing ops for starting. I am trying to get myself familiar with the project. Open to all feedbacks; thanks in advance | b2f460bd3c75f36b8fbe01bd3a6ca006a9424f49 | 0beb8db3a0037b51f8247ac657b7655ab68fa9f0 | 1 | medium | [
{
"filename": "docs/ops.md",
"patch": "@@ -47,6 +47,7 @@ Legend:\n | FILL | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |\n | FLASH_ATTN_EXT | ❌ | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | ❌ |\n | FLOOR | ❌ | ❌ | ✅ | 🟡 | ❌ | ❌ | 🟡 | �... |
vuejs/vue | 8,087 | fix: no dots at the end of the comments | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**What kind of change does this PR introduce?** (check at least one)
- [ ] Bugf... | null | 1abb944a71c13542a7203ac6beccb89def641a37 | null | low | [
{
"filename": "flow/compiler.js",
"patch": "@@ -173,7 +173,7 @@ declare type ASTText = {\n \n // SFC-parser related declarations\n \n-// an object format describing a single-file component.\n+// an object format describing a single-file component\n declare type SFCDescriptor = {\n template: ?SFCBlock;\n ... |
huggingface/transformers | 43,073 | Add fast version of `convert_segmentation_map_to_binary_masks` to EoMT | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this w... | null | 98578fc62260130652be4725a5597ad257bfe800 | null | low | [
{
"filename": "src/transformers/models/eomt/image_processing_eomt_fast.py",
"patch": "@@ -44,12 +44,43 @@\n from .image_processing_eomt import (\n EomtImageProcessorKwargs,\n compute_segments,\n- convert_segmentation_map_to_binary_masks,\n get_size_with_aspect_ratio,\n remove_low_and_no_o... |
electron/electron | 49,367 | build: roll build-tools SHA to `4430e4a` | Backport of #49362
See that PR for details.
Notes: none | null | b200b8d6c0c41af3d7f60f005509da5bd30986ac | null | low | [
{
"filename": ".github/actions/install-build-tools/action.yml",
"patch": "@@ -15,7 +15,7 @@ runs:\n git config --global core.preloadindex true\n git config --global core.longpaths true\n fi\n- export BUILD_TOOLS_SHA=a5d9f9052dcc36ee88bef5c8b13acbefd87b7d8d\n+ export BUILD_TOO... |
vercel/next.js | 86,755 | Turbopack: parallelize making dependent tasks dirty | <!-- Thanks for opening a PR! Your contribution is much appreciated.
... | 7ec952836d1e7319e35f5ce1c2ce92223404444a | a43b362eee9156371ba768bda247f3dd1dff60a7 | 8 | medium | [
{
"filename": "turbopack/crates/turbo-tasks-backend/src/backend/mod.rs",
"patch": "@@ -34,10 +34,11 @@ use turbo_tasks::{\n event::{Event, EventListener},\n message_queue::TimingEvent,\n registry::get_value_type,\n+ scope::scope_and_block,\n task_statistics::TaskStatisticsApi,\n trace... |
ggml-org/llama.cpp | 20,233 | vulkan: skip zero size tensors in backend copies | I noticed validation errors in #19955
```
Validation Error: [ VUID-VkBufferCopy-size-01988 ] | MessageID = 0x636fcc49
vkCmdCopyBuffer(): pRegions[0].size is zero.
The Vulkan spec states: The size must be greater than 0 (https://docs.vulkan.org/spec/latest/chapters/copies.html#VUID-VkBufferCopy-size-01988)
Objects:... | d088d5b74f1d63b9a345d1515ab9e3bb3bc81a10 | b2f460bd3c75f36b8fbe01bd3a6ca006a9424f49 | 9 | medium | [
{
"filename": "ggml/src/ggml-vulkan/ggml-vulkan.cpp",
"patch": "@@ -13253,6 +13253,10 @@ static void ggml_backend_vk_buffer_memset_tensor(ggml_backend_buffer_t buffer, g\n ggml_backend_vk_buffer_context * buf_ctx = (ggml_backend_vk_buffer_context *)buffer->context;\n vk_buffer buf = buf_ctx->dev_buf... |
ollama/ollama | 9,081 | ml/backend/ggml: stable sort devices by score | 49df03da9af6b0050ebbf50676f7db569a2b54d9 | 6600bd7d91deb07bec5832790168870c3180ccae | 20 | medium | [
{
"filename": "llama/patches/0014-sort-devices-by-score.patch",
"patch": "@@ -8,7 +8,7 @@ Subject: [PATCH] sort devices by score\n 1 file changed, 13 insertions(+), 8 deletions(-)\n \n diff --git a/ggml/src/ggml-backend-reg.cpp b/ggml/src/ggml-backend-reg.cpp\n-index 899d16f2..ac5cda07 100644\n+index 899d1... | |
nodejs/node | 61,134 | test: update WPT for urlpattern to a2e15ad405 | This is an automated update of the WPT for urlpattern to https://github.com/web-platform-tests/wpt/commit/a2e15ad40518c30c4e7f649584dbda699a40d531. | null | 76c4bee9785175d16232af075e8274fae8301ad0 | null | low | [
{
"filename": "test/fixtures/wpt/README.md",
"patch": "@@ -29,7 +29,7 @@ Last update:\n - resources: https://github.com/web-platform-tests/wpt/tree/1d2c5fb36a/resources\n - streams: https://github.com/web-platform-tests/wpt/tree/bc9dcbbf1a/streams\n - url: https://github.com/web-platform-tests/wpt/tree/9504... |
vercel/next.js | 88,435 | [test] Always run all tests without aborting on failure | It's more useful and efficient to run all tests and report all failures instead of aborting on the first failure. This way, developers get a complete picture of what needs to be fixed in a single run, and don't have to go through multiple iterations of fixing one failure at a time. | e04dad29a9fc7fd187563ba1d3760393d801fa5f | 2af84065cbbbe7bfd9c132f022c3989ab24a26f3 | 11 | medium | [
{
"filename": ".github/workflows/integration_tests_reusable.yml",
"patch": "@@ -91,7 +91,6 @@ jobs:\n afterBuild: |\n # e2e and ${{ inputs.test_type }} tests with `node run-tests.js`\n \n- export NEXT_TEST_CONTINUE_ON_ERROR=true\n export NEXT_TEST_MODE=${{\n inputs.tes... |
facebook/react | 32,833 | Add unstable_Activity to server entrypoint | Activity is a client component, but you should still be able to import it and render it from a Server Component. Same as what we do with other types like Suspense and ViewTransition. | null | 8571249eb87efa1e20ecb8a839cc380e63da767a | null | low | [
{
"filename": "packages/react/src/ReactServer.experimental.js",
"patch": "@@ -17,6 +17,7 @@ import {\n REACT_SUSPENSE_TYPE,\n REACT_SUSPENSE_LIST_TYPE,\n REACT_VIEW_TRANSITION_TYPE,\n+ REACT_ACTIVITY_TYPE,\n } from 'shared/ReactSymbols';\n import {\n cloneElement,\n@@ -80,4 +81,5 @@ export {\n //... |
ggml-org/llama.cpp | 20,185 | cuda : display total and free VRAM capacity during device initialization | - While running the benchmark, I realized that no VRAM info was shown and had to check in another console
- Helps users verify hardware constraints directly from the log output even when running the benchmark
- Get total VRAM via cudaGetDeviceProperties and free memory via cudaMemGetInfo.
- Tested on Alienware m16 R2 (RTX 4070 L... | c5a778891ba0ddbd4cbb507c823f970595b1adc2 | 5f4cdac3857ec3915c069caf3dec4f35af1691a1 | 15 | medium | [
{
"filename": "ggml/src/ggml-cuda/ggml-cuda.cu",
"patch": "@@ -205,7 +205,14 @@ static ggml_cuda_device_info ggml_cuda_init() {\n GGML_ASSERT(info.device_count <= GGML_CUDA_MAX_DEVICES);\n \n int64_t total_vram = 0;\n- GGML_LOG_INFO(\"%s: found %d \" GGML_CUDA_NAME \" devices:\\n\", __func__, inf... |
huggingface/transformers | 42,956 | 🚨 Fix EfficientNet image processor default interpolation to BICUBIC | ## Summary
Fix default interpolation from NEAREST to BICUBIC in both `EfficientNetImageProcessor` and `EfficientNetImageProcessorFast` to match the original EfficientNet implementation.
## Details
The original EfficientNet implementation uses BICUBIC interpolation for image preprocessing:
https://github.com/tensorflow... | null | 1b743cd9fc44dcceda79ef9fcb6e488d0babc159 | null | low | [
{
"filename": "src/transformers/models/efficientnet/image_processing_efficientnet.py",
"patch": "@@ -66,7 +66,7 @@ class EfficientNetImageProcessor(BaseImageProcessor):\n `do_resize` in `preprocess`.\n size (`dict[str, int]` *optional*, defaults to `{\"height\": 346, \"width\": 346}`):\n... |
vuejs/vue | 6,718 | fix(ref): preserve ref on components after removing root element (#6632, #6641) |
**What kind of change does this PR introduce?** (check at least one)
- [x] Bugf... | null | 6ad44e13e990951ff152a0fd7042613c5a87f1c0 | null | low | [
{
"filename": "src/core/vdom/patch.js",
"patch": "@@ -697,6 +697,8 @@ export function createPatchFunction (backend) {\n insert.fns[i]()\n }\n }\n+ } else {\n+ registerRef(ancestor)\n }\n ancestor = ancestor.paren... |
electron/electron | 49,357 | build: update on-create-command for siso | As in title. Otherwise:
```
2026-01-12 09:45:27.402Z: Cloning "depot_tools" into /home/builduser/.electron_build_tools/third_party/depot_tools
2026-01-12 09:45:30.860Z: Updating /home/builduser/.electron_build_tools/third_party/depot_tools
2026-01-12 09:45:30.862Z: Running "/home/builduser/.electron_build_tools/t... | null | b0e012f14eef7ecbbdb0acc527434a9957952dc8 | null | low | [
{
"filename": ".devcontainer/on-create-command.sh",
"patch": "@@ -48,7 +48,8 @@ if [ ! -f $buildtools/configs/evm.testing.json ]; then\n \\\"gen\\\": {\n \\\"args\\\": [\n \\\"import(\\\\\\\"//electron/build/args/testing.gn\\\\\\\")\\\",\n- ... |
ggml-org/llama.cpp | 19,976 | vulkan: improve partial offloading performance on AMD | I saw a big difference between Vulkan and ROCm performance in partial offloads. I narrowed it down to the speed of transferring weights from CPU to GPU for offloaded ops. One possible explanation is that using the dedicated transfer queue on AMD may be faster than using a compute queue, so I implemented using a tran... | 723c71064da0908c19683f8c344715fbf6d986fd | 319146247e643695f94a558e8ae686277dd4f8da | 16 | medium | [
{
"filename": "ggml/src/ggml-vulkan/ggml-vulkan.cpp",
"patch": "@@ -590,6 +590,7 @@ struct vk_device_struct {\n vk_queue transfer_queue;\n bool single_queue;\n bool support_async;\n+ bool async_use_transfer_queue;\n uint32_t subgroup_size;\n uint32_t subgroup_size_log2;\n uint32_t... |
ollama/ollama | 7,963 | openai: finish streaming tool calls as tool_calls | When a response contains tool_calls it finishes the chat, and we see this already happening in Ollama in non-chunk mode. This ensures that the chunk with tool calls contains the finish reason, not a following one, while any following ones are not sent - their choice with empty content will conflict with the tool call r... | null | 10d59d5f9082d04d6c15bdeb20f1c04ffbe6e1b6 | null | low | [
{
"filename": "openai/openai.go",
"patch": "@@ -20,6 +20,8 @@ import (\n \t\"github.com/ollama/ollama/types/model\"\n )\n \n+var finishReasonToolCalls = \"tool_calls\"\n+\n type Error struct {\n \tMessage string `json:\"message\"`\n \tType string `json:\"type\"`\n@@ -266,7 +268,7 @@ func toChat... |
nodejs/node | 61,038 | os: freeze signals constant | Remove the ability to mutate the signals constant, as doing so can lead to unexpected behavior, notably when spawning child processes.
Fixes: https://github.com/nodejs/node/issues/44749
<!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/nodejs/node/blob/HEAD/CONTRIBUTI... | null | 472f58684055a15b465e02466548df5def4ed640 | null | low | [
{
"filename": "lib/os.js",
"patch": "@@ -25,6 +25,7 @@ const {\n ArrayPrototypePush,\n Float64Array,\n ObjectDefineProperties,\n+ ObjectFreeze,\n StringPrototypeSlice,\n SymbolToPrimitive,\n } = primordials;\n@@ -330,6 +331,8 @@ module.exports = {\n machine: getMachine,\n };\n \n+ObjectFreeze(c... |
ollama/ollama | 9,075 | Add Ollamazing to Community Integrations | Add [Ollamazing](https://github.com/buiducnhat/ollamazing), a web extension. | 82658c3eec0cbb70ba558e5310fe3e68436aa583 | 8cf16063a52deb416e16039c73264e26f7e9a43a | 4 | medium | [
{
"filename": "README.md",
"patch": "@@ -380,6 +380,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [Chipper](https://github.com/TilmanGriesel/chipper) AI interface for tinkerers (Ollama, Haystack RAG, Python)\n - [ChibiChat](https://github.com/CosmicEventHorizon/ChibiChat) (Kotlin-ba... |
facebook/react | 33,063 | Root import types from react-native in ReactNativeTypes | <!--
Thanks for submitting a pull request!
We appreciate you spending the time to work on these changes. Please provide enough information so that others can review your pull request. The three fields below are mandatory.
Before submitting a pull request, please make sure the following is done:
1. Fork ... | null | 9518f1185621aecb99fd72385cdb137c6e8bd8fe | null | low | [
{
"filename": "packages/react-native-renderer/src/ReactNativeTypes.js",
"patch": "@@ -9,22 +9,18 @@\n * @flow strict\n */\n \n-import type {\n- Component as ReactComponent,\n- ElementRef,\n- ElementType,\n- MixedElement,\n-} from 'react';\n import type {\n // $FlowFixMe[nonstrict-import] TODO(@ruben... |
vercel/next.js | 88,416 | [test] Deflake `test/integration/invalid-custom-routes/test/index.test.ts` | More and more checks get deferred since we want "ready" to be a signal for "ready to use" not "ready and correct". Tests need to account for that.
This PR extracts launching the app out of `getStderr` so that we can actually retry reading stderr without attempting to launch a new app.
Some tests already retried `g... | null | f814b8b51a8207d28640348dc347f18ad0dc64c8 | null | low | [
{
"filename": "test/integration/invalid-custom-routes/test/index.test.ts",
"patch": "@@ -2,7 +2,7 @@\n \n import fs from 'fs-extra'\n import { join } from 'path'\n-import { launchApp, findPort, nextBuild, retry } from 'next-test-utils'\n+import { launchApp, findPort, nextBuild, retry, killApp } from 'next-t... |
huggingface/transformers | 43,042 | [SAM3] Fix MPS race condition in add_point_inputs | # What does this PR do?
For Apple Silicon, this PR enables SAM2 and SAM3 inference on the MPS device.
## Problem
SAM2/SAM3 video inference produces "no object" masks on MPS (all values = -1024), while CPU inference was correct.
## Root Cause
`non_blocking=True` in `add_point_inputs` causes a race condition: the ... | null | 8fe3ec2101faeb6e2e40571c8cb7f72d9708074c | null | low | [
{
"filename": "src/transformers/models/edgetam_video/modeling_edgetam_video.py",
"patch": "@@ -1001,7 +1001,7 @@ def add_point_inputs(self, obj_idx: int, frame_idx: int, inputs: dict):\n device_inputs = {}\n for key, value in inputs.items():\n if isinstance(value, torch.Tensor):\... |
vuejs/vue | 7,996 | Reactive components count |
**What kind of change does this PR introduce?** (check at least one)
- [x] Bugf... | null | 38d5459e109a83b4992168c673403503278ff3c1 | null | low | [
{
"filename": "benchmarks/reorder-list/index.html",
"patch": "@@ -15,7 +15,7 @@\n \n <script type=\"text/x-template\" id=\"t\">\n <div>\n- <h1>{{ total }} Components</h1>\n+ <h1>{{ items.length }} Components</h1>\n <p>{{ action }} took {{time}}ms.</p>\n <button @click... |
huggingface/transformers | 42,736 | gpt-oss is not working with flash-attention | Initializing the `gpt-oss` model with `attn_implementation="flash_attention_2"` or `"flash_attention_3"` would result in silent failures and garbage generation output, as reported in #42533.
`gpt-oss` models rely on attention sinks, which are not yet implemented for `flash_attention`; as suggested, the safest path ...
{
"filename": "src/transformers/models/gpt_oss/configuration_gpt_oss.py",
"patch": "@@ -117,5 +117,22 @@ def __init__(\n **kwargs,\n )\n \n+ def __setattr__(self, key, value):\n+ \"\"\"\n+ Overwritten to allow checking for the proper attention implementation to be used.\... |
ggml-org/llama.cpp | 19,754 | Improve CUDA graph capture | Currently, CUDA graphs are eagerly enabled on the first call to ggml_backend_cuda_graph_compute. If the graph properties keep changing (4+ consecutive updates), the graph is permanently disabled. This is suboptimal because:
- The first call always incurs CUDA graph capture overhead even if the graph is unstable
- O... | null | a0c91e8f9f69c11bbdb1111af20537e206f0866f | null | low | [
{
"filename": "ggml/src/ggml-cuda/common.cuh",
"patch": "@@ -1149,8 +1149,7 @@ struct ggml_cuda_graph {\n size_t num_nodes = 0;\n std::vector<cudaGraphNode_t> nodes;\n bool disable_due_to_gpu_arch = false;\n- bool disable_due_to_too_many_updates = false;\n- int number_consecutive_updates =... |
nodejs/node | 61,128 | fix: custom inspect should not throw when called with wrong this | Fixes #46323
I see `Event` has a similar brand check (lib/internal/event_target.js:152). Should we also remove that one?
... | null | 416db75e42e347f0159f1fbc65ed75de998b27da | null | low | [
{
"filename": "lib/internal/event_target.js",
"patch": "@@ -867,8 +867,6 @@ class EventTarget {\n return new CustomEvent(type, { detail: nodeValue });\n }\n [customInspectSymbol](depth, options) {\n- if (!isEventTarget(this))\n- throw new ERR_INVALID_THIS('EventTarget');\n const name = t... |
ollama/ollama | 9,076 | Add H200 as supported device. | Fixes: #9031 | null | 3a4449e2f1b19bec5b2a2d0a0b1ea2bd53d1b4cc | null | low | [
{
"filename": "docs/gpu.md",
"patch": "@@ -7,7 +7,7 @@ Check your compute compatibility to see if your card is supported:\n \n | Compute Capability | Family | Cards |\n | ------------------ | -... |
huggingface/transformers | 43,118 | Silence pytest warnings due to sentencepiece version | # What does this PR do?
As per the title. Finally pinpointed why they pop up when running `pytest` in some environments | a616b914451ffd3bb3553874e67af7fe47e8adeb | e8d60a7fe614d1170fd4938255f3955fbc9ca498 | 1 | high | [
{
"filename": "pyproject.toml",
"patch": "@@ -72,5 +72,11 @@ markers = [\n log_cli = 1\n log_cli_level = \"WARNING\"\n asyncio_default_fixture_loop_scope = \"function\"\n-# The above pytest-asyncio rule emits unnessecary warnings when it's not installed, so skip it by regex here\n-filterwarnings = [\"ignore... |
electron/electron | 49,349 | build: update NMV to 145 | #### Description of Change
<!--
Thank you for your Pull Request. Please provide a description above and review
the requirements below.
Contributors guide: https://github.com/electron/electron/blob/main/CONTRIBUTING.md
-->
Upstream PR: https://github.com/nodejs/node/pull/61291
Needs to be merged before cu... | null | d6a6312fc8f31901e4a93cc0934ac65880db1b52 | null | low | [
{
"filename": "build/args/all.gn",
"patch": "@@ -2,7 +2,7 @@ is_electron_build = true\n root_extra_deps = [ \"//electron\" ]\n \n # Registry of NMVs --> https://github.com/nodejs/node/blob/main/doc/abi_version_registry.json\n-node_module_version = 143\n+node_module_version = 145\n \n v8_promise_internal_fie... |
ollama/ollama | 9,045 | readme: fix nix package link | The existing link points to the now-deprecated 24.05 release; removing the channel tag from the URL means it will always go to the current stable channel. | afa55bc70cb1714fcad10571279a42109a6e0631 | 378d6e1e6a94099a60d6b7ce99971bbf536bc34d | 1 | high | [
{
"filename": "README.md",
"patch": "@@ -439,7 +439,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)\n - [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)\n - [Guix channel](https://codeberg.org/tu... |
vercel/next.js | 88,423 | [test] Remove rogue debug log | Logging the work unit store caused the following runtime error under some circumstances:
```
TypeError: Cannot read private member #headersList from an object whose class did not declare it
``` | null | e99a92bbada5c2811b00f2abc0795b7171ee47fb | null | low | [
{
"filename": "test/e2e/app-dir/segment-cache/prefetch-runtime/app/(default)/errors/sync-io-after-runtime-api/dynamic-params/[id]/page.tsx",
"patch": "@@ -1,6 +1,5 @@\n import { Suspense } from 'react'\n import { DebugRenderKind } from '../../../../../shared'\n-import { workUnitAsyncStorage } from 'next/dis... |
ggml-org/llama.cpp | 20,213 | Revert to OAI-compatible args | Reverts the output of function arguments to OpenAI-compatible format. Supersedes #20202 | 2f2923f89526d102bd3c29188db628a8dbf507b6 | b283f6d5b3d2d079019ae5ed3cbbdb4b3be03b25 | 6 | medium | [
{
"filename": "common/chat.cpp",
"patch": "@@ -129,7 +129,7 @@ json common_chat_msg::to_json_oaicompat(bool concat_typed_text) const {\n {\"type\", \"function\"},\n {\"function\", {\n {\"name\", tool_call.name},\n- {\"arguments\", json::... |
vuejs/vue | 7,491 | fix(core): Use native bind function instead of own (fix #7408) | A simple refactor of the bind function, kept easy to revert.
The custom bind implementation performs worse than the native one. ( https://jsperf.com/vue-bind-perf )
issue link: https://github.com/vuejs/vue/issues/7408
... | null | dc2171a33a38d0563ef14934b078d9dc4c39acf3 | null | low | [
{
"filename": "src/shared/util.js",
"patch": "@@ -174,22 +174,34 @@ export const hyphenate = cached((str: string): string => {\n })\n \n /**\n- * Simple bind, faster than native\n+ * Simple bind polyfill for environments that do not support it... e.g.\n+ * PhantomJS 1.x. Technically we don't need this anymo... |
facebook/react | 32,951 | [devtools] Fix "View source" for sources with URLs that aren't normalized | ## Summary
Chrome lists resources with their normalized URLs. However, sourcemaps may contain unnormalized URLs in their `sources`. When symbolicating, we get the original entry in `sources`, not the normalized URL, so we need to consider that when calling `openResource`. Otherwise nothing happens and you only see an e... | null | bc6184dd993e6ea0efdee7553293676db774c3ca | null | low | [
{
"filename": "packages/react-devtools-extensions/src/main/fetchFileWithCaching.js",
"patch": "@@ -1,6 +1,6 @@\n /* global chrome */\n \n-import {normalizeUrl} from 'react-devtools-shared/src/utils';\n+import {normalizeUrlIfValid} from 'react-devtools-shared/src/utils';\n import {__DEBUG__} from 'react-devt... |
vercel/next.js | 88,321 | Don't import typescript at runtime | Closes https://github.com/vercel/next.js/issues/86981
Closes PACK-6209
Don't import typescript at runtime to determine the tsconfig needed to transpile next.config.ts.
Instead, read the tsconfig file directly. | 9c5e3da9daace868479bd8eb5c0aa09414c1452a | d2e62656b88ed3ad52412abe22826d0e56efde94 | 3 | medium | [
{
"filename": "packages/next/src/build/next-config-ts/transpile-config.ts",
"patch": "@@ -1,17 +1,19 @@\n import type { Options as SWCOptions } from '@swc/core'\n import type { CompilerOptions } from 'typescript'\n \n-import { resolve } from 'node:path'\n-import { readFile } from 'node:fs/promises'\n+import... |
facebook/react | 32,813 | Add dispatchEvent to fragment instances | `fragmentInstance.dispatchEvent(evt)` calls `element.dispatchEvent(evt)` on the fragment's host parent. This mimics the bubbling that would occur if the `fragmentInstance` could receive an event itself.
If the parent is disconnected, there is a dev warning and no event is dispatched. | 4206fe49825787eda57a5d142640a63772ccbf2b | 8a8df5dbdd57bf63d5156c1a9cba21ac6106b83d | 3 | medium | [
{
"filename": "fixtures/dom/src/components/fixtures/fragment-refs/EventDispatchCase.js",
"patch": "@@ -0,0 +1,157 @@\n+import TestCase from '../../TestCase';\n+import Fixture from '../../Fixture';\n+\n+const React = window.React;\n+const {Fragment, useRef, useState} = React;\n+\n+function WrapperComponent(p... |
ollama/ollama | 9,060 | build: add `-DGGML_CUDA_NO_PEER_COPY=ON` for ROCm ggml builds on windows | Fixes https://github.com/ollama/ollama/issues/9048 | abb8dd57f8a86a71b5f8fe1f059aee3636a658b1 | a4f69a0191b304c204ef074ccd6523f121bfddfe | 14 | medium | [
{
"filename": "CMakeLists.txt",
"patch": "@@ -104,6 +104,10 @@ if(CMAKE_HIP_COMPILER)\n if(AMDGPU_TARGETS)\n add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-hip)\n \n+ if (WIN32)\n+ target_compile_definitions(ggml-hip PRIVATE GGML_CUDA_NO_PEER_COPY=1)... |
ollama/ollama | 9,043 | doc: fix link for Abso | Hi,
I've realized the link from my previous PR was not pointing to the Abso repo, so I've fixed that. | 0189bdd0b79fceab9e801a63e1311d53f3784dbc | afa55bc70cb1714fcad10571279a42109a6e0631 | 2 | high | [
{
"filename": "README.md",
"patch": "@@ -494,7 +494,7 @@ See the [API documentation](./docs/api.md) for all endpoints.\n - [multi-llm-ts](https://github.com/nbonamy/multi-llm-ts) (A Typescript/JavaScript library allowing access to different LLM in unified API)\n - [LlmTornado](https://github.com/lofcz/llmto... |
vercel/next.js | 88,417 | Update AGENTS.md with PR template and test generation requirements | ## What?
Updates the AGENTS.md documentation with two clarifications for AI agents working on the codebase.
## Why?
To improve the quality and consistency of AI-generated contributions by:
1. Ensuring PRs follow the repository's standard description format
2. Making it clear that test generation should use the provi... | 5746e3bb14e72db37b4dc75d37137af8ed7c0310 | 059edd0b2256f4d0b5a2cb59338a32ec702fae89 | 8 | medium | [
{
"filename": "AGENTS.md",
"patch": "@@ -52,7 +52,7 @@ The main Next.js framework lives in `packages/next/`. This is what gets publishe\n \n **Note**: `gt submit` runs in interactive mode by default and won't push in automated contexts. Always use `gt submit --no-edit` or `gt submit -q` when running from Cl... |
huggingface/transformers | 42,972 | Support block_size and bf16_stochastic_round keyword arguments for torchao optimizers |
# What does this PR do?
This PR enables the configuration of `block_size` and `bf16_stochastic_round` arguments for torchao optimizers (AdamW4bit, AdamW8bit) within the Trainer.
Previously, these arguments were not passed to the optimizer, so the default values were always used. Now, users can specify them via optim_args in... | null | c9b0498f2e5ab0ef83ff6d6226025bec7c23aaec | null | low | [
{
"filename": "src/transformers/trainer.py",
"patch": "@@ -1671,6 +1671,12 @@ def optimizer_hook(param):\n optimizer_cls = AdamW8bit\n else:\n raise ValueError(\"Invalid optimizer\")\n+ optimizer_kwargs.update(\n+ {\n+ ... |
electron/electron | 49,350 | fix: provide explicit cookie encryption provider for cookie encryption | Backport of #49348
See that PR for details.
Notes: Fixed an issue where no cookie encryption provider was passed into the network service when cookie encryption was enabled.
| null | d5087faff77b219e9ea4f336c49703e665580352 | null | low | [
{
"filename": "shell/browser/net/network_context_service.cc",
"patch": "@@ -14,6 +14,7 @@\n #include \"net/http/http_util.h\"\n #include \"net/net_buildflags.h\"\n #include \"services/network/network_service.h\"\n+#include \"services/network/public/cpp/cookie_encryption_provider_impl.h\"\n #include \"servic... |
nodejs/node | 61,109 | doc: exclude compile-time flag features from security policy | Add a new section to the security model clarifying that experimental features behind compile-time flags are not covered by the vulnerability reporting policy. These features are intended for development only and are not enabled in official releases.
<!--
Before submitting a pull request, please read:
- the CONTR... | null | 0a5418088fb724c0ab4a958447e338ad13c6d2eb | null | low | [
{
"filename": "SECURITY.md",
"patch": "@@ -125,6 +125,26 @@ This policy recognizes that experimental platforms may not compile, may not\n pass the test suite, and do not have the same level of testing and support\n infrastructure as Tier 1 and Tier 2 platforms.\n \n+### Experimental features behind compile-... |
huggingface/transformers | 43,109 | Fix warnings popping up with `fixup` and `pytest` | # What does this PR do?
As per the title. I'm getting too annoyed to see them all the time; high time to fix them
{
"filename": "pyproject.toml",
"patch": "@@ -72,3 +72,5 @@ markers = [\n log_cli = 1\n log_cli_level = \"WARNING\"\n asyncio_default_fixture_loop_scope = \"function\"\n+# The above pytest-asyncio rule emits unnessecary warnings when it's not installed, so skip it by regex here\n+filterwarnings = [\"ignore:... |
ggml-org/llama.cpp | 20,232 | server : do not create checkpoints right after mtmd chunks | fix #20222
The checkpoint logic requires at least one text token to be present at the end of the checkpoint. | f5ddcd1696eca5069dc7915f4d4c03c9a709afea | d417bc43dd29eab006a0da73afc7d610c9ebae7d | 27 | medium | [
{
"filename": "tools/server/server-context.cpp",
"patch": "@@ -2438,6 +2438,8 @@ struct server_context_impl {\n slot.n_prompt_tokens_cache = 0;\n }\n \n+ bool do_checkpoint = params_base.n_ctx_checkpoints > 0;\n+\n // check i... |
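The fix above hinges on one invariant: a checkpoint must end on at least one text token, so no checkpoint should be created right after a multimodal (mtmd) chunk. A hedged Python sketch of that gating idea — the chunk representation and the name `checkpoint_allowed` are assumptions for illustration, not llama.cpp's actual C++ types:

```python
def checkpoint_allowed(chunks, n_ctx_checkpoints: int) -> bool:
    """Decide whether a prompt-cache checkpoint may be created.

    `chunks` is a list of ("text" | "mtmd", payload) pairs -- a simplified
    stand-in for the server's prompt chunks. Checkpoints are refused when
    the feature is disabled, when there are no chunks yet, or when the most
    recent chunk is multimodal, since the checkpoint logic needs a trailing
    text token.
    """
    if n_ctx_checkpoints <= 0:
        return False
    if not chunks:
        return False
    kind, _payload = chunks[-1]
    return kind == "text"
```

The real change in `server-context.cpp` tracks a `do_checkpoint` flag across the decode loop; this sketch only captures the "last chunk must be text" condition it enforces.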
vuejs/vue | 7,930 | feat(weex): support object syntax of class | The object syntax for class bindings is now supported in Weex:
```html
<div v-bind:class="{ active: isActive }"></div>
```
<!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]... | null | 62265035c0c400ad6ec213541dd7cca58dd71f6e | null | low | [
{
"filename": "src/platforms/weex/runtime/modules/class.js",
"patch": "@@ -1,6 +1,6 @@\n /* @flow */\n \n-import { extend } from 'shared/util'\n+import { extend, isObject } from 'shared/util'\n \n function updateClass (oldVnode: VNodeWithData, vnode: VNodeWithData) {\n const el = vnode.elm\n@@ -15,25 +15,... |
nodejs/node | 61,130 | src: remove redundant CHECK | The function `SetAuthTag()` returns early (before reaching this line) if `!cipher->IsAuthenticatedMode()`, which expands to the exact same condition that is being `CHECK()`ed here.
<!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/nodejs/node/blob/HEAD/CONTRIBUTIN... | dff46c07c37f5cacc63451b84ba21478c4bbc45c | d7e4108bc1beb7dc01890b032e7ba89d12feed4e | 7 | medium | [
{
"filename": "src/crypto/crypto_cipher.cc",
"patch": "@@ -549,7 +549,6 @@ void CipherBase::SetAuthTag(const FunctionCallbackInfo<Value>& args) {\n } else {\n // At this point, the tag length is already known and must match the\n // length of the given authentication tag.\n- CHECK(Cipher::FromC... |
facebook/react | 33,140 | feat(compiler): implement constant folding for unary minus | ## Summary
`-constant` is represented as a `UnaryExpression` node that is currently not part of constant folding. If the operand is a constant number, the node is folded to `constant * -1`. This also coerces `-0` to `0`, resulting in `0 === -0` being folded to `true`.
## How did you test this change?
See attached ... | null | 946da518eb2d64d808f9204a72e05892d3005f3f | null | low | [
{
"filename": "compiler/packages/babel-plugin-react-compiler/src/Optimization/ConstantPropagation.ts",
"patch": "@@ -327,6 +327,23 @@ function evaluateInstruction(\n }\n return null;\n }\n+ case '-': {\n+ const operand = read(constants, value.value);\n+ i... |
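The folding step described in the record above can be sketched in a few lines. The real pass operates on the compiler's HIR in TypeScript; this is a hedged Python approximation of the idea only, with illustrative names:

```python
def fold_unary(op: str, operand):
    """Fold a unary expression whose operand is a known numeric constant.

    Returns the folded value, or None when the expression cannot be folded
    (non-numeric operand, or an unsupported operator). Folding `-x` as
    `x * -1` mirrors the approach described above; the zero-sign handling
    mentioned in the record is specific to the real pass's JS semantics and
    is not modeled here.
    """
    if isinstance(operand, bool) or not isinstance(operand, (int, float)):
        return None  # only plain numeric constants participate
    if op == "-":
        return operand * -1
    return None  # other unary operators are left unfolded in this sketch
```

Returning `None` for the unfoldable cases corresponds to the actual pass leaving the `UnaryExpression` node untouched rather than rewriting it.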
vuejs/vue | 7,839 | feat(server, webpack-plugin): webpack 4 support | <!--
Please make sure to read the Pull Request Guidelines:
https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#pull-request-guidelines
-->
<!-- PULL REQUEST TEMPLATE -->
<!-- (Update "[ ]" to "[x]" to check a box) -->
**What kind of change does this PR introduce?** (check at least one)
- [ ] Bugf... | 575b6e77ab82b0bbc581aec3ea9b07135d2d1fcd | ef0b25097957ae9ef9970be732d6e65cc78902e9 | 1 | medium | [
{
"filename": "src/server/webpack-plugin/client.js",
"patch": "@@ -1,6 +1,6 @@\n const hash = require('hash-sum')\n const uniq = require('lodash.uniq')\n-import { isJS, isCSS } from './util'\n+import { isJS, isCSS, onEmit } from './util'\n \n export default class VueSSRClientPlugin {\n constructor (option... |