AscendKernelGen committed on
Commit 5596483 · verified · Parent(s): 54affd5

Update README.md

Files changed (1): README.md (+34 −0)

README.md CHANGED
@@ -38,6 +38,40 @@ Our framework, **AscendKernelGen (AKGen)**, bridges the gap between general-purp
  * **Hardware-Grounded Evaluation:** Validated using **NPUKernelBench**, a comprehensive benchmark that assesses compilation success, functional correctness, and performance (latency) on real Ascend hardware across varying complexity levels.
  * **Performance:** The model demonstrates significant improvement on complex Level-2 kernels compared to baselines, effectively solving tasks where general-purpose models (such as Qwen3 and Llama3.1) fail completely.
 
+ ## Representative Case Studies
+
+ ### Example: Effect of Knowledge Injection
+
+ The following case study, concerning the usage of the `Muls` instruction in AscendC, demonstrates the qualitative improvement brought by CoT-based domain knowledge injection in both professional knowledge comprehension and code generation capability.
+
+ ![knowledge_injection_effect.png](knowledge_injection_effect.png)
+
+ **🤔 Original response (before training):**
+ The response exhibits substantial uncertainty, as indicated by expressions such as “possibly” and “assuming.” The model attempts to reason by analogy from general programming knowledge, yet lacks an accurate understanding of AscendC-specific APIs, the precise usage of the `Muls` instruction, and its architectural context within the Ascend processor. The provided code example (e.g., involving `hip/hip_runtime.h`) is incorrect, and the model explicitly reveals its lack of domain-specific knowledge through statements such as “insufficient documentation” and “cannot be implemented.”
+
+ **🎓 Improved response (after training):**
+ The response demonstrates clear expert-level reasoning.
+
+ * **🧠 Structured reasoning (`<think>`)**: The model first analyzes the `Muls` instruction in terms of its operational background (vector multiplication), key parameters (e.g., `src0`, `dst`), and essential data layout constraints (e.g., whether `src0` must be a scalar or allocated in a specific region).
+ * **✅ Accurate implementation**: It then provides correct and concrete implementation steps, including initialization, local tensor memory management through `InQueue` and `OutQueue`, data movement (`CopyIn`), and the core computation invocation (`ScalarValue`).
+ * **⌨️ Code correctness**: The final generated code example (`__aicore__ void Compute()`) correctly employs the AscendC API and demonstrates an accurate understanding of both data queuing and vector instructions.
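The scalar-multiply semantics that `Muls` provides can be illustrated host-side in plain Python (a sketch of what the instruction computes, not AscendC device code; the helper name `muls` is hypothetical):

```python
def muls(src, scalar_value):
    """Element-wise multiply by a scalar: the computation Muls performs.

    Plain-Python illustration only. Real AscendC code applies this to
    LocalTensor buffers staged through InQueue/OutQueue with CopyIn/CopyOut,
    as outlined in the implementation steps above.
    """
    return [x * scalar_value for x in src]

src = [1.0, 2.0, 3.0, 4.0]
dst = muls(src, 0.5)  # -> [0.5, 1.0, 1.5, 2.0]
```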
+ ### Example: Comparison of Generated Operators
+
+ The following example, concerning the implementation of a `Swish` operator in AscendC, illustrates the qualitative improvement brought by CoT-based domain knowledge injection in handling complex operator implementation and tiling strategy design.
+
+ ![compare_swish_impl.png](compare_swish_impl.png)
+
+ **🤔 Original response (before training):**
+ The response exhibits substantial uncertainty in the critical tiling strategy, with phrases such as “possibly,” “obviously not feasible,” and “assuming.” Although the model attempts a simple arithmetic division (`1024/48 = 21.33`), it fails to understand Ascend core scheduling mechanisms and cannot handle non-divisible remainders appropriately. Its implementation of the `Swish` operator itself is also confused, with an inaccurate composition of operations such as `Negate`, `Reciprocal`, and `Multiply`, and an imprecise understanding of the required APIs. The response ultimately concludes that compilation would fail.
+
+ **🎓 Improved response (after training):**
+ The response demonstrates clear expert-level reasoning.
+
+ * **🧠 Structured reasoning (`<think>`)**: The model first accurately analyzes the mathematical formulation of the `Swish` operator, namely $y = x \times \mathrm{sigmoid}(x)$, together with the input specification. More importantly, it designs a robust tiling strategy by correctly identifying the total workload (`1024`) and the number of cores (`48`), and by formulating an uneven, remainder-aware workload distribution scheme (the first 47 cores process 21 elements each, while the last core processes 37 elements).
+ * **✅ Accurate implementation**: The model then correctly realizes this tiling strategy in the `Init()` function by computing `blockLength` and `remainder`. In the `Compute()` function, it provides a correct and concrete operator implementation pipeline, combining AscendC instructions such as `Muls`, `Exp`, `Adds`, and `Div` to construct the sigmoid function step by step and ultimately complete the `Swish` computation.
+ * **⌨️ Code correctness**: The final generated code examples (`__aicore__ void Init()` and `__aicore__ void Compute()`) correctly employ the AscendC API, demonstrating not only an accurate understanding of the operator’s mathematical logic, but also a deep mastery of multi-core parallelization and tiling strategies.
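The remainder-aware tiling and the step-by-step sigmoid construction described above can be sketched in plain Python (an illustration of the logic only; `tile_lengths` and `swish_pipeline` are hypothetical names, not the actual AscendC `Init()`/`Compute()` code):

```python
import math

def tile_lengths(total_len, num_cores):
    """Uneven 1-D tiling: each core takes floor(total/cores) elements,
    and the last core also absorbs the non-divisible remainder."""
    block_length = total_len // num_cores              # 1024 // 48 = 21
    remainder = total_len - block_length * num_cores   # 1024 - 1008 = 16
    lengths = [block_length] * num_cores
    lengths[-1] += remainder                           # last core: 21 + 16 = 37
    return lengths

def swish_pipeline(x):
    """Swish y = x * sigmoid(x), built only from the element-wise steps
    named above: Muls(-1) -> Exp -> Adds(1) -> Div."""
    t = [v * -1.0 for v in x]             # Muls(x, -1)  -> -x
    t = [math.exp(v) for v in t]          # Exp          -> exp(-x)
    t = [v + 1.0 for v in t]              # Adds(t, 1)   -> 1 + exp(-x)
    return [a / b for a, b in zip(x, t)]  # Div(x, t)    -> x / (1 + exp(-x))

lengths = tile_lengths(1024, 48)
assert lengths[:47] == [21] * 47 and lengths[-1] == 37 and sum(lengths) == 1024
```

Note that the final `Div(x, 1 + exp(-x))` step directly yields $x \times \mathrm{sigmoid}(x)$, so no separate multiplication is needed after the sigmoid is assembled.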
+
  ## Citation
 
  ```bibtex