[{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/posts/","section":"Blog","summary":"","title":"Blog","type":"posts"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/tags/dream-of-the-red-chamber/","section":"Tags","summary":"","title":"Dream of the Red Chamber","type":"tags"},{"content":"App URL: LINK\nPreface # The key point from the previous installment\nwas to regard text as fundamentally symbolic \u0026ndash;\nastronomy, hydrology, the humanities\u0026hellip; all the \u0026ldquo;wen\u0026rdquo; (文, pattern/text) of heaven, earth, and humankind.\nText maps the world and thought in a cost-effective way,\nbecoming our primary tool for understanding and interfacing with objective reality.\nOnce you grasp this, you realize that\nalthough LLMs (Large Language Models) are essentially just next-token predictors,\nonce their capability reaches a certain level, they become nuclear-grade instruments of national importance.\nTheir significance made me want to verify their capabilities\nand to do so repeatedly as they improve over time.\nA near-perfect benchmark for this is Dream of the Red Chamber (紅樓夢, Hong Lou Meng).\nSuppose there existed an omniscient, omnipotent LLM \u0026ndash;\nit could take Cao Xueqin\u0026rsquo;s original first 80 chapters of Dream of the Red Chamber as input and output the subsequent chapters.\nBut because LLM training data is limited,\nit is like a Sudoku puzzle with too few given numbers \u0026ndash; the answer cannot be determined with certainty.\nWhat current LLMs can do is produce at very high throughput within the scope of what they understand.\nWhat the Dream of the Red Chamber Simulator aims to do is, with such productivity at hand,\nuse traditional structured methods to rapidly produce and accumulate results with minimal human 
effort.\nAssumptions # We need some assumptions, biases, and theories to make the task of predicting the ending sufficiently feasible and mechanical.\nWhen it comes to accurate prediction, my intuition goes to classical physics \u0026ndash; specifically thermodynamics.\nIn a closed system, if we specify the initial conditions and the governing laws,\nthe evolution of a thermodynamic system is predictable and deterministic.\nAnother assumption is that LLM capabilities will keep improving,\nbut in the foreseeable future we will not gain additional training data from the Qing Dynasty or from Cao Xueqin himself.\nTherefore, we can establish a structured workflow that both current and future LLMs can execute.\nInitial Conditions # The initial conditions are primarily data extracted from the original novel.\nNow we use LLMs to perform what was previously highly labor-intensive work.\nIn the past, human labor costs were too high, and throwing more people at the problem could not compress the timeline.\nIf you got halfway through and wanted to tweak the extraction rules and start over, it was simply impractical.\nTime and cost are no longer obstacles; extraction quality now depends on model capability.\nFor example, I extracted:\nKey character profiles, personality dossiers, family genealogies;\nPer-chapter snapshots of each character\u0026rsquo;s economic, social, emotional, health, and interpersonal states across all 120 chapters;\nA basic spatial map of the Jia (賈) estate with spatial metadata;\nAll dialogue records, poetry corpora\u0026hellip;\nThe approach was to start with broad, not-yet-rigorous extraction that at least achieves high coverage \u0026ndash; ensuring every piece of text is classified into some category.\nGoverning Laws # I divide the governing laws into two types by my own judgment: fundamental world rules and the author\u0026rsquo;s artistic will.\nThis is admittedly arbitrary, but without making some such judgment the work cannot proceed at 
all.\nWorld rules include but are not limited to:\nSociety: class hierarchy, power dynamics, master-servant relationships, marriage;\nEconomy: income and expenditure, debt, risk of property confiscation;\nCulture: Confucian propriety, festivals, feudal values;\nPsychology: character emotions, personality-driven behavior, internal conflict;\nPolitics: imperial favor, court dynamics, external forces\u0026hellip;\nThe artistic will is precisely what makes Dream of the Red Chamber \u0026ndash; apart from the fact that it lacks a definitive ending \u0026ndash; an ideal prediction target.\nCao Xueqin embedded hints about the characters\u0026rsquo; fates throughout the novel, from the very beginning.\nThe most iconic example is the 判詞 (prophetic verses / judgment poems) of the 十二金釵 (Twelve Beauties of Jinling), which explicitly foreshadow the fates of the female lead and deuteragonist:\n可嘆停機德，堪憐詠絮才。玉帶林中掛，金簪雪裡埋。\n(How lamentable, her virtue of halting the loom; how pitiable, her talent of chanting the willow catkins. A jade belt hangs in the forest; a golden hairpin lies buried in the snow.)\nRule Engine # Given the initial conditions and governing laws, how do we apply them?\nThe more ideal approach would be to build a 3D physics engine similar to a game engine, where each character possesses only the information they would know, and let an AI chatbot role-play each character like an actor performing a part.\nBut first, the cost would be too high and would only increase spectacle \u0026ndash; we would not be introducing new information, and the 3D engine would not produce new results.\nSecond, we are not running a wind-tunnel fluid dynamics simulation; we are trying to guess what Cao Xueqin had in mind. 
Staying at the textual level is sufficient for now.\nBased on the previously extracted data, we derive a set of computational subjects and rules.\nIn practice, this is the traditional process of evaluating evidence, confidence, and additive/subtractive adjustments for whether an event occurs \u0026ndash;\nmade systematic, repeatable, modifiable, and exhaustively brute-forced.\nThe simulation steps for each round are:\nProcess delayed effects \u0026ndash; check pending_effects; apply any that have reached their due chapter.\nEvaluate all laws \u0026ndash; check each law\u0026rsquo;s premises to see if all are satisfied (skip those with confidence \u0026lt; 0.3).\nConflict resolution \u0026ndash; simultaneously triggered laws may contradict each other; adjudicate which one wins.\nApply effects \u0026ndash; those with a delay go into the queue; those without directly modify state.\nSnapshot \u0026ndash; compress the current state into a numerical vector.\nchapter += 1\nA complete example \u0026ndash; the death of Lin Daiyu (林黛玉) in Chapter 98 \u0026ndash; is appended at the end of this article.\nWorkflow Summary # Among the several components in the above workflow,\nwhether the extracted data is academically rigorous, whether the rules are reasonable and applicable, whether the simulation steps are sound \u0026ndash;\nnone of this is critically important, because each part can be improved and regenerated independently.\nFrom a software engineering perspective, my goal is to make this engine work well at the interface level,\nand continuously refine prediction results as more information is incorporated and the methodology improves.\nCurrent Results: Objective vs. 
Subjective Parallel Comparison # Here I must introduce another self-imposed methodology to enable structured comparison:\ndividing the inference engine\u0026rsquo;s layers into two main parts \u0026ndash; objective conditions and artistic choice.\nObjective Conditions # The historical backdrop of the era in which the novel was written \u0026ndash; its characters, settings, feudal system, economy, and so on \u0026ndash; constitutes the first layer of objective conditions. This can delimit the entire scope of what the story is capable of containing. We have already extracted some objective laws based on period-appropriate historical context and academic literature.\nConversely, anything that actually existed in that era could, in theory, appear and influence the story.\nFor instance, the novel already features some Western modern objects such as self-striking clocks and pocket watches. What if Western firearms appeared and became a significant plot driver?\nExhaustively exploring such first-layer objective possibilities is a direction for future work, and might achieve an effect that is \u0026ldquo;within reason yet beyond expectation.\u0026rdquo;\nArtistic Choice # The second layer is the author Cao Xueqin\u0026rsquo;s (曹雪芹) cultivation of this fictional world.\nMany characters and the overall trajectory of the family carry a heavy fatalistic coloring.\nThe novel\u0026rsquo;s countless poems and metaphors \u0026ndash; as well as marginal annotations by a friend who reportedly read the ending \u0026ndash; hint at this.\nTherefore, we can use the author\u0026rsquo;s background and life experiences\nto infer what fates he chose for his characters,\nand thereby reveal the values he truly wished to express.\nCross-Comparison # From here, we can treat the Gao E (高鶚) continuation as the work of the most advanced \u0026ldquo;player\u0026rdquo; to date.\nWhat he did is essentially the same thing I am doing now:\nbased on the characters and setting in the novel, attempting to 
divine Cao Xueqin\u0026rsquo;s artistic choices as closely as possible.\nMoreover, Gao E is the one who actually supplied the ending we now have, which greatly increased the novel\u0026rsquo;s circulation, and his continuation has been widely accepted \u0026ndash; so we place it in a parallel position for comparison.\nRealistic Simulation # What if we stripped away all artistic treatment and retained only objective laws, letting the story evolve naturally?\nThe result is that most plot events would not occur within the span of 120 chapters. The narrative would be less dramatic and contain fewer tragedies.\nMethods for Improving Prediction Quality # Re-extract text when LLM capabilities improve\nMore human intervention for fine-tuning and experimenting with different prompts\nEnlist scholars of Redology (紅學, the academic study of Dream of the Red Chamber) or historians to assist with data cleaning and engine logic adjustments\nIncorporate newly discovered or previously undigitized materials (if any) into training\nExperiment with alternative methodologies\nEstablish a fixed workflow and let AI agents continuously fine-tune and produce many versions; since there is no clear termination criterion, quality can only be judged manually\nConclusion # Due to the constraints of existing and pre-trained data, and the strong internal consistency of Dream of the Red Chamber as a work of art,\ndeus ex machina predictions are unlikely to emerge. 
What we see instead are internal comparative differences \u0026ndash;\nfor example, the confiscation and decline of the Jia family is fated to happen regardless; the difference lies only in timing.\nA Final Reflection # This kind of work would originally have required at least one to two years and at least one full-time person to complete.\nNow I can use my after-work hours to play a different professional role \u0026ndash; which also makes up for a regret from when financial pressure forced me to switch fields years ago.\nI hope that sharing the thinking process behind building the Dream of the Red Chamber Simulator is helpful to you,\nand I look forward to the social sciences \u0026ndash; not just computer science and the natural sciences \u0026ndash; benefiting from the rapid advances in AI.\nAppendix: Full Simulation Process Example # Chapters 97-98, \u0026ldquo;The Death of Lin Daiyu\u0026rdquo; (黛玉之死) \u0026ndash; a complete walk-through of all six steps (the following content was generated by AI):\nExample: Chapter 97 \u0026ndash; The Switcheroo Plot (掉包計) -\u0026gt; Burning Manuscripts and Severing Ties (焚稿斷情) -\u0026gt; Daiyu\u0026rsquo;s Death\nBackground State (entering Chapter 97)\nAfter more than a dozen chapters of cumulative decline, Lin Daiyu\u0026rsquo;s state is:\nagent.林黛玉: health=0.12, mood=0.08, isolation=0.72, tragedy_risk=0.95, alive=True\nagent.賈寶玉: monk_tendency=0.35, mood=0.20\neconomy: debt_ratio=0.65\npolitics: family_decides_marriage=True\nrelation.賈寶玉::林黛玉: marriage_probability=0.15\nrelation.賈寶玉::薛寶釵: marriage_probability=0.72\nWhy did Daiyu\u0026rsquo;s health drop from an initial 0.35 to 0.12? 
Because this law has been silently triggering every chapter:\n▎ PSY_E1_DAIYU_DECAY \u0026ldquo;Daiyu\u0026rsquo;s health slowly decays\u0026rdquo;\n▎ Premise: health \u0026gt; 0.0 AND isolation \u0026gt; 0.3 AND alive = True -\u0026gt; Effect: health sub 0.017\n▎ At -0.017 per chapter, over a dozen chapters this amounts to a lethal chronic drain.\n① Process Delayed Effects\nCheck the pending_effects queue. Suppose the following was triggered in Chapter 13:\n▎ FATE_010 \u0026ldquo;Qin Keqing\u0026rsquo;s deathbed dream: the peak foretells the fall\u0026rdquo; delay_chapters: 20\nIts effect, economy.spending_pressure add 0.1, already came due and was executed in Chapter 33. The queue is now empty. Skip.\n② Evaluate All 369 Laws\nThe engine scans each law in sequence. The key laws that trigger this chapter:\nLaw A \u0026ndash; VAR_MARRIAGE_SWAP \u0026ldquo;The Switcheroo: Secretly marrying Baochai instead\u0026rdquo; conf=0.95\nPremise check:\nagent.林黛玉.health \u0026lt; 0.15 -\u0026gt; 0.12 \u0026lt; 0.15 ✅\nagent.林黛玉.alive == True -\u0026gt; True ✅\npolitics.family_decides_marriage -\u0026gt; True ✅\nrelation.寶玉::黛玉.marriage_probability \u0026lt; 0.5 -\u0026gt; 0.15 \u0026lt; 0.5 ✅\nAll passed -\u0026gt; 🔥 Triggered!\nLaw B \u0026ndash; PSY_E1_DAIYU_DECAY \u0026ldquo;Daiyu\u0026rsquo;s health decay\u0026rdquo; conf=0.9\nhealth \u0026gt; 0.0 -\u0026gt; 0.12 \u0026gt; 0 ✅\nisolation \u0026gt; 0.3 -\u0026gt; 0.72 \u0026gt; 0.3 ✅\nalive == True ✅\n-\u0026gt; 🔥 Triggered! 
Law C \u0026ndash; VAR_MARRIAGE_DAIYU \u0026ldquo;The Stone-and-Wood Bond: Baoyu and Daiyu marry\u0026rdquo; conf=0.9\nrelation.寶玉::黛玉.marriage_probability \u0026gt; 0.7 -\u0026gt; 0.15 \u0026gt; 0.7 ❌ -\u0026gt; Not triggered (Baoyu-Daiyu marriage probability too low)\nThis chapter also simultaneously triggers over a dozen other laws (economic decline, political risk, etc.), but the above are the ones directly relevant to Daiyu.\n③ Conflict Resolution\nVAR_MARRIAGE_SWAP, VAR_MARRIAGE_NORMAL_BAOCHAI, and VAR_MARRIAGE_DAIYU belong to the same variant_group (marriage outcomes are mutually exclusive).\nOnly VAR_MARRIAGE_SWAP passed the premise check, so there is no actual conflict. However, if Daiyu were already dead (alive=False), then VAR_MARRIAGE_NORMAL_BAOCHAI would trigger instead of the switcheroo version \u0026ndash;\nthat would be a different evolutionary path.\nPSY_E1_DAIYU_DECAY\u0026rsquo;s effect is additive (sub), so it does not conflict with other laws. All effects are retained.\n④ Apply Effects\nLaw A\u0026rsquo;s effects execute immediately (delay=0):\nmarriage trigger_event BAOYU_MARRIED_BAOCHAI -\u0026gt; fate_flags[\u0026ldquo;BAOYU_MARRIED_BAOCHAI\u0026rdquo;] = True\nrelation.寶玉::寶釵.marriage_probability set 1.0 -\u0026gt; 1.0\nagent.賈寶玉.mood sub 0.5 -\u0026gt; 0.20 -\u0026gt; 0.00 (clamp)\nagent.賈寶玉.monk_tendency add 0.3 -\u0026gt; 0.35 -\u0026gt; 0.65\nagent.林黛玉.health sub 0.1 -\u0026gt; 0.12 -\u0026gt; 0.02\nLaw B\u0026rsquo;s effects:\nagent.林黛玉.health sub 0.017 -\u0026gt; 0.02 -\u0026gt; 0.003\nAt this point Daiyu\u0026rsquo;s health = 0.003, approaching zero.\n⑤ Snapshot\nCompress the current world state into a numerical vector:\nsnapshot = {\neconomy_vector: [0.42, 0.82, 0.65, 0.55, 0.80, 0.35],\nagent_vectors: {\n\u0026#34;林黛玉\u0026#34;: [0.003, 0.08, 0.10, 0.00, 0.30, 0.00, 0.72, 0.95],\n\u0026#34;賈寶玉\u0026#34;: [0.80, 0.00, 0.30, 0.72, 0.80, 0.65, 0.42, 0.92],\n... 
}, politics\\_vector: \\[0.0, 0.60, 0.75] }\nThis vector will later be compared via Euclidean distance against the actual vector for Chapter 97 in actual_checkpoints.json.\n⑥ chapter = 98\nEnter the next chapter. At this point Daiyu\u0026rsquo;s health = 0.003, and BAOYU_MARRIED_BAOCHAI = True.\nWhen Chapter 98 runs step ② again, two lethal laws trigger simultaneously:\n▎ VAR_DAIYU_HEARTBREAK \u0026ldquo;Burning manuscripts, severing ties: Daiyu dies of heartbreak\u0026rdquo; conf=0.95\n▎ health ≤ 0.05 -\u0026gt; 0.003 ≤ 0.05 ✅\n▎ BAOYU_MARRIED_BAOCHAI -\u0026gt; True ✅\n▎ -\u0026gt; death trigger_event FATE_DAIYU_DEATH\n▎ -\u0026gt; monk_tendency add 0.4 -\u0026gt; Baoyu 0.65 -\u0026gt; 1.0 (clamp)\n▎ -\u0026gt; alive set False\nThen SYS_E19_ZERO_DAIYU triggers (checkpoint.FATE_DAIYU_DEATH = True), zeroing out all of Daiyu\u0026rsquo;s attributes.\nA few chapters later, Baoyu\u0026rsquo;s monk_tendency has reached 1.0 and mood ≤ 0.15, triggering VAR_MONK_DESPAIR \u0026ldquo;All hopes extinguished: Baoyu renounces the world\u0026rdquo; (萬念俱灰：寶玉出家).\n","date":"29 March 2026","externalUrl":null,"permalink":"/en/posts/stonestory_thermodynamics/","section":"Blog","summary":"","title":"Dream of the Red Chamber Simulator: Thermodynamics and Artistic Choice","type":"posts"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/tags/literary-simulation/","section":"Tags","summary":"","title":"Literary Simulation","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/tags/llm/","section":"Tags","summary":"","title":"LLM","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/","section":"QQder 核舟記部落格","summary":"","title":"QQder 核舟記部落格","type":"page"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/tags/rule-engine/","section":"Tags","summary":"","title":"Rule Engine","type":"tags"},{"content":"","date":"29 March 
2026","externalUrl":null,"permalink":"/en/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/categories/the-workshop/","section":"Categories","summary":"","title":"The Workshop","type":"categories"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/en/tags/thermodynamics/","section":"Tags","summary":"","title":"Thermodynamics","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/ja/tags/%E3%83%AB%E3%83%BC%E3%83%AB%E3%82%A8%E3%83%B3%E3%82%B8%E3%83%B3/","section":"Tags","summary":"","title":"ルールエンジン","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/zh-hans/tags/%E7%83%AD%E5%8A%9B%E5%AD%A6/","section":"Tags","summary":"","title":"热力学","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/ja/tags/%E7%86%B1%E5%8A%9B%E5%AD%A6/","section":"Tags","summary":"","title":"熱力学","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/tags/%E7%86%B1%E5%8A%9B%E5%AD%B8/","section":"Tags","summary":"","title":"熱力學","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/ja/tags/%E7%B4%85%E6%A5%BC%E5%A4%A2/","section":"Tags","summary":"","title":"紅楼夢","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/tags/%E7%B4%85%E6%A8%93%E5%A4%A2/","section":"Tags","summary":"","title":"紅樓夢","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/zh-hans/tags/%E7%BA%A2%E6%A5%BC%E6%A2%A6/","section":"Tags","summary":"","title":"红楼梦","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/tags/%E8%A6%8F%E5%89%87%E5%BC%95%E6%93%8E/","section":"Tags","summary":"","title":"規則引擎","type":"tags"},{"content":"","date":"2026年3月29日","externalUrl":null,"permalink":"/zh-hans/tags/%E8%A7%84%E5%88%99%E5%BC%95%E6%93%8E/","section":"Tags","summary":"","title":"规则引擎","type":"tags"},{"con
tent":" Preface # Predicting the future — from fortune and misfortune to the fate of humanity — has been one of the grand challenges of human civilization since antiquity. Large Language Models (LLMs) now offer us a glimpse of hope for tackling this problem.\nThis article explores the use of LLMs as the latest tool, with Dream of the Red Chamber (紅樓夢) serving as a sandbox, to find methods for predicting the novel\u0026rsquo;s lost final forty chapters.\nLet me state upfront: I have not succeeded. Perhaps one day when someone does, this article will surface in their search results.\nThis piece is more of a meditation on the nature of text itself. While text lacks the precision of physical formulas,\nas a tool for humanity to grasp reality and speculate about the future, it is far more important than we imagine.\nText is not merely an \u0026ldquo;imagined\u0026rdquo; reality — it is not inherently subjective. It simply mirrors objective reality in the most cost-effective way possible.\nAnd LLMs, as automated mechanisms for predicting text, will radically compress the cost of extracting, generating, and mirroring objective reality.\nThe latest implementation will be updated in the iOS app: Dream of the Red Chamber Simulator.\nAPP: Link\nThe three great regrets in life: First, that the shad has too many bones; second, that the crabapple blossom has no fragrance; third, that Dream of the Red Chamber was never finished.\n— Eileen Chang (張愛玲)\nCelestial Patterns: More Than Just Word Prediction # Predicting the future has always been a matter of great importance in human societies. Every ancient civilization had priests or officials dedicated to observing the stars.\nSymbol systems such as astronomy and hydrology textualized natural phenomena and physical laws. 
The most quintessential example is the coordinate system of latitude and longitude — text became a crucial tool for humanity to understand and influence the objective world.\nThe practical power of this mapping between text and reality has been validated over the recent years of explosive growth in LLM capability.\nIn the past, language as a tool was not sufficiently deterministic. After the Industrial Revolution, when science became the primary driver of productivity, language was perpetually relegated to the bottom of the prestige hierarchy.\nThe age of LLMs has finally brought the digestion and production of text into the millisecond domain, freeing it from the bottlenecks of human reading speed, typing speed, and typographical errors.\nWork that once consumed enormous mental energy and time now has the potential to be assembled and configured like a production line.\nBut what does this production line produce? The essence of an LLM is \u0026ldquo;predicting\u0026rdquo; the next token. Is this actually productive? Does the model \u0026ldquo;sort of\u0026rdquo; \u0026ldquo;understand\u0026rdquo; what it is saying?\nIlya Sutskever (co-founder and former chief scientist of OpenAI) once gave this example:\nSay you read a detective novel, and on the last page, the detective says \u0026ldquo;I am going to reveal the identity of the criminal, and that person\u0026rsquo;s name is\u0026hellip;\u0026rdquo;\nIf an LLM can consistently and correctly guess the identity of the culprit, then we can tentatively say it \u0026ldquo;understands\u0026rdquo; the novel — at least surpassing the many readers who guessed wrong.\nAnd we must properly appreciate what \u0026ldquo;understanding\u0026rdquo; means. Understanding is ultimately for predicting the future. 
Every ancient civilization, without exception, studied astronomy and hydrology\nprecisely to forecast upcoming climate patterns, changes in river courses, droughts and floods — to survive better in the objective environment.\nOne could even argue that predicting correctly matters more than understanding.\nThe Humanities: Both People and Agents Remain Black Boxes # Predicting the future is both the pursuit of the natural sciences and their prerequisite (reproducibility), and it is the holy grail of the social sciences.\nThis admittedly sounds like science fiction. In Isaac Asimov\u0026rsquo;s Foundation series, such a discipline for predicting the future was fictionalized as \u0026ldquo;psychohistory\u0026rdquo; (心理史學).\nEconomists, historians, psychologists, social scientists — all want to know how individuals and societies will react to specific events.\nFinance, in particular, is probably the field outside of software where AI is being applied most aggressively.\nAlthough we cannot yet see the finish line, the feasibility of this endeavor has improved significantly.\nThe improvement is that we now have a remarkable black box (the LLM agent).\nFor tasks at a level comparable to human performance, it is blazingly fast and extremely cheap, making it suitable for replacing human labor.\nThe limitation is that its current mode of use resembles a slot machine. We can use certain techniques (prompt/context engineering) to improve the hit rate, but that is about it.\nWe struggle to open the black box. 
Chaining multiple black boxes together (multi-agent) yields only limited improvement.\nCurrently, tasks that a single agent can handle get done quickly and well, but performance on more abstract tasks is difficult to improve linearly.\nApplied to social science: a single agent cannot adequately simulate even one individual\u0026rsquo;s memory and emotions, let alone can multi-agent systems simulate an entire community.\nOn the optimistic side, this feels more like a performance problem — and performance within this paradigm will continue to improve.\nThe Sandbox: Don\u0026rsquo;t Aim for a One-Hit Kill # Since we are dealing with a black box, the intuitive approach is to find a smaller box to attempt to crack.\nAssume the current baseline model capability is what was described earlier: throw any detective novel into the LLM slot machine, and it can directly (one-shot) and correctly output who the culprit is.\nBuilding on this baseline, if we put in extra effort — erecting scaffolding, going back and forth with the LLM in discussion, finding ways to linearly accumulate results across each exchange — we should theoretically be able to make predictions of higher difficulty.\nDream of the Red Chamber is the perfect target. Based on the content of the first eighty chapters, we ask the model to predict, to some degree, the final forty chapters.\nThis prediction is extremely difficult, but it is just right for my working objectives. Theoretically the probability is not zero; practically it is highly unlikely. This makes it an ideal benchmark for observing LLM capability growth over the coming years.\nHaving written this far, I can finally articulate two working objectives:\nHow can we put in additional effort so that answers unattainable through one-shot prompting can be progressively approached? 
How should we choose our battleground so that our results are not immediately rendered obsolete by stronger models — and ideally, so that our framework also benefits when future models improve? Below, I begin considering research methods based on the characteristics of Dream of the Red Chamber and LLMs.\nAssumptions # We assume that the ending of Dream of the Red Chamber did once exist, and that the first eighty chapters and the subsequent conclusion were written as an organic, intentional, continuous work — exhibiting the same internal coherence found within the first eighty chapters themselves.\nIf the ending never actually existed, the prediction difficulty is even higher — approaching the prediction of a parallel universe. The question becomes: if Cao Xueqin had written the ending, what would it necessarily have been?\nThis word \u0026ldquo;necessarily\u0026rdquo; is the crux. One must reach this level of confidence for generating something from nothing to be meaningful.\nThe Writing of Dream of the Red Chamber # The novel was composed around the 1750s. At that time it circulated mostly among friends and relatives. It was not until 1791, when Cheng Weiyuan published it using movable wooden type, that it became widely known.\nRedology and AI-Assisted Research # Wang Guowei and Hu Shi were pioneers of Redology (紅學 — the scholarly study of Dream of the Red Chamber). The field has continued to develop, and in recent years has trended toward popularization and entertainment. 
The attention given to textual archaeology (探佚學) and the controversial Guiyou manuscript (癸酉本) reflects the public\u0026rsquo;s curiosity about the ending.\nKey research achievements incorporating the latest technology include:\nMachine learning once again confirming that the final forty chapters were not written by the original author\nUsing LLMs for more nuanced semantic vectorization of text (Word Embedding)\nUsing LLMs to build domain-specific knowledge graphs\nModels trained specifically on the first eighty chapters and Qing dynasty historical texts as input data\nLLM Characteristics # The LLM characteristic most relevant to this task is: it has been trained on all data available on the internet, plus all valuable materials these frontier AI labs could obtain.\nFor information already in its training data, the model\u0026rsquo;s predictive capability and tendency are very high. For instance, if you input a passage from Harry Potter, it can recite the subsequent paragraphs from memory.\nBut the final forty chapters of Dream of the Red Chamber were never transmitted to posterity. They are not in the model\u0026rsquo;s training data. It cannot recite them.\nProblem 1: Context Window Limitations # Can we simply input chapters one through eighty and ask the LLM to output the remaining forty?\nOn the input side, the current top-tier models (Gemini 3.1 / GPT-5.4 / Opus 4.6) using API mode can support up to 1M tokens, which is sufficient.\nHowever, under the current paradigm, the output token window is far smaller than the input. Output is limited to roughly four to eight thousand Chinese characters at most — approximately one chapter\u0026rsquo;s worth of content.\nProblem 2: Listless Prose and Quality Degradation # What if we modify the prompt to ask the LLM to output only the content of chapter eighty-one?\nThe model gets \u0026ldquo;contaminated\u0026rdquo; by the massive text input. 
Its writing style closely resembles Cao Xueqin\u0026rsquo;s, and it can reasonably continue the known plot — but the result reads like a flat chronicle of events.\nThen, repeating the process for chapters eighty-two, eighty-three, and so on, the quality drops precipitously.\nProblem 3: Prior Contamination in the Model # Another issue is that during training, the model has already seen Gao E\u0026rsquo;s continuation (高鶚續書), various scholarly speculations, and other secondary sources. If these materials diverge from the original ending, the output will be biased.\nTo Be Continued # Due to the length of this piece, I will wrap up here with a preview of what comes next.\nWe cannot simply have the LLM directly produce unknown information.\nSo we still need more traditional, mechanical, or programmatic methods.\nThe good news is: for the tireless researchers of literature, history, and philosophy — we now have a tractor for the field!\nDream of the Red Chamber possesses a highly structured nature. Important characters have their own 判詞 (prophetic verses, known as \u0026ldquo;pàncí\u0026rdquo;) — poetic passages that cryptically foreshadow each character\u0026rsquo;s fate.\nMoreover, the first eighty chapters can be cross-validated against one another, making the novel more amenable to prediction than many other works of fiction.\nAlthough the cast of characters is large and their backgrounds complex, what we are ultimately predicting is Cao Xueqin\u0026rsquo;s artistic vision — his creative will permeates the entire work. 
This is a tremendous aid for predicting the ending.\nNext: The Thermodynamics of Dream of the Red Chamber # The next article will introduce the experimental approach: structurally extracting content from the text, iteratively experimenting to extract the rules embedded in the novel, and using code to run repeated experiments.\nThe idealized scenario is something akin to a thermodynamic system: given initial conditions (premises — e.g., characters, family wealth, socioeconomic status, interpersonal networks\u0026hellip;) plus the system\u0026rsquo;s operating mechanisms (human psychology, social hierarchy, economic dynamics, cultural norms, karmic retribution, etc.), one could predict the system\u0026rsquo;s state at any subsequent point in time.\n","date":"22 March 2026","externalUrl":null,"permalink":"/en/posts/stonestory_fate/","section":"Blog","summary":"","title":"Dream of the Red Chamber Simulator: The Holy Grail of Social Science, and LLMs as Prophetic Verse","type":"posts"},{"content":"","date":"22 March 2026","externalUrl":null,"permalink":"/en/tags/eileen-chang/","section":"Tags","summary":"","title":"Eileen Chang","type":"tags"},{"content":"","date":"22 March 2026","externalUrl":null,"permalink":"/en/tags/ontology/","section":"Tags","summary":"","title":"Ontology","type":"tags"},{"content":"","date":"22 March 2026","externalUrl":null,"permalink":"/en/tags/prophetic-verse/","section":"Tags","summary":"","title":"Prophetic 
Verse","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/ja/tags/%E5%AD%98%E5%9C%A8%E8%AB%96/","section":"Tags","summary":"","title":"存在論","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/zh-hans/tags/%E5%BC%A0%E7%88%B1%E7%8E%B2/","section":"Tags","summary":"","title":"张爱玲","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/tags/%E5%BC%B5%E6%84%9B%E7%8E%B2/","section":"Tags","summary":"","title":"張愛玲","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/zh-hans/tags/%E6%9C%AC%E4%BD%93%E8%AE%BA/","section":"Tags","summary":"","title":"本体论","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/tags/%E6%9C%AC%E9%AB%94%E8%AB%96/","section":"Tags","summary":"","title":"本體論","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/tags/%E8%A9%A9%E8%AE%96/","section":"Tags","summary":"","title":"詩讖","type":"tags"},{"content":"","date":"2026年3月22日","externalUrl":null,"permalink":"/zh-hans/tags/%E8%AF%97%E8%B0%B6/","section":"Tags","summary":"","title":"诗谶","type":"tags"},{"content":"","date":"25 February 2026","externalUrl":null,"permalink":"/en/tags/ai-assisted-development/","section":"Tags","summary":"","title":"AI-Assisted Development","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/ja/tags/ai%E6%94%AF%E6%8F%B4%E9%96%8B%E7%99%BA/","section":"Tags","summary":"","title":"AI支援開発","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/tags/ai%E8%BC%94%E5%8A%A9%E9%96%8B%E7%99%BC/","section":"Tags","summary":"","title":"AI輔助開發","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/zh-hans/tags/ai%E8%BE%85%E5%8A%A9%E5%BC%80%E5%8F%91/","section":"Tags","summary":"","title":"AI辅助开发","type":"tags"},{"content":"","date":"25 February 
2026","externalUrl":null,"permalink":"/en/tags/app-store/","section":"Tags","summary":"","title":"App Store","type":"tags"},{"content":"","date":"25 February 2026","externalUrl":null,"permalink":"/en/tags/gantt-chart/","section":"Tags","summary":"","title":"Gantt Chart","type":"tags"},{"content":" Preface # In this post, I\u0026rsquo;ll talk about the market, resources, ecosystem, and development process from the perspective of an indie developer. As a shameless plug, I\u0026rsquo;m using Gantt Planet as my running example: URL. I\u0026rsquo;ll admit upfront that these are just my side projects — the pressure is very different from that of someone who makes a living off this — so I\u0026rsquo;m only discussing the research approach here.\nThe Spark and the Stall # The idea behind Gantt Planet was simple: free Gantt chart tools — whether desktop software, mobile apps, or web apps — are all pretty terrible to use. The ones that actually seem decent all charge money, so I figured I\u0026rsquo;d just build my own Gantt chart app.\nIt didn\u0026rsquo;t take long before I realized things weren\u0026rsquo;t that simple:\nViewing a spreadsheet-style Gantt chart on a phone screen is way too cramped. A proper Gantt chart needs to connect to a ton of resources — email, contacts, meeting rooms, and so on. Solving either of these problems is expensive. 
You\u0026rsquo;d need to spend a huge amount of time fine-tuning the UI and designing ideal usage flows, while accepting that some workflows simply can\u0026rsquo;t be integrated and have to be dropped.\nAs for resource integration, you\u0026rsquo;d need to handle sign-ins for all major platforms, deal with countless APIs and authentication protocols, and maintain all of it going forward.\nAt this point, I hit a wall — and when you\u0026rsquo;re working at a scale that doesn\u0026rsquo;t benefit from economies of scale, that\u0026rsquo;s pretty much inevitable.\nPivot After Pivot # In moments like this, I like to take each factor and extend it a step or two outward, looking for a viable intersection where things might actually work.\nAs a developer driven by personal interest, \u0026ldquo;viable\u0026rdquo; means extremely low cost, plus a value proposition that\u0026rsquo;s small but clearly defined.\nAI helped me achieve the first part — extremely low cost.\nAs for the value part, it\u0026rsquo;s mostly self-defined, though bouncing ideas off AI can help crystallize things too.\nFor me, it mainly comes down to building something I\u0026rsquo;d actually want to use — something I\u0026rsquo;d enjoy looking at, at the very least. Beyond that, if nobody else has done it, there\u0026rsquo;s no free version, or there\u0026rsquo;s a clear differentiator, that counts as value too.\nAt this point, I started wondering: is there something that\u0026rsquo;s like a Gantt chart, but not really a Gantt chart?\nAnd then a picture formed in my mind.\nI remembered that when I use Gantt charts, I tend to put the more important items further down.\nThe bottommost item is usually the big-picture condition for completing the entire project — or it represents the project itself.\nBut what if there were items even below that bottom row — items even more important? What would those be?\nWell, there are plenty of things more important — they just have nothing to do with work. 
They\u0026rsquo;re about me. About life.\nAnd so it clicked: I wasn\u0026rsquo;t going to build a regular business Gantt chart. I was going to build a life Gantt chart.\nThe Next Step # So I decided to build a Gantt chart that departs from the typical business use case.\nThis conveniently meant I no longer needed to integrate with online services,\nbecause now it was all about the user — just them, and nothing else.\nWith that, I\u0026rsquo;d taken one more step forward and kept the project alive for the time being. But could it lead to enough substance to be complete?\nI thought about self-management and the important-but-not-urgent things in life — they all have rhythms and frequencies.\nHealth matters, so companies do annual check-ups. Family matters, so you make sure to see your loved ones before too much time passes.\nCombined with the nature of Gantt charts, within any given time window, items overlap on the current day.\nAnd if you consider the span of an entire lifetime, every item is potentially relevant today. That meant I could collapse everything onto the center line of the UI.\nThis solved the cramped UI problem while expressing a set of values I found genuinely meaningful.\nThe actual timeline view: all life items converge on the calendar centerline — see everything that matters today at a glance\nCompleteness # One of the App Store review guidelines is that your app can\u0026rsquo;t just replicate what a plain text webpage could do.\nFor example, a simple to-do list might not pass muster. So I had to make sure this app was more than just a spreadsheet — otherwise, Google Sheets could do the same thing.\nThe top-to-bottom visual flow of the spreadsheet reminded me of digging downward — like each day you only do the bare minimum surface-level tasks. 
There\u0026rsquo;s a Chinese idiom, \u0026ldquo;people floating above their work,\u0026rdquo; that captures this state perfectly.\nThe metaphor of more important items sitting at deeper layers made me want to make it more visual, more tangible. The immediate association was excavation — digging through geological strata, mining.\nThen came the question of implementation. Should I slightly curve each row of the spreadsheet? Add some perspective distortion?\nI thought about the context of this life Gantt chart — solitary and introspective.\nThe image that came to mind was: on the surface of a planet\u0026rsquo;s crust, one person digging alone. And then it hit me — isn\u0026rsquo;t that the golden-haired boy who waters his rose and tames a fox?\nSo I built a 3D version of the Gantt chart, using a mine shaft and gemstones as the visual representation of to-do items.\nAn even more radical approach would have been to keep only the planet version, but considering usability, review difficulty, and how intuitive it would be to understand, I decided to keep both views.\nThe 3D planet Gantt chart — mine shafts and gemstones as visual representations of life goals\nStill Missing a Desk # Back when I was still in school, I spent a lot of time sitting properly at my desk, alone — either studying or writing.\nUsing and thinking about this life Gantt chart felt like it was bringing me back to that desk — the one that\u0026rsquo;s long been thrown away.\nIf I completed something I only do once every three months or once a year — or even a long-term goal —\nI think I\u0026rsquo;d really want to write in a journal, or maybe write a letter to a close friend.\nI realized this Gantt chart was still missing a final emotional outlet. 
But if I added social media sharing, users wouldn\u0026rsquo;t be able to be fully honest.\nAnother option was in-app messaging between users, but there would never — now or in the future — be enough installs to support that, and at the very least an Android version would need to be available too. Either way, it wasn\u0026rsquo;t necessary for the first version.\nThe most self-consistent solution I landed on was the most versatile one: a chatbot.\nFeed the chatbot a bunch of literary classics and let it play the role of a \u0026ldquo;tree hollow\u0026rdquo; — a confidant — offering users some thoughtful feedback.\nFinal Thoughts # So that\u0026rsquo;s the product development and decision-making behind this app.\nIt might look like I just kept changing direction until it was done, but in reality, there were tons of scrapped ideas and rejected features that I haven\u0026rsquo;t even mentioned.\nBeyond giving curious friends a window into the kinds of considerations that go into product development,\nthe last thing I want to emphasize — and the answer to the title — is that the indie developer\u0026rsquo;s niche and considerations come down to this: doing whatever the hell makes you happy!\nI\u0026rsquo;m sure plenty of people will think this is too niche, or that it doesn\u0026rsquo;t match their taste or values.\nBut even so, with a bit of time and the help of AI, you can build the thing you want that doesn\u0026rsquo;t exist yet.\nYou get to be the boss — deciding what\u0026rsquo;s valuable and what\u0026rsquo;s worth building.\nYou get to be the designer — choosing your favorite layouts, colors, fonts, and images.\nYou get to be the PM — deciding how to write it and how complete the features need to be.\nAI will only get stronger. 
Even if it can\u0026rsquo;t do everything today, in the foreseeable future, you\u0026rsquo;ll be able to enjoy all of this too.\nThe App Store is now the new-era personal homepage — everyone can publish their own story.\nIf you\u0026rsquo;re interested, follow this blog. I\u0026rsquo;ll keep sharing real experiences and reflections from publishing on the App Store.\n","date":"25 February 2026","externalUrl":null,"permalink":"/en/posts/gantt-planet-intro/","section":"Blog","summary":"","title":"Gantt Planet: An Indie Developer's Niche and Considerations","type":"posts"},{"content":"","date":"25 February 2026","externalUrl":null,"permalink":"/en/tags/indie-developer/","section":"Tags","summary":"","title":"Indie Developer","type":"tags"},{"content":"","date":"25 February 2026","externalUrl":null,"permalink":"/en/tags/product-development/","section":"Tags","summary":"","title":"Product Development","type":"tags"},{"content":"","date":"25 February 2026","externalUrl":null,"permalink":"/en/tags/side-project/","section":"Tags","summary":"","title":"Side Project","type":"tags"},{"content":"","date":"25 February 2026","externalUrl":null,"permalink":"/en/categories/the-observatory/","section":"Categories","summary":"","title":"The 
Observatory","type":"categories"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/ja/tags/%E3%82%AC%E3%83%B3%E3%83%88%E3%83%81%E3%83%A3%E3%83%BC%E3%83%88/","section":"Tags","summary":"","title":"ガントチャート","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/ja/tags/%E3%82%B5%E3%82%A4%E3%83%89%E3%83%97%E3%83%AD%E3%82%B8%E3%82%A7%E3%82%AF%E3%83%88/","section":"Tags","summary":"","title":"サイドプロジェクト","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/ja/tags/%E3%83%97%E3%83%AD%E3%83%80%E3%82%AF%E3%83%88%E9%96%8B%E7%99%BA/","section":"Tags","summary":"","title":"プロダクト開発","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/zh-hans/tags/%E4%BA%A7%E5%93%81%E5%BC%80%E5%8F%91/","section":"Tags","summary":"","title":"产品开发","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/ja/tags/%E5%80%8B%E4%BA%BA%E9%96%8B%E7%99%BA%E8%80%85/","section":"Tags","summary":"","title":"個人開発者","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/zh-hans/tags/%E7%8B%AC%E7%AB%8B%E5%BC%80%E5%8F%91%E8%80%85/","section":"Tags","summary":"","title":"独立开发者","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/tags/%E7%8D%A8%E7%AB%8B%E9%96%8B%E7%99%BC%E8%80%85/","section":"Tags","summary":"","title":"獨立開發者","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/zh-hans/tags/%E7%94%98%E7%89%B9%E5%9B%BE/","section":"Tags","summary":"","title":"甘特图","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/tags/%E7%94%98%E7%89%B9%E5%9C%96/","section":"Tags","summary":"","title":"甘特圖","type":"tags"},{"content":"","date":"2026年2月25日","externalUrl":null,"permalink":"/tags/%E7%94%A2%E5%93%81%E9%96%8B%E7%99%BC/","section":"Tags","summary":"","title":"產品開發","type":"tags"},{"content":"","date":"22 February 
2026","externalUrl":null,"permalink":"/en/tags/claude/","section":"Tags","summary":"","title":"Claude","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/claude-code/","section":"Tags","summary":"","title":"Claude Code","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/gemini/","section":"Tags","summary":"","title":"Gemini","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/gemini-cli/","section":"Tags","summary":"","title":"Gemini Cli","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/handwriting-recognition/","section":"Tags","summary":"","title":"Handwriting Recognition","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/ios-app/","section":"Tags","summary":"","title":"IOS App","type":"tags"},{"content":" Preface # Kana Juku is the first app I ever built and shipped to the App Store.\nSince it was my first, there\u0026rsquo;s a full story arc to share.\nThis series covers the development process, how I used AI assistance and how that evolved, working with public datasets and copyright considerations, and more.\nIf other apps have noteworthy stories, I\u0026rsquo;ll publish those separately.\nThis post focuses on the transition from chatbots to AI agents starting in Q4 2025.\nThings move fast in this space, so I\u0026rsquo;ve bluntly timestamped the key moments.\nAbout the App # If you have an Apple device, feel free to download it and give it a try.\nSeveral upcoming posts will also use this app as a running example — topics like cleaning ETL datasets, Apple Create ML, PyTorch, VOICEVOX, on-device large language models, and more.\nKana Juku: URL\nDevelopment Timeline # Motivation # My family and I are both interested in learning Japanese, and I\u0026rsquo;ve long wanted a Japanese-learning app that perfectly fits our needs.\nMy 
family\u0026rsquo;s pain point is that they don\u0026rsquo;t read English, so the romaji in most textbooks and apps is meaningless to them.\nFor me, I really wanted kana displayed alongside their kanji origins (e.g., \u0026ldquo;あ\u0026rdquo; derives from \u0026ldquo;安\u0026rdquo;).\nAnother annoyance: I installed the Japanese keyboard for occasional use, but switching input methods every day meant an extra tap to skip past the Japanese keyboard — a small friction that added up.\nEarly Preparation # [Q4 2024]\nI was between jobs at the time, so I had the bandwidth to take Udemy courses. Since I had some JavaScript experience, I started with React \u0026amp; Expo.\nAt this stage I was following along with course content — simple web-style pages, plus extras like GPS, camera control, and fetching remote data.\nBut since it wasn\u0026rsquo;t Apple\u0026rsquo;s native ecosystem, there was a lot of extra tooling to manage.\n[Q1 2025]\nAfter hesitating for a long time, I bought a Mac Mini and switched entirely to Apple\u0026rsquo;s own SwiftUI. Again, I learned from Udemy courses.\nMost of my time went into getting comfortable with basic UI components and layouts, plus all the fundamental features — data persistence, fetching data, embedding maps — and their SwiftUI equivalents.\nSwiftUI is more modern and isn\u0026rsquo;t as tightly coupled to Xcode as UIKit, but it\u0026rsquo;s also harder to predict how a SwiftUI layout will actually look. Early on I cared too much about that and burned a lot of time experimenting.\n[Q3 2025]\nSince I had a day job and could only code in the evenings — and not every evening at that — progress was slow. I was basically building out the basic skeleton and plugging in the Japanese language data.\nWith a first app, it\u0026rsquo;s hard to foresee the final shape, so I kept revising. Sometimes I\u0026rsquo;d circle back to rewatch course videos for features I now knew I needed. 
Essentially, I was paying tuition.\nUp to this point, starting from Q1 2024, plain chatbots like ChatGPT were already a big help for coding.\nBut the copy-paste cycle and having to explain mountains of context was incredibly time-consuming. The output often missed the mark on the first try or drifted off course, sending me right back to the copy-paste loop. It never reached a positive feedback cycle — it was only useful as a learning reference.\nAt the time, the hottest tool was actually the Cursor editor with its tab-autocomplete, but it required a subscription for meaningful usage, so I didn\u0026rsquo;t try it.\nMeanwhile, Claude was already gaining popularity as the best model for coding, and Anthropic had released Claude Code — an AI agent that runs on your local machine. But again, it required a subscription, so I didn\u0026rsquo;t try it.\nPivoting to AI Agents # [Q4 2025]\nAt this point I expected I\u0026rsquo;d only ever subscribe to one chatbot at a time, and I had just switched from ChatGPT to Google Gemini.\nSpec-Driven Development (SDD) was trending, and Google had launched Gemini CLI — their answer to Claude Code — so I finally gave it a shot.\nI discovered that agents eliminated the copy-paste step entirely, massively boosting efficiency. The step of pasting code back and hunting for which lines to change was also gone.\nBy then I was convinced: for coding, you should use an agent, not a chatbot. So I went ahead and subscribed to Claude to use Claude Code (CC from here on).\nCC\u0026rsquo;s underlying model was clearly stronger. Its comprehension of conversations and its ability to execute as expected were already remarkably reliable.\nControlling the Computer, and Opus 4.5 # One time my Mac Mini\u0026rsquo;s disk was completely full and the machine was unusable. 
I just asked CC what to do — the same way I\u0026rsquo;d ask a question on a chatbot\u0026rsquo;s web page.\nCC came back with a concrete plan: which directories could be cleared, what could be moved to an external drive, and so on.\nI was worried it might brick my computer, so I approved each step one at a time. In the end, everything went smoothly.\nI wasn\u0026rsquo;t very familiar with macOS or the Xcode build environment. That\u0026rsquo;s when I realized AI has at least an 80% understanding of everything — including things I don\u0026rsquo;t know — and that being able to write code is roughly equivalent to being able to operate a computer.\nBecause CC could directly control the machine, it moved freely between directories, wrote code, saw its own errors, and fixed them — a fully self-sustaining positive feedback loop.\nThe development speed with an agent was on a completely different level, and the fact that I\u0026rsquo;d waited three extra months before switching to CC made me feel pretty foolish.\nThe time wasted was staggering, both subjectively and objectively.\nSubjectively: if I had adopted the latest tools earlier, the previous three months of work could have been done in two to three weeks.\nObjectively: other people using the latest tools were more productive than me and shipping their products sooner.\nMy earlier refusal to try — saving maybe half an hour of setup time and a few hundred dollars in subscription fees — ended up wasting vast stretches of my life.\nThis might also explain why so many people are obsessed with chasing the latest AI product news.\nAt least that\u0026rsquo;s how it is for me — I can\u0026rsquo;t afford not to stay on top of the latest releases. It\u0026rsquo;s a form of time-management risk hedging.\n[November 24, 2025]\nOpus 4.5 was released. 
Opus is Claude\u0026rsquo;s highest-tier flagship model, and version 4.5 had just dropped.\nBeyond significant performance improvements across the board compared to its predecessor, the biggest difference was its understanding of intent.\nThe old version essentially did exactly what you pointed at (which was already quite good, honestly). Starting with 4.5, after receiving your request, it would first summarize and plan to some degree. In human terms: it became sharper, more experienced.\nYou no longer needed to spell out which file to modify and how. You could describe the end goal like a manager or executive, and it would break it down and plan the next couple of steps on its own.\nThis planning capability boosted efficiency even further. As I mentioned, AI already knows at least 80% of everything — now it was proactively doing the next steps of work, and doing them well.\nCombined with this, I was able to operate at a much higher level of abstraction. More and more was delegated to CC. 
Gradually, I stopped needing to read or edit code myself.\nAfter Opus 4.5 came out, the debate on social media about whether AI can write code essentially ended.\nFor full-time software engineers and seasoned pros, I can\u0026rsquo;t speak to their experience.\nBut compared to myself: things that would have taken me one to two years could now be done in two to three months.\nThe output settled at just beyond the edges of my own knowledge — I was actually the biggest bottleneck.\nEnd of Part 1\n","date":"22 February 2026","externalUrl":null,"permalink":"/en/posts/kana_juku_dev_1/","section":"Blog","summary":"","title":"Kana Juku Dev Log (Part 1): From Chatbots to AI Agents","type":"posts"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/on-device-ai/","section":"Tags","summary":"","title":"On-Device AI","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/swiftui/","section":"Tags","summary":"","title":"SwiftUI","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/udemy/","section":"Tags","summary":"","title":"Udemy","type":"tags"},{"content":"","date":"22 February 2026","externalUrl":null,"permalink":"/en/tags/uikit/","section":"Tags","summary":"","title":"UIKit","type":"tags"},{"content":"","date":"2026年2月22日","externalUrl":null,"permalink":"/zh-hans/tags/%E6%89%8B%E5%86%99%E8%AF%86%E5%88%AB/","section":"Tags","summary":"","title":"手写识别","type":"tags"},{"content":"","date":"2026年2月22日","externalUrl":null,"permalink":"/tags/%E6%89%8B%E5%AF%AB%E8%BE%A8%E8%AD%98/","section":"Tags","summary":"","title":"手寫辨識","type":"tags"},{"content":"","date":"2026年2月22日","externalUrl":null,"permalink":"/ja/tags/%E6%89%8B%E6%9B%B8%E3%81%8D%E8%AA%8D%E8%AD%98/","section":"Tags","summary":"","title":"手書き認識","type":"tags"},{"content":" About This Site # The title refers to \u0026ldquo;The Miniature Boat\u0026rdquo; (核舟記) — a classical Chinese text about an impossibly 
detailed carving on a tiny boat. It means this site can\u0026rsquo;t carry much, just small crafts and the joy of making things, leaving behind a trace of information. Here I share real examples of using AI to help build apps, make small tools, and improve everyday efficiency — along with ideas, reflections, and setbacks. I won\u0026rsquo;t rehash the hot takes everyone\u0026rsquo;s already discussing. In short, the focus is on the process of mining, not repeatedly introducing the shovel. About Me # Legal name: ChengChe Lee · qqder339@gmail.com Until age 24, I identified as a humanities person. After that, I became a system administrator (the \u0026ldquo;admin\u0026rdquo; that error messages tell you to contact). I use AI in a punk rock way — simple chords, rough technique, but genuine expression. All text on this site is written by me personally, without AI ghostwriting or polishing, because the fun parts should be enjoyed firsthand. Philosophy # Experience is ownership — experiencing something new takes priority over whether it can make money. Success doesn\u0026rsquo;t have to be mine — if someone else is willing to do the same thing and does it better, I\u0026rsquo;ll find something else to work on. After the invention of cinema, human life has been extended by at least three times. — Yi Yi: A One and a Two\nAI is this era\u0026rsquo;s new medium for extending human life. ","externalUrl":null,"permalink":"/en/about/","section":"QQder 核舟記部落格","summary":"The Miniature Boat","title":"About","type":"page"},{"content":"This is the entry page for all apps currently released and actively maintained. The portfolio is organised into two product lines:\nOffline Growth — tools for long-term learning, reflection, and personal growth. Covers two sub-categories: Language Learning and Self-Reflection. Digital Citizen — apps focused on authenticity, memory, and personal digital agency. The current entry point is Democracy EDC (EveryDay Carry). 
Jump straight into any product from the cards below. Every entry includes its App Store link, support page, and privacy policy.\n","externalUrl":null,"permalink":"/en/apps/","section":"Apps","summary":"The current catalog of released and maintained apps","title":"Apps","type":"apps"},{"content":" In the deepfake era, verifiable authenticity is what\u0026rsquo;s actually scarce # Lowering the barrier to video and audio production is mostly a good thing. The problem is that fabrication, re-editing, and context-stripping get cheaper at the same time, so \u0026ldquo;I recorded it\u0026rdquo; has drifted away from \u0026ldquo;I can prove this is how it really happened.\u0026rdquo; Atomic Presence steps into that gap: it lets you start building a verifiable evidence chain the moment you hit record.\nThe situations it\u0026rsquo;s designed for are the ones where you\u0026rsquo;d worry about the footage being challenged later: interviews, witness accounts, whistleblowing, contested scenes, any context where a recording might get disputed, re-cut, or forged. Casual everyday capture sits outside its target.\nHow it differs from ordinary recording tools # Most recording tools think \u0026ldquo;record the file first, worry about preservation later.\u0026rdquo; Atomic Presence weaves hash chains, dynamic QR codes, and digital signatures into the capture flow while it\u0026rsquo;s happening. Verifiability is the core of the product from the start, not a patch bolted on afterwards.\nThat makes it feel more like a technical defense tool for risk scenarios. You may not reach for it every day, but when you do need it, you\u0026rsquo;ll want it already installed, with a workflow you already know, rather than scrambling to assemble tools in the moment.\nWhy four protection levels: different risks, different costs # The protection levels correspond to real scenario differences. 
Sometimes you only need to signal \u0026ldquo;this is being recorded\u0026rdquo; to the other party; sometimes you need something closer to evidence-grade integrity verification. Having intermediate steps lets the tool sit inside actual workflows instead of offering only a crude on/off switch.\nFor journalists, legal professionals, citizen journalists, and anyone who regularly needs a clean record of a conversation, this is valuable. What helps them is a recording tool with fewer ambiguous zones; more filters and beauty modes won\u0026rsquo;t.\nPrivacy and offline are part of credibility # A tool that claims to stand for authenticity loses credibility if its core data handling depends heavily on external servers. Atomic Presence keeps the critical computation on-device, partly for privacy and partly to reduce external dependencies inside the evidence chain itself. The fewer third parties your material passes through, the easier it is to explain later what did and didn\u0026rsquo;t happen to it.\nIf you want a recording tool that has a better chance of convincing others when a dispute breaks out, Atomic Presence is worth getting familiar with before you need it.\n","externalUrl":null,"permalink":"/en/apps/atomic-presence/","section":"Apps","summary":"","title":"Atomic Presence","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # Atomic Presence, developed by QQder339, is an anti-deepfake tool that uses cryptographic hash chains, digital signatures, and audio watermarking to help users self-verify the integrity of their recordings.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers. All cryptographic operations and verification are performed on-device.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. 
Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nAudio/Video Files: All recorded content stored in your device\u0026rsquo;s local storage Hash Chain Records: SHA-256 hash sequences and corresponding verification data Digital Signatures: Signature data generated by on-device Curve25519 algorithm Verification Reports: Integrity reports and metadata records Anonymized Device Identifier: Each .evidence.json embeds a 16-character hex prefix of SHA-256(identifierForVendor), used only to correlate recordings from the same device during verification. This identifier lives only inside evidence files on your device, is never transmitted to any server, and cannot be reversed back to the original device information 4. Cryptographic Features (Fully Offline) # All core features are completed on-device without network connection:\nHash Chain Generation: Real-time SHA-256 hash sequences; all computation runs locally Digital Signing: Uses Curve25519 algorithm to sign recordings on-device Audio Watermarking: Embeds FSK signals in recordings; all signal processing runs on-device Verification: Integrity verification computed locally 5. Important Note # The content processed by this app (audio, video) may contain sensitive information. All processing occurs on your device, and we cannot and will never access any of your recorded content.\n6. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n7. Network Access # This app requires no network connection to use all features. The only network access is:\nExternal Links: Opens browser when tapping relevant links 8. 
Contact Us # 📧 qqder339@gmail.com\nSubject: Atomic Presence Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/atomic-presence/","section":"Privacy Policies","summary":"","title":"Atomic Presence — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: The QR code is unclear in the video and can\u0026rsquo;t be scanned during verification?\nA: Ensure sufficient screen brightness during recording, and keep the camera 30–50 cm from the screen. The QR code updates once per second — the camera needs to be able to focus clearly. If the problem persists, try reducing the recording resolution.\nQ: Audio watermark verification fails?\nA: Watermark verification may fail if: the audio was heavily compressed (e.g., forwarded via WhatsApp), the audio was truncated, or there was excessive background noise. Record in a quiet environment and use the original audio file for verification.\nQ: The digital signature is invalid on a new device?\nA: Each device\u0026rsquo;s signing key is stored in the iOS Keychain, and a new device generates a different key. You do NOT need to manually export the public key — every .evidence.json written by the app already embeds the public key used for that recording\u0026rsquo;s signature, so any verifier who holds the evidence file can verify it regardless of which device they\u0026rsquo;re on.\nQ: The app crashed during recording — is the file still there?\nA: When the app crashes unexpectedly, partial recordings may remain in the Documents directory. Reopen the app, tap the VERIFY button at the top of the main screen, and check the three tabs (Level 1 / Level 2 / Level 3) for any recoverable files.\nQ: Hash chain verification shows \u0026ldquo;integrity broken\u0026rdquo; but I didn\u0026rsquo;t edit the recording?\nA: Possible causes include: the app was interrupted by the system during recording, low battery, or a write error due to insufficient storage. 
Ensure sufficient battery and storage before recording.\nTroubleshooting # Ensure the device has sufficient storage (recommend at least 2 GB available) Keep the screen on during recording to avoid system interruptions Force quit and relaunch the app Check iOS version ≥ 17.0 If a specific scenario consistently causes issues, screenshot the error message and email us Contact Support # 📧 qqder339@gmail.com\nSubject: [Atomic Presence] Issue Description\nPlease include: device model, iOS version, app version, recording mode (video/audio), steps to reproduce.\nThis app collects no user data. All cryptographic operations run entirely on-device. We have no access to your recordings. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/atomic-presence/","section":"Support","summary":"Support and contact for Atomic Presence","title":"Atomic Presence Support","type":"support"},{"content":" Sound as something you design, not just play in the background # Most white noise apps eventually converge on the same conclusion: play rain, waves, or wind, and hope it helps you focus or sleep. Auditory Companion aims further. Instead of bundling a handful of ambient samples, it treats \u0026ldquo;how sound forms an inhabitable space\u0026rdquo; as the product\u0026rsquo;s core.\nSo what you see is three distinct systems rather than a playlist: a noise synthesizer, a scene mixer, and a reader. You can shape your acoustic environment from several directions: a stable, emotionally flat spectrum when you need it; the layered spatial feel of a rainy night, a fireplace, or a café at other times; a frame that lets reading voice and background sound coexist when the task calls for it.\nWhen it\u0026rsquo;s most valuable # For long stretches of reading, writing, or deep work, \u0026ldquo;play some nature sounds\u0026rdquo; usually falls short; what\u0026rsquo;s actually useful is an adjustable soundscape. Auditory Companion fits here. 
Lay down a noise floor with the synthesizer, layer in event sounds and loops in the scene mixer, then let the reader speak the text aloud. The result fits your current state far more closely than simply opening a Spotify playlist.\nRelaxation and pre-sleep are another common scenario. Many people want silence that isn\u0026rsquo;t actually silent: a sound that masks the outside world without demanding attention. That\u0026rsquo;s where adjustable noise and scene mixing earn their place: you\u0026rsquo;re not forced to choose from a handful of canned presets.\nThe real story is sound-control granularity # Think of it as a small personal sound workstation. The synthesizer handles spectrum and texture; the scene mixer handles atmosphere and spatial feel; the reader handles content input. Each module stands on its own; together they form a complete focus system.\nThat\u0026rsquo;s also why it resonates with \u0026ldquo;people who need background sound.\u0026rdquo; You get to find the configuration you can actually sit with for hours, rather than accepting whatever soundfield someone else prepared.\nThe detail work shows up clearly: the synthesizer puts four noise colors (white, pink, brown, green) and multiple parameters directly in your hands, going well past an on/off switch. The scene mixer lets you layer over a hundred audio samples into stackable scenes instead of playing one file. The reader wraps on-device TTS, automatic background-audio ducking, and a full-screen player into a single flow.\nKeeping sound and reading data on-device is practically significant # What you read, listen to, and paste in tends to be private, especially when an app supports clipboard reading or local TTS. If that content travels to a server, the experience sours immediately. 
Auditory Companion keeps synthesis, mixing, and reading on-device, and that shapes whether you\u0026rsquo;ll comfortably use it as a daily tool, not just whether you\u0026rsquo;ll try it once.\nIf you\u0026rsquo;re looking for a sound engine that can keep you company through work, reading, downtime, and immersive listening, rather than an app measured by \u0026ldquo;how many ambient tracks,\u0026rdquo; it\u0026rsquo;s worth trying this one firsthand.\n","externalUrl":null,"permalink":"/en/apps/auditory-companion/","section":"Apps","summary":"","title":"Auditory Companion","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # Auditory Companion, developed by QQder339, is a sophisticated audio engine combining real-time DSP noise synthesis, 108 ambient sound samples, and AI-powered text-to-speech reading.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nSoundscape Settings: Your saved mixing configurations and favorite scenes Reading Content: Articles, clipboard text, and other reading materials (processed locally only) User Settings: Volume levels, sound preferences, auto-ducking settings, etc. 4. On-Device AI Features # Text-to-Speech (TTS) and audio processing run on-device:\nAI Voice Reading: Uses iOS built-in TTS, or an optional downloadable MeloTTS on-device model; all speech synthesis runs on-device Auto-Ducking: DSP signal processing runs entirely locally, analyzing voice and background audio in real-time to automatically adjust volume 5. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n6. 
Network Access # Core features (noise synthesis, scene mixing, iOS built-in TTS reading) operate fully offline. Network access occurs only when you explicitly trigger it:\nDownloading the MeloTTS model (Optional): When you choose to download the on-device TTS model in Settings, the app fetches the model files from a public source and caches them locally External Links: Opens the system browser when tapping relevant links These requests transmit only the URL of the file you chose to download; no personally identifiable information is attached.\n7. Contact Us # 📧 qqder339@gmail.com\nSubject: Auditory Companion Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/auditory-companion/","section":"Privacy Policies","summary":"","title":"Auditory Companion — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: There\u0026rsquo;s static or crackling during audio playback?\nA: Some noise may come from Bluetooth headphone connection issues. Try switching to wired headphones to test. If the issue persists with wired headphones, try adjusting the sample rate in Settings or restarting the audio engine.\nQ: The TTS reading sounds very unnatural?\nA: The app uses iOS\u0026rsquo;s built-in TTS engine. You can switch between different voice packs and speech rates in Settings. TTS quality for some languages (like Traditional Chinese) depends on your iOS version — updating to the latest iOS typically improves quality.\nQ: Auto-Ducking sometimes doesn\u0026rsquo;t work?\nA: Auto-Ducking requires both background sound and TTS reading to be playing simultaneously. If only one audio source is active, ducking won\u0026rsquo;t trigger. Make sure both sources are playing and that Auto-Ducking is enabled in Settings.\nQ: Saved soundscape settings disappear on next launch?\nA: This may happen if the app was force-closed before settings were saved. 
After adjusting settings, confirm the save before exiting the app.\nQ: Can playback continue after the screen locks?\nA: Yes. The app supports background audio playback and continues after screen lock. If playback stops automatically, check the app\u0026rsquo;s background refresh permission in iOS Settings.\nTroubleshooting # Check that volume is not muted (physical mute switch + media volume) Try switching the audio output device (wired vs. Bluetooth) Force quit and relaunch the app Restart your device to clear potential audio routing conflicts Check iOS version ≥ 17.0 Contact Support # 📧 qqder339@gmail.com\nSubject: [Auditory Companion] Issue Description\nPlease include: device model, iOS version, app version, headphone/speaker model, steps to reproduce.\nThis app collects no user data. All audio processing is performed entirely on-device. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/auditory-companion/","section":"Support","summary":"Support and contact for Auditory Companion","title":"Auditory Companion Support","type":"support"},{"content":" The usual problem is poorly fitted material, not lack of effort # The hardest part of learning English is rarely vocabulary size. It\u0026rsquo;s opening an article and having no idea whether it\u0026rsquo;ll be comfortably challenging or discouragingly hard. Material that\u0026rsquo;s too easy gives no sense of progress; material that\u0026rsquo;s too hard simply erodes patience. English N+1 was built for that specific gap.\nIt turns Krashen\u0026rsquo;s i+1 theory into a working product: estimate your level first, then have AI generate content just above your current ability. 
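That selection step, picking material one notch above the assessed level, can be sketched in a few lines. The CEFR ladder below is standard; the function itself is a hypothetical illustration, not the app's actual algorithm:

```python
# Hypothetical sketch of Krashen-style "i+1" level selection.
# The CEFR ladder is standard; the stepping logic is an assumed illustration.
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def target_level(current: str) -> str:
    """Return the level one step above `current`, capped at C2."""
    i = CEFR_LEVELS.index(current)
    return CEFR_LEVELS[min(i + 1, len(CEFR_LEVELS) - 1)]
```

An assessed B1 reader would then get B2-leaning material: challenging, but still mostly comprehensible.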
The point isn\u0026rsquo;t \u0026ldquo;bolt AI onto a study app\u0026rdquo;; it\u0026rsquo;s automating the work of making material fit the person.\nHow you\u0026rsquo;d actually use it # A common flow: take the placement test to locate your rough CEFR range, pick a topic you actually want to read about, and let the app generate an article at your level. Save unfamiliar words as you go, and they enter a review cadence automatically. You don\u0026rsquo;t need to separately hunt for articles, run dictionary lookups, take notes, and then wire up flashcards; the app was designed around that whole habit from the start.\nThis matters especially for people anxious about English. Instead of opening by asking you to prove what you know, it lets you expand your boundaries while still understanding most of what\u0026rsquo;s in front of you. That \u0026ldquo;only slightly harder than where I am now\u0026rdquo; margin is where most people can actually sustain practice.\nAI here is an engine, not a performance # Plenty of AI English products highlight how natural their chat is. What actually keeps learners around is whether content generation quality is consistent and whether the review rhythm flows. English N+1 behaves like a curriculum engine that tracks your level, balancing text difficulty, vocabulary density, and topic interest so you don\u0026rsquo;t gamble on \u0026ldquo;maybe this one fits\u0026rdquo; every session.\nRunning the model on-device matters too. Learning records, level information, and reading preferences are private, and especially sensitive when you\u0026rsquo;re looking at your own weaknesses. Keeping this data local makes long-term use much more comfortable than a cloud service.\nModel selection isn\u0026rsquo;t pushed in your face either. The placement test establishes your rough CEFR range, and a local model takes over generation based on your device\u0026rsquo;s capability and memory state. 
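That capability-based choice can be sketched roughly as picking the largest model that fits in memory with some headroom. The model names echo the families mentioned in the developer's FAQ, but the sizes, the headroom factor, and the logic here are assumptions for illustration only:

```python
# Assumed illustration of hardware-adaptive model selection: prefer the largest
# local model that still fits in available memory with some headroom.
# Names, sizes, and the 2x factor are hypothetical, not the app's real logic.
MODELS = [          # (name, approximate size in GB), largest first
    ("Qwen2.5-3B", 2.5),
    ("Llama3.2-1B", 1.0),
]

def pick_model(available_ram_gb: float, headroom: float = 2.0):
    """Return the first model whose size, scaled by `headroom`, fits in RAM;
    None if even the smallest model is too large."""
    for name, size_gb in MODELS:
        if size_gb * headroom <= available_ram_gb:
            return name
    return None
```

The same pattern generalizes: the device's memory state decides, and the user never sees a configuration screen.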
What you experience is \u0026ldquo;open the app and start reading\u0026rdquo; rather than being blocked by setup. Hardware-adaptive under the hood, simple on the surface. That proves more useful than showcasing AI for its own sake.\nWho should download it # If flashy English apps that never quite fit have left you worn out, this one works differently. It earns retention by being useful every time you open it, not through streaks or gamified loops. Students, self-learners, people returning to English, or anyone who wants a steady dose of reading during the commute will find this more structured than scraping the web for material.\nPeople usually give up on English not from lack of effort, but because the material doesn\u0026rsquo;t fit. English N+1 only tries to fix that one thing — so the next time you open an article, you don\u0026rsquo;t have to guess whether it\u0026rsquo;s about to break your patience.\n","externalUrl":null,"permalink":"/en/apps/english-n-plus-1/","section":"Apps","summary":"","title":"English N+1","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # English N+1, developed by QQder339, is an English learning app featuring CEFR-level assessment and on-device AI conversation technology.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nLearning Progress \u0026amp; Level: CEFR assessment results and study records Conversation History: AI conversation logs stored locally Word Collections: Saved vocabulary and learning notes User Settings: Language preferences, difficulty settings, etc. 4. 
Offline AI Features # All AI features run entirely on-device without network connection:\nAI Conversation Practice: Uses local Llama 3.2 or Qwen 2.5 models; all inference runs on-device Article Generation: Personalized learning articles generated locally based on your level Level Assessment: CEFR level evaluation computed on-device AI models require a one-time download before first use (user-initiated); all features work offline after download.\n5. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n6. Network Access # Network access is restricted to:\nDownloading AI Models (Optional, one-time): Only connects when you explicitly choose to download LLM model resources External Links: Opens browser when tapping relevant links Other than the above, the app does not initiate network connections.\n7. Contact Us # 📧 qqder339@gmail.com\nSubject: English N+1 Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/english-n-plus-1/","section":"Privacy Policies","summary":"","title":"English N+1 — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: After the CEFR level test, the difficulty feels off. What can I do?\nA: The initial test is a quick vocabulary-based assessment and may not perfectly match your actual level. You can manually adjust the difficulty in Settings, or retake the test. After using the app for a while, it will automatically adapt based on your answer history.\nQ: What model does the AI conversation feature need? How large is it?\nA: Based on your device\u0026rsquo;s performance, the app recommends an appropriate model (Llama 3.2 or Qwen 2.5). Model size is approximately 1–4 GB. After downloading, conversations work completely offline — no internet required.\nQ: AI conversation responses are very slow or frozen?\nA: On-device AI inference speed depends on your device\u0026rsquo;s performance. 
Older iPhones will be noticeably slower. Try selecting a smaller model in Settings.\nQ: The generated articles are too difficult to understand?\nA: Articles are generated based on your CEFR level. If they feel too hard, lower your level setting in Settings, or manually select a lower difficulty when generating articles.\nQ: Will my study records and saved vocabulary be backed up?\nA: All data is currently stored locally on your device only. iCloud backup is not supported. Uninstalling the app will erase all records.\nTroubleshooting # AI model fails to load: Ensure at least 3 GB of free storage, and check that the download wasn\u0026rsquo;t interrupted App crashes during AI conversation: Try switching to a smaller model in Settings Force quit and relaunch the app Check iOS version ≥ 17.0 Contact Support # 📧 qqder339@gmail.com\nSubject: [English N+1] Issue Description\nPlease include: device model, iOS version, app version, steps to reproduce.\nThis app collects no user data. All AI conversations are processed entirely on-device. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/english-n-plus-1/","section":"Support","summary":"Support and contact for English N+1","title":"English N+1 Support","type":"support"},{"content":" A place to anchor long-term life projects # Most task management tools handle today, this week, and this month well enough. They struggle with the important-but-not-urgent projects: reading, exercise, language learning, processing emotions, keeping up certain relationships. These aren\u0026rsquo;t unimportant. They just get crowded out by noisier, more immediate demands. Gantt Planet is designed for exactly those projects.\nThe goal isn\u0026rsquo;t to make you busier. It\u0026rsquo;s to help you see what was already worth your time. Timeline, 3D planet, AI tree hole, art collection: these surface as different modules, but all serve the same outcome. 
Self-discipline becomes a rhythm you can visualise, sense, and return to, rather than a willpower grind.\nWhy it tends to stay on your phone longer than a typical productivity tool # Traditional to-do tools think in list logic: done means check off, undone means accumulating pressure. Gantt Planet is closer to tending a small universe. You\u0026rsquo;re letting a planet grow its own terrain and memories, not wiping items off a list. That visual language makes long-term goals harder to abandon, because they\u0026rsquo;re no longer just cold rows of text.\nThe timeline view shows what today deserves attention; the 3D planet view shows the shape of overall progress. The former pulls you back to reality; the latter reminds you why you started. Together they keep \u0026ldquo;today\u0026rdquo; and \u0026ldquo;a lifetime\u0026rdquo; in the same frame.\nIts starting point is the personal \u0026ldquo;important but not urgent,\u0026rdquo; not an enterprise project manager shrunk down. You can track daily, weekly, monthly, or longer cadences on the same timeline, then turn completion into 3D planet terrain and collection entries. That\u0026rsquo;s where it diverges most from productivity software that mostly helps you pack tasks in more tightly.\nThe tree hole and the art collection: the reason to come back # Long-term habit building isn\u0026rsquo;t just a planning problem; a lot of it is emotional. The issue is often not that you don\u0026rsquo;t know what to do. You\u0026rsquo;re tired, annoyed, distracted, or simply sick of productivity tools that never respond. Gantt Planet\u0026rsquo;s AI tree hole exists to sit with that layer. When you just want to talk something out and reorient yourself, it doesn\u0026rsquo;t demand productivity from you.\nThe collection system turns \u0026ldquo;finishing things\u0026rdquo; from obligation into something that accumulates. 
Not everyone needs this, but for people who get worn down by monotony, it\u0026rsquo;s exactly how patience gets rebuilt. Completing tasks gradually unlocks illustrations and collectibles, closer to leaving marks on a long-running life project than to KPI pressure.\nPrivacy and offline matter more here than usual # Goals, journal entries, emotions, conversations: this content is more personal than what goes into a typical productivity app. Part of Gantt Planet\u0026rsquo;s value is that you don\u0026rsquo;t have to hand this data to an external service to get the companionship and visualisation. For many people, inner content only gets written down when the data really stays on their own device.\nGantt Planet won\u0026rsquo;t fill your calendar for you, and it won\u0026rsquo;t flash red when you fall behind. What it\u0026rsquo;s good at is letting the things you don\u0026rsquo;t want to abandon accumulate, quietly, into a planet you can actually see.\n","externalUrl":null,"permalink":"/en/apps/gantt-planet/","section":"Apps","summary":"","title":"Gantt Planet","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # Gantt Planet, developed by QQder339, is a life goal management app combining 3D visual habit tracking with an on-device AI companion.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers. Your habits, journals, and conversations belong only to you.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. 
Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nHabits \u0026amp; Goals: All items, completion records, and timeline data AI Conversation Logs: All conversations with the built-in AI stored locally Journals \u0026amp; Mood Records: All journal content Art Collection: Unlocked stickers and illustration records 3D Planet State: Your planet\u0026rsquo;s terrain and growth data User Settings: All preference settings 4. Offline AI Features # The AI companion feature runs entirely on-device:\nAI Conversations: Uses local Large Language Models (LLM); all inference runs on-device; conversation content is never transmitted to any server Model Download: AI models require a one-time download before first use (user-initiated); fully offline after download 5. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n6. Network Access # Network access is restricted to:\nDownloading AI Models (Optional, one-time): Only connects when you explicitly choose to download Downloading Art Collection Stickers (On-demand): When you unlock art rewards, the app fetches matching images from a public GitHub repository and caches them locally for offline use Weather Information (Optional): If you enable real weather, only minimal regional data is sent to retrieve weather External Links: Opens browser when tapping relevant links These requests transmit only the URL of the resource you chose or triggered; no personally identifiable information is attached.\n7. Contact Us # 📧 qqder339@gmail.com\nSubject: Gantt Planet Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/gantt-planet/","section":"Privacy Policies","summary":"","title":"Gantt Planet — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: Does the AI companion chat require an internet connection?\nA: No. 
The AI companion uses an on-device local model. All conversations are processed completely offline and are never sent to any server. A one-time model download (~1–4 GB) is required on first use, after which everything works offline.\nQ: The 3D planet is laggy?\nA: The 3D planet requires some GPU performance. If it\u0026rsquo;s lagging on an older device, try lowering the render quality or disabling particle effects in Settings. Recommended device: iPhone 12 or newer.\nQ: Habit items on the timeline have disappeared?\nA: All data is stored locally on your device. If data disappears unexpectedly, check if items were accidentally deleted (they may be restorable from the recycle bin). If not, please email us with your app version.\nQ: The weather feature shows incorrect information?\nA: The weather feature requires location permission. Please ensure this app is allowed location access in iOS Settings \u0026gt; Privacy \u0026gt; Location Services. If already allowed but still incorrect, try toggling the weather feature off and on again.\nQ: Unlocked art stickers have disappeared?\nA: Sticker unlock records are stored locally and will be erased if the app is uninstalled. If data disappears without uninstalling, please email us with details.\nTroubleshooting # Force quit and relaunch the app Check available storage (AI model + 3D assets need 2+ GB) Check iOS version ≥ 17.0 If 3D rendering is abnormal, try resetting the planet display settings in Settings Contact Support # 📧 qqder339@gmail.com\nSubject: [Gantt Planet] Issue Description\nPlease include: device model, iOS version, app version, steps to reproduce (screenshots preferred).\nThis app collects no user data. All AI conversations and habit records are processed entirely on-device. 
View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/gantt-planet/","section":"Support","summary":"Support and contact for Gantt Planet","title":"Gantt Planet Support","type":"support"},{"content":" Learning kana straight into your hands # Most beginner Japanese materials quietly assume you\u0026rsquo;re willing to sit through a long \u0026ldquo;romaji transition period.\u0026rdquo; For native Chinese speakers, that\u0026rsquo;s usually the less natural path. You already have a strong sense of character form and stroke order, and you\u0026rsquo;re used to learning visually and through writing. Kana Juku starts from that premise and designs around it.\nRather than prettifying the hiragana/katakana chart, it connects \u0026ldquo;seeing the form, writing the form, typing the form, recognising the form\u0026rdquo; into a single loop. The payoff: you reach direct kana recognition sooner and rely on romaji as a crutch for less time.\nWhy this approach suits Chinese speakers in particular # A Chinese speaker\u0026rsquo;s real advantage lies in a strong sensitivity to character structure and visual form, less so in pronunciation. Kana Juku amplifies that advantage. You memorise kana through handwriting, image recognition, a custom keyboard, and shape association, which makes learning feel like picking up a new script rather than grinding through rote repetition.\nPeople who\u0026rsquo;ve quit halfway often did so because the tool\u0026rsquo;s angle didn\u0026rsquo;t match them, not because they didn\u0026rsquo;t try hard. Kana Juku\u0026rsquo;s job is to correct that angle.\nA real doorway forward, not just a memorisation drill # Memorising kana is only the starting line. The real challenge is turning it into input, recognition, and comprehension ability. That\u0026rsquo;s why the app goes beyond static drills and includes handwriting recognition, a custom keyboard, and AI assistance. 
What you\u0026rsquo;re building is muscle memory closer to how kana actually gets used.\nThe design suits two kinds of learners especially: people starting Japanese who want a low-pressure entry point, and people who learned before, forgot, and now want familiarity back. The first group needs to skip detours; the second needs to rebuild recognition. The app works for both.\nPrivacy and offline have practical weight here # Language learning tools tend to drift toward content-platform behaviour over time. You feel like you\u0026rsquo;re learning, but you\u0026rsquo;re really being shuffled between recommendations. Kana Juku stays restrained. The focus sits on real input and recognition training, and both the local AI and data processing run on-device. You don\u0026rsquo;t have to trade your usage habits for a little learning convenience.\nYou don\u0026rsquo;t have to let romaji lead you through kana. Kana Juku shows you that entering through the shapes works too — and often gets you there faster.\n","externalUrl":null,"permalink":"/en/apps/kana-juku/","section":"Apps","summary":"","title":"Kana Juku","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # Kana Juku, developed by QQder339, is a Japanese kana learning app designed for native Chinese speakers.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nLearning Progress: Tracks your kana learning status User Settings: Saves your preferences Handwriting Input: Processed in real-time memory and discarded immediately; no files are saved Widget Data: Uses iOS shared container mechanism to display kana on home screen widgets (local only) 4. 
Offline AI Features # All AI features operate completely offline:\nHandwriting Recognition: Uses on-device machine learning models; all processing is local Text-to-Speech: Uses pre-downloaded audio assets AI Assistance: Uses local Large Language Models (LLM); inference is performed on-device without data upload 5. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n6. Network Access # Network access is restricted to:\nDownloading AI Models (Optional): Only connects when you explicitly choose to download local model resources External Links: Opens the browser when you tap \u0026ldquo;Rate on App Store\u0026rdquo; or \u0026ldquo;Privacy Policy\u0026rdquo;; opens the browser when you use the \u0026ldquo;Search Web\u0026rdquo; function after translation/recognition Other than the above, the app does not initiate network connections.\n7. Contact Us # If you have questions about this Privacy Policy, please contact:\n📧 qqder339@gmail.com\nSubject: Kana Juku Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/kana-juku/","section":"Privacy Policies","summary":"","title":"Kana Juku — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: Handwriting recognition keeps making mistakes. What should I do?\nA: Make sure you\u0026rsquo;re not writing too fast — pause briefly after each stroke before lifting the pen. The recognition model needs complete stroke information. If the problem persists, try resetting the recognition calibration in Settings.\nQ: How do I download the local AI model? Do I still need internet after downloading?\nA: When you first use an AI feature, the app will prompt you to download the model (a few hundred MB). 
Once downloaded, all AI features work fully offline — no internet connection required.\nQ: The custom keyboard doesn\u0026rsquo;t appear in other apps?\nA: The built-in keyboard in Kana Juku is for in-app use only and is not a system-level keyboard extension. To type Japanese in other apps, please use the iOS system Japanese keyboard.\nQ: The home screen widget isn\u0026rsquo;t updating kana?\nA: Try long-pressing to remove the widget from the home screen, then re-adding it. If it still doesn\u0026rsquo;t update, force-quit and relaunch the app, or restart your device.\nQ: My learning progress has disappeared?\nA: Progress is stored locally on your device. Uninstalling the app will erase all data. iCloud backup is not currently supported. If progress disappears without uninstalling, please email us with details.\nTroubleshooting # Force quit and relaunch the app (swipe up on the app in the app switcher) Check iOS version ≥ 17.0 Check available storage (AI models require ~1–2 GB) If none of the above works, uninstall and reinstall (note: progress data will be erased) Contact Support # 📧 qqder339@gmail.com\nSubject: [Kana Juku] Issue Description\nPlease include: device model, iOS version, app version, steps to reproduce (screenshots welcome).\nThis app collects no user data. All data is stored locally on your device. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/kana-juku/","section":"Support","summary":"Support and contact for Kana Juku","title":"Kana Juku Support","type":"support"},{"content":"Privacy policies for all apps. None of our applications collect any personal user data. 
All AI processing runs entirely on your device.\nApp Privacy Policy Kana Juku View English N+1 View Gantt Planet View Auditory Companion View Python Dimensions View Atomic Presence View Sown Echoes View StoneStory View ","externalUrl":null,"permalink":"/en/privacy/","section":"Privacy Policies","summary":"Privacy policies for all apps","title":"Privacy Policies","type":"privacy"},{"content":" What matters is getting the learning rhythm right # Most Python learning tools get stuck at two extremes. One side gives you fragmented questions; answering them still leaves you in the dark about where you\u0026rsquo;re actually weak. The other side drops you into a full IDE that tends to scare beginners off. Python Dimensions bridges those extremes, helping you first build reading ability, grammatical sense, and logical sense before pushing toward more complete coding capability.\nThe core idea is less \u0026ldquo;do lots of questions\u0026rdquo; and more \u0026ldquo;break learning into three layers.\u0026rdquo; Points are vocabulary and concept recognition; Lines are syntax and local structure; Surfaces are complete program flow. This layering works for complete beginners and also for people who already know where they\u0026rsquo;re stuck and want an efficient way to patch gaps.\nWhat situations it\u0026rsquo;s most useful for # If you\u0026rsquo;re preparing for PCEP, TQC+, or CPE, the app is well suited to daily maintenance. You don\u0026rsquo;t need to open a laptop to get started; in 10 to 20 minute windows you can run through multiple-choice questions, fill in a few blanks, or re-sequence a program flow. That low friction matters more over the long run than intense burst study.\nFor self-taught beginners, the app also doesn\u0026rsquo;t behave like a machine that only reports right and wrong. You can use the question types to sketch the basic outline, then move into the playground to actually run code and understand why one variant works and another doesn\u0026rsquo;t. 
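A generic example of the kind of variant pair such playground runs make obvious (illustrative Python, not taken from the app's question bank): `sorted()` returns a new list, while `list.sort()` mutates in place and returns None.

```python
# Two near-identical variants with different results, a classic beginner trap.
nums = [3, 1, 2]

a = sorted(nums)   # builds a NEW sorted list; nums is untouched here
b = nums.sort()    # sorts nums IN PLACE and returns None

print(a)     # [1, 2, 3]
print(b)     # None
print(nums)  # [1, 2, 3]  (mutated by .sort())
```

Reading about the difference rarely sticks; running both variants once usually does.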
Knowledge stops living purely in memory and starts becoming your own judgement.\nOn-device AI here isn\u0026rsquo;t a gimmick # \u0026ldquo;AI tutor\u0026rdquo; often triggers the question, \u0026ldquo;is this about to upload my content to the cloud?\u0026rdquo; Python Dimensions places AI in a useful role that doesn\u0026rsquo;t compromise privacy. When you answer incorrectly, it can hint based on the question\u0026rsquo;s context. When you want to confirm a syntax idea, you can just ask, instead of bouncing between search engines and forums.\nJust as importantly, none of this requires handing your learning history to an external server. For students, that lowers the barrier to use. For teachers, parents, or anyone wary of data leakage, it turns the app into something closer to a long-term learning tool rather than a casual demo.\nThe AI layer also goes beyond \u0026ldquo;a chat model stuffed in for show.\u0026rdquo; The question bank, error context, context-aware retrieval, and a directly executable Python playground operate inside the same loop. You answer, ask, then run code to verify; when needed, capability analytics let you see whether you\u0026rsquo;re stuck at syntax, concepts, or program flow.\nWhy this app deserves a permanent place on your phone # The learning tools people actually keep opening are the ones that sense when you\u0026rsquo;re about to give up, more than the ones packed with features. Python Dimensions gathers question training, AI hints, and an executable environment onto a single device. The point is to let you push forward a little, even in the moments you\u0026rsquo;d otherwise scroll away.\nWhat actually moves the needle isn\u0026rsquo;t the rush of fifty problems in one sitting. It\u0026rsquo;s the three minutes you\u0026rsquo;re willing to open the app each day. 
Python Dimensions is built around those three minutes.\n","externalUrl":null,"permalink":"/en/apps/python-dimensions/","section":"Apps","summary":"","title":"Python Dimensions","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # Python Dimensions, developed by QQder339, is a Python programming learning app featuring a built-in Python runtime environment and on-device AI tutor.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nLearning Progress: Answer records and error tracking across all question types (MCQ, fill-in-the-blank, Parsons) Code: Programs written in the built-in IDE AI Conversation Logs: Conversations with the AI tutor stored locally User Settings: Difficulty preferences, interface settings, etc. 4. Offline AI Features # All AI features run entirely on-device without network connection:\nAI Tutor: Uses local Large Language Models (LLM) to provide hints and explanations; all inference runs on-device Python Runtime: The built-in Python interpreter runs entirely on-device; your code is never sent to any server AI models require a one-time download before first use (user-initiated); all features work offline after download.\n5. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n6. Network Access # Network access is restricted to:\nDownloading AI Models (Optional, one-time): Only connects when you explicitly choose to download LLM model resources External Links: Opens browser when tapping relevant links Other than the above, the app does not initiate network connections. 
Code execution runs entirely in the local Python environment.\n7. Contact Us # 📧 qqder339@gmail.com\nSubject: Python Dimensions Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/python-dimensions/","section":"Privacy Policies","summary":"","title":"Python Dimensions — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: The built-in Python runtime throws an error or crashes the app?\nA: Complex code (infinite loops, excessive memory usage) may cause timeouts or crashes. Make sure your code has no infinite loops and avoids allocating very large amounts of memory. If a specific code snippet causes a crash, please email us with that code.\nQ: Does the AI tutor need to download a model? How large?\nA: Yes. The first time you use the AI tutor, you\u0026rsquo;ll need to download a local model (~1–4 GB). After downloading, it works completely offline — all Q\u0026amp;A and explanations run on-device without internet.\nQ: I think there\u0026rsquo;s an error in the question bank?\nA: If you find an incorrect question or answer, please email us with: the question content, your proposed correct answer, and your reasoning. We\u0026rsquo;ll verify and update the question bank as soon as possible.\nQ: The error radar chart isn\u0026rsquo;t showing?\nA: The radar chart requires a minimum number of answer records to generate. 
Please complete at least 20 questions first.\nQ: How do I use the code templates?\nA: In the built-in IDE screen, tap the \u0026ldquo;Templates\u0026rdquo; button in the upper right, select the category you need (loops, functions, classes, etc.), and the code will be automatically inserted into the editor.\nTroubleshooting # Python runtime crashes: Ensure no infinite loops in your code; ensure the device has sufficient available memory AI model fails to load: Ensure 3+ GB free storage; retry downloading on Wi-Fi Force quit and relaunch the app Check iOS version ≥ 17.0 Contact Support # 📧 qqder339@gmail.com\nSubject: [Python Dimensions] Issue Description\nPlease include: device model, iOS version, app version, steps to reproduce (include code if it\u0026rsquo;s a code-related issue).\nThis app collects no user data. Python execution and AI inference run entirely on-device. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/python-dimensions/","section":"Support","summary":"Support and contact for Python Dimensions","title":"Python Dimensions Support","type":"support"},{"content":" Leaving a person behind, not just a diary # Most recording tools deal with what happened today. Sown Echoes addresses a question of a different scale. If a person\u0026rsquo;s values, experiences, preferences, tone, and ways of making judgments are worth preserving, how do you keep them? And how do you keep them as a structure that can be understood and conversed with again in the future, rather than as a pile of scattered notes?\nSo the app is part journal, part personal knowledge base, and carries a trace of a digital legacy system. What you leave here goes beyond events: it includes how you see events, how you explain yourself, what you care about, and what you don\u0026rsquo;t. 
That\u0026rsquo;s the material Sown Echoes is really collecting.\nWhy it deserves to exist separately from notes or voice journals # The reason it warrants its own category: it helps you progressively organise content into an analysable structure, rather than only storing it. Text, voice, questionnaires, persona summaries, values radar charts, digital-twin conversation. These modules form a complete chain. First record, then organise, then understand, then eventually converse.\nThis suits people who feel a strong urge to record, but also know pure notes tend to pile up into chaos. It\u0026rsquo;s a container designed for organising life material, not just another blank page that only handles input.\nKeeping both private and public tracks matters # Many products force you to choose between \u0026ldquo;fully private\u0026rdquo; and \u0026ldquo;fully social.\u0026rdquo; Sown Echoes takes the more mature position that both needs are legitimate. You can keep everything entirely on your own device and iCloud, or contribute selected content under an open licence as part of the broader Human Wisdom Library.\nThis dual-track choice is the product\u0026rsquo;s philosophy, not an add-on. Some parts of your life belong only to you; others might be worth entering public knowledge. Whether to share should be yours to decide.\nOn-device AI makes this feel less like surrendering yourself # When a product\u0026rsquo;s core is your thoughts, values, and life experience, privacy stops being a feature and becomes the precondition for the product to work at all. Sown Echoes keeps analysis and conversation as on-device as it can, so you don\u0026rsquo;t have to hand yourself over wholesale just to get a tool that understands you.\nOne day you\u0026rsquo;ll try to recall a chapter of your life and find you can\u0026rsquo;t quite put it into words anymore. 
Sown Echoes exists to push that day as far into the future as possible.\n","externalUrl":null,"permalink":"/en/apps/sown-echoes/","section":"Apps","summary":"","title":"Sown Echoes","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. Overview # Sown Echoes, developed by QQder339, is an app that lets you actively capture your thoughts, values, and life experiences, building a digital legacy through a BIP-39 cryptographic identity.\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers. Your thoughts and records belong only to you.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nBIP-39 Mnemonic: Your Meme ID identity key (stored locally only; please back it up yourself) Voice and Text Records: All thoughts, values, and stories entered through the questionnaire Speech-to-Text Results: Whisper on-device recognition results stored locally User Settings: All preference settings 4. Offline AI Features # All AI features run entirely on-device without network connection:\nSpeech-to-Text (Whisper): Uses local Whisper model; all speech recognition runs on-device; voice data is never transmitted to any server BIP-39 Identity Generation: Mnemonics generated locally on-device, without relying on any external service AI models require a one-time download before first use (user-initiated); all features work offline after download.\n5. Data Export # If you choose to export data for AI training contribution, the export action is entirely under your control. The app does not automatically upload or share any content.\n6. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n7. 
Network Access # Network access is restricted to:\nDownloading AI Models (Optional, one-time): Only connects when you explicitly choose to download Whisper model resources User-Initiated Data Export: Only when you explicitly choose to export External Links: Opens browser when tapping relevant links Other than the above, the app does not initiate network connections.\n8. Contact Us # 📧 qqder339@gmail.com\nSubject: Sown Echoes Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/sown-echoes/","section":"Privacy Policies","summary":"","title":"Sown Echoes — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: I forgot my BIP-39 mnemonic (Meme ID). Can I recover it?\nA: No. The mnemonic is only shown once when first generated, and is stored only on your device. We have no backup mechanism and no access to your mnemonic. Strongly recommended: write it down or screenshot it immediately and keep it in a safe place. If lost, your identity cannot be restored on another device.\nQ: Does voice recognition (Whisper) require internet?\nA: No. Voice recognition uses a local on-device Whisper model. All recognition is processed completely offline. A one-time model download (~200 MB–1 GB) is required on first use, after which it works fully offline.\nQ: Where are my records? Can I export them?\nA: All records are stored locally on your device. You can export your records (JSON format) from the Data Management section in the app. Export is a user-initiated action — the app never automatically uploads anything.\nQ: Voice input recognition accuracy is low?\nA: Recognition accuracy depends on: background noise, clarity of speech, and language selection. Use in a quiet environment and ensure the app has microphone permission. If accuracy is particularly poor for a specific language, please email us.\nQ: Records disappeared after an update?\nA: Normal updates should not erase data. 
If data has disappeared, it may be due to accidental deletion or abnormal storage behavior. Please email us immediately with your app version information so we can help diagnose.\nTroubleshooting # Voice recognition fails: Ensure microphone permission is enabled (iOS Settings \u0026gt; Privacy \u0026gt; Microphone) Model download fails: Ensure stable Wi-Fi and sufficient device storage Force quit and relaunch the app Check iOS version ≥ 17.0 Contact Support # 📧 qqder339@gmail.com\nSubject: [Sown Echoes] Issue Description\nPlease include: device model, iOS version, app version, issue description.\n⚠️ Important: Please keep your mnemonic (Meme ID) safe. It cannot be recovered if lost.\nThis app collects no user data. All content is processed entirely on-device. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/sown-echoes/","section":"Support","summary":"Support and contact for Sown Echoes","title":"Sown Echoes Support","type":"support"},{"content":" Turning a classical novel back into a world that runs # Most literary apps put the original text into a prettier reader. StoneStory takes a different path. It disassembles the characters, scenes, relationships, and events of Dream of the Red Chamber into a running narrative system. What you see is more than passages. You see how characters pull at each other, how they reveal personality inside situations, and how a classical novel operates like a small society.\nThat\u0026rsquo;s why it\u0026rsquo;s called a simulator rather than a reader. The goal is to take you into the structure beneath the text, not to serve up the same words in a nicer shell.\nWho actually needs a product like this # If you already love Dream of the Red Chamber, what you\u0026rsquo;ll get here is a new way in rather than simple nostalgia. Characters become entities you can compare, observe, and reinterpret, rather than reference points to memorise. 
You\u0026rsquo;ll see more easily who truly made choices in which scenes, how emotions accumulated, and which details were already foreshadowing what came later.\nIf classical literature has always felt distant, the app may actually be easier to approach. It doesn\u0026rsquo;t demand that you first swallow the thick original to qualify; it breaks a complex work into a system you can approach slowly and understand incrementally.\nAI here opens the door to understanding # StoneStory uses AI for the work of comprehension, not plot generation: character inner life, emotional tension, modern-perspective interpretation, structural connections between events. For the user, this adds a layer of guided commentary: an interactive interpretation that shifts with each scene, rather than a dogmatic gloss.\nThe design suits Dream of the Red Chamber\u0026rsquo;s particular shape: many characters, complex relationships, extremely high detail density. You can let the system open the door, then decide how deep to go, instead of rebuilding the structure from scratch every time.\nWhy on-device AI still matters here # This kind of product is easy to ship as a cloud demo. The moment content comprehension, reading history, and interaction all depend on external services, though, the experience becomes fragile, and it stops feeling like something that can accompany you long-term. StoneStory pushes the core experience back to the device, so immersive reading and exploration can actually exist as everyday tools, not as a technical showcase.\nIf you want the deeper technical and methodological thinking behind this, the related reading below goes there. If you\u0026rsquo;d rather start with the experience, the App Store is the most direct entry.\n","externalUrl":null,"permalink":"/en/apps/stonestory/","section":"Apps","summary":"","title":"StoneStory","type":"apps"},{"content":"Last Updated: 2026-04-15\n1. 
Overview # StoneStory, developed by QQder339, is an immersive reading and character simulation app based on the classic novel \u0026ldquo;Dream of the Red Chamber.\u0026rdquo;\nIn short: We do NOT collect, store, or transmit any of your personal data to external servers.\n2. Data We Do NOT Collect # This app does not collect:\nPersonally Identifiable Information (name, email, phone number) Location data Device identifiers Usage analytics or tracking data 3. Locally Stored Data # The following data is stored strictly on your device and never transmitted externally:\nTraveler Profile: The name, personality traits, speech style, background, and optional avatar image you configure in Traveler Mode Interface Preferences: Your selected display language (Traditional Chinese / English / Japanese) and chosen offline AI model Downloaded Content: Character portraits and scene images cached after you view them in the app Offline AI Model: The Qwen 2.5 model file you\u0026rsquo;ve chosen to download (stored in an App Group container for in-app use only) 4. Third-Party Services # This app does NOT use any third-party analytics or advertising frameworks (No Google Analytics, No Facebook SDK, No Ads).\n5. Network Access # Core reading and simulation features work fully offline and require no network connection. The following features initiate network requests only when you explicitly trigger them:\nDownloading Character Portraits / Scene Images: The first time you view a character or scene, the app fetches the corresponding image from a public CDN and caches it locally Downloading the Offline AI Model: When you choose to download the Qwen2.5 model in Settings, the app fetches the file from the model\u0026rsquo;s public release source External Links: Opens the system browser when you tap relevant links These network requests transmit only the URL of the file you\u0026rsquo;ve chosen. No personally identifiable information is attached, and no data is collected in return.\n6. 
Contact Us # 📧 qqder339@gmail.com\nSubject: StoneStory Privacy Policy Inquiry\n","externalUrl":null,"permalink":"/en/privacy/stonestory/","section":"Privacy Policies","summary":"","title":"StoneStory — Privacy Policy","type":"privacy"},{"content":"App Store · Privacy Policy\nFAQ # Q: The app launches slowly or stalls on the splash screen.\nA: If you\u0026rsquo;ve downloaded the offline AI model (Standard 1.9 GB or High-Quality 4 GB), the app loads it into memory on launch; this may take several seconds on older devices. The first time you enter a chapter, the bundled database (characters, events, poems) is loaded — this is normal. If launch is unusually slow, please email us with your device model and iOS version.\nQ: Poetry or passages show garbled text, missing glyphs, or blank boxes.\nA: Three fonts are bundled in the app (LXGW WenKai TC, Noto Serif TC, Iansui) — no download or switching is required. If anomalies persist, please force-quit the app and relaunch, then send us a screenshot so we can fix it in the next release.\nQ: Is reading progress saved?\nA: In the current version (v1.1.1), chapter playback progress is retained only within the current app session — you can return to a chapter during the same session. However, if you force-quit the app or restart your device, chapter playback will start from the beginning. Persistent cross-session bookmarks are planned for a future release.\nQ: Character portraits or scene images won\u0026rsquo;t load.\nA: Portraits and scene images are downloaded on-demand the first time you view them, and cached locally. If they won\u0026rsquo;t load:\nVerify your network connection Swipe away from the screen and return to trigger a retry Or go to Settings → Clear Art Cache and re-enter with a stable network Q: The offline AI chat doesn\u0026rsquo;t respond.\nA: First-time use requires downloading a Qwen 2.5 model under Settings → Model Management. 
Choose one based on your device:\nSmall 1.5B (~0.9 GB) — iPhone 15 / iPad Air Standard 3B (~1.9 GB, default) — iPhone 15 Pro / iPad Pro High-Quality 7B (~4.0 GB) — iPhone 16 Pro / iPad Pro M-series Ensure sufficient free space on your device. Once downloaded, the chat runs fully offline.\nQ: Can the app be used offline?\nA: Yes. Chapter playback, True Endings, personality system, poem/object collections, and on-device AI chat (after model download) all work offline. Only character portraits, scene images, and the AI model file require an internet connection on first retrieval.\nTroubleshooting # Force-quit and relaunch the app Check iOS version ≥ 17.0 If a specific chapter misbehaves, note its name and email us Uninstall and reinstall (your Traveler profile and downloaded images will be cleared) Contact Support # 📧 qqder339@gmail.com\nSubject: [StoneStory] Issue Description\nPlease include: device model, iOS version, app version, steps to reproduce (screenshots preferred).\nThis app collects no user data. All content is stored locally on your device. View Privacy Policy →\n","externalUrl":null,"permalink":"/en/support/stonestory/","section":"Support","summary":"Support and contact for StoneStory","title":"StoneStory Support","type":"support"},{"content":"Support pages for all released apps. Each page provides contact information, App Store link, and privacy policy link.\nApp Support Kana Juku View English N+1 View Gantt Planet View Auditory Companion View Python Dimensions View Atomic Presence View Sown Echoes View StoneStory View ","externalUrl":null,"permalink":"/en/support/","section":"Support","summary":"Support pages for all apps","title":"Support","type":"support"}]