Abstract
Neural implicit representations have emerged as a powerful paradigm for representing 3D geometry. However, evolving these representations under physical dynamics or user-specified deformations remains challenging, as the implicit function must be updated globally even for local changes. We present a level set theory that enables neural implicit surfaces to evolve under explicit flow fields. Our key insight is to derive a partial differential equation (PDE) governing the evolution of the neural network weights, such that the zero level set of the network tracks the evolving surface exactly. This formulation allows us to deform neural implicit surfaces using classical flow fields (mean curvature flow, advection) while maintaining the advantages of implicit representations.
Paragraph function
Whole-paper overview: bridges classical level set methods and modern neural implicit representations.
Logical role
Centers on the promise of combining classical mathematical theory with modern neural representations, previewing a cross-disciplinary theoretical contribution.
Argumentation technique / potential weaknesses
Framing the problem as "solving a PDE in weight space" is a highly original concept. In practice, however, whether this PDE admits an analytical solution or requires numerical approximation determines the method's practicality.
1. Introduction
The level set method, pioneered by Osher and Sethian, represents surfaces as the zero level set of a higher-dimensional function and evolves this function according to a PDE derived from the desired surface motion. This approach has been enormously successful in computational physics, fluid dynamics, and classical computer vision. Recently, neural implicit functions (e.g., DeepSDF, NeRF) have demonstrated remarkable ability to represent complex 3D geometry. However, these representations are typically static — once trained, the surface they encode cannot be easily modified without retraining. We ask: can we derive a principled framework for evolving neural implicit surfaces, analogous to the classical level set method?
Paragraph function
Establishes the research question: the gap between classical level set methods and neural representations.
Logical role
Uses the success history of classical methods as a backdrop to highlight the missing capability of neural implicit representations: dynamic evolution.
Argumentation technique / potential weaknesses
Poses the research question rhetorically to spark the reader's curiosity; citing Osher and Sethian establishes authority. However, the claim that surfaces "cannot be modified without retraining" overlooks some recent local-editing methods.
The fundamental challenge is that in the classical level set method, the level set function is discretized on a grid and each grid point can be updated independently. In contrast, a neural implicit function is parameterized by network weights that jointly determine the function value everywhere — changing one weight affects the entire surface. Our approach resolves this by deriving the chain rule relationship between the level set PDE and the network weight dynamics, showing that the weight update rule can be expressed as a pseudo-inverse problem involving the Jacobian of the network.
Paragraph function
Core challenge and solution outline: pinpoints the fundamental difference between grid discretization and weight parameterization.
Logical role
Precisely locates the technical difficulty and offers a clear solution path via the chain rule and a pseudo-inverse.
Argumentation technique / potential weaknesses
Reducing the problem to a pseudo-inverse is an elegant mathematical framing, but computing the Jacobian and solving the pseudo-inverse for large networks is the key implementation challenge.
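The pseudo-inverse framing can be made concrete with a toy linear system: given the network Jacobian J at sampled surface points and a right-hand side b from the flow, the minimum-norm weight update is the least-squares solution. A minimal NumPy sketch (the matrix sizes and random data are illustrative, not from the paper):

```python
import numpy as np

# Toy setup: 5 sampled surface points, 3 network weights (illustrative sizes).
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))   # Jacobian df/dtheta at the sample points
b = rng.standard_normal(5)        # right-hand side: -v . grad_x f at the samples

# Minimum-norm least-squares solution via the Moore-Penrose pseudo-inverse.
dtheta = np.linalg.pinv(J) @ b

# Equivalent route: np.linalg.lstsq also returns the least-squares solution.
dtheta_lstsq, *_ = np.linalg.lstsq(J, b, rcond=None)
```

Both routes agree when J has full column rank; for large networks one would solve the system iteratively rather than form the pseudo-inverse explicitly.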
2. Method
Let f_theta(x): R^3 -> R be a neural implicit function parameterized by weights theta, where the surface S is defined as S = {x : f_theta(x) = 0}. Given a velocity field v(x, t) that specifies how each point on the surface should move, the classical level set equation states ∂f/∂t + v · ∇f = 0. For a neural network, f depends on time only through its weights, so by the chain rule ∂f/∂t = (∂f/∂theta) · (d_theta/dt), where ∂f/∂theta is the Jacobian of the network output with respect to its weights. Substituting and evaluating at sampled points on the surface, we obtain a linear system J (d_theta/dt) = -v · ∇_x f that can be solved for the weight update d_theta/dt by least squares.
Paragraph function
Core derivation: from the classical level set equation to the weight update rule.
Logical role
The paper's theoretical core: via the chain rule and substitution, the surface evolution problem becomes a linear-algebra problem.
Argumentation technique / potential weaknesses
The derivation is logically tight, with a clear path from PDE to linear system. However, the quality of the least-squares solution depends on the condition number of the Jacobian; ill-conditioning can produce unstable updates.
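The derivation above can be written out in standard notation (a restatement of the paragraph, with sample points x_i introduced for the discretized system):

```latex
\begin{align*}
&\text{Level set equation:} &
  \frac{\partial f}{\partial t} + v \cdot \nabla_x f &= 0 \\
&\text{Chain rule through the weights:} &
  \frac{\partial f}{\partial t} &= \frac{\partial f}{\partial \theta}\,\frac{d\theta}{dt} \\
&\text{Evaluated at samples } x_i \in S: &
  J\,\frac{d\theta}{dt} &= -\,v(x_i, t)\cdot\nabla_x f_\theta(x_i),
  \qquad J_i = \frac{\partial f_\theta(x_i)}{\partial \theta}
\end{align*}
```

The last line is one row per sample point; stacking the rows gives the overdetermined system solved for dθ/dt in the least-squares sense.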
In practice, we sample points on the current zero level set and compute the Jacobian at these points. The resulting overdetermined linear system is solved using damped least squares (Tikhonov regularization) to ensure stability. We demonstrate the framework with several classical flows: mean curvature flow (which smooths surfaces), advection by a velocity field (which translates and deforms surfaces), and normal flow (which inflates or deflates surfaces). The method naturally handles topological changes such as merging and splitting, inheriting this capability from the level set formulation.
Paragraph function
Implementation details and demonstrations: describes the numerical realization and lists application scenarios.
Logical role
Transitions from theory to practice: Tikhonov regularization addresses ill-conditioning, and the multiple flows demonstrate the framework's generality.
Argumentation technique / potential weaknesses
Emphasizing the "natural handling of topological changes" highlights a capability that explicit representations (e.g., meshes) struggle to achieve, effectively underscoring the method's unique advantage.
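The damped least-squares step described above can be sketched as follows. This is a minimal illustration: the matrix sizes, damping value, and the explicit Euler step are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def damped_least_squares_step(J, b, damping=1e-4):
    """Solve (J^T J + damping * I) dtheta = J^T b for the weight update.

    J: (n_points, n_weights) Jacobian of the network at sampled surface points.
    b: (n_points,) right-hand side, -v . grad_x f at those points.
    The damping term (Tikhonov regularization) keeps the solve stable
    when J is ill-conditioned.
    """
    n = J.shape[1]
    A = J.T @ J + damping * np.eye(n)   # regularized normal equations
    return np.linalg.solve(A, J.T @ b)

# Toy overdetermined system: 50 sample points, 10 weights.
rng = np.random.default_rng(1)
J = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

dtheta_dt = damped_least_squares_step(J, b)
theta_new = np.zeros(10) + 0.01 * dtheta_dt   # one explicit Euler step, dt = 0.01
```

As damping goes to zero the solution approaches the plain least-squares answer; larger damping trades accuracy of the update for stability by shrinking its norm.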
3. Experiments
We validate our approach on several tasks. First, we demonstrate mean curvature flow on neural implicit surfaces represented by SIREN networks. Starting from a noisy surface, our method smoothly evolves the surface toward a round sphere (curvature flow shrinks convex surfaces to round points), correctly reproducing the known analytical behavior of curvature flow. Compared to naive approaches that retrain the network at each timestep, our method is 10-50x faster while producing more accurate surface evolution, as the weight update directly encodes the desired motion rather than relying on fitting to a moved point cloud.
Paragraph function
Core experiment: curvature-flow validation and efficiency comparison.
Logical role
Uses a known analytical solution as a baseline to verify correctness, and a speed comparison to demonstrate practicality.
Argumentation technique / potential weaknesses
Validation against known analytical behavior is highly persuasive. The 10-50x speedup range is wide; the actual speedup likely varies by scene.
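The kind of sanity check used in this experiment can be reproduced on an analytic SDF, where the curvature is known in closed form. A small sketch (the finite-difference scheme and sphere SDF are illustrative; the paper uses SIREN networks, not analytic functions):

```python
import numpy as np

def sphere_sdf(x, r=1.0):
    """Signed distance to a sphere of radius r centered at the origin."""
    return np.linalg.norm(x) - r

def mean_curvature(f, x, h=1e-4):
    """div(grad f) via central finite differences. For a unit-gradient SDF
    this is the sum of principal curvatures: 2/r on a sphere of radius r."""
    lap = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        lap += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return lap

x = np.array([1.0, 0.0, 0.0])          # a point on the unit sphere
kappa = mean_curvature(sphere_sdf, x)  # analytic value: 2/r = 2
```

Mean curvature flow then moves each surface point with velocity v = -kappa * n along the outward normal n, which is what drives the noisy surface toward a round sphere.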
Second, we apply our framework to shape deformation by user-specified velocity fields. Given a neural implicit shape (e.g., a bunny or dragon mesh encoded as a neural SDF), users can specify local deformations and our method propagates the deformation smoothly through the weight update, maintaining surface quality without artifacts. Third, we demonstrate topology-changing flows where two initially separate objects merge into one as they are advected toward each other. The neural implicit representation seamlessly handles the topological transition, which would require explicit remeshing in traditional approaches.
Paragraph function
Advanced experiments: demonstrations of deformation and topology change.
Logical role
Extends the application scope to interactive shape editing and topology changes, reinforcing the generality argument.
Argumentation technique / potential weaknesses
The topology-change demonstration is a strong highlight, directly showing that a classical advantage of level set methods carries over to neural representations.
4. Conclusion
We have presented a principled level set theory for evolving neural implicit surfaces under explicit flow fields. By deriving the relationship between the surface evolution PDE and the network weight dynamics, we enable efficient, accurate, and topology-aware deformation of neural implicit representations. Our framework bridges classical computational geometry and modern neural scene representations, opening new possibilities for physics-based animation, shape optimization, and interactive geometry processing with neural implicit functions.
Paragraph function
Overall summary: restates the contribution of bridging the two fields.
Logical role
Uses "bridging" as the central image of the conclusion, stressing the paper's cross-disciplinary value.
Argumentation technique / potential weaknesses
Forward-looking mention of applications such as physics-based animation and shape optimization, though the feasibility of these directions still awaits follow-up work.