![Absolute Eigenvalue Filtering for Projected Newton (1)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x1.png)
Honglin Chen (Columbia University, USA, honglin.chen@columbia.edu), Hsueh-Ti Derek Liu (Roblox Research & University of British Columbia, Canada, hsuehtil@gmail.com), David I.W. Levin (University of Toronto & NVIDIA, Canada, diwlevin@cs.toronto.edu), Changxi Zheng (Columbia University, USA, cxz@cs.columbia.edu), and Alec Jacobson (University of Toronto & Adobe Research, Canada, jacobson@cs.toronto.edu)
(2024)
Abstract.
Volume-preserving hyperelastic materials are widely used to model near-incompressible materials such as rubber and soft tissues. However, the numerical simulation of volume-preserving hyperelastic materials is notoriously challenging within this regime due to the non-convexity of the energy function. In this work, we identify the pitfalls of the popular eigenvalue clamping strategy for projecting Hessian matrices to positive semi-definiteness during Newton's method. We introduce a novel eigenvalue filtering strategy for projected Newton's method to stabilize the optimization of Neo-Hookean energy and other volume-preserving variants under high Poisson's ratio (near 0.5) and large initial volume change. Our method requires only a single line of code change in the existing projected Newton framework, while achieving significant improvements in both stability and convergence speed. We demonstrate the effectiveness and efficiency of our eigenvalue projection scheme on a variety of challenging examples and over different deformations on a large dataset.
Neo-Hookean elasticity, Projected Newton, eigenvalue filtering.
submissionid: 486; journalyear: 2024; copyright: acmlicensed; conference: Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24 (SIGGRAPH Conference Papers '24), July 27-August 1, 2024, Denver, CO, USA; doi: 10.1145/3641519.3657433; isbn: 979-8-4007-0525-0/24/07; ccs: Computing methodologies, Physical simulation

1. Introduction
Volume-preserving hyperelastic energies play a crucial role in accurately capturing the near-incompressible nature of soft tissues and rubber-like materials. Among the common hyperelastic material models, the Neo-Hookean model and its variants are usually the de facto choice for modeling such materials with high Poisson's ratios (see Table 1). Unfortunately, the numerical optimization of Neo-Hookean materials is notoriously challenging within this near-incompressible regime due to the non-convexity from the volume-preserving term, especially in the presence of large volume change. We will show that existing projection approaches that ensure positive-definiteness of Hessian matrices during Newton's method effectively assume the initial shape to have small volume change, restraining the user from freely editing the initial deformation.
Table 1.

| Material | Poisson's ratio |
|---|---|
| Rubber | 0.4999 |
| Tongue | 0.49 |
| Soft palate | 0.49 |
| Fat | 0.49 |
| Skin | 0.48 |
| Muscle | 0.47 |
| Saturated clay | 0.40-0.49 |
We seek to stabilize and accelerate the optimization of Neo-Hookean energies with high Poisson's ratios under the projected Newton framework in a robust and efficient way. For robustness, we target the scenarios of large initial deformation and volume change, allowing the user to freely deform the initial shape without worrying about the optimization blowing up. For efficiency, we aim to improve the convergence speed of the projected Newton method while keeping the per-step computation cost unchanged: filtering on the eigenvalues of per-element Hessian contributions à la Teran et al. (2005). Surprisingly, this leads to an extremely simple and elegant solution, requiring only a one-line code change in the existing projected Newton framework without any additional parameter (TL;DR):
Before (eigenvalue clamping):

```
U,S,V = eig(Hi)
Hi_proj = U*max(S,0)*V'
```

After (ours):

```
U,S,V = eig(Hi)
Hi_proj = U*abs(S)*V'
```
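For readers who want to experiment with the two strategies, a minimal numpy sketch of both projections follows. The function names and the toy matrix are ours, not from the paper; since elastic Hessians are symmetric, `eig(Hi)` reduces to a symmetric eigendecomposition with `V = U`.

```python
import numpy as np

def project_clamp(Hi, eps=0.0):
    # Eigenvalue clamping (Teran et al. 2005): negative eigenvalues -> eps
    S, U = np.linalg.eigh(Hi)                 # symmetric Hi: eig(Hi) = U S U^T
    return U @ np.diag(np.maximum(S, eps)) @ U.T

def project_abs(Hi):
    # The one-line change: negative eigenvalues -> their absolute values
    S, U = np.linalg.eigh(Hi)
    return U @ np.diag(np.abs(S)) @ U.T

# toy indefinite "element Hessian" with eigenvalues 3 and -1
Hi = np.array([[1.0, 2.0],
               [2.0, 1.0]])
```

Both projections keep the eigenvectors and differ only in how the negative eigenvalue is treated: clamping maps the spectrum {-1, 3} to {0, 3}, while the absolute-value filter maps it to {1, 3}.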
![Absolute Eigenvalue Filtering for Projected Newton (2)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x2.png)
While our proposed change to the code is small, the technical analysis behind it is non-trivial and interesting:

- Our simple solution is supported by a thorough analysis. We show that the high non-convexity of the Neo-Hookean energy stems from high Poisson's ratio and large volume change (see our eigenanalysis of the volume-preserving term). We analyze the behavior of the projected Newton method under such scenarios and identify potential pitfalls of the traditional eigenvalue-clamping projection strategy (Teran et al., 2005) (Sec. 4).
- In response, we propose a novel eigenvalue projection strategy for Newton's method to stabilize the optimization of Neo-Hookean-type energies under high Poisson's ratio and large initial volume change (Sec. 5).
Our paper complements the "Stable Neo-Hookean Flesh Simulation" paper (Smith et al., 2018), to which our title alludes, improving numerical stability by dissecting the energy's non-convexity in projected Newton optimization. We illustrate the effectiveness and efficiency of our eigenvalue projection scheme on a wide range of challenging examples, including different deformations, geometries, mesh resolutions, elastic energies, and physical parameters. Through extensive experiments, we show that in the case of large volume change, our method outperforms the state-of-the-art eigenvalue projection strategy and other alternatives in terms of stability and convergence speed, while still, on average, being comparable for simulations with moderate volume changes. Our method is robust across different mesh resolutions and to extreme volume change and inversion. It achieves a stable and fast convergence rate in the presence of a high Poisson's ratio and large volume change.
![Absolute Eigenvalue Filtering for Projected Newton (3)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x3.png)
2. Related Work
While researchers have studied first-order methods for simulating volumetric objects, our approach focuses on improving the robustness and efficiency of second-order methods, to which we limit our discussion.
Volume-preserving hyperelastic energies play a crucial role in capturing the volumetric behavior of deformable objects. Many hyperelastic energies are modeled as functions of the deformation gradient F. As hyperelastic materials (e.g., rubber) resist volume changes, these energy functions often contain a volume term, a function of the determinant J of the deformation gradient, to penalize volume changes. In practice, Neo-Hookean energy (Ogden, 1997) and other volume-preserving variants (e.g., volume-preserving ARAP (Lin et al., 2022)) have been widely used for modeling materials with high Poisson's ratios. Other alternatives such as the St. Venant-Kirchhoff model (Picinbono et al., 2004) and the co-rotational model (Müller et al., 2002) approximate or linearize the volume term and thus compromise its volume-preserving property and visual effects, particularly in the cases of large deformation and high Poisson's ratios (see Sec. 6.2 of (Kim and Eberle, 2022)).
More recently, several works have focused on improving the stability and computational efficiency of volume-preserving hyperelastic energies. To improve robustness to mesh inversion and rest-state stability, Smith et al. (2018) proposed the stable Neo-Hookean energy and further applied their analysis to stabilize other hyperelastic energies such as Fung and Arruda-Boyce elasticity (Smith et al., 2018) and Mooney-Rivlin elasticity (Kim and Eberle, 2022). To improve computational efficiency, a series of works (Smith et al., 2018, 2019; Lin et al., 2022) derive analytical eigendecompositions for isotropic distortion energies to accelerate the positive semi-definite (PSD) projection of per-element Hessian contributions (following (Teran et al., 2005)).
![Absolute Eigenvalue Filtering for Projected Newton (4)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x4.png)
(Element-wise) projected Newton’s method.
When the Hessian matrix is not positive definite, steps in Newton's method may not always find a direction leading to energy decrease (Nocedal and Wright, 2006). Several Hessian approximation strategies, a.k.a. projected Newton's methods, have been adopted. Directly computing the eigenvalues and eigenvectors of a global Hessian matrix is often too expensive. For many energy functions, the Hessian matrix can be decomposed into a summation over the sub-Hessian of each mesh element. Thereby, one can project each sub-Hessian to the PSD cone by clamping negative eigenvalues to a small positive number (or zero) (Teran et al., 2005) or by adding a diagonal matrix (Fu and Liu, 2016). These approaches guarantee that the sub-Hessians are PSD and thus the resulting global Hessian is also PSD. While improving robustness, projected Newton's method may suffer from a worse convergence rate compared to the classic Newton's method (Longva et al., 2023). What constitutes a better projection strategy remains an open question (Nocedal and Wright, 2006).
In this work, we analyze the limitations of existing per-element projected Newton strategies, supported by extensive empirical studies. In the optimization and machine learning literature, Gill et al. (1981) (Sec. 4.4.2.1), Paternain et al. (2019), and Dauphin et al. (2014) suggest projecting the negative eigenvalues of the global Hessian to their absolute values. We demonstrate that projecting the negative eigenvalues of the per-element Hessian to their absolute values, similar to (Paternain et al., 2019) and (Dauphin et al., 2014), performs the best in hyperelastic simulations.
![Absolute Eigenvalue Filtering for Projected Newton (5)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x5.png)
Newton-type methods.
![Absolute Eigenvalue Filtering for Projected Newton (6)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x6.png)
Alternatively, there are Newton-type methods that do not rely on per-element PSD projection. One strategy is to add a multiple of the identity matrix to the local or global Hessian (Nocedal and Wright, 2006), but this tends to overdamp the convergence (see (Liu et al., 2017) Fig. 8 and (Shtengel et al., 2017) Fig. 2). Shtengel et al. (2017) proposed Composite Majorization, a tight convex majorizer serving as an analytic PSD approximation of the Hessian. While this approximation is efficient to compute, it remains unclear how to extend it beyond 2D problems and to different types of energies. Instead of directly approximating the Hessian matrix, Lan et al. (2023) proposed a strategy to adjust the search direction derived from the (possibly non-PSD) Hessian when performing stencil-wise Newton-CG. Chen et al. (2014) applied the asymptotic numerical method to the (inverse) static equilibrium problem and demonstrated its advantages over traditional Newton-type methods. When using incremental potentials in dynamic settings, Longva et al. (2023) proposed two alternative strategies: one uses the original Hessian matrix whenever it is positive definite and an approximated Hessian matrix otherwise; the other adds a multiple of the mass matrix to the original Hessian until it becomes positive definite. Both strategies may require one or several additional Cholesky factorizations per Newton iteration whenever the original Hessian is not PSD. Moreover, they still inherit the flaws of the fallback Hessians whenever those are invoked. In contrast, our method maintains the same per-iteration computational cost as the element-wise projected Newton's method (Teran et al., 2005) while achieving a better convergence rate than projected Newton (Teran et al., 2005) and other alternatives (Longva et al., 2023).
3. Background
![Absolute Eigenvalue Filtering for Projected Newton (7)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x7.png)
Our approach focuses on quasi-static simulation, which amounts to solving a minimization problem for a (typically nonlinear) energy E with respect to parameters x:

(1)   min_x E(x).
Perhaps the most popular way of solving this problem is Newton's method. Given a set of parameters x at the current iteration, Newton's method approximates the energy locally with a quadratic function

(2)   E_q(d) = E(x) + d^T g + (1/2) d^T H d,

where g and H are the gradient and the Hessian of the energy evaluated at x, and d denotes the update vector to the parameter x. To obtain the optimal d, one can set the derivative of E_q to zero and obtain the update with

(3)   d = -H^{-1} g.
Because E_q is merely an approximation of the energy, the resulting d may not guarantee energy decrease at the end of the iteration. Thus, a line search is needed to find a step size α such that E(x + α d) achieves a sufficient decrease in the energy. Newton's method iterates between these steps until convergence.
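The loop described above can be sketched as a short numpy routine (a hypothetical minimal implementation of our own, run here on a convex quadratic toy energy; a plain decrease test stands in for a proper sufficient-decrease condition such as Armijo):

```python
import numpy as np

def newton_with_line_search(E, grad, hess, x, iters=20):
    # Newton's method: quadratic model E_q(d) = E(x) + d^T g + 0.5 d^T H d
    # gives the update d = -H^{-1} g, followed by backtracking line search.
    for _ in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -np.linalg.solve(H, g)
        alpha = 1.0
        while E(x + alpha * d) > E(x) and alpha > 1e-8:
            alpha *= 0.5                      # backtrack until energy decreases
        x = x + alpha * d
    return x

# convex toy energy E(x) = 0.5 x^T A x - b^T x, minimizer x* = A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
E = lambda x: 0.5 * x @ A @ x - b @ x
x_star = newton_with_line_search(E, lambda x: A @ x - b, lambda x: A,
                                 np.zeros(2))
```

On this convex quadratic the very first full step lands on the minimizer; the failure modes discussed next arise only when H is not positive definite.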
![Absolute Eigenvalue Filtering for Projected Newton (8)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x8.png)
However, the process summarized above only works in the ideal situation where the Hessian matrix is positive definite. Geometrically, a positive definite Hessian corresponds to a convex energy landscape, which ensures that the update direction towards the critical point is a descent direction towards the minimum of E_q (see inset). An indefinite or a negative definite Hessian, however, corresponds to a saddle or a concave shape, respectively (see inset). In both cases, the direction towards the critical point could be an energy ascent direction, and thus Newton's method may fail to converge (see inset). To address this issue, a popular approach is the (per-element) projected Newton's method.
3.1. Projected Newton
The idea of the projected Newton's method is to approximate a non-positive-definite Hessian matrix with a positive definite one. In the case of finite-element simulations, the (global) Hessian matrix H can be assembled from the per-element Hessian matrix H_i for each mesh (or tetrahedral) element i:

(4)   H = Σ_i P_i^T H_i P_i,

where P_i is a selection matrix relating the per-element degrees of freedom to the global degrees of freedom.
The de facto way of making the global Hessian positive definite is to project each per-element Hessian to the positive semi-definite (PSD) cone by performing an eigendecomposition of the per-element Hessian, numerically or analytically, followed by a clamping strategy that sets the negative eigenvalues to zero (or a small positive number ε) (Teran et al., 2005):

(5)   Λ_j ← max(Λ_j, ε)   for each eigenvalue Λ_j of H_i.
Then, one can use the clamped eigenvalues with the original eigenvectors of H_i to obtain a PSD-projected per-element Hessian H_i^+. Although the projection happens locally, this guarantees that the approximated global Hessian

(6)   H^+ = Σ_i P_i^T H_i^+ P_i,

assembled from the H_i^+, is PSD (Rockafellar, 1970), yielding a more robust simulation compared to using H. This per-element Hessian modification strategy is also scalable because it only requires an eigendecomposition of each (small) per-element Hessian H_i, instead of the global matrix.
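The scatter-gather assembly of Eq. 4-6 can be sketched in a few lines of numpy. In this illustration of our own, index arrays stand in for the selection matrices P_i, and the eigenvalue filter is passed in as a callback (all names are hypothetical):

```python
import numpy as np

def assemble_projected_hessian(elements, local_hessian, n_dof, project):
    # H^+ = sum_i P_i^T project(H_i) P_i, where each P_i selects the
    # element's global dof indices (represented here by an index array).
    H = np.zeros((n_dof, n_dof))
    for idx in elements:                       # idx: global dofs of element i
        Hi = local_hessian(idx)
        S, U = np.linalg.eigh(Hi)              # small, fixed-size eigenproblem
        Hi_proj = U @ np.diag(project(S)) @ U.T
        H[np.ix_(idx, idx)] += Hi_proj         # scatter-add, i.e. P_i^T (.) P_i
    return H
```

Because each projected sub-Hessian is PSD and a sum of PSD matrices is PSD, the assembled H is PSD regardless of whether `project` is the clamping filter (`lambda S: np.maximum(S, 0.0)`) or the absolute-value filter (`np.abs`).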
4. Pitfalls of Eigenvalue Clamping
![Absolute Eigenvalue Filtering for Projected Newton (9)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x9.png)
Although the projected Newton method improves robustness over vanilla Newton's method, the widely used eigenvalue clamping strategy described in Sec. 3.1 still suffers from poor convergence when the simulated object undergoes large volume changes. The cause lies in the clamping operation in Eq. 5, which raises the minimum eigenvalue to zero (or a small positive number). This operation effectively turns the local quadratic approximation from a saddle shape (see inset in Sec. 3.1) into a "valley" shape (see inset). In this situation, the Newton update direction could point toward any point at the bottom of the valley, including points that are far away from the current parameter location and even points leading to energy increase.
To solidify this intuition, let us consider a minimal example that exemplifies the extreme failure case of eigenvalue clamping. The function of two variables visualized in Fig. 8 is defined as
(7) |
At the evaluation point (indicated by the white point), we have:

(8)   H = Φ diag(-1.99×10^6, 3.99) Φ^T,   g = (3.99, -0.02)^T,

with eigenvector matrix Φ.
If we project the negative eigenvalue to a small positive value ε, the corresponding Newton direction can be computed as

(9)   d = -(Φ diag(ε, 3.99) Φ^T)^{-1} (3.99, -0.02)^T.
For small ε, the update direction is dominated by the y-coordinate. In the extreme case where ε → 0, the update direction even blows up along the y-coordinate. Although the energy technically decreases along the search direction, this projection violates the spirit of Newton's method, which relies on the quadratic approximation of the energy function in a local region.
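The blow-up is easy to reproduce numerically with the eigenvalues above. In this sketch we assume an identity eigenbasis purely for illustration (the paper's Φ is not reproduced here, so the exact entries differ from the worked example, but the magnitudes tell the same story):

```python
import numpy as np

# Eigenvalues and gradient from the minimal example; the eigenbasis is
# assumed to be the identity (the true Phi would mix the coordinates).
S = np.array([-1.99e6, 3.99])
g = np.array([3.99, -0.02])

def newton_direction(S_proj):
    return -g / S_proj     # d = -H^{-1} g in the (assumed) eigenbasis

d_clamp = newton_direction(np.maximum(S, 1e-6))  # clamping: ~1/eps blow-up
d_abs = newton_direction(np.abs(S))              # abs: step shrinks with |eig|
```

With clamping, the component along the clamped direction is about 3.99/ε, an enormous step; with the absolute-value filter that same component is 3.99/1.99×10^6, so the non-convex direction is damped in proportion to how concave it is.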
![Absolute Eigenvalue Filtering for Projected Newton (10)](https://i0.wp.com/arxiv.org/html/2406.05928v2/x10.png)
The issue of eigenvalue clamping also appears in 3D. This is especially common when simulating large deformations of materials with high Poisson's ratios (see Sec. 6).
In our 2D example, we can drive this bad eigenvalue arbitrarily negative by moving the evaluation point. A more negative eigenvalue means the function along the corresponding eigendirection is more concave, and thus more poorly approximated by a convex quadratic. Unfortunately, clamping to near-zero effectively prefers this direction (because after inversion the coefficient explodes; see Eq. 3). We would rather have a filtering method that avoids this direction. It is much more reasonable to make the effect of non-convex directions directly proportional to their eigenvalue magnitude. This immediately motivates using the absolute value as a filter.
5. Absolute Value Eigenvalue Projection
The analysis above suggests that projecting a large negative eigenvalue to a small positive value may lead to a poor estimate of the descent direction. The optimization may get stuck in a local minimum or even diverge in this case. Inspired by (Gill et al., 1981) (Sec. 4.4.2.1) and (Paternain et al., 2019), we propose to project the negative eigenvalues of the local Hessian to their absolute values:

(10)   Λ_j ← |Λ_j|   for each eigenvalue Λ_j of H_i.
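Two properties of this filter are worth noting, shown in a small numpy sketch of our own: the filtered matrix is always PSD, and an already-PSD Hessian passes through unchanged, so the filter only intervenes in non-convex regions.

```python
import numpy as np

def abs_project(H):
    # Eq. (10): replace each eigenvalue by its absolute value,
    # keeping the eigenvectors.
    S, U = np.linalg.eigh(H)
    return U @ np.diag(np.abs(S)) @ U.T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2            # random symmetric, generally indefinite
H_proj = abs_project(H)      # PSD; applying the filter again is a no-op
```

The filter is also parameter-free: unlike clamping, there is no threshold ε to tune.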
The resulting projected Newton step can also be interpreted as the result of a generalized trust-region method where the model is a first-order Taylor expansion and the trust region is defined using the Hessian metric (see (Dauphin et al., 2014)).

Implementation. For implementations already using per-element projection and scatter-gather assembly of Hessians following Teran et al. (2005), our method is a single-line change (see Page 1).
For completeness, the full per-element absolute-eigenvalue Hessian construction follows as:

```
initialize H_proj to empty sparse matrix
for each element i:
    # Pi*x selects element i's local variables from x
    either:
        Hi = constructLocalHessian(i, Pi*x)
        U,S,V = eig(Hi)
    or:
        U,S,V = analyticDecomposition(i, Pi*x)
    # our one-line change
    Hi_proj = U*abs(S)*V'
    # accumulate into global Hessian
    H_proj += Pi'*Hi_proj*Pi
```

Applying our projection to the minimal example above, the Newton direction now becomes

d = -(H^+)^{-1} g = -(Φ diag(|-1.99×10^6|, 3.99) Φ^T)^{-1} (3.99, -0.02)^T = -(Φ diag(5×10^{-7}, 0.25) Φ^T) (3.99, -0.02)^T = -(0.99, -0.01)^T,

which is more aligned with the direction towards the global minimum (see Fig. 8).

Figure. Our absolute projection strategy is parameter-free and achieves consistent speedup over the eigenvalue clamping strategy in the cases of high Poisson's ratio and large volume change. On the contrary, for the eigenvalue clamping strategy, the optimal ε varies across different examples and picking the optimal one requires manual tuning.

5.1. Eigenanalysis of Volume-preserving Energy

While the two-variable example may seem contrived, we show that the existence of large negative eigenvalues is closely connected to a high Poisson's ratio and large volume change due to the volume-preserving term in Neo-Hookean energy. When the Hessian contains large negative eigenvalues, the absolute value projection gives a better estimate of the descent direction, making the optimization more stable and faster to converge. The question then becomes: when does the Hessian contain large negative eigenvalues? It turns out that this occurs more often than one might expect.

Figure. As Poisson's ratio ν → 0.5, Lamé's second parameter λ → ∞.

The class of Neo-Hookean energies usually takes the form

Ψ = (μ/2)(I_C - 3) - μ log(J) + (λ/2)(J - 1)^2,

where μ and λ are the first and second Lamé parameters, I_C is the first right Cauchy-Green invariant, and J is the determinant of the deformation gradient F.

To ensure inversion stability and rest stability, Smith et al. (2018) propose the stable Neo-Hookean energy:

Ψ = (μ/2)(I_C - 3) + (λ/2)(J - α)^2,

where α = 1 + μ/λ.

As shown by the eigenanalysis in (Smith et al., 2018; Kim and Eberle, 2022), aside from the three eigenvalues corresponding to scaling, the other six twist and flip eigenvalues of the local Hessian matrix for stable Neo-Hookean energy can be written as

Λ_3 = μ + σ_z(λ(J - 1) - μ)
Λ_4 = μ + σ_x(λ(J - 1) - μ)
Λ_5 = μ + σ_y(λ(J - 1) - μ)
Λ_6 = μ - σ_z(λ(J - 1) - μ)
Λ_7 = μ - σ_x(λ(J - 1) - μ)
Λ_8 = μ - σ_y(λ(J - 1) - μ),

where λ and μ are the Lamé parameters, and σ_x, σ_y, σ_z are the singular values of the deformation gradient F.
Figure. Energy and smallest eigenvalues of the stable Neo-Hookean energy with different Poisson's ratios and deformations. Here the deformation is created by dragging the top vertex of a regular tetrahedron around, and the center of the plot corresponds to zero displacement.

When the Poisson's ratio is close to 0.5, the value of Lamé's second parameter λ is very large (see the figure above). Thus the magnitude of potential negative eigenvalues largely depends on λ and the volume change J. For instance, a Poisson's ratio of 0.495 corresponds to λ = 99μ (since λ/μ = 2ν/(1 - 2ν)). This gives the eigenvalues a large negative value whenever there is a relatively large volume change in a specific element (i.e., when |J - 1| is large).

When the Hessian contains large negative eigenvalues, projecting them to a small positive value may lead to a poor estimate of the descent direction that strongly biases towards the negative eigenvectors (see Sec. 4). In contrast, the absolute value projection gives a better estimate of the descent direction, making the optimization more stable and faster to converge.

Acknowledgements. This work is funded in part by two NSERC Discovery grants, the Ontario Early Research Award program, the Canada Research Chairs Program, a Sloan Research Fellowship, the DSI Catalyst Grant program, SoftBank, and gifts by Adobe Research and Autodesk. We thank Danny Kaufman for early discussions; Yuta Noma for testing the code; and all the artists for sharing the 3D models and anonymous reviewers for their helpful comments and suggestions.

6. Results
Our Newton solver uses the classical backtracking line search strategy (Nocedal and Wright, 2006). As inversion is extremely common in the presence of large initial deformation (see the inset), we do not use an inversion-aware line search. We implement our method in C++ with libigl (Jacobson et al., 2018) and use TinyAD (Schmidt et al., 2022) for automatic differentiation. Experiments are performed on a MacBook Pro with an Apple M2 processor and 24 GB of RAM.
Figure 10. Our abs projection strategy generalizes over various volume-preserving hyperelastic models. Here we add the volume term to different strain energies, including Mooney-Rivlin (Smith et al., 2018), ARAP (Lin et al., 2022), and Symmetric Dirichlet (Smith and Schaefer, 2015).
Figure 11. The speedup of our abs projection strategy over the eigenvalue clamping strategy increases with the mesh resolution. Here we stretch the rightmost vertices of a cube (at 4 different resolutions) by a factor of 3.0x.
Figure 12. Our method stabilizes and accelerates the optimization in the presence of large rotations.Here we stretch and twist the topmost vertices of the doll by 360 degrees (divided into four 90° subsolves to avoid ambiguity).
Figure 13. Our method can also accelerate the optimization of volume-preserving parameterization, where we minimize the energy. Here we start from the Tutte embedding with the same total area as the original surface mesh (the cut is visualized in black), but nevertheless the per-element area distortion can still be large.

Convergence and Stability. We compare our abs projection strategy to (Teran et al., 2005) under a variety of deformations, including stretching, compression, shearing, and twisting in Fig. 14, Fig. 9, Fig. 2, Fig. 3, and Fig. 12. Our abs projection strategy converges swiftly and smoothly under large local volume change and high Poisson's ratio, while (Teran et al., 2005) suffers from a much slower convergence rate due to instability in the optimization. The energies and shapes for both methods are generally consistent within each example when converged, except in a few cases where the other method converges to a different local minimum after "blowing up" mid-optimization (e.g., Fig. 1 (clamp) and Fig. 16 (global abs)). We also compare our local abs approach to the global abs approach (Dauphin et al., 2014; Paternain et al., 2019) in Fig. 16. Compared to our method, the global abs approach is computationally intractable due to a full eigendecomposition of the global Hessian (taking more than 3 hours for one Hessian in Fig. 15), and leads to damped convergence and suboptimal configurations (Fig. 16). We further stress-test our method under extreme volume change (stretching to 11x) and an even higher Poisson's ratio in Fig. 7. Our method still maintains a stable and fast convergence rate, while (Teran et al., 2005) still fails to converge after 200 iterations.
Figure 15. We compare our abs projection strategy with the eigenvalue clamping strategy (Teran et al., 2005), adding a multiple of the identity matrix to the local Hessian or the global Hessian (until it becomes PSD) (Martin et al., 2011), and the Projection-on-Demand and Kinetic strategies (Longva et al., 2023). Note that the Projection-on-Demand strategy (Longva et al., 2023) requires an additional Cholesky factorization to check the positive definiteness of the original Hessian at each Newton iteration.
Figure 16. Compared to our local approach, the global abs approach (Dauphin et al., 2014; Paternain et al., 2019) is computationally intractable due to a full eigendecomposition of the global Hessian (taking more than 3 hours for one Hessian in Fig. 15), and leads to damped convergence and suboptimal configurations (right). On average, the global abs approach takes 10.2 minutes per Newton iteration, while our local abs approach takes only 0.25 seconds per Newton iteration.
Figure 17. We visualize the first 20 eigenvalues of the Hessian and the resulting descent direction using our abs projection strategy (red) and the eigenvalue clamping strategy (blue) under different volume changes and Poisson's ratios. Here we uniformly scale the mesh by 1.05x, 1.2x, and 1.5x. The eigenvalues of the original Hessian are denoted in yellow.

Comparison. We evaluate our method and (Teran et al., 2005) (with a small positive clamping threshold ε) on a total of 593 closed, genus-0, high-resolution tetrahedral meshes (with more than 5,000 vertices) from the TetWild Thingi10k dataset (Hu et al., 2018; Zhou and Jacobson, 2016) with diverse deformations, including stretching, stretching (2x), compression, twisting, and bending. As shown by the histogram in Fig. 6, our method is at least comparable with eigenvalue clamping and offers, on average, a 2.5x speedup for large deformations. Since the eigenvalue clamping strategy contains an additional user-picked parameter controlling the clamping threshold, we also compare the convergence rate of our method and (Teran et al., 2005) with different ε. Our method is parameter-free and achieves consistent speedup over (Teran et al., 2005). We further compare our abs projection scheme with other alternatives in Fig. 15, including adding a multiple of the identity matrix to the local Hessian or the global Hessian (until it becomes PSD) (Martin et al., 2011), and the Projection-on-Demand and Kinetic strategies (Longva et al., 2023). Adding a multiple of the identity or mass matrix to the Hessian eventually makes the optimization closer to (preconditioned) gradient descent, and thus damps the convergence too much for large-deformation problems. Note that the Projection-on-Demand strategy (Longva et al., 2023) requires an additional Cholesky factorization to check the positive definiteness of the original Hessian at each Newton iteration.
7. Conclusion & Future Work
Figure 18. We perform a collision experiment using Incremental Potential Contact (IPC) (Li et al., 2020), where we put a cylinder (orange) above the back of a horse. Our method is still able to achieve speedup with collisions, even though IPC's intersection-aware line search clamps down the step size and dominates convergence after collisions happen.

We propose a Hessian modification strategy to improve the robustness and efficiency of hyperelastic simulations.
Our method is a simple one-line change to the existing projected Newton's method, making it extremely reproducible and widely applicable in many simulation frameworks. We primarily evaluate our method on Neo-Hookean simulations with large deformations. For small deformations, especially with compression (see the inset), our method can sometimes slightly damp the convergence compared to (Teran et al., 2005). Extending our analysis to a wider range of finite element simulations (e.g., collisions (Fig. 18)) could better delineate the applicability of our approach, and potentially lead to an even better adaptive PSD projection method. Exploring other Hessian modification strategies, such as adding higher-order regularization terms to the elastic energy (Kim and Eberle, 2022) or combining with (Longva et al., 2023), could also be an interesting future direction. Another promising direction is a volume-preserving term that does not introduce large negative eigenvalues in the presence of large deformations.
References