4.4 Numerical solver
Combining both terms gives our variational optimization model:
Here, p and h are the unknowns to be determined.
This variational optimization problem is highly non-linear, and is also non-differentiable due to the non-smooth norm it contains. To solve it, we introduce two auxiliary variables, q and z, through which the optimization model becomes a constrained problem:
We construct the following augmented Lagrangian functional:
where the two Lagrange multipliers are paired with the constraint residuals through the vector dot product, and the two penalty weights are positive numbers. We now seek a solution to the saddle-point problem:
by using Algorithm 1. The algorithm involves solving several sub-problems, whose details are described next.
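To make the alternating structure concrete, the following sketch shows one possible realization of the loop; the sub-problem solvers, constraint residuals, and the standard multiplier-ascent updates are placeholders and assumptions, since Algorithm 1's exact steps and stopping criterion are not reproduced here.

```python
import numpy as np

# A minimal sketch of the alternating saddle-point solver (ADMM-style).
# `solve_p`, `solve_h`, `solve_q`, `solve_z` stand for the sub-problem
# solvers described below; `constraint1`/`constraint2` return the residuals
# of the two constraints introduced by the auxiliary variables q and z.
# The multiplier updates and stopping test are standard augmented-Lagrangian
# choices, assumed here rather than taken from Algorithm 1.
def alternating_solver(p, h, q, z, lam1, lam2, mu1, mu2,
                       solve_p, solve_h, solve_q, solve_z,
                       constraint1, constraint2,
                       max_iter=100, tol=1e-6):
    for _ in range(max_iter):
        p = solve_p(h, q, z, lam1, lam2)   # quadratic -> linear system
        h = solve_h(p, q, z, lam1, lam2)   # solved with PSO
        q = solve_q(p, h, lam1)            # closed form
        z = solve_z(p, h, lam2)            # closed form
        r1 = constraint1(p, h, q)          # residual of the first constraint
        r2 = constraint2(p, h, z)          # residual of the second constraint
        lam1 = lam1 + mu1 * r1             # multiplier (dual) ascent
        lam2 = lam2 + mu2 * r2
        if np.linalg.norm(r1) < tol and np.linalg.norm(r2) < tol:
            break
    return p, h, q, z
```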
Initialization of h. The initial sharpness is set from a coarse detection of sharp features. Specifically, for each edge of the initial control mesh, we use an angle threshold to decide whether the edge is a candidate crease edge: if the angle between the normals of the two faces containing the edge exceeds the threshold, the edge receives a nonzero initial sharpness; otherwise its initial sharpness is set to zero.
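A small sketch of this initialization is given below; the mesh representation, the use of degrees for the threshold, and the particular nonzero value `h_init` given to candidate crease edges are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def init_sharpness(vertices, faces, edge_faces, theta_thresh_deg, h_init=1.0):
    """Coarse crease detection on the initial control mesh.
    `vertices`: (n, 3) array; `faces`: list of vertex-index lists;
    `edge_faces`: dict mapping an edge (i, j) to its two adjacent face indices.
    An edge whose adjacent face normals differ by more than the threshold
    gets the (assumed) nonzero sharpness h_init; all other edges get 0."""
    def face_normal(face):
        # Normal from the first three vertices (faces assumed planar).
        v0, v1, v2 = vertices[face[0]], vertices[face[1]], vertices[face[2]]
        n = np.cross(v1 - v0, v2 - v0)
        return n / np.linalg.norm(n)

    sharpness = {}
    for edge, (fa, fb) in edge_faces.items():
        cos_angle = np.clip(np.dot(face_normal(faces[fa]),
                                   face_normal(faces[fb])), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        sharpness[edge] = h_init if angle >= theta_thresh_deg else 0.0
    return sharpness
```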
p subproblem. Fixing the remaining variables, we solve the following quadratic problem:
Since the objective is quadratic in p, setting its gradient to zero reduces this to a linear system.
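For instance, if the fixed-h objective takes a regularized least-squares shape (an assumed form, with S, d, B, c, and mu standing in for the quantities that are held fixed), setting the gradient to zero gives the usual normal equations:

```python
import numpy as np

def solve_p_subproblem(S, d, B, c, mu):
    """Minimize ||S p - d||^2 + mu * ||B p - c||^2 over p.
    S, d, B, c, mu are placeholders for the fixed-h quantities; in practice
    S and B are large and sparse, so a sparse or iterative solver is used."""
    A = S.T @ S + mu * (B.T @ B)       # system matrix of the normal equations
    rhs = S.T @ d + mu * (B.T @ c)     # right-hand side
    return np.linalg.solve(A, rhs)
```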
h subproblem. Fixing the remaining variables, we seek the sharpness h, which affects only the subdivision rule. Hence, the h sub-problem is
To solve this minimization problem, we adopt particle swarm optimization (PSO) [31], an evolutionary procedure. It starts from many randomly initialized seeds; at each iteration, a set of candidate solutions is generated, each candidate is scored, and the candidates are shifted through the search space towards the best solution found so far.
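A compact PSO sketch is shown below; `objective` stands for the h sub-problem energy evaluated at a candidate sharpness vector with the other variables fixed, and the swarm size, inertia weight, and acceleration coefficients are illustrative defaults rather than the paper's settings.

```python
import numpy as np

def pso(objective, dim, lower, upper, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box [lower, upper]^dim with standard PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                                      # velocities
    pbest = x.copy()                                          # personal bests
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                    # global best
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia + attraction towards personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        if vals.min() < g_val:
            g, g_val = x[vals.argmin()].copy(), vals.min()
    return g, g_val
```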
q subproblem. Fixing the remaining variables, we find q by solving:
After reformulation, we have
This problem is decomposable and has the closed-form solution:
where
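As an illustration only: if the non-differentiable term attached to q were an ℓ1 norm (an assumption, since the paper's norm is not reproduced here), each decomposed scalar problem would be solved by the standard soft-thresholding (shrinkage) operator:

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise minimizer of tau * |q| + 0.5 * (q - v)**2."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Example: entries smaller than tau in magnitude are set to zero.
print(soft_threshold(np.array([0.3, -1.2, 0.05]), tau=0.5))  # [ 0.  -0.7  0. ]
```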
z subproblem. Fixing the remaining variables, we find z by solving:
which also has a closed-form solution:
where
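Similarly, if z collects vector-valued quantities penalized by a sum of Euclidean norms (an ℓ2,1-type term, again an assumption about the exact norm used), the decomposed problems are solved by a block-wise shrinkage:

```python
import numpy as np

def block_shrink(V, tau):
    """Row-wise minimizer of tau * ||z_i|| + 0.5 * ||z_i - v_i||^2.
    V has one vector per row; rows with norm below tau collapse to zero."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * V
```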