procrustes.softassign
The Softassign Procrustes Module.
- procrustes.softassign.softassign(a: ndarray, b: ndarray, pad: bool = True, translate: bool = False, scale: bool = False, unpad_col: bool = False, unpad_row: bool = False, check_finite: bool = True, weight: ndarray | None = None, iteration_soft: int = 50, iteration_sink: int = 200, beta_r: float = 1.1, beta_f: float = 100000.0, epsilon: float = 0.05, epsilon_soft: float = 0.001, epsilon_sink: float = 0.001, k: float = 0.15, gamma_scaler: float = 1.01, n_stop: int = 3, adapted: bool = True, beta_0: float | None = None, m_guess: float | None = None, iteration_anneal: int | None = None, kopt: bool = False, kopt_k: int = 3) → ProcrustesResult
Find the transformation matrix for 2-sided permutation Procrustes with softassign algorithm.
- Parameters:
a (ndarray) -- The 2D-array \(\mathbf{A}_{m \times n}\) which is going to be transformed.
b (ndarray) -- The 2D-array \(\mathbf{B}_{m \times n}\) representing the reference.
pad (bool, optional) -- Add zero rows (at the bottom) and/or columns (to the right-hand side) of matrices \(\mathbf{A}\) and \(\mathbf{B}\) so that they have the same shape.
translate (bool, optional) -- If True, both arrays are centered at origin (columns of the arrays will have mean zero).
scale (bool, optional) -- If True, both arrays are normalized with respect to the Frobenius norm, i.e., \(\text{Tr}\left[\mathbf{A}^\dagger\mathbf{A}\right] = 1\) and \(\text{Tr}\left[\mathbf{B}^\dagger\mathbf{B}\right] = 1\).
unpad_col (bool, optional) -- If True, zero columns (with values less than 1.0e-8) on the right-hand side of the initial \(\mathbf{A}\) and \(\mathbf{B}\) matrices are removed.
unpad_row (bool, optional) -- If True, zero rows (with values less than 1.0e-8) at the bottom of the initial \(\mathbf{A}\) and \(\mathbf{B}\) matrices are removed.
check_finite (bool, optional) -- If True, convert the input to an array, checking for NaNs or Infs. Default=True.
weight (ndarray, optional) -- The 1D-array representing the weights of each row of \(\mathbf{A}\). This defines the elements of the diagonal matrix \(\mathbf{W}\) that is multiplied by the \(\mathbf{A}\) matrix, i.e., \(\mathbf{A} \rightarrow \mathbf{WA}\) (see the snippet after this parameter list).
iteration_soft (int, optional) -- Number of iterations for softassign loop.
iteration_sink (int, optional) -- Number of iterations for Sinkhorn loop.
beta_r (float, optional) -- Annealing rate, which should be greater than 1.
beta_f (float, optional) -- The final inverse temperature.
epsilon (float, optional) -- The tolerance value for annealing loop.
epsilon_soft (float, optional) -- The tolerance value used for softassign.
epsilon_sink (float, optional) -- The tolerance value used for the Sinkhorn loop. If the adapted version is used, the adapted tolerance value is used for the Sinkhorn loop instead.
k (float, optional) -- This parameter controls how much tighter the convergence threshold for the interior loop is than the convergence threshold for the outer loops. It must be within the interval \((0, 1)\).
gamma_scaler (float, optional) -- This parameter ensures that the quadratic cost function, including the self-amplification term, is positive definite.
n_stop (int, optional) -- Number of steps to continue running after the calculation converges in the relaxation procedure.
adapted (bool, optional) -- If True, the function uses a tighter convergence threshold for the interior loops.
beta_0 (float, optional) -- Initial inverse temperature.
m_guess (ndarray, optional) -- The initial guess of the doubly-stochastic matrix.
iteration_anneal (int, optional) -- Number of iterations for annealing loop.
kopt (bool, optional) -- If True, the k-opt heuristic local search will be performed.
kopt_k (int, optional) -- Defines the order of the k-opt heuristic local search. For example, kopt_k=3 leads to a local search over 3 items, while kopt_k=2 searches over only two items.
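To make the weighting convention concrete, here is a minimal snippet (the array values are invented for illustration) showing how the 1D weight vector acts as the diagonal matrix \(\mathbf{W}\) applied to \(\mathbf{A}\):
>>> import numpy as np
>>> a = np.array([[1., 2.], [3., 4.]])
>>> w = np.array([0.5, 2.0])  # one weight per row of a
>>> np.diag(w) @ a            # A -> W A, with W = diag(w)
array([[0.5, 1. ],
       [6. , 8. ]])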
- Returns:
res -- The Procrustes result represented as a utils.ProcrustesResult object.
- Return type:
ProcrustesResult
Notes
The quadratic assignment problem (QAP) plays a special but fundamental role in combinatorial optimization. It can be defined as the optimization problem of minimizing the cost of assigning a set of facilities to a set of locations, where the cost is a function of the flow between the facilities and the geographical distances between the locations.
The objective function (also called the loss function in machine learning) is defined as
\[\begin{split}E_{qap}(M, \mu, \nu) = - \frac{1}{2}\sum_{aibj}C_{ai;bj}M_{ai}M_{bj} + \sum_{a}\mu_a \left(\sum_i M_{ai} - 1\right) \\ + \sum_i \nu_i \left(\sum_a M_{ai} - 1\right) - \frac{\gamma}{2}\sum_{ai} M_{ai}^2 + \frac{1}{\beta} \sum_{ai} M_{ai}\log{M_{ai}}\end{split}\]where \(C_{ai;bj}\) is the benefit matrix and \(M\) is the desired \(N \times N\) permutation matrix. \(E\) is the energy function, which includes a self-amplification term with parameter \(\gamma\), two Lagrange multipliers \(\mu\) and \(\nu\) for the constrained optimization, and an entropy term \(M_{ai} \log{M_{ai}}\) that serves as a barrier function ensuring the positivity of \(M_{ai}\). The inverse temperature \(\beta\) is a deterministic annealing control parameter.
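As a rough illustration of the annealing scheme above, the following is a minimal sketch, not the implementation used by this module: the names sinkhorn and softassign_sketch are hypothetical, and the self-amplification and adaptive-tolerance details are omitted. At each inverse temperature \(\beta\), a softmax of the benefit gradient \(Q_{ai} = \sum_{bj} C_{ai;bj} M_{bj}\) is projected back onto the set of doubly stochastic matrices by alternating row/column (Sinkhorn) normalization, and \(\beta\) is increased by the factor beta_r until it reaches beta_f.

# Illustrative sketch only; not the procrustes library implementation.
import numpy as np

def sinkhorn(m, n_iter=200, tol=1e-3):
    """Alternate row/column normalization until m is (nearly) doubly stochastic."""
    for _ in range(n_iter):
        prev = m.copy()
        m = m / m.sum(axis=1, keepdims=True)  # normalize rows
        m = m / m.sum(axis=0, keepdims=True)  # normalize columns
        if np.abs(m - prev).max() < tol:
            break
    return m

def softassign_sketch(c, beta_0=1.0, beta_f=1e5, beta_r=1.1):
    """Simplified softassign loop; c[a, i, b, j] is the benefit tensor C_{ai;bj}."""
    n = c.shape[0]
    m = np.full((n, n), 1.0 / n)            # start from the uniform doubly stochastic matrix
    beta = beta_0
    while beta < beta_f:
        q = np.einsum('aibj,bj->ai', c, m)  # Q_ai = sum_bj C_{ai;bj} M_bj
        m = np.exp(beta * (q - q.max()))    # softmax step; shift by the max for stability
        m = sinkhorn(m)                     # project back onto doubly stochastic matrices
        beta *= beta_r                      # deterministic annealing schedule
    return m

As \(\beta\) grows, the doubly stochastic matrix \(M\) hardens toward a permutation matrix. For the two-sided permutation Procrustes problem, the benefit tensor can be taken (up to symmetrization and constant factors) as \(C_{ai;bj} = A_{ab}B_{ij}\).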
Examples
>>> import numpy as np
>>> from procrustes.softassign import softassign
>>> a = np.array([[4, 5, 3, 3], [5, 7, 3, 5],
...               [3, 3, 2, 2], [3, 5, 2, 5]])  # define a random matrix
>>> perm = np.array([[0., 0., 1., 0.], [1., 0., 0., 0.],
...                  [0., 0., 0., 1.], [0., 1., 0., 0.]])
>>> b = np.dot(perm.T, np.dot(a, perm))  # define b by permuting a
>>> res = softassign(a, b, unpad_col=False, unpad_row=False)
>>> res.t  # the permutation matrix
array([[0., 0., 1., 0.],
       [1., 0., 0., 0.],
       [0., 0., 0., 1.],
       [0., 1., 0., 0.]])
>>> res.error  # the error
0.0
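The k-opt refinement can be requested in the same call (a usage sketch reusing the matrices defined above): the heuristic performs a local search of order kopt_k on the discrete solution and may reduce the error at extra computational cost.
>>> res_kopt = softassign(a, b, unpad_col=False, unpad_row=False, kopt=True, kopt_k=3)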