\section{Introduction} IRAS 05078+1626 is a nearby Seyfert 1.5 galaxy. Before its identification as an infrared source, it was also known as CSV 6150 (Catalogue of Suspected Variables). Its position on the sky in Galactic coordinates is $l=186.1^{\circ}$, $b=-13.5^{\circ}$. The cosmological redshift of this galaxy is $z\approx 0.018$ \citep{takata94}. It had never been examined spectroscopically in X-rays prior to the observation discussed in this paper. However, it was detected in a number of X-ray surveys, such as the all-sky monitoring of the INTEGRAL IBIS/ISGRI instrument \citep{sazonov07}, the SWIFT BAT instrument \citep{ajello08,2008ApJ...681..113T}, and the RXTE Slew Survey (XSS) \citep{2004A&A...418..927R}. The X-ray spectroscopic properties of intermediate Seyferts are rather elusive: both obscured Type 1 and unobscured Type 2 active galactic nuclei (AGN) have been reported \citep[e.g.,][]{2006A&A...446..459C,2008MNRAS.390.1241B,2009ApJ...695..781B}. It has been suggested that intermediate Seyfert galaxies are seen at inclination angles between those of pure ``face-on'' Seyfert~1s and pure ``edge-on'' Seyfert~2s, as follows directly from the orientation-based AGN unification scenarios \citep{1985ApJ...297..621A,1993ARA&A..31..473A,1995PASP..107..803U}. For this reason, X-ray spectroscopy of type 1.5 Seyferts may provide clues to the nature and geometrical distribution of the optically thick gas surrounding the active nucleus, the latter being the fundamental ingredient behind the unification scenarios. IRAS~05078+1626 is included in the FERO project (``Finding Extreme Relativistic Objects''; \citeauthor{2008MmSAI..79..259L} \citeyear{2008MmSAI..79..259L}), which aims at establishing the fraction of relativistically broadened iron K$\alpha$ lines in the spectra of a complete flux-limited sample (2--10\,keV flux $>$ 1\,mCrab). This paper is organised as follows. 
In section~\ref{sec:observations}, we describe the new XMM-Newton observation and the corresponding data reduction. In sections~\ref{time}--\ref{res}, we present the results derived from the X-ray spectra in the $\sim$1--10\,keV energy range. The results are discussed in section~\ref{discussion}, and the conclusions are summarised in section~\ref{conclusions}. \section{Results from XMM-Newton observation in 2007} \subsection{Observations and data reduction} \label{sec:observations} The XMM-Newton observation of IRAS 05078+1626 was performed between 2007 August 21 UT 22:24:49 and 22 UT 15:35:43 (Obs. \#0502090501). The EPN and both MOS cameras (\citealt{pn} for PN; \citealt{mos} for MOS) were operating in the small window mode. The RGS cameras \citep{rgs} were operating in the spectroscopic mode. The spectra were reduced with the SAS software version 9.0.0 \citep{2004ASPC..314..759G}. Intervals of high particle background were removed by applying count rate thresholds to the field-of-view (EPIC, single events) and CCD\,\#9 (RGS) light curves: 0.35\,cts/s for the PN, 0.5\,cts/s for the MOS, and 0.15\,cts/s for the RGS. The exposure time after data screening is $\approx 56$\,ks for MOS, $\approx 40$\,ks for PN, and $\approx 58$\,ks for RGS. Patterns 0--12 were used for both MOS cameras, and patterns 0--4 (i.e.\ single and double events) for the PN camera. The source spectra were extracted from a circle of 40 arcsec radius centred on the source centroid, with the background taken from an offset position close to the source. The two MOS spectra and the related response files were merged into a single spectrum and response matrix. Finally, the PN and MOS spectra were rebinned so as to have at least 25 counts per bin and to oversample the energy resolution of the instrument by at most a factor of three, while the RGS spectra were left unbinned. 
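The grouping criterion of at least 25 counts per bin can be sketched as follows; this is an illustrative Python reimplementation of the counting criterion only (the oversampling limit and the actual SAS/grppha tooling are omitted):

```python
def group_min_counts(counts, min_counts=25):
    """Group consecutive spectral channels until each bin holds at
    least `min_counts` counts; a trailing underfull bin is merged
    into the previous one, mimicking the usual rebinning criterion."""
    bins, current = [], 0
    for c in counts:
        current += c
        if current >= min_counts:
            bins.append(current)
            current = 0
    if current > 0 and bins:
        bins[-1] += current  # merge leftover counts into the last bin
    elif current > 0:
        bins.append(current)
    return bins

# Example: low-count channels are merged so chi^2 fitting stays valid.
channels = [5, 10, 12, 30, 3, 4, 40, 1]
grouped = group_min_counts(channels)   # -> [27, 30, 48]
assert all(b >= 25 for b in grouped)
```

This illustrates why the unbinned RGS data need a different fit statistic: once bins hold $\gtrsim 25$ counts, Gaussian errors (and hence $\chi^2$) are a good approximation.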
Consequently, different statistics were used in fitting the spectra -- the traditional $\chi^2$ statistic to fit the PN and MOS spectra, and the C-statistic \citep{1976A&A....52..307C} for all fits including RGS data. For the spectral analysis, we used XSPEC \citep{1996ASPC..101...17A} version 12.5, which is part of the HEASOFT software package version~6.6.\footnote{http://heasarc.gsfc.nasa.gov} \subsection{Timing properties} \label{time} The PN light curve of the source is shown in Fig.~\ref{lc}. We divided the energy range into two bands and checked the light curve behaviour in each of them, as well as the hardness ratio, which we defined as the ratio of the counts at 2--10\,keV to the counts at 0.2--2\,keV. The energy ranges were chosen to sample different spectral components, as indicated by the energy where the continuum starts deviating from the power law model that describes the hard X-ray spectrum (see Sect.~\ref{msp}). The hardness ratio stays almost constant during the observation, suggesting that no significant spectral variations occur, although the source flux increased by around 20\%. \begin{figure}[thb] \begin{tabular}{c} \includegraphics[angle=0,width=0.48\textwidth]{13659fg1.eps} \end{tabular} \caption{EPIC-PN light curves in the 0.2--2\,keV band (upper panel) and 2--10\,keV band (middle). The hardness ratio HR is defined as the ratio of the counts at 2--10\,keV to the counts at 0.2--2\,keV and presented as a function of time in the lower panel. The bin time is 2048\,s.} \label{lc} \end{figure} \begin{figure}[tb!] \includegraphics[angle=0,width=0.48\textwidth]{13659fg2.eps} \caption{PN spectrum extracted from the first half of the observation with the lower source flux (blue) and from the second half of the observation with the higher source flux (black). 
The ratio of the two spectra is presented in the lower panel.} \label{pndata} \end{figure} To confirm this conclusion also for narrow spectral features, such as the iron emission line, we compared the PN spectra extracted during the first and the second halves of the observation (see Fig.~\ref{pndata}). The spectra correspond to the lower/higher source flux because the flux increased nearly monotonically during the observation. We calculated the ratio of the two data sets and fitted it with a simple linear function $f(E)=a \,E+b$ using the least-squares method. The fitting results are $a=-0.004 \pm 0.003$ and $b=1.12 \pm 0.01$, with the sum of the residuals $\chi^{2}=111$ for 190 degrees of freedom. When we set $a=0$, the fitting results are comparably good, with $b=1.11 \pm 0.01$ and $\chi^{2}=113$. The ratio of the spectra is plotted in the lower panel of Fig.~\ref{pndata}. Because no significant spectral differences are evident, we analyse the time-averaged spectra hereafter. \begin{figure}[tb!] \includegraphics[width=0.48\textwidth]{13659fg3.eps} \caption{Total PN spectrum with the background level, showing that the signal-to-noise ratio is very good up to high energies.} \label{pndata_back} \end{figure} \begin{figure}[thb] \begin{tabular}{c} \includegraphics[angle=0,width=0.48\textwidth]{13659fg4.eps} \end{tabular} \caption{XMM-Newton PN (black) and joint MOS (red) spectrum of IRAS 05078+1626 described by a simple power law model absorbed by Galactic neutral hydrogen in the line of sight with $n_{\rm H}=0.188 \times 10^{22}$\,cm$^{-2}$. The photon index of the power law is $\Gamma = 1.49$. The model reveals an apparent excess at $E=6.4$\,keV associated with the iron K$\alpha$ line, and some wiggle-like residuals at lower energies. 
A more detailed view of the data residuals in these parts of the spectrum is shown in Fig.~\ref{pm_resid}.} \label{powerlaw} \end{figure} \subsection{Mean spectral properties} \label{msp} The signal-to-noise ratio is very good up to high energies (Fig.~\ref{pndata_back}), so we fit the EPIC spectra in the full energy range where they are well calibrated (0.35--12\,keV). The X-ray continuum is described by a power law model at energies above 2\,keV, apart from the iron line present at $E=6.4$\,keV (Fig.~\ref{powerlaw}). The photon index of the power law is $\Gamma \simeq 1.49(1)$.\footnote{All presented errors represent the 90~$\%$ confidence level for a single interesting parameter, and the errors quoted in brackets are related to the last digit in the number.} In this and all subsequent models, we included absorption by Galactic gas along the line of sight with column density $n_{\rm H}=0.188 \times 10^{22}$\,cm$^{-2}$. This value is from the Leiden/Argentine/Bonn H\,I measurements \citep{2005A&A...440..775K}. We used the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{tbabs}}} model \citep{wilms00} to fit the absorption produced by the Galactic interstellar matter. \begin{figure*}[bht] \includegraphics[width=0.48\textwidth]{13659fg5.eps} \includegraphics[width=0.48\textwidth]{13659fg6.eps} \caption{Ratios of the simple power law model (the same as in Fig.~\ref{powerlaw}) to the data in different energy bands: \textbf{left:} at lower energies, \textbf{right:} in the iron line band, where the narrow K$\alpha$ line at the rest energy $E=6.4$\,keV is prominent (observed at $E=6.29$\,keV due to the cosmological redshift). Black crosses correspond to the PN data points, while the red crosses correspond to the MOS data points.} \label{pm_resid} \end{figure*} We applied the simple {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{tbabs}}}*{{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{powerlaw}}} model to both PN and MOS spectra. 
The $\chi^{2}$ value is 3557 with 528 degrees of freedom ($\chi^{2}/\nu=6.7$) in the $0.35-12.0$\,keV energy range. The spectra differ from the power law model not only around $E=6.4$\,keV but also in the lower energy band $0.35-2.0$\,keV (adding a Gaussian line model to fit the iron line improves the fit only to $\chi^{2}/\nu = 3384/524 \doteq 6.5$). Residuals against this model are shown in Fig.~\ref{pm_resid}. The residuals at lower energies are usually attributed to a warm absorber, i.e.\ absorption by totally or partially ionised matter; see, e.g., \citet{2003ApJ...599..933N,blustin05,2007ApJ...659.1022K} for more information about warm absorbers in Seyfert galaxies. The spectrum can also be affected by the so-called soft excess, which can be caused either by reflection from the ionised surface of the accretion disc \citep{2006MNRAS.365.1067C} or by partially ionised and Doppler-smeared absorption \citep{2006MNRAS.371L..16G}. The spectral residuals reveal certain discrepancies between the PN and MOS spectra (see Fig.~\ref{pm_resid}, where the data/model ratios of both spectra are shown for identical model parameters). The level of discrepancy is, however, comparable to the level of systematic uncertainties in the cross-calibration between the EPIC cameras \citep{2006ESASP.604..937S}. Nonetheless, we conservatively analyse the EPIC spectra separately. We use the same models for both spectra but allow the values of the model parameters to differ. The values of the photon index obtained with the simple power law model differ when the spectra are fitted independently, ignoring the energies below 2\,keV and between 5.5--7.5\,keV: the PN spectrum is slightly softer, with $\Gamma = 1.60(1)$, than the MOS spectrum, with $\Gamma = 1.54(1)$. 
Although the absolute values of these spectral index measurements do not have a direct physical meaning, given the simplicity of the model applied on a small energy band, the comparison between them is illustrative of the quality of the cross-calibration between the EPIC cameras. Differences of the order of $\Delta \Gamma \simeq$ 0.06 in the hard X-ray band are consistent with current systematic uncertainties \citep{2006ESASP.604..937S}. \subsection{RGS spectrum} \label{rgs} We jointly fit the unbinned first-order spectra of the two RGS cameras with the model parameter values tied, except for the overall normalisations. The continuum is well fitted by the simple power law model with the photon index $\Gamma = 1.57$. We further searched for narrow emission and absorption lines in the spectrum using several {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zgauss}}} models with the intrinsic width $\sigma$ set to zero. We calculated the 90\,\% confidence threshold for a blind search as $P=P_{0}/N_{\rm trial}$, where $N_{\rm trial}=N_{\rm bins}/3 = 3400/3$ and $P_{0}=0.1$. For the RGS data, $P \doteq 8.8\times 10^{-5}$, to which $\Delta C = 22.4$ corresponds assuming the Student probability distribution. The only line fulfilling this criterion, improving the fit by $\Delta C=31.7$, is an emission line at the energy $E=0.561 \pm 0.001$\,keV ($22.10 \pm 0.04$\,\AA) with the equivalent width $7^{+5}_{-3}$\,eV. We identify it with the forbidden line of the O~VII triplet ($E_{\rm LAB} = 0.561$\,keV). \subsection{EPIC spectrum} \label{res} The forbidden line of the O~VII triplet is a clear signature of a photoionised plasma. No significant features were detected that would be expected alongside the O~VII~(f) line if it were produced in a collisionally ionised plasma, such as the resonance line of the O~VII triplet or the O~VIII~Ly$\alpha$ line. 
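The blind-search significance threshold applied to the RGS line search above follows from a simple trials correction; as a quick check (all values from the text):

```python
# Trials-corrected significance threshold for a blind line search:
# P = P0 / N_trial with N_trial = N_bins / 3, as in the text.
p0 = 0.1            # single-trial 90% confidence level
n_bins = 3400       # unbinned RGS spectral bins
n_trial = n_bins / 3
p = p0 / n_trial    # -> ~8.8e-5, matching the quoted threshold

assert abs(p - 8.8e-5) < 1e-6
```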
This led us to try to explain the residuals against a power law model in the soft X-rays as the effect of intervening ionised absorbing gas. We used the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}}} model version 2.1ln7c \citep{2001ApJS..133..221K}\footnote{http://heasarc.gsfc.nasa.gov/docs/software/xstar/xstar.html} to calculate a grid of tabular models with the input parameters constrained from the preliminary data analysis with simple models whenever possible (photon index $\Gamma \approx 1.7$, density $\rho \leq 10^{14}$\,cm$^{-3}$, luminosity $L \leq 10^{44}$\,erg\,s$^{-1}$, column density $10^{19}$\,cm$^{-2} \leq n_{\rm H} \leq 10^{25}$\,cm$^{-2}$, and ionization parameter $-5 \leq \log \xi \leq 5$). \begin{figure}[tb!] \begin{center} \includegraphics[width=0.49\textwidth]{13659fg7.eps} \caption{Residuals of the PN data from the `baseline' model including both ionised reflection and absorption (\textbf{lower}), only reflection (\textbf{middle}), and only absorption (\textbf{upper}).} \label{del_chi} \end{center} \end{figure} \begin{table*}[tb!] 
\caption{Parameters of the `baseline' and `final' models.} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \rule{0cm}{0.5cm} Model & Model &\multicolumn{2}{c|}{`baseline' model} & \multicolumn{3}{c}{`final' (`double reflection') model} \\ component & parameter & PN & MOS & PN & MOS & PN+MOS+RGS \\ \hline \rule[-0.7em]{0pt}{2em} {\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zphabs}} & $n_{\rm H} [10^{22}$\,cm$^{-2}]$ & $0.104^{+0.005}_{-0.007}$ & $0.129^{+0.007}_{-0.007}$ & $0.102^{+0.009}_{-0.005}$ & $0.120^{+0.008}_{-0.005}$ & $0.106^{+0.004}_{-0.004}$ \\ \hline \rule[-0.7em]{0pt}{2em} {\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}} & $n_{\rm H} [10^{22}$\,cm$^{-2}]$ & $130^{+20}_{-10}$ & $170^{+20}_{-20}$ & $120^{+30}_{-30}$ & $150^{+70}_{-20}$ & $130^{+20}_{-20}$ \\ \rule[-0.7em]{0pt}{2em} & $\log\,\xi$ &$2.3^{+0.1}_{-0.1}$ & $2.4^{+0.1}_{-0.1}$ & $2.2^{+1.4}_{-0.6}$ & $2.5^{+1.0}_{-0.5}$ & $2.5^{+1.0}_{-0.4}$ \\ \rule[-0.7em]{0pt}{2em} & He/He$_{\rm Solar}$ - Ca/Ca$_{\rm Solar}$ & $1$ (f) & $1$ (f) & $1$ (f) & $1$ (f) & $1$ (f) \\ \rule[-0.7em]{0pt}{2em} & Fe$/$Fe$_{\rm Solar}$ - Ni$/$Ni$_{\rm Solar}$ & $0.2^{+0.1}_{-0.2}$ & $0.1^{+0.1}_{-0.1}$ & $1.2^{+0.3}_{-0.3}$ & $0.9^{+0.2}_{-0.2}$ & $1.1^{+0.2}_{-0.2}$ \\ \hline \rule[-0.7em]{0pt}{2em} {\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{powerlaw}} & $\Gamma$ & $1.81^{+0.03}_{-0.05}$ & $1.80^{+0.05}_{-0.05}$ & $1.75^{+0.10}_{-0.03}$ & $1.74^{+0.07}_{-0.03}$ & $1.76^{+0.04}_{-0.02}$ \\ \rule[-0.7em]{0pt}{2em} & normalisation & $\left(7 \pm 1\right) \times 10^{-4}$ & $(7\pm1) \times 10^{-4}$ & $(6\pm1) \times 10^{-4}$ & $(7\pm2) \times 10^{-4}$ & ..... 
\\ \hline \rule[-0.7em]{0pt}{2em} {\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}} & $\Gamma$ & $1.81$ (b) & $1.80$ (b) & $1.75$ (b) & $1.74$ (b) & $1.76$ (b) \\ \rule[-0.7em]{0pt}{2em} & $\log\,\xi$ & $3.0^{+0.2}_{-0.2}$ & $3.2^{+0.2}_{-0.2}$ & $3.0^{+0.2}_{-0.2}$ & $3^{+2}_{-3}$ & $3.0^{+0.1}_{-0.2}$ \\ \rule[-0.7em]{0pt}{2em} & Fe$/$Fe$_{\rm Solar}$ - Ni$/$Ni$_{\rm Solar}$ & $0.2$ (b) & $0.1$ (b) & $1.2$ (b) & $0.9$ (b) & $1.1$ (b) \\ \rule[-0.7em]{0pt}{2em} & normalisation & $(3 \pm 2) \times 10^{-9}$ & $(3 \pm 2) \times 10^{-9}$ & $(2\pm1) \times 10^{-9}$ & $(1\pm1) \times 10^{-9}$ & ..... \\ \hline \rule[-0.7em]{0pt}{2em} {\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}~2& normalisation & - & - & $(3\pm1) \times 10^{-7}$ & $(4\pm1) \times 10^{-7}$ & ..... \\ \hline \rule[-0.7em]{0pt}{2em} {\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zgauss}} & E [keV] & $6.40^{+0.01}_{-0.01}$ & $6.44^{+0.02}_{-0.02}$ & - & - & - \\ \rule[-0.7em]{0pt}{2em} & $\sigma$ [keV] & $0.06^{+0.03}_{-0.04} $ & $0.02^{+0.04}_{-0.02} $ & - & - & - \\ \rule[-0.7em]{0pt}{2em} & z & $0.018$ (f) & $0.018$ (f) & - & - & -\\ \rule[-0.7em]{0pt}{2em} & normalisation & $(3\pm1) \times 10^{-5}$ & $(3\pm1) \times 10^{-5}$ & - & - & - \\ \hline \hline \rule[-0.7em]{0pt}{2em} $\chi^2/\nu$ & & 246/264 & 405/243 & 256/266 & 404/244 & $C/\nu$ = 1551/1347 \\ \end{tabular} \end{center} { Note: The sign (f) after a value means that the value was fixed during the fitting procedure. The sign (b) means that the parameter value was bound to the value of the corresponding parameter of the previous model component. The sign ``-'' means that the model component is not included in the total model, while the dots in the right column indicate normalisations that differ among the individual spectra and are therefore not all listed in the table. 
} \label{model} \end{table*} A single-zone warm absorber component modifying the power law continuum dramatically improved the fit, from $\chi^2 / \nu = 1850/270 \doteq 6.9$ to $\chi^2 / \nu = 402/270 \doteq 1.5$ for the PN spectrum. The ionization parameter converged to a very low value, and we found that this almost neutral absorption can be successfully reproduced with {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zphabs}}}, which is a simpler model than {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}}}, so we preferred this option. The addition of another warm absorber zone improves the fit to $\chi^2 / \nu = 320/266 \doteq 1.2$ for the PN spectrum, and it requires the ionization parameter $\log \xi \cong 3.9$. We checked that adding further warm absorber zones does not improve the fit significantly. The residuals from this model (see Fig.~\ref{del_chi}, upper panel) reveal excess emission that remains at low energies, as well as around the iron K$\alpha$ line band. These features can come from reflection of the primary radiation off the surface of the accretion disc, so we added the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} model \citep{2005MNRAS.358..211R}, which calculates the ionised reflection from an optically thick atmosphere with constant density. We examined the significance of adding the reflection component to the complex model of the PN spectrum with the statistical F-test. The low value of the F-test probability ($5 \times 10^{-15}$) strongly favours this additional model component. The best fit is now $\chi^2/\nu = 246/264 \doteq 0.95$ for the PN spectrum. 
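The order of magnitude of this F-test probability can be reproduced from the $\chi^2$ values quoted above (320/266 before and 246/264 after adding the reflection component); for two added parameters the F survival function has a closed form, so no statistics library is needed. This is an approximate sketch: the exact value quoted in the text depends on the precise fit statistics used.

```python
# Approximate XSPEC-style F-test for adding a model component.
# chi2_0/nu_0: fit before adding the component; chi2_1/nu_1: after.
chi2_0, nu_0 = 320.0, 266
chi2_1, nu_1 = 246.0, 264

d1 = nu_0 - nu_1                      # number of added parameters (= 2)
f_stat = ((chi2_0 - chi2_1) / d1) / (chi2_1 / nu_1)   # ~40

# For d1 = 2 the F(2, nu_1) survival function is analytic:
# sf(F) = (1 + 2*F/nu_1) ** (-nu_1 / 2)
p_value = (1.0 + 2.0 * f_stat / nu_1) ** (-nu_1 / 2.0)

assert p_value < 1e-12   # same conclusion as the quoted 5e-15
```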
We hereafter call this model the `baseline' model; in the XSPEC notation: {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{tbabs}}} $\times$ {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zphabs}}}$_{\,\rm N}$ $\times$ {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}}} $\times$ ({{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{powerlaw}}} + {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} + {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zgauss}}}). The parameter values of the `baseline' model are presented in Table~\ref{model}. The quoted errors of the parameters represent a 90~$\%$ confidence level for a single interesting parameter. The measurement is certainly affected by a much larger systematic error, which, however, could only be properly quantified if we knew the ``right'' model. The value of the power law photon index increased to $\Gamma \approx 1.8$ compared to the simple model applied to the data in Sect.~\ref{msp}, because we included the additional local absorption in the model. The data residuals from the model are shown in the lower panel of Fig.~\ref{del_chi}. In the same figure, we also show the residuals from the best fit performed with the `baseline' model excluding the ionised absorption (middle panel) and excluding the ionised reflection component (upper panel). The narrow iron K$\alpha$ line with the rest energy $E=6.40 \pm 0.01$\,keV, the width $\sigma=0.06 \pm 0.03$\,keV, and the equivalent width $EW = 82 \pm 15$\,eV evidently represents cold reflection. This suggests that this spectral component originates in the outer part of the disc or in the torus. The cold reflection is also expected to contribute individual emission lines to the soft part of the spectrum. 
For this reason, we replaced the Gaussian profile in the `baseline' model with another {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} component (labelled {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}}~2 in Table~\ref{model}) with the same values of the photon index and abundances as the first {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} component. The ionization parameter was kept free during the fitting procedure, but it quickly converged to its lowest allowed value, $\xi = 30$ ($\log \xi = 1.477$). The advantage of the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} model compared to the other available reflection models is that it also includes the soft X-ray lines, with the disadvantage in this case that the ionization parameter cannot be set to zero. \begin{figure}[tb!] \begin{center} \includegraphics[width=0.48\textwidth]{13659fg8.eps} \caption{The `final' model. The total model is shown in black (solid line), the primary radiation is red (dashed), the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} components are blue (dotted) for cold reflection and magenta (dot-dashed) for ionised reflection. } \label{emo} \end{center} \end{figure} This `double reflection' model, in the XSPEC notation {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{tbabs}}} $\times$ {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zphabs}}}$_{\,\rm N}$ $\times$ {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}}} $\times$ ({{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{powerlaw}}} + {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} + {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}}), does not significantly improve the goodness of fit over the `baseline' model (with $\chi^2/\nu = 256/265 \doteq 0.96$ for the PN spectrum), but it represents a more self-consistent astrophysical picture. 
Therefore, we hereafter refer to the `double reflection' model as the `final' model. In contrast to the `baseline' model, it does not require subsolar iron abundances (see Table~\ref{model}, where the parameter values for this model are presented). The `final' model, with each component drawn separately, is shown in Fig.~\ref{emo}. All the plotted components are absorbed by a warm absorber surrounding the central accretion disc and by two kinds of cold absorber -- one due to Galactic interstellar matter and one due to a local absorber in the host galaxy. \begin{figure}[tb!] \begin{center} \includegraphics[width=0.48\textwidth]{13659fg9.eps} \caption{The contour plot of the ionization parameters of the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}} model, representing the ionised accretion disc, and of the {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}}} model, representing the warm absorber in the `final' model. The individual curves correspond to the 1\,$\sigma$, 2\,$\sigma$, and 3\,$\sigma$ levels. The position of the $\chi^{2}$ minimum found by the fitting procedure is marked by a cross. The corresponding $\chi^2$ values are given in the plot.} \label{cont_ion} \end{center} \end{figure} \begin{table}[tb!] 
\caption{Flux values of the `final' model and its individual components.} \begin{center} \begin{tabular}{c|cc|cc} \hline \hline \rule[-0.6em]{0pt}{1.7em} Model & \multicolumn{2}{c|}{Flux at $0.5-2\,$keV} & \multicolumn{2}{c}{Flux at $2-10\,$keV} \\ \rule[-0.6em]{0pt}{1.7em} component & \multicolumn{2}{c|}{$[10^{-12}$erg\,cm$^{-2}$\,s$^{-1}$]} & \multicolumn{2}{c}{$[10^{-12}$erg\,cm$^{-2}$\,s$^{-1}$]} \\ \rule[-0.6em]{0pt}{1.7em} & PN & MOS & PN & MOS\\ \hline \rule[-0.6em]{0pt}{1.7em} total model & $7.05^{+0.03}_{-0.03}$ & $7.00^{+0.03}_{-0.03}$ & $25.0^{+0.1}_{-0.2}$ & $25.4^{+0.1}_{-0.2}$ \\ \rule[-0.6em]{0pt}{1.7em} unabsorbed model & $16.6^{+0.2}_{-0.2}$ & $16.7^{+0.2}_{-0.2}$ & $25.7^{+0.2}_{-0.2}$ & $26.5^{+0.2}_{-0.2}$ \\ \rule[-0.6em]{0pt}{1.7em} {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{powerlaw}}} & $13.6^{+0.1}_{-0.2}$ & $14.3^{+0.1}_{-0.1}$ & $22.6^{+0.3}_{-0.2}$ & $23.8^{+0.1}_{-0.1}$ \\ \rule[-0.6em]{0pt}{1.7em} {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}}$_{\rm ion}$ & $1.9^{+0.2}_{-0.2}$ & $1.1^{+0.1}_{-0.1}$ & $1.6^{+0.1}_{-0.2}$ & $0.9^{+0.1}_{-0.1}$ \\ \rule[-0.6em]{0pt}{1.7em} {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}}$_{\rm cold}$ & $1.1^{+0.1}_{-0.1}$ & $1.3^{+0.1}_{-0.1}$ & $1.5^{+0.1}_{-0.1}$ & $1.8^{+0.2}_{-0.1}$ \\ \hline \rule[-0.6em]{0pt}{1.7em} $R_{\rm ion}$ $^*$ & $0.12^{+0.01}_{-0.01}$ & $0.07^{+0.01}_{-0.01}$ & $0.06^{+0.01}_{-0.01}$ & $0.03^{+0.01}_{-0.01}$\\ \rule[-0.6em]{0pt}{1.7em} $R_{\rm cold}$ $^*$ & $0.07^{+0.01}_{-0.01}$ & $0.08^{+0.01}_{-0.01}$ & $0.06^{+0.01}_{-0.01}$ & $0.07^{+0.01}_{-0.01}$\\ \end{tabular} \end{center} {$^*$ the ratios of the reflection component flux values to the flux value of the total unabsorbed model (sum of the primary and reflected radiation).} \label{rratio} \end{table} Some model parameters were not allowed to vary during the fitting procedure. 
First, the redshift of the ionised absorber was fixed to the source cosmological value, because leaving it free yields a negligible improvement in the quality of the fit. Second, we used the same iron abundances across all the components of the model. In the `final' model, the warm absorber ionization parameter is consistent with that of the ionised reflection component. This result is also presented in Fig.~\ref{cont_ion}, where the contour lines corresponding to the $1\sigma$, $2\sigma$, and $3\sigma$ levels of $\chi^{2}$ for the ionization parameters of the two model components are shown. Table~\ref{rratio} summarises the flux values of the individual components of the `final' model for both PN and MOS spectra in two energy bands, $0.5-2$\,keV and $2-10$\,keV, and also shows the fractions of the reflected radiation relative to the total emission (sum of the primary and reprocessed radiation). The reflection flux is shared almost equally between the cold and ionised reflection components, and the total reflection fraction is $R<0.2$. The absorption-corrected luminosity values of the source in the same energy bands are $L\,_{0.5-2\rm\,keV}=(1.21 \pm 0.02)\times10^{43}$\,erg\,s$^{-1}$ and $L\,_{2-10\rm\,keV}=(1.87 \pm 0.02)\times10^{43}$\,erg\,s$^{-1}$, respectively. \begin{figure}[tb!] \begin{center} \includegraphics[width=0.48\textwidth]{13659f10.eps} \caption{The joint fit of all spectra of the XMM-Newton instruments -- PN (black), MOS (red), RGS\,1 (magenta), and RGS\,2 (blue), together with the model residuals.} \label{joint} \end{center} \end{figure} We also used the `final' model for a joint fit of the spectra from all the XMM-Newton instruments (PN, MOS, and both RGS) together. The parameter values were tied among all the spectra; only the normalisation factors were allowed to vary. The goodness of the joint fit is given in the C-statistic because the RGS data are unbinned (and each individual bin contains only a few counts). The result is $C = 1551$ for $\nu = 1347$ degrees of freedom. 
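For reference, the C-statistic compares Poisson-distributed counts $d_i$ with model predictions $m_i$ bin by bin; a minimal sketch of the Cash-type form (this follows the shape of XSPEC's cstat, but is an illustrative reimplementation, not the XSPEC code itself):

```python
import math

def cstat(data, model):
    """Cash-type C-statistic for Poisson-distributed counts:
    C = 2 * sum(m - d + d*ln(d/m)), with the d = 0 bins
    contributing 2*m. `data` are observed counts, `model`
    the predicted counts per bin."""
    c = 0.0
    for d, m in zip(data, model):
        if d > 0:
            c += m - d + d * (math.log(d) - math.log(m))
        else:
            c += m
    return 2.0 * c

# A perfect model gives C = 0; any mismatch increases C.
assert cstat([3, 0, 7], [3.0, 0.0, 7.0]) == 0.0
```

Unlike $\chi^2$, this statistic needs no error bars per bin, which is why it suits the unbinned RGS spectra with only a few counts per channel.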
All the spectra, together with the residuals, are shown in Fig.~\ref{joint}, and the corresponding parameters are listed in the last column of Table~\ref{model}. \section{Discussion} \label{discussion} \subsection{Constraints on the location of the absorbers} \label{geometry} In this section, we discuss the possible location of the system of absorbers in the `final' model. Photoelectric absorption is almost invariably observed in Type~2 Seyferts \citep{1991ApJ...366...88A,1997ApJS..113...23T,2002ApJ...571..234R} and generally attributed to the optically thick matter responsible for the orientation-dependent classification in AGN unification scenarios \citep{1985ApJ...297..621A,1993ARA&A..31..473A}. Because the IRAS 05078+1626 galaxy is probably viewed under an intermediate inclination between unobscured Seyfert 1s and obscured Seyfert 2s, the torus rim may also intercept the line of sight to the AGN and absorb the radiation coming from the centre. The cold absorption can, however, also be associated with the interstellar matter of the host galaxy \citep{2006A&A...449..551L}. Both reflection components lie inside the ionised absorber in the `final' model. The geometrical interpretation is that the cold reflection occurs on the outer parts of the disc or on the inner wall of the torus. Reflection off the nearer part of the torus is heavily absorbed by the torus itself, so only radiation reflected off the farther peripheral part of the torus can reach the observer after passing through the warm absorber. 
However, an alternative scenario, in which the cold reflection is unaffected by the warm absorber, i.e., {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{tbabs}}} $\times$ {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{zphabs}}} $\times$ [{{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}}$_{\rm cold}$ + {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{xstar}}} $\times$ ({{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{powerlaw}}} + {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{reflion}}}$_{\,\rm disc}$)], is also acceptable, with $\chi^{2}/\nu = 265/265$. The lack of constraints on the variability of the warm absorber features \citep{2007ApJ...659.1022K}, caused by the moderate dynamical range of the primary continuum as well as by statistical limitations of our spectra, prevents us from precisely constraining the location of the warm absorber. \begin{figure}[tb!] \begin{center} \includegraphics[width=0.48\textwidth]{13659f11.eps} \caption{The best-fit values of the $\chi^2$ statistic for the inner disc radius parameter, obtained by gradually stepping it from the horizon radius to the outer radius of the disc ($400 R_{\rm g}$). The dashed line is the 90\% confidence level for one interesting parameter. } \label{kyconvlevel} \end{center} \end{figure} \subsection{Constraints on the location of the ionised reflector} \label{disc} The ionised reflection might occur either at the inner wall of a warm absorber cone or on the accretion disc. Even in the latter case, the reflection cannot occur arbitrarily close to the black hole. In this section, we investigate the constraints on the accretion disc location and structure that can be drawn from the lack of significant relativistic blurring of the disc reflection component. We convolved the ionised reflection component with the fully relativistic {{\fontfamily{ptm}\fontseries{m}\fontshape{sc}\selectfont{kyconv}}} model \citep{2004ApJS..153..205D}. 
Two assumptions about the disc emissivity were considered. First, the radial part of the intensity decreases as a power of the disc radius ($I \propto r^{-q}$), where the value of $q$ was allowed to vary between $2$ and $3.5$. Second, the angular dependence was assumed to be isotropic, which seems to be an appropriate approximation for an X-ray irradiated accretion disc \citep{2009A&A...507....1S}. We examined the confidence levels of the best-fit values of the disc's inner radius by stepping this parameter over the whole range of its possible values -- from the horizon to the outer disc radius, which we set to 400 gravitational radii ($R_{\rm g} \equiv GM/c^{2}$). The results are shown in Fig.~\ref{kyconvlevel}. At the 90~\% confidence level, the accretion disc does not extend closer to the black hole than $60 R_{\rm g}$. The ``relativistic blurring method'' would be less suitable for finding the imprints of the innermost parts of the accretion disc if the disc were very highly ionised ($\log \xi \approx 4$) and the narrow reflection features were absent \citep{2005MNRAS.358..211R}. However, the ionization parameter of the reflection component is not that high in the `final' model, and the dominant feature is the intermediately ionised iron line ($E \approx 6.7$\,keV). If we assumed a stratified disc with the ionization state decreasing with radius from the centre, the hydrogen-like iron line would also be expected to appear in the spectrum (as an intermediate stage between the over-ionised and mildly ionised contributions). Because it is not detected in the data, accretion disc truncation provides a more reasonable explanation of the missing signatures of relativistic blurring. \subsection{Mass accretion rate} Disc truncation is expected in low-luminosity AGN, where the inner accretion flow is advection-dominated \citep[and references therein]{1994ApJ...428L..13N, 1997ApJ...489..865E,2008NewAR..51..733N}. 
The transition from the outer standard accretion disc may occur, e.g., via the disc evaporation mechanism \citep{2000A&A...361..175M, 2009ApJ...707..233L}. Observational evidence of a truncated accretion disc in low-luminosity AGN was reported, e.g., by \citet{1996ApJ...462..142L, 1999ApJ...525L..89Q}. However, its presence is also suggested by some observations of Seyfert galaxies \citep{2000ApJ...537L.103L, 2000ApJ...536..213D, 2003ApJ...586...97C, 2009ApJ...705..496M} and even of a quasar \citep{2005A&A...435..857M} whose luminosity is estimated at half of the Eddington value. Generally, the lower the luminosity, $L/L_{\rm Edd}$, the larger the transition radius is expected to be \citep[see][and references therein]{2004ApJ...612..724Y}. We therefore investigated whether the disc truncation hypothesis is consistent with the luminosity of IRAS~05078+1626. To express these quantities in Eddington units, we first estimated the mass of the black hole. IRAS~05078+1626 belongs to a sample of infrared-selected Seyfert 1.5 galaxies observed by a 2.16~m optical telescope \citep{2006ApJ...638..106W}, for which, among other quantities, the velocity dispersion of the O~III emission line was measured. The correlation between the O~III line width and the mass of the active galactic nucleus was discussed in \citet{2000ApJ...544L..91N} and \citet{2003ApJ...585..647B}. The measured value, $\sigma_{\rm O\,III} \approx 130$\,km\,s$^{-1}$, corresponds to a mass $M \approx 4\times10^{7}\,M_{\odot}$ according to the correlation plot in \citet{2003ApJ...585..647B}. The scatter of the correlation is rather large, with a reported uncertainty of up to a factor of 5 in the black hole mass determination, so the value provides only an order-of-magnitude estimate. The corresponding Eddington luminosity is $L_{\rm Edd} \doteq 1.3 \times 10^{38} M/M_{\odot} $\,erg\,s$^{-1} \approx 5 \times 10^{45}$\,erg\,s$^{-1}$ for the given value of the mass. 
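Since the mass enters $L_{\rm Edd}$ linearly, the quoted factor-of-5 uncertainty can be propagated directly (assuming, for illustration, that it applies symmetrically in both directions):
\begin{equation}
M \approx (0.8\mbox{--}20) \times 10^{7}\,M_{\odot} \;\;\Rightarrow\;\; L_{\rm Edd} \approx (1\mbox{--}26) \times 10^{45}\,{\rm erg\,s^{-1}} \,,
\end{equation}
so any Eddington ratio based on this mass estimate is uncertain by the same factor.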
We used the luminosity-dependent corrections of \citet{2004MNRAS.351..169M} to estimate the bolometric luminosity of IRAS~05078+1626 from its X-ray luminosity, obtaining $L \approx 5 \times 10^{44}$\,erg\,s$^{-1} \approx 10^{-1} L_{\rm Edd}$. Correspondingly, the mass accretion rate, $\dot{M} = L/c^2$, is sub-Eddington, with $\dot{M}/\dot{M}_{\rm Edd} = L/L_{\rm Edd} \approx 0.1$ (any radiative efficiency factor cancels in the ratio as long as the same convention is used for both rates). This value is typical of less luminous Seyfert galaxies \citep[see for example][]{2009A&A...495..421B}, and is consistent with the disc truncation hypothesis. \section{Conclusions} \label{conclusions} The X-ray continuum spectrum of the Seyfert galaxy IRAS 05078+1626 is dominated by a power law with a standard value of the photon index ($\Gamma \cong 1.75$ in the `final' model). The residuals from the power-law continuum can be interpreted in terms of a warm absorber surrounding the accretion disc and reflection of the primary radiation both from ionised matter and from the cold torus. The outgoing radiation is absorbed by cold matter ($n_{\rm H} \approx 1 \times 10^{21}$\,cm$^{-2}$), which can either be located on the inner side of the torus or be due to gas in the host galaxy. The type of the galaxy determined from previous infrared and optical studies is Seyfert 1.5, suggesting that the active nucleus could be seen at large inclination, consistent with either interpretation or even with a combination of both. The ionised warm absorber occurs in the central part of the AGN. Its column density was found to be $n_{\rm H} \geq 1 \times 10^{24}$\,cm$^{-2}$, which is rather high compared to the warm absorbers detected in other Seyfert galaxies \citep{blustin05}. This may be because we are looking through a longer optical path of a conical non-relativistic outflow due to the high inclination of the system. 
The ionization parameter of the warm absorber is $\log \xi_{\rm WA} = 2.5 \pm 1.0$, comparable to the value obtained for the ionised reflection, $\log \xi_{\rm reflection} = 3.0 \pm 0.2$, suggesting a link between the two. If the ionised reflection is associated with the warm absorber (e.g. the inner walls of a conical outflow), the lack of spectral features associated with the accretion disc is a natural consequence. If, instead, the ionised reflection occurs at the accretion disc, the disc cannot extend down to the marginally stable orbit: the lack of significant relativistic blurring of this model component requires the disc to be truncated (inner disc radius $R_{\rm in} \geq 60\,R_{\rm g}$). This interpretation is also supported by the low ratio of the reflected radiation to the primary one, $R < 0.2$, and by the relatively low mass-accretion rate, $\dot{M} \approx 0.1\,\dot{M}_{\rm Edd}$, determined from the source luminosity. \begin{acknowledgements} The authors are grateful for useful comments and suggestions by Michal Dov\v{c}iak, Ren\'{e}~W.~Goosmann, and the participants of the 3rd `FERO' (Finding Extreme Relativistic Objects) workshop, held in September 2009 in Rome. JS acknowledges the support from the doctoral student programme of the Czech Science Foundation, ref.\ 205/09/H033, and the research grant of the Charles University in Prague, ref.\ 33308. VK appreciates the continued support from research grants of the Czech Science Foundation, ref.\ 205/07/0052, and ESA Plan for European Cooperating States, project No.\ 98040. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Flux tube formation between a quark $q$ and an antiquark $\bar{q}$ provides a possible mechanism for quark confinement and has been verified in simulations of lattice QCD (e.g.~\cite{Bali:1994de}). While the microscopic origin of flux tube formation is still debated, it is common consensus that the long distance properties, in particular the spectrum, are well described by an effective string theory (EST). The EST is a two-dimensional effective field theory for the Goldstone bosons associated with the breaking of translational symmetry, the quantised transverse oscillation modes. While the basic properties of the theory have been known for a long time~\cite{Goto:1971ce,Goddard:1973qh,Nambu:1978bd,Luscher:1980fr, Polyakov:1980ca}, a number of features have only been elucidated in the past decade~\cite{Luscher:2004ib,Aharony:2009gg,Aharony:2010db, Aharony:2010cx,Aharony:2011ga,Billo:2012da,Dubovsky:2012sh,Dubovsky:2012wk, Aharony:2013ipa,Caselle:2013dra,Dubovsky:2014fma,Caselle:2014eka}. In particular, it has been clarified~\cite{Dubovsky:2012wk} that the leading order spectrum agrees with the light cone quantisation~\cite{Arvis:1983fp} of the Nambu-Goto string theory (LC spectrum, eq.~\refc{eq:LC-spectrum}), and the corrections to the LC spectrum have been computed up to $O(R^{-5})$~\cite{Aharony:2010db,Aharony:2011ga,Caselle:2013dra}, where $R$ is the distance between quark and antiquark. For more details and a recent review see~\cite{Brandt:2016xsp}. The predictions of the EST can be compared to lattice results for the excitation spectrum of the flux tube, and good agreement has typically been found, extending down to $q\bar{q}$ separations where the EST is not expected to be reliable (for a compilation of results see~\cite{Brandt:2016xsp}). In particular, the EST predicts a boundary correction of $O(R^{-4})$ with a free and, presumably, non-universal coefficient $\bt$. 
This coefficient was first extracted from the excited states in 3d SU(2)~\cite{Brandt:2010bw} and later from the groundstate in 3d $Z_2$~\cite{Billo:2012da} and SU($N=2,3$)~\cite{Brandt:2013eua} gauge theory for a number of different lattice spacings. Indeed, strong evidence for a non-universal behaviour has been found. The good agreement down to small values of $R$ is surprising, given that the flux tube profile only agrees with the EST starting from distances of about 1.0~fm, at least~\cite{Bali:1994de,Gliozzi:2010zv,Cardoso:2013lla,Caselle:2016mqu}. A possible explanation could be that the flux tube consists of a solid, vortex-like core whose fluctuations are governed by the EST. This would explain the good agreement of the profile with the associated exponential decay, e.g.~\cite{Cea:2012qw,Cea:2014uja,Caselle:2016mqu}, rather than with the Gaussian profile predicted by the EST~\cite{Luscher:1980iy,Caselle:1995fh}, while leaving the spectrum untouched at leading order.~\footnote{This can be seen from the effective EST action which derives from vortex solutions of the underlying microscopic theory~\cite{Forster:1974ga, Gervais:1974db,Lee:1993ty,Orland:1994qt,Sato:1994vz,Akhmedov:1995mw}.} In fact, it has been found~\cite{Cardoso:2013lla} that the profile can be analysed using a convolution of the EST and vortex profiles, which would indicate that the flux tube shares features of both. One can expect that a particular class of corrections to the EST might show up as massive modes on the worldsheet. Indeed, candidates for states receiving contributions from massive modes have been seen in 4d $\text{SU}(N)$ gauge theories~\cite{Morningstar:1998da,Juge:2002br,Juge:2004xr,Athenodorou:2010cs}, and the results for closed strings~\cite{Athenodorou:2010cs} are in good agreement with the energy levels obtained by including a massive pseudoscalar particle on the worldsheet known as the worldsheet axion~\cite{Dubovsky:2013gi,Dubovsky:2014fma}. 
Recently, the behaviour of the mass of the worldsheet axion for $N\to\infty$ has been investigated~\cite{Athenodorou:2017cmw} and a finite result has been found in this limit. It is interesting to note that possible massive modes are only seen in 4d, which might be due to the fact that the topological coupling term associated with the worldsheet axion only exists for $d>3$ (see section~\ref{sec:beyondEST}). Consequently, any massive modes in 3d can only couple via terms contributing at higher orders in the derivative expansion or via vertices including more than one massive field. In this case results at intermediate distances are potentially less affected and the massive modes appear as quasi-free modes on the worldsheet. Apart from the contribution of massive modes, there could also be contributions from rigidity. The associated term satisfies the symmetry constraints of the theory and naturally turns up in cases where the EST can be derived from a vortex solution of the underlying field theory~\cite{Forster:1974ga, Gervais:1974db,Lee:1993ty,Orland:1994qt,Sato:1994vz,Akhmedov:1995mw}. While the contributions due to this term start at higher orders in the derivative expansion of the EST (and thus should be negligible up to $O(R^{-7})$, at least), it has been found, using zeta-function regularisation, that its non-perturbative (in the sense of the $1/R$ expansion) contribution cannot be expanded systematically in $R^{-1}$. It thus leads to contaminations at all orders, which are, however, exponentially suppressed for large values of $R$~\cite{Polyakov:1986cs,German:1989vk,Ambjorn:2014rwa,Caselle:2014eka}. In fact, the leading order contribution is formally equivalent to the contribution of a free massive particle, as discussed in section~\ref{sec:beyondEST}. In this series of papers we will investigate in detail the agreement of the spectrum of the open flux tube in $\text{SU}(N)$ gauge theories with the predictions from the EST. 
Particular emphasis lies on the reliable extraction of the boundary coefficient $\bt${} from the static potential and its continuum extrapolation. In this context it is also important to control the higher order corrections. If the remaining corrections are regular terms of the derivative expansion, the next term would be of $O(R^{-\gamma})$ with $\gamma\geq6$; such corrections are comparatively easy to disentangle from the boundary term. If, on the other hand, there are contributions from the rigidity term, or equivalently from massive modes, these can contaminate the extraction of $\bt$~\cite{German:1989vk,Ambjorn:2014rwa,Caselle:2014eka}. We will see that we cannot exclude the latter possibility, but we can show that in a certain range of distances the exponent of the correction term with respect to the LC spectrum is indeed 4, and that the correction is not well described by the additional terms alone. Since we cannot exclude a contamination of $\bt${}, we carry out two independent analyses, excluding and including the rigidity/massive mode contributions. Eventually we are interested in the large-$N$ limit of the non-universal contributions. In particular, there is hope that some of the non-universal parameters can be used to constrain the possible holographic backgrounds and eventually help to find the holographic dual of large-$N$ Yang-Mills theory. In this first paper of the series we introduce the methods that we use to analyse the potential in $\text{SU}(N)$ gauge theory. However, we focus on results for $N=2$ and 3 in three dimensions and leave the extraction of results for $N>3$ and the extrapolation $N\to\infty$ to the next paper in the series. In follow-up papers we plan to consider excited states, extending the studies of~\cite{Brandt:2009tc,Brandt:2010bw}, and the four-dimensional theory. 
The paper is organised as follows: In the next section we briefly discuss the relevant predictions and properties of the EST and its limitations. The subsequent section~\ref{sec:groundstate} is devoted to the basic analysis of the lattice data for the static potential. In particular, we perform a reliable extraction of the Sommer parameter and the string tension and show that the leading correction to the LC spectrum is indeed of $O(R^{-4})$. In sections~\ref{sec:b2-ana1} and~\ref{sec:rigidity} we extract $\bt${} excluding/including massive mode contributions, respectively. Section~\ref{sec:results} provides a summary and discussion of the main results of the two analyses, and we conclude in section~\ref{sec:concl}. The details of the lattice simulations are presented in appendix~\ref{app:sim-setup}, and we discuss the control of systematic uncertainties in appendix~\ref{app:sys-effects}. \section{Effective string theory predictions} We start by discussing the properties of the EST, focusing on the aspects relevant for the present study. For more detailed reviews we refer to~\cite{Aharony:2013ipa,Brandt:2016xsp}. \subsection{Effective string theory and its limitations} \label{sec:est-setup} The EST is the low energy effective field theory (EFT) describing a single, stable, non-interacting flux tube. Here we will focus on the case of a flux tube stretched between two sources in the fundamental representation (quark and antiquark), but one can also study the hypothetical case of a closed flux tube wrapping around a compactified dimension. The presence of the flux tube breaks translational invariance in the transverse directions, leading to $d-2$ Goldstone bosons (GBs), the quantised transverse oscillation modes. For brevity we will consider the string of length $R$ to reside in the $(x^0,x^1)$-plane ($x^0$ being the temporal direction) with endpoints located at $x^1=0$ and $R$. 
Then one can parametrise the fluctuation field by $X^\mu(x^\alpha)=(x^\alpha,\,X^i(x^\alpha))$, where $\alpha=0,1$, $i=2,\ldots,d-1$ and the $X^i$ are the GBs of the EST. The action up to sixth order in derivatives has been derived in~\cite{Aharony:2009gg,Aharony:2010cx,Billo:2012da} together with the constraints for the coefficients (the result for the coefficient $c_4$ has been clarified in~\cite{Dubovsky:2012wk}). Here we write down the action in the static gauge (instead of the diffeomorphism invariant form discussed in~\cite{Aharony:2013ipa,Brandt:2016xsp}) and in Minkowski space for simplicity. Since we are considering an open string the action consists of two parts, \begin{equation} \label{eq:full_action} S_{\rm EST} = S_{\rm c} + S_{\rm b} \,, \end{equation} where $S_{\rm c}$ is the bulk action, \begin{equation} \label{eq:bulk_action} \begin{array}{rcl} \displaystyle S_{\rm c} & \displaystyle = \int_\mathcal{M} d^2 x & \displaystyle \big[ -\sigma -\frac{\sigma}{2}\,\partial_\alpha X^i \partial^\alpha X^i + c_2 (\partial_\alpha X^i \partial^\alpha X^i)^2 + c_3 (\partial_\alpha X^i \partial_\beta X^i)^2 \vspace*{2mm} \\ & & \displaystyle \:\:\: + c_4 (\partial_\alpha\partial_\beta X^i \partial^\alpha \partial^\beta X^i) (\partial_\gamma X^j\partial^\gamma X^j) + c_5 (\partial_\alpha X^i\partial^\alpha X^i)^3 \vspace*{2mm} \\ & & \displaystyle \:\:\: +c_6(\partial_\alpha X^i\partial^\alpha X^i) (\partial_\beta X^j \partial_\gamma X^j)^2 + \dots \big] , \end{array} \end{equation} where the integral runs over the worldsheet $\mathcal{M}$ of the flux tube, and $S_{\rm b}$ is the boundary action \begin{equation} \label{eq:bound_action} S_{\rm b} = \int_{\partial \mathcal{M}} dx^0 \:\big[ \mu + b_1 \partial_1 X^i \partial_1 X^i\! + b_2 \partial_0\partial_1 X^i \partial_0\partial_1 X^i + b_3(\partial_1 X^i\partial_1 X^i)^2 + \dots \big] , \end{equation} where the integral runs over the worldsheet boundary $\partial \mathcal{M}$. 
The coefficients in eqs.~\refc{eq:bulk_action} and~\refc{eq:bound_action} are constrained by the residual symmetries of the effective field theory, which for the EST is Lorentz symmetry. The results for the coefficients are~\cite{Aharony:2009gg,Aharony:2010cx,Billo:2012da,Dubovsky:2012wk} \begin{equation} \label{eq:coeff_constr} c_2=-\frac{\sigma}{8}\,,\quad c_3=\frac{\sigma}{4}\,,\quad c_4=0\,,\quad c_5=\frac{\sigma}{16}\,,\quad c_6=-\frac{\sigma}{8}\,, \quad b_1=0\quad\text{and}\quad b_3=0 \,, \end{equation} while $b_2$ remains unconstrained and thus represents the first subleading low energy constant not fixed in terms of $\sigma$. In the next section we discuss the spectrum which follows from the effective action. Note that $b_2$ is a dimensionful quantity, so that, for the purposes of lattice simulations and the associated continuum limit, it is convenient to introduce the dimensionless coupling \begin{equation} \label{eq:bt-coupling} \bar{b}_2=\sqrt{\sigma^3}b_2 \,, \end{equation} which we will use from now on. On top of the terms in eq.~\refc{eq:full_action}, there is one more class of terms allowed within the EST, constructed from powers of the extrinsic curvature~\cite{Aharony:2013ipa,Caselle:2014eka}. The leading order term is known as the rigidity term and was first proposed by Polyakov~\cite{Polyakov:1996nc}. It also turns up in those cases where vortex solutions could be constructed from the underlying microscopic theory~\cite{Forster:1974ga,Gervais:1974db,Lee:1993ty,Orland:1994qt,Sato:1994vz, Akhmedov:1995mw}. In this sense, the presence of rigidity terms can be seen as evidence of vortex contributions to the EST energies. Rigidity contributions to the energy levels start at higher orders in the perturbative expansion, but it has been found that they lead to non-perturbative contributions to the energy levels in zeta-function regularisation~\cite{Braaten:1986bz,German:1989tr,Caselle:2014eka}. 
However, as pointed out in~\cite{Dubovsky:2012sh}, this regularisation scheme does not preserve the non-linear Lorentz symmetry of the EST, leading to possible counterterms that need to be taken into account. It is thus mandatory to crosscheck this result using a regularisation scheme which preserves Lorentz invariance. In this paper we will use the result as presented in~\cite{Caselle:2014eka}, which we summarise and discuss in more detail in section~\ref{sec:beyondEST}. The EST is expected to break down at the point where the energy of the fluctuation modes reaches the QCD scale $\Lambda_{\rm QCD}\approx \sqrt{\sigma}$. The energy of the modes is of the order of $1/R$, meaning that the EST is expected to break down for \begin{equation} \label{eq:break-scale} \sqrt{\sigma}R\lesssim 1 \,. \end{equation} This is consistent with the derivative expansion in eqs.~\refc{eq:bulk_action} and~\refc{eq:bound_action}, since each derivative contributes one unit of momentum of the degrees of freedom, $\partial\sim p\sim1/R$, so that the EST corresponds to an expansion in $(\sqrt{\sigma}R)^{-1}$, which is expected to break down when eq.~\refc{eq:break-scale} is fulfilled. On top of this, there are several processes that are allowed in the microscopic theory but are not accounted for within the EST. Among them is the emission of glueballs. If the glueball is on-shell, the emitting state has to be an excited state with $E\geq E'+m_{G}$, where $E'$ is the energy of the state it decays to and $m_G$ is the mass of the lightest glueball with the appropriate quantum numbers. Consequently, such a process can only appear for excited states with $E_n>E_0+m_{G}$, where $E_0$ is the groundstate energy. On top of on-shell emission, there can also be virtual glueball exchange. This process is always present (at finite $N$) and leads to corrections of unknown size. 
These meson-glueball interactions and mixings are suppressed with increasing $N$ and vanish in the $N=\infty$ limit (e.g.~\cite{Lucini:2012gg}). Furthermore, within the EST the string is not allowed to develop knots or to intersect with itself, which would correspond to handles on the worldsheet. In addition, the flux tube is likely to have an intrinsic width, allowing for internal excitations, which contribute in the form of massive excitations on the worldsheet and have to be added to the EST. We will discuss these additional modes and their possible contributions in section~\ref{sec:beyondEST}. In general, the EST only describes a single, stable, non-interacting string. We would like to emphasise that this is not the case for the flux tube in QCD, which can break due to the creation of a light $q\bar{q}$ pair from the vacuum. Furthermore, the quarks are taken to be static, i.e. one takes the limit of infinitely heavy quarks. The effect of finite quark masses can be included using non-relativistic effective field theories of QCD~\cite{Brambilla:2000gk,Pineda:2000sz,Brambilla:2003mu,Brambilla:2004jw}, for instance (see~\cite{Brambilla:2014eaa} for a recent evaluation of the $1/m^2$ corrections to the potential using the EST predictions). In this paper, however, we will focus on pure gauge theory, where effects owing to finite quark masses are absent and the EST holds up to the limitations discussed above. We close this section by noting that one possibility for extending the standard EST to include the effect of possible internal excitations or external influences on the flux tube is to view them as `defects' on the worldsheet~\cite{Andreev:2012mc,Andreev:2012hw}. Here a defect is a point on the string where the derivatives of the coordinates are discontinuous. The associated effect has been worked out for the potential and found to be in reasonable agreement with observed anomalous states in four dimensions~\cite{Andreev:2012hw}. 
\subsection{Spectrum of the open string and boundary corrections} \label{sec:est-pred} We will now discuss the spectrum that follows from the action~\refc{eq:full_action}. The constraints in eq.~\refc{eq:coeff_constr} set the coefficients $c_2$ to $c_6$ to the values they take in the NG theory. Consequently, one can expect the leading order spectrum to be equivalent to the NG one, up to the corrections due to the boundary term proportional to $\bt$. This expectation is corroborated by the formulation of the EST in diffeomorphism invariant form~\cite{Aharony:2013ipa}, where the full NG action appears as the leading order term in the action. The main question thus concerns the spectrum following from the NG action in $d$ dimensions. For $d=26$ and $d=3$ the exact spectrum is known from light cone (LC) quantisation and takes the form~\cite{Arvis:1983fp} \begin{equation} \label{eq:LC-spectrum} E^{\rm LC}_{n}(R) = \sigma \: R \: \sqrt{ 1 + \frac{2\pi}{\sigma\:R^{2}} \: \left( n - \frac{1}{24} \: ( d - 2 ) \right) } \;. \end{equation} Note that we only consider open strings, so that $n$ denotes the number of phonon excitations and as such labels the excitation level. The first few excited states in terms of phonon creation operators $\alpha^i_{-m}$ (for phonon momentum $m$) are listed in table~\ref{tab:3d-string-states}. However, the LC quantisation breaks Lorentz invariance explicitly, so that, away from $d=26$ and $d=3$, counterterms are necessary for a consistent quantisation. The first counterterm is proportional to the $c_4$ term in eq.~\refc{eq:bulk_action} with a coefficient~\cite{Dubovsky:2012sh,Dubovsky:2014fma} \begin{equation} \label{eq:c4-counter} \tilde{c}_4 = - \frac{d-26}{192\pi} \,. \end{equation} Thus, the first correction to the LC spectrum from the bulk action appears at order $R^{-5}$, and, since the NG action contains only one free parameter, this correction is universal. 
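For orientation, expanding eq.~\refc{eq:LC-spectrum} for $\sqrt{\sigma}R\gg1$ gives
\begin{equation}
E^{\rm LC}_{n}(R) = \sigma R + \frac{\pi}{R} \left( n - \frac{d-2}{24} \right) - \frac{\pi^2}{2\sigma R^3} \left( n - \frac{d-2}{24} \right)^{2} + O(R^{-5}) \,,
\end{equation}
so that the groundstate contains the universal L\"uscher correction $-\pi(d-2)/(24R)$ and the expansion of the square root itself involves no free parameter beyond $\sigma$.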
\begin{table} \centering \begin{tabular}{cc|cl|cc} \hline \hline energy & $\vert n, l \big\rangle$ & \multicolumn{2}{c|}{representation} & $B_n^l$ & $C_n^l$ \\ \hline \hline $E_0$ & $\vert0\big\rangle$ & $\mathtt{1} \vert0\big\rangle$ & scalar & 0 & 0 \\ \hline $E_1$ & $\vert1\big\rangle$ & $\alpha^i_{-1} \vert0\big\rangle$ & vector & 4 & $d-3$ \\ \hline $E_{2,1}$ & $\vert2,1\big\rangle$ & $\alpha^i_{-1} \alpha^i_{-1} \vert0\big\rangle$ & scalar & 8 & 0 \\ $E_{2,2}$ & $\vert2,2\big\rangle$ & $\alpha^i_{-2} \vert0\big\rangle$ & vector & 32 & $16 (d-3)$ \\ $E_{2,3}$ & $\vert2,3\big\rangle$ & $\big(\alpha^i_{-1} \alpha^j_{-1} - \frac{\delta^{ij}}{d-2} \alpha^i_{-1} \alpha^i_{-1} \big) \vert0\big\rangle$ & sym. tracel. tensor & 8 & $4 (d-2)$\\ \hline \hline \end{tabular} \caption{String states in the lowest three energy levels and their representation in terms of creation operators $\alpha_m$ in the Fock space together with their representation with respect to SO($d-2$). We also list the associated values for the numbers $B_n^l$ and $C_n^l$ appearing in eq.~\refc{eq:est-spec}. `sym. tracel.' for the state $\vert2,3\big\rangle$ stands for `symmetric traceless'.} \label{tab:3d-string-states} \end{table} One might thus wonder whether the square root formula in eq.~\refc{eq:LC-spectrum} should be taken as the starting point of the expansion, or whether it is rather its expansion in orders of $R^{-1}$. In fact, this has been one of the biggest puzzles associated with the EST in the past decades. Lattice data (see~\cite{Brandt:2016xsp} for a compilation) show excellent agreement with the full square root formula even down to values of $R$ where its expansion breaks down. It thus has been conjectured that it should be the full formula which provides a reasonable starting point (e.g.~\cite{Athenodorou:2010cs,Brandt:2010bw}). 
The discussion above agrees with this conjecture in the sense that the LC spectrum can schematically be written as~\cite{Dubovsky:2012sh} \begin{equation} \label{eq:ng-spect} E^{\rm LC}_{n}(R) = E^{\rm NG}_{n}(R) - {\rm counterterms} \,. \end{equation} Consequently, it will always be the full square root formula which appears when we solve the above equation for $E^{\rm NG}_{n}$. Furthermore, an analysis using the machinery of the Thermodynamic Bethe Ansatz (TBA) shows that the leading order $S$-matrix is integrable, leading to energies given by the full square root formula~\cite{Dubovsky:2012sh,Dubovsky:2014fma}. Note that the boundary term can also be included in this TBA analysis~\cite{Caselle:2013dra}. Following the above discussion and the corrections to the LC spectrum computed in~\cite{Aharony:2010db,Aharony:2011ga}, the spectrum up to $O(R^{-5})$ is thus given by \begin{equation} \label{eq:est-spec} E^{\rm EST}_{n,l}(R) = E^{\rm LC}_{n}(R) - \bar{b}_2 \frac{\pi^3}{\sqrt{\sigma^3} R^4} \Big( B_n^l + \frac{d-2}{60} \Big) - \frac{\pi^3 (d-26)}{48 \sigma^2 R^5} C_n^l + O(R^{-\xi}) \,, \end{equation} where we have inserted the dimensionless coupling $\bt${} introduced in the previous section. $B_n^l$ and $C_n^l$ are dimensionless coefficients tabulated in table~\ref{tab:3d-string-states}. They depend on the representation of the state with respect to rotations around the string axis, i.e. transformations of $X^i$ under elements of SO($d-2$), and thus lift the degeneracies of the LC spectrum. Note once more that $C_n^l$ vanishes identically for $d=3$, since the state $\vert2,3\big\rangle$ does not exist in this case. This can also be seen from the associated term in the EST action, the $c_4$ term in eq.~\refc{eq:bulk_action}, which is trivial for $d=3$. 
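As a concrete example, for the groundstate ($n=0$, $B_0^0=C_0^0=0$) in $d=3$, eq.~\refc{eq:est-spec} reduces to
\begin{equation}
E^{\rm EST}_{0}(R) = E^{\rm LC}_{0}(R) - \bar{b}_2\,\frac{\pi^3}{60\,\sqrt{\sigma^3} R^4} + O(R^{-\xi}) \,,
\end{equation}
which is the combination relevant for the extraction of $\bt${} from the static potential.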
The next correction term to the LC energies can be expected to appear with an exponent $\xi=6$ or $\xi=7$, depending on whether it originates from another boundary term (which will generically be the case if the associated coefficient does not vanish identically due to symmetry) or from a bulk term. \subsection{Beyond the standard EST: massive modes and rigidity term} \label{sec:beyondEST} As mentioned already in the introduction and in section~\ref{sec:est-setup}, there is another class of terms allowed within the EST, containing the extrinsic curvature. The leading order term of this type is known as the rigidity term, whose presence was first noted in~\cite{Polyakov:1986cs}. In terms of the derivative expansion, the contributions of the rigidity term start at eighth derivative order, but the presence of this term in the action can give a non-perturbative contribution to the potential~\cite{Klassen:1990dx,Nesterenko:1997ku,Caselle:2014eka}. Including the first two terms from the bulk action, eq.~\refc{eq:bulk_action}, together with the leading order contribution from the rigidity term and evaluating the resulting Gaussian integral leads to a Euclidean potential of the form (for the details see~\cite{Caselle:2014eka}) \begin{equation} \label{eq:pot-rdet} V(R) = \lim_{T\to\infty}\Big\{ \sigma R + \frac{1}{2T} \log\Big[ \det\big(-\Delta\big) \, \det\Big(1-\frac{\Delta}{m^2}\Big) \Big] \Big\} , \end{equation} where $T$ is the finite temporal extent of the spacetime, $\Delta$ is the 2d Laplace operator, and we have introduced the mass \begin{equation} \label{eq:mdef} m=\sqrt{\sigma/2\alpha} \,, \end{equation} where $\alpha$ denotes the coupling of the rigidity term. The first determinant results from a free massless boson field, leading to the Coulombic term within the EST, first computed by L\"uscher~\cite{Luscher:1980ac}, while the second is reminiscent of that of a free boson field with mass parameter $m$. 
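For orientation, the massless determinant in eq.~\refc{eq:pot-rdet}, evaluated with zeta-function regularisation for the $d-2$ transverse modes, yields the standard L\"uscher correction,
\begin{equation}
\lim_{T\to\infty} \frac{d-2}{2T} \log\det\big(-\Delta\big) = -\frac{\pi(d-2)}{24R} \,,
\end{equation}
so that the rigidity contribution appears on top of the familiar $V(R)\approx\sigma R - \pi(d-2)/(24R)$.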
At this point it is important to stress that the contribution of the rigidity term resembles that of a massive excitation on the worldsheet in Euclidean spacetime. However, the above result is only the leading order result, originating from the Gaussian part of the rigidity action; higher order contributions will potentially spoil this equivalence. Nonetheless, at this order the two types of contributions cannot be disentangled. Using zeta-function regularisation, the determinant of the massive boson in eq.~\refc{eq:pot-rdet} leads to a leading order term in the potential of the form~\cite{Klassen:1990dx,Nesterenko:1997ku,Caselle:2014eka} \begin{equation} \label{eq:pot-rigid} V^{\rm rig}_0(R) = - \frac{m}{2\pi} \sum_{k=1}^{\infty} \frac{K_1(2kmR)}{k} \,, \end{equation} which appears on top of the leading order Coulomb term originating from the massless determinant. Note that this result has been obtained in zeta-function regularisation, which breaks Lorentz invariance explicitly. In view of the discussion in section~\ref{sec:est-setup}, this means that one has to take care to include potential higher order counterterms in the action to obtain the correct result. The higher order terms in the action can be included perturbatively~\cite{German:1989vk,Caselle:2014eka}, leading to the additional term~\cite{Braaten:1986bz,German:1989vk} \begin{equation} \label{eq:pot-rigid-cor} V^{\rm rig}_1(R) = - \frac{(d-2)(d-10)\pi^2}{3840 m \sigma R^4} \,, \end{equation} which contaminates the $R^{-4}$ boundary term from eq.~\refc{eq:est-spec}. Naively, we expect a similar term in the presence of a free boson on the worldsheet. To assess the effect of the term $V^{\rm rig}_0$ it is instructive to consider two different limits with respect to the EST. First, let us consider the large-$R$ limit, $mR\gg 1$. 
In this case the dominating contribution comes from the term in the sum with $k=1$, which gives a leading order contribution of the form~\cite{Caselle:2014eka} \begin{equation} \label{eq:pot-rigid-largeR} V^{\rm rig}_0(R) \approx -\sqrt{\frac{m}{16\pi R}} e^{-2mR} \,. \end{equation} Consequently, $V^{\rm rig}_0$ will be exponentially suppressed with respect to all other terms in the EST, so that it can only give a relevant contribution at intermediate or small values of $R$ for a given mass $m$. Next, let us consider the region of small $mR<\pi$. In this case one obtains~\cite{Caselle:2014eka} (keeping only the terms relevant for $R\to0$) \begin{equation} \label{eq:pot-rigid-smallR} V^{\rm rig}_0(R) \approx -\frac{\pi}{24 R} + \frac{m^2 R}{4\pi} \ln\Big(\frac{mR}{2\pi}\Big) + m\cdot O(mR) \,. \end{equation} Thus, for small distances the rigidity term leads to another Coulombic term, doubling the standard L\"uscher term within the EST. What is particularly important is the fact that in some intermediate regime (which, for not too large values of $m$, will still be within the validity region of the $mR<\pi$ expansion) the Coulombic term will dominate over the $R^{-4}$ correction (while the logarithm is already negative). Consequently, we expect a negative Coulombic correction to the LC energy levels in some regime before seeing an $R^{-4}$ increase for $R\to0$ if the $\bt${} term is absent. As a remark, we would like to stress that the above computation only applies to the potential. The modification of the excited energy levels due to the rigidity term is still unknown. For the purpose of this paper we will only need the potential, but a distinction between the boundary corrections and the rigidity term is notoriously difficult in this case. It would be very interesting to get a result for the associated modification of excited states.
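The interplay of the two regimes can be illustrated numerically. The following sketch evaluates the full sum of eq.~\refc{eq:pot-rigid} and compares it to the large-$R$ asymptotics of eq.~\refc{eq:pot-rigid-largeR}; to stay self-contained, $K_1$ is computed from its standard integral representation, and the units ($m=\sigma=1$) are purely illustrative:

```python
import math

def k1(x, tmax=20.0, n=2000):
    # Modified Bessel function of the second kind, K_1(x), from the
    # integral representation K_1(x) = int_0^inf exp(-x cosh t) cosh t dt
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal weights
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return s * h

def v_rig0(R, m, kmax=100):
    # Full leading-order rigidity contribution, eq. (pot-rigid)
    return -(m / (2.0 * math.pi)) * sum(k1(2.0 * k * m * R) / k
                                        for k in range(1, kmax + 1))

def v_rig0_large_R(R, m):
    # Large-R asymptotics, eq. (pot-rigid-largeR): only the k=1 term survives
    return -math.sqrt(m / (16.0 * math.pi * R)) * math.exp(-2.0 * m * R)

m = 1.0
for R in (0.5, 1.0, 2.0, 4.0):
    print(R, v_rig0(R, m), v_rig0_large_R(R, m))
```

As expected from the exponential suppression, the single-$k$ asymptotic form rapidly approaches the full sum as $mR$ grows, while at small $mR$ the sum is dominated by many terms that build up the Coulombic behaviour of eq.~\refc{eq:pot-rigid-smallR}.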
The hope is that for the excited states the predictions including the massive modes are incompatible with the splittings between the energy levels, so that one can distinguish between the cases with or without massive mode contributions. In particular, it has been found in~\cite{Brandt:2010bw} that the splitting of the first few excitations is well described by the boundary term, which could be different when massive modes are included. In the presence of additional massive bosonic degrees of freedom the presence of terms proportional to the extrinsic curvature also allows for a topological coupling term between the boson field and the GBs, proportional to the mode number~\cite{Dubovsky:2013gi,Dubovsky:2014fma}~\footnote{Note that the presence of this worldsheet $\theta$-term has first been noticed in~\cite{Polyakov:1986cs}.}. Consequently, since the boson couples to the worldsheet analogue of the $\theta$-term, the boson can naturally be referred to as the worldsheet axion~\cite{Dubovsky:2013gi}. The action for the worldsheet axion can then be written as~\cite{Dubovsky:2014fma} \begin{equation} \label{eq:act-axion} S_{\rm a} = \int_\mathcal{M}\sqrt{-h} \Big(-\frac{1}{2}\nabla_\alpha \phi\nabla^\alpha \phi-\frac{1}{2}m^2\phi^2 -\frac{\gamma}{8\pi} \phi\, \epsilon_{ij}\epsilon^{\alpha\beta}K_{\alpha\gamma}^iK_\beta^{\gamma j}+ \ldots\Big) , \end{equation} where $\nabla$ is the covariant derivative with respect to the induced metric $h_{\alpha\beta}=\partial_\alpha X \cdot \partial_\beta X$. To leading order, i.e. up to coupling terms of the form $\phi\partial_\alpha\partial_\beta\phi h^{\alpha\beta}$ and higher orders, $\nabla^\alpha$ can be replaced by $\partial^\alpha$.
The associated leading order spectrum has been computed with the TBA method for closed strings in~\cite{Dubovsky:2013gi,Dubovsky:2014fma} and has been found to be consistent with states showing an anomalously slow approach to the LC energies in the spectrum of the closed flux tube for $d=4$~\cite{Athenodorou:2010cs}. Note that the topological coupling term in eq.~\refc{eq:act-axion} is only present for $d>3$, which could be a reason why no anomalous states have been observed in the 3d flux tube spectrum. In 3d, massive modes appear as free bosons up to the coupling terms mentioned above. It would be interesting to include those coupling terms in a computation of the energy levels to check their influence on the spectrum. In the following we will always denote the contributions discussed in this section as the contributions from ``massive modes'' for brevity. We would like to emphasise, however, that strictly speaking we can only compare our results to those obtained from the rigidity term, or from the leading order contribution, neglecting any direct couplings, of a massive boson on the worldsheet. Whether direct coupling terms are indeed negligible is an open question, which we cannot answer in the course of this study. \subsection{EST and gauge/gravity duality} We close the discussion of the EST with the remark that the EST action can potentially be computed (e.g.~\cite{Aharony:2009gg} and references therein) from the original 10d fundamental string theory appearing in a generalisation of the AdS/CFT correspondence for large $N$ gauge theories~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}. For recent computations of properties of the flux tube in this framework see~\cite{Kol:2010fq,Vyas:2010wg,Vyas:2012dg,Giataganas:2015yaa}. Consequently, constraining the possible terms and their couplings in the EST action could help constrain the fundamental string theory relevant for Yang-Mills theory.
In particular, in such cases where the background in the fundamental string theory is weakly curved, the additional bosonic and fermionic degrees of freedom (e.g. the additional coordinate fields in the directions transverse to the gauge theory plane and the supersymmetry partners of the bosonic fields) can be integrated out perturbatively. This has been done for closed strings and a special class of backgrounds in~\cite{Aharony:2009gg} and the resulting terms have been found to agree with the terms appearing in the EST bulk action, eq.~\refc{eq:bulk_action}. Furthermore, in~\cite{Aharony:2010cx} it has been shown that the same class of backgrounds considered for an open string ending on two infinitely stretched $D$-branes leads to the boundary term proportional to $\bt${} in eq.~\refc{eq:bound_action}. In terms of the masses of the additional bosons its contribution is given by~\cite{Aharony:2010cx} \begin{equation} \label{eq:b2-holography} b_2 = - \frac{1}{64 \sigma} \sum_\xi \frac{(-1)^{{\rm BC}(\xi)}}{m^b_\xi} + b^f_2 + \ldots \,, \end{equation} where $\xi$ labels the transverse directions to the gauge theory plane, $m^b_\xi$ is the mass of the boson field associated with the direction $\xi$, $b^f_2$ is the contribution from the fermionic degrees of freedom and ${\rm BC}(\xi)$ depends on the boundary conditions for the fields in this direction. In the case of Dirichlet boundary conditions ${\rm BC}=0$, while for Neumann boundary conditions one has ${\rm BC}=1$. The ellipses in eq.~\refc{eq:b2-holography} stand for terms originating from possible other fields present on the gravity side of the duality. 
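As a simple illustration of the sign structure of eq.~\refc{eq:b2-holography}, the bosonic part of the sum can be evaluated for a set of masses and boundary conditions; the numbers below are purely illustrative and not taken from any concrete background:

```python
def b2_bosonic(modes, sigma=1.0):
    # Bosonic part of eq. (b2-holography):
    #   b_2 = -(1 / (64 sigma)) * sum_xi (-1)^{BC(xi)} / m_xi
    # with BC = 0 for Dirichlet and BC = 1 for Neumann boundary conditions
    return -sum((-1) ** bc / m for m, bc in modes) / (64.0 * sigma)

# hypothetical (mass, BC) pairs for the transverse directions
modes = [(2.0, 0), (2.0, 0), (3.5, 1)]
print(b2_bosonic(modes))
```

Dirichlet and Neumann directions enter with opposite signs, so partial cancellations between terms are possible, but a complete cancellation would require the kind of fine-tuning discussed in the following.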
We would like to emphasise that, even though cancellations can appear, there is no reason why the terms in eq.~\refc{eq:b2-holography} should add up to zero, except for certain fine-tuned situations (see also the discussion in~\cite{Aharony:2010cx}), so that, generically, one can expect $\bt${} to be non-vanishing when the EST originates from a duality with this type of string background. It would be interesting to see whether the duality can also account for the rigidity term in the EST action and make a statement about its coefficient. The aforementioned computations have neglected the appearance of possible additional massless scalar modes on the worldsheet.~\footnote{Note that those are present for all the examples investigated in~\cite{Aharony:2009gg}.} If present, they would contribute to the L\"uscher (Coulomb) term and thus inherently change the large $R$ behaviour of the confining string. Since the L\"uscher term is well reproduced by lattice data, this situation is basically ruled out. This issue can be resolved if the action for these additional degrees of freedom non-perturbatively develops a mass gap, so that these additional fields contribute as massive degrees of freedom on the worldsheet.
\section{Results for the potential} \label{sec:groundstate} \begin{table} \centering \begin{tabular}{cc|cccc|c} \hline \hline $\beta$ & Lattice & $R/a$ & $t_s$ & $n_t$ & \#meas & $T/r_0$ \\ \hline \hline $\text{SU}(2)$ & & & & & & \\ 5.0 & $48^3$ & 2-14 & 2 & 20000 & 1600 & 12.2 \\ 5.7 & $48^3$ & 1-13 & 2 & 20000 & 5000 & 10.4 \\ 6.0 & $48^3$ & 1-18 & 3 & 20000 & 7300 & 9.8 \\ 7.5 & $64^3$ & 1-24 & 4 & 20000 & 3000 & 10.2 \\ 10.0 & $96^3$ & 1-28 & 6 & 20000 & 3100 & 11.1 \\ 12.5 & $128^3$ & 1-34 & 8 & 20000 & 2400 & 11.7 \\ 16.0 & $192^3$ & 1-37 & 12 & 20000 & 2800 & 13.7 \\ \hline $\text{SU}(3)$ & & & & & & \\ 11.0 & $48^3$ & 2-11 & 2 & 20000 & 1700 & 14.6 \\ 14.0 & $48^3$ & 2-14 & 2 & 20000 & 1900 & 10.8 \\ 20.0 & $64^3$ & 1-23 & 4 & 20000 & 2700 & 9.5 \\ 25.0 & $96^3$ & 1-28 & 6 & 20000 & 2200 & 11.2 \\ 31.0 & $128^3$ & 1-33 & 8 & 20000 & 2400 & 11.8 \\ \hline \hline \end{tabular} \caption{Simulation parameters for the extraction of the Polyakov loop correlation functions. Listed are the range of $q\bar{q}$ separations, the temporal extent of the LW sublattices $t_s$, both in units of the lattice spacing, the number of sublattice updates $n_t$ and the number of measurements. We also list the temporal lattice extent in units of the Sommer scale $r_0$.} \label{tab:sim-paras} \end{table} To extract the static potential we have used the spatial Polyakov loop correlation function. The details of the simulations and the extraction of the potential from the correlation function are discussed in appendix~\ref{app:sim-setup}, where we also discuss the L\"uscher-Weisz (LW) multilevel algorithm~\cite{Luscher:2001up}, which is mandatory to achieve the precision needed for the extraction of the subleading coefficients in the EST. For the suppression of excited state contaminations and finite size effects our simulations have been done on large lattices. The parameters of the simulations are tabulated in table~\ref{tab:sim-paras}. 
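For orientation, the basic relation behind the extraction (detailed in appendix~\ref{app:sim-setup}) is that, in the limit of large temporal extent $T$ where excited-state contributions are suppressed, the potential follows from the Polyakov loop correlator as $V(R)=-\frac{1}{T}\log\langle P(0)P^\dagger(R)\rangle$. A minimal sketch, with purely illustrative correlator values:

```python
import math

def potential_from_correlator(corr, T):
    # Static potential from the Polyakov loop correlator,
    # V(R) = -(1/T) log <P(0) P*(R)>, valid up to excited-state
    # contaminations that die out for large temporal extent T
    return {R: -math.log(c) / T for R, c in corr.items()}

# illustrative correlator values for a few separations R/a on a T = 48 lattice
corr = {2: 2.1e-3, 4: 1.5e-4, 6: 8.9e-6}
print(potential_from_correlator(corr, T=48))
```

The exponential decay of the correlator with $R$ is what makes the multilevel algorithm indispensable: at the largest separations the signal is many orders of magnitude below the noise of a naive estimator.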
To demonstrate that, even for the high precision achieved, the aforementioned effects are indeed negligible, we report on extensive checks in appendix~\ref{app:sys-effects}. \subsection{Scale setting and string tension} \label{sec:scale-setting} For scale setting purposes we use the Sommer parameter $r_0$~\cite{Sommer:1993ce} (see appendix~\ref{app:sim-setup}). For $\text{SU}(3)$ and $d=4$ the associated continuum value is $r_0=0.5$~fm, which can be used to convert to ``physical units''. We extract $r_0/a$ from the force using four different methods: \begin{enumerate} \item[(a)] a numerical polynomial interpolation; \item[(b)] a numerical rational interpolation; \item[(c)] a parameterisation of the form~\cite{Sommer:1993ce} \begin{equation} \label{eq:force-string} F(R) = f_0 + \frac{f_1}{R^2} + \frac{f_2}{R^4} \end{equation} for the values of $R$ corresponding to the four nearest neighbours of $r_0$ (motivated by the LO EST); \item[(d)] the parameterisation of eq.~\refc{eq:force-string} with $f_2=0$ for the two nearest neighbours of $r_0$. \end{enumerate} As the final estimate for $r_0/a$ we will use method (d). The systematic uncertainty associated with the interpolation to obtain $r_0/a$ can be estimated from the maximal deviation of $r_0/a$ obtained from methods (a), (b), and (c) compared to method (d). The results for $r_0/a$ are tabulated in table~\ref{tab:scale-fits}. Another option for scale setting is to use the string tension $\sigma$, associated with the $R\to\infty$ asymptotics of the potential. We extract the string tension in two different ways: \begin{enumerate} \item[(i)] we fit the data for the force to the form \begin{equation} \label{eq:force-sig-fit} R^2 F(R) = \sigma R^2 + \gamma \,, \end{equation} following eq.~\refc{eq:force_nlo}; \item[(ii)] we fit the potential to the leading order EST prediction, eq.~\refc{eq:LC-spectrum} for $n=0$, adding a normalisation constant $V_0$.
\end{enumerate} Both ans\"atze are correct only up to a certain order in the $1/R$ expansion, so that, in the region where higher orders become important, we will get incorrect results for the string tension. To isolate the asymptotic linear behaviour of the potential we investigate the dependence on the minimal value of $R$ included in the fit, $R_{\rm min}$. The strength of using these two ans\"atze is that the corrections to the fit formula are different, so that we can determine the region where higher order terms are negligible by comparing the results. In the region where both sets of results, as functions of $R_{\rm min}$, show the onset of a plateau and agree within errors, the estimate for $\sigma$ will be reliable (with the present accuracy). \begin{figure}[ht] \centering \includegraphics[]{sig_extract_paper1.pdf} \caption{Results for the string tension in SU(2) (left) and SU(3) (right) gauge theory extracted from method (i), $\sigma_{\rm (i)}$, and (ii), $\sigma_{\rm (ii)}$, as explained in the text, versus the minimal value of $R$ included in the fit, $R_{\rm min}$ in units of $r_0$. The red bands are the values for $\sigma_{(ii)}$ which we will use for the further analysis.} \label{fig:sigma_vs_Rmin} \end{figure} The results obtained from the two methods, denoted as $\sigma_{\rm (i)}$ and $\sigma_{\rm (ii)}$, respectively, are shown in figure~\ref{fig:sigma_vs_Rmin} for $\text{SU}(2)$ (left) and $\text{SU}(3)$ (right). The plots indicate that in most of the cases the extraction of $\sigma$ is reliable. The most critical case is $\beta=11.0$ for SU(3), whose value of $\sigma$ we exclude from the following analysis. Another critical case is $\beta=16.0$ for SU(2), where the two plateaus for $\sigma_{\rm (i)}$ and $\sigma_{\rm (ii)}$ do not agree within errors. In that case we use the region where the two results are closest (the discrepancy is of order $1\sigma$).
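Method (d) above amounts to a two-point interpolation that can be written down in closed form. A minimal sketch, assuming the standard Sommer condition $r_0^2F(r_0)=1.65$ (the force values below are purely illustrative):

```python
def r0_method_d(R1, F1, R2, F2, c=1.65):
    # Interpolate the force with F(R) = f0 + f1 / R^2 (eq. (force-string)
    # with f2 = 0) through the two nearest neighbours of r0 and solve the
    # Sommer condition r0^2 F(r0) = c for r0
    f1 = (F1 - F2) / (1.0 / R1**2 - 1.0 / R2**2)
    f0 = F1 - f1 / R1**2
    # r0^2 (f0 + f1 / r0^2) = c  =>  r0^2 = (c - f1) / f0
    return ((c - f1) / f0) ** 0.5

# illustrative lattice force values at the two neighbours of r0
print(r0_method_d(4.0, 0.088, 5.0, 0.070))
```

Methods (a)--(c) differ only in the interpolating function; comparing them to (d) is what provides the systematic uncertainty quoted in table~\ref{tab:scale-fits}.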
The resulting values for $\sigma$ are listed in table~\ref{tab:scale-fits} together with the other fit parameters and are indicated by the bands in the figures. The plots indicate that within the region where the two fits agree the particular choice for $R_{\rm min}$ does not matter within the given uncertainties. In the following analysis we will use $\sigma_{\rm (ii)}$, whose value is shown as the red band in figure~\ref{fig:sigma_vs_Rmin}. \begin{table} \centering \begin{tabular}{cc|r|rr|rr} \hline \hline & & & \multicolumn{2}{c|}{(i)} & \multicolumn{2}{c}{(ii)} \\ $N$ & $\beta$ & \multicolumn{1}{c|}{$r_0/a$} & \multicolumn{1}{c}{$\sqrt{\sigma} r_0$} & \multicolumn{1}{c|}{$\gamma$} & \multicolumn{1}{c}{$\sqrt{\sigma} r_0$} & \multicolumn{1}{c}{$aV_0$} \\ \hline \hline 2 & 5.0 & 3.9472(4)( 7) & 1.2325(14)(1) & 0.129(16) & 1.2321(5)(1) & 0.2148(6) \\ & 5.7 & 4.6072(4)(10) & 1.2321(16)(1) & 0.141(13) & 1.2325(4)(1) & 0.2031(4) \\ & 6.0 & 4.8880(2)( 4) & 1.2330( 3)(1) & 0.136( 2) & 1.2331(4)(1) & 0.1976(1) \\ & 7.5 & 6.2860(4)( 3) & 1.2341( 6)(1) & 0.135( 6) & 1.2341(3)(1) & 0.1740(2) \\ & 10.0 & 8.6021(4)( 8) & 1.2349( 8)(1) & 0.136( 9) & 1.2350(3)(1) & 0.1449(2) \\ & 12.5 & 10.9085(7)(16) & 1.2345( 7)(1) & 0.136( 6) & 1.2346(3)(1) & 0.1248(1) \\ & 16.0 & 14.2958(6)(10) & 1.2401(32)(1) & 0.092(33) & 1.2364(5)(1) & 0.1026(2) \\ \hline 3 & 11.0 & 3.2881(2)( 6) & 1.2256( 2)(1) & 0.139( 3) & 1.2259(1)(1) & 0.2498(2) \\ & 14.0 & 4.4433(3)( 1) & 1.2303( 9)(1) & 0.129( 9) & 1.2300(3)(1) & 0.2239(3) \\ & 20.0 & 6.7075(4)( 2) & 1.2318( 2)(1) & 0.138( 2) & 1.2318(1)(1) & 0.18196(6) \\ & 25.0 & 8.5797(4)( 6) & 1.2322( 3)(1) & 0.135( 3) & 1.2322(1)(1) & 0.15709(6) \\ & 31.0 & 10.8182(4)( 5) & 1.2323( 3)(1) & 0.134( 3) & 1.2323(1)(1) & 0.13550(5) \\ \hline \hline \end{tabular} \caption{Results for the Sommer parameter $r_0$ and the string tension $\sigma$ in units of $r_0$ from methods (i) and (ii), as explained in the text. 
Also listed are the other fit parameters, the L\"uscher constant $\gamma$ and the normalisation constant $V_0$.} \label{tab:scale-fits} \end{table} We want to obtain the asymptotic $R\to\infty$ behaviour of the potential in the continuum. To this end we perform a continuum extrapolation for $\sqrt{\sigma}$ of the form \begin{equation} \label{eq:sig-conti} \sqrt{\sigma} r_0 = \big( \sqrt{\sigma} r_0 \big)^{\rm cont} + b_{\sigma,1} \Big(\frac{a}{r_0}\Big)^2 + b_{\sigma,2} \Big(\frac{a}{r_0}\Big)^4 \,. \end{equation} In practice, we perform two fits, one with $b_{\sigma,2}\neq0$ and another one with $b_{\sigma,2}=0$. The continuum extrapolations for $\text{SU}(2)$ and $\text{SU}(3)$ are displayed in figure~\ref{fig:sig_conti} and the results are tabulated in table~\ref{tab:scale-conti}. For the fit with $b_{\sigma,2}=0$ and SU(3) we could only include points with $(a/r_0)^2<0.03$ to obtain a reasonable $\chi^2/$dof. We have applied a similar cut for the fit in the case of SU(2), too. For SU(2) the three results at the smallest lattice spacings show fluctuations, which for $\beta=16.0$ are bigger than $1\sigma$ with respect to the fits, leading to a $\chi^2/$dof$>1$. The continuum results from the two fits agree well in both cases. In the following we will use the result coming from the analysis with $b_{\sigma,2}\neq0$.
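The extrapolation of eq.~\refc{eq:sig-conti} is linear in the parameters and can be done with plain normal equations. The sketch below uses the central values of the SU(2) data from table~\ref{tab:scale-fits} but ignores the uncertainties (an unweighted fit), so the number differs slightly from the quoted result:

```python
def fit_poly(x, y, order=2):
    # Unweighted least squares for y = c0 + c1 x + ... + c_order x^order
    # via the normal equations and Gaussian elimination
    n = order + 1
    cols = [[xi**p for xi in x] for p in range(n)]
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(n)]
         for i in range(n)]
    rhs = [sum(a * yi for a, yi in zip(cols[i], y)) for i in range(n)]
    for i in range(n):                      # forward elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    sol = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        sol[i] = (rhs[i] - sum(A[i][k] * sol[k]
                               for k in range(i + 1, n))) / A[i][i]
    return sol

# SU(2): x = (a/r0)^2 from the r0/a values of table 2,
# y = sqrt(sigma) r0 from method (ii)
r0a = [3.9472, 4.6072, 4.8880, 6.2860, 8.6021, 10.9085, 14.2958]
y = [1.2321, 1.2325, 1.2331, 1.2341, 1.2350, 1.2346, 1.2364]
x = [1.0 / r**2 for r in r0a]
print(fit_poly(x, y, order=2)[0])  # continuum value (sqrt(sigma) r0)^cont
```

The quadratic (in $(a/r_0)^2$) fit corresponds to the $b_{\sigma,2}\neq0$ analysis; dropping the last basis column (`order=1`) gives the linear variant.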
\begin{figure}[t] \centering \includegraphics[]{sig_conti_paper1.pdf} \caption{Continuum extrapolations of $\sqrt{\sigma_{(ii)}}r_0$ for SU(2) (left) and SU(3) gauge theory (right).} \label{fig:sig_conti} \end{figure} \begin{table} \centering \begin{tabular}{c|cc|cc} \hline \hline & \multicolumn{2}{c|}{SU(2)} & \multicolumn{2}{c}{SU(3)} \\ \hline \hline & quadratic & linear & quadratic & linear \\ \hline $\big( \sqrt{\sigma} r_0 \big)^{\rm cont}$ & 1.2356(3) & 1.2355(3) & 1.2325(3) & 1.2327(2) \\ \hline \hline \end{tabular} \caption{Results from the continuum extrapolations with $b_{\sigma,2}=0$ (linear) and $b_{\sigma,2}\neq0$ (quadratic).} \label{tab:scale-conti} \end{table} \subsection{The static potential} \label{sec:res-pot} In figure~\ref{fig:res_pot} we show the results for the static potential, rescaled via \begin{equation} \label{eq:resc-pot} V^{\rm RS}(R) = \Big( \frac{V(R)-V_0}{\sqrt{\sigma}} - R\sqrt{\sigma} \Big) \frac{R\sqrt{\sigma}}{\pi} + \frac{1}{24} \,. \end{equation} In this rescaled form the leading order potential to $O(1/R)$ in the $1/R$ expansion is normalised to 0, so that small differences become visible. We have rescaled the energies using the string tension for each individual value of $\beta$. The solid lines in the plot correspond to the LC spectrum, eq.~\refc{eq:LC-spectrum}, and are evaluated using the continuum extrapolated string tension. For SU(2) gauge theory we have not plotted the potential for the two largest lattice spacings, for which the results look similar. \begin{figure}[t] \centering \includegraphics[]{potential_paper.pdf} \caption{Results for the static potential (in its rescaled form -- see eq.~\refc{eq:resc-pot}) for different lattice spacings versus $R$ in SU(2) (top) and SU(3) (bottom) gauge theory.
The black continuous line is the light cone potential from eq.~\refc{eq:LC-spectrum} using the continuum extrapolated string tension.} \label{fig:res_pot} \end{figure} Corrections to the LC spectrum start at around $R/r_0\approx2$ (i.e. around 1~fm in 4d physical units) and are positive, except for one case, namely $\beta=11.0$ for SU(3). This means that the dominant corrections are not expected to be due to the rigidity term from section~\ref{sec:beyondEST}. In that case one would expect to obtain a negative correction for intermediate values of $R/r_0$. However, this does not rule out the presence of the rigidity term in general. It can still be a subleading correction for the lattice spacings considered. The only exception is the SU(3) $\beta=11.0$ lattice (with $a\approx0.15$~fm -- in physical units defined via $r_0\equiv0.5$~fm), where the negative correction could be due to a dominant rigidity term. In all of the cases we observe that the magnitude of the corrections increases when we approach the continuum limit. In particular, the corrections are larger for SU(2) gauge theory than for the SU(3) case. \subsection{Isolating the leading order correction terms} \label{sec:LO-correction} On top of the leading order behaviour (the LC spectrum from eq.~\refc{eq:LC-spectrum}), the EST predicts corrections starting at $O(R^{-4})$. We would now like to test whether this prediction holds. The first step is to isolate the leading order correction to eq.~\refc{eq:LC-spectrum}. If eq.~\refc{eq:LC-spectrum} is incorrect, corrections will appear with an exponent $0<m<4$; if the EST predictions are correct and $\bar{b}_2\neq0$, we expect $m=4$; if corrections only appear at higher orders, we will obtain $m>4$. If eq.~\refc{eq:LC-spectrum} is correct to all orders we should obtain $m\approx0$.
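To make the rescaling of eq.~\refc{eq:resc-pot} concrete, the following sketch evaluates it in $d=3$ and illustrative units ($\sigma=1$), assuming the standard open-string (Arvis) form $E_0^{\rm LC}(R)=\sqrt{\sigma^2R^2-\pi(d-2)\sigma/12}$ for the light-cone ground state of eq.~\refc{eq:LC-spectrum}:

```python
import math

D = 3  # spacetime dimension, so d - 2 = 1 transverse direction

def e0_lc(R, sigma):
    # Assumed light-cone (Arvis) ground-state energy:
    # E_0 = sqrt(sigma^2 R^2 - pi (d-2) sigma / 12)
    return math.sqrt(sigma**2 * R**2 - math.pi * (D - 2) * sigma / 12.0)

def v_rescaled(V, R, sigma, V0=0.0):
    # Rescaled potential of eq. (resc-pot); the potential truncated at
    # O(1/R), V = sigma R - pi (d-2) / (24 R), maps exactly to 0 for d = 3
    s = math.sqrt(sigma)
    return ((V - V0) / s - R * s) * R * s / math.pi + 1.0 / 24.0

sigma = 1.0
for R in (1.0, 2.0, 4.0):
    print(R, v_rescaled(e0_lc(R, sigma), R, sigma))
```

On this scale the full LC energy appears as a small negative deviation from 0 that vanishes for $R\to\infty$, so deviations of the data from the LC curve become directly visible.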
\begin{figure}[t] \centering \includegraphics[]{m_vs_R_paper.pdf} \caption{Results for the exponent $m$ plotted versus the minimal value of $R$ included in the fit, $R_{\rm min}$, for SU(2) (left) and SU(3) (right) gauge theory. The horizontal line indicates the LO exponent of the EST.} \label{fig:m-exponent} \end{figure} To investigate the power of the leading order correction we fit the data to the form \begin{equation} \label{eq:lead-coeff-fit} V(R) = E_0^{\rm LC}(R) + \frac{\eta}{\big(\sqrt{\sigma} R\big)^m} + V_0 \,, \end{equation} where $E_0^{\rm LC}(R)$ is the potential obtained from the LC energy levels~\refc{eq:LC-spectrum} and $\eta$ and $m$, together with $\sigma$ and $V_0$, are fit parameters. The results for the exponent $m$ versus the minimal value of $R$ included in the fit, $R_{\rm min}$, are shown in figure~\ref{fig:m-exponent}. The results for the exponent typically show a plateau in the region $0.5\lesssim R/r_0 \lesssim 1.0$ where $m\approx4$. The only exception is, once more, $\beta=11.0$ for SU(3). For SU(2), we see that the result for $m$ is typically a bit smaller than 4 (around 3.6). This could indicate that we observe a mixing of two types of corrections, of which the $O(R^{-4})$ one is dominant. For $R_{\rm min}/r_0>1$ we cannot resolve the correction terms reliably, so that $m$ cannot be determined with sufficient precision. \section{Analysis of boundary corrections without massive modes} \label{sec:b2-ana1} In this section we will now turn towards the extraction of the boundary coefficient $\bt$. In this first part of the analysis we will neglect contributions from massive modes (or the rigidity term) and extract the value of $\bt${} in this setup. We will see how $\bt${} changes in the presence of these terms in the next section. \subsection{Extraction of the boundary coefficient} \label{sec:b2-extract} To extract $\bt${} we use the groundstate energy from eq.~\refc{eq:est-spec}.
To check for the impact of higher order correction terms our general fit formula is of the form \begin{equation} \label{eq:boundary-fit} V(R) = E^{\rm EST}_{0}(R) + \frac{\gamma^{(1)}_{0}}{\sqrt{\sigma^5} R^6} + \frac{\gamma^{(2)}_{0}}{\sigma^3 R^7} + V_0 \,, \end{equation} where we have included appropriate powers of $\sigma$ multiplying the higher order terms to keep the coefficients dimensionless. In practice, we perform five different types of fits: \begin{enumerate} \item[{\bf A}] we use the string tension and $V_0$ extracted from method (ii) in the determination of the string tension from section~\ref{sec:scale-setting} and use eq.~\refc{eq:boundary-fit} with $\bt$, $\gamma^{(1)}_{0}$ and $\gamma^{(2)}_{0}$ as free parameters; \item[{\bf B}] use $\sigma$, $V_0$ and $\bt${} as free parameters and $\gamma^{(1)}_{0}=0$ and $\gamma^{(2)}_{0}=0$; \item[{\bf C}] use $\sigma$, $V_0$, $\bt${} and $\gamma^{(1)}_{0}$ as free parameters and $\gamma^{(2)}_{0}=0$; \item[{\bf D}] use $\sigma$, $V_0$, $\bt${} and $\gamma^{(2)}_{0}$ as free parameters and $\gamma^{(1)}_{0}=0$; \item[{\bf E}] use $\sigma$, $V_0$, $\gamma^{(1)}_{0}$ and $\gamma^{(2)}_{0}$ as free parameters and $\bar{b}_2=0$. \end{enumerate} From the analysis in the previous section, we expect the correction terms to be relevant starting with $R/r_0\approx1.2$. To test the region in which the data is well described by the higher order terms we perform the fits for several values of the lower cut in $R$ and check at which value of $R_{\rm min}$ we get a good description, indicated by acceptable values of $\chi^2/$dof. For the final result, i.e. for the $R_{\rm min}$ in the final fit, we pick the second smallest distance for which the fit gave an acceptable $\chi^2/$dof$<1.5$. We use the results with minimal $R$ value of $R_{\rm min}\pm a$ to estimate the systematic uncertainty of the extraction of the fit parameters for this particular fit.
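The $R_{\rm min}$ selection just described can be summarised in a few lines. The sketch below, with an illustrative table of $\chi^2/$dof values standing in for actual fit results, picks the second smallest acceptable lower cut and returns its neighbours, from which the systematic uncertainty is estimated:

```python
def pick_rmin(chi2_dof, threshold=1.5):
    # chi2_dof: dict mapping candidate R_min (in lattice units) to the
    # chi^2/dof of the corresponding fit
    acceptable = sorted(r for r, c in chi2_dof.items() if c < threshold)
    if len(acceptable) < 2:
        raise ValueError("fewer than two acceptable lower cuts")
    rmin = acceptable[1]               # second smallest acceptable distance
    return rmin, (rmin - 1, rmin + 1)  # R_min -/+ a for the systematics

# illustrative chi^2/dof values from a scan over R_min/a
chi2 = {3: 4.2, 4: 2.0, 5: 1.3, 6: 0.9, 7: 0.8, 8: 0.7}
print(pick_rmin(chi2))
```

Taking the second rather than the first acceptable cut builds in a safety margin against a single fluctuation producing an accidentally small $\chi^2/$dof at too small a distance.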
For the fits including terms of $O(R^{-6})$ or $O(R^{-7})$ we would expect that these work well down to even smaller values of $R_{\rm min}$ than the fits including only the $R^{-4}$ term, since these fits include higher order correction terms which should improve the agreement with the data. \begin{table} \small \centering \begin{tabular}{c|ll|l|ll|cc} \hline \hline Fit & \multicolumn{1}{c}{$\sqrt{\sigma}r_0$} & \multicolumn{1}{c|}{$aV_0$} & \multicolumn{1}{c|}{$\bar{b}_2\cdot10^{2}$} & \multicolumn{1}{c}{$\gamma^{(1)}_{0}\cdot10^{3}$} & \multicolumn{1}{c|}{$\gamma^{(2)}_{0}\cdot10^{3}$} & $\chi^2/$dof & $R_{\rm min}/r_0$ \\ \hline \hline \multicolumn{2}{l}{{ }$\beta=5.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & { }1.7(21)(23) & { }7(21)(75) & -{ }6(17)(76) & 0.17 & 0.76 \\ {\bf B} & 1.2138(2)(3) & 0.2151(1)(3) & -1.75({ }3)(22) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.22 & 0.76 \\ {\bf C} & 1.2320(3)(3) & 0.2148(3)(4) & -2.29(32)(102) & -2({ }2)({ }7) & \multicolumn{1}{c|}{---} & 0.12 & 0.76 \\ {\bf D} & 1.2320(3)(3) & 0.2148(3)(3) & -2.15(24)(79) & \multicolumn{1}{c}{---} & -1.3(7)(6) & 0.24 & 0.76 \\ {\bf E} & 1.2321(3)(3) & 0.2148(5)(4) & \multicolumn{1}{c|}{---} & -67(73)(60) & -64(71)(86) & 0.11 & 1.01 \\ \hline \multicolumn{2}{l}{{ }$\beta=5.7$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & { }1.6({ }7)(16) & 60({ }4)(38) & -45({ }3)(31) & 0.01 & 0.87 \\ {\bf B} & 1.2329(3)(1) & 0.2027(3)(1) & -1.96(21)(26) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.03 & 1.09 \\ {\bf C} & 1.2329(4)(2) & 0.2027(4)(3) & -1.99(59)(114) & 0.3(25)(80) & \multicolumn{1}{c|}{---} & 0.03 & 0.87 \\ {\bf D} & 1.2329(4)(2) & 0.2027(4)(2) & -1.98(44)(80) & \multicolumn{1}{c}{---} & -0.2(18)(68) & 0.03 & 0.87 \\ {\bf E} & 1.2327(3)(3) & 0.2029(3)(3) & \multicolumn{1}{c|}{---} & 33(33)(12) & -24(24)(12) & 0.01 & 0.87 \\ \hline \multicolumn{2}{l}{{ }$\beta=6.0$} & & & & & & \\ {\bf A} & 
\multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & { }2.26(61)(140) & 97({ }8)(47) & -80({ }5)(45) & 0.15 & 1.02 \\ {\bf B} & 1.2333(1)(1) & 0.1973(1)(1) & -2.19(6)(12) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.25 & 1.02 \\ {\bf C} & 1.2333(1)(1) & 0.1973(1)(1) & -2.27(15)(60) & -1.3(6)(41) & \multicolumn{1}{c|}{---} & 0.28 & 0.82 \\ {\bf D} & 1.2333(1)(1) & 0.1973(1)(1) & -2.20(12)(39) & \multicolumn{1}{c}{---} & -0.9(4)(32) & 0.27 & 0.82 \\ {\bf E} & 1.2332(2)(1) & 0.1975(2)(1) & \multicolumn{1}{c|}{---} & 43(44)(25) & -35(36)(35) & 0.19 & 1.02 \\ \hline \multicolumn{2}{l}{{ }$\beta=7.5$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & -0.5(14)(14) & 31(18)(39) & -22(12)(35) & 0.16 & 0.80 \\ {\bf B} & 1.2342(2)(2) & 0.1738(1)(1) & -2.30(7)(17) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.05 & 0.95 \\ {\bf C} & 1.2343(2)(1) & 0.1738(2)(1) & -2.69(20)(21) & -2.3(7)(9) & \multicolumn{1}{c|}{---} & 0.03 & 0.80 \\ {\bf D} & 1.2343(2)(1) & 0.1738(1)(1) & -2.54(14)(20) & \multicolumn{1}{c}{---} & -1.6(5)(8) & 0.03 & 0.80 \\ {\bf E} & 1.2342(2)(2) & 0.1739(2)(2) & \multicolumn{1}{c|}{---} & 52(53)(15) & -43(43)(16) & 0.03 & 0.95 \\ \hline \multicolumn{2}{l}{{ }$\beta=10.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & -0.95(127)(117) & 28(17)(33) & -21(11)(29) & 0.52 & 0.81 \\ {\bf B} & 1.2350(1)(2) & 0.14486(5)(9) & -2.42(4)(17) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.28 & 0.93 \\ {\bf C} & 1.2350(1)(3) & 0.14482(5)(13) & -2.78(6)(36) & 2.2(2)(12) & \multicolumn{1}{c|}{---} & 0.19 & 0.70 \\ {\bf D} & 1.2350(1)(3) & 0.14484(5)(14) & -2.58(5)(29) & \multicolumn{1}{c}{---} & -1.3(1)(10) & 0.27 & 0.70 \\ {\bf E} & 1.2349(2)(2) & 0.14491(7)(10) & \multicolumn{1}{c|}{---} & 55(55)(16) & -45(45)(19) & 0.17 & 0.93 \\ \hline \multicolumn{2}{l}{{ }$\beta=12.5$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.53(73)(96) & 47(9)(25) & 
-33(6)(21) & 0.28 & 0.83 \\ {\bf B} & 1.2347(1)(2) & 0.12467(4)(8) & -2.27(3)(12) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.33 & 0.83 \\ {\bf C} & 1.2349(2)(1) & 0.12458(5)(5) & -2.75(9)(18) & -2.1(3)(7) & \multicolumn{1}{c|}{---} & 0.06 & 0.73 \\ {\bf D} & 1.2349(2)(1) & 0.12460(5)(5) & -2.58(7)(16) & \multicolumn{1}{c}{---} & -1.3(2)(5) & 0.07 & 0.73 \\ {\bf E} & 1.2346(2)(2) & 0.12476(5)(9) & \multicolumn{1}{c|}{---} & 39(39)(9) & -28(28)(9) & 0.17 & 0.83 \\ \hline \multicolumn{2}{l}{{ }$\beta=16.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & -0.2(8)(110) & 60(9)(36) & -50(5)(33) & 1.20 & 0.91 \\ {\bf B} & 1.2357(1)(2) & 0.10281(3)(5) & -2.56(3)(14) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.95 & 0.91 \\ {\bf C} & 1.2358(1)(3) & 0.10277(3)(7) & -3.01(5)(28) & -2.6(2)(10) & \multicolumn{1}{c|}{---} & 0.76 & 0.70 \\ {\bf D} & 1.2358(1)(2) & 0.10275(3)(5) & -2.97(6)(22) & \multicolumn{1}{c}{---} & -2.2(2)(10) & 0.54 & 0.77 \\ {\bf E} & 1.2356(2)(2) & 0.10285(4)(7) & \multicolumn{1}{c|}{---} & 55(55)(13) & -44(44)(14) & 0.63 & 0.91 \\ \hline \hline \end{tabular} \caption{Results of the fits for the extraction of $\bt${} for $\text{SU}(2)$ gauge theory.} \label{tab:b2-fits-su2} \end{table} \begin{table} \small \centering \begin{tabular}{c|ll|l|ll|cc} \hline \hline Fit & \multicolumn{1}{c}{$\sqrt{\sigma}r_0$} & \multicolumn{1}{c|}{$aV_0$} & \multicolumn{1}{c|}{$\bar{b}_2\cdot10^{2}$} & \multicolumn{1}{c}{$\gamma^{(1)}_{0}\cdot10^{3}$} & \multicolumn{1}{c|}{$\gamma^{(2)}_{0}\cdot10^{3}$} & $\chi^2/$dof & $R_{\rm min}/r_0$ \\ \hline \hline \multicolumn{2}{l}{{ }$\beta=11.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & { }1.6({ }5)(19) & { }14(8)(92) & -{ }3(6)(100) & 0.07 & 0.91 \\ {\bf B} & 1.2259(1)(1) & 0.2498(1)(2) & { }1.0({ }2)({ }9) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.02 & 1.52 \\ {\bf C} & 1.2260(1)(2) & 0.2497(1)(4) & { }1.3({ }2)(13) & { 
}10(6)(12) & \multicolumn{1}{c|}{---} & 0.05 & 0.91 \\ {\bf D} & 1.2260(1)(2) & 0.2497(1)(4) & { }0.8(10)(11) & \multicolumn{1}{c}{---} & { }{ }8(1)({ }12) & 0.07 & 0.91 \\ {\bf E} & 1.2260(1)(2) & 0.2496(1)(3) & \multicolumn{1}{c|}{---} & -17(2)(46) & { }22(2)({ }64) & 0.12 & 0.91 \\ \hline \multicolumn{2}{l}{{ }$\beta=14.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & { }0.81(15)(43) & { }{ }6(10)(11) & -{ }3(6)({ }10) & 0.05 & 0.68 \\ {\bf B} & 1.2300(2)(1) & 0.2239(1)(2) & -1.37({ }4)(10) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.02 & 0.90 \\ {\bf C} & 1.2300(2)(2) & 0.2239(2)(2) & -1.24({ }9)(30) & 0.7(3)({ }5) & \multicolumn{1}{c|}{---} & 0.01 & 0.68 \\ {\bf D} & 1.2300(2)(1) & 0.2239(2)(1) & -1.29({ }7)(10) & \multicolumn{1}{c}{---} & 0.4(2)({ }{ }3) & 0.01 & 0.68 \\ {\bf E} & 1.2299(2)(1) & 0.2240(2)(2) & \multicolumn{1}{c|}{---} & { }23(5)(18) & -17(5)({ }23) & 0.03 & 0.90 \\ \hline \multicolumn{2}{l}{{ }$\beta=20.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & -0.44(26)(63) & { }17(3)(18) & -11(2)({ }15) & 0.11 & 0.75 \\ {\bf B} & 1.2319(2)(0) & 0.18166(4)(1) & -1.71({ }2)({ }1) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.003 & 0.89 \\ {\bf C} & 1.2319(2)(0) & 0.18166(5)(1) & -1.73({ }6)({ }8) & -0.1(2)({ }2) & \multicolumn{1}{c|}{---} & 0.004 & 0.75 \\ {\bf D} & 1.2319(2)(0) & 0.18166(5)(1) & -1.72({ }4)({ }5) & \multicolumn{1}{c}{---} & -0.1(1)({ }{ }1) & 0.004 & 0.75 \\ {\bf E} & 1.2318(2)(1) & 0.18178(5)(7) & \multicolumn{1}{c|}{---} & { }30(2)({ }8) & -22(2)({ }{ }9) & 0.02 & 0.89 \\ \hline \multicolumn{2}{l}{{ }$\beta=25.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.2({ }{ }4)(110) & { }41(5)(48) & -32(3)({ }53) & 0.04 & 0.93 \\ {\bf B} & 1.2319(1)(0) & 0.15696(3)(2) & -1.78({ }2)({ }4) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.004 & 0.93 \\ {\bf C} & 1.2319(1)(1) & 0.15695(3)(10) & -1.83({ }4)(32) & 
-0.3(2)({ }5) & \multicolumn{1}{c|}{---} & 0.004 & 0.70 \\ {\bf D} & 1.2319(1)(1) & 0.15695(3)(8) & -1.81({ }3)(19) & \multicolumn{1}{c}{---} & -0.2(1)({ }{ }5) & 0.004 & 0.70 \\ {\bf E} & 1.2318(2)(1) & 0.15704(4)(7) & \multicolumn{1}{c|}{---} & { }36(3)(17) & -28(3)({ }23) & 0.03 & 0.93 \\ \hline \multicolumn{2}{l}{{ }$\beta=31.0$} & & & & & & \\ {\bf A} & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.0({ }{ }4)({ }6) & { }28(4)(15) & -19(3)({ }13) & 0.55 & 0.83 \\ {\bf B} & 1.2324(1)(1) & 0.13542(2)(2) & -1.77({ }1)({ }2) & \multicolumn{1}{c}{---} & \multicolumn{1}{c|}{---} & 0.34 & 0.74 \\ {\bf C} & 1.2324(1)(2) & 0.13540(2)(6) & -1.84({ }2)(15) & -0.23(5)(41) & \multicolumn{1}{c|}{---} & 0.24 & 0.65 \\ {\bf D} & 1.2324(1)(1) & 0.13541(2)(5) & -1.81({ }8)(10) & \multicolumn{1}{c}{---} & -0.12(3)({ }21) & 0.25 & 0.65 \\ {\bf E} & 1.2322(1)(1) & 0.13552(2)(5) & \multicolumn{1}{c|}{---} & { }27(28)({ }5) & -19(19)({ }6) & 0.46 & 0.83 \\ \hline \hline \end{tabular} \caption{Results of the fits for the extraction of $\bt${} for $\text{SU}(3)$ gauge theory.} \label{tab:b2-fits-su3} \end{table} The results of the fits are listed in table~\ref{tab:b2-fits-su2} for $\text{SU}(2)$ gauge theory and in table~\ref{tab:b2-fits-su3} for the $\text{SU}(3)$ case. Let us discuss the individual fits before moving on with the analysis. In general, the fits lead to small values of $\chi^2/$dof (since we picked the second possible value for $R_{\rm min}$), so that it is difficult to make statements about the agreement of the EST predictions with the data based on these numbers. Accordingly, our main argument for judging the agreement will be based on the values of $R_{\rm min}$ which can be used in the fits. At coarse lattice spacings all fits work equally well, leading to similar values of $R_{\rm min}$. When going to smaller lattice spacings, however, the picture changes slightly.
In these cases the worst fits are typically fit {\bf B} (which is not surprising, since higher order terms have been neglected) and fits {\bf A} and {\bf E}. For fit {\bf A} this might be due to the fact that the fit does not allow $\sigma$ and $V_0$ to change, which, apparently, is too restrictive, even though both do not change significantly for the other fits. For fit {\bf E} $R_{\rm min}$ needs to be larger even though higher order terms are included in the fit, indicating less agreement with the data. This implies that the scenario with $\bar{b}_2=0$ is disfavoured, in agreement with the analysis from section~\ref{sec:LO-correction}. For the following analysis we will thus include the results from fits {\bf C} and {\bf D} together with the one from fit {\bf B}, for which a larger value of $R_{\rm min}$ was expected. We note, however, that we cannot fully exclude the possibility that the coefficient of the $R^{-4}$ term is 0 and that the result from section~\ref{sec:LO-correction} is due to a fine-tuned combination of higher order terms. \begin{table} \small \centering \begin{tabular}{cc|cc} \hline \hline \multicolumn{2}{c|}{$\text{SU}(2)$} & \multicolumn{2}{c}{$\text{SU}(3)$} \\ $\beta$ & $\bt$ & $\beta$ & $\bt$ \\ \hline \hline 5.0 & -0.0179({ }5)(50)(23) & 11.0 & 0.0100(15)(34)(95) \\ 5.7 & -0.0196(25)({ }3)({ }2) & 14.0 & -0.0133({ }6)(10)({ }2) \\ 6.0 & -0.0211({ }7)(17)(11) & 20.0 & -0.0171({ }3)({ }7)({ }2) \\ 7.5 & -0.0244(11)(25)(16) & 25.0 & -0.0175({ }8)({ }7)({ }2) \\ 10.0 & -0.0251({ }5)(27)(22) & 31.0 & -0.0178({ }6)({ }7)({ }3) \\ 12.5 & -0.0245({ }5)(31)(13) & & \\ 16.0 & -0.0273({ }4)(29)(17) & & \\ \hline \hline \end{tabular} \caption{Final results for $\bt${} for the individual lattice spacings.
The first error is the statistical uncertainty, the second the systematic one due to the unknown correction terms, estimated by calculating the maximal deviations of the results for $\bt${} from fits {\bf B} to {\bf C}, and the third is the systematic one associated with the choice for $R_{\rm min}$.} \label{tab:b2-results} \end{table} We determine the associated result for $\bt${} at a single lattice spacing using the average over the results from fits {\bf B} to {\bf D}, weighted with the associated uncertainties. To determine the systematic error due to the particular choice for $R_{\rm min}$ we repeat the same procedure with $R_{\rm min}\pm 1a$ and take the maximal deviation as an estimate. The results are listed in table~\ref{tab:b2-results}. In the next section we discuss the associated continuum extrapolation. It is interesting to note that in $\text{SU}(3)$ gauge theory the systematic errors are typically an order of magnitude smaller than in $\text{SU}(2)$ gauge theory. A possible reason could be that the agreement between the effective string theory predictions and the energy levels becomes better with increasing $N$. This appears natural in the light of possible corrections to the EST discussed in section~\ref{sec:est-setup}, which are expected to be further suppressed with increasing $N$. \begin{figure}[t] \centering \includegraphics[]{b2_vs_asq_plot_su3_paper.pdf} \caption{Results for $\bt${} from table~\ref{tab:b2-results} versus the squared lattice spacing in units of $r_0$. Also shown are the continuum results from eq.~\refc{eq:b2-conti-res}.} \label{fig:br_vs_r0} \end{figure} The results for $\bt${} are plotted versus $a^2$ in figure~\ref{fig:br_vs_r0}. The plot displays a rather smooth behaviour towards the continuum ($a=0$). The exception is the data point at $\beta=11.0$ for gauge group $\text{SU}(3)$, which shows a rather strong upwards trend.
We have excluded this data point from the following analysis, since it appears to lie outside of the scaling region. \subsection{Continuum limit of boundary corrections} \label{sec:b2-conti} Up to now the comparison with the EST has been done at finite lattice spacing. To extract the final continuum results we have to perform the continuum extrapolation. To this end we parameterise the boundary coefficient $\bt${} as \begin{equation} \label{eq:b2-conti} \bar{b}_2 = \big( \bar{b}_2 \big)^{\rm cont} + b_{\bar{b}_2,1} \Big(\frac{a}{r_0}\Big)^2 + b_{\bar{b}_2,2} \Big(\frac{a}{r_0}\Big)^4 \,. \end{equation} Using this parameterisation we now perform three different fits: \begin{enumerate} \item[(1)] a fit including all terms in eq.~\refc{eq:b2-conti} and all lattice spacings (except for $\beta=11.0$ for $\text{SU}(3)$); \item[(2)] fit (1) but with $b_{\bar{b}_2,2}=0$; \item[(3)] a fit with $b_{\bar{b}_2,2}=0$, including only the $\beta$-values 7.5, 10.0, 12.5 and 16.0 for $\text{SU}(2)$ gauge theory and 20.0, 25.0 and 31.0 for $\text{SU}(3)$. \end{enumerate} As discussed above, we have excluded the data set with $\beta=11.0$. In all cases we have included systematic errors due to the functional form and the particular value for $R_{\rm min}$ used for the extraction of $\bt${} by performing these fits for the results from fits {\bf B} to {\bf D} and the fits with a minimal $R$ value of $R_{\rm min}\pm1a$ individually. To determine the final result, we have averaged the results weighted with the individual uncertainties of the extrapolations for fits {\bf B} to {\bf C} and estimated the systematic uncertainty due to the choice for $R_{\rm min}$ by doing the same for $R_{\rm min}\pm1a$. The procedure is the same as the one used for the averaging at the individual lattice spacings. In this way we obtain a final result for the three different fits (1) to (3). The continuum results for $\bt${} for the different fits are given in table~\ref{tab:b2_conti_fitpar}. 
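The weighted averaging and the linear continuum extrapolation described above can be sketched in a few lines. The following Python fragment uses illustrative numbers in place of the actual data, and the helper functions are hypothetical, not part of any analysis code; it implements the inverse-variance weighted average used at fixed lattice spacing and the closed-form weighted least-squares fit linear in $(a/r_0)^2$, corresponding to fit (2):

```python
# Sketch of the averaging and extrapolation procedure (illustrative numbers
# only; the helper functions are hypothetical stand-ins).

def weighted_average(values, errors):
    # Inverse-variance weighted average, as used to combine the results
    # of several fits at a fixed lattice spacing.
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mean, (1.0 / sum(w)) ** 0.5

def weighted_linear_fit(x, y, yerr):
    # Closed-form weighted least squares for y = p0 + p1 * x; with
    # x = (a/r0)^2 the intercept p0 is the continuum value, as in fit (2).
    w = [1.0 / e**2 for e in yerr]
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx**2
    p0 = (Sxx * Sy - Sx * Sxy) / det
    p1 = (S * Sxy - Sx * Sy) / det
    return p0, p1

# Illustrative SU(2)-like input: (a/r0)^2 and b2-values with errors.
asq = [0.20, 0.12, 0.08, 0.05, 0.03]
b2  = [-0.0211, -0.0244, -0.0251, -0.0245, -0.0273]
err = [0.002, 0.003, 0.003, 0.003, 0.003]
b2_cont, slope = weighted_linear_fit(asq, b2, err)
```

The systematic error from the choice of $R_{\rm min}$ is then obtained by repeating such a fit for the $R_{\rm min}\pm1a$ data sets and taking the maximal deviation of the intercepts.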
As can be seen from figure~\ref{fig:br_vs_r0}, the data are consistent with a straight line, which is also indicated by the good values of $\chi^2/$dof for fit (2), below but close to 1, even though a slight curvature might still be present. As our final result we thus use the result from fit (2), including the spread of the results from fits (1) and (3) as a systematic uncertainty associated with the continuum extrapolation. The associated curves from fit (2) with the main value for $R_{\rm min}$ are shown in figure~\ref{fig:br_vs_r0_fit}. \begin{table} \small \centering \begin{tabular}{cc|ccc} \hline \hline $N$ & Fit & (1) & (2) & (3) \\ \hline 2 & $\big( \bar{b}_2 \big)^{\rm cont}$ & -0.0254(5)(42)(27) & -0.0257(3)(38)(17) & -0.0256(5)(37)(16) \\ 3 & $\big( \bar{b}_2 \big)^{\rm cont}$ & -0.0185(3)(14)(10) & -0.0187(2)(13)({ }4) & -0.0186(2)(15)({ }8) \\ \hline \hline \end{tabular} \caption{Results for $\big( \bar{b}_2 \big)^{\rm cont}$ from fits (1) to (3) (see text). The first error is the statistical uncertainty, the second the one associated with the unknown higher order correction terms in the potential, and the third the one due to the choice for $R_{\rm min}$.} \label{tab:b2_conti_fitpar} \end{table} \begin{figure}[t] \centering \includegraphics[]{b2_vs_asq_plot_su2_conti_lin_paper.pdf} \\ \includegraphics[]{b2_vs_asq_plot_su3_conti_lin_paper.pdf} \\ \caption{Results for the linear continuum extrapolation, fit (2), for the results for $\bt${} obtained from fits {\bf B}, {\bf C} and {\bf D} (from left to right) for $\text{SU}(2)$ (top) and $\text{SU}(3)$ (bottom) gauge theory.} \label{fig:br_vs_r0_fit} \end{figure} The final continuum results for $\bt${}, given in eq.~\refc{eq:b2-conti-res} in section~\ref{sec:results}, are already shown in figure~\ref{fig:br_vs_r0}.
The figure shows that the results for $\bt${} in $\text{SU}(3)$ gauge theory are significantly larger than the results for $\text{SU}(2)$, indicating the non-universality of the boundary corrections in the EST. In particular, this difference remains in the continuum limit. From the trend between $\text{SU}(2)$ and $\text{SU}(3)$, one might suspect that $\bt${} tends to zero for $N\to\infty$. This will be investigated further in the next publication of the series. \section{Analysis of boundary corrections with massive modes} \label{sec:rigidity} Up to now we have ignored possible contributions from massive modes. Our aim in this section is to see how the presence of these modes changes the result for $\bt${} from the previous section. \subsection{Testing the consistency with the potential} \label{sec:m-test} To include the massive modes in the analysis we add the terms from eqs.~\refc{eq:pot-rigid} and~\refc{eq:pot-rigid-cor} to the fit function eq.~\refc{eq:boundary-fit}, \begin{equation} \label{eq:mass-fit} V(R) = E_{0}^{\rm eff}(R) - \frac{m}{2\pi} \sum_{k=1}^{\infty} \frac{K_1(2kmR)}{k} - \frac{(d-2)(d-10)\pi^2}{3840 m \sigma R^4} + \frac{\gamma^{(1)}_{0}}{\sqrt{\sigma^5} R^6} + \frac{\gamma^{(2)}_{0}}{\sigma^3 R^7} + V_0 \,. \end{equation} The main difficulty when fitting to eq.~\refc{eq:mass-fit} is the presence of the infinite sum over modified Bessel functions of the second kind. In practice, however, the sum is always completely dominated by the first few terms, since $K_1(nc)$ decays exponentially with increasing $n$ for any positive real number $c$. In our fits we have used the first 100 terms and checked the accuracy of this truncation explicitly by calculating the contribution of the next 10 terms in the sum. For all combinations of $m$ and $R$ that appeared in our analysis this contribution is suppressed by at least 100 orders of magnitude.
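The size of the neglected tail is easy to check with a short numerical sketch. In the fragment below (plain Python; the trapezoidal evaluation of $K_1$ is a stand-in for a proper Bessel routine, and the values $r_0m=2.6$, $R=r_0$ are merely typical magnitudes, not fit results), the contribution of terms 101 to 110 is indeed negligible compared to the first 100 terms:

```python
import math

def K1(x):
    # Modified Bessel function of the second kind via the integral
    # representation K_1(x) = int_0^inf exp(-x*cosh(t))*cosh(t) dt,
    # evaluated with a simple trapezoidal rule (a stand-in for a
    # library routine; accurate enough for this check).
    n, tmax = 4000, 12.0
    h = tmax / n
    total = 0.5 * math.exp(-x)            # endpoint t = 0, cosh(0) = 1
    for i in range(1, n + 1):
        c = math.cosh(i * h)
        total += math.exp(-x * c) * c     # underflows harmlessly to 0.0
    return h * total

def massive_sum(m, R, kmin, kmax):
    # Partial sum of sum_{k>=1} K_1(2*k*m*R)/k from the fit function.
    return sum(K1(2.0 * k * m * R) / k for k in range(kmin, kmax + 1))

m, R = 2.6, 1.0                           # typical magnitudes, in units r0 = 1
head = massive_sum(m, R, 1, 100)          # the terms kept in the fits
tail = massive_sum(m, R, 101, 110)        # the neglected terms
# tail is suppressed by far more than 100 orders of magnitude relative
# to head, since the arguments of K_1 in the tail are already ~500.
```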
Using eq.~\refc{eq:mass-fit} we have performed the four different types of fits: \begin{enumerate} \item[{\bf F}] use $\sigma$, $V_0$, $\bt${} and $m$ as free parameters and $\gamma^{(1)}_{0}=\gamma^{(2)}_{0}=0$; \item[{\bf G}] use $\sigma$, $V_0$, $\bt${}, $m$ and $\gamma^{(1)}_{0}$ as free parameters and $\gamma^{(2)}_{0}=0$; \item[{\bf H}] use $\sigma$, $V_0$, $\bt${}, $m$ and $\gamma^{(2)}_{0}$ as free parameters and $\gamma^{(1)}_{0}=0$; \item[{\bf J}] use $\sigma$, $V_0$ and $m$ as free parameters and $\gamma^{(1)}_{0}=\gamma^{(2)}_{0}=\bar{b}_2=0$; \end{enumerate} The last fit constitutes a check whether the presence of the $R^{-4}$ term due to the massive mode (i.e. the term from eq.~\refc{eq:pot-rigid-cor}) is already sufficient to describe the $R^{-4}$ correction found in section~\ref{sec:LO-correction}. \begin{table} \small \centering \begin{tabular}{c|ll|l|l|l|cc} \hline \hline Fit & \multicolumn{1}{c}{$\sqrt{\sigma}r_0$} & \multicolumn{1}{c|}{$aV_0$} & \multicolumn{1}{c|}{$\bar{b}_2\cdot10^{2}$} & \multicolumn{1}{c|}{$r_0m$} & \multicolumn{1}{c}{$\gamma^{(1)/(2)}_{0}\cdot10^{3}$} & $\chi^2/$dof & $R_{\rm min}/r_0$ \\ \hline \hline \multicolumn{2}{l}{{ }$\beta=5.0$} & & & & & & \\ {\bf F} & 1.2320(2)(1) & 0.2148(2)(2) & -0.8({ }1)({ }4) & 2.75({ }6)(49) & \multicolumn{1}{c|}{---} & 0.12 & 0.76 \\ {\bf G} & 1.2322(4)(2) & 0.2145(5)(3) & -2.0({ }3)({ }1) & 2.01(70)(45) & -6.3(12)(56) & 0.11 & 0.76 \\ {\bf H} & 1.2322(3)(2) & 0.2145(4)(3) & -1.7(11)({ }8) & 2.06(44)(50) & -3.6(52)(35) & 0.10 & 0.76 \\ {\bf J} & 1.2320(3)(2) & 0.2149(3)(3) & \multicolumn{1}{c|}{---} & 2.63({ }7)(22) & \multicolumn{1}{c|}{---} & 0.19 & 1.52 \\ \hline \multicolumn{2}{l}{{ }$\beta=5.7$} & & & & & & \\ {\bf F} & 1.2331(10)(2) & 0.2024(12)(3) & -1.3({ }8)({ }6) & 2.1(149)(607) & \multicolumn{1}{c|}{---} & 0.06 & 0.87 \\ {\bf G} & 1.2328(8)(5) & 0.2026(10)(7) & -7.4(321)(73) & 1.2(150)(40) & -45(220)(81) & 0.02 & 0.87 \\ {\bf H} & 1.2329(5)(3) & 0.2027(7)(3) & -1.2(17)({ }7) & 
5.5(69)(24) & -0.8(71)(19) & 0.03 & 0.87 \\ {\bf J} & 1.2327(2)(4) & 0.2029(2)(4) & \multicolumn{1}{c|}{---} & 2.93(19)(89) & \multicolumn{1}{c|}{---} & 0.06 & 1.30 \\ \hline \multicolumn{2}{l}{{ }$\beta=6.0$} & & & & & & \\ {\bf F} & 1.2333(1)(1) & 0.1973(1)(1) & -1.0({ }1)(10) & 3.5({ }5)(238) & \multicolumn{1}{c|}{---} & 0.29 & 0.82 \\ {\bf G} & 1.2333(2)(2) & 0.1972(1)(3) & -6.7({ }6)(69) & 1.3({ }1)(27) & -37(42)(710) & 0.33 & 0.82 \\ {\bf H} & 1.2333(2)(2) & 0.1972(2)(3) & -4.9({ }5)(52) & 1.4({ }1)(25) & -21({ }3)(57) & 0.34 & 0.82 \\ {\bf J} & 1.2332(1)(2) & 0.1975(1)(2) & \multicolumn{1}{c|}{---} & 2.88(33)(454) & \multicolumn{1}{c|}{---} & 0.27 & 1.43 \\ \hline \multicolumn{2}{l}{{ }$\beta=7.5$} & & & & & & \\ {\bf F} & 1.2343(2)(3) & 0.1738(1)(2) & -1.15({ }5)(14) & 2.9({ }2)({ }6) & \multicolumn{1}{c|}{---} & 0.08 & 0.80 \\ {\bf G} & 1.2345(3)(2) & 0.1736(4)(2) & -5.1(63)(38) & 1.5(37)(18) & -24(37)(30) & 0.06 & 0.80 \\ {\bf H} & 1.2345(5)(2) & 0.1736(5)(3) & -3.7(53)(26) & 1.6(68)(15) & -12(25)(22) & 0.06 & 0.80 \\ {\bf J} & 1.2341(2)(2) & 0.1740(1)(2) & \multicolumn{1}{c|}{---} & 2.70({ }8)(15) & \multicolumn{1}{c|}{---} & 0.11 & 1.43 \\ \hline \multicolumn{2}{l}{{ }$\beta=10.0$} & & & & & & \\ {\bf F} & 1.2351(1)(2) & 0.14475(6)(8) & -1.47(11)(18) & 2.6({ }2)({ }3) & \multicolumn{1}{c|}{---} & 0.10 & 0.93 \\ {\bf G} & 1.2352(2)(2) & 0.14472(2)(2) & -1.74(42)(23) & 2.7({ }6)(10) & -3({ }2)(13) & 0.06 & 0.70 \\ {\bf H} & 1.2352(2)(2) & 0.1447{ }(2)(2) & -1.67(33)(29) & 2.6({ }4)({ }3) & -2({ }2)({ }2) & 0.07 & 0.70 \\ {\bf J} & 1.2348(1)(2) & 0.14497(5)(9) & \multicolumn{1}{c|}{---} & 2.75({ }3)(11) & \multicolumn{1}{c|}{---} & 0.50 & 1.40 \\ \hline \multicolumn{2}{l}{{ }$\beta=12.5$} & & & & & & \\ {\bf F} & 1.2349(2)(2) & 0.12459(5)(6) & -1.27({ }5)({ }9) & 3.0({ }3)({ }4) & \multicolumn{1}{c|}{---} & 0.11 & 0.83 \\ {\bf G} & 1.2349(2)(2) & 0.12456(7)(5) & -1.49(11)(13) & 3.2({ }4)({ }3) & -2.0({ }6)({ }8) & 0.08 & 0.64 \\ {\bf H} & 1.2349(2)(2) & 
0.12456(7)(8) & -1.40(11)(16) & 3.0({ }4)({ }1) & -1.2({ }4)({ }7) & 0.09 & 0.64 \\ {\bf J} & 1.2345(2)(2) & 0.12478(4)(7) & \multicolumn{1}{c|}{---} & 2.88({ }3)({ }9) & \multicolumn{1}{c|}{---} & 0.42 & 1.28 \\ \hline \multicolumn{2}{l}{{ }$\beta=16.0$} & & & & & & \\ {\bf F} & 1.2359(1)(2) & 0.10273(3)(5) & -1.62({ }4)(18) & 2.7({ }1)({ }2) & \multicolumn{1}{c|}{---} & 0.60 & 0.91 \\ {\bf G} & 1.2361(1)(2) & 0.10265(3)(7) & -2.34({ }9)(49) & 2.4({ }1)({ }3) & -6.0({ }5)(28) & 0.35 & 0.70 \\ {\bf H} & 1.2360(1)(2) & 0.10269(3)(7) & -1.93({ }6)(35) & 2.5({ }1)({ }3) & -3.0({ }2)(17) & 0.47 & 0.70 \\ {\bf J} & 1.2356(2)(2) & 0.10285(4)(4) & \multicolumn{1}{c|}{---} & 2.74({ }1)({ }6) & \multicolumn{1}{c|}{---} & 0.88 & 1.40 \\ \hline \hline \end{tabular} \caption{Results of the fits for the extraction of $\bt${} and $m$ for $\text{SU}(2)$ gauge theory.} \label{tab:b2m-fits-su2} \end{table} \begin{table} \small \centering \begin{tabular}{c|ll|l|l|l|cc} \hline \hline Fit & \multicolumn{1}{c}{$\sqrt{\sigma}r_0$} & \multicolumn{1}{c|}{$aV_0$} & \multicolumn{1}{c|}{$\bar{b}_2\cdot10^{2}$} & \multicolumn{1}{c|}{$r_0m$} & \multicolumn{1}{c}{$\gamma^{(1)/(2)}_{0}\cdot10^{3}$} & $\chi^2/$dof & $R_{\rm min}/r_0$ \\ \hline \hline \multicolumn{2}{l}{{ }$\beta=11.0$} & & & & & & \\ {\bf F} & 1.2261(1)(3) & 0.2493(1)(5) & -1.11({ }3)(15) & 1.46({ }2)(32) & \multicolumn{1}{c|}{---} & 0.20 & 0.91 \\ {\bf G} & 1.2260(1)(1) & 0.2495(2)(2) & -6.2{ }({ }4)(62) & 1.11({ }3)(25) & -44({ }4)(82) & 0.01 & 0.91 \\ {\bf H} & 1.2260(1)(1) & 0.2495(2)(2) & -4.6{ }({ }3)(47) & 1.15({ }3)(24) & -30({ }3)(76) & 0.01 & 0.91 \\ {\bf J} & 1.2260(1)(2) & 0.2495(1)(3) & \multicolumn{1}{c|}{---} & 1.51({ }3)(19) & \multicolumn{1}{c|}{---} & 0.05 & 1.52 \\ \hline \multicolumn{2}{l}{{ }$\beta=14.0$} & & & & & & \\ {\bf F} & 1.2302(2)(3) & 0.2236(1)(5) & -0.64({ }1)(43) & 2.25({ }6)(37) & \multicolumn{1}{c|}{---} & 0.09 & 0.68 \\ {\bf G} & 1.2302(2)(2) & 0.2236(2)(3) & -2.5{ }({ }5)(25) & 1.63(12)(162) & 
-10({ }3)(13) & 0.004 & 0.68 \\ {\bf H} & 1.2302(2)(1) & 0.2236(2)(1) & -1.8{ }({ }3)(14) & 1.70(12)(71) & -5({ }2)({ }7) & 0.005 & 0.68 \\ {\bf J} & 1.2300(2)(1) & 0.2239(1)(1) & \multicolumn{1}{c|}{---} & 2.65(35)(37) & \multicolumn{1}{c|}{---} & 0.01 & 1.35 \\ \hline \multicolumn{2}{l}{{ }$\beta=20.0$} & & & & & & \\ {\bf F} & 1.2321(2)(1) & 0.18151(4)(1) & -0.89({ }2)(42) & 2.36({ }5)(45) & \multicolumn{1}{c|}{---} & 0.12 & 0.75 \\ {\bf G} & 1.2321(3)(2) & 0.1846{ }(4)(3) & -3.5{ }(46)(29) & 1.6{ }(59)(29) & -15(26)(17) & 0.03 & 0.75 \\ {\bf H} & 1.2321(2)(2) & 0.1815{ }(1)(2) & -2.5{ }({ }2)(12) & 1.6{ }({ }1)(156) & -7({ }1)({ }8) & 0.03 & 0.75 \\ {\bf J} & 1.2318(2)(2) & 0.1818{ }(1)(2) & \multicolumn{1}{c|}{---} & 2.93({ }2)(17) & \multicolumn{1}{c|}{---} & 0.21 & 1.19 \\ \hline \multicolumn{2}{l}{{ }$\beta=25.0$} & & & & & & \\ {\bf F} & 1.2324(1)(2) & 0.1570(1)(2) & -0.92({ }4)(15) & 5.0{ }({ }4)(28) & \multicolumn{1}{c|}{---} & 0.03 & 0.70 \\ {\bf G} & 1.2326(1)(3) & 0.1568(1)(2) & -2.8{ }({ }2)(15) & 1.7{ }({ }4)(272) & -10({ }1)(10) & 0.17 & 0.70 \\ {\bf H} & 1.2325(1)(3) & 0.1568(1)(2) & -2.0{ }({ }1)(12) & 1.8(4)(214276) & -5({ }1)({ }5) & 0.16 & 0.70 \\ {\bf J} & 1.2323(1)(1) & 0.1571(1)(1) & \multicolumn{1}{c|}{---} & 2.85({ }4)(12) & \multicolumn{1}{c|}{---} & 0.22 & 1.28 \\ \hline \multicolumn{2}{l}{{ }$\beta=31.0$} & & & & & & \\ {\bf F} & 1.2326(1)(1) & 0.13530(2)(1) & -0.90({ }1)(20) & 2.55({ }1)(37) & \multicolumn{1}{c|}{---} & 0.50 & 0.74 \\ {\bf G} & 1.2327(1)(1) & 0.13525(3)(1) & -3.0{ }({ }2)(11) & 1.67({ }5)(25) & -11({ }1)({ }7) & 0.31 & 0.74 \\ {\bf H} & 1.2327(3)(4) & 0.13527(2)(2) & -2.2{ }({ }7)({ }6) & 2(335)(34) & 5({ }7)({ }6) & 0.32 & 0.74 \\ {\bf J} & 1.2324(1)(1) & 0.13543(3)(1) & \multicolumn{1}{c|}{---} & 2.57({ }6)({ }9) & \multicolumn{1}{c|}{---} & 0.38 & 1.57 \\ \hline \hline \end{tabular} \caption{Results of the fits for the extraction of $\bt${} and $m$ for $\text{SU}(3)$ gauge theory.} \label{tab:b2m-fits-su3} 
\end{table} The fit results are tabulated in tables~\ref{tab:b2m-fits-su2} and~\ref{tab:b2m-fits-su3}. The first thing we note is that fit {\bf J} needs a value of $R_{\rm min}$ which is a factor between 1.5 and 2 larger than $R_{\rm min}$ for the other fits. In particular, $R_{\rm min}$ is typically larger than those values of $R$ where we found the $R^{-4}$ correction term to be dominant in section~\ref{sec:LO-correction}. This basically rules out the possibility that the boundary term is absent, in which case the full $R^{-4}$ correction would be given by the correction term from the massive modes. For the other fits the values of $R_{\rm min}$ are comparable, but typically a bit smaller than those of the fits in the previous section. This is not surprising given the fact that each fit has an additional free parameter compared to fits {\bf B} to {\bf D} from the previous section. Looking at the results for $\bt${} and $m$, we see that fits {\bf F} to {\bf H} always agree within uncertainties, since the uncertainties of fits {\bf G} and {\bf H} are large. The large uncertainties are most likely due to the fact that the available data does not allow us to constrain the additional fit parameter in these fits compared to fit {\bf F}. In the following analysis we will thus only use the results from fit {\bf F}; the uncertainties of the parameters from the other fits do not allow for any conclusions about the values of the parameters and continuum extrapolations. In this case our results for the individual lattice spacings are given by the values for fit {\bf F} in tables~\ref{tab:b2m-fits-su2} and~\ref{tab:b2m-fits-su3} and they only have two uncertainties, namely the statistical one and the systematic uncertainty associated with the particular choice for $R_{\rm min}$ (estimated in the usual way).
Unfortunately we cannot investigate the systematic uncertainty due to unknown higher order correction terms, but we believe that the presence of these terms is only a minor effect if massive modes are present. \begin{figure}[t] \centering \includegraphics[]{b2+m_vs_asq_plot_su3_paper.pdf} \caption{Results for $\bt${} and $r_0m$ versus the squared lattice spacing in units of $r_0$. Also shown are the continuum results from eqs.~\refc{eq:b2-conti-res2} and~\refc{eq:m-conti-res2}.} \label{fig:bm_vs_r0} \end{figure} The results for $\bt${} and $r_0m$ are plotted versus $a^2$ in figure~\ref{fig:bm_vs_r0}. In comparison to the results for $\bt${} from the previous section (figure~\ref{fig:br_vs_r0}), the results for $\text{SU}(2)$ show stronger fluctuations in the approach to the continuum, while the results for $\text{SU}(3)$ have larger uncertainties. In general, the results for $\bt${} are closer to zero when massive modes are included. The reason is the presence of the additional $R^{-4}$ term in eq.~\refc{eq:mass-fit}, which contaminates the boundary correction term and thus reduces the magnitude of $\bt${}. The results for $r_0m$ show a rather smooth approach to the continuum, with the exception of the point at $\beta=25.0$ for $\text{SU}(3)$ gauge theory, which, however, also has a large uncertainty. \subsection{Continuum extrapolations} \label{sec:b2m-conti} Ultimately we are interested in the comparison between the two sets of results in the continuum. To this end we perform a continuum extrapolation similar to the one in section~\ref{sec:b2-conti} by using fits of the types (1) to (3) for $\bt${} with the ansatz from eq.~\refc{eq:b2-conti}.
For $r_0m$ the ansatz reads \begin{equation} \label{eq:m-conti} r_0m = \big( r_0m \big)^{\rm cont} + b_{m,1} \Big(\frac{a}{r_0}\Big)^2 + b_{m,2} \Big(\frac{a}{r_0}\Big)^4 \end{equation} and we perform fits of the types (1) to (3) where $b_{\bar{b}_2,2}$ is replaced by $b_{m,2}$ in the fit definitions. The difference with respect to section~\ref{sec:b2-conti} is that we only perform continuum extrapolations for fit {\bf F}, so that no averaging in the continuum limit is necessary. Once more the fits have also been done for the results from $R_{\rm min}\pm1a$ to assess the uncertainty associated with the choice for $R_{\rm min}$. \begin{table} \small \centering \begin{tabular}{cc|ccc} \hline \hline $N$ & Fit & (1) & (2) & (3) \\ \hline 2 & $\big( \bar{b}_2 \big)^{\rm cont}$ & -0.0164(4)(56) & -0.0151(3)(27) & -0.0164(5)(56) \\ 3 & $\big( \bar{b}_2 \big)^{\rm cont}$ & -0.0087(3)({ }3) & -0.0098(1)(13) & -0.0091(2)({ }5) \\ \hline 2 & $\big( r_0m \big)^{\rm cont}$ & 2.56({ }7)(31) & 2.65({ }3)(29) & 2.61({ }6)(28) \\ 3 & $\big( r_0m \big)^{\rm cont}$ & 2.75(12)(25) & 2.60({ }5)(62) & 2.71({ }9)(29) \\ \hline \hline \end{tabular} \caption{Results for $\big( \bar{b}_2 \big)^{\rm cont}$ and $\big( r_0m \big)^{\rm cont}$ from fits (1) to (3) (see text). The first error is the statistical uncertainty, the second the systematic error due to the choice for $R_{\rm min}$.} \label{tab:b2m_conti_fitpar} \end{table} The results for the continuum extrapolations with fits (1) to (3) are given in table~\ref{tab:b2m_conti_fitpar}. We usually obtain a rather good continuum extrapolation, even though for $\bt${} in $\text{SU}(2)$ gauge theory the extrapolation leads to rather large values of $\chi^2$/dof, around 5 to 10, due to the fluctuations of $\bt${} at the small values of $a$. Since the continuum extrapolation is a bit more problematic in this case we will use the results from fit (3) for our final results.
This fit appears to be a bit more stable than the continuum extrapolation linear in $a^2$ using all data points. Once more the uncertainty concerning the continuum extrapolation is estimated via the maximal difference of the result of fit (3) with respect to the results of the other fits. The continuum results for $\bt${} and $r_0m$, given in eqs.~\refc{eq:b2-conti-res2} and~\refc{eq:m-conti-res2} in section~\ref{sec:results}, are shown in figure~\ref{fig:bm_vs_r0}. As the figure indicates, we expect the hierarchy between the results from $\text{SU}(2)$ and $\text{SU}(3)$ gauge theory to remain in the analysis with massive modes. The inclusion of massive modes only changes the quantitative results while retaining the qualitative features. The results for $r_0m$ in the continuum agree within error bars between $\text{SU}(2)$ and $\text{SU}(3)$. \section{Summary and discussion of the final results} \label{sec:results} The previous three sections contained a number of interesting results, which are, however, difficult to extract from the somewhat lengthy discussion. Before we move on to the conclusions let us thus summarise and discuss the main findings in some more detail. \subsection{Summary of results} \label{sec:results-sum} In section~\ref{sec:groundstate} we have extracted the Sommer parameter and the string tension with high accuracy from our results for the static potential. The results are given in table~\ref{tab:scale-fits}. After that we have performed a continuum extrapolation of the string tension to obtain the final result \begin{equation} \label{eq:sig-conti-final} \big( \sqrt{\sigma} r_0 \big)^{\rm cont} = \left\{ \begin{array}{ll} 1.2356(3)(1) & \quad \text{for} \:\:\text{SU}(2) \vspace*{2mm} \\ 1.2325(3)(2) & \quad \text{for} \:\:\text{SU}(3) \,. \end{array} \right.
\end{equation} Here the string tension has been extracted by parameterising the potential with the LC spectrum, eq.~\refc{eq:LC-spectrum}, up to the point where the result for $\sigma$ agreed with the NLO expansion of eq.~\refc{eq:LC-spectrum}. The first uncertainty is the statistical one and the second originates from the continuum extrapolation. In three dimensions any comparison to physical units is meaningless. Nonetheless, identifying $r_0$ with 0.5 fm~\cite{Sommer:1993ce} (to define the unit `fm' in three dimensions) and using the standard conversion factor $\hbar c=197.3 (\ldots)$~fm MeV we obtain \begin{equation} \label{eq:sig-conti-final-fm} \big( \sqrt{\sigma} \big)^{\rm cont} = \left\{ \begin{array}{ll} 487.6(2)(1) \:\:\text{MeV} & \quad \text{for} \:\:\text{SU}(2) \vspace*{2mm} \\ 486.4(2)(1) \:\:\text{MeV} & \quad \text{for} \:\:\text{SU}(3) \,, \end{array} \right. \end{equation} respectively. A comparison to the latest results for the continuum string tension in three dimensions~\cite{Lucini:2002wg,Bringoltz:2006gp} is only possible in terms of the Karabali-Kim-Nair prediction~\cite{Karabali:1998yq}, i.e. in terms of the continuum extrapolated coupling. We leave this type of comparison to the next publication. Our results at finite lattice spacing are fully in agreement with, but more precise than, the values given in~\cite{Teper:1998te,HariDass:2007tx}. With a constant continuum extrapolation the $\text{SU}(2)$ results from~\cite{HariDass:2007tx} give a continuum result around 1.234, which is a bit smaller than, but comparable to, our result. After the extraction of the leading order (linear) behaviour we have compared the results for the potential to the leading order EST prediction, namely the LC spectrum, and extracted the exponent of the leading order correction in $1/R$.
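As a cross-check, the conversion to `MeV' in eq.~\refc{eq:sig-conti-final-fm} amounts to multiplying by $\hbar c/r_0$; the following snippet (using a slightly more precise value of $\hbar c$ than the rounded one quoted above) reproduces the numbers:

```python
# Unit conversion behind eq. (sig-conti-final-fm): identify r0 = 0.5 fm
# and multiply sqrt(sigma)*r0 by hbar*c / r0.
hbar_c = 197.327                      # fm MeV
r0 = 0.5                              # fm
sqrt_sigma_r0 = {"SU(2)": 1.2356, "SU(3)": 1.2325}
sqrt_sigma_mev = {k: v * hbar_c / r0 for k, v in sqrt_sigma_r0.items()}
# about 487.6 MeV for SU(2) and 486.4 MeV for SU(3)
```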
The results, shown in figure~\ref{fig:m-exponent}, are consistent with an exponent of 4, as expected from the EST, where the leading order correction is the boundary term proportional to $\bt${}. We then extracted the value for $\bt${}, excluding contributions from possible massive modes (cf. section~\ref{sec:beyondEST}). The results at finite lattice spacing are given in table~\ref{tab:b2-results} and the continuum results from the individual extrapolations in table~\ref{tab:b2_conti_fitpar}. As the final continuum estimates for $\bt${} we obtain \begin{equation} \label{eq:b2-conti-res} \big( \bar{b}_2 \big)^{\rm cont} = \left\{ \begin{array}{ll} -0.0257\,(3)(38)(17)(3) & \quad \text{for} \:\:\text{SU}(2) \vspace*{2mm} \\ -0.0187\,(2)(13)(\:\:4)(2) & \quad \text{for} \:\:\text{SU}(3) \,. \end{array} \right. \end{equation} The first error is purely statistical, the second is the systematic error due to the unknown correction terms to the potential, the third is the one associated with the particular choice for the minimal value of $R$ included in the fit for the extraction of $\bt${}, $R_{\rm min}$, and the fourth is the systematic uncertainty due to the continuum extrapolation (for details see section~\ref{sec:b2-ana1}). These values indicate that the magnitude of $\bt${} decreases with increasing $N$, meaning that it could potentially vanish in the limit $N\to\infty$. The result also shows that $\bt${} is indeed non-universal, as expected, since its value is not constrained in the EST. Concerning the extraction of $\bt${}, the main uncertainty comes from the fact that massive modes, or, equivalently, a possible rigidity term in the EST, lead to a contaminating additional $R^{-4}$ correction. The presence of this term is expected to change the value of $\bt${}. At present it is unclear whether such a contamination will ultimately be present or not, so that we have to take this possibility into account.
On the basis of our simulations we also cannot exclude this possibility, since the fits to the potential reported in section~\ref{sec:m-test} work equally well compared to the results from section~\ref{sec:b2-extract}. We can exclude, however, the possibility that the $R^{-4}$ correction is fully due to the correction associated with the massive modes, since fit {\bf J} (see section~\ref{sec:m-test}) leads to a much larger value for $R_{\rm min}$ than the other fits. Repeating the whole analysis, we see that the accuracy is only sufficient for fit {\bf F} from tables~\ref{tab:b2m-fits-su2} and~\ref{tab:b2m-fits-su3}. Using these results we obtain the continuum results listed in table~\ref{tab:b2m_conti_fitpar} and the final continuum results \begin{equation} \label{eq:b2-conti-res2} \big( \bar{b}_2 \big)^{\rm cont} = \left\{ \begin{array}{ll} -0.0164\,(5)(56)(13) & \quad \text{for} \:\:\text{SU}(2) \vspace*{2mm} \\ -0.0091\,(2)(\:\:5)(\:\:7) & \quad \text{for} \:\:\text{SU}(3) \,. \end{array} \right. \end{equation} Here the first error is purely statistical, the second is the one associated with the particular choice for $R_{\rm min}$ and the third systematic uncertainty is the one due to the continuum extrapolation. In this analysis we have only been able to use one of the fits for a reliable extraction of $\bt${} (fit {\bf F} from tables~\ref{tab:b2m-fits-su2} and~\ref{tab:b2m-fits-su3}), which is why one of the systematic uncertainties cannot be estimated. The comparison of eqs.~\refc{eq:b2-conti-res} and~\refc{eq:b2-conti-res2} reveals that the inclusion of the contamination reduces the magnitude of $\bt${} but does not alter the qualitative feature of a decrease in magnitude of $\bt${} with increasing $N$. From the extraction of $\bt${} when massive modes are included we also obtain an estimate for the mass of the massive modes. 
The continuum results are listed in table~\ref{tab:b2m_conti_fitpar}, and as our final continuum results we get \begin{equation} \label{eq:m-conti-res2} \big( r_0m \big)^{\rm cont} = \left\{ \begin{array}{ll} 2.61\,(6)(28)(\:\:5) & \quad \text{for} \:\:\text{SU}(2) \vspace*{2mm} \\ 2.71\,(9)(29)(11) & \quad \text{for} \:\:\text{SU}(3) \,, \end{array} \right. \end{equation} where the uncertainties are as in eq.~\refc{eq:b2-conti-res2}. The interpretation of this ``mass'' is an open question. The first possibility is that it represents a massive mode on the string worldsheet, which could indicate that we observe the three-dimensional analogue to the worldsheet axion (cf. section~\ref{sec:beyondEST}). Recall that the topological coupling term does not exist in 3d, so that the massive mode will appear as a quasi-free mode on the worldsheet up to coupling terms including two massive boson fields. On the other hand, the contribution could also be due to the rigidity term, in which case the results from eq.~\refc{eq:m-conti-res2} can be translated into results for the coupling $\alpha$ via eq.~\refc{eq:mdef}. It is intriguing to note that the results for $m$, divided by $\sqrt{\sigma}$ from eq.~\refc{eq:sig-conti-final}, are very similar to the results for the mass of the worldsheet axion from~\cite{Dubovsky:2013gi,Athenodorou:2017cmw}, at least for the case of $\text{SU}(2)$ gauge theory. For $\text{SU}(3)$ gauge theory our result is somewhat larger than the result from~\cite{Dubovsky:2013gi,Athenodorou:2017cmw}; however, in contrast to those results, ours is extrapolated to the continuum. In fact, our results around $\beta=25.0$ are already fully compatible with the results from~\cite{Dubovsky:2013gi,Athenodorou:2017cmw}. This opens up the possibility that, indeed, we are looking at a massive mode on the three-dimensional worldsheet which is similar in nature to the worldsheet axion.
It is also important to ask whether the value for $m$ extracted in our analysis still complies with the framework of the EST. From eq.~\refc{eq:break-scale} we expect modes of masses down to a few times $\sqrt{\sigma}$ to be integrated out. The masses we have obtained here are around twice as large as $\sqrt{\sigma}$. In our fits we could typically go down to $R/r_0\approx0.7$. If we take this as a sign of the scale where the EST is bound to break down, this leads to a cut-off scale of $1.2\times\sqrt{\sigma}$. However, this estimate could well underestimate the true cut-off scale, so that a mass of $2\times\sqrt{\sigma}$ can potentially be below that bound. \subsection{Comparison between continuum EST and data} \begin{figure}[t] \centering \includegraphics[]{cr_coupling_paper.pdf} \caption{Results for the curvature $c(R)$ plotted versus $R/r_0$. The curve labeled with `LT' is the L\"uscher constant, the one labeled with `LC' is the light cone potential with the continuum extrapolated string tension and the curve labeled with `LC$+O(R^{-4})$' is the LC curve including the boundary term with the continuum extrapolated value of $\bt$.} \label{fig:cr-coupling} \end{figure} We will now compare the continuum predictions for the potential from the EST to the results for the potential on our finest lattice spacings. A suitable quantity to visualise the subleading contributions to the potential is the curvature, associated with the second derivative \begin{equation} \label{eq:cr-term} c(R) = \frac{R^3}{2} \frac{\partial^2 V(R)}{\partial R^2} \,. \end{equation} In the $R\to\infty$ limit $c(R)$ is expected to reproduce the L\"uscher constant $-\pi/24$. The results for $c(R)$ are compared to the continuum results for the EST in figure~\ref{fig:cr-coupling}.
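As a quick symbolic sanity check of this definition (an illustrative sketch, not part of the paper's analysis; SymPy assumed available), the leading-order potential $V(R)=\sigma R+\mu-\pi/(24R)$ gives $c(R)=-\pi/24$ exactly, and the light cone potential, written here in units where $\sigma=1$, reproduces the same value in the $R\to\infty$ limit:

```python
import sympy as sp

R = sp.symbols('R', positive=True)
sigma, mu = sp.symbols('sigma mu', positive=True)

def curvature(V):
    # c(R) = (R^3 / 2) * d^2 V / dR^2
    return sp.simplify(R**3 / 2 * sp.diff(V, R, 2))

# Leading-order potential: only the Luscher term contributes to the curvature
V_lo = sigma * R + mu - sp.pi / (24 * R)
c_lo = curvature(V_lo)                      # exactly -pi/24 for all R

# Light cone (Arvis-type) potential in d = 3, in units where sigma = 1
V_lc = sp.sqrt(R**2 - sp.pi / 12)
c_lc_limit = sp.limit(curvature(V_lc), R, sp.oo)

print(c_lo, c_lc_limit)                     # -pi/24 -pi/24
```

The choice of symbols and units is ours; the check only illustrates that both curves share the universal $-\pi/24$ asymptotics.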
The black curve in the figure is the LC spectrum, eq.~\refc{eq:LC-spectrum}, with the continuum string tensions from eq.~\refc{eq:sig-conti-final} (the uncertainties are smaller than the line), and the red band includes the boundary term from eq.~\refc{eq:est-spec} with the continuum boundary coefficient $\bt${} from eq.~\refc{eq:b2-conti-res}. The inclusion of the boundary term obviously enhances the agreement with the data significantly down to small values of $R$. We also note that the two curves are basically indistinguishable at the scale of the plot (they are not identical, however). For $\text{SU}(2)$ gauge theory the data at finite lattice spacing already agrees rather well with the continuum EST, while for $\text{SU}(3)$ gauge theory the data lies below the curve. We have not shown the curve including massive modes in the plot since it overlaps with the `LC$+O(R^{-4})$' curve and compares very similarly to the data. \section{Conclusions} \label{sec:concl} In this article we have performed a high precision study of the static $q\bar{q}$ potential in three-dimensional $\text{SU}(N)$ gauge theory with $N=2$ and 3 and compared the results to the potential obtained from the effective string theory. In particular, we obtained accurate results with full control over the systematic effects for the continuum string tension, the non-universal boundary coefficient $\bt${} and, for the extended analysis, the ``mass'' of the possible massive mode within the EST. The results are summarised and discussed in detail in section~\ref{sec:results-sum}. In particular, we could show that the leading order correction to the light cone spectrum is of $O(R^{-4})$ (cf. section~\ref{sec:LO-correction}). If massive modes are present, the results for $\bt${} change (see eqs.~\refc{eq:b2-conti-res} and~\refc{eq:b2-conti-res2}) due to another correction of $O(R^{-4})$, contaminating the result for $\bt${}. 
However, the contribution from the massive modes alone is not enough to describe the data, so that we can conclude that $\bar{b}_2\neq0$.~\footnote{This statement is true up to the unlikely possibility of fine-tuned higher order corrections, mimicking the effect of an $O(R^{-4})$ term.} The data for $\bt${} shows an interesting trend towards zero with increasing $N$, leading to the possibility that $\bt${} could vanish in the limit $N\to\infty$. This result is also independent of whether or not we include massive modes in the analysis. In the context of generalisations of the AdS/CFT correspondence to large $N$ gauge theories, this is an interesting result since it imposes constraints on the fundamental string theory for the dual version of Yang-Mills theories at large $N$. We will investigate this issue further in the next publication in this series of papers. The main obstacle in our analysis of $\bt${} is the fact that it is impossible to judge whether or not massive modes, or, equivalently, the rigidity term (or even both) are present. While some properties (like the decrease in magnitude for $N\to\infty$) are independent of this issue, the exact value of $\bt${} can only be extracted once the issue is resolved. To this end it would be desirable to have an expression for the excited state energies in the presence of a massive mode, or, alternatively, the rigidity term. This could potentially help to discriminate between the existence or non-existence of such terms and, in addition, it could help to discriminate between the two types of additional contributions. It is intriguing to see that the results for the mass of the possible massive modes are in good agreement with the masses found in~\cite{Dubovsky:2013gi,Athenodorou:2017cmw}. It is thus possible that we are seeing a massive mode on the worldsheet which is similar in nature to the worldsheet axion.
Note that in 3d the topological coupling term is absent, so that the analogue to the worldsheet axion appears as a quasi-free mode on the worldsheet coupling to the GBs via the covariant derivative and possible higher order coupling terms. It will be interesting to see whether the mass of the massive modes remains consistent with the one of the worldsheet axion for $N\to\infty$. \section*{Acknowledgements} I am grateful to Marco Meineri for numerous enlightening discussions during the collaboration on the joint review~\cite{Brandt:2016xsp} and I would like to thank F. Cuteri for carefully going through the manuscript. The simulations have been done in part on the clusters Athene and iDataCool at the University of Regensburg and the FUCHS cluster at the Center for Scientific Computing, University of Frankfurt. I am indebted to the institutes for offering these facilities. During this work I have received support from DFG via SFB/TRR 55 and the Emmy Noether Programme EN 1064/2-1.
\section{INTRODUCTION} Networked dynamic systems consist of multiple dynamic units that are interconnected via a network. In recent years, the area of networked systems has received extensive attention from the research community. There are many examples of networked systems in our everyday lives such as social and transportation networks. Analysis of certain classes of complex networks such as biological networks, power grids, and robotic networks is of increasing interest in a number of scientific and engineering communities. A network abstracts the communication topology for exchanging information between agents or nodes in order to coordinate reaching a network-level goal. When different agents exchange sensitive data, one of the main concerns is ensuring privacy. In a dynamic setting, network observability captures the information content available to an individual agent in the network. In this work, we propose an adaptation mechanism for the network topology that aims to minimize observability, which in turn can be used to minimize the information exposed to an intruder. Observability-based design of a multi-agent network can be found in recent works~\cite{pequito2014optimal, kibangou2014observability, kia2015dynamic, alaeddini2016optimal}. The notion of observability has also been used in multi-robot localization~\cite{mariottini2005vision, zhou2008robot, huang2011observability, sharma2012graph}, social networks~\cite{golovin2011adaptive}, electric power grid management~\cite{wu1988real}, and biological systems~\cite{kang2009computational, chis2011structural, whalen2012observability}. Protocols with privacy guarantees have been previously addressed in~\cite{kefayati2007secure, huang2012differentially, manitara2013privacy, giraldo2014delay}, where the injection of random offsets into the states of the agents has been considered.
Among other works that have examined network privacy, the authors of~\cite{pequito2014design} have considered the connection between privacy in the network and its observability. Privacy guarantees in~\cite{pequito2014design} ensure that each agent is unable to retrieve the initial states of non-neighboring agents in the network. Since all agents are potentially malicious, the optimal solution presented in~\cite{pequito2014design} is inherently conservative, resulting in the removal of some edges to generate a topology in which as many nodes as possible are part of the unobservable subspace of the graph. The main contribution of this work is presenting the problem of privacy maximization in a network as a regret minimization problem. The present work considers the design of the weights on the edges of the network in an online manner in order to achieve maximum privacy. This is done using the regret minimization framework; in particular, we do not require any noise injection or edge removal, which other works in network privacy have relied upon in the past. In this work, we assume that the structure of the network has been given, and that, in fact, it is connected. We then proceed to design the weights in the network in order to minimize the amount of information about a node available to other agents in the network from the observability perspective. In order to guarantee privacy, our goal is to propose a network adaptation mechanism such that if a node is compromised by an intruder, the amount of information leaked by this node to the rest of the network is minimized. \section{BACKGROUND} \label{sec:prelim} The interactions amongst agents in a network can be abstracted as an undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\vec{w})$. Each agent in the network is denoted as a node, and edges represent communication links between agents. The number of nodes will be denoted by $N$ and the number of edges as $M$. The node set, $\mathcal{V}$, consists of all nodes in the network.
On the other hand, the edge set, $\mathcal{E}$, is comprised of pairs of nodes $\{i,j\}$, if nodes $i$ and $j$ are adjacent. Each edge is assumed to have a weight $w_l \in \mathbb{R}_{>0}$. The neighborhood set $\mathcal{N}(i)$ of node $i$ is composed of all agents in $\mathcal{V}$ that are adjacent to node $i$. The weighting vector $\vec{w} \in \mathbb{R}_{>0}^M$, with dimension $M$, represents the weights on the edges. These edges are encoded through the index mapping $\sigma$ such that $l=\sigma(i,j)$, if and only if edge $l$ connects nodes $i$ and $j$. The edge weights will be denoted as $w_{ij}$ and $w_l$, interchangeably. The adjacency matrix is an $N \times N$ symmetric matrix with $\mathcal{A}[i,j] = 1$ when $\{i,j\} \in \mathcal{E}$, and $\mathcal{A}[i,j] = 0$, otherwise. The weighted adjacency matrix is an $N \times N$ symmetric matrix with $\mathcal{A}[i,j] = w_l$ when $\{i,j\} \in \mathcal{E}$ and edge $l$ connects them, and $\mathcal{A}[i,j] = 0,$ otherwise. The weighted degree $\delta_i$ of node $i$ is the sum of the weights of the edges connecting node $i$ to its neighbors. The weighted degree matrix, $\Delta_w$, is a diagonal matrix with $\delta_i$ as its $i$th diagonal entry. The incidence matrix $E(\mathcal{G})$ is an $N \times M$ matrix. Column $\sigma(i,j)$ of the incidence matrix is $\vec{e}_{i}-\vec{e}_{j}$, which is represented by $\vec{e}_{ij}$ in this paper. The graph Laplacian is defined as $L=\Delta_w-\mathcal{A}_w$, and is a positive semi-definite matrix, which satisfies $\vec{1}^TL =\vec{0}$ and $L \vec{1}=\vec{0}$. Here, $\vec{1}$ denotes a vector with all elements equal to one. If the matrix $A$ is positive semidefinite (respectively, positive definite), denoted as $A \succeq 0$ (respectively, $A \succ 0$), then all eigenvalues of $A$ are nonnegative (respectively, positive).
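These constructions can be checked numerically (an illustrative sketch; the graph, weights, and index mapping are made up for the example). The weighted Laplacian can be built either as $\Delta_w-\mathcal{A}_w$ or, via the standard identity, as $E\,\mathrm{diag}(\vec{w})\,E^T$ from the incidence matrix, and it satisfies $L\vec{1}=\vec{0}$ and $L \succeq 0$:

```python
import numpy as np

# Hypothetical 4-node graph; edge l = sigma(i, j) is the l-th entry below
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
w = np.array([0.5, 1.0, 0.7, 1.3])          # positive edge weights
N, M = 4, len(edges)

# Weighted adjacency and weighted degree matrices
A_w = np.zeros((N, N))
for (i, j), wl in zip(edges, w):
    A_w[i, j] = A_w[j, i] = wl
Delta_w = np.diag(A_w.sum(axis=1))

# Incidence matrix: column sigma(i, j) is e_i - e_j
E = np.zeros((N, M))
for l, (i, j) in enumerate(edges):
    E[i, l], E[j, l] = 1.0, -1.0

L = Delta_w - A_w
assert np.allclose(L, E @ np.diag(w) @ E.T)      # L = E diag(w) E^T
assert np.allclose(L @ np.ones(N), 0.0)          # L 1 = 0
assert np.all(np.linalg.eigvalsh(L) > -1e-12)    # L is positive semi-definite
```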
We consider the following operation over matrices: the Hadamard product is a binary operation that takes two matrices of the same dimension, and produces another matrix where each element $[i,j]$ is the product of elements $[i,j]$ of the original two matrices: $$(A \circ B)[i,j] = A[i,j] \cdot B[i,j]\,.$$ The Hadamard product is associative and distributive, and, unlike the matrix product, it is also commutative. It is known that the Hadamard product of two positive semi-definite matrices is positive semi-definite. \subsection{Consensus Algorithm} The consensus algorithm is a control policy, based on local information, that enables the agents to reach agreement on certain states of interest. Let us provide a brief overview of the consensus protocol. Consider $x_i(t) \in \mathbb{R}$ to be the $i$th node's state at time $t$. The continuous-time consensus protocol is then defined as $$\displaystyle \dot{x}_i(t) = \sum_{\{i,j\} \in \mathcal{E}} w_{ij} \left(x_j(t)-x_i(t)\right)\,.$$ In a compact form with $\vec{x}(t) \in \mathbb{R}^N$, the corresponding collective dynamics is represented as $\dot{\vec{x}} = -L(\vec{w}) \vec{x}$. Thus, each agent only requires its relative state with respect to its neighbors for the consensus dynamics. The dynamics of a connected network performing the consensus algorithm converges to agreement on the state \cite{olfati2007consensus}. \subsection{Observability of a Dynamical System} Observability, the feasibility of reconstructing system states from the time history of its outputs, is essential in the design of control systems. The observability Gramian is the most common construct for evaluating system observability.
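Before moving on, the consensus protocol above can be illustrated with a minimal simulation (a sketch with an assumed weighted path graph, not taken from the paper): for a connected undirected graph, $\dot{\vec{x}} = -L(\vec{w})\vec{x}$ drives all states to the average of the initial condition.

```python
import numpy as np

# Assumed weighted path graph on 4 nodes: edges (0,1), (1,2), (2,3)
edges = [(0, 1), (1, 2), (2, 3)]
w = [1.0, 0.8, 1.2]

N = 4
L = np.zeros((N, N))
for (i, j), wl in zip(edges, w):
    L[i, i] += wl; L[j, j] += wl
    L[i, j] -= wl; L[j, i] -= wl

# Forward-Euler integration of dx/dt = -L x up to t = 50
x0 = np.array([3.0, -1.0, 0.5, 2.0])
x, dt = x0.copy(), 0.01
for _ in range(5000):
    x = x - dt * (L @ x)

# All states agree on the average of the initial condition
assert np.allclose(x, x0.mean(), atol=1e-6)
```

The weights and initial condition are arbitrary; the point is only that agreement is reached on the mean, since $\vec{1}^T L = \vec{0}$ preserves the state sum.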
For a linear system \begin{equation} \dot{\vec{x}} = A \vec{x} + B \vec{u}, \ \ \vec{y} = C \vec{x} \,, \label{linear} \end{equation} the observability Gramian \cite{Krener09}, \begin{equation} W_O = \int_{0}^{t_f} e^{A^Tt} C^T C e^{At} \mathrm{d}t \,, \label{LinGram} \end{equation} can be computed to evaluate the observability of a system. A system is fully observable if and only if the corresponding observability Gramian is full rank \cite{Muller72}. \begin{proposition} Minimizing the trace of the observability Gramian is equivalent to minimizing the inverse of the estimation error covariance matrix for a linear system. \end{proposition} \begin{proof} See Theorem 1 of \cite{alaeddini2016observability}. \end{proof} An alternative method to evaluate observability of a nonlinear system is using a relatively new concept of the observability covariance, namely the \emph{empirical observability Gramian} \cite{hahn2002improved, Krener09}. This tool provides a more accurate description of a nonlinear system's observability, while being less computationally expensive than some other approaches, e.g.\,, Lie-algebra-based approaches. For a given small perturbation $\epsilon > 0$ of the state, let $\vec{x}_0^{\pm i} = \vec{x}_0 \pm \epsilon \vec{e}_i$ be the initial condition and $\vec{y}^{\pm i}(t)$ be the corresponding output, where $\vec{e}_i$ is the $i^{\text{th}}$ unit vector in $\mathbb{R}^n$. For the system \begin{empheq}[left=\Sigma_1 :\empheqlbrace ]{align} & \dot{\vec{x}}=\vec{f}(\vec{x},\vec{u}), & \vec{x} & \in \mathbb{R}^n, \ \ \vec{u} \in \mathbb{R}^p \nonumber \\ & \vec{y}=\vec{h}(\vec{x}), & \vec{y} & \in \mathbb{R}^m\,, \label{generalNL} \end{empheq} the empirical observability Gramian, $W_O$, is an $n \times n$ matrix, whose $(i,j)$ component, $W_{O_{ij}}$, is given by \begin{equation} \frac{1}{4\epsilon^2} \int_{0}^{\infty} \left(\vec{y}^{+i}(t)-\vec{y}^{-i}(t) \right)^T \left(\vec{y}^{+j}(t)-\vec{y}^{-j}(t) \right) \mathrm{d}t.
\label{EmpObsGram} \end{equation} It can be shown that if the system is smooth, then the empirical observability Gramian converges to the local observability Gramian as $\epsilon \to 0$. The largest eigenvalue \cite{Singh05}, smallest eigenvalue \cite{Krener09}, the determinant \cite{DeVries13, Serpas13}, and the trace of the inverse \cite{DeVries13} of the observability Gramian have all been used as measures for observability. \subsection{Online Convex Optimization} Online learning has recently become popular for tackling very large-scale estimation problems. The convergence properties of these algorithms are well understood and have been analyzed in a number of different frameworks, including game theory \cite{hazan2007logarithmic}. In the game-theoretic framework for analyzing online learning, a game called the \emph{Online Convex Optimization} problem, a special case of a general online optimization problem, is defined. In this game, a player chooses a point from a convex set. After choosing the point, an adversary reveals a convex loss function, and the online player receives a loss corresponding to the point she had chosen. This scenario is then repeated. To measure the performance of the online player in this game, a metric called \emph{regret} is used. Regret is the difference between the loss of the online player and the loss of the best fixed point in hindsight. The algorithm introduced in \cite{hazan2007logarithmic} is similar to the well-known Newton-Raphson method for offline optimization problems. This algorithm attains regret proportional to the logarithm of the number of iterations when the loss (cost) functions are convex. Assume that an online player iteratively chooses a point from a non-empty, bounded, closed and convex set in the Euclidean space denoted by $\mathcal{P}$ for $T$ times.
At iteration $t$, the algorithm takes the history of cost functions, $\{f_1,\cdots, f_{t-1}\}$, as input, and suggests $x_t \in \mathcal{P}$ to the online player to choose. After committing to this $x_t$, a convex cost function $f_t : \mathcal{P} \rightarrow \mathbb{R}$ is revealed. The loss associated with the choice of the online player is the value of the cost function at the point she had committed to, i.e.\,, $f_t(x_t)$. The objective is to minimize the cumulative penalty. The regret of the online player at the end of the game, i.e.\,, at time $T$, is defined as the difference between the total cost and the cost of the best single decision, where the ``best'' is chosen with the benefit of hindsight. Formally, regret is defined as \begin{equation} \mathcal{R}_T = \sum_{t=1}^T f_t(x_t) - \underset{x \in \mathcal{P}}{\text{min}} \sum_{t=1}^T f_t(x)\,. \end{equation} The objective of the online algorithm is to achieve a guaranteed low regret. Specifically, it is desired that the online algorithm guarantees a sub-linear $\mathcal{R}_T$, i.e.\,, $\displaystyle \frac{\mathcal{R}_T}{T} \rightarrow 0$. A sub-linear regret guarantees that on average the algorithm performs as well as the best fixed action in hindsight. \section{System Modeling} Consider a multi-agent network consisting of $N$ agents and let $\mathcal{G}$ denote the inter-agent communication graph, where each agent $i$ has the ability to transmit scalar data to its neighbors, denoted by $\mathcal{N}(i) = \{j : (i, j) \in \mathcal{E}\}$. The state dynamics of each agent in the network is assumed to be stable and linear, and the neighboring agents $i$ and $j$ are connected by a linear diffusion with weight $w_{ij}$. The network assumed in this paper is a network of multiple identical agents. The scalar state of each agent follows a stable dynamics, $\dot{x}_i = - x_i$, augmented with a weighted consensus dynamics.
Specifically, the dynamics of the network is given by: \begin{equation} \label{networkLTI} \begin{aligned} \dot{\vec{x}} &= \left\{ \sum_{l=1}^M \left[ -\left( \vec{e}_{ij}\right) \left( \vec{e}_{ij}\right)^T w_{ij} \right] - I \right\} \vec{x}\,, \ \ l=\sigma(i,j)\,. \end{aligned} \end{equation} The dynamics of the network can then be written as \begin{equation} \dot{\vec{x}} = A(\vec{w}) \vec{x} = \left( A_0+\sum_{l=1}^M A_l w_{ij} \right) \vec{x} \,, \label{wParameter} \end{equation} where, \begin{equation} \label{A0Als} A_0 = - I, \ \ A_l = -\vec{e}_{ij} \vec{e}_{ij}^T\,, \ \ l = \sigma(i,j)\,. \end{equation} \begin{lemma} \label{A_nd} The matrix $A(\vec{w})$ is negative definite for all positive weights. \end{lemma} \subsection{Optimization Problem} Now, assume an intruder in the network aims to retrieve information by connecting to an agent in the network, say agent $k$. Then the data being directly exposed to the intruder is $y = x_k$. The objective here is to minimize the information leaking from this node through an adaptation mechanism on the network. Thus, we proceed to minimize a particular metric of observability of the network with respect to the measurement $y = x_k$. This optimization problem for privacy can be written as \begin{equation} \label{MaxPrivacy} \begin{aligned} & \underset{\vec{w}}{\text{minimize}} & & \trace \left(W_{O}\right) \\ & \text{subject to} & & \dot{\vec{x}} = A(\vec{w}) \vec{x} \,\\ & \text{} & & y = x_k \,. \end{aligned} \end{equation} Now, let us discretize the time period $[0,t_f]$, such that $0=t_0<t_1<\cdots<t_f$. 
Thus, the trace of the empirical observability Gramian is given by \begin{equation} \begin{aligned} \trace\left(W_{O}\right) &= \frac{1}{4\epsilon^2} \int_{0}^{t_f} \sum\limits_{i=1}^N \left\| y^{+i}(\tau)-y^{-i}(\tau) \right\|^2 \mathrm{d}\tau \\ &=\sum_{t} \int_{t}^{t+\Delta} \frac{1}{4\epsilon^2} \sum\limits_{i=1}^N \left\| x_k^{+i}(\tau)-x_k^{-i}(\tau) \right\|^2 \mathrm{d}\tau \\ &=\sum_{t} \int_{t}^{t+\Delta} \sum\limits_{i=1}^N \left( \vec{e}_i^T e^{A\tau} \vec{e}_k \right)^2 \mathrm{d}\tau\\ &=\sum_{t} \int_{t}^{t+\Delta} \trace \left( \vec{e}_k \vec{e}_k^T e^{2A\tau} \right) \mathrm{d}\tau\\ &=\sum_{t} \int_{t}^{t+\Delta} \left[e^{2A\tau}\right]_{k,k} \mathrm{d}\tau \,. \end{aligned} \label{TrW_LTV} \end{equation} \begin{lemma} \label{partialCVX} The function $\phi(\vec{w},\tau)=\left[e^{2A(\vec{w})\tau}\right]_{k,k}$ is convex with respect to $\vec{w}>0$, for all $\tau>0$. \end{lemma} \begin{proof} In the Appendix, \cref{CVX_expMatrix}, it is proved that $\left[e^{A(\vec{w})}\right]_{k,k}$ is convex for all matrices $A(\vec{w})=A_0(\vec{w})+\sum_{l} A_l(\vec{w}) \vec{w}[l]$, for $A(\vec{w})$ defined in \eqref{wParameter}. \end{proof} \subsection{Online Algorithm for Weight Selection} We now consider the problem of online adaptation of the weights in the network to minimize \begin{equation} \begin{aligned} & \text{minimize} & & f_t(\vec{w}) \\ & \text{subject to} & & \vec{w}_{min} \leq \vec{w} \leq \vec{w}_{\max} \\ & \text{} & & \vec{1}^T \vec{w} = 1 \,, \end{aligned} \label{OnlineMaxObs} \end{equation} where \begin{equation} \label{costOnline} f_t(\vec{w}) = \int_{t}^{t+\Delta} \phi(\vec{w},\tau) \mathrm{d}\tau \,. \end{equation} \begin{theorem} The cost function \eqref{costOnline} is convex. \end{theorem} \begin{proof} In \cref{partialCVX}, we have shown that $\phi(\vec{w},\tau)$ is convex. Since the integral of a convex function is convex, the result of the theorem follows.
\end{proof} \begin{theorem} The cost function \eqref{costOnline} has bounded gradients. \end{theorem} \begin{proof} First let us calculate the derivative of $\phi(\vec{w},\tau)$ with respect to $w_{ij}$, \begin{equation} \label{gradientwij} \begin{aligned} &\frac{\partial \phi(\vec{w},\tau)}{\partial w_{ij}} = \vec{e}_k^T \left( 2\tau e^{2A(\vec{w})\tau} A_l \right) \vec{e}_k \\ &= -\vec{e}_k^T \left( 2\tau e^{2A(\vec{w})\tau} \vec{e}_{ij} \vec{e}_{ij}^T \right) \vec{e}_k \\ &= -2\tau \trace \left[\left( \vec{e}_k \vec{e}_k^T e^{A(\vec{w})\tau} \right)^T \left(\vec{e}_{ij} \vec{e}_{ij}^T e^{A(\vec{w})\tau} \right) \right]\\ & = -2\tau \sum_{m,n} \left[ \left( \vec{e}_k \vec{e}_k^T e^{A(\vec{w})\tau} \right) \circ \left( \vec{e}_{ij} \vec{e}_{ij}^T e^{A(\vec{w})\tau} \right) \right]_{m,n} \,. \end{aligned} \end{equation} Thus, \begin{equation} \nabla \phi(\vec{w},\tau) = -2\tau \text{ diag} \left( E^T e^{2A\tau} \vec{e}_k \vec{e}_k^T E \right) \,, \end{equation} and, \begin{equation} \vec{g}_t(\vec{w})=\nabla f_t(\vec{w}) = \int_{t}^{t+\Delta} \nabla \phi(\vec{w},\tau) \mathrm{d}\tau \,. \end{equation} Since $A$ is a negative definite matrix for all $\vec{w}$ (\cref{A_nd}), $\lambda_i\{A\} < 0$; thus the matrix $\tau e^{2A\tau}$ has finite entries for all $\tau>0$. Therefore, $\nabla \phi(\vec{w},\tau)$ is bounded, and the integral over a finite period of time, $\tau \in [t,t+\Delta]$ is bounded. Therefore, there always exists an upper-bound, $G$, for the gradient such that \begin{equation} \begin{aligned} & \underset{\vec{w} \in \mathcal{P}, t \in [T]}{\text{sup}} & & \| \nabla f_t(\vec{w}) \|_2 \leq G \,. \end{aligned} \end{equation} \end{proof} \subsection{Regret Minimization} Regret is the difference between the cost of the sequence of actions and the performance of the best single action, $\vec{w}^*$, taken at every time step, considering $f_t(\vec{w})$ is known for all time a priori. 
The regret of an action sequence $\{\vec{w}_t\}$ is \begin{equation} \mathcal{R}_T = \sum_{t=1}^T \left( f_t(\vec{w}_t) - f_t(\vec{w}^*) \right)\,. \end{equation} Here, we are interested in applying the celebrated $\log(T)$-regret online algorithm \cite{hazan2007logarithmic}. The \emph{Online Newton Step} (ONS) algorithm is given in Algorithm \ref{ONS_Alg}. This algorithm is straightforward to implement, and the running time is $O(M)$ per iteration given the gradient. This algorithm is based on the well-known Newton-Raphson method for offline optimization and uses second-order information of the cost functions. To implement this algorithm, the convex set $\mathcal{P}$ should be bounded, such that \begin{equation} D = \underset{\vec{x}, \vec{y} \in \mathcal{P}}{\text{max}} \| \vec{x}-\vec{y} \|_2 < \infty \,; \end{equation} $D$ is called the diameter of the underlying convex set $\mathcal{P}$. \begin{algorithm}[!h] Inputs: convex set $\mathcal{P} \subset \mathbb{R}^M$, initial $\vec{w}_1 \in \mathcal{P}$\; Set $\displaystyle \beta := \frac{1}{8GD}$ and $\displaystyle \epsilon = \frac{1}{\beta^2 D^2}$\; In iteration $s=1$: use point $\vec{w}_1 \in \mathcal{P}$\; \Repeat{Convergence}{ $\displaystyle A_s = A_0+\sum_{l=1}^M A_l \vec{w}_s[l]$\; $\displaystyle \nabla \phi(\vec{w}_s,\tau) = -2\tau \text{ diag} \left( E^T e^{2A\tau} \vec{e}_k \vec{e}_k^T E \right)$\; $\displaystyle \vec{g}_{s} := \int_{s}^{s+\Delta} \nabla \phi \mathrm{d}\tau $\; $\displaystyle \mathbb{A}_s : = \sum_{i=1}^s \vec{g}_{i} \vec{g}_{i}^T + \epsilon I_M$\; $\displaystyle \vec{w}_{s+1} = \Pi_{\mathcal{P}} \left( \vec{w}_s - \frac{1}{\beta} \mathbb{A}_s^{-1} \vec{g}_{s} \right)$\; $s := s+1$\; Here, $\displaystyle \Pi_{\mathcal{P}}$ denotes the \emph{projection} in the norm induced by $\displaystyle \mathbb{A}_s$, $\Pi_{\mathcal{P}}^{\mathbb{A}_s}(\vec{y}) = \arg\min_{\vec{x} \in \mathcal{P}} (\vec{y}-\vec{x})^T \mathbb{A}_s (\vec{y}-\vec{x})$\; \vspace{3mm} } \vspace{3mm} \caption{Online Newton Step
Algorithm} \label{ONS_Alg} \end{algorithm} Using the Online Newton Step algorithm, which is a second-order method, is not essential here. We can easily apply a first-order online learning algorithm, e.g.\,, Online Gradient Descent \cite{hazan2007logarithmic}, and attain regret proportional to the logarithm of the number of iterations. A benefit of using the Online Newton Step algorithm is its faster convergence compared to first-order algorithms. The main advantage of a first-order algorithm compared to second-order algorithms is its ease of generalization to distributed scenarios. \section{Simulation} Our privacy-guaranteeing online algorithm is tested on a random 9-node graph. We assume there exists a single foreign agent attacking the network. The location of the foreign agent is fixed from $t = 0$ to $t = 25$, at which point the intruder changes its location to another node. The best graph topology for this foreign agent evolution is calculated over 50 iterations. The resultant regret is depicted in \cref{25regret}, emphasizing the performance agreement with the $O(\log(T))$ bound found by Hazan \emph{et al.} \cite{hazan2007logarithmic}. \begin{figure} \begin{center} {\includegraphics[width=.5\textwidth]{25regret.pdf}} \end{center} \caption{Regret over time of Algorithm \ref{ONS_Alg}.} \label{25regret} \end{figure} The online Newton step algorithm, given in Algorithm \ref{ONS_Alg}, is also applied to a 15-node graph with multiple foreign nodes whose locations change randomly over time. The locations of the foreign agents are detected by the agents marked with large circles. The edge weights are initialized randomly and constrained such that $\vec{1}^T \vec{w}=1$, $\vec{w}_{min} = 0.01$ and $\vec{w}_{\max} = 0.99$. The resultant graph after each online Newton step is depicted in \cref{multi10change}. The first three figures (top of \cref{multi10change}) depict the algorithm's response to static foreign agent locations.
The notable characteristic across these figures is the increase of the weights on edges close to the foreign agents. The intuition behind this behavior is that the state of the node connected to the malicious node needs to synchronize quickly in order to become indistinguishable to the foreign agent. Note that the state of a node becomes less distinguishable as it reaches its equilibrium, and the measure of observability decreases. At time $t=10$, the locations of the red circles (two of the foreign agents) change, and the three plots on the bottom depict the algorithm's response to this change in the foreign agent locations. Note that it is assumed here that the networked system is stable; therefore, the states of all nodes in the network synchronize as time passes. Thus, adding an intruder after the network becomes synchronous does not change the weights significantly. This comes from the fact that the state of a system becomes indistinguishable as it approaches its equilibrium point. \begin{figure*} \begin{center} {\includegraphics[width=1.0\textwidth]{multi10change.pdf}} \end{center} \caption{Re-weighting of a network with multiple foreign nodes, denoted with large red circles.} \label{multi10change} \end{figure*} \section{CONCLUSIONS} \label{sec:conclusion} The main goal in this paper was to design a proper selection of the weights in the dynamics of a networked system induced by the communication graph, such that the privacy of the network is guaranteed, in the sense that an agent cannot retrieve the initial states of other agents. It was assumed that we are uncertain about the intention of the foreign agent attacking the network. The privacy of the network was posed as an online optimization problem on the Gramian of a weighted network. An online learning algorithm was used to find an optimal set of weights that guarantees sub-linear regret.
The use of the empirical observability Gramian was discussed in the context of privacy for networked control systems. It is worth noting that the empirical observability Gramian can also be used to evaluate the observability of nonlinear systems; however, we considered a parametrized linear system in order to guarantee a logarithmic regret bound.
\section{Introduction and statement of results} \label{intro} Ideal lattices are important objects in number theory and discrete geometry, which have been extensively studied in a series of papers by Eva Bayer-Fluckiger and her co-authors in the 1990's and 2000's (see, for instance, \cite{bayer1}, \cite{bayer2}, \cite{bayer_nebe}). In this note, we consider the simplest kind of ideal lattices coming from quadratic number fields. Let $K$ be a quadratic number field, and let us write ${\mathcal O}_K$ for its ring of integers. Then $K={\mathbb Q}(\sqrt{D})$ (real quadratic) or $K={\mathbb Q}(\sqrt{-D})$ (imaginary quadratic), where $D$ is a positive squarefree integer. The embeddings $\sigma_1, \sigma_2 : K \to {\mathbb C}$ can be used to define the standard embedding $\sigma_K$ of $K$ into ${\mathbb R}^2$: if $K={\mathbb Q}(\sqrt{D})$, then $\sigma_K : K \to {\mathbb R}^2$ is given by $\sigma_K = (\sigma_1,\sigma_2)$; if $K={\mathbb Q}(\sqrt{-D})$, then $\sigma_2=\overline{\sigma_1}$, and $\sigma_K=(\Re(\sigma_1), \Im(\sigma_1))$, where $\Re$ and $\Im$ stand for real and imaginary parts, respectively. Each nonzero ideal $I \subseteq {\mathcal O}_K$ becomes a lattice of full rank in ${\mathbb R}^2$ under this embedding, which we will denote by $\Lambda_K(I) := \sigma_K(I)$. Such lattices are called planar {\it ideal lattices}. Given a lattice $\Lambda \subset {\mathbb R}^2$ of full rank with a basis ${\boldsymbol a}_1,{\boldsymbol a}_2$, we can write $A=({\boldsymbol a}_1 {\boldsymbol a}_2)$ for the corresponding basis matrix, and then $\Lambda = A {\mathbb Z}^2$. The corresponding norm form is defined as $$Q_A({\boldsymbol x}) = {\boldsymbol x}^t A^t A {\boldsymbol x},$$ and we say that the lattice is {\it integral} if the coefficient matrix $A^t A$ of this quadratic form has integer entries; it is easy to see that this definition does not depend on the choice of a basis. It is also easy to see that every ideal lattice is integral. 
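For concreteness, this integrality is easy to check computationally. The following sketch (the ideal is an illustrative choice) computes the Gram matrix $A^t A$ exactly in integer arithmetic for the imaginary quadratic case with $-D \not\equiv 1 \ (\operatorname{mod}\ 4)$, where ${\mathcal O}_K = {\mathbb Z}[\sqrt{-D}]$ and an element $x + y\sqrt{-D}$ embeds as $(x, -y\sqrt{D})$:

```python
def gram(v, w, D):
    """<sigma_K(v), sigma_K(w)> for v=(x,y) representing x + y*sqrt(-D):
    sigma_K(x + y*sqrt(-D)) = (x, -y*sqrt(D)), so the inner product is
    x1*x2 + D*y1*y2 -- always an integer."""
    return v[0] * w[0] + D * v[1] * w[1]

# Illustrative ideal I = (2, 1 + sqrt(-5)) in Z[sqrt(-5)], i.e. D = 5,
# with integral basis 2 and 1 + sqrt(-5):
D = 5
b1, b2 = (2, 0), (1, 1)
G = [[gram(b1, b1, D), gram(b1, b2, D)],
     [gram(b2, b1, D), gram(b2, b2, D)]]
print(G)  # [[4, 2], [2, 6]] -- integer entries, so the lattice is integral
```

The same computation applies verbatim to any integral basis of any nonzero ideal in ${\mathbb Z}[\sqrt{-D}]$.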
We define $\operatorname{det}(\Lambda)$ to be $|\operatorname{det}(A)|$, again independent of the basis choice, and the (squared) {\it minimum} or {\it minimal norm} $$|\Lambda| = \min \{ \|{\boldsymbol x}\|^2 : {\boldsymbol x} \in \Lambda \setminus \{{\boldsymbol 0}\} \} = \min \{ Q_A({\boldsymbol y}) : {\boldsymbol y} \in {\mathbb Z}^2 \setminus \{{\boldsymbol 0}\} \},$$ where $\|\ \|$ stands for the usual Euclidean norm. Then each ${\boldsymbol x} \in \Lambda$ such that $\|{\boldsymbol x}\|^2 = |\Lambda|$ is called a {\it minimal vector}, and the set of minimal vectors of $\Lambda$ is denoted by $S(\Lambda)$. A lattice $\Lambda$ is called {\it well-rounded} (abbreviated WR) if the set $S(\Lambda)$ contains two linearly independent vectors. These vectors form a basis for $\Lambda$, and we refer to such a basis as a {\it minimal basis}. WR lattices are important in discrete geometry and in a variety of optimization problems (see \cite{martinet}). In this note, we study WR ideal lattices coming from quadratic number fields. The general investigation of WR ideal lattices has recently been started in~\cite{lf:petersen}, where, in particular, infinite families of WR ideal lattices coming from real and imaginary quadratic number fields have been constructed. We continue this investigation. Let us call an ideal $I$ in the ring of integers ${\mathcal O}_K$ of a quadratic number field $K={\mathbb Q}(\sqrt{\pm D})$ well-rounded (WR) if the corresponding planar lattice $\Lambda_K(I)$ is WR. We will say that a positive squarefree integer $D$ satisfies the {\it $\nu$-nearsquare condition} if it has a divisor $d$ with $\sqrt{\frac{D}{\nu}} \leq d < \sqrt{D}$, where $\nu > 1$ is a real number. Our main result is the following theorem. \begin{thm} \label{ideal_IWR} If $D$ satisfies the 3-nearsquare condition, then the rings of integers of quadratic number fields $K={\mathbb Q}(\sqrt{\pm D})$ contain WR ideals; moreover, this condition is also necessary when $K={\mathbb Q}(\sqrt{-D})$, so that in the imaginary quadratic case the statement is an if and only if.
This in particular implies that a positive proportion (more than $1/5$) of real and imaginary quadratic number fields contain WR ideals, more specifically \begin{equation} \label{K_density} \liminf_{N \to \infty} \frac{ \left| \left\{ K = {\mathbb Q}(\sqrt{\pm D}) : K \text{ contains a WR ideal, } 0 < D \leq N \right\} \right|}{ \left| \left\{ K = {\mathbb Q}(\sqrt{\pm D}) : 0 < D \leq N \right\} \right|} \geq \frac{\sqrt{3}-1}{2\sqrt{3}}. \end{equation} Moreover, for every $D$ satisfying the 3-nearsquare condition the corresponding imaginary quadratic number field $K={\mathbb Q}(\sqrt{-D})$ contains only finitely many WR ideals, up to similarity of the corresponding lattices, and this number is \begin{equation} \label{ideal_est_1} \ll \min \left\{ 2^{\omega(D)-1}, \frac{2^{\omega(D)}}{\sqrt{\omega(D)}} \right\}, \end{equation} where $\omega(D)$ is the number of prime divisors of $D$ and the constant in the Vinogradov notation $\ll$ does not depend on $D$. \end{thm} \begin{rem} \label{class_number} Two WR ideal lattices coming from the same imaginary quadratic field are similar if and only if the corresponding ideals are in the same ideal class, which explains finiteness of the number of WR ideals in this case, as asserted in Theorem~\ref{ideal_IWR}. In fact, this number is strictly less than the class number $h_K$ unless $K={\mathbb Q}(\sqrt{-1})$ or ${\mathbb Q}(\sqrt{-3})$, since ${\mathcal O}_K$ is WR if and only if $K$ is cyclotomic, as established in~\cite{lf:petersen}. We can then compare the bound of~\eqref{ideal_est_1} to estimates on the class number of imaginary quadratics. 
For instance, Siegel's original estimate~\cite{siegel_class_number} asserts that for $K={\mathbb Q}(\sqrt{-D})$, \begin{equation} \label{s-cl-n} \log h_K \sim \log \sqrt{|\Delta|}, \end{equation} where $\Delta$ is the corresponding fundamental discriminant: $$\Delta = \left\{ \begin{array}{ll} -D & \mbox{if $-D \equiv 1 (\operatorname{mod} 4)$} \\ -4D & \mbox{if $-D \not\equiv 1 (\operatorname{mod} 4)$,} \end{array} \right.$$ and so $h_K$ grows like $D^{1/2+o(1)}$ as $D \to \infty$. On the other hand, the average order of $\omega(D)$ is $\log \log D$ (see~\cite{hardy}, \S\S22.11-22.13), and so the upper bound of~\eqref{ideal_est_1} is usually about $\frac{(\log D)^{\log 2}}{\sqrt{\log \log D}}$. Hence most ideal classes do not contain WR ideals. \end{rem} \smallskip The paper is structured as follows. In Section~\ref{iwr_section} we develop some notation, recalling a parameterization of integral well-rounded lattices in the plane, and prove a simple technical lemma which we use in our main argument. In Section~\ref{ideal} we demonstrate explicit constructions of WR ideals, in particular specifying infinite families of similarity classes of planar WR lattices containing ideal lattices, and then prove Theorem~\ref{ideal_IWR}. In addition, we further discuss the case of real quadratic fields, as well as certain criteria for quadratic number fields to contain WR principal ideals at the end of Section~\ref{ideal}. We are now ready to proceed. \bigskip \section{Integral WR lattices in the plane} \label{iwr_section} We start with some notation, following~\cite{fletcher1}. An important equivalence relation on lattices is geometric similarity: two planar lattices $\Lambda_1, \Lambda_2 \subset {\mathbb R}^2$ are called {\it similar}, denoted $\Lambda_1 \sim \Lambda_2$, if there exists a positive real number $\alpha$ and a $2 \times 2$ real orthogonal matrix $U$ such that $\Lambda_2 = \alpha U \Lambda_1$.
It is easy to see that similar lattices have the same algebraic structure, i.e., for every sublattice $\Gamma_1$ of a fixed index in $\Lambda_1$ there is a sublattice $\Gamma_2$ of the same index in $\Lambda_2$ so that $\Gamma_1 \sim \Gamma_2$. If $\Lambda \subset {\mathbb R}^2$ is a full rank WR lattice, then its set of minimal vectors $S(\Lambda)$ contains 4 or 6 vectors, and this number is 6 if and only if $\Lambda$ is similar to the hexagonal lattice $${\mathcal H} := \begin{pmatrix} 2 & 1 \\ 0 & \sqrt{3} \end{pmatrix} {\mathbb Z}^2$$ (see, for instance Lemma 2.1 of \cite{lf:petersen}). Any two linearly independent vectors ${\boldsymbol x},{\boldsymbol y} \in S(\Lambda)$ form a minimal basis. While this choice is not unique, it is always possible to select ${\boldsymbol x},{\boldsymbol y}$ so that the angle $\theta$ between these two vectors lies in the interval $[\pi/3,\pi/2]$, and any value of the angle in this interval is possible. From now on when we talk about a minimal basis for a WR lattice in the plane, we will always mean such a choice. Then the angle between minimal basis vectors is an invariant of the lattice, and we call it the {\it angle of the lattice} $\Lambda$, denoted $\theta(\Lambda)$; in other words, if ${\boldsymbol x},{\boldsymbol y}$ is any minimal basis for $\Lambda$ and $\theta$ is the angle between ${\boldsymbol x}$ and ${\boldsymbol y}$, then $\theta = \theta(\Lambda)$ (see \cite{hex} for details and proofs of the basic properties of WR lattices in ${\mathbb R}^2$). In fact, it is easy to notice that two WR lattices $\Lambda_1,\Lambda_2 \subset {\mathbb R}^2$ are similar if and only if $\theta(\Lambda_1)=\theta(\Lambda_2)$ (see \cite{hex} for a proof). The following parameterization of integral well-rounded (IWR) lattices is discussed in~\cite{fletcher1}. 
Let $\Lambda \subset {\mathbb R}^2$ be an IWR lattice, then \begin{equation} \label{cos_sin} \cos \theta(\Lambda) = \frac{p}{q},\ \sin \theta(\Lambda) = \frac{r\sqrt{D}}{q} \end{equation} for some $r,q,D \in {\mathbb Z}_{>0}$, $p \in {\mathbb Z}_{\geq 0}$ such that \begin{equation} \label{prqD} p^2+Dr^2=q^2,\ \gcd(p,q)=1,\ \frac{p}{q} \leq \frac{1}{2}, \text{ and } D \text{ squarefree}, \end{equation} and so $\Lambda$ is similar to \begin{equation} \label{OprqD} \Omega_D(p,q) := \begin{pmatrix} q & p \\ 0 & r\sqrt{D} \end{pmatrix} {\mathbb Z}^2. \end{equation} Moreover, for every $p,r,q,D$ satisfying \eqref{prqD}, $\Omega_D(p,q)$ is an IWR lattice with the angle $\theta(\Omega_D(p,q))$ satisfying \eqref{cos_sin}, and $\Omega_D(p,q) \sim \Omega_{D'}(p',q')$ if and only if $(p,r,q,D) = (p',r',q',D')$. In addition, if $\Lambda$ is any IWR lattice similar to $\Omega_D(p,q)$, then \begin{equation} \label{min_lattice} \left| \Lambda \right| \geq \left| \frac{1}{\sqrt{q}} \Omega_D(p,q) \right|, \end{equation} where the lattice $\frac{1}{\sqrt{q}} \Omega_D(p,q)$ is also IWR. Due to this property, we call $\frac{1}{\sqrt{q}} \Omega_D(p,q)$ a {\it minimal} IWR lattice in its similarity class. We say that an IWR planar lattice $\Lambda$ is of {\it type $D$} for a squarefree $D \in {\mathbb Z}_{>0}$ if it is similar to some $\Omega_D(p,q)$ as in \eqref{OprqD}. As discussed in~\cite{fletcher1}, the type is uniquely defined, i.e., $\Lambda$ cannot be of two different types. Moreover, a planar IWR lattice $\Lambda$ is of type $D$ for some squarefree $D \in {\mathbb Z}_{>0}$ if and only if all of its IWR finite index sublattices are also of type $D$. If this is the case, $\Lambda$ contains a sublattice similar to $\Omega_D(p,q)$ for every 4-tuple $(p,r,q,D)$ as in~\eqref{prqD}. 
Hence the set of planar IWR lattices is split into types which are indexed by positive squarefree integers with similarity classes inside of each type $D$ being in bijective correspondence with solutions to the ternary Diophantine equation $p^2+r^2D=q^2$. \smallskip We also need the following counting functions, as introduced in~\cite{fletcher1}. For fixed positive integers $D$ and $r$, $D$ squarefree, define \begin{equation} \label{r_count} f(r) = \left| \left\{ (p,q) \in {\mathbb Z}_{>0}^2 : q^2-p^2 = r^2D,\ \gcd(p,q) = 1,\ 0< \frac{p}{q} \leq \frac{1}{2} \right\} \right|, \end{equation} as well as $$f_1(r) = \left| \left\{ (p,q) \in {\mathbb Z}_{>0}^2 : q^2-p^2 = r^2D,\ \gcd(p,q) = 1 \right\} \right|,$$ and \begin{equation} \label{f_21} f_2(r) = \left| \left\{ (p,q) \in {\mathbb Z}_{>0}^2 : q^2-p^2 = r^2D,\ 0< \frac{p}{q} \leq \frac{1}{2} \right\} \right|. \end{equation} Notice that \begin{equation} \label{f_f1_f2} f(r) \leq \min \{ f_1(r), f_2(r) \}. \end{equation} The function $f_1(r)$ is well-studied; in particular, Theorem 6.2.4 of \cite{mollin} implies that \begin{equation} \label{f1_est} f_1(r) \leq 2^{\omega(r^2D)-1} = 2^{\omega(rD)-1}. \end{equation} On the other hand, equation~(34) of~\cite{fletcher1} establishes that \begin{equation} \label{f_22} f_2(r) = \left| \left\{ b \in {\mathbb Z}_{>0} : b \mid r^2D,\ r \sqrt{D} < b \leq r\sqrt{3D} \right\} \right|, \end{equation} and then inequality~(38) of~\cite{fletcher1} guarantees that \begin{equation} \label{f2_est} f_2(r) \leq O \left( \frac{\tau(r^2D)}{\sqrt{\omega(r^2D)}} \right), \end{equation} where $\tau(r^2D)$ is the number of divisors of~$r^2D$. We now prove a lemma, which will be useful to us in Section~\ref{ideal}. 
\begin{lem} \label{r_1} The equation $p^2+D=q^2$ with an odd squarefree positive integer $D$ has an integral solution $(p,q)$ satisfying $p/q \leq 1/2$ if and only if \begin{equation} \label{D_div} D=d_1d_2 \text{ for some } d_1,d_2 \in {\mathbb Z}_{>0} \text{ with } d_1 < d_2,\ \sqrt{\frac{D}{3}} \leq d_1 < \sqrt{D}. \end{equation} If this is the case, then $\gcd(d_1,d_2)=1$, and if $(p,q)$ is such a solution, then $\gcd(p,q)=1$. Moreover, if $D$ is a positive even squarefree integer, then there are no solutions to $p^2+D=q^2$. \end{lem} \proof Notice that $p^2+D=q^2$ has an integral solution if and only if it has a positive integral solution. The number of positive integral solutions $(p,q)$ with $p/q \leq 1/2$ is then given by $f_2(1)$ as in \eqref{f_21}, and so the equation has solutions if and only if $f_2(1) \geq 1$. Hence \eqref{f_22} with $r=1$ implies that this happens if and only if $$D=d_1d_2 \text{ for some } d_1,d_2 \in {\mathbb Z}_{>0} \text{ with } d_1 < d_2,\ \sqrt{D} < d_2 \leq \sqrt{3D},$$ which is equivalent to \eqref{D_div}. Now, if \eqref{D_div} holds, then $\gcd(d_1,d_2)=1$ since $D$ is squarefree. Similarly, suppose $(p,q)$ is a solution with $\gcd(p,q)=g$, then $g^2 \mid D$, and so $g=1$. Finally, suppose $2 \mid D = q^2-p^2$. Then $2 \mid q-p$ or $2 \mid q+p$; since $q-p$ and $q+p$ have the same parity, it follows that 2 divides both $q-p$ and $q+p$, and so $2^2 \mid D$, which contradicts $D$ being squarefree. Hence $p^2+D=q^2$ has no solutions. \endproof \bigskip \section{Ideal WR lattices in the plane} \label{ideal} In this section we discuss ideal well-rounded lattices in the plane, proving Theorem~\ref{ideal_IWR}. We start by setting some standard notation, following \cite{lf:petersen} (see also \cite{buell} for a detailed exposition). As in Section~\ref{intro} above, let $K = {\mathbb Q}(\sqrt{\pm D})$ be a real or imaginary quadratic number field, where $D$ is a positive squarefree integer, and let us write ${\mathcal O}_K$ for its ring of integers.
We have ${\mathcal O}_K={\mathbb Z}[\delta]$, where \begin{equation} \label{delta} \delta = \left\{ \begin{array}{ll} - \sqrt{D} & \mbox{if $K={\mathbb Q}(\sqrt{D})$, $D \not\equiv 1 (\operatorname{mod} 4)$} \\ \frac{1-\sqrt{D}}{2} & \mbox{if $K={\mathbb Q}(\sqrt{D})$, $D \equiv 1 (\operatorname{mod} 4)$} \\ - \sqrt{-D} & \mbox{if $K={\mathbb Q}(\sqrt{-D})$, $-D \not\equiv 1 (\operatorname{mod} 4)$} \\ \frac{1-\sqrt{-D}}{2} & \mbox{if $K={\mathbb Q}(\sqrt{-D})$, $-D \equiv 1 (\operatorname{mod} 4)$.} \end{array} \right. \end{equation} The embeddings $\sigma_1, \sigma_2 : K \to {\mathbb C}$ are given by $$\sigma_1(x+y\sqrt{ \pm D}) = x+y\sqrt{ \pm D},\ \sigma_2(x+y\sqrt{ \pm D}) = x-y\sqrt{ \pm D}$$ for each $x+y\sqrt{\pm D} \in K$, where $\pm$ is determined by whether $K$ is a real or an imaginary quadratic field, respectively. The number field norm on $K$ is defined by $${\mathbb N}(x+y\sqrt{\pm D}) = \sigma_1(x+y\sqrt{ \pm D}) \sigma_2(x+y\sqrt{ \pm D}) = \left( x+y\sqrt{ \pm D} \right) \left( x-y\sqrt{ \pm D} \right).$$ Now $I \subseteq {\mathcal O}_K$ is an ideal if and only if \begin{equation} \label{I_abg} I = \{ ax + (b+g\delta)y : x,y \in {\mathbb Z} \}, \end{equation} for some $a,b,g \in {\mathbb Z}_{\geq 0}$ such that \begin{equation} \label{abg} b < a,\ g \mid a,b,\text{ and } ag \mid {\mathbb N}(b+g\delta). \end{equation} Such integral basis $a,b+g\delta$ is unique for each ideal $I$ and is called the {\it canonical basis} for $I$. For each ideal $I$, we consider the corresponding planar ideal lattice $\Lambda_K(I) = \sigma_K(I)$ as defined in Section~\ref{intro}. An investigation of well-rounded ideal lattices has been initiated in \cite{lf:petersen} with some special attention devoted to the planar case. In particular, it has been established in \cite{lf:petersen} that the full ring of integers ${\mathcal O}_K$ is WR if and only if $K={\mathbb Q}(\sqrt{-1}), {\mathbb Q}(\sqrt{-3})$. 
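This dichotomy is easy to confirm by brute force. The following sketch (illustrative only; the search box $|x|, |y| \leq 4$ is an assumption that happens to suffice for these small values of $D$, not a proof) evaluates the norm form of ${\mathcal O}_K = {\mathbb Z}[\delta]$ for $K={\mathbb Q}(\sqrt{-D})$ exactly and tests whether two linearly independent minimal vectors exist:

```python
from fractions import Fraction

def norm_form(D):
    """Q(x,y) = |sigma_K(x + y*delta)|^2 for O_K = Z[delta], K = Q(sqrt(-D)),
    with delta chosen as in the two imaginary cases of (delta);
    exact rational arithmetic via Fraction."""
    if (-D) % 4 == 1:  # delta = (1 - sqrt(-D))/2
        return lambda x, y: Fraction(x * x + x * y) + Fraction(1 + D, 4) * y * y
    else:              # delta = -sqrt(-D)
        return lambda x, y: Fraction(x * x + D * y * y)

def is_wellrounded_ring(D, box=4):
    Q = norm_form(D)
    pts = [(x, y) for x in range(-box, box + 1)
                  for y in range(-box, box + 1) if (x, y) != (0, 0)]
    m = min(Q(x, y) for x, y in pts)
    S = [(x, y) for x, y in pts if Q(x, y) == m]  # minimal vectors found
    # WR iff S contains two linearly independent vectors:
    return any(x1 * y2 - x2 * y1 != 0 for x1, y1 in S for x2, y2 in S)

print([D for D in (1, 2, 3, 5, 7, 11) if is_wellrounded_ring(D)])  # [1, 3]
```

Only $D=1$ and $D=3$ survive, matching the statement that ${\mathcal O}_K$ is WR exactly for $K={\mathbb Q}(\sqrt{-1})$ and ${\mathbb Q}(\sqrt{-3})$.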
On the other hand, infinite families of real and imaginary quadratic fields with WR ideals have been constructed in \cite{lf:petersen}. Here we extend and generalize these constructions, showing that many similarity classes of planar IWR lattices contain ideal lattices. Let $K={\mathbb Q}(\sqrt{\pm D})$ and $I \subseteq {\mathcal O}_K$ be an ideal with the canonical basis $a, b+g\delta$, as above. It is then easy to check that $\Lambda_K(I)$ has the following shape, which we record in a convenient form for the proof of our next lemmas: \begin{trivlist} \item If $K={\mathbb Q}({\sqrt{D}})$, $D \not\equiv 1 (\operatorname{mod} 4)$, then \begin{equation} \label{B1} \Lambda_K(I) = \begin{pmatrix} a & b-g\sqrt{D} \\ a & b+g\sqrt{D} \end{pmatrix} {\mathbb Z}^2 = \begin{pmatrix} a-b+g\sqrt{D} & b-g\sqrt{D} \\ a-b-g\sqrt{D} & b+g\sqrt{D} \end{pmatrix} {\mathbb Z}^2. \end{equation} \item If $K={\mathbb Q}({\sqrt{D}})$, $D \equiv 1 (\operatorname{mod} 4)$, then \begin{equation} \label{B2} \Lambda_K(I) = \begin{pmatrix} a & \frac{2b+g}{2} - \frac{g\sqrt{D}}{2} \\ a &\frac{2b+g}{2} + \frac{g\sqrt{D}}{2} \end{pmatrix} {\mathbb Z}^2 = \begin{pmatrix} \frac{2a-2b-g}{2} + \frac{g\sqrt{D}}{2} & \frac{2b+g}{2} - \frac{g\sqrt{D}}{2} \\ \frac{2a-2b-g}{2} - \frac{g\sqrt{D}}{2} & \frac{2b+g}{2} + \frac{g\sqrt{D}}{2} \end{pmatrix} {\mathbb Z}^2. \end{equation} \item If $K={\mathbb Q}({\sqrt{-D}})$, $-D \not\equiv 1 (\operatorname{mod} 4)$, then \begin{equation} \label{B3} \Lambda_K(I) = \begin{pmatrix} a & b \\ 0 & -g\sqrt{D} \end{pmatrix} {\mathbb Z}^2 = \begin{pmatrix} a-b & b \\ g\sqrt{D} & -g\sqrt{D} \end{pmatrix} {\mathbb Z}^2. \end{equation} \item If $K={\mathbb Q}({\sqrt{-D}})$, $-D \equiv 1 (\operatorname{mod} 4)$, then \begin{equation} \label{B4} \Lambda_K(I) = \begin{pmatrix} a & \frac{2b+g}{2} \\ 0 & - \frac{g\sqrt{D}}{2} \end{pmatrix} {\mathbb Z}^2 = \begin{pmatrix} \frac{2a-2b-g}{2} & \frac{2b+g}{2} \\ \frac{g\sqrt{D}}{2} & - \frac{g\sqrt{D}}{2} \end{pmatrix} {\mathbb Z}^2. 
\end{equation} \end{trivlist} \begin{lem}\label{lat_to_ideal} Let $D$ be a positive odd squarefree integer satisfying \eqref{D_div} of Lemma~\ref{r_1} and let $(p,q)$ be a solution to the equation $p^2+D=q^2$ with $p/q \leq 1/2$. Then the similarity class of $\Omega_D(p,q)$ contains ideal lattices. More specifically, let \begin{equation} \label{ab_D} (a,b) = \left\{ \begin{array}{ll} \left( p+q, \frac{p+q-1}{2} \right) & \mbox{if $D \equiv 1 (\operatorname{mod} 4)$} \\ \left( 2(p+q), p+q \right) & \mbox{if $D \equiv 3 (\operatorname{mod} 4)$}, \end{array} \right. \end{equation} and define \begin{eqnarray} \label{IJ_ab} & & I = I(p,q) := \{ ax + (b+\delta)y : x,y \in {\mathbb Z} \} \subset {\mathcal O}_{{\mathbb Q}(\sqrt{D})} \nonumber \\ & & J = J(p,q) := \{ ax + (b+\delta)y : x,y \in {\mathbb Z} \} \subset {\mathcal O}_{{\mathbb Q}(\sqrt{-D})} \end{eqnarray} for this choice of $(a,b)$, where $\delta$ is defined accordingly as in \eqref{delta}. Then $I,J$ are WR ideals in their respective quadratic number rings, and the ideal lattices $\Lambda_{{\mathbb Q}(\sqrt{D})}(I)$, $\Lambda_{{\mathbb Q}(\sqrt{-D})}(J)$ belong to the similarity class of $\Omega_D(p,q)$. \end{lem} \proof It is easy to verify that $(a,b)$ as in \eqref{ab_D} and $g=1$ satisfy the conditions of \eqref{abg} with $\delta$ chosen respectively as in \eqref{delta}. Therefore $I$ and $J$ defined in \eqref{IJ_ab} are indeed ideals with the corresponding canonical basis $a,b+\delta$. Now a straight-forward calculation shows that the ideal lattices $\Lambda_{{\mathbb Q}(\sqrt{D})}(I), \Lambda_{{\mathbb Q}(\sqrt{-D})}(J)$ are WR with the corresponding minimal basis matrices being the second matrices in formulas \eqref{B1}-\eqref{B4}, respectively, and cosine of the angle of each such lattice being $p/q$. This completes the proof. \endproof In fact, Lemma~\ref{lat_to_ideal} allows for an additional observation on WR ideal lattices in the plane, which we record below. 
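The arithmetic in this construction can be sanity-checked numerically. The sketch below (the sample values of $D$ are illustrative choices) recovers $(p,q)$ from a 3-nearsquare divisor of an odd squarefree $D$, as in Lemma~\ref{r_1}, forms $(a,b)$ as in \eqref{ab_D}, and verifies the divisibility condition of \eqref{abg} for the real quadratic field (with $g=1$):

```python
from math import gcd, isqrt

def nearsquare_divisor(D, nu=3):
    """Smallest divisor d of D with sqrt(D/nu) <= d < sqrt(D), or None."""
    for d in range(1, isqrt(D) + 1):
        if D % d == 0 and nu * d * d >= D and d * d < D:
            return d
    return None

def pq_from_divisor(D):
    """For odd squarefree D with a 3-nearsquare divisor d1, the pair
    p = (d2 - d1)/2, q = (d2 + d1)/2 (where d2 = D/d1) solves p^2 + D = q^2,
    since q^2 - p^2 = d1*d2 = D; both are integers because d1, d2 are odd."""
    d1 = nearsquare_divisor(D)
    d2 = D // d1
    p, q = (d2 - d1) // 2, (d2 + d1) // 2
    assert p * p + D == q * q and gcd(p, q) == 1 and 2 * p <= q
    return p, q

for D in (15, 21, 35):  # odd squarefree values satisfying the condition
    p, q = pq_from_divisor(D)
    a, b = ((p + q, (p + q - 1) // 2) if D % 4 == 1 else (2 * (p + q), p + q))
    # canonical-basis divisibility a | N(b + delta) for the real field, g = 1:
    Nbd = ((2 * b + 1) ** 2 - D) // 4 if D % 4 == 1 else b * b - D
    print(D, (p, q), (a, b), Nbd % a == 0)  # each line ends in True
```

For example, $D=15$ yields $(p,q)=(1,4)$ and $(a,b)=(10,5)$, with $a \mid {\mathbb N}(b+\delta) = 10$.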
\begin{cor} \label{all_in_type} Suppose that $\Gamma$ is an IWR lattice of type $D$, where $D$ is as in Lemma~\ref{lat_to_ideal}. Then $\Gamma$ contains IWR sublattices similar to ideal lattices coming from ideals in ${\mathbb Q}(\sqrt{\pm D})$. \end{cor} \proof As discussed in Section~\ref{iwr_section} above, $\Gamma$ contains sublattices similar to $\Omega_D(p,q)$ for any $p,r,q$ satisfying $p^2+r^2D=q^2$ with $\gcd(p,q)=1$ and $p/q \leq 1/2$. Since $D$ is as in the statement of Lemma~\ref{lat_to_ideal}, there must exist such $p,q$ with $r=1$. Then $\Gamma$ must contain IWR sublattices similar to $\Lambda_{{\mathbb Q}(\sqrt{D})}(I)$ and to $\Lambda_{{\mathbb Q}(\sqrt{-D})}(J)$ for $I,J$ as in \eqref{IJ_ab} for each such choice of $p,q$. \endproof Next we prove that the WR ideal lattices coming from imaginary quadratic fields constructed in Lemma~\ref{lat_to_ideal} are all that there are, up to similarity. \begin{lem} \label{ideal_D} Let $D \in {\mathbb Z}_{>0}$ be squarefree and let $K={\mathbb Q}(\sqrt{- D})$ be such that there exists a WR ideal $I \subset {\mathcal O}_K$. Then $D$ must satisfy \eqref{D_div} of Lemma~\ref{r_1} and $\Lambda_K(I) \sim \Omega_D(p,q)$ for some $p,q$ so that $p^2+D=q^2$, $\gcd(p,q)=1$, $p/q \leq 1/2$. \end{lem} \proof We start with a general remark that applies to any planar lattice $\Lambda$. Given a basis matrix $A$ for $\Lambda$, there must exist a change of basis matrix \begin{equation} \label{U_b_c} U = \begin{pmatrix} s_1 & s_2 \\ s_3 & s_4 \end{pmatrix} \in \operatorname{GL}_2({\mathbb Z}) \end{equation} so that $B=AU$ is a basis matrix for $\Lambda$ corresponding to a Minkowski-reduced basis (in case $\Lambda$ is WR, this is our minimal basis). We are now ready to start the proof with notation as in the statement of the lemma. Assume that the canonical basis for the ideal $I$ is $a,b+g\delta$, then $I=gI'$, where $I'$ has canonical basis $\frac{a}{g}, \frac{b}{g}+\delta$ and $\Lambda_K(I) \sim \Lambda_K(I')$. 
Hence we can assume without loss of generality that $g=1$. First suppose that $-D \not\equiv 1 (\operatorname{mod} 4)$, $K={\mathbb Q}(\sqrt{-D})$, then $\Lambda_K(I)$ is as in \eqref{B3} with $g=1$. Let $U$ as in \eqref{U_b_c} be the change of basis matrix from the first basis matrix in \eqref{B3} to a minimal basis matrix. Then $\Lambda_K(I)$ is a WR lattice with minimal basis matrix \begin{equation} \label{b_im_n1} B = \begin{pmatrix} as_1+bs_3 & as_2+bs_4 \\ -s_3\sqrt{D} & -s_4\sqrt{D} \end{pmatrix}. \end{equation} Since \begin{equation} \label{sin_th} \sin \theta(\Lambda_K(I)) = \frac{\operatorname{det} \Lambda_K(I)}{|\Lambda_K(I)|} = \frac{r\sqrt{D}}{q}, \end{equation} where $\gcd(r,q)=1$, we immediately deduce from \eqref{B3} and \eqref{b_im_n1} that $$r = \frac{a}{\gcd(a, (as_1+bs_3)^2+Ds_3^2)},$$ where $$(as_1+bs_3)^2+Ds_3^2 = a(as_1^2+2bs_1s_3) + (b^2+D)s_3^2$$ is divisible by $a$, by \eqref{abg}, since ${\mathbb N}(b+g\delta) = b^2+D$ in this case. Therefore $r$ must be equal to 1, and so $\Lambda_K(I) \sim \Omega_D(p,q)$ for some $p,q$ so that $p^2+D=q^2$, $\gcd(p,q)=1$, $p/q \leq 1/2$. Next suppose that $-D \equiv 1 (\operatorname{mod} 4)$, $K={\mathbb Q}(\sqrt{-D})$, then $\Lambda_K(I)$ is as in \eqref{B4} with $g=1$. Let $U$ as in \eqref{U_b_c} be the change of basis matrix from the first basis matrix in \eqref{B4} to a minimal basis matrix. Then $\Lambda_K(I)$ is a WR lattice with minimal basis matrix \begin{equation} \label{b_im_1} B = \begin{pmatrix} as_1+(b+1/2)s_3 & as_2+(b+1/2)s_4 \\ -s_3\sqrt{D}/2 & -s_4\sqrt{D}/2 \end{pmatrix}. \end{equation} Analogously to the argument above, $$r = \frac{2a}{\gcd(2a, 4a^2s_1^2+4a(2b+1)s_1s_3+((2b+1)^2+D)s_3^2)} = 1,$$ since $(2b+1)^2+D$ is divisible by $2a$, by \eqref{abg}, because ${\mathbb N}(b+g\delta) = \frac{1}{4} ((2b+1)^2+D)$ in this case. Hence again $\Lambda_K(I) \sim \Omega_D(p,q)$ for some $p,q$ so that $p^2+D=q^2$, $\gcd(p,q)=1$, $p/q \leq 1/2$. This completes the proof of the lemma.
\endproof In the case of a real quadratic field the situation appears to be more complicated. We propose the following question. \begin{quest} \label{D_real_quad} Do there exist real quadratic fields $K={\mathbb Q}(\sqrt{D})$ with positive squarefree $D$ not satisfying \eqref{D_div} of Lemma~\ref{r_1} so that ${\mathcal O}_K$ contains WR ideals? \end{quest} \noindent Computational evidence suggests that the answer to this question is no, however at the moment we only have the following partial result in this direction. \begin{lem} \label{ideal_D_real} Let $D \in {\mathbb Z}_{>0}$ be squarefree and let $K={\mathbb Q}(\sqrt{D})$ be such that there exists a WR ideal $I = \left< a,b+g\delta \right> \subset {\mathcal O}_K$, where $a,b+g \delta$ is the canonical basis for~$I$. Assume in addition that $a \mid 2D$, then $D$ must satisfy \eqref{D_div} of Lemma~\ref{r_1} and $\Lambda_K(I) \sim \Omega_D(p,q)$ for some $p,q$ so that $p^2+D=q^2$, $\gcd(p,q)=1$, $p/q \leq 1/2$. In particular, if \begin{enumerate} \item $D \not\equiv 1 (\operatorname{mod} 4)$ and $\min \{ a^2, b^2+D \} \geq 2ab$, or \item $D \equiv 1 (\operatorname{mod} 4)$ and $\min \left\{ a^2, \frac{1}{4} ((2b+1)^2+D) \right\} \geq 2a(b+1)$, \end{enumerate} then $a \mid 2D$. \end{lem} \proof First notice, as in the proof of Lemma~\ref{ideal_D}, that $I=gI'$, where $I'$ has canonical basis $\frac{a}{g}, \frac{b}{g}+\delta$ and $\Lambda_K(I) \sim \Lambda_K(I')$. Hence we can assume without loss of generality that $g=1$. It is easy to see that the condition $a \mid 2D$ is equivalent to $a \mid b^2+D$ if $D \not\equiv 1 (\operatorname{mod} 4)$, and to $a \mid (2b+1)^2+D$ if $D \equiv 1 (\operatorname{mod} 4)$. 
Indeed, if $D \not\equiv 1 (\operatorname{mod} 4)$, then \eqref{abg} implies that \begin{equation} \label{nrm_cond_1} a \mid {\mathbb N}(b+g\delta) = b^2-D, \end{equation} and so $a \mid 2D$ if and only if $a \mid b^2+D$; if $D \equiv 1 (\operatorname{mod} 4)$, then \eqref{abg} implies that \begin{equation} \label{nrm_cond_2} a \mid {\mathbb N}(b+g\delta) = \frac{1}{4} ((2b+1)^2-D), \end{equation} and so $a \mid 2D$ if and only if $a \mid (2b+1)^2+D$. Now suppose that $D \not\equiv 1 (\operatorname{mod} 4)$, $K={\mathbb Q}(\sqrt{D})$, then $\Lambda_K(I)$ is as in \eqref{B1} with $g=1$. Let $U$ as in \eqref{U_b_c} be the change of basis matrix from the first basis matrix in \eqref{B1} to a minimal basis matrix. Then $\Lambda_K(I)$ is a WR lattice with minimal basis matrix \begin{equation} \label{b_re_n1} B = \begin{pmatrix} as_1+(b-\sqrt{D})s_3 & as_2+(b-\sqrt{D})s_4 \\ as_1+(b+\sqrt{D})s_3 & as_2+(b+\sqrt{D})s_4 \end{pmatrix}. \end{equation} Then we must have: \begin{equation} \label{vnn1} a^2s_1^2+2abs_1s_3+(b^2+D)s_3^2=a^2s_2^2+2abs_2s_4+(b^2+D)s_4^2, \end{equation} and analogously to the arguments in the proof of Lemma~\ref{ideal_D} above in the imaginary case, we have \begin{equation} \label{Dr1} r = \frac{a}{\gcd(a, a^2s_1^2+2abs_1s_3+(b^2+D)s_3^2)}. \end{equation} Hence if $a \mid 2D$, then $r=1$, and so $\Lambda_K(I) \sim \Omega_D(p,q)$ for some $p,q$ so that $p^2+D=q^2$, $\gcd(p,q)=1$, $p/q \leq 1/2$. We will now show that if (1) is satisfied, then $a \mid 2D$. Take $U$ to be the identity matrix. The positive definite binary quadratic norm form corresponding to the basis matrix $B$ in this case will be $$Q_B(x,y) = 2a^2x^2+4abxy+2(b^2+D)y^2.$$ Since $\min \{ a^2, b^2+D \} \geq 2ab$, this form must be Minkowski reduced, which means that $B$ must be a minimal basis matrix. Since $\Lambda_K(I)$ is WR, a reduced basis for the lattice must consist of vectors of the same length, i.e we must have $a^2=b^2+D$, and hence $a \mid 2D$ by the argument above. 
Next suppose that $D \equiv 1 (\operatorname{mod} 4)$, $K={\mathbb Q}(\sqrt{D})$, then $\Lambda_K(I)$ is as in \eqref{B2} with $g=1$. Let $U$ as in \eqref{U_b_c} be the change of basis matrix from the first basis matrix in \eqref{B2} to a minimal basis matrix. Then $\Lambda_K(I)$ is a WR lattice with minimal basis matrix \begin{equation} \label{b_re_1} B = \begin{pmatrix} as_1+ \frac{(2b+1-\sqrt{D})s_3}{2} & as_2+ \frac{(2b+1-\sqrt{D})s_4}{2} \\ as_1+ \frac{(2b+1+\sqrt{D})s_3}{2} & as_2+ \frac{(2b+1+\sqrt{D})s_4}{2} \end{pmatrix}. \end{equation} Then we must have: \begin{eqnarray} \label{vn1} & & 4a^2s_1^2+8abs_1s_3+4as_1s_3+((2b+1)^2+D)s_3^2 \nonumber \\ & = & 4a^2s_2^2+8abs_2s_4+4as_2s_4+((2b+1)^2+D)s_4^2, \end{eqnarray} and analogously to the arguments in the proof of Lemma~\ref{ideal_D} above in the imaginary case, we have $$r = \frac{a}{\gcd(a, 4a^2s_1^2+8abs_1s_3+4as_1s_3+((2b+1)^2+D)s_3^2)}.$$ Hence if $a \mid 2D$, then $r=1$, and so again $\Lambda_K(I) \sim \Omega_D(p,q)$ for some $p,q$ so that $p^2+D=q^2$, $\gcd(p,q)=1$, $p/q \leq 1/2$. We will now show that if (2) is satisfied, then $a \mid 2D$. Again, take $U$ to be the identity matrix. The positive definite binary quadratic norm form corresponding to the basis matrix $B$ in this case will be $$Q_B(x,y) = 2a^2x^2+4a(b+1)xy+ \frac{1}{2} ((2b+1)^2+D)y^2.$$ Since $\min \left\{ a^2, \frac{1}{4} ((2b+1)^2+D) \right\} \geq 2a(b+1)$, this form must be Minkowski reduced, which means that $B$ must be a minimal basis matrix. Since $\Lambda_K(I)$ is WR, a reduced basis for the lattice must consist of vectors of the same length, i.e.\ we must have $4a^2=(2b+1)^2+D$, and hence $a \mid 2D$ by the argument above. \endproof In addition, we have the following finiteness result for the number of WR ideals in a fixed imaginary quadratic number field. \begin{lem} \label{K_fin_num} Suppose that $D$ satisfies \eqref{D_div} of Lemma~\ref{r_1} and $K={\mathbb Q}(\sqrt{-D})$.
Then $K$ contains only finitely many WR ideals, up to similarity of the corresponding lattices, and this number is \begin{equation} \label{ideal_est} \ll \min \left\{ 2^{\omega(D)-1}, \frac{2^{\omega(D)}}{\sqrt{\omega(D)}} \right\}, \end{equation} where the constant in the Vinogradov notation $\ll$ does not depend on $D$. \end{lem} \proof Lemma~\ref{r_1} guarantees that there exist integer pairs $(p,q)$ such that \begin{equation} \label{pq1} p^2+D=q^2 \text{ for some } p,q \text{ with } \gcd(p,q)=1, p/q \leq 1/2. \end{equation} Now Lemma~\ref{lat_to_ideal} guarantees that if $K={\mathbb Q}(\sqrt{-D})$, then there exists a WR ideal $I \subseteq {\mathcal O}_K$ with $\Lambda_K(I) \sim \Omega_D(p,q)$ for each $p,q$ satisfying \eqref{pq1}, and Lemma~\ref{ideal_D} implies that {\it all} WR ideals in ${\mathcal O}_K$ correspond to solutions of \eqref{pq1}. Then the number of WR ideals in $K$, up to similarity of the lattices $\Lambda_K(I)$, is precisely the number of pairs $p,q$ as in \eqref{pq1}. This number is $f(1)$ as defined in \eqref{r_count}, which is estimated by \eqref{f_f1_f2}. Now applying \eqref{f1_est} and \eqref{f2_est}, and noticing that for a squarefree integer $D$ $$\frac{\tau(D)}{\sqrt{\omega(D)}} = \frac{2^{\omega(D)}}{\sqrt{\omega(D)}},$$ we obtain \eqref{ideal_est}. \endproof \proof[Proof of Theorem \ref{ideal_IWR}] The first part of the theorem along with \eqref{ideal_est_1} follow from Lemmas~\ref{lat_to_ideal}, \ref{ideal_D}, and~\ref{K_fin_num} above, so it is only left to establish \eqref{K_density}. Define sets \begin{equation} \label{A_set} {\mathcal A} = \left\{ D : D \in {\mathbb Z}_{>0} \text{ squarefree} \right\}, \end{equation} and \begin{equation} \label{B_set} {\mathcal B}_{\nu} = \left\{ D : D \in {\mathbb Z}_{>0} \text{ squarefree with a divisor } \frac{\sqrt{D}}{\nu} \leq d < \sqrt{D} \right\}, \end{equation} where $\nu > 1$ is a real number.
Define also $${\mathcal A}(N) = \left\{ D \in {\mathcal A} : D \leq N \right\},\ {\mathcal B}_{\nu}(N) = \left\{ D \in {\mathcal B}_{\nu} : D \leq N \right\},$$ for any $N \in {\mathbb Z}_{>0}$. To prove \eqref{K_density} we simply need to show that \begin{equation} \label{D_3} \liminf_{N \to \infty} \frac{ \left| {\mathcal B}_{\sqrt{3}}(N) \right| }{ \left| {\mathcal A}(N) \right| } \geq \frac{\sqrt{3}-1}{2\sqrt{3}}. \end{equation} An analogue of \eqref{D_3} for integers that are not necessarily squarefree has been established in Theorem 4.4 of \cite{wr1}; we will now adapt that proof to account for the squarefree condition. Theorem 333 of \cite{hardy} implies that there exist absolute constants $c_1,c_2$ such that \begin{equation} \label{aN} \frac{6N}{\pi^2} + c_1\sqrt{N} \leq |{\mathcal A}(N)| \leq \frac{6N}{\pi^2} + c_2\sqrt{N}. \end{equation} Now, following Section~4 of \cite{wr1}, we define $$I_{\nu}(n) = \left\{n^2, n(n-1), \dots, n \left(n-\left[\left(\frac{\nu-1}{\nu}\right)n\right]\right)\right\},$$ for each $n \in {\mathbb Z}_{>0}$, and let $$I'_{\nu}(n) = \left\{ m \in I_{\nu}(n) : m \text{ is squarefree} \right\}.$$ Suppose $n$ is prime; then $$I'_{\nu}(n) = \left\{ nm : \left(n-\left[\left(\frac{\nu-1}{\nu}\right)n\right]\right) \leq m < n,\ m \text{ is squarefree} \right\},$$ and so, by \eqref{aN}, \begin{eqnarray} \label{I_prime} |I'_{\nu}(n)| & = & \left| {\mathcal A}(n) \right| - \left| {\mathcal A} \left(n-\left[\left(\frac{\nu-1}{\nu}\right)n\right]\right) \right| \nonumber \\ & \geq & \frac{6}{\pi^2} \left[\left(\frac{\nu-1}{\nu}\right)n\right] + (c_1-c_2) \sqrt{n}. \end{eqnarray} Since each $I'_{\nu}(n) \subseteq I_{\nu}(n)$ and $I_{\nu}(n) \cap I_{\nu}(m) = \emptyset$ when $\gcd(n,m)=1$, by part (ii) of Lemma~4.2 of \cite{wr1}, we conclude that $I'_{\nu}(n) \cap I'_{\nu}(m) = \emptyset$ when $\gcd(n,m)=1$.
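As a quick machine check of the pairwise disjointness just used (an aside, not part of the proof), one can generate the sets $I_{\nu}(p)$ for small primes $p$ with $\nu=\sqrt{3}$ and verify that they never intersect:

```python
from math import floor, sqrt

def I_nu(n, nu):
    # I_nu(n) = { n*n, n*(n-1), ..., n*(n - [((nu-1)/nu) n]) }
    k = floor((nu - 1) / nu * n)
    return {n * m for m in range(n - k, n + 1)}

nu = sqrt(3)
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
sets = [I_nu(p, nu) for p in primes]

# distinct primes are coprime, so the corresponding sets never intersect
assert all(sets[i].isdisjoint(sets[j])
           for i in range(len(sets)) for j in range(i + 1, len(sets)))
```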
Notice also that $$\bigcup_{n=1}^{[\sqrt{N}]} I'_{\nu}(n) \subseteq {\mathcal B}_{\nu}(N).$$ Now we adapt the argument in the proof of Lemma~4.3 of \cite{wr1}. Let $M=\pi(\sqrt{N})$, i.e.\ the number of primes up to $\sqrt{N}$. A result of Rosser and Schoenfeld (Corollary~1 on p.~69 of \cite{rosser}) implies that for all $\sqrt{N} \geq 17$, \begin{equation} \label{N_bound} \frac{\sqrt{N}}{\log \sqrt{N}} < M < \frac{1.25506\ \sqrt{N}}{\log \sqrt{N}}. \end{equation} Hence suppose that $N \geq 289$, and let $p_1, \dots, p_M$ be all the primes up to $\sqrt{N}$ in ascending order. Then $I_{\nu}(p_i) \cap I_{\nu}(p_j) = \emptyset$ for all $1 \leq i \neq j \leq M$. Therefore, using \eqref{I_prime}, we obtain: \begin{equation} \label{low2} |{\mathcal B}_{\nu}(N)| \geq \sum_{i=1}^M |I'_{\nu}(p_i)| \geq \frac{6(\nu-1)}{\pi^2 \nu} \sum_{i=1}^M p_i + (c_1-c_2) \sum_{i=1}^M \sqrt{p_i}. \end{equation} A result of R. Jakimczuk \cite{jakimczuk} implies that \begin{equation} \label{prime_bound} \sum_{i=1}^M p_i > \frac{M^2}{2}\ \log^2 M. \end{equation} Notice also that $$(c_1-c_2) \sum_{i=1}^M \sqrt{p_i} \geq - |c_1-c_2| M^{3/2},$$ and combining this observation with \eqref{N_bound}, \eqref{low2}, and \eqref{prime_bound}, we obtain: \begin{equation} \label{low} |{\mathcal B}_{\nu}(N)| > \frac{6(\nu-1)N}{2\pi^2 \nu} \left( 1 - \frac{\log \log \sqrt{N}}{\log \sqrt{N}} \right) ^2 - |c_1-c_2| N^{3/4} \left( \frac{1.25506}{\log \sqrt{N}} \right)^{3/2}. \end{equation} Notice also that \eqref{aN} implies that $|{\mathcal A}(N)| \leq \frac{(6+{\varepsilon})N}{\pi^2}$ for any ${\varepsilon} > 0$ and all sufficiently large $N$, and combining this observation with \eqref{low}, we obtain: \begin{equation} \label{any_nu} \liminf_{N \to \infty} \frac{ \left| {\mathcal B}_{\nu}(N) \right| }{ \left| {\mathcal A}(N) \right| } > \frac{6(\nu-1)}{2(6+{\varepsilon}) \nu} \ . \end{equation} Since the choice of ${\varepsilon}$ is arbitrary, it follows that $$\liminf_{N \to \infty} \frac{ \left| {\mathcal B}_{\nu}(N) \right| }{ \left| {\mathcal A}(N) \right| } \geq \frac{\nu-1}{2\nu}.$$ Now \eqref{D_3} follows by taking $\nu=\sqrt{3}$.
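As a numerical aside, the density $6/\pi^2 \approx 0.6079$ of squarefree integers entering \eqref{aN} is easy to corroborate with a direct sieve; a minimal sketch:

```python
from math import pi

def squarefree_count(N):
    # sieve out multiples of p^2 for every p with p^2 <= N
    flags = [True] * (N + 1)
    p = 2
    while p * p <= N:
        for m in range(p * p, N + 1, p * p):
            flags[m] = False
        p += 1
    return sum(flags[1:])

N = 10000
density = squarefree_count(N) / N
assert abs(density - 6 / pi**2) < 0.01   # 6/pi^2 ~ 0.6079
```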
\endproof Finally, we briefly discuss WR lattices coming from principal ideals. It is well known that every ideal in a ring of integers of a number field can be generated by at most two elements, and principal ideals play a very special role in algebraic number theory: they correspond to the identity element of the class group of the number field. It is therefore natural to ask whether principal ideals in a quadratic number ring can be WR. Corollary~2.4 of \cite{lf:petersen} implies that if $K={\mathbb Q} \left( \sqrt{-D} \right)$, then ${\mathcal O}_K$ contains principal WR ideals if and only if $D=1,3$. In the real quadratic case, the situation is again more complicated. We propose the following question. \begin{quest} \label{D_principal} Do there exist real quadratic fields $K={\mathbb Q}(\sqrt{D})$ with positive squarefree $D \not\equiv 1 (\operatorname{mod} 4)$ so that ${\mathcal O}_K$ contains principal WR ideals? \end{quest} \noindent Computational evidence suggests that the answer to this question is no. On the other hand, there do exist $D \equiv 1 (\operatorname{mod} 4)$ so that ${\mathcal O}_{{\mathbb Q}(\sqrt{D})}$ contains a WR principal ideal $I$. In Table~\ref{table2} below we present a few examples of number fields $K={\mathbb Q}(\sqrt{D})$, $D \equiv 1 (\operatorname{mod} 4)$, with class number $h_K=1$, which therefore contain principal WR ideals. We present these ideals in terms of their canonical integral bases with $\delta$ as in \eqref{delta}.
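The similarity classes listed in Table~\ref{table2} below can be verified directly against the conditions $p^2+D=q^2$, $\gcd(p,q)=1$ and $p/q \leq 1/2$; a short verification sketch (illustration only):

```python
from math import gcd

# (D, p, q) as read off Table 2
table = [(21, 2, 5), (77, 2, 9), (133, 6, 13), (209, 4, 15)]

for D, p, q in table:
    assert p**2 + D == q**2      # p^2 + D = q^2
    assert gcd(p, q) == 1        # coprimality
    assert 2 * p <= q            # p/q <= 1/2
```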
\begin{center} \begin{table}[!ht] \caption{Examples of WR ideals in $K={\mathbb Q}(\sqrt{D})$, $D \equiv 1 (\operatorname{mod} 4)$, $h_K=1$} \begin{tabular}{|l|l|l|} \hline $D$& WR ideals & Their similarity classes $p/q$ ($r=1$) \\ \hline $21$ & $\left< 3, 1+\delta \right>, \left< 7, 3+\delta \right>$ & $2/5, 2/5$\\ \hline $77$ & $\left< 7, 3+\delta \right>, \left< 11, 5+\delta \right>$ & $2/9, 2/9$\\ \hline $133$ & $\left< 7, 3+\delta \right>, \left< 19, 9+\delta \right>$ & $6/13, 6/13$\\ \hline $209$ & $\left< 11, 5+\delta \right>, \left< 19, 9+\delta \right>$ & $4/15, 4/15$\\ \hline \end{tabular} \label{table2} \end{table} \end{center} \bigskip {\bf Acknowledgment.} We would like to thank the Fletcher Jones Foundation-supported Claremont Colleges research experience program, under the auspices of which a large part of this work was done during the Summer of 2011. We are also grateful to the referee for a thorough reading and helpful comments; in particular, Remark~\ref{class_number} was suggested by the referee. \bigskip \bibliographystyle{plain}
\section{Introduction} The gauge-gravity duality\cite{Banks:1996vh, Maldacena:1997re} is a concrete realization of the holographic principle\cite{Susskind:1994vu}, which states that the number of degrees of freedom in quantum gravity scales like area. This idea, conceived from the nature of black hole entropy, {\it entangles} ideas in quantum gravity and information theory. In recent years, there has been great progress in understanding aspects of strongly coupled large $N$ gauge theories using the AdS/CFT correspondence and generalizations thereof. Within this context, {\it i.e.} analyzing systems described by such large $N$ gauge theories, it is an intriguing possibility to implement ideas that are natural in quantum information theory. One such important concept is entanglement entropy, which, at zero temperature, measures the quantum entanglement between two sub-systems of a given system. In a quantum field theory, the entanglement entropy of a region $A$ contains a short-distance divergence which also scales like the area\cite{Bombelli:1986rw, Srednicki:1993im}. For large $N$ theories, which in the gravity dual are described by classical Einstein gravity with suitable matter fields, entanglement entropy can be computed using the conjectured Ryu-Takayanagi formula proposed in \cite{Ryu:2006bv, Ryu:2006ef}. This conjectured formula does indeed satisfy many non-trivial relations\cite{Headrick:2007km, Headrick:2010zt} known in quantum information theory. There have been numerous works analyzing entanglement entropy in various systems that are described by such classical gravity dual backgrounds, see {\it e.g.} \cite{Nishioka:2009un, Takayanagi:2012kg} for recent reviews. However, due to its short distance divergence structure, entanglement entropy is a scheme-dependent quantity.
This issue can be avoided by introducing an appropriate linear combination of entanglement entropies, known as mutual information: $I(A,B) = S_A + S_B - S_{A\cup B}$, where $S_Y$ denotes the entanglement entropy of the region $Y$. Mutual information is an important concept in information theory that has certain advantages over entanglement entropy. It is (i) finite, (ii) positive semi-definite, (iii) a measure of the total correlations between the two sub-systems $A$ and $B$ and (iv) proportional to the entanglement entropy when $B \equiv A^c$, where $A^c$ denotes the complement of $A$, such that $S_{A\cup A^c} =0$.\footnote{We are assuming that $A\cup A^c$ is in a ground state with no degeneracy. Also, in order for $B$ to become $A^c$, we necessarily need $B$ to approach $A$. When the two regions $A$ and $B$ approach each other, new divergences appear depending on the shape of $A$ and $B$; see \cite{Swingle:2010jz} for more details on this. This is precisely the short-distance divergence structure observed in entanglement entropy.} Moreover, it can be proven\cite{PhysRevLett.100.070502} that mutual information satisfies an area law at finite temperature. This is to be contrasted with the behaviour of entanglement entropy, which is dominated by the thermal entropy at large temperatures and hence follows a volume law. Thus it is expected that mutual information carries more relevant content as far as describing quantum entanglement is concerned, since entanglement is still expected to scale as the area rather than the volume. It was pointed out in \cite{Headrick:2010zt} that in holographic duals, mutual information does undergo a ``first order phase transition" as the separation between the two rectangular sub-systems $A$ and $B$ is increased.
For small separation, $I(A,B) \not= 0$, but for large separation $I(A,B) =0$; in the bulk there are always two candidate minimal area surfaces for the computation of $S_{A\cup B}$ and, depending on the separation of $A$ and $B$, one or the other is favoured.\footnote{This is explained in more detail in the next section.} Clearly, this does not correspond to a phase transition in the usual sense; however, when $I(A,B)=0$, the two sub-systems $A$ and $B$ become completely decoupled. Hence we will call it a ``disentangling transition". In this article, we study this disentangling transition for large $N$ relativistic conformal theories in $d$ spacetime dimensions at finite temperature. The corresponding ``phase diagram" can be presented in the $(x/l)$ vs $(xT)$ plane, where $l$ denotes a linear size of the rectangular regions $A$ and $B$ (which we take to be of equal size for simplicity), $x$ is the separation between them and $T$ denotes the temperature of the system. Using the analytical methods developed in \cite{Fischler:2012ca}, we also explore the finite temperature behaviour of mutual information for this class of theories. We then move on to analyzing the same disentangling transition in non-relativistic scale invariant theories, {\it e.g.} for Lifshitz and hyperscaling-violating backgrounds. We find that this disentangling transition has universal qualitative features for all such theories with holographic duals. It has been suggested in recent years that the emergence of a holographic space can be envisioned from the entanglement properties of a large class of many-body quantum systems at criticality, see {\it e.g.} \cite{swingle:0905}. At high temperature, entanglement entropy is dominated by thermal entropy and classical correlations. Mutual information, on the other hand, subtracts out the thermal contribution and still satisfies an area law.
Furthermore, in an appropriate regime of parameters we analytically demonstrate that the finite piece of the mutual information actually captures the sub-leading term in the entanglement entropy at large temperature, and hence is a better probe of quantum entanglement. Perhaps one key universal feature, alluded to a couple of paragraphs earlier, is the fact that mutual information decreases monotonically with increasing temperature. Hence, in this context, disentanglement in the boundary theory corresponds to raising the temperature, which in the dual gravitational picture can be viewed as the extremal surface probing deeper into the bulk. This is to be contrasted with the situation described in \cite{Czech:2012be}, where the emergence of an asymptotically globally AdS spacetime is described in terms of entangled states of a pair of CFTs defined on a hyperbolic space. It nonetheless suggests a possible deep connection between disentanglement and the emergence of a holographic direction. This article is divided into the following parts: in the next section we introduce the concepts of entanglement entropy and mutual information more formally and discuss the holographic prescription. Section 3 is devoted to the discussion of properties of mutual information at finite temperature in various limits of $(x/l)$ and a numerical study of the disentangling transition for CFT$_d$. In section 4, we continue analyzing similar physics in non-relativistic scale-invariant theories by considering generic examples of Lifshitz and hyperscaling-violating backgrounds. Finally we conclude in section 5. Several details relevant for obtaining analytical results for mutual information at finite temperature have been relegated to three appendices. \section{Entanglement entropy, mutual information and a summary} In this section we will briefly elaborate on the definitions of entanglement entropy and mutual information.
We will also include a brief summary of the results that we will discuss in the subsequent sections. Let us begin with the ideas of entanglement entropy and mutual information. Consider a $d$ (spacetime) dimensional quantum field theory (QFT). Quantum systems are described by state vectors $| \psi \rangle \in {\cal H} $, where ${\cal H} $ denotes the Hilbert space of the system, evolving with some Hamiltonian $H$. A quantum system is also described by the density matrix, usually denoted by $\rho$, and defined as: $\rho = | \psi \rangle \langle \psi |$. The expectation value of an operator ${\cal O} $ is then simply obtained by $\langle {\cal O} \rangle = {\rm tr} \left(\rho {\cal O} \right)$. Also, note that the entropy of such a system is given by the von Neumann formula: $S = - {\rm tr} \left[ \rho \log \rho \right] $. Now let us consider a QFT defined on ${\cal M} ^{d-1,1}$: a Lorentzian manifold.\footnote{For our current purposes we will focus only on Minkowski space: $\mathbb{R}^{d-1,1}$.} On a constant time Cauchy surface let us imagine dividing the system in two sub-systems, $A$ and $A^c$ respectively, where $A^c$ is the complement of $A$. The total Hilbert space then factorizes: ${\cal H} = {\cal H} _A \otimes {\cal H} _{A^c}$. We can define a ``reduced" density matrix of the sub-system $A$ by tracing out the information contained in ${\cal H} _{A^c}$ and thus define \begin{eqnarray} \rho_A = {\rm tr}_{A^c} \left[ \rho \right] \ , \end{eqnarray} and subsequently define the von Neumann entropy described by \begin{eqnarray} S_A = - {\rm tr} \left[ \rho_A \log \rho_A \right] \end{eqnarray} as the entanglement entropy. Entanglement entropy is proportional to the number of degrees of freedom residing on the boundary shared by the sub-systems $A$ and $A^c$. The leading order divergence thus follows an area law\cite{Bombelli:1986rw, Srednicki:1993im}\footnote{We also note that the area law has violations in physically interesting and important cases. 
One simple example is $(1+1)$-dimensional conformal field theory, where a logarithmic violation arises. There is a simple scaling intuition behind such an area law and its violations\cite{Swingle:2010jz}.} \begin{eqnarray} S_A = \alpha \frac{\partial A}{\epsilon^{d-2}} + \ldots \ , \end{eqnarray} where $(\partial A)$ denotes the area of the boundary of the region $A$ and $\epsilon$ denotes a UV cut-off of the QFT (in the limit $\epsilon \to 0$); in a discretized version of the QFT, this cut-off can be identified with the lattice spacing. The constant $\alpha$ depends on the regularization scheme and thus is not universal. It is a challenging task to compute entanglement entropy in a given quantum field theory. Within the realm of the AdS/CFT correspondence, or more generally the holographic principle, there is a particularly simple yet powerful proposal for computing entanglement entropy for strongly coupled theories. The proposal was given in \cite{Ryu:2006bv, Ryu:2006ef} for static backgrounds and later generalized in \cite{Hubeny:2007xt} for backgrounds with explicit time-dependence. For a recent review, see {\it e.g.}~\cite{Nishioka:2009un, Takayanagi:2012kg}. According to this proposal, the entanglement entropy of region $A$ is given by the Ryu-Takayanagi formula \begin{eqnarray} \label{eedef} S_A = \frac{{\rm Area} \left(\gamma_A\right)}{4 G_N^{(d+1)}} \ , \end{eqnarray} where $G_N^{(d+1)}$ is Newton's constant in $(d+1)$ bulk dimensions, $\gamma_A$ denotes the $(d-1)$-dimensional minimal\footnote{$\gamma_A$ is extremal in case the background has explicit time-dependence, as described in \cite{Hubeny:2007xt}.} area surface whose boundary coincides with the boundary of the region $A$: $\partial \gamma_A = \partial A$ and we also require that $\gamma_A$ is homologous to $A$. As described in \cite{Ryu:2006bv, Ryu:2006ef}, the Ryu-Takayanagi formula has passed several non-trivial checks.
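Before moving on, the definition $S_A = -{\rm tr}\left[\rho_A \log\rho_A\right]$ can be made concrete in the simplest possible setting, a pair of qubits. The sketch below (purely illustrative, unrelated to the holographic computation) traces out one qubit of a Bell state and recovers $S_A = \log 2$:

```python
import numpy as np

def reduced_density_matrix(psi):
    # psi: state vector of two qubits, reshaped so that axis 0 is
    # subsystem A and axis 1 is its complement; trace out the complement
    m = psi.reshape(2, 2)
    return m @ m.conj().T

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 log 0 -> 0
    return float(-np.sum(evals * np.log(evals)))

bell = np.array([1.0, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
S_A = von_neumann_entropy(reduced_density_matrix(bell))
assert abs(S_A - np.log(2)) < 1e-10
```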
At finite temperature, the corresponding ``reduced" density matrix can be defined as: $\rho_A = e^{-\beta H_A}$,\footnote{Note that, even at zero temperature one can write the reduced density matrix $\rho_{\rm reduced} = e^{- \hat{H}}$, where $\hat{H}$ is some hermitian operator referred to as the ``modular Hamiltonian" in \cite{Haag:axiom}.} where the total Hamiltonian of the system can at least be schematically represented as: $H = H_A + H_{A^c} + H_\partial $. Here $H_A$ and $H_{A^c}$ denote the Hamiltonians of the sub-systems $A$ and $A^c$ respectively; $H_\partial$ denotes the interactions between the two sub-systems across the boundary.\footnote{Strictly speaking, a schematic representation of the total Hamiltonian as $H= H_A + H_{A^c} + H_\partial$ may be misleading for non-local theories, since the interactions between $A$ and $A^c$ need not be confined to the boundary.} Using the Ryu-Takayanagi formula in the context of the AdS/CFT correspondence, it can be observed that the regularized entanglement entropy for a $d$-dimensional CFT behaves like thermal entropy: for large enough temperature, the leading order behaviour becomes $S_A \sim V T^{d-1}$, where $V = {\rm vol} \left(\mathbb{R}^{d-1}\right)$. Hence there is no area law at finite temperature at the leading order. Mutual information is a quantity derived from entanglement entropy. The definition of mutual information between two disjoint sub-systems $A$ and $B$ (see fig.~\ref{shape} for example) is given by \begin{eqnarray} \label{mi} I(A, B) = S_A + S_B - S_{A \cup B} \ , \end{eqnarray} where $S_A$, $S_B$ and $S_{A\cup B}$ denote the entanglement entropies of the regions $A$, $B$ and $A\cup B$, respectively, with the rest of the system. From the definition, it is clear that mutual information is a finite quantity, since the non-universal divergent pieces in the entanglement entropy cancel out. Thus, we do not need to worry about any regularization scheme.
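In the same two-qubit toy setting, one can check the basic features of (\ref{mi}): for a pure state of the full system $S_{A\cup B}=0$, so $I(A,B)$ vanishes for a product state and equals $2\log 2$ for a maximally entangled one. A sketch (illustration only, with our own helper names):

```python
import numpy as np

def entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def mutual_information(psi):
    # two-qubit pure state: S_{A u B} = 0, and rho_A, rho_B follow
    # from tracing out the other qubit
    m = psi.reshape(2, 2)
    rho_A = m @ m.conj().T
    rho_B = m.T @ m.conj()
    return entropy(rho_A) + entropy(rho_B)   # minus S_{A u B} = 0

product = np.array([1.0, 0, 0, 0])           # |00>
bell = np.array([1.0, 0, 0, 1]) / np.sqrt(2)
assert abs(mutual_information(product)) < 1e-10
assert abs(mutual_information(bell) - 2 * np.log(2)) < 1e-10
```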
Moreover, as shown in \cite{PhysRevLett.100.070502}, given an operator ${\cal O} _A$ in the region $A$ and ${\cal O} _B$ in the region $B$, mutual information sets an upper bound on correlations \begin{eqnarray} \label{mi1} I(A, B) \ge \frac{\left(\langle {\cal O} _A {\cal O} _B \rangle - \langle {\cal O} _A \rangle \langle {\cal O} _B \rangle \right)^2}{2 || {\cal O} _A ||^2 || {\cal O} _B ||^2 } \end{eqnarray} and thus measures the total correlation between the two sub-systems, including both classical and quantum correlations. Furthermore, it was shown in \cite{PhysRevLett.100.070502} that mutual information follows an area law even at finite temperature. In the context of AdS/CFT, or holography, some intriguing features can already be anticipated. Let us imagine two disjoint sub-systems $A$ and $B$, each of ``rectangular" shape with one side of length $l$ and the remaining $(d-2)$ sides of length $L$, separated by a distance $x$ along one of the spatial directions of a given CFT. This is schematically shown in fig.~\ref{shape}. \begin{figure}[!] \centering \includegraphics[width=0.7\textwidth]{Rec_shape.pdf} \caption{The two disjoint sub-systems $A$ and $B$, each of length $l$ along the $X$-direction and separated by a distance $x$. The schematic diagram on the right shows the possible candidates for the minimal area surfaces which are relevant for computing $S_{A\cup B}$. The choice on top gives $S_{A\cup B} = S_A + S_B = 2 S(l)$, and the choice at the bottom gives $S_{A\cup B} = S(2l+x) + S(x)$. This is also summarized in (\ref{twoch}).} \label{shape} \end{figure} One can easily follow the Ryu-Takayanagi formula to compute the entanglement entropy of the individual sub-systems $A$ and $B$. The computation of $S_{A\cup B}$ is more interesting: in this case, there are multiple choices of minimal area surfaces, which are schematically shown in fig.~\ref{shape}.\footnote{Actually, we have not shown yet another possibility where the two minimal area surfaces cross each other.
However, this will always have a larger area than the two choices shown in fig.~\ref{shape}.} Depending on the ratio $x/l$, \begin{eqnarray} \label{twoch} S_{A \cup B} & = & S(2l+x) + S(x) \, \, \, \, {\rm for \, \, ``small"} \, \, \, \, x/l \ , \nonumber\\ & = & 2 S(l) \, \, \, \, {\rm for \, \, ``large"} \, \, \, \, x/l \ . \end{eqnarray} Here $S(y)$ denotes the area of a minimal surface whose boundary has a length of dimension $y$. Thus, in the latter case, we will have $I(A,B) = 0$ identically above a certain value of $x/l$\cite{Headrick:2010zt}. To summarize, mutual information has an intriguing feature for such systems\cite{Headrick:2010zt}: \begin{eqnarray} \label{transition} I(A,B) & \not = & 0 \ , \quad x/l \le a_d \ , \nonumber\\ & = & 0 \ , \quad x/l > a_d \ . \end{eqnarray} Thus, mutual information undergoes a first order phase transition at $x/l = a_d$, where $a_d$ is a number that depends on the dimension of the CFT. By virtue of the relation in (\ref{mi1}), eqn (\ref{transition}) implies that for $x/l > a_d$, the two sub-systems $A$ and $B$ completely disentangle.\footnote{Note that $\rho_{A\cup B} = \rho_A \otimes \rho_B$ implies that $I(A,B) =0$ and {\it vice versa}. We thank Matt Headrick for a correction on this point.} A similar phenomenon persists at finite temperature. In the limit $l\to\infty$, the disentangling transition takes place as a function of temperature \begin{eqnarray} \label{transitionT} I(A,B) & \not = & 0 \ , \quad x T \le b_d \ ,\nonumber\\ & = & 0 \ , \quad x T > b_d \ , \end{eqnarray} where $b_d$ is a constant and $T$ denotes the background temperature. See \cite{MolinaVilaplana:2011xt} for a related work in the AdS$_3$-BTZ black hole background. Our goal here will be to study this disentangling transition for a class of conformal (or scale invariant) large $N$ gauge theories in general dimension within the context of holography.
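To see how such a transition value arises in the simplest case, take a $(1+1)$-dimensional CFT at zero temperature, where $S(y) = \frac{c}{3}\log(y/\epsilon)$; the condition $2S(l) = S(x) + S(2l+x)$ is cutoff-independent and reduces to $l^2 = x(2l+x)$, i.e.\ the transition sits at $x/l = \sqrt{2}-1 \approx 0.414$. A numerical sketch locating this root (illustration only):

```python
from math import log, sqrt

def I_over_c3(x, l=1.0):
    # 2 S(l) - S(x) - S(2l+x) in units of c/3; the cutoff eps cancels
    return 2 * log(l) - log(x) - log(2 * l + x)

# bisect for the zero of I(x) on (0, 1); I is strictly decreasing in x
lo, hi = 1e-6, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if I_over_c3(mid) > 0:
        lo = mid
    else:
        hi = mid

assert abs(lo - (sqrt(2) - 1)) < 1e-9   # transition at x/l = sqrt(2) - 1
```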
We will make use of the analytical techniques developed in \cite{Fischler:2012ca} and also use numerical methods to explore the regime of parameters where this disentangling transition takes place in the $(x/l)$ vs $(T x)$ plane. Before proceeding further, let us offer some more comments. For relativistic CFTs, the area law for mutual information at finite temperature along with dimensional analysis suggests that \begin{eqnarray} \label{migen} I(A,B) = \left(\frac{L}{l} \right)^{d-2} F \left( x/l, x T\right) \ , \end{eqnarray} where $F(x/l, xT)$ is some function that depends on the CFT. At vanishing temperature, we recover the well-known form\cite{Swingle:2010jz}. The two regimes where we are able to obtain analytical results are $lT \ll 1$, $x T \ll 1$ and $lT \gg 1$, $xT \ll 1$ respectively. For small temperature, {\it i.e.}~when both $lT \ll 1$, $x T \ll 1$, we can make a formal expansion of the form \begin{eqnarray} \label{migen1} F \left(x/l, xT \right) = \sum_{i} (xT)^i g_{i} (x/l) \ . \end{eqnarray} In the limit $lT \gg 1$ but $x T \ll 1$, we can make the following expansion \begin{eqnarray} \label{migen2} F\left(x/l, xT \right) = \left(l T\right)^{d-2} \sum_\alpha (xT)^\alpha \tilde{g}_\alpha(lT) \ , \end{eqnarray} where $g_{i}(x/l)$ and $\tilde{g}_\alpha(lT)$ are hitherto undetermined functions that depend on the underlying theory. We will find that generally $i \ge 0$, but $\alpha$ can range over positive and negative numbers. For example, in (\ref{migen2}) as $xT \to 0$, mutual information acquires a divergent piece: $I(A,B) \sim (L/x)^{d-2}$, which is in accord with the results obtained in \cite{Swingle:2010jz}. Finally, in the regime where both $lT \gg1$ and $xT\gg 1$, mutual information vanishes identically. In the next sections, we will discuss some generic examples in the light of equations (\ref{migen1}) and (\ref{migen2}).\footnote{Note that here we are excluding the possibility of any logarithmic term.
In general, such logarithmic contributions can arise; see {\it e.g.}~the example of $(1+1)$-dim CFT and the special case of hyperscaling-violating background in later sections.} Also note that, for non-relativistic scale-invariant theories, equations (\ref{migen1}) and (\ref{migen2}) will have similar forms with $T \to T^{1/z}$, where $z$ denotes the dynamical exponent of the theory. Before concluding this section, let us also comment on a general result that we will discuss in the subsequent sections. Clearly, both the expansions alluded to in (\ref{migen1}) and (\ref{migen2}) correspond to low temperature with respect to the separation scale, {\it i.e.}~when $x T \ll 1$. However, as we will demonstrate, the two regimes of low and high temperature with respect to the system sizes, {\it i.e.}~for $lT \ll 1$ or $lT \gg 1$ contain distinct physics. It is particularly interesting to consider the case $lT \gg 1$. In this regime, the entanglement entropy of either sub-system $A$ or sub-system $B$ can be schematically given by (see {\it e.g.}~\cite{Fischler:2012ca} or equations (\ref{highee}) and (\ref{higheelif})) \begin{eqnarray} \label{eeT} S_{A} = S_B = S_{\rm div} + S_{\rm thermal} + S_{\rm finite} + S_{\rm corr} \ , \end{eqnarray} where $S_{\rm div}$ denotes the divergent piece that typically follows the area law, $S_{\rm thermal}$ denotes the purely thermal entropy that goes as the volume, $S_{\rm finite}$ denotes the next leading order contribution that also follows an area law and finally $S_{\rm corr}$ denotes corrections suppressed by exponentials of $(lT)$. In this limit, mutual information behaves in the following manner: \begin{eqnarray} \label{miT} \left. 
I(A,B) \right |_{x \to 0} = I_{\rm div} + S_{\rm finite} + I_{\rm corr} \ , \end{eqnarray} where $I_{\rm div}$ is the divergent piece that emerges in the limit $x\to 0$, with $I_{\rm div} = S_{\rm div}$, similar to what is observed in \cite{Swingle:2010jz}, and $I_{\rm corr}$ denotes correction terms in powers of $(xT)$ and $e^{- lT}$. From (\ref{eeT}) and (\ref{miT}), we see that apart from the diverging piece as $x\to 0$, mutual information does coincide with the thermal-part-subtracted entanglement entropy at the leading order. Thus, it truly measures quantum entanglement by discarding the volume-worth thermal contribution in the entanglement entropy. There are perhaps a couple of non-trivialities associated with this observation: First, note that {\it a priori} there is no reason for the sub-leading terms of entanglement entropy to follow an area law in the large temperature regime. This behaviour, which was rigorously obtained in \cite{Fischler:2012ca}, is however very crucial for the above relation to be true. Second, there is a precise match between the numerical factors as well. \section{Mutual information in relativistic CFTs} Let us begin by considering a class of large $N$ gauge theories in $d$ dimensions whose dual is given by an asymptotically AdS$_{d+1}$-background. Finite temperature is introduced by having a black hole in the bulk spacetime. The generic bulk spacetime is given by the AdS-Schwarzschild metric of the form \begin{eqnarray} ds^2 = - \frac{r^2}{R^2} f(r) dt^2 + \frac{r^2}{R^2} d\vec{x}^2 + \frac{R^2}{r^2} \frac{dr^2 }{f(r)} \ , \quad f(r) = 1 - \frac{r_H^d}{r^d} \ , \end{eqnarray} where $r_H$ is the location of the black hole horizon, $R$ is the AdS radius, $\vec{x}$ is a $(d-1)$-dimensional vector and the boundary of the spacetime is located at $r\to\infty$. The temperature of the background is obtained by Euclideanizing the time direction and periodically compactifying it on a circle.
The inverse period of this Euclidean time direction then gives the temperature as: \begin{eqnarray} \label{temp} T = \frac{r_H d}{4 \pi R^2} \ . \end{eqnarray} In what follows, we will set $R=1$. To obtain mutual information for an arrangement schematically shown in fig.~\ref{shape}, we specify the strip by \begin{eqnarray} X \equiv x^1 \in \left[ - \frac{l}{2}, \frac{l}{2}\right] \ , \quad x^i \in \left[ - \frac{L}{2}, \frac{L}{2}\right] \ , \quad i = 2, \ldots, d-1 \ , \end{eqnarray} with $L \rightarrow \infty$. The extremal surface is translationally invariant along $x^i, i=2,...,d-1$ and the profile of the surface in the bulk is $X(r)$. The area of this surface is given by \begin{equation} \label{areagen} A= L^{d-2}\int dr r^{d-2}\sqrt{r^2 X'^2+ \frac{1}{r^2\left(1-\frac{r_H^d}{r^d}\right)}}. \end{equation} Extremizing this area functional leads to the equation of motion \begin{align} \label{eomgen} \frac{dX}{dr}= \pm \frac{r_c^{d-1}}{ r^{d+1} \sqrt{\left(1- \frac{r_c^{2d-2}}{r^{2d-2}}\right)\left(1-\frac{r_H^d}{r^d}\right)}}, \end{align} where $r_c$ is an integral of motion and $r=r_c$ represents the point of closest approach of the extremal surface. Such surfaces have two branches, joined smoothly at $(r=r_c, X=0)$, and $r_c$ can be determined using the boundary conditions: \begin{equation} X(\infty)=\pm\frac{l}{2} \ , \end{equation} which leads to \begin{align} \label{lengen} \frac{l}{2}=&\int_{r_c}^{\infty}\frac{r_c^{d-1} dr}{r^{d+1} \sqrt{\left(1- \frac{r_c^{2d-2}}{r^{2d-2}}\right)}} \left(1-\frac{r_H^d}{r^d}\right)^{-1/2}\nonumber\\ =& \frac{1}{r_c}\int_{0}^{1}\frac{u^{d-1} du}{ \sqrt{1- u^{2d-2}}} \left(1-\frac{r_H^d}{r_c^d}u^d\right)^{-1/2}. \end{align} So far, we have kept the discussion general in $d$. \subsection{Special case: $d=2$} Let us now focus on $d=2$. In this case, it is possible to evaluate the integrals in (\ref{lengen}) and (\ref{areagen}) in closed forms.
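For instance, for $d=2$ the integral in (\ref{lengen}) can be done explicitly: substituting $u = \sin\theta$ one finds $\frac{l}{2} = \frac{1}{r_H}\,{\rm arctanh}\left(r_H/r_c\right)$, i.e.\ $r_c = r_H \coth(\pi l T)$ upon using (\ref{temp}). A numerical sketch of this identity (an aside, with our own helper names):

```python
from math import sin, atanh, pi

def lengen_rhs(rc, rH, n=20000):
    # (1/rc) * int_0^1 u du / sqrt((1-u^2)(1 - (rH/rc)^2 u^2)) for d = 2,
    # with u = sin(theta) to tame the endpoint singularity; midpoint rule
    k = rH / rc
    h = (pi / 2) / n
    total = 0.0
    for i in range(n):
        u = sin((i + 0.5) * h)
        total += u / (1 - (k * u) ** 2) ** 0.5 * h
    return total / rc

rc, rH = 2.0, 1.0
closed_form = atanh(rH / rc) / rH     # = l/2 from the closed-form result
assert abs(lengen_rhs(rc, rH) - closed_form) < 1e-6
```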
This eventually leads to the following expression for entanglement entropy: \begin{eqnarray} \label{ee2} S_A = \frac{c}{3} \log\left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi l}{\beta}\right)\right] \ , \quad \beta = \frac{1}{T} \ , \quad c = \frac{3}{2 G_N^{(2+1)}} \ . \end{eqnarray} Using the above expression and the definitions of entanglement entropy and mutual information in (\ref{eedef}) and (\ref{mi}), respectively, we obtain \begin{eqnarray} \label{mid2} I(A,B) = \frac{c}{3} \log \left[ \frac{\left(\sinh(\pi l T) \right)^2}{\sinh(\pi xT) \sinh (\pi (2l + x) T)}\right] \ . \end{eqnarray} In the limit $lT \ll 1$ and $xT \ll 1$, we get \begin{eqnarray} I (A, B) = \frac{c}{3} \left[\log\left(\frac{l^2}{x (2l+x)}\right) - \frac{1}{3} \pi^2 T^2 \left(l + x\right)^2 + \ldots \right] \ , \end{eqnarray} where the first term in the square bracket is just the zero temperature mutual information. In view of (\ref{migen}), we observe that there is no linear term in $T$. We also observe that finite temperature reduces mutual information and therefore promotes disentanglement between the two sub-systems. On the other hand, in the regime $lT \gg 1$ and $xT \ll 1$, we get \begin{eqnarray} \label{mid2high} I(A,B) = \frac{c}{3} \left[- \log \left(2\pi x T\right) - \left(\pi x T\right) + \log\left(\tanh(\pi l T) \right) + \ldots \right] \ , \end{eqnarray} where the contributions in $(lT)$ are exponentially suppressed. It is now easy to check that, in the large temperature regime, {\it i.e.}~$l T \gg 1$, the entanglement entropy takes the form \begin{eqnarray} \label{eed2high} S_A = S_{\rm div} + \frac{c}{3} \log\left(\sinh (\pi l T)\right) + \ldots \ . \end{eqnarray} In the limit $x \to 0$, defining $\epsilon = x/2$, we get that the large temperature expansion of mutual information given in (\ref{mid2high}) coincides exactly with the leading order large temperature expansion of the entanglement entropy given in (\ref{eed2high}).
The mismatch is exponentially suppressed in $(lT)$. This is an example of what we discussed in equations (\ref{eeT}) and (\ref{miT}). We have pictorially shown a ``phase diagram'' in fig.~\ref{2dmi}, corresponding to the $I(A,B) \neq 0$ and $I(A,B) = 0$ phases. The blue-shaded region represents the regime of parameters where there is non-vanishing correlation between the two sub-systems. From this phase diagram it is evident that increasing temperature does indeed disentangle the two sub-systems, and entanglement reduces monotonically with increasing temperature. In the gravitational dual, increasing temperature implies that the extremal surface probes deeper in the background. Hence, as the two sub-systems keep disentangling, the ``emergent'' AdS radial direction becomes more pronounced. Our results are in agreement with earlier work in \cite{MolinaVilaplana:2011xt}. \begin{figure}[!] \centering \includegraphics[width=0.8\textwidth]{2d_mutinfo.pdf} \caption{2-dimensional parameter space for the (1+1)-dimensional boundary theory. The mutual information is non-zero only in the blue shaded region.} \label{2dmi} \end{figure} \subsection{General case: $d>2$} We now move on to discussing the general case of $d>2$. In this case, it is not possible to evaluate the integrals in (\ref{lengen}) and (\ref{areagen}) in closed form. We will use the approximation scheme outlined in \cite{Fischler:2012ca}. Most of the relevant details have been relegated to appendix A; here we will discuss the final results. For the discussions in this section, we will define \begin{eqnarray} \label{cd} c = \frac{R^{d-1}}{4 G_N^{(d+1)}} \ , \end{eqnarray} where $G_N^{(d+1)}$ is Newton's constant in the $(d+1)$-dimensional bulk theory. We will set $R=1$.
\subsubsection{Mutual information: $T=0$} At zero temperature, the mutual information is given by, \begin{eqnarray} \label{miT0} I(A,B) & = & c \, {\cal S} _0 L^{d-2}\left[\frac{2}{l^{d-2}}-\frac{1}{x^{d-2}}- \frac{1}{(2l+x)^{d-2}}\right] \ , \quad x/l \le a_d \ , \\ & = & 0 \ , \quad x/l >a_d \ , \end{eqnarray} where ${\cal S} _0$ is a constant (defined in (\ref{S0})) with a negative sign, $c$ is defined in (\ref{cd}), and $a_d$ is a constant depending on the dimension of the dual CFT. In fig.~\ref{temptran} we have shown how this constant depends on the dimension $d$. It is clear that for a given $x/l$, increasing dimension makes it more difficult to disentangle the two sub-systems. This is intuitively expected since the higher the dimension, the larger the ``area'' becomes, resulting in larger entanglement. \subsubsection{Finite temperature: $T\ll \frac{1}{l},\frac{1}{x}$} In this limit, when the mutual information is non-zero, it is given by, \begin{align} I(A,B)= & c \, {\cal S} _0 L^{d-2}\left[\frac{2}{l^{d-2}}\left(1+ {\cal S} _1\left( \frac{4\pi T l}{d}\right)^d\right)-\frac{1}{x^{d-2}}\left(1+ {\cal S} _1\left( \frac{4\pi T x}{d}\right)^d\right)\right.\nonumber\\ &\left.- \frac{1}{(2l+x)^{d-2}}\left(1+ {\cal S} _1\left( \frac{4\pi T (2l+x)}{d}\right)^d\right)\right]\\ =&I(A,B)|_{T=0}-2 c \, {\cal S} _0{\cal S} _1 \left( \frac{4\pi }{d}\right)^d L^{d-2}~ T^d~ \left(l+x\right)^2 \ . \end{align} Here ${\cal S} _1$ is a constant (defined in (\ref{S1})) with a negative sign. In this case, the finite temperature correction obeys an area law, as generally proved in \cite{PhysRevLett.100.070502}. Once again we observe that introducing finite temperature decreases mutual information.
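Since ${\cal S}_0 < 0$, the zero-temperature mutual information in (\ref{miT0}) is positive precisely when $2/l^{d-2} < 1/x^{d-2} + 1/(2l+x)^{d-2}$, so $a_d$ solves $2 - a^{-(d-2)} - (2+a)^{-(d-2)} = 0$ with $a = x/l$. A small bisection sketch (ours, for illustration only) reproduces the monotonic growth of $a_d$ with $d$ shown in fig.~\ref{temptran}:

```python
import math

def a_d(d, iters=200):
    """Zero-temperature threshold a_d = (x/l) at which (miT0) vanishes.

    With a = x/l the boundary solves g(a) = 2 - a^{-(d-2)} - (2+a)^{-(d-2)} = 0.
    """
    n = d - 2
    g = lambda a: 2.0 - a ** (-n) - (2.0 + a) ** (-n)
    lo, hi = 1e-6, 2.0  # g(lo) < 0 < g(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

thresholds = [a_d(d) for d in (3, 4, 5, 6)]
print(thresholds)  # increases with d
```

For $d=3$ the equation is quadratic, $a^2 + a - 1 = 0$, with positive root $(\sqrt{5}-1)/2 \approx 0.618$.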
\subsubsection{Finite temperature: $\frac{1}{l}\ll T\ll \frac{1}{x}$} In this limit, when the mutual information is non-zero, it is given by, \begin{align} \label{midgen} I(A,B)= c \, L^{d-2}T^{d-2}&\left[-{\cal S} _0 \frac{1}{(xT)^{d-2}}+\left( \frac{4\pi }{d}\right)^{d-2}{\cal S} _{\rm high}\right.\nonumber\\ &\left.-\left( \frac{4\pi }{d}\right)^{d-1} T x-{\cal S} _0{\cal S} _1 \left( \frac{4\pi }{d}\right)^d T^2 x^2\right] \ . \end{align} Comparing the expressions in (\ref{midgen}) and (\ref{highee}), we again observe that mutual information indeed captures the true entanglement part of the entanglement entropy by getting rid of the thermal contribution; this is a precise statement, including all numerical factors. It is an example of the generic observation mentioned in (\ref{eeT}) and (\ref{miT}). From the above expression, it is possible to find an upper bound $(xT)\equiv b_d$, above which $I(A,B)$ is always zero. We have shown the dependence of $b_d$ as a function of $d$ in fig.~\ref{temptran}. Once again we observe that increasing dimension increases $b_d$, which is intuitively expected. \begin{figure}[!ht] \begin{center} \subfigure[] {\includegraphics[angle=0, width=0.45\textwidth]{zerotemp_transition.pdf} } \subfigure[] {\includegraphics[angle=0, width=0.45\textwidth]{bd.pdf} } \caption{\small The left panel: the dependence of $a_d$, as defined in (\ref{transition}), with respect to $d$. The right panel: the dependence of $b_d$, as defined in (\ref{transitionT}), with respect to $d$. The solid dots represent the corresponding values of $a_d$ or $b_d$ beyond which the mutual information vanishes.} \label{temptran} \end{center} \end{figure} \subsubsection{Large temperature: $T\gg \frac{1}{x}$} In this limit, the two sub-systems are completely disentangled and the mutual information is identically zero. The corresponding ``phase diagram'' is shown in fig.~\ref{4dmi}, where the shaded region corresponds to $I(A,B) \not = 0$ and $I(A,B) = 0$ everywhere outside.
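For concreteness, $b_d$ can be estimated by combining (\ref{midgen}) with the constants ${\cal S}_0$, ${\cal S}_1$ and ${\cal S}_{\rm high}$ given in appendix A. The sketch below (ours; a rough numerical illustration, not necessarily the procedure behind fig.~\ref{temptran}) evaluates the series with log-gamma functions and locates the zero of the bracket in (\ref{midgen}) by bisection.

```python
import math
from math import exp, gamma, lgamma, pi, sqrt

def S0(d):
    # Eq. (S0) of appendix A
    return (2 ** (d - 2) * pi ** ((d - 1) / 2.0)
            * gamma(-(d - 2) / (2.0 * (d - 1)))
            / ((d - 1) * gamma(1.0 / (2 * (d - 1))))
            * (gamma(d / (2.0 * (d - 1))) / gamma(1.0 / (2 * (d - 1)))) ** (d - 2))

def S1(d):
    # Eq. (S1) of appendix A
    pre = (gamma(1.0 / (2 * (d - 1))) ** (d + 1)
           / (gamma(d / (2.0 * (d - 1))) ** d * gamma(0.5 + 1.0 / (d - 1)))
           * 2.0 ** (-d - 1) * pi ** (-d / 2.0))
    brk = (gamma(1.0 / (d - 1)) / gamma(-(d - 2) / (2.0 * (d - 1)))
           + 2.0 ** (1.0 / (d - 1)) * (d - 2) * gamma(1.0 + 1.0 / (2 * (d - 1)))
           / (sqrt(pi) * (d + 1)))
    return pre * brk

def S_high(d, nmax=50_000):
    # Eq. (constantshigh) of appendix A; log-gammas keep large-n terms finite
    total = (-sqrt(pi) * (d - 1) * gamma(d / (2.0 * (d - 1)))
             / ((d - 2) * gamma(1.0 / (2 * (d - 1)))))
    for n in range(1, nmax):
        total += ((d - 1.0) / ((1 + n * d) * (d * (n - 1) + 2))
                  * exp(lgamma(0.5 + n) - lgamma(1.0 + n)
                        + lgamma(d * (n + 1) / (2.0 * d - 2))
                        - lgamma((d * n + 1) / (2.0 * (d - 1)))))
    return 2.0 * total

def b_d(d, iters=200):
    """Value of xT above which the bracket in (midgen) can no longer be positive."""
    s0, s1, sh = S0(d), S1(d), S_high(d)
    q = 4.0 * pi / d
    F = lambda y: (-s0 / y ** (d - 2) + q ** (d - 2) * sh
                   - q ** (d - 1) * y - s0 * s1 * q ** d * y ** 2)
    lo, hi = 1e-6, 5.0  # F(lo) > 0 > F(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

b3, b4 = b_d(3), b_d(4)
print(b3, b4)
```

Both ${\cal S}_0$ and ${\cal S}_1$ come out negative, as stated in the text, and the resulting $b_d$ grows with $d$.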
\begin{figure}[!] \centering \includegraphics[width=0.8\textwidth]{4d_mutinfo.pdf} \caption{2-dimensional parameter space for the (3+1)-dimensional boundary theory. The mutual information is non-zero only in the blue shaded region. The corresponding parameter space looks qualitatively similar for general $d$; thus we show only one representative example here.} \label{4dmi} \end{figure} \section{Other backgrounds} We will now consider generic examples of scale-invariant (but not conformal) theories, which are known to have gravity dual descriptions. Such field theories with gravity duals, assuming they exist, are non-relativistic. Examples include the so-called Lifshitz geometry introduced in \cite{Kachru:2008yh}, and more recently the background with hyperscaling violation in \cite{Huijse:2011ef}. Note that, in both \cite{Kachru:2008yh} and \cite{Huijse:2011ef}, the approach is {\it phenomenological} or so-called {\it bottom-up}, {\it i.e.}~the existence of a dual field theory with the right symmetry properties is postulated {\it ab initio} without directly making connection to a more rigorous string or brane construction. A lot of progress has been made in embedding such effective gravity descriptions in ten or eleven dimensional supergravity; see {\it e.g.}~\cite{Balasubramanian:2010uk, Donos:2010tu} and \cite{Dong:2012se, Narayan:2012hk} respectively. By now there is a vast literature on such embeddings, and emboldened by these results we will work with an effective description without explicitly referring to the precise details of the dual field theory; we also assume that the Ryu-Takayanagi proposal holds. \subsection{Lifshitz background} Let us discuss the Lifshitz background first.
In this case, the background metric is invariant under the following scale transformation: \begin{eqnarray} t \to \lambda^z t \ , \quad x \to \lambda x \ , \quad r \to \lambda r \ , \end{eqnarray} where $\lambda$ is a real number and $r$ is the radial coordinate, in which the boundary is located at $r\to 0$. Such backgrounds are typically obtained from Einstein gravity with a negative cosmological constant and some matter field, such as a massive vector field or a scalar field. An analytic finite temperature Lifshitz background in $(3+1)$-dimensions was obtained in \cite{Balasubramanian:2009rx}, and is given by \begin{eqnarray} \label{Lifshitz} && ds^2 = R^2\left(- f \frac{dt^2}{r^{2z}} + \frac{d\vec{x}^2}{r^2} + \frac{dr^2}{f r^2} \right)\ , \quad f = 1- \frac{r^2}{r_H^2} \ , \\ && \phi = - \frac{1}{2} \log\left( 1 + \frac{r^2}{r_H^2} \right) \ , \quad A = \frac{f}{r^2} dt \ , \end{eqnarray} where $\phi$ is the dilaton field, $A$ is a massive vector field, and the dynamical exponent is $z=2$. For our purposes, it is only the background metric that will be relevant. The temperature in the dual field theory is given by the Hawking temperature of the black hole \begin{equation} T=\frac{R}{2\pi r_H^2} \ . \end{equation} The details of the calculations for the entanglement entropy, and subsequently the mutual information, proceed as before. The relevant details are provided in appendix B; here we will discuss the final results. At zero temperature, the mutual information is given by \begin{equation} \label{lifmi} I(A,B)=-c \, {\cal L} _0 L\left[\frac{2}{l}-\frac{1}{x}- \frac{1}{(2l+x)}\right] \ , \quad c = \frac{R^2}{4G_N^{(4)}} \ , \end{equation} for $x/l\le 0.618$. Here ${\cal L} _0$ is a numerical constant given in (\ref{l0l1}). Note that the above formula matches exactly with the zero temperature mutual information for $d=3$ obtained in (\ref{miT0}).
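The quoted threshold $0.618$ has a simple closed form (a two-line check of ours, not spelled out in the text): the bracket in (\ref{lifmi}) vanishes when $2/l = 1/x + 1/(2l+x)$, i.e.\ $a^2 + a - 1 = 0$ for $a = x/l$, whose positive root is the golden-ratio conjugate.

```python
import math

a = (math.sqrt(5.0) - 1.0) / 2.0          # positive root of a^2 + a - 1 = 0
residual = 2.0 - 1.0 / a - 1.0 / (2.0 + a)
print(round(a, 3), residual)               # 0.618, ~0
```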
This is expected since at zero temperature, the dynamical exponent of the background does not enter the computations. Thus it is not possible to distinguish between a relativistic CFT and a scale-invariant non-relativistic field theory by looking at the behaviour of the mutual information. In the intermediate temperature range, $\sqrt{T/R}\ll \frac{1}{l},\frac{1}{x}$, we get \begin{equation} I(A,B) =I(A,B)|_{T=0} - 2 c \, {\cal L} _0{\cal L} _1 \frac{ L~ T}{R} x + \ldots \ , \end{equation} where ${\cal L} _1$ is a numerical constant given in (\ref{l0l1}). In this case, in addition to the familiar area law, we observe a correction linear in temperature as a small temperature is introduced. As before, introducing temperature decreases mutual information. In the limit $\frac{1}{l}\ll \sqrt{T/R}\ll \frac{1}{x}$, the mutual information is obtained to be: \begin{eqnarray} \label{mihighlif} I(A,B) & = & c \, L \sqrt{\frac{T}{R}}\left[{\cal L} _0 \sqrt{\frac{R}{T}}\frac{1}{x}-{\cal L} _{\rm high} -\sqrt{\frac{T}{R}}x(2\pi+{\cal L} _0{\cal L} _1)\right] \ , \quad \sqrt{\frac{T}{R}}x \le 0.261 \ , \\ & = & 0 \ , \quad \sqrt{\frac{T}{R}}x > 0.261 \ . \end{eqnarray} Here ${\cal L} _{\rm high} = 2.671$ is just a numerical constant. Comparing (\ref{mihighlif}) with (\ref{higheelif}), we find that in this regime the mutual information indeed coincides with the thermal-part-subtracted entanglement entropy. Finally, for $\sqrt{T/R}\gg \frac{1}{x}$, $I(A,B) = 0$ identically. The corresponding $2$-dimensional ``phase diagram'' is shown in fig.~\ref{lifpd}, where the shaded region corresponds to $I(A,B) \not = 0$ and it vanishes everywhere else. \begin{figure}[!] \centering \includegraphics[width=0.7\textwidth]{lifshitz.pdf} \caption{\small $2$-dimensional parameter space for a scale-invariant $(2+1)$-dimensional field theory with Lifshitz scaling. The dynamical exponent is $z=2$.
Mutual information is non-zero in the shaded region.} \label{lifpd} \end{figure} \subsection{Hyperscaling-violating background} A more general background with hyperscaling violation was proposed in \cite{Huijse:2011ef}. In this case, the metric is covariant under the scale transformation and has the following property: \begin{eqnarray} t \to \lambda^z t \ , \quad r \to \lambda r \ , \quad x \to \lambda x \ , \quad ds^2 \to \lambda^{2\theta/(d-1)} ds^2 \ , \end{eqnarray} where $\theta$ is known as the hyperscaling violation exponent. In the presence of a black hole, the metric takes the following form \cite{Alishahiha:2012qu}\footnote{Note that in \cite{Alishahiha:2012qu} $d$ denotes the spatial dimensions only. Thus $d_{\rm here} = d_{\rm there} + 1$.} \begin{eqnarray} && ds^2 = r^{2\theta/(d-1)} \left(- f(r) \frac{dt^2}{r^{2z}} + \frac{dr^2}{r^2 f(r)} + \frac{d\vec{x}^2}{r^2} \right) \ , \nonumber\\ && f(r) = 1 - \left(\frac{r}{r_H}\right)^{\gamma} \ , \end{eqnarray} where $\gamma$ is a real-valued constant which we will keep unspecified for now, $r_H$ is the location of the horizon, $z$ is the dynamical exponent, $\theta$ is the hyperscaling violation exponent and $d$ is the spacetime dimension of the boundary dual theory. We have also set the curvature of the space $R=1$. The advantage of writing the metric in the above fashion is the fact that in the zero temperature limit it becomes conformal to the Lifshitz metric written in (\ref{Lifshitz}). The boundary is located at $r \to 0$. The temperature of this background is given by \begin{equation} T= \frac{\gamma}{4 \pi r_H^z} \ . \end{equation} We will consider the case when $d-\theta-2\ge 0$, which typically exhibits an area law for entanglement entropy, with the exception of a logarithmic violation for $\theta = d - 2$. \subsubsection{General case: $\theta \not = d-2$} In the same spirit as before, let us investigate the general case of $\theta \not = d-2$ in various temperature regimes.
Some relevant details containing the high temperature and low temperature expansions of the entanglement entropy have been relegated to appendix C. \\ \noindent{\bf The case of $T=0$:} At zero temperature, the mutual information for the $d$-dimensional boundary theory (for $d-\theta-2> 0$) is given by \begin{equation} \label{hsmi} I(A,B)= c \, {\cal C} (\theta,d) L^{d-2}\left[\frac{2}{l^{d-\theta-2}}-\frac{1}{x^{d-\theta-2}}- \frac{1}{(2l+x)^{d-\theta-2}}\right] \ , \quad c = \frac{1}{4G_N^{(d+1)}} \ , \end{equation} where ${\cal C} (\theta, d)$ is a numerical constant which is given in appendix C. $I(A,B)$ is zero when $x/l\ge a_{hs}$, where $a_{hs}$ is the solution of the algebraic equation \begin{equation} 2 a_{hs}^{d-\theta-2}-1- \frac{a_{hs}^{d-\theta-2}}{(2+a_{hs})^{d-\theta-2}}=0 \ . \end{equation} \\ \noindent{\bf The case of $l T^{1/z}, x T^{1/z}\ll 1$:} In this limit, when the mutual information is non-zero, it is given by, \begin{equation} I(A,B) =I(A,B)|_{T=0}+ c \, h_1 {\cal C} (\theta,d) L^{d-2}T^{\frac{\gamma}{z}} \left[2 l^{\gamma-d+\theta + 2}-x^{\gamma-d+\theta + 2}-(2l+x)^{\gamma-d+\theta + 2}\right] \ . \end{equation} Here $h_1$ is a numerical constant, which does not contain any physical information. We do observe the familiar correction term at low temperature. \\ \noindent{\bf The case of $ x T^{1/z}\ll 1, l T^{1/z}\gg1$:} In this limit, when the mutual information is non-zero, it is given by \begin{equation} \label{mihshigh} I(A,B) =-c \, L^{d-2} \left[\frac{{\cal C} (\theta,d)}{x^{d-\theta-2}} - h_3 T^{\frac{d - \theta - 2}{z} } + {\cal C} (\theta,d)h_1 T^{\frac{\gamma}{z}}x^{\gamma-d+\theta+2}+h_2 T^{\frac{d-\theta-1}{z}}x\right] \ , \end{equation} where $h_2$ is a numerical constant. Finally, as before, we have $I(A,B) = 0$ identically in the limit $ x T^{1/z}\gg 1$. Comparing equations (\ref{mihshigh}) with (\ref{eehighhs}), we note that the mutual information coincides with the thermal-part-subtracted entanglement entropy at large temperature.
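The algebraic equation for $a_{hs}$ depends only on the combination $n \equiv d-\theta-2$, and is easy to solve by bisection (an illustrative sketch of ours):

```python
def a_hs(n, iters=200):
    """Threshold x/l for the hyperscaling-violating case, n = d - theta - 2 > 0."""
    g = lambda a: 2.0 * a ** n - 1.0 - (a / (2.0 + a)) ** n
    lo, hi = 1e-6, 1.5  # g(lo) < 0 < g(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print([round(a_hs(n), 4) for n in (1, 2, 3)])
```

For $n=1$ this reproduces the value $(\sqrt{5}-1)/2 \approx 0.618$ quoted for the Lifshitz ($d=3$) case, and the threshold increases with $n$, consistent with the behaviour of $a_d$ in the conformal case.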
It can be checked that the corresponding ``phase diagram'' looks very similar to the ones analyzed before; hence we do not explicitly provide one here. \subsubsection{Special case: $\theta = d-2$} Let us now consider the special case of $\theta = d-2$, where the logarithmic violation of the area law shows up. For a similar earlier study, see {\it e.g.}~\cite{Huijse:2011ef}.\\ \noindent{\bf The case of $T = 0$:} At zero temperature, the mutual information for the $d$-dimensional boundary theory (for $d-\theta-2= 0$) is given by \begin{equation} I(A,B)=c \, L^{d-2}\ln\left[\frac{l^2}{x(2l+x)}\right] \ . \end{equation} $I(A,B)$ is zero when $x/l\ge 0.414$. This result is identical in form to the $T=0$ limit of the result obtained in (\ref{mid2}). \\ \noindent{\bf The case of $l T^{1/z}, x T^{1/z}\ll 1$:} In this limit, when the mutual information is non-zero, it is given by \begin{equation} I(A,B) =I(A,B)|_{T=0}+ 2L^{d-2}c \, k_1 T^{\gamma/z}\left[2 l^{\gamma}-x^{\gamma}-(2l+x)^{\gamma}\right] \ , \end{equation} where $k_1$ is a numerical constant. \\ \noindent{\bf The case of $ x T^{1/z}\ll 1, l T^{1/z}\gg1$:} In this limit, when the mutual information is non-zero, it is given by \begin{equation} I(A,B) = c L^{d-2} \left[2 \ln \left(\frac{1}{x T^{1/z}}\right)+k_2 - k_3 x T^{1/z}- 2k_1 x^{\gamma} T^{\gamma/z}\right] \ , \end{equation} where $k_2$ and $k_3$ are numerical constants. In this case, irrespective of the value of $\gamma$, the mutual information does indeed capture the thermal-part-subtracted entanglement entropy. Finally, in the large temperature limit $x T^{1/z} \gg 1$, $I(A,B) = 0$ identically. This also results in a similar ``phase diagram''. \section{Conclusions and Outlook} In this article, we have explored the disentangling transition between two sub-systems by studying mutual information in the context of holography. We have considered a class of large $N$ relativistic gauge theories as well as generic examples of non-relativistic scale-invariant theories.
We have found a universal qualitative behaviour in the corresponding ``phase diagram'' in the $(x/l)$ vs $(xT)$ plane. There are numerous possibilities that we can consider in the future. It will be interesting to explore how the disentangling transition depends on the shape of the sub-systems $A$ and $B$. Intuitively, we expect this transition to be present irrespective of the geometry of the sub-systems, but the precise nature of the transition may depend crucially on it. In recent years, there have been a lot of developments in understanding and holographically computing a more general notion of entanglement entropy, the so-called R\'{e}nyi entropy; see for example \cite{Casini:2011kv, Hung:2011nu}. Subsequently we can define a mutual information that is derived from the R\'{e}nyi entropy. It is an interesting question to explore what physics may be contained in this R\'{e}nyi mutual information as far as the disentangling transition is concerned. In gravity duals of confining large $N$ gauge theories, it is known that the entanglement entropy itself undergoes a transition \cite{Nishioka:2006gr, Klebanov:2007ws}: this corresponds to having two candidate minimal area surfaces for a given length. Thus it will be extremely interesting to explore the physics of mutual information in such backgrounds, since in addition to the disentangling transition that we have explored here, the transition in the entanglement entropy itself is likely to produce a richer analogue of the ``phase diagram'' that we have analyzed here. In quantum many-body systems, mutual information is emerging as a useful order parameter for certain phase transitions, such as the ones described in \cite{Singh:2011, 2012JSMTE..01..023W}. Within the context of the AdS/CFT correspondence, or the gauge-gravity duality, examples of various phase transitions are plentiful.
Typically such phase transitions are engineered to understand aspects of strongly coupled Quantum Chromodynamics (QCD) or, more recently, of strongly coupled condensed matter-type systems; see {\it e.g.}~\cite{Hartnoll:2009sz} for a review on some of these. It will be interesting to consider what role mutual information might play in phase transitions that are described within the context of holography. Finally, let us note that the sharp transition of mutual information is a consequence of the large $N$ limit. In this limit, the inequality in (\ref{mi1}) is trivially satisfied since the right hand side is always $1/N$-suppressed \cite{Headrick:2010zt}. At finite $N$, however, mutual information should not vanish identically. Hence, the $1/N$-corrections to the RT formula perhaps do not contain a simple geometric interpretation as an area functional in the bulk geometry. \section{Acknowledgements} We are grateful to Matthew Headrick for his insightful comments and discussions that led to this work and also for pointing out a couple of errors in a previous version. This material is based upon work supported by the National Science Foundation under Grant Number PHY-0969020 and by the Texas Cosmology Center, which is supported by the College of Natural Sciences and the Department of Astronomy at the University of Texas at Austin and the McDonald Observatory. AK is also supported by a Simons postdoctoral fellowship awarded by the Simons Foundation. \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix A. Low and high temperature expansions} \addcontentsline{toc}{section}{Appendix A. Low and high temperature expansions} Here we will recall some of the relevant results that have been discussed in detail in \cite{Fischler:2012ca}.
Let us make the following expansion of (\ref{lengen}) \begin{align} l=\frac{2}{ r_c}\sum_{n=0}^\infty\left(\frac{1}{1+nd}\right)\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{d (n+1)}{2 (d-1)}\right]}{ \Gamma[1+n]\Gamma \left[\frac{d n+1}{2 (d-1)}\right]}\left(\frac{r_H}{r_c}\right)^{nd} \ , \label{eerc} \end{align} which converges for $r_c> r_H$. The area of the extremal surface is given by \begin{align} A=2 L^{d-2}\int_{r_c}^{\infty}\frac{r^{d-3} dr}{ \sqrt{\left(1- \frac{r_c^{2d-2}}{r^{2d-2}}\right)}} \left(1-\frac{r_H^d}{r^d}\right)^{-1/2} \ , \end{align} which is a divergent quantity with the divergent piece: \begin{equation} A_{\rm div}=\frac{2}{d-2}L^{d-2}{r_b}^{d-2}= \frac{2}{d-2}\left(\frac{L}{a}\right)^{d-2}\qquad d\neq 2 \ . \end{equation} Here $r_b$ corresponds to the ultraviolet cut off $a=1/r_b$ (or a lattice spacing) of the boundary theory.\footnote{We are working with AdS radius $R=1$. Restoring $R$, the lattice spacing is given by $a=\frac{R^2}{r_b}$.} This is the familiar area law divergence. This area law behavior of the divergent piece is well understood from field theory computations \cite{Bombelli:1986rw, Srednicki:1993im}. Now, we can do an expansion $(d\neq 2)$ for the finite part of the area \begin{align} A_{\rm finite}=&2 L^{d-2}r_c^{d-2}\int_{r_c/r_b}^{1}\frac{ du}{u^{d-1} \sqrt{1- u^{2d-2}}} \left(1-\frac{r_H^d}{r_c^d}u^d\right)^{-1/2}-\frac{2}{d-2}L^{d-2}{r_b}^{d-2}\nonumber\\ =&2 L^{d-2}r_c^{d-2} \left[\frac{\sqrt{\pi } \Gamma \left(-\frac{d-2}{2 (d-1)}\right)}{2 (d-1) \Gamma \left(\frac{1}{2 (d-1)}\right)}+ \sum_{n=1}^\infty\left(\frac{1}{2(d-1)}\right)\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{d (n-1)+2}{2 d-2}\right]}{ \Gamma[1+n]\Gamma \left[\frac{d n+1}{2 (d-1)}\right]}\left(\frac{r_H}{r_c}\right)^{nd}\right] \ , \label{aren} \end{align} which again converges for $r_c>r_H$. 
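As a consistency check (ours, purely illustrative), the series (\ref{eerc}) can be compared at $d=3$ against direct quadrature of (\ref{lengen}); the substitution $u^2 = \sin t$ renders the integrand smooth at the turning point. In the $r_H \to 0$ limit the series also reproduces the pure-AdS result $l = 2\sqrt{\pi}\,\Gamma(3/4)/(r_c\,\Gamma(1/4))$, which follows from the leading term of the $r_c(l)$ relation below.

```python
import math
from math import exp, lgamma, pi, sin, sqrt

def l_series(r_c, r_H, d=3, nmax=200):
    """Eq. (eerc): l as a series in (r_H/r_c)^d, valid for r_c > r_H."""
    k = r_H / r_c
    total = 0.0
    for n in range(nmax):
        total += (1.0 / (1 + n * d)
                  * exp(lgamma(0.5 + n) - lgamma(1.0 + n)
                        + lgamma(d * (n + 1) / (2.0 * (d - 1)))
                        - lgamma((d * n + 1) / (2.0 * (d - 1))))
                  * k ** (n * d))
    return 2.0 * total / r_c

def l_quadrature_d3(r_c, r_H, steps=200_000):
    """Midpoint quadrature of (lengen) at d = 3, via u^2 = sin(t)."""
    k = r_H / r_c
    h = (pi / 2.0) / steps
    total = 0.0
    for i in range(steps):
        s = sin((i + 0.5) * h)
        total += sqrt(s) / sqrt(1.0 - k ** 3 * s ** 1.5)
    return total * h / r_c

print(l_series(1.0, 0.5), l_quadrature_d3(1.0, 0.5))
```

The two evaluations agree to quadrature accuracy for $r_H/r_c = 1/2$, where the series converges rapidly.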
Now we can solve equation (\ref{eerc}) for $r_c$ and then we can calculate area by using equation (\ref{aren}); entanglement entropy of the rectangular strip can subsequently be computed using (\ref{eedef}). We can then extract low and high temperature behavior of the entanglement entropy from equations (\ref{eerc}, \ref{aren}).\\ \noindent {\bf Low temperature regime:}\\ The low temperature regime is characterized by having $Tl\ll 1$, or equivalently $r_H l \ll 1$. In this case, $r_c\gg r_H$ and the leading contributions to the area come from the near-boundary AdS region. The deviations can be computed to give \begin{align} r_c=\frac{2 \sqrt{\pi}\Gamma\left[\frac{d}{2(d-1)}\right]}{l~\Gamma\left[\frac{1}{2(d-1)}\right]} \left[1+ \frac{1}{2(d+1)}\frac{2^{\frac{1}{d-1}-d} \Gamma \left(1+\frac{1}{2 (d-1)}\right) \Gamma \left(\frac{1}{2 (d-1)}\right)^{d+1}}{\pi^{\frac{d+1}{2}} \Gamma \left(\frac{1}{2}+\frac{1}{d-1}\right)\Gamma\left(\frac{d}{2(d-1)}\right)^d}\left(r_H l\right)^d+{\cal O} \left(r_H l\right)^{2d}\right] \end{align} Now using equation (\ref{aren}), at first order in $(r_H l)^d$, we get \begin{align} A_{\rm finite}= {\cal S} _0 \left(\frac{L}{l}\right)^{d-2}\left[1+ {\cal S} _1 (r_H l)^d+ {\cal O} (r_H l)^{2d}\right] \ , \end{align} where, \begin{align} {\cal S} _0=& \frac{2^{d-2} \pi ^{\frac{d-1}{2}} \Gamma \left(-\frac{d-2}{2 (d-1)}\right) }{(d-1) \Gamma \left(\frac{1}{2 (d-1)}\right)} \left(\frac{\Gamma \left(\frac{d}{2 (d-1)}\right)}{\Gamma \left(\frac{1}{2 (d-1)}\right)}\right)^{d-2} \ , \label{S0} \\ {\cal S} _1=& \frac{\Gamma \left(\frac{1}{2 (d-1)}\right)^{d+1}}{\Gamma \left(\frac{d}{2(d-1)}\right)^d\Gamma \left(\frac{1}{2}+\frac{1}{d-1}\right)}2^{-d-1} \pi ^{-\frac{d}{2}} \left(\frac{\Gamma \left(\frac{1}{d-1}\right) }{\Gamma \left(-\frac{d-2}{2 (d-1)}\right)}+\frac{2^{\frac{1}{d-1}} (d-2) \Gamma \left(1+\frac{1}{2 (d-1)}\right) }{\sqrt{\pi } (d+1)}\right) \ . 
\label{S1} \end{align} Therefore, following equation (\ref{eedef}), the entanglement entropy of the rectangular strip for the $d$-dimensional boundary theory at low temperature ($Tl\ll1$) is given by, \begin{equation} S_A=c \left[\frac{2}{d-2}\left(\frac{L}{a}\right)^{d-2}+ {\cal S} _0 \left(\frac{L}{l}\right)^{d-2}\left\{1+ {\cal S} _1 \left(\frac{4 \pi T l}{d}\right)^d+ {\cal O} \left(\frac{4 \pi T l}{d}\right)^{2d}\right\}\right] \ , \end{equation} where $c$ is defined in (\ref{cd}). \\ \noindent{\bf High temperature regime}\\ At high temperature (i.e. $T l \gg 1$, or equivalently $r_H l \gg 1$), using the methods outlined in \cite{Fischler:2012ca} we can evaluate a perturbative expansion. In this case ($r_H l\gg 1$), $r_c$ approaches $r_H$. We will rewrite equation (\ref{aren}) in a way that allows us to take the limit $r_c\rightarrow r_H$ without encountering any divergence. \begin{align} A_{\rm finite}=&2 L^{d-2}r_c^{d-2} \left[\frac{\sqrt{\pi } \Gamma \left(-\frac{d-2}{2 (d-1)}\right)}{2 (d-1) \Gamma \left(\frac{1}{2(d-1)}\right)}\right.\nonumber\\&\left.+ \sum_{n=1}^\infty\frac{1}{1+nd}\left(1+\frac{d-1}{d(n-1)+2}\right)\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{d (n+1)}{2 d-2}\right]}{ \Gamma[1+n]\Gamma \left[\frac{d n+1}{2 (d-1)}\right]}\left(\frac{r_H}{r_c}\right)^{nd}\right] \nonumber\\ =&2 L^{d-2}r_c^{d-2} \left[\frac{l r_c}{2}-\frac{\sqrt{\pi }(d-1) \Gamma \left(\frac{d}{2 (d-1)}\right)}{(d-2)\Gamma \left(\frac{1}{2(d-1)}\right)}\right.\nonumber\\ &+\left.\sum_{n=1}^\infty\left(\frac{1}{1+nd}\right)\left(\frac{d-1}{d(n-1)+2}\right)\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{d (n+1)}{2 d-2}\right]}{ \Gamma[1+n]\Gamma \left[\frac{d n+1}{2 (d-1)}\right]}\left(\frac{r_H}{r_c}\right)^{nd}\right].\label{highA} \end{align} The infinite series in the last equation for large $n$ admits the limit $r_c\rightarrow r_H$. 
The leading behavior is obtained to be \begin{align}\label{Shigh} A_{\rm finite}\approx l L^{d-2}r_H^{d-1}\left[1+ \left(\frac{1}{l r_H}\right){\cal S} _{\rm high}\right] \ , \end{align} where \begin{align} {\cal S} _{\rm high}=&2\left[-\frac{\sqrt{\pi }(d-1) \Gamma \left(\frac{d}{2 (d-1)}\right)}{(d-2)\Gamma \left(\frac{1}{2(d-1)}\right)}+\sum_{n=1}^\infty\left(\frac{1}{1+nd}\right)\left(\frac{d-1}{d(n-1)+2}\right)\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{d (n+1)}{2 d-2}\right]}{ \Gamma[1+n]\Gamma \left[\frac{d n+1}{2 (d-1)}\right]}\right] \ .\label{constantshigh} \end{align} Hence, the entanglement entropy of the rectangular strip for the $d$-dimensional boundary theory at high temperature is given by, \begin{equation}\label{highee} S_A = c \left[\frac{2}{d-2}\left(\frac{L}{a}\right)^{d-2}+V \left(\frac{4 \pi T}{d}\right)^{d-1}\left\{1+ \left(\frac{d}{4 \pi T l }\right){\cal S} _{\rm high}\right\} + \ldots \right] \ , \end{equation} where $V=l L^{d-2}$ is the volume of the rectangular strip with AdS radius $R=1$. \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix B. Computations in the Lifshitz background} \addcontentsline{toc}{section}{Appendix B. Computations in the Lifshitz background} In this case, the area functional of the surface is given by \begin{equation} A= L\int dr \sqrt{X'^2+ \frac{1}{\left(1-\frac{r^2}{r_H^2}\right)}} \ . \end{equation} This action leads to the equation of motion \begin{align} \frac{dX}{dr}= \pm \frac{r^{2}}{ r_c^{2} \sqrt{\left(1- \frac{r^{4}}{r_c^{4}}\right)\left(1-\frac{r^2}{r_H^2}\right)}} \ , \end{align} where $r_c$ can be determined from \begin{equation} X(\infty)=\pm\frac{l}{2} \ .
\end{equation} Thus we get \begin{align} l=2\int^{r_c}_{0} \frac{r^{2}dr}{ r_c^{2} \sqrt{\left(1- \frac{r^{4}}{r_c^{4}}\right)\left(1-\frac{r^2}{r_H^2}\right)}} = & 2r_c \int_{0}^{1}\frac{u^2 du}{ \sqrt{1- u^{4}}} \left(1-\frac{r_c^2}{r_H^2}u^2\right)^{-1/2}\nonumber\\ =&\frac{r_c}{ 2}\sum_{n=0}^\infty\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{n}{2}+\frac{3}{4}\right]}{ \Gamma[1+n]\Gamma \left[\frac{n}{2}+\frac{5}{4}\right]}\left(\frac{r_c}{r_H}\right)^{2n} \ .\label{lifrc} \end{align} For any finite temperature, $r_c< r_H$ and the infinite series converges. The area of the extremal surface is given by \begin{align} A=2L \int^{r_c}_{0} \frac{dr}{ r^{2} \sqrt{\left(1- \frac{r^{4}}{r_c^{4}}\right)\left(1-\frac{r^2}{r_H^2}\right)}} \ . \end{align} This area is infinite, indicating that the entanglement entropy has a divergence. We can do a similar expansion for the area \begin{align} A=\frac{2L}{a}+ \frac{L}{r_c}\left[-\frac{(2\pi)^{3/2}}{\Gamma(1/4)^2}+\frac{1}{2} \sum_{n=1}^\infty\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{n}{2}-\frac{1}{4}\right]}{ \Gamma[1+n]\Gamma \left[\frac{n}{2}+\frac{1}{4}\right]}\left(\frac{r_c}{r_H}\right)^{2n}\right] \ , \label{lifee} \end{align} where, again, $a$ is the ultraviolet cutoff (or lattice spacing) of the boundary theory. At low temperature, $r_c\ll r_H$ and the leading contributions to the area come from the boundary. In this limit, equation (\ref{lifrc}) can be solved for $r_c$ and at first order, we obtain \begin{equation} r_c=\frac{\Gamma(1/4)^2 l}{(2\pi)^{3/2}}\left[1- \frac{\Gamma(1/4)^8}{12 (2\pi)^{5}}\frac{l^2}{r_H^2}+{\cal O} \left(\frac{l^4}{r_H^4}\right)\right] \ .
\end{equation} Now using equation (\ref{lifee}), the entanglement entropy of the rectangular strip for the boundary theory at low temperature ($l\sqrt{T/R}\ll1$) is given by \begin{equation} S_A=c \, \left[\frac{2L}{a} -{\cal L} _0\left(\frac{L}{l}\right)\left\{1-{\cal L} _1\frac{Tl^2}{R} + \ldots \right\}\right] \ , \end{equation} where ${\cal L} _0$ and ${\cal L} _1$ are numerical constants given by \begin{align} \label{l0l1} {\cal L} _0= \frac{(2\pi)^{3}}{\Gamma(1/4)^4} \ , \qquad {\cal L} _1=\frac{\Gamma(1/4)^8}{6 (2\pi)^{4}} \ , \end{align} and $c$ is defined in (\ref{lifmi}). At high temperature ($l/r_H\gg 1$), $r_c$ approaches $r_H$. Equation (\ref{lifee}) does not converge for $r_c=r_H$; we will rewrite equation (\ref{lifee}) in a way that allows us to take the limit $r_c\rightarrow r_H$ without encountering any divergence: \begin{align} A=\frac{2L}{a}+ \frac{L}{r_c}\left[-\frac{(2\pi)^{3/2}}{\Gamma(1/4)^2}+\frac{\Gamma(-1/4)\Gamma(1/2)}{2\Gamma(1/4)}+\frac{l}{r_c}+ \sum_{n=1}^\infty \frac{1}{2n-1}\frac{\Gamma\left[\frac{1}{2}+n\right]\Gamma \left[\frac{n}{2}+\frac{3}{4}\right]}{ \Gamma[1+n]\Gamma \left[\frac{n}{2}+\frac{5}{4}\right]}\left(\frac{r_c}{r_H}\right)^{2n}\right] \ . \end{align} Now the entanglement entropy of the rectangular strip for the boundary theory at high temperature ($l\sqrt{T/R}\gg 1$) is obtained by taking the limit $r_c\rightarrow r_H$ in the last equation, yielding \begin{equation} \label{higheelif} S_A=c \, \left[\frac{2L}{a} +\frac{2\pi L l T}{R}- {\cal L} _{\rm high}\frac{L\sqrt{T}}{\sqrt{R}}+ \ldots \right] \ , \end{equation} where ${\cal L} _{\rm high}=2.671$. \renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix C. Computations in the Hyperscaling-violating background} \addcontentsline{toc}{section}{Appendix C.
Computations in the Hyperscaling-violating background} \noindent{\bf The case of $\theta \not = d-2$:} \begin{align} l=&r_c \sum_{n=0}^{n=\infty}p_n \left(\frac{r_c}{r_H}\right)^{n\gamma} \ ,\\ S_{A; \rm finite}=&\frac{2 c\, L^{d-2}}{r_c^{d-\theta-2}} \left[q_0+ \sum_{n=1}^{n=\infty}q_n \left(\frac{r_c}{r_H}\right)^{n\gamma}\right] \ , \end{align} where $p_n, q_n$ are constants that depend only on $d$ and $\theta$, and $c$ is defined in (\ref{hsmi}). At low temperature, $r_c\ll r_H$ and we get \begin{equation} S_{A; \rm finite}=\frac{c\, {\cal C} (\theta,d) L^{d-2}}{l^{d-\theta-2}}\left[1+h_1~ l^{\gamma}~ T^{\frac{\gamma}{z}}+...\right] \ , \end{equation} where $h_1$ is a numerical constant and \begin{align} {\cal C} (\theta,d)=2 p_0^{d-\theta-2} q_0 \ . \end{align} At high temperature, $r_c\sim r_H$ and our previous calculations suggest that in the limit $r_c\rightarrow r_H$ we can write \begin{equation} S_{A; \rm finite}=\frac{2 c\, L^{d-2}}{r_H^{d-\theta-2}} \left[q_0-p_0+\frac{l}{r_H}+ \sum_{n=1}^{n=\infty}(q_n-p_n) \right] \end{equation} and the infinite sum now converges. Finally, we obtain \begin{equation} \label{eehighhs} S_{A; \rm finite}= c \, L^{d-2}~ T^{\frac{d-\theta-1}{z}}\left[h_2 l +h_3 ~ T^{-\frac{1}{z}}+...\right] \ , \end{equation} where $h_2$, $h_3$ are numerical constants.\\ \noindent{\bf The case of $\theta = d-2$:} At low temperature, $r_c\ll r_H$ and we get \begin{equation} S_{A; \rm finite}=2L^{d-2} c \left[ \ln(l)+k_1 l^{\gamma} ~T^{\gamma/z}+...\right] \ , \end{equation} where $k_1\ge 0$ is a numerical constant. At high temperature, $r_c\sim r_H$ and we obtain \begin{equation} S_{A; \rm finite}=c \, L^{d-2}\left[k_2- \frac{2}{z}\ln(T)+k_3 l T^{1/z}+...\right] \ , \end{equation} where $k_2$ and $k_3$ are again numerical constants.
\baselineskip=24pt \section{Basic properties of contact metric structures} A contact form on a $2n+1$-dimensional manifold $M$ is a one-form $\eta$ such that $\eta\wedge (d\eta )^n$ is a volume form on $M$. Given a contact manifold $(M,\eta )$, there exist tensor fields $(\xi ,\phi , g)$, where $g$ is a Riemannian metric, $\xi$ is a unit vector field, called the Reeb field of $\eta$, and $\phi$ is an endomorphism of the tangent bundle of $M$ such that \begin{itemize} \item[(i)] $\eta (\xi )=1,~\phi^2 =-Id +\eta\otimes\xi ,~\phi\xi =0$ \item[(ii)] $d\eta =2g(.,\phi .)$ \end{itemize} The data $(M,\eta ,\xi ,\phi ,g)$ is called a contact metric structure; see \cite{BLA} for more details. Denoting by $\nabla$ the Levi-Civita connection of $g$, and by $$R(X,Y)Z=\nabla_X\nabla_YZ-\nabla_Y\nabla_XZ-\nabla_{[X,Y]}Z$$ its curvature tensor, a contact metric structure $(M,\eta ,\xi ,\phi ,g)$ is called Sasakian if the condition $$(\nabla_X\phi )Y=g(X,Y)\xi -\eta (Y)X$$ is satisfied for all tangent vectors $X$ and $Y$. A well-known curvature characterization of the Sasakian condition is as follows: \begin{pro} {\it A contact metric structure $(M,\eta , \xi ,\phi ,g)$ is Sasakian if and only if $$R(X,Y)\xi =\eta (Y)X-\eta (X)Y$$ for all tangent vectors $X$ and $Y$.}\end{pro} A condition weaker than the Sasakian one is the K-contact condition. A contact metric structure $(M,\eta ,\xi ,\phi ,g)$ is called K-contact if the tensor field $h=\frac{1}{2}L_\xi \phi$ vanishes identically. Here, $L_\xi \phi$ stands for the Lie derivative of $\phi$ in the direction of $\xi$. The above K-contact condition is known to be equivalent to the Reeb vector field $\xi$ being a $g$-infinitesimal isometry, or a Killing vector field. The tensor field $h$ is known to be symmetric and to anticommute with $\phi$. 
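The algebraic relations in (i) can be illustrated by a toy finite-dimensional model (our own example, not taken from the literature): on ${\bf R}^3$ with basis $(e_1,e_2,\xi )$, take $\phi$ to rotate the plane spanned by $e_1, e_2$ and to annihilate $\xi$. A short sketch checking $\phi^2=-Id+\eta\otimes\xi$, $\phi\xi =0$ and $\eta (\xi )=1$:

```python
# Toy 3x3 matrix model of the relations (i): basis (e1, e2, xi),
# phi rotates the contact plane by 90 degrees and annihilates xi.
phi = [[0, -1, 0],
       [1,  0, 0],
       [0,  0, 0]]
xi  = [0, 0, 1]
eta = [0, 0, 1]                    # eta(v) = v[2]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# -Id + eta (tensor) xi, as a matrix
target = [[(-1 if i == j else 0) + xi[i] * eta[j] for j in range(3)]
          for i in range(3)]
phi2 = mat_mul(phi, phi)
assert phi2 == target                             # phi^2 = -Id + eta (x) xi
assert mat_vec(phi, xi) == [0, 0, 0]              # phi xi = 0
assert sum(e * x for e, x in zip(eta, xi)) == 1   # eta(xi) = 1
```

This captures the linear-algebra content of (i) only; a genuine differential-geometric example requires a concrete contact form such as the Darboux forms discussed below.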
An equally well-known curvature characterization of K-contactness is as follows: \begin{pro}{\it A contact metric structure $(M,\eta ,\xi ,\phi ,g)$ is K-contact if and only if $$R(X,\xi )\xi =X-\eta (X)\xi$$ for all tangent vectors $X$.}\end{pro} The notation ``$l$'' is common for the tensor $$lX=R(X,\xi )\xi$$ \begin{pro}\label{prop33} On a contact metric structure $(M,\eta ,\xi ,\phi ,g)$, the following identities hold: \begin{equation}\nabla_\xi h=\phi -h^2\phi -\phi l \label{bl1}\end{equation} \begin{equation}\phi l\phi -l=2(h^2+\phi^2) \label{bl2}\end{equation} \begin{equation}L_\xi h=\nabla_\xi h+2\phi h+2\phi h^2\label{bl3}\end{equation} \end{pro} \proof The first two identities appear in Blair's book \cite{BLA}. We establish the third one. $$\begin{array}{rcl}( L_\xi h) X&=&[\xi ,hX]-h[\xi ,X]\\&=&\nabla_\xi(hX)-\nabla_{hX}\xi -h(\nabla_\xi X-\nabla_X\xi )\\&=&(\nabla_\xi h)X+h\nabla_\xi X-[-\phi hX-\phi h^2X]-h\nabla_\xi X+h[-\phi X-\phi hX]\\ &=&(\nabla_\xi h)X+h\nabla_\xi X+\phi hX+\phi h^2X-h\nabla_\xi X-h\phi X+\phi h^2 X\\ &=&(\nabla_\xi h)X+2\phi hX+2\phi h^2X \end{array} $$ $\qed$ Given a contact metric structure $(M,\eta ,\xi , \phi , g)$, its $D_a$-homothetic deformation is a new contact metric structure $(M,\overline{\eta}, \overline{\xi }, \overline{\phi}, \overline{g})$ given by a real number $a>0$ and $$\overline{\eta}=a\eta,~~\overline{\xi}=\frac{\xi}{a},~~\overline{\phi}=\phi$$ $$\overline{g}=ag+a(a-1)\eta\otimes\eta$$ D-homothetic deformations preserve the K-contact and Sasakian conditions. 
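As a quick consistency sketch (ours, not from the text), one can check that the deformed tensors still satisfy $\overline{\eta}(\overline{\xi})=1$ and $\overline{g}(\overline{\xi},\overline{\xi})=1$, using only $\eta (\xi )=1$ and $g(\xi ,\xi )=1$:

```python
from fractions import Fraction as F

# D_a-homothetic deformation: eta_bar = a*eta, xi_bar = xi/a,
# g_bar = a*g + a*(a-1)*eta(x)eta.  Check the unit conditions exactly.
for a in (F(1, 2), F(2), F(7, 3)):
    eta_xi = F(1)                  # eta(xi) = 1
    g_xi_xi = F(1)                 # g(xi, xi) = 1 (xi is a unit field)
    etabar_xibar = (a * eta_xi) / a
    gbar_xi_xi = a * g_xi_xi + a * (a - 1) * eta_xi**2
    gbar_xibar_xibar = gbar_xi_xi / a**2
    assert etabar_xibar == 1 and gbar_xibar_xibar == 1
```

Exact rational arithmetic is used so the check is free of rounding; the sample values of $a$ are arbitrary.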
\section{Weakly $(\kappa ,\mu )$-spaces} A direct calculation shows that under a $D_a$-homothetic deformation, the curvature tensor transforms as follows: $$\begin{array}{rcl}a\overline{R}(X,Y)\overline{\xi}&=&R(X,Y)\xi -(a-1)[(\nabla_X\phi )Y-(\nabla_Y\phi )X+\eta (X)(Y+hY)\\&&-\eta (Y)(X+hX)]+(a-1)^2[\eta (Y)X-\eta (X)Y] \end{array} $$ Letting $Y=\xi$ and recalling $\nabla_\xi\phi =0$, we get:$$\begin{array}{rcl}a\overline{R}(X,\xi )\overline{\xi}&=&R(X,\xi )\xi-(a-1)[(\nabla_X\phi )\xi +\eta (X)\xi -(X+hX)]\\&&+(a-1)^2[X-\eta (X)\xi ] \end{array} $$ On any contact metric manifold, the following identity holds: $$(\nabla_X\phi)\xi =-\phi\nabla_X\xi=-X+\eta (X)\xi -hX.$$ Taking this identity into account, we see that the curvature tensor deforms as follows: $$a^2\overline{R}(X,\overline{\xi})\overline{\xi}=R(X,\xi )\xi +(a^2-1)(X-\eta (X)\xi )+2(a-1)hX$$ Equivalently, since $\xi =a\overline{\xi}$ and $h=a\overline{h}$, \begin{equation}\overline{R}(X,\overline{\xi})\overline{\xi}=\frac{1}{a^2}R(X,\xi )\xi +\frac{a^2-1}{a^2}(X-\overline{\eta} (X)\overline{\xi})+\frac{2a-2}{a}\overline{h}X\label{k6}\end{equation} It follows from (\ref{k6}) that, under a $D_a$-homothetic deformation, the condition $R(X,\xi )\xi =0$ transforms into $$\overline{R}(X,\overline{\xi} )\overline{\xi} =\kappa (X-\overline{\eta} (X)\overline{\xi} )+\mu \overline{h}X$$ where $\kappa =\frac{a^2-1}{a^2}$ and $\mu =\frac{2a-2}{a}$. As a generalization of both $R(X,\xi )\xi =0$ and the K-contact condition, $R(X,\xi )\xi =X-\eta (X)\xi $, we consider $$R(X,\xi )\xi =\kappa (X-\eta(X)\xi ) +\mu hX.$$ We call this the weak $(\kappa ,\mu )$ condition. The same generalization was referred to as a Jacobi $(\kappa ,\mu )$-contact manifold in \cite{GHS}. Let us also point out that a strong $(\kappa ,\mu )$ condition $R(X,Y)\xi=\kappa (\eta(Y)X-\eta (X)Y)+\mu (\eta (Y)hX-\eta (X) hY)$ has been introduced in \cite{BKP}. 
Examples of weakly $(\kappa ,\mu )$ spaces which are not strongly $(\kappa ,\mu )$ are provided by the Darboux contact forms $\eta =\frac{1}{2}(dz-\sum y^idx^i)$ on ${\bf R}^{2n+1}$ with associated metric $$g=\frac{1}{4}\left(\begin{array}{ccr} \delta_{ij}+y^iy^j+\delta_{ij}z^2&\delta_{ij}z&-y^i\\\delta_{ij}z&\delta_{ij}&0\\-y^j&0&1\end{array}\right) $$ (see \cite{BLA}). Other examples of weakly $(\kappa ,\mu )$-spaces have been found on normal bundles of totally geodesic Legendre submanifolds in Sasakian manifolds (see \cite{BAN}). The two notions of $(\kappa ,\mu )$-spaces are D-homothetically invariant. It follows from identity (\ref{k6}) that, if $(M,\eta ,\xi ,\phi ,g )$ is a (weak) $(\kappa ,\mu )$ structure, then the $D_a$-homothetic deformation $( \overline{\eta}, \overline{\xi}, \phi ,\overline{g} )$ is a (weak) $(\overline{\kappa}, \overline{\mu})$-structure with: \begin{equation}\label{km} \overline{\kappa}=\frac{\kappa +a^2-1}{a^2},~~\overline{\mu}=\frac{\mu +2a-2}{a}\end{equation} The tensor fields $\phi$ and $h$ on a weakly $(\kappa ,\mu )$-space are related by the identities in the following proposition. 
\begin{pro}\label{prop4} On a weakly $(\kappa ,\mu )$-space $(M,\eta ,\xi , \phi , g)$, the following identities hold: \begin{equation} h^2=(\kappa -1)\phi^2,~~\kappa \le 1\label{ka1}\end{equation} \begin{equation}\nabla_\xi h=-\mu\phi h\label{ka2}\end{equation} \begin{equation} L_\xi h=(2-\mu )\phi h+2(1-\kappa )\phi\label{eq6}\end{equation} \end{pro} \proof Starting with identity (\ref{bl2}), which is valid on any contact metric structure, one has, for any tangent vector $X$: $$\begin{array}{rcl}2h^2X+2\phi^2X&=&\phi (\kappa \phi X+\mu h\phi X)-(\kappa (-\phi^2X)+\mu hX)\\&=&\kappa\phi^2X+\mu\phi h\phi X+\kappa\phi^2X-\mu hX\\&=&\kappa\phi^2X-\mu h\phi^2X+\kappa\phi^2X-\mu hX\\&=&2\kappa\phi^2X\end{array} $$ Hence, grouping terms, $$2h^2X=(2\kappa -2)\phi^2X$$ So $$h^2=(\kappa -1)\phi^2$$ But since $h$ is symmetric, $h^2$ must be a non-negative operator, while $\phi^2=-Id+\eta\otimes\xi$ is non-positive; hence $\kappa \le 1$, proving (\ref{ka1}). From identity (\ref{bl1}) combined with $lX=\kappa (X-\eta (X)\xi )+\mu hX$, we see that $$\begin{array}{rcl}(\nabla_\xi h)X&=&\phi X-h^2\phi X-\phi (\kappa (X-\eta (X)\xi )+\mu hX)\\&=&\phi X-(\kappa -1)\phi^3X-\kappa\phi X-\mu\phi hX\\&=&(1-\kappa +(\kappa -1))\phi X-\mu\phi hX\\&=&-\mu \phi hX \end{array} $$ proving (\ref{ka2}). Next, combining identities (\ref{bl3}), (\ref{ka2}) and (\ref{ka1}), one has: $$\begin{array}{rcl}L_\xi h&=&\nabla_\xi h+2\phi h+2\phi h^2\\&=&-\mu \phi h+2\phi h+2\phi h^2\\ &=&-\mu \phi h+2\phi h+2\phi (\kappa -1)\phi^2\\&=&-\mu \phi h+2\phi h-2(\kappa -1)\phi\\&=&(2-\mu )\phi h+2(1-\kappa )\phi\end{array} $$ proving (\ref{eq6}). $\qed$ The structure of the tangent bundle of a weakly $(\kappa ,\mu )$-space is described by the following theorem: \begin{Theorem} Let $(M^{2n+1},\eta , \xi ,\phi , g)$ be a weakly $(\kappa ,\mu )$ contact metric manifold. Then $\kappa \le 1$. If $\kappa =1$, then the structure is K-contact. 
If $\kappa <1$, then the tangent bundle $TM$ decomposes into three mutually orthogonal distributions $D(0)$, $D(\lambda )$ and $D(-\lambda )$, the eigenbundles of the tensor $h$, where $\lambda =\sqrt{1-\kappa }$. \end{Theorem} \proof Clearly, $\kappa =1$ is exactly the K-contact condition. Suppose $\kappa <1$. Since $h\xi =0$ and $h$ is symmetric, it follows from identity (\ref{ka1}), Proposition \ref{prop4}, ($h^2=(\kappa -1)\phi^2$), that the restriction $h_{|D}$ of $h$ to the contact subbundle $D$ has eigenvalues $\lambda =\sqrt{1-\kappa }$ and $-\lambda$. By $D(\lambda )$, $D(-\lambda )$ and $D(0)$, we denote the corresponding eigendistributions. If $X\in D(\lambda )$, then $h\phi X=-\phi hX=-\lambda\phi X$. Thus $\phi X\in D(-\lambda )$; since $h$ is symmetric, the three distributions above are mutually orthogonal. $\qed$ To shed some light on the difference between weak $(\kappa ,\mu )$ and strong $(\kappa ,\mu )$-spaces, we propose a weak semi-symmetry condition. We say that a contact metric space $(M,\eta ,\xi ,\phi , g)$ is weakly semi-symmetric if $R(X,\xi )R=0$ for all tangent vectors $X$, where $R$ is the curvature operator. We will prove the following: \begin{Theorem}\label{Theo2} Let $(M,\eta ,\xi ,\phi , g)$ be a weakly semi-symmetric, contact metric weakly $(\kappa ,0)$-space. Then $(M,\eta ,\xi ,\phi ,g )$ is a strongly $(\kappa ,0)$-space. \end{Theorem} \proof The weakly semi-symmetric condition means that $(R(X,\xi )R)(Y,\xi )\xi =0$ holds for any tangent vectors $X$ and $Y$. 
Extending $Y$ into a local vector field, we have: $$\begin{array}{rcl} 0&=&R(X,\xi )R(Y,\xi )\xi-R(R(X,\xi )Y,\xi )\xi -R(Y,R(X,\xi )\xi )\xi -\\&&R(Y,\xi )R(X,\xi )\xi\\ &=&R(X,\xi )(\kappa (Y-\eta (Y)\xi ))-\kappa (R(X,\xi )Y-\eta (R(X,\xi )Y)\xi )-\\&&R(Y,\kappa (X-\eta (X)\xi ))\xi -R(Y,\xi )(\kappa (X-\eta (X)\xi ))\\ &=&\kappa R(X,\xi )Y-\kappa \eta (Y)R(X,\xi )\xi-\kappa R(X,\xi )Y+\kappa\eta (R(X,\xi)Y)\xi -\\&&\kappa R(Y,X)\xi +\kappa\eta (X)R(Y,\xi )\xi -\kappa R(Y,\xi )X+\kappa\eta (X)R(Y,\xi )\xi\\ &=&-\kappa^2\eta (Y)X-\kappa g(R(X,\xi )\xi ,Y)\xi -\kappa R(Y,X)\xi +\kappa^2\eta (X)Y-\\&&\kappa R(Y,\xi )X+\kappa^2\eta (X)Y-\kappa^2\eta (X)\eta (Y)\xi\\&=&-\kappa^2\eta (Y)X-\kappa g(\kappa (X-\eta (X)\xi ),Y)\xi -\kappa R(Y,X)\xi +2\kappa^2\eta (X)Y-\\&&\kappa R(Y,\xi )X-\kappa^2\eta (X)\eta (Y)\xi\\0&=&-\kappa^2\eta (Y)X-\kappa^2 g(X,Y)\xi -\kappa R(Y,X)\xi +2\kappa^2\eta (X)Y-\kappa R(Y,\xi )X\end{array} $$Equation \begin{equation}\label{w1} -\kappa^2\eta (Y)X-\kappa^2 g(X,Y)\xi -\kappa R(Y,X)\xi +2\kappa^2\eta (X)Y-\kappa R(Y,\xi )X=0\end{equation} is valid for any $X$ and $Y$. Exchanging $X$ and $Y$ leads to \begin{equation}\label{w2}-\kappa^2\eta (X)Y-\kappa^2g(Y,X)\xi -\kappa R(X,Y)\xi +2\kappa^2\eta (Y)X-\kappa R(X,\xi )Y=0\end{equation} Subtracting equation (\ref{w1}) from equation (\ref{w2}), we obtain:\begin{equation}3\kappa^2 (\eta (Y)X-\eta (X)Y)+\kappa (R(Y,X)\xi -R(X,Y)\xi )+\kappa (R(Y,\xi )X-R(X,\xi )Y)=0\label{w3}\end{equation} By the first Bianchi identity, $R(Y,\xi )X-R(X,\xi )Y=R(Y,X)\xi$ holds. Incorporating this identity into (\ref{w3}), we obtain the following: \begin{equation} 3 \kappa^2(\eta (Y)X-\eta (X)Y)+3\kappa R(Y,X)\xi =0\end{equation} which implies the strong $(\kappa ,0)$ condition $$R(X,Y)\xi =\kappa (\eta (Y)X-\eta (X)Y)$$ $\qed$ Theorem \ref{Theo2} applies to the case $\kappa =1$ and has the following interesting corollary: \begin{cor} A weakly semi-symmetric K-contact manifold is Sasakian. 
\end{cor} \proof In the $\kappa =1$ case, the strong $(\kappa ,\mu )$ condition is exactly the Sasakian condition. $\qed$ Weakly $(\kappa ,\mu )$-structures with $\kappa =1$ (the K-contact ones) are D-homothetically fixed. A non-K-contact weakly $(\kappa ,\mu )$ structure cannot be D-homothetically deformed into a K-contact one. Weakly $(\kappa ,\mu )$-structures with $\mu =2$ are also D-homothetically fixed. As a consequence, a weakly $(\kappa ,\mu )$ structure with $\mu \neq 2$ cannot be deformed into one with $\mu =2$ either. Existence of these homothetically fixed $(\kappa ,\mu )$ structures depends on an invariant that was first introduced by Boeckx for strongly $(\kappa ,\mu )$ structures in \cite{BOE}. \section{The Boeckx invariant} The Boeckx invariant $I_M$ of a weakly $(\kappa ,\mu )$ contact space is defined by $$I_M=\frac{1-\frac{\mu }{2}}{\sqrt{1-\kappa }}=\frac{1-\frac{\mu}{2}}{\lambda}.$$ $I_M$ is a D-homothetic invariant. Any two D-homothetically related weakly $(\kappa ,\mu )$-structures have the same Boeckx invariant. The following lemma is crucial in proving existence of $D$-homothetically fixed $(\kappa ,\mu )$-structures. \bl\label{lemma1} Let $(M,\eta , \xi ,\phi , g )$ be a non-K-contact, weakly $(\kappa ,\mu )$ space.\begin{itemize} \item[(i)] If $I_M>1$, then $2-\mu >2\sqrt{1-\kappa }>0$; in particular $2-\mu -2\sqrt{1-\kappa }>0$ and $2-\mu +2\sqrt{1-\kappa }>0$. \item[(ii)] If $I_M<-1$, then $2-\mu <-2\sqrt{1-\kappa }<0$; in particular $2-\mu +2\sqrt{1-\kappa }<0$ and $2-\mu -2\sqrt{1-\kappa }<0.$ \item[(iii)] $|I_M|<1$ if and only if $0<2\lambda +2-\mu$ and $0<2\lambda +\mu -2$. \end{itemize} \el \proof (i). Suppose $I_M>1$. Then $1-\frac{\mu}{2}>\sqrt{1-\kappa}$ and $\mu <2$. $$\begin{array}{rcr}1-\frac{\mu}{2} >\sqrt{1-\kappa}&\Rightarrow&2-\mu >2\sqrt{1-\kappa}>0\\&\Rightarrow& 2-\mu-2\sqrt{1-\kappa}>0\quad\mbox{and}\quad 2-\mu +2\sqrt{1-\kappa}>0\end{array} $$ (ii). Suppose $I_M<-1$. Then $1-\frac{\mu }{2}<-\sqrt{1-\kappa}$ and $\mu >2$. 
$$\begin{array}{rcr} 1-\frac{\mu}{2}<-\sqrt{1-\kappa}&\Rightarrow&2-\mu <-2\sqrt{1-\kappa}<0\\&\Rightarrow&2-\mu +2\sqrt{1-\kappa}<0\quad\mbox{and}\quad 2-\mu -2\sqrt{1-\kappa}<0\end{array} $$ (iii). $|I_M|<1$ if and only if $-1<\frac{1-\frac{\mu}{2}}{\lambda}<1$. Equivalently $$-1<\frac{2-\mu}{2\lambda}<1\quad\mbox{and}\quad -1<\frac{\mu -2}{2\lambda}<1$$ Thus $$-2\lambda <2-\mu <2\lambda\quad\mbox{and}\quad -2\lambda <\mu -2<2\lambda$$ That is, $$0<2\lambda +2-\mu \quad\mbox{and}\quad 0<2\lambda +\mu -2$$ $\qed$ \section{D-homothetically fixed structures on weakly $(\kappa ,\mu )$-spaces} \subsection{K-contact structures on weakly $(\kappa ,\mu )$ spaces} We have pointed out that D-homothetic deformations of non-K-contact weakly $(\kappa ,\mu )$ structures remain non-K-contact. However, on $(\kappa ,\mu )$-spaces with large Boeckx invariant, K-contact structures coexist with $(\kappa ,\mu )$ structures. \bt \label{theo2}Let $(M,\eta , \xi , \phi ,g )$ be a non-K-contact, weakly $(\kappa ,\mu )$-space whose Boeckx invariant $I_M$ satisfies $|I_M|>1$. 
Then, $M$ admits a K-contact structure $(M,\eta , \xi, \overline{\phi}, \overline{g})$ compatible with the contact form $\eta$.\et \proof We define tensor fields $\overline{\phi}$ and $\overline{g}$ by \begin{equation}\label{def1}\overline{\phi}=\frac{\epsilon}{(1-\kappa )\sqrt{(2-\mu )^2-4(1-\kappa )}}(L_\xi h\circ h)\end{equation} $$\overline{g}=-\frac{1}{2}d\eta (., \overline{\phi}.)+\eta\otimes\eta$$ where $$\epsilon=\left\{\begin{array}{ll}+1&\mbox{if } I_M>0\\-1&\mbox{if } I_M<0\end{array}\right.$$ From the formulas $h^2=-(1-\kappa )\phi^2$ and $L_\xi h=(2-\mu )\phi h+2(1-\kappa )\phi$ in Proposition \ref{prop4}, we obtain $$\begin{array}{rcl} (L_\xi h\circ h)^2&=&(2-\mu )^2(1-\kappa )^2\phi^2-4(1-\kappa )^2\phi^2h^2\\ &=&(1-\kappa )^2((2-\mu )^2-4(1-\kappa ))\phi^2 \end{array} $$ That is: $$(L_\xi h\circ h)^2=\lambda^4\alpha (-Id+\eta\otimes \xi )$$ where $\lambda =\sqrt{1-\kappa }$ and $\alpha =(2-\mu )^2-4(1-\kappa ).$ One sees that, if $\alpha >0$, then $\overline{\phi}=\frac{\epsilon}{\lambda^2\sqrt\alpha}(L_\xi h\circ h)$ defines an almost complex structure on the contact subbundle. Notice also that $\alpha >0$ is equivalent to $|I_M|>1$. We will show that $\overline{\phi}$ is $\xi$-invariant. For that, it suffices to show that the Lie derivative of $L_\xi h\circ h$ vanishes in the $\xi$-direction. $$\begin{array}{rcl}L_\xi (L_\xi h\circ h)&=&L_\xi ((2-\mu )(1-\kappa )\phi +2(1-\kappa )\phi h)\\&=&2(2-\mu )(1-\kappa )h+4(1-\kappa )h^2+2(1-\kappa )[(2-\mu )\phi^2h+\\&&2(1-\kappa )\phi^2]\\ &=&2(2-\mu )(1-\kappa )h+4(1-\kappa )h^2+2(1-\kappa )(2-\mu )\phi^2h+\\&&4(1-\kappa )^2\phi^2\\ &=&-4(1-\kappa )^2\phi^2+4(1-\kappa )^2\phi^2 =0 \end{array} $$ Next, we will show that $\overline{g}=-\frac{1}{2}d\eta (., \overline{\phi}. )+\eta\otimes\eta$ is an adapted Riemannian metric for the structure tensors $(\eta , \xi , \overline{\phi})$. 
That is, $\overline{g}$ is a bilinear, symmetric, positive definite tensor with $$d\eta =2 \overline {g}(., \overline{\phi}.).$$ From the definition of $\overline{g}$, we have, for arbitrary tangent vectors $X$ and $Y$: $$\begin{array}{rcl} \overline{g}(X,Y)&=& -\frac{1}{2}d\eta (X,\overline{\phi}Y)+\eta (X)\eta (Y)\\ &=&-\frac{\epsilon}{2(1-\kappa )\sqrt{(2-\mu )^2-4(1-\kappa )}}d\eta (X,((1-\kappa )(2-\mu )\phi +\\&&2(1-\kappa )\phi h)Y)+\eta (X)\eta (Y)\\&=& \frac{\epsilon (2-\mu )}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(X,Y)+\frac{2\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(X,hY)+\\&&\left(1-\frac{\epsilon (2-\mu )}{\sqrt{(2-\mu )^2-4(1-\kappa )}}\right)\eta(X)\eta(Y)\\&=&\frac{\epsilon (2-\mu )}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(Y,X)+\frac{2\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(Y,hX)+\\&&\left(1-\frac{\epsilon (2-\mu )}{\sqrt{(2-\mu )^2-4(1-\kappa )}}\right)\eta(Y)\eta(X)\\&=&\overline{g}(Y,X) \end{array} $$ proving symmetry of $\overline{g}$. We used the symmetry of $h$ in the next-to-last step. For the positive definiteness of $\overline{g}$, first observe that $\overline{g}(\xi ,\xi )=1>0$. Then for any non-zero tangent vector $X$ in the contact bundle $D$, using the definition of $\overline{\phi }$ in (\ref{def1}) and the formula for $L_\xi h$ from identity (\ref{eq6}) in Proposition \ref{prop4}, we have: $$\begin{array}{rcl}\overline{g}(X,X)&=&-\frac{1}{2}d\eta (X,\overline{\phi }X)\\&=&-\frac{\epsilon (2-\mu )}{2\sqrt{(2-\mu )^2-4(1-\kappa )}}d\eta (X,\phi X)-\frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}d\eta (X,\phi hX)\\ &=&\frac{\epsilon (2-\mu )}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(X,X)+\frac{2\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(X,hX)\\&=& \frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}((2-\mu )g(X,X)+2g(X,hX)) \end{array} $$ If $X\in D(\lambda )$, then $$\overline{g}(X,X)=\frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}((2-\mu )+2\sqrt{1-\kappa })g(X,X).$$ By Lemma \ref{lemma1}, (i), (ii), the inequality $$\epsilon ((2-\mu )+2\sqrt{1-\kappa})>0$$ holds when $|I_M|>1$. Therefore $\overline{g}(X,X)>0$. 
In the same way, if $X\in D(-\lambda )$, then $$\overline{g}(X,X)=\frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}((2-\mu )-2\sqrt{1-\kappa })g(X,X)$$ which is also $>0$ by Lemma \ref{lemma1}, (i) and (ii). This concludes the proof of the positivity of $\overline{g}$. We easily verify that $\overline{g}$ is an adapted metric. $$\begin{array}{rcl}2\overline{g}(X,\overline{\phi}Y)&=&-d\eta (X,\overline{\phi}^2Y)\\&=&-d\eta (X,-Y+\eta (Y)\xi )\\&=&d\eta (X,Y).\end{array} $$ $\qed$ \noindent{\bf Remark:} As a consequence of Theorem \ref{theo2}, contact forms on compact, weakly $(\kappa ,\mu )$-spaces with $|I_M|>1$ admit associated K-contact structures, hence satisfy Weinstein's conjecture on the existence of closed Reeb orbits (see \cite{RUK}). On weakly $(\kappa ,\mu )$-spaces with small Boeckx invariant, it turns out that $(\kappa ,2)$ structures coexist with $(\kappa ,\mu\ne 2)$ structures. This will be established in the next subsection. \subsection{Contact metric weakly $(\kappa , 2 )$-spaces} Given a non-K-contact, weakly $(\kappa ,\mu )$-space $(M,\eta ,\xi ,\phi ,g)$, we define the D-homothetically invariant tensor field $$\tilde{\phi}=\frac{1}{\sqrt{1-\kappa }}h$$ \begin{Lemma}\label{lem2} Denoting by $$\tilde{h}=\frac{1}{2}L_\xi\tilde{\phi }=\frac{1}{2\sqrt{1-\kappa }}L_\xi h,$$ the following identities are satisfied: \begin{equation}\tilde{h}=\frac{1}{2\sqrt{1-\kappa }}((2-\mu )\phi h+2(1-\kappa )\phi )\label{tilde1}\end{equation} \begin{equation}\tilde{h}^2=((1-\kappa )-(1-\frac{\mu}{2})^2)\phi^2 \label{tilde2}\end{equation} \end{Lemma} \proof From the third identity in Proposition \ref{prop33}, combined with identity (\ref{eq6}), Proposition \ref{prop4}, we get $$2(\sqrt{1-\kappa })\tilde{h}=L_\xi h=(2-\mu )\phi h+2(1-\kappa )\phi $$ So $$\tilde{h}=\frac{1}{2\sqrt{1-\kappa }}\left((2-\mu )\phi h+2(1-\kappa )\phi\right)$$ which is (\ref{tilde1}). The proof of (\ref{tilde2}) is a straightforward calculation. $\qed$ \noindent {\bf Remark}: If $|I_M|<1$, then $1-\kappa -(1-\frac{\mu}{2})^2>0$. 
Therefore, identity (\ref{tilde2}) suggests that $\tilde{h}$ can be used to define a complex structure on the contact subbundle.\vskip 12pt Define the tensor field $\phi_1$ by $$\phi_1=\frac{1}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\tilde{h}=\frac{1}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\frac{1}{2\sqrt{1-\kappa }}((2-\mu)\phi h+2(1-\kappa )\phi )$$ \begin{pro} \label{prop5}The tensor field $\phi_1$ satisfies $$\phi_1^2=-Id+\eta\otimes \xi$$ \begin{equation} h_1=\frac{1}{2}L_\xi\phi_1=(\sqrt{1-I_M^2})h\label{phi11}\end{equation}\end{pro} \proof The identity $\phi_1^2=\phi^2=-Id+\eta\otimes\xi$ follows from Lemma \ref{lem2}, (\ref{tilde2}). As for identity (\ref{phi11}), we proceed as follows: $$\begin{array}{rcl}h_1=\frac{1}{2}(L_\xi\phi_1)&=&\frac{1}{4\sqrt{(1-\kappa )(1-\kappa-(1-\frac{\mu}{2})^2)}} L_\xi ((2-\mu )\phi h+2(1-\kappa )\phi )\\&=&\frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[(2-\mu)((L_\xi\phi )h+\phi L_\xi h)+\\&&2(1-\kappa )L_\xi \phi]\\&=& \frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[(2-\mu )(2h^2+\phi((\mu -2)h\phi+\\&&2(1-\kappa )\phi )) +4(1-\kappa )h]\\ &=&\frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[2(2-\mu )h^2-(2-\mu )^2\phi h\phi +\\&&2(2-\mu )(1-\kappa )\phi^2+4(1-\kappa )h]\\&=&\frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[2(2-\mu )(\kappa -1)\phi^2-(2-\mu )^2h+\\&&2(2-\mu )(1-\kappa )\phi^2 +4(1-\kappa )h]\\&=& \frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[4(1-\kappa )-(2-\mu )^2]h\\&=&\sqrt{\frac{4(1-\kappa )-(2-\mu )^2}{4(1-\kappa )}}h=(\sqrt{1-I_M^2})h \end{array} $$ $\qed$ As pointed out earlier, when a D-homothetic deformation is applied to a weakly $(\kappa , \mu )$ structure with $\mu =2$, the $\mu$ value remains the same, as is seen from one of the formulas (\ref{km}): $$\overline{\mu}=\frac{\mu +2a-2}{a}$$ As a consequence, weakly $(\kappa ,2 )$ structures cannot be obtained through D-homothetic deformations. 
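The D-homothetic behaviour described above can also be checked numerically. The sketch below (our own check, not part of the text) samples random $(\kappa ,\mu ,a)$, applies the transformation rule (\ref{km}), and verifies both that the Boeckx invariant $I_M$ is unchanged and that $\mu =2$ is a fixed value:

```python
import math
import random

def boeckx(kappa, mu):
    # I_M = (1 - mu/2) / sqrt(1 - kappa), defined for kappa < 1
    return (1 - mu / 2) / math.sqrt(1 - kappa)

random.seed(0)
for _ in range(100):
    kappa = random.uniform(-3, 0.9)       # kappa < 1 (non-K-contact)
    mu = random.uniform(-3, 3)
    a = random.uniform(0.1, 5)
    kb = (kappa + a * a - 1) / (a * a)    # formulas (\ref{km})
    mb = (mu + 2 * a - 2) / a
    assert abs(boeckx(kb, mb) - boeckx(kappa, mu)) < 1e-9
    # mu = 2 is a fixed point: mb equals 2 exactly when mu equals 2
    assert (abs(mb - 2) < 1e-12) == (abs(mu - 2) < 1e-12)
```

The first assertion reproduces the D-homothetic invariance of $I_M$, and the second the fact that $(\kappa ,2)$ structures can neither be created nor destroyed by the deformation.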
In the case $|I_M|<1$, we prove the following theorem: \bt \label{theo4} Let $(M, \eta , \xi , \phi , g)$ be a non-K-contact, weakly $(\kappa , \mu )$-space with Boeckx invariant $I_M$ satisfying $ |I_M|<1$. Then, there is a weakly $(\kappa_1 ,\mu_1 )$ structure $(M,\eta , \xi , \phi_1, g_1 )$ where $\mu_1=2$ and $\kappa_1=\kappa +(1-\frac{\mu}{2})^2$.\et \proof Define $g_1$ by $$g_1(X,Y)=-\frac{1}{2}d\eta (X,\phi_1 Y)+\eta (X)\eta (Y).$$ We will show that $g_1$ is a Riemannian metric adapted to $\phi_1$ and $\eta$, i.e. $$d\eta =2g_1(.,\phi_1 .)$$ For any tangent vectors $X$ and $Y,$ $$\begin{array}{rcl}g_1(X,Y)&=&-\frac{1}{2}\frac{1}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (X,\tilde{h}Y)+\eta (X)\eta (Y)\\&=&-\frac{1}{4\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\left(d\eta (X,(2-\mu)\phi hY)+d\eta (X,2(1-\kappa )\phi Y)\right)+\\&&\eta (X)\eta (Y)\\&=&\frac{1}{2\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\left((2-\mu )g(X,hY)+2(1-\kappa )g(X,Y)\right)+\\&&\left(1-\frac{\sqrt{1-\kappa }}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\right)\eta (X)\eta (Y)\\&=&\frac{1}{2\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\left((2-\mu )g(Y,hX)+2(1-\kappa )g(Y,X)\right)+\\&&\left(1-\frac{\sqrt{1-\kappa }}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\right)\eta (Y)\eta (X)\\&=& g_1(Y,X)\end{array} $$ proving that $g_1$ is a symmetric tensor; we used the symmetry of $g$ and $h$ in the next-to-last step. To prove positivity of $g_1$, first observe that $g_1(\xi ,\xi )=1>0$. 
Next, for any $X$ in the contact distribution, $$\begin{array}{rcl}g_1(X,X)&=&-\frac{1}{2}d\eta (X,\frac{1}{2\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}((2-\mu )\phi hX+2(1-\kappa ) \phi X))\\&=&-\frac{(2-\mu )}{4\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (X,\phi hX)-\frac{(1-\kappa )}{2\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (X,\phi X)\\&=&\frac{1}{2\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}((2-\mu )g(X,hX)+2(1-\kappa )g(X,X))\end{array} $$ \begin{equation}\label{fr1}g_1(X,X)=\frac{1}{2\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}((2-\mu )g(X,hX)+2(1-\kappa )g(X,X))\end{equation} If $X\in D(\lambda )$, then (\ref{fr1}) becomes $$g_1(X,X)=\frac{1}{2\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}(2\sqrt{1-\kappa }+(2-\mu ))g(X,X)>0 $$ The last inequality follows from Lemma \ref{lemma1}, (iii). If $X\in D(-\lambda )$, then (\ref{fr1}) becomes $$g_1(X,X)=\frac{1}{2\sqrt{1-\kappa-(1-\frac{\mu}{2})^2}} (2\sqrt{1-\kappa} -(2-\mu ))g(X,X)>0$$ also following from Lemma \ref{lemma1}, (iii). We now prove that $g_1$ is an adapted metric. Directly from the definition of $g_1$, $$\begin{array}{rcl}2g_1(X, \phi_1Y)&=&-d\eta (X,\phi^2_1Y)\\&=&d\eta (X,Y) \end{array} $$ Finally, we show that the structure $(M,\eta ,\xi ,\phi_1, g_1)$ is a weakly $(\kappa_1 ,2 )$-structure. By Proposition \ref{prop5}, (\ref{phi11}), the positive eigenvalue of $h_1$ is $$\lambda_1=\sqrt{1-I_M^2}\lambda =\sqrt{(1-\kappa )(1-I_M^2)}=\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}.$$ Since $(\eta ,\xi , \phi_1, g_1)$ is a contact metric structure, identity (\ref{bl1}) of Proposition \ref{prop33} holds. 
$$\overline{\nabla}_\xi h_1=\phi_1-\phi_1l_1-\phi_1h_1^2.$$ For any tangent vector field $X$, one has $$\phi_1X-\phi_1l_1X-\phi_1h_1^2X=(\overline{\nabla}_\xi h_1)X$$ $$\begin{array}{rcl}\phi_1X-\phi_1l_1X-\lambda_1^2\phi_1X&=&\overline{\nabla}_\xi (h_1X)-h_1\overline{\nabla}_\xi X\\&=&\overline{\nabla}_{h_1X}\xi +[\xi , h_1X]-h_1(\overline{\nabla}_X\xi +[\xi ,X])\\&=&-\phi_1h_1X-\phi_1h_1^2X+(L_\xi h_1)X+h_1[\xi ,X]\\&&-h_1(-\phi_1X-\phi_1h_1X+[\xi ,X])\\ \phi_1X-\phi_1l_1X-\lambda_1^2\phi_1X&=&-2\phi_1h_1X-2\lambda_1^2\phi_1X+(L_\xi h_1)X \end{array} $$ Applying $\phi_1$ on both sides of the above identity, one has $$\phi_1^2X+l_1X-\lambda_1^2\phi_1^2X=2h_1X-2\lambda_1^2\phi_1^2X+\phi_1(L_\xi h_1)X$$ Solving for the tensor field $l_1$ gives \begin{equation}\label{m2}l_1X=2h_1X-(1+\lambda_1^2)\phi_1^2X+(\phi_1L_\xi h_1)X\end{equation} From Proposition \ref{prop5}, we know $L_\xi h_1=\sqrt{1-I_M^2}L_\xi h$, and $L_\xi h=(\mu -2)h\phi +2(1-\kappa )\phi$ from Proposition \ref{prop4}. Also $\phi_1=\frac{1}{2\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}(L_\xi h)$. A direct calculation shows that $$\begin{array}{rcl} \phi_1L_\xi h_1&=&\frac{\sqrt{1-I_M^2}}{2\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}(L_\xi h)^2\\ &=&\frac{\sqrt{1-I_M^2}}{2\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}(1-\kappa )[4(1-\kappa )-(\mu -2)^2]\phi^2\\&=&\frac{1}{2}[4(1-\kappa )-(\mu -2)^2]\phi^2\end{array} $$ Inserting this into identity (\ref{m2}), we get: $$\begin{array}{rcl}l_1X&=&2h_1X-(1+\lambda_1^2)\phi_1^2X+\frac{1}{2}[4(1-\kappa )-(2-\mu )^2]\phi^2X\\&=&2h_1X+\left(1+1-\kappa -(1-\frac{\mu}{2})^2\right)(X-\eta (X)\xi )-\\&&\left(2(1-\kappa )-\frac{(2-\mu )^2}{2}\right)(X-\eta (X)\xi )\\&=&2h_1X+\left(\kappa +\frac{(2-\mu )^2}{4}\right)(X-\eta (X)\xi ) \end{array} $$ which is the $(\kappa_1, \mu_1)$ condition with $\mu_1=2$ and $\kappa_1=\kappa +\frac{(2-\mu)^2}{4}$. $\qed$
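The coefficient bookkeeping in the last computation can be verified numerically (an independent sketch, not part of the proof): with $\lambda_1^2=1-\kappa -(1-\frac{\mu}{2})^2$, the combination $1+\lambda_1^2-2(1-\kappa )+\frac{(2-\mu )^2}{2}$ must equal $\kappa_1=\kappa +\frac{(2-\mu )^2}{4}$, and $|I_M|<1$ forces $\kappa_1<1$, so the new structure is again non-K-contact:

```python
import math
import random

random.seed(1)
for _ in range(100):
    kappa = random.uniform(-2, 0.99)
    lam = math.sqrt(1 - kappa)
    # choose mu with |I_M| < 1, i.e. (1 - mu/2)^2 < 1 - kappa
    mu = 2 - 2 * lam * random.uniform(-0.99, 0.99)
    I = (1 - mu / 2) / lam                      # Boeckx invariant
    lam1_sq = 1 - kappa - (1 - mu / 2)**2
    # identity used for lambda_1: lambda_1^2 = (1 - kappa)(1 - I_M^2)
    assert abs(lam1_sq - (1 - kappa) * (1 - I * I)) < 1e-9
    kappa1 = kappa + (2 - mu)**2 / 4
    # coefficient of (X - eta(X) xi) in the final array
    coeff = 1 + lam1_sq - 2 * (1 - kappa) + (2 - mu)**2 / 2
    assert abs(coeff - kappa1) < 1e-9
    assert kappa1 < 1            # the (kappa_1, 2) structure stays non-K-contact
```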
\section{Introduction} Among the $J^P = \frac{3}{2}^+$ decuplet states only the $\Omega^-$ decays weakly into nonleptonic channels. The allowed two-body decay modes to the lowest-lying baryon octet are $\Omega^- \rightarrow \Lambda K^-, \Omega^- \rightarrow \Xi^0 \pi^-$ and $\Omega^- \rightarrow \Xi^- \pi^0$. The pertinent decay amplitudes have been calculated within the framework of heavy baryon chiral perturbation theory at tree level \cite{Jen} and also at one-loop order including the lowest nonanalytic corrections.\cite{EMS} In these investigations the authors conclude that the decay amplitudes are well described by values for the weak parameters of the Lagrangian determined from a fit to the s-wave amplitudes of the nonleptonic decays of the octet baryons. But the same values inserted into the expressions for the p-wave amplitudes of the nonleptonic hyperon decays do not allow a good description of the experimental data.\cite{Jen} Therefore, it is questionable whether the chosen values for the weak parameters are appropriate. An intriguing approach for nonleptonic hyperon decays was examined by Le Yaouanc et al., who assert that a reasonable fit for both s- {\it and} p-waves can be provided by appending pole contributions from $SU(6)$ (70,$1^-$) states to the s-wave amplitudes.\cite{LeY} Their calculations were performed in a simple constituent quark model and appeared to be able to provide a resolution of the s- and p-wave dilemma. In a previous work \cite{BH1} we have studied this approach within a chiral framework and have shown that it is indeed possible to find a simultaneous fit to both s- and p-wave hyperon decay amplitudes if contributions from the lowest-lying 1/2$^-$ and 1/2$^+$ baryon octet resonant states are included in the formalism. Other weak processes involving hyperons are, e.g., the radiative-nonleptonic hyperon decays. 
The primary problem of radiative hyperon decay has been to understand the large negative value found experimentally for the asymmetry parameter in polarized $\Sigma^+ \rightarrow p \gamma$ decay.\cite{pdg} The difficulty here is associated with the restrictions posed by Hara's theorem, which requires the vanishing of this asymmetry in the $SU(3)$ limit.\cite{Har} Recent work involving the calculation of chiral loops has also not led to a resolution, although slightly larger asymmetries can be accommodated.\cite{Neu} In \cite{BH2} we examined the contribution of 1/2$^-$ and 1/2$^+$ baryon intermediate states to radiative hyperon decay within the framework of chiral perturbation theory. We obtained reasonable predictions for the decay amplitudes and a significant negative value for the $\Sigma^+ \rightarrow p \gamma$ asymmetry as a very natural result of this picture, even though Hara's theorem is satisfied. Thus, the inclusion of spin-1/2 resonances provides a reasonable explanation of the importance of higher-order counterterms and gives a satisfactory picture of both radiative and nonradiative nonleptonic hyperon decays. In the present paper we extend the discussion to weak nonleptonic $\Omega^-$ transitions and investigate the validity of this approach for these decays. In the next Section we introduce the effective weak and strong Lagrangian including resonant states and evaluate the pole diagram contributions of both ground state baryons and resonant states to nonleptonic $\Omega^-$ decay. Numerical results are presented in Sec. 3, and in Sec. 4 we conclude with a short summary. In the Appendix, we determine the strong couplings of the decuplet to the spin-1/2 resonances by a fit to the strong decays of these spin-1/2 resonances. 
\section{Weak nonleptonic $\Omega^-$ decay} There are three two-body decay modes of the $\Omega^-$ to the ground state baryon octet: $\Omega^- \rightarrow \Lambda K^-, \Omega^- \rightarrow \Xi^0 \pi^-$ and $\Omega^- \rightarrow \Xi^- \pi^0$. Phenomenologically, the matrix elements of these decays can each be expressed in terms of a parity-conserving p-wave amplitude $A_{ij}^{(P)}$ and a parity-violating d-wave amplitude $A_{ij}^{(D)}$ \begin{equation} {\cal A}( \Omega^- \rightarrow B_i \, \phi_j) = \bar{u}_{B_i} \Big\{ \, A_{ij}^{(P)} q_\mu + \, A_{ij}^{(D)}\gamma_5 q_\mu \Big\} u_{\Omega^-}^\mu , \end{equation} where $B_i$ and $\phi_j$ are the ground state baryon and Goldstone boson, respectively, and $q_\mu$ is the four-momentum of the outgoing kaon or pion. The underlying strangeness-changing Hamiltonian transforms under $SU(3)_L \times SU(3)_R$ as $(8_L, 1_R) \oplus (27_L,1_R)$ and, experimentally, the octet piece dominates over the 27-plet by a factor of twenty or so in nonleptonic hyperon decay. In the case of $\Omega^-$ decay the corresponding octet dominance contribution implies the relation \begin{equation} {\cal A}(\Omega^- \rightarrow \Xi^0 \pi^-) - \sqrt{2} {\cal A}(\Omega^- \rightarrow \Xi^- \pi^0) =0 \end{equation} which holds both for p- and d-waves. The experimentally measured ratio of the decay widths --- $\Gamma(\Omega^- \rightarrow \Xi^0 \pi^-)/ \Gamma(\Omega^- \rightarrow \Xi^- \pi^0) = 2.65 \pm 0.20$ --- is in disagreement with the isospin prediction of 2. This indicates that the $\Delta I = 3/2$ amplitude in $\Gamma(\Omega^- \rightarrow \Xi \pi)$ may be significantly larger than in nonleptonic hyperon decay \cite{TV} and could signal a violation of the $\Delta I = 1/2$ rule for the $\Omega^-$ decay.\cite{CG} We do not study this issue here and have, therefore, neglected the 27-plet contribution. 
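To make the comparison with the measured ratio concrete, one can fold the octet relation above into the two-body phase space, assuming p-wave dominance with the standard factor $|{\bf q}|^3(E_B+M_B)$. The sketch below uses PDG-style masses in MeV as outside inputs and a common reduced amplitude; it is an illustration, not part of the analysis:

```python
import math

def q_cm(M, m1, m2):
    """Decay momentum for M -> m1 m2 (all masses in MeV)."""
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

# Assumed PDG-style masses (MeV): Omega-, Xi0, Xi-, pi-, pi0
M_Om, M_Xi0, M_Xim, m_pim, m_pi0 = 1672.45, 1314.86, 1321.71, 139.57, 134.98

def pwave_rate(MB, mphi, A):
    """p-wave part of the two-body rate; overall constants cancel in the ratio."""
    q = q_cm(M_Om, MB, mphi)
    EB = math.sqrt(MB**2 + q**2)
    return q**3 / (12 * math.pi * M_Om) * abs(A)**2 * (EB + MB)

A = 1.0                          # common reduced amplitude (arbitrary units)
ratio = pwave_rate(M_Xi0, m_pim, math.sqrt(2) * A) / pwave_rate(M_Xim, m_pi0, A)
print(round(ratio, 3))           # close to 2.08
```

Even with exact phase space, the octet piece alone gives a ratio near $2.1$ under these assumptions, so the excess in the measured value $2.65 \pm 0.20$ cannot be attributed to kinematics.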
We prefer not to work in the isospin limit, however, and do not use this isospin relation to eliminate one of the decay amplitudes but rather attempt to find a simultaneous fit to the three decay modes of the $\Omega^-$. The purpose of this work is to study the role of spin-1/2 resonances in $\Omega^-$ decays. To this end, it is sufficient to work at tree level. This is also necessary because the calculations \cite{BH1, BH2} which provide values for the weak parameters involved in these decays have been performed at tree level, so that it would be inconsistent to use these values for the weak parameters in a loop calculation for the $\Omega^-$ decays. In the rest frame of the $\Omega^-$ the amplitudes are related to the decay width via \begin{equation} \label{width} \Gamma = \frac{1}{12 \pi} \frac{|{\bf q}|}{M_\Omega} ( E_\phi^2- m_\phi^2) \Big[ |A^{(P)}|^2 (E_B + M_B ) + |A^{(D)}|^2 (E_B - M_B ) \Big] \end{equation} with ${\bf q}$ being the three-momentum of the outgoing pseudoscalar. In previous work the calculation of the d-wave amplitudes $A^{(D)}$ has generally been neglected, since their contribution to the decay width is suppressed by kinematical factors. In order to evaluate the decay parameters, on the other hand, one {\it must} include the d-waves. Thus, e.g., the asymmetry parameter is \begin{equation} \alpha = \frac{2 \mbox{Re} ( A^{(P)*} \bar{A}^{(D)} )}{ |A^{(P)}|^2 + |\bar{A}^{(D)}|^2 } \end{equation} with $\bar{A}^{(D)} = [( E_B-M_B)/(E_B+M_B)]^{1/2} A^{(D)}$.\cite{pdg} \subsection{The effective Lagrangian} The effective Lagrangian can be decomposed into a strong and weak component \begin{equation} {\cal L}_{\mbox{eff}} = {\cal L}^{(S)} + {\cal L}^{(W)}. \end{equation} As mentioned above, since we will be using results from our previous work, which considered the role of these resonances in nonleptonic hyperon decay at tree level, we restrict ourselves to tree level for the present study.
We first consider the Lagrangian in the absence of spin-1/2 resonances. The strong part consists of the free kinetic Lagrangians of the decuplet, the ground state baryon octet and the Goldstone bosons together with an interaction term \begin{equation} {\cal L}^{(S)} = {\cal L}_{kin} + {\cal L}_{\Delta B \phi} . \end{equation} The kinetic component reads \begin{eqnarray} {\cal L}_{kin} &=& i \, \mbox{tr} \big( \bar{B} \gamma_{\mu} [ D^{\mu} , B] \big) - \mnod \, \mbox{tr} \big( \bar{B} B \big) + \frac{F_\pi^2}{4} \, {\rm tr} \big( u_{\mu} u^{\mu}\big) + \frac{F_\pi^2}{4} \, {\rm tr} \big( \chi_+ \big) \nonumber \\ &+& \bar{\Delta}^\alpha \bigg[ (-i \partial \!\!\!/ + M_\Delta) g_{\alpha \beta} + i (\gamma_\alpha \partial_\beta + \gamma_\beta \partial_\alpha ) - i \gamma_\alpha \partial \!\!\!/ \gamma_\beta - M_\Delta \gamma_\alpha \gamma_\beta \bigg] \Delta^\beta \end{eqnarray} with $\mnod$ being the octet baryon mass in the chiral limit. The pseudoscalar Goldstone fields ($\phi = \pi, K, \eta$) are collected in the $3 \times 3$ unimodular, unitary matrix $U(x)$, \begin{equation} U(\phi) = u^2 (\phi) = \exp \lbrace 2 i \phi / F_\pi \rbrace \qquad ,\qquad u_{\mu} = i u^\dagger \nabla_{\mu} U u^\dagger \end{equation} where $F_\pi \simeq 92.4$ MeV is the pion decay constant, \begin{equation} \phi = \frac{1}{\sqrt{2}} \left( \begin{array}{ccc} \frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & \pi^+ & K^+ \\ \pi^- & -\frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & K^0 \\ K^- & \bar{K}^0 & -\frac{2}{\sqrt{6}} \eta \end{array} \right) \end{equation} represents the contraction of the pseudoscalar fields with the Gell-Mann matrices and $B$ is the standard $SU(3)$ matrix representation of the low-lying spin-1/2 baryons $( N, \Lambda, \Sigma, \Xi)$. Here $\chi_+ = u^\dagger \chi u^\dagger + u \chi^\dagger u$ is proportional to the quark mass matrix ${\cal M} = {\rm diag}(m_u,m_d,m_s)$ via $\chi = 2 B {\cal M}$.
Also, $B = - \langle 0 | \bar{q} q | 0 \rangle / F_\pi^2$ is the order parameter of the spontaneous chiral symmetry breaking, and we assume $B \gg F_\pi$. The propagator of the spin-3/2 fields (denoted generically by $\Delta$) is given by \begin{equation} G_{\beta \delta} (p) = -i\frac{p \!\!/ + M_\Delta}{p^2-M_\Delta^2} \, \biggl( g_{\beta \delta} - \frac{1}{3} \gamma_\beta \gamma_\delta - \frac{2 p_\beta p_\delta}{3M_\Delta^2} + \frac{p_\beta \gamma_\delta - p_\delta \gamma_\beta}{3 M_\Delta} \, \biggr) \, \, \, , \end{equation} with $M_\Delta = 1.38$~GeV being the average decuplet mass. The interaction Lagrangian between the spin-3/2 fields, the baryon octet and the Goldstone bosons reads \begin{equation} {\cal L}_{\Delta B \phi} = \frac{{\cal C}}{2} \, \biggl\{ \bar{\Delta}^{\mu ,abc} \, \Theta_{\mu \nu} (Z) \, (u^\nu)_a^i \, B_b^j \, \epsilon_{cij}- \bar{B}^b_i \, (u^\nu)^a_j \, \Theta_{\nu \mu} (Z) \, {\Delta}^\mu_{abc} \,\epsilon^{cij}\, \biggr\} \, \, \, , \end{equation} where $a, b, \ldots , j$ are SU(3)$_f$ indices and the coupling constant $1.2 < {\cal C} < 1.8 $ has been determined from the decays $\Delta \to B \pi$.\cite{JM} In our application, we use the mean value ${\cal C}=1.5$. The Dirac matrix operator $\Theta_{\mu \nu} (Z)$ is given in general by \begin{equation} \Theta_{\mu \nu} (Z) = g_{\mu \nu} - \biggl(Z + \frac{1}{2} \biggr) \, \gamma_\mu \, \gamma_\nu \, \, \, \, . \label{theta} \end{equation} However, the off-shell parameter $Z$ does not contribute at tree level because of the subsidiary condition $\gamma_\mu u^\mu_{\Omega^-} =0$.
The weak Lagrangian can be written as \begin{equation} \label{hpi} {\cal L}^{(W)} = d \, \mbox{tr} \Big( \bar{B} \{ h_+ , B\} \Big) + f \, \mbox{tr} \Big( \bar{B} [ h_+ , B ] \Big) + \frac{F_\pi^2}{4} \, h_{\pi} \, {\rm tr} \Big( h_+ u_{\mu} u^{\mu}\Big) + \, h_c \, \bar{\Delta}^{\mu, abc} (h_+)_a^d \Delta_{\mu, dbc}, \end{equation} where we have defined \begin{equation} h_+ = u^{\dagger} h u + u^{\dagger} h^{\dagger} u \qquad , \end{equation} with $h^{a}_{b} = \delta^{a}_{2} \delta^{3}_{b}$ being the weak transition matrix. (Note that $h_+$ transforms as a matter field.) In Eq.(\ref{hpi}) the weak coupling $h_{\pi}$ is known from weak nonleptonic kaon decays --- $h_{\pi} = 3.2 \times 10^{-7}$ --- while in our previous work the LECs $d$ and $f$ have been determined from nonleptonic hyperon decay: $d=0.44 \times 10^{-7}$ GeV, $f=-0.50 \times 10^{-7}$ GeV. In \cite{BH1} we included the lowest lying spin-1/2$^+$ and 1/2$^-$ resonances in the theory and performed a tree level calculation. Integrating out the heavy degrees of freedom then provides a plausible estimate of the weak counterterms, and a satisfactory fit for both s- and p-waves was achieved. Since we herein apply this scheme for the related weak $\Omega^-$ decays we will use the values for $d$ and $f$ from \cite{BH1}. The parameter $h_c$ does not appear in the tree-level calculation of the nonleptonic hyperon decays in \cite{BH1,BH2}. We therefore consider it as a free parameter and determine its value from a fit to the weak $\Omega^-$ decays. We now proceed to append the contribution from spin-1/2 resonances.
In \cite{LeY,LeY1} it was argued that in a simple constituent quark model inclusion of the lowest lying spin 1/2$^-$ octet from the (70,1$^-$) multiplet leads to significant improvements in both radiative and nonleptonic hyperon decays, and we confirmed in two recent calculations \cite{BH1,BH2} that there indeed exist significant contributions from such resonances to the nonleptonic hyperon decays in the framework of chiral perturbation theory. We begin therefore with the inclusion of the octet of spin-parity 1/2$^-$ states, which includes the well-established states $N(1535)$ and $\Lambda(1405)$. The remaining predicted 1/2$^-$ states correspond to a number of less well-established resonances in the same mass range --- cf. \cite{LeY} and references therein. In order to include such resonances one begins by writing down the most general Lagrangian at lowest order which exhibits the same symmetries as the underlying theory, i.e. Lorentz invariance and chiral symmetry. For the strong part we require invariance under $C$ and $P$ transformations separately, while the weak piece is invariant under $CPS$ transformations, where the transformation $S$ interchanges down and strange quarks in the Lagrangian. We will work in the $CP$-conserving limit so that all LECs are real, and denote the 1/2$^-$ octet by $R$. Another important multiplet of excited states is the octet of Roper-like spin-1/2$^+$ fields. While it was argued in \cite{JM} that these play no significant role, a more recent study seems to indicate that one cannot neglect contributions from such states to, e.g., decuplet magnetic moments.\cite{BaM} Also in \cite{BH1,BH2} we found that inclusion of these states was essential both in nonleptonic and radiative hyperon decay to achieve experimental agreement. It is thus important to also include the contribution of these baryon resonances here.
The Roper octet, which we denote by $B^*$, consists of the $N^*(1440)$, the $\Sigma^*(1660)$, the $\Lambda^*(1600)$ and the $\Xi^*(1620?)$. The resonance kinetic term is straightforward \begin{equation} {\cal L}_{kin} = i \, \mbox{tr} \Big( \bar{R} \gamma_{\mu} [ D^{\mu} , R] \Big) - M_R \, \mbox{tr} \Big( \bar{R} R \Big) +i \, \mbox{tr} \Big( \bar{B}^* \gamma_{\mu} [ D^{\mu} , B^*] \Big) - M_{B^*} \, \mbox{tr} \Big( \bar{B}^* B^* \Big) \end{equation} with $M_R$ and $M_{B^*}$ being the masses of the resonance octets in the chiral limit. The strong interaction Lagrangian relevant to the processes considered here reads \begin{eqnarray} {\cal L}_{\Delta \phi B^*/R} &=& \frac{{\cal C_{B^*}}}{2} \, \biggl\{ \bar{\Delta}^{\mu ,abc} \, \Theta_{\mu \nu} (Z) \, (u^\nu)_a^i \, B_b^{*j} \, \epsilon_{cij}- \bar{B}^{*b}_i \, (u^\nu)^a_j \, \Theta_{\nu \mu} (Z) \, {\Delta}^\mu_{abc} \,\epsilon^{cij}\, \biggr\} \nonumber \\ &+& i \frac{s_c}{2} \, \biggl\{ \bar{\Delta}^{\mu ,abc} \, \Theta_{\mu \nu} (Z) \, (u^\nu)_a^i \, \gamma_5 \, R_b^j \, \epsilon_{cij}- \bar{R}^b_i \, (u^\nu)^a_j \, \Theta_{\nu \mu} (Z) \, \gamma_5 \, {\Delta}^\mu_{abc} \,\epsilon^{cij}\, \biggr\} \nonumber \\ \end{eqnarray} and the couplings $C_{B^*}$ and $s_c$ can be determined from a fit to the strong decays of the spin-1/2 resonances to $\Delta(1232)$ --- cf. App. A. 
The corresponding weak Lagrangian is \begin{eqnarray} {\cal L}^{W} &=& d^* \Big[ \, \mbox{tr} \Big( \bar{B}^* \{h_+ , B\} \Big) + \mbox{tr} \Big( \bar{B} \{ h_+, B^* \} \Big) \: \Big] + f^* \Big[ \, \mbox{tr} \Big( \bar{B}^* [h_+ , B] \Big) + \mbox{tr} \Big( \bar{B} [ h_+, B^*] \Big) \: \Big] \nonumber \\ &+ & i w_d \Big[ \, \mbox{tr} \Big( \bar{R} \{h_+ , B\} \Big) - \mbox{tr} \Big( \bar{B} \{ h_+, R\} \Big) \: \Big] + i w_f \Big[ \, \mbox{tr} \Big( \bar{R} [h_+ , B] \Big) - \mbox{tr} \Big( \bar{B} [ h_+, R] \Big) \: \Big] \end{eqnarray} with four couplings $d^*, f^*$ and $w_d, w_f$ which have been determined from a fit to the nonleptonic hyperon decays in \cite{BH1}. There is thus only a single unknown parameter in this approach --- $h_c$ --- once the weak couplings are fixed from nonleptonic hyperon decays. The inclusion of the spin-1/2 resonant states does not lead to any new unknown parameters, so that study of the weak nonleptonic $\Omega^-$ decays provides a nontrivial check on the role of spin-1/2 resonances in weak baryon decays --- extending this approach to the case of weak decuplet decays. \subsection{Pole contributions} In this section we consider the tree diagrams which contribute to weak $\Omega^-$ decay. The graphs involving only the decuplet and the ground state baryon octet are depicted in Fig. 1. Graphs 1a and 1b contribute to the decay $\Omega^- \rightarrow \Lambda K^-$, while graphs 1b and 1c deliver contributions to the pionic decays $\Omega^- \rightarrow \Xi \pi$. (Note that diagram 1c with the weak decay of the Goldstone boson has been neglected in \cite{Jen} and \cite{EMS}.) The diagrams in Fig. 
1 contribute only to the parity-conserving p-wave amplitudes $A^{(P)}$, yielding \begin{eqnarray} A^{(P)}_{\Lambda K} & = & \frac{C}{2 \sqrt{3} F_\pi} \left( \frac{d-3f}{M_\Lambda-M_{\Xi^0}} + \frac{h_c}{M_\Omega-M_{\Xi^*}} \right) \nonumber \\ A^{(P)}_{\Xi^0 \pi^-} & = & \frac{C}{\sqrt{2} F_\pi} \left( \frac{h_c}{3(M_\Omega-M_{\Xi^*})} + \frac{h_\pi m^2_{\pi^-}}{2(m^2_{K^-}-m^2_{\pi^-})} \right) \nonumber \\ A^{(P)}_{\Xi^- \pi^0} & = & \frac{C}{2 F_\pi} \left( \frac{h_c}{3(M_\Omega-M_{\Xi^*})} + \frac{h_\pi m^2_{\pi^0}}{2(m^2_{K^0}-m^2_{\pi^0})} \right) . \end{eqnarray} Here, we have used the physical meson masses and have replaced the baryon masses in the chiral limit by their physical values, which is consistent to the order at which we are working. There are no contributions from these diagrams to the parity-violating d-wave amplitudes $A^{(D)}$, so that at this order there is in each case a vanishing asymmetry parameter $\alpha$. We now include the spin-1/2 resonances, which contribute only to the decay $\Omega^- \rightarrow \Lambda K^-$ through the diagram depicted in Fig. 2. One obtains the result \begin{eqnarray} A^{(P)}_{\Lambda K} & = & \frac{C_{B^*}}{2 \sqrt{3} F_\pi} \frac{d^*-3f^*}{M_\Lambda-M_{B^*}} \nonumber \\ A^{(D)}_{\Lambda K} & = & \frac{s_c}{2 \sqrt{3} F_\pi} \frac{w_d-3w_f}{M_\Lambda-M_R} \end{eqnarray} and there are no further contributions from the spin-1/2 resonant states. \section{Numerical results} In this section we present the results obtained by a fit to the three weak $\Omega^-$ decays. (Note that when applying Eq.(\ref{width}) in order to perform the fit, we use the physical values for the masses of the outgoing particles. Therefore, we obtain a ratio $\Gamma(\Omega^- \rightarrow \Xi^0 \pi^-)/ \Gamma(\Omega^- \rightarrow \Xi^- \pi^0)$ which is slightly different from the isospin prediction of 2. However, the full 30\% effect found experimentally cannot be accounted for within our approach.)
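The kinematics entering Eq.~(\ref{width}) and the structure of the pionic amplitudes can be made concrete in a short numerical sketch. The following Python code is purely illustrative: the formulas are those displayed above, while the numerical inputs (masses in GeV and placeholder coupling values) are assumptions for the purpose of the check, not the fitted results of this paper.

```python
import math

def two_body_kinematics(M, mB, mphi):
    """CM momentum |q| and energies for the decay M -> B(mB) + phi(mphi)."""
    lam = (M**2 - (mB + mphi)**2) * (M**2 - (mB - mphi)**2)
    q = math.sqrt(lam) / (2.0 * M)
    EB = (M**2 + mB**2 - mphi**2) / (2.0 * M)  # baryon energy
    Ephi = M - EB                              # meson energy; E_phi^2 - m_phi^2 = |q|^2
    return q, EB, Ephi

def width_and_asymmetry(M, mB, mphi, AP, AD):
    """Gamma from the width formula and alpha from the p- and d-wave amplitudes."""
    q, EB, Ephi = two_body_kinematics(M, mB, mphi)
    Gamma = (q / (12.0 * math.pi * M)) * (Ephi**2 - mphi**2) * (
        abs(AP)**2 * (EB + mB) + abs(AD)**2 * (EB - mB))
    ADbar = math.sqrt((EB - mB) / (EB + mB)) * AD
    alpha = 2.0 * (AP * ADbar) / (abs(AP)**2 + abs(ADbar)**2)  # real amplitudes assumed
    return Gamma, alpha

def AP_pionic(prefactor, hc, dM, hpi, mpi, mK):
    """Common bracket of the Xi pi p-wave amplitudes above; dM = M_Omega - M_Xi*."""
    return prefactor * (hc / (3.0 * dM) + hpi * mpi**2 / (2.0 * (mK**2 - mpi**2)))

# In the isospin limit the two pionic amplitudes obey the sqrt(2) octet relation
# exactly, since C/(sqrt(2) Fpi) = sqrt(2) * C/(2 Fpi) and the brackets coincide.
C, Fpi, hc, dM, hpi, mpi, mK = 1.5, 0.0924, 0.39e-7, 0.14, 3.2e-7, 0.14, 0.49
A_Xi0_pim = AP_pionic(C / (math.sqrt(2.0) * Fpi), hc, dM, hpi, mpi, mK)
A_Xim_pi0 = AP_pionic(C / (2.0 * Fpi), hc, dM, hpi, mpi, mK)
```

With a vanishing d-wave amplitude the asymmetry computed this way is identically zero, matching the lowest-order result above; using distinct physical pion and kaon masses in the two brackets shifts the width ratio slightly away from 2, as noted in the text.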
Our fit including the spin-1/2 resonances leads to $h_c = 0.39 \times 10^{-7}$ GeV and in units of $10^{-15}$ GeV \begin{eqnarray} \Gamma(\Omega^- \rightarrow \Lambda K^-) & = & 5.61 \qquad (5.42 \pm 0.06)\nonumber \\ \Gamma(\Omega^- \rightarrow \Xi^0 \pi^-) & = & 1.62 \qquad (1.89\pm 0.06)\nonumber \\ \Gamma(\Omega^- \rightarrow \Xi^- \pi^0) & = & 0.76 \qquad (0.69 \pm 0.03) , \end{eqnarray} where we have given the experimental numbers in brackets. (Note that the ratio $\Gamma(\Omega^- \rightarrow \Xi^0 \pi^-)/ \Gamma(\Omega^- \rightarrow \Xi^- \pi^0)$ is approximately 2.1 in our approach.) The inclusion of the spin-1/2 resonances predicts a finite decay parameter $\alpha$ for the decay $\Omega^- \rightarrow \Lambda K^-$ \begin{equation} \alpha_{\Lambda K} = -0.015 \end{equation} to be compared with an experimental value of $-0.026 \pm 0.026$, while we obtain vanishing asymmetry parameters for the two pionic decay modes which are also consistent with the experimental values $\alpha_{\Xi^0 \pi^-} = 0.09 \pm 0.14$ and $\alpha_{\Xi^- \pi^0} = 0.05 \pm 0.21$. \footnote{The results from a fit without the inclusion of the spin-1/2 resonances read in units of $10^{-15}$ GeV $\Gamma(\Omega^- \rightarrow \Lambda K) = 4.64, \Gamma(\Omega^- \rightarrow \Xi^0 \pi^-) = 0.69, \Gamma(\Omega^- \rightarrow \Xi^- \pi^0) = 0.32 $ with $h_c = 0.24 \times 10^{-7}$ GeV and vanishing asymmetry parameters in each case. Interestingly, we do not have a satisfactory fit to experiment. However, it is not really fair to compare these results to the fit including the spin-1/2 resonances, since we employ values for $d$ and $f$ which have been determined in \cite{BH1} from a fit to the nonleptonic decays after the inclusion of such states.} Our results are only indicative. A full discussion would have to include both the effects of chiral loops as well as contributions from higher resonances.
In this regard, we do not anticipate that our results will precisely reproduce the experimental values for the decay widths and the asymmetry parameters. Although it would be desirable to estimate the error of our tree level result from higher order chiral loop corrections by, e.g., calculating some typical loop contributions, it is then inconsistent to use results from our previous work which considered the role of resonances in nonleptonic hyperon decay at tree level \cite{BH1, BH2}. Therein we derived estimates on the resonance parameters at tree level which we also use in the present investigation. The inclusion of loops might change the numerical values of these parameters somewhat and, therefore, alter the contribution from resonances both in nonleptonic hyperon decay and $\Omega^-$ decay. In order to give a reliable estimate on chiral corrections in $\Omega^-$ decay, one has to consider---in addition to chiral loops for this process---also the change in the resonance contribution due to the inclusion of loops in nonleptonic hyperon decay. This is clearly beyond the scope of the present investigation. Our purpose herein is to examine whether the inclusion of spin-1/2 resonances, which worked well in both ordinary and radiative nonleptonic hyperon decay, can be extended successfully to the case of weak decuplet decays. Our results suggest that the spin-1/2 resonances also play an essential role in weak nonleptonic decuplet decay. Before leaving this section, it is interesting to note that while our results are purely phenomenological, it is possible to compare the fit value of $h_c$, which leads in our model to $\langle \Xi^{*-} | H_W | \Omega^- \rangle = 0.22 \times 10^{-7}$ GeV, with what is expected based upon quark model considerations. A constituent quark model approach to the weak $\Xi^{*-} - \Omega^-$ matrix element would include two structures --- one from the direct four-quark weak Hamiltonian and a second from the related penguin term.
However, the direct contribution to the $\Xi^{*-} - \Omega^-$ transition vanishes because of the lack of multiple $s, \bar{s}$ quarks in $H_W$, so that a non-zero value for this matrix element can only arise from the penguin term. The size of this contribution is somewhat larger than what one might expect based on model calculations with perturbatively calculated Wilson coefficients evolved to $\mu \simeq 1$ GeV, but is consistent with the expected size of such effects when scaled into the low energy region $\mu \simeq 0.2$ GeV.\cite{DGH} \section{Summary} In this work we have examined the role of spin-1/2 intermediate state resonances in weak nonleptonic $\Omega^-$ decay. This study also serves as a consistency check for the results from nonleptonic hyperon decay. In two recent papers we were able to show that these resonances play an essential role for both ordinary and radiative nonleptonic hyperon decay.\cite{BH1,BH2} A much improved agreement with the experimental values for s- and p-waves in ordinary hyperon decay is brought about even at tree level and a significant negative value for the asymmetry parameter in the radiative decay $\Sigma^+ \rightarrow p \gamma$ is obtained as a very natural result within this picture. In the present work we have shown that this approach can be extended to the case of weak decuplet decays. The pertinent Lagrangian without spin-1/2 resonances has a single unknown weak parameter once the remaining weak parameters are fixed by the nonleptonic hyperon decays. Inclusion of the spin-1/2 resonances does not lead to any additional unknown parameters, since the two strong couplings involving the decuplet can be fixed from the strong decays of the spin-1/2 resonances to the decuplet. The spin-1/2 resonances contribute only to the decay $\Omega^- \rightarrow \Lambda K^-$. We determine the unknown parameter by a least-squares fit to the three two-body decay modes of the $\Omega^-$.
One obtains satisfactory agreement with experimental data and a nonvanishing asymmetry parameter $\alpha_{\Lambda K} = - 0.015$, which lies within the experimental range --- $\alpha_{\Lambda K} = - 0.026 \pm 0.026$. Since we do not work in the isospin limit we obtain a ratio $\Gamma(\Omega^- \rightarrow \Xi^0 \pi^-)/ \Gamma(\Omega^- \rightarrow \Xi^- \pi^0) = 2.1$ which is different from the isospin prediction of 2. This ratio is measured to be approximately 2.65.\cite{pdg} This indicates that the $\Delta I = 3/2$ amplitude from the current measurement of the rates for $\Gamma(\Omega^- \rightarrow \Xi \pi)$ appears to be significantly larger than in nonleptonic hyperon decay \cite{TV} and could signal a violation of the $\Delta I = 1/2$ rule for the $\Omega^-$ decay.\cite{CG} We do not test that here and have, therefore, neglected the 27-plet contribution. Our study suggests that the approach of including spin-1/2 resonances in nonleptonic hyperon decay can be extended to weak decuplet decays giving a satisfactory picture of weak $\Omega^-$ decay. In order to make a more definite statement one should, of course, go to higher orders and include meson loops as well as the contributions from additional resonances. However, this is beyond the scope of the present investigation. \section*{Acknowledgements} This work was supported in part by the Deutsche Forschungsgemeinschaft and by the National Science Foundation.
\section{Introduction}\label{sec:intro} Composition tableaux were introduced in \cite{HLMvW-1} to define the basis of quasisymmetric Schur functions for the Hopf algebra of quasisymmetric functions. These functions are analogues of the ubiquitous Schur functions \cite{schur}, have been studied in substantial detail recently \cite{BTvW, HLMvW-1, HLMvW-2, Lauve-Mason, LMvW, TvW}, {and have consequently been the genesis of an active new branch of algebraic combinatorics discovering Schur-like bases in quasisymmetric functions \cite{AllenHallamMason, BBSSZ-0}, type $B$ quasisymmetric Schur functions \cite{JingLi, Oguz}, quasi-key polynomials \cite{AssafS, Searles} and quasisymmetric Grothendieck polynomials \cite{Monical}}. Just as Young tableaux play a crucial role in the combinatorics of Schur functions \cite{sagan, stanley-ec2}, composition tableaux are key to understanding the combinatorics of quasisymmetric Schur functions. The aim of this article is to shed light on certain enumerative aspects of composition tableaux by studying actions of the $0$-Hecke algebra of type $A$. Recall that the number of standard Young tableaux is given by the famous hook-length formula of Frame-Robinson-Thrall \cite{FRT}. While there is no known `hook-length formula' for enumerating standard composition tableaux, and the presence of large prime factors in the data suggests there might not be a simple closed formula, we hope that our article convinces the reader of other interesting enumerative aspects of composition tableaux and motivates further exploration. The following question serves as the primary motivation for this article: {Can we} characterize and enumerate the possible ordered pairs of permutations $(\sigma_1,\sigma_2)$ obtained by considering the relative order of entries in two adjacent columns of an arbitrary composition tableau? This question in the context of Young tableaux is uninteresting. 
On the other hand, it is natural with regard to composition tableaux as the relative order of entries in{, say,} column $i$ of a composition tableau $\tau$ is governed by the relative order of the entries in column $i-1$. This is because of the so-called triple condition/rule \cite[Definition 4.1]{HLMvW-1}. If the columns under consideration have the same number of entries, say $n$, then we say that the resulting pair $(\sigma_1,\sigma_2)$ is a \emph{compatible pair} of permutations in $\sgrp{n}$. We remark here that the assumption that the two columns possess the same number of entries is mild as this can always be achieved by supplementing the composition tableau under consideration with $0$s as suggested in \cite[Definition 4.1 part 3)]{HLMvW-1}. One of our main results is that the number of compatible pairs of permutations in $\sgrp{n}$ is $(n+1)^{n-1}$. To study the combinatorics of an arbitrary pair of adjacent columns in a composition tableau, we introduce a generalization thereof that we call permuted composition tableaux. Just as semistandard composition tableaux are intimately tied to semistandard augmented fillings \cite{HLMvW-1}, our permuted composition tableaux are connected to permuted basement semistandard augmented fillings introduced by Haglund-Mason-Remmel \cite{HMR}. Given our interest in the relative order of entries in columns, the authors' previous work \cite{TvW} hints at constructing an action of the $0$-Hecke algebra $H_{n}(0)$ on permuted composition tableaux. To this end, we generalize the operators from \cite{TvW} to our current setting and also obtain analogues of the results therein. To answer our original question, we focus on standard permuted composition tableaux of shape $(2^n)$. We refer to such tableaux as $2$-columned tableaux. The $0$-Hecke action that we construct naturally leads to a deeper study of descents in the first column of $2$-columned tableaux, and these descents come in four flavors.
We construct a bijection between the set of $2$-columned tableaux and labeled binary trees that maps certain descent statistics on tableaux to ascent-descent statistics on labeled binary trees. Note that the ascent-descent statistics on labeled binary trees were first studied in unpublished work by Gessel, and functional equations for their distribution were established by Kalikow \cite{Kalikow} and Drake \cite{Drake-thesis}. Given the striking observations of Gessel \cite{Gessel-Oberwolfach} relating the distribution of these statistics to enumerative questions in the theory of hyperplane arrangements, much work has been done recently \cite{Bernardi, Corteel-Forge-Ventos, Forge, GesselGriffinTewari, Tewari}. That said, we will not focus on the hyperplane arrangements perspective in this article. \smallskip \noindent {\bf Outline of the article.} The paper essentially has two halves: the first half focuses on studying the combinatorics of permuted composition tableaux and defining a $0$-Hecke action on them. The second half concerns itself with enumerating and characterizing compatible pairs by way of Dyck paths and binary trees. In Section \ref{sec:prelims}, we introduce our main combinatorial objects, and develop most of the notation we need. Our main result here is Theorem~\ref{thm:generalized shift map} that makes explicit the link between reverse tableaux and permuted composition tableaux. In Section~\ref{sec:0-Hecke}, we construct a $0$-Hecke action on the set of permuted composition tableaux of a given shape. Section~\ref{sec:ascents-descents-trees} studies certain descents in $2$-columned tableaux and relates them to ascents-descents in labeled binary trees via bijections that use labeled Dyck paths as intermediate objects. Our main result in this section is Theorem~\ref{thm:stat-preserving bijection}. Section~\ref{sec:allowable acyclic} gives a characterization for compatible pairs in terms of pattern-avoidance. 
We demonstrate that compatible pairs are essentially allowable pairs introduced in \cite{AtkinsonThiyagarajah} and investigated further {in \cite{ALW, GilbeyKalikow, Hamel}}. To answer our original question, we show in Corollary~\ref{cor:existence of srcts} that every allowable pair can be obtained by standardizing the entries in the last two columns of some standard composition tableau. \section{Preliminaries}\label{sec:prelims} We denote the set of positive integers by $\mathbb{N}$. Given $n\in \mathbb{N}$, we define $[n]$ to be the set of the first $n$ positive integers $\{1,\dots, n\}$. The set of all words in the alphabet $\mathbb{N}$ is denoted by $\mathbb{N}^{*}$. The empty word is denoted by $\varepsilon$. A \emph{ud-word} is a word in the alphabet $\{\upstep,\downstep\}$. A related notion is that of a \emph{labeled ud-word}, which is a word in the alphabet $\{{\upstep}_i,{\downstep}_i\;|\; i\in \mathbb{N}\}$. There is a natural projection $\chi$ from the set of labeled ud-words to the set of ud-words defined by mapping ${\upstep}_i$ to $\upstep$ and ${\downstep}_{i}$ to $\downstep$ for all $i\in \mathbb{N}$. \subsection{The symmetric group $\sgrp{n}$}\label{subsec:symmetric group} The symmetric group $\sgrp{n}$ is generated by the elements $s_1,\dots, s_{n-1}$ subject to the following relations \begin{eqnarray*} s_i^2&=&1 \text{ for } 1\leq i\leq n-1\\s_{i}s_{i+1}s_{i}&=&s_{i+1}s_is_{i+1} \text { for } 1\leq i\leq n-2\\s_{i}s_{j}&=&s_js_{i} \text{ if } \lvert i-j\rvert \ge 2. \end{eqnarray*} In the first relation above, the $1$ denotes the identity element in $\sgrp{n}$. In practice, one identifies $\sgrp{n}$ with the group of all bijections from $[n]$ to itself, otherwise known as \emph{permutations}, by setting $s_i$ to be the \emph{simple transposition} interchanging $i$ and $i+1$ for $1\leq i\leq n-1$.
An expression for $\sigma\in \sgrp{n}$ of the form $s_{i_1}\cdots s_{i_p}$ that uses the minimal number of simple transpositions is called a \emph{reduced word} for $\sigma$. We write permutations in one-line notation and, on occasion, treat them as words in $\mathbb{N}^{*}$. Given a word $w=w_1\cdots w_n\in \mathbb{N}^{*}$, we define the \emph{standardization} of $w$, denoted by $\stan(w)$, to be the unique permutation $\sigma \in \sgrp{n}$ such that $\sigma(i) > \sigma (j)$ if and only if $w_i > w_j$ for $1\leq i<j \leq n$. For example, $\stan(3122)=4123$. An \emph{inversion} of $\sigma\in\sgrp{n}$ is an ordered pair $(p,q)$ such that $1\leq p<q\leq n$ and $\sigma(p)>\sigma(q)$. The set of inversions in $\sigma$ is denoted by $\Inv(\sigma)$. This given, define the (left) \emph{weak Bruhat order} $\leq_{L}$ on $\sgrp{n}$ by $\sigma_1 \leq_{L} \sigma_2$ if and only if $\Inv(\sigma_1)\subseteq \Inv(\sigma_2)$ \cite[Proposition 3.1.3]{bjorner-brenti}. We denote the cover relation in the weak Bruhat order by $\prec_L$. The symmetric group $\sgrp{n}$ endowed with the weak Bruhat order $\leq_L$ inherits the structure of a graded lattice with a unique minimum element (given by the identity permutation) and a unique maximum element (given by the reverse of the identity permutation). To emphasize the dependence on $n$, the identity permutation in $\sgrp{n}$ and its reverse will henceforth be denoted by $\epsilon_n$ and $\bar{\epsilon}_n$ respectively. \subsection{The \texorpdfstring{$0$-Hecke algebra $H_n(0)$}{0-Hecke algebra}}\label{subsec:reps} The $0$-Hecke algebra $H_n(0)$ is the $\mathbb{C}$-algebra generated by the elements $T_1,\ldots,T_{n-1}$ subject to the following relations \begin{eqnarray*} T_i^2&=&T_i \text{ for } 1\leq i\leq n-1\\T_{i}T_{i+1}T_{i}&=&T_{i+1}T_iT_{i+1} \text { for } 1\leq i\leq n-2\\T_{i}T_{j}&=&T_jT_{i} \text{ if } \lvert i-j\rvert \ge 2. 
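To make the definitions above concrete, here is a small Python sketch of standardization, inversion sets, and the containment criterion for the left weak Bruhat order; permutations and words are represented in one-line notation as lists, following the definitions in the text.

```python
def standardize(w):
    """Standardization of a word w: ranks entries, breaking ties left to right."""
    order = sorted(range(len(w)), key=lambda i: (w[i], i))
    sigma = [0] * len(w)
    for rank, i in enumerate(order, start=1):
        sigma[i] = rank
    return sigma

def inversions(sigma):
    """Inversion set {(p, q) : p < q and sigma(p) > sigma(q)}, positions 1-indexed."""
    n = len(sigma)
    return {(p + 1, q + 1) for p in range(n) for q in range(p + 1, n)
            if sigma[p] > sigma[q]}

def weak_leq(sigma1, sigma2):
    """sigma1 <=_L sigma2 iff Inv(sigma1) is contained in Inv(sigma2)."""
    return inversions(sigma1) <= inversions(sigma2)
```

For example, `standardize([3, 1, 2, 2])` returns `[4, 1, 2, 3]`, recovering the example $\stan(3122)=4123$ above, and the identity permutation is weakly below every permutation since its inversion set is empty.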
\end{eqnarray*} If $s_{i_1}\cdots s_{i_p}$ is a reduced word for a permutation $\sigma\in \sgrp{n}$, then we define an element $T_{\sigma}\in H_{n}(0)$ as \begin{eqnarray*} T_{\sigma}=T_{i_1}\cdots T_{i_p}. \end{eqnarray*} The Word Property \cite[Theorem 3.3.1]{bjorner-brenti} of $\sgrp{n}$ implies that $T_{\sigma}$ is independent of the choice of reduced word. Moreover, the set $\{T_{\sigma}\;|\; \sigma\in \sgrp{n}\}$ is a linear basis for $H_{n}(0)$. Thus, the dimension of $H_{n}(0)$ is $n!$. In general, $H_{n}(0)$ is not semisimple and possesses a rich combinatorial representation theory \cite{carter, norton}. It is intimately related with the Hopf algebras of quasisymmetric functions and noncommutative symmetric functions respectively \cite{DKLT}, in the same way as symmetric group representation theory is intimately connected with the Hopf algebra of symmetric functions. More information about $H_{n}(0)$ and its representations can be found in \cite{carter,mathas}, and contemporary results can be found in \cite{BBSSZ, huang-1, huang-2, huang-3, Huang-Rhoades, Konig}. Our interest in the $0$-Hecke algebra stems from the authors' previous work in the context of providing a representation-theoretic interpretation for quasisymmetric Schur functions \cite{TvW}. \subsection{Compositions, partitions and diagrams}\label{subsec:compositions et cetera} A \emph{composition} $\alpha$ of a positive integer $n$ is a finite ordered list of positive integers $(\alpha_1,\dots, \alpha_k)$ satisfying $\sum _{i=1}^k\alpha_i=n$. The $\alpha_i$ are called the \emph{parts} of $\alpha$ and their sum is called the \emph{size} of $\alpha$ (denoted by $|\alpha|$). The number of parts is called the \emph{length} of $\alpha$ and is denoted by $\ell(\alpha)${, and we often denote $m$ consecutive parts $j$ by $j^m$}. The \emph{empty composition}, denoted by $\varnothing$, is the unique composition of size and length $0$. 
If $\alpha$ is a composition of size $n$, we denote this by $\alpha\vDash n$. From $\alpha=(\alpha_1,\dots,\alpha_k)\vDash n$, we can obtain another composition $\hat{\alpha}\vDash 2n$ of length $n$ by first incrementing each part of $\alpha$ by $1$ and subsequently adding $n-k$ parts equaling $1$ at the end. For instance, if $\alpha=(2,1,3)$, then $\hat{\alpha}=(3,2,4,1,1,1)$. If $\alpha=(\alpha_1,\dots,\alpha_k)\vDash n$ is such that $\alpha_1\geq \dots \geq \alpha_k$, then we say that $\alpha$ is a \emph{partition} of $n$ and denote this by $\alpha\vdash n$. The partition obtained by sorting the parts of $\alpha$ in weakly decreasing order is denoted by $\widetilde{\alpha}$. We depict a composition $\alpha=(\alpha_1,\dots,\alpha_k)\vDash n$ by its \emph{composition diagram}, also referred to as $\alpha$, which is a left-justified array of $n$ cells where the $i$-th row from the top has $\alpha_i$ cells. The \emph{augmented composition diagram} of $\alpha$, denoted by $\hat{\alpha}$, is the composition diagram of $\hat{\alpha}$. We refer to the first column of $\hat{\alpha}$ as its \emph{basement}. Note that if $\alpha$ is a partition, then the composition diagram of $\alpha$ is the Young diagram of $\alpha$ in English notation. \vspace*{-1mm} \subsection{Tableaux}\label{subsec:tableaux} Given a partition $\lambda\vdash n$, a \emph{reverse tableau} (henceforth abbreviated to $\ensuremath{\operatorname{RT}}$) $T$ of \emph{shape} $\lambda$ is a filling of the Young diagram of $\lambda$ with positive integers such that the rows decrease weakly from left to right, whereas the columns decrease strictly from top to bottom. Let $\ensuremath{\operatorname{RT}}(\lambda)$ denote the set of reverse tableaux of shape $\lambda$ all of whose entries are weakly less than $|\lambda|$. If $T\in \ensuremath{\operatorname{RT}}(\lambda)$ is such that its entries are all distinct, then we say that $T$ is \emph{standard}. 
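The constructions $\hat{\alpha}$ and $\widetilde{\alpha}$ on compositions are mechanical; a short sketch (Python, compositions as tuples of parts, names ours) makes the bookkeeping explicit:

```python
def hat(alpha):
    """hat(alpha): increment every part by 1, then append n - k parts
    equal to 1, where n = |alpha| and k = l(alpha). The result is a
    composition of 2n with exactly n parts."""
    n, k = sum(alpha), len(alpha)
    return tuple(a + 1 for a in alpha) + (1,) * (n - k)

def tilde(alpha):
    """The partition obtained by sorting the parts weakly decreasing."""
    return tuple(sorted(alpha, reverse=True))
```

With $\alpha=(2,1,3)$ this reproduces the example $\hat{\alpha}=(3,2,4,1,1,1)$ above.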
We denote the set of standard $\ensuremath{\operatorname{RT}}$s of shape $\lambda$ by $\ensuremath{\operatorname{SRT}}(\lambda)$. Additionally, we refer to a standard $\ensuremath{\operatorname{RT}}$ as an $\ensuremath{\operatorname{SRT}}$. Given $\alpha\vDash n$ and $\sigma\in \sgrp{\ell(\alpha)}$, we define a \emph{permuted composition tableau} (henceforth abbreviated to $\mathrm{PCT}$) of \emph{shape} $\alpha$ and \emph{type} $\sigma$ to be a filling $\tau$ of the composition diagram $\alpha$ with positive integers such that the following conditions hold. \begin{enumerate} \item The entries in the first column are all distinct and the standardization of the word obtained by reading the first column from top to bottom is $\sigma$. \item The entries along the rows decrease weakly when read from left to right. \item For any configuration in $\tau$ of the type in Figure~\ref{fig:triple configuration}, if $a\geq c$ then $b>c$. We call this condition the \emph{triple condition}. \end{enumerate} \begin{figure}[ht] \centering \begin{align*} \ytableausetup{mathmode,boxsize=1.25em} \begin{ytableau} a & b\\ \none & \none[\vdots]\\ \none & c \end{ytableau} \end{align*} \caption{A triple {configuration.}} \label{fig:triple configuration} \end{figure} The first condition and the triple condition guarantee that the entries in any given column of a $\mathrm{PCT}$ are all distinct. Figure~\ref{fig:prct} gives a $\mathrm{PCT}$ of shape $(1,3,2,4)$ and type $1324$. \begin{remark} Note that our version of the triple condition follows \cite[Definition 4.2.6]{LMvW}. The version in \cite[Definition 4.1 part 3)]{HLMvW-1}, though stated differently, is equivalent. It is the latter that we employ in Section~\ref{sec:allowable acyclic}. \end{remark} Let $\mathrm{PCT}^{\sigma}(\alpha)$ denote the set of all $\mathrm{PCT}$s of shape $\alpha$ and type $\sigma $ all of whose entries are weakly less than $|\alpha|$. 
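The three defining conditions of a $\mathrm{PCT}$ can be checked mechanically. The following sketch (Python; a filling is encoded as a list of its rows read top to bottom, and the name `is_pct` is ours) verifies the first-column condition, weak decrease along rows, and the triple condition:

```python
def is_pct(tableau, sigma):
    """Check the defining conditions of a PCT of type sigma: distinct
    first column standardizing to sigma, weakly decreasing rows, and
    the triple condition (a >= c implies b > c)."""
    first = [row[0] for row in tableau]
    order = sorted(range(len(first)), key=lambda i: (first[i], i))
    stan = [0] * len(first)
    for rank, p in enumerate(order, start=1):
        stan[p] = rank
    if len(set(first)) != len(first) or tuple(stan) != tuple(sigma):
        return False
    if any(row[i] < row[i + 1] for row in tableau
           for i in range(len(row) - 1)):
        return False
    # Triple condition: a at (r, c), b at (r, c+1), cv at (r2, c+1), r2 > r.
    for r, row in enumerate(tableau):
        for c in range(len(row) - 1):
            a, b = row[c], row[c + 1]
            for r2 in range(r + 1, len(tableau)):
                if len(tableau[r2]) > c + 1:
                    cv = tableau[r2][c + 1]
                    if a >= cv and not b > cv:
                        return False
    return True
```

Applied to the $\mathrm{PCT}$ of Figure~\ref{fig:prct}, encoded as `[[1], [4, 3, 2], [3, 2], [7, 5, 5, 3]]`, the check succeeds with `sigma = (1, 3, 2, 4)` and fails for any other type.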
Additionally, let $$\mathrm{PCT}(\alpha)\coloneqq\coprod_{\sigma\in \sgrp{\ell(\alpha)}}\mathrm{PCT}^{\sigma}(\alpha),$$ where $\coprod$ indicates disjoint union. If $\tau\in \mathrm{PCT}^{\sigma}(\alpha)$ is such that its entries are all distinct, then we say that $\tau$ is \emph{standard}. We denote the set of standard $\mathrm{PCT}$s of shape $\alpha$ and type $\sigma$ by $\mathrm{SPCT}^{\sigma}(\alpha)$. Additionally, we refer to a standard $\mathrm{PCT}$ as an $\mathrm{SPCT}$. Finally, let $$\mathrm{SPCT}(\alpha)\coloneqq\coprod_{\sigma\in \sgrp{\ell(\alpha)}}\mathrm{SPCT}^{\sigma}(\alpha).$$ \begin{remark} From this point on, whenever we write $\mathrm{PCT}^{\sigma}(\alpha)$ or $\mathrm{SPCT}^{\sigma}(\alpha)$ without explicitly specifying $\sigma$, it is implicit that $\sigma$ is a permutation in $\sgrp{\ell(\alpha)}$. Furthermore, all the RTs (respectively $\mathrm{PCT}$s) we consider in this article belong to $\ensuremath{\operatorname{RT}}(\lambda)$ (respectively $\mathrm{PCT}(\alpha)$) for the appropriate $\lambda$ (respectively $\alpha$). \end{remark} \begin{figure}[ht] \centering \begin{align*} \ytableausetup{mathmode,boxsize=1em} \begin{ytableau} 1\\ 4 & 3 &2\\ 3 & 2\\ 7 & 5 & 5 & 3 \end{ytableau}\hspace{20mm} \begin{ytableau} 1\\ 7 & 5 &2\\ 6 & 4\\ 10 & 9 & 8 & 3 \end{ytableau} \end{align*} \caption{A $\mathrm{PCT}$ (left) and an $\mathrm{SPCT}$ (right) of shape $(1,3,2,4)$ and type $1324$.} \label{fig:prct} \end{figure} Given $\alpha\vDash n$ and $\tau\in \mathrm{SPCT}(\alpha)$, we say that an integer $1\leq i\leq n-1$ is a \emph{descent} of $\tau$ if $i+1$ lies weakly right of $i$ in $\tau$. The \emph{descent set} of $\tau$, denoted by $\des(\tau)$, consists of all descents in $\tau$. For instance, the descent set of the $\mathrm{SPCT}$ in Figure~\ref{fig:prct} is $\{1,2,4,6,7\}$. 
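The descent set of an $\mathrm{SPCT}$ depends only on the columns occupied by consecutive entries, so it is easy to compute. A minimal sketch (Python, tableaux as lists of rows read top to bottom, names ours):

```python
def cell_of(tableau):
    """Map each entry of a standard filling to its (row, column) cell,
    both 1-based; rows are listed top to bottom."""
    return {entry: (r, c)
            for r, row in enumerate(tableau, start=1)
            for c, entry in enumerate(row, start=1)}

def descent_set(tableau):
    """Des(tau): those i for which i+1 lies weakly right of i, i.e.
    the column of i+1 is at least the column of i."""
    pos = cell_of(tableau)
    n = len(pos)
    return {i for i in range(1, n) if pos[i + 1][1] >= pos[i][1]}
```

On the $\mathrm{SPCT}$ of Figure~\ref{fig:prct}, encoded as `[[1], [7, 5, 2], [6, 4], [10, 9, 8, 3]]`, this recovers the descent set $\{1,2,4,6,7\}$ stated above.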
\begin{remark} Note that if $\sigma$ equals $\epsilon_{\ell(\alpha)}$, then a $\mathrm{PCT}$ of shape $\alpha$ and type $\sigma$ is in fact a \emph{composition tableau} (abbreviated henceforth to $\ensuremath{\operatorname{CT}}$) introduced in \cite[Definition 4.1]{HLMvW-1} of the same shape $\alpha$. Similarly, an $\mathrm{SPCT}$ of shape $\alpha$ and type $\sigma=\epsilon_{\ell(\alpha)}$ corresponds to a \emph{standard $\ensuremath{\operatorname{CT}}$} (abbreviated henceforth to $\ensuremath{\operatorname{SCT}}$) of shape $\alpha$. \end{remark} Let $\alpha \vDash n$ be a composition whose largest part is $\alpha_{max}$, and let $\rtau \in \mathrm{PCT}^{\sigma}(\alpha)$. Suppose that the entries in column $i$ for $1\leq i \leq \alpha_{max}$ read from top to bottom form some word $w^i$ in $\mathbb{N}^{*}$. We refer to $w^i$ as the \emph{$i$-th column word} of $\tau$. Furthermore, we define the \emph{standardized $i$-th column word} of $\rtau$, denoted by $\ensuremath{\operatorname{st}}_i(\rtau)$, to be $\stan(w^i)$. Note that $\ensuremath{\operatorname{st}}_1(\tau)$ is $\sigma$ since $\tau\in \mathrm{PCT}^{\sigma}(\alpha)$. We define the \emph{standardized column word} of $\rtau$, denoted by $\ensuremath{\operatorname{st}} (\rtau)$, to be the word $$\ensuremath{\operatorname{st}} (\rtau) = \ensuremath{\operatorname{st}}_1(\rtau) \ \ensuremath{\operatorname{st}}_2(\rtau) \cdots \ensuremath{\operatorname{st}}_{\alpha _{max}}(\rtau).$$ For the $\mathrm{PCT}$ in Figure~\ref{fig:prct}, the standardized column word is $1324 \ 213 \ 12 \ 1$. We now discuss two procedures connecting $\ensuremath{\operatorname{RT}}$s and $\mathrm{PCT}$s. The reader may interpret this correspondence as the analogue of that between $\ensuremath{\operatorname{CT}}$s and semistandard augmented fillings \cite{Mason-SLC}. Consider a composition $\alpha$ and a permutation $\sigma\in \sgrp{\ell(\alpha)}$. Let $\alpha_{max}$ be the largest part of $\alpha$.
Our first procedure, $\permutedtoreverse_{\sigma}$, takes $\tau\in \mathrm{PCT}^{\sigma}(\alpha)$ as input and outputs a filling $T$ of shape $\lambda\coloneqq\widetilde{\alpha}$ by considering the entries of column $i$ of $\tau$ in decreasing order and putting them in column $i$ of $\lambda$ from top to bottom for all $1\leq i\leq \alpha_{max}$. For our second procedure $\reversetopermuted_{\sigma}$, let $\lambda$ be a partition and let $\sigma\in \sgrp{\ell(\lambda)}$. This procedure takes $T\in \ensuremath{\operatorname{RT}}(\lambda)$ as input and outputs a filling $\tau$ as follows. \begin{enumerate} \item Consider the entries in the first column of $T$ and write them in rows $1, 2, \ldots, \ell(\lambda)$ so that the standardization of the word obtained by reading from top to bottom is $\sigma$. \item Consider the entries in column $2$ in decreasing order and place each of them in the row with the smallest index so that the cell to the immediate left of the number being placed is filled and the row entries weakly decrease when read from left to right. \item Repeat the previous step with the set of entries in column $k$ for $k= 3, \ldots , \lambda_1$. \end{enumerate} Figure~\ref{fig:prct<->rt} illustrates the procedures just introduced and motivates the theorem that follows. 
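A minimal computational sketch of these two procedures (Python; a filling is encoded as a list of its rows read top to bottom, and the function names are ours) may clarify the insertion step:

```python
def pct_to_rt(pct):
    """First procedure: sort each column of the PCT into decreasing
    order and write it into the corresponding column of the Young
    diagram of the sorted shape."""
    width = max(len(row) for row in pct)
    cols = [sorted((row[c] for row in pct if len(row) > c), reverse=True)
            for c in range(width)]
    shape = sorted((len(row) for row in pct), reverse=True)
    return [[cols[c][r] for c in range(shape[r])]
            for r in range(len(shape))]

def rt_to_pct(rt, sigma):
    """Second procedure: arrange the first column so that its
    standardization is sigma, then insert the remaining columns,
    largest entry first, into the topmost admissible row."""
    first = sorted(row[0] for row in rt)
    # Row r receives the sigma(r)-th smallest first-column entry.
    rows = [[first[s - 1]] for s in sigma]
    width = max(len(row) for row in rt)
    for c in range(1, width):
        for entry in sorted((row[c] for row in rt if len(row) > c),
                            reverse=True):
            for row in rows:
                # The cell to the immediate left must be filled, and the
                # row must remain weakly decreasing.
                if len(row) == c and row[-1] >= entry:
                    row.append(entry)
                    break
    return rows
```

Running `rt_to_pct` on the $\ensuremath{\operatorname{RT}}$ of Figure~\ref{fig:prct<->rt} with $\sigma=3142$ reproduces the $\mathrm{PCT}$ shown there, and `pct_to_rt` inverts it.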
\begin{figure}[htbp] \centering \begin{align*} \ytableausetup{mathmode,boxsize=1em} \begin{ytableau} 11 & 8 & 6 & 4\\ 10 & 7 & 5\\ 9 & 3 &1\\ 2 \end{ytableau} \quad \mathrel{\mathop{\rightleftarrows}^{\mathrm{\reversetopermuted_{\sigma}}}_{\mathrm{\permutedtoreverse_{\sigma} }}} \quad \begin{ytableau} 10 & 8 & 6 & 4 \\ 2\\ 11 & 7 & 5 \\ 9 & 3 & 1 \end{ytableau} \end{align*} \caption{{An $\ensuremath{\operatorname{RT}}$} and a $\mathrm{PCT}$ related via the maps $\permutedtoreverse_{\sigma}$ and $\reversetopermuted_{\sigma}$ where $\sigma=3142$.} \label{fig:prct<->rt} \end{figure} \begin{theorem}\label{thm:generalized shift map} Let $\lambda$ be a partition and let $\sigma\in \sgrp{\ell(\lambda)}$. The map $$\permutedtoreverse_{\sigma}:\coprod_{\widetilde{\alpha}=\lambda}\mathrm{PCT}^{\sigma}(\alpha) \to \ensuremath{\operatorname{RT}}(\lambda)$$ is a bijection and its inverse is the map $\reversetopermuted_{\sigma}$. \end{theorem} \begin{proof} First, we show that $\permutedtoreverse_{\sigma}$ and $\reversetopermuted_{\sigma}$ are both injections, which suffices to conclude that they are in fact bijections. To this end, we will use \emph{permuted basement semistandard augmented fillings} (henceforth abbreviated to PBFs) introduced in \cite{HMR} generalizing the notion of semi-standard augmented fillings from \cite{Mason}. We refer the reader to \cite{HMR} for details on the terminology used in our proof. The reader will benefit from referring to Figure~\ref{fig:HMR} while reading this proof. Let $\lambda\vdash n$ and let $k=\ell(\lambda)$. Consider $\tau \in \mathrm{PCT}^{\sigma}(\alpha)$ where $\widetilde{\alpha}=\lambda$. Suppose that the first column word of $\tau$ is $w=w_1\cdots w_{k}$. Clearly $\stan(w)=\sigma$. Let $\hat{\sigma}\in \sgrp{n}$ be the permutation obtained by concatenating the entries in $[n]\setminus \{w_1,\dots, w_k\}$ at the end of $w$ in increasing order. 
Consider the filling $\hat{\tau}$ of the augmented composition diagram $\hat{\alpha}$ constructed as follows. \begin{itemize} \item The basement contains $\hat{\sigma}(i)$ for $i=1$ through $n$ from top to bottom. \item The rest of the diagram, which is essentially the composition diagram of $\alpha$, is filled exactly as $\tau$. \end{itemize} Given its construction, one may check that $\hat{\tau}$ is a PBF with basement permutation $\hat{\sigma}$. Indeed, the triple condition satisfied by $\tau$ ensures that, in $\hat{\tau}$, all type $A$ and $B$ triples are inversion triples and the $B$-increasing condition is satisfied. We omit the details. Crucial for us is the fact that the association of $\hat{\tau}$ to $\tau$ is one-to-one. Next, we describe a map that allows us to associate a reverse tableau with $\hat{\tau}$. Given a permutation $\pi$, Haglund-Mason-Remmel \cite[Section 4]{HMR} define a map $\rho_{\pi}$ that generalizes Mason's shift map \cite{Mason} (also known as the $\rho$ map). This map takes as input a PBF $T_1$ with basement permutation $\pi$ and outputs the unique PBF $T_2$ such that the entries in any column of $T_2$ read from top to bottom are the entries in the corresponding column of $T_1$ read in decreasing order. Note that the basement permutation of $T_2$ is $\bar{\epsilon}_n$. Haglund-Mason-Remmel (see discussion after \cite[Corollary 9]{HMR}) establish that $T_2$ is in fact a reverse tableau. The map $\rho_{\pi}^{-1}$ \cite[Page 309]{HMR} is the same as our map $\bar{\phi}_{\pi}$. In Figure~\ref{fig:HMR}, the two fillings in the middle are PBFs related by $\rho_{\pi}$ where $ \pi=10\ 2 \ 11\ 9\ 1 \ 3 \ 4\ 5 \ 6 \ 7 \ 8$ in one-line notation. In light of the preceding discussion, using the map $\rho_{\hat{\sigma}}$ we can associate the reverse tableau $\hat{T}\coloneqq \rho_{\hat{\sigma}}(\hat{\tau})$ with $\hat{\tau}$. 
Furthermore, given how $\rho_{\hat{\sigma}}$ operates, we conclude that the reverse tableau $T$ obtained by omitting the basement $\bar{\epsilon}_n$ from $\hat{T}$ may be otherwise obtained by sorting the entries in individual columns of $\tau$ in decreasing order and writing them along columns in the Young diagram of $\lambda=\widetilde{\alpha}$. Thus, we have $\permutedtoreverse_{\sigma}(\tau)=T$. The injectivity of $\permutedtoreverse_{\sigma}$ is explained next. Suppose that $\tau_1$ and $\tau_2$ satisfy $\permutedtoreverse_{\sigma}(\tau_1)= \permutedtoreverse_{\sigma}(\tau_2)$. Let $\hat{\sigma}_1, \hat{\sigma}_2\in \sgrp{n}$ be obtained by extending the first column words in $\tau_1$ and $\tau_2$ respectively as before. Our hypothesis implies that $\rho_{\hat{\sigma}_1}(\hat{\tau}_1)=\rho_{\hat{\sigma}_2}(\hat{\tau}_2)$. Thus, the sets of entries in the corresponding columns of $\tau_1$ and $\tau_2$ coincide. Since the standardized first column word of both $\tau_1$ and $\tau_2$ is $\sigma$, the first column of $\tau_1$ is the same as the first column of $\tau_2$. This implies $\hat{\sigma}_1 = \hat{\sigma}_2$, which, in view of {\cite[Theorem 10 part 2)]{HMR}}, implies that $\hat{\tau}_1=\hat{\tau}_2$. It follows immediately that $\tau_1=\tau_2$. Thus, $\permutedtoreverse_{\sigma}$ is an injection from $\coprod_{\widetilde{\alpha}=\lambda}\mathrm{PCT}^{\sigma}(\alpha)$ to $\ensuremath{\operatorname{RT}}(\lambda)$. To establish the injectivity of $\reversetopermuted_{\sigma}$, we again use results in \cite{HMR}, beginning with an alternative description of $\reversetopermuted_{\sigma}$. Let $T\in \ensuremath{\operatorname{RT}}(\lambda)$. Construct a PBF $\hat{T}$ of shape $\hat{\lambda}$ with basement permutation $\overline{\epsilon}_n$ by filling the remaining diagram, essentially the composition diagram of $\lambda$, exactly as $T$.
To obtain $\reversetopermuted_{\sigma}(T)$, construct a permutation $\hat{\sigma}$ such that \begin{itemize} \item its first $k$ letters are a rearrangement of the entries in the first column of $T$, \item the standardization of the word formed by the first $k$ letters is $\sigma$, \item the last $n-k$ entries increase when read from left to right. \end{itemize} Then $\hat{\tau}\coloneqq\rho_{\hat{\sigma}}^{-1}(\hat{T})=\reversetopermuted_{\hat{\sigma}}(\hat{T})$ is a PBF with basement permutation $\hat{\sigma}$. Given how $\hat{\sigma}$ relates to $\sigma$ and to the first column of $T$, we infer that the first $k$ rows in the second column of $\hat{\tau}$ are the same as the first $k$ rows of the first column. This implies that the filling obtained by removing the basement from $\hat{\tau}$ is in fact {$\reversetopermuted_{\sigma}(T)$}. To establish the injectivity of $\reversetopermuted_{\sigma}$, argue as follows. Suppose $T_1, T_2\in \ensuremath{\operatorname{RT}}(\lambda)$ are such that $\reversetopermuted_{\sigma}(T_1)=\reversetopermuted_{\sigma}(T_2)$. The preceding discussion implies that there exist permutations $\hat{\sigma}_1, \hat{\sigma_2} \in \sgrp{n}$ such that $\rho_{\hat{\sigma}_1}^{-1}(\hat{T_1})=\rho_{\hat{\sigma}_2}^{-1}(\hat{T_2})$, where $\hat{T_1}$ and $\hat{T_2}$ are obtained by appending the basement permutation $\overline{\epsilon}_n$ as before. Again, we conclude that the sets of entries in corresponding columns of $T_1$ and $T_2$ are equal, which implies that $T_1=T_2$. To finish the proof, we need to establish that $\reversetopermuted_{\sigma}$ and $\permutedtoreverse_{\sigma}$ are mutually inverse. This follows from our alternate description for $\permutedtoreverse_{\sigma}$ and $\reversetopermuted_{\sigma}$ in terms of $\rho_{\hat{\sigma}}$ and $\rho_{\hat{\sigma}}^{-1}$ for an appropriately constructed $\hat{\sigma}$. 
\end{proof} Figure~\ref{fig:HMR} describes the various steps in the proof of Theorem~\ref{thm:generalized shift map} in the case of the $\ensuremath{\operatorname{RT}}$ and $\mathrm{PCT}$ from Figure~\ref{fig:prct<->rt}. Let $\sigma=3142$, $\alpha=(4,1,3,3)$ and $\lambda=(4,3,3,1)$. The $\mathrm{PCT}$ $\tau$ on the left belongs to $\mathrm{PCT}^{\sigma}(\alpha)$. The second filling from the left is the PBF $\hat{\tau}$ with basement permutation $\hat{\sigma}$ constructed in the preceding proof. We have $\hat{\sigma}=10\ 2 \ 11\ 9\ 1 \ 3 \ 4\ 5 \ 6 \ 7 \ 8$ in one-line notation. The second filling from the right is $\rho_{\hat{\sigma}}(\hat{\tau})$ and the rightmost filling is $\permutedtoreverse_{\sigma}(\tau)$. \begin{figure}[ht] \includegraphics{Example-HMR_bijection-1.pdf} \caption{The bijection between $\coprod_{\widetilde{\alpha}=\lambda}\mathrm{PCT}^{\sigma}(\alpha)$ and $\ensuremath{\operatorname{RT}}(\lambda)$ via PBFs.} \label{fig:HMR} \end{figure} We conclude this section by noting that if $\sigma=\epsilon_{\ell(\alpha)}$, then the map $\permutedtoreverse_{\sigma}$ maps a $\ensuremath{\operatorname{CT}}$ $\tau$ of shape $\alpha$ to an $\ensuremath{\operatorname{RT}}$ of shape $\widetilde{\alpha}$. \section{0-Hecke modules on $\mathrm{SPCT}$s}\label{sec:0-Hecke} In order to define an $H_n(0)$-action on $\mathrm{SPCT}$s we recall the concept of attacking defined in \cite{TvW}. In fact, we employ notions similar to those in \cite[Section 3]{TvW} throughout and essentially all results there are true in our case as well. We focus solely on deriving analogues relevant for this article. 
Given $\rtau \in \mathrm{SPCT}^{\sigma}(\alpha)$ for some $\alpha\vDash n$, and a positive integer $i$ such that $1\leq i\leq n-1$, we say that $i$ and $i+1$ are \emph{attacking} if either \begin{enumerate} \item $i$ and $i+1$ are in the same column in $\rtau$ (in which case we call them \emph{strongly attacking}), or \item $i$ and $i+1$ are in adjacent columns in $\rtau$, with $i+1$ positioned southeast of $i$ (in which case we call them \emph{weakly attacking}). \end{enumerate} Given an integer $i$ satisfying $1\leq i\leq n-1$, let $s_i(\rtau)$ denote the filling obtained by interchanging the positions of entries $i$ and $i+1$ in $\rtau$. Define operators $\pi_i$ for $1\leq i\leq n-1$ as follows. \begin{eqnarray}\label{eq:pi} \pi_{i}(\rtau)&=& \left\lbrace\begin{array}{ll}\rtau & i\notin \des(\rtau)\\ 0 & i\in \des(\rtau), i \text{ and } i+1 \text{ attacking}\\ s_{i}(\rtau) & i\in \des(\rtau), i \text{ and } i+1 \text{ { nonattacking}}\end{array}\right. \end{eqnarray} If $i\in \des(\rtau)$ is such that $i$ and $i+1$ are attacking (respectively nonattacking) then $i$ is an \emph{attacking descent} (respectively \emph{nonattacking descent}). \begin{theorem}\label{the:0heckerels} The operators $\{ \pi _i \} _{i=1}^{n-1}$ satisfy the same relations as $H_n(0)$. Thus, given a composition $\alpha\vDash n$, they define a $0$-Hecke action on $\mathrm{SPCT}^{\sigma}(\alpha)$. \end{theorem} The proof of the above theorem in the context of $\ensuremath{\operatorname{SCT}}$s involves a lengthy verification and relies on some preliminary lemmas \cite{TvW}. The techniques we need vary slightly and we discuss the differences next. The key lemma for proving that the operators give rise to a $0$-Hecke action on $\ensuremath{\operatorname{SCT}}$s is \cite[Lemma 3.7]{TvW} and it, in turn, relies on \cite[Lemmas 3.4, 3.5, 3.6]{TvW}. Since we avoid an approach involving box-adding operators here, we cannot use an analogue of \cite[Lemmas 3.4 and 3.5]{TvW} for $\mathrm{SPCT}$s. 
In fact, \cite[Lemma 3.6]{TvW} does not hold in our setting. In spite of this, we do have the following analogue of \cite[Lemma 3.7]{TvW}. \begin{lemma}\label{lem:switchPRCT} \begin{enumerate} \item If $\alpha \vDash n$, $\rtau \in \mathrm{SPCT}^{\sigma}(\alpha)$ and $j\in \des(\rtau)$ such that $j$ and $j+1$ are nonattacking, then $s_j(\rtau)\in \mathrm{SPCT}^{\sigma}(\alpha)$. \item If $\alpha \vDash n$, $\rtau \in \mathrm{SPCT}^{\sigma}(\alpha)$ and $j\notin \des(\rtau)$ such that $j$ is not in the cell to the immediate right of $j+1$, then $s_j(\rtau)\in \mathrm{SPCT}^{\sigma}(\alpha)$. \end{enumerate} \end{lemma} \begin{proof} It is clear in both cases that $s_j(\tau)$ has rows that strictly decrease when read from left to right and that the standardized first column word is $\sigma$. Thus, to establish that $s_j(\tau)\in \mathrm{SPCT}^{\sigma}(\alpha)$, we need to check that the triple condition holds. Our proof proceeds by checking the triple condition locally. That is, we focus on a triple configuration involving a fixed set of cells in $\tau$ and show that if the triple condition held for this set of cells, then it continues to hold for the same set of cells in $s_j(\tau)$. Consider throughout a fixed triple configuration in $\tau$ as in Figure~\ref{fig:triple configuration}. Assume now that $j$ is a nonattacking descent in $\tau$. If neither $j$ nor $j+1$ belongs to $\{a,b,c\}$, then the triple condition obviously holds in $s_j(\tau)$. Hence suppose this is not the case. Our premise implies that exactly one of $j$ or $j+1$ belongs to $\{a,b,c\}$. Thus, since the remaining two entries of the configuration do not lie between $j$ and $j+1$, interchanging $j$ and $j+1$ in $\tau$ does not alter the relative order of the entries in the cells of the triple configuration under consideration. Therefore, we conclude that $s_j(\tau)\in \mathrm{SPCT}^{\sigma}(\alpha)$. We establish the second part of the lemma using an analysis similar to before. Assume that $j\notin \des(\tau)$ and that $j$ is not in the cell to the immediate right of $j+1$.
As before, we infer that $j$ and $j+1$ cannot both belong to $\{a,b,c\}$. In the case where neither belongs, the triple condition clearly holds in $s_j(\tau)$. Hence assume that one of $j$ or $j+1$ belongs to $\{a,b,c\}$. Interchanging $j$ and $j+1$ in $\tau$ does not alter the relative order of the entries in the cells of the triple configuration under consideration. Thus the triple condition holds in $s_j(\tau)$. \end{proof} We now state lemmas which show that the relations satisfied by the $0$-Hecke algebra $H_n(0)$ are also satisfied by the operators $\{ \pi _i \} _{i=1} ^{n-1}$. The proofs are omitted given their similarity to \cite[Lemmas 3.9, 3.10 and 3.11]{TvW} respectively. We emphasize here that only the first half of Lemma~\ref{lem:switchPRCT} is needed for establishing the three lemmas that follow, and thereby, Theorem~\ref{the:0heckerels}. The second half of Lemma~\ref{lem:switchPRCT} is crucial for establishing Lemma~\ref{lem:unique source sink}. \begin{lemma}\label{lem:pisquared} For $1\leq i\leq n-1$, we have $\pi_{i}^{2}=\pi_i$. \end{lemma} \begin{lemma}\label{lem:pidifferby2} For $1\leq i,j\leq n-1$ such that $\lvert i-j\rvert \geq 2$, we have $\pi_i\pi_j=\pi_j\pi_i$. \end{lemma} \begin{lemma}\label{lem:piiiplus1} For $1\leq i\leq n-2$, we have $\pi_i\pi_{i+1}\pi_i=\pi_{i+1}\pi_i\pi_{i+1}$. \end{lemma} \iffalse \begin{proof} Consider an $\mathrm{SPCT}$ $\rtau$ of size $n$. We will deal with various cases.\newline \emph{Case I: $i,i+1\notin \des(\rtau)$}\newline In this case we have $\pi_i(\rtau)=\pi_{i+1}(\rtau)=\rtau$. Thus, we clearly have that $ \pi_i\pi_{i+1}\pi_i(\rtau)=\pi_{i+1}\pi_i\pi_{i+1}(\rtau)$. \noindent \emph{Case II: $i\notin \des(\rtau)$, $i+1\in \des(\rtau)$}\newline In this case we have $\pi_{i}(\rtau)=\rtau$. If $i+1$ and $i+2$ are attacking, then $\pi_{i+1}(\rtau)=0$. This implies that $\pi_i\pi_{i+1}\pi_i(\rtau)=\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=0$. Assume therefore that $i+1$ and $i+2$ are nonattacking.
Then $\pi_{i+1}(\rtau)=s_{i+1}(\rtau)$. One can see that we have three possibilities in $s_{i+1}(\rtau)$, each of which is dealt with individually in what follows. \begin{itemize} \item Consider first the case where $i\notin \des(s_{i+1}(\rtau))$. Then we have $\pi_i\pi_{i+1}(\rtau)=\pi_{i}(s_{i+1}(\rtau))=s_{i+1}(\rtau)$. Thus, we have $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=s_{i+1}(\rtau)$. Using the fact that $\pi_i(\rtau)=\rtau$ gives us that $\pi_{i+1}\pi_i(\rtau)=s_{i+1}(\rtau)$. Since $i\notin \des(s_{i+1}(\rtau))$ by our assumption, we have that $\pi_i\pi_{i+1}\pi_i(\rtau)=s_{i+1}(\rtau)$, and we are done. \item Now consider the case where $i\in \des(s_{i+1}(\rtau))$ and $i$, $i+1$ attacking in $s_{i+1}(\rtau)$. Then $\pi_i\pi_{i+1}(\rtau)=\pi_{i}(s_{i+1}(\rtau))=0$ and hence $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=0$. Now, clearly $\pi_{i+1}\pi_i(\rtau)=\pi_{i+1}(\rtau)=s_{i+1}(\rtau)$. Thus, $\pi_i\pi_{i+1}\pi_i(\rtau)=0$ which concludes this case. \item Consider finally the case where $i\in \des(s_{i+1}(\rtau))$ with $i$, $i+1$ nonattacking in $s_{i+1}(\rtau)$. Then, $\pi_i\pi_{i+1}(\rtau)=s_is_{i+1}(\rtau)$. Notice that, in $s_is_{i+1}(\rtau)$, $i+1$ is not a descent. Hence $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=s_is_{i+1}(\rtau)$. This is precisely what $\pi_{i}\pi_{i+1}\pi_i(\rtau)$ is, as $\pi_i(\rtau)$ is just $\rtau$. \end{itemize} \noindent\emph{Case III: $i\in \des(\rtau)$, $i+1\notin\des(\rtau)$}\newline Firstly, notice that we have $\pi_{i+1}(\rtau)=\rtau$. If $i$ and $i+1$ are attacking in $\rtau$, then $\pi_{i}\pi_{i+1}(\rtau)=\pi_{i}(\rtau)=0$ and therefore $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=0$. Clearly, $\pi_{i}\pi_{i+1}\pi_{i}(\rtau)=0$ as well. Now, assume that $i$ and $i+1$ are nonattacking. Then $\pi_{i}\pi_{i+1}(\rtau)=s_i(\rtau)$. Again, we have the following three possibilities in $s_i(\rtau)$. \begin{itemize} \item Assume that $i+1\notin \des(s_i(\rtau))$. Then $\pi_{i+1}(s_i(\rtau))=s_i(\rtau)$. 
Thus, in particular, we have that $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=s_i(\rtau)$. On the other hand we have that $\pi_i\pi_{i+1}\pi_i(\rtau)=\pi_{i}\pi_{i+1}(s_i(\rtau))=\pi_i(s_i(\rtau))=s_i(\rtau)$. \item Assume now that $i+1\in \des(s_i(\rtau))$ with $i+1$, $i+2$ attacking in $s_i(\rtau)$. Then $\pi_{i+1}\pi_i(\rtau)=\pi_{i+1}(s_i(\rtau))=0$. This gives that $\pi_i\pi_{i+1}\pi_i(\rtau)=0$. Now, we have that $\pi_i\pi_{i+1}(\rtau)=\pi_i(\rtau)=s_i(\rtau)$. Thus, $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=0$ too. \item The third case is the one where $i+1\in \des(s_i(\rtau))$ and $i+1$, $i+2$ are nonattacking in $s_i(\rtau)$. Then $\pi_{i+1}\pi_i(\rtau)=s_{i+1}s_i(\rtau)$. Notice that $i\notin \des(s_{i+1}s_i(\rtau))$. Therefore, $\pi_i\pi_{i+1}\pi_{i}(\rtau)=s_{i+1}s_i(\rtau)$. We have that $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=\pi_{i+1}\pi_i(\rtau)=s_{i+1}s_i(\rtau)$, and this settles the last case here. \end{itemize} \noindent\emph{Case IV: $i\in \des(\rtau)$, $i+1\in \des(\rtau)$} \newline Assume first the scenario where $i$ and $i+1$ are attacking in $\rtau$. In this case, $\pi_i(\rtau)=0$ implying $\pi_{i}\pi_{i+1}\pi_i(\rtau)=0$. We have the following two situations arising. \begin{itemize} \item If $i+1$ and $i+2$ are also attacking in $\rtau$, then it is immediate that $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)=0$. \item Assume now that $i+1$ and $i+2$ are nonattacking in $\rtau$. {Assume further that $i$ and $i+1$ do not both belong to the first column.} We claim that $i$ and $i+1$ are nonattacking in $\pi_{i+1}(\rtau)=s_{i+1}(\rtau)$ and that $i\in \des(s_{i+1}(\rtau))$. The fact that $i\in \des(s_{i+1}(\rtau))$ is immediate. If $i+1$ and $i+2$ occupy columns whose indices differ by at least 2 in $\rtau$, then it is clear that $i$ and $i+1$ will be nonattacking in $s_{i+1}(\rtau)$. Assume therefore that $i+1$ and $i+2$ occupy adjacent columns in $\rtau$. 
If $i$ and $i+1$ are also in adjacent columns of $\rtau$, then they occur in columns that are distance exactly 2 apart in $s_{i+1}(\rtau)$. Thus, this also gives that $i$ and $i+1$ are nonattacking in $s_{i+1}(\rtau)$. Now assume that $i$ and $i+1$ are both in column $k\geq 2$ in $\rtau$. \sout{Since $i+1$ and $i+2$ are nonattacking and in adjacent columns in $\rtau$, we know by Lemma \ref{lem:adjacentnonattacking}, that $k\geq 2$.} {By the triple condition, it follows that} $i+1$ occupies a cell that is strictly north of the cell occupied by $i$ in $\rtau$. Since, in $\rtau$, $i+2$ occupies a cell that is strictly northeast of the cell containing $i+1$, we get that $i$ and $i+1$ will be nonattacking in $s_{i+1}(\rtau)$. Thus $\pi_{i}\pi_{i+1}(\rtau)=s_is_{i+1}(\rtau)$. Now notice that $i+1\in \des(s_is_{i+1}(\rtau))$ and that $i+1$ and $i+2$ are attacking in $s_is_{i+1}(\rtau)$. This gives us that $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=0$. {Now assume that both $i$ and $i+1$ belong to the first column in $\tau$. If $i+1$ is strictly north of $i$, then the same arguments as before guarantee that $i\in \des(s_{i+1}(\tau))$ and $i$ is a nonattacking descent in $s_{i+1}(\tau)$. Then $\pi_{i}\pi_{i+1}(\tau)=s_is_{i+1}(\tau)$ and $ i+1$ is an attacking descent in $s_is_{i+1}(\tau)$. Thus $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=0$. Hence assume that $i$ is strictly north of $i+1$. If $i+2$ occupies the $j$-th column for $j\geq 3$, then $i$ will be a nonattacking descent in $s_{i+1}(\tau)$. Thus $\pi_i\pi_{i+1}(\tau)=s_is_{i+1}(\tau)$. Clearly, $i+1$ is an attacking descent in $s_is_{i+1}(\tau)$ and thus $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=0$. If $i+2$ is in the second column and occupies a row strictly north of the row containing $i$, then the same argument as before again guarantees that $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=0$. 
If $i+2$ is in the second column and occupies a row strictly south of the row containing $i$ but strictly north of the row containing $i+1$, then $i$ is an attacking descent in $s_{i+1}(\tau)$. Thus $\pi_i\pi_{i+1}(\tau)=0$ and therefore $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=0$.} \end{itemize} Assume now the second scenario that $i$ and $i+1$ are nonattacking in $\rtau$. Then $\pi_{i}(\rtau)=s_i(\rtau)$. Since $i+1\in \des(\rtau)$, we are guaranteed that $i+1\in \des(s_i(\rtau))$. Moreover, $i+1$ and $i+2$ are nonattacking in $s_i(\rtau)$. To see this, we will use very similar analysis as we did earlier. If $i+1$ and $i+2$ are nonattacking in $\rtau$, then we get that $i+1$ and $i+2$ occupy columns whose indices differ by at least 2 in $s_i(\rtau)$. Hence they are clearly nonattacking in this case. If $i+1$ and $i+2$ are attacking and in adjacent columns, we still reach the conclusion that $i+1$ and $i+2$ occupy columns whose indices differ by at least 2 in $s_i(\rtau)$. Thus, in this case too, we get that $i+1$ and $i+2$ are nonattacking in $s_i(\rtau)$. Assume finally that $i+1$ and $i+2$ are in the same column, say $k$, of $\rtau$. {As $i$ and $i+1$ are nonattacking, we must have that $k\geq 2$.} If $i$ and $i+1$ are in columns whose indices differ by at least 2 in $\rtau$, we know that $i+1$ and $i+2$ will be nonattacking in $s_i(\rtau)$. Hence, assume that $i$ and $i+1$ are in adjacent columns. Hence $i$ is in column $k-1$ and $i+1$ occupies a cell strictly northeast of the cell containing $i$. \sout{Lemma \ref{lem:adjacentnonattacking} implies that $k-1\geq 2$, which is equivalent to $k\geq 3$.} {As $k\geq 2$, the triple condition implies} \sout{Thus, we know} that, in $\rtau$, $i+2$ occupies a cell in column $k$ that is strictly north of the cell occupied by $i+1$. Hence, in $s_i(\rtau)$, $i+1$ and $i+2$ are indeed nonattacking. Thus $\pi_{i+1}\pi_{i}(\rtau)=s_{i+1}s_{i}(\rtau)$. Now we will consider two cases. 
\begin{itemize} \item If $i+1$ and $i+2$ are attacking in $\rtau$, then $i\in \des(s_{i+1}s_i(\rtau))$ with $i$ and $i+1$ attacking. Thus $\pi_i\pi_{i+1}\pi_i(\rtau)=0$ in this case. However, so is $\pi_{i+1}\pi_i\pi_{i+1}(\rtau)$ as $\pi_{i+1}(\rtau)=0$ already. \item Assume that $i+1$ and $i+2$ are nonattacking in $\rtau$. This implies that $i\in \des(s_{i+1}s_i(\rtau))$ with $i$ and $i+1$ nonattacking too. Thus, $\pi_i\pi_{i+1}\pi_i(\rtau)=s_is_{i+1}s_{i}(\rtau)$. Now $\pi_{i+1}(\rtau)=s_{i+1}(\rtau)$ and it is clear that $i\in \des(s_{i+1}(\rtau))$ with $i$, $i+1$ nonattacking. Thus, $\pi_i\pi_{i+1}(\rtau)=s_is_{i+1}(\rtau)$. Since $i$ and $i+1$ are nonattacking in $\rtau$ and $i\in \des(\rtau)$, we are guaranteed that $i+1\in \des(s_is_{i+1}(\rtau))$ with $i+1$, $i+2$ nonattacking. Thus $\pi_{i+1}\pi_{i}\pi_{i+1}(\rtau)=s_{i+1}s_is_{i+1}(\rtau)$. One can check that $s_is_{i+1}s_i(\rtau)=s_{i+1}s_is_{i+1}(\rtau)$ by noticing that in both cases the positions of $i$ and $i+2$ in $\rtau$ have been interchanged. \end{itemize} {This completes the proof.} \end{proof} \fi The proof of Theorem~\ref{the:0heckerels} follows from Lemmas~\ref{lem:pisquared}, \ref{lem:pidifferby2} and \ref{lem:piiiplus1}. Let $\mathbf{S}_{\alpha}$ denote the $\mathbb{C}$-linear span of all $\mathrm{SPCT}$s of shape $\alpha$. We have established that $\mathbf{S}_{\alpha}$ is an $H_n(0)$-module. We now construct a direct sum decomposition of $\mathbf{S}_{\alpha}$ by defining an equivalence relation based on standardized column words of $\mathrm{SPCT}$s. \subsection{Source and sink tableaux} Given a composition $\alpha \vDash n$, define an equivalence relation $ \sim _\alpha $ on $\mathrm{SPCT}(\alpha)$ by defining $\tau_1\sim _\alpha \tau_2$ if and only if $\ensuremath{\operatorname{st}} (\tau_1)= \ensuremath{\operatorname{st}}(\tau_2)$, that is, the entries in each column of $\tau_1$ are in the same relative order as the entries in the corresponding column of $\tau_2$. 
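The operators $\pi_i$ defined earlier and the equivalence relation $\sim_\alpha$ can be sketched together. The following Python fragment (tableaux as lists of rows read top to bottom; all names ours) also illustrates the key point that applying $\pi_i$ at a nonattacking descent preserves the standardized column word:

```python
def pi(i, tableau):
    """The operator pi_i: the identity if i is not a descent, zero
    (represented here by None) if i is an attacking descent, and the
    swap s_i if i is a nonattacking descent."""
    pos = {e: (r, c) for r, row in enumerate(tableau)
           for c, e in enumerate(row)}
    (r1, c1), (r2, c2) = pos[i], pos[i + 1]
    if c2 < c1:                                   # i is not a descent
        return tableau
    if c1 == c2 or (c2 == c1 + 1 and r2 > r1):    # attacking descent
        return None
    swap = {i: i + 1, i + 1: i}
    return [[swap.get(e, e) for e in row] for row in tableau]

def standardized_column_word(tableau):
    """st(tau): concatenate the standardizations of the column words.
    Entries of a standard filling are distinct, so ranks suffice."""
    width = max(len(row) for row in tableau)
    word = []
    for c in range(width):
        col = [row[c] for row in tableau if len(row) > c]
        ranks = {e: k for k, e in enumerate(sorted(col), start=1)}
        word.extend(ranks[e] for e in col)
    return tuple(word)

def equivalent(t1, t2):
    """tau1 ~_alpha tau2 iff st(tau1) = st(tau2)."""
    return standardized_column_word(t1) == standardized_column_word(t2)
```

On the $\mathrm{SPCT}$ of Figure~\ref{fig:prct}, for example, $\pi_2$ and $\pi_4$ act by zero, $\pi_3$ acts as the identity, and $\pi_1$ swaps $1$ and $2$ while leaving the standardized column word $1324\ 213\ 12\ 1$ unchanged.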
Suppose that the equivalence classes under $\sim _\alpha$ are {$E_1,\ldots, E_k$}. Let $\mathbf{S}_{\alpha,E_i}$ denote the $\mathbb{C}$-linear span of all $\mathrm{SPCT}$s in $E_i$ for $i=1, \ldots, k$. Then we get the following isomorphism of vector spaces \begin{equation}\label{eq:Ssum} \mathbf{S}_{\alpha}\cong\bigoplus_{i=1}^{k}\mathbf{S}_{\alpha,E_i}, \end{equation} which is in fact an $H_n(0)$-module isomorphism as the following lemma implies. We omit the proof as it is the same as \cite[Lemma 6.6]{TvW}. \begin{lemma}\label{lem:Emodule} Let $E_j$ for $j= 1, \ldots , k$ be the equivalence classes under $\sim _\alpha$ for $\alpha \vDash n$. Then for all $i$ such that $1\leq i\leq n-1$, we have that $\pi_i(\mathbf{S}_{\alpha,E_j}) \subseteq \mathbf{S}_{\alpha,E_j}$ for any $1\leq j\leq k$. \end{lemma} \begin{comment} \begin{proof} Consider $\rtau_1\in E_j$, and suppose further that $\pi_i(\rtau_1)=\rtau_2$ for some positive integer $i$. In particular we are assuming that $i$ is a nonattacking descent in $\rtau_1$. Therefore, all the entries strictly greater than $i$ in the same column as $i$ in $\rtau_1$ are actually all strictly greater than $i+1$. Also, all entries strictly less than $i+1$ in the same column as $i+1$ in $\rtau_1$ are actually all strictly less than $i$. Thus, once the positions of $i$ and $i+1$ in $\rtau_1$ are interchanged to obtain $\rtau_2$, we have that $\ensuremath{\operatorname{st}} (\rtau_2)=\ensuremath{\operatorname{st}} (\rtau_1)$. Thus, $\rtau_2$ belongs to $E_j$ as well. \end{proof} \end{comment} Next, we discuss two important classes of $\mathrm{SPCT}$s that form special representatives of each equivalence class. Let $\alpha\vDash n$. An $\mathrm{SPCT}$ $\rtau$ of shape $\alpha$ is said to be a \emph{source} tableau if for every $i\notin \des(\rtau)$ where $i\neq n$, we have that $i+1$ lies to the immediate left of $i$.
An $\mathrm{SPCT}$ $\rtau$ of shape $\alpha$ is said to be a \emph{sink} tableau if for every $i\in \des(\rtau)$, we have that $i$ and $i+1$ are attacking. Figure~\ref{fig:source and sink} shows a source tableau (left) and a sink tableau (right) of shape $(1,3,2,4)$ and type $1324$. Note that the standardized column word of each {tableau} is $1324 \ 213 \ 12 \ 1$. \begin{figure}[htbp] \centering \begin{align*} \begin{ytableau} 1\\ 6 & 5 & 4\\ 3 & 2\\ 10 & 9 & 8 & 7 \end{ytableau} \hspace{5mm} \begin{ytableau} 3\\ 8 & 5 & 1\\ 7 & 4\\ 10 & 9 & 6 & 2 \end{ytableau} \end{align*} \caption{An example of a source {tableau} and a sink tableau.} \label{fig:source and sink} \end{figure} By performing an analysis very similar to that in \cite[Section 6]{TvW} we obtain the following analogue of \cite[Corollary 6.15]{TvW}, which is pertinent for our purposes. See Remark~\ref{rem:drn} for a brief discussion regarding its proof. \begin{lemma}\label{lem:unique source sink} There is a unique source {tableau} and a unique sink {tableau} in every equivalence class under $\sim _\alpha$. \end{lemma} \begin{remark}\label{rem:drn} The {proofs} of the various lemmas leading up to the proof of \cite[Corollary 6.15]{TvW} {employ} the notion of removable nodes of a composition diagram followed by distinguished removable nodes in $\ensuremath{\operatorname{SCT}}$s. The definition of removable nodes in the aforementioned case is inspired by the cover relation for the (left) composition poset {\cite[Section 2.2]{BTvW}}. Saturated chains in this poset are in bijection with $\ensuremath{\operatorname{SCT}}$s. Although we have not described $\mathrm{SPCT}$s as saturated chains in a poset on compositions, we may still define a removable node as follows. Given the composition diagram $\alpha$, we call a cell in position $(i,j)$ a \emph{removable node} if one of the two conditions below is met. 
\begin{enumerate} \item $\alpha_i=j\geq 2$ and there is no part of length $j-1$ strictly north of $\alpha_i$. \item $\alpha_i=j=1$. \end{enumerate} Figure~\ref{fig:removable nodes} shows $\alpha=(2,3,1,4)$ with removable nodes corresponding to cells filled with bullets. \begin{figure}[htbp] \centering \begin{align*} {\ytableausetup{mathmode,boxsize=0.8em} \begin{ytableau} *(white) & \bullet\\ *(white) & *(white) &*(white)\\ \bullet\\ *(white) & *(white) &*(white) & *(white) \end{ytableau}} \end{align*} \caption{The removable nodes of $\alpha=(2,3,1,4)$.} \label{fig:removable nodes} \end{figure} With this definition of removable node, one can define distinguished removable nodes in precisely the same way as in \cite[Section 6]{TvW}. In particular, the analogues of \cite[Propositions 6.13 and 6.14]{TvW}, which are crucial for deriving our Lemma~\ref{lem:unique source sink}, hold in our setting. This concludes our remark. \end{remark} Recall that our motivation as outlined in the introduction is to enumerate compatible pairs of permutations. Consider a $\ensuremath{\operatorname{CT}}$ $\tau$ of shape $\alpha$ whose $i$-th and $(i+1)$-th columns, for some $i\geq 1$, contain the same number of entries, say $n$. Then $(\ensuremath{\operatorname{st}}_i(\tau), \ensuremath{\operatorname{st}}_{i+1}(\tau))$ is a compatible pair of permutations in $\sgrp{n}$. Note that if we consider the entries in columns $i$ and $i+1$ only, we can construct a filling of shape $(2^n)$. As the triple condition is clearly satisfied, this filling is in fact a $\mathrm{PCT}$. We infer that compatible pairs of permutations in $\sgrp{n}$ are a subset of the set of pairs of permutations $(\ensuremath{\operatorname{st}}_1(\tau), \ensuremath{\operatorname{st}}_2(\tau))$ corresponding to {tableaux} $\tau \in\mathrm{PCT}((2^n))$. In Section~\ref{sec:allowable acyclic}, we establish that this containment is in fact an equality. We proceed towards studying $\mathrm{SPCT}((2^n))$ in depth.
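For concreteness, the definition of removable nodes in the remark above can be sketched as follows. This is an illustration of ours, not part of the original text; a composition is encoded as a tuple of parts listed from north to south, and cells are $1$-indexed with row $1$ northmost.

```python
def removable_nodes(alpha):
    """Return the cells (i, j) that are removable nodes of the
    composition diagram of alpha, per the two conditions above."""
    nodes = []
    for i, part in enumerate(alpha, start=1):
        j = part  # the candidate cell is the last cell of row i
        if j == 1:
            # condition (2): a part of length 1 always ends in a removable node
            nodes.append((i, j))
        elif all(alpha[k] != j - 1 for k in range(i - 1)):
            # condition (1): no part of length j-1 strictly north of row i
            nodes.append((i, j))
    return nodes
```

For $\alpha=(2,3,1,4)$ this returns the two bulleted cells of Figure~\ref{fig:removable nodes}.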
\section{Descents in $2$-columned tableaux and ascent-descents on labeled binary trees}\label{sec:ascents-descents-trees} We refer to $\mathrm{SPCT}$s of shape $(2^n)$ for some $n\geq 1$ as \emph{$2$-columned tableaux} of \emph{size} $2n$. Consider the following refined classification of certain descents in a $2$-columned tableau $\tau$. Suppose $i\neq 2n$ is an entry in the first column of $\tau$. Then $i\in \des(\tau)$. Based on the relative position of $i+1$ in $\tau$, we may define the following sets. \begin{align*} \des_N(\tau)&=\{i\;|\; i+1 \text{ is in the first column north of }i\}\\ \des_S(\tau)&=\{i\;|\; i+1 \text{ is in the first column south of }i\}\\ \des_{NE}(\tau)&=\{i\;|\; i+1 \text{ is in the second column northeast of }i\}\\ \des_{SE}(\tau)&=\{i\;|\; i+1 \text{ is in the second column southeast of }i\} \end{align*} Additionally, we say that $i$ is an $N$-descent (respectively $S$-descent) of $\tau$ if $i+1$ is in the first column and north (respectively south) of $i$. Similarly, we say that $i$ is an $NE$-descent (respectively $SE$-descent) of $\tau$ if $i+1$ is in the second column and northeast (respectively southeast) of $i$. Figure~\ref{fig:descents in tab} shows a $\tau\in \mathrm{SPCT}((2^n))$ where $n=10$. The arcs on the left connect $i$ and $i+1$ where both belong to the first column. In particular, the number of red arcs on the left is equal to $|\des_{N}(\tau)|$ whereas the number of purple arcs on the left is equal to $|\des_{S}(\tau)|$. The arcs on the right connect rows where $i$ is in the first column and $i+1$ is in the second column. In particular, the number of red arcs on the right is equal to $|\des_{NE}(\tau)|$ whereas the number of purple arcs on the right is equal to $|\des_{SE}(\tau)|$.
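As an illustration of this classification (ours, not part of the original text), the following sketch computes the quadruple $(|\des_N|,|\des_S|,|\des_{NE}|,|\des_{SE}|)$ for a $2$-columned tableau given row by row from north to south; the tableau used below is a hypothetical example, though one can check that it satisfies the triple condition.

```python
def descent_quadruple(rows):
    """Classify the descents arising from first-column entries of a
    2-columned tableau.  `rows` lists (first_column_entry, second_column_entry)
    per row, from north (row 1) to south.  Returns the counts
    (|des_N|, |des_S|, |des_NE|, |des_SE|)."""
    n2 = 2 * len(rows)
    # record, for every entry, its row index and its column (1 or 2)
    where = {}
    for r, (first, second) in enumerate(rows, start=1):
        where[first] = (r, 1)
        where[second] = (r, 2)
    counts = {"N": 0, "S": 0, "NE": 0, "SE": 0}
    for i, (r, col) in where.items():
        if col != 1 or i == n2:
            continue  # only first-column entries i != 2n are classified
        r_next, col_next = where[i + 1]
        if col_next == 1:
            counts["N" if r_next < r else "S"] += 1  # same column: N or S
        else:
            counts["NE" if r_next < r else "SE"] += 1  # second column: NE or SE
    return (counts["N"], counts["S"], counts["NE"], counts["SE"])
```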
\begin{figure}[htbp] \includegraphics[scale=0.80]{Example-AscDesTab-21.pdf} \caption{The classification of descents in the first column of {$\tau$.}} \label{fig:descents in tab} \end{figure} To enumerate ordered pairs $(\ensuremath{\operatorname{st}}_1(\tau),\ensuremath{\operatorname{st}}_2(\tau))$ where $\tau$ ranges over $2$-columned tableaux of size $2n$, we need to enumerate equivalence classes under $\sim_{(2^n)}$. By Lemma~\ref{lem:unique source sink}, we may equivalently enumerate sink tableaux of shape $(2^n)$. Such a sink $\tau$ can be alternatively characterized as one that satisfies $|\des_{NE}(\tau)|=0$. This motivates our study of the distribution of the quadruple of statistics $$(|\des_N(\tau)|,|\des_S(\tau)|,|\des_{NE}(\tau)|,|\des_{SE}(\tau)|)$$ as $\tau$ varies over all tableaux in $\mathrm{SPCT}((2^n))$. We establish that this distribution coincides with the distribution of left ascents, left descents, right ascents and right descents over the set of labeled binary trees on $n$ nodes. To this end, we need certain combinatorial notions attached to well-known `Catalan objects' known as Dyck paths. \subsection{Dyck paths, unlabeled and labeled}\label{subsec:dyck-ldyck} Given a nonnegative integer $n$, a \emph{Dyck path} of \emph{semi-length} $n$ is a lattice path in the plane beginning at $(0,0)$ and ending at $(2n,0)$ consisting of \emph{up-steps}, which correspond to a translation by {$(1,1)$}, and \emph{down-steps}, which correspond to a translation by $(1,-1)$. We denote the set of Dyck paths of semi-length $n$ by $\dyck_n$. The cardinality of $\dyck_n$ is well-known to be ${\Cat}_{n}$, the $n$-th Catalan number equaling $\frac{1}{n+1}\binom{2n}{n}$. We can identify a Dyck path $D\in \dyck_n$ with a ud-word of length $2n$ by writing $\upstep$ (respectively $\downstep$) for every up-step (respectively down-step) in $D$ from left to right. We denote this word by $\dword(D)$ and refer to it as the \emph{Dyck word} of $D$. 
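As a quick check of the ud-word description (a sketch of ours, not part of the original text), one can generate all Dyck words of a given semi-length directly and verify that their number is the Catalan number.

```python
from math import comb

def dyck_words(n):
    """Generate all Dyck words of semi-length n as strings over {'u', 'd'}."""
    def extend(word, ups, downs):
        if ups == downs == n:
            yield word
            return
        if ups < n:
            yield from extend(word + "u", ups + 1, downs)
        if downs < ups:  # a down-step never takes the path below the x-axis
            yield from extend(word + "d", ups, downs + 1)
    yield from extend("", 0, 0)

# sanity check: |Dyck_n| equals the n-th Catalan number C(2n, n)/(n+1)
for n in range(1, 7):
    assert len(list(dyck_words(n))) == comb(2 * n, n) // (n + 1)
```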
A Dyck path is \emph{prime} if it touches the $x$-axis at its initial and terminal points only. Every Dyck path $D$ can be factorized uniquely as a concatenation of prime Dyck paths $D_1\cdot D_2\cdots D_k$. We call each $D_i$ a \emph{factor} of $D$. Figure~\ref{fig:dyck} depicts an element $D\in \dyck_9$ with $4$ factors. We have $\dword(D)=\upstep\downstep \ \upstep\downstep \ \upstep\upstep\upstep\downstep\upstep\downstep\downstep\upstep\downstep\downstep \ \upstep\upstep\downstep\downstep$, where the spaces separate the Dyck words of the factors of $D$. \begin{figure}[htbp] \includegraphics[scale=0.75]{Example-DyckPath.pdf} \caption{A Dyck path $D$ of semi-length $9$.} \label{fig:dyck} \end{figure} Consider $D\in \dyck_n$. If $D$ is endowed with a labeling on its down-steps with distinct positive integers drawn from $[n]$, then we obtain a \emph{labeled Dyck path} $D'$ of semi-length $n$. We denote the set of labeled Dyck paths of semi-length $n$ by $\ldyck_n$. The cardinality of $\ldyck_n$ equals $n!\Cat_{n}$. We define prime labeled Dyck paths and factors of labeled Dyck paths in the obvious manner following the description in the unlabeled case. Given $D\in \ldyck_n$, we abuse notation and denote by $\dword(D)$ the Dyck word of the underlying unlabeled Dyck path. Constructing the \emph{labeled Dyck word} $\ldword(D)$ is slightly more involved and is described next. First we recursively assign labels to the up-steps from right to left. Consider the $i$-th up-step in $D$ for $i$ from $n$ down to $1$. Let $\mathfrak{D}_i$ (respectively $\mathfrak{U}_i$) be the set of labels that belong to down-steps (respectively up-steps) after the $i$-th up-step. Clearly $\mathfrak{U}_{n}=\emptyset$. Assign the minimum of $\mathfrak{D}_i\setminus \mathfrak{U}_i$ as the label of this $i$-th up-step. Note that as the algorithm executes, we always have that $\mathfrak{U}_i$ is a strict subset of $\mathfrak{D}_i$. 
Here we are crucially using the fact that $D$ is a (labeled) Dyck path. Thus, the minimum of $\mathfrak{D}_i\setminus \mathfrak{U}_i$ is well-defined and our procedure terminates when $i$ attains the value $1$. Next, read the resulting Dyck path, with labels on both up-steps and down-steps, from left to right and write ${\upstep}_{i}$ (respectively ${\downstep}_{i}$) for an up-step (respectively down-step) labeled $i$. The word thus obtained is the labeled Dyck word $\ldword(D)$. It is easy to see that for a labeled Dyck path $D$ with prime factorization $D_1\cdots D_k$, we have $$\ldword(D)=\ldword(D_1)\cdots \ldword(D_k).$$ Figure~\ref{fig:ldyck} shows a labeled Dyck path $D$ of semi-length $10$. The arcs join the up-steps and down-steps that share the same label during the procedure for computing $\ldword(D)$. We have $$\ldword(D)= {\upstep}_{7}{\upstep}_{3}{\downstep}_{7}{\downstep}_{3}\hspace{3mm}{\upstep}_{10}{\upstep}_{9}{\upstep}_{8}{\upstep}_{4}{\downstep}_{4}{\upstep}_{1}{\downstep}_{1}{\downstep}_{10}{\upstep}_{6}{\downstep}_{6}{\downstep}_{8}{\upstep}_{5}{\upstep}_{2}{\downstep}_{9}{\downstep}_{2}{\downstep}_{5}.$$ \begin{figure}[htbp] \includegraphics[scale=0.90]{Example-Ldword.pdf} \caption{A labeled Dyck path of semi-length $10$.} \label{fig:ldyck} \end{figure} \subsection{Bijection between $\ldyck_n$ and $\mathrm{SPCT}((2^n))$} \label{subsec:ldyck-prct} Recall that $|\ldyck_n|=n!\Cat_n$. By Theorem~\ref{thm:generalized shift map}, we have that $|\mathrm{SPCT}^{\sigma} ((2^n))|={|\ensuremath{\operatorname{SRT}}((2^n))|}$ for all $\sigma\in \sgrp{n}$. The following bijection between $\ensuremath{\operatorname{SRT}}((2^n))$ and $\dyck_n$ is folklore. For $T\in \ensuremath{\operatorname{SRT}}((2^n))$, the $i$-th step for $1\leq i\leq 2n$ in the associated Dyck path is an up-step if $i$ is in the second column of $T$ and it is a down-step otherwise.
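The right-to-left labeling procedure for up-steps can be sketched as follows; this is an illustration of ours, and the encoding of a labeled Dyck path as a ud-string together with the list of down-step labels (read from left to right) is our own convention. We verify the sketch against the labeled Dyck word of Figure~\ref{fig:ldyck}.

```python
def up_step_labels(word, down_labels):
    """Label the up-steps of a labeled Dyck path by the right-to-left
    procedure: the i-th up-step receives min(D_i \\ U_i), where D_i (resp. U_i)
    collects the labels on down-steps (resp. already-labeled up-steps) after it.
    Returns the labeled Dyck word as a list of (letter, label) pairs."""
    labels = {}  # position -> label, for both kinds of steps
    down_iter = iter(down_labels)
    for pos, c in enumerate(word):
        if c == "d":
            labels[pos] = next(down_iter)
    up_positions = [pos for pos, c in enumerate(word) if c == "u"]
    for pos in reversed(up_positions):  # i-th up-step, for i = n down to 1
        d_after = {labels[q] for q in range(pos + 1, len(word)) if word[q] == "d"}
        u_after = {labels[q] for q in range(pos + 1, len(word))
                   if word[q] == "u" and q in labels}
        labels[pos] = min(d_after - u_after)
    return [(c, labels[pos]) for pos, c in enumerate(word)]
```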
Thus, we infer that $|\mathrm{SPCT}((2^n))|=|\coprod_{\sigma\in \sgrp{n}} \mathrm{SPCT}^{\sigma}((2^n))|=\sum_{\sigma\in \sgrp{n}} |\ensuremath{\operatorname{SRT}}((2^n))|=n!\Cat_n$. By a simple extension of the aforementioned bijection, we obtain one between $\ldyck_n$ and $\mathrm{SPCT}((2^n))$ {that} associates a $D\in \ldyck_n$ with $\tau\in \mathrm{SPCT}((2^n))$ as follows. For $1\leq i\leq 2n$, the $i$-th step is an up-step if $i$ belongs to the second column of $\tau$. Otherwise, the $i$-th step is a down-step labeled by the index of the row occupied by $i$ in $\tau$. We denote this map from $\mathrm{SPCT}((2^n))$ to $\ldyck_n$ by $\prcttoldyck$ and defer the proof that this is indeed a bijection until after we describe the inverse map, denoted by $\ldycktoprct$. \begin{figure}[htbp] \includegraphics[scale=0.9]{Example-SPCTtoLDyck-1.pdf} \caption{Transforming an SPCT to a labeled Dyck path.} \label{fig:spcttoldyck} \end{figure} Let $D\in \ldyck_n$ with $\ldword(D)=w_1\cdots w_{2n}$. Construct a filling $\tau$ of shape $(2^n)$ as follows. For $1\leq i\leq n$, let $1\leq p<q\leq 2n$ be the positive integers such that $w_p={\upstep}_{i}$ and $w_q={\downstep}_{i}$. Then the $i$-th row of $\tau$ contains $q$ and $p$ in the first and second columns respectively. We define $\tau$ to be $\ldycktoprct(D)$. The next lemma establishes that $\ldycktoprct$ is the {inverse of} $\prcttoldyck$. \begin{lemma}\label{lem: prct_to_ldyck bijection} \emph{$\ldycktoprct$} is a bijection from $\ldyck_n$ to $\mathrm{SPCT}((2^n))$ with inverse given by \emph{$\prcttoldyck$}. \end{lemma} \begin{proof} Consider $D\in \ldyck_n$ and let $\tau=\ldycktoprct(D)$. To establish that $\tau$ is an $\mathrm{SPCT}$, we must show that the triple condition holds. Suppose that this is not the case. Then we have a configuration in $\tau$ of the type shown in Figure~\ref{fig:triple violation} such that $a>c>b$.
\begin{figure}[h] \centering \begin{align*} \ytableausetup{mathmode,boxsize=1.25em} \begin{ytableau} a & b\\ \none & \none[\vdots]\\ \none & c \end{ytableau} \end{align*} \caption{A triple configuration in $\ldycktoprct(D)$.} \label{fig:triple violation} \end{figure} We interpret this configuration in terms of $D$. Henceforth, in this proof, we assume that the up-steps in $D$ are labeled using the procedure for computing $\ldword(D)$. Assume that $a$ and $b$ belong to row $i$ and that $c$ belongs to row $j$, where $1\leq i<j\leq n$. Then the $a$-th step (respectively $b$-th step) of $D$ is a down-step (respectively up-step) labeled $i$. Let $S_b$ and $S_c$ be the sets of labels on the down-steps after the $b$-th and $c$-th steps, respectively. We have $S_c\subseteq S_b$ as $c>b$ and $i\in S_c$ as $a>c$. As the $c$-th step, which is an up-step, gets the label $j$, and $j>i$, we conclude that there exists an up-step after the $c$-th step that has the label $i$. This follows from the description of our procedure for constructing the labeled Dyck word. Thus, we arrive at a contradiction as the $b$-th step cannot be labeled $i$. We conclude that the triple condition holds in $\tau$ and that $\tau\in \mathrm{SPCT}((2^n))$. To show that $\ldycktoprct$ is a bijection, we establish its injectivity. If $\ldycktoprct(D_1)=\ldycktoprct(D_2)$ for labeled Dyck paths $D_1,D_2$, then we must have $\ldword(D_1)=\ldword(D_2)$. This implies that the unlabeled Dyck paths underlying $D_1$ and $D_2$ are the same. Additionally, the labels on the corresponding down-steps in $D_1$ and $D_2$ are equal. Thus, $D_1=D_2$, and since $|\ldyck_n|=|\mathrm{SPCT}((2^n))|=n!\Cat_n$, we conclude that $\ldycktoprct$ is {a bijection}. It is clear that $\prcttoldyck(\tau)=D$ as the entries in the first column of $\tau$ and the indices of the rows they belong to completely determine the down-steps and their labels. We conclude that $\prcttoldyck$ and $\ldycktoprct$ are bijections that are mutually inverse.
\end{proof} We proceed to describe unlabeled and labeled binary trees and subsequently introduce descent statistics that are equidistributed with the quadruple $(\des_N,\des_S,\des_{NE},\des_{SE})$. \subsection{Plane binary trees, unlabeled and labeled} A \emph{rooted tree} is a finite connected acyclic graph with a distinguished node called the \emph{root}. A \emph{plane binary tree} is a rooted tree in which every node has at most two children, of which at most one is called a \emph{left child} and at most one is called a \emph{right child}. Henceforth, we reserve the term binary tree for a plane binary tree. We denote the set of binary trees on $n\geq 1$ nodes by $\tree_n$. The set of nodes of $T\in \tree_n$ is denoted by $V(T)$, and the root node is referred to as $\rootof{T}$. We abuse notation on occasion and write $v\in T$ when we mean $v\in V(T)$. A node $v\in T$ is an \emph{internal node} if it has at least one child. Otherwise it is a \emph{leaf}. A \emph{\lpath} is a binary tree $T$ such that no node has a right child. We can decompose any binary tree $T$ into a set of \lpaths $\{T_1,\ldots,T_k\}$ by omitting all edges that connect an internal node to its right child, provided it exists. We refer to this set of \lpaths as the \emph{maximal left path decomposition} of $T$ and denote it by $\mlpd(T)$. Figure~\ref{fig:Tree_decomposition} depicts a binary tree $T$ on $10$ nodes with $V(T)={\{v_1,\ldots, v_{10}\}}$. The nodes that belong to the same shaded region constitute a \lpath in $\mlpd(T)$. Given $T\in \tree_n$, let $\mlpd(T)$ be {$\{T_1,\ldots, T_m\}$} where $T_1$ contains $\rootof{T}$. For every $T_i$ where $2\leq i\leq m$, there exists a unique node $v\in T$ such that $\rootof{T_i}$ is the right child of $v$ in $T$. We define $v$ to be the \emph{parent} of $T_i$ in $T$ and denote it by $\parent(T_i)$. In Figure~\ref{fig:Tree_decomposition}, consider the \lpath on nodes $v_{7}$ and $v_{8}$. Then its parent is $v_6$.
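Returning for a moment to Lemma~\ref{lem: prct_to_ldyck bijection}: in code, the map $\ldycktoprct$ amounts to reading off the positions of matching letter pairs. The sketch below is ours, using our (letter, label) encoding of a labeled Dyck word.

```python
def ldyck_to_spct(lword):
    """Build the 2-columned filling tau from a labeled Dyck word, given as a
    list of (letter, label) pairs with letters in {'u', 'd'}.  Row i of tau
    holds q in the first column and p in the second, where the p-th step is
    the up-step labeled i and the q-th step is the down-step labeled i."""
    n = len(lword) // 2
    up_pos, down_pos = {}, {}
    for pos, (c, lab) in enumerate(lword, start=1):  # positions are 1-indexed
        (up_pos if c == "u" else down_pos)[lab] = pos
    return [(down_pos[i], up_pos[i]) for i in range(1, n + 1)]
```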
Note the crucial fact that a binary tree $T$ is completely determined by specifying its maximal left path decomposition, the \lpath that contains the root, and the parents of the other left paths. \begin{figure}[htbp] \includegraphics[scale=1.1]{Example-TreeAndDecomposition-1.pdf} \caption{A binary tree with its maximal \lpath decomposition.} \label{fig:Tree_decomposition} \end{figure} A \emph{labeled plane binary tree} (or simply a \emph{labeled binary tree}) on $n\geq 1$ nodes is a binary tree whose nodes have distinct labels drawn from $[n]$. We denote the set of labeled binary trees on $n$ nodes for $n\geq 1$ by $\ltree_n$. Given a node $u\in T$, we refer to its label as $u^{\ell}$. For a labeled binary tree, we have the following refined classification for its edges. Suppose that $q$ is the right child of $p$. If $p^{\ell}< q^{\ell}$, then the edge joining $p$ and $q$ is a \emph{right ascent}. Otherwise it is a \emph{right descent}. Now suppose that $q$ is the left child of $p$. If $p^{\ell}< q^{\ell}$, then the edge joining $p$ and $q$ is a \emph{left ascent}. Otherwise it is a \emph{left descent}. Let $\mathrm{rasc}(T)$ (respectively, $\mathrm{rdes}(T)$, $\mathrm{lasc}(T)$ and $\mathrm{ldes}(T)$) {be the} number of right ascents (respectively, right descents, left ascents and left descents) in $T$. For the labeled binary tree $T$ in Figure~\ref{fig:Ascents-Descents}, we have that $\mathrm{rasc}(T)=3$, $\mathrm{rdes}(T)=1$, $\mathrm{lasc}(T)=1$, $\mathrm{ldes}(T)=3$. The thickened edges in the figure correspond to descents, both left and right. 
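These four edge statistics are straightforward to compute from child maps. The sketch below is ours; we identify each node with its label, and the tree used in the test is a hypothetical example encoded via its left- and right-child maps.

```python
def tree_statistics(root, left, right):
    """Compute (lasc, ldes, rasc, rdes) for a labeled binary tree whose nodes
    are identified with their labels; `left` and `right` map a label to the
    label of the corresponding child (absent keys mean no such child)."""
    lasc = ldes = rasc = rdes = 0
    stack = [root]
    while stack:
        p = stack.pop()
        if p in left:
            q = left[p]  # left edge: ascent iff parent label < child label
            lasc, ldes = (lasc + 1, ldes) if p < q else (lasc, ldes + 1)
            stack.append(q)
        if p in right:
            q = right[p]  # right edge: ascent iff parent label < child label
            rasc, rdes = (rasc + 1, rdes) if p < q else (rasc, rdes + 1)
            stack.append(q)
    return (lasc, ldes, rasc, rdes)
```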
\begin{figure}[ht] \includegraphics{Example-AscDesTree.pdf} \caption{A labeled binary tree with $\mathrm{rasc}(T)\!=\!3$, $\mathrm{rdes}(T)\!=\!1$, $\mathrm{lasc}(T)\!=\!1$, $\mathrm{ldes}(T)\!=\!3$.} \label{fig:Ascents-Descents} \end{figure} We will abuse notation and use the terms \lpath and maximal \lpath decomposition for their natural labeled analogues as well. \subsection{Labeled Dyck paths and labeled binary trees} \label{subsec:ltree<->ldyck} Note that since $|\tree _n|=\Cat _n$, the cardinalities of $\ldyck_n$ and $\ltree_n$ are equal. We seek a bijection between these two sets that tracks the quadruple of statistics $(\mathrm{lasc},\mathrm{ldes},\mathrm{rasc},\mathrm{rdes})$ on labeled binary trees in a manner that will eventually allow us to relate them to the quadruple {$(|\des_N|,|\des_S|,|\des_{NE}|,|\des_{SE}|)$} via the map $\ldycktoprct$. First, we need some more terminology pertaining to Dyck paths. A \emph{run} of $D\in \dyck_n$ is a maximal sequence of down-steps. A \emph{run} of $D\in \ldyck_n$ is defined similarly, except that we retain the labels. By considering all runs in $D\in \ldyck_n$ from right to left, we obtain the \emph{run sequence} of $D$. Consider a run in $D$ composed of $a$ steps. We can transform this run into a \lpath on $a$ nodes where the labels of the nodes read from root to leaf are the labels in the run read from right to left. Thus, from the run sequence of $D$, say $(R_1,\dots, R_m)$, we obtain a sequence of \lpaths $(T_1,\dots, T_m)$. This datum can be used to construct a labeled binary tree as described in Algorithm~\ref{alg:ldycktoltree}.
As usual, by the label on an up-step of a labeled Dyck path, we mean the label it acquires when computing its labeled Dyck word. \begin{algorithm}[htbp] \caption{{Transforming} a labeled Dyck path to a labeled binary tree \label{alg:ldycktoltree}} \begin{algorithmic}[1] \Require{$D\in\ldyck_n$.} \Ensure{$T\in\ltree_n$.} \vspace{5pt} \Function{ \ldycktoltree}{$D$} \State \parbox[t]{\dimexpr\linewidth-\algorithmicindent} {Let $(R_1,\ldots,R_m)$ be the run sequence of {$D$;}\strut} \State \parbox[t]{\dimexpr\linewidth-\algorithmicindent} {Let $(T_1,\ldots, T_m)$ be the sequence of \lpaths corresponding to the run sequence;\strut} \State Let $T\coloneqq T_1$, $i\coloneqq 2$; \While{$i \leq m$} \State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{$j\coloneqq $ label on the up-step immediately after the rightmost down-step in $R_i$;\strut} \State \parbox[t]{\dimexpr\linewidth-\algorithmicindent} {Make the \lpath $T_i$ the right subtree of the node labeled $j$ in $T$;\strut} \State Let $T$ be this updated tree; \State $i\coloneqq i+1$; \EndWhile \State\Return{$T$}; \EndFunction \end{algorithmic} \end{algorithm} Figure~\ref{fig:ldyck->ltree} demonstrates how Algorithm~\ref{alg:ldycktoltree} transforms the labeled Dyck path in Figure~\ref{fig:ldyck} into a labeled tree. \begin{figure}[htbp] \includegraphics{Example-LdycktoLtree-1.pdf} \caption{Transforming a labeled Dyck path to a labeled binary tree.} \label{fig:ldyck->ltree} \end{figure} Note that the run sequence determines the maximal labeled left path decomposition of the resulting binary tree. To determine its structure completely, we need to designate which \lpath contains the root of the final binary tree and subsequently determine the parent nodes of the other left paths. The procedure for computing the labeled Dyck word accomplishes the latter goal. The proof that $\ldycktoltree$ is a bijection is postponed until after discussing the inverse map, which is slightly more involved.
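Algorithm~\ref{alg:ldycktoltree} can be sketched as follows; this is an illustration of ours, in our (letter, label) encoding of a labeled Dyck word, with the loop attaching every left path $T_2,\ldots,T_m$ in turn.

```python
def ldyck_to_ltree(lword):
    """Run Algorithm 1 on a labeled Dyck word given as (letter, label) pairs.
    Returns (root_label, left, right), where left and right are child maps
    keyed by node label."""
    # locate the runs: maximal blocks of down-steps, as index ranges
    runs, i = [], 0
    while i < len(lword):
        if lword[i][0] == "d":
            j = i
            while j < len(lword) and lword[j][0] == "d":
                j += 1
            runs.append((i, j))  # the run occupies positions i..j-1
            i = j
        else:
            i += 1
    runs.reverse()  # run sequence is read from right to left
    left, right = {}, {}

    def left_path(start, end):
        """Turn a run into a left path; labels read root to leaf are the
        run's labels read from right to left.  Returns the path's root."""
        labels = [lword[p][1] for p in range(start, end)][::-1]
        for parent, child in zip(labels, labels[1:]):
            left[parent] = child
        return labels[0]

    root = left_path(*runs[0])  # T_1 contains the root of the final tree
    for start, end in runs[1:]:
        r = left_path(start, end)
        # up-step immediately after the rightmost down-step of this run
        j = lword[end][1]
        right[j] = r  # make this left path the right subtree of node j
    return root, left, right
```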
We note some key features of Algorithm~\ref{alg:ldycktoltree}. The details, which are straightforward, are left to the reader. \begin{lemma}\label{lem:two features ldycktoltree} Algorithm~\ref{alg:ldycktoltree} possesses the following features. \begin{enumerate} \item If $\emph{\ldword}(D)$ has last letter equal to ${\downstep}_i$ for some positive integer $i$, then the root of $\emph{\ldycktoltree}(D)$ has label $i$. \item If $\emph{\ldword}(D)$ has a contiguous subword of the type ${\downstep}_{i}{\downstep}_{j}$ for positive integers $i$ and $j$, then $i$ is the label of the left child of the node labeled $j$ in $\emph{\ldycktoltree}(D)$. \item If $\emph{\ldword}(D)$ has a contiguous subword of the type ${\downstep}_{i}{\upstep}_{j}$ for positive integers $i$ and $j$, then $i$ is the label of the right child of the node labeled $j$ in $\emph{\ldycktoltree}(D)$. \end{enumerate} \end{lemma} In Algorithm~\ref{alg:ltreetoldyck} we describe a procedure that converts $T\in \ltree_n$ to the labeled Dyck word of some $D\in \ldyck_n$.
\begin{algorithm}[htbp] \caption{{Transforming} a labeled binary tree to a labeled Dyck path\label{alg:ltreetoldyck}} \begin{algorithmic}[1] \Require{$T\in\ltree_n$.} \Ensure{A labeled Dyck word corresponding to a labeled Dyck path in $\ldyck_n$.} \vspace{5pt} \Function{ \ltreetoldyck}{$T$} \State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Let $Q\coloneqq \emptyset$, $\mathrm{DyckWord}\coloneqq \varepsilon$, $\mathrm{NumSteps}\coloneqq 0$, {$\mathrm{flag}\coloneqq 1$;}\strut} \State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Let $C$ be the unique \lpath in $\mlpd(T)$ containing $\rootof{T}$;\strut} \While{$\mathrm{NumSteps} < 2n$} \If{$\mathrm{flag}$ is equal to 1} \State Let $v_1,\dots,v_m$ be the nodes in $C$ from root to leaf; \For{$1\leq j \leq m$} \State $Q\coloneqq Q\cup \{v_j\}$; \Comment{Push $v_j$ into $Q$} \State $i\coloneqq $label of node $v_j$; \State $\mathrm{DyckWord}\coloneqq {\downstep}_{i}\cdot\mathrm{DyckWord}$; \State $\mathrm{NumSteps}\coloneqq \mathrm{NumSteps}+1$; \EndFor \State $\mathrm{flag}\coloneqq 0$; \Else{} \State Let $v$ be the node in ${Q}$ whose label $m$ is minimal; \State $Q\coloneqq Q\setminus\{v\}$; \Comment{Pop minimum element from $Q$} \State $\mathrm{DyckWord}\coloneqq {\upstep}_{m}\cdot \mathrm{DyckWord}$; \State $\mathrm{NumSteps}\coloneqq \mathrm{NumSteps}+1$; \State Let $v$ be the node in $T$ with label $m$; \If{$v$ has a right child $v'$ in $T$} \State Let $C$ be the unique \lpath in $\mlpd(T)$ rooted at {$v'$;} \State $\mathrm{flag}\coloneqq 1$; \EndIf \EndIf \EndWhile \State\Return{$\mathrm{DyckWord}$}; \EndFunction \end{algorithmic} \end{algorithm} We refer to the sequence of Steps 8, 9, 10 and 11 in Algorithm~\ref{alg:ltreetoldyck} as a (single) \emph{push} operation. The sequence of Steps 14, 15, 16 and 17 comprises a (single) \emph{pop} operation.
\begin{table}[htbp] \begin{center} \begin{tabular}{| l | l | l |} \hline $\mathrm{NumSteps}$ & $Q$ & Operation\\ \hline $1$ & $\{5\}$ & Push $5$ into $Q$\\ \hline $2$ & $\{2,5\}$ & Push $2$ into $Q$\\ \hline $3$ & $\{2,5,9\}$ & Push $9$ into $Q$ \\ \hline $4$ & $\{5,9\}$ & Pop $2$ from $Q$\\ \hline $5$ & $\{9\}$ & Pop $5$ from $Q$\\ \hline $6$ & $\{8,9\}$ & Push $8$ into $Q$\\ \hline $7$ & $\{6,8,9\}$ & Push $6$ into $Q$\\ \hline $8$ & $\{8,9\}$ & Pop $6$ from $Q$\\ \hline $9$ & $\{8,9,10\}$ & Push $10$ into $Q$\\ \hline $10$ & $\{1,8,9,10\}$ & Push $1$ into $Q$ \\ \hline $11$ & $\{8,9,10\}$ & Pop $1$ from $Q$\\ \hline $12$ & $\{4,8,9,10\}$ & Push $4$ into $Q$\\ \hline $13$ & $\{8,9,10\}$ & Pop $4$ from $Q$\\ \hline $14$ & $\{9,10\}$ & Pop $8$ from $Q$\\ \hline $15$ & $\{10\}$ & Pop $9$ from $Q$\\ \hline $16$ & $\emptyset$ & Pop $10$ from $Q$\\ \hline $17$ & $\{3\}$ & Push $3$ into $Q$\\ \hline $18$ & $\{3,7\}$ & Push $7$ into $Q$\\ \hline $19$ & $\{7\}$ & Pop $3$ from $Q$\\ \hline$20$ & $\emptyset$ & Pop $7$ from $Q$\\ \hline \end{tabular} \end{center} \caption{Algorithm~\ref{alg:ltreetoldyck} applied to the labeled binary tree in Figure~\ref{fig:ldyck->ltree}.} \label{tab:push-pop} \end{table} Table~\ref{tab:push-pop} demonstrates the execution of Algorithm~\ref{alg:ltreetoldyck} with the labeled binary tree of Figure~\ref{fig:ldyck->ltree} as input. Reading the sequence of pushes and pops from bottom to top in this table and noting a $\downstep$ for a {push} and an $\upstep$ for a {pop}, along with the appropriate subscript, we obtain the following $\mathrm{DyckWord}$ as final output. Note that this is the labeled Dyck word corresponding to the {labeled} Dyck path in Figure~\ref{fig:ldyck->ltree}.
{$$ {\upstep}_{7}{\upstep}_{3}{\downstep}_{7}{\downstep}_{3}{\upstep}_{10}{\upstep}_{9}{\upstep}_{8}{\upstep}_{4}{\downstep}_{4}{\upstep}_{1}{\downstep}_{1}{\downstep}_{10}{\upstep}_{6}{\downstep}_{6}{\downstep}_{8}{\upstep}_{5}{\upstep}_{2}{\downstep}_{9}{\downstep}_{2}{\downstep}_{5} $$} For the rest of this section, we adhere to the notation introduced in Algorithm~\ref{alg:ltreetoldyck}. Our next lemma shows that this algorithm is well-defined. In particular, it establishes that a pop is never performed on an empty $Q$ as the algorithm executes. The acyclicity of a tree guarantees that any node can be pushed at most once during the execution of Algorithm~\ref{alg:ltreetoldyck}. Furthermore, once it is pushed, it must take part in exactly one pop operation. In fact, the following stronger claim holds: Every node gets pushed and popped exactly once during the execution of our algorithm. This observation, which hinges on the fact that $T$ is connected, is present implicitly in the proof of the next lemma. \begin{lemma}\label{lem:validity of algorithm} In Algorithm~\ref{alg:ltreetoldyck}, there can never be a pop operation when $Q$ is empty. \end{lemma} \begin{proof} Suppose $Q$ becomes empty at some point during our algorithm upon popping the node $v$. Let $T_1\in \mlpd(T)$ be the \lpath that contains $v$. As $v$ is the last node to be popped from $Q$, we conclude that every node in $T_1$ other than $v$ has been popped from $Q$ before $v$.
If $v$ has a right child $v_0$ in $T$, then the next step of our algorithm, after setting flag equal to $1$, pushes all nodes that belong to the \lpath in $\mlpd(T)$ containing $v_0$ into {$Q$. This} ensures that the next pop is performed on a nonempty $Q$. Assume henceforth that $v$ does not have a right child in $T$. We claim that $\mathrm{NumSteps}$ equals $2n$ upon performing the pop that removes $v$ from $Q$, thereby implying that the algorithm terminates subsequently. To establish this claim, we proceed by contradiction. Suppose $\mathrm{NumSteps}$ is strictly less than $2n$ upon popping $v$. Then there exists a node $v' \in T$ that has not been pushed into $Q$ yet. For if there were no such node, {then every} node in $T$ would have been pushed and popped from $Q${, since $Q$ becomes empty upon popping $v$}, contradicting our premise that $\mathrm{NumSteps}<2n$. Let $T_1'\in \mlpd(T)$ be the \lpath containing $v'$. As $v'$ has not been pushed into $Q$ yet, we infer that no node in $T_1'$ has been pushed into $Q$. In particular, $\rootof{T}$ does not belong to $T_1'$ and $T_1'\neq T_1$. Let $T_2'\in \mlpd(T)$ be the \lpath that contains $\parent(T_1')$. As no node of $T_1'$ has been pushed into $Q$, we infer that no element in $T_2'$ has been pushed into $Q$ either. As before, we have that $\rootof{T}$ is not in $T_2'$ and $T_2'\neq T_1$. Continuing this inductively, we obtain an infinite sequence of distinct \lpaths $T_1',T_2',\dots$ such that $T_{i+1}'\in \mlpd(T)$ contains $\parent(T_i')$ for $i\geq 1$. Each of these \lpaths contains nodes that have not been pushed into $Q$. As $V(T)$ is finite, we have a contradiction. \end{proof} We note an important consequence of Lemma~\ref{lem:validity of algorithm}. At every stage during the execution of Algorithm~\ref{alg:ltreetoldyck}, the number of pushes weakly exceeds the number of pops. Additionally, every time a pop empties $Q$, the number of pushes equals the number of pops up until that stage.
Conversely, if the number of pops equals the number of pushes at any stage, then $Q$ must be empty. These observations imply that in every suffix of $\mathrm{DyckWord}$, there are at least as many $\downstep$'s as there are $\upstep$'s. In summary, we have the following. \begin{corollary}\label{cor:output is a dyck path} $\chi(\mathrm{DyckWord})$ is the Dyck word of a Dyck path. \end{corollary} Our next aim is to establish the stronger assertion that $\mathrm{DyckWord}$ is the labeled Dyck word of some labeled Dyck path. Let $D^u$ denote the Dyck path corresponding to $\chi(\mathrm{DyckWord})$. Let $D^{\ell}$ denote the labeled Dyck path whose underlying unlabeled Dyck path is $D^u$ and whose labels on the down-steps from left to right are derived from the indices of the $\downstep$'s in $\mathrm{DyckWord}$ from left to right. \begin{lemma} $\mathrm{DyckWord}$ equals $\emph{\ldword}(D^{\ell})$. \end{lemma} \begin{proof} Note that $\chi(\mathrm{DyckWord})$ equals $\dword(D^{\ell})$. Furthermore, by construction, the indices of the $\downstep$'s read from left to right in $\ldword(D^{\ell})$ coincide with the indices of the $\downstep$'s read from left to right in $\mathrm{DyckWord}$. Therefore, all we need to show is that the index of a $\upstep$ in $\mathrm{DyckWord}$ matches that of the corresponding $\upstep$ in $\ldword(D^{\ell})$. Let $W_{old}$ be the value of $\mathrm{DyckWord}$ at an intermediate stage during the execution of the algorithm just prior to a pop operation. Let $W_{new}$ be the value of $\mathrm{DyckWord}$ immediately after a single pop. Then $W_{new}={\upstep}_i \cdot W_{old}$ where $i$ is the minimum label amongst all nodes in $Q$ before the pop was performed. We discuss an alternate way to describe this value $i$. Let $A$ (respectively $B$) be the set of indices of $\downstep$s (respectively $\upstep$s) in $W_{old}$.
Then $A$ consists of labels of the nodes that have been pushed into $Q$ (up until the intermediate stage we are considering), whereas $B$ consists of labels of nodes that have been both pushed and popped. Hence, $A\setminus B$ contains all labels corresponding to nodes in $Q$ before the pop. Thus, we infer that $i$ is the minimum element in $A\setminus B$. Now consider the sub-path of $D^{\ell}$ that corresponds to $W_{old}$. The labels on the down-steps are precisely the elements of $A$. The sub-path of $D^{\ell}$ corresponding to $W_{new}$ begins with an up-step. This procedure for computing the labels on up-steps coincides precisely with the procedure that assigns labels to up-steps in the computation of the labeled Dyck word. This finishes the proof. \end{proof} In other words, Algorithm~\ref{alg:ltreetoldyck} outputs the labeled Dyck word of $D^{\ell}$ given a labeled binary tree $T$. We note certain features of this algorithm that are immediate from its description. \begin{lemma}\label{lem:two features ltreetoldyck} Algorithm~\ref{alg:ltreetoldyck} possesses the following three features. \begin{enumerate} \item If the label of $\rootof{T}$ is $i$, then the last letter in $\mathrm{DyckWord}$ is ${\downstep}_i$. \item If $i$ is the label of the left child of the node labeled $j$ in $T$, then $\mathrm{DyckWord}$ has a contiguous subword of the type ${\downstep}_{i}{\downstep}_{j}$. \item If $i$ is the label of the right child of the node labeled $j$ in $T$, then $\mathrm{DyckWord}$ has a contiguous subword of the type ${\downstep}_{i}{\upstep}_{j}$. \end{enumerate} \end{lemma} The reader is invited to compare the statement of Lemma~\ref{lem:two features ltreetoldyck} to that of Lemma~\ref{lem:two features ldycktoltree}. This comparison naturally leads us to our next theorem. From this point on, we identify $\ltreetoldyck(T)$ with the labeled Dyck path $D^{\ell}$ instead of $\ldword(D^{\ell})$.
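The labeling rule for up-steps in the preceding proof can be checked mechanically. The following Python sketch (our own ad-hoc encoding, not notation from the paper: a labeled Dyck word is a list of `('U', label)` and `('D', label)` pairs read left to right) verifies, on the word displayed at the beginning of this section, that every up-step's index equals the minimum of $A\setminus B$ computed from the letters strictly to its right. Since $\mathrm{DyckWord}$ is built by prepending, the suffix to the right of a letter records exactly the state of the computation at the moment that letter was produced.

```python
def check_upstep_labels(word):
    """Check that each up-step's label equals min(A \\ B), where A (resp. B)
    is the set of down-step (resp. up-step) labels occurring strictly to its
    right; this is the rule established in the proof above."""
    for t, (step, lab) in enumerate(word):
        if step == 'U':
            A = {l for s, l in word[t + 1:] if s == 'D'}
            B = {l for s, l in word[t + 1:] if s == 'U'}
            if not A - B or lab != min(A - B):
                return False
    return True

# The labeled Dyck word displayed at the start of this section.
example = [('U', 7), ('U', 3), ('D', 7), ('D', 3), ('U', 10), ('U', 9),
           ('U', 8), ('U', 4), ('D', 4), ('U', 1), ('D', 1), ('D', 10),
           ('U', 6), ('D', 6), ('D', 8), ('U', 5), ('U', 2), ('D', 9),
           ('D', 2), ('D', 5)]
```

One can likewise confirm on `example` the suffix condition behind Corollary~\ref{cor:output is a dyck path}: every suffix contains at least as many `'D'` letters as `'U'` letters.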
Our next result establishes that $\ltreetoldyck$ is a bijection and its inverse is the map $\ldycktoltree$. \begin{theorem} $\emph{\ltreetoldyck}: \ltree_n \to \ldyck_n$ is a bijection and its inverse is given by $\emph{\ldycktoltree}$. \end{theorem} \begin{proof} As $\ldyck_n$ and $\ltree_n$ have the same cardinality, to establish the claim it suffices to show that $\ldycktoltree\circ \ltreetoldyck$ is the identity map. Let $D^{\ell}$ be the labeled Dyck path corresponding to the labeled Dyck word output by Algorithm~\ref{alg:ltreetoldyck} for some $T\in \ltree_n$. From Lemmas~\ref{lem:two features ldycktoltree} and \ref{lem:two features ltreetoldyck}, it follows that $\ldycktoltree(D^{\ell})=T$. This finishes the proof. \end{proof} Figure~\ref{fig:master figure} should aid the reader in understanding our next theorem. \begin{theorem}\label{thm:stat-preserving bijection} $ \emph{\ldycktoltree}\circ \emph{\prcttoldyck}: \mathrm{SPCT}((2^n)) \to \ltree_n$ is a bijection that maps the ordered quadruple of statistics $(|\des_N|,|\des_S|,|\des_{NE}|,|\des_{SE}|)$ to $(\mathrm{lasc},\mathrm{ldes},\mathrm{rasc},\mathrm{rdes})$. \end{theorem} \begin{proof} It is clear that $\ldycktoltree\circ \prcttoldyck$ is a bijection. We focus on proving the second half of the theorem. Consider $\tau\in \mathrm{SPCT}((2^n))$. Let $D$ be $\prcttoldyck(\tau)$ and let $T$ be $\ldycktoltree(D)$. Suppose first that $i$ and $i+1$ both belong to the first column in the $p$-th and $q$-th row in $\tau$ respectively. Then the $i$-th step of $D$ is a down-step labeled $p$ and the $i+1$-th step is a down-step labeled $q$. In terms of $\ldword(D)$, this corresponds to an instance of a contiguous subword of the type ${\downstep}_p{\downstep}_q$. Lemma~\ref{lem:two features ldycktoltree} implies that $p$ is the label of the left child of the node labeled $q$ in $T$. If $i\in \des_N(\tau)$, then $p>q$ and the edge in $T$ joining the nodes labeled $p$ and $q$ is a left ascent.
Thus we infer that {$|\des_N(\tau)|=\mathrm{lasc}(T)$}. A similar argument proves that {$|\des_S(\tau)|=\mathrm{ldes}(T)$}. Now suppose that $i$ belongs to the first column in the $p$-th row while $i+1$ belongs to the second column in the $q$-th row in $\tau$. Then the $i$-th step of $D$ is a down-step labeled $p$ and the $i+1$-th step is an up-step labeled $q$. In terms of $\ldword(D)$, this corresponds to an instance of a contiguous subword ${\downstep}_p{\upstep}_q$. Lemma~\ref{lem:two features ldycktoltree} implies that the node labeled $p$ is the right child of the node labeled $q$ in $T$. If $i\in \des_{NE}(\tau)$, then $p>q$ and the edge joining the nodes labeled $p$ and $q$ is a right ascent. Thus, we infer that {$|\des_{NE}(\tau)|=\mathrm{rasc}(T)$}. A similar argument proves that {$|\des_{SE}(\tau)|=\mathrm{rdes}(T)$}. \end{proof} \begin{figure}[htbp] \includegraphics[scale=0.9]{Example-MasterBijection-1.pdf} \caption{The statistic-preserving bijection between $\mathrm{SPCT}((2^n))$ and $\ltree_n$ by way of labeled Dyck paths.} \label{fig:master figure} \end{figure} We return to our question at the beginning of this section, that of enumerating sink tableaux of shape $(2^n)$. Recall that a $2$-columned tableau $\tau$ is a sink if {$|\des_{NE}(\tau)|=0$}. By Theorem~\ref{thm:stat-preserving bijection}, the set of sink tableaux of shape $(2^n)$ is in bijection with the set of labeled binary trees on $n$ nodes that have no right ascents. By a result originally due to Pak-Postnikov \cite{PakPostnikov} (see also \cite{Pak-slides}), the number of such trees is $(n+1)^{n-1}$. Thus, we arrive at the following result. \begin{theorem}\label{thm:standardized 2-column} The set of ordered pairs $(\ensuremath{\operatorname{st}}_1(\tau),\ensuremath{\operatorname{st}}_2(\tau))$ as $\tau$ ranges over all $2$-columned {$\mathrm{SPCT}$s} of size $2n$ is in bijection with the set of labeled binary trees on $n$ nodes with no right ascents. 
Thus, its cardinality is $(n+1)^{n-1}$. \end{theorem} In the next section, we show that the set of ordered pairs in Theorem~\ref{thm:standardized 2-column} does in fact coincide with the set of compatible pairs. To this end, we establish that for every $(\ensuremath{\operatorname{st}}_1(\tau),\ensuremath{\operatorname{st}}_2(\tau))$ obtained from some $\tau \in \mathrm{SPCT}((2^n))$, we can construct an $\ensuremath{\operatorname{SCT}}$ $\tau'$ that has two adjacent columns, say $i$ and $i+1,$ satisfying $\ensuremath{\operatorname{st}}_i(\tau')=\ensuremath{\operatorname{st}}_1(\tau)$ and $\ensuremath{\operatorname{st}}_{i+1}(\tau')=\ensuremath{\operatorname{st}}_2(\tau)$. An alternative characterization for $(\ensuremath{\operatorname{st}}_1(\tau),\ensuremath{\operatorname{st}}_2(\tau))$ using pattern-avoidance is key. \section{Allowable pairs of permutations and acyclic graphs}\label{sec:allowable acyclic} We begin with a lemma characterizing standardized first and second column words of $2$-columned tableaux in terms of pattern-avoidance. The {straightforward} proof relies on the triple condition and is omitted. \begin{lemma}\label{lem:tau pattern avoidance} Consider $\tau \in \mathrm{SPCT} ((2^n))$. Denote the entry in the $i$-th row and $j$-th column by $\tau_{(i,j)}$. Then the following properties hold. \begin{enumerate} \item For $1\leq i<j\leq n$, if $\tau_{(i,1)} >\tau_{(j,1)}$, then $\tau_{(i,2)} > \tau_{(j,2)}$. \item For $1\leq i<j<k \leq n$, if $\tau_{(i,1)} <\tau_{(j,1)}< \tau_{(k,1)}$, then we cannot have that $\tau_{(i,2)}>\tau_{(k,2)}>\tau_{(j,2)}$. \end{enumerate} \end{lemma} Consider $\sigma,\gamma\in \sgrp{n}$. We say that $(\sigma,\gamma)$ is \emph{$(21,12)$-avoiding} if $\sigma(i)>\sigma(j)$ for integers $1\leq i<j\leq n$ implies that $\gamma(i)>\gamma(j)$.
Similarly, we say that it is \emph{$(123,312)$-avoiding} if $\stan(\sigma(i)\sigma(j)\sigma(k))=123$ for integers $1\leq i<j<k\leq n$ implies that $\stan(\gamma(i)\gamma(j)\gamma(k))\neq 312$. The statement that $(\sigma,\gamma)$ is $(21,12)$-avoiding is equivalent to saying that $\sigma \leq_{L} \gamma$, as it amounts to the containment $\Inv(\sigma)\subseteq \Inv(\gamma)$. We call an ordered pair of permutations $(\sigma,\gamma)$ an \emph{allowable pair} provided it is both $(21,12)$-avoiding and $(123,312)$-avoiding. Lemma~\ref{lem:tau pattern avoidance} states that $(\ensuremath{\operatorname{st}}_1(\tau),\ensuremath{\operatorname{st}}_2(\tau))$ is an allowable pair for any $2$-columned tableau $\tau$. \begin{remark} The choice of terminology may be explained by noticing that a pair $(\sigma,\gamma)$ is allowable in our sense if and only if the pair $(\sigma^c,\gamma^c)$ is allowable in the sense of \cite{AtkinsonThiyagarajah}. Here $\sigma^{c}$ refers to the \emph{complement} of $\sigma$, that is, the permutation obtained by replacing each entry $i$ with $n-i+1$, where $\sigma\in \sgrp{n}$. Under the complementation map, our pattern-avoidance characterization for allowable pairs is equivalent to the one noted in the remark preceding \cite[Lemma 3]{ALW}. \end{remark} Our next aim is to show that the set of allowable pairs is equal to the set of ordered pairs of permutations considered in Theorem~\ref{thm:standardized 2-column}. We achieve this by associating a directed acyclic graph to an allowable pair. The acyclicity of this graph implies that its node set can be endowed with the structure of a poset as follows. Given nodes $v,w$, we define $v\geq w$ if there exists a directed path from $v$ to $w$. For the graph we construct, a linear extension of the associated poset, known as a \emph{topological sort} in theoretical computer science, is a $2$-columned tableau. We provide the details next.
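Before turning to the graph construction, the two pattern conditions can be illustrated with a short Python sketch (our own illustrative code, with permutations in one-line notation as tuples; `is_allowable` and `count_allowable` are hypothetical names, not from the paper). Brute-force enumeration over all ordered pairs recovers, for small $n$, the count $(n+1)^{n-1}$ established later in this section.

```python
from itertools import combinations, permutations

def is_allowable(sigma, gamma):
    """Test whether (sigma, gamma) is an allowable pair, i.e. both
    (21,12)-avoiding and (123,312)-avoiding."""
    n = len(sigma)
    # (21,12)-avoiding: every inversion of sigma is also an inversion of
    # gamma, i.e. Inv(sigma) is contained in Inv(gamma).
    for i, j in combinations(range(n), 2):
        if sigma[i] > sigma[j] and gamma[i] < gamma[j]:
            return False
    # (123,312)-avoiding: positions carrying the pattern 123 in sigma must
    # not carry the pattern 312 in gamma.
    for i, j, k in combinations(range(n), 3):
        if sigma[i] < sigma[j] < sigma[k] and gamma[j] < gamma[k] < gamma[i]:
            return False
    return True

def count_allowable(n):
    """Number of allowable pairs in S_n x S_n, by brute force."""
    perms = list(permutations(range(1, n + 1)))
    return sum(is_allowable(s, g) for s in perms for g in perms)
```

For instance, `count_allowable(2)` returns $3$ and `count_allowable(3)` returns $16$, matching $(n+1)^{n-1}$ for $n=2,3$.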
Given permutations {$\sigma_1,\ldots ,\sigma_k\in \mathfrak{S}_n$} where $k\geq 2$, we say that the sequence $(\sigma_1,\ldots,\sigma_k)$ is an \emph{allowable sequence} of \emph{length} $k$ if $(\sigma_j,\sigma_{j+1})$ is allowable for all $1\leq j\leq k-1$. With an allowable sequence {$(\sigma_1,\ldots,\sigma_k)$} we associate a directed simple graph $\mathbf{G}\coloneqq \mathbf{G}(\sigma_1,\dots,\sigma_k)$ as follows. Consider the composition diagram of shape $(k^n)$. Place a node in the middle of each cell of this diagram. These nodes together form the node (or vertex) set of $\mathbf{G}$. We reference each node by the pair $(i,j)$ where $i$ is the row and $j$ is the column to which this node belongs. The edge set of $\mathbf{G}$ is determined by $(\sigma_1,\dots,\sigma_k)$ as follows. \begin{enumerate} \item For all $1\leq i\leq n$ and $1\leq j\leq k-1$, draw a directed edge from $(i,j)$ to $(i,j+1)$. We refer to these edges as \emph{horizontal edges}. \item For all $1\leq j\leq k$, draw a directed edge from $(p,j)$ to $(i,j)$ where $1\leq i\neq p\leq n$ are such that $\sigma_{j}(i)<\sigma_{j}(p)$. We refer to these edges as \emph{vertical edges}. \item For all $2\leq j\leq k$, draw a directed edge from $(p,j)$ to $(i,j-1)$ where $1\leq i< p\leq n$ are such that $\sigma_{j}(i)<\sigma_{j}(p)$. We refer to these edges as \emph{diagonal edges}. \end{enumerate} For any $1\leq j\leq k$ and $1\leq i\leq n$, the nodes $(i,j)$ are said to belong to the $j$-th column. Note that all horizontal edges are oriented from west to east, whereas all diagonal edges are oriented from southeast to northwest. Furthermore, they connect nodes that belong to consecutive columns. Figure~\ref{fig:acyclic} depicts the graph $\mathbf{G}$ associated to the allowable sequence $(123,132,321)$. Note that it does not contain any directed cycles. 
Thus, its nodes can be assigned distinct numbers from $1$ through $9$ such that all arrows point from nodes with larger labels to those with smaller labels. While this labeling is not necessarily unique, the conditions on the edge set guarantee that we obtain an $\mathrm{SPCT}$ of shape $(3^3)$ in this manner. We emphasize here that the condition for the existence of a diagonal edge is such that the triple condition, {using the version stated in \cite[Definition 4.1]{HLMvW-1}}, holds. \begin{figure}[htbp] \includegraphics[scale=0.6]{Example-AcyclicGraphCompatible-1.pdf} \caption{The directed graph $\mathbf{G}((123,132,321)).$} \label{fig:acyclic} \end{figure} \begin{remark} Henceforth, whenever we use the term cycle, we mean a simple directed cycle. Thus we do not allow repeating edges or nodes (except the initial and terminal nodes that are the same). Also, an edge directed from node $A$ towards node $B$ is denoted by $\overrightarrow{AB}$. \end{remark} \begin{theorem}\label{thm:G 2 permutation acyclic} Given an allowable pair $(\sigma_1,\sigma_2)$, we have that $\mathbf{G}(\sigma_1,\sigma_2)$ is acyclic. \end{theorem} \begin{proof} Throughout this proof, let $\mathbf{G}\coloneqq \mathbf{G}(\sigma_1,\sigma_2)$. Additionally, let the subgraph of $\mathbf{G}$ induced by the vertical edges be $\mathbf{G}'$. Note that $\mathbf{G}'$ is acyclic. Thus, any cycle in $\mathbf{G}$ must use at least one diagonal edge. Furthermore, if there are edges $\overrightarrow{AB}$ and $\overrightarrow{BC}$ in $\mathbf{G}'$, then there is a directed edge $\overrightarrow{AC}$. This implies that two consecutive vertical edges in a cycle in $\mathbf{G}$ may be replaced by a single vertical edge. Applying this reduction repeatedly to any cycle results in a (potentially) new cycle where no two consecutive edges are vertical. We call such a cycle \emph{reduced}. Note that $\mathbf{G}$ is acyclic if and only if it does not contain a reduced cycle. 
We first show by contradiction that $\mathbf{G}$ does not contain a reduced 3-cycle. Assume that there is a reduced 3-cycle in $\mathbf{G}$. This cycle must involve exactly one diagonal edge. Suppose this diagonal edge is directed from $A\coloneqq (i_1,2)$ to $B\coloneqq (i_2,1)$ where $i_1>i_2$. Let $C$ and $D$ be the nodes $(i_1,1)$ and $(i_2,2)$ respectively. Then we have the configuration shown in Figure~\ref{fig:3cyc} in $\mathbf{G}$. \begin{figure}[ht] \includegraphics[scale=0.5]{Example-3cycle-1b.pdf} \caption{Configuration in a potential 3-cycle.} \label{fig:3cyc} \end{figure} As the edge $\overrightarrow{AB}$ is diagonal, we conclude that there is a vertical edge $\overrightarrow{AD}$. As the pair $(\sigma_1,\sigma_2)$ is $(21,12)$-avoiding, there exists a vertical edge $\overrightarrow{CB}$. In a reduced 3-cycle that involves $\overrightarrow{AB}$, the third node participating in the cycle must be either $C$ or $D$. As the horizontal edges are oriented eastwards, $\mathbf{G}$ {cannot contain} a reduced 3-cycle. Next, we show that $\mathbf{G}$ does not contain a reduced $4$-cycle. Assume that $\mathbf{G}$ contains a reduced $4$-cycle. Such a cycle must involve exactly one diagonal edge, one horizontal edge and two non-consecutive vertical edges. Suppose it involves nodes $A$, $B$, $C$ and $D$ where the edge $\overrightarrow{AB}$ is diagonal. Let $A\coloneqq (i_1,2)$ and $B\coloneqq (i_2,1)$ where $i_1>i_2$. Figures~\ref{fig:case1}, \ref{fig:case2}, and \ref{fig:case3} depict all the possibilities for such a reduced $4$-cycle. We analyze each case separately.
\begin{figure}[ht] \begin{minipage}[b]{0.30\textwidth} \begin{centering} \includegraphics[scale=0.45]{Example-4cycle-3a.pdf} \caption{Case 1} \label{fig:case1} \end{centering} \end{minipage} \begin{minipage}[b]{0.30\textwidth} \begin{centering} \includegraphics[scale=0.45]{Example-4cycle-2a.pdf} \caption{Case 2} \label{fig:case2} \end{centering} \end{minipage} \begin{minipage}[b]{0.30\textwidth} \begin{centering} \includegraphics[scale=0.45]{Example-4cycle-1a.pdf} \caption{Case 3} \label{fig:case3} \end{centering} \end{minipage} \end{figure} In Case~1 shown in Figure~\ref{fig:case1}, let $E$ and $F$ be the nodes $(i_2,2)$ and $(i_1,1)$ respectively. Let $C$ and $D$ be the nodes $(i_3,1)$ and $(i_3,2)$ respectively where $i_3<i_2$. As $\overrightarrow{AB}$ is diagonal, following the argument used in establishing the non-existence of a reduced 3-cycle, we infer the existence of vertical edges $\overrightarrow{AE}$ and $\overrightarrow{FB}$. Observe that the vertical edges $\overrightarrow{FB}$ and $\overrightarrow{BC}$ together imply that $\sigma_1(i_3)<\sigma_1(i_2)<\sigma_1(i_1)$. Similarly, the vertical edges $\overrightarrow{DA}$ and $\overrightarrow{AE}$ together imply that $\sigma_2(i_2)<\sigma_2(i_1)<\sigma_2(i_3)$. These two sequences of inequalities contradict the fact that $(\sigma_1,\sigma_2)$ is $(123,312)$-avoiding. Therefore, the 4-cycle shown in Figure~\ref{fig:case1} cannot occur in $\mathbf{G}$. We will treat Cases 2 and 3 in Figures~\ref{fig:case2} and \ref{fig:case3} uniformly. Let $C$ be the node $(i_3,1)$ where $i_3> i_2$. Note that $i_3\neq i_1$ as that would give rise to a 3-cycle, which is impossible. Let $E$ be the node $(i_2,2)$. As $(\sigma_1,\sigma_2)$ is $(21,12)$-avoiding, the edge $\overrightarrow{BC}$ implies the existence of the vertical edge $\overrightarrow{ED}$. The diagonal edge $\overrightarrow{AB}$ implies the existence of the vertical edge $\overrightarrow{AE}$.
Now note that the nodes $A,D,E$ are involved in a $3$-cycle in $\mathbf{G}'$, an impossibility. At this point we have established that $\mathbf{G}$ does not contain any reduced 3-cycle or 4-cycle. Hence it does not contain a $3$-cycle or a $4$-cycle. We proceed by induction to show that $\mathbf{G}$ does not contain a reduced $k$-cycle for $k\geq 5$. Consider a reduced $k$-cycle in $\mathbf{G}$ whose nodes read by following the directed edges clockwise are {$A_1, A_2, \ldots, A_k$} where we further assume that the edge $\overrightarrow{A_1A_2}$ is diagonal. There exist two possibilities for the edge $\overrightarrow{A_2A_3}$. If it is horizontal, then the non-existence of a 3-cycle in $\mathbf{G}$ implies that there is a vertical edge $\overrightarrow{A_1A_3}$. This in turn yields a shorter cycle of length $k-1$ on the nodes {$A_1,A_3,\ldots, A_k$}. If, on the other hand, the edge $\overrightarrow{A_2A_3}$ is vertical, then we are guaranteed that the edge $\overrightarrow{A_3A_4}$ is horizontal. As $\mathbf{G}$ contains no reduced $4$-cycles, we infer the existence of the vertical edge $\overrightarrow{A_1A_4}$. Thus, from the original cycle we obtain a shorter cycle of length $k-2$ on the nodes {$A_1,A_4,\ldots, A_k$}. It follows that if $\mathbf{G}$ contains a reduced $k$-cycle, then it contains a strictly shorter cycle on a subset of the original set of nodes participating in the cycle. As $\mathbf{G}$ contains no 3-cycles or 4-cycles, it does not contain any reduced $k$-cycles for $k\geq 5$. This finishes the proof.\end{proof} As $\mathbf{G}(\sigma_1,\sigma_2)$ is acyclic for an allowable pair $(\sigma_1,\sigma_2)$ where $\sigma_1,\sigma_2\in \sgrp{n}$, we infer that there exists $\tau\in \mathrm{SPCT}((2^n))$ such that $(\ensuremath{\operatorname{st}}_1(\tau),\ensuremath{\operatorname{st}}_2(\tau))=(\sigma_1,\sigma_2)$.
Using Theorem~\ref{thm:standardized 2-column}, we conclude that the number of allowable pairs $(\sigma_1,\sigma_2)$ with $\sigma_1,\sigma_2\in \sgrp{n}$ is $(n+1)^{n-1}$. Of course, this fact was proved before \cite{ALW, AtkinsonThiyagarajah, Hamel} but our proof is different from the aforementioned ones. Armed with the above insight, we will show that allowable pairs are the same as compatible pairs. Prior to that, we need the following generalization of Theorem~\ref{thm:G 2 permutation acyclic}. \begin{theorem}\label{thm:general G acyclic} Consider the allowable sequence {$(\sigma_1,\ldots, \sigma_k)$} for $k\geq 2$. The directed graph {$\mathbf{G}(\sigma_1,\ldots, \sigma_k)$} is acyclic. \end{theorem} \begin{proof} Throughout this proof, let {$\mathbf{G}\coloneqq \mathbf{G}(\sigma_1,\ldots, \sigma_k)$}. Let $\mathbf{G}_h$ (respectively $\mathbf{G}_v$) be the subgraph of $\mathbf{G}$ induced by the horizontal edges (respectively vertical edges). All edges in $\mathbf{G}_h$ are oriented eastwards, whereas all edges in $\mathbf{G}_v$ go from larger values to smaller values in terms of the permutations in our allowable sequence. Thus, $\mathbf{G}_h$ and $\mathbf{G}_v$ are both acyclic. We will establish the assertion in the theorem by induction on $k$. The base case $k=2$ is Theorem~\ref{thm:G 2 permutation acyclic}. Assume that the claim holds for all allowable sequences of length $i$ where $2\leq i<k$. Suppose for the sake of contradiction that $\mathbf{G}$ is not acyclic. Let $\mathbf{C}$ be a cycle in $\mathbf{G}$. Observe that $\mathbf{C}$ must contain at least one node each from the first column and the $k$-th column. If it did not, then $\mathbf{C}$ would in fact be a cycle in either {$\mathbf{G}(\sigma_1,\ldots,\sigma_{k-1})$} or {$\mathbf{G}(\sigma_2,\ldots,\sigma_k)$}, considered as subgraphs of $\mathbf{G}$ in the natural manner. Our inductive hypothesis forbids either possibility. 
Since diagonal edges connect nodes in consecutive columns, $\mathbf{C}$ has nodes in all columns from $1$ through $ k$. Consider the induced subgraph of $\mathbf{C}$ formed from nodes that belong to the first and second columns. This induced subgraph is a disjoint union of directed paths, say {$\mathbf{P}_1,\ldots, \mathbf{P}_m$}. Suppose that, amongst the aforementioned paths, the paths {$\mathbf{P}_{i_1},\ldots, \mathbf{P}_{i_r}$} are those that contain at least one node from the first column. As our cycle is simple, the initial and terminal nodes of each such path are distinct and furthermore, belong to the second column. Thus, each $\mathbf{P}_{i_j}$ for $1\leq j\leq r$ contains at least one diagonal edge and one horizontal edge. As $\mathbf{G}(\sigma_1,\sigma_2)$ considered as a subgraph of $\mathbf{G}$ is acyclic, the vertical edge that connects the initial and terminal node of each $\mathbf{P}_{i_j}$ for $1\leq j\leq r$ acquires a unique orientation. Replacing each $\mathbf{P}_{i_j}$ with the vertical edge connecting its initial and terminal nodes gives another cycle $\mathbf{C'}$ in $\mathbf{G}$. Our construction implies that $\mathbf{C'}$ does not involve any node in the first column. Thus, {$\mathbf{G}(\sigma_2,\ldots,\sigma_k)$} considered as a subgraph of $\mathbf{G}$ contains $\mathbf{C'}$, which is a contradiction. This finishes the proof. \end{proof} The acyclicity of {$\mathbf{G}(\sigma_1,\ldots,\sigma_k)$} implies that we can label its nodes with distinct positive integers from $1$ through $kn$ such that all edges in $\mathbf{G}$ point from nodes with larger labels to those with smaller ones. Figure~\ref{fig:tableau given allowable} demonstrates the resulting $\mathrm{SPCT}$ for the directed graph from Figure~\ref{fig:acyclic}. 
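The construction just described is easy to prototype. The Python sketch below (our own hypothetical helper names, with permutations in one-line notation as tuples) builds the edge set of $\mathbf{G}(\sigma_1,\ldots,\sigma_k)$ from the three edge rules and then runs Kahn's topological-sort algorithm, handing out the labels $kn, kn-1, \ldots, 1$ so that every edge points from a larger label to a smaller one; the vertical edges then force the $j$-th column of the resulting filling to standardize to $\sigma_j$.

```python
from itertools import product

def build_graph(seq):
    """Edge set of G(sigma_1,...,sigma_k) on nodes (row i, column j),
    following the three edge rules in the text."""
    n, k = len(seq[0]), len(seq)
    edges = set()
    # Horizontal edges, oriented west to east.
    for i, j in product(range(1, n + 1), range(1, k)):
        edges.add(((i, j), (i, j + 1)))
    # Vertical edges: within column j, from the row carrying the larger
    # value of sigma_j to the row carrying the smaller one.
    for j in range(1, k + 1):
        s = seq[j - 1]
        for i, p in product(range(1, n + 1), repeat=2):
            if i != p and s[i - 1] < s[p - 1]:
                edges.add(((p, j), (i, j)))
    # Diagonal edges: from (p, j) to (i, j-1) for i < p with
    # sigma_j(i) < sigma_j(p), oriented southeast to northwest.
    for j in range(2, k + 1):
        s = seq[j - 1]
        for p in range(1, n + 1):
            for i in range(1, p):
                if s[i - 1] < s[p - 1]:
                    edges.add(((p, j), (i, j - 1)))
    return edges

def topological_filling(seq):
    """Kahn's algorithm: assign labels kn, ..., 1 along a topological order,
    so every edge points from a larger label to a smaller one; returns None
    if the graph contains a directed cycle."""
    n, k = len(seq[0]), len(seq)
    edges = build_graph(seq)
    nodes = [(i, j) for i in range(1, n + 1) for j in range(1, k + 1)]
    indeg = {v: 0 for v in nodes}
    for _, b in edges:
        indeg[b] += 1
    label, next_label = {}, n * k
    sources = [v for v in nodes if indeg[v] == 0]
    while sources:
        v = sources.pop()
        label[v] = next_label
        next_label -= 1
        for a, b in list(edges):
            if a == v:
                edges.remove((a, b))
                indeg[b] -= 1
                if indeg[b] == 0:
                    sources.append(b)
    return label if len(label) == len(nodes) else None
```

Running it on the allowable sequence $(123,132,321)$ of Figure~\ref{fig:acyclic} produces a filling of the diagram of shape $(3^3)$ in which column $j$ standardizes to $\sigma_j$.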
\begin{figure}[htbp] \centering \includegraphics[scale=0.8]{Example-TableauGivenAllowable-1.pdf} \caption{An $\mathrm{SPCT}$ corresponding to $\mathbf{G}((123,132,321))$.} \label{fig:tableau given allowable} \end{figure} \begin{theorem}\label{thm:existence of prct} Given an allowable sequence {$(\sigma_1,\ldots,\sigma_k)$} where {$\sigma_1,\ldots,{\sigma _k} \in \sgrp{n}$}, there exists an $\mathrm{SPCT}$ {$\tau$ of} shape $(k^n)$ such that $\ensuremath{\operatorname{st}}_i(\tau)=\sigma_i$ for $1\leq i\leq k$. \end{theorem} To obtain a similar statement for $\ensuremath{\operatorname{SCT}}$s, we require that $\sigma_1$ be the identity permutation in $\sgrp{n}$. First, we use the weak Bruhat order to extend allowable pairs to allowable sequences whose first coordinate is the identity. \begin{lemma}\label{lem:extending allowable pairs} Given an allowable pair $(\delta_1,\delta_2)$ where $\delta_1,\delta_2\in \sgrp{n}$, there exists an allowable sequence $(\sigma_1,\ldots,\sigma_k)$ with $k\geq 2$ such that $\sigma_1=\epsilon_n$, $\sigma_{k-1}=\delta_1$, and $\sigma_{k}=\delta_2$; here $k=2$ occurs precisely when $\delta_1=\epsilon_n$. \end{lemma} \begin{proof} Consider a maximal chain $\sigma_1\prec_L \sigma_2 \prec_L \cdots \prec_L \sigma_{k-1}=\delta_1$ in the weak Bruhat order, where $\sigma_1=\epsilon_n$. We claim that {$(\sigma_1,\ldots,\sigma_{k-1})$} is an allowable sequence. It suffices to show that if $\gamma_1,\gamma_2\in \sgrp{n}$ satisfy $\gamma_1\prec_L \gamma_2$, then $(\gamma_1,\gamma_2)$ is an allowable pair. Since $\gamma_1\prec_L\gamma_2$, we know that $\gamma_2=s_p\gamma_1$ for some $p$ where $p$ comes before $p+1$ in $\gamma_1$. As inversions in $\gamma_1$ are also inversions in $\gamma_2$, we infer that $(\gamma_1,\gamma_2)$ is $(21,12)$-avoiding. We proceed to show that $(\gamma_1,\gamma_2)$ is $(123,312)$-avoiding as well. Suppose there are $1\leq i<j<k\leq n$ such that $\gamma_1(i)<\gamma_1(j)<\gamma_1(k)$.
We claim that we cannot have $\gamma_2(j)<\gamma_2(k)<\gamma_2(i)$. Suppose, to the contrary, that this is the case. As $\gamma_2$ is obtained from $\gamma_1$ by swapping $p$ and $p+1$, we have that $\gamma_1$ and $\gamma_2$ differ in exactly two positions. If neither $p$ nor $p+1$ belongs to $\{\gamma_1(i),\gamma_1(j),\gamma_1(k)\}$, then $\gamma_2(i)<\gamma_2(j)<\gamma_2(k)$ as well. If both $p$ and $p+1$ belong to $\{\gamma_1(i),\gamma_1(j),\gamma_1(k)\}$, then $\stan(\gamma_2(i)\gamma_2(j)\gamma_2(k))$ is a permutation that is distance $1$ from the identity in the Hasse diagram of the weak Bruhat order on $\sgrp{3}$. In particular, $\stan(\gamma_2(i)\gamma_2(j)\gamma_2(k))\neq 312$. If exactly one of $p$ or $p+1$ belongs to $\{\gamma_1(i),\gamma_1(j),\gamma_1(k)\}$, then $\stan(\gamma_1(i)\gamma_1(j)\gamma_1(k))=\stan(\gamma_2(i)\gamma_2(j)\gamma_2(k))$. Thus, $(\gamma_1,\gamma_2)$ is $(123,312)$-avoiding and we conclude that $(\gamma_1,\gamma_2)$ is an allowable pair. The preceding argument implies that {$(\sigma_1,\ldots,\sigma_{k-1}=\delta_1)$} is an allowable sequence and therefore, {$(\sigma_1,\ldots,\sigma_{k-1},\delta_2)$} is an allowable sequence as well. \end{proof} We have the following corollary of Lemma~\ref{lem:extending allowable pairs}, which establishes that allowable pairs and compatible pairs are the same notion. \begin{corollary}\label{cor:existence of srcts} Given an allowable pair $(\sigma,\gamma)$ where $\sigma,\gamma\in \sgrp{n}$, there exists an $\ensuremath{\operatorname{SCT}}$ $\tau$ of shape $(k^n)$ for some $k\geq 2$ such that {$\ensuremath{\operatorname{st}}_{k-1}(\tau)=\sigma$ and $\ensuremath{\operatorname{st}}_k(\tau)=\gamma$}. \end{corollary} \begin{proof} Construct an allowable sequence {$(\sigma_1,\ldots,\sigma_{k-1},\sigma_k)$} where $\sigma_1=\epsilon_n$, $\sigma_{k-1}=\sigma$ and $\sigma_k=\gamma$. Then {$\mathbf{G}(\sigma_1,\ldots,\sigma_k)$} is acyclic. The conclusion now follows.
\end{proof} \section{Final remarks} \begin{enumerate} \item As mentioned earlier, the representation theory of $H_n(0)$ in type $A$ is related to the algebra of quasisymmetric functions. Indeed, there is a map $ch$ called the \emph{quasisymmetric characteristic} \cite{DKLT} that associates a quasisymmetric function to an $H_n(0)$-module. Mimicking the proof of \cite[{Theorem 5.2}]{TvW}, one can establish that $$ch(\mathbf{S}_{\alpha})=\sum_{\tau\in \mathrm{SPCT}(\alpha)} F_{\comp(\tau)}.$$ Here $\comp(\tau)$ is the composition of $n$ naturally associated with the subset $\des(\tau)$ of $[n-1]$, and $F_{\gamma}$ is the fundamental quasisymmetric function indexed by a composition $\gamma$ \cite[{Chapter 7}]{stanley-ec2}. It would be interesting to investigate the algebraic/combinatorial properties of $ch(\mathbf{S}_{\alpha})$ in addition to the representation-theoretic properties of the $\mathbf{S}_{\alpha}$. \item In the introduction we mentioned that the study of ascents and descents on labeled binary trees is tied to the enumeration of chambers in various Coxeter deformations. The number of tableaux in $\mathrm{SPCT}((2^n))$ is $n!\Cat_{n}$, which equals the number of regions of the Catalan arrangement in $\mathbb{R}^n$ defined by the hyperplanes $x_i-x_j=0,\pm 1$ for $1\leq i<j\leq n$. The sink tableaux of shape $(2^n)$ are distinguished representatives of the equivalence classes under $\sim_{(2^n)}$, and by Theorem~\ref{thm:standardized 2-column} there are $(n+1)^{n-1}$ of them. This is the number of regions of the Shi arrangement in $\mathbb{R}^n$ defined by the hyperplanes $x_i-x_j=0, 1$ for $1\leq i<j\leq n$. In view of this, it would be interesting to understand how the coarsening of the regions of the Catalan arrangement into the regions of the Shi arrangement relates to the grouping of tableaux in $\mathrm{SPCT}((2^n))$ into equivalence classes under $\sim_{(2^n)}$.
Our $0$-Hecke operators translate into operators on regions of the Catalan arrangement and this viewpoint merits further study. \end{enumerate} \section*{Acknowledgements} We would like to thank Sara Billey, Ira Gessel and Sean Griffin for extremely helpful discussions. We would also like to thank the anonymous referees for their valuable feedback and suggestions.
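The pattern conditions defining allowable pairs, and the covering step $\gamma_2 = s_p\gamma_1$ used in the proof of Lemma~\ref{lem:extending allowable pairs}, lend themselves to a mechanical check. The following sketch (illustrative Python; all function names are our own) verifies on $\sgrp{4}$ that every cover in the left weak Bruhat order is an allowable pair, i.e. is $(21,12)$- and $(123,312)$-avoiding:

```python
from itertools import combinations, permutations

def standardize(seq):
    # relative-order pattern (standardization) of a sequence of distinct values
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def avoids_pair(p1, p2, pat1, pat2):
    # True if no common index set carries pattern pat1 in p1 and pat2 in p2
    k = len(pat1)
    for idx in combinations(range(len(p1)), k):
        if standardize([p1[i] for i in idx]) == pat1 and \
           standardize([p2[i] for i in idx]) == pat2:
            return False
    return True

def is_allowable(p1, p2):
    # allowable pair: simultaneously (21,12)-avoiding and (123,312)-avoiding
    return (avoids_pair(p1, p2, (2, 1), (1, 2)) and
            avoids_pair(p1, p2, (1, 2, 3), (3, 1, 2)))

def left_covers(p):
    # covers of p in the left weak Bruhat order: swap the values v and v+1
    # whenever v occurs before v+1 in one-line notation
    for v in range(1, len(p)):
        i, j = p.index(v), p.index(v + 1)
        if i < j:
            q = list(p)
            q[i], q[j] = q[j], q[i]
            yield tuple(q)
```

Running `is_allowable(p, q)` over all covers `q` of every `p` in $\sgrp{4}$ returns `True` throughout, in agreement with the lemma; the pair $(21, 12)$ itself is rejected, as it must be.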
\section{Introduction} Pairing is an important component of the correlations in atomic nuclei at low-excitation energy \cite{heyde:1994,dean:2003,brink:2005}. The Sn isotopes provide a unique laboratory to probe neutron-neutron pairing correlations, because the large proton shell gap at $Z=50$ ensures that the low-lying nuclear structure is largely unaffected by proton particle-hole excitations across the shell gap. Moreover, experimental data on the Sn isotopes in three major shells have become available in recent years thanks to intensive experimental activity with radioactive beam facilities. There exist several theoretical approaches to investigate pairing correlations in atomic nuclei, ranging from fundamental ab initio calculations to studies based on a more phenomenological footing \cite{dean:2003}. In the present contribution, we will employ a Woods-Saxon \cite{bohr:1998} plus level-independent Bardeen-Cooper-Schrieffer (BCS) pairing Hamiltonian \cite{bardeen:1957,bohr:1958} as a global probe for pairing correlations in the ground states of the Sn isotopes. The level-independent, or reduced, BCS Hamiltonian has a complete basis of Bethe Ansatz eigenstates \cite{richardson:1963,richardson:1964a}, and belongs to the class of Richardson-Gaudin (RG) integrable models \cite{gaudin:1976,dukelsky:2004a}. Integrability offers unique opportunities to investigate pairing correlations. On the one hand, the RG variables in the pair-product structure allow for a transparent graphical representation, as well as a clear-cut connection with bosonization approximations \cite{ring:2004} via a pseudo-deformation of the quasispin algebra \cite{debaerdemacker:2012b}. On the other hand, physical observables related to particle removal and addition properties \cite{grasso:2012} can be obtained conveniently using Slavnov's theorem for the RG model \cite{faribault:2008}.
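As a minimal illustration of the Bethe Ansatz structure, consider the one-pair ($N_p=1$) limit of the reduced BCS Hamiltonian, where the Richardson construction collapses to a single secular equation $1 = g\sum_i (\Omega_i/2)/(E - 2\varepsilon_i)$ for the pair energy $E$. The sketch below is ours, not part of the original analysis: the levels are the Woods-Saxon single-particle energies quoted in Section 2, but the coupling $g = -0.2$\,MeV is an assumed illustrative value rather than the fitted $g_0/\sqrt{A}$.

```python
# One collective pair in the reduced BCS Hamiltonian: the Richardson
# equations reduce to  1 = g * sum_i (Omega_i/2) / (E - 2*eps_i).
# Levels (MeV) and degeneracies Omega = 2j+1 follow the Woods-Saxon
# spectrum for 100Sn quoted in Section 2; g is an assumed value.

levels = [(-11.164, 6), (-10.275, 8), (-9.124, 2), (-8.766, 4), (-7.754, 12)]
g = -0.2  # MeV, attractive; illustrative only

def secular(E):
    return g * sum(0.5 * omega / (E - 2.0 * eps) for eps, omega in levels) - 1.0

# The collective root lies below twice the lowest level; bracket and bisect.
lo, hi = -80.0, 2 * levels[0][0] - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if secular(lo) * secular(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
E_pair = 0.5 * (lo + hi)
print(E_pair)  # collective pair energy, below the uncorrelated value 2*eps_min
```

The difference between `E_pair` and $2\varepsilon_{\min}$ gives the correlation energy gained by the pair; for the many-pair case this single equation is replaced by the full set of coupled RG equations.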
\section{Richardson-Gaudin integrability for Sn isotopes} The reduced BCS Hamiltonian is given by \cite{heyde:1994} \begin{equation}\label{richardson:hamiltonian} \hat{H}=\sum_{i=1}^m \varepsilon_i\hat{n}_i + g\sum_{i,k=1}^m\hat{S}^\dag_i \hat{S}_k, \end{equation} with $\hat{S}^\dag_i=\sum_{m_i>0}(-)^{j_i-m_i}a^\dag_{j_im_i}a^\dag_{j_i-m_i}$ the nucleon-pair creation operator in a single-particle level $\varepsilon_i$ with (spherical) quantum numbers ($i\equiv n_i,l_i,j_i$) and of degeneracy $\Omega_i=2j_i+1$. This Hamiltonian supports a complete set of Bethe Ansatz eigenstates parametrised by the set of RG variables $\{x_\alpha\}$ that are a solution of the RG equations \cite{richardson:1963,richardson:1964a}. The associated eigenstate energy is then given as $E=\sum_{\alpha=1}^{N_p}x_\alpha+\sum_{i=1}^m\varepsilon_i v_i$, with $v_i$ the seniority \cite{talmi:1993} and $N_p$ the number of pairs. The single-particle levels are provided by a Woods-Saxon potential \cite{bohr:1998}, for which we used a recent global parametrisation \cite{schwierz:2007}; the single-particle energy spectrum for \textsuperscript{100}Sn is given in Table \ref{table:tdadecomposition}. We followed a global prescription $g=g_0/\sqrt{A}$ for the pairing interaction, in order to reproduce the three-point pairing gaps $\Delta^{(3)}(A)=(-)^A[BE(A)-2BE(A-1)+BE(A-2)]$ \cite{bohr:1998}, presented in Figure \ref{figure:S2nDelta}b. \begin{figure}[!htb] \begin{center} \includegraphics{fig1-S2nDelta} \caption{Experimental (squares) and theoretical (circles) two-neutron separation energies $S_{2n}$ (a) and three-point pairing gaps $\Delta^{(3)}$ (b). Experimental data taken from \cite{audi:2003}.}\label{figure:S2nDelta} \end{center} \end{figure} The two-neutron separation energies $S_{2n}=[BE(A)-BE(A-2)]$ \cite{bohr:1998} are given in Figure \ref{figure:S2nDelta}a, following a general linear trend, with the exception of a small kink around mid shell, signaling a sub-shell closure.
The calculated curve is smoother than the experimental values at this point, consistent with the overestimated pairing gaps $\Delta^{(3)}$ around mid shell. Recent measurements showed a decrease in the $B(E2:0^+_1\rightarrow 2^+_1)$ strength around mid shell \cite{jungclaus:2011}, which was qualitatively attributed \cite{morales:2011} to this sub-shell effect in the seniority scheme \cite{talmi:1993}. Figure \ref{figure:variables} depicts the RG variables for the ground state of the even-even \textsuperscript{102-130}Sn isotopes, and sheds more light on the sub-shell structure. Weakly correlated pair states give rise to a clustering of RG variables around the single-particle poles in the complex plane, whereas collective pairing states organise the RG variables along a broad arc in the complex plane \cite{dukelsky:2004a,debaerdemacker:2012b}. The pairing interaction in the lighter isotopes is strong enough to distribute the RG variables along an arc in the complex plane; however, the arc extends only over the $d_{5/2}$ and $g_{7/2}$ sub-shell single-particle poles. For the heavier nuclei, the pairs separate into two distinct sets, with seven RG variables clustering around the $d_{5/2}$ and $g_{7/2}$ sub-shell poles and the remaining ones forming a collective arc around the other poles. For the medium-heavy nuclei, there is a gradual transition between the two situations. This structure can be quantified using the pseudo-deformation scheme, where all RG variables can be labeled according to their collective behaviour in the Tamm-Dancoff Approximation (TDA) (see Table \ref{table:tdadecomposition}) \cite{debaerdemacker:2012b}. From the table, it can be seen that the TDA structure is consistent with the discussed sub-shell structure.
The lightest isotopes are consistent with a collective TDA condensation in the $d_{5/2}$ and $g_{7/2}$ sub-shells, whereas the TDA structure of the heavier isotopes points towards a normal filling of the $d_{5/2}$ and $g_{7/2}$ sub-shells, with the additional pairs collectively distributed over the $s_{1/2}$, $d_{3/2}$ and $h_{11/2}$ sub-shells. \begin{figure}[!htb] \begin{center} \includegraphics{fig2-rgvariables} \caption{The RG variables (circles) and single-particle poles (squares) of the even-even \textsuperscript{102-130}Sn isotopes.}\label{figure:variables} \end{center} \end{figure} \begin{table}[!htb] \begin{center} \begin{tabular}{lr|ccccccccccccccc} level & $\varepsilon_i$ [MeV] & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline\hline $d_{5/2}$ & -11.164 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 3 & 3 & 3 & 3 & 3 & 3 & 3\\ $g_{7/2}$ & -10.275 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 6 & 7 & 4 & 4 & 4 & 4 & 4\\ $s_{1/2}$ & -9.124 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 5 & 6 & 1 & 1\\ $d_{3/2} $ & -8.766 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 6 & 2\\ $h_{11/2}$ & -7.754 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5\\ \hline \end{tabular} \end{center} \caption{The single-particle energies $\varepsilon_{i}$ obtained from a Woods-Saxon potential \cite{schwierz:2007}, and the TDA eigenmode decomposition of the $0^+$ ground state for the even-even isotopes \textsuperscript{102-130}Sn. The number of active pairs $N_p$ in the isotope \textsuperscript{A}Sn is given in the upper row ($N_p=(A-50)/2$).}\label{table:tdadecomposition} \end{table} \section{Conclusions} We have investigated pairing correlations in the Sn isotopes by inspecting the location of the RG variables with respect to the single-particle poles in the complex plane, generated by a schematic Woods-Saxon plus reduced BCS Hamiltonian. The results point towards a sub-shell structure, consistent with previous studies.
We expect this structure to also be reflected in the relevant transition rates; this will be investigated in future publications. \ack SDB is an FWO-Vlaanderen post-doctoral fellow and acknowledges an FWO travel grant for a ``long stay abroad'' at the University of Amsterdam (The Netherlands). VH acknowledges financial support from the FRS-FNRS Belgium. This project is also supported by Belspo IAP Grant no P7/12 (SDB, VH, and KH).
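The pairing observables used above are simple finite differences of binding energies; a minimal sketch (the $BE$ values below are made-up numbers for illustration only, not experimental data):

```python
# Finite-difference pairing observables from binding energies BE(A).
# The BE entries are hypothetical values chosen purely for illustration.

BE = {116: 982.4, 117: 990.0, 118: 1000.0}  # MeV, hypothetical

def S2n(A):
    # two-neutron separation energy  S_2n = BE(A) - BE(A-2)
    return BE[A] - BE[A - 2]

def Delta3(A):
    # three-point pairing gap  Delta(3) = (-1)^A [BE(A) - 2 BE(A-1) + BE(A-2)]
    return (-1) ** A * (BE[A] - 2 * BE[A - 1] + BE[A - 2])

print(S2n(118), Delta3(118))
```

With real mass-table input (e.g. the evaluation of \cite{audi:2003}), these two functions generate the experimental curves of Figure \ref{figure:S2nDelta}.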
\section{Introduction}\label{sec:intro} As first demonstrated by Nagaoka, in the limit of infinite Hubbard repulsion $U$ the ground state for simple bipartite lattices in the nearest-neighbour approximation is a saturated ferromagnetic state for a low density $\delta$ of current carriers (doubly occupied states (``doubles'') or empty states (``holes'') in an almost half-filled band)~\cite{Nagaoka:1966}. Nagaoka considered the stability of the saturated ferromagnetic state (sFM) and found its spin-wave instability with increasing $\delta$ and decreasing $U$. Roth applied a variational principle to this problem and obtained two critical concentrations~\cite{Roth:1969}. The first one, $\delta_{\mathrm{c}}$, corresponds to the instability of the saturated ferromagnetic state, and the second one, $\delta_{\mathrm{c}}^{\prime}$, to the second-order transition from non-saturated ferromagnetism into the paramagnetic state. Zarubin and Irkhin \cite{Zarubin:2004,Zarubin:2012} have applied the $1/z$-expansion of the Green's functions in the many-electron representation \cite{Hubbard-IV:1965,Irkhin:1994} to the Hubbard model and obtained an interpolation description of saturated and non-saturated ferromagnetism. When the Heisenberg exchange $J$ is introduced ($t-J$ model), a tendency to antiferromagnetism occurs, since the ground state at $n = 1$ is an AFM insulator. The hole states in an AFM matrix (for an empty conduction band) in the nearest-neighbor hopping approximation at $J = 0$ were found to be incoherent \cite{Brinkman:1970, Varma:1988, Kane:1989}. For finite $J$ the states near the band bottom form a narrow coherent band with a small residue of order $|J/t| \ll 1$ and a heavy mass $\sim |t/J|$ \cite{Kane:1989}.
However, this picture is broken in several ways: (i) in the presence of next-nearest-neighbor hopping, which strongly affects the form of the magnetic order; (ii) at a finite density of carriers, which makes the Neel AFM order unfavorable; (iii) at finite Hubbard $U$, when a large number of spin excitations can be involved. The competition of FM and AFM ordering results in the occurrence of spiral magnetic ordering \cite{Igoshev:2010} or magnetic phase separation \cite{Visscher:1973, Igoshev:2010, Igoshev:2015}. These results were obtained under the assumption that saturated ferromagnetism is the ground state at finite doping and sufficiently large $U$. Here we present a more general physical picture taking into account finite next-nearest-neighbor electron hopping, which results, in particular, in the occurrence of an unusual correlated antiferromagnetic state even at infinite $U$. \section{Formalism}\label{sec:formalism} We consider the Hubbard model~\cite{Hubbard-I:1963} \begin{equation} \label{eq:original_H} H=\sum_{ij\sigma\sigma'} t_{ij}\delta_{\sigma\sigma'} c^\dag_{i\sigma}c^{}_{j\sigma'}+U\sum_i n_{i\uparrow}n_{i\downarrow}, \end{equation} with the electron hopping $t_{ij} = -t$ for the nearest neighbors and $t'$ for the next-nearest neighbors (we assume $t>0$); $c^\dag_{i\sigma},c^{}_{i\sigma}$ are the electron creation and annihilation operators, respectively, $n_{i\sigma}=c^\dag_{i\sigma}c^{}_{i\sigma}$, $i$ is the site number, and $\sigma$ is the spin projection. To treat plane magnetic spirals, we apply a local spin-space rotation around the $x$ axis by the angle $\mathbf{QR}_i$ (where $\bf Q$ is the spiral wave vector and ${\bf R}_i$ is the site position), which matches the magnetization vectors of different sites along, say, the $z$ axis.
This maps the spiral magnetic state onto an effective ferromagnetic one, but the hopping term in the Hamiltonian becomes non-diagonal with respect to the index $\sigma$: $t_{ij}\delta_{\sigma\sigma'}\rightarrow t^{\sigma\sigma'}_{ij}=\exp[\mathrm{i}\mathbf{Q}(\mathbf{R}_i-\mathbf{R}_j)\sigma^x]_{\sigma\sigma'}t_{ij}$ in Eq. (\ref{eq:original_H}). The Hartree--Fock treatment of the many-particle Coulomb interaction term replaces it by an effective field $U\langle n_{i\bar\sigma}\rangle$, which mixes the averaged contributions from singly and doubly occupied states. However, this is not satisfactory even qualitatively, especially at large $U$. A simple way of taking into account the correlation effects is an extension of the configuration space by a bosonic sector, introducing the {\it slave-boson} annihilation (creation) operators $e_i(e_i^\dag)$, $p_{i\sigma}(p_{i\sigma}^\dag), d_i(d_i^\dag)$ for empty, singly and doubly occupied states, respectively~\cite{Kotliar_SB:1986, Fresard:1992}. The transitions between the site states originating from intersite electron transfer are now accompanied by corresponding transitions in the bosonic sector. The equivalence of the original and new descriptions is achieved by the replacement $c_{i\sigma}\rightarrow \mathsf{z}_{i\sigma}c_{i\sigma}$, where $\mathsf{z}_{i\sigma} = (1 - d^\dag_i d^{}_i - p^\dag_{i\sigma}p^{}_{i\sigma})^{-1/2}\left(e^\dag_ip^{}_{i\sigma} + p^\dag_{i\bar\sigma}d^{}_{i} \right)(1 - e^\dag_i e^{}_i - p^\dag_{i\bar\sigma}p^{}_{i\bar\sigma} )^{-1/2}$, which extends the action of $c_{i\sigma}$ to the bosonic subspace, in conjunction with the constraints \begin{equation} \label{eq:eta-constraint} e^\dag_ie^{}_i+\sum_\sigma p^\dag_{i\sigma}p^{}_{i\sigma}+d^\dag_id^{}_i=1, \end{equation} \begin{equation} \label{eq:lmb-constraint} d^\dag_id^{}_i+p^\dag_{i\sigma}p^{}_{i\sigma}=c^\dag_{i\sigma}c^{}_{i\sigma}.
\end{equation} The presence of the constraints can be taken into account within the functional integral formalism via the Lagrange multipliers $\eta_i$ for Eq.~(\ref{eq:eta-constraint}) and $\lambda_{i\sigma}$ for Eq.~(\ref{eq:lmb-constraint}) introduced into the action. Within the saddle-point approximation $e_i,p_{i\sigma},d_i$ are replaced by $i$-independent $c$-numbers, and $\mathsf{z}_{i\sigma}$ by $z_\sigma =(d^2+p_\sigma^2)^{-1/2}(ep_\sigma+p_{\bar\sigma}d)(e^2+p_{\bar\sigma}^2)^{-1/2}\le 1$. Then the thermodynamical potential $\Omega$ has the form \begin{equation}\label{eq:Omega_general} \Omega = \Omega_c + \Omega_b, \end{equation} where $\Omega_c = -T\sum_{\nu\mathbf{k}} \ln(1 + \exp(-\beta (E_\nu(\mathbf{k})-\mu)))/N$, $\Omega_b = - 2\lambda d^2 - \lambda (p^2_{\uparrow} + p^2_\downarrow) + \Delta (p^2_{\uparrow} - p^2_\downarrow)$, $\mu$ being the chemical potential, and \begin{equation} E_{\nu}(\mathbf{k})=(z^2_\uparrow+z^2_\downarrow)e_{\rm s}(\kk)/2+\lambda + (-1)^\nu\sqrt{D_\mathbf{k}} \end{equation} are the eigenvalues of the effective fermionic Hamiltonian \begin{equation}\label{eq:H_eff} H^c_{\sigma\sigma'}(\mathbf{k}) = \lambda - \Delta\sigma^z_{\sigma\sigma'} + z_\sigma z_{\sigma'}(e_{\rm s}(\kk)\delta_{\sigma\sigma'} + e_{\rm a}(\kk)\sigma^x_{\sigma\sigma'}), \end{equation} \begin{equation}\label{eq:Dk_def} D_\mathbf{k}=\left((z^2_\uparrow-z^2_\downarrow)e_{\rm s}(\kk)/2 - \Delta \right)^2+(e_{\rm a}(\kk) z_\uparrow z_\downarrow)^2 \end{equation} and $e_{\rm s,a}(\mathbf{k}) =(t_{\mathbf{k}+\mathbf{Q}/2}\pm t_{\mathbf{k}-\mathbf{Q}/2})/2$. For convenience we have introduced $\lambda=(\lambda_\uparrow+\lambda_\downarrow)/2$, $\Delta=-(\lambda_\uparrow-\lambda_\downarrow)/2$. Direct calculation of the action extremum with respect to the boson variables and Lagrange multipliers yields the SBA equations, see \cite{Igoshev:2015}.
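The closed form for $E_\nu(\mathbf{k})$ can be checked numerically against a direct diagonalization of the $2\times2$ matrix $H^c(\mathbf{k})$ of Eq.~(\ref{eq:H_eff}); in the sketch below all parameter values (the hoppings, residues, $\lambda$, $\Delta$ and the sample $\mathbf{k}$-point) are arbitrary illustrations, not solutions of the SBA equations.

```python
import math

# Check E_nu(k) = (z_up^2 + z_dn^2) e_s/2 + lam + (-1)^nu sqrt(D_k)
# against trace/determinant diagonalization of the 2x2 matrix H^c(k).
# All parameter values are arbitrary illustrations.

t, tp = 1.0, 0.2            # hoppings (t' = 0.2 t)
z_up2, z_dn2 = 0.9, 0.1     # local residues squared
lam, Delta = 0.3, 0.8       # Lagrange multipliers
Qx, Qy = math.pi, math.pi   # AFM wave vector
kx, ky = 0.7, -1.1          # sample k-point

def t_k(kx, ky):
    # square-lattice dispersion with nearest (-t) and next-nearest (t') hopping
    return -2*t*(math.cos(kx) + math.cos(ky)) + 4*tp*math.cos(kx)*math.cos(ky)

e_s = 0.5*(t_k(kx + Qx/2, ky + Qy/2) + t_k(kx - Qx/2, ky - Qy/2))
e_a = 0.5*(t_k(kx + Qx/2, ky + Qy/2) - t_k(kx - Qx/2, ky - Qy/2))

z_up, z_dn = math.sqrt(z_up2), math.sqrt(z_dn2)
H = [[lam - Delta + z_up2*e_s, z_up*z_dn*e_a],
     [z_up*z_dn*e_a,           lam + Delta + z_dn2*e_s]]

# generic 2x2 symmetric eigenvalues via trace and determinant
tr = H[0][0] + H[1][1]
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
eigs = sorted([(tr - math.sqrt(tr*tr - 4*det))/2,
               (tr + math.sqrt(tr*tr - 4*det))/2])

# closed form of the text
D_k = ((z_up2 - z_dn2)*e_s/2 - Delta)**2 + (e_a*z_up*z_dn)**2
E = sorted([(z_up2 + z_dn2)*e_s/2 + lam + s*math.sqrt(D_k) for s in (-1, 1)])
```

The two routes agree to machine precision, since the half-difference of the diagonal entries of $H^c$ reproduces the first term of $D_\mathbf{k}$ and the off-diagonal entry the second.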
We introduce the electronic density $n \equiv \sum_\sigma\langle n_{i\sigma}\rangle = \sum_\sigma p^2_\sigma + 2d^2$, and the amplitude of the (sublattice) magnetization $m \equiv \sum_\sigma\sigma\langle n_{i\sigma}\rangle = p^2_\uparrow - p^2_\downarrow$. The electronic Green's function \begin{equation} G_{\sigma\sigma'}(\mathbf{k}, E) =\frac1{N}\sum_{i} \exp(-{\rm i}\mathbf{k}\mathbf{R}_{ij})\langle \! \langle \mathsf{z}_{i\sigma}c_{i\sigma} |\mathsf{z}^\dag_{j\sigma'}c^\dag_{j\sigma'} \rangle \! \rangle_E \end{equation} is replaced in the spirit of the SBA by \begin{equation} G_{\sigma\sigma'}(\mathbf{k}, E) = z_\sigma z_{\sigma'} \left(E - H^c(\mathbf{k})\right)^{-1}_{\sigma\sigma'} + G^{\rm inc}_{\sigma\sigma'}(\mathbf{k}, E), \end{equation} where $G^{\rm inc}$ contains both the incoherent contributions to the Green's function and the contribution of the interaction of electrons with well-defined collective excitations~\cite{Kane:1989}. Using the Bogolubov transformation which diagonalizes $H^c$ (see Eq.~(\ref{eq:H_eff})), $c_{\mathbf{k}\sigma} = \sum_{\nu}T_{\sigma,\nu}(\mathbf{k})\alpha_{\mathbf{k}\nu}$, we obtain the expression for the coherent part of the Green's function \begin{equation} \sum_{\sigma\sigma'}G^{\rm coh}_{\sigma\sigma'}(\mathbf{k}, E) = \sum_\nu \frac{a_\nu(\mathbf{k})}{E - E_\nu(\mathbf{k})}, \end{equation} where \begin{equation}\label{eq:local_z} a_\nu(\mathbf{k}) = \sum\nolimits_{\sigma\sigma'}z_{\sigma}z_{\sigma'}\bar{T}_{\sigma,\nu}(\mathbf{k})T_{\sigma',\nu}(\mathbf{k}) \end{equation} are bilinears of the {\it local} residues $z_\sigma$.
The loss of quasiparticle weight in the coherent states is seen from the sum rule \begin{equation}\label{eq:sum_rule} \int \rho(E)\, dE = \sum\nolimits_{\nu\mathbf{k}}a_\nu(\mathbf{k}) = \sum\nolimits_{\sigma}z_\sigma^2, \end{equation} where $\rho(E) = -\pi^{-1}\sum_{\mathbf{k}\sigma\sigma'}{\rm Im}G^{\rm coh}_{\sigma\sigma'}(\mathbf{k}, E)$ is the coherent contribution to the density of states~(DOS). For infinite $U$ and $n < 1$ we have $d = 0$, so that $e^2 = \delta = 1 - n$; for $n > 1$ we have $e = 0$ and should put $d^2 = \delta = n-1$. However, for simplicity we present the formulas in terms of $e^2$ only (note that the results for $n > 1$ are obtained from those for $n <1$ by the replacement $n\rightarrow 2-n, t'\rightarrow - t'$). Unlike the HFA ($\Delta_{\rm HFA} = Um/2$), the solution for $\Delta$ remains bounded even at $U \rightarrow\infty$. Now we consider analytically the important case of AFM (or spiral) order at a small number of holes ($\delta \rightarrow 0$). Since $(z_{\uparrow}^2 - z_{\downarrow}^2)e_{\rm s}(\kk)/2 - \Delta < 0$ for most $\mathbf{k}$-points in the Brillouin zone, we can expand (\ref{eq:Dk_def}) in $z_{\uparrow}z_{\downarrow}$. The behavior of $p_\downarrow$ depends dramatically on the value of the lattice sum \begin{equation} C = \frac1{N}\sum_{\mathbf{k}}\frac{e_{\rm a}^2(\kk)}{(e_{\rm s}(\mathbf{k}_m) - e_{\rm s}(\kk))^2}, \end{equation} $\mathbf{k}_m$ being the position of the maximum of the lower ($\nu = 1$) subband. For $\mathbf{Q} = \mathbf{Q}_{\rm AFM}$ ($\mathbf{Q}_{\rm AFM}= (\pi,\pi)$ for the square and $\mathbf{Q}_{\rm AFM}= (\pi, \pi, \pi)$ for the simple cubic (sc) lattice) we have $e_{\rm s}(\kk)\propto t'$, and $C$ decreases as $|t'|$ increases. At $C < 1$ we get $p_\downarrow \propto e$ and \begin{equation}\label{eq:C_expr} \omega\equiv \lim_{e\rightarrow 0} (p^2_\downarrow/e^2) = C/(1 - C).
\end{equation} At the same time, $z_{\uparrow}^2 \rightarrow (1 + \omega)^{-1}$ is finite, and $z_{\downarrow}^2 = e^2$ in the limit $\delta \rightarrow 0$, so that the direct AFM gap $\Delta = e_{\rm s}(\kk_m)(1 - C)/2$ does not vanish. In the case $C > 1$ equation (\ref{eq:C_expr}) is violated and actually $p_\downarrow \propto \sqrt{e}$. In this case $\lim_{e\rightarrow 0}(p^4_\downarrow/e^2) = x^2$, where \begin{equation}\label{eq:x} x^2 = \frac{\kappa + e_s(\mathbf{k}_m)}{N^{-1}\sum_{\mathbf{k}}e_{\rm a}^2(\kk)(e_{\rm s}(\kk) + \kappa)^{-2}[e_{\rm a}^2(\mathbf{k})/(e_{\rm s}(\kk) + \kappa) - e_{\rm s}(\kk)]}, \end{equation} and $\kappa$ satisfies \begin{equation}\label{eq:delta_equation} (1/N)\sum_{\mathbf{k}}e_{\rm a}^2(\kk)/(\kappa + e_{\rm s}(\kk))^2 = 1. \end{equation} This results in $1 - m \sim 2x\sqrt{\delta}$, and both $\Delta \sim -\kappa\sqrt{\delta/x}/2$ and $z_{\uparrow}^2 \sim \sqrt{\delta/x}$ vanish as $\sqrt{\delta}$. Direct calculation of the lattice sum $C$ allows one to determine the character of the AFM state in the vicinity of half-filling. For the square lattice with $t'<0$ we find $C > 1$ at $0 < -t' < -t_{\rm c} = t/\sqrt{2\pi}\approx 0.4t$ and $C < 1$ otherwise. For $t' > 0$, $e_{\rm a}(\mathbf{k}_m)\ne 0$, and $C$ always diverges, which is connected with the stability of the sFM state. For the sc lattice we have $C < 1$, which implies ``usual'' antiferromagnetic behavior. To consider the competition of the sFM and AFM states in the limit $\delta\rightarrow0$, we expand the free energy $\mathcal{F} = \Omega + \mu n$ of the spiral state (see Eq.~(\ref{eq:Omega_general})) in $\delta$: \begin{equation}\label{eq:F_expansion} \mathcal{F}_{\rm AFM} = \delta \left(\kappa + (1/{N})\sum\nolimits_{\mathbf{k}} {e_{\rm a}^2(\kk)}/{(\kappa + e_{\rm s}(\kk)) } \right) + o(\delta), \end{equation} where $\kappa$ is a solution of equation (\ref{eq:delta_equation}) in the case $C > 1$ and equals $-e_{\rm s}(\kk_m)$ otherwise.
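The threshold behavior of the lattice sum $C$ for the square lattice with $\mathbf{Q}=(\pi,\pi)$ can be probed numerically. Writing $t_{\mathbf{k}} = -2t(\cos k_x + \cos k_y) + 4t'\cos k_x\cos k_y$ gives $e_{\rm s}(\mathbf{k}) = 4t'\sin k_x\sin k_y$ and $e_{\rm a}(\mathbf{k}) = 2t(\sin k_x + \sin k_y)$, so that for $t'<0$ the maximum of $e_{\rm s}$ equals $4|t'|$. The discretization below is our own sketch (grid size and sampled $t'/t$ values are illustrative) and should be consistent with $C>1$ for $|t'|$ below, and $C<1$ above, the quoted $t/\sqrt{2\pi}\approx 0.4t$:

```python
import math

# Midpoint-grid estimate of the lattice sum C for the square lattice with
# Q = (pi, pi) and t' < 0, in units t = 1.  The integrand is bounded: near
# the points where 1 + sin(kx)sin(ky) -> 0 the numerator vanishes as well.

def C_sum(tp_over_t, L=400):
    # tp_over_t = t'/t < 0; average over an L x L midpoint grid of the BZ
    acc = 0.0
    for i in range(L):
        kx = (i + 0.5) * 2 * math.pi / L
        sx = math.sin(kx)
        for j in range(L):
            ky = (j + 0.5) * 2 * math.pi / L
            sy = math.sin(ky)
            num = (2 * (sx + sy)) ** 2                       # e_a^2 / t^2
            den = (4 * abs(tp_over_t) * (1 + sx * sy)) ** 2  # (e_s(k_m)-e_s(k))^2 / t^2
            if den > 1e-12:                                  # skip the removable points
                acc += num / den
    return acc / L**2
```

Since $C \propto 1/t'^2$ in this geometry, the sum decreases monotonically with $|t'|$, crossing $C=1$ at the critical next-nearest-neighbor hopping.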
For the sFM (Nagaoka) state \begin{equation}\label{eq:F_Nagaoka_expansion} \mathcal{F}_{\rm sFM} = -4\delta(t + t') + o(\delta). \end{equation} The expansion of Eq.~(\ref{eq:F_expansion}) up to $\delta^2$ yields a description of phase separation (PS) into an (almost) uncorrelated AFM state at $\delta = 0$ and a strongly correlated AFM state at finite doping. \section{Results} We start from the case $U=\infty$, focusing on the properties of the system in the close vicinity of $n = 1$ (small $\delta$). The ground-state diagram of the square lattice, calculated by comparing $\Omega$ for the different phases in terms of $n$ and $t'\ge 0$, is depicted in Fig.~\ref{fig:phase_diagram_sq_inf}. It contains the regions of all the commensurate magnetic phases (antiferromagnetic (AFM) and ferromagnetic (FM)), as well as the spiral magnetic states and the paramagnetic phase. For $t'=0$ the picture is symmetric with respect to half-filling due to particle-hole symmetry. The antiferromagnetic state at $n=1$ is replaced by the saturated FM state at arbitrarily small doping, in accordance with the Nagaoka theorem~\cite{Nagaoka:1966}. At moderate doping $\delta\sim0.3$ the FM phase goes over to the spiral $(Q,\pi)$ structure through a first-order transition with PS. The $(Q,\pi)$ state smoothly transforms into the $(0,\pi)$ order, which corresponds to a layered antiferromagnet. At large doping $\delta\sim0.6$ the magnetic order becomes suppressed and a second-order transition to the paramagnetic (PM) phase occurs. Finite $t'$ values destroy the particle-hole symmetry and the diagram becomes strongly asymmetric. In the hole-doped half of the diagram the FM phase gradually displaces the other states with increasing $t'/t$, and at $t'\gtrsim0.27t$ it occupies the whole $n<1$ region. In the electron-doped half of the diagram ($n>1$) the FM phase region, on the contrary, becomes narrower with increasing $t'/t$, eventually being replaced by the diagonal spiral $(Q,Q)$ order phase region at $t'\sim (0.1-0.15)t$.
The regions of the spiral magnetic phases, which adjoin the ferromagnetic regions through phase separation, narrow together with the FM regions up to $t'\sim0.17t$, eventually being replaced by the AFM state. Upon further increase of $t'$, the boundary between the AFM and PM states becomes weakly dependent on $\delta$. \begin{figure}[!h] \center \includegraphics[width=0.49\textwidth]{diagram_inf_sq.eps} \caption{ (Color online) Ground state magnetic phase diagram of the Hubbard model with infinite $U/t$ for the square lattice within the SBA. The spiral phase regions are denoted according to the form of their wave vector (the concrete {\it value} of $Q$ depends on the point $(n,t')$ of the region). Filling shows the phase separation regions. Bold (blue) lines denote the second-order phase transitions. Solid (red) lines correspond to the boundaries between the regions of the homogeneous phase and phase separation. Bold dashed (red) lines denote the first order phase transitions in the case where the region of the phase separation is narrow. Dashed (red) horizontal lines separate the phase separation regions corresponding to different phase pairs. The boundary of the AFM phase at very small $n-1>0$ which is stable with respect to the sFM phase (see the discussion at the end of Section \ref{sec:formalism}) is shown schematically by the long-dashed (violet) line, since this boundary was not found numerically because of precision problems at extremely small carrier density. } \label{fig:phase_diagram_sq_inf} \end{figure} Generally, the instabilities of the sFM state are (i) the instability with respect to collective magnetic (spiral or AFM) excitations (typically accompanied by a first order phase transition); (ii) the spin-flip instability resulting in the formation of the unsaturated FM state. Within the SBA, the sFM state is stable with respect to the second type of instability only if $\varepsilon_2 \equiv \min\limits_{\mathbf{k}}E_2(\mathbf{k}) > \mu$, $\varepsilon_2$ being the bottom of the upper subband.
We also define $\varepsilon_1 = \max\limits_{\mathbf{k}}E_1(\mathbf{k})$, the top of the lower subband. Since the saturated ferromagnetic state implies the vanishing of both $p_\downarrow$ and $d$, the expansion of the SBA equations yields $d/p_\downarrow = u + \sqrt{1 + u^2}$, with $u = (1/2)\left({U}(E_{\rm kin}/(ep_\uparrow) + \varepsilon_{\rm B}(ep_\uparrow))^{-1} - e/p_\uparrow + p_\uparrow/e\right)$, where $\varepsilon_{\rm B}$ is the bottom of the bare band and $E_{\rm kin} = \sum_{\mathbf{k}}t_{\mathbf{k}}f_{\mathbf{k}}$ is the kinetic energy of the electrons in the spin-up subband. In this case the quasiparticle residues coincide with the local ones: $z^2_\uparrow = 1$, $z^2_\downarrow = \delta(1 + u p_\uparrow/e)^2/(1 + u^2)$. The instability of the sFM (Nagaoka) state with respect to uFM can occur only far from the nesting features of the electronic spectrum and the van Hove singularities of the bare DOS, which favor saturated ferromagnetism; it is actually absent in Fig.~\ref{fig:phase_diagram_sq_inf}. Scans of different quantities versus $n$ at fixed $t'=0, 0.2t, 0.5t$, corresponding to Fig.~\ref{fig:phase_diagram_sq_inf}, are presented in Fig.~\ref{fig:quantities}. \begin{figure}[!h] \includegraphics[width=0.5\textwidth]{aux2.eps} \caption{ (Color online) Left axis: the density dependence of $q = Q_x$ (green), $\mu$ (black), the relative magnetization $r = m/(1-|1-n|)$ (blue), $z_{\uparrow}^2$ (dashed violet), and $z_{\downarrow}^2$ (dashed orange line). Right axis: $\varepsilon_1$ (red), $\varepsilon_2$ (light blue line). The lower panel presents $t' = 0$, the middle panel $t' = 0.2t$, and the upper panel $t' = 0.5t$. The phases are denoted at the bottom of each panel. Vertical dashed lines denote the regions of discontinuous phase transitions (phase separation). } \label{fig:quantities} \end{figure} In the PM region, far away from half-filling, the local residues $z^2_\sigma$ demonstrate a typical square-root dependence like that in the Brinkman-Rice theory of the metal-insulator transition~\cite{Brinkman_MI:1970,Kotliar_SB:1986}.
The relative magnetization $m/(1-|1-n|)$ at $n < 1$ appears to be bounded from above by about 0.6 for the spiral state ($\mathbf{Q} = (0,\pi)$ or $\mathbf{Q} = (q,\pi)$), which forms well away from half-filling. These results differ strongly from the HFA results, where the large AFM gap $2\Delta_{\rm HFA} = Um$ causes $m/(1-|1-n|)\sim 1$. The spiral states exist only in a small density interval, being unstable with respect to the sFM state when the density approaches $n = 1$. The picture is strongly different in the case $n > 1$. The instability point of the paramagnetic phase with respect to the spiral (AFM) state shifts towards $n = 1$ with increasing $t'$. The phase region of the AFM state is rather narrow, whereas the sFM region is fully absent. Generally, the correlated AFM phase possesses small $z^2_\sigma$, which is related to the transfer of most of the spectral weight into incoherent states. While $z^2_\downarrow$ tends linearly to zero as $\delta\rightarrow 0$, $z^2_\uparrow$ behaves differently depending on the value of $t'$: for $t'=0.2t$ $z^2_\uparrow \sim \sqrt{\delta}$, whereas for $t' = 0.5t$ $z^2_\uparrow$ tends to a finite value. We stress the difference in the behavior of the AFM gap $2\Delta$ in the limit $\delta\rightarrow 0$ for $t' = 0.2t$ and $t' = 0.5t$. While at $n \ne 1$ typically $\varepsilon_1 > \varepsilon_2$ (the absence of a gap between the subbands), we find that in the case $t' = 0.2t$, in the close vicinity of half-filling ($\delta<0.09$), the AFM state has a gap between the subbands. Another interesting consequence of this difference is the different asymptotics of the sublattice magnetization: $1 - m\propto \sqrt{\delta}$ for $t' = 0.2t$ and $1 - m\propto \delta$ for $t' = 0.5t$, which agrees with the above analytics. The vanishing of the spectral weight in the system with small $|t'|$ agrees with the results of earlier investigations of the motion of a hole in an AFM matrix~\cite{Varma:1988, Kane:1989} within the $t-J$ model in the nearest-neighbour approximation.
They found that for $J=0$ the spectrum is incoherent, and for finite $J$ a narrow coherent peak with a small residue of order $J/t \ll 1$ occurs near the band bottom. Introducing a small ``direct'' exchange $J$ (e.g.~via the superexchange mechanism) yields a cutoff of the divergence in Eq.~(\ref{eq:C_expr}), so that $\Delta\rightarrow \Delta + J m/2$ and $z^2_\uparrow$ becomes finite near half-filling. A similar cutoff takes place in the finite-$U$ Hubbard model, where effectively $J\sim t^2/U$. Now we consider in detail the influence of finite values of $U$ on the properties of the system. \begin{figure}[!h] \center \includegraphics[width=0.49\textwidth]{diagram_U50.eps} \caption{ (Color online) The same as in Fig.~\ref{fig:phase_diagram_sq_inf} for the square lattice at $U = 50t$. AFI is the antiferromagnetic insulator state at $n = 1$. } \label{fig:phase_diagram_U50} \end{figure} In Fig.~\ref{fig:phase_diagram_U50} the ground-state magnetic phase diagram at $U = 50t$ is presented. One can see that a wide PS region occurs in the vicinity of $n = 1$. At $n < 1$ we find the PS into an HFA-like AFM insulator and the sFM state; its width satisfies the earlier estimate~\cite{Igoshev:2010, Visscher:1973} \begin{equation}\label{eq:Visscher} \delta < \delta_{\rm PS} = \sqrt{{2t}/{[\pi (1+2t'/t) U]}}. \end{equation} At the same time, at $n > 1$ the sFM state becomes unstable in the vicinity of half-filling with respect to the formation of an AFM state with partially suppressed quasiparticle weight: $z_{\downarrow}^2\sim \delta$ or $\sqrt{\delta}$. Thus the AFM (or spiral) state occurs at arbitrarily large $U$ at $t' < 0$, which strongly changes the results by hiding the region of non-Fermi behavior (with a size estimated as $\delta\sim t/U$~\cite{Kotliar:1988}). To consider in detail the properties of the states taking part in the PS, we present the density dependence of the $z$-factors (Figs.~\ref{fig:z_at_t1=0.2} and~\ref{fig:z_at_t1=0.5}) for different $U$.
\begin{figure}[!h] \center \includegraphics[angle=-90,width=0.49\textwidth]{t1=0.2.z.eps} \caption{ (Color online) $z^2_\uparrow$ (solid line) and $z^2_\downarrow$ (dashed line) factors for the square lattice at $U/t = 30, 70, 100, 200, \infty$, $t' = 0.2t$. Except for the $(q,q)$ phase, at $n<1$ we have the sFM phase, and at $n>1$ the AFM phase. The breaks at the vertical dashed lines correspond to the boundaries of the PS regions. } \label{fig:z_at_t1=0.2} \end{figure} \begin{figure}[!h] \center \includegraphics[angle=-90,width=0.49\textwidth]{t1=0.5.z.eps} \caption{ (Color online) The same as in Fig.~\ref{fig:z_at_t1=0.2} for $t'=0.5t$. } \label{fig:z_at_t1=0.5} \end{figure} We find that at $n<1$ $\Delta$ is almost insensitive to $U$ and is nearly the same in both the sFM and spiral phases; the position of the instability of the PM state ($t'=0.2t$) with respect to the spiral phase is also almost fixed. In the sFM state $\Delta$ is much larger than its AFM value at $n>1$ ($t'<0$), which decreases with increasing $U$ for both $t'=0.2t$ and $0.5t$. In the close vicinity of half-filling we obtain quite different behavior: at $t'= 0.2t$ a precursor of the unusual AFM behavior is found, with $\Delta$ tending to saturation as $\delta$ approaches zero; at $t'=0.5t$ it increases almost linearly. Both these dependences hold until PS occurs. While $z_\downarrow^2\sim \delta$ irrespective of $U$, the behavior of $z_\uparrow^2$ always depends strongly on $t'$: at $t' = 0.2t$ we find a decrease of $z_{\uparrow}^2$ with $\delta$, which can be interpreted as a precursor of the square-root vanishing at $U = \infty$ (the unusual AFM behavior, hidden by PS). Note that the gap between the AFM subbands exists ($\varepsilon_1 < \varepsilon_2$) in some $\delta$ region at large enough $U<\infty$. At $t' = 0.5t$ we find only a weak decrease of $z_{\uparrow}^2$ with increasing $U$, the dependence on $\delta$ being also weak. 
The instability of the sFM state with respect to the bound state of a hole and a spin flip on the square lattice was considered in Ref.~\cite{Oles}, where the energies of the states were compared in the framework of a variational principle. It was found that the sFM phase becomes unstable at $t' < -0.255t$. This conclusion was supported by a DMRG study~\cite{DMRG}, where the sFM phase was found to be stable for $t'> -0.214t$ \textit{and} $n < 0.99$. We see that these DMRG results, although reproduced at $\delta \gtrsim 0.01$, should be reconsidered at smaller hole concentrations. A direct calculation of the free energies of the sFM and AFM states in the limit $\delta\rightarrow0$ for the square lattice using Eqs.~(\ref{eq:F_expansion}) and (\ref{eq:F_Nagaoka_expansion}) indicates that the AFM state is favourable (this state is not shown in Fig.~\ref{fig:phase_diagram_sq_inf} due to precision problems at very small $\delta$). \begin{figure}[!h] \includegraphics[width=0.5\textwidth]{diagram_sc.eps} \caption{ (Color online) The phase diagram for the sc lattice, the notations being as in Fig.~\ref{fig:phase_diagram_sq_inf}; uFM denotes the region of the non-saturated ferromagnetic state. $\mathbf{Q}_{\rm AFM} = (\pi,\pi,\pi)$. } \label{fig:phase_diagram_sc_inf} \end{figure} The phase diagram for the simple cubic lattice is shown in Fig.~\ref{fig:phase_diagram_sc_inf}. Whereas for the square lattice ferromagnetism is always saturated owing to the influence of the logarithmic van Hove singularity, the magnetic phase diagram of the sc lattice contains a region of unsaturated ferromagnetism. An example of the spin-resolved density of states in the vicinity of the transition from the saturated ferromagnetic (``half-metallic'', sFM) state to the uFM state is shown in Fig.~\ref{fig:spin_DOS}. One can see that, besides band narrowing, a shift of the spin subbands occurs~\cite{Harris-Lange:1967}, which favors the occurrence of ferromagnetism, in contrast with the simple Hubbard-I approximation~\cite{Hubbard-I:1963}. 
\begin{figure}[!h] \includegraphics[angle=-90, width=0.5\textwidth]{DOS_uFM.eps} \caption{ (Color online) Spin-resolved DOS in the vicinity of the sFM--uFM transition for the sc lattice, $t' = 0$, $U = \infty$, on the sFM side (solid lines, $n = m = 0.83, z_{\uparrow}^2 = 1, z_{\downarrow}^2 = 0.17$) and the uFM side (dashed lines, $n = 0.70, m = 0.59, z_{\uparrow}^2 = 0.84, z_{\downarrow}^2 = 0.31$). Spin-up (-down) contributions are shown by red (blue) lines. } \label{fig:spin_DOS} \end{figure} The behavior of the spin-up states in the saturated ferromagnetic state coincides with that of free electrons, whereas the spin-down states below the Fermi level are strongly incoherent~\cite{Irkhin:1985, Edwards:1970}. The latter states are disregarded in our approximation and are therefore absent in Fig.~\ref{fig:spin_DOS}; they should be taken into account to restore the above-discussed sum rule (\ref{eq:sum_rule}) for the density of states. It is remarkable that the amplitude of the peaks appears to be the same for both subbands. As discussed in Sect.~\ref{sec:formalism}, there are no heavy-electron AFM phases for the sc lattice. The behavior of $\varepsilon_1$ and $\varepsilon_2$ relative to $\mu$ allows one to classify the transition from the saturated to the non-saturated AFM state. The evolution of the density of states driven by $\delta$ (from the paramagnetic state to the non-saturated and saturated AFM states with $\mathbf{Q} = (0, \pi, \pi)$) at rather close points is shown in Figs.~\ref{fig:AFM_transition1} and \ref{fig:AFM_transition2}. One can see that at small $|t'|$ (Fig.~\ref{fig:AFM_transition1}) the upper and lower subbands overlap considerably near the transition, whereas the energy dependence of the density of states (DOS) strongly changes its form due to the formation of AFM order. For large $|t'|=0.45t$ a different picture emerges: the transition from the saturated to the non-saturated AFM state results in a broadening of the upper subband and a contraction of the lower one, which is caused by the AFM order, similarly to the FM case. 
This similarity is a consequence of the fact that at large $|t'|$ the electron transport involves to a large extent next-nearest-neighbour sites with parallel spins. The main distinction from the FM case is the strong difference in the amplitudes of the partial subband DOS's, which is a consequence of the $\mathbf{k}$-dependent quasiparticle residue in the AFM state. \begin{figure}[!h] \includegraphics[angle=-90, width=0.5\textwidth]{DOS_AFM1.eps} \caption{ (Color online) AFM subband-resolved DOS for the sc lattice with $U = \infty, t' = -0.1t$ in the vicinity of the transition between the paramagnetic state~(black lines), $n = 0.4, z^2_\sigma = 0.75$, and the antiferromagnetic state~(red lines), $\mathbf{Q}_{\rm AFM} = (0, \pi,\pi)$, $n = 0.475, m = 0.068, z_{\uparrow}^2 = 0.72, z_{\downarrow}^2 = 0.66$. The solid line is the total density of states; the dashed (dotted) line is the DOS of the lower (upper) AFM subband. } \label{fig:AFM_transition1} \end{figure} \begin{figure}[!h] \includegraphics[angle=-90, width=0.5\textwidth]{DOS_AFM2.eps} \caption{ (Color online) AFM subband-resolved DOS for the sc lattice with $U = \infty, t' = -0.45t$ for the paramagnetic phase (black line, $n = 0.78, z^2_\sigma = 0.38$), the `non-saturated' AFM state (blue line, $n = 0.8, m = 0.3, z_{\uparrow}^2 = 0.45, z_{\downarrow}^2 = 0.27$) and the `saturated' AFM state (red line, $n = 0.94, m = 0.9, z_{\uparrow}^2 = 0.76, z_{\downarrow}^2 = 0.06$). } \label{fig:AFM_transition2} \end{figure} To conclude, we have presented a picture of the magnetic phase transitions in the strongly correlated Hubbard model. Although the HFA cannot yield reasonable results for the properties of the system at large $U/t$, the SBA provides detailed information, including the considerable renormalization of the $z$-factors. Further investigations with a proper inclusion of the incoherent states and spin dynamics are required. \section{Acknowledgments} The research was carried out within the state assignment of FASO of Russia (theme ``Quantum'' No. 01201463332). 
This work was supported in part by the Ural Division of RAS (projects no. 15-8-2-9 and 15-8-2-12), by the Russian Foundation for Basic Research (project no. 16-02-00995), and by Act 211 of the Government of the Russian Federation (02.A03.21.0006). Most of the calculations were performed using the ``Uran'' cluster of IMM UB RAS.
\section{Introduction} Recent years have witnessed a number of successes in applying modern reinforcement learning (RL) methods to many fields, including robotics~\cite{tobin17, levine15} and competitive gaming~\cite{silver16, mnih15}. Impressively, most of these successes have been achieved by using general-purpose RL methods that are applicable to a host of problems. Prevalent general-purpose RL approaches can be broadly categorized into: (a) \emph{model-based approaches}~\cite{deisenroth2012,gu2016,lillicrap2015}, in which an agent attempts to learn a model for the dynamics by observing the evolution of its state sequence; and (b) \emph{model-free approaches}, including DQN~\cite{mnih15}, and TRPO~\cite{schulman15}, in which the agent attempts to learn an optimal policy directly, by observing rewards from the environment. While model-free approaches typically require more samples to learn a policy of equivalent accuracy, they are naturally more robust to model mis-specification. A literature that is closely related to model-free RL is that of \emph{zero-order or derivative-free} methods for stochastic optimization; see the book by~\cite{spall03} for an overview. Here, the goal is to optimize an unknown function from noisy observations of its values at judiciously chosen points. While most analytical results in this space apply to convex optimization, many of the procedures themselves rely on moving along randomized approximations to the directional derivatives of the function being optimized, and are thus applicable even to non-convex problems. In the particular context of RL, variants of derivative-free methods, including TRPO~\cite{schulman15}, PSNG~\cite{rajeswaran17} and evolutionary strategies~\cite{salimans2017}, have been used to solve highly non-convex optimization problems and have been shown to achieve state-of-the-art performance on various RL tasks. 
While many RL algorithms are easy to describe and run in practice, certain theoretical aspects of their behavior remain mysterious, even when they are applied in relatively simple settings. One such setting is the most canonical problem in continuous control, that of controlling a linear dynamical system with quadratic costs, a problem known as the linear quadratic regulator (LQR). A recent line of work~\cite{abbasi2011, abbasi2018, abeille2017, cohen18, dean17, dean18, faradonbeh17, kakade18, tu18, tu182} has sought to delineate the properties and limitations of various RL algorithms in application to LQR problems. An appealing property of LQR systems from an analytical point of view is that the optimal policy is guaranteed to be linear in the states~\cite{kalman60,Whittle96}. Thus, when the system dynamics are known, as in classical control, the optimal policy can be obtained by solving the discrete-time algebraic Riccati equation. In contrast, methods in reinforcement learning target the case of unknown dynamics, and seek to learn an optimal policy on the basis of observations. A basic form of model-free RL for linear quadratic systems involves applying derivative-free methods in the space of linear policies. It can be used even when the only observations possible are the costs from a set of rollouts, each referred to as a sample\footnote{Such an offline setting with multiple, restarted rollouts should be contrasted with an online setting, in which the agent interacts continuously with the environment, and no hard resets are allowed. In contrast to the offline setting, the goal in the online setting is to control the system for all time steps while simultaneously learning better policies, and performance is usually measured in terms of regret.}, and when our goal is to obtain a policy whose cost is at most $\epsilon$-suboptimal. 
The sample complexity of a given method refers to the number of samples, as a function of the problem parameters, required to meet a given tolerance $\epsilon$. With this context, we are led to the following concrete question: \emph{What is the sample complexity of derivative-free methods for the linear quadratic regulator?} This question underlies the analysis in this paper. In particular, we study a standard derivative-free algorithm in an offline setting and derive explicit bounds on its sample complexity, carefully controlling the dependence on not only the tolerance $\epsilon$, but also the dimension and conditioning of the underlying problem. Our analysis treats two distinct forms of randomness in the underlying linear system. In the first setting---more commonly assumed in practice---the linear updates are driven by an additive noise term~\cite{dean17}, whereas in the second setting, the initial state is chosen randomly but the linear dynamics remain deterministic~\cite{kakade18}. We refer to these two settings, respectively, as the \emph{additive noise setting} and the \emph{randomly initialized setting}. We are now in a position to discuss related work on the problem, and to state our contributions. \paragraph{Related work:} Quantitative gaps between model-based and model-free reinforcement learning have been studied extensively in the setting of finite state-action spaces~\cite{agrawal2017, dann2017, azar2017}, and several interesting questions here still remain open. For continuous state-action spaces and in the specific context of the linear quadratic systems, classical system identification has been model-based, with a particular focus on asymptotic results (e.g., see the book by~\cite{ljung1987} as well as references therein). 
Non-asymptotic guarantees for model-based control of linear quadratic systems were first obtained by~\cite{fietcher97}, who studied the offline problem under additive noise and obtained non-asymptotic rates for parameter identification using nominal control procedures. In more recent work, Dean et al.~\cite{dean17} proposed a robust alternative to nominal control, showing an improved sample complexity as well as better-behaved policies. The online setting for model-based control of linear quadratic systems has also seen extensive study, with multiple algorithms known to achieve sub-linear regret~\cite{dean18, abbasi2011, abeille2017, ibrahimi12, cohen19}. In this paper, we study model-free control of these systems, a problem that has seen some recent work in both the offline~\cite{kakade18} and online~\cite{abbasi2018} settings. Most directly relevant to our work is the paper of Fazel et al.~\cite{kakade18}, who studied the offline setting for the randomly initialized variant of the LQR, and showed that a population version of gradient descent (and natural gradient descent), when run on the non-convex LQR cost objective, converges to the global optimum. In order to turn this into a derivative-free algorithm, they constructed near-exact gradient estimates from reward samples and showed that the sample complexity of such a procedure is bounded polynomially in the parameters of the problem; however, the dependence on various parameters is not made explicit in their analysis. We remark that Fazel et al. also show polynomially bounded sample complexity for a zero-order algorithm which builds near-exact estimates of the \emph{natural} gradient, although this requires access to a stronger oracle than the one assumed in this paper. Also of particular relevance to our paper is the extensive literature on zero-order optimization. 
Flaxman et al.~\cite{flax04} showed that these methods can be analyzed for convex optimization by making an explicit connection to function smoothing, and Agarwal et al.~\cite{agarwal10} improved some of these convergence rates. Results are also available for strongly convex~\cite{jamrec12}, smooth~\cite{ghalan13} and convex~\cite{nesterov11, duchi15, wang17} functions, with Shamir characterizing the fundamental limits of many problems in this space~\cite{shamir12, shamir17}. Broadly speaking, all of the methods in this literature can be seen as variants of \emph{stochastic search}: they proceed by constructing estimates of directional derivatives of the function from randomly chosen zero order evaluations. In the regime where the function evaluations are stochastic, different convergence rates are obtained based on whether such a procedure uses a \emph{one-point estimate} that is obtained from a single function evaluation~\cite{flax04}, or a \emph{$k$-point estimate}~\cite{agarwal10} for some $k \geq 2$. There has also been some recent work on zero-order optimization of non-convex functions satisfying certain smoothness properties that are motivated by statistical estimation~\cite{wang18}. \paragraph{Our contributions} In this paper, we study both randomly initialized and additive-noise linear quadratic systems in the offline setting through the lens of derivative-free optimization. We begin with a general result that characterizes the convergence behavior of a canonical derivative-free algorithm when applied to a general class of functions satisfying certain curvature conditions. In particular, our main contribution is to establish upper bounds on the sample complexity as a function of the dimension, error tolerance, and curvature parameters of the problem instance. We then specialize this result to a variety of LQR models. 
In contrast to prior work, the rates that we provide are explicit, and the algorithms that we analyze are standard and practical one-point and two-point variants of the random search heuristic. Our results reveal interesting dichotomies between the settings of one-point and two-point feedback, as well as the models involving random initialization and additive noise. Our main contribution is stated in the following informal theorem (to be stated more precisely in the sequel): \paragraph{Main Theorem (informal).} \emph{With high probability, one can obtain an $\epsilon$-approximate solution to any linear quadratic system from observing the noisy costs of $\widetilde{\mathcal{O}}(1 / \epsilon^2)$ trajectories from the system, which can be further reduced to $\widetilde{\mathcal{O}}(1 / \epsilon)$ trajectories when pairs of costs are observed for each trajectory. } \newline In our theoretical statements, the multiplicative pre-factors are explicit lower-order polynomials of the dimension of the state space, and curvature properties of the cost function. From a technical standpoint, we build upon some known properties of the LQR cost function established in past work on randomly initialized systems~\cite{kakade18}, and establish de novo some analogous properties for the additive noise setting. We also isolate and sharpen some key properties that are essential to establishing sharp rates of zero-order optimization; as an example, for the setting with random-initialization and one-point reward feedback studied by Fazel et al.~\cite{kakade18}, establishing these properties allows us to analyze a natural algorithm that improves\footnote{While the rates established by Fazel et al.~\cite{kakade18} are not explicit, their analysis is conservative and yields a bound of order $1/\epsilon^4$ up to logarithmic factors. 
To be clear, the properties that we establish also enable us to provide a sharper analysis of their algorithm; see Appendix~\ref{app:fazel} to follow.} the dependence of the bound on the error tolerance $\epsilon$ from at least $\order{1/\epsilon^4}$ to $\order{1/\epsilon^2}$. Crucially, our analysis is complicated by the fact that we must ensure that the iterates are confined to the region in which the linear system is stable, and such stability considerations introduce additional restrictions on the parameters used in our optimization procedure. \section{Background and problem set-up} \label{sec:setup} In this section, we discuss the background related to zero-order optimization and the setup for the linear quadratic control problem. \subsection{Optimization background} \label{subsec:opt_back} We first introduce some standard optimization related background and assumptions, and make the zero-order setting precise. \paragraph{Stochastic zero-order optimization:} We consider optimization problems of the form \begin{align} \label{eqn:general_zero_order_prob} \min_{\x \in \mathcal{X}} \f(\x) & :\,= \Exs_{\factorvar \sim \mathcal{D}}{\left[\F(\x, \factorvar)\right]}, \end{align} where $\factorvar$ is a zero mean random variable\footnote{While the zero mean assumption on $\factorvar$ is not strictly necessary for generic optimization, the canonical (additive noise) LQR settings that we specialize our results to require noise to be zero mean. So we make this assumption at the outset for convenience.} that represents the noise in the problem, and the function $\f$ above can be non-convex in general with a possibly non-convex domain $\mathcal{X} \subseteq \ensuremath{\mathbb{R}}^{\dims}$. In particular, we consider stochastic zero-order optimization methods with oracle access to noisy function evaluations. We operate under two distinct oracle models. 
The first is the one-point setting, in which the optimizer specifies a point $x \in \mathcal{X}$, and an evaluation consists of an instantiation of the random variable $\F(\x, \factorvar)$. The second is the two-point extension of such a setting, in which the optimizer specifies a pair of points $(\x, y)$, then an instantiation of the random variable $\factorvar$ occurs, and the optimizer obtains the values $\F(\x, \factorvar)$ and $\F(y, \factorvar)$. Crucially, the function evaluations $\F(\x, \factorvar)$ and $\F(y, \factorvar)$ share the same noise, so the two-point oracle cannot be reduced to querying the one-point oracle twice (where sharing the same noise across multiple function evaluations cannot be guaranteed). Such two-point settings are known in the optimization literature to enjoy reduced variance of gradient estimates~\cite{agarwal10, duchi15, shamir17}. \paragraph{Function properties:} Before defining the optimization problems considered in this paper by instantiating the pair of functions $(f, F)$, let us precisely define some standard properties that make repeated appearances in the sequel. \begin{definition}[Locally Lipschitz Gradients] \label{def:lipschitz_gradient_lqr} A continuously differentiable function $\g$ with domain $\domg$ is said to have $(\phi, \beta)$ locally Lipschitz gradients at $x \in \domg$ if \begin{align} \euclidnorm{\nabla \g{(\y)} - \nabla \g{(\x)}} \leq \phi \euclidnorm{\y - \x} \qquad \mbox{for all $y \in \domg$ with $\|x-y\|_2 \leq \beta$.} \end{align} \end{definition} We often say that $\g$ has locally Lipschitz gradients, by which we mean for each $x \in \domg$ the function $\g$ has locally Lipschitz gradients, albeit with constants $(\phi, \beta)$ that may depend on $x$. This property guarantees that the function $\g$ has at most quadratic growth locally around every point, but the shape of the quadratic and the radius of the ball within which such an approximation holds may depend on the point itself. 
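Returning to the two oracle models: both are typically used through smoothed-gradient estimators built from random directions on the unit sphere. The sketch below (our own notation, not code from the paper) shows the standard one-point and two-point constructions; the key feature of the two-point estimator is that both queries are evaluated at the same noise instantiation $\xi$:

```python
import numpy as np

def one_point_grad(F, x, r, rng):
    """One-point estimate (d/r) F(x + r u, xi) u, with u uniform on the sphere
    and a fresh noise sample xi for the single query."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    xi = rng.standard_normal()
    return (d / r) * F(x + r * u, xi) * u

def two_point_grad(F, x, r, rng):
    """Two-point estimate: both queries see the SAME noise xi, which is what
    reduces the variance relative to two independent one-point queries."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    xi = rng.standard_normal()  # shared noise instantiation
    return (d / (2 * r)) * (F(x + r * u, xi) - F(x - r * u, xi)) * u
```

For example, averaging many such estimates for the noisy quadratic $F(x, \xi) = \|x\|^2 + 0.1\,\xi$ recovers the gradient $2x$; in the two-point case the additive noise cancels exactly within each pair of queries.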
\begin{definition}[Locally Lipschitz Function] \label{def:lipschitz_cost} A continuously differentiable function $\g$ with domain $\domg$ is said to be $(\Lipcon, \zeta)$ locally Lipschitz at $x \in \domg$ if \begin{align} \label{EqnLocalLipschitz} \vert \g{(\y)} - \g{(\x)} \vert \leq \Lipcon \euclidnorm{\y - \x} \qquad \mbox{for all $\y \in \domg$ such that $\|x-y\|_2 \leq \zeta$.} \end{align} \end{definition} As before, when we say that the function $\g$ is locally Lipschitz, we mean that this condition holds for all $x \in \domg$, albeit with parameters $(\Lipcon, \zeta)$ that may depend on $x$. The local Lipschitz property guarantees that the function $\g$ grows no faster than linearly in a local neighborhood around each point. \begin{definition}[PL Condition] \label{def:pl_inequality} A continuously differentiable function $\g$ with domain $\domg$ and a finite global minimum $\g^*$ is said to be $\Plconst$-PL if it satisfies the Polyak-\L ojasiewicz (PL) inequality with constant $\Plconst > 0$, given by \begin{align} \label{EqnPLInequality} \vecnorm{\ensuremath{\nabla} \g{(\x)}}^2 & \geq \Plconst \; \big( \g{(\x)} - \g^* \big) \qquad \mbox{for all $x \in \domg$.} \end{align} \end{definition} The PL condition, first introduced by Polyak~\cite{polyak63} and Lojasiewicz~\cite{loj63}, is a relaxation of the notion of strong convexity. It allows for a certain degree of non-convexity in the function $\g$. Note that inequality~\eqref{EqnPLInequality} yields an upper bound on the gap to optimality that is proportional to the squared norm of the gradient. Thus, while the condition admits non-convex functions, it requires that all first-order stationary points also be global minimizers. Karimi et al.~\cite{schmidt16} recently showed that many standard first-order convex optimization algorithms retain their attractive convergence guarantees over this more general class. 
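As a concrete illustration of Definition~\ref{def:pl_inequality}, the function $g(x) = x^2 + 3\sin^2 x$ (a standard example in the literature on the PL condition, not taken from this paper) satisfies the PL inequality with some $\mu > 0$ while failing convexity; this can be checked numerically:

```python
import numpy as np

def g(x):  return x**2 + 3 * np.sin(x)**2   # non-convex, with g* = g(0) = 0
def dg(x): return 2*x + 3 * np.sin(2*x)     # derivative of g

xs = np.linspace(-10.0, 10.0, 200001)
xs = xs[np.abs(xs) > 1e-6]                  # exclude the minimizer x* = 0 itself
mu = np.min(dg(xs)**2 / g(xs))              # empirical PL constant on the grid

assert mu > 0.05                            # |g'(x)|^2 >= mu (g(x) - g*) holds
assert 2 + 6 * np.cos(np.pi) < 0            # g''(pi/2) < 0, so g is not convex
```

Since the only stationary point of $g$ is the global minimizer $x^* = 0$, the ratio $|g'(x)|^2 / (g(x) - g^*)$ stays bounded away from zero, exactly as the PL condition requires.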
\subsection{Optimal control background} \label{sec:LQR_background} We now turn to some basic background on optimal control and reinforcement learning. An optimal control problem is specified by a dynamics model and a real-valued cost function. The dynamics model consists of a sequence of functions $\braces{h_t(\state[t], \control[t], z_t)}_{t \geq 0}$, which models how the state vector $\state[t]$ transitions to the next state $\state[t+1]$ when a control input $\control[t]$ is applied at a timestep $t$. The term $z_t$ captures the noise disturbance in the system. The cost function $c_t(\state[t], \control[t])$ specifies the cost incurred by taking an action $\control[t]$ in the state $\state[t]$. The goal of the control problem is to find a sequence of control inputs $\braces{\control[t]}_{t \geq 0}$, dependent on the history of states $\mathcal{H}_t :\,= (\state[0], \state[1], \ldots, \state[t-1])$, so as to solve the optimization problem \begin{align} \min \mathbb{E} \left[\sum_{t \geq 0} \gamma^t c_t(\state[t], \control[t]) \right] \qquad \text{s.t. } \state[t+1] = h_t(\state[t], \control[t], z_t), \end{align} where the expectation above is with respect to the noise in the transition dynamics as well as any randomness in the selection of control inputs, and $0 < \gamma \le 1$ represents a multiplicative discount factor. A mapping from histories $\mathcal{H}_t$ to controls $\control[t]$ is called a \emph{policy}, and the above minimization is effectively over the space of policies. There is a distinction to be made here between the classical fully-observed setting in stochastic control in which the dynamics model $h_t$ is known---in this case, such a problem may be solved (at least in principle) by the Bellman recursion~\cite{bertsekas2005}, and the system identification setting in which the dynamics are completely unknown. We operate in the latter setting, and accommodate the further assumption that even the cost function $c_t$ is unknown. 
In this paper, we assume that the state space is $\statedim$-dimensional, and the control space is $\controldim$-dimensional, so that $\state[t] \in \ensuremath{\mathbb{R}}^\statedim$ and $\control[t] \in \ensuremath{\mathbb{R}}^\controldim$. The linear quadratic system specifies particular forms for the dynamics and costs, respectively. In particular, the cost function obeys the quadratic form \begin{align*} c_t = \state[t]^{\top} Q \state[t] + \control[t]^{\top} R \control[t] \end{align*} for a pair of positive definite matrices $(Q, R)$ of the appropriate dimensions. Additionally, the dynamics model is linear in both states and controls, and takes the form \begin{align*} \state[t+1] = A \state[t] + B \control[t] + \error[t], \end{align*} where $A$ and $B$ are transition matrices of the appropriate dimension, and the random variable $\error[t]$ models additive noise in the problem which is drawn i.i.d. for each $t$ from a distribution $\noisedistributionlqr$. We call this setting the \emph{noisy dynamics} model. We also consider the \emph{randomly initialized} linear quadratic system without additive noise, in which the state transitions obey \begin{align*} \state[t+1] &= A \state[t] + B \control[t], \end{align*} and the randomness in the problem comes from choosing the initial state $\state[0]$ at random from a distribution $\statedistributionlqr$. 
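Both noise models can be simulated directly; the sketch below (with illustrative matrices of our own choosing) rolls out the linear policy $u_t = -K s_t$ and accumulates the discounted quadratic cost, truncating the infinite horizon. With the noise argument omitted it corresponds to the randomly initialized model; supplying a noise sequence gives the noisy-dynamics model.

```python
import numpy as np

def lqr_rollout_cost(A, B, Q, R, K, s0, gamma, T, noise=None):
    """Discounted cost sum_t gamma^t (s_t' Q s_t + u_t' R u_t) of the linear
    policy u_t = -K s_t, truncating the infinite horizon at T steps."""
    s, cost = s0.astype(float).copy(), 0.0
    for t in range(T):
        u = -K @ s
        cost += gamma**t * (s @ Q @ s + u @ R @ u)
        s = A @ s + B @ u
        if noise is not None:          # noisy-dynamics model: add epsilon_t
            s = s + noise[t]
    return cost

# Toy 2-dimensional instance (illustrative values, not from the paper)
A = np.array([[1.01, 0.1], [0.0, 1.01]])
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)
K = np.array([[0.5, 0.1], [0.0, 0.5]])   # closed loop A - B K = 0.51 I, stable
cost = lqr_rollout_cost(A, B, Q, R, K, np.array([1.0, 0.0]), gamma=0.99, T=500)
```

For this instance the closed-loop state decays geometrically, so the truncated sum matches the exact geometric series $1.25/(1 - 0.99 \cdot 0.51^2)$ to numerical precision.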
Throughout this paper, we assume\footnote{It is important to note that our assumption of identity covariance of the noise distributions can be made without loss of generality: for a problem with known, non-identity (but full-dimensional) covariance $\Sigma$, we may reparametrize the problem with the modifications \begin{align*} A' = \Sigma^{-1/2} A \Sigma^{1/2}, \quad B' = \Sigma^{-1/2} B, \text{ and } s'_t = \Sigma^{-1/2} \state[t] \text{ for all } t \geq 0, \end{align*} in which case the new problem with states $s'_t$ and the pair of transition matrices $(A', B')$ is driven by noise satisfying the assumptions~\eqref{eq:propnoise}.} that for both distributions $\mathcal{D} \in \{ \noisedistributionlqr, \statedistributionlqr\}$ and for a random variable $v \sim \mathcal{D}$, we have \begin{align} \label{eq:propnoise} \EE [v] = 0, \quad \EE [vv^\top] = I, \text{ and } \| v \|_2^2 \leq C_{\statedim} \; \; \text{ a.s.} \end{align} While we assume boundedness of the distribution for convenience, our results extend straightforwardly to sub-Gaussian distributions by appealing to high-probability bounds for quadratic forms of sub-Gaussian random vectors~\cite{HanWri71,Wri73,HsuKakZha12} and standard truncation arguments. The final iteration complexity also changes by at most poly-logarithmic factors in the problem parameters; for brevity, we operate under the assumptions~\eqref{eq:propnoise} throughout the paper and omit standard calculations for sub-Gaussian distributions. By classical results in optimal control theory~\cite{kalman60,Whittle96}, the optimal controller for the LQR problem under both of these noise models takes the linear form $\control[t] = -\Kstar \state[t]$, for some matrix $\Kstar \in \mathbb{R}^{\controldim \times \statedim}$. When the system matrices are known, the controller matrix $\Kstar$ can be obtained by solving the discrete-time algebraic Riccati equation~\cite{riccati1700}. 
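The reparametrization in the footnote above, $A' = \Sigma^{-1/2} A \Sigma^{1/2}$, $B' = \Sigma^{-1/2} B$, $s'_t = \Sigma^{-1/2} s_t$, can be verified numerically on a single step of the dynamics; a minimal sketch with a randomly generated positive-definite $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
M = rng.standard_normal((d, d))
Sigma = M @ M.T + d * np.eye(d)            # positive-definite noise covariance

# Matrix square root of Sigma via its eigendecomposition
w, V = np.linalg.eigh(Sigma)
S_half = V @ np.diag(np.sqrt(w)) @ V.T
S_half_inv = np.linalg.inv(S_half)

A_new = S_half_inv @ A @ S_half            # A' = Sigma^{-1/2} A Sigma^{1/2}
B_new = S_half_inv @ B                     # B' = Sigma^{-1/2} B

# One step of the dynamics agrees after whitening the state: the original
# noise Sigma^{1/2} eps (covariance Sigma) becomes identity-covariance eps.
s, u, eps = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(d)
lhs = S_half_inv @ (A @ s + B @ u + S_half @ eps)   # whitened original step
rhs = A_new @ (S_half_inv @ s) + B_new @ u + eps    # step in new coordinates
assert np.allclose(lhs, rhs)
```

The quadratic cost matrices transform correspondingly ($Q' = \Sigma^{1/2} Q \Sigma^{1/2}$), so the whitened problem is again an LQR instance with identity noise covariance.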
With the knowledge that the optimal policy is a time-invariant linear transformation of the state, one can re-parametrize the LQR objective in terms of the linear class of policies, and focus on optimization procedures that only search over the class of linear policies. Below, we define such a parametrization under the noise models introduced above, and make explicit the connections to the stochastic optimization model~\eqref{eqn:general_zero_order_prob}. \paragraph{Random initialization} For each choice of the (random) initial state $\state[0]$, let $\Cinit(\K ; \initialstate)$ denote the cost of executing a linear policy $\K$ from initial state $\initialstate$, so that \begin{align} \Cinit(\K; \initialstate) :\,= \sum_{t=0}^{\infty} \gamma^t \bigg( \state[t]^{\top} Q \state[t] + \control[t]^{\top} R \control[t] \bigg), \end{align} where we have the noiseless dynamics $\state[t+1] = A\state[t] + B \control[t]$ and $\control[t] = -\K \state[t]$ for each $t \geq 0$, and $0 < \gamma \le 1$. While $\Cinit(\K; \initialstate)$ is a random variable that denotes some notion of sample cost, our goal is to minimize the population cost \begin{align} \Cinit(\K) :\,= \mathbb{E}_{\initialstate \sim \statedistributionlqr} [\Cinit(\K; \initialstate)] \label{eqn:cost_fun_random_init} \end{align} over choices of the policy $\K$. 
\paragraph{Noisy dynamics} In this case, the noise in the problem is given by the sequence of random variables $\mathcal{Z} = \{\error[t] \}_{t \geq 0}$, and for every instantiation of $\mathcal{Z} \sim \noisedistributionlqr^{\mathbb{N}} :\,= (\noisedistributionlqr \otimes \noisedistributionlqr \otimes \ldots)$, our sample cost is given by the function \begin{align*} \Cdyn(\K; \mathcal{Z}) :\,= \sum_{t = 0}^{\infty} \gamma^t \bigg( \state[t]^{\top} Q \state[t] + \control[t]^{\top} R \control[t] \bigg), \end{align*} where we have $\state[0] = 0$, random state evolution $\state[t+1] = A \state[t] + B \control[t] + \error[t]$ and action $\control[t] = -\K \state[t]$ for each $t \geq 0$, and $0 < \gamma < 1$. In contrast to the random initialization setting, the discount factor in this setting obeys $\gamma < 1$, since this is required to keep the costs finite. Once again, we are interested in optimizing the population cost function \begin{align} \Cdyn(\K) :\,= \mathbb{E}_{\mathcal{Z} \sim \noisedistributionlqr^{\mathbb{N}}} [\Cdyn(\K; \mathcal{Z})]. \label{eqn:cost_fun_noisy_dyn} \end{align} From here on, the word policy will always refer to a linear policy, and since we work with this natural parametrization of the cost function, our problem has effective dimension $\ensuremath{D} = \statedim \cdot \controldim$, given by the product of the state and control dimensions. A policy $\K$ is said to stabilize the system $(A,B)$ if we have $\rho_\text{spec}(A-B\K) < 1$, where $\rho_\text{spec}(\cdot)$ denotes the spectral radius of a matrix. We assume throughout that the LQR system to be optimized is controllable, meaning that there exists some policy $\K$ satisfying the condition $\rho_\text{spec}(A-B\K) < 1$. Furthermore, we assume access to~\emph{some} policy $\K[0]$ with finite cost; this is a mild assumption that can be satisfied in a variety of ways; see the related literature by Fazel et al.~\cite{kakade18} and Dean et al.~\cite{dean18}. 
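The stabilization condition $\rho_\text{spec}(A-BK) < 1$ for a candidate initial policy is straightforward to test numerically; a small helper of our own, for illustration:

```python
import numpy as np

def spectral_radius(M):
    """Largest absolute value among the eigenvalues of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))

def stabilizes(K, A, B):
    """True iff the linear policy u = -K s stabilizes the system (A, B)."""
    return spectral_radius(A - B @ K) < 1.0

# Example: an open-loop unstable system, stabilized by a simple diagonal gain
A = np.array([[1.1, 0.0], [0.0, 0.9]])
B = np.eye(2)
assert spectral_radius(A) > 1.0                 # open loop is unstable
assert stabilizes(np.diag([0.5, 0.0]), A, B)    # rho(A - B K) = 0.9 < 1
```

Under the controllability assumption above, at least one such stabilizing $K$ exists, and any policy found this way can serve as the finite-cost initialization $K_0$.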
We use such a policy $\K[0]$ as an initialization for our algorithms. \subsubsection{Some properties of the LQR cost function} \label{sec:lqrprop} Let us turn to establishing properties of the pair of population cost functions $\left(\Cinit(\K), \Cdyn(\K) \right)$ and their respective sample variants $\left(\Cinit(\K; \initialstate), \Cdyn(\K; \mathcal{Z}) \right)$, in order to place the problem within the context of optimization. First, it is important to note that both of the population cost functions $\left(\Cinit(\K), \Cdyn(\K) \right)$ are non-convex. In particular, for any unstable policy, the state sequence blows up and the costs become infinite, but as noted by Fazel et al.~\cite{kakade18}, the stabilizing region \mbox{$\{\K: \rho_\text{spec}(A-B\K) < 1 \}$} is non-convex, thereby rendering our optimization problems non-convex. In spite of this non-convexity, the cost functions exhibit many properties that make them amenable to fast stochastic optimization methods. Variants of the following properties were first established by Fazel et al.~\cite{kakade18} for the random initialization cost function $\Cinit$. Lemmas~\ref{lem:lipschitz_cost_lqr} and~\ref{lem:lipschitz_gradient_lqr} below require certain refinements of their claims, which we prove in Appendix~\ref{sec:randint_appendix}. Lemma~\ref{lem:pl_inequality} follows directly from Lemma 3 in Fazel et al.~\cite{kakade18}. Lemma~\ref{lem:noisy-random} relates the population cost of the noisy dynamics model to that of the random initialization model in a pointwise sense.
\begin{lemma}[LQR Cost is locally Lipschitz] \label{lem:lipschitz_cost_lqr} Given any linear policy $\K$, there exist positive scalars $(\lipconK{\K}, \widetilde{\lipconK{\K}}, \radiustwo{\K})$, depending on the function value $\Cinit(\K)$, such that for all policies $\Ktwo$ satisfying ${\fronorm{\Ktwo - \K} \leq \radiustwo{\K}}$, and for all initial states $\initialstate$, we have \begin{subequations} \begin{align} \vert \Cinit(\Ktwo) - \Cinit(\K) \vert &\leq \lipconK{\K} \fronorm{\Ktwo - \K}, \text{ and} \\ \vert \Cinit(\Ktwo; \initialstate) - \Cinit(\K; \initialstate) \vert &\leq \widetilde{\lipconK{\K}} \fronorm{\Ktwo - \K}. \end{align} \end{subequations} \end{lemma} \begin{lemma}[LQR Cost has locally Lipschitz Gradients] \label{lem:lipschitz_gradient_lqr} Given any linear policy $\K$, there exist positive scalars $(\radiusone{\K}, \smoothnessK{\K})$, depending on the function value $\Cinit(\K)$, such that for all policies $\Ktwo$ satisfying $\fronorm{\Ktwo - \K} \leq \radiusone{\K}$, we have \begin{align} \fronorm{\ensuremath{\nabla} \Cinit(\Ktwo) - \ensuremath{\nabla} \Cinit(\K)} \leq \smoothnessK{\K} \fronorm{\Ktwo - \K}. \end{align} \end{lemma} \begin{lemma}[LQR satisfies PL] \label{lem:pl_inequality} There exists a universal constant $\ensuremath{\mu_{{\sf lqr}}} > 0$ such that for all stable policies $\K$, we have \begin{align*} \fronorm{\ensuremath{\nabla} \Cinit(\K)}^2 \geq \ensuremath{\mu_{{\sf lqr}}} \big( \Cinit(\K) - \Cinit(\Kstar) \big), \end{align*} where $\Kstar$ is the global minimum of the cost function $\Cinit$. \end{lemma} For the sake of exposition, we have stated these properties without specifying the various smoothness and PL constants. Appendix~\ref{sec:randint_appendix} collects explicit expressions for the tuple $(\lipconK{\K}, \widetilde{\lipconK{\K}}, \smoothnessK{\K}, \radiusone{\K}, \radiustwo{\K}, \ensuremath{\mu_{{\sf lqr}}})$ as functions of the parameters of the LQR problem.
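As a numerical sanity check of the PL property in Lemma~\ref{lem:pl_inequality} (a check, not a proof), one can evaluate the PL ratio on a grid for a scalar LQR instance, where the population cost admits an elementary closed form. All constants below are illustrative assumptions.

```python
# Sanity check of the PL-type inequality on a scalar LQR instance, whose
# population cost has the closed form C(K) = (Q + R K^2) / (1 - gamma (A - B K)^2).
# All constants are illustrative assumptions.

A, B, Q, R, GAMMA = 0.9, 1.0, 1.0, 0.1, 0.99

def cost(K):
    c = A - B * K  # closed-loop coefficient
    return (Q + R * K * K) / (1.0 - GAMMA * c * c)

def grad(K, h=1e-6):
    # central finite difference for C'(K)
    return (cost(K + h) - cost(K - h)) / (2 * h)

# Scan a grid inside the stabilizing interval and record the worst-case
# PL ratio |C'(K)|^2 / (C(K) - C(K*)).
Ks = [0.2 + 0.001 * i for i in range(1401)]        # K in [0.2, 1.6]
cstar = min(cost(K) for K in Ks)                   # grid proxy for C(K*)
mu_hat = min(grad(K) ** 2 / (cost(K) - cstar)
             for K in Ks if cost(K) - cstar > 1e-8)
```

On this instance `mu_hat` stays bounded away from zero over the scanned region, consistent with the lemma; the grid minimum stands in for $\Cinit(\Kstar)$.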
\begin{lemma}[Equivalence of population costs up to scaling] \label{lem:noisy-random} For all policies $\K$, we have \begin{align*} \Cdyn(\K) = \frac{\gamma}{1 - \gamma} \Cinit(\K). \end{align*} \end{lemma} Lemma~\ref{lem:noisy-random} thus shows that, at least in a population sense, both the noisy dynamics and random initialization models behave identically when driven by noise with the same first two moments. Hence, the properties posited by Lemmas~\ref{lem:lipschitz_cost_lqr}, ~\ref{lem:lipschitz_gradient_lqr}, and~\ref{lem:pl_inequality} for the \emph{population} cost function $\Cinit(\K)$ also carry over to the function $\Cdyn(\K)$. In particular, the cost function $\Cdyn(\K)$ is also $\left(\frac{\gamma}{1 - \gamma} \smoothnessK{\K}, \radiusone{\K} \right)$ locally smooth and $\left(\frac{\gamma}{1 - \gamma} \lipconK{\K}, \radiustwo{\K} \right)$ locally Lipschitz, and also globally $\frac{\gamma}{1 - \gamma}\ensuremath{\mu_{{\sf lqr}}}$-PL. We stress that although the population costs are very similar, the observed costs in the two cases are quite different. \subsubsection{Stochastic zero-order oracle in LQR} Let us now describe the form of observations that we make in the LQR system. Recall that we are operating in the derivative-free setting, where we have access to only (noisy) function evaluations and not the problem parameters; in particular, the tuple $(A,B,Q,R)$ that parametrizes the LQR problem is unknown. Our observations consist of the noisy function evaluations $\Cinit(\K; \initialstate)$ or $\Cdyn(\K; \mathcal{Z})$. We consider both the one-point and two-point settings in the former case. In the one-point setting for the randomly initialized model, a \emph{query} of the function at the point $\K$ obtains the noisy function value $\Cinit(\K; \initialstate)$ for an initial state $\initialstate$ drawn at random from the distribution $\statedistributionlqr$. 
In the two-point setting, a query of the function at the points $(\K,\ensuremath{K'})$ obtains the pair of noisy function values $\Cinit(\K; \initialstate)$ and $\Cinit(\ensuremath{K'}; \initialstate)$ for an initial state $\initialstate$ drawn at random; this setting has an immediate operational interpretation as running two policies with the same random initialization. The one-point query model is defined analogously for the noisy dynamics cost $\Cdyn$. A few points regarding our query model merit discussion. First, note that in the context of the control objective, each query produces a noisy sample of the long-term trajectory cost, and so our sample complexity is measured in terms of the number of \emph{rollouts}, or trajectories. Such an assumption is reasonable since the ``true'' sample complexity that also takes into account the length of the trajectories is only larger by a small factor---the truncated, finite cost converges exponentially quickly to the infinite sum for stable policies.\footnote{To elaborate further on this point, note that the length of the rollout required to obtain a $\delta$-accurate cost evaluation for policy $K$ will depend on both $\delta$ as well as the eigen-structure of the matrix $A - BK$. However, assuming that this matrix has spectral radius $\rho < 1$ (which is a common assumption in the related literature~\cite{dean18,cohen19}), the dependence on $\delta$ is quite mild: we only require a rollout of length $\order{\log (1 / \delta)}$, with the constant pre-factor depending on $\rho$ (or equivalently, on $\mathcal{C}(K_0)$). Since we are interested in obtaining $\epsilon$-approximations to the optimal policy, it suffices to obtain $\mathsf{poly}(\epsilon)$-approximate cost evaluations per trajectory to avoid a blow-up of the bias in our estimates (see, e.g.,~\cite{kakade18}), and this only adds another factor $\log (1 / \epsilon)$ to our sample complexity when measured in terms of the number of iterations.
To avoid tracking these additional factors, we work with the offline setting defined above.} The offline nature of the query model also assumes access to restarts of the system, which can be obtained in a simulation environment. Second, we note that while the one-point query model was studied by Fazel et al.~\cite{kakade18} for the random initialization model---albeit with sub-optimal guarantees---we also study a two-point query model, which is known to lead to faster convergence rates in zero-order stochastic optimization~\cite{duchi15}. Finally, note that our setting of the problem---in which we are only given access to (noisy) evaluations of the cost of the policy and not to the state sequence---intentionally precludes the use of procedures that rely on observations of the state sequence. This setting allows us to distill the difficulties of truly `model-free' control, since it prevents any possibility of constructing a dynamics model from our observations; the latter is, loosely speaking, the guiding principle of model-based control. This is not to suggest that practical applications of learning-based LQR control take this form, but rather to provide a concrete framework within which model-based and model-free algorithms can be separated, by endowing them with distinct information oracles. In doing so, we hope to lay the broader foundations for studying derivative-free methods in the context of model-free reinforcement learning. \section{Main results} \label{sec:results2} We now turn to a statement of our main result, which characterizes the convergence rate of a natural derivative-free algorithm for any (population) function that satisfies certain PL and smoothness properties. We thus obtain, as corollaries, rates of zero-order optimization algorithms when applied to the functions $\Cinit$ and $\Cdyn$; these corollaries are collected in Section~\ref{sec:cons-lqr}.
\subsection{Stochastic zero-order algorithm} We analyze a standard zero-order algorithm for stochastic optimization~\cite{agarwal10,shamir17} in application to the LQR problem. We begin by introducing some notation required to describe this algorithm, operating in the general setting where we want to optimize a function $\f: \mathcal{X} \mapsto \ensuremath{\mathbb{R}}$ of the form $\f(x) = \mathbb{E}_{\factorvar \sim\mathcal{D}} [F(x; \factorvar)]$. Here we assume the inclusion $\mathcal{X} \subseteq \ensuremath{\mathbb{R}}^d$, and let $\mathcal{D}$ denote a generic source of randomness in the zero-order function evaluation. The zero-order algorithms that we study here use noisy function evaluations in order to construct near-unbiased estimates of the gradient. Let us now describe how such an estimate is constructed in the one-point and two-point settings. Let $\ensuremath{\mathbb{S}}^{\dims - 1} = \{u \in \ensuremath{\mathbb{R}}^d: \| u \|_2 = 1\}$ denote the unit sphere in $\ensuremath{\mathbb{R}}^d$, and let $\ensuremath{\operatorname{Unif}}(\ensuremath{\mathbb{S}}^{\dims - 1})$ denote the uniform distribution over the set $\ensuremath{\mathbb{S}}^{\dims - 1}$. For a given scalar $r > 0$ and a random direction $u \sim \ensuremath{\operatorname{Unif}}(\ensuremath{\mathbb{S}}^{\dims - 1})$ chosen independently of the random variable $\factorvar$, consider the one-point gradient estimate \begin{subequations} \begin{align} \gradest_r^1(\x, u, \factorvar) & :\,= F(\x + r u,\factorvar) \; \frac{\dims}{r} \ensuremath{u}, \label{eq:gradonepoint} \end{align} and its two-point analogue \begin{align} \gradest_r^2(\x, u, \factorvar) & :\,= \big[ F(\x + r u,\factorvar) - F(\x - r u, \factorvar) \big] \; \frac{\dims}{2r} \ensuremath{u}.
\label{eq:gradtwopoint} \end{align} \end{subequations} Here $\factorvar$ should be viewed as an instantiation of the underlying random variable; in the two-point setting, we compute a gradient estimate with the \emph{same instantiation} of the noise used to evaluate $F$ at the points $x \pm r u$. In both the one-point and two-point cases, the resulting ratios are almost unbiased approximations of the secant ratio that defines the derivative at $x$, and these approximations improve as the \emph{smoothing radius} $r$ gets smaller. On the other hand, small values of the radius $r$ may result in estimates with large variance. Our algorithms make use of such randomized approximations in a sequence of rounds by choosing appropriate values of the radius $r$; the general form of such an algorithm is stated below. \begin{algorithm} \caption{Stochastic Zero-Order Method} \label{sgd_simple} \begin{algorithmic}[1] \State{Given iteration number $T \geq 1$, initial point $\x[0] \in \mathcal{X}$, step-size $\eta > 0$ and smoothing radius $r > 0$} \For{$t \in \{ 0, 1, \ldots, T-1 \}$} \State{ Sample $\factorvar_t \sim \mathcal{D}$ and $u_t \sim \ensuremath{\operatorname{Unif}}(\ensuremath{\mathbb{S}}^{\dims - 1})$} \State{ $\gradest(x_t) \gets \begin{cases} \gradest_r^1(\x[t], u_t, \factorvar_t) \; \text{ if operating in one-point setting} \\ \gradest_r^2(\x[t], u_t, \factorvar_t) \; \text{ if operating in two-point setting.} \end{cases} $} \State {$\x[t+1] \gets \x[t] - \eta \gradest (x_t)$} \EndFor \Return $\x[T]$ \end{algorithmic} \end{algorithm} \subsection{Convergence guarantees} \label{sec:conv-guar} We now turn to analyzing Algorithm~\ref{sgd_simple} in the settings of interest. In particular, our first (main) theorem is stated as a generic optimization result for non-convex functions that are (locally) smooth and satisfy the PL inequality, which we then specialize to various LQR settings.
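Before turning to the analysis, we give a minimal Python rendering of Algorithm~\ref{sgd_simple} together with the estimators~\eqref{eq:gradonepoint} and~\eqref{eq:gradtwopoint}; the noise model and all parameter values used with this sketch are illustrative assumptions, not the tuned choices from the analysis.

```python
# Sketch of the stochastic zero-order method (Algorithm 1) for a generic
# sample function F(x, xi); an illustrative implementation, with parameter
# values left to the caller.
import math, random

def unit_sphere(d, rng):
    """Draw u uniformly from the unit sphere S^{d-1}."""
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(v * v for v in u))
    return [v / n for v in u]

def zero_order(F, x0, eta, r, T, rng, two_point=True):
    x, d = list(x0), len(x0)
    for _ in range(T):
        xi = rng.random()                 # one instantiation of the noise
        u = unit_sphere(d, rng)
        xp = [a + r * b for a, b in zip(x, u)]
        if two_point:
            # same noise instantiation xi at both query points x +/- r u
            xm = [a - r * b for a, b in zip(x, u)]
            scale = (F(xp, xi) - F(xm, xi)) * d / (2.0 * r)
        else:
            scale = F(xp, xi) * d / r
        x = [a - eta * scale * b for a, b in zip(x, u)]   # gradient step
    return x
```

For instance, on a noisy quadratic the two-point variant drives the iterate to the minimizer despite the algorithm never observing gradients, only paired function values.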
As mentioned before, the difficulty of optimizing the LQR cost functions is governed by multiple factors such as stability, non-convexity of the feasible set, and non-convexity of the objective. Furthermore, the Lipschitz gradient and Lipschitz properties for this cost function only hold locally with the radius of locality depending on the current iterate. Most crucially, the function is infinite outside of the region of stability, and so large steps can have disastrous consequences since we do not have access to a projection oracle that brings us back into the region of stability. It is thus essential to control the behavior of our stochastic, high variance algorithm over the entire course of optimization. Our strategy to overcome these challenges is to perform a careful martingale analysis, showing that the iterates remain bounded throughout the course of the algorithm; the rate depends, among other things, on the variance of the gradient estimates obtained over the course of the algorithm. By showing that the algorithm remains within the region of finite cost, we can also obtain good bounds on the local Lipschitz constants and gradient smoothness parameters, so that our step-size can be set accordingly. Let us now introduce some notation in order to make this intuition precise. We operate once again in the setting of general function optimization, i.e., we are interested in optimizing a function $f(\x) = \mathbb{E}_{\factorvar} [\F(\x; \factorvar)]$ obeying the (global) PL inequality with constant $\mu$, as well as certain local curvature conditions. Recall that we are given an initial point $\x[0]$ with finite cost $\f(\x[0])$; the global upper bound on the cost that we target in the analysis is set according to the cost $\f(\x[0])$ of this initialization. 
Given the initial gap to optimality $\Delta_0 :\,= \f(\x[0]) - \f(\xstar)$, we define the set \begin{align} \boundedset{0} :\,= \bigl\{ \x \mid \f(\x) - \f(\xstar) \leq 10 \diff{0} \bigr\}, \end{align} corresponding to points $\x$ whose cost gap is at most ten times the initial cost gap $\diff{0}$. Assume that the function $\f$ is $(\smoothness_x, \radiusone{\x})$ locally smooth and $(\Lipcon_{\x}, \radiustwo{\x})$ locally Lipschitz at the point $\x$. Thus, both of these properties hold simultaneously within a neighborhood of radius \mbox{$\rho_{\x} = \min\{ \radiusone{\x}, \radiustwo{\x} \}$} of the point $\x$. Now define the quantities \begin{align*} \globalSmooth{0} :\,= \sup_{\x \in \boundedset{0}} \smoothness_x, \qquad \Lipcon_{0} :\,= \sup_{\x \in \boundedset{0}} \Lipcon_{\x}, \quad \text{ and } \quad \rho_{0} :\,= \inf_{\x \in \boundedset{0}} \rho_{\x}. \end{align*} By defining these quantities, we have effectively transformed the local properties of the function $\f$ into global properties that hold over the bounded set $\boundedset{0}$. We also define a convenient functional of these curvature parameters $\curvature_{0} :\,= \min \left\{ \frac{1}{2 \globalSmooth{0} }, \frac{\rho_{0}}{\Lipcon_{0}} \right\}$, which simplifies the statements of our results. Importantly, these smoothness properties only hold locally, and so we must also ensure that the steps taken by our algorithm are not too large. This is controlled by both the step-size as well as the norms of our gradient estimate $\gradest$ computed over the course of the algorithm. Define the uniform bounds \begin{align*} \gradbound_{\infty} = \sup_{\x \in \boundedset{0}} \| \gradest(\x) \|_2, \qquad \text{ and } \qquad \gradbound_{2} = \sup_{\x \in \boundedset{0}} \EE \left[ \| \gradest(\x) - \EE \left[ \gradest(\x) \mid \x \right] \|_2^2 \right] \end{align*} on the point-wise gradient norm and its variance, respectively.
Note that these quantities also depend implicitly on the smoothing radius $r$ and on how the gradient estimate $\gradest$ is computed. With this set-up, we are now ready to state the main result regarding the convergence rate of Algorithm~\ref{sgd_simple} on the functions of interest. Note that here and throughout the rest of the paper, $C$ denotes some universal constant (which may change from line to line). For two sequences $g_n$ and $h_n$, we also use the standard notation $g_n \sim h_n$ and $g_n = \Theta(h_n)$ interchangeably, to mean that the sequences are within a (universal) constant multiplicative factor of each other. \begin{theorem} \label{thm:mainthm} Suppose that the step-size and smoothing radius are chosen so as to satisfy \begin{subequations} \begin{align} \eta \leq \min\left\{ \frac{\epsilon \Plconst}{240 \globalSmooth{0} \gradbound_{2} }, \; \frac{1}{2 \globalSmooth{0}}, \; \frac{\rho_0}{\gradbound_{\infty}} \right\}, \qquad \text{ and } \qquad r \leq \min \left\{ \frac{\curvature_{0} \Plconst}{8 \globalSmooth{0}} \sqrt{\frac{\epsilon}{15}}, \; \frac{1}{2\globalSmooth{0}}\sqrt{\frac{\epsilon \Plconst}{30} }, \; \rho_{0} \right\}. \end{align} Then for a given error tolerance $\epsilon$ such that $\epsilon \log (120 \Delta_0 / \epsilon) < \frac{10}{3} \Delta_0$, the iterate $\x[\ensuremath{T}]$ of Algorithm~\ref{sgd_simple} after \mbox{$\ensuremath{T} = \frac{4}{\eta \Plconst} \log\left( \frac{ 120 \diff{0}}{\epsilon} \right)$} steps satisfies the bound \begin{align} \f(\x[\ensuremath{T}]) - \f(\xstar) \leq \epsilon \end{align} \end{subequations} with probability greater than $3/4$. \end{theorem} A few comments on Theorem~\ref{thm:mainthm} are in order. First, notice that the algorithm is guaranteed to return an $\epsilon$-accurate solution with constant probability $\frac{3}{4}$. This probability bound of $\frac{3}{4}$ in itself can be sharpened by a slightly more refined analysis with different constants. 
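To get a feel for the iteration count in Theorem~\ref{thm:mainthm}, the snippet below evaluates $\ensuremath{T} = \frac{4}{\eta \Plconst} \log( 120 \diff{0}/\epsilon)$; the constants are those of the theorem, while all input values are hypothetical, chosen only to exhibit the scaling in $\epsilon$.

```python
# Illustrative computation of the iteration count from the theorem,
# T = (4 / (eta * mu)) * log(120 * Delta_0 / eps).
# All input values below are hypothetical.
import math

def iterations(eta, mu, delta0, eps):
    return math.ceil(4.0 / (eta * mu) * math.log(120.0 * delta0 / eps))
```

Under the two-point step-size scaling $\eta \propto \epsilon$, this gives $T \sim (1/\epsilon)\log(1/\epsilon)$, while the one-point scaling $\eta \propto \epsilon^2$ gives $T \sim (1/\epsilon^2)\log(1/\epsilon)$.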
Additionally, by examining the proof, it can be seen that we establish a result (cf. Proposition~\ref{prop:thm} in Section~\ref{sec:proofs}) that is slightly stronger than Theorem~\ref{thm:mainthm}, and then obtain the theorem from this more general result. The proof of the theorem itself is relatively short, and makes use of a carefully constructed martingale along with an appropriately defined stopping time. As mentioned before, the main challenge in the proof is to ensure that we have bounded iterates while still preserving the strong convergence properties of zero-order stochastic methods for smooth functions that satisfy the PL property. It should be noted that Theorem~\ref{thm:mainthm} is a general guarantee: it characterizes the zero-order complexity of optimizing locally smooth functions that satisfy a PL inequality in terms of properties of the gradient estimates obtained over the course of the algorithm. In particular, two properties of these estimates appear: the variance of the estimate, as well as a uniform bound on its size. These quantities, in turn, depend on both the noise in the zero-order evaluations as well as our choice of query model. In the next section, we specialize Theorem~\ref{thm:mainthm} so as to derive particular consequences for the LQR models introduced above. \subsection{Consequences for LQR optimization} \label{sec:cons-lqr} Theorem~\ref{thm:mainthm} yields immediate consequences for LQR optimization in various settings, and the dependence of the optimization rates on the tolerance $\epsilon$ is summarized by Table~\ref{tab:lqr}. We state and discuss precise versions of these results below. 
\begin{table}[] \centering \begin{tabular}{c|c|c|c|c|} \cline{2-5} \multicolumn{1}{r|}{Parameter settings} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Smoothing radius \\ $r$\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Variance \\ $\gradbound_2$\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Step-size \\ $\eta$\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}} \#queries \\ $T$\end{tabular}} \\ \multicolumn{1}{l|}{Query Model} & & & & \\ \hline \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}One-point LQR \\ (Random initialization/\\ Noisy dynamics)\end{tabular}} & $\order{\sqrt{\epsilon}}$ & $\order{\epsilon^{-1}}$ & $\order{\epsilon^{2}}$ & $\ordertil{\epsilon^{-2}}$ \\ \hline \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Two-point LQR \\ (Random initialization)\end{tabular}} & $\order{\sqrt{\epsilon}}$ & $\order{1}$ & $\order{\epsilon}$ & $\ordertil{\epsilon^{-1}}$ \\ \hline \end{tabular} \caption{Derivative-free complexity of LQR optimization under the two query models, as a function of the final error tolerance $\epsilon$. The multiplicative pre-factors are functions of the effective dimension $\ensuremath{D}$ and curvature parameters, and differ in the three cases; see the statements of the corollaries below. } \label{tab:lqr} \end{table} First, let us consider the random initialization model. From the various lemmas in Section~\ref{sec:lqrprop}, we know that the population objective $\Cinit(\K)$ is locally $(\smoothnessK{\K}, \radiusone{\K})$ smooth and $(\lipconK{\K}, \radiustwo{\K})$ Lipschitz, and also globally $\ensuremath{\mu_{{\sf lqr}}}$-PL. By assumption, we are given a starting point $\K[0]$ having finite population cost $\Cinit(\K[0])$. 
Proceeding as in the previous section, we may thus define the set \begin{align} \label{EqnDefnBoundedSet} \boundedset{\ensuremath{{\sf lqr}}} :\,= \braces{ \K \mid \Cinit(\K) - \Cinit(\Kstar) \leq 10 \diff{0}}, \end{align} corresponding to policies $\K$ whose cost gap is at most ten times the initial cost gap to optimality \mbox{$\diff{0} = \Cinit(\K[0]) - \Cinit(\Kstar)$.} Now define the quantities \begin{align*} \globalSmooth{\ensuremath{{\sf lqr}}} :\,= \sup_{\K \in \boundedset{\ensuremath{{\sf lqr}}}} \smoothnessK{\K}, \qquad \Lipcon_{\ensuremath{{\sf lqr}}} :\,= \sup_{\K \in \boundedset{\ensuremath{{\sf lqr}}}} \lipconK{\K}, \quad \text{ and } \quad \rho_{\ensuremath{{\sf lqr}}} :\,= \inf_{\K \in \boundedset{\ensuremath{{\sf lqr}}}} \localK{\K}, \end{align*} thereby transforming the local smoothness properties of the function $\Cinit$ into global properties that hold over the bounded set $\boundedset{\ensuremath{{\sf lqr}}}$. Once again, let $\curvature_{\ensuremath{{\sf lqr}}} :\,= \min \left\{ \frac{1}{2 \globalSmooth{\ensuremath{{\sf lqr}}} }, \frac{\rho_{\ensuremath{{\sf lqr}}}}{\Lipcon_{\ensuremath{{\sf lqr}}}} \right\}$ be a functional of these curvature parameters that simplifies the statements of our results.\footnote{Let us make a brief comment on the finiteness of these quantities in the absence of compactness. The quantity $\phi_{\ensuremath{{\sf lqr}}}$ is finite, simply by definition of the set $\boundedset{\ensuremath{{\sf lqr}}}$. In the sequel, we show that for any $K \in \boundedset{\ensuremath{{\sf lqr}}}$, $\phi_K$ can be bounded by a polynomial of $10 \Delta_0$. Hence, $\phi_\ensuremath{{\sf lqr}}$ can also be bounded by a polynomial of $10 \Delta_0$, implying it is finite. A similar argument shows that $\lambda_\ensuremath{{\sf lqr}}$ is finite and $\rho_\ensuremath{{\sf lqr}}>0$.} With this setup, we now establish the following corollaries for derivative-free policy optimization for linear quadratic systems.
\begin{corollary}[One-point, Random initialization] \label{cor:init1} Suppose that the step-size and smoothing radius are chosen such that {\normalsize{ \begin{align*} \eta & \leq C \min\left\{ \frac{\epsilon \ensuremath{\mu_{{\sf lqr}}} r^2}{\globalSmooth{\ensuremath{{\sf lqr}}} C_{\statedim}^2 \ensuremath{D}^2 [\Cinit(\K[0])]^2 }, \; \frac{1}{\globalSmooth{\ensuremath{{\sf lqr}}}}, \; \frac{\rho_{\ensuremath{{\sf lqr}}} r}{C_{\statedim} \ensuremath{D} [\Cinit(\K[0])]} \right\}, \text{ and } \\ r &\leq \min \left\{ \frac{\curvature_{\ensuremath{{\sf lqr}}} \ensuremath{\mu_{{\sf lqr}}}}{8 \globalSmooth{\ensuremath{{\sf lqr}}}} \sqrt{\frac{\epsilon}{15}}, \; \frac{1}{2\globalSmooth{\ensuremath{{\sf lqr}}}}\sqrt{\frac{\epsilon \ensuremath{\mu_{{\sf lqr}}}}{30} }, \; \rho_{\ensuremath{{\sf lqr}}}, \; \frac{10 \Cinit(\K[0])}{\Lipcon_{\ensuremath{{\sf lqr}}}} \right\}, \end{align*} }} for some universal constant $C$. Then for any error tolerance $\epsilon$ such that $\epsilon \log (120 \Delta_0 / \epsilon) < \frac{10}{3} \Delta_0$, running Algorithm~\ref{sgd_simple} for \mbox{$\ensuremath{T} = \frac{4}{\eta \Plconst} \log\left( \frac{ 120 \diff{0}}{\epsilon} \right)$} iterations yields an iterate $\K[\ensuremath{T}]$ such that \begin{align*} \Cinit(\K[\ensuremath{T}]) - \Cinit(\Kstar) \leq \epsilon \end{align*} with probability greater than $3/4$. \end{corollary} Let us parse this result briefly.
Treating the other parameters as constants, note that it is valid to choose $r \sim \epsilon^{1/2}$; the above result then shows that with a choice of step-size $\eta \sim \epsilon^2$, the canonical zero-order algorithm converges using $T \sim \eta^{-1} \log (1 / \epsilon) = \ordertil{\epsilon^{-2}}$ steps. This is in spite of the high-variance estimates obtained by the algorithm, and the theorem also guarantees stability of all the iterates with constant probability. Interestingly, the result above (or more generally, Theorem~\ref{thm:mainthm}) also yields an $\ordertil{\epsilon^{-2}}$ convergence rate for the family of high-variance \emph{minibatch} derivative-free algorithms, where $k$ zero-order samples are used to estimate the gradient at any point, thereby reducing its variance. The canonical algorithm corresponds to the case $k = 1$, while that of Fazel et al. corresponds to the case of some large $k$. In particular, choosing a minibatch of size $k$ results in the variance of the gradient $\gradbound_{2}$ being reduced by a factor $k$, allowing us to increase our step-size proportionally and converge in a $1/k$-fraction of the number of iterations (but with the same number of zero-order evaluations in total). For completeness, we provide an analysis tailored to the algorithm of Fazel et al.~\cite{kakade18} in Appendix~\ref{app:fazel}, which shows that our techniques can be used to sharpen their rates to guarantee $\epsilon$-approximate policy optimization with $\ordertil{\epsilon^{-2}}$ zero-order evaluations. Let us also briefly discuss the upper bounds on the step-size that are required for the corollary to hold. As stated, the step-size is required to satisfy the bound $\eta \leq \frac{r \rho_{\ensuremath{{\sf lqr}}}}{10 \Cinit(\K[0])}$, but this condition is an artifact of the analysis and can be removed (see Appendix~\ref{app:fazel}). In addition, the step-size is also required to be bounded by the curvature properties of the function.
Operationally speaking, this means that for larger step-sizes, we are unable to guarantee stability of the policies obtained over the course of the algorithm. Such a bottleneck is in fact also observed in practice, as shown in Figure~\ref{fig:plt_minibatch} for both the one-point and two-point settings. \input{exp-minibatch.tex} We now turn to the two-point setting, in which we obtain two noisy evaluations per query. \begin{corollary}[Two-point, Random initialization] \label{cor:init2} Suppose that the step-size and smoothing radius are chosen so as to satisfy \begin{align*} \eta \leq \min\left\{ \frac{\epsilon \ensuremath{\mu_{{\sf lqr}}}}{240 \globalSmooth{\ensuremath{{\sf lqr}}} \ensuremath{D} \Lipcon_{\ensuremath{{\sf lqr}}}^2 }, \; \frac{1}{2 \globalSmooth{\ensuremath{{\sf lqr}}}}, \; \frac{\rho_{\ensuremath{{\sf lqr}}}}{\ensuremath{D} \Lipcon_{\ensuremath{{\sf lqr}}}} \right\}, \qquad \text{ and } \qquad r \leq \min \left\{ \frac{\curvature_{\ensuremath{{\sf lqr}}} \ensuremath{\mu_{{\sf lqr}}}}{8 \globalSmooth{\ensuremath{{\sf lqr}}}} \sqrt{\frac{\epsilon}{15}}, \; \frac{1}{2\globalSmooth{\ensuremath{{\sf lqr}}}}\sqrt{\frac{\epsilon \ensuremath{\mu_{{\sf lqr}}}}{30} }, \; \rho_{\ensuremath{{\sf lqr}}} \right\}. \end{align*} Then for any error tolerance $\epsilon$ such that $\epsilon \log (120 \Delta_0 / \epsilon) < \frac{10}{3} \Delta_0$, running Algorithm~\ref{sgd_simple} for \mbox{ $\ensuremath{T} = \frac{4}{\eta \Plconst} \log\left( \frac{ 120 \diff{0}}{\epsilon} \right)$} iterations yields an iterate $\K[\ensuremath{T}]$ such that \begin{align*} \Cinit(\K[\ensuremath{T}]) - \Cinit(\Kstar) \leq \epsilon \end{align*} with probability greater than $3/4$. \end{corollary} As is known from the literature on zero-order optimization in convex settings~\cite{duchi15, shamir17}, the two-point query model allows us to substantially reduce the variance of our gradient estimate, thus ensuring much faster convergence than with one-point evaluations.
The most salient difference is the fact that we now converge with $\ordertil{1/\epsilon}$ iterations as opposed to the $\ordertil{1/\epsilon^2}$ iterations required in Corollary~\ref{cor:init1}. This gap between the two settings is substantial and merits further investigation, but in general, it is clear that two-point evaluations should certainly be used if available. This gap, and other differences, are discussed shortly. Let us now turn to establishing convergence results for the noisy dynamics model in the one-point setting. Note that Lemma~\ref{lem:noisy-random} provides a way to directly relate the population costs of the random initialization and noisy dynamics models; furthermore, the set $\boundedset{\ensuremath{{\sf lqr}}}$ is exactly the same. In addition, since we look at a discounted cost $\Cdyn$ in this setting, the corresponding curvature parameters have an inherent dependence on $\discount$ which we denote using corresponding subscripts. With an additional computation of the variance and norm of the gradient estimates, we then obtain the following corollary for one-point optimization of the noisy dynamics model. Our statement involves the constants \begin{align*} \gradbound_{2, \ensuremath{{\sf lqr}}} & :\,= \left( \frac{\ensuremath{D}}{r}\cdot \frac{2 (\opnorm{Q} + \opnorm{R} \lambda_{\ensuremath{{\sf lqr}}, \discount}^2) C_{\statedim}}{1-\sqrt{\gamma}}\right)^2 \cdot \left( \frac{20\Cdyn(\K[0])}{\sigma_{\min}(Q)} \bigg( \frac{1 - \discount}{\discount} \bigg) \right)^{3} \text{ and } \\ \gradbound_{\infty, \ensuremath{{\sf lqr}}} & :\,= \frac{\ensuremath{D}}{r}\cdot \frac{2 (\opnorm{Q} + \opnorm{R} \lambda_{\ensuremath{{\sf lqr}}, \discount}^2) C_{\statedim}}{1-\sqrt{\gamma}} \cdot \left( \frac{20\Cdyn(\K[0])}{\sigma_{\min}(Q)} \bigg( \frac{1 - \discount}{\discount} \bigg) \right)^{3/2}. 
\end{align*} \begin{corollary}[One-point, Noisy dynamics] \label{cor:noisydyn} Suppose that the step-size and smoothing radius are chosen so as to satisfy \begin{align*} \eta & \leq \min\left\{ \frac{\epsilon \mu_{\ensuremath{{\sf lqr}}, \discount}}{240 \globalSmooth{\ensuremath{{\sf lqr}}, \discount} \gradbound_{2, \ensuremath{{\sf lqr}}} }, \; \frac{1}{2 \globalSmooth{\ensuremath{{\sf lqr}}, \discount}}, \; \frac{\rho_{\ensuremath{{\sf lqr}}, \discount}}{\gradbound_{\infty, \ensuremath{{\sf lqr}}}} \right\}, \text{ and } \\ r & \leq \min \left\{ \frac{\curvature_{\ensuremath{{\sf lqr}}, \discount}\cdot \mu_{\ensuremath{{\sf lqr}}, \discount}}{8 \globalSmooth{\ensuremath{{\sf lqr}}, \discount}} \sqrt{\frac{\epsilon}{15}}, \; \frac{1}{2\globalSmooth{\ensuremath{{\sf lqr}}, \discount}}\sqrt{\frac{\epsilon \cdot\mu_{\ensuremath{{\sf lqr}}, \discount}}{30} }, \; \rho_{\ensuremath{{\sf lqr}}, \discount} \right\}. \end{align*} Then for any error tolerance $\epsilon$ such that $\epsilon \log (120 \Delta_0 / \epsilon) < \frac{10}{3} \Delta_0$, Algorithm~\ref{sgd_simple} with \mbox{ $\ensuremath{T} = \frac{4}{\eta \Plconst_{\ensuremath{{\sf lqr}}, \discount}} \log\left( \frac{ 120 \diff{0}}{\epsilon} \right)$} iterations yields an iterate $\K[\ensuremath{T}]$ such that \begin{align*} \Cdyn(\K[\ensuremath{T}]) - \Cdyn(\Kstar) \leq \epsilon \end{align*} with probability greater than $3/4$. \end{corollary} Thus, we have shown that the one-point settings for both the random initialization and noisy dynamics models exhibit similar behaviors in the different parameters. Reasoning heuristically, such a behavior is due to the fact that the additional additive noise in the dynamics is quickly damped away by the discount factor, so that the cost is dominated by the noise in the initial iterates. The variance bound, however, is substantially different, and this leads to the differing dependence on the smoothness parameters and dimension of the problem. Another interesting problem studied in the noisy dynamics model is one of bounding the regret of online procedures. Equipped with a high probability bound on convergence---as opposed to the constant probability bound currently posited by Corollary~\ref{cor:noisydyn}---the offline guarantee and associated algorithm can in principle be turned into a no-regret learner in the online setting. We leave this extension to future work. Let us now briefly discuss the dependence of the various bounds on the different parameters of the LQR objective, in the various cases above. \input{exp-eps.tex} \paragraph{Dependence on $\epsilon$:} Our bounds illustrate two distinct dependences on the tolerance parameter $\epsilon$. In particular, the zero-order complexity scales proportional to $\epsilon^{-2}$ for both one-point settings (Corollaries~\ref{cor:init1} and~\ref{cor:noisydyn}), but proportional to $\epsilon^{-1}$ in the two-point setting (Corollary~\ref{cor:init2}). As alluded to before, this distinction arises due to the lower variance of the gradient estimator in the two-point setting.
Lemma~\ref{lem:lipschitz_cost_lqr} establishes the Lipschitz property of the LQR cost function for each instantiation of the noise variable $s_0$, which ensures that the Lipschitz constant of our \emph{sample} cost function is also bounded; therefore, the noise of the problem reduces as we approach the optimum solution. In contrast, the optimization problem with one-point evaluations becomes more difficult the closer we are to the optimum solution, since the noise remains constant, while the ``signal'' in the problem (measured by the rate of decrease of the population cost function) reduces as we approach the optimum. The $O(1/ \epsilon^2)$ dependence in the one-point settings is reminiscent of the complexity required to optimize strongly convex and smooth functions~\cite{agarwal10, shamir12}, and it would be interesting if a matching lower bound could also be proved in this LQR setting\footnote{Note that this lower bound follows immediately for the class of PL and smooth functions.}. Even in the absence of such a lower bound, the one-point setting is strictly worse than the two-point setting even with respect to the other parameters of the problem, which we discuss next. Figure~\ref{fig:plt_eps} shows the convergence rate of the algorithm in all three settings as a function of $\epsilon$, and confirms that the empirical scalings corroborate our theory quite accurately. It is also worth noting that model-based algorithms for this problem require $\order{\epsilon^{-1}}$ trajectory samples to return an $\epsilon$-approximate policy in the noisy dynamics setting (see, e.g.~\cite{dean17}).
Thus, while a one-point zero-order method is outperformed by these algorithms---note that the comparison is not quite fair, since zero-order algorithms only require access to noisy cost evaluations and not the state sequence---a two-point variant is similar to model-based methods in its dependence\footnote{Note that the comparison is inherently imprecise, since we are comparing upper bounds to upper bounds. In practice, one would certainly prefer the use of a model-based method when provided access to the state sequence.} on $\epsilon$. \paragraph{Dependence on dimension:} The dependence on dimension enters once again via our bound on the variance of the gradient estimate, as is typical of many derivative-free procedures~\cite{duchi15, shamir17}. The two-point setting gives rise to the best dimension dependence (linear in $\ensuremath{D}$), and the reason is similar to why this occurs for convex optimization~\cite{shamir17}. It is particularly interesting to compare the dimension dependence to results in model-based control. There, in the noisy dynamics model, the sample complexity scales with the sum of state and control dimensions $\statedim + \controldim$, whereas the dependence in the two-point setting is on their product $\ensuremath{D} = \statedim \cdot \controldim$. However, each observation in that setting consists of a state vector of length $\statedim$, while here we only get access to scalar cost values, and so in that loose sense, the complexities of the two settings are comparable. In the one-point setting, the dependence on dimension is significantly poorer, and at least quadratic. This of course ignores other dimension-dependent factors such as $C_{\statedim}$, as well as the curvature parameters $(\smoothness_{\ensuremath{{\sf lqr}}}, \Lipcon_{\ensuremath{{\sf lqr}}}, \mu)$ (see the discussion below).
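The dimension scalings described above are straightforward to probe numerically. The following sketch is an illustration only: the quadratic test function, the smoothing radius, and the sample size are arbitrary choices (not the setup of our experiments). It estimates the second moment of the one-point and two-point gradient estimates at a fixed point, in two different dimensions:

```python
import numpy as np

def sphere_directions(rng, n, d):
    """n i.i.d. directions distributed uniformly on the unit sphere in R^d."""
    u = rng.normal(size=(n, d))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

def second_moments(d, r=0.1, n=100_000, seed=0):
    """Monte Carlo estimates of E||g||^2 for the one-point and two-point
    estimators at x = e_1, for the test function f(x) = ||x||^2
    (a hypothetical choice)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    x[0] = 1.0
    u = sphere_directions(rng, n, d)
    f_plus = np.sum((x + r * u) ** 2, axis=1)
    f_minus = np.sum((x - r * u) ** 2, axis=1)
    g_one = (d / r) * f_plus                     # norm of the one-point estimate
    g_two = (d / (2 * r)) * (f_plus - f_minus)   # norm of the two-point estimate
    return np.mean(g_one ** 2), np.mean(g_two ** 2)

one_5, two_5 = second_moments(d=5)
one_50, two_50 = second_moments(d=50)
# Increasing d by 10x inflates the two-point second moment by roughly 10x,
# but the one-point second moment by roughly 100x.
```

On this toy instance, the measured second moments grow roughly linearly in $\dims$ for the two-point estimate and roughly quadratically for the one-point estimate, in line with the discussion above.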
\input{exp-init.tex} \paragraph{Dependence on curvature parameters:} The iteration complexity scales linearly in the smoothness parameter of the problem $\smoothness_{\ensuremath{{\sf lqr}}}$, and quadratically in the other curvature parameters. See Appendix~\ref{sec:polynomials_bounded} for precise definitions of these parameters for the LQR problem. In particular, it is worth noting that our tightest bounds for these quantities depend on the dimension of the problem implicitly for some LQR instances, and are actually lower-order polynomials of the initial cost. In practice, however, it is likely that much sharper bounds can be proved on these parameters; e.g., in simulation (see Figure~\ref{fig:plt_init}), the dependence of the sample complexity on the initial cost is in fact relatively weak---of the order $\mathcal{C}(\K[0])^2$---and our bounds are clearly not sharp in that sense. \section{Proofs of main results} \label{sec:proofs} In this section, we provide proofs of Theorem~\ref{thm:mainthm} and Corollaries~\ref{cor:init1},~\ref{cor:init2}, and~\ref{cor:noisydyn}. The proofs of the corollaries rely on a number of technical lemmas, whose proofs we postpone to the appendix. \subsection{Proof of Theorem~\ref{thm:mainthm}} Recall that by assumption, the population function $f$ has domain $\mathcal{X} \subseteq \ensuremath{\mathbb{R}}^d$ and satisfies the following properties over the restricted domain $\boundedset{0} \subseteq \mathcal{X}$, previously defined in equation~\eqref{EqnDefnBoundedSet}: \begin{enumerate} \item[(a)] It has $(\globalSmooth{0}, \rho_{0})$-locally Lipschitz gradients, \item[(b)] It is $(\Lipcon_{0}, \rho_{0})$-locally Lipschitz, and \item[(c)] It is globally $\Plconst$-PL. \end{enumerate} Recall the values of the step-size $\eta$, smoothing radius $r$, and iteration complexity $T$ posited by Theorem~\ref{thm:mainthm}.
For ease of exposition, it is helpful to run our stochastic zero-order method on this problem for $2T$ iterations; we thus obtain a (random) sequence of iterates $\{ \x[t] \}_{t=0}^{2T}$. For each \mbox{$t = 0, 1, 2, \ldots$,} we define the cost error \mbox{$\Delta_t = \f(\x[t]) - \f(\xstar)$}, as well as the stopping time \begin{align} \label{EqnDefnStoppingTime} \tau & :\,= \min \Big \{t \mid \Delta_t > 10 \Delta_0 \Big\}. \end{align} In words, the time $\tau$ is the index of the first iterate that exits the bounded region $\boundedset{0}$. The gradient estimate $\gradest$ at any point $\x \in \boundedset{0}$ is assumed to satisfy the bounds \begin{align*} {\sf var}(\gradest(\x)) \leq \gradbound_{2} \quad \text{ and } \quad \| \gradest(\x) \|_2 \leq \gradbound_{\infty} \text{ almost surely}. \end{align*} With this setup in place, we now state and prove a proposition that is stronger than the assertion of Theorem~\ref{thm:mainthm}. \begin{proposition} \label{prop:thm} With the parameter settings of Theorem~\ref{thm:mainthm}, we have \begin{align*} \EE[ \Delta_T 1_{\tau > T}] \leq \epsilon/20, \end{align*} and furthermore, the event $\{\tau > T \}$ occurs with probability greater than $4/5$. \end{proposition} \noindent Let us verify that Proposition~\ref{prop:thm} implies the claim of Theorem~\ref{thm:mainthm}. We have \begin{align*} \ensuremath{\mathbb{P}}\{ \diff{T} \geq \epsilon \} &\leq \ensuremath{\mathbb{P}}\{ \diff{T} 1_{\tau > T} \geq \epsilon \} + \ensuremath{\mathbb{P}}\{ \tau \leq T \} \\ &\stackrel{(i)}{\leq} \frac{1}{\epsilon} \EE[ \diff{T} 1_{\tau > T} ] + \ensuremath{\mathbb{P}}\{ \tau \leq T \} \\ &\stackrel{(ii)}{\leq} 1/20 + 1/5 \\ &\leq 1/4, \end{align*} where step (i) follows from Markov's inequality, and step (ii) from Proposition~\ref{prop:thm}. Thus, Theorem~\ref{thm:mainthm} follows as a direct consequence of Proposition~\ref{prop:thm}, and we dedicate the rest of the proof to establishing Proposition~\ref{prop:thm}.
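For concreteness, the procedure under analysis, together with the stopping-time bookkeeping, can be sketched in code as follows. This is an illustrative rendering only: the quadratic test problem, the noise level, and all parameter values are hypothetical choices, and the population cost $f$ is used purely to monitor the error $\Delta_t$ (as in the analysis), not by the algorithm itself.

```python
import numpy as np

def smoothed_zero_order_sgd(F, f, x0, eta, r, T, rng):
    """One-point smoothed zero-order method with stopping-time bookkeeping.

    F(x) returns a noisy cost evaluation, while f(x) is the population
    cost, used only to track Delta_t and the stopping time tau (the first
    iterate whose cost error exceeds 10 * Delta_0).
    """
    x = np.asarray(x0, dtype=float)
    d = x.size
    f_star = 0.0                          # known optimum of the toy problem below
    delta_0 = f(x) - f_star
    tau = None
    for t in range(T):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)            # uniform direction on the unit sphere
        g = (d / r) * F(x + r * u) * u    # one-point estimate of the smoothed gradient
        x = x - eta * g
        if tau is None and f(x) - f_star > 10 * delta_0:
            tau = t + 1
    return x, tau

rng = np.random.default_rng(0)
f = lambda x: float(x @ x)                    # population cost; minimized at 0
F = lambda x: f(x) + 0.01 * rng.normal()      # noisy one-point evaluation
x_T, tau = smoothed_zero_order_sgd(F, f, np.ones(5), eta=1e-3, r=0.5, T=5000, rng=rng)
```

On such a toy instance, one expects the iterates to remain in the bounded region (so that $\tau$ is never triggered) while the final cost error becomes small, mirroring the two parts of Proposition~\ref{prop:thm}.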
Let $\EE^t$ denote the expectation conditioned on the randomness up to time $t$. The following lemma bounds the progress of one step of the algorithm: \begin{lemma} \label{lem:PLsmooth} Given any function satisfying the previously stated properties, suppose that we run Algorithm~\ref{sgd_simple} with smoothing radius $r \leq \rho_{0}$, and with a step-size $\eta$ such that $\| \eta \estgrad{t} \|_2 \leq \rho_{0}$ almost surely. Then for any $t = 0, 1, \ldots$ such that $\x[t] \in \boundedset{0}$, we have \begin{align} \EE^t \left[ \diff{t + 1} \right] & \leq \Big ( 1 - \frac{\eta \Plconst}{4} \Big ) \diff{t} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}. \end{align} \end{lemma} \noindent The proof of the lemma is postponed to Section~\ref{sec:PLsmooth}. Taking it as given, let us now establish Proposition~\ref{prop:thm}. Proposition~\ref{prop:thm} has two natural parts; let us focus first on proving the bound on the expectation. Let $\ensuremath{\mathcal{F}}_{t}$ denote the $\sigma$-field containing all the randomness in the first $t$ iterates. Conditioning on this $\sigma$-field yields \begin{align*} \EE [\Delta_{t+1} 1_{\tau > t+1} \mid \ensuremath{\mathcal{F}}_{t} ] & \leq \EE [\Delta_{t+1} 1_{\tau > t} \; \mid \; \ensuremath{\mathcal{F}}_{t} ] \; \stackrel{(i)}{=} \; \EE [\Delta_{t+1} \; \mid \; \ensuremath{\mathcal{F}}_{t} ] 1_{\tau > t}, \end{align*} where step (i) follows since $\tau$ is a stopping time, and so the random variable $1_{\tau > t}$ is determined completely by the $\sigma$-field $\ensuremath{\mathcal{F}}_{t}$. \\ \noindent We now split the proof into two cases. \paragraph{Case 1:} Assume that $\tau > t$, so that we have the inclusion $\x[t] \in \boundedset{0}$.
In addition, note that the iterate $\x[t+1]$ is obtained after a stochastic zero-order step whose size is bounded as \begin{align*} \enorm{\eta \estgrad{t}} \; \leq \; \eta \gradbound_{\infty} \leq \rho_{0}, \end{align*} where we have used the fact that $\eta \leq \frac{\rho_{0}}{\gradbound_{\infty}}$. We may thus apply Lemma~\ref{lem:PLsmooth} to obtain \begin{subequations} \begin{align} \label{EqnCaseOneBound} \EE [\Delta_{t+1} \; \mid \; \ensuremath{\mathcal{F}}_{t} ] & \leq \Big(1 - \frac{\eta \Plconst}{4} \Big) \diff{t} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}. \end{align} \paragraph{Case 2:} In this case, we have $\tau \leq t$, so that \begin{align} \label{EqnCaseTwoBound} \EE [\Delta_{t+1} \; \mid \; \ensuremath{\mathcal{F}}_{t} ] 1_{\tau > t} = 0. \end{align} \end{subequations} Now combining the bounds~\eqref{EqnCaseOneBound} and~\eqref{EqnCaseTwoBound} from the two cases yields the inequality \begin{align} \EE [\Delta_{t+1} \; \mid \; \ensuremath{\mathcal{F}}_{t} ] 1_{\tau > t} &\leq \left\{ \Big(1 - \frac{\eta \Plconst}{4} \Big) \diff{t} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120} \right\} 1_{{\tau > t}} \label{eq:recbound1} \\ & \leq \Big(1 - \frac{\eta \Plconst}{4} \Big) \: \Delta_t 1_{{\tau > t}} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}. \nonumber \end{align} Taking expectations over the $\sigma$-field $\ensuremath{\mathcal{F}}_{t}$ and then arguing inductively yields \begin{align*} \EE [\Delta_{t+1} 1_{\tau > t+1} ] & \leq \Big( 1 - \frac{\eta \Plconst}{4} \Big)^{t+1} \Delta_0 + \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) \sum_{i = 0}^{t} \Big(1 - \frac{\eta \Plconst}{4} \Big)^i \\ & \leq \Big(1 - \frac{\eta \Plconst}{4} \Big)^{t+1} \Delta_0 + 2\frac{\eta}{\Plconst} \globalSmooth{0} \gradbound_{2} + \frac{4\epsilon}{120}.
\end{align*} Setting $t +1 = T$ then establishes the first part of the proposition with substitutions of the various parameters. We now turn to establishing that $\ensuremath{\mathbb{P}} \{\tau > T \} \geq 4/5$. We do so by setting up a suitable super-martingale on our iterate sequence and appealing to classical maximal inequalities. Recall that we run the algorithm for $2T$ steps for convenience, and thereby obtain a set of $2T$ random variables $\{\Delta_1, \ldots, \Delta_{2T}\}$. With the stopping time $\tau$ defined as in equation~\eqref{EqnDefnStoppingTime}, define the stopped process \begin{align*} Y_t & :\,= \Delta_{\tau \wedge t} + (2T - t) \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) \quad \mbox{ for each $t \in [2T]$.} \end{align*} Note that by construction, each random variable $Y_t$ is non-negative, and it is almost surely bounded owing to the locally Lipschitz nature of the function. We claim that $\{Y_t\}_{t = 0}^{2T}$ is a super-martingale. In order to prove this claim, we first write \begin{align} \label{EqnAshwinOwesPizza} \EE[Y_{t + 1} \mid \ensuremath{\mathcal{F}}_{t} ] = \EE [ \Delta_{\tau \wedge (t+1)} 1_{\tau \leq t} \mid \ensuremath{\mathcal{F}}_{t} ] + \EE [ \Delta_{\tau \wedge (t+1)} 1_{\tau > t} \mid \ensuremath{\mathcal{F}}_{t} ] + (2T - (t+1)) \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right). \end{align} We begin by bounding the first term on the right-hand side: \begin{subequations} \begin{align} \label{EqnTermOne} \EE [ \Delta_{\tau \wedge (t+1)} 1_{\tau \leq t} \mid \ensuremath{\mathcal{F}}_{t} ] = \EE [ \Delta_{\tau \wedge t} 1_{\tau \leq t} \mid \ensuremath{\mathcal{F}}_{t} ] = \Delta_{\tau \wedge t} 1_{\tau \leq t}.
\end{align} As for the second term, we have \begin{align} \EE [ \Delta_{\tau \wedge (t+1)} 1_{\tau > t} \mid \ensuremath{\mathcal{F}}_{t} ] & = \EE [ \Delta_{t+1} 1_{\tau > t} \mid \ensuremath{\mathcal{F}}_{t} ] \nonumber \\ & = \EE [ \Delta_{t+1} \mid \ensuremath{\mathcal{F}}_{t} ] 1_{\tau > t} \nonumber \\ & \stackrel{(iii)}{\leq} \Big(1 - \frac{\eta \Plconst}{4} \Big) \Delta_t 1_{\tau > t} + \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) 1_{\tau > t} \nonumber \\ \label{EqnTermTwo} & \leq \Big( 1 - \frac{\eta \Plconst}{4} \Big) \Delta_{\tau \wedge t} 1_{\tau > t} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}, \end{align} \end{subequations} where step (iii) follows from using inequality~\eqref{eq:recbound1}. Substituting the bounds~\eqref{EqnTermOne} and~\eqref{EqnTermTwo} into our original inequality~\eqref{EqnAshwinOwesPizza}, we find that \begin{align*} \EE[Y_{t + 1} \mid \ensuremath{\mathcal{F}}_{t} ] & = \EE [ \Delta_{\tau \wedge (t+1)} 1_{\tau \leq t} \mid \ensuremath{\mathcal{F}}_{t} ] + \EE [ \Delta_{\tau \wedge (t+1)} 1_{\tau > t} \mid \ensuremath{\mathcal{F}}_{t} ] + (2T - (t+1)) \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) \\ & \leq \Delta_{\tau \wedge t} 1_{\tau \leq t} + (1 - \eta \Plconst/4) \Delta_{\tau \wedge t} 1_{\tau > t} + \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) + (2T - (t+1)) \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) \\ & \stackrel{(iv)}{\leq} \Delta_{\tau \wedge t} + (2T - t) \left( \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}\right) \\ & = Y_t, \end{align*} where step (iv) follows from the inequality $\eta \Plconst \Delta_{\tau \wedge t} \geq 0$. We have thus verified the super-martingale property. 
Finally, applying Doob's maximal inequality for super-martingales (see, e.g.~\citealp{durrett10}) yields \begin{align*} \Pr\{ \max_{t \in [2T]} Y_t \geq \nu \} & \leq \frac{\EE[Y_0]}{\nu} \\ & = \frac{1}{\nu} \left( \Delta_0 + 2T \left\{ \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120} \right\} \right) \\ & \stackrel{(v)}{\leq} \frac{1}{\nu} \left( \Delta_0 + \frac{\epsilon}{5} \log (120 \Delta_0/ \epsilon) \right), \end{align*} where step (v) follows from the substitutions $T = \frac{4}{\eta \Plconst} \log (120 \Delta_0 / \epsilon)$, and $\eta \leq \frac{\epsilon \Plconst}{240 \globalSmooth{0} \gradbound_{2}}$. As long as $\epsilon$ is sufficiently small so as to ensure that $\epsilon \log (120 \Delta_0 / \epsilon) < 5 \Delta_0$, setting $\nu = 10 \Delta_0$ completes the proof. \subsubsection{Proof of Lemma~\ref{lem:PLsmooth}} \label{sec:PLsmooth} Recall that the domain of the function $\f$ is $\mathcal{X} \subseteq \mathbb{R}^{\dims}$. For a scalar $r > 0$, the smoothed version $\fr(\x)$ is given by $\fr(\x) :\,= \Exs \left[ \f(\x + r \unifdirball) \right]$, where the expectation above is taken with respect to the randomness in $\unifdirball$, and $\unifdirball$ has uniform distribution on a $\dims$-dimensional ball $\ensuremath{\mathbb{B}}^{\dims}$ of unit radius. The estimate $\gradest$ of the gradient $\ensuremath{\nabla} \fr$ at $\x$ is given by \begin{align*} \gradest(\x) = \begin{cases} F(\x + r u,\factorvar) \; \frac{\dims}{r} \ensuremath{u} \; &\text{ if operating in the one-point setting} \\ \big [ F{(\x + r u,\factorvar)} - F{(\x - r u, \factorvar)} \big] \; \frac{\dims}{2r} \ensuremath{u} \; &\text{ if operating in the two-point setting,} \end{cases} \end{align*} where $\ensuremath{u}$ has a uniform distribution on the unit sphere $\ensuremath{\mathbb{S}}^{\dims - 1}$, and $\factorvar$ is sampled at random from $\mathcal{D}$.
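In code, the two estimators take the following form. This is an illustrative sketch: the quadratic test function, noise scale, and sample size are hypothetical choices. Averaging many estimates recovers $\ensuremath{\nabla} \fr(\x)$, which for this particular quadratic coincides exactly with $\ensuremath{\nabla} \f(\x)$, consistent with the unbiasedness property recorded in Lemma~\ref{lem:Thm1_lemma} below.

```python
import numpy as np

def one_point_estimate(F, x, r, u, w):
    """F(x + r u, w) * (d / r) * u, for a unit direction u."""
    d = x.size
    return F(x + r * u, w) * (d / r) * u

def two_point_estimate(F, x, r, u, w):
    """[F(x + r u, w) - F(x - r u, w)] * (d / (2 r)) * u."""
    d = x.size
    return (F(x + r * u, w) - F(x - r * u, w)) * (d / (2 * r)) * u

# Monte Carlo check of unbiasedness on a toy problem (hypothetical choices):
# F(x, w) = ||x||^2 + 0.1 w, so grad f(x) = 2 x, and the smoothed gradient
# equals 2 x exactly for this quadratic.
rng = np.random.default_rng(0)
d, r, n = 4, 0.25, 100_000
x = np.array([1.0, 0.0, 0.0, 0.0])
F = lambda z, w: float(z @ z) + 0.1 * w

avg = np.zeros(d)
for _ in range(n):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    avg += two_point_estimate(F, x, r, u, rng.normal())
avg /= n
# avg is close to grad f(x) = (2, 0, 0, 0); the one-point estimate has the
# same mean but far higher variance, so it requires many more samples.
```

Note that with the same noise realization $\factorvar$ in both evaluations, the additive noise cancels in the two-point difference, which is one way to see its much smaller variance.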
The following result summarizes some useful properties of the smoothed version of $\f$, and relates it to the gradient estimate $\gradest$. \begin{lemma} \label{lem:Thm1_lemma} The smoothed version $\fr$ of $\f$ with smoothing radius $r$ has the following properties: \begin{itemize} \item[(a)] $\ensuremath{\nabla} \fr(\x) = \Exs \left[ \gradest(\x) \right]$. \item[(b)] $\enorm{\ensuremath{\nabla} \fr(\x) - \ensuremath{\nabla} \f(\x)} \leq \globalSmooth{0} r$. \end{itemize} \end{lemma} \noindent Versions of these properties have appeared in past work~\cite{flax04,agarwal10,shamir17}, but we provide proofs in Appendix~\ref{AppSmoothing} for completeness. Taking Lemma~\ref{lem:Thm1_lemma} as given, we now prove Lemma~\ref{lem:PLsmooth}. Let $\ensuremath{\mathcal{F}}_{t}$ denote the $\sigma$-field generated by the randomness up to iteration $t$, and let $\Exs$ denote the total expectation operator. We define $\CondExs{t} :\,= \Exs\left[ \cdot \mid \ensuremath{\mathcal{F}}_{t} \right]$ as the expectation operator conditioned on the $\sigma$-field $\ensuremath{\mathcal{F}}_{t}$. Recall that the function $\f$ is smooth with smoothness parameter $\globalSmooth{0}$, and we have \begin{align*} \CondExs{t} \bracket{ \f(\x[t + 1]) - \f(\x[t]) } & \leq \CondExs{t} \left[ \inprod{\ensuremath{\nabla} \f(\x[t])}{\x[t + 1] - \x[t]} + \frac{\globalSmooth{0}}{2} \enorm{\x[t + 1] - \x[t]}^2 \right] \\ & \stackrel{(i)}{=} - \inprod{ \eta \ensuremath{\nabla} \f(\x[t])}{\ensuremath{\nabla} \fr(\x[t])} + \frac{\globalSmooth{0} \eta^2}{2} \CondExs{t} \left[ \enorm{\gradest(\x[t])}^2 \right] \\ & \stackrel{(ii)}{\leq} - \eta \enorm{\ensuremath{\nabla} \f(\x[t])}^2 + \eta \globalSmooth{0} r \enorm{\ensuremath{\nabla} \f(\x[t])} + \frac{\globalSmooth{0} \eta^2}{2} \CondExs{t} \left[ \enorm{\gradest(\x[t])}^2 \right]. \end{align*} Here, step (i) follows from part (a) of Lemma~\ref{lem:Thm1_lemma}, and step (ii) from part (b) combined with the Cauchy--Schwarz inequality.
Now make the observation that \begin{align*} \CondExs{t} \left[ \enorm{\gradest(\x[t])}^2 \right] &= {\sf var}(\gradest(\x[t])) + \enorm{\ensuremath{\nabla} \fr(\x[t])}^2 \\ & \leq {\sf var}(\gradest(\x[t])) + 2 \enorm{\ensuremath{\nabla} \f(\x[t])}^2 + 2 \enorm{\ensuremath{\nabla} \fr(\x[t]) - \ensuremath{\nabla} \f(\x[t])}^2 \\ & \leq \gradbound_{2} + 2 \enorm{\ensuremath{\nabla} \f(\x[t])}^2 + 2 (\globalSmooth{0} r)^2. \end{align*} In addition, since the function is locally smooth at the point $\x[t]$, we have \begin{align*} (\curvature - \curvature^2 \globalSmooth{0}/2) \enorm{ \ensuremath{\nabla} \f(\x[t]) }^2 & \leq \f(\x[t]) - \f(\x[t] - \curvature \ensuremath{\nabla} \f(\x[t]) ) \\ &\leq \f(\x[t]) - \f(\xstar), \end{align*} for some parameter $\curvature$ chosen small enough such that the relation $\curvature \enorm{ \ensuremath{\nabla} \f( \x[t]) } \leq \rho_{0}$ holds. We may thus set $\curvature = \curvature_0 = \min \left\{ \frac{1}{2 \globalSmooth{0} }, \frac{\rho_{0}}{\Lipcon_{0}} \right\}$ and recall the notation $\diff{t} = \f(\x[t]) - \f(\xstar)$ to obtain \begin{align*} \CondExs{t} \bracket{ \diff{t+1} - \diff{t}} & \leq -\eta \enorm{\ensuremath{\nabla} \f(\x[t])}^2 + \eta \globalSmooth{0} r \frac{2}{\curvature_{0}} \diff{t}^{1/2} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \globalSmooth{0} \eta^2 \left( \enorm{\ensuremath{\nabla} \f(\x[t])}^2 + (\globalSmooth{0} r)^2 \right) \\ & \stackrel{(iii)}{\leq} - \frac{\eta \Plconst}{2} \diff{t} + 2 \frac{\eta \globalSmooth{0} r }{\curvature_{0}} \diff{t}^{1/2} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \globalSmooth{0} \eta^2 (\globalSmooth{0} r)^2, \\ & \stackrel{(iv)}{\leq} - \frac{\eta \Plconst}{2} \diff{t} + \frac{\eta \Plconst}{4} \diff{t} + 4 \frac{\eta (\globalSmooth{0} r)^2 }{\Plconst \curvature_{0}^2} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \globalSmooth{0} \eta^2 (\globalSmooth{0} r)^2, \end{align*} where step (iii) follows from applying the PL inequality and 
using the fact that $\eta \leq \frac{1}{2 \globalSmooth{0}}$, and step (iv) from the inequality $2ab \leq a^2 + b^2$ which holds for any pair of scalars $(a, b)$. Recall the assumed bounds on our parameters, namely \begin{align*} \eta \leq \min \left\{ \frac{\epsilon \Plconst}{240 \globalSmooth{0}}, \frac{1}{2 \globalSmooth{0}} \right\}, \quad \text{ and} \quad r \leq \frac{1}{2 \globalSmooth{0}} \min\left \{ \curvature_{0} \Plconst \sqrt{\frac{\epsilon}{240}}, \frac{1}{\globalSmooth{0}}\sqrt{\frac{\epsilon \Plconst}{30} } \right\}. \end{align*} Using these bounds, we have \begin{align*} \CondExs{t} \bracket{ \diff{t+1} - \diff{t} } & \leq - \frac{\eta \Plconst}{4} \diff{t} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}. \end{align*} Finally, rearranging yields \begin{align} \label{eqn:PL_exp_recursion} \CondExs{t} \left[ \diff{t+1} \right] \leq \Big(1 - \frac{\eta \Plconst}{4} \Big) \diff{t} + \frac{\globalSmooth{0} \eta^2}{2} \gradbound_{2} + \eta \Plconst \frac{\epsilon}{120}, \end{align} which completes the proof of Lemma~\ref{lem:PLsmooth}. \subsection{Proof of Corollary~\ref{cor:init1}} \label{sec:cor1} Recall the properties of the LQR cost function $\Cinit$ that were established in Lemmas~\ref{lem:lipschitz_cost_lqr} through~\ref{lem:pl_inequality}. Taking these properties as given (see Appendix~\ref{sec:randint_appendix} for the proofs of the lemmas), the only remaining detail is to establish the bounds \begin{align} \label{EqnCoffee} \gradbound_2 \leq C \left(\frac{\ensuremath{D}}{r} C_{\statedim} \Cinit(\K[0]) \right)^2 \quad \text{ and } \quad \gradbound_{\infty} &\leq C \frac{\ensuremath{D}}{r} C_{\statedim} \Cinit(\K[0]). \end{align} In fact, it suffices to prove the second bound in equation~\eqref{EqnCoffee}, since we have $\gradbound_{2} \leq \gradbound_{\infty}^2$. 
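Both bounds rest on the identity $\Cinit(\K; s_0) = s_0^\top P_{\K} s_0$ from equation~\eqref{eq:ckquad}. As a numerical illustration (a hypothetical two-state system; we assume the convention $u_t = -\K x_t$ with closed loop $A - B\K$ and stage cost $x^\top Q x + u^\top R u$, which may differ from our sign conventions elsewhere), the matrix $P_{\K}$ can be computed by iterating the associated Lyapunov recursion, and the resulting quadratic form matches a direct rollout of the cost:

```python
import numpy as np

def cost_matrix(A, B, K, Q, R, iters=2000):
    """Iterate P <- Q + K^T R K + (A - B K)^T P (A - B K),
    assuming the closed loop A - B K is stable."""
    Acl = A - B @ K
    Qk = Q + K.T @ R @ K
    P = np.zeros_like(Q)
    for _ in range(iters):
        P = Qk + Acl.T @ P @ Acl
    return P

# Hypothetical 2-state, 1-input system with a stabilizing K.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
K = np.array([[0.1, 0.3]])
s0 = np.array([1.0, -1.0])

P = cost_matrix(A, B, K, Q, R)
quad_cost = s0 @ P @ s0

# Direct rollout of sum_t x_t^T Q x_t + u_t^T R u_t with u_t = -K x_t.
x, rollout_cost = s0.copy(), 0.0
for _ in range(2000):
    u = -K @ x
    rollout_cost += x @ Q @ x + u @ (R @ u)
    x = A @ x + B @ u
# quad_cost and rollout_cost agree (up to truncation of the infinite sum)
```

Here the recursion after $n$ iterations equals the truncated series $\sum_{t=0}^{n-1} (A-B\K)^{\top t} (Q + \K^\top R \K)(A-B\K)^t$, so the two computations coincide term by term.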
Given a unit vector $u$, the norm of the gradient estimate can be bounded as \begin{align*} \| \gradest_t \|_2 &= \frac{\ensuremath{D}}{r} \Cinit(\K[t] + ru; s_0) \\ & \stackrel{(i)}{=} \frac{\ensuremath{D}}{r} s_0^\top P_{\K} s_0 \\ &\leq \frac{\ensuremath{D}}{r} \enorm{ s_0}^2 \opnorm{P_{\K}} \\ & \stackrel{(ii)}{\leq} C_{\statedim} \frac{\ensuremath{D}}{r} \Cinit(\K[t] + ru), \end{align*} where step (i) follows from the relation~\eqref{eq:ckquad}, and step (ii) from the relation~\eqref{eq:ctop}, since $P_{\K}$ is a PSD matrix. Finally, since $r \leq \rho_{\ensuremath{{\sf lqr}}}$, the local Lipschitz property of the function $\Cinit$ yields \begin{align*} \Cinit(\K[t] + ru) &\leq \Cinit(\K[t]) + r \Lipcon_{\K} \\ & \leq \Cinit(\K[t]) + r \Lipcon_{\ensuremath{{\sf lqr}}} \\ & \stackrel{(iii)}{\leq} 10 \Cinit(\K[0]) + 10 \Cinit(\K[0]), \end{align*} where step (iii) uses the fact that $\K[t] \in \boundedset{\ensuremath{{\sf lqr}}}$ so that $\Cinit(\K[t]) \leq 10 \Cinit(\K[0])$, and the upper bound $r \leq \frac{10 \Cinit(\K[0])}{\Lipcon_{\ensuremath{{\sf lqr}}}}$. Putting together the pieces completes the proof. \subsection{Proof of Corollary~\ref{cor:init2}} As before, establishing Corollary~\ref{cor:init2} requires bounds on the values of the pair $(\gradbound_{2}, \gradbound_{\infty})$, since the remaining properties are established in Lemmas~\ref{lem:lipschitz_cost_lqr} through~\ref{lem:pl_inequality}. In particular, let us establish bounds on these quantities for general optimization of a function with a two-point gradient estimate. The following computations closely follow those of Shamir~\cite{shamir17}. 
\paragraph{Second moment control:} Using the law of iterated expectations, we have \begin{align*} \mathbb{E} \bigg[ \euclidnorm{\dims \frac{\F(\x + r \ensuremath{u}, \factorvar) - \F(\x - r \ensuremath{u}, \factorvar)}{2r} \ensuremath{u} }^2 \bigg] & = \mathbb{E} \bigg[ \mathbb{E} \bigg[ \euclidnorm{\dims \frac{\F(\x + r \ensuremath{u}, \factorvar) - \F(\x - r \ensuremath{u}, \factorvar)}{2r} \ensuremath{u} }^2 \bigg \vert \factorvar \bigg] \bigg]. \end{align*} Introducing a placeholder variable $q$ (to be specified shortly), we now evaluate: \begin{align*} \mathbb{E} \bigg[ \euclidnorm{\dims \frac{\F(\x + r \ensuremath{u}, \factorvar) - \F(\x - r \ensuremath{u}, \factorvar)}{2r} \ensuremath{u} }^2 \bigg \vert \factorvar \bigg] &= \frac{\dims^2}{4 r^2} \mathbb{E} \bigg[ (\F(\x + r \ensuremath{u}, \factorvar) - \F(\x - r \ensuremath{u}, \factorvar))^2 \euclidnorm{\ensuremath{u}}^2 \bigg \vert \factorvar \bigg] \\ & \stackrel{(i)}{=} \frac{\dims^2}{4 r^2} \mathbb{E} \bigg[ (\F{(\x + r \ensuremath{u}, \factorvar)} - \F(\x - r \ensuremath{u}, \factorvar))^2 \bigg \vert \factorvar \bigg] \\ & = \frac{\dims^2}{4 r^2} \mathbb{E} \bigg[ (\F(\x + r \ensuremath{u}, \factorvar) - q - \F(\x - r \ensuremath{u}, \factorvar) + q)^2 \bigg \vert \factorvar \bigg] \\ & \stackrel{(ii)}{\leq} \frac{\dims^2}{2 r^2} \mathbb{E} \bigg[ (\F(\x + r \ensuremath{u}, \factorvar) - q)^2 + (\F(\x - r \ensuremath{u}, \factorvar) - q)^2 \bigg \vert \factorvar \bigg], \end{align*} where equality (i) follows from the fact that $\ensuremath{u}$ is a unit vector and inequality (ii) follows from the inequality $(a - b)^2 \leq 2(a^2 + b^2)$.
We further simplify this to obtain: \begin{align*} \mathbb{E} \bigg[ \euclidnorm{\dims \frac{\F(\x + r \ensuremath{u}, \factorvar) - \F(\x - r \ensuremath{u}, \factorvar)}{2r} \ensuremath{u} }^2 \bigg \vert \factorvar \bigg] & \stackrel{(i)}{\leq} \frac{\dims^2}{r^2} \mathbb{E} \bigg[ (\F(\x + r \ensuremath{u}, \factorvar) - q)^2 \bigg \vert \factorvar \bigg] \\ & \stackrel{(ii)}{\leq} \frac{\dims^2}{r^2} \sqrt { \mathbb{E} \bigg[ (\F(\x + r \ensuremath{u}, \factorvar) - q)^4 \bigg \vert \factorvar \bigg] }, \end{align*} where inequality (i) follows from the symmetry of the uniform distribution on the sphere, and inequality (ii) follows from Jensen's inequality. For a fixed $\factorvar$, we now define $q = \mathbb{E}[\F(\x + r \ensuremath{u}, \factorvar) \vert \factorvar]$. Substituting this expression yields \begin{align*} \mathbb{E} \bigg[ \euclidnorm{\dims \frac{\F(\x + r \ensuremath{u}, \factorvar) - \F(\x - r \ensuremath{u}, \factorvar)}{2r} \ensuremath{u} }^2 \bigg \vert \factorvar \bigg] & \leq \frac{\dims^2}{r^2} \sqrt { \mathbb{E} \bigg[ (\F(\x + r \ensuremath{u}, \factorvar) - \mathbb{E}[\F(\x + r \ensuremath{u}, \factorvar) \vert \factorvar] )^4 \bigg \vert \factorvar \bigg] } \\ & \stackrel{(i)}{\leq} \frac{\dims^2}{r^2} \frac{(\Lipcon r)^2}{\dims} \\ & = \dims \Lipcon^2, \end{align*} where inequality (i) follows directly from Lemma 9 in Shamir~\cite{shamir17}. The lemma can be applied since we are conditioning on $\factorvar$, and all the randomness lies in the selection of $\ensuremath{u}$. We have thus established the claimed bound on $\gradbound_{2}$. \paragraph{Gradient estimates are bounded:} Note that the smoothing radius $r$ satisfies $r \leq \rho_{0}$, where $\rho_{0}$ is the radius within which the function is Lipschitz.
Consequently, the local Lipschitz property of $\F$ implies that \begin{align*} \enorm{\gradest_{t}} &:\,= \euclidnorm{ \dims \frac{\F(\x[t] + r u_t; \factorvar_t) - \F(\x[t] - r u_t; \factorvar_t)}{2r} u_t } \\ & \leq \euclidnorm{ \dims \frac{\F(\x[t] + r u_t; \factorvar_t) - \F(\x[t]; \factorvar_t)}{2r} u_t } + \euclidnorm{ \dims \frac{\F(\x[t]; \factorvar_t) - \F(\x[t] - r u_t; \factorvar_t)}{2r} u_t } \\ & \leq \dims \Lipcon_{0} \frac{2 \| r u_t \|_2}{2 r} = \dims \Lipcon_{0}. \end{align*} \subsection{Proof of Corollary~\ref{cor:noisydyn}} \label{sec:cor3} As in Section~\ref{sec:cor1}, we establish bounds on the values $\gradbound_{2}$ and $\gradbound_{\infty}$ for the noisy LQR dynamics model. In particular, we derive a bound on $\gradbound_{\infty}$ and use the fact that $\gradbound_{2} \leq \gradbound_{\infty}^2$ to establish the bound on $\gradbound_{2}$. For deriving these bounds, we use properties of the cost function $\Cdyn$ and its connections with $\Cinit$ which are established in Lemma~\ref{lem:noisy-random} and Lemma~\ref{lem:unifcost}; the proofs of these are deferred to Appendix~\ref{sec:noisy_case}. In particular, we establish the bounds \begin{align*} \gradbound_2 & \leq \left( \frac{\ensuremath{D}}{r}\cdot \frac{2 (\opnorm{Q} + \opnorm{R} \lambda_{\ensuremath{{\sf lqr}}, \discount}^2) C_{\statedim}}{1-\sqrt{\gamma}}\right)^2 \cdot \left( \frac{20\Cdyn(\K[0])}{\sigma_{\min}(Q)} \bigg( \frac{1 - \discount}{\discount} \bigg) \right)^{3}, \text{ and} \\ \gradbound_{\infty} &\leq \frac{\ensuremath{D}}{r}\cdot \frac{2 (\opnorm{Q} + \opnorm{R} \lambda_{\ensuremath{{\sf lqr}}, \discount}^2) C_{\statedim}}{1-\sqrt{\gamma}} \cdot \left( \frac{20\Cdyn(\K[0])}{\sigma_{\min}(Q)} \bigg( \frac{1 - \discount}{\discount} \bigg) \right)^{3/2}.
\end{align*} For any unit vector $u$, we have \begin{align*} \| \gradest_t \|_2 &= \frac{\ensuremath{D}}{r} \Cdyn(\K[t] + ru; \mathcal{Z}) \\ & \stackrel{\ensuremath{{\sf (i)}}}{\leq} \frac{\ensuremath{D}}{r}\cdot \frac{2 (\opnorm{Q} + \opnorm{R} \lambda_{\ensuremath{{\sf lqr}}, \discount}^2) C_{\statedim}}{1-\sqrt{\gamma}} \cdot \left( \frac{\Cdyn(\K[t] + ru)}{\sigma_{\min}(Q)} \bigg( \frac{1 - \discount}{\discount} \bigg) \right)^{3/2}, \end{align*} where $\ensuremath{{\sf (i)}}$ follows from the bound in Lemma~\ref{lem:unifcost}, as well as the explicit choice of $\lambda_{\ensuremath{{\sf lqr}}, \discount}$ made using Lemma~\ref{bound_smoothness_etc}. Finally, using Lemma~\ref{lem:noisy-random} and since $r \leq \rho_{\ensuremath{{\sf lqr}}, \discount}\,$, the local Lipschitz property of the function $\Cinit$ yields \begin{align} \label{eq:boundCKru} \Cdyn(\K[t] + ru) & \leq \frac{\discount}{1-\discount}\cdot \Cinit(\K[t] + ru) \nonumber\\ & \leq \frac{\discount}{1-\discount}\cdot\left(\Cinit(\K[t]) + r \Lipcon_{\K, \discount} \right)\nonumber\\ & \leq \frac{\discount}{1-\discount}\cdot\left( \Cinit(\K[t]) + r \Lipcon_{\ensuremath{{\sf lqr}}, \discount} \right)\nonumber\\ & \stackrel{\ensuremath{{\sf (i)}}}{\leq} \frac{\discount}{1-\discount} \cdot \left( 10 \Cinit( \K[0]) + 10 \Cinit(\K[0]) \right), \end{align} where step $\ensuremath{{\sf (i)}}$ uses the fact that $\K[t] \in \boundedset{\ensuremath{{\sf lqr}}}$ so that $\Cinit(\K[t]) \leq 10 \Cinit(\K[0])$, and the upper bound $r \leq \frac{10 \Cinit(\K[0])}{\Lipcon_{\ensuremath{{\sf lqr}}, \discount}}$. Putting together the pieces completes the proof. \section{Discussion} \label{SecDiscussion} In this paper, we studied the model-free control problem over linear policies through the lens of derivative-free optimization. We derived quantitative convergence rates for various zero-order methods when applied to learn optimal policies based on data from noisy linear systems with quadratic costs.
In particular, we showed that one-point and two-point variants of a canonical derivative-free optimization method achieve fast rates of convergence for the non-convex LQR problem. Notably, our proof deals directly with some additional difficulties that are specific to this problem and do not arise in the analysis of typical optimization algorithms. More precisely, our proof involves careful control of both the (potentially) unbounded nature of the cost function and the non-convexity of the underlying domain. Interestingly, our proof only relies on certain local properties of the function that can be guaranteed over a bounded set; for this reason, the optimization-theoretic result in this paper (stated as Theorem~\ref{thm:mainthm}) is more broadly applicable beyond the RL setting. While this paper analyzes a canonical zero-order optimization algorithm for model-free control of linear quadratic systems, many open questions remain. One such question concerns lower bounds for LQR problems in the model-free setting, which would establish quantitative gaps between this setting and that of model-based control. While we conjecture that the convergence bounds of Corollaries~\ref{cor:init1},~\ref{cor:init2}, and~\ref{cor:noisydyn} are sharp in terms of their dependence on the error tolerance $\epsilon$, establishing this rigorously will require ideas from the extensive literature on lower bounds in zero-order optimization~\cite{shamir12}. Another important direction is to establish the sharpness (or otherwise) of our bounds in terms of the dimension of the problem, as well as to obtain tight characterizations of the local curvature parameters of the problem around a particular policy $\K$ in terms of the cost at $\K$.
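The two-point estimator analyzed above admits a very short implementation. The following Python sketch (illustrative, not from the paper; the quadratic test function, sample size, and all names are chosen for the example) forms $\dims \frac{f(x+ru)-f(x-ru)}{2r}u$ with $u$ drawn uniformly from the unit sphere, and checks on a toy quadratic that averaging many such estimates recovers the true gradient:

```python
import numpy as np

def two_point_grad_estimate(f, x, r, rng):
    """Two-point zero-order gradient estimate: draw u uniformly on the unit
    sphere and return d * (f(x + r*u) - f(x - r*u)) / (2*r) * u."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return d * (f(x + r * u) - f(x - r * u)) / (2 * r) * u

# Toy check on a smooth (quadratic) function: the estimator is unbiased,
# so the empirical mean over many draws approaches the true gradient A @ x0.
rng = np.random.default_rng(0)
A = np.diag([1.0, 3.0])
f = lambda x: 0.5 * x @ A @ x
x0 = np.array([1.0, -2.0])
est = np.mean(
    [two_point_grad_estimate(f, x0, 1e-3, rng) for _ in range(20000)], axis=0)
print(est)  # close to the true gradient A @ x0 = [1., -6.]
```

For a function that is locally $\Lipcon_{0}$-Lipschitz, the same algebra as in the display above bounds each such estimate in norm by $\dims \Lipcon_{0}$, which is what keeps the variance of this scheme controllable.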
We also mention that our sharp characterizations of the cost function are likely to be useful in sharpening analyses\footnote{Here again, the techniques of Fazel et al.~\cite{kakade18} yield a bound of the order $\ordertil{\epsilon^{-4}}$, but we conjecture that this bound should be improvable at least to $\ordertil{\epsilon^{-2}}$.} of the natural gradient algorithm~\cite{kakade18} as well as in analyzing the popular REINFORCE algorithm as applied to the LQR problem. We leave these interesting questions to future work. In the broader context of model-free reinforcement learning as well, there are many open questions. First, a derivative-free algorithm over linear policies is reasonable even in other systems; can we establish provable guarantees over larger classes of problems? Second, there is no need to restrict ourselves to linear policies; in practical RL systems, derivative-free algorithms are run for policies that are parametrized in a much more complex fashion. How does the sample complexity of the problem change with the class of policies over which we are optimizing? \subsection*{Acknowledgements} This work was partially supported by National Science Foundation grant NSF-DMS-1612948 and Office of Naval Research grant ONR-N00014-18-1-2640 to MJW. AP was additionally supported in part by NSF CCF-1704967. PLB gratefully acknowledges the support of the NSF through grant IIS-1619362. KB was supported in part by AFOSR through grant FA9550-17-1-0308 and NSF through grant IIS-1619362.
\section{\label{sec:1}Introduction} The~anomalous thermal behaviour of~dilute magnetic alloys~(DMA) has~been the~subject of~extensive experimental and~theoretical research over the~past decades. The~main stream of~theoretical investigations~(e.g.~Refs.~\onlinecite{kondo,bloomfield,hepp,wilson,rajan,andrei,filyov2,wiegmann,hewson,yosida,rudavskii}) has~focused on~the~construction of~a~conductivity~theory and~thermodynamics of~DMA on~the~grounds of~the~\mbox{s-d}~Hamiltonian~$H_{\text{s-d}}$ introduced by~\mbox{Kasuya}\cite{kasuya}. \mbox{Kondo's}~theory of~DMA impurity resistivity~$\Delta \rho$\cite{kondo} has~successfully explained the~experimentally observed dependence of~$\Delta \rho$ on~impurity concentration~$c$ and~temperature~$T$ in~the~vicinity of~the~resistivity minimum. Unfortunately, the~theory fails at~$T=0$, where~\mbox{Kondo's}~expression for~$\Delta \rho$ exhibits a~logarithmic singularity. The~question of~removing this~singularity is~known as~the~``\mbox{Kondo}~problem''. Substantial progress in~understanding the~anomalous~DMA~thermodynamics was~made by~\mbox{Andrei}~et~al.\cite{rajan,andrei} who~solved~$H_{\text{s-d}}$ thermodynamics for~an~\mbox{s-d}~system with indistinguishable impurities and~point \mbox{s-d}~interaction. Their~rescaled thermodynamic functions agree, up~to~a~small error, with~experimental impurity specific heat and~magnetization of~$\text{(LaCe)Al}_2$. The~solution found~in~Refs.~\onlinecite{rajan,andrei} yields universal single-impurity curves for~each thermodynamic function corresponding to~a~given value of~impurity spin, which~are~independent of~impurity concentration~$c$. The~shape of~experimental plots of~DMA~thermodynamic functions, in~general, varies slowly with~$c$~(e.g.~Refs.~\onlinecite{kondo,franck,felsch}), meaning that~their~dependence on~$c$ is~nonlinear. This~type of~dependence has~been~accounted~for by~theory only in~exceptional cases~(e.g.~Refs.~\onlinecite{kondo,souletie}).
A~different solution of~$H_{\text{s-d}}$ thermodynamics, which~treats a~dilute \mbox{s-d}~system (with a~smeared \mbox{s-d}~interaction) containing arbitrarily positioned distinguishable impurities and~yields nonlinear dependence of~thermodynamic functions on~$c$, was~presented by~the~author in~Refs.~\onlinecite{m2,m3}. The~first quantization Hamiltonian~$H_{\text{s-d}}$ in~this~approach is \begin{equation} H_{\text{s-d}}^{(n,M)}=A^{(n)}\left( H_0^{(n)} + g^2 \sum_{\alpha=1}^M \sum_{i=1}^n U(\mathbf{R}_{\alpha}-\mathbf{r}_i)\otimes S_{z\alpha}\sigma_{zi} \right), \label{H_sd} \end{equation} $n$ $(M)$ denoting the~number of~electrons (impurities), $A^{(n)}$~the~antisymmetrizer with~respect~to electron variables with~indices~$i=1,\ldots , n $, and \begin{equation} H_0^{(n)}=-\sum_{i=1}^n \frac{\hslash^2}{2m} \mathit{\Delta}_i. \label{H0} \end{equation} $\mathbf{R}_{\alpha}$ denotes the~position vector of~the~$\alpha$th~impurity, $S_{z\alpha}$~its spin operator and~$\mathbf{r}_i$, $\sigma_{zi}$~stand for~the respective quantities of~the~$i$th~electron. $U \ge 0$~represents any~sufficiently regular function depending on~$|\mathbf{R}_{\alpha}-\mathbf{r}_i|$, which~allows application of~the~\mbox{Feynman}-\mbox{Kac}~theorem to~$\exp [-\beta H_{\text{s-d}}^{(n,M)}]$\cite{simon}. 
This~theorem was~applied in~Ref.~\onlinecite{m2} to~derive an~upper and~lower bound to~the~system's free~energy per~electron, \begin{equation*} f(H_{\text{s-d}}^{(n,M)},\beta) := - (n\beta)^{-1} \ln \tr \exp [-\beta H_{\text{s-d}}^{(n,M)}], \end{equation*} and~to~prove that~the~two bounds coalesce in~the~dilute limit~$c \to 0$~($\dlim$) and, as~a~consequence, that \begin{equation} \lim_{n \to \infty} \dlim f(H_{\text{s-d}}^{(n,M)},\beta) = \lim_{n \to \infty} \dlim \min_{\xi,\eta} f(h^{(n,M)}(\xi,\eta),\beta) = \lim_{n \to \infty} f(H_0^{(n)} A^{(n)},\beta), \label{free_en} \end{equation} where~$h^{(n,M)}(\xi,\eta)$ is~the~mean-field~Hamiltonian of~an~\mbox{s-d}~system~$S_0$ with~separated electron and~impurity variables. Both the~electron subsystem~$S_{\text{e}}$ and~the~impurity subsystem~$S_{\text{imp}}$ of~$S_0$ consist of~noninteracting particles. According~to~Eq.~\eqref{free_en}, $h^{(n,M)}(\xi,\eta)$~is~almost thermodynamically equivalent to~$H_{\text{s-d}}$ in~the~extreme dilute limit. The~1-electron Hamiltonian of~$S_{\text{e}}$ has~the~form \begin{equation} h_{\text{e}}^{(1,M)}(\xi,\eta)=\tilde{h}_{\text{e}}^{(1,M)}(\xi,\eta)+ \frac{1}{2} M (\xi^2-\eta^2)\mathbb{I} \label{h_e} \end{equation} with \begin{equation} \tilde{h}_{\text{e}}^{(1,M)}(\xi,\eta)=H_0^{(1)} - g\sqrt{n} (\xi-\eta) \sum_{\alpha=1}^M U_{\alpha}^{(1)}\otimes \sigma_z^{(1)}. \label{h_e_tilde} \end{equation} Here $U_{\alpha}^{(1)}$~denotes the~multiplication operator by~$U(\mathbf{R}_{\alpha} -\mathbf{r}_i)$ and~$\eta(\xi)=\xi - f_2(\xi)$, where \begin{equation} f_2(\xi)= - \frac{g}{\sqrt{n}}\left\langle S_z \right\rangle_{h_{ \text{imp}}^{(1)}}\text{,} \qquad \left\langle B \right\rangle_{h} := \frac{\tr(B\exp (-\beta h))}{\tr \exp (-\beta h)}, \label{f2} \end{equation} whereas~$h_{\text{imp}}^{(1)}$ is~the~1-impurity Hamiltonian of~$S_{\text{imp}}$: \begin{equation} h_{\text{imp}}^{(1)}(\xi)=g\sqrt{n}\xi S_{z\alpha} + \frac{1}{2} g^2 S_{z\alpha}^2.
\label{h_imp} \end{equation} The~necessary condition for~the~minimum in~Eq.~\eqref{free_en} takes the~form \begin{equation} \xi=f_3(\xi) \label{xi} \end{equation} with \begin{equation} f_3(\xi) := f_1\left(f_2(\xi)\right)+f_2(\xi), \end{equation} \begin{equation} f_1(\xi) := g\sqrt{n}\left\langle \Gamma_1^n U_{\alpha}^{(1)}\sigma_z^{(1)} \right\rangle_{n\Gamma_1^n \tilde{h}_{\text{e}}^{(1,M)}(\xi,0)}, \label{f1} \end{equation} \begin{equation} \Gamma_1^n B^{(1)} := A^{(n)}\left( B^{(1)}\otimes \mathbb{I}^{(n-1)} \right) A^{(n)}, \label{gamma} \end{equation} $n \Gamma_1^n h^{(1)}$~denoting the~Hamiltonian of~$n$~noninteracting fermions with~the~1-fermion Hamiltonian~$h^{(1)}$~(cf.~Ref.~\onlinecite{kummer}). The~mean-field thermodynamics founded on~Eq.~\eqref{free_en} was~used in~Ref.~\onlinecite{m3} to~explain the~presence of~the~impurity heat~capacity peak of~CuCr and~$\text{(LaCe)Al}_2$ in~the~vicinity of~the~\mbox{Kondo}~temperature~$T_K$. In~contradistinction to~earlier papers~(e.g.~Refs.~\onlinecite{rajan,andrei,anderson1}) scaling procedures were~not~used and nonlinear dependence of~the~CuCr peak's shape on~$c$ was~taken into~account. One~of~the~shortcomings of~DMA~theories founded on~\mbox{s-d}~type Hamiltonians is~the~omission of~the~\mbox{Coulomb}~attraction between impurity ions and~conduction electrons. This~problem has~been~treated in~various ways in~the~past. \mbox{Kondo} introduced an~additive term in~his~resistivity formula\cite{kondo} to~account for~these~interactions. In~Refs.~\onlinecite{edwards,ambegaokar} the~equilibrium state of~an~electron~gas interacting with~impurity ions was~studied by~averaging the~1-electron \mbox{Green's}~function over~impurity positions. 
By~applying this~method to~the~1-particle equilibrium density matrix of~a~quantum particle in~a~field of~randomly positioned wells, representing the~screened \mbox{Coulomb}~potential at~each~impurity site, it~was~shown in~Ref.~\onlinecite{m3} that~in~the~low temperature regime, a~gas of~such particles behaves effectively, with~respect to~1-particle measurements, like a~gas of~free particles at~an~inverse temperature~$t$ related to~the~system's real temperature~$T$ by~the~equality \begin{equation} t(\delta,T) = \delta^{-1} \tanh (\delta (k_B T)^{-1}) \label{t} \end{equation} where~$\delta = \frac{1}{2}\hslash \sqrt{Mu_2 m^{-1}}$, $u_2$~denoting the~second derivative of~the~well potential at~its~minimum. Accordingly, the~inverse temperature~$\beta =(k_B T)^{-1}$ of~the~\mbox{s-d}~system under consideration will~be subsequently replaced by~$t(\delta, T + \Delta T)$ and~$\delta$, $\Delta T$~will~be~treated as~adjustable parameters. The~shift~$\Delta T$ is~introduced in~order to~compensate for~the~omission of~other~DMA interactions in~$H_{\text{s-d}}$. In~the~high temperature range, $t(\delta,T)$~approaches~$\beta$ smoothly, whereas~$\lim t(\delta,T) = \delta^{-1}$ as~$T \to 0$. Replacement of~$\beta$ by~$t$ in~\mbox{Kondo's}~resistivity formula\cite{kondo} therefore allows one to~account~for the~\mbox{Coulomb}~interactions between impurity ions and~conduction electrons and~to~remove the~singularity in~his~theory. In~Section~\ref{sec:2} the~resulting expression for~DMA impurity resistivity~$\Delta \rho$ is~shown to~give a~good fit with~experimental~$\Delta \rho$ for~CuFe with~$c=22\text{ ppm}$. \mbox{Kondo's}~expression for~the~total DMA resistivity~$\rho$ is~also shown to~comply with~experimental~$\rho$ under this~substitution.
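Both limits of the effective inverse temperature in Eq.~\eqref{t} are easy to verify numerically. The following Python sketch (illustrative, not part of the paper; the value of $\delta$ is of the order of those fitted in Sec.~\ref{sec:2}) checks that $t(\delta,T)$ coincides with $\beta=(k_B T)^{-1}$ at high temperature and saturates at $\delta^{-1}$ as $T\to 0$:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def effective_inverse_temperature(delta, T):
    """Eq. (t): t(delta, T) = delta^{-1} * tanh(delta * (k_B T)^{-1})."""
    return math.tanh(delta / (K_B * T)) / delta

delta = 2.66e-4  # eV; of the order of the values fitted to CuFe resistivity
beta = lambda T: 1.0 / (K_B * T)

# High T: tanh(x) ~ x, so t is indistinguishable from the true beta.
print(effective_inverse_temperature(delta, 300.0) / beta(300.0))  # ~ 1
# T -> 0: tanh -> 1, so t saturates at 1/delta instead of diverging.
print(effective_inverse_temperature(delta, 1e-6) * delta)         # ~ 1
```

It is this saturation of $t$ at $\delta^{-1}$ that removes the $T=0$ divergences of the formulas in which $\beta$ is replaced by $t$.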
The~objective of~subsequent sections is~to~study the~impurity heat~capacity~$\Delta C$, magnetization~$\Delta M$ and~susceptibility~$\Delta \chi$ of~the~mean-field~system~$S_0$ and~to~adjust the~constants which~enter these~thermodynamic functions to~obtain the~best possible agreement with~experiment. For~alloys with~spin~$1/2$ impurities, such as~CuFe and~$\text{(LaCe)Al}_2$, there~is~good agreement between theory and~experiment, as~regards dependence on~$c$ and~$T$ in~the~low-temperature range. In~particular, nonlinear dependence of~$\Delta C$ on~iron concentration in~CuFe has~been~accounted~for successfully for~the~first time. The~magnitude of~all~thermodynamic functions agrees with~experiment. For~the~CuCr alloys, containing spin~$3/2$ ions, agreement is~slightly weaker, presumably due~to~the~simplicity of~the~assumed \mbox{s-d}~interaction in~Eq.~\eqref{H_sd}, which~permits only orbital \mbox{s-wave}~scattering~(cf.~Ref.~\onlinecite{bloomfield}). Computations were~carried~out using Wolfram's~Mathematica~5.2. \section{\label{sec:2}A~solution of~the~\mbox{Kondo}~problem} In~1964 \mbox{Kondo} derived his~well-known formula\cite{kondo} for~the~impurity resistivity of~DMA: \begin{equation} \Delta \rho = c \rho_A + c \rho_M \left( 1 - 3 z g^2 \varepsilon_F^{-1} \ln T \right) \label{rho} \end{equation} where~$\rho_A$, $\rho_M$, $z$~are constants and~$z$ is~positive for~antiferromagnetic \mbox{s-d}~interaction. Inclusion of~lattice resistivity~$\rho_L = a T^5$ yields \mbox{Kondo's}~expression for~the~total DMA~resistivity~$\rho$, which~provides a~good fit to~the~$c$, $T$~dependence of~experimental data on~resistivity of~several~DMA~(cf.~Ref.~\onlinecite{kondo}). The~breakdown of~formula~\eqref{rho} at~$T=0$ can~be easily amended by~noting that \mbox{Kondo's}~theory takes into~account the~\mbox{Coulomb}~attraction between conduction electrons and~impurity ions simply by~including an~additive constant into~the~r.h.s. of~Eq.~\eqref{rho}.
From~the~viewpoint of~Eq.~\eqref{t} it~would~be more appropriate to~replace in~Eq.~\eqref{rho} the~true inverse temperature of~the~alloy by~$t$. For~$\Delta T = 0$, the~resulting expression for~$\Delta \rho$ then~takes the~form \begin{equation} \Delta \rho = \Delta \rho_0 + \Delta \rho_1 \ln (\tanh (\frac{\delta}{k_B T})) \label{Delta_rho} \end{equation} and~is~regular at~$T=0$. Plausibility of~formula~\eqref{Delta_rho} was~tested by~adjusting the~constants~$\Delta \rho_0$, $\Delta \rho_1$, $\delta$~to~fit experimental~$\Delta \rho (T)$ data for~CuFe, with~$c=22 \text{ ppm}$, plotted in~Ref.~\onlinecite{daybell}. The~function~\eqref{Delta_rho}, for~$\Delta \rho_0 = 1.455/10^9\text{ ohm cm per ppm}$, $\Delta \rho_1 = 0.07/10^9\text{ ohm cm per ppm}$ and~$\delta = 2/10^4\text{ eV}$, is~depicted in~Fig.~\ref{fig:1}. Agreement with~experiment is~good. \begin{figure} \includegraphics{fig1} \caption{\label{fig:1}Impurity resistivity~$\Delta \rho$ of~CuFe, with~$c=22\text{ ppm}$, as~given by~Eq.~\eqref{Delta_rho}, for~$\Delta \rho_0 = 1.455/10^9\text{ ohm cm per ppm}$, $\Delta \rho_1 = 0.07/10^9\text{ ohm cm per ppm}$ and~$\delta = 2/10^4\text{ eV}$. The~points are~experimental results from~Ref.~\onlinecite{daybell}.} \end{figure} The~experimental~$\Delta \rho$ data for~CuFe with~$c=22\text{ ppm}$ are~quite typical. Resistivity measurements of~a~variety of~DMA samples point~to the~close similarity of~$\Delta \rho (T) / \Delta \rho (0)$ curves for~various alloys\cite{heeger}. Formula~\eqref{Delta_rho} can~be therefore expected to~provide good agreement with~experiment for~a~large class of~DMA. \mbox{Hamann's}~expression for~impurity resistivity\cite{hamann}, with~$t$ replacing~$\beta$, was~also~confronted with~the~data of~Ref.~\onlinecite{daybell}. Qualitative agreement was~found.
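The regularity of Eq.~\eqref{Delta_rho} at $T=0$ can also be seen numerically: as $T\to 0$, $\tanh(\delta/k_B T)\to 1$ and the logarithm vanishes, so $\Delta\rho\to\Delta\rho_0$. A minimal Python check (illustrative, not part of the paper), using the CuFe constants quoted above:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def impurity_resistivity(T, drho0, drho1, delta):
    """Eq. (Delta_rho): drho0 + drho1 * ln(tanh(delta / (k_B * T)))."""
    return drho0 + drho1 * math.log(math.tanh(delta / (K_B * T)))

# Constants fitted to CuFe with c = 22 ppm, in units of 10^-9 ohm cm per ppm.
drho0, drho1, delta = 1.455, 0.07, 2e-4  # delta in eV

print(impurity_resistivity(0.01, drho0, drho1, delta))  # -> drho0 = 1.455
print(impurity_resistivity(10.0, drho0, drho1, delta))  # smaller: falls with T
```

In particular the impurity contribution decreases monotonically with $T$, consistent with the resistivity-minimum phenomenology.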
Performing the~substitution~$\beta\to t$ in~\mbox{Kondo's}~expression for~total DMA resistivity~$\rho$, one obtains \begin{equation} \rho(T) = \Delta \rho(T) + \Delta \rho_2 \left( \tanh(\frac{\delta}{k_B T})\right)^{-5}. \label{rho_T_ham} \end{equation} For~$\Delta \rho_0 = 319.2937\times 10^{-4} \mu\text{ohm cm }$, $\Delta \rho_1 = 2 \times 10^{-3} \mu\text{ohm cm }$, $\Delta \rho_2 = 1.065\times 10^{-7} \mu\text{ohm cm }$ and~$\delta = 2.66 \times 10^{-4}\text{ eV}$, the~function~$\rho(T)$ provides a~good fit to~the~experimental plot of~$\rho(T)$ for~CuFe with~$c=1.23\times 10^{-3}$~(Ref.~\onlinecite{berg}) and~is~depicted in~Fig.~\ref{fig:2}. \begin{figure} \includegraphics{fig2} \caption{\label{fig:2}The~total resistivity~$\rho$ of~CuFe, with~$c=1.23/10^3$, as~given by~Eq.~\eqref{rho_T_ham}, for~$\Delta \rho_0 = 319.2937/10^4\text{ $\mu$ohm cm}$, $\Delta \rho_1 = 2/10^3\text{ $\mu$ohm cm}$, $\Delta \rho_2 = 1.065/10^7\text{ $\mu$ohm cm}$ and~$\delta = 2.66/10^4\text{ eV}$. The~points are~experimental results from~Ref.~\onlinecite{berg}.} \end{figure} The~graphs of~$k_B T$ and~$t(\delta,T)^{-1}$ for~$\delta = 2.66 \times 10^{-4}\text{ eV}$ are~plotted in~Fig.~\ref{fig:3}. Close similarity of~the~two plots above~$15\text{ K}$ shows that~Eq.~\eqref{rho_T_ham} provides an~extension of~\mbox{Kondo's}~formula~\eqref{Delta_rho} to~the~vicinity of~$0 \text{ K}$. \begin{figure} \includegraphics{fig3} \caption{\label{fig:3}The~graphs of~$k_B T$ and~$t(\delta,T)^{-1}$ for~$\delta = 2.66/10^4 \text{ eV}$.} \end{figure} \section{\label{sec:3}Mean-field impurity heat~capacity} Expressions for~the~mean-field energy~$\Delta U_{\text{s-d}}$ and~heat~capacity~$\Delta C$ of~a~dilute \mbox{s-d}~system, relative to~that~of~a~free electron~gas, will~be~derived, using the~Hamiltonian~$h^{(n,M)}(\xi,\eta)$, and~compared with~experimental data on~impurity heat~capacity of~CuFe\cite{franck,mattis}, $\text{(LaCe)Al}_2$\cite{bader} and~CuCr\cite{triplett}. 
The~system's inverse temperature~$\beta$ will~be~replaced by~$t(\delta,T)$. To~compensate deficiencies of~$H_{\text{s-d}}$, such as~non-inclusion of~the~interaction between d-electrons present in~the~\mbox{Anderson}~Hamiltonian, a~shift~$\Delta T > 0$ in~the~temperature scale will~be~implemented~(cf.~Refs.~\onlinecite{bloomfield,bader}). Thus~the~effective inverse temperature~(EIT) \begin{equation} t(\delta, T + \Delta T) = \delta^{-1} \tanh \left( \delta (k_B (T+ \Delta T))^{-1}\right). \end{equation} \subsection{\label{subsec:3:1}CuFe alloys} The~spin of~the~Fe~ions in~CuFe equals~$1/2$ according to~Ref.~\onlinecite{mattis}, therefore \begin{equation} f_2(\xi) = \frac{\gamma}{M\sqrt{n_1}} \tanh \left( t(\delta, T+\Delta T)\gamma \sqrt{n_1} \xi \right) \end{equation} where~$n=Mn_1$, $\gamma = \sqrt{M} g$, $n_1$~denoting the~number of~conduction electrons per~impurity. $f_1$~defined by~Eq.~\eqref{f1} is~a~linear function in~the~simplest approximation:~$f_1(\xi) = b_0 + b_1 \xi + \ldots$\cite{m3}. The~expectation energy of~a~spin~$1/2$ system~$S_{\text{imp}}$ containing~$M$ impurities with~the~1-impurity Hamiltonian~\eqref{h_imp} equals \begin{equation} U_{\text{imp}} = \left\langle h_{\text{imp}}^{(M)} \right\rangle_{h_{\text{imp}} ^{(M)}} = - M^2 n_1 \xi f_2(\xi) + \frac{1}{2}\gamma^2 \label{U_imp} \end{equation} and, according~to~Appendix~B of~Ref.~\onlinecite{m3}, the~interaction energy of~electrons with~the~Hamiltonian~$\tilde{h}_{\text{e}}^{(n,M)}(\xi,\eta)$ equals \begin{equation} \Delta U_{\text{e}} = \left\langle \tilde{h}_{\text{e}}^{(n,M)}(\xi,\eta) \right\rangle_{\tilde{h}_{\text{e}}^{(n,M)}(\xi,\eta)} - \left\langle n \Gamma_1^n H_0^{(1)} \right\rangle_{n \Gamma_1^n H_0^{(1)}} = - M n f_2(\xi) (b_0 + b_1 f_2(\xi) + \ldots ), \label{DeltaU} \end{equation} where~$\xi$ is~the~minimizing solution of~Eq.~\eqref{xi}. 
Using~Eqs.~\eqref{h_e},~\eqref{U_imp},~\eqref{DeltaU} and~the~definition \begin{equation} h^{(n,M)}(\xi,\eta) := h_{\text{imp}}^{(M)}(\xi) + h_{\text{e}}^{(n,M)}(\xi,\eta) \end{equation} one~obtains \begin{equation} \Delta U_{\text{s-d}} = U_{\text{imp}} + \Delta U_{\text{e}} + M n \xi f_2(\xi) - \frac{1}{2} M n f_2^2(\xi). \label{DeltaU_sd} \end{equation} The~$n$-electron, $M$-impurity spin~$1/2$ \mbox{s-d}~system will~be now treated as~a~subsystem of~a~sample~$S$ containing one mole of~impurities. The~energy~$\Delta U_{\text{S}} = 6.022\times 10^{23} M^{-1}\Delta U_{\text{s-d}}$ of~such sample expressed in~mcals, equals \begin{equation} \Delta U_{\text{S}} = 602.2\times 38271.78 M^{-1} \left( \frac{1}{2}\gamma^2 - \frac{1}{2} M^2 n_1 f_2^2(\xi) - M^2 n_1 f_2(\xi) (b_0 + b_1 f_2(\xi) + \ldots) \right), \end{equation} if~$b_0$, $\gamma$, $\delta$, $\xi$~are~given in~powers of~eV. The~excess heat~capacity of~one~mole of~CuFe alloy, relative to~that~of~one~mole of~pure~Cu, then~equals \begin{equation} \Delta C = n_1^{-1} \left( \frac{\partial \Delta U_{\text{S}}}{\partial T} + \frac{\partial \Delta U_{\text{S}}}{\partial \xi} \frac{\partial \xi}{\partial T} \right), \label{DeltaC} \end{equation} where~$\xi$~is~the~unique minimizing solution of~Eq.~\eqref{xi} and \begin{equation} \frac{\partial \xi}{\partial T} = - \frac{\partial f_3}{\partial T}\left( \frac{ \partial f_3}{\partial \xi} - 1 \right)^{-1}. \label{partial_xi} \end{equation} The~mean-field~$\Delta C(T + \Delta T)$ curves best fitting to~experimental data of~Refs.~\onlinecite{franck,mattis} were~obtained for~the~values of~$M$, $\gamma$, $b_0$, $b_1$, $\delta$, $\Delta T$~given in~Table~\ref{tab:1} and~are~depicted in~Fig.~\ref{fig:4}. Agreement with~experiment is~good, especially for~$c=0.05 \%$, $0.1\%$ and below~$10\text{ K}$. 
Discrepancies at~higher temperatures are~presumably due~to~experimental error, which increases with~temperature~(e.g.~Refs.~\onlinecite{bader,triplett}), and~to~an~increase of~the~spin values of~Fe~ions in~this~temperature range\cite{bloomfield,daybell}. In~fact, \mbox{Triplett}~et~al.\cite{triplett} estimate the~spin of~Fe~ions in~CuFe to~be~equal to~$3/2$. \begin{table} \caption{\label{tab:1}Best-fit parameter values for~the~impurity heat~capacity of~CuFe and~CuCr alloys.} \begin{ruledtabular} \begin{tabular}{c|c|c|c|c|c|c|c|c} Alloy & $c$ & $n_1$ & $b_0[10^{-3}\sqrt{\text{eV}}]$ & $b_1$ & $\gamma [\sqrt{\text{eV}}]$ & $M$ & $\delta [10^{-4}\text{ eV}]$ & $\Delta T [\text{K}]$ \\ \hline CuFe & $0.05 \%$ & $2000$ & $0.1$ & $-101$ & $0.19$ & $10^8$ & $7.14$ & $2.6$ \\ \hline CuFe & $0.1 \%$ & $1000$ & $0.1$ & $-101$ & $0.245$ & $10^9$ & $7.48$ & $3$ \\ \hline CuFe & $0.2 \%$ & $500$ & $0.1$ & $-101$ & $0.288$ & $10^{10}$ & $7.48$ & $3.9$ \\ \hline CuCr & $212\times 10^{-7}$ & $10^7/212$ & $1.01/n_1$ & $-631$ & $0.086$ & $36000$ & $10^{-13}$ & $0.78$ \\ \hline CuCr & $51\times 10^{-6}$ & $10^6/51$ & $1.09/n_1$ & $-461$ & $0.091$ & $248500$ & $10^{-13}$ & $1.05$ \\ \hline \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics{fig4} \caption{\label{fig:4}Impurity heat capacity~$\Delta C$ of~CuFe given by~Eq.~\eqref{DeltaC}, with~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$~equal to~the~values in~Table~\ref{tab:1}. The~points are~experimental results from~Refs.~\onlinecite{franck,mattis}.} \end{figure} The~minimizing solutions of~Eq.~\eqref{xi} are~plotted in~Fig.~\ref{fig:5}. Uniqueness of~these~solutions is~proved in~the~Appendix.
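The minimizing solutions of the self-consistency condition~$\xi = f_3(\xi)$ of Eq.~\eqref{xi} can be reproduced by simple fixed-point iteration. A minimal Python sketch (the author's computations used Mathematica; this re-implementation, and the evaluation temperature $T=5\text{ K}$, are purely illustrative), taking the spin-$1/2$ form of $f_2$ and the linear approximation $f_1(\xi)=b_0+b_1\xi$ with the Table~\ref{tab:1} parameters for CuFe, $c=0.05\,\%$:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def solve_xi(b0, b1, gamma, M, n1, delta, dT, T, tol=1e-15, max_iter=200):
    """Solve the self-consistency condition xi = f3(xi) of Eq. (xi) by
    fixed-point iteration, for spin-1/2 impurities and the linear
    approximation f1(x) = b0 + b1*x, so f3(xi) = b0 + (1 + b1) * f2(xi)."""
    t = math.tanh(delta / (K_B * (T + dT))) / delta  # effective inverse temperature
    sq = math.sqrt(n1)
    f2 = lambda xi: gamma / (M * sq) * math.tanh(t * gamma * sq * xi)
    f3 = lambda xi: b0 + (1.0 + b1) * f2(xi)
    xi = b0  # natural starting point, since f2 is a small correction here
    for _ in range(max_iter):
        xi_new = f3(xi)
        if abs(xi_new - xi) < tol:
            return xi_new
        xi = xi_new
    return xi

# Table I row: CuFe, c = 0.05% (b0 in sqrt(eV), delta in eV).
xi = solve_xi(b0=1e-4, b1=-101.0, gamma=0.19, M=1e8, n1=2000.0,
              delta=7.14e-4, dT=2.6, T=5.0)
print(xi)  # close to b0: the correction (1 + b1) * f2(xi) is tiny at these values
```

The iteration converges rapidly because $|f_3'(\xi)|\ll 1$ for these parameter values, consistent with the uniqueness of the solution.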
\begin{figure} \includegraphics{fig5} \caption{\label{fig:5}The~plots of~the~minimizing solutions~$\xi(T+\Delta T)$ of~Eq.~\eqref{xi} corresponding to~the~$\Delta C(T+\Delta T)$ curves of~CuFe depicted in~Fig.~\ref{fig:4}.} \end{figure} It~is~worth emphasizing how~significantly variation of~$c$ affects the~shape of~both experimental and~theoretical plots of~$\Delta C(T + \Delta T)$ in~Fig.~\ref{fig:4}. It~has~been~suggested by~some~authors~(e.g.~Refs.~\onlinecite{triplett,brock}) that~nonlinearity of~$\Delta C$ in~$c$ observed in~DMA is~due to~impurity-impurity interactions. The~above analysis shows that~this~nonlinearity can~be~explained on~the~grounds of~the~\mbox{s-d}~Hamiltonian without additional interaction terms. Another remarkable property of~the~theoretical plots of~$\Delta C(T + \Delta T)$ in~Fig.~\ref{fig:4} is~their~dependence on~$M$. The~best fitting values~$M_{\text{f}}$ of~$M$ fall in~the~range~$1 \ll M_{\text{f}} \ll A = 6.022 \times 10^{23}$. The~sample~$S$ can~be therefore viewed as~made~up of~magnetic domains, each~containing~$M_{\text{f}}$ impurities with~a~favoured impurity-spin orientation, which~varies, in~general, from~one domain to~another. Experiment has~confirmed existence of~magnetic domains in~some~magnetic materials~(e.g.~Ref.~\onlinecite{aharoni}). The~orientation of~electron spins is~opposite to~that~of~impurities, as~follows from~Eqs.~\eqref{h_e_tilde},~\eqref{h_imp}. Since the~solution~$\xi(T)$ of~Eq.~\eqref{xi} decreases with~decreasing~$T$~(Fig.~\ref{fig:5}), it~follows that~the~ordering of~impurity spins declines as~the~temperature is~lowered. A~similar dependence of~$\xi(T)$ on~$c$ can~be~observed. In~order to~find~$\lim \xi(T)$ as~$T\to 0$ for~small enough~$c$ let~us recall that~$\lim t(\delta, T)=\beta$ as~$\delta \to 0$. 
The~graphs of~$\xi(T)$ for~$t=\beta$, depicted in~Fig.~3 of~Ref.~\onlinecite{m3}, as~well as~the~form of~Eq.~\eqref{xi} in~all cases considered therein, show that as~$T\to 0$, $\lim \xi(T) = 0$ if~$\delta = 0$, $\Delta T = 0$. Hence, there~is no~ordering of~impurity spins at~$T=0$ if~$\delta = 0$, $\Delta T = 0$, a~picture which agrees with~the~RKKY~description of~interactions between impurity spins in~a~DMA. \subsection{\label{subsec:3:2}$\text{(LaCe)Al}_2$} \mbox{Bader}~et~al.\cite{bader} performed interesting measurements of~$\Delta C / c$ on~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$, with~$x=0.0064$, in~external magnetic fields ranging from~$0 \text{ kOe}$ to~$38 \text{ kOe}$. For~strong fields, \mbox{Andrei}~et~al. obtained good agreement of~their~1-impurity~$\Delta C(T)$ function with~rescaled data of~Ref.~\onlinecite{bader}. According to~Refs.~\onlinecite{felsch,bader}, a~typical \mbox{Kondo}~effect, without any~superconducting side effects, is~observed in~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ with~Ce content exceeding~$x=0.0067$. However, for~$x=0.0064$, \mbox{Bader}~et~al. estimate the~difference between the~expected normal-state and~measured superconducting-state heat~capacities as~insignificant. Thus~a~mean-field normal-state theory of~$\Delta C/c$ for~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$, with~$x=0.0064$, can~be~expected to~provide good agreement with~experiment. The~number of~valence electrons per~host atom in~LaAl will~be~assumed~$8/3$. For~$x=0.0064$, \begin{equation*} c=\frac{0.0064}{2.9936} = \frac{4}{1871}, \qquad n_1 = \frac{8}{3} c^{-1} = \frac{3742}{3}.
\end{equation*} The~ground~state of~the~Ce ion can~be~described by~a~spin with~magnitude~$1/2$ and~a~modified~\mbox{Lande}~factor~$g' = 10/7$~(Ref.~\onlinecite{felsch}), therefore in~the~presence of~an~external magnetic field~$H$, the~system's~Hamiltonian is \begin{equation} H_{\text{s-d}}^{(n,M)}(H) = H_{\text{s-d}}^{(n,M)} - \frac{1}{2} g' \mu_B \tilde{H} \sum_{\alpha} S_{z\alpha} - \frac{1}{2} \mu_B H A^{(n)} \sum_i \mathbb{I}_0 \otimes \sigma_{zi}, \label{H_sd_H} \end{equation} $\mathbb{I}_0$~denoting the~identity in~$L^2(\mathbb{R}^3)^n$ and \begin{equation} f_2(\xi) = \frac{\gamma}{M\sqrt{n_1}} \tanh \left( t(\delta, T + \Delta T) (\gamma \sqrt{n_1}\xi - \frac{1}{2}g' \mu_B \tilde{H})\right), \label{f_2_H} \end{equation} where~$\tilde{H}=g_0 H$ is~the~effective magnetic field at~each~impurity site and~$\mu_B$ the~\mbox{Bohr}~magneton. The~excess energy~$\Delta U_S/c$ of~a~sample~$S$ of~$\text{(LaCe)Al}_2$, with~$x=0.0064$, expressed in~joules, equals \begin{equation} \begin{split} c^{-1} \Delta U_S & = \frac{1}{4} M^{-1} 1871 \times 602.2 \times 160.2 \times \\ & \left( \frac{1}{2} \gamma^2 - \frac{1}{2} M^2 n_1 f_2(\xi)^2 - M^2 n_1 f_2(\xi) \left( \xi - f_2(\xi) \right) + \frac{1}{2} \gamma^{-1} \mu_B g' \tilde{H} M^2 \sqrt{n_1} f_2(\xi) \right) \\ \end{split} \label{Delta_USC} \end{equation} if~$b_0$, $\gamma$, $\delta$, $\xi$~are~expressed in~powers of~eV. For~one mole of~impurities \begin{equation} \Delta C = \frac{\partial \Delta U_S}{\partial T} + \frac{\partial \Delta U_S}{\partial \xi} \frac{\partial \xi}{\partial T}. \label{DeltaC_mole} \end{equation} Adjusting the~parameters~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$, $g_0$, one~obtains the~best fitting to~experiment~$\Delta C(T + \Delta T)/c$ curves plotted in~Fig.~\ref{fig:6}. The~corresponding values of~the~parameters are~given in~Table~\ref{tab:2}. 
The~mean-field thermodynamics founded on~the~Hamiltonian~$h^{(n,M)}$ thus~provides satisfactory agreement with~experimental data on~the~field dependence of~$\Delta C(T)$. \begin{figure} \includegraphics{fig6} \caption{\label{fig:6}The~plots of~$\Delta C(T+\Delta T)/c$ of~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ in~various external magnetic fields, as~given by~Eqs.~\eqref{DeltaC},~\eqref{Delta_USC}, for~$x=0.0064$ and~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$, $g_0$~equal to~the~values in~Table~\ref{tab:2}. The~points are~experimental results from~Ref.~\onlinecite{bader}.} \end{figure} \begin{table} \caption{\label{tab:2}Best-fit parameter values for~$\Delta C(T+\Delta T)/c$ of~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$, with~$x=0.0064$, in~various external magnetic fields.} \begin{ruledtabular} \begin{tabular}{c|c|c|c|c|c|c|c|c} $b_0[10^{-4}\sqrt{\text{eV}}]$ & $b_1$ & $\gamma[10^{-4}\sqrt{\text{eV}}]$ & $M$ & $\delta[10^{-5}\text{ eV}]$ & $\Delta T [\text{K}]$ & $g_0$ & $H [\text{kOe}]$ & $g^2[10^{-9}\text{ eV}]$\\ \hline $1.9$ & $-13101$ & $3.1$ & $65$ & $3.1215$ & $0.39$ & - & $0$ & $1.4785$ \\ \hline $1.9$ & $-13101$ & $3.1$ & $64$ & $0.6243$ & $0.27$ & $0.006$ & $2$ & $1.50156$ \\ \hline $1.9$ & $-13101$ & $7.285$ & $215$ & $3.1215$ & $0.11$ & $0.006$ & $20$ & $2.46843$ \\ \hline $1.9$ & $-13101$ & $10.85$ & $850$ & $12.486$ & $0.1$ & $0.006$ & $38$ & $1.38497$ \\ \hline \end{tabular} \end{ruledtabular} \end{table} The~value of~$g_0$ was~found by~adjusting~$\Delta C(T+\Delta T)/c$ to~experiment for~$H=2\text{ kOe}$, with~$b_0$, $b_1$, $\gamma$~equal to~their~best fitting values for~$H=0$ and~allowing only small variations of~$M$. The~smallness of~$g_0$ indicates the~strong influence of~the~\mbox{Kondo}~effect in~the~formation of~a~polarization cloud of~conduction electrons around each~impurity. The~cloud screens each~magnetic ion from~interactions with other~magnetic ions\cite{mattis} and, as~implied by~the~inequality~$g_0 \ll 1$, also~from~the~applied field~$H$.
\subsection{\label{subsec:3:3}CuCr alloys} \mbox{Triplett}~et~al.\cite{triplett} performed high-precision measurements of~CuCr impurity heat~capacity~$\Delta C$ for~a~variety of~concentrations. For~$c=51\text{ ppm}$ their~$\Delta C(T)$ peak is~well defined and~terminates at~low temperatures with~a~$\Delta C$ jump which they consider to~be~the~effect of~impurity-impurity interactions. Explanation of~these~experimental~$\Delta C(T)$ data above the~jump temperature is~a~challenge to~any~\mbox{s-d}~theory. According to~\mbox{Monod}~et~al.\cite{monod} the~spin of~the~Cr~ions in~CuCr equals~$3/2$. Thus~one~finds \begin{equation} f_2(\xi) = \frac{\gamma}{M\sqrt{n_1}} \frac{3 \mathrm{e}^{-4t\gamma^2 M^{-1}}\sinh \left( 3t\gamma \sqrt{n_1}\xi \right) + \sinh \left( t\gamma \sqrt{n_1}\xi \right)}{\mathrm{e}^{-4t\gamma^2 M^{-1}}\cosh \left( 3t\gamma \sqrt{n_1}\xi \right) + \cosh \left( t\gamma \sqrt{n_1}\xi \right)} \end{equation} and \begin{equation} U_{\text{imp}} = \left\langle h_{\text{imp}}^{(M)} \right\rangle_{h_{ \text{imp}}^{(M)}} = - M^2 n_1 \xi f_2(\xi) + \frac{1}{2} \gamma^2 + f_4(\xi) \label{Uimp_sd} \end{equation} where \begin{equation*} f_4(\xi) = 4 \gamma^2 \frac{\mathrm{e}^{-4t\gamma^2 M^{-1}}\cosh \left( 3t\gamma \sqrt{n_1}\xi \right)}{\mathrm{e}^{-4t\gamma^2 M^{-1}}\cosh \left( 3t\gamma \sqrt{n_1}\xi \right) + \cosh \left( t\gamma \sqrt{n_1}\xi \right)}. \end{equation*} Using~Eqs.~\eqref{DeltaU},~\eqref{DeltaU_sd}, one~obtains the~following formula for~$\Delta U_S$ (expressed in~joules) of~a~sample~$S$ of~CuCr: \begin{equation} \Delta U_S = 602.2 \times 160.2 \times M^{-1} \left( \frac{1}{2}\gamma^2 + f_4(\xi) - \frac{1}{2}M^2 n_1 f_2^2(\xi) - M^2 n_1 f_2(\xi)(\xi - f_2(\xi)) \right), \label{Delta_US} \end{equation} if~$b_0$, $\gamma$, $\delta$, $\xi$~are given in~powers of~eV. $\Delta C(T+\Delta T)$~is~then obtained using~Eqs.~\eqref{partial_xi}, \eqref{DeltaC_mole}.
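As a sanity check on the spin-$3/2$ expression for $f_2$, one can verify numerically that it is odd in $\xi$ and saturates at $3\gamma/(M\sqrt{n_1})$, the value corresponding to full impurity-spin alignment. A short Python sketch (illustrative, not part of the paper; parameters from the CuCr $c=51\text{ ppm}$ row of Table~\ref{tab:1}, where $\delta$ is negligible so that $t\approx\beta$, with an arbitrarily chosen evaluation temperature):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def f2_spin_32(xi, gamma, M, n1, t):
    """The spin-3/2 form of f2 for CuCr given in the text."""
    a = math.exp(-4.0 * t * gamma**2 / M)
    x = t * gamma * math.sqrt(n1) * xi
    num = 3.0 * a * math.sinh(3.0 * x) + math.sinh(x)
    den = a * math.cosh(3.0 * x) + math.cosh(x)
    return gamma / (M * math.sqrt(n1)) * num / den

# Table I row: CuCr, c = 51 ppm; delta ~ 10^-17 eV is negligible, so t ~ beta.
gamma, M, n1, dT = 0.091, 248500.0, 1e6 / 51.0, 1.05
t = 1.0 / (K_B * (2.0 + dT))  # beta at an (arbitrary) T = 2 K

sat = 3.0 * gamma / (M * math.sqrt(n1))  # saturation value of |f2|
print(f2_spin_32(1e-3, gamma, M, n1, t) / sat)   # -> 1: saturates for large xi
print(f2_spin_32(-1e-3, gamma, M, n1, t) / sat)  # -> -1: f2 is odd in xi
```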
The~best fitting graphs of~$\Delta C(T+\Delta T)/c$ for~$c=21.7\text{ ppm}$ and~$c=51\text{ ppm}$ are~plotted in~Figs.~\ref{fig:7},~\ref{fig:8}. The~corresponding values of~parameters~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$~are given in~Table~\ref{tab:1}. \begin{figure} \includegraphics{fig7} \caption{\label{fig:7}The~graph of~$\Delta C(T+\Delta T)/c$ of~CuCr with~$c=21.7\text{ ppm}$ as~given by~Eqs.~\eqref{DeltaC},~\eqref{Delta_US}, with~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$~equal to~the~values in~Table~\ref{tab:1}. The~points are~experimental results from~Ref.~\onlinecite{triplett}.} \end{figure} \begin{figure} \includegraphics{fig8} \caption{\label{fig:8}The~graph of~$\Delta C(T+\Delta T)/c$ of~CuCr with~$c=51\text{ ppm}$ as~given by~Eqs.~\eqref{DeltaC},~\eqref{Delta_US}, with~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$~equal to~the~values in~Table~\ref{tab:1}. The~points are~experimental results from~Ref.~\onlinecite{triplett}.} \end{figure} Agreement with~experimental data of~Ref.~\onlinecite{triplett} is~satisfactory, especially for~$c=21.7\text{ ppm}$, although not~as~good as~for~CuFe~(Section~\ref{subsec:3:1}), presumably due~to~the~simplicity of~the~assumed \mbox{s-d}~interaction in~Eq.~\eqref{H_sd} and~variation of~Cr~spin values at~higher temperatures. It~has~been~suggested\cite{bloomfield,schrieffer} that~for~larger impurity spins the~\mbox{s-d}~interaction should~account~for the~momentum dependence of~\mbox{s-d}~coupling. \section{\label{sec:4}Magnetization of~$\text{(LaCe)Al}_2$} Various measurements of~DMA impurity magnetization~$\Delta M$~(e.g.~Ref.~\onlinecite{felsch} and~references therein) point to~a~similar field dependence of~the~$\Delta M(H,T)$ vs.~$H/T$ curves. A~typical experimental plot of~$\Delta M(H,T)$, for~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ with~$x=0.015$, can~be~found in~Ref.~\onlinecite{felsch}.
The~single-impurity theoretical~$\Delta M(H,T)$ curves found by~\mbox{Andrei}~et~al.\cite{rajan,andrei} fit the~rescaled data of~Ref.~\onlinecite{felsch} up~to~a~small error. As~implied by~the~form of~the~mean-field counterpart of~the~Hamiltonian~\eqref{H_sd_H}, viz., \begin{equation*} h^{(n,M)}(H) = h^{(n,M)}(\xi,\eta) - \frac{1}{2} g' \mu_B \tilde{H} \sum_{\alpha} S_{z\alpha} - \frac{1}{2} \mu_B H \sum_i \mathbb{I}_0 \otimes \sigma_{zi} A^{(n)}, \end{equation*} for~a~mole of~spin~$1/2$ impurities \begin{equation} \Delta M = \frac{1}{2} g' \mu_B \sum_{\alpha=1}^A \left\langle S_{z\alpha} \right\rangle_{h_{\text{imp}}^{(A)}} = \frac{1}{2} g' \mu_B A \tanh\left( t(\delta, T+\Delta T) (\frac{1}{2} g' \mu_B \tilde{H} - \gamma \sqrt{n_1}\xi) \right) \label{DeltaM} \end{equation} where~$\xi$ is~the~unique solution of~Eq.~\eqref{xi} with~$f_2(\xi)$ given by~Eq.~\eqref{f_2_H}. The~resulting plots of~$\Delta M(H,T)$ for~various applied fields are~depicted in~Fig.~\ref{fig:9}. The~corresponding values of~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $g_0$~are~presented in~Table~\ref{tab:3}. Agreement with~experiment is~good in~the~range of~low temperatures, but~less satisfactory at~higher~$T$. \begin{figure} \includegraphics{fig9} \caption{\label{fig:9}The~impurity magnetization~$|\Delta M|$ of~$(\text{La}_{1-x} \text{Ce}_x)\text{Al}_2$ in~various external magnetic fields as~given by~Eq.~\eqref{DeltaM}, with~$x=0.015$ and~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $\Delta T$, $g_0$~equal to~the~values in~Table~\ref{tab:3}. 
The~points are~experimental results from~Ref.~\onlinecite{felsch}.} \end{figure} \begin{table} \caption{\label{tab:3}Values of~the~parameters used in~Fig.~\ref{fig:9}.} \begin{ruledtabular} \begin{tabular}{c|c|c|c|c|c|c|c|c} $b_0[10^{-11}\sqrt{\text{erg}}]$ & $b_1$ & $\gamma[10^{-10}\sqrt{\text{erg}}]$ & $M$ & $\delta[10^{-17}\text{erg}]$ & $\Delta T [\text{K}]$ & $g_0$ & $H [\text{kOe}]$ & $g^2[10^{-24}\text{erg}]$\\ \hline $28.8576$ & $-13101$ & $3.9237$ & $10^4$ & $2$ & $0.045$ & $0.006$ & $1$ & $15.3954$ \\ \hline $38.4768$ & $-13101$ & $7.8474$ & $2\times 10^4$ & $3.1$ & $0.094$ & $0.006$ & $2$ & $30.7908$ \\ \hline $48.096$ & $-13101$ & $19.6185$ & $2.5\times 10^4$ & $4$ & $0.18$ & $0.006$ & $5$ & $153.9542$ \\ \hline $48.096$ & $-13101$ & $39.237$ & $3\times 10^4$ & $4.2$ & $0.23$ & $0.006$ & $10$ & $513.1807$ \\ \hline \end{tabular} \end{ruledtabular} \end{table} \section{\label{sec:5}Magnetic susceptibility} The~zero-field impurity susceptibility \begin{equation} \Delta \chi = \left( \frac{\partial \Delta M}{\partial \tilde{H}} + \frac{\partial \Delta M}{\partial \xi} \frac{\partial \xi}{\partial \tilde{H}} \right)_{\tilde{H}=0} \label{Delta_chi} \end{equation} has~been the~most frequently measured property of~DMA. The~theory of~$\Delta \chi$, developed by~\mbox{Souletie}~et~al.\cite{souletie} for~a~DMA with~RKKY~interaction between impurities, predicts a~dependence of~the~form \begin{equation} \Delta \chi(T,c) = f(T/c) \label{Delta_chi_Tc} \end{equation} where~$f$ is~a~function independent of~concentration. \mbox{Felsch}~et~al.\cite{felsch} have~confirmed the~approximate validity of~Eq.~\eqref{Delta_chi_Tc} for~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ with~$x$ ranging from~$0.02$ to~$0.06$. Here the~validity of~formula~\eqref{Delta_chi}, with~$\Delta M$ given by~Eq.~\eqref{DeltaM}, is~tested on~$\Delta \chi$ experimental data for~CuFe with~$c=110\text{ ppm}$~(Ref.~\onlinecite{daybell}) and~$(\text{La}_{1-x}\text{Ce}_x) \text{Al}_2$ with~$x=0.015$, $0.02$~(Ref.~\onlinecite{felsch}).
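The derivative in Eq.~\eqref{Delta_chi} can be cross-checked numerically. The Python sketch below uses the $\tanh$ form of $\Delta M$ from Eq.~\eqref{DeltaM} with $\xi$ frozen at its zero-field value $\xi=0$ (a simplification made only for this illustration; in the full theory $\xi$ solves the self-consistency equation, so the second term of Eq.~\eqref{Delta_chi} also contributes) and compares a central finite difference with the analytic zero-field slope. All parameter values are arbitrary.

```python
import math

# Illustrative parameters (not fitted values); units absorbed into the numbers
gp, muB, A, t = 1.43, 1.0, 1.0, 2.0   # g', mu_B, number of impurities, EIT t

def delta_M(H):
    """Magnetization of Eq. (DeltaM) with xi frozen at its
    zero-field value xi = 0 (sketch-level simplification)."""
    return 0.5 * gp * muB * A * math.tanh(t * 0.5 * gp * muB * H)

def delta_chi_numeric(h=1e-6):
    """Zero-field susceptibility as a central finite difference of delta_M."""
    return (delta_M(h) - delta_M(-h)) / (2.0 * h)

# Analytic slope of the frozen-xi model at H = 0: (g' mu_B / 2)^2 * A * t
chi_exact = (0.5 * gp * muB) ** 2 * A * t
```

The finite difference reproduces the analytic slope to high accuracy, which is the first (explicit-derivative) term of Eq.~\eqref{Delta_chi} for this frozen-$\xi$ model.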
\subsection{\label{subsec:5:1}CuFe} \mbox{Daybell}~et~al.\cite{daybell} expressed their~measured~$\Delta \chi$ values for~CuFe in~emu~per~gram of~alloy per~ppm. Formula~\eqref{DeltaM}, expressed in~these~units, takes the~form \begin{equation} \Delta M_{\text{CuFe}} = \frac{1}{220} g' \mu_B M_{\text{Fe}} \tanh\left( t(\delta, T+\Delta T)(\frac{1}{2} g' \mu_B \tilde{H} - \gamma \sqrt{n_1}\xi)\right) \end{equation} where~$M_{\text{Fe}} = 1.042032405\times 10^{18}$ is~the~number of~Fe~ions contained in~one~gram of~CuFe with~$c=110\text{ ppm}$. A~possible fit of~the~resulting function~$\Delta \chi$, expressed in~these units, to~the~experimental data on~$\Delta \chi$ for~CuFe from~Ref.~\onlinecite{daybell} is~presented in~Fig.~\ref{fig:10}. The~corresponding values of~the~parameters are~given in~Table~\ref{tab:4}. The~concavity of~the~$\Delta \chi(T^{-1})$ curve in~Fig.~\ref{fig:10} cannot~be~fully adjusted to~that of~the~experimental plot at~higher temperatures; agreement is, however, satisfactory in~the~range of~low temperatures. The~discrepancy between theory and~experiment at~higher~$T$ is~presumably due~to~the~increase of~Fe~spin values with~increasing~$T$~(Ref.~\onlinecite{daybell}). \begin{figure} \includegraphics{fig10} \caption{\label{fig:10}The~impurity susceptibility~$\Delta \chi(T^{-1})$ of~CuFe, with~$c=110 \text{ ppm}$, as~given by~Eq.~\eqref{Delta_chi}, with~$b_0$, $b_1$, $\gamma$, $M$, $\delta$, $g'$, $\Delta T$~equal to~the~values in~Table~\ref{tab:4}.
The~points are~experimental results from~Ref.~\onlinecite{daybell}.} \end{figure} \begin{table} \caption{\label{tab:4}Values of~the~parameters used in~Figs.~\ref{fig:10} and~\ref{fig:11}.} \begin{ruledtabular} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|} Alloy & $x$ & $n_1$ & $b_0[10^{-10}\sqrt{\text{erg}}]$ & $b_1$ & $\gamma [10^{-15}\sqrt{\text{erg}}]$ & $M$ & $\delta [10^{-17}\text{ erg}]$ & $g'$ & $\Delta T [\text{K}]$ \\ \hline CuFe & - & $10^5/11$ & $8$ & $-101$ & $1$ & $10^4$ & $1$ & $1.05$ & $0.005$ \\ \hline $(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ & $0.015$ & $1592/3$ & $2.4048$ & $-13101$ & $392370$ & $200$ & $1$ & $10/7$ & $0.2$\\ \hline $(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ & $0.02$ & $1192/3$ & $2.4048$ & $-13101$ & $392370$ & $400$ & $0.1$ & $10/7$ & $0.6$ \\ \hline \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics{fig11} \caption{\label{fig:11}The~inverse impurity susceptibility~$\Delta \chi(T+ \Delta T )^{-1}$ of~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ according to~Eq.~\eqref{Delta_chi}, with~$x$, $b_0$, $b_1$, $\gamma$, $M$, $\delta$, $g'$, $\Delta T$~equal to~the~values in~Table~\ref{tab:4}. The~points are~experimental results from~Ref.~\onlinecite{felsch}.} \end{figure} \subsection{\label{subsec:5:2}$\text{(LaCe)Al}_2$} \mbox{Felsch}~et~al.\cite{felsch} performed detailed measurements of~$(\text{La}_{1-x}\text{Ce}_x)\text{Al}_2$ susceptibility for~$x$ ranging from~$0.01$ to~$0.2$. Their~plots of~$\Delta \chi(T)^{-1}$ for~$x=0.01$, $0.015$ are~almost indistinguishable and~follow a~\mbox{Curie}-\mbox{Weiss}~law \begin{equation} \Delta \chi (T) = \chi_{CF}(T) T / (T+\Theta), \qquad \chi_{CF} T = \text{const} \label{Delta_chi_T} \end{equation} for~$T \in (0.15\text{ K}, 3\text{ K})$.
Below~$0.15\text{ K}$ their data on~$\Delta \chi(T)^{-1}$ for~$x=0.015$ deviate from~Eq.~\eqref{Delta_chi_T} to~lower values, contrary to~measurements of~$\Delta \chi$ on~AuV\cite{dam} and~earlier theories\cite{anderson1,anderson2,schotte,wilson} which~predict a~flattening-off of~$\Delta \chi(T)^{-1}$ to~higher values. For~$x \ge 0.02$ the~$\Delta \chi(T)^{-1}$ curves of~Ref.~\onlinecite{felsch} no~longer obey~Eq.~\eqref{Delta_chi_T} and~exhibit a~weak concavity. Eq.~\eqref{Delta_chi}, with~$\Delta M$ expressed by~Eq.~\eqref{DeltaM}, provides a~good fit to~the~$\Delta \chi(T+\Delta T)^{-1}$ data of~Ref.~\onlinecite{felsch} for~$x=0.01$, $0.015$ and, as~could~be~expected from~Eq.~\eqref{free_en}, a~less satisfactory adjustment for~$x \ge 0.02$. The~plots of~$\Delta \chi(T+\Delta T)^{-1}$, as~given by~Eq.~\eqref{Delta_chi}, are~depicted in~Fig.~\ref{fig:11} for~$x=0.015$, $0.02$. A~minor deviation from~the~\mbox{Curie}-\mbox{Weiss}~law at~very low temperatures, similar to~the~one~found for~AuV in~Ref.~\onlinecite{dam}, can~be~observed. The~corresponding values of~$b_0$, $b_1$, $\gamma$, $M$, $\delta$~are given in~Table~\ref{tab:4}. \section{\label{sec:6}Concluding remarks} The~mean-field theory of~dilute \mbox{s-d}~systems presented in~Ref.~\onlinecite{m2} has, in~general, proved successful in~providing a~quantitative explanation of~the~$T$, $c$, $H$~dependence of~DMA heat~capacity, magnetization and~susceptibility in~the~range of~low temperatures. It~has~been~shown that~the~nonlinear dependence of~DMA impurity heat~capacity on~$c$ can~be~explained in~the~dilute limit exclusively in~terms of~the~\mbox{s-d}~interaction, without introducing an~impurity-impurity potential. The~EIT~$t$ has~improved the~dependence of~all~thermodynamic functions on~temperature and~removed the~singularity in~\mbox{Kondo's}~expression for~DMA impurity resistivity.
Deviations of~the~theory from~experiment, in~the~case of~heat~capacity, magnetization and~CuFe susceptibility, are~presumably due to~the~increase of~impurity-ion spin values observed in~some DMA at~higher temperatures and~to~the~simplicity of~the~assumed \mbox{s-d}~coupling. It~has~been~suggested that, apart~from~the~\mbox{s-wave} component, the~coupling should~account~for the~\mbox{d-type} character of~the~interaction. Further improvement of~the~theory can~be~expected after including higher expansion terms of~$f_1$ and~correction terms to~the~mean-field free~energy. The~variation of~impurity spin values at~higher temperatures also~suggests investigating the~thermodynamics of~a~more general \mbox{s-d}~system containing impurity spins with~various~$S=1/2,1,3/2,\ldots$.
\section{Introduction} Any programmable response of a material to external stimulation can be interpreted as computation. To implement a logical function in a material one must map the space-time dynamics of the internal structure of the material onto a space of logical values. This is how experimental laboratory prototypes of unconventional computing devices are made: logical gates, circuits and binary adders employing interaction of wave-fragments in light-sensitive Belousov-Zhabotinsky media \cite{ref1}, swarms of soldier crabs \cite{ref2}, growing lamellipodia of the slime mould \emph{Physarum polycephalum}~\cite{adamatzky2016logical}, crystallisation patterns in ``hot ice'' \cite{adamatzky2009hot}, peristaltic waves in protoplasmic tubes \cite{adamatzky2014slime}. In many cases logical circuits are `built' or evolved from a previously disordered material \cite{miller2014evolution}, e.g. networks of slime mould \emph{Physarum polycephalum} \cite{whiting2016towards}, bulks of nanotubes \cite{broersma2012nascence}, nanoparticle ensembles \cite{bose2015evolution, broersma2017computational}. In these works the computing structures can be seen as growing on demand, and logical gates develop in a continuum where an optimal distribution of material minimises internal energy. A continuum exhibiting such properties can be called a ``self-optimising continuum''. The slime mould \emph{Physarum polycephalum} is a good example of such a continuum: it is capable of solving many computational problems, including mazes and adaptive networks \cite{adamatzky2016advances}. Other examples of such material behaviour include bone remodelling \cite{christen2014bone}, root elongation \cite{mazzolai2010plant}, sandstone erosion \cite{bruthans2014sandstone}, crack and lightning propagation \cite{achtziger2000optimization}, and the growth of neurons and blood vessels.
Some other physical systems suitable for computation were also proposed in \cite{miller2014evolution, turner2014neuroevolution, banzhaf2006guidelines, miller2002evolution}. In all these cases, the phenomenon of the formation of an optimum layout of material is related to non-linear laws of material behaviour, resulting in an evolution of the material structure governed by algorithms similar to those used in topology optimisation of structures~\cite{klarbring2010dynamical}. We develop the ideas of material optimisation further and show, in numerical models, how logical circuits can be built in a conductive material that self-optimises its structure governed by the configuration of inputs and outputs. The paper is structured as follows. In Sect.~\ref{topologyoptimisation} we introduce topology optimisation aimed at solving a stationary heat conduction problem. Gates {\sc and} and {\sc xor} are designed and simulated in Sects.~\ref{andgate} and \ref{xorgate}. We design a one-bit half-adder in Sect.~\ref{onebithalfadder}. Directions of further research are outlined in Sect.~\ref{discussion}. \section{Topology optimisation} \label{topologyoptimisation} Topology optimisation in continuum mechanics aims to find a layout of a material within a given design space that meets specific optimum performance targets \cite{bendsoe2013topology, hassani2012homogenization, huang2010evolutionary}. Topology optimisation is applied to solve a wide range of problems \cite{bendsoe2005topology}, e.g.
maximisation of heat removal for a given amount of heat conducting material \cite{bejan1997constructal}, maximisation of fluid flow within channels \cite{borrvall2003topology}, maximisation of structure stiffness and strength \cite{bendsoe2005topology}, development of meta-materials satisfying specified mechanical and thermal physical properties \cite{bendsoe2005topology}, optimum layout of plies in composite laminates \cite{stegmann2005discrete}, the design of an inverse acoustic horn \cite{bendsoe2005topology}, modelling of an amoeboid organism growing towards food sources \cite{Safonov20161}, optimisation of photonic-crystal band-gap structures \cite{men2014robust}. A standard method of topology optimisation models the material layout via a density of material, $\rho$, varying from 0 (absence of material) to 1 (presence of material), where the dependence of structural properties on the density of material is described by a power law. This method is known as Solid Isotropic Material with Penalisation (SIMP) \cite{zhou1991coc}. Optimisation of the objective function consists in finding an optimum distribution of $\rho$: $\min_\rho f(\rho)$. The problem can be solved by various numerical schemes, including sequential quadratic programming (SQP) \cite{wilson1963simplicial}, the method of moving asymptotes (MMA) \cite{svanberg1987method}, and the optimality criterion (OC) method \cite{bendsoe2005topology}. The topology optimisation problem can also be replaced with the problem of finding a stationary point of an Ordinary Differential Equation (ODE) \cite{klarbring2010dynamical}. Considering the density constraints on $\rho$, the right-hand side of the ODE is equal to a projection of the negative gradient of the objective function. Such an optimisation approach is widely used in the theory of projected dynamical systems \cite{nagurney2012projected}. Numerical schemes for solving the topology optimisation problem can be constructed using a simple explicit Euler algorithm.
As shown in \cite{klarbring2012dynamical}, such iterative schemes match the algorithms used in the bone remodelling literature \cite{harrigan1994bone}. In this work the topology optimisation problem is applied to heat conduction problems \cite{gersborg2006topology}. Consider a region in space $\Omega$ with a boundary $\Gamma=\Gamma_D \cup \Gamma_N$, $\Gamma_D \cap \Gamma_N= \emptyset$, separated for setting the Dirichlet (D) and the Neumann (N) boundary conditions. For the region $\Omega$ we consider the steady-state heat equation: \begin{equation} \nabla \cdot k \nabla T +f=0 \text{ in } \Omega \end{equation} \begin{equation} T = T_0 \text{ on } \Gamma_D \end{equation} \begin{equation} (k \nabla T) \cdot n = Q_0 \text{ on } \Gamma_N \end{equation} where $T$ is the temperature, $k$ is a heat conduction coefficient, $f$ is a volumetric heat source, and $n$ is an outward unit normal vector. At the boundary $\Gamma_D$ a temperature $T=T_0$ is specified in the form of Dirichlet boundary conditions, and at the boundary $\Gamma_N$ the heat flux $(k \nabla T) \cdot n$ is specified using Neumann boundary conditions. The condition $(k \nabla T) \cdot n = 0$ specified on a part of $\Gamma_N$ means thermal insulation (adiabatic conditions). When stating the topology optimisation problem for the heat conduction problem it is necessary to find an optimal distribution of a limited volume of conductive material in order to minimise heat release, which corresponds to designing a thermally conductive device.
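For a fixed density field (and hence fixed $k$), Eqs.~(1)--(3) form a linear problem in $T$. A minimal one-dimensional sketch of this inner solve, in Python with pure Dirichlet ends and a uniform grid (a deliberate simplification of the paper's 2-D finite-element setting), together with the corresponding discrete cost $\int_\Omega k (\nabla T)^2$, is:

```python
def solve_heat_1d(k, f, T_left, T_right):
    """Solve -(k T')' = f on [0,1] with n = len(k) uniform cells and
    Dirichlet temperatures at both ends (Thomas algorithm).
    k: per-cell conductivity (n values); f: source at the n-1 interior
    nodes. Returns the n+1 nodal temperatures."""
    n = len(k)
    h = 1.0 / n
    m = n - 1
    b = [(k[j] + k[j + 1]) / h**2 for j in range(m)]   # diagonal
    a = [-k[j] / h**2 for j in range(m)]               # sub-diagonal (a[0] unused)
    c = [-k[j + 1] / h**2 for j in range(m)]           # super-diagonal (c[-1] unused)
    rhs = list(f)
    rhs[0] -= a[0] * T_left                            # fold boundary values into rhs
    rhs[-1] -= c[-1] * T_right
    for j in range(1, m):                              # forward elimination
        w = a[j] / b[j - 1]
        b[j] -= w * c[j - 1]
        rhs[j] -= w * rhs[j - 1]
    T = [0.0] * m
    T[-1] = rhs[-1] / b[-1]                            # back substitution
    for j in range(m - 2, -1, -1):
        T[j] = (rhs[j] - c[j] * T[j + 1]) / b[j]
    return [T_left] + T + [T_right]

def compliance(k, T):
    """Discrete analogue of the cost C = int k (grad T)^2, cf. Eq. (4)."""
    h = 1.0 / len(k)
    return sum(kj * ((T[j + 1] - T[j]) / h) ** 2 * h for j, kj in enumerate(k))
```

With uniform conductivity and no source the solver reproduces the exact linear temperature profile, and the cost reduces to $k\,(\Delta T)^2$ over the unit interval.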
It is necessary to find an optimum distribution of material density $\rho$ within a given area $\Omega$ in order to minimise the cost function: \begin{equation} \text{Minimize } C(\rho) = \int_\Omega \nabla T \cdot (k (\rho) \nabla T) \end{equation} \begin{equation} \text{Subject to } \int_\Omega \rho <M \end{equation} In accordance with the SIMP method the region being studied is divided into finite elements with a varying material density $\rho_i$ assigned to each finite element $i$. The relationship between the heat conduction coefficient and the density of material is described by a power law as follows: \begin{equation} k_i = k_{\min} + (k_{\max} - k_{\min}) \rho^p_i , \hspace{5mm} \rho_i \in [0, 1] \end{equation} where $k_i$ is the value of the heat conduction coefficient at the $i$-th finite element, $\rho_i$ is the density value at the $i$-th element, $k_{\max}$ is the heat conduction coefficient at $\rho_i=1$, $k_{\min}$ is the heat conduction coefficient at $\rho_i=0$, and $p$ is a penalisation power ($p>1$). In order to solve the problem (1)--(6) we apply the following techniques used in dynamical systems modelling. Assume that $\rho$ depends on a time-like variable $t$. Let us consider the following differential equation to determine the density in the $i$-th finite element, $\rho_i$, when solving the problem stated in (1)--(6): \begin{equation} \dot{\rho}_i=\lambda \left(\frac{C_i(\rho_i)}{\rho_i V_i} - \mu\right), \hspace{5mm} C_i(\rho_i) = \int_{\Omega_i} \nabla T \cdot (k_i(\rho) \nabla T) d\Omega \end{equation} where the dot denotes the derivative with respect to $t$, $\Omega_i$ is the domain of the $i$-th finite element, $V_i$ is the volume of the $i$-th element, and $\lambda$ and $\mu$ are positive constants characterising the behaviour of the model. This equation can be obtained by applying methods of projected dynamical systems \cite{klarbring2012dynamical} or bone remodelling methods \cite{harrigan1994bone, mullender1994physiological, payten1998optimal}.
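The power-law interpolation (6) is the only place where $\rho$ enters the state equation. A one-line sketch in Python (the values $k_{\min}=0.009$, $k_{\max}=1$, $p=2$ quoted later in the paper are used purely as defaults) makes the penalisation of intermediate densities explicit:

```python
def conductivity(rho, k_min=0.009, k_max=1.0, p=2):
    """SIMP interpolation, Eq. (6): k = k_min + (k_max - k_min) * rho^p.
    For p > 1 intermediate densities yield disproportionately low
    conductivity, which drives the optimiser towards 0/1 designs."""
    return k_min + (k_max - k_min) * rho ** p
```

For $p=2$, a half-density element gets only about a quarter of the conductivity range, so "grey" material is an inefficient use of the mass budget $M$.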
For the numerical solution of equation (7) a projected Euler method is used \cite{nagurney2012projected}. This gives an iterative formulation for finding $\rho_i$ \cite{klarbring2010dynamical}: \begin{equation} \rho^{n+1}_i = \rho^n_i + q\left[\frac{C_i(\rho^n_i)}{\rho^n_i V_i} - \mu^n\right] \end{equation} where $q = \lambda \Delta t$, $\rho^{n+1}_i$ and $\rho^n_i$ are the numerical approximations of $\rho_i(t+\Delta t)$ and $\rho_i(t)$, $\mu^n =\frac{\sum_i C_i(\rho^n_i)}{\sum_i \int_{\Omega_i} \rho_{ev} d\Omega}$, and $\rho_{ev}$ is a specified mean value of density. We consider a modification of equation (8): \begin{equation} \rho^{n+1}_i = \begin{cases} \rho^n_i + \theta \text{ if } \frac{C_i(\rho^n_i)}{\rho^n_i V_i} - \mu^n \geq 0, \\ \rho^n_i - \theta \text{ if } \frac{C_i(\rho^n_i)}{\rho^n_i V_i} - \mu^n < 0, \end{cases} \end{equation} where $\theta$ is a positive constant. Then we calculate the value of $\rho_i^{n+1}$ using equation (9) and project $\rho_i$ onto the set of constraints: \begin{equation} \rho^{n+1}_i = \begin{cases} \rho_{\max} \text{ if } \rho^{n+1}_i > \rho_{\max}, \\ \rho^{n+1}_i \text{ if } \rho_{\min} \leq \rho^{n+1}_i \leq \rho_{\max},\\ \rho_{\min} \text{ if } \rho^{n+1}_i < \rho_{\min} \end{cases} \end{equation} where $\rho_{\min}$ is a specified minimum value of $\rho_i$ and $\rho_{\max}$ is a specified maximum value of $\rho_i$. The minimum value is taken as the initial value of density for all finite elements: $\rho_i^0=\rho_{\min}$. \section{Specific parameters} \label{Specificparameters} The algorithm above is implemented in ABAQUS \cite{Abaqus2014} using a modification of the structural topology optimisation plug-in, UOPTI, developed previously \cite{Safonov2015}. Calculations were performed using topology optimisation methods for a finite element model of $200 \times 200 \times 1$ elements. Cube-shaped linear hexahedral elements of DC3D8 type with unit-length edges were used in the calculations.
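On such a mesh, one pass of the fixed-increment update (9) followed by the projection (10) can be sketched as below (Python; for brevity $\mu^n$ is taken as the plain average of the specific costs $C_i/(\rho_i V_i)$, which corresponds to equal element volumes and a uniform $\rho_{ev}$, an assumption of this sketch rather than the general formula):

```python
def update_density(rho, spec_cost, theta=0.03, rho_min=0.01, rho_max=1.0):
    """One step of Eqs. (9)-(10): move each element density by +/- theta
    according to the sign of C_i/(rho_i V_i) - mu, then clamp to
    [rho_min, rho_max].  spec_cost[i] stands for C_i/(rho_i V_i) as
    delivered by the finite-element solve; the defaults are the
    parameter values quoted in the paper."""
    mu = sum(spec_cost) / len(spec_cost)   # simplified mu^n (see lead-in)
    out = []
    for r, s in zip(rho, spec_cost):
        r = r + theta if s - mu >= 0.0 else r - theta
        out.append(min(max(r, rho_min), rho_max))
    return out
```

Elements whose specific cost is above average gain material, those below average lose it, and the projection keeps every density within the admissible interval.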
The elements used have eight integration points. The cost function value is updated for each finite element as the mean value over the integration points of the element under consideration \cite{Abaqus2014}. The model can be described by the following parameters: $\rho_{\min}$ and $\rho_{\max}$ are the minimum and maximum values of $\rho_i$, $M=\sum_i \int_{\Omega_i} \rho_{ev} d\Omega$ is the mass of the conductive material, $\theta$ is the increment of $\rho_i$ at each time step, $p$ is the penalisation power, $k_{\max}$ is the heat conduction coefficient at $\rho_i = 1$, and $k_{\min}$ is the heat conduction coefficient at $\rho_i = 0$. All parameters but $M$ are the same for all six implementations (three devices with two types of boundary conditions): $\rho_{\max}=1$, $\rho_{\min}=0.01$, $\theta=0.03$, $p=2$, $k_{\max}=1$, $k_{\min}=0.009$. The parameter $M$ is specified as follows: $M=2000$ for {\sc and} and {\sc xor} with Dirichlet boundary conditions on inputs, and for the one-bit half-adder with both types of boundary conditions; $M=800$ for the {\sc and} gate and $M=400$ for the {\sc xor} gate with Neumann boundary conditions. We use the following notations. Input logical variables are $x$ and $y$; the output logical variable is $z$. They take values 0 ({\sc False}) and 1 ({\sc True}). Sites where input stimuli are applied to the simulated material are $I_x$ and $I_y$ (inputs); the output sites are $O$, $O_1$, $O_2$. Sites of outlets are $V$, $V_1$ and $V_2$ (the temperature is set to 0 at an outlet, so we use the symbol $V$ by analogy with vents in fluidic devices). Temperature at the sites is denoted $T_{I_x}$, $T_{I_y}$, $T_{O}$, $T_{O_1}$, etc. We denote distances between sites as $l(I_x, I_y)$, $l(I_x, O)$, etc. Logical values are represented by temperature: $x=1$ is $T_{I_x}=100$ and $x=0$ is $T_{I_x}=0$; the same holds for $y$. We input data into the gates by setting thermal boundary conditions at the input sites and adiabatic boundary conditions at the other nodes.
The temperature at each point is specified by setting equal values in the 4 neighbour nodes belonging to the same finite element. Temperature at outputs and outlets is set to zero in all experiments: $T_O=T_{O_1}=T_{O_2}=0$, $T_V=T_{V_1}=T_{V_2}=0$. To maintain the specified boundary conditions we set up a thermal flow through the boundary points. The intensity of the flows is determined via the solution of the thermal conductivity equation at each iteration. Therefore, the intensity of the thermal streams via input, output and outlet sites changes during the simulation depending on the density distribution of the conductive material. Namely, if we define zero temperature at a site, the intensity of the stream through the site will be negative if the density of the conductive material is maximal; the intensity will be zero if the material density is minimal. When we do not define a temperature at a site, the intensity is non-zero if the density is maximal and zero if the density is minimal. Therefore, instead of talking about temperature at the output we talk about the thickness of the conductive material. Namely, if the material density value at the output site $O$ is minimal, $\rho_O = \rho_{\min}$, we assume logical output 0 ({\sc False}). If the density $\rho_O = \rho_{\max}$, we assume logical output 1 ({\sc True}). The material density for all finite elements is set to the minimum value $\rho _i^0=\rho _{\min}$ at the beginning of computation. In the case of Dirichlet boundary conditions on inputs, for $x=0$ and $y=0$ the temperature is constant and equal to zero at all points, therefore the temperature gradient is also zero, $\nabla T=0$. The cost function is also equal to zero at all points: $C_i(\rho _i)=0$. As the initial density for all finite elements is set to the minimum value $\rho_i^0=\rho_{\min}$, it follows from equations (9) and (10) that the density stays constant and equal to its minimum value $\rho_i^n=\rho_{\min}$.
Therefore, the density value at the point $O$ is minimal, $\rho_O=\rho_{\min}$, which indicates logical output 0. Further we consider only situations when at least one of the inputs is non-zero. In the case of Neumann boundary conditions on inputs, a flux at each site is specified by setting the flux through the face of the finite element to which the site under consideration belongs. Adiabatic boundary conditions are set for the other nodes. The logical value of $x$ is represented by the value of the given flux in $I_x$, $Q_{I_x}$. The logical value of $y$ is represented by the value of the given flux in $I_y$, $Q_{I_y}$. Flux $Q_{I_x}=0$ represents $x=0$ and flux $Q_{I_x}=1$ represents $x=1$. Figures in the paper show the density distribution of the conductive material. The maximum values of $\rho$ are shown by red colour, the minimum values by blue colour. \section{{\sc and} gate} \label{andgate} \subsection{Dirichlet boundary conditions} \begin{figure}[!tbp] \centering \subfigure[]{\includegraphics[scale=0.25]{fig1a} } \subfigure[]{\includegraphics[scale=0.25]{fig1b}} \subfigure[]{\includegraphics[scale=0.25]{fig1c}} \caption{{\sc and} gate implementation with Dirichlet boundary conditions. (a) Scheme of inputs and outputs. (b,c) Density distribution $\rho$ for inputs (b) $x=1$ and $y=0$ and (c) $x=1$ and $y=1$. } \label{fig1} \end{figure} \begin{figure}[!tbp] \centering \subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig2a} } \subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig2b} } \subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig2c} } \subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig2d} } \subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig2e} } \subfigure[$t=100$]{ \includegraphics[scale=0.85]{fig2f} } \caption{Density distribution, $\rho$, in the implementation of the {\sc and} gate for inputs $x=1$ and $y=0$, Dirichlet boundary conditions for input points.
The snapshots are taken at $t=10$, 20, 30, 40, 50, and 100 steps.} \label{fig2} \end{figure} \begin{figure}[!tbp] \centering \subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig3a} } \subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig3b} } \subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig3c} } \subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig3d} } \subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig3e} } \subfigure[$t=100$]{ \includegraphics[scale=0.85]{fig3f} } \caption{Density distribution, $\rho$, in the implementation of the {\sc and} gate for inputs $x=1$ and $y=1$, Dirichlet boundary conditions for input points. The snapshots are taken at $t=10$, 20, 30, 40, 50, and 100 steps.} \label{fig3} \end{figure} Let us consider the implementation of an {\sc and} gate in the case of Dirichlet boundary conditions at the input sites. The input $I_x$ and $I_y$ and output $O$ sites are arranged at the vertices of an isosceles triangle (Fig.~\ref{fig1}a): $l(I_x, I_y)=102$, $l(I_x, O)=127$, $l(I_y, O)=127$. The Dirichlet boundary conditions are set at $I_x$, $I_y$ and $O$. The material density distribution for inputs $x=1$ and $y=0$ is shown in Fig.~\ref{fig1}b. The maximum density region connects $I_x$ with $I_y$ and no material is formed at site $O$; thus the output is 0. The space-time dynamics of the gate is shown in Fig.~\ref{fig2}. When both inputs are {\sc True}, $x=1$ and $y=1$, domains with maximum density of the material connect the input sites with the output site, $(I_x, O)$ and $(I_y, O)$ (Fig.~\ref{fig1}c). Therefore the density value at the output is maximal, $\rho_O = \rho_{\max}$, which indicates logical output 1 ({\sc True}). Figure~\ref{fig3} shows intermediate results of the density distribution in the gate for $x=1$ and $y=1$. Supplementary videos can be found in Ref.~\cite{Safonov2016}.
\subsection{Neumann boundary conditions} \begin{figure}[!tbp] \centering \subfigure[]{\includegraphics[scale=0.25]{fig5a}} \subfigure[]{\includegraphics[scale=0.25]{fig5b}}\\ \subfigure[]{\includegraphics[scale=0.25]{fig5c}} \subfigure[]{\includegraphics[scale=0.25]{fig5d}} \caption{{\sc and} gate implementation in the case of Neumann boundary conditions. (a) Scheme of the gate. Density distribution, $\rho$, for inputs (b) $x=1$, $y=0$, (c) $x=0$, $y=1$, (d) $x=1$, $y=1$.} \label{fig5} \end{figure} \begin{figure}[!tbp] \centering \subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig6a} } \subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig6b} } \subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig6c} } \subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig6d} } \subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig6e} } \subfigure[$t=200$]{ \includegraphics[scale=0.85]{fig6f} } \caption{Density distribution, $\rho$, in the implementation of the {\sc and} gate for inputs $x=1$ and $y=1$, Neumann boundary conditions for input points. The snapshots are taken at $t=10$, 20, 30, 40, 50, and 200 steps.} \label{fig6} \end{figure} Let us consider the implementation of the {\sc and} gate in the case of Neumann boundary conditions for input points. The scheme of the gate is shown in Fig.~\ref{fig5}a. The distance between $I_x$ and $I_y$ is 40 points, the distance between $I_x$ and $V$ is 70 points, and that between $I_y$ and the outlet $V$ is 90 points. The output site $O$ is positioned in the middle of the segment $(I_x, I_y)$. Boundary conditions at $I_x$, $I_y$ and $V$ are set as fluxes, i.e. Neumann boundary conditions. Figure~\ref{fig5}b shows the density distribution, $\rho$, for inputs $x=1$ and $y=0$. The maximum density region develops along the shortest path $(I_x, V)$. Therefore, the density value at $O$ is minimal, $\rho_O=\rho_{\min}$, which represents logical output {\sc False}. For inputs $x=0$ and $y=1$ (Fig.~\ref{fig5}c) the maximal density region is formed along the path $(I_y, V)$, i.e.
$\rho_O=\rho_{\min}$ and the logical output is {\sc False}. The material density distribution for inputs $x=1$ and $y=1$ is shown in Fig.~\ref{fig5}d. The maximum density region develops along the path $(I_y, I_x, V)$. Thus $\rho_O=\rho_{\max}$ and the logical output is {\sc True}. Figure \ref{fig6} shows intermediate results of simulating the density distribution, $\rho$, for inputs $x=1$ and $y=1$. At the beginning of computation the material develops in proximity of $I_x$, $I_y$ and $V$ (Fig.~\ref{fig6}a). Then $I_x$ and $V$ are connected by a domain with the highest density of the material (Fig.~\ref{fig6}b). A thinner region of high-density material then develops between $I_x$ and $I_y$ (Fig.~\ref{fig6}c--f). \section{{\sc xor} gate} \label{xorgate} \subsection{Dirichlet boundary conditions} \begin{figure}[!tbp] \centering \subfigure[]{\includegraphics[scale=0.24]{fig4a} } \subfigure[]{\includegraphics[scale=0.24]{fig4b} } \subfigure[]{\includegraphics[scale=0.24]{fig4c} } \caption{{\sc xor} gate implementation with Dirichlet boundary conditions. (a) Scheme of inputs and outputs. (b,c) Density distribution $\rho$ for inputs (b) $x=1$ and $y=0$ and (c) $x=1$ and $y=1$. } \label{fig4} \end{figure} Let us consider the implementation of the {\sc xor} gate in the case of Dirichlet boundary conditions for input points. We use a design similar to that of the {\sc and} gate (Fig.~\ref{fig1}) but with two inputs $I_x$ and $I_y$, an output $O$ and an outlet $V$. The output site $O$ of the {\sc and} gate is reassigned the function of the outlet $V$, and the output site $O$ is positioned in the middle of the segment connecting sites $I_x$ and $I_y$ (Fig.~\ref{fig4}a). The temperature at the point $V$ is set to 0, $T_V=0$; no temperature boundary conditions are set at $O$. If only one input is {\sc True}, a region of maximum density material is formed along the shortest path between $I_x$ and $I_y$. Therefore, the density value is $\rho_O=\rho_{\max}$, indicating output {\sc True} (Fig.~\ref{fig4}b, $x=1$, $y=0$).
When both input variables are {\sc True}, $x=1$ and $y=1$, maximum density regions are formed along the paths $(I_x, V)$ and $(I_y, V)$, not along $(I_x, I_y)$. Thus $\rho_O = \rho_{\min}$, i.e. logical output {\sc False} (Fig.~\ref{fig4}c, $x=1$, $y=1$). \subsection{Neumann boundary conditions} \begin{figure}[!tbp] \centering \subfigure[]{\includegraphics[scale=0.25]{fig7a}} \subfigure[]{\includegraphics[scale=0.25]{fig7b}}\\ \subfigure[]{\includegraphics[scale=0.25]{fig7c}} \subfigure[]{\includegraphics[scale=0.25]{fig7d}} \caption{{\sc xor} gate implementation in the case of Neumann boundary conditions. (a) Scheme of the gate. Density distribution, $\rho$, for inputs (b) $x=1$, $y=0$, (c) $x=0$, $y=1$, (d) $x=1$, $y=1$.} \label{fig7} \end{figure} Let us consider the implementation of the {\sc xor} gate in the case of Neumann boundary conditions for input points. The gate has five sites: inputs $I_x$ and $I_y$, output $O$, and outlets $V_1$ and $V_2$ (Fig.~\ref{fig7}a). Sites $I_x$, $I_y$, $V_1$ and $V_2$ are vertices of a square with side length 42 points. The output site $O$ is positioned at the intersection of the diagonals of the square. Boundary conditions at $I_x$, $I_y$, $V_1$ and $V_2$ are set as fluxes, i.e. Neumann boundary conditions. To ensure convergence of solutions of the stationary heat conduction problem (1) the fluxes at $V_1$ and $V_2$ are set equal to the negative half-sum of the fluxes in $I_x$ and $I_y$: $Q_{V_1} = Q_{V_2} = - \frac{Q_{I_x}+Q_{I_y}}{2}$. For $x=1$ and $y=0$ the maximum density domain is formed between $I_x$ and $V_1$ and between $I_x$ and $V_2$ (Fig.~\ref{fig7}b). The output site $O$ sits on the $(I_x, V_2)$ diagonal, therefore $\rho_O = \rho_{\max}$, and thus the logical output is {\sc True}. When the inputs are $x=0$ and $y=1$ the maximum density domain is formed between $I_y$ and $V_2$ and between $I_y$ and $V_1$ (Fig.~\ref{fig7}c).
The output site $O$ sits on the $(I_y, V_1)$ diagonal, therefore $\rho_O = \rho_{\max}$, and thus the logical output is {\sc True}. When both inputs are {\sc True}, $x=1$ and $y=1$, domains of high-density material develop along the shortest paths $(I_x, V_1)$ and $(I_y, V_2)$ (Fig.~\ref{fig7}d). These domains do not cover the site $O$, therefore the logical output is {\sc False}. \section{One-bit half-adder} \label{onebithalfadder} \subsection{Dirichlet boundary conditions} \begin{figure}[!tbp] \centering \subfigure[]{\includegraphics[scale=0.25]{fig8a}} \subfigure[]{\includegraphics[scale=0.25]{fig8b}} \subfigure[]{\includegraphics[scale=0.25]{fig8c}} \caption{One-bit half-adder implementation in the case of Dirichlet boundary conditions. (a) Scheme of the adder. Density distribution, $\rho$, for inputs (b) $x=1$, $y=0$, (c) $x=1$, $y=1$. } \label{fig8} \end{figure} To implement the one-bit half-adder in the case of Dirichlet boundary conditions for the input points we combine the designs of the {\sc and} and {\sc xor} gates (Figs.~\ref{fig1}a and \ref{fig4}a). We introduce the following changes to the scheme shown in Fig.~\ref{fig4}a: the former outlet $V$ is designated as output $O_1$, and the former output $O$ is designated as output $O_2$ (Fig.~\ref{fig8}a). The temperature at $O_1$ is set to zero, $T_{O_1}=0$. No temperature boundary conditions are set at $O_2$. The output $O_1$ indicates the logical value $xy$ and the output $O_2$ the logical value $x \oplus y$. When only one of the inputs is {\sc True} and the other {\sc False}, e.g. $x=1$ and $y=0$ as shown in Fig.~\ref{fig8}b, the density value at $O_1$ is minimal, $\rho_{O_1}=\rho_{\min}$, and the density value at $O_2$ is maximal, $\rho_{O_2}=\rho_{\max}$. Thus $O_1$ indicates {\sc False} and $O_2$ {\sc True}. For inputs $x=1$ and $y=1$ we have $\rho_{O_1}=\rho_{\max}$ and $\rho_{O_2}=\rho_{\min}$, i.e. the logical outputs {\sc True} and {\sc False}, respectively.
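The behaviour of the two outputs described above can be condensed into a short readout sketch: an output is read as {\sc True} exactly when the density at its site is $\rho_{\max}$, so that $O_1$ and $O_2$ realise the pair $(xy,\, x \oplus y)$. The snippet below encodes only the reported truth table; it does not simulate the heat-conduction optimisation, and the density constants are placeholders.

```python
# Readout convention: a logical output is True exactly when the
# density at its site reaches the maximum value rho_max.
RHO_MIN, RHO_MAX = 0.0, 1.0  # placeholder density bounds

def half_adder_densities(x, y):
    """Densities at O1 and O2 as reported for Dirichlet boundary conditions."""
    rho_o1 = RHO_MAX if (x and y) else RHO_MIN   # O1 carries the carry bit, xy
    rho_o2 = RHO_MAX if (x != y) else RHO_MIN    # O2 carries the sum bit, x XOR y
    return rho_o1, rho_o2

def read_output(rho):
    """Logical output is True iff the site density is maximal."""
    return rho == RHO_MAX

for x in (0, 1):
    for y in (0, 1):
        o1, o2 = half_adder_densities(x, y)
        assert read_output(o1) == bool(x and y)      # carry = AND
        assert read_output(o2) == (bool(x) ^ bool(y))  # sum = XOR
```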
\subsection{Neumann boundary conditions} \begin{figure}[!tbp] \centering \subfigure[]{\includegraphics[scale=0.25]{fig9a}} \subfigure[]{\includegraphics[scale=0.25]{fig9b}}\\ \subfigure[]{\includegraphics[scale=0.25]{fig9c}} \subfigure[]{\includegraphics[scale=0.25]{fig9d}} \caption{One-bit half-adder implementation in the case of Neumann boundary conditions. (a) Scheme of the adder. Density distribution, $\rho$, for inputs (b) $x=1$, $y=0$, (c) $x=0$, $y=1$, (d) $x=1$, $y=1$. } \label{fig9} \end{figure} \begin{figure}[!tbp] \centering \subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig10a} } \subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig10b} } \subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig10c} } \subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig10d} } \subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig10e} } \subfigure[$t=69$]{ \includegraphics[scale=0.85]{fig10f} } \caption{Density distribution, $\rho$, in the implementation of the one-bit half-adder for inputs $x=1$ and $y=1$, with Neumann boundary conditions for the input points. The snapshots are taken at $t=10$, 20, 30, 40, 50, and 69 steps.} \label{fig10} \end{figure} Let us consider the implementation of the one-bit half-adder in the case of Neumann boundary conditions for the input points. The device consists of seven sites: two inputs $I_x$ and $I_y$, two outputs $O_1$ and $O_2$, and three outlets $V_1$, $V_2$ and $V_3$ (Fig.~\ref{fig9}a). The sites $I_x$, $I_y$, $O_1$ and $O_2$ are vertices of a square with a side length of 40 points. The output $O_2$ is positioned at the intersection of the diagonals of this square. The output $O_1$ is positioned in the middle of the segment connecting $V_2$ and $V_3$. The distance between $V_1$ and $V_3$ is 36 points, and the distance between $V_3$ and $V_2$ is 51 points. The output $O_1$ represents the logical function $xy$ and the output $O_2$ the function $x \oplus y$. Boundary conditions at $I_x$, $I_y$, $V_1$, $V_2$ and $V_3$ are set as fluxes, thus corresponding to Neumann boundary conditions.
To ensure convergence of solutions for the stationary problem of heat conduction (1), the flux values at $V_1$, $V_2$ and $V_3$ are set equal to one third of the negative sum of the fluxes at $I_x$ and $I_y$: $Q_{V_1} = Q_{V_2} = Q_{V_3} = - \frac{Q_{I_x}+Q_{I_y}}{3}$. Figure~\ref{fig9}b shows the results of calculating the density distribution $\rho$ for inputs $x=1$ and $y=0$. There the maximum density region connects $I_x$ with $V_1$, $V_2$ and $V_3$. The density domain $(I_x, V_3)$ is not a straight line because the system benefits most when a part of the segment $(I_x, V_3)$ coincides with the segment $(I_x, V_1)$. The site $O_2$ is covered by the maximum density domain $(I_x, V_1)$, $\rho_{O_2} = \rho_{\max}$, thus representing the logical value {\sc True}; the output $O_1$ is {\sc False} because $\rho_{O_1} = \rho_{\min}$. The density distribution $\rho$ calculated for inputs $x=0$ and $y=1$ is shown in Fig.~\ref{fig9}c. The maximum density region connects $I_y$ with $V_1$, $V_2$ and $V_3$ via the paths $(I_y, V_1)$, $(I_y, V_2)$ and $(I_y, V_3)$. The output $O_2$ belongs to $(I_y, V_2)$, therefore it indicates the logical output {\sc True}. The output $O_1$ indicates {\sc False} because it is not covered by a high-density domain. The density distribution $\rho$ calculated for inputs $x=1$ and $y=1$ is shown in Fig.~\ref{fig9}d. The maximum density regions develop along the paths $(I_x, V_2)$, $(I_y, V_1)$, $(I_y, V_3)$ and $(V_2, V_3)$. There is a heat flux between $V_2$ and $V_3$ which forms a segment of high-density material. The high-density material covers $O_1$, therefore the output $O_1$ indicates the logical value {\sc True}. The output $O_2$ is not covered by high-density material, thus {\sc False}. Figure \ref{fig10} shows intermediate results of simulating the density distribution $\rho$ for inputs $x=1$ and $y=1$. \section{Discussion} \label{discussion} We implemented logical gates and circuits using the optimisation of conductive material when solving stationary problems of heat conduction.
In the simplest case of two sites with given heat fluxes, the conductive material is distributed between the sites in a straight line. The gate implementations presented employ several sites; the exact configuration of the topologically optimal structures of the conductive material is determined by the values of the input variables. The algorithm for the optimal layout of the conductive material is similar to the biological process of bone remodelling. The proposed algorithm can be applied to a wide range of biological networks, including neural networks, vascular networks, slime mould, plant roots and fungal mycelium. These networks will be the subject of further studies. In the future we could also consider experimental laboratory testing of the numerical implementations of the logical gates, e.g. via dielectric breakdown tests, because this phenomenon is also described by Laplace's stationary heat conduction equation, which takes into account the evolution of the conductivity of a medium determined by the electric current. The approach to developing logical circuits proposed here could be used in the fast prototyping of experimental laboratory unconventional computing devices. Such devices will compute by changing the properties of their material substrates. First steps in this direction have been made in designing Belousov-Zhabotinsky medium based computing devices for pattern recognition~\cite{fang2016pattern} and configurable logical gates~\cite{wang2016configurable}, a learning slime mould chip~\cite{whiting2016towards}, electric current based computing~\cite{ayrinhac2014electric}, programmable excitation wave propagation in living bioengineered tissues~\cite{mcnamara2016optically}, heterotic computing~\cite{kendon2015heterotic}, and memory devices in digital colloids~\cite{phillips2014digital}.
\section*{Supplementary materials} \subsection*{{\sc xor} gate, Neumann boundary conditions} \begin{itemize} \item inputs $x=0$, $y=1$: \url{https://www.youtube.com/watch?v=osB12UqM3-w} \item inputs $x=1$, $y=0$: \url{https://www.youtube.com/watch?v=lKMeu1nFuak} \item inputs $x=1$, $y=1$: \url{https://www.youtube.com/watch?v=AxdCVVtIqgk} \end{itemize} \subsection*{One-bit half-adder, Neumann boundary conditions} \begin{itemize} \item inputs $x=0$, $y=1$: \url{https://www.youtube.com/watch?v=i81WTCrg8Lg} \item inputs $x=1$, $y=0$: \url{https://www.youtube.com/watch?v=impbwJXjCAM} \item inputs $x=1$, $y=1$: \url{https://www.youtube.com/watch?v=ubrgfzlAQQE} \end{itemize} \bibliographystyle{elsarticle-num}
\section{Introduction} The number of Milky Way (MW) globular clusters (GCs) known to possess extended tidal features is growing. Thanks to large, well-calibrated imaging surveys such as the Sloan Digital Sky Survey (SDSS) \citep{2012ApJS..203...21A} and Pan-STARRS1 (PS1) \citep{2016arXiv161205560C}, as well as the more recent astrometric Gaia Space Mission \citep{2018A&A...616A...1G}, tidal features have been found around a few tens of GCs in the MW halo \citep[e.g.,][Kuzma et al., in prep]{2020MNRAS.495.2222S}, ranging from the massive inner halo NGC 5139 \citep{2019NatAs.tmp..258I,2021MNRAS.507.1127K} to the low-mass outer halo Palomar 1 \citep{2010MNRAS.408L..66N}. The most emphatic display of tidal features in the MW, however, belongs to Palomar 5 \citep[Pal 5,][]{2001ApJ...548L.165O,2003AJ....126.2385O}. The tidal tails of Pal 5 have now been traced across more than 20$\degr$ on the sky \citep{2006ApJ...641L..37G, 2016MNRAS.463.1759B, 2020ApJ...889...70B} and it remains one of the very few stellar streams in the halo with a known progenitor. As the most striking example of GC disruption in action, the tidal tails of Pal 5 have been the subject of many studies over the last two decades, many of which have focused on how their properties can constrain the Galactic potential \citep[e.g.,][]{2004AJ....127.2753D,2012A&A...546L...7M,2014ApJ...795...94B,2015ApJ...803...80K,2015ApJ...799...28P, 2016ApJ...833...31B}. The present-day stellar mass of Pal 5 plus its tidal tails has been estimated to be $1.2-9\times 10^4$ M$_{\odot}$, with 4300 M$_{\odot}$ of that retained in the main body \citep{2017ApJ...842..120I, 2019AJ....158..223P}. This implies that roughly half of the total mass of the system is now populating the tails. The tails also show some level of substructure, including gaps and peaks \citep[e.g.][]{2012ApJ...760...75C, 2016ApJ...819....1I, 2017MNRAS.470...60E}, ``wiggles'' and ``fanning'' \citep{2020ApJ...889...70B}.
While the reality and origin of fine structure along the Pal 5 tails have often been debated \citep[e.g.,][]{2016MNRAS.460.2711T, 2016ApJ...819....1I}, most studies agree that there is a gross asymmetry in the length of the leading and trailing tails. Early mapping work was limited by the SDSS footprint, but \cite{2016MNRAS.463.1759B} were able to use the more extensive coverage of the PS1 survey to show that the leading tail could only be traced photometrically for about half the length ($\approx 8\degr$) of the trailing tail ($\approx 16\degr$). In particular, they showed that, at the photometric depth of PS1, the leading tail could be followed to $\delta\approx-6\degr$ on the sky before ending abruptly. Using deeper photometry from the DECam Legacy Survey \cite[DECaLS; ][]{2019AJ....157..168D}, \citet{2020ApJ...889...70B} demonstrated that leading tail debris could be detected slightly beyond this point, but that it was spread out in a low-density fan. Such fanning could indicate triaxiality in the potential \citep{2015ApJ...799...28P} or the effect of a rotating bar \citep[see][]{2017NatAs...1..633P,2020ApJ...889...70B}. On the other hand, \cite{2020MNRAS.493.4978S} analyse Gaia DR2 data and suggest that the leading tail can be traced nearly as far as the trailing tail in that dataset. However, this analysis is based on main sequence turn-off stars with magnitudes at the faint limit of the survey and the assumption of a cluster metallicity of [Fe/H]$=-3.1$ dex compared to the measured value of [Fe/H]$=-1.5$ dex \citep{2017A&A...601A..41K}. A further complication is that, at these faint magnitudes, there is also contamination along the sightline to the leading tail from the Sagittarius (Sgr) stream \citep{2016ApJ...819....1I, 2020ApJ...889...70B}. Spectroscopic studies of the Pal 5 stream have thus far been more limited than photometric ones.
The first kinematic exploration of the Pal 5 tails was performed by \cite{2009AJ....137.3378O}, who explored an 8.5$^{\circ}$ extent of the tidal tails. Using 17 confirmed red giant branch (RGB) members, they measured a linear radial velocity gradient of 1.0 km s$^{-1}$ deg$^{-1}$ along the stream, as well as a small intrinsic velocity dispersion of $\sim2$ km s$^{-1}$. \citet[][hereafter K15]{2015MNRAS.446.3297K}, \citet{2016ApJ...823..157I} and \citet[][]{2017ApJ...842..120I} (hereafter Ib17) conducted more extensive spectroscopic searches, covering a $\approx 20^{\circ}$ region along the stream, and confirmed this mild radial velocity gradient and low dispersion. \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig1.pdf} \end{center} \caption{Top: Location of the observed targets overplotted on a density map of EDR3 stars. The dashed line indicates the stream track determined by \citet{2017ApJ...842..120I}. Pal 5 resides at (229$\degr$, 0$\degr$) and the dashed circle indicates the Jacobi radius of 11 arcmin \citep{2019AJ....158..223P}. Bottom: PS1 photometry (left) and Gaia EDR3 proper motions (right) of our stream targets overlaid on those of stars lying within the Jacobi radius of Pal 5. The left panel includes a Dartmouth stellar isochrone \citep{2008ApJS..178...89D} of age 13.65 Gyr and [Fe/H]$=-1.56$ dex. In the right panel, the proper motion of the main body of Pal 5 is $(\mu^{*}_{\alpha},\mu_{\delta})= (-2.76,-2.65)~\rm{mas}~ \rm{yr}^{-1}$.} \label{fig1}\label{fig:obs_fields} \end{figure*} While none of these radial velocity studies have found evidence for fanning in the tails, they have not probed the extreme ends of the tails where such behaviour might be expected. \cite{2019AJ....158..223P} have recently explored kinematics along the Pal 5 stream using RR Lyrae (RRL) stars that have Gaia DR2 proper motions (PMs).
Intriguingly, they find a few RRLs with high probability of stream membership to be considerably offset from the stream track, including two stars in the region where \citet{2020ApJ...889...70B} find evidence for fanning in their star count map. No radial velocities are available for these stars, however. \begin{table} \centering \caption{List of observations.} \label{tab:obslist} \begin{tabular}{@{}lcc@{}} \hline \hline & Field 1& Field 2\\ \hline R.A. (J2000)&$227\overset{\circ}{.}946$&$224\overset{\circ}{.}425$\\ Dec. (J2000)&$-1\overset{\circ}{.}266$&$-4\overset{\circ}{.}690$\\ Date-obs&25/02/2017&24/02/2017\\ Exp. Time&$3\times1200$s&$3\times1800$s\\ Avg. Seeing&$2.6''$&$1.5''$\\ Ang. Distance$^{\dagger}$&$1.75^{\circ}$&$6.6^{\circ}$\\ \hline $^{\dagger}$ Angular distance from Pal 5. \end{tabular} \end{table} In this paper, we present a new kinematic study of the leading Pal 5 tidal tail. This work probes greater angular distances from the cluster than previous radial velocity studies and includes the region where the stream has been suggested to fan out based on deep photometry. In Section 2, we discuss the observations performed and the probabilistic methods we have employed to identify stream members. Section 3 presents our results, and we discuss our findings in Section 4. We present our conclusions in Section 5. \section{The Data} \subsection{Observations \& Target Selection} AAOmega is a multi-fibre, dual-beam spectrograph that is mounted on the 3.9m Anglo-Australian Telescope (AAT) at Siding Spring Observatory \citep{2006SPIE.6269E..0GS}. When coupled to the Two Degree Field (2dF) fibre positioning system \citep{2002MNRAS.333..279L}, it provides 392 science fibres that can be configured across a 2$\degr$ diameter circular field. We used AAOmega+2dF with a dichroic centered at 5700 \r{A} to split the incoming light down the red and blue arms, which held the 1700D and 580V gratings, respectively.
The 1700D grating covers the wavelength range $\sim$8400--8880 \r{A}, which includes the \ion{Ca}{II}\ triplet absorption lines at 8498 \r{A}, 8542 \r{A} and 8662 \r{A}, and has a resolution of R=10000. In the blue, the 580V grating covers the range $\sim$3800--5800 \r{A} with a resolution of R=1300 and covers the \ion{Mg}{I} triplet lines (also known as Mg $b$) at 5167 \r{A}, 5173 \r{A} and 5184 \r{A}. As part of Opticon proposal 17A/063 (PI Ferguson), two fields on the Pal 5 stream were observed on the 24th and 25th of February 2017 under dark conditions and mostly clear skies, with 354 and 352 stars targeted per field on the respective nights. Arc spectra and quartz lamp flat-fields were obtained before and after each set of science observations, and a series of bias exposures were taken at the start of each night. Each target field had three sets of observations, with the closest field to the main body of Pal 5 having individual exposures of 1200s, while the outermost field had 1800s exposures. Both fields were chosen to lie along the leading tail of the Pal 5 stream as traced by \cite{2016MNRAS.463.1759B}. Field~1 is located 1.75$^{\circ}$ from the center of the cluster, while Field~2 is at the furthest extent mapped to date, 6.6$^{\circ}$ away, in a region that has not previously been studied spectroscopically. Fig. \ref{fig:obs_fields} shows the sky positions of our fields relative to the main body of Pal 5, and the observations are summarised in Table~\ref{tab:obslist}. The target selection was based on the PS1 DR1 photometry \citep{2016arXiv161205560C} with the primary consideration being the location of stars in colour-magnitude space with respect to the expected locus for Pal~5. In Fig. \ref{fig:obs_fields}, we show a colour-magnitude diagram (CMD) of the stars which lie within the Jacobi radius of Pal~5, calculated to be 11 arcmin by \citet{2019AJ....158..223P}, on which our spectroscopic targets are overlaid.
As can be seen, our targets are fairly bright stars ({\it i$_{\rm PS1} \leq 18$}) which largely lie on the upper RGB and blue horizontal branch (HB). As our original target selection was performed before the release of Gaia DR2 astrometric data, it is reasonable to expect a modest to significant level of field contamination in our sample. This is confirmed in Fig.~\ref{fig:obs_fields}, which shows the Gaia EDR3 \citep[EDR3;][]{2016A&A...595A...1G,2020arXiv201201533G,2020arXiv201203380L} PMs of our targets overlaid on those of stars within the Jacobi radius of Pal 5. The astrometric data complement the radial velocities and [Fe/H] measurements that we will present in this paper, and in Sec.~\ref{kin} we detail how we combine all this information to isolate a clean sample of Pal 5 stars. \subsection{Reduction \& Processing} We began by reducing the spectra with the 2dF Data Reduction (2\textsc{dfdr}\footnote{\url{http://www.aao.gov.au/2df/aaomega/aaomega_2dfdr.html}}) software package, with the default settings for both gratings. This performed the standard processes of debiasing, flat-fielding, wavelength calibration, extraction and sky subtraction using the designated sky fibres. At the end of this process, we removed any cosmic ray residuals by median-combining the spectra for each star. In the regions of the \ion{Ca}{II}\ triplet, the signal-to-noise of our targets ranges from $\sim2-30$ per pixel in Field 1 and $\sim5-50$ per pixel for Field 2. Due to their superior signal-to-noise and spectral resolution, we use these red arm spectra for the bulk of our analysis. To determine the radial velocities (RVs) of our targets, we used the python package \textsc{PyAstronomy}\footnote{\url{https://github.com/sczesla/PyAstronomy}} \citep{pya}. The \texttt{crosscorrRV} routine takes a target spectrum and cross-correlates it with a template spectrum, returning the shift as a velocity measurement.
The template used was a synthetic spectrum that contained the \ion{Ca}{II}\ triplet lines broadened by a Gaussian at the resolution of the 1700D grating using the \texttt{instrBroadGaussFast} routine. We added random noise to each pixel in the target spectra based on the variance of the flux in that pixel \citep[e.g.,][]{2018MNRAS.477.4565S}, and subsequently performed the cross-correlation across the wavelength interval of 8450--8700 \r{A}, a region mostly bereft of sky residuals. This process was repeated 100 times per star to produce a distribution of measured radial velocities. We fit a Gaussian to the resulting distribution and present the mean and standard deviation as the measured radial velocity and its associated 1$\sigma$ uncertainty. Lastly, we corrected for barycentric motion using the routine \texttt{helcorr}, and from now on we refer to these heliocentric radial velocities as $V_{R}$. We also use the \ion{Ca}{II}\ triplet lines to determine stellar metallicities through the well-established method based on their equivalent widths (EWs) \citep[e.g.,][]{1991AJ....101.1329A,2010A&A...513A..34S}. Using the \texttt{equivalent\_width} routine in the \textsc{specutils}\footnote{\url{https://specutils.readthedocs.io/en/stable/}} python package, we measured the EWs of all three of the \ion{Ca}{II}\ triplet lines on normalised spectra that had been wavelength-shifted to zero heliocentric velocity. Specifically, we used this routine to fit a Gaussian profile in 10 \r{A} bandpasses centred on each \ion{Ca}{II}\ line. In order to estimate the EW uncertainties for a given star, we repeated these measurements on each of the 100 realizations created in the radial velocity calculations. Similarly, we adopt the mean and sigma of a Gaussian fit to the resultant distribution as the EW and its associated uncertainty.
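The Monte Carlo procedure described above can be sketched as follows: a noise-resampled spectrum is cross-correlated against a synthetic \ion{Ca}{II}\ triplet template over a grid of trial velocities, and the draws are summarised by their mean and scatter. This is a self-contained stand-in using plain NumPy rather than the \textsc{PyAstronomy} routines; the line depths, noise level and velocity grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
C_KMS = 299792.458                       # speed of light in km/s
CAT_LINES = (8498.02, 8542.09, 8662.14)  # CaII triplet rest wavelengths (Angstrom)

def template(wave):
    """Synthetic continuum-normalised template with Gaussian CaT lines."""
    flux = np.ones_like(wave)
    for line in CAT_LINES:
        flux -= 0.6 * np.exp(-0.5 * ((wave - line) / 1.2) ** 2)
    return flux

def rv_crosscorr(wave, flux, rvmin=-150.0, rvmax=150.0, drv=1.0):
    """RV maximising the correlation with a Doppler-shifted template."""
    rvs = np.arange(rvmin, rvmax + drv, drv)
    cc = [np.sum(flux * template(wave / (1.0 + rv / C_KMS))) for rv in rvs]
    return rvs[int(np.argmax(cc))]

def monte_carlo_rv(wave, flux, sigma_pix, n_draws=100):
    """Resample the spectrum with per-pixel noise and summarise the RV draws."""
    draws = [rv_crosscorr(wave, flux + rng.normal(0.0, sigma_pix, flux.size))
             for _ in range(n_draws)]
    return float(np.mean(draws)), float(np.std(draws))
```

For a spectrum shifted by a known velocity, the routine recovers the input to within the velocity grid step, and the scatter of the draws plays the role of the quoted 1$\sigma$ uncertainty.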
To derive the metallicity of an RGB star, the summed \ion{Ca}{II}\ EW, $\Sigma EW_{\ion{Ca}{II}}$, can be related to [Fe/H] through knowledge of either the distance to the star or the difference between the V magnitude of the star and that of the HB of the Pal 5 cluster, $V-V_{HB}$. In this work, we adopt the \ion{Ca}{II}\ calibration presented in \citet{2013MNRAS.434.1681C}, which is valid over the range $-4.0\leq$ [Fe/H]$ \leq +0.5$. This calibration takes the form of: \begin{equation} \begin{split} {\rm[Fe/H]}=a+b\times (V-V_{HB})+c\times \Sigma EW_{\ion{Ca}{II}}\\+d\times \Sigma EW_{\ion{Ca}{II}}^{-1.5}+e\times \Sigma EW_{\ion{Ca}{II}} \times (V-V_{HB}) \label{eq:feh} \end{split} \end{equation} where the coefficients $a,\,b,\,c,\,d$ and $e$ are listed in Table~\ref{tab:coefflist}. We adopt $V_{HB}=17.51$ mag for Pal 5 \citep{1996AJ....112.1487H} and calculate the V-band magnitudes of our stars from their PS1 photometry using the transformation equations in \cite{2018BlgAJ..28....3K}. Uncertainties in the metallicities come from combining the uncertainties of the \ion{Ca}{II}\ EWs and the uncertainties on the calibration coefficients. Because of the assumption of a fixed magnitude for the HB, it should be emphasised that the metallicities we derive are strictly valid for genuine RGB stars at the distance of Pal 5. While \citet{2016ApJ...819....1I} detect a slight distance gradient along the Pal~5 stream ranging from $\Delta(m-M)= 0.14\pm0.09$ mag at the extent of the trailing edge to $-0.09\pm0.04$ mag at the extent of the leading edge, this shift in distance translates to a mere $\sim0.03$~dex in [Fe/H] and is well within our uncertainties. Our blue arm spectra have sufficient resolution to individually measure the gravity-sensitive \ion{Mg}{I} triplet lines at $\sim$5170~\r{A}, which have been shown to be useful for separating foreground dwarfs from the RGB stars that we are interested in \cite[e.g.,][ K15]{2009AJ....137.3378O}.
The EWs of these lines have been measured in the same way as the \ion{Ca}{II}\ triplet lines described above. While the EW of the stronger \ion{Mg}{I} 8807~\r{A} line could also have been used for dwarf-giant separation \citep[e.g.,][]{Battaglia2012}, it was not always available due to flexure in the spectrograph. \begin{table} \centering \caption{List of coefficients from Table 4 of \citet{2013MNRAS.434.1681C} used in Eq. \ref{eq:feh}.} \label{tab:coefflist} \begin{tabular}{@{}cc@{}} \hline \hline Coefficient & Value\\ \hline a&$-3.45\pm0.04$\\ b&$0.11\pm0.02$\\ c&$0.44\pm0.006$\\ d&$-0.65\pm0.12$\\ e&$0.03\pm0.003$\\ \hline \end{tabular} \end{table} \subsection{Identifying Pal 5 stream stars} \label{subsec:mem} The distance and faintness of the Pal~5 stream conspire to make it challenging to cleanly isolate member stars from the significant foreground and background contaminant populations along its sightline. This is further exacerbated by the presence of the Sgr stream in this part of the sky \citep{2016ApJ...819....1I, 2020ApJ...889...70B}. While the Sgr stream lies at a larger line-of-sight distance, red clump stars from this system contaminate the region of the CMD where faint Pal 5 RGB stars lie \citep[see Fig. 1 of][]{2020ApJ...889...70B}. These considerations motivate us to pursue a probabilistic approach to membership assignment in which we combine the new spectroscopic information described above with Gaia EDR3 astrometry and PS1 photometry. We began by cross-matching our observed targets with EDR3, matching stars with the closest EDR3 source within a search radius of 2 arcsec. We then removed any stars with a well-resolved parallax, $(\omega - 3\sigma_{\omega})>0$ mas, since these will lie in the foreground. We also disregarded stars with a measured velocity uncertainty $>5$ km s$^{-1}$, as this value corresponds to spectra with poor signal-to-noise. Together, these cuts remove 434 stars (61 per cent of the total sample).
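As a concrete illustration of the metallicity scale used above, Eq. \ref{eq:feh} with the central coefficient values from Table \ref{tab:coefflist} and $V_{HB}=17.51$ mag reduces to a one-line function; the example star below is hypothetical.

```python
# Central values of the Carrera et al. (2013) coefficients from the Table
A, B, C, D, E = -3.45, 0.11, 0.44, -0.65, 0.03
V_HB = 17.51  # V magnitude of the Pal 5 horizontal branch (Harris 1996)

def feh_from_cat(sum_ew, v_mag):
    """[Fe/H] from the summed CaII triplet EW (Angstrom) and the V magnitude,
    valid for RGB stars at the distance of Pal 5."""
    dv = v_mag - V_HB
    return A + B * dv + C * sum_ew + D * sum_ew ** -1.5 + E * sum_ew * dv

# Hypothetical Pal 5 RGB star: a summed EW of 4.8 Angstrom at V = 17.0
# lands close to the cluster metallicity of -1.56 dex.
print(round(feh_from_cat(4.8, 17.0), 2))  # -> -1.53
```

The derived [Fe/H] increases monotonically with the summed EW at fixed magnitude, as expected from the calibration.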
To avoid any potential biases, we do not consider further cuts on PMs or line-of-sight velocities at this stage of the analysis. Our approach to determining the probability of a given star belonging to the Pal 5 stream ($P_{P5}$), or to the field ($P_{MW}$), is to use the log-likelihood ratio ($\ln{\mathcal{L}_{R}}$), otherwise known as the Neyman-Pearson test (see chapter 9 of \citealt{1993stp..book.....L}). To this end, we consider in turn the likelihood ratio of the following features: dwarf/giant separation ($\mathcal{L}_{R,D/G}$), [Fe/H] ($\mathcal{L}_{R,[Fe/H]}$) and proximity to the dereddened Pal 5 RGB $(\mathcal{L}_{R,CMD})$. The combined log-likelihood is: \begin{equation}\label{eq:loglike_tot} \ln{\mathcal{L}_{R}}=\ln{\mathcal{L}_{R,CMD}} + \ln{\mathcal{L}_{R,D/G}}+\ln{\mathcal{L}_{R,[Fe/H]}} \end{equation} \noindent where $\mathcal{L}_{R,(CMD,D/G,[Fe/H])}$ is the ratio $P_{P5}/P_{MW}$ for the specific feature. A value of $\ln{\mathcal{L}_{R}}>0$ implies that a star has a higher likelihood of being a Pal 5 stream member, while $\ln{\mathcal{L}_{R}}<0$ implies it is more likely to belong to the contaminating field. A value of $\ln{\mathcal{L}_{R}}=0$ implies neither scenario is favoured. \subsubsection{Photometric Selection}\label{subsubsec:photsel} Fig. \ref{fig:obs_fields} shows that our target selection broadly traces the RGB of Pal 5. To quantify the probability that a given star belongs to Pal 5 based on its CMD position, we measure its difference in colour from a Dartmouth stellar isochrone selected to best represent the cluster \citep{2008ApJS..178...89D}. For this, we use an isochrone with [Fe/H]=$-1.56$ dex \citep[as measured by][]{2017A&A...601A..41K} and age $=13.5$ Gyr, shifted to the distance of Pal 5.
Firstly, we de-reddened the PS1 photometry for each target using the updated \citet{1998ApJ...500..525S} reddening maps from \citet{2011ApJ...737..103S} with the Python package, DustMaps\footnote{\url{https://dustmaps.readthedocs.io/en/latest/index.html}} \citep{2018JOSS....3..695M}. The de-reddened magnitudes are denoted $g_0$ and $i_0$. We then assigned each star a probability according to the following equation \citep[e.g.,][]{2019MNRAS.485.2010G}: \begin{equation} P_{P5,CMD}=\frac{1}{\sqrt{2 \pi \sigma_{(g-i)_{0}}^2}} \exp \left(\frac{-((g-i)_{0}-(g-i)_{iso})^2}{2\sigma^2_{(g-i)_{0}}}\right)\label{eq:p5cmd} \end{equation} \noindent where $(g-i)_0$ and $(g-i)_{iso}$ are the de-reddened colour and isochrone colour at the star's $i_0$-band magnitude, and $\sigma_{(g-i)_0}$ is the uncertainty in the de-reddened colour. In this and the following steps, we have only considered stars that lie along the RGB; targets that are located along the HB will be dealt with separately (see Sec. \ref{kin}). The colour width of our RGB selection box (see Fig. \ref{fig:obs_fields}) is at most $\Delta(g-i)_0=0.4$ for a given $i_0$ mag. Across this rather modest range in colour, we have assumed for simplicity that field contaminants are uniformly distributed and assign each star a probability based on a uniform probability distribution. That is: \begin{equation} P_{MW,CMD} = 1/\Delta(g-i)_0\label{eq:fieldcmd} \end{equation} Combining the probabilities from equations \ref{eq:p5cmd} and \ref{eq:fieldcmd}, we define $\mathcal{L}_{R,CMD}=P_{P5,CMD}/P_{MW,CMD}$. \subsubsection{Dwarf/Giant Separation} The gravity-sensitive \ion{Mg}{I} lines are commonly used to distinguish between RGB stars and foreground dwarf stars. For a given metallicity and temperature, these features have larger EWs in high-gravity dwarf stars than they do in giants. 
The \ion{Mg}{I} lines around 5170 \r{A} have been used for this purpose in previous studies of the Pal 5 stream \cite[e.g.,][ K15]{2009AJ....137.3378O} as well as other diffuse halo structures \citep{2012AJ....143...88C}. These latter authors explicitly demonstrated that this is an effective discriminant between giant stars at the metallicity of Pal 5 and a population of more metal-rich dwarfs. In Fig. \ref{fig:DGS_dist} we plot the summed EWs of the \ion{Mg}{I} and \ion{Ca}{II}\ triplets measured for our sample. This diagram shows a bimodal structure, with the field dwarfs populating the higher sums of the \ion{Mg}{I} EW at a given \ion{Ca}{II}\ EW (see also K15). The one-dimensional distribution of \ion{Mg}{I} triplet EWs ($\Sigma EW_{\mathrm{{\ion{Mg}{I}}}}$) is shown in the right panel and can be modelled by two normal distributions with the form: \begin{equation}\label{eq:dgs} P_{X,D/G} = \frac{1}{\sqrt{2 \pi (\sigma_{\Sigma EW_{\mathrm{\ion{Mg}{I}}}}^2+\sigma_{X}^2)}} \exp \left(\frac{-(\Sigma EW_{\mathrm{\ion{Mg}{I}}}-X)^2}{2 (\sigma_{\Sigma EW_{\mathrm{\ion{Mg}{I}}}}^2+\sigma_{X}^2)}\right) \end{equation} \noindent where $X$ and $\sigma_X$ are the means and standard deviations of the dwarf and giant distributions, and $\sigma_{\Sigma EW_{\mathrm{\ion{Mg}{I}}}}$ is the uncertainty in $\Sigma EW_{\mathrm{\ion{Mg}{I}}}$. After both distributions have been found, we calculate the likelihood ratio $\mathcal{L}_{R,D/G}=P_{P5,D/G}/P_{MW,D/G}$. \subsubsection{Metallicity Selection} \citet{2017A&A...601A..41K} presented a detailed chemical analysis of Pal 5 based on high resolution spectra of 15 RGB stars. They derive a mean metallicity of [Fe/H]$_{P5}= -1.56\pm0.02\pm0.06$ dex (presented with statistical and systematic uncertainties). As there is no evidence for any [Fe/H] spread in the main body of Pal 5, we assume that the stars in the tidal stream will possess the same [Fe/H] abundance.
Therefore, we have adopted this value when assigning stars their associated $P_{P5,[Fe/H]}$, which takes the form of another normal distribution: \begin{equation}\label{eq:metp5} \begin{aligned} P_{P5,[Fe/H]}=\frac{1}{\sqrt{2 \pi (\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{P5}})}}\times \\ \exp \left(\frac{-([Fe/H]-[Fe/H]_{P5})^2}{2(\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{P5}})}\right) \end{aligned} \end{equation} \noindent where [Fe/H] is our measured value using the \ion{Ca}{II}\ EW, $\sigma_{[Fe/H]}$ is its uncertainty and $\sigma_{[Fe/H]_{P5}}$ is the uncertainty of $[Fe/H]_{P5}$, obtained by summing the statistical and systematic terms in quadrature. To inform our choice of a model for the field component, we plot in Fig. \ref{fig:FEH} the inferred [Fe/H] distribution of all stars which remain after parallax and low signal-to-noise removal. While these values only represent actual metallicities for stars at the distance of Pal 5, it is notable that the overall distribution of this quantity can be well-described by a normal distribution. Therefore, we can define $P_{MW,[Fe/H]}$ in a similar manner to Eq. \ref{eq:metp5}: \begin{equation} \begin{aligned} P_{MW,[Fe/H]}=\frac{1}{\sqrt{2 \pi (\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{MW}})}}\times \\ \exp\left(\frac{-([Fe/H]-[Fe/H]_{MW})^2}{2(\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{MW}})}\right)\label{eq:metMW} \end{aligned} \end{equation} \noindent where $[Fe/H]_{MW}$ and $\sigma_{[Fe/H]_{MW}}$ are the mean and sigma of the normal distribution fit to the data. We note that while a small number of confirmed Pal 5 stream members will be included in our model of the field component, they are greatly outnumbered by the much more significant contaminant population, making any bias in determining $P_{MW,[Fe/H]}$ unlikely. Finally, we can calculate the likelihood ratio as $\mathcal{L}_{R,[Fe/H]}=P_{P5,[Fe/H]}/P_{MW,[Fe/H]}$.
\subsubsection{Calculating Membership Probability} To transform our log-likelihoods into membership probabilities, we calculate the probability that a given star belongs to Pal~5 ($P_{mem}$) using the following equation: \begin{equation}\label{eq:loglike1} P_{mem}=\frac{f\mathcal{L}_{P5}}{f \mathcal{L}_{P5} + (1-f) \mathcal{L}_{MW}} \end{equation} \noindent where $\mathcal{L}_{P5/MW}$ is the likelihood value with respect to Pal~5 and the field respectively (not to be confused with the likelihood ratio $\mathcal{L}_{R}$), and $f$ is the normalisation between the two populations. In our case, we have simply assumed that membership of Pal 5 or the MW field is equally probable, thus $f=0.5$. We acknowledge that this choice is arbitrary; however, it removes any prior bias towards either Pal 5 membership or the field, which is reasonable for our analysis. As we have calculated the log-likelihood ratio ($\ln{\mathcal{L}_{R}}$) from Eq. \ref{eq:loglike_tot}, we can simplify Eq. \ref{eq:loglike1} by dividing through by the common factor of $0.5\mathcal{L}_{MW}$ to express the probability that a star belongs to Pal 5 in terms of the likelihood ratio $\mathcal{L}_{R}$ defined in Eq. \ref{eq:loglike_tot} \citep[e.g.][]{1993stp..book.....L,2011MNRAS.413.2895J}: \begin{equation}\label{eq:loglike_P5} P_{mem}=\frac{\mathcal{L}_{R}}{\mathcal{L}_{R}+ 1} \end{equation} As we have set the normalisation factor to 0.5, stars with $P_{mem} \geq 0.5$ are high-likelihood Pal 5 members. With this assumption, we find that 75 stars, or $\sim11$\ per cent of our total sample, are likely to belong to Pal 5. To avoid introducing biases, we have thus far ignored the kinematics of stars in identifying likely Pal 5 stream members. In the next Section, we consider how such measurements can inform our final sample definition.
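This final transformation is simply the logistic function of the log-likelihood ratio; a minimal sketch:

```python
import numpy as np

def membership_probability(ln_lr):
    """Convert the log-likelihood ratio ln(L_R) = ln(L_P5 / L_MW) into
    a membership probability assuming equal priors (f = 0.5), i.e.
    P_mem = L_R / (L_R + 1), written as a logistic function for
    numerical stability when |ln L_R| is large."""
    return 1.0 / (1.0 + np.exp(-np.asarray(ln_lr, dtype=float)))
```

A star equally likely under both models ($\ln\mathcal{L}_{R}=0$) has $P_{mem}=0.5$, which motivates the threshold adopted above.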
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig2.pdf} \end{center} \caption{Dwarf/giant separation for our targets after the removal of targets with significant EDR3 parallax measurements and low signal-to-noise spectra. The sum of the \ion{Ca}{II}\ EWs is shown along the x-axis, while the summed \ion{Mg}{I} b EWs are along the y-axis. The right histogram shows the distribution of summed \ion{Mg}{I} b EWs when integrating over the x-axis. Two peaks can be clearly seen, corresponding to the dwarf population (peak near $\Sigma EW_{\mathrm{\ion{Mg}{I}}}=5$) and the giant population (lower peak near $\Sigma EW_{\mathrm{\ion{Mg}{I}}}=1$).} \label{fig:DGS_dist} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig3.pdf} \end{center} \caption{Distribution of \ion{Ca}{II}\ EW-inferred [Fe/H] for all observed targets, after the removal of targets with significant EDR3 parallax measurements and low signal-to-noise spectra. These values only reflect genuine [Fe/H] for stars at the distance of Pal 5. The dashed line indicates the measured [Fe/H] of Pal 5 ($-1.56$ dex) from high resolution spectroscopy.} \label{fig:FEH} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig4.pdf} \end{center} \caption{Diagnostic plots of the velocity measurements used in our membership assignment. Stars with $P_{mem}\geq0.5$ are colour-coded according to field location, as in Fig. \ref{fig:obs_fields}, while stars with $P_{mem}<0.5$ are shown in grey. Left: Proper motion distribution, with the dashed ring indicating the 2 mas yr$^{-1}$ boundary used for final sample definition. The open grey circles indicate the expected proper motion of Sgr stars from the model presented in \citet{2021MNRAS.501.2279V}, which lie outside the dashed circle. Right: Radial velocity histogram with the vertical dashed lines indicating our selected velocity range of $-80$ to $-40$ km s$^{-1}$.
The solid arrow shows the radial velocity of Pal 5 at $-57.4$ km s$^{-1}$. Also shown by the grey unfilled histogram is the expected velocity distribution of Sgr stars from the model presented in \citet{2021MNRAS.501.2279V}, which all lie outside our velocity range.} \label{fig:kin_select} \end{figure*} \subsection{Incorporating Kinematics for Final Sample Definition}\label{kin} The PM of Pal 5 has been measured repeatedly using Gaia DR2 and EDR3 data \citep[e.g.,][]{2019MNRAS.484.2832V,2021MNRAS.505.5978V}. However, the area of interest in this study is located outside the main body of the cluster, and any PM filtering for our sample needs to account for how the PMs of stars change as a function of position along the stream. This has recently been examined by \citet{2019AJ....158..223P}, who find that while the PMs of stars do vary along the length of the stream, they do not change substantially over the regions probed in this study. As a result, we proceed with stars that lie within 2 mas yr$^{-1}$ of the main-body PM of Pal 5\footnote{$\mu^{*}_{\alpha}=\mu_{\alpha}\cos{\delta}$}: $(\mu^{*}_{\alpha},\mu_{\delta})=(-2.75,-2.65)$ mas yr$^{-1}$ \citep{2019MNRAS.484.2832V}. This is more conservative than the 3 mas yr$^{-1}$ threshold used by \citet{2020MNRAS.493.4978S}, which is large enough to include contamination from the Sgr stream. Indeed, using the model of \cite{2021MNRAS.501.2279V} in the region $223\degr<\alpha<229.5\degr$ and $-20\degr<\delta< 10\degr$, we find that the Sgr stream in this direction has a PM of $(\mu^{*}_{\alpha},\mu_{\delta})\approx(-1.1,-0.5)$ mas yr$^{-1}$ (see Fig. \ref{fig:kin_select}). Our PM selection thus provides a cleaner sample of stars in the Pal 5 leading tail but may suffer from minor incompleteness if very energetic stars are present.
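As a concrete sketch of this cut (using the Pal 5 and Sgr PM values quoted above; the helper function itself is illustrative):

```python
import numpy as np

# Main-body PM of Pal 5, (mu_alpha*, mu_delta), in mas/yr (Vasiliev 2019).
PM_PAL5 = (-2.75, -2.65)

def passes_pm_cut(pmra, pmdec, radius=2.0):
    """Keep stars whose PM lies within `radius` mas/yr of the Pal 5
    main-body value."""
    return np.hypot(pmra - PM_PAL5[0], pmdec - PM_PAL5[1]) < radius

# The Sgr stream PM along this sightline, ~(-1.1, -0.5) mas/yr, lies
# ~2.7 mas/yr from the cluster value: excluded by a 2 mas/yr cut, but
# it would survive the looser 3 mas/yr threshold.
```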
Disentangling such a hot population from the contamination from the Sgr stream and the Milky Way field will be very difficult based on currently-available data but may be possible with either improved distances and/or detailed chemistry (e.g. elemental abundance ratios). The radial velocity distribution of our high-likelihood Pal 5 members is also shown in Fig. \ref{fig:kin_select}. This shows an obvious grouping of stars between $-80$ and $-40$ km s$^{-1}$, but with many outliers. The \cite{2021MNRAS.501.2279V} Sgr stream model predicts that stream stars along this sightline will contribute at radial velocities $\geq -20$ km s$^{-1}$ and so do not pose an issue. As previous kinematic studies have found a radial velocity gradient of $\sim -1$ km s$^{-1}$ deg$^{-1}$ along the stream (e.g. K15, Ib17), we find it reasonable to expect that Pal 5 stream members will be contained within this velocity range across the radial extent of our fields. When our constraints on PMs and radial velocities are incorporated into our sample definition, this removes all but 15 of the high-probability Pal 5 stars. To this sample, we also add one star located on the HB (i.e. with $(g-i)_{0}\le 0.6$ mag in Fig. \ref{fig1}). The HB was not considered as part of our photometric selection described in Sec. \ref{subsubsec:photsel}, but we expect there to be very little field contamination in this part of the CMD. For this population, we have only considered the kinematic cuts in the PM and radial velocity, and find that these are satisfied by a single star in the outermost field. In total, we present these 16 stars as {\it bona fide} Pal 5 stream members and their properties are listed in Table \ref{tab:p5stars}. \begin{table*} \centering \caption{List of {\it bona fide} Pal 5 stream members from our AAT sample.
The naming convention indicates which field the star belongs to, followed by its designated number.} \label{tab:p5stars} \begin{tabular}{@{}ccccccc@{}} \hline \hline Star&R.A.&Dec&$V_R$&$(g-i)_{0}$&$i_{0}$&[Fe/H]\\ (Field-number)&(deg, J2000)&(deg, J2000)& ($\rm{km}\,\rm{s}^{-1}$)&\multicolumn{2}{c}{(mag)}&(dex)\\ \hline 1-448$^{*}$&228.419&-0.708&$-56.87\pm1.98$&0.83&16.3&$0.01\pm0.13$\\ 1-396&228.539&-0.878&$-65.50\pm3.91$&0.69&17.8&$-3.04\pm1.04$\\ 1-272$^{*}$&228.118&-1.214&$-56.09\pm1.87$&0.68&17.7&$-0.59\pm0.41$\\ 1-265&227.972&-1.245&$-51.52\pm2.13$&0.73&16.7&$-1.59\pm0.16$\\ 1-190$^{*}$&227.663&-1.446&$-58.37\pm0.96$&0.85&16.2&$-0.99\pm0.11$\\ 1-225$^{*}$&227.840&-1.484&$-52.00\pm1.44$&0.77&16.9&$-1.42\pm0.14$\\ 1-135&227.327&-1.637&$-46.83\pm1.55$&0.81&16.7&$-0.82\pm0.12$\\ 1-113&227.168&-1.760&$-40.43\pm4.36$&0.70&17.3&$-0.72\pm0.15$\\ 2-346&224.939&-4.332&$-59.97\pm1.74$&0.69&17.0&$-1.22\pm0.1$\\ 2-312&224.645&-4.525&$-62.05\pm1.63$&0.72&17.0&$-1.06\pm0.09$\\ 2-253&223.607&-4.805&$-79.72\pm2.04$&0.78&16.9&$-1.28\pm0.11$\\ 2-214&225.323&-4.819&$-67.20\pm1.17$&0.78&16.4&$-1.99\pm0.07$\\ 2-550$^{\dagger}$&224.349&-5.052&$-49.54\pm2.92$&-0.35&17.4&$-1.96\pm0.1$\\ 2-132&224.982&-5.087&$-48.56\pm3.7$&0.62&17.8&$-3.22\pm0.16$\\ 2-069&224.783&-5.326&$-70.84\pm0.28$&0.89&15.7&$-1.47\pm0.04$\\ 2-033&224.177&-5.501&$-63.32\pm1.15$&0.78&16.5&$-1.15\pm0.08$\\ \hline \multicolumn{7}{l}{$^{*}$ These stars are in common with Ib17; the velocity presented here is the weighted-average value.}\\ \multicolumn{7}{l}{$^{\dagger}$ Horizontal branch star.}\\ \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Fig5.pdf} \end{center} \caption{Our {\it bona fide} Pal 5 stars. Stars are colour-coded by which field they are located in, as in Fig. \ref{fig:obs_fields}. Top row: On-sky distribution of our member stars.
Stars in the inner field closely follow the stream track, whereas those in the outermost field display a broader distribution on the sky. Bottom left: CMD of member stars. Also plotted is the Dartmouth stellar isochrone used in the photometric selection described in section \ref{subsubsec:photsel}. Bottom right: PM distribution of member stars.} \label{fig:evidence} \end{figure*} \section{Results} In Fig. \ref{fig:evidence}, we show the spatial distribution, CMD and PM distribution of our final sample of Pal 5 stream stars. We see that eight of these stars lie in Field 1, the closest to the cluster centre, and they tightly follow the stream track. The remaining eight stars lie in the outermost field and appear to show a significant spread in position on the sky, suggestive of stream fanning; these stars lie further along the leading tail than has previously been explored spectroscopically. To better quantify this behaviour, and also examine how our new measurements compare to the known gradients along the stream, we combine our radial velocities with literature measurements to revisit the kinematics of the Pal 5 stream along its entire observed extent. Before doing so, we note that of the eight stars we have confirmed in Field 1, four have been previously identified by Ib17: stars 1-190, 1-225, 1-272 and 1-448. We compare the radial velocities that we have derived for these stars with those found by Ib17 and find an offset of $V_R-V_{Ib17}=0.4 \pm 1.9$ km~s$^{-1}$. As Ib17's measurements are based on higher signal-to-noise spectra than ours, we adopt a weighted average of Ib17's and our measured velocities for these stars for the rest of the analysis (see Table \ref{tab:p5_fullist}). \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig6.pdf} \end{center} \caption{Left: EDR3 PM distribution of Ib17 (green), \citet{2016ApJ...823..157I} (light blue) and \citet{2009AJ....137.3378O} (pink) {\it bona fide} Pal 5 members.
Our sample is shown as black points. The black dashed ring displays a 2 mas yr$^{-1}$ radius around the measured Pal 5 PM from \citet{2021MNRAS.505.5978V}. It is clear that significant contamination exists in these earlier samples. Right: Spatial distribution of our combined sample of 109 {\it bona fide} stream members with the stream track of \citet{2016ApJ...819....1I} and \citet{2017ApJ...842..120I} underlaid. The stars closely follow the stream track until the edge of the leading tail, near ($\alpha, \delta$)$\sim (225,-5)$, where the fanning reported in this paper is observed. The dashed circle indicates the location of Pal~5 itself.} \label{fig:all_spat_pm} \end{figure*} With the astrometric measurements provided by EDR3 now available, we are also able to clean previous kinematic samples of Pal~5 stars from contaminants. We have gathered the radial velocities presented in \cite{2009AJ....137.3378O} (18 stars), \cite{2016ApJ...823..157I} (130 stars) and Ib17 (130 stars), noting that the latter includes all stars that were previously reported in K15 and 13 stars from \cite{2009AJ....137.3378O}. We subject these samples to the same astrometric and kinematic selections we performed on our own sample; that is, we remove stars with a resolved parallax $(\omega - 3\sigma_{\omega})>0$ mas, radial velocity uncertainties $\geq$ 5 km s$^{-1}$, radial velocities outside the range $-80$ to $-40$ km s$^{-1}$ and PMs more than 2 mas yr$^{-1}$ from the main-body PM of the Pal 5 cluster. As seen in the left panel of Fig. \ref{fig:all_spat_pm}, there is significant contamination in these earlier kinematic samples of Pal 5 stream stars. We find that out of the 130 stars from Ib17 \citep[including the stars in common with our sample and][]{2009AJ....137.3378O}, only 91 (75 unique, 16 in common with other data sets) satisfy the astrometric and kinematic selection criteria adopted in this paper.
For the \cite{2016ApJ...823..157I} sample, only 2 out of the 130 stars are retained, and for the \cite{2009AJ....137.3378O} sample, 16 out of 18 stars are retained (of which 12 are present in Ib17). Accounting for stars in common and unique to each sample, we present four, two and 75 unique stars (81 unique stars in total) as Pal 5 system members from the studies of \cite{2009AJ....137.3378O}, \cite{2016ApJ...823..157I} and Ib17 respectively. Adding in the 12 stars in common between \cite{2009AJ....137.3378O} and Ib17, and our 16 stars (of which four are in common with Ib17), we present an extended sample of 109 {\it bona fide} members of the Pal 5 cluster and its tidal tails, spread across $\sim$22$\degr$ on the sky. The properties of these stars, including their sky positions, radial velocities, PMs and PS1 photometry, are presented in Table \ref{tab:p5_fullist}, while their spatial distribution across the sky is shown in the right panel of Fig. \ref{fig:all_spat_pm}. The next stage in our analysis is to transform the PMs and spatial coordinates of this extended sample to a spherical coordinate system aligned with the stream. For this, we adopt the transformation provided by \cite{2020ApJ...889...70B}, in which the cluster origin lies at $(\phi_{1}, \phi_{2})=(0,0)\degr$ and the leading arm of the tidal tail extends in the positive $\phi_{1}$ direction. We fit a second-order polynomial to these data to calculate the best-fitting track of the trailing and leading tails. We find that the trailing tail can be described as: \begin{equation} \phi_{2,trailing}(\phi_1)=0.0108\phi^2_{1}+0.0394\phi_{1}-0.2768 \end{equation} \noindent while the leading tail follows: \begin{equation} \phi_{2,leading}(\phi_1)=0.0053\phi^2_{1}+0.2043\phi_{1}+0.0163 \end{equation} \noindent in stream coordinates (see the top left panel of Fig. \ref{fig:kim_fig}). In constructing these fits, we have excluded all stars that lie within the Jacobi radius to avoid the cluster influencing the track.
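The fitted tracks can be evaluated directly; a short sketch using the coefficients above (the helper for the track offset is our own convenience function):

```python
import numpy as np

def phi2_trailing(phi1):
    """Best-fitting quadratic track of the trailing tail (degrees)."""
    return 0.0108 * phi1**2 + 0.0394 * phi1 - 0.2768

def phi2_leading(phi1):
    """Best-fitting quadratic track of the leading tail (degrees)."""
    return 0.0053 * phi1**2 + 0.2043 * phi1 + 0.0163

def delta_phi2(phi1, phi2):
    """Offset of a star from the relevant track; phi_1 > 0 is the
    leading tail in the adopted stream-coordinate convention."""
    phi1 = np.asarray(phi1, dtype=float)
    track = np.where(phi1 >= 0, phi2_leading(phi1), phi2_trailing(phi1))
    return np.asarray(phi2, dtype=float) - track
```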
To characterise the stream width along its length, we calculate for each star the difference between its coordinate $\phi_2$ and the stream track at its $\phi_1$ position, which we denote $\Delta\phi_2$. We then fit the mean $\Delta\phi_2$ (in $2.5\degr$ bins as a function of $\phi_1$) with a second-order polynomial to estimate the behaviour of the width along the stream (Fig. \ref{fig:fanning}). We see an increase in stream width at $\phi_{1}>5\degr$, reaching approximately 0.5$\degr$ at $\phi_{1}=7\degr$, confirming the visual impression of fanning in the leading tail seen in Fig. \ref{fig:evidence}. This also agrees well with the photometric detection of fanning in this region presented by \cite{2020ApJ...889...70B} (see their Fig. 3). The stream width becomes smaller, as expected, as we move along the stream towards the trailing tail, and up until $\phi_{1}= -10 \degr$ it can be characterised by a width of $\approx 0.2\degr$. The polynomial fit suggests that the stream may become wider again at the observed edge of the trailing tail, but there are too few stars in this region for this to be meaningful. \begin{table*} \centering \caption{List of Pal 5 members compiled in this study. These stars are drawn from the new data presented in this work (K22) as well as those from the previous studies of \citet{2009AJ....137.3378O} (O09), \citet{2016ApJ...823..157I} (Is16) and Ib17 that are retained after astrometric and kinematic cleaning. We provide sky positions, PMs and their uncertainties, radial velocities and their uncertainties, and the $g$-band magnitudes and $(g-i)$ colours from PS1. The last column indicates the origin of their first identification.
Only the top five rows are shown here; the remainder is available online.} \label{tab:p5_fullist} \begin{tabular}{@{}ccccccccccc@{}} \hline \hline \multicolumn{1}{c}{R.A.} & \multicolumn{1}{c}{Dec.} & \multicolumn{1}{c}{$\mu^{*}_{\alpha}$} & \multicolumn{1}{c}{$\sigma_{\mu^{*}_{\alpha}}$} & \multicolumn{1}{c}{$\mu_{\delta}$} & \multicolumn{1}{c}{$\sigma_{\mu_{\delta}}$} & \multicolumn{1}{c}{$V_R$} & \multicolumn{1}{c}{$\sigma_{V_R}$} & \multicolumn{1}{c}{$g_{PS1}$} & \multicolumn{1}{c}{$(g-i)_{PS1}$} & \multicolumn{1}{c}{Source} \\ \multicolumn{2}{c}{(deg, J2000)} & \multicolumn{2}{c}{(mas yr$^{-1}$)} & \multicolumn{2}{c}{(mas yr$^{-1}$)} & \multicolumn{2}{c}{(km s$^{-1}$)} & \multicolumn{2}{c}{(mag)}&\\ \hline 240.342&6.036&-2.89&0.03&-2.18&0.02&-47.35&0.78&16.18&1.16&Ib17\\ 237.723&5.027&-3.35&0.12&-3.50&0.10&-50.13&2.50&18.47&0.83&Ib17\\ 236.502&4.285&-2.74&0.13&-2.38&0.11&-53.06&2.87&18.66&0.93&Ib17\\ 235.844&4.241&-2.84&0.11&-2.02&0.11&-50.22&3.22&18.04&0.87&Ib17\\ 236.025&4.193&-2.74&0.10&-2.49&0.09&-48.50&2.16&18.16&0.84&Ib17\\ \hline \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Fig7.pdf} \end{center} \caption{Inferred trends in various parameters along the stream, using the fit parameters from Table \ref{tab:kin_models}, displayed as a function of the stream coordinate $\phi_{1}$. Top left: sky positions in the Pal 5 stream coordinate frame $(\phi_{1},\phi_{2})$. Top right: radial velocities along the stream. Bottom left: $\mu_{\phi_{1}}$ along the stream. Bottom right: $\mu_{\phi_{2}}$ along the stream. The linear and quadratic model fits to the radial velocities and proper motions are shown in green and blue respectively, with the shaded regions indicating the 1$\sigma$ uncertainties on the corresponding model. The points are the same as in Fig.
\ref{fig:all_spat_pm} (left), and the dashed black line indicates the quadratic fit to the RRL PMs from \citet{2019AJ....158..223P}.} \label{fig:kim_fig} \end{figure*} With our extended Pal 5 sample, we can also revisit the radial velocity gradient, now across $\approx22\degr$ on the sky compared to the 16$\degr$--18$\degr$ of the \citet{2016ApJ...823..157I} and Ib17 spectroscopic samples. Following Ib17, we have employed a ``conservative formulation'' of a Bayesian automatic outlier rejection algorithm to fit the data \citep[see Chapter 8.3 of][]{Sivia:2006ty}. Using this technique, we explored both a linear ($A+ B \phi_{1}$) and a quadratic ($A+B \phi_{1} + C\phi_{1}^2$) fit to the radial velocities and to each component of the PMs as a function of position along the stream, $\phi_1$. The parameters of these fits are given in Table \ref{tab:kin_models} and the models, along with their associated $1\sigma$ uncertainties, are shown in Fig. \ref{fig:kim_fig}. For the linear model, we find a radial velocity gradient of $-0.81 \pm 0.14$ km s$^{-1}$ deg$^{-1}$ along the stream, which is consistent within the uncertainties with the values found in other work where linear models have been adopted \citep[e.g.,][K15]{2009AJ....137.3378O,2016ApJ...823..157I}. The radial velocity gradient of Is16, 0.7 km s$^{-1}$ deg$^{-1}$, was calculated with respect to the standard gnomonic coordinate $\xi$, a tangent-plane projection about the cluster centre. When we calculate our radial velocity gradient in that coordinate system, we find a consistent value of $0.93 \pm 0.2$ km s$^{-1}$ deg$^{-1}$. No previous analyses have explored a quadratic fit to the stream radial velocity distribution along the tails. We find that the quadratic term is consistent with zero within the uncertainties ($-0.07\pm 0.04$ km s$^{-1}$ deg$^{-2}$), while the linear term ($-1.34\pm0.28$ km s$^{-1}$ deg$^{-1}$) is similar to the aforementioned linear gradient within the uncertainties.
We conclude that a linear model of the radial velocities remains the best description over the angular extent of the stream probed here. At $\phi_{1}=0$ (i.e. the location of the cluster), we find $V_{R}=-55.67\pm0.28$ km s$^{-1}$ with the linear model and $V_{R}=-55.50 \pm 0.28$ km s$^{-1}$ with the quadratic model, both similar to Ib17, though slightly different from the radial velocities of $-57.4\pm0.4$ km s$^{-1}$ from K15 and $-58.6\pm0.2$ km s$^{-1}$ from \citet{2019MNRAS.482.5138B}. We note that fixing the constant term in the linear and quadratic models to this velocity does not affect the results significantly. We perform similar fits to the PM components, $\mu^*_{\phi_{1}}$\footnote{$\mu^*_{\phi_{1}}=\mu_{\phi_{1}}\cos{\phi_{2}}$} and $\mu_{\phi_{2}}$. In both instances, there is a small linear gradient: $0.03\pm0.01$ mas yr$^{-1}$ deg$^{-1}$ and $0.04\pm0.004$ mas yr$^{-1}$ deg$^{-1}$ for $\mu^{*}_{\phi_{1}}$ and $\mu_{\phi_{2}}$ respectively. Further, the quadratic terms in the fits to $\mu^{*}_{\phi_{1}}$ and $\mu_{\phi_{2}}$ are both consistent with zero within 1$\sigma$. \cite{2019AJ....158..223P} have recently studied the PMs of {\it bona fide} RRL candidates in Pal 5 as a function of $\phi_{1}$ using a quadratic model. For the proper motion of the Pal 5 cluster, we find $(\mu^{*}_{\phi_{1}},\mu_{\phi_{2}})=(3.77\pm0.02,0.73\pm0.02)$ mas yr$^{-1}$, which is in excellent agreement with their value, as well as with that of \citet{2021MNRAS.505.5978V} when transformed back to equatorial coordinates. The PM trends we find along the length of the stream are also in good agreement with their results (see the lower panels of Fig. \ref{fig:kim_fig}), which is very encouraging given that these trends have been fit to completely independent samples of stars.
However, we note that while our linear and quadratic fits to $\mu_{\phi_{2}}$ are in very good agreement with theirs at locations $-10^{\circ} \la \phi_{1} \la 5^{\circ}$, our fits rapidly diverge from their quadratic model at the extreme ends of the stream. This discrepancy is likely due to the relative lack of stars in these outermost reaches in the two samples: \cite{2019AJ....158..223P} fit their model to 27 RRLs along the cluster/stream, with only a small number of stars guiding the fit at distances $|\phi_{1}|>10^{\circ}$. We, too, suffer from a small number of stars at those locations along the stream. Clearly, it is essential to identify further {\it bona fide} members at such large distances from the progenitor to provide more stringent constraints on the 3D kinematics, as well as to establish whether the fanning uncovered here continues further along the leading tail. Finally, we can also re-examine the radial velocity dispersion within the tails using our combined dataset. To do this, we define $\Delta V_R$ to be the difference between the observed radial velocity of a given star and the expected radial velocity at the star's location from the model fits in Table \ref{tab:kin_models}. We then fit the resulting distribution of $\Delta V_R$ with a normal distribution of zero mean, and report the 1$\sigma$ value as the velocity dispersion, $\sigma$. The linear and quadratic model fits are found to show similar radial velocity dispersions within the debris, $\sigma_{V}=2.17 \pm 0.27$ km s$^{-1}$ and $2.50 \pm 0.24$ km s$^{-1}$ respectively, which is consistent with previously published values (e.g. K15). We explored whether there is any trend in the radial velocity dispersion along the tails and found that it varies very little, increasing by $\lesssim 1$ km s$^{-1}$ into the fanned region of the leading tail.
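The dispersion estimate can be sketched as follows; the RMS estimator and bootstrap uncertainty below are our own minimal stand-ins for the fitting procedure described above, and do not deconvolve the individual measurement errors:

```python
import numpy as np

def dispersion(residuals):
    """Maximum-likelihood width of a zero-mean normal distribution,
    i.e. the RMS of the residuals Delta V_R about the fitted trend."""
    residuals = np.asarray(residuals, dtype=float)
    return np.sqrt(np.mean(residuals**2))

def dispersion_err(residuals, n_boot=1000, seed=42):
    """Bootstrap uncertainty on the dispersion (a simple stand-in;
    a full treatment would account for measurement errors)."""
    rng = np.random.default_rng(seed)
    residuals = np.asarray(residuals, dtype=float)
    boots = [dispersion(rng.choice(residuals, residuals.size))
             for _ in range(n_boot)]
    return float(np.std(boots))
```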
Further, we also find low dispersions in both proper motion directions: $\sigma_{\mu_{\phi_{1}}} = 0.12 \pm 0.01$ mas yr$^{-1}$ and $\sigma_{\mu_{\phi_{2}}} = 0.16 \pm 0.01$ mas yr$^{-1}$ within the tails, also with no significant variation along them. Hence, the velocity dispersion along all three velocity dimensions is characteristically low and rather constant within the current sample of stars. This is not surprising, as the Pal 5 stream has long been noted to be a kinematically cold structure \citep[e.g.,][]{2009AJ....137.3378O}. \begin{table*} \begin{center} \caption{The coefficients of the linear ($A+ B \phi_{1}$) and quadratic ($A+B \phi_{1} + C\phi_{1}^2$) fits to the radial velocity $V_{R}$, and to the proper motions $\mu^{*}_{\phi_{1}}$ and $\mu_{\phi_{2}}$, as a function of the tangential projection along the stream, $\phi_{1}$. The top rows contain the fit coefficients, while $\sigma$ gives the velocity and PM dispersion about the stream.} \label{tab:kin_models} \begin{tabular}{@{}ccccccc} \hline \hline Parameters&\multicolumn{2}{c}{$V_R$}&\multicolumn{2}{c}{$\mu^{*}_{\phi_{1}}$}&\multicolumn{2}{c}{$\mu_{\phi_{2}}$}\\ Model&linear & quadratic&linear & quadratic&linear & quadratic\\ \hline $A$&$-55.67\pm0.27$&$-55.50\pm0.28$&$3.77\pm0.02$&$3.77\pm0.02$&$0.73\pm0.02$&$0.73\pm0.02$\\ $B$&$-0.81\pm0.14$&$-1.34\pm0.28$&$0.03\pm0.01$&$0.03\pm0.01$&$0.04\pm0.004$&$0.03\pm0.004$\\ $C$&$--$&$-0.07\pm0.04$&$--$&$-0.0004\pm0.0005$&$--$&$-0.0003\pm0.0003$\\ $\sigma$&$2.17\pm0.27$&$2.75\pm0.24$&$0.12\pm0.01$&$0.13\pm0.01$&$0.16\pm0.01$&$0.16\pm0.01$\\ \hline \end{tabular} \end{center} \end{table*} \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{Fig8.pdf} \end{center} \caption{Stream width (blue shaded region) as a function of position along the Pal 5 stream. The black points represent stars in the extended sample. The large increase in width in the leading tail, at $\phi_1 > 5 \degr$, is indicative of the fanning.
The black cross indicates the position of Pal 5 itself.} \label{fig:fanning} \end{figure} \section{Discussion} Prior to this work, spectroscopic studies of Pal 5 member stars had been confined to the densest and most readily identifiable regions of the tidal stream. However, in recent years, photometric studies have shown that there is a faint extension to the leading edge of the stream \citep{2016MNRAS.463.1759B}, and that the stars in this region may actually fan out in angular distance from the nominal stream track \citep{2020ApJ...889...70B}. The outermost field observed in the present study covers this previously-unexplored region and confirms the existence of stars with kinematics and metallicities that make them highly probable {\it bona fide} members of the Pal 5 stream. Stream fanning has been explored from a theoretical standpoint by various authors. \citet{2015ApJ...799...28P} showed how the morphology of the Pal 5 stream on its own could constrain the shape of the Milky Way's dark matter halo. In particular, they argued against the triaxial dark halo potential of \citet{2010ApJ...714..229L} on the basis of the thin, curved appearance of the extent of the Pal 5 stream known at that time. Our confirmation of fanning in a more distant part of the leading tail is intriguing in this context, but needs to be assessed against the fact that no such behaviour is seen along the well-studied trailing tail, as expected in their models. \citet{2016ApJ...824..104P} discuss fanning in the Ophiuchus stream as a result of chaotic orbits caused by a rotating Galactic bar. The modelling work of \citet{2020ApJ...889...70B} shows that a rotating massive bar can also lead to fanning in the leading tail of Pal 5, but that this scenario predicts other effects which are not compatible with the observations.
Much of the recent modelling work on the Pal 5 stream has focused on the effect of other perturbations acting on the tails, seeking explanations for the observed inhomogeneous density structure, the marked length asymmetry of the leading and trailing arms, and ``wiggles'' in the stream track. Specifically, dark matter subhalos piercing the stream have been invoked to explain peaks and gaps in the density distribution \citep[e.g.][]{2012ApJ...760...75C}, and dynamical resonances induced by the presence of the Galactic bar have been exploited to explain the apparent truncation of the leading tail \citep[e.g.,][]{2017MNRAS.470...60E,2017NatAs...1..633P}. As shown by \citet{2020ApJ...889...70B}, there is currently no single scenario that can explain all of the observations to date. The new radial velocities we provide here, as well as our construction of a PM-cleaned catalogue of previously-identified stream members, should prove very useful in future modelling work. Broadly speaking, we find that the radial velocities of the stars identified in the present study are in qualitative agreement with the predictions of the modelling work of \citet{2017MNRAS.470...60E}. However, their work, which considers the evolution of the tails within a smooth static potential as well as in cases where interactions with dark matter clumps, giant molecular clouds and a rotating bar take place, shows that the different models are largely degenerate in their leading-arm radial velocity predictions (see their Figs. 7, 9 and 11). In this context, we point out that the internal structural and dynamical properties of the progenitor cluster may have equally important effects on the morphology and kinematics of the resulting tidal streams. Indeed, \citet{2012MNRAS.420.2700K} showed that fan-like structures naturally form in stellar streams produced by a star cluster on an eccentric orbit within a tidal field.
These features result entirely from the expected dynamical evolution of a collisional stellar system, without the need to invoke any additional external perturbation. Further to this, it has been noted that unusual stream features, such as the ``dog-leg'' feature seen in the stream that wraps NGC 1097 \citep{2015arXiv150403697A}, can be reproduced only when a rotating progenitor is considered. Similarly, some of the features of the Sgr stream appear to confirm the presence of significant angular momentum in Sgr itself \citep{2021ApJ...908..244D}, and the N-body models that connect the massive GC NGC 5139 ($\omega$ Centauri) to its associated tidal stream Fimbulthul \citep{2019NatAs.tmp..258I} require a progenitor with some level of internal rotation. These examples underscore the need for renewed efforts to explore how Pal 5's internal structure could affect the morphology, asymmetry and kinematics of its tidal tails, alongside further work on the role of external perturbations. \section{Conclusions} In this paper, we present the results of a spectroscopic study of the leading tail of the Pal 5 stream. We combine our measurements of line strengths and radial velocities with Gaia EDR3 astrometry and PS1 photometry to derive membership probabilities following a log-likelihood ratio approach. We find 16 stars that we regard as {\it bona fide} members, of which four were previously known; eight lie in a previously unexplored part of the leading tail. The sky locations of these stars confirm the presence of fanning in this part of the stream, as recently suggested by \citet{2020ApJ...889...70B}. We also revisit previous radial velocity studies of the Pal 5 stream and clean them of contaminants using astrometric measurements from Gaia EDR3. Combined with the new measurements presented in this paper, this yields a sample of 109 {\it bona fide} members of the Pal 5 GC and its tidal tails.
We fit the radial velocities and PMs of this sample as a function of the stream coordinate $\phi_1$ with linear and quadratic models. We find a linear radial velocity gradient of $-0.81 \pm 0.14$ km s$^{-1}$ deg$^{-1}$ across the extent of the stream, in keeping with previous studies that probed a more limited angular extent. We also find a stream velocity dispersion of $2.17\pm0.27$ km s$^{-1}$, in agreement with previous results. The quadratic fits across the debris return very similar results. We provide our catalogue in the hopes that it can be used for future modelling work and stress the importance of such work examining the role of both internal and external factors on the stream properties. Most work to date has focused on external factors alone (e.g. dark matter sub-halo impacts, the influence of a rotating bar) but thus far none of these scenarios can explain the entirety of the current set of observations. Ultimately, more observations of this iconic disrupting star cluster system are required. Deep photometry beyond the currently-known stream extent will allow for constraints to be placed on the presence of fanning in the trailing tail, and on whether the leading tail really does peter out in a low surface brightness fan or if it reappears at more southern declinations \citep[e.g.][]{2017NatAs...1..633P}. Further spectroscopic measurements, especially in large areas adjacent to and beyond the currently-known extent of the tails, will also be critical for confirming membership in these extremely contaminated parts of the sky and for mapping out radial velocity and velocity dispersion trends. Fortuitously, Pal 5's equatorial location means that it will be an accessible target for both the upcoming WEAVE \citep{2012SPIE.8446E..0PD} and 4MOST \citep{2019Msngr.175....3D} large multi-object spectrograph surveys, which are set to begin operations within the next 2--3 years. 
\section*{Acknowledgements} This work makes use of the following software packages: \textsc{astropy} \citep{2013A&A...558A..33A,2018AJ....156..123A}, \textsc{dustmaps} \citep{2018JOSS....3..695M}, \textsc{Gala} \citep{gala,adrian_price_whelan_2020_4159870}, \textsc{matplotlib} \citep{Hunter:2007}, \textsc{numpy} \citep{2011CSE....13b..22V}, \textsc{PyAstronomy} \citep{pya}, \textsc{scipy} \citep{2020SciPy-NMeth}, \textsc{SpecUtils} \citep{2020zndo...3718589E}. PBK is grateful for support from a Commonwealth Rutherford Fellowship from the Commonwealth Scholarship Commission in the UK. ALV and PBK acknowledge support from a UKRI Future Leaders Fellowship (MR/S018859/1). Based on data acquired at the Anglo-Australian Telescope under program A/2017A/102 (Opticon 17A/063). We acknowledge the traditional custodians of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 730890. This material reflects only the authors' views and the Commission is not liable for any use that may be made of the information contained therein. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). 
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.\\ The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. \textit{Facility}: AAT (AAOmega) \section*{Data Availability} The list of members of Pal~5 which underpins this article and corresponding figures are available in an extended version of Table 4, provided as online supplementary material. \bibliographystyle{mnras} \section{Introduction} The number of Milky Way (MW) globular clusters (GCs) known to possess extended tidal features is growing. 
Thanks to large, well-calibrated imaging surveys such as the Sloan Digital Sky Survey (SDSS) \citep{2012ApJS..203...21A} and Pan-STARRS1 (PS1) \citep{2016arXiv161205560C}, as well as the more recent astrometric Gaia Space Mission \citep{2018A&A...616A...1G}, tidal features have been found around a few tens of GCs in the MW halo \citep[e.g.,][Kuzma et al., in prep]{2020MNRAS.495.2222S}, ranging from the massive inner-halo cluster NGC 5139 \citep{2019NatAs.tmp..258I,2021MNRAS.507.1127K} to the low-mass outer-halo cluster Palomar 1 \citep{2010MNRAS.408L..66N}. The most emphatic display of tidal features in the MW, however, belongs to Palomar 5 \citep[Pal 5,][]{2001ApJ...548L.165O,2003AJ....126.2385O}. The tidal tails of Pal 5 have now been traced across more than 20$\degr$ on the sky \citep{2006ApJ...641L..37G, 2016MNRAS.463.1759B, 2020ApJ...889...70B} and it remains one of the very few stellar streams in the halo with a known progenitor. As the most striking example of GC disruption in action, the tidal tails of Pal 5 have been the subject of many studies over the last two decades, many of which have focused on how their properties can constrain the Galactic potential \citep[e.g.,][]{2004AJ....127.2753D,2012A&A...546L...7M,2014ApJ...795...94B,2015ApJ...803...80K,2015ApJ...799...28P, 2016ApJ...833...31B}. The present-day stellar mass of Pal 5 plus its tidal tails has been estimated to be $1.2$--$9\times 10^4$ M$_{\odot}$, with 4300 M$_{\odot}$ of that retained in the main body \citep{2017ApJ...842..120I, 2019AJ....158..223P}. This implies that roughly half of the total mass of the system is now populating the tails. The tails also show some level of substructure, including gaps and peaks \citep[e.g.][]{2012ApJ...760...75C, 2016ApJ...819....1I, 2017MNRAS.470...60E}, ``wiggles" and ``fanning" \citep{2020ApJ...889...70B}. 
While the reality and origin of fine structure along the Pal 5 tails have often been debated \citep[e.g.,][]{2016MNRAS.460.2711T, 2016ApJ...819....1I}, most studies agree on the fact that there is a gross asymmetry in the length of the leading and trailing tails. Early mapping work was limited by the SDSS footprint but \cite{2016MNRAS.463.1759B} were able to use the more extensive coverage of the PS1 survey to show that the leading tail could only be traced photometrically for about half the length ($\approx 8\degr$) of the trailing tail ($\approx 16\degr$). In particular, they showed that, at the photometric depth of PS1, the leading tail could be followed to $\delta\approx-6\degr$ on the sky before ending abruptly. Using deeper photometry from the DECam Legacy Survey \citep[DECaLS;][]{2019AJ....157..168D}, \citet{2020ApJ...889...70B} demonstrated that leading tail debris could be detected slightly beyond this point, but that it was spread out in a low-density fan. Such fanning could indicate triaxiality in the potential \citep{2015ApJ...799...28P} or the effect of a rotating bar \citep[see][]{2017NatAs...1..633P,2020ApJ...889...70B}. On the other hand, \cite{2020MNRAS.493.4978S} analyse Gaia DR2 data and suggest that the leading tail can be traced nearly as far as the trailing tail in that dataset. However, this analysis is based on main-sequence turn-off stars with magnitudes at the faint limit of the survey, and on the assumption of a cluster metallicity of [Fe/H]$=-3.1$ dex, compared to the measured value of [Fe/H]$=-1.5$ dex \citep{2017A&A...601A..41K}. A further complication is that, at these faint magnitudes, there is also contamination along the sightline to the leading tail from the Sagittarius (Sgr) stream \citep{2016ApJ...819....1I, 2020ApJ...889...70B}. Spectroscopic studies of the Pal 5 stream have thus far been more limited than photometric ones. 
The first kinematic exploration of the Pal 5 tails was performed by \cite{2009AJ....137.3378O}, who explored an 8.5$^{\circ}$ extent of the tidal tails. Using 17 confirmed red giant branch (RGB) members, they measured a linear radial velocity gradient of 1.0 km s$^{-1}$ deg$^{-1}$ along the stream, as well as a small intrinsic velocity dispersion of $\sim2$ km s$^{-1}$. \citet[][hereafter K15]{2015MNRAS.446.3297K}, \citet{2016ApJ...823..157I} and \citet[][hereafter Ib17]{2017ApJ...842..120I} conducted more extensive spectroscopic searches, covering a $\approx 20^{\circ}$ region along the stream, and confirmed this mild radial velocity gradient and low dispersion. \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig1.pdf} \end{center} \caption{Top: Location of the observed targets overplotted on a density map of EDR3 stars. The dashed line indicates the stream track determined by \citet{2017ApJ...842..120I}. Pal 5 resides at (229$\degr$, 0$\degr$) and the dashed circle indicates the Jacobi radius of 11 arcmin \citep{2019AJ....158..223P}. Bottom: PS1 photometry (left) and Gaia EDR3 proper motions (right) of our stream targets overlaid on those of stars lying within the Jacobi radius of Pal 5. The left panel includes a Dartmouth stellar isochrone \citep{2008ApJS..178...89D} of age=13.65 Gyr and [Fe/H]$=-1.56$ dex. In the right panel, the proper motion of the main body of Pal 5 is $(\mu^{*}_{\alpha},\mu_{\delta})= (-2.76,-2.65)~\rm{mas}~ \rm{yr}^{-1}$.} \label{fig1}\label{fig:obs_fields} \end{figure*} While none of these radial velocity studies have found evidence for fanning in the tails, they have not probed the extreme ends of the tails where such behaviour might be expected. \cite{2019AJ....158..223P} have recently explored kinematics along the Pal 5 stream using RR Lyrae (RRL) stars that have Gaia DR2 proper motions (PMs). 
Intriguingly, they find a few RRLs with high probability of stream membership to be considerably offset from the stream track, including two stars in the region where \citet{2020ApJ...889...70B} find evidence for fanning in their star count map. No radial velocities are available for these stars, however. \begin{table} \centering \caption{List of observations.} \label{tab:obslist} \begin{tabular}{@{}ccc@{}} \hline \hline & Field 1& Field 2\\ \hline R.A. (J2000)&$227\overset{\circ}{.}946$&$224\overset{\circ}{.}425$\\ Dec. (J2000)&$-1\overset{\circ}{.}266$&$-4\overset{\circ}{.}690$\\ Date-obs&25/02/2017&24/02/2017\\ Exp. Time&$3\times1200$s&$3\times1800$s\\ Avg. Seeing&$2.6''$&$1.5''$\\ Ang. Distance$^{\dagger}$&$1.75^{\circ}$&$6.6^{\circ}$\\ \hline \multicolumn{3}{@{}l@{}}{$^{\dagger}$ Angular distance from Pal 5.} \end{tabular} \end{table} In this paper, we present a new kinematic study of the leading Pal 5 tidal tail. This work probes greater angular distances from the cluster than previous radial velocity studies and includes the region where the stream has been suggested to fan out based on deep photometry. In Section 2, we discuss the observations performed and the probabilistic methods we have employed to identify stream members. Section 3 presents our results and we discuss our findings in Section 4. We present our conclusions in Section 5. \section{The Data} \subsection{Observations \& Target Selection} AAOmega is a multi-fibre, dual-beam spectrograph that is mounted on the 3.9m Anglo-Australian Telescope (AAT) at Siding Spring Observatory \citep{2006SPIE.6269E..0GS}. When coupled to the Two Degree Field (2dF) fibre positioning system \citep{2002MNRAS.333..279L}, it provides 392 science fibres that can be configured across a 2$\degr$ diameter circular field. We used AAOmega+2dF with a dichroic centred at 5700 \r{A} to split the incoming light down the red and blue arms, which held the 1700D and 580V gratings respectively. 
The 1700D grating covers the wavelength range $\sim$8400--8880 \r{A}, which includes the \ion{Ca}{II}\ triplet absorption lines at 8498 \r{A}, 8542 \r{A} and 8662 \r{A}, and has a resolution of R=10000. In the blue, the 580V grating covers the range $\sim$3800--5800 \r{A} with a resolution of R=1300 and covers the \ion{Mg}{I} triplet lines (also known as Mg $b$) at 5167 \r{A}, 5173 \r{A} and 5184 \r{A}. As part of Opticon proposal 17A/063 (PI Ferguson), two fields on the Pal 5 stream were observed on the 24th and 25th of February 2017 under dark conditions and mostly clear skies, with 354 and 352 stars targeted per field on the respective nights. Arc spectra and quartz lamp flat-fields were obtained before and after each set of science observations and a series of bias exposures were taken at the start of each night. Each target field had three sets of observations, with the field closest to the main body of Pal 5 having individual exposures of 1200s while the outermost field had 1800s exposures. Both fields were chosen to lie along the leading tail of the Pal 5 stream as traced by \cite{2016MNRAS.463.1759B}. Field~1 is located 1.75$^{\circ}$ from the centre of the cluster, while Field~2 is at the furthest extent mapped to date, 6.6$^{\circ}$ away, in a region that has not previously been studied spectroscopically. Fig. \ref{fig:obs_fields} shows the sky positions of our fields relative to the main body of Pal 5 and the observations are summarised in Table \ref{tab:obslist}. The target selection was based on the PS1 DR1 photometry \citep{2016arXiv161205560C}, with the primary consideration being the location of stars in colour-magnitude space with respect to the expected locus for Pal~5. In Fig. \ref{fig:obs_fields}, we show a colour-magnitude diagram (CMD) of the stars which lie within the Jacobi radius of Pal~5, calculated to be 11 arcmin by \citet{2019AJ....158..223P}, on which our spectroscopic targets are overlaid. 
As can be seen, our targets are fairly bright stars ({\it i$_{\rm PS1} \leq 18$}) which largely lie on the upper RGB and blue horizontal branch (HB). As our original target selection was performed before the release of Gaia DR2 astrometric data, it is reasonable to expect a modest to significant level of field contamination in our sample. This is confirmed in Fig. \ref{fig:obs_fields}, which shows the Gaia EDR3 \citep[EDR3;][]{2016A&A...595A...1G,2020arXiv201201533G,2020arXiv201203380L} PMs of our targets overlaid on those of stars within the Jacobi radius of Pal 5. The astrometric data complement the radial velocities and [Fe/H] measurements that we will present in this paper, and in Sec. \ref{kin} we will detail how we combine all this information to isolate a clean sample of Pal 5 stars. \subsection{Reduction \& Processing} We began by reducing the spectra with the 2dF Data Reduction (2\textsc{dfdr}\footnote{\url{http://www.aao.gov.au/2df/aaomega/aaomega_2dfdr.html}}) software package, using the default settings for both gratings. This performed the standard processes of debiasing, flat-fielding, wavelength calibration, extraction and sky subtraction using the designated sky fibres. At the end of this process, we removed any cosmic ray residuals by median-combining the spectra for each star. In the region of the \ion{Ca}{II}\ triplet, the signal-to-noise of our targets ranges from $\sim$2--30 per pixel in Field 1 and $\sim$5--50 per pixel for Field 2. Due to their superior signal-to-noise and spectral resolution, we use these red arm spectra for the bulk of our analysis. To determine the radial velocities (RVs) of our targets, we used the Python package \textsc{PyAstronomy}\footnote{\url{https://github.com/sczesla/PyAstronomy}} \citep{pya}. The \texttt{crosscorrRV} routine takes a target spectrum and cross-correlates it with a template spectrum, returning the shift as a velocity measurement. 
The template used was a synthetic spectrum containing the \ion{Ca}{II}\ triplet lines, broadened by a Gaussian to the resolution of the 1700D grating using the \texttt{instrBroadGaussFast} routine. We added random noise to each pixel in the target spectra based on the variance of the flux in that pixel \citep[e.g.,][]{2018MNRAS.477.4565S}, and subsequently performed the cross-correlation across the wavelength interval 8450--8700 \r{A}, a region mostly bereft of sky residuals. This process was repeated 100 times per star to produce a distribution of measured radial velocities. We fit a Gaussian to the resulting distribution and present the mean and standard deviation as the measured radial velocity and associated 1$\sigma$ uncertainty. Lastly, we corrected for barycentric motion using the routine \texttt{helcorr}, and from now on we refer to these heliocentric radial velocities as $V_{R}$. We also use the \ion{Ca}{II}\ triplet lines to determine stellar metallicities through the well-established method based on their equivalent widths (EWs) \citep[e.g.,][]{1991AJ....101.1329A,2010A&A...513A..34S}. Using the \texttt{equivalent\_width} routine in the \textsc{specutils}\footnote{\url{https://specutils.readthedocs.io/en/stable/}} Python package, we measured the EWs of all three of the \ion{Ca}{II}\ triplet lines on normalised spectra that had been wavelength-shifted to zero heliocentric velocity. Specifically, we used this routine to fit a Gaussian profile in 10 \r{A} bandpasses centred on each \ion{Ca}{II}\ line. In order to estimate the EW uncertainties for a given star, we repeated these measurements on each of the 100 realizations created in the radial velocity calculations. Similarly, we adopt the mean and sigma of a Gaussian fit to the resultant distribution as the EW and its associated uncertainty. 
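As a concrete illustration of the Monte Carlo radial-velocity procedure described above (noise resampling followed by cross-correlation against a Doppler-shifted template), the sketch below uses pure \textsc{numpy} in place of the actual \textsc{PyAstronomy} \texttt{crosscorrRV} call; all function and variable names here are illustrative, not those of the real pipeline.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rv_by_crosscorr(wave, flux, t_wave, t_flux, rv_grid):
    """Return the trial RV (km/s) whose Doppler-shifted template
    correlates best with the observed spectrum."""
    cc = np.empty_like(rv_grid)
    for i, rv in enumerate(rv_grid):
        # Shift the template by the trial RV and resample onto the data grid.
        shifted = np.interp(wave, t_wave * (1.0 + rv / C_KMS), t_flux)
        cc[i] = np.sum(flux * shifted)
    return rv_grid[np.argmax(cc)]

def monte_carlo_rv(wave, flux, sigma, t_wave, t_flux, rv_grid,
                   n_trials=100, seed=0):
    """Perturb each pixel by its noise, re-measure the RV each time,
    and summarise the distribution by its mean and standard deviation."""
    rng = np.random.default_rng(seed)
    rvs = [rv_by_crosscorr(wave, flux + rng.normal(0.0, sigma, flux.shape),
                           t_wave, t_flux, rv_grid)
           for _ in range(n_trials)]
    return np.mean(rvs), np.std(rvs)
```

In the real analysis the cross-correlation step is performed by \texttt{crosscorrRV} and the barycentric correction (\texttt{helcorr}) is applied afterwards, as described in the text.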
To derive the metallicity of an RGB star, the summed \ion{Ca}{II}\ EW, $\Sigma EW_{\ion{Ca}{II}}$, can be related to [Fe/H] through knowledge of either the distance to the star or the difference between the V magnitude of the star and that of the HB of the Pal 5 cluster, $V-V_{HB}$. In this work, we adopt the \ion{Ca}{II}\ calibration presented in \citet{2013MNRAS.434.1681C}, which is valid over the range $-4.0\leq$ [Fe/H]$ \leq +0.5$. This calibration takes the form: \begin{equation} \begin{split} {\rm[Fe/H]}=a+b\times (V-V_{HB})+c\times \Sigma EW_{\ion{Ca}{II}}\\+d\times \Sigma EW_{\ion{Ca}{II}}^{-1.5}+e\times \Sigma EW_{\ion{Ca}{II}} \times (V-V_{HB}) \label{eq:feh} \end{split} \end{equation} where the coefficients $a,\,b,\,c,\,d$ and $e$ are listed in Table~\ref{tab:coefflist}. We adopt $V_{HB}$=17.51 mag for Pal 5 \citep{1996AJ....112.1487H} and calculate the V-band magnitudes of our stars from their PS1 photometry using the transformation equations in \cite{2018BlgAJ..28....3K}. Uncertainties in the metallicities come from combining the uncertainties of the \ion{Ca}{II}\ EWs and the uncertainties on the calibration coefficients. Because of the assumption of a fixed magnitude for the HB, it should be emphasised that the metallicities we derive are strictly valid only for genuine RGB stars at the distance of Pal 5. While \citet{2016ApJ...819....1I} detect a slight distance gradient along the Pal~5 stream, ranging from $\Delta(m-M)= 0.14\pm0.09$ mag at the extent of the trailing edge to $-0.09\pm0.04$ mag at the extent of the leading edge, this shift in distance translates to a mere $\sim0.03$~dex in [Fe/H] and is well within our uncertainties. Our blue arm spectra have sufficient resolution to individually measure the gravity-sensitive \ion{Mg}{I} triplet lines at $\sim$5170~\r{A}, which have been shown to be useful for separating foreground dwarfs from the RGB stars that we are interested in \citep[e.g.,][ K15]{2009AJ....137.3378O}. 
The EWs of these lines have been measured in the same way as the \ion{Ca}{II}\ triplet lines described above. While the EW of the stronger \ion{Mg}{I} 8807~\r{A} line could also have been used for dwarf-giant separation \citep[e.g.,][]{Battaglia2012}, it was not always available due to flexure in the spectrograph. \begin{table} \centering \caption{List of coefficients, from Table 4 of \citet{2013MNRAS.434.1681C}, used in Eq. \ref{eq:feh}.} \label{tab:coefflist} \begin{tabular}{@{}cc@{}} \hline \hline Coefficient & Value\\ \hline a&$-3.45\pm0.04$\\ b&$0.11\pm0.02$\\ c&$0.44\pm0.006$\\ d&$-0.65\pm0.12$\\ e&$0.03\pm0.003$\\ \hline \end{tabular} \end{table} \subsection{Identifying Pal 5 stream stars} \label{subsec:mem} The distance and faintness of the Pal~5 stream conspire to make it challenging to cleanly isolate member stars from the significant foreground and background contaminant populations along its sightline. This is further exacerbated by the presence of the Sgr stream in this part of the sky \citep{2016ApJ...819....1I, 2020ApJ...889...70B}. While the Sgr stream lies at a larger line-of-sight distance, red clump stars from this system contaminate the region of the CMD where faint Pal 5 RGB stars lie \citep[see Fig. 1 of][]{2020ApJ...889...70B}. These considerations motivate us to pursue a probabilistic approach to membership assignment in which we combine the new spectroscopic information described above with Gaia EDR3 astrometry and PS1 photometry. We began by cross-matching our observed targets with EDR3, matching each star with the closest EDR3 source within a search radius of 2 arcsec. We then removed any stars with a well resolved parallax, $(\omega - 3\sigma_{\omega})>0$ mas, since these will lie in the foreground. We also disregarded stars with a measured velocity uncertainty $>5$ km s$^{-1}$, as this value corresponds to spectra with poor signal-to-noise. Together, these cuts remove 434 stars (61 per cent of the total sample). 
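A minimal sketch of the two quality cuts just described (the 3$\sigma$ parallax criterion and the 5 km s$^{-1}$ velocity-uncertainty limit), assuming the matched catalogue is held in \textsc{numpy} arrays; the array names are illustrative rather than actual catalogue columns.

```python
import numpy as np

def quality_cuts(parallax, parallax_err, rv_err, max_rv_err=5.0):
    """Keep stars that do NOT have a well resolved parallax,
    i.e. that fail (omega - 3*sigma_omega) > 0, and whose
    radial-velocity uncertainty is at most max_rv_err (km/s)."""
    foreground = (parallax - 3.0 * parallax_err) > 0.0  # resolved parallax
    noisy = rv_err > max_rv_err                         # poor signal-to-noise
    return ~(foreground | noisy)

# Example: the first star is a nearby dwarf, the third has a noisy spectrum,
# so only the second star survives the cuts.
keep = quality_cuts(np.array([1.0, 0.1, 0.1]),
                    np.array([0.1, 0.1, 0.1]),
                    np.array([2.0, 2.0, 10.0]))
```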
To avoid any potential biases, we do not consider further cuts on PMs or line-of-sight velocities at this stage of analysis. Our approach to determining the probability of a given star belonging to the Pal 5 stream ($P_{P5}$), or to the field ($P_{MW}$), is to use the log-likelihood ($\ln{\mathcal{L}_{R}}$) ratio, otherwise known as the Neyman-Pearson test (see chapter 9 of \citealt{1993stp..book.....L}). To this end, we consider in turn the likelihood ratio of the following features: dwarf/giant separation ($\mathcal{L}_{R,D/G}$), [Fe/H] ($\mathcal{L}_{R,[Fe/H]}$) and proximity to the dereddened Pal 5 RGB $(\mathcal{L}_{R,CMD})$. The combined log-likelihood is: \begin{equation}\label{eq:loglike_tot} \ln{\mathcal{L}_{R}}=\ln{\mathcal{L}_{R,CMD}} + \ln{\mathcal{L}_{R,D/G}}+\ln{\mathcal{L}_{R,[Fe/H]}} \end{equation} \noindent where $\mathcal{L}_{R,(CMD,D/G,[Fe/H])}$ is the ratio $P_{P5}/P_{MW}$ for the specific feature. A value of $\ln{\mathcal{L}_{R}}>0$ implies that a star has a higher likelihood of being a Pal 5 stream member, while $\ln{\mathcal{L}_{R}}<0$ implies it is more likely to belong to the contaminating field. A value of $\ln{\mathcal{L}_{R}}=0$ implies neither scenario is favoured. \subsubsection{Photometric Selection}\label{subsubsec:photsel} Fig. \ref{fig:obs_fields} shows that our target selection broadly traces the RGB of Pal 5. To quantify the probability that a given star belongs to Pal 5 based on its CMD position, we measure its difference in colour from a Dartmouth stellar isochrone selected to best represent the cluster \citep{2008ApJS..178...89D}. For this, we use an isochrone with [Fe/H]=$-1.56$ dex \citep[as measured by][]{2017A&A...601A..41K} and age$=13.5$ Gyr, shifted to the distance of Pal 5. 
Firstly, we de-reddened the PS1 photometry for each target using the updated \citet{1998ApJ...500..525S} reddening maps from \citet{2011ApJ...737..103S} with the Python package \textsc{dustmaps}\footnote{\url{https://dustmaps.readthedocs.io/en/latest/index.html}} \citep{2018JOSS....3..695M}. The de-reddened magnitudes are denoted $g_0$ and $i_0$. We then assigned each star a probability according to the following equation \citep[e.g.,][]{2019MNRAS.485.2010G}: \begin{equation} P_{P5,CMD}=\frac{1}{\sqrt{2 \pi \sigma_{(g-i)_{0}}^2}} \exp \left(\frac{-((g-i)_{0}-(g-i)_{iso})^2}{2\sigma^2_{(g-i)_{0}}}\right)\label{eq:p5cmd} \end{equation} \noindent where $(g-i)_0$ and $(g-i)_{iso}$ are the de-reddened colour and isochrone colour at the star's $i_0$-band magnitude, and $\sigma_{(g-i)_0}$ is the uncertainty in the de-reddened colour. In this and the following steps, we have only considered stars that lie along the RGB; targets that are located along the HB will be dealt with separately (see Sec. \ref{kin}). The colour width of our RGB selection box (see Fig. \ref{fig:obs_fields}) is at most $\Delta(g-i)_0=0.4$ for a given $i_0$ mag. Across this rather modest range in colour, we assume for simplicity that field contaminants are uniformly distributed, and assign each star a field probability accordingly. That is: \begin{equation} P_{MW,CMD} = 1/\Delta(g-i)_0\label{eq:fieldcmd} \end{equation} Combining the probabilities from equations \ref{eq:p5cmd} and \ref{eq:fieldcmd}, we define $\mathcal{L}_{R,CMD}=P_{P5,CMD}/P_{MW,CMD}$. \subsubsection{Dwarf/Giant Separation} The gravity-sensitive \ion{Mg}{I} lines are commonly used to distinguish between RGB stars and foreground dwarf stars. For a given metallicity and temperature, these features have larger EWs in high-gravity dwarf stars than they do in giants. 
The \ion{Mg}{I} lines around 5170 \r{A} have been used for this purpose in previous studies of the Pal 5 stream \citep[e.g.,][ K15]{2009AJ....137.3378O} as well as of other diffuse halo structures \citep{2012AJ....143...88C}. These latter authors explicitly demonstrated that this is an effective discriminant between giant stars at the metallicity of Pal 5 and a population of more metal-rich dwarfs. In Fig. \ref{fig:DGS_dist} we plot the summed EWs of the \ion{Mg}{I} and \ion{Ca}{II}\ triplets measured for our sample. This diagram shows a bimodal structure, with the field dwarfs populating higher values of the summed \ion{Mg}{I} EW at a given \ion{Ca}{II}\ EW (see also K15). The one-dimensional distribution of \ion{Mg}{I} triplet EWs ($\Sigma EW_{\mathrm{\ion{Mg}{I}}}$) is shown in the right panel and can be modelled by two normal distributions of the form: \begin{equation}\label{eq:dgs} P_{X,D/G} = \frac{1}{\sqrt{2 \pi(\sigma_{\Sigma EW_{\mathrm{\ion{Mg}{I}}}}^2+\sigma_{X}^2)}} \exp \left(\frac{-(\Sigma EW_{\mathrm{\ion{Mg}{I}}}-X)^2}{2 (\sigma_{\Sigma EW_{\mathrm{\ion{Mg}{I}}}}^2+\sigma_{X}^2)}\right) \end{equation} \noindent where $X$ and $\sigma_X$ are the mean and standard deviation of the dwarf or giant distribution, and $\sigma_{\Sigma EW_{\mathrm{\ion{Mg}{I}}}}$ is the uncertainty in $\Sigma EW_{\mathrm{\ion{Mg}{I}}}$. After both distributions have been found, we calculate the likelihood ratio $\mathcal{L}_{R,D/G}=P_{P5,D/G}/P_{MW,D/G}$. \subsubsection{Metallicity Selection} \citet{2017A&A...601A..41K} presented a detailed chemical analysis of Pal 5 based on high resolution spectra of 15 RGB stars. They derive a mean metallicity of [Fe/H]$_{P5}= -1.56\pm0.02\pm0.06$ dex (presented with statistical and systematic uncertainties). As there is no evidence for any [Fe/H] spread in the main body of Pal 5, we assume that the stars in the tidal stream possess the same [Fe/H] abundance. 
Therefore, we have adopted this value when assigning stars their associated $P_{P5,[Fe/H]}$, which takes the form of another normal distribution: \begin{equation}\label{eq:metp5} \begin{aligned} P_{P5,[Fe/H]}=\frac{1}{\sqrt{2 \pi (\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{P5}})}}\times \\ \exp \left(\frac{-([Fe/H]-[Fe/H]_{P5})^2}{2(\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{P5}})}\right) \end{aligned} \end{equation} \noindent where [Fe/H] is our measured value using the \ion{Ca}{II}\ EW, $\sigma_{[Fe/H]}$ is its uncertainty and $\sigma_{[Fe/H]_{P5}}$ is the uncertainty of $[Fe/H]_{P5}$, obtained by summing the statistical and systematic terms in quadrature. To inform our choice of a model for the field component, we plot in Fig. \ref{fig:FEH} the inferred [Fe/H] distribution of all stars which remain after the parallax and low signal-to-noise cuts. While these values only represent actual metallicities for stars at the distance of Pal 5, it is notable that the overall distribution of this quantity can be well-described by a normal distribution. Therefore, we can define $P_{MW,[Fe/H]}$ in a similar manner to Eq. \ref{eq:metp5}: \begin{equation} \begin{aligned} P_{MW,[Fe/H]}=\frac{1}{\sqrt{2 \pi ({\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{MW}}})}}\times \\ \exp\left(\frac{-([Fe/H]-[Fe/H]_{MW})^2}{2(\sigma^2_{[Fe/H]}+\sigma^2_{[Fe/H]_{MW}})}\right)\label{eq:metMW} \end{aligned} \end{equation} \noindent where $[Fe/H]_{MW}$ and $\sigma_{[Fe/H]_{MW}}$ are the mean and sigma of the normal distribution fit to the data. We note that while a small number of confirmed Pal 5 stream members will be included in our model of the field component, they are greatly outnumbered by the much more significant contaminant population, making any bias in determining $P_{MW,[Fe/H]}$ unlikely. Finally, we can calculate the likelihood ratio as $\mathcal{L}_{R,[Fe/H]}=P_{P5,[Fe/H]}/P_{MW,[Fe/H]}$. 
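For concreteness, the per-star metallicity likelihood ratio built from Eqs. \ref{eq:metp5} and \ref{eq:metMW} can be sketched as below; the field parameters passed in the example are placeholders for illustration, not our fitted values.

```python
import numpy as np

FEH_P5 = -1.56                 # mean cluster [Fe/H] from high-resolution spectroscopy
SIG_P5 = np.hypot(0.02, 0.06)  # statistical and systematic terms in quadrature

def gaussian(x, mu, var):
    """Normal density with variance var."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def ln_lr_feh(feh, feh_err, feh_mw, sig_mw):
    """ln L_R,[Fe/H] = ln(P_P5,[Fe/H] / P_MW,[Fe/H]), with the
    measurement uncertainty convolved into both model widths."""
    p_p5 = gaussian(feh, FEH_P5, feh_err ** 2 + SIG_P5 ** 2)
    p_mw = gaussian(feh, feh_mw, feh_err ** 2 + sig_mw ** 2)
    return np.log(p_p5 / p_mw)
```

A star measured at the cluster metallicity yields a positive log-ratio (favouring Pal 5), while one at the field mean yields a negative value.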
\subsubsection{Calculating Membership Probability} To transform our log-likelihoods into membership probabilities, we can calculate the probability that a given star belongs to Pal~5 ($P_{mem}$) using the following equation: \begin{equation}\label{eq:loglike1} P_{mem}=\frac{f\mathcal{L}_{P5}}{f \mathcal{L}_{P5} + (1-f) \mathcal{L}_{MW}} \end{equation} \noindent where $\mathcal{L}_{P5/MW}$ is the likelihood value with respect to Pal~5 and the field, respectively (not to be confused with the likelihood ratio $\mathcal{L}_{R}$), and $f$ is the normalisation value between the two populations. In our case, we have simply assumed that membership of Pal 5 or the MW field is equally probable, thus $f=0.5$. We acknowledge that this choice is arbitrary; however, it removes any bias towards either Pal 5 membership or the field, which is reasonable in our analysis. As we have calculated the log-likelihood ratio ($\ln{\mathcal{L}_{R}}$) from Eq. \ref{eq:loglike_tot}, we can simplify Eq. \ref{eq:loglike1} by dividing the numerator and denominator by $0.5\mathcal{L}_{MW}$, expressing the probability that a star belongs to Pal 5 in terms of the likelihood ratio $\mathcal{L}_{R}$ defined in Eq. \ref{eq:loglike_tot} \citep[e.g.][]{1993stp..book.....L,2011MNRAS.413.2895J}: \begin{equation}\label{eq:loglike_P5} P_{mem}=\frac{\mathcal{L}_{R}}{\mathcal{L}_{R}+ 1} \end{equation} As we have set the normalisation factor to 0.5, this implies that stars with $P_{mem} \geq 0.5$ are high likelihood Pal 5 members. With this assumption, we find that 75 stars, or $\sim$11 per cent of our total sample, are likely to belong to Pal 5. To avoid introducing biases, we have thus far ignored the kinematics of stars in identifying likely Pal 5 stream members. In the next Section, we consider how such measurements can inform our final sample definition. 
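Because the normalisation $f=0.5$ cancels, Eq. \ref{eq:loglike_P5} is simply the logistic function of the combined log-likelihood ratio; a minimal sketch (illustrative only):

```python
import math

def membership_probability(ln_lr):
    """P_mem = L_R / (L_R + 1), evaluated from ln(L_R).

    Written in two branches so that neither exp() call can
    overflow for large |ln_lr|."""
    if ln_lr >= 0.0:
        return 1.0 / (1.0 + math.exp(-ln_lr))
    e = math.exp(ln_lr)
    return e / (1.0 + e)

# ln L_R = 0 leaves the two hypotheses tied: P_mem = 0.5.
```

A star is then retained as a high-likelihood member when $P_{mem}\geq0.5$, i.e. precisely when $\ln{\mathcal{L}_{R}}\geq0$.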
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig2.pdf} \end{center} \caption{Dwarf/Giant separation for our targets after the removal of targets with significant EDR3 parallax measurements and low signal-to-noise spectra. The sum of the \ion{Ca}{II}\ EWs is shown along the x-axis, while the summed \ion{Mg}{I} b EWs are along the y-axis. The right histogram shows the distribution of summed \ion{Mg}{I} b EWs when integrating over the x-axis. Two peaks can be clearly seen, corresponding to the dwarf population (peak near $\Sigma EW_{\mathrm{\ion{Mg}{I}}}=5$) and the giant population (lower peak near $\Sigma EW_{\mathrm{\ion{Mg}{I}}}=1$).} \label{fig:DGS_dist} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig3.pdf} \end{center} \caption{Distribution of the [Fe/H] inferred from the \ion{Ca}{II}\ EWs for all observed targets, after the removal of targets with significant EDR3 parallax measurements and low signal-to-noise spectra. These values only reflect genuine [Fe/H] for stars at the distance of Pal 5. The dashed line indicates the measured [Fe/H] of Pal 5 ($-1.56$ dex) from high resolution spectroscopy.} \label{fig:FEH} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig4.pdf} \end{center} \caption{Diagnostic plots of the velocity measurements used in our membership assignment. Stars with $P_{mem}\geq0.5$ are colour-coded according to field location, as in Fig. \ref{fig:obs_fields}, while stars with $P_{mem}<0.5$ are shown in grey. Left: Proper motion distribution, with the dashed ring indicating the 2 mas yr$^{-1}$ boundary used for final sample definition. The open grey circles indicate the expected proper motion of Sgr stars from the model presented in \citet{2021MNRAS.501.2279V}, which lie outside the dashed circle. Right: Radial velocity histogram, with the vertical dashed lines indicating our selected velocity range of --80 to --40 km s$^{-1}$. 
The solid arrow shows the radial velocity of Pal 5 at --57.4 km s$^{-1}$. Also shown by the grey unfilled histogram is the expected velocity distribution of Sgr stars from the model presented in \citet{2021MNRAS.501.2279V}, all of which lie outside our velocity range.} \label{fig:kin_select} \end{figure*} \subsection{Incorporating Kinematics for Final Sample Definition}\label{kin} The PM of Pal 5 has been measured repeatedly using Gaia DR2 and EDR3 data \citep[e.g.,][]{2019MNRAS.484.2832V,2021MNRAS.505.5978V}. However, the area of interest in this study is located outside the main body of the cluster, and any PM filtering for our sample needs to account for how the PMs of stars change as a function of position along the stream. This has recently been examined by \citet{2019AJ....158..223P}, who find that while the PMs of stars do vary along the length of the stream, they do not change substantially over the regions we are probing in this study. As a result, we proceed with stars that lie within 2 mas yr$^{-1}$ of the main-body PM of Pal 5\footnote{$\mu^{*}_{\alpha}=\mu_{\alpha}\cos{\delta}$}: $(\mu^{*}_{\alpha},\mu_{\delta})=(-2.75,-2.65)$ mas yr$^{-1}$ \citep{2019MNRAS.484.2832V}. This is more conservative than the 3 mas yr$^{-1}$ threshold used by \citet{2020MNRAS.493.4978S}, which is large enough to admit contamination from the Sgr stream. Indeed, using the model of \citet{2021MNRAS.501.2279V} in the region $223\degr<\alpha<229.5\degr$ and $-20\degr<\delta<10\degr$, we find that the Sgr stream in this direction has a PM of $(\mu^{*}_{\alpha},\mu_{\delta})\approx(-1.1,-0.5)$ mas yr$^{-1}$ (see Fig. \ref{fig:kin_select}). Our PM selection thus provides a cleaner sample of stars in the Pal 5 leading tail, but may suffer from minor incompleteness if very energetic stars are present.
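This proper-motion cut can be sketched as follows (constants are the values quoted above; the function name is ours):

```python
import numpy as np

# Main-body PM of Pal 5 (Vasiliev 2019) and the selection radius adopted here.
PM_PAL5 = (-2.75, -2.65)   # (mu_alpha*, mu_delta) in mas/yr
PM_RADIUS = 2.0            # mas/yr

def within_pm_circle(pmra, pmdec, centre=PM_PAL5, radius=PM_RADIUS):
    """True for stars whose PM lies within `radius` of the cluster PM."""
    return np.hypot(np.asarray(pmra) - centre[0],
                    np.asarray(pmdec) - centre[1]) < radius

# A star at the cluster PM passes; a star at the quoted Sgr-stream PM of
# (-1.1, -0.5) mas/yr lies ~2.7 mas/yr away and is rejected.
print(within_pm_circle([-2.75, -1.1], [-2.65, -0.5]))
```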
Disentangling such a hot population from contamination by the Sgr stream and the Milky Way field will be very difficult with currently-available data, but may be possible with improved distances and/or detailed chemistry (e.g. elemental abundance ratios). The radial velocity distribution of our high-likelihood Pal 5 members is also shown in Fig. \ref{fig:kin_select}. There is an obvious grouping of stars between $-80$ and $-40$ km s$^{-1}$, along with many outliers. The \cite{2021MNRAS.501.2279V} Sgr stream model predicts that stream stars along this sightline will contribute at radial velocities $\geq -20$ km s$^{-1}$ and so should not pose an issue. As previous kinematic studies have found a radial velocity gradient of $\sim -1$ km s$^{-1}$ deg$^{-1}$ along the stream (e.g. K15, Ib17), we find it reasonable to expect that Pal 5 stream members will be contained within this velocity range across the radial extent of our fields. When our constraints on PMs and radial velocities are incorporated into our sample definition, all but 15 of the high-probability Pal 5 stars are removed. To this sample, we also add one star located on the HB (i.e. with $(g-i)_{0}\le 0.6$ mag in Fig. \ref{fig1}). The HB was not considered as part of our photometric selection described in Sec. \ref{subsubsec:photsel}, but we expect there to be very little field contamination in this part of the CMD. For this population, we have only considered the kinematic cuts in PM and radial velocity, and find that these are satisfied by a single star in the outermost field. In total, we present these 16 stars as {\it bona fide} Pal 5 stream members; their properties are listed in Table \ref{tab:p5stars}. \begin{table*} \centering \caption{List of {\it bona fide} Pal 5 stream members from our AAT sample.
The naming convention indicates which field the star belongs to, followed by its designated number.} \label{tab:p5stars} \begin{tabular}{@{}ccccccc@{}} \hline \hline Star&R.A.&Dec&$V_R$&$(g-i)_{0}$&$i_{0}$&[Fe/H]\\ (Field-number)&(deg, J2000)&(deg, J2000)& ($\rm{km}\,\rm{s}^{-1}$)&\multicolumn{2}{c}{(mag)}&(dex)\\ \hline 1-448$^{*}$&228.419&-0.708&$-56.87\pm1.98$&0.83&16.3&$0.01\pm0.13$\\ 1-396&228.539&-0.878&$-65.50\pm3.91$&0.69&17.8&$-3.04\pm1.04$\\ 1-272$^{*}$&228.118&-1.214&$-56.09\pm1.87$&0.68&17.7&$-0.59\pm0.41$\\ 1-265&227.972&-1.245&$-51.52\pm2.13$&0.73&16.7&$-1.59\pm0.16$\\ 1-190$^{*}$&227.663&-1.446&$-58.37\pm0.96$&0.85&16.2&$-0.99\pm0.11$\\ 1-225$^{*}$&227.840&-1.484&$-52.00\pm1.44$&0.77&16.9&$-1.42\pm0.14$\\ 1-135&227.327&-1.637&$-46.83\pm1.55$&0.81&16.7&$-0.82\pm0.12$\\ 1-113&227.168&-1.760&$-40.43\pm4.36$&0.70&17.3&$-0.72\pm0.15$\\ 2-346&224.939&-4.332&$-59.97\pm1.74$&0.69&17.0&$-1.22\pm0.10$\\ 2-312&224.645&-4.525&$-62.05\pm1.63$&0.72&17.0&$-1.06\pm0.09$\\ 2-253&223.607&-4.805&$-79.72\pm2.04$&0.78&16.9&$-1.28\pm0.11$\\ 2-214&225.323&-4.819&$-67.20\pm1.17$&0.78&16.4&$-1.99\pm0.07$\\ 2-550$^{\dagger}$&224.349&-5.052&$-49.54\pm2.92$&-0.35&17.4&$-1.96\pm0.10$\\ 2-132&224.982&-5.087&$-48.56\pm3.70$&0.62&17.8&$-3.22\pm0.16$\\ 2-069&224.783&-5.326&$-70.84\pm0.28$&0.89&15.7&$-1.47\pm0.04$\\ 2-033&224.177&-5.501&$-63.32\pm1.15$&0.78&16.5&$-1.15\pm0.08$\\ \hline \multicolumn{7}{l}{$^{*}$ These stars are in common with Ib17; the velocity presented here is the weighted-average value.}\\ \multicolumn{7}{l}{$^{\dagger}$ Horizontal branch star.}\\ \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Fig5.pdf} \end{center} \caption{Our {\it bona fide} Pal 5 stars. Stars are colour-coded by which field they are located in, as in Fig. \ref{fig:obs_fields}. Top row: On-sky distribution of our member stars.
Stars in the inner field closely follow the stream track, whereas those in the outermost field display a broader distribution on the sky. Bottom left: CMD of member stars. Also plotted is the Dartmouth stellar isochrone used in the photometric selection described in Section \ref{subsubsec:photsel}. Bottom right: PM distribution of member stars. } \label{fig:evidence} \end{figure*} \section{Results} In Fig. \ref{fig:evidence}, we show the spatial distribution, CMD and PM distribution of our final sample of Pal 5 stream stars. Eight of these stars lie in Field 1, the closest to the cluster centre, and they tightly follow the stream track. The remaining eight stars lie in the outermost field and show a significant spread in position on the sky, suggestive of stream fanning; these stars lie further along the leading tail than has previously been explored spectroscopically. To better quantify this behaviour, and to examine how our new measurements compare to the known gradients along the stream, we combine our radial velocities with literature measurements to revisit the kinematics of the Pal 5 stream along its entire observed extent. Before doing so, we note that of the eight stars we have confirmed in Field 1, four have been previously identified by Ib17: stars 1-190, 1-225, 1-272 and 1-448. Comparing the radial velocities we derive for these stars with those found by Ib17, we find an offset of $V_R-V_{Ib17}=0.4 \pm 1.9$ km~s$^{-1}$. As Ib17's measurements are based on higher signal-to-noise spectra than ours, we adopt the weighted average of Ib17's and our measured velocities for these stars in the rest of the analysis (see Table \ref{tab:p5_fullist}). \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig6.pdf} \end{center} \caption{Left: EDR3 PM distribution of Ib17 (green), \citet{2016ApJ...823..157I} (light blue) and \citet{2009AJ....137.3378O} (pink) {\it bona fide} Pal 5 members.
Our sample is shown as black points. The black dashed ring displays a 2 mas yr$^{-1}$ radius around the measured Pal 5 PM from \citet{2021MNRAS.505.5978V}. It is clear that significant contamination exists in these earlier samples. Right: Spatial distribution of our combined sample of 109 {\it bona fide} stream members with the stream track of \citet{2016ApJ...819....1I} and \citet{2017ApJ...842..120I} underlaid. The stars closely follow the stream track until the edge of the leading tail, near ($\alpha, \delta$)$\sim (225,-5)$, where the fanning reported in this paper is observed. The dashed circle indicates the location of Pal~5 itself.} \label{fig:all_spat_pm} \end{figure*} With the astrometric measurements provided by EDR3 now available, we are also able to clean previous kinematic samples of Pal~5 stars from contaminants. We have gathered the radial velocities presented in \cite{2009AJ....137.3378O} (18 stars), \cite{2016ApJ...823..157I} (130 stars) and Ib17 (130 stars), noting that the latter includes all stars previously reported in K15 and 13 stars from \cite{2009AJ....137.3378O}. We subject these samples to the same astrometric and kinematic selections we performed on our own sample, that is, removing stars with a resolved parallax $(\omega - 3\sigma_{\omega})>0$ mas, radial velocity uncertainties $\geq$ 5 km s$^{-1}$, radial velocities outside the range $-80$ to $-40$ km s$^{-1}$, and PMs more than 2 mas yr$^{-1}$ from that of the main body of the Pal 5 cluster. As seen in the left panel of Fig. \ref{fig:all_spat_pm}, there is significant contamination in these earlier kinematic samples of Pal 5 stream stars. We find that out of the 130 stars from Ib17 \citep[including the stars in common with our sample and][]{2009AJ....137.3378O}, only 91 stars (75 unique, 16 in common with other data sets) satisfy the astrometric and kinematic selection criteria adopted in this paper.
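The astrometric and kinematic cleaning just described can be summarised as a single boolean mask; the sketch below is a paraphrase of the cuts stated in the text, not the actual code used:

```python
import numpy as np

def clean_mask(parallax, parallax_err, vr, vr_err, pm_dist):
    """Apply the cuts used to clean the literature samples.

    parallax, parallax_err : Gaia EDR3 parallax and uncertainty [mas]
    vr, vr_err             : heliocentric radial velocity and error [km/s]
    pm_dist                : PM distance from the Pal 5 main body [mas/yr]
    """
    parallax = np.asarray(parallax, dtype=float)
    return (
        (parallax - 3.0 * np.asarray(parallax_err) <= 0.0)  # unresolved parallax
        & (np.asarray(vr_err) < 5.0)                        # RV error < 5 km/s
        & (np.asarray(vr) > -80.0) & (np.asarray(vr) < -40.0)
        & (np.asarray(pm_dist) < 2.0)                       # PM within 2 mas/yr
    )

# A plausible distant member passes; a nearby star with a resolved
# parallax does not.
print(clean_mask([0.10, 1.00], [0.05, 0.10],
                 [-57.0, -57.0], [2.0, 2.0], [0.5, 0.5]))
```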
For the \cite{2016ApJ...823..157I} sample, only 2 of the 130 stars are retained, and for the \cite{2009AJ....137.3378O} sample, 16 of 18 stars are retained (of which 12 are present in Ib17). Accounting for stars in common and unique to each sample, we present four, two and 75 unique stars (81 unique stars in total) as Pal 5 system members from the studies of \cite{2009AJ....137.3378O}, \cite{2016ApJ...823..157I} and Ib17, respectively. Adding in the 12 stars in common between \cite{2009AJ....137.3378O} and Ib17, and our 16 stars (of which four are in common with Ib17), we present an extended sample of 109 {\it bona fide} members of the Pal 5 cluster and its tidal tails, spread across $\sim$22$\degr$ on the sky. The properties of these stars, including their sky positions, radial velocities, PMs and PS1 photometry, are presented in Table \ref{tab:p5_fullist}, while their spatial distribution across the sky is shown in the right panel of Fig. \ref{fig:all_spat_pm}. The next stage in our analysis is to transform the PMs and spatial coordinates of this extended sample to a spherical coordinate system aligned with the stream. For this, we adopt the transformation provided by \cite{2020ApJ...889...70B}, in which the cluster origin lies at $(\phi_{1}, \phi_{2})=(0,0)\degr$ and the leading arm of the tidal tail extends in the positive $\phi_{1}$ direction. We fit second-order polynomials to these data to calculate the best-fitting tracks of the trailing and leading tails. We find that the trailing tail can be described as: \begin{equation} \phi_{2,trailing}(\phi_1)=0.0108\phi^2_{1}+0.0394\phi_{1}-0.2768 \end{equation} \noindent while the leading tail follows: \begin{equation} \phi_{2,leading}(\phi_1)=0.0053\phi^2_{1}+0.2043\phi_{1}+0.0163 \end{equation} \noindent in stream coordinates (see top left panel of Fig. \ref{fig:kim_fig}). In constructing these fits, we have excluded all stars that lie within the Jacobi radius to avoid the cluster influencing the track.
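The fitted tracks are straightforward to evaluate; the sketch below also computes the track offset used to measure the stream width, under the simplifying assumption (ours, for illustration) that the leading-tail polynomial applies at $\phi_1 \geq 0$ and the trailing-tail one at $\phi_1 < 0$:

```python
import numpy as np

def phi2_trailing(phi1):
    """Best-fitting trailing-tail track (phi_1, phi_2 in degrees)."""
    return 0.0108 * phi1**2 + 0.0394 * phi1 - 0.2768

def phi2_leading(phi1):
    """Best-fitting leading-tail track (phi_1, phi_2 in degrees)."""
    return 0.0053 * phi1**2 + 0.2043 * phi1 + 0.0163

def delta_phi2(phi1, phi2):
    """Offset of a star from the relevant tail's track."""
    phi1 = np.asarray(phi1, dtype=float)
    track = np.where(phi1 >= 0.0, phi2_leading(phi1), phi2_trailing(phi1))
    return np.asarray(phi2, dtype=float) - track

# Offsets for one leading-tail and one trailing-tail star.
print(delta_phi2([2.0, -5.0], [0.5, 0.0]))
```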
To characterise the stream width along its length, we calculate for each star the difference between its coordinate $\phi_2$ and the stream track at its $\phi_1$ position, which we denote $\Delta\phi_2$. We then fit the mean $\Delta\phi_2$ (in $2.5\degr$ bins as a function of $\phi_1$) with a second-order polynomial to estimate the behaviour of the width along the stream (Fig. \ref{fig:fanning}). We see an increase in stream width at $\phi_{1}>5\degr$, reaching approximately 0.5$\degr$ at $\phi_{1}=7\degr$, confirming the visual impression of fanning in the leading tail seen in Fig. \ref{fig:evidence}. This also agrees well with the photometric detection of fanning in this region presented by \cite{2020ApJ...889...70B} (see their Fig. 3). The stream width decreases, as expected, as we move along the stream towards the trailing tail and, down to $\phi_{1}=-10\degr$, can be characterised by a width of $\approx 0.2\degr$. The polynomial fit suggests that the stream may become wider again at the observed edge of the trailing tail, but there are too few stars in this region for this to be meaningful. \begin{table*} \centering \caption{List of Pal 5 members compiled in this study. These stars are drawn from the new data presented in this work (K22) as well as those from the previous studies of \citet{2009AJ....137.3378O} (O09), \citet{2016ApJ...823..157I} (Is16) and Ib17 that are retained after astrometric and kinematic cleaning. We provide sky positions, PMs and their uncertainties, radial velocities and their uncertainties, and the $g$-band magnitudes and $(g-i)$ colours from PS1. The last column indicates the origin of their first identification.
Only the top five rows are shown here; the remainder is available online.} \label{tab:p5_fullist} \begin{tabular}{@{}ccccccccccc@{}} \hline \hline \multicolumn{1}{c}{R.A.} & \multicolumn{1}{c}{Dec.} & \multicolumn{1}{c}{$\mu^{*}_{\alpha}$} & \multicolumn{1}{c}{$\sigma_{\mu^{*}_{\alpha}}$} & \multicolumn{1}{c}{$\mu_{\delta}$} & \multicolumn{1}{c}{$\sigma_{\mu_{\delta}}$} & \multicolumn{1}{c}{$V_R$} & \multicolumn{1}{c}{$\sigma_{V_R}$} & \multicolumn{1}{c}{$g_{PS1}$} & \multicolumn{1}{c}{$(g-i)_{PS1}$} & \multicolumn{1}{c}{Source} \\ \multicolumn{2}{c}{(deg, J2000)} & \multicolumn{2}{c}{(mas yr$^{-1}$)} & \multicolumn{2}{c}{(mas yr$^{-1}$)} & \multicolumn{2}{c}{(km s$^{-1}$)} & \multicolumn{2}{c}{(mag)}&\\ \hline 240.342&6.036&-2.89&0.03&-2.18&0.02&-47.35&0.78&16.18&1.16&Ib17\\ 237.723&5.027&-3.35&0.12&-3.50&0.10&-50.13&2.50&18.47&0.83&Ib17\\ 236.502&4.285&-2.74&0.13&-2.38&0.11&-53.06&2.87&18.66&0.93&Ib17\\ 235.844&4.241&-2.84&0.11&-2.02&0.11&-50.22&3.22&18.04&0.87&Ib17\\ 236.025&4.193&-2.74&0.10&-2.49&0.09&-48.50&2.16&18.16&0.84&Ib17\\ \hline \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Fig7.pdf} \end{center} \caption{Inferred trends in various parameters along the stream, using the fit parameters from Table \ref{tab:kin_models}, displayed as a function of the stream coordinate $\phi_{1}$. Top left: sky positions in the Pal 5 stream coordinate frame $(\phi_{1},\phi_{2})$. Top right: radial velocities along the stream. Bottom left: $\mu_{\phi_{1}}$ along the stream. Bottom right: $\mu_{\phi_{2}}$ along the stream. The linear and quadratic model fits to the radial velocities and proper motions are shown in green and blue, respectively, with the shaded regions indicating the 1$\sigma$ uncertainties on the corresponding model. The points are the same as in Fig.
\ref{fig:all_spat_pm} (left), and the dashed black line indicates the quadratic fit to the RRL PMs from \citet{2019AJ....158..223P}.} \label{fig:kim_fig} \end{figure*} With our extended Pal 5 sample, we can also revisit the radial velocity gradient across $\approx22\degr$ on the sky, compared to the 16$\degr$--18$\degr$ of the \citet{2016ApJ...823..157I} and Ib17 spectroscopic samples. Following Ib17, we have employed a ``conservative formulation'' of a Bayesian automatic outlier rejection algorithm to fit the data \citep[see Chapter 8.3 of][]{Sivia:2006ty}. Using this technique, we explored both a linear ($A+ B \phi_{1}$) and quadratic ($A+B \phi_{1} + C\phi_{1}^2$) fit to the radial velocities and each component of the PMs as a function of position along the stream, $\phi_1$. The parameters of these fits are shown in Table \ref{tab:kin_models} and the models, along with their associated $1\sigma$ uncertainties, are shown in Fig. \ref{fig:kim_fig}. For the linear model, we find a radial velocity gradient of $-0.81 \pm 0.14$ km s$^{-1}$ deg$^{-1}$ along the stream, which is consistent, within the uncertainties, with the values found in other work where linear models have been adopted \citep[e.g.,][K15]{2009AJ....137.3378O,2016ApJ...823..157I}. We note that the radial velocity gradient of Is16, 0.7 km s$^{-1}$ deg$^{-1}$, was calculated with respect to the standard gnomonic coordinate $\xi$, a tangent-plane projection about the cluster centre. When we calculate our radial velocity gradient in that coordinate system, we find a consistent value of $0.93 \pm 0.20$ km s$^{-1}$ deg$^{-1}$. No previous analyses have explored a quadratic fit to the stream radial velocity distribution along the tails. We find that the quadratic term is consistent with zero within the uncertainties ($-0.07\pm 0.04$ km s$^{-1}$ deg$^{-2}$), while the linear term ($-1.34\pm0.28$ km s$^{-1}$ deg$^{-1}$) is similar to the aforementioned linear gradient within the uncertainties.
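For reference, in Sivia \& Skilling's conservative formulation each datum contributes a likelihood $\propto (1-e^{-R^2/2})/R^2$, with $R$ the error-normalised residual, which automatically down-weights outliers. The sketch below applies it to a linear radial-velocity model on synthetic data; it is an illustration of the technique, not the pipeline actually used here:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, phi1, v, sigma):
    """Negative log-likelihood, conservative formulation of Sivia & Skilling.

    Each residual R_i contributes log[(1 - exp(-R_i^2/2)) / R_i^2], which
    matches the Gaussian result for small R_i but falls off only slowly
    for outliers, so they barely pull on the fit.
    """
    a, b = params
    r2 = np.clip(((v - (a + b * phi1)) / sigma) ** 2, 1e-12, None)
    return -np.sum(np.log((1.0 - np.exp(-0.5 * r2)) / r2))

# Synthetic stream velocities with one gross interloper.
rng = np.random.default_rng(42)
phi1 = np.linspace(-12.0, 8.0, 40)
v = -55.7 - 0.8 * phi1 + rng.normal(0.0, 2.0, phi1.size)
v[5] += 40.0  # a contaminant
res = minimize(neg_log_like, x0=(-55.0, -1.0),
               args=(phi1, v, np.full(phi1.size, 2.0)), method="Nelder-Mead")
print(res.x)  # (A, B) recovered close to the input despite the outlier
```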
We conclude that a linear model of the radial velocities remains the best description over the angular extent of the stream probed here. At $\phi_{1}=0$ (i.e. the location of the cluster), we find $V_{R}=-55.67\pm0.28$ km s$^{-1}$ with the linear model and $V_{R}=-55.50 \pm 0.28$ km s$^{-1}$ with the quadratic model, both similar to Ib17, though slightly different from the radial velocities of $-57.4\pm0.4$ km s$^{-1}$ from K15 and $-58.6\pm0.2$ km s$^{-1}$ from \citet{2019MNRAS.482.5138B}. We note that restricting the constant term in the linear and quadratic models to this velocity does not affect the results significantly. We perform similar fits to the PM components, $\mu^*_{\phi_{1}}$\footnote{$\mu^*_{\phi_{1}}=\mu_{\phi_{1}}\cos{\phi_{2}}$} and $\mu_{\phi_{2}}$. In both instances, there is a small linear gradient, of $0.03\pm0.01$ mas yr$^{-1}$ deg$^{-1}$ and $0.04\pm0.004$ mas yr$^{-1}$ deg$^{-1}$ for $\mu^{*}_{\phi_{1}}$ and $\mu_{\phi_{2}}$, respectively. Further, the leading terms in the quadratic fits to $\mu^{*}_{\phi_{1}}$ and $\mu_{\phi_{2}}$ are both consistent with zero within 1$\sigma$. \cite{2019AJ....158..223P} have recently studied the PMs of {\it bona fide} RRL candidates in Pal 5 as a function of $\phi_{1}$ using a quadratic model. For the proper motion of the Pal 5 cluster, we find $(\mu^{*}_{\phi_{1}},\mu_{\phi_{2}})=(3.77\pm0.02,0.73\pm0.02)$ mas yr$^{-1}$, which is in excellent agreement with their value, as well as that of \citet{2021MNRAS.505.5978V} when transformed back to equatorial coordinates. The PM trends we find along the length of the stream are also in good agreement with their results (see the lower panels of Fig. \ref{fig:kim_fig}), which is very encouraging given that these trends have been fit to completely independent samples of stars.
However, we note that while our linear and quadratic fits to $\mu_{\phi_{2}}$ are in very good agreement with theirs at locations $-10^{\circ} \la \phi_{1} \la 5^{\circ}$, they rapidly diverge from their quadratic model at the extreme ends of the stream. This discrepancy is likely due to the relative lack of stars in these outermost reaches in the two samples -- \cite{2019AJ....158..223P} fit their model to 27 RRLs along the cluster/stream, with only a small number of stars guiding the fit at distances $|\phi_{1}|>10^{\circ}$. We, too, suffer from a small number of stars at those locations along the stream. Clearly, it is essential to identify further {\it bona fide} members at such large distances from the progenitor to provide more stringent constraints on the 3D kinematics, as well as to establish if the fanning uncovered here continues further along the leading tail. Finally, we can also re-examine the radial velocity dispersion within the tails using our combined dataset. To do this, we define $\Delta V_R$ to be the difference between the observed radial velocity of a given star and the expected radial velocity at the star's location from the model fits in Table \ref{tab:kin_models}. We then fit the resulting distribution of $\Delta V_R$ with a zero-mean normal distribution, and report the 1$\sigma$ value as the velocity dispersion, $\sigma$. Both the linear and quadratic model fits show a similar radial velocity dispersion within the debris, $\sigma_{V}=2.17 \pm 0.27$ km s$^{-1}$ and $2.50 \pm 0.24$ km s$^{-1}$ respectively, consistent with previously published values (e.g. K15). We explored whether there is any trend in the radial velocity dispersion along the tails and found that it varies very little, increasing by $\lesssim 1$ km s$^{-1}$ into the fanned region of the leading tail.
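The dispersion measurement can be sketched as follows: for a zero-mean Gaussian, the maximum-likelihood $\sigma$ is simply the rms of the residuals (a fuller treatment would also account for the individual velocity errors; the function name is ours):

```python
import numpy as np

def velocity_dispersion(delta_vr):
    """ML estimate of sigma for a zero-mean normal fitted to residuals.

    For Delta V_R ~ Normal(0, sigma), maximising the likelihood over
    sigma gives the root-mean-square of the residuals.
    """
    delta_vr = np.asarray(delta_vr, dtype=float)
    return float(np.sqrt(np.mean(delta_vr**2)))

# Residuals drawn from a 2.2 km/s Gaussian recover sigma ~ 2.2 km/s.
rng = np.random.default_rng(0)
print(velocity_dispersion(rng.normal(0.0, 2.2, 5000)))
```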
We also find low dispersions in both proper-motion components: $\sigma_{\mu_{\phi_{1}}} = 0.12 \pm 0.01$ mas yr$^{-1}$ and $\sigma_{\mu_{\phi_{2}}} = 0.16 \pm 0.01$ mas yr$^{-1}$ within the tail, again with no significant variation along it. Hence, the velocity dispersion along all three velocity dimensions is characteristically low and rather constant within the current sample of stars. This is not surprising, as the Pal 5 stream has long been noted to be a kinematically cold structure \citep[e.g.,][]{2009AJ....137.3378O}. \begin{table*} \begin{center} \caption{The coefficients of the linear ($A+ B \phi_{1}$) and quadratic ($A+B \phi_{1} + C\phi_{1}^2$) fits to the radial velocity $V_{R}$, and to the proper motions $\mu^{*}_{\phi_{1}}$ and $\mu_{\phi_{2}}$, as a function of the tangential projection along the stream, $\phi_{1}$. The top rows contain the fit coefficients, while $\sigma$ shows the velocity and PM dispersions about the stream. } \label{tab:kin_models} \begin{tabular}{@{}ccccccc} \hline \hline Parameters&\multicolumn{2}{c}{$V_R$}&\multicolumn{2}{c}{$\mu^{*}_{\phi_{1}}$}&\multicolumn{2}{c}{$\mu_{\phi_{2}}$}\\ Model&linear & quadratic&linear & quadratic&linear & quadratic\\ \hline $A$&$-55.67\pm0.27$&$-55.50\pm0.28$&$3.77\pm0.02$&$3.77\pm0.02$&$0.73\pm0.02$&$0.73\pm0.02$\\ $B$&$-0.81\pm0.14$&$-1.34\pm0.28$&$0.03\pm0.01$&$0.03\pm0.01$&$0.04\pm0.004$&$0.03\pm0.004$\\ $C$&$--$&$-0.07\pm0.04$&$--$&$-0.0004\pm0.0005$&$--$&$-0.0003\pm0.0003$\\ $\sigma$&$2.17\pm0.27$&$2.75\pm0.24$&$0.12\pm0.01$&$0.13\pm0.01$&$0.16\pm0.01$&$0.16\pm0.01$\\ \hline \end{tabular} \end{center} \end{table*} \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{Fig8.pdf} \end{center} \caption{Stream width (blue shaded region) as a function of position along the Pal 5 stream. The black points represent stars in the extended sample. The large increase in width in the leading tail, beyond $\phi_1 = 5\degr$, is indicative of the fanning.
The black cross indicates the position of Pal 5 itself.} \label{fig:fanning} \end{figure} \section{Discussion} Prior to this work, spectroscopic studies of Pal 5 member stars have been confined to the densest and most identifiable regions of the tidal stream. However, in recent years, photometric studies have shown that there is a faint extension to the leading edge of the stream \citep{2016MNRAS.463.1759B}, and that the stars in this region may actually fan out in angular distance from the nominal stream track \citep{2020ApJ...889...70B}. The outermost field observed in the present study covers this previously-unexplored region and confirms the existence of stars there with kinematics and metallicities that make them highly probable {\it bona fide} members of the Pal 5 stream. Stream fanning has been explored from a theoretical standpoint by various authors. \citet{2015ApJ...799...28P} showed how the morphology of the Pal 5 stream on its own could constrain the shape of the Milky Way's dark matter halo. In particular, they argued against the triaxial dark-halo potential of \citet{2010ApJ...714..229L} on the basis of the thin, curved appearance of the Pal 5 stream as known at that time. Our confirmation of fanning in a more distant part of the leading tail is intriguing in this context, but needs to be assessed against the fact that no such behaviour is seen along the well-studied trailing tail, as expected in their models. \citet{2016ApJ...824..104P} discuss fanning in the Ophiuchus stream as a result of chaotic orbits caused by a rotating Galactic bar. The modelling work of \citet{2020ApJ...889...70B} shows that a rotating massive bar can also lead to fanning in the leading tail of Pal 5, but that this scenario predicts other effects which are not compatible with the observations.
Much of the recent modelling work on the Pal 5 stream has focused on the effect of other perturbations acting on the tails, seeking explanations for the observed inhomogeneous density structure, the marked length asymmetry of the leading and trailing arms, and ``wiggles'' in the stream track. Specifically, dark matter subhalos piercing the stream have been invoked to explain peaks and gaps in the density distribution \citep[e.g.][]{2012ApJ...760...75C}, and dynamical resonances induced by the presence of the Galactic bar have been exploited to explain the apparent truncation in the leading tail \citep[e.g.,][]{2017MNRAS.470...60E,2017NatAs...1..633P}. As shown by \citet{2020ApJ...889...70B}, there is currently no single scenario that can explain all of the observations to date. The new radial velocities we provide here, as well as our construction of a PM-cleaned catalogue of previously-identified stream members, should prove very useful in future modelling work. Broadly speaking, we find that the radial velocities of the stars identified in the present study are in qualitative agreement with the predictions of the modelling work by \citet{2017MNRAS.470...60E}. However, in their work, which considers the evolution of the tails within a smooth static potential as well as in potentials in which interactions with dark matter clumps, giant molecular clouds and a rotating bar take place, the different models are largely degenerate in their leading-arm radial velocity predictions (see their Figs. 7, 9 and 11). In this context, we point out that the internal structural and dynamical properties of the progenitor cluster may have equally important effects on the morphology and kinematics of the resulting tidal streams. Indeed, \citet{2012MNRAS.420.2700K} showed that fan-like structures naturally form in stellar streams produced by a star cluster on an eccentric orbit within a tidal field.
These features result entirely from the expected dynamical evolution of a collisional stellar system, without the need to invoke any additional external perturbation. Further to this, it has been noted that unusual stream features, such as the ``dog-leg'' feature seen in the stream that wraps NGC 1097 \citep{2015arXiv150403697A}, can be reproduced only when a rotating progenitor is considered. Similarly, some of the features of the Sgr stream appear to confirm the presence of significant angular momentum in Sgr itself \citep{2021ApJ...908..244D}, and the N-body models that connect the massive GC NGC 5139 ($\omega$ Centauri) and the associated tidal stream Fimbulthul \citep{2019NatAs.tmp..258I} require a progenitor with some level of internal rotation. These examples underscore the need for renewed efforts to explore how Pal 5's internal structure could affect the morphology, asymmetry and kinematics of its tidal tails, alongside further work on the role of external perturbations. \section{Conclusions} In this paper, we present the results of a spectroscopic study of the leading tail of the Pal 5 stream. We combine our measurements of line-strengths and radial velocities with Gaia EDR3 astrometry and PS1 photometry to derive membership probabilities following a log-likelihood ratio approach. We identify 16 {\it bona fide} member stars, of which four were previously known; eight lie in a previously unexplored part of the leading tail. The sky locations of these stars confirm the presence of fanning in this part of the stream, as recently suggested by \citet{2020ApJ...889...70B}. We also revisit previous radial velocity studies of the Pal 5 stream and clean them of contaminants using astrometric measurements from Gaia EDR3. Combined with the new measurements presented in this paper, this yields a sample of 109 {\it bona fide} members of the Pal 5 GC and its tidal tails.
We fit the radial velocities and PMs of this sample as a function of the stream coordinate $\phi_1$ with linear and quadratic models. We find a linear radial velocity gradient of $-0.81 \pm 0.14$ km s$^{-1}$ deg$^{-1}$ across the extent of the stream, in keeping with previous studies that probed a more limited angular extent. We also find a stream velocity dispersion of $2.17\pm0.27$ km s$^{-1}$, in agreement with previous results. The quadratic fits across the debris return very similar results. We provide our catalogue in the hope that it can be used for future modelling work, and stress the importance of such work examining the role of both internal and external factors on the stream properties. Most work to date has focused on external factors alone (e.g. dark matter sub-halo impacts, the influence of a rotating bar), but thus far none of these scenarios can explain the entirety of the current set of observations. Ultimately, more observations of this iconic disrupting star cluster system are required. Deep photometry beyond the currently-known stream extent will allow constraints to be placed on the presence of fanning in the trailing tail, and on whether the leading tail really does peter out in a low-surface-brightness fan or reappears at more southern declinations \citep[e.g.][]{2017NatAs...1..633P}. Further spectroscopic measurements, especially in large areas adjacent to and beyond the currently-known extent of the tails, will also be critical for confirming membership in these extremely contaminated parts of the sky and for mapping out radial velocity and velocity dispersion trends. Fortuitously, Pal 5's equatorial location means that it will be an accessible target for both the upcoming WEAVE \citep{2012SPIE.8446E..0PD} and 4MOST \citep{2019Msngr.175....3D} large multi-object spectrograph surveys, which are set to begin operations within the next 2--3 years.
\section*{Acknowledgements} This work makes use of the following software packages: \textsc{astropy} \citep{2013A&A...558A..33A,2018AJ....156..123A}, \textsc{dustmaps} \citep{2018JOSS....3..695M}, \textsc{Gala} \citep{gala,adrian_price_whelan_2020_4159870}, \textsc{matplotlib} \citep{Hunter:2007}, \textsc{numpy} \citep{2011CSE....13b..22V}, \textsc{PyAstronomy} \citep{pya}, \textsc{scipy} \citep{2020SciPy-NMeth}, \textsc{SpecUtils} \citep{2020zndo...3718589E}. PBK is grateful for support through a Commonwealth Rutherford Fellowship from the Commonwealth Scholarship Commission in the UK. ALV and PBK acknowledge support from a UKRI Future Leaders Fellowship (MR/S018859/1). Based on data acquired at the Anglo-Australian Telescope under program A/2017A/102 (Opticon 17A/063). We acknowledge the traditional custodians of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 730890. This material reflects only the authors' views and the Commission is not liable for any use that may be made of the information contained therein. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.\\ The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. \textit{Facility}: AAT (AAOmega) \section*{Data Availability} The list of members of Pal~5 which underpins this article and corresponding figures are available in an extended version of Table 4, provided as online supplementary material. \bibliographystyle{mnras}
train/arxiv
BkiUevTxK3YB9raX2CcQ
5
1
\section{Introduction} In a cluster of galaxies, the galactic interstellar medium can be transferred into intergalactic space by various processes \citep[and references therein]{Boselli2006}. The fate of the gas is, however, still not fully understood. Some of the gas would cool to form stars in intergalactic space; some would accrete back onto the parent galaxy or onto other galaxies in the cluster; and some would be heated to become hot plasma in the cluster. If the gas is heated to $\sim 10^7$ K, it would be observed in X-rays. Indeed, such X-ray tails associated with galaxies have been reported by many studies \citep[e.g.,][]{Iwasawa2003,Wang2004,Machacek2005,Sun2005,Sun2006,Fujita2006,Machacek2006,Sun2007,Sun2010,Wezgowiec2011,Gu2013}. Gas in the ionization-recombination balance phase would be observed in H$\alpha $. H$\alpha $\ ``tails'' of galaxies have also been reported by many studies \citep[e.g.,][]{Gavazzi2000, Gavazzi2001,Yoshida2002,Sun2007,Yagi2007,Kenney2008, Sun2010,Smith2010,Yagi2010,Fossati2012}. If the gas remains cool, it would be observed in the radio as HI\ tails \citep[e.g.,][]{Oosterloo2005,Chung2007,Vollmer2007} or molecular clouds \citep[e.g.,][]{Vollmer2008,Vollmer2012}. Several studies have presented models and simulations of the evolution of the gas in galaxies \citep[e.g.,][]{Roediger2006,Roediger2007,Roediger2008,Kepferer2008,Tonnesen2012}. Note that the coexistence of the X-ray, H$\alpha $, and HI\ phases in a tail is rare \citep{Roediger2009,Sun2010}. NGC~4388 in the Virgo cluster is one of these rare examples. From deep H$\alpha $\ imaging, \citet{Yoshida2002} discovered a very extended ionized region near the galaxy. The region was spectroscopically observed with the Faint Object Camera and Spectrograph \citep[FOCAS;][]{Kashikawa2002} at the Subaru Telescope \citep{Yoshida2004}. \citet{Yoshida2004} confirmed that the region has a recession velocity comparable to that of NGC~4388.
They also argued that the H$\alpha $\ tail would have been made by ram-pressure stripping \citep{Gunn1972,Fujita1999}. \citet{Oosterloo2005} observed the region in HI\ 21cm with the Westerbork Synthesis Radio Telescope and found that the HI\ tail extends to $>120$ kpc from NGC~4388. \citet{Kenney2008} discovered another group of H$\alpha $\ clouds between NGC~4438 to the east and M86 to the west. The clouds have heliocentric recession velocities of $\sim 0$ km s$^{-1}$. Some of the NGC~4438-M86 H$\alpha $\ clouds overlap the extended gas of NGC~4388, but the difference in heliocentric recession velocity enables us to distinguish them. Recently, the association of X-ray gas with the tail near NGC~4388 was reported by \citet{Wezgowiec2011}. The origin of the tail of NGC~4388 has been discussed in several studies. \citet{Vollmer2009} assumed that NGC~4388 is about 150~Myr past its closest approach to the cluster center, i.e., M87, and calculated the tangential velocity and the line-of-sight distance. They also mentioned the possibility that the tail was formed by the ram pressure from the intracluster medium (ICM) of the M86 group. In the first paper of this series \citep[hereafter paper I]{Gu2013}, we reported that the HI\ gas accompanies hot X-ray gas even $\sim$ 100 kpc away from NGC~4388, and that NGC~4388 and its HI\ tail lie in front of the M86 system. We compared the effect of the ICM of the M86 group with that of the Virgo cluster (M87), and concluded that the tail would have been made by ram-pressure stripping by the ICM of the Virgo cluster. In this paper, we report the identification of four intracluster/intergalactic star-forming regions in the Virgo cluster, which are associated with the HI\ tail of NGC~4388.
Intergalactic star formation in gas stripped from a galaxy has been reported by several authors \citep{Owen2006,Sun2007, Cortese2007,Yoshida2008,Sun2010,Sivanandam2010,Smith2010, Hester2010,Fumagalli2011,Abramson2011, Fossati2012,Yoshida2012,ArrigoniBattaia2012,Boissier2012,Ohyama2013}. Besides those in tails, isolated star-forming regions in the Virgo cluster have also been reported \citep{Gerhard2002,Arnaboldi2003,Cortese2003,Cortese2004}. \citet{Kronberger2008} and \citet{Kapferer2009} simulated the ram-pressure stripping of a galaxy and the star formation in the tail. \citet{Kronberger2008} showed that a significant fraction of stars would be formed in the tail, and \citet{Kapferer2009} predicted that young stars would be detectable throughout the whole tail up to 400 kpc. \citet{Yamagami2011} showed that molecular clouds should be formed in the tail. The results in this paper can be directly compared with these model predictions. The structure of this paper is as follows. In Section 2 and Appendix A, we describe the data we used, including a new spectroscopic observation. The spectroscopic data are analysed in Section 3, where the physical parameters of the four H$\alpha $\ emitting regions are derived. The nature of the regions is discussed in Section 4 and summarized in Section 5. We adopt a distance to NGC~4388 and its tail of 16.7 Mpc \citep{Yoshida2002}. The distance modulus is $m-M=31.11$, and one arcsec corresponds to 81.0 pc. The magnitude system is the AB system \citep{Oke1983} unless otherwise noted. \section{Data} \subsection{Imaging data} For a detailed analysis of the H$\alpha $\ counterpart along the NGC~4388 tail, we retrieved data obtained with the Subaru Prime Focus Camera \citep[Suprime-Cam;][]{Miyazaki2002} around the HI\ tail in the N-A-L659 filter \citep[NA659 hereafter;][]{Yoshida2002,Okamura2002,Hayashino2003} and the R (W-C-RC)-band filter.
The center wavelength of NA659 was 6600\AA\ and the full width at half maximum (FWHM) was 100\AA, which corresponds to H$\alpha $\ at recession velocities of $v=1700\pm2300$ km s$^{-1}$. The NA659 imaging dataset was the same as that used in \citet{Yoshida2002}, but we reprocessed it from the raw data for this paper. Recently, the region was observed much deeper in the R-band; we used these data for the continuum subtraction. In addition, V (W-J-V), i (W-S-I+), and N-A-L503 \citep[NA503 hereafter;][]{Yoshida2002,Okamura2002,Hayashino2003} data were available. The center wavelength of NA503 was 5020\AA\ and the FWHM was 100\AA. NA503 covered [OIII]4959 and [OIII]5007 at recession velocities of $v=3680\pm3020$ and $v=780\pm3000$ km s$^{-1}$, respectively, and did not cover H$\beta $\ at the redshift of the Virgo cluster. Details of the Suprime-Cam data reduction are given in Appendix A, where the data at other wavelengths are also described. By eye inspection, we selected two fields for spectroscopic observation. We call the four clumps of H$\alpha $\ emitting regions HaR-1, 2, 3, and 4 from north to south (Figure \ref{fig:VirgoFC}). Their appearances are shown in Figures \ref{fig:stamps12} and \ref{fig:stamps34}, and their coordinates are given in Table \ref{tab:pos}. HaR-2 consists of several sub-clumps, which we call HaR-2a, 2b, 2c, and 2d, from north to south. Their multiband magnitudes are given in Tables \ref{tab:photom} and \ref{tab:photom2}. It should be noted that there are other H$\alpha $\ emitting regions around the tail; the sampling of H$\alpha $\ emitting regions in this study is not complete. \subsection{Spectroscopic data} \subsubsection{FOCAS Observation and data reduction} We carried out longslit spectroscopy of the two fields (HaR-1,2 and HaR-3,4) on 2013-03-03 (UT) with the Faint Object Camera and Spectrograph \citep[FOCAS;][]{Kashikawa2002} attached to the Subaru Telescope.
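The bandpass-to-velocity coverages quoted above can be reproduced from the non-relativistic Doppler relation $v \simeq c\,(\lambda-\lambda_{\rm rest})/\lambda_{\rm rest}$, with the half-width of the coverage set by FWHM/2. A minimal sketch (not part of the original analysis; rest wavelengths taken from standard line lists):

```python
# Doppler velocity coverage of a narrowband filter for a given emission line.
C_KMS = 299792.458  # speed of light in km/s

def velocity_coverage(center_aa, fwhm_aa, rest_aa):
    """Return (central velocity, half-width) in km/s for a filter bandpass."""
    v_center = C_KMS * (center_aa - rest_aa) / rest_aa
    half_width = C_KMS * (fwhm_aa / 2.0) / rest_aa
    return v_center, half_width

# NA659 (center 6600 A, FWHM 100 A) against Halpha (6562.8 A):
print(velocity_coverage(6600.0, 100.0, 6562.8))   # ~ (1700, 2280) km/s
# NA503 (center 5020 A, FWHM 100 A) against [OIII]5007 (5006.84 A):
print(velocity_coverage(5020.0, 100.0, 5006.84))  # ~ (790, 2990) km/s
```

The small differences from the quoted $\pm2300$ and $\pm3000$ km s$^{-1}$ half-widths reflect rounding in the text.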
We used a longslit of 0.8 arcsec width and the 300B grism without order-cutting filters. The dispersion was $\sim$ 1.38 \AA\ pixel$^{-1}$. The pixel scale of the FOCAS data was 0.207 arcsec along the slit. The data reduction was done in a standard manner: the overscan was subtracted and the frames were flat-fielded. Cosmic rays were removed with a Python implementation of the L.A.Cosmic package \citep{vanDokkum2001}\footnote{\url{http://obswww.unige.ch/~tewes/cosmics_dot_py/}}. The wavelength was calibrated using the sky emission lines and the thorium-argon comparison lamp obtained during the observation. The measured instrumental profile was $\sim$ 9 \AA\ in FWHM. Then, the sky emission lines and the absorption features of the sky background (mainly moonlight) were subtracted. The 2D spectra around H$\beta $\ and H$\alpha $\ are shown in Figure \ref{fig:2Dspec}. Feige~67 was observed with a longslit of 2.0 arcsec width as a spectrophotometric standard. The calibrated spectrum was obtained from the CALSPEC database at STScI\footnote{\url{ftp://ftp.stsci.edu/cdbs/current_calspec/}}. The atmospheric dispersion corrector (ADC) at the Cassegrain focus was not available during the observation, so a wavelength-dependent positional shift of the target across the slit could occur. Under our observing conditions, the differential atmospheric dispersion was negligible for HaR-3,4, while a $\sim$ 13\% loss of flux may exist in the worst case for HaR-1,2. We therefore regard H$\beta $/H$\alpha $\ of HaR-1 and 2 as possibly suffering a 13\% systematic error. \subsubsection{Spatial distribution and apertures} The spatial distribution of flux around H$\alpha $, [NII]6584, and the continuum is shown in Figure \ref{fig:xprof}. The H$\alpha $\ and [NII] profiles are not net emission-line fluxes but include continuum flux. In HaR-1 and 4, the H$\alpha $, [NII], and continuum profiles are similar. In HaR-3, the continuum shows a small gradient, which would be contamination from the neighboring galaxy at position $x\sim 21$.
In HaR-2, the spatial profile is rather complicated. At $x\sim 24$, H$\alpha $\ and [NII] have a peak, while the continuum does not. At $x\sim 25$, the continuum has a peak, and H$\alpha $\ and [NII] show a bump. Around $x\sim 27$, H$\alpha $\ and [NII] show a small peak, but the continuum is $\sim 0$. A second peak of the continuum is seen at slit position $x\sim 29$, where H$\alpha $\ and [NII] are not as strong as in the other parts. We therefore divided the aperture of HaR-2 into four apertures, which correspond to the clumps in the bottom panel of Figure \ref{fig:stamps12}. Spectra were then extracted at the H$\alpha $\ peaks with a Gaussian weight along the slit. The spectra are shown in Figure \ref{fig:spec}. In the figure, the wavelength dependence of the instrumental and atmospheric throughput is corrected, except for the possible slit loss. The neighboring galaxy between HaR-3 and 4 was found to be a background star-forming galaxy at $z=0.118$. \subsubsection{Measurement of the line strengths} The strengths of H$\alpha $, [NII]6584, [SII]6717,6731, [OIII]5007, and H$\beta $\ were measured by Gaussian fitting. In $6560\AA<\lambda<6820\AA$, H$\alpha $, [NII], and [SII] were fitted by multivariate least-squares fitting. We assumed that the background continuum is constant in the fitting region, that the redshift is the same for all the lines, and that the FWHM of the lines is the same (dominated by the instrumental profile). The strength of [NII]6548 was fixed to 1/3 of [NII]6584. The estimated redshifts are shown in Table \ref{tab:redshift}. In $4800\AA<\lambda<5100\AA$, H$\beta $\ and [OIII]5007 were measured in the same way. [OIII]4959 was assumed to be 1/3 of [OIII]5007. At HaR-2c and HaR-2d, the lines were too weak to be fitted well. Since the seeing size was comparable to the slit width, and the slit width was narrower than the spatial extent of the HaRs, the flux values from the spectra have little physical meaning, but the flux ratios are meaningful.
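The tied-Gaussian fitting described above can be sketched as follows. The spectrum, fluxes, and redshift here are synthetic and purely illustrative (the actual fit was done on the FOCAS data and also covers the [SII] doublet); the constraints are the ones stated in the text: constant continuum, one shared redshift, one shared width, and [NII]6548 fixed to 1/3 of [NII]6584.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic spectrum around Halpha; wavelengths in Angstrom.
rng = np.random.default_rng(0)
wave = np.arange(6560.0, 6820.0, 1.38)           # ~1.38 A/pixel, as in the text
REST = {"Ha": 6562.8, "NII6584": 6583.5, "NII6548": 6548.0}
SIGMA = 9.0 / 2.355                              # ~9 A FWHM instrumental profile

def model(p, w):
    """Constant continuum + Gaussians sharing one redshift and one width;
    [NII]6548 is tied to 1/3 of [NII]6584."""
    cont, z, f_ha, f_n2 = p
    g = lambda f, lam0: f * np.exp(-0.5 * ((w - lam0 * (1 + z)) / SIGMA) ** 2)
    return (cont + g(f_ha, REST["Ha"]) + g(f_n2, REST["NII6584"])
            + g(f_n2 / 3.0, REST["NII6548"]))

truth = (5.0, 0.0075, 40.0, 12.0)                # z ~ 2250 km/s; arbitrary fluxes
data = model(truth, wave) + rng.normal(0, 0.5, wave.size)

fit = least_squares(lambda p: model(p, wave) - data, x0=(0.0, 0.007, 10.0, 5.0))
cont, z, f_ha, f_n2 = fit.x
print(z, f_n2 / f_ha)                            # redshift and [NII]/Halpha ratio
```

The same machinery, applied to the observed spectra, yields the redshifts of Table \ref{tab:redshift} and the ratios of Table \ref{tab:lineratio}; the residual-bootstrap errors come from refitting perturbed realizations of the residuals.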
The ratios are shown in Table \ref{tab:lineratio}. The error of each parameter was estimated by a residual bootstrap method with 1000 realizations, except for H$\alpha $/H$\beta $. The error of H$\alpha $/H$\beta $\ was estimated from the errors of the H$\alpha $\ flux and the H$\beta $\ flux from the residual bootstrapping, assuming that their errors are independent. For HaR-1 and HaR-2, the 13\% error due to the possible differential atmospheric dispersion effect was also included. In Figure \ref{fig:lineratios}, diagnostic line-ratio plots are shown. All the regions have line ratios similar to those of HII\ regions. Moreover, since [OI]6300 was not detected in any of the regions, a possible contribution of shocks, as seen in the H$\alpha $\ knots of \citet{Yoshida2004}, is negligible in our targets. Note that [OI]6300 of the HaRs is not affected by the atmospheric [OI]6300 or [OI]6363 lines, because of the redshift. \section{Results} \subsection{Redshift and position} The HaRs have large ($>$2000 km s$^{-1}$) recession velocities. The recession velocity of NGC~4388 is also large, 2524 km s$^{-1}$ \citep{Lu1993}, and that of the HI\ tail is 2000 -- 2550 km s$^{-1}$ \citep{Oosterloo2005}. As the recession velocity of the Virgo cluster is 1079 km s$^{-1}$ \citep{Ebeling1998}, the peculiar velocity of NGC~4388 and its tail is $>900$ km s$^{-1}$. In the 48$\times$36 arcmin region shown in Figure \ref{fig:VirgoFC}, the number of galaxies in the NASA/IPAC Extragalactic Database (NED) with recession velocities larger than 1500 km s$^{-1}$ is seven. Two of them, AGC~226080 and VCC~956, have HI\ redshifts only. As the beam of the HI\ observation is large enough \citep[3.3$\times$3.8 arcmin;][]{Giovanelli2007} to include the HI\ tail of NGC~4388, their high velocities would be measurements of the tail through an accidental overlap.
The recession velocity of VCC~896 is uncertain, since it is inconsistent among \citet{Conselice2001}, the Sloan Digital Sky Survey Data Release 7 \citep[SDSS DR7;][]{DR7}\footnote{\url{http://cas.sdss.org/dr7/en/}} and SDSS DR9 \citep{DR9}\footnote{\url{http://skyserver.sdss3.org/dr9/}}. The remaining four have optical spectroscopic redshifts that are consistent among the literature. We mark these four in Figure \ref{fig:VirgoFC}. The HaRs are very likely to be associated with the HI\ tail of NGC~4388, because of their large recession velocities and their positions; HaR-1 and HaR-2 overlap the HI, HaR-3 and HaR-4 lie near the contour edge (Figure \ref{fig:VirgoFC}), and their nearest galaxy with a large recession velocity is NGC~4388. \subsection{Size and morphology} HaR-1 is a point source, and was recognized only in the narrowband. It is marginally detected in the NUV and in optical broadbands that include strong emission lines. Since the seeing size in the NA659 data was about 0.75 arcsec, which corresponds to 60 pc, the size of HaR-1 would be smaller than 60 pc. HaR-2 shows a blobby and elongated appearance. Its size was 530 $\times$ 210 pc. HaR-2ab and HaR-2d were recognized separately in the optical and infrared images, while they were merged in the UV image due to the 6 arcsec resolution of the Galaxy Evolution Explorer (GALEX). HaR-3 had a slightly elongated shape toward HaR-4 and showed a sign of a tail with a blob. The tail was contaminated by the light from the neighboring background galaxy, and its length was uncertain. The size of HaR-3 including the tail was $>$300 $\times$ 180 pc. HaR-4 showed a possible multi-core or elongated core surrounded by a blob and an extended plume (Figure \ref{fig:stamps34}). Its size including the plume was 380 $\times$ 360 pc. It is interesting that HaR-3 had an H$\alpha $\ tail toward NGC~4388. In HaR-2ab and HaR-4, the tail/plume was on the far side of NGC~4388.
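The angular-to-physical conversions used for these sizes follow directly from the adopted distance of 16.7 Mpc (Section 1); a minimal sketch, using the small-angle approximation:

```python
import math

D_PC = 16.7e6                                # adopted distance to NGC 4388, in pc
ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)   # one arcsec in radians

scale_pc_per_arcsec = D_PC * ARCSEC_IN_RAD   # physical scale per arcsec
dist_modulus = 5.0 * math.log10(D_PC / 10.0)

print(scale_pc_per_arcsec)         # ~ 81.0 pc per arcsec, as quoted in Section 1
print(dist_modulus)                # ~ 31.11
print(0.75 * scale_pc_per_arcsec)  # NA659 seeing of 0.75 arcsec -> ~ 60 pc
```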
The distance between HaR-3 and HaR-4 is 13 arcsec (1 kpc), and the difference between the recession velocities of HaR-3 and 4 is 30 km s$^{-1}$. Even if the masses of HaR-3 and 4 were $\sim 10^6$ M$_{\odot}$, they would not be gravitationally bound. \subsection{Internal extinction and H$\alpha $\ luminosity} The internal extinction was estimated from the Balmer decrement H$\alpha $/H$\beta $\ in our FOCAS spectra. We adopted the formula by \citet{Calzetti1994}, \begin{equation} E(B-V)\simeq 0.935 \: \ln((H\alpha/H\beta)/2.88). \end{equation} The estimated internal extinction is shown in Table \ref{tab:Haflux}. In our observation, the seeing size was comparable to the slit width; we therefore did not attempt an absolute spectrophotometric calibration from the spectra alone. Instead, the spectra were used to convert the NA659 magnitude into the total flux of H$\alpha $. The extinction-corrected H$\alpha $\ luminosities are given in Table \ref{tab:Haflux}. \subsection{Metallicity} \citet{Kewley2002} presented model curves to estimate the oxygen abundance (log(O/H)+12) using several combinations of emission lines with various values of the ionization parameter ($q$). We can use [NII]/[SII], [NII]/H$\alpha $, and [NII]/[OIII] to estimate the metallicity. [NII]/[OIII] was corrected for the internal extinction using the Balmer decrement, while the other two ratios were not corrected. $q$ was also estimated from the curves. From the fitting, the metallicity of the regions was estimated as log(O/H)+12$\sim$8.6--8.7, which is comparable to the solar abundance \citep{Grevesse2010}. \citet{Yoshida2004} reported that the metallicity of the H$\alpha $-emitting filaments near NGC~4388 is also almost at the solar value. The ionization parameters correspond to $\log(U)=-2.6$, $-3.5$, $-3.2$, and $-3.5$ for HaR-1, 2a, 3, and 4, respectively, where $U$ is the ratio of the ionizing photon density to the electron density.
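The extinction estimate follows directly from the \citet{Calzetti1994} relation quoted above; a minimal helper (2.88 is the intrinsic case-B Balmer decrement assumed there):

```python
import math

def ebv_from_balmer(ha_over_hb, intrinsic=2.88):
    """E(B-V) from the observed Halpha/Hbeta ratio (Calzetti et al. 1994)."""
    return 0.935 * math.log(ha_over_hb / intrinsic)

print(ebv_from_balmer(2.88))  # 0.0: no reddening at the intrinsic ratio
print(ebv_from_balmer(5.41))  # ~ 0.59, close to the value derived for HaR-1
```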
Since $\log(U)$ decreases with the age of an HII\ region, the relatively high value of HaR-1 implies that HaR-1 would be younger. \subsection{Electron density} From Figure 5.8 of \citet{Osterbrock2006}, \citet{ODell2013} derived a formula to estimate the electron density ($n_e$), \begin{equation} \log (n_e) = 4.705 - 1.9875 \: \left( {\rm [SII]6716/[SII]6731} \right), \end{equation} which is applicable for 0.65$<$[SII]6716/[SII]6731$<$1.3. In this study, the error of [SII]6717/[SII]6731 was large, and all the regions except HaR-1 were consistent with $n_e \lesssim 10^2$ cm$^{-3}$. HaR-1 may have a higher electron density of $n_e \sim 10^3$ cm$^{-3}$. \subsection{Mass and Age estimation} \label{sec:mass-age} We calculated model magnitudes and H$\alpha $\ fluxes at different ages and compared them with the observations. The details of the model are given in Appendix \ref{sec:agemassdetail}. In Figure \ref{fig:SB99}, we plot the model age versus the model mass required to reproduce the observations. The abscissa is the logarithmic age from the burst. The ordinate is the stellar mass of the HaR required to reproduce the observed magnitude/luminosity at each age. Because the time evolution of the H$\alpha $\ luminosity and of the UV, optical, and MIR magnitudes differs, the crossing point of the tracks indicates the stellar mass and the age. The H$\alpha $, IR, and UV tracks crossed around $\log(T)=6.8$ and $\log({\rm mass})=4.3$ in the Geneva model for HaR-2, HaR-3, and HaR-4, while the i-band track had an offset of $\sim$ 0.4 dex, which corresponds to $\sim$1 magnitude and a factor of $\sim$2.5. Though the consistency in the Padova model was not as good as that of the Geneva model, the best-fit mass and age were comparable. The disagreement of the estimated mass and age among the data might be explained partly by the difference in aperture sizes.
As the UV resolution was the worst, it required the largest aperture ($r=6$ arcsec) and may include contaminants, which would make the observed luminosity larger and shift the lines in Figure \ref{fig:SB99} upward. However, the IR had a smaller aperture ($r=3$ arcsec) than the UV, and H$\alpha $\ was based on NA659 photometry, which was aperture photometry within 2.5 $\times$ the Kron radius ($r_K$). The 2.5$r_K$ aperture was 2$\pm$0.3 arcsec in these data, and was therefore the smallest aperture, because of the best seeing. The Suprime-Cam i-band and Kitt Peak National Observatory (KPNO) mosaic ha-band were also measured in apertures of 2.5$r_K$ radius, which was 2--3 arcsec. Nevertheless, H$\alpha $, UV, and IR showed good agreement, while the i-band and KPNO ha-band showed an offset in HaR-2 and HaR-3. As the Suprime-Cam i-band magnitude was comparable to the SDSS photometry for HaR-4, which is based on the Petrosian radius, and the KPNO ha-band photometry showed a similar trend, the disagreement would not be caused by photometric error. The disagreement could also come from model uncertainties and adopted assumptions, such as the initial mass function (IMF), instantaneous star formation, and/or the dust extinction model. A possible model is to divide the region into several sub-regions and to assume different extinctions. For example, if some part has a large extinction, the UV is suppressed, and the optical and IR fluxes could be larger relative to the UV and H$\alpha $. In fact, HaR-2 consisted of sub-clumps with various extinctions. Though we could consider various star-formation histories and models, such tuning is beyond the scope of this study. In Figure \ref{fig:SB99}, HaR-1 shows a large disagreement in mass and age among the data. Since HaR-1 would have a smaller mass, as expected from its fainter i-band magnitude, it may have suffered a stochastic effect on the IMF \citep[e.g.,][]{Koda2012}.
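The crossing-point argument behind Figure \ref{fig:SB99} can be sketched numerically. The model tracks below are toy power-law parametrizations and the observed luminosities are invented, not the STARBURST99 tracks and measurements actually used, so only the method (find the age at which all bands imply the same mass) carries over:

```python
import numpy as np

# Toy per-solar-mass model log-luminosities on an age grid (hypothetical slopes;
# the paper uses STARBURST99 Geneva/Padova tracks instead).
log_age = np.linspace(6.0, 8.0, 201)
model_loglum = {
    "Halpha": 34.0 - 2.5 * (log_age - 6.0),  # recombination lines fade fastest
    "NUV":    33.0 - 1.0 * (log_age - 6.0),
    "IR":     32.5 - 0.8 * (log_age - 6.0),
}

# Hypothetical observed log-luminosities for one region.
obs_loglum = {"Halpha": 36.3, "NUV": 36.1, "IR": 35.8}

# Required stellar mass at each age, per band: log M = log L_obs - log l_model.
req_logmass = np.array([obs_loglum[b] - model_loglum[b] for b in obs_loglum])

# The "crossing point" is the age where the band-to-band spread is smallest.
best = int(np.argmin(req_logmass.std(axis=0)))
print(log_age[best], req_logmass[:, best].mean())
```

With real tracks, the residual band-to-band spread at the best age quantifies the kind of disagreement (e.g., the $\sim 0.4$ dex i-band offset) discussed above.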
As shown in Table \ref{tab:redshift}, HaR-1 and 2 are 66 kpc, and HaR-3 and 4 are 35 kpc, away from NGC~4388. \citet{Oosterloo2005} and \citet{Vollmer2009} assumed that the age of the HI\ cloud is $\sim$ 200~Myr, and estimated that the projected velocity of NGC~4388 relative to the cloud was 500 km s$^{-1}$. If we adopt this value as the relative velocity of the HaRs to NGC~4388, it would take 130~Myr (HaR-1 and 2) and 70~Myr (HaR-3 and 4) to reach these distances. Since the time-scales are longer than the stellar ages estimated above, this suggests that the star formation in the HaRs began after the gas left the host galaxy. This is consistent with the prediction by \citet{Yamagami2011}. \section{Discussion} \subsection{Comparison with previous studies} Table \ref{tab:SFregions} summarizes the literature concerning stripped tails in nearby clusters of galaxies. Since the environments and physical conditions of the tails (the ambient gas temperature and density, the relative velocity, the density of the tail, etc.) show a variety, we cannot compare them directly. We can at least say that HaR-1 and 2 are the most distant star-forming regions in the table, and that HaR-1 may be the least massive region. Also, we can say that star-forming regions of $10^{4-5}$ M$_{\odot}$ and $\sim 10^7$ yr in a stripped tail are not peculiar objects. The age of the regions in \citet{Fumagalli2011} is substantially larger than the others, probably because a different model of the star-formation history was adopted. The mass estimation is not affected by the different star-formation history. The detection of the star-forming regions was based on H$\alpha $\ and/or UV emission, and therefore the sampling should be biased toward younger ages. \subsection{Comparison with HI\ distribution and recession velocity} The detailed heliocentric velocity distribution of HI\ \footnote{Provided by Dr. J. van Gorkom.
The beam of the data was elongated from north to south: 77 arcsec in declination and 24.5 arcsec in right ascension.} showed that the intensity of the HI\ at the HaR-1,2 field has a wide peak around a recession velocity of $\sim$ 2240 km s$^{-1}$ (intensity-weighted mean) with a width of $\sim$ 150 km s$^{-1}$. HaR-1 and 2 are just on this peak. This means that the star-forming regions had only a small peculiar velocity relative to the ambient HI\ cloud along the line of sight. Meanwhile, the HI\ at the HaR-3,4 field showed no significant peak. Since the star formation in a stripped tail is thought to occur in the highest-density regions \citep{Kapferer2009}, it is puzzling that the HI\ flux at the HaR-3,4 field was low. As we reported in paper I, the tail of NGC~4388 is accompanied by relatively cool X-ray gas, and the distribution of the X-ray emission showed a marginal enhancement downstream of HaR-3,4. The HI\ gas around HaR-3 and 4 might already have evaporated, except for the densest regions. Another possibility is that such dense clouds are less affected by the ram pressure and were left behind by the HI\ gas \citep{Yamagami2011}. \subsection{Positional offset of H$\alpha $\ and stars, and dark cloud around HaR-1 and HaR-2} The 2D spectrum and the slit profile of HaR-2 (Figures \ref{fig:2Dspec},\ref{fig:xprof}) show that the H$\alpha $-strong region (HaR-2a) appears to the left of the continuum region (HaR-2b). They resemble the {\it fireballs} found in the ram-pressure stripped tail of RB199 in the Coma cluster \citep{Yoshida2008,Yoshida2012}. If the gas clouds (HaR-2a, 2c) are now free from the ram pressure, the gas and stars would move similarly. On the other hand, because the ram pressure affects the gas and the stars differently, it would cause a differential motion between the gas and the stars. The positional shift between the gas and the stars implies that ram-pressure deceleration is still working here, 66 kpc away from the host galaxy.
Another possible explanation for the offset is that the gas has been consumed from the leading edge of the cloud, and the star formation has propagated to the trailing regions. In HaR-4, a plume also existed on the side away from the parent galaxy. In HaR-3, however, the tail appeared on the side toward the parent galaxy; the direction coincided with that toward HaR-4. The projected distance between HaR-3 and HaR-4 was about 1.0 kpc, and the difference in their recession velocities was 30 km s$^{-1}$. If their masses are $\sim 10^5$ M$_{\sun}$, the velocity difference is so large that they cannot be gravitationally bound. If HaR-3 and HaR-4 had experienced an encounter in the past, it would mean that such small-scale motions existed in the tail. Silhouetted against the stars of the M86 envelope, a dark filamentary cloud was recognized around HaR-1 and HaR-2. A high-contrast image in the V-band is shown in Figure \ref{fig:DC}. The cloud is also recognizable in the optical bands of Figure \ref{fig:stamps12}. The dark cloud shows an elongated and twisting shape. The length was $\sim$110 arcsec (7 kpc) and the width was $\sim$7 arcsec (600 pc) around HaR-2. It is uncertain whether the dark cloud is associated with the NGC~4388 tail, is an accidental overlap with a dark cloud of M86, or is yet another dark cloud. However, as such filaments of dark cloud are recognized only along the HI\ tail of NGC~4388, it is likely that the dark cloud is associated with the NGC~4388 HI\ tail. These filaments of the dark cloud do not show any H$\alpha $\ emitting regions except those associated with HaR-1 and HaR-2. As there are only a few star-forming regions along the tail, it is unlikely that the dust was created in the tail, and the origin of the dust should be NGC~4388. This implies that the dust was affected by the ram pressure.
The ram-pressure stripping of dust from the disks of galaxies in the Virgo cluster has been suggested by several authors: NGC~4402 by \citet{Crowl2005}, NGC~4438 by \citet{Cortese2010b}, and NGC~4330 by \citet{Abramson2011}. \citet{Cortese2010a} studied HI-deficient galaxies in the Virgo cluster, and found that dust is stripped in the cluster environment as well as gas. \citet{Yoshida2012} demonstrated the coincidence of the ram-pressure stripping of HI, H$\alpha $, and dust in IC~4040. The elongated and twisting shape of the dark cloud downstream of HaR-1 and HaR-2 may also be explained as a result of the ram pressure, which is still effective around the regions. In paper I, we showed evidence that M86 is more distant than the NGC~4388 tail. The total extinction of the cloud was estimated by assuming that the background M86 light is smooth. The estimated extinction in a region between HaR-1 and HaR-2 was $\sim$ 15\% and $\sim$ 10\% in the V-band and R-band, respectively. This corresponds to $E(B-V)\sim 0.05$. As the $E(B-V)$ estimated from the Balmer decrement in HaR-1 is much larger, $E(B-V)=0.59^{+0.13}_{-0.15}$, the dust in the dark filament would be locally dense around the star-forming regions. \subsection{Star formation in the clouds} Based on the argument by \citet{Yamagami2011}, we discuss the star formation in the molecular clouds that later shine as the observed H$\alpha $\ emitting regions. \citet{Elmegreen1997} derived the relation between the mass of a cloud $M_c$ and the star-formation efficiency $\epsilon$ when the cloud is under pressure $P_c$. We estimated in paper I that the ram pressure on NGC~4388 and the clouds should be $P_{\rm ram}\sim 4\times 10^{-12}\rm\: dyn\: cm^{-2}$. In Section \ref{sec:mass-age}, we found that the total mass of stars in each H$\alpha $\ emitting region is $M_{*}\sim 10^{4}$--$10^{4.5}\: M_{\odot}$.
Assuming that $P_c=P_{\rm ram}$, the mass of the molecular cloud before the stars were born was $M_c \sim 10^{5.5}\: M_{\odot}$, because $\epsilon\sim 0.1$ (lower part of Figure~4 in \citealt*{Elmegreen1997}). For this mass and pressure, the time-scale of star formation in the cloud is $\tau_{\rm form}\sim 10^7$~yr (upper part of Figure~4 in \citealt*{Elmegreen1997}; they referred to this time-scale as the `disruption time'), which is consistent with the age of the stars in the regions ($\sim 10^{6.8}$~yr). \citet{Yamagami2011} also discussed the disruption of the clouds by the Kelvin-Helmholtz (KH) instability via interaction with the ambient ICM. The relation among $M_c$, $P_c$, and the cloud radius $R_c$ before star formation starts is given by \begin{equation} \frac{M_c}{R_c^2}\sim 190\:{\rm M_{\odot}\: pc^{-2}} \left(\frac{P_c}{1.38\times 10^{-12}\rm\: erg\: cm^{-3}}\right)^{1/2} \end{equation} \citep{Elmegreen1989}. For the $M_c$ and $P_c$ we estimated, the cloud radius is $R_c\sim 43$~pc. The time-scale of the development of the KH instability is \begin{equation} \tau_{\rm KH}=\frac{(\rho_c+\rho_{\rm ICM})R_c} {(\rho_c\rho_{\rm ICM})^{1/2}v_c} \end{equation} where $\rho_c=3M_c/(4\pi R_c^3)$ is the density inside the cloud, and $v_c$ is the relative velocity between the cloud and the ICM \citep{Murray1993}. We estimated in paper I that $\rho_{\rm ICM}\sim 2.1\times 10^{-28}\rm\: g\: cm^{-3}$, and thus we obtain $\tau_{\rm KH}\sim 1.5\times 10^{7}$~yr. Since $\tau_{\rm KH} \sim \tau_{\rm form}$, the cloud could marginally form the stars before the KH instability develops. This may indicate that no magnetic fields are required to protect the cloud from the development of the KH instability \citep{Yamagami2011}. \subsection{Nature of HaR-1} From the H$\alpha $\ luminosity, the H$\alpha $\ photon emission rate of HaR-1 was 10$^{47.41}$ s$^{-1}$.
Assuming case-B recombination, the logarithm of the total hydrogen ionizing photon emission rate $Q_H$ was $\log(Q_H) \sim 47.8^{+0.15}_{-0.13}$. According to \citet{Sternberg2003}, a single B0V dwarf is enough to produce this photon rate, and other stars such as B giants and O dwarfs would produce too much H$\alpha $. However, a single B0V star is insufficient to reproduce the observed NUV magnitude. The absolute V magnitude of a B0V star is $-4$ \citep[e.g.,][]{Wegner2006}. The NUV$-$V color of a B0V star, calculated with the empirical spectral energy distributions (SEDs) by \citet{Pickles1998}, is $-1.0$ mag. The absolute NUV magnitude is therefore $\sim -5$. Meanwhile, the absolute NUV magnitude of HaR-1, converted from the observed NUV magnitude with the internal extinction correction, was $-8.8\pm0.7$ mag. The observed UV magnitude was $>3$ mag brighter. The UV-optical color was also inconsistent, as follows. We calculated the $NUV-i$ colors of various SEDs, and found that the bluest stars are O9V--B0V ($NUV-i\sim -1.7$). The observed lower limit on the $NUV-i$ color of HaR-1 was $NUV-i<-1.6$, and if we correct for the internal extinction of $E_{\rm gas}(B-V)=0.59$, the color is $NUV-i<-2.9$. The model SEDs, the internal extinction estimated from the Balmer decrement, the extinction law, and the factor of 0.44 (between stellar and nebular extinction) were therefore inconsistent with the observed color. \section{Summary} In the HI\ tail of NGC~4388 in the Virgo cluster, we identified four star-forming systems at projected distances of 35 kpc and 66 kpc from the parent galaxy. The main results are as follows: \begin{enumerate} \item The line ratios show typical HII\ regions of solar metallicity, with recession velocities comparable to that of the HI\ tail, which supports the idea that they are associated with the tail. \item The H$\alpha $\ luminosity and multiband photometry were fitted with a burst model (STARBURST99), and we obtained reasonable solutions within 0.3 dex.
From the fitting, the masses are 10$^{4-4.5}$ M$_{\odot}$ and the ages are $\sim 10^{6.8}$ years, except for the faintest one (HaR-1), which would have a mass smaller than $10^3$ M$_{\odot}$ and was not fitted well, possibly because of stochastic effects. \item Their young ages and large projected distances mean that they were formed after they were removed from the parent galaxy. This supports the theoretical prediction by \citet{Yamagami2011}. The estimated ages and masses, and the ram pressure estimated in paper I, suggest that a magnetic field to protect the clouds from the Kelvin-Helmholtz instability \citep{Yamagami2011} may not be necessary. \item One of the regions, 66 kpc away from NGC~4388 (HaR-2), shows an offset of the H$\alpha $\ emission from the stars (fireball feature). This implies that ram pressure is still effective there. \item Two of the regions (HaR-3,4) lie outside the HI\ distribution. Evaporation of the HI\ and/or different deceleration by ram pressure between the HI\ gas and the condensed regions may explain this result. \item Two of the regions (HaR-1,2) lie at the leading edge of a filamentary dark cloud. The cloud would have been stripped from NGC~4388 by ram pressure. The extinction of the dark cloud is smaller than the internal extinction of the regions, so the dust in the dark cloud would be locally dense around the star-forming regions. \end{enumerate} As the spectroscopic observation of this study was not a complete sampling of the H$\alpha $\ emitting regions along the tail, we did not investigate global features of the tail such as the total star-formation rate. A future complete survey of the candidate star-forming regions will set quantitative constraints on models of star formation in the tail. \acknowledgments We thank Drs. Tom Oosterloo and Jacqueline van Gorkom for kindly providing us their map of HI\ flux and the detailed HI\ recession velocity distribution around our targets.
We acknowledge the referee Dr. Giuseppe Gavazzi for his suggestions and comments. This work is based on observations obtained with the Subaru Telescope. We acknowledge the Subaru staff for support of the observations. This work has made use of the SDSS-III database, the NED database, the SMOKA archive, the STARS/MASTARS archive\footnote{\url{http://stars2.naoj.hawaii.edu/}}, the NOAO science archive, the Mikulski Archive for Space Telescopes (MAST), the NASA/IPAC Infrared Science Archive (IRSA), the MPA-JHU DR7 release of spectrum measurements\footnote{\url{http://www.mpa-garching.mpg.de/SDSS/DR7/}}, the Montage service by IRSA\footnote{Funded by NASA's Earth Science Technology Office, Computational Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. The code is maintained by the NASA/IPAC IRSA.}, and computer systems at the Astronomy Data Center of NAOJ. YF is supported by KAKENHI (23540308).
\section{Introduction} Let $G$ be a compact connected Lie group with Lie algebra $g$. Let $I(G)$ denote the ring of ${\mbox{\rm Ad}}(G)$-invariant polynomials on $g$. Then $I(G)^1:=\{f\in I(G)\:|\:f(0)=0\}$ is a maximal ideal of $I(G)$. By $\hat{I}(G)$ we denote the $I(G)^1$-adic completion of $I(G)$. We define $\tilde{I}(G):=\hat{I}(G)/{\bf C} 1$. Let $M$ be a closed oriented $G$-manifold. Then Lott \cite{lott94} defined the equivariant higher analytic torsion $T(M)$ of $M$ (see Def. \ref{tvonm}). To be precise, in \cite{lott94}, Def. 2, he defined an element $T(M,g^M,F)\in \hat{I}(g)$, where $g^M$ is a $G$-equivariant Riemannian metric and $F$ is an equivariant flat Hermitian vector bundle with trivial momentum map (\cite{lott94}, (14)). In our case for $F$ we take the trivial flat Hermitian bundle $F:=M\times {\bf C}$, where $G$ acts on the first factor. By \cite{lott94}, Cor. 1, the class $T(M):=[T(M,g^M,M\times {\bf C})]\in \tilde{I}(g)$ is independent of $g^M$. By definition $T(M)$ is a differential topological invariant of the $G$-manifold $M$. If $M$ is even-dimensional, then by \cite{lott94}, Prop. 9, we have $T(M)=0$. Let ${\rm Or }(G)$ denote the orbit category of $G$ (see L\"uck \cite{lueck89}, Def. 8.16), and let $U(G)$ be the Euler ring of $G$ (\cite{lueck89}, Def. 5.10). By \cite{lueck89}, Prop. 5.13, we can identify $$U(G)=\prod_{[G/H]\in {\rm Or }(G)} {\bf Z} [G/H]\ ,$$ where the product runs over all isomorphism classes of objects of ${\rm Or }(G)$. If $X$ is a $G$-space of the $G$-homotopy type of a finite $G$-CW complex, then we can define its equivariant Euler characteristic $\chi_G(X)\in U(G)$. If $\{E_\alpha\}$ is the finite collection of $G$-cells of $X$, then $$\chi_G(X):=\sum_\alpha (-1)^{\dim(E_\alpha)} [G/t(E_\alpha)]\ ,$$ where $t(E)=H$ is the type of the cell $E=G/H\times D^{\dim(E)}$ (see \cite{lueck89}, Lemma 5.6).
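For example (a sketch of the simplest case, using only the definitions above): if $G=S^1$ acts on itself by left translation, then $S^1$ consists of a single free $G$-cell $G/\{e\}\times D^0$, and hence $$\chi_{S^1}(S^1)=[S^1/\{e\}]\ .$$ In particular the coefficient at $[S^1/S^1]$ vanishes, in accordance with $\chi(S^1)=0$.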
Any compact $G$-manifold has the $G$-homotopy type of a finite $G$-CW complex (\cite{lueck89}, 4.36), and thus $\chi_G(M)\in U(G)$ is well defined. In the present note we define a homomorphism $T_G:U(G)\rightarrow \tilde{I}(G)$ (Lemma \ref{tge}), such that our main result can be formulated as follows. \begin{theorem}\label{main} Let $G$ be a compact connected Lie group. If $M$ is a closed oriented $G$-manifold, then $$T(M)=T_G \: \chi_G(M)\ .$$ \end{theorem} This theorem essentially answers the question posed by Lott \cite{lott94}, Note 4. As we will see below it can be employed to compute $T(M)$ effectively. Let $H\subset G$ be a closed subgroup. Then by \cite{lueck89}, 7.25 and 7.27, there is a restriction map ${\rm res}^G_H:U(G)\rightarrow U(H)$ such that ${\rm res}^G_H \chi_G(M)=\chi_H({\rm res}^G_H\:M)$ for any compact $G$-manifold $M$, where ${\rm res}^G_H(M)$ denotes $M$ with the induced action of $H$. The inclusion $h\hookrightarrow g$ induces a map ${\rm res}^G_H:\tilde{I}(G)\rightarrow \tilde{I}(H)$. It is an immediate consequence of Definition \ref{tvonm} of $T(M)$ that \begin{equation}\label{tz1}{\rm res}^G_H T(M)= T({\rm res}^G_H \: M)\ .\end{equation} This is compatible with \begin{equation}\label{tz2} {\rm res}^G_H \circ T_G = T_H\circ {\rm res}^G_H \ . \end{equation} Let $S(G)\subset {\rm Or }(G)$ be the full subcategory with objects $G/H$, where $H$ is isomorphic to $S^1$. By Corollary \ref{ress1} the collection ${\rm res}^G_H T(M)$, $G/H\in S(G)$, determines $T(M)$. In order to compute $T(M)$ it is thus sufficient to define $T_{S^1}:U(S^1)\rightarrow \tilde{I}(S^1)$. If $H\subset G$ is isomorphic to $S^1$, then $T_H$ is defined, and we have $${\rm res}^G_H T(M)=T({\rm res}^G_H M)=T_H \chi_H {\rm res}^G_H(M)=T_H {\rm res}^G_H \chi_G(M)\ .$$ In order to give an explicit formula for $T(M)$ in terms of the $G$-homotopy type of $M$ it remains to give the formula for $T_{S^1}$.
Since $T_{S^1}$ has to satisfy Theorem \ref{main}, we are forced to put \begin{eqnarray}\label{ts1} T_{S^1}([S^1/S^1])&=&T(*)=0\\ T_{S^1}([S^1/H])&=&T(S^1/H), \quad H\not=S^1\nonumber\ . \end{eqnarray} For $n\in {\bf N}$ let $F_n:S^1\rightarrow S^1$ be the $n$-fold covering. The derivative $F_{n*}$ of $F_n$ at $1\in S^1$ is multiplication by $n$. By $\tilde{F}_n:\tilde{I}(S^1)\rightarrow \tilde{I}(S^1)$ we denote the induced map. If $H\subset S^1$ is different from $S^1$, then it is a cyclic subgroup of finite order $|H|$. It is again an easy consequence of Definition \ref{tvonm} of $T(M)$ that \begin{equation} \label{uy1} T(S^1/H)=\tilde{F}_{|H|} T(S^1)\ .\end{equation} Let $S^1:=\{z\in {\bf C}\:|\: |z|=1\}$. We identify $s^1\cong {\bf R}$ such that the exponential map is given by $\exp(y):={\rm e}^{iy}$. Then $I(S^1)={\bf C}[y]$, and we identify $\tilde{I}(S^1)\cong y{\bf C}[[y]]$. By \cite{lott94}, Prop. 11, we then have $$T(S^1)=2\sum_{k=1}^\infty \left(\begin{array}{c}4k\\2k\end{array}\right) {\rm Li}_{2k+1}(1)\left(\frac{y}{8\pi}\right)^{2k}\ ,$$ where $${\rm Li}_j(z):=\sum_{m=1}^\infty \frac{z^m}{m^j}\ .$$ It follows that $$T(S^1/H)=2\sum_{k=1}^\infty \left(\begin{array}{c}4k\\2k\end{array}\right) {\rm Li}_{2k+1}(1)\left(\frac{y|H|}{8\pi}\right)^{2k}\ .$$ We now discuss some consequences. \begin{lem} If $M$ is a closed oriented $S^1$-manifold, then $T(M)$ and $\chi(M)$ together determine $\chi_{S^1}(M)$. \end{lem} {\it Proof.$\:\:\:\:$} $\chi(M)$ is the coefficient at $[S^1/S^1]$ of $\chi_{S^1}(M)$. Let $\{H_1, \dots, H_l\}$ be the finite set of orbit types of $M$ with $H_i\not=S^1$. Since ${\rm Li}_{j}(1)\not=0$ for all $j\in {\bf N}$, $j\ge 2$, the torsion $T(M)$ determines the numbers $r_j:=\sum_{i=1}^l |H_i|^j$, $j\in 2{\bf N}$. But vice versa the numbers $r_j$ determine $|H_i|$ and therefore $H_i$, $i=1,\dots,l$.\hspace*{\fill}$\Box$ \\[0.5cm]\noindent \begin{lem}\label{torus} Let $T$ be a $k$-dimensional torus and $H\subset T$ be a closed subgroup.
If $\dim(T/H)\ge 2$, then $T(T/H)=0$, and if $\dim(T/H)=1$, then $T(T/H)=\tilde{P}(T(S^1))$, where $\tilde{P}:\tilde{I}(S^1)\rightarrow \tilde{I}(T)$ is induced by the projection $P:T\rightarrow T/H\cong S^1$. \end{lem} {\it Proof.$\:\:\:\:$} Let $R\in S(T)$. Then $\chi_R ({\rm res}^T_R T/H)=\chi((T/H)/R)[R/R\cap H]$. If $\dim(T/H)\ge 2$, then $(T/H)/R$ is a torus and $\chi((T/H)/R)=0$. If $\dim(T/H)=1$, then $\chi((T/H)/R)\not=0$ iff $(T/H)/R$ is a point. Thus $\chi_R({\rm res}^T_R T/H)=[R/R\cap H]$. The Lemma now follows from (\ref{tz2}) and (\ref{uy1}).\hspace*{\fill}$\Box$ \\[0.5cm]\noindent Let $G/K$ be a compact symmetric space associated to the Cartan involution $\theta$ of $G$. We fix a $\theta$-stable maximal torus $T\subset G$. Then $T\cap K=:S$ is a maximal torus of $K$. The rank of $G/K$ is by definition ${\mbox{\rm rank}}(G/K):=\dim(T)-\dim(S)$. Let $W_G(T)$ and $W_K(T)$ be the Weyl groups of $(G,T)$ and $(K,T)$. If ${\mbox{\rm rank}}(G/K)=1$, then for $w\in W_G(T)$ we have a projection $P_w:T\rightarrow T/S^w\cong S^1$, where $S^w=wSw^{-1}$. It induces a map $\tilde{P}_w:\tilde{I}(S^1)\rightarrow \tilde{I}(T)$. Since ${\rm res}^G_T:\tilde{I}(G)\rightarrow \tilde{I}(T)$ is injective, the following Lemma gives an explicit computation of $T(G/K)$. \begin{lem} If ${\mbox{\rm rank}}(G/K)\ge 2$, then $T(G/K)=0$, and if ${\mbox{\rm rank}}(G/K)=1$, then ${\rm res}^G_T T(G/K)=\sum_{W_G(T)/W_K(T)} \tilde{P}_w(T(S^1))$. \end{lem} {\it Proof.$\:\:\:\:$} Fix $S^1\cong R\subset T$. If $H\subset T$ is a closed subgroup, then $\chi_R({\rm res}^T_R T/H)=0$ except if $\dim(T/H)=1$. In \cite{bunke972} we have shown that $$\chi_T({\rm res}^G_T \:G/K)= \sum_{W_G(T)/W_K(T)} [T/S^w] + \mbox{higher-dimensional terms}\ .$$ Hence if ${\mbox{\rm rank}}(G/K)\ge 2$, then by Lemma \ref{torus} ${\rm res}^G_T T(G/K)=0$, and if ${\mbox{\rm rank}}(G/K)=1$, then $${\rm res}^G_T T(G/K)= \sum_{W_G(T)/W_K(T)} T(T/S^w)\ .$$ Applying Lemma \ref{torus} we obtain the desired result.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent We now briefly describe the contents of the remainder of the paper. In Section \ref{sumform} we prove our main analytic result, Theorem \ref{sumff}, saying that $T(M)$ is essentially additive. In Section \ref{covprod} we study the behaviour of $T(M)$ under coverings and with respect to Cartesian products. In Section \ref{formalities} we extend the analytic results to manifolds with corner singularities using certain formal considerations. In Section \ref{subgr} we show that $T(M)$ is determined by its restrictions to all subgroups $H\cong S^1$. In Section \ref{s1} we first prove Theorem \ref{main} for $G=S^1$, and then we construct $T_G$ and finish the proof of Theorem \ref{main} for general $G$. \section{Additivity of equivariant torsion}\label{sumform} We first recall the definition of higher equivariant torsion (\cite{lott94}, Def. 2). Let $G$ be a connected Lie group with Lie algebra $g$. Let $M$ be a closed oriented $G$-manifold. We write $\Omega(M):=C^\infty(M,\Lambda^*T^*M)$ and $d:\Omega(M)\rightarrow \Omega(M)$ for the differential of the de Rham complex. For $X\in g$ let $X^*\in C^\infty(M,TM)$ denote the corresponding fundamental vector field. We set $$I:=\sum_{\alpha} X^\alpha\otimes i_{X^*_\alpha}\in S(g^*)\otimes {\mbox{\rm End}}(\Omega(M))\ ,$$ where $X_\alpha\in g$ and $X^\alpha\in g^*$ run over a basis of $g$ and the dual basis of $g^*$, respectively, and $i_Y$ denotes interior multiplication by the vector field $Y$. We choose a $G$-invariant Riemannian metric $g^M$. It induces a pre-Hilbert space structure on $\Omega(M)$, and we let $e_Y$ be the adjoint of $i_Y$. We set $E:=\sum_\alpha X^\alpha\otimes e_{X^*_\alpha}$.
For $t>0$ we define $$d_t:=\sqrt{t}d-\frac{1}{4\sqrt{t}}I,\quad \delta_t:= \sqrt{t} d^*+\frac{1}{4\sqrt{t}}E\ .$$ Then we put \begin{equation}\label{hamil}D_t:=\delta_t-d_t\in S(g^*)\otimes {\mbox{\rm End}}(\Omega(M))\ .\end{equation} Let $S(g^*)^1:=\{f\in S(g^*)\:|\: f(0)=0\}$, and let $\hat{S}(g^*)$ be the $S(g^*)^1$-adic completion. Since $$D_t^2=-t \Delta \quad (\mbox{mod $S(g^*)^1\otimes {\mbox{\rm End}}(\Omega(M))$})$$ we can form $${\rm e}^{D_t^2} \in \hat{S}(g^*)\otimes {\mbox{\rm End}}(\Omega(M))\ .$$ Moreover we have $${\rm Tr}_s N {\rm e}^{D_t^2}\in \hat{I}(G)\ ,$$ where $N$ is the ${\bf Z}$-grading operator on $\Omega(M)$, and ${\rm Tr}_s$ is the ${\bf Z}_2$-graded trace on ${\mbox{\rm End}}(\Omega(M))$. Define $\chi^\prime(M):=\sum_{p=0}^\infty p (-1)^p \dim\: H^p(M,{\bf R})$. Then the function $$s\mapsto -\frac{1}{\Gamma(s)}\int_0^\infty ({\rm Tr}_s N {\rm e}^{D_t^2}-\chi^\prime(M)) t^{s-1} dt$$ is holomorphic for ${\rm Re }(s)\gg 0$, and it has a meromorphic continuation to all of ${\bf C}$ which is regular at $s=0$. \begin{ddd}\label{tvonm} The equivariant higher torsion $T(M)\in\tilde{I}(G)$ of the $G$-manifold $M$ is represented by $$-\frac{d}{ds}_{|s=0}\frac{1}{\Gamma(s)}\int_0^\infty ({\rm Tr}_s N {\rm e}^{D_t^2} -\chi^\prime(M)) t^{s-1} dt\ .$$ \end{ddd} If $M$ is odd-dimensional, then by \cite{lott94}, Cor. 1, $T(M)$ is independent of the choice of the $G$-invariant Riemannian metric $g^M$. If $M$ is even-dimensional, then by \cite{lott94}, Prop. 9, we have $T(M)=0$. Let $M$ be a closed oriented $G$-manifold, and let $N$ be a $G$-invariant oriented hypersurface such that $M\setminus N$ has two components, i.e. there are compact manifolds $M_1$, $M_2$ with boundary $\partial M_i=N$, $i=1,2$, such that $M=M_1\cup_N M_2$. We form the closed oriented $G$-manifolds $\tilde{M_i}:=M_i\cup_N M_i$, the doubles of $M_i$.
\begin{theorem}\label{sumff} $$2T(M)=T(\tilde{M}_1)+T(\tilde{M}_2)\ .$$ \end{theorem} {\it Proof.$\:\:\:\:$} We choose Riemannian metrics on $M$ and $\tilde{M}_i$, $i=1,2$. Then let $D_t$ and $D_{t,i}$, $i=1,2$, denote the operators (\ref{hamil}) for $M$ and $\tilde{M}_i$, respectively. We define $\delta(t)\in \hat{I}(G)$ by $$\delta(t):=2{\rm Tr}_s N{\rm e}^{D_t^2}-{\rm Tr}_s N{\rm e}^{D_{t,1}^2}- {\rm Tr}_s N{\rm e}^{D_{t,2}^2} -( 2\chi^\prime(M)-\chi^\prime(\tilde{M}_1)-\chi^\prime(\tilde{M}_2))\ .$$ We have to show that $$0=[-\frac{d}{ds}_{|s=0}\frac{1}{\Gamma(s)} \int_0^\infty \delta(t) t^{s-1} dt] \ ,$$ where $[\cdot]$ denotes the class in $\tilde{I}(G)$. We now specialize the choice of Riemannian metrics. We choose a $G$-invariant collar neighbourhood $(-1,1)\times N\hookrightarrow M$ such that $\{0\}\times N$ is mapped to $N$. Then we assume that $g^M$ is a product metric $dr^2+g^N$ on the collar. The metric $g^M$ induces natural Riemannian metrics $g^{\tilde{M}_i}$ on $\tilde{M}_i$. For $R>1$ let $g^M(R)$ be the Riemannian metric which coincides with $g^M$ outside the collar, and which is such that the collar is isometric to $(-R,R)\times N$. Similarly we obtain metrics $g^{\tilde{M}_i}(R)$ on $\tilde{M}_i$. Let $\delta(t,R)$ be defined with respect to these choices of metrics. While $\delta(t,R)$ may depend on $R$, it is known that $$[-\frac{d}{ds}_{|s=0}\frac{1}{\Gamma(s)} \int_0^\infty \delta(t,R) t^{s-1} dt]\in \tilde{I}(G)$$ is independent of $R$. The proof of the theorem is obtained by studying the behaviour of $\delta(t,R)$ as $R$ tends to infinity. Note that $\hat{I}(G)$ is a locally convex topological vector space. \begin{prop}\label{pr1} For any seminorm $|.|$ on $\hat{I}(G)$ there are constants $C<\infty$, $c>0$ such that for all $t>0$, $R>1$ $$|\delta(t,R)|<C{\rm e}^{-\frac{c R^2}{t}}\ .$$ \end{prop} {\it Proof.$\:\:\:\:$} This follows from a standard argument using the finite propagation speed method \cite{cheegergromovtaylor82}.
We leave the details to the interested reader. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent Let $I(G)_1\subset \hat{I}(G)$ be the closed subspace of at most linear invariant polynomials on $g$ and put $\check{I}(G):=\hat{I}(G)/I(G)_1$. By $[[.]]$ we denote classes in this topological quotient space. \begin{prop}\label{pr2} For any seminorm $|.|$ on $\check{I}(G)$ there is a constant $C<\infty$ such that for all $R>1$, $t>1$ $$|[[\delta(t,R)]]| < C t^{-1} R \ .$$ \end{prop} {\it Proof.$\:\:\:\:$} This is a consequence of the more general estimate \begin{equation}\label{est}|[[{\rm Tr}_s N{\rm e}^{D_t(R)^2}]]|< C t^{-1} R \end{equation} which also holds for $M$ replaced by $\tilde{M}_i$. Here $D_t(R)$ denotes the operator (\ref{hamil}) associated to $g^M(R)$. We can assume that $|.|$ is the restriction to $\check{I}(G)$ of a seminorm on $\hat{S}(g^*)/S_1(g^*)$, where $S_1(g^*)$ denotes the subspace ${\bf C}\oplus g^*$. There is an $m>0$ depending on $|.|$ such that $|[[U]]|=0$ for all $U\in \hat{S}(g^*)^{m}$. Let $\Delta(R)$ denote the Laplace operator on differential forms associated to the Riemannian metric $g^M(R)$. We have $$D_t^2(R)=-t\Delta(R) + {\cal N} + \frac{1}{t} {\cal N}_1\ ,$$ (to be precise we should write ${\cal N}(R)$, ${\cal N}_1(R)$) where \begin{eqnarray*} {\cal N}&:=&\frac{1}{4}[d^*(R)-d,E+I]\\ {\cal N}_1&:=&\frac{1}{16} [I,E]\ , \end{eqnarray*} (the commutators are understood in the graded sense) belong to $S(g^*)^1\otimes {\mbox{\rm End}}(\Omega(M))$.
As in \cite{berlinegetzlervergne92}, 9.46, we write \begin{eqnarray} {\rm Tr}_s N{\rm e}^{D_t(R)^2}&=&\sum_{k=0}^\infty \int_{\Delta_k} U_k(\sigma,R) d\sigma \ , \label{volt} \\ U_k(\sigma,R)&:=&{\rm Tr}_s N {\rm e}^{-t\sigma_0 \Delta(R)}({\cal N} + \frac{1}{t} {\cal N}_1)\dots ({\cal N} + \frac{1}{t} {\cal N}_1){\rm e}^{-t\sigma_k \Delta(R)}\ ,\nonumber \end{eqnarray} where $\Delta_k\subset {\bf R}^{k+1}$ denotes the standard simplex such that $\Delta_k\ni \sigma=(\sigma_0,\dots,\sigma_k)$ satisfies $\sum_{i=0}^k \sigma_i=1$. The Riemannian metric $g^M(R)$ induces a pre-Hilbert space structure on $\Omega(M)$. The trace norm $\|.\|_1$ (operator norm $\|.\|$) on ${\mbox{\rm End}}(\Omega(M))$ and $|.|$ together induce norms on $\hat{S}(g^*)/S_1(g^*)\otimes {\mbox{\rm End}}(\Omega(M))$, which we also denote by $\|.\|_1$ ($\|.\|$). \begin{lem}\label{trace} There is a constant $C<\infty$ such that for all $t>1$ and $R>1$ we have $$\|{\rm e}^{-t\Delta(R)}\|_1 < C R\ .$$ \end{lem} {\it Proof.$\:\:\:\:$} The operator ${\rm e}^{-t\Delta(R)}$ is positive. Thus $\|{\rm e}^{-t\Delta(R)}\|_1={\rm Tr}\: {\rm e}^{-t\Delta(R)}$. Let $W(t,x,y)(R)$ be the integral kernel of ${\rm e}^{-t\Delta(R)}$. The family $(M,g^M(R))$ of Riemannian manifolds has uniformly bounded geometry as $R$ varies in $[1,\infty)$, i.e. there are uniform curvature bounds, and the injectivity radius is uniformly bounded from below. Standard heat kernel estimates (see e.g. \cite{cheegergromovtaylor82}) imply that there is a constant $C_1<\infty$ such that for all $x\in M$, $t>1$, $R>1$ we have $|W(t,x,x)(R)| < C_1$. In particular, for some $C,C_2<\infty$ independent of $R>1$, $t>1$ we have \begin{eqnarray*} {\rm Tr}\: {\rm e}^{-t\Delta(R)}&=& \int_M {\mbox{\rm tr}}\: W(t,x,x)(R) {\rm vol}_{g^M(R)}(x)\\ &<& C_2 {\rm vol}_{g^M(R)}(M)\\ &<& C R \ . \end{eqnarray*} This finishes the proof of the lemma.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent \begin{lem}\label{ooo} There is a $C<\infty$ such that for all $R>1$ and $t,s> 0$ we have $$\|[[{\rm e}^{-t\Delta(R)}{\cal N}{\rm e}^{-s\Delta(R)}]]\| < C (t^{-1/2}+s^{-1/2})\ .$$ \end{lem} {\it Proof.$\:\:\:\:$} Since ${\cal N}=\frac{1}{4}[d^*(R)-d,E+I]$ and $\|E+I\|$ is uniformly bounded w.r.t. $R$ it suffices to show that there exists $C_1<\infty$ such that for all $R>1$ and $t>0$ we have $$\|{\rm e}^{-t\Delta(R)}d\|<C_1t^{-1/2},\quad \|{\rm e}^{-t\Delta(R)}d^*(R)\|<C_1t^{-1/2}\ .$$ We consider the first estimate. Note that $dd^*(R) + d^*(R)d=\Delta(R)$, and the ranges of $dd^*(R)$ and $d^*(R)d$ are perpendicular. Thus \begin{eqnarray*} \|{\rm e}^{-t\Delta(R)}d\|&=&\|{\rm e}^{-t\Delta(R)}dd^*(R){\rm e}^{-t\Delta(R)}\|^{1/2}\\ &\le&\|{\rm e}^{-t\Delta(R)}\Delta(R){\rm e}^{-t\Delta(R)}\|^{1/2}\\ &=&t^{-1/2} \|{\rm e}^{-t\Delta(R)}t\Delta(R){\rm e}^{-t\Delta(R)}\|^{1/2}\\ &\le&t^{-1/2} \left(\sup_{x\ge 0} x{\rm e}^{-2x}\right)^{1/2}\\ &\le&C_1 t^{-1/2}\ . \end{eqnarray*} \hspace*{\fill}$\Box$ \\[0.5cm]\noindent If $A$ is of trace class and $B$ is bounded, then we have $|{\rm Tr} \: AB|\le \|B\| \|A\|_1$. Note that $\|{\cal N}_1\|$ is uniformly bounded w.r.t. $R$. Applying this and Lemmas \ref{ooo} and \ref{trace} to $U_k$ we obtain $C, C_1 <\infty$ such that for all $R>1$ and $t>1$ we have \begin{eqnarray}|[[U_k(\sigma,R)]]| &<& C_1 R t^{-k/2} \sum_{i=0}^k \sigma_i^{-1/2} \nonumber\\ |[[\int_{\Delta_k} U_k(\sigma,R) d\sigma]]| &<& C t^{-k/2} R \ .\label{tz4}\end{eqnarray} Note that $|[[U_k]]|=0$ for $k>m$. In order to obtain (\ref{est}) from (\ref{volt}) and (\ref{tz4}) it remains to discuss $U_1$.
Since ${\cal N}$ is of degree one in $g^*$, its contribution to $[[U_1]]$ vanishes, and there exist $C,C_1<\infty$ such that for all $R>1$ and $t>1$ \begin{eqnarray*} |[[U_1(\sigma,R)]]|&=& |[[{\rm Tr}_s N {\rm e}^{-t\sigma_0 \Delta(R)}\frac{1}{t} {\cal N}_1 {\rm e}^{-t\sigma_1 \Delta(R)}]]|\\ &=& |[[{\rm Tr}_s N \frac{1}{t} {\cal N}_1 {\rm e}^{-t \Delta(R)}]]| \\ &<& C_1 R t^{-1}\\ |[[\int_{\Delta_1} U_1(\sigma,R) d\sigma]]| &< & C R t^{-1} \end{eqnarray*} This finishes the proof of the proposition.\hspace*{\fill}$\Box$ \\[0.5cm]\noindent We now continue with the proof of the theorem. Let $|.|$ be any seminorm on $\check{I}(G)$ as in the proof of Proposition \ref{pr2}. By Propositions \ref{pr1} and \ref{pr2} we can write $$\sigma(R):=-\frac{d}{ds}_{|s=0}\frac{1}{\Gamma(s)} \int_0^\infty [[\delta(t,R)]] t^{s-1} dt\ ,$$ and the integral converges at $t=0$ and $t=\infty$ uniformly in $s\in (-1/2,1/2)$. We can perform the derivative and obtain \begin{eqnarray*}\sigma(R) &=&-\int_0^\infty [[\delta(t,R)]] t^{-1} dt \\ &=&-\int_0^{R^{3/2}} [[\delta(t,R)]] t^{-1} dt - \int_{R^{3/2}}^\infty [[\delta(t,R)]] t^{-1} dt \ . \end{eqnarray*} By Proposition \ref{pr1} there are $C_1<\infty$, $c_1>0$ such that for all $R>1$ we have \begin{eqnarray*} |[[\int_0^{R^{3/2}} \delta(t,R) t^{-1} dt]]| &\le& \int_0^{R^{3/2}} C {\rm e}^{-\frac{c R^2}{t}} t^{-1} dt \\ &\le & C_1 {\rm e}^{-c_1 R^{1/2}}\ . \end{eqnarray*} Moreover by Proposition \ref{pr2} there is a $C<\infty$ such that for all $R>1$ \begin{eqnarray*} |[[\int_{R^{3/2}}^\infty \delta(t,R) t^{-1} dt]]| &\le& \int_{R^{3/2}}^\infty C R t^{-2} dt\\ &=& C R^{-1/2} \ . \end{eqnarray*} We now let $R$ tend to infinity and take into account that $\sigma(R)$ is independent of $R$ in order to conclude that \begin{equation}\label{rrr1}\sigma(R)=0\ .\end{equation} We have shown that $[[T(M)]]=[[T(\tilde{M}_1)]]+[[T(\tilde{M}_2)]]$. We now consider the remaining component $T_1(M)\in I(G)_1/{\bf C} 1$.
Note that ${\cal N}=-\frac{1}{2}L + [d,E]+[d^*,I]$, where $L:=\sum_\alpha X^\alpha\otimes L_{X_\alpha^*}$ and $L_Y$ denotes the Lie derivative with respect to the vector field $Y$. Since $[d,E]$ and $[d^*,I]$ shift the form degree by $\pm 2$ we obtain $${\rm Tr}_s N {\cal N} {\rm e}^{-t \Delta}= - \frac{1}{2}{\rm Tr}_s N L {\rm e}^{-t \Delta}\ .$$ Let $\rho_{an}(M,g^M):G\rightarrow {\bf C}$ denote the equivariant analytic torsion defined in \cite{lottrothenberg91}, $$\rho_{an}(M,g^M)(g):= -\frac{d}{ds}_{|s=0}\frac{1}{\Gamma(s)}\int_0^\infty ({\rm Tr}_s N g{\rm e}^{-t\Delta} -\chi^\prime(M)) t^{s-1} dt \ .$$ If we define $$\delta(t,g):= 2{\rm Tr}_s N g{\rm e}^{-t\Delta}-{\rm Tr}_s N g{\rm e}^{-t\Delta_1}- {\rm Tr}_s N g{\rm e}^{-t\Delta_2} -( 2\chi^\prime(M)-\chi^\prime(\tilde{M}_1)-\chi^\prime(\tilde{M}_2))\ , $$ then there are $C<\infty$, $c>0$ such that for all $g\in G$ \begin{eqnarray*} |\delta(t,g)|\le C {\rm e}^{-\frac{c}{t}}&&\forall t\in (0,1]\\ |\delta(t,g)|\le C{\rm e}^{-ct}&&\forall t\in [1,\infty)\ .\end{eqnarray*} The first estimate is again a consequence of the finite propagation speed method \cite{cheegergromovtaylor82}. Similar estimates hold for the derivative of $\delta(t,g)$ w.r.t. $g$. We have $$\sigma_1(g):=-\int_0^\infty \delta(t,g) t^{-1} dt=2\rho_{an}(M,g^M)-\rho_{an}(\tilde{M}_1,g^{\tilde{M}_1})-\rho_{an}(\tilde{M}_2,g^{\tilde{M}_2})\ .$$ On the one hand, in \cite{bunke972} we have shown that on the dense subset of $G$ consisting of elements of finite order $$2\rho_{an}(M,g^M)-\rho_{an}(\tilde{M}_1,g^{\tilde{M}_1})-\rho_{an}(\tilde{M}_2,g^{\tilde{M}_2}) = const\ .$$ On the other hand, $\sigma_1$ is differentiable. We conclude \begin{eqnarray*} 0&=&d_{|g=1} \sigma_1\\ &=&-\int_0^\infty d_{|g=1}\delta(t,.)
t^{-1} dt\\ &=&-\int_0^\infty (2{\rm Tr}_s N L {\rm e}^{-t \Delta}-{\rm Tr}_s N L {\rm e}^{-t \Delta_1}-{\rm Tr}_s N L {\rm e}^{-t \Delta_2})\: t^{-1}\: dt\\ &=&-2(2T_1(M)-T_1(\tilde{M}_1)-T_1(\tilde{M}_2))\ .\end{eqnarray*} This finishes the proof of the theorem. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent \section{Products and coverings}\label{covprod} Let $G$ be a compact connected Lie group and $\Gamma$ be a finite group. Let $C(\Gamma)$ denote the algebra of ${\bf C}$-valued functions on $\Gamma$. We need the generalization of higher equivariant analytic torsion $T^\Gamma(M)\in \tilde{I}(G)\otimes C(\Gamma)$ mentioned in \cite{lott94}, Note 3. Let $M$ be a closed oriented $G\times \Gamma$-manifold equipped with a $G\times \Gamma$-invariant Riemannian metric $g^M$. Set $$\chi^\prime(M)(\gamma):=\sum_{p=0}^\infty p (-1)^p {\rm Tr} \: H^p(\gamma)\ ,$$ where $H^p(\gamma)$ is the induced action of $\gamma\in\Gamma$ on $H^p(M,{\bf R})$. Then we define $T^\Gamma(M)\in \tilde{I}(G)\otimes C(\Gamma)$ to be the element represented by the function $$\gamma\mapsto -\frac{d}{ds}_{|s=0}\frac{1}{\Gamma(s)}\int_0^\infty ({\rm Tr}_s N \gamma {\rm e}^{D_t^2} -\chi^\prime(M)(\gamma)) t^{s-1} dt\ .$$ Let $M$ be a closed oriented $G\times \Gamma$-manifold and $N$ be a closed oriented $\Gamma$-manifold. Then we form the closed oriented $G\times \Gamma$-manifold $M\times N$, where $\Gamma$ acts diagonally. We choose a $G\times\Gamma$-invariant Riemannian metric $g^M$, a $\Gamma$-invariant Riemannian metric $g^N$, and we let $g^{M\times N}$ be the product metric. Define the $\Gamma$-equivariant Euler characteristic $\chi^\Gamma(N)\in C(\Gamma)$ of a closed $\Gamma$-manifold $N$ by $$\chi^\Gamma(N)(\gamma):=\sum_{p=0}^\infty (-1)^p {\rm Tr}\:H^p(\gamma)\ .$$ \begin{lem}\label{prdf} If $\chi^\Gamma(M)=0$, then $$T^\Gamma(M\times N)=T^\Gamma(M)\chi^\Gamma(N)\ .$$ \end{lem} {\it Proof.$\:\:\:\:$} We write $D_t(M)$, $D_t(N)$, $D_t(M\times N)$ for the operators (\ref{hamil}) on $M$, $N$, $M\times N$.
Let $\Delta(N)$ be the Laplace operator on $\Omega(N)$. On the level of Hilbert space closures we have $${\rm clo}_{L^2}\Omega(M\times N)={\rm clo}_{L^2}\Omega(M)\otimes{\rm clo}_{L^2}\Omega(N)\ .$$ With respect to this splitting we can write $$D_t(M\times N)^2=D_t(M)^2\otimes 1 - 1\otimes t\Delta(N)\ .$$ If $\gamma\in\Gamma$, then \begin{eqnarray*} &&{\rm Tr}_s N\gamma {\rm e}^{D_t(M\times N)^2}\\ &=&{\rm Tr}_s (N\otimes 1 + 1\otimes N)(\gamma\otimes\gamma) ({\rm e}^{D_t(M)^2}\otimes{\rm e}^{-t\Delta(N)})\\ &=&{\rm Tr}_s N \gamma {\rm e}^{D_t(M)^2} {\rm Tr}_s \gamma {\rm e}^{-t\Delta(N)} + {\rm Tr}_s \gamma {\rm e}^{D_t(M)^2} {\rm Tr}_s N \gamma {\rm e}^{-t\Delta(N)}\ . \end{eqnarray*} By the equivariant McKean-Singer formula (\cite{berlinegetzlervergne92}, Thm. 6.3) we have ${\rm Tr}_s \gamma {\rm e}^{-t\Delta(N)}=\chi^\Gamma(N)(\gamma)$. Moreover we have \begin{eqnarray*} \frac{d}{dt}{\rm Tr}_s \gamma {\rm e}^{D_t(M)^2} &=&{\rm Tr}_s \gamma \frac{d}{dt}D_t^2 {\rm e}^{D_t(M)^2}\\ &=&{\rm Tr}_s [ \frac{d}{dt}D_t, \gamma D_t{\rm e}^{D_t(M)^2}]\\ &=&0\\ \lim_{t\to \infty }{\rm Tr}_s \gamma {\rm e}^{D_t(M)^2}&=& \lim_{t\to \infty }{\rm Tr}_s \gamma {\rm e}^{-t\Delta(M)}\\ &=&\chi^\Gamma(M)(\gamma)\\ &=& 0\ . \end{eqnarray*} It follows that $${\rm Tr}_s N\gamma {\rm e}^{D_t(M\times N)^2}= \chi^\Gamma(N)(\gamma) {\rm Tr}_s N \gamma {\rm e}^{D_t(M)^2}\ .$$ This implies the assertion of the Lemma. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent Let $N$ be a closed oriented $G\times\Gamma$-manifold such that $\Gamma$ acts freely on $N$. Let $M:=\Gamma\backslash N$. Then $M$ is a closed oriented $G$-manifold. We equip $N$ with a $G\times \Gamma$-invariant Riemannian metric and define $g^M$ such that the projection $\pi:N\rightarrow M$ becomes a local isometry. Let $\int_\Gamma:C(\Gamma)\rightarrow {\bf C}$ be the integral over $\Gamma$ with respect to the normalized Haar measure. We denote the induced map $\tilde{I}(G)\otimes C(\Gamma)\rightarrow \tilde{I}(G)$ by the same symbol.
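Explicitly, since $\Gamma$ is finite, the normalized Haar measure is just the averaged counting measure, so that for $f\in C(\Gamma)$ $$\int_\Gamma f=\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma} f(\gamma)\ .$$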
\begin{lem}\label{covform} $$T(M)=\int_\Gamma \:T^\Gamma(N)\ .$$ \end{lem} {\it Proof.$\:\:\:\:$} Note that $\Pi:=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma} \gamma$ acts on $\Omega(N)$ as the projection onto the subspace of $\Gamma$-invariant forms, which can be identified with $\Omega(M)$ using the pull-back $\pi^*$. Moreover, $D_t(M)$ coincides with the restriction of $D_t(N)$ to the range of $\Pi$. We have \begin{eqnarray*} \frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma} {\rm Tr}_s N\gamma {\rm e}^{D_t(N)^2}&=& {\rm Tr}_s N \Pi {\rm e}^{D_t(N)^2}\\ &=&{\rm Tr}_s N {\rm e}^{D_t(M)^2}\ . \end{eqnarray*} This implies the assertion of the Lemma. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent \section{Manifolds with corner singularities}\label{formalities} In this section we extend the definition of $T(M)$ and $T^\Gamma(M)$, and the results of Section \ref{covprod}, to manifolds with corner singularities. A compact manifold with a corner singularity of codimension one is just a manifold with boundary. Corner singularities of codimension two arise if we admit that boundaries themselves have boundaries. In general a corner singularity of codimension $m$ of an $n$-dimensional manifold is modelled on $({\bf R}_+)^m\times {\bf R}^{n-m}$, where ${\bf R}_+=[0,\infty)$. Let $M$ be a compact manifold with corner singularities. Then the boundary of $M$ can be decomposed into pieces $\partial_1 M\cup\dots\cup\partial_l M$. We do not require that the pieces $\partial_i M$ are connected. If $x\in M$ belongs to a corner singularity of codimension $m$, then $x$ meets exactly $m$ pieces of $\partial M$. For $i\in \{1,\dots, l\}$ we can form the double $\tilde{M}_i:=M\cup_{\partial_i M}M$ of $M$ along the piece $\partial_i M$. Then $\tilde{M}_i$ is again a compact manifold with corner singularities. In particular it has $l-1$ boundary pieces $\partial_j\tilde{M}_i=\partial_j M\cup_{\partial_j M\cap \partial_i M}\partial_j M$, $j\not=i$.
The notion of corner singularities and the construction of the double extend to compact oriented $G$-manifolds in the obvious way. We define $T(M)$ for compact oriented $G$-manifolds inductively with respect to the number $l(M)$ of boundary pieces. If $l(M)=0$, then $T(M)$ is already defined. Assume now that $T(M)$ is defined for all $M$ with $l(M)<l$. Let now $M$ be a compact oriented $G$-manifold with $l(M)=l$. Then we set $$T(M):=\frac{1}{2} T(\tilde{M_1})\ .$$ If $l>1$, then we have to check that this definition is independent of the numbering of the boundary pieces. It suffices to show that $T(\tilde{M_1})=T(\tilde{M_2})$. Note that $\tilde{\tilde{M}_1}_2$ and $\tilde{\tilde{M}_2}_1$ are $G$-diffeomorphic. Using the induction hypothesis $$2T(\tilde{M_1})=T(\tilde{\tilde{M}_1}_2)=T(\tilde{\tilde{M}_2}_1)=2T(\tilde{M_2})\ .$$ Thus $T(M)$ is well defined. The doubling trick was introduced in \cite{lottrothenberg91}, Ch. IX. Instead of the formal definition above one could also employ absolute and relative boundary conditions in order to define higher equivariant analytic torsions $T(M,abs)$, $T(M,rel)$ for $G$-manifolds with boundary. If the Riemannian metric is chosen to be a product near the boundary, then $T(M)=\frac{1}{2}( T(M,abs)+T(M,rel))$. The sum formula (Theorem \ref{sumff}) now has the nice reformulation \begin{equation}\label{sumfff} T(M)=T(M_1)+T(M_2) \ . \end{equation} It has the following generalization: \begin{kor}\label{summ2} Let $M_i$, $i=1,2$, be compact oriented $G$-manifolds with corner singularities. If we are given a $G$-diffeomorphism $\partial_1 M_1\cong \partial_1 M_2$, then we can form the manifold with corner singularities $M:=M_1\cup_{\partial_1 M_i}M_2$, and we have $$T(M)=T(M_1)+T(M_2)\ .$$ \end{kor} {\it Proof.$\:\:\:\:$} We employ induction on the number of boundary pieces. The assertion is true if $M$ is closed. Assume that the corollary holds true for all $M$ with $l(M)<l$.
Let $M=M_1\cup_{\partial_1 M_i}M_2$ now be a manifold with $l(M)=l$ and $l\ge 1$. Then we can assume that $l(M_1)\ge 2$. We distinguish the cases (a): $\partial_2 M_1\cap \partial_1 M_1=\emptyset$ and (b): $\partial_2 M_1\cap \partial_1 M_1\not=\emptyset$. In case (a) let $\partial_1 M$ be the piece corresponding to $\partial_2 M_1$. Then using the induction hypothesis $$T(M)=\frac{1}{2}T(\tilde{M}_1)=\frac{1}{2}T(\tilde{M_1}_2)+T(M_2)=T(M_1)+T(M_2)\ .$$ In case (b) there is a boundary piece $\partial_2 M_2$ meeting $\partial_1 M_2$. Then $M$ has a boundary piece $\partial_1 M:=\partial_2 M_1\cup_{\partial_1 M_i\cap \partial_2 M_i}\partial_2 M_2$. Again using the induction hypothesis we have $$T(M)=\frac{1}{2}T(\tilde{M}_1)=\frac{1}{2}T(\tilde{M_1}_2) + \frac{1}{2}T(\tilde{M_2}_2)=T(M_1)+T(M_2)\ .$$ This proves the corollary. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent Let $\Gamma$ be an additional finite group. For a $G\times\Gamma$-manifold with corner singularities we require that the pieces $\partial_i M$ are compact $G\times \Gamma$-manifolds with corner singularities as well. A Riemannian metric on a manifold with corner singularities is compatible if it is a product metric $g^{({\bf R}_+)^m} + g^{{\bf R}^{n-m}}$ at a corner of codimension $m$. Then we can form the doubles $\tilde{M}_i$ metrically. Let $M$ be a compact oriented $G\times \Gamma$-manifold with corner singularities equipped with a compatible $G\times\Gamma$-invariant Riemannian metric. Then we define $T^\Gamma(M)$ for $G\times\Gamma$-manifolds with corner singularities using the same formal procedure as for trivial $\Gamma$. We can generalize Lemma \ref{covform} to this case. Let $N$ be a compact oriented $G\times\Gamma$-manifold with corner singularities such that $\Gamma$ acts freely and form $M:=\Gamma\backslash N$.
\begin{kor}\label{covformex} $$T(M)=\int_\Gamma T^\Gamma(N)\ .$$ \end{kor} {\it Proof.$\:\:\:\:$} We argue by induction with respect to the number of boundary pieces. If $l(N)=0$, then this is just Lemma \ref{covform}. Assume now that the corollary holds true for all $N$ with $l(N)<l$. Let now $N$ be a compact oriented $G\times\Gamma$-manifold with corner singularities such that $\Gamma$ acts freely and $l(N)=l\ge 1$. Then consider the covering $\tilde{N}_1\rightarrow \tilde{M}_1$. Applying the induction hypothesis we obtain $$T(M)=\frac{1}{2} T(\tilde{M}_1)=\frac{1}{2}\int_\Gamma T^\Gamma(\tilde{N}_1)=\int_\Gamma T^\Gamma(N)\ .$$ This proves the corollary. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent Let $M$ be a closed oriented $G\times \Gamma$-manifold and $N$ be a compact oriented $\Gamma$-manifold with corner singularities. Then we form the compact oriented $G\times \Gamma$-manifold $M\times N$ with corner singularities, where $\Gamma$ acts diagonally. We choose a $G\times\Gamma$-invariant Riemannian metric $g^M$, a $\Gamma$-invariant compatible Riemannian metric $g^N$, and we let $g^{M\times N}$ be the product metric which is again invariant and compatible. We define the $\Gamma$-equivariant Euler characteristic $\chi^\Gamma(N)\in C(\Gamma)$ of a $\Gamma$-manifold $N$ with corner singularities with $l(N)\ge 1$ inductively with respect to the number of boundary pieces by $$\chi^\Gamma(N):=\frac{1}{2}\chi^\Gamma(\tilde{N}_1)\ .$$ We leave it to the interested reader to express $\chi^\Gamma(N)$ in terms of equivariant Euler characteristics of the components of the filtration of $N$. The main feature of this definition is that the equivariant Euler characteristic is additive under glueing along boundary pieces. We have the following generalization of Lemma \ref{prdf}. 
\begin{kor}\label{prdfex} If $\chi^\Gamma(M)=0$, then $$T^\Gamma(M\times N)=T^\Gamma(M)\chi^\Gamma(N)\ .$$ \end{kor} {\it Proof.$\:\:\:\:$} We argue by induction over the number of boundary pieces $l(N)$. If $l(N)=0$, then this is just Lemma \ref{prdf}. Assume that the corollary holds true if $l(N)<l$. Let now $N$ be such that $l(N)=l\ge 1$. Let $\partial_1(M\times N):=M\times \partial_1 N$. Then using the induction hypothesis and the additivity of $\chi^\Gamma$ we obtain $$T^\Gamma(M\times N)=\frac{1}{2}T^\Gamma((\widetilde{M\times N})_1)=\frac{1}{2}T^\Gamma(M)\chi^\Gamma(\tilde{N}_1)= T^\Gamma(M)\chi^\Gamma(N)\ .$$ This proves the corollary. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent \section{Restriction to subgroups}\label{subgr} Let $G$ and $H$ be connected compact Lie groups with Lie algebras $g,h$. If $f:H\rightarrow G$ is a homomorphism, then $f_*:h\rightarrow g$ induces a map $\tilde{f}:I(G)\rightarrow I(H)$. If $H\subset G$ is a closed subgroup and $i$ denotes the inclusion, then we set $\tilde{i}=:{\rm res}^G_H$. If $g\in G$, then we put $H^g:=gHg^{-1}$. Let $\alpha_g:H\rightarrow H^g$ be given by $\alpha_g(h):=ghg^{-1}$. Let $M$ be a closed oriented $G$-manifold with corner singularities. If $f:H\rightarrow G$ is a homomorphism, then we denote by $f^* M$ the $H$-manifold $M$ with action induced by $f$. If $H\subset G$ is a closed subgroup, then we put ${\rm res}^G_H M:=i^* M$. The following Lemma is an immediate consequence of the definition of $T(M)$. \begin{lem}\label{ress} {\bf (1)$\:\:$:} If $f:H\rightarrow G$ is a homomorphism, then $\tilde{f}T(M)=T(f^*M)$. In particular, if $H\subset G$ is closed, then ${\rm res}^G_H T(M)=T({\rm res}^G_H M)$.\\ {\bf (2)$\:\:$:} If $H\subset G$ is closed, then for all $g\in G$ we have $\tilde{\alpha}_g{\rm res}^G_H T(M) = {\rm res}^G_{H^g} T(M)$.
\end{lem} The association $H\subset G\mapsto \tilde{I}(H)=:\tilde{I}_G(H)$ assembles to give a contravariant functor $\tilde{I}_G:{\rm Or }(G)\rightarrow {\bf C}-vect$. If $f:H\rightarrow G$ is a homomorphism, then it induces a natural functor $f_*:{\rm Or }(H)\rightarrow {\rm Or }(G)$ sending $H/K$ to $G/f(K)$. For $K\subset H$ let $f_K:K\rightarrow f(K)$ be the restriction of $f$ to $K$. The collection $\{\tilde{f}_K\}$, $K\subset H$, provides a natural transformation $\tilde{f}:\tilde{I}_G\circ f_*\rightarrow \tilde{I}_H$. Let $f^*:\lim_{{\rm Or }(G)} \tilde{I}_G \rightarrow \lim_{{\rm Or }(H)}\tilde{I}_H$ denote the induced map. Lemma \ref{ress} says that $G/H\mapsto T({\rm res}^G_H M)$ is a section of $\tilde{I}_G$. Since ${\rm Or }(G)$ has a final object $G/G$, we have an isomorphism \begin{equation}\label{ident}\lim_{{\rm Or }(G)} \tilde{I}_G\cong \tilde{I}(G)\end{equation} given by restriction to the final object. By $S(G)$ we denote the full subcategory of ${\rm Or }(G)$ of those objects $G/H$ with $H\cong S^1$. We denote the space of sections of $\tilde{I}_{G|S(G)}$ by $V(G)$, i.e. $$V(G):=\lim_{S(G)} \tilde{I}_G\ .$$ There is a natural restriction map $$R_G:\tilde{I}(G)\cong\lim_{{\rm Or }(G)} \tilde{I} \rightarrow V(G)\ .$$ \begin{lem} \label{ress1} $R_G$ is injective. \end{lem} {\it Proof.$\:\:\:\:$} Let $T\subset G$ be a maximal torus and denote by $j$ its inclusion. There is a functor $j_{*|S(T)}:S(T)\rightarrow S(G)$. Let $J^*:\lim_{S(G)} \tilde{I}_G \rightarrow \lim_{S(T)} \tilde{I}_T$ be induced by the natural transformation $\tilde{j_{*|S(T)}}:\tilde{I}_{G|S(G)}\circ j_{*|S(T)}\rightarrow \tilde{I}_{T|S(T)}$. Then $R_T\circ j^*=J^*\circ R_G$. In order to prove that $R_G$ is injective it is therefore sufficient to show that $j^*$ and $R_T$ are injective. Now $j^*$ is injective since it coincides with ${\rm res}^G_T :\tilde{I}(G)\rightarrow \tilde{I}(T)$ under the identification (\ref{ident}), and the latter map is well known to be injective.
Let $t$ be the Lie algebra of $T$. The kernel of $\exp:t\rightarrow T$ defines a ${\bf Z}$-structure on $t$. The set of subspaces $h\subset t$ corresponding to objects $T/H\in S(T)$ with $H\cong S^1$ is just the set of integral points of the projective space $P(t\otimes {\bf C})$. Injectivity of $R_T$ follows easily from the fact that the set of integral points of $P(t\otimes {\bf C})$ is Zariski dense. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent \begin{kor}\label{fgd} $T(M)$ is uniquely determined by the values of $T({\rm res}^G_H M)$ for all $H\subset G$ with $H\cong S^1$. \end{kor} \section{The map $T_G$}\label{s1} We need the following technical result. \begin{lem}\label{decomp} Let $M$ be a closed manifold. Then there exists a Riemannian metric $g^M$ and a decomposition $M=\cup_i B_i$ of $M$ into manifolds with corner singularities such that the $B_i$ are contractible and the restriction of $g^M$ to $B_i$ is compatible for all $i$. \end{lem} {\it Proof.$\:\:\:\:$} We choose a smooth triangulation of $M$. Then there is another smooth triangulation ${\cal T}$ which is dual to the first one. We choose small closed tubular neighbourhoods $U_\sigma$ of the simplices $\sigma$ of ${\cal T}$. We now proceed inductively. Assume that in the steps $0,\dots,l-1$ we have already defined $B_i$, $i=1,\dots, r$. In the $l$'th step we let $B_{r+1},\dots$ be the intersections $U_\sigma\cap (M\setminus {\rm int}(\cup_{i=1}^r B_i))$, where $\sigma$ runs over all simplices of ${\cal T}$ of dimension $l$. By choosing the tubular neighbourhoods appropriately, this construction gives manifolds $B_i$ with corner singularities. Now one can construct an appropriate Riemannian metric. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent Recall that if $M$ is a manifold with corner singularities and $M$ has at least one boundary piece, then we define inductively $\chi(M):=\frac{1}{2}\chi(\tilde{M}_1)$.
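As a minimal illustration of this inductive convention, take $M=[0,1]$ with the two boundary pieces $\partial_1 M=\{0\}$ and $\partial_2 M=\{1\}$; two successive doublings give

```latex
\begin{eqnarray*}
\chi([0,1])&=&\frac{1}{2}\,\chi\big([0,1]\cup_{\{0\}}[0,1]\big)
 \;=\;\frac{1}{2}\,\chi([-1,1])\\
 &=&\frac{1}{4}\,\chi(S^1)\;=\;0\ ,
\end{eqnarray*}
```

in contrast to the topological value $\chi([0,1])=1$; it is this convention that is additive under gluing along boundary pieces.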
In particular, if $M=\cup_i B_i$ is a decomposition as in Lemma \ref{decomp}, then \begin{equation}\ \label{p1oi} \chi(M)=\sum_i \chi(B_i)\ .\end{equation} Recall the definition (\ref{ts1}) of $T_{S^1}:U(S^1)\rightarrow \tilde{I}(S^1)$. \begin{prop} {\bf (1)$\:\:$:} Let $M$ be a closed oriented $S^1$-manifold. Then $T(M)=T_{S^1}\chi_{S^1}(M)$.\\ {\bf (2)$\:\:$:} If in addition $M$ is even-dimensional, then $\chi_{S^1}(M)=a[S^1/S^1]$ for some $a\in {\bf Z}$. \end{prop} {\it Proof.$\:\:\:\:$} A compact $S^1$-manifold has a finite number of orbit types $H_1,\dots,H_l$. We employ induction by the number of orbit types $l(M)$. We first assume that $l(M)=1$. If $H_1=S^1$, then $T(M)=0$ and $\chi_{S^1}(M)=\chi(M)[S^1/S^1]$. Thus $T_{S^1}\chi_{S^1}(M)=\chi(M)T_{S^1}[S^1/S^1]=0$, too. We now consider the case that $H_1\not=S^1$. Then by \cite{bredon72}, II.5.2., we have a smooth locally trivial fibre bundle $M\rightarrow M/S^1$ with fibre $S^1/H_1$. Let $M/S^1=\cup_i B_i$ be a decomposition of $M/S^1$ into manifolds with corner singularities given by Lemma \ref{decomp}. Then $M_{|B_i}\cong S^1/H_1\times B_i$. Using Corollaries \ref{summ2} and \ref{prdfex}, (\ref{ts1}), (\ref{p1oi}), and $\chi_{S^1}(M)=\chi(M/S^1)[S^1/H_1]$ we obtain \begin{eqnarray*} T(M)&=&\sum_i T(M_{|B_i})\\ &=&\sum_i T(S^1/H_1\times B_i)\\ &=&T(S^1/H_1) \sum_i \chi(B_i)\\ &=&T_{S^1}[S^1/H_1]\chi(M/S^1)\\ &=&T_{S^1} \chi_{S^1}(M)\ . \end{eqnarray*} This proves assertion {\bf (1)} for $l(M)=1$. If $M$ is even-dimensional closed, then $M/S^1$ is odd-dimensional, and $\chi(M/S^1)=0$ by Poincar\'e duality. Assertion {\bf (2)} follows. Now assume that the proposition holds true for all $M$ with $l(M)<l$. Let $M$ be a closed oriented $S^1$-manifold with $l(M)=l$. Without loss of generality we can assume that $H:=H_1\not=S^1$. 
By \cite{bredon72}, VI 2.5., the fixed point set $M_H$ of $H$ is a smooth submanifold of $M$ with normal bundle $NM_H$, which we identify with an equivariant tubular neighbourhood of $M_H$ using the exponential map provided by an $S^1$-invariant Riemannian metric $g^M$. Assume that $M$ is odd-dimensional. By Corollary \ref{summ2} we have $T(M)=T(M\setminus NM_H) + T(\bar{NM_H})$. Let $N$ be the double of $M\setminus NM_H$. Then $l(N)\le l-1$, and we can apply the induction hypothesis in order to obtain $T(M\setminus NM_H)=\frac{1}{2}T(N)=\frac{1}{2}T_{S^1}\chi_{S^1}(N)$. Note that $$\chi_{S^1}(N)=2\chi_{S^1}(M\setminus NM_H)-\chi_{S^1}(\partial \bar{NM_H})\ .$$ Note that $\partial \bar{NM_H}$ is even-dimensional, closed and orientable. Since $l(\partial \bar{NM_H})<l$ we have by our induction hypothesis $\chi_{S^1}(\partial \bar{NM_H})=a[S^1/S^1]$ for some $a\in{\bf Z}$. This implies $T_{S^1}\chi_{S^1}(\partial \bar{NM_H})=0$ and \begin{equation}\label{teil1} T(M\setminus NM_H)=T_{S^1}\chi_{S^1}(M\setminus NM_H)\ . \end{equation} We now compute $T(\bar{NM_H})$. Since $l(M_H)=1$ we have a smooth locally trivial fibre bundle $M_H\rightarrow M_H/S^1$ with fibre $S^1/H$. Let $M_H/S^1=\cup_i B_i$ be a decomposition of $M_H/S^1$ into manifolds with corner singularities given by Lemma \ref{decomp}. Then $M_{H,i}:=(M_H)_{|B_i}\cong S^1/H\times B_i$. Since $H$ acts orientation preserving, the bundle $NM_H$ admits an $H$-invariant complex structure. The restriction $(NM_H)_{|M_{H,i}}$ can be written as $S^1\times V_i/H$, where $V_i\rightarrow B_i$ is a complex vector bundle on which $H$ acts fibrewise linearly. Since a complex linear action of a cyclic group $H$ can always be extended to the connected group $S^1$, we obtain $\chi^H(V_i)(\gamma)=\chi(V_i)$ for all $\gamma\in H$. Moreover we have $\chi^H(S^1)=0$.
Thus we can apply Corollaries \ref{prdfex} and \ref{covformex} in order to obtain $$T(S^1\times V_i/H)=\int_H T^H(S^1)\chi^H(V_i)=\int_H T^H(S^1)\chi(V_i)=T(S^1/H)\chi(V_i)\ .$$ Since $\bar{NM_H}$ and $M_H$ are $S^1$-homotopy equivalent, we have $\chi_{S^1}(\bar{NM_H})=\chi_{S^1}(M_H)$. Moreover, $\sum_i\chi(V_i)=\sum_i\chi(B_i)=\chi(M_H/S^1)$ and $\chi_{S^1}(M_H)=\chi(M_H/S^1) [S^1/H]$. Thus we obtain by Corollary \ref{summ2} \begin{eqnarray} T(\bar{NM_H})&=&\sum_i T(S^1\times V_i/H)\nonumber\\ &=&\sum_i T(S^1/H)\chi(V_i)\nonumber\\ &=&T_{S^1}[S^1/H] \chi(M_H/S^1)\nonumber\\ &=&T_{S^1}\chi_{S^1}(\bar{NM_H})\label{rt2}. \end{eqnarray} We have $$\chi_{S^1}(M)=\chi_{S^1}(M\setminus NM_H)+\chi_{S^1}(\bar{NM_H})-\chi_{S^1}(\partial\bar{NM_H})\ .$$ Since $T_{S^1} \chi_{S^1}(\partial\bar{NM_H})=0$, combining (\ref{teil1}) and (\ref{rt2}) we obtain the desired formula $T(M)=T_{S^1}\chi_{S^1}(M)$ for $M$ odd-dimensional. Assume now that $M$ is even-dimensional and that $l(M)=l$. Then $T(M)=0$, and {\bf (1)} follows from {\bf (2)}. We now show {\bf (2)}. We have $$\chi_{S^1}(M)=\chi_{S^1}(M\setminus NM_H)+\chi_{S^1}(M_H)\ .$$ We can apply the induction hypothesis to $M_H$ and the double of $M\setminus NM_H$. It follows that $\chi_{S^1}(M\setminus NM_H)=\frac{1}{2}\chi_{S^1}(\partial \bar{NM_H})+ a[S^1/S^1]$. The restriction $(\partial \bar{NM_H})_{|M_{H,i}}$ is isomorphic to $S^1\times S V_i/H$, where $S V_i$ denotes the sphere bundle of $V_i$. Let $U$ be the unit sphere in a fibre of $NM_H$. Using that $M_H/S^1$ is closed, orientable, and odd-dimensional, we obtain \begin{eqnarray*} \chi_{S^1}(\partial \bar{NM_H})&=& \sum_i \chi_{S^1}(S^1\times U/H)\chi(B_i)\\ &=&\chi_{S^1}(S^1\times U/H) \chi(M_H/S^1)\\ &=&0\ . \end{eqnarray*} This finishes the proof of {\bf (2)}. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent We now construct $T_G$. The collection $T_H$, $H\in S(G)$, forms a natural transformation from the functor $H\mapsto U(H)$ to $H\mapsto \tilde{I}(H)$.
Thus we obtain a homomorphism $$\tilde{T}:\lim_{S(G)} U\rightarrow V(G)\ .$$ Let $\tilde{{\rm res}} : U(G)\rightarrow \lim_{S(G)} U$ be given by the collection ${\rm res}^G_H$, $H\in S(G)$. If $M$ is a compact $G$-manifold, then we let $\tilde{\chi}(M)\in \lim_{S(G)}U$ be given by the section $S(G)\ni H\mapsto \chi_H(M)\in U(H)$. Then $\tilde{{\rm res}} \chi_G(M)=\tilde{\chi}(M)$. \begin{lem}\label{tge} There is a unique homomorphism $T_G:U(G)\rightarrow \tilde{I}(G)$ such that $R_G\circ T_G=\tilde{T}\circ \tilde{{\rm res}}$. \end{lem} {\it Proof.$\:\:\:\:$} For $G/K\in {\rm Or }(G)$ we must have \begin{eqnarray*} R_G\circ T_G[G/K]&=&\tilde{T}\circ \tilde{{\rm res}} \chi_G(G/K)\\ &=&\tilde{T}\circ\tilde{\chi}(G/K)\\ &=&\{S(G)\ni H\mapsto T_H \circ\chi_H \circ{\rm res}^G_H(G/K)\}\\ &=&\{S(G)\ni H\mapsto T({\rm res}^G_H\:G/K)\}\\ &=&\{S(G)\ni H\mapsto {\rm res}^G_H T(G/K)\}\\ &=&R_G T(G/K) \ . \end{eqnarray*} Hence by injectivity of $R_G$ (Lemma \ref{ress1}) we are forced to define $T_G[G/K]:=T(G/K)$. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent We now finish the proof of Theorem \ref{main}. Let $M$ be a closed oriented $G$-manifold. Then we have \begin{eqnarray*} R_G\circ T_G\chi_G(M)&=&\tilde{T}\circ \tilde{{\rm res}} \chi_G(M)\\ &=&\tilde{T}\circ \tilde{\chi}(M)\\ &=&\{S(G)\ni H\mapsto T_H \circ\chi_H \circ{\rm res}^G_H(M)\}\\ &=&\{S(G)\ni H\mapsto T({\rm res}^G_H\:M)\}\\ &=&\{S(G)\ni H\mapsto {\rm res}^G_H T(M)\}\\ &=&R_G T(M) \ . \end{eqnarray*} We conclude that $T_G\chi_G(M)=T(M)$ by Lemma \ref{ress1}.\hspace*{\fill}$\Box$ \\[0.5cm]\noindent \bibliographystyle{plain}
\section{Neutrinos Associated with the Highest Energy Cosmic Rays} The flux of cosmic rays is summarized in Fig.\,1a,b\cite{gaisseramsterdam}. The energy spectrum follows a broken power law. The two power laws are separated by a feature dubbed the ``knee''; see Fig.\,1a. Circumstantial evidence exists that cosmic rays, up to EeV energy, originate in galactic supernova remnants. Any association with our galaxy disappears however in the vicinity of a second feature in the spectrum referred to as the ``ankle''. Above the ankle, the gyroradius of a proton in the galactic magnetic field exceeds the size of the galaxy and it is generally assumed that we are witnessing the onset of an extragalactic component in the spectrum that extends to energies beyond 100\,EeV. Experiments indicate that the highest energy cosmic rays are predominantly protons or, possibly, nuclei. Above a threshold of 50 EeV these protons interact with cosmic microwave photons and lose their energy to pions before reaching our detectors. This is the Greisen-Zatsepin-Kuzmin cutoff that limits the sources to our supercluster of galaxies. \begin{figure}[h] \centering\leavevmode \includegraphics[width=6in]{halzen_fig1.eps} \caption{At the energies of interest here, the cosmic ray spectrum consists of a sequence of 3 power laws. The first two are separated by the ``knee'' (left panel), the second and third by the ``ankle''. There is evidence that the cosmic rays beyond the ankle are a new population of particles produced in extragalactic sources; see right panel.} \end{figure} Models for the origin of the highest energy cosmic rays fall into two categories, top-down and bottom-up. In top-down models it is assumed that the cosmic rays are the decay products of cosmological remnants with Grand Unified energy scale $M_{GUT} \sim 10^{24}\rm\,eV$.
These models predict neutrino fluxes most likely within reach of first-generation telescopes such as AMANDA, and certainly detectable by future kilometer-scale neutrino observatories\cite{PR}. In bottom-up scenarios it is assumed that cosmic rays originate in cosmic accelerators. Accelerating particles to TeV energy and above requires massive bulk flows of relativistic charged particles. These are likely to originate from the exceptional gravitational forces in the vicinity of black holes. Examples include the dense cores of exploding stars, inflows onto supermassive black holes at the centers of active galaxies and annihilating black holes or neutron stars. Before leaving the source, accelerated particles pass through intense radiation fields or dense clouds of gas surrounding the black hole. This results in interactions producing pions decaying into secondary photons and neutrinos that accompany the primary cosmic ray beam as illustrated in Fig.\,2. \begin{figure}[h] \centering\leavevmode \includegraphics[width=4.25in]{halzen_fig2.eps} \caption{Cosmic beam dump: sketch of cosmic ray accelerator producing photons and neutrinos.} \end{figure} How many neutrinos are produced in association with the cosmic ray beam? The answer to this question, among many others\cite{PR}, provides the rationale for building kilometer-scale neutrino detectors. We first consider a neutrino beam produced at an accelerator laboratory; see Fig.\,2. Here the target absorbs all parent protons as well as the secondary electromagnetic and hadronic showers. Only neutrinos exit the dump. If nature constructed such a ``hidden source'' in the heavens, conventional astronomy will not reveal it. It cannot be the source of the cosmic rays, however, because in this case the dump must be transparent to protons. 
A more generic ``transparent'' source can be imagined as follows: protons are accelerated in a region of high magnetic fields where they interact with photons via the processes $p + \gamma \rightarrow \Delta \rightarrow \pi^0 + p$, $p + \gamma \rightarrow \Delta \rightarrow \pi^+ + n$. While the secondary protons may remain trapped in the acceleration region, equal numbers of neutrons, neutral and charged pions escape. The energy escaping the source is therefore equally distributed between cosmic rays, gamma rays and neutrinos produced by the decay of neutrons and neutral and charged pions, respectively. The neutrino flux from a generic transparent cosmic ray source is often referred to as the Waxman-Bahcall flux\cite{wb1}. It is easy to calculate and the derivation is revealing. Figure 1b shows a fit to the observed spectrum above the ``ankle'' that can be used to derive the total energy in extragalactic cosmic rays. The energy content of this component is $\sim 3 \times 10^{-19}\rm\,erg\ cm^{-3}$, assuming an $E^{-2}$ energy spectrum with a GZK cutoff. The power required for a population of sources to generate this energy density over the Hubble time of $10^{10}$\,years is $\sim 3 \times 10^{37}\rm\,erg\ s^{-1}$ per (Mpc)$^3$ or, as often quoted in the literature, $\sim 5\times10^{44}\rm\,TeV$ per year per (Mpc)$^3$. This works out to\cite{TKG} \begin{itemize} \item $\sim 3 \times 10^{39}\rm\,erg\ s^{-1}$ per galaxy, \item $\sim 3 \times 10^{42}\rm\,erg\ s^{-1}$ per cluster of galaxies, \item $\sim 2 \times 10^{44}\rm\,erg\ s^{-1}$ per active galaxy, or \item $\sim 2 \times 10^{52}$\,erg per cosmological gamma ray burst. \end{itemize} The coincidence between these numbers and the observed output in electromagnetic energy of these sources explains why they have emerged as the leading candidates for the cosmic ray accelerators. The coincidence is consistent with the relationship between cosmic rays and photons built into the ``transparent'' source. 
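The arithmetic behind these numbers is easily checked; in the sketch below the energy density and Hubble time are taken from the text, while the source number densities per Mpc$^3$ are typical values assumed here for illustration only.

```python
# Back-of-envelope check of the cosmic-ray energy budget quoted above.
# rho_E and the Hubble time are from the text; the source densities are
# assumed typical values (NOT from the text), used only for illustration.
MPC_CM = 3.086e24        # 1 Mpc in cm
YR_S = 3.156e7           # 1 yr in s
ERG_PER_TEV = 1.602      # 1 TeV in erg

rho_E = 3e-19            # erg cm^-3 in extragalactic cosmic rays
hubble_time_s = 1e10 * YR_S

# Power per Mpc^3 needed to build up rho_E over a Hubble time:
power_cgs = rho_E * MPC_CM**3 / hubble_time_s   # ~3e37 erg/s per Mpc^3
power_tev_yr = power_cgs * YR_S / ERG_PER_TEV   # ~5e44 TeV/yr per Mpc^3

# Dividing by assumed source densities (per Mpc^3) reproduces the
# per-source powers in the bullet list:
density = {"galaxy": 1e-2, "galaxy cluster": 1e-5, "active galaxy": 1e-7}
per_source = {name: power_cgs / n for name, n in density.items()}
```
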
In the photoproduction processes roughly equal energy goes into the secondary neutrons, neutral and charged pions whose energy ends up in cosmic rays, gamma rays and neutrinos, respectively. We therefore assume that the same energy density of $\rho_E \sim 3 \times 10^{-19}\rm\,erg\ cm^{-3}$, observed in cosmic rays and electromagnetic energy, ends up in neutrinos with a spectrum $E_\nu dN / dE_{\nu} \sim E^{-\gamma}\rm\, cm^{-2}\, s^{-1}\, sr^{-1}$ that continues up to a maximum energy $E_{\rm max}$. The neutrino flux follows from the relation $ \int E_\nu dN / dE_{\nu} = c \rho_E / 4\pi $. For $\gamma = 1$ and $E_{\rm max} = 10^8$\,GeV, the generic source of the highest energy cosmic rays produces a flux of $ {E_\nu}^2 dN / dE_{\nu} \sim 5 \times 10^{-8}\rm\, GeV \,cm^{-2}\, s^{-1}\, sr^{-1} $. There are several ways to modify this simple prediction: \begin{itemize} \item The derivation fails to take into account the fact that there are more cosmic rays in the universe producing neutrinos than observed at earth because of the GZK-effect and neglects evolution of the sources with redshift. This increases the neutrino flux by a factor $\sim$\,3, possibly more. \item For proton-$\gamma$ interactions muon neutrinos receive only 1/4 of the energy of the charged pion in the decay chain $\pi^+\rightarrow \mu^+ +\nu_{\mu}\rightarrow e^+ +\nu_e +\bar{\nu}_{\mu} +\nu_{\mu}$ assuming that the energy is equally shared between the 4 leptons and taking into account that oscillations over cosmic distances distribute the neutrino energy equally among the 3 flavors. \end{itemize} The corrections approximately cancel. Studying specific models of cosmic ray accelerators one finds that the energy supplied by the black hole to cosmic rays usually exceeds that transferred to pions, for instance by a factor 5 in the case of gamma ray bursts. 
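The normalization just described can be sketched in a few lines; the lower integration bound $E_{\rm min}=1$\,GeV is an assumption made here, since the text leaves it unspecified, and the result depends logarithmically on it.

```python
# Sketch of the Waxman-Bahcall-type normalization.  For E dN/dE ~ E^-1
# the quantity C = E^2 dN/dE is constant, and the bolometric relation
#   int E (dN/dE) dE = c rho_E / (4 pi)
# gives C * ln(E_max/E_min) = c rho_E / (4 pi).
# E_min = 1 GeV is an assumed lower bound (not stated in the text).
import math

C_CM_S = 3e10            # speed of light, cm/s
ERG_PER_GEV = 1.602e-3   # 1 GeV in erg

rho_E = 3e-19            # erg cm^-3, from the cosmic-ray fit
e_min, e_max = 1.0, 1e8  # GeV

bolometric = C_CM_S * rho_E / (4 * math.pi) / ERG_PER_GEV  # GeV cm^-2 s^-1 sr^-1
e2_flux = bolometric / math.log(e_max / e_min)
# e2_flux comes out a few times 1e-8 GeV cm^-2 s^-1 sr^-1, consistent
# with the ~5e-8 quoted above to within a factor ~2 (depending on E_min).
```
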
We therefore estimate that the muon-neutrino flux associated with the sources of the highest energy cosmic rays is loosely confined to the range $ {E_\nu}^2 dN / dE_{\nu}= 1\sim 5 \times 10^{-8}\rm\, GeV \,cm^{-2}\, s^{-1}\, sr^{-1} $ yielding $10 \,{\sim}\, 50$ detected muon neutrinos per km$^2$ per year. This number depends weakly on $E_{\rm max}$ and the spectral slope~$\gamma$. The observed event rate is obtained by folding the predicted flux with the probability that the neutrino is actually detected in a high energy neutrino telescope; the latter is given by the ratio of the muon and neutrino interaction lengths in the detector medium, $\lambda_\mu / \lambda_\nu$\cite{PR}. This flux has to be compared with the sensitivity of ${\sim}10^{-7}\rm\, GeV\ cm^{-2}\, s^{-1}\,sr^{-1}$ reached during the first 4 years of operation of the completed AMANDA detector in 2000--2003\cite{hill}. The analysis of the data has not been completed, but a preliminary limit of $2.9 \times 10^{-7}\rm\,GeV\ cm^{-2}\,s^{-1}\,sr^{-1}$ has been obtained with a single year of data\cite{b10-diffuse}. On the other hand, after three years of operation IceCube will reach a diffuse flux limit of $E_{\nu}^2 dN / dE_{\nu} = 2\,{\sim}\, 7 \times 10^{-9}\rm\,GeV \,cm^{-2}\, s^{-1}\, sr^{-1}$. The exact value depends on the magnitude of the dominant high energy atmospheric neutrino background from the prompt decay of atmospheric charmed particles\cite{ice3}. The level of this background is difficult to anticipate. A cosmic flux at the ``Waxman-Bahcall" level will result in the observation of several hundred neutrinos in IceCube\cite{ice3}. \section{Kilometer-Scale Detectors} Arguing that a generic cosmic accelerator produces equal energies in cosmic rays, photons and neutrinos, we derived the ``Waxman-Bahcall'' flux. A kilometer-scale detector is required to detect the roughly $10 {\sim} 50$ events per km$^2$ year. 
Model calculations assuming that active galaxies or gamma-ray bursts are the actual sources of cosmic rays yield similar, or even smaller event rates. The case for kilometer-scale detectors also emerges from the consideration of ``guaranteed'' cosmic fluxes. Neutrino fluxes are guaranteed when both the accelerator and the pion producing target material can be identified. Examples include: \begin{itemize} \item The extragalactic cosmic rays produce $0.1 \sim$ a few events per km$^2$ year in interactions with cosmic microwave photons. Furthermore, these cosmic rays are magnetically trapped in galaxy clusters and may produce additional neutrinos on the X-ray emitting gas in the cluster. \item Galactic cosmic rays interact with hydrogen in the disk producing an observable neutrino flux in a kilometer-scale detector. \item Air shower arrays have observed a ``directional'' flux of cosmic rays from the galactic plane, unlikely to be protons whose directions are scrambled in the magnetic field. The flux appears only in a narrow energy range from $1\,{\sim}\, 3$\,EeV, the energy where neutrons reach typical galactic kiloparsec distances within their lifetime of minutes. Both the directionality and the characteristic energy make a compelling case for electrically neutral neutron primaries. For every neutron reaching earth, a calculable number decays into electron antineutrinos before reaching us. Their flux should be observable in neutrino telescopes\cite{luis}: from the Cygnus region at the South Pole and from the galactic center for a Mediterranean detector. \end{itemize} In conclusion, observation of ``guaranteed'' sources also requires kilometer-size neutrino detectors, preferably operated over many years. Finally and most importantly, with recent observations\cite{hess} of the supernova remnant RX J1713.7-3946 using the H.E.S.S. 
atmospheric Cherenkov telescope array, gamma-ray astronomy may have pointed at a truly guaranteed source of cosmic neutrinos\cite{alvarezhalzen}. The observations of TeV-gamma rays from the supernova remnant may have identified the first site where protons are accelerated to energies typical of the main component of the galactic cosmic rays\cite{hess}. Although the resolved image of the source (the first ever at TeV energies!) reveals TeV emission from the whole supernova remnant, it shows a clear increase of the flux in the directions of known molecular clouds. This naturally suggests the possibility that protons, shock accelerated in the supernova remnant, interact with the dense clouds to produce neutral pions that are the source of the observed increase of the TeV signal. Furthermore, the high statistics H.E.S.S. data for the flux follow a power law over a large range of energies, without any signature of a cutoff characteristic of synchrotron or inverse-Compton sources. Other interpretations are not ruled out\cite{hiraga} but, fortunately, higher statistics data are forthcoming. If future data confirm that a fraction of the TeV flux of RX J1713.7-3946 is of neutral pion origin, then the accompanying charged pions will produce a guaranteed neutrino flux of at least 20 muon-type neutrinos per kilometer-squared per year\cite{alvarezhalzen}. From a variety of such sources we can therefore expect event rates of cosmic neutrinos of galactic origin similar to those estimated for extragalactic neutrinos in the previous section. Supernovae associated with molecular clouds are a common feature of the OB associations that exist throughout the galactic plane. They have been suspected to be the sources of the galactic cosmic rays for some time. It is important to realize that the relation between the neutrino and gamma flux is robust\cite{alvarezhalzen}.
The $\nu_\mu + \bar\nu_\mu$ neutrino flux ($dN_\nu/dE_\nu$) produced by the decay of charged pions in the source can be derived from the observed gamma ray flux by imposing energy conservation: \begin{equation} \int_{E_{\gamma}^{\rm min}}^{E_{\gamma}^{\rm max}} E_\gamma {dN_\gamma\over dE_\gamma} dE_\gamma = K \int_{E_{\nu}^{\rm min}}^{E_{\nu}^{\rm max}} E_\nu {dN_\nu\over dE_\nu} dE_\nu \label{conservation} \end{equation} where ${E_{\gamma}^{\rm min}}$ ($E_{\gamma}^{\rm max}$) is the minimum (maximum) energy of the photons that have a hadronic origin. ${E_{\nu}^{\rm min}}$ and ${E_{\nu}^{\rm max}}$ are the corresponding minimum and maximum energy of the neutrinos. The factor $K$ depends on whether the $\pi^0$'s are of $pp$ or $p\gamma$ origin. Its value can be obtained from routine particle physics. In $pp$ interactions 1/3 of the proton energy goes into each pion flavor on average. In the pion-to-muon-to-electron decay chain 2 muon-neutrinos are produced with energy $E_\pi/4$ for every photon with energy $E_\pi/2$ (on average). Therefore the energy in neutrinos matches the energy in photons and $K=1$. This flux has to be reduced by a factor 2 because of oscillations. The estimate should be considered a lower limit because the photon flux to which the calculation is normalized, may be partially absorbed in the source or in the interstellar medium. \section{Neutrino Telescopes: First ``Light''} While it has been realized for many decades that the case for neutrino astronomy is compelling, the challenge has been to develop a reliable, expandable and affordable detector technology to build the kilometer-scale telescopes required to do the science. Conceptually, the technique is simple. In the case of a high-energy muon neutrino, for instance, the neutrino interacts with a hydrogen or oxygen nucleus in deep ocean water and produces a muon travelling in nearly the same direction as the neutrino. 
The Cherenkov light emitted along the muon's kilometer-long trajectory is detected by a lattice of photomultiplier tubes deployed on strings at depth, shielded from radiation. The orientation of the Cherenkov cone reveals the roughly collinear muon and neutrino direction. The AMANDA detector, using natural 1 mile-deep Antarctic ice as a Cherenkov detector, has operated for more than 4 years in its final configuration of 667 optical modules on 19 strings. The detector is in steady operation, collecting roughly $7\,{\sim}\, 10$ neutrinos per day using fast on-line analysis software. The lower number will yield a background-free sample all the way to the horizon. AMANDA's performance has been calibrated by reconstructing muons produced by atmospheric muon neutrinos in the 50\,GeV to 500\,TeV energy range\cite{nature}. Using the first 4 years of AMANDA\,II data, the AMANDA collaboration is performing a search for the emission of muon neutrinos from spatially localized directions in the northern sky. Only the year 2000 data have been published~\cite{HS}. The skyplot for the 4 years of data is shown in Fig.\,3. A 90\% upper limit on the neutrino energy flux from point sources is at the level of $6 \times 10^{-8}\rm\, GeV \,cm^{-2}\, s^{-1}$ or $10^{-10}\rm \,erg\ cm^{-2}\,s^{-1}$, averaged over declination. This corresponds to a flux of $6 \times 10^{-9}\rm\, cm^{-2}\, s^{-1}$ integrated above 10\,GeV assuming an $E^{-2}$ energy spectrum typical for shock acceleration of particles in high energy sources. In a search for 33 preselected sources, the most significant excess is from the Crab with 10 events where 5 are expected, not significant given the number of trials. IceCube is needed to make conclusive observations of sources. There are several ways to further improve the detector's reach, for instance by including time information on the sources provided by observations at other (photon) wavelengths.
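The two equivalent statements of the quoted point-source limit follow from standard unit conversions; a minimal check:

```python
# Check the AMANDA point-source limit in its two forms: an energy flux of
# 6e-8 GeV cm^-2 s^-1 is ~1e-10 erg cm^-2 s^-1, and for an E^-2 spectrum
# (E^2 dN/dE = C constant) the rate integrated above E_0 is C / E_0.
ERG_PER_GEV = 1.602e-3   # 1 GeV in erg

c_norm = 6e-8                           # GeV cm^-2 s^-1 (E^2 dN/dE)
energy_flux_cgs = c_norm * ERG_PER_GEV  # ~1e-10 erg cm^-2 s^-1

e0 = 10.0                               # GeV threshold
integral_flux = c_norm / e0             # ~6e-9 cm^-2 s^-1 above 10 GeV
```
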
\begin{figure}[t] \centering\leavevmode \includegraphics[width=5in]{halzen_fig3.eps} \caption{Skymap showing declination and right ascension of neutrinos detected by the AMANDA\,II detector during four Antarctic winters of operation in 2000--2003.} \end{figure} The AMANDA\,II detector has reached a high-energy effective telescope area of 25,000--40,000\,m$^2$, depending on declination. This represents an interesting milestone: known TeV gamma ray sources, such as the active galaxies Markarian 501 and 421, should be observed in neutrinos if the numbers of gamma rays and neutrinos emitted are roughly equal, as expected from cosmic ray accelerators producing pions\cite{alvarezhalzen}. Therefore AMANDA must detect the observed TeV photon sources soon, or its observations will exclude them as the sources of cosmic rays. \section{Mediterranean Telescopes} Below PeV energy, South Pole neutrino telescopes do not cover the Southern sky, which is obscured by the large flux of cosmic ray muons and neutrinos. This and the obvious need for more than one telescope --- accelerator experiments have clearly demonstrated the value of multiple detectors --- provide compelling arguments for deploying northern detectors. With the first observation of neutrinos by a detector in Lake Baikal with a telescope area of 2500\,m$^2$ for TeV muons\cite{baikal} and after extensive R\&D efforts by both the ANTARES\cite{antares} and NESTOR\cite{nestor} collaborations in the Mediterranean, there is optimism that the technological challenges to build neutrino telescopes in deep sea water have been met. Both Mediterranean collaborations have demonstrated their capability to deploy and retrieve optical sensors, and have reconstructed down-going muons with optical modules deployed for R\&D tests. The ANTARES neutrino telescope is under construction at a 2400\,m deep Mediterranean site off Toulon, France. 
It will consist of 12 strings, each equipped with 75 optical sensors mounted in 25 triplets. The detector performance has been fully simulated\cite{antares} with the following results: a sensitivity after one year to point sources of 0.4--$5 \times 10^{-15}\rm\, cm^{-2}\, s^{-1}$ (note that this is the flux of secondary muons, not neutrinos) and to a diffuse flux of $0.9 \times 10^{-7}\rm\, GeV \,cm^{-2}\, s^{-1}$ above 50\,TeV. As usual, an $E^{-2}$ spectrum has been assumed for the signal. AMANDA\,II data have reached similar point source limits ($0.6 \times 10^{-15}\rm\, cm^{-2}\, s^{-1}$) using 4 Antarctic winters, or about 1000 days, of data\cite{HS}. This value depends weakly on declination. Also the diffuse limits reached in the absence of a signal are comparable\cite{hill}. We have summarized the sensitivity of both experiments in Table~1, where they are also compared to the sensitivity of IceCube. \begin{table}[h] \caption{Sensitivity of IceCube, AMANDA-II and ANTARES to point sources and to a diffuse flux of neutrinos.} \tabcolsep1.25em \def\arraystretch{1.5} \begin{center} \begin{tabular}{cccc} \hline & \bf IceCube& \bf AMANDA-II$^*$& \bf ANTARES\\ \hline \bf \# of PMTs& 4800 / 10 inch& 600 / 8 inch& 900 / 10 inch\\ \hline \parbox{6.5em}{{\bf Point source sensitivity} (muons/year)}& $6\times 10^{-17}\,\rm cm^{-2}\,\rm s^{-1}$& \parbox{9em}{\centering $1.6\times 10^{-15}\,\rm cm^{-2}\, s^{-1}$ weakly dependent\\ on declination}& \parbox{10em}{\centering 0.4--$5\times 10^{-15}\rm\, cm^{-2}\,s^{-1}$ depending\\ on declination}\\[.2in] \hline \parbox{6.5em}{{\bf diffuse limit$^\dagger$} (muons/year)}& \parbox{7.5em}{\centering 3--$12\times 10^{-9}\,\rm GeV\break cm^{-2}\,s^{-1}\,sr^{-1}$}& \parbox{6em}{\centering $2\times 10^{-7}\rm\,GeV\break cm^{-2}\, s^{-1}\,sr^{-1}$}& \parbox{6.5em}{\centering $0.8\times 10^{-7}\rm\,GeV\break cm^{-2}\,s^{-1}\,sr^{-1}$}\\[.1in] \hline \multicolumn{4}{l}{$^*$includes systematic errors}\\ \multicolumn{4}{l}{$^\dagger$depends on assumption for background from atmospheric neutrinos from charm} \end{tabular} \end{center} 
\end{table} Given that AMANDA and ANTARES operate at similar depths and have similar total photocathode area (AMANDA\,II is actually a factor of 2 smaller with 667 8-inch versus 900 10-inch photomultipliers for ANTARES), the above comparison provides us with a first glimpse at the complex question of the relative merits of water and ice as a Cherenkov medium. The conclusion seems to be that, despite differences in optics and in the background counting rates of the photomultipliers, the telescope sensitivity is approximately the same for equal photocathode area. The comparison is summarized in Table~1, where we have tabulated the sensitivity of AMANDA and ANTARES to point sources and to a diffuse flux of neutrinos. At this time, in the absence of a discovery, it is the sensitivity and not the area or angular resolution that represents the relevant figure of merit. In the same context, the NEMO collaboration has done the interesting exercise of simulating the IceCube detector (augmented from 4800 to 5600 optical modules; see next section) in water rather than ice. One finds a slightly reduced sensitivity in water, a difference that is probably not significant within errors and that at no energy exceeds 50\%\cite{emigneco}. Notice that in several years of operation a kilometer-scale detector like IceCube can improve the sensitivity of first-generation telescopes by two orders of magnitude. \section{Kilometer-scale Neutrino Observatories} The baseline design of kilometer-scale neutrino detectors maximizes sensitivity to $\nu_\mu$-induced muons with energy above hundreds of GeV, where the acceptance is enhanced by the increasing neutrino cross section and muon range and the Earth is still largely transparent to neutrinos. The mean free path of a $\nu_\mu$ becomes smaller than the diameter of the Earth above 70\,TeV --- above this energy neutrinos can only reach the detector from angles closer to the horizon. 
Good identification of other neutrino flavors becomes a priority, especially because $\nu_\tau$ are not absorbed by the Earth. Good angular resolution is required to distinguish possible point sources from background, while energy resolution is needed to enhance the signal from astrophysical sources, which are expected to have flatter energy spectra than the background atmospheric neutrinos. Overall, AMANDA represents a proof of concept for the kilometer-scale neutrino observatory, IceCube\cite{ice3}, now under construction. IceCube will consist of 80 kilometer-length strings, each instrumented with 60 10-inch photomultipliers spaced by 17~m. The deepest module is 2.4~km below the surface. The strings are arranged at the apexes of equilateral triangles 125\,m on a side. The instrumented (not effective!) detector volume is a cubic kilometer. A surface air shower detector, IceTop, consisting of 160 Auger-style Cherenkov detectors deployed over 1\,km$^{2}$ above IceCube, augments the deep-ice component by providing a tool for calibration, background rejection and air-shower physics, as illustrated in Fig.~4. \begin{figure}[h] \centering\leavevmode \includegraphics[width=6in]{halzen_fig4.eps} \caption{Relative sizes of the IceCube, AMANDA, and Superkamiokande neutrino detectors. AMANDA will be operated as a lower threshold subsystem of IceCube. As the size of the detector grows, so does the threshold energy of neutrinos detected.} \end{figure} The transmission of analogue photomultiplier signals from the deep ice to the surface, used in AMANDA, has been abandoned. The photomultiplier signals will be captured and digitized inside the optical module. The digitized signals are given a global time stamp with a precision of $<10$\,ns and transmitted to the surface. The digital messages are sent to a string processor, a global event trigger and an event builder. 
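The quoted cubic-kilometer scale follows directly from these numbers. The back-of-the-envelope sketch below assigns each string a hexagonal unit-cell area of $(\sqrt{3}/2)\,d^2$, an assumption made only for this estimate.

```python
import math

# Back-of-the-envelope IceCube geometry from the numbers in the text:
# 80 strings, 60 sensors per string spaced by 17 m, strings on a
# triangular grid with 125 m spacing.
n_strings, n_doms, dom_spacing, string_spacing = 80, 60, 17.0, 125.0

n_sensors = n_strings * n_doms                # total optical modules
string_length = (n_doms - 1) * dom_spacing    # instrumented length per string, m
# Each string of a triangular lattice "occupies" one unit-cell area of
# (sqrt(3)/2) * d^2 -- an assumption for this estimate, not a design spec.
cell_area = math.sqrt(3) / 2 * string_spacing**2
footprint_km2 = n_strings * cell_area / 1e6
volume_km3 = footprint_km2 * string_length / 1e3

print(f"{n_sensors} sensors, string length {string_length:.0f} m")
print(f"footprint ~{footprint_km2:.2f} km^2, volume ~{volume_km3:.2f} km^3")
```

The estimate lands at roughly one cubic kilometer of instrumented volume, consistent with the text.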
Construction of the detector commences in the austral summer of 2004/2005 and continues for 6 years, possibly less. The growing detector will take data during construction, with each string coming online within days of deployment. The data streams of IceCube, and AMANDA\,II, embedded inside IceCube, will be merged off-line using GPS timestamps. IceCube will offer advantages over AMANDA\,II beyond its larger size: it will reconstruct tracks with higher efficiency and superior angular resolution, map showers from electron- and tau-neutrinos (events where both the production and decay of a $\tau$ produced by a $\nu_{\tau}$ can be identified) and, most importantly, measure neutrino energy. Simulations, benchmarked by AMANDA data, indicate that the direction of muons can be determined with sub-degree accuracy and their energy measured to better than 30\% in the logarithm of the energy. The direction of showers will be reconstructed to better than 10$^\circ$ above 10\,TeV and the response in energy is linear and better than 20\%. Energy resolution is critical because, once one establishes that the energy exceeds 1\,PeV, there is no atmospheric muon or neutrino background in a kilometer-square detector and full sky coverage of the telescope is achieved. The background counting rate of IceCube signals is expected to be less than 0.5\,kHz per optical sensor. In this low background environment, IceCube can detect the excess of MeV anti-$\nu_e$ events from a galactic supernova. NEMO, an INFN R\&D project in Italy, has been mapping Mediterranean sites and studying novel mechanical structures, data transfer systems as well as low power electronics, with the goal of deploying a next-generation detector similar to IceCube. A concept has been developed with 81 strings spaced by 140\,m. Each consists of 18 bars that are 20\,m long and spaced by 40\,m. A bar holds a pair of photomultipliers at each end, one looking down and one horizontally. 
As already mentioned, the simulated performance\cite{NEMO} is, not unexpectedly, similar to that of IceCube, which has a total photocathode area similar to that of the NEMO concept. Recently, a wide array of projects have been initiated to detect neutrinos of the highest energies, typically above a threshold of 10 EeV, exploring other experimental signatures: horizontal air showers and acoustic or radio emission from neutrino-induced showers. Some of these experiments, such as the Radio Ice Cerenkov Experiment\cite{frichter} and an acoustic array in the Caribbean\cite{lehtinen}, have taken data; others are under construction, such as the Antarctic Impulsive Transient Antenna\cite{gorham}. The more ambitious EUSO/OWL project aims to detect the fluorescence of high energy cosmic rays and neutrinos from a detector attached to the International Space Station. \section*{Acknowledgments} I thank my AMANDA/IceCube collaborators and Teresa Montaruli for discussions. This research was supported in part by the National Science Foundation under Grant No.~OPP-0236449, in part by the U.S.~Department of Energy under Grant No.~DE-FG02-95ER40896, and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
\subsection{The overview of TEE-based privacy architecture} In this section, we introduce the details of the TEE-based privacy architecture. Figure~\ref{fig:cloak-tee} shows an overview of the architecture. $F$ denotes the contract code that expresses the business logic $f$ of a VMPT. $P$ is a privacy policy, which expresses the privacy specification of the VMPT, specifically which data are secret and who the participants are. $E$ is an executor that receives private data, runs $F$, generates a proof $p$ and delivers the result. $V$ is a public verifier, which is responsible for verifying the legitimacy of $S'$ with $p$ and updating the state. In the following, we introduce how to simultaneously achieve the security properties $P1\wedge P2\wedge P3$ of a VMPT based on this architecture. \begin{figure}[h] \includegraphics[width=8.5cm]{images/vp-arch.png} \caption{TEE-based verifiable privacy architecture} \label{fig:cloak-tee} \end{figure} \subsection{Privacy policy of VMPT} To achieve $P1$, we must avoid privacy leakage both during execution and in the input parameters and return values. As executing $F$ in enclaves already achieves execution privacy, we additionally specify a privacy policy, $P$, to instruct $E$ to enforce the privacy of the inputs and outputs. The privacy specification of the MPC protocol is bound by the hash of $P$. \subsection{Target function of VMPT} To achieve $P2$, we can directly express the $f$ of a VMPT as a smart contract, marked as the target function $F$, and bind it by the hash of $F$. Although $F$ is expected to be expressed in a user-friendly way, it is not limited to blockchain-supported contract languages. We explain this further in Section~\ref{sec:tee-verifier}. \subsection{TEE-based executor of VMPT} To achieve $P3$, we first need the basic runtime of $E$; we then introduce how to construct the proof of an MPT result that attests the binding of the tuple $(F, P, E)$. 
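To make the role of $P$ concrete, the sketch below models a policy as a plain data structure and shows how an executor could use it to decide which outputs each party may see. The field names and owner labels are illustrative assumptions, not \codename's actual policy format.

```python
# Illustrative sketch of a privacy policy P and of how an executor E could
# enforce output privacy with it. The field names ("inputs", "return") and
# owner labels ("all", "winner", party ids) are hypothetical, chosen only
# to mirror the policy aspects described in the text.

policy = {
    "function": "biddingProcure",
    "inputs":  {"bids": "p"},          # each bid is owned by its submitting party
    "return":  {"winner": "all",       # public output
                "sPrice": "winner"},   # visible only to the winner
}

def visible_outputs(policy, outputs, party, winner):
    """Return only the outputs that `party` is allowed to see under P."""
    allowed = {}
    for name, owner in policy["return"].items():
        if owner == "all" or (owner == "winner" and party == winner) or owner == party:
            allowed[name] = outputs[name]
    return allowed

outputs = {"winner": "alice", "sPrice": 42}
print(visible_outputs(policy, outputs, "alice", winner="alice"))  # both fields
print(visible_outputs(policy, outputs, "bob", winner="alice"))    # only 'winner'
```

In the real architecture this filtering happens inside the enclave, before any result leaves $E$.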
\subsubsection{Enclave runtime} $E$ is the core of the architecture. It consists of three modules: the Privacy Interpreter Enclave (PIE), the Key Management Enclave (KME) and the Virtual Machine Enclave (VME). Each participant of a VMPT seals its secret $x_i$ and the transaction info, \hbox{\emph{e.g.}}\xspace the function name and contract name, with $E$'s public key, signs the transaction, and sends it to $E$. Upon receiving the request, $E$ decrypts the data and processes the transaction with the PIE. The PIE reads the transaction-related $P$, $S$, and $F$ from the ledger, and then waits to receive the secrets from all parties. After the secrets are obtained and checked, the PIE calls the VME to execute $F$. When the execution finishes, the PIE keeps each $y_i$ private to the owner of $x_i$. It also encrypts the new plaintext state according to $P$ to obtain $S'$, and sends a transaction with $S'$ and a proof to update the verifier's old state, $S$. As $F$ is expressed in Solidity here, the VME of $E$ hosts an Ethereum Virtual Machine (EVM) to run the compiled bytecode. We note that the VME can easily be extended to WASM, the JVM, \hbox{\emph{etc.}}\xspace. \subsubsection{Proof generation}\label{sec:proof-generation} To achieve $P2$, we build a trust chain based on the \textit{non-broken device assumption}, which means the real TEE device has not been compromised. We note that this is an easy-to-reach assumption adopted in many related works~\cite{matetic2018delegatee, Sinisa2019BITE}. Furthermore, we also propose practical ways to reduce the risk of this assumption being broken, which will be discussed in Section~\ref{sec:tee-consensus}. Based on this assumption, we integrate three proofs to achieve full verifiability. \myparagraph{Device authenticity proof} The trust in an SGX device is based on a remote attestation report from the Intel Attestation Service (IAS). Thus, we first attest and publish the report of each SGX device. 
\begin{lstlisting}[caption=Simplified attestation report of SGX from Intel IAS, label=listing:proofData] { "X-IASReport-Signature": <string>, "reportData": { "isvEnclaveQuoteStatus": <string>, "isvEnclaveQuoteBody": <string> { "REPORTBODY": { "MRENCLAVE": <string>, "REPORTDATA": <string> } } } } \end{lstlisting} \myparagraph{Trust runtime proof} Based on real SGX devices, we also need to prove that the communication channel and the code runtime are trusted. To defend users against man-in-the-middle attacks, two keys, $E_{pk}$ (\textit{encKey}) and $V_{pk}$ (\textit{verKey}), are bound in the IAS report. $E_{pk}$ is an encryption key, \hbox{\emph{e.g.}}\xspace an RSA public key, with which users can ensure that only the code inside the enclave can access their private data. $V_{pk}$ is a verification key, \hbox{\emph{e.g.}}\xspace an ECDSA public key, with which users can verify the authenticity of results. Then, with bound code hashes, \textit{TEEMRs}, immutability of the executor runtime is also achieved. \begin{lstlisting}[language=Solidity, caption=The registration data of SGX on-chain, label=listing:registerData] struct RegisterData { string verKey; string encKey; string[] TEEMRs; string IASReport; } \end{lstlisting} Listing~\ref{listing:registerData} shows the data structure used by an SGX device to register itself on-chain before receiving users' private data. Users verify it to confirm device authenticity and executor trustworthiness. \myparagraph{Result proof} To prove that the result is the real result of the private contract with the expected $F$ and $P$, the SGX signs a report with the hashes of $F$, $P$, $S$ and $S'$ using the signing key corresponding to the $V_{pk}$ of the trusted runtime $E$. This signed report serves as the verifiable MPT proof, $p$. \subsection{On-chain contract verifier}\label{sec:tee-verifier} To verify $p$ on-chain, \codename first deploys the predefined \texttt{CloakService} and verifier contracts to the blockchain. These contracts together play the role of $V$. 
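The result-proof check that $V$ performs can be sketched off-chain as follows. The hash concatenation and the keyed-hash ``signature'' are illustrative stand-ins: a real deployment signs with ECDSA using the $V_{pk}$ bound in the IAS report, and key registration happens on-chain via \texttt{CloakService}.

```python
import hashlib

# Off-chain sketch of the result-proof binding described above. The hash
# concatenation scheme and the keyed-hash "signature" are illustrative
# stand-ins, not the actual Cloak encoding; a real deployment uses ECDSA
# with the V_pk from the IAS report.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def binding(F: bytes, P: bytes, S: bytes, S_new: bytes) -> bytes:
    """Bind the code, policy, old state and new state into one digest."""
    return h(h(F) + h(P) + h(S) + h(S_new))

def sign(signing_key: bytes, msg: bytes) -> bytes:  # stand-in for ECDSA
    return hashlib.sha256(signing_key + msg).digest()

registered_keys = {b"enclave-1"}  # keys registered via CloakService

def verify(key: bytes, proof: bytes, F, P, S, S_new) -> bool:
    if key not in registered_keys:       # sender must be a registered TEE
        return False
    return proof == sign(key, binding(F, P, S, S_new))

F, P, S, S_new = b"bytecode", b"policy", b"old-state", b"new-state"
p = sign(b"enclave-1", binding(F, P, S, S_new))
print(verify(b"enclave-1", p, F, P, S, S_new))      # True
print(verify(b"enclave-1", p, F, P, S, b"forged"))  # False
```

Any tampering with $F$, $P$, $S$ or $S'$ changes the binding digest and invalidates the proof.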
\subsubsection{TEE executor registration contract} \texttt{CloakService} is deployed to register the $E$'s runtime proof. It registers IAS reports to make them immutable and available. The verifier contract calls \textit{verify} to check the proof of a VMPT. \subsubsection{Transaction result verifier contract} The verifier contract calls \texttt{CloakService} to verify two conditions. First, that the transaction's original sender matches one of the registered TEEs with the specific SGX measurement in the pool. Second, that the binding of $F$, $P$ and $S$ is as expected. If both conditions are satisfied, the verifier contract updates its contract state. An example of a verifier is shown in Section~\ref{sec:verifier-contract}. \begin{lstlisting}[language=Solidity, caption=The generated service contract, label=listing:contracts] contract CloakService { mapping(address => RegisterData) public devicesInfo; function registerWorker( address workerId, RegisterData deviceInfo ) public { // check deviceInfo, register a new TEE device } function verify( uint256[] proof, uint256 TEEMR, uint256 codeHash, uint256 policyHash, uint256 functionHash, uint256 oldStateHash ) public returns (bool) { // check tx.origin, codeHash, policyHash return true; } } \end{lstlisting} \subsection{Optimizations} \subsubsection{Batch processing transactions as Layer-2} As a state machine, a blockchain's world state evolves according to the old state and new transactions. We can batch process the transactions, so the scalability of the \textit{VP-Arch} can be significantly higher than that of the main chain. \textit{VP-Arch} can periodically synchronize its state with the main chain, or synchronize the state in response to a user request. \subsubsection{Improve credibility through consensus}\label{sec:tee-consensus} We apply a consensus protocol among the $E$s to reduce the risk of devices being compromised. 
Since all known SGX attacks require access to the device and cause limited loss~\cite{Zhang2020SurveyOA, Nilsson2020ASO}, SGX-based nodes can be regarded as non-Byzantine nodes to some degree, which makes \textit{VP-Arch} highly scalable and trusted. \subsection{Develop Confidential Smart Contract} \myparagraph{Annotate Privacy Invariants} Developers annotate a variable's owner in its declaration statement as one of \{\owner{all}, \owner{me}, \owner{id}, \owner{tee}\}. \owner{all} means public; \owner{me} means the \code{msg.sender}; \owner{id} is a declared variable of type \code{address}; \owner{tee} means any registered address of an SGX with the \codename runtime. With \codename, users can intuitively specify the MPT in Figure~\ref{fig:vmpt-definition} as a \codename smart contract, the \textit{.cloak} file in Listing~\ref{listing:cloak}. In line 1, the developer declares the key of \code{balances} as a temporary variable \owner{k} and then specifies that the corresponding value is owned by the account with address \code{k}; \hbox{\emph{e.g.}}\xspace, \code{balances[tenderer]} is only known by the \code{tenderer} in line 23. In line 2, the developer specifies that \code{mPrice} should be public. In lines 6-7, to handle an uncertain number of suppliers, the developer declares owners \owner{p} and specifies the owners' owned data separately in two dynamic arrays. In line 10, the return value \code{sPrice} is owned by the \code{winner}. In lines 12-13, the developer explicitly \owner{reveal}s private data to another owner, a step \codename enforces to avoid unconsciously leaking privacy. In lines 14-24, it computes the lowest price, the second lowest price, and the winner. The computation is based on operations between private data from different parties, \hbox{\emph{e.g.}}\xspace, \code{bids[i] < sPrice}, \code{ balances[tenderer] += sPrice}. 
\begin{lstlisting}[language=Solidity, caption=\codename smart contract of bidding procurement, label=listing:cloak] contract SupplyChain { mapping(address !k => uint @k) balances; uint @all mPrice; function biddingProcure( address[!p] parties, uint[@p] bids, address tenderer ) public returns (address winner, uint @winner sPrice) { winner = parties[0]; uint mPrice = reveal(bids[0], all); sPrice = reveal(bids[0], winner); for (uint i = 1; i < parties.length; i++) { if (bids[i] < mPrice) { winner = parties[i]; sPrice = mPrice; mPrice = bids[i]; } else if (bids[i] < sPrice) { sPrice = bids[i]; } } balances[tenderer] -= sPrice; balances[winner] += sPrice; } } \end{lstlisting} \myparagraph{\emph{Annotation Checker}} Given a \codename smart contract, \codename first ignores the annotations and checks that the contract is valid Solidity. Then, \codename builds an Abstract Syntax Tree (AST) for further analysis. It infers data owners and checks the privacy invariants. It traverses the AST in post-order and updates each parent node's owner set $o_p=o_l\cup o_r$, where $o_l$ and $o_r$ are the owner sets of the left and right child nodes, respectively. \codename recognizes a function as an MPT if $TEE\in o$ or $|o\setminus\{all\}|\geq2$. The latter means the function takes private data from different parties. Then, \codename checks the consistency of privacy invariants. For example, \codename prohibits developers from implicitly assigning their private data to variables owned by others. \subsection{Deploy Confidential Smart Contract} \myparagraph{\emph{Policy Generator}} With the checked AST, the \code{Policy Generator} generates a privacy config $P$ for the contract. $P$ simplifies and characterizes the privacy invariants. Typically, $P$ includes the variables with their data types and owners. It also includes the ABI and the read-write set of each function. Specifically, $P$ records each function's characteristics from four aspects: \textit{inputs}, \textit{read}, \textit{mutate} and \textit{return}. 
The \textit{inputs} include the function's parameters with their specified data types and owners; \textit{read} records the state variables the function reads during execution; \textit{mutate} records the contract states it mutates; \textit{return} records the return variables. Since $P$ has already recorded the details of the state variables in its head, \hbox{\emph{e.g.}}\xspace, data type and owner, the \code{Policy Generator} keeps only the variable identities in \textit{read}, \textit{mutate} and \textit{return}. \myparagraph{\emph{Code Generator}} The \code{Code Generator} generates a service contract $F$ and a verifier contract $V$. While leaving the computation logic in $F$, the \code{Code Generator} generates $V$ to verify the result and update the state. In $V$, the \code{Code Generator} first imports a pre-deployed \codename TEE registration contract, which holds a list of registered SGXs with the \codename runtime. Then \codename transforms each MPT function in the \textit{.cloak} file into a new function in $V$, which verifies the MPT proof $p$ and later assigns the new state $C(s')$. \myparagraph{\codename Client} With the node IPs and ports configured by developers, \codename deploys the confidential smart contract \textbf{runtime} to the TEE-Blockchain Architecture to obtain trusted \codename executors $E$s. The \textbf{runtime} includes a \code{VM} and an \code{Enc/Dec Module}. Then, \codename deploys an SGX registration contract on the blockchain and registers the $E$s' certificates. For each \codename smart contract, \codename deploys the generated $P$ and $F$ to the $E$s, and $V$ to the blockchain. When a participant proposes an MPT, each participant $i$ provides its $x_i$ to the SDK. $x_i$ is encrypted and sent to the $E$s. According to the deployed $P$, the $E$s wait for all private \code{inputs}, synchronize the \code{read} state, construct a transaction, and execute it in enclaves. Then, the $E$s encrypt the return values and mutated states according to \code{return} and \code{mutate}. 
Finally, the $E$s announce a result transaction on-chain with an MPT proof $p$. The $p$ is the $E$s' signature, \hbox{\emph{i.e.}}\xspace, $p=Sig_E\langle P, F, C(s), C(r_i), C(s')\rangle$. It attests that, in compliance with $P$, the $E$s confidentially executed $F$ in enclaves with the private inputs $x_i$ and the old state $s$ committed by $C(s)$, and committed the return values $r_i$ and the new state $s'$ to obtain the results $C(r_i), C(s')$. Upon receiving the announcement transaction with the proof, all nodes and $V$ can be convinced that an MPT really happened and obtain the result. \section{Introduction} \label{sec:introduction} \input{introduction} \section{Multi-party Transaction} \label{sec:vmpt} \input{vmpt} \section{The \codename Framework} \label{sec:cloak} \input{cloak} \section{Preliminary Evaluation} \label{sec:evaluation} \input{evaluation} \section{Demonstration Description} \label{sec:demo} \input{demo} \bibliographystyle{plain}
\section{Qubit Model and Participation Ratios} Figure\,\ref{fig:full} shows an example design of a full differential qubit, including the qubit electrodes and a shielding ground plane (gray). The full design can be broken up into the junction and tapered wires (red), a ribbon capacitor (blue) and a coplanar capacitor (green). The capacitances and losses from the combination of the structures would then be used to optimize the design. \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 140 20 160 30,clip] {TaperFigFull.pdf} \caption{\textbf{Example of full transmon design.} Example design of a differential transmon qubit that incorporates several of the capacitance structures described here. Red shows the junction and tapered wire, blue is a ribbon capacitor, and green is a coplanar capacitor. The outer ground plane is gray. The capacitances and losses would be added to give a good approximation for the entire design. } \label{fig:full} \end{figure} We are interested here in calculating the loss from dielectrics, since the loss from the metallic structures is typically negligible for superconductors. For a crystalline substrate such as silicon or sapphire, the loss is dominated by the thin surface layers of the films \cite{wenner_surfloss}: the metal-air (MA), metal-substrate (MS) and substrate-air (SA) interfaces, typically coming from amorphous oxides. The total loss tangent for these thin layers is given by $\Sigma p_i \tan \delta_i$, where surface interface type $i$ has loss tangent $\tan \delta_i$ and participation ratio of the stored energy \begin{align} p_i = \frac{\epsilon_i/2}{W} \int dA \, t_i\,|E_i|^2 \end{align} where the normal volume integral is replaced by a surface integral $dA$ for a thin dielectric layer with thickness $t_i$, dielectric constant $\epsilon_i$, and a surface electric field $E_i$. The participation ratio is normalized by the total capacitor energy $W = CV^2/2$, where $C$ is the total capacitance and $V$ the voltage. 
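As a bookkeeping illustration of the total loss $\Sigma p_i \tan \delta_i$, the sketch below sums the three interface contributions; the participation ratios and loss tangents used are placeholder values, not measured numbers.

```python
# Illustrative evaluation of the total surface loss, 1/Q = sum_i p_i * tan(delta_i).
# The participation ratios and loss tangents below are placeholder values
# chosen only to show the bookkeeping, not measured device parameters.

layers = {
    #        p_i     tan(delta_i)
    "MA": (2e-4,    2e-3),
    "MS": (2e-3,    2e-3),
    "SA": (1e-3,    2e-3),
}

inv_Q = sum(p * tan_d for p, tan_d in layers.values())
print(f"total loss tangent: {inv_Q:.2e}")
print(f"dielectric-limited quality factor Q ~ {1/inv_Q:.2e}")
```

The sum makes explicit that the design goal is to minimize the weighted participation of the lossiest interfaces.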
When designing the qubit, the qubit capacitance $C$ is usually fixed to a desired parameter. Because the results are more easily interpreted in terms of design distances, it is convenient to describe the qubit capacitance in terms of a length using \begin{align} C \equiv \epsilon_0 L \ . \end{align} For $C = 100\,\textrm{fF}$, a value used for a qubit non-linearity of about 200\,MHz, one finds $L = 11.3\,\textrm{mm}$. For thin films, the electric fields of the top and bottom dielectrics can be considered separately. Thus the electric fields $E_0$ can be solved for $\epsilon = \epsilon_0$, the free space value, and then multiplied by 1 for the solution on the air side, and by $\epsilon_s$ for the substrate side. The surface dielectrics can be taken into account with the three participation ratios \cite{wenner_surfloss} \begin{align} p_\textrm{MA} =& \frac{1}{\epsilon_\textrm{MA}} \frac{t_\textrm{MA}}{L\epsilon_0 /2} \Big[\frac{\epsilon_0}{2}\int_\textrm{MA} dA \, \,|E_0/V|^2 \Big] \ , \label{eq:MA} \\ p_\textrm{MS} =& \frac{\epsilon_s^2}{\epsilon_\textrm{MS}} \, \frac{t_\textrm{MS}}{L\epsilon_0 /2} \Big[ \frac{\epsilon_0}{2} \int_\textrm{MS} dA \, \,|E_0/V|^2 \Big] \ , \label{eq:MS}\\ p_\textrm{SA} =& \ \epsilon_\textrm{SA} \, \frac{t_\textrm{SA}}{L \epsilon_0 /2} \Big[ \frac{\epsilon_0}{2} \int_\textrm{SA} dA\, \,|E_0/V|^2 \Big] \ , \label{eq:SA} \end{align} where the area integrals correspond to the appropriate surfaces for each type, and the bracketed terms are called surface energies. Tangential fields are only included for the SA formula, as appropriate for thin films \cite{wenner_surfloss}. For the above MA and MS surface energies, the electric field is for only one side of the metal. Thus for the total surface energy $U$ calculated in the next sections, the above MA and MS energies in brackets should be $U/2$. 
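The length $L$ and the relative sizes of the three dielectric prefactors can be checked numerically. The sketch below uses the standard dielectric constants for a silicon substrate, aluminum oxide and silicon dioxide surface layers.

```python
# Numeric check of the capacitance length L = C/eps0 and of the relative
# dielectric prefactors 1/eps_MA, eps_s^2/eps_MS and eps_SA appearing in
# the three participation ratios above. Dielectric constants are standard
# values for silicon, aluminum oxide and silicon dioxide.
eps0 = 8.854e-12            # F/m
C = 100e-15                 # 100 fF qubit capacitance
L = C / eps0
print(f"L = {L * 1e3:.1f} mm")

eps_s, eps_MA, eps_MS, eps_SA = 11.7, 9.8, 9.8, 3.8
w_MA, w_MS, w_SA = 1 / eps_MA, eps_s**2 / eps_MS, eps_SA
print(f"MA : MS : SA prefactors = {w_MA:.2f} : {w_MS:.0f} : {w_SA:.1f}")
```

The check reproduces $L = 11.3$ mm; the MS prefactor $\epsilon_s^2/\epsilon_\textrm{MS} \approx 14$ shows why the metal-substrate interface typically dominates for equal surface fields and thicknesses.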
Dielectric constants are for a silicon substrate $\epsilon_s = 11.7$, aluminum oxide $\epsilon_\textrm{MA}=\epsilon_\textrm{MS} = 9.8$, and silicon dioxide $\epsilon_{SA} = 3.8$; the relative weights of the MA:MS:SA dielectric terms are $0.10:14:3.8$. \section{Differential Parallel Plate Capacitor} The simplest geometry is a parallel plate capacitor. This contribution is typically needed when transmon qubits are made using bump-bonded substrates, where the second substrate acts as a ground plane above the qubit metal pads and thus adds capacitance to the qubit. This structure can be treated as a parallel plate with each plate having width $w$ and length $\ell$ and a separation $s$ to the ground plane, with capacitance \begin{align} C_\textrm{p} = (1/2) \epsilon_0 \ell w/s \ , \end{align} where the $1/2$ factor comes from the differential design of the qubit, in which the capacitance of each parallel plate appears in series. The electric field in each differentially-driven capacitor is $E_\textrm{p} = \tfrac{1}{2}V/s$, and the total surface energy is \begin{align} U_\textrm{p} & = (\epsilon/2) 2(2\ell w)(V/2s)^2 \ , \end{align} where a factor of 2 comes from the two parallel plates, and another from surface loss at the 2 plates of the capacitor. The participation ratio for the parallel-plate capacitor is \begin{align} p_\textrm{MA}^\textrm{p} =& \frac{1}{\epsilon_\textrm{MA}} \frac{t_\textrm{MA}}{L} \frac{\ell w}{s^2} \ . \label{eqs:pp} \end{align} Participation ratios are written with first the dielectric factor, then the dielectric thickness, and finally the geometric factors for the design. \section{Thickness correction} The finite thickness of the metal film changes the surface electric fields mostly at the edges of the film. Since the edge fields will be similar for different geometries, their effect will be calculated here for a simple flat coaxial film. The resulting simple correction to the surface energy can then be applied to different geometries. 
It is useful to start with a 2-D solution of a coax line, with an inner conductor of radius $r$ and an outer conductor of radius $R$ as illustrated in Fig.\,\ref{fig:Types}a. The solution for the radial electric field on the surface of the inner conductor is \begin{align} E_\textrm{c} & = \frac{V}{r \ln(R/r)}, \label{Ecoax} \end{align} with its strength decreasing with radius $x$ as \begin{align} E_\textrm{c}(x) = (r/x) E_\textrm{c} \label{Ecoax_x} \end{align} \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 140 10 160 20,clip] {TaperFigTypes.pdf} \caption{\textbf{Design geometries.} Drawings of capacitor geometries considered here. a) Cross section of coax, with inner radius $r$ and outer radius $R$. b) Cross section of flat coax, with width $2\overline{r}$ of inner thin film. c) Top view of coplanar capacitor, with width $2a$ and length $\ell$ for each section, separated from the ground plane by gap $b-a$. d) Enlarged view of junction wiring, with length $d$ for each wire from junction to capacitor film. The width of the wire is $2\overline{r}$ for a straight wire, drawn in red. A tapered wire is drawn in blue. } \label{fig:Types} \end{figure} The electric field energy $(\epsilon/2) \int E^2 dv$ is calculated from a volume integral $dv$ of the electric field $E$. Because the interest here is in the surface energy in a 2-D geometry, we compute the line energy $U/\ell$ for a line length $\ell$, so that the full surface energy is obtained by multiplying by the surface thickness and the length $\ell$. For the coax geometry, the surface energy of the inner metal at radius $r$ is \begin{align} U_\textrm{c}^\textrm{m}/\ell & = (\epsilon/2) 2 \pi r E_\textrm{c}^2 \\ & = \epsilon E_c^2 r\, \pi . 
\label{Ucoax} \end{align} The surface energy corresponding to a substrate surface along a cut through the middle of the coax is \begin{align} U_\textrm{c}^\textrm{s}/\ell & = (\epsilon/2)\ 2 \int_r^R E_\textrm{c}(x)^2 dx \\ & = \epsilon E_c^2 r\, [1-r/R] , \end{align} where the factor of 2 before the integral comes from the left and right substrate sides. Fig.\,\ref{fig:Types}b shows a flat coax, where the circular inner conductor is replaced by a thin film of width $2\overline{r}$. The electric field magnitude along the coordinate $x$ for this flat coax is found from numerical solutions for all $x$ to be given by a conformal-mapping solution \begin{align} E_\textrm{f} & = \frac{V}{\overline{r} \ln(2R/\overline{r})} \label{Eflat} \ ,\\ E_\textrm{f}(x) & = E_\textrm{f} \sqrt{\frac{\overline{r}}{|\overline{r}+x|}} \sqrt{\frac{\overline{r}}{|\overline{r}-x|}} \ , \label{Eflat2} \end{align} which fits well for $R > 2 \overline{r} $. The electric field is perpendicular to the metal surface but parallel to the substrate surface. The voltage integral checks properly \begin{align} \int_{\overline{r}}^R E_\textrm{f}(x)\, dx = V [1+O(\overline{r} ^2/R^2)]\ . \end{align} Figure \ref{fig:Flat} shows a comparison between the numerical solution and the formula of Eqs.\,(\ref{Eflat})-(\ref{Eflat2}), showing excellent agreement. The square-root divergence at the metal edge is characteristic of the electric fields of thin metal films. \begin{figure}[b] \includegraphics[width=0.48\textwidth, trim = 110 20 150 40,clip] {TaperFigFlat.pdf} \caption{\textbf{Electric field of flat coax.} Plot of the surface electric field for a flat coax for both the metal surface (black) and substrate (blue), obtained by numerical simulation (dots). The solid lines (green, red) are predictions from Eqs.\,(\ref{Eflat})-(\ref{Eflat2}) and fit the numerics well.
Parameters are $\overline{r} = 10\,\mu\textrm{m}$ and $R = 100\,\mu\textrm{m}$. } \label{fig:Flat} \end{figure} The surface energy for the metal surface ($|x| < \overline{r}$) is \begin{align} U_\textrm{f}^\textrm{m1}/\ell & = (\epsilon/2) E_f^2 \ 4\int_0^{\overline{r}-t/2} \frac{\overline{r}^2}{\overline{r}^2-x^2} \ dx \\ & \simeq \epsilon E_\textrm{f}^2 \overline{r} \ln(4\overline{r}/t) \label{eq:ufm1} \end{align} where the factor of 4 is for the top/bottom and left/right parts of the metal, and the logarithmic divergence in the integral at the edge is cut-off at half the thickness $t$ of the film. Numerical simulation for a film with a rectangular cross-section of thickness $t$ shows that the electric fields within $t/2$ of the outside corner have a power law behavior with exponent $p = -1/3$, as appropriate for a 90 degree corner \cite{wenner_surfloss,Jackson}. As an initial approximate solution, this power law dependence of the corner field is then matched to the computed field $E_\textrm{f}(\overline{r}-t/2)$ at a distance $t/2$ from the corner. At a distance $r_c$ from the corner, the corner field is \begin{align} E_\textrm{c} & = E_\textrm{f}(\overline{r}-t/2) \ [r_c/(t/2)]^{p} \label{eq:edgec} \\ &= E_f \sqrt{\overline{r}/t} \ [2r_c/t]^{p} . \end{align} Including all 4 corners, with 2 sides per corner, the line energy for the corner is approximately \begin{align} U_\textrm{f}^\textrm{m2}/\ell &= (\epsilon/2) E_f^2 (\overline{r}/t)\ 8 \int_0^{t/2} [2r_c/t]^{2p} dr_c. \\ & = 4 \epsilon E_f^2 (\overline{r}/t) (t/2)/(1+2p) \\ & = \epsilon E_f^2 \overline{r} \ [2/ (1+2p)] \ . \label{Ufm2} \end{align} With $1+2p = 1/3$, the numerical factor in Eq.\,(\ref{Ufm2}) is 6 and does not depend on $t$.
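The voltage check above can also be done numerically. The following sketch integrates Eq.\,(\ref{Eflat2}) across the substrate gap, using the substitution $w=\sqrt{x-\overline{r}}$ to remove the inverse-square-root edge singularity; the parameters are those of Fig.\,\ref{fig:Flat}.

```python
import math

def E_flat(x, V, rbar, R):
    """Surface field of the flat coax, Eqs. (Eflat)-(Eflat2)."""
    E0 = V / (rbar * math.log(2 * R / rbar))
    return E0 * math.sqrt(rbar / abs(rbar + x)) * math.sqrt(rbar / abs(rbar - x))

def voltage_integral(V, rbar, R, n=100000):
    """Midpoint integration of E_f(x) from rbar to R.  The substitution
    x = rbar + w**2 (dx = 2 w dw) tames the edge singularity."""
    wmax = math.sqrt(R - rbar)
    h = wmax / n
    total = 0.0
    for i in range(n):
        w = (i + 0.5) * h
        total += E_flat(rbar + w * w, V, rbar, R) * 2 * w * h
    return total

V, rbar, R = 1.0, 10e-6, 100e-6
I = voltage_integral(V, rbar, R)
print(I / V)   # expected: 1 + O(rbar^2/R^2)
```

The recovered voltage agrees with $V$ to within the stated $O(\overline{r}^2/R^2)$ correction.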
The total surface energy for the metal is the sum of the two energies \begin{align} U_\textrm{f}^\textrm{m}/\ell & = \epsilon E_\textrm{f}^2 \overline{r} \ [\ln(4\overline{r}/t) + c_m] \label{eq:ufm} \ , \end{align} where $c_m$ is the corner correction for the finite thickness of the metal. Figure\,\ref{fig:Corner} gives $c_m$ obtained from numerical integration of the surface energy. The corner correction is slowly varying with relative film thickness $t/\overline{r}$ and has a typical value \begin{align} c_m &= 5.0 \end{align} close to the value 6 obtained above by scaling of the corner fields. It is useful that the surface energy is predicted well even for thick films, with thickness as much as one-half the width. The edges typically contribute about 1/3 of the total surface energy. Also shown is the case of a semicircular edge, which lowers the surface energy a non-negligible but small amount, providing a lower bound for the correction of a rounded edge. \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 140 45 180 55,clip] {TaperFigCorner.pdf} \caption{\textbf{Corner corrections.} Plot of corner corrections obtained by numerical simulation of films with finite thickness $t/\overline{r}$ and a straight vertical edge, for the metal $c_m$ (black) and substrate $c_s$ (blue). Data in red is for a semi-circular edge, showing that sharp corners have a non-negligible but non-dominant effect. The corrections vary slowly with thickness and are taken as $c_m = 5.0$ and $c_s = 1.6$. } \label{fig:Corner} \end{figure} This result shows that a constant term added to the logarithmic cut-off term represents the corner fields well. Note the similarity to Eq.\,(\ref{Ucoax}). Here, the bracket term in Eq.\,(\ref{eq:ufm}) is slightly larger than the corresponding $\pi$ constant in Eq.\,(\ref{Ucoax}), as expected since the flat coax has large edge fields.
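Both the corner-scaling constant and the edge fraction can be evaluated directly. A minimal sketch (the $50\,\mu\textrm{m}$ half-width and $0.1\,\mu\textrm{m}$ thickness are assumed typical values, not taken from the figure):

```python
import math

p = -1/3                          # corner-field exponent for a 90-degree corner
corner_factor = 2 / (1 + 2 * p)   # numerical factor in Eq. (Ufm2): 6

c_m = 5.0                         # fitted corner correction
rbar, t = 50e-6, 0.1e-6           # assumed half-width and film thickness
# Fraction of the metal surface energy carried by the corner term in Eq. (eq:ufm)
corner_fraction = c_m / (math.log(4 * rbar / t) + c_m)
print(corner_factor, corner_fraction)
```

For these dimensions the corner term carries roughly 40\% of the bracket in Eq.\,(\ref{eq:ufm}), consistent with the statement that edges contribute about 1/3 of the total surface energy.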
The correction factor $\ln(4\overline{r}/t) \rightarrow \ln(4\overline{r}/t) + c_m$ will be used in all formulas for the metal edge. The surface energy for the substrate surface ($\overline{r} < x < R$) is \begin{align} U_\textrm{f}^\textrm{s1}/\ell & = (\epsilon/2) E_f^2 \ 2\int_{\overline{r}+t/2}^R \frac{\overline{r}^2}{x^2-\overline{r}^2} \ dx \\ & = \epsilon E_\textrm{f}^2\,(\overline{r}/2) \Big[\ln(4\overline{r}/t)-\ln\frac{R+\overline{r}}{R-\overline{r}} \, \Big] \ , \end{align} where the factor of 2 is for the left/right parts of the substrate. As found for the metal surface, numerical integration for finite thickness gives a corner correction of 1.6. Since typically $\overline{r} \ll R$, the total substrate surface energy is \begin{align} U_\textrm{f}^\textrm{s}/\ell & = \epsilon E_\textrm{f}^2\, (\overline{r}/2) [\ln(4\overline{r}/t) + c_s - 2\overline{r}/R] \label{eq:ufs} \ ,\\ c_s &= 1.6 \ . \end{align} This is smaller than the surface energy for the metal surface since it does not include a sharp edge. The correction factor $\ln(4\overline{r}/t) \rightarrow \ln(4\overline{r}/t)+ c_s$ will be used in all formulas for the substrate edge. \section{Differential Ribbon Capacitor} Considered next is the capacitance between the two leads of the qubit, modeled as two long and straight ribbons as illustrated in Fig.\,\ref{fig:Qubit}. Each ribbon has metal spanning a distance $a$ to $b$ from the centerline, with length $\ell \gg b$ and a metal thickness $t$. A conformal-mapping solution from Ref.\,\cite{IBM} is used for the electric fields. The ribbon capacitance for a differential voltage $V$ is \begin{align} C_\textrm{r} &= [(\epsilon_s + 1)/2] \,\epsilon_0 \ell /C_K(a/b) \ , \label{eq:Cr} \\ C_K &= K(a/b)/K'(a/b) \label{eq:CK}\\ & \simeq (1/\pi)\ln[2(1+\sqrt{a/b})/(1-\sqrt{a/b})] \ , \label{eq:CKa}\\ K'(k) &= K(\sqrt{1-k^2}) \end{align} where $K(k)$ is the complete elliptic integral of the first kind.
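The accuracy of the logarithmic approximation Eq.\,(\ref{eq:CKa}) can be verified numerically. A sketch using the arithmetic-geometric mean to evaluate $K(k)$ (a standard identity, assumed here rather than taken from the text):

```python
import math

def K(k):
    """Complete elliptic integral of the first kind via the
    arithmetic-geometric mean: K(k) = pi / (2 AGM(1, sqrt(1 - k^2)))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-14:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def C_K_exact(k):    # Eq. (eq:CK)
    return K(k) / K(math.sqrt(1.0 - k * k))

def C_K_approx(k):   # Eq. (eq:CKa)
    return math.log(2 * (1 + math.sqrt(k)) / (1 - math.sqrt(k))) / math.pi

for k in (0.3, 0.5, 0.7):
    print(k, C_K_exact(k), C_K_approx(k))
```

Over the aspect ratios of interest the two expressions agree to a few parts in $10^4$.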
Equation\,(\ref{eq:CKa}) is an excellent approximation to Eq.\,(\ref{eq:CK}). The effective dielectric constant has contributions from both the air ($\epsilon_0/2$) and substrate ($\epsilon_s\epsilon_0/2$). From the conformal-mapping solution Eq.\,(5) of Ref.\,\cite{IBM}, the surface fields are \begin{align} |E_r(x)|^2 = \Big( \frac{V/2}{K(a/b)} \Big)^2 \frac{b^2}{|(x^2-a^2)(x^2-b^2)|} \ , \label{eq:Erib} \end{align} where the $E$ field is parallel to the surface on the substrate and perpendicular on the metal. The surface integral is evaluated in three sections: \begin{align} &\textrm{inner}\ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 < x < a-t/2 \ ,\\ &\textrm{center}\ \ \ \ \ a+t/2 < x < b-t/2 \ ,\\ &\textrm{outer}\ \ \ \ \ \ \, b+t/2 < x < \infty \ , \end{align} giving \begin{align} S_i &= \int_0^{a-t/2} dx \, \frac{b^2}{|(x^2-a^2)(x^2-b^2)|} \\ &= \frac{ \frac{1}{a}\ln\frac{a-x}{a+x}+\frac{1}{b}\ln\frac{b-x}{b+x} } {2(1-a^2/b^2)} \ \Big|_0^{a-t/2} \\ &\simeq \frac{ \frac{1}{a}\ln\frac{4a}{t}+\frac{1}{b}\ln\frac{b-a}{b+a} } {2(1-a^2/b^2)} \ , \\ S_c &\simeq \frac{ \frac{1}{a}(\ln\frac{4a}{t}+\ln\frac{b-a}{b+a})+\frac{1}{b}(\ln\frac{4b}{t}+\ln\frac{b-a}{b+a}) } {2(1-a^2/b^2)} \ ,\\ S_o &\simeq \frac{ \frac{1}{a}\ln\frac{b-a}{b+a}+\frac{1}{b}\ln\frac{4b}{t} } {2(1-a^2/b^2)} \ . \end{align} Note that $S_c = S_i+S_o$. The surface energy of the center metal section is \begin{align} U_\textrm{r}^\textrm{m}/\ell &= (\epsilon/2)\ 4 \,[V/2K(a/b)]^2 S_c \\ &= \frac{\epsilon V^2}{2\,K^2(a/b)} \frac{S_a(c_m)}{a} \label{eq:upm} \ ,\\ S_a(c_m) &\equiv \frac{ (\ln\frac{4a}{t}+c_m+\ln\frac{b-a}{b+a})+\frac{a}{b}(\ln\frac{4b}{t}+c_m+\ln\frac{b-a}{b+a}) } {2(1-a^2/b^2)} \ . \end{align} where the factor of 4 comes from the two ribbons and the top/bottom surfaces.
The dimensionless surface integral is obtained from $S_a/a = S_c$, along with adding the corner correction $c_m$ for a finite thickness. The surface energy of the inner and outer substrate sections is \begin{align} U_\textrm{r}^\textrm{s}/\ell &= (\epsilon/2)\ 2 \,[V/2K(a/b)]^2 (S_i+S_o) \\ &= \frac{\epsilon V^2}{4\,K^2(a/b)} \frac{S_a(c_s)}{a} \label{eq:ups} \ , \end{align} where the factor of 2 is from the two sides of the ribbon. The substrate surface energy is smaller than the metal by approximately a factor of 2. The ribbon capacitor has participation ratios coming from the metal-air, metal-substrate, and substrate-air interfaces, calculated using Eqs.\,(\ref{eq:MA})-(\ref{eq:SA}), (\ref{eq:upm}) and (\ref{eq:ups}) \begin{align} \label{eqs:ribstart} p_\textrm{MA}^\textrm{r} =& \frac{1}{\epsilon_\textrm{MA}} \frac{t_\textrm{MA}}{L} \frac{\ell}{a}\ \frac{S_a(c_m)}{2\,K^2(a/b)} \ ,\\ p_\textrm{MS}^\textrm{r} =& \frac{\epsilon_s^2}{\epsilon_\textrm{MS}} \frac{t_\textrm{MS}}{L} \, \frac{\ell}{a}\ \frac{S_a(c_m)}{2\,K^2(a/b)} \ ,\\ p_\textrm{SA}^\textrm{r} = & \, \epsilon_\textrm{SA} \, \frac{t_\textrm{SA}}{L} \frac{\ell}{a}\ \frac{S_a(c_s)}{2\,K^2(a/b)} \ . \label{eqs:ribend} \end{align} The only differences among the three participation ratios are the dielectric factors and the small change from the corner constant. \begin{figure}[b] \includegraphics[width=0.48\textwidth, trim = 140 65 210 60, clip] {TaperFigSurf.pdf} \caption{\textbf{Surface energies.} Plot of the normalized surface energies $S_a(c_m)/K^2$ for ribbon (black) and $S_a(c_s)/K'^2$ for coplanar (blue) geometries, versus the normalized distance $(b-a)/a$. The plot uses the normalized thickness $a/t = 1000$. The solid lines are for the metal surface with $c_m = 5.0$, whereas dashes are for the substrate with $c_s = 1.6$. Also plotted in red is $S_a/KK'$ for the case of all capacitance coming from the ribbon or coplanar geometry.
} \label{fig:Surf} \end{figure} The black lines in Fig.\,\ref{fig:Surf} show the dimensionless surface energy $S_a(c_m)/K^2$ for the metal (solid) and $S_a(c_s)/K'^2$ for the substrate (dashed) as a function of the normalized distance $(b-a)/a$. The metal surface energy is greater because of the higher corner constant $c_m > c_s$. As the distance $b-a$ increases, the surface energy decreases. Typical designs use $(b-a)/a \sim 1$. Note that the surface energy drops by a non-negligible amount with the lower corner constant, showing that the edge fields from the finite thickness are important. For the case where all of the capacitance comes from the ribbon $C_r = \epsilon_0 L$, the participation for the metal-substrate interface is \begin{align} p_\textrm{MS}^\textrm{r}(C_r) =& \frac{\epsilon_s^2}{\epsilon_\textrm{MS}(\epsilon_s+1)/2} \frac{t_\textrm{MS}}{a} \,\frac{1}{2} \ \frac{S_a(c_m)}{K(a/b)K'(a/b)} \ . \end{align} The last factor is the geometric mean of the ribbon and coplanar curves of Fig.\,\ref{fig:Surf}, shown in red. \section{Differential Coplanar Capacitor} The qubit can also have capacitance to ground. This can be modeled as a coplanar structure as shown in Fig.\,\ref{fig:Types}c, where each side of the qubit has a pad with width $2a$ and length $\ell \gg a$, with a ground plane at a distance $b$ from the centerline. As this is the ``dual'' of the ribbon capacitor, with metal and substrate switched, similar conformal solutions can be used with minor modifications. The differential capacitance is \begin{align} C_\textrm{c} =& (1/2)[(\epsilon_s + 1)/2]\,\epsilon_0\ell\,4 C_K(a/b) \ , \label{eq:Ccop} \end{align} where $C_K$ is defined in Eq.\,(\ref{eq:CK}), and the initial factor of 1/2 comes from the two coplanar capacitors in series.
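As a cross-check of Eqs.\,(\ref{eq:Cr}) and (\ref{eq:Ccop}) and of the ribbon participation of Eq.\,(\ref{eqs:ribstart}), the following sketch reproduces the 100\,fF design capacitances and the ribbon MA participation quoted later in the Discussion table; the $L = C/\epsilon_0$ normalization and the quoted lengths are assumptions taken from that example.

```python
import math

def K(k):
    """Complete elliptic integral via the arithmetic-geometric mean."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-14:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def S_a(a, b, t, c):
    """Dimensionless surface integral S_a with corner constant c."""
    u = math.log((b - a) / (b + a))
    return ((math.log(4 * a / t) + c + u)
            + (a / b) * (math.log(4 * b / t) + c + u)) / (2 * (1 - (a / b) ** 2))

eps0, eps_s, eps_MA, c_m = 8.854e-12, 11.7, 9.8, 5.0
a, b, t, t_MA = 50e-6, 100e-6, 0.1e-6, 2e-9
ell_r, ell_c = 1391e-6, 1138e-6          # lengths chosen for 100 fF each
L = 100e-15 / eps0                       # assumed normalization, C = 100 fF
k = a / b
Kk, Kp = K(k), K(math.sqrt(1 - k * k))

C_r = ((eps_s + 1) / 2) * eps0 * ell_r / (Kk / Kp)            # Eq. (eq:Cr)
C_c = 0.5 * ((eps_s + 1) / 2) * eps0 * ell_c * 4 * (Kk / Kp)  # Eq. (eq:Ccop)
p_MA_r = (1 / eps_MA) * (t_MA / L) * (ell_r / a) \
         * S_a(a, b, t, c_m) / (2 * Kk ** 2)                  # Eq. (eqs:ribstart)
print(C_r, C_c, p_MA_r)
```

Both geometries give 100\,fF to better than 0.1\%, and the ribbon MA participation matches the tabulated value.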
From Eq.\,(25) of Ref.\,\cite{IBM}, the electric field for each coplanar capacitor is \begin{align} |E_c(x)|^2 = \Big( \frac{V/2}{K'(a/b)} \Big)^2 \frac{b^2}{|(x^2-a^2)(x^2-b^2)|} \ , \label{eq:Ecopl} \end{align} where now the field is perpendicular to the substrate in the inner and outer sections, and parallel in the center. The surface energy of the metal sections is similar to the ribbon case except for an extra factor of 2 to account for the series capacitors, as can be seen from Fig.\,\ref{fig:Types}c since $\ell$ only accounts for half of the total length. The metal and substrate surface energies are \begin{align} U_\textrm{c}^\textrm{m}/\ell &= \frac{\epsilon V^2}{K'^2(a/b)} \frac{S_a(c_m)}{a} \label{eq:ucm} \ , \\ U_\textrm{c}^\textrm{s}/\ell &= \frac{\epsilon V^2}{2\,K'^2(a/b)} \frac{S_a(c_s)}{a} \label{eq:ucs} \ . \end{align} Note the similarities to the ribbon formulas. The participation ratios are twice as large as for the ribbon, with $K$ replaced by $K'$ \begin{align} \label{eqs:copstart} p_\textrm{MA}^\textrm{c} =& \frac{1}{\epsilon_\textrm{MA}} \frac{t_\textrm{MA}}{L} \frac{2\,\ell}{a}\ \frac{S_a(c_m)}{2\,K'^2(a/b)} \ ,\\ p_\textrm{MS}^\textrm{c} =& \frac{\epsilon_s^2}{\epsilon_\textrm{MS}} \frac{t_\textrm{MS}}{L} \, \frac{2\,\ell}{a}\ \frac{S_a(c_m)}{2\,K'^2(a/b)} \ ,\\ p_\textrm{SA}^\textrm{c} = & \, \epsilon_\textrm{SA} \, \frac{t_\textrm{SA}}{L} \frac{2\,\ell}{a}\ \frac{S_a(c_s)}{2\,K'^2(a/b)} \ . \label{eqs:copend} \end{align} For the case where all the capacitance comes from the coplanar structure, the participation is the same as the ribbon design, for example \begin{align} p_\textrm{MS}^\textrm{c}(C_c) = p_\textrm{MS}^\textrm{r}(C_r) \ . \label{eq:Ccoppart} \end{align} A single-ended coplanar design is used to test resonators. In this case, the coplanar capacitance of Eq.\,(\ref{eq:Ccop}) does not have the initial 1/2 term. The surface energy $U$ and participation ratios are a factor of 2 larger.
The participation $p_\textrm{MS}^\textrm{r}(C_c)$ includes these two factors, so Eq.\,(\ref{eq:Ccoppart}) is unchanged for the single-ended design. \section{Differential Ribbon Capacitor \\ With Ground} Planar transmons are typically designed to have a ground plane surrounding the qubit capacitor, as shown in Fig.\,\ref{fig:full}. For the ribbon capacitor considered previously, a ground plane is included here from $c$ to infinity and $-c$ to minus infinity, where $c>b$. The capacitance and surface loss are computed numerically and then fit to functions based on the previous ribbon formulas. The surface electric field is well described by a simple modification to the ribbon case of Eq.\,(\ref{eq:Erib}) \begin{align} |E_{rg}(x)|^2 = |E_r(x)|^2 \frac{c^2}{|x^2-c^2|} \ . \label{eq:Eribgnd} \end{align} Integration of surface charge from the numerical solutions gives a simple modification to the ribbon differential capacitance of Eq.\,(\ref{eq:Cr}) \begin{align} C_\textrm{rg} &\simeq C_\textrm{r}/[1-(x_e/c)^2]^{0.23} \ ,\\ x_e &= b -0.15(b-1.2a) \ . \end{align} Figure\,\ref{fig:RibGnd} shows the numerical results (points) and the fit function (line) versus ground plane separation $(c-b)/b$ for three values of $a$, representative of $a/b$ ratios that would commonly be used. The fit function represents the numerical results well. \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 170 20 220 30,clip] {TaperFigRibGnd.pdf} \caption{\textbf{Ribbon capacitor with ground.} Plot of the capacitance (top panel) and surface loss (bottom panel) versus the ground plane separation $(c-b)/b$, for $a = (25,50,70)\,\mu$m (top to bottom) and $b = 100\,\mu$m. Points are numerical simulations and lines are fit formulas. The metal and substrate surface loss is colored black and blue, respectively. Numerical simulations are for an infinitely thin metal with an integration cutoff $t/2=0.05\,\mu$m, with the fit also using $c_m=c_s=0$.
Dashed lines include corner correction $c_m=5.0$ and $c_s=1.6$. } \label{fig:RibGnd} \end{figure} For the surface loss of the metal, the fit function for the numerical results is \begin{align} U_\textrm{rg}^\textrm{m}/\ell &= \epsilon V^2 \Bigg[ \frac{0.98}{2\,K^2(a/b)} \frac{S_a(a,b,t,c_m)}{a} \\ & \ \ \ \ \ \ + \frac{1.70}{2\,K'^2(b/c)} \frac{S_{ao}(b,c,t,c_m)}{b} \Bigg] \ , \end{align} where the contribution of $S_{ao}$ corresponds to the outer metal of a coplanar capacitor between $b$ and $c$ \begin{align} S_{ao}(b,c,t,c_m) &= \frac{ \ln\frac{c-b}{c+b}+\frac{b}{c}(\ln\frac{4c}{t}+c_m) }{2(1-b^2/c^2)} \ . \end{align} The surface loss for both the inner and outer substrate gaps gives the fit function \begin{align} U_\textrm{rg}^\textrm{s}/\ell &= \epsilon V^2 \Bigg[ \frac{0.95}{4\,K^2(a/b)} \frac{S_a(a,b,t,c_s)}{a} \\ & \ \ \ \ \ \ + \frac{0.80}{4\,K'^2(b/c)} \frac{S_{a}(b,c,t,c_s)}{b} \Bigg] \ , \end{align} where the second contribution of $S_a$ corresponds to the substrate of a coplanar capacitor between $b$ and $c$. Figure\,\ref{fig:RibGnd} shows the numerical results are well represented by the fit functions. \section{Differential Junction Wires} The connections between the Josephson junction and the capacitor electrodes are made through two junction wires of total length $2d$, as shown in Fig.\,\ref{fig:Types}d. Treating these wires as round with radius $r$ and placed end-to-end each with length $d$, the surface field can be calculated numerically using the potential matrix Eq.\,(\ref{Mcyl}) for a 2-D geometry with cylindrical symmetry. For a differential voltage $V$, the surface electric field as a function of distance $y$ from the junction is well described by \begin{align} E_\textrm{cw}(y) = \frac{1}{2}\ \frac{V}{r \ln(2y/r)} \ , \label{eq:ewapx} \end{align} as shown in Fig.\,\ref{fig:Cyl}. In comparison with Eq.\,(\ref{Ecoax}), the second term is equivalent to a coax of inner radius $r$ and outer radius $2y$.
The first term 1/2 represents this coax placed in series with a second coax of the same dimensions, which represents the fields emanating from one circular wire, expanding to a distance $2y$, and then converging in to the other circular wire. This 2-D numerical calculation also allows the radius $r$ to change with distance $y$. Modeling a linear taper with $r = S\,y$, the electric field is found to be well described by Eq.\,(\ref{eq:ewapx}) with $r$ replaced by $r(y)$, as long as the taper is not too large $S < 0.4$; larger slopes are found not to reduce the electric field significantly. Figure\,\ref{fig:Cyl} shows numerical results for both a straight and tapered cylindrical wire, with good agreement to the approximation formula Eq.\,(\ref{eq:ewapx}). \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 100 20 150 40,clip] {TaperFigCyl.pdf} \caption{\textbf{Electric field of cylindrical wire.} Plot of the surface electric field for a cylindrical wire for both straight ($S=0$) and tapered ($S=0.2$) radius. Numerical solutions (dots) match well with the approximation formulas (lines) of Eq.\,(\ref{eq:ewapx}). Parameters are $r = 0.1\,\mu\textrm{m}$ and $d = 100\,\mu\textrm{m}$. The uptick of the field at the end of the wire is expected for an edge field. } \label{fig:Cyl} \end{figure} A solution for a flat wire can be obtained by assuming the wire has the $x$-dependence of the electric field as given in Eq.\,(\ref{Eflat2}), but with an overall dependence of $E_\textrm{fw}$ with $y$ that is determined numerically. The appendix shows how to solve this problem with a potential matrix. Numerical solutions for both straight and tapered flat wires show that a good fitting function is \begin{align} E_\textrm{fw}(y) = \frac{1}{2}\ \frac{V}{\overline{r}\, \ln(4y/\overline{r})} \ , \label{eq:Efw} \end{align} which has the form of $E_\textrm{f}$ in Eq.\,(\ref{Eflat}) but with $2R$ replaced by $4y$. It is again valid for small slope $S < 0.4$. 
The factor of 4 in the logarithm can be understood as a factor of 2 from the coax to the wire geometry, and another factor of 2 from the circular to flat coax formula in Eqs.\,(\ref{Ecoax}) to (\ref{Eflat}). The metal surface energy for a straight wire of constant width can thus be found by integrating this surface field, which is equivalent to integrating one-fourth of the line energy Eq.\,(\ref{eq:ufm}) over the wire length \begin{align} U_\textrm{sw}^\textrm{m} & = 2 \int_{2\overline{r}}^d [\tfrac{1}{4} U_\textrm{f}^{m}(R=2y)/\ell]\, dy \\ &= 2 \epsilon V^2 \int_{2\overline{r}}^d \frac{\ln(4\overline{r}/t)+c_m}{4\ \overline{r} \ln^2(4y/\overline{r})} \, dy \label{eq:Uswint} \\ &\simeq \frac{\epsilon V^2}{2} \frac{\ln(4\overline{r}/t)+ c_m}{\ln^2(d/\overline{r})}\ \frac{d}{\overline{r}} \ , \label{eq:uwmint} \end{align} where the factor of 2 before the integral accounts for both junction wires. The last formula was fit to numerical integration. Because of the $d/\overline{r}$ factor, this surface energy can be large, so a more optimal solution is to taper the wire as explained below. 
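The quality of the fitted formula Eq.\,(\ref{eq:uwmint}) can be checked against direct numerical integration of Eq.\,(\ref{eq:Uswint}). A sketch with the wire parameters used in the figures ($\overline{r} = t = 0.1\,\mu\textrm{m}$, $d = 100\,\mu\textrm{m}$), where the overall $\epsilon V^2$ prefactor drops out of the comparison:

```python
import math

rbar, t, d, c_m = 0.1e-6, 0.1e-6, 100e-6, 5.0
A = math.log(4 * rbar / t) + c_m   # thickness-corrected logarithm

# Midpoint integration of Eq. (eq:Uswint), with eps*V^2 set to 1
n = 200000
h = (d - 2 * rbar) / n
U_num = 0.0
for i in range(n):
    y = 2 * rbar + (i + 0.5) * h
    U_num += 2 * A / (4 * rbar * math.log(4 * y / rbar) ** 2) * h

# Fitted closed form, Eq. (eq:uwmint)
U_fit = 0.5 * A / math.log(d / rbar) ** 2 * (d / rbar)
print(U_num / U_fit)
```

For these dimensions the fitted formula tracks the direct integral to within a few percent.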
Similarly from Eq.\,(\ref{eq:ufs}), the surface energy of the substrate for a straight wire of constant width is \begin{align} U_\textrm{sw}^\textrm{s} & = 2 \int_{2\overline{r}}^d [\tfrac{1}{4}U_\textrm{f}^{s}(R=2y)/\ell]\, dy \\ &\simeq \frac{\epsilon V^2}{4} \frac{\ln(4\overline{r}/t)+ c_s}{\ln^2(d/\overline{r})} \ \frac{d}{\overline{r}} \ . \label{eq:uwsint} \end{align} The participation ratios for straight wires are \begin{align} \label{eqs:swstart} p_\textrm{MA}^\textrm{sw} =& \frac{1}{\epsilon_\textrm{MA}} \frac{t_\textrm{MA}}{L}\frac{d}{\overline{r}} \, p_\textrm{M}^{\textrm{sw}\prime} \ , \\ p_\textrm{MS}^\textrm{sw} =& \frac{\epsilon_s^2}{\epsilon_\textrm{MS}} \frac{t_\textrm{MS}}{L}\frac{d}{\overline{r}}\, p_\textrm{M}^{\textrm{sw}\prime} \ , \\ p_\textrm{SA}^\textrm{sw} =& \, \epsilon_\textrm{SA} \, \frac{t_\textrm{SA}}{L}\frac{d}{\overline{r}} \, p_\textrm{S}^{\textrm{sw}\prime} \ , \end{align} where multiplicative factors are given by \begin{align} p_\textrm{M}^{\textrm{sw}\prime} =& \frac{1}{2}\frac{\ln(4\overline{r}/t)+c_m}{ \ln^2(d/\overline{r})} \ ,\\ p_\textrm{S}^{\textrm{sw}\prime} =& \frac{1}{2}\frac{\ln(4\overline{r}/t)+c_s}{ \ln^2(d/\overline{r})} \ . \label{eqs:swend} \end{align} As found previously, the equations differ only in the epsilon factors and the corner constant. The capacitance of the straight junction wires is found from numerical simulation \begin{align} C_\textrm{sw} \simeq 4.1\, [(\epsilon_s + 1)/2]\,\epsilon_0 \,d/\ln(d/\overline{r}) \ . \end{align} \section{Tapered Junction Wires} The large $d/\overline{r}$ ratio in the above participation ratios contributes to a large surface energy, since the small width of the wires produce large electric fields at its surface. As surface loss decreases with increasing size, it is natural to increase the width of the wire to lower loss.
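As with the capacitor sections, the straight-wire participation can be evaluated for a concrete design. A sketch (again assuming $L = C/\epsilon_0$ with $C = 100$\,fF, and a wire of total length $2d = 100\,\mu\textrm{m}$):

```python
import math

eps0, eps_MA, c_m = 8.854e-12, 9.8, 5.0
L = 100e-15 / eps0                  # assumed normalization, C = 100 fF
rbar, t, t_MA = 0.1e-6, 0.1e-6, 2e-9
d = 50e-6                           # length per side (total 2d = 100 um)

p_M = 0.5 * (math.log(4 * rbar / t) + c_m) / math.log(d / rbar) ** 2
p_MA = (1 / eps_MA) * (t_MA / L) * (d / rbar) * p_M   # Eq. (eqs:swstart)
print(p_MA)
```

This reproduces the straight-wire MA entry of the participation table in the Discussion.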
A solution to minimize surface energy is to taper the wire, increasing the wire width with increasing distance $y$ from the junction as shown in Fig.\,\ref{fig:Types}d. The contribution to the line energy, the surface energy per line length $dy$, is the integrand of Eq.\,(\ref{eq:Uswint}), where $\overline{r}$ is now a function of $y$. The integrand is minimized at distances $y/t = (10, 100, 1000)$ for a half-width $\overline{r}/y = (0.363, 0.402, 0.425)$, respectively. An effective solution is to taper the wire according to $\overline{r}(y) = \textrm{max}(\overline{r}_0, (y-5t)S)$ with the taper starting at $y = 5t$, optimizing the slope $S$ for lowest energy. Numerical integration of the line energy gives the metal surface energy for a tapered wire that is fit by \begin{align} U_\textrm{tw}^\textrm{m} & \simeq 0.68 \,\epsilon V^2 \frac{\ln(d/\overline{r}_0)}{S}\ \frac{\ln(4Sd/t)+c_m}{\ln^2(4/S)} \ . \label{eq:utwm} \end{align} Although this has a minimum energy at slope $S = 0.45$, it is a broad minimum increasing by 2\% at $S=0.28$ and only 10\% at $S=0.16$. Note that this formula is similar to Eq.\,(\ref{eq:uwmint}) for a constant width wire, except for the logarithmic dependence on $d$. Similarly, the substrate surface energy for a tapered wire is fit by \begin{align} U_\textrm{tw}^\textrm{s} & \simeq 0.29 \,\epsilon V^2 \frac{\ln(d/\overline{r}_0)}{S} \ \frac{\ln(4Sd/t)+c_s}{\ln^2(4/S)}\, . \label{eq:utws} \end{align} \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 110 20 150 40,clip] {TaperFigWires.pdf} \caption{\textbf{Junction wire surface energy.} Metal surface energy (and loss) of the junction wire for straight and tapered designs versus wire length per side $d$. The tapered wire shows significantly lower energy for wire lengths $d \gtrsim 10\,\mu\textrm{m}$. At large distances the straight and tapered energy scale with $d$ approximately linearly and logarithmically, respectively.
Parameters are $t = \overline{r}_0 = 0.1\,\mu\textrm{m}$ and $S=0.4$. } \label{fig:Wires} \end{figure} The metal surface loss for the junction wires is plotted in Fig.\,\ref{fig:Wires} for the straight and tapered cases, obtained by numerical integration of Eq.\,(\ref{eq:Uswint}). At small distances $d \lesssim 5\,\mu\textrm{m}$, the two results are similar, but at large distances the logarithmic scaling makes the tapered loss significantly lower. It is standard practice to increase the overall size of the qubit capacitor to lower its loss. When using a large $d$, it is thus increasingly important to optimally design the junction wires with a taper. The formulas for the participation ratios for a tapered wire are \begin{align} \label{eqs:twstart} p_\textrm{MA}^\textrm{tw} =& \frac{1}{\epsilon_\textrm{MA}} \frac{t_\textrm{MA}}{L}\frac{\ln(d/\overline{r}_0)}{S} \, p_\textrm{M}^{\textrm{tw}\prime} \ , \\ p_\textrm{MS}^\textrm{tw} =& \frac{\epsilon_s^2}{\epsilon_\textrm{MS}} \frac{t_\textrm{MS}}{L}\frac{\ln(d/\overline{r}_0)}{S}\, p_\textrm{M}^{\textrm{tw}\prime} \ , \\ p_\textrm{SA}^\textrm{tw} =& \, \epsilon_\textrm{SA} \, \frac{t_\textrm{SA}}{L}\frac{\ln(d/\overline{r}_0)}{S} \, p_\textrm{S}^{\textrm{tw}\prime} \ , \end{align} with multiplicative factors \begin{align} p_\textrm{M}^{\textrm{tw}\prime} =& 0.68 \frac{\ln(4Sd/t)+c_m}{\ln^2(4/S)} \ ,\\ p_\textrm{S}^{\textrm{tw}\prime} =& 0.58 \frac{\ln(4Sd/t)+c_s}{\ln^2(4/S)} \ . \label{eqs:twend} \end{align} It is recommended to use a continuous taper as described above, not a stepped taper, since the continuous taper is optimal at every distance $y$ from the junction. Also, the sharp corners of the steps will produce large electric fields and increase the surface energy. The capacitance of the tapered junction wires is found from numerical simulation \begin{align} C_\textrm{tw} \simeq 3.5\, [(\epsilon_s + 1)/2]\,\epsilon_0\, \sqrt{S}\,d \ .
\end{align} \section{Discussion} \begin{table}[b] \begin{tabular}{| l || c | c | c|} \hline \textbf{loss$\times$10$^6$ } & \hspace{8pt} MA \hspace{8pt} & \hspace{8pt} MS \hspace{8pt} & \hspace{8pt} SA \hspace{8pt} \\ \hline Ref.\,\cite{wenner_surfloss} & 0.10 \hspace{2pt} & 6.13 & 4.02 \\ \hline This work & 0.060 & 5.93 & 3.57\\ \hline \hline \textbf{Participation ratio (\%)} \ \ \ & MA & MS & SA \\ \hline Ref.\,\cite{partLL} ($d=0.28\,\mu$m) & 0.017 & 0.297 & 0.156 \\ \hline This work ($d=0$) & \hspace{1.8pt} 0.0012 & 0.139 & 0.027\\ \hline \end{tabular} \caption{\textbf{Check data}. Comparison of prior numerical results with formulas from this paper. The first example shows good agreement for $(a,b,t)=(2.5,4.5,0.1)\,\mu$m and surface parameters ($\epsilon_s, \epsilon_\textrm{MA}, \epsilon_\textrm{MS}, \epsilon_{SA}) = (10,10,10,10)$, thickness 3\,nm and loss tangent 0.002. The second example does not agree well and uses $(a,b,t)=(3,6,0.25)\,\mu$m and surface parameters ($\epsilon_s, \epsilon_\textrm{MA}, \epsilon_\textrm{MS}, \epsilon_{SA}) = (11.7,11.4,10,4)$ and thickness 2\,nm, but different trench depths $d$. } \label{tab:check} \end{table} These formulas agree well with a prior numerical simulation, as detailed in Table\,\ref{tab:check}. The first example shows good agreement with numerical results from Ref.\,\cite{wenner_surfloss}. Note that the only difference in the MS and SA formulas comes from the corner constants $c_m$ and $c_s$. The second example does not agree well with the MS geometry of Ref.\,\cite{partLL}, although the results here are for a flat substrate with no trenching. It is unexpected that the prior numerical results with trenching give a higher participation ratio for \textit{all} surfaces. Surface loss closely scales as the inverse of the system size, as described previously in Ref.\,\cite{wenner_surfloss}.
However, the calculation for the participation ratio from the junction wires has the opposite effect, as its participation increases with length. Thus there is a crossover in distance $d$ where the surface loss of the wire goes from relatively unimportant to dominant. Formulas for predicting this crossover are an important result of this work. Table \ref{tab:ex} shows the participation ratios for the 3 interfaces and 5 qubit capacitance types, for an example geometry with a size scale of $\sim 100\,\mu\textrm{m}$ that is appropriate for current devices. Here a constant surface-oxide thickness of 2\,nm is assumed. The ribbon has the same participation as the coplanar geometry, as expected. Of course, predictions depend on actual device parameters, and can be readily made with these formulas. For the qubit capacitance, the metal-substrate (MS) interface dominates the surface participation. For the ribbon design, the substrate-air (SA) is about 10 times smaller due to the dielectric factors, half the surface, and a lower corner constant $c_s$. However, the wire loss is not much smaller and clearly indicates that for present designs this contribution should be carefully considered. Importantly, the tapering of the wire will produce a significant improvement in qubit performance, about a factor of 2. \begin{table}[b] \begin{tabular}{| c | c || c | c | c|} \hline \hspace{15pt} interface \hspace{15pt} & \hspace{8pt} Eqs.
\hspace{8pt} & \hspace{8pt} MA \hspace{8pt} & \hspace{8pt} MS \hspace{8pt} & \hspace{8pt} SA \hspace{8pt} \\ \hline \hline parallel plate & (\ref{eqs:pp}) & 8.16e-5 & & \\ \hline ribbon & (\ref{eqs:ribstart})-(\ref{eqs:ribend}) & 1.04e-6 & 1.42e-4 & 2.74e-5 \\ \hline coplanar & (\ref{eqs:copstart})-(\ref{eqs:copend}) & 1.04e-6 & 1.42e-4 & 2.74e-5 \\ \hline \hline straight wires & (\ref{eqs:swstart})-(\ref{eqs:swend}) & 7.47e-7 & 1.02e-4 & 1.30e-5 \\ \hline tapered wires & (\ref{eqs:twstart})-(\ref{eqs:twend}) & 4.21e-7 & 5.76e-5 & 9.47e-6 \\ \hline \end{tabular} \caption{\textbf{Participation ratios} for various qubit structures. The top three are for the primary qubit capacitance; for comparison purposes, each uses a length $\ell$ such that its capacitance is 100\,fF. The bottom two are for straight and tapered wires that connect the junction with the qubit capacitance. Geometry parameters are thickness $t = 0.1\,\mu\textrm{m}$; parallel plate $(s,w,\ell_p)=(5,100,1130)\,\mu\textrm{m}$; ribbon and coplanar $(a,b,\ell_r,\ell_c)=(50,100,1391,1138)\,\mu\textrm{m}$; junction wires $(2d,\overline{r},\overline{r}_0)=(100,0.1,0.1)\,\mu\textrm{m}$ and $S=0.4$. Dielectric parameters are ($\epsilon_s, \epsilon_\textrm{MA}, \epsilon_\textrm{MS}, \epsilon_{SA}) = (11.7,9.8,9.8,3.8)$. For simplicity, here the oxide thickness is assumed to be $t_\textrm{MA} = t_\textrm{MS} = t_\textrm{SA} = 2\,\textrm{nm}$; results can be simply scaled with expected thickness. Total loss can be estimated by multiplying the surface loss tangents \cite{partLL}; typical values for amorphous insulators are 0.005 \cite{TLSloss}. } \label{tab:ex} \end{table} When qubit designs use multiple chips that are bump-bonded together, a parallel plate capacitance is often formed between the qubit chip and ground.
Table \ref{tab:ex} shows that the participation ratio of this structure needs to be considered even for a plate separation of $s = 5\,\mu\textrm{m}$, especially since the thicknesses of the other surfaces are likely less than 3\,nm. Although the formulas predict that the surface energy will decrease slightly for taper slopes greater than 0.4, such slopes are not recommended since numerical simulations show that electric fields do not decrease in this range. Moreover, the surface energy decreases only slightly above a slope of 0.2. An interesting question is how much more surface loss there is for thin films, arising from the large fields at the edges. It is possible to compare the surface energy for a round coax and a flat coax of the same width using Eqs.\,(\ref{Ucoax}) and (\ref{eq:ufm}), which shows that the ratio of the metal surface energy is \begin{align} \frac{U_\textrm{f}^\textrm{m}}{U_\textrm{c}^\textrm{m}} & \simeq \frac{\ln(4\overline{r}/t)+c_m}{\pi} \simeq 4.0 \ \end{align} for $\overline{r} = 50\,\mu\textrm{m}$ and $t = 0.1\,\mu\textrm{m}$, typical dimensions considered here. Although the metal-film edges produce more loss, the increase is still acceptable. Note that the logarithm factor is 7.6, so that about 1/3 of the surface energy comes from the corners within $t/2$ of the edges. The MA and MS participation formulas in Eqs.\,(\ref{eq:MA})-(\ref{eq:MS}) use surface energies $U/2$ for the two sides of the film, which are then multiplied by dielectric constant factors. However, since the metal film usually sits on top of the substrate, this splitting of the surface energy should change somewhat. One expects the air side of the surface energy to include both sides of the top corner and the outside of the bottom corner, while the substrate side only includes the film edge of the bottom corner.
Since these two sides of the corner contribute similarly, one expects the constant factor added to the logarithm to be about $1.5\,c_m$ for the air side and $0.5\,c_m$ for the substrate side. For example, this modification changes the MA prediction of Table\,\ref{tab:check} from 0.060 to 0.077, closer to the numerical result 0.010. Since the MS interface clearly dominates the participation ratio, there has been effort to minimize this oxide layer by surface treating the silicon wafer before depositing the metal film \cite{HFdip}. The MS thickness and loss tangent are thus parameters that should be measured carefully to optimize a design. The qubit capacitor is much larger in size than the junction, and it and its wires are often made in a separate step patterned with optical lithography, while the junction and its wire are patterned with electron-beam lithography. If the surface treatment is easier or even possible with the optical lithography step, it is then recommended that the taper be brought down to within $1\,\mu\textrm{m}$ or so of the junction to minimize its loss. In this case the data in Fig.\,\ref{fig:Wires} would be used to estimate the loss from both sections of wire; because of the logarithmic dependence, there would still be some contribution from even the short junction section. \section{Two-level States} Surface loss typically comes from two-level states (TLS) \cite{TLSloss}, which saturate and produce less loss at high excitation fields. Using the numerically computed surface fields, the dependence on power can be found by scaling the loss contribution of the local electric field $E$ with \begin{align} E^2 &\rightarrow E^2/\sqrt{1+E^2/E_s^2} \label{eq:TLSsat} \\ & = E\,E_s \ \ \ \textrm{for}\ E \gg E_s \ , \end{align} where the saturation electric field $E_s$ depends on microscopic parameters of the TLS \cite{TLSloss}.
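The two limits of this saturation rule, and the dimensional-analysis scaling it implies for resonators of different size, can be checked with a few lines of code (the field values are arbitrary illustrative units, not device numbers):

```python
import math

def saturated_loss_weight(E, E_s):
    """TLS loss weight of a local field E: E^2 -> E^2 / sqrt(1 + E^2/E_s^2)."""
    return E**2 / math.sqrt(1.0 + (E / E_s)**2)

E_s = 100.0                                    # saturation field (arbitrary units)

low = saturated_loss_weight(1.0, E_s)          # low-field limit: ~ E^2 = 1.0
high = saturated_loss_weight(1.0e5, E_s)       # high-field limit: ~ E*E_s = 1e7

# Dimensional-analysis check: scaling all lengths by D gives E ~ 1/D and a
# surface-integral weight ~ D, so the unsaturated loss scales as 1/D while
# the fully saturated loss is independent of D.
def scaled_losses(D, E0=1.0e6):
    E = E0 / D
    return D * E**2, D * E * E_s               # (unsaturated, saturated)

u1, s1 = scaled_losses(1.0)
u2, s2 = scaled_losses(2.0)
print(u1 / u2, s1 / s2)                        # ratios ~ 2.0 and 1.0
```

The constant high-field ratio is exactly the merging of the curves seen in the saturated regime.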
Since saturation measurements are typically made with coplanar resonators, numerical integration of the surface loss is shown in Fig.\,\ref{fig:Sat} for three values of $a$, each with the gap equal to the inner metal width $b = 2a$. As expected, for large saturation fields (loss at low power) the largest resonator gives the lowest loss. At large fields, the loss of all three resonators converges. This behavior can be understood using dimensional analysis: scaling all the lengths by $D$ decreases the electric field as $E \sim 1/D$, but increases the surface integration by $D$. For loss at low power, the integral scales as $E^2 D \sim 1/D$. But when saturated, $E E_s D \sim E_s$ gives constant scaling. Figure\,\ref{fig:Sat} also shows volume saturation, for example coming from TLS in the substrate. \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 150 10 290 10,clip] {TaperFigSat.pdf} \caption{\textbf{Saturation loss of a coplanar resonator.} Top panel: Plot of surface energy versus saturation electric field $E_s$, using surface loss scaled for saturation according to Eq.\,(\ref{eq:TLSsat}). The single-ended coplanar resonators have voltage $V=1\,$Volt and parameters $a=(2,10,50)\,\mu$m, $b=2a$ and $t=0.1\,\mu$m. In the low power limit (high saturation $E_s$), the loss is proportional to $1/a$, whereas at high power the curves merge together. The plus symbol $(+)$ is the characteristic crossover point, given here by the single-ended prediction of the surface energy Eq.\,(\ref{eq:ucm}) and $3E_c(0)$, where $E_c(0)$ is the center $x=0$ electric field of Eq.\,(\ref{eq:Ecopl}). Bottom panel: Volume energy versus saturation field for the same geometric parameters and colors. At low fields the volume energy is equivalent to the capacitance per unit length for all curves, as expected. At left, the characteristic saturation field scales inversely with the metal dimension.
} \label{fig:Sat} \end{figure} The analysis so far has treated the dissipation as continuous. However, surface loss comes from a bath of two-level states, with individual states that are spectroscopically observable for small-area devices \cite{TLSloss}. Simple models predict both the magnitude and the density of TLS, so their spectrum can be extremely useful for identifying the physical location of the loss. The dipole moment of the TLS couples to the electric field of the 0 to 1 qubit transition. This produces a qubit splitting with random frequency and splitting size, but with a maximum splitting size $S_\textrm{max}$ given by Eq.\,(3) of Ref.\,\cite{TLSloss} that is proportional to the qubit electric field and inversely proportional to the square-root of the qubit capacitance. For a junction capacitor with parallel-plate separation 2\,nm and a qubit capacitance 2\,pF, a value $S_\textrm{max} = $ 74\,MHz was measured. For a transmon qubit with $C = $ 0.1\,pF, the above scaling gives \begin{align} S_\textrm{max} = (330\,\textrm{MHz} \cdot 2\,\textrm{nm})\ E/V \ , \end{align} where $E/V$ has dimensions of inverse distance and has been computed here for the various surface electric fields. For the MS interface of a ribbon capacitor, Fig.\,\ref{fig:TLSrib} shows a plot of $S_\textrm{max}$ versus the distance from the inner corner $r_c$, which includes edge corrections at distances less than the half-thickness $t/2$. The sizes of the largest splittings are in the few-hundred-kilohertz range. \begin{figure}[b] \includegraphics[width=0.48\textwidth, trim = 110 20 180 20,clip] {TaperFigTLSrib.pdf} \caption{\textbf{TLS for ribbon capacitor.} Plot of maximum splitting $S_\textrm{max}$ versus distance from edge of ribbon capacitor $r_c$. Black is the ribbon solution of $\epsilon_s/\epsilon_{MS}$ multiplied by Eq.\,(\ref{eq:Erib}) ($\sim r_c^{-1/2}$), and blue is the edge scaling Eq.\,(\ref{eq:edgec}) ($\sim r_c^{-1/3}$).
The integrated area is shown on the top x-axis for $\ell = $1.4\,mm and 2 MS electrodes, showing that the effective areas are typically much greater than $1\,\mu\textrm{m}^2$ (arrow). } \label{fig:TLSrib} \end{figure} The number of splittings is proportional to the capacitor volume. Figure\,2 of Ref.\,\cite{TLSloss} shows that the sizes of the splittings have a log-normal distribution, so that the largest splittings are between $S_\textrm{max}/3$ and $S_\textrm{max}$ and have a density $0.5/(\mu\textrm{m}^2\,\textrm{GHz})$. The expected TLS density of the ribbon capacitor can be estimated by the effective integrated area $A(S)$, obtained by multiplying $r_c$ by twice the length of the ribbon, 2.8\,mm, and a factor of 3/2 to account for the thicker 3\,nm surface oxide. Since the observed splittings are dominated by those close to $S_\textrm{max}$, the splitting density $\rho_S$ between splittings $S_1$ and $S_2$ is thus approximately given by \begin{align} \rho_S \simeq (0.5/\mu\textrm{m}^2\textrm{GHz})[A(S_2)-A(S_1)] \ . \end{align} If one assumes the qubit splitting measurements span a 2\,GHz frequency range, then the first observable splitting should occur on average for an integrated area $A = 1\,\mu\textrm{m}^2$. Using Fig.\,\ref{fig:TLSrib}, the largest splittings at 3\,nm should have size 300\,kHz, with an average spacing of about one per 200\,MHz in the qubit frequency. For a parallel plate capacitor, the effective distance for the electric field is the separation multiplied by $\epsilon_\textrm{MA} = 9.8$. For the example of Table\,\ref{tab:ex}, this gives $49\,\mu\textrm{m}$. One finds a splitting size of 13\,kHz, and an effective area of 1.5 times the capacitor area. Junction wire results are shown in Fig.\,\ref{fig:TLSwire} for the untapered and tapered cases. These plots were obtained by numerically breaking up the wire into about 100k sections, then computing $S_\textrm{max}$ and the differential area $dA$ for each section.
The curve is obtained by sorting $S_\textrm{max}$ from large to small, and then cumulatively summing over the corresponding $dA$ to obtain the integrated area $A$. The tapered case shows lower splittings $S_\textrm{max}$, consistent with the continuum theory. The TLS become statistically observable for $A > 1\,\mu\textrm{m}^2$, which predicts splittings in the several-MHz range. The dependence on $d$ shows that the dominant contribution to the TLS comes from distances greater than about $10\,\mu\textrm{m}$; shorter distances are unimportant because they have small areas. The dominance of an intermediate length scale is perhaps a surprising result, and shows why detailed theory is needed to optimize the wire design. \begin{figure}[t] \includegraphics[width=0.48\textwidth, trim = 130 40 175 50,clip] {TaperFigTLSwire.pdf} \caption{\textbf{TLS for junction wire.} Plot of $S_\textrm{max}$ versus integrated area for untapered ($S = 0$) and tapered ($S=0.2$) wires. The electric fields are obtained from Eqs.\,(\ref{Eflat2}) and (\ref{eq:Efw}) by numerically breaking up the wire into small-area sections. The tapered wire shows a lower maximum splitting, consistent with the continuum results. For the untapered wire, the observable areas $A > 1\,\mu\textrm{m}^2$ (arrow) have $S_\textrm{max}$ in the 1-4\,MHz range, whereas for the tapered wire it is below 1.5\,MHz. Note that the dominant contribution comes from wire surfaces at distances greater than about $10\,\mu\textrm{m}$. } \label{fig:TLSwire} \end{figure} This result suggests that undercutting the junction wires into the substrate can be an effective solution to decrease the contribution of the metal-substrate interface. Since there is little contribution at small distances, it is not essential to undercut around the junction, which should improve the reliability of the fabrication and the stability of the junctions.
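The sort-and-accumulate step used for Fig.\,\ref{fig:TLSwire} can be sketched in a few lines (toy per-section values stand in for the $\sim$100k sections actually used; the arrays here are illustrative placeholders, not the fields of Eqs.\,(\ref{Eflat2}) and (\ref{eq:Efw})):

```python
import numpy as np

# Toy stand-ins for the per-section quantities computed from the wire fields.
rng = np.random.default_rng(0)
S_max = rng.uniform(0.1, 4.0, size=1000)   # splitting size per section (MHz)
dA = rng.uniform(1e-3, 1e-2, size=1000)    # differential area per section (um^2)

# Sort the splittings from large to small, then cumulatively sum the
# corresponding areas to obtain the integrated area A(S).
order = np.argsort(S_max)[::-1]
S_sorted = S_max[order]
A = np.cumsum(dA[order])

# The largest statistically observable splitting is roughly where A ~ 1 um^2.
i = int(np.searchsorted(A, 1.0))
print(S_sorted[i])
```

The same construction applies to any discretized surface, since only the (S_max, dA) pairs enter.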
\section{Summary} Calculation of participation ratios and surface loss is challenging because of the divergence of the electric fields at metal edges. Previously, these fields were solved in the infinitely thin limit using solutions from conformal mapping. Here, the solutions were extended to the useful limit where the thin surface oxide (a few nm) is less than the metal film thickness ($0.1\,\mu$m), which is in turn less than the typical film size ($100\,\mu$m). The finite thickness condition was solved via a calculation that matched the conformal fields to edge fields, then checked and refined with numerical simulation. Going forward, these formulas are also useful for checking numerical simulations for systematic errors due to meshing. Formulas are given for common capacitor structures. By separating out the geometry of actual designs, participation ratios can be calculated accurately and then used to optimize the design. This is an important check on numerical calculations since misleading results can come from finite meshing when structures range in size from nanometers to millimeters. For junction wires, a solution for the capacitance and surface loss was obtained using well-formed models, approximations and numerics, which should give accurate and reliable formulas. A tapered junction wire was shown to have superior performance compared to straight wires when the wire length is longer than about $10\,\mu$m. This design feature is important for the latest generation of devices that use large capacitor sizes to lower surface loss. A further design improvement for the taper was suggested. These electric field solutions enable a prediction of the TLS spectrum, which could be invaluable for identifying where the TLS come from in the qubit design. Finally, it is hoped that these results will encourage researchers to precisely test surface loss theory, and to measure the various surface loss parameters in additional experiments.
Doing so should speed the optimization and development of long-coherence-time qubits.
\section{Introduction} Gamma-Ray Bursts (GRBs hereafter) are cosmological explosions whose emission spans all the electromagnetic spectrum, from soft $\gamma$-rays down to X-rays, optical/NIR and radio (see, e.g., \citealt{Piran2004}). According to the T$_{90}$ duration of their short-lived prompt emission, they are classified as short-duration (T$_{90}<$ 2\,s) and long-duration GRBs (T$_{90}\ge$ 2\,s; \citealt{Kouveliotou1993}). This (apparently) arbitrary and crude separation has a deep connection with the nature of the burst's progenitor: while long-duration GRBs are produced in the catastrophic explosion of massive single stars, as confirmed by many long-duration GRBs associated with Supernovae (see, e.g., \citealt{Galama1998, Hjorth2003, Stanek2003}), short-duration GRBs flag the merger of two neutron stars or a neutron star and a black hole, as proved by the outstanding detection of the first multi-messenger event GW~170817/GRB~170817A \citep{Abbott2017}. Moreover, there is a dichotomy between the hosts of short- and long-GRBs: while long-duration GRBs are found predominantly in high star forming regions (\citealt{Berger2014,Klose2019}, and references therein), \emph{all} morphological types of galaxies can harbour a short-GRB \citep{Berger2009, Fong2013, Berger2014}. Because of their high luminosities (up to 10$^{53-54}$\,erg\,s$^{-1}$; \citealt{Piran2004, Kumar2015}), GRBs can be detected up to the highest redshifts: the farthest GRB currently known is GRB 090429B, at a photometric redshift of $z = 9.4$ \citep{Cucchiara2011}. Therefore, they can be used as probes of the early Universe \citep{Fryer2021}. Since GRBs are cosmologically distant events, some of them might be gravitationally lensed (e.g., \citealt{Paynter2021} and references therein).
Because of the strong lensing effect, photons coming from a distant source travel different geometric paths as they approach the foreground lensing object and form multiple magnified images of the same background source \citep{Congdon2018}. As a consequence, we observe variations in the lensed images with a time delay, which depends on the gravitational potential of the lens. In the case of GRBs, if gravitationally lensed, we expect to measure a bright $\gamma$-ray pulse followed by a dimmer duplicate. To date only a few GRBs have been suggested as candidate lensed events, namely GRB~950830 \citep{Paynter2021}, GRB~210812A \citep{Veres2021}, GRB~081126A and GRB~090717A \citep{Lin2021}, based on the analysis of their light curves. Another such candidate, GRB~200716C, triggered the \emph{Fermi} Gamma-ray Burst Monitor (GBM) at 22:57:41 UT on 2020 July 16, which classified it as a long-duration GRB \citep{Fermi2020, Veres2020}. The prompt emission was subsequently detected by the \emph{Swift} Burst Alert Telescope (BAT) and X-Ray Telescope (XRT; \citealt{Ukwatta2020}), the \emph{AGILE} Mini-CALorimeter \citep{Ursi2020}, the \emph{CALET} Gamma-Ray Burst Monitor \citep{Torii2020}, Insight-HXMT/HE \citep{Xue2020}, and Konus/Wind \citep{Frederiks2020}. \citet{D'Avanzo2020} detected an extended source in the Sloan Digital Sky Survey (SDSS) within $\sim1$\,arcsec of the location of the optical afterglow of GRB~200716C, and they estimated a photometric redshift of $z=0.348\pm0.053$ for SDSS J130402.36$+$293840.6 (J1304$+$2938 hereafter). Other optical detections of this galaxy have been subsequently reported \citep{Kumar2020, Pozanenko2020, Kann2020}. Based on the photometric redshift, \citet{Frederiks2020} found that GRB~200716C is a clear outlier in the Amati relation \citep{Amati2002} for 138 Konus/Wind\footnote{Konus is a GRB monitor launched on the GGS-Wind spacecraft in November 1994.} long-duration GRBs.
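For a point-mass lens, the delay between the two lensed images has a closed form; a minimal numerical sketch follows (the source offset $y$, in Einstein-radius units, is an assumed illustrative value, not a fit to any light curve):

```python
import math

G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8            # speed of light (m/s)
M_SUN = 1.989e30       # solar mass (kg)

def point_lens_delay(M_solar, z_lens, y):
    """Time delay (s) between the two images of a point-mass lens.

    M_solar: lens mass in solar masses; z_lens: lens redshift;
    y: angular offset of the source in units of the Einstein radius.
    """
    r = math.sqrt(y * y + 4.0)
    geometric = 0.5 * y * r + math.log((r + y) / (r - y))
    return (1.0 + z_lens) * (4.0 * G * M_solar * M_SUN / c**3) * geometric

# An intermediate-mass black hole of ~1e5 solar masses at z ~ 0.34 with a
# modest source offset (y = 0.4, assumed here) yields a delay of order 2 s,
# comparable to the pulse separations seen in candidate lensed bursts.
print(point_lens_delay(1.0e5, 0.341, 0.4))
```

The delay scale is set by $4GM/c^3 \approx 2$\,s per $10^5$\,M$_{\odot}$, so second-scale pulse pairs probe intermediate-mass lenses.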
Offsets of $\sim2$\,dex in energy from the main Amati correlation typically indicate short-duration GRBs \citep[e.g.,][]{Willingale2017}. Because of this, and the presence of two pulses separated by $2.16$\,s with similar spectral and temporal properties in the prompt emission, it was recently proposed that GRB~200716C might not be a long-GRB, but a short-GRB that is lensed by an intermediate mass black hole ($M_{\rm IMBH} \sim 10^5$\,M$_{\odot}$, \citealt{Wang2021, Yang2021}). According to this scenario, the optical source J1304$+$2938 could be a foreground galaxy hosting the intermediate mass black hole (IMBH) that gravitationally deforms the emission from GRB~200716C (hence, a background source). Studying J1304$+$2938 at multiple wavelengths is, therefore, crucial to shed light on the nature of this burst. In this Letter, we present a detailed radio study of this galaxy, based on archival survey data and new, dedicated, deep and high angular resolution radio observations. This Letter is structured as follows: the observations and their analysis are reported in Section~\ref{sec:obs}. We present and discuss our results in Sections~\ref{sec:results} and \ref{sec:discussion}, respectively. In Section~\ref{sec:conclusions} we conclude with a brief summary. We assume a standard $\Lambda$-CDM cosmology with $H_{0} = 69.32$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm m}=0.286$ and $\Omega_{\Lambda}=0.714$ \citep{Hinshaw2013}. \section{Observations} \label{sec:obs} \subsection{Multi-wavelength archival data } We searched for J1304$+$2938 in publicly available data and surveys first. Its coordinates are (J2000) $\alpha =13^{\rm h}04^{\rm m}02.371^{\rm s}$, $\delta = +29^{\circ}38^{\prime}40.66^{\prime\prime}$.
The galaxy is detected with the LOw Frequency ARray (LOFAR) at 130--170\,MHz (LOFAR J130402.62$+$293839.8, \citealt{Hardcastle2016}), the Wide-field Infrared Survey Explorer (WISE; \citealt{Wright2010}) at 3.4, 4.6, 12 and 22\,\textmu m (WISEA J130402.47$+$293839.3) and the SDSS \citep{Adelman2008} in the optical $z$, $i$, $r$, $g$ and $u$ filters (SDSS J130402.37$+$293840.6, \citealt{D'Avanzo2020}). For these three surveys, we obtained the flux densities directly from the above references. We also investigated the Rapid Australian SKA Pathfinder Continuum Survey (RACS; \citealt{McConnell2020}) at 0.89\,GHz, the Faint Images of the Radio Sky at Twenty-centimeters (FIRST; \citealt{Becker1995}) and the NRAO Very Large Array Sky Survey (NVSS, \citealt{Condon1998}) at 1.4\,GHz, and the Very Large Array Sky Survey (VLASS; \citealt{Lacy2020}) at 3\,GHz. The angular resolution and the epoch of each observation are provided in Table \ref{tab:obs_arx}. At radio wavelengths, the archival observations with the highest angular resolution are those from the VLASS, with the beam size being 2.5$^{\prime\prime}$. We downloaded the FITS images from The Canadian Initiative for Radio Astronomy Data Analysis (CIRADA\footnote{\url{http://cutouts.cirada.ca}}) for the NRAO surveys and from the CSIRO ASKAP Science Data Archive (CASDA\footnote{\url{https://data.csiro.au/domain/casdaObservation}}) for the RACS, and we subsequently performed Gaussian fits with the \texttt{JMFIT} task in the Astronomical Image Processing System ({\sc aips}; \citealt{Greisen2003}). We show these multi-wavelength measurements in Fig.~\ref{fig:spectrum}. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{200716C_spectrum4.pdf} \caption{Publicly available flux density measurements (mJy) as a function of frequency (GHz) for J1304$+$2938 from 0.1 to 10$^6$\,GHz. The inset shows the LOFAR data, while the arrows indicate the 3$\sigma$ upper limits.
The red line shows a power law $F\propto\nu^{\alpha}$ with spectral index $\alpha = -0.75$.} \label{fig:spectrum} \end{figure*} \subsection{EVN and {\it e}-MERLIN follow-up} We observed J1304$+$2938 at 5\,GHz with the European VLBI Network (EVN) on 2021 October 23 for a total time of 6\,hours (PI: Giarratana; project: EG118A). These data were recorded at 2048\,Mbits\,s$^{-1}$ and correlated at the Joint Institute for VLBI in Europe (JIVE) into 8 sub-bands (IFs) with 32\,MHz bandwidth and 64 channels each, through two polarizations (RR, LL). We also performed a sensitive 12\,hour VLBI observation with the EVN, including the {\it enhanced} Multi-Element Remotely Linked Interferometer Network ({\it e}$-$MERLIN), at 1.6\,GHz (PI: Giarratana; project: EG118B) on 2021 October 30. The data were recorded at 1024\,Mbits\,s$^{-1}$ and correlated at JIVE into 8 sub-bands (IFs) with 16\,MHz bandwidth and 32 channels each, through two polarizations (RR, LL). The averaging time for the visibilities was 2\,s. The structure of the observations followed a typical phase-referencing experiment, with scans of $\sim 3$\,min on the target followed by scans of $\sim 1.5$\,min on two phase reference sources (J1310+3220 and J1300+2830). 3C345 was the fringe finder and bandpass calibrator for both the 1.6 and 5\,GHz observations. The calibration and imaging were performed using AIPS following the standard procedure for EVN phase-referenced observations\footnote{\url{https://www.evlbi.org/evn-data-reduction-guide}}, except for the global fringe fitting, for which we used both phase calibrators in the following way. We first derived the solutions for J1300+2830, which we applied to the target and the other calibrators. We then derived the residual solutions using a model of the other calibrator, J1310+3220, and applied these final solutions to J1310+3220 and the target.
The time/bandwidth-limited field-of-view of these observations was about $5$\,arcsec, but the source is well localised in the VLASS observations with an angular resolution of 2.5\,arcsec. Therefore, we searched for the radio emission of the putative host galaxy in an area of 2.5\,arcsec in diameter. We adopted a natural weighting scheme to maximise the sensitivity to detect any potential extended structure. We obtained dirty images with an rms of 8\,\textmu Jy\,beam$^{-1}$ at 1.6\,GHz, and 9.6\,\textmu Jy\,beam$^{-1}$ at 5\,GHz. At 1.6\,GHz the largest detectable angular scale $\vartheta_{\rm LAS}$ is about 2\,arcsec, while at 5\,GHz it is $\vartheta_{\rm LAS} \sim 50$\,mas. \subsection{TNG Spectroscopy} \label{subsec:tng} We performed a dedicated spectroscopic follow-up of J1304$+$2938 with DOLORES (Device Optimized for the LOw RESolution) installed at the Telescopio Nazionale Galileo (TNG), with the aim of confirming its photometric redshift of $z = 0.348 \pm 0.053$ as reported by the SDSS. We took a single 30\,min observation on the night of 2022 March 5 with the LR-B grism and a long-slit of 1.0$^{\prime\prime}$ width. The mean air mass during the observation was 1.05. An exposure of a He+Ne+Hg lamp was taken for the wavelength calibration, and the flux calibration was obtained by observing the spectro-photometric standard star Feige~67 ($\alpha= 12^{\rm h}41^{\rm m}51.80^{\rm s}$, $\delta = +17^{\circ}31^{\prime}21.0^{\prime\prime}$) from the catalog of \citet{Oke1990}. The data reduction was performed using standard Image Reduction and Analysis Facility (IRAF) procedures \citep{Tody1993}. The DOLORES spectrum is shown in Fig.~\ref{tngspec}. After smoothing the spectrum with a 3-pixel boxcar to reduce the noise, we were able to measure the object's redshift by fitting the only detected emission line, [OII]$\lambda$3727\,\AA, with a single Gaussian profile, using the IRAF task \emph{splot}. \begin{figure}[h!]
\centering \includegraphics[width=9.0cm]{j1034spec_newlabels2.png} \caption{TNG DOLORES observed spectrum of the galaxy J1304$+$2938. The [OII]$\lambda3727$\AA~line is marked. At $\sim$5600\,\AA\ a residual sky line remains after the data reduction. The rest-frame wavelengths are shown on the upper x-axis. } \label{tngspec} \end{figure} \section{Results} \label{sec:results} Based on our TNG spectroscopic observations, we determine a redshift of $z = 0.341 \pm 0.004$. This value confirms, and refines, the already-known photometric redshift of the galaxy \citep{D'Avanzo2020}. At $z=0.341$, the luminosity distance is 1825\,Mpc, which gives a scale of 4.9\,kpc\,arcsec$^{-1}$. Inspection of the radio surveys, together with measurements available in the literature, reveals unresolved radio emission at the location of the optical galaxy at a significance between $\sim3\sigma$ and $\sim7\sigma$ in all the datasets. The resulting flux densities are reported in Table \ref{tab:obs_arx} and shown in the spectrum of Fig.~\ref{fig:spectrum}, with error bars reporting the $1\sigma$ nominal uncertainties from the fitting procedure. The source is brightest at the lowest frequencies, where the LOFAR flux densities range between 4.0 and 7.7\,mJy. The spectrum is rather puzzling in this region, with a flat trend between 130 and 150\,MHz and a rise between 150 and 200\,MHz (Fig.~\ref{fig:spectrum}, inset). In the $\sim1$\,GHz region, the source is somewhat fainter; the most significant detection is achieved thanks to the most sensitive and highest-resolution FIRST data ($0.79\pm0.10$\,mJy); the NVSS and RACS data indicate slightly larger values, perhaps suggestive of the presence of some extended emission; the signal-to-noise ratio is, however, lower, and the results can be considered overall consistent with FIRST.
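The distance scale quoted above follows directly from the adopted cosmology ($H_0=69.32$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm m}=0.286$, $\Omega_\Lambda=0.714$); a minimal sketch using a direct numerical integration, with no external cosmology package:

```python
import math

H0, Om, OL = 69.32, 0.286, 0.714   # adopted flat Lambda-CDM cosmology
C_KMS = 2.998e5                    # speed of light (km/s)
z = 0.341

def E(zp):                         # dimensionless Hubble rate E(z)
    return math.sqrt(Om * (1.0 + zp)**3 + OL)

# Comoving distance: D_C = (c/H0) * integral of dz/E(z), midpoint rule.
n = 20000
dz = z / n
D_C = (C_KMS / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))   # Mpc

D_L = (1.0 + z) * D_C              # luminosity distance, ~1825 Mpc
D_A = D_C / (1.0 + z)              # angular-diameter distance
kpc_per_arcsec = D_A * 1.0e3 * math.pi / (180.0 * 3600.0)            # ~4.9
print(round(D_L), round(kpc_per_arcsec, 2))
```

The same integral underlies standard packages (e.g., `astropy.cosmology`), which give the identical numbers for these parameters.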
At 3\,GHz, the VLASS data are the only ones in which the fitting result suggests that the source is resolved, providing a value for the integrated flux density significantly larger than the surface brightness peak. However, J1304$+$2938 is located exactly on a sidelobe of the relatively bright ($S_\mathrm{3\,GHz}=6.0\pm0.4$\,mJy) and clearly extended radio source FIRST\,J130353.7+293734 (coincident with SDSS\,J130353.70+293733.1); considering this fact, the low signal-to-noise, and the \textit{quick look} nature of the VLASS data, we cannot conclusively determine the nature of the detected source, and we consider the values for both components in our analysis. The nominal deconvolved size of the major axis of the component would be 5.6$^{\prime\prime}$, corresponding to $\sim$27\,kpc at $z=0.341$. We further point out that the VLASS data were taken in two separate epochs, one before and one (85\,d) after the occurrence of the GRB. The two measurements are, however, in agreement with each other within the uncertainties, so we cannot claim any contribution from the afterglow, whose flux density is constrained to be no higher than 180\,\textmu Jy. As a matter of fact, under the reasonable assumption that the afterglow does not contribute to the second-epoch emission, we also combined the two epochs in a single image, which allows us to obtain a better-constrained fit, also reported in Table \ref{tab:obs_arx}. On milliarcsecond scales, our deep VLBI observations did not detect any source at 1.6 and 5\,GHz. We can put stringent $3\,\sigma$ upper limits on the peak brightness of about $30$\,\textmu Jy\,beam$^{-1}$ at both frequencies, which corresponds to $\sim$9$\times 10^{28}$\,erg\,s$^{-1}$\,Hz$^{-1}$ if we use our spectroscopic redshift and adopt a reference spectral index of $\alpha=0.0$ (typical of compact components).
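The quoted luminosity limit follows from $L_\nu = 4\pi D_L^2\,S_\nu\,(1+z)^{-(1+\alpha)}$; a minimal sketch (the $k$-correction convention for $F_\nu\propto\nu^{\alpha}$ and $D_L\simeq1825$\,Mpc from the adopted cosmology are assumed):

```python
import math

MPC_CM = 3.086e24                 # cm per Mpc
D_L = 1825.0 * MPC_CM             # luminosity distance at z = 0.341 (cm)
z, alpha = 0.341, 0.0             # flat spectral index for compact components

S_nu = 30e-6 * 1e-23              # 30 uJy in erg s^-1 cm^-2 Hz^-1
L_nu = 4.0 * math.pi * D_L**2 * S_nu * (1.0 + z)**(-(1.0 + alpha))
print(f"{L_nu:.1e}")              # ~9e28 erg s^-1 Hz^-1
```

With $\alpha=0$ the $k$-correction reduces to a single $(1+z)^{-1}$ factor, which brings the raw $4\pi D_L^2 S_\nu$ value down to the quoted $\sim$9$\times 10^{28}$\,erg\,s$^{-1}$\,Hz$^{-1}$.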
On the larger scales, the moderate signal-to-noise, the difference in angular resolution and observing epochs, and the still preliminary nature of the data from the latest surveys prevent an accurate modelling of the spectrum, which will be the subject of a future study. However, the overall trend of optically thin emission, perhaps with a hint of self-absorption at low frequency, indicates the non-thermal nature of the emission in the observed frequency range. As a reference, in Fig.~\ref{fig:spectrum} we overlay a $F_\nu \propto \nu^{-0.75}$ power law on the observed data. By using this reference value for the extended emission, and the highest signal-to-noise measurement (from FIRST), we derive a luminosity at 1.4\,GHz of L$\simeq$(2.9$\pm$0.4)$\times10^{30}$\,erg\,s$^{-1}$\,Hz$^{-1}$. \section{Discussion} \label{sec:discussion} We carried out an extensive search for long-duration GRB host galaxies in the literature, comprising previous observations in the radio band \citep{Berger2001, Berger2003, Michalowski2012, Hatsukade2012, Perley2013, Stanway2014, Stanway2015, Perley2015, Michalowski2015, Greiner2016}, and we ended up with 87 galaxies. Among these, only 19 are detected in the radio, with J1304$+$2938 being the fifth most luminous; more generally, radio emission above $10^{30}$\,erg\,s$^{-1}$\,Hz$^{-1}$ turned out to be rare. Having multi-frequency and multi-resolution data is a further element of novelty, even though it leads us to a quite complex picture, worth discussing in at least two extreme cases. The fact that the GRB presents possible indications of gravitational lensing adds further interest to the discussion of the properties of this galaxy. \subsection{The radio loud AGN} The radio-to-optical luminosity ratio $R = F_\mathrm{radio} / F_\mathrm{opt}$ is a classical tool to characterize the radio loudness of an active galaxy \citep{Kellermann1989}.
Considering the nearest available bands to those traditionally used to calculate $R$, we obtain for J1304$+$2938 a value of $R=53$, i.e.\ well into the radio-loud domain. The 1.4\,GHz radio luminosity from FIRST ($2.9\times10^{23}$\,W\,Hz$^{-1}$) and the steep spectral index in the radio band would place J1304$+$2938 in the Fanaroff-Riley I (FRI) class. The available data do not, however, allow a direct confirmation of the expected morphology for an FRI radio galaxy, with a compact core and twin jets ending in diffuse, edge-dimmed lobes or plumes. The survey data are overall compatible with the presence of some diffuse emission on scales of a few tens of kpc, as indicated by the apparently resolved nature of the VLASS image and the increase in total flux density when decreasing the resolution around 1\,GHz (from FIRST to RACS and NVSS). If the total extension of the radio emission were confined within a few kpc, the source could be classified as an FR0 \citep{Baldi2016} or a low-power compact source (LPC, \citealt{Giroletti2005}), which actually represent a substantial fraction of the radio-loud population at lower redshift \citep{Baldi2018}. However, in spite of all the circumstantial support from the radio surveys, the AGN scenario lacks the ultimate signature, i.e.\ the presence of an active compact core, either from high-energy data or from VLBI observations. In this sense, our deep images could have provided a direct confirmation. The lack of a detection at rather low luminosities does not allow us to conclude in support of this scenario, while leaving open the possibility of a strongly debeamed core (if the jets of the radio galaxy are seen at a large viewing angle) or of recently switched-off nuclear activity \citep{Murgia2011}. Before the GRB detection by \emph{Swift}, in X-rays only the ROSAT satellite pointed at this region of the sky, between July and December 1990.
No source is visible in the 0.1--2.4\,keV image of the ROSAT All-Sky Survey \citep[RASS;][]{Voges1999}. With PIMMS, assuming a power-law model with a photon index of 1.7, we could obtain only loose upper limits on the flux ($\sim$1$\times$10$^{-13}$\,erg\,cm$^{-2}$\,s$^{-1}$ in the 0.2--10\,keV band) and luminosity ($\lesssim$4$\times$10$^{43}$\,erg\,s$^{-1}$), not sufficient to establish the presence of AGN-related X-ray emission. This scenario could be conclusively demonstrated in the future by a detection at high energies or by the successful imaging of a radio-galaxy structure in deeper radio data. In this case, and if GRB~200716C belongs to J1304$+$2938, this would be the third GRB found within a galaxy with an AGN, after GRB~170817A, which belonged to NGC 4993 \citep{Coulter2017, Palmese2017, Fong2017, Contini2018, Wu2018}, and GRB~150101B, which belonged to WISEA J123204.97-105600.6 \citep{Xie2016}. NGC 4993 is a low-luminosity, radio-loud galaxy \citep{Wu2018}, while WISEA J123204.97-105600.6 (2MASX J12320498-1056010) is an X-ray bright, optically normal, radio-loud galaxy \citep{Xie2016}. \subsection{The extreme star formation} An immediate implication of the non-detection with the EVN is that the radio emission detected by lower angular resolution surveys is extended on scales larger than the largest detectable angular scale $\vartheta_{\rm LAS}$, which is $2$\,arcsec at 1.6\,GHz (hence smaller than the angular scales sampled by the VLASS). Possible mechanisms for diffuse radio emission unrelated to nuclear activity are free-free emission from the ionised gas surrounding a population of bright OB stars, which would lead to a thermal spectrum, and/or the supernova contribution from young stars, which is characterised by a steep non-thermal spectrum. As our data are clearly suggestive of a steep spectral index, we can assume the latter to be the predominant emission mechanism in the portion of the spectrum we are interested in.
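If supernova-driven synchrotron emission dominates, the 1.4\,GHz luminosity maps directly onto a star formation rate. The following sketch is a rough cross-check, not taken from the paper: the calibration constant is a commonly used value in the style of Murphy et al. (2011), and it is an assumption here; the \citet{Greiner2016} conversion adopted in the text may differ in detail.

```python
# Back-of-the-envelope SFR from the 1.4 GHz luminosity of J1304+2938.
# The calibration constant CAL is an assumption (Murphy et al. 2011-style);
# the Greiner et al. (2016) conversion used in the text may differ.

L_RADIO = 2.9e30   # erg s^-1 Hz^-1 at 1.4 GHz (FIRST-based, from the text)
CAL = 1.57e28      # erg s^-1 Hz^-1 per (M_sun/yr); assumed calibration

sfr = L_RADIO / CAL
print(f"SFR ~ {sfr:.0f} M_sun/yr")  # of order 10^2 M_sun/yr
```

With this assumed calibration the result lands close to the FIRST-based estimate quoted below, which is reassuring but not a substitute for the published conversion.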
Considering the high luminosity we found, this implies a high star formation rate (SFR hereafter), i.e. SFR $\ge 15$\,M$_{\odot}$\,yr$^{-1}$ \citep{Greiner2016}. Since the SFR can be inferred from the radio luminosity with different formulas, from the flux densities at 1.4\,GHz of the FIRST and the NVSS surveys we estimate SFR$= (186 \pm 24)$\,M$_{\odot}$\,yr$^{-1}$ and $(376 \pm 94)$\,M$_{\odot}$\,yr$^{-1}$, respectively, using the conversion from \citet{Greiner2016}. In order to more securely estimate the contribution of the diffuse star formation within J1304$+$2938, a deeper spectroscopic follow-up in the optical is necessary. The detection of emission lines typical of star-forming galaxies, such as H$\alpha$ or [N\,II], could provide a robust independent estimate of the SFR and also allow one to study in detail the chemical composition of the galaxy. \begin{table*}[t] \centering \begin{tabular}{ccccc} \toprule Array &Central frequency &T-T$_0$ &Angular resolution & Flux density\\ &(GHz) & (days) & (arcsec) & (\textmu Jy)\\ \midrule EVN+{\it e}$-$MERLIN &1.6 & 379 & 0.010 & $<24$\\ EVN & 5.0 & 372 & 0.005 & $<29$\\ \bottomrule \end{tabular} \caption[]{VLBI observations of J1304$+$2938. \textsl{Column 1:} Array; \textsl{Column 2:} observing frequency (GHz); \textsl{Column 3:} T-T$_0$ (days), the time elapsed since the burst; \textsl{Column 4:} angular resolution (arcsec); \textsl{Column 5:} Upper limits (3\,$\sigma$) for the flux density (\textmu Jy).} \label{tab:obs} \end{table*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Amati4.pdf} \caption{Location of GRB~200716C (red star) in the rest-frame (E$_{iso}$, E$_{p,z}$) plane for the short-GRBs (grey squares) and the long-GRBs (black circles) of \citet{Tsvetkova2017}. The grey solid line indicates the Amati relation estimated using the long GRBs of \citet{Tsvetkova2017}, while the grey dashed lines indicate its 3$\sigma$ uncertainty.
The orange dotted line shows the position of the burst for $0.341 \le z \le 10$.} \label{fig:amati_relation} \end{figure} \subsection{J1304+2938 and GRB 200716C} \label{subsec:GRBandHost} The overall radio properties of J1304$+$2938 seem to favour a long-GRB scenario for GRB~200716C. Nevertheless, there are some caveats that are relevant to the interpretation of this burst. First of all, the spectrum of the galaxy in the radio band shows some peculiarities that could be due to the low signal-to-noise ratio, the different angular resolutions and/or the epochs of the surveys. To solve the conundrum, deep observations with arcsecond resolution and a broad bandwidth are required, such as those provided by, e.g., the Karl G. Jansky Very Large Array. Second, taking the isotropic equivalent energy E$_{iso}$ and the time-integrated peak energy E$_{p}$ for 150 long- and short-duration Konus/Wind GRBs \citep{Tsvetkova2017}, with E$_{iso} = 3.7\times10^{51}$\,erg and E$_{p,z} = 880$\,keV, GRB~200716C is a clear outlier of the Amati relation, where E$_{p,z} =$ E$_{p}(1+z)$ (see Fig.~\ref{fig:amati_relation}) and E$_{iso}$ was rescaled to $z=0.341$. Even if J1304$+$2938 is a foreground galaxy and GRB~200716C a background source, i.e. at higher $z$, this GRB would still not lie in the long-GRB area (Fig.~\ref{fig:amati_relation}, orange dotted line). To be consistent with the 3\,$\sigma$ uncertainty of the Amati relation, the uncertainty on the peak energy should be at least $\sim$230\,keV (1\,$\sigma$). Finally, we note that GRB~200716C is located close to another well-known and still puzzling outlier of the E$_{iso}$-E$_{p,z}$ relation, i.e. GRB~061021 \citep{Nava2012}. The prompt emission light curve of GRB~200716C shows two prominent peaks, followed by extended emission lasting up to T$_{90} \sim$90\,s \citep{Veres2020, Barthelmy2020, Torii2020, Xue2020}.
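The redshift track of the burst in the (E$_{iso}$, E$_{p,z}$) plane can be sketched numerically. The snippet below assumes a flat $\Lambda$CDM cosmology with $H_0=70$\,km\,s$^{-1}$\,Mpc$^{-1}$ and $\Omega_m=0.3$ (our assumption; the cosmology adopted in the paper is not stated in this section), and holds the observed fluence and peak energy fixed while varying the assumed redshift:

```python
import math

# How (E_iso, E_pz) of a burst shifts with the assumed redshift, keeping the
# observed fluence and peak energy fixed. Flat LambdaCDM with H0 = 70 km/s/Mpc
# and Om = 0.3 is assumed here (hypothetical choice of cosmology).

C_KM_S, H0, OM = 2.998e5, 70.0, 0.3

def lum_dist_mpc(z, steps=2000):
    """Luminosity distance in Mpc via a trapezoidal comoving-distance integral."""
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        e1 = math.sqrt(OM * (1 + z1) ** 3 + 1 - OM)
        e2 = math.sqrt(OM * (1 + z2) ** 3 + 1 - OM)
        integral += 0.5 * (1.0 / e1 + 1.0 / e2) * dz
    return (1 + z) * (C_KM_S / H0) * integral

# Values quoted in the text, anchored at the host redshift z = 0.341.
Z_REF, EISO_REF, EPZ_REF = 0.341, 3.7e51, 880.0   # erg, keV

def track(z):
    """(E_iso, E_pz) the same observed burst would have at redshift z."""
    dl_ratio = lum_dist_mpc(z) / lum_dist_mpc(Z_REF)
    e_iso = EISO_REF * dl_ratio ** 2 * (1 + Z_REF) / (1 + z)  # E_iso ~ D_L^2/(1+z)
    e_pz = EPZ_REF * (1 + z) / (1 + Z_REF)                    # E_pz = E_p (1+z)
    return e_iso, e_pz

for z in (0.341, 1.0, 3.0, 10.0):
    e_iso, e_pz = track(z)
    print(f"z = {z:6.3f}:  E_iso = {e_iso:.2e} erg,  E_pz = {e_pz:.0f} keV")
```

Both quantities grow with the assumed redshift, which is why moving the burst to higher $z$ slides it along the orange dotted line rather than into the long-GRB region.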
From the analysis of the $\gamma$-ray light curve, \citet{Wang2021} and \citet{Yang2021} suggested that GRB~200716C might be a short-GRB gravitationally lensed by an intermediate-mass black hole (IMBH), probably hosted by J1304$+$2938. According to this scenario, the separation between the two peaks in the prompt emission light curve corresponds to the gravitational time delay between the two lensed images. Potentially, VLBI observations could detect a compact emission, which may indicate the presence of the possibly radio-loud IMBH acting as a (milli-)lens \citep[e.g.,][]{Paragi2006}. A possible radio detection of an IMBH would greatly help our understanding of the localisation of these objects in galaxies, which is poorly constrained from an observational perspective \citep[e.g.,][]{Weller2022}. Ultraluminous X-ray sources (ULX) have been suggested as possible IMBHs \citep{Kaaret2001, Miller2003} and they are variable objects on different time scales (from months to years, see e.g. \citealt{Lasota2011, Earnshaw2016, Atapin2019}). Unfortunately, neither the archival observations nor our sensitive VLBI follow-up can shed light on this hypothesis: our non-detection could also be due to a potential variability of the source, but dedicated X-ray observations at high angular resolution, e.g. with {\it Chandra}, would be necessary to test this scenario. To date, only a few (macro-)lensing galaxies showing radio/mm emission have been found \citep{McKean2007, Haas2014, Paraficz2018}, making ``radio-emitting'' lenses extremely rare objects\footnote{These radio-loud lenses are at higher redshifts than J1304$+$2938 ($z\sim0.65-0.8$).}. In general, VLBI is the only method that allows one to directly image the multiple images produced by lenses with mass $<10^{5-6}$\,M$_{\odot}$, which are expected to be separated by a few mas \citep{Spingola2019, Casadio2021}.
Nevertheless, to detect the putative radio lensed images of GRB~200716C, the VLBI observations should have taken place within a few hours or days after the detection of the burst at $\gamma$-rays. \section{Conclusions} \label{sec:conclusions} In this Letter we presented the analysis of dedicated VLBI observations, together with infrared and optical public data, of the putative host galaxy J1304$+$2938 of GRB~200716C at $z=0.341$. We did not detect any compact or diffuse radio emission within a field of view of 2.5\,arcsec at 1.6 and 5\,GHz, with a sensitivity of $<10$\,\textmu Jy\,beam$^{-1}$. Moreover, by performing a dedicated spectroscopic follow-up with the TNG, we confirmed the previous redshift estimate of the galaxy. The non-detection with the EVN and EVN+{\it e}-MERLIN could be due to a variable AGN/IMBH, or it could indicate that the radio emission detected at low angular resolution by the RACS, FIRST, NVSS and VLASS surveys is diffuse and, hence, completely resolved out by our VLBI observations. We derive a 1.4\,GHz luminosity larger than $10^{30}$\,erg\,s$^{-1}$\,Hz$^{-1}$, which implies an SFR $>186$\,M$_{\odot}$\,yr$^{-1}$. This high SFR is consistent with the typical environment of long-duration GRBs, favouring a scenario of an unlensed long-GRB hosted by J1304$+$2938. In that case, J1304$+$2938 would be one of the brightest GRB host galaxies in the radio. Nevertheless, the temporal and spectral properties of the prompt emission of GRB~200716C, together with the offset with respect to the Amati relation for long-GRBs, make the nature of this burst still puzzling. \begin{acknowledgements} The European VLBI Network is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project codes: EG118A, EG118B.
e-MERLIN is a National Facility operated by the University of Manchester at Jodrell Bank Observatory on behalf of STFC. This work is based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. In particular, M.~Pedani carried out the observations as part of the telescope's routine engineering time. This research has made use of the CIRADA cutout service at URL cutouts.cirada.ca, operated by the Canadian Initiative for Radio Astronomy Data Analysis (CIRADA). CIRADA is funded by a grant from the Canada Foundation for Innovation 2017 Innovation Fund (Project 35999), as well as by the Provinces of Ontario, British Columbia, Alberta, Manitoba and Quebec, in collaboration with the National Research Council of Canada, the US National Radio Astronomy Observatory and Australia's Commonwealth Scientific and Industrial Research Organisation. This paper includes archived data obtained through the CSIRO ASKAP Science Data Archive, CASDA (http://data.csiro.au). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard \& Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. The authors thank the directors and staff of all the EVN telescopes for making the observations possible. CS acknowledges financial support from the Italian Ministry of University and Research $-$ Project Proposal CIR01$\_$00010. \end{acknowledgements} \bibliographystyle{aa} \bibpunct{(}{)}{;}{a}{}{,}
\section{Introduction}\label{sec:1} \IEEEPARstart{T}{he} use of balanced codes is crucial for some information transmission systems. Errors can occur when storing data onto optical devices, due to low-frequency interference between the servo structures and the data written on the disc. This can be avoided by encoding the data with balanced codes, as these contain no low-frequency components. In such systems, balanced codes are also useful for tracking the data on the disc. Balanced codes are also used to counter cut-off at low frequencies in digital transmission through capacitive coupling or transformers. This cut-off is caused by multiple same-charge bits, and results in a DC level that charges the capacitor in the AC coupler \cite{immink2004}. In general, the suppression of the low-frequency spectrum can be achieved with balanced codes. A large body of work on balanced codes is derived from the simple algorithm for balancing sequences proposed by Knuth \cite{knuth1986}. According to Knuth's parallel algorithm, a binary sequence, $\mbox{\boldmath $x$}$, of even length $k$, can always be balanced by complementing its first or last $i$ bits, where $0 \leq i \leq k$. The index $i$ is then encoded as a balanced prefix that is appended to the data. The decoder can easily recover $i$ from the prefix, and by again complementing the first or last $i$ bits it obtains the original information. For Knuth's serial (or sequential) algorithm, the prefix is used to provide information regarding the information sequence's initial weight. Bits are sequentially complemented from one side of the overall sequence, until the information sequence and prefix together are balanced. Since the original weight is indicated by the prefix, the decoder simply has to sequentially complement bits until this weight is attained.
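As a quick illustration of the parallel scheme, the following sketch finds the smallest balancing index $i$ for a binary sequence; the balanced-prefix encoding of $i$ is omitted, and the code is illustrative only, not taken from \cite{knuth1986}.

```python
def knuth_balance(x):
    """Return (i, y): complementing the first i bits of the binary tuple x
    (of even length k) yields y with weight k/2. Such an i always exists,
    since each step changes the weight by +1 or -1, and the weights at
    i = 0 and i = k are symmetric about k/2."""
    k = len(x)
    y = list(x)
    for i in range(k + 1):
        if sum(y) == k // 2:
            return i, tuple(y)
        if i < k:
            y[i] ^= 1   # complement the (i+1)-th bit
    raise ValueError("x must be binary and of even length")

i, y = knuth_balance((1, 1, 1, 1, 0, 1))
print(i, y)   # i = 2, y = (0, 0, 1, 1, 0, 1)
```

The decoder, knowing $i$ from the prefix, simply re-complements the first $i$ bits of $y$ to recover $x$.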
Al-Bassam \cite{bassam1990phd} presented a generalization of Knuth's algorithm for binary codes, non-binary codes and semi-balanced codes (the latter occur where the number of 0's and 1's differs by at most a certain value in each sequence of the code). The balancing of binary codes with low DC level is based on DC-free coset codes. For the design of non-binary balanced codes, symbols in the information sequence are $q$-ary complemented from one side, but because this process does not guarantee balancing, an extra redundant symbol is added to enforce the balancing (similar to our approach later on). Information regarding how many symbols to complement is sent by using a balanced prefix. Capocelli \emph{et al}. \cite{capocelli1991} proposed using two functions that must satisfy certain properties to encode any $q$-ary sequence into balanced sequences. The first function is similar to Knuth's serial scheme: it outputs a prefix sequence depending on the original sequence's weight. Additionally, all the $q$-ary sequences are partitioned into disjoint chains, where each chain's sequences have unique weights. The second function is then used to select an alternate sequence in the chain containing the original information sequence, such that the chosen prefix and the alternate sequence together are balanced. Tallini and Vaccaro \cite{tallini1999} presented another construction for balanced $q$-ary sequences that makes use of balancing and compression. Sequences that are close to being balanced are encoded with a generalization of Knuth's serial scheme. Based on the weight of the information sequence, a prefix is chosen. Symbols are then ``complemented in stages'', one at a time, until the weight that balances the sequence and prefix together is attained. Other sequences are compressed with a uniquely decodable variable-length code and balanced using the saved space.
Swart and Weber \cite{swart2009} extended Knuth's parallel balancing scheme to $q$-ary sequences with parallel decoding. However, this technique does not provide a prefix code implementation, under the assumption that small lookup tables can be used for this. Our approach aims to implement these prefixes via Gray codes. Swart and Weber's scheme will be expanded on in Section~\ref{sec:2.2}, as it also forms the basis of our proposed algorithm. Swart and Immink \cite{swart2013} described a prefixless algorithm for balancing of $q$-ary sequences. By using the scheme from \cite{swart2009} and applying precoding to a very specific error correction code, it was shown that balancing can be achieved without the need for a prefix. Pelusi \emph{et al}. \cite{pelusi2015} presented a refined implementation of Knuth's algorithm for parallel decoding of $q$-ary balanced codes, similar to \cite{swart2009}. This method significantly improved on \cite{capocelli1991} and \cite{tallini1999} in terms of complexity. The rest of this paper is structured as follows. In Section~\ref{sec:2}, we present the background for our work, which includes Swart and Weber's balancing scheme for $q$-ary sequences \cite{swart2009} and non-binary Gray code theory \cite{guan1998}. In Section~\ref{sec:3}, a construction is presented for sequences where $k=q^t$. Section~\ref{sec:4} extends our proposed construction to sequences with $k\neq q^t$. Finally, Section~\ref{sec:5} deals with the redundancy and complexity of our construction compared to prior-art constructions, and our conclusions are presented in Section~\ref{sec:6}. \section{Preliminaries}\label{sec:2} Let $\mbox{\boldmath $x$} = (x_1 x_2 \dots x_k)$ be a $q$-ary information sequence of length $k$, where $x_i \in \{0,1,\dots,q-1\}$ is from a non-binary alphabet. A prefix of length $r$ is appended to $\mbox{\boldmath $x$}$.
The prefix and information together are denoted by $\mbox{\boldmath $c$} = (c_1 c_2 \dots c_n)$ of length $n=k+r$, where $c_i \in \{0,1,\dots,q-1\}$. Let $w(\mbox{\boldmath $c$})$ refer to the weight of $\mbox{\boldmath $c$}$, that is, the algebraic sum of the symbols in $\mbox{\boldmath $c$}$. The sequence $\mbox{\boldmath $c$}$ is said to be balanced if \begin{equation*} w(\mbox{\boldmath $c$}) = \sum_{i=1}^n c_i = \frac{n(q-1)}{2}. \end{equation*} Let $\beta_{n,q}$ represent this value obtained at the balancing state. For the rest of the paper, the parameters $k$, $n$, $q$ and $r$ are chosen in such a way that the balancing value, $\beta_{n,q}=n(q-1)/2$, is a positive integer. \subsection{Balancing of $q$-ary Sequences}\label{sec:2.2} Any information sequence, $\mbox{\boldmath $x$}$ of length $k$ and alphabet size $q$, can always be balanced by adding (modulo $q$) to that sequence one sequence from a set of balancing sequences \cite{swart2009}. The balancing sequence, $\mbox{\boldmath $b$}_{s,p} = (b_1 b_2 \dots b_k)$ is derived as \begin{equation*} b_i = \begin{cases} s, & i > p, \\ s+1 \pmod q, & i \leq p, \end{cases} \end{equation*} where $s$ and $p$ are integers with $0 \leq s \leq q-1$ and $0 \leq p \leq k-1$. Let $z$ be the iterator through all possible balancing sequences, such that $z = sk + p$ and $0 \leq z \leq kq-1$. Let $\mbox{\boldmath $y$}$ refer to the resulting sequence when adding (modulo $q$) the balancing sequence to the information sequence, $\mbox{\boldmath $y$} = \mbox{\boldmath $x$} \oplus_q \mbox{\boldmath $b$}_{s,p}$, where $\oplus_q$ denotes modulo $q$ addition. The cardinality of the set of balancing sequences equals $kq$, and amongst them at least one leads to a balanced output $\mbox{\boldmath $y$}$.
Since $s$ and $p$ can easily be determined for the $z$-th balancing sequence using $z = sk + p$, we will use the simplified notation $\mbox{\boldmath $b$}_z$ to denote $\mbox{\boldmath $b$}_{s,p}$.\\ \begin{example}\label{ex1} Let us consider the balancing of the 3-ary sequence 2101, of length 4. The encoding process is illustrated below, with weights in bold indicating that the sequences are balanced. \begin{equation*} \begin{array}{c@{\quad\quad}c@{\;}c@{\;}c@{\;}c@{\;}c@{\quad\quad}c} z & \mbox{\boldmath $x$} & \oplus_q & \mbox{\boldmath $b$}_z & = & \mbox{\boldmath $y$} & w(\mbox{\boldmath $y$}) \\ \hline 0 & (2101) & \oplus_3 & (0000) & = & (2101) & \mathbf{4} \\ 1 & (2101) & \oplus_3 & (1000) & = & (0101) & 2 \\ 2 & (2101) & \oplus_3 & (1100) & = & (0201) & 3 \\ 3 & (2101) & \oplus_3 & (1110) & = & (0211) & \mathbf{4} \\ 4 & (2101) & \oplus_3 & (1111) & = & (0212) & 5 \\ 5 & (2101) & \oplus_3 & (2111) & = & (1212) & 6 \\ 6 & (2101) & \oplus_3 & (2211) & = & (1012) & \mathbf{4} \\ 7 & (2101) & \oplus_3 & (2221) & = & (1022) & 5 \\ 8 & (2101) & \oplus_3 & (2222) & = & (1020) & 3 \\ 9 & (2101) & \oplus_3 & (0222) & = & (2020) & \mathbf{4} \\ 10 & (2101) & \oplus_3 & (0022) & = & (2120) & 5 \\ 11 & (2101) & \oplus_3 & (0002) & = & (2100) & 3 \\ \end{array} \end{equation*} For this example, there are four occurrences of balanced sequences. \end{example} A $(\gamma, \tau)$-random walk refers to a path with random increases of $\gamma$ and decreases of $\tau$. In our case, a random walk graph is the plot of the function of $w(\mbox{\boldmath $y$})$ versus $z$. In general, the random walk graph of $w(\mbox{\boldmath $y$})$ always forms a $(1, q-1)$-random walk \cite{swart2009}. Fig.~\ref{fig:ex1} presents the $(1,2)$-random walk for Example~\ref{ex1}. The dashed line indicates the balancing value $\beta_{4,3} = 4$.
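The exhaustive search of Example~\ref{ex1} is easy to reproduce programmatically. The sketch below (illustrative code only) enumerates all $kq$ balancing sequences with $z = sk + p$ and reports the indices that balance a given $q$-ary input:

```python
def balancing_sequences(k, q):
    """Yield (z, b_z) for z = 0, ..., kq-1, where z = s*k + p and the first
    p symbols of b_z equal (s+1) mod q while the remaining ones equal s."""
    for z in range(k * q):
        s, p = divmod(z, k)
        yield z, tuple((s + 1) % q if i < p else s for i in range(k))

def balanced_indices(x, q):
    """Indices z for which y = x (+)_q b_z has the balanced weight k(q-1)/2."""
    k = len(x)
    target = k * (q - 1) / 2
    return [z for z, b in balancing_sequences(k, q)
            if sum((xi + bi) % q for xi, bi in zip(x, b)) == target]

print(balanced_indices((2, 1, 0, 1), 3))   # Example 1: [0, 3, 6, 9]
```

The four balanced indices returned for the sequence 2101 match the bold rows in Example~\ref{ex1}.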
\begin{figure}[!t] \centering \includegraphics{example1} \caption{Random walk graph of $w(\mbox{\boldmath $y$})$ for Example~\ref{ex1}} \label{fig:ex1} \end{figure} This method, as presented in \cite{swart2009}, assumed that the $z$ indices can be sent using balanced prefixes, but the actual encoding of these was not taken into account. For instance, in Example~\ref{ex1} indices $z=0$, $3$, $6$ and $9$ must be encoded into balanced prefixes, in order to send overall balanced sequences. \subsection{Non-binary Gray Codes}\label{sec:2.3} Binary Gray codes were first proposed by Gray \cite{gray1953} for solving problems in pulse code communication, and have been extended to various other applications. The assumption throughout this paper is that a Gray code is mapped from a set of possible sequences appearing in the normal lexicographical order. This ordering results in the main property of binary Gray codes: two adjacent codewords differ in only one bit. The $(r',q)$-Gray code is a set of $q$-ary sequences of length $r'$ such that any two adjacent codewords differ in only one symbol position. This set is not unique, as any permutation of a symbol column within the code could also generate a new $(r',q)$-Gray code. In this work, a unique set of $(r',q)$-Gray codes is considered, as presented by Guan \cite{guan1998}. This set possesses an additional property: the difference between any two consecutive sequences' weights is $\pm1$. This same set of Gray codes was already determined in \cite{er1984} through a recursive method. Let $\mbox{\boldmath $d$} = (d_1 d_2 \ldots d_{r'})$ be any sequence within the set of all $q$-ary sequences of length $r'$, listed in the normal lexicographic order. These sequences are mapped to $(r', q)$-Gray code sequences, $\mbox{\boldmath $g$} = (g_1 g_2 \ldots g_{r'})$, such that any two consecutive sequences are different in only one symbol position. 
Table~\ref{gray} shows a $(3,3)$-Gray code, where $\mbox{\boldmath $d$}$ is the 3-ary representation of the index $z \in \{0,1,\ldots,26\}$ and $\mbox{\boldmath $g$}$ is the corresponding Gray code sequence. We see that for $\mbox{\boldmath $g$}$, the adjacent sequences' weights differ by $+1$ or $-1$. \begin{table}[!b] \centering \caption{Example of $(3,3)$-Gray code}\label{gray}\vspace{-8pt} \begin{tabular}{ccc|ccc|ccc} \hline\hline $z$ & $\mbox{\boldmath $d$}$ & $\mbox{\boldmath $g$}$ & $z$ & $\mbox{\boldmath $d$}$ & $\mbox{\boldmath $g$}$ & $z$ & $\mbox{\boldmath $d$}$ & $\mbox{\boldmath $g$}$ \\ \hline 0 & $(000)$ & $(000)$ & 9 & $(100)$ & $(122)$ & 18 & $(200)$ & $(200)$ \\ 1 & $(001)$ & $(001)$ & 10 & $(101)$ & $(121)$ & 19 & $(201)$ & $(201)$ \\ 2 & $(002)$ & $(002)$ & 11 & $(102)$ & $(120)$ & 20 & $(202)$ & $(202)$ \\ 3 & $(010)$ & $(012)$ & 12 & $(110)$ & $(110)$ & 21 & $(210)$ & $(212)$ \\ 4 & $(011)$ & $(011)$ & 13 & $(111)$ & $(111)$ & 22 & $(211)$ & $(211)$ \\ 5 & $(012)$ & $(010)$ & 14 & $(112)$ & $(112)$ & 23 & $(212)$ & $(210)$ \\ 6 & $(020)$ & $(020)$ & 15 & $(120)$ & $(102)$ & 24 & $(220)$ & $(220)$ \\ 7 & $(021)$ & $(021)$ & 16 & $(121)$ & $(101)$ & 25 & $(221)$ & $(221)$ \\ 8 & $(022)$ & $(022)$ & 17 & $(122)$ & $(100)$ & 26 & $(222)$ & $(222)$ \\ \hline\hline \end{tabular} \end{table} We will make use of the following encoding and decoding algorithms from \cite{guan1998}. \subsubsection{Encoding algorithm for $(r',q)$-Gray code} Let $\mbox{\boldmath $d$} = (d_1 d_2 \ldots d_{r'})$ and $\mbox{\boldmath $g$} = (g_1 g_2 \ldots g_{r'})$ denote respectively a $q$-ary sequence of length $r'$ and its corresponding Gray code sequence. Let $S_i$ be the sum of the first $i-1$ symbols of $\mbox{\boldmath $g$}$, with $2\leq i \leq r'$ and $g_1=d_1$. Then \begin{equation*} S_i=\sum_{j=1}^{i-1}g_j, \quad\text{and}\quad g_i = \begin{cases} d_i, & \text{if } S_i \text{ is even}, \\ q-1-d_i, & \text{if } S_i \text{ is odd}. 
\end{cases} \end{equation*} The parity of $S_i$ determines $\mbox{\boldmath $g$}$'s symbols from $\mbox{\boldmath $d$}$. If $S_i$ is even then the symbol stays the same, otherwise the $q$-ary complement of the symbol is taken. \subsubsection{Decoding algorithm for $(r',q)$-Gray code} Let $\mbox{\boldmath $g$}$, $\mbox{\boldmath $d$}$ and $S_i$ be defined as before, with $2\leq i \leq r'$ and $d_1=g_1$. Then \begin{equation*} S_i=\sum_{j=1}^{i-1}g_j, \quad\text{and}\quad d_i = \begin{cases} g_i, & \text{if } S_i \text{ is even}, \\ q-1-g_i, & \text{if } S_i \text{ is odd}. \end{cases} \end{equation*} \section{Construction for $k=q^t$}\label{sec:3} For the sake of simplicity, we will briefly explain the construction for information lengths limited to $k=q^t$, with $t$ being a positive integer. More details can be found in our conference paper \cite{mambou2016}. In the next section we will show how this restriction can be avoided. The main component of this technique is to encode the balancing indices, $z$, into Gray code prefixes that can easily be encoded and decoded. The prefix together with the information sequence must be balanced. The condition, $k=q^t$, is enforced so that the cardinality of the $(r',q)$-Gray code is equal to that of the balancing sequences, making $r' = \log_q(kq) = \log_q(q^{t+1}) = t+1$. \subsubsection{Encoding} Let $\mbox{\boldmath $c$}' = (\mbox{\boldmath $g$}|\mbox{\boldmath $y$}) = (g_1 g_2 \ldots g_{r'} y_1 y_2 \ldots y_k)$ be the concatenation of the Gray code prefix with $\mbox{\boldmath $y$}$, with $|$ representing the concatenation. As stated earlier, for the sequences $\mbox{\boldmath $y$}$ we obtain a $(1,q-1)$-random walk, and for the Gray codes $\mbox{\boldmath $g$}$ we have a $(1,1)$-random walk. Therefore, when we concatenate the two sequences together, the random walk graph of $\mbox{\boldmath $c$}'$ forms a $(\{0;2\}, \{q-2;q\})$-random walk, i.e. increases of 0 or 2 and decreases of $q-2$ or $q$. 
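Guan's mapping and its inverse translate directly into code. The sketch below (illustrative, not taken from \cite{guan1998}) implements both rules; for $q=3$ and $r'=3$ it reproduces the codewords of Table~\ref{gray}:

```python
def gray_encode(d, q):
    """Guan's (r', q)-Gray code: keep d_i if the running sum of the previous
    Gray symbols is even, else take its q-ary complement."""
    g, s = [], 0
    for di in d:
        gi = di if s % 2 == 0 else q - 1 - di
        g.append(gi)
        s += gi
    return tuple(g)

def gray_decode(g, q):
    """Inverse mapping, driven by the same running sum of Gray symbols."""
    d, s = [], 0
    for gi in g:
        d.append(gi if s % 2 == 0 else q - 1 - gi)
        s += gi
    return tuple(d)

print(gray_encode((1, 0, 0), 3))   # (1, 2, 2): the z = 9 entry of the table
```

Since the decoder sees the Gray symbols themselves, it can rebuild the same running sums as the encoder, which is why the inverse needs no search.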
This concatenation of a Gray code prefix, $\mbox{\boldmath $g$}$, with an output sequence, $\mbox{\boldmath $y$}$, does not guarantee the balancing of the overall sequence, since the increases of 2 in the random walk graph do not guarantee that it will pass through a specific point. An extra symbol $u$ is added to ensure overall balancing, with $u = \beta_{n,q} - w(\mbox{\boldmath $c$}')$ if $0 \leq u \leq q-1$, otherwise $u=0$, thus forcing the random walk graph through a specific point. The overall sequence is the concatenation of $u$, $\mbox{\boldmath $g$}$ and $\mbox{\boldmath $y$}$, i.e. $\mbox{\boldmath $c$} = (u|\mbox{\boldmath $g$}|\mbox{\boldmath $y$}) = (u g_1 g_2 \ldots g_{r'} y_1 y_2 \ldots y_k)$. The length of $\mbox{\boldmath $c$}$ is $n=k+r'+1$. In summary, the balancing of any $q$-ary sequence of length $k$, where $k=q^t$, can be achieved by adding (modulo $q$) an appropriate balancing sequence, $\mbox{\boldmath $b$}_z$, and prepending a redundant symbol $u$ together with a Gray code sequence, $\mbox{\boldmath $g$}$. The construction relies on finding a Gray code prefix that describes $z$ and, at the same time, is balanced together with $\mbox{\boldmath $y$}$.\\ \begin{example}\label{ex2} Let us consider the encoding of the ternary sequence, 201 of length 3. Since $t=1$, the length of the Gray code prefixes will be $r'=2$. The overall length is $n=6$ and the balancing value is $\beta_{6,3}=6$. The encoding process below is followed.
\begin{equation*} \begin{array}{c@{\quad\quad}c@{\;}c@{\;}c@{\;}c@{\;}c@{\quad\quad}c@{\quad\quad}c} z & \mbox{\boldmath $x$} & \oplus_q & \mbox{\boldmath $b$}_z & = & \mbox{\boldmath $y$} & \mbox{\boldmath $c$} & w(\mbox{\boldmath $c$}) \\ \hline 0 & (201) & \oplus_3 & (000) & = & (201) & (\underline{\mathbf{0}00} 201) & 3 \\ 1 & (201) & \oplus_3 & (100) & = & (001) & (\underline{\mathbf{0}01} 001) & 2 \\ 2 & (201) & \oplus_3 & (110) & = & (011) & (\underline{\mathbf{2}02} 011) & \mathbf{6} \\ 3 & (201) & \oplus_3 & (111) & = & (012) & (\underline{\mathbf{0}12} 012) & \mathbf{6} \\ 4 & (201) & \oplus_3 & (211) & = & (112) & (\underline{\mathbf{0}11} 112) & \mathbf{6} \\ 5 & (201) & \oplus_3 & (221) & = & (122) & (\underline{\mathbf{0}10} 122) & \mathbf{6} \\ 6 & (201) & \oplus_3 & (222) & = & (120) & (\underline{\mathbf{1}20} 120) & \mathbf{6} \\ 7 & (201) & \oplus_3 & (022) & = & (220) & (\underline{\mathbf{0}21} 220) & 7 \\ 8 & (201) & \oplus_3 & (002) & = & (200) & (\underline{\mathbf{0}22} 200) & \mathbf{6} \\ \end{array} \end{equation*} The underlined symbols represent the appended prefix, the bold underlined symbol is $u$, which is chosen such that $\beta_{6,3}$ is obtained whenever possible, and the bold weights indicate that balancing was achieved. Fig.~\ref{fig:ex2} presents the random walk graph for the weight of the overall sequence, $\mbox{\boldmath $c$}$, with the shaded area indicating the possible weights as a result of the flexibility in choosing $u$. \end{example} \begin{figure}[!b] \centering \includegraphics{example2} \caption{Random walk graph of $w(\mbox{\boldmath $c$})$ for Example~\ref{ex2}} \label{fig:ex2} \end{figure} \subsubsection{Decoding} The decoding consists of recovering the index $z$ from the Gray code prefix, $\mbox{\boldmath $g$}$, and finding $s$ and $p$ to reconstruct $\mbox{\boldmath $b$}_z$.
The original sequence is then obtained as $\mbox{\boldmath $x$} = \mbox{\boldmath $y$} \ominus_q \mbox{\boldmath $b$}_z$, where $\ominus_q$ represents modulo $q$ subtraction. As an example, Table~\ref{tab:a1} shows the decoding of every Gray code sequence into balancing sequences using the $(2,3)$-Gray code set. \begin{table}[!b] \centering \caption{Decoding of $(2,3)$-Gray codes for $3$-ary sequences of length 2}\label{tab:a1} \begin{tabular}{ccccc} \hline\hline Gray code ($\mbox{\boldmath $g$}$) & Sequence ($\mbox{\boldmath $d$}$) & $z$ & $s,p$ & $\mbox{\boldmath $b$}_z$ \\ \hline $(00)$ & $(00)$ & 0 & $0,0$ & $(000)$ \\ $(01)$ & $(01)$ & 1 & $0,1$ & $(100)$ \\ $(02)$ & $(02)$ & 2 & $0,2$ & $(110)$ \\ $(12)$ & $(10)$ & 3 & $1,0$ & $(111)$ \\ $(11)$ & $(11)$ & 4 & $1,1$ & $(211)$ \\ $(10)$ & $(12)$ & 5 & $1,2$ & $(221)$ \\ $(20)$ & $(20)$ & 6 & $2,0$ & $(222)$ \\ $(21)$ & $(21)$ & 7 & $2,1$ & $(022)$ \\ $(22)$ & $(22)$ & 8 & $2,2$ & $(002)$ \\ \hline\hline \end{tabular} \end{table} \\ \begin{example}\label{ex3} Consider the received ternary sequence $\mbox{\boldmath $c$} = (012012)$ of length $n=6$ (one of the balanced sequences from Example~\ref{ex2}). The $(2,3)$-Gray code prefixes were used in encoding the original sequence. The first symbol in $\mbox{\boldmath $c$}$, $u=0$ is dropped, then the Gray code prefix is $\mbox{\boldmath $g$}=(12)$. This Gray code corresponds to $\mbox{\boldmath $d$}=(10)$ as presented in Table~\ref{tab:a1}. This implies that $z=3$, leading to $s=1$, $p=0$ and therefore $\mbox{\boldmath $b$}_3 = (111)$. The original sequence is recovered as \begin{equation*} \mbox{\boldmath $x$} = \mbox{\boldmath $y$} \ominus_q \mbox{\boldmath $b$}_z = (012) \ominus_3 (111) = (201). \end{equation*} Thus, the information sequence from Example~\ref{ex2} is recovered. \end{example} \section{Construction for $k \neq q^t$}\label{sec:4} We will now generalize the technique described in the previous section to sequences of any length, i.e. $k \neq q^t$. 
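Both directions of the construction can be sketched end-to-end. The code below (illustrative only) re-implements the Gray mapping of Section~\ref{sec:2.3}, searches for the first index $z$ that admits a valid redundant symbol $u$, and inverts the process; its output for the input 201 is one of the balanced rows of Example~\ref{ex2}, and the decoder mirrors the procedure of Example~\ref{ex3}.

```python
def to_base_q(z, q, width):
    """Write z with `width` q-ary digits, most significant first."""
    d = []
    for _ in range(width):
        d.append(z % q)
        z //= q
    return tuple(reversed(d))

def from_base_q(d, q):
    z = 0
    for di in d:
        z = z * q + di
    return z

def gray_encode(d, q):      # Guan's mapping, as in Section II-C
    g, s = [], 0
    for di in d:
        gi = di if s % 2 == 0 else q - 1 - di
        g.append(gi)
        s += gi
    return tuple(g)

def gray_decode(g, q):
    d, s = [], 0
    for gi in g:
        d.append(gi if s % 2 == 0 else q - 1 - gi)
        s += gi
    return tuple(d)

def encode_balanced(x, q, t):
    """Encode x (length k = q**t) as the balanced c = (u | g | y)."""
    k, rp = len(x), t + 1
    beta = (k + rp + 1) * (q - 1) // 2
    for z in range(k * q):
        s, p = divmod(z, k)                         # z = s*k + p
        y = [(xi + ((s + 1) % q if i < p else s)) % q
             for i, xi in enumerate(x)]
        g = gray_encode(to_base_q(z, q, rp), q)
        u = beta - sum(g) - sum(y)
        if 0 <= u <= q - 1:                         # u closes the weight gap
            return (u,) + g + tuple(y)
    raise ValueError("no balanced encoding found")

def decode_balanced(c, q, t):
    """Drop u, decode the Gray prefix to z, and subtract b_z from y."""
    rp = t + 1
    g, y = c[1:1 + rp], c[1 + rp:]
    s, p = divmod(from_base_q(gray_decode(g, q), q), len(y))
    return tuple((yi - ((s + 1) % q if i < p else s)) % q
                 for i, yi in enumerate(y))

c = encode_balanced((2, 0, 1), 3, 1)
print(c, sum(c))   # (2, 0, 2, 0, 1, 1) with weight 6, as in Example 2
```

Note that the encoder stops at the first valid $z$; any of the other balanced rows of Example~\ref{ex2} would decode equally well.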
The idea is to use a subset of the $(r',q)$-Gray code with an appropriate length to encode the $z$ indices that represent the $kq$ balancing sequences. Therefore, the cardinality of $(r',q)$-Gray code prefixes must be greater than that of the balancing sequences, i.e. $q^{r'} > kq$ or $r' > \log_q k + 1$. However, the challenge is to find the appropriate subset of $(r',q)$-Gray code prefixes that can uniquely match the $kq$ balancing sequences, and still guarantee balancing when combined with $u$ and $\mbox{\boldmath $y$}$. \subsection{$(r',q)$-Gray code prefixes for $q$ odd}\label{sec:subset-q-odd} When examining the random walk graph for Gray codes with $q$ odd, one notices that the random walk forms an odd function around a specific point. Fig.~\ref{fig:graycode1} presents the $(4,3)$-Gray code random walk graph, with $G$ being the intersection point between the horizontal line, $w(\mbox{\boldmath $g$})=4$, and the vertical line, $z = 40$. The graph forms an odd function around this point $G$. In general, for $(r',q)$-Gray codes where $q$ is odd, the random walk of the Gray codes gives an odd function centered around $w(\mbox{\boldmath $g$})=\beta_{r',q}$ and $z = \lfloor \frac{q^{r'}}{2} \rfloor$, where $\lfloor \cdot \rfloor$ represents the floor function. \begin{figure}[!b] \centering \includegraphics[width=1\linewidth]{graycode1} \caption{$(4,3)$-Gray code random walk graph} \label{fig:graycode1} \end{figure} \\ \begin{lemma}\label{lem:1} The random walk graph of $(r',q)$-Gray codes where $q$ is odd forms an odd function around the point $G$. \end{lemma} It was proved in \cite{er1984} that any $(r',q)$-Gray code, where $q$ is odd, is reflected. That is, the random walk graph of the $(r',q)$-Gray code forms an odd function centered around the point $G$. This implies that any subset of an $(r',q)$-Gray code around the center of its random walk graph, where the information sequence is such that $kq$ is odd (i.e. 
$k$ is odd), always has an average weight equal to $\beta_{r',q}$. As we need a unique subset of Gray code sequences for any case, we choose $kq$ elements from the ``middle'' values of $z \in [0,q^{r'}-1]$ and call it the $z$-centered subset. The index for this subset is denoted by $z'$, with $z' \in [z_1, z_2]$. When $kq$ is even (i.e. $k$ is even), it is not guaranteed that the subset of $(r',q)$-Gray codes' average weight around the center equals exactly $\beta_{r',q}$. However, it will be very close to it, with a rounded value that is equal to $\beta_{r',q}$. We formalize these observations in the subsequent lemma. Let $\mathcal{G}$ denote the subset of $kq$ Gray code sequences that are used to encode the index $z'$, let $\overline{w}(\cdot)$ denote the average weight of a set of sequences and let $\lVert\cdot\rVert$ denote rounding to the nearest integer.\\ \begin{lemma}\label{lem:2} For an $(r',q)$-Gray code subset, $\mathcal{G}$, where $q$ is odd and the $z'$-th codewords are chosen with $z' \in [z_1, z_2]$, the following holds: \begin{itemize} \item if $k$ is odd with $z_1 = \lfloor \frac{q^{r'}}{2} \rfloor - \lfloor \frac{kq}{2} \rfloor$ and $z_2 = \lfloor \frac{q^{r'}}{2} \rfloor + \lfloor \frac{kq}{2} \rfloor$, then $\overline{w}(\mathcal{G}) = \beta_{r',q}$, \item if $k$ is even with $z_1 = \lfloor \frac{q^{r'}}{2} \rfloor - \frac{kq}{2}$ and $z_2 = \lfloor \frac{q^{r'}}{2} \rfloor + \frac{kq}{2}-1$, then $\lVert\overline{w}(\mathcal{G})\rVert = \beta_{r',q}$. \end{itemize} \end{lemma} \begin{proof} To simplify notation in this proof, we simply use $\beta$ to represent $\beta_{r',q}$ throughout. 
If $k$ is odd, it follows directly from Lemma~\ref{lem:1} that choosing $kq$ sequences (where $kq$ is odd) from $z=\lfloor \frac{q^{r'}}{2} \rfloor - \lfloor \frac{kq}{2} \rfloor$ to $z=\lfloor \frac{q^{r'}}{2} \rfloor + \lfloor \frac{kq}{2} \rfloor$, centered around $z=\lfloor \frac{q^{r'}}{2} \rfloor$, will result in $\overline{w}(\mathcal{G}) = \beta$, since the random walk forms an odd function around this point. In cases where $k$ is even, if $z_2$ was chosen as $\lfloor \frac{q^{r'}}{2} \rfloor + \frac{kq}{2}$, we would have exactly $\overline{w}(\mathcal{G}) = \beta$ (using the same reasoning as for the case where $k$ is odd), as we use $\frac{kq}{2}$ elements to the left of $\lfloor \frac{q^{r'}}{2} \rfloor$ and $\frac{kq}{2}$ elements to the right of it. However, this would mean that $kq+1$ elements are being used. Thus, $z_2 = \lfloor \frac{q^{r'}}{2} \rfloor + \frac{kq}{2}-1$ must be used. Let $\alpha$ be the weight of the $(z_2+1)$-th Gray code, then \begin{equation*} \overline{w}(\mathcal{G}) = \frac{(kq+1)\beta - \alpha}{kq} = \beta + \frac{\beta-\alpha}{kq}. \end{equation*} The lowest possible value of $\alpha$ is $\alpha_{\min} = 0$, and its highest possible value is $\alpha_{\max} = k(q-1)$. Thus, \begin{equation*} \beta + \frac{\beta-\alpha_{\max}}{kq} \leq \overline{w}(\mathcal{G}) \leq \beta + \frac{\beta-\alpha_{\min}}{kq} \end{equation*} and with some manipulations it can be shown that \begin{equation*} \beta \left(1 - \frac{\frac{2k}{r'}-1}{kq}\right) \leq \overline{w}(\mathcal{G}) \leq \beta\left(1 + \frac{1}{kq}\right). \end{equation*} Finally, where $q$ is odd, we have $q \geq 3$, and rounding to the nearest integer results in $\lVert\overline{w}(\mathcal{G})\rVert = \beta$. \end{proof} \subsection{$(r',q)$-Gray code prefixes for $q$ even}\label{sec:subset-q-even} For the encoding of sequences that make use of $(r',q)$-Gray code prefixes where $q$ is even, a different approach is followed. 
The subset of Gray code prefixes is obtained by placing a sliding window of length $kq$ over the random walk graph of the $(r',q)$-Gray code sequences, and shifting it until we obtain a subset with an average weight value of $\beta_{r',q}$. Fig.~\ref{fig:graycode2} shows the $(6,2)$-Gray code random walk graph. However, this process does not always guarantee a subset of Gray code prefixes with an average weight value of exactly $\beta_{r',q}$. Since we have flexibility in choosing $u$, we can choose the average weight for the subset to be close to $\beta_{r',q}$, and adjust $u$ as necessary to obtain exact balancing. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{graycode2} \caption{$(6,2)$-Gray code random walk graph} \label{fig:graycode2} \end{figure} \\ \begin{lemma}\label{lem:3} An $(r', q)$-Gray code subset, $\mathcal{G}$, where $q$ is even, can be chosen such that $\lVert\overline{w}(\mathcal{G})\rVert = \beta_{r',q}$. \end{lemma} \begin{proof} A similar reasoning as in the proof of Lemma~\ref{lem:2}, where a symbol with weight $\alpha$ is repeatedly removed from the set, can be used to find $\lVert\overline{w}(\mathcal{G})\rVert$. \end{proof} \subsection{Encoding} Having presented all the required components, we now propose our encoding algorithm. The length of the required Gray code prefix is \begin{equation}\label{eq1} r' = \lceil \log_qk \rceil + 1, \end{equation} where $\lceil \cdot \rceil$ represents the ceiling function. The cardinality of $(r',q)$-Gray codes equals $q^{r'}$. This implies that $q^{r'-1} < kq < q^{r'}$. The encoding will make use of a subset of $kq$ Gray code sequences from the $q^{r'}$ available ones.\\ \begin{theorem} Any $q$-ary sequence can be balanced by adding (modulo $q$) an appropriate balancing sequence, $\mbox{\boldmath $b$}_z$, and prefixing a redundant symbol, $u$, with a Gray code sequence, $\mbox{\boldmath $g$}$, taken from the subset of $(r',q)$-Gray code prefixes. 
\end{theorem} \begin{proof} Let $\mathcal{U}$ denote the set of possible symbols for $u$, i.e. $\mathcal{U} = \{0,1,\ldots,q-1\}$, let $\mathcal{G}$ denote the subset of Gray code sequences, and let $\mathcal{Y}$ denote the set of $kq$ output sequences after the $kq$ balancing sequences are added to the information sequence. It is easy to see that \begin{equation*} \overline{w}(\mathcal{U}) = \frac{(q-1)}{2}. \end{equation*} From Lemmas~\ref{lem:2} and \ref{lem:3}, the subset of $(r',q)$-Gray code prefixes that corresponds to the $kq$ balancing sequences is chosen such that \begin{equation*} \lVert\overline{w}(\mathcal{G})\rVert = \frac{r'(q-1)}{2} = \beta_{r',q}. \end{equation*} It was proved in \cite{swart2009} that the average weight of the $kq$ sequences, $\mbox{\boldmath $y$} = \mbox{\boldmath $x$} \oplus_q \mbox{\boldmath $b$}_z$, is such that \begin{equation*} \overline{w}(\mathcal{Y}) = \frac{k(q-1)}{2} = \beta_{k,q}. \end{equation*} By considering $\mbox{\boldmath $c$} = (u|\mbox{\boldmath $g$}|\mbox{\boldmath $y$})$, with length $n = k+r'+1$, as the overall sequence to be transmitted, it follows that: \begin{align*} \overline{w}(\mathcal{U}) + \lVert\overline{w}(\mathcal{G})\rVert + \overline{w}(\mathcal{Y}) &= \frac{(q-1)}{2} + \frac{r'(q-1)}{2} + \frac{k(q-1)}{2} \\ &= \frac{(k+r'+1)(q-1)}{2} \\ &= \beta_{n,q}. \end{align*} This implies that there is at least one $\mbox{\boldmath $c$}$ for which $w(\mbox{\boldmath $c$}) \leq \beta_{n,q}$ and at least one other $\mbox{\boldmath $c$}$ for which $w(\mbox{\boldmath $c$}) \geq \beta_{n,q}$. Taking the random walk's increases into account, as well as the flexibility in choosing $u$, we can conclude that there is at least one $\mbox{\boldmath $c$}$ such that $w(\mbox{\boldmath $c$}) = \beta_{n,q}$. \end{proof} The encoding algorithm consists of the following steps: \begin{enumerate} \item Obtain the correct Gray code length $r'$ by using \eqref{eq1}. 
Then find the corresponding subset of $(r',q)$-Gray code prefixes, $z' \in [z_1, z_2]$, using the methods discussed in Section~\ref{sec:subset-q-odd} where $q$ is odd and in Section~\ref{sec:subset-q-even} where $q$ is even. \item Incrementing through $z$, determine the balancing sequences, $\mbox{\boldmath $b$}_z$, and add them to the information sequence $\mbox{\boldmath $x$}$ to obtain outputs $\mbox{\boldmath $y$}$. \item For each increment of $z$, append every $\mbox{\boldmath $y$}$ with the corresponding Gray code prefix $\mbox{\boldmath $g$}$ following the lexicographic order, with $\mbox{\boldmath $g$}$ obtained from the $q$-ary representations of the $z'$ indices. \item Finally, set $u = \beta_{n,q}-w(\mbox{\boldmath $y$})-w(\mbox{\boldmath $g$})$ if $u \in \{0, 1, \ldots, q-1\}$, otherwise set $u=0$. \end{enumerate} We illustrate the encoding algorithm with the following two examples, one for an odd value of $q$ and the other for an even value of $q$.\\ \begin{example}\label{ex4} Consider encoding the ternary sequence, $(21120)$, of length 5. Since $r' = \lceil \log_3 5 \rceil + 1 = 3$, we require $(3,3)$-Gray code prefixes to encode the $z'$ indices. The overall sequence length is $n = k+r'+1 = 9$, and the balancing value is $\beta_{9,3} = 9$. The cardinality of the $(3,3)$-Gray code is 27 and, following Lemma~\ref{lem:2}, the required $z$-centered subset of prefixes containing $kq=15$ elements is such that $z' \in [6,20]$. The following process shows the possible sequences obtained. Again the underlined symbols represent the appended prefix, the bold underlined symbol is $u$, and the bold weights indicate balancing.
\begin{equation*} \begin{array}{c@{\;}ccc@{\;}c} z & z' & \mbox{\boldmath $x$} \oplus_q \mbox{\boldmath $b$}_z =\mbox{\boldmath $y$} & \mbox{\boldmath $c$}&w(\mbox{\boldmath $c$})\\ \hline 0 & 6 & (21120) \oplus_3 (00000) = (21120) & (\underline{\mathbf{1}020}21120) & \mathbf{9} \\ 1 & 7 & (21120) \oplus_3 (10000) = (01120) & (\underline{\mathbf{2}021}01120) & \mathbf{9} \\ 2 & 8 & (21120) \oplus_3 (11000) = (02120) & (\underline{\mathbf{0}022}02120) & \mathbf{9} \\ 3 & 9 & (21120) \oplus_3 (11100) = (02220) & (\underline{\mathbf{0}122}02220) & 11 \\ 4 & 10 & (21120) \oplus_3 (11110) = (02200) & (\underline{\mathbf{1}121}02200) & \mathbf{9} \\ 5 & 11 & (21120) \oplus_3 (11111) = (02201) & (\underline{\mathbf{1}120}02201) & \mathbf{9} \\ 6 & 12 & (21120) \oplus_3 (21111) = (12201) & (\underline{\mathbf{1}110}12201) & \mathbf{9} \\ 7 & 13 & (21120) \oplus_3 (22111) = (10201) & (\underline{\mathbf{2}111}10201) & \mathbf{9} \\ 8 & 14 & (21120) \oplus_3 (22211) = (10001) & (\underline{\mathbf{0}112}10001) & 6 \\ 9 & 15 & (21120) \oplus_3 (22221) = (10011) & (\underline{\mathbf{0}102}10011) & 6 \\ 10 & 16 & (21120) \oplus_3 (22222) = (10012) & (\underline{\mathbf{0}101}10012) & 6 \\ 11 & 17 & (21120) \oplus_3 (02222) = (20012) & (\underline{\mathbf{0}100}20012) & 6 \\ 12 & 18 & (21120) \oplus_3 (00222) = (21012) & (\underline{\mathbf{1}200}21012) & \mathbf{9} \\ 13 & 19 & (21120) \oplus_3 (00022) = (21112) & (\underline{\mathbf{0}201}21112) & 10 \\ 14 & 20 & (21120) \oplus_3 (00002) = (21122) & (\underline{\mathbf{0}202}21122) & 12 \\ \end{array} \end{equation*} \end{example} \begin{example}\label{ex5} Consider encoding the 4-ary sequence, (312), of length 3. As before, $r' = \lceil \log_4 3 \rceil + 1 = 2$, requiring $(2,4)$-Gray code prefixes to be used. The overall sequence length is $n = 6$, and the balancing value is $\beta_{6,4} = 9$. The cardinality of the $(2,4)$-Gray code equals 16.
The $z'$-subset is found by employing a sliding window of length $kq = 12$ over the random walk graph of the $(2,4)$-Gray code prefixes, shown in Fig.~\ref{fig:graycode3}. A suitable subset is found where $z_1 = 1$ and $z_2 = 12$, with an average weight value of 3, which equals $\beta_{2,4} = 3$. \begin{figure}[!b] \centering \includegraphics[width=1\linewidth]{graycode3} \caption{$(2,4)$-Gray code random walk graph with chosen subset} \label{fig:graycode3} \end{figure} The encoding process for the 4-ary sequence is shown next. \begin{equation*} \begin{array}{c@{\quad}c@{\quad\quad}c@{\;}c@{\;}c@{\;}c@{\;}c@{\quad\quad}c@{\quad\quad}c} z & z' & \mbox{\boldmath $x$} & \oplus_q & \mbox{\boldmath $b$}_z & = & \mbox{\boldmath $y$} & \mbox{\boldmath $c$} & w(\mbox{\boldmath $c$}) \\ \hline 0 & 1 & (312) & \oplus_4 & (000) & = & (312) & (\underline{\mathbf{2}01}312) & \mathbf{9} \\ 1 & 2 & (312) & \oplus_4 & (100) & = & (012) & (\underline{\mathbf{0}02}012) & 5 \\ 2 & 3 & (312) & \oplus_4 & (110) & = & (022) & (\underline{\mathbf{2}03}022) & \mathbf{9} \\ 3 & 4 & (312) & \oplus_4 & (111) & = & (023) & (\underline{\mathbf{0}13}023) & \mathbf{9} \\ 4 & 5 & (312) & \oplus_4 & (211) & = & (123) & (\underline{\mathbf{0}12}123) & \mathbf{9} \\ 5 & 6 & (312) & \oplus_4 & (221) & = & (133) & (\underline{\mathbf{0}11}133) & \mathbf{9} \\ 6 & 7 & (312) & \oplus_4 & (222) & = & (130) & (\underline{\mathbf{0}10}130) & 5 \\ 7 & 8 & (312) & \oplus_4 & (322) & = & (230) & (\underline{\mathbf{2}20}230) & \mathbf{9} \\ 8 & 9 & (312) & \oplus_4 & (332) & = & (200) & (\underline{\mathbf{0}21}200) & 5 \\ 9 & 10 & (312) & \oplus_4 & (333) & = & (201) & (\underline{\mathbf{2}22}201) & \mathbf{9} \\ 10 & 11 & (312) & \oplus_4 & (033) & = & (301) & (\underline{\mathbf{0}23}301) & \mathbf{9} \\ 11 & 12 & (312) & \oplus_4 & (003) & = & (311) & (\underline{\mathbf{0}33}311) & 11 \\ \end{array} \end{equation*} \end{example} \subsection{Decoding} Fig.~\ref{fig:dec2} presents the decoding process of 
our proposed scheme, for any $q$-ary information sequence. The decoding algorithm consists of the following steps: \begin{enumerate} \item The redundant symbol $u$ is dropped, then the following $r'$ symbols are extracted as the Gray code prefix, $\mbox{\boldmath $g$}$, converted to $\mbox{\boldmath $d$}$ and used to find $z'$. \item From $z'$, the corresponding $z$ index is computed as $z=z'-z_1$. \item $z$ is used to find the parameters $s$ and $p$, then $\mbox{\boldmath $b$}_z$ is derived. \item Finally, the original sequence is recovered through $\mbox{\boldmath $x$} = \mbox{\boldmath $y$} \ominus_q \mbox{\boldmath $b$}_z$. \end{enumerate} \begin{figure}[!t] \centering \includegraphics{decoding} \caption{Decoding process for any $q$-ary information sequence} \label{fig:dec2} \end{figure} \begin{example}\label{ex6} Consider the decoding of the sequence, $(\underline{\mathbf{2}100}121200)$ (the underlined symbols are the prefix and the bold underlined symbol is $u$), where $n=10$ and $q=3$, that was encoded using $(3,3)$-Gray code prefixes. The first symbol $u=2$ is dropped, then the Gray code prefix is extracted as $(100)$, which corresponds to $z'=17$, and the $z'$-subset of $(3,3)$-Gray code prefixes is $z' \in [4,21]$, thus $z=13$. This can be seen from Table~\ref{tab:a2}, where the decoding of all $(3,3)$-Gray codes is shown. This implies that $s=2$ and $p=1$, resulting in $\mbox{\boldmath $b$}_{13}=(022222)$. Finally, the original information sequence is extracted as $\mbox{\boldmath $x$} = \mbox{\boldmath $y$} \ominus_q \mbox{\boldmath $b$}_z = (121200) \ominus_3 (022222) = (102011)$. 
\begin{table}[!b] \centering \caption{Decoding of $(3,3)$-Gray codes for ternary sequences with $k=6$ and $z' \in [4,21]$}\label{tab:a2} \begin{tabular}{cccccc} \hline\hline Gray code ($\mbox{\boldmath $g$}$) & Sequence ($\mbox{\boldmath $d$}$) & $z'$ & $z$ & $s,p$ & $\mbox{\boldmath $b$}_z$ \\ \hline $(000)$ & $(000)$ & 0 & --- & --- & --- \\ $(001)$ & $(001)$ & 1 & --- & --- & --- \\ $(002)$ & $(002)$ & 2 & --- & --- & --- \\ $(012)$ & $(010)$ & 3 & --- & --- & --- \\ \hline $(011)$ & $(011)$ & 4 & 0 & $0,0$ & $(000000)$ \\ $(010)$ & $(012)$ & 5 & 1 & $0,1$ & $(100000)$ \\ $(020)$ & $(020)$ & 6 & 2 & $0,2$ & $(110000)$ \\ $(021)$ & $(021)$ & 7 & 3 & $0,3$ & $(111000)$ \\ $(022)$ & $(022)$ & 8 & 4 & $0,4$ & $(111100)$ \\ $(122)$ & $(100)$ & 9 & 5 & $0,5$ & $(111110)$ \\ $(121)$ & $(101)$ & 10 & 6 & $1,0$ & $(111111)$ \\ $(120)$ & $(102)$ & 11 & 7 & $1,1$ & $(211111)$ \\ $(110)$ & $(110)$ & 12 & 8 & $1,2$ & $(221111)$ \\ $(111)$ & $(111)$ & 13 & 9 & $1,3$ & $(222111)$ \\ $(112)$ & $(112)$ & 14 & 10 & $1,4$ & $(222211)$ \\ $(102)$ & $(120)$ & 15 & 11 & $1,5$ & $(222221)$ \\ $(101)$ & $(121)$ & 16 & 12 & $2,0$ & $(222222)$ \\ $(100)$ & $(122)$ & 17 & 13 & $2,1$ & $(022222)$ \\ $(200)$ & $(200)$ & 18 & 14 & $2,2$ & $(002222)$ \\ $(201)$ & $(201)$ & 19 & 15 & $2,3$ & $(000222)$ \\ $(202)$ & $(202)$ & 20 & 16 & $2,4$ & $(000022)$ \\ $(212)$ & $(210)$ & 21 & 17 & $2,5$ & $(000002)$ \\ \hline $(211)$ & $(211)$ & 22 & --- & --- & --- \\ $(210)$ & $(212)$ & 23 & --- & --- & --- \\ $(220)$ & $(220)$ & 24 & --- & --- & --- \\ $(221)$ & $(221)$ & 25 & --- & --- & --- \\ $(222)$ & $(222)$ & 26 & --- & --- & --- \\ \hline\hline \end{tabular} \end{table} \end{example} \section{Redundancy and Complexity}\label{sec:5} In this section we compare the redundancy and complexity of our proposed scheme with some existing ones. \subsection{Redundancy}\label{sec:5.1} Let $\mathcal{F}_q^k$ denote the cardinality of the full set of balanced $q$-ary sequences of length $k$. 
According to \cite{star1975}, \begin{equation*} \mathcal{F}_q^k = q^{k} \sqrt{\frac{6}{\pi k(q^2-1)}} \left(1+\mathcal{O}\left(\frac{1}{k}\right)\right). \end{equation*} The information sequence length, $k$, in terms of the redundancy, $r$, for the construction in \cite{swart2009} is \begin{equation}\label{eq_c1} k \leq \frac{\mathcal{F}_q^r}{q} \approx q^{r-1} \sqrt{\frac{6}{\pi r(q^2-1)}}. \end{equation} In \cite{capocelli1991}, two schemes are presented for $k$ information symbols, where one satisfies the bound \begin{equation}\label{eq_c3} k \leq \frac{q^r-1}{q-1}, \end{equation} and the other one satisfies \begin{equation}\label{eq_c2} k \leq 2\frac{q^r-1}{q-1}-r. \end{equation} The construction in \cite{tallini1999} presents the information sequence length in terms of the redundancy as \begin{equation*} k \leq \frac{1}{1-2\gamma}\frac{q^r-1}{q-1}-a_1(q, \gamma)r-a_2(q, \gamma), \end{equation*} with $\gamma \in [0,\frac{1}{2})$, where $a_1$ and $a_2$ are scalars depending on $q$ and $\gamma$. If the compression aspect is ignored, the information sequence length is the same as in \eqref{eq_c2}. The prefixless scheme presented in \cite{swart2013} has an information sequence length that satisfies \begin{equation}\label{eq_c4} k \leq q^{r-1} - r. \end{equation} Two constructions with parallel decoding are presented in \cite{pelusi2015}. The first construction, where the prefixes are also balanced as in \cite{swart2009}, has its information length as a function of $r$ as \begin{equation}\label{eq_c5} k \leq \frac{\mathcal{F}_q^r - \{ q \bmod 2 + [(q-1)k] \bmod 2\}}{q-1}. \end{equation} The second construction, where the prefixes need not be balanced, is a refinement of the first and has an information length the same as \eqref{eq_c3}. As presented in Section~\ref{sec:4}, the redundancy of our new construction is given by $r = \lceil \log_q k \rceil + 2$. Therefore, the information sequence length in terms of redundancy satisfies \begin{equation} k \leq q^{r-2}.
\end{equation} Fig.~\ref{fig:redundancies} presents a comparison of the information length, $k$, versus the redundancy, $r$, for the various constructions discussed above. For all $q$, our construction is only comparable to the information lengths from \eqref{eq_c1} and \eqref{eq_c5}, although it does slightly improve on both. However, the trade-off is that as the redundancy becomes greater, the complexity of our scheme tends to remain constant, as we see in the next section. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{redundancies} \caption{Comparison of information sequence length vs. redundancy for various schemes} \label{fig:redundancies} \end{figure} \subsection{Complexity}\label{sec:5.2} We estimate the complexity of our proposed scheme and compare it to that of existing algorithms. The techniques in \cite{capocelli1991} and \cite{tallini1999} both require $\mathcal{O}(qk\log_qk)$ digit operations for the encoding and decoding. The method from \cite{swart2009} takes $\mathcal{O}(qk\log_qk)$ digit operations for the encoding and $\mathcal{O}(1)$ digit operations for the decoding. A refined design of the parallel decoding method is presented in \cite{pelusi2015}, where the complexity equals $\mathcal{O}(k\sqrt{\log_qk})$ in the encoding case and $\mathcal{O}(1)$ digit operations in the decoding process. The following pseudo code presents the steps of our encoding method:
\begin{verbatim}
Input:  Information sequence, x of length k.
Output: Encoded sequence, c of length n=k+r'+1.
for i=0:kq-1;
    j = z1 + i;
    y = x + b(s,p);    // Balancing sequence with index z=i.
    c = [u | g(j) | y];
    If (w(c)==beta)    // Testing for balanced sequence.
        exit();        // Terminate the program.
    end;
end;
\end{verbatim}
In the above code, $i$ is the iterator through the $kq$ output sequences and also through the balancing sequences, while $j = z_1 + i$ is the index into the subset of Gray code sequences, ranging from $z_1 = \lfloor \frac{q^{r'}}{2} \rfloor - \lfloor \frac{kq}{2} \rfloor$ to $z_2= z_1 + kq-1$.
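For concreteness, the encoding and decoding steps can be written out as a short, runnable Python sketch. It uses the standard $q$-ary reflected Gray code and the $z$-centered window of Lemma~\ref{lem:2}, so it applies to odd $q$; the function and variable names below are illustrative only and are not part of the construction's notation.

```python
def gray(zp, rp, q):
    # q-ary reflected Gray code of index zp with rp digits
    # (an assumed construction; it reproduces Table I's (2,3) code).
    digits = [(zp // q**i) % q for i in range(rp - 1, -1, -1)]
    g, total = [], 0
    for d in digits:
        g.append(d if total % 2 == 0 else q - 1 - d)
        total += d
    return g

def encode(x, q):
    # Balance the q-ary sequence x (sketch for odd q, z-centered window).
    k = len(x)
    t = 0
    while q**t < k:            # r' = ceil(log_q k) + 1, avoiding float logs
        t += 1
    rp = t + 1
    z1 = q**rp // 2 - (k * q) // 2
    n = k + rp + 1
    beta = n * (q - 1) // 2    # balancing value beta_{n,q}
    for z in range(k * q):
        s, p = z // k, z % k
        b = [(s + 1) % q if i < p else s for i in range(k)]
        y = [(xi + bi) % q for xi, bi in zip(x, b)]
        g = gray(z1 + z, rp, q)
        u = beta - sum(y) - sum(g)
        if not 0 <= u <= q - 1:
            u = 0
        c = [u] + g + y
        if sum(c) == beta:     # w(c) == beta_{n,q}: balanced, transmit c
            return c
    return None

def decode(c, k, q):
    # Invert the Gray prefix, recover z, then strip the balancing sequence.
    rp = len(c) - k - 1
    g, y = c[1:1 + rp], c[1 + rp:]
    d, total = [], 0
    for gi in g:
        di = gi if total % 2 == 0 else q - 1 - gi
        d.append(di)
        total += di
    zp = 0
    for di in d:
        zp = q * zp + di
    z = zp - (q**rp // 2 - (k * q) // 2)
    s, p = z // k, z % k
    b = [(s + 1) % q if i < p else s for i in range(k)]
    return [(yi - bi) % q for yi, bi in zip(y, b)]
```

For instance, \texttt{encode([2, 0, 1], 3)} returns the balanced sequence $(202011)$ of Example~\ref{ex2}, and \texttt{decode} inverts it.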
The symbol `$\mid$' denotes concatenation. Our encoding scheme is based on the construction in \cite{swart2009} that has an encoding complexity of $\mathcal{O}(qk\log_qk)$, and it takes $\mathcal{O}(\log_qk)$ digit operations to encode Gray code prefixes as presented in \cite{guan1998}. Therefore the encoding of our algorithm requires $\mathcal{O}(qk\log_qk)$ digit operations. The decoding process consists of very simple steps: the recovery of the index $z'$ from the Gray code requires $\mathcal{O}(\log_q k)$ digit operations \cite{guan1998}. After obtaining the index $z'$ from the Gray code prefix, the balancing sequence $\mbox{\boldmath $b$}_z$ is found and then the original information sequence is recovered through the operation, $\mbox{\boldmath $x$} = \mbox{\boldmath $y$} \ominus_q \mbox{\boldmath $b$}_z$, which can be performed in parallel, resulting in a complexity of $\mathcal{O}(1)$. Therefore the overall complexity for the decoding is $\mathcal{O}(\log_q k)$ digit operations. Table~\ref{tab:cmpl} summarizes the complexities for various constructions, comparing the order of digit operations required to complete the encoding/decoding. \begin{table}[!t] \centering \caption{Complexities of various schemes (orders are in digit operations)}\label{tab:cmpl} \renewcommand\arraystretch{1.5} \begin{tabular}{ccc} \hline\hline Algorithm & Encoding order & Decoding order \\ \hline \cite{tallini1999} & $\mathcal{O}(qk\log_qk)$ & $\mathcal{O}(qk\log_qk)$ \\ \cite{capocelli1991} & $\mathcal{O}(qk\log_qk)$ & $\mathcal{O}(qk\log_qk)$ \\ \cite{swart2009} & $\mathcal{O}(qk\log_qk)$ & $\mathcal{O}(1)$ \\ \cite{pelusi2015} & $\mathcal{O}(k\sqrt{\log_qk})$ & $\mathcal{O}(1)$ \\ Our scheme & $\mathcal{O}(qk\log_qk)$ & $\mathcal{O}(\log_qk)$ \\ \hline\hline \end{tabular} \end{table} \section{Conclusion}\label{sec:6} An efficient construction has been proposed for balancing non-binary information sequences.
By making use of Gray codes for the prefix, no lookup tables are needed; only linear operations are required for the balancing and the Gray code implementation. The encoding scheme has a complexity of $\mathcal{O}(qk\log_qk)$ digit operations. For the decoding process, once the Gray code prefix is decoded using $\mathcal{O}(\log_qk)$ digit operations, the balancing sequence is determined and the rest of the decoding is performed in parallel. This makes the decoding fast and efficient. Possible future research directions include finding a mathematical procedure to determine the subset of Gray code sequences for $q$ even, which currently has to be found manually by sliding a window over the random walk graph. In practice, the redundant symbol $u$ only needs to take on the values zero (when the random walk falls on the balancing value) or one (when the random walk falls just below the balancing value). Thus, $u$ carries unnecessary redundancy, especially for large values of $q$. However, the flexibility in choosing $u$ increases the occurrence of balanced sequences, and these additional balanced outputs could potentially be used to send auxiliary data that reduces the redundancy; this was proved for the binary case in \cite{weber2010}. Additionally, given that the random walk graph passes through other weights in the region of the balancing value, the scheme can be extended to the construction of constant weight sequences with arbitrary weights.
\section{Quantum channels} Quantum channels, or completely-positive trace-preserving maps, are the most general maps between quantum systems. They enjoy a diverse range of applications, primarily in the quantum information community \cite{caruso2014}, but also in studies of matrix product states \cite{Fannes1992,Perez-Garcia2006}, entanglement renormalization \cite{Giovannetti2008,Pfeifer2009}, computability theory \cite{aaronson2016}, and even biological inference processes \cite{Lee2016}. The canonical form of a quantum channel $\A$ and its adjoint $\A^{\dgt}$ (a generalization of the Heisenberg picture defined under the Frobenius norm) is \cite{Sudarshan1961,Kraus1971,Choi1975} \begin{equation} \A\left(\r\right)=\sum_{\ell}A^{\ell}\r A^{\ell\dg}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{and}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\A^{\dgt}\left(O\right)=\sum_{\ell}A^{\ell\dg}OA^{\ell}\,,\label{eq:kraus} \end{equation} where $\A$ acts on states $\r$ and $\A^{\dgt}$ on operators $O$. The matrices $A^{\ell}$ are called the Kraus operators of $\A\equiv\left\{ A^{\ell}\right\} $, eq. (\ref{eq:kraus}) is the Kraus form of $\A$, and the only requirement for the channel to be trace preserving is (for $I$ identity) \begin{equation} \sum_{\ell}A^{\ell\dg}A^{\ell}=I\,.\label{eq:cptp} \end{equation} Quantum channels can be represented as matrices acting on a vectorized density matrix, i.e., the $D\times D$ matrix $\r$ written as a $D^{2}$-dimensional vector. Vectorization essentially ``flips'' the bra part in each of the outer products making up $\r$ and $\A$ is written as a $D^{2}\times D^{2}$ matrix of the form $\hat{\A}=\sum_{\ell}A^{\ell}\ot A^{\ell\star}$ acting on the vectorized $\r$ strictly from the left. This \textit{matrix or Liouville representation} of $\A$ \cite{Caves1999} is equivalent to the Kraus representation (\ref{eq:kraus}), and I slightly abuse notation by ignoring hats and not distinguishing the two. 
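As a quick numerical sanity check of the matrix representation (a sketch of my own, with an arbitrary dephasing channel as the test case), the following snippet builds $\hat{\A}=\sum_{\ell}A^{\ell}\ot A^{\ell\star}$ for the row-major vectorization of $\r$ and verifies both the trace-preservation condition (\ref{eq:cptp}) and agreement with the Kraus form (\ref{eq:kraus}):

```python
import numpy as np

def liouville(kraus):
    # Matrix (Liouville) representation sum_l A_l (x) A_l*, acting on the
    # row-major vectorization of rho strictly from the left.
    return sum(np.kron(A, A.conj()) for A in kraus)

# Hypothetical example: qubit dephasing with a made-up parameter p.
p = 0.25
kraus = [np.sqrt(1 - p) * np.eye(2),
         np.sqrt(p) * np.diag([1.0, -1.0])]

# Trace preservation: sum_l A_l^dag A_l = I.
assert np.allclose(sum(A.conj().T @ A for A in kraus), np.eye(2))

M = liouville(kraus)  # a 4 x 4 matrix for a single qubit (D = 2)
rho = np.array([[0.7, 0.5], [0.5, 0.3]], dtype=complex)

# Acting with M on vec(rho) agrees with the Kraus form.
direct = sum(A @ rho @ A.conj().T for A in kraus)
assert np.allclose(M @ rho.reshape(-1), direct.reshape(-1))
```

Note that the ordering $A^{\ell}\ot A^{\ell\star}$ matches the row-major vectorization convention; a column-stacking convention would instead use $A^{\ell\star}\ot A^{\ell}$.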
In the matrix representation, channels can be studied in terms of their eigenvalues and eigenmatrices. The eigenvalues of all channels are contained in the unit disk, and this work focuses on the eigenvalues/matrices $\st$ on the periphery of that disk, i.e., \begin{equation} \A\left(\st\right)=e^{i\la}\st\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{for some real }\la\,. \end{equation} Such eigenmatrices are called the channel's (right) \textit{rotating points}, and those with $\la=0$ are called \textit{fixed points}. The $\st$'s do not have to be physical states themselves, but they are a matrix basis for such states. Since $\A$ may not be diagonalizable, the eigenmatrices $J$ of its adjoint \textemdash{} left rotating points \textemdash{} may be different from $\st$: \begin{equation} \A^{\dgt}\left(J\right)=e^{-i\la}J\,. \end{equation} Left rotating points will be called \textit{conserved quantities} because their expectation value is either constant or oscillates with successive powers of $\A$, but does not decay: \begin{equation} \tr\{J\A^{n}(\r)\}=\tr\{\A^{\dgt n}(J)\r\}=e^{-in\la}\tr\{J\r\}\,. \end{equation} The general block structure of the $\st$'s is already well-known \cite{robin,Lindblad1999,BlumeKohout2008,baumr,Carbone2015}, and here the focus is on the structure of the $J$'s. It is important to note that there are as many conserved quantities as there are rotating points (more technically, the Jordan normal form of $\A$ contains only trivial Jordan blocks for all eigenvalues on the periphery of the unit disk; see, e.g., Prop 6.2 in Ref. \cite{wolf2010}). In the limit of many applications of $\A$, all eigenmatrices with eigenvalues not on the periphery of the unit disk will become irrelevant and all that will be left of the channel is the projection onto the subspace spanned by the rotating points.
The collective effect of many applications of $\A$ is quantified by the channel's \textit{asymptotic projection} $\ppp$, \begin{equation} \ppp(\r)\equiv\lim_{n\rightarrow\infty}\A^{\a n}(\r)\,,\label{eq:asproj} \end{equation} which projects onto the eigenspace of the peripheral spectrum of the channel. The extra parameter $\a$ allows one to take the limit in such a way as to remove the eigenvalues $e^{i\la}$ arising from application of $\A$ on $\r$. For any $\la=\frac{2\pi}{N}n$ (for some positive integers $n,N$), rotating points of $\A$ are fixed points of $\A^{N}$, so one simply takes $\a=N$ to get rid of the extra phases. Other $\la$ which are not rational multiples of $2\pi$ can similarly be removed to arbitrary accuracy \cite{robin,Wolf2010b,wolf2010} by remembering that irrational numbers are limits of sequences of rationals. The above limit is a direct generalization of the large time limit of Markovian/Lindbladian channels $\A_{t}=e^{t\L}$ for some Lindbladian $\L$. However, in that case, $\lim_{t\rightarrow\infty}e^{t\L}$ can produce residual unitary evolution which cannot be removed by clever manipulation of the limit. The asymptotic projection is expressible in terms of (superoperator) projections onto the eigenspaces of the rotating points, \begin{equation} \ppp(\r)=\sum_{\la,\m}\st_{\la\m}\tr\left\{ J^{\la\m\dg}\r\right\} \,,\label{eq:ap} \end{equation} where the rotating points are indexed by their eigenvalue $e^{i\la}$ and $\m$ counts any degeneracies for each $\la$. In that sense, conserved quantities are as important as fixed points despite being less well-understood. Conveniently, the rotating points and their corresponding conserved quantities can be made biorthogonal, $\tr\{J^{\la\m\dg}\st_{\varTheta\n}\}=\d_{\la\varTheta}\d_{\m\n}$. The $\st$'s thus determine the basis elements of a generalized Bloch vector \cite{alicki_book,schirmer} of the asymptotic state $\ppp(\r)$ while the $J$'s determine the coefficients of said Bloch vector. 
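As a numerical illustration of Eqs. (\ref{eq:asproj}) and (\ref{eq:ap}), consider the amplitude-damping channel (an example of my own choosing, not discussed in the text): it has $|0\ket\bra 0|$ as its unique fixed point, the identity as the corresponding conserved quantity, and no other peripheral eigenvalues, so powers of its Liouville matrix converge to the rank-one projection $\ppp(\r)=|0\ket\bra 0|\tr\{\r\}$:

```python
import numpy as np

# Qubit amplitude damping with a made-up damping rate g.
g = 0.3
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
         np.array([[0, np.sqrt(g)], [0, 0]])]
M = sum(np.kron(A, A.conj()) for A in kraus)       # Liouville matrix

sigma = np.array([[1, 0], [0, 0]], dtype=complex)  # fixed point |0><0|
J = np.eye(2, dtype=complex)                       # conserved quantity
# Superoperator form of P(rho) = sigma * tr(J^dag rho):
P = np.outer(sigma.reshape(-1), J.conj().reshape(-1))

# Many applications of the channel converge to the asymptotic projection,
# and P is indeed a projection.
assert np.allclose(np.linalg.matrix_power(M, 200), P)
assert np.allclose(P @ P, P)
```

Here no phases need to be removed ($\a=1$ suffices), since the only peripheral eigenvalue is $1$.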
The biorthogonality condition easily implies that $\ppp$ is really a projection \textemdash{} $\ppp^{2}=\ppp$. If a channel has a unique fixed point $\st$ and no rotating points, then the unique conserved quantity is the identity (due to the necessity of trace preservation) and $\ppp(\r)=\st\tr\{\r\}=\st$. Channels with more non-trivial $\ppp$ are therefore those with multiple fixed or rotating points. As a simple example of such a channel, consider $\A=\{A\}$ acting on $2\times2$ matrices with one Kraus operator $A=\text{diag}\{1,e^{i\t}\}$. Such a channel sports two fixed points, the identity and the Pauli matrix $Z$, and two rotating points $\s_{\pm}$ with eigenvalues $\la=\pm\t$. In fact, since there is only one Kraus operator, such a channel is actually unitary. For a non-unitary example, set $\t=\pi$ (so $A=Z$) and add the Pauli matrix $X$ as another Kraus operator {[}normalizing both $A$'s by $\frac{1}{\sqrt{2}}$ to satisfy trace preservation (\ref{eq:cptp}){]}. This channel has the identity as the unique fixed point and $Y$ as the only rotating point with $\la=\pi$. Since both Kraus operators are Hermitian, the left and right fixed points are the same; we will see examples when they are not later. Other examples of $\ppp$ come from recovery maps in quantum error-correction, which take a state which has undergone an error and project it back into the protected subspace of the quantum code \cite{Ippoliti2014}. \section{Structure of conserved quantities\label{sec:Structure-of-conserved}} \subsection{Faithful channels} The first part focuses on channels that do not contain a decaying subspace. This means that no populations $|\psi\ket\bra\psi|$ decay completely to zero under many applications of the channel: $\bra\psi|\R_{\E}(|\psi\ket\bra\psi|)|\psi\ket\neq0$ for all states $|\psi\ket$, a channel $\E$, and its asymptotic projection $\R_{\E}$. 
Equivalently, the channel must have a fixed point $\r$ of full rank ($\bra\psi|\r|\psi\ket>0$ for all $|\psi\ket$). The structural differences between such channels and channels which do admit decay warrant a special definition: \begin{defn*} A channel $\E\equiv\{E_{\ell}\}$ is \textit{faithful }if it admits a full-rank (i.e., faithful) fixed point $\r$. In other words, \begin{equation} \exists\,\r>0\,\,\text{ such that }\,\,\E(\r)=\r\,. \end{equation} \end{defn*} Here, I always use $\E$ to denote faithful channels and later show how $\E$ can be extended to channels $\A$ which act on a larger Hilbert space and admit a decaying subspace. In this sense, $\E$ is the faithful channel of $\A$. Note that the number of fixed points is independent of this condition, and Table \ref{tab:Some-types-of-channels} relates this definition to others. \begin{table} \begin{tabular}{cccc} \toprule & FP unique? & $\exists$ full-rank FP? & $\exists$ rot. point?\tabularnewline \midrule \midrule ergodic \cite{Raginsky2002,Raginsky2002a,Burgarth2007} & Yes & & \tabularnewline faithful {[}here{]} & & Yes & \tabularnewline irreducible \cite{Davies1970,wolf2010} & Yes & Yes & \tabularnewline mixing \cite{Burgarth2007} & Yes & & No\tabularnewline primitive \cite{Sanz2010,wolf2010} & Yes & Yes & No\tabularnewline \bottomrule \end{tabular}\caption{\label{tab:Some-types-of-channels}Some types of channels; FP$=$fixed point. A blank entry means there is no requirement for that definition. For semigroups, mixing is also known as relaxing \cite{Burgarth2013a} and faithful is also known as minimal \cite{ABFJ}. Primitive is equivalent to strongly irreducible \cite{Sanz2010}.} \end{table} The first result concerns the relationship between the conserved quantities $J$ and the Kraus operators of $\E$.
It is a generalization of a theorem for fixed points of faithful channels \cite{robin,Kribs2003,Choi2006,Gheondea2016}, which states that a conserved quantity $J$ with eigenvalue $\la=0$ commutes with all of the Kraus operators. Here, it is shown that conserved quantities with $\la\neq0$ commute with the Kraus operators up to a phase. For the aforementioned example $\E=\{E\}$ with $E=\text{diag}\{1,e^{i\t}\}$, the conserved quantity $\s_{+}$ satisfies $\s_{+}E=e^{-i\t}E\s_{+}$. This turns out to be true for all faithful channels and reduces to known results for ergodic channels (\cite{Burgarth2013a}, Thm. 9). It can be proven using Thms. 4.1-4.2 and Corollary 4.3 in Ref. \cite{Novotny2012}; a more direct proof is in the appendix. \begin{numtheorem}{\hyperref[prop:1]{Proposition 1.}} Let $\E=\left\{ E_{\ell}\right\} $ be a faithful channel. Let $J$ be a conserved quantity of $\E$, i.e., $\E^{\dgt}\left(J\right)=e^{-i\la}J$ for some real $\la$. Then, for all $\ell$, \begin{equation} JE_{\ell}=e^{-i\la}E_{\ell}J\,.\label{eq:com} \end{equation} \end{numtheorem} Assuming $\E^{\dgt}(J_{1})=e^{-i\la_{1}}J_{1}$ and $\E^{\dgt}(J_{2})=e^{-i\la_{2}}J_{2}$, Eq. (\ref{eq:com}) easily implies that $\E^{\dgt}(J_{1}J_{2})=e^{-i(\la_{1}+\la_{2})}J_{1}J_{2}$. Combined with the fact that there can be at most $D^{2}$ linearly independent conserved quantities, this constrains the possible $\la$ so that only finitely many eigenvalues remain. This brings us to the second result, which concerns the eigenvalues of a specific subset of conserved quantities. Each conserved quantity $J=\jd+\jn$ can be decomposed into a diagonalizable part $\jd$ and a nilpotent part $\jn$ \cite{vaughn_book} ($\jn^{N}=0$ for some $N\leq D$, the dimension of the Hilbert space). While $\la$ can be an irrational multiple of $2\pi$ for strictly nilpotent $J$, it turns out that $e^{i\la}$ are $N$th roots of unity for all diagonalizable $J$ with $N\leq D$. In other words, given any conserved quantity $J$, $J^{D}$ is either zero, the identity, or a projection.
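The commutation relation (\ref{eq:com}) can be verified directly for the diagonal example (a sketch; the convention $\s_{+}=|1\ket\bra0|$ and the value $\t=0.7$ are assumptions chosen to match the phases quoted above):

```python
import numpy as np

theta = 0.7                                           # assumed example value
E = np.diag([1.0, np.exp(1j * theta)])
sigma_p = np.array([[0, 0], [1, 0]], dtype=complex)   # |1><0| convention

# sigma_+ is conserved with eigenvalue e^{-i la}, la = theta:
adj = E.conj().T @ sigma_p @ E                        # adjoint-channel action
assert np.allclose(adj, np.exp(-1j * theta) * sigma_p)

# Commutation up to a phase, eq. (com): J E = e^{-i la} E J.
assert np.allclose(sigma_p @ E, np.exp(-1j * theta) * E @ sigma_p)
```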
Proposition 2 below extends similar results (\cite{wolf2010}, Thm. 6.6; \cite{Fannes1992}, Prop. 3.3; \cite{Bialonczyk2017}, Corr. 3) to faithful channels. It is not, however, as thorough a characterization of the peripheral spectrum as Ref. \cite{Wolf2010b}, Thm. 9. \begin{numtheorem}{\hyperref[prop:2]{Proposition 2.}} Let $\E=\left\{ E_{\ell}\right\} $ be a faithful channel. Let $\jd$ be such that $\E^{\dgt}(\jd)=e^{-i\la}\jd$ for some real $\la$ and assume $\jd$ is diagonalizable. Then, there exists an integer $n$ such that \begin{equation} \la=\frac{2\pi}{N}n\,\,\,\,\,\,\,\text{for some}\,\,\,\,\,\,\,N\leq D\,. \end{equation} \end{numtheorem} Now assume a unitary conserved quantity, $J^{\dg}J=JJ^{\dg}=I$, and let us show that the above two propositions extend known results (\cite{wolf2010}, Prop. 6.7) from irreducible to faithful channels. Proposition \ref{prop:1} readily implies that $\E$ is covariant (more specifically, invariant or symmetric) under $J$, \begin{equation} J\E(\r)J^{\dg}=\E(J\r J^{\dg})\,\,\,\,\,\,\forall\r\,, \end{equation} so conserved quantities are symmetries of the channel. Proposition \ref{prop:2} implies that $J^{N\leq D}=I$, so the set $\{J^{n}\}_{n=0}^{N-1}$ forms the symmetry group $\Z_{N}$. Note that the symmetry group is never infinite for finite dimension $D$. Generalizing this, the set of unitary conserved quantities forms a finite group under which $\E$ is covariant. This is a one-way Noether-type theorem linking conserved quantities to symmetries (see Ref. \cite{pub011} or Ref. \cite{thesis}, Ch. 2.6, for the semigroup analogue). This cannot be extended to a two-way theorem because symmetries of a channel are not always conserved quantities. A simple counterexample is the channel $\E=\{X/\sqrt{2},Z/\sqrt{2}\}$, for which the Hadamard operation $H$ taking $X\leftrightarrow Z$ is a symmetry, but is not conserved {[}$\E^{\dgt}(H)=0${]}.
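The counterexample can be checked numerically (a sketch; the random test matrix $\r$ is arbitrary):

```python
import numpy as np

# The channel E = {X/sqrt2, Z/sqrt2} is covariant under the Hadamard
# symmetry H (which swaps the two Kraus operators), yet H is not conserved.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
kraus = [X / np.sqrt(2), Z / np.sqrt(2)]

def channel(r):
    return sum(K @ r @ K.conj().T for K in kraus)

rng = np.random.default_rng(0)
r = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Covariance: H E(r) H^dg = E(H r H^dg) for arbitrary r ...
assert np.allclose(H @ channel(r) @ H.conj().T, channel(H @ r @ H.conj().T))
# ... but H is annihilated, not conserved, by the adjoint channel:
adjoint_H = sum(K.conj().T @ H @ K for K in kraus)
assert np.allclose(adjoint_H, np.zeros((2, 2)))
```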
\subsection{General channels} Now let us extend faithful channels to channels which do not contain a full-rank fixed point. While Props. \ref{prop:1}-\ref{prop:2} break down for general channels, the extension below implies that, for every general channel, there is a corresponding faithful channel for which they hold. Any faithful channel $\E=\left\{ E_{\ell}\right\} $ can be extended to a channel $\A=\left\{ A^{\ell}\right\} $ which contains a decaying subspace (also, transient subspace \cite{ying2013}). Specifically, the Kraus operators of $\A$ are \begin{equation} A^{\ell}=\begin{pmatrix}E_{\ell}\vspace{4pt} & A_{\ur}^{\ell}\\ 0 & A_{\lr}^{\ell} \end{pmatrix}\equiv\begin{pmatrix}A_{\ul}^{\ell}\vspace{4pt} & A_{\ur}^{\ell}\\ 0 & A_{\lr}^{\ell} \end{pmatrix}\,.\label{eq:bl} \end{equation} The dimensions of the square matrices $E_{\ell}$ and $A_{\lr}^{\ell}$ can differ, and the range of $\ell$ can change, since the same $E$ can be padded with two different pairs of matrices in $\urbig$ (``upper right'') and $\lrbig$ (``lower right'') to make two different $A$'s. The zero matrix in $\llbig$ is necessary to make sure that $\ulbig$ is the largest invariant subspace; thus, all rotating points of $\A$ are the same as those of $\E$. In addition, $\A$ needs to be a legitimate channel, i.e., satisfy eq. (\ref{eq:cptp}). Writing out the $A^{\ell}$'s in blocks {[}as in eq. (\ref{eq:bl}){]} yields the conditions\begin{subequations}\label{eq:conds} \begin{align} \sum_{\ell}A_{\ul}^{\ell\dg}A_{\ul}^{\ell} & =\pp\label{eq:conds1}\\ \sum_{\ell}A_{\ul}^{\ell\dg}A_{\ur}^{\ell} & =0\label{eq:conds2}\\ \sum_{\ell}\left(A_{\ur}^{\ell\dg}A_{\ur}^{\ell}+A_{\lr}^{\ell\dg}A_{\lr}^{\ell}\right) & =\qq\,, \end{align} \end{subequations}where $\qq$ is the projection onto $\lrbig$ and $\pp=I-\qq$ is the projection onto $\ulbig$ (with $\tr\left\{ \pp\right\} \equiv D$). For each faithful channel $\E$, there are infinitely many possible extensions $\A$.
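The following sketch constructs one (assumed) such extension of the faithful qubit channel $\E=\{Z/\sqrt{2},X/\sqrt{2}\}$ by a one-dimensional decaying subspace and verifies conditions (\ref{eq:conds}):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
E_list = [Z / np.sqrt(2), X / np.sqrt(2)]           # faithful channel E
v = [np.array([[-0.5], [0.5]]), np.array([[0.5], [0.5]])]   # assumed A_ur blocks

A_list = []
for E_l, v_l in zip(E_list, v):
    A = np.zeros((3, 3), dtype=complex)
    A[:2, :2] = E_l          # A_ul = E_l
    A[:2, 2:] = v_l          # A_ur
    A_list.append(A)         # A_lr = 0; lower-left block stays zero

# Condition (conds2): sum_l A_ul^dg A_ur = 0 ...
assert np.allclose(sum(E_l.conj().T @ v_l for E_l, v_l in zip(E_list, v)), 0)
# ... and overall trace preservation (cptp) on the enlarged space:
assert np.allclose(sum(A.conj().T @ A for A in A_list), np.eye(3))
```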
Conversely, an arbitrary channel $\A$ either is a faithful channel or contains one. The remaining two completely positive maps associated with this decomposition of $\A$, $\{A_{\ur}^{\ell}\}$ and $\{A_{\lr}^{\ell}\}$, are both trace-decreasing. Now let us develop the required notation. Just like $\pp$ and $\qq$ split the Hilbert space into two parts, they can be used to split the space of operators on a Hilbert space into four ``corners'' $\{\ulbig,\urbig,\llbig,\lrbig\}$ \cite{ABFJ}. Each of the four corners corresponds to its own superoperator projection. For example, \begin{equation} \R_{\ur}(O)\equiv\pp O\qq\equiv O_{\ur} \end{equation} for any operator $O$. The other three projections are defined accordingly. One can graphically determine which corner a product of operators belongs to by multiplying their blocks as matrices (e.g., $A_{\ll}B_{\ur}\in\lrbig$). Moreover, the four-corners projections add graphically ($\R_{\ul}+\R_{\lr}\equiv\R_{\di}$) and are Hermitian ($\R_{\emp}^{\dgt}=\R_{\emp}$). Analogous to studying operators in terms of their matrix elements, one can study superoperators in terms of their four-corners decomposition. For example, \begin{equation} \R_{\ul}\A\R_{\lr}(\r)=\pp\A\left(\qq\r\qq\right)\pp=\sum_{\ell}A_{\ur}^{\ell}\r_{\lr}(A_{\ur}^{\ell})^{\dg}\label{eq:tr} \end{equation} is the map $\{A_{\ur}^{\ell}\}$ which transfers $\r_{\lr}$ from $\lrbig$ to $\ulbig$. ``Diagonal'' elements are denoted as $\A_{\emp}\equiv\R_{\emp}\A\R_{\emp}$ for convenience, so the faithful channel $\E\equiv\R_{\ul}\A\R_{\ul}$ and similarly $\{A_{\lr}^{\ell}\}\equiv\R_{\lr}\A\R_{\lr}$. With conditions (\ref{eq:bl}) and (\ref{eq:conds}), $\A$ contains a decaying subspace of dimension $\tr\left\{ \qq\right\} $ and the same rotating points as $\E$. But what about the conserved quantities? Those are not the same because, by trace preservation, they need to make sure that all state populations (and sometimes some coherences) in $\lrbig$ are transferred to $\ulbig$. 
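The four-corners projections and the transfer map (\ref{eq:tr}) can be realized concretely (same assumed qutrit extension as above, with $A_{\lr}^{\ell}=0$); the final check confirms that populations in $\lrbig$ are indeed transferred to $\ulbig$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
A0 = np.zeros((3, 3), dtype=complex); A0[:2, :2] = Z / np.sqrt(2)
A1 = np.zeros((3, 3), dtype=complex); A1[:2, :2] = X / np.sqrt(2)
A0[:2, 2] = [-0.5, 0.5]; A1[:2, 2] = [0.5, 0.5]    # A_ur blocks, A_lr = 0
kraus = [A0, A1]

P = np.diag([1.0, 1.0, 0.0]); Q = np.eye(3) - P    # projections onto ul / lr
rng = np.random.default_rng(2)
r = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Transfer map, eq. (tr): R_ul A R_lr (r) = sum_l A_ur^l r_lr A_ur^l-dagger.
lhs = P @ sum(A @ (Q @ r @ Q) @ A.conj().T for A in kraus) @ P
rhs = sum((P @ A @ Q) @ r @ (P @ A @ Q).conj().T for A in kraus)
assert np.allclose(lhs, rhs)

# A population in the decaying subspace is transferred to ul without loss:
err = np.diag([0, 0, 1.0 + 0j])
out = sum(A @ err @ A.conj().T for A in kraus)
assert np.isclose(np.trace(out), 1) and np.allclose(Q @ out, 0)
```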
For example, the identity is (always) a conserved quantity of $\A$, but the analogous conserved quantity of $\E$ is $\pp$. Denoting the conserved quantities of $\E$ as $J_{\ul}$, the notation above makes it easy to write out the conserved quantities $J$ of the extended channel $\A$. \begin{numtheorem}{\hyperref[prop:3]{Proposition 3.}} The conserved quantities of $\A$ corresponding to eigenvalues $e^{i\la}$ are \begin{equation} J=J_{\ul}+J_{\lr}=J_{\ul}-(\A_{\lr}^{\dgt}-e^{-i\la})^{-1}\A^{\dgt}(J_{\ul})\,,\label{eq:main} \end{equation} where $J_{\ul}$ are conserved quantities of $\A_{\ul}=\E$. \end{numtheorem} An important corollary of the above proposition is that $J_{\of}=0$. Plugging this formula for $J$ into $\ppp$ (\ref{eq:ap}) then shows that the asymptotic projection has only two pieces: \begin{equation} \ppp=\R_{\ul}\ppp\R_{\di}\equiv\ps+\ppp\R_{\lr}\,,\label{eq:ppp} \end{equation} where the \textit{faithful projection} (for semigroups, minimal projection \cite{ABFJ}) \begin{equation} \ps(\cdot)\equiv\ppp\R_{\ul}(\cdot)=\sum_{\la,\m}\St_{\la\m}\tr\{J_{\ul}^{\la\m\dg}\cdot\} \end{equation} is the asymptotic projection of the faithful channel $\E$. The piece $\ps$ is responsible for preserving the parts of an initial state $\r$ which lie in $\ulbig$, while the piece $\ppp\R_{\lr}$ is a channel mapping states from $\lrbig$ onto the subspaces spanned by the rotating points of $\A$, all located in $\ulbig$. The key result here is that the rotation induced by $\la$, besides inducing phases on the rotating points, also contributes to the decay of information from $\lrbig$ into $\ulbig$.
Namely, the inverse of the piece $(\A^{\dgt}-e^{-i\la})_{\lr}$ modulates the decoherence induced during the decay in a way that depends on how close the eigenvalues of $\A_{\lr}$ are to the phases $e^{i\la}$: \begin{equation} \ppp\R_{\lr}(\r)=-\sum_{\la,\m}\st_{\la\m}\tr\left\{ J_{\ul}^{\la\m\dg}\left[\A(\A-e^{i\la})_{\lr}^{-1}\right](\r_{\lr})\right\} \,,\label{eq:main-asymptotic-projection} \end{equation} where the superoperator in square brackets acts on $\r_{\lr}$. The $\la=0$ case reduces to known results (\cite{robin}, Lemma 5.8; \cite{Cirillo2015}, Prop. 7), \begin{equation} \ppp\R_{\lr}=\ps\A\left(\id-\A\right)_{\lr}^{-1}\,, \end{equation} where $(\id-\A)_{\lr}^{-1}$ (with $\id$ the superoperator identity) can be thought of as the quantum version of the fundamental matrix from classical Markov chains \cite{markov_book}. These formulas also reduce to the Lindbladian result (\cite{ABFJ}, Prop. 3) if we let $\A=e^{\L}\rightarrow\id+\L$ for some Lindbladian $\L$ and $e^{-i\la}\rightarrow1-i\la$. In the Lindblad case, some dependence on $\la$ can be canceled by properly tuning $\L_{\lr}$ (\cite{thesis}, Sec. 3.2.3). \section{Application: information-preserving structures\label{sec:Application:-information-preserv}} This section lists some uses of the above result and includes an algorithm that outputs a properly organized $\ppp$ given a channel $\A$. \subsection{Asymptotic probabilities} Expounding on the above, eq. (\ref{eq:main-asymptotic-projection}) allows us to find the asymptotic \cite{Cirillo2015} (also, reachability \cite{ying2013}) probabilities of a given initial state $\r$ to reach a particular subspace of $\ulbig$. The new result here is the determination of the \textit{coherences} reached by $\r$, assuming knowledge of the left ($J_{\ul}^{\la\m}$) and right ($\St_{\la\m}$) rotating points of $\E$. To show this, recall that the $\St_{\la\m}$'s can be made orthonormal, $\tr\{\St_{\la\m}^{\dg}\St_{\varTheta\n}\}=\d_{\la\varTheta}\d_{\m\n}$.
(Loosely speaking, this is because the $\St$'s are a matrix basis used to write all asymptotic density matrices and so must be well-behaved; for more rigor, see Sec. \ref{subsec:Algorithm-for-finding}.) To determine the coefficient in front of the basis element $\St_{\la\m}$ in the asymptotic state $\rout=\ppp(\r)$, instead of applying $\A$ a sufficiently large number of times to determine $\ppp$, simply calculate \begin{equation} \tr\{\St_{\la\m}^{\dg}\rout\}=\tr\left\{ J_{\ul}^{\la\m\dg}\left[\id-\A(\A-e^{i\la})_{\lr}^{-1}\right](\r)\right\} \,. \end{equation} \subsection{Error-correction of a decoherence-free subspace} Assume now that all of $\ulbig$ consists of rotating or fixed points, so $\A_{\ul}=\E$ is a unitary channel. An example of this case is $\A_{\ul}=\left\{ E\right\} $, where $E=\text{diag}\{1,e^{i\t}\}$ is the Kraus operator mentioned before. The necessary and sufficient condition on the $A$'s for this to hold is \begin{equation} A_{\ul}^{\ell}=a_{\ell}U\label{eq:dfs} \end{equation} for some unitary $U$ and coefficients $a_{\ell}$ with $\sum_{\ell}|a_{\ell}|^{2}=1$, the latter to satisfy condition (\ref{eq:conds1}). Since there is no decay in $\ulbig$, that portion forms a \textit{decoherence-free subspace} (DFS) \cite{Lidar1998} and $\ps=\R_{\ul}$. The form of $A_{\ul}$ also implies that $\R_{\ul}\A\R_{\of}=0$ and, by Prop. \ref{prop:1}, the rotating points reduce to outer products of eigenstates of $U$. The form (\ref{eq:bl}) of $A$ with the above restriction on $A_{\ul}$ generalizes the previous DFS condition from eq. (11) of Ref. \cite{lidar2003} (see also Refs.~\cite{Karasik2008,Kamizawa2018} for different formulations). The difference is that now $A_{\ur}$ does not have to be zero, so information from $\lrbig$ flows into the DFS $\ulbig$.
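A minimal sketch of the DFS condition (\ref{eq:dfs}), with an assumed pair of coefficients: regardless of the $a_{\ell}$, the action on the subspace is the single unitary $U$:

```python
import numpy as np

theta = 1.1
U = np.diag([1.0, np.exp(1j * theta)])        # unitary acting on the DFS
a = [np.sqrt(0.3), np.sqrt(0.7)]              # assumed coefficients, sum |a|^2 = 1
kraus_ul = [al * U for al in a]
assert np.allclose(sum(K.conj().T @ K for K in kraus_ul), np.eye(2))

rng = np.random.default_rng(1)
r = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
out = sum(K @ r @ K.conj().T for K in kraus_ul)
assert np.allclose(out, U @ r @ U.conj().T)   # evolution is purely unitary
```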
For example, in quantum error-correction, $\ulbig$ is the logical subspace, $\lrbig$ is the orthogonal error subspace, and the piece $\ppp\R_{\lr}$ plays the role of a ``recovery channel'' which attempts to recover the leaked information after an error \cite{Ippoliti2014}. It turns out that one can remove the inverse term from $\ppp\R_{\lr}$, putting the piece in Kraus form. Setting $A_{\lr}=0$ and $A_{\ul}=\pp$ (unitary evolution within DFS is trivial) eliminates $\A_{\lr}$ and reduces $\ppp\R_{\lr}$ to the transfer map (\ref{eq:tr}), \begin{equation} \ppp\R_{\lr}=\R_{\ul}\A\R_{\lr}\,,\label{eq:arb} \end{equation} with Kraus operators $A_{\ur}$. Condition (\ref{eq:conds2}) on $A_{\ur}$ reduces to $\sum_{\ell}A_{\ur}^{\ell}=0$, which is automatically satisfied by the set of operators $\{\pm A_{\ur}^{\ell}/\sqrt{2}\}$. However, the channel created by those operators is the same as $\{A_{\ur}^{\ell}\}$, so $\ppp\R_{\lr}$ embeds an arbitrary recovery channel from the error subspace $\lrbig$ to the code subspace $\ulbig$. \subsection{How to find $\protect\ppp$\label{subsec:Algorithm-for-finding}} In a more complicated case than a DFS, $\ulbig$ is factorized into a DFS and an auxiliary subspace, forming a \textit{noiseless subsystem (NS)} \cite{Knill2000}. Evolution on the DFS is still unitary while the auxiliary subspace contains one fixed point and no rotating points. The Kraus operators for $\E=\A_{\ul}$ are then $A_{\ul}^{\ell}=U\ot B^{\ell}$, where $U$ acts on the DFS and $B^{\ell}$ are Kraus operators on the auxiliary space. This reduces to the DFS case (\ref{eq:dfs}) if the dimension of the auxiliary space is one. In the most general case, the rotating and fixed points of $\E$ can be block-diagonalized into a direct sum of blocks, with each block being an NS \cite{robin,Lindblad1999,BlumeKohout2008,baumr,Carbone2015}.
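The NS structure $A_{\ul}^{\ell}=U\ot B^{\ell}$ can be illustrated numerically (a sketch; the values of $\t$ and the depolarizing strength $p$ are assumed, with the depolarizing channel serving as an example of a primitive auxiliary channel):

```python
import numpy as np

theta, p = 0.9, 0.3                            # assumed example values
U = np.diag([1.0, np.exp(1j * theta)])         # unitary on the 2-dim DFS
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
# Depolarizing Kraus operators on the auxiliary space (unique fixed pt I/2):
B = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * M for M in (X, Y, Z)]
kraus = [np.kron(U, Bl) for Bl in B]           # A_ul = U (x) B^l
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(4))

def chan(r):
    return sum(K @ r @ K.conj().T for K in kraus)

# Rotating point e_{01} (x) I/2 (matrix unit on DFS, aux fixed point):
e01 = np.zeros((2, 2), dtype=complex); e01[0, 1] = 1
rot = np.kron(e01, I2 / 2)
assert np.allclose(chan(rot), np.exp(-1j * theta) * rot)

# Its biorthogonal conserved quantity is e_{01} (x) I (identity on aux):
J = np.kron(e01, I2)
adjJ = sum(K.conj().T @ J @ K for K in kraus)
assert np.allclose(adjJ, np.exp(1j * theta) * J)
```

The pair satisfies $\tr\{J^{\dg}\St\}=1$, matching the relation between conserved quantities and rotating points via the auxiliary fixed point.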
In this most general case, the Kraus operators can be written as \begin{equation} A_{\ul}^{\ell}=\bigoplus_{\varkappa}U_{\varkappa}\otimes B^{\ell,\varkappa}\,,\label{eq:decomp} \end{equation} where $U_{\varkappa}$ is unitary and the Kraus map $\{B^{\ell,\varkappa}\}_{\ell}$ for each $\varkappa$ is primitive (see Table \ref{tab:Some-types-of-channels}). This blocks-of-factors structure or \textit{shape} of $A_{\ul}^{\ell}$ is the most general form of an information-preserving structure \cite{robin} and has deep connections to the theory of matrix algebras \cite{wolf2010}. The key to organizing the rotating points and conserved quantities is converting to a \textit{canonical basis} \textemdash{} a basis which respects the above block structure. In such a basis (utilizing the block index $\varkappa$), rotating points are of the form $\St_{\la\m}^{\varkappa}=e_{\m}^{\varkappa}\ot\varrho^{\varkappa}$ (where $\m$ is now used to label the matrix units $e_{\m}^{\varkappa}$ of the space of $U_{\varkappa}$ and $\varrho^{\varkappa}$ is the unique fixed point of $\{B^{\ell,\varkappa}\}_{\ell}$) while their dual conserved quantities are $J_{\ul}^{\varkappa\la\m}=e_{\m}^{\varkappa}\ot P^{\varkappa}$ (where $P^{\varkappa}$ is the identity on the auxiliary subspace). Thus, conserved quantities in each block are related to rotating points via a division by (i.e., inversion of all nonzero eigenvalues of) the auxiliary fixed point, $J_{\ul}^{\varkappa\la\m}=\St_{\la\m}^{\varkappa}(\varrho^{\varkappa})^{-1}$ \cite{Novotny2012,Novotny2017}. It is well-known among experts that $\{J_{\ul}^{\varkappa\la\m}\}$ form a \textit{matrix algebra} \textemdash{} a vector space (where the vectors are matrices) that is closed under multiplication and the conjugate transpose operation. It is important to keep in mind that all of this extra structure in $\ulbig$ does not put any constraints on the remaining parts $\{A_{\ur},A_{\lr}\}$ of $\A$, the extension of $\E$; this is why it was avoided until now.
Moreover, $\{J^{\varkappa\la\m}\}$ do \textit{not} have to form a matrix algebra. There exist several algorithms to determine the shape (\ref{eq:decomp}) of $\A$ \cite{robin,Holbrook2003,Choi2006,Knill2006,Maehara2010,Wang2013,Guan2018}. A straightforward way \cite{robin} to find the form (\ref{eq:decomp}) for a general channel $\A$ is to diagonalize $\A$ and apply standard matrix algebra techniques \cite{Holbrook2003,Maehara2010} to find a canonical basis for the algebra of conserved quantities in $\ulbig$. Using Prop. \ref{prop:3}, I slightly extend the algorithm from Ref. \cite{robin} to one that finds and organizes not just the conserved quantities restricted to $\ulbig$, but the full conserved quantities as well. Once again, the main new inclusion is the determination of conserved quantities whose eigenvalue has modulus one (as opposed to exactly one). \begin{lyxalgorithm*} Finding and organizing $\ppp$: \begin{enumerate} \item Find the rotating points $\St$ and conserved quantities $J$ by diagonalizing $\A$. \item Construct $\ppp$ and $\pp$, the projection onto $\textnormal{range}\{\ppp(I)\}$. \item Find the projected conserved quantities $J_{\ul}\equiv\pp J\pp$. \item Decompose the algebra spanned by $J_{\ul}$ into canonical form using, e.g., Refs.~\cite{Holbrook2003,Maehara2010}. \item Determine a canonical basis $\St_{\la\m}^{\varkappa}$ for the rotating points and $J_{\ul}^{\varkappa\la\m}$ for the conserved quantities. \item Extend $J_{\ul}^{\varkappa\la\m}$ to $J^{\varkappa\la\m}$ via Prop. \ref{prop:3}. \end{enumerate} \end{lyxalgorithm*} Note that $\ulbig$ is the range of $\ppp(I)$, i.e., $\ppp(I)\propto\pp$, because $I$ is dual to the maximally mixed fixed point $\frac{1}{\tr\{\pp\}}\pp$ and is the only conserved quantity with nonzero trace. \section{Application: matrix product states} For those who skimmed Secs.
\ref{sec:Structure-of-conserved}-\ref{sec:Application:-information-preserv}, those parts focused on the distinction between a channel $\A$ and its corresponding faithful channel $\E\equiv\R_{\ul}\A\R_{\ul}$ \textemdash{} $\A$ restricted to the largest invariant subspace $\ulbig$ (equivalently, the range of $\A$'s maximal-rank fixed point). The block $\lrbig$ thus forms a decaying subspace, but the asymptotic projection $\ppp$ (\ref{eq:ap}) of $\A=\{A^{\ell}\}$ nevertheless retains information from states in $\lrbig$ by transferring it into $\ulbig$ through the operators $A_{\ur}^{\ell}$. Here, this decomposition is applied to matrix product states (MPS) in order to obtain an unambiguous thermodynamic limit for any MPS that is translationally invariant in the bulk, but has non-trivial boundary effects. Then, I show how one can absorb any dependence of said limit on the decaying parts $\lrbig$ of the bond degrees of freedom into the boundary conditions. This allows one to shorten the bond dimension and use the transfer matrix $\A_{\ul}=\E$ instead of the full $\A$. \subsection{What are MPS?} Our playground is now a one-dimensional lattice consisting of $2M+1$ spins. Each spin is $d$-dimensional and indexed by the physical index $\ell$. An MPS $|\P\ket$ that is translationally invariant in the bulk of the lattice can be written as \begin{equation} |\P_{\A}^{\{B\}}\ket\propto\sum_{\ell_{-M},\cdots,\ell_{M}=0}^{d-1}\tr\{BA^{\ell_{-M}}\cdots A^{\ell_{M}}\}|\ell_{-M}\cdots\ell_{M}\ket\,,\label{eq:mps} \end{equation} where $A^{\ell}$ is a $d$-dimensional vector of $N\times N$ matrices (for some \textit{bond dimension} $N$) and $B$ is an $N\times N$ matrix quantifying the boundary conditions. The bond dimension determines the degree of entanglement of the spins, with $N=1$ corresponding to a separable state.
Physically meaningful boundaries are either $B=I$ (the identity) for translationally invariant MPSs or $B=|r\ket\bra l|$ for some states $|r\ket,|l\ket$ quantifying the effect of the boundary on the right and left ends of the chain. By performing similarity transformations on the $A$'s, all MPSs can be put into a canonical form \cite{Perez-Garcia2006,Cirac}, in which the $A$'s satisfy eq. (\ref{eq:cptp}) and therefore form a Kraus map $\A\equiv\{A^{\ell}\}_{\ell=0}^{d-1}$. This map is usually called a \textit{transfer channel} (also, double tensor \cite{Zeng2015}), and it appears when one of the lattice sites from eq. (\ref{eq:mps}) is traced out. Continuing to trace out more sites while also taking the thermodynamic limit of the MPS ($M\rightarrow\infty$), one can obtain the normalization of the state: \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|\P_{\A}^{\{B\}}\ket=\lim_{M\rightarrow\infty}\Tr\{\A^{\a(2M+1)}\B\}=\Tr\{\ppp\B\}\,, \end{equation} where $\B\equiv B\ot B^{\star}$, the trace is over superoperator space, and $\a$ is the parameter that eliminates phases stemming from rotating points. The addition of $\a$, which physically is equivalent to blocking sites of the MPS and taking the limit of blocks, allows one to define an unambiguous and non-pathological thermodynamic limit for general boundary conditions. As an example, for periodic boundary conditions $B=I$ and faithful channels $\E$ containing rotating points $\varPsi_{\la=\frac{2\pi}{N}n}$ and conserved quantities $J^{\la=\frac{2\pi}{N}n}$ satisfying $\tr\{J^{\la\dg}\varPsi_{\la^{\prime}}\}=\d_{\la,\la^{\prime}}$, the normalization is \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\E}^{\{I\}}|\P_{\E}^{\{I\}}\ket=\lim_{M\rightarrow\infty}\sum_{n=0}^{N-1}e^{i\frac{2\pi}{N}\a(2M+1)n}\,. \end{equation} Picking $\a=1$ yields zero whenever $2M+1$ is not a multiple of $N$ \{e.g., \cite{Haegeman2014}, Eq. (130)\}.
In contrast, setting $\a=N$ gives $N$ (as was also noticed recently in Ref. \cite{Cirac}). A similar equation occurs if one wants to evaluate observables in the thermodynamic limit (see below). In this way, the transfer channel and boundaries determine the properties of the MPS in the thermodynamic limit. One can also use $\B$ to get rid of any undesired components of $\R_{\A}$ \cite{Ueda2011}. Note that $|\Psi_{\R_{\A}}^{\{B\}}\ket$ is also the fixed-point MPS that $|\P_{\A}^{\{B\}}\ket$ flows to under RG transformations \cite{Verstraete2005,Wei2010,Cirac2017}, and \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|\P_{\A}^{\{B\}}\ket=\bra\Psi_{\R_{\A}}^{\{B\}}|\Psi_{\R_{\A}}^{\{B\}}\ket\,, \end{equation} so simplifying $\R_{\A}$ also yields insight into the structure of RG fixed points. \subsection{Boundary effects in the thermodynamic limit} The exact connection between quantum channels and MPSs has been well-studied for the case when the MPS is injective \textemdash{} when its corresponding transfer channel has only one fixed point and no rotating points. Since the results from the previous section are exactly about cases where there are arbitrary numbers of fixed and rotating points, here we will quantify the connection between asymptotics of quantum channels and the thermodynamic limit of non-injective MPS. The approach is somewhat the reverse of what has been done before (see Sec. 3.2.2 of \cite{Perez-Garcia2006}): instead of first considering a general MPS, I consider a general channel $\A$ and simplify its corresponding MPS in the thermodynamic limit by applying the above results about $\A$'s structure. Since applying identical transformations $U$ to each site is the same as changing basis for the Kraus operators of $\A$, \begin{equation} A^{\ell}\rightarrow\sum_{\ell^{\prime}}U_{\ell\ell^{\prime}}A^{\ell^{\prime}}\,, \end{equation} more technically this is a study of sets of MPS related by local unitaries.
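The role of $\a$ in the normalization can be checked on a minimal period-2 example (an assumed toy transfer channel with the single Kraus operator $X$, whose rotating points $Y$ and $Z$ have $\la=\pi$):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.kron(X, X.conj())                    # transfer superoperator of {X}
B = np.eye(2, dtype=complex)                # periodic boundary conditions
Bsup = np.kron(B, B.conj())                 # B (x) B^* as a superoperator

M = 2                                       # a chain of 2M + 1 = 5 sites
norm_a1 = np.trace(np.linalg.matrix_power(S, 2 * M + 1) @ Bsup)
norm_a2 = np.trace(np.linalg.matrix_power(S, 2 * (2 * M + 1)) @ Bsup)
# a = 1 is pathological (odd powers of X are traceless), a = 2 is not.
```

Here the blocked limit gives $\Tr\{\ppp\B\}=4$, counting the four peripheral eigenoperators ($I$ and $X$ fixed; $Y$ and $Z$ rotating with $\la=\pi$), while the unblocked value oscillates between 0 and 4.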
Let us apply the four-corners decomposition to the MPS in order to determine which blocks are relevant in the thermodynamic limit. Assume now that $\A$ has a decaying subspace $\lrbig$, i.e., each $A^{\ell}=A_{\thd}^{\ell}$, and that powers of $\A$ eventually transfer the state completely into $\ulbig$. Recall that $A_{\ul}^{\ell}\equiv E_{\ell}$ and so the channel which determines the right fixed points is $\A_{\ul}\equiv\{E_{\ell}\}=\E$. After some algebra, the coefficient $\tr\{B(A^{\ell_{-M}}\cdots A^{\ell_{M}})\}$ in the MPS (\ref{eq:mps}) equals \begin{align} & \,\,\,\phantom{+}\tr\left\{ B_{\ul}(E_{\ell_{-M}}\cdots E_{\ell_{M}})\right\} +\tr\left\{ B_{\lr}(A_{\lr}^{\ell_{-M}}\cdots A_{\lr}^{\ell_{M}})\right\} \nonumber \\ & +{\displaystyle \sum_{m=-M}^{M}}\tr\left\{ B_{\ll}(E_{\ell_{-M}}\cdots E_{\ell_{m-1}})A_{\ur}^{\ell_{m}}(A_{\lr}^{\ell_{m+1}}\cdots A_{\lr}^{\ell_{M}})\right\} \,.\label{eq:mps2} \end{align} The first term corresponds to the usual MPS $|\P_{\E}^{\{B\}}\ket$ whose transfer matrix $\E$ is faithful. The second term vanishes in the thermodynamic limit because its corresponding transfer matrix does not have any fixed points. When $B_{\ll}\neq0$, the third term is present and has the form of a translationally-invariant domain wall excitation. Therefore, the decaying subspace $\lrbig$ corresponds to extra degrees of freedom on each site which house such an excitation. This excitation is never present for periodic boundary conditions ($B=I$), allowing one to straightforwardly derive a standard irreducible form for MPS with such boundary conditions in which the first and second terms are decomposed into smaller irreducible blocks \cite{Perez-Garcia2006,Cirac}. Let us continue to focus on ``twisted'' boundaries $B_{\ll}\neq0$.
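The decomposition (\ref{eq:mps2}) can be verified term by term on a short chain (a sketch; the qutrit Kraus operators, now with a nonzero $A_{\lr}$ block, and the random boundary matrix $B$ are assumed examples):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
a, c = 0.4, 0.6                            # assumed values with 4a^2 + c^2 = 1
A0 = np.zeros((3, 3), dtype=complex); A0[:2, :2] = Z / np.sqrt(2)
A1 = np.zeros((3, 3), dtype=complex); A1[:2, :2] = X / np.sqrt(2)
A0[:2, 2] = [-a, a]; A1[:2, 2] = [a, a]    # A_ur blocks
A0[2, 2] = c                               # nonzero A_lr block
A = [A0, A1]
assert np.allclose(sum(K.conj().T @ K for K in A), np.eye(3))  # trace pres.

P = np.diag([1.0, 1.0, 0.0]); Q = np.eye(3) - P
rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
word = [0, 1, 1, 0, 1]                     # physical indices l_{-M} ... l_M

def seg(ops, idx):                         # ordered product over a sub-word
    out = np.eye(3, dtype=complex)
    for i in idx:
        out = out @ ops[i]
    return out

direct = np.trace(B @ seg(A, word))
E = [P @ K @ P for K in A]                 # embedded A_ul blocks
Aur = [P @ K @ Q for K in A]
Alr = [Q @ K @ Q for K in A]
t1 = np.trace((P @ B @ P) @ seg(E, word))
t2 = np.trace((Q @ B @ Q) @ seg(Alr, word))
t3 = sum(np.trace((Q @ B @ P) @ seg(E, word[:m]) @ Aur[word[m]]
                  @ seg(Alr, word[m + 1:])) for m in range(len(word)))
assert np.allclose(direct, t1 + t2 + t3)
```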
The main result is that, in the thermodynamic limit, contributions from extra degrees of freedom corresponding to $\lrbig$ can equivalently be described by considering only $\A_{\ul}=\E$, but given a \textit{mixture} of MPS having different boundary conditions. The calculation culminates with Eq. (\ref{eq:result-boundary-conditions-mps}): in the thermodynamic limit, expectation values of local observables with an MPS $|\P_{\A}^{\{B\}}\ket$ can be equivalently calculated from expectation values with the MPS $\{|\P_{\E}^{\{B_{k}\}}\ket\}_{k=0}^{K}$, where $K>1$ and $B_{k}$ are distinct boundary conditions dependent on $\R_{\A}\R_{\lr}$ and $B$. Let us evaluate the expectation value of an observable $O$ on a site in the thermodynamic limit. The number of lattice sites between the site supporting $O$ and either boundary is taken to infinity, and $\a$ is used to remove any phases occurring due to rotating points {[}see eq. (\ref{eq:asproj}){]}. This allows one to simplify a previous form of such a limit, eq. (133) of Ref. \cite{Haegeman2014}, and remove any convergence issues arising from such phases. After some algebra, \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\Tr\{\R_{\A}\O\R_{\A}\B\}\equiv\Tr\{\O_{\A}\B\}\,, \end{equation} where the corresponding superoperator is \begin{equation} \O\equiv\sum_{k,\ell=0}^{d-1}\bra\ell|O|k\ket A^{k}\ot A^{\ell\star}\,. \end{equation} To finish the calculation, decompose $\ppp$ using eq. (\ref{eq:ppp}) and $\O$ using the block form of $A^{\ell}$ (\ref{eq:bl}), yielding $\O\R_{\ul}=\R_{\ul}\O\R_{\ul}$ and correspondingly \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\Tr\{\O_{\E}(\B_{\ul}+\ppp\R_{\lr}\B\R_{\ul})\}\,,\label{eq:mps-observable} \end{equation} where $\O_{\E}\equiv\ps\O\ps$ and $\B_{\ul}\equiv\R_{\ul}\B\R_{\ul}$.
The $\B_{\ul}$ term is the standard contribution of boundary effects located in $\ulbig$ and corresponds to the first term in the form of the MPS (\ref{eq:mps2}). By contrast, the only piece of $\B$ contributing to the second term in Eq. (\ref{eq:mps-observable}) is $\R_{\lr}\B\R_{\ul}=B_{\ll}\ot B_{\ll}^{\star}$, corresponding to the \textit{third} term in the form of the MPS (\ref{eq:mps2}). As a sanity check, taking periodic boundary conditions ($B=I=I_{\di}$) yields $\R_{\lr}\B\R_{\ul}=0$ and so only the first term in Eq. (\ref{eq:mps-observable}) remains. In general, the domain-wall-like excitations from the third term in Eq. (\ref{eq:mps2}) combined with ``twisted'' boundary conditions $B_{\ll}\neq0$ \textit{can} contribute to the thermodynamic limit of the MPS. \subsection{Absorbing boundary effects} One can interpret the contribution of $\lrbig$ in a different way by thinking of both terms from Eq. (\ref{eq:mps-observable}) as coming from the effective boundary on $\ulbig$, \begin{equation} \overline{\B}\equiv\B_{\ul}+\ppp\R_{\lr}\B\R_{\ul}=\overline{\B}_{\ul}\,. \end{equation} Since $\R_{\A}\R_{\lr}$ is a channel from $\lrbig$ to a subspace of $\ulbig$, one can decompose it in terms of some Kraus operators $F^{k}=F_{\ur}^{k}$: $\R_{\A}\R_{\lr}=\sum_{k=1}^{K}F^{k}\otimes F^{k\star}$. (These Kraus operators are of course related to the rotating points $\St_{\la\m}$ and the $\lrbig$ pieces of conserved quantities $J_{\lr}^{\la\m}$ from the previous section.) The rank $K$ is bounded by $\min\{\dim\ulbig,\dim\lrbig\}$, so it is independent of the system size $M$. This shows that the effects of $\lrbig$ can just as well be simulated by a \textit{mixture} of the effective boundary condition $B_{\ul}$ with those from the set $\{F_{\ur}^{k}B_{\ll}\}_{k=1}^{K}$, \begin{equation} \overline{\B}=\sum_{k=0}^{K}\B_{k}\equiv\sum_{k=0}^{K}B_{k}\otimes B_{k}^{\star}\,, \end{equation} where $B_{0}=B_{\ul}$ and $B_{k>0}=F_{\ur}^{k}B_{\ll}$.
Plugging in the above form for $\overline{\B}$ into Eq. (\ref{eq:mps-observable}), \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\Tr\{\O_{\E}\overline{\B}\}=\sum_{k=0}^{K}\Tr\{\O_{\E}\B_{k}\}\,. \end{equation} Working backwards, each term in the sum over $k$ corresponds to the thermodynamic limit of the MPS $|\P_{\E}^{\{B_{k}\}}\ket$: \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\sum_{k=0}^{K}\lim_{M\rightarrow\infty}\bra\P_{\E}^{\{B_{k}\}}|O|\P_{\E}^{\{B_{k}\}}\ket\,.\label{eq:result-boundary-conditions-mps} \end{equation} Therefore, when calculating expectation values of local observables, one can drop $\lrbig$ as long as one includes a \textit{mixture} of MPS with different boundary conditions. The same holds for two observables $O^{(1)}$ and $O^{(2)}$ (with corresponding superoperators $\O^{(1)}$ and $\O^{(2)}$) separated by some number of sites $W$. One first writes \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(1)}O^{(2)}|\P_{\A}^{\{B\}}\ket=\tr\left\{ \R_{\A}\O^{(1)}\A^{W}\O^{(2)}\overline{\B}\right\} \,, \end{equation} and then takes the $W\rightarrow\infty$ limit by blocking sites in order to remove any phases from rotating points. This yields \begin{equation} \lim_{M,W\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(1)}O^{(2)}|\P_{\A}^{\{B\}}\ket=\tr\left\{ \O_{\E}^{(1)}\O_{\E}^{(2)}\overline{\B}\right\} \,, \end{equation} where $\O_{\E}^{(i)}=\R_{\E}\O^{(i)}\R_{\E}$. Similarly, consider an observable touching the left boundary: \begin{align} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(L)}|\P_{\A}^{\{B\}}\ket & =\Tr\{\O^{(L)}\R_{\A}\B\}=\Tr\{\O_{\E}^{(L)}\overline{\B}\}\,.\label{eq:leftside} \end{align} Somewhat surprisingly, an observable touching the right boundary produces something completely different: \begin{equation} \lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(R)}|\P_{\A}^{\{B\}}\ket=\Tr\{\R_{\A}\O^{(R)}\B\}\,.
\end{equation} Notice how $\R_{\A}$ now comes before the observable {[}cf. the first equality of Eq. (\ref{eq:leftside}){]}, which results in a series of new terms stemming from combinations of $A_{\ur}^{\ell}$ and $A_{\lr}^{\ell}$ with $B$. Why is there an asymmetry between the two boundaries? This has to do with the fact that we had initially assumed an asymmetric form for our MPS, $A^{\ell}=A_{\thd}^{\ell}$. The domain-wall-type excitations represented by the third term in Eq. (\ref{eq:mps2}) are such that there is always an $A_{\lr}$ at the right-most site $M$. Since one has to block sites in order to have a valid thermodynamic limit, one might imagine that the effects of periodicities in the MPS (i.e., effects of the rotating points) are eliminated. This is not the case due to the presence of the eigenvalues $e^{i\la}$ in the piece $\R_{\A}\R_{\lr}$ (\ref{eq:main-asymptotic-projection}). This piece in turn affects the boundary conditions $\{B_{i}\}_{i}$ required to make sure Eq. (\ref{eq:result-boundary-conditions-mps}) is satisfied. Thus, MPS with rotating points retain some of their properties even in a thermodynamic limit which blocks sites. \section{Conclusion} An important property of quantum channels $\A$ is their asymptotics, i.e., their behavior in the limit of infinitely many applications, akin to the infinite-time limit of Lindbladians \cite{ABFJ}. An infinite product of $\A$ produces the channel's asymptotic projection $\ppp$ \textemdash{} a projection on all of the non-decaying eigenspaces of the channel (i.e., those whose eigenvalues have unit modulus). The superoperator $\ppp$ can be constructed out of the channel's left and right rotating points or, as they are called here, conserved quantities $J$ and steady-state basis elements $\St$. There has been much overlapping work quantifying such asymptotics, but a few gaps have remained in the literature when it comes to conserved quantities with eigenvalues other than one.
The aim of the first half of this work is to close those gaps in a simple and standalone fashion. I start off with two results about channels admitting a full-rank fixed point, which I call faithful. The first is that any $J$ commutes with a faithful channel's Kraus operators up to a phase. The second is that the eigenvalue of any diagonalizable $J$ of a faithful channel is an $N$th root of unity, where $N$ is bounded by the dimension of the channel's Hilbert space. A third result deals with determining the dependence of the asymptotic state on the initial state and on properties of $\A$. An analytical formula is derived that quantifies the dependence of the final state on initial states located in $\A$'s decaying eigenspaces (i.e., those whose eigenvalues are less than one in modulus). The aim of the second half of this work is to apply the third result above to matrix product states (MPS), where asymptotics come into play in the thermodynamic limit or in the limit of infinitely many renormalization transformations. In the same way that asymptotic states depend on initial states, the thermodynamic limit of MPS (whose transfer matrices admit more than one fixed point) depends on the boundary conditions. In such situations, the effects of any decaying bond degrees of freedom can be absorbed into the boundary conditions. Quantitatively, it is shown that the thermodynamic expectation value of a local operator $O$ with an MPS having transfer matrix $\A$ and boundary condition $B$ is equivalent to a sum of expectation values with MPS having $\A$ restricted to its largest invariant subspace and several different boundary conditions $\{B_{i}\}$ (\ref{eq:result-boundary-conditions-mps}).
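The first result above can be checked on a minimal example: a channel with a single unitary Kraus operator is faithful (it fixes the full-rank state $I/2$), and the operator $J=|0\rangle\langle1|$ is a conserved quantity with unimodular eigenvalue $i$, commuting with the Kraus operator up to exactly that phase. This is a toy illustration only, not the general proof:

```python
import numpy as np

# a channel with the single unitary Kraus operator U is faithful:
# it preserves the full-rank state I/2
U = np.diag([1.0, 1j])
J = np.array([[0, 1], [0, 0]], dtype=complex)   # candidate conserved quantity

# J is an eigenoperator of the adjoint channel J -> U† J U with the
# unimodular eigenvalue i (a 4th root of unity)
print(np.allclose(U.conj().T @ J @ U, 1j * J))  # True
# and J commutes with the Kraus operator up to exactly that phase
print(np.allclose(J @ U, 1j * (U @ J)))         # True
```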
Since two-dimensional analogues of MPS (often called ``PEPS'' \cite{Cirac2011}) and multiscale entanglement renormalization ansatz (MERA \cite{Giovannetti2008,Pfeifer2009}) states also correspond to a transfer channel, such techniques may be further generalized to study the dependence of PEPS on boundaries and the dependence of MERA hierarchies on their ``caps''. \begin{acknowledgments} Insightful discussions with L. Jiang, M. Fraas, N. Schuch, M. M. Wolf, D. Perez-Garcia, M. B. Sahinoglu, A. M. Turner, B. Bradlyn, F. Ticozzi, and X. Chen are acknowledged. This research was supported in part by the National Science Foundation (PHY-1748958) and the Walter Burke Institute for Theoretical Physics at Caltech. I thank KITP Santa Barbara for their hospitality as part of the Quantum Physics of Information workshop. \end{acknowledgments}
\section*{Introduction} In classical geometry, a homogeneous space is defined as the quotient of a Lie group by a closed subgroup. These spaces are important for pure mathematical studies as well as for their applications to physics; see \cite{zeghib} and \cite{arkani}. It is known that a manifold is a homogeneous space if a Lie group acts on it transitively \cite{Greub2}. \medskip The extension of these spaces to supergeometry has also been studied. For example, in \cite{carmelibook}, \cite{g-spaces} and \cite{qoutiontsuper}, the concepts of homogeneous superspaces, super Lie groups and their actions on supermanifolds are introduced via a sheaf-theoretic approach. In this paper, we show that the super Lie group $GL(m|n)$ (cf. section \ref{prelim}) acts transitively on the supergrassmannian $G_{k|l}(m|n)$ (cf. section \ref{supergrass}). Thus it is a nontrivial example of Balduzzi et al.'s homogeneous superspaces. There have been earlier attempts concerning the homogeneity of $G_{k|l}(m|n)$, but these works were limited to special cases or to infinitesimal actions of the groups (see \cite{valenzuela}). \medskip In the first section, we briefly recall all necessary basic concepts from \cite{carmelibook} and \cite{g-spaces}, giving examples wherever needed to clarify the concepts. \medskip In section 2, we study supergrassmannians extensively. Although this concept was introduced by Manin in \cite{ma1}, here, by developing an efficient formalism, we give a precise proof of the existence of these supermanifolds. \medskip In section 3, using a functor-of-points approach, an action of the super Lie group $GL(m|n)$ on the supergrassmannian $G_{k|l}(m|n)$ is defined by gluing local actions. Finally, it is shown that this action is transitive. \section{Preliminaries} \label{prelim} In this section we introduce the basic definitions and results concerning category theory, super Lie groups and the action of a super Lie group on a supermanifold.
For further details see \cite{carmelibook}. \medskip A supermanifold of dimension $p|q$ is a pair $(|M|,\mathcal{O}_M)$, where $|M|$ is a second countable and Hausdorff topological space, and $\mathcal{O}_M$ is a sheaf of $\mathbb Z_2$-graded algebras, locally isomorphic to $C^\infty(\mathbb R^p)\otimes \wedge \mathbb R^q$. A morphism between two supermanifolds $M=(|M|,\mathcal{O}_M)$ and $N=(|N|,\mathcal{O}_N)$ is a continuous map $|\psi|:|M|\rightarrow |N|$ together with a sheaf morphism $\psi^*:\mathcal{O}_N\rightarrow \mathcal{O}_M$ called the pullback. \medskip For an open subset $U\subset|M|$, by $J_M(U)$ we mean the set of nilpotent elements in $\mathcal{O}_M(U)$. This set is an ideal in $\mathcal{O}_M(U)$, so one has the quotient sheaf $\frac{\mathcal{O}_M}{J_M}$, and locally there exists a canonical isomorphism from this sheaf to the sheaf $C^{\infty}$ given by $s+J\mapsto \tilde{s}$. Thus $\tilde M:=(|M|,\frac{\mathcal{O}_M}{J_M})$ is a classical manifold, called the \textit{reduced manifold} associated to $M$. One can associate a reduced map $\widetilde\psi:\widetilde M\rightarrow \widetilde N$ to each morphism $\psi:M\rightarrow N$. By the evaluation of $s$ at $x\in U$, denoted by $ev_x(s)$, we mean $\tilde{s}(x)$. \medskip By a locally small category, we mean a category such that the collection of all morphisms between any two of its objects is a set. Let $X$, $Y$ be objects in a category and let $\alpha,\beta:X\rightarrow Y$ be morphisms between them. A universal pair $(E,\epsilon)$ is called an equalizer if the following diagram commutes: $$E\xrightarrow{\epsilon} X\overset{\alpha}{\underset{\beta}{\rightrightarrows}}Y$$ i.e., $\alpha \circ \epsilon=\beta \circ \epsilon$, and for each object $T$ and any morphism $\tau:T\rightarrow X$ satisfying $\alpha \circ \tau=\beta \circ \tau$, there exists a unique morphism $\sigma:T\rightarrow E$ such that $\epsilon \circ \sigma = \tau$. If an equalizer exists, then it is unique up to isomorphism.
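In the category of sets the universal property above is easy to realize concretely; a small computational sketch (the sets and the maps $\alpha,\beta$ are hypothetical toy choices):

```python
# equalizer of two parallel maps alpha, beta : X -> Y in SET
X = range(10)                      # a finite toy set
alpha = lambda x: x % 3
beta = lambda x: x % 6
E = [x for x in X if alpha(x) == beta(x)]
print(E)                           # [0, 1, 2, 6, 7, 8]

# universal property: any tau : T -> X with alpha∘tau == beta∘tau
# factors through the inclusion E -> X
T = [0, 2, 8]
tau = lambda t: t                  # a toy map whose image equalizes alpha, beta
assert all(alpha(tau(t)) == beta(tau(t)) for t in T)
assert all(tau(t) in E for t in T)  # the factorization sigma : T -> E is tau itself
```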
For example, in the category of sets, denoted by $\textbf{SET}$, the equalizer of two morphisms $\alpha,\beta:X\rightarrow Y$ is the set $E=\{x\in X \mid \alpha(x)=\beta(x)\}$ together with the inclusion map $\epsilon:E\hookrightarrow X$. \medskip Let $\mathcal{C}$ be a locally small category and $X$ an object in $\mathcal{C}$. By the $T$-points of $X$, we mean $X(T):=Hom_\mathcal{C}(T,X)$ for any $T\in Obj(\mathcal{C})$. The functor of points of $X$ is the functor denoted by $X(.)$ and defined as follows: $$\begin{matrix} X(.):\mathcal{C} \rightarrow \textbf{SET}\\ \qquad\quad S \mapsto X(S)\\ \\ X(.): Hom_{\mathcal{C}}(S,T)\rightarrow Hom_{\textbf{SET}}(X(T),X(S))\\ \varphi \mapsto X(\varphi),\end{matrix}$$\\ where $X(\varphi):f\mapsto f\circ \varphi.$ A functor $F:\mathcal{C} \to \textbf{SET}$ is called representable if there exists an object $X$ in $\mathcal{C}$ such that $F$ and $X(.)$ are isomorphic; in this case one says that $F$ is represented by $X$. The category of functors from $\mathcal{C}$ to $\textbf{SET}$ is denoted by $[\mathcal{C}, \textbf{SET}]$. The category of all representable functors from $\mathcal{C}$ to $\textbf{SET}$ is a subcategory of $[\mathcal{C}, \textbf{SET}]$. \medskip Corresponding to each morphism $\psi:X\rightarrow Y$, there exists a natural transformation $\psi(.)$ from $X(.)$ to $Y(.)$. This transformation assigns to each $T\in Obj(\mathcal{C})$ the mapping $\psi(T):X(T)\rightarrow Y(T)$ with $\xi\mapsto \psi\circ \xi$. Now set: $$\begin{matrix} \mathcal{Y}:\mathcal{C}\rightarrow [\mathcal C,\textbf{SET}]\\ X \mapsto X(.)\\ \psi \mapsto \psi(.). \end{matrix}$$ Obviously, $\mathcal{Y}$ is a covariant functor; it is called the \textit{\textbf{Yoneda embedding}}. \begin{lemma}\label{Yoneda} The Yoneda embedding is a full and faithful functor, i.e.
the map $$Hom_\mathcal{C}(X,Y)\longrightarrow Hom_{[\mathcal{C},\textbf{SET}]}(X(.),Y(.))$$ is a bijection for each $X,Y\in Obj(\mathcal{C})$. \end{lemma} \begin{proof} See \cite{carmelibook}. \end{proof} Thus, according to this lemma, $X,Y\in Obj(\mathcal{C})$ are isomorphic if and only if their functors of points are isomorphic. The Yoneda embedding is an equivalence between $\mathcal{C}$ and the subcategory of representable functors in $[\mathcal{C},\textbf{SET}]$; it is not an equivalence onto all of $[\mathcal{C},\textbf{SET}]$, since not all functors are representable. \begin{remark}\label{charttheorem} Consider the superdomain $\mathbb R^{p|q}$ and an arbitrary supermanifold $T$. Let $f_i\in\mathcal{O}(T)_0$, $1\leq i \leq p$, and $g_j\in\mathcal{O}(T)_1$, $1\leq j \leq q$, be even and odd elements, respectively. By Theorem 4.3.1 in \cite{vsv}, one can determine a unique morphism $\psi:T\rightarrow \mathbb R^{p|q}$ by setting $t_i\mapsto f_i$ and $\theta_j\mapsto g_j$, where $(t_i, \theta_j)$ is a global even and odd coordinate system on $\mathbb R^{p|q}$. Thus $\psi$ may be represented by $(f_i, g_j)$. \end{remark} Let $\textbf{SM}$ be the category of supermanifolds and their morphisms. Obviously, $\textbf{SM}$ is a locally small category and has finite products. In addition, it has a terminal object $\mathbb{R}^{0|0}$, that is, the constant sheaf $\mathbb{R}$ on a singleton $\{0\}$. \medskip Let $M=(|M|,\mathcal{O}_M)$ be a supermanifold and $p\in |M|$. There is a map $j_p=(|j_p|,{j_p}^*)$ where: $$\begin{matrix} |j_p|:\{0\} \rightarrow |M|\qquad\qquad & {j_p}^*:\mathcal{O}_M\rightarrow \mathbb{R} \\ & \qquad\qquad\qquad\quad g \mapsto \tilde g(p)=:ev_p(g).\end{matrix}$$ So, for each supermanifold $T$, one can define the morphism \begin{align}\hat{p}:T\rightarrow \mathbb{R}^{0|0}\xrightarrow{j_p}M\label{phat}\end{align} as the composition of $j_p$ and the unique morphism $T\rightarrow \mathbb{R}^{0|0}$. \medskip By a super Lie group, we mean a group object in the category $\textbf{SM}$.
More precisely, it is defined as follows: \begin{definition} A super Lie group $G$ is a supermanifold $G$ together with morphisms $\mu:G\times G \rightarrow G,\quad i:G \rightarrow G,\quad E:\mathbb{R}^{0|0} \rightarrow G$, called the multiplication, inverse and unit morphisms respectively, such that the following equations are satisfied \begin{align*} \mu \circ (\mu\times 1_G)&=\mu \circ (1_G\times \mu)\\ \mu \circ (1_G\times \hat{e})\circ \bigtriangleup_G&=1_G=\mu \circ (\hat{e}\times 1_G)\circ \bigtriangleup_G\\ \mu \circ (1_G\times i)\circ \bigtriangleup_G&=\hat{e}=\mu \circ (i\times 1_G)\circ \bigtriangleup_G \end{align*} where $1_G$ is the identity on $G$, $\hat{e}$ is the morphism (\ref{phat}) for the element $e\in |G|$, and $\Delta_G$ is the diagonal map on $G$. \end{definition} Note that there is a Lie group associated with each super Lie group. Indeed, let $G$ be a super Lie group, $\tilde{G}$ the reduced manifold associated to $G$, and $\tilde{\mu}, \tilde{i}, \tilde{E}$ the reduced morphisms associated to $\mu, i, E$, respectively. Since $G \mapsto \tilde{G}$ is a functor, $(\tilde{G}, \tilde{\mu}, \tilde{i}, \tilde{E})$ is a group object of the category of differentiable manifolds. \medskip \begin{remark}\label{remark2} One can show that any super Lie group $G$ induces a group structure on its $T$-points for any supermanifold $T$. This means that the functor $T\mapsto G(T)$ takes values in the category of groups. Moreover, for any other supermanifold $S$ and morphism $T\rightarrow S$, the corresponding map $G(S)\rightarrow G(T)$ is a homomorphism of groups. One can also define a super Lie group as a representable functor $T\mapsto G(T)$ from the category $\textbf{SM}$ to the category of groups.
If such a functor is represented by a supermanifold $G$, then the morphisms $\mu,i,E$ are obtained by Yoneda's lemma from the maps $\mu_T:G(T)\times G(T)\rightarrow G(T),\quad i_T:G(T)\rightarrow G(T)$ and $E_T:\mathbb{R}^{0|0}(T)\rightarrow G(T)$. \end{remark} \begin{example} Let $(t, \theta)$ be a global coordinate system on the superdomain $\mathbb{R}^{1|1}$. We use the language of the functor of points to define the multiplication $\mu:\mathbb{R}^{1|1}\times \mathbb{R}^{1|1}\rightarrow \mathbb{R}^{1|1}$. Let $T$ be an arbitrary supermanifold; we define: \begin{align*} \mu_T:\mathbb{R}^{1|1}(T)\times \mathbb{R}^{1|1}(T)& \longrightarrow \mathbb{R}^{1|1}(T) \\ (f,g),(f^{\prime},g^{\prime}) & \longmapsto (f+f^{\prime},g+g^{\prime}) \end{align*} where, according to Remark \ref{charttheorem}, there exists a unique morphism from $T$ to $\mathbb{R}^{1|1}$ corresponding to each of $(f,g),(f^{\prime},g^{\prime})$, with $f,f^{\prime}\in \mathcal{O}(T)_0$ and $g,g^{\prime}\in\mathcal{O}(T)_1.$ The set $\mathbb{R}^{1|1}(T)$ with $\mu_T$ is an ordinary group. Thus, by Remark \ref{remark2}, $\mathbb{R}^{1|1}$ is a super Lie group. \end{example} Analogously, one may show that the superdomain $\mathbb{R}^{p|q}$ is a super Lie group. \begin{example} Let $V$ be a finite-dimensional super vector space of dimension $m|n$, and let $\{R_1,\cdots,R_{m+n}\}$ be a homogeneous basis of $V$ whose first $m$ elements are even and whose last $n$ elements are odd. Consider the functor \begin{align*} F: & \textbf{SM} \rightarrow \textbf{Grp}\\ & T\mapsto Aut_{\mathcal O(T)}(\mathcal O(T)\otimes V)\end{align*} where $F$ maps each supermanifold $T$ to the group of even $\mathcal O(T)$-module automorphisms of $\mathcal O(T)\otimes V$, and $\textbf{Grp}$ is the category of groups. Consider the supermanifold $\textbf{End}(V)=\Big(End(V_{0})\times End(V_{1}),\mathcal{A}\Big)$ where $\mathcal{A}$ is the structure sheaf of $\textbf{End}(V)$.
We denote a basis of $\textbf{End}(V)$ by $\{F_{ij}\}$, where $F_{ij}$ is the even linear transformation on $V$ with $R_k\mapsto \delta_{ik}R_j$. If $\{f_{ij}\}$ is the corresponding dual basis, then it may be considered as global coordinates on $\textbf{End}(V)$. Let $X$ be the open subsupermanifold of $\textbf{End}(V)$ corresponding to the open set: $$|X|=GL(V_{0})\times GL(V_{1})\subset End(V_{0})\times End(V_{1})$$ Thus, we have $$X=\Big(GL(V_{0})\times GL(V_{1}),\mathcal{A}|_{GL(V_{0})\times GL(V_{1})}\Big).$$ It can be shown that the functor $F$ is represented by $X$. For this, one may show $Hom(T,X)\cong Aut_{\mathcal O(T)}(\mathcal O(T)\otimes V)$. First, we have $$Hom(T,X)=Hom(\mathcal A_X(|X|),\mathcal O(T))$$ It is known that each $\psi\in Hom(\mathcal A_X(|X|),\mathcal O(T))$ is uniquely determined by $\{g_{ij}\}$ where $g_{ij}=\psi(f_{ij})$. Now set $\bar{\psi}(R_j)=\sum_i g_{ij}R_i$. One may consider $\bar{\psi}$ as an element of $Aut_{\mathcal{O}(T)}(\mathcal{O}(T)\otimes V)$. Obviously $\psi\mapsto \bar{\psi}$ is a bijection from $Hom(T,X)$ to $Aut_{\mathcal{O}(T)}(\mathcal{O}(T)\otimes V)$. Thus the supermanifold $X$ is a super Lie group; we denote it by $GL(V)$, or by $GL(m|n)$ if $V=\mathbb{R}^{m|n}$. Therefore, the $T$-points of $GL(m|n)$ form the group of invertible even $m|n\times m|n$ supermatrices $\begin{pmatrix} A & B\\ C & D \end{pmatrix}$ such that $A=(a_{ij}), B=(b_{il}), C=(c_{kj}), D=(d_{kl})$ with $a_{ij}, d_{kl}\in \mathcal O(T)_0,\quad b_{il},c_{kj}\in \mathcal O(T)_1$, and the multiplication is given by the matrix product. \end{example} For $x\in|G|$, one can define the right and left translations by $x$ as \begin{align} r_x:=\mu \circ (1_G\times \hat x)\circ \Delta_G,\label{pullefttrans}\\ l_x:=\mu \circ (\hat x\times 1_G)\circ \Delta_G,\label{pulrighttrans} \end{align} respectively.
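The odd entries $b_{il}, c_{kj}$ of such supermatrices anticommute and square to zero. A minimal computational model of a Grassmann (exterior) algebra, useful for experimenting with such entries, can be sketched as follows (an illustrative toy implementation, not part of the formalism above):

```python
# Elements of the Grassmann algebra on generators e_0, ..., e_{q-1} are stored
# as {frozenset_of_indices: coefficient}; the empty frozenset is the unit 1.
def gmul(a, b):
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if I & J:            # e_i e_i = 0
                continue
            # sign from moving each e_i (i in I) past the e_j (j in J) with j < i
            sign = 1
            for i in I:
                sign *= (-1) ** sum(1 for j in J if j < i)
            K = I | J
            out[K] = out.get(K, 0) + sign * x * y
    return {k: v for k, v in out.items() if v != 0}

def e(*idx):                     # monomial e_{i1} e_{i2} ... (increasing indices)
    return {frozenset(idx): 1}

# odd generators anticommute and square to zero
assert gmul(e(0), e(1)) == {frozenset({0, 1}): 1}
assert gmul(e(1), e(0)) == {frozenset({0, 1}): -1}
assert gmul(e(0), e(0)) == {}

# even elements (e.g. 1 + e_0 e_1) commute with everything
even = {frozenset(): 1, frozenset({0, 1}): 1}
odd = e(2)
assert gmul(even, odd) == gmul(odd, even)
print("Grassmann algebra checks passed")
```

With such entries one can multiply even supermatrices block by block and observe that the product is again even, mirroring the group law on $GL(m|n)(T)$.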
One can show that the pullbacks of the above morphisms are as follows: \begin{align} r_x^*:=(1_{\mathcal{O}(G)}\otimes ev_x)\circ \mu^*,\label{lefttrans}\\ l_x^*:=(ev_x\otimes 1_{\mathcal{O}(G)})\circ \mu^*.\label{righttrans} \end{align} One may also use the language of the functor of points to describe the two morphisms (\ref{pullefttrans}) and (\ref{pulrighttrans}). \begin{definition} Let $M$ be a supermanifold and let $G$ be a super Lie group with $\mu,i$ and $E$ as its multiplication, inverse and unit morphisms respectively. A morphism $a:M\times G\rightarrow M$ is called a (right) action of $G$ on $M$ if the following diagrams commute: \begin{Small}\begin{displaymath} \xymatrix{ & M\times G\times G \ar[rd]^{1_M\times\mu} \ar[ld]_{ a\times 1_G} & \\ M\times G \ar[dr]_a & & M\times G \ar[ld]^{a}\\ & M & }\quad \xymatrix{ & M\times G \ar[rd]^{a} & \\ M\ar[ur]^{(1_M\times\hat{e})\circ \Delta_M} \ar[rr]_{1_M} & & M} \end{displaymath}\end{Small} where $\hat{e}, \Delta_M$ are as above. In this case, we say $G$ acts on $M$ from the right. One can define a left action analogously. \end{definition} According to the above diagrams, one has: $$\begin{matrix} a\circ(1_M\times\mu)=a\circ(a\times 1_G)&\qquad,\qquad & a\circ(1_M\times\hat{e})\circ \Delta_M=1_M. \end{matrix}$$ By the Yoneda lemma (Lemma \ref{Yoneda}), one may equivalently consider the action of $G$ as a natural transformation: $$a(.):M(.)\times G(.)\rightarrow M(.),$$ such that for each supermanifold $T$, the morphism $a_T: M(T)\times G(T)\rightarrow M(T)$ is an action of the group $G(T)$ on the set $M(T)$. This means: \begin{enumerate} \item[1.] $(m.g_1).g_2=m.(g_1g_2),\qquad \forall g_1,g_2\in G(T), \forall m\in M(T).$ \item[2.] $m.\hat{e}=m,\qquad \forall m\in M(T).$ \end{enumerate} For $p\in |M|$, define the map $$\begin{matrix} a_p:G\rightarrow M\\ \qquad a_p:=a\circ (\hat{p}\times 1_G)\circ \Delta_G, \end{matrix}$$ where $\hat{p}$ is the morphism (\ref{phat}) for $p\in |M|$.
Equivalently, this map may be defined as $$\begin{matrix} (a_p)_T:G(T)\rightarrow M(T)\\ \qquad g\longmapsto \hat{p}.g \end{matrix}$$ One may easily show that $a_p$ has constant rank (see Proposition 8.1.5 in \cite{carmelibook} for more details). Before the next definition, we recall that a morphism between supermanifolds, say $\psi:M\rightarrow N$, is a submersion at $x\in|M|$ if $(d\psi)_x$ is surjective, and $\psi$ is called a submersion if this happens at each point (for more details, one can refer to \cite{vsv}, \cite{carmelibook}). $\psi$ is a surjective submersion if, in addition, $\tilde{\psi}$ is surjective. \begin{definition} Let $G$ act on $M$ with action $a:M\times G \rightarrow M$. We say that $a$ is transitive if there exists $p\in |M|$ such that $a_p$ is a surjective submersion. \end{definition} It can be shown that if $a_p$ is a submersion for one $p\in |M|$, then it is a submersion for every point in $|M|$. The following proposition will be required in the last section. \begin{proposition}\label{Transitive} Let $a:M\times G \rightarrow M$ be an action; $a$ is transitive if and only if $(a_p)_{\mathbb{R}^{0|q}}:G(\mathbb{R}^{0|q})\rightarrow M(\mathbb{R}^{0|q})$ is surjective, where $q$ is the odd dimension of $G$. \end{proposition} \begin{proof} See the proof of Proposition 9.1.4 in \cite{carmelibook}. \end{proof} \begin{definition} Let $G$ be a super Lie group and let $a$ be an action on a supermanifold $M$. By the stabilizer of $p\in |M|$ we mean a supermanifold $G_p$ equalizing the diagram $$G\overset{a_p}{\underset{\hat{p}} {\rightrightarrows}}M.$$ \end{definition} It is not a priori clear that such an equalizer exists. In this regard, there are two propositions. \begin{proposition}\label{Isotropic} Let $a:M\times G \rightarrow M$ be an action, then \begin{enumerate} \item[1.] The following diagram admits an equalizer $G_p$ $$G\overset{a_p}{\underset{\hat{p}} {\rightrightarrows}}M.$$ \item[2.] $G_p$ is a sub super Lie group of $G$. \item[3.]
the functor $T\rightarrow (G(T))_{\hat{p}}$ is represented by $G_p$, where $(G(T))_{\hat{p}}$ is the stabilizer of $\hat{p}$ under the action of $G(T)$ on $M(T)$. \end{enumerate} \end{proposition} \begin{proof} See the proof of Proposition 8.4.7 in \cite{carmelibook}. \end{proof} \begin{proposition}\label{equivariant} Suppose $G$ acts transitively on $M$. There exists a $G$-equivariant isomorphism \begin{displaymath}\xymatrix{ \dfrac{G}{G_p}\ar[r]^{\cong} & M. } \end{displaymath} \end{proposition} \begin{proof} See the proof of Proposition 6.5 in \cite{g-spaces}. \end{proof} \section{Supergrassmannian}\label{supergrass} Supergrassmannians were introduced by Manin in \cite{ma1}. In this section, we study their sheaf structures in more detail. For convenience, from now on we set $\alpha :=k(m-k)+l(n-l)$ and $\beta:= l(m-k)+k(n-l)$, and we write any supermatrix in terms of four blocks, say $B_1, B_2, B_3, B_4$. The upper-left and lower-right blocks $B_1, B_4$ are called even blocks, and the lower-left and upper-right blocks $B_2, B_3$ are called odd blocks. By a supergrassmannian, $G_{k|l}(m|n)$, we mean a supermanifold which is constructed by gluing the superdomains $\mathbb{R}^{\alpha|\beta}=(\mathbb{R}^\alpha, \, C^{\infty}(\mathbb{R}^\alpha)\otimes \wedge \mathbb{R}^\beta)$ as follows:\\ Let $I\subset \{1, \cdots, m \}$ and $ R \subset \{ m+1, \cdots, m+n\}$ be sets with $ k $ and $ l $ elements respectively. The elements of $ I $ are called even indices and the elements of $ R $ are called odd indices. In this case $I|R$ is called a $k|l$-index. Set $U_{I|R}=(|U_{I|R}|, \mathcal{O}_{I|R})$, where $$|U_{I|R}|=\mathbb{R}^\alpha \quad,\quad \mathcal{O}_{I|R}(\mathbb{R}^\alpha)=\, C^{\infty}(\mathbb{R}^\alpha)\otimes \wedge \mathbb{R}^\beta.$$ Each superdomain $U_{I|R}$ is labeled by an even $k|l\times m|n$ supermatrix, say $A_{I|R}$, with four blocks $ B_1, B_{2}, B_3, B_4 $ as above.
Except for the columns with indices in $I \cup R$, which together form a minor denoted by $M_{I|R}(A_{I|R})$, the even and odd blocks are filled from top to bottom and left to right by $x_a^I, e_b^I$, the even and odd free generators of $\mathcal{O}_{I|R}(\mathbb{R}^\alpha)$, respectively. This process imposes an ordering on the set of generators. In addition, $ M_{I|R}A_{I|R} $ is taken to be the identity supermatrix. For example, let $I=\{1\}, R=\{1,2\}$ and let $ I|R $ be a $1|2$-index in $G_{1|2}(3|3)$. In this case the set of generators of $\mathcal{O}_{I|R}(\mathbb{R}^\alpha)$ is \begin{equation*} \{x_1, x_2, x_3, x_4; e_1, e_2, e_3, e_4, e_5\} \end{equation*} and $ A_{I|R} $ is: \begin{equation*} \left[ \begin{array}{ccc|ccc} 1 & x_1 & x_2 & 0 & 0 & e_5 \\ \hline 0 & e_1 & e_3 & 1 & 0 & x_3 \\ 0 & e_2 & e_4 & 0 & 1 & x_4 \\ \end{array} \right] \end{equation*} Note that, in this example, $\{x_1,e_1,e_2,x_2,e_3,e_4,e_5,x_3,x_4\}$ is the corresponding totally ordered set of generators. The transition map between two superdomains $U_{I|R}$ and $U_{J|S}$ is denoted by $$g_{I|R,J|S}:\bigg(|U_{I|R}|\cap |U_{J|S}|,\mathcal{O}_{I|R}|_{|U_{I|R}|\cap |U_{J|S}|}\bigg)\rightarrow \bigg(|U_{I|R}|\cap |U_{J|S}|,\mathcal{O}_{J|S}|_{|U_{I|R}|\cap |U_{J|S}|}\bigg).$$ It is defined whenever $M_{J|S}A_{I|R}$ is invertible, in which case the following equation defines the change of coordinates (transition map): \begin{equation}\label{transitionmap} D_{J|S}\bigg(\big(M_{J|S}(A_{I|R})\big)^{-1}A_{I|R}\bigg)=D_{J|S}(A_{J|S}), \end{equation} where $D_{J|S}(A_{I|R})$ is the matrix that remains after omitting $M_{J|S}(A_{I|R})$. For example, in $G_{1|2}(3|3)$ suppose $I=\{2\}, R=\{1,3\}, J=\{3\}, S=\{2,3\}$, so $I|R, J|S$ are $ 1|2 $-indices.
We have: \begin{equation*} A_{I|R}=\left[ \begin{array}{ccc|ccc} x_1 & 1 & x_2 & 0 & e_5 & 0 \\ \hline e_1 & 0 & e_3 & 1 & x_3 & 0 \\ e_2 & 0 & e_4 & 0 & x_4 & 1 \\ \end{array} \right], \quad A_{J|S}=\left[ \begin{array}{ccc|ccc} x_1 & x_2 & 1 & e_5 & 0 & 0 \\ \hline e_1 & e_3 & 0 & x_3 & 1 & 0 \\ e_2 & e_4 & 0 & x_4 & 0 & 1 \\ \end{array}\right], \end{equation*} \begin{equation*} M_{J|S}A_{I|R}=\left[ \begin{array}{c|cc} x_2 & e_5 & 0 \\ \hline e_3 & x_3 & 0 \\ e_4 & x_4 & 1 \\ \end{array} \right]. \end{equation*} The transition map between $U_{I|R}$ and $U_{J|S}$ is obtained by substituting the above supermatrices in (\ref{transitionmap}). A straightforward argument establishes the following proposition: \begin{proposition} Let $g_{I|R,J|S}$ be as above, then \begin{enumerate} \item[1.] $g_{I|R,I|R}=id.$ \item[2.] $g_{I|R,J|S} g_{J|S,I|R}=id.$ \item[3.] $g_{I|R,J|S} g_{J|S,K|T} g_{K|T,I|R}=id.$ \end{enumerate} \end{proposition} \begin{proof} For the first equation, note that the map $ g_{I|R,I|R} $ is obtained from the equality \begin{equation*} D_{I|R}\bigg((M_{I|R}A_{I|R})^{-1}A_{I|R}\bigg)=D_{I|R}A_{I|R}, \end{equation*} where the matrix $ M_{I|R}A_{I|R} $ is the identity. So $g_{I|R,I|R}$ is defined by the equality \begin{equation*} D_{I|R}A_{I|R}=D_{I|R}A_{I|R}. \end{equation*} This shows the first equation. For the second equality, let $ J|S $ be another $ k|l $-index, so $ g_{I|R,J|S} $ is obtained from the equality \begin{equation*} D_{J|S}\bigg((M_{J|S}A_{I|R})^{-1}A_{I|R}\bigg)=D_{J|S}A_{J|S} \end{equation*} One may see that $ g_{J|S,I|R}\circ g_{I|R,J|S} $ is obtained from the equality \begin{equation*} D_{I|R}\bigg(\bigg(M_{I|R}\Big((M_{J|S}A_{I|R})^{-1}A_{I|R}\Big)\bigg)^{-1}(M_{J|S}A_{I|R})^{-1}A_{I|R}\bigg)=D_{I|R}A_{I|R}.
\end{equation*} For the left-hand side, we have \begin{align*}&=D_{I|R}\Bigg(\bigg((M_{J|S}A_{I|R})^{-1}M_{I|R}A_{I|R}\bigg)^{-1}(M_{J|S}A_{I|R})^{-1}A_{I|R}\Bigg)\\ &=D_{I|R}\bigg(\bigg((M_{J|S}A_{I|R})^{-1}\bigg)^{-1}(M_{J|S}A_{I|R})^{-1}A_{I|R}\bigg)\\ &=D_{I|R}\bigg((M_{J|S}A_{I|R})(M_{J|S}A_{I|R})^{-1}A_{I|R}\bigg)=D_{I|R}(A_{I|R})\end{align*} Accordingly, the map $ g_{J|S,I|R}\circ g_{I|R,J|S} $ is obtained from $ D_{I|R}A_{I|R}=D_{I|R}A_{I|R} $, which shows that this map is the identity. For the third equality, it is sufficient to show that the map $ g_{I|R,J|S}\circ g_{J|S,K|T} $ is obtained from \begin{equation*} D_{I|R}((M_{I|R}A_{K|T})^{-1}A_{K|T})=D_{I|R}A_{I|R} . \end{equation*} This follows analogously to case 2. \end{proof} So the sheaves $(U_{I|R}, \mathcal O_{I|R})$ may be glued through the maps $g_{I|R, J|S}$ to construct the supergrassmannian $G_{k|l}(m|n)$. Indeed, according to \cite{vsv}, the conditions of the above proposition are necessary and sufficient for gluing. \section {Supergrassmannian as homogeneous superspace} In this section, regarding homogeneous superspaces, we first quickly recall some definitions from \cite{g-spaces} and \cite{carmelibook}. For more information one may consult these references. Let $G=(|G|,\mathcal{O}_G)$ be a super Lie group and $H=(|H|,\mathcal{O}_H)$ a closed sub super Lie group of $G$. One can define a supermanifold structure on the topological space $|X|=\dfrac{|G|}{|H|}$ as follows:\\ Let $\mathfrak{g}=Lie(G)$ and $\mathfrak{h}=Lie(H)$ be the super Lie algebras corresponding to $G$, $H$. For each $Z\in \mathfrak{g}$, let $D_{Z}$ be the left invariant vector field on $G$ associated with $Z$.
For the subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ set: $$\forall U\subset |G| \qquad \mathcal{O}_{\mathfrak{h}}(U):=\{f\in \mathcal{O}_{G}(U)| D_{Z}f=0 \quad on\hspace{5pt} U, \quad \forall Z\in \mathfrak{h}\}.$$ On the other hand, for any open subset $U\subset |G|$ set: $$\mathcal{O}_{inv}(U):=\{f\in\mathcal{O}_{G}(U)|\quad \forall x_0 \in |H|,\hspace{5pt} r^*_{x_0}f=f \}.$$ If $|H|$ is connected, then $\mathcal{O}_{inv}(U)= \mathcal{O}_{\mathfrak{h}}(U)$. For each open subset $W\subset |X|=\dfrac{|G|}{|H|}$, the structure sheaf $\mathcal{O}_X$ is defined as follows: $$\mathcal{O}_{X}(W):=\mathcal{O}_{inv}(U)\cap \mathcal{O}_{\mathfrak{h}}(U),$$ where $U=\pi^{-1}(W).$ One can show that $\mathcal{O}_{X}$ is a sheaf on $|X|$ and that the ringed space $X=(|X|,\mathcal{O}_{X})$ is locally a superdomain (for more details refer to \cite{g-spaces}). So $X$ is a supermanifold, called a homogeneous superspace. In this section, we want to show that the supergrassmannian $G_{k|l}(m|n)$ is a homogeneous superspace. According to Section 1, it is enough to find a super Lie group which acts on $G_{k|l}(m|n)$ transitively. We show that the super Lie group $GL(m|n)$ acts on the supergrassmannian $G_{k|l}(m|n)$ transitively. First, we have to define a morphism $a:G_{k|l}(m|n)\times GL(m|n) \rightarrow G_{k|l}(m|n)$. For this, by the Yoneda lemma, it is sufficient, for each supermanifold $T$, to define $a_T$: $$a_T:G_{k|l}(m|n)(T)\times GL(m|n)(T) \rightarrow G_{k|l}(m|n)(T),$$ or equivalently to define $$(a_T)^P: G_{k|l}(m|n)(T) \rightarrow G_{k|l}(m|n)(T),$$ where $P$ is a fixed arbitrary element in $GL(m|n)(T)$. For brevity, we denote $(a_T)^P$ by $\textbf{A}$. One may regard elements of $GL(m|n)(T)$ as invertible $m|n\times m|n$ supermatrices with entries in $\mathcal{O}(T)$, but there is no such description for $G_{k|l}(m|n)(T)$, because $G_{k|l}(m|n)$ is not a superdomain. We know each supergrassmannian is constructed by gluing superdomains (cf.
Section 2), so one may define actions of $GL(m|n)$ on the superdomains $(|U_{I|R}|,\mathcal{O}(U_{I|R}))$ and then show that these actions glue to construct $a_T$. In addition, to define $\textbf{A}$, we need to refine the covering $\{U_{I|R}(T)\}_{I|R}$. Set $$U_{I|R}^{J|S}(T):=\Big\{X\in U_{I|R}(T)\quad|\quad D_{J|S}\big(\big(M_{J|S}(\tilde{X}_{I|R}P)\big)^{-1}\tilde{X}_{I|R}P\big)\in U_{J|S}(T)\Big\},$$ where, by $\tilde{X}_{I|R}$, we mean the supermatrix whose columns with indices in $I\cup R$ together form an identity supermatrix, and whose even and odd blocks are otherwise filled, from top to bottom and left to right, by $f_a^{I|R}$ and $g_b^{I|R}$, the even and odd free elements of $\mathcal{O}_{I|R}(T)$. One can show that $\{U_{I|R}^{J|S}(T)\}_{I|R,J|S}$ is a covering of $G_{k|l}(m|n)(T).$ Now consider the maps \begin{align*} \textbf{A}_{I|R}^{J|S}: & U_{I|R}^{J|S}(T)\rightarrow U_{J|S}(T)\\ & \quad X \mapsto D_{J|S}\Big(\big(M_{J|S}(\tilde{X}_{I|R}P)\big)^{-1}\tilde{X}_{I|R}P\Big), \end{align*} where $\tilde{X}_{I|R}$ is as above. We have to show that these maps may be glued to construct a global map on $G_{k|l}(m|n)(T)$. For this, it is sufficient to show that the following diagram commutes: \begin{Small} \begin{displaymath} \xymatrix{ & U_{I|R}^{J|S}(T)\cap U_{K|Q}^{H|L}(T) \ar[rd]^{(g_{{K|Q},{I|R}})_T}\ar[ld]_{\textbf{A}_{I|R}^{J|S}} & \\ U_{J|S}(T)\cap U_{H|L}(T)\ar[dr]_{(g_{{H|L},{J|S}})_T} & & U_{I|R}^{J|S}(T)\cap U_{K|Q}^{H|L}(T) \ar[ld]^{\textbf{A}_{K|Q}^{H|L}}\\ & U_{J|S}(T)\cap U_{H|L}(T) & }\quad \end{displaymath} \end{Small} We need the following lemma to establish the commutativity of the above diagram. For each supermanifold $T$, let $\big(g_{I|R, J|S}\big)_T$ be the map induced by $g_{I|R, J|S}$ on $T$-points.
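Restricting to the purely even part, that is, an ordinary grassmannian $G_k(m)$ with no odd coordinates, the chart-change maps and the chart-level action maps above become plain matrix formulas, and both the cocycle identity and the commutativity of the diagram can be checked numerically. The following sketch is only such an even-case illustration; the helper names and the choices $k=2$, $m=4$ are ours.

```python
import numpy as np

def tilde(x, I, m):
    """The k x m representative with the identity in the columns I and the
    free coordinates x (a k x (m-k) block) filling the remaining columns."""
    k = len(I)
    rest = [j for j in range(m) if j not in I]
    A = np.zeros((k, m))
    A[:, I] = np.eye(k)
    A[:, rest] = x
    return A

def D(A, J):
    """Delete the columns J (the even analogue of the map D_J)."""
    rest = [j for j in range(A.shape[1]) if j not in J]
    return A[:, rest]

def g(x, I, J, m):
    """Chart change g_{J,I}: free coordinates in U_I -> free coordinates in
    U_J, i.e. D_J((M_J(A))^{-1} A) with A the chart-I representative."""
    A = tilde(x, I, m)
    return D(np.linalg.inv(A[:, J]) @ A, J)

def act(x, I, J, P, m):
    """The chart-level action map A_I^J: X -> D_J((M_J(X P))^{-1} X P)."""
    A = tilde(x, I, m) @ P
    return D(np.linalg.inv(A[:, J]) @ A, J)

rng = np.random.default_rng(0)
m, I, J, K, H = 4, [0, 1], [1, 2], [2, 3], [0, 3]
x = rng.normal(size=(2, 2))                  # a generic point of U_I
P = rng.normal(size=(m, m)) + 4 * np.eye(m)  # a generic invertible matrix

# cocycle identity: g_{K,J} o g_{J,I} = g_{K,I}
assert np.allclose(g(g(x, I, J, m), J, K, m), g(x, I, K, m))
# the diagram commutes: change charts then act = act then change charts
assert np.allclose(g(act(x, I, J, P, m), J, H, m),
                   act(g(x, I, K, m), K, H, P, m))
```

Both assertions hold for any generic point: all four expressions describe the same $k$-dimensional row space, merely read in different charts.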
\begin{lemma} Let $\psi:T\rightarrow \mathbb{R}^{\alpha|\beta}$ be a $T$-point of $\mathbb{R}^{\alpha|\beta}$ and let $(z_{tu})$ be global coordinates on $\mathbb{R}^{\alpha|\beta}$, ordered as introduced in the second paragraph at the start of Section 2. If $B=(\psi^*(z_{tu}))$ is the supermatrix corresponding to $\psi$, then the supermatrix corresponding to $\big(g_{I|R,J|S}\big)_T(\psi)$ is \begin{equation*} D_{I|R}((M_{I|R}\tilde{B}_{J|S})^{-1}\tilde{B}_{J|S}), \end{equation*} where $\tilde{B}_{J|S}$ is as above (cf. the second paragraph before this lemma). \end{lemma} \begin{proof} Note that $g^*_{I|R, J|S}$ may be represented by a supermatrix as follows: \begin{equation*} D_{I|R}((M_{I|R}A_{J|S})^{-1}A_{J|S}). \end{equation*} Let $M_{I|R}A_{J|S}=(m_{tu})$ and $(M_{I|R}A_{J|S})^{-1}=(m^{tu})$. If $z=(z_{ij})$ is a coordinate system, including even and odd coordinates, on $U_{I|R}$, then one has $$g^*_{I|R, J|S}(z_{tu})=\sum m^{tk}(z)\cdot z_{ku}.$$ Then \begin{align*} \psi^*\circ g^*_{I|R, J|S}(z_{tu})&=\psi^*\big(\sum m^{tk}(z)\cdot z_{ku}\big)\\ &=\sum m^{tk}(\psi^*(z))\cdot\psi^*(z_{ku}). \end{align*} For the second equality, note that $\psi^*$ is a homomorphism of $\mathbb{Z}_2$-graded algebras and $m^{tk}(z)$ is a rational function of $z$. Obviously, the last expression is the $(t, u)$-entry of the matrix $D_{I|R}((M_{I|R}B)^{-1}B)$. This completes the proof. \end{proof} \begin{proposition} The above diagram commutes. \end{proposition} \begin{proof} Equivalently, for arbitrary $k|l$-indices $I|R, J|S, K|Q, H|L$, we have to show that \begin{equation}\label{glueaction} (g_{{H|L},{J|S}})_T\circ \textbf{A}_{I|R}^{J|S}=\textbf{A}_{K|Q}^ {H|L} \circ (g_{{K|Q},{I|R}})_T. \end{equation} Let $X\in U_{I|R}^{J|S}(T)\cap U_{K|Q}^{H|L}(T)$ be an arbitrary element.
One has $X\in U_{I|R}^{J|S}(T)$, so \begin{align*} D_{J|S}\Big(\big(M_{J|S}(\tilde{X}_{I|R}P)\big)^{-1}\tilde{X}_{I|R}P\Big)&\in U_{J|S}(T),\\ (g_{{H|L},{J|S}})_T\Bigg(D_{J|S}\Big(\big(M_{J|S}(\tilde{X}_{I|R}P)\big)^{-1}\tilde{X}_{I|R}P\Big)\Bigg)&\in U_{H|L}(T). \end{align*} From the left-hand side of (\ref{glueaction}), we have: \begin{align*} (g_{{H|L},{J|S}})_T&\circ \textbf{A}_{I|R}^{J|S}(X)\\ &=(g_{{H|L},{J|S}})_T\Bigg(D_{J|S}\Big(\big(M_{J|S}(\tilde{X}_{I|R}P)\big)^{-1}\tilde{X}_{I|R}P\Big)\Bigg)\\ & =D_{H|L}\Bigg(\bigg(M_{H|L}\Big((M_{J|S}(\tilde{X}_{I|R}P))^{-1}\tilde{X}_{I|R}P\Big)\bigg)^{-1}(M_{J|S}(\tilde{X}_{I|R}P))^{-1}\tilde{X}_{I|R}P\Bigg)\\ & =D_{H|L}\Bigg(\Big((M_{J|S}(\tilde{X}_{I|R}P))^{-1}(M_{H|L}(\tilde{X}_{I|R}P))\Big)^{-1}(M_{J|S}(\tilde{X}_{I|R}P))^{-1}\tilde{X}_{I|R}P\Bigg)\\ & =D_{H|L}\Bigg((M_{H|L}(\tilde{X}_{I|R}P))^{-1}M_{J|S}(\tilde{X}_{I|R}P)(M_{J|S}(\tilde{X}_{I|R}P))^{-1}\tilde{X}_{I|R}P\Bigg)\\ & =D_{H|L}\Bigg((M_{H|L}(\tilde{X}_{I|R}P))^{-1}\tilde{X}_{I|R}P\Bigg). \end{align*} Also, since $X\in U_{I|R}(T)$, we have \begin{equation*} (g_{{K|Q},{I|R}})_T(X)\in U_{K|Q}(T),\qquad \textbf{A}_{K|Q}^{H|L}\bigg((g_{{K|Q},{I|R}})_T(X)\bigg)\in U_{H|L}(T). \end{equation*} Then, from the right-hand side of (\ref{glueaction}), we have \begin{align*} \textbf{A}_{K|Q}^{H|L}&\circ(g_{{K|Q},{I|R}})_T(X)\\ &=\textbf{A}_{K|Q}^{H|L}\Bigg(D_{K|Q}\bigg((M_{K|Q}\tilde{X}_{I|R})^{-1}\tilde{X}_{I|R}\bigg)\Bigg)\\ & =D_{H|L}\Bigg(\Big[M_{H|L}\bigg((M_{K|Q}\tilde{X}_{I|R})^{-1}\tilde{X}_{I|R}P\bigg)\Big]^{-1}(M_{K|Q}\tilde{X}_{I|R})^{-1}\tilde{X}_{I|R}P\Bigg)\\ & =D_{H|L}\Bigg(\Big[(M_{K|Q}\tilde{X}_{I|R})^{-1}M_{H|L}(\tilde{X}_{I|R}P)\Big]^{-1}(M_{K|Q}\tilde{X}_{I|R})^{-1}\tilde{X}_{I|R}P\Bigg)\\ & =D_{H|L}\Bigg(\Big(M_{H|L}(\tilde{X}_{I|R}P)\Big)^{-1}(M_{K|Q}\tilde{X}_{I|R})(M_{K|Q}\tilde{X}_{I|R})^{-1}\tilde{X}_{I|R}P\Bigg)\\ & =D_{H|L}\Bigg(\Big(M_{H|L}(\tilde{X}_{I|R}P)\Big)^{-1}\tilde{X}_{I|R}P\Bigg). \end{align*} This shows that the above diagram commutes.
\end{proof} Therefore $GL(m|n)$ acts on $G_{k|l}(m|n)$ via the action $a$. It remains to show that this action is transitive. \begin{theorem} $GL(m|n)$ acts transitively on $G_{k|l}(m|n)$ with the above action. \end{theorem} \begin{proof} By Proposition \ref{Transitive}, it is sufficient to show that the map $$(a_X)_{\mathbb{R}^{0|q}}:GL(m|n)(\mathbb{R}^{0|q}) \rightarrow G_{k|l}(m|n)(\mathbb{R}^{0|q})$$ is surjective, where $q=2mn$ is the odd dimension of $GL(m|n)$ and $$X=(X_1,X_2)\in|U_{I|R}|= U_I\times U_R$$ is an element such that $X_1$ is the coordinate matrix of a $k$-dimensional subspace, say $A_1$, in $U_I$, and $X_2$ is the coordinate matrix of an $l$-dimensional subspace, say $A_2$, in $U_R$. As an element of $G_{k|l}(m|n)(T)$, one may use (\ref{phat}) and represent $\hat{X}$ as $$\hat{X}=\begin{pmatrix}\bar{X}_1 & 0\\0 & \bar{X}_2\end{pmatrix},$$ where $T$ is an arbitrary supermanifold and $\bar{X}_1$ and $\bar{X}_2$ are the unique representations of $A_1$ and $A_2$ in $U_I$ and $U_R$, respectively. For surjectivity, let $$W=\begin{pmatrix} A & B\\ C & D \end{pmatrix}\in U_{J|S}(\mathbb{R}^{0|q})$$ be an arbitrary element. We have to show that there exists an element $V\in GL(m|n)(\mathbb{R}^{0|q})$ such that $\hat{X}V=W$. Since $\bar{X}_1\in G_k(m)$ and $A$ is a matrix of rank $k$, and since the Lie group $GL(m)$ acts transitively on the manifold $G_k(m)$, there exists an invertible matrix $P_{m\times m}\in GL(m)$ such that $\bar{X}_1P=A$. Similarly, one may see that there exists an invertible matrix $Q_{n\times n}\in GL(n)$ such that $\bar{X}_2Q=D$. In addition, the equations $\bar{X}_1Z=B$ and $\bar{X}_2Z^\prime=C$ have infinitely many solutions. Choose arbitrary solutions, say $H_{m\times n}$ and $N_{n\times m}$, of these equations. One can check that $V=\left[\begin{array}{c|c}P_{m\times m} & H_{m\times n}\\ \hline N_{n\times m} & Q_{n\times n} \end{array} \right]_{m|n\times m|n}$ satisfies the equation $\hat{X}V=W$.
So $(a_X)_{\mathbb{R}^{0|q}}$ is surjective. By Proposition \ref{Transitive}, this shows that the action of $GL(m|n)$ on $G_{k|l}(m|n)$ is transitive. \end{proof} Then, according to Proposition \ref{equivariant}, $G_{k|l}(m|n)$ is a homogeneous superspace. \section{Conclusion} In this paper, we showed that $GL(m|n)$ acts transitively on $G_{k|l}(m|n)$; thus this supermanifold is a homogeneous superspace. This paper is part of our work on $\nu$-grassmannians, a novel generalization of grassmannians in supergeometry, and on the super Lie groups which act on them. The most important feature of these spaces is that they carry an odd involution on their sheaf structures. In forthcoming papers, the second author (with his PhD students) will show that, by using $\nu$-grassmannians, one may generalize the theorem on the homotopy classification of vector bundles and the concept of universal Chern classes to supergeometry. For these results in the special case of $\nu$-projective spaces, refer to \cite{vclass} and \cite{vprojective}. \newpage \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\href}[2]{#2}
\section{Introduction\label{Intro}} The discovery of new nuclei in the proximity of the neutron dripline provides a test for nuclear mass models, and hence for the understanding of the nuclear force and the creation of elements. Once neutron-rich nuclei are observed, and their cross sections for formation are understood, investigations to study the nuclei themselves, such as decay spectroscopy, can be planned. Therefore, obtaining production rates for the most exotic nuclei continues to be an important part of the experimental program at existing and future rare-isotope facilities. A number of production mechanisms have been used to produce neutron-rich isotopes for \protect{$20\le Z\le 28$}~\cite{OT-PRC09}. In recent years, however, two reaction mechanisms have been the most effective at producing nuclei in this region: \begin{itemize} \item projectile fragmentation -- an experiment with a $^{76}$Ge (132 MeV/u) beam produced 15 new isotopes of \protect{$17\le Z\le 25$}~\cite{OT-PRL09}, \item in-flight fission with light targets (Abrasion-Fission) -- an experiment with a $^{238}$U beam~\cite{Ohn-JPSJ10} produced a large number of isotopes of \protect{$25\le Z\le 48$} using a Be-target, and several new isotopes with \protect{$46\le Z\le 56$} by Coulomb fission on a heavy target. \end{itemize} Progress in the production of neutron-rich isotopes was made possible by the increase of primary beam intensities, new beam development at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University, and advances in experimental techniques \cite{TB-N07}. Indeed, recent measurements at the NSCL~\cite{OT-PRC07,TB-N07,PFM-BAPS08,OT-PRC09} have demonstrated that the fragmentation of $^{48}$Ca and $^{76}$Ge beams can be used to produce new isotopes in the proximity of the neutron dripline.
Continuing this work, we report here the next step with a newly developed $^{82}$Se beam towards the fundamental goal of defining the absolute mass limit for chemical elements in the region of calcium. In the present measurement, four neutron-rich isotopes with \protect{$42\le N\le 47$} were identified for the first time (see Fig.~\ref{chart}), one event was registered consistent with $^{70}$Cr$_{46}$, and another one with $^{75}$Fe$_{49}$. \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{isotope_chart_Se-82_beam.eps} \caption{(Color online) The region of the nuclear chart investigated in the present work. The solid line shows the limit of bound nuclei from the KTUY mass model~\cite{KTUY-PTP05}. The new isotopes observed for the first time in the present work are marked by red squares.\label{chart}} \end{center} \end{figure*} One of the first indications of significant changes in the structure of neutron-rich nuclei was the discovery of enhanced nuclear binding of heavy sodium isotopes~\cite{CT-PRC75}. This is now understood to result from significant contributions of {\it fp} shell intruder orbitals to the ground-state configurations of these isotopes~\cite{XC-NPA75,EKW-PRC90}. Low-lying 2$^+$ states and quadrupole collectivity have been reported in neutron-rich even-even Ne and Mg isotopes around $N=20$, see for example Refs.~\cite{DGM-NPA84,MO-PLB95,YO-PLB01,YA-PLB03,CH-PRC05}. This region around $^{31}$Na, where the neutron {\it fp} shell contributes significantly to the ground-state structure, is now known as the ``Island of Inversion''. Similarly, there is mounting evidence for an onset of deformation around neutron number $N=40$ in Fe and Cr nuclei. In even-even Fe and Cr nuclei, for example, this evidence is based on the energies of low-lying states~\cite{HA-PRL99,SO-EPJA03,AG-PRC08,AO-PRL09,GA-PRC10}, transition strengths~\cite{RO-PRL11}, and deformation lengths~\cite{AO-PRL09}.
Neutron $g_{9/2}$ and $d_{5/2}$ configurations from above the $N=40$ shell gap are proposed to descend and dominate the low-lying configurations similar to the $N=20$ Island of Inversion~\cite{BAB-PPNP01,LE-PRC10}. In our previous cross section measurements in the region around $^{62}$Ti ($^{76}$Ge primary beam)~\cite{OT-PRL09} we observed a systematic smooth variation of the production cross sections that might point to nuclear structure effects, for example, an onset of collectivity, that are not included in the global mass models that form the basis of the systematics. The present work, which uses a different primary beam, provides an independent check of this interpretation. \section{Experiment\label{secExpt}} \subsection{Setup\label{secSetup}} A newly developed 139 MeV/u $^{82}$Se beam with an intensity of 35~pnA, accelerated by the coupled cyclotrons at the NSCL, was fragmented on a series of beryllium targets and a tungsten target, each placed at the object position of the A1900 fragment separator~\cite{DJM-NIMA03}. In this work we used a configuration identical to that of our previous experiment with a $^{76}$Ge beam~\cite{OT-PRC09}, where the combination of the A1900 fragment separator with the S800 analysis beam line~\cite{DB-NIMB03} formed a two-stage separator system that allowed a high degree of rejection of unwanted reaction products. At the end of the S800 analysis beam line, the particles of interest were stopped in a telescope of eight silicon PIN diodes (50$\times$50~mm$^2$) with a total thickness of 8.0~mm. A 50~mm thick plastic scintillator positioned behind the Si-telescope served as a veto detector against reactions in the Si-telescope and provided a measurement of the residual energy of lighter ions that were not stopped in the Si-telescope. A position sensitive parallel plate avalanche counter (PPAC) was located in front of the Si-telescope. All experimental details and a sketch of the experimental setup can be found in Ref.~\cite{OT-PRC09}.
In this paper, we describe the details of our experimental approach and discuss the results. \subsection{Experimental runs\label{secPlanning}} The present experiment consisted of four parts. Except for the last part, the planning of the present experiment was similar to that of the previous $^{76}$Ge experiment~\cite{OT-PRC09}. During all runs, the magnetic rigidity of the last two dipoles of the analysis line was kept constant at a nominal value of 4.3~Tm while the production target thickness was varied to map the fragment momentum distributions. This approach greatly simplified the particle identification during the scans of the parallel momentum distributions. The momentum acceptance of the A1900 fragment separator was restricted to $\Delta p/p = 0.1\%$ (first four runs with thin targets) and $\Delta p/p = 0.2\%$ (other targets) for the measurement of differential momentum distributions in the first part of the experiment. The use of different beryllium target thicknesses (9.7, 68, 138, 230, 314, 413, 513~mg/cm$^2$) allowed coverage of the fragment momentum distributions necessary to extract production cross sections and also resulted in more isotopes in the particle identification spectrum. For the second part of the experiment, a Kapton wedge with a thickness of 20.0 mg/cm$^2$ was used at the dispersive image of the A1900 to reject less exotic fragments with a 10~mm aperture in the focal plane while the separator was set for $^{67}$Fe and $^{78}$Zn ions. The goal of this setting was to confirm the particle identification by isomer tagging as described in Ref.~\cite{RG-PLB95} with $^{67m}$Fe~($E_{\gamma}=367$~keV, $T_{1/2}=43$~$\mu$s) and $^{78m}$Zn~($E_{\gamma}=730, 890, 908$~keV, $T_{1/2}=0.32$~$\mu$s).
In the third part of the experiment, dedicated to the search for new isotopes, five settings were used to cover the most neutron-rich isotopes with \protect{$20\le Z\le 27$}, as it was impossible to find a single target thickness and magnetic rigidity to produce all of the fragments of interest. Each setting was characterized by a fragment for which the separator was tuned. A search for the most exotic nuclei in each setting was carried out with Be and W targets. The settings were centered on $^{60}$Ca, $^{68}$V and $^{74,75}$Fe, based on \softsh{LISE$^{++}$}~\cite{OT-NIMB08} calculations using the parameterizations of the momentum distributions obtained in the first part of the experiment (see Section~\ref{secMomentum}). The momentum acceptance of the A1900 was set to the maximum of $\Delta p/p = 5.0\%$ for these production runs. It is noteworthy that the momentum acceptance of the S800 beamline is about 4\% according to \soft{LISE$^{++}$} Monte Carlo simulations based on new extended configurations with fifth-order optics. These calculations were taken into account in the cross section analysis. The fourth part of the experiment was devoted to two short runs to measure the yield of more stable isotopes by centering on $^{45,48}$Ca. \begin{figure} \centering \includegraphics[width=0.65\columnwidth]{Am3Z_Z.eps}% \caption{(Color online) Particle identification plot showing the measured atomic number, $Z$, versus the calculated function $A-3Z$ for the nuclei observed in production runs of this work. See text for details. The limit of previously observed nuclei is shown by the solid red line, as well as the location of a reference nucleus ($^{64}$Ti). \label{Fig_pid}} \end{figure} \section{Analysis of experimental data \label{secAnalysis}} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{MassDistribution.eps}% \caption{(Color online) Mass spectra of the elements \protect{$21\le Z\le 26$}.
All particles that were stopped in the Si-telescope during the production runs were analyzed. The limits of previously observed nuclei are shown by the vertical dashed lines. Standard deviations produced in multi-peak fits with Gaussian distributions of constant width (dashed curves) are shown in the figures for each element. \label{Fig_Mass}} \end{center} \end{figure*} The particle identification matrix of fully-stripped reaction products observed in the production runs is shown in Fig.~\ref{Fig_pid}. The range of fragments is shown as a function of the measured atomic number, $Z$, versus the calculated quantity $A-3Z$. The identification of the individual isotopes in Fig.~\ref{Fig_pid} was confirmed via isomer tagging using the known isomeric decays in $^{67}$Fe and $^{78}$Zn. The standard deviations of the ionic ($q$) and elemental ($Z$) spectra were found to be similar to those in the previous experiment; therefore, the probabilities of one event being misidentified as a neighboring charge state or element were as small as before. The details of the calculation of the particle identification are given in the appendix of the previous work~\cite{OT-PRC09}. \subsection{Search for new isotopes\label{secNewIsotopes}} The mass spectra for the isotopic chains from scandium to iron measured during the production runs are shown in Fig.~\ref{Fig_Mass}. Only nuclei that stopped in the Si telescope are included in this analysis. The observed fragments include several new isotopes that are the most neutron-rich nuclides yet observed of the elements \protect{$22\le Z\le 25$} ($^{64}$Ti, $^{67}$V, $^{69}$Cr, $^{72}$Mn). One event was found to be consistent with $^{70}$Cr, and another one with $^{75}$Fe. The new neutron-rich nuclei observed in this work are those events to the right of the solid line in Fig.~\ref{Fig_pid} and to the right of the vertical dashed lines in Fig.~\ref{Fig_Mass}.
\subsection{Parallel momentum distributions\label{secMomentum}} The prediction of the momentum distributions of residues is important when searching for new isotopes in order to set the fragment separator at the maximum production rate. Also, the accurate prediction of the momentum distributions allows a precise estimate of the transmission and efficient rejection of strong contaminants. In this experiment the ``target scanning'' approach~\cite{OT-NIMA09} developed in the previous experiment was used to obtain parameters for neutron-rich isotope momentum distribution models such as those of Refs.~\cite{AG-PLB74,DJM-PRC89}. This method is particularly well suited to survey neutron-rich nuclei since the less exotic nuclei are produced with the highest yields and their momentum distributions can be measured with thin targets. The data analysis of this approach has been updated, and a detailed explanation is in preparation~\cite{OT-NIMA12}. Important improvements include: first, that the most probable velocity for a fragment is not that at the center of the target if the yield is sharply rising or falling with momentum, and second, that asymmetric Gaussian distributions have been used, where the asymmetry coefficients have been taken from the convolution model implemented in the \soft{LISE$^{++}$} code~\cite{OT-NIMB08}. Note that, at the energy of these experiments, the shape of the fragment momentum distribution is slightly asymmetric with a low-energy exponential tail stemming from dissipative processes~\cite{OT-NPA04}. Seven targets were used to measure the momentum distributions. The momentum distributions for 122 isotopes were derived and integrated to deduce the production cross sections. A survey of all of the fitted results showed that neutron-rich fragments were produced with significantly higher velocities than the momentum distribution models~\cite{DJM-PRC89,BOR-ZPA83} predict, a result similar to our previous measurements~\cite{OT-NIMA09}.
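As a schematic illustration of how a fitted momentum distribution is integrated to deduce a production yield, the sketch below uses a simple asymmetric Gaussian (different widths below and above the centroid) as the fitted shape. The shape and all parameter values are illustrative assumptions, not the convolution model of the LISE$^{++}$ code.

```python
import numpy as np

def asym_gauss(p, p0, sig_lo, sig_hi, amp):
    """Asymmetric Gaussian: width sig_lo below the centroid p0 (modelling
    the low-momentum tail) and sig_hi above it."""
    sig = np.where(p < p0, sig_lo, sig_hi)
    return amp * np.exp(-0.5 * ((p - p0) / sig) ** 2)

# integrate the fitted shape over momentum; dividing the integrated yield
# by the integrated luminosity (not shown) would give the cross section
p0, sig_lo, sig_hi, amp = 24.0, 0.35, 0.25, 1.0e3   # illustrative values
ps = np.linspace(p0 - 5.0, p0 + 5.0, 20001)
vals = asym_gauss(ps, p0, sig_lo, sig_hi, amp)
yield_integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ps))
```

A convenient consistency check: the closed-form integral of this shape is $\mathrm{amp}\,\sqrt{\pi/2}\,(\sigma_{\mathrm{lo}}+\sigma_{\mathrm{hi}})$, which the trapezoidal sum reproduces.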
\section{Results and Discussion\label{secRes}} \subsection{Production cross section\label{secCS}} \begin{figure*} \includegraphics[width=0.9\textwidth]{fig8.eps}% \caption{(Color online) Inclusive production cross sections for fragments from the reaction of $^{82}$Se with beryllium and tungsten targets shown as a function of mass number. The cross sections with the beryllium targets derived by momentum distribution integration are shown by stars; those normalized with \softsh{LISE$^{++}$}\ transmission calculations are indicated by solid diamonds. The cross sections obtained with the tungsten target were normalized with \softsh{LISE$^{++}$}\ transmission calculations. The red solid lines show the predictions of the \soft{EPAX~3.1} systematics~\cite{KS-EPAX3} for beryllium. The two magenta dashed lines separate nuclei that require neutron pickup in the production mechanism. \label{Fig_CrossSection}} \end{figure*} The inclusive production cross sections for the observed fragments were calculated by correcting the measured yields for the finite momentum and angular acceptances of the separator system. A total of 122 cross sections with beryllium were obtained from the Gaussian fits to the longitudinal momentum distributions; these nuclei are indicated by stars in Fig.~\ref{Fig_CrossSection}. The cross sections for all of the remaining fragments with incompletely measured longitudinal momentum distributions were obtained with estimated transmission corrections, as was done in our previous work~\cite{OT-PRC09}. The angular and momentum transmissions were calculated for each isotope in each setting using a model of the momentum distribution with smoothly varying parameters extracted from the measured parallel momentum distributions. The cross sections obtained for all of the fragments observed in this experiment are shown in Fig.~\ref{Fig_CrossSection} along with the predictions of the recent \soft{EPAX~3.1} parameterization~\cite{KS-EPAX3}.
For those isotopes that relied on transmission calculations, the weighted mean of all measured yields was used to obtain the ``model-based'' cross section (shown by solid diamonds in Fig.~\ref{Fig_CrossSection}). The uncertainties in these cases include the statistical, the systematic, and the transmission uncertainties. For more details see Ref.~\cite{OT-NIMA09}. As can be seen in Fig.~\ref{Fig_CrossSection}, the model-based cross sections are in good agreement with those produced by integrating the measured longitudinal momentum distributions. It is important to note that the predictions of the recent \soft{EPAX~3.1} parameterization for reactions with beryllium, shown by the solid lines in Fig.~\ref{Fig_CrossSection}, reproduce the measured cross sections much better than the previous \soft{EPAX 2.15} predictions~\cite{KS-PRC00}. \begin{figure*} \includegraphics[width=1.0\textwidth]{CS_Qg.eps}% \caption{(Color online) Cross sections for the production of neutron-rich nuclei with odd (left plot) and even (right plot) atomic numbers, with a beryllium target. See text for explanation of $Q_g^*$ and the lines. The cross section for $^{62}$Ti at the center of the proposed new island of inversion~\protect{\cite{BAB-PPNP01}} is circled. \\ \label{Fig_Qg}} \end{figure*} \subsection{Q$_g$ systematics\label{secQg}} The production cross sections for the most neutron-rich projectile fragments have been previously shown to have an exponential dependence on $Q_g$ (the difference in mass-excess of the beam particle and the observed fragment)~\cite{OT-PRC07,OT-PRL09}. To test this behavior, the cross sections for each isotopic chain were fitted with the simple expression: \begin{equation}\label{Eq_Qg} \sigma(Z,A) = f(Z)\exp{(Q^*_{g}/T)}, \end{equation} where $T$ is an effective temperature or inverse slope.
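On a logarithmic scale, Eq.~(\ref{Eq_Qg}) is a straight line, $\ln\sigma=\ln f(Z)+Q^*_{g}/T$, so the inverse slope $T$ of one isotopic chain can be extracted by a linear least-squares fit. The sketch below runs on synthetic values generated from the equation itself, not on the measured cross sections; the function name is ours.

```python
import numpy as np

def fit_inverse_slope(qg, sigma):
    """Linear fit of ln(sigma) versus Qg* for one isotopic chain; returns
    (T, f) with T the inverse slope and f the prefactor f(Z)."""
    slope, intercept = np.polyfit(qg, np.log(sigma), 1)
    return 1.0 / slope, np.exp(intercept)

# synthetic chain generated from Eq. (1) with T = 2.5 MeV and f(Z) = 10 mb
qg = np.linspace(-60.0, -30.0, 8)     # MeV
sigma = 10.0 * np.exp(qg / 2.5)       # mb
T, f = fit_inverse_slope(qg, sigma)
```

On noise-free input the fit returns the generating parameters; with measured cross sections the scatter of the points determines the uncertainty of $T$.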
In this work, odd-even corrections have been applied to $Q_g$ for neutron-odd isotopes; these do not change the slopes of the lines, but they smooth the data and significantly decrease the $\chi^2$-value. This correction has a large effect on the stable isotopes, and practically no influence on very exotic nuclei with weakly bound neutrons. Fig.~\ref{Fig_Qg} shows the production cross sections measured with Be targets in this experiment, where the abscissa, $Q^*_{g}$, is the smoothed difference between the ground-state masses of the projectile and the observed fragment, with the masses taken from Ref.~\cite{KTUY-PTP05}. As in the previous experiment, the heaviest isotopes of elements in the middle of the distribution ($Z=$~19, 20, 21, and 22) appear to break away from the straight-line behavior. The data were fitted by two lines with a floating connection point, and the results are shown by lines in the figure. The behavior of the slopes in the $Q^*_g$ figure is summarized in Fig.~\ref{Fig_TemperatureBe}, where the two individual fitted values of the inverse slope parameter, $T$, for products from Be targets are shown as a function of atomic number. The corresponding plot for the W target is shown in Fig.~\ref{Fig_TemperatureW}. The inverse slopes of the cross sections from the previous experiment~\cite{OT-PRC09} with a $^{76}$Ge beam are shown in these figures for comparison. Based on these figures, we find that the general increase in $T$ for all of the heavy isotopes of the elements $Z=$~19, 20, 21, and 22 observed with a $^{76}$Ge beam is reproduced by this experiment using the $^{82}$Se beam.
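The two-line fit with a floating connection point can be sketched by scanning the breakpoint over the data points and keeping the least-squares optimum. The data below are synthetic and only illustrate the procedure; the function name is ours.

```python
import numpy as np

def fit_broken_line(x, y):
    """Fit y(x) by two straight lines joined at one of the data points,
    scanning the connection point and keeping the least-squares optimum.
    Returns (slope_left, slope_right, breakpoint)."""
    best = (np.inf, None)
    for i in range(2, len(x) - 2):               # candidate breakpoints
        left = np.polyfit(x[:i + 1], y[:i + 1], 1)
        right = np.polyfit(x[i:], y[i:], 1)
        resid = (np.sum((np.polyval(left, x[:i + 1]) - y[:i + 1]) ** 2)
                 + np.sum((np.polyval(right, x[i:]) - y[i:]) ** 2))
        if resid < best[0]:
            best = (resid, (left[0], right[0], x[i]))
    return best[1]

# synthetic ln(sigma) chain: slope 0.4 below the break at Qg* = -36
# and slope 0.8 above it
x = np.linspace(-60.0, -20.0, 11)
y = np.where(x <= -36.0, 0.4 * (x + 36.0), 0.8 * (x + 36.0))
s_left, s_right, brk = fit_broken_line(x, y)
```

Restricting the breakpoint to the measured abscissas keeps the scan cheap and deterministic; a continuous breakpoint would instead require a nonlinear optimizer.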
Small values of the inverse slope parameter $T$ for heavy neutron-rich isotopes of the elements $Z=$~27--33 in Fig.~\ref{Fig_TemperatureBe} can be explained by the fact that these isotopes were produced through transfer reactions (see the dashed lines in Fig.~\ref{Fig_CrossSection}, which show pick-up products), and therefore the well-known $Q_{gg}$-systematics used for two-body processes may be more applicable. \begin{figure} \begin{minipage}{18pc} \includegraphics[width=18.5pc]{GraphTKTYU_Be.eps} \caption{\label{Fig_TemperatureBe}(Color online) Values of the inverse slope parameter, $T$, from the best fit of Eq.~\ref{Eq_Qg} to the experimental cross sections in Fig.~\ref{Fig_Qg}, shown as a function of atomic number. Half-filled green circles denote the heaviest isotopes and half-filled black diamonds the lighter isotopes produced with a $^{82}$Se beam on beryllium targets, whereas open magenta circles and solid brown stars denote isotopes obtained with a $^{76}$Ge beam~\cite{OT-PRC09}. } \end{minipage}\hspace{2pc}% \begin{minipage}{18pc} \vspace{-4.0cm} \includegraphics[width=18.5pc]{GraphTKTYU_W.eps} \caption{(Color online) \label{Fig_TemperatureW} Similar to Fig.~\ref{Fig_TemperatureBe} for data with the tungsten target.} \end{minipage} \end{figure} \section{Summary\label{Summary}} The present study of the fragmentation of a $^{82}$Se beam at 139~MeV/u provided evidence for the production of four previously unobserved neutron-rich isotopes. The momentum distributions and cross sections for a large number of neutron-rich nuclei produced by the $^{82}$Se beam were measured by varying the target thickness in a two-stage fragment separator using a narrow momentum selection. The longitudinal momentum distributions of 122 neutron-rich isotopes of the elements with \protect{$11\le Z\le 32$} were compared to models that describe the shape and centroid of fragment momentum distributions.
New parameters for the semiempirical momentum distribution models~\cite{DJM-PRC89,AG-PLB74} based on the measured momenta were obtained. The most neutron-rich nuclei of the elements with $Z=19$ to 22 were produced with an enhanced rate compared to the systematics of the production cross sections from the $Q_g$ function. This trend was previously reported for the fragmentation of a $^{76}$Ge beam~\cite{OT-PRC09}. \ack The authors would like to acknowledge the operations staff of the NSCL for developing the intense $^{82}$Se beam necessary for this study. This work was supported by the U.S.~National Science Foundation under grants PHY-06-06007 and PHY-11-02511. \section*{References}
\section{Introduction } Throughout this note, all groups are finite. Two nonisomorphic groups $G$ and $H$ are said to form a {\it Brauer pair} if $G$ and $H$ have identical character tables and identical power maps. In \cite{Brauer}, Brauer asked if there exist any such pairs. The first examples of Brauer pairs were found by Dade in \cite{Dade}. Other examples of Brauer pairs can be found in \cite{EiMu} and \cite{NenBr}. In this paper, we find a condition that characterizes when Camina groups of nilpotence class $2$ form Brauer pairs. In \cite{VZgroups}, we defined a {\it VZ-group} to be a group where all the nonlinear irreducible characters vanish off the center, and in that paper, we characterized when two such groups have identical character tables. Using our characterization of the character tables of VZ-groups, A. Nenciu was able to characterize those VZ-groups that form Brauer pairs (see \cite{NenVZ}). A group $G$ is a Camina group if every $g \in G \setminus G'$ has $gG'$ as its conjugacy class. The nilpotent Camina groups of nilpotence class $2$ are VZ-groups. These have also been studied under the name semi-extraspecial groups in \cite{Beis} and \cite{Ver}. (One can see in \cite{extreme} and \cite{Norit} that the condition semi-extraspecial is equivalent to the condition of being a Camina $p$-group of nilpotence class $2$.) Our characterization in \cite{VZgroups} simplifies for Camina groups of nilpotence class $2$ to say: let $G$ and $H$ be nilpotent Camina groups of nilpotence class $2$. Then $G$ and $H$ have identical character tables if and only if $\gpindex G{G'} = \gpindex H{H'}$ and $\card {G'} = \card {H'}$. Our goal in this note is to show that Nenciu's characterization of Brauer pairs of VZ-groups also has an easy simplification for Camina groups of nilpotence class $2$. Any nilpotent Camina group must be a $p$-group for some prime $p$, and any VZ-group will be the direct product of a $p$-group and an abelian $p'$-group.
Also, Nenciu showed in \cite{NenVZ} that there are no Brauer pairs of VZ-groups that are also $2$-groups. Therefore, we may assume that we are working with Camina $p$-groups where $p$ is an odd prime. Finally, if $P$ is a $p$-group, then we have the subgroups $\mho_1 (P) = \langle x^p \mid x \in P \rangle$ and $\Omega_1 (P) = \langle x \in P \mid x^p = 1 \rangle$. When $p$ is odd and $P$ has nilpotence class $2$, it is known that in fact $\mho_1 (P) = \{ x^p \mid x \in P \}$. With $\mho_1 (P)$ in hand, we can state our simplified condition that characterizes Brauer pairs of Camina $p$-groups with nilpotence class $2$. It says that if $P$ and $Q$ are Camina $p$-groups of nilpotence class $2$, then $P$ and $Q$ form a Brauer pair if and only if $\gpindex P{P'} = \gpindex Q{Q'}$, $\card {P'} = \card {Q'}$, and $\card {\mho_1 (P)} = \card {\mho_1 (Q)}$. In other words, the only hypothesis needed to have a Brauer pair beyond the conditions to have identical character tables is that $\mho_1 (P)$ and $\mho_1 (Q)$ have the same order. \begin{thm}[Main Theorem] Let $P$ and $Q$ be nonisomorphic Camina $p$-groups of nilpotence class $2$ for some odd prime $p$. Then $P$ and $Q$ form a Brauer pair if and only if $\gpindex P{P'} = \gpindex Q{Q'}$, $\card {P'} = \card {Q'}$, and $\card {\mho_1 (P)} = \card {\mho_1 (Q)}$. \end{thm} We conclude this introduction by noting that we will show the condition that $\card {\mho_1 (P)} = \card {\mho_1 (Q)}$ is equivalent to $\gpindex P{\Omega_1 (P)} = \gpindex Q{\Omega_1 (Q)}$ for these groups. Hence, the Main Theorem could also be restated as saying that when $P$ and $Q$ are nonisomorphic Camina $p$-groups of nilpotence class $2$, then $P$ and $Q$ form a Brauer pair if and only if $\gpindex P{P'} = \gpindex Q{Q'}$, $\card {P'} = \card {Q'}$, and $\gpindex P{\Omega_1 (P)} = \gpindex Q{\Omega_1 (Q)}$. \section{Results} Before we prove the main theorem, we need to state some notation. 
Let $P$ be a $p$-group with nilpotence class $2$, where $p$ is an odd prime. We define $\nu_P : P \rightarrow P$ by $\nu_P (x) = x^p$. When $P'$ is elementary abelian, it is not difficult to see that $\nu_P$ is a homomorphism of $P$ and its image is $\mho_1 (P)$. The facts that $p$ is odd and that $P'$ is central and elementary abelian are needed for $\nu_P$ to be a homomorphism. Also, $\ker {\nu_P} = \{ x \in P \mid x^p = 1 \} = \Omega_1 (P)$. Hence, we have $\gpindex P{\Omega_1 (P)} = \card {\mho_1 (P)}$. As $P'$ is elementary abelian, $P' \le \ker {\nu_P}$, and if $P/\gpcen P$ is elementary abelian, then $\mho_1 (P) \le \gpcen P$. If $P$ is a VZ-group, both of these occur; so when $P$ is a VZ-group, we can view $\nu_P : P/P' \rightarrow \gpcen P$ as a homomorphism. We can also define the projection map $\phi_P : \gpcen P \rightarrow P/P'$ by $\phi_P (z) = zP'$ for every $z \in \gpcen P$. In \cite{NenVZ}, Nenciu showed that if $P$ and $Q$ are nonisomorphic VZ-groups associated with the prime $p$, then $P$ and $Q$ form a Brauer pair if and only if there exist isomorphisms $\hat\alpha : P/P' \rightarrow Q/Q'$ and $\hat\beta : \gpcen P \rightarrow \gpcen Q$ so that $\hat\alpha \circ \phi_P = \phi_Q \circ \hat\beta$ and $\nu_Q \circ \hat\alpha = \hat\beta \circ \nu_P$. \begin{proof}[Proof of Main Theorem] We know when $P$ is a Camina group with nilpotence class $2$ that $\gpcen P = P'$. It follows that $\phi_P$ is the trivial map. Similarly, $\phi_Q$ will be the trivial map. Thus, Nenciu's condition for $P$ and $Q$ to be a Brauer pair is equivalent to finding isomorphisms $\hat\alpha$ and $\hat\beta$ so that $\nu_Q \circ \hat\alpha = \hat\beta \circ \nu_P$. Suppose that $P$ and $Q$ form a Brauer pair. Then isomorphisms $\hat\alpha$ and $\hat\beta$ exist. We already know that this implies that $\gpindex P{P'} = \gpindex Q{Q'}$ and $\card {P'} = \card {Q'}$. 
We have $\hat\beta \circ \nu_P (P/P') = \hat\beta (\nu_P (P/P')) = \hat\beta (\mho_1 (P))$, and $\nu_Q \circ \hat\alpha (P/P') = \nu_Q (\hat\alpha (P/P')) = \nu_Q (Q/Q') = \mho_1 (Q)$. It follows that $\hat\beta (\mho_1 (P)) = \mho_1 (Q)$, and we conclude that $\card {\mho_1 (P)} = \card {\mho_1 (Q)}$ as desired. We now suppose that $\gpindex P{P'} = \gpindex Q{Q'}$, $\card {P'} = \card {Q'}$, and $\card {\mho_1 (P)} = \card {\mho_1 (Q)}$. Now, $\mho_1 (P) \le \gpcen P = P'$, so $\mho_1 (P)$ is an elementary abelian $p$-group. Similarly, $\mho_1 (Q) \le Q'$ is an elementary abelian $p$-group. Since $\mho_1 (P)$ and $\mho_1 (Q)$ have the same size, they are isomorphic. Write $\beta$ for an isomorphism from $\mho_1 (P)$ to $\mho_1 (Q)$. Since $P'$ and $Q'$ are elementary abelian of the same size, we can extend $\beta$ to an isomorphism $\hat\beta$ from $P'$ to $Q'$. We know that $P' \le \Omega_1 (P)$ and $Q' \le \Omega_1 (Q)$. Since $P/P'$ is an elementary abelian $p$-group, there is a subgroup $A$ of $P$ so that $P/P' = \Omega_1 (P)/P' \times A/P'$. Similarly, there is a subgroup $B$ of $Q$ so that $Q/Q' = \Omega_1 (Q)/Q' \times B/Q'$. Now, $\Omega_1 (P)$ is the kernel of $\nu_P$, so we know that $\nu_P : P/\Omega_1 (P) \cong \mho_1 (P)$. Restricting $\nu_P$ to $A$, we have that $\nu_P : A/P' \cong \mho_1 (P)$. Similarly, $\nu_Q : B/Q' \cong \mho_1 (Q)$. We can define $\alpha : A/P' \rightarrow B/Q'$ to be the unique map so that $\nu_Q \circ \alpha = \beta \circ \nu_P$. Observe that $\Omega_1 (P)/P'$ and $\Omega_1 (Q)/Q'$ are elementary abelian groups of the same size. Hence, there is an isomorphism $a : \Omega_1 (P)/P' \rightarrow \Omega_1 (Q)/Q'$. Note that $\nu_Q \circ a = 1$. We can define an isomorphism $\hat\alpha : P/P' \rightarrow Q/Q'$ by $\hat\alpha = (\alpha,a)$. We observe that $\nu_Q \circ \hat\alpha = \hat\beta \circ \nu_P$: on $A/P'$ this is the relation $\nu_Q \circ \alpha = \beta \circ \nu_P$, and on $\Omega_1 (P)/P'$ both sides are trivial. 
\end{proof} Now, suppose that $p$ is an odd prime, and let $P$ be a Camina $p$-group with nilpotence class $2$ and $\card {P'} = p^n$. Thus, there are $n+1$ choices for the value of $\card {\mho_1 (P)}$. If the number of Camina groups with $\card {P'} = p^n$ and $\gpindex P{P'} = p^{2m}$ with $m \ge n$ is bigger than $n+1$, then this will be another source of Brauer pairs. We will have that $P$ and $Q$ form a Brauer pair when $P$ and $Q$ are Camina groups of nilpotence class $2$ and exponent $p$ with $\gpindex P{P'} = \gpindex Q{Q'}$ and $\card {P'} = \card {Q'}$. Verardi has shown in \cite{Ver} a number of ways of constructing nonisomorphic Camina $p$-groups with nilpotence class $2$ and exponent $p$ for a fixed $\gpindex P{P'}$ and $\card {P'}$. Thus, we get a number of Brauer pairs this way. If $n = 1$, then $P$ is extra-special of order $p^{2m+1}$. We know that there are $2$ such groups: one with exponent $p$ (and hence, $\card {\mho_1 (P)} = 1$) and one with exponent $p^2$ (and hence, $\card {\mho_1 (P)} = p$). Thus, we do not get any Brauer pairs here. Of course, this fact is well-known. Next, we decided to look at the Camina groups with $\gpindex P{P'} = p^4$ and $\card {P'} = p^2$ using MAGMA and the library of small groups. Notice that there are $3$ possibilities for the order of $\mho_1 (P)$. We have checked the primes up to $31$. For each prime, we have found that there are $p+3$ Camina groups with $\gpindex P{P'} = p^4$ and $\card {P'} = p^2$. For each prime that we tested, one group has exponent $p$, and so, $\card {\mho_1 (P)} = 1$; one group has $\card {\mho_1 (P)} = p$; and the remaining $p+1$ groups have $\card {\mho_1 (P)} = p^2$. We note that one of the groups in the last class has $\Omega_1 (P)$ nonabelian, while the remaining $p$ groups in that class have $\Omega_1 (P)$ abelian. 
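The extra-special dichotomy can be checked by brute force in the smallest case $p = 3$. The following is a minimal Python sketch (our own illustration, separate from the MAGMA computations mentioned above), realizing the exponent-$3$ group as the Heisenberg group of upper unitriangular matrices over ${\mathbb F}_3$ and the exponent-$9$ group through its standard presentation $\langle a, b \mid a^9 = b^3 = 1,\ bab^{-1} = a^4\rangle$:

```python
from itertools import product

p = 3

# Exponent-p extraspecial group of order 27: upper unitriangular 3x3
# matrices over F_3, encoded as triples (a, b, c) <-> [[1,a,c],[0,1,b],[0,0,1]].
def heis_mul(x, y):
    a1, b1, c1 = x
    a2, b2, c2 = y
    return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2 + a1 * b2) % p)

# Exponent-p^2 extraspecial group of order 27: elements written a^i b^j
# with a^9 = b^3 = 1 and b a b^-1 = a^4, so b^j a^k b^-j = a^(k*4^j).
def mod_mul(x, y):
    i, j = x
    k, l = y
    return ((i + k * pow(4, j, 9)) % 9, (j + l) % 3)

def cube(mul, x, identity):
    y = identity
    for _ in range(p):
        y = mul(y, x)
    return y

heis = list(product(range(p), repeat=3))
mod27 = list(product(range(9), range(3)))

# mho_1(P) = set of p-th powers (a subgroup here, since p is odd, class 2)
mho_heis = {cube(heis_mul, x, (0, 0, 0)) for x in heis}
mho_mod = {cube(mod_mul, x, (0, 0)) for x in mod27}

print(len(mho_heis), len(mho_mod))   # 1 3
```

Both groups have $\gpindex P{P'} = p^2$ and $\card {P'} = p$, so by the Main Theorem it is exactly the differing orders of $\mho_1 (P)$ ($1$ versus $p$) that rule out a Brauer pair.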
Thus, we obtain Brauer pairs by taking any two nonisomorphic Camina groups with nilpotence class $2$ where $\gpindex P{P'} = p^4$, $\card {P'} = p^2$, and $\card {\mho_1 (P)} = p^2$. We note that two examples in \cite{NenVZ} motivated our study, and those examples actually meet the hypotheses of our theorem. Both examples have $\card {\mho_1 (P)} = p^2$, and it follows that $\card {\Omega_1 (P)} = p^4$; one of the examples has $\Omega_1 (P)$ abelian and the other has $\Omega_1 (P)$ nonabelian.
\section{Introduction} What can the pursuits of (\emph{i}) investigating quantum entanglement, via multicomponent wavefunctions, on the one hand, and (\emph{ii}) studying frequency array data in order to infer species evolution in molecular phylogenetics, on the other -- both hot topics in their respective fields -- possibly have to do with one another? Quite a lot, as it turns out -- as becomes clear, once the elegant connections with group representations and tensor analysis are made transparent. The following is an overview of some of the salient background, and a biassed selection of applications of invariant theory to the respective topics, arising from our recent work in both areas. The results which we report here provide novel instances of how group representation theory, and specifically classical invariant theory, can provide well-founded and useful tools for practitioners, in the realms of both quantum information, and mathematical biology. Given a group $G$ and a $G$-module $V$ (a space carrying a linear $G$ action, or representation), there is a standard construct ${\mathbb C}{[}V{]}$, the space of `polynomials in the components of the vectors in $V$'. Natural objects of special interest in this space are the `invariants', that is, functions $f(x)$ which are unchanged (up to scalar multiplication)\footnote{Of course, $\lambda_g$ must be a one-dimensional representation, $\lambda_g \lambda_h = \lambda_{gh}$, which for the cases studied here will be realized by various matrix determinants.} under the action of $G$, $f(g\!\cdot\! x) = \lambda_g f(x)$, and we would like to characterize the sub-ring of invariants, $I(V) := {\mathbb C}{[}V{]}^G$. In view of the grading of ${\mathbb C}{[}V{]}$ by degree, the coarsest characterization is the associated Molien series, $h(z) = \sum_0^\infty h_n z^n$ with $h_n = Dim({\mathbb C}{[}V{]}{}^G_n)$. 
In well-behaved cases, $I(V)$ has a regular structure (and is finitely generated), and $h(z)$ is a very pleasant rational function. For $G$ semi-simple and compact, Molien's theorem \cite{molien1897invarianten} gives an integral representation of $h(z)$ via the Haar measure on $G$. Knowledge of $h(z)$ and of a set of generators of $I(V)$ is generally important for applications. For example if $V$ is the adjoint representation, with $G$ semi-simple, Harish-Chandra's isomorphism states that $I(V)$ is a polynomial ring, whose generators are nothing but the fundamental (Casimir) invariants for the Lie algebra ${\mathfrak g}=L(G)$ of $G$. For a comprehensive introduction to the theory of representations and invariants of the classical groups see for example Goodman \cite{Goodman1998}. We now turn to our discussion of applications. \section{Application I -- Quantum entanglement} In nonrelativistic quantum mechanics with continuous variable systems, we work with the Schr\"{o}dinger representation, whose uniqueness is guaranteed by the celebrated Stone-von Neumann theorem. The $V$'s are thus various complex $L^2$ spaces and, for multi-partite systems, tensor products thereof. However, for purely `spin' systems, where the state space is spanned by a finite set of eigenstates of some selected observable quantity, the Hilbert spaces are simply finite-dimensional complex vector spaces $V \cong {\mathbb C}^N$. Our interest here is in composite systems with $K$ parts. In the context of quantum information, a subsystem with dimension $D$ is referred to as a `qu$D$it'. For $K$ qu$D$its, then, $N=D^K$. The simplest case occurs for $D=2$ (corresponding to spin-$\frac 12$, for example `up' or `down' electronic spin states in an atom) and we have $K$ `qubits', with $V$ the $K$-fold tensor product ${\mathbb C}^2 \otimes {\mathbb C}^2 \otimes \cdots {\mathbb C}^2$ of dimension $N=2^K$. 
The quantum state of the system as a whole is described as usual by a vector in the total space $V$, but we imagine experimenters Alice, Bob, Carol, $\cdots$, and Karl who are each able to access only one subsystem. In the oft-described scenario of `spooky-action-at-a-distance', Alice, Bob, Carol, $\cdots$, and Karl, despite remaining in their spatially separated labs, each manipulate their own subsystem independently, but observable outcomes between their measurements, and those in their colleagues' labs, are nonetheless not independent -- the properties of each subsystem's quantum state in this case are correlated with those of the other $K\!-\!1$ subsystems, and the overall state is described as `entangled'. One strategy available to each of Alice, Bob, Carol, $\cdots$, and Karl is simply to let his or her individual subsystem change under some time evolution, which can be engineered independently of the others. However, such local transformations do not affect the entanglement of the joint, $K$-party quantum state of the system as a whole: hence, any proposed numerical measure of entanglement must be invariant under appropriate symmetry transformations. Since standard time evolution of quantum states is represented by unitary operators, entanglement measures should therefore be invariant under the Cartesian product of $K$ unitary groups, each acting on one experimenter's qu$D$it Hilbert space. In the qubit case, then, the symmetry group is just\footnote{More general procedures open to Alice, Bob, Carol, $\cdots$, and Karl involve various types of general quantum operations (measurements). For example, under reversible operations which succeed only with some probability less than one, the transformation group on each subsystem would be extended from $U(2)$ to $GL(2,{\mathbb C})$, and the group as a whole would become $\times^K GL(2,{\mathbb C})$. 
Of course, such local transformations do modify state entanglement, although numerical measures which are {bona fide} \emph{entanglement monotones} are defined to be \emph{nonincreasing} under such changes \cite{Vidal2000em}.} $G = U(2)\times U(2) \times \cdots \times U(2)$ acting on the said $K$-fold tensor product space $V \cong \otimes^K {\mathbb C}^2$. The invariants from $I(V)$ are perfectly suited to quantifying these local quantum effects, and are hence referred to as `local entanglement invariants'. There is great interest in using these invariants to build complete entanglement measures \cite{Vidal2000em}, and the first problem is to characterize and evaluate the invariants in different situations. A famous case in point for tripartite entanglement ($K=3$) is the use of the Cayley hyperdeterminant, which is called the \emph{tangle} in the physics literature \cite{CoffmanKunduWootters2000de}. See \cite{HorodeckiEtAl:2009qe} for a recent review of the topic of quantum entanglement. A less well studied case is that of so-called mixed states, where the system itself is described in a statistical sense (an ensemble of electrons, each of whose members is an electron described by a state vector which is an equal superposition of spin `up' and spin `down', is physically very different from an ensemble wherein, in 50\% of instances, the electron spin is `up', and in the other 50\%, the electron spin is `down'). The state is now specified by a density operator (a self-adjoint positive semidefinite linear operator on $V$ of unit trace), and hence transforms in the adjoint representation $\cong V \otimes V^*$. Even just for $K=2$, that is for \emph{two} qubit mixed states, the structure of the invariant ring is quite rich, for example being considerably more complicated than the \emph{four} qubit pure state case \cite{VerStraeteEtAl2002fq9e}. 
The Molien series \cite{GrasslEtAl1998cli,makhlin2002nlp,KingWelshJarvis2007} \[ h(z) = \frac{1+z^4 + z^5 + 3z^6 + 2z^7+ 2z^8 +3z^9 +z^{10} + z^{11} + z^{15} } {(1-z)(1-z^2)^3(1-z^3)^2 (1-z^4)^3 (1-z^6)} \] enumerates a plethora of primary and secondary invariants, whose precise role in the formulation of suitable entanglement measures is still not completely tied down \cite{GrasslEtAl1998cli,makhlin2002nlp}. \section{Application II -- Phylogenetics} What of molecular phylogenetics? The simplest, so-called `general Markov model' of molecular evolution \cite{barry1987,semple2003phylogenetics} is given as follows. For a given set of $K$ species (`taxonomic units'), a probabilistic description of some set of $D$ observed characters is adopted. Models are constructed that describe the frequency of patterns derived from morphological features, or in molecular phylogenetics, from alignments of homologous nucleic acid sequences, nucleotide bases $\{A,C,G,T\}$, $D=4$; or of homologous proteins, amino acid residues $\{A,R,N,D,C,E,Q,G,H,I,L,K,M,F,P,S,T,W,Y,V\}$, $D=20$; or a variety of other molecular motifs or repeated units. These models are constructed by assuming that molecular sequences evolve from a common ancestor via a Markov process, punctuated by speciation events. The data, corresponding to the observed frequencies, are taken as a sample of the probabilities on the basis that each site in the alignment independently follows an identical random process. These assumptions are contestable, but are well motivated by considerations of finding a balance between biological realism and statistical tractability. Contained within this model is the description of the evolution of the $K$ \emph{extant} species and their characters. This is a process whereby the $K\!$-way probability array, sampled by the pattern frequencies, evolves according to the tensor product of $K$ independent $D \times D$ Markov transition matrices. 
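The two-qubit mixed-state Molien series displayed in the previous section can be expanded degree by degree from its rational form by elementary power-series manipulation. A minimal pure-Python sketch (our own illustration, using only the series as quoted above) recovers the leading dimensions $h_0, h_1, \ldots = 1, 1, 4, 6, 16, \ldots$ of the graded spaces of local-unitary invariants:

```python
# Expand h(z) = N(z)/D(z) as a power series up to degree TOP.
TOP = 12

def poly_mul(a, b, top=TOP):
    out = [0] * (top + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= top:
                    out[i + j] += ai * bj
    return out

def one_minus(k, top=TOP):
    # coefficients of (1 - z^k), truncated
    c = [0] * (top + 1)
    c[0] = 1
    if k <= top:
        c[k] = -1
    return c

# numerator 1 + z^4 + z^5 + 3z^6 + 2z^7 + 2z^8 + 3z^9 + z^10 + z^11 + z^15
num = [0] * (TOP + 1)
for deg, coeff in [(0, 1), (4, 1), (5, 1), (6, 3), (7, 2), (8, 2),
                   (9, 3), (10, 1), (11, 1), (15, 1)]:
    if deg <= TOP:
        num[deg] = coeff

# denominator (1-z)(1-z^2)^3 (1-z^3)^2 (1-z^4)^3 (1-z^6)
den = [1] + [0] * TOP
for k, mult in [(1, 1), (2, 3), (3, 2), (4, 3), (6, 1)]:
    for _ in range(mult):
        den = poly_mul(den, one_minus(k))

# solve h * den = num for the coefficients h_n, degree by degree
h = []
for n in range(TOP + 1):
    acc = num[n] - sum(h[k] * den[n - k] for k in range(n))
    h.append(acc // den[0])

print(h[:8])
```

The degree-$2$ count of $4$, for instance, matches the independent quadratic invariants $({\rm Tr}\,\rho)^2$, ${\rm Tr}\,\rho^2$, ${\rm Tr}\,\rho_A^2$ and ${\rm Tr}\,\rho_B^2$.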
This scenario is analogous to the set-up of quantum entanglement described above, and algebraically it becomes an instance of the classical invariant theory problem, by extending the \emph{set} of Markov matrices to the smallest containing matrix \emph{group}. In the case of continuous-time models, this is no difficulty, as the matrices describing substitution rates between molecular units formally belong to the relevant matrix Lie algebra \cite{sumner2012}, and the Markov transition matrices are their matrix exponentials -- and are hence invertible. From this algebraic perspective, it also makes sense to work over $\mathbb{C}$ from the outset, and later examine stochastic parameter regions as required for applications. This will be elaborated upon for a specific example below. The said $K$-fold tensor product module ${\mathbb C}^D \otimes {\mathbb C}^D \otimes \cdots \otimes {\mathbb C}^D$ thus now transforms under $G=GL_1(D) \times GL_1(D) \times \cdots \times GL_1(D)$, where the non-reductive group\footnote{This group is thus the workhorse of Markov models, playing a role analogous to $GL(D)$, which Weyl in his book famously referred to as `her all-embracing majesty' amongst the classical groups.} $GL_1(D)$ is the Markov \emph{stochastic group} of invertible $D\!\times\!D$ unit row-sum matrices \cite{johnson1985,mourad2004} ($GL_1(D)$ is of course a matrix subgroup of $GL(D)$, and is isomorphic to the affine group $A\!f\!\!f_{D\!-\!1}$ in one dimension lower; the \emph{doubly stochastic} group is the subgroup having unit row- \emph{and} column-sums, and is isomorphic to $GL(D\!-\!1)$ (see \S A below)). In this non-reductive case there is no Molien theorem, and no guarantee of the invariant ring even being finitely generated. However, there is no difficulty in counting one dimensional representations degree by degree in tensor powers, and indeed we have shown that a slightly modified version of the standard combinatorial results applies (see Appendix). 
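The group property of unit row-sum matrices invoked here is elementary but worth seeing concretely. A small sketch with hypothetical $2\times 2$ matrices checks that products and inverses of unit row-sum matrices again have unit row sums (note that the inverse of a Markov matrix generally has negative entries, so it lies in $GL_1(2)$ without being stochastic):

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # analytic inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def row_sums(A):
    return [sum(row) for row in A]

# two invertible unit row-sum (Markov-type) matrices, exact arithmetic
M1 = [[F(7, 10), F(3, 10)], [F(2, 10), F(8, 10)]]
M2 = [[F(9, 10), F(1, 10)], [F(4, 10), F(6, 10)]]

prod = mat_mul(M1, M2)
inv = inv2(M1)
print(row_sums(prod), row_sums(inv))   # all row sums equal 1
```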
In practical terms, this allows us to identify useful invariants for the purposes of phylogenetic inference. In this context we call such objects \emph{Markov invariants}. One such quantity, the so-called `\texttt{logDet}', has in fact been known and used by phylogenetic practitioners for over two decades \cite{barry1987,lake1994,lockhart1994}. For the case of two taxa, the determinant function of the 2-fold phylogenetic tensor array (a polynomial of degree $D$) certainly spans a one-dimensional representation under the action of $GL(D) \times GL(D)$ itself, in fact transforming as $Det \otimes Det$, and thus is necessarily an invariant of the Markov subgroup. Taking the (negative) log, and with the usual matrix relation $-\ln Det = -Tr \ln$, we recover the (negative) sum of the traces of the rate generators, multiplied by the evolved time. Modulo some care with the distribution of characters belonging to the presumed common ancestor of the two taxa, this can be taken as a measure of the total `evolutionary distance' between them, essentially the sum of all the individual rates changing characters into one another, multiplied by the time. The \texttt{logDet} can be recorded for all pairs of taxa, using marginalisations of the $K$-fold probability array, and thus leads to a robust `distance-based' method for phylogenetic inference. In fact, Buneman's theorem \cite{buneman1971} guarantees reconstruction of a tree from a pairwise `metric' satisfying certain additional conditions. Using our technical results, Markov invariants beyond the two-fold case can be counted and constructed, and it is an important matter of principle to investigate them. In data sets where the number of species $K$ is large, where the pairwise nature of \texttt{logDet} can lead to significant loss of evolutionary information, they may also provide alternative or supplementary information to help with inference. 
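The transformation property underlying the \texttt{logDet} can be made concrete for two taxa and binary characters. In the sketch below (our own illustration, with hypothetical parameter values and the row-stochastic convention), the joint pattern array transforms as $P' = M_1^T P M_2$, so that $\det P' = \det M_1 \det M_2 \det P$ and $-\ln\det$ picks up an additive contribution from each branch:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# hypothetical unit row-sum transition matrices on the two branches
M1 = [[0.9, 0.1], [0.2, 0.8]]
M2 = [[0.85, 0.15], [0.05, 0.95]]

# a joint pattern-frequency array for the two taxa (entries sum to 1)
P = [[0.4, 0.1], [0.1, 0.4]]

P_new = mat_mul(mat_mul(transpose(M1), P), M2)

# det transforms one-dimensionally: det P' = det M1 * det M2 * det P
lhs = det2(P_new)
rhs = det2(M1) * det2(M2) * det2(P)

# -log det is therefore additive in the per-branch contributions
logdet_change = -math.log(lhs) + math.log(det2(P))
branch_sum = -math.log(det2(M1)) - math.log(det2(M2))
print(abs(lhs - rhs) < 1e-9, abs(logdet_change - branch_sum) < 1e-9)
```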
In view of the previous discussion of quantum entanglement, it turns out that for the case of binary characters ($D=2$), and three-fold arrays ($K=3$) or tripartite marginalisations of higher arity arrays, the Cayley hyperdeterminant (degree $n=4$) is precisely such a candidate \cite{sumner2005}, and we have identified analogous low-degree `tangles' for $D = 3$ and 4 \cite{sumner2006}. For four taxa, $K=4$, and four characters, $D=4$, we have found a remarkable, symmetrical set of three degree-five, $n=5$, Markov invariants dubbed the `squangles' (\underline{s}tochastic \underline{qu}artet t\underline{angles}) \cite{sumner2008,sumner2009,holland2013low}. A simple least squares analysis of their values \cite{holland2013low} allows a direct ranking of the three possible unrooted tree topologies for quartets\footnote{It is here that careful account of the stochastic parameter regime should be taken, as a crucial aspect of the least squares analysis requires certain inequalities to hold.}. The squangles provide a low-parameter and statistically powerful way of resolving quartets based on the general Markov model \cite{holland2013low}, without any special assumptions about the types of rate matrices in the model, and independently of any recourse to pairwise distance measures. They are useful because many reconstruction methods for large trees build a `consensus tree' from some kind of ranking of quartet subtrees, where robust decisions at the quartet level are absolutely crucial. Further details are given in the appendix, \S A. It must be noted that Markov invariants are in general distinct from the so-called `phylogenetic invariants' \cite{CavenderFelsenstein1987}. These are polynomials that evaluate to zero for a subset of phylogenetic trees regardless of particular model parameters, and hence can serve in principle to discriminate trees and models. Their formal presentation can be given in terms of algebraic geometry \cite{Lake1987,allman2004}. 
However, in contrast to Markov invariants which are 1-dimensional $G$-modules, phylogenetic invariants in general belong to high-dimensional $G$-modules \cite{allman2013,sumner:jarvis:2013cpi}. Our Markov invariants are necessarily quite large objects -- they are polynomials of reasonably high degree in a significant number of variables. For example, the squangles are degree 5 polynomials in the components of a $4^4 = 256$-element array, and given their combinatorial origins, it is perhaps not surprising to find that they each have 66,744 terms\footnote{This is still $\ll O(256^5)$.}. However, once defined, there is no numerical problem with evaluations\footnote{Explicit forms for the squangles, together with \texttt{R} code for their evaluation, are available from the authors.} -- their utility is in their ability to syphon useful information out of the complexity of the data. As such they provide a viable alternative to parameter-estimation intensive phylogenetic methods, where massive likelihood optimisations are required, in order to make decisions about much more tightly specified models. \paragraph{Acknowledgements} The authors thank E Allman, D Ellinas, B Fauser, J Fern\'{a}ndez-S\'{a}nchez, B Holland, R King, J Rhodes, M Steel and A Taylor for helpful discussions and correspondence on this research. JGS acknowledges the support of the Australian Research Council grant DP0877447 for part of this work. PDJ acknowledges the support of the Australian-American Fulbright Commission for the award of a senior scholarship for part of this work. \begin{appendix} \section{Counting invariants: some character theorems} The mathematical setting for both the study of entanglement measures for composite quantum systems, and of analogous quantities for the setting of phylogenetics, is that there is a model space $V$ which is a $K$-fold tensor product, $V \cong {\mathbb C}^D\otimes {\mathbb C}^D\otimes \cdots \otimes {\mathbb C}^D$. 
In the case of quantum mechanics the components of $V$ in some standard basis describe the state; for example in Dirac notation a pure state is a ket $|\Psi \rangle \in V$ of the form $ |\Psi \rangle = \sum_{0}^{D\!-\!1} \Psi_{i_1 i_2 \cdots i_K} |i_1,i_2, \cdots, i_K \rangle $ in the case of qu$D$its (see below for mixed states). In the phylogenetic case we simply have a $K$-way frequency array ${\{} P_{i_1 i_2 \cdots i_K} {\}}$ sampling the probability of a specific pattern, say ${i_1 i_2 \cdots i_K}$, where each $i_k \in {\{} A,C,G,T {\}}$ for nucleotide data, at a particular site in a simultaneous alignment of a given homologous sequence across all $K$ of the species under consideration. We focus attention on the linear action of the appropriate matrix group $G = G_1 \times G_2 \times \cdots \times G_K$ on $V$. In the quantum qu$D$it case each local group $G_k$ is a copy of $U(D)$, but given the irreducibility of the fundamental representation, for polynomial representations the analysis can be done using the character theory of the complex group\footnote{This technical point is different from the previous observation about extending the analysis to allow local quantum operations and communication of these between parties.} $GL(D, {\mathbb C})$. This group is too large for the phylogenetic case, where the pattern frequency array $P$ evolves as $P \rightarrow P' := g \cdot P$, namely \[ P' = M_1 \otimes M_2 \otimes \cdots \otimes M_K \cdot P \] where each $M_k$ belongs to the stochastic Markov group $GL_1(D,{\mathbb C})$ (the group of nonsingular complex unit row-sum $D \!\times\!D$ matrices). We compute the Molien series $h(z) = \sum_0^\infty h_n z^n$ for ${\mathbb C}{[} V{]}^G$ degree-by-degree using combinatorial methods based on classical character theory for $GL(D)$, adapted slightly for the stochastic case $GL_1(D)$, which we now describe. All evaluations are carried out using the group representation package ${}^\copyright \!$\texttt{Schur} \cite{SCHUR}. 
In terms of class parameters (eigenvalues) $x_1,x_2,\cdots, x_D$ for a nonsingular matrix $M \in GL(D)$, the defining representation, the character is simply $Tr(M) = x_1+ x_2+ \cdots + x_D$; the contragredient has character $Tr(M^T{}^{-1}) = x_1{}^{-1}+ x_2{}^{-1}+ \cdots + x_D{}^{-1}$. Irreducible polynomial and rational characters of $GL(D)$ are given in terms of the celebrated Schur functions \cite{Weyl1939,littlewood1940} denoted $s_\lambda(x)$, where $\lambda = (\lambda_1,\lambda_2,\cdots,\lambda_D)$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_D$, is an integer partition of at most $D$ nonzero parts. $\ell(\lambda)$, the length of the partition, is the index of the last nonzero entry (thus $\ell(\lambda)=D$ if $\lambda_D >0$). $|\lambda|$, the weight of the partition, is the sum $|\lambda|=\lambda_1+\lambda_2 + \cdots + \lambda_D$, and we write $\lambda \vdash |\lambda|$. For brevity we write the Schur function simply as $\{\lambda \}$ where the class parameters are understood. Thus the space $V$ as a representation of $G$ as a $K$-fold Cartesian product is endowed with the corresponding product of $K$ characters of the above defining representation of each local group, $\chi= \{1\} \cdot \{1\}\cdot \, \cdots \, \cdot \{1\}$ in the quantum mechanical pure state and stochastic cases, and $\chi = (\{1\} \{\overline{1}\}) \! \cdot \! (\{1\} \{\overline{1}\})\!\cdot \, \cdots \, \cdot\!(\{1\} \{\overline{1}\})$ in the quantum mechanical mixed state case, where $\{1\}$ is the character of the defining representation, and $\{\overline{1}\}$ that of its contragredient. The space of polynomials of degree $n$ in $\Psi$ or $P$, ${\mathbb C}{[}V{]}_n$, is a natural object of interest and by a standard result is isomorphic to the $n$-fold symmetrised tensor product $V \vee V \vee \cdots \vee V$, a specific case of a Schur functor: ${\mathbb S}_{\{n\}}(V)$. 
Its character is determined by the corresponding Schur function \emph{plethysm}, $\chi \underline{\otimes} \{n\}$, and the task at hand is to enumerate the one-dimensional representations occurring therein. Before giving the relevant results it is necessary to note two further rules for combining Schur functions. The \emph{outer} Schur function product is simply the pointwise product of Schur functions, arising from the character of a tensor product of two representations. Of importance here is the \emph{inner} Schur function product $\ast$ defined via the Frobenius mapping between Schur functions and irreducible characters of the symmetric group. We provide here only the definitions sufficient to state the required counting theorems in technical detail. For a more comprehensive, Hopf-algebraic setting for symmetric functions and characters of classical (and some non-classical) groups see \cite{FauserJarvis2003hl,FauserJarvisKing2006nbr}. Concretely, we introduce structure constants for inner products in the Schur function basis as follows: \[ \{\lambda \} \ast \{ \mu \} = \sum_\nu g^\nu_{\lambda,\mu} \{\nu \}. \] For partitions $\lambda$, $\mu$ of equal weight\footnote{If $|\lambda| \ne |\mu|$ then $\{\lambda \} \ast \{ \mu \}=0$.}, $|\lambda| = |\mu|= n$, say, this expresses the reduction of a tensor product of two representations of the symmetric group ${\mathfrak S}_n$ labelled by partitions $\lambda$, $\mu$. By associativity, we can extend the definition of the structure constants to $K$-fold inner products, \[ \{\tau_1 \} \ast \{\tau_2 \} \ast \cdots \ast \{\tau_K \} = \sum_\nu g^\nu_{\tau_1, \tau_2, \cdots, \tau_K} \{\nu \}. \] \noindent \textbf{Theorem: Counting invariants} \begin{description} \item[(a) Quantum pure states]\mbox{}\\ Let $D$ divide $n$, $n = rD$, and let $\tau$ be the partition $(r^D)$ (that is, with Ferrers diagram a rectangular array of $r$ columns of length $D$). 
Then \[ h_n = g^{(n)}_{\tau,\tau,\cdots,\tau}\quad \mbox{($K$-fold inner product)}. \] If $D$ does not divide $n$, then $h_n =0$. \item[(b) Quantum mixed states]\mbox{}\\ We have \[ h_n = \sum_{|\tau|= n,\ell(\tau) \le D^2} \left( \sum_{|\sigma|= n, \ell(\sigma) \le D} g^{\tau}_{\sigma,\sigma}\right)^{\!\!\!\!2}. \] \item[(c) Phylogenetic $K$-way pattern frequencies, general Markov model]\mbox{}\\ We have \[ h_n = g^{(n)}_{\tau_1,\tau_2,\cdots,\tau_K}\quad \mbox{($K$-fold inner product)}, \] for each $\tau_k$ of the form $(r_k+s_k, r_k^{(D\!-\!1)})$ such that $n = r_kD+s_k$, $s_k \ge 0$. \item[(d) Phylogenetic $K$-way pattern frequencies, doubly stochastic model]\mbox{}\\ We have \[ h_n = g^{(n)}_{\tau_1,\tau_2,\cdots,\tau_K}\quad \mbox{($K$-fold inner product)}, \] for each $\tau_k$ of the form $(r_k+s_k, r_k^{(D\!-\!2)},t_k)$ such that $n = r_k(D\!-\!1)+s_k+t_k$, $0\le t_k \le r_k$, $s_k \ge 0$. \end{description} \mbox{}\hfill $\Box$\\ As pointed out in the main text, the enumeration and identification of entanglement invariants, in the case of quantum systems, and Markov invariants, in the phylogenetic context, is of practical importance in characterising general properties of the systems under study -- in the quantum case, because they are by definition impervious to local unitary operations, and form the raw material for constructing interesting entanglement measures; and in the phylogenetic case, because they tend to be independent of how the specific Markov change model is parametrized, but nonetheless they can give information about the underlying tree. An example of identifying invariants is the case of the three squangle quantities. We find ${g^{(5)}}_{\tau\tau\tau\tau}=4$, where $\tau$ is the partition $(2,1^3)$ which is of course of dimension $4$ and irreducible in $GL(4)$, but indecomposable in $GL_1(4)$, as it contains a one-dimensional representation. 
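Multiplicities of this kind can be evaluated directly from symmetric group character tables, since the coefficient of $\{(n)\}$ in a $K$-fold inner product of a single $\{\tau\}$ is $\frac{1}{n!}\sum_{\sigma \in {\mathfrak S}_n} \chi_\tau(\sigma)^K$. A minimal Python sketch (our own check, with the ${\mathfrak S}_4$ and ${\mathfrak S}_5$ character values hardcoded) recovers both the single degree-$4$ invariant for three qubits (the tangle) and the count $g^{(5)}_{\tau,\tau,\tau,\tau}=4$ for $\tau = (2,1^3)$ quoted above:

```python
from math import factorial

def multiplicity(class_sizes, chars, K, n):
    # (1/n!) * sum over conjugacy classes of |class| * chi(class)^K
    total = sum(size * chi ** K for size, chi in zip(class_sizes, chars))
    assert total % factorial(n) == 0
    return total // factorial(n)

# S_4: classes 1^4, 2.1^2, 2^2, 3.1, 4 with sizes 1, 6, 3, 8, 6;
# character values of the irreducible labelled (2,2):
s4_sizes = [1, 6, 3, 8, 6]
chi_22 = [2, 0, 2, -1, 0]
tangle_count = multiplicity(s4_sizes, chi_22, K=3, n=4)

# S_5: classes 1^5, 2.1^3, 2^2.1, 3.1^2, 3.2, 4.1, 5 with sizes
# 1, 10, 15, 20, 20, 30, 24; character of (2,1,1,1) = standard x sign:
s5_sizes = [1, 10, 15, 20, 20, 30, 24]
chi_2111 = [4, -2, 0, 1, 1, 0, -1]
squangle_count = multiplicity(s5_sizes, chi_2111, K=4, n=5)

print(tangle_count, squangle_count)   # 1 4
```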
One of the four linearly independent degree five candidates is discounted because of algebraic dependence on lower-degree invariants. Recourse to the appropriate quartet tree isotropy group \cite{sumner2009} reveals that one of the remaining three is not tree informative. Further, the situation with respect to the final two objects is expressed symmetrically in terms of the \emph{three} squangle quantities $Q_1$, $Q_2$, $Q_3$, which satisfy $Q_1+Q_2+Q_3=0$, as follows. For tree 1, for example $12|34$, we have on evaluation with stochastic parameters, $Q_1= 0$, and $-Q_3=Q_2 > 0$. This pattern recurs cyclically for the other two unrooted quartet trees: for tree 2, $13|24$, $Q_2=0$, whereas $-Q_1=Q_3 > 0$, and for tree 3, $14|23$, $Q_3=0$, and $-Q_2=Q_1>0$. As noted above, the (strict) inequalities entailed in the above evaluations are crucial for the validity of the least squares method for ranking quartet trees using squangles. There are many more gems to be examined in hunting down Markov invariants for different models and subgroups \cite{jarvis:sumner:2012miesm,jarvis:sumner:2013missm}, with potential practical and theoretical interest. As one instance of as-yet unexplored terrain, for $K=3$ we have evidence \cite{sumner2006a, sumner2008} at degree 8 for \underline{s}tochastic \underline{tangle} (`stangle') invariants with mixed weight, since it turns out that \[ g^{(8)}_{(51^3),(2^4),(2^4)} =1 \quad (\equiv g^{(8)}_{(2^4),(51^3),(2^4)}\equiv g^{(8)}_{(2^4),(2^4),(51^3)})\,. \] Thus there are three mixed weight stangle candidates, which would differ in the information they reveal about each leg of their ancestral star tree. \end{appendix}
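As an illustrative aside (our sketch, not part of the original text), the structure constants $g^{\nu}_{\tau_1,\ldots,\tau_K}$ can be computed directly from symmetric-group characters, evaluated with the Murnaghan--Nakayama rule and contracted using orthogonality over conjugacy classes. The Python below uses our own (hypothetical) function names; it reproduces the value $g^{(5)}_{\tau,\tau,\tau,\tau}=4$ quoted above for $\tau=(2,1^3)$.

```python
# Sketch: symmetric-group characters via the Murnaghan-Nakayama rule, and
# K-fold inner-product structure constants g^{nu}_{tau_1,...,tau_K}.
from collections import Counter
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def chi(lam, rho):
    """Character chi^lam evaluated on the class of cycle type rho."""
    lam = tuple(sorted((p for p in lam if p > 0), reverse=True))
    if not rho:
        return 1 if not lam else 0
    k, rest = rho[0], tuple(rho[1:])
    m = len(lam)
    beta = [lam[i] + (m - 1 - i) for i in range(m)]  # first-column hook lengths
    total = 0
    for i, b in enumerate(beta):
        nb = b - k
        if nb >= 0 and nb not in beta:  # remove a border strip of length k
            height = sum(1 for x in beta if nb < x < b)
            new_beta = sorted([x for j, x in enumerate(beta) if j != i] + [nb],
                              reverse=True)
            new_lam = tuple(new_beta[j] - (len(new_beta) - 1 - j)
                            for j in range(len(new_beta)))
            total += (-1) ** height * chi(new_lam, rest)
    return total

def partitions(n, max_part=None):
    """All partitions of n, as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def class_size(rho):
    """Size of the conjugacy class of cycle type rho in S_n."""
    n, z = sum(rho), 1
    for part, mult in Counter(rho).items():
        z *= part ** mult * factorial(mult)
    return factorial(n) // z

def g(nu, *taus):
    """Structure constant g^{nu}_{tau_1,...,tau_K}, by character orthogonality."""
    n = sum(nu)
    total = 0
    for rho in partitions(n):
        prod = chi(tuple(nu), rho)
        for tau in taus:
            prod *= chi(tuple(tau), rho)
        total += class_size(rho) * prod
    return total // factorial(n)
```

For instance, `g((5,), (2,1,1,1), (2,1,1,1), (2,1,1,1), (2,1,1,1))` evaluates the four-fold inner product multiplicity of $\{5\}$ and returns $4$, the count of squangle candidates discussed above.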
\section{Introduction} \label{sec:intro} Disorder, which is inevitably present in most materials, can dramatically affect their properties.~\cite{Lee_RevModPhys,Belitz_RevModPhys} It can lead to changes in their electronic structure and transport. One of the most interesting effects of disorder is the spatial confinement of charge carriers due to coherent backscattering off random impurities, which is known as Anderson localization.~\cite{Anderson,PhysRevLett.42.673} Despite progress over the last decades, the subject of Anderson localization remains an active area of research. The lack of quantitative analytical results has meant that numerical investigations (see, e.g., Refs.~\onlinecite{Bulka85,Bulka87,Slevin2014,PhysRevLett.78.4083,Slevin99,PhysRevLett.105.046403,PhysRevB.84.134209}) have played a significant role in understanding the Anderson transition.~\cite{Nakayama-Yakubo-2003,Markos-review-2006,0305-4470-33-42-103} The simplest model used to study the effects of disorder in materials is a single band tight binding model with a random on-site disorder potential.~\cite{a_gonis_92} Such a model is justified when the disorder is introduced by substitutional impurities, as in a binary alloy. The substitution of host atoms by impurities only leads to changes of the local potential on the substitutional site and, on average, does not affect the neighbors.~\cite{a_gonis_92,soven_67} In this situation, the disorder appears only in the diagonal terms of the Hamiltonian and hence is referred to as diagonal disorder. However, when the bandwidth of the dopant is very different from the one of the pure host, such substitution results not only in the change of the local potential but may also affect the neighboring sites.~\cite{a_gonis_92} Consequently, a simple model to capture such effects should include both random local potentials and random hopping amplitudes which depend on the occupancy of the sites.
The dependence of the hopping amplitude on the disorder configuration is usually referred to as off-diagonal disorder. Of course, a proper theoretical description of realistic disordered materials requires the inclusion of both diagonal and off-diagonal randomness. The coherent potential approximation (CPA) is a widely used single site mean field theory for systems with strictly diagonal disorder.~\cite{soven_67} Blackman, Esterling and Berk (BEB)~\cite{PhysRevB.4.2412} have extended the CPA to systems with off-diagonal disorder. However, being single-site approximations, the CPA and the BEB theories neglect all disorder induced non-local correlations. There have been a number of attempts to develop systematic nonlocal extensions to the CPA. These include cluster extensions such as the molecular coherent potential approximation (MCPA),~\cite{gonis_mcpa,gonis_odd1} the dynamical cluster approximation (DCA),~\cite{Hettler98,Hettler00,m_jarrell_01a} etc. Self-consistent mean field studies of off-diagonal disorder have been conducted by a number of authors.~\cite{Shiba, Brezini-1, Brezini-2, gonis_odd1} However, all these studies have been performed at the local single-site BEB level. To include the effects of off-diagonal disorder, Gonis \cite{gonis_mcpa} extended the Molecular CPA, which uses a self-consistently embedded finite size cluster to capture non-local corrections to the CPA. 
However, he criticized the MCPA for violating translational invariance and other critical properties of a valid quantum cluster theory.~\cite{a_gonis_92,PhysRevB.89.081107} In order to take into account such non-local effects in off-diagonal disorder models while maintaining translational invariance, we extend the BEB formalism using the DCA scheme.~\cite{Hettler98,Hettler00,m_jarrell_01a} While the CPA, DCA, and BEB have been shown to be successful self-consistent mean-field theories for the quantitative description of the density of states and electronic structure of disordered systems, they cannot properly address the physics of Anderson localization. These mean field approaches describe the effective medium using the average density of states, which is not critical at the transition.~\cite{Thouless,PhysRevB.89.081107,Nakayama-Yakubo-2003,Abou-Chacra} Thus, theories which rely on such averaged quantities will fail to properly characterize Anderson localization. As noted by Anderson, the probability distribution of the local density of states must be considered, focusing on the most probable or the \emph{typical} value.~\cite{Anderson,anderson1978local} Close to the Anderson transition, the distribution is found to have very long tails characteristic of a log-normal distribution.\cite{PhysRevLett.105.046403,PhysRevLett.72.526,Vollhardt} In fact, the distribution is log-normal up to ten orders of magnitude,~\cite{PhysRevB.81.155106} and so the typical value~\cite{Janssen98,Crow1988,Vollhardt,Derrida198429} is given by the geometric mean. Based on this idea, Dobrosavljevi\'{c} {\em{et al.}}~\cite{Vlad2003} formulated a single-site typical medium theory (TMT) for Anderson localization. This approximation gives a qualitative description of Anderson localization in three dimensions.
However, it fails to properly describe the trajectory of the mobility edge (which separates the extended and localized states) as it neglects non-local corrections and so does not include the effects of coherent backscattering.~\cite{2005PhyB..359..789A} It also considerably underestimates the critical disorder strength at which localization occurs. In addition, the TMT is formulated only for diagonal disorder. Recently, by employing the DCA within the typical medium analysis, we developed a systematic Typical Medium Dynamical Cluster Approximation (TMDCA) formalism.~\cite{PhysRevB.89.081107} The TMDCA provides an accurate description of the Anderson localization transition for modest cluster sizes in three-dimensional models with diagonal disorder while recovering the TMT for a one-site cluster. In this work, we generalize our recently proposed TMDCA scheme to address the question of electron localization in systems with both diagonal and off-diagonal disorder. To go beyond the local single-site CPA-like level of the BEB formalism, we employ the DCA~\cite{Hettler98,Hettler00,m_jarrell_01a} scheme, which systematically incorporates non-local spatial correlation effects. Hence, in this paper, we first present an extension of the DCA for systems with both diagonal and off-diagonal disorder. Next, we develop a typical medium dynamical cluster approximation formalism capable of incorporating the effects of Anderson localization for both diagonal and off-diagonal disorder. We then perform a systematic study of the effects of non-local correlations and off-diagonal randomness on the density of states and electron localization. The results of our calculations are compared with those obtained with other numerical methods for finite-size lattices, including exact diagonalization, kernel polynomial, and transfer matrix methods.
The paper is organized as follows: following the Introduction in Sec.~\ref{sec:intro}, we present the model and describe the details of the formalism in Sec.~\ref{sec:formalism}. In Sec.~\ref{sec:Results_DCA} we present our results for the average density of states for both diagonal and off-diagonal disorder cases. In Sec.~\ref{sec:Results_TMDCA} we consider the effects of diagonal and off-diagonal disorder on the typical density of states, from which we extract the mobility edges and construct a complete phase diagram in the disorder-energy parameter space. We summarize and discuss future directions in Sec.~\ref{sec:conclusion}. \section{Formalism} \label{sec:formalism} \subsection{Dynamical cluster approximation for off-diagonal disorder} \label{sec:ODD} The simplest model widely used to study disordered systems is the single band tight binding Hamiltonian \begin{equation} H=-\sum_{<i,j>}t_{ij}(c_{i}^{\dagger}c_{j}+h.c.)+\sum_{i}v_{i}n_{i}, \label{eq:1} \end{equation} where disorder is modeled by a local potential $v_{i}$ which is a random variable with probability distribution function $P(v_{i})$. We will focus on the binary disorder case, where some host $A$ atoms are substituted with $B$ impurities with a probability distribution function of the form \begin{equation} P(v_{i})=c_{A}\delta(v_{i}-V_{A})+c_{B}\delta(v_{i}-V_{B}), \label{eq:pdf} \end{equation} where $c_B=1-c_A$. For the diagonal disorder case, when the bandwidth of the pure host $A$ is about the same as the bandwidth of the $B$ system, such substitution results only in a change of the local potential $v_{i}$ at the replaced site $i$. This corresponds to changes in the diagonal elements of the Hamiltonian. In this case it is assumed that substitution of impurity atoms on average has no effect on hopping amplitudes to the neighboring atoms.
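To make the model concrete, the Hamiltonian of Eq.~(\ref{eq:1}) with the binary distribution of Eq.~(\ref{eq:pdf}) can be assembled on a small periodic cubic lattice as in the following minimal sketch (ours, with illustrative function names, not the authors' code):

```python
# Sketch: tight-binding Hamiltonian with binary diagonal disorder on an
# L x L x L periodic cubic lattice; 4t = 1 sets the unit of energy.
import numpy as np

def binary_disorder(n_sites, c_A, V_A, V_B, rng):
    """Sample v_i from P(v) = c_A delta(v - V_A) + c_B delta(v - V_B)."""
    is_A = rng.random(n_sites) < c_A
    return np.where(is_A, V_A, V_B), is_A

def hamiltonian(L, c_A, V_A, V_B, t=0.25, seed=0):
    """H = -t sum_<ij> (c_i^+ c_j + h.c.) + sum_i v_i n_i on the periodic lattice."""
    n = L ** 3
    rng = np.random.default_rng(seed)
    v, _ = binary_disorder(n, c_A, V_A, V_B, rng)
    H = np.diag(v.astype(float))
    idx = np.arange(n).reshape(L, L, L)
    for axis in range(3):                 # near-neighbor bonds along x, y, z
        nbr = np.roll(idx, -1, axis=axis)
        H[idx.ravel(), nbr.ravel()] = -t
        H[nbr.ravel(), idx.ravel()] = -t
    return H, v

H, v = hamiltonian(L=4, c_A=0.5, V_A=0.4, V_B=-0.4)
```

In the clean limit ($V_A=V_B=0$) the eigenvalues of `H` reproduce the bare band $\varepsilon_k=-2t(\cos k_x+\cos k_y+\cos k_z)$, spanning $[-6t,6t]$.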
For systems with off-diagonal disorder, the randomness is introduced not only locally through the random diagonal potential $v_{i}$, but also through the hopping amplitudes. To model this, BEB~\cite{PhysRevB.4.2412} introduced the disorder configuration dependent hopping amplitude of electrons $t_{ij}$ as \begin{equation} t_{ij}=\begin{cases} t_{ij}^{AA} ,~{\rm if} \quad i\in A,\quad j\in A \\ t_{ij}^{BB} ,~{\rm if} \quad i\in B,\quad j\in B \\ t_{ij}^{AB} ,~{\rm if} \quad i\in A,\quad j\in B \\ t_{ij}^{BA} ,~{\rm if} \quad i\in B,\quad j\in A, \end{cases} \label{eq:} \end{equation} where $t_{ij}$ depends on the type of ion occupying sites $i$ and $j$. For off-diagonal disorder, BEB~\cite{PhysRevB.4.2412} showed that the scalar CPA equation becomes a $2\times2$ matrix equation, with corresponding AA, AB, BA, and BB matrix elements. In momentum space, if there is only near-neighbor hopping between all ions, the bare dispersion can be written as (the underline denotes matrices) \begin{eqnarray} \underline{\varepsilon_k} & = \left(\begin{array}{cc} t^{AA} & t^{AB} \\ [1.0em] t^{BA} & t^{BB} \end{array}\right) \varepsilon_k \label{Eq: dispersion} \end{eqnarray} where in three dimensions $\varepsilon_k=-2t(\cos(k_x) + \cos(k_y) + \cos(k_z))$ with $4t=1$, which sets our unit of energy, and $t^{AA}$, $t^{BB}$, $t^{AB}$, and $t^{BA}$ are dimensionless prefactors. The BEB approach is local by construction, hence all non-local disorder induced correlations are neglected.~\cite{PhysRevB.4.2412} In order to take into account non-local physics, we extend the BEB formalism to a finite cluster using the DCA scheme. In the following, we present the algorithm and details of our non-local DCA extension of the BEB formalism for off-diagonal disorder. Just as in the DCA scheme,~\cite{m_jarrell_01a} the first Brillouin zone is divided into coarse-grained cells with centers $K$ surrounded by points $\tilde{k}$ within the cell, so that an arbitrary $k=K+\tilde{k}$.
For a given DCA $K$-dependent effective medium hybridization matrix $\underline{\Delta(K,\omega)}$ \begin{equation} \begin{split} \underline{\Delta(K,\omega)}= \left(\begin{array}{cc} \Delta^{AA}(K,\omega) & \Delta^{AB}(K,\omega)\\ \Delta^{BA}(K,\omega) & \Delta^{BB}(K,\omega) \end{array}\right) \label{eq:-2-1} \end{split} \end{equation} we solve the cluster problem, usually in real space. A set of stochastically generated random configurations of disorder potentials $v_i$ is used to calculate the disorder averaged cluster Green function $\overline{G}_{c}(K,\omega)$, \begin{equation} \left(\underline{G_{c}}\right)_{ij}=\langle \left(\underline{\omega}-\underline{\overline{t'}}- \underline{\Delta'}-\underline{V}\right)_{ij}^{-1} \rangle \label{eq:5} \end{equation} where $\langle ... \rangle$ denotes disorder averaging and $\underline{V}$ is a diagonal matrix of the disorder site potentials. The primes stand for the configuration dependent Fourier transform (FT) components of the hybridization and hopping, respectively.
That is, \begin{subequations} \begin{equation} \underline{\Delta}'_{ij} =\begin{cases} FT(\Delta(K,\omega)^{AA}) ,~{\rm if} \quad i\in A,\quad j\in A\\ FT(\Delta(K,\omega)^{BB}) ,~{\rm if} \quad i\in B,\quad j\in B\\ FT(\Delta(K,\omega)^{AB}) ,~{\rm if} \quad i\in A,\quad j\in B\\ FT(\Delta(K,\omega)^{BA}) ,~{\rm if} \quad i\in B,\quad j\in A\end{cases} \label{eq:-5} \end{equation} and \begin{equation} \overline{\underline{t}}'_{ij}=\begin{cases} FT(\overline{\epsilon}(K)^{AA}) ,~{\rm if} \quad i\in A,\quad j\in A\\ FT(\overline{\epsilon}(K)^{BB}) ,~{\rm if} \quad i\in B,\quad j\in B\\ FT(\overline{\epsilon}(K)^{AB}) ,~{\rm if} \quad i\in A,\quad j\in B\\ FT(\overline{\epsilon}(K)^{BA}) ,~{\rm if} \quad i\in B,\quad j\in A\end{cases} \label{eq:-6} \end{equation} with \begin{eqnarray} \underline{\overline{\epsilon}(K)} & = \left(\begin{array}{cc} t^{AA} & t^{AB} \\ [1.0em] t^{BA} & t^{BB} \end{array}\right) \frac{N_{c}}{N}\sum_{\tilde{k}} \varepsilon_{k}, \label{Eq: eps_bar} \end{eqnarray} \end{subequations} where $\underline{\Delta}'_{ij}$ and $\overline{\underline{t}}'_{ij}$ are $N_{c}\times N_{c}$ real-space matrices (where $N_c$ is the cluster size), and e.g., $FT(\Delta(K,\omega)^{AA})=\sum_K \Delta(K,\omega)^{AA} e^{iK(r_i-r_j)}$. The hopping can be long-ranged, but since it is coarse-grained, it is effectively limited to the cluster. Physically, $\underline{\Delta}'_{ij}$ represents the hybridization between sites $i$ and $j$, which is configuration dependent. For example, the AA component of the hybridization corresponds to $A$ atoms occupying both sites $i$ and $j$, while the AB component means that site $i$ is occupied by an A atom and site $j$ by a B atom. The interpretation of the hopping matrix is the same as for the hybridization function.
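The assembly of the configuration-dependent matrices and the disorder average of Eq.~(\ref{eq:5}) can be sketched as follows (our illustrative Python, reduced to a one-dimensional cluster for brevity; all names are ours):

```python
# Sketch: configuration-dependent real-space hybridization/hopping matrices,
# selected by site occupancy, and the disorder-averaged cluster Green function.
import numpy as np

def to_real_space(comp_K, Nc):
    """FT of one K-dependent component to an Nc x Nc real-space matrix (1D cluster)."""
    K = 2.0 * np.pi * np.arange(Nc) / Nc
    dr = (np.arange(Nc)[:, None] - np.arange(Nc)[None, :]).ravel()  # r_i - r_j
    mat = np.tensordot(comp_K, np.exp(1j * np.outer(K, dr)), axes=1)
    return mat.reshape(Nc, Nc) / Nc

def config_matrix(comp_real, is_A):
    """Pick the AA/AB/BA/BB component according to the occupancy of sites i, j."""
    mA = is_A.astype(float)
    mB = 1.0 - mA
    return (np.outer(mA, mA) * comp_real['AA'] + np.outer(mA, mB) * comp_real['AB']
            + np.outer(mB, mA) * comp_real['BA'] + np.outer(mB, mB) * comp_real['BB'])

def averaged_cluster_G(omega, delta_K, tbar_K, c_A, V_A, V_B, Nc, n_cfg=200, seed=0):
    """<G_c>_{ij} = <(omega - tbar' - Delta' - V)^{-1}_{ij}> over sampled configs."""
    rng = np.random.default_rng(seed)
    d_real = {s: to_real_space(delta_K[s], Nc) for s in delta_K}
    t_real = {s: to_real_space(tbar_K[s], Nc) for s in tbar_K}
    G = np.zeros((Nc, Nc), dtype=complex)
    for _ in range(n_cfg):
        is_A = rng.random(Nc) < c_A
        V = np.diag(np.where(is_A, V_A, V_B))
        M = omega * np.eye(Nc) - config_matrix(t_real, is_A) \
            - config_matrix(d_real, is_A) - V
        G += np.linalg.inv(M)
    return G / n_cfg
```

As a sanity check, with vanishing hybridization and hopping components and $c_A=1$, the diagonal of the result reduces to the atomic limit $1/(\omega-V_A)$.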
In the next step, we form the $2N_c \times 2N_c$ disorder averaged Green function \begin{eqnarray} \left\langle G_{c}(\omega)_{ij} \right\rangle &= \left(\begin{array}{cc} \left\langle G_{c}^{AA}(\omega) \right\rangle_{ij} \quad \left\langle G_{c}^{AB}(\omega) \right\rangle_{ij}\\ [1.0em] \left\langle G_{c}^{BA}(\omega) \right\rangle_{ij} \quad \left\langle G_{c}^{BB}(\omega) \right\rangle_{ij} \end{array}\right). \label{eq:10} \end{eqnarray} This may be done by assigning the components according to the occupancy of the sites $i$ and $j$ \begin{eqnarray} (G_{c}^{AA})_{ij} &=&(G_{c})_{ij}\: ~{\rm if} \quad i\in A,\quad j\in A\nonumber \\ (G_{c}^{BB})_{ij} &=&(G_{c})_{ij}\: ~{\rm if} \quad i\in B,\quad j\in B\nonumber \\ (G_{c}^{AB})_{ij} &=&(G_{c})_{ij}\: ~{\rm if} \quad i\in A,\quad j\in B\nonumber \\ (G_{c}^{BA})_{ij} &=&(G_{c})_{ij}\: ~{\rm if} \quad i\in B,\quad j\in A\, \label{eq:11} \end{eqnarray} with the other components being zero. Because only one of the four matrix elements is finite for each disorder configuration (each site can be occupied by either an $A$ or a $B$ atom), only the sum of the elements in Eq.~\ref{eq:10} is normalized as a conventional Green function. Having formed the configuration-dependent average Green function, we then Fourier transform to $K$-space (which also imposes translational symmetry) and obtain the $K$-dependent disorder averaged cluster Green function for each component of the matrix \begin{equation} G_{c}(K,\omega)=\left(\begin{array}{cc} G_{c}^{AA}(K,\omega) & G_{c}^{AB}(K,\omega) \\ [1.0em] G_{c}^{BA}(K,\omega) & G_{c}^{BB}(K,\omega)\end{array}\right)\,.
\label{eq:-11} \end{equation} Once the cluster problem is solved, we calculate the coarse-grained lattice Green function as \begin{eqnarray} \bar{G}(K,\omega) & = & \left(\begin{array}{cc} \overline{G}^{AA}(K,\omega) & \overline{G}^{AB}(K,\omega) \\ \overline{G}^{BA}(K,\omega) & \overline{G}^{BB}(K,\omega)\end{array}\right) \nonumber\\ & = & \frac{N_{c}}{N}\sum_{\tilde{k}}\Big(\underline{ G_{c}(K,\omega)}^{-1}+\underline{\Delta_{}(K,\omega)} \nonumber \\ & - & \underline{\varepsilon_{K+\tilde{k}}}+\underline{\overline{\epsilon}(K)}\Big)^{-1}. \label{coarsegraining} \end{eqnarray} It is important to note that each component of the Green function matrix above does not have the normalization of a conventional, i.e., scalar, Green function. Only the sum of the matrix components has the conventional normalization, so that $\overline{G}(K,\omega)\sim1/\omega$, with the total coarse-grained lattice Green function being obtained as \begin{eqnarray} \overline{G}(K,\omega)&=& \overline{G}^{AA}(K,\omega)+\overline{G}^{BB}(K,\omega) \nonumber \\ &+& \overline{G}^{AB}(K,\omega)+\overline{G}^{BA}(K,\omega). \label{eq:-13} \end{eqnarray} Next, to construct the new DCA effective medium $\underline{\Delta(K,\omega)}$, we impose the BEB DCA $(2 \times 2)$ matrix self-consistency condition, requiring the disorder averaged cluster and the coarse-grained lattice Green functions to be equal \begin{equation} \underline{G_c(K,\omega)}=\underline{\bar{G}(K,\omega)} \,. \end{equation} This is equivalent to a system of three coupled scalar equations \begin{subequations} \begin{eqnarray} \overline{G}^{AA}(K,\omega) &=& G_{c}^{AA}(K,\omega), \\ \overline{G}^{BB}(K,\omega) &=& G_{c}^{BB}(K,\omega), \quad \textnormal{and} \\ \overline{G}^{AB}(K,\omega) &=& G_{c}^{AB}(K,\omega). \end{eqnarray} \end{subequations} Note that $\overline{G}^{BA}(K,\omega)=\overline{G}^{AB}(K,\omega)$ is satisfied automatically.
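A sketch of the coarse-graining step of Eq.~(\ref{coarsegraining}) for a single cluster momentum $K$, reduced for brevity to a one-dimensional bare band (our illustrative code, not the authors' implementation):

```python
# Sketch: coarse-graining of the 2x2 (A/B space) lattice Green function,
# Gbar(K) = (Nc/N) sum_ktilde [Gc(K)^-1 + Delta(K) - eps_{K+ktilde} + epsbar(K)]^-1.
import numpy as np

def coarse_grained_G(Gc_K, Delta_K, prefac, K, Nc, n_tilde=64, t=0.25):
    """Gc_K, Delta_K: 2x2 arrays; prefac: 2x2 array of t^{ss'} prefactors."""
    dk = 2.0 * np.pi / Nc                                    # linear size of a DCA cell
    ktil = (np.arange(n_tilde) + 0.5) / n_tilde * dk - dk / 2.0  # ktilde points in the cell
    def eps(k):
        return -2.0 * t * np.cos(k)                          # 1D bare dispersion
    epsbar = prefac * np.mean(eps(K + ktil))                 # coarse-grained dispersion
    Gc_inv = np.linalg.inv(Gc_K)
    Gbar = np.zeros((2, 2), dtype=complex)
    for q in ktil:
        Gbar += np.linalg.inv(Gc_inv + Delta_K - prefac * eps(K + q) + epsbar)
    return Gbar / n_tilde
```

In the flat-band limit ($t=0$) and with vanishing hybridization, the coarse-grained Green function reduces to the cluster one, $\bar{G}(K,\omega)=G_c(K,\omega)$, which is a convenient consistency check.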
We then close our self-consistency loop by updating the corresponding hybridization function for each component as \begin{eqnarray} \Delta_{n}^{AA}(K,\omega) &=& \Delta_{o}^{AA}(K,\omega) \nonumber \\ &+& \xi\left(G_{c}^{-1}(K,\omega)^{AA}-\overline{G}^{-1}(K,\omega)^{AA}\right) \nonumber \\ \Delta_{n}^{BB}(K,\omega) &=& \Delta_{o}^{BB}(K,\omega) \nonumber \\ &+& \xi\left( G_{c}^{-1}(K,\omega)^{BB}-\overline{G}^{-1}(K,\omega)^{BB}\right) \nonumber \\ \Delta_{n}^{AB}(K,\omega) &=& \Delta_{o}^{AB}(K,\omega) \nonumber \\ &+& \xi\left( G_{c}^{-1}(K,\omega)^{AB}-\overline{G}^{-1}(K,\omega)^{AB}\right) \nonumber \\ \Delta_{n}^{BA}(K,\omega) &=& \Delta_{n}^{AB}(K,\omega)\label{eq:4} \end{eqnarray} where `o' and `n' denote old and new, respectively, and $\xi$ is a linear mixing parameter $0<\xi< 1$. We then iterate the above steps until convergence is reached. There are two limiting cases of the above formalism which we carefully checked numerically. In the limit of $N_{c}=1$, we should recover the original BEB result. Here the cluster Green function loses its $K$ dependence, so that \begin{equation} \begin{split} \left(\begin{array}{cc} G_{c}^{AA}(\omega) & 0\\ 0 & G_{c}^{BB}(\omega) \end{array}\right)= \\ \frac{1}{N}\sum_{{k}}\left( \underline{G_{c}(\omega)}^{-1}+ \underline{\Delta(\omega)}-\underline{\varepsilon(k)}\right)^{-1} \label{eq:-14} \end{split} \end{equation} which is the BEB self-consistency condition. Here we used that $\overline{\epsilon}(K)=0$ for $N_c=1$. The second limiting case is when there is only diagonal disorder so that $t^{AA}=t^{BB}=t^{AB}=1$. In this case the above formalism reduces to the original DCA scheme. We have verified both of these limits numerically.
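The closure of the loop in Eq.~(\ref{eq:4}) amounts to a linearly mixed update of the $2\times2$ hybridization matrix; a minimal sketch (ours, treating the superscripts as components of the matrix inverses) for one $(K,\omega)$ point reads:

```python
# Sketch: one linear-mixing step of the hybridization update in A/B space.
import numpy as np

def update_hybridization(delta_old, Gc, Gbar, xi=0.5):
    """Delta_new = Delta_old + xi (Gc^-1 - Gbar^-1), with Delta^{BA} tied to
    Delta^{AB} by symmetry; at the fixed point Gc = Gbar and Delta is unchanged."""
    delta_new = delta_old + xi * (np.linalg.inv(Gc) - np.linalg.inv(Gbar))
    delta_new[1, 0] = delta_new[0, 1]          # Delta^{BA} = Delta^{AB}
    return delta_new
```

The mixing parameter $\xi$ trades convergence speed against stability: small $\xi$ damps oscillations of the medium between iterations, at the cost of more iterations.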
\subsection{Typical medium theory with off-diagonal disorder} \label{sec:TMDCA} To address the issue of electron localization, we recently developed the typical medium dynamical cluster approximation (TMDCA) and applied it to the three-dimensional Anderson model.~\cite{PhysRevB.89.081107} In Ref.~\onlinecite{PhysRevB.89.081107} we confirmed that the typical density of states vanishes for localized states and is finite for extended states. In the following, we generalize our TMDCA analysis to systems with off-diagonal disorder to address the question of localization and the mobility edge in such models. First, we would like to emphasize that the crucial difference between the TMDCA~\cite{PhysRevB.89.081107} and the standard DCA~\cite{m_jarrell_01a} procedure is the way the disorder averaged cluster Green function is calculated. In the TMDCA analysis, instead of using the algebraically averaged cluster Green function in the self-consistency loop, we calculate the typical (geometrically averaged) cluster density of states \begin{equation} \rho^c_{typ}(K,\omega)=e^{\frac{1}{N_c}\sum_{i}\langle\ln\rho_{ii}(\omega)\rangle}\left\langle \frac{-\frac{1}{\pi} \Im G_{c}(K,\omega)}{\frac{1}{N_c}\sum_{i}(-\frac{1}{\pi}\Im G_{ii}(\omega))}\right\rangle, \label{eq:-17} \end{equation} with the geometric averaging being performed over the local density of states $\rho_{ii}(\omega)=-\frac{1}{\pi}\Im G_{ii}(\omega)$ only. Using this $\rho^c_{typ}(K,\omega)$, the cluster averaged typical Green function is constructed via a Hilbert transform \begin{equation} G_{c}(K,\omega) =\int d\omega'\frac{\rho^c_{typ}(K,\omega')}{\omega-\omega'}.
\label{eq:-18-1} \end{equation} In the presence of off-diagonal disorder, following BEB, the typical density of states becomes a $2 \times 2$ matrix, which we define as \begin{widetext} \begin{eqnarray} \addtolength{\jot}{1em} \underline{\rho^c_{typ}(K,\omega)} & = \exp\left(\dfrac{1}{N_c} \sum_{i=1}^{N_c} \left\langle \ln \rho_{ii} (\omega) \right\rangle\right) \times & \left(\begin{array}{cc} \left\langle \dfrac{-\dfrac{1}{\pi}\Im G_{c}^{AA}(K,\omega)}{\dfrac{1}{N_c} \sum_{i=1}^{N_c}(-\dfrac{1}{\pi}\Im G_{ii}(\omega))}\right\rangle & \left\langle \dfrac{-\dfrac{1}{\pi}\Im G_{c}^{AB}(K,\omega)}{\dfrac{1}{N_c} \sum_{i=1}^{N_c}(-\dfrac{1}{\pi}\Im G_{ii}(\omega))}\right\rangle \\ [1.8em] \left\langle \dfrac{-\dfrac{1}{\pi}\Im G_{c}^{BA}(K,\omega)}{\dfrac{1}{N_c} \sum_{i=1}^{N_c}(-\dfrac{1}{\pi}\Im G_{ii}(\omega))}\right\rangle & \left\langle \dfrac{-\dfrac{1}{\pi}\Im G_{c}^{BB}(K,\omega)}{\dfrac{1}{N_c} \sum_{i=1}^{N_c}(-\dfrac{1}{\pi}\Im G_{ii}(\omega))}\right\rangle \end{array}\right). \label{rhotyp_BEB} \end{eqnarray} \end{widetext} Here the scalar prefactor represents the local typical (geometrically averaged) density of states, while the matrix elements are linearly averaged over the disorder. Also notice that the cluster Green function $(\underline{G_c})_{ij}$ and its components $G_c^{AA}$, $G_c^{BB}$ and $G_c^{AB}$ are defined in the same way as in Eqs. (\ref{eq:5}-\ref{eq:11}). In the next step, we construct the cluster average Green function $G_c(K,\omega)$ by performing a Hilbert transform for each component \begin{eqnarray} \addtolength{\jot}{1em} \underline{ G_c(K,\omega)} & = & \left(\begin{array}{cc} \int d\omega'\frac{\rho_{typ}^{AA}(K,\omega')}{\omega-\omega'} & \int d\omega'\frac{\rho_{typ}^{AB}(K,\omega')}{\omega-\omega'} \\ [1.8em] \int d\omega'\frac{\rho_{typ}^{BA}(K,\omega')}{\omega-\omega'} & \int d\omega'\frac{\rho_{typ}^{BB}(K,\omega')}{\omega-\omega'} \end{array}\right).
\label{Gtyp_BEB} \end{eqnarray} Once the disorder averaged cluster Green function $G_c(K,\omega)$ is obtained from Eq.~\ref{Gtyp_BEB}, the self-consistency steps are the same as in the procedure for the off-diagonal disorder DCA described in the previous section: we calculate the coarse-grained lattice Green function using Eq.~\ref{coarsegraining}, which is then used to update the hybridization function with the effective medium via Eq.~\ref{eq:4}. The above set of equations provides us with the generalization of the TMDCA scheme for both diagonal and off-diagonal disorder, which we test numerically in the following sections. Also notice that for $N_c=1$ with only diagonal disorder ($t^{AA}=t^{BB}=t^{AB}=t^{BA}$) the above procedure reduces to the local TMT scheme. In this case, the diagonal elements of the matrix in Eq.~\ref{rhotyp_BEB} will contribute $c_A$ and $c_B$, respectively, with the off-diagonal elements being zero (at $N_c=1$ the off-diagonal terms vanish because a given site can be occupied by either an $A$ or a $B$ atom only). Hence, the typical density reduces to the local scalar prefactor only, which has exactly the same form as in the local TMT scheme. \section{Results and Discussion} \label{sec:Results_Disc} To illustrate the generalized DCA and TMDCA algorithms described above, we present our results for the effects of diagonal and off-diagonal disorder in a generalized Anderson Hamiltonian (Eq.~\ref{eq:1}) for a three-dimensional system with binary disorder distribution ($V_A = -V_B$) and random hopping ($t^{AA} \neq t^{BB}$, $t^{AB} = t^{BA}$) with other parameters as specified. The results are presented and discussed in Subsections~\ref{sec:Results_DCA} and \ref{sec:Results_TMDCA}. \subsection{DCA results for diagonal and off-diagonal disorder} \label{sec:Results_DCA} \begin{figure}[t!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig1_changing-taa} \caption{(Color online).
The effect of off-diagonal disorder on the average density of states calculated in the DCA scheme with $N_c=32$. Our DCA results for $N_c=1$ correspond to the single-site CPA BEB scheme. We consider two values of local disorder potential below ($V_A=0.4$) and above ($V_A=0.9$) the split-band limit, and examine the effect of changing the off-diagonal hopping strength (which amounts to a change in the non-local potential). We start with the diagonal disorder case $t^{AA}=t^{BB}=t^{AB}=1.0$ and then consider two off-diagonal disorder cases: $t^{AA}=1.5, t^{BB}=0.5$ and $t^{AA}=1.8, t^{BB}=0.2$, respectively. We fix $t^{AB}=t^{BA}=0.5(t^{AA}+t^{BB})$ and $c_A=0.5$. For this parameter range of off-diagonal disorder, we do not observe a significant difference between the CPA ($N_c=1$) and the DCA ($N_c=32$) results, indicating that non-local inter-site correlations are weak.} \label{fig:Fig1} \end{figure} The effect of off-diagonal disorder on the average density of states (DOS) calculated within the DCA ($N_c=32$) is presented in Fig.~\ref{fig:Fig1}. The DOS we present in our results is a local density of states calculated as \begin{eqnarray} DOS(\omega) & = & -\frac{1}{\pi N_c}\sum_{K=1}^{N_c} \Big(\Im \overline{G}^{AA}(K,\omega)+\Im \overline{G}^{AB}(K,\omega) \nonumber \\ & + & \Im \overline{G}^{BA}(K,\omega)+\Im \overline{G}^{BB}(K,\omega)\Big). \end{eqnarray} Notice that our DCA procedure for $N_c=1$ reduces to the original CPA-like BEB scheme. For a fixed concentration $c_A=0.5$, we examine the effects of off-diagonal disorder at two fixed values of the diagonal disorder potential $V_{A}=0.4$ (below the split-band limit) and $V_{A}=0.9$ (above the split-band limit). The off-diagonal randomness is modeled by changes in the hopping amplitudes $t^{AA},t^{BB}$ with $t^{AB}=0.5(t^{AA}+t^{BB})$. For the diagonal disorder case (top panel of Fig.~\ref{fig:Fig1}) with $t^{AA} = t^{BB} = t^{AB} = t^{BA}$ we have two subbands contributing equally to the total DOS.
As shown in the middle and bottom panels, the change in the strength of the off-diagonal disorder leads to dramatic changes in the DOS. An increase of the AA hopping results in the broadening of the AA subband with the development of a resonance peak in the BB subband. For this parameter range both the DCA ($N_c=32$) and CPA ($N_c=1$) provide about the same results, indicating that disorder-induced non-local correlations are negligible. \begin{figure}[tbh] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig2_changing-V} \caption{(Color online). The effect of an increasing diagonal disorder potential $V_A$ for a fixed off-diagonal disorder with $t^{AA}=1.5$, $t^{BB}=0.5$, $t^{AB}=0.5(t^{AA}+t^{BB})$, and $c_{A}=0.5$ on the average density of states calculated with our modified DCA scheme. Results are obtained for $N_c=1$ (corresponding to the CPA) and $N_c=32$ cluster sizes. We also compare our DCA average DOS with the DOS obtained using exact diagonalization (ED) for a $12 \times 12 \times 12$ cubic lattice cluster. For ED results, we used $\eta=0.01$ broadening in frequency. \label{fig:Fig2}} \end{figure} In Fig.~\ref{fig:Fig2}, we show the average density of states calculated for fixed off-diagonal-disorder parameters and different diagonal disorder potentials $V_{A}$. We again compare the local CPA ($N_c=1$) and the DCA ($N_c=32$) results. To benchmark our off-diagonal extension of the DCA, we compare our results with those obtained from exact diagonalization. For small $V_{A}$, there is no difference between the CPA ($N_c=1$) and the DCA ($N_c=32$) results. As the local potential $V_{A}$ is increased, noticeable differences start to develop. We can see that for larger $V_{A}$ a gap starts to open, and the effect is more dramatic in the CPA scheme, while in the DCA ($N_c=32$) this gap is partially filled due to the incorporation of non-local inter-site correlations, which are missing in the CPA.
Furthermore, the DOS obtained from the DCA procedure provides finer structures, which are in basic agreement with the DOS calculated with exact diagonalization for a cluster of size $12 \times 12 \times 12$. The agreement we get with ED results is a good indication of the accuracy of our extension of the DCA to off-diagonal disorder. \subsection{Typical medium finite cluster analysis of diagonal and off-diagonal disorder} \label{sec:Results_TMDCA} \begin{figure}[t!] \centering{} \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig4_dos_dd} \caption{(Color online). Diagonal disorder case: the average density of states (dash-dotted line) calculated within the DCA and the typical density of states (shaded regions) calculated within the TMDCA for diagonal disorder $t^{AA}= t^{BB}= t^{AB}=t^{BA}=1, c_A=0.5$ and various values of the local potential $V_A=-V_B$ for cluster sizes $N_c=1$ and $N_c=32$. The shaded regions represent the TDOS, which is finite for the extended states and zero when the states are localized. The mobility edges extracted from the vanishing of the TDOS are marked by the arrows. The extended states region with a finite TDOS is always narrower for $N_c=1$ as compared to the results of a $N_c=32$ cluster, indicating that a single-site TMT tends to overemphasize the localized states. \label{fig:Fig3}} \end{figure} To characterize the Anderson localization transition, we now explore the typical density of states (TDOS) calculated within our extension of the TMDCA presented in Sec.~\ref{sec:TMDCA}. In the typical medium analysis, the TDOS serves as the order parameter for the Anderson localization transition. In particular, the TDOS is finite for extended states and zero for states which are localized. First, we consider the behavior of the TDOS and compare it with the average DOS for diagonal disorder. In Fig.~\ref{fig:Fig3} we show our results for $N_c=1$ (left panel) and $N_c=32$ (right panel).
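As a toy illustration (ours, not from the paper) of why the geometric mean is the appropriate order parameter, one can compare arithmetic and geometric averages of samples drawn from a log-normal distribution, which mimics the broad LDOS distribution near the transition:

```python
# Sketch: arithmetic vs geometric (typical) averaging of a mock log-normal LDOS.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = -2.0, 2.0
ldos = rng.lognormal(mean=mu, sigma=sigma, size=200_000)   # mock LDOS samples
avg_dos = ldos.mean()                    # arithmetic mean: exp(mu + sigma^2/2) = 1
typ_dos = np.exp(np.log(ldos).mean())    # geometric mean:  exp(mu) ~ 0.135
```

The arithmetic mean is dominated by rare large values in the tail and stays of order one, while the typical (geometric) value is strongly suppressed; it is this suppression that signals the approach to localization.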
Notice that the $N_c=1$ results for the TDOS correspond to the single-site TMT of Dobrosavljevi\'{c} {\em{et al.}},~\cite{Vlad2003} and those for the average DOS correspond to the ordinary CPA. As expected,~\cite{Vlad2003, PhysRevB.89.081107} for small disorder ($V_A=0.15$) there is not much difference between the DCA ($N_c=32$) and the TMDCA ($N_c=32$), or between the CPA and the TMT for $N_c=1$. However, there are subtle differences between the results for the finite $N_c=32$ and single-site $N_c=1$ clusters due to the incorporation of spatial correlations. As the disorder strength $V_A$ is increased ($V_A=0.6$), the typical density of states (TDOS) becomes smaller than the average DOS and is broader for the larger cluster. Moreover, the finite cluster introduces features in the DOS which are missing in the local $N_c=1$ data. Regions where the TDOS is zero while the average DOS is finite indicate Anderson localized states, separated by the mobility edge (marked by arrows). For $N_c>1$ these localized regions are wider, which indicates that the localization edge is driven to higher frequencies. This is a consequence of the tendency of non-local corrections to suppress localization. For even larger disorder, $V_A=1$, a gap opens in both the TDOS and the average DOS, leading to the formation of four localization edges; but again the region of extended states is larger for the finite cluster, indicating that the local TMT ($N_c=1$) tends to underestimate the extended-states region. \begin{figure}[tbh] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig3_TDOS_KPM} \caption{(Color online). Diagonal disorder case: comparison of the average and typical DOS calculated with the DCA/TMDCA and the kernel polynomial method (KPM)~\cite{Schubert} for diagonal disorder with $t^{AA}= t^{BB}= t^{AB}=t^{BA}=1$ at various values of the local potential $V_A$ and concentrations $c_A$ for cluster size $N_c=32$.
The kernel polynomial method used $2048$ moments on a $48^3$ cubic lattice, and $200$ independent realizations generated with $32$ sites randomly sampled from each realization.} \label{fig:TDOS_KPM_comparison} \end{figure} To further benchmark our results for the diagonal disorder case, in Fig.~\ref{fig:TDOS_KPM_comparison} we compare the average and typical DOS calculated with the DCA and the TMDCA ($N_c=32$) against the kernel polynomial method (KPM).~\cite{KPM_review_2006, Schubert} In the KPM analysis, instead of diagonalizing the Hamiltonian directly, the local DOS is expressed in terms of an infinite series of Chebyshev polynomials. In practice, truncating the series leads to Gibbs oscillations. The KPM damps these oscillations by modifying the expansion coefficients. Following previous studies of the Anderson model, the Jackson kernel is used.~\cite{KPM_review_2006} As is evident from the plots, our TMDCA results reproduce the KPM results nicely, showing that our formalism offers a systematic way of studying the Anderson localization transition in binary alloy systems. Such good agreement indicates a successful benchmarking of the TMDCA method.~\cite{PhysRevB.89.081107} \begin{figure}[tbh] \centering{} \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig5_dos_odd} \caption{(Color online). Off-diagonal disorder case: the evolution of the average density of states (dash-dotted line) and the typical density of states (shaded regions) for various values of the local potential $V_A$ with off-diagonal disorder parameters $t^{AA}=1.5$, $t^{BB}=0.5$, $t^{AB}=0.5(t^{AA}+t^{BB})$, and $c_{A}=0.5$. The left panel is for $N_c=1$ and the right panel for $N_c=32$. The mobility edges are extracted as described in Fig.~\ref{fig:Fig3}. \label{fig:Fig4}} \end{figure} Next, we explore the effects of the off-diagonal disorder.
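For reference, the Jackson-damped Chebyshev reconstruction used in the KPM benchmark above can be sketched compactly. This is a generic stochastic-trace sketch with toy parameters, not the production settings quoted in the figure caption:

```python
import numpy as np

def kpm_dos(H, n_moments=256, n_random=20, n_omega=400, seed=1):
    """Average DOS of Hamiltonian H via the kernel polynomial method.

    Moments mu_m = (1/N) Tr T_m(H/a) are estimated stochastically with
    random-phase vectors; the truncated Chebyshev series is damped with
    the Jackson kernel to suppress Gibbs oscillations.
    """
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    a = 1.01 * np.abs(np.linalg.eigvalsh(H)).max()   # rescale spectrum into (-1, 1)
    Ht = H / a
    mu = np.zeros(n_moments)
    for _ in range(n_random):
        r = rng.choice([-1.0, 1.0], size=n)          # random-phase vector
        v_prev, v = r.copy(), Ht @ r
        mu[0] += r @ r
        mu[1] += r @ v
        for m in range(2, n_moments):
            v_prev, v = v, 2.0 * (Ht @ v) - v_prev   # Chebyshev recurrence
            mu[m] += r @ v
    mu /= n_random * n
    # Jackson kernel damping factors g_m
    M = n_moments
    m = np.arange(M)
    g = ((M - m + 1) * np.cos(np.pi * m / (M + 1))
         + np.sin(np.pi * m / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
    x = np.linspace(-0.99, 0.99, n_omega)
    T = np.cos(np.outer(m, np.arccos(x)))            # T_m(x)
    w = np.full(M, 2.0)
    w[0] = 1.0
    rho = (w * g * mu) @ T / (np.pi * np.sqrt(1.0 - x**2))
    return a * x, rho / a                            # back to physical units
```

The Jackson kernel keeps the reconstructed DOS essentially non-negative, and the effective broadening scales as $\sim\pi a/M$, so the number of moments controls the energy resolution.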
In Fig.~\ref{fig:Fig4}, we compare the TDOS from the TMDCA and the average DOS from the DCA for several values of the diagonal disorder strength $V_A$ at fixed off-diagonal disorder amplitudes $t^{AA}=1.5$, $t^{BB}=0.5$, $t^{AB}=1.0$. To show the effect of a finite cluster with non-local correlations, we present data for the single site ($N_{c}=1$) and a finite cluster ($N_{c}=32$). The TMT ($N_c=1$) again underestimates the extended-states regime, having a narrower TDOS as compared to $N_c=32$. For small disorder $V_{A}$, both the DOS and the TDOS are practically the same. However, as $V_{A}$ increases, significant differences start to emerge. Increasing $V_A$ leads to the gradual opening of a gap, which is more pronounced in the $N_c=1$ case; at smaller disorder ($V_{A}=0.6$) the gap is still partially filled for the $N_c=32$ cluster. As compared to the diagonal disorder case of Fig.~\ref{fig:Fig3}, the average DOS and TDOS become asymmetric with respect to zero frequency due to the off-diagonal randomness. \begin{figure}[t!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig6_pd} \caption{(Color online). Disorder-energy phase diagram for the diagonal disorder case (left panel) and for the off-diagonal disorder case (right panel). Parameters used for the diagonal disorder case are $t^{AA}=t^{BB}=t^{AB}=1.0$ and $c_A=0.5$; for the off-diagonal disorder case, $t^{AA}=1.5$, $t^{BB}=0.5$, $t^{AB}=1.0$, and $c_A=0.5$. We compare the mobility edges obtained from the TMT $N_c=1$ (dashed line), the TMDCA $N_c=72$ (solid line), and the transfer-matrix method (TMM) (dotted line). The single-site $N_c=1$ calculation strongly underestimates the extended-states region for diagonal and, especially, for off-diagonal disorder. The mobility edges obtained from the finite-cluster TMDCA ($N_c=72$) show good agreement with those obtained from the TMM, in contrast to the single-site TMT. See text for parameters and details of the TMM implementation.
\label{fig:Fig5}} \end{figure} In Fig.~\ref{fig:Fig5} we present the disorder-energy phase diagram for both diagonal (left panel) and off-diagonal (right panel) disorder calculated using the single-site TMT ($N_c=1$) and the non-local TMDCA ($N_c=72$). To check the accuracy of the mobility edge trajectories extracted from our typical medium analysis, we compare our data with the results obtained with the transfer matrix method (TMM). The TMM~\cite{MacKinnonKramer1983,Kramer2010,Markos-review-2006} is a well-established numerical method for calculating the correlation length and determining the mobility edge of the disordered Anderson model. Its main advantage is its ability to capture the effects of rather large system sizes. Thus, it provides good data for a finite-size scaling analysis to extract the critical points and the corresponding exponents. In our calculations, the transmission of states down a three-dimensional bar of width $M = [6,12]$ and length $L = 2 \times 10^4 M$ is studied by applying products of the transfer matrices to random initial states. The multiplication of transfer matrices is numerically unstable. To avoid this instability, we orthogonalized the transfer matrix product every five multiplications using a LAPACK QR decomposition.~\cite{Slevin2014} The localization edge is obtained by calculating the Kramer-MacKinnon scaling parameter $\Lambda_M$.~\cite{MacKinnonKramer1983} This is a dimensionless quantity which should be invariant at the critical point; that is, $\Lambda_M$ scales as a constant for $M \rightarrow \infty$.~\cite{Kramer2010} Thus, we determine the boundary of the localization transition vis-\`{a}-vis the critical disorder strength~\cite{plyushchay2003} by performing a linear fit to the $\Lambda_M$ vs. $M$ data: localized states have a negative slope, and vice versa for extended states.
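The stabilized transfer-matrix product is the numerical core of the TMM. As an illustration (a strictly one-dimensional Anderson chain with box disorder, not the quasi-one-dimensional bar geometry used above), the following sketch computes the Lyapunov exponent, i.e. the inverse localization length, re-orthogonalizing the product by QR every five multiplications as described in the text:

```python
import numpy as np

def lyapunov_1d(energy, width, n_steps=100_000, reorth=5, seed=2):
    """Largest Lyapunov exponent (inverse localization length) of a 1D
    Anderson chain with hopping t = 1 and site energies drawn uniformly
    from [-width/2, width/2], via a QR-stabilized transfer-matrix product."""
    rng = np.random.default_rng(seed)
    Q = np.eye(2)
    log_r = np.zeros(2)
    for step in range(1, n_steps + 1):
        eps = rng.uniform(-width / 2, width / 2)
        # psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}
        T = np.array([[energy - eps, -1.0],
                      [1.0, 0.0]])
        Q = T @ Q
        if step % reorth == 0:               # stabilize the product
            Q, R = np.linalg.qr(Q)
            log_r += np.log(np.abs(np.diag(R)))
    return log_r[0] / n_steps                # gamma = 1 / xi
```

In one dimension all states are localized, so $\gamma>0$ for any disorder and grows with the disorder width; in the quasi-1D bar geometry one instead tracks the smallest positive exponent across widths $M$ and forms the scaling parameter $\Lambda_M$.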
Finite-size effects in the transfer-matrix method are largest for weak disorder, where the states decay slowly with distance and therefore yield large values of $\Lambda_M$ that carry a large variance. Notice that the CPA and the DCA do not suffer from such finite-size limitations at small disorder and are in fact exact in this limit. The mobility edges shown in Fig.~\ref{fig:Fig5} were extracted from the TDOS, with the boundaries defined by the vanishing of the TDOS. As can be seen in Fig.~\ref{fig:Fig5}, while the single-site TMT does not change much under the effect of off-diagonal disorder, the TMDCA results are significantly modified. The bands for the larger cluster become highly asymmetric, with a significant widening of the A subband. The local $N_c=1$ boundaries are narrower than those obtained for $N_c=72$, indicating that the TMT strongly underestimates the extended-states regime for both diagonal and off-diagonal disorder. On the other hand, comparing the mobility edge boundaries for $N_c=72$ with those obtained using the TMM, we find very good agreement. This again confirms the validity of our generalized TMDCA. \begin{figure}[tbh] \centering{} \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig7_concentration_dep} \caption{(Color online). The average DOS (dot-dashed lines) and the typical DOS (shaded regions) for various values of the concentration $c_A$ with off-diagonal disorder parameters $t^{AA}=1.1$, $t^{BB}=0.9$, and $t^{AB}=1.0$, at fixed local potential $V_A=1.0$, for $N_c=1$ (left panel) and $N_c=32$ (right panel).} \label{fig:concentartion_tdos_ados} \end{figure} Next, we consider the effect of off-diagonal disorder at various concentrations $c_A$. In Fig.~\ref{fig:concentartion_tdos_ados}, we show the typical and average DOS for several values of $c_A$ calculated with the TMDCA and the DCA, respectively. As expected, when $c_A\rightarrow 0$, we obtain a pure $B$ subband contribution (the top panel).
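The distinction between the arithmetically averaged DOS and the geometrically averaged (typical) DOS that underlies this analysis can be illustrated directly from exact eigenstates of small disordered clusters. A minimal sketch (a plain Anderson chain with Lorentzian broadening, for exposition only, not the self-consistent TMDCA):

```python
import numpy as np

def ados_and_tdos(n_sites=128, width=2.0, n_real=20, eta=0.05, seed=3):
    """Arithmetic (ADOS) and geometric/typical (TDOS) averages of the
    local DOS rho_i(w) for a disordered tight-binding chain."""
    rng = np.random.default_rng(seed)
    omegas = np.linspace(-3.5, 3.5, 141)
    local = []
    for _ in range(n_real):
        H = np.diag(rng.uniform(-width / 2, width / 2, n_sites))
        idx = np.arange(n_sites)
        H[idx, (idx + 1) % n_sites] = H[(idx + 1) % n_sites, idx] = -1.0
        evals, evecs = np.linalg.eigh(H)
        # rho_i(w) = sum_k |psi_k(i)|^2 * Lorentzian(w - E_k)
        lor = (eta / np.pi) / ((omegas[:, None] - evals[None, :])**2 + eta**2)
        local.append(lor @ (np.abs(evecs)**2).T)     # shape (n_omega, n_sites)
    local = np.concatenate(local, axis=1)            # all sites, all realizations
    ados = local.mean(axis=1)                        # arithmetic average
    tdos = np.exp(np.log(local + 1e-300).mean(axis=1))  # geometric average
    return omegas, ados, tdos
```

By the arithmetic-geometric mean inequality the TDOS never exceeds the ADOS, and increasing disorder suppresses it much more strongly, which is what makes it a useful order parameter for localization.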
Upon gradual increase of the concentration $c_A$, the number of states in the $A$ subband grows, until the $B$ subband becomes a minority for $c_A>0.5$ and disappears completely as $c_A\rightarrow 1$ (the bottom panel). Again, we see that a finite cluster ($N_c=32$) provides a more accurate description of both the average DOS and the TDOS, with finer details in the DOS and broader regions of extended states in the TDOS. The associated contour plots for the evolution of the TDOS in the concentration range $0\leq c_A\leq1$ are shown in Fig.~\ref{fig:concentartion_contour}. These plots show the overall evolution of the typical and average DOS for fixed local potential and off-diagonal disorder parameters as a function of the concentration $c_A$. In the limit $c_A\rightarrow 0$, only the B subband, centered around $\omega=-V_A$, survives, and for $c_A\rightarrow 1$, only the A subband, centered around $\omega=V_A$, is present. For intermediate concentrations, we clearly have contributions to the total typical density of states from both species, as expected. \begin{figure}[tbh] \begin{raggedright} \includegraphics[scale=0.225]{Fig9_Nc1} \includegraphics[scale=0.225]{Fig9_Nc32} \par\end{raggedright} \caption{(Color online). The evolution of the typical density of states for $N_c=1$ (left panel) and $N_c=32$ (right panel) with the change in the concentration $0<c_A<1$ at fixed diagonal and off-diagonal disorder parameters $t^{AA}=1.1$, $t^{BB}=0.9$, $t^{AB}=1.0$, and $V_A=1.0$. } \label{fig:concentartion_contour} \end{figure} Finally, we would like to comment on possible further developments of the presented scheme. After certain generalizations, our current implementation of the typical medium dynamical cluster approximation for off-diagonal disorder can serve as a natural formalism for multiband (multiorbital) systems.~\cite{Koepernik} Such an extension is crucial for studying disorder and localization effects in real materials.
Further development in this direction will be the subject of future publications. \section{Conclusion} \label{sec:conclusion} A proper theoretical description of disordered materials requires the inclusion of both diagonal and off-diagonal randomness. In this paper, we have extended the BEB single-site CPA scheme to a finite-cluster DCA that incorporates the effects of non-local disorder. Applying the generalized DCA scheme to a single-band tight-binding Hamiltonian with configuration-dependent hopping amplitudes, we have considered the effects of non-local disorder and the interplay of diagonal and off-diagonal disorder on the average density of states. By comparing our numerical results with those from exact diagonalization, we have established the accuracy of our method. To study the effect of disorder on electron localization and to determine the mobility edge in systems with both diagonal and off-diagonal randomness, we have also extended our recently developed TMDCA to include off-diagonal randomness. Within the TMDCA, the typical DOS vanishes for localized states and is finite for states which are extended. Employing the typical DOS as an order parameter for Anderson localization, we have constructed the disorder-energy phase diagram for systems with both diagonal and off-diagonal disorder. We have also demonstrated the inability of the single-site CPA and TMT methods to accurately capture disorder and localization effects in the average and typical DOS, respectively. Comparing the results of our extended DCA and TMDCA with the kernel polynomial, exact diagonalization, and transfer-matrix methods, we find remarkably good agreement. To the best of our knowledge, this is the first numerically accurate investigation of Anderson localization in systems with off-diagonal disorder within the framework of the typical medium analysis.
We believe that the extended TMDCA scheme presents a powerful tool for treating both diagonal and off-diagonal disorder on an equal footing, and that it can be easily extended to study localization in multi-band systems. \begin{acknowledgments} We thank A. Gonis for useful discussions and for directing us to the BEB formalism; we also thank Shuxiang Yang, Wei Ku, and Tom Berlijn for useful discussions. This work is supported by DOE SciDAC grant DE-FC02-10ER25916 (MJ) and BES CMCSN grant DE-AC02-98CH10886 (HT). Additional support was provided by NSF EPSCoR Cooperative Agreement No. EPS-1003897 (HT, CE, CM) and NSF OISE-0952300 (JM). This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575, the high-performance computational resources provided by the Louisiana Optical Network Initiative (http://www.loni.org), and HPC@LSU computing. \end{acknowledgments}
\section*{Acknowledgments} This research was funded by the Mathematics Division of the Center for Computational Engineering Science at RWTH Aachen University, and by the Volkswagen Foundation. We would like to thank Manuel Torrilhon for his support. The funding sources had no role in the study design, in the collection, analysis, and interpretation of data, or in the writing of the report. The calculations were run on RWTH Aachen's Compute Cluster under the rwth0572 project. \section{Selected nuclide models} \begin{figure} \centering \resizebox{.95\textwidth}{!}{ \input{Sections/Images/Sr902d} \input{Sections/Images/Sr901d} } \caption{Reconstruction plots for $^{90}$Sr. The left plot shows the GPR model based on samples generated by the Sobol sequence, showing the mass as a function of burnup and temperature. The reported masses have been scaled by the number of fuel assemblies in the reactor and their dimensions. The squares indicate the samples obtained from the Sobol sequence. The plot on the right shows the same model, plotting this time the predictions for two fixed temperature values alongside their posterior uncertainties. The latter are shown at the 6-$\sigma$ level. Below this plot, the errors of both curves are shown. It can be observed that they are distributed around zero.} \label{fig:sr90} \end{figure} \begin{figure} \centering \resizebox{.95\textwidth}{!}{ \input{Sections/Images/Eu1542d} \input{Sections/Images/Eu1541d} } \caption{Reconstruction plots for $^{154}$Eu. The left plot shows the GPR model based on samples generated by the Sobol sequence, showing the mass as a function of burnup and temperature.
The reported masses have been scaled by the number of fuel assemblies in the reactor and their dimensions. The squares indicate the samples obtained from the Sobol sequence. The plot on the right shows the same model, plotting this time the predictions for two fixed temperature values alongside their posterior uncertainties. The latter are shown at the 6-$\sigma$ level. Below this plot, the errors of both curves are shown. It can be observed that they are distributed around zero.} \label{fig:eu154} \end{figure} \section{Conclusion}\label{section:conclusion} We have compared two interpolation methods for the direct, fast prediction of spent fuel nuclide compositions. As expected, GP models typically require significantly more time to interpolate values. While the time itself is of the order of milliseconds for a relatively small training set size, for large-scale applications involving a very large number of calculations for many nuclides, as is for instance the case when solving Bayesian inference problems using Markov Chain Monte Carlo \cite{ESARDA-Antonio}, it is desirable to keep the required calculation time as low as possible. In return for this larger calculation time, GP models provide an estimate of the prediction uncertainty as well as an explicit method for estimating the gradient of the underlying function, provided the kernels used are differentiable. This can be useful for the solution of both forward and inverse optimization problems. Additionally, we have noted that GP models result in smaller interpolation errors under the $RMSE$ metric, with reduction factors of up to 20 when compared to Cubic Spline interpolation. Several possibilities exist for the extension of this work. As of now, we have created GP models depending only on two input parameters using the ASE kernel.
Future research could entail the inclusion of a larger number of parameters, such as power or reactor downtime, as well as the use of different kernels and even combinations of kernels for more flexible models. Higher-fidelity GP models could be created through 3-D full-core reactor simulations, to account for spatial variation of nuclide concentrations and of burnup, temperature, and power levels. Building such GP-based models would, however, be significantly more computationally expensive. As we have only studied GPR for the direct prediction of isotopic compositions, a further research area could be the study of the predictive performance when implementing GPR on the cross-section level. This would enable implementing changes in the model parameters during the irradiation cycle, and could make forward uncertainty estimation via Monte Carlo methods feasible, calculating the nuclide concentrations through the matrix exponential method thanks to the predictive variance of the interpolated one-group cross-sections. In conclusion, having discussed here the many advantages GP-based modelling of nuclear processes has for nuclear engineering applications, we hope that its full potential will be further studied and exploited in the future. \section*{Data Availability} The code used for the implementation of GP models and their comparison with Cubic Spline models can be obtained at: \url{https://github.com/FigueroaAC/GPs-for-SpentFuel} \section{Introduction} The computation of spent fuel nuclide compositions is a complex problem which involves modelling the fuel assembly and reactor geometries and tracking the nuclide evolution as the fuel is irradiated following an operational history described by parameters such as burnup, power, and temperature. This requires the numerical solution of the neutron transport equation through probabilistic (namely Monte Carlo) or deterministic methods \cite{nuchb}, which are computationally expensive.
For several applications, however, it is desirable to have a fast and computationally less intensive method to compute nuclide compositions, especially when many repeated calculations are needed for further analysis. Examples are nuclear fuel cycle simulators \cite{cyclus}, nuclear forensics applications \cite{forensics}, and sensitivity analyses; the latter is illustrated by \cite{sensitivity}, where such an analysis is performed for a model which discriminates reactor types based on measurements of plutonium samples and their errors. This problem has been addressed in the past through pre-computed databases of reactor simulations and the interpolation of either the one-group cross-sections \cite{arp}\cite{cyclus2} or the nuclide concentrations resulting from these simulations, with methods such as Nearest Neighbors, Linear Fit, Lagrange Polynomials, Neural Networks \cite{baptiste}, and Cubic Splines, the latter being currently the most commonly used. Up until now, most of the available software packages have used infinite-lattice reactor simulations sampled on a uniform multi-dimensional grid of at most three dimensions \cite{arp}\cite{cyclus2}. For low-dimensional problems ($d\leq3$), uniform grid sampling produces good results; however, for a larger number of dimensions, this sampling method does not distribute the sampling points efficiently across space \cite{DOE}, resulting in poor exploration of the parameter space. Quasirandom sampling methods provide better space coverage at higher dimensions \cite{DOE}. However, some of the above-mentioned interpolation methods require the samples to be distributed on a grid in order to perform correctly. Furthermore, while the interpolation quality can be estimated, the aforementioned interpolation methods do not provide information on the expected variance of the interpolation at non-sampled points. Here, we propose using Gaussian Processes (GP) for interpolation, by considering noiseless inputs.
Under this assumption, GPs provide zero error on points arising from the dataset on which the models have been trained, whereas a regression is performed for unseen data points. In addition, this method has already been used in conjunction with quasirandom sampling \cite{ESARDA-Antonio}. Furthermore, the gradient and estimates of the prediction uncertainty at non-sampled points are obtained directly. The former is useful for applications such as forward and inverse optimization problems, the latter for uncertainty quantification. GPs belong to a set of tools used in the Machine Learning communities for a variety of tasks, including classification and regression. With Gaussian Process Regression (GPR), the interpolation is not performed with a specific function but over an infinite distribution of functions that share common properties as defined by the user. An equivalent concept to GPR, called kriging, is well known in the field of geology \cite{recipes}. Recently, researchers have used GP-based surrogate models for nuclear engineering applications such as modeling of equipment degradation and preventive maintenance \cite{degradation}, or studying fuel performance and thermo-hydraulics \cite{BayesGP}. They have also been used for the prediction of fuel nuclide compositions and compared to surrogate models based on Dynamic Mode Decomposition, by performing regression on a Principal Component Analysis (PCA) model of the fuel isotopics \cite{dmd}. Additionally, we have started exploring their use for the direct prediction of spent fuel compositions, without a PCA-reduced model and using multidimensional input variables \cite{ESARDA-Antonio}, work that we update here. In this paper, we compare the performance of GP-based surrogate models to models based on Cubic Splines for the direct interpolation of spent fuel nuclide compositions (GP models could also interpolate cross-sections).
We compare to Cubic Splines as they often produce smoother and higher-quality interpolators than methods such as Lagrange and Newton polynomials \cite{na}. The performance of both techniques is explored through computational experiments involving different sampling configurations, using both grid sampling and Sobol quasirandom sequences, in order to assess the impact of the sampling strategy on the regressions. We study two-dimensional problems to examine whether GPR already performs well in problems where grid sampling is still effective. \section{Results and Discussion}\label{section:results} We have compared GPR-based models of spent fuel nuclide concentrations to Cubic Spline models of these quantities. The Spline models have been implemented using the \texttt{SmoothBivariateSpline} module from the \texttt{Scipy} \cite{scipy} python package. The following comparison is focused on two main elements: runtime performance and model quality. \subsection{Runtime performance} We have studied the mean time required to perform a prediction for the interpolation methods discussed in this article. This calculation time, which we refer to here as \textit{runtime}, is different from the time required to train the model, which in itself depends on the sample size, the number of dimensions, and the subsequent optimization problem that has to be solved to derive the appropriate kernel hyperparameters. The tests have been performed on a 2.2 GHz \texttt{Intel Core i7} computer with 4 cores and 16GB of RAM. Table \ref{tab:runtime} shows the regression time required, averaged over all the models, with the time for a GP model separated into the time required for the prediction, \textbf{GPR}$-\mu$, and the time required for the estimation of the prediction variance, \textbf{GPR}$-\sigma$, as these calculations can be run independently of each other. Interpolation using GP models can require significantly more time than interpolation based on Cubic Splines.
It might be possible that, through the use of more efficient implementations, the runtime difference between the two can be reduced in the future. \input{Sections/Tables/tableruntime} \subsection{Model Quality} We have analyzed the quality of the models based on GPs and Cubic Splines using the experimental designs presented in Table \ref{tab:setups} and Figures \ref{fig:SvG} and \ref{fig:exppatterns}. For this, we have considered an array of metrics to quantify model performance, namely the root-mean-square errors ($RMSE$) and the coefficients of determination $R^{2}$. We also examine an additional metric related to the posterior predictive variance produced by GPR, in order to study the quality of this estimator. This quantity, $Pred-1\sigma$, quantifies the fraction of model predictions located within one predictive standard deviation from the true values of the test sets, i.e. it shows whether the predicted variance represents the variance that is actually observed. While models were created for each of the nuclides tracked by \texttt{SERPENT 2}, Tables \ref{tab:grid2} and \ref{tab:sobolnew} contain the results of our analysis for a small subset of them, representative of both major actinides and fission products. The mean values shown have been scaled by the assembly length and the number of fuel assemblies in the reactor core. \input{Sections/Tables/tablegrid2} \input{Sections/Tables/tablesobolnew} In general, we observe that, based on the $R^{2}$ metric, both interpolation methods perform well for most of the nuclides. Furthermore, we note that, based on the $RMSE$ metric, GP-based models perform better in all the experimental configurations for all isotopes. The improvement of the mean $RMSE$ is in most cases between a factor of 1.02 and 10. Still, Cubic Splines could provide on average reasonable results for most nuclides, provided the application requirements allow for $RMSE$ errors of such magnitude.
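The three quantities reported in the tables can be computed with a small helper; the $Pred-1\sigma$ column applies only when the model (GPR) supplies a predictive standard deviation:

```python
import numpy as np

def model_metrics(y_true, y_pred, y_std=None):
    """RMSE, coefficient of determination R^2 and, when a predictive
    standard deviation is available (GPR), the fraction of test points
    lying within one sigma of the prediction (the Pred-1sigma column)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred)**2))
    ss_res = np.sum((y_true - y_pred)**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    out = {"RMSE": rmse, "R2": 1.0 - ss_res / ss_tot}
    if y_std is not None:
        within = np.abs(y_true - y_pred) <= np.asarray(y_std, dtype=float)
        out["Pred-1sigma"] = float(np.mean(within))
    return out
```

Applied per nuclide over the 456-point test set, this reproduces the columns of the result tables.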
We also observe that the variance predictor of the GP model captures the true values at the $1-\sigma$ level reasonably well, which is not guaranteed for every GPR application. In addition, it can be seen that both surrogate model methods perform better on a uniform grid-based sample set than on a sample set based on Sobol sequences. Nevertheless, based on a study where different sampling schemes are compared for their space-covering properties and efficiency \cite{DOE}, one would expect the positive properties of Sobol sequences to become manifest in problems of higher dimensionality. \section{Creating the datasets}\label{sec:Models} \subsection{Reactor simulations} For our research, we have built a 2D infinite-lattice model of a CANDU 6 reactor based on specifications from the available literature \cite{Candu}. The implementation has been made in the computer code \texttt{SERPENT 2}, which couples a Monte Carlo neutron transport module with a fuel depletion solver based on the Chebyshev Rational Approximation Method \cite{serpent}. The quality of the model has been examined by comparing end-of-cycle isotopic compositions with the \texttt{Bruce-1} dataset reported in the SFCOMPO-2.0 database \cite{sfcompo}. Figure \ref{fig:2dlattice} shows the CANDU 6 37-element fuel assembly implemented in \texttt{SERPENT 2}.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Sections/Images/Candu_geom.png} \caption{CANDU 6, 37-element fuel assembly, 2D infinite-lattice implementation in \texttt{SERPENT 2} (light green indicates the fuel elements, white the coolant, pink the calandria and pressure tubes, black the void between these tubes, and turquoise the moderator between fuel channels)} \label{fig:2dlattice} \end{figure} \begin{table}[ht] \centering \begin{tabular}{c c} \toprule \textbf{Parameters} & \textbf{Range} \\ \midrule Moderator Temperature & 333 - 363 $K$ \\ Burnup & 0.1 - 7 $\frac{\mathrm{MWd}}{\mathrm{kg_{HM}}}$ \\ \bottomrule \end{tabular} \caption{Parameter ranges used in the generation of the training and testing datasets} \label{tab:paramranges} \end{table} \subsection{Sampling strategies} Strategies are required to sample the sets of input parameters for each \texttt{SERPENT 2} simulation involved in model development and testing. In this study, we sample moderator temperature and discharge burnup. Table \ref{tab:paramranges} indicates the range of values considered for these parameters. Previous studies on sampling schemes and strategies have shown that uniform grid sampling performs poorly in higher-dimensional input spaces \cite{DOE}. While a random number sequence can overcome this issue in principle, due to the nature of pseudo-random number generator implementations, random number sequences tend to cluster, resulting in a non-uniform distribution of samples across the input space. In such cases, a quasirandom sequence can provide a set of samples with a better spatial distribution \cite{qsr}. Sobol quasirandom sampling is a method to generate quasirandom sequences designed to minimize the star discrepancy, a mathematical term describing the difference between the distribution of values generated by a certain generating scheme and a multidimensional uniform probability distribution.
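A sketch of such a design over the parameter ranges of Table~\ref{tab:paramranges} can be written with \texttt{scipy}'s Sobol generator (used here for illustration in place of the paper's own sampler implementation):

```python
import numpy as np
from scipy.stats import qmc

def sobol_design(n_samples=169, seed=0):
    """Scrambled Sobol design over the (moderator temperature, burnup)
    ranges 333-363 K and 0.1-7 MWd/kg_HM used for the training sets."""
    sampler = qmc.Sobol(d=2, scramble=True, seed=seed)
    unit = sampler.random(n_samples)                  # points in [0, 1)^2
    return qmc.scale(unit, [333.0, 0.1], [363.0, 7.0])
```

Each row is one (temperature, burnup) pair to be passed to a depletion simulation; scrambling preserves the low-discrepancy structure while randomizing the point set.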
The generation of the samples involves a special algorithm in which bitwise operations are performed. Details on the algorithm and its implementation can be found in \cite{SobolSQ}. In order to evaluate the impact of the sampling method on the prediction quality of the models, two scenarios have been prepared. In Scenario A, a 25x25 2D-grid dataset was generated and simulated, based on the parameter ranges of Table \ref{tab:paramranges}. In this scenario, a set of 169 samples was selected by starting from the corners of the grid and leaving one grid element in between. The remaining grid points are then used as the test dataset, for both this and the following scenario. Scenario B consists of 169 samples generated from a Sobol sequence sampler written in-house. Again, the test dataset is the same as for Scenario A. \\ Figure \ref{fig:SvG} shows a comparison between the samples generated via the Sobol sequence and the aforementioned grid. While grid sampling is still effective in two dimensions, the good space coverage of the Sobol sequence is evident. Figure \ref{fig:exppatterns} shows the spatial arrangement of Scenario A's training set through the bright elements, while the darker elements represent the test dataset used for both scenarios. Table \ref{tab:setups} summarizes the different scenarios and their characteristics. \begin{figure} \centering \input{Sections/Images/sampling_comparison} \caption{Comparison between the dataset based on Sobol quasirandom sampling (left) and the dataset based on grid sampling (right).
While both the Sobol sequence and grid sampling are effective in two dimensions as seen here, the Sobol sequence by far outperforms grid sampling in higher dimensions, at the same number of samples.} \label{fig:SvG} \end{figure} \begin{figure} \centering \includegraphics[width=0.32\textwidth,trim=2cm 1.3cm 2cm 1.4cm, clip]{Sections/Images/grid2.pdf} \caption{Spatial distribution pattern used to evaluate the performance of the surrogate models on a grid. The yellow elements represent the simulations to be used for training the interpolators, while the dark regions represent the cells where the values will be interpolated. } \label{fig:exppatterns} \end{figure} \begin{table}[ht] \centering \begin{tabular}{c c c} \toprule \textbf{Configuration}&\textbf{Training set size} & \textbf{Test set size} \\ \midrule Scenario A& 169 (Grid) & 456 (Grid) \\ Scenario B & 169 (Sobol) & 456 (Grid) \\ \bottomrule \end{tabular} \caption{Details of the sampling strategies. The different configurations allow for the study of model performance under varying spatial sample distributions} \label{tab:setups} \end{table} \section{Building the model} \label{sec:Reg} Based on the pre-generated training data, we built the Cubic Spline and the GPR models. In the following section we examine these methods. \subsection{Cubic Spline Interpolation} Cubic Spline Interpolation is a method to interpolate functions within the boundaries described by a number of its samples. The interpolation procedure consists of fitting a piecewise cubic polynomial between the sampled points. This method results in smoother and higher quality interpolators in comparison to methods such as Langrange and Newton polynomial fitting \cite{NM2}. In addition, it has the advantage of avoiding oscillations and Runge's phenomenon. A more detailed description of the method and its implementation can be found in \cite{CSplines}. 
\subsection{Gaussian Process Regression} A GP is a set of random variables that share a joint Gaussian probability distribution. GP's can be used for the construction of probabilistic surrogate models of black-box problems where an analytical form is unavailable or intractable and the computational cost is elevated. Without loss of generalization, the aim of GPR is to approximate a function $Y(\vv{X}),\, \vv{X} = (x_{1},..., x_{d})$ through a GP as: \begin{equation} Y(\vv{X}) \approx \mathcal{G}\mathcal{P}(\vv{X}) = \mathcal{N}(0,K(\vv{X})) \end{equation} Where $d$ is the dimension of the input space. The kernel ($K$) of a GP is a function which describes the covariance between the inputs of the target function. It encodes a limited set of assumptions about the underlying function such as smoothness and differentiability. By choosing a kernel, an infinite set of functions which share the kernel properties are used to perform the regression, and by training the kernel parameters based on the input and output data a subset of those functions are chosen that match the data. We chose the Anisotropic Squared Exponential (ASE) kernel, which provides for very smooth interpolation and is infinitely differentiable, as we expect that the change of nuclide concentrations throughout the parameter space would meet these characteristics. An additional advantage is that the ASE kernel allows for the determination of relative input parameter relevance through the use of different correlation lengths parameters ($\ell_{i}$) for each input: \begin{equation}\label{eq:ASE} K(\vv{X},\vv{Z}) = exp\left(-\sum_{i}^{d}\left( \dfrac{x_{i}-z_{i} } {\ell_{i}}\right)^{2}\right) \end{equation} Where $\vv{X},\vv{Z}$ are two different points in which the covariance function is evaluated. The smaller $\ell_{i}$, the more sensitive the underlying model is to changes in input $x_{i}$. 
Once the parameters of the kernel that reproduce the training data are estimated, the posterior predictive distribution of the GP - our model - at an unseen point ($*$) is given by a normal distribution with prediction mean ($\mu_{*}$), prediction variance ($\sigma^{2}_{*}$) and prediction gradient ($\nabla\mu_{*}$): \begin{align} \mu_{*} &= K_{train,*} (K_{train,train})^{-1} Y_{train} \nonumber \\ \sigma^{2}_{*} &= K_{*,*} - K_{*,train}^{T}(K_{train,train})^{-1}K_{*,train} \nonumber \\ \nabla\mu_{*} &= \nabla K_{train,*} (K_{train,train})^{-1} Y_{train} \end{align} To implement the GPR models, we have used the \texttt{scikit-learn} Python package \cite{scikit-learn}. More details on GP's and their kernels can be found in \cite{6}. \subsection{Cross-validation} Since the kernel training process is strongly dependent on the training data, weak prediction performance can occur if an inappropriate selection of the training set is made. This can be avoided by cross-validation. It is implemented by splitting the training set into k ``folds'' of approximately equal size and performing the training on each fold, thus obtaining the model parameters, then using the other folds as test data to evaluate the model predictive quality. This should be performed several times by randomizing the selection of the folds, thus generating a set of plausible model parameters from which the best performing combinations can be selected. Notwithstanding this, a major benefit of cross-validation is that it allows the training of models with different combinations of samples spanning the entire input space, providing parameter sets that tend to enhance the model generalization properties, typically at a smaller computational cost since the training set size is reduced. Each \texttt{SERPENT 2} simulation provides an output vector containing about 1300 nuclides. We have created GPR models for each of these nuclides for both datasets. 
Each GPR model has been trained using a 5-fold cross validation scheme that has been repeated 10 times, thus generating a set of 50 kernel parameter combinations from which the best-performing is chosen. Figures \ref{fig:gpr_model}, \ref{fig:sr90} and \ref{fig:eu154} show the computed GPR models for $^{239}$Pu, $^{90}$Sr and $^{154}$Eu respectively. The models shown are based on the Sobol sequence dataset from scenario B. \begin{figure} \centering \resizebox{.95\textwidth}{!}{ \input{Sections/Images/Pu2d} \input{Sections/Images/Pu1d} } \caption{Reconstruction plots for $^{239}$Pu. The left plot shows the GPR model based on samples generated by the Sobol sequence, showing the mass as a function of burnup and temperature. The reported masses have been scaled by the number of fuel assemblies in the reactor and their dimensions. The squares indicate the samples obtained from the Sobol sequence. The plot on the right shows the same model but plotting this time the predictions for two fixed temperature values alongside their posterior uncertainties. The latter are shown at the 6-$\sigma$ level. A portion of the high burnup section of the plot has been zoomed in in the top left corner, making evident the small albeit important effect of temperature at higher burnups. A lower temperature results in more plutonium being produced. This can be explained by a thermal broadening of $^{238}$U resonances. Below this plot, the errors of both curves is shown. It can be observed that they are distributed around zero.} \label{fig:gpr_model} \end{figure} \section{} \input{Sections/Introduction} \input{Sections/TheoreticalBackground} \input{Sections/Results} \input{Sections/Conclusion} \input{Sections/DataAvailability} \input{Sections/Acknowledgements} \section*{Online submitted abstract} \section*{Acknowledgments} This research was funded by the Mathematics Division of the Center for Computational Engineering Science at RWTH Aachen University, and the Volkswagen Foundation. 
We would like to thank Manuel Torrilhon for his support. The funding sources had no role in the study design, the collection, analysis and interpretation of data, or the writing of the report. The calculations were run on RWTH Aachen's Compute Cluster under the rwth0572 project. \section{Selected nuclide models} \begin{figure} \centering \resizebox{.95\textwidth}{!}{ \input{Sections/Images/Sr902d} \input{Sections/Images/Sr901d} } \caption{Reconstruction plots for $^{90}$Sr. The left plot shows the GPR model based on samples generated by the Sobol sequence, giving the mass as a function of burnup and temperature. The reported masses have been scaled by the number of fuel assemblies in the reactor and their dimensions. The squares indicate the samples obtained from the Sobol sequence. The plot on the right shows the predictions of the same model for two fixed temperature values, alongside their posterior uncertainties at the 6-$\sigma$ level. Below this plot, the errors of both curves are shown. It can be observed that they are distributed around zero.} \label{fig:sr90} \end{figure} \begin{figure} \centering \resizebox{.95\textwidth}{!}{ \input{Sections/Images/Eu1542d} \input{Sections/Images/Eu1541d} } \caption{Reconstruction plots for $^{154}$Eu. The left plot shows the GPR model based on samples generated by the Sobol sequence, giving the mass as a function of burnup and temperature. The reported masses have been scaled by the number of fuel assemblies in the reactor and their dimensions. The squares indicate the samples obtained from the Sobol sequence. The plot on the right shows the predictions of the same model for two fixed temperature values, alongside their posterior uncertainties at the 6-$\sigma$ level. Below this plot, the errors of both curves are shown.
It can be observed that they are distributed around zero.} \label{fig:eu154} \end{figure} \section{Conclusion}\label{section:conclusion} We have compared two interpolation methods for the direct, fast prediction of spent fuel nuclide compositions. As expected, GP models typically require significantly more time to interpolate values. While this time is of the order of milliseconds for a relatively small training set, for large-scale applications involving a very large number of calculations for many nuclides, as is for instance the case when solving Bayesian inference problems using Markov Chain Monte Carlo \cite{ESARDA-Antonio}, it is desirable to keep the required calculation time as low as possible. In return for this larger calculation time, GP models provide an estimate of the prediction uncertainty as well as an explicit method for estimating the gradient of the underlying function, provided the kernels used are differentiable. This can be useful for the solution of both forward and inverse optimization problems. Additionally, we have noted that GP models result in smaller interpolation errors under the $RMSE$ metric, with reduction factors of up to 20 when compared to Cubic Spline interpolation. Several possibilities exist for the extension of this work. So far, we have created GP models depending only on two input parameters using the ASE kernel. Future research could include a larger number of parameters, such as power or reactor downtime, as well as different kernels and even combinations of kernels for more flexible models. Higher-fidelity GP models could be created from 3D full-core reactor simulations, to account for spatial variation of nuclide concentrations and of burnup, temperature and power levels. Building such GP-based models would, however, be significantly more computationally expensive.
Since we have only studied GPR for the direct prediction of isotopic compositions, a further research area could be its predictive performance when applied at the cross-section level. This would allow model parameters to change during the irradiation cycle. Moreover, the predictive variance of the interpolated one-group cross-sections would make forward uncertainty estimation feasible via Monte Carlo methods, with the nuclide concentrations computed through the matrix exponential method. In conclusion, having discussed the many advantages that GP-based modelling of nuclear processes offers for nuclear engineering applications, we hope that its full potential will be further studied and exploited in the future. \section*{Data Availability} The code used for the implementation of GP models and their comparison with Cubic Spline models can be obtained at: \url{https://github.com/FigueroaAC/GPs-for-SpentFuel} \section{Introduction} The computation of spent fuel nuclide compositions is a complex problem: it involves modelling the fuel assembly and reactor geometries and tracking the nuclide evolution as the fuel is irradiated, following an operational history characterized by parameters such as burnup, power, and temperature. This requires the numerical solution of the neutron transport equation through probabilistic (Monte Carlo) or deterministic methods \cite{nuchb}, which are computationally expensive. For several applications, however, it is desirable to have a fast and computationally less intensive method to compute nuclide compositions, especially when many repeated calculations are needed for further analysis.
Examples are nuclear fuel cycle simulators \cite{cyclus}, nuclear forensics applications \cite{forensics}, and sensitivity analyses; the latter are illustrated by \cite{sensitivity}, where such an analysis is performed for a model that discriminates reactor types based on measurements of plutonium samples and their errors. This problem has been addressed in the past through precomputed databases of reactor simulations and the interpolation of either the one-group cross-sections \cite{arp,cyclus2} or the nuclide concentrations resulting from these simulations, using methods such as Nearest Neighbors, Linear Fits, Lagrange Polynomials, Neural Networks \cite{baptiste}, and Cubic Splines, the latter currently being the most commonly used. Up until now, most of the available software packages have used infinite-lattice reactor simulations sampled on a uniform multi-dimensional grid of at most three dimensions \cite{arp,cyclus2}. For low-dimensional problems ($d\leq3$), uniform grid sampling produces good results; for a larger number of dimensions, however, this sampling method does not distribute the sampling points efficiently across space \cite{DOE}, resulting in poor exploration of the parameter space. Quasirandom sampling methods provide better space-coverage properties in higher dimensions \cite{DOE}. However, some of the above-mentioned interpolation methods require the samples to be distributed on a grid in order to perform correctly. Furthermore, while the interpolation quality can be estimated, these interpolation methods do not provide information on the expected variance of the interpolation at non-sampled points. Here, we propose using Gaussian Processes (GP) for interpolation, by considering noiseless inputs. Under this assumption, GPs provide zero error on the points of the dataset on which the models have been trained, whereas a regression is performed for unseen data points.
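This exact reproduction of the training points by a noiseless GP can be verified with a minimal zero-mean GP interpolator in NumPy (an illustrative sketch only, independent of the models built later in this work):

```python
import numpy as np

def se_kernel(X, Z, ell=1.0):
    # isotropic squared-exponential kernel between point sets X (n,d) and Z (m,d)
    d2 = ((X[:, None, :] - Z[None, :, :]) / ell) ** 2
    return np.exp(-d2.sum(axis=-1))

# training data: y = sin(x) sampled at six points
X = np.linspace(0.0, 3.0, 6)[:, None]
y = np.sin(X).ravel()

# noiseless fit: only a tiny jitter is added for numerical stability
K = se_kernel(X, X) + 1e-12 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict(X_new):
    return se_kernel(X_new, X) @ alpha

# the GP reproduces the training targets up to round-off level,
# while performing a genuine regression at unseen points
residual = np.max(np.abs(predict(X) - y))
```
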
In addition, this method has already been used in conjunction with quasirandom sampling \cite{ESARDA-Antonio}. Furthermore, the gradient of the prediction and an estimate of its uncertainty at non-sampled points are obtained directly. The former is useful for applications such as forward and inverse optimization problems, the latter for uncertainty quantification. GPs belong to a set of tools used in the machine learning community for a variety of tasks, including classification and regression. With Gaussian Process Regression (GPR), the interpolation is not performed with a single specific function but over an infinite distribution of functions that share common properties defined by the user. An equivalent concept, known as kriging, is well established in the field of geology \cite{recipes}. Recently, researchers have used GP-based surrogate models for nuclear engineering applications such as the modeling of equipment degradation and preventive maintenance \cite{degradation} or the study of fuel performance and thermal-hydraulics \cite{BayesGP}. They have also been used for the prediction of fuel nuclide compositions and compared to surrogate models based on Dynamic Mode Decomposition, by performing regression on a Principal Component Analysis (PCA) model of the fuel isotopics \cite{dmd}. Additionally, we have started exploring their use for the direct prediction of spent fuel compositions, without a PCA-reduced model and using multidimensional input variables \cite{ESARDA-Antonio}, work that we update here. In this paper, we compare the performance of GP-based surrogate models to models based on Cubic Splines for the direct interpolation of spent fuel nuclide compositions (GP could equally well interpolate cross-sections). We compare to Cubic Splines as they often produce smoother and higher-quality interpolators than methods such as Lagrange and Newton polynomials \cite{na}.
The performance of both techniques is explored through computational experiments involving different sampling configurations, using both grid sampling and Sobol quasirandom sequences, in order to assess the impact of the sampling strategy on the regressions. We study two-dimensional problems to examine whether GPR already performs well in settings where grid sampling is still effective. \section{Results and Discussion}\label{section:results} We have compared GPR-based models of spent fuel nuclide concentrations to Cubic Spline models of these quantities. The Spline models have been implemented using the \texttt{SmoothBivariateSpline} module from the \texttt{SciPy} \cite{scipy} Python package. The following comparison focuses on two main elements: runtime performance and model quality. \subsection{Runtime performance} We have studied the mean time required to perform a prediction with each of the interpolation methods discussed in this article. This calculation time, which we refer to as \textit{runtime}, is different from the time required to train the model, which itself depends on the sample size, the number of dimensions and the optimization problem that must be solved to derive the appropriate kernel hyperparameters. The tests have been performed on a 2.2 GHz \texttt{Intel Core i7} computer with 4 cores and 16 GB of RAM. Table \ref{tab:runtime} shows the regression time averaged over all the models, with the time for a GP model separated into the time required for the prediction (\textbf{GPR}$-\mu$) and the time required for the estimation of the prediction variance (\textbf{GPR}$-\sigma$), as these calculations can be run independently of each other. Interpolation using GP models can require significantly more time than interpolation based on Cubic Splines. More efficient implementations might reduce the runtime difference between the two in the future.
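Per-call runtimes of this kind can be measured with a simple wall-clock harness of the following form (a sketch; the actual benchmarking script of this study is not reproduced here):

```python
import time

def mean_runtime(predict, X, n_rep=200):
    # average wall-clock seconds per call of a model's predict function,
    # amortized over n_rep repetitions
    start = time.perf_counter()
    for _ in range(n_rep):
        predict(X)
    return (time.perf_counter() - start) / n_rep
```

The same harness can time the GPR mean, the GPR variance and the spline evaluation separately, since these calculations run independently of each other.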
\input{Sections/Tables/tableruntime} \subsection{Model Quality} We have analyzed the quality of the models based on GPs and Cubic Splines using the experimental designs presented in Table \ref{tab:setups} and Figures \ref{fig:SvG} and \ref{fig:exppatterns}. For this, we have considered an array of metrics to quantify model performance, namely the root-mean-square errors ($RMSE$) and the coefficients of determination $R^{2}$. We also examine an additional metric related to the posterior predictive variance produced by GPR, in order to study the quality of this estimator. This quantity, $Pred$-$1\sigma$, is the fraction of model predictions located within one predictive standard deviation of the true test-set values; it indicates whether the predicted variance represents the variance that is actually observed. While models were created for each of the nuclides tracked by \texttt{SERPENT 2}, Tables \ref{tab:grid2} and \ref{tab:sobolnew} contain the results of our analysis for a small subset of them, representative of both major actinides and fission products. The mean values shown have been scaled by the assembly length and the number of fuel assemblies in the reactor core. \input{Sections/Tables/tablegrid2} \input{Sections/Tables/tablesobolnew} In general, we observe that, based on the $R^{2}$ metric, both interpolation methods perform well for most of the nuclides. Furthermore, we note that, based on the $RMSE$ metric, GP-based models perform better in all the experimental configurations for all isotopes. In most cases the mean $RMSE$ improves by a factor between 1.02 and 10. Still, Cubic Splines could on average provide reasonable results for most nuclides, provided the application requirements allow for $RMSE$ errors of such magnitude. We also observe that the variance predictor of the GP model captures the true values at the $1$-$\sigma$ level reasonably well, which is not generalizable to arbitrary GPR applications.
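The three metrics above ($RMSE$, $R^{2}$ and $Pred$-$1\sigma$) can be computed from the test-set predictions as follows (a minimal sketch; the function and variable names are ours, not taken from the study's code):

```python
import numpy as np

def surrogate_metrics(y_true, y_pred, y_std):
    # RMSE, coefficient of determination R^2, and Pred-1sigma: the fraction
    # of true values within one predictive standard deviation of the mean
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    pred_1sigma = np.mean(np.abs(err) <= y_std)
    return rmse, r2, pred_1sigma
```

For a well-calibrated Gaussian predictive distribution, $Pred$-$1\sigma$ should lie close to 0.68.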
In addition, it can be seen that both surrogate modelling methods perform better on a uniform grid-based sample set than on a set sampled from a Sobol sequence. Nevertheless, based on a study comparing different sampling schemes for their space-covering properties and efficiency \cite{DOE}, one would expect the advantages of Sobol sequences to become manifest in problems of higher dimensionality. \section{Creating the datasets}\label{sec:Models} \subsection{Reactor simulations} For our research, we have implemented a 2D infinite-lattice model of a CANDU 6 reactor, based on specifications from the available literature \cite{Candu}, in the computer code \texttt{SERPENT 2}, which couples a Monte Carlo neutron transport module with a fuel depletion solver based on the Chebyshev Rational Approximation Method \cite{serpent}. The quality of the model has been examined by comparing end-of-cycle isotopic compositions with the \texttt{Bruce-1} dataset reported in the SFCOMPO-2.0 database \cite{sfcompo}. Figure \ref{fig:2dlattice} shows the CANDU 6 37-element fuel assembly implemented in \texttt{SERPENT 2}.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Sections/Images/Candu_geom.png} \caption{CANDU 6, 37-element fuel assembly, 2D infinite-lattice implementation in \texttt{SERPENT 2} (light green indicates the fuel elements, white the coolant, pink the calandria and pressure tubes, black the void between these tubes and turquoise the moderator between fuel channels)} \label{fig:2dlattice} \end{figure} \begin{table}[ht] \centering \begin{tabular}{c c} \toprule \textbf{Parameter} & \textbf{Range} \\ \midrule Moderator Temperature & 333--363 K \\ Burnup & 0.1--7 $\frac{\mathrm{MWd}}{\mathrm{kg_{HM}}}$ \\ \bottomrule \end{tabular} \caption{Parameter ranges used in the generation of the training and testing datasets} \label{tab:paramranges} \end{table} \subsection{Sampling strategies} Strategies are required to sample the sets of input parameters for each \texttt{SERPENT 2} simulation involved in model development and testing. In this study, we sample the moderator temperature and the discharge burnup; Table \ref{tab:paramranges} indicates the range of values considered for these parameters. Previous studies on sampling schemes and strategies have shown that uniform grid sampling performs poorly in higher-dimensional input spaces \cite{DOE}. While a random number sequence can overcome this issue in principle, due to the nature of pseudo-random number generator implementations such sequences tend to cluster, resulting in a non-uniform distribution of samples across the input space. In such cases, a quasirandom sequence can provide a set of samples with a better spatial distribution \cite{qsr}. Sobol quasirandom sampling is a method to generate quasirandom sequences designed to minimize the star discrepancy, a measure of the difference between the distribution of values produced by a generating scheme and a multidimensional uniform probability distribution.
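For illustration, a low-discrepancy point set can be produced in a few lines of code; the sketch below uses a Halton (radical-inverse) construction rather than the bitwise Sobol generator employed in this work, but exhibits the same qualitative space-filling behaviour:

```python
import numpy as np

def radical_inverse(n, base):
    # van der Corput sequence: reflect the base-b digits of i about the radix point
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# 169 two-dimensional low-discrepancy points from coprime bases 2 and 3,
# rescaled to the parameter ranges of Table \ref{tab:paramranges}
u = np.column_stack([radical_inverse(169, 2), radical_inverse(169, 3)])
temperature = 333.0 + u[:, 0] * (363.0 - 333.0)   # K
burnup = 0.1 + u[:, 1] * (7.0 - 0.1)              # MWd/kg_HM
```
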
The generation of the samples involves a special algorithm based on bitwise operations; details on the algorithm and its implementation can be found in \cite{SobolSQ}. In order to evaluate the impact of the sampling method on the prediction quality of the models, two scenarios have been prepared. In Scenario A, a $25\times25$ 2D grid of simulations was generated, based on the parameter ranges of Table \ref{tab:paramranges}. From this grid, a training set of 169 samples was selected by starting from the corners and taking every other grid point, i.e.\ leaving one grid element in between. The remaining, unselected grid points form the test dataset, for both this and the following scenario. Scenario B consists of 169 samples generated from a Sobol sequence sampler written in-house; again, the test dataset is the same as for Scenario A. \\ Figure \ref{fig:SvG} shows a comparison between the samples generated via the Sobol sequence and the aforementioned grid. While grid sampling is still effective in two dimensions, the good space coverage of the Sobol sequence is evident. Figure \ref{fig:exppatterns} shows the spatial arrangement of Scenario A's training set through the bright elements, while the darker elements represent the test dataset used for both scenarios. Table \ref{tab:setups} summarizes the different scenarios and their characteristics. \begin{figure} \centering \input{Sections/Images/sampling_comparison} \caption{Comparison between the dataset based on Sobol quasirandom sampling (left) and the dataset based on grid sampling (right).
While both the Sobol sequence and grid sampling are effective in two dimensions, as seen here, the Sobol sequence far outperforms grid sampling in higher dimensions at the same number of samples.} \label{fig:SvG} \end{figure} \begin{figure} \centering \includegraphics[width=0.32\textwidth,trim=2cm 1.3cm 2cm 1.4cm, clip]{Sections/Images/grid2.pdf} \caption{Spatial distribution pattern used to evaluate the performance of the surrogate models on a grid. The yellow elements represent the simulations used for training the interpolators, while the dark elements represent the cells where the values will be interpolated.} \label{fig:exppatterns} \end{figure} \begin{table}[ht] \centering \begin{tabular}{c c c} \toprule \textbf{Configuration}&\textbf{Training set size} & \textbf{Test set size} \\ \midrule Scenario A& 169 (Grid) & 456 (Grid) \\ Scenario B & 169 (Sobol) & 456 (Grid) \\ \bottomrule \end{tabular} \caption{Details of the sampling strategies. The different configurations allow for the study of model performance under varying spatial sample distributions} \label{tab:setups} \end{table} \section{Building the model} \label{sec:Reg} Based on the pre-generated training data, we built the Cubic Spline and the GPR models. In the following subsections we examine these methods. \subsection{Cubic Spline Interpolation} Cubic Spline Interpolation is a method to interpolate a function within the region covered by a set of its samples. The interpolation procedure consists of fitting a piecewise cubic polynomial between the sampled points. This method results in smoother and higher-quality interpolators than methods such as Lagrange and Newton polynomial fitting \cite{NM2}. In addition, it has the advantage of avoiding the spurious oscillations of Runge's phenomenon. A more detailed description of the method and its implementation can be found in \cite{CSplines}.
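In one dimension, the construction amounts to solving a tridiagonal system for the second derivatives at the knots; the sketch below illustrates this with natural boundary conditions (the study itself uses the bivariate spline implementation from \texttt{SciPy}):

```python
import numpy as np

def natural_cubic_spline(x, y):
    # returns an evaluator for the natural cubic spline through (x, y)
    n = len(x) - 1
    h = np.diff(x)
    # linear system for the second derivatives M_i at the knots
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0  # natural boundary conditions: M_0 = M_n = 0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def evaluate(t):
        # piecewise cubic on the interval containing t
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 1))
        dx, dx1 = t - x[i], x[i + 1] - t
        return (M[i] * dx1 ** 3 + M[i + 1] * dx ** 3) / (6.0 * h[i]) \
            + (y[i] / h[i] - M[i] * h[i] / 6.0) * dx1 \
            + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6.0) * dx
    return evaluate
```
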
\subsection{Gaussian Process Regression} A GP is a collection of random variables, any finite subset of which follows a joint Gaussian probability distribution. GPs can be used for the construction of probabilistic surrogate models of black-box problems where an analytical form is unavailable or intractable and evaluations are computationally expensive. Without loss of generality, the aim of GPR is to approximate a function $Y(\vv{X}),\, \vv{X} = (x_{1},..., x_{d})$ through a GP as: \begin{equation} Y(\vv{X}) \approx \mathcal{G}\mathcal{P}(\vv{X}) = \mathcal{N}(0,K(\vv{X})) \end{equation} where $d$ is the dimension of the input space. The kernel ($K$) of a GP is a function describing the covariance between the inputs of the target function. It encodes a limited set of assumptions about the underlying function, such as smoothness and differentiability. Choosing a kernel selects an infinite family of functions sharing these properties, over which the regression is performed; training the kernel parameters on the input and output data then singles out the subset of those functions that match the data. We chose the Anisotropic Squared Exponential (ASE) kernel, which yields very smooth, infinitely differentiable interpolants, as we expect the variation of the nuclide concentrations across the parameter space to share these characteristics. An additional advantage is that the ASE kernel allows for the determination of the relative relevance of the inputs through the use of a separate correlation length parameter ($\ell_{i}$) for each input: \begin{equation}\label{eq:ASE} K(\vv{X},\vv{Z}) = \exp\left(-\sum_{i}^{d}\left( \dfrac{x_{i}-z_{i} } {\ell_{i}}\right)^{2}\right) \end{equation} where $\vv{X},\vv{Z}$ are two points at which the covariance function is evaluated. The smaller $\ell_{i}$, the more sensitive the underlying model is to changes in input $x_{i}$.
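Equation \ref{eq:ASE} translates directly into code; in the sketch below the two inputs are (temperature, burnup) points and the length scales are chosen arbitrarily for illustration (in practice they are fitted to the training data):

```python
import numpy as np

def ase_kernel(X, Z, ell):
    # anisotropic squared-exponential kernel with one length scale per input
    diff = (X[:, None, :] - Z[None, :, :]) / np.asarray(ell)
    return np.exp(-(diff ** 2).sum(axis=-1))

# two (temperature [K], burnup [MWd/kg_HM]) points; the length scales are
# illustrative values, not fitted results from this study
X = np.array([[333.0, 0.1],
              [363.0, 7.0]])
K = ase_kernel(X, X, ell=[30.0, 2.0])
```

A small $\ell_{i}$ makes the covariance decay quickly along dimension $i$, which is how the trained length scales reveal the relative relevance of each input.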
Once the parameters of the kernel that reproduce the training data are estimated, the posterior predictive distribution of the GP (our model) at an unseen point ($*$) is given by a normal distribution with prediction mean ($\mu_{*}$), prediction variance ($\sigma^{2}_{*}$) and prediction gradient ($\nabla\mu_{*}$): \begin{align} \mu_{*} &= K_{train,*} (K_{train,train})^{-1} Y_{train} \nonumber \\ \sigma^{2}_{*} &= K_{*,*} - K_{*,train}^{T}(K_{train,train})^{-1}K_{*,train} \nonumber \\ \nabla\mu_{*} &= \nabla K_{train,*} (K_{train,train})^{-1} Y_{train} \end{align} To implement the GPR models, we have used the \texttt{scikit-learn} Python package \cite{scikit-learn}. More details on GPs and their kernels can be found in \cite{6}. \subsection{Cross-validation} Since the kernel training process is strongly dependent on the training data, weak prediction performance can occur if an inappropriate selection of the training set is made. This can be avoided by cross-validation. It is implemented by splitting the training set into $k$ ``folds'' of approximately equal size, training the kernel parameters on each fold in turn, and using the remaining folds as test data to evaluate the predictive quality of the model. This is performed several times while randomizing the assignment of the folds, thus generating a set of plausible model parameters from which the best-performing combinations can be selected. A further benefit of cross-validation is that it allows models to be trained on different combinations of samples spanning the entire input space, providing parameter sets that tend to enhance the generalization properties of the model, typically at a smaller computational cost since the training set size is reduced. Each \texttt{SERPENT 2} simulation provides an output vector containing about 1300 nuclides. We have created GPR models for each of these nuclides for both datasets.
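The repeated fold-splitting described above can be sketched as follows (the fold-assignment details are ours for illustration; with $k=5$ folds and 10 repetitions this yields 50 candidate train/test splits):

```python
import numpy as np

def repeated_fold_splits(n_samples, k=5, repeats=10, seed=0):
    # For each repetition, shuffle the sample indices and split them into
    # k folds; each fold serves once for training the kernel parameters,
    # with the remaining folds used as test data (k * repeats splits).
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        idx = rng.permutation(n_samples)
        folds = np.array_split(idx, k)
        for i in range(k):
            train = folds[i]
            test = np.concatenate([folds[j] for j in range(k) if j != i])
            yield train, test

splits = list(repeated_fold_splits(169))
```
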
Each GPR model has been trained using a 5-fold cross-validation scheme repeated 10 times, thus generating a set of 50 kernel parameter combinations from which the best-performing one is chosen. Figures \ref{fig:gpr_model}, \ref{fig:sr90} and \ref{fig:eu154} show the computed GPR models for $^{239}$Pu, $^{90}$Sr and $^{154}$Eu, respectively. The models shown are based on the Sobol sequence dataset from Scenario B. \begin{figure} \centering \resizebox{.95\textwidth}{!}{ \input{Sections/Images/Pu2d} \input{Sections/Images/Pu1d} } \caption{Reconstruction plots for $^{239}$Pu. The left plot shows the GPR model based on samples generated by the Sobol sequence, giving the mass as a function of burnup and temperature. The reported masses have been scaled by the number of fuel assemblies in the reactor and their dimensions. The squares indicate the samples obtained from the Sobol sequence. The plot on the right shows the predictions of the same model for two fixed temperature values, alongside their posterior uncertainties at the 6-$\sigma$ level. A portion of the high-burnup section of the plot is magnified in the top left corner, making evident the small albeit important effect of temperature at higher burnups. A lower temperature results in more plutonium being produced, which can be explained by the thermal broadening of the $^{238}$U resonances. Below this plot, the errors of both curves are shown. It can be observed that they are distributed around zero.} \label{fig:gpr_model} \end{figure} \end{document} \endinput
\section{Introduction and Summary} \label{sec:intro} The backreaction of the emission of gravitational radiation on the system emitting the radiation is a problem both of formal interest within general relativity and of practical interest for gravitational-wave detection. A leading candidate source for laser-interferometric gravitational-wave observatories, both on the ground and in space, is the radiation-reaction induced inspiral of a binary system of two compact objects (black holes or neutron stars). In order to develop accurate theoretical predictions for the gravitational waveforms emitted by such systems, one must know their evolution under the dissipative effects of gravitational-wave emission to high accuracy. In addition, particularly for systems containing black holes, the effects of spin may be important. Spin-orbit and spin-spin couplings can result in precessions of the bodies' spins and of the orbital angular momentum, leading to modulations in the gravitational waveform \cite{3min}, and can affect the rate of decay of the orbit \cite{kww,kidder}. As a result, substantial effort has gone into determining the effects of spin in binary systems. Except for the final few orbits, much of the inspiral of such systems can be described by the post-Newtonian approximation, which is an expansion of Einstein's equations in powers of $\epsilon \sim (v/c)^2 \sim Gm/rc^2$, where $v$, $m$ and $r$ represent typical velocities, masses and separations in the system, and $G$ and $c$ are the gravitational constant and speed of light. Each power of $\epsilon$ represents one ``post-Newtonian'' (PN) order in the series ($\epsilon^{1/2}$ represents one-half, or 0.5PN orders). 
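For a rough sense of scale (with illustrative numbers, not values used elsewhere in this paper), the size of $\epsilon$ for a double neutron-star binary late in its inspiral can be estimated in a few lines:

```python
# Order-of-magnitude estimate of the PN expansion parameter epsilon ~ (v/c)^2 ~ Gm/rc^2
# for an illustrative compact binary; the mass and separation are representative values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
Msun = 1.989e30      # solar mass, kg

m = 2.8 * Msun       # total mass of a double neutron-star system
r = 100e3            # separation of 100 km, late in the inspiral

eps = G * m / (r * c ** 2)      # the PN expansion parameter Gm/rc^2
v = (G * m / r) ** 0.5          # circular-orbit velocity, so (v/c)^2 = Gm/rc^2
print(eps, (v / c) ** 2)        # the two estimates of epsilon coincide
```

Even this late in the inspiral $\epsilon$ is only a few percent, which is why a high-order PN expansion remains a useful description until the final few orbits.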
Formally, spin effects first enter the equations of motion at the 1PN level, and have been derived by numerous authors from a variety of points of view, ranging from formal developments of the GR equations of motion in multipole expansions \cite{papapetrou1,papapetrou2}, to post-Newtonian calculations \cite{obrien}, to treatments of linearized GR as a spin-two quantum theory \cite{barkerocon1,barkerocon2}. For a review of these various approaches, see \cite{barkeroconrev}. Spin also affects gravitational radiation reaction, and radiation reaction can affect spin; it is straightforward to show that such effects first occur at 3.5PN order. In earlier work, we derived, from first principles, the radiation-reaction effects of spin-orbit and spin-spin coupling, by integrating the post-Newtonian hydrodynamic equations of motion, including 1PN, 2.5PN and 3.5PN terms, over bodies consisting of rotating fluid \cite{dire3,dire4}. As a check, we found that the loss of energy and angular momentum (including spin) induced by the radiation-reaction terms matched precisely the expressions for energy and angular momentum flux derived by Kidder {\em et al.}\cite{kww,kidder}. An alternative approach to obtaining equations of motion with radiation reaction at higher PN orders was studied by Iyer and Will \cite{iyerwill1,iyerwill2}. There, we wrote down the most general form that the 2.5PN and 3.5PN radiation-reaction terms could take in the equations of motion for a binary system of {\em spinless} bodies, in terms of arbitrary coefficients. We then used the assumption of energy and angular momentum balance, combined with energy and angular-momentum flux expressions accurate to one PN order beyond the quadrupole approximation, to impose constraints on the arbitrary coefficients used in the equations of motion.
After taking into account a fundamental ambiguity in the definitions of energy and angular momentum at 2.5PN and 3.5PN orders, we were left with equations of motion with coefficients that are fixed up to two arbitrary coefficients at 2.5PN order and six arbitrary coefficients at 3.5PN order. It was then straightforward to show that these eight degrees of freedom correspond precisely to the effects, mapped onto the two-body equations of motion, of coordinate transformations at the relevant PN orders. At 2.5PN order, for example, one choice of the two arbitrary coefficients gives the equations in the so-called Burke-Thorne gauge \cite{MTW}, in which the radiation reaction terms are obtained from a gradient of the potential $(G/5c^5) x^i x^j d^5 I^{<ij>}/dt^5$, where $I^{<ij>}$ is the trace-free moment of inertia tensor of the system, while another choice gives the Damour-Deruelle gauge, which is more directly tied to harmonic gauge \cite{DD81,damour300}. For spinless systems, this approach was extended to determine the 4.5PN terms in the equations of motion using flux expressions accurate to 2PN order beyond quadrupole \cite{gopuiyer2}. It is the purpose of this paper to extend this approach to include spin-orbit radiation reaction effects at 3.5PN order. We assume that the equation of motion for the relative vector ${\bf x}={\bf x}_1-{\bf x}_2$ in a binary system may be written in a PN expansion in the form \begin{equation} {\bf a}= -\frac{m}{r^2}{\bf n} + {\bf a}_{\rm PN-SO}+ \dots +{\bf a}_{\rm 2.5PN}+ \dots + {\bf a}_{\rm 3.5PN-SO} \,, \label{PNeom} \end{equation} where $r \equiv |{\bf x}|$, ${\bf n}\equiv {\bf x}/r$ and $m\equiv m_1+m_2$; ${\bf a}_{\rm PN-SO}$ is the post-Newtonian spin-orbit contribution, ${\bf a}_{\rm 2.5PN}$ is the leading radiation-reaction contribution, and ${\bf a}_{\rm 3.5PN-SO}$ is the 3.5PN spin-orbit contribution. Here and for the rest of this paper, we use units in which $G=c=1$.
We have not displayed the point mass 1PN, 2PN, 3PN, and 3.5PN terms, as they will play no role in our analysis. We also will not use the bookkeeping parameter $\epsilon$ explicitly to keep track of PN orders, since we will be considering only specific orders. It is sufficient to recall that, since spin scales as $mvr$, then $S/r^2 \sim v(m/r) \sim \epsilon^{3/2}$. This, plus explicit labelling of terms throughout, should make clear the PN order of terms being discussed. We then write down the most general 3.5PN spin-orbit expression that (a) contains terms each involving a single spin (either ${\bf S}_1$ or ${\bf S}_2$), (b) is a vector, and (c) is antisymmetric under the interchange $1 \rightleftharpoons 2$. This turns out to involve 30 arbitrary coefficients. Because the bodies have intrinsic spin, we must make an assumption about their spin evolution. At 1PN order, we assume that they obey the standard spin-orbit precession equations (see Eq. (\ref{spinsummary}) below). These 1PN equations produce only precession; the magnitudes of the spins do not change. At 3.5PN order, we likewise assume that gravitational radiation reaction produces only precessions of the spin. This is a reasonable assumption, because, for a rotating axisymmetric body, it is impossible to see how gravitational radiation can cause it to spin up or down, to the 3.5PN order being considered. Such effects must involve specific couplings of radiation to deformations of the bodies, either due to rotational flattening or due to tidal couplings, which are beyond the scope of our assumption of almost point-like bodies with spin.
In this case, we can then show that the most general 3.5PN expression for the evolution of each spin that (a) is a pseudovector; (b) depends only on the spin itself and on orbital variables; and (c) is orthogonal to the spin, can in fact be written as a total time derivative of spin and orbital variables, which can then be absorbed into a meaningless 3.5PN-order correction to the definition of the spin. Consequently, we can calculate the loss of energy and angular momentum using only the parametrized equations of motion and the 1PN spin precession equations. There is no contribution to the evolution of the spins at 3.5PN order. However, we must incorporate the freedom to add arbitrary terms of 3.5PN spin-orbit order into the definitions of total energy and total angular momentum, just as in the spinless case. There are 6 such terms in $E$ and 26 in ${\bf J}$. Thus there is a total of 62 arbitrary coefficients to be determined. We then equate the time derivative of these expressions for $E$ and ${\bf J}$ with the corresponding expressions obtained from the far-zone gravitational-wave flux, including spin-orbit terms, as calculated by Kidder {\em et al.}\cite{kww,kidder}, and compare them term by term. This leads to 54 constraints on the coefficients; however, 4 of these constraints are not linearly independent of others, and thus we have 50 constraints on 62 coefficients, leaving 12 undetermined coefficients. Finally, we show that these 12 free coefficients in the equation of motion correspond precisely to the effects of 3.5PN order coordinate transformations, mapped onto the two-body equations of motion with spin-orbit coupling. The remainder of this paper provides details. In Sec. \ref{sec:two body} we review the known equations of motion and spin evolution through 2.5PN order. Section \ref{sec:balance} applies energy and angular momentum balance to determine the 3.5PN spin-orbit terms in the two-body equations of motion, while Sec.
\ref{sec:gauge} shows that the remaining undetermined coefficients are directly related to gauge freedom. Section \ref{sec:conclusions} presents concluding remarks, while certain detailed formulae are relegated to Appendices. \section{Two-body equations of motion with spin-orbit coupling} \label{sec:two body} The PN-SO and 2.5PN terms in Eq. (\ref{PNeom}) are given by conventional expressions \begin{eqnarray} {\bf a}_{\rm PN-SO} &=& \frac{1}{r^3} \biggl \{ \frac{3}{2} \frac{\bf n}{r} {\bf {\tilde L}}_{\rm N} \cdot \left ( 4{\bf {\cal S}} + 3\fourvec{\xi} \right ) - {\bf v} \times \left ( 4{\bf {\cal S}} + 3\fourvec{\xi} \right ) +\frac{3}{2} \dot r {\bf n} \times \left ( 4{\bf {\cal S}} + 3\fourvec{\xi} \right ) \biggr \} \,, \label{eomPNSO} \\ {\bf a}_{\rm 2.5PN}&=&\frac{8\mu m}{5r^3}\biggl \{\left [3(1+\beta)v^2+\frac{1}{3}(23+6\alpha- 9\beta)\frac{m}{r} -5\beta \dot r^2\right ]\dot r {\bf n} \nonumber \\ &&-\left [(2+\alpha)v^2+(2-\alpha)\frac{m}{r}-3(1+\alpha)\dot r^2\right ] {\bf v}\biggr \} \,, \label{eom25PN} \end{eqnarray} where ${\bf v}=d{\bf x}/dt$ is the relative velocity, $\mu \equiv m_1m_2/m$ is the reduced mass, ${\cal S}\equiv {\cal S}_1+{\cal S}_2$ is the total spin, $\fourvec{\xi}=(m_2/m_1){\cal S}_1+(m_1/m_2){\cal S}_2$ is a second spin parameter, ${\bf {\tilde L}}_{\rm N} ={\bf x}\times {\bf v}$ is the orbital angular momentum per unit reduced mass, and $\dot r = {\bf v}\cdot {\bf n}$. The coefficients $\alpha$ and $\beta$ in ${\bf a}_{\rm 2.5PN}$ reflect the possibility of different gauges for expressing radiation reaction at 2.5PN order \cite{iyerwill1,iyerwill2}. The choice $\alpha=4$, $\beta=5$ corresponds to Burke-Thorne gauge \cite{MTW}, while the choice $\alpha=-1$, $\beta=0$ leads to the Damour-Deruelle radiation-reaction formula \cite{DD81,damour300}. Any choice of $\alpha$ and $\beta$ leads to the same loss of energy and angular momentum at 2.5PN order, corresponding to quadrupole approximation energy and angular momentum flux. 
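As a concrete check of signs and prefactors, Eq. (\ref{eomPNSO}) can be transcribed directly into code; the sketch below (units $G=c=1$, with illustrative orbit and spin values) evaluates the PN spin-orbit acceleration for a circular configuration with the spin along the orbital angular momentum:

```python
import numpy as np

def a_pn_so(x, v, S, xi):
    """PN spin-orbit acceleration, a direct transcription of Eq. (eomPNSO), G = c = 1."""
    r = np.linalg.norm(x)
    n = x / r
    rdot = np.dot(v, n)                    # rdot = v . n
    LN = np.cross(x, v)                    # \tilde{L}_N = x x v
    sig = 4.0 * S + 3.0 * xi               # the recurring combination 4S + 3 xi
    return (1.0 / r ** 3) * (1.5 * (n / r) * np.dot(LN, sig)
                             - np.cross(v, sig)
                             + 1.5 * rdot * np.cross(n, sig))

# Circular-orbit configuration, spin aligned with the orbital angular momentum
x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
S = np.array([0.0, 0.0, 0.3])
a = a_pn_so(x, v, S, np.zeros(3))
```

For this aligned, circular configuration $\dot r = 0$ and the acceleration is purely radial, and the expression is manifestly linear in the spins, as the parametrization of the next section requires.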
In defining spin, we must specify the center of mass of each body using a procedure commonly known as the ``spin supplementary condition'' (SSC); the definition used in this paper corresponds to the value $k_{\rm SSC} =1/2$ (see \cite{barkeroconssc,kidder,dire3} for further discussion). The equations of evolution for the spins may be written in the form \begin{equation} {\dot {\bf {\cal S}}}_1 = ({\dot {\bf {\cal S}}}_1)_{\rm PN} +\dots + ({\dot {\bf {\cal S}}}_1)_{\rm 3.5PN-SO} \,, \end{equation} where \begin{equation} ({\dot {\bf {\cal S}}}_1)_{\rm PN} = \frac{\mu}{r^3} {\bf {\tilde L}}_{\rm N} \times {\bf {\cal S}}_1 \left ( 2 + \frac{3}{2}\frac{m_2}{m_1} \right ) \,, \label{spinsummary} \end{equation} and where the equations for spin 2 can be obtained by the interchange $1 \rightleftharpoons 2$. We have not included conservative 2PN and 3PN contributions, and one can show that the leading radiation reaction contributions come at 3.5PN order \cite{dire3}. Up to 2.5PN order, the motion is conservative and the energy and angular momentum are constant. Including only the Newtonian and spin-orbit terms, they are given by \begin{eqnarray} E&=& E_{\rm N} = \mu \biggl(\frac{1}{2}v^2-\frac{m}{r}\biggr) \,, \label{EJ1} \\ {\bf J}&=&\mu {\tilde {\bf L}}_{\rm N}+ {\bf {\cal S}}+\frac{\mu}{2r} {\bf n} \times \biggl[ {\bf n} \times (4{\cal S}+3\fourvec{\xi}) \biggr] \,. \label{EJ2} \end{eqnarray} In our chosen spin supplementary condition, there is no spin-orbit contribution to the conserved energy, while ${\bf J}$ contains the orbital angular momentum, the total spin, and a PN spin-orbit contribution. These conserved quantities can be derived from the equations of motion by constructing $\frac{1}{2}dv^2/dt\equiv {\bf v}\cdot {\bf a}$, and $d({\bf x}\times {\bf v})/dt\equiv {\bf x}\times {\bf a}$, and showing that, after substituting the equations of motion and spin-precession equations carried to the appropriate order, everything can be expressed as total time derivatives.
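The conservation statement is easy to verify numerically at the lowest order: integrating the Newtonian relative orbit (spins set to zero, so only the leading term of the equation of motion enters; initial data illustrative; units $G=c=1$) and monitoring $E$ and $|{\bf J}|$ shows both are constant to integration accuracy. A minimal sketch:

```python
import numpy as np

m = 1.0  # total mass, G = c = 1

def accel(x):
    r = np.linalg.norm(x)
    return -m * x / r ** 3            # Newtonian term a = -(m/r^2) n

def rk4_step(x, v, dt):
    # classical fourth-order Runge-Kutta step for the pair (x, v)
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    x = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x, v

def energy(x, v):     # E / mu = v^2/2 - m/r
    return 0.5 * np.dot(v, v) - m / np.linalg.norm(x)

def ang_mom(x, v):    # |L_N| = |x x v| for planar motion
    return x[0] * v[1] - x[1] * v[0]

x, v = np.array([1.0, 0.0]), np.array([0.0, 1.1])   # mildly eccentric bound orbit
E0, L0 = energy(x, v), ang_mom(x, v)
for _ in range(20000):                               # roughly two orbital periods
    x, v = rk4_step(x, v, 1e-3)
dE, dL = energy(x, v) - E0, ang_mom(x, v) - L0
```

The drifts `dE` and `dL` stay at the level of the integrator's truncation error; once the dissipative 2.5PN and 3.5PN-SO terms are switched on, it is precisely these quantities that acquire the secular evolution balanced against the far-zone fluxes below.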
\section{Spin-orbit radiation reaction via $E$ and $J$ balance} \label{sec:balance} We now write down the most general 3.5PN spin-orbit terms as \begin{eqnarray} {\bf a}_{3.5PN-SO}&=&-\frac{\mu}{5r^4}\biggl[A_{\cal S} \frac{\dot r{\bf n}}{r}({\bf {\tilde L_N}}\cdot{\cal S})+B_{\cal S} \frac{\bf v}{r}({\bf {\tilde L_N}}\cdot{\cal S})+C_{\cal S} \dot r{\bf v}\times{\cal S} +D_{\cal S} {\bf n}\times{\cal S} \nonumber \\ &&+A_{\xi} \frac{\dot r{\bf n}}{r}({\bf {\tilde L_N}}\cdot \fourvec{\xi})+B_{\xi} \frac{\bf v}{r}({\bf {\tilde L_N}}\cdot \fourvec{\xi}) +C_{\xi} \dot r{\bf v}\times \fourvec{\xi} +D_{\xi} {\bf n}\times \fourvec{\xi}\biggr] \,. \label{eomgeneral} \end{eqnarray} The form of Eq. (\ref{eomgeneral}) is dictated by the fact that it must be a correction to the Newtonian acceleration, ({\em i.e.} be proportional to a mass $/r^2$); must vanish in the test body limit when gravitational radiation vanishes, ({\em i.e.} be proportional to $\mu$); must be dissipative, or odd in velocities; must be linear in the spins; must be a vector, not a pseudovector; and must change sign under the interchange $1 \rightleftharpoons 2$. Note that other possible terms, such as ${\bf {\tilde L}}_{\rm N} ({\bf n}\cdot{\cal S})$ can be seen to be linear combinations of the terms above using standard vector identities. The prefactor 1/5 is chosen for convenience. To make the terms of $O(\epsilon^{7/2})$ beyond Newtonian order, $A_{\cal S}$, $B_{\cal S}$, $C_{\cal S}$, $A_{\xi}$, $B_{\xi}$ and $C_{\xi}$ must be of $O(\epsilon)$, and $D_{\cal S}$ and $D_{\xi}$ must be of $O(\epsilon^2)$. The only orbital variables available to construct expressions of the relevant order are $v^2$, $m/r$ and $\dot r^2$. 
Thus $A_{\cal S}$, $B_{\cal S}$, $C_{\cal S}$ and $D_{\cal S}$ can be written in terms of 15 arbitrary coefficients, in the form \begin{eqnarray} A_{\cal S}&=&a_1v^2+a_2\frac{m}{r}+a_3\dot r^2 \,, \nonumber \\ B_{\cal S}&=&a_4v^2+a_5\frac{m}{r}+a_6\dot r^2 \,, \nonumber \\ C_{\cal S}&=&a_7v^2+a_8\frac{m}{r}+a_9\dot r^2 \,, \nonumber \\ D_{\cal S}&=&a_{10}v^4+a_{11}v^2\dot r^2+a_{12}\dot r^4+a_{13}v^2\frac{m}{r}+a_{14}\dot r^2\frac{m}{r}+a_{15}\frac{m^2}{r^2} \,. \label{coefficients} \end{eqnarray} In a parallel manner, we can write $A_{\xi}$, $B_{\xi}$, $C_{\xi}$ and $D_{\xi}$ in terms of its own set of 15 coefficients. Because all expressions involving spin-orbit terms divide naturally into those involving the total spin ${\bf {\cal S}}$ and those involving the spin parameter $\fourvec{\xi}$, we can solve for each set using identical methods; we will focus on the ${\bf {\cal S}}$-terms. Our goal is to evaluate these thirty coefficients by imposing energy and angular momentum balance. Because the equations of motion at 2.5PN order and 3.5PN order have dissipative terms, the energy and angular momentum are no longer conserved explicitly. Furthermore, they are ambiguous because one has the freedom to add arbitrary terms to $E$ and $\bf J$ at 2.5PN order and 3.5PN order to redefine them without affecting their conservation through 2PN order. Similarly, the spins are strictly defined only up to the order at which radiation reaction begins, and so one has the freedom to add a 3.5PN term to each spin, without changing its behavior at ``conservative'' orders. Adding to $E$ and $\bf J$ the appropriate 2.5PN terms to account for the coefficients $\alpha$ and $\beta$ in Eq. 
(\ref{eom25PN}), and adding the most general 3.5PN spin-orbit terms, with arbitrary coefficients, we can define new quantities $E^*$, and $\bf J^*$ according to \begin{eqnarray} E^* &=& E_{\rm N} + \delta E_{\rm 2.5PN} + \delta E_{\rm 3.5PN-SO} \,, \nonumber \\ \bf J^* &=& \mu {\tilde {\bf L}}_{\rm N} + {\bf {\cal S}}+\frac{\mu}{2r} {\bf n} \times \left[ {\bf n} \times (4{\cal S}+3\fourvec{\xi}) \right] +\delta {\bf J}_{\rm 2.5PN} + \delta {\bf J}_{\rm 3.5PN-SO} \,, \label{EJstar} \end{eqnarray} where, from our earlier work \cite{iyerwill1,iyerwill2}, we can write \begin{eqnarray} \delta E_{\rm 2.5PN}&=& \frac{8}{5} \frac{\mu^2 m}{r^2} {\dot r} [(2+\alpha)v^2 - \beta {\dot r}^2 ] \,, \nonumber \\ \delta {\bf J}_{\rm 2.5PN} &=& \alpha \frac{8}{5} \frac{\mu^2 m}{r^2}{\dot r} {\bf {\tilde L}}_{\rm N} \,, \end{eqnarray} and for the 3.5PN-SO expressions we write the general parametrized form \begin{eqnarray} \delta E_{\rm 3.5PN-SO}&=& - \frac{1}{5} \frac{\mu^2}{r^4}{\dot r} \left \{ ({\bf {\tilde L_N}}\cdot{\cal S}) \left ( \alpha_1 v^2 + \alpha_2 {\dot r}^2 + \alpha_3 \frac{m}{r} \right ) + ({\cal S} \to \fourvec{\xi} ) \right \} \,, \nonumber \\ \delta {\bf J}_{\rm 3.5PN-SO} &=& -\frac{1}{5} \frac{\mu^2}{r^2} \biggl \{ {\dot r} {\bf {\cal S}} \left ( \gamma_1 v^2 + \gamma_2 {\dot r}^2 + \gamma_3 \frac{m}{r} \right ) + {\dot r} {\bf n} ({\bf n} \cdot {\bf {\cal S}}) \left ( \gamma_4 v^2 + \gamma_5 {\dot r}^2 + \gamma_6 \frac{m}{r} \right ) \nonumber \\ && + {\bf v} ({\bf n} \cdot {\bf {\cal S}}) \left ( \gamma_7 v^2 + \gamma_8 {\dot r}^2 + \gamma_9 \frac{m}{r} \right ) + {\bf n}({\bf v} \cdot {\bf {\cal S}}) \left ( \gamma_{10} v^2 + \gamma_{11} {\dot r}^2 + \gamma_{12} \frac{m}{r} \right ) \nonumber \\ &&+ \gamma_{13} {\dot r} {\bf v}({\bf v} \cdot {\bf {\cal S}}) + ({\cal S} \to \fourvec{\xi} ) \biggr \}\,, \label{EJstar35} \end{eqnarray} where the notation ${\cal S} \to \fourvec{\xi}$ means repeat the preceding terms replacing ${\cal S}$ with $\fourvec{\xi}$, with 
an appropriate set of arbitrary coefficients. This gives a total of 32 arbitrary coefficients. We now take time derivatives of $E^*$ and ${\bf J}^*$ in Eqs. (\ref{EJstar}), substituting the Newtonian and PN spin-orbit accelerations explicitly, to obtain \begin{eqnarray} {\dot E}^*&=&\mu {\bf v}\cdot \left ({\bf a}_{\rm 2.5PN}+ {\bf a}_{\rm 3.5PN-SO} \right ) + \frac{d}{dt} \delta E_{\rm 2.5PN} + \frac{d}{dt} \delta E_{\rm 3.5PN-SO} \,, \nonumber \\ {\dot {\bf J}}^*&=& {\dot {\bf {\cal S}}}_{\rm 3.5PN-SO} + \mu {\bf x}\times \left ( {\bf a}_{\rm 2.5PN}+ {\bf a}_{\rm 3.5PN-SO} \right ) + \frac{d}{dt} \delta {\bf J}_{\rm 2.5PN} + \frac{d}{dt} \delta {\bf J}_{\rm 3.5PN-SO} \,. \end{eqnarray} In fact, we will show in Appendix \ref{app:spinevolve} that, if we assume that $({\dot {\bf {\cal S}}}_1 )_{\rm 3.5PN-SO}$ is orthogonal to ${\bf {\cal S}}_1$ (and similarly for spin 2), then the most general 3.5PN expression for $({\dot {\bf {\cal S}}}_1 )_{\rm 3.5PN-SO}$ turns out to be a total time derivative, which can be absorbed into a meaningless 3.5PN correction to the definition of ${\bf {\cal S}}_1$. Hence we can assume henceforth that ${\dot {\bf {\cal S}}}_{\rm 3.5PN-SO} =({\dot {\bf {\cal S}}}_1 )_{\rm 3.5PN-SO} + ({\dot {\bf {\cal S}}}_2 )_{\rm 3.5PN-SO} =0$. We now substitute the appropriate terms from the equations of motion (\ref{eom25PN}) and (\ref{eomgeneral}), and calculate explicitly the time derivatives of the 2.5PN and 3.5PN-SO contributions to $E^*$ and ${\bf J}^*$. These time-derivative terms may be calculated using the identities shown in Appendix \ref{app:timederiv}, which are derived using the Newtonian equations of motion and the 1PN spin-orbit terms. When evaluating $d\delta {E}_{\rm 2.5PN}/dt$ and $d\delta {\bf J}_{\rm 2.5PN}/dt$, in order to obtain all terms that contribute at 3.5PN-SO order, we must include the 1PN spin orbit terms present in the expressions in Appendix \ref{app:timederiv}. 
The result is \begin{eqnarray} \dot E^*&=&-\frac{8\mu^2m^2}{15r^4}(12v^2-11\dot r^2)- \frac{8\mu^2m}{10r^6}{\bf {\tilde L_N}}\cdot(4{\cal S}+3\fourvec{\xi}) \biggl[-(2+\alpha)v^2+3\beta \dot r^2\biggr] \nonumber \\ &&- \frac{\mu^2}{5r^5} \biggl \{ ({\bf {\tilde L_N}}\cdot{\cal S}) \left ( {\cal P}_1 v^4 + {\cal P}_2 v^2 {\dot r}^2 + {\cal P}_3 {\dot r}^4 + {\cal P}_4 v^2 \frac{m}{r} + {\cal P}_5 {\dot r}^2 \frac{m}{r} + {\cal P}_6 \frac{m^2}{r^2} \right ) \nonumber \\ && \quad + ({\bf {\cal S}} \to \fourvec{\xi} ) \biggr \}\,, \label{Estardot} \\ \dot J^*&=&-\frac{8\mu^2m}{5r^3} {\bf {\tilde L}}_{\rm N} \left (2v^2-3\dot r^2+2\frac{m}{r} \right ) \nonumber \\ &&-\frac{8\mu^2m\alpha}{5r^4}\biggl \{-\frac{1}{2r^2}{\bf {\tilde L}}_{\rm N} {\bf {\tilde L_N}}\cdot(4{\cal S}+3\fourvec{\xi})+\dot r{\bf n}\times \biggl[({\bf v}-\frac{3}{2}\dot r{\bf n})\times (4{\cal S}+3\fourvec{\xi}) \biggr]\biggr \} \nonumber \\ && -\frac{\mu^2}{5r^3} \biggl \{ {\bf {\cal S}} \left ({\cal R}_1 v^4 + {\cal R}_2 v^2 {\dot r}^2 + {\cal R}_3 {\dot r}^4 + {\cal R}_4 v^2 \frac{m}{r} + {\cal R}_5 {\dot r}^2 \frac{m}{r} + {\cal R}_6 \frac{m^2}{r^2} \right ) \nonumber \\ && +{\bf n}({\bf n}\cdot {\bf {\cal S}}) \left ({\cal R}_7 v^4 + {\cal R}_8 v^2 {\dot r}^2 + {\cal R}_9 {\dot r}^4 + {\cal R}_{10} v^2 \frac{m}{r} + {\cal R}_{11} {\dot r}^2 \frac{m}{r} + {\cal R}_{12} \frac{m^2}{r^2} \right ) \nonumber \\ && +{\dot r} {\bf n}({\bf v}\cdot {\bf {\cal S}}) \left ( {\cal R}_{13} v^2 + {\cal R}_{14} {\dot r}^2 + {\cal R}_{15} \frac{m}{r} \right ) + {\dot r} {\bf v}({\bf n}\cdot {\bf {\cal S}}) \left ( {\cal R}_{16} v^2 + {\cal R}_{17} {\dot r}^2 + {\cal R}_{18} \frac{m}{r} \right) \nonumber \\ && + {\bf v}({\bf v}\cdot {\bf {\cal S}}) \left ( {\cal R}_{19} v^2 + {\cal R}_{20} {\dot r}^2 + {\cal R}_{21} \frac{m}{r} \right ) + ({\cal S} \to \fourvec{\xi} ) \biggr \}\,. \label{Jstardot} \end{eqnarray} The first term in each of Eqs. 
(\ref{Estardot}) and (\ref{Jstardot}) is the 2.5PN quadrupole, or Newtonian loss term, while the second term in each case comes from the spin-orbit correction terms in Appendix \ref{app:timederiv} applied to $d \delta E_{\rm 2.5PN}/dt$ and $d \delta {\bf J}_{\rm 2.5PN}/dt$. In the third set of terms in each case, the 27 coefficients ${\cal P}_n, \, n=1 \dots 6$ and ${\cal R}_n, \, n=1 \dots 21$ in the ${\cal S}$-dependent terms are functions of the 15 coefficients $a_n$ from the equations of motion (\ref{eomgeneral}) and (\ref{coefficients}), and of the 16 coefficients $\alpha_n$ and $\gamma_n$ from the 3.5PN ambiguity terms in $E^*$ and ${\bf J}^*$. A parallel set of 27 coefficients appear in the $\fourvec{\xi}$-dependent terms, with identical dependences on the corresponding 15 + 16 arbitrary coefficients. We now use the assumption of energy and angular momentum balance to equate the rate of energy and angular momentum loss to the corresponding far-zone fluxes \cite{kww,kidder}. The lowest-order Newtonian and the 1PN spin-orbit contributions are given by \begin{eqnarray} {\dot E}_{\rm far\, zone}&=&- \frac{8}{15}\frac{\mu^2m^2}{r^4}(12v^2-11\dot r^2) \nonumber \\ &&- \frac{8\mu^2m}{15r^6}\biggl \{ ({\bf {\tilde L_N}}\cdot{\cal S}) \left ( 27\dot r^2-37v^2- 12\frac{m}{r} \right ) \nonumber \\ && +({\bf {\tilde L_N}}\cdot \fourvec{\xi}) \left ( 18\dot r^2-19v^2-8\frac{m}{r} \right )\biggr \} \,, \label{farfluxE} \\ {\dot {\bf J}}_{\rm far\, zone}&=&- \frac{8}{5}\frac{\mu^2m}{r^3}{\bf {\tilde L}}_{\rm N} (2v^2-3\dot r^2+2\frac{m}{r}) \nonumber \\ &&-\frac{4\mu^2}{5r^3}\biggl \{ {\bf {\cal S}} \left ( 6v^2\dot r^2 - 6v^4 - \frac{50}{3}v^2\frac{m}{r} +\frac{50}{3}\dot r^2 \frac{m}{r} -2\frac{m^2}{r^2} \right ) \nonumber \\ &&+{\bf n} ({\bf n}\cdot{\cal S}) \left ( 18v^4 -30\dot r^2 v^2 +25v^2 \frac{m}{r} +6\dot r^2 \frac{m}{r} +2\frac{m^2}{r^2} \right ) \nonumber \\ &&+\dot r {\bf n} ({\bf v}\cdot{\cal S}) \left ( 6v^2-21\frac{m}{r} \right ) - \dot r {\bf v} ({\bf 
n}\cdot{\cal S}) \left (18v^2-30\dot r^2 +33\frac{m}{r} \right ) \nonumber \\ &&+ {\bf v}({\bf v}\cdot{\cal S}) \left ( 6v^2-12\dot r^2+23\frac{m}{r} \right ) \nonumber \\ &&+ \fourvec{\xi} \left ( 5\dot r^4 -2v^2 \dot r^2 - \frac{10}{3}v^4 -\frac{22}{3}v^2 \frac{m}{r} +\frac{23}{3}\dot r^2 \frac{m}{r} - \frac{4m^2}{3r^2} \right ) \nonumber \\ &&+ {\bf n} ({\bf n}\cdot \fourvec{\xi}) \left ( 13v^4 -20\dot r^2 v^2 +\frac{41}{3}v^2 \frac{m}{r} +6\dot r^2 \frac{m}{r} +\frac{4m^2}{3r^2} \right ) \nonumber \\ &&+\dot r {\bf n} ({\bf v}\cdot \fourvec{\xi}) \left (7v^2-5\dot r^2-\frac{34m}{3r} \right ) -\dot r {\bf v} ({\bf n}\cdot \fourvec{\xi}) \left (13v^2-20\dot r^2 +\frac{64m}{3r} \right ) \nonumber \\ &&+ {\bf v} ({\bf v}\cdot \fourvec{\xi}) \left (\frac{10}{3}v^2- 5\dot r^2 +\frac{38m}{3r}\right ) \biggr \} \,. \label{farfluxJ} \end{eqnarray} After rewriting some of the terms in Eq. (\ref{Jstardot}) using standard vector identities, we compare Eqs. (\ref{Estardot}) and (\ref{Jstardot}) to Eqs. (\ref{farfluxE}) and (\ref{farfluxJ}) term by term to obtain 54 constraints on the 62 coefficients. It turns out, however, that 4 of these constraints are not linearly independent of others, so there are 50 non-trivial constraints, leaving 12 undetermined degrees of freedom. 
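The coefficient bookkeeping of this section amounts to simple counting; a trivial sketch mirroring the counts quoted above:

```python
# Coefficient bookkeeping for the 3.5PN spin-orbit balance argument
eom_coeffs = 2 * 15      # 30 arbitrary coefficients in the equations of motion (S and xi sets)
dE_coeffs = 2 * 3        # 6 ambiguity terms in E (alpha_1..alpha_3 for S and for xi)
dJ_coeffs = 2 * 13       # 26 ambiguity terms in J (gamma_1..gamma_13 for S and for xi)
total = eom_coeffs + dE_coeffs + dJ_coeffs

constraints = 54         # from the term-by-term comparison with the far-zone fluxes
dependent = 4            # constraints that are not linearly independent of the others
free = total - (constraints - dependent)
print(total, constraints - dependent, free)   # 62 coefficients, 50 constraints, 12 free
```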
The specific choice of the free coefficients is somewhat arbitrary; one choice gives the following values for the coefficients (\ref{coefficients}) in the equations of motion (\ref{eomgeneral}): \begin{eqnarray} a_1&=&2820[2160]+15\gamma_4+45\gamma_7+45\gamma_9 +15\gamma_{11}+45\gamma_{12}-3\alpha_2 \,, \nonumber \\ a_2&=&-1728[-1348]-13\gamma_4-39\gamma_7-42\gamma_9-11\gamma_{11}- 42\gamma_{12}+3\alpha_2 \,, \nonumber \\ && +48[36](\alpha-\beta) \,, \nonumber \\ a_3&=&-6020[-4620]-35\gamma_4-105\gamma_7-105\gamma_9-35\gamma_{11}- 105\gamma_{12}+7\alpha_2 \,, \nonumber \\ a_4&=&-220[-164]-\gamma_4-3\gamma_7-3\gamma_9-2\gamma_{11}-3\gamma_{12} \,, \nonumber \\ a_5&=&\frac{68}{3}[36]+\gamma_4+3\gamma_7+3\gamma_9+2\gamma_{11} +3\gamma_{12}+16[12]\alpha \,, \nonumber \\ a_6&=&860[640]+5\gamma_4+15\gamma_7+15\gamma_9+10\gamma_{11}+15\gamma_{12} \,, \nonumber \\ a_{7}&=&-788[-608]-4\gamma_4-6\gamma_7-15\gamma_9-4\gamma_{11}- 15\gamma_{12} \,, \nonumber \\ a_{8}&=&\frac{3152}{3}[808]+4\gamma_4+16\gamma_7+24\gamma_9+4\gamma_{11}+16\gamma_{12}-32[-24]\alpha \,, \nonumber \\ a_{9}&=&2460[1900]+10\gamma_4+30\gamma_7+45\gamma_9+10\gamma_{11}+45\gamma_{12} \,, \label{solution} \nonumber \\ a_{10}&=&-148[-112]-2\gamma_4-3\gamma_7-3\gamma_9-2\gamma_{11}-3\gamma_{12} \,, \nonumber \\ a_{11}&=&3320[2540]+25\gamma_4+60\gamma_7+60\gamma_9+25\gamma_{11}+60\gamma_{12} \,, \nonumber \\ a_{12}&=&-6020[-4620]-35\gamma_4-105\gamma_7-105\gamma_9-35\gamma_{11}- 105\gamma_{12} \,, \nonumber \\ a_{13}&=&\frac{1276[968]}{3}+4\gamma_4+11\gamma_7+9\gamma_9+4\gamma_{11}+5\gamma_{12} \,, \nonumber \\ a_{14}&=&-4392[-3372]-23\gamma_4-87\gamma_7-78\gamma_9-23\gamma_{11}- 54\gamma_{12}+48[36]\alpha \,, \nonumber \\ a_{15}&=&-376 \left [-\frac{872}{3} \right ]-2\gamma_4-8\gamma_7-6\gamma_9-2\gamma_{11}-2\gamma_{12} \,, \end{eqnarray} where the numbers in square brackets represent the values to be used, along with the corresponding set of six free coefficients, for the terms in Eq. 
(\ref{eomgeneral}) involving $\fourvec{\xi}$. The unique choice of the twelve coefficients \begin{equation} \alpha_2=\frac{45}{2} ,\, \gamma_4=\frac{287}{2} ,\, \gamma_7=-\frac{89}{6} ,\, \gamma_9=-\frac{140}{3} ,\, \gamma_{11}=-\frac{263}{2} ,\, \gamma_{12}=-1 ,\, \end{equation} for the ${\bf {\cal S}}$ terms, and \begin{equation} \alpha_2=-\frac{105}{2} ,\, \gamma_4=\frac{181}{2} ,\, \gamma_7=-\frac{155}{6} ,\, \gamma_9=-34 ,\, \gamma_{11}=-\frac{105}{2} ,\, \gamma_{12}=2 \,, \end{equation} for the $\fourvec{\xi}$ terms, along with the values $\alpha=-1$, and $\beta=0$ for the harmonic Damour-Deruelle gauge at 2.5PN order, gives precisely the 3.5PN spin-orbit radiation reaction terms derived in \cite{dire3}. \section{Gauge Freedom and Arbitrary Coefficients in the Equation of Motion} \label{sec:gauge} The formulas for energy and angular momentum flux in the far zone are gauge invariant, while the equations of motion are gauge, or coordinate dependent. Any coordinate transformation $x^\mu \to x^\mu + \zeta^\mu$, where $\zeta^\mu$ is, in a suitable sense, of 2.5PN and 3.5PN order relative to $x^\mu$, will induce changes in the variables of a binary system, such as the relative vector ${\bf x}$ and the spin vectors. Notice that a transformation of coordinate time simply induces a velocity-dependent change in ${\bf x}$ via ${\bf x}(t+\delta t) = {\bf x}(t) + {\bf v}\delta t$. As for the spin, any change induced by a gauge transformation at 2.5PN or 3.5PN order can always be reabsorbed into a new definition of spin, since it is ambiguous at radiation-reaction orders. Therefore we will only consider coordinate transformation induced changes in the relative vector ${\bf x}$ at 2.5PN and 3.5PN-SO orders, according to \begin{equation} {\bf x}^\prime = {\bf x} + \delta {\bf x}_{\rm 2.5PN} +\delta {\bf x}_{\rm 3.5PN-SO} \,. \end{equation} The 2.5PN order coordinate change that corresponds to the arbitrary coefficients $\alpha$ and $\beta$ in Eq. 
(\ref{eom25PN}) was calculated in \cite{iyerwill1,iyerwill2}, and is given by \begin{equation} \delta {\bf x}_{\rm 2.5PN} = \frac{8\mu m}{15r}\left [\beta \dot r {\bf n}+(2\beta-3\alpha) {\bf v}\right ] \,. \label{dx25PN} \end{equation} We can derive directly \begin{eqnarray} {\bf v}'&=&{\bf v}+\delta {\dot {\bf x}}_{\rm 2.5PN} + \delta {\dot {\bf x}}_{\rm 3.5PN-SO} \,, \nonumber \\ \frac{d{\bf v}'}{dt'}&=&\frac{d{\bf v}}{dt}+\delta {\ddot {\bf x}}_{\rm 2.5PN} + \delta {\ddot {\bf x}}_{\rm 3.5PN-SO} \,, \nonumber \\ \frac{m{\bf x}'}{r'^3}&=&\frac{m{\bf x}}{r^3}+\frac{m}{r^3} (\delta {\bf x}_{\rm 2.5PN} -3{\bf n}{\bf n}\cdot \delta {\bf x}_{\rm 2.5PN}) \nonumber \\ &&+ \frac{m}{r^3} (\delta {\bf x}_{\rm 3.5PN-SO} -3{\bf n}{\bf n}\cdot \delta {\bf x}_{\rm 3.5PN-SO}) \,. \end{eqnarray} The 2.5PN terms in these equations must also be used to determine the induced change in the 1PN spin-orbit acceleration terms in Eq. (\ref{eomPNSO}). In evaluating $\delta {\ddot {\bf x}}_{\rm 2.5PN}$ explicitly using Eq. (\ref{dx25PN}), the 1PN spin-orbit equations must be employed wherever an acceleration occurs. 
The result is that the equation of motion (\ref{PNeom}) changes between the original and the new coordinates by a quantity ${\bf Q}$ given by \begin{eqnarray} {\bf Q}&=&\biggl \{\frac{8\mu m}{5r^3}\biggl[\left (3\beta v^2+(2\alpha-3\beta)\frac{m}{r}-5\beta \dot r^2\right )\dot r {\bf n} - \left (v^2-\frac{m}{r}-3\dot r^2 \right )\alpha {\bf v} \biggr]\biggr \} \nonumber \\ && - \delta {\ddot {\bf x}}_{\rm 3.5PN-SO} - \frac{m}{r^3} (\delta {\bf x}_{\rm 3.5PN-SO} -3{\bf n}{\bf n}\cdot \delta {\bf x}_{\rm 3.5PN-SO}) \nonumber \\ &&- \frac{8\mu m}{5r^5}\biggl[ \frac{1}{2r}{\bf {\tilde L_N}}\cdot(4{\cal S}+3\fourvec{\xi}) \left (3 (\alpha- \beta) {\dot r} {\bf n} + \alpha {\bf v} \right ) - \dot r {\bf v} \times (4{\cal S}+3\fourvec{\xi}) \left ( \alpha + \frac{1}{3} \beta \right ) \nonumber \\ &&-\frac{1}{6} {\bf n} \times (4{\cal S}+3\fourvec{\xi}) \left (\beta v^2 - \beta \frac{m}{r}-(9\alpha+6\beta)\dot r^2 \right) \biggr] \,. \label{Qterm} \end{eqnarray} Note that the 2.5PN terms in Eq. (\ref{Qterm}) match exactly the arbitrary terms in Eq. (\ref{eom25PN}). We now want to find a form for $\delta {\bf x}_{\rm 3.5PN-SO}$ so that the 3.5PN-SO terms in Eq. (\ref{Qterm}) match the terms in (\ref{eomgeneral}) generated by the arbitrary coefficients in Eq. (\ref{coefficients}). This can be done either by direct integration to find $\delta {\bf x}_{\rm 3.5PN-SO}$, or by assuming a suitable form for $\delta {\bf x}_{\rm 3.5PN-SO}$ and seeing if one can solve for a set of coefficients. 
Remarkably, a solution can be found, and is given by \begin{eqnarray} \delta {\bf x}_{\rm 3.5PN-SO}&=& - \frac{\mu}{5r^2} \biggl \{ \frac{{\dot r}{\bf n}}{r} ({\bf {\tilde L_N}}\cdot{\cal S}) \left (\gamma_4 + 3\gamma_7 + 3\gamma_9 + \gamma_{11} + 3\gamma_{12} -\frac{1}{5}\alpha_2 \right ) \nonumber \\ && + \frac{\bf v}{3r} ({\bf {\tilde L_N}}\cdot{\cal S}) \left ( \gamma_4 + 3\gamma_7 + 3\gamma_9 + 3\gamma_{12} - \frac{1}{5}\alpha_2 \right ) \nonumber \\ && + \frac{1}{4} {\bf n}\times{\cal S} \biggl [ (\gamma_7+\gamma_9+\gamma_{12} )v^2 + (\gamma_4 + 3\gamma_7 + 3\gamma_9 + \gamma_{11} + 3\gamma_{12} ) {\dot r}^2 \nonumber \\ && - (\gamma_{12}-\frac{4}{3}\beta) \frac{m}{r} \biggr ] - \frac{1}{4} {\dot r} {\bf v}\times{\cal S} (\gamma_9 + \gamma_{12} ) + ({\cal S} \to \fourvec{\xi} ) \biggr \}\,. \end{eqnarray} The 12 ($6+6$) coefficients correspond precisely to the 12 degrees of freedom in Eqs. (\ref{solution}). \section{Concluding remarks} \label{sec:conclusions} We have used energy and angular momentum balance to deduce the general form of the 3.5PN spin-orbit radiation reaction terms in the two-body equations of motion, and showed that the remaining undetermined degrees of freedom correspond to the freedom to change gauges or coordinates at the corresponding post-Newtonian order. A specific choice of the free coefficients yields 3.5PN spin-orbit terms in the equations of motion identical with those derived from first principles. The results were subject to the physically reasonable assumption that gravitational radiation reaction has no effect on the magnitude of the individual spins, to 3.5PN order. A natural extension of this work is to determine the contribution of spin-spin interactions in radiation reaction using balance arguments and to compare the results with those calculated from first principles by Wang and Will \cite{dire4}. This work is in progress.
\section*{Notation} \begin{eqnarray*} \mathbb{R}_+ &=&[0,\infty);\\ \mathcal{B} &=& \mbox{space of bounded measurable functions with bounded support};\\ D&=& \{(\omega_1, \omega_2, \omega_3)\in \mathbb{R}_+^3 \, |\, \omega_1 + \omega_2 \geq\omega_3\};\\ \mathbf{k} &&\mbox{wavevector; it belongs to } \mathbb{R}^N;\\ \omega(\mathbf{k}) && \mbox{dispersion relation};\\ \overline{T} &=& \overline{T}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}) \mbox{ interaction coefficient};\\ \mathcal{P}(\mathbb{R}_+) && \mbox{space of probability measures on }\mathbb{R}_+;\\ \mathcal{M}(\mathbb{R}_+) &&\mbox{set of finite measures on $\mathbb{R}_+$}. \end{eqnarray*} \section{Introduction} Wave turbulence (\cite{zakharov2004one,zakharov1992kolmogorov,nazarenko2011wave}, \cite[Entry turbulence]{scott2006encyclopedia}) describes weakly non-linear systems of dispersive waves. The present work focuses on the case of 4 interacting waves. We start with a brief presentation of the general 4-wave kinetic equation, move quickly to the isotropic case with simplified kernels, which is the object of study of the present work, and present the main results. We give a brief account of the theory of wave turbulence in Section \ref{sec:brief_account}. The rest of the text consists of the proofs of the main theorems. \subsection{The 4-wave kinetic equation.} Using the shorthand $n_i=n(\mathbf{k}_i,t)$, $n_k=n(\mathbf{k},t)$, $\omega_i=\omega(\mathbf{k}_i)$ and $\omega=\omega(\mathbf{k})$, the \textbf{4-wave kinetic equation} is given by \begin{eqnarray} \label{eq:kinetic_wave_equation} \frac{d}{dt}n(\mathbf{k}, t) &=& 4\pi \int_{\mathbb{R}^{3N}} \overline{T}^2(\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3, \mathbf{k}) (n_1n_2n_3 +n_1n_2n_k -n_1n_3n_k-n_2n_3n_k) \\ && \qquad \times \delta(\omega_1+\omega_2-\omega_3 -\omega) \delta(\mathbf{k}_1+\mathbf{k}_2-\mathbf{k}_3-\mathbf{k}) d\mathbf{k}_1d\mathbf{k}_2d\mathbf{k}_3,
\nonumber \end{eqnarray} where $\mathbf{k}\in \mathbb{R}^N$ is called the \textbf{wavevector}; the function $n=n(\mathbf{k},t)$ can be interpreted as the spectral density (in $\mathbf{k}$-space) of a wave field and is called the \textbf{energy spectrum}; $\omega(\mathbf{k})$ is the \textbf{dispersion relation}; and $$\overline T_{123k}:= \overline T(\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3, \mathbf{k}) $$ is the \textbf{interaction coefficient}. The quantities $$E=\int_{\mathbb{R}^N}\omega(\mathbf{k})\, n(\mathbf{k})d\mathbf{k}, \quad W=\int_{\mathbb{R}^N} n(\mathbf{k}) d\mathbf{k}$$ correspond to the total energy and the waveaction (total number of waves), respectively. These two quantities are formally conserved. \paragraph{Properties of the dispersion relation and the interaction coefficient.} $\omega(\mathbf{k})$ and $\overline T_{123k}$ are homogeneous, i.e., for some $\alpha>0$ and $\beta\in\mathbb{R}$, $$\omega(\xi \mathbf{k}) = \xi^{\alpha} \omega(\mathbf{k}), \qquad \overline T(\xi \mathbf{k}_1, \xi \mathbf{k}_2, \xi \mathbf{k}_3, \xi \mathbf{k}) = \xi^\beta \overline T(\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3, \mathbf{k}), \qquad \xi>0.$$ Moreover, the interaction coefficient possesses the following symmetries: $$\overline T_{123k}=\overline T_{213k}=\overline T_{12k3}=\overline T_{3k12}.$$ \paragraph{Example: shallow water.} In the case of shallow water we deal with weakly non-linear waves on the surface of an ideal fluid in an infinite basin of small constant depth $h$. In this case (\cite{zakharov1999statistical}) we have $\alpha=1$, $\beta=2$, the dimension is $N=2$, and \begin{equation}\label{eq:T_shallow_water} \overline T(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k})=-\frac{1}{16\pi^2h}\frac{1}{(k_1k_2k_3k)^{1/2}} \left[(\mathbf{k}_1\cdot\mathbf{k}_2)(\mathbf{k}_3\cdot\mathbf{k})+(\mathbf{k}_1\cdot \mathbf{k}_3)(\mathbf{k}_2\cdot \mathbf{k}) + (\mathbf{k}_1\cdot\mathbf{k})(\mathbf{k}_2\cdot\mathbf{k}_3)\right].
\end{equation} In general $\overline T$ is given by very complex expressions; see for example \cite{zakharov1992kolmogorov}. \paragraph{Resonant conditions and the $\delta$ distributions.} The delta distributions appearing in equation \eqref{eq:kinetic_wave_equation} correspond to the so-called resonant conditions: \begin{eqnarray*} \mathbf{k}_1+\mathbf{k}_2&=&\mathbf{k}_3+\mathbf{k}\\ \omega(\mathbf{k}_1) + \omega(\mathbf{k}_2)&=&\omega(\mathbf{k}_3)+\omega(\mathbf{k}). \end{eqnarray*} These impose the conservation of energy and momentum in the wave interactions. \subsection{The \textit{simplified} weak isotropic 4-wave kinetic equation.} We focus our study on the weak formulation of the isotropic 4-wave kinetic equation, defined against functions in $\mathcal{B}(\mathbb{R}^N)$, the set of bounded measurable functions with bounded support in $\mathbb{R}^N$. More specifically, we assume that $n(\mathbf{k})=n(k)$ is a radial function (isotropic). Then, using the relation $\omega(\mathbf{k}) =k^\alpha$, we study the evolution of the \textbf{angle-averaged frequency spectrum} $\mu=\mu(d\omega)$, which corresponds to $$\mu(d\omega):=\frac{|S^{N-1}|}{\alpha} \omega^{\frac{N-\alpha}{\alpha}}n(\omega^{1/\alpha})d\omega,$$ where $S^{N-1}$ is the unit sphere in $\mathbb{R}^N$ and $|S^{N-1}|$ its surface area. The total number of waves (waveaction) and the total energy are now expressed respectively as \begin{eqnarray} \label{eq:totalNumberOfWaves} W&=& \int^\infty_0 \mu(d\omega)\\ E &=& \int^\infty_0 \omega \mu(d\omega).
\label{eq:total_energy} \end{eqnarray} The weak form of the isotropic equation is given formally by \begin{equation} \label{eq:isotropic_4_wave_equation_weak} \mu_t = \mu_0 + \int^t_0 Q(\mu_s, \mu_s, \mu_s) \, ds \end{equation} where $Q$ is defined against functions $f\in \mathcal{B}(\mathbb{R}_+)$ as \begin{eqnarray*} \langle f, Q(\mu, \mu,\mu) \rangle &=& \frac{1}{2} \int_{D} \mu(d\omega_1) \mu(d\omega_2) \mu(d\omega_3) K(\omega_1,\omega_2,\omega_3) \\ &&\qquad \times[ f(\omega_1+\omega_2-\omega_3) + f(\omega_3) -f(\omega_2) -f(\omega_1) ] \end{eqnarray*} where $D:= \{(\omega_1,\omega_2,\omega_3)\in\mathbb{R}^3_+ \,|\, \omega_1 + \omega_2 \geq \omega_3\}$. See Appendix \ref{sec:weak_isotropic_wave_eq} for the formal derivation of this equation. Formally, $K=K(\omega_1, \omega_2, \omega_3)$ is written as \begin{eqnarray} K(\omega_1, \omega_2, \omega_3) &=& \frac{8\pi}{\alpha |S^{N-1}|^4}(\omega)^{\frac{N-\alpha}{\alpha}}\\ && \qquad \int_{\left( S^{N-1}\right)^4} d\mathbf{s}_1d\mathbf{s}_2d\mathbf{s}_3d\mathbf{s}\, \overline{T}^2(\omega_1^{1/\alpha} \mathbf{s}_1, \omega_2^{1/\alpha}\mathbf{s}_2, \omega_3^{1/\alpha}\mathbf{s}_3, (\omega)^{1/\alpha} \mathbf{s}) \nonumber\\ && \qquad \qquad \times \delta(\omega_1^{1/\alpha}\mathbf{s}_1+ \omega_2^{1/\alpha}\mathbf{s}_2- \omega_3^{1/\alpha}\mathbf{s}_3- (\omega)^{1/\alpha} \mathbf{s}), \nonumber \end{eqnarray} where $\omega:=\omega_1+\omega_2-\omega_3$. Notice that formally $K$ is homogeneous of degree \begin{equation} \label{eq:lambda} \lambda:=\frac{2\beta-\alpha}{\alpha}. \end{equation} \bigskip \textbf{Our starting point is equation \eqref{eq:isotropic_4_wave_equation_weak} with \textit{simplified kernels} $K$.} In this work we do not study the relation between the interaction coefficient $\overline{T}$ and $K$.
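As a quick numerical aid (ours, not part of the paper's derivation), the homogeneity degree $\lambda$ of \eqref{eq:lambda} can be tabulated for the physical systems discussed later; the helper function below is hypothetical, and the values reproduce those quoted in the applications paragraph.

```python
# Sketch (not from the paper): the homogeneity degree of the kernel K,
# lambda = (2*beta - alpha)/alpha, for the scaling exponents alpha
# (dispersion relation) and beta (interaction coefficient).

def homogeneity_degree(alpha: float, beta: float) -> float:
    """Degree lambda of the kernel K: lambda = (2*beta - alpha)/alpha."""
    return (2.0 * beta - alpha) / alpha

if __name__ == "__main__":
    # Values quoted in the applications paragraph of the paper.
    print(homogeneity_degree(2.0, 2.0))  # Langmuir waves / spin waves -> 1.0
    print(homogeneity_degree(1.0, 2.0))  # shallow water               -> 3.0
    print(homogeneity_degree(2.0, 3.0))  # waves on elastic plates     -> 2.0
```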
Specifically, we will consider the following type of kernels: \begin{definition} We say that $K$ is a \textbf{model kernel} if \begin{itemize} \item $K: \mathbb{R}_+^3 \rightarrow \mathbb{R}_+$; \item $K$ is continuous in $\mathbb{R}^3_+=[0,\infty)^3$; \item $K$ is homogeneous of degree $\lambda$; \item $K(\omega_1,\omega_2,\omega_3)=K(\omega_2,\omega_1,\omega_3)$ for all $(\omega_1, \omega_2,\omega_3)\in \mathbb{R}_+^3$. \end{itemize} \end{definition} Some examples of model kernels are: \begin{eqnarray} \nonumber K(\omega_1,\omega_2,\omega_3) &=& \frac{1}{2}\left( \omega_1^p \omega_2^q \omega_3^r + \omega_1^q \omega_2^p \omega_3^r \right) \quad \mbox{with }p+q+r=\lambda,\\ K(\omega_1, \omega_2, \omega_3) &=& (\omega_1\omega_2\omega_3)^{\lambda/3}, \label{eq:bestK}\\ K(\omega_1, \omega_2, \omega_3) &=& \frac{1}{3}(\omega_1^\lambda +\omega_2^\lambda + \omega_3^\lambda). \nonumber \end{eqnarray} The main question we want to address is: \begin{quote} \textsc{For which types of kernels $K$ is there existence and uniqueness of solutions to equation \eqref{eq:isotropic_4_wave_equation_weak} and, moreover, can these solutions be obtained as the mean-field limit of a specific stochastic particle system?} \end{quote} The present work gives a positive answer for a particular class of kernels, as explained in the next section; but first, to motivate the problem, we need to answer the following two questions: \paragraph{a) Why is it relevant to study the weak isotropic 4-wave kinetic equation with simplified kernels?} \mbox{} The present work is inspired by the article \cite{connaughton2009numerical} from the physics literature on wave turbulence. In \cite{connaughton2009numerical} the author works with the 3-wave kinetic equation and considers its isotropic version, also assuming simplified kernels. The idea is that the 3-wave kinetic equation can be interpreted as a process where particles coagulate and fragment.
This interpretation allows one to use numerical methods coming from the theory of coagulation-fragmentation processes, which can be applied to these simplified kernels. As in \cite{connaughton2009numerical}, ignoring the specific shape of the interaction coefficient $\overline{T}$ is not uncommon in the wave turbulence literature; in general the expression for $\overline{T}$ is too complex to extract information from. Moreover, the most important feature in wave turbulence, the steady states known as the KZ-spectra, depends only on the parameters $\alpha$, $\beta$ and $N$. That is why in the physics literature $\overline{T}$ plays a secondary role, sometimes no role at all. It is believed that only the asymptotic scaling properties of the kernel affect the asymptotic behaviour of the solution. This is similar to what happens for Smoluchowski's coagulation equation, where homogeneous kernels give rise to self-similar solutions (scaling solutions) in some cases. The hypothesis that solutions become self-similar in the long run in the presence of a homogeneous kernel is called the \textbf{dynamical scaling hypothesis}; see \cite{mischler2011uniqueness} for more on this. In the case of wave turbulence we expect these self-similar solutions to correspond to the steady states given by the KZ-spectrum. Proving the dynamical scaling hypothesis for the simplified isotropic 4-wave kinetic equation under the assumptions of Theorem \ref{eq:existence_solutions_kinetic_4wave} (existence of solutions) would imply proving the validity of the KZ-spectrum for these simplified kernels (if there is correspondence between the two). This would provide a strong indication of the mathematical validity of the theory of wave turbulence. \paragraph{b) Why consider the isotropic case?} There are examples in the physics literature where the phenomena are considered to behave isotropically (as for Langmuir waves in isotropic plasmas and shallow water with a flat bottom).
The main reason, though, to consider the isotropic case is that it makes it easier to obtain a mean-field limit from discrete stochastic particle systems. Suppose that we want to find a discrete particle system that approximates the dynamics of \eqref{eq:kinetic_wave_equation}. For given waves with wavevectors $\mathbf{k}_1, \mathbf{k}_2,\mathbf{k}_3$, we want to see if they interact. On the one hand, due to the resonance conditions, $\mathbf{k}$ defined as $$\mathbf{k}= \mathbf{k}_1+\mathbf{k}_2 -\mathbf{k}_3$$ is uniquely determined. On the other hand, we must additionally impose the constraint $$\omega=\omega_1+\omega_2-\omega_3,$$ which in general will not be satisfied. Therefore, if we consider systems with a finite number of particles, in general, interactions will not occur and the dynamics will be constant. We get around this problem by considering the isotropic case. By assuming that $n=n(k)$ is a rotationally invariant function, we add the degree of freedom that we need. \subsubsection{Summary of results} Next we summarise the main results of the present work. These results are analogous to the ones presented in the papers \cite{norris1999smoluchowski,norris2000cluster} for the Smoluchowski equation (coagulation model). \begin{remark}[Strategy] \label{rem:strategy} We will adapt the proofs by Norris in \cite{norris1999smoluchowski} and \cite{norris2000cluster} for coagulation phenomena. In the proof by Norris in \cite{norris1999smoluchowski}, sublinear functions $\varphi:\mathbb{R}_+ \rightarrow\mathbb{R}_+$ are used, i.e., \begin{eqnarray*} \varphi(\lambda x) &\leq& \lambda \varphi(x), \quad \lambda \geq 1\\ \varphi(x+y)&\leq& \varphi(x)+\varphi(y).
\end{eqnarray*} These functions are the key to obtaining bounds because of the following property: let $(\mu^n_t)_{t\geq0}$ be a stochastic coagulation process with $n$ particles; if initially $$\langle \varphi, \mu^n_0 \rangle \leq \Lambda$$ for some $\Lambda<\infty$ and all $n\in \mathbb{N}$, then $$\langle \varphi, \mu^n_t \rangle \leq \Lambda \mbox{ for all } n,t.$$ Actually, what we obtain is that $$\langle\varphi, \mu^n_t \rangle \leq \langle \varphi, \mu^n_0 \rangle$$ thanks to the sublinearity of $\varphi$: say that two particles of masses $x, y\in \mathbb{R}_+$ coagulate creating a particle of mass $x+y$; then \begin{equation} \label{eq:ex_varphi} \varphi(x+y) \leq \varphi(x) + \varphi(y) \end{equation} by sublinearity. \medskip In general, this idea for obtaining bounds cannot be applied to the type of stochastic particle processes that we are going to consider, because they also include fragmentation phenomena; in an interaction, two particles of masses $\omega_1, \omega_2\in\mathbb{R}_+$ disappear and two particles of masses $\omega, \omega_3\in \mathbb{R}_+$ are created. To get bounds on this stochastic process using the method above we need an expression analogous to \eqref{eq:ex_varphi}, i.e., $$\varphi(\omega)+\varphi(\omega_3) \leq \varphi(\omega_1)+\varphi(\omega_2).$$ Therefore we can use Norris's method, with the appropriate adaptations, in the particular case where $\varphi(\omega)=\omega+c$ for a constant $c$, which we will take to be one. Notice that this works as a consequence of the conservation of energy (given by the $\omega$'s, see \eqref{eq:total_energy}) and of the conservation of the total number of particles at each interaction. \end{remark} \medskip \begin{definition} Consider $\varphi(\omega)=\omega+1$. We say that a kernel $K$ is \textbf{sub-multiplicative} if \begin{equation} \label{eq:bounds_on_K} K(\omega_1, \omega_2, \omega_3) \leq \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3).
\end{equation} \end{definition} \paragraph{A. Existence and uniqueness of solutions.} \begin{definition}[Solution and types of solutions] \label{def:solutions} We say that $(\mu_t)_{t<T}$ is a \textit{local solution} if it satisfies \eqref{eq:isotropic_4_wave_equation_weak} for all bounded measurable functions $f$ with bounded support and if $\langle \omega, \mu_t \rangle \leq \langle \omega, \mu_0\rangle$ for all $t<T$. If $T=+\infty$ then we have a \textit{solution}. If, moreover, $$\int^\infty_0\omega \mu_t(d\omega)$$ is finite and constant, then we say that $(\mu_t)_{t<T}$ is \textit{conservative}. We call any local solution $(\mu_t)_{t<T}$ such that $$\int^t_0\langle \varphi^2, \mu_s\rangle\, ds<\infty\quad \mbox{for all } t<T$$ a \textit{strong solution}. \end{definition} \begin{remark} Observe that we allow for non-conservative solutions, implying loss of mass. This corresponds to gelation in coagulation models and to the concept of finite-capacity cascades in wave turbulence (see \cite{connaughton2009numerical}). \end{remark} \begin{theorem}[Existence and uniqueness of solutions] \label{eq:existence_solutions_kinetic_4wave} Consider equation \eqref{eq:isotropic_4_wave_equation_weak} and a given measure $\mu_0$ on $\mathbb{R}_+$. Define $\varphi(\omega)=\omega+1$ and assume that $K$ is a sub-multiplicative model kernel. Assume further that $\langle \varphi, \mu_0 \rangle <\infty$ (i.e., initially the total number of waves \eqref{eq:totalNumberOfWaves} and the total energy \eqref{eq:total_energy} are finite). Then, if $(\mu_t)_{t<T}$ and $(\nu_t)_{t<T}$ are local solutions starting from $\mu_0$, and if $(\nu_t)_{t<T}$ is strong, then $\mu_t= \nu_t$ for all $t<T$. Moreover, any strong solution is conservative. Also, if $\langle \varphi^2, \mu_0\rangle<\infty$, then there exists a unique maximal strong solution $(\mu_t)_{t<\zeta(\mu_0)}$ with $\zeta(\mu_0)= \langle \varphi^2, \mu_0 \rangle^{-1} \langle \varphi, \mu_0 \rangle^{-1}$.
\end{theorem} The proof of this theorem is an adaptation of \cite[Theorem 2.1]{norris1999smoluchowski}. \paragraph{B. Mean-field limit (coagulation-fragmentation phenomena).} We will consider a system of stochastic particles undergoing coagulation-fragmentation phenomena. The basic idea is that three particles $(\omega_1,\omega_2,\omega_3)$ with $\omega_1+\omega_2\geq \omega_3$ interact at a given rate $K(\omega_1, \omega_2, \omega_3)$. In the interaction, first $\omega_1$ and $\omega_2$ coagulate to form $\omega_1+\omega_2$ and then, in the presence of $\omega_3$, the coagulant splits into two other components, which are $\omega$ and a new $\omega_3$ (fragmentation). So interactions are $$[\omega_1,\omega_2,\omega_3] \mapsto [\omega_1+\omega_2-\omega_3,\omega_3,\omega_3].$$ Note that we assume that $K$ is symmetric in the first two variables because the roles of $\omega_1$ and $\omega_2$ in the interactions are symmetric. For each $n\geq 1$ we will define and build an instantaneous coagulation-fragmentation stochastic particle system $(X_t^n)_{t\geq 0}$ of $n$ particles (Section \ref{sec:coagulation_fragmentation_stochastic_process}) following the previous ideas. We will approximate the solutions to the isotropic 4-wave kinetic equation using these coagulation-fragmentation dynamics. We present here two mean-field limits, each requiring a different set of assumptions: \begin{theorem}[First mean-field limit] \label{th:mean_field_limit_unbounded_kernel} Assume that, for $\tilde \varphi(\omega) =\omega^{1-\gamma}$ with $\gamma\in (0,1)$, $K$ is a model kernel with $$K(\omega_1, \omega_2, \omega_3) \leq \tilde\varphi(\omega_1)\tilde\varphi(\omega_2)\tilde\varphi(\omega_3).$$ Assume also that $\langle \omega, X_0^n \rangle$ is bounded uniformly in $n$ by $\langle \omega, \mu_0\rangle <\infty$, and that $$X^n_0\rightarrow \mu_0 \quad \mbox{weakly.}$$ Then the sequence of laws of $(X^n_t)_{n\in\mathbb{N}}$ is tight in the Skorokhod topology.
Moreover, under any weak limit law, $(\mu_t)_{t\geq 0}$ is almost surely a solution of equation \eqref{eq:isotropic_4_wave_equation_weak}. In particular, this equation has at least one solution. \end{theorem} The proof of this theorem is an adaptation of \cite[Theorem 4.1]{norris1999smoluchowski}. \medskip Denote by $d$ some metric on $\mathcal{M}$, the set of finite measures on $\mathbb{R}_+$, which is compatible with the topology of weak convergence, i.e., \begin{equation} \label{eq:definition_metric_weak_convergence} d(\mu_n, \mu)\rightarrow 0 \quad \mbox{if and only if} \quad \langle f, \mu_n\rangle \rightarrow \langle f, \mu \rangle \end{equation} for all bounded continuous functions $f: \mathbb{R}_+\rightarrow \mathbb{R}$. We choose $d$ so that $d(\mu, \mu') \leq \|\mu-\mu'\|$ for all $\mu, \mu'\in \mathcal{M}$. \begin{theorem}[Second mean-field limit] \label{th:mean-field-limit-complete} Let $K$ be a model kernel and let $\mu_0$ be a measure on $\mathbb{R}_+$. Assume that for $\varphi(\omega)=\omega+1$ it holds that $$K(\omega_1, \omega_2, \omega_3) \leq \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)$$ and that $\langle \varphi, \mu_0\rangle<\infty$ and $\langle \varphi^2, \mu_0\rangle<\infty$. Denote by $(\mu_t)_{t<T}$ the maximal strong solution to \eqref{eq:isotropic_4_wave_equation_weak} provided by Theorem \ref{eq:existence_solutions_kinetic_4wave}. Let $(X^n_t)_{n\in \mathbb{N}}$ be a sequence of instantaneous coagulation-fragmentation particle systems with jump kernel $K$. Suppose that $$d(\varphi X^n_0, \varphi \mu_0) \rightarrow 0$$ as $n\rightarrow \infty$. Then, for all $t<T$, $$\sup_{s\leq t} d(\varphi X^n_s, \varphi \mu_s) \rightarrow 0$$ in probability, as $n\rightarrow \infty$. \end{theorem} The proof of this theorem is an adaptation of \cite[Theorem 4.4]{norris1999smoluchowski}. \medskip Many mathematical works have been devoted to the study of the coagulation-fragmentation equation.
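To fix ideas, the instantaneous coagulation-fragmentation dynamics described above can be sketched as a naive stochastic simulation. This is only an illustrative toy of ours (the function names and the time-stepping scheme are not from the paper, and the mean-field rate normalisation in $n$ is omitted): admissible triples jump at a rate proportional to $K$, and each jump $[\omega_1,\omega_2,\omega_3]\mapsto[\omega_1+\omega_2-\omega_3,\omega_3,\omega_3]$ preserves both the particle number and the total energy $\sum_i \omega_i$.

```python
import random

def jump(particles, i, j, l):
    """Apply [w1, w2, w3] -> [w1 + w2 - w3, w3, w3] at indices i, j, l."""
    w1, w2, w3 = particles[i], particles[j], particles[l]
    assert w1 + w2 >= w3  # admissibility: the new particle must have w >= 0
    particles[i] = w1 + w2 - w3
    particles[j] = w3
    # particles[l] keeps the value w3: particle number and energy are conserved

def simulate(particles, kernel, t_max, rng=random.Random(0)):
    """Naive Gillespie-style simulation of the instantaneous
    coagulation-fragmentation system (O(n^3) rate scan per step)."""
    t, n = 0.0, len(particles)
    while True:
        # rates of all admissible ordered triples (i, j, l), pairwise distinct
        triples, rates = [], []
        for i in range(n):
            for j in range(n):
                for l in range(n):
                    if len({i, j, l}) == 3 and particles[i] + particles[j] >= particles[l]:
                        triples.append((i, j, l))
                        rates.append(kernel(particles[i], particles[j], particles[l]))
        total = sum(rates)
        if total == 0:
            return t
        t += rng.expovariate(total)  # exponential waiting time
        if t > t_max:
            return t_max
        i, j, l = rng.choices(triples, weights=rates)[0]
        jump(particles, i, j, l)
```

For instance, running `simulate` with the product kernel \eqref{eq:bestK} on a handful of particles and checking `sum(particles)` before and after illustrates the pathwise conservation that the bounds in Remark \ref{rem:strategy} exploit.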
We base our work on \cite{norris1999smoluchowski} and \cite{norris2000cluster}, but the reader is also referred, for example, to \cite{eibeck2000approximative}, \cite{escobedo2005self}, \cite{laurenccot2002discrete}, \cite{wagner2005explosion}. \paragraph{C. Applications} For the physical applications we consider $K$ given by expression \eqref{eq:bestK}, i.e.\ $K(\omega_1, \omega_2, \omega_3)=(\omega_1\omega_2\omega_3)^{\lambda/3}$, which is sub-multiplicative for $\lambda\in[0,3]$ (since $\omega^{\lambda/3} \leq \omega+1$ when $\lambda/3\in[0,1]$). If $\lambda\in[0,3)$ then we can apply all the previous theorems. In the case $\lambda=3$ the theorems also apply, with the exception of the first mean-field limit, Theorem \ref{th:mean_field_limit_unbounded_kernel}. Here are some examples: \begin{itemize} \item \textit{Langmuir waves in isotropic plasmas and spin waves}: $\beta=2$, $\alpha=2$, so $\lambda=1$ (the dimension is $N=3$). \item \textit{Shallow water} (isotropic with a flat bottom, \cite{zakharov1999statistical}): $\beta=2$, $\alpha=1$, so $\lambda=3$ (dimension $N=2$). \item \textit{Waves on elastic plates}: $\beta=3$, $\alpha=2$, so $\lambda=2$ (dimension $N=2$). \end{itemize} However, these results cannot be applied to other systems such as gravity waves on deep water, nonlinear optics and Bose-Einstein condensates. \subsection{Some notes on the physical theory of Wave Turbulence} \label{sec:brief_account} The theory of wave turbulence is a relatively recent field where most of the results are due to physicists. Next, we present some concepts of the theory extracted from \cite{zakharov2004one,zakharov1992kolmogorov,nazarenko2011wave}, \cite[Entry turbulence]{scott2006encyclopedia}. All the results are formal and require a rigorous mathematical counterpart. Wave turbulence comprises so-called weak wave turbulence (whose central object is the kinetic wave equation) and so-called `coherent structures'. Wave turbulence takes place at the onset of weakly non-linear dispersive waves.
The assumption of weak non-linearity allows the derivation of the kinetic wave equation, of which \eqref{eq:kinetic_wave_equation} is an example for the case of 4 interacting waves. In the general case, $N$ waves interact in resonant sets, transferring energy. Differences between physical systems are given by the dimension of the system, the number of interacting waves and the medium itself (which is described by the dispersion relation and the interaction coefficient). \paragraph{A. Derivation of the wave kinetic equation and the Cauchy theory.} There is no rigorous mathematical derivation and Cauchy theory for the kinetic wave equation. In this work we prove existence and uniqueness of solutions for the isotropic weak 4-wave kinetic equation in some restricted setting. \begin{itemize} \item \emph{General procedure:} in \cite[Section 6.1.1]{nazarenko2011wave} a scheme of the general procedure to derive the kinetic wave equation is given. We do not reproduce the explanation here, but point out some of the key steps: \begin{itemize} \item the starting point is a nonlinear wave equation (mostly written in Hamiltonian form); \item then the equation is written in Fourier space in $\mathbf{k}$, using the interaction representation between waves; \item using the weak-nonlinearity hypothesis, a perturbation analysis is performed, expanding in a small nonlinearity parameter; \item finally, one performs statistical averaging. \end{itemize} \item \emph{Example: shallow water}.
In the case of shallow (or deep) water the vertical coordinate is taken to be $$-h<z<\eta(\mathbf{r}), \qquad\mathbf{r}=(x,y)$$ and the velocity field $V$ is incompressible and potential, $$\mbox{div } V=0, \qquad V= \nabla \Phi,$$ where the potential satisfies the Laplace equation $$\Delta \Phi=0$$ with boundary conditions $$\Phi|_{z=\eta}=\Psi(\mathbf{r},t), \qquad\frac{\partial \Phi}{\partial z}\Big|_{z=-h}=0.$$ The Hamiltonian is taken to be the sum $H=T+U$ of kinetic and potential energies defined as follows: \begin{eqnarray*} T&=&\frac{1}{2}\int dr \int^\eta_{-h}(\nabla \Phi)^2dz,\\ U&=& \frac{1}{2}g \int \eta^2 dr + \sigma \int \left( \sqrt{1+(\nabla \eta)^2}-1 \right) dr \end{eqnarray*} where $g$ is the acceleration of gravity and $\sigma$ is the surface tension coefficient. Zakharov \cite{zakharov1998weakly} derived the equations of motion for $\eta$ and $\Psi$ as $$\frac{\partial \eta}{\partial t}=\frac{\delta H}{\delta \Psi}, \qquad\frac{\partial \Psi}{\partial t}= - \frac{\delta H}{\delta \eta}.$$ In \cite{zakharov1999statistical}, Zakharov derives the kinetic wave equation for shallow and deep water starting from these equations. \item \emph{The delta distributions}. One of the main issues in studying the validity of the kinetic wave equation is the presence of the two delta distributions that enforce the conservation of energy and total momentum. \item \emph{$N$-waves}. At the beginning of this work the 4-wave equation was presented.
In the general case, the kinetic equation corresponds to $N$ interacting waves, where $N$ is the minimal number such that the interaction operator is non-zero, i.e., such that \begin{enumerate} \item the $N$-wave resonant conditions are satisfied on a non-trivial set of wavevectors (here `non-trivial set' must be made precise): \begin{eqnarray*} \omega(\mathbf{k}_1)\pm\omega(\mathbf{k}_2)\pm\hdots\pm\omega(\mathbf{k}_N)&=&0,\\ \mathbf{k}_1 \pm \mathbf{k}_2 \pm \hdots \pm \mathbf{k}_N &=& 0; \end{eqnarray*} \item the $N$-wave interaction coefficient $\overline{T}$ must be non-zero over this set. \end{enumerate} \end{itemize} \paragraph{B. The Kolmogorov-Zakharov (KZ) spectra.} The Kolmogorov-Zakharov spectra correspond to steady states of the system. \begin{itemize} \item \emph{Derivation, validity (locality) and stability.} The derivation of the KZ spectrum is explained in \cite[Chapter 3]{zakharov1992kolmogorov}. For the derivation, only the homogeneity index of the interaction coefficient $\overline{T}$ is needed. However, the validity of the KZ spectrum depends on the condition of `locality', i.e., that only waves with similar wavelengths interact. This condition translates into the finiteness of the interaction integral (see \cite{zakharov1992kolmogorov} for more details) and it does depend on the particular shape of $\overline{T}$. On the other hand, one should check the stability of the KZ spectrum under small perturbations. \item \emph{Case of shallow water:} for this case, the corresponding Kolmogorov-Zakharov solutions are (\cite{zakharov1999statistical}): $$n^{(1)}_k \sim k^{-10/3},$$ $$n^{(2)}_k \sim k^{-3}.$$ Observe that there are two solutions; the first corresponds to the energy flux and the second to the flux of action (corresponding to the waveaction).
\end{itemize} \paragraph{Historical note.} The kinetic wave equation was first derived by Nordheim in 1928 \cite{nordheim1928kinetic} in the context of a Bose gas, and by Peierls \cite{peierls1929kinetischen} in 1929 in the context of thermal conduction in crystals. \paragraph{C. Some examples.} We have already seen the case of shallow water, but there are many more examples. The Majda-McLaughlin-Tabak model is explained in \cite{zakharov2004one} in dimension 1, where the dispersion relation is given by $$\omega(\mathbf{k}) =k^{\alpha}, \quad \alpha>0,$$ with $k=\|\mathbf{k}\|$, and \begin{equation} \label{eq:T_in_MMTmodel} T_{123k}= (k_1 k_2 k_3 k)^{\beta/4} \end{equation} for some $\beta \in \mathbb{R}$. The particular case $\alpha=\frac{1}{2}$ corresponds to the Majda-McLaughlin-Tabak (MMT) model. We have a four-wave interaction process with \textbf{resonant conditions}: $$\left\{\begin{array}{l} \mathbf{k}_1+\mathbf{k}_2 =\mathbf{k}_3+\mathbf{k}\\ |k_1|^{1/2}+|k_2|^{1/2}= |k_3|^{1/2}+ |k|^{1/2} . \end{array} \right.$$ In this case wavenumbers that are non-trivial solutions to these conditions cannot all have the same sign. Moreover, non-trivial solutions can be parametrized by a two-parameter family $A$ and $\xi>0$: \begin{equation} \label{eq:parametrization_action_set} k_1=-A^2\xi^2, \quad k_2=A^2(1+\xi +\xi^2)^2, \quad k_3= A^2(1+\xi)^2, \quad k=A^2\xi^2(1+\xi)^2. \end{equation} When $\beta=0$ the collision rate is bounded. In \cite{zakharov2004one} the authors obtain the following Kolmogorov-type solutions for $\alpha=1/2$ and $\beta=0$: \begin{eqnarray} n & \sim & |k|^{-5/6}\\ n & \sim & |k|^{-1} \, .
\end{eqnarray} \medskip The derivation of the kinetic wave equation starts from the equation $$i \frac{\partial \psi}{\partial t} = \underbrace{\left| \frac{\partial}{\partial x}\right|^\alpha \psi}_{\mbox{dispersive}} + \lambda \underbrace{\left| \frac{\partial}{\partial x}\right|^{\beta/4} \left( \left| \left| \frac{\partial}{\partial x}\right|^{\beta/4}\psi \right|^2 \left| \frac{\partial}{\partial x}\right|^{\beta/4}\psi \right)}_{\mbox{non-linearity}} \quad \lambda = \pm1,$$ where $\psi(x,t)$ denotes a complex wave field. \bigskip Other examples in wave turbulence are (taken from \cite{nazarenko2011wave}): \begin{itemize} \item 4-wave examples: \begin{itemize} \item surface gravity waves: $N=2$, $\alpha=1/2$, $\beta=3$; \item Langmuir waves in isotropic plasmas, spin waves: $N=3$, $\alpha=2$, $\beta=2$; \item waves on elastic plates: $N=2$, $\alpha=2$, $\beta=3$; \item Bose-Einstein condensates and non-linear optics: $\alpha=2$, $\beta=0$; \item gravity waves on deep water: $N=2$, $\alpha=1/2$, $\beta=3$. \end{itemize} \item 3-wave examples: \begin{itemize} \item capillary waves: $N=2$, $\alpha=3/2$; \item acoustic turbulence, waves in isotropic elastic media: $N=3$, $\alpha=1$; \item internal waves in stratified fluids: $N=1$, $\alpha=-1$. \end{itemize} \item other examples: \begin{itemize} \item Kelvin waves on vortex filaments: $N=1$, 6-wave interaction, $\alpha=2$. \end{itemize} \end{itemize} \section{Existence of solutions for unbounded kernel} In this section we follow the steps of \cite[Theorem 2.1]{norris1999smoluchowski} (see Remark \ref{rem:strategy}). \begin{remark} We make some comments about Theorem \ref{eq:existence_solutions_kinetic_4wave}: \begin{enumerate} \item The statement remains correct even if $$K(\omega_1, \omega_2, \omega_3) \leq C\varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)$$ for some positive constant $C<\infty$.
This only changes $\zeta(\mu_0)$ into $$\zeta(\mu_0)=\langle \varphi^2, \mu_0 \rangle^{-1} C^{-1} \langle \varphi, \mu_0 \rangle^{-1}.$$ Notice also that, by rescaling time, we can eliminate the multiplicative constant. \item Notice that in the coagulation case, existence of strong solutions is assured up to time $T'= \langle \varphi^2, \mu_0\rangle^{-1}$. We expect that for the 4-wave equation existence of strong solutions can be assured for larger times; we do not obtain this here because, when bounding \eqref{eq:estimate_time_strong_sol}, we discard some negative terms. \item We will need the fact that, for $\varphi(\omega)=\omega +1$, any local solution $(\mu_t)_{t<T}$ satisfies \begin{equation} \label{eq:bounds_for_varphi} \langle \varphi, \mu_t\rangle \leq \langle \varphi, \mu_0\rangle \quad \mbox{for all }t<T. \end{equation} This is part of the definition of a solution (see Definition \ref{def:solutions}). Notice that strong solutions fulfil this condition automatically, as they are conservative (this is explained in expression \eqref{eq:strong_solutions_is_conservative}). \item We could have defined our set of test functions to include also measurable functions of linear growth (on an unbounded interval). The theorem would work in the same way, and we would have $\langle \varphi, \mu_t \rangle = \langle \varphi, \mu_0\rangle$ for all $t$ for which the solution exists; i.e., for that particular set of test functions we would only be considering conservative solutions. \item A main difference from the results obtained in \cite{norris1999smoluchowski} and \cite{norris2000cluster} is that we do not allow $K$ to blow up at zero. \end{enumerate} \end{remark} \subsection{Proof of Theorem \ref{eq:existence_solutions_kinetic_4wave}} The rest of this section consists of the proof of this theorem, which we split into several propositions, following the ideas and structure of \cite[Theorem 2.1]{norris1999smoluchowski}.
We want to apply an iterative scheme to the equation to prove existence of solutions, and for that we need estimates on $\| Q(\mu) \|$ and $\|Q(\mu)-Q(\mu')\|$, which are unavailable for unbounded kernels. To get around this problem, we consider an auxiliary process that approximates the desired solution and operates on bounded sets. This auxiliary process takes the form $(X^B_t, \Lambda^B_t)_{t\geq 0}$ for some bounded set $B$: $\Lambda^B_t$ gives an upper estimate of the effect on $X^B_t$ of the particles outside $B$, and $X^B_t$ is a lower bound for our process in $B$. Let $B\subset [0,\infty)$ be bounded. Denote by $\mathcal{M}_B$ the space of finite signed measures supported on $B$. We define $L^B: \mathcal{M}_B \times \mathbb{R} \rightarrow \mathcal{M}_B\times \mathbb{R}$ by the requirement: \begin{eqnarray*} \langle (f, a), L^B(\mu, \lambda) \rangle &=& \frac{1}{2} \int_D (f(\omega_1+\omega_2-\omega_3) \mathbb{1}_{\omega_1+\omega_2-\omega_3\in B} + a\varphi(\omega) \mathbb{1}_{\omega \notin B} \nonumber\\ && \qquad + f(\omega_3) -f(\omega_1) -f(\omega_2) ) K(\omega_1, \omega_2, \omega_3) \mu(d\omega_1) \mu(d\omega_2)\mu(d\omega_3) \nonumber\\ &+&(\lambda^2+ 2\lambda \langle \varphi ,\mu \rangle) \int^\infty_0 \left( a\varphi(\omega)- f(\omega) \right) \varphi(\omega) \mu(d\omega) \end{eqnarray*} for all bounded measurable functions $f$ on $(0,\infty)$ and all $a\in \mathbb{R}$, where $\omega = \omega_1+\omega_2-\omega_3$ and $$D= \{ (\omega_1, \omega_2, \omega_3)\in \mathbb{R}^3_+ \,:\, \omega_1+\omega_2\geq \omega_3\},$$ that is, the set where $\omega\geq 0$. We used the notation $\langle (f,a), (\mu, \lambda) \rangle = \langle f, \mu\rangle + a \lambda$. Consider the equation \begin{equation} \label{eq:auxiliary_equation_existence_proof} (\mu_t, \lambda_t) = (\mu_0, \lambda_0) + \int^t_0 L^B(\mu_s, \lambda_s)\, ds.
\end{equation} We admit as a \textit{local solution} any continuous map $$t \mapsto (\mu_t, \lambda_t):[0,T] \rightarrow \mathcal{M}_B \times \mathbb{R},$$ where $T\in (0,\infty)$, which satisfies equation \eqref{eq:auxiliary_equation_existence_proof} for all $t\in[0,T]$. \begin{proposition}[Existence for the auxiliary process] \label{prop:existence_auxiliary_process} Suppose $\mu_0\in \mathcal{M}_B$ with $\mu_0 \geq 0$ and that $\lambda_0\in[0,\infty)$. Then equation \eqref{eq:auxiliary_equation_existence_proof} has a unique solution $(\mu_t,\lambda_t)_{t\geq 0}$ starting from $(\mu_0,\lambda_0)$. Moreover, $\mu_t\geq 0$ and $\lambda_t\geq 0$ for all $t$. \end{proposition} The proof is obtained by adapting the one in \cite[Proposition 2.2]{norris1999smoluchowski}. \begin{proof} By assumption \eqref{eq:bounds_on_K}, for $\varphi(\omega)=\omega+1$ it holds that $$K(\omega_1, \omega_2, \omega_3) \leq \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3).$$ Observe that $\varphi \geq 1$. By a scaling argument we may assume, without loss of generality, that $$\langle \varphi, \mu_0 \rangle + \lambda_0 \leq 1,$$ which implies that $$\|\mu_0\| + |\lambda_0| \leq 1.$$ We will show next, by a standard iterative scheme, that there is a constant $T>0$ depending only on $\varphi$ and $B$, and a unique local solution $(\mu_t, \lambda_t)_{t\leq T}$ starting from $(\mu_0, \lambda_0)$. Then we will see that $\mu_t\geq 0$ for all $t\in [0,T]$. This will be enough to prove the proposition: if we put $f=0$ and $a=1$ in \eqref{eq:auxiliary_equation_existence_proof} we get \begin{eqnarray*} \frac{d}{dt}\lambda_t &=& \frac{1}{2} \int_D \varphi(\omega)\mathbb{1}_{\omega\notin B}K(\omega_1, \omega_2, \omega_3) \mu_t(d\omega_1) \mu_t(d\omega_2) \mu_t(d\omega_3) \\ &&\quad+ (\lambda_t^2 + 2\lambda_t\langle \varphi, \mu_t \rangle) \int^\infty_0 \varphi(\omega)^2\mu_t(d\omega). \end{eqnarray*} So, since $\mu_t \geq 0$ and $\lambda_0\geq 0$, we deduce that $\lambda_t\geq 0$ for all $t\in [0,T]$.
Next, we put $f=\varphi$ and $a=1$ to see that \begin{eqnarray} \label{eq:expression_equal_0} \frac{d}{dt}\left(\langle \varphi, \mu_t \rangle +\lambda_t\right) &=& \frac{1}{2}\int_D (\varphi(\omega)+\varphi(\omega_3)-\varphi(\omega_1)-\varphi(\omega_2))\\ \nonumber &&\qquad\quad \times K(\omega_1, \omega_2, \omega_3)\mu_t(d\omega_1)\mu_t(d\omega_2)\mu_t(d\omega_3) =0, \end{eqnarray} where the integral vanishes because $\varphi(\omega)=\omega+1$ and $\omega+\omega_3=\omega_1+\omega_2$ on $D$. Therefore, $$\|\mu_T\|+ |\lambda_T| \leq \langle \varphi, \mu_T \rangle + \lambda_T = \langle \varphi, \mu_0 \rangle + \lambda_0 \leq 1.$$ We can now start again from $(\mu_T, \lambda_T)$ at time $T$ to extend the solution to $[0,2T]$, and so on, to prove the proposition. \medskip We use the following norm on $\mathcal{M}_B \times \mathbb{R}$: $$\|(\mu, \lambda)\| = \|\mu\| + |\lambda|.$$ Note the following estimates: there is a constant $C= C(\varphi, B)<\infty$ such that for all $\mu, \mu'\in \mathcal{M}_B$ and all $\lambda, \lambda'\in \mathbb{R}$ \begin{eqnarray}\label{eq:estimate1_auxiliary_process} \|L^B(\mu, \lambda)\| &\leq & C\|(\mu, \lambda)\|^3\\ \|L^B(\mu, \lambda) - L^B(\mu',\lambda') \| &\leq& C \bigg( \|\mu-\mu'\| \left( \|\mu\|^2 +\|\mu\|\|\mu'\|+\|\mu'\|^2\right) \label{eq:estimate2_auxiliary_process}\\ &&+(|\lambda|+|\lambda'|) |\lambda-\lambda'| \|\mu \|+ |\lambda'|^2\|\mu-\mu'\| \nonumber\\ &&\quad + |\lambda-\lambda'|\|\mu\|^2 + |\lambda'|\left(\|\mu\| \|\mu-\mu'\| + \|\mu'\| \|\mu'-\mu\|\right) \bigg). \nonumber \end{eqnarray} These estimates hold because we are working on a bounded set $B$. We turn to the iterative scheme.
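The iterative scheme will produce the a priori bound $f_n(t)\leq (1-2Ct)^{-1/2}$; this can be sanity-checked numerically by iterating the extremal case $f_{n+1}(t)= 1+C\int_0^t f_n(s)^3\,ds$ with left Riemann sums, which underestimate the increasing integrand, so the discrete iterates stay below the barrier. The value $C=1$ and the grid are arbitrary choices; this is a sketch, not part of the proof.

```python
import numpy as np

C = 1.0                                  # arbitrary constant for the demo
h = 1e-4
t = np.arange(0.0, 0.2 + h, h)           # stay inside t < (2C)^{-1}
barrier = (1.0 - 2.0 * C * t) ** -0.5    # exact solution of f' = C f^3, f(0)=1

f = np.ones_like(t)                      # f_0(t) = 1
for _ in range(20):
    # left Riemann sum of f^3 (underestimates an increasing integrand)
    integral = np.concatenate(([0.0], np.cumsum(f[:-1] ** 3) * h))
    f = 1.0 + C * integral
    assert np.all(f <= barrier + 1e-12)  # every iterate stays below the barrier

print(float(f[-1]), float(barrier[-1]))
```

After a few iterations the iterates hug the barrier, which is exactly the blow-up profile that limits the local existence time $T=(4C)^{-1}$.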
Set $(\mu^0_t, \lambda^0_t) = (\mu_0, \lambda_0)$ for all $t$ and define inductively a sequence of continuous maps $$t\mapsto (\mu^n_t, \lambda^n_t):\, [0,\infty)\rightarrow \mathcal{M}_B\times \mathbb{R}$$ by $$(\mu_t^{n+1},\lambda^{n+1}_t)=(\mu_0, \lambda_0) + \int^t_0 L^B(\mu_s^{n},\lambda_s^{n}) \, ds.$$ Set $$f_n(t) = \|(\mu^n_t, \lambda^n_t)\|;$$ then $f_0(t)=f_n(0) = \|(\mu_0, \lambda_0) \|\leq 1$ and by the estimate \eqref{eq:estimate1_auxiliary_process} we have that $$f_{n+1}(t) \leq 1+ C\int^t_0 f_n(s)^3\, ds.$$ Hence $$f_n(t) \leq (1-2Ct)^{-1/2} \quad\mbox{for } t < (2C)^{-1}.$$ This last assertion is checked by induction: suppose that it holds for $n$; then $$f_{n+1}(t) \leq 1+ C\int^t_0 (1-2Cs)^{-3/2}\, ds = 1+ \left[(1-2Cs)^{-1/2}\right]_{s=0}^{s=t} = (1-2Ct)^{-1/2}.$$ Therefore, setting $T= (4C)^{-1}$, for all $n$ we have \begin{equation} \label{eq:estimate3_auxiliary_function} \|(\mu^n_t, \lambda^n_t) \|\leq \sqrt{2}\qquad t\leq T. \end{equation} Next set $g_0(t) = f_0(t)$ and for $n\geq 1$ $$g_n(t) = \|(\mu^n_t, \lambda^n_t) - (\mu^{n-1}_t, \lambda^{n-1}_t)\|.$$ By estimates \eqref{eq:estimate2_auxiliary_process} and \eqref{eq:estimate3_auxiliary_function}, there is a constant $C=C(B, \varphi) <\infty$ such that $$g_{n+1}(t) \leq C\int^t_0 g_n(s) \, ds \quad t\leq T.$$ Hence, by the usual arguments (Gronwall's lemma, Cauchy sequences), $(\mu^n_t, \lambda^n_t)$ converges in $\mathcal{M}_B\times \mathbb{R}$, uniformly in $t\leq T$, to the desired local solution, which is also unique. Moreover, for some constant $C<\infty$ depending only on $\varphi$ and $B$ we have $$\|(\mu_t, \lambda_t)\|\leq C \qquad t \leq T.$$ \medskip Finally, we are left to check that $\mu_t\geq 0$. For this, we need the following result: \begin{proposition} \label{prop:auxiliary_prop} Let $$(t,\omega) \mapsto f_t(\omega):[0, T]\times B \rightarrow \mathbb{R}$$ be a bounded measurable function, having a bounded partial derivative $\partial f/ \partial t$.
Then, for all $t\leq T$, $$ \frac{d}{dt}\langle f_t, \mu_t \rangle = \langle \partial f/\partial t, \mu_t \rangle + \langle (f_t, 0), L^B(\mu_t, \lambda_t) \rangle.$$ \end{proposition} The proof is a straightforward adaptation of \cite[Proposition 2.3]{norris1999smoluchowski} (with a different $L^B$). For $t\leq T$, set $$\theta_t(\omega_1) = \exp\int^t_0 \left( \int_{\{(\omega_2,\omega_3)\in\mathbb{R}^2_+ :\, \omega_1 + \omega_2\geq \omega_3\}} K(\omega_1, \omega_2, \omega_3) \mu_s(d\omega_2) \mu_s(d\omega_3) + \left( \lambda^2_s + 2\lambda_s \langle \varphi, \mu_s\rangle \right)\varphi(\omega_1) \right) ds$$ and define $G_t : \mathcal{M}_B \rightarrow \mathcal{M}_B$ by \begin{eqnarray*} \langle f, G_t(\mu) \rangle &=& \frac{1}{2} \int_D \left( (f\theta_t) (\omega) \mathbb{1}_{\omega\in B} + (f\theta_t)(\omega_3) \right)\\ && \qquad \times K(\omega_1, \omega_2, \omega_3) \theta_t(\omega_1)^{-1} \theta_t(\omega_2)^{-1}\theta_t(\omega_3)^{-1}\\ &&\qquad \times \mu(d\omega_1) \mu(d\omega_2)\mu(d\omega_3). \end{eqnarray*} Note that $G_t(\mu) \geq 0$ whenever $\mu\geq 0$, and for some $C=C(\varphi, B)<\infty$ we have \begin{eqnarray} \|G_t(\mu)\|&\leq& C\|\mu\|^3\\ \|G_t(\mu)-G_t(\mu')\|&\leq& C \|\mu-\mu'\|\left( \|\mu\|^2 + \|\mu'\|\|\mu\|+ \|\mu'\|^2\right). \end{eqnarray} Set $\tilde \mu_t = \theta_t \mu_t$. By Proposition \ref{prop:auxiliary_prop}, for all bounded measurable functions $f$ we have $$\frac{d}{dt}\langle f, \tilde \mu_t \rangle = \langle f \frac{\partial \theta}{\partial t}, \mu_t \rangle + \langle (f\theta_t, 0), L^B(\mu_t, \lambda_t) \rangle,$$ so, using the symmetry of $L^B$ in $\omega_1$ and $\omega_2$, we get \begin{eqnarray} \frac{d}{dt}\langle f, \tilde \mu_t \rangle &=& \langle f, G_t(\tilde \mu_t)\rangle. \end{eqnarray} Thus, the function $\theta_t$ is simply designed as an integrating factor, which removes the negative terms appearing in $L^B$.
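The integrating-factor mechanism can be seen in a scalar caricature (the ODE below is an illustrative choice, not $L^B$ itself): for $y'=y^2-ay$ with $a>0$ the right-hand side has a negative part, but multiplying by $\theta_t=e^{at}$ gives $(\theta_t y)'=\theta_t y^2\geq 0$, so every Picard iterate of $z=\theta_t y$ is manifestly nonnegative, and hence so is $y$.

```python
import numpy as np

# Scalar caricature of the integrating-factor trick (illustrative choices):
# y' = y^2 - a*y can be negative, but theta_t = exp(a*t) turns the equation
# into z' = theta*(z/theta)^2 = z^2/theta >= 0 for z = theta*y, so every
# Picard iterate of z is nonnegative by inspection.
a, y0, h = 3.0, 0.1, 1e-3
t = np.arange(0.0, 2.0 + h, h)
theta = np.exp(a * t)

z = np.full_like(t, y0)                      # z^0 = y(0)
for _ in range(50):
    integrand = z ** 2 / theta               # nonnegative integrand
    z = y0 + np.concatenate(([0.0], np.cumsum(integrand[:-1]) * h))

y = z / theta
assert np.all(z >= 0) and np.all(y >= 0)     # positivity survives the limit
print(float(y[-1]))
```

Here $y$ decays (the loss term dominates) yet never crosses zero, which is exactly the role $\theta_t$ plays for $\mu_t$ above.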
Define inductively a new sequence of measures $\tilde \mu^n_t$ by setting $\tilde \mu^0_t =\mu_0$ and, for $n\geq 0$, $$\tilde \mu^{n+1}_t = \mu_0 + \int^t_0 G_s(\tilde \mu^n_s) \, ds.$$ By an argument similar to the one used for the original iterative scheme, the proof is completed: we can show, first, possibly for a smaller value of $T>0$ but with the same dependence, that $\|\tilde \mu^n_t\|$ is bounded, uniformly in $n$, for $t\leq T$, and then that $\|\tilde \mu^n_t - \tilde \mu_t\|\rightarrow 0$ as $n\rightarrow \infty$. Since $\tilde \mu^n_t \geq 0$ for all $n$, we deduce $\tilde \mu_t\geq 0$ and hence $\mu_t \geq 0$ for all $t\leq T$. \end{proof} We fix now $\mu_0 \in \mathcal{M}$ with $\mu_0\geq 0$ and $\langle \varphi, \mu_0\rangle <\infty$. For each bounded set $B\subset [0,\infty)$, let \begin{equation}\label{eq:initial_data_auxiliary_process} \mu^B_0 =\mathbb{1}_{B}\mu_0, \qquad\lambda^B_0=\int_{[0,\infty)\backslash B}\varphi(\omega)\mu_0(d\omega) \end{equation} and denote by $(\mu^B_t, \lambda^B_t)_{t\geq 0}$ the unique solution to \eqref{eq:auxiliary_equation_existence_proof}, starting from $(\mu^B_0, \lambda^B_0)$, provided by Proposition \ref{prop:existence_auxiliary_process}. We have that for $B\subset B'$, $$\mu^B_t \leq \mu^{B'}_t, \qquad \langle \varphi, \mu^B_t \rangle + \lambda^B_t = \langle \varphi, \mu^{B'}_t \rangle + \lambda^{B'}_t.$$ The inequality will be proven in Proposition \ref{prop:bounds_on_auxiliary_process1}, and the equality is a consequence of expression \eqref{eq:expression_equal_0} and the fact that $$\langle \varphi, \mu^B_0\rangle + \lambda^B_0= \langle \varphi, \mu^{B'}_0 \rangle + \lambda^{B'}_0$$ by expression \eqref{eq:initial_data_auxiliary_process}.
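For a discrete initial measure the splitting \eqref{eq:initial_data_auxiliary_process} is easy to test numerically: the $\varphi$-mass that $\mu^B_0$ loses by truncation is exactly the scalar $\lambda^B_0$, so $\langle\varphi,\mu^B_0\rangle+\lambda^B_0$ does not depend on $B$. The atoms and weights below are arbitrary choices for the sketch.

```python
import numpy as np

phi = lambda w: w + 1.0                        # phi(omega) = omega + 1

# An arbitrary discrete initial measure mu_0 = sum_i m_i * delta_{w_i}.
w = np.array([0.3, 1.2, 2.5, 4.0, 7.5])
m = np.array([0.2, 0.1, 0.4, 0.2, 0.1])
total = np.sum(phi(w) * m)                     # <phi, mu_0>

for b in [1.0, 3.0, 5.0, 10.0]:                # B = [0, b)
    inside = w < b
    mass_B = np.sum(phi(w[inside]) * m[inside])    # <phi, mu_0^B>
    lam_B = np.sum(phi(w[~inside]) * m[~inside])   # lambda_0^B
    assert np.isclose(mass_B + lam_B, total)       # conserved across B
```

As $B$ grows, mass migrates from the scalar $\lambda^B_0$ back into the measure, which is the monotonicity used in the limit $B\uparrow[0,\infty)$.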
Moreover, it holds that for any local solution $(\nu_t)_{t<T}$ of the 4-wave kinetic equation \eqref{eq:isotropic_4_wave_equation_weak}, for all $t<T$, \begin{equation} \label{eq:bounds_compare_sol_aux_sol} \mu^B_t \leq \nu_t, \qquad \langle \varphi, \mu^B_t\rangle + \lambda^B_t \geq \langle \varphi, \nu_t\rangle. \end{equation} We prove the first inequality in Proposition \ref{prop:bounds_on_auxiliary_process2}. The second inequality is a consequence of \begin{equation} \label{eq:bounds_auxiliary} \langle \varphi, \nu_t \rangle \leq \langle \varphi,\mu_0 \rangle = \langle \varphi, \mu^B_0 \rangle + \lambda^B_0 = \langle \varphi, \mu^B_t \rangle + \lambda^B_t. \end{equation} We now show how these facts lead to the proof of Theorem \ref{eq:existence_solutions_kinetic_4wave}. \medskip Set $\mu_t = \lim_{B \uparrow [0,\infty)}\mu^B_t$ and $\lambda_t = \lim_{B\uparrow[0,\infty)} \lambda^B_t$; the limits exist since $\mu^B_t$ is nondecreasing in $B$ while $\langle \varphi, \mu^B_t \rangle + \lambda^B_t$ does not depend on $B$. Note that $$\langle \varphi, \mu_t\rangle= \lim_{B\uparrow[0,\infty)}\langle \varphi, \mu^B_t \rangle\leq \langle \varphi, \mu_0\rangle <\infty.$$ So, by dominated convergence, using the upper bound \eqref{eq:bounds_on_K} on $K$, for all bounded measurable functions $f$, $$\int_D f(\omega) \mathbb{1}_{\omega\notin B} K(\omega_1, \omega_2, \omega_3) \mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\rightarrow 0,$$ and we can pass to the limit in \eqref{eq:auxiliary_equation_existence_proof} to obtain \begin{eqnarray*} \frac{d}{dt}\langle f, \mu_t \rangle &=& \frac{1}{2}\int_D (f(\omega)+f(\omega_3)-f(\omega_1)-f(\omega_2))\\ && \qquad\quad \times K(\omega_1, \omega_2, \omega_3)\mu_t(d\omega_1)\mu_t(d\omega_2)\mu_t(d\omega_3)\\ && -(\lambda^2_t+2\lambda_t\langle \varphi,\mu_t\rangle )\langle f\varphi,\mu_t\rangle .
\end{eqnarray*} For any local solution $(\nu_t)_{t<T}$, for all $t<T$, $$\mu_t\leq \nu_t, \qquad \langle \varphi, \mu_t \rangle + \lambda_t \geq \langle \varphi, \nu_t \rangle.$$ Hence, if $\lambda_t =0$ for all $t<T$, then $(\mu_t)_{t<T}$ is a local solution and, moreover, is the only local solution on $[0, T)$. If $(\nu_t)_{t<T}$ is a strong local solution, then $$\int^t_0\langle \varphi^2, \mu_s\rangle\, ds \leq \int^t_0\langle \varphi^2, \nu_s\rangle\, ds <\infty$$ for all $t<T$; this allows us to pass to the limit in \eqref{eq:auxiliary_equation_existence_proof} to obtain \begin{equation} \label{eq:lambda_to_be_zero} \frac{d}{dt}\lambda_t = (\lambda_t^2+ 2\lambda_t\langle \varphi, \mu_t\rangle) \langle\varphi^2, \mu_t\rangle. \end{equation} Since $\lambda_0 = \lim_{B\uparrow[0,\infty)}\lambda^B_0 = 0$ and the coefficient $(\lambda_t+ 2\langle \varphi, \mu_t\rangle) \langle\varphi^2, \mu_t\rangle$ is integrable on $[0,t]$, Gronwall's lemma applied to \eqref{eq:lambda_to_be_zero} gives $\lambda_t=0$ for all $t<T$. It follows that $(\nu_t)_{t<T}$ is the only local solution on $[0, T)$. For any local solution $(\nu_t)_{t<T}$, \begin{eqnarray}\label{eq:strong_solutions_is_conservative} \int^\infty_0 \omega \mathbb{1}_{\omega\leq n} \nu_t (d\omega) &=& \int^\infty_0 \omega \mathbb{1}_{\omega \leq n} \nu_0(d\omega)\\\nonumber &+& \frac{1}{2} \int^t_0 \int_D \left\{\omega \mathbb{1}_{\omega \leq n}+\omega_3\mathbb{1}_{\omega_3 \leq n}-\omega_1\mathbb{1}_{\omega_1 \leq n}-\omega_2\mathbb{1}_{\omega_2 \leq n} \right\}\\\nonumber &&\qquad \times K(\omega_1, \omega_2, \omega_3) \nu_s(d\omega_1) \nu_s(d\omega_2) \nu_s(d\omega_3)\, ds . \end{eqnarray} Hence, if $(\nu_t)_{t<T}$ is strong we have that $$\int^t_0 \langle \omega^2, \nu_s\rangle \, ds \leq \int^t_0 \langle \varphi^2, \nu_s\rangle \, ds <\infty.$$ Then, by dominated convergence, the second term on the right tends to $0$ as $n\rightarrow \infty$, showing that $(\nu_t)_{t<T}$ is conservative. Suppose now that $\langle \varphi^2, \mu_0\rangle < \infty$ and set $T= \langle \varphi^2, \mu_0 \rangle^{-1}\langle \varphi, \mu_0\rangle^{-1}$.
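The estimate for $\frac{d}{dt}\langle\varphi^2,\mu^B_t\rangle$ that follows rests on an algebraic identity for $\varphi(\omega)=\omega+1$: with $\omega=\omega_1+\omega_2-\omega_3$ and $\tilde\omega_i=\omega_i+1$, $$\varphi(\omega)^2+\varphi(\omega_3)^2-\varphi(\omega_1)^2-\varphi(\omega_2)^2 = 2\tilde\omega_1\tilde\omega_2+2\tilde\omega_3(\tilde\omega_3-\tilde\omega_1-\tilde\omega_2)\leq 2\tilde\omega_1\tilde\omega_2 \quad\mbox{on } D.$$ A quick randomized sanity check of both the identity and the inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2, w3 = rng.uniform(0.0, 10.0, size=(3, 100_000))
w = w1 + w2 - w3                               # output frequency

lhs = (w + 1) ** 2 + (w3 + 1) ** 2 - (w1 + 1) ** 2 - (w2 + 1) ** 2
t1, t2, t3 = w1 + 1, w2 + 1, w3 + 1
rhs = 2 * t1 * t2 + 2 * t3 * (t3 - t1 - t2)    # exact algebraic identity
assert np.allclose(lhs, rhs)

mask = w >= 0                                  # restriction to the domain D
assert np.all(lhs[mask] <= 2 * t1[mask] * t2[mask] + 1e-9)
```

On $D$ one has $\tilde\omega_3-\tilde\omega_1-\tilde\omega_2=-(\omega+1)\leq 0$, which is why the second term can be discarded.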
For any bounded set $B\subset [0,\infty)$, we have \begin{eqnarray} \nonumber \frac{d}{dt} \langle \varphi^2, \mu^B_t \rangle &\leq& \frac{1}{2} \int_D \left\{ \varphi(\omega)^2+\varphi(\omega_3)^2-\varphi(\omega_1)^2-\varphi(\omega_2)^2 \right\}\\\nonumber &&\qquad \times K(\omega_1, \omega_2, \omega_3) \mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\\ \label{eq:estimate_time_strong_sol} &\leq & \int_D \varphi(\omega_1)^2\varphi(\omega_2)^2\varphi(\omega_3) \mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\\ \nonumber &\leq &\langle \varphi, \mu^B_t \rangle \langle\varphi^2, \mu^B_t \rangle^2\\\nonumber & \leq &\langle \varphi, \mu_0 \rangle \langle\varphi^2, \mu^B_t \rangle^2, \end{eqnarray} where we used that \begin{eqnarray*} (\omega+1)^2+ (\omega_3+1)^2 - (\omega_1+1)^2-(\omega_2+1)^2 &=& (\tilde \omega_1 + \tilde \omega_2 - \tilde \omega_3)^2 + \tilde{\omega}_3^2 - \tilde{\omega}_1^2-\tilde{\omega}_2^2\\ &=& 2 \tilde{\omega}_1\tilde{\omega}_2+ 2\tilde{\omega}_3 \left( \tilde{\omega}_3-\tilde{\omega}_1-\tilde{\omega}_2\right)\\ &\leq & 2 \tilde{\omega}_1\tilde{\omega}_2, \end{eqnarray*} with $\tilde \omega_i = \omega_i +1$; the last inequality holds because $\tilde{\omega}_3-\tilde{\omega}_1-\tilde{\omega}_2 = -(\omega+1)\leq 0$ on $D$, where $\omega\geq 0$. Solving this differential inequality and letting $B\uparrow[0,\infty)$, we get for $t<T$ $$\langle \varphi^2, \mu_t \rangle \leq (S- \langle \varphi, \mu_0 \rangle t)^{-1},$$ where $S = \langle \varphi^2, \mu_0\rangle^{-1}$. Hence \eqref{eq:lambda_to_be_zero} holds and forces $\lambda_t =0$ for $t<T$ as above, so $(\mu_t)_{t<T}$ is a strong local solution. \begin{proposition} \label{prop:bounds_on_auxiliary_process1} Suppose $B\subset B'$ and that $(\mu^B_t, \lambda^B_t)_{t\geq0}$, $(\mu^{B'}_t, \lambda^{B'}_t)_{t\geq 0}$ are the solutions of \eqref{eq:auxiliary_equation_existence_proof} for these two sets, with initial data given by \eqref{eq:initial_data_auxiliary_process}. Then for all $t\geq 0$, $\mu^B_t \leq \mu^{B'}_t$. \end{proposition} The proof is obtained by adapting the one in \cite[Proposition 2.4]{norris1999smoluchowski}.
\begin{proof} Set $$\theta_t(\omega_1) = \exp\int^t_0 \left( \int_{\mathbb{R}^2_+ \cap (\omega_1+\omega_2\geq\omega_3)} K(\omega_1, \omega_2, \omega_3) \mu^B_s(d\omega_2)\mu^B_s(d\omega_3) + ((\lambda^B_s)^2+2\lambda^B_s\langle \varphi, \mu^B_s\rangle )\varphi(\omega_1) \right) ds.$$ Denote by $\pi_t = \theta_t(\mu^{B'}_t-\mu^B_t)$. Note that $\pi_0\geq 0$. By Proposition \ref{prop:auxiliary_prop}, for any bounded measurable function $f$, \begin{eqnarray*} \frac{d}{dt}\langle f, \pi_t\rangle &=& \langle f\frac{\partial\theta_t}{\partial t}, \mu^{B'}_t-\mu^B_t\rangle\\ && + \langle (f\theta_t, 0), L^{B'}(\mu^{B'}_t, \lambda^{B'}_t)- L^B(\mu^B_t, \lambda^B_t)\rangle\\ &=& \int_D (f\theta_t) (\omega_1) K(\omega_1, \omega_2, \omega_3)\left( \mu^{B'}_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\right)\\ &+& \left( (\lambda^B_t)^2+2\lambda^B_t\langle \varphi, \mu^B_t\rangle \right) \int^\infty_0 (f\theta_t)(\omega_1) \varphi(\omega_1)(\mu^{B'}_t(d\omega_1)-\mu^B_t(d\omega_1))\\ &+&\frac{1}{2} \int_D (f\theta_t)(\omega) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \left( \mathbb{1}_{\omega\in B'}\mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mathbb{1}_{\omega\in B}\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &+ & \frac{1}{2}\int_D(f\theta_t)(\omega_3) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \left( \mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &-& \int_D (f\theta_t)(\omega_1) K(\omega_1, \omega_2, \omega_3)\left( \mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &-&\left( (\lambda^{B'}_t)^2+ 2\lambda^{B'}_t \langle \varphi,\mu^{B'}_t\rangle \right) \int^\infty_0 (f\theta_t)(\omega_1) \varphi(\omega_1) \mu^{B'}_t(d\omega_1)\\ &+&\left( (\lambda^B_t)^2+ 2\lambda^B_t \langle \varphi,\mu^B_t\rangle \right) 
\int^\infty_0 (f\theta_t)(\omega_1) \varphi(\omega_1) \mu^B_t(d\omega_1)\\ &=& I\\ &+& \int_D (f\theta_t) (\omega_1) K(\omega_1, \omega_2, \omega_3) \left( \mu^{B'}_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)-\mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)\right)\\ &+& \left( \left( (\lambda^B_t)^2+ 2\lambda^B_t \langle \varphi,\mu^B_t\rangle \right)-\left( (\lambda^{B'}_t)^2+ 2\lambda^{B'}_t \langle \varphi,\mu^{B'}_t\rangle \right) \right) \langle f\theta_t\varphi, \mu^{B'}_t\rangle \end{eqnarray*} where \begin{eqnarray*} I&:=&\frac{1}{2} \int_D (f\theta_t)(\omega) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \left( \mathbb{1}_{\omega\in B'}\mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mathbb{1}_{\omega\in B}\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &+ & \frac{1}{2}\int_D(f\theta_t)(\omega_3) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \left( \mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right). 
\end{eqnarray*} Now, squaring the equality $$\langle \varphi, \mu^B_t \rangle + \lambda^B_t = \langle \varphi, \mu^{B'}_t \rangle + \lambda^{B'}_t,$$ we have that $$\left( (\lambda^B_t)^2+ 2\lambda^B_t \langle \varphi,\mu^B_t\rangle \right)- (\lambda^{B'}_t)^2-2\lambda^{B'}_t \langle \varphi,\mu^{B'}_t\rangle = \langle \varphi, \mu^{B'}_t\rangle^2-\langle\varphi, \mu^B_t\rangle^2$$ and therefore \begin{eqnarray*} \frac{d}{dt} \langle f, \pi_t\rangle &=& I \\ &+&\int_{\mathbb{R}^3_+\backslash D} (f\theta_t) (\omega_1) \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)\\ && \quad\left( \mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mu^{B'}_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\right)\\ &+&\int_{D} (f\theta_t) (\omega_1) (\varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)-K(\omega_1, \omega_2, \omega_3))\\ && \quad\left( \mu^{B'}_t(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3)-\mu^{B'}_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\right). \end{eqnarray*} Therefore, $\pi_t$ satisfies an equation of the form $$\frac{d}{dt} \pi_t = H_t(\pi_t)$$ where $H_t: \mathcal{M}_{B'} \rightarrow \mathcal{M}_{B'}$ satisfies $H_t(\pi)\geq 0$ whenever $\pi\geq 0$, and where we have the estimate, for $t\leq 1$, $$\|H_t(\pi)\|\leq C\|\pi\|$$ for some constant $C<\infty$ depending only on $\varphi$ and $B'$. Therefore, we can apply the same sort of argument that we used for nonnegativity to see that $\pi_t\geq 0$ for all $t\leq 1$, and then for all $t<\infty$.
Explicitly, $H_t$ is given by \begin{eqnarray*} \langle f, H_t(\pi)\rangle &=& \frac{1}{2} \int_D (f\theta_t)(\omega) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \Big(\mathbb{1}_{\omega\in B'}\theta_t^{-1}(\omega_1)\pi(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3) \\ && \qquad+\mathbb{1}_{\omega\in B'} \mu^B_t(d\omega_1)\theta_t^{-1}(\omega_2)\pi(d\omega_2)\mu^{B'}_t(d\omega_3) \\ &&\qquad+ \mathbb{1}_{\omega\in B}\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\theta_t^{-1}(\omega_3) \pi(d\omega_3) \Big)\\ &+ & \frac{1}{2}\int_D(f\theta_t)(\omega_3) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \Big( \theta^{-1}_t(\omega_1) \pi(d\omega_1)\mu^{B'}_t(d\omega_2)\mu^{B'}_t(d\omega_3) + \theta^{-1}_t(\omega_2)\pi(d\omega_2)\mu^B_t(d\omega_1)\mu^{B'}_t(d\omega_3)\\ &&\qquad +\theta^{-1}_t(\omega_3)\pi(d\omega_3)\mu^B_t(d\omega_1)\mu^B_t(d\omega_2) \Big)\\ &+&\int_{\mathbb{R}^3_+\backslash D} (f\theta_t) (\omega_1) \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)\\ && \quad\times\left( \mu^{B'}_t(d\omega_1)\theta^{-1}_t(\omega_2)\pi(d\omega_2)\mu^{B'}_t(d\omega_3)+\mu^{B'}_t(d\omega_1)\mu^B_t(d\omega_2)\theta^{-1}_t(\omega_3)\pi(d\omega_3)\right)\\ &+&\int_{D} (f\theta_t) (\omega_1) (\varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)-K(\omega_1, \omega_2, \omega_3))\\ && \quad\times\left( \mu^{B'}_t(d\omega_1)\theta^{-1}_t(\omega_2)\pi(d\omega_2)\mu^{B'}_t(d\omega_3)+\mu^{B'}_t(d\omega_1)\mu^B_t(d\omega_2)\theta^{-1}_t(\omega_3)\pi(d\omega_3)\right), \end{eqnarray*} where we have used that $$\mathbb{1}_{\omega\in B'}\mu^B_t(d\omega_1) \mu^B_t(d\omega_2)\mu^{B'}_t(d\omega_3)= \mathbb{1}_{\omega\in B}\mu^B_t(d\omega_1)\mu^B_t (d\omega_2)\mu^{B'}_t(d\omega_3),$$ given that, writing $B=[0,b)$, if $\omega_1, \omega_2<b$ and $\omega_3>b$ then $\omega=\omega_1+\omega_2-\omega_3<b$. \end{proof} \begin{proposition} \label{prop:bounds_on_auxiliary_process2} Suppose that $(\nu_t)_{t<T}$ is a local solution of the 4-wave kinetic equation \eqref{eq:isotropic_4_wave_equation_weak}, starting from $\mu_0$.
Then, for all bounded sets $B\subset [0,\infty)$ and all $t<T$, $\mu^B_t\leq\nu_t$. \end{proposition} The proof is obtained by adapting the one in \cite[Proposition 2.5]{norris1999smoluchowski}. \begin{proof} Set $\theta_t$ as in the previous Proposition and denote $\nu^B_t = \mathbb{1}_B \nu_t$ and $\pi_t = \theta_t(\nu^B_t-\mu^B_t).$ By a modification of Proposition \ref{prop:auxiliary_prop}, we have, for all bounded measurable functions $f$, $$\frac{d}{dt}\langle f, \pi_t \rangle = \langle f \partial \theta/ \partial t, \nu^B_t- \mu^B_t\rangle + \langle f\theta_t\mathbb{1}_B, Q(\nu_t)\rangle - \langle (f\theta_t, 0), L^B(\mu^B_t, \lambda^B_t)\rangle.$$ Now, proceeding as before we have that \begin{eqnarray*} \frac{d}{dt}\langle f, \pi_t \rangle &=& \int_D (f\theta_t) (\omega_1) K(\omega_1, \omega_2, \omega_3) (\nu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3))\\ &+& \left( (\lambda^B_t)^2+2\lambda^B_t\langle \varphi, \mu^B_t\rangle \right) \int^\infty_0 (f\theta_t)(\omega_1) \varphi(\omega_1)\nu^B_t(d\omega_1)\\ &+&\frac{1}{2} \int_D (f\theta_t)(\omega) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \mathbb{1}_{\omega\in B}\left(\nu_t(d\omega_1)\nu_t(d\omega_2)\nu_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &+ & \frac{1}{2}\int_D(f\theta_t)(\omega_3) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \left( \nu_t(d\omega_1)\nu_t(d\omega_2)\nu^B_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &-& \int_D (f\theta_t)(\omega_1) K(\omega_1, \omega_2, \omega_3)\left( \nu^B_t(d\omega_1)\nu_t(d\omega_2)\nu_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &=&\chi_t \int^\infty_0 (f\theta_t)(\omega_1) \varphi(\omega_1)\nu^B_t(d\omega_1)\\ &+&\frac{1}{2} \int_D (f\theta_t)(\omega) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \mathbb{1}_{\omega\in 
B}\left(\nu_t(d\omega_1)\nu_t(d\omega_2)\nu_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &+ & \frac{1}{2}\int_D(f\theta_t)(\omega_3) K(\omega_1, \omega_2, \omega_3) \\ && \quad \times \left( \nu_t(d\omega_1)\nu_t(d\omega_2)\nu^B_t(d\omega_3)-\mu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3) \right)\\ &+& \int_{\mathbb{R}^3_+ \backslash D} (f\theta_t)(\omega_1) \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3) \left(\nu^B_t(d\omega_1)\nu_t(d\omega_2)\nu_t(d\omega_3)- \nu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\right)\\ &+& \int_{D}(f\theta_t)(\omega_1) (\varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3)-K(\omega_1, \omega_2, \omega_3)) \\ && \times\qquad \left(\nu^B_t(d\omega_1)\nu_t(d\omega_2)\nu_t(d\omega_3)- \nu^B_t(d\omega_1)\mu^B_t(d\omega_2)\mu^B_t(d\omega_3)\right) \end{eqnarray*} where $\chi_t = (\lambda^B_t)^2+ 2\lambda^B_t \langle \varphi, \mu^B_t\rangle + \langle \varphi, \mu^B_t \rangle^2- \langle \varphi, \nu_t \rangle^2 \geq 0$, the nonnegativity being a consequence of \eqref{eq:bounds_auxiliary}. Therefore, analogously to the previous Proposition \ref{prop:bounds_on_auxiliary_process1}, we have that $$\frac{d}{dt}\pi_t = \tilde H_t(\pi_t), $$ where $\tilde{H}_t: \mathcal{M}_{B} \rightarrow \mathcal{M}_{B}$ is linear and $\tilde{H}_t(\pi)\geq 0$ whenever $\pi\geq 0$. Moreover, for $t\leq 1$, $$\|\tilde H_t(\pi) \|\leq C\|\pi\|$$ for some constant $C<\infty$ depending only on $\varphi$ and $B$. As before, this implies $\pi_t\geq 0$ for all $t$, and hence $\mu^B_t \leq \nu^B_t \leq \nu_t$. \end{proof} \section{Mean-field limit} \subsection{The instantaneous coagulation-fragmentation stochastic process} \label{sec:coagulation_fragmentation_stochastic_process} Define $$D= \{(\omega_1, \omega_2, \omega_3)\in \mathbb{R}_+^3 \, |\, \omega_1 + \omega_2 \geq\omega_3\}.$$ We consider $X^n_0$ a probability measure on $\mathbb{R}_+$ written as the empirical measure $$X^n_0 = \frac{1}{n}\sum_{i=1}^n \delta_{\omega_i}$$ for $\omega_1,\hdots, \omega_n\in \mathbb{R}_+$. $X^n_0$ represents a system of $n$ waves labelled by their frequencies $\omega_1,\hdots, \omega_n$.
We define a Markov process $(X^n_t)_{t\geq 0}$ of probability measures on $\mathbb{R}_+$. For each triple $(\omega_i,\omega_j,\omega_l)\in D$ of distinct particles, take an independent exponential random time $T_{ijl}$, $i<j$, with parameter \begin{equation} \label{eq:jump_rate_MC} \frac{1}{n^2}K(\omega_i, \omega_j, \omega_l). \end{equation} Set $T_{jil}=T_{ijl}$ and set $T= \min_{ijl} T_{ijl}$. Then set $$X^n_t = X^n_0 \qquad \mbox{ for } t<T$$ and $$X^n_T = X^n_0 + \frac{1}{n}(\delta_{\omega}+\delta_{\omega_l}-\delta_{\omega_i}-\delta_{\omega_j})$$ with $\omega= \omega_i+\omega_j-\omega_l$. Then begin the construction afresh from $X^n_T$. \medskip We call the process $(X^n_t)_{t\geq0}$ an instantaneous $n$-coagulation-fragmentation stochastic process. \begin{remark} Note that we should be careful not to pick the same particle twice, as one particle cannot interact with itself. If $\omega_i=\omega_j=\omega_l$, the Markov chain does not make a jump; the same happens when $\omega_i=\omega_l$ or $\omega_j=\omega_l$, so these cases are harmless. The case $\omega_i=\omega_j$, however, needs to be excluded. For that, we define $$\mu^{(1)}(A\times B\times C) = \mu(A)\mu(B)\mu(C) - \mu(A\cap B) \mu(C),$$ the measure counting triples of particles with distinct particles in the first and second positions. Also, define \begin{equation} \label{eq:counting_measure} \mu^{(n)}(A\times B\times C) = \mu(A)\mu(B)\mu(C) -n^{-1} \mu(A\cap B) \mu(C). \end{equation} Note that \begin{equation} \label{eq:rescaling_property_counting_measure4} n^3\mu^{(n)}= (n\mu)^{(1)}.
\end{equation} \end{remark} \paragraph{Generator of the Markov Chain:} For all $F\in C_b$: $$GF(X) =\frac{n}{2}\int_{D}\left[F(X^{\omega_1, \omega_2, \omega_3})-F(X) \right] K(\omega_1, \omega_2, \omega_3) X^{(n)}(d\omega_1, d\omega_2, d\omega_3), $$ where $$X^{\omega_1, \omega_2, \omega_3}= X + \frac{1}{n}\left(\delta_{\omega_3} + \delta_{\omega_1 + \omega_2-\omega_3} - \delta_{\omega_1} - \delta_{\omega_2} \right).$$ \medskip \paragraph{Interpretation of the stochastic process.} Three different particles, say $\omega_1$, $\omega_2$, $\omega_3$, interact at a random time given by the rate \eqref{eq:jump_rate_MC}. The outcome of the interaction is that $\omega_1$ and $\omega_2$ merge and then, in the presence of $\omega_3$, they split, creating a new particle $\omega_3$ and another one carrying the rest, $\omega= \omega_1+\omega_2-\omega_3$ (a coagulation-fragmentation phenomenon that takes place instantaneously). \paragraph{The martingale formulation.} Now, for each function $f\in C_b(\mathbb{R}_+)$ the Markov chain can be expressed as \begin{equation} \label{eq:martingale_formulation} \langle f, X^n_t\rangle = \langle f, X_0^n \rangle + M^{n,f}_t + \int^t_0 \langle f, Q^{(n)}(X^n_s)\rangle\, ds, \end{equation} where $(M^{n,f}_t)_{t\geq 0}$ is a martingale. Note that using \eqref{eq:rescaling_property_counting_measure4} we have that \begin{eqnarray*} &&\langle f, Q^{(n)}(\mu) \rangle\\ &=&\frac{1}{2}\int_{D}\frac{1}{n} (f(\omega_1+\omega_2-\omega_3)+f(\omega_3)-f(\omega_1) -f(\omega_2)) \frac{1}{n^2} K(\omega_1,\omega_2,\omega_3)(n\mu)^{(1)}(d\omega_1,d\omega_2,d\omega_3)\\ &=&\frac{1}{2}\int_{D} (f(\omega_1+\omega_2-\omega_3)+f(\omega_3)-f(\omega_1) -f(\omega_2)) K(\omega_1,\omega_2,\omega_3)\mu^{(n)}(d\omega_1, d\omega_2,d\omega_3). \end{eqnarray*} From this expression it is clear why we needed to rescale the collision frequency by $n^2$.
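For a bounded kernel the process just described is a Markov jump chain on a finite particle system and can be simulated directly. A minimal sketch (with the illustrative choice $K\equiv 1$ and arbitrary initial frequencies): each jump replaces $\omega_i,\omega_j$ by $\omega_i+\omega_j-\omega_l$ and a copy of $\omega_l$, so the number of particles and the total frequency $\sum_i\omega_i$ are conserved.

```python
import itertools
import numpy as np

def simulate(omega, steps, rng, K=lambda a, b, c: 1.0):
    """Instantaneous coagulation-fragmentation chain on n particles."""
    omega = omega.copy()
    n = len(omega)
    for _ in range(steps):
        triples, rates = [], []
        # ordered pairs i<j that merge; any distinct l triggers the split
        for i, j in itertools.combinations(range(n), 2):
            for l in range(n):
                if l in (i, j) or omega[i] + omega[j] < omega[l]:
                    continue                   # outside the domain D
                triples.append((i, j, l))
                rates.append(K(omega[i], omega[j], omega[l]) / n**2)
        rates = np.array(rates)
        if rates.sum() == 0:
            break
        # minimum of exponential clocks rings at a triple chosen with
        # probability proportional to its rate
        i, j, l = triples[rng.choice(len(triples), p=rates / rates.sum())]
        omega[i], omega[j] = omega[i] + omega[j] - omega[l], omega[l]
    return omega

rng = np.random.default_rng(0)
w0 = rng.exponential(1.0, size=15)
w1 = simulate(w0, steps=30, rng=rng)
assert np.isclose(w0.sum(), w1.sum())          # total frequency conserved
assert np.all(w1 >= 0)                         # jumps respect omega >= 0
```

The $O(n^3)$ rate enumeration per jump is only meant for small $n$; it mirrors the definition rather than aiming for efficiency.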
\subsection{First result on mean-field limit} We will start working in the simpler case where $K$ is bounded; the unbounded case will then follow as a `modification' of the bounded one. \subsubsection{Mean-field limit for bounded jump kernel} \paragraph{Uniqueness of solutions for bounded kernel} \begin{lemma} \label{lem:properties_trilinear_operator} The operator $Q$ given in \eqref{eq:isotropic_4_wave_equation_weak} is linear in each of its arguments and satisfies the symmetry $$\langle f, Q(\mu, \nu, \tau) \rangle = \langle f, Q(\nu, \mu, \tau) \rangle,$$ but in general \begin{eqnarray*} \langle f, Q(\mu, \nu, \tau) \rangle &\neq& \langle f, Q(\mu, \tau, \nu) \rangle\\ \langle f, Q(\mu, \nu, \tau) \rangle &\neq& \langle f, Q(\tau, \nu, \mu) \rangle. \end{eqnarray*} Moreover, \begin{equation} \label{eq:trilinearQ} Q(\mu,\mu, \mu)-Q(\nu, \nu,\nu) = Q(\mu+\nu, \mu-\nu, \mu)+ Q(\mu+\nu, \nu, \mu-\nu) +Q(\mu, \nu, \nu-\mu). \end{equation} \end{lemma} \begin{proof} The first part of the statement is immediate from the definition. The second part makes use of the symmetry property along with the linearity in each component: \begin{eqnarray*} Q(\mu, \mu, \mu) - Q(\nu, \nu, \nu) &=&Q(\mu, \mu, \mu) + Q(\nu, \mu,\mu) - Q(\mu, \nu, \mu) + Q(\mu, \nu, \nu) \\ &&- Q(\mu,\nu, \nu)- Q(\nu, \nu, \nu)\\ &=& Q(\mu+\nu, \mu, \mu) + Q(\mu, \nu, \nu-\mu)- Q(\mu+\nu, \nu, \nu)\\ &=& Q(\mu+\nu, \mu, \mu) - Q(\mu+\nu, \nu,\mu) + Q(\mu+\nu, \nu,\mu)- Q(\mu+\nu, \nu, \nu)\\ &&\quad+\, Q(\mu, \nu, \nu-\mu)\\ &=& Q(\mu+\nu, \mu-\nu, \mu) + Q(\mu+\nu, \nu, \mu-\nu) + Q(\mu, \nu, \nu-\mu) \end{eqnarray*} \end{proof} \begin{proposition}[Uniqueness of solutions] \label{prop:uniqueness_solutions_kinetic_wave} Suppose that the jump kernel in \eqref{eq:isotropic_4_wave_equation_weak} is bounded by $\Lambda$. Then for any given initial data, if there exists a solution of \eqref{eq:isotropic_4_wave_equation_weak}, then the solution is unique.
\end{proposition} \begin{proof} Suppose that we have $\mu_t, \nu_t\in \P(\mathbb{R}_+)$ solutions to \eqref{eq:isotropic_4_wave_equation_weak} with the same initial data. We will compare these solutions in the total variation norm: $$\|\mu_t-\nu_t\|_{TV} = \sup_{\|f\|_\infty =1} \langle f, \mu_t-\nu_t \rangle = \sup_{\|f\|_\infty =1} \int^t_0 \langle f, \dot \mu_s - \dot \nu_s \rangle\, ds.$$ Then by expression \eqref{eq:trilinearQ} we have that $$\dot \mu_s - \dot \nu_s = Q(\mu_s+\nu_s, \mu_s-\nu_s, \mu_s)+ Q(\mu_s+\nu_s, \nu_s, \mu_s-\nu_s) +Q(\mu_s, \nu_s, \nu_s-\mu_s).$$ Therefore, for any $f\in C_b(\mathbb{R}_+)$ such that $\|f\|_\infty=1$ it holds that $$|\langle f, \dot \mu_s- \dot\nu_s \rangle | \leq 24 \Lambda \|\mu_s -\nu_s \|_{TV}.$$ Finally, applying Gronwall's lemma to $$\|\mu_t-\nu_t\|_{TV} \leq 24 \Lambda \int^t_0 \|\mu_s - \nu_s\|_{TV}\, ds$$ we conclude that the two solutions must coincide. \end{proof} \begin{remark} Existence of solutions in this case can be proven directly by a classical iterative-scheme argument (as done previously for the unbounded case). \end{remark} The following theorem is an adaptation of part of \cite[Theorem 4.1]{norris1999smoluchowski}. Much more detail is provided here than in the original reference. To give the details, the author was much guided by an unpublished report \cite{martinccareport} that studied the homogeneous Boltzmann equation with bounded kernels. \begin{theorem}[Mean-field limit for bounded jump kernel] \label{th:hydrodynamic_limit} Suppose that for a given measure $\mu_0$ it holds that $$\langle \omega, X^n_0\rangle \leq \langle \omega, \mu_0\rangle$$ and that as $n\rightarrow \infty$ $$X^n_0 \rightarrow \mu_0 \qquad \mbox{weakly.}$$ Assume that the kernel is uniformly bounded, $$K \leq \Lambda < \infty.$$ Then the sequence $(X^n)_{n\in\mathbb{N}}$ converges as $n\rightarrow \infty$ in probability in $D([0,\infty) \times \P(\mathbb{R}_+))$.
Its limit, $(\mu_t)_{t\geq 0}$, is continuous and satisfies the isotropic 4-wave kinetic equation \eqref{eq:isotropic_4_wave_equation_weak}. In particular, for all $f\in C_b(\mathbb{R}_+)$ \begin{eqnarray*} \sup_{s\leq t }|\langle f, X^n_s - \mu_s \rangle| & \rightarrow & 0, \\ \sup_{s\leq t }|M_s^{f,n}| &\rightarrow & 0,\\ \sup_{s\leq t }\left|\int^s_0 \langle f, Q^{(n)}(X^n_r)\rangle\, dr - \int^s_0 \langle f, Q(\mu_r) \rangle\, dr \right| &\rightarrow & 0, \end{eqnarray*} all in probability. As a consequence, equation \eqref{eq:isotropic_4_wave_equation_weak} is obtained as the limit in probability of \eqref{eq:martingale_formulation} as $n\rightarrow \infty$. \end{theorem} \begin{corollary}[Existence of solutions for the weak wave kinetic equation] There exists a solution of \eqref{eq:isotropic_4_wave_equation_weak} (obtained as the limit of the $X^n_t$). \end{corollary} \begin{proof} We have that the limit $(\mu_t)_{t\geq 0}$ satisfies $\langle \omega, \mu_t \rangle \leq \langle \omega, \mu_0\rangle$ by the following: $$\langle \omega \mathbb{1}_{\omega\leq k}, \mu_t \rangle = \lim_{n\rightarrow \infty}\langle \omega\mathbb{1}_{\omega\leq k}, X^n_t \rangle $$ and we have that $$\langle \omega\mathbb{1}_{\omega\leq k}, X^n_t \rangle \leq\langle \omega, X^n_t \rangle \leq \langle \omega, \mu_0\rangle.$$ Letting $k\rightarrow \infty$ and using monotone convergence we get the bound. \end{proof} \subsubsection{Proof of Theorem \ref{th:hydrodynamic_limit}} We want to take the limit in the martingale formulation \eqref{eq:martingale_formulation}. For that we will follow these steps from \cite{norris1999smoluchowski}: \begin{enumerate} \item The martingale $(M^{n, f})_{n\in\mathbb{N}}$ converges to zero uniformly in time on bounded sets, $$\sup_{0 \leq s\leq t} |M^{n, f}_s| \rightarrow 0 \qquad \mbox{in probability}$$ (Proposition \ref{prop:convergence_martingale}).
\item Up to a subsequence, $(X^n_t)_{n\in\mathbb{N}}$ converges weakly as $n\rightarrow \infty$ in $D([0,\infty)\times \P(\mathbb{R}_+))$ (Proposition \ref{prop:almost_sure_convergence_for_measures}). This will be split into three steps: \begin{enumerate} \item We will prove that the laws of the sequence $(\langle f, X^n_t \rangle)_{n\in\mathbb{N}}$ are tight in $D([0,\infty), \mathbb{R})$ (Lemma \ref{lem:tightness_for_the_action}). \item From this we deduce that the laws of the sequence $(X^n_t)_{n\in\mathbb{N}}$ itself are tight in $\mathcal{P}(D([0,\infty)\times\P(\mathbb{R}_+)))$ (Lemma \ref{lem:tightness_for_the_measures}). \item Finally we use Prokhorov's theorem to prove the statement. \end{enumerate} \item Compute the limit of the trilinear term (Proposition \ref{prop:convergence_trilinear_term}). For this we will need to prove that: \begin{enumerate} \item The convergence of $(X^n_t)_{t\geq 0}$ as $n\rightarrow\infty$ is uniform on compact sets in the $t$ variable (Lemma \ref{lem:uniform_convergence_in_time}). This will be a consequence of proving that the limit itself is continuous (Lemma \ref{lem:continuity_of_limit}). \item Prove that in the limit we can forget about the counting measure $X^{(n)}$ and consider just the product of the three measures $X(d\omega_1)X(d\omega_2) X(d\omega_3)$ (Lemma \ref{lem:limit_counting_measures}). \end{enumerate} \item Using the uniqueness of solutions to the wave kinetic equation, we have that all the convergent subsequences converge to the same limit. Hence the whole sequence converges: if a tight sequence has every weakly convergent subsequence converging to the same limit, then the whole sequence converges weakly to that limit (\cite{billingsley2013convergence}). \item We have that the weak limit of $(X_t^n)_{n\in\mathbb{N}}$ satisfies the kinetic wave equation \eqref{eq:isotropic_4_wave_equation_weak}, so it is deterministic. Therefore, we actually have convergence in probability.
\item Finally, as an application of the functional monotone class theorem we can extend this result to functions $f\in \mathcal{B}(\mathbb{R}_+)$. \end{enumerate} \bigskip \paragraph{Step 1: control on the martingale} \begin{proposition}[Martingale convergence] \label{prop:convergence_martingale} For any $f\in C_b(\mathbb{R}_+)$, $t\geq 0$, $$\sup_{0 \leq s\leq t} |M^{n, f}_s| \rightarrow 0\qquad \mbox{in } L^2;$$ in particular, it also converges in probability. \end{proposition} \begin{proof}[Proof of Proposition \ref{prop:convergence_martingale}] We use Proposition 8.7 in \cite{darling2008differential}, which ensures that $$\E \left[ \sup_{s\leq T} |M^{n, f}_s|^2\right] \leq 4 \E \int^T_0 \alpha^{n,f}(\mu_s) ds$$ as long as the right hand side is finite, where \begin{eqnarray} \label{eq:alpha_previsible} \alpha^{n,f}(\mu_s) &=& \frac{1}{2} \int_{D}\left( \frac{1}{n} (f(\omega_1+\omega_2-\omega_3)+f(\omega_3) - f(\omega_1) -f(\omega_2)) \right)^2 \\ && \qquad\times\frac{1}{n^2} K(\omega_1, \omega_2, \omega_3) (n\mu_s)^{(1)}(d\omega_1,d\omega_2,d\omega_3) \nonumber \end{eqnarray} (this statement is a consequence of Doob's $L^2$ inequality). Therefore, using \eqref{eq:rescaling_property_counting_measure4} we have that \begin{equation} \label{eq:bound_martingale} \E \left[ \sup_{s\leq t} |M^{n, f}_s|^2\right] \leq \frac{1}{n}32 \|f\|^2_{\infty} \Lambda^2 t. \end{equation} This implies the convergence of the supremum towards 0 in $L^2$, which in turn implies convergence in probability. \end{proof} \bigskip \paragraph{Step 2: convergence for the measures} \label{sec:step2_convergence_of_measures} \begin{proposition}[Weak convergence for the measures] \label{prop:almost_sure_convergence_for_measures} There exists a weakly convergent subsequence $(X^{n_k}_t)_{k\in\mathbb{N}}$ in $D([0,\infty)\times \P(\mathbb{R}_+))$ as $k\rightarrow\infty$.
\end{proposition} \begin{lemma} \label{lem:tightness_for_the_action} The sequence of laws of $(\langle f, X^n_t \rangle)_{n\in\mathbb{N}}$ on $D([0,\infty), \mathbb{R})$ is tight. \end{lemma} \begin{lemma} \label{lem:tightness_for_the_measures} The laws of the sequence $(X^n_t)_{n\in\mathbb{N}}$ on $D([0,\infty)\times\P(\mathbb{R}_+))$ are tight. \end{lemma} \begin{proof}[Proof of Proposition \ref{prop:almost_sure_convergence_for_measures}] By Lemma \ref{lem:tightness_for_the_measures} we know that the laws of the sequence $(X^n_t)_{n\in\mathbb{N}}$ are tight. This implies relative compactness for the sequence by Prokhorov's theorem. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:tightness_for_the_action}] We use Theorem \ref{th:criteria_tightness_Skorokhod}. To prove the first part (i) of the Theorem we use that $$|\langle f, X^n_t \rangle | = \left| \frac{1}{n}\sum^n_{i=1}f(\omega^{i,n}_t)\right| \leq \frac{1}{n} \sum_{i=1}^n|f(\omega^{i, n}_t)|\leq \|f\|_\infty,$$ so for all $t\geq 0$, $\langle f, X^n_t \rangle \in [-\|f\|_\infty, \|f\|_\infty]$. The second condition (ii) of the theorem will be a consequence of the following inequalities: \begin{equation} \label{eq:4bound_square_martingale} \E \left[ \sup_{r\in[s,t)}|M_{r}^{n,f}-M_s^{n,f}|^2\right] \leq \frac{1}{n} 32 \|f\|^2_\infty \Lambda^2(t-s) \end{equation} and \begin{equation} \label{eq:4bound_square_operator} \E \left[ \sup_{r\in[s,t)} \left( \int^r_s \langle f, Q^{(n)}(X^n_p) \rangle \, dp \right)^2 \right]\leq 16 \|f\|^2_\infty \Lambda^2(t-s)^2, \end{equation} which together imply that \begin{equation} \label{eq:4bound_square_measure} \E \left[ \sup_{r\in[s,t)}|\langle f, X^n_r-X^n_s\rangle|^2\right] \leq A\left((t-s)^2+\frac{(t-s)}{n} \right) \end{equation} for some $A>0$ depending only on $\|f\|_\infty$ and $\Lambda$.
First we use Markov's and Jensen's inequalities to get $$\mathbb{P}(w'(\langle f, X^n\rangle, \delta, T) \geq \eta) \leq \frac{\mathbb{E}[w'(\langle f, X^n \rangle, \delta, T)]}{\eta} \leq \frac{\left( \mathbb{E}[w'(\langle f, X^n\rangle, \delta, T)^2] \right)^{1/2}}{\eta}.$$ ($w'$ is defined in Theorem \ref{th:criteria_tightness_Skorokhod}). Now, for a given partition $\{t_i\}^m_{i=1}$, $$\sup_{r_1, r_2\in [t_{i-1}, t_i)}|\langle f, X^n_{r_1}-X^n_{r_2} \rangle | \leq 2 \sup_{r\in [t_{i-1}, t_i)}|\langle f, X^n_r-X^n_{t_{i-1}}\rangle|.$$ Denote by $i^*$ the index at which the supremum on the right hand side is attained (the number of points in each partition is always finite). Now we want to consider a partition such that $\max_{i}|t_i-t_{i-1}|=\delta+\varepsilon$ for some $\varepsilon>0$, so $$w'(\langle f, X^n\rangle, \delta, T) \leq 2 \sup_{r\in[t_{i^*-1},t_{i^*-1}+\delta+\varepsilon)}|\langle f, X^n_r-X^n_{t_{i^*-1}}\rangle| \quad a.s. $$ Therefore it remains to check that $$\mathbb{E} \left[\sup_{r\in[s,s+\delta+\varepsilon)}|\langle f, X^n_r-X^n_s\rangle|^2 \right] \leq \frac{\eta^4}{2},$$ which is fulfilled thanks to the bound \eqref{eq:4bound_square_measure} by taking, for example, $$\delta = \sqrt{1+\frac{\eta^4}{2A}}-1-\varepsilon$$ for $\varepsilon$ small enough. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:tightness_for_the_measures}] We will use Theorem \ref{th:jakubowski_criteria} to prove this. To check condition (i), we consider the compact set $W\subset \P(\mathbb{R}_+)$ (compact with respect to the topology induced by the weak convergence of measures) defined as $$W:=\left\{ \tau\in \P(\mathbb{R}_+)\,:\, \int_{\mathbb{R}_+} \omega\, \tau(d\omega) \leq C \right\}.$$ Consider $\left(\mathcal{L}^n\right)_{n\in\mathbb{N}}$ the family of probability measures in $\mathcal{P}(D([0,\infty); W))$ which are the laws of $(X^n)_{n\in \mathbb{N}}$.
We have that $$\mathcal{L}^n(D([0,\infty); W)) =1 \quad \mbox{ for all } n\in \mathbb{N}$$ by the conservation of the total energy and its boundedness (assumption (B1)): $$\int_{\mathbb{R}_+} \omega X_t^n(d\omega) = \frac{1}{n}\sum_{i=1}^n \omega_t^{n,i} =\frac{1}{n}\sum_{i=1}^n \omega_0^{n,i} = \int_{\mathbb{R}_+} \omega \mu_0^n(d\omega) \leq C \quad \mbox{a.s.}$$ Now, to check condition (ii) we will use the family of continuous functions in $\P(\mathbb{R}_+)$ defined as $$\mathbb{F}= \{ F\, :\, \mathcal{P}(\mathbb{R}_+) \rightarrow \mathbb{R} \, :\, F(\tau)= \langle f, \tau \rangle \mbox{ for some } f\in C_b(\mathbb{R}_+)\}.$$ This family is closed under addition since $C_b(\mathbb{R}_+)$ is, its members are continuous on $\mathcal{P}(\mathbb{R}_+)$, and it separates points in $\P(\mathbb{R}_+)$: if $F(\tau)=F(\bar \tau)$ for all $F\in \mathbb{F}$ then $$\int_{\mathbb{R}_+} f(k) d(\tau-\bar \tau) (k) =0 \quad \forall f\in C_b(\mathbb{R}_+),$$ hence $\tau\equiv\bar\tau$. So we are left with proving that for every $f\in C_b(\mathbb{R}_+)$ the sequence $\{\langle f, X^n \rangle\}_{n\in \mathbb{N}}$ is tight. This was proven in Lemma \ref{lem:tightness_for_the_action}. \end{proof} \bigskip \paragraph{Step 3: convergence for the trilinear term} \label{sec:4convergence_trilinear_term} \begin{proposition}[Convergence for the trilinear term] \label{prop:convergence_trilinear_term} It holds that $$\int^t_0 \langle f, Q^{(n)}(X^{n_k}_s)\rangle \, ds \rightarrow \int^t_0 \langle f, Q(\mu_s, \mu_s, \mu_s)\rangle \, ds \quad \mbox{weakly.}$$ \end{proposition} \begin{lemma}[Continuity of the limit] \label{lem:continuity_of_limit} The weak limit of $(X^{n_k}_t)_{t\geq 0}$ as $k\rightarrow\infty$ is continuous in time almost surely. \end{lemma} \begin{lemma}[Uniform convergence] \label{lem:uniform_convergence_in_time} For all $f\in C_b(\mathbb{R}_+)$, it holds that $$\sup_{s\leq t}|\langle f, X^{n_k}_s-\mu_s\rangle|\rightarrow 0 \quad \mbox{weakly}$$ as $k\rightarrow\infty$.
\end{lemma} \begin{lemma} \label{lem:limit_counting_measures} It holds that $$\sup_{s\leq t} |\langle f, Q^{(n)}(X^{n_k}_s)- Q(\mu_s)\rangle| \rightarrow 0 \quad \mbox{weakly}$$ as $k\rightarrow \infty$. \end{lemma} \begin{proof}[Proof of Proposition \ref{prop:convergence_trilinear_term}] By Lemma \ref{lem:limit_counting_measures} we can pass the limit inside the integral in time. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:continuity_of_limit}] We have that for any $f\in C_b(\mathbb{R}_+)$ $$|\langle f, X^{n_k}_t \rangle- \langle f, X^{n_k}_{t-}\rangle | \leq\frac{4}{n_k} \|f\|_{\infty}.$$ Applying Theorem \ref{th:continuity_criteria_limit_Skorokhod_space} we get that $\langle f, \mu_t\rangle$ is continuous for any $f\in C_b(\mathbb{R}_+)$, and this implies the continuity of $(\mu_t)_{t\geq0}$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:uniform_convergence_in_time}] We know by Lemma \ref{lem:continuity_of_limit} that the limit of $(X^{n_k})_{k\in \mathbb{N}}$ is continuous. The statement is a consequence of the continuous mapping theorem in the Skorokhod space (proven using the Skorokhod representation theorem \ref{th:Skorokhod_representation_theorem}) and the fact that $g(X)(t)=\sup_{s\leq t} |X(s)|$ is a continuous function in this space. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:limit_counting_measures}] We abuse notation and denote by $(X^n_t)_{n\in \mathbb{N}}$ the convergent subsequence. We split the proof into two parts; we will prove, for all $f\in C_b(\mathbb{R}_+)$: \begin{itemize} \item[(i)] $\sup_{s\leq t}|\langle f, \left( Q-Q^{(n)}\right) (X^{n}_s)\rangle| \rightarrow 0$ as $n\rightarrow \infty$, \item[(ii)] $\sup_{s\leq t}\left|\langle f, Q\left( X^n_s \right) -Q\left(\mu_s\right) \rangle\right| \rightarrow 0$ as $n\rightarrow \infty$.
\end{itemize} (i) is a consequence of \begin{eqnarray} \nonumber |\langle f, \left( Q-Q^{(n)}\right) (X^{n}_s)\rangle| &=& \frac{1}{2}\frac{1}{n}\bigg|\int_{2\omega_2\geq \omega_3}\left( f(2\omega_2-\omega_3) + f(\omega_3)-2f(\omega_2)\right) \\ &&\qquad\qquad\times K(\omega_2, \omega_2, \omega_3) X^n_s(d\omega_2)X^n_s(d\omega_3)\bigg| \nonumber\\ &\leq&\frac{2}{n}\|f\|_{\infty}\Lambda . \label{eq:4bound_difference_trilinear_term} \end{eqnarray} Now, for (ii) we have that \begin{eqnarray} \nonumber \sup_{s\leq t}\left|\langle f, Q( X^n_s) -Q(\mu_s) \rangle\right| & \leq & \frac{1}{2} \int_{D} K(\omega_1, \omega_2, \omega_3) \left|f(\omega_1 + \omega_2 -\omega_3) +f(\omega_3) - f(\omega_1) - f(\omega_2) \right| \\ &&\qquad\times \sup_{s\leq t} \left|X^n_s(d\omega_1)X_s^n(d\omega_2)X_s^n(d\omega_3)-\mu_s(d\omega_1)\mu_s(d\omega_2)\mu_s(d\omega_3)\right|. \nonumber\\ \label{eq:4bound_trilinear_term_uniformly} \end{eqnarray} We conclude (ii) with an argument analogous to Lemma \ref{lem:uniform_convergence_in_time} and the fact that $$X^n_t \otimes X^n_t \otimes X^n_t \rightarrow \mu_t \otimes \mu_t \otimes \mu_t$$ weakly (a consequence of L\'evy's continuity theorem). \end{proof} \subsubsection{Proof of Theorem \ref{th:mean_field_limit_unbounded_kernel} (unbounded kernel)} \begin{remark} The proof that we wrote for the case of bounded kernels works here for the most part after substituting $\Lambda$ by $M$, where $$\int_{\mathbb{R}_+}\omega X^n(d\omega)\leq M= \langle \omega, \mu_0\rangle.$$ The only places where we need to be careful are Lemmas \ref{lem:uniform_convergence_in_time} and \ref{lem:limit_counting_measures}. \end{remark} \begin{lemma}[Convergence of a subsequence] There exists a subsequence $\left( X^{n_k}_t\right)_{k\in \mathbb{N}}$ that converges weakly in $D([0,\infty)\times \mathcal{P}(\mathbb{R}_+))$ as $k\rightarrow \infty$.
\end{lemma} \begin{proof} The proof is exactly the same as in Section \ref{sec:step2_convergence_of_measures} and Proposition \ref{prop:almost_sure_convergence_for_measures}, using the bound on the jump kernel $K$; for example, in the proof of Lemma \ref{lem:tightness_for_the_action}, in the bounds of expressions \eqref{eq:4bound_square_martingale} and \eqref{eq:4bound_square_operator}, the value of $\Lambda$ will be substituted by $M^3$. \end{proof} \begin{lemma} For any $f\in C_b(\mathbb{R}_+)$, $t\geq 0$ it holds that $$\mathbb{E}\left[ \sup_{s\leq t}|M^{n,f}_s|^2\right] \leq \frac{1}{n}32 \|f\|_\infty^2 M^6t.$$ \end{lemma} \begin{proof} The proof is the same as in Proposition \ref{prop:convergence_martingale}, using the bound on the jump kernel $K$. \end{proof} \begin{lemma} \label{eq:limit_trilinear_operator_unbounded_kernel_4} It holds that for any $t \geq 0$ $$\sup_{s\leq t} |\langle f, Q^{(n)}(X^n_s) - Q(\mu_s)\rangle| \rightarrow 0 \quad \mbox{weakly}$$ for $f$ continuous and of compact support. \end{lemma} \begin{proof} Here everything works as in Section \ref{sec:4convergence_trilinear_term}, but we need to find the bounds \eqref{eq:4bound_difference_trilinear_term} and \eqref{eq:4bound_trilinear_term_uniformly}. We use a similar approach to the one in \cite{norris1999smoluchowski}. Firstly, we will prove a bound analogous to \eqref{eq:4bound_trilinear_term_uniformly}. Fix $\varepsilon>0$ and define $p(\varepsilon)=\varepsilon^{-1/\gamma}$. Then for $\omega\geq p(\varepsilon)$ it holds that $$\frac{\tilde \varphi(\omega)}{\omega}\leq \varepsilon.$$ Now choose $\kappa\in (0, \gamma/[2(1-\gamma)])$. We split the domain into $F_1^p:=\{(\omega_1,\omega_2, \omega_3):\, \omega_1\leq p^\kappa(\varepsilon),\omega_2\leq p^\kappa(\varepsilon), \omega_3\leq p^\kappa(\varepsilon)\}$ and its complement $F_2^p$.
In $F_1^p$ the kernel is bounded and we have, with obvious notation, $$\sup_{s\leq t} |\langle f, Q_1(X^n_s)-Q_1(\mu_s) \rangle | \rightarrow 0 \quad \mbox{weakly.}$$ On the other hand, in $F_2^p$, at least one of the components is greater than $p^\kappa(\varepsilon)$. Assume, without loss of generality, that $\omega_3\geq p^\kappa(\varepsilon)$. Then \begin{eqnarray*} |\langle f, Q_2(X^n_t) \rangle| &\leq& \bigg|\int_D \left\{f(\omega)+f(\omega_3)-f(\omega_1)-f(\omega_2) \right\} K(\omega_1, \omega_2, \omega_3)\\ && \quad \times X^n_t(d\omega_1) X^n_t(d\omega_2) X^n_t(d\omega_3) \bigg|\\ &\leq & 4\|f\|_\infty \int_D\tilde\varphi(\omega_1)\tilde\varphi(\omega_2)\tilde\varphi(\omega_3)X^n_t(d\omega_1) X^n_t(d\omega_2) X^n_t(d\omega_3)\\ &\leq & 4\|f\|_\infty \max\left\{\left( p^{\kappa}(\varepsilon)\right)^{2(1-\gamma)} \varepsilon\langle \omega, \mu_0\rangle, \left( p^\kappa(\varepsilon)\right)^{1-\gamma} \varepsilon^2\langle \omega, \mu_0\rangle^2, \varepsilon^3 \langle \omega, \mu_0\rangle^3 \right\} \\ &\leq& c\varepsilon^\eta \qquad \mbox{for } \eta=1-2\kappa(1-\gamma)/\gamma>0, \end{eqnarray*} and analogously $$|\langle f, Q_2(\mu_t) \rangle |\leq c\varepsilon^\eta.$$ This implies that $$\limsup_{n\rightarrow \infty} \sup_{s\leq t} |\langle f, Q_2(X^n_s)-Q_2(\mu_s) \rangle | \leq 2c\varepsilon^\eta,$$ but $\varepsilon$ is arbitrary, so the limit is proved. We are left with proving an estimate analogous to \eqref{eq:4bound_difference_trilinear_term}, which is obtained straightforwardly since we restrict ourselves to continuous functions of compact support.
\end{proof} \begin{proof}[Proof of Theorem \ref{th:mean_field_limit_unbounded_kernel}] \label{proof:theorem_mean_limit_1_unbounded} Thanks to the previous lemmas we know that there exists a convergent subsequence $X^{n_k}_t \rightarrow \mu_t$ weakly as $k\rightarrow \infty$ such that $$\langle f, \mu_t \rangle =\langle f, \mu_0\rangle + \int^t_0 \langle f, Q(\mu_s) \rangle ds$$ for any continuous $f$ of compact support. Now, using the bounds on the jump kernel, the fact that $\langle \omega, \mu_t\rangle \leq \langle \omega, \mu_0\rangle$, and a limit argument, we can extend this equation to all bounded measurable functions $f$. \end{proof} \subsection{Second result on mean-field limit} \subsubsection{A coupling auxiliary process} Write $$X^n_0=\frac{1}{n}\sum^{n}_{i=1}\delta_{\omega_i},$$ for $\omega_i\in \mathbb{R}_+$. Define for $B\subset \mathbb{R}_+$ bounded $$X_0^{B,n}=\frac{1}{n} \sum^n_{i\,:\,\omega_i\in B} \delta_{\omega_i}.$$ Consider $\Lambda^{B,n}_0$ such that, for each bounded $B' \subset \mathbb{R}_+$ with $B\subset B'$, it holds that \begin{equation} \label{eq:bounds_X} X_0^{B,n} \leq X_0^{B',n}, \quad \langle \varphi, X^{B,n}_0 \rangle + \Lambda^{B,n}_0 = \langle \varphi, X^{B',n}_0\rangle + \Lambda^{B',n}_0. \end{equation} Set $$\nu^B = (\Lambda_0^{B,n})^2+2\Lambda_0^{B,n} \langle \varphi, X_0^{B,n} \rangle - \frac{1}{n^2}\sum_{k,j\,:\, \omega_j\notin B \mbox{ or } \omega_k\notin B}\varphi(\omega_j)\varphi(\omega_k) . $$ Note that $\nu^B$ decreases as $B$ increases and $\nu^{[0,\infty)}=(\Lambda_0^{B,n})^2+2\Lambda_0^{B,n} \langle \varphi, X_0^{B,n} \rangle \geq 0$. For $i<j$ take independent exponential random variables $T_{ijk}$ of parameter $K(\omega_i, \omega_j,\omega_k)/n^2$. Set $T_{ijk}=T_{jik}$.
Also, for $i\neq j$, take independent exponential random variables $S_{ijk}$ of parameter $\left( \varphi(\omega_i)\varphi(\omega_j)\varphi(\omega_k)-K(\omega_i,\omega_j,\omega_k)\right)/n^2$ (in all these cases we assume that $\omega_i+\omega_j\geq\omega_k$). We can construct, independently for each $i$, a family of independent exponential random variables $S^B_i$, increasing in $B$, with $S^B_i$ having parameter $\varphi(\omega_i)\nu^B$. Set $$T^B_i=\min_{k,j\,:\, \omega_j\notin B \mbox{ or } \omega_k\notin B} \left( T_{ijk} \wedge S_{ijk}\right) \wedge S^B_i.$$ Then $T^B_i$ is an exponential random variable of parameter $$\frac{1}{n^2}\sum_{k,j\,:\, \omega_j\notin B \mbox{ or } \omega_k\notin B} \varphi(\omega_i)\varphi(\omega_j)\varphi(\omega_k)+ \varphi(\omega_i)\nu^B= \varphi(\omega_i)\left((\Lambda^{B,n}_0)^2+2\Lambda^{B,n}_0 \langle \varphi,X_0^{B,n} \rangle \right).$$ For each $B$, the random variables $$(T_{ijk}, T^B_i:\, i,j,k \mbox{ such that }\omega_i, \omega_j, \omega_k\in B, \, i<j)$$ form an independent family.
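The identification of the law of $T^B_i$ above rests on the elementary fact that the minimum of finitely many independent exponential random variables is again exponential, with parameter the sum of the individual parameters. A quick Monte Carlo sanity check of this fact (the rates below are arbitrary, chosen only for illustration):

```python
import random

def min_of_exponentials(rates, n_samples=200_000, seed=1):
    """Empirical mean of the minimum of independent exponential variables,
    one per rate in `rates`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += min(rng.expovariate(r) for r in rates)
    return total / n_samples

rates = [0.5, 1.0, 2.5]  # arbitrary clock parameters summing to 4.0
mean = min_of_exponentials(rates)
# the minimum is exponential with parameter sum(rates) = 4.0, so mean ~ 1/4
```

With the parameters summing to $4$, the empirical mean of the minimum should be close to $1/4$, in agreement with the computation of the parameter of $T^B_i$.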
Suppose that $i$ is such that $\omega_i\in B$ and that $j$ is such that $\omega_j\notin B$ or $k$ is such that $\omega_k\notin B$, then we have $$T^B_i \leq T_{ijk}$$ and for $B\subset B'$ and all $i$, we have (as a consequence of \eqref{eq:bounds_X}) $$T^B_i \leq T^{B'}_i.$$ Now set $$T=\left( \min_{i<j,k} T_{ijk} \right) \wedge \left( \min_i T_i^B\right).$$ We set $(X^{B,n}_t, \Lambda^{B,n}_t) = (X^{B,n}_0, \Lambda^{B,n}_0)$ for $t<T$ and set $$ (X^{B,n}_T, \Lambda^{B,n}_T) =\left\{ \begin{array}{l} (X^{B,n}_0 -\frac{1}{n}\delta_{\omega_i}-\frac{1}{n}\delta_{\omega_j}+\frac{1}{n}\delta_{\omega_k}+\frac{1}{n}\delta_{\omega_i+\omega_j-\omega_k}, \Lambda^{B,n}_0)\\ \quad \mbox{if } T=T_{ijk},\,\omega_i, \omega_j,\omega_k,\omega_i+\omega_j-\omega_k\in B,\\ \\ (X^{B,n}_0 -\frac{1}{n}\delta_{\omega_i}-\frac{1}{n}\delta_{\omega_j}+\frac{1}{n}\delta_{\omega_k}, \Lambda^{B,n}_0+\frac{1}{n}\varphi(\omega_i+\omega_j-\omega_k))\\ \quad \mbox{if } T=T_{ijk},\,\omega_i, \omega_j,\omega_k \in B,\;\omega_i+\omega_j-\omega_k\notin B,\\ \\ (X^{B,n}_0- \frac{1}{n}\delta_{\omega_i}, \Lambda^{B,n}_0+\frac{1}{n}\varphi(\omega_i)), \quad \mbox{if } T=T^B_i,\, \omega_i\in B,\\ \\ (X^{B,n}_0, \Lambda^{B,n}_0), \quad\mbox{otherwise} \end{array}\right. $$ One can check that $X^{B,n}_T$ is supported on $B$ and for $B\subset B'$ \begin{equation} \label{eq:stochastic_turbulent_bounds} X^{B,n}_T \leq X^{B',n}_T, \qquad \langle \varphi, X^{B,n}_T \rangle + \Lambda^B_T = \langle \varphi, X^{B',n}_T\rangle + \Lambda^{B'}_T . \end{equation} We repeat the above construction independently from time $T$, again and again to obtain a family of Markov processes $(X^{B,n}_t, \Lambda^{B,n}_t)_{t\geq0}$ such that \eqref{eq:stochastic_turbulent_bounds} holds for all time. \begin{remark} Notice that $\Lambda^{B,n}_0$ and $X^{B,n}_0$ in the definition of $\nu^B$ must be updated to $\Lambda^{B,n}_T$ and $X^{B,n}_T$ in the new step. 
\end{remark} For a bounded set $B\subset [0,\infty)$, we will consider $$X^{B,n}_0=\mathbb{1}_B X^n_0, \quad \Lambda^{B,n}_0=\langle \varphi \mathbb{1}_{B^c}, X^n_0\rangle.$$ \paragraph{Markov Chain generator} For all $F\in C_b(\mathcal{M}^B)$, $\mu\in \mathcal{M}^B$ we have \begin{eqnarray*} \mathcal{G}F(\mu,\lambda) &=& \frac{n}{2}\int_D \left\{ F\left( \mu^{\omega_1,\omega_2,\omega_3}, \lambda\right) - F(\mu, \lambda) \right\} \mathbb{1}_{\omega\in B} K(\omega_1, \omega_2, \omega_3) \mu^{(n)}(d\omega_1,d\omega_2,d\omega_3)\\ &+& \frac{n}{2}\int_D \left\{ F\left( \hat\mu^{\omega_1,\omega_2,\omega_3}, \lambda^{\omega}\right) - F(\mu, \lambda)\right\} \mathbb{1}_{\omega\notin B} K(\omega_1, \omega_2, \omega_3)\mu^{(n)}(d\omega_1,d\omega_2,d\omega_3)\\ &+&n \int_{\mathbb{R}_+} \left\{ F\left( \mu^{\omega}, \lambda^\omega\right) - F(\mu, \lambda)\right\}\left( \lambda^2 + 2\lambda\langle \varphi, \mu \rangle \right) \varphi(\omega)\mu(d\omega) \end{eqnarray*} where $\omega=\omega_1+\omega_2-\omega_3$ and \begin{eqnarray*} \mu^{\omega_1,\omega_2,\omega_3}&=&\mu+\frac{1}{n}\left(\delta_{\omega_3}+\delta_{\omega}-\delta_{\omega_1}-\delta_{\omega_2} \right);\\ \hat \mu^{\omega_1,\omega_2,\omega_3} &=& \mu + \frac{1}{n}\left(\delta_{\omega_3}-\delta_{\omega_1}-\delta_{\omega_2} \right);\\ \lambda^{\omega} &=& \lambda + \frac{1}{n}\varphi(\omega);\\ \mu^\omega &=& \mu -\frac{1}{n}\delta_\omega. \end{eqnarray*} \paragraph{Associated martingale.} Recall the definition $$\mu^{(n)}(A\times B\times C)=\mu(A)\mu(B)\mu(C)-n^{-1}\mu(A\cap B)\mu(C)$$ which has the property $n^{3}\mu^{(n)}=(n\mu)^{(1)}$.
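The rescaling property $n^{3}\mu^{(n)}=(n\mu)^{(1)}$ can be verified exactly on rectangles $A\times B\times C$ from the defining formula, since $n^3\left(\mu(A)\mu(B)\mu(C)-n^{-1}\mu(A\cap B)\mu(C)\right)=(n\mu)(A)(n\mu)(B)(n\mu)(C)-(n\mu)(A\cap B)(n\mu)(C)$. The short Python sketch below (the particle configuration and the sets are arbitrary choices for illustration) checks this with exact rational arithmetic:

```python
from fractions import Fraction

def measure(atoms, weight, A):
    """mu(A) for mu = weight * sum of Dirac masses at `atoms`, counted in A."""
    return weight * sum(1 for x in atoms if x in A)

def counting(atoms, weight, scale, A, B, C):
    """mu^{(scale)}(A x B x C) = mu(A)mu(B)mu(C) - (1/scale) mu(A&B)mu(C)."""
    m = lambda S: measure(atoms, weight, S)
    return m(A) * m(B) * m(C) - Fraction(1, scale) * m(A & B) * m(C)

n = 5
atoms = [1, 2, 2, 3, 5]          # an arbitrary particle configuration
A, B, C = {1, 2}, {2, 3}, {1, 5}
# mu = (1/n) sum delta: weight 1/n with scale n gives mu^{(n)};
# n*mu has weight 1, and scale 1 gives (n mu)^{(1)}
lhs = n**3 * counting(atoms, Fraction(1, n), n, A, B, C)   # n^3 mu^{(n)}
rhs = counting(atoms, Fraction(1), 1, A, B, C)             # (n mu)^{(1)}
```

Both evaluations agree, as the algebraic identity above guarantees for any choice of atoms and rectangles.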
Define for any bounded measurable function $f$ on $\mathbb{R}_+$ and $a\in \mathbb{R}$: \begin{eqnarray*} L^{B,(n)}(\mu,\lambda)(f,a) &=& \langle (f,a), L^{B,(n)}(\mu,\lambda)\rangle\\ &=& \frac{1}{2}\int_{\mathbb{R}_+^3} \big( f(\omega)\mathbb{1}_{\omega\in B}+a \varphi(\omega)\mathbb{1}_{\omega\notin B} \\ &&\qquad+ f(\omega_3)-f(\omega_1)-f(\omega_2) \big) K(\omega_1, \omega_2, \omega_3) \mu^{(n)}(d\omega_1,d\omega_2,d\omega_3)\\ &&+\left( \lambda^2+2\lambda \langle \varphi, \mu\rangle \right)\int_{\mathbb{R}_+}\left( a\varphi(\omega)-f(\omega) \right) \varphi(\omega)\mu(d\omega) \end{eqnarray*} and \begin{eqnarray*} P^{B,(n)}(\mu,\lambda)(f,a) &=& \frac{1}{2n}\int_{D} \Big( f(\omega)\mathbb{1}_{\omega\in B}+a \varphi(\omega)\mathbb{1}_{\omega\notin B} \\ &&\qquad+ f(\omega_3)-f(\omega_1)-f(\omega_2) \Big)^2 K(\omega_1, \omega_2, \omega_3) \mu^{(n)}(d\omega_1,d\omega_2,d\omega_3)\\ &&+\left( \lambda^2+2\lambda \langle \varphi, \mu\rangle \right) \int_{\mathbb{R}_+}\left( a\varphi(\omega)-f(\omega) \right)^2 \varphi(\omega)\mu(d\omega). \end{eqnarray*} Then, for all $f$ and $a$ $$M^n_t = \langle f, X^{B,n}_t\rangle +a\Lambda^{B,n}_t - \langle f, X^{B,n}_0\rangle - a\Lambda^{B,n}_0 - \int^t_0 L^{B,(n)}(X^{B,n}_s, \Lambda^{B,n}_s)(f,a)\, ds$$ is a martingale with previsible increasing process $$\langle M\rangle_t =\int^t_0 P^{B,(n)}(X^{B,n}_s, \Lambda^{B,n}_s)(f,a)\, ds.$$ \subsubsection{Proof of Theorem \ref{th:mean-field-limit-complete}} Remember the metric $d$ in $\mathcal{M}^f$ defined around expression \eqref{eq:definition_metric_weak_convergence}. 
\begin{proposition}\label{prop:comparison_auxiliary_problem_stochastic} Let $B\subset [0,\infty)$ be bounded and let $\mu_0$ be a measure on $\mathbb{R}_+$ such that $\langle\varphi,\mu_0\rangle<\infty$ and $$\mu^{*n}_0(\partial B) =0 \quad \mbox{for all } n\geq 1.$$ Assume that for $\varphi(\omega)=\omega+1$ it holds that $$K(\omega_1, \omega_2, \omega_3) \leq \varphi(\omega_1)\varphi(\omega_2)\varphi(\omega_3).$$ Consider $(\mu^B_t, \lambda^B_t)_{t\geq 0}$ the solution to \eqref{eq:auxiliary_equation_existence_proof} given by Proposition \ref{prop:existence_auxiliary_process}. Suppose that $$d(X^{B,n}_0, \mu^{B}_0 )\rightarrow 0, \quad |\Lambda^{B,n}_0-\lambda_0^B|\rightarrow 0$$ as $n\rightarrow \infty$. Then for all $t \geq 0$, $$\sup_{s\leq t} d(X^{B,n}_s, \mu^B_s) \rightarrow 0,\quad \sup_{s\leq t}|\Lambda^{B,n}_s - \lambda^B_s| \rightarrow 0$$ in probability. \end{proposition} \begin{proof}[Proof of Proposition \ref{prop:comparison_auxiliary_problem_stochastic}] Set $M=\sup_{n}\langle \varphi, X^{B,n}_0\rangle <\infty$. For all $B$, all continuous bounded functions $f$ and all $a\in \mathbb{R}$, \begin{eqnarray} \label{eq:martingale2} M^n_t &=& \langle f, X^{B,n}_t\rangle + a \Lambda^{B,n}_t - \langle f, X^{B,n}_0\rangle -a \Lambda^{B,n}_0\\ \nonumber && \qquad - \int^t_0 L^{B,(n)}(X^{B,n}_s, \Lambda^{B,n}_s)(f,a)\, ds \end{eqnarray} is a martingale with previsible increasing process $$\langle M^n\rangle_t = \int^t_0 P^{B,(n)}(X^{B,n}_s, \Lambda^{B,n}_s) (f,a)\, ds$$ (which is the expression analogous to \eqref{eq:alpha_previsible}).
There is a constant $C<\infty$, depending only on $B, \Lambda, \varphi$, such that \begin{eqnarray} \label{eq:ineq1} |L^B(X^{B,n}_t, \Lambda^{B,n}_t)(f,a)| &\leq& C(\|f\|_\infty+|a|)\\ |(L^B-L^{B,(n)})(X^{B,n}_t, \Lambda^{B,n}_t)(f,a)| & \leq & C n^{-1} (\|f\|_\infty+|a|),\\ |P^{B,(n)}(X^{B,n}_t, \Lambda_t^{B,n})(f,a)| &\leq & Cn^{-1}(\|f\|_\infty+|a|)^2, \label{eq:ineq3} \end{eqnarray} where $L^B$ is defined in expression \eqref{eq:auxiliary_equation_existence_proof}. Hence by the same argument as in Theorem \ref{th:hydrodynamic_limit}, the laws of the sequence $(X^{B,n}, \Lambda^{B,n})$ are tight in $D([0,\infty), \mathcal{M}_B\times \mathbb{R})$ (inequality \eqref{eq:ineq3} is analogous to \eqref{eq:alpha_previsible}; inequality \eqref{eq:ineq1} is analogous to \eqref{eq:4bound_square_operator}). Similarly, the laws of the sequence $(X^{B,n}, \Lambda^{B,n}, I^n, J^n)$ are tight in $D([0,\infty), \mathcal{M}_B \times \mathbb{R}\times \mathcal{M}_{B\times B\times B}\times \mathcal{M}_{B\times B\times B})$, where \begin{eqnarray*} I^n_t(d\omega_1,d\omega_2,d\omega_3) &=& K(\omega_1, \omega_2, \omega_3) \mathbb{1}_{\omega\in B} X^{B,n}_t(d\omega_1) X^{B,n}_t(d\omega_2) X^{B,n}_t(d\omega_3),\\ J^n_t(d\omega_1,d\omega_2,d\omega_3) &=& K(\omega_1, \omega_2, \omega_3) \mathbb{1}_{\omega\notin B} X^{B,n}_t(d\omega_1) X^{B,n}_t(d\omega_2) X^{B,n}_t(d\omega_3). \end{eqnarray*} Let $(X,\Lambda, I, J)$ be some weak limit point of the sequence. Passing to a subsequence and using the Skorokhod representation theorem \ref{th:Skorokhod_representation_theorem}, we may assume that the sequence converges almost surely, i.e., as a pointwise limit in $D([0,\infty), \mathcal{M}_B \times \mathbb{R}\times \mathcal{M}_{B\times B\times B}\times \mathcal{M}_{B\times B\times B})$.
Therefore, there exist bounded measurable functions $$I, J:[0,\infty)\times B\times B\times B \rightarrow [0,\infty)$$ symmetric in the first two components, such that \begin{eqnarray*} I_t(d\omega_1, d\omega_2, d\omega_3) &=& I(t, \omega_1, \omega_2, \omega_3) X_t(d\omega_1) X_t(d\omega_2) X_t(d\omega_3)\\ J_t(d\omega_1, d\omega_2, d\omega_3) &=& J(t, \omega_1, \omega_2, \omega_3) X_t(d\omega_1) X_t(d\omega_2) X_t(d\omega_3) \end{eqnarray*} in $\mathcal{M}_{B\times B \times B}$ and such that \begin{eqnarray*} I(t,\omega_1,\omega_2,\omega_3) &=& K(\omega_1, \omega_2, \omega_3) \mathbb{1}_{\omega \in B}\\ J(t,\omega_1,\omega_2,\omega_3) &=& K(\omega_1, \omega_2, \omega_3) \mathbb{1}_{\omega \notin B} \end{eqnarray*} whenever $\omega\notin \partial B$ (notice that we assumed $K$ to be continuous). Now, passing to the limit in \eqref{eq:martingale2} we obtain, for all continuous functions $f$ and all $a\in \mathbb{R}$, for all $t\geq 0$, almost surely \begin{eqnarray*} \langle (f,a), (X_t, \Lambda_t)\rangle &=&\langle (f,a), (X_0,\Lambda_0) \rangle\\ &&+ \frac{1}{2} \int^t_0 \int_{\mathbb{R}^3_+} \big( f(\omega)+f(\omega_3)-f(\omega_1)-f(\omega_2) \big)\\ && \qquad\times I(s, \omega_1, \omega_2,\omega_3) X_s(d\omega_1) X_s(d\omega_2) X_s(d\omega_3)\, ds\\ &&+ \frac{1}{2}\int^t_0 \int_{\mathbb{R}^3_+} \left( a\varphi(\omega)+f(\omega_3)-f(\omega_1)-f(\omega_2) \right)\\ && \qquad \times J(s, \omega_1, \omega_2, \omega_3) X_s(d\omega_1) X_s(d\omega_2) X_s(d\omega_3)\, ds\\ && + \int^t_0 \left( \Lambda_s^2+2\Lambda_s\langle \varphi, X_s\rangle \right)\int_{\mathbb{R}_+} \left( a\varphi(\omega)-f(\omega) \right) \varphi(\omega) X_s(d\omega)\, ds. \end{eqnarray*} \medskip Consider now an analogous iterative scheme to the one done in Proposition \ref{prop:existence_auxiliary_process} for this equation. Denote by $(\nu_t^n)_{n\in\mathbb{N}}$ the sequence approximating $(X_t)_{t\geq 0}$. 
We deduce that $$\nu^0_t=\mu_0, \quad\nu^{n+1}_t \ll \mu_0 + \int^t_0 (\nu^n_s + \nu^n_s * \nu^n_s *\hat\nu^n_s) \, ds$$ for $\hat\nu(A)=\nu(-A)$ and for all $n \geq 0$ (notice that we have extended the measures in the previous expression to the whole of $\mathbb{R}$ by assigning the value 0 to subsets of $(-\infty, 0)$)\footnote{$$\langle f, \nu * \nu *\hat\nu \rangle = \int_{\mathbb{R}^3} f(\omega_1+\omega_2-\omega_3) \nu(d\omega_1)\nu(d\omega_2)\nu(d\omega_3)$$ }. By induction we have that $$\nu^n_t \ll \gamma_0=\sum^\infty_{k=1}\sum^\infty_{l=0}\nu_0^{*k}*\hat\nu_0^{*l}.$$ This implies in our case (taking $n\rightarrow \infty$) that $X_t \otimes X_t \otimes X_t$ is absolutely continuous with respect to $\gamma_0^{\otimes 3}$ for all $t\geq 0$, almost surely. For $G=\{(\omega_1,\omega_2,\omega_3)\, |\, \omega\in \partial B\}$, we have that $\gamma_0^{\otimes 3}(G)=0$ because of the assumptions on $\mu_0$ and the fact that $\gamma_0^{\otimes 3}(G) = (\gamma_0 * \gamma_0 *\hat{\gamma_0})(\partial B)$. Therefore we can replace $I(t, \omega_1, \omega_2,\omega_3)$ by $K(\omega_1, \omega_2, \omega_3)\mathbb{1}_{\omega \in B}$ and $J(t,\omega_1,\omega_2,\omega_3)$ by $K(\omega_1, \omega_2, \omega_3) \mathbb{1}_{\omega\notin B}$. Since the equation obtained after this substitution is the same as \eqref{eq:auxiliary_equation_existence_proof} and $(\mu^B_t, \lambda^B_t)$ is its unique solution, we conclude that the unique weak limit point of $(X^{B,n},\Lambda^{B,n})$ in $D([0,\infty), \mathcal{M}_B \times \mathbb{R})$ is precisely $(\mu^B_t, \lambda^B_t)_{t\geq 0}$. \end{proof} The proof of Theorem \ref{th:mean-field-limit-complete} is exactly the same as in \cite[Theorem 4.4]{norris1999smoluchowski}; we reproduce it here for the sake of completeness. \begin{proof}[Proof of Theorem \ref{th:mean-field-limit-complete}, from \cite{norris1999smoluchowski}] Fix $\delta>0$ and $t<T$. 
Since $(\mu_t)_{t<T}$ is a strong solution, we can find a compact set $B$ satisfying $\mu^{*n}_0(\partial B)=0$ for all $n\geq 1$\footnote{The reason this is possible is that, for any given $\mu_0$, the condition $\mu^{*n}_0(\partial B)=0$ for all $n\geq 1$ holds for all but countably many choices of closed interval $B$ in $\mathbb{R}_+$.} and such that $\lambda^B_t<\delta/2$. Now $$d(\varphi X^n_0, \varphi \mu_0) \rightarrow 0,$$ so $$d(X^{B,n}_0,\mu^B_0)\rightarrow 0, \quad |\Lambda^{B,n}_0 - \lambda^B_0|\rightarrow 0.$$ Hence, by Proposition \ref{prop:comparison_auxiliary_problem_stochastic}, $$\sup_{s\leq t} d(X^{B,n}_s, \mu^B_s) \rightarrow 0, \quad \sup_{s\leq t}|\Lambda^{B,n}_s-\lambda^B_s|\rightarrow 0,$$ in probability as $n\rightarrow \infty$. Since $\{\mu^{B}_s:\, s\leq t\}$ is compact (the support of $\mu_s$ is contained in $B$)\footnote{Recall the definition of the metric $d$ given in \eqref{eq:definition_metric_weak_convergence}. Since $d(X^{B,n}_s, \mu^B_s) \rightarrow 0$, we have, for every bounded continuous function $f$ on $\mathbb{R}_+$, $$\int f \varphi(X^{B,n}_s-\mu^B_s) =\int f \varphi\mathbb{1}_B (X^{B,n}_s-\mu^B_s) \rightarrow 0,$$ since $\varphi$ restricted to $B$ is also bounded and continuous.}, we also have $$\sup_{s\leq t} d(\varphi X^{B,n}_s, \varphi\mu^{B}_s) \rightarrow 0$$ in probability as $n\rightarrow \infty$. By \eqref{eq:bounds_compare_sol_aux_sol} and by the bounds on the instantaneous coagulation-fragmentation particle system \eqref{eq:stochastic_turbulent_bounds}, we have, for $s\leq t$, \begin{eqnarray*} \|\varphi(\mu_s-\mu_s^B) \| &=& \langle \varphi, \mu_s-\mu_s^B \rangle \leq \lambda^B_s \leq \lambda^B_t <\delta/2\\ \|\varphi(X^n_s-X^{B,n}_s)\| &=& \langle \varphi, X^n_s-X^{B,n}_s \rangle \leq \Lambda^{B,n}_s \leq \Lambda^{B,n}_t \\ &\leq & \lambda^B_t + |\Lambda^{B,n}_t-\lambda^B_t|\\ &\leq & \delta/2 + |\Lambda^{B,n}_t - \lambda^B_t|. 
\end{eqnarray*} Now (recall the properties of the metric $d$ defined in \eqref{eq:definition_metric_weak_convergence}) \begin{eqnarray*} d(\varphi X^n_s, \varphi \mu_s) &\leq & \|\varphi(X^n_s - X^{B,n}_s)\| + d(\varphi X^{B,n}_s, \varphi \mu^B_s) + \|\varphi(\mu_s-\mu^B_s)\|\\ &\leq & \delta + d(\varphi X^{B,n}_s, \varphi \mu^B_s) + |\Lambda^{B,n}_t -\lambda^B_t|, \end{eqnarray*} so $$\mathbb{P}\left(\sup_{s\leq t} d(\varphi X^n_s, \varphi \mu_s)>\delta \right) \rightarrow 0 $$ as $n\rightarrow\infty$, as required. \end{proof} \section{Appendix: Some properties of the Skorokhod space} \begin{theorem}[Prohorov's theorem, \cite{ethier2009markov}, Chapter 3] Let $(S,d)$ be complete and separable, and let $\mathcal{M} \subset \P(S)$. Then the following are equivalent: \begin{enumerate} \item $\mathcal{M}$ is tight. \item For each $\varepsilon>0$, there exists a compact $K\subset S$ such that $$\inf_{P\in \mathcal{M}}P(K^\varepsilon) \geq 1-\varepsilon,$$ where $K^\varepsilon:= \{x\in S: \inf_{y\in K} d(x,y)<\varepsilon\}$. \item $\mathcal{M}$ is relatively compact. \end{enumerate} \end{theorem} Let $(E,r)$ be a metric space. The space $D([0,\infty); E)$ of c\`adl\`ag functions taking values in $E$ is widely used in the theory of stochastic processes. In general we would like to study the convergence of measures on this space; however, most of the known tools for convergence of measures apply to measures in $\P(S)$ for $S$ a complete separable metric space. Therefore, it is very useful to find a topology on $D([0,\infty); E)$ under which it becomes a complete and separable metric space. This can be done when $E$ itself is complete and separable, taking the Skorokhod metric; for this reason the space of c\`adl\`ag functions is called the Skorokhod space. Some important properties of this space are the following: \begin{proposition}[\cite{ethier2009markov}, Chapter 3] If $x\in D([0,\infty); E)$, then $x$ has at most countably many points of discontinuity. 
\end{proposition} \begin{theorem}[\cite{ethier2009markov}, Chapter 3] If $E$ is separable, then $D([0,\infty); E)$ is separable. If $(E,r)$ is complete, then $(D([0,\infty);E), d)$ is complete, where $d$ is the Skorokhod metric. \end{theorem} \begin{theorem} The Skorokhod space is a complete separable metric space. \end{theorem} \begin{theorem}[The almost sure Skorokhod representation theorem, \cite{ethier2009markov}, Theorem 1.8, Chapter 3] \label{th:Skorokhod_representation_theorem} Let $(S,d)$ be a separable metric space. Suppose $P_n$, $n=1,2,\hdots$, and $P$ in $\P(S)$ satisfy $\lim_{n\rightarrow\infty}\rho(P_n, P)=0$, where $\rho$ is the metric in $\P(S)$. Then there exists a probability space $(\Omega, \mathcal{F}, \nu)$ on which are defined $S$-valued random variables $X_n$, $n=1,2, \hdots$, and $X$ with distributions $P_n$, $n=1,2,\hdots$, and $P$, respectively, such that $\lim_{n\rightarrow \infty} X_n=X$ almost surely. \end{theorem} \begin{theorem}[Tightness criteria for measures on the Skorokhod space, \cite{jakubowski1986skorokhod} Theorem 3.1] \label{th:jakubowski_criteria} Let $(S,\mathcal{T})$ be a completely regular topological space with metrisable compact sets. Let $\mathbb{G}$ be a family of continuous functions on $S$. Suppose that $\mathbb{G}$ separates points in $S$ and that it is closed under addition. Then a family $\{ \mathcal{L}^n\}_{n\in \mathbb{N}}$ of probability measures in $\P(D([0,\infty);S))$ is tight if and only if the two following conditions hold: \begin{itemize} \item[(i)] For each $\varepsilon>0$ there is a compact set $K_\varepsilon \subset S$ such that $$\mathcal{L}^n(D([0,\infty);K_\varepsilon))>1-\varepsilon, \quad n\in \mathbb{N}.$$ \item[(ii)] The family $\{ \mathcal{L}^n\}_{n\in\mathbb{N}}$ is $\mathbb{G}$-weakly tight. 
\end{itemize} \end{theorem} \begin{theorem}[Criteria for tightness in Skorokhod spaces (\cite{ethier2009markov}, Corollary 7.4, Chapter 3)] \label{th:criteria_tightness_Skorokhod} Let $(E,r)$ be a complete and separable metric space, and let $\{X_n\}$ be a family of processes with sample paths in $D([0,\infty); E)$. Then $\{X_n\}$ is relatively compact if and only if the two following conditions hold: \begin{itemize} \item[(i)] For every $\eta>0$ and rational $t\geq 0$, there exists a compact set $\Lambda_{\eta, t} \subset E$ such that $$\liminf_{n\rightarrow \infty}\mathbb{P}\{X_n(t) \in \Lambda^\eta_{\eta,t}\} \geq 1-\eta.$$ \item[(ii)] For every $\eta>0$ and $T>0$, there exists $\delta>0$ such that $$\limsup_{n\rightarrow\infty} \mathbb{P}\{ w'(X_n, \delta, T) \geq \eta \} \leq \eta.$$ \end{itemize} Here we have used the \textbf{modulus of continuity} $w'$ defined as follows: for $x\in D([0,\infty); E)$, $\delta>0$, and $T>0$, $$w'(x,\delta, T) = \inf_{\{t_i\}} \max_i \sup_{s,t \in [t_{i-1}, t_i)} r(x(s), x(t)),$$ where $\{t_i\}$ ranges over all partitions of the form $0=t_0<t_1<\hdots< t_{n-1}<T\leq t_n$ with $\min_{1\leq i\leq n}(t_i-t_{i-1})>\delta$ and $n\geq 1$. \end{theorem} \begin{theorem}[Continuity criteria for the limit in Skorokhod spaces (\cite{ethier2009markov}, Theorem 10.2, Chapter 3)] \label{th:continuity_criteria_limit_Skorokhod_space} Let $(E,r)$ be a metric space. Let $X_n$, $n=1,2,\hdots,$ and $X$ be processes with sample paths in $D([0,\infty);E)$ and suppose that $X_n$ converges in distribution to $X$. Then $X$ is a.s. continuous if and only if $J(X_n)$ converges to zero in distribution, where $$J(x) = \int^\infty_0 e^{-u} [ J(x,u) \wedge 1] \, du$$ for $$J(x,u) = \sup_{0\leq t \leq u} r(x(t), x(t-)).$$ \end{theorem} \section{Appendix: Formal derivation of the weak isotropic 4-wave kinetic equation} \label{sec:weak_isotropic_wave_eq} Suppose that $n(\mathbf{k})=n(k)$ is a radial function (isotropic). 
The waveaction in the isotropic case can be written as $$W= \int_{\mathbb{R}^N} n(\mathbf{k}) d\mathbf{k} = \int_{\mathbb{R}_+\times S^{N-1}} n(k) k^{N-1} dk d\mathbf{s} = \frac{|S^{N-1}|}{\alpha} \int^\infty_0 n(\omega) \omega^{\frac{N-\alpha}{\alpha}} \, d\omega ,$$ where $S^{N-1}$ is the $(N-1)$-dimensional sphere. From this expression, one can define the angle-averaged frequency spectrum $\mu=\mu(d\omega)$ as $$\mu(d\omega):=\frac{|S^{N-1}|}{\alpha} \omega^{\frac{N-\alpha}{\alpha}}n(\omega)d\omega.$$ The total number of waves (waveaction) and the total energy are respectively \begin{eqnarray*} W&=& \int^\infty_0 \mu(d\omega),\\ E &=& \int^\infty_0 \omega \mu(d\omega). \end{eqnarray*} The isotropic version of the weak 4-wave kinetic equation can be written as \begin{equation} \label{eq:isotropic_4_wave_equation_weak2} \mu_t = \mu_0 + \int^t_0 Q(\mu_s, \mu_s, \mu_s) \, ds, \end{equation} where $Q$ is defined against test functions $g\in \mathcal{S}(\mathbb{R}_+)$ as \begin{eqnarray} \label{eq:Q_isotropic_case} \langle g, Q(\mu, \mu,\mu) \rangle &=& \frac{1}{2} \int_{D} \mu(d\omega_1) \mu(d\omega_2) \mu(d\omega_3) K(\omega_1,\omega_2,\omega_3) \\ \nonumber &&\qquad \times[ g(\omega_1+\omega_2-\omega_3) + g(\omega_3) -g(\omega_2) -g(\omega_1) ], \end{eqnarray} where $D:= \{ (\omega_1,\omega_2,\omega_3)\in \mathbb{R}^3_+ :\ \omega_1 + \omega_2 \geq \omega_3\}$ and, writing $\omega=\omega_1+\omega_2-\omega_3$, \begin{eqnarray} \label{eq:jump_kernel_isotropic_4wave} K(\omega_1, \omega_2, \omega_3) &=& \frac{8\pi}{\alpha |S^{N-1}|^4}\omega^{\frac{N-\alpha}{\alpha}}\\ && \qquad \int_{\left( S^{N-1}\right)^4} d\mathbf{s}_1d\mathbf{s}_2d\mathbf{s}_3d\mathbf{s}\, \overline{T}^2(\omega_1^{1/\alpha} \mathbf{s}_1, \omega_2^{1/\alpha}\mathbf{s}_2, \omega_3^{1/\alpha}\mathbf{s}_3, \omega^{1/\alpha} \mathbf{s}) \nonumber\\ && \qquad \qquad \times \delta(\omega_1^{1/\alpha}\mathbf{s}_1+ \omega_2^{1/\alpha}\mathbf{s}_2- \omega_3^{1/\alpha}\mathbf{s}_3- \omega^{1/\alpha} \mathbf{s}). \nonumber \end{eqnarray} \bigskip Next we explain the formal derivation 
of the weak isotropic 4-wave kinetic equation \eqref{eq:isotropic_4_wave_equation_weak}. We have that \begin{eqnarray*} \int_{(0,\infty)} \partial_t \mu(\omega) d\omega &=& \int_{\mathbb{R}^N} \partial_t n(\mathbf{k}) d\mathbf{k}\\ &=& 4\pi \int_{\Omega^4 \times S^4} \overline{T}^2(k_1 s_1, k_2 s_2, k_3 s_3, k s)\\ &&\qquad\qquad\times \delta(k_1 s_1+k_2 s_2-k_3s_3-ks) \delta(\omega_1+\omega_2-\omega_3-\omega)\\ && \qquad\qquad\times (n_1n_2n_3+n_1n_2n-n_1n_3n-n_2n_3n) (k k_1k_2k_3)^{N-1}dkds\\ &=&\frac{4\pi}{\alpha|S^{N-1}|^4} \int_{\mathbb{R}_+^4 \times S^4} d\omega_{0123} ds_{0123}T^2(\omega_1^{1/\alpha} s_1, \omega_2^{1/\alpha} s_2, \omega^{1/\alpha}_3 s_3, \omega^{1/\alpha} s)\\ &&\qquad\qquad\times \delta(\omega^{1/\alpha}_1 s_1+\omega^{1/\alpha}_2 s_2-\omega^{1/\alpha}_3s_3-\omega^{1/\alpha}s) \delta(\omega_1+\omega_2-\omega_3-\omega)\\ && \qquad\qquad\times (\mu(\omega_1)\mu(\omega_2)\mu(\omega_3)\omega^{\frac{N-\alpha}{\alpha}}+\mu(\omega_1)\mu(\omega_2)\mu(\omega) \omega_3^{\frac{N-\alpha}{\alpha}}\\ &&\qquad\qquad\quad-\mu(\omega_1)\mu(\omega_3)\mu(\omega)\omega_2^{\frac{N-\alpha}{\alpha}}-\mu(\omega_2)\mu(\omega_3)\mu(\omega)\omega_1^{\frac{N-\alpha}{\alpha}}) \\ &=&\int_{\mathbb{R}_+^4} d\omega_{0123} F(\omega_1,\omega_2,\omega_3,\omega)\delta(\omega_1+\omega_2-\omega_3-\omega)\\ && \qquad\qquad\times (\mu(\omega_1)\mu(\omega_2)\mu(\omega_3)\omega^{\frac{N-\alpha}{\alpha}}+\mu(\omega_1)\mu(\omega_2)\mu(\omega) \omega_3^{\frac{N-\alpha}{\alpha}}\\ && \qquad\qquad\quad-\mu(\omega_1)\mu(\omega_3)\mu(\omega)\omega_2^{\frac{N-\alpha}{\alpha}}-\mu(\omega_2)\mu(\omega_3)\mu(\omega)\omega_1^{\frac{N-\alpha}{\alpha}}) \\ \end{eqnarray*} for $S^i=(S^{N-1})^i$, $d\omega_{0123}=d\omega d\omega_1 d\omega_2 d\omega_3$, $ds_{0123}=ds_1 ds_2 ds_3 ds$, and \begin{eqnarray*} F(\omega_1, \omega_2, \omega_3,\omega) &=& \frac{4\pi}{\alpha|S^{N-1}|^4} \int_{S^4} ds_{0123} \overline{T}^2(\omega_1^{1/\alpha} s_1, \omega_2^{1/\alpha} s_2, \omega^{1/\alpha}_3 s_3, 
\omega^{1/\alpha} s)\\ &&\qquad\qquad\times\delta(\omega^{1/\alpha}_1 s_1+\omega^{1/\alpha}_2 s_2-\omega^{1/\alpha}_3s_3-\omega^{1/\alpha}s). \end{eqnarray*} \medskip Hence, $\mu(\omega)$ satisfies \begin{eqnarray}\label{eq:isotropic_4_waveKineticEquation} \partial_t \mu(\omega) &=& \int_{\mathbb{R}_+^3} d\omega_{123} F(\omega_1,\omega_2,\omega_3,\omega)\delta(\omega_1+\omega_2-\omega_3-\omega)\\ && \qquad\qquad\times (\mu(\omega_1)\mu(\omega_2)\mu(\omega_3)\omega^{\frac{N-\alpha}{\alpha}}+\mu(\omega_1)\mu(\omega_2)\mu(\omega) \omega_3^{\frac{N-\alpha}{\alpha}}\\ && \qquad\qquad-\mu(\omega_1)\mu(\omega_3)\mu(\omega)\omega_2^{\frac{N-\alpha}{\alpha}}-\mu(\omega_2)\mu(\omega_3)\mu(\omega)\omega_1^{\frac{N-\alpha}{\alpha}}) \nonumber \end{eqnarray} Its weak formulation $$\mu_t =\mu^{in} +\int^t_0 Q(\mu_s, \mu_s, \mu_s) \, ds$$ is defined against functions $g\in \mathcal{S}(\mathbb{R}_+)$ as \begin{eqnarray} \nonumber \langle g, Q(\mu, \mu,\mu) \rangle &=& \int_{\mathbb{R}_+^4} d\omega_{0123} \mu(\omega_1) \mu(\omega_2) \mu(\omega_3) \omega^{\frac{N-\alpha}{\alpha}} \\ \nonumber &&\qquad \times[ F_{1230} \delta(\omega^{12}_{30}) g(\omega) + F_{1203} \delta(\omega^{12}_{03})g(\omega_3) \\ \nonumber && \qquad\quad - F_{1032}\delta(\omega^{10}_{32})g(\omega_2) -F_{0231}\delta(\omega^{02}_{31})g(\omega_1) ]\\ \nonumber &=& \int_{\mathbb{R}_+^4} d\omega_{0123} \mu(\omega_1) \mu(\omega_2) \mu(\omega_3) \omega^{\frac{N-\alpha}{\alpha}} F_{1230} \delta(\omega^{12}_{30}) \\ \nonumber &&\qquad \times[ g(\omega) + g(\omega_3) -g(\omega_2) -g(\omega_1) ]\\ \nonumber &=& \frac{1}{2}\int_{D} d\omega_{123} \mu(\omega_1) \mu(\omega_2) \mu(\omega_3) K(\omega_1,\omega_2,\omega_3) \\ &&\qquad \times[ g(\omega_1+\omega_2-\omega_3) + g(\omega_3) -g(\omega_2) -g(\omega_1) ]. \end{eqnarray} In the last step we assumed that $\overline{T}$ is symmetric in all its variables. 
We used that, by relabelling the integration variables, \begin{eqnarray*} &&d\omega_{123} F_{1230} \delta(\omega^{12}_{30}) g(\omega)+ F_{1203} \delta(\omega^{12}_{03}) g(\omega_3)- F_{1032}\delta(\omega^{10}_{32})g(\omega_2) -F_{0231}\delta(\omega^{02}_{31})g(\omega_1)\\ && \qquad=d\omega_{123} F_{1230} \delta(\omega^{12}_{30}) g(\omega)+ F_{1203} \delta(\omega^{12}_{03}) g(\omega_3)- F_{3012}\delta(\omega^{30}_{12})g(\omega_2) -F_{0321}\delta(\omega^{03}_{21})g(\omega_1), \end{eqnarray*} together with the properties of the function $F$ needed to factorise it. We used the notation $\delta(\omega^{ij}_{lp})=\delta(\omega_i+\omega_j-\omega_l-\omega_p)$ and $$K(\omega_1, \omega_2, \omega_3):= 2(\omega_1 +\omega_2-\omega_3)^{\frac{N-\alpha}{\alpha}}F(\omega_1, \omega_2,\omega_3, \omega_1+\omega_2-\omega_3).$$ For the last line we used the \textit{sifting property} of the delta distribution, i.e., \begin{equation} \label{eq:sifting_property} \int^b_a f(t) \delta(t-d) \, dt = \left\{\begin{array}{l l} f(d) & \mbox{for } d\in[a, b]\\ 0 & \mbox{otherwise} \end{array} \right. . \end{equation} \begin{remark} In \cite[Section 3.1.3]{zakharov1992kolmogorov}, the authors state that, even in an isotropic medium, the interaction coefficient $\overline{T}$ cannot be assumed to be isotropic in the 4-wave case (in the 3-wave case this is possible). We can rewrite \begin{equation} \label{eq:def_f2} |\overline{T}(\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3, \mathbf{k})|^2 = \overline{T}_0^2 k^{2\beta} f_2\left(\frac{\mathbf{k}_1}{k},\frac{\mathbf{k}_2}{k},\frac{\mathbf{k}_3}{k}\right) \end{equation} for some dimensionless constant $\overline{T}_0$ and some dimensionless function $f_2$. \end{remark} \section{Conclusions} In this work we have dealt with the weak isotropic 4-wave kinetic equation with simplified kernels. When the kernels are at most linear we have given conditions for the local existence and uniqueness of solutions. 
We have also derived the equation as the mean-field limit of an interacting particle system undergoing simultaneous coagulation-fragmentation: three particles interact in a coagulation-fragmentation event in which one of the particles seems to act as a catalyst. As we saw in the introduction, this theory can be applied to physical scenarios that include Langmuir waves, shallow water and waves on elastic plates. Moreover, using the interacting particle system, numerical methods could be devised to simulate the solution of the equation (as done by \cite{connaughton2009numerical} for the 3-wave kinetic equation), by adapting the methods in \cite{eibeck2000approximative}. Finally, such numerical simulations would allow the study of steady-state solutions and a check of whether they match the Kolmogorov-Zakharov spectra. This will be attempted in future work. \bibliographystyle{amsalpha}
\section{INTRODUCTION} Flat rotation curves around galaxies constitute one of the most stunning astrophysical findings since the 1930s. The observations can simply be attributed to unobservable dark matter, which still lacks a satisfactory candidate. On the general relativity side, which reigns in the large-scale universe, an interesting approach is to develop appropriate models with a constant additional force. One such attempt was formulated by Grumiller \cite{1,2}, in which the central force was given by $F=-\left( \frac{m}{r^{2}}+a\right)$. Here $m$ represents the mass (both normal and dark) while the parameter "$a$" is a positive constant - called the Rindler acceleration \cite{3} - which gives rise to a constant attractive force. The Newtonian potential involved herein is $\Phi \left( r\right) \sim -\frac{m}{r}+ar$, so that for $r\rightarrow \infty $ the term $\Phi \left( r\right) \sim ar$ becomes dominant. Since in Newtonian circular motion $F=\frac{mv^{2}}{r}$, for a mass $m$ the tangential speed $v\left( r\right) $ and radius $r$ are related by $v\left( r\right) \sim r^{\frac{1}{2}}$ for large $r$, which brings us somewhat closer to the concept of flat rotation curves. No doubt, the details of exact flat rotation curves must be much more complicated than the toy model depicted here. Physically the parameter "$a$" becomes meaningful when one refers to an accelerated frame in flat space, known as the Rindler frame, and accordingly the terminology Rindler acceleration is adopted. In \cite{4} the impact of a Rindler-type acceleration on the Oort Cloud is studied, in \cite{5} the solar system constraints on the Rindler acceleration are investigated, while in \cite{6} the bending of light in the model of gravity at large distances proposed by Grumiller \cite{1,2} is considered. Let us also add that Modified Newtonian Dynamics (MOND) was proposed to tackle the flat rotation curves \cite{7}. 
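As a quick numerical illustration (our sketch, not taken from the cited papers), the circular-orbit speed implied by the potential $\Phi(r)\sim -\frac{m}{r}+ar$ is $v(r)=\sqrt{m/r+ar}$, which interpolates between the Keplerian $r^{-1/2}$ falloff and the Rindler-dominated $r^{1/2}$ growth; the values $m=1$, $a=0.1$ below are purely illustrative:

```python
import math

def orbital_speed(r, m=1.0, a=0.1):
    """Circular-orbit speed for the toy potential Phi(r) = -m/r + a*r.

    From m_test*v^2/r = |F| with F = -(m/r^2 + a) (per unit test mass),
    one gets v^2 = m/r + a*r.  m=1, a=0.1 are illustrative values only.
    """
    return math.sqrt(m / r + a * r)

# Small r: the Keplerian term dominates, v ~ r^{-1/2} (decreasing).
# Large r: the Rindler term dominates, v ~ r^{1/2} (slowly increasing).
for r in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"r = {r:7.1f}  v(r) = {orbital_speed(r):.3f}")
```

In the Rindler-dominated regime, quadrupling $r$ roughly doubles $v$, the $r^{1/2}$ scaling mentioned above; true flat curves would require a still slower growth.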
Assuming a physical source for the Rindler acceleration term in the spacetime metric has been challenging in recent years. An anisotropic fluid field was considered originally by Grumiller \cite{1,2}, whereas nonlinear electromagnetism was proposed as an alternative source \cite{8}. A fluid model with energy-momentum tensor of the form $T_{\mu }^{\nu }=diag[-\rho ,p,q,q]$ was proposed recently in the popular $f\left( R\right) $ gravity \cite{9}. For a review of the latter we propose \cite{10,11,12}. By a similar strategy we wish to employ the vast richness of $f(R)$ gravity models to identify possible candidates that may admit a Rindler-type acceleration. Our approach in this study, besides the Rindler acceleration, is to elaborate on the energy conditions in $f\left( R\right) $ gravity. Although violation of the energy conditions is not necessarily a problem (for instance, any quantum field theory violates all energy conditions), it is still interesting to investigate their non-violation. Note that energy conditions within the context of dark matter in $f(R)$ gravity were considered by various authors \cite{13}. This at least will filter out the viable models that satisfy the energy conditions. In brief, for our choice of energy-momentum the weak energy conditions (WECs) can be stated as follows: i) WEC1 says that the energy density $\rho \geqslant 0$; ii) WEC2 says that $\rho +p\geqslant 0$; and iii) WEC3 states that $\rho +q\geqslant 0$. The more stringent strong energy conditions (SECs) require further that $\rho +p+2q\geqslant 0$, which will not be our concern in this paper. However, some of our models satisfy the SECs as well. Our technical method can be summarized as follows. Upon obtaining $\rho ,$ $p$ and $q$ as functions of $r$ we shall search numerically for the geometrical regions in which the WECs are satisfied. (A detailed work on energy conditions in $f(R)$ gravity was done by J. Santos et al. in \cite{14}.) 
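Once $\rho$, $p$ and $q$ are available on a radial grid, the pointwise conditions just listed are straightforward to test in code; a minimal sketch (ours, with purely illustrative numbers rather than values from the field equations):

```python
def wec_holds(rho, p, q):
    """Weak energy conditions for T = diag(-rho, p, q, q):
    rho >= 0, rho + p >= 0 and rho + q >= 0."""
    return rho >= 0 and rho + p >= 0 and rho + q >= 0

def sec_holds(rho, p, q):
    """Strong energy condition as used here: additionally rho + p + 2q >= 0."""
    return wec_holds(rho, p, q) and rho + p + 2 * q >= 0

# Illustrative numbers only -- in practice rho, p, q are functions of r
# obtained from the field equations and scanned numerically over r.
print(wec_holds(1.0, -0.5, 0.2))   # all three WEC inequalities hold
print(sec_holds(0.1, -0.3, 0.05))  # rho + p < 0: fails already at WEC level
```

Scanning such a predicate over the radial grid directly yields the shaded "WEC-satisfied" regions shown in the figures.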
From the outset our strategy is to assume the validity of the Rindler modified Schwarzschild metric a priori and search for the types of $f(R)$ models which are capable of yielding such a metric. Overall we test ten different $f\left( R\right) $ gravity models and observe that in most cases it is possible to tune the free parameters so as to render the WECs satisfied. In doing this we rely entirely on numerical plots and we admit that our list is not an exhaustive one in the $f\left( R\right) $ arena. The organization of the paper goes as follows. Sec. II introduces the formalism with the derivation of the density and pressure components. Sec. III presents the types of $f\left( R\right) $ models relevant to the Mannheim metric. The paper ends with the Conclusion in Sec. IV. \section{The Formalism} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig01} \caption{A plot of $WEC1,$ $WEC2$ and $WEC3$ for $m=1$, $a=0.1$ and $b=1.$ To have an idea of the range in which the WECs are satisfied we also plot the metric function, which identifies the location of the horizon. It is observed from the figure that the WECs are all satisfied for $r\geq r_{h}$, in which $r_{h}$ is the event horizon of Grumiller's metric. Since $R<0$ the plot of $f(R)$ is from $-\infty $ up to zero, and as can be seen $\frac{df}{dR}<0$ while $\frac{d^{2}f}{dR^{2}}>0$. We also plot the heat capacity $C$ w.r.t. the horizon radius $r_{h}$.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig02} \caption{Our choice of parameters is $\protect\nu =1,$ $\protect\mu =2,$ $b=1,$ $c=-1,$ $\Lambda _{1}=0=\Lambda _{2}$. The WECs are shown to be satisfied, while stability holds only for $R<-1$. This can easily be checked from $f\left( R\right) =R+\frac{1}{R}+R^{2}$. Thermodynamic stability (i.e. 
$C>0$) is also shown in the inscription.} \end{figure} Let's start with the following action ($\kappa =8\pi G=1$) \begin{equation} S=\frac{1}{2}\int \sqrt{-g}f\left( R\right) d^{4}x+S_{M}, \end{equation} where $f\left( R\right) $ is a function of the Ricci scalar $R$ and $S_{M}$ is the physical source for a perfect fluid-type energy momentum \begin{equation} T_{\mu }^{\nu }=\left( \begin{array}{cccc} -\rho & 0 & 0 & 0 \\ 0 & p & 0 & 0 \\ 0 & 0 & q & 0 \\ 0 & 0 & 0 & q \end{array}\right). \end{equation} We adopt the static spherically symmetric line element \begin{equation} ds^{2}=-A\left( r\right) dt^{2}+\frac{1}{A(r)}dr^{2}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) \end{equation} with \begin{equation} A\left( r\right) =1-\frac{2m}{r}+2ar, \end{equation} which will be referred to henceforth as the Mannheim metric \cite{15} (note that it was rediscovered by Grumiller in \cite{1,2}). Einstein's field equations follow from the variation of the action with respect to $g_{\mu \nu }$ and read \begin{equation} G_{\mu }^{\nu }=\frac{1}{F}T_{\mu }^{\nu }+\check{T}_{\mu }^{\nu }, \end{equation} in which $G_{\mu }^{\nu }$ is the Einstein tensor. The share of the curvature in the energy-momentum is given by \begin{equation} \check{T}_{\mu }^{\nu }=\frac{1}{F}\left[ \nabla ^{\nu }\nabla _{\mu }F-\left( \square F-\frac{1}{2}f+\frac{1}{2}RF\right) \delta _{\mu }^{\nu }\right], \end{equation} while $T_{\mu }^{\nu }$ refers to the fluid source \cite{1,2}. Following the standard notation, $\square =\nabla ^{\mu }\nabla _{\mu }=\frac{1}{\sqrt{-g}}\partial _{\mu }\left( \sqrt{-g}\partial ^{\mu }\right) $ and $\nabla ^{\nu }\nabla _{\mu }u=g^{\lambda \nu }\nabla _{\lambda }u_{,\mu }=g^{\lambda \nu }\left( \partial _{\lambda }u_{,\mu }-\Gamma _{\lambda \mu }^{\beta }u_{,\beta }\right) $ for a scalar function $u$. 
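A quantity used repeatedly below is the Ricci scalar of the line element (3) with (4), which equals $R=-12a/r$. For the ansatz $ds^{2}=-A\,dt^{2}+A^{-1}dr^{2}+r^{2}d\Omega^{2}$ the scalar curvature reduces to $R=-A''-\frac{4A'}{r}-\frac{2(A-1)}{r^{2}}$ (a standard textbook reduction, quoted here without derivation); a quick symbolic check of the claimed value (our sketch, not part of the original paper):

```python
import sympy as sp

r, m, a = sp.symbols('r m a', positive=True)

# Metric function of the Mannheim/Grumiller line element, Eq. (4)
A = 1 - 2 * m / r + 2 * a * r

# Ricci scalar for ds^2 = -A dt^2 + dr^2/A + r^2 dOmega^2
# (standard reduction for this static, spherically symmetric ansatz)
R = -sp.diff(A, r, 2) - 4 * sp.diff(A, r) / r - 2 * (A - 1) / r**2

print(sp.simplify(R))   # the mass term drops out, leaving -12*a/r
```

The Schwarzschild piece $-2m/r$ contributes nothing (as it must, being Ricci-flat), so only the Rindler term survives and $R<0$ for $a>0$.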
The three independent Einstein field equations are explicitly given by \begin{equation} FR_{t}^{t}-\frac{f}{2}+\square F=\nabla ^{t}\nabla _{t}F+T_{t}^{t}, \end{equation} \begin{equation} FR_{r}^{r}-\frac{f}{2}+\square F=\nabla ^{r}\nabla _{r}F+T_{r}^{r}, \end{equation} \begin{eqnarray} FR_{\theta _{i}}^{\theta _{i}}-\frac{f}{2}+\square F &=&\nabla ^{\theta _{i}}\nabla _{\theta _{i}}F+T_{\theta _{i}}^{\theta _{i}} \\ &&\left( F=\frac{df}{dR}\right), \end{eqnarray} in which $\theta _{i}=\left( \theta ,\varphi \right) .$ Adding these equations (i.e., $tt$, $rr$, $\theta \theta $ and $\varphi \varphi $) one gets the trace equation \begin{equation} FR-2f+3\square F=T, \end{equation} which is not an independent equation. Using the field equations one finds \begin{equation} \rho =\nabla ^{t}\nabla _{t}F-FR_{t}^{t}+\frac{f}{2}-\square F, \end{equation} \begin{equation} p=-\nabla ^{r}\nabla _{r}F+FR_{r}^{r}-\frac{f}{2}+\square F, \end{equation} and \begin{equation} q=-\nabla ^{\theta }\nabla _{\theta }F+FR_{\theta }^{\theta }-\frac{f}{2}+\square F. \end{equation} In what follows we find the energy-momentum components for different models of $f(R)$ gravity together with their thermodynamical properties. \section{$f(R)$ Models apt for the Rindler Acceleration} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig03} \caption{Our parameters in this case are $\protect\nu =1,$ $\protect\mu =3,$ $b=-1,$ $c=-1$ with $\Lambda _{1}=0=\Lambda _{2}$, so that $f(R)$ takes the form $f\left( R\right) =R+\frac{1}{R}-R^{3}$, which satisfies the WECs. This choice yields a stable model for $R<-\frac{1}{\sqrt[4]{3}}.$ Beyond a certain horizon radius the specific heat $C$ is also positive.} \end{figure} In this section we investigate a set of possible $f(R)$ gravity models which admit the line element (3) as the static spherically symmetric solution of their field equations. 
Then by employing Eqs. (12) to (14) we shall find the energy density $\rho $ and the pressures $p$ and $q.$ Having found $\rho $, $p$ and $q$ we investigate the energy conditions together with the feasibility of the $f(R)$ models numerically. More precisely, we work on the weak energy conditions, which comprise three individual conditions: \begin{equation} WEC1:\quad \rho \geq 0, \end{equation} \begin{equation} WEC2:\quad \rho +q\geq 0, \end{equation} and \begin{equation} WEC3:\quad \rho +p\geq 0. \end{equation} In the numerical plotting, we plot explicitly $WEC1,$ $WEC2$ and $WEC3$ in terms of $r$ to work out the region(s) in which the WECs are satisfied. In addition to the WECs we plot $f\left( R\right) $ in terms of $R$ to find out whether the model is physically acceptable, by imposing the well-known conditions on $f\left( R\right) $, which are given by \begin{equation} F\left( R\right) =\frac{df\left( R\right) }{dR}>0 \end{equation} in order not to have ghost fields, and \begin{equation} \frac{d^{2}f\left( R\right) }{dR^{2}}>0 \end{equation} to have a stable model. Before we start to study the $f(R)$ models, we add that in the case of the Mannheim metric the Ricci scalar is given by $R=-\frac{12a}{r}$, which is negative ($a>0$). \subsection{The Models} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig04} \caption{From Eq. (31) we choose the parameters as $\protect\mu =1,$ $b=1$ and $n=-3$. We find a restricted domain in which the WECs are satisfied. For these parameters, besides the WECs, the stability condition is also satisfied, since $\frac{d^{2}f}{dR^{2}}=6\left( 1+R^{2}\right) \left( 1+5R^{2}\right) >0$.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig05} \caption{In the model given by the $f(R)$ in Eq. (32) we have not been able to find a physically admissible region in which the WECs are satisfied.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig06} \caption{For the $f\left( R\right) $ model in Eq. 
(33) and the choice $c=1/3,$ $\protect\varepsilon =1$ and $b=1$, we observe that the WECs are not satisfied. The specific heat function is also pictured in the inscription.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig07} \caption{The choice of parameters $c=1.1,$ $\protect\varepsilon =1$ and $b=1$ in Eq. (33) yields a region where the WECs are satisfied. It can be checked that $\frac{d^{2}f}{dR^{2}}>0$ is also satisfied. For $\left\vert R\right\vert >\left\vert R_{0}\right\vert $, where $f^{\prime }\left( R_{0}\right) =0$, we have $\frac{df}{dR}>0$, which implies a ghost-free solution. The everywhere-positive specific heat $C$ is also shown in the inscription.} \end{figure} \textbf{1)} Our first model, which we find interesting, is given by \cite{16} \begin{equation} f\left( R\right) =\sqrt{R^{2}+b^{2}} \end{equation} for $b=$ constant. For $\left\vert R\right\vert \gg b,$ this model is a good approximation to Einstein's $f(R)=R$ gravity. In the other extreme, namely $\left\vert R\right\vert \ll b$, $b$ may be considered as a cosmological constant. Having this $f\left( R\right) $ one finds \begin{equation} \frac{df}{dR}=\frac{R}{\sqrt{R^{2}+b^{2}}}, \end{equation} \begin{equation} \frac{d^{2}f}{dR^{2}}=\frac{b^{2}}{\left( R^{2}+b^{2}\right) ^{3/2}}, \end{equation} the second of which is positive for all $R$, so the stability condition is satisfied (note from Fig. 1 that $\frac{df}{dR}<0$ here, since $R<0$). Yet we have to check the WECs at least, to see whether it can be a good candidate for a spacetime with Rindler acceleration, namely the Mannheim metric. Figure 1 displays $WEC1,$ $WEC2$ and $WEC3$ together with part of $A(r)$ in terms of $r.$ We see that the WECs are satisfied right after the horizon. Therefore this model can be a good candidate for what we are looking for. This model is also interesting in other aspects. 
For instance, in the limit when $b$ is small one may write
\begin{equation}
f\left( R\right) \simeq \left\vert R\right\vert +\frac{b^{2}}{2}\frac{\left\vert R\right\vert }{R^{2}},
\end{equation}
which is a kind of small fluctuation about $R$ gravity for $\left\vert R\right\vert \gg b$. In particular, this model of $f(R)$ gravity satisfies all the necessary conditions to be a physical model hosting Mannheim's metric. Hence we go one step further and check the heat capacity of the spacetime, to investigate whether the solution is stable from the thermodynamical point of view. To do so, first we find the Hawking temperature
\begin{equation}
T_{H}=\left. \frac{\frac{\partial }{\partial r}g_{tt}}{4\pi }\right\vert _{r=r_{h}}=\frac{m+ar_{h}^{2}}{2\pi r_{h}^{2}}.
\end{equation}
Then, from the general form of the entropy in $f(R)$ gravity, we find
\begin{equation}
S=\left. \frac{\mathcal{A}}{4G}F\right\vert _{r=r_{h}}=\pi r_{h}^{2}F_{h},
\end{equation}
in which $\left. \mathcal{A}\right\vert _{r=r_{h}}=4\pi r_{h}^{2}$ is the surface area of the black hole at the horizon and $F_{h}=\left. F\right\vert _{r=r_{h}}=\frac{-12a}{\sqrt{144a^{2}+b^{2}r_{h}^{2}}}$. Having $T_{H}$ and $S$ available, one may find the heat capacity of the black hole as
\begin{equation}
C=T\left( \frac{\partial S}{\partial T}\right) =\frac{12\left( 1+4ar_{h}\right) \left( 288a^{2}+b^{2}r_{h}^{2}\right) r_{h}^{2}\pi a}{\left( 144a^{2}+b^{2}r_{h}^{2}\right) ^{3/2}}.
\end{equation}
We comment here that $C$ is always positive and nonsingular, irrespective of the values of the free parameters, given the fact that $a>0$. This indeed means that the black hole solution will not undergo a phase change, as expected from a stable physical solution. \textbf{2)} The second model which we shall study in this part has been introduced and studied by Nojiri and Odintsov in \cite{17}.
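As an aside, the closed-form results for the first model above can be cross-checked symbolically. The sketch below (Python/SymPy; ours, not part of the paper's numerics) assumes the metric function $A(r)=1-2m/r+2ar$ implied by the quoted $T_{H}$, eliminates $m$ through $A(r_{h})=0$, and recovers the quoted heat capacity $C$.

```python
# Cross-check of df/dR, d2f/dR2 and C for f(R) = sqrt(R^2 + b^2).
# Assumption (ours): A(r) = 1 - 2m/r + 2ar, so A(r_h) = 0 gives m = r(1+2ar)/2.
import sympy as sp

R = sp.Symbol('R', real=True)
r, a, b = sp.symbols('r a b', positive=True)   # r stands for r_h

f = sp.sqrt(R**2 + b**2)
assert (sp.diff(f, R) - R/sp.sqrt(R**2 + b**2)).equals(0)
assert (sp.diff(f, R, 2) - b**2/(R**2 + b**2)**sp.Rational(3, 2)).equals(0)

m = r*(1 + 2*a*r)/2                            # from A(r_h) = 0
T = (m + a*r**2)/(2*sp.pi*r**2)                # Hawking temperature
F_h = -12*a/sp.sqrt(144*a**2 + b**2*r**2)      # F = df/dR at R = -12a/r
S = sp.pi*r**2*F_h                             # S = (A/4G) F, with G = 1

C = sp.simplify(T*sp.diff(S, r)/sp.diff(T, r)) # C = T dS/dT via the chain rule
C_quoted = 12*(1 + 4*a*r)*(288*a**2 + b**2*r**2)*r**2*sp.pi*a \
           / (144*a**2 + b**2*r**2)**sp.Rational(3, 2)
assert (C - C_quoted).equals(0)                # matches the expression above
```

The same setup confirms that $C>0$ whenever $a,b,r_{h}>0$, in line with the no-phase-change remark.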
As they have reported in their paper \cite{17}, ``\textit{this model naturally unifies two expansion phases of the Universe: inflation at early times and cosmic acceleration at the current epoch}.'' This model of $f(R)$ is given by
\begin{equation}
f(R)=R-\frac{c}{\left( R-\Lambda _{1}\right) ^{\nu }}+b\left( R-\Lambda _{2}\right) ^{\mu }
\end{equation}
in which $b,$ $c,$ $\Lambda _{1},$ $\Lambda _{2},$ $\mu $ and $\nu $ are adjustable parameters. Our plotting strategy for each model is such that if the WECs are violated (note that such cases are copious) we ignore such figures, while regions satisfying the WECs are shaded. The other conditions, $\frac{df}{dR}>0$ and $\frac{d^{2}f}{dR^{2}}>0$, are satisfied in some cases but not in others. In Figs. 2 and 3 we plot $WEC1,$ $WEC2$ and $WEC3$ in terms of $r$ for specific values of $\nu ,$ $\mu ,$ $b,$ $c$, i.e., in Fig. 2 $\nu =1,$ $\mu =2,$ $b=1,$ $c=-1$, $\Lambda _{1}=0$, $\Lambda _{2}=0,$ and in Fig. 3 $\nu =1,$ $\mu =3,$ $b=-1,$ $c=-1,$ $\Lambda _{1}=0$, $\Lambda _{2}=0$. Among the particular cases considered here, one observes that Fig. 2 and Fig. 3, which correspond to
\begin{equation}
f\left( R\right) =R+\frac{1}{R}+R^{2}
\end{equation}
and
\begin{equation}
f\left( R\right) =R+\frac{1}{R}-R^{3}
\end{equation}
respectively, are physically acceptable as far as the WECs are concerned. We also note that in these two figures we plot the heat capacity in terms of $r_{h}$ to show whether the solutions are thermodynamically stable. $\frac{d^{2}f}{dR^{2}}$ reveals that (28) and (29) are locally stable. \textbf{3)} Our next model is a Born-Infeld type gravity, which has been studied in the more general form of Dirac-Born-Infeld modified gravity by Quiros and Ure\~{n}a-L\'{o}pez in \cite{18}.
The Born-Infeld model of gravity is given by $f\left( R\right) =2b\left( 1-\sqrt{1+\frac{\left\vert R\right\vert }{b}}\right) ,$ which implies
\begin{equation}
F\left( R\right) =\frac{1}{\sqrt{1+\frac{\left\vert R\right\vert }{b}}} \notag
\end{equation}
and
\begin{equation}
\frac{d^{2}f}{dR^{2}}=\frac{1}{2\left( 1+\frac{\left\vert R\right\vert }{b}\right) ^{3/2}}.
\end{equation}
Clearly both are positive functions of $R$; therefore the solution given in this model is stable and ghost free. In spite of that, the WECs are not satisfied, and therefore this model is not a proper model for Mannheim's metric as far as the energy conditions are concerned. \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig08} \caption{The model with $f\left( R\right) =Re^{-\frac{1}{R}}$ gives a region in which the WECs are satisfied. However, since $\frac{d^{2}f}{dR^{2}}=\frac{1}{R^{3}}e^{-\frac{1}{R}}<0$, it gives an unstable model. Beyond a certain radius the specific heat is also positive, which is required for thermodynamical stability.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig09} \caption{Our model in this case is given by $f\left( R\right) =R\left( e^{\frac{b}{R}}-1\right) $ with $b=$ const. With the choice $b=1.1$ it is observed that the WECs are satisfied while the stability condition is violated, in spite of the fact that the specific heat $C$ is everywhere positive.} \end{figure} \textbf{4)} Another interesting model of $f(R)$ gravity is given by \cite{19}
\begin{equation}
f\left( R\right) =R-\mu b\left[ 1-\left( 1+\frac{R^{2}}{b^{2}}\right) ^{-n}\right]
\end{equation}
in which $\mu ,$ $b$ and $n$ are constants. Figure 4 with $\mu =1,$ $b=1,$ $n=-3$ shows that between the horizon and a maximum radius we may have a physical region in which $f^{\prime \prime }>0$. Now let us consider \cite{20} the model
\begin{equation}
f\left( R\right) =R-\mu b\frac{\left( \frac{R}{b}\right) ^{2n}}{\left( \frac{R}{b}\right) ^{2n}+1}
\end{equation}
which amounts to Fig.
5, and clearly there is no physical region. \textbf{5)} Here we use another model, introduced in \cite{21}, which is given by
\begin{equation}
f\left( R\right) =R\left( 1-c\right) +c\varepsilon \ln \left( \frac{\cosh \left( \frac{R}{\varepsilon }-b\right) }{\cosh \left( b\right) }\right) +\frac{R^{2}}{6m^{2}}
\end{equation}
in which $c,$ $\varepsilon ,$ $b$ and $m$ are all constants. Our analysis leads to Fig. 6 with $c=\frac{1}{3}$ and Fig. 7 with $c=1.1$. One observes that, although in Fig. 6 there is no physical region possible, for a different $c$ in Fig. 7 and for $r>r_{h}$ our physical conditions are satisfied provided $\left\vert R\right\vert <\left\vert R_{0}\right\vert $, where $R_{0}$ is the point at which $F\left( R\right) =0.$ \textbf{6)} In Ref. \cite{22} an exponential form of $f(R)$ is introduced, which is given by
\begin{equation}
f\left( R\right) =Re^{\frac{b}{R}}
\end{equation}
in which $b=$ constant, with its first derivative
\begin{equation}
F\left( R\right) =e^{\frac{b}{R}}\left( 1-\frac{b}{R}\right) .
\end{equation}
Our numerical plotting yields Fig. 8 for this model with $b=-1$. We comment here that although the case $b=-1$ satisfies the WECs, in both cases $f^{\prime \prime }\left( R\right) $ is negative, which makes the model unphysical. \bigskip \textbf{7)} Another exponential model, also given in \cite{22}, reads
\begin{equation}
f(R)=Re^{bR},
\end{equation}
in which $b=$ constant, and
\begin{equation}
F(R)=e^{bR}\left( 1+bR\right) .
\end{equation}
This does not satisfy the energy conditions and therefore it is not a physically interesting case. \textbf{8)} In Ref. \cite{23} a modified version of our models 6 and 7 is given, in which
\begin{equation*}
f(R)=R\left( e^{\frac{b}{R}}-1\right)
\end{equation*}
with $b=$ constant and
\begin{equation*}
F(R)=e^{\frac{b}{R}}\left( 1-\frac{b}{R}\right) -1.
\end{equation*}
Figure 9 shows our numerical results with $b=0.1$.
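As an aside, the sign checks behind these verdicts are mechanical. The sketch below (Python/SymPy; ours, not the paper's code) evaluates $F=df/dR$ and $d^{2}f/dR^{2}$ at a sample point on the negative-$R$ branch appropriate to Mannheim's metric ($R=-12a/r<0$), for Eq. (28) and for model 6 with $b=-1$.

```python
# Ghost-free (F > 0) and stability (f'' > 0) checks at a sample negative R.
import sympy as sp

R = sp.Symbol('R', real=True)

def conditions(f, R0=sp.Integer(-2)):
    """Return (F > 0, f'' > 0) at the sample point R0 on the R < 0 branch."""
    F = sp.diff(f, R)
    d2f = sp.diff(f, R, 2)
    return F.subs(R, R0) > 0, d2f.subs(R, R0) > 0

# Eq. (28): f'' = 2/R^3 + 2 > 0 at R = -2, i.e. locally stable there.
ghost_free28, stable28 = conditions(R + 1/R + R**2)

# Model 6 with b = -1: f'' = exp(-1/R)/R^3 < 0 for R < 0, as in Fig. 8.
ghost_free6, stable6 = conditions(R*sp.exp(-1/R))
```

The sample point $R_{0}=-2$ is illustrative; any radius with $12a/r=2$ realizes it.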
For a region bounded from above and from below, the WECs are satisfied while $f^{\prime \prime }\left( R\right) $ is negative, which makes our model non-physical. \textbf{9)} Among the exponential models of gravity let us consider \cite{24}
\begin{equation}
f\left( R\right) =R+be^{\alpha R}
\end{equation}
where $\alpha $ and $b$ are constants, and
\begin{equation}
F\left( R\right) =1+b\alpha e^{\alpha R}. \notag
\end{equation}
Figure 10 displays our numerical calculations for the specific values $\alpha =-1$ and $b=1$. Evidently, although the WECs and the stability condition are satisfied there, $F\left( R\right) =1-e^{-R}<0$ for the negative $R$ of Mannheim's metric, so this model is not ghost free and hence not a feasible model. \textbf{10) }Finally, we consider a model of gravity given in Ref. \cite{25}
\begin{equation}
f\left( R\right) =\left( \left\vert R\right\vert ^{b}-\Lambda \right) ^{\frac{1}{b}}
\end{equation}
in which $b$ is a constant. The first derivative of the model is given by
\begin{equation*}
F(R)=\left\vert R\right\vert ^{b-1}\left( \left\vert R\right\vert ^{b}-\Lambda \right) ^{\frac{1}{b}-1}.
\end{equation*}
Figures 11 and 12 are for $b=\frac{1}{2}$ and $b=2,$ respectively, with $\Lambda =1.$ We observe that the WECs are satisfied in a restricted region, while $b=2$ gives a stable model and $b=\frac{1}{2}$ an unstable one. \section{CONCLUSION} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig10} \caption{In this model we use $f\left( R\right) =R+be^{\protect\alpha R}$, where $\protect\alpha $ and $b$ are constants. For $\protect\alpha =-1$ and $b=1,$ the WECs are satisfied and $\frac{d^{2}f}{dR^{2}}>0$. The specific heat is also shown to be positive.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig11} \caption{Our model is given by $f\left( R\right) =\left( \left\vert R\right\vert ^{2}-1\right) ^{2}$, which has the WECs satisfied and $\frac{d^{2}f}{dR^{2}}>0$ for $\left\vert R\right\vert >\left\vert R_{0}\right\vert $, where $f^{\prime }\left( R_{0}\right) =0.$ This indicates stability of the solution.
Furthermore, the specific heat suggests a thermodynamically stable model too.} \end{figure} \begin{figure}[tbp] \includegraphics[width=80mm,scale=0.7]{Fig12} \caption{This is the model with $f\left( R\right) =\left( \left\vert R\right\vert ^{\frac{1}{2}}-1\right) ^{\frac{1}{2}}$, which has the WECs all satisfied while the stability condition is violated. It is thermodynamically stable since $C>0.$} \end{figure} In Einstein's general relativity, which corresponds to $f\left( R\right) =R$, the Rindler modification of the Schwarzschild metric faces the problem that the energy conditions are violated. For a resolution of this problem we invoke the large class of $f\left( R\right) $ theories. From the cosmological standpoint, the main reason that we insist on the Rindler acceleration term can be justified as follows: at large distances such a term may explain the flat rotation curves as well as the dark matter problem. Our physical source, besides the gravitational curvature, is taken to be a fluid with equal angular components. Being negative, the radial pressure is repulsive, in accordance with the expectations of dark energy. Our scan covered ten different $f\left( R\right) $ models, and in most cases, by tuning the free parameters, we show that the WECs are satisfied. In all ten models we searched primarily for the validity of the WECs as well as for $\frac{d^{2}f}{dR^{2}}>0$, i.e., stability. With some effort, thermodynamic stability can also be checked through the specific heat. With equal ease $\frac{df}{dR}>0,$ i.e., the absence of ghosts, can be traced. Fig. 1, for instance, depicts the model with $f(R)=\sqrt{R^{2}+b^{2}}$ ($b=$ constant), in which the WECs and stability, even the thermodynamic stability, are all satisfied; however, it hosts ghosts since $\frac{df}{dR}<0$ for $R<0$. Finally, among all models considered herein, we note that Fig.
7 satisfies the WECs, the stability conditions, as well as the ghost-free condition for $r>r_{\min }$, in which $r_{\min }\geq r_{h}$ depends on the other parameters. Lastly, we comment that the abundance of parameters in the $f\left( R\right) $ theories is one of their weak aspects. This weakness, however, may be used to obtain various limits, and for this reason particular tuning of the parameters is crucial. Our requirements have been the weak energy conditions (WECs), Rindler acceleration, stability and the absence of ghosts. Naturally, further restrictions will add further constraints to dismiss some of the cases considered as viable in this study.
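The local part of such a scan can be reproduced in outline. The snippet below (Python/SymPy; ours, with illustrative parameter values $a=b=1$) maps the radial domain to $R=-12a/r$ and tabulates where the ghost-free and stability conditions hold for the model of Fig. 1.

```python
# Tabulate F > 0 (ghost-free) and f'' > 0 (stability) along r, with R = -12a/r.
import sympy as sp

R = sp.Symbol('R', real=True)

def scan(f, radii, a=1):
    """For each radius, evaluate both conditions at R = -12a/r."""
    F, d2f = sp.diff(f, R), sp.diff(f, R, 2)
    rows = []
    for rv in radii:
        Rv = sp.Rational(-12*a, rv)          # Ricci scalar at this radius
        rows.append((rv, bool(F.subs(R, Rv) > 0), bool(d2f.subs(R, Rv) > 0)))
    return rows

b = 1  # illustrative
for rv, ghost_free, stable in scan(sp.sqrt(R**2 + b**2), [1, 5, 20]):
    print(f'r = {rv:2d}: ghost-free = {ghost_free}, stable = {stable}')
```

For this model the scan returns stable at every radius but never ghost-free, in line with the remark on Fig. 1 above; the WECs themselves require the explicit $\rho$, $p$ and $q$ of Eqs. (12) to (14) and are not checked here.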
\section{Introduction}\label{intro} Observations and theory demonstrate that star-planet interaction (SPI) is a complex, yet potentially very informative probe of extrasolar planetary magnetic fields. In Shkolnik et al.~(2003, 2005a), we reported on planet-induced chromospheric activity on two stars, HD 179949 and $\upsilon$~And, apparent from the night-to-night modulation of the Ca II H \& K chromospheric emission phased with the hot Jupiter's orbit. The modulation was indicative of a magnetic rather than tidal interaction (Cuntz et al.~2000), such that the period of the observed stellar activity correlated with the planet's orbital period $P_{orb}$, rather than $P_{orb}/2$. Ample observational evidence of tidal and magnetic interactions exists in the exaggerated case of the RS Canum Venaticorum (RS CVn) stars, which are tightly-orbiting binary systems consisting of two chromospherically active late-type stars (e.g.~Glebocki et al.~1986, Catalano et al.~1996, Shkolnik et al.~2005b). Although efforts to observe variable radio emission from the stars with hot Jupiters have not yet been successful (e.g.~Lazio \& Farrell 2007, George \& Stevens 2007), there have been several additional observations that support the existence of SPI. Photometric observations by the MOST space telescope of several hot-Jupiter systems, including HD 179949 and $\tau$~Boo, suggest that stellar surface activity in the form of active spots may be induced by the giant planet (Walker et al.~2006, 2007). Also, Saar et al.~(2006) recently reported a possible detection of planet-induced X-ray emission from the HD 179949 system, corresponding to a $\sim$30\% increase in X-ray flux over quiescent levels, coincident with the phase of the Ca II enhancements at $\phi_{orb}$$\sim$0.8.
A statistical analysis by Kashyap et al.~(2006) suggests that the X-ray flux from stars with hot Jupiters is on average $\gtrsim$ 3 times greater than stars with planets at larger orbital distances, presenting further evidence that close-in giant planets have a measurable effect on the activity of the parent star. One scenario of magnetospheric interaction proposes that the planet induces reconnection events as it travels through the large stellar magnetic loops (Cuntz et al.~2000, Ip et al.~2004), implying that the resulting activity should depend on the star's magnetic field, the planet's magnetic field and the orbital distance with respect to the Alfv\'en radius of the host star ($\sim$10 stellar radii). Novel research of the magnetic field topology of hot Jupiter host stars is underway (Catala et al.~2007, Moutou et al.~2007) using Zeeman Doppler Imaging (ZDI), which hopes to contribute to a more detailed understanding of SPI. A detection of a magnetic field of a hot Jupiter would 1) provide a constraint on the rapid hydrodynamic escape of its atmosphere (Vidal-Madjar et al.~2003, 2004) which could affect the planet's structure and evolution, 2) present implications for the planet's internal structure, and 3) shed light on the mass-radius relationship of the known transiting planets (Pont et al.~2005, Bakos et al.~2006). Although the internal magnetic fields of hot Jupiters are expected to be weaker than Jupiter's due to probable tidal locking and slower spin rates (Sanchez-Lavega 2004, Griessmeier et al.~2004), Olson \& Christensen (2006) calculated that the magnetic field of a planet with even a tenth of Jupiter's rotation rate would still have a strong dipole moment, when reasonably assuming that the convection is not highly modified by the rotation rate. 
Also, the fact that both hot and very hot Jupiters, such as HD 209458~b and OGLE-TR-56~b, are detected at all means that they must have strong enough magnetic fields to balance the extreme stellar irradiation and CME plasma pressure to prevent destructive atmospheric erosion (Khodachenko et al.~2007). We seek to probe hot Jupiter magnetic fields in order to understand their formation and evolution. SPI potentially offers an indirect way to detect, and with future modeling and observations, measure planetary magnetic fields. It is reasonable to assume that any magnetic interaction would be greatest in the outermost layers of the star, namely the chromosphere, transition region and the corona, due to their proximity to the planet, low density, and non-radiative heat sources. With the commissioning of CFHT's high-resolution \'echelle spectrograph, ESPaDOnS, we are able to include several stellar activity indicators in our analysis to observe the interaction as a function of atmospheric height in order to model the energy transfer and dissipation mechanisms of this phenomenon. ESPaDOnS' wavelength coverage allows simultaneous monitoring of the Ca II infrared triplet (IRT, lower chromosphere), H$\alpha$, Ca II H, K (middle chromosphere), and He I D$_3$ (upper chromosphere). Our program stars have planets with orbital periods between 2.2 and 7.1 days, eccentricities $\approx$~0 and semi-major axes $< 0.08$ AU. These systems offer the best chance of observing upper atmospheric heating. Of the seven systems we observed with ESPaDOnS, $\tau$~Boo, HD~179949, $\upsilon$ And and HD~209458 have been observed previously in our CFHT/Gecko campaign. The first results from 2001 and 2002 observations, including the first evidence of planet-induced magnetic heating of HD~179949, were published in Shkolnik et al.~(2003, 2005a). We later extended the experiment at the Very Large Telescope (VLT) to include five southern targets.
The three new systems monitored in this study are HD 217107, HD 149143 and HD 189733. The system parameters for the ESPaDOnS program stars are listed in Table 1 along with our standard 61~Vir. In this paper, we present new \'echelle spectra and compare with those of previous years, bringing to light a broader understanding of stellar activity, its cycles, and SPI. The details of our observations and data reduction are outlined in Section~\ref{spectra}. In Section~\ref{caII} we discuss our analysis and results of the Ca II K measurements including long-term, short-term and rotational modulation. Comparisons with other activity indicators are made in Section~\ref{other}. \clearpage \begin{deluxetable}{ccccccccccccl}\label{targets} \tabletypesize{\footnotesize} \tablecaption{Stellar and Orbital Parameters \label{starpars_01}} \tablewidth{0pt} \tablehead{ \colhead{~ Star} & \colhead{SpT} & \colhead{$v$sin$i$} & \colhead{$P_{rot}$} & \colhead{$P_{orb}$\tablenotemark{a}} & \colhead{$M_{p}$sin$i$\tablenotemark{a}} & \colhead{$a$\tablenotemark{a}} & \colhead{$\langle$K$\rangle$\tablenotemark{b}} & \colhead{$\langle$K$\arcmin\rangle$\tablenotemark{c}} & \colhead{$\langle$MADK$\rangle$\tablenotemark{d}} & \colhead{He I EW} & \\ \colhead{} & \colhead{} & \colhead{(km s$^{-1}$)} & \colhead{(days)} & \colhead{(days)} & \colhead{($M_{J}$)} & \colhead{(AU)} & \colhead{(\AA)} & \colhead{(\AA)} & \colhead{(\AA)} & \colhead{(m\AA)} & } \startdata $\tau$~Boo & F7~IV & 14.8$\pm$0.3 & 3.2\tablenotemark{e} &3.31 &4.4 & 0.046 & 0.336 & 0.184 & 0.0019 & 27\\ HD~179949 & F8~V & 6.3$\pm$0.9 & 7\tablenotemark{f} &3.09 &0.98 & 0.045 & 0.369 & 0.186& 0.0022\tablenotemark{j} & 17\\ HD~209458 & G0~V & 4.2$\pm$0.5 & 16\tablenotemark{g} & 3.53 &0.69\tablenotemark{h} & 0.045 & 0.195 & 0.078 & 0.0009 & $\lesssim$3\\ $\upsilon$ And & F7 V & 9.0$\pm$0.4 & 12\tablenotemark{e,k} & 4.618 & 0.71 & 0.059 & 0.254 & 0.091 & 0.0016 & --\\ HD~189733 & K1 V & 2.92$\pm$0.22 & 11.7\tablenotemark{i} &2.22 
&1.15\tablenotemark{h} &0.031 & 1.337 & 1.231 & 0.0044\tablenotemark{j} & 35\\ HD~217107 & G8 IV & 9.0$\pm$0.4 & 39\tablenotemark{k} & 7.13 &1.35 & 0.075 & 0.160 & 0.075 & 0.0007 & $<$4\\ HD~149143 & G0 IV & 3.9$\pm$1 &?? &4.09 &1.36 &0.052\tablenotemark{r} & 0.342 & 0.144 & 0.0014 & 10\\ 61~Vir & G5~V & 2.2\tablenotemark{m} & 33\tablenotemark{k} & --- & --- & --- & 0.182 & 0.083 & 0.0008 & $<$4\\ \enddata \tablenotetext{a}{Published orbital solutions: $\tau$ Boo $-$ Butler et al.~1997, HD 179949 $-$ Tinney et al.~2001, HD 209458 $-$ Charbonneau et al.~1999, $\upsilon$ And $-$ Butler et al.~1997, HD 189733 $-$ Bouchy et al.~2005, HD 217107 $-$ Fischer et al.~1998, HD~149143 $-$ De Silva et al.~2006} \tablenotetext{b}{Total integrated intensity of the mean normalized Ca II K core. These values are relative to the normalization points near 3930 and 3937 \AA\/ at 1/3 of the pseudo-continuum at 3950 \AA.} \tablenotetext{c}{We subtracted the photospheric emission from $\langle$K$\rangle$ in order to measure the mean integrated chromospheric emission $\langle$K$\arcmin\rangle$ using data from Wright et al.~(2004). (See text for more details.)} \tablenotetext{d}{Average integrated `intensity' of the mean absolute deviation (MAD) of the K residuals, per observing run} \tablenotetext{e}{Henry et al.~2000} \tablenotetext{f}{This work and Wolf \& Harmanec 2004} \tablenotetext{g}{Mazeh et al.~2000} \tablenotetext{h}{Transiting system} \tablenotetext{i}{Croll et al.~2007} \tablenotetext{j}{Values were corrected to remove geometric (rotational and/or planetary) modulation of an active region on the star.
For HD 189733, the non-corrected value is 0.0098 and for HD 179949, 0.0063.} \tablenotetext{k}{Wright et al.~2004} \end{deluxetable} \begin{deluxetable}{ccccccccccl}\label{mags} \tabletypesize{\footnotesize} \tablecaption{Observations\label{starpars_02}} \tablewidth{0pt} \tablehead{ \colhead{~ Star} & \colhead{$U$} & \colhead{$B$} & \colhead{$V$} & \colhead{Exposures\tablenotemark{a}} & \colhead{$S/N$\tablenotemark{b}} & \colhead{$S/N$\tablenotemark{b}} & \\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{$t \times n \times N$} & \colhead{at 3950 \AA} & \colhead{at 8710 \AA} & } \startdata $\tau$ Boo & 5.02 & 4.98 & 4.50 & 120 $\times$ 10 $\times$ 9\tablenotemark{c} & 630 & 1640 \\ HD 179949 & 6.83 & 6.76 & 6.25 & 660 $\times$ 5 $\times$ 7\tablenotemark{e} & 440 & 1270 \\ HD 209458 & 8.38 & 8.18 & 7.65 & 1800 $\times$ 4 $\times$ 4 & 350 & 960 \\ $\upsilon$ And & 4.69 & 4.63 & 4.09 & 180 $\times$ 120 $\times$ 4\tablenotemark{e} & 2800 & 7290\\ HD 189733 & $-$ & 8.60 & 7.67 & 1800 $\times$ 4 $\times$ 4 & 280 & 1260 \\ HD 217107 & 7.33 & 6.90 & 6.18 & 600 $\times$ 5 $\times$ 3 & 350 & 1340 \\ HD 149143 & $-$ & 8.53 & 7.90 & 1800 $\times$ 4 $\times$ 4 & 320 & 1000 \\ 61 Vir & 5.71 & 5.45 & 4.74 & 120 $\times$ 7 $\times$ 5 & 450 & 1480 \\ \enddata \tablenotetext{a}{$t$ = Exposure time in seconds, $n$ = number of exposures per night, $N$ = number of nights during the June 2006 ESPaDOnS observing run} \tablenotetext{b}{Typical nightly S/N per 0.022-\AA\/ pixel} \tablenotetext{c}{Spectra from three of the nine nights were observed in ESPaDOnS' `spectropolarimetry' mode.} \tablenotetext{d}{Four nights in June 2006 and three nights in September 2005} \tablenotetext{e}{All these data were acquired in the `spectropolarimetry' mode in September 2005. The extremely high S/N was a requirement of the partner program to search for linear polarization. 
(Collier Cameron et al., in prep.)} \end{deluxetable} \clearpage \section{The spectra}\label{spectra} The observations were made with the 3.6-m Canada-France-Hawaii Telescope (CFHT) on 7 nights in September 2005 and 9 nights in 2006 June. We used ESPaDOnS (\'Echelle SpectroPolarimetric Device for the Observation of Stars), which is fiber fed from the Cassegrain to coud\'e focus where the fiber image is projected onto a Bowen-Walraven slicer at the spectrograph entrance. With a 79 gr/mm grating and a 2048$\times$4608-pixel CCD detector, ESPaDOnS' `star-only' mode records the full spectrum over 40 grating orders covering 3700 to 10400 \AA\/ at a spectral resolution R of $\approx$80,000. The four nights of observations of $\upsilon$ And (18, 19, 23, 24 September 2005) and three nights of the $\tau$ Boo data (16, 17, 18 June 2006) were taken in ESPaDOnS' `spectropolarimetry' mode at R of 68,000. The data were reduced using {\it Libre Esprit}, a fully automated reduction package provided for the instrument and described in detail by Donati et al.~(1997, 2007\footnote{Also see http://www.cfht.hawaii.edu/Instruments/Spectroscopy/Espadons/Espadons\_esprit.html.}). Each stellar exposure is bias-subtracted and flat-fielded for pixel-to-pixel sensitivity variations. After optimal extraction, the 1-D spectra are wavelength calibrated with several Th/Ar arcs taken throughout the night. Finally the spectra are divided by a flat-field response and then the continuum is normalized. Heliocentric velocity corrections are applied as well as small velocity corrections ($<$ 100 m~s$^{-1}$) to account for instrumental effects using the telluric lines. The final spectra were of high S/N reaching $\approx$ 130 per pixel (880 \AA$^{-1}$) in the H \& K emission core, 400 per pixel (2700 \AA$^{-1}$) in the pseudo-continuum near 3950 \AA, and about 3 times higher near the Ca II IRT. 
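The velocity corrections mentioned above amount to a first-order rescaling of the wavelength axis. A minimal sketch (Python; ours, not the pipeline's actual code) that is adequate for the quoted $<$ 100 m~s$^{-1}$ instrumental shifts and for km~s$^{-1}$-scale heliocentric corrections:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def apply_velocity_correction(wave, v_km_s):
    """Shift a wavelength array by a radial velocity v (km/s) using the
    first-order Doppler relation lambda' = lambda*(1 + v/c), valid for
    |v| << c."""
    return np.asarray(wave, dtype=float)*(1.0 + v_km_s/C_KM_S)

# e.g., a 0.1 km/s (100 m/s) correction near the Ca II K line (3933.66 A)
# moves the scale by only ~0.0013 A:
wave = np.array([3930.0, 3933.66, 3937.0])
shifted = apply_velocity_correction(wave, 0.1)
```

At this level the relativistic terms are entirely negligible, which is why the linear relation suffices.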
Spectra with comparable S/N were taken of 61 Vir, a G5~V star known not to have close-in giant planets, plus the hot standard HR 5511 (SpT = A0V) for telluric line correction. Table 2 lists the program stars, including their magnitudes, exposure times, and typical S/N. All further processing and analysis were performed with standard IRAF (Image Reduction and Analysis Facility) routines.\footnote[1]{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc.~(AURA) under cooperative agreement with the National Science Foundation.} Differential radial velocity corrections were applied to each stellar spectrum using IRAF's {\it fxcor} and {\it rvcorrect} routines. Representative spectra near the key stellar activity indicators are shown in Figure~\ref{hd189733_spec} for HD~189733. \section{Measuring chromospheric activity}\label{caII} The very strong Ca II H and K photospheric absorption lines suppress the local stellar continuum making it difficult to normalize each spectrum consistently. The normalization level was set at 0.3 of the flux at 3950~\AA\/ centered on the H and K lines. Therefore the wavelengths were constant for all spectra of a given star, though they varied slightly from star to star due to variations in spectral type. The $\approx$7-\AA\/ spectral range was chosen to isolate the H and K reversals. This window is wide enough that a few photospheric absorption features appear to test for general stability. To normalize each sub-spectrum, the end points were set to 1 and fitted with a straight line. The mean Ca II K cores for four of the program stars are shown in Figure~\ref{Kcores} with those for HD~179949, HD~189733 and $\upsilon$ And in Figures~\ref{hd179949_diffs} $-$ \ref{upsAnd_diffs}. The spectra were grouped by date and a nightly mean was computed for each of the lines. 
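The sub-spectrum normalization just described can be sketched as follows (Python; an illustration of the procedure, not the actual IRAF implementation; the window endpoints near 3930 and 3937~\AA\/ are taken from the note to Table 1):

```python
import numpy as np

def normalize_k_core(wave, flux, lo=3930.0, hi=3937.0):
    """Cut the ~7 A window around the Ca II K core, set its end points
    to 1, and divide out the straight line through them."""
    sel = (wave >= lo) & (wave <= hi)
    w, f = wave[sel], flux[sel]
    slope = (f[-1] - f[0])/(w[-1] - w[0])
    line = f[0] + slope*(w - w[0])   # straight line through the end points
    return w, f/line
```

Because the same endpoint wavelengths are reused for every spectrum of a given star, the nightly sub-spectra remain directly comparable, as in the text.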
Other than the active HD~189733 (see Section~\ref{hd189733}), all stars observed had non-varying K emission at the $\lesssim$ 0.001 level on average on a given night. This is a result of S/N variations and intranight (short time-scale) chromospheric activity. We used nightly residuals from the average stellar spectrum to measure the chromospheric activity within the reversals. Each residual spectrum had a broad, low-order curvature removed. The residuals of the normalized spectra (smoothed by 17 pixels) were used to compute the mean absolute deviation (MAD = $N^{-1}\Sigma|data_{i}-mean|$ for $N$ spectra), a measure of overall variability within the span of the observing run. The Ca II K MAD spectrum and the nightly residuals used to generate it for HD~179949, HD~189733 and $\upsilon$ And are displayed in Figures~\ref{hd179949_diffs} $-$ \ref{upsAnd_diffs}. The analysis presented in this section consists of only Ca II K emission measurements for several reasons: 1) The broad, deep photospheric absorption of the Ca II K line allows the chromospheric emission to be seen at higher contrast as compared to H$\alpha$ and the Ca II IRT where chromospheric emission merely fills in the absorption core. 2) Extensive studies by the Mount Wilson group and Wright et al. (2004) allow us to isolate the average chromospheric emission $\langle$K$\arcmin$$\rangle$ by correcting for the photospheric contribution to our measurements ($\langle$K$\arcmin$$\rangle$ = $\langle$K$\rangle$$-$$\langle$K$_{phot}$$\rangle$, see Section 3.3.2 of Shkolnik et al.~2005a for more details.) This makes for a more accurate comparison of the stars in the sample whose spectral types vary from F7 to K1. 3) Previous CFHT/Gecko spectra consisted of only a single order containing Ca II H \& K and it is useful to make comparisons between the data sets. 
4) Lastly, there are no telluric features or blended lines to contaminate the spectra, as is the case for the other indicators, which are discussed in Section~\ref{other}. \subsection{HD 179949 \& $\upsilon$ And: Evidence of the on/off nature of SPI}\label{induced} When monitoring chromospheric emission, stellar activity may be modulated by the star's rotation, planetary motion in the case of SPI, or a combination of both. The orbital periods of the planets are well known and uniquely established by the PRV and transit discovery methods, but the rotation periods of the stars are much harder to determine, in part due to stellar differential rotation. For studies of SPI, differentiating between rotational and orbital modulation of the chromospheric emission is key. In Shkolnik et al.~(2005a) we presented evidence of planet-induced heating on HD~179949. The effect lasted for over a year and peaked only once per orbit, suggesting a magnetic interaction. In the simplest configuration, a magnetic interaction would occur near the sub-planetary point, when the planet is in front of the star relative to the line-of-sight, which defines orbital phase $\phi_{orb} = 0$. As reproduced in Figure~\ref{hd179949_intK}, we fitted a truncated, best-fit spot model to our 2001 and 2002 data with $P = P_{orb}$ = 3.092 d, corresponding to the change in projected area of a bright spot on the stellar surface before being occulted by the stellar limb. The fit to the 2001 and 2002 data peaks at $\phi_{orb}$ = 0.83 $\pm$0.04 with an amplitude of 0.027. We over-plot new data from 2005, which are fit remarkably well by the same model with only an insignificantly small relative phase shift of $-0.07$. This phase lead may help identify the nature of the interaction. For example, the offset from the sub-planetary point of a starspot or group of starspots can be a characteristic effect of tidal friction, magnetic drag or reconnection with off-center stellar magnetic field lines.
For further discussion on such mechanisms, see papers by Gu et al.~(2005), Preusse et al.~(2006) and McIvor et al.~(2006). In any case, the phasing, amplitude and period of the activity have persisted for over 4 years. Ca II data of HD 179949 acquired in 2003 and 2006 do not phase with the planet's orbit (Figure~\ref{hd179949_intK_2003_2006_orb}), but both phase well with a 7-day period, likely the rotation period of the star. In Figure~\ref{hd179949_intK_rot} we fit data from each year separately with a rotation curve, because the effects of differential rotation and the appearance and disappearance of new spots over the three years would produce variations in phase, amplitude and period in the observed modulation. Note that the amplitude of the rotational activity is only 0.6 of that induced by SPI. Indirect indications of the rotation rate of HD 179949 imply $P_{rot} \approx 9$ days and are presented in Shkolnik et al.~(2003) and Saar et al.~(2004). Wolf \& Harmanec (2004) weakly detect (1.5 $\sigma$) a photometric rotation period for HD~179949 of 7.07 d with an amplitude of only 0.008 mag. While more photometry is needed to determine a rotation period conclusively, the modulated Ca II emission of this star in both 2003 and 2006 strongly suggests a rotation period of 7 days. Similarly, previous Ca II data of $\upsilon$ And indicated possible SPI (Figure 8 of Shkolnik et al.~2005a), yet our September 2005 data appear to vary with the rotation. Again, the rotation period is not well known. Henry et al.~(2000) quote both 11 and 19 days, with a probable 11.6-day period from the $\langle S_{HK} \rangle$ index. We plot the 2005 data against the 4.6-d orbital period in Figure~\ref{upsAnd_intK_orb} and an 11.6-day rotation period in Figure~\ref{upsAnd_intK_rot}. Unlike data from 2002 and 2003, the 2005 data phase much better with $P_{rot}$=11.6 d than with a planetary orbit.
This on/off characteristic of SPI observed in the HD~179949 and $\upsilon$~And systems is predicted by the models of Cranmer \& Saar (2007). They model the Ca II H \& K light curve of a sun-like star with a hot Jupiter interacting with the field geometry at various stages of the empirically derived solar magnetic field at annual steps of the 11-year solar cycle. They conclude that due to the complex nature of the multipole fields, the Ca II K light curves due to SPI do not repeat exactly from orbit to orbit, and at times the planet-induced enhancement may disappear altogether, leaving only rotationally modulated emission. This may explain the 2003 and 2006 disappearance of the strong orbital modulation seen in 2001, 2002 and 2005 for HD 179949. Their models also show that for sparsely sampled data, the apparent phase shift between the peak Ca II emission and the sub-planetary point may fall between $-0.2$ and $+0.2$ (or $\pm 72^{\circ}$), consistent with the $-0.17$ phase shift we detected repeatedly for HD 179949. \subsection{HD 189733: The active host of a massive planet}\label{hd189733} We reported in Shkolnik et al.~2005a that chromospheric variability of the active, young star $\kappa^{1}$~Ceti ($\langle$K$\arcmin$$\rangle$=0.815), for which the presence of a hot Jupiter is not ruled out, and HD~73256, known to host a hot Jupiter (M$_p$sin$i$=1.85 M$_J$, $a$=0.037 AU, Udry et al.~2003, $\langle$K$\arcmin$$\rangle$=0.899), was modulated by stellar rotation with additional variability or flaring potentially induced by a hot Jupiter. We find a similar effect on HD 189733, a generally more active star, for which the average $P_{rot}$ is well known (11.73$\pm$0.07 days, Croll et al.~2007). This star has relatively strong K emission with $\langle$K$\arcmin$$\rangle$=1.231 and large intranight variability, as shown in Figure~\ref{hd189733_intK_rot}, which varies on time-scales at least as short as the length of the individual exposures (30 minutes).
The average emission from night to night clearly varies with the star's rotation, though with a lower amplitude (0.011 \AA) as compared to HD~73256 (0.045 \AA), likely because the star is intrinsically more active, with a larger percentage of its surface covered in spots. This makes it difficult to extract any magnetic and/or tidal contribution to HD~189733's chromospheric emission by the planet, as no correlation is seen between the planet's orbit and the residuals to the rotational modulation. However, Figure~\ref{hd189733_MADK_orb} shows how the integrated mean absolute deviation of the K line residuals from the global mean per night (MADK) varies with orbital phase. Though we only have four nights of observations spanning 1.4 orbits, there is a clear increase in very-short-term ($\leq$30 minutes) activity at $\phi_{orb}\sim0.8$. Remarkably, this is the same phase at which SPI peaks for HD 179949 and $\tau$~Boo as measured both spectroscopically and photometrically. (See discussion below and Walker et al.~2006, 2007.) \subsection{$\tau$ Boo: SPI on a tidally-locked star}\label{tauboo} The star with the shortest rotation period in our sample is $\tau$~Boo. It has the largest $v$sin$i$ (= 14.8 km~s$^{-1}$, Gray 1982) and is believed to be in synchronous rotation with its tightly-orbiting massive planet ($P_{rot}$ = 3.2 $\pm$ 0.5 d, Henry et al.~2000, $P_{orb}$ = 3.31250 d, M$_p$sin$i$=4.4 M$_J$, Butler et al.~1997). We observed a small but significant night-to-night modulation in the H \& K emission of $\tau$ Boo during the first 3 years of observations with no obvious phasing with the planet's position. (Data from 2001, 2002 and 2003 are in Figure 7 of Shkolnik et al.~2005a.) Due to the tidal locking of the star with the planet, we must depend on consistent orbital phasing of any modulated activity to disentangle SPI from rotational modulation for $\tau$ Boo.
Walker et al.~(2007) presented light curves of $\tau$ Boo taken in 2004 and 2005, observed in broadband optical light by the MOST space telescope. In the first year, they observed a significant photometric signal close to the planet's orbital frequency. In the second year, there was no signal of similar strength but a clear correlation of the MAD of the photometry with orbital phase. They showed that when phased to the planet's orbital period, the active region precedes the sub-planetary point by 68$^{\circ}$, very close to the phase lead we observe in the enhanced activity on HD~179949 and in the MAD of HD 189733's Ca II K emission. Though synchronicity with the planet's exact position is not obvious in $\tau$ Boo's 2001$-$2003 Ca II data, Walker et al.~(see their Figure 6) show that the MAD of these data during the photometrically active phase range, centered on $\phi_{orb}=$0.8, is twice as high as outside of it. Though of smaller amplitude (and larger error bars) than in previous years, our new 2006 Ca II data (Figure~\ref{tauboo_intK})\footnote{Of the nine June 2006 nights on which $\tau$ Boo was observed, three were taken in `spectropolarimetry' mode, those at orbital phases of 0.19, 0.49 and 0.82. It is interesting to note that on the two nights with high K emission, Catala et al.~(2007) detect a clear Stokes V signature, while the third night ($\phi_{orb}$=0.49) has both low K emission and no Stokes V signal.} may also show a weak enhancement between $\phi_{orb}=$0.7 and 1.2. If this is indeed the case, it implies that an active region leading the sub-planetary point has persisted on $\tau$ Boo for at least 5 years, equivalent to $\approx$550 planetary orbits. \subsection{Night-to-night activity correlates with planet's magnetic moment} S\'anchez-Lavega (2004) looked at the internal structure and the convective motions of giant extrasolar planets in order to calculate their dynamo-generated surface magnetism.
Given the same angular frequency (which is a reasonable approximation for the short-period planets in question), the magnetic dipole moment, and hence the magnetospheric strength, increases with planetary mass. This is observed for the magnetized planets in our own solar system, where the magnetic moment grows proportionally with the mass of the planet (Stevens 2005), and more specifically, with the planet's angular momentum ($L \propto$ $M_pR_p^2$$P_{p,rot}^{-1}$, Arge et al.~1995). Since only lower limits exist for the masses of most hot Jupiters and at such small semi-major axes they should be tidally locked ($P_{p,rot}$ = $P_{orb}$), we plot $M_{p}$sin$i$/P$_{orb}$ against $\langle$MADK$\rangle$, the average of the integrated MAD of the K line residuals per observing run, in Figure~\ref{msini_MADK}. Though we are able to include only one additional $active$ point (HD 189733) to the original plot of Shkolnik et al.~2005a, we continue to see an intriguing correlation between the planet's magnetic moment and the night-to-night chromospheric activity on its star. Of our sample, $\tau$ Boo has the most massive planet and yet falls well below the correlation. This is consistent with the proposed Alfv\'en wave model where the near zero relative motion due to the tidal locking of both the star and the planet ($P_{*,rot}$ = $P_{p,rot}$ = $P_{orb}$) produces minimal SPI because of the weak Alfv\'en waves generated as the planet passes through the stellar magnetosphere, thereby transporting little excess energy to the stellar surface along the magnetic field lines (Gu et al.~2005). If this correlation between short-term activity and planetary magnetic moment holds for more hot Jupiter systems engaging in SPI, this could provide an empirical tool with which to estimate the strength of extrasolar planetary magnetic fields. 
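Both axes of the correlation discussed above are simple to compute. A hedged sketch in Python, assuming MADK is the mean absolute deviation of the per-night integrated K-line residuals about that night's mean, and using the $\tau$~Boo values quoted above for the moment proxy:

```python
def madk(night_residuals):
    """MAD of the integrated K-line residuals within one night."""
    m = sum(night_residuals) / len(night_residuals)
    return sum(abs(r - m) for r in night_residuals) / len(night_residuals)

def mean_madk(run):
    """<MADK>: average of the per-night MADs over an observing run,
    where `run` is a list of per-night residual lists."""
    return sum(madk(night) for night in run) / len(run)

def moment_proxy(mp_sini_mjup, p_orb_days):
    """M_p sin i / P_orb, proportional to the magnetic moment of a
    tidally locked planet (L ~ M R^2 / P_rot with P_rot = P_orb)."""
    return mp_sini_mjup / p_orb_days

# tau Boo b, using the values quoted in the text:
# moment_proxy(4.4, 3.3125) is about 1.33 M_J per day
```

Plotting `moment_proxy` against `mean_madk` for each system reproduces the comparison shown in Figure~\ref{msini_MADK}.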
\subsection{Long-term stellar activity cycles}\label{long} With CFHT data spanning 5 years, we can compare the long-term variations in the chromospheric level of HD 179949, $\tau$ Boo, $\upsilon$ And and HD~209458. We measure Ca II K emission strength $\langle$K$\rangle$ by integrating across the normalized K cores bounded by the K1 features (Montes et al.~1994) and plot their average for each observing run in Figure~\ref{longterm_Kemission}. The error bars represent the MAD value for each observing season. There does not appear to be any correlation between the mean chromospheric activity and the level of night-to-night modulation, be it due to SPI or stellar rotation. Though a 5-year baseline is a good start to tracking the intrinsic stellar activity cycles of these stars, limiting their periods to $>$10 years, the variability from run to run may also be due to active regions on the visible disk of the star. We require more frequent monitoring over several more years before we can say anything more definitive about the activity cycle of any individual program star. \section{Correlating with other activity indicators}\label{other} An analysis similar to that of the Ca II K line described above was performed for the Ca II H line, the IRT line at 8662\AA\/ and H$\alpha$, though with the normalization points set at 0.7 of the local continuum for the latter two. This level is within the photospheric damping wings of the line, focusing the analysis on the chromospheric core and excluding some blended and telluric lines. For both HD 179949 and HD 189733, there is a strong correlation between the residuals of Ca II K and those of the Ca II H and 8662\AA\/ lines (Figure~\ref{correlations}), as expected given the common upper energy level of their transitions. However, a poorer correlation exists with the H$\alpha$ lines.
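The measurements in this section reduce to simple numerics: $\langle$K$\rangle$ is a trapezoid-rule integral of the normalized core between the K1 minima, and the line-to-line comparisons amount to a correlation coefficient between residual series. A minimal sketch (the K1 bounds are hypothetical arguments that would have to be located on each spectrum):

```python
import math

def k_core_strength(wavelengths, norm_flux, k1_blue, k1_red):
    """<K>: trapezoid-rule integral of the normalized K-line core
    between the K1 minima (hypothetical bounds, in Angstroms)."""
    total = 0.0
    for i in range(len(wavelengths) - 1):
        if wavelengths[i] >= k1_blue and wavelengths[i + 1] <= k1_red:
            total += 0.5 * (norm_flux[i] + norm_flux[i + 1]) \
                     * (wavelengths[i + 1] - wavelengths[i])
    return total

def pearson_r(x, y):
    """Pearson correlation between two equal-length residual series,
    e.g. Ca II K residuals versus Ca II H or 8662 A residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```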
Though H$\alpha$ is often demonstrated to be just as good a tracer of chromospheric activity as Ca II, a recent analysis by Cincunegui et al.~(2007) has shown that when comparing a sample of stars, the correlation between H$\alpha$ and Ca II is the result of the correlation of each line with spectral type rather than with stellar activity. When comparing the variability of the lines for an individual star, there is no consistent correlation. This is likely the effect of the differing underlying formation physics of the two lines (Soderblom et al.~1993, Cram \& Mullan 1985). We leave the relative energy emitted in the lines and their implications for SPI models for a later paper. The He I D3 line (a blended triplet) at 5876 \AA\/ correlates well with plage regions on the solar surface (Landman 1981) as well as with the Ca II H\&K emission (Saar et al.~1997). It forms in the upper chromosphere and is thought to be back-heated by the stellar corona, giving us a unique optical view of this hot plasma. The absence of He I absorption in non-magnetic regions on the sun and in other inactive stars indicates that He I has no basal (acoustically heated) flux level, unlike the other activity indicators in the visible spectrum, and is therefore purely a signature of magnetic activity. We show the spectral region near the He I lines of our program stars in Figure~\ref{HeI_6stars}. The line is blended with the weak lines of Fe I 5876.30, Cr I 5876.55, and unidentified lines at 5875.76 and 5875.14 \AA, both blended with telluric H$_2$O (Moore et al.~1966). The imperfect removal of the telluric lines from our spectra left residuals at the level of $\lesssim$0.003 of the nearby continuum. This made it difficult to analyze night-to-night variations in the stars that exhibit relatively strong variability in the Ca II lines, though the mere presence of the line indicates a magnetic heating source, with stronger absorption implying greater activity.
HD~179949, HD~189733 and $\tau$~Boo all have clear He I absorption while HD~209458, HD~149143, HD~217107 as well as our standard star 61~Vir have only weak, if any, absorption. Though night-to-night variability is difficult to quantify in HD 179949 and HD 189733, there is a clear increase in He I D3 absorption on the night that each of the two stars displays its maximum Ca II emission (the two maxima do not occur on the same night).\footnote{We cannot measure the He I EW of HD~179949 from the September 2005 data since poor weather prevented us from observing a telluric standard. Similarly, a telluric standard was not observed for the spectropolarimetric study of $\upsilon$ And.} We measured the average He I EW by deblending it from the contaminating lines, a technique particularly difficult for the rapid rotator $\tau$ Boo. The values are listed in Table~\ref{targets}. It is interesting to note that the He I EW does not follow the power-law correlation with Ca II H \& K emission observed by Saar et al.~(1997) for G and K dwarfs, but does display a strong correlation with the short-term activity metric $\langle$MADK$\rangle$ plotted in Figure~\ref{madk_HeI}, where stars with more night-to-night activity have stronger He I absorption. The relationship between the He I absorption and stellar rotation period observed by Saar et al. is also not obvious in our small sample. It therefore remains possible that the strength of the He I D3 line may predict whether a planet-bearing star will have night-to-night variability. As more stars with hot Jupiters are discovered, this may prove useful in deciding which systems should be studied further with intensive time-series observations. \section{Summary}\label{summary} We have observed 7 stars with hot Jupiters using CFHT's \'echelle spectrograph ESPaDOnS to search for night-to-night modulation of the Ca II emission for evidence of SPI.
Four of these have been observed in our previous studies of time-varying Ca II H \& K: HD 179949, $\upsilon$ And, HD 209458, and $\tau$~Boo, and we have added three new targets: HD 189733, HD 217107 and HD 149143. For our prime target, HD 179949, we now have a total of six observing runs spanning 5 years. During four runs (Aug 2001, July 2002, Aug 2002 and Sept 2005) the Ca II emission varied with the orbital period of 3.092 days, with consistent amplitude and peak phase indicative of a magnetic interaction between the star and planet. The peak activity on HD 179949 in these epochs occurs at $\phi_{orb}\approx0.8$, leading the sub-planetary longitude by some 70$^{\circ}$. Interestingly, this same phase shift is observed in the MAD of the Ca II K residuals of both $\tau$ Boo and HD~189733. The phase lead can provide information on the field geometries (i.e.~Parker spiral) and the nature of the effect, such as tidal friction, magnetic drag or reconnection with off-center magnetic fields. HD 179949 data from the other two runs (Sept 2003 and June 2006) clearly vary with the rotation period of 7 days. A similar effect is seen on $\upsilon$ And, where one of four epochs appears to be modulated by rotation rather than the planet's motion. This {\it on/off} behavior has been modeled by Cranmer \& Saar (2007) to be an effect of magnetic reconnection with the stellar field as it varies with the star's long-term activity cycle. We present the expected correlations of the variability observed in the Ca II K line with that in Ca II H and IRT 8662, and a weaker correlation with H$\alpha$. Though we could not accurately measure the variability in the upper-chromosphere line He I D3, we show that it has the potential to flag stars which might be active in Ca II K on a night-to-night time scale. To date we have observed 13 stars with hot Jupiters at CFHT and VLT, of which 5 appear to be actively engaging in SPI: HD 179949, $\upsilon$ And, HD 189733, HD 73256 and $\tau$ Boo.
The activity as measured by the mean absolute deviation over a run on the first four of these stars correlates well with $M_{p}$sin$i$/$P_{orb}$, a value proportional to the planet's magnetic moment, and thus with the hot Jupiter's magnetic field strength. Because of its small separation ($\leq$ 0.075 AU), a hot Jupiter lies within the Alfv\'en radius of its host star, allowing a direct magnetic interaction with the stellar surface. Although this correlation is tentative, short-term chromospheric variability may be our first probe of extrasolar planetary magnetospheres. \acknowledgements We are very grateful to Claude Catala and John Barnes for contributing several $\tau$ Boo and $\upsilon$ And spectra, to Jean-Francois Donati for making {\it Libre Esprit} available to CFHT users, to the CFHT staff for their excellent support of this program, and to Benjamin Brown for useful discussions on planetary dynamos. Research funding from the NASA Postdoctoral Program (formerly the NRC Research Associateship) for E.S. and from the National Research Council of Canada (D.A.B.) is gratefully acknowledged.
\section{Introduction} Magnetization dynamics in the presence of spin-transfer torques is a very active area of research with applications to magnetic memory devices and oscillators~\cite{BaderParkin2010,BrataasKentOhno2012,KentWorledge2015}. Some basic questions relate to the types of magnetization dynamics that can be excited and the time scales on which the dynamics occurs. Many of the experimental studies of spin-transfer torques are on thin film magnetic elements patterned into asymmetric shapes (e.g. an ellipse) in which the demagnetizing field strongly confines the magnetization to the film plane. Analytic models that capture the resulting nearly in-plane magnetization dynamics (see e.g. \cite{GarciaCerveraE01, DKMO, KohnSlas05Dynamics, MuratovOsipov06, CapellaOttoMelcher}) can lead to new insights and guide experimental studies and device design. A macrospin model that treats the entire magnetization of the element as a single vector of fixed length is a starting point for most analyses. The focus of this paper is on a thin-film magnetic element excited by a spin-polarized current that has an out-of-plane component. This out-of-plane component of spin-polarization can lead to magnetization precession about the film normal or magnetization reversal. The former dynamics would be desired for a spin-transfer torque oscillator, while the latter dynamics would be essential in a magnetic memory device. A device in which a perpendicular component of spin-polarization is applied to an in-plane magnetized element was proposed in {Ref.~[\onlinecite{Kent2004}]} and has been studied experimentally {\cite{Liu2010, Liu2012, Ye2015}}. There have also been a number of models that have considered the influence of thermal noise on the resulting dynamics, e.g., {on} the rate of switching and the dephasing of the oscillator motion \cite{Newhall2013,Pinna2013,Pinna2014}. 
Here we consider a weakly damped asymptotic regime of the Landau--Lifshitz--Gilbert--Slonczewski (LLGS) equation for a thin-film ferromagnet, in which the oscillatory nature of the in-plane dynamics is highlighted. In this regime, we derive a reduced partial differential equation (PDE) for the in-plane magnetization dynamics under applied spin-torque, which is a generalization of the underdamped wave-like model due to Capella, Melcher and Otto \cite{CapellaOttoMelcher}. We then analyze the solutions of this equation under the macrospin (spatially uniform) approximation, and discuss the predictions of such a model in the context of previous numerical studies of the full LLGS equation \cite{ChavesKent15}. The rest of this article is organized as follows. In Sec. II, we perform an asymptotic derivation of the reduced underdamped equation for the in-plane magnetization dynamics in a thin-film element of arbitrary cross section, by first making a thin-film approximation to the LLGS equation, then a weak-damping approximation. In Sec. III, we then further reduce to a macrospin ordinary differential equation (ODE) by spatial averaging of the underdamped PDE, and restrict to the particular case of a soft elliptical element. A brief parametric study of the ODE solutions is then presented, varying the spin-current parameters. In Sec. IV, we make an analytical study of the macrospin equation using an orbit-averaging method to reduce to a discrete dynamical system, and compare its predictions to the full ODE solutions. In Sec. V, we seek to understand transitions between the different solution trajectories (and thus predict current-parameter values when the system will either switch or precess) by studying the discrete dynamical system derived in Sec. IV. Finally, we summarize our findings in Sec. VI. 
\section{Reduced model} We consider a domain $\Omega\subset\mathbb{R}^3$ occupied by a ferromagnetic film with cross-section $D\subset \mathbb{R}^2$ and thickness $d$, i.e., $\Omega = D \times (0, d)$. Under the influence of a spin-polarized electric current applied perpendicular to the film plane, the magnetization vector $\v{m}=\v{m}(\v{r},t)$, with $|\v{m}|=1$ in $\Omega$ and $0$ outside, satisfies the LLGS equation (in SI units) \beq \pd{\v{m}}{t} = - \gamma\mu_0\v{m} \times \v{H}_{\text{eff}} + {\alpha} \v{m} \times \pd{\v{m}}{t} + {\tau_{\text{STT}}} \eeq in $\Omega$, with ${\partial \v{m}}/{\partial n}=(\v{n}\cdot\nabla)\v{m}=0$ on $\partial \Omega$, where $\v{n}$ is the outward unit normal to $\partial \Omega$. In the above, $\alpha > 0$ is the Gilbert damping parameter, $\gamma$ is the gyromagnetic ratio, $\mu_0$ is the permeability of free space, $ \v{H}_{\text{eff}} = -\frac{1}{\mu_0M_s}\frac{\delta E}{\delta \v{m}} $ is the effective magnetic field, \begin{multline} E(\v{m}) = \int_\Omega \Big(A |\nabla \v{m}|^2 + K \Phi(\v{m}) - \mu_0 M_s \v{H}_{\text{ext}}\cdot\v{m} \Big)\d^3 r\\ + \mu_0 M_s^2 \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\nabla \cdot \v{m}(\v{r})\nabla \cdot \v{m}(\v{r'})}{8\pi |\v{r}-\v{r}'|}\d^3 r \d^3 r' \end{multline} is the micromagnetic energy with exchange constant $A$, anisotropy constant $K$, crystalline anisotropy function $\Phi$, external magnetic field $\v{H}_{\text{ext}}$, and saturation magnetization $M_s$. Additionally, the Slonczewski spin-transfer torque $\tau_\text{STT}$ is given by \beq \tau_\text{STT} = -\frac{\eta \gamma \hbar j}{2 d e M_s} \v{m} \times\v{m} \times \v{p}, \eeq where $j$ is the density of current passing perpendicularly through the film, $e$ is the elementary charge (positive), $\v{p}$ is the spin-polarization direction, and $\eta \in(0,1]$ is the spin-polarization efficiency. We now seek to nondimensionalize the above system.
Let \beq \ell = \sqrt{\frac{2A}{\mu_0M_s^2}}, \quad Q = \frac{2K}{\mu_0M_s^2}, \quad \v{h}_{\text{ext}} = \frac{\v{H}_{\text{ext}}}{ M_s}. \eeq We then rescale space and time as \beq \v{r} \to \ell \v{r}, \quad t \to \frac{t}{\gamma \mu_0 M_s} \eeq obtaining the nondimensional form \beq \pd{\v{m}}{t} = -\v{m} \times \v{h}_{\text{eff}} + \alpha \v{m} \times \pd{\v{m}}{t} - \beta \v{m} \times\v{m} \times \v{p}, \label{LLG} \eeq where $\v{h}_{\text{eff}} = \v{H}_{\text{eff}}/M_s$, and \beq \beta = \frac{\eta \hbar j}{2 d e \mu_0M_s^2} \eeq is the dimensionless spin-torque strength. Since we are interested in thin films, we now assume that $\v{m}$ is independent of the film thickness. Then, after rescaling \beq E\to \mu_0 M_s^2 d\ell^2 E, \eeq we have $\v{h}_{\text{eff}} \simeq -\frac{\delta E}{\delta \v{m}}$, where $E$ is given by a local energy functional defined on the (rescaled) two-dimensional domain $D$ (see, e.g., Ref. [\onlinecite{KohnSlas05Gamma}]): \begin{multline} E(\v{m}) \simeq \frac12 \int_D \left(|\nabla \v{m}|^2 + Q \Phi(\v{m}) - 2\v{h}_{\text{ext}} \cdot \v{m}\right) \d^2 r \\ + \frac12 \int_D m_\perp^2 \d^2 r + \frac{1}{4\pi}\delta |\ln \lambda| \int_{\partial D} (\v{m} \cdot \v{n})^2 \d s, \label{ThinFilmEnergy} \end{multline} in which now $\v{m}:D\to \mathbb{S}^2$, $m_\perp$ is its out-of-plane component, $\delta = d/\ell$ is the dimensionless film thickness, and $\lambda = d /L \ll 1 $ (where $L$ is the lateral size of the film) is the film's aspect ratio. The effective field is given explicitly by \beq \v{h}_{\text{eff}} = \Delta \v{m} - \frac{Q}{2} \nabla_{\v{m}}\Phi(\v{m}) - m_\perp \v{e}_z + \v{h}_{\text{ext}}, \label{field} \eeq and $\v{m}$ satisfies equation \eqref{LLG} in $D$ with the boundary condition \beq \pd{\v{m}}{n} = -\frac{1}{2\pi}\delta |\ln \lambda| (\v{m} \cdot \v{n})(\v{n} - ( \v{m}\cdot \v{n}) \, \v{m} ) \label{BCm} \eeq on $\partial D$. 
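For concreteness, the scales introduced above are straightforward to evaluate. A sketch with illustrative permalloy-like material values (the numbers $A$, $M_s$, $\eta$, $j$ and $d$ below are our assumptions, not taken from the text):

```python
import math

MU0 = 4e-7 * math.pi          # permeability of free space (SI)
HBAR = 1.054571817e-34        # reduced Planck constant (J s)
E_CHARGE = 1.602176634e-19    # elementary charge (C)

def exchange_length(A, Ms):
    """ell = sqrt(2A / (mu0 Ms^2)), the unit of length used above (m)."""
    return math.sqrt(2.0 * A / (MU0 * Ms ** 2))

def beta_spin_torque(eta, j, d, Ms):
    """Dimensionless spin-torque strength
    beta = eta hbar j / (2 d e mu0 Ms^2)."""
    return eta * HBAR * j / (2.0 * d * E_CHARGE * MU0 * Ms ** 2)

# Illustrative numbers: A = 1.3e-11 J/m, Ms = 8e5 A/m gives ell ~ 5.7 nm;
# eta = 0.7, j = 1e12 A/m^2, d = 2.5 nm gives beta of order 0.1.
```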
We now parametrize $\v{m}$ in terms of spherical angles as \beq \v{m}=(-\sin\theta \cos \phi, \cos\theta \cos \phi, \sin \phi), \label{Mparam} \eeq and the current polarization direction $\v{p}$ in terms of an in-plane angle $\psi$ and its out-of-plane component $p_\perp$ as \beq \v{p} = \frac{1}{\sqrt{1+p_\perp^2}}(-\sin\psi, \cos \psi, p_\perp). \eeq Writing ${\beta}_* = {\beta}/\sqrt{1+p_\perp^2}$, after some algebra, one may then write equation \eqref{LLG} as the system \begin{multline} \pd{\phi}{t} = -\frac{1}{\cos\phi} \v{h}_{\text{eff}}\cdot \v{m}_\theta + \alpha \cos \phi \pd{\theta}{t} \\+ {\beta}_*(p_\perp \cos \phi - \sin\phi\cos(\theta-\psi)), \end{multline} \beq -\cos\phi\pd{\theta}{t} = -\v{h}_{\text{eff}}\cdot \v{m}_\phi + \alpha \pd{\phi}{t} + {\beta}_*\sin(\theta-\psi),\eeq where $\v{m}_\theta = \partial \v{m}/\partial \theta$ and $\v{m}_\phi = \partial \v{m}/\partial \phi$ for $\v{m}$ given by \eqref{Mparam}. Again, since we are working in a soft thin film, we assume $\phi \ll 1$ and that the out-of-plane component of the effective field in equation \eqref{field} is dominated by the term $\v{h}_\text{eff} \cdot \v{e}_z \simeq - m_\perp = -\sin \phi$. Note that this assumes that the crystalline anisotropy and external field terms in the out-of-plane directions are relatively small, so we assume the external field is only in plane, though it is still possible to include a perpendicular anisotropy simply by renormalizing the constant in front of the $m_\perp$ term in $\v{h}_\text{eff}$. We then linearize the above system in $\phi$, yielding \beq \pd{\phi}{t} = \frac{\delta \mathcal{E}}{\delta \theta} + \alpha \pd{\theta}{t} + {\beta}_*(p_\perp - \phi \cos(\theta-\psi)),\eeq \begin{multline} -\pd{\theta}{t} = \phi + {\beta}_*\sin(\theta-\psi)\\+ \phi(-h_x \sin\theta + h_y \cos\theta) + \alpha \pd{\phi}{t} . 
\label{LLGlin2} \end{multline} where $h_x = \v{h}_{\text{eff}}\cdot\v{e}_x$ and $h_y = \v{h}_{\text{eff}}\cdot\v{e}_y$, and $\mathcal{E}(\theta)$ is $E(\v{m})$ evaluated at $\phi = 0$. We now note that the last two terms in \eqref{LLGlin2} are negligible relative to $\phi$ whenever $|h_x|, |h_y|$ and $\alpha$ are small, which is true of typical clean thin-film samples of sufficiently large lateral extent. Neglecting these terms, one has \begin{align} \pd{\phi}{t} &= \frac{\delta \mathcal{E}}{\delta \theta} + \alpha \pd{\theta}{t} + \beta_*(p_\perp - \phi \cos(\theta-\psi)),\label{LLGunscaled1}\\ -\pd{\theta}{t} &= \beta_*\sin(\theta-\psi)+\phi. \label{LLGunscaled2} \end{align} Then, differentiating \eqref{LLGunscaled2} with respect to $t$ and using the result along with \eqref{LLGunscaled2} to eliminate $\phi$ and $\pd{\phi}{t}$ from \eqref{LLGunscaled1}, we find a second-order in time equation for $\theta$: \begin{multline} 0 = \pdd{\theta}{t} + \pd{\theta}{t}(\alpha + 2 \beta_*\cos(\theta-\psi)) +\frac{\delta \mathcal{E}}{\delta \theta} \\+ \beta_* p_\perp + \beta_*^2 \sin(\theta - \psi)\cos(\theta-\psi), \label{reduced} \end{multline} where, explicitly, one has \beq \frac{\delta \mathcal{E}}{\delta \theta} = -\Delta \theta + \frac{Q}{2} {\tilde{\Phi}'}(\theta) + \v{h}_{\text{ext}}\cdot(\cos \theta,\sin \theta), \eeq and $\tilde{\Phi}(\theta) = \Phi(\v{m}(\theta))$. In turn, from the boundary condition on $\v{m}$ in \eqref{BCm}, we can derive the boundary condition for $\theta$ as \beq \v{n}\cdot \nabla \theta = \frac{1}{2\pi}\delta|\ln \lambda|\sin(\theta - \varphi) \cos(\theta - \varphi), \label{BCtheta} \eeq where $\varphi$ is the angle parametrizing the normal $\v{n}$ to $\partial D$ via $\v{n} = (-\sin\varphi,\cos\varphi)$. 
The model comprised of \eqref{reduced}--\eqref{BCtheta} is a damped-driven wave-like PDE for $\theta$, which coincides with the reduced model of Ref.~[\onlinecite{CapellaOttoMelcher}] for vanishing spin-current density in an infinite sample. This constitutes our reduced PDE model for magnetization dynamics in thin-film elements under the influence of out-of-plane spin currents. It is easy to see that all of the terms in \eqref{reduced} balance when the parameters are chosen so as to satisfy \beq \beta_* \sim p_\perp \sim \alpha \sim {Q}^{1/2} \sim | \v{h}_{\text{ext}}|^{1/2} \sim \frac{\ell}{L}\sim \delta|\ln \lambda|. \label{scalepde} \eeq This shows that it should be possible to rigorously obtain the reduced model in \eqref{reduced}--\eqref{BCtheta} in the asymptotic limit of $L \to \infty$ and $\alpha, \beta_*, p_\perp, Q, |\mathbf h_\mathrm{ext}|, \delta \to 0$ jointly, so that \eqref{scalepde} holds. \section{Macrospin switching} In this section we study the behavior of the reduced model \eqref{reduced}--\eqref{BCtheta} in the approximation that the magnetization is spatially uniform on an elliptical domain, and compare the solution phenomenology to that found by simulating the LLGS equation in the same physical situation, as studied in Ref.~[\onlinecite{ChavesKent15}]. \subsection{Derivation of macrospin model} Integrating equation \eqref{reduced} over the domain $D$ and using the boundary condition \eqref{BCtheta}, we have \begin{multline} \int_D \left(\pdd{\theta}{t} + \pd{\theta}{t}(\alpha + 2 \beta_*\cos(\theta-\psi)) \right. \\\left.+ \beta_* p_\perp+ \beta_*^2 \sin(\theta - \psi)\cos(\theta-\psi)\right. \\\left.+ \frac{Q}{2} {\tilde{\Phi}'}(\theta) + \v{h}_{\text{ext}}\cdot(\cos \theta,\sin \theta) \right) \d^2 r \\= \frac{1}{2\pi}\delta|\ln \lambda| \int_{\partial D} \sin(\theta - \varphi) \cos(\theta - \varphi) \d s. 
\end{multline} Assume now that $\theta$ does not vary appreciably across the domain $D$, which makes sense in magnetic elements that are not too large. This allows us to replace $\theta(\v{r},t)$ by its spatial average $\bar{\theta}(t) = \frac{1}{|D|}\int_D\theta(\v{r},t) \d^2 r$, where $|D|$ stands for the area of $D$ in the units of $\ell^2$. Denoting time derivatives by overdots, and omitting the bar on $\bar{\theta}$ for notational simplicity, this spatial averaging leads to the following ODE for $\theta(t)$: \begin{multline} \ddot{\theta} +\dot{\theta}\brackets{\alpha + 2\beta_* \cos(\theta-\psi)} + \beta_*^2 \sin(\theta - \psi)\cos(\theta-\psi) \\+\beta_* p_\perp + \frac{Q}{2} {\tilde{\Phi}'}(\theta) + \v{h}_{\text{ext}}\cdot(\cos \theta,\sin \theta) \\= \frac{\delta|\ln \lambda|}{4\pi |D|} \, \sin 2 \theta \int_{\partial D} \cos (2 \varphi) \d s \\ - \frac{\delta|\ln \lambda|}{4\pi |D|} \, \cos 2 \theta \int_{\partial D} \sin (2 \varphi) \d s . \label{MacrospinGeneral} \end{multline} \begin{figure*} \begin{center} \includegraphics[width=.9\textwidth]{odesols.pdf} \caption{Solutions of macrospin equation \eqref{reducedODE} for $\alpha=0.01$, $\Lambda=0.1$. In (a), $p_\perp=0.2$, $\sigma =0.03$: decaying solution; in (b), $p_\perp=0.2$, $\sigma =0.06$: limit cycle solution (the initial conditions in (a) and (b) are $\theta(0)=3.5$, to better visualize the behavior). In (c), $p_\perp=0.3$, $\sigma =0.08$: switching solution; in (d), $p_\perp=0.6$, $\sigma =0.1$: precessing solution. } \label{odesols} \end{center} \end{figure*} Next, we consider a particular physical situation in which to study the macrospin equation, motivated by previous work \cite{Liu2010, Liu2012}. 
As in Refs.~[\onlinecite{Pinna2013, Pinna2014, ChavesKent15}], we consider an elliptical thin-film element (recall that lengths are now measured in the units of $\ell$): \begin{align} \label{eq:1} D = \left\{ (x, y) \ : \ {x^2 \over a^2} + {y^2 \over b^2} < 1 \right\}, \end{align} with no in-plane crystalline anisotropy, $Q=0$, and no external field, $\v{h}_{\text{ext}}=0$. We take the long axis of the ellipse to be aligned with the $\v{e}_y$-direction, i.e. $b > a$, with the in-plane component of current polarization also aligned along this direction, i.e., taking $\psi =0$. One can then compute the integral over the boundary in equation \eqref{MacrospinGeneral} explicitly, leading to the equation \begin{multline} \ddot{\theta} +\dot{\theta}\brackets{\alpha + 2\beta_* \cos\theta} + \Lambda \sin \theta\cos\theta \\+ \beta_*^2 \sin\theta\cos\theta+\beta_* p_\perp = 0, \end{multline} where we introduced the geometric parameter $0 < \Lambda \ll 1$ obtained by an explicit integration: \beq \Lambda = \frac{\delta|\ln \lambda|}{2\pi^2 ab} \int_0^{2\pi} \frac{b^2 \cos^2 \tau - a^2\sin^2 \tau}{\sqrt{b^2 \cos^2 \tau + a^2\sin^2 \tau}} \d \tau. \eeq This may be computed in terms of elliptic integrals, though the expression is cumbersome so we omit it here. Importantly, up to a factor depending only on the eccentricity, the value of $\Lambda$ is given by \beq \Lambda \sim \frac{d}{L} \ln \frac{L}{d}. \eeq For example, for an elliptical nanomagnet with dimensions $100 \times 30 \times 2.5$ nm (similar to those considered in Ref.~[\onlinecite{ChavesKent15}]), this yields $\Lambda \simeq 0.1$. It is convenient to rescale time by $\sqrt{\Lambda}$ and divide through by $\Lambda$, yielding \begin{multline} \ddot{\theta} + \frac{1}{\sqrt{\Lambda}}\dot{\theta}\brackets{\alpha + 2\sigma \Lambda \cos\theta} + \sin\theta \cos \theta \\+ \sigma p_\perp+\sigma^2 \Lambda \sin\theta \cos\theta = 0, \label{reducedODE} \end{multline} where we introduced $\sigma = \beta_*/\Lambda$.
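The estimate $\Lambda \simeq 0.1$ quoted above can be checked by direct quadrature. A sketch, assuming an exchange length of about 5.7 nm (a typical permalloy-like value; $\ell$ is not specified in the text):

```python
import math

def geometric_lambda(a_nm, b_nm, d_nm, L_nm, ell_nm, n=20000):
    """Midpoint-rule evaluation of the Lambda integral for an ellipse
    with semi-axes a < b; lengths in nm, ell_nm is the exchange length
    (an assumed value, since it is not quoted in the text)."""
    delta = d_nm / ell_nm                  # dimensionless thickness
    lam = d_nm / L_nm                      # aspect ratio d/L
    a, b = a_nm / ell_nm, b_nm / ell_nm    # semi-axes in units of ell
    integral = 0.0
    for i in range(n):
        tau = 2.0 * math.pi * (i + 0.5) / n
        c2, s2 = math.cos(tau) ** 2, math.sin(tau) ** 2
        integral += (b * b * c2 - a * a * s2) \
                    / math.sqrt(b * b * c2 + a * a * s2)
    integral *= 2.0 * math.pi / n
    return delta * abs(math.log(lam)) / (2.0 * math.pi ** 2 * a * b) * integral

# 100 x 30 x 2.5 nm element (semi-axes 50 and 15 nm), ell ~ 5.7 nm:
# geometric_lambda(15.0, 50.0, 2.5, 100.0, 5.7) comes out close to 0.1
```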
We then apply this ODE to model the problem of switching of the thin-film elements, taking the initial in-plane magnetization direction to be static and aligned along the easy axis, antiparallel to the in-plane component of the spin-current polarization. Thus, we take \beq \theta(0)=\pi,\quad \dot{\theta}(0) = 0, \label{ICMacro} \eeq and study the resulting initial value problem. \subsection{Solution phenomenology} Let us briefly investigate the solution phenomenology as the dimensionless spin-current parameters $\sigma$ and $p_\perp$ are varied, with the material parameters, $\alpha$ and $\Lambda$, fixed. We take all parameters to be constant in time for simplicity. We find, by numerical integration, four types of solutions to the initial value problem defined above. Sample solution curves are displayed in Fig. \ref{odesols} below. The first (panel (a)) occurs for small values of $\sigma$, and consists simply of oscillations of $\theta$ around a fixed point close to the long axis of the ellipse, which decay in amplitude towards the fixed point, without switching. Secondly (panel (b)), still below the switching threshold, the same oscillations about the fixed point can reach a finite fixed amplitude and persist without switching. This behavior corresponds to the onset of relatively small amplitude limit-cycle oscillations around the fixed point. Thirdly (panel (c)), increasing $\sigma$, $p_\perp$, or both, we obtain switching solutions. These have initial oscillations in $\theta$ about the fixed point near $\pi$, which increase in amplitude, and eventually cross the short axis of the ellipse at $\theta=\pi/2$. Then $\theta$ oscillates about the fixed point near 0, and the oscillations decay in amplitude toward the fixed point. Finally (panel (d)), further increasing $\sigma$ and $p_\perp$, we obtain precessing solutions. 
Here, the initial oscillations about the fixed point near $\pi$ quickly grow to cross $\pi/2$, after which $\theta$ continues to decrease for all $t$, the magnetization making full precessions around the out-of-plane axis. \section{Half-period orbit-averaging approach} We now seek to gain some analytical insight into the transitions between the solution types discussed above. We do this by averaging over half-periods of the oscillations observed in the solutions to generate a discrete dynamical system which describes the evolution of the energy of a solution $\theta(t)$ on half-period time intervals. Firstly, we observe that in the relevant parameter regimes the reduced equation \eqref{reducedODE} can be seen as a weakly perturbed Hamiltonian system. We consider both $\alpha$ and $\Lambda$ small, with $\alpha \lesssim \sqrt{\Lambda}$, and assume $\sigma \sim \alpha/\Lambda$ and $\sigma p_\perp \lesssim 1$. The arguments below can be rigorously justified by considering, for example, the limit $\Lambda \to 0$ while assuming that $\alpha = O(\Lambda)$ and that the values of $\sigma$ and $p_\perp$ are fixed. This limit may be achieved in the original model by sending jointly $d \to 0$ and $L \to \infty$, while keeping\cite{KohnSlas05Gamma} \begin{align} {L d \over \ell^2} \ln {L \over d} \lesssim 1. \end{align} The last condition ensures the consistency of the assumption that $\theta$ does not vary appreciably throughout $D$. Introducing $\omega(t) = \dot{\theta}(t)$, \eqref{reducedODE} can be written to leading order as \beq \dot{\theta} = \pd{\mathcal{H}}{\omega}, \quad \dot{\omega} = - \pd{\mathcal{H}}{\theta}, \label{HAMILTONIANSYSTEM} \eeq where we introduced \beq \mathcal{H} = \frac12 \omega^2 +V(\theta), \quad V(\theta) = \frac12{\sin^2 \theta} +{\sigma p_\perp \theta}. 
\label{DEFINEHAMILTONIAN} \eeq At the next order, the effects of finite $\alpha$ and $\Lambda$ appear in the first-derivative term in \eqref{reducedODE}, while the other forcing term is still higher order. The behavior of \eqref{reducedODE} is therefore that of a weakly damped Hamiltonian system with Hamiltonian $\mathcal{H}$, with the effects of $\alpha$ and $\sigma$ serving to slowly change the value of $\mathcal{H}$ as the system evolves. Thus, we now employ the technique of orbit-averaging to reduce the problem further to the discrete dynamics of $\mathcal{H}(t)$, where the discrete time-steps are equal (to the leading order) to half-periods of the underlying Hamiltonian dynamics (which thus vary with $\mathcal{H}$). Let us first compute the continuous-in-time dynamics of $\mathcal{H}$. From \eqref{DEFINEHAMILTONIAN}, \beq \dot{\mathcal{H}} = \omega(\dot \omega + V'(\theta)), \eeq which vanishes to leading order. At the next order, from \eqref{reducedODE}, one has \beq \dot{\mathcal{H}} = -\frac{\omega^2}{\sqrt\Lambda}(\alpha + 2 \sigma \Lambda \cos\theta). \label{DYNAMICSOFHAM} \eeq We now seek to average this dynamics over the Hamiltonian orbits. The general nature of the Hamiltonian orbits is either oscillations around a local minimum of $V(\theta)$ (limit cycles) or persistent precessions. If the local minimum of $V$ is close to an even multiple of $\pi$, $\mathcal{H}$ cannot increase, while if it is close to an odd multiple then $\mathcal{H}$ can increase if $\sigma$ is large enough. The switching process involves moving from the oscillatory orbits close to one of these odd minima, up the energy landscape, then jumping to oscillatory orbits around the neighboring even minimum, and decreasing in energy towards the new local fixed point. We focus first on the oscillatory orbits. 
We may define their half-periods as \beq T(\mathcal{H}) = \int_{\theta^*_-}^{\theta^*_+}\frac{\d\theta}{\dot{\theta}}, \label{PERIODOSCILLATORY} \eeq where $\theta_-^*$ and $\theta_+^*$ are the roots of the equation $V(\theta)=\mathcal{H}$ to the left and right of the local minimum of $V(\theta)$ about which $\theta(t)$ oscillates. To compute this integral, we assume that ${\theta(t)}$ follows the Hamiltonian trajectory: \beq \dot{\theta} = \pm \sqrt{2(\mathcal{H} - V(\theta))}. \label{traj} \eeq We then define the half-period average of a function $f(\theta(t))$ as \beq \avg{f} = \frac{1}{T(\mathcal{H})} \int_{\theta^*_-}^{\theta^*_+}\frac{f(\theta) \d\theta}{\sqrt{2(\mathcal{H} - V(\theta))}}, \eeq which agrees with the time average over a half-period to leading order. Note that this formula applies irrespective of whether the trajectory connects $\theta^*_-$ to $\theta^*_+$ or $\theta^*_+$ to $\theta^*_-$. Applying this averaging to $\dot{\mathcal{H}}$, we then have \beq \avg{\dot{\mathcal{H}}} = -\frac{1}{ T(\mathcal{H})}\int_{\theta^*_-}^{\theta^*_+} \chi(\theta,\mathcal{H}) \d \theta, \label{AVERAGEDDYNAMICS} \eeq where we defined \beq \chi(\theta,\mathcal{H}) = \frac{\brackets{\alpha + 2 \sigma \Lambda \cos \theta}\sqrt{2(\mathcal{H} - V(\theta))} }{\sqrt{\Lambda}}. \eeq If the value of $\mathcal{H}$ is such that either of the roots $\theta^*_\pm$ no longer exists, this indicates that the system is now on a precessional trajectory. In order to account for this, we can define the period on a precessional trajectory instead as \beq T(\mathcal{H}) = \int_{\theta_C - \pi}^{\theta_C}\frac{\d\theta}{\dot{\theta}}, \label{PERIODPRECESSIONAL} \eeq where $\theta_C$ is a local maximum of $V(\theta)$. On the precessional trajectories, we then have \beq \avg{\dot{\mathcal{H}}} = - \frac{1}{ T(\mathcal{H})}\int_{\theta_C-\pi}^{\theta_C} \chi(\theta,\mathcal{H}) \d \theta. 
\eeq In order to approximate the ODE solutions, we now decompose the dynamics of $\mathcal{H}$ into half-period time intervals. We thus take, at the $n$th timestep, $\mathcal{H}_n = \mathcal{H}(t_n)$, $t_{n+1} = t_n + T(\mathcal{H}_n)$ and \beq \mathcal{H}_{n+1} = \mathcal{H}_n - \int_{\theta^*_-(\mathcal{H}_n)}^{\theta^*_+(\mathcal{H}_n)} \chi(\theta,\mathcal{H}_n) \d \theta, \label{discretemap} \eeq if $\mathcal{H}_n$ corresponds to a limit cycle trajectory. The same discrete map applies to precessional trajectories, but with the integration limits replaced with $\theta_C - \pi$ and $\theta_C$, respectively. \subsection{Modelling switching with discrete map} In order to model switching starting from inside a well of $V(\theta)$, we can iterate the discrete map above, starting from an initial energy $\mathcal{H}_0$. We fix $\mathcal{H}_0$ by choosing a static initial condition $\theta(0)=\theta_0$ close to an odd multiple of $\pi$ (let us assume without loss of generality that we are close to $\pi$), and computing $\mathcal{H}_0 = V(\theta_0)$. On the oscillatory trajectories, the discrete map then predicts the maximum amplitudes of oscillation ($\theta^*_\pm(\mathcal{H}_n)$) at each timestep, by locally solving $\mathcal{H}_n = V(\theta)$ for each $n$. After some number of iterations, the trajectory will escape the local potential well, and one or both roots of $\mathcal{H}_n = V(\theta)$ will not exist. Due to the positive average slope of $V(\theta)$, the most likely direction for a trajectory to escape the potential well is $\dot{\theta}<0$ (`downhill'). Assuming this to be the case, at some timestep $t_N$, it will occur that the equation $\mathcal{H}_N = V(\theta)$ has only one root $\theta = \theta^*_+ > \pi$, implying that the trajectory has escaped the potential well, and will proceed on a precessional trajectory in a negative direction past $\theta=\pi/2$ towards $\theta=0$. 
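The iteration just described can be sketched end-to-end in a few dozen lines. The following Python illustration is our own implementation (the bisection root-finder, the quadrature resolution, and the safety cap are implementation choices; the substitution $\theta = \bar\theta + \Delta\sin u$ removes the inverse-square-root singularity of the period integrand at the turning points), run with the parameters of the switching example:

```python
import math

ALPHA, LAM, SIGMA, P_PERP = 0.01, 0.1, 0.08, 0.3   # switching-example parameters
SP = SIGMA * P_PERP

def V(th):
    """Potential V(theta) = sin^2(theta)/2 + sigma*p_perp*theta."""
    return 0.5 * math.sin(th) ** 2 + SP * th

def root(H, lo, hi):
    """Bisection for V(theta) = H on [lo, hi], assuming V - H changes sign there."""
    flo = V(lo) - H
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (V(mid) - H) * flo > 0.0:
            lo, flo = mid, V(mid) - H
        else:
            hi = mid
    return 0.5 * (lo + hi)

def half_period(H, th_lo, th_hi, n=2000):
    """T(H) = int dtheta / sqrt(2(H - V)); theta = mid + amp*sin(u) removes the
    inverse-square-root singularity at the turning points."""
    mid, amp = 0.5 * (th_hi + th_lo), 0.5 * (th_hi - th_lo)
    du, T = math.pi / n, 0.0
    for k in range(n):
        u = -0.5 * math.pi + (k + 0.5) * du
        th = mid + amp * math.sin(u)
        T += amp * math.cos(u) / math.sqrt(max(2.0 * (H - V(th)), 1e-300)) * du
    return T

def energy_loss(H, th_lo, th_hi, n=2000):
    """int chi dtheta over a half-period; the integrand vanishes at the turning
    points, so plain midpoint quadrature suffices."""
    h, total = (th_hi - th_lo) / n, 0.0
    for k in range(n):
        th = th_lo + (k + 0.5) * h
        total += (ALPHA + 2.0 * SIGMA * LAM * math.cos(th)) \
                 * math.sqrt(max(2.0 * (H - V(th)), 0.0)) / math.sqrt(LAM)
    return total * h

theta_C = 0.5 * math.pi + 0.5 * math.asin(2.0 * SP)   # barrier left of the well
theta_min = math.pi - 0.5 * math.asin(2.0 * SP)       # bottom of the well near pi

# Iterate H_{n+1} = H_n - int chi dtheta from the static initial condition
# theta(0) = pi until the downhill turning point disappears:
H, t, peaks = V(math.pi), 0.0, []
for _ in range(2000):              # safety cap on the number of half-periods
    if H >= V(theta_C):            # downhill root gone: trajectory escapes
        break
    th_lo = root(H, theta_C, theta_min)
    th_hi = root(H, theta_min, theta_C + math.pi)
    peaks.append((th_lo, th_hi))
    t += half_period(H, th_lo, th_hi)
    H -= energy_loss(H, th_lo, th_hi)
```

For these parameters the energy grows from $\mathcal{H}_0 = V(\pi)$, the recorded turning points $\theta^*_\pm(\mathcal{H}_n)$ trace out the widening oscillations, and the loop terminates when the predicted escape occurs.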
\begin{figure*}[t] \begin{center} \includegraphics[width=.9\textwidth]{averaged-switching.pdf} \caption{Switching solution (blue line) and its discrete approximation (green circles). Parameters: $\alpha=0.01$, $\Lambda=0.1$, $p_\perp = 0.3$, $\sigma=0.08$. Panel (a) shows the solution $\theta(t)$, and panel (b) shows the trajectory for this solution in the $\mathcal{H}-\theta$ plane. The red line in (b) shows $V(\theta)$.} \label{averaged-switching} \end{center} \end{figure*} To distinguish whether a trajectory results in switching or precession, we then perform a single half-period step on the precessional orbit from $\theta_C$ to $\theta_C - \pi$, and check whether $\mathcal{H} < V(\theta_C-\pi)$: if this is the case, the trajectory moves back to the oscillatory orbits around the well close to $\theta=0$, and decreases in energy towards the fixed point near $\theta=0$, representing switching. If, however, $\mathcal{H} > V(\theta_C-\pi)$ after the precessional half-period, the solution will continue to precess. In Fig. \ref{averaged-switching} below, we display the result of such an iterated application of the discrete map, for the same parameters as the switching solution given in Fig. \ref{odesols}(c). In Fig. \ref{averaged-switching}(a), the continuous curve represents the solution to \eqref{reducedODE}, and the points are the predicted peaks of the oscillations, from the discrete map \eqref{discretemap}. Fig. \ref{averaged-switching}(b) shows the energy of the same solution as a function of $\theta$. Again, the blue curve gives $\mathcal{H}(t)$ for the ODE solution, the green points are the prediction of the iterated discrete map, and the red curve is $V(\theta)$. The discrete map predicts the switching behavior quite well, only suffering some error near the switching event, when the change of $\mathcal{H}$ is significant over a single period. 
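For reference, the blue curves in Fig. \ref{averaged-switching} correspond to a direct numerical integration of \eqref{reducedODE}. A minimal fixed-step RK4 sketch of such an integration (a Python illustration of ours, not the code used for the figures) is:

```python
import math

def rhs(theta, omega, alpha, Lam, sigma, p_perp):
    """First-order form of the reduced macrospin ODE."""
    domega = (
        -(omega / math.sqrt(Lam)) * (alpha + 2.0 * sigma * Lam * math.cos(theta))
        - math.sin(theta) * math.cos(theta)
        - sigma * p_perp
        - sigma ** 2 * Lam * math.sin(theta) * math.cos(theta)
    )
    return omega, domega

def integrate(alpha=0.01, Lam=0.1, sigma=0.03, p_perp=0.2,
              theta0=math.pi, omega0=0.0, dt=0.01, t_end=500.0):
    """Fixed-step RK4 for the initial value problem theta(0)=theta0, theta'(0)=omega0."""
    th, om = theta0, omega0
    for _ in range(int(t_end / dt)):
        k1 = rhs(th, om, alpha, Lam, sigma, p_perp)
        k2 = rhs(th + 0.5 * dt * k1[0], om + 0.5 * dt * k1[1], alpha, Lam, sigma, p_perp)
        k3 = rhs(th + 0.5 * dt * k2[0], om + 0.5 * dt * k2[1], alpha, Lam, sigma, p_perp)
        k4 = rhs(th + dt * k3[0], om + dt * k3[1], alpha, Lam, sigma, p_perp)
        th += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        om += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return th, om

# Below the escape threshold (sigma = 0.03 < alpha/(2*Lambda) = 0.05) the
# oscillations decay and theta remains in the well near pi, as in panel (a):
th, om = integrate(sigma=0.03, p_perp=0.2, theta0=3.5)
```

Varying $\sigma$ and $p_\perp$ in this sketch reproduces the limit-cycle, switching, and precessing regimes as well.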
\subsection{Modelling precession} Here we apply the discrete map to a precessional solution---one in which the trajectory, once it escapes the potential well near $\pi$, does not get trapped in the next well, and continues to rotate. Fig. \ref{averaged-precession}(a) below displays such a solution $\theta(t)$ and its discrete approximation, and Fig. \ref{averaged-precession}(b) displays the energy of the same solution. Again, the prediction of the discrete map is excellent. \section{Transitions in trajectories} In this section we seek to understand the transitions between the trapping, switching, and precessional regimes as the current parameters $\sigma$ and $p_\perp$ are varied. \subsection{Escape Transition} Firstly, let us consider the transition from states which are trapped in a single potential well, such as those in Figs. \ref{odesols}(a,b), to states which can escape and either switch or precess. Effectively, the absolute threshold for this transition is for the value of $\mathcal{H}$ to be able to increase for some value of $\theta$ close to the minimum of $V(\theta)$ near $\pi$. Thus, we consider the equation of motion \eqref{DYNAMICSOFHAM} for $\mathcal{H}$, and wish to find parameter values such that $\dot{\mathcal{H}} > 0$ for some $\theta$ near $\pi$. This requires that \beq \frac{\omega^2}{\sqrt\Lambda}(\alpha + 2 \sigma \Lambda \cos \theta) < 0. \eeq Assuming that $\omega \neq 0$, we can see that the value of $\theta$ most favorable for satisfying this condition is $\theta=\pi$, yielding a theoretical minimum $\sigma = \sigma_s$ for the dimensionless current density for motion to be possible, with \beq \sigma_s = \frac{\alpha}{2\Lambda}. \eeq This is similar to the critical switching currents derived in previous work \cite{Pinna2013}. We then require $\sigma > \sigma_s$ for the possibility of switching or precession. Note that this estimate is independent of the value of $p_\perp$. 
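As a quick arithmetic check of this threshold, note that the bracket $\alpha + 2\sigma\Lambda\cos\theta$ first becomes negative at $\theta=\pi$, exactly when $\sigma$ exceeds $\alpha/(2\Lambda)$; in Python, with the parameter values used throughout:

```python
import math

alpha, Lam = 0.01, 0.1
sigma_s = alpha / (2.0 * Lam)   # escape threshold; 0.05 for these parameters

def pumping_possible(sigma, n=1000):
    """True if dH/dt > 0 is attainable, i.e. alpha + 2*sigma*Lam*cos(theta)
    is negative somewhere; the minimum of the bracket is attained at theta = pi."""
    return min(alpha + 2.0 * sigma * Lam * math.cos(2.0 * math.pi * k / n)
               for k in range(n)) < 0.0
```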
\subsection{Switching--Precessing Transition} We now consider the transition from switching to precessional states. This transition is rather delicate, and in general there is no sharp boundary between switching and precession: for certain parameters, the path that the trajectory takes once it escapes the potential well depends on how much energy it has as it does so. In fact, for fixed $\alpha$ and $\Lambda$, and values of $\sigma > \sigma_s$, we can separate the $(\sigma, p_\perp)$-parameter space into three regions: (i) after escaping the initial well, the trajectory always falls into the next well, and thus switches; (ii) after escaping, the trajectory may either switch or precess depending on its energy as it does so (and thus depending on its initial condition); (iii) after escaping, the trajectory completely passes the next well, and thus begins to precess. \begin{figure*}[t] \begin{center} \includegraphics[width=.9\textwidth]{averaged-precession.pdf} \caption{Precessing solution (blue line) and its discrete approximation (green circles). Parameters: $\alpha=0.01$, $\Lambda=0.1$, $p_\perp = 0.6$, $\sigma=0.1$. Panel (a) shows the solution $\theta(t)$, and panel (b) shows the trajectory for this solution in the $\mathcal{H}-\theta$ plane. The red line in (b) shows $V(\theta)$.} \label{averaged-precession} \end{center} \end{figure*} We can determine in which region of the parameter space a given point $(\sigma,p_\perp)$ lies by studying the discrete map \eqref{discretemap} close to the peaks of $V(\theta)$. Assume that the trajectory begins at $\theta(0) = \pi$, and is thus initially in the potential well spanning the interval $\pi/2 \leq \theta \leq 3\pi/2$. Denote by $\theta_C$ the point close to $\theta=\pi/2$ at which $V(\theta)$ has a local maximum. It is simple to compute \beq \theta_C = \frac{\pi}{2} + \frac{1}{2} \sin^{-1}(2\sigma p_\perp). 
\eeq Moreover, it is easy to see that all other local maxima of $V(\theta)$ are given by $\theta = \theta_C + k\pi$, for $k\in \mathbb{Z}$. We now consider trajectories which escape the initial well by crossing $\theta_C$. These trajectories have, for some value of the timestep $n$ while still confined in the initial well, an energy value $\mathcal{H}_n$ in the range \beq \mathcal{H}_{\text{\scriptsize{trap}}} < \mathcal{H}_n < V(\theta_C+\pi), \label{range} \eeq where we define $\mathcal{H}_{\text{\scriptsize{trap}}}$ to be the value of $\mathcal{H}_n$ such that the discrete map \eqref{discretemap} gives $\mathcal{H}_{n+1} = V(\theta_C)$. We thus have $\mathcal{H}_{n+1} > V(\theta_C)$. In order to check whether the trajectory switches or precesses, we then compute $\mathcal{H}_{n+2}$ and compare it to $V(\theta_C-\pi)$. We may then classify the trajectories as switching if $\mathcal{H}_{n+2} - V(\theta_C-\pi) < 0$, and precessional if $\mathcal{H}_{n+2} - V(\theta_C-\pi) > 0$. Figure \ref{prec_v_switch} displays a plot of $\mathcal{H}_{n+2} - V(\theta_C-\pi)$ against $\mathcal{H}_n - V(\theta_C+\pi)$. The blue line shows the result of applying the discrete map, while the red line is the identity line. Values of $\mathcal{H}_n-V(\theta_C+\pi)$ which are inside the range specified in \eqref{range} are thus on the negative $x$-axis here. We can classify switching trajectories as those for which the blue line lies below the $x$-axis, and precessing trajectories as those which lie above. In Fig. \ref{prec_v_switch}, the parameters are such that both of these trajectory types are possible, depending on the initial value of $\mathcal{H}_n$, and thus this set of parameters is in region (ii) of the parameter space. 
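The barrier locations used in this classification follow from $V'(\theta) = \sin\theta\cos\theta + \sigma p_\perp = 0$; a quick numerical check (Python, with the parameters of Fig. \ref{prec_v_switch}) confirms that $\theta_C$ and its translates $\theta_C + k\pi$ are critical points with $V'' < 0$:

```python
import math

def theta_C(sigma, p_perp):
    """Local maximum of V(theta) = sin^2(theta)/2 + sigma*p_perp*theta near pi/2."""
    return 0.5 * math.pi + 0.5 * math.asin(2.0 * sigma * p_perp)

def dV(theta, sp):
    """V'(theta) = sin(theta)cos(theta) + sigma*p_perp, with sp = sigma*p_perp."""
    return math.sin(theta) * math.cos(theta) + sp

def d2V(theta):
    """V''(theta) = cos(2*theta)."""
    return math.cos(2.0 * theta)

sigma, p_perp = 0.08, 0.35
tc = theta_C(sigma, p_perp)
# tc is a critical point with V'' < 0, and so are tc + k*pi for integer k.
```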
We note that, since the curve of blue points and the identity line intersect for some large enough value of $\mathcal{H}$, this figure implies that if the trajectory has enough energy to begin precessing, then after several precessions the trajectory will converge to one which conserves energy on average over a precessional period (indicated by the arrows). In region (i) of the parameter space, the portion of the blue line for $\mathcal{H}_n - V(\theta_C+\pi) < 0$ would have $\mathcal{H}_{n+2} - V(\theta_C-\pi) <0$, while in region (iii), they would all have $\mathcal{H}_{n+2} - V(\theta_C-\pi) >0$. We can classify the parameter regimes for which switching in the opposite direction (i.e. $\theta$ switches from $\pi$ to $2\pi$) is possible in a similar way. It is not possible to have a precessional trajectory moving in this direction ($\dot\theta >0$), though. For a given point $(\sigma, p_\perp)$ in parameter space, we may then predict which region that point lies in by computing relations similar to those in Fig. \ref{prec_v_switch}, and thus generate a theoretical phase diagram. In Fig. \ref{phase} below, we display the phase diagram in the $(\sigma, p_\perp)$-parameter space, showing the end results of solving the ODE \eqref{reducedODE} as a background color, together with predictions of the bounding curves of the three regions of the space, made using the procedure described above. The predictions of the discrete map, while not perfect, are quite good, and provide useful estimates of the different regions of parameter space. In particular, we note that the region where downhill switching reliably occurs (the portion of region (i) above the dashed black line) is estimated quite well. We also note that we expect the predictions of the discrete map to improve if the values of $\Lambda$ and $\alpha$ were decreased. 
\begin{figure}[b] \begin{center} \includegraphics[width=.45\textwidth,trim=40 0 0 20, clip=true]{prec_v_sw-eps-converted-to.pdf} \caption{Precession vs switching prediction from the discrete map. Parameters: $\alpha=0.01$, $\Lambda=0.1$, $p_\perp = 0.35$, $\sigma=0.08$. Values of $\mathcal{H}_n-V(\theta_C+\pi)$ to the left of the dashed line switch after the next period, the trajectory becoming trapped in the well around $\theta=0$. Values to the right begin to precess, and converge to a precessional fixed point of the discrete map.} \label{prec_v_switch} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth, trim=20 0 0 20,clip=true]{phase_diag-eps-converted-to.pdf} \caption{Macrospin solution phase diagram: $\alpha=0.01, \Lambda=0.1$. The background color indicates the result of solving the ODE \eqref{reducedODE} with initial condition \eqref{ICMacro}: the dark region to the left of the figure indicates solutions which do not escape their initial potential well, and the vertical dashed white line shows the computed value of the minimum current required to escape, $\sigma_s = \alpha/(2\Lambda)$. The black band represents solutions which decay, like in Fig. \ref{odesols}(a), while the dark grey band represents solutions like in Fig. \ref{odesols}(b). In the rest of the figure, the green points indicate switching in the negative direction like in Fig. \ref{odesols}(c), grey indicate switching in the positive direction, and white indicates precession like in Fig. \ref{odesols}(d). The solid black curves are the predictions of boundaries of the regions (as indicated in the figure) by using the discrete map, and the dashed line is the prediction of the boundary below which switching in the positive direction is possible. 
} \label{phase} \end{center} \end{figure} \section{Discussion} We have derived an underdamped PDE model for magnetization dynamics in thin films subject to perpendicular applied spin-polarized currents, valid in the asymptotic regime of small $\alpha$ and $\Lambda$, corresponding to weak damping and strong penalty for out-of-plane magnetizations. We have examined the predictions of this model applied to the case of an elliptical film under a macrospin approximation by using an orbit-averaging approach. We found that its predictions agree qualitatively quite well with previous simulations using full LLGS dynamics \cite{ChavesKent15}. A benefit of our reduced model is that it should faithfully reproduce the oscillatory nature of the in-plane magnetization dynamics while reducing computational expense compared to full micromagnetic simulations. In particular, in sufficiently small and thin magnetic elements the problem further reduces to a single second-order scalar equation. The orbit-averaging approach taken here enables the investigation of the transition from switching to precession via a simple discrete dynamical system. We identified the regions in parameter space where either switching or precession is predicted, as well as an intermediate region where the end result depends sensitively on initial conditions. It may be possible to further probe this region by including either spatial variations in the magnetization (which, in an earlier study \cite{ChavesKent15} were observed to simply `slow down' the dynamics and increase the size of the switching region), or by including thermal noise, which could instead result in a phase diagram predicting switching probabilities at a given temperature, or both. \section*{ACKNOWLEDGMENTS} Research at NJIT was supported in part by NSF via Grant No. DMS-1313687. Research at NYU was supported in part by NSF via Grant No. DMR-1309202.
\section*{Acknowledgments} This work was supported by the National Key Research and Development Program of China (2020AAA0109700) and the National Natural Science Foundation of China (62076167), and partially supported by NSERC OGP0046506. We would like to thank Wei Zeng and his team at Peng Cheng Laboratory (PCL) for providing computing resources to support this project. \section{Case Study} \label{case study} We first conduct case studies with Chinese GPT2. Case \ref{case:zh-1} and Case \ref{case:zh-2} are two cherry-picked examples. The prompt of the first example, Case \ref{case:zh-1}, is about the author's teacher. After fine-tuning without paragraph information, we can see that the generated continuation is related to the given prompt but pays too much attention to the gifts instead of the teacher, and generates content about a financial problem at the beginning. Although the middle portion of the continuation is well written, the latter half is poor, incomplete, and hard to understand. In contrast, the continuation generated with \textsc{Eop}~is much better, despite minor errors in word choice. Besides, the ending of the latter is much better, as the former just keeps generating until it reaches the maximum length. A similar trend can be seen in the second example, Case \ref{case:zh-2}. According to our observations, without \textsc{Eop}, the beginning of the generation is more relevant to the end of the input prompt, but the more it generates, the poorer the quality becomes. The generator with \textsc{Eop}~can generate multiple paragraphs related to the input with a reasonable ending, but each paragraph is more independent than in human writing. We further investigate the story generation results. Two different cases are shown in Case \ref{case:en-1} and Case \ref{case:en-2}. 
In the first case, the generation of DIY \textsc{Eop}~is better than \textsc{nl}~\textsc{Sep}, as it mentions the aliens multiple times and also generates the time ``100 years'' while the other one only talks about the planet. In Case \ref{case:en-2}, \textsc{nl}~\textsc{Sep}~is better than DIY \textsc{Eop}. The generator fine-tuned without paragraph information is the poorest in these two cases, which is consistent with our experimental results and indicates the paragraph information can benefit the story generation. \clearpage \onecolumn \subsection{Case 1 of Chinese-GPT2} \label{case:zh-1} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \textbf{Input prompt:}\\ \begin{CJK*}{UTF8}{gbsn} 老师,就像辛勤的园丁,每天都向我们灌溉知识。 老师,就像一支红烛,照亮了我们,燃尽了自己。 老师,就像那苍天大树,而我们,就是树下的一棵小草。 老师,就像那蓝天上的白云,而我们,就是自由翱翔的小鸟。 老师,就像那无尽的海水,而我们,就是哪快乐畅游的小鱼。 老师,就像那广袤无垠的大草原,而我们,就是哪草原上奔腾的骏马。 \\ \\ 每年快到教师节的时候,我都会无比的感谢老师,因为,有了老师,我们就是幸福的人,比那些在大山上不了学的,我们是多么的幸运。 有了老师,我们就是有了尾巴的鱼,在无尽的知识里畅游。 有了老师,我们就像有了翅膀的雄鹰,在蔚蓝的天空中,自由翱翔。 \\ \\ 我今年小学毕业,我离开了上了6年的小学,那里有熟知的人,熟悉的教室,可是那就是过去,是我回忆的地方,每年的教师节,我听到我们班主任说的最多的一句话就是:下次别再送了礼物了,我要的不是那些礼物,我要的是你们学习的态度,如果,你们要送的话,就送些手工做的,可以旧物利用,比如,叠个千纸鹤了,叠个星星了,都可以,不要用钱买,有那份钱,也可以买些学习用品。 \end{CJK*} \\ \textbf{Brief Translation:}\\ I am grateful to my teacher. The teacher's guidance to us is essential like the tail to the fish and the wings to the eagle. This year, I have graduated from primary school which is a place full of good memories. I still remember that every Teachers' Day, my teacher told us not to waste money to prepare gifts for her, as learning together is the best gift. 
\end{minipage} } \caption{Chinese input prompt and English translation.} \end{figure} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \textbf{Output of Chinese-GPT2 fine-tuned without \textsc{Eop}:} \\ \begin{CJK*}{UTF8}{gbsn} 我记得有一次,班主任说有同学家里穷,需要资助,因为我们很多\textbf{同学都没有孩子}({unreasonable}),一般都是自己挣钱,所以,我们同学的爸爸妈妈都说:"你们没钱没权,家里的\textbf{爸妈都没有孩子}({unreasonable}),这对我们来说是一种压力,因为我们的父母的学历都比你们高,就算我们再穷,我们也要好好读书,不然到时候就没的读了。"我听到这句话,心里像扎了根针一样,痛。\textbf{我希望老师知道我的心声,并对我进行语文课程的辅导,让我能更加好的理解课文。谢谢老师,让我在欢笑中学习知识,学会感恩,学会感谢辛苦付出的老师。老师,我们是彩虹,落在心里最美的地方,遇到,便是老天最大的幸运,让我们用最美的彩虹,每天绽放最灿烂的彩虹,为我们的校园增添一缕鲜花} (\textit{good continuation but should stop here.}) 奪す她,虽已不在人世,但我永远不会忘记她,为了我,我愿用自己全部的青春献出自己的真心,把我对老师的感谢埋进心底,告诉父母:我对老师的感恩,与幸福一样宝贵。笨呀,笨死了,我送了一本漫画书给她。我看到了她泪眼婆娑,一脸的疲惫,并且嘴里还喃喃着:"对不起,我是单亲家庭,不能送你一本书了。"我哽咽着,扶了扶眼镜,他一把拽住了我的衣服:"这有什么,我都已经这样了,还想着你,你居然还想走....."我再三道谢,沉默地笑了。笨呀,有什么好抱怨的,我不知道以后怎么办,以后,我愿不(\textit{incomplete}) \end{CJK*} \\ \textbf{Modified Google Translation:} \\ I remember a time when my teacher told us that one of our classmates need financial assistance, because many of \textbf{our classmates had no children}(\textit{unreasonable}) and generally made their own money, so our classmates' moms and dads said," You have no money and no rights. \textbf{Parents has no children}(\textit{unreasonable}) , which is a stress for us, because our parents have higher academic qualifications than you. Even if we are poor, we must study well, otherwise we will have no chance of studying. " At this point, my heart hurts like be inserted in a needle.\textbf{ I hope that the teacher will know my minds and provide me with Chinese language tutoring so that I can better understand the text. Thank you teacher, let me learn knowledge happily, learn to be grateful, learn to thank the hard-working teacher. Teacher, we are the rainbow. We fall in the most beautiful place in our hearts. When we meet, it is the greatest luck. 
Let us use the most beautiful rainbowe brightest rainbow every day and add a ray of flowers to our campus. }(\textit{good continuation but should stop here.}) [garbled tokens], she has passed away but I will never forget her. I told my mother that I am grateful to my teacher. I don't know what to do. In the future, I hope not(\textit{incomplete})\\ \end{minipage} } \caption{Generation from Chinese-GPT2 fine-tuned without \textsc{Eop}.} \end{figure} \clearpage \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \smallskip \noindent \textbf{Output of Chinese-GPT2 fine-tuned with \textsc{Eop}:}\\ \begin{CJK*}{UTF8}{gbsn} 老师,这是一个实实在在的老师,她知识渊博,课堂上,她激发我们的斗志,课下,她严格,刻苦,为我们全班同学排忧解难,可是在我的生活中,我最敬佩的老师是一位母亲,老师,是用爱心来温暖,去回报,用知识来创造美丽幸福的人。有了老师的教导,我们就像上帝创造出来的天使,是最纯净的天空,没有感伤,没有痛苦,每一天,我都在老师的呵护下,享受\textbf{天伦之乐}(\textit{unreasonable}),是您指引着我们,走向未来,是您为我们付出了许多,是您使我们在知识的海洋里健康地成长。\\ \\ 曾经有一位心灵手巧的老师,让我们学会手工活,学会穿衣服,学会修剪指甲,收拾桌椅,学会穿衣服,学会洗衣服......她就是我们的语文老师。说真的,我心里很感激我的语文老师。她教会我们词语的讲解和组合,教会我们汉字的发音,曾经我的心情很低落,她的安慰,为我指点迷津,让我重新燃起了奋斗的斗志。\\ \\ 我会好好的学习,才不会辜负老师的期望呢![\textsc{Eos}] \end{CJK*}\\ \textbf{Modified Google Translation:}\\ Teacher, this is a trustworthy teacher. She is very knowledgeable. In class, she inspires our fighting spirit. After class, she is strict and diligent. She solves problems for our classmates. The teacher is a mother, a teacher, a person who warms with love, returns, and uses knowledge to create beautiful and happy people. With the teaching of the teacher, we are like the angel created by God. It is the purest sky, no sentimentality, no pain. Every day, I am under the care of the teacher and enjoy the \textbf{family fun}(\textit{unreasonable}). It is you who lead us to the future, it is you who paid a lot for us, and it is you who made us grow up healthy in the ocean of knowledge. There used to be a handy teacher who taught us how to work by hand, how to dress, how to trim nails, how to clean tables and chairs, how to dress, how to wash clothes ... 
she is our Chinese language teacher. Seriously, I am very grateful to my Chinese teacher. She taught us the explanation and combination of words, taught us the pronunciation of Chinese characters. And when I was depressed, her consolation gave me pointers to help me rekindle my fighting spirit. I will study hard so that I will not disappoint the teacher's expectations! \end{minipage} } \caption{Generation from Chinese-GPT2 fine-tuned with \textsc{Eop}.} \end{figure} \clearpage \subsection{Case 2 of Chinese-GPT2} \label{case:zh-2} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \smallskip \noindent \textbf{Input Prompt:}\\ \begin{CJK*}{UTF8}{gbsn} 那时,正值夏季,烈日炎炎。\\ \\ 我和我的朋友们正走回学校的路上。 \end{CJK*} \\ \textbf{Brief Translation:}\\ It was a hot summer. I was on the way to walking back to school with my friends. \end{minipage} } \caption{Chinese input prompt and English translation.} \end{figure} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \smallskip \noindent \textbf{Output of Chinese-GPT2 fine-tuned without \textsc{Eop}}\\ \begin{CJK*}{UTF8}{gbsn} 
\indent凉爽的风吹来,我的心也舒畅了些许。走出校门,看到一大群人在热火朝天地玩着什么,我不禁打了个冷战。只见有几个小孩子挎着摇摇篮,走在空地上,摇摇篮里塞满了不听话的人,吵吵嚷嚷地叫着,吵闹的人群一个接着一个地的在空地上来回飞舞。只见他们不管是乱扔垃圾的,还是在家里乱扔东西的,总有那么一两个人,让人看了都觉得讨厌,那几个男生,在那儿使出浑身解数,往那儿扔垃圾,有的还不止一次,他们真的是抓破了脑袋才钻到垃圾堆里啊!"听说那些事儿很早就埋在那儿了。"他们越说越起劲,像每个夏季都会发生的那样,还拿起了"帚,拿起了沙包。回过神来,脸已经晒得通红,汗珠直往外流。我呆呆地站在那儿,思绪飞到了几年前,那时的我才十三岁,现在的我才二十四岁!打来打去,什么也没有留下。唉,打得还挺勤快。我转过身,在树底下端详着他们,又想起了他们,他们可真是坏事当前,坏事晚来了许多啊!我继续打打我那张臭脸。我的心猛地一抽,丢下书包,飞奔出去。在那片空地上,零零星星的人有的在玩儿,有的正在玩呢!他们又跳又叫,不分是非地叫着。我突然看见一个不起眼的小孩儿,他满脸通红,眼睛红得像个灯泡,让人看了如沐春风。他正用眼睛注视着那只有倒在地的人,他下意识地往上看,一会儿,上面散落着一些塑料瓶子。他不知道,还在玩儿呢!我忽然有些感动,急忙跑过去,跟在他后面,问:"他怎么了?为什么那么残忍。"小伙儿只是若无其事地回答:"我觉得他太可怜了。"他倒也会说话,但是有些话让我怎么说也说不出来。他一双大大的眼睛仿佛在说:"我这是为了家里,也是为了我们共同的孩子。现在,我是想退出,你们也别急,知道吗?要是真和这些小孩闹僵了,家里的学习教材再翻出来,你们就完了!"看着小伙儿那可怜的小眼睛,心里一阵酸楚,我不禁流下了眼泪。每次吵完架后,他总爱说:"我不怪你,可你就这样放弃我吧!我没钱啊!"唉,啊!唉!哎!气息都快从嗓子里出来了啊!唉!唉!唉!唉!那时,我仿佛听见了他的声音:"哎,孩子,爸爸的房屋可能拆迁了,你就住在那吧。等拆迁后我送你吧!我会好好学习的!"那声音大得连我自己都听不见,我的心仿佛像灌了铅似的。我也哭了,但没办法,谁叫人生是漫长的呢?那时我只会低着头走路,却找不到一根自来水。我生着闷气,仿佛快要窒息了。我急得满头大汗,冲到楼下,不停地喊:"救救我!救救我!"唉,我都快把脸憋得通红。这时,对面来了一辆出租车,我毫不犹豫地将 \end{CJK*} \\ \textbf{Modified Google Translation:}\\ The cool wind blew, and my heart was relieved a little. I walked out of the school gate and saw what a large group of people were doing something. I couldn't help fighting a cold war. I saw a few children carrying the cradle and walking on the open space. The cradle was full of disobedient people, screaming loudly, and the noisy crowd flew back and forth on the open space one by one. I saw that whether they were littering or throwing things at home, there were always one or two people that made people feel annoying when they saw it. Those boys, they tried to do everything and throw garbage there. Some more than once, they really broke their heads and got into the garbage dump! "I heard that those things were buried there very early." They said more and more vigorously, as they would happen every summer, and took up "Broom, took up the sandbag. When I looked back, my face was already sunburned It was red, sweat was flowing out. 
I stood there blankly, my thoughts flew a few years ago, when I was only 13 years old, and now I am only 24 years old! Stay. Alas, I played very diligently. I turned around, looked at them under the tree, and thought of them again. They are really bad things, and bad things come a lot late! I continue to beat my bad face . My heart twitched, dropped my schoolbag, and ran out. On the open space, some people were playing and some were playing! They jumped and called, regardless of right or wrong. Screaming. I suddenly saw an inconspicuous child, his face flushed, his eyes red like a light bulb, making people look like a spring breeze. He was staring at the only person who fell to the ground with his eyes, he subconsciously went up Look, for a while, there are some plastic bottles scattered on it. He doesn't know, he's still playing! I suddenly Some moved, quickly ran behind him, asked: "how he was? Why so cruel. "The boy just answered casually:" I think he's too pathetic. "He would speak, but some words made me unable to say what he said. His big eyes seemed to be saying," This is for my family and for our common children. Now, I want to quit. Don't worry, you know? If you are really upset with these children, you will find out the learning materials at home, and you are finished! "Looking at the poor little eyes of the guy, I felt sore in my heart, and I couldn't help crying. After each quarrel, he always said," I don't blame you, but you just give up on me! I have no money! "Oh, ah! Uh! Hey! The breath is almost coming out of my throat! Uh! Uh! Uh! Uh! Then I seemed to hear his voice:" Oh boy, my father's house may be demolished, You live there. I'll send you after the demolition! I will study hard! "That voice was so loud that I couldn't even hear it myself, my heart seemed to be filled with lead. I also cried, but there was no way, who made life long? At that time, I just walked with my head down, But I couldn't find a tap. 
I was sulking, as if I was about to suffocate. I was so sweaty that I rushed downstairs and kept shouting, "Save me! help me! "Well, I'm almost flushing my face. At this moment, a taxi came across, and I didn't hesitate to\\ \end{minipage} } \caption{Generation from Chinese-GPT2 fine-tuned without \textsc{Eop}.} \end{figure} \clearpage \begin{figure}[!htp] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \smallskip \noindent \textbf{Output of Chinese-GPT2 fine-tuned with \textsc{Eop}}\\ \begin{CJK*}{UTF8}{gbsn} \indent我们都还在上课,突然一阵急促的车铃声传来,我们低下了头,拿出粉笔,认真地练习。 \indent窗外的大树上,有一个乞丐,他向路人求助,却得不到人帮助,放弃了乞丐的工作,独自一人走了。他的脸上,永远挂着笑,脸颊上,微微露出皱纹。 \indent火车缓缓开过,熙熙攘攘的人群和我们汇成一片。那时,我正在大口大口地吃着零食。突然,看到一个乞丐,他颤巍巍地走了过来,我急忙拦住他:"小心,别被车轧到。"他却语气坚决,关切地问:"您怎么这么着急,知不知道车辆出轨了?我们的工作又不是那么危险,怎么可以冒险出来帮助别人呢?您一天到晚忙着赚钱,却不关心自己的健康,给钱这么不讲道理,真可怜。""啊,这是什么人啊?拿什么关心呀?这是个老人啊!他这样只顾自己,在哪打点零工,卖什么不关心。"一旁的马路上,几个小伙子正在大口大口地讨论着。我不禁流下了眼泪。含着眼泪,向其中一个乞丐致谢,这个老人见了我们,又弯下腰,打开自行车的后座,把钱给了我们,又对我们说:"不要再来烦我们了,我们的工作就是很危险的,也很需要钱的,不如来为国家做点力所能及的事吧。"我的泪水模糊了我的眼睛。 \indent慢慢地,车流渐渐远去,但有些在星星点点的人群里,依然在四处寻找着那个乞丐。夜深人静了,我一人在街上漫步,深深地感受到社会的冷漠和人性的丑恶。我的心又忍不住泛起阵阵潮湿的雨珠。一件一件事情,令我感到我自己的不幸。随后,雨又愈发大了。 \indent我环顾四周,在最好的车站等上了车,我并没有去接那乞丐。直到车子终于来了,我才有些不耐烦了。不是不想让别人帮我,而是那些恶作剧都是不成熟的表现。很多时候,对人也是冷漠的,甚至是冷嘲热讽。我有些生气,却又难以控制着。游戏在幸福的雨幕里。以后每当看见乞丐,我总是感到心酸。 \indent在接下来的日子里,我不再光顾街头,但行人仍多。我依然不理会任何人的言语,回家却无法挽回,那个老人又怎么会有人关心呢? \indent时间会冲淡一切,但我不想放弃,就算老天在我身边,我还是会努力,继续做好自己,虽然结果不尽如人意,但我依旧要用自己的行动报答社会。[\textsc{Eos}] \end{CJK*}\\ \textbf{Modified Google Translation:}\\ We were all still in class, and suddenly there was a rush of car bells. We lowered our heads, took out chalk, and practiced seriously. In the big tree outside the window, there was a beggar who asked for help from passersby, but could not get help. He gave up the work of the beggar and left alone. There was always a smile on his face, and a slight wrinkle appeared on his cheek. The train passed slowly, and the bustling crowd merged with us. At that time, I was eating snacks with a big mouthful. 
Suddenly, when I saw a beggar, he came tremblingly, and I hurriedly stopped him: "Be careful, don't get caught in the car." But he resolutely asked, "Why are you so anxious, do you know the car is out of track Now? Our work is not so dangerous, how can we take risks to help others? You are busy making money all day, but you don't care about your health. It 's so pitiful to give money so unreasonably. "" Ah, what is this? Man? What do you care about? This is an old man! He only cares about himself like this, where to do odd jobs and don't care what he sells. "On the side of the road, a few young men were talking in a big mouth. I can't help crying. With tears, thanks to one of the beggars. The old man saw us, then bent down, opened the back seat of the bicycle, gave us money, and said to us, "Don't bother us anymore, our work is very dangerous. I also need money, so let's do something for my country. "My tears have blurred my eyes. Slowly, the traffic gradually went away, but some of the little crowds were still looking for the beggar. The night was quiet, and I was walking on the street alone, deeply feeling the indifference of society and the ugliness of human nature. My heart could not help but burst with wet raindrops. One thing at a time makes me feel my own misfortune. Afterwards, the rain became heavy again. I looked around and waited at the best station. I didn't pick up the beggar. I didn't get impatient until the car finally came. It's not that you don't want others to help me, but that those pranks are immature. Many times, it is indifferent to people, even ridiculous. I'm a little angry, but hard to control. The game is in the rain of happiness. Whenever I see a beggar, I always feel sad. In the following days, I stopped patronizing the streets, but there were still many pedestrians. I still ignore the words of anyone, but I ca n't restore it when I go home. How could anyone care about that old man? 
Time will dilute everything, but I don't want to give up. Even if God is by my side, I will still work hard and continue to be myself. Although the results are not satisfactory, I still have to repay the society with my own actions. \end{minipage} } \caption{Generation from Chinese-GPT2 fine-tuned with \textsc{Eop}.} \end{figure} \clearpage \subsection{Case 1 of GPT2-117} \label{case:en-1} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \textbf{Input prompt:}\\ Aliens come to Earth in hoping to wipe us out and take over the planet, but are honorable and gives us a fighting chance. They announce they will return in 100 years so as to give Earth a chance to prepare and defend itself. Neither side expected what the other would bring to war \end{minipage} } \caption{English input prompt.} \end{figure} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} It was dark as the day began. Pure white, forever not knowing the night sky. This was the first time since their arrival that \textbf{ships} had been so dark, it was hard not to feel the tremors of terror as the drops of rain started falling. The rain started a minute after the \textbf{humans} had arrived. It started getting brighter as they piled up and felt the tremors of rain take their place. Another minute or two they saw them in the distance. One little shell, below them floated a little bigger. Behind that shell came another of them falling, one in each direction. Then, faster and faster the smaller the shell went. It was as if they were floating to each other, different colors and shapes sailing past each other in any direction, making the scene less momentary. The rain started to fall. The tall suits of \textbf{soldiers} patrolled the area. They didn't see what was happening but one was seeing some sort of change. He was slowly meeting the upper edge of the shell, directly under the soft snow. 
The rain started to fall and he was choking back the hail and it suddenly stopped flowing. He looked up at the Sun and the grass and saw the cloudless sky in front of him. He knew that had given him a new hope. He wasn't lying when he said that the atmosphere was starting to roll away, and just as he was about to turn around he felt a knock on his door. "I'm having a look at this. It's new... " " We'll wait. " " You have to go, have a look. " " Come with us." With that said the strange man took off his coat and sat on the porch of his house, waiting for the rain. \end{minipage} } \caption{Generation from GPT-117 fine-tuned without \textsc{Eop}/\textsc{Sep}.} \end{figure} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} A \textbf{planet} - there is no other more appealing than a\textbf{ peaceful planet}, \\ taking in the vast plains of green and dry, \\ allowing the native \textbf{species} to develop. \\ Among many other things, the greatest threat here was our neighbours as well, \\ hosting none but a single\textbf{ blue planet}. \\ With every news we received, while we was still, the green grass kept \\ tilling even the most \textbf{exotic plants}. \\ Against the odds, we agreed. This blue planet \\ is sure to die if it too falls, \\ as if the planet we are about to take hold of is not giving it. \\ Whether it is the region of lush land, \\ living in the shadow of endless sea, or the ones that lie beyond, \\ this \textbf{planet will feed our population} just as well. \end{minipage} } \caption{Generation from GPT-117 fine-tuned with \textsc{nl}~as \textsc{Sep}.} \end{figure} \clearpage \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} Robert stood in the room quietly. The pager in the edge of his TV blinked off as he contemplated what he was seeing. It was not his first visit by \textbf{alien ships}. 
Most of them had flown over the \textbf{galaxy in 882} since the foursome of an ancient race had first started transit in the Kardashev system. But they did not have a \textbf{star system} to conquer or destroy. \\ The strain of building a bridge was far from comfortable as it had become. Even hands and feet were shaking as they dutifully lugged the countless cargo back to their vessel. \textbf{Each alien lacked a ship just like him}. Their ships would not be ready \textbf{for another 100 years}. Without time, they would be easily overwhelmed. \\ The first of them came on us. The black, streaming in through a far wall. Its shape changing from green to orange as it accelerated. It came in waves across the curved surface. Its \textbf{metallic surface simply glowed off}. Robert had seen \textbf{the mouth of the creature. It had a back in it, almost reptilian. A spider-like body with a fangs that ripped through the flesh like a spider.} The mouth was growling with hunger in response. It roared as its muscles launched away from Robert to mate with him. \\ The next twenty-three seconds went by with little delay. Robert led the old stranger down a hallway to his repair bay. He was quick to greet the human with baited breath and followed close behind him. He took the sidearm from underneath his suit and held it in front of his eyes as the humans tore at the metal. A metal band formed around the metal covered mouth. The human was tongue hanging off his neck. \\ Without stopping, he freed his hands from the talons hanging around his head as he pressed the new badge on his hip. \\ Robert opened his mouth to scream but his goal was to prevent the humans from catching on to what was happening. Robert began unwrapping a vial of orange liquid. He approached the canine and began licking it, waiting. The canine howled as Robert removed his mask and walked away. He had fifty percent of the tanker harvested and stored at a distance of about five million miles. 
\\ The alien warship stopped in mid-flight and seemed to accelerate down again as Robert completed its tasks. He called back, letting out a whistle that was barely audible over the din of battle. Within a second the ship touched down at a small cove. The human ran over to him, mewling. \\ "Not again. " Robert said. \\ " Exactly, Mr. Tammoy. We have evidence of life support in here. " \textbf{The alien asked}. \\ " And what. Was. All. You. Deposed. " Robert responded. \\ " Quite literally. "\textbf{ The humanoid replied}. \\ " So, just one moment. Can't you just give it a chance? " \\ " Neither can anybody else." The creature continued, seemingly oblivious to its words and their implications. Robert slowly brought the cockpit of the ship's other 'Crew' to the nub. He unlocked the pods and stepped into the airlock. The robot was gone. Robert sat in the cockpit, the cylindrical container behind him. He took a deep breath and looked out into the blackness, following the exact pattern of the vessel. The black flesh was always fascinating. More often than not, he didn't really understand what it meant. \\ Robert walked past the humans towards his repair bay. It had not done much damage. He walked back past the humans and got into his pickup truck. With the last line of effort he pushed his way through the trio. \end{minipage} } \caption{Generation from GPT-117 fine-tuned with DIY \textsc{Eop}.} \end{figure} \clearpage \subsection{Case 2 of GPT2-117} \label{case:en-2} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} \smallskip \noindent \textbf{Input Prompt:}\\ You were driving to work one day and your car suddenly started flying. 
\end{minipage} } \caption{English input prompt.} \end{figure} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} ( got more into poetry / fiction themes/ before calling it Work ) My grandfather had 3 boxes of cake and a white sugar please Jane said "today is April 25th, and your seventh or eighth " I was in love by the looks of things, I hated her " Your father loves you at least nine times " I said " Oh, I know, it's really funny. " She looked at me with a confused expression " why would she do that " Then I began to back out of it like I told her I would. At that point Jane said something " no, she had a birthday today " and since I was the father of her child I felt really blessed for making her mother happy. We closed the front door and walked back into her house to eat cookies. You see, she almost did so fast, I was so pumped and pretty much took the bag to the kitchen for milk and honey, but here I was going to take everything home and spend it running errands. My wife moved back in a minute later, but I saw her and told her I was having this late so I told her " Yeah " " tell her I'll grab one at the bus stop tomorrow " She said you'll come back to bed early and eat one " goodbye " but I did hurry. My wife usually leaves early so \textbf{we're in the car} then this was her last day of school. At the bus stop it was all so bad that I'd have forgotten not to say goodbye " Hello, It seems I've had the worst day of my life. Please remember to check on Jane " " Excuse me child " I looked at her puzzled " oh my god " A strange feeling came over me and I knew I didn't have time for that in the moment. " K- Come on Jane " Suddenly I became stiff and lost my footing. I fell onto the curb and immediately collapsed onto the curb again. " What the HELL" followed by everything I missed out on. I knew my grandmother was trying to get me to go to the movies. 
\end{minipage} } \caption{Generation from GPT-117 fine-tuned without \textsc{Eop}/\textsc{Sep}.} \end{figure} \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} My heart was racing; I felt \textbf{something otherworldly being in my car}. " Hello again! Who is this? " I was already bracing for another \textbf{crash} at work.\\ " Shhh shhh " the object before me suddenly let go. I tried turning it on to get some air. Nope, this thing didn't change its positioning, it never got to anything. My arm feels like a pressure cooker, its chest feels like a heater. In all that time I never had expected to see anything like this happening. My legs were numb, and my back was numb. I was so tired. I was like a boat out of water and it seemed like it took all my energy to get the \textbf{gas pedal off}. " I swear to god please don't hurt me! " I screamed, and I was feeling for my hands like claws were opening my right leg.\\ Suddenly the skull popped open... \textbf{my car began to fly}..no where else to go except to look on the horizon and never see it again. I had been driving home in rage and frustration. \textbf{I need to get out of here}, I'm literally the black dog everywhere. "\textbf{ What the hell happened}? " I was about to ask out of fear of being right next to someone when a screeching fell from above. I slowly turned my head towards the heavens. \textbf{Something was definitely different in the sky.}\\ " Can you hear me? " I shouted, though I was standing a full 6 feet tall. I was so sure. * What is going on? * I wanted to yell out, to pretend that I didn't hear someone. " Help me! Please! " What! No. Please. Please help me! " I must have just tried to reach for my hand or something.\\ It landed in the grassland and started shaking. My chest felt heavier and it was hardly noticeable on the ground. It shook harder, and now my eyes were fixated on that star. " Fuuuuuk, Fuuuuuuuuuk... " *BANG*, the Earth shook. I was spinning and was floating. 
It was so bright I looked down at my arms, and \textbf{I felt like we were floating for what seemed like an eternity}. I was breathing like I was being ripped apart.\\ Then I felt something was dragging me. I don't know what it was, but it was drifting closer and closer and more and more until I felt as though I was about to fall and I could hear what could only be a small nervous chuckle escaping my lips. " God this is so weird! "\\ " What're you doing? "\\ " I'm.... In a strange alternate world.\textbf{ I'm falling from this sky}, touching a comet... "\\ " Hell no! This is \textbf{just a dream}... I'm falling from this sky and it's going to explode and everything is going to be fine until I touch a comet again. Then... this is it. All of this is right now and I can't see what's going on but I can feel it. I can feel it being lifted off of me and all of this knowledge and knowledge... "\\ *Brilliant* What?! It was the best dream I had ever had! I could definitely feel that tiny little planet being pulled up to the surface by whatever ship it was in. God, could I ever know what will happen to it like that? But what would happen to me if I touched one such comet! Even being floating in space with nothing to do?\\ I had finally had enough! " I'll get you out of here! I'll let you out... ( Once I get to the comet, I can feel the universe get pulling me to another dimension. ) I'll let you out! " I took off my sunglasses and put them on, and there was no way I wouldn't be in that crowd again.\\ After that nightmare I decided that I wasn't going to make it out of here alive. That I couldn't \textbf{ride a car }that would almost definitely end in killing someone. Whatever! 
I'll be back, that's all I need.\\ \textbf{I'll be free of this reality, *that dream* and open to some other universe!} \end{minipage} } \caption{Generation from GPT-117 fine-tuned with \textsc{nl}~as \textsc{Sep}.} \end{figure} \clearpage \begin{figure}[!htb] \centering \small \fbox{ \begin{minipage}{.95\textwidth} Here we go again, I'm trying to pick up my pace. My hands are dragging along the overgrowth and I hardly feel a thing. The sunlight beating down on me still yet dulling my vision. My hand suddenly feels like it's in a basketball as I pick myself up off the ground. \\ "Hey Wendy... You okay? " \\ " Just trying to get some fresh air on my lungs " \\ " Can you get me one of my of my stones to drink. I mean, \textbf{there is no use in driving} " \\ " Sure... " I say looking around, taking up a new position on a nearby skylight. While searching, one on a hill that seems pretty rounded in any way, two other ones are out of order, and I take notice of them trying to move out of the way. \\ Not even half an hour passes... I can feel them roll in and out of the path as I decide it's time to head out for the day. No, I don't give one. \\ " Would you like some fresh air for my shoulder? " \\ " How about that new Sonic X that's been around for the past couple years? Now as soon as I get off it, it can take me out of the sun. So just give me a moment of peace and rest " \\ I reach for my rock, still still clutching at my leg with my shoe. Yet as fast as I left it, it's trapped by my arm. I'm powerless to do anything... until I hear something coming down from the trees. " STOP! " I yell as I try to dodge it in a fast spiral. Before I can react, it's shoved right at me and\textbf{ I fall to the ground}. The sky is dark, smog filling the sky. 
Already I'm blacking out, the\textbf{ backlight on my car keeping me firmly in darkness.} \\ A crisp wind whipping about me, I grab my blanket from my chair and prepare to throw it at anything that could have managed to keep me with me. Bouncing out of my chair, I continue down the path where the road once was. \\ The wind is beginning to get stronger. More thunderstorms begin breaking out, as well as additional thunder. My turn comes up and the wind picks up. As soon as I can see it, it's nowhere to be seen. I'm only about 10 minutes away from the road, standing in the middle of the road. I hear \textbf{a voice screaming from my car}. A tall man in fatigues looks at me and \textbf{looks at my car}. " Damn... \textbf{I was driving}... " he says, before sprinting from my car and grabbing his wallet. He gives me a look of disgust, as if the only thing worse than avoiding the \textbf{highway} was choosing between two other men. \\ I ask him what was going on, and he smiles gently. " You think I'm lucky to get in, huh? \textbf{I really shouldn't be riding a car just yet}, you know. But I'm glad you're here! So if you don't mind if I drive me, I have a few things on my mind. " \\ " Alright, fine, whatever. Go, fasten the seat belt, you can't come back here any other way. Are you sure you're just going to excuse me, though?" \\ That was his last expression, before he limped away like a glutton.\\ \textbf{This is the end of my first attempt at writing nothing! 
Any thoughts of how to improve upon it?} \end{minipage} } \caption{Generation from GPT-117 fine-tuned with DIY \textsc{Eop}.} \end{figure} \section{\textsc{Eos}~and ParaType} \label{appendix: eos_ranking} \begin{figure*}[h] \centering \small \subfigure[Global View] {\includegraphics[width=0.40\textwidth]{figs/eos_ranking.pdf}\label{fig:global}} \subfigure[Local View] {\includegraphics[width=0.40\textwidth]{figs/eos_ranking_local.pdf}\label{fig:local}} \caption{Relationship between relative paragraph position and the ranking of the \textsc{Eos}~probability predicted by the last token of each paragraph.} \label{fig:appendix_eos} \vspace{-4mm} \end{figure*} We compare the relationship between \textsc{Eos}~and the different \textsc{Eop}/\textsc{Sep}~settings, as shown in Figure~\ref{fig:appendix_eos}. The horizontal axis represents the relative paragraph index, where 0 denotes the first paragraph and 100 the last paragraph of the story. The vertical axis represents the rank of the \textsc{Eos}~probability, among all tokens in the vocabulary, predicted by the last token of each paragraph. Since \textsc{Eos}~should be predicted only by the last token of the last paragraph, the ranking at position 100 should be high and the rankings at all other positions should be low. According to Figure~\ref{fig:global}, all models rank \textsc{Eos}~higher as the paragraph index increases. \textsc{Eop}~works better than \textsc{Sep}: as can be seen from Figure~\ref{fig:local}, the \textsc{Eop}~models rank \textsc{Eos}~higher at position 100 and lower at the other positions. \section{Human evaluation for WritingPrompts} \label{appendix: he} Human evaluation results for WritingPrompts are shown in Figure~\ref{fig:he_score_en} and Figure~\ref{fig:he_kappa_en}.
\begin{figure*}[h] \centering \small \subfigure[Topic relevance] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}45 \\ \textsc{Eop}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{55} & \\ \end{tabular}} \subfigure[Fluency] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}51 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}49 & \\ \end{tabular}} \subfigure[Ending quality] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}43 \\ \textsc{Eop}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{57} & \\ \end{tabular}} \subfigure[Overall preference] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}47 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}53 & \\ \end{tabular}} \caption{Average percentage of comparisons in which the system in the row outperforms the system in the column.} \label{fig:he_score_en} \end{figure*} \begin{figure*}[h] \centering \small \subfigure[Topic relevance] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.67 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.67 & \\ \end{tabular}} \subfigure[Fluency] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.69 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.69 & \\ \end{tabular}} \subfigure[Ending quality] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.72 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.72 & \\ \end{tabular}} \subfigure[Overall preference] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.58 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.58 & \\ \end{tabular}} \caption{Fleiss' kappa $\kappa$~(ranging from -1.0 to 1.0) for the reliability of raters' agreement.
} \label{fig:he_kappa_en} \end{figure*} \section{Truncated BLEU Score} \label{appendix: t-bleu} The BLEU score is easily affected by the length of the text: a short text might achieve a higher BLEU score than a long one. The average lengths of the texts generated by the different methods are shown in Table~\ref{table:results_tb_story} and Table~\ref{table:results_tb_essay}. An intuitive metric that accounts for this problem is the truncated BLEU~(T-BLEU) score. To compute the T-BLEU score, we first truncate the generated text to the length of its corresponding ground-truth text; the BLEU score of the truncated text is then the T-BLEU score. As we can see from Table~\ref{table:results_tb_story} and Table~\ref{table:results_tb_essay}, although the BLEU score improvements of \textsc{Eop}/\textsc{Sep}~become less significant on the Chinese dataset, the overall trend is similar to that of the normal BLEU scores. \section{Background} Our target task is auto-regressive language modeling over the WritingPrompts and ChineseEssay datasets. The basic assumption of auto-regressive generation is that the probability of a word sequence equals the product of the conditional word probabilities: $P\left( w_{1:T}|W_{0}\right) =\prod_{t=1}^{T}P\left( w_{t}|w_{1:t-1},W_{0}\right)$ where $W_{0}$ is the given context; in this work, $W_{0}$ can be a story prompt or the beginning of an essay. The generated sequence length $T$ is usually determined by the step at which the \textsc{Eos}~(end-of-sequence) token is generated: $P\left(w_{T}|w_{1:T-1}, W_{0}\right) =P\left(\textsc{Eos}|w_{1:T-1},W_{0}\right)$ In this work, the model computing the conditional probabilities is a self-attention Transformer~\cite{DBLP:conf/nips/VaswaniSPUJGKP17}. We train our model with the cross-entropy loss between the predicted conditional probabilities and the ground-truth next token. When the target of generation consists of multiple paragraphs, there are several approaches to indicating the paragraph ending.
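Two such approaches can be sketched concretely: joining paragraphs with a line-break separator, versus appending an explicit end-of-paragraph token to every paragraph. The following is a minimal Python illustration; the token strings `<eop>` and `<eos>` are placeholders, not the models' actual vocabulary entries.

```python
# Two ways to mark paragraph endings when assembling a training sequence.
# NOTE: the token strings below are illustrative placeholders.
NL, EOP, EOS = "\n", "<eop>", "<eos>"

def with_nl_separator(paragraphs):
    # p1 NL p2 NL ... pn EOS -- the last paragraph carries no separator
    return NL.join(paragraphs) + EOS

def with_eop_token(paragraphs):
    # p1 EOP p2 EOP ... pn EOP EOS -- every paragraph ends with EOP
    return "".join(p + EOP for p in paragraphs) + EOS
```

Note that only the second variant gives every paragraph, including the last one, the same explicit ending marker.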
The most common and obvious approach is to separate paragraphs with the line break \textsc{nl}: $w_{1:T} = p_{1}, \textsc{nl}, ..., p_{n-1}, \textsc{nl}, p_{n}, \textsc{Eos}$ where $p_i = \{w_{b_i:e_i}\}$ is the word sequence of paragraph $i$, from the beginning word $w_{b_i}$ to the ending word $w_{e_i}$. However, not every paragraph ends with \textsc{nl}, and during pre-training, not every \textsc{nl}~represents the paragraph separator~(\textsc{Sep}). A better option is to append a new specific token \textsc{Eop}~to indicate the end of a paragraph: $w_{1:T} = p_{1}', ..., p_{n-1}', p_{n}', \textsc{Eos}$ where $p_{i}' = \{w_{b_i:e_i},\textsc{Eop}\}$. Then every paragraph ends with \textsc{Eop}, and the Transformer-based language model can learn this feature from every paragraph in the training data, without having to distinguish when to generate \textsc{Eop}~and when not to. It is well known that greedy decoding and beam search usually lead to repetitive and degenerate outputs~\cite{DBLP:conf/acl/ShangLL15, DBLP:journals/corr/abs-1911-03587}. Sampling-based decoding methods, such as \textit{top-k} and \textit{top-p} sampling, have shown a strong ability to improve the diversity and fluency of text generated with pre-trained language models while reducing repetition. In this work, we choose the \textit{top-p} sampling decoding algorithm and set $p$ to 0.95. \section{Conclusion} In this paper, we have demonstrated that \textsc{Eop}~and \textsc{Eos}~information helps generate better text. Chinese GPT2 and English GPT2 are two existing models pre-trained without and with \textsc{Eop}, respectively, which provides a natural platform for our experiments. On the ChineseEssay dataset, text generation is significantly improved when fine-tuning with \textsc{Eop}~and \textsc{Eos}~information.
For the English task, on the other hand, although English GPT-2 was pre-trained with \textsc{nl}, which serves as \textsc{Eop}~to some degree, learning to end paragraphs can still benefit story generation in terms of automatic metrics and human evaluation results. \section{Introduction} Large pre-trained neural models such as GPT~\cite{gpt} and BERT~\cite{DBLP:journals/corr/bert} have achieved state-of-the-art results on many NLP tasks. Among these models, OpenAI's GPT2~\cite{gpt2}, for example, has been shown to be capable of generating long fluent text in many areas, such as stories~\cite{DBLP:conf/conll/SeePSYM19}, recipes~\cite{h2020recipegpt}, patent claims~\cite{DBLP:journals/corr/patentgpt}, and news~\cite{DBLP:conf/nips/grover}. However, the semantics of a text goes beyond what is written to what is not written: when to break paragraphs and when to end. We experiment on this issue: how much do \textsc{Eop}~and \textsc{Eos}~markers affect our ability to generate text with GPT2? To study the strength of GPT2 as a language generator, \citet{DBLP:conf/conll/SeePSYM19} conduct experiments in the context of story generation with the WritingPrompts~\cite{DBLP:conf/acl/LewisDF18} dataset. By comparing the top 150 generated words, they find that the outputs of GPT2 have higher-quality content (using more rare words, concrete words, and named entities). However, the average story in the dataset is 12 paragraphs (368 words) long. In such lengthy human writing, the overall layout and the text ending are also important, but whether GPT2 can generate them properly is unclear, and how to generate better endings has not been investigated. In this work, we find that the generated endings are affected not only by \textsc{Eos}~but also by \textsc{Eop}. \textsc{Eop}~can also help improve the topic relevance of the generated text.
We first conduct essay completion experiments with Chinese GPT2~\cite{GPT2-ML}, a character-level LM pre-trained without \textsc{Eos}~or \textsc{Eop}. Experimental results show that fine-tuning with \textsc{Eop}~achieves higher ending-quality and topic-relevance scores in human evaluation. We further conduct story generation experiments on the WritingPrompts dataset with English GPT2-117, which includes the line break ``\textbackslash n''~(\textsc{nl}) in its vocabulary. Thus, \textsc{nl}~can be treated as the end-of-paragraph marker during fine-tuning~\cite{DBLP:conf/emnlp/MaoMMC19}. Experimental results show that learning to end the paragraph benefits the word/token perplexity, BLEU score, \textsc{Eos}~perplexity, and human-evaluated ending quality score. Our contributions are as follows: We show that not only the well-known \textsc{Eos}~but also \textsc{Eop}~is part of the semantics of a text, and that training with this information improves text generation itself. Paragraph information can not only improve the effectiveness of the generation model but also help generate the ending of the text. We also investigate different approaches to incorporating paragraph information into the LM generator. Our findings indicate that \textsc{Sep}/\textsc{Eop}~and \textsc{Eos}~should be introduced to GPT2-style models during pre-training in order to generate better long-form text. \section{Experiments} \subsection{Datasets} \label{sec:datasets} \begin{table}[t] \centering \small \begin{tabular}{l|cc} Dataset &Story &Essay\\ \hline Language&English &Chinese\\ \hline \#Train samples &199,083 &1,615\\ \#Test samples &11,069 &461\\ \#Validation samples &11,474&195 \\ \hline \#Avg. words per sample &367.9& 571.3\\ \#Avg.
paragraphs per sample &12.1&5.6 \\ \hline \end{tabular} \caption{\label{table:datasets}Detailed information of the filtered WritingPrompts dataset and the ChineseEssay dataset.} \end{table} \smallskip \noindent \textbf{Story Generation.} The story generation dataset is WritingPrompts, collected by~\citet{DBLP:conf/acl/LewisDF18} from Reddit. It is a large dataset of 300K human-written stories, where each instance is a pair of a short prompt and a long story. Following~\citet{DBLP:conf/conll/SeePSYM19}, we exclude examples longer than 1024 BPE tokens to meet the maximum length restriction of GPT2. Statistics for this dataset are detailed in Table~\ref{table:datasets}. We sample 1000 examples from the test set for decoding. \smallskip \noindent \textbf{Essay Completion.} We build an essay completion dataset, ChineseEssay, which is collected from primary schools and annotated by native Chinese annotators. All these essays are descriptive essays about people, such as family members and teachers. Hence, compared with WritingPrompts, this dataset is smaller and less open-domain. Dataset statistics are also shown in Table~\ref{table:datasets}. \begin{table*}[ht!]
\centering \small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \textbf{ParaType}&\textbf{FT}&\textbf{\textsc{Eos}}&\textbf{T PPL}&\textbf{T PPL(-)}&\textbf{BLEU1}&\textbf{BLEU2}&\textbf{DIST1}&\textbf{DIST2}&\textbf{\textsc{Eos}\%}&\textbf{\textsc{Eos}~PPL}\\ \hline \multirow{3}{*}{None} & No &No &12.12&11.48&33.6&7.5&34.46&73.95&0&-\\ \cline{2-11} & Yes&No &11.44&11.44&38.1 &9.9 &32.95 & 73.96&0&-\\ \cline{2-11} & Yes&Yes &10.43&\textbf{10.42}&42.7&10.7&37.57&78.26&76.41&22.15\\ \hline \textsc{Sep}~DIY& Yes&Yes &10.45&10.52&44.1&11&38.73&78.98&90.26&8.92\\ \hline \textsc{Eop}~DIY& Yes&Yes &\textbf{10.34}&10.48&\textbf{45.4}&\textbf{11.2}&\textbf{40.18}&\textbf{80.61}&\textbf{93.07}&\textbf{2.74}\\ \hline \end{tabular} \caption{\label{table:results_essay}Test results of different models with/without fine-tuning(FT) on ChineseEssay dataset.} \vspace{-2mm} \end{table*} \begin{table*}[t] \centering \small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \textbf{ParaType}&\textbf{FT}&\textbf{W PPL}&\textbf{W PPL(-)}&\textbf{T PPL}&\textbf{T PPL(-)}&\textbf{\textsc{Eos}~PPL}&\textbf{BLEU1}&\textbf{BLEU2}&\textbf{DIST1}&\textbf{DIST2}\\ \hline \multirow{2}{*}{None} & No &42.53&42.20&34.42&34.17&295.50&20.3&2.2&\textbf{58.87}&\textbf{89.78}\\ \cline{2-11} & Yes &31.34&31.35&25.81&25.81&4.63&30.4&4.6&50.07&87.12\\ \hline \multirow{2}{*}{\textsc{Sep}~} & No &39.97&42.00&32.43&33.79&102.93&20.3&2.2&58.87&89.78\\ \cline{2-11} & Yes &\textbf{29.36}&\textbf{31.24}&\textbf{24.23}&\textbf{25.57}&4.32&31.2&4.3&50.15&85.88\\ \hline \textsc{Sep}~DIY& Yes &30.23&32.17&24.99&26.38&4.48&31.5&6.8&48.57&83.84\\ \hline \multirow{2}{*}{\textsc{Eop}~\textsc{nl}}& No &40.10&41.84&32.52&33.68&26478.91&20.3&2.2&58.87&89.78\\ \cline{2-11} & Yes &29.95&31.32&24.70&25.63&20534.60&30.7&4.3&49.79&85.44\\ \hline \textsc{Eop}~DIY& Yes &30.18&32.21&24.95&26.41&\textbf{2.26}&\textbf{31.7}&\textbf{6.9}&48.32&83.82\\ \hline \end{tabular} \caption{\label{table:results_story}Test results on WritingPrompts 
dataset. } \end{table*} \subsection{Experimental Settings} \label{sec:settings} \smallskip \noindent \textbf{Model Configuration.} For Chinese essay generation, we use Chinese-GPT2~\cite{GPT2-ML}, a 48-layer Transformer with 1.5 billion parameters, pre-trained on a 15GB Chinese corpus. For story generation, we fine-tune OpenAI's GPT2-117 on WritingPrompts following previous work~\cite{DBLP:conf/conll/SeePSYM19, DBLP:conf/emnlp/MaoMMC19}. The GPT2-117 model has 12 layers and 117 million parameters. During fine-tuning, the batch size is 32 and the warm-up steps are 800. The other hyperparameters are the same as the default setting of Huggingface Transformers~\cite{DBLP:journals/corr/abs-1910-03771}. Models converge after 15 epochs for GPT2-117 and 3 epochs for Chinese-GPT2. The checkpoints with the best evaluation results on the validation set are chosen for further testing. \smallskip \noindent \textbf{Automatic Metrics.} We use the following metrics: perplexity over all words/tokens~(W/T PPL); perplexity over words/tokens excluding \textsc{Eos}/\textsc{Eop}/\textsc{Sep}~(W/T PPL(-)); perplexity of \textsc{Eos}~(\textsc{Eos}~PPL); percentage of the generated texts that end with \textsc{Eos}~(\textsc{Eos}\%); BLEU/Distinct scores excluding \textsc{Eos}/\textsc{Eop}/\textsc{Sep}~(BLEU/DIST). All perplexities are macro-averaged. \smallskip \noindent \textbf{Human Evaluation Metrics.} We also conduct a human evaluation with 50 random samples from the test set. For ChineseEssay, we collect generations from the \textsc{Eos}~fine-tuned model, the \textsc{Eos}+\textsc{Eop}~fine-tuned model, and the \textsc{Eos}+\textsc{Sep}~fine-tuned model. For WritingPrompts, we collect generations from the model fine-tuned with \textsc{Eop}~and the model without \textsc{Sep}/\textsc{Eop}. Four native speakers are asked to compare the generations of different systems in pairs over four metrics:\ topic relevance, fluency, ending quality, and overall preference.
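For concreteness, the DIST-$n$ diversity scores used above can be computed as the percentage of distinct $n$-grams among all generated $n$-grams; the corpus-level aggregation in this sketch is our assumption, not necessarily the exact implementation used here.

```python
def distinct_n(samples, n):
    """DIST-n: percentage of unique n-grams among all n-grams.

    `samples` is a list of token lists; aggregating over the whole
    corpus (rather than per sample) is an assumption of this sketch.
    """
    total, unique = 0, set()
    for tokens in samples:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return 100.0 * len(unique) / total if total else 0.0
```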
The assessors were presented with pairs of generated output and asked to make a three-way judgment:\ whether the ``left system'' was better, the ``right system'' was better, or ``cannot distinguish''. The latter option meant that both outputs were equally good or equally bad. To prevent inadvertent bias, all systems were blinded, i.e., the assessors did not know which system generated which output, and the presentation order was randomized. After annotation, we count the total number of times each system outperforms the others, and then normalize to 0--100\%. \begin{figure*}[h] \centering \small \subfigure[Global View] {\includegraphics[width=0.40\textwidth]{figs/eos_ranking.pdf}\label{fig:global}} \subfigure[Local View] {\includegraphics[width=0.40\textwidth]{figs/eos_ranking_local.pdf}\label{fig:local}} \caption{Relationships between paragraph relative position and the ranking of the \textsc{Eos}~probability predicted by the last token of each paragraph.} \label{fig:appendix_eos} \vspace{-4mm} \end{figure*} \begin{figure*}[ht!]
\centering \small \subfigure[Topic relevance] {\begin{tabular}{lm{0.4cm}m{0.4cm}m{0.4cm}} &None &\textsc{Sep}~&\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}43 &\cellcolor[HTML]{cccccc}45 \\ \textsc{Sep}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{57} & &\cellcolor[HTML]{cccccc}52 \\ \textsc{Eop}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{55} &\cellcolor[HTML]{cccccc}48 & \\ \end{tabular}} \subfigure[Fluency] {\begin{tabular}{lm{0.4cm}m{0.4cm}m{0.4cm}} &None &\textsc{Sep}~&\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}47 &\cellcolor[HTML]{cccccc}53 \\ \textsc{Sep}~&\cellcolor[HTML]{cccccc}53 & &\cellcolor[HTML]{cccccc}54 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}47 &\cellcolor[HTML]{cccccc}46 & \\ \end{tabular}} \subfigure[Ending quality] {\begin{tabular}{lm{0.4cm}m{0.4cm}m{0.4cm}} &None &\textsc{Sep}~&\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}37 &\cellcolor[HTML]{cccccc}34 \\ \textsc{Sep}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{63} & &\cellcolor[HTML]{cccccc}49 \\ \textsc{Eop}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{66} &\cellcolor[HTML]{cccccc}51 & \\ \end{tabular}} \subfigure[Overall preference] {\begin{tabular}{lm{0.4cm}m{0.4cm}m{0.4cm}} &None &\textsc{Sep}~&\textsc{Eop}\\ None & &\cellcolor[HTML]{cccccc}42 &\cellcolor[HTML]{cccccc}47 \\ \textsc{Sep}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{58} & &\cellcolor[HTML]{cccccc}50 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}53 &\cellcolor[HTML]{cccccc}50& \\ \end{tabular}} \caption{Average percentage of systems in row outperform system in column. Results are normalized without considering the Cannot Distinguish examples. } \label{fig:he_score} \end{figure*} \begin{figure*}[ht!] 
\centering \small \subfigure[Topic relevance] {\begin{tabular}{lm{0.5cm}m{0.5cm}m{0.5cm}} &None &\textsc{Sep}~&\textsc{Eop}\\ None & &\cellcolor[HTML]{cccccc}0.89 &\cellcolor[HTML]{cccccc}0.69 \\ \textsc{Sep}~&\cellcolor[HTML]{cccccc}0.89 & &\cellcolor[HTML]{cccccc}0.70 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.69 &\cellcolor[HTML]{cccccc}0.70 & \\ \end{tabular}} \subfigure[Fluency] {\begin{tabular}{lm{0.5cm}m{0.5cm}m{0.5cm}} &None &\textsc{Sep}~&\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.61 &\cellcolor[HTML]{cccccc}0.60 \\ \textsc{Sep}~&\cellcolor[HTML]{cccccc}0.61 & &\cellcolor[HTML]{cccccc}0.65 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.60 &\cellcolor[HTML]{cccccc}0.65 & \\ \end{tabular}} \subfigure[Ending quality] {\begin{tabular}{lm{0.5cm}m{0.5cm}m{0.5cm}} &None &\textsc{Sep}~&\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.68 &\cellcolor[HTML]{cccccc}0.63 \\ \textsc{Sep}~&\cellcolor[HTML]{cccccc}0.68 & &\cellcolor[HTML]{cccccc}0.71 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.63 &\cellcolor[HTML]{cccccc}0.71 & \\ \end{tabular}} \subfigure[Overall preference] {\begin{tabular}{lm{0.5cm}m{0.5cm}m{0.5cm}} &None &\textsc{Sep}~&\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.52 &\cellcolor[HTML]{cccccc}0.34 \\ \textsc{Sep}~&\cellcolor[HTML]{cccccc}0.52 & &\cellcolor[HTML]{cccccc}0.51 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.34 &\cellcolor[HTML]{cccccc}0.51& \\ \end{tabular}} \caption{Fleiss' kappa $\kappa$~\cite{fleiss1971measuring} for the reliability of raters' agreement. 
The interpretation of $\kappa$'s value should be: poor agreement~($<0$), slight agreement~(0\textendash0.2), fair agreement~(0.21\textendash0.4), moderate agreement~(0.41\textendash0.6), substantial agreement~(0.61\textendash0.8), and almost perfect agreement~(0.81\textendash1).} \label{fig:he_kappa} \end{figure*} \section{Results} \label{sec:results} The results of different settings of utilizing paragraph information (ParaType) are shown in Table~\ref{table:results_essay} and Table~\ref{table:results_story}: concatenating all paragraphs into an uninterrupted sequence~(None); concatenating all paragraphs with ``\textbackslash n'' as the paragraph separator~(\textsc{Sep}~\textsc{nl}); concatenating all paragraphs with a new token ``[\textsc{Sep}]'' as the paragraph separator~(\textsc{Sep}~DIY); appending ``\textbackslash n'' to the end of each paragraph~(\textsc{Eop}~\textsc{nl}); appending a new token ``[\textsc{Eop}]'' to the end of each paragraph~(\textsc{Eop}~DIY). \smallskip \noindent \textbf{Automatic Metrics.} For Chinese essay generation, since Chinese-GPT2 is pre-trained without any special tokens~(\textsc{Eos}/\textsc{Eop}/\textsc{Sep}), without fine-tuning it keeps generating until it reaches the maximum length limitation. In this case, we first compare None models fine-tuned with and without \textsc{Eos}~in Table~\ref{table:results_essay}. We find that both the T PPL~(-) and BLEU scores are better with \textsc{Eos}. However, even when fine-tuned with \textsc{Eos}, only 76.41\% of the generated texts end with \textsc{Eos}. After adding \textsc{Eop}, the \textsc{Eos}~PPL plunges from 22.15 to 2.74 and the \textsc{Eos}\% rises from 76.41 to 93.07, indicating that more generated essays end with the \textsc{Eos}~after learning to end paragraphs. The BLEU scores are also improved. It should be noted that the BLEU score is affected by the length of the text.
We further truncate all generations to the length of the ground-truth story to calculate the truncated BLEU scores, which are detailed in Appendix~\ref{appendix: t-bleu}; the overall trend is consistent. Finally, the ground-truth essays get 41.2 DIST1 and 82.65 DIST2, which means \textsc{Eop}~DIY achieves the closest DIST scores to the ground truths. On the other hand, English GPT2 is pre-trained with \textsc{Eos}~and the line break \textsc{nl}. Hence, we first compare GPT2 fine-tuned without \textsc{nl}, with \textsc{nl}, and with the new tokens ``[\textsc{Sep}]''/``[\textsc{Eop}]''. According to Table~\ref{table:results_story}, the fine-tuned GPT2 with \textsc{nl}~as \textsc{Sep}~achieves the best results on the word- and token-level perplexity metrics. Compared with the model fine-tuned without paragraph information, all the models with \textsc{Eop}/\textsc{Sep}~achieve better BLEU scores. We further report the length-truncated BLEU scores in Appendix~\ref{appendix: t-bleu}; the overall trend is consistent. As for the diversity scores, the DIST1 and DIST2 of the ground-truth stories are 50.23 and 85.07, and \textsc{Sep}~\textsc{nl}~is the closest one. In addition to the better PPL and BLEU scores, we find that learning to end paragraphs benefits the prediction of \textsc{Eos}. The \textsc{Eop}~DIY model achieves the lowest \textsc{Eos}~PPL, and all models trained with \textsc{Eop}/\textsc{Sep}~achieve better \textsc{Eos}~PPL than the model without paragraph information, except \textsc{Eop}~\textsc{nl}. This observation indicates that GPT2 tends not to generate the \textsc{Eos}~following the \textsc{nl}~even after fine-tuning, but it can learn better \textsc{Eos}~with the help of a new \textsc{Eop}~token. \begin{figure*}[h!]
\centering \small \subfigure[Topic relevance] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}45 \\ \textsc{Eop}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{55} & \\ \end{tabular}} \subfigure[Fluency] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}51 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}49 & \\ \end{tabular}} \subfigure[Ending quality] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}43 \\ \textsc{Eop}~&\cellcolor[HTML]{4285f4}\color{white}\textbf{57} & \\ \end{tabular}} \subfigure[Overall preference] {\begin{tabular}{lm{0.4cm}m{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}47 & \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}53 & & \\ \end{tabular}} \caption{Average percentage of systems in row outperform system in column.} \label{fig:he_score_en} \end{figure*} \begin{figure*}[h!] \centering \small \subfigure[Topic relevance] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.67 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.67 & \\ \end{tabular}} \subfigure[Fluency] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.69 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.69 & \\ \end{tabular}} \subfigure[Ending quality] {\begin{tabular}{lm{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.72 \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.72 & \\ \end{tabular}} \subfigure[Overall preference] {\begin{tabular}{lm{0.4cm}m{0.4cm}m{0.4cm}} &None &\textsc{Eop}~\\ None & &\cellcolor[HTML]{cccccc}0.58 & \\ \textsc{Eop}~&\cellcolor[HTML]{cccccc}0.58 & & \\ \end{tabular}} \caption{Fleiss' kappa $\kappa$~(ranging from -1.0 to 1.0) for the reliability of raters' agreement. 
} \label{fig:he_kappa_en} \end{figure*} We further compare the relations between \textsc{Eos}~and different \textsc{Eop}/\textsc{Sep}~settings, as shown in Figure~\ref{fig:appendix_eos}. The horizontal axis represents the relative paragraph position: 0 means the beginning paragraph and 100 means the last paragraph of the story. The vertical axis represents the ranking position of the \textsc{Eos}~probability among all tokens in the vocabulary, predicted by the last token of each paragraph. As \textsc{Eos}~should only be predicted by the last token of the last paragraph, the ranking at position 100 should be high and at the other positions low. According to Figure~\ref{fig:global}, all models rank \textsc{Eos}~higher as the paragraph index increases. \textsc{Eop}~works better than \textsc{Sep}, as the \textsc{Eop}~models rank \textsc{Eos}~higher at the 100th position and lower at the other positions, which can be seen from Figure~\ref{fig:local}. \smallskip \noindent \textbf{Human Evaluations.} Human evaluation results are shown in Figure~\ref{fig:he_score}. Each cell represents the percentage of the examples on which the row system wins over the column system. Cells are filled in blue if the row system outperforms the column system by 10\% or more. It should be noted that Cannot Distinguish examples are skipped when counting winners for these figures. From Figure~\ref{fig:he_score}, we first find that learning to end paragraphs leads to better ending quality: 63\% and 66\% of the results are rated better when comparing the \textsc{Sep}/\textsc{Eop}~systems with the None system, while only 37\% and 34\% of the results are rated better for the None system. We also find that \textsc{Eop}'s text endings are slightly better than \textsc{Sep}'s. This is consistent with the \textsc{Eos}~PPL and \textsc{Eos}\% results in Table~\ref{table:results_essay}.
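The win percentages in these figures follow a simple tally in which Cannot Distinguish judgments are skipped; a sketch (labels and the function name are ours):

```python
def win_rates(judgments):
    """Normalize pairwise human judgments to 0-100% win rates.

    `judgments` is a list of 'A', 'B', or 'tie' labels for one pair of
    systems; 'tie' (Cannot Distinguish) cases are skipped when counting
    winners, as in the evaluation protocol.
    """
    wins_a = sum(1 for j in judgments if j == 'A')
    wins_b = sum(1 for j in judgments if j == 'B')
    decided = wins_a + wins_b
    if decided == 0:
        return 50.0, 50.0
    return 100.0 * wins_a / decided, 100.0 * wins_b / decided
```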
Although \textsc{Sep}~wins on topic relevance and fluency, the overall preference of \textsc{Sep}~compared with \textsc{Eop}~is 50\%, which means these two systems are similar for human raters. Besides, we also find that \textsc{Sep}~and \textsc{Eop}~achieve better topic relevance and overall preference. For fluency, there is no significant difference among the systems. We also report Fleiss' kappa $\kappa$ in Figure~\ref{fig:he_kappa} to assess the reliability of the raters' agreement. $\kappa<0$ means poor agreement, and $\kappa\in(0.6,0.8]$ means substantial agreement. From this figure, we find that most of the values fall into the substantial agreement group. The overall preference falls into moderate agreement, because this metric is more subjective than the others. Human evaluation results for WritingPrompts are shown in Figure~\ref{fig:he_score_en} and Figure~\ref{fig:he_kappa_en}. Assessors still prefer the model fine-tuned with \textsc{Eop}~over the one without \textsc{Eop}/\textsc{Sep}. \smallskip \noindent \textbf{Case Study.} We further conduct a case study, detailed in Appendix~\ref{case study}. The most important observation is that, without \textsc{Eop}, the beginning of the generation is more relevant to the end of the input prompt, but the longer it generates, the poorer the quality becomes. In contrast, the generator with \textsc{Eop}~can generate multiple paragraphs related to the input with a reasonable ending, but each paragraph is more independent than in human writing. \section*{Appendix Overview} In this supplementary material, we provide additional experimental results of the truncated BLEU score in Appendix \ref{appendix: t-bleu}, and several generations in Appendix \ref{case study}. \input{appendix/truncated_bleu.tex} \input{appendix/case_study.tex}
\section{Introduction} The present period of information infrastructure development is distinguished by the active interaction of various computer applications with huge information retrieval systems. This activity raises the demand for efficient data compression methods that, on the one hand, provide a satisfactory compression rate and, on the other, support fast search operations in compressed data. Along with this, the need for code robustness, in the sense of limiting possible error propagation, has also strengthened. As is known, in large textual databases classical Huffman codes \cite{huffman}, when applied to words considered as symbols, show good compression efficiency approaching the theoretical best. Unfortunately, Huffman's encoding does not allow a fast direct search in compressed data by a given compressed pattern. At the expense of losing some compression efficiency, this was amended by introducing byte-aligned tagged Huffman codes. These are Tagged Huffman Codes \cite{THufmann}, End-Tagged Dense Codes (ETDC) \cite{ETDC}, and (s,c)-Dense Codes (SCDC) \cite{SCDC}. In these constructions, codewords are represented as sequences of bytes, which along with the encoded information incorporate flags for the end of a codeword. An alternative approach to compression coding stems from using Fibonacci numbers of higher orders. The mathematical study of Fibonacci codes was started in the pioneering paper \cite{AF}. The authors first introduced a family of Fibonacci codes of higher orders with an emphasis on their robustness. They proved the completeness and universality of these codes. The strongest argument for the use of Fibonacci codes of higher orders in data compression is given in \cite{Kl}, \cite{Kl1}. For these codes, the authors developed fast byte-aligned algorithms for decoding \cite{KlDec} and search in compressed text \cite{KlSearch}.
They also showed that Fibonacci codes have better compression efficiency compared with ETDC and SCDC, while still being somewhat inferior in decompression and search speed even if byte-aligned algorithms are applied. Evidently, the structure of a code strongly depends on the form of the initial data representation. Note that in their constructions many integer encodings use two-part information. For instance, the simplest Run-Length Codes use pairs (\emph{the count of a symbol in a run, symbol}). The famous Elias \cite{El}, Levenshtein \cite{Lev} and many other codes that use their own length \cite{Salomon} exploit the paired integer information (\emph{bit length, binary representation}). The Golomb \cite{Golomb} and the Golomb-Rice \cite{Rice} codes use pairs (\emph{quotient, remainder}) under integer division by a fixed number. So, we argue that many code constructions fit into the following general scheme: (i) According to some mathematical principle, each element of the input alphabet is put into one-to-one correspondence with a sequence of ordered integer pairs. Some relationships inside pairs and among pairs could be specified. (ii) For encoding pairs, some variable-length uniquely decodable function is chosen. (iii) To obtain the resultant codeword of a sequence of pairs, the corresponding codewords of pairs are concatenated in direct or reverse order. (iv) A special delimiter could be appended to the obtained binary sequence. This general scheme could be specified in many ways. One such variant, with an emphasis on splitting a code into simpler basic components, is considered in this presentation. We introduce and study a family of binary codes that are derived from encoding sequences of ordered integer pairs with restrictions on one of the pair's components.
Namely, we consider the initial data representation of the form $(\Delta_{1}, k_{1})\ldots(\Delta_{t}, k_{t})$, where all integers $\Delta_{i}$ are upper bounded by some constant $d$, the values $k_{i}$ are not bounded, $0\leq\Delta_{i}\leq d$, $0<k_{i}$, $i=1,\ldots,t$. Each pair is encoded using the concatenation of two fixed independent prefix encoding functions applied to the corresponding components of the pair. A codeword consists of the sequential concatenation of those pair encodings. We call such codes splittable. Depending on the tasks to be solved, one can choose a variety of coding functions to encode each pair $(\Delta, k)$. This way we construct a code, which we call a $(\Delta, k)$-code. In the same way, by using the dual representation $(k_{1},\Delta_{1}),\ldots,(k_{t},\Delta_{t})$, we define $(k,\Delta)$-codes. The families of $(\Delta,k)$ and $(k,\Delta)$-codes constitute the set of splittable codes. By giving such a name to the considered codes we want to stress that the structure of a code reflects the splittable nature of the initial data representation by simpler integral parts. Splittable codes could be considered as a generalization of Golomb's codes, which contain only one $(k,\Delta)$-pair. Splittable codes are well structured. Each codeword, including delimiters, is the concatenation of an integral number of corresponding $(\Delta,k)$ or $(k,\Delta)$-pair encodings. This regularity of the code structure also facilitates proving its important properties, such as completeness, universality, and density. In spite of the fact that $(\Delta,k)$ and $(k,\Delta)$-sequences carry the same information about the coded data, their encodings could be very different. We prove that any Fibonacci code belongs to the class of $(k,\Delta)$-codes and cannot be any $(\Delta,k)$-code. An important family of $(\Delta,k)$-codes are variable-length codes with multiple delimiters. These codes are the main subject of our study.
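To illustrate the single-pair case, the Golomb-Rice instance of Golomb's codes (divisor $M=2^{j}$) encodes one $(k,\Delta)$-pair: the unbounded quotient in unary and the remainder, bounded by the divisor, in $j$ bits. A minimal sketch, assuming the common terminating-zero unary convention:

```python
def rice_encode(n, j):
    """Golomb-Rice code with divisor M = 2**j: one (k, Delta)-pair.

    The unbounded quotient q = n // M is written in unary (q ones and a
    terminating zero); the bounded remainder r = n % M in j binary digits.
    """
    q, r = n >> j, n & ((1 << j) - 1)
    return '1' * q + '0' + format(r, f'0{j}b')

def rice_decode(bits, j):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index('0')                 # length of the unary part
    r = int(bits[q + 1:q + 1 + j], 2)   # j-bit remainder
    return (q << j) + r
```

Since the quotient code is prefix-free and the remainder has fixed length $j$, the concatenation of the two component encodings is itself prefix-free, which is exactly the splittable-code construction in miniature.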
A delimiter is a synchronizing string that makes it possible to uniquely identify the boundaries of codewords under their concatenation. In our case, each delimiter consists of a run of consecutive ones surrounded by zero brackets. Thus, delimiters have the form $01\ldots10$. A delimiter either can be a proper suffix of a codeword, or it arises as the concatenation of the codeword-ending zero and a codeword of the form $11\ldots10$. The number of ones in delimiters is defined by a given fixed set of positive integers $m_{i}, i=1,2,\ldots,t$. The multi-delimiter code of that form is denoted by $D_{m_{1},\ldots,m_{t}}$. We prove that any multi-delimiter code $D_{m_{1},\ldots,m_{t}}$ is a $(\Delta, k)$-code and thus splittable. By their properties, multi-delimiter codes are close to Fibonacci codes of higher orders. We prove the completeness and universality of those codes. There also exists a bijection between the set of natural numbers and any code $D_{m_{1},\ldots,m_{t}}$. This bijection is implemented by simple encoding and decoding procedures. For practical use, we present a byte-aligned decoding algorithm, which has better computational characteristics than that of the Fibonacci codes developed in \cite{Kl1}. As shown in \cite{Kl1}, the Fibonacci code of order three, denoted by Fib3, is the most effective for text compression. From our study it follows that the simple code $D_{2}$ with one delimiter $0110$ has asymptotically higher density than Fib3, although it is slightly inferior in compression rate for realistic alphabet sizes of natural language texts. We also note that, by varying delimiters for better compression, we can adapt multi-delimiter codes to a given probability distribution and alphabet size. Thus, for example, we compare the codes $D_{2,3}$, $D_{2,3,5}$ and $D_{2,4,5}$ with the code Fib3. Those multi-delimiter codes are asymptotically less dense than Fib3.
Nevertheless, the alphabet sizes of the texts used in practice are relatively small, from a few thousand up to a few million words. For texts of such sizes the above-mentioned multi-delimiter codes outperform the Fib3 code in compression rate. The conducted computational experiment shows that, for example, the code $D_{2,3,5}$ gives an average codeword length $2-3\%$ shorter than the Fib3 code when encoding the Bible and some other known texts. Even in encoding one of the largest natural language text corpora to date, the English Wikipedia, the code $D_{2,3,5}$ is still superior, as are the codes $D_{2,3}$ and $D_{2,4,5}$. Multi-delimiter codes, like Fibonacci codes, are static codeword sets not depending on any probability distribution. For a multi-delimiter code there exists an easy procedure for generating all words of a given length. Therefore, these codes allow an easy vocabulary representation for compression and decompression procedures. To create the vocabulary, one only needs to sort symbols according to the probabilities of their occurrences. Due to robust delimiters, multi-delimiter codes are synchronizable with a synchronization delay of at most one codeword. The properties of multi-delimiter codes mainly rely on a finite set of special suffixes. Sets of words with a given fixed suffix, which cannot occur in other places of a word, are known as pattern codes. Properties of these codes such as synchronizability, completeness, universality, and the average codeword length have been intensively studied \cite{Lakshman}-\cite{Lima}. Multi-delimiter codes, even with one delimiter, are not pattern codes, although they belong to the class of universal codes that are regular languages \cite{Capoc}. The structure of this presentation is as follows. Prior to the introduction of splittable codes, we proceed with the consideration of two simpler codes of that type.
In Section 3, with the purpose of showing how $(\Delta,k)$-constructions arise in integer encodings, we briefly consider a specific integer representation using the two-base numeration system with the main radix 2 and the auxiliary radix 3. This representation yields a typical $(\Delta, k)$-code with restrictions given by the inequalities $0\leq\Delta\leq2$, $0<k$. This code is universal, but it is not complete. In Section 4 we show that it can be embedded into the larger one-delimiter code set $D_{2}$, which is complete. In Section 5 we introduce splittable codes and discuss $(\Delta,k)$ versus $(k,\Delta)$-codes. We argue that $(\Delta,k)$-codes have some advantages compared with $(k,\Delta)$-codes. That includes the possibility to form a wider variety of short codewords and more efficient codeword separation. In Section 6 we introduce multi-delimiter codes $D_{m_{1},\ldots,m_{t}}$. We prove the above-mentioned main properties of these codes: being a $(\Delta, k)$-code, completeness, and universality. A bijective correspondence between the set of natural numbers and the codewords of any code $D_{m_{1},\ldots,m_{t}}$ is established in the next section. For multi-delimiter codes we present simple algorithms for encoding integers and decoding codewords. To accelerate the procedure of decoding, we describe the general scheme of a byte-aligned algorithm. Using the code $D_{2}$ as a representative of the considered family of codes, a byte-aligned decoding algorithm is presented in detail in Section 8. Comparative density characteristics of different multi-delimiter codes and the code Fib3 are given in Section 9. Our conclusion is the following. The introduced multi-delimiter codes form a rich adaptive family of robust data compression codes that could be useful in many practical applications. \section{Definitions and notations} Denote by $\{0,1\}^*$ the set of all strings in the alphabet $\{0,1\}$. Let $m$ be a non-negative integer.
Denote by $1^{m}$ (respectively $0^{m}$) the sequence consisting of $m$ consecutive ones (respectively $m$ zeros). The empty string corresponds to the value $m = 0$. A run of consecutive ones in a word $w$ is called isolated if it is a prefix of this word ending with zero, or it is its suffix starting with zero, or it is a substring of $w$ surrounded by zeros, or it coincides with $w$. For a word $w\in\{0,1\}^{*}$ its length is denoted by $|w|$. A code is a set of binary words. A code is called prefix (prefix-free) if no codeword is a prefix of another codeword. A code is called uniquely decodable (UD) if any concatenation of codewords is unique. Every prefix code has the UD property. A code is called complete if any extension of it leads to a code that is not UD. Let $(\Delta_{0},k_{0})...(\Delta_{t},k_{t})$ be a sequence of ordered integer pairs, where $0\leq\Delta_{i}\leq d, 0<k_{i}$. For simplicity, in the sequel, pairs $(\Delta_{i},k_{i})$ of that type are called $(\Delta,k)$-pairs, and a sequence of such pairs is called a $(\Delta,k)$-sequence. The symbols $\Delta$ and $k$ can be viewed as names of variables corresponding to the values $\Delta_{i}$ and $k_{i}$. We encode the values $\Delta$ and $k$ by some fixed prefix binary codes. The codeword of a $(\Delta,k)$-pair is the concatenation of the codewords corresponding to the parameters $\Delta$ and $k$. The codeword of a $(\Delta,k)$-pair is called a $(\Delta,k)$-group. In an analogous way, by changing the order in pairs, we define $(k,\Delta)$-pairs, $(k,\Delta)$-sequences, and $(k,\Delta)$-groups. Fibonacci numbers of order $m\geq1$, denoted by $F^{(m)}_{i}$, are defined by the recurrence relation: $F^{(m)}_{n}=F^{(m)}_{n-1}+F^{(m)}_{n-2}+...+F^{(m)}_{n-m}$ for $n>1$, with $F^{(m)}_{1}=1$ and $F^{(m)}_{n}=0$ for $-m<n\leq0$.
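The recurrence for $F^{(m)}_{n}$ can be evaluated directly from the initial conditions above; a small sketch:

```python
def fib_m(m, n):
    """Fibonacci number of order m: F_n = F_{n-1} + ... + F_{n-m},
    with F_1 = 1 and F_i = 0 for -m < i <= 0 (n >= 1 assumed)."""
    vals = {i: 0 for i in range(-m + 1, 1)}  # initial zeros
    vals[1] = 1
    for i in range(2, n + 1):
        vals[i] = sum(vals[i - j] for j in range(1, m + 1))
    return vals[n]
```

For $m=2$ this reproduces the ordinary Fibonacci numbers, and for $m=3$ the "tribonacci" sequence underlying the Fib3 code discussed below.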
The Fibonacci code of order $m$, denoted by Fib\emph{m}, is the set consisting of the word $1^{m}$ and all other binary words that contain exactly one occurrence of the substring $1^{m}$, and this occurrence is the word's suffix \cite{Kl1}. For any $n$ the Fibonacci code Fib$m$ contains exactly $F^{(m)}_{n}$ codewords of length $n+m$. \section{Lower $(2,3)$-representation of numbers} Representation of numbers in the mixed two-base numeration system using the main radix 2 and the auxiliary radix 3 was first introduced in \cite{Ani1}. Prefix encoding of integers using this representation was studied in \cite{Ani2}. The so-called lower (2,3)-representation of numbers, which is a modification of the general (2,3)-representation, was introduced in \cite{AZ}. Let us briefly describe its essence. Let ${\mathbb{N}_{2,3}}$ be the set of natural numbers that are coprime with 2 and 3, $x\in {\mathbb{N}_{2,3}}$, $x>1$, $n =\lfloor\log_{2}x\rfloor$, $1\leq m\leq n$. A very simple idea stands behind the $(2,3)$-integer representation. Note that for any positive integer $m$ the integers $2^{m}$ and $2^{m-1}$ give different residues modulo 3. Therefore, $x$ can be uniquely represented in one of the forms $2^{m} + 3^{k}x_{1}$ or $2^{m-1} + 3^{k}x_{1}$, where $x_{1}$ also belongs to ${\mathbb{N}_{2,3}}$ and $k \geq 1$. In the general $(2,3)$-representation of $x$ the maximal value is chosen for $m$, $m=\lfloor\log_{2}x\rfloor$. In the lower $(2,3)$-representation we use the shifted value, $m=\lfloor\log_{2}x\rfloor -1$. Such a choice of $m$ provides a more balanced form of the $(2,3)$-integer partition. Thus, any number $x$ belonging to the set ${\mathbb{N}_{2,3}}$ can be uniquely represented in one of the forms $2^{n-1}+ 3^{k}x_{1}$ or $2^{n-2} + 3^{k}x_{1}$, where $x_{1}\in {\mathbb{N}_{2,3}}$, $x_{1} <x, k\geq 1$. Applying the same decomposition procedure to $x_{1}$, we obtain the remaining number $x_{2}$.
In general, at the $i$-th stage of the iterative procedure, we get the remaining number $x_{i+1}$ such that $x_{i} = 2^{n_{i}}+3^{k_{i}}x_{i+1}$, where $n_i=\lfloor\log_{2}x_i\rfloor-1$ or $n_i=\lfloor\log_{2}x_i\rfloor-2$. This process continues recursively until at a certain iteration $t-1$ we obtain $x_{t} = 1$ or $x_{t} = 2$ (in the latter case $x_{t-1} = 7 = 2^{0} + 3\cdot 2$). A lower $(2,3)$-code is defined as any code in the binary alphabet $\{0,1\}$ that can be used to restore the sequence of values $x_{t}, x_{t-1},\ldots, x_{1}, x$. One such code is obtained using the so-called $(\Delta,k)$-approach. Note that for the unambiguous reconstruction of the number $x$ it is sufficient to keep the sequence of pairs given by the values $\Delta_{i} = \lfloor \log_{2}(3^{k_{i}}x_{i+1})\rfloor -n_{i}$ and $k_{i}$, $i = 0,\ldots,t-1$, which are obtained at each iteration during the decomposition of $x$. For the lower $(2,3)$-representation the following remarkable property holds: the parameter $\Delta_{i}$ defined above can take only three values, $0$, $1$, and $2$ \cite{AZ}. Thus, a number $x$ is uniquely associated with the numerical sequence of pairs $(\Delta_{0},k_{0}), (\Delta_{1},k_{1}),\ldots,(\Delta_{t-1},k_{t-1})$, where $0 \leq\Delta_{i} \leq 2$, $0 < k_{i}$. For the lower $(2,3)$-encoding, we use the following specific binary encoding of pairs. The value $\Delta$ is encoded as follows: $\Delta = 2$ by the symbol $0$, $\Delta=1$ by the word $11$, and $\Delta=0$ by the word $10$. The value $k$ is encoded by the word $1^{k-1}0$, with some exceptions arising due to the selection of a delimiter; in these exceptional cases, the codeword for $k$ is $1^{k}0$. The codeword of a number $x$ is the sequential concatenation of the corresponding $(\Delta,k)$-groups. For the lower $(2,3)$-code the groups are written in the reverse order with respect to the order in which they are obtained during encoding: $(\Delta_{t-1},k_{t-1}),\ldots,( \Delta _{0},k_{0})$.
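The iterative decomposition described above can be sketched as follows (Python; an illustrative implementation with a function name of our choosing, not the authors' reference code):

```python
def lower_23_pairs(x):
    """(Delta_i, k_i) pairs of the lower (2,3)-representation of x,
    together with the final remainder x_t (1 or 2)."""
    pairs = []
    while x > 2:
        n = x.bit_length() - 1                     # floor(log2 x)
        # choose m in {n-1, n-2} so that x - 2^m is divisible by 3
        m = n - 1 if (x - 2 ** (n - 1)) % 3 == 0 else n - 2
        r = x - 2 ** m                              # equals 3^k * x_{i+1}
        k = 0
        while r % 3 == 0:
            r //= 3
            k += 1
        delta = (r * 3 ** k).bit_length() - 1 - m   # floor(log2(3^k x_{i+1})) - n_i
        pairs.append((delta, k))
        x = r                                       # the remaining number x_{i+1}
    return pairs, x

# decompositions from Table 1
assert lower_23_pairs(19) == ([(1, 1), (0, 1)], 1)
assert lower_23_pairs(25) == ([(2, 1), (2, 1)], 2)
assert lower_23_pairs(43) == ([(0, 3)], 1)
```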
This allows decoding to be performed from left to right and makes it simpler. Since every $(\Delta,k)$-group, and hence each codeword, ends with the symbol $0$, the word $0110$ can serve as a delimiter. To form the delimiter, it is necessary to append the string $110$ to the end of some words. If the last group of a codeword, corresponding to the pair $(\Delta_{0}, k_{0})$, takes the form $0110$ or $10110$, i.e. $k_{0} = 3$ and $\Delta_{0}\neq 1$, then it already contains the delimiter, so there is no need to append the string $110$ to the end of the word. Thus, the $(\Delta,k)$-groups $110$, $0110$, $10110$ are separating ones; if any of them occurs, a codeword ends with it. The last group $110$ of a codeword, which is externally appended, does not correspond to any pair $(\Delta,k)$ taking part in the lower $(2,3)$-representation, and it has to be ignored during decoding, but the groups $0110$ and $10110$ have to be taken into consideration. So, no $(\Delta,k)$-group that corresponds to a pair may take the form $110$, and no $(\Delta,k)$-group, except the last one, may take the form $0110$ or $10110$. However, codewords of pairs $(\Delta_{i},k_{i})$ obtained in the lower $(2,3)$-factorization can violate these conditions. Namely, this undesirable situation occurs when: \begin{enumerate} \item $\Delta = 1$ and $k = 1$ (then the group $110$ is formed); \item $\Delta\neq 1$, $k = 3$ and the corresponding $(\Delta,k)$-group is not the last one (it is one of the groups $0110$ or $10110$). \end{enumerate} It is easy to check (and this is shown in \cite{AZ}) that for the group $(\Delta_{t-1}, k_{t-1})$, which is written first in a codeword, case 1) is impossible. Therefore, to avoid the undesirable situations mentioned above, instead of $1^{k-1}0$ we encode the value $k$ in a $(\Delta,k)$-group by the string $1^{k}0$ in the following cases: $\Delta = 1$ and the $(\Delta,k)$-group is not the first one; $\Delta\neq 1$, $k\geq 3$ and the $(\Delta,k)$-group is not the last one.
The code constructed in this way is a prefix code for the set of positive integers that are coprime with 2 and 3. The number 1, for which the lower $(2,3)$-factorization is empty, corresponds to the shortest codeword $110$. Together with the last zero of a preceding codeword this word forms a delimiter. By $C^{low}_{2,3}$ we denote the lower $(2,3)$-code described above. To encode an arbitrary positive integer $n$, it is necessary to find the $n$-th number in the ascending series of numbers that are coprime with 2 and 3. This number equals $x = 3n-(n\bmod 2)-1$. Thus, to encode $n$, one has to find the lower $(2,3)$-representation of $x$ and encode it. Table \ref{tab1} shows the 15 smallest numbers, their lower $(2,3)$-representations, and the corresponding codewords of the lower $(2,3)$-code. \begin{table}[!t] \caption{Lower $(2,3)$-representations and codewords of the first fifteen numbers} \label{tab1} \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline $n$ & $x$ & $(\Delta_{0},k_{0})$ & $x_{1}$ & $(\Delta_{1},k_{1})$ & $x_{2}$ & code \\ \hline \hline 1 & 1 & & & & & 110 \\ \hline \hline 2 & 5 & 0,1 & 1 & & & 100 110 \\ \hline \hline 3 & 7 & 2,1 & 2 & & & 00 110 \\ \hline \hline 4 & 11 & 2,2 & 1 & & & 010 110 \\ \hline \hline 5 & 13 & 1,2 & 1 & & & 1110 110 \\ \hline \hline 6 & 17 & 0,2 & 1 & & & 1010 110 \\ \hline \hline 7 & 19 & 1,1 & 5 & 0,1 & 1 & 100 1110 110 \\ \hline \hline 8 & 23 & 0,1 & 5 & 0,1 & 1 & 100 100 110 \\ \hline \hline 9 & 25 & 2,1 & 7 & 2,1 & 2 & 00 00 110 \\ \hline \hline 10 & 29 & 1,1 & 7 & 2,1 & 2 & 00 1110 110 \\ \hline \hline 11 & 31 & 2,3 & 1 & & & 0110 \\ \hline \hline 12 & 35 & 1,3 & 1 & & & 11110 110 \\ \hline \hline 13 & 37 & 0,1 & 7 & 2,1 & 2 & 00 100 110 \\ \hline \hline 14 & 41 & 2,1 & 11 & 2,2 & 1 & 010 00 110 \\ \hline \hline 15 & 43 & 0,3 & 1 & & & 10110 \\ \hline \end{tabular} \end{table} As mentioned above, the last element in a lower $(2,3)$-representation is the number $x_{t} = 1$ or $x_{t} = 2$.
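Putting the decomposition and the group encoding rules together, the whole encoder can be sketched in a few lines (Python; an illustrative implementation whose function names are ours, checked against the codewords of Table \ref{tab1}):

```python
def lower_23_pairs(x):
    # (Delta_i, k_i) pairs of the lower (2,3)-representation of x
    pairs = []
    while x > 2:
        n = x.bit_length() - 1
        m = n - 1 if (x - 2 ** (n - 1)) % 3 == 0 else n - 2
        r = x - 2 ** m
        k = 0
        while r % 3 == 0:
            r //= 3
            k += 1
        pairs.append(((r * 3 ** k).bit_length() - 1 - m, k))
        x = r
    return pairs

def encode_lower_23(n):
    x = 3 * n - (n % 2) - 1          # the n-th number coprime with 2 and 3
    pairs = lower_23_pairs(x)
    word = ""
    # groups are written in reverse order of how they were obtained
    for i, (d, k) in enumerate(reversed(pairs)):
        first, last = i == 0, i == len(pairs) - 1
        # the exceptional encoding 1^k 0 protects the delimiter 0110
        exceptional = (d == 1 and not first) or (d != 1 and k >= 3 and not last)
        word += {2: "0", 1: "11", 0: "10"}[d]
        word += "1" * (k if exceptional else k - 1) + "0"
    if pairs and pairs[0][1] == 3 and pairs[0][0] != 1:
        return word                   # the last group is 0110 or 10110
    return word + "110"               # otherwise append the string 110

# codewords from Table 1
assert encode_lower_23(1) == "110"
assert encode_lower_23(5) == "1110110"
assert encode_lower_23(7) == "1001110110"
assert encode_lower_23(11) == "0110"
assert encode_lower_23(14) == "01000110"
```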
Hence, decoding starts from one of these numbers; then the sequence of numbers $x_{t},\ldots, x_{1}$, $x_{0}=x$ is calculated as follows. Using the values $x_{i+1}$, $\Delta_{i}$ and $k_{i}$, we calculate $n_{i}=\lfloor \log_{2}(3^{k_{i}}x_{i+1})\rfloor -\Delta_{i}$, and hence we can obtain $x_{i}=2^{n_{i}}+3^{k_{i}}x_{i+1}$. Note that $x_{t}=2$ if and only if $\Delta_{t-1}=2$ and $k_{t-1}=1$; in all other cases $x_{t}=1$ \cite{AZ}. Thus, there is no ambiguity at the starting point of the decoding procedure. \section{Code $D_{2}$} The existence of a delimiter for the code $C^{low}_{2,3}$ means that this code is prefix-free. However, it is not complete, i.e. the set of its codewords can be expanded without losing the UD property. To demonstrate this, we construct a prefix code that contains all the codewords of $C^{low}_{2,3}$, and some more. This code is quite simple to define. It consists of the word $110$ and all other binary words that do not start with the string $110$, end with the sequence $0110$, and do not contain this sequence as a substring anywhere else. We denote this code by $D_{2}$. The number 2 in the code notation indicates that its delimiter contains 2 consecutive ones. Obviously, the code $D_{2}$ contains all the codewords of the code $C^{low}_{2,3}$ and has the same delimiter $0110$. Each portion of concatenated codewords from $D_{2}$ ends with the delimiter string $0110$, which makes it possible to unambiguously determine the beginning of a new codeword in the flow of codewords. This also provides synchronizability of the code: if errors occur, a receiver only has to identify the first delimiter string $0110$ to resume the code parsing (although in some cases it cannot unambiguously decide whether the suffix $110$ constitutes a single codeword). An example of a word belonging to the code $D_{2}$, but not to $C^{low}_{2,3}$, is $100\,00\,110$.
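The membership condition defining $D_{2}$ is easy to test mechanically (an illustrative Python sketch; the helper name is ours):

```python
def in_d2(w):
    """D2 membership: the word 110, or a word that does not start with 110,
    ends with 0110, and contains 0110 only as its suffix."""
    if w == "110":
        return True
    return (not w.startswith("110")
            and w.endswith("0110")
            and w.find("0110") == len(w) - 4)

# codewords of C^low_{2,3} from Table 1 are all in D2 ...
for w in ["110", "100110", "00110", "0110", "1110110", "10110"]:
    assert in_d2(w)
# ... and so is 100 00 110, which C^low_{2,3} lacks:
assert in_d2("10000110")
```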
If we apply the $(2,3)$-decoding procedure to this string, we obtain the number 17. However, as Table \ref{tab1} shows, the codeword for $17$ is $1010\,110$. Thus, the code $C^{low}_{2,3}$ is not complete. By contrast, the code $D_{2}$ is complete, as a representative of a wider class of complete codes that will be defined and investigated in the following sections. \section{Splittable codes} In the lower $(2,3)$-integer representation, we use sequences of $(\Delta,k)$-pairs. Let us change the order of $\Delta$ and $k$ inside the pairs. In this way, the dual sequence of $(k,\Delta)$-pairs $(k_{i},\Delta_{i})$, where $k_{i}$ is an arbitrary positive integer and $\Delta_{i}$ takes the same values 0, 1 or 2, can also be associated with a number. Apart from the above-mentioned one, this representation admits other binary prefix encodings, including the following. We represent the value $k$ as the word $0^{k-1}1$ in the unary numeration system with 1 as a separator, and the value $\Delta$ in the form $1^{\Delta}0$. The concatenation of the codewords corresponding to $k_{i}$ and $\Delta_{i}$, respectively, constitutes a $(k,\Delta)$-group. The codeword of a $(k,\Delta)$-sequence is formed by the concatenation of the corresponding $(k,\Delta)$-groups appended by the delimiter string $1111$. It is obvious that this word does not occur in the concatenation of $(k,\Delta)$-groups obtained through the $(2,3)$-decomposition, since any such group contains at most three consecutive ones. In the lower $(2,3)$-integer representation, not all possible $(k,\Delta)$-sequences are valid. Let us abstract from the semantics of the values $k$ and $\Delta$ as parameters of the lower $(2,3)$-factorization. Using the atomic encoding of $(k,\Delta)$-pairs defined above, we consider encoding all possible sequences of $(k,\Delta)$-pairs $(k_{1},\Delta_{1})(k_{2},\Delta_{2})\ldots(k_{t},\Delta_{t})$, where the following restrictions hold: $0 \leq \Delta_{i}\leq 2$, $0<k_{i}$.
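One direction of the correspondence with Fibonacci codes can be checked by brute force: every encoded $(k,\Delta)$-sequence (with the delimiter $1111$ appended) yields a word of Fib4, and distinct sequences yield distinct words. A minimal Python sketch, with function names of our choosing:

```python
from itertools import product

def kd_encode(seq):
    """Codeword of a (k, Delta)-sequence: k -> 0^(k-1) 1, Delta -> 1^Delta 0,
    with the delimiter 1111 appended."""
    word = ""
    for k, d in seq:
        word += "0" * (k - 1) + "1" + "1" * d + "0"
    return word + "1111"

def in_fib4(w):
    # exactly one (possibly overlapping) occurrence of 1111, as the suffix
    occ = [i for i in range(len(w) - 3) if w[i:i + 4] == "1111"]
    return len(occ) == 1 and occ[0] == len(w) - 4

# all (k, Delta)-sequences with k <= 3, Delta <= 2, of at most 2 pairs
pairs = [(k, d) for k in (1, 2, 3) for d in (0, 1, 2)]
seqs = [()] + [(p,) for p in pairs] + [(p, q) for p in pairs for q in pairs]
words = {kd_encode(s) for s in seqs}
assert all(in_fib4(w) for w in words)
assert len(words) == len(seqs)      # distinct sequences, distinct codewords
```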
It is easy to see that the obtained set of codewords is nothing other than the code Fib4, named in \cite{AF} the code $C_{1}$ of order 4. In this way, varying the upper bound for the $\Delta$-values, $0\leq \Delta_{i}\leq m$, and, respectively, the quantity of ones in a code delimiter, we obtain different Fibonacci codes. So, if $\Delta$ can take only one value (which is encoded by ``0'') and the delimiter consists of two ones, then we obtain the code Fib2. If $\Delta$ can take two values, which we encode by the words ``0'' and ``10'', then the delimiter consists of three consecutive ones, and we have the code Fib3. Overall, in Fibonacci codes a restriction on the set of $\Delta$-values naturally predetermines a delimiter: if $\Delta$ can take no more than $m$ different values, then the delimiter is a run of $m+1$ ones. Thus, we can assume that the lower $(2,3)$-code, the popular Fibonacci codes, and possibly some others can be viewed as different realizations of a more general method of number encoding based on encoding sequences of ordered integer pairs with limitations on one of their components. From a practical point of view, it is also important that a code contain a sufficient number of short words. This means that if we consider a code with delimiters, the delimiters or their prefix parts should be included in some short sequences of $(\Delta,k)$- or $(k,\Delta)$-groups. Longer codewords can contain these shorter words as suffixes, and thus we need not consider delimiters separately from codes of $(\Delta,k)$- (or $(k,\Delta)$-) sequences. Summarizing all of the above, we come to the following definition of $(\Delta,k)$-codes. \begin{defi} Let $\textbf{S}$ be a given set of sequences of $(\Delta,k)$-pairs, where $\Delta$ is a non-negative integer that does not exceed some constant $d$, and $k$ can be any positive integer.
A $(\Delta,k)$-code of $\textbf{S}$ is the set of binary words that satisfy the following conditions: \begin{itemize}{\IEEEsetlabelwidth{(iii)}} \item[(i)] the values $\Delta$ and $k$ are encoded by separate independent prefix encoding functions $\varphi_{1}$ and $\varphi_{2}$, respectively; \item[(ii)] the encoding of a $(\Delta,k)$-pair is defined as the concatenation $\varphi_{1}(\Delta)\varphi_{2}(k)$, which we call a $(\Delta,k)$-group; \item[(iii)] the codeword of a $(\Delta,k)$-sequence from $\textbf{S}$ is the sequential concatenation of the corresponding $(\Delta,k)$-groups. \end{itemize} \end{defi} A $(\Delta ,k)$-code is any set of binary words that can be interpreted as a $(\Delta ,k)$-code for some set $\textbf{S}$ of $(\Delta ,k)$-sequences. Thus, to define a $(\Delta ,k)$-code it is necessary to specify a set $\textbf{S}$ of $(\Delta ,k)$-sequences and to choose well-defined basic encodings of $(\Delta ,k)$-pairs. In what follows, we consider only codes for which the set $\textbf{S}$ is the set of all possible $(\Delta ,k)$-sequences. In general, as in the case of $(2,3)$-codes, a basic set $\textbf{S}$ could be a subset of all $(\Delta ,k)$-sequences. The definition of a $(k,\Delta)$-code is obtained from the one given above by replacing $(\Delta ,k)$-pairs with $(k,\Delta)$-pairs. We call both the $(\Delta ,k)$- and $(k,\Delta)$-codes splittable codes. The important property of splittable codes is that any codeword, including a delimiter, consists of a whole number of $(\Delta ,k)$- (respectively, $(k,\Delta)$-) groups. This structural regularity can also be used as an element of the proof technique in establishing important code properties, such as completeness, universality, and density. As shown above, the codewords of Fibonacci codes can be represented as sequences of $(k,\Delta)$-groups, which are externally supplemented by a delimiter.
Interestingly, using specific encodings of $k$ and $\Delta$, these codewords can be interpreted as sequences consisting of a whole number of $(k,\Delta)$-groups even together with a delimiter. Nevertheless, they cannot be represented as sequences of $(\Delta,k)$-groups. \begin{theorem} Any Fibonacci code Fib$m$ is a $(k,\Delta)$-code, but not a $(\Delta ,k)$-code. \end{theorem} \begin{IEEEproof} Consider a $(k,\Delta)$-pair, where $k$ can be any positive integer and $\Delta$ can take only $m$ different values, $0 \leq\Delta < m$. Let us encode $k$ by the string $0^{k-1}1$, which comprises $k-1$ zeros. The values of $\Delta$ we encode by $m$ strings: $0, 10,\ldots, 1^{m-2}0$, which contain runs of up to $m-2$ ones, and the string $1^{m-1}$ corresponding to the value $m-1$. Using this encoding, we prove the first part of the theorem by induction on the codeword length. Let $\alpha$ be a codeword from Fib$m$. The minimal possible length of $\alpha$ is equal to $m$. In that case $\alpha = 1^{m}=1\cdot 1^{m-1}$, and this string corresponds to the $(k,\Delta)$-pair $(1, m-1)$. Suppose that the statement of the theorem holds for all codewords of length less than or equal to some integer $t$, $t \geq m$, and assume that the length of $\alpha$ is $t+1$. If $\alpha$ starts with 1, then $\alpha$ can be represented in the form $\alpha=1^{i}0\beta=1\cdot 1^{i-1}0\beta$, $0<i<m$. The prefix $1\cdot 1^{i-1}0$ corresponds to the $(k,\Delta)$-pair $(1, i-1)$. The shorter string $\beta$ also belongs to Fib$m$; thus, by the inductive assumption, $\beta$ comprises an integral number of $(k,\Delta)$-groups. Consider the case when $\alpha$ starts with $0$: $\alpha=0^{i}1\beta$, $i>0$. If $\beta$ has the form $1^{m-1}$, then $\alpha=0^{i}11^{m-1}$, which corresponds to the $(k,\Delta)$-pair $(i+1, m-1)$. Otherwise, $\beta$ is a string of the form $\beta=1^{j}0\gamma$, $0\leq j < m-1$, $\gamma \in$ Fib$m$. This gives the representation $\alpha=0^{i}11^{j}0\gamma$.
The prefix part $0^{i}11^{j}0$ is the codeword corresponding to the $(k,\Delta)$-pair $(i+1, j)$. By the inductive assumption, the string $\gamma$ contains a whole number of $(k,\Delta)$-groups. Hence, $\alpha$ corresponds to some $(k,\Delta)$-sequence, and by induction the first part of Theorem 1 is proved. Consider the second part of the theorem. Suppose, to the contrary, that Fib$m$ is a $(\Delta,k)$-code with some prefix encoding functions $\varphi_{1}$ for $\Delta$-values and $\varphi_{2}$ for $k$-values. For any integer $k$ the codeword $0^{k}1^{m}$ belongs to Fib$m$. On the other hand, the lengths of the codewords corresponding to $\Delta$-values are bounded. It follows that there exists a value $\Delta^{\prime}$ such that $\varphi_{1}(\Delta^{\prime})=0^{s}$ for some integer $s>0$. Consider the word $0^{s}1^{m}$. The prefix property of the encoding $\varphi_{1}$ implies that there are no other codewords for $\Delta$ of the form $0^{r}$, $r<s$. It follows that there exists some value $k^{\prime}$ such that $\varphi_{2}(k^{\prime})=1^{t}$, $t>0$, and $\varphi_{1}(\Delta^{\prime})\varphi_{2}(k^{\prime})$ is the first $(\Delta, k)$-group of the string $0^{s}1^{m}$. Now consider the string $1^{m}$, which also belongs to Fib$m$. By our assumption, some $(\Delta,k)$-groups constitute the representation $1^{m}=\varphi_{1}(\Delta_{1})\varphi_{2}(k_{1})\ldots\varphi_{1}(\Delta_{n})\varphi_{2}(k_{n})$. The prefix property of the encodings $\varphi_{1}$ and $\varphi_{2}$ implies that $\Delta_{1}=\Delta_{2}=\ldots=\Delta_{n}$, $k^{\prime}=k_{1}=k_{2}=\ldots=k_{n}$, $\varphi_{1}(\Delta_{1}) = 1^{r}$, $r>0$, $\varphi_{2}(k_{1})=1^{t}$, $t>0$. It immediately follows that the inequality $t<m$ holds. Thus, from the consideration of the string $0^{s}1^{m}$ we conclude that the non-empty string $1^{m-t}$ consists of a whole number of identical $(\Delta, k)$-groups, each of which corresponds to the pair $(\Delta_{1}, k_{1})$. The string $1^{m}$ can be represented in the form $1^{m}=1^{m-t}1^{t}$.
It follows that the string $1^{t}$ should also be represented by an integral number of identical $(\Delta, k)$-groups $\varphi_{1}(\Delta_{1})\varphi_{2}(k_{1})=1^{r+t}$, $r>0$, which is impossible since $0<t<r+t$. This contradiction concludes the proof. \end{IEEEproof} For Fibonacci codes considered as $(k,\Delta)$-codes we use the unary encoding of the parameters $k$ and $\Delta$. When splittable codes are used for data compression, they are more effective if the average codeword length is shorter. From this perspective, the encoding of the parameters $k$ and $\Delta$ in the unary numeration system is not economical. More economical, for example, is the truncated binary encoding of the values $\Delta$ and $k$. However, for the parameter $k$ such an encoding is impossible, since the set of its values is unbounded. Nevertheless, the truncated binary encoding can be applied to encode the values of the parameter $\Delta$. Concerning the parameter $k$, there are only two unary prefix encodings, $0^{k-1}1$ and $1^{k-1}0$. Theoretically, other prefix encodings, such as the Elias codes \cite{El}, can be used for encoding $k$. However, in applications of splittable codes to text compression, the probability distribution of $k$-values is geometric, and unary codes are the most effective for this kind of distribution. The Golomb codes \cite{Golomb} completely correspond to the principles described above: they are among the simplest $(k,\Delta)$-codes, in which each codeword consists of a single $(k,\Delta)$-group. If we consider more complex codes, whose codewords can contain several $(\Delta,k)$- or $(k,\Delta)$-groups, then certain groups should be considered as terminating a codeword, i.e. as separating ones. We note that, due to the unary encoding of the parameter $k$, the last bit of any $(\Delta,k)$-group always has the same value, say zero.
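The constructive half of Theorem 1 can be sketched as a parser: any Fib$m$ codeword splits greedily into $(k,\Delta)$-groups under the encodings used in the proof ($k \to 0^{k-1}1$; $\Delta < m-1 \to 1^{\Delta}0$; $\Delta = m-1 \to 1^{m-1}$). An illustrative Python sketch, assuming the input is a valid codeword:

```python
def parse_fibm(w, m):
    """Split a Fib-m codeword into (k, Delta)-groups as in the proof of
    Theorem 1."""
    groups, i = [], 0
    while i < len(w):
        k = 1
        while w[i] == "0":            # leading zeros of the k-code 0^(k-1) 1
            i += 1
            k += 1
        i += 1                         # the closing 1 of the k-code
        if w[i:i + m - 1] == "1" * (m - 1):
            groups.append((k, m - 1))  # Delta = m-1, encoded without a final 0
            i += m - 1
        else:
            d = 0
            while w[i] == "1":
                i += 1
                d += 1
            i += 1                     # the closing 0 of the Delta-code
            groups.append((k, d))
    return groups

assert parse_fibm("11", 2) == [(1, 1)]
assert parse_fibm("10111", 3) == [(1, 0), (1, 2)]
assert parse_fibm("0110111", 3) == [(2, 1), (1, 2)]
```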
Therefore, to endow a splittable code with the feature of instantaneous separation, it is convenient to construct the code from $(\Delta,k)$-groups rather than $(k,\Delta)$-groups, predetermining a delimiter of the form $0\alpha 0$, where $\alpha 0$ is a separating group and the zero in front of it is the last symbol of the previous group. If we encode $\Delta$ in binary form, then $(k,\Delta)$-groups do not have this property, because they can begin and end with zero as well as with one, which complicates finding the place that matches a delimiter. However, the more important advantage of $(\Delta,k)$-codes over $(k,\Delta)$-codes is the possibility to form short codewords that do not contain a whole delimiter. For example, such a codeword can consist of a separating group of the form $\alpha0$, while the delimiter takes the form $0\alpha0$. Longer delimiters provide a better asymptotic density of a code, while short codewords enable us to organize efficient compression for relatively small alphabet sizes. Thus, for example, the code $D_{2}$ considered above, which, as will be proved further, is a $(\Delta,k)$-code, contains the word $110$, although the sequence $0110$ is the code delimiter. As will be shown, it has a higher asymptotic density than the code Fib3, and is only slightly inferior to it in the efficiency of compressing texts with small alphabets. \section{Multi-delimiter codes} One of the families of efficient $(\Delta ,k)$-codes can be obtained by using several delimiters of the form $01^m0$ in one code. The remaining part of this paper deals with the investigation of these codes. Let $\mathcal{M}=\{m_{1},\ldots,m_{t}\}$ be a set of integers given in ascending order, $0 < m_{1}<\ldots< m_{t}$.
\begin{defi} The multi-delimiter code $D_{m_1,\ldots,m_t}$ consists of all the words of the form $1^{m_{i}}0$, $i=1,\ldots,t$, and all other words that meet the following requirements: \begin{itemize}{\IEEEsetlabelwidth{(iii)}} \item[(i)] for any $m_{i}\in\mathcal{M}$ a word does not start with the sequence $1^{m_{i}}0$; \item[(ii)] a word ends with the suffix $01^{m_{i}}0$ for some $m_{i}\in\mathcal{M}$; \item[(iii)] for any $m_{i}\in\mathcal{M}$ a word cannot contain the sequence $01^{m_{i}}0$ anywhere except as a suffix. \end{itemize} \end{defi} The given definition implies that the code delimiters in $D_{m_1,\ldots,m_t}$ are sequences of the form $01^{m_{i}}0$. However, the code also contains shorter words of the form $1^{m_{i}}0$, which form the delimiter together with the ending zero of a preceding codeword. Evidently, any multi-delimiter code is prefix-free and thus UD. Table \ref{tab2} shows examples of multi-delimiter codewords. This table lists all codewords of length at most 7 of different multi-delimiter codes and, for comparison, of the Fibonacci codes Fib2 and Fib3. The codes $D_{2,3}$ and $D_{2,3,4}$, with 2 and 3 delimiters respectively, contain many more short codewords than both the Fibonacci code Fib3 and the one-delimiter code $D_{2}$. However, as will be demonstrated in the following, the asymptotic density of these codes is lower. Overall, codes with more delimiters have worse asymptotic density, but contain a larger quantity of short codewords. This regularity is related also to the lengths of the delimiters: the shorter they are, the more short words a code contains. For natural language text compression, the most effective codes seem to be those whose shortest delimiter contains two ones; we will examine them thoroughly.
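Definition 2 translates directly into a membership test, from which the short codewords of any $D_{m_1,\ldots,m_t}$ can be enumerated by brute force (an illustrative Python sketch with helper names of our choosing):

```python
from itertools import product

def in_mdc(w, M):
    """Membership test for the multi-delimiter code D_{m1,...,mt}."""
    delims = ["0" + "1" * m + "0" for m in M]
    if any(w == "1" * m + "0" for m in M):          # the words 1^{m_i} 0
        return True
    return (not any(w.startswith("1" * m + "0") for m in M)   # (i)
            and any(w.endswith(d) for d in delims)            # (ii)
            and all(w.find(d) in (-1, len(w) - len(d))        # (iii)
                    for d in delims))

def codewords(M, length):
    out = []
    for bits in product("01", repeat=length):
        w = "".join(bits)
        if in_mdc(w, M):
            out.append(w)
    return out

assert codewords([2], 3) == ["110"]
assert sorted(codewords([2, 3], 5)) == ["00110", "01110", "10110"]
assert len(codewords([2], 6)) == 3
```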
\begin{table*}[!t] \caption{Sample codeword sets of some multi-delimiter and Fibonacci codes} \label{tab2} \centering \begin{tabular}{l@{\extracolsep{2mm}}lllllll} \hline\noalign{\smallskip} Index&Fib2&$D_1$&$D_{1,2}$&Fib3&$D_2$&$D_{2,3}$&$D_{2,3,4}$\\ \hline\noalign{\smallskip} 1&11&10&10&111&110&110&110\\ \cline{2-8} 2&011&010&010&0111&0110&0110&0110\\ \cline{2-3}\cline{5-6} 3&0011&0010&110&00111&00110&1110&1110\\ \cline{3-4}\cline{7-8} 4&1011&00010&0010&10111&10110&00110&00110\\ \cline{2-2}\cline{5-6} 5&00011&11010&0110&000111&000110&10110&10110\\ \cline{3-4} 6&01011&000010&00010&010111&010110&01110&01110\\ \cline{7-7} 7&10011&011010&00110&100111&100110&000110&11110\\ \cline{2-2}\cline{4-4}\cline{6-6}\cline{8-8} 8&000011&110010&000010&110111&0000110&010110&000110\\ \cline{5-5} 9&001011&111010&000110&0000111&0010110&100110&010110\\ \cline{3-3} 10&010011&0000010&111010&0010111&0100110&001110&100110\\ \cline{4-4} 11&100011&0011010&0000010&0100111&1000110&101110&001110\\ \cline{7-7} 12&101011&0110010&0000110&1000111&1010110&0000110&101110\\ \cline{2-2} 13&0000011&1100010&0111010&1010111&1110110&0010110&011110\\ \cline{6-6}\cline{8-8} 14&0001011&0111010&1110010&0110111&&0100110&0000110\\ 15&0010011&1110010&1110110&1100111&&1000110&0010110\\ \cline{5-5} 16&0100011&1111010&1111010&&&1010110&0100110\\ \cline{3-4} 17&1000011&&&&&0001110&1000110\\ 18&0101011&&&&&0101110&1010110\\ 19&1001011&&&&&1001110&0001110\\ \cline{7-7} 20&1010011&&&&&&0101110\\ \cline{2-2} 21&&&&&&&1001110\\ 22&&&&&&&0011110\\ 23&&&&&&&1011110\\ \cline{8-8} \end{tabular} \end{table*} Now we demonstrate that multi-delimiter codes belong to the class of splittable codes. \begin{theorem} Any multi-delimiter code $D_{m_1,\ldots,m_t}$ is a $(\Delta,k)$-code. 
\end{theorem} \begin{IEEEproof} We need to fix some positive integer that cannot be exceeded by the value of $\Delta$ and to construct prefix encodings for $\Delta$ and $k$ such that any codeword of $D_{m_1,\ldots,m_t}$ comprises a whole number of $(\Delta, k)$-groups. Let $d$ be some fixed non-negative integer satisfying the inequalities $0 \leq d<m_{1}$. The parameter $\Delta$ ranges from $0$ to $2^{d}$. We encode these values by the symbol $0$ and all binary words of length $d+1$ with the fixed first symbol $1$. The value of the parameter $k$, which can be any positive integer, is encoded by the word $1^{k-1}0$. Evidently, these encodings of the values $\Delta$ and $k$ are prefix-free. Consider a word $1^{r}0$, where $r\geq m_{1}$. This word can be represented in the form $1^{r}0 = 1^{d+1}1^{r-d-1}0$. The inequality $r\geq m_{1}$ and the choice of $d$ imply that $r\geq d+1$. It follows that $1^{r}0$ corresponds to the $(\Delta, k)$-pair with $\Delta$ encoded by $1^{d+1}$ and $k=r-d>0$; hence any word $\alpha\in D_{m_1,\ldots,m_t}$ of the form $1^r0$ represents some $(\Delta, k)$-group. Note that for any binary word $\alpha$ of length exceeding $d$ that contains zeros, it is possible to choose a prefix that can be interpreted either as a codeword of some value of $\Delta$ or as a codeword of some value of $k$. Indeed, if $\alpha$ starts with $0$, then this symbol can be interpreted as corresponding to $\Delta =0$ or to $k=1$. If $\alpha$ starts with 1, then $\alpha = 1^{r}0\beta$, where $r>0$ and $\beta$ is a binary word. The prefix $1^{r}0$ can be interpreted as the codeword of the value $k=r+1$. But it is also possible to choose the prefix of $\alpha$ of length $d+1$, which corresponds to some value of $\Delta$. Now, suppose that $\alpha\in D_{m_1,\ldots,m_t}$ and it does not have the form $1^r0$.
Let us parse the codeword $\alpha$ from left to right, sequentially extracting the corresponding $(\Delta,k)$-groups while this is possible. As a result, we either partition $\alpha$ into a whole number of $(\Delta,k)$-groups, or we obtain a remainder that cannot contain a whole number of $(\Delta,k)$-groups. In the first case we obtain the desired partitioning of $\alpha$. Consider the case of obtaining a remainder, and let us examine how the ending of a codeword is processed under this procedure. The suffix of a codeword has the form $01^{m_{i}}0$ and contains at least $m_{1}$ ones. The first bit ``0'' of this suffix either can be the ending of some codeword of $k$ or can belong to a codeword of $\Delta$. In the first case, at the last iteration we obtain the residue $1^{m_{i}}0$ with no less than $m_{1}$ ones, which, as shown above, is a $(\Delta,k)$-group. In the second case, we note that the codeword of $\Delta$ comprises no more than $m_{1}$ bits, and after its extraction we obtain a remaining sequence of the form $1\ldots10$, which represents a particular value of $k$. Thus, the situation where at the last iteration we obtain a remainder that cannot contain a whole $(\Delta,k)$-group is impossible. \end{IEEEproof} Note that Theorem 2 holds for any value of $d$ that satisfies the inequalities $0\leq d<m_{1}$. In the sequel, to simplify considerations, we presume that $d=0$, i.e. the code of a $\Delta$-value comprises one bit. Note that although for the code $D_{2}$ we used an encoding of three possible values of $\Delta$, which corresponds to the value $d=1$, all words of that code can also be represented as $(\Delta,k)$-groups with a single-bit encoding of $\Delta$. \begin{theorem} Any code $D_{m_1,\ldots,m_t}$ is complete.
\end{theorem} \begin{IEEEproof} A necessary and sufficient condition for a code $C$ to be complete is given by the Kraft-McMillan equality: $\sum\limits_{c\in C}2^{-|c|}=1$. By $f_{n}$ denote the number of codewords of length $n$. The equality can be rewritten as: \begin{equation} \label{q1} \sum_{n=1}^\infty2^{-n}f_n=1 \end{equation} Consider the multi-delimiter code $D_{m_1,\ldots,m_t}$. Theorem 2 allows us to choose the one-bit encoding for $\Delta$, while $k$ is encoded by $1^{k-1}0$. For any $n \geq 2$ there exist two $(\Delta,k)$-groups of length $n$: $1^{n-1}0$ and $01^{n-2}0$. Among all $(\Delta,k)$-groups, those that include $m_{i}$ ones, $i=1,\ldots, t$, are terminal, i.e. they can occur only at the end of a codeword. Thus, for the code $D_{m_1,\ldots,m_t}$ there are $2t$ terminal groups, having lengths $m_{1}+1,m_{1}+2,\ldots, m_{t}+1,m_{t}+2$. By $T_{n}$ denote the number of terminal groups of length $n$. Evidently, $T_{n}$ equals the number of occurrences of $n$ in the multiset $\{m_{1}+1,m_{1}+2,\ldots, m_{t}+1,m_{t}+2\}$; this number can be equal to $0$, $1$ or $2$. The number of non-terminal groups of length $n$ equals $2-T_{n}$. Consider the codewords of length $n$ that contain at least two $(\Delta ,k)$-groups. Each such word can be obtained by prepending its first non-terminal $(\Delta ,k)$-group to a shorter codeword. On the other hand, prepending an arbitrary non-terminal group to any codeword forms a longer codeword. If a codeword contains only one $(\Delta ,k)$-group, then this group is terminal.
Thus, taking into account that the length of the shortest $(\Delta ,k)$-group is 2, we obtain the following recurrent formula for the number of codewords of length $n$: \begin{eqnarray} \label{q2} f_n=T_{n}+\sum_{k=0}^{n-2}(2-T_{n-k})f_{k} = \nonumber\\ =T_{n}+2(f_{n-2}+f_{n-3}+ \cdots)- \nonumber\\ -f_{n-(m_1+1)}-\cdots-f_{n-(m_t+1)}-\nonumber\\ -f_{n-(m_1+2)}-\cdots-f_{n-(m_t+2)} \end{eqnarray} Let us apply this formula to calculate $f_{n-1}$: \begin{eqnarray} \label{q3} f_{n-1}=T_{n-1}+\sum_{k=0}^{n-3}(2-T_{n-1-k})f_{k}=\nonumber \\ =T_{n-1}+2(f_{n-3}+f_{n-4}+\cdots)-\nonumber \\ -f_{n-(m_1+2)}-\cdots-f_{n-(m_t+2)}- \nonumber \\ -f_{n-(m_1+3)}-\ldots-f_{n-(m_t+3)} \end{eqnarray} Identifying the right-hand side of (3) inside (2) and replacing it with $f_{n-1}$, we obtain: \begin{eqnarray} \label{q4} f_{n}=T_{n}-T_{n-1}+2f_{n-2}+f_{n-1}-\nonumber \\ f_{n-m_{1}-1}-\cdots -f_{n-m_{t}-1}+\nonumber \\ +f_{n-m_{1}-3}+\cdots+f_{n-m_{t}-3} \end{eqnarray} Denoting the left-hand side of (1) by $s$ and taking into account that $f_{0}=f_{-1}=\cdots=0$, for any $p>0$ we have the equalities $\sum_{n=1}^\infty2^{-n}f_{n-p}=2^{-p}\sum_{n=1}^\infty2^{-(n-p)}f_{n-p}=s2^{-p}$. Using them and substituting expression (4) into (1), we obtain: \begin{eqnarray} \label{q5} s=\sum_{n=1}^\infty2^{-n}f_n=\sum_{n=1}^\infty2^{-n}(T_{n}-T_{n-1}+f_{n-1}+\nonumber\\ 2f_{n-2}-f_{n-(m_{1}+1)}-\cdots-f_{n-(m_{t}+1)}+\nonumber\\ + f_{n-(m_{1}+3)}+\cdots+f_{n-(m_{t}+3)})=\nonumber\\ =\sum_{n=1}^\infty2^{-n}T_{n}-\frac{1}{2}\sum_{n=1}^\infty2^{-(n-1)}T_{n-1}+\nonumber\\ +s(\frac{1}{2}+\frac{1}{2}-2^{-m_{1}-1}-\cdots- 2^{-m_{t}-1}+\nonumber\\ +2^{-m_{1}-3}+\cdots+2^{-m_{t}-3}) \end{eqnarray} Taking into account that $2^{-m_i-3}-2^{-m_i-1}=-3\cdot2^{-m_i-3}$ for any $i$ and that $\sum_{n=1}^\infty2^{-n}T_n=\sum_{n=1}^\infty2^{-(n-1)}T_{n-1}$, and cancelling $s$ on both sides of (\ref{q5}), we obtain the following formula.
\begin{equation} \label{q6} 3s\sum_{i=1}^t2^{-m_i-3}=\frac{1}{2}\sum_{n=1}^\infty2^{-n}T_{n} \end{equation} Since the lengths of the terminal $(\Delta ,k)$-groups are $m_{1}+1, m_{1}+2,\ldots, m_{t}+1, m_{t}+2$, the equality $$ \sum_{n=1}^\infty2^{-n}T_{n}= \sum_{i=1}^t\left(2^{-m_i-1}+2^{-m_i-2}\right)=\\ \frac{3}{4}\sum_{i=1}^t2^{-m_i} $$ holds. Therefore, equality (6) takes the form $$\frac{3}{8}s\sum_{i=1}^t2^{-m_{i}}= \frac{3}{8}\sum_{i=1}^t 2^{-m_{i}},$$ which implies $s = 1$. \end{IEEEproof} The $(\Delta ,k)$-structure of multi-delimiter codes would also enable us to prove another important property, universality, but we give a simpler proof based on encoding integers. \section{Encoding integers} We defined a multi-delimiter code as a set of words. There exists a simple bijection between the set of natural numbers and the set of codewords of any multi-delimiter code, which enables us to encode integers by the codewords of these codes. Let $\mathcal{M}=\{m_{1},\ldots, m_{t}\}$ be the set of parameters of the code $D_{m_1,\ldots,m_t}$. By ${\mathbb{N}_{\mathcal{M}}}=\{j_{1},j_{2},\ldots\}$ denote the ascending sequence of all natural numbers that do not belong to $\mathcal{M}$. \emph{Example}. Let $\mathcal{M}=\{2,5\}$. This gives the set ${\mathbb{N}_{\mathcal{M}}}=\{1,3,4,6,7,8,\ldots\}$. By $\varphi_{\mathcal{M}}(i)$ denote the function $\varphi_{\mathcal{M}}(i)=j_{i}$, $j_{i}\in {\mathbb{N}_{\mathcal{M}}}$, as defined above. It is easy to see that the function $\varphi_{\mathcal{M}}$ is a bijective mapping of the set of natural numbers onto ${\mathbb{N}_{\mathcal{M}}}$. Evidently, this function and the inverse function $\varphi_{\mathcal{M}}^{-1}$ can be implemented constructively by simple single-loop iterative procedures. The main idea of encoding integers by the code $D_{m_1,\ldots,m_t}$ is as follows. We scan the binary representation of an integer from left to right.
During this scan, each internal isolated group of $i$ consecutive $1s$ is changed to a group of $\varphi_{\mathcal{M}}(i)$ $1s$. This way we exclude the appearance of delimiters inside a codeword. In decoding, we change internal isolated groups of $j$ consecutive $1s$ back to groups of $\varphi_{\mathcal{M}}^{-1}(j)$ ones. A detailed description of the encoding procedure is as follows. \emph{Bitwise Integer Encoding Algorithm}. \emph{Input}: $x=x_{n}x_{n-1}...x_{0}$, $x_{i}\in \{0,1\}, x_{n}=1;$ \emph{Result}: a codeword from $D_{m_1,\ldots,m_t}$. \begin{enumerate} \item $x\leftarrow x-2^n$, i.e. extract the most significant bit of the number $x$, which is always 1. \item If $x=0$, append the sequence $1^{m_{1}}0$ to the string $x_{n-1}...x_{0}$, which consists only of zeros or is empty. \emph{Result} $\leftarrow x_{n-1}...x_{0}1^{m_{1}}0$. Stop. \item If the binary representation of $x$ takes the form of a string $0^{r}1^{m_{i}}0, r\geq0, m_{i}\in \mathcal{M}, i>1,$ then \emph{Result} $\leftarrow x$. Stop. \item In the string $x$ replace each isolated group of $i$ consecutive $1s$ with the group of $\varphi_{\mathcal{M}}(i)$ consecutive $1s$, except its occurrence as a suffix of the form $01^{m_{i}}0, i>1. $ Assign this new value to $x$. \item If the word ends with a sequence $01^{m_{i}}0, i>1$, then \emph{Result} $\leftarrow x.$ Stop. \item Append the string $01^{m_{1}}0$ to the right end of the word. Assign this new value to $x$. \emph{Result} $\leftarrow x$. Stop. \end{enumerate} According to this algorithm, if $x\neq 2^{n}$, the delimiter $01^{m_{1}}0$ with $m_{1}$ ones is appended to a codeword externally, and therefore it should be deleted during the process of decoding, while the delimiters of the form $01^{m_{i}}0, i>1$ are informative parts of codewords and must be processed during the decoding. If $x = 2^{n}$, the last $m_{1}+1$ bits of the form $1^{m_{1}}0$ must be deleted. \emph{Bitwise Decoding Algorithm}. \emph{Input}: a codeword $y\in D_{m_1,\ldots,m_t}$.
\emph{Result}: an integer given in the binary form. \begin{enumerate} \item If the codeword $y$ is of the form $0^{p}1^{m_{1}}0$, where $p\geq0$, extract the last $m_{1}+1$ bits and go to step 4. \item If the codeword $y$ ends with the sequence $01^{m_{1}}0$, extract the last $m_{1}+2$ bits. Assign this new value to $y$. \item In the string $y$ replace each isolated group of $i$ consecutive $1s$, where $i\in\mathbb{N}_{\mathcal{M}}$, with the group of $\varphi_{\mathcal{M}}^{-1}(i)$ consecutive $1s$. Assign this new value to $y$. \item Prepend the symbol 1 to the beginning of $y$. \emph{Result} $\leftarrow y.$ Stop. \end{enumerate} The following lemma gives an upper bound for the length of a multi-delimiter codeword. \begin{lemma} Let $D_{m_1,\ldots,m_t}$ be a multi-delimiter code, $c_{i}$ be the codeword of an integer $i$ obtained by the encoding algorithm given above. The length of $c_{i}$ satisfies the following upper bound: $|c_{i}|\leq \left(1+\frac{t}{2}\right)\log _{2}i + m_{1}+2$. \end{lemma} \begin{IEEEproof} The binary representation of $i$ without its most significant bit contains at most $\log_{2}i$ bits. The encoding procedure that transforms the number $i$ into the corresponding codeword of the code $D_{m_1,\ldots,m_t}$ can enlarge each internal isolated group of consecutive $1s$ by at most $t$ ones, and the quantity of such groups does not exceed $\frac{1}{2}\log_{2}i$. To some binary words the delimiter $01^{m_{1}}0$ could be externally appended. Therefore, the length of the codeword for $i$ is upper bounded by the value $\log_{2}i+\frac{t}{2}\log_{2}i + m_{1} + 2$. \end{IEEEproof} Now we are ready to prove that any multi-delimiter code is universal. The concept of universality was introduced by P. Elias \cite{El}. This notion reflects the property of prefix sets to be nearly optimal codes for data sources with any given probability distribution function.
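Before turning to universality, the bitwise encoding and decoding procedures above can be made concrete. The following is a minimal Python sketch (all function names are ours; codewords are modelled as strings of characters 0/1), with a round trip $\mathrm{decode}(\mathrm{encode}(x))=x$ serving as a basic consistency check:

```python
# A minimal sketch (ours) of the bitwise encoding/decoding algorithms for a
# multi-delimiter code D_{m_1,...,m_t}; codewords are strings of '0'/'1'.
import re

def phi(M, i):
    """The i-th natural number (1-indexed) not belonging to M."""
    j = 0
    while i > 0:
        j += 1
        if j not in M:
            i -= 1
    return j

def phi_inv(M, j):
    """The position of j in the ascending sequence of naturals outside M."""
    return sum(1 for k in range(1, j + 1) if k not in M)

def encode(x, M):
    m1, rest = min(M), bin(x)[2:][1:]          # step 1: drop the leading 1
    if "1" not in rest:                        # step 2: only zeros, or empty
        return rest + "1" * m1 + "0"
    for m in sorted(M)[1:]:                    # step 3: rest = 0^r 1^{m_i} 0, i > 1
        if re.fullmatch("0*" + "1" * m + "0", rest):
            return rest
    suffix = ""                                # step 4: map runs via phi, keeping an
    for m in sorted(M)[1:]:                    # informative suffix delimiter intact
        if rest.endswith("0" + "1" * m + "0"):
            suffix = "0" + "1" * m + "0"
            rest = rest[: -len(suffix)]
            break
    rest = re.sub("1+", lambda g: "1" * phi(M, len(g.group())), rest) + suffix
    if suffix:                                 # step 5: ends with 0 1^{m_i} 0, i > 1
        return rest
    return rest + "0" + "1" * m1 + "0"         # step 6: append the delimiter

def decode(y, M):
    m1 = min(M)
    if re.fullmatch("0*" + "1" * m1 + "0", y): # step 1
        y = y[: -(m1 + 1)]
    elif y.endswith("0" + "1" * m1 + "0"):     # step 2
        y = y[: -(m1 + 2)]
    # step 3: invert phi on every run whose length lies outside M
    y = re.sub("1+", lambda g: "1" * (len(g.group()) if len(g.group()) in M
                                      else phi_inv(M, len(g.group()))), y)
    return int("1" + y, 2)                     # step 4: prepend the dropped 1
```

For instance, for $D_2$ this sketch maps 1, 2 and 7 to 110, 0110 and 1110110 respectively, and for $D_{2,3}$ the number 46 is encoded as the pure informative delimiter 01110.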
A set of codewords with lengths $l_{i}$ $(l_{1}\leq l_{2}\leq\ldots)$ is called universal if there exists a constant $K$ such that for any finite distribution of probabilities $P = (p_{1},\ldots,p_{n})$, where $p_{1}\geq p_{2}\geq\ldots$, the following inequality holds \begin{equation} \label{q7} \sum_{i=1}^n l_{i}p_{i}\leq K \cdot \max (1, E (P)), \end{equation} where $E(P)=-\sum_{i=1}^n p_{i}\log_{2} p_{i}$ is the entropy of the distribution $P$, and $K$ is a constant independent of $P$. \begin{theorem} Any multi-delimiter code $D_{m_1,\ldots,m_t}$ is universal. \end{theorem} \begin{IEEEproof} As in Lemma 1, by $c_{i}$ denote the codeword in $D_{m_1,\ldots,m_t}$ corresponding to the integer $i$. Let us sort the codewords of $D_{m_1,\ldots,m_t}$ in the ascending order of their bit lengths, $a_{1}, a_{2},\ldots$, and map them to the symbols of the input alphabet sorted in the descending order of their probabilities. We claim that the length of any word $a_{i}$ also satisfies the length upper bound for $|c_{i}|$ given by Lemma 1. Indeed, consider the set $\{c_{1},c_{2},\ldots ,c_{i}\}.$ Obviously, each of its elements satisfies that upper bound. In the sequence $a_{1},a_{2},\ldots$ at least one element of this set, say $c_{j}, 1\leq j \leq i,$ occupies a place $k$ such that $k\geq i$, $a_{k}= c_{j}.$ This implies $|a_{i}|\leq |a_{k}| = |c_{j}|$; since $j\leq i$ and the bound of Lemma 1 is nondecreasing in $i$, it follows that $|a_{i}|$ satisfies the upper bound for $|c_{i}|$ stated in Lemma 1. The sequence $a_{1},a_{2},\ldots$ can be considered as a new encoding of natural numbers. To conclude the proof it remains only to apply the general Lemma 6 by Apostolico and Fraenkel taken from \cite{AF}: "Let $\psi$ be a binary representation such that $|\psi(k)|\leq c_{1}+c_{2}\log k$ $(k\in \mathbb{Z}^{+})$, where $c_{1}$ and $c_{2}$ are constants and $c_{2} > 0$. Let $p_{k}$ be the probability to meet $k$. If $p_{1}\geq p_{2}\geq \ldots\geq p_{n}$, $\sum p_{i}\leq 1$ then $\psi$ is universal".
\end{IEEEproof} \section{Byte aligned algorithms} The encoding and decoding algorithms considered above are bitwise, and therefore quite slow. We can construct accelerated algorithms that process bytes. Since decoding is performed in real time more often than encoding and in general takes longer, accelerating decoding is the more important task, and we focus on it. The general idea of the byte aligned decoding algorithm is similar to the one described in \cite{Kl1} for the Fibonacci codes. At the \emph{i}-th iteration of this algorithm, a whole number of bytes of the encoded text is read out. We denote this portion of text by $u_{i}$. Assume that $u_{i}$ has the form $s_{i}E(w_{1}^i),\ldots,E(w_{k}^i)r_{i}$, where $E$ is an encoding function; $E(w_{1}^i),\ldots, E(w_{k}^i)$ are the codewords of numbers $w_{1}^i,\ldots,w_{k}^i$; $s_{i}$ is the beginning of the text $u_{i}$, which does not contain a whole codeword; and $r_{i}$ is the remainder of the text $u_{i}$, which does not contain a whole codeword either. As is easy to see, the values $w_{1}^i,\ldots,w_{k}^i$ as well as the remainder $r_{i}$ can be unambiguously determined by $u_{i}$ and the remainder $r_{i-1}$ of the previous portion of bytes. Thus, we consider $u_{i}$ and $r_{i-1}$ as indices of predefined arrays $W_{1}, W_{2},\ldots,W_{k}, R$ containing the corresponding decoded numbers and a remainder, $W_{1}[r_{i-1},u_{i}]= w_{1}^i,\ldots, W_{k}[r_{i-1},u_{i}]=w_{k}^i, R[r_{i-1},u_{i}]=r_{i}$. We get the decoded numbers directly from these arrays. Note that the concatenation $r_{i-1}s_{i}$ is also a codeword, if it is not empty. Some bits from the beginning of the number $E^{-1}(r_{i-1}s_{i})$ may be unambiguously obtained at the $(i-1)$-th iteration, while others are obtained at the \emph{i}-th iteration. Thus, we make a correction: $w_{1}^i$ and $w_{k}^i$ may be not fully decoded numbers but, respectively, the ending and the beginning of the binary representation of a decoded number.
The values $w_{1}^i,\ldots,w_{k}^i$ corrected in this way we denote by $w_{1},\ldots,w_{k}$, omitting the index $i$ for simplicity. Accordingly, by $r_{i}$ we denote the ending of the text $u_{i}$ that cannot be decoded unambiguously at the \emph{i}-th iteration. Also, note that there is no need to store the first bit of the numbers $w_{1},\ldots,w_{k}$, because it is always equal to one. To illustrate how the method works, we apply this general byte aligned algorithm to the code $D_{2}$, assuming that at each iteration one byte is processed. The arrays $W_{1},\ldots,W_{k}$ are stored in a predefined table. Some rows of this table are shown in Table \ref{tab3}. The shortest codeword of $D_{2}$ has the form $110$. This implies that, with one exception, a byte can encompass no more than three full or partial codewords from $D_{2}$. The only case in which a byte can cover four codewords, fully or partially, is $0 110 110 x$, where $x$ is the last bit of the byte and the first bit of the fourth codeword. This bit can be attributed to the unprocessed remainder $r,$ and thus it is enough to store three resultant numbers. Together with the numbers $w_1$, $w_2$, $w_3$ and the remainder $r$ we store the following values in each row of the table: $|w_{i}|$ is the length of the \emph{i}-th number in bits (excluding the first bit); $f_{i}$ is the flag signaling whether the codeword $w_{i}$ is the last in the current byte $(f_{i}=0)$ or not $(f_{i}=1)$. Below the header of Table \ref{tab3} there are rows, written from top to bottom, which are used to decode the encoded text $11000111 \; 01101011\; 11001011 \; 11101101 \; 10011000$.
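As a concrete illustration (ours), this bit stream can be split into whole $D_2$ codewords greedily: a buffer is flushed as soon as it forms a complete codeword, i.e. either $0^p110$ or a word ending with the delimiter $0110$ (a validity test inferred from the decoding algorithm; prefix-freeness of the code makes greedy parsing safe). The resulting parse and decoded numbers mirror the rows of Table \ref{tab3}:

```python
# Splitting the example stream into D_2 codewords and decoding each one
# (a sketch, ours; the codeword test is inferred from the decoding algorithm).
import re

def is_codeword_d2(s):
    return bool(re.fullmatch("0*110", s)) or s.endswith("0110")

def split_stream_d2(bits):
    words, buf = [], ""
    for b in bits:
        buf += b
        if is_codeword_d2(buf):     # prefix-freeness makes greedy parsing safe
            words.append(buf)
            buf = ""
    return words, buf               # buf: tail that cannot be decoded yet

def decode_d2(c):
    c = c[:-3] if re.fullmatch("0*110", c) else c[:-4]  # strip the delimiter
    # an internal run of k >= 3 ones stands for k-1 ones; a single one for itself
    c = re.sub("1+", lambda g: "1" * (len(g.group()) - (len(g.group()) >= 3)), c)
    return int("1" + c, 2)          # prepend the dropped most significant bit

stream = "11000111" "01101011" "11001011" "11101101" "10011000"
words, tail = split_stream_d2(stream)
numbers = [decode_d2(w) for w in words]
# words -> 110, 001110110, 1011110010111110110, 110, 0110; tail '00'
```

The five numbers produced this way agree with the outputs accumulated row by row in the decoding table walkthrough.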
\begin{table*}[!t] \caption{Decoding table for bytewise method for the code $D_{2}$} \label{tab3} \centering \begin{tabular}{|c|c||c|c|c|c|c|c|c|c|c|c|} \hline $r_{i-1}$&$u$&$w_1$&$|w_1|$&$f_1$&$w_2$&$|w_2|$&$f_2$&$w_3$&$|w_3|$&$f_3$&$r_i$\\ \hline &11000111&&0&1&0011&4&0&&&&1\\ \hline 1&01101011&&0&1&1&1&0&&&&011\\ \hline 011&11001011&0111001&7&0&&&&&&&011\\ \hline 011&11101101&01111&5&1&&0&0&&&&1\\ \hline 1&10011000&&0&1&0&1&1&00&2&0&\\ \hline \end{tabular} \end{table*} The structure of the second byte is shown in Fig. \ref{fig1}. \begin{figure}[!t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} &$r_{i-1}$&\multicolumn{8}{|c|}{$i$-th byte}\\ \hline $\ldots$&1&0&1&1&0&1&0&1&1\\ \hline &\multicolumn{5}{|c|}{$E(w_1)$}&$E(w_2)$&\multicolumn{3}{|c|}{$r_i$}\\ \end{tabular} \caption{Parsing of the byte 01101011} \label{fig1} \end{figure} Let us examine the set of possible values of the remainder $r$. First, let us make the following comments: \begin{enumerate} \item If some $(\Delta ,k)$-group is a part of the byte composition, then it can be unambiguously decoded regardless of the next byte content, and, therefore, its bits will not be included in $r$. \item If the byte ends with $p\geq 3$ consecutive ones, then they will be decoded as $p-1$ ones regardless of the next byte content. In this case, the string $r$ consists of the last $1$, which during the decoding of the next byte will serve as an indication that the previous byte did not end with zero. \item The string $10$ can be located only at the end or at the beginning of some $(\Delta ,k)$-group. In both cases, it can be decoded regardless of the next byte content: in the first case it is decoded together with the $(\Delta ,k)$-group, in which it is included. In the second case, it is decoded as $10$. 
\end{enumerate} It follows from the first of these observations that the sequence $r$ cannot contain two consecutive zeros, because such a situation is possible only if two zeros constitute a full $(\Delta ,k)$-group (then $r$ does not contain its bits), or when the first "0" is the end of one $(\Delta ,k)$-group and the second "0" is the beginning of the next group (in this case $r$ contains only the second zero). It follows from the second and third observations that the sequence $r$ can contain neither three consecutive ones nor the string $10$. Thus, we obtain a total of 6 possible values of $r$: the empty string, $0,1, 01, 11, 011$. Now we show that any row in Table \ref{tab3} can be "packed" into a single 32-bit machine word. We enumerate all possible values of $r$ by binary numbers from 0 to 5, and thus three bits are enough to store any such value. Note that if a certain flag $f_{i}$ is zero (this means that the word $w_{i}$ is not fully decoded), then there is no need to consider the words $w_{i+1}, w_{i+2},...$, as well as the flags $f_{i+1}, f_{i+2},...$, as the code $w_{i}$ extends to the beginning of the string $r$ or to the right boundary of the byte. Setting these disregarded flag values to zero, we obtain the following possible combinations of the flag values $f_{1},f_{2},f_{3}$: $000, 100$ and $11x$, where $x$ is an arbitrary binary value. For each of these cases we describe a special method of packing a row of Table \ref{tab3} into a four-byte word (Fig. \ref{fig3}). However, in any case we write the values $f_{1},f_{2},f_{3}$ into the three most significant bits, and the values $w_{1}, |w_{1}|$; $w_{2}, |w_{2}|$ (if available); $w_{3}, |w_{3}|$ (if available) and $r$, from the least significant to the most significant bits, in the specified order. \textbf{$(f_{1}, f_{2}, f_{3})=000$.} In this case, the value $w_1$ takes no more than 10 bits. Indeed, consider first the case when $r_{i-1}=011$.
If $f_{1}=0$, then the most significant bit of the byte $u_{i}$ cannot be zero, since otherwise there would be a sequence $0110$, which means the end of the codeword and $f_{1}=1$. Assume that all the bits of $u_{i}$ are ones. Then the last bit refers to $r_{i}$, and the length of the decoded value $w_{1}$ is $3 + 7 = 10$ bits. If $u_{i}$ contains a zero bit, then during the decoding of $w_{1}$ a sequence of the form $01...10$ with more than 2 ones will be processed, which corresponds to a piece of the code $w_{1}$ one bit shorter. Therefore, the total bit length of $w_{1}$ will not exceed $3+8-1=10$ bits. If the value $r_{i-1}$ contains less than three bits, then the length of $w_{1}$, obviously, cannot exceed $8+2=10$ bits. Thus, in the case of $(f_{1}, f_{2}, f_{3})=000$, four bits are enough to store the value $|w_{1}|$, and, in general, the packing of a row of Table \ref{tab3} into a four-byte word appears as in Fig. \ref{fig3a}. \textbf{$(f_{1}, f_{2}, f_{3})=100$.} In this case, the string concatenation $r_{i-1}u_{i}$ must contain the delimiter $0110$ or start inside the delimiter. The value $w_{1}$ will be the longest if the delimiter is shifted to the right boundary of the byte. As the delimiter is not taken into consideration during decoding, the value $w_{1}$ will be obtained as a result of decoding at most 7 bits, and for the reasons set out in the case $(f_{1}, f_{2}, f_{3})=000$, the greatest possible length of $w_{1}$ will be one bit less, i.e. $|w_{1}|\leq 6$, and 3 bits are enough to store the value $|w_{1}|$. In the case $(f_{1}, f_{2}, f_{3})=100$ we also must store the value $w_{2}$. Since the code $w_{1}$ takes at least one bit of the byte $u_{i}$, for the code $w_{2}$ there remain no more than 7 bits, which requires 3 bits for the value $|w_{2}|$ and results in the packing shown in Fig. \ref{fig3b}. \textbf{$(f_{1}, f_{2}, f_{3})=11x$.} In this case, the code $w_{1}$ satisfies the same restrictions as in the case $(f_{1}, f_{2}, f_{3})=100$.
The code $w_{2}$, whose total length does not exceed 7 bits, must also contain a delimiter with no less than three bits. Thus, four bits are enough for the value $w_{2}$, and three bits for $|w_{2}|$. Since the code $w_{1}$ occupies at least one bit of the byte $u$, and the shortest code $w_{2}$ is $110$, the length of $w_{3}$, both encoded and decoded, is no longer than four bits. Thus, we get the packing shown in Fig. \ref{fig3c}. \begin{figure*}[!t] \setlength{\tabcolsep}{1pt} \centering \subfigure[\label{fig3a}]{ \begin{tabular}{|c|c|c|c|c|c| c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} 32&31&30&\multicolumn{12}{|c|}{}&\multicolumn{1}{c}{17}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{15}& \multicolumn{1}{c}{14}&\multicolumn{2}{c}{}&\multicolumn{1}{c|}{11}& \multicolumn{1}{c}{10}&\multicolumn{8}{c}{}&\multicolumn{1}{c|}{1}\\ \hline 0&0&0&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}& \hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt} &&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}& \hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}\\ \hline $f_1$&$f_2$&$f_3$&\multicolumn{12}{|c|}{}&\multicolumn{3}{|c|}{$r$}& \multicolumn{4}{|c|}{$|w_1|$}&\multicolumn{10}{|c|}{$w_1$}\\ \end{tabular} } \subfigure[\label{fig3b}]{ \begin{tabular}{|c|c|c|c|c|c| c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} 32&31&30&\multicolumn{7}{|c|}{}&\multicolumn{1}{c}{22}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{20}& \multicolumn{1}{c}{19}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{17}& \multicolumn{1}{c}{16}&\multicolumn{5}{c}{}&\multicolumn{1}{c|}{10}& \multicolumn{1}{c}{9}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{7}& \multicolumn{1}{c}{6}&\multicolumn{4}{c}{}&\multicolumn{1}{c|}{1}\\ \hline
1&0&0&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}& \hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt} &\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}& \hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}\\ \hline $f_1$&$f_2$&$f_3$&\multicolumn{7}{|c|}{}&\multicolumn{3}{|c|}{$r$}& \multicolumn{3}{|c|}{$|w_2|$}&\multicolumn{7}{|c|}{$w_2$}& \multicolumn{3}{|c|}{$|w_1|$}&\multicolumn{6}{|c|}{$w_1$}\\ \end{tabular} } \subfigure[\label{fig3c}]{ \begin{tabular}{|c|c|c|c|c|c| c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} 32&31&30& \multicolumn{3}{|c|}{}&\multicolumn{1}{c}{26}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{24}& \multicolumn{1}{c}{23}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{21}& \multicolumn{1}{c}{20}&\multicolumn{2}{c}{}&\multicolumn{1}{c|}{17}& \multicolumn{1}{c}{16}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{14}& \multicolumn{1}{c}{13}&\multicolumn{2}{c}{}&\multicolumn{1}{c|}{10}& \multicolumn{1}{c}{9}&\multicolumn{1}{c}{}&\multicolumn{1}{c|}{7}& \multicolumn{1}{c}{6}&\multicolumn{4}{c}{}&\multicolumn{1}{c|}{1}\\ \hline 1&1&&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}& \hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt} &\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}& \hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}&\hspace*{10pt}\\ \hline $f_1$&$f_2$&$f_3$&\multicolumn{3}{|c|}{}&\multicolumn{3}{|c|}{$r$}& \multicolumn{3}{|c|}{$|w_3|$}&\multicolumn{4}{|c|}{$w_3$}& \multicolumn{3}{|c|}{$|w_2|$}&\multicolumn{4}{|c|}{$w_2$}& \multicolumn{3}{|c|}{$|w_1|$}&\multicolumn{6}{|c|}{$w_1$}\\ \end{tabular} 
} \centering \caption{Packing a row of the decoding table into a four-byte computer word} \label{fig3} \end{figure*} Now we describe in detail the byte aligned decoding algorithm for the code $D_{2}$ (Fig. \ref{program}). By $x << c$ denote the operation of shifting the value $x$ to the left, and by $x >> c$ to the right, by $c$ bits (the shift is not cyclic, and new bits are filled with zeros). The symbol $\&$ denotes the bitwise operation "and", and the symbol $|$ stands for the bitwise "or". By $text_{i}$ we denote the $i$-th byte of the encoded text, and by $t$ a row of Table \ref{tab3} packed into a four-byte word. In the variable $w$ a decoded number is formed as the concatenation of the strings $w_{1}, w_{2}$ or $w_{3}$, and in the variable \emph{len} the lengths of these strings are stored. The initial value of $w$ consists of one "1" bit; then it is shifted to the left, and the rightmost bits are replaced by the values $w_{1}, w_{2}$ or $w_{3}$ (from the relevant parts of the word $t$), so the most significant bit of $w$ always remains 1. \begin{figure}[H] \begin{tabbing} \hspace{10mm}\=\hspace{10mm}\=\hspace{10mm}\=\hspace{10mm}\=\hspace{60mm}\=\\ $i\leftarrow 1;$\>\>\>\>\>//byte number of the encoded text\\ $r\leftarrow 0;$\\ $w\leftarrow 1;$\\ while (the end of the text is not reached) \{ \\ \>$t \leftarrow TAB[r][text_i];$\>\>\>\>// read out 4-byte string in Tab.
3\\ \>if($t\&\mathrm{0x80000000}$) \{\>\>\>\>// if $f_1 = 1$\\ \>\>$len \leftarrow (t >> 6)\&\mathrm{0x7};$\>\>\>// $len \leftarrow |w_1|$\\ \>\>output $(w << len)|(t\&\mathrm{0x3F});$\>\>\>// decoded number: $w$ with \\ \>\>\>\>\>// appended to the right 6 least significant bits of $t$\\ \>\>$w \leftarrow 1;$\\ \>\>if($t\&\mathrm{0x40000000}$) \{\>\>\>// if $f_2 = 1$\\ \>\>\>$len \leftarrow (t >> 13)\&\mathrm{0x7};$\>\>// $len \leftarrow |w_2|$\\ \>\>\>output $(w << len)|((t >> 9)\&\mathrm{0xF});$\>\>// decoded number: $1w_2$\\ \>\>\>$w \leftarrow 1;$\\ \>\>\>$len \leftarrow (t >> 20)\&\mathrm{0x7};$\>\>// $len \leftarrow |w_3|$\\ \>\>\>if($t\&\mathrm{0x20000000}$) \{\>\>// if $f_3 = 1$\\ \>\>\>\>output $(w << len)|((t >> 16)\&\mathrm{0xF});$\>// decoded number: $1w_3$\\ \>\>\>\>$w \leftarrow 1;$\\ \>\>\>\} else\>\>// $(f_1, f_2, f_3)=110$\\ \>\>\>\>$w \leftarrow (w << len)|((t >> 16)\&\mathrm{0xF});$\>// $w \leftarrow 1w_3$\\ \>\>\>$r \leftarrow (t >> 23)\&\mathrm{0x7};$\>\>// $r$ in bits 24-26\\ \>\>\} else \{\>\>\>// $(f_1, f_2)=10$\\ \>\>\>$len \leftarrow (t >> 16)\&\mathrm{0x7};$ \>\>// $len \leftarrow |w_2|$\\ \>\>\>$w \leftarrow (w << len)|((t >> 9)\&\mathrm{0x7F});$ \>\>// $w \leftarrow 1w_2$\\ \>\>\>$r \leftarrow (t >> 19)\&7;$ \>\>// $r$ in bits 20-22\\ \>\>\}\\ \>\} else \{\>\>\>\>// if $f_1 = 0$\\ \>\>$len \leftarrow (t >> 10)\&\mathrm{0xF};$\>\>\>// $len \leftarrow |w_1|$\\ \>\>$w \leftarrow (w << len)|(t\&\mathrm{0x3FF});$ \>\>\>// append $w_1$ to $w$ \\ \>\>$r \leftarrow (t >> 14)\&\mathrm{0x7};$ \>\>\>// $r$ in bits 15-17\\ \>\}\\ \>$i \leftarrow i+1;$\>\>\>\>// proceed to the next byte\\ \} \end{tabbing} \centering \caption{Bytewise decoding algorithm for the code $D_2$} \label{program} \end{figure} Let us estimate the storage consumption of the method described above. For each of the 6 possible values of $r_{i-1}$ there exist $256$ values of $u_{i}$; thus Table \ref{tab3} contains $6\times 256$ rows, and 4 bytes are required to store each of them.
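The remainder analysis and the table size can be checked mechanically: by the three observations above, a remainder $r$ is exactly a binary string containing none of the substrings $00$, $111$ and $10$ (a short sketch, ours):

```python
# Enumerate all binary strings avoiding the substrings 00, 111 and 10;
# by the observations above these are exactly the possible remainders r.
def valid_remainders():
    result, frontier = [""], [""]
    while frontier:  # extend by one bit; the search dies out at length 4
        frontier = [s + b for s in frontier for b in "01"
                    if not any(p in s + b for p in ("00", "111", "10"))]
        result += frontier
    return result

R = valid_remainders()
# R: the empty string, 0, 1, 01, 11, 011 -- six values, so the decoding
# table has 6 x 256 rows of 4 bytes each, i.e. 6144 bytes (6 KB).
table_bytes = len(R) * 256 * 4
```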
Thus, the memory requirement of the bytewise decoding method is 6 KB. Let us compare the space complexity of this method with that of the fast byte aligned methods used for decoding Fibonacci codes. The most detailed study of them is presented in \cite{Kl1}, where three such methods are described. The fastest of them is the method that involves using the table named Fib3. It requires 21.4 KB of memory, i.e. more than 3.5 times more than the method we propose. The time complexities of these methods were compared by numerical experiments. A random 20-million-word fragment of the English Wikipedia text corpus was encoded by the codes $D_{2}$ and Fib3 and then decoded by the byte aligned methods mentioned above. The decoding time was measured. The experiment was repeated $100$ times, and the results were averaged. These results are shown in Table \ref{tab6}. As is seen, decoding of $D_{2}$ is about $20\%$ faster than that of Fib3. This is mainly due to the fact that the decoding of $D_2$ requires only one memory read operation at each iteration, after which all the other operations can be performed in processor registers very rapidly, while the Fib3 decoding method mentioned above requires 2 or 3 readings from one- or two-dimensional arrays at each iteration. \begin{table}[!t] \caption{Comparison of bytewise decoding methods complexity for codes $D_{2}$ and Fib3} \label{tab6} \centering \begin{tabular}{|c|c|c|} \hline &Bytewise decoding of $D_2$&Bytewise decoding of Fib3\\ \hline Memory&6 KB&21.4 KB \\ \hline Time&$0.255s$&$0.321s$\\ \hline \end{tabular} \end{table} \section{Compressing data by multi-delimiter codes} The applicability of a code to data compression is largely related to its density, which is measured by the number of codewords of length not exceeding $n$. Let us first calculate the asymptotic density of the code $D_{2}$. By $f_{n}$ denote the number of codewords in $D_{2}$ of the length $n$.
\begin{lemma} For $n\geq 7$, the following equality holds \begin{equation} \label{q8} f_{n}=f_{n-1}+f_{n-2}+f_{n-3}+f_{n-6} \end{equation} \end{lemma} \begin{IEEEproof} Applying formula (4) to the parameters of the code $D_{2}$ $(t=1, m_{1}=2)$ and taking into account that $T_{n}-T_{n-1}=0$ for $n\geq 6$, we obtain the following recurrent relation, which is true for $n\geq 6:$ \begin{equation} \label{q9} f_{n}=f_{n-1}+2f_{n-2}-f_{n-3}+f_{n-5} \end{equation} By induction, we prove that for $n \geq 7$ equality (8) is equivalent to (9). It is necessary to prove the equality of the right-hand sides of (8) and (9), which after reduction takes the form $f_{n-2}-f_{n-3}+f_{n-5}= f_{n-3}+f_{n-6}$. This gives the equality \begin{equation} \label{q4eq} f_{n-2}+f_{n-5}=2f_{n-3}+f_{n-6} \end{equation} For $n = 7$ this equality is easy to check directly. Suppose it holds for some $n \geq 7$. Express $f_{n-1}$ by using formula (9): $f_{n-1} = f_{n-2} + 2f_{n-3}-f_{n-4} + f_{n-6}$. It gives $2f_{n-3}+ f_{n-6}= f_{n-1} - f_{n-2} + f_{n-4}$. Substituting this expression into the right-hand side of (10), we obtain the equality $f_{n-1} + f_{n-4} = 2f_{n-2} + f_{n-5}$, which coincides with equality (10) if we replace $n$ by $n+1$. \end{IEEEproof} By $s_{n}$ denote the number of codewords whose lengths do not exceed $n$, $s_{n}=\sum_{i=1}^n f_{i}$. Taking into account that $f_{3}= f_{4}=1, f_{5}=2, f_{6}=3$ and summing both parts of formula (8) over all indices $n\geq 7$, we obtain: \begin{eqnarray} \label{q11} s_{n}= \sum_{i=3}^6 f_{i} +\sum_{i=7}^n f_{i}=\nonumber \\ 7+\sum_{i=7}^n(f_{i-1} + f_{i-2} + f_{i-3} + f_{i-6}) \end{eqnarray} Note that the following identities hold: \begin{eqnarray*} \sum_{i=7}^n f_{i-1} = \sum_{i=6}^{n-1}f_{i}=s_{n-1}-4;\nonumber \\ \sum_{i=7}^n f_{i-2} = \sum_{i=5}^{n-2}f_{i}=s_{n-2}-2;\nonumber \end{eqnarray*} \begin{eqnarray*} \sum_{i=7}^n f_{i-3} = \sum_{i=4}^{n-3}f_{i}=s_{n-3}-1;\nonumber \\ \sum_{i=7}^n f_{i-6} = \sum_{i=1}^{n-6}f_{i}=s_{n-6}.
\end{eqnarray*} Substituting these expressions into formula (\ref{q11}), we obtain: \begin{equation} \label{q12} s_{n}=s_{n-1} + s_{n-2} + s_{n-3} + s_{n-6} \end{equation} Since $s_{2}=s_{1}=s_{0}=s_{-1}=\cdots=0$, $s_{3}=1, s_{4}=2, s_{5}=4, s_{6}=7$, the equality (\ref{q12}) holds for $n\geq6$. Formula (\ref{q12}) allows us to find the generating function $G(z)$ for $s_{n}$: \begin{eqnarray} \label{q13} G(z)=\sum_{n=0}^{\infty}s_{n}z^{n}= z^{3}+ 2z^{4} + 4z^{5} +\nonumber \\ +\sum_{n=6}^{\infty}s_{n}z^{n}= z^{3}+ 2z^{4} + 4z^{5} +\nonumber \\ +\sum_{n=6}^{\infty}(s_{n-1}+s_{n-2}+s_{n-3}+ s_{n-6})z^{n} \end{eqnarray} Take into account the following equalities: \begin{eqnarray*} \sum_{n=6}^{\infty}s_{n-1}z^{n}= z\sum_{n=6}^{\infty}s_{n-1}z^{n-1}=\\ z\sum_{n=5}^{\infty}s_{n}z^{n}=zG(z)-z^{4}- 2z^{5};\\ \\ \sum_{n=6}^{\infty}s_{n-2}z^{n}= z^{2}\sum_{n=6}^{\infty}s_{n-2}z^{n-2}=\\ z^{2}\sum_{n=4}^{\infty}s_{n}z^{n}=z^{2}G(z)-z^{5};\\ \\ \sum_{n=6}^{\infty}s_{n-3}z^{n}= z^{3}\sum_{n=6}^{\infty}s_{n-3}z^{n-3}=\\ z^{3}\sum_{n=3}^{\infty}s_{n}z^{n}=z^{3}G(z);\\ \\ \sum_{n=6}^{\infty}s_{n-6}z^{n}= z^{6}\sum_{n=6}^{\infty}s_{n-6}z^{n-6}=\\ z^{6}\sum_{n=0} ^{\infty}s_{n}z^{n}=z^{6}G(z). \end{eqnarray*} Substituting these equalities into formula (13) and solving the resulting equation with respect to $G(z)$, we obtain: \[ G(z)= \frac{z^{3}+z^{4}+z^{5}}{1-z-z^{2}-z^{3}-z^{6}}=\frac{z^{3}}{1-2z+z^{3}-z^{4}}\] Let us decompose $G(z)$ into a sum of partial fractions: \begin{eqnarray} \label{q14} G(z)=\frac{-0.3618+0.2982i}{z-0.809-0.9816i} +\nonumber \\ +\frac{-0.3618-0.2982i}{z-0.809+0.9816i}-\nonumber \\ -\frac{0.1888}{z+1.1537}-\frac{0.0876}{z-0.5357}, \end{eqnarray} where $i$ is the imaginary unit, $i=\sqrt{-1}$. As seen from (\ref{q13}), the coefficient $s_{n}$ equals the $n$-th coefficient of the Maclaurin series for the function $G(z)$.
If we expand the function $g(z)=\frac{1}{z-a}$ into a Maclaurin series, then the coefficient of $z^{n}$ equals $\frac{g^{(n)}(0)}{n!}=\frac{(-1)^{n}}{(-a)^{n+1}}=-\frac{1}{a^{n+1}}$. Thus, the order of growth of $s_n$ is determined by the value $1/|a|^{n}$, where $a$ is the pole of the smallest modulus among all fractions of the form $\frac{b}{z-a}$ in formula (\ref{q14}). This is the last fraction in (\ref{q14}). Thus, $a=0.5357$ and the order of growth of $s_{n}$ is given by the expression \begin{equation} \label{q15} \left(\frac{1}{0.5357}\right)^{n}\approx 1.867^{n} \end{equation} As shown in \cite{Kl1}, among the family of Fibonacci codes of higher orders the code Fib3 gives the best compression rate in the case of encoding natural language texts. The asymptotic density of this code is $1.839^{n}$. Thus, the code $D_{2}$ is asymptotically denser than Fib3. This is also evident from the simple fact that the number of words of length $n$ in the code $D_{2}$ is determined by formula (8), $f_{n}=f_{n-1}+f_{n-2}+f_{n-3}+f_{n-6}$, while for the code Fib3 it is $f_{n}=f_{n-1}+f_{n-2}+f_{n-3}$. Using the standard technique of generating functions, it is not difficult to calculate the asymptotic density of other multi-delimiter codes. For several such codes that may be of interest from the practical point of view, as well as for several Fibonacci codes, these values together with the numbers of short codewords are given in Table \ref{tab7}.
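The counting results for $D_2$ can be cross-checked by brute force. The sketch below (ours) enumerates codewords of each length using the codeword form implied by the encoding algorithm, namely $0^p110$, or $w0110$ where $w$ contains at least one 1 and no maximal run of exactly two 1s:

```python
# Brute-force check of the density results for D_2 (a sketch, ours).
import re
from itertools import product

def is_codeword_d2(s):
    if re.fullmatch("0*110", s):
        return True
    if not s.endswith("0110"):
        return False
    runs = [len(r) for r in re.findall("1+", s[:-4])]
    return bool(runs) and 2 not in runs   # no isolated group of exactly two 1s

# f[n]: number of D_2 codewords of length n, by exhaustive enumeration
f = {n: sum(is_codeword_d2("".join(p)) for p in product("01", repeat=n))
     for n in range(1, 16)}
```

Running this confirms $f_3,\ldots,f_6 = 1,1,2,3$, the recurrence $f_n = f_{n-1}+f_{n-2}+f_{n-3}+f_{n-6}$ for $n\geq 7$, the value 1906 quoted for $D_2$ at $n=15$ in Table \ref{tab7}, and a growth ratio $f_{15}/f_{14}\approx 1.867$.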
\begin{table*}[!t] \caption{The number of codewords of length $\leq n$ for some codes} \label{tab7} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Code&Asymptotic&$n=2$&$n=3$&$n=4$&$n=5$&$n=6$&$n=7$&$n=8$&$n=15$\\ \hline \multicolumn{10}{|c|}{The codes with the shortest codeword of the length 2}\\ \hline Fib2&$1.618^n$&1&2&4&7&12&20&33&986\\ \hline $D_1$&$1.755^n$&1&2&3&5&9&16&28&1432\\ \hline $D_{1,2}$&$1.618^n$&1&3&5&7&10&16&27&799\\ \hline $D_{1,3}$&$1.674^n$&1&2&4&7&11&18&30&1106\\ \hline \multicolumn{10}{|c|}{The codes with the shortest codeword of the length 3}\\ \hline Fib3&$1.839^n$&0&1&2&4&8&15&28&2031\\ \hline $D_2$&$1.867^n$&0&1&2&4&7&13&24&1906\\ \hline $D_{2,3}$&$1.785^n$&0&1&3&6&11&19&33&1874\\ \hline $D_{2,4}$&$1.823^n$&0&1&2&5&9&17&30&1998\\ \hline $D_{2,5}$&$1.844^n$&0&1&2&4&8&15&28&1999\\ \hline $D_{2,3,4}$&$1.731^n$&0&1&3&7&13&23&39&1721\\ \hline $D_{2,3,5}$&$1.755^n$&0&1&3&6&12&21&37&1833\\ \hline $D_{2,4,5}$&$1.796^n$&0&1&2&5&10&19&34&2019\\ \hline $D_{2,4,6}$&$1.809^n$&0&1&2&5&9&18&32&2032\\ \hline \multicolumn{10}{|c|}{The codes with the shortest codeword of the length 4}\\ \hline Fib4&$1.928^n$&0&0&1&2&4&8&16&1606\\ \hline $D_3$&$1.933^n$&0&0&1&2&4&8&15&1510\\ \hline \end{tabular} \end{table*} As seen, many multi-delimiter codes contain a larger number of short codewords than the corresponding Fibonacci codes with the same length of the shortest codeword. The "champions" are the codes $D_{2,3},$ $D_{2,3,4},$ $D_{2,3,5},$ and $D_{2,4,5}$. They are the candidates for efficient compression. However, the code $D_{2,3,4}$ has quite low asymptotic density, which narrows its application area to small alphabets only. We investigate more thoroughly the other three codes together with the code $D_2$, which has the highest asymptotic density in the class of codes with the shortest word of length 3. The compression efficiency of multi-delimiter codes was experimentally measured on different sources of English texts.
Namely, we took the Bible (King James version), three other famous pieces of writing, and the full content of English Wikipedia. The results are presented in Table \ref{tab8} in terms of the average codeword length. We compared the performance of the multi-delimiter codes with that of the Fibonacci code Fib3, which is taken as the base for comparison. This code is known as the most efficient for natural language text compression among all Fibonacci codes. \begin{table*}[!t] \caption{Empirical comparison of the compression rate (the average codeword length) of Fib3 and some multi-delimiter codes} \label{tab8} \centering \begin{tabular} {|l|l|l|l|l|l|l|} \hline Source & Alphabet size & Fib3 & $D_{2}$ & $D_{2,3}$ & $D_{2,3,5}$ &$D_{2,4,5}$ \\ \hline \hline Bible KJV &12,452 &$9.21$ &$9.35 (+1.6\%)$ &$9.03 (-2\%)$ & $8.95(-2.8\%)$ & $9.04(-1.8\%)$ \\ \hline \hline Hamlet, Shakespeare & 4,501 &$10.0$ & $10.16 (+1.6\%)$ &$9.82 (-1.8\%)$ &$9.74(-2.5\%)$ & $9.81(-1.9\%)$ \\ \hline \hline Robinson Crusoe, D. Defoe & 5,994 & $9.4$ &$9.55 (+1.6\%)$ & $9.19 (-2.2\%)$ & $9.12 (-3\%)$ & $9.21 (-2\%)$ \\ \hline \hline Oliver Twist, C. Dickens & 10,027 & $10.06$ &$10.21 (+1.5\%)$ &$9.91 (-1.6\%)$ &$9.84 (-2.3\%)$ & $9.89 (-1.7\%)$ \\ \hline \hline English Wikipedia & 5,487,696 &$11.585$ &$11.696 (+1\%)$ &$11.521 (-0.6\%)$ &$11.517 (-0.6\%)$ & $11.497 (-0.8\%)$ \\ \hline \end{tabular} \end{table*} As seen, the codes with 2 and 3 delimiters outperform the Fib3 code. For example, the average codeword length for the code $D_{2,3,5}$ is about $2-3\%$ less than that for the code Fib3 if the alphabet size is around 10K words. This is a significant difference if we take into account that the code Fib3 exceeds the entropy bound by only $5-6\%$ for English texts, as reported in \cite{Kl1}. Since the asymptotic density of these multi-delimiter codes is lower than that of Fib3, their advantage over Fib3 decreases as the alphabet size grows.
However, codes with 2 and 3 delimiters are still superior even for Wikipedia, which is one of the largest natural language text corpora to date, containing over 5 million distinct words. Compared with the multi-delimiter codes, the code Fib3 also has a drawback concerning instantaneous separation, a property that is important for searching for a word in a compressed file without decompressing it. Fib3, the multi-delimiter codes, and other codes used for text compression are all characterized by the following: if a certain bit sequence $w$ occurs in a compressed file, we cannot guarantee that it truly corresponds to an occurrence of the whole codeword $w$, since it could be a suffix of another codeword. In multi-delimiter codes, to check whether $w$ is truly a separate codeword, it is enough to examine a fixed number of bits that precede $w$. For example, it is enough to check 4 bits for the code $D_2$: if they turn out to be $0110$, then $w$ is a codeword; otherwise it is not. However, no fixed number of bits preceding a codeword suffices for the code Fib3, since its delimiter, which is also its shortest codeword, is 111. Several such codewords can ``stick together'' if they are adjacent. As one way to avoid this problem, it is proposed in \cite{Kl1} to exclude the shortest codeword $111$ from the code Fib3. However, the density and compression efficiency of the code obtained in this way are significantly worse than those of all the codes discussed above, including $D_{2}$. \section{Conclusion} In this paper we introduce a new family of splittable codes that are based on encoding sequences of ordered integer pairs. Splittable codes form a rich set of codes that includes the $(2,3)$-codes, the Fibonacci codes of higher orders, and the multi-delimiter codes. The multi-delimiter codes are of special interest.
They possess all the properties known for the Fibonacci codes, such as completeness, universality, simple vocabulary representation, and strong robustness. In addition, they have several further advantages: \begin{itemize}{\IEEEsetlabelwidth{(iii)}} \item[(i)] Adaptability. By varying the delimiters, we can adapt a multi-delimiter code to a given source probability distribution and alphabet size. \item[(ii)] Better compression rate on natural language texts. \item[(iii)] Good computational performance, minimizing time and storage overheads. \item[(iv)] Instantaneous separation of codewords, allowing faster search in compressed data. \end{itemize} The set of multi-delimiter codes together with the set of Fibonacci codes can be useful in many practical applications. \newpage
\section{Introduction\label{SI}} Electromagnetic radiative decays of hadrons provide useful information on the hadron structure. Their quark model description is based on the Elementary Emission Model (EEM) that assumes that the decay takes place through the emission of the photon by a quark (or antiquark) of the hadron, see for example \cite{LeY88}. As the electromagnetic transition operator is known, without any free parameter, radiative decays may be a powerful tool to discriminate among different spectroscopic hadron models. In practice this discrimination may be rather difficult. Think, for example, of heavy quarkonium (bottomonium and charmonium) for which the nonrelativistic quark potential model is undoubtedly the most successful one in the spectral description of states below the open flavor meson-meson thresholds, see for instance \cite{Eic05} and references therein. (This is so even for the low lying charmonium states for which the calculated speed of the quark $Q$, or the antiquark $\overline{Q}$, given by $\frac{\lvert \vb*{p}_{Q}\rvert }{M_{Q}}$ where $\vb*{p}_{Q}$ $\left(M_{Q}\right) $ is the three-momentum (mass) of the quark, can be about half of the speed of light.) Then, in order to build the electromagnetic transition operator for $I\rightarrow\gamma F$, where $I$ and $F$ are bottomonium or charmonium states, a nonrelativistic reduction of the well-known point-like quark photon interaction up to $\frac{\lvert \vb*{p}_{Q}\rvert }{M_{Q}}$ order is carried out. Moreover, for transitions where the wavelength of the emitted photon is larger than the hadronic size scale of the process, the operator is further simplified to the so-called Long Wave Length Approximation (LWLA). Hence the comparison of calculated radiative widths to data may be testing not only the hadron structure model but also the hadron decay model approximation.
This could be the reason why different spectroscopic quark models are successful (or fail) in the description of the same radiative decays \cite{Eic05,GI85,Seg16}. \bigskip In this article we focus on bottomonium, for which the nonrelativistic spectroscopic quark model as well as the nonrelativistic form of the electromagnetic transition operator can be reasonably taken for granted, and examine the requirements needed to get an accurate general description of radiative decays. We shall show that such a description may be attained, even from the simplest quark potential model wave functions reasonably fitting the spectroscopy, when the calculated mass differences between bottomonium states approximate the experimental ones. When there is a discrepancy of tens of MeV at most, a good description is still feasible if the measured masses are properly implemented in the calculation. \bigskip The contents of the article are organized as follows. In Sec.~\ref{SII} we use the simplest spectroscopic (Cornell) potential model, yet incorporating the basic QCD ingredients for a physical description of bottomonium, for the calculation of the masses of the $S$ and $P$ spin triplet states far below open flavor thresholds. In Sec.~\ref{SIII} we recall the nonrelativistic form of the electromagnetic operator and focus on $S\longleftrightarrow P$ transitions between spin triplet states, since these are quantitatively the most important ones and more data are available for them. In Sec.~\ref{SIV} we take the LWLA, which permits factoring out the dependence on the final and initial state mass difference in the operator. This allows us to implement the experimental masses in the calculation, which turns out to be crucial for an accurate description of decays within the range of validity of the LWLA.
In Sec.~\ref{SV} we pursue the mass difference factorization in the general case to get a good description of measured decays beyond the LWLA range of validity and to generate reliable predictions for not yet measured ones. Finally, in Sec.~\ref{SVI} our main results and conclusions are summarized. \section{Spectroscopic Quark Model \label{SII}} The simplest nonrelativistic quark model description of bottomonium $\left( b\overline{b}\right) $ follows from the hamiltonian \begin{equation} H_{C}=\frac{\vb*{p}^{2}}{M_{b}}+V_{C}\left( r\right) \label{hamc} \end{equation} where $V_{C}\left( r\right) $ is a Cornell-like potential \cite{Eic05,Eic80,Eic94} \begin{equation} V_{C}\left( r\right) =\sigma r-\frac{\zeta}{r} \label{Cor} \end{equation} with $r$ standing for the $b-\overline{b}$ radial distance and $\sigma$ and $\zeta$ for the string tension and the chromoelectric Coulomb strength parameters respectively. This static potential form has been justified from quenched lattice QCD calculations, see \cite{Bal01} and references therein. It should be kept in mind, though, that in the spirit of nonrelativistic quark model calculations $\sigma$ and $\zeta$ are effective parameters through which some non considered corrections to the potential may be implicitly taken into account. Any different set of values of the parameters $\sigma,$ $\zeta$ and $M_{b}$ defines a different Cornell potential model. From now on we fix the Coulomb strength to $\zeta=100$ MeV fm, corresponding to a strong quark-gluon coupling $\alpha_{s}=\frac{3\zeta}{4\hbar}\simeq0.38$, in agreement with the value derived from QCD from the hyperfine splitting of $1p$ states in bottomonium \cite{Ynd95}. As for $\sigma$, we expect from lattice studies \cite{Bal01} a value around $900$ MeV/fm.
Then, we choose it together with the quark mass, $M_{b}$, to get a reasonable fit, within a few tens of MeV, to the masses of $0^{-}\left( 1^{--}\right) $ and $0^{+}\left( J^{++}\right)$, $J=0,1,2,$ spin triplet states (let us recall that the neglected spin-spin contribution to the mass is three times smaller for triplet than for singlet states). More precisely, we define our model by (notice that this model does not contain any additive constant in the potential) \begin{equation} \begin{array} [c]{c} \sigma=850\,\text{MeV}/\text{fm}\\ \zeta=100\,\text{MeV}\,\text{fm}\\ M_{b}=4793\,\text{MeV} \end{array} \label{parameters} \end{equation} from which a reasonable overall description of the spectral masses is obtained as shown in Table~\ref{Tabmassbbbar}. \begin{table} \centering \begin{tabular}{ccccc} \toprule $J^{PC}$ & $ \begin{array}{c} \text{Cornell}\\ nL\;\text{States} \end{array} $ & $ \begin{array}{c} M_{Cor}\\ \text{MeV} \end{array} $ & $ \begin{array} [c]{c} M_{PDG}\\ \text{MeV} \end{array} $ & $ \begin{array}{c} \langle r^{2}\rangle ^{\frac{1}{2}}\\ \text{fm} \end{array} $\\ \midrule $1^{--}$ & $1S$ & $9459$ & $9460.30\pm0.26$ & $0.22$\\ & $2S$ & $10012$ & $10023.026\pm0.31$ & $0.51$\\ & $1D$ & $10157$ & $10163.7\pm1.4$ & \\ & $3S$ & $10342$ & $10355.2\pm0.5$ & $0.75$\\ & $2D$ & $10438$ & & \\ & $4S$ & $10608$ & $10579.4\pm1.2$ & $0.96$\\ & $3D$ & $10682$ & & \\ & $5S$ & $10841$ & & $1.15$\\ & & & $10889.9_{-2.6}^{+3.2}$ & \\ & $4D$ & $10902$ & & \\ \midrule $0^{++}$ & $1P$ & $9920$ & $9859.44\pm0.42\pm0.31$ & $0.41$\\ $1^{++}$ & $1P$ & $9920$ & $9892.78\pm0.26\pm0.31$ & $0.41$\\ $2^{++}$ & $1P$ & $9920$ & $9912.21\pm0.26\pm0.31$ & $0.41$\\ \midrule $0^{++}$ & $2P$ & $10259$ & $10232.5\pm0.4\pm0.5$ & $0.67$\\ $1^{++}$ & $2P$ & $10259$ & $10255.46\pm0.22\pm0.50$ & $0.67$\\ $2^{++}$ & $2P$ & $10259$ & $10268.65\pm0.22\pm0.50$ & $0.67$\\ \midrule $0^{++}$ & $3P$ & $10531$ & & $0.88$\\ $1^{++}$ & $3P$ & $10531$ & $10513.4\pm0.7$ & $0.88$\\ $2^{++}$ & $3P$ &
$10531$ & $10524.0\pm0.8$ & $0.88$\\ \bottomrule \end{tabular} \caption{\label{Tabmassbbbar}Calculated $1^{--}$ and $J^{++}$ bottomonium masses, $M_{Cor}$, far below their corresponding $S-$ wave open flavor meson-meson threshold (see for example \cite{Gon14} for a compilation of the values of these thresholds). The spectroscopic notation $nL$, where $n$ and $L$ are the radial and orbital angular momentum numbers respectively, has been used to characterize the $H_C$ eigenstates. Masses for experimental resonances, $M_{PDG}$, have been taken from \cite{PDG18}. For $p$ waves we quote separately the $np_{0}$, $np_{1}$ and $np_{2}$ states. The root mean square radii for the calculated states, $\left\langle r^{2}\right\rangle ^{\frac{1}{2}}$, are also reported.} \end{table} Some comments are in order. First, the low lying $1^{--}$ masses are well reproduced within $15$ MeV. The discrepancy between the calculated mass of the $4S$ state at $10608$ MeV and the experimental mass at $10579.4$ MeV may indicate mixing of the $4S$ and $3D$ states. Thus, the measured resonance would have a dominant $4S$ component, whereas a not yet discovered resonance at about $10750$ MeV would have a dominant $3D$ component. Notice that $S-D$ mixing should also be present for the $5S$ and $4D$ states, apart from a possible additional mixing with the lowest hybrid state \cite{Bru19}. For higher $1^{--}$ states, not included in the table, the first $S-$ wave open flavor meson-meson threshold, $B\overline{B}_{1}$ at $11003$ MeV, may play an important role. Second, the calculated masses for the $1P$, $2P$ and $3P$ states differ from the corresponding measured $^{3}\!P_{2}$ masses by less than $10$ MeV. Therefore we may consider that our model fits reasonably well the $1^{--}$, $2^{++}$, and to a lesser extent $1^{++}$, spectroscopy.
Third, the calculated speed of the quark or antiquark is at most $0.3\,c$, which justifies the nonrelativistic form of the electromagnetic operator up to $\frac{\lvert \vb*{p}_{b}\rvert }{M_{b}}$ order we shall make use of. \bigskip Certainly potential corrections should be incorporated to the model for a more accurate description of the spectrum. We shall assume henceforth that for states far below (about $100$ MeV or more) their corresponding $S-$ wave open flavor meson-meson thresholds these corrections may be taken into account, at least to some extent, via first order perturbation theory. Then the model provides us with an appropriate set of bottomonium wave functions to be tested. It is also worth pointing out that although this model does not contain coupled channel corrections, it has proved to be useful as a starting point for the implicit incorporation (through the modification of the potential) of dominant spectroscopic threshold effects in bottomonium as well as in charmonium \cite{Gon14,Gon15,BrG19}. Alternatively, coupled channel corrections have been explicitly implemented through unquenched quark models from a nonrelativistic \cite{Eic94,Eic04} or a semirelativistic \cite{Fer13,*Fer14_5,*Fer14_9} quark-antiquark Hamiltonian. \section{Electromagnetic Decay Model \label{SIII}} Let us consider the decay $I\rightarrow\gamma F$ where $I$ and $F$ are the initial and final bottomonium states respectively.
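As a cross-check of the wave functions that enter the decay matrix elements, the spectrum of Table~\ref{Tabmassbbbar} can be reproduced by solving the radial Schr\"odinger equation for \eqref{hamc} with \eqref{Cor} numerically. A minimal finite-difference sketch follows; the grid size and radial cutoff are illustrative choices, not taken from the text:

```python
import numpy as np

HBARC = 197.327   # MeV fm
SIGMA = 850.0     # MeV/fm, string tension, Eq. (parameters)
ZETA  = 100.0     # MeV fm, Coulomb strength
MB    = 4793.0    # MeV, b-quark mass

def cornell_masses(l=0, n_levels=3, rmax=7.0, npts=1400):
    """Lowest bound-state masses of H_C = p^2/M_b + sigma*r - zeta/r for
    orbital angular momentum l, via finite differences on u(r) = r R(r)."""
    h = rmax / (npts + 1)
    r = h * np.arange(1, npts + 1)
    kin = HBARC**2 / MB                       # hbar^2/(2 mu) with mu = M_b/2
    V = SIGMA * r - ZETA / r + kin * l * (l + 1) / r**2
    # tridiagonal discretization of -kin u'' + V u = E u, with u(0)=u(rmax)=0
    H = (np.diag(2.0 * kin / h**2 + V)
         + np.diag(np.full(npts - 1, -kin / h**2), 1)
         + np.diag(np.full(npts - 1, -kin / h**2), -1))
    E = np.linalg.eigvalsh(H)[:n_levels]
    return 2.0 * MB + E                       # no additive constant in the potential
```

With the parameters of \eqref{parameters} this yields masses close to the $M_{Cor}$ column of Table~\ref{Tabmassbbbar}: near $9459$, $10012$, $10342$ MeV for the $1S$, $2S$, $3S$ levels and near $9920$, $10259$ MeV for $1P$, $2P$.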
In the rest frame of the decaying meson $I$ the total width is given by (we follow the PDG conventions, see \cite[p.~567]{PDG18}) \begin{equation} \Gamma_{I\rightarrow\gamma F}=\frac{k_{0}}{8\pi M_{I}^{2}}\frac{1}{\left(2J_{I}+1\right) }\sum_{\lambda=\pm1}\sum_{m_{I},m_{F}}\left\vert\mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda}\right\vert ^{2} \label{width} \end{equation} where $k_{0}$ is the energy of the photon and $M_{I},$ $J_{I}$ and $m_{I}$ stand for the mass of $I$, its total angular momentum and its third projection respectively$.$ The polarization of the photon is represented by $\lambda$ (as usual we choose the three-momentum of the photon in the $Z$ direction) and the transition matrix element by $\mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda}$. This matrix element can be obtained from the interaction hamiltonian $\mathcal{H}_{int}$ as \begin{multline} \left( 2\pi\right) ^{3}\delta^{(3)}\left( \vb*{P}_{I}-\vb*{k}-\vb*{P}_{F}\right) \mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda}=\\ \sqrt{2M_{I}}\sqrt{2E_{F}}\sqrt{2k_{0}}\left\langle F\gamma\right\vert \mathcal{H}_{int}\left\vert I\right\rangle \label{amplitude1} \end{multline} where $\left( E_{I},\vb*{P}_{I}\right) =\left(M_{I},\vb*{0}\right) ,$ $\left( E_{F},\vb*{P}_{F}\right) $ and $\left( k_{0},\vb*{k}\right)$ are the meson and photon four-momenta. \bigskip In the Elementary Emission Decay Model the radiative transition $I\rightarrow\gamma F$ takes place through the emission of the photon by the quark or the antiquark of $I$. 
By proceeding to a nonrelativistic reduction of the interaction hamiltonian at the quark level (we use the radiation gauge so that the time component of the electromagnetic field vanishes, $A^{0}\left(\vb*{x}\right) =0$) the operator to be sandwiched between the meson states reads, see for example \cite{LeY88}, \begin{multline} \left\langle \vb*{k},\lambda\right\vert \mathcal{H}_{I}\left\vert 0\right\rangle =-\frac{1}{\sqrt{2k_{0}}}\sum_{\alpha=1,2}\frac{e_{\alpha}}{2M_{\alpha}}\\ \left( e^{-i\vb*{k}\vdot\vb*{r}_{\alpha}}\vb*{p}_{\alpha}+\vb*{p}_{\alpha}e^{-i\vb*{k}\vdot\vb*{r}_{\alpha}}-i\vb*{\sigma}_{\alpha}\times\vb*{k}e^{-i\vb*{k}\vdot\vb*{r}_{\alpha}}\right) \vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right)^{\ast} \label{transop} \end{multline} where the subscripts $1$ and $2$ refer to the quark $b$ and the antiquark $\overline{b}$ respectively, $e_{1}$ $\left( e_{2}\right) $ is the $b$ ($\overline{b}$) electric charge, $e_{b}=-\frac{1}{3}\left\vert e\right\vert$, $\vb*{\epsilon}_{\vb*{k}}^{\lambda}$ stands for the photon polarization vector and $\vb*{k}$ is now a numerical vector, not an operator.
Then, taking into account the quantum numbers characterizing the initial and final states \begin{equation} \left\vert I\right\rangle =\left\vert \vb*{P}_{I},J_{I},m_{I},\left( n_{I}L_{I}\right) _{b\overline{b}},\left( S_{I}\right)_{b\overline{b}}\right\rangle \label{IN} \end{equation} \begin{equation} \left\vert F\right\rangle =\left\vert \vb*{P}_{F},J_{F},m_{F},\left( n_{F}L_{F}\right) _{b\overline{b}},\left( S_{F}\right)_{b\overline{b}}\right\rangle \label{FIN} \end{equation} where $\left\vert \left( nL\right) _{b\overline{b}}\right\rangle $ stand for the $H_C$ eigenstates previously calculated, introducing center of mass \begin{equation} \vb*{R}=\frac{\vb*{r}_{1}+\vb*{r}_{2}}{2}, \quad\vb*{P}=\vb*{p}_{1}+\vb*{p}_{2} \end{equation} and relative \begin{equation} \vb*{r}=\vb*{r}_{1}-\vb*{r}_{2},\quad\vb*{p}=\frac{\vb*{p}_{1}-\vb*{p}_{2}}{2} \end{equation} operators, and integrating over $\vb*{R},$ the center of mass spatial degrees of freedom, the transition matrix element can be written as \begin{multline} \mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda} =\sqrt{2M_{I}}\sqrt{2E_{F}}\sum_{\alpha=1,2}\frac{e_{\alpha}}{2M_{\alpha}}\\ \left\langle J_{F},m_{F},\left( n_{F}L_{F}\right) _{b\overline{b}},\left( S_{F}\right) _{b\overline{b}}\right\vert \\ \overline{\mathcal{O}}_{\alpha}\left\vert J_{I},m_{I},\left( n_{I}L_{I}\right) _{b\overline{b}},\left( S_{I}\right) _{b\overline{b}}\right\rangle \label{M} \end{multline} where \begin{multline} \overline{\mathcal{O}}_{\alpha} =\left( \left( -1\right)^{\alpha}\left( e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\vb*{p}+\vb*{p}e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\right) \right.\\ +i\vb*{\sigma}_{\alpha}\times\vb*{k}e^{i\left(-1\right) ^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\\ \left.
-\left( \frac{\vb*{P}_{I}+\vb*{P}_{F}}{2}\right) e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\right)\vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right)^{\ast} \label{O} \end{multline} The first, second and third addends on the right hand side correspond to electric, magnetic and convective terms respectively since they come from the corresponding terms in the quark electromagnetic current entering in the interaction hamiltonian. \bigskip For practical calculations we use \begin{align} \left[ (\vb*{p})_q ,e^{i\left( -1\right)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right)}\right] &= \sum_{q'} \left[ (\vb*{p})_q, (\vb*{r})_{q'}\right] \frac{\partial e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }}{\partial (\vb*{r})_{q'}}\nonumber \\ &=(-1)^{\alpha}\frac{(\vb*{k})_q}{2}e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) } \label{CONM} \end{align} or equivalently \begin{equation} \vb*{p}e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vb*{r}}{2}\right) }=e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\vb* {p}+(-1)^{\alpha}\frac{\vb*{k}}{2}e^{i\left(-1\right) ^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}} {2}\right) } \label{EQUIV} \end{equation} Then, by realizing that in the rest frame of the decaying meson $\vb*{P}_{I}=\vb*{0}$ and $\vb*{P}_{F}=-\vb*{k}$, where $\vb*{k}$ is in the $Z$ direction, one has $\vb*{P}_{F}\vdot\left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right)^{\ast}=\vb*{0}=\vb*{k}\vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right)^{\ast}$, and the operator $\overline{\mathcal{O}}_{\alpha}$ reduces to \begin{equation} \mathcal{O}_{\alpha}=\left( e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\left( \left(-1\right) ^{\alpha}2\vb*{p}+i\vb*{\sigma}_{\alpha}\times\vb*{k}\right) \right) \vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast} \label{Ored} \end{equation} or equivalently to \begin{equation} \mathcal{O}_{\alpha}^{\prime}=\left( \left( \left(-1\right) 
^{\alpha}2\vb*{p}+i\vb*{\sigma}_{\alpha}\times\vb*{k}\right) e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\right) \vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast} \label{Oprimered} \end{equation} Detailed expressions for the direct calculation of electric and magnetic amplitudes in configuration space for ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ and ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ transitions can be found in Appendices \ref{SVII} and \ref{SVIII}. \bigskip It is important to emphasize that the $\vb*{p}$ operator in \eqref{Ored} or \eqref{Oprimered} makes the matrix element on the r.h.s. of \eqref{M} have a specific dependence on the $H_C$ eigenvalues for the initial and final states, see below. Indeed, the explicit extraction of this dependence will become essential for an accurate description of radiative decays. \section{Long Wave Length Approximation (LWLA) \label{SIV}} In the limit that the wavelength of the emitted photon is sufficiently large compared to the hadronic size scale of the process (we shall be more quantitative below) we can approximate $e^{i\left( -1\right)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right)}\simeq1$.
This simplifies the transition operator to \begin{equation} \left( \mathcal{O}_{\alpha}\right) _{LWLA}=\left( \left( -1\right)^{\alpha}2\vb*{p}+i\vb*{\sigma}_{\alpha}\times\vb*{k}\right) \vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast} =\left( \mathcal{O}_{\alpha}^{\prime}\right)_{LWLA} \label{OLWLA} \end{equation} Furthermore, using \begin{equation} \vb*{p}=-i\frac{M_{b}}{2}\left[ \vb*{r},H_{C}\right] \label{conm} \end{equation} we get \begin{multline} \left( \mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda}\right) _{LWLA}=\sqrt{2M_{I}}\sqrt{2E_{F}}\sum_{\alpha=1,2}\frac{e_{\alpha}}{2}\\ \left\langle J_{F},m_{F},\left( n_{F}L_{F}\right) _{b\overline{b}},\left(S_{F}\right) _{b\overline{b}}\right\vert \\ (-1)^{\alpha}\left( -i\right) \left( M_{I}-M_{F}\right)\vb*{r}+i\vb*{\sigma}_{\alpha}\times\vb*{k} \\ \left\vert J_{I},m_{I},\left( n_{I}L_{I}\right) _{b\overline{b}},\left(S_{I}\right) _{b\overline{b}}\right\rangle \vdot \left(\vb*{\epsilon}_{\vb*{k}}^{\lambda}\right)^{\ast} \label{LWLA} \end{multline} where we have substituted the difference between the $H_{C}$ eigenvalues for the initial and final states by their mass difference. Moreover, for values of $\left\vert \vb*{k}\right\vert =k_{0}$ such that $\frac{\vb*{k}^{2}}{2M_{F}}\ll M_{F}$ we can neglect the kinetic energy of the final meson and substitute $M_{I}-M_{F}\simeq k_{0}$ and $E_{F}\simeq M_{F}.$ \bigskip It is very important to remark that in the LWLA: \begin{enumerate}[i)] \item the amplitude does not depend explicitly on the quark mass; \item the mass dependence has been explicitly factored out. 
\end{enumerate} Therefore, if we implement the experimental masses in the calculation then the comparison of the calculated widths with data is directly testing the spectroscopic model wave functions (the underlying assumption justifying this procedure is that the difference between the calculated masses and the experimental ones can be obtained in most cases from these wave functions by applying first order perturbation theory). \bigskip For radiative transitions like ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ and ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ , with $J=0,1,2,$ the magnetic term does not contribute, as one can easily check from \eqref{B3} when $\abs{\vb*{k}}\abs{\vb*{r}}\to 0$. Thus, in the LWLA these transitions are purely electric dipole $E1$ transitions. More precisely, using $\vb*{r}\vdot\left(\vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}=\sqrt{\frac{4\pi}{3}}\left( Y_{1}^{\lambda}\left(\widehat{r}\right) \right) ^{\ast}r$ and some angular momentum algebra we can write the amplitude as \begin{widetext} \begin{multline} \left( \mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda}\right) _{LWLA} =i\sqrt{2M_{I}}\sqrt{2E_{F}}e_{b}\left(-1\right) ^{L_{I}}\sqrt{2L_{F}+1}C_{1,\text{ }J_{F},\text{ }J_{I}}^{\lambda,\text{ }m_{F},\text{ }m_{I}}\\ \left(\begin{array}{ccc} L_{F} & 1 & L_{I}\\ 0 & 0 & 0 \end{array}\right) \left[\begin{array}{ccc} 1 & L_{F} & L_{I}\\ S_{F} & J_{I} & J_{F} \end{array}\right] \left( M_{I}-M_{F}\right) \int_{0}^{\infty}\dd{r}\text{ }r^{2}\left( R_{n_{F}L_{F}}\right)^{\ast}rR_{n_{I}L_{I}} \label{ExpLWLA} \end{multline} \end{widetext} where $R_{n_{I}L_{I}}$ $\left( R_{n_{F}L_{F}}\right) $ is the radial wave function of the initial (final) state, \begin{equation} C_{1,\text{ }J_{F},\text{ }J_{I}}^{\lambda,\text{ }m_{F},\text{ }m_{I}}\equiv(-1)^{J_{F}-1-m_{I}}\sqrt{2J_{I}+1} \left(\begin{array}{ccc} 1 & J_{F} & J_{I}\\ \lambda & m_{F} & -m_{I} \end{array}\right) \label{coefnew} \end{equation} with $\left( {}\right) $ standing 
for the $3j$ symbol, and \begin{multline} \left[\begin{array}{ccc} 1 & L_{F} & L_{I}\\ S_{F} & J_{I} & J_{F} \end{array}\right] \equiv(-1)^{1+L_{F}+S_{F}+J_{I}} \\ \sqrt{\left(2L_{I}+1\right) \left( 2J_{F}+1\right) } \left\{\begin{array}{ccc} 1 & L_{F} & L_{I}\\ S_{F} & J_{I} & J_{F} \end{array}\right\} \label{newcoef} \end{multline} with $\left\{ {}\right\} $ standing for the $6j$ symbol. \bigskip From \eqref{ExpLWLA} and \eqref{width} and using $M_{I}-M_{F}\simeq k_{0}$, $e_{b}=-\frac{1}{3}\left\vert e\right\vert$ and $\left\vert e\right\vert ^{2}=4\pi\hat\alpha,$ where $\hat\alpha\simeq\frac{1}{137}$ is the fine structure constant, the LWLA width reads \begin{multline} \Gamma_{LWLA}=\frac{4\hat\alpha k_{0}^{3}E_{F}}{27M_{I}}\left( 2L_{F}+1\right) \left(\begin{array}{ccc} L_{F} & 1 & L_{I}\\ 0 & 0 & 0 \end{array}\right)^{2} \\ \left( 2L_{I}+1\right) \left( 2J_{F}+1\right) \left\{\begin{array}{ccc} 1 & L_{F} & L_{I}\\ 1 & J_{I} & J_{F} \end{array}\right\} ^{2} \\ \left\vert \int_{0}^{\infty}\dd{r}\text{ }r^{2}\left( R_{n_{F}L_{F}}\right) ^{\ast}rR_{n_{I}L_{I}}\right\vert ^{2} \label{widthLWLA} \end{multline} which is just the standard expression for the dipole electric amplitude in the literature, see for instance \cite{Eic08}, if one takes into account that \begin{equation} \left( 2L_{F}+1\right) \left(\begin{array}{ccc} L_{F} & 1 & L_{I}\\ 0 & 0 & 0 \end{array}\right) ^{2} \left( 2L_{I}+1\right) =\max\left( L_{I},L_{F}\right) \label{ident} \end{equation} For the practical application of \eqref{ExpLWLA} the range of validity of the LWLA needs to be established. 
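Before addressing that, the angular identity \eqref{ident} can be checked exactly for all dipole-allowed pairs $\left( L_I, L_F = L_I \pm 1 \right)$. The sketch below uses only the standard library, with the well-known closed (Racah) form of the $\left( l_1\, l_2\, l_3;\, 0\, 0\, 0 \right)$ symbol and exact rational arithmetic:

```python
from math import factorial
from fractions import Fraction

def threej000_sq(l1, l2, l3):
    """Exact square of the Wigner 3j symbol (l1 l2 l3; 0 0 0), from the
    standard Racah closed form (zero unless l1+l2+l3 is even and the
    triangle inequality holds)."""
    L = l1 + l2 + l3
    if L % 2 or not abs(l1 - l2) <= l3 <= l1 + l2:
        return Fraction(0)
    g = L // 2
    delta = Fraction(
        factorial(L - 2 * l1) * factorial(L - 2 * l2) * factorial(L - 2 * l3),
        factorial(L + 1))
    return delta * Fraction(
        factorial(g),
        factorial(g - l1) * factorial(g - l2) * factorial(g - l3)) ** 2

def angular_factor(L_I, L_F):
    # (2 L_F + 1) (L_F 1 L_I; 0 0 0)^2 (2 L_I + 1), cf. Eq. (ident)
    return (2 * L_F + 1) * threej000_sq(L_F, 1, L_I) * (2 * L_I + 1)
```

For every $L_I$ up to 5 and $L_F = L_I \pm 1$ the factor indeed equals $\max\left( L_I, L_F \right)$.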
For this purpose we may reason that for values of $\left\vert \vb*{r}\right\vert \geq 2\left\langle r^{2}\right\rangle ^{1/2}_F$, where $2\left\langle r^{2}\right\rangle ^{1/2}_F$ approximates the size of the final state (notice that the size of the initial state is always bigger), the radial wave function of this state almost vanishes, giving a negligible contribution to the matrix element \eqref{ExpLWLA}. Hence for values of $\left\vert\vb*{k}\right\vert $ such that $\left\vert \vb*{k}\right\vert 2\left\langle r^{2}\right\rangle ^{1/2}_{F}<1$ we expect that the values of $\left\vert \vb*{r}\right\vert$ contributing dominantly to the matrix element satisfy $\left\vert\vb*{k}\right\vert \left\vert \vb*{r}\right\vert<\frac{1}{2}\Rightarrow e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\simeq1$. Therefore we may adopt \begin{equation} \left\vert \vb*{k}\right\vert 2\left\langle r^{2}\right\rangle ^{1/2}_{F}<1 \label{criterium} \end{equation} as a criterion for the validity of the LWLA. In Table~\ref{TabkvaluesSP} and Table~\ref{TabkvaluesPS} we list the experimental values, $\left\vert \vb*{k}\right\vert _{Exp}=\left(\frac{M_{I}^{2}-M_{F}^{2}}{2M_{I}}\right) _{Exp},$ and the calculated values of $\left\vert \vb*{k}\right\vert _{Exp} 2\left\langle r^{2}\right\rangle ^{1/2}_{F}$ from our spectroscopic model for ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ and ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ transitions.
\begin{table} \centering \begin{tabular}{ccc} \toprule ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ & $\left\vert \vb* {k}\right\vert _{Exp}\text{(MeV)}$ & $\left\vert \vb*{k}\right\vert _{Exp}\left( 2\left\langle r^{2}\right\rangle ^{\frac{1}{2}}\right) _{^{3}\!P_{J}}$\\ \midrule $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $162.2$ & $0.67$\\ $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $129.4$ & $0.54$\\ $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $110.2$ & $0.46$\\ \midrule $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $122.3$ & $0.83$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $99.5$ & $0.68$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $86.6$ & $0.59$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $484.1$ & $2.01$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $451.7$ & $1.88$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $433.5$ & $1.80$\\ \midrule $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{1}}\left( 3P\right) $ & $65.8$ & $0.59$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{2}}\left( 3P\right) $ & $55.3$ & $0.47$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $341.2$ & $2.32$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $319.0$ & $2.17$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $306.2$ & $2.08$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $695.5$ & $2.89$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $664.3$ & $2.76$\\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $646.2$ & $2.69$\\ \midrule $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{1}}\left( 3P\right) $ & 
$370.0$ & $3.68$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{2}}\left( 3P\right) $ & $359.8$ & $3.57$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $637.6$ & $4.32$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $616.0$ & $4.18$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $603.5$ & $4.10$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $981.7$ & $4.08$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $951.5$ & $3.95$\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $933.8$ & $3.88$\\ \bottomrule \end{tabular} \caption{\label{TabkvaluesSP}Experimental values of the photon energy $\left\vert \vb*{k}\right\vert _{Exp}$ and calculated values of $\left\vert \vb*{k}\right\vert _{Exp}( 2\left\langle r^{2}\right\rangle ^{\frac{1}{2}}) _{^{3}\!P_{J}}$ from our model for ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ radiative transitions.} \end{table} \begin{table} \centering \begin{tabular}{ccc} \toprule ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ & $\left\vert \vb* {k}\right\vert _{Exp}\text{(MeV)}$ & $\left\vert \vb*{k}\right\vert _{Exp}\left( 2\left\langle r^{2}\right\rangle ^{\frac{1}{2}}\right) _{^{3}\!S_{1}}$\\ \midrule $\chi_{b_{0}}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $390.9$ & $0.87$\\ $\chi_{b_{1}}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $423.5$ & $0.94$\\ $\chi_{b_{2}}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $441.7$ & $0.98$\\ \midrule $\chi_{b_{0}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $206.9$ & $1.07$\\ $\chi_{b_{1}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $229.4$ & $1.19$\\ $\chi_{b_{2}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $243.1$ & $1.26$\\ \midrule $\chi_{b_{0}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & 
$742.9$ & $1.66$\\ $\chi_{b_{1}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $764.2$ & $1.70$\\ $\chi_{b_{2}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $777.1$ & $1.73$\\ \midrule $\chi_{b_{1}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 3S\right) $ & $157.0$ & $1.19$\\ $\chi_{b_{2}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 3S\right) $ & $167.5$ & $1.27$\\ \midrule $\chi_{b_{1}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $478.7$ & $2.47$\\ $\chi_{b_{2}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $488.8$ & $2.53$\\ \midrule $\chi_{b_{1}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $1000.4$ & $2.23$\\ $\chi_{b_{2}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $1009.9$ & $2.25$\\ \bottomrule \end{tabular} \caption{\label{TabkvaluesPS}Experimental values of the photon energy $\left\vert \vb*{k}\right\vert _{Exp}$ and calculated values of $\left\vert \vb*{k}\right\vert _{Exp}\left( 2\left\langle r^{2}\right\rangle ^{\frac{1}{2}}\right) _{^{3}\!S_{1}}$ from our model for ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ radiative transitions.} \end{table} We see that, according to our criterion, the LWLA can only be valid for $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_J}\left( 1P\right)$, $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_J}\left( 2P\right)$, $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_J}\left( 3P\right) $ and $\chi_{b_J}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $. 
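The entries of Tables~\ref{TabkvaluesSP} and \ref{TabkvaluesPS} follow directly from two-body kinematics: $\left\vert \vb*{k}\right\vert _{Exp}=\left( M_{I}^{2}-M_{F}^{2}\right) /2M_{I}$ with the PDG masses, and the criterion variable is made dimensionless by dividing by $\hbar c$. A short sketch (masses in MeV and radii in fm, taken from the tables above):

```python
HBARC = 197.327  # MeV fm

def photon_energy(m_i, m_f):
    """Recoil-corrected photon energy |k| = (M_I^2 - M_F^2) / (2 M_I), in MeV."""
    return (m_i ** 2 - m_f ** 2) / (2.0 * m_i)

def lwla_parameter(k, rms_final):
    """Dimensionless LWLA criterion variable |k| * 2 <r^2>^{1/2}_F / (hbar c)."""
    return k * 2.0 * rms_final / HBARC

# Upsilon(2S) -> gamma chi_b0(1P): PDG masses, <r^2>^{1/2}_{1P} = 0.41 fm
k = photon_energy(10023.026, 9859.44)   # about 162 MeV
x = lwla_parameter(k, 0.41)             # about 0.67 < 1: LWLA applicable
```

The same two functions reproduce, within rounding, every row of both tables, including the cases with criterion variable well above 1 for which the LWLA fails.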
As a test we can compare the widths $\Gamma_{LWLA}^{\left( The\right) }$ obtained from \eqref{widthLWLA}, where the superscript $(The)$ indicates that they are obtained from the calculated spectral masses (and the calculated $\left\vert \vb*{k}\right\vert _{The}$ from them), with the corresponding widths $\Gamma_{p/M}^{\left( The\right) }$ obtained from \eqref{A4}, \eqref{A11} and \eqref{B3}, when the complete operator $\mathcal{O}_{\alpha}$ in \eqref{Ored} (for ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$) and $\mathcal{O}_{\alpha}^{\prime}$ in \eqref{Oprimered} (for ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$) are used. The results are shown in Table~\ref{TabwidthsLWLA}, first and fifth columns respectively. \begin{table*} \centering \begin{tabular}{cccccc} \toprule Radiative Decay & $ \begin{array}{c} \Gamma_{LWLA}^{\left( The\right) }\\ \text{keV} \end{array} $ & $ \begin{array}{c} \Gamma_{LWLA}^{\left( The-Exp\right) }\\ \text{keV} \end{array} $ & $ \begin{array}{c} \Gamma_{Exp}^{PDG}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \Gamma_{p/M}^{\left( Mixed\right) }\\ \text{keV} \end{array} $ & $ \begin{array}{c} \Gamma_{p/M}^{\left( The\right) }\\ \text{keV} \end{array} $\\ \midrule $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $0.30$ & $1.61$ & $1.2\pm0.3$ & $0.5$ & $0.29$\\ $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $0.89$ & $2.46$ & $2.2\pm0.3$ & $1.28$ & $0.91$\\ $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $1.48$ & $2.55$ & $2.3\pm0.3$ & $1.82$ & $1.51$\\ \midrule $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $0.54$ & $1.72$ & $1.14\pm0.20$ & $0.77$ & $0.54$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $1.63$ & $2.78$ & $2.6\pm0.5$ & $1.99$ & $1.66$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $2.72$ & $3.03$ & $2.7\pm0.5$ & $2.87$ & $2.77$\\ \midrule
$\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{0}}\left( 3P\right) $ & $0.75$ & $1.06$ & & $0.83$ & $0.74$ \\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{1}}\left( 3P\right) $ & $2.24$ & $1.47$ & & $1.97$ & $2.27$ \\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{2}}\left( 3P\right) $ & $3.74$ & $1.37$ & & $2.70$ & $3.79$ \\ \midrule $\chi_{b_{0}}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $34.58$ & $22.82$ & & $33.23$ & $38.58$\\ $\chi_{b_{1}}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $34.58$ & $28.81$ & & $32.05$ & $33.71$\\ $\chi_{b_{2}}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $34.58$ & $32.71$ & & $33.21$ & $33.73$\\ \bottomrule \end{tabular} \caption{\label{TabwidthsLWLA}Calculated widths to order $p/M$ as compared to data for $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_J}\left( 1P\right)$, $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_J}\left( 2P\right)$, $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_J}\left( 3P\right) $ and $\chi_{b_J}\left( 1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $. Our educated guess for the unknown $\chi_{b0}\left( 3P\right) $ mass has been $10492$ MeV. Notation as follows. $\Gamma_{LWLA}^{\left( The\right) }$: width in the LWLA without any external input. $\Gamma_{LWLA}^{\left(The-Exp\right) }$: width in the LWLA implemented with the experimental masses and photon energy. $\Gamma_{Exp}^{PDG}:$ measured widths \cite{PDG18}. $\Gamma_{p/M}^{\left( Mixed\right) }:$ width with the experimental photon energy and partially implemented experimental masses. $\Gamma_{p/M}^{\left(The\right) }$: width without any external input. } \end{table*} The similarity of the calculated widths, $\Gamma_{LWLA}^{\left( The\right) }$ and $\Gamma_{p/M}^{\left(The\right) }$, for the considered processes confirms the validity of the LWLA to within a few percent.
On the other hand, their comparison to data, $\Gamma_{Exp}^{PDG}$, third column in the table, makes clear that $\Gamma_{LWLA}^{\left( The\right) }$ and $\Gamma_{p/M}^{\left(The\right) }$ are far from the experimental widths except for $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b2}\left( 2P\right)$. Noting that this is the only transition for which the calculated spectral mass difference, $M_{3S}-M_{2P}=83$ MeV, is very close to the measured one, $M_{\Upsilon\left( 3S\right) }-M_{\chi_{b2}\left(2P\right) }=86$ MeV, we may implement the experimental photon energy $\left\vert \vb*{k}\right\vert _{Exp}$ and the measured mass differences, instead of the spectral ones, for all the transitions to check whether any improvement is achieved. This implementation is straightforward in the LWLA since the mass dependence in the amplitude is explicitly factorized. A look at the table shows that the resulting widths, denoted $\Gamma_{LWLA}^{\left( The-Exp\right) }$, second column in the table, lie within the experimental error intervals except for $\Upsilon\left(3S\right) \rightarrow\gamma\chi_{b0}\left( 2P\right) $ and $\Upsilon\left(2S\right) \rightarrow\gamma\chi_{b0}\left( 1P\right) $, where they are less than $30\%$ and $10\%$ off, respectively. Keeping always in mind that higher $\frac{\left\vert \vb*{p}_{b}\right\vert }{M_{b}}$ orders might play some role, these deviations may indicate some deficiency in the calculated $^{3}\!P_{0}$ wave functions. Actually we could have expected this, since our model fits the $^{3}\!P_{1,2}$ masses much better than the $^{3}\!P_{0}$ ones. This means that one should go beyond first order perturbation theory, which shifts the mass but leaves the wave function unaltered, to get an accurate description of the $^{3}\!P_{0}$ states from our model.
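Since the mass dependence factorizes in the LWLA, the dominant effect of swapping the calculated photon energy for the measured one is the cubic phase-space rescaling of the width; the full implementation in the text also replaces the explicit meson-mass factors, which the following minimal sketch deliberately omits:

```python
# Illustrative only: in the LWLA the width scales as k^3 times factorized mass
# factors, so replacing the calculated photon energy k_the by the measured
# k_exp rescales the width by (k_exp / k_the)^3 to leading approximation.
# (The complete implementation also substitutes the explicit meson masses.)

def rescale_width(gamma_the, k_the, k_exp):
    """Rescale a LWLA width computed with photon energy k_the to k_exp (same units)."""
    return gamma_the * (k_exp / k_the) ** 3
```

The cubic dependence makes the width very sensitive to the photon energy: doubling $k$ multiplies the width by eight, so even modest mass-spectrum inaccuracies translate into large width errors.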
This can be confirmed by artificially allowing the parameters to take slightly different values for the $^{3}\!P_{0}$ states only, in order to fit their masses. Thus, for instance, taking $\left( \sigma\right) _{^{3}\!P_{0}}=875$ MeV fm$^{-1}$ and $\left( \zeta\right) _{^{3}\!P_{0}}=120$ MeV fm, the calculated masses and widths are much closer to data. Hence, we may conclude that the implementation of the experimental masses is an essential ingredient in the explanation of radiative decays. Following this line of argument we may be confident in our predictions $\Gamma_{LWLA}^{\left( The-Exp\right) }$, second column in the table, for $\chi_{b1}\left( 1P\right) \rightarrow \gamma\Upsilon\left( 1S\right) $ and $\chi_{b2}\left( 1P\right)\rightarrow\gamma\Upsilon\left( 1S\right) $, whereas for $\chi_{b0}\left(1P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ we expect at most a $30\%$ uncertainty. For the sake of comparison let us add that our predicted values are in fair agreement (within a $25\%$ difference) with the ones obtained from other nonrelativistic spectroscopic quark models and from potential nonrelativistic QCD, see Table II in reference \cite{Vai19}. For an alternative theoretical treatment of these decays, see \cite{DeF08}. Regarding $\Upsilon\left(4S\right) \rightarrow\gamma\chi_{b_{J}}\left( 3P\right) $, we expect our predicted widths $\Gamma_{LWLA}^{\left( The-Exp\right) }$, second column in the table, to be accurate for $\Upsilon\left(4S\right) \rightarrow\gamma\chi_{b1}\left( 3P\right) $ and $\Upsilon\left(4S\right) \rightarrow\gamma\chi_{b2}\left( 3P\right) $, and more uncertain for $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b0}\left( 3P\right) $, since the latter is based on an educated guess for the $\chi_{b0}\left( 3P\right) $ mass.
\bigskip It is also interesting, for the sake of completeness, to calculate from \eqref{A4}, \eqref{A11} and \eqref{B3} the widths when $\left\vert \vb*{k}\right\vert _{Exp}$ and the experimental masses, when explicitly appearing, are implemented. This means that the measured masses are used in the explicit energy factors entering the calculation of the width (see \eqref{width} and \eqref{M}) but not in the evaluation of the matrix element in \eqref{M}, which still depends implicitly on the calculated spectral masses (this will be detailed in the next section). We call these widths $\Gamma_{p/M}^{\left( Mixed\right) }$, fourth column in the table. An inspection of the table makes clear that, except for $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b2}\left( 2P\right) $, where, as explained above, the spectral and experimental mass differences coincide, the widths $\Gamma_{p/M}^{\left( Mixed\right) }$ lie outside the experimental error intervals. This points to the need to make all the mass dependencies in the transition amplitude explicit for their correct experimental implementation if we aim at an accurate description of the decays beyond the LWLA regime. \bigskip Therefore, we have shown that: \begin{enumerate}[i)] \item In its range of validity the LWLA, which allows for an easy separation of the mass and wave function dependencies in the transition amplitude, is the most suitable method to give an accurate account of the radiative transitions in bottomonium. This is so even for the simplest spectroscopic model, once the experimental masses are properly implemented. \item The description of radiative decays outside the range of validity of the LWLA requires the explicit factorization of all the mass dependencies in the transition amplitude for its correct experimental implementation.
\end{enumerate} \section{Beyond the long wave length approximation \label{SV}} The need to go beyond the LWLA has led in the past to the evaluation of some corrections to $\left( \mathcal{O}_{\alpha}\right) _{LWLA}$, see for example \cite{Eic05}. Perhaps the most common form of the corrected operator is the one where the LWLA mass dependence in the amplitude is preserved while the overlap integral $\int_{0}^{\infty}\dd{r}r^{2}\pqty{R_{n_F L_F}(r)}^*r R_{n_I L_I}(r)$ in \eqref{widthLWLA} is replaced by the corrected one (henceforth we shall use $k\equiv\left\vert \vb*{k}\right\vert $). \begin{multline} \int_{0}^{\infty}\dd{r}r^{2} \pqty{R_{n_F L_F}(r)}^*\\ \frac{3}{k}\left[ \frac{kr}{2}j_{0}\left( \frac{kr}{2}\right) -j_{1}\left( \frac{kr}{2}\right) \right]R_{n_I L_I}(r) \label{ILWLA} \end{multline} where $j_{0}$ and $j_{1}$ stand for spherical Bessel functions. However, this prescription, which reproduces the good description of transitions within the range of applicability of the LWLA when the experimental masses and photon energy are implemented, does not seem to work for $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_J}\left( 1P\right)$, where the LWLA is not valid. In Table~\ref{Tab3s} we show the results from this prescription, $\Gamma_{CLWLA}$, where the subscript CLWLA stands for Corrected Long Wave Length Approximation, for three different nonrelativistic quark models (NRQM), all of them fitting the spectrum reasonably well. Model I is just our model, where the experimental masses and $k_{Exp}$ have been used in the calculation of the widths. Model II is another Cornell potential model with a different set of parameter values ($\sigma_{II}=894.66$ MeV/fm, $\zeta_{II}=102.61$ MeV fm and $\left( M_{b}\right) _{II}=5180$ MeV apart from a constant fixing the origin of the potential) chosen to get a reasonable fit to the mass centers of gravity of the $1S,$ $1P$ and $2S$ states \cite{Eic94}.
Model III, see \cite{Seg16} and references therein, contains many more terms in the potential apart from the Cornell ones (spin-spin, spin-orbit, tensor\dots), aiming at a unified description of the light and heavy quark meson spectra. (For the sake of completeness we also show results for the measured decays for which the LWLA may be applied.) \begin{table*} \centering \begin{tabular}{ccccc} \toprule Radiative Decay & $ \begin{array}{c} \left( \Gamma_{CLWLA}\right) _{I}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \left( \Gamma_{CLWLA}\right) _{II}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \left( \Gamma_{CLWLA}\right) _{III}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \Gamma_{Exp}^{PDG}\\ \text{keV} \end{array} $\\ \midrule $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $1\times10^{-7}$ & $0.001$ & $0.15$ & $0.054\pm0.013$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $0.004$ & $0.008$ & $0.16$ & $0.018\pm0.012$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $0.01$ & $0.015$ & $0.08$ & $0.20\pm0.06$\\ \midrule $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $1.67$ & $1.35$ & $1.21$ & $1.14\pm0.20$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $2.73$ & $2.20$ & $2.13$ & $2.6\pm0.5$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $3.02$ & $2.40$ & $2.56$ & $2.7\pm0.5$\\ \midrule $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $1.58$ & $1.29$ & $1.09$ & $1.2\pm0.3$\\ $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $2.43$ & $2.00$ & $1.84$ & $2.2\pm0.3$\\ $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $2.52$ & $2.04$ & $2.08$ & $2.3\pm0.3$\\ \bottomrule \end{tabular} \caption{\label{Tab3s}Calculated widths in the CLWLA as compared to data for $\Upsilon\left( 3S\right)
\rightarrow\gamma\chi_{b_J}\left( 1P, 2P\right) $ and $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_J}\left( 1P\right) $. Notation as follows. $\left( \Gamma_{CLWLA}\right) _{I}$: width from Model I defined in Sec.~\ref{SII} with the experimental masses and photon energy implemented. $\left( \Gamma_{CLWLA}\right) _{II}:$ width from Model II, see \cite{Eic08}. $\left( \Gamma_{CLWLA}\right) _{III}$: width from Model III, see \cite{Seg16}. $\Gamma_{Exp}^{PDG}$: measured widths \cite{PDG18}. } \end{table*} A glance at the table makes it evident that the calculated CLWLA widths are in good agreement with data for processes where the LWLA applies, like $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_J}\left( 2P\right)$ and $\Upsilon\left( 2S\right) \rightarrow\gamma\chi_{b_J}\left( 1P\right)$, but they are in complete disagreement for $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_J}\left( 1P\right) $, where the LWLA does not apply. Moreover, in this last case the widths predicted for the same decay by different models may differ greatly from one another. This points to an extreme sensitivity of the corrected overlap integral to the details of the wave functions. One could then think of using this sensitivity as a very stringent test of the wave functions. However, before pursuing this thought, and according to our discussion in Sec.~\ref{SIV}, one should check whether the mass and wave function separation assumed in the CLWLA can be taken for granted. Next we show that it cannot, and that the difficulties in the description of these decays may be surmounted through a proper factorization of the mass dependencies in the transition amplitude. For this purpose let us consider the matrix element entering the evaluation of the amplitude \eqref{M} (we may equivalently use $\mathcal{O}_{\alpha}$ or $\mathcal{O}_{\alpha}^{\prime}$).
By denoting \begin{equation} \left\vert \Psi\right\rangle \equiv\left\vert J,m,\left( nL\right)_{b\overline{b}},\left( S\right) _{b\overline{b}}\right\rangle \label{notation} \end{equation} we can write the amplitude as \begin{equation} \left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}\equiv\left\langle\Psi_{F}\right\vert \mathcal{O}_{\alpha}\left\vert \Psi_{I}\right\rangle=\left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{electric}+\left\langle\mathcal{O}_{\alpha}\right\rangle _{FI}^{magnetic} \label{oalfa} \end{equation} where \begin{equation} \left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{electric}=\left\langle\Psi_{F}\right\vert e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\left( -1\right)^{\alpha}2\vb*{p}\vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}\left\vert \Psi_{I}\right\rangle \label{oalfaelec} \end{equation} and \begin{equation} \left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{magnetic}=\left\langle\Psi_{F}\right\vert e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) } i\vb*{\sigma}_{\alpha}\times\vb*{k}\vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}\left\vert \Psi_{I}\right\rangle \label{oalfamag} \end{equation} In order to extract the mass dependence in $\left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{electric}$ we introduce a Parseval identity $\left(\sum_{int}\left\vert \Psi_{int}\right\rangle \left\langle \Psi_{int}\right\vert \right) $ in terms of eigenstates of the Cornell potential \begin{multline} \left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{electric}=\sum_{int}\left\langle \Psi_{F}\right\vert e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\left\vert \Psi_{int}\right\rangle \\ \left\langle \Psi_{int}\right\vert \left( -1\right) ^{\alpha}2\vb*{p}\vdot\left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}\left\vert \Psi_{I}\right\rangle \label{oalfaint} \end{multline} Then, substituting $\vb*{p}=-i\frac{M_{b}}{2}\left[\vb*{r},H_{C}\right] $ 
we are left with \begin{multline} \left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{electric}=-iM_{b}\sum_{int}\left\langle \Psi_{F}\right\vert e^{i(-1)^{\alpha }\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\left\vert\Psi_{int}\right\rangle \\ \left( M_{I}-M_{int}\right) \left\langle \Psi_{int}\right\vert (-1)^{\alpha}\vb* {r}\vdot\left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}\left\vert \Psi_{I}\right\rangle \label{oalfaelecfinal} \end{multline} so that the mass dependencies have been factored out. Notice also that the multiplicative quark mass factor in \eqref{oalfaelecfinal} cancels the same dividing factor in the amplitude \eqref{M}. Therefore, this form of the matrix element preserves the nice feature of separating explicitly the mass and wave function dependencies in the amplitude. Actually, it is trivial to check that for $e^{i\left(-1\right) ^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}} {2}\right) }\simeq1$ the LWLA is recovered, since then $\left\vert \Psi_{int}\right\rangle =\left\vert \Psi_{I}\right\rangle $ is the only surviving contribution. It should be remarked, though, that for $e^{i\left( -1\right)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\neq1$ the separation of the mass and wave function dependencies in the amplitude is completely different from the one assumed in the CLWLA. Hence the results obtained from the CLWLA beyond the LWLA regime should not be taken for granted. It has to be added that for $e^{i(-1)^{\alpha}\left(\frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\neq1$ there is also a magnetic contribution to the amplitude \eqref{M} which depends on $M_{b}$. Though this introduces an undesired additional model dependence, we shall see that for the transitions we are interested in this magnetic contribution has no significant effect on the calculated widths and can be neglected.
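The operator identity underlying this factorization, $\vb*{p}=-i\frac{M_{b}}{2}\left[\vb*{r},H_{C}\right]$, implies $\left\langle f\right\vert \vb*{p}\left\vert i\right\rangle =-i\frac{M_{b}}{2}\left( E_{i}-E_{f}\right) \left\langle f\right\vert \vb*{r}\left\vert i\right\rangle $ between eigenstates, and can be checked numerically in a discretized radial model. The grid setup and the parameter values in the following sketch are illustrative only, not our fitted ones:

```python
import numpy as np

# Check of p = -i (M_b/2) [r, H] for a discretized radial Hamiltonian:
# between eigenstates it implies <f|p|i> = -i (M_b/2) (E_i - E_f) <f|r|i>.
# Toy units and toy Cornell-like parameters; not the fitted model values.

M_b = 5.0                       # illustrative quark mass
mu = M_b / 2.0                  # reduced mass of the quark-antiquark pair
N, rmax = 800, 10.0
h = rmax / (N + 1)
r = h * np.arange(1, N + 1)     # radial grid with u(0) = u(rmax) = 0

V = 0.9 * r - 0.1 / r           # Cornell-like potential, toy sigma and zeta
H = (np.diag(1.0 / (mu * h**2) + V)
     - np.diag(np.ones(N - 1), 1) / (2.0 * mu * h**2)
     - np.diag(np.ones(N - 1), -1) / (2.0 * mu * h**2))
E, psi = np.linalg.eigh(H)

# momentum operator as a central difference: p = -i d/dr
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2.0 * h)
P = -1j * D1

i, f = 0, 1
lhs = psi[:, f] @ P @ psi[:, i]                               # <f|p|i>
rhs = -1j * mu * (E[i] - E[f]) * (psi[:, f] @ (r * psi[:, i]))
print(abs(lhs - rhs))           # agrees to machine precision
```

For these central-difference matrices the commutator relation holds exactly, so the two matrix elements coincide up to round-off regardless of the potential chosen.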
\bigskip For the sake of completeness and convenience for later calculations we also write the resulting expression for $\mathcal{O}_{\alpha}^{\prime}$: \begin{multline} \left\langle \mathcal{O}_{\alpha}^{\prime}\right\rangle _{FI}^{electric}= \\ -iM_{b}\sum_{int}\left( M_{int}-M_{F}\right) \left\langle \Psi_{F}\right\vert (-1)^{\alpha}\vb*{r}\vdot\left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}\left\vert \Psi_{int}\right\rangle \\ \left\langle \Psi_{int}\right\vert e^{i(-1)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\left\vert \Psi_{I}\right\rangle \label{oprimeelecfinal} \end{multline} \bigskip Expressions \eqref{oalfaelecfinal} and \eqref{oprimeelecfinal} tell us that a good description of a complete set of intermediate states, apart from the initial and final ones, is needed to accurately reproduce radiative decay widths from an imperfect spectroscopic quark model. In other words, radiative decays test the whole spectral model description. \subsection{${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ transitions} Let us apply \eqref{oalfaelecfinal} to the calculation of ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ transitions.
Working in configuration space and using $\vb*{r}\vdot\left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}=\sqrt{\frac{4\pi}{3}}\left( Y_{1}^{\lambda}\left( \widehat{r}\right) \right) ^{\ast}r$ and the well-known expansion \begin{equation} e^{i\left( -1\right)^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }=\sum_{l=0}^{\infty}\left( i\left( -1\right)^{\alpha}\right) ^{l}\sqrt{4\pi}\sqrt{2l+1}j_{l}\left(\frac{kr}{2}\right) Y_{l}^{0}\left( \widehat{r}\right) \end{equation} the electric part of the amplitude reads, with the same notation as in \eqref{ExpLWLA}, \begin{widetext} \begin{multline} \left( \mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda\text{ }\left(electric\right) }\right) ^{{^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}}=i\sqrt{2M_{I}}\sqrt{2E_{F}}\delta_{S_{I},S_{F}}e_{b}\sum_{l=0}^{\infty}\sum_{n_{int},L_{int},J_{int},m_{int}} (-1)^{l+L_{F}+L_{I}}\left( 4l+1\right) \left( 2L_{int}+1\right) \\ C_{2l,\text{ }J_{int},\text{ }J_{F}}^{0,\text{ }m_{int},\text{ }m_{F}}C_{1,\text{ }J_{int},\text{ }J_{I}}^{\lambda,\text{ }m_{int},\text{ }m_{I}} \left(\begin{array}{ccc} L_{int} & 2l & L_{F}\\ 0 & 0 & 0 \end{array}\right) \left(\begin{array}{ccc} L_{int} & 1 & L_{I}\\ 0 & 0 & 0 \end{array}\right) \left[\begin{array}{ccc} J_{F} & 2l & J_{int}\\ L_{int} & S_{F} & L_{F} \end{array}\right] \left[\begin{array}{ccc} J_{I} & 1 & J_{int}\\ L_{int} & S_{I} & L_{I} \end{array}\right] \\ \left( M_{I}-M_{int}\right) \left( \int_{0}^{\infty}\dd{r}\text{ }r^{2}\left( R_{n_{F}L_{F}}\right) ^{\ast}j_{2l}\left( \frac{kr}{2}\right) R_{n_{int}L_{int}}\right) \left( \int_{0}^{\infty}\dd{r}\text{ }r^{2}\left( R_{n_{int}L_{int}}\right) ^{\ast}rR_{n_{I}L_{I}}\right) \label{FinalSPelectric} \end{multline} \end{widetext} This is our master formula, which replaces \eqref{A4} and properly implements the mass dependencies in the amplitude.
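As a sanity check of the partial-wave expansion of the photon plane wave used above, the following sketch verifies numerically that $e^{ix\cos\theta}=\sum_{l}\left( 2l+1\right) i^{l}j_{l}\left( x\right) P_{l}\left( \cos\theta\right) $, with $x=kr/2$, converges after only a handful of terms for the $x=\mathcal{O}(1)$ values relevant here:

```python
import cmath
from scipy.special import spherical_jn, eval_legendre

# Partial-wave expansion of the plane wave: truncating at a small lmax already
# reproduces exp(i x cos(theta)) for x = k r / 2 of order one.

def plane_wave_partial_sum(x, costheta, lmax):
    """Sum_{l<=lmax} (2l+1) i^l j_l(x) P_l(costheta)."""
    return sum((2 * l + 1) * (1j ** l) * spherical_jn(l, x) * eval_legendre(l, costheta)
               for l in range(lmax + 1))

x, ct = 1.2, 0.3
exact = cmath.exp(1j * x * ct)
approx = plane_wave_partial_sum(x, ct, 8)
print(abs(approx - exact))      # already tiny at lmax = 8
```

The fast fall-off of $j_{l}(x)$ for $l$ beyond $x$ is what keeps the number of contributing partial waves small in the master formula.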
\bigskip Let us note that, although this formal expression contains a sum over a complete set of intermediate states, only a few contributions survive, namely those for which the $6j$ symbols are different from $0$. The underlying reason is that, due to the matrix element \begin{multline} \left\langle \Psi_{int}\right\vert \vb*{r}\vdot \left( \vb*{\epsilon}_{\vb*{k}}^{\lambda}\right) ^{\ast}\left\vert \Psi_{I}\left( ^{3}\!S_{1}\right) \right\rangle=\\ \left\langle \Psi_{int}\right\vert \sqrt{\frac{4\pi}{3}}\left(Y_{1}^{\lambda}\left( \widehat{r}\right) \right) ^{\ast}r\left\vert\Psi_{I}\left( ^{3}\!S_{1}\right) \right\rangle \end{multline} appearing in $\left\langle \mathcal{O}_{\alpha}\right\rangle _{FI}^{electric}$, only intermediate $^{3}\!P_{J_{int}}$ states with $J_{int}=0,1,2$ may give a nonvanishing contribution. Furthermore, from the exponential expansion we see that only the $l=0$ and $l=2$ partial waves contribute to the matrix element $\left\langle \Psi_{F}\left( ^{3}\!P_{J}\right) \right\vert e^{i\left( -1\right) ^{\alpha}\left( \frac{\vb*{k}\vdot\vb*{r}}{2}\right) }\left\vert \Psi_{int}\left( ^{3}\!P_{J_{int}}\right)\right\rangle$. \bigskip From \eqref{FinalSPelectric} for the electric part and \eqref{B3} for the magnetic one the widths are straightforwardly evaluated. In practice, the magnetic contribution hardly plays any role and the sum over intermediate states in the electric part does not require many terms to converge.
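These angular selection rules can be verified directly from the zero-projection $3j$ symbols in the master formula; the sketch below (a minimal check, with $L_{I}=0$, $L_{F}=1$ for an $S\rightarrow\gamma P$ transition) confirms that only $P$-wave intermediate states with the $l=0$ and $l=2$ partial waves survive:

```python
from sympy.physics.wigner import wigner_3j

# Zero-projection 3j symbols vanish unless the triangle condition holds and
# the sum of the angular momenta is even; for L_I = 0, L_F = 1 this singles
# out L_int = 1 (P-wave intermediates) and the partial waves lam = 0, 2.

def contributes(L_int, lam, L_I=0, L_F=1):
    """Nonzero angular weight: both zero-projection 3j symbols must survive."""
    return (wigner_3j(L_int, lam, L_F, 0, 0, 0) != 0
            and wigner_3j(L_int, 1, L_I, 0, 0, 0) != 0)

surviving = [(L_int, lam) for L_int in range(4) for lam in range(5)
             if contributes(L_int, lam)]
print(surviving)
```

The $6j$ symbols then further restrict $J_{int}$ to the values compatible with $L_{int}=1$ and $S=1$, i.e. $J_{int}=0,1,2$ as stated above.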
More precisely, for $\Upsilon\left( n_{I}S\right) \rightarrow\gamma\chi_{b_J}\left( n_{F}P\right) $ the consideration of $n_{int}P$ with $n_{int}\leq4$ ensures convergence to better than a $2\%$ error when $n_{I}\leq4$ (for the experimentally unmeasured $^{3}\!P_{0}\left( 3P\right) $ we have made an educated guess, taking it to be $20$ MeV lower than the measured $^{3}\!P_{1}\left( 3P\right) $ mass; for the not yet measured $^{3}\!P_{0,1,2}\left( 4P\right) $ resonances we have used the Cornell predicted states from our model; the Cornell wave functions have been used in all cases). For $n_{I}=5$ the same level of convergence requires including $n_{int}=5$ (for the not yet measured $^{3}\!P_{0,1,2}\left( 5P\right) $ resonances we have used the Cornell predicted states from our model). We call the calculated widths $\Gamma_{p/M}^{\left( The-Exp\right) }$, consistently with the notation used in Table~\ref{TabwidthsLWLA}. The results from Model I (our model) and Model II are compiled in Table~\ref{TabwidthsSPint}. Notice that \eqref{FinalSPelectric} cannot be consistently applied to Model III since its Hamiltonian $H$ contains a spin-orbit term, so that $\vb*{p}\neq-i\frac{M_{b}}{2}\left[ \vb*{r},H\right] $. Nonetheless, we have checked that for transitions where the calculated mass difference from Model III agrees with data, the results obtained from \eqref{A4} and \eqref{B3} are in good agreement with the ones in Table~\ref{TabwidthsSPint} from Models I and II.
\begin{table*} \centering \begin{tabular}{cccc} \toprule Radiative Decay & $ \begin{array}{c} \left( \Gamma_{p/M}^{\left( The-Exp\right) }\right) _{I}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \left( \Gamma_{p/M}^{\left( The-Exp\right) }\right) _{II}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \Gamma_{Exp}^{PDG}\\ \text{keV} \end{array} $\\ \midrule $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $0.08$ & $0.09$ & $0.054\pm0.013$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $0.21$ & $0.23$ & $0.018\pm0.012$\\ $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $0.34$ & $0.34$ & $0.20\pm0.06$\\ \midrule $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $0.05$ & $0.05$ & \\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $0.12$ & $0.13$ & \\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $0.20$ & $0.21$ & \\ \midrule $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $0.05$ & $0.04$ & \\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $0.11$ & $0.12$ & \\ $\Upsilon\left( 4S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $0.17$ & $0.17$ & \\ \midrule $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{0}}\left( 3P\right) $ & $0.08$ & $0.08$ & \\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{1}}\left( 3P\right) $ & $0.22$ & $0.22$ & \\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{2}}\left( 3P\right) $ & $0.33$ & $0.34$ & \\ \midrule $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{0}}\left( 2P\right) $ & $0.05$ & $0.07$ & \\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{1}}\left( 2P\right) $ & $0.15$ & $0.17$ & \\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{2}}\left( 2P\right) $ & $0.21$ & $0.25$ & \\ \midrule $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{0}}\left( 1P\right) $ & $0.04$ & $0.04$ &
\\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{1}}\left( 1P\right) $ & $0.09$ & $0.10$ & \\ $\Upsilon\left( 5S\right) \rightarrow\gamma\chi_{b_{2}}\left( 1P\right) $ & $0.13$ & $0.14$ &\\ \bottomrule \end{tabular} \caption{\label{TabwidthsSPint}Calculated ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ widths to order $p/M$ implemented with the experimental masses and photon energy: $\Gamma_{p/M}^{\left( The-Exp\right) }$. The widths are evaluated with Models I and II and compared to data when available \cite{PDG18}. Our educated guess for the unknown $\chi_{b0}\left( 3P\right) $ mass has been $10492$ MeV.} \end{table*} As can be checked, the improvement with respect to the CLWLA is enormous. The extreme sensitivity of the results to the wave function used has disappeared, and the widths obtained for $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b0}\left(1P\right) $ and $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b2}\left(1P\right) $ are much closer to data, now lying only about $25\%$ outside the experimental intervals. This modest disagreement can be attributed in our model to the lack of an accurate wave function description for the $^{3}\!P_{0}$ states (notice that they always enter as intermediate states in the calculation of the widths). As for $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b1}\left(1P\right) $, our calculated width is one order of magnitude larger than the current data. Moreover, within our Cornell potential model framework the calculated value necessarily lies between the calculated $\Upsilon\left(3S\right) \rightarrow\gamma\chi_{b0}\left( 1P\right) $ and $\Upsilon\left(3S\right) \rightarrow\gamma\chi_{b2}\left( 1P\right) $ widths. This is again in contrast with data.
Indeed, the experimental situation is rather anomalous compared to any other $\Upsilon\left( n_{I}S\right)\rightarrow\gamma\chi_{b_J}\left( n_{F}P\right) $ case, where the measured $\Upsilon\left( n_{I}S\right) \rightarrow\gamma\chi_{b1}\left(n_{F}P\right) $ width always lies between those for $\Upsilon\left( n_{I}S\right) \rightarrow\gamma\chi_{b0}\left(n_{F}P\right) $ and $\Upsilon\left( n_{I}S\right) \rightarrow\gamma\chi_{b2}\left( n_{F}P\right)$. Moreover, the experimental relative error in the measurement of the $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b1}\left( 1P\right) $ width is much larger than for $\Upsilon\left(3S\right) \rightarrow\gamma\chi_{b0}\left( 1P\right) $ and $\Upsilon\left(3S\right) \rightarrow\gamma\chi_{b2}\left( 1P\right) $. It would therefore be very important, in our opinion, to refine as much as possible the measurement of the $\Upsilon\left( 3S\right) \rightarrow\gamma\chi_{b1}\left(1P\right) $ width in order to solve this puzzle. In the meantime we believe our predictions for not yet measured ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ decays, also listed in Table~\ref{TabwidthsSPint}, may be taken as reasonable within a $25\%$ uncertainty. \subsection{${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ transitions} For ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ transitions we proceed in the same manner, but using for convenience \eqref{oprimeelecfinal} instead of \eqref{oalfaelecfinal}.
The final expression for the electric part of the amplitude is now \begin{widetext} \begin{multline} \left( \mathcal{M}_{J_{F},m_{F},J_{I},m_{I}}^{\lambda\text{ }\left(electric\right) }\right) ^{{^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}} =i\sqrt{2M_{I}}\sqrt{2E_{F}}\delta_{S_{I},S_{F}}e_{b}\sum_{l=0}^{\infty}\sum_{n_{int},L_{int},J_{int},m_{int}} (-1)^{l}\left(4l+1\right) \sqrt{\left( 2L_{I}+1\right) \left( 2L_{F}+1\right) } \\ C_{2l,\text{ }J_{I},\text{ }J_{int}}^{0,\text{ }m_{I},\text{ }m_{int}}C_{1,\text{ }J_{F},\text{ }J_{int}}^{\lambda,\text{ }m_{F},\text{ }m_{int}} \left(\begin{array}{ccc} L_{I} & 2l & L_{int}\\ 0 & 0 & 0 \end{array}\right) \left(\begin{array}{ccc} L_{F} & 1 & L_{int}\\ 0 & 0 & 0 \end{array}\right) \left[\begin{array}{ccc} J_{int} & 2l & J_{I}\\ L_{I} & S_{I} & L_{int} \end{array}\right] \left[\begin{array}{ccc} J_{int} & 1 & J_{F}\\ L_{F} & S_{F} & L_{int} \end{array}\right]\\ \left( M_{int}-M_{F}\right) \left( \int_{0}^{\infty}\dd{r}\text{ }r^{2}\left( R_{n_{F}L_{F}}\right) ^{\ast}rR_{n_{int}L_{int}}\right) \left( \int_{0}^{\infty}\dd{r}\text{ }r^{2}\left( R_{n_{int}L_{int}}\right) ^{\ast}j_{2l}\left(\frac{kr}{2}\right) R_{n_{I}L_{I}}\right) \label{FinalPSelectric} \end{multline} \end{widetext} This is our master formula, which replaces \eqref{A11} and properly implements the mass dependencies in the amplitude. \bigskip From \eqref{FinalPSelectric} for the electric part and \eqref{B3} for the magnetic one we can predict the widths for not yet measured processes. Our results are shown in Table~\ref{TabwidthsPSint}. Regarding convergence, we have used $n_{int}\leq5$ in all cases to ensure convergence at the level of a $2\%$ error.
For the experimentally unmeasured $^{3}\!P_{0}\left( 3P\right) $ we have made an educated guess, taking it to be $20$ MeV lower than the measured $^{3}\!P_{1}\left( 3P\right) $ mass; for the not yet measured $^{3}\!P_{0,1,2}\left(4P,5P\right) $ resonances we have used the Cornell predicted states from our model; the Cornell wave functions have been used in all cases. \begin{table} \centering \begin{tabular}{ccc} \toprule Radiative Decay & $ \begin{array}{c} \left( \Gamma_{p/M}^{\left( The-Exp\right) }\right) _{I}\\ \text{keV} \end{array} $ & $ \begin{array}{c} \left( \Gamma_{p/M}^{\left( The-Exp\right) }\right) _{II}\\ \text{keV} \end{array} $\\ \midrule $\chi_{b_{0}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $10.08$ & $8.70$\\ $\chi_{b_{1}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $14.06$ & $11.99$\\ $\chi_{b_{2}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $17.07$ & $14.70$\\ \midrule $\chi_{b_{0}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $9.95$ & $9.08$\\ $\chi_{b_{1}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $11.83$ & $10.77$\\ $\chi_{b_{2}}\left( 2P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $14.76$ & $13.22$\\ \midrule $\chi_{b_{0}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 3S\right) $ & $5.23$ & $4.81$\\ $\chi_{b_{1}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 3S\right) $ & $8.33$ & $7.25$\\ $\chi_{b_{2}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 3S\right) $ & $10.63$ & $9.24$\\ \midrule $\chi_{b_{0}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $3.99$ & $3.69$\\ $\chi_{b_{1}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $4.82$ & $4.42$\\ $\chi_{b_{2}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 2S\right) $ & $5.80$ & $5.27$\\ \midrule $\chi_{b_{0}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $5.51$ & $5.31$\\ $\chi_{b_{1}}\left( 3P\right) \rightarrow\gamma\Upsilon\left(
1S\right) $ & $6.57$ & $6.25$\\ $\chi_{b_{2}}\left( 3P\right) \rightarrow\gamma\Upsilon\left( 1S\right) $ & $8.45$ & $7.88$\\ \bottomrule \end{tabular} \caption{\label{TabwidthsPSint}Calculated ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ widths to order $p/M$ implemented with the experimental masses and photon energy: $\Gamma_{p/M}^{\left( The-Exp\right) }$. Our educated guess for the unknown $\chi_{b0}\left( 3P\right) $ mass has been $10492$ MeV.} \end{table} (Notice that one could alternatively choose \eqref{oalfaelecfinal} for ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ transitions (or \eqref{oprimeelecfinal} for ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ ones). The only difference is in the set of intermediate contributing states that would be formed by $S$ and $D$ waves. This adds support to our former assertion that radiative decays may serve as a stringent test of the whole spectral model description.) These predictions and the ones in Table~\ref{TabwidthsSPint} are the main results of our research. Their comparison to future data will be a definite test of the proposed formalism to deal with radiative decays beyond the LWLA. \section{Summary\label{SVI}} Starting from a simple nonrelativistic quark potential model fitting well the low lying spin triplet $1^{--}$ and $2^{++}$ (and to a lesser extent $1^{++}$) bottomonium spectroscopy we have calculated ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ and ${^{3}\!P_{J}}\rightarrow\gamma\,{^{3}\!S_{1}}$ decay widths by using a nonrelativistic reduction, up to $\frac{\left\vert \vb*{p}_{b}\right\vert }{M_{b}}$ order, of the Elementary Emission Model transition operator. In this decay model the emission of the photon is assumed to take place by the quark or the antiquark of the decaying meson. A great simplification applies when the wave length of the emitted photon is much larger than the hadronic size scale for the transition. 
This occurs, for example, for decays involving the lowest-lying spectral states. Then, in this Long Wave Length Approximation (LWLA) the amplitude dependence on the mass and wave function of the initial and final mesons can be factored out. This permits a step-by-step analysis of the requirements needed to get an accurate description of data from a spectroscopic potential model. As a general result, we have shown that the implementation of the experimental masses and photon energy, instead of the calculated ones, in the evaluation of the transition amplitude is an essential requirement for predictions to be in accord with data. This implementation is justified under the assumption that the difference between the measured masses and the calculated ones corresponds in most cases to a first-order perturbative effect. The comparison of the resulting widths with data supports this assumption, since the only modest $\left( 25\%\right) $ deviation from data corresponds to transitions involving $0^{++}$ states, for which the difference between the calculated and measured masses is significantly larger than for the $2^{++}$ and $1^{++}$ cases. \bigskip For general transitions between bottomonium states where the LWLA does not necessarily apply, a new method to factor out the mass and wave function dependence of the amplitude has been developed and applied to $^{3}\!S_{1}\longleftrightarrow{^{3}\!P_{J}}$ transitions. This method is based on the introduction of a complete set of intermediate Cornell states in the calculation of the amplitude. Thus, for instance, the ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ amplitude can be written as a sum of LWLA-like amplitudes from the initial to intermediate $P$-wave states with coefficients depending on the intermediate and final states.
The introduction of intermediate states for an accurate description of the decay widths from an imperfect spectroscopic model indicates that radiative decays beyond the LWLA may serve as a very stringent test of the spectroscopic wave functions. As a matter of fact, any inaccuracy in the calculation of ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ amplitudes for which the LWLA applies translates into an inaccuracy for general ${^{3}\!S_{1}}\rightarrow\gamma\,{^{3}\!P_{J}}$ transitions beyond the LWLA. From the scarce data available we have verified that the same level of inaccuracy $\left( 25\%\right) $ may be expected in both cases. This makes us confident in our predictions for as yet unmeasured decay widths, which may serve as a guide for future experimental searches. \bigskip In summary, we have developed a formalism to get an accurate description of the electromagnetic $^{3}\!S_{1}\longleftrightarrow{^{3}\!P_{J}}$ bottomonium transition widths from a $\frac{\left\vert \vb*{p}_{b}\right\vert}{M_{b}}$ order Elementary Emission Decay Model and a simple nonrelativistic spectroscopic Cornell potential model. Our formalism can be used for more refined nonrelativistic potentials as long as they depend only on the quark-antiquark separation. Coupled-channel corrections, whose effect has been partially analyzed in unquenched quark models \cite{Fer13,*Fer14_5,*Fer14_9}, can also be incorporated through corrections to the wave functions. However, the formalism cannot be easily generalized to charmonium, where it can be checked that higher orders in $\frac{\left\vert\vb*{p}_{c}\right\vert }{M_{c}}$ play an important role. Work along this line is in progress. \begin{acknowledgments} This work has been supported by \foreignlanguage{spanish}{Ministerio de Economía y Competitividad} of Spain (MINECO) and EU Feder grant FPA2016-77177-C2-1-P and by SEV-2014-0398.
R.\,B.\ acknowledges a FPI fellowship from the \foreignlanguage{spanish}{Ministerio de Ciencia, Innovación y Universidades} of Spain under grant BES-2017-079860. We are grateful to J.\,Segovia for providing us with a set of wave functions for comparison. \end{acknowledgments}
\section{INTRODUCTION} In the protosolar nebula, the growth of terrestrial planets begins with collisions and mergers of planetesimals, solid objects with radii of $\sim$ 1 km \citep[e.g.][]{saf69,lis87,ws93,wei97}. Initially, planetesimals grow slowly. As they grow, dynamical friction circularizes the orbits of the largest objects; viscous stirring excites the orbits of the smallest objects. Slow, orderly growth ends when the gravitational cross-sections of the largest objects exceed their geometric cross-sections. Because dynamical friction is faster than accretion, the largest objects stay on circular orbits and grow faster and faster relative to the smallest planetesimals. This runaway growth rapidly concentrates solid material into a few protoplanets \citep[e.g.,][]{gre78,gre84,wet89,ws93,wei97,kl98,kok00}. During runaway growth, protoplanets stir the orbits of the leftover planetesimals. Stirring reduces gravitational focusing factors and slows the growth of the protoplanets \citep{ws93,ida93,kb02,raf03c}. Although protoplanets grow slowly, they grow faster than the leftover planetesimals and intermediate-mass objects. These large `oligarchs' slowly clear their orbits of smaller objects and reach a maximum `isolation mass' that depends on the initial surface density of solid material \citep{lis87,lis93,kok98,raf03c,gol04b}. Throughout oligarchic growth, protoplanets repel other oligarchs in the disk \citep{kok96,kok98,kok00,kom02,tho03}. Eventually, however, dynamical interactions produce collisions and gravitational scattering among the oligarchs \citep{cha01,kok02,kom02,tho02,raf03c,gol04b}. During this phase of `chaotic growth', the largest oligarchs merge with and clear away other oligarchs and smaller objects to become full-fledged planets \citep[e.g.,][]{gol04a}. Within this framework, the transition from oligarchic growth to chaotic growth is poorly understood.
Although analytic estimates provide a guide to the late evolutionary stages \citep[e.g.,][]{raf03c,gol04a}, numerical calculations are necessary to test the basic theory and to derive the end states and timescales as a function of initial conditions \citep[e.g.][]{kom02}. Published calculations do not test this regime of the theory in much detail. Pure coagulation and simple hybrid calculations cannot follow the transition accurately \citep{ws93,wei97}. Direct $n$-body calculations with planetesimals are computationally expensive and tend to focus on evolution during oligarchic growth, when orbits of individual objects are easier to track \citep[][and references therein]{ale98,kok02}. Direct $n$-body calculations without planetesimals follow dynamical interactions after the transition and concentrate on the clearing phase \citep{cha01}. Thus, understanding this evolutionary phase requires new calculations \citep[see the discussion in][]{kok02}. Here, we use numerical calculations with a new hybrid $n$-body--coagulation code to investigate the transition from oligarchic growth to chaotic growth. Our approach allows us to combine statistical algorithms for the planetesimals with direct $n$-body integrations of the oligarchs. From several simple test cases and complete planet formation calculations, we show that oligarchic growth becomes chaotic when the orbits of oligarchs begin to overlap. If the surface density in oligarchs exceeds a critical value, this transition occurs when the oligarchs contain roughly half of the mass in solids. At 1 AU, the critical initial surface density is $\Sigma_c \approx$ 2--3 g cm$^{-2}$. Thus, oligarchs can make the transition from oligarchic to chaotic growth in disks with masses comparable to the minimum mass solar nebula, where $\Sigma \approx$ 8--10 g cm$^{-2}$ at 1 AU. In disks where the surface density of solids is below the limit for chaotic growth, oligarchs slowly merge to form larger objects. 
Pairwise interactions, instead of large-scale chaos, drive the dynamics of these systems. Milder, slower interactions between oligarchs then produce less massive planets. We develop the background for our calculations in \S2, describe a suite of calculations in \S3, and conclude with a brief summary and conclusions in \S4. \section{BACKGROUND} The evolution from planetesimals to planets is marked by several phases -- orderly growth, runaway growth, oligarchic growth, and chaotic growth -- with clear transitions in the dynamics and mutual interactions of the ensemble of solid objects. Analytic derivations and sophisticated coagulation and $n$-body calculations identify the physics of these transitions. Here, we summarize some basic results to provide the context for our numerical simulations. Most considerations of planet formation begin with small objects, $r_i \lesssim$ 1 km, that contain all of the solid material. For these sizes, collisional damping and viscous stirring roughly balance for orbital eccentricity $e \sim 10^{-5}$. The gravitational binding energy, $E_g \sim 10^4$ erg g$^{-1}$, is then comparable to the typical collision energy at 1 AU, $E_c \sim 10^3$--$10^4$ erg g$^{-1}$. Both energies are smaller than the disruption energy -- the collision energy needed to remove half of the mass from the colliding pair of objects -- which is $Q_d \gtrsim 10^7$ erg g$^{-1}$ for rocky material \citep{dav85,ben99}. Thus, collisions produce mergers instead of debris. Initially, growth from mergers is slow. The collision cross-section is the product of the geometric cross-section and the gravitational focusing factor, \begin{equation} \sigma_c \sim \pi r^2 f_g \sim \pi r^2 (1 + \beta (v_{esc} / e v_K)^2) ~ , \end{equation} where $r$ is the particle radius, $v_K$ is the orbital velocity, $v_{esc}$ is the escape velocity, and $\beta$ is a coefficient that accounts for three-dimensional orbits in a rotating disk \citep{grz90,spa91,ws93}.
Because $e v_K \approx v_{esc}$, gravitational focusing factors are small. Thus, growth is slow and orderly \citep{saf69}. As larger objects form, the smaller objects effectively damp the orbital eccentricity of larger particles through dynamical friction \citep[e.g.][]{ws93,kok95,kl98}. Viscous stirring by the large objects heats up the orbits of the small objects. In the case where gas drag is negligible, \citet{gol04b} derive a simple relation for the ratio of the eccentricities of the large (`l') and the small (`s') objects in terms of their surface densities $\Sigma_{l,s}$ \citep[see also][]{kok02,raf03a,raf03b,raf03c}, \begin{equation} \frac{e_l}{e_s} \sim \left ( \frac{\Sigma_l}{\Sigma_s} \right )^n ~ , \end{equation} with $n =$ 1/4 to 1/2. For $\Sigma_l / \Sigma_s \sim$ $10^{-3}$ to $10^{-2}$, $e_l/e_s \approx$ 0.1--0.25. Because $e_s v_K \lesssim$ $v_{l,esc}$, gravitational focusing factors are large. Runaway growth begins. Runaway growth concentrates more and more mass in the largest objects. Dynamical friction produces the largest gravitational focusing factors among the largest objects. These protoplanets run away from the smaller objects and contain an ever-growing fraction of the total mass. At the same time, the large objects continue to stir the leftover planetesimals. The leftovers have orbital velocity dispersions comparable to the escape velocities of the oligarchs. With $e_s v_K \sim v_{esc}$, equations (1) and (2) show that collision rates decline as runaway growth continues. The ensemble of protoplanets and leftover planetesimals then enters the oligarchic phase, where the largest objects grow faster than the leftover planetesimals. Among the oligarchs, the smaller oligarchs grow the fastest. Each oligarch tries to accrete all of the material in an annular `feeding zone' set by balancing the gravity of neighboring oligarchs. Within each annulus, each oligarch stirs up the remaining planetesimals along its orbit.
Because smaller oligarchs orbit in regions with smaller $\Sigma_l/\Sigma_s$, equations (1) and (2) show that smaller oligarchs have larger gravitational focusing factors. Thus smaller oligarchs grow faster \citep[e.g.,][]{kok98,gol04b}. During oligarchic growth, protoplanets become isolated from their surroundings \citep{lis93,kok98,kok00,tho03,gol04a,gol04b}. As oligarchs accrete smaller planetesimals, dynamical interactions push them apart to maintain a typical separation of 5--10 Hill radii, $R_H = a \, (m / 3 m_{\odot})^{1/3}$, where $a$ is the semimajor axis, $m$ is the mass of an oligarch, and $m_{\odot}$ is the mass of the Sun. The separation depends weakly on the mass and semimajor axis of the protoplanet and the local surface density \citep{kok98,kok00}. Oligarchic growth limits the mass of the largest protoplanets \citep{lis87,lis93,kok98}. If an oligarch accretes all planetesimals within a torus of width $a_{acc} = n R_H$, it has a mass $m \approx$ 0.005--0.01 $n^{3/2} m_{\oplus}$, where $m_{\oplus}$ is the mass of the Earth. For $n \sim$ 5, the so-called isolation mass is $m_{iso} \sim$ 0.1 $m_{\oplus}$. Unless oligarchs move out of their feeding zones or additional material is brought into the feeding zone, they cannot grow beyond the isolation mass. A transition from oligarchic growth to chaotic growth appears to provide the best way for oligarchs to evolve into planets \citep[e.g.,][]{kom02,gol04b}. Small oligarchs on roughly circular orbits prevent radial drift by repelling planetesimals and opening up a gap in the feeding zone \citep{raf01,raf03a,raf03b,raf03c,raf03d}. Larger oligarchs that stir up planetesimals outside the gap can grow up to the isolation mass, but no further \citep{raf03c,gol04a,gol04b}. Once the mass in the oligarchs exceeds the mass in leftover planetesimals, dynamical interactions between the oligarchs move them outside their feeding zones. The oligarchs then compete for the remaining planetesimals \citep{gol04a,gol04b}.
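These scalings are easy to check numerically. In the sketch below all constants and the assumed bulk density $\rho = 3$ g cm$^{-3}$ are illustrative choices, not values from the text, and the focusing factor is written in the standard Safronov form $f_g = 1 + \beta \, (v_{esc}/e v_K)^2$ with $\beta = 1$:

```python
import math

G = 6.67e-8          # cgs gravitational constant
M_SUN = 1.989e33     # g
AU = 1.496e13        # cm

def hill_radius(m, a):
    """R_H = a (m / 3 M_sun)^(1/3) for an oligarch of mass m at distance a."""
    return a * (m / (3.0 * M_SUN)) ** (1.0 / 3.0)

# Isolation mass in Earth masses, m_iso ~ 0.005-0.01 n^(3/2) m_earth, for n = 5
n = 5.0
m_iso_lo, m_iso_hi = 0.005 * n**1.5, 0.01 * n**1.5

# Focusing factor for a 1e26 g oligarch at 1 AU (rho, e are assumed values)
m, rho, a, e = 1.0e26, 3.0, AU, 1.0e-3
r_phys = (3.0 * m / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
v_esc = math.sqrt(2.0 * G * m / r_phys)
v_K = math.sqrt(G * M_SUN / a)
f_g = 1.0 + (v_esc / (e * v_K)) ** 2
```

With these numbers the focusing factor is in the thousands, which is the regime of runaway growth described above.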
\citet{gol04a} propose that large-scale dynamical interactions begin when $\Sigma_l \gtrsim \Sigma_s$. However, this regime has not been investigated in detail by numerical simulations. To evaluate the conditions necessary for chaotic orbits to produce terrestrial planets, we calculate the formation of planets from 1--10 km planetesimals. We start with several test calculations to demonstrate and to examine the physical processes involved in oligarchic growth and the transition from oligarchy to chaos. We then consider the growth of planets in a small torus at 0.84--1.16 AU and a large torus at 0.4--2 AU. To provide a measure of the orbital interactions in the calculations, we define two parameters for the ensemble of oligarchs. From published $n$-body calculations, we expect significant orbital interactions between oligarchs when their typical separations are less than 5--10 $R_H$ \citep{kok95,kok98,tho03,bk06}. With $N$ equal-mass oligarchs spaced evenly in radial distance within a torus of width $\Delta a$, oligarchs should begin to interact when $(5$--$10) \, N R_H \approx \Delta a$. Generalizing this idea to $N$ oligarchs of any mass, we expect significant orbital interactions among oligarchs when the sum of their Hill radii is $\sim$ 0.1--0.2 $\Delta a$. To provide a measure of the onset of interactions, we define a Hill parameter as the normalized sum of Hill radii for all oligarchs: \begin{equation} p_H = \frac{\sum_{i=1}^N R_{H,i} }{a_{out} - a_{in}} ~ , \end{equation} where $R_{H,i}$ is the Hill radius for each of $N$ oligarchs and $a_{in}$ ($a_{out}$) is the inner (outer) radius of the coagulation grid. We expect significant dynamical interactions due to orbit overlap when $p_H \gtrsim$ 0.1. The Hill parameter is useful because we can relate $p_H$ to the surface density required for overlapping orbits.
If $m_t$ is the typical mass of an oligarch in equation (3), the surface density in oligarchs is roughly \begin{equation} \Sigma_l \approx 2.5 \left ( \frac{p_H}{0.1} \right ) \left ( \frac{m_t}{\rm 10^{26} ~ g} \right )^{2/3} ~ {\rm g ~ cm^{-2}}. \end{equation} If oligarchic growth ends when $p_H \approx$ 0.1, equation (4) yields $\Sigma_l \sim$ 2--3 g cm$^{-2}$, which is 25\% to 33\% of the mass in a minimum mass solar nebula \citep{wei77,hay81}. To provide a second measure of the transition from oligarchic to chaotic growth, we derive an orbit crossing parameter. We define the absolute value of the difference in the semimajor axis of two oligarchs, $a_{sep,ij}$ = $| a_i - a_j | $. For each oligarch with $j \neq i$, we evaluate $x_{ij} = (a_{sep,ij} - a_j e_j) / R_{H,ij}$, where $e_j$ is the orbital eccentricity and \begin{equation} R_{H,ij} = \left ( \frac{m_i + m_j}{3 m_{\odot}} \right )^{1/3} \left ( \frac{a_i + a_j}{2} \right ) \end{equation} is the mutual Hill radius. We find the minimum value of $x_{ij}$, $x_{min,i} = {\rm min}_j (x_{ij})$. The orbit crossing parameter is then \begin{equation} p_o = \frac{\sum_{i=1}^N m_i x_{min,i}}{m_{tot}} ~ , \end{equation} where $m_{tot}$ is the total mass in oligarchs. When $p_o \gtrsim$ 5, the orbits of oligarchs do not cross. When $p_o \sim$ 0, most orbits cross. We expect significant orbital interactions, mergers, and chaotic growth when $p_o \lesssim$ 0. \section{NUMERICAL CALCULATIONS} \subsection{Methods} To explore the end of oligarchic growth, we consider numerical calculations of planet formation. Because the initial number of planetesimals is large, $\sim$ $10^8$ to $10^9$, a statistical approach is the only plausible method to derive the collisional growth of planetesimals \citep[e.g.][]{saf69,ws93}. When a few objects contain most of the mass, the statistical approach fails. N-body methods can then treat the behavior of the largest objects but cannot follow the evolution of the leftover planetesimals.
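The two diagnostics defined above can be sketched directly from equations (3)--(6). The toy system below -- a dozen equal $10^{26}$ g oligarchs evenly spaced across the 0.84--1.16 AU torus -- is an illustrative assumption, not one of the paper's calculations:

```python
import math

M_SUN = 1.989e33  # g
AU = 1.496e13     # cm

def hill_param(masses, axes, a_in, a_out):
    """Equation (3): sum of Hill radii normalized by the grid width."""
    rh_sum = sum(a * (m / (3.0 * M_SUN)) ** (1.0 / 3.0)
                 for m, a in zip(masses, axes))
    return rh_sum / (a_out - a_in)

def crossing_param(masses, axes, eccs):
    """Equations (5)-(6): mass-weighted minimum scaled orbital separation."""
    total = 0.0
    for i, (m_i, a_i) in enumerate(zip(masses, axes)):
        x_min = min(
            (abs(a_i - a_j) - a_j * e_j)
            / (((m_i + m_j) / (3.0 * M_SUN)) ** (1.0 / 3.0)
               * 0.5 * (a_i + a_j))
            for j, (m_j, a_j, e_j) in enumerate(zip(masses, axes, eccs))
            if j != i)
        total += m_i * x_min
    return total / sum(masses)

# Toy system: 12 oligarchs of 1e26 g evenly spaced over 0.84-1.16 AU
N = 12
masses = [1.0e26] * N
axes = [(0.84 + 0.32 * (i + 0.5) / N) * AU for i in range(N)]

p_H = hill_param(masses, axes, 0.84 * AU, 1.16 * AU)   # ~ 0.1
p_o_cold = crossing_param(masses, axes, [1.0e-5] * N)  # well separated
p_o_hot = crossing_param(masses, axes, [0.04] * N)     # orbits overlap

# Equation (4) estimate vs the actual surface density of the toy system
sigma_eq4 = 2.5 * (p_H / 0.1) * (1.0e26 / 1.0e26) ** (2.0 / 3.0)
sigma_direct = sum(masses) / (math.pi * ((1.16 * AU) ** 2 - (0.84 * AU) ** 2))
```

For near-circular orbits the crossing parameter sits well above 5, while eccentricities of a few percent drive it below zero, and the equation (4) estimate of the surface density agrees with the directly computed value to within a few tens of percent.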
Here, we use a hybrid n-body--coagulation code, which combines the statistical and n-body approaches to follow the growth of 1--10 km planetesimals into planets. \citet[][and references therein]{kb04a} summarize our multi-annulus coagulation code. The model grid contains $N$ concentric annuli with widths $\delta a_i$ centered at heliocentric distances $a_i$. Each annulus contains $n(m_{ik},t$) objects of mass $m_{ik}$ with orbital eccentricity $e_{ik}(t)$ and inclination $i_{ik}(t)$ in $M$ mass batches. Our solution of the coagulation and Fokker-Planck equations treats elastic and inelastic collisions between all particles in all annuli. We adopt collision rates from kinetic theory -- the particle-in-a-box method -- and use an energy-scaling algorithm to assign collision outcomes. We derive changes in orbital parameters from gas drag, dynamical friction, and viscous stirring \citep{ada76, oht02}. \citet{bk06} describe the n-body code and our methods for combining n-body calculations with the coagulation code. When an object in the coagulation code reaches a preset mass, it is `promoted' into the n-body code. To set the initial orbit for this object, we use the three coagulation coordinates, $a$, $e$, and $i$, and select random values for the longitude of periastron and the argument of perihelion. Because the annuli have finite width $\delta a_i$, where $i$ is the index, we set the semimajor axis of the promoted object to $a_p$ = $a_i + (0.5 - x) \delta a_i$, where $x$ is a random number between 0 and 1. When two or more objects within an annulus are promoted to the n-body code during the same timestep, we restrict the choices of the orbital elements to minimize orbital interactions between the newly promoted $n$-bodies. We follow the mutual interactions of the ensemble of `n-bodies', including mergers, using a robust set of integrators. These integrators include accretion and drag terms that link the evolution of the planetesimals to the evolution of the n-bodies. 
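The promotion step can be sketched as follows; the function name, its signature, and the uniform draws for the two orientation angles are illustrative assumptions consistent with the description above:

```python
import math
import random

def promote(a_i, da_i, e, inc, rng=random):
    """Assign orbital elements to an object promoted from annulus i.

    The semimajor axis is drawn uniformly across the finite annulus,
    a_p = a_i + (0.5 - x) * da_i with x uniform on [0, 1); the two
    remaining angles are drawn uniformly on [0, 2 pi).
    """
    x = rng.random()
    a_p = a_i + (0.5 - x) * da_i
    varpi = rng.uniform(0.0, 2.0 * math.pi)  # longitude of periastron
    omega = rng.uniform(0.0, 2.0 * math.pi)  # argument of perihelion
    return a_p, e, inc, varpi, omega
```

Passing a seeded `random.Random` instance as `rng` makes the promotion step reproducible from run to run.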
To compute how rapidly n-bodies accrete planetesimals, we adopt a particle-in-a-box formalism. Direct calculations for gas drag and Fokker-Planck gravitational interactions provide rates for changes in $a$, $e$, and $i$ for each n-body due to interactions with planetesimals. \citet{bk06} describe tests of the hybrid code along with initial results for terrestrial planet formation. When the Hill radius of a promoted n-body is small compared to the separation of annuli in the coagulation grid, the hybrid code reproduces published results of previous investigations \citep[e.g.][]{grz90,ws93,wei97,dun98,cha01}. In the calculations described below, we set the promotion mass to $m_{pro} =$ 1--30 $\times 10^{24}$ g, which yields a small Hill radius, $\sim$ 0.0005--0.002 $a$, compared to the separation of adjacent annuli in the coagulation grid, $\sim$ 0.01--0.04 $a$. Our numerical calculations begin with a mass distribution of planetesimals in 32--64 concentric annuli with initial surface density $\Sigma = \Sigma_0 \, (a / {\rm 1 ~ AU})^{-n}$, with $n$ = 1--1.5. The spacing between successive mass batches is $\delta = m_{i+1}/m_i$. We adopt $\delta$ = 1.4--2.0 for most calculations. All planetesimals start with initial eccentricity $e_0$ and inclination $i_0$. As planetesimals evolve, the average mass $m_{ik}$ and orbital parameters $e_{ik}$ and $i_{ik}$ of each mass batch change. We add mass batches as planetesimals grow in mass and reserve 8--16 mass batches in each annulus for n-bodies. Throughout the calculation, gas drag and collisions transport planetesimals in the coagulation code from one annulus to another. In addition to these processes, mutual gravitational interactions can scatter n-bodies into different annuli. For these calculations, we assume a simple treatment of the gas. The midplane density is \begin{equation} \rho_g(a,t) = \rho_0(a) e^{-t/t_0}, \end{equation} where $\rho_0(a)$ is the initial gas density and $t_0$ is a constant.
We adopt a gas surface density $\Sigma_g = 100 \Sigma$ and set $\rho_0 = \Sigma_g / 2H$, where $H$ is the gas scale height \citep{kh87}. Consistent with observations of pre-main sequence stars, we adopt $t_0$ = 1 Myr \citep[see, for example,][]{hai01,you04,cal05,dal05}. \subsection{Simple model} To isolate the important physical parameters in the evolution of oligarchs, we first consider an ensemble of equal mass planetesimals in 32 concentric annuli at 0.84--1.16 AU. The planetesimals have mass $m_s$, surface density $\Sigma$ = $\Sigma_s (a / {\rm 1 ~ AU})^{-3/2}$, initial eccentricity $e_0$ = $10^{-5}$ and inclination $i_0$ = $e_0/2$. Within this swarm, we embed $N$ oligarchs with mass $m_l$ and surface density $\Sigma$ = $\Sigma_l (a / {\rm 1 ~ AU})^{-3/2}$. The oligarchs have the same initial eccentricity and inclination as the planetesimals. To evolve this ensemble in time, we calculate gravitational stirring for all interactions. We allow oligarchs -- but not planetesimals -- to collide and merge. This assumption allows us to focus on the dynamical evolution of the oligarchs in the absence of planetesimal accretion. Figure 1 shows the evolution of the eccentricity for three sets of oligarchs with $\Sigma_l/\Sigma_s$ = 0.5 and $\Sigma_l$ = 1--4 g cm$^{-2}$. The planetesimals have $m_s$ = $10^{24}$ g; the oligarchs have $m_l = 10^{26}$ g, comparable to the isolation mass for this grid. The Hill parameter increases from $p_H = 0.04$ (lower panel) to $p_H$ = 0.07 (middle panel) to $p_H$ = 0.13 (upper panel). In each frame, the colored tracks indicate the eccentricities of oligarchs in the grid. Along the track of a single oligarch, the color changes when two oligarchs merge. Although a single color does not track the motion of an individual oligarch throughout the evolution, the ensemble of curves tracks the merger history of the final set of oligarchs. 
Tracking backwards in time along connected curves yields the evolution of one of the oligarchs that remains at the end of the calculation. The legends list $\Sigma_l$ and the initial and final number of oligarchs (Figure 2). The eccentricity evolution is sensitive to the initial mass in the swarm. For $\Sigma_l$ = 1 g cm$^{-2}$, the oligarchs have a typical separation of 15--20 $R_H$ and do not have significant interactions. As the oligarchs stir up the planetesimals, dynamical friction maintains a constant ratio $e_l/e_s \sim$ $(m_s/m_l)^{1/2}$ $\sim$ 0.1. Although the orbits of the oligarchs become more and more eccentric, the growth of $e_l$ is slow (Figure 1, lower panel). It takes $\sim$ 1 Myr to reach $e_l \sim$ 0.01 and another 8--9 Myr to reach $e_l \sim$ 0.02. After $\sim$ 100 Myr, when $e \sim$ 0.04, the orbits begin to overlap. As $\Sigma_l$ increases, orbital interactions occur on shorter timescales (Figure 1, middle and upper panels). For $\Sigma_l$ = 2 g cm$^{-2}$, the initial separation of the oligarchs is $\sim$ 10--12 $R_H$. It takes only $5 \times 10^4$ yr for the typical eccentricity to reach $e \sim$ 0.01, when the minimum separation between oligarchs is only $\sim$ 8 $R_H$. At $\sim 10^5$ yr, $e \sim$ 0.02 and orbits overlap. Two oligarchs merge at $\sim 2 \times 10^5$ yr; two more merge at $\sim$ 1 Myr. When we stop the calculation at 1 Myr, two oligarchs remain in eccentric orbits ($e \gtrsim$ 0.025) and are likely to merge with other oligarchs. For $\Sigma_l$ = 4 g cm$^{-2}$, orbits begin to overlap in $\sim 10^4$ yr. After a single merger at $\sim 10^4$ yr, orbits continue to grow more eccentric. At $5 \times 10^4$ yr, all orbits overlap and the merger rate accelerates. There are two additional mergers at $\sim 10^5$ yr, another two by $3 \times 10^5$ yr, and three more by $\sim$ 1 Myr. At 1 Myr, all of the remaining oligarchs have eccentric, overlapping orbits and many are likely to merge over the next few Myr.
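The quoted Hill parameters can be recovered, approximately, from this setup. The sketch below assumes equal $10^{26}$ g oligarchs and evaluates the Hill radius at 1 AU; both are simplifications of the actual calculation:

```python
import math

M_SUN = 1.989e33   # g
AU = 1.496e13      # cm
A_IN, A_OUT = 0.84 * AU, 1.16 * AU
M_OLIG = 1.0e26    # g, assumed common oligarch mass

def hill_param_from_sigma(sigma_l):
    """Approximate p_H for oligarchs with surface density
    Sigma = sigma_l (a / 1 AU)^(-3/2) between A_IN and A_OUT."""
    # Total oligarch mass: integrate 2 pi a Sigma(a) da over the torus
    total = 2.0 * math.pi * sigma_l * AU**1.5 * 2.0 * (A_OUT**0.5 - A_IN**0.5)
    n_olig = total / M_OLIG
    r_hill = AU * (M_OLIG / (3.0 * M_SUN)) ** (1.0 / 3.0)  # at ~1 AU
    return n_olig * r_hill / (A_OUT - A_IN)
```

The estimate lands close to the values quoted for Figure 1: roughly 0.04, 0.07, and 0.13 for $\Sigma_l$ = 1, 2, and 4 g cm$^{-2}$.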
Figure 2 illustrates the chaotic behavior of the semimajor axis in these test cases. For $\Sigma_l$ = 1 g cm$^{-2}$, the oligarchs have a constant semimajor axis for almost 100 Myr. When the total mass is a factor of two larger ($\Sigma_l$ = 2 g cm$^{-2}$), the semimajor axes are constant for $\sim$ $10^5$ yr. Once the orbits start to overlap, several oligarchs show considerable excursions in semimajor axis of 0.1 AU or more, $\sim$ 30\% to 40\% of the grid. Two of these oligarchs merge with other oligarchs. For $\Sigma_l$ = 4 g cm$^{-2}$, the orbits are very chaotic, with larger radial excursions and many mergers. Figures 3 and 4 show the evolution of the number of oligarchs $N_o$ and the orbit crossing parameter $p_o$ for the three cases in Figures 1--2 and a fourth case with $\Sigma_l$ = 8 g cm$^{-2}$. Mergers are prominent in all but the lowest surface density test. In the tests where mergers are important, $p_o \lesssim 0$ when orbits overlap and mergers begin. These tests confirm our general expectations. Oligarchs stir up planetesimals along their orbits. Dynamical friction maintains a fixed ratio of $e_l/e_s$ (equation 2). Because the dynamical friction timescale depends on the surface density, the orbits of oligarchs in more massive disks overlap on shorter timescales than the orbits of oligarchs in less massive disks. More massive disks contain more oligarchs, leading to more chaotic orbits on shorter timescales. To provide additional tests, we consider two more sets of calculations. Models with fixed $\Sigma_l$ and variable $\Sigma_s$ gauge the importance of damping by leftover planetesimals. We first consider models with $m_s = 10^{24}$ g and then examine calculations with $m_s = 10^{16}$ g. Figure 5 shows the evolution of semimajor axis for three calculations with $\Sigma_s$ = 2 g cm$^{-2}$ (upper panel), $\Sigma_s$ = 4 g cm$^{-2}$ (middle panel), and $\Sigma_s$ = 8 g cm$^{-2}$ (lower panel).
The tracks in the upper panel repeat those from the middle panel of Figure 2; chaos begins at $\sim 10^5$ yr when the orbits start to overlap. As $\Sigma_s$ increases, dynamical friction between the small and large objects is more efficient. The orbits overlap and chaos begins later when $\Sigma_l / \Sigma_s \lesssim$ 0.5. The middle set of tracks in Figure 5 demonstrates this behavior: there is no chaos until the system has evolved for 1 Myr. However, in models where $\Sigma_s$ exceeds $\sim$ 8 g cm$^{-2}$, viscous stirring among the planetesimals increases their velocity dispersions on shorter timescales. Larger viscous stirring results in larger orbital eccentricities for the oligarchs (equation 2) and a faster onset of chaos. For $\Sigma_s$ = 8 g cm$^{-2}$, orbits begin to overlap at $\sim 3 \times 10^5$ yr. These chaotic interactions grow with $\Sigma_s$. To test the importance of viscous stirring among planetesimals, Figure 6 repeats the calculations of Figure 5 for planetesimals with $m_s = 10^{16}$ g. In this test, the ratio of orbital eccentricities is much larger, $e_s/e_l \sim$ 75--80. The viscous stirring timescale for the planetesimals is much larger than 1 Myr. For all $\Sigma_s$ = 4--32 g cm$^{-2}$, planetesimal stirring is negligible. Because dynamical friction is important, the oligarchs remain well-separated and never develop overlapping orbits. These tests demonstrate the main physical processes involved in the transition from oligarchy to chaos. Dynamical friction maintains a fixed ratio of $e_l/e_s$ (equation 2). Viscous stirring increases the orbital eccentricities of planetesimals until the orbits of oligarchs interact. When large planetesimals contain a significant amount of mass, they contribute to the stirring. Otherwise, oligarchs provide all of the stirring. Once orbits overlap, chaos ensues. Chaos produces mergers and large excursions in semimajor axis, which starts the process that clears out the disk to produce large planets.
Three parameters -- $\Sigma_l / \Sigma_s$, $p_H$, and $p_o$ -- provide good measures of the transition from oligarchy to chaos. The Hill parameter measures when the oligarchs have enough mass to interact dynamically. The ratio $\Sigma_l / \Sigma_s$ isolates the time when planetesimals cannot damp the oligarchs and thus prevent large-scale dynamical interactions. The orbit overlap parameter distinguishes times when orbit overlap is important. To understand the transition from oligarchy to chaos in less idealized situations, we now consider complete planet formation simulations using the full hybrid code. The calculations start with 1--10 km planetesimals and allow all objects to collide, merge, and interact gravitationally. When objects in the coagulation code reach $m \approx 2 \times 10^{25}~(\Sigma_0 / {\rm 8~g~cm^{-2}})$ g, we promote them into the $n$-body code and follow their individual trajectories. We describe calculations in a small (large) torus in \S3.3 (\S3.4). \subsection{Planet formation at 0.86--1.14 AU} The calculations begin with 1--3 km planetesimals in a torus extending from 0.86 AU to 1.14 AU. We divide this region into 32 annuli and seed each annulus with planetesimals in nearly circular and coplanar orbits ($e_0 = 10^{-5}$ and $i_0 = e_0/2$). The planetesimals have surface density $\Sigma = \Sigma_0 (a / {\rm 1 ~ AU})^{-3/2}$, with $\Sigma_0$ = 1--16 g cm$^{-2}$ at 1 AU. In these calculations, we do not consider fragmentation, which generally speeds up the growth of the largest objects at the expense of mass loss from disruptions and gas drag \citep{ws93,kl98}. \citet{wei97} consider a similar suite of calculations. Where it is possible to compare, our results agree with these calculations \citep[see also][]{kom02}. For $\Sigma_0$ = 8 g cm$^{-2}$, growth at 1 AU follows a standard pattern \citep{ws93, wei97, kl98}. After a few thousand years, mergers produce a few large objects with radii of $\sim$ 10 km. 
As dynamical friction circularizes the orbits of these objects, runaway growth begins. It takes only $10^4$ yr to produce several dozen 100--300 km objects. At $\sim 2 \times 10^4$ yr, the first object is promoted into the $n$-body code. As larger objects form farther out in the disk, more promotions occur. These objects continue to grow rapidly until they reach `isolation' masses of $\sim 10^{26}$ g, when stirring begins to reduce gravitational focusing factors. The transition to oligarchic growth begins at the inner edge of the grid and rapidly propagates outwards. At $\sim$ $3 \times 10^5$ yr, the number of oligarchs with masses $m \gtrsim$ $10^{26}$ g peaks at $N_o \sim$ 7. Soon after oligarchic growth begins at the outer edge of the grid, oligarchs at the inner edge of the grid begin to interact dynamically. A wave of strong dynamical interactions then moves out through the grid. It takes $\sim$ 1 Myr for the wave to move from $\sim$ 0.85 AU to $\sim$ 1.15 AU. During this period, some oligarchs merge. Others migrate through the grid on highly eccentric orbits. From $\sim$ 1 Myr onward, mergers slowly reduce $N_o$. It takes $\sim$ 1 Myr for the first 2 mergers and another $\sim$ 2 Myr for the next 2 mergers. After 100 Myr, only 3 oligarchs remain. One of these has $m \sim$ 0.43 $m_{\oplus}$, $a \sim$ 0.9 AU, and $e \sim$ 0.08. The other two oligarchs have $m \sim$ 0.05 $m_{\oplus}$ and $e \sim$ 0.1 (Figure 7). Aside from the eccentricity of the more massive planet, the properties of these objects are reasonably close to those of the Earth and Mars. Fragmentation and interactions with the gas probably promote smaller eccentricities for the largest objects \citep[e.g.,][]{ws93,agn02,kom02}. Figure 7 illustrates the evolution of the semimajor axes of the oligarchs for three calculations with $\Sigma_0$ = 2 g cm$^{-2}$ (lower panel), $\Sigma_0$ = 4 g cm$^{-2}$ (middle panel), and $\Sigma_0$ = 8 g cm$^{-2}$ (upper panel).
As in Figure 1, the tracks change color when two oligarchs collide and merge. The labels indicate the final mass, in Earth masses, of the largest oligarchs at 100 Myr. The timescale for the onset of chaotic growth depends on the initial surface density. More massive disks reach the transition first \citep[see, for example,][]{lis87}. For $\Sigma_0$ = 8 g cm$^{-2}$, the transition begins at $\sim$ a few $\times ~ 10^5$ yr. The transition is delayed to $\sim$ 1 Myr for $\Sigma_0$ = 2 g cm$^{-2}$. The character of the transition to chaotic growth also depends on the initial surface density. In relatively massive disks with $\Sigma_0$ $\sim$ 8 g cm$^{-2}$, many oligarchs develop highly eccentric orbits and exhibit large variations in their semimajor axes. These large excursions result in many mergers and a rapid reduction in $N_o$. In less massive disks with $\Sigma_0 \lesssim$ 2--4 g cm$^{-2}$, only 1 or 2 oligarchs develop highly eccentric orbits. Most mergers are caused by two-body interactions, instead of large-scale dynamical interactions throughout the grid. Figures 8--10 illustrate these general conclusions. In Figure 8, the orbit crossing parameter rapidly approaches zero for calculations with $\Sigma_0$ = 8 g cm$^{-2}$. At 0.1--1 Myr, $p_o$ has a long plateau; close approaches between oligarchs cause $p_o$ to fall below zero; mergers cause $p_o$ to jump above zero. During a series of 4 mergers at 10 Myr, $p_o$ remains below 0 for a long period. After the final merger, $p_o$ jumps to 40, where it remains for many Myr. For smaller $\Sigma_0$, $p_o$ remains well above zero until one or two close pairwise interactions push it below zero. Once these interactions produce a merger, the systems stabilize and $p_o$ moves well above zero. Figure 9 shows the time evolution of the total number of $n$-bodies within the grid. During the transition from runaway growth to oligarchic growth, $N_o$ peaks and remains constant for 0.1--1 Myr.
Once orbital interactions begin, $N_o$ declines. The rate of decline is fastest in the most massive systems. The Hill parameter also shows the rapid transition from oligarchy to chaos. When $N_o$ peaks, $p_H$ reaches a maximum. It declines sharply when mergers reduce the number of oligarchs. After a merger or series of mergers, $p_H$ rises slowly as the remaining oligarchs accrete leftover planetesimals. To examine the sensitivity of these results to $m_{pro}$, we consider a broad range of promotion mass. For $\Sigma_0$ = 1--16 g cm$^{-2}$ at 0.8--1.2 AU, calculations with $m_{pro} \gtrsim$ $2 \times 10^{25}$ ($\Sigma_0 / {\rm 4~g~cm^{-2}}$) g cannot follow the evolution of the most massive coagulation particles accurately. Thus, the production rate of oligarchs is inconsistent. Once a few oligarchs form, the models provide a poor treatment of interactions among oligarchs and therefore yield poor estimates of the merger rate and the timescale for the transition from oligarchy to chaos. Calculations with smaller $m_{pro}$ provide better solutions to the evolution of the largest objects. To measure the quality of the results for the transition from oligarchy to chaos, we define $t_{\Sigma}$ as the time when oligarchs with $m \gtrsim 10^{26} (\Sigma_0 / {\rm 8~g~cm^{-2}})$ g contain half of the initial mass and $t_m$ as the time when these oligarchs start to merge and the number of oligarchs starts to decline. The scaling of the oligarch mass with $\Sigma_0$ provides a clean way to compare models with different initial conditions. Our calculations show clear trends for $t_m$ and $t_{\Sigma}$ as a function of $\Sigma_0$ (Figure 11). The timescale for oligarchs to contain half of the initial mass roughly varies as $t_{\Sigma} \propto \Sigma_0^{-2/3}$ with very little dispersion\footnote{The origin of this relation lies in the growth and stirring rates. In the absence of stirring, the growth time is $t \propto \Sigma_0^{-1}$. 
Because more massive oligarchs form in more massive disks, the vertical scale height $H$ of leftover planetesimals is larger in more massive disks. Larger scale heights reduce the growth rate by roughly $H^{1/3} \propto \Sigma_0^{1/3}$, which results in $t_{\Sigma} \propto \Sigma_0^{-2/3}$.}. The transition from oligarchic growth to chaotic growth at $t = t_m$ also depends on $\Sigma_0$. Our results yield an approximate relation, $t_m \propto \Sigma_0^{-3/2}$. However, this trend appears to have inflection points for $\Sigma \lesssim$ 2 g cm$^{-2}$ and $\Sigma \gtrsim$ 8 g cm$^{-2}$. Calculations in progress will allow us to test this relation and its origin in more detail. There are no large trends for $t_m$ and $t_{\Sigma}$ with $m_{pro}$. Although calculations with smaller $m_{pro}$ yield smaller dispersions in $t_m$ and $t_{\Sigma}$ at each $\Sigma_0$, the median values for $t_m$ and $t_{\Sigma}$ are fairly independent of $m_{pro}$. A Spearman rank test \citep{pre92} suggests a weak inverse correlation between $t_m$, $t_{\Sigma}$ and $m_{pro}$, which we plan to test with additional calculations. Independent of $m_{pro}$, our results demonstrate a clear trend of the ratio $r_{\Sigma} = t_m/t_{\Sigma}$ with initial surface density (Table 1). In massive disks with $\Sigma_0 \gtrsim$ 12 g cm$^{-2}$, mergers among oligarchs begin before oligarchs contain half of the mass ($t_m/t_{\Sigma} \lesssim 1$). In low mass disks with $\Sigma_0$ $\lesssim$ 2 g cm$^{-2}$, oligarchs start to merge after they contain half of the mass ($t_m/t_{\Sigma} \gtrsim 1$). To test whether variations in the frequency and strength of dynamical interactions among oligarchs cause the trends in $t_m$ and $r_{\Sigma}$, we define a `nearest neighbor parameter' $n_n$ which measures the average number of oligarchs within 10 $R_H$ of another oligarch.
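A minimal sketch of this nearest-neighbor diagnostic, using the standard Hill radius $R_H = a\,(m/3M_\odot)^{1/3}$ and the 10 $R_H$ search radius from the definition above (the example configurations are hypothetical):

```python
M_SUN = 1.989e33  # g

def hill_radius(a_au, m_g):
    """Hill radius (AU) of a body of mass m_g (g) at semimajor axis a_au."""
    return a_au * (m_g / (3.0 * M_SUN)) ** (1.0 / 3.0)

def nn_parameter(a_list, m_list, k=10.0):
    """Average number of other oligarchs within k Hill radii of each
    oligarch -- a sketch of the 'nearest neighbor parameter' n_n."""
    n = len(a_list)
    total = 0
    for i in range(n):
        reach = k * hill_radius(a_list[i], m_list[i])
        total += sum(1 for j in range(n)
                     if j != i and abs(a_list[j] - a_list[i]) < reach)
    return total / n

packed = [0.96, 0.98, 1.00, 1.02, 1.04]   # AU, tightly packed oligarchs
sparse = [0.6, 0.8, 1.0, 1.2, 1.4]        # AU, well separated oligarchs
masses = [1e26] * 5                       # g
print(nn_parameter(packed, masses))  # > 1: strong interactions expected
print(nn_parameter(sparse, masses))  # < 1: mild interactions expected
```

For $10^{26}$ g oligarchs near 1 AU, 10 $R_H$ is only $\sim$ 0.03 AU, so the diagnostic cleanly separates packed from sparse configurations.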
From \S3.2, configurations with $n_n \gtrsim$ 1 lead to strong dynamical interactions among the oligarchs; oligarchs interact mildly when $n_n \lesssim$ 1. Table 1 lists the average of the maximum value of $n_n$ and its dispersion for our calculations. Disks with $r_{\Sigma} \gtrsim$ 0.5 have $n_{n,max} \lesssim$ 1; disks with $r_{\Sigma} \lesssim$ 0.5 have $n_{n,max} \gtrsim$ 1. This result confirms the visual impressions from Figures 7--10: dynamical interactions among oligarchs are stronger in massive disks and milder in low mass disks. To conclude this section, Figure 12 shows the evolution of the mass for each oligarch in one calculation. Here, points of one color correspond to the track of one oligarch. From $\sim 10^4$ yr to $\sim 10^5$ yr, large objects form and grow rapidly. Once stirring reduces gravitational focusing factors, growth slows. During the late stages of runaway growth and the early stages of oligarchic growth, several neighboring oligarchs merge. As their gravitational reach extends, these larger oligarchs grow faster and faster. Smaller oligarchs cannot compete for leftover planetesimals and grow slowly. During chaotic growth, the largest oligarchs merge with smaller oligarchs. As the small oligarchs are depleted, the merger rate slows. With highly eccentric orbits, a few small oligarchs last for 10--30 Myr before colliding with a large oligarch. After 100 Myr, all but one small oligarch have collided and merged with the two large planets that remain at the end of the evolution. \subsection{Planet formation at 0.4--2 AU} These calculations begin with 5 km planetesimals in a torus extending from 0.4 AU to 2 AU. We divide this region into 40 annuli and seed each annulus with planetesimals in nearly circular and coplanar orbits ($e_0 = 10^{-5}$ and $i_0 = e_0/2$). To provide a contrast with previous simulations, the planetesimals have surface density $\Sigma = \Sigma_0 (a / {\rm 1 ~ AU})^{-1}$, with $\Sigma_0$ = 2--16 g cm$^{-2}$ at 1 AU. 
Calculations with the more standard $\Sigma \propto r^{-3/2}$ yield similar results. As in \S3.3, we do not consider fragmentation. This torus is larger than the 0.5--1.5 AU region examined by \citet[][see also Kominami \& Ida 2002]{wei97}, but similar to the torus described by \citet{cha01}. \citet{cha01} starts his calculations with lunar mass objects, instead of planetesimals. \citet{bk06} compare results derived from our hybrid code for calculations starting with 1--10 km planetesimals or 100--200 lunar mass objects. When we start with smaller objects, our calculations produce fewer oligarchs on shorter timescales than the \citet{cha01} calculations. Growth in the larger torus is similar to that in the smaller torus. For $\Sigma_0$ = 8 g cm$^{-2}$, mergers produce a few large objects with radii of $\sim$ 10 km at 0.4 AU in roughly a thousand years. Once runaway growth begins, it takes only $\sim 10^4$ yr for the first promotion into the $n$-body code. As runaway growth propagates outward in heliocentric distance, objects throughout the grid reach the isolation mass of $\sim 10^{26}$ g. It takes less than $10^5$ yr to produce 10 isolated objects and another $2 \times 10^5$ yr to produce the next 10 isolated objects. These objects continue to grow and reach typical masses of 0.05--0.1 $m_{\oplus}$ in $\sim$ 1 Myr. During oligarchic growth, occasional two body interactions produce mergers of oligarchs. These mergers always occur when the coagulation code produces 2--3 oligarchs in the same annulus. Often, dynamical interactions move one or two oligarchs into neighboring annuli, where they begin to accrete planetesimals more rapidly. Occasionally, dynamical interactions between several oligarchs in neighboring annuli produce one or two mergers, which stabilizes the system locally and leads to the isolation of the remaining 2 or 3 oligarchs. 
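The $\sim 10^{26}$ g isolation mass quoted above follows from a protoplanet sweeping up all solids within a few Hill radii of its orbit. In the sketch below, the spacing factor $b$ is an assumed free parameter chosen so that the result lands on the quoted scale; it is not a value taken from the text:

```python
import math

M_SUN = 1.989e33  # g
AU = 1.496e13     # cm

def isolation_mass(sigma, a_au=1.0, b=3.5, m_star=M_SUN):
    """Isolation mass from m_iso = 2 pi a (b R_H) Sigma with
    R_H = a (m_iso / 3 m_star)**(1/3); solving for m_iso gives
    m_iso = (2 pi b a^2 Sigma)**1.5 / (3 m_star)**0.5."""
    a = a_au * AU
    return (2.0 * math.pi * b * a * a * sigma) ** 1.5 / (3.0 * m_star) ** 0.5

print('%.1e' % isolation_mass(8.0))  # ~1e26 g for sigma = 8 g cm^-2 at 1 AU
```

The $\Sigma^{3/2}$ scaling in this estimate is why larger isolation masses, and hence earlier chaos, occur in more massive disks.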
As oligarchs start to form at the outer edge of the grid, oligarchs at the inner edge of the grid begin to interact dynamically (Figure 12). For $\Sigma_0$ = 8 g cm$^{-2}$, these interactions start at $\sim$ 1 Myr. Chaotic interactions then spread throughout the grid. For the next $\sim$ 10 Myr, mergers produce fewer but larger oligarchs, which rapidly sweep up the remaining planetesimals. After $\sim$ 200 Myr, only 5 oligarchs remain. The largest have masses comparable to those of the Earth and Venus. Smaller oligarchs have masses comparable to the mass of Mars. As in \S3.3, the evolution in less massive disks proceeds more slowly and less chaotically. The magnitude of dynamical orbital interactions is non-linear: once chaos starts in a massive disk, it rapidly propagates throughout the grid. Pairwise interactions dominate the dynamics of lower mass disks. These interactions usually do not affect other oligarchs. Figures 13--14 show the time evolution of $p_o$ and $p_H$. In all calculations, the typical orbital separation of the largest objects rapidly approaches zero when the first oligarchs form (Figure 13). This transition occurs sooner in more massive disks. During oligarchic growth, the orbits of oligarchs slowly move closer and closer. The number of oligarchs and the average Hill radius of the oligarchs (Figure 14) rise dramatically; these peak when $\Sigma_l / \Sigma_s$ $\sim$ 0.4--0.5. Once $\Sigma_l / \Sigma_s \gtrsim$ 0.5, chaotic interactions cause oligarchs to merge. When the orbits overlap, large-scale chaos leads to rapid mergers and planets with masses comparable to the mass of the Earth. In less massive disks with $\Sigma_0 \lesssim$ 2 g cm$^{-2}$, the oligarchs remain well-separated relative to their Hill radii (Figure 14). Pairwise interactions between oligarchs then produce most of the mergers. This evolution is slow and results in more planets with masses comparable to the mass of Mercury. 
Finally, Figure 15 shows the growth of the oligarchs for a calculation with $\Sigma_0$ = 8 g cm$^{-2}$. Here, tracks of a single color show the time evolution in the mass of a single oligarch. Tracks end when an oligarch merges with another oligarch in the grid. The tracks in Figure 15 clearly illustrate the three stages of growth in the terrestrial zone. Most tracks are initially steep and then slowly turn over. The dramatic changes in the slope mark the transition from runaway growth to oligarchic growth. At $\sim 10^5$ yr, this transition propagates as a wave from the inner part of the grid at 0.4 AU to the outer part of the grid at 2 AU. The onset of distinct steps in mass indicates the transition from oligarchic growth to chaotic growth. Starting at $\sim$ 1--3 Myr, this transition takes $\sim$ 3--10 Myr to propagate from 0.4 AU to 2 AU. Once the full grid is chaotic, mergers rapidly separate the oligarchs into small oligarchs, with $m \sim$ 0.01--0.1 $m_{\oplus}$ and $e \gtrsim$ 0.1, and large oligarchs, with $m \gtrsim$ 0.2--1 $m_{\oplus}$ and $e \lesssim$ 0.05. After $\sim$ 100 Myr, this division is stark: there are 3 oligarchs with $m \gtrsim 0.5 ~ m_{\oplus}$ and 3 others with $m \lesssim 0.07 ~ m_{\oplus}$. Calculations with a large range in $m_{pro}$ provide a measure of the quality of these conclusions. Because the 0.4--2 AU grid is much larger than the 0.86--1.14 AU grid, these calculations produce more oligarchs and are less sensitive to $m_{pro}$ than calculations in a smaller grid. Nevertheless, calculations with $m_{pro} \gtrsim$ $3 \times 10^{25}$ ($\Sigma_0 / {\rm 4~g~cm^{-2}}$) g yield a much larger range in the evolution timescales than calculations with smaller $m_{pro}$. As in \S3.3, models with $m_{pro} \sim$ $1-10 \times 10^{24}$ g yield consistent results for these timescales. \subsection{Other Physics} In this set of calculations with the hybrid code, we concentrate on mergers and dynamical interactions between solid objects.
We ignore fragmentation and interactions with the gas. In previous calculations, fragmentation of 1--10 km planetesimals produces cm- to m-sized fragments which are more closely coupled to the gas than larger planetesimals. Gas drag circularizes the orbits of these objects, which can then be accreted more rapidly than the leftover planetesimals. From previous numerical calculations, the onsets of runaway and oligarchic growth are $\sim$ 25\% earlier than illustrated in \S3.3 and \S3.4 \citep[e.g.,][]{ws93,kl99,kb04b,kb05}. Fragmentation is important for setting the visibility of the disk \citep{kb04b}. The rate of debris production from fragmentation typically peaks during the transition from runaway to oligarchic growth. If fragmentation produces an efficient collisional cascade, radiation pressure and Poynting-Robertson drag may eject small particles before the oligarchs can accrete the fragments. The disk then produces a substantial infrared excess \citep{kb04b}. If some process halts or slows the cascade, oligarchs can accrete the fragments efficiently \citep[e.g.,][]{gol04b}. The disk may then produce a modest infrared excess. We have started calculations to evaluate the importance of fragmentation in setting the timescales for runaway and oligarchic growth and to estimate the amount of dusty debris as a function of time. Interactions with the gas can produce significant evolution in the eccentricity and orbital semimajor axis of oligarchs \citep[e.g.,][]{art93,agn02,kom02,tan02,tan04}. In the calculations here, simple gas drag as in \citet{ada76} has a negligible impact on the evolution of planetesimals and oligarchs. However, interactions with density waves can circularize the orbits of oligarchs on short timescales \citep[e.g.,][]{art93,agn02,kom02,tan04} and can cause significant inward migration of the orbit \citep[e.g.][]{art93,tan02}.
For a gaseous disk with the surface density of a minimum mass solar nebula, the circularization timescale for an oligarch is $\tau_c \sim$ 0.1--0.2 Myr at 1 AU \citep{agn02,tan04}. Significant orbital migration of an oligarch can occur over $\tau_m \sim$ 1--3 Myr \citep{tan02,tan04}. Thus, interactions with the gas occur on timescales comparable to the timescales for oligarchs to grow and interact dynamically. The estimates for orbital migration and eccentricity damping assume a large gas density in the disk. Observations of young stars suggest the dust -- and presumably the gas -- disappears in $\sim$ 1--10 Myr \citep[e.g.,][and references therein]{cal05,dal05}. If the gas density declines exponentially with time as in equation (7) with $t_0 \sim$ 1 Myr, radial migration may have little impact on the evolution of oligarchs. Eccentricity damping, however, may play an important role in the transition from oligarchic to chaotic growth. We have begun calculations to see how eccentricity damping and radial migration change the outcomes of our calculations. Together, fragmentation and interactions with the gas probably set the eccentricity of the final ensemble of planets. If fragmentation is efficient at converting intermediate mass objects into smaller fragments, the fragments can efficiently circularize the orbits of growing oligarchs (equation 2; Figures 5--6). Interactions with the gas also circularize the orbits of the most massive objects. We have started a set of calculations to test the relative efficiency of damping and fragmentation in setting the eccentricity of surviving oligarchs. \section{SUMMARY} Our calculations with the hybrid $n$-body--coagulation code are the first to evolve an ensemble of planetesimals into a system of a few planets.
The calculations reproduce both the standard results of coagulation calculations for the early stages of planet formation and the results of $n$-body calculations for the late stages of planet formation \citep[see also][]{bk06}. In particular, we follow the evolution through the critical transition from oligarchic growth to chaotic growth, where objects evolve from isolated oligarchs into full-fledged planets. These calculations provide some new insights into the transition from oligarchic growth to chaotic growth \citep[see also][]{kok02,kom02}. In a disk with initial surface density $\Sigma_0 \gtrsim$ 3--16 g cm$^{-2}$ at 1 AU, collisions and mergers of 1--10 km planetesimals naturally lead to the formation of terrestrial planets with masses ranging from 0.05--2 $m_{\oplus}$ \citep[see also][and references therein]{ws93,wei97,cha01,kok02}. The growth follows a standard pattern. Slow orderly growth rapidly gives way to runaway growth, which concentrates much of the initial mass into many protoplanets with masses $\sim$ $10^{26}$ g. Stirring of leftover planetesimals by the largest objects reduces gravitational focusing factors and slows growth. During oligarchic growth, large oligarchs become isolated and slowly accrete the leftovers. Several factors produce a transition from oligarchic growth to chaotic growth \citep[see also][]{kom02}. As the oligarchs grow, they contain an ever increasing fraction of the total mass. A few oligarchs merge but most remain isolated from other oligarchs. As their Hill radii grow, their orbits begin to overlap. When the surface density in oligarchs reaches a critical value, oligarchs interact chaotically. As gravitational interactions scatter oligarchs throughout the disk, the merger rate increases dramatically. Eventually only a few oligarchs remain in roughly circular orbits. Our results isolate the two conditions necessary for the transition from oligarchy to chaos. 
When oligarchs contain roughly half of the mass of solid material in the disk, $\Sigma_l \sim \Sigma_s$, dynamical interactions between oligarchs are more important than dynamical friction from planetesimals \citep[e.g.,][]{gol04b}. When the surface density in oligarchs exceeds a critical value, $\Sigma_c \approx$ 2--3 g cm$^{-2}$, oligarchs begin to interact chaotically (\S3). More massive disks can reach this limit when $\Sigma_l < \Sigma_s$ (Table 1). In less massive disks, milder dynamical interactions begin when $\Sigma_l \gtrsim \Sigma_s$ (Figure 11). If the surface density in oligarchs remains below the critical value, interactions between oligarchs are less chaotic even when $\Sigma_l \gtrsim \Sigma_s$. Interactions among 2 or 3 oligarchs produce a small merger rate which eventually yields a system with lower mass planets than in more massive disks. Although dynamical interactions among the ensemble of oligarchs produce terrestrial mass planets in all disks, more massive disks yield more massive planets. Our results suggest a maximum mass, $m_{max} \sim$ 1--2 $m_{\oplus}$ for $\Sigma_0 \sim$ 8--16 g cm$^{-2}$ and $m_{max} \sim$ 0.1--0.2 $m_{\oplus}$ for $\Sigma_0 \sim$ 1--2 g cm$^{-2}$. Because young stars appear to have a wide range of initial disk masses, we expect a wide range in the masses of terrestrial planets in exosolar systems. In future studies, we plan to address this issue in more detail. In terrestrial planet formation, the transitions between different stages of growth produce distinct waves through the disk. During the transition from orderly growth to runaway growth, the increase in the collision rate rapidly propagates from the inner edge of the disk to the outer edge. This transition is rapid and takes $\lesssim 10^5$ yr to move from 0.4 AU to 2 AU. Although less rapid, the transitions from runaway to oligarchic growth and from oligarchic to chaotic growth also propagate from the inner disk to the outer disk.
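The two conditions above, $\Sigma_l \sim \Sigma_s$ and $\Sigma_l \gtrsim \Sigma_c$, can be encoded as a pair of simple predicates. Here $\Sigma_c = 2.5$ g cm$^{-2}$ is one representative value inside the quoted 2--3 g cm$^{-2}$ range:

```python
def friction_fails(sigma_l, sigma_s):
    """Dynamical friction from leftover planetesimals can no longer damp
    the oligarchs once they hold about half of the solid mass."""
    return sigma_l >= sigma_s

def above_critical(sigma_l, sigma_c=2.5):
    """Oligarch surface density exceeds the critical value of
    ~2-3 g cm^-2 at which chaotic interactions begin (2.5 assumed)."""
    return sigma_l >= sigma_c

# A massive disk can pass the critical density while planetesimals still
# dominate (sigma_l < sigma_s); a low-mass disk never reaches sigma_c:
print(above_critical(3.0), friction_fails(3.0, 5.0))   # True False
print(above_critical(1.0), friction_fails(1.0, 0.8))   # False True
```

The first case mirrors the massive-disk behavior in Table 1, where chaos begins while $\Sigma_l < \Sigma_s$; the second mirrors low-mass disks, where interactions stay mild even after oligarchs dominate.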
During the transition to chaotic growth, dynamical interactions tend to produce more chaotic orbits at the outer edge of the disk. This behavior depends on the surface density gradient. In disks with steep density gradients, $\Sigma \sim \Sigma_0 a^{-n}$ with $n \gtrsim$ 3/2, chaotic interactions propagate slowly outward. In disks with shallower density gradients, $n \lesssim$ 1, dynamical interactions tend to concentrate more mass in the outer part of the disk. This difference in behavior is set by the growth rate, $\sim P/\Sigma \sim a^{n+3/2}$, where $P$ is the orbital period: the wave of growth propagates more rapidly through disks with shallower density gradients \citep[e.g.][]{lis87}. These results have several interesting consequences for the evolution of planets in the terrestrial zone \citep[see also][]{kom02}. The transition from oligarchic to chaotic growth occurs on timescales, $\sim$ a few Myr, well before radiometric evidence suggests the formation of the Earth was fairly complete \citep[e.g.,][]{yin02}. Planets are also fully formed well before the estimated time of the Late Heavy Bombardment, $\sim$ 100--300 Myr after the formation of the Sun \citep[e.g.,][]{ter74, har80, ryd02, koe03}. Throughout the chaotic growth phase, our calculations produce many lunar- to Mars-sized objects on highly eccentric orbits. These objects are good candidates for the `giant impactor' that collided with the Earth to produce the Moon \citep{har75b,cam76,ben86,can04a,can04b}. As we complete calculations with fragmentation and migration, predicted mass and eccentricity distributions for these objects will yield better estimates for the probability of these events. Together with the dynamical influence of Jupiter \citep[e.g.,][]{kom04}, the highly eccentric orbits of lower mass oligarchs have an important role in clearing the inner solar system \citep[e.g.,][]{gol04a,kom04} and delivering water to the Earth \citep{lun03}.
Traditionally, data from our solar system provide the only tests of clearing mechanisms \citep[e.g.,][and references therein]{gro01,nes02a,nes02b}. In the next decade, comparisons between predicted infrared excesses from the debris disks leftover from terrestrial planet formation and observations from {\it Spitzer} and {\it TPF-Darwin} will yield new constraints on clearing timescales and the evolution of volatile species in the terrestrial zone \citep[e.g.,][]{bei05}. These comparisons will enable better numerical calculations and an improved understanding of the final stages of terrestrial planet formation. \acknowledgements We thank M. Geller and an anonymous referee for important comments that improved the content of this paper. We acknowledge a generous allotment, $\sim$ 5 cpu years, of computer time on the Silicon Graphics Origin-2000 computers `alhena', `castor', and `pollux' and the Dell Linux cluster `cosmos' at the Jet Propulsion Laboratory through funding from the JPL Institutional Computing and Information Services and the NASA Directorates of Aeronautics Research, Science, Exploration Systems, and Space Operations. We thank V. McGlasson, M. Phelps, and other staff members of the CfA computation facility for installation and support of the SUN cluster `hydra', where we used $\sim$ 6 cpu years to perform many of our calculations. The {\it NASA} {\it Astrophysics Theory Program} supported part of this project through grant NAG5-13278.
\section{Introduction} The geodesic flow on a (pseudo) Riemannian manifold $(M, g)$ is integrable if the underlying metric admits a sufficient number of Killing vectors, or Killing tensors. The former correspond to first integrals linear in the velocity, and the latter give polynomial first integrals of higher order. For example, the geodesic motion on a round two--sphere is integrable, as any generator of the isometry group $SO(3)$ gives a Killing vector commuting with the Hamiltonian. On a tri--axial ellipsoid there are no Killing vectors, but the additional quadratic first integral required for integrability is given by a rank--two Killing tensor \cite{ellipsoid}. Let $u$ be a unit tangent vector to a curve $\Gamma\subset M$, and let $a\equiv \nabla_u u$ be the acceleration of $\Gamma$. The geodesic condition $a=0$ is not invariant under conformal rescalings of the metric $g\rightarrow \Omega^2 g$, but there is a different preferred set of curves on manifolds endowed with only a conformal structure $(M, [g])$, where $[g]=\{\Omega^2 g,\; \Omega:M\rightarrow \mathbb{R}^+\}$. These {\em conformal geodesics} are also known as {\em conformal circles}, and arise as solutions to a system of third order ODEs on $M$. A conformal geodesic $\Gamma$ is uniquely specified by a point, a tangent direction, and a perpendicular acceleration. If $g$ is a representative metric in the conformal class, and $\nabla$ is the Levi--Civita connection of $g$, then the conformal geodesic equations are \begin{equation} \label{conf_circ} \nabla_u a= -(|a|^2 +L(u, u))u+L^{\sharp}(u), \end{equation} where $L\in\Gamma(T^*M\otimes T^*M)$ is the Schouten tensor given in terms of the Ricci tensor $R$ and the Ricci scalar $S$ by \begin{equation} \label{schouten_tn} L=\frac{1}{n-2}\Big(R-\frac{1}{2(n-1)}Sg\Big), \end{equation} and $L^{\sharp}:TM\rightarrow TM$ is the endomorphism defined by $g(L^{\sharp}(X), Y)=L(X, Y)$ for all vector fields $X, Y$.
It can be demonstrated \cite{BE, tractor, tod_circles, sihlan} that the conformal geodesics only depend on the conformal class $[g]$, and not on the choice of the representative metric. In (\ref{schouten_tn}) it is assumed that $n>2$. The case where $n=2$ will be discussed in \S\ref{cccm}. Neither the Killing vectors nor the conformal Killing vectors of $g$ give rise to first integrals of (\ref{conf_circ}), and it is natural to ask whether there are examples of integrable conformal geodesic motions, and what geometric structures on $(M, [g])$ give rise to this integrability. In \cite{tod_circles, rodar} it was shown that the conformal Killing--Yano two--forms (CKY) give rise to first integrals of (\ref{conf_circ}): $Y\in \Lambda^2(T^*M)$ is a CKY if ${(\nabla Y)}_0\in \Gamma (\Lambda^3(T^*M))$, where $T_0$ denotes the trace--free part of $T$. The corresponding first integral is then \begin{equation} \label{first_int_int} Q=Y(u, a)-\frac{1}{n-1} \mbox{div}(Y)(u), \end{equation} where $n=\mbox{dim}(M)$, and $\mbox{div}=*d*$ is the divergence. This was sufficient \cite{tod_circles} to integrate (\ref{conf_circ}) on a non--conformally flat ${\bf Nil}$ 3--manifold, and a squashed 3--sphere, but already the conformal class of the Schwarzschild metric proved too difficult to handle, as there do not exist sufficiently many CKYs. In this paper we shall study the integrability of (\ref{conf_circ}) on four--dimensional conformal structures corresponding to some gravitational instantons: solutions to the Einstein equations on Riemannian four--manifolds which are either compact, or complete and asymptotically approaching a locally flat space; the decay rate, as well as the topology at infinity, varies between different examples of instantons. See \cite{GHr, Dbook} for details.
We shall explore the existence of three CKYs corresponding to the underlying hyper--K\"ahler structure, and reduce (\ref{conf_circ}) to a system of 2nd order ODEs corresponding to a forced geodesic equation in a constant magnetic field. Further progress can be made for the anti--self--dual (ASD) Taub--NUT and the Eguchi--Hanson instantons. In the ASD Taub--NUT case we shall establish complete integrability by separating the Hamilton--Jacobi equation. \begin{theo} \label{thm_TN} The conformal geodesic equation on the anti--self--dual Taub--NUT manifold is completely integrable in the Arnold--Liouville sense: it reduces to a geodesic equation in a self--dual magnetic field which admits four first integrals in involution. The associated Hamilton--Jacobi equation is separable, and the conformal geodesic equations reduce to quadratures. \end{theo} The Eguchi--Hanson metric admits an isometric and tri--holomorphic action of $SO(3)$ which preserves the magnetic field, and therefore gives rise to three charged linear first integrals of the Lorentz force equation. While full integrability cannot be established in this case, there is enough structure to reduce the conformal geodesic motion to quadratures under an additional assumption that the conformal geodesics lie on the 3--dimensional orbits of the $SO(3)$ subgroup of the isometry group. The paper is organised as follows. In the next section we shall introduce the conformal geodesic equations (\ref{conf_circ}), and focus on the special case where the underlying conformal structure admits an Einstein metric. We shall discuss the first integrals of (\ref{conf_circ}), and establish a link with the Lorentz force equations for hyper--K\"ahler four--manifolds (Proposition \ref{prop31}). In \S\ref{sectionTN} and \S\ref{sectionEH} we shall study (\ref{conf_circ}) on the ASD Taub--NUT metric, and the Eguchi--Hanson metric respectively.
The ASD Taub--NUT case is completely integrable (Theorem \ref{thm_TN} will follow from Proposition \ref{prop_TN1} and Proposition \ref{prop_TN2}), and the Eguchi--Hanson case has a couple of integrable sub--cases (Proposition \ref{propEH}). In \S\ref{section3} we shall establish the integrability of the conformal geodesic flow for the Fubini--Study metric on $\mathbb{CP}^2$. We shall do so by making use of nine CKYs on this space, and recover the results of \cite{jap2}, where all conformal geodesics of $\mathbb{CP}^n$ have been characterised by their horizontal lifts to helices on the total space of the fibration $S^{2n+1}\rightarrow \mathbb{CP}^n$. In Appendix A we shall rule out the existence of non--flat Riemannian Gibbons--Hawking metrics with three commuting vector fields, and in Appendix B we shall provide necessary and sufficient conditions for a Killing trajectory to be a conformal geodesic. \subsection*{Acknowledgements.} The work of MD has been partially supported by STFC consolidated grants ST/P000681/1 and ST/T000694/1. \section{Conformal geodesics on Einstein manifolds} \label{section2} Let $(M, g)$ be a Riemannian manifold. When using the index-free notation we shall denote the $g$--inner product of two vector fields $X$ and $Y$ by $g(X, Y)$. We also set $|X|^2\equiv g(X, X)$, and use the notation $X\hook\psi$ for the $(p-1)$--form arising as a contraction of the $p$--form $\psi$ with the vector field $X$. If $\Gamma$ is a curve, and $u$ is a tangent vector to $\Gamma$, then $\nabla_u$ denotes the directional derivative along $\Gamma$, where $\nabla$ is the Levi--Civita connection of $g$. In explicit computations involving conformal geodesics and conformal Killing--Yano tensors it is convenient to adopt the abstract index notation \cite{PR}. Thus $u^a, a=1, 2, \dots, \mbox{dim}(M)$ denotes a vector, and $u^a\nabla_a v^b$ denotes the directional covariant derivative $\nabla_u v$ of another vector $v$.
On a Riemannian manifold $(M, g)$ an isomorphism between $TM$ and $T^*M$ is realised by $u_a=g_{ab}u^b$, where now $u_a$ denotes a one--form, and the Einstein summation convention is used. Despite the presence of these indices, no choice of basis has been made. \vskip5pt If the metric $g$ is Einstein, i.e. $R=(S/n)g$, then the conformal geodesic equation (\ref{conf_circ}) for a curve $\Gamma$ parametrised by an arc--length $s$, and with a unit tangent vector $u$, reduces to \begin{equation} \label{conf_circ_e} \nabla_u a=-|a|^2 u, \quad \mbox{where}\quad |a|^2\equiv g(a, a)\geq 0\,\,\mbox{is a constant}, \quad\mbox{and}\quad g(a, u)=0. \end{equation} This system of 3rd order equations (\ref{conf_circ_e}) has long been studied in Riemannian geometry \cite{jap1, jap2}, where the corresponding solution curves have been called circles. The terminology is motivated by the observation that a development\footnote{In references \cite{jap1, jap2} the equation (\ref{conf_circ_e}) is written as $ \nabla_s X_s =|a| Y_s, \quad \nabla_s Y_s=-|a| X_s $ where $X_s$ and $Y_s$ are unit vector fields, $X_s$ is the tangent vector to the curve $\Gamma$ parametrised by the arc--length $s$, $\nabla_s$ is the covariant derivative of $g$ along $\Gamma$, and the positive constant $|a|^{-1}$ is the {\em radius} of the circle $\Gamma$. If $\Gamma^s_0$ is the parallel displacement of tangent vectors along $\Gamma$ from $\Gamma(s)$ to $\Gamma(0)$, and ${X_s}^*=\Gamma^s_0(X_s)$, then the development $\Gamma^*$ is the unique curve in $T_x M$ starting at the origin such that its tangent vector is parallel to $X_s$ in the Euclidean sense. } of a circle $\Gamma$ starting at a point $x\in M$ is an ordinary circle in $T_xM$. It is also the case that circles on the round sphere $S^n$ are intersections of $S^n\subset \mathbb{R}^{n+1}$ with planes (not necessarily through the origin) in $\mathbb{R}^{n+1}$. Thus they are indeed circles.
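On $\mathbb{R}^n$ the system (\ref{conf_circ_e}) reduces to $\dddot{\bf x}=-|a|^2\dot{\bf x}$, whose solutions are round circles; this can be confirmed with a short symbolic computation (a {\tt sympy} sketch, not part of the argument, working in $\mathbb{R}^2$ with illustrative initial data):

```python
# Symbolic sanity check: the curve
#   x(s) = x(0) + |a|^{-1} v(0) sin(|a|s) + |a|^{-2} a(0) (1 - cos(|a|s))
# solves x''' = -|a|^2 x' and stays at constant Euclidean distance
# |a|^{-1} from the centre x(0) + |a|^{-2} a(0).
import sympy as sp

s, A = sp.symbols('s A', positive=True)       # A plays the role of |a|
# work in R^2 with x(0) = 0, v(0) = (1, 0) and a(0) = (0, A), so that
# v(0) is a unit vector and a(0) is orthogonal to it with norm A
x = sp.Matrix([sp.sin(A*s)/A, (1 - sp.cos(A*s))/A])
# third derivative plus |a|^2 times first derivative must vanish
assert (x.diff(s, 3) + A**2*x.diff(s)).applyfunc(sp.simplify) == sp.zeros(2, 1)
# distance from the claimed centre is constantly |a|^{-1}
centre = sp.Matrix([0, 1/A])
dist2 = sp.expand((x - centre).dot(x - centre))
assert sp.simplify(dist2 - A**-2) == 0
```

The same computation goes through verbatim in any dimension, with $v(0)$ and $a(0)$ an orthogonal pair.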
\subsection{First Integrals} If $Y\in \Lambda^2(M)$ satisfies the conformal Killing--Yano (CKY) equation \begin{equation} \label{CKY} \nabla_aY_{bc}=\nabla_{[a}Y_{bc]}-2g_{a[b}K_{c]} \end{equation} for some one--form $K\in \Lambda^1(M)$, then (if $\mbox{dim}(M)=4$) equation (\ref{conf_circ}) implies that \begin{equation} \label{Q} Q=u^aa^bY_{ab}-u^aK_a \end{equation} is constant along the conformal geodesics. In the special case where $Y_{ab}$ is a K\"ahler form, the one--form $K$ is zero and the linear term in (\ref{Q}) is not present. In this case the first integral (\ref{Q}) has been called the {\em complex torsion} (though it is real) in \cite{jap2}. In four dimensions the condition $u^c\nabla_c Q=0$ with $Q$ given by (\ref{Q}) was established in \cite{tod_circles}. In \cite{rodar} it was put in the general context of parabolic geometries. \subsection{Examples} \subsubsection{Circles on $\mathbb{R}^n$.} These are just ordinary circles. Equation (\ref{conf_circ_e}) becomes $\dddot{{\bf x}}=-|a|^2\dot{\bf x}$, and can be readily solved \[ {\bf{x}}(s)={\bf x}(0)+|a|^{-1}{\bf v}(0)\sin{(|a|s)} +|a|^{-2}{\bf a}(0)(1-\cos{(|a|s)}), \] where ${\bf x}(0)$ is arbitrary, ${\bf v}(0)$ is a unit vector in $\mathbb{R}^n$, and the vector ${\bf a}(0)$ is orthogonal to ${\bf v}(0)$, and has squared norm $|a|^2$. We recognise these conformal geodesics as circles centered at ${\bf x}(0)+|a|^{-2}{\bf a}(0)$, and with Euclidean radius $|a|^{-1}$. Conformal rescalings of the metric preserve conformal geodesics, so circles on $S^n$ are also circles (intersections of $S^n\subset \mathbb{R}^{n+1}$ with planes in $\mathbb{R}^{n+1}$ which do not necessarily pass through the origin). In particular all conformal geodesics on $S^n$ are closed. \subsubsection{Magnetic motion on the upper half--plane.} \label{cccm} Conformal geodesics on the hyperbolic space can also be obtained from those on $\mathbb{R}^n$.
This time, however, they are not necessarily closed, and we shall consider this case separately to illustrate the significance of the first integrals in the qualitative behaviour of conformal geodesics. Let ${\mathbb{H}}$ be the upper half--plane with coordinates $(x, y)$, and a constant curvature metric \[ g=\frac{dx^2+dy^2}{y^2}, \quad y>0 \] (so that the Ricci scalar is equal to $-2$), and a magnetic field $F=B\; \mbox{vol}_{\mathbb{H}}$ given by a constant multiple $B>0$ of the parallel volume form on ${\mathbb{H}}$. We choose the potential $\Phi=By^{-1}dx$, so that $F=d\Phi$. The magnetic Lagrangian\footnote{The variational formulation of the conformal geodesic equations with $n>2$ has been developed in \cite{DK21}.} \[ L=\frac{1}{2}\frac{\dot{x}^2+\dot{y}^2}{y^2}-B\frac{\dot{x}}{y} \] admits a conserved energy integral which we set to $1$. Taking $u$ to be a unit tangent vector to an integral curve of the corresponding Euler--Lagrange equations we find \[ a\equiv\nabla_u u= B J(u), \] where $J$ is the complex structure on ${\mathbb{H}}$ defined by $g(v, J(w))=\mbox{vol}_{\mathbb{H}}(v, w)$. Therefore the circle equations (\ref{conf_circ_e}) hold with $|a|^2=B^2$, as \[ \nabla_u a=B \nabla_u J(u)=B J(\nabla_u u)=B^2 J^2(u)=-B^2 u. \] The two first integrals $|a|^2$ and $Q$ given by (\ref{Q}) (where now $Y=F$) are therefore equal, as \[ Q\equiv F(u, a)=B\; \mbox{vol}_{\mathbb{H}}(u, B J(u))=B^2 g(u, u)=B^2. \] The resulting trajectories are circles of radius $|B|^{-1}$. The behaviour of the circles depends on the value of $B$ (Figure 1) \begin{center} \includegraphics[width=7cm,height=6cm,angle=0]{circlesH2.pdf} \begin{center} {\em Figure 1.} Geodesics (in blue) and circles on ${\mathbb{H}}$. \end{center} \end{center} The circles are open (in the language of \cite{comet} the magnetic field is not strong enough to capture the particle) and unbounded if $0<B<1$, and closed if $B>1$.
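This qualitative picture can be checked numerically. The following sketch (not part of the argument) integrates the Euler--Lagrange equations of the magnetic Lagrangian for $B=2$, and uses the standard facts that a unit-speed curve of constant geodesic curvature $B>1$ on ${\mathbb{H}}$ is a circle of hyperbolic radius $\rho$ with $\coth\rho=B$, hence of length $2\pi\sinh\rho=2\pi/\sqrt{B^2-1}$:

```python
# Numerical check of the magnetic flow on the upper half-plane: for B = 2
# the trajectory is a closed circle, traversed with unit hyperbolic speed
# and with g(a,a) = B^2 along the curve.
import numpy as np

B = 2.0

def rhs(state):
    # Euler--Lagrange equations of L = (xdot^2 + ydot^2)/(2 y^2) - B xdot/y
    x, y, vx, vy = state
    ax = 2*vx*vy/y - B*vy
    ay = (vy**2 - vx**2)/y + B*vx
    return np.array([vx, vy, ax, ay])

def rk4(state, dt):
    k1 = rhs(state); k2 = rhs(state + dt/2*k1)
    k3 = rhs(state + dt/2*k2); k4 = rhs(state + dt*k3)
    return state + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# unit-speed initial data at (0, 1):  (xdot^2 + ydot^2)/y^2 = 1
state = np.array([0.0, 1.0, 1.0, 0.0])
T = 2*np.pi/np.sqrt(B**2 - 1)      # expected period (circumference)
n = 4000; dt = T/n
for _ in range(n):
    state = rk4(state, dt)
    x, y, vx, vy = state
    speed2 = (vx**2 + vy**2)/y**2             # g(u,u), should stay 1
    ax, ay = -B*vy, B*vx                      # covariant acceleration a = B J(u)
    a2 = (ax**2 + ay**2)/y**2                 # g(a,a), should stay B^2
    assert abs(speed2 - 1) < 1e-6 and abs(a2 - B**2) < 1e-5
# after one period the circle closes up
assert np.hypot(state[0] - 0.0, state[1] - 1.0) < 1e-4
```

Rerunning with $0<B<1$ instead produces an unbounded trajectory, in agreement with the dichotomy above.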
The special case $B=1$ corresponds to horocycles tangent to the $y=0$ boundary of ${\mathbb{H}}$, or lines $y=$const parallel to the $x$--axis. \section{Circles on hyper--K\"ahler four--manifolds and the Lorentz force} Let $(M, g)$ be an oriented Riemannian four--manifold with a hyper--K\"ahler structure given by parallel two--forms $\Omega^1, \Omega^2, \Omega^3$. In particular $(M, g)$ is Ricci--flat, so the conformal geodesic equations reduce to (\ref{conf_circ_e}). \begin{prop} \label{prop31} The conformal geodesic equations on a hyper--K\"ahler four--manifold $(M, g)$ reduce to the Lorentz force equations \begin{equation} \label{lorentzf} \nabla_u u=|a|J(u), \end{equation} where $J$ is the complex structure corresponding to the K\"ahler form given by a constant self--dual Maxwell field $F=\sum_j c^j\Omega^j$ with constants $(c^1, c^2, c^3)$ such that $|{\bf c}|=1$. \end{prop} \noindent {\bf Proof.} Each of the K\"ahler forms gives rise to a first integral (\ref{Q}), so there are three of those. These three integrals allow us to integrate the circle equations once, and reduce them, on a level set \begin{equation} \label{thecs} c^i=\Omega^i(u, a), \quad i=1, 2, 3, \end{equation} to geodesic equations of a charged particle moving in a magnetic field. If the orientation of $(M, g)$ is chosen so that the Riemann tensor of $g$ is anti--self--dual, then this magnetic field is self--dual. To see how the Lorentz force arises assume that $M$ is parallelizable, and choose an orthonormal frame such that \[ g={(e^1)}^2+{(e^2)}^2+{(e^3)}^2+{(e^4)}^2, \] and \[ \Omega^1=e^1\wedge e^4+e^2\wedge e^3, \quad \Omega^2=e^2\wedge e^4+e^3\wedge e^1, \quad \Omega^3=e^3\wedge e^4+e^1\wedge e^2. \] Let $u^a$ and $a^a$ be the components of $u, a$ in the frame of vector--fields dual to $e^a$. The relations (\ref{thecs}) can then be solved for $a$.
Using the ordinary vector notation for the components ${\bf u}=(u^1, u^2, u^3), {\bf a}=(a^1, a^2, a^3)$ we can rewrite (\ref{thecs}) as ${\bf c}={\bf u}a^4-{\bf a}u^4+{\bf u}\wedge {\bf a}$, and find \begin{equation} \label{aGH} {\bf a}= {\bf c}\wedge {\bf u} - u^4{\bf c} , \quad a^4={\bf u}\cdot {\bf c}. \end{equation} Note that \begin{equation} \label{canda} g(a, u)={\bf a}\cdot {\bf u}+a^4u^4=0, \quad g(a, a)=|{\bf c}|^2 \end{equation} so the first integrals $g(a, a)$ and ${\bf c}$ are not independent. The circle equations now reduce to $a=\nabla_u u$, or (\ref{lorentzf}), which is the Lorentz force equation for a unit mass particle with charge $e=|a|$ in a constant self--dual Maxwell field \[ F=\frac{1}{|a|}\Big(c^1\Omega^1+c^2\Omega^2+c^3\Omega^3\Big). \] In view of (\ref{canda}) we can redefine ${\bf c}$ to be a unit vector, and recover the statement of Proposition \ref{prop31}. \begin{flushright} $\Box $ \end{flushright} For an uncharged particle $|a|=0$, and (\ref{lorentzf}) reduces to the ordinary geodesic equation. To make further progress with $|a|\neq 0$ we need to seek first integrals of (\ref{lorentzf}). Any Killing vector on $(M, g)$ which is also tri--holomorphic, i.e. ${\mathcal L}_K\Omega^i=0$, will give rise to a first integral: if ${F}=d\Phi$, the potential one--form $\Phi$ can be chosen such that ${\mathcal L}_K\Phi=0$ (we then say that $K$ Lie--drags $\Phi$). Indeed, computing \[ 0={\mathcal L}_K F= d(K\hook F) \quad \mbox{implies that, locally}\quad K\hook F=df \] for some function $f:M\rightarrow \mathbb{R}$. Now \[ {\mathcal L}_K\Phi=d(f+K\hook \Phi). \] Choosing $f_1:M\rightarrow \mathbb{R}$ such that $K(f_1)=-f-K\hook \Phi$, and performing a gauge transformation $\Phi\rightarrow \Phi+df_1$, we establish that in this gauge ${\mathcal L}_K\Phi=0$. This implies that \begin{equation} \label{cg4}{\mathcal K}:=K^a(u_a+e\Phi_a)=K^ap_a \end{equation} is conserved along conformal geodesics on a level set of the $c^i$.
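The inversion (\ref{aGH}) and the relations (\ref{canda}) are exact polynomial identities on the unit sphere bundle, and can be confirmed with a short symbolic computation (a {\tt sympy} sketch, not part of the argument):

```python
# Symbolic check of (aGH) and (canda): with |u|^2 = u.u + (u^4)^2 = 1,
# the vectors a = c x u - u^4 c and a^4 = u . c reproduce
# c = u a^4 - a u^4 + u x a, together with g(a,u) = 0 and g(a,a) = |c|^2.
import sympy as sp

u1, u2, u3, u4, c1, c2, c3 = sp.symbols('u1 u2 u3 u4 c1 c2 c3')
u = sp.Matrix([u1, u2, u3]); c = sp.Matrix([c1, c2, c3])
a = c.cross(u) - u4*c
a4 = u.dot(c)

unit = u.dot(u) + u4**2 - 1            # vanishes for unit tangent vectors
expr = u*a4 - a*u4 + u.cross(a) - c
# each component equals c_i (|u|^2 - 1): an exact polynomial identity
assert (expr - c*unit).expand() == sp.zeros(3, 1)
# g(a,u) = 0 identically, and g(a,a) = |c|^2 whenever |u| = 1
assert sp.expand(a.dot(u) + a4*u4) == 0
assert sp.expand(a.dot(a) + a4**2 - c.dot(c)*(unit + 1)) == 0
```
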
The momentum $p_a$ is defined by \[u^a=g^{ab}(p_b-e\Phi_b),\] which is the usual convention for charged particles. For (\ref{lorentzf}) to be integrable we in particular need the geodesic equation to be integrable, and we need four constants of the motion in involution. One is the Hamiltonian (the unit mass), and the other three need to come from Killing vectors and Killing tensors. If at least two of them arise from Killing vectors then, to be involutive, these must commute, and to give integrals for (\ref{lorentzf}) they must Lie--drag the $\Omega^i$. Now necessarily there is a constant linear combination of them which has purely anti--self--dual covariant derivative (this is Theorem 2.3 in \cite{gibbons_ruback}, credited to Hitchin by these authors), and this is the characterising property of the Gibbons--Hawking metrics \cite{GHr}. Consequently we henceforth restrict to these metrics, but there may be other integrable cases with two or more Killing tensors (see \cite{Houri, frolov} for other applications of Yano tensors to geodesic integrability). \subsection{The Gibbons--Hawking metrics} All hyper--K\"ahler four--manifolds which admit a tri--holomorphic Killing vector can be put in the Gibbons--Hawking form \cite{GHr} \begin{equation} \label{GH} g=V {\bf dx}\cdot {\bf dx}+V^{-1}(d\tau+ \omega)^2 \end{equation} where $V$ is a function, and $\omega$ a one--form, on $\mathbb{R}^3$. Choosing a frame \[ e^4=V^{-1/2}(d\tau+\omega), \;\; e^i=V^{1/2}dx^i, \quad E_4=V^{1/2}\partial_\tau, \;\; E_i= V^{-1/2} (\partial_i-A_i\partial_\tau), \quad\mbox{where}\quad \omega=A_i dx^i, \] one verifies that the two--forms ${\Omega^i}$ are closed iff \[ d\omega=*dV, \] which in particular implies that $V$ is harmonic. Consider the Killing vector $K=\partial_\tau$ of the metric (\ref{GH}), and set $K^\flat=V^{-1}(d\tau+\omega)$; then the derivative of $K^\flat$ is anti--self--dual (the characterising property of the Gibbons--Hawking metrics). It follows that $K$ is tri--holomorphic, i.e.
\[{\mathcal{L}}_K\Omega^i=0,\] where $\mathcal{L}_X$ is the Lie derivative along the vector field $X$. It therefore follows that $K$ gives rise to three moment maps: for each $i$ there is a function $x^i$ satisfying \begin{equation}\label{h1} K\hook \Omega^i=d x^i \end{equation} and these $x^i$ are (up to scale) the flat coordinates $(x,y,z)$ in (\ref{GH}). \vskip5pt The Killing vector $K$ will give a constant of the motion (\ref{cg4}) for (\ref{lorentzf}). This is conserved for any $V$, and we will return to it when we consider examples below. It is however not sufficient for integrability. To integrate (\ref{lorentzf}) we need either two more Killing vectors, commuting with each other and with $K=\partial_\tau$, or another Killing vector commuting with $K$ together with a suitably involutive Killing tensor. The first case is not interesting (see Appendix A), but for the second we can take motivation from the familiar fact that both the self--dual Taub--NUT metric and the Eguchi--Hanson metric admit Killing tensors \cite{gibbons_ruback}, and concentrate on these two metrics in \S\ref{sectionTN} and \S\ref{sectionEH}. \section{Circles on anti--self--dual Taub--NUT} \label{sectionTN} The ASD Taub--NUT metric \cite{GHr} is a special case of (\ref{GH}) with $V=1+m/r$, where $m$ is a non--negative constant, and (using spherical polar coordinates on $\mathbb{R}^3$) $\omega=m\cos{\theta} d\phi$, so that \[ g=\Big(1+\frac{m}{r}\Big)(dr^2+r^2(d\theta^2+\sin^2{\theta} d\phi^2))+ \Big(1+\frac{m}{r}\Big)^{-1}(d\psi+m\cos{\theta} d\phi)^2, \] writing $\psi$ for $\tau$ to emphasise the Bianchi IX $SU(2)$ symmetry. The singularity at $r=0$ is removable. The infinity $r\rightarrow \infty$ has the topology of a one--monopole $S^1$ bundle over $S^2$. This behaviour is referred to as asymptotic local flatness (ALF). Thus ASD Taub--NUT is regular everywhere, and is an example of a gravitational instanton.
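For the Taub--NUT data just introduced, the Gibbons--Hawking condition $d\omega=*dV$ can be verified directly in Cartesian coordinates on $\mathbb{R}^3$; a short symbolic check (a {\tt sympy} sketch, not part of the argument, using the standard orientation):

```python
# Symbolic check of d(omega) = *dV for V = 1 + m/r and
# omega = m cos(theta) dphi, written in Cartesian coordinates.
import sympy as sp

x, y, z, m = sp.symbols('x y z m', positive=True)
X = [x, y, z]
r = sp.sqrt(x**2 + y**2 + z**2)
V = 1 + m/r
rho2 = x**2 + y**2
# omega = m (z/r) dphi, with dphi = (x dy - y dx)/(x^2 + y^2)
w = [-m*z*y/(r*rho2), m*z*x/(r*rho2), 0]

# compare (d omega)_{ij} = di w_j - dj w_i with (*dV)_{ij} = eps_{ijk} dk V
ok = all(sp.simplify(sp.diff(w[j], X[i]) - sp.diff(w[i], X[j])
                     - sum(sp.LeviCivita(i, j, k)*sp.diff(V, X[k])
                           for k in range(3))) == 0
         for i in range(3) for j in range(3))
assert ok
```

Since $d(d\omega)=0$, this also confirms that $V=1+m/r$ is harmonic away from the origin.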
The metric has $SU(2)\times U(1)$ as its group of isometries, where the $U(1)$ action generated by $\partial/\partial \psi$ is tri--holomorphic, i.e. \[ {\mathcal{L}}_{\partial/\partial\psi} \Omega^i=0,\quad i=1, 2, 3, \] and so the self--dual derivative of the corresponding one--form $g(\partial/\partial\psi, \cdot)=V^{-1}(d\psi+m\cos{\theta}d\phi)$ vanishes. The $SU(2)$ action is not tri--holomorphic, and the exterior derivatives of the corresponding Killing vectors do not have a definite duality (but their SD derivatives are parallel). However for any unit vector ${\bf c}\in \mathbb{R}^3$ there exists a linear combination $L$ of the generators of the isometric $SU(2)$ action, such that $L$ Lie--drags $\sum_ic^i\Omega^i$. \subsection{Conformal Killing--Yano tensors and Arnold--Liouville integrability} The ASD Taub--NUT metric admits a Killing--Yano $2$--form $Z$, i.e. a two--form such that \[ \nabla_a Z_{bc}=\nabla_{[a}Z_{bc]}. \] The self--dual and anti--self--dual parts of $Z$ are conformal Killing--Yano two--forms which satisfy (\ref{CKY}): \begin{eqnarray*} Z&=& (d\psi+m\cos{\theta} d\phi) \wedge dr+(2r+m)(r+m)r\sigma_1\wedge\sigma_2\\ &=& Y-W, \quad\mbox{where}\qquad *Y=Y, \quad *W=-W, \end{eqnarray*} and \begin{equation} \label{Wtensor} Y=\frac{r^3}{m} d(V(d\psi+m\cos{\theta} d\phi)),\quad W=-\frac{(r+m)^3}{m} d(V^{-1}(d\psi+m\cos{\theta} d\phi)). \end{equation} The left invariant one--forms $\sigma_i, i=1, 2, 3$ on the group manifold $SU(2)$ are such that \begin{equation} \label{cijk} d\sigma_1+\sigma_2\wedge\sigma_3=0, \quad d\sigma_2+\sigma_3\wedge\sigma_1=0, \quad d\sigma_3+\sigma_1\wedge\sigma_2=0.
\end{equation} These one--forms can be represented in terms of Euler angles by \[ \sigma_1+i\sigma_2=e^{-i\psi}(d\theta+i\sin{\theta} d\phi), \qquad \sigma_3=d\psi+\cos{\theta}d\phi, \] where to cover $SU(2)=S^3$ we require the ranges $ 0\leq\theta\leq\pi, \quad 0\leq\phi\leq 2\pi, \quad 0\leq\psi\leq 4\pi.$ Up to a factor of $4$ the two--form $Z$ coincides with the Killing--Yano two--form found by Gibbons and Ruback \cite{gibbons_ruback} (see formula (2.12) in this reference with $n=1$, and make a coordinate transformation $t=m+2r$; see also \cite{valent}). Both $Y$ and $W$ satisfy (\ref{CKY}) with the same one--form $K=g(\partial/\partial\psi, \cdot)$. The CKY two--form $Y$ was discovered in \cite{DTeinstein}, where it was used to show that the ASD Taub--NUT metric is conformal to a scalar--flat K\"ahler metric with a non--constant conformal factor. Computing the first integral (\ref{Q}) corresponding to $Y$ gives an expression which is functionally dependent on the first integrals arising from the Killing vectors $K=\partial/\partial\psi$ and $L=\partial/\partial\phi$, so it does not give anything useful. We instead turn to $W$, where the resulting first integral (\ref{Q}) is non--trivial, but it does not in general commute with the first integrals (\ref{cg4}) arising from $\partial_\psi$ and $\partial_\phi$. To get around this problem we proceed as follows: given (\ref{lorentzf}) with some $F$, we may first rotate the $(x, y, z)$ coordinates until $F=-\Omega^3$; if this case is integrable then the general case is integrable.
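The Euler-angle representation above can be checked against the Maurer--Cartan relations (\ref{cijk}) symbolically (a {\tt sympy} sketch, not part of the argument):

```python
# Check the Maurer--Cartan relations for sigma_1, sigma_2, sigma_3 written
# in Euler angles; one-forms are coefficient vectors in (dtheta, dphi, dpsi).
import sympy as sp

th, ph, ps = sp.symbols('theta phi psi')
q = [th, ph, ps]
sig = [sp.Matrix([sp.cos(ps), sp.sin(th)*sp.sin(ps), 0]),
       sp.Matrix([-sp.sin(ps), sp.sin(th)*sp.cos(ps), 0]),
       sp.Matrix([0, sp.cos(th), 1])]

def d(a):        # exterior derivative of a one-form: d(a)(e_i, e_j)
    return sp.Matrix(3, 3, lambda i, j: sp.diff(a[j], q[i]) - sp.diff(a[i], q[j]))

def wedge(a, b):  # (a ^ b)(e_i, e_j)
    return a*b.T - b*a.T

# d sigma_i + sigma_j ^ sigma_k = 0 for (i,j,k) a cyclic permutation
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert (d(sig[i]) + wedge(sig[j], sig[k])).applyfunc(sp.simplify) == sp.zeros(3, 3)
```
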
Introducing spherical polar coordinates $(r, \theta, \phi)$ in place of $(x, y, z)$ yields \begin{eqnarray} F&=& -\Omega^3=d\Phi\quad\mbox{with}\nonumber\\ \Phi&=&-z(d\psi+\omega)+\Big(\frac{m}{r}+\frac{1}{2}\Big)(ydx-xdy)\nonumber\\ &=&-r\cos{\theta}d\psi-\Big(mr+\frac{1}{2}r^2\sin^2{\theta}\Big)d\phi \end{eqnarray} or, in polars and with the index raised, \[\Phi^\#=-zV\partial_\psi+\frac{1}{V}\partial_\phi.\] In this form it is clear that $\Phi^\#$ commutes with the Killing vectors $K:=\partial_\psi$ and $L:=\partial_\phi$, which in turn commute with each other, leading to two constants of the motion for (\ref{lorentzf}). This leads to four independent first integrals ${\mathcal I}_a=({\mathcal K}, {\mathcal L}, {\mathcal H}, {\mathcal W})$, two of which are linear in the velocity, and two of which are quadratic: \begin{eqnarray} \label{first_ints} {\mathcal K}&=&K^a(u_a+e\Phi_a), \quad {\mathcal L}=L^a(u_a+e\Phi_a), \\ {\mathcal H}&=&\frac{1}{2}g_{ab}u^a u^b, \quad {\mathcal W}= eW_{ac}{{F}^c}_b u^a u^b-2{\mathcal H}\;K^a u_a.\nonumber \end{eqnarray} To complete the calculation we shall use the Hamiltonian formalism. In the presence of a magnetic field there are two ways to proceed, starting from the magnetic Lagrangian ${\mathcal G}=(1/2) g_{ab} u^a u^b+e\Phi_b u^b$. One way is to define the conjugate momentum \[ p_a=\frac{\partial {\mathcal G}}{\partial u^a}=u_a+e\Phi_a \] and perform a Legendre transform to find four first integrals \begin{eqnarray*} {\mathcal K}&=&p_\psi, \quad {\mathcal L}=p_\phi, \\ {\mathcal H}&=&\frac{1}{2}g^{ab}(p_a-e\Phi_a)(p_b-e\Phi_b), \quad {\mathcal W}=e{W^a}_c F^{cb} (p_a-e\Phi_a)(p_b-e\Phi_b)-2{\mathcal H}({\mathcal K} -e K^a\Phi_a). \end{eqnarray*} These are in involution with respect to the standard symplectic structure on $T^*M$ given by $dp_a\wedge dq^a$, where $q^a=(r, \phi, \theta, \psi)$ and $p_a=(p_r, p_\phi, p_\theta, p_\psi)$.
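The two coordinate expressions for the potential $\Phi$ above can be checked against each other symbolically (a {\tt sympy} sketch, not part of the argument):

```python
# Cross-check of the potential Phi: the expression
# -z (dpsi + omega) + (m/r + 1/2)(y dx - x dy) against its polar form
# -r cos(theta) dpsi - (m r + r^2 sin^2(theta)/2) dphi.
import sympy as sp

r, th, ph, m = sp.symbols('r theta phi m', positive=True)

def d_of(f):   # differential of a function of (r, theta, phi),
               # as coefficients in the basis (dr, dtheta, dphi, dpsi)
    return sp.Matrix([sp.diff(f, r), sp.diff(f, th), sp.diff(f, ph), 0])

x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)
dphi = sp.Matrix([0, 0, 1, 0]); dpsi = sp.Matrix([0, 0, 0, 1])
omega = m*sp.cos(th)*dphi

Phi_cart = -z*(dpsi + omega) + (m/r + sp.Rational(1, 2))*(y*d_of(x) - x*d_of(y))
Phi_polar = -r*sp.cos(th)*dpsi - (m*r + r**2*sp.sin(th)**2/2)*dphi
assert (Phi_cart - Phi_polar).applyfunc(sp.simplify) == sp.zeros(4, 1)
```
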
Alternatively, define a momentum by \[ P_a=p_a-e\Phi_a, \] and consider the charged symplectic form \begin{equation} \label{charged} {\bf \Omega}= dP_a\wedge dq^a+eF \end{equation} on the $8$--dimensional phase--space with coordinates $q^a=(r, \phi, \theta, \psi)$ and $P_a=(P_r, P_\phi, P_\theta, P_\psi)$. The set of first integrals is then ${\mathcal I}_a=({\mathcal K}, {\mathcal L}, {\mathcal H}, {\mathcal W})$, where \begin{subequations} \label{mag_integrals} \begin{eqnarray} {\mathcal K}&=&P_\psi-e r\cos{\theta}, \\ {\mathcal L}&=&P_\phi-e \Big(mr+\frac{1}{2}r^2\sin^2{\theta}\Big),\\ {\mathcal H}&=&\frac{1}{2}\Big(\frac{r}{r+m} {P_r}^2\nonumber\\ & &+ \frac{1}{r(r+m)}{P_\theta}^2+\frac{r+m}{r}{P_\psi}^2 +\frac{1}{r(r+m)\sin^2{\theta}}(P_\phi-m\cos{\theta} P_\psi)^2\Big),\\ {\mathcal W}&=&\frac{e\cos{\theta}}{r}(rP_r-\tan{\theta} P_\theta)^2-\frac{e}{r\cos{\theta}}{P_\theta}^2\nonumber\\ & & -\frac{e\cos{\theta}}{r\sin^2{\theta}}\Big({P_\phi}^2+(m^2-r^2\sin^2{\theta}){P_{\psi}}^2-2\frac{m+r\sin^2{\theta}}{\cos{\theta}}P_\psi P_\phi\Big)-2{\mathcal H}P_\psi. \end{eqnarray} \end{subequations} These integrals are in involution \[ \{ {\mathcal I}_a, {\mathcal I}_b \}=0, \quad a, b = 1, \dots, 4, \] with respect to the Poisson brackets of the charged symplectic structure (\ref{charged}). We have established \begin{prop} \label{prop_TN1} The conformal geodesic equation on the anti--self--dual Taub--NUT manifold is completely integrable in the sense of Arnold--Liouville. \end{prop} Rather than finding the action--angle variables explicitly, we follow \cite{gibbons_ruback} and seek to separate the Hamilton--Jacobi equation, but for (\ref{lorentzf}) rather than for the geodesic equation.
\subsection{Separating the Hamilton--Jacobi equation for anti--self--dual Taub--NUT} Following Gibbons--Ruback \cite{gibbons_ruback}, we introduce parabolic coordinates \[\eta=r(1+\cos\theta),\;\;\xi=r(1-\cos\theta),\;\;\phi=\phi,\] so that in the cylindrical polars obtained from $(x,y,z)$ in (\ref{GH}) \[z=\frac12(\eta-\xi),\;\;\rho=\sqrt{\eta\xi},\;\;\phi=\phi.\] Now \[z+i\rho=\frac12(\sqrt{\eta}+i\sqrt{\xi})^2,\] and \[dz^2+d\rho^2=\frac14(\eta+\xi)\big(\frac{d\eta^2}{\eta}+\frac{d\xi^2}{\xi}\big),\] \[V=1+\frac{m}{r}=\frac{(\eta+\xi+2m)}{(\eta+\xi)},\quad\omega=m\cos\theta d\phi=m\frac{(\eta-\xi)}{(\eta+\xi)}d\phi,\] so that the anti--self--dual Taub--NUT metric is \[g=V\left(\frac14(\eta+\xi)\big(\frac{d\eta^2}{\eta}+\frac{d\xi^2}{\xi}\big)+\eta\xi d\phi^2\right)+\frac{1}{V}\left(d\psi+m\frac{(\eta-\xi)}{(\eta+\xi)}d\phi\right)^2.\] For the geodesic equation first, we seek Hamilton's Principal Function $S$ in the separated form \[S=E\psi+J\phi+F(\xi)+G(\eta),\] with constants $E, J$; then the Hamilton--Jacobi equation is \[\mu^2=g^{ab}\nabla_aS\nabla_bS, \] where $\mu$ is the conserved mass, which we will always take to be one. Algebra leads to \[4\big(\eta G_\eta^2+\xi F_\xi^2\big)=\mu^2(\eta+\xi+2m)-J^2(\frac{1}{\eta}+\frac{1}{\xi})+2JEm(-\frac{1}{\eta}+\frac{1}{\xi})-E^2(\eta+\frac{m^2}{\eta}+\xi+\frac{m^2}{\xi}+4m),\] and, confirming \cite{gibbons_ruback}, this separates with a new constant $Q$: \[4\eta G_\eta^2=Q+\mu^2(\eta+m)-\frac{J^2}{\eta}-\frac{2JEm}{\eta}-E^2(\eta+\frac{m^2}{\eta}+2m),\] \[4\xi F_\xi^2=-Q+\mu^2(\xi+m)-\frac{J^2}{\xi}+\frac{2JEm}{\xi}-E^2(\xi+\frac{m^2}{\xi}+2m).\] The constant $Q$ is quadratic in momenta and must be associated with a quadratic Killing tensor.
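The parabolic-coordinate identities underlying this separation can be verified symbolically (a {\tt sympy} sketch, not part of the argument):

```python
# Check of the parabolic-coordinate identities used in the separation.
import sympy as sp

eta, xi, m = sp.symbols('eta xi m', positive=True)
deta, dxi = sp.symbols('deta dxi')         # formal differentials

z = (eta - xi)/2; rho = sp.sqrt(eta*xi)
# z + i rho = (sqrt(eta) + i sqrt(xi))^2 / 2
assert sp.simplify(z + sp.I*rho - (sp.sqrt(eta) + sp.I*sp.sqrt(xi))**2/2) == 0

# dz^2 + drho^2 = (eta + xi)(deta^2/eta + dxi^2/xi)/4
dz = sp.diff(z, eta)*deta + sp.diff(z, xi)*dxi
drho = sp.diff(rho, eta)*deta + sp.diff(rho, xi)*dxi
lhs = sp.expand(dz**2 + drho**2)
rhs = sp.expand((eta + xi)*(deta**2/eta + dxi**2/xi)/4)
assert sp.simplify(lhs - rhs) == 0

# V and cos(theta) in the new coordinates, using r = (eta + xi)/2
r = (eta + xi)/2
assert sp.simplify(1 + m/r - (eta + xi + 2*m)/(eta + xi)) == 0
assert sp.simplify(z/r - (eta - xi)/(eta + xi)) == 0
```
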
To obtain the geodesics from $S$, we solve \[\dot{q}^a=g^{ab}p_b=g^{ab}\nabla_b S,\] so that \[\dot\eta=\frac{4\eta}{\eta+\xi+2m}G_\eta,\;\;\dot\xi=\frac{4\xi}{\eta+\xi+2m}F_\xi,\;\;p_\phi=J,\;\;p_\psi=E.\] We shall now modify this separation procedure to include the Maxwell field present for conformal geodesic motion. \begin{prop} \label{prop_TN2} The Hamilton--Jacobi equation for the conformal geodesics on the anti--self--dual Taub--NUT manifold is separable. \end{prop} \noindent {\bf Proof.} To include a Maxwell field, we follow the Hamilton--Jacobi formalism in the presence of an electromagnetic field (see e.g. \cite{LL}). The method is to suppose that the potential is $\Phi_a$, so that \[ \dot{q}^a=g^{ab}(p_b-e\Phi_b)\quad\mbox{and}\quad H=\frac12g^{ab}(p_a-e\Phi_a)(p_b-e\Phi_b).\] Now set $p_a=\nabla_a S$ and take the Hamilton--Jacobi equation to be \[\mu^2=g^{ab}(\nabla_a S-e\Phi_a)(\nabla_b S-e\Phi_b).\] For us \[\Phi=-z(d\psi+\omega)-r^2\sin^2\theta\big(\frac12+\frac{m}{r}\big)d\phi=-\frac12(\eta-\xi)d\psi-\frac12(\eta\xi+m(\eta+\xi))d\phi,\] \[=Xd\phi+Yd\psi, \mbox{ say.}\] The changes to the Hamilton--Jacobi equation are \[J\rightarrow J-eX,\;\;E\rightarrow E-eY,\] leading to \begin{eqnarray} \label{sepwithe} && 4\big(\eta G_\eta^2+\xi F_\xi^2\big)=\mu^2(\eta+\xi+2m)-J^2(\frac{1}{\eta}+\frac{1}{\xi})+2JEm(-\frac{1}{\eta}+\frac{1}{\xi})-E^2(\eta+\frac{m^2}{\eta}+\xi+\frac{m^2}{\xi}+4m)\nonumber\\ &&-eJ(\eta+\xi+4m)-eE(\eta^2-\xi^2+3m(\eta-\xi))-\frac{e^2}{4}(\eta(\eta+2m)^2+\xi(\xi+2m)^2), \end{eqnarray} which still separates with a quadratic constant and an implied Killing tensor. As before, for the particle paths solve \[ \dot{q}^a=g^{ab}p_b=g^{ab}\nabla_b S.
\] \begin{flushright} $\Box $ \end{flushright} Thus on the anti--self--dual Taub--NUT manifold the conformal geodesic equation is completely integrable: it admits the four first integrals in involution established in Proposition \ref{prop_TN1}, and the associated Hamilton--Jacobi equation is separable, which reduces the integration of the conformal geodesic equations to quadratures. This completes the proof of Theorem \ref{thm_TN}. \vskip 5pt Separating (\ref{sepwithe}) with a constant $Q$ gives \begin{equation} \label{sep123} \dot{\eta}=\frac{4\eta G_\eta}{\eta+\xi+2m}, \quad \dot{\xi}=\frac{4\xi F_\xi}{\eta+\xi+2m} \end{equation} with \[ 4\eta G_\eta= U(\eta, E, Q), \quad 4 \xi F_\xi= U(\xi, -E, -Q), \] where \begin{eqnarray} \label{Usep} U(x, E, Q)&=&\sqrt{u_0+u_1 x+ u_2 x^2+u_3 x^3+u_4 x^4}, \\ u_0&=&-4(J+Em)^2, \quad u_1=4(Q+m(\mu^2+2Je-2E^2)),\nonumber\\ u_2&=&4(\mu^2-m^2e^2-E^2+Je+3Eme), \quad u_3=4e(E-me), \quad u_4=-e^2.\nonumber \end{eqnarray} Therefore (\ref{sep123}) implies that the unparametrised conformal geodesic equations are solvable in terms of elliptic functions: \[ \int\frac{d\eta}{U(\eta, E, Q)}=\int\frac{d\xi}{U(\xi, -E, -Q)}. \] \section{Circles on Eguchi--Hanson} \label{sectionEH} The Eguchi--Hanson (EH) metric \cite{EHref} \[ g=f^{-2}dr^2+\frac{1}{4}r^2(\sigma_1^2+\sigma_2^2)+\frac{1}{4}r^2f^2\sigma_3^2,\quad \mbox{where}\quad f=\sqrt{1-\frac{\alpha^4}{r^4}}, \quad \alpha=\mbox{const}, \] is another special case of (\ref{GH}). It corresponds to \begin{equation} \label{VEH} V=\frac{1}{r_1}+\frac{1}{r_2}, \end{equation} where $r_1, r_2$ are Euclidean distances from points $P_1, P_2$, which we can choose to be \[ P_1=(0, 0, -\alpha), \quad P_2=(0, 0, \alpha) \] for a constant positive $\alpha$. To achieve regularity the period of the $\psi$ coordinate in $\sigma_3$ should be $2\pi$ rather than $4\pi$.
Therefore the surfaces of constant $r$ are real projective spaces, and at large $r$ the metric looks like $\mathbb{R}^4/\mathbb{Z}_2$ rather than Euclidean space. The Eguchi--Hanson metric is an example of an asymptotically locally Euclidean (ALE) manifold (see e.g. Chapter 9 of \cite{Dbook}). The isometry group of EH is enhanced to $SO(3)\times U(1)$, but this time it is the $SO(3)$ which acts tri--holomorphically (so that EH can be put in the form (\ref{GH}) in many ways). The $U(1)$ action is not tri--holomorphic. The parallel basis of ${\Lambda^2}_+$ is given by \[ \Omega^{i}=e^i\wedge e^4+\frac{1}{2} \varepsilon^{ijk} e^j\wedge e^k, \] with \begin{equation} \label{basisEH} e^1=\frac{1}{2}r\sigma_1, \quad e^2=\frac{1}{2}r \sigma_2, \quad e^3=\frac{1}{2} rf\sigma_3,\quad e^4=f^{-1}dr. \end{equation} \begin{prop} \label{propEH} All conformal geodesics on the $SO(3)$ orbits of constant $r$ in the Eguchi--Hanson manifold are of the form \begin{eqnarray} \label{starstar} &&{(u^1)}^2+{(u^2)}^2+{(u^3)}^2=1, \quad c_1 u^1+c_2 u^2=\frac{rf}{2(1-f^2)} (h^2-2({c_1}^2+{c_2}^2)-2m_1),\nonumber\\ &&\mbox{where}\quad h=\frac{1-f^2}{fr}u^3 +\frac{m_0}{{c_1}^2+{c_2}^2} \end{eqnarray} and $(u^1, u^2, u^3)$ depend on $(\phi, \psi, \theta)$ as in (\ref{uspaul}), where $(c_1, c_2, c_3, m_1, m_2, p_1, p_2, p_3)$ are constants of integration. \end{prop} We shall first perform the general analysis of the circle equations using the Lorentz force system (\ref{lorentzf}), and reduce the equations to a system of three 1st order ODEs (\ref{finaleq}) for three unknown functions. We shall then show that this system is completely solvable under the additional assumption $r=\mbox{const}$. This will give the proof of Proposition \ref{propEH}.
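As a consistency check, the two--forms $\Omega^i$ built from the frame (\ref{basisEH}) are closed, as required for the hyper--K\"ahler structure; this can be confirmed symbolically (a {\tt sympy} sketch, not part of the argument):

```python
# Check that the self-dual two-forms Omega^i built from the Eguchi--Hanson
# frame are closed.  One-forms are coefficient vectors in the coordinate
# basis (dr, dtheta, dphi, dpsi); two-forms are their evaluation matrices.
import sympy as sp
from itertools import combinations

rr, th, ph, ps, al = sp.symbols('r theta phi psi alpha', positive=True)
q = [rr, th, ph, ps]
f = sp.sqrt(1 - al**4/rr**4)

sig1 = sp.Matrix([0, sp.cos(ps), sp.sin(th)*sp.sin(ps), 0])
sig2 = sp.Matrix([0, -sp.sin(ps), sp.sin(th)*sp.cos(ps), 0])
sig3 = sp.Matrix([0, 0, sp.cos(th), 1])
e = [rr/2*sig1, rr/2*sig2, rr*f/2*sig3, sp.Matrix([1/f, 0, 0, 0])]

def wedge(a, b):
    return a*b.T - b*a.T

Om = [wedge(e[0], e[3]) + wedge(e[1], e[2]),
      wedge(e[1], e[3]) + wedge(e[2], e[0]),
      wedge(e[2], e[3]) + wedge(e[0], e[1])]

# (dM)_{ijk} = di M_{jk} + dj M_{ki} + dk M_{ij} must vanish
for M in Om:
    for i, j, k in combinations(range(4), 3):
        dM = sp.diff(M[j, k], q[i]) + sp.diff(M[k, i], q[j]) + sp.diff(M[i, j], q[k])
        assert sp.simplify(dM) == 0
```
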
With $a$ determined in terms of $u$ by (\ref{aGH}) (note that now the components of $a$ and $u$ refer to the basis (\ref{basisEH})) the circle equations reduce to the definition of the acceleration $a=\nabla_u u$, which becomes \begin{subequations} \label{ueq1} \begin{eqnarray} \dot{u}^1&=& 2\frac{k^4}{ r\sqrt{1-k^4}}u^2u^3 -\frac{\sqrt{1-k^4}}{r}u^1u^4 +c^2u^3-c^3u^2-c^1u^4,\\ \dot{u}^2&=& -2\frac{k^4}{ r\sqrt{1-k^4}}u^1u^3- \frac{\sqrt{1-k^4}}{r}u^2u^4 +c^3u^1-c^1u^3-c^2u^4,\\ \dot{u}^3&=& -\frac{1+k^4}{r\sqrt{1-k^4}}u^3u^4 +c^1u^2-c^2u^1-c^3u^4,\\ \dot{u}^4&=&\frac{\sqrt{1-k^4}}{r}(1-(u^4)^2)+ 2\frac{k^4}{ r\sqrt{1-k^4}}(u^3)^2+c^1u^1+c^2u^2+c^3u^3 \end{eqnarray} \end{subequations} where $k\equiv\alpha/r$. In this case we can also find the potentials of the triple of parallel two--forms \begin{subequations} \label{potentials} \begin{eqnarray} \Omega^1&=& e^1\wedge e^4+e^2\wedge e^3=d\Big(-\frac{rf}{2}e^1\Big)\\ \Omega^2&=& e^2\wedge e^4+e^3\wedge e^1=d\Big(-\frac{rf}{2}e^2\Big)\\ \Omega^3&=& e^3\wedge e^4+e^1\wedge e^2=d\Big(-\frac{r}{2f}e^3\Big). \end{eqnarray} \end{subequations} The relation \[ u=\dot{\phi}\frac{\partial}{\partial\phi}+\dot{\theta}\frac{\partial}{\partial\theta}+ \dot{\psi}\frac{\partial}{\partial\psi}+\dot{r}\frac{\partial}{\partial r}=u^aE_a \] gives \begin{subequations} \label{uscor} \begin{eqnarray} u^1&=&\frac{1}{2}r(\sin{(\theta)}\sin{(\psi)}\dot{\phi}+\cos{(\psi)}\dot{\theta})\\ u^2&=&\frac{1}{2}r(\sin{(\theta)}\cos{(\psi)}\dot{\phi}-\sin{(\psi)}\dot{\theta})\\ u^3&=&\frac{1}{2}rf(\dot{\psi}+\cos{(\theta)}\dot{\phi})\\ u^4&=&\frac{1}{f}\dot{r}. \end{eqnarray} \end{subequations} \subsection{Magnetic first integrals, and right invariant vector fields} The equations (\ref{ueq1}) correspond to a forced geodesic motion of a charged particle in a constant self--dual Maxwell field $F=c^1\Omega^1+c^2\Omega^2+c^3\Omega^3$ \begin{equation} \label{lorenzmd} u^b\nabla_b u^a= {F^a}_b u^b. \end{equation} Let $\Phi$ be a potential for $F=d\Phi$.
Using (\ref{potentials}) we find \[ \Phi=-\frac{1}{2}\Big(c_1rfe^1+c_2rfe^2+c_3\frac{r}{f}e^3\Big). \] For any tri--holomorphic Killing vector $R$, such that ${\mathcal{L}}_R \Phi=0$ (this can always be arranged by adding a total derivative of a function to $\Phi$), the function \begin{equation} \label{pauls_c} p\equiv R^a(u_a+\kappa \Phi_a) \end{equation} is a first integral of (\ref{lorenzmd}) (here $\kappa$ is a numerical factor, and the value of $\kappa$ can be absorbed into the constants $c^i$). Given the left--invariant one--forms $\sigma_i$, let $L_i$ be the dual basis of left--invariant vector fields, i.e. $L_i\hook \sigma_j=\delta_{ij}$, and \[ [L_i, L_j]=\frac{1}{2}\epsilon_{ijk}L_k. \] The tri--holomorphic and isometric $SO(3)$ action is generated by the right invariant vector fields $R_j$ on $SO(3)$ such that \[ [L_i, R_j]=0, \quad [R_i, R_j]=-\frac{1}{2}\epsilon_{ijk}R_k. \] These vector fields can be expanded in terms of the $L_i$s and therefore also in terms of the dual basis $E_1, E_2, E_3$ as \[ R_i={M^j}_i E_j \] where the components of the matrix $M$ are explicit functions of $(r, \psi, \theta, \phi)$, and we have verified that ${\mathcal{L}}_{R_i} \Phi=0$. Combining this with (\ref{pauls_c}) leads to three first integrals \[ p_i\equiv {R_i}^a(u_a+\kappa\Phi_a)= {M^j}_i (u_j+\kappa\Phi_j).
\] These are three linear equations for ${\bf u}$ which can be solved as ${\bf u}=M^{-1}{\bf p}-\kappa{\bf \Phi}$, or (using the explicit form of $M$) \begin{eqnarray} \label{uspaul} u^1&=&\frac{\kappa}{2}c_1 rf+\frac{1}{r}\Big(p_2(\cos{\phi}\cos{\psi}- \sin{\phi}\sin{\psi}\cos{\theta})-p_3 (\sin{\phi}\cos{\psi}- \cos{\phi}\sin{\psi}\cos{\theta})+p_1\sin{\theta}\sin{\psi}\Big)\nonumber\\ u^2&=&\frac{\kappa}{2}c_2 rf+\frac{1}{r}\Big(-p_2(\cos{\phi}\sin{\psi}- \sin{\phi}\cos{\psi}\cos{\theta})+p_3 (\sin{\phi}\sin{\psi}- \cos{\phi}\cos{\psi}\cos{\theta}) +p_1\sin{\theta}\cos{\psi} \Big)\nonumber\\ u^3&=&\frac{\kappa}{2f}c_3 r+\frac{1}{rf}(p_1\cos{\theta}+ p_2\sin{\phi}\sin{\theta}+p_3\cos{\phi}\sin{\theta}). \end{eqnarray} \subsection{A final system of 1st order ODEs} Comparing the expressions (\ref{uspaul}) with (\ref{uscor}) gives a system of three 1st order equations for three unknown functions $\theta(s), \psi(s), \phi(s)$ \begin{subequations} \begin{eqnarray} \label{finaleq} \dot{\phi}&=& \frac{1}{r^2}(2p_1-2\cot{\theta}(p_2\sin{\phi}+p_3\cos{\phi})) +\kappa f\frac{c_1\sin{\psi}+c_2\cos{\psi}}{\sin{\theta}},\\ \dot{\theta}&=&\frac{1}{r^2}(p_2\cos{\phi}-p_3\sin{\phi})+\kappa f (c_1\cos{\psi} -c_2\sin{\psi}), \\ \dot{\psi}&=&\kappa\Big(f\cot{\theta}(c_1\sin{\psi}+c_2\cos{\psi}) +\frac{1}{f^2}c_3\Big)+\frac{2}{r^2f^2\sin{\theta}}(p_2\sin{\phi}+p_3\cos{\phi})\\ &&+\frac{2\alpha^4}{r^6f^2\sin{\theta}}(\cos^2{\theta}(p_2\sin{\phi}+p_3\cos{\phi}) -p_1\cos\theta\sin{\theta}).\nonumber \end{eqnarray} \end{subequations} The equation for $\dot{r}$ can now be obtained from the 1st integral $g(u, u)=1$. \subsection{An integrable case: circles on $SO(3)$ orbits} In this section we shall establish the complete integrability of the conformal geodesic equations under the additional assumption that the conformal geodesics lie on surfaces of constant $r$.
\noindent {\bf Proof of Proposition \ref{propEH}.} The circle equations can be reduced to a quadrature under the additional assumption that $\dot{r}=0$, that is the circles are confined to the orbits of the isometric $SO(3)$ action which are surfaces of constant $r$. The condition $\dot{r}=0$ is equivalent to $u^4=0$. Set \[ u^1=\alpha(s), \quad u^2=\beta(s), \quad u^3=\gamma(s),\quad \frac{k^4}{r\sqrt{1-k^4}}=R=\mbox{const}. \] Equations (\ref{ueq1}) take the form \begin{subequations} \label{ueq2} \begin{eqnarray} \dot{\alpha}&=&R\;\beta\gamma+c_2\gamma-c_3\beta\\ \dot{\beta}&=& -R\;\alpha\gamma+c_3\alpha-c_1\gamma\\ \dot{\gamma}&=&c_1\beta-c_2\alpha. \end{eqnarray} \end{subequations} Algebraic manipulations with these equations give expressions for $\alpha$ and $\beta$ in terms of $\gamma$ and its derivatives \begin{eqnarray} \label{alphabeta} \alpha&=&-\frac{c_1}{({c_1}^2+{c_2}^2)(R\gamma-c_3)} \ddot{\gamma} -\frac{c_2}{{c_1}^2+{c_2}^2}\dot{\gamma} -\frac{c_1}{R\gamma-c_3} \gamma\nonumber\\ \beta&=&-\frac{c_2}{({c_1}^2+{c_2}^2)(R\gamma-c_3)} \ddot{\gamma} +\frac{c_1}{{c_1}^2+{c_2}^2}\dot{\gamma} -\frac{c_2}{R\gamma-c_3} \gamma. \end{eqnarray} The last equation in (\ref{ueq2}) now holds identically. The first two equations reduce to a single 3rd order ODE \begin{equation} h\;\dddot{h}=(\ddot{h}-h^3+m_0)\dot{h}, \quad\mbox{where} \;\; h(s)=R\gamma(s)-c_3, \quad\mbox{and}\;\; m_0=c_3({c_1}^2+{c_2}^2). \end{equation} This equation can be integrated in terms of elliptic functions \begin{equation} \label{diff2ce} s-m_3=2\int\frac{dh}{\sqrt{-h^4+4m_1h^2-8m_0h+4m_2}}, \end{equation} where $m_0, m_1, m_2, m_3$ are constants of integration. Inverting this formula for $h$, and substituting in the expressions above gives $\alpha(s), \beta(s)$. The elliptic functions are however not necessary to describe the conformal circles. To see this differentiate (\ref{diff2ce}) twice, to obtain expressions for $\dot{h}$ and $\ddot{h}$ in terms of $h$.
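This differentiation step can be checked symbolically. The following sympy sketch (our own verification code, not part of the original derivation) reads $\dot{h}=\frac{1}{2}\sqrt{-h^4+4m_1h^2-8m_0h+4m_2}$ off from (\ref{diff2ce}) and confirms that the quadrature inverts the third-order ODE:

```python
import sympy as sp

# Check that h(s) defined by the quadrature (diff2ce) satisfies
#   h*h''' = (h'' - h^3 + m0)*h'.
# s-derivatives are computed via d/ds = (dh/ds)*d/dh, with
# dh/ds = sqrt(P(h))/2 read off from the quadrature.
h, m0, m1, m2 = sp.symbols('h m0 m1 m2', real=True)

P = -h**4 + 4*m1*h**2 - 8*m0*h + 4*m2   # quartic under the square root
hdot = sp.sqrt(P)/2
hddot = sp.diff(hdot, h)*hdot           # equals P'(h)/8
hdddot = sp.diff(hddot, h)*hdot

residual = sp.simplify(h*hdddot - (hddot - h**3 + m0)*hdot)
```

The residual simplifies to zero identically in $m_0, m_1, m_2$.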
These expressions are then substituted into (\ref{alphabeta}), and (remembering that $\gamma=R^{-1}(h(s)+c_3)$) yield the explicit parametrisation ${\bf u}(h)$ of $u^1, u^2, u^3$ in terms of a parameter $h$, and five integration constants (note that there is one constraint on the constants $m_0, m_1, m_2, m_3, c_1, c_2$ coming from $g(u, u)=1$). The formulae for ${\bf u}(h)$ are now substituted into (\ref{uspaul}) which yields relations of the form \[ G_1(\phi, \psi, \theta, h)=0, \quad G_2(\phi, \psi, \theta, h)=0,\quad G_3(\phi, \psi, \theta, h)=0 \] where the functions $G_i$ can be read--off from (\ref{uspaul}), and depend on the 8 constants of integration (five from the procedure above, and $p_1, p_2, p_3$). The parameter $h$ may be eliminated between these three relations which gives unparametrised conformal circles on the $r=$const orbits in the Eguchi--Hanson space. They are given by (\ref{starstar}). \begin{flushright} $\Box $ \end{flushright} \subsection{Separability of the Hamilton--Jacobi equation} In this section we shall find another integrable subcase of the conformal geodesic equations on the Eguchi--Hanson background. This will be done by separating the associated Hamilton--Jacobi equation. Following \cite{gibbons_ruback} again, we introduce prolate spheroidal coordinates adapted to the potential (\ref{VEH}) \[\zeta=\frac{1}{2\alpha}(r_1+r_2),\;\;\;\lambda=\frac{1}{2\alpha}(r_1-r_2),\] so these are constant on confocal ellipses and confocal hyperbolae respectively. Introduce the angles $\theta_1,\theta_2$ as the angles supported at $p_1,p_2$ respectively w.r.t.
the $z$-axis; now \[r_1=\alpha(\zeta+\lambda),\;\;r_2=\alpha(\zeta-\lambda),\;\;\cos\theta_1=\frac{\zeta\lambda+1}{\zeta+\lambda},\;\;\cos\theta_2=\frac{\zeta\lambda-1}{\zeta-\lambda}.\] We need $V$ and $\omega$ in these coordinates: for the Eguchi-Hanson metric \[V=\frac{1}{r_1}+\frac{1}{r_2}=\frac{2\zeta}{\zeta^2-\lambda^2},\;\;\omega=\Omega d\phi\quad\mbox{with}\quad\Omega=-2\frac{\lambda(\zeta^2-1)}{{\zeta^2-\lambda^2}},\] with conventions \[\alpha(\zeta^2-1)V_\zeta=\Omega_\lambda,\;\;-\alpha(1-\lambda^2)V_\lambda=\Omega_\zeta.\] Claim \[d\rho^2+dz^2+\rho^2d\phi^2=\alpha^2(\zeta^2-\lambda^2)\left(\frac{d\zeta^2}{\zeta^2-1}+\frac{d\lambda^2}{1-\lambda^2}\right)+\alpha^2(\zeta^2-1)(1-\lambda^2)d\phi^2\] whence \[g_{EH}=V\left(\alpha^2(\zeta^2-\lambda^2)\left(\frac{d\zeta^2}{\zeta^2-1}+\frac{d\lambda^2}{1-\lambda^2}\right)+\alpha^2(\zeta^2-1)(1-\lambda^2)d\phi^2\right)+V^{-1}(d\psi+\Omega d\phi)^2.\] From the geodesic equations we obtain constants of the motion \[p_\psi=V^{-1}(\dot\psi+\Omega\dot\phi):=E,\;\;p_\phi={\alpha^2}V(\zeta^2-1)(1-\lambda^2)\dot\phi+\Omega E:=J,\] from the Killing vectors $K:=\partial_\psi$ and $L:=\partial_\phi$, so that \[p_\phi-\Omega p_\psi=\alpha^2V(\zeta^2-1)(1-\lambda^2)\dot\phi.\] We seek a solution \[S=E\psi+J\phi+F(\zeta)+G(\lambda)\] of the Hamilton-Jacobi equation; then \[\mu^2=\frac{\zeta^2-1}{{\alpha^2}V(\zeta^2-\lambda^2)}F_\zeta^2+\frac{1-\lambda^2}{{\alpha^2}V(\zeta^2-\lambda^2)}G_\lambda^2+\frac{(J-\Omega E)^2}{{\alpha^2}V(\zeta^2-1)(1-\lambda^2)}+VE^2.\] Multiply by ${\alpha^2}V(\zeta^2-\lambda^2)$ and simplify to obtain \begin{equation}\label{hj1}2\alpha\zeta=(\zeta^2-1)F_\zeta^2+(1-\lambda^2)G_\lambda^2+J^2(\frac{1}{\zeta^2-1}+\frac{1}{1-\lambda^2}) +\frac{4JE\lambda}{1-\lambda^2}+\frac{4E^2}{1-\lambda^2},\end{equation} which separates. Up to this point we are following \cite{gibbons_ruback}. To extend to the Lorentz force law, we need a Maxwell potential for $F_{ab}$ which commutes with $K$ and $L$.
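The Claim, together with the focal-distance relations above it, can be checked directly once $z$ and $\rho$ are written out. The sympy sketch below is our own verification; the explicit relations $z=\alpha\zeta\lambda$ and $\rho=\alpha\sqrt{(\zeta^2-1)(1-\lambda^2)}$ (standard prolate spheroidal coordinates with foci at $z=\pm\alpha$) are an assumption on our part, consistent with $r_{1,2}=\alpha(\zeta\pm\lambda)$:

```python
import sympy as sp

a, zeta, lam = sp.symbols('alpha zeta lam', positive=True)

# assumed standard prolate spheroidal relations (foci at z = +-alpha)
z = a*zeta*lam
rho = a*sp.sqrt((zeta**2 - 1)*(1 - lam**2))

# squared focal distances: r1^2 = rho^2 + (z+a)^2, r2^2 = rho^2 + (z-a)^2
chk_r1 = sp.simplify(rho**2 + (z + a)**2 - a**2*(zeta + lam)**2)
chk_r2 = sp.simplify(rho**2 + (z - a)**2 - a**2*(zeta - lam)**2)

# cos(theta_1) = (z + a)/r1 should equal (zeta*lam + 1)/(zeta + lam)
chk_cos = sp.simplify((z + a)/(a*(zeta + lam)) - (zeta*lam + 1)/(zeta + lam))

# flat-metric Claim: coefficients of d(zeta)^2, d(lam)^2, and the cross term
Ezz = sp.simplify(sp.diff(rho, zeta)**2 + sp.diff(z, zeta)**2
                  - a**2*(zeta**2 - lam**2)/(zeta**2 - 1))
Ell = sp.simplify(sp.diff(rho, lam)**2 + sp.diff(z, lam)**2
                  - a**2*(zeta**2 - lam**2)/(1 - lam**2))
Ecross = sp.simplify(sp.diff(rho, zeta)*sp.diff(rho, lam)
                     + sp.diff(z, zeta)*sp.diff(z, lam))
```

All residuals vanish; together with $\rho^2=\alpha^2(\zeta^2-1)(1-\lambda^2)$ this is exactly the displayed identity for $d\rho^2+dz^2+\rho^2d\phi^2$.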
Now there is a problem: for anti-self-dual Taub-NUT there was no loss in generality in choosing $F=-\Omega^3$ but in Eguchi-Hanson there is, as $K$ will Lie-drag any constant $\phi$ but $L$ will not. Thus we can obtain integrability for a sub-class of conformal geodesics, those for which $F=-\Omega^3$, but not for all. With this restriction then we have \[\Omega^3=-(d\psi+\omega)\wedge dz+Vdx\wedge dy.\] If the potential for $-\Omega^3$ is \[\Phi=-z(d\psi+\omega)+Xd\phi=-z(d\psi+\Omega d\phi)+Xd\phi,\] then \[dX\wedge d\phi=zd\Omega\wedge d\phi-V\rho d\rho\wedge d\phi.\] This can be solved by \[X=z\Omega+2\alpha\zeta\] when \[\Phi=(X-z\Omega)d\phi-zd\psi=2\alpha\zeta d\phi-{\alpha}\zeta\lambda d\psi.\] To modify the Hamilton-Jacobi equation we make the replacements \[J\rightarrow J+2e\alpha\zeta,\;\;E\rightarrow E-e\alpha\zeta\lambda\] in (\ref{hj1}). The added terms are \[4e\alpha J\frac{\zeta^3}{\zeta^2-1}+4e^2\alpha^2\frac{\zeta^4}{\zeta^2-1},\] so this still separates, and this sub-class of the conformal geodesic equations is completely integrable. \section{Circles on $\mathbb{CP}^2$} \label{section3} Consider the Fubini--Study metric on $\mathbb{CP}^2$. It is Einstein with the Ricci scalar equal to $24$, has anti--self--dual (ASD) Weyl tensor, and is K\"ahler, but with the ASD K\"ahler two-form (this is sometimes referred to as `opposite orientation'). The local form of the metric is (see e.g. \cite{GP}) \begin{equation} \label{cp2metric} g_{\mathbb{CP}^2}=\frac{dr^2}{(1+r^2)^2}+ \frac{1}{4}\frac{r^2\sigma_3^2}{(1+r^2)^2} +\frac{1}{4}\frac{r^2}{1+r^2}(\sigma_1^2+\sigma_2^2). \end{equation} The metric is regular everywhere on $\mathbb{CP}^2$, and the apparent singularity at $r=0$ results from using spherical polar coordinates. The metric is also regular at $r=\infty$. To see this set $r=\rho^{-1}$. Fixing $(\phi, \theta)$ now gives $g\sim d\rho^2+(1/4)\rho^2 d\psi^2$ near $\rho=0$.
This is a removable bolt singularity (see \cite{Dbook}) if $\psi$ is periodic with the period $2\pi$. At $\rho=0$ the three--dimensional orbits of $SU(2)$ collapse to a two--sphere of constant radius. We shall refer to this as the $\mathbb{CP}^1$ at infinity. It was shown in \cite{DT} that in the case of ASD Einstein manifolds with non--zero Ricci scalar there is a one-to-one correspondence between Killing vectors and self--dual CKY tensors: If $K$ is a Killing vector, then its self dual derivative \[ Y\equiv \frac{1}{2}(dK+*dK) \] satisfies (\ref{CKY}). The Fubini--Study metric is a symmetric space $M=SU(3)/U(2)$, so it has 8 Killing vectors generating the Lie algebra $\mathfrak{su}(3)$, and therefore 8 self--dual CKYs. This gives rise to 8 first integrals of the form (\ref{Q}). The 9th first integral is given by the parallel CKY which is the anti--self--dual K\"ahler form. In this section we aim to use these first integrals to find all conformal geodesics on $\mathbb{CP}^2$. We shall first prove \begin{prop} \label{prop_CP2} All conformal geodesics on the Fubini--Study metric (\ref{cp2metric}) on $\mathbb{CP}^2$ on the surfaces of constant $r$ are trajectories of the Killing vector $\partial/\partial\psi$, or are of the form \begin{equation} \label{theta_phi_psi_new} \chi={\rm arccot} \Big(\frac{\kappa(c_2\cos{(\kappa s)}-\sin{(\kappa s))}}{Q(c_3+c_2\sin{(\kappa s)}+\cos{(\kappa s))}}\Big), \quad \theta={\rm arccot}\Big({\frac{Q-\dot{\chi}}{P\sin{\chi}}}\Big), \quad \phi=P\int\frac{\sin{\chi}}{\sin{\theta}}ds \end{equation} where \[ Q=2\gamma/r-\gamma r,\quad P=\frac{2\sqrt{(1+r^2)(1-\gamma^2)}}{r}, \quad \kappa=\sqrt{P^2+Q^2} \] and $r, \gamma, c_1, c_2, c_3$ are constants.
\end{prop}\noindent and then deduce the general form of conformal geodesics: \begin{theo} \label{theo_CP2} All conformal geodesics of the Fubini--Study metric on $\mathbb{CP}^2$ are of the form ${\bf \rho}\circ\Gamma$ where $\Gamma$ is some conformal geodesic on the $r$=const surface given in Proposition \ref{prop_CP2}, and ${\bf \rho}$ is an isometry of the Fubini--Study metric. \end{theo} \noindent {\bf Proof of Proposition \ref{prop_CP2}}. Pick a tetrad of one--forms \[ e^1=\frac{r}{2\sqrt{1+r^2}}{\sigma_1}, \quad e^2=\frac{r}{2\sqrt{1+r^2}}{\sigma_2}, \quad e^3=\frac{r}{2(1+r^2)}\sigma_3, \quad e^4=\frac{dr}{1+r^2}, \] so that the metric (\ref{cp2metric}) is $g=\delta_{ab}e^ae^b$, and set \begin{equation} \label{formulaua} u={\alpha} E_1+{\beta} E_2+{\gamma} E_3+{\delta} E_4, \quad a=a^1 E_1+a^2 E_2+a^3 E_3+a^4 E_4 \end{equation} where $E_a, a=1, 2, 3, 4$ is the dual tetrad of vector fields, and $|u|^2=1$. The conformal geodesic equations (\ref{conf_circ_e}) become \begin{subequations} \label{fs} \begin{eqnarray} a^1&=&\dot{\alpha}+2 r {\beta}{\gamma}+\frac{1}{r}{\alpha} {\delta} \label{fs1}\\ a^2&=&\dot{\beta}-2 r {\gamma} {\alpha} + \frac{1}{r}{\beta} {\delta}\label{fs2}\\ a^3&=&\dot{\gamma}-r {\gamma}{\delta}+\frac{1}{r}{\gamma}{\delta}\label{fs3}\\ a^4&=&\dot{\delta}+r {\gamma}^2-\frac{1}{r}(1-{\delta}^2)\label{fs4} \end{eqnarray} \end{subequations} and \begin{subequations} \begin{eqnarray} \dot{a}^1&=&\frac{1}{r}({\beta}a^3-{\alpha}a^4-{\gamma}a^2)-2r {\gamma}a^2-|a|^2 {\alpha}\\ \dot{a}^2&=&\frac{1}{r}({\gamma}a^1-{\beta}a^4-{\alpha}a^3)+2r {\gamma}a^1-|a|^2 {\beta}\\ \dot{a}^3&=&\frac{1}{r}({\alpha}a^2-{\gamma}a^4-{\beta}a^1)+r {\gamma}a^4-|a|^2 {\gamma}\\ \dot{a}^4&=&\frac{1}{r}({\alpha}a^1+{\beta}a^2+{\gamma}a^3)-r {\gamma}a^3-|a|^2 {\delta}, \end{eqnarray} \end{subequations} together with \[ |u|^2=1, \quad \delta_{ef}a^ea^f=|a|^2, \quad\mbox{where}\;\;u=(\alpha, \beta, \gamma, \delta).
\] Note that $g(u, a)=0$ is satisfied identically if $u^a$ is unit. The CKY corresponding to the Killing vector $\partial/\partial \psi$ is \[ \frac{r^2}{r^2+1}\Big(e^1\wedge e^2+e^3\wedge e^4\Big), \] and gives rise to a first integral \begin{equation} \label{CYint} C_Y:=\frac{r^2}{r^2+1}\Big({\alpha}a^2 -{\beta}a^1 +{\gamma}a^4-{\delta}a^3\Big)+\frac{2r}{r^2+1}{\gamma}. \end{equation} The ASD K\"ahler form is \begin{equation} \label{kahler_form} e^1\wedge e^2-e^3\wedge e^4. \end{equation} This gives rise to a first integral \begin{equation} \label{CKint} C_K:={\alpha}a^2 -{\beta}a^1 -{\gamma}a^4+{\delta}a^3. \end{equation} Combining the two gives \begin{subequations} \begin{eqnarray} (r^2+1)C_Y-r^2C_K&=&2r^3{\gamma}({\gamma}^2+{\delta}^2)+ 2r^2({\gamma}\dot{\delta} -{\delta}\dot{\gamma}). \label{1stint1}\\ (r^2+1)C_Y+r^2C_K&=&4r^3{\gamma}({\gamma}^2+{\delta}^2)+ 2r^2({\alpha}\dot{\beta} -{\beta}\dot{\alpha}) +2r{{\gamma}}-4r^3{\gamma}.\label{1stint2} \end{eqnarray} \end{subequations} The first integral $|a|^2$ after some algebra simplifies to \begin{eqnarray} \label{AAeq} |a|^2&=&\delta_{ef}\dot{u}^e\dot{u}^f+r^2{\gamma}^2(4-3({\gamma}^2+{\delta}^2)) +2r{\gamma}(2{\beta}\dot{\alpha}-2{\alpha}\dot{\beta}+ {\gamma}\dot{\delta}-{\delta}\dot{\gamma})\nonumber\\ && -2{\gamma}^2 -\frac{2}{r}\dot{\delta}+\frac{1}{r^2}(1-{\delta}^2). \end{eqnarray} \vskip 5pt We now aim to find all circles such that ${\delta}=0$, or equivalently all circles lying on surfaces of constant $r$. The vector field $u$ is unit, so set $\beta=\sqrt{1-\alpha^2-\gamma^2}$. 
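Before specialising to $\delta=0$, the combinations (\ref{1stint1}) and (\ref{1stint2}) can be verified directly from (\ref{fs}), (\ref{CYint}) and (\ref{CKint}). The sympy sketch below (our own check) confirms that (\ref{1stint1}) holds identically, while (\ref{1stint2}) holds once the constraint $|u|^2=1$ is imposed:

```python
import sympy as sp

s, r = sp.symbols('s r', positive=True)
al, be, ga, de = [sp.Function(n)(s) for n in ('al', 'be', 'ga', 'de')]

# accelerations (fs1)-(fs4)
a1 = sp.diff(al, s) + 2*r*be*ga + al*de/r
a2 = sp.diff(be, s) - 2*r*ga*al + be*de/r
a3 = sp.diff(ga, s) - r*ga*de + ga*de/r
a4 = sp.diff(de, s) + r*ga**2 - (1 - de**2)/r

# first integrals (CYint) and (CKint)
CY = r**2/(r**2 + 1)*(al*a2 - be*a1 + ga*a4 - de*a3) + 2*r*ga/(r**2 + 1)
CK = al*a2 - be*a1 - ga*a4 + de*a3

# (1stint1): an identity
res1 = sp.simplify((r**2 + 1)*CY - r**2*CK
                   - 2*r**3*ga*(ga**2 + de**2)
                   - 2*r**2*(ga*sp.diff(de, s) - de*sp.diff(ga, s)))

# (1stint2): an identity once |u|^2 = 1 is imposed; the last line adds
# the multiple of (|u|^2 - 1) by which the two sides differ
res2 = sp.simplify((r**2 + 1)*CY + r**2*CK
                   - 4*r**3*ga*(ga**2 + de**2)
                   - 2*r**2*(al*sp.diff(be, s) - be*sp.diff(al, s))
                   - 2*r*ga + 4*r**3*ga
                   + 4*r**3*ga*(al**2 + be**2 + ga**2 + de**2 - 1))
```

Both residuals simplify to zero.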
Equations (\ref{1stint1}, \ref{1stint2}) and (\ref{AAeq}) give \begin{subequations} \begin{eqnarray} \label{integralscp} 0&=&2(r\gamma)^3-r^2(C_Y-C_K)-C_Y \label{integralscpa} \\ 0&=&2r^2(\alpha\dot{\beta}-\beta\dot{\alpha})+4(r\gamma)^3-4r^3\gamma+2r\gamma -r^2(C_K+C_Y)-C_Y \label{integralscpb}\\ A&=&\dot{\alpha}^2+\dot{\beta}^2+(r\gamma)^2(4-3\gamma^2)+4r\gamma(\beta\dot{\alpha}-\alpha\dot{\beta})-2\gamma^2+\frac{1}{r^2}\label{integralscpc} \end{eqnarray} \end{subequations} so that $\gamma$ is a constant along circles (and this constant depends on $r$). We also get $a^4=r\gamma^2-1/r$, which is a constant along circles, and $a^3=0$. Substituting $\beta=\sqrt{1-\alpha^2-\gamma^2}$ into (\ref{integralscpb}) yields a 1st order equation for $\alpha(s)$ which we can integrate to find \begin{eqnarray} \label{cp2cir} \alpha(s)&=&\sqrt{1-\gamma^2}\sin{(Bs+c_1)}, \quad \beta(s)=\sqrt{1-\gamma^2}\cos{(Bs+c_1)}, \quad\mbox{where}\\ B&=& \frac{4r^3(\gamma^3-\gamma)-r^2(C_K+C_Y)+2 r\gamma-C_Y}{2r^2(1-\gamma^2)}, \quad\mbox{where}\;\;\gamma\neq\pm 1\nonumber \end{eqnarray} and $\gamma$ is a root of (\ref{integralscpa}). To find any additional constraints on $B$ we substitute $u=(\alpha(s), \beta(s), \gamma, 0)$ into the conformal geodesic equations. We find that these equations hold iff \begin{equation} \label{conditons_for_B} B=-3\gamma r, \quad\mbox{or}\quad \gamma^2r^2+\gamma B+1=0. \end{equation} The squared norm of acceleration (\ref{integralscpc}) is constant along conformal circles, and is given by \[ |a|^2=B^2(1-\gamma^2)+4r\gamma B(1-\gamma^2)+r^2\gamma^2(4-3\gamma^2)-2\gamma^2+ \frac{1}{r^2}. \] We can now compute the {\em complex torsion} $\tau=C_K/|a|$ and find that it can take any value between $(-1, 1)$ if $B=-3\gamma r$, but $\tau^2=1$ if $B=-\gamma r^2-\gamma^{-1}$. We shall therefore take $B=-3\gamma r$ from now on, and return to this point in the proof of Theorem \ref{theo_CP2}. 
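Both the formula for $|a|^2$ and the condition (\ref{conditons_for_B}) can be confirmed by direct substitution of the closed form (\ref{cp2cir}). The following sympy sketch (our own check) verifies the quoted expression for $|a|^2$ and that the residual of the $\dot{a}^1$ equation vanishes on the branch $B=-3\gamma r$:

```python
import sympy as sp

s, B, c1 = sp.symbols('s B c1', real=True)
r, g = sp.symbols('r g', positive=True)   # g stands for the constant gamma
A = sp.sqrt(1 - g**2)

# the closed form (cp2cir) on a surface of constant r, with delta = 0
al = A*sp.sin(B*s + c1)
be = A*sp.cos(B*s + c1)
ga = g

# accelerations (fs1)-(fs4) with delta = 0 and gamma constant
a1 = sp.diff(al, s) + 2*r*be*ga
a2 = sp.diff(be, s) - 2*r*ga*al
a3 = sp.Integer(0)
a4 = r*ga**2 - 1/r
asq = sp.simplify(a1**2 + a2**2 + a4**2)    # |a|^2, s-independent

# |a|^2 agrees with the quoted formula
chk_asq = sp.simplify(asq - (B**2*(1 - g**2) + 4*r*g*B*(1 - g**2)
                             + r**2*g**2*(4 - 3*g**2) - 2*g**2 + 1/r**2))

# residual of the dot(a^1) equation vanishes when B = -3*g*r
res = sp.diff(a1, s) - ((be*a3 - al*a4 - ga*a2)/r - 2*r*ga*a2 - asq*al)
chk_B = sp.simplify(res.subs(B, -3*g*r))
```

Both residuals simplify to zero.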
If $\gamma=\pm 1$, then the circle is a trajectory of the Killing vector $\partial/\partial \psi=r/(r^2+1)E_3$, and $\alpha=\beta=0$ with the corresponding acceleration $a=(r^2-1)/r E_4$. In this case we find that $u=E_3$ and \begin{equation} \label{psigeod} \nabla_u \nabla_u u=-\frac{(r^2-1)^2}{r^2}u, \end{equation} so the circle equations (\ref{conf_circ_e}) hold\footnote{We also note that in general $\dot{r}=(1+r^2){\delta}$, and use this to verify that $u=E_a (a=1, 2, 3, 4)$ satisfies the circle eq. In the last case we get $|a|^2=0$.} with $|a|^2=(r^2-1)^2/r^2$. \vskip5pt To complete the calculation for $\gamma\neq \pm 1$ we need to find the dependence of the coordinates $(\phi, \theta, \psi, r)$ on the arc-length parameter $s$ along the circles (\ref{cp2cir}). Setting \[ u=\dot{\phi}\frac{\partial}{\partial\phi}+\dot{\theta}\frac{\partial}{\partial\theta}+ \dot{\psi}\frac{\partial}{\partial\psi}+\dot{r}\frac{\partial}{\partial r}, \] equating it to $u$ in (\ref{formulaua}) and substituting (\ref{cp2cir}) yields a coupled system of three 1st order ODEs \begin{subequations} \begin{eqnarray} \label{system3} \dot{\phi}&=&2\frac{\sqrt{1+r^2}}{r}\frac{\sqrt{1-\gamma^2}}{\sin{\theta}} \sin{(\psi+Bs+c_1)},\label{system3a} \\ \dot{\theta}&=&2\frac{\sqrt{1+r^2}}{r}{\sqrt{1-\gamma^2}}\cos{(\psi+Bs+c_1)},\label{system3b}\\ \dot{\psi}&=&2\frac{1+r^2}{r}\gamma -2\frac{\sqrt{1+r^2}}{r}\sqrt{1-\gamma^2}\cot{\theta} \sin{(\psi+Bs+c_1)}\label{system3c},\\ \dot{r}&=&0\label{system3d} . \end{eqnarray} \end{subequations} To simplify this slightly, set \begin{equation} \label{introchi} \chi=\psi+Bs+c_1, \quad P=\frac{2\sqrt{(1+r^2)(1-\gamma^2)}}{r}=\mbox{const}, \quad Q=B+2\frac{1+r^2}{r}\gamma=\mbox{const} \end{equation} so that \begin{equation} \label{simpsimp} \dot{\phi}=P\frac{\sin{\chi}}{\sin{\theta}}, \quad \dot{\theta}=P\cos{\chi}, \quad \dot{\chi}=Q-P\cot{\theta}\sin{\chi}. \end{equation} We claim that this system is solvable by quadratures.
Given $\chi(s)$ we find \begin{equation} \label{theta_phi} \theta={\rm arccot}\Big({\frac{Q-\dot{\chi}}{P\sin{\chi}}}\Big), \quad \mbox{and then}\quad \phi=P\int\frac{\sin{\chi}}{\sin{\theta}}ds. \end{equation} The final equation for $\chi$ arises from the middle equation in (\ref{simpsimp}): \[ P^2\cos{(\chi)}^{3} -2\cos{(\chi)}\dot{\chi}^2+ 3Q \cos{(\chi)}\dot{\chi}- \cos{(\chi)}({P}^{2}+Q^2)+ \sin{(\chi)} \ddot{\chi}=0. \] The general solution of this equation is given by \begin{equation} \label{formchi} \chi={\rm arccot} \Big(\frac{\kappa(c_2\cos{(\kappa s)}-\sin{(\kappa s))}}{Q(c_3+c_2\sin{(\kappa s)}+\cos{(\kappa s))}}\Big), \quad \mbox{where}\quad \kappa= \sqrt{P^2+Q^2}. \end{equation} This can now be substituted in (\ref{theta_phi}) to find the explicit form of $\theta$. It turns out that (according to MAPLE) even the $\phi$ integral can be computed in terms of elementary functions, but the answer is not illuminating. The integration will in any case introduce another constant - call it $c_4$. Thus there exists a seven--dimensional family of conformal circles on surfaces of constant $r$. This family is parametrised by $c_1, c_2, c_3, c_4, C_Y, C_K, r$, and explicitly given by formulae (\ref{formchi}, \ref{theta_phi}, \ref{introchi}), where $B$ is given by (\ref{cp2cir}), and $\gamma$ is given by (\ref{integralscpa}). \begin{flushright} $\Box $ \end{flushright} We shall now use the conformal geodesics on the $r=$ const surfaces and conjugate them with the elements of the isometry group $SU(3)$ to find all conformal geodesics on $\mathbb{CP}^2$. The fact that all conformal geodesics arise from this construction will follow from the results of \cite{MO} which we now recall. We shall say that two circles $\Gamma_1$ and $\Gamma_2$ are congruent if there exists a holomorphic isometry (note - all isometries on $\mathbb{CP}^2$ are holomorphic) $ {\bf \rho}$ such that $\Gamma_1={\bf \rho}\circ \Gamma_2$. 
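As an aside, the solution (\ref{formchi}) can be spot-checked numerically: substituting it into the second-order equation for $\chi$ displayed above gives a residual that vanishes, to machine precision, for arbitrary sample values of $P, Q, c_2, c_3$. The constants in the following sympy sketch are our own test values:

```python
import sympy as sp

s = sp.symbols('s', real=True)
# arbitrary sample constants (exact rationals keep sympy exact until evalf)
P, Q = sp.Rational(13, 10), sp.Rational(7, 10)
c2, c3 = sp.Rational(2, 5), sp.Rational(1, 5)
kappa = sp.sqrt(P**2 + Q**2)

# chi(s) from (formchi)
chi = sp.acot(kappa*(c2*sp.cos(kappa*s) - sp.sin(kappa*s))
              / (Q*(c3 + c2*sp.sin(kappa*s) + sp.cos(kappa*s))))
chid = sp.diff(chi, s)
chidd = sp.diff(chi, s, 2)

# residual of the scalar ODE for chi derived from (simpsimp)
resid = (P**2*sp.cos(chi)**3 - 2*sp.cos(chi)*chid**2
         + 3*Q*sp.cos(chi)*chid - sp.cos(chi)*(P**2 + Q**2)
         + sp.sin(chi)*chidd)

samples = [sp.Rational(3, 10), sp.Integer(1), sp.Integer(2)]
vals = [abs(float(resid.subs(s, sv).evalf())) for sv in samples]
```

The residuals are numerically zero at all sample points; note that the check is insensitive to the branch of $\mathrm{arccot}$, since a jump of $\chi$ by $\pi$ flips the sign of every term in the residual.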
In \cite{MO} (Theorem 5.1) it has been shown that two circles on $\mathbb{CP}^2$ with the same complex torsion \begin{equation} \label{torsion} \tau=g_{\mathbb{CP}^2}(J(u), a/|a|) \end{equation} and curvature $|a|^2$ are congruent. Here $J$ is the complex structure of the ASD K\"ahler form on $\mathbb{CP}^2$ and we note that $\tau=C_K/|a|$, where $C_K$ is the first integral (\ref{CKint}). We also note $\tau^2\leq |J(u)|^2 |a|^2/|a|^2=1$, so that $\tau \in [-1, 1]$. Doing the counting: at a point on an $r$=constant surface there are $2$ dimensions of (unit) velocity $u$ and then $2$ dimensions of acceleration orthogonal to velocity, so with $+3$ for the dimension of the $r$=constant surface and $-1$ for the one dimension of the curve, that is $6$ dimensions of conformal geodesics confined to $r$=const. But there is a four--dimensional space of Killing vectors tangent to $r$=constant to move them about, leaving $2$ parameters. In the proof of Theorem \ref{theo_CP2} we shall show that these are $|a|$ and $\tau$. The general counting for unparametrised conformal geodesics on $\mathbb{CP}^2$ is as follows: There are $3$ degrees of freedom of velocity, $3$ of acceleration, $4$ of position minus $1$ for the dimension of the curve. This gives a $9$-dimensional space of conformal geodesics on $\mathbb{CP}^2$. There are $8$ Killing vectors, but an orbit of $SU(3)$ has fixed $|a|$ and $\tau$ which is $7$ dimensions in the space of conformal geodesics. Therefore there is always a $1$-dimensional isometry group stabilising a conformal geodesic, i.e. every conformal geodesic $\Gamma(s)=\mbox{exp}(sK) \;\Gamma(0)$ is a trajectory of some Killing vector $K$, which confirms the results of \cite{maeda_kil}. This counting is correct as long as the Killing vector $K$ is tangent to the conformal geodesic, rather than vanishing on it. To examine this we look at fixed points of Killing vectors in $\mathbb{CP}^2$.
A one parameter subgroup of the isometry group comes from an $SU(3)$ matrix, and fixed points correspond to eigenvectors; if the eigenvalues are distinct then the Killing vector vanishes at isolated points (like $\partial_\phi$ in the coordinate system (\ref{cp2metric})) but if there is a repeat then the Killing vector vanishes on a $\mathbb{CP}^1$ in $\mathbb{CP}^2$ (like $\partial_\psi$ which vanishes at $r=\infty$ - the $\mathbb{CP}^1$ at infinity). So if a conformal geodesic does not lie on a $\mathbb{CP}^1$, then it cannot lie in the zero set of a Killing vector and so must be a Killing vector trajectory. If it does lie on a $\mathbb{CP}^1$ then there is a Killing vector that vanishes everywhere along the geodesic but it does not matter; take it to be the $\mathbb{CP}^1$ at infinity, which is a metric sphere and extrinsically flat, so that the conformal geodesics are genuine circles and are Killing vector trajectories. \vskip5pt \noindent {\bf Proof of Theorem \ref{theo_CP2}.} In view of the discussion above, and the results of \cite{MO} it is sufficient to show that geodesics on constant $r$ surfaces from Proposition \ref{prop_CP2} can have arbitrary values of the torsion $\tau$, and the curvature $|a|^2$. We shall first consider the case when $\tau=\pm 1$. Then the trajectory of the Killing vector $\partial/\partial\psi$ is a conformal geodesic (\ref{psigeod}) with $|a|^2=(r^2-1)^2/r^2$ which can take any value in $[0, \infty)$. We shall therefore assume that $\tau^2<1$, and show that for any value of $(\tau, |a|)$ there is a corresponding value of $(\gamma, r)$. This will be achieved by finding two relations between $(\tau, |a|)$ and $(\gamma, r)$. Recall from (\ref{conditons_for_B}) that $B=-3\gamma r$, or $\gamma^2r^2+\gamma B+1=0$. In the latter case we find ${C_K}^2=|a|^2$, so that $\tau=\pm 1$, but we already have conformal geodesics with these values of $\tau$.
We shall therefore assume that \begin{equation} \label{bminusthree} B=-3\gamma r \end{equation} in which case we find \begin{equation} \label{p4} |a|^2=\gamma^2 r^2-2\gamma^2+\frac{1}{r^2} \end{equation} and \begin{equation}\label{p5} C_K=\frac{\gamma}{r}(1+r^2-2\gamma^2r^2).\end{equation} As a check, calculate \[1-\tau^2=1-\frac{C_K^2}{|a|^2}=\frac{(1-\gamma^2)(1-2r^2\gamma^2)^2}{(1-\gamma^2)+\gamma^2(r^2-1)^2}>0,\] as expected. Also (\ref{p5}) will give the sign of $C_K/\gamma$ (note that, given a conformal geodesic, then traversing it in the opposite direction switches the signs on $C_K$ and $\gamma$). Solving (\ref{p4}) for $\gamma^2$ gives \begin{equation}\label{p6}\gamma^2=\frac{r^2|a|^2-1}{r^2(r^2-2)},\end{equation} (once we have $r^2$ we will need to check that this is less than 1.) This relation together with \[ C_K=-r^3\gamma^3+r\gamma(1+|a|^2) \] (which follows from (\ref{CKint})) gives a cubic \[ F(X)\equiv(XA-1)(X-(2A+1))^2-TA(X-2)^3=0, \quad \mbox{where}\quad X\equiv r^2, A\equiv |a|^2, T\equiv \tau^2. \] We will argue that this cubic has three positive roots which coalesce at $X=2$ when $A=1/2$. \vskip5pt Let us deal separately with $r^2=2$: from (\ref{p4}) for this case $|a|^2=1/r^2=1/2$ for any $\gamma^2$; then from (\ref{p5}) \[\tau=\frac{C_K}{|a|}=\gamma(3-4\gamma^2),\] and for any $\tau$ with $-1<\tau<1$, which is the currently allowed range, inspection of this cubic shows that there are three roots for $\gamma$ in $(-1,1)$. Now we can suppose that $r^2\neq 2$. Substitute for $\gamma^2$ into the square of (\ref{p5}): \[C_K^2=\frac{\gamma^2}{r^2}(1+r^2-2\gamma^2r^2)^2=\frac{r^2|a|^2-1}{r^4(r^2-2)}\left(1+r^2-2r^2\left(\frac{r^2|a|^2-1}{r^2(r^2-2)}\right)\right)^2 \] \[=\frac{(r^2|a|^2-1)}{(r^2-2)^3}(r^2 -(1+2|a|^2))^2. 
\] Rationalise: \begin{equation}\label{eq1} 0=(r^2|a|^2-1)(r^2 -(1+2|a|^2))^2-C_K^2(r^2-2)^3,\end{equation} and put $r^2=X$ to define the cubic \begin{eqnarray}\label{eq2}F(X)&:=&X^3(|a|^2-C_K^2)-X^2(1+2|a|^2+4|a|^4-6C_K^2)\\ &&+X(2+5|a|^2+4|a|^4+4|a|^6-12C_K^2)\nonumber\\ &&-(1+4|a|^2+4|a|^4-8C_K^2),\nonumber \end{eqnarray} whose vanishing we want. Note that, in $F$: \begin{itemize} \item The coefficient of $X^3$ is \[|a|^2-C_K^2=(1-\tau^2)|a|^2,\] which is strictly positive; \item the coefficient of $X^2$ is \[-(1+2|a|^2(1-3\tau^2)+4|a|^4)=-((1-2|a|^2)^2+6|a|^2(1-\tau^2))\] which is strictly negative; \item the coefficient of $X$ is \[2+5|a|^2+4|a|^4+4|a|^6-12C_K^2>2-7|a|^2+4|a|^4+4|a|^6\]\[=(2|a|^2-1)^2(|a|^2+2),\] where the first inequality uses $\tau^2<1$, so that this coefficient is strictly positive; \item and the constant term is \[-(1+4|a|^2+4|a|^4-8C_K^2)=-(1-2|a|^2)^2-8|a|^2(1-\tau^2),\] which is strictly negative. \end{itemize} Thus $F(X)$ has no negative roots and at least one positive root. We will see below there are three positive roots, which coalesce at $r^2=2$ when $|a|^2=1/2$. Next we note some particular values of $F$: \begin{equation}\label{eq3}F(1/|a|^2)=\frac{\tau^2}{|a|^4}(2|a|^2-1)^3,\;\;F(2)=(2|a|^2-1)^3\end{equation} and \begin{equation}\label{eq4}F(1+2|a|^2)=-C_K^2(2|a|^2-1)^3.\end{equation} Now we can analyse the roots of $F$ \begin{itemize} \item If $|a|^2=1/2$ then $r^2=2$ is a root, and by inspection of (\ref{eq1}) we see that it is three times repeated. By the discussion above, there is now a value of $\gamma$ giving any value of $\tau$. \item If $2|a|^2-1<0$ then the evaluations in (\ref{eq3}) are both negative and that in (\ref{eq4}) is positive, while \[1+2|a|^2<2<1/|a|^2.\] Therefore there is one root in each of the ranges $(0,1+2|a|^2),(1+2|a|^2,2)$ and $(1/|a|^2,\infty)$.
All three make $\gamma^2$ in (\ref{p6}) positive but we shall choose the largest root $r_3^2$ which has \[r_3^2>1/|a|^2>2\] so that there are positive $\delta,\epsilon$ with $\delta<\epsilon$ and \[r_3^2=1/|a|^2+\delta=2+\epsilon.\] Now from (\ref{p6}) \[\gamma^2=\frac{r_3^2|a|^2-1}{r_3^2(r_3^2-2)}=\frac{\delta|a|^2}{\epsilon r_3^2}<\frac{\delta}{2r_3^2\epsilon}<\frac{\delta}{4\epsilon}<\frac14<1,\] so that this choice of $r$ leads to a $\gamma$ in the allowed range. \item If $2|a|^2-1>0$ then the evaluations in (\ref{eq3}) are both positive and that in (\ref{eq4}) is negative, while \[1/|a|^2<2<1+2|a|^2.\] Therefore there is one root in each of the ranges $(0,1/|a|^2),(2,1+2|a|^2)$ and $(1+2|a|^2,\infty)$ and all make $\gamma^2$ in (\ref{p6}) positive. Again we choose the largest root $r_3^2$ which therefore has \[r_3^2>1+2|a|^2>2.\] We choose positive $\delta,\epsilon$ with \[|a|^2=\frac12+\epsilon,\;\;r_3^2=2+\delta,\] so that also \[r_3^2=2+\delta>1+2|a|^2=2+2\epsilon\mbox{ i.e. }\delta>2\epsilon.\] Now from (\ref{p6}) \[\gamma^2=\frac{r_3^2|a|^2-1}{r_3^2(r_3^2-2)}=\frac{(1/2+\epsilon)(2+\delta)-1}{\delta(2+\delta)}=\frac14\frac{\delta+4\epsilon+2\epsilon\delta}{\delta+\delta^2/2} \] \[=\frac14\left(1+\frac{4\epsilon+2\epsilon^2-(\delta-2\epsilon)^2/2}{\delta+\delta^2/2}\right)<\frac14\left(1+\frac{4\epsilon+2\epsilon^2}{\delta+\delta^2/2}\right)\] and with $\delta>2\epsilon$: \[<\frac14\left(1+\frac{4\epsilon+2\epsilon^2}{2\epsilon+2\epsilon^2}\right)<\frac34<1,\] so again we have $\gamma$ in the allowed range. \end{itemize} We have shown that any allowed $(C_K,|a|^2)$ can be obtained from an allowed $(r,\gamma)$. 
\begin{flushright} $\Box $ \end{flushright} {\bf Remarks.} \begin{itemize} \item A smooth curve $\Gamma$ parametrised by arc--length is called a helix of proper order $d$, if there exist orthonormal vector fields $\{V_1\equiv \dot{\Gamma}, V_2, \dots, V_d\}$ such that \[ \nabla_s V_j=-\kappa_{j-1} V_{j-1}+\kappa_j V_{j+1}, \quad\mbox{where}\quad j=1, \dots, d \] and $V_0=V_d\equiv 0$, and the functions $\kappa_j=\kappa_j(s) $ (we set $\kappa_0=\kappa_d\equiv 0$) where $j=1, \dots, d-1$ are constant along $\Gamma$. Thus a helix of order 1 is a geodesic, and a helix of order 2 is a circle. Let $\pi: S^{5}\rightarrow \mathbb{CP}^2$ be the Hopf fibration such that the round metric on $S^{5}$ is of the form \[ g_5=(dt+\Theta)^2+g_{\mathbb{CP}^2}, \] where $d\Theta=\Omega$ is the K\"ahler form for $\mathbb{CP}^2$ (this is also the derivative of the Killing vector $K=\partial/\partial t$). In \cite{jap2} it is argued that horizontal (with respect to the $U(1)$ connection $\Theta$) lifts of circles on $\mathbb{CP}^2$ to $S^{5}$ are helices of order $2, 3$ or $5$. Moreover the order $2$ occurs iff the first integral (\ref{CKint}) vanishes. These are the conformal geodesics of $S^5$ for which velocity and acceleration are both orthogonal to $K$. A necessary condition is $J(u,a)=0$ (to see it, differentiate $g_5(K, a)$ along a conformal geodesic on $S^5$) which we know can be satisfied, so the conformal geodesics of $\mathbb{CP}^2$ with $J(u, a)=0$ come from conformal geodesics of $S^5$. \item There is another special case, where the conformal geodesic equations can be explicitly integrated with relative ease. Let $J$ be the complex structure of the ASD K\"ahler form (\ref{kahler_form}), and let $\lambda$ be a non--zero constant. We shall make the ansatz $a=\lambda J(u)$ so that \[ a_1=-\lambda\beta, \quad a_2=\lambda \alpha, \quad a_3=\lambda\delta, \quad a_4=-\lambda\gamma.
\] Note that the first integral (\ref{CKint}) is equal to $\lambda$, and \[ \nabla_u a=\lambda J^2(u)=-\lambda u, \quad \mbox{where}\quad \lambda^2=|a|^2, \quad \mbox{so that}\quad \tau=1. \] The circle equations (\ref{conf_circ_e}) hold identically as a consequence of the definition of $a$, and can be integrated explicitly. We will not reproduce this calculation here, as we know from Theorem \ref{theo_CP2} that the resulting conformal geodesics lie in the $SU(3)$ orbits of integral curves of $E_4$. \end{itemize} \section*{Appendix A}
\section{Motivation} The current trend to move towards highly-scalable computing systems with slow but energy-efficient processors increases the pressure on the interconnection network. The recent leap in terms of bandwidth and latency was achieved by removing the CPU from the packet processing (data) path. Instead, specialized data processors offer remote direct memory access (RDMA) functions and enable tens of gigabytes per second transmission rates at sub-microsecond latencies in modern network interface cards (NICs). However, RDMA only transports data between (virtual) memories of processes on different network endpoints. Different RDMA interfaces, such as OFED~\cite{Hansen:2006:FRO:1188455.1188479}, uGNI/DMAPP~\cite{aries}, Portals 4~\cite{barrett2017portals}, or FlexNIC~\cite{Kaufmann:2016:HPP:2954679.2872367} provide varying levels of support for steering the data at the receiver. Yet, with upcoming terabits-per-second networks~\cite{ethernet-roadmap}, we foresee a new bottleneck when it comes to processing the delivered data: A modern CPU requires 10-15ns to access L3 cache (Haswell: 34 cycles, Skylake: 44 cycles~\cite{Molka:2014:MMC:2618128.2618129,intel-manual}). However, a 400 Gib/s NIC can deliver a 64-Byte message each 1.2ns. The main problem is that packets are simply deposited into main memory, irrespective of the contents of the message itself. Many applications then analyze the received messages and rearrange them into the application structures in host memory (e.g., halo exchanges, parallel graph algorithms, database updates) even though this step can logically be seen as part of the data routing. This poses a barrier, very similar to pre-RDMA packet processing: CPU cores are inefficient message processors because their microarchitecture is optimized for computation. They require thread activation, scheduling, and incoming data potentially pollutes the caches for the main computation. 
Furthermore, due to the lack of a better interface, the highly-optimized data-movement cores on the NIC are likely to place data \emph{blindly} into host memory. To address these limitations and liberate NIC programming, we propose \emph{streaming Processing in the Network} (\texorpdfstring{\MakeLowercase{s}}{s}PIN{}), which aims to extend the success of RDMA and receiver-based matching to simple processing tasks that are dominated by data-movement. In particular, we design a unified interface where programmers can specify kernels, similar to CUDA~\cite{Nickolls:2008:SPP:1365490.1365500} and OpenCL~\cite{Stone:2010:OPP:622179.1803953}, that execute on the NIC. Unlike CUDA and OpenCL, kernels do not offload compute-heavy tasks but data-movement-heavy tasks, specifically, tasks that can be performed on incoming messages and only require limited local state. Such tasks include starting communications with NIC-based collectives, advanced data steering with MPI datatypes, data processing such as network RAID, compression, and database filters. Similarly to OpenCL, \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s interface is device- and vendor-independent and can be implemented on a wide variety of systems. We enable \texorpdfstring{\MakeLowercase{s}}{s}PIN{} on existing NIC microarchitectures with typically very small but fast memories without obstructing line-rate packet processing. For this, we design \texorpdfstring{\MakeLowercase{s}}{s}PIN{} around networking concepts such as packetization, buffering, and packet steering. Packetization is the most important concept in \texorpdfstring{\MakeLowercase{s}}{s}PIN{} because unlike other networking layers that operate on the basis of messages, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} exposes packetization to the programmer. 
Programmers define \emph{header}, \emph{payload}, and \emph{completion handlers} (kernels) that are executed in a \emph{streaming way} by handler processing units (HPUs) for the respective packets of each matching message. Handlers can access packets in fast local memory and they can communicate through shared memory. sPIN offers protection and isolation for user-level applications and can thus be implemented in any environment. Figure~\ref{fig:overview} shows sPIN's architecture. \begin{figure}[h!] \vspace{-0.6em} \includegraphics[width=\columnwidth]{dia/overview_cropped.pdf} \vspace{-1.5em} \caption{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} Architecture Overview} \label{fig:overview} \centering \end{figure} \vspace{-1em} \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s philosophy is to expose the highly-specialized packet processors in modern NICs to process short user-defined functions. By ``short'', we mean not more than a few hundred instructions from a very simple instruction set. In this sense, handlers are essentially pluggable components of the NIC firmware. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} offers unprecedented opportunities to accelerate network protocols and simple packet processing functions and it can be implemented in discrete NICs, SoCs, or even in parts of CPU cores. Offering programmable network devices liberates the programmer from restricted firmware functionalities and custom accelerators and is the next step towards full software-defined networking infrastructures. 
The following C code demonstrates how to define handler functions in a user-application: \begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
__handler int header_handler(const ptl_header_t h, void *state)
  { /* header handler code */ }
__handler int payload_handler(const ptl_payload_t p, void *state)
  { /* packet content handler code */ }
__handler int completion_handler(int dropped_bytes,
    bool flow_control_triggered, void *state)
  { /* post-message handler code */ }
channel_id_t connect( peer, /* ... */, &header_handler,
    &payload_handler, &completion_handler );
\end{lstlisting} The function decoration \lstinline$__handler$ indicates that this function must be compiled for the sPIN device. Handler code is passed at connection establishment. This allows a single process to install different handlers for different connections. Arguments are the packet data and \lstinline$*state$, which references local memory that is shared among handlers. As a principled approach to network offloading, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} has the potential to replace specific offload solutions such as ConnectX CORE-Direct collective offload~\cite{6008920}, Cray Aries~\cite{aries}, IBM PERCS~\cite{ibm-percs-network}, or Portals 4~\cite{hemmert2010using} triggered operations. Instead, the community can focus on developing domain or application-specific \texorpdfstring{\MakeLowercase{s}}{s}PIN{} libraries to accelerate networking, very much like NVIDIA's cuBLAS or ViennaCL~\cite{Rupp:ViennaCL}. A vendor-independent interface would enable a strong collaborative open-source environment similar to the Message Passing Interface (MPI) while vendors can still distinguish themselves by the design of NICs (e.g., specialized architectures for packet processing such as massive multithreading in Netronome's Network Flow Processor). 
Specifically, in this paper, we \begin{itemize}[noitemsep,topsep=2pt,leftmargin=20pt] \item present the design of an acceleration system for NIC offload; \item outline a microarchitecture for offload-enabled smart NICs; \item design a cycle-accurate validated simulation environment integrating network, offload-enabled NICs, and CPUs; \item outline and analyze use cases for parallel applications as well as for distributed data management systems; \item and demonstrate speedups for various real applications. \end{itemize} \subsection{Background}\label{sec:background} \hpara{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} vs. AM} We now provide a brief overview of related technologies. At first glance, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} may seem similar to active messages (AM)~\cite{vonEicken:1992:AMM:146628.140382}---it certainly shares many potential use cases. Yet, it is very different because it specifies an architecture for fast and tightly integrated NIC packet processing. Both AM and \texorpdfstring{\MakeLowercase{s}}{s}PIN{} are independent of process scheduling at the host OS and can be defined independently of the target hardware. The major difference is that AMs are invoked on full messages while \texorpdfstring{\MakeLowercase{s}}{s}PIN{} is defined in a streaming manner on a per-packet basis. Early AM systems that constrained the message size may be considered as special cases of \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. Yet, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} enables pipelined packet processing, similar to wormhole routing, while AM corresponds to store-and-forward routing. Furthermore, AMs use host memory for buffering messages while \texorpdfstring{\MakeLowercase{s}}{s}PIN{} stores packets in fast buffer memory on the NIC close to the processing units for fastest access; accesses to host memory are possible but should be minimized. 
A final semantic difference is that in AM, a message can be considered atomic because a handler is invoked after the message is delivered, while in \texorpdfstring{\MakeLowercase{s}}{s}PIN{} handlers are invoked on parts of a message and only those parts (i.e., packets) can be processed atomically. \hpara{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} vs. packet processing in hardware} \texorpdfstring{\MakeLowercase{s}}{s}PIN{} is in fact closer to packet processing systems than to AM. Fast packet processing hardware has been designed in the Intel IXP family of chips and continued in the Netronome NFP series (cf.~\cite{gavrilovska}). Recent progress in software defined networking (SDN) enables users to program switches with simple parse-match-action rules that allow simple packet processing and routing in the network. P4~\cite{bosshart2014p4} is a language to express such rules concisely and it supports fast packet parsing in hardware. Another related proposal is FlexNIC~\cite{Kaufmann:2016:HPP:2954679.2872367}, which builds on the SDN/P4 ideas and extends routing to the DMA engine in the NIC. Yet, in the HPC context, this routing is comparable to what current HPC network interfaces such as Portals 4 already support in hardware (receiver-side steering). \texorpdfstring{\MakeLowercase{s}}{s}PIN{} goes far beyond these approaches by processing packets on specialized units in fast local memories. \section{Processing in the Network} \hpara{messages vs. packets} \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s central philosophy, which is independent of any particular implementation, is based on the fact that network devices split messages into packets for transmission (messages correspond to network transactions). Packets are easier to manage because they can be buffered and forwarded independently. We adopt this philosophy for \texorpdfstring{\MakeLowercase{s}}{s}PIN{} to enable a direct implementation in a network device. 
In \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, the programmer defines handler functions that execute on a set of packets that logically form a message. Those functions are executed on one or multiple handler processing units (HPUs). A simple runtime system is responsible for controlling the handlers and scheduling them for execution on HPUs. Each handler owns shared memory that is persistent across the lifetime of a message, i.e., handlers can use that memory to communicate. \begin{figure}[h!] \vspace{-0.7em} \includegraphics[width=\columnwidth]{dia/handlers_cropped.pdf} \vspace{-2.0em} \caption{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} Message Handler Overview} \vspace{-1.0em} \label{fig:messagehandlers} \centering \end{figure} \hpara{handler types} Figure~\ref{fig:messagehandlers} shows how the handlers relate to parts of the message. Network layers enforce that all necessary information to identify a message and steer it into memory is included in the first packet that we call \emph{header packet}. Many communication systems, such as Ethernet, replicate this header information in each packet (black boxes). To enable fast channel-based systems, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} does not rely on replicated header information but delegates to the NIC-based runtime system to identify the set of packets that belongs to the same message. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} defines three handler types to be invoked on different parts of a message: the header handler works on header information, the payload handler processes the message payload after the header handler completes, and the completion handler executes after all instances of a payload handler have completed. There is no requirement that packets arrive or are processed in order. \hpara{resource management} HPU memory is managed by the host operating system and the user-level application using a typical control-/data-plane separation. 
Performance-critical parts such as invoking handlers are performed without OS involvement while other parts, such as isolation, setting up protection domains, and memory management on the device can be performed through the host OS. The host compiles and offloads handler code to the HPU, similar to how GPU kernels work. Handlers only contain a few hundred instructions and their code can thus be placed into fast memory for quick execution. The system can reject handler code that is too large. The application on the CPU allocates and initializes HPU memory at handler-installation time. Handlers can communicate through shared memory but they cannot perform any memory management and their local memory only offers linear (physical) addressing. \hpara{programming model} Handlers are programmed by the user as standard C/C++ code to enable portable execution and convenient programming. They can only contain plain code, no system calls or complex libraries. Handlers are then compiled to the specific target network ISA. The program can contain static segments of pre-initialized data. Handlers are not limited in their execution time, yet resources are accounted for on a per-application basis. This means that if handlers consume too much time, they may stall the NIC or drop packets. Thus, programmers should ensure that handlers can operate at line-rate on the target device. It is key that handlers can be started at very low cost for each packet; we assume that execution can start within a cycle after a packet arrives in the buffer (assuming an HPU is available). Furthermore, to guarantee high message rate, handlers need to be set up quickly from the host and parameters must be passed with low overhead. Handlers execute in a sandbox with respect to application memory, i.e., they may only access a restricted memory range in the application's virtual address space. 
\enlargethispage{2em} \hpara{local handler actions} Handlers can perform various actions besides executing normal C code. Ideally, these actions are implemented as hardware instructions. At the start of a handler, the packet is available in a fast buffer (ideally single-cycle access). Handlers have access to host memory via DMA. This enables the runtime system to deschedule handlers from massively-threaded HPUs while they are waiting for host memory. Handlers do not block and can voluntarily yield to another handler. Yet, it is a central part of the programming philosophy that DMA should be used sparingly, as it is expensive and its performance is non-deterministic. Handlers can generate two types of messages: (1) messages originating from HPU memory and (2) messages originating from host memory. Messages issued from HPU memory can only contain a single packet and are thus limited to the MTU. Messages issued from HPU memory may block the HPU thread until the message is delivered (i.e., the NIC may use HPU memory as outgoing buffer space). Messages issued from host memory shall enter the normal send queue as if they were initiated from the host itself. Messages from host memory shall be nonblocking. \section{A complete \texorpdfstring{\MakeLowercase{s}}{s}PIN{} Interface} \texorpdfstring{\MakeLowercase{s}}{s}PIN{} can be added to any RDMA network. As an example to demonstrate all \texorpdfstring{\MakeLowercase{s}}{s}PIN{} features, we use the Portals 4 network interface because it offers advanced receiver-side steering (matching), OS bypass, protection, and NIC resource management. It has been implemented in hardware and demonstrated to deliver line-rate interactions with the host CPU~\cite{bxi}. Furthermore, its specification is openly available; we briefly summarize the key aspects in the following. 
\subsection{Overview of Portals 4} \hpara{virtual NIs} Portals 4 specifies logical and physical addressing modes and offers matched or unmatched operation for logical network interfaces that are bound to physical network resources. Logical addressing can be used by runtime systems to offer a NIC-accelerated virtualized process space. A matched interface allows the user to specify match bits to direct incoming messages to different logical lists identified by tags (potentially with a mask) in a single match list. Each logical queue head specifies a steering action for incoming packets. Without loss of generality, we focus on logically addressed and matched mode as this combination provides the highest level of abstraction~\cite{barrett2017portals}. \hpara{communication and completion} Portals 4 offers put, get, and atomic communication operations. Completion notification occurs through counting events or appending a full event to an event queue, which is also used for error notification. Memory descriptors (MDs) form an abstraction of memory to be sent; counters and event queues are attached to them. Matching entries (MEs) identify receive memory; matching is performed through a 64-bit masked id; MEs have counters and event queues associated with them. Portals 4 offers full memory access protection through MDs and MEs. \hpara{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} extends Portals 4 and integrates into it} \sloppy Portals 4 offers two mechanisms that go beyond simple message steering and that allow implementation of a limited Network Instruction Set Architecture (NISA)~\cite{hemmert2010using,exploiting-offload-enabled-ni}. First, it enables communications that have been previously set up to be triggered by counters reaching a certain threshold. Second, Portals 4 MEs can have locally managed offsets where incoming data is efficiently packed into buffers using a local index. Both mechanisms are limited because only incoming messages can trigger and steer operations, not the data in those messages. 
These actions also cannot process packets (local atomics can be used to emulate very limited processing capabilities~\cite{exploiting-offload-enabled-ni}). \texorpdfstring{\MakeLowercase{s}}{s}PIN{} integrates with and extends Portals 4 to offer more powerful message steering, protocol implementation, and packet processing functionalities. \subsection{A sPIN Interface for Portals 4} \hpara{semantics} Based on the general semantics for \texorpdfstring{\MakeLowercase{s}}{s}PIN{} systems, we now derive a candidate Portals 4 (P4) interface called P4sPIN. We only provide an overview of the functions in the main part of the paper and refer to the appendix for signatures and a detailed description. All packet handlers are associated with a specific matching entry (ME) to which incoming messages are matched. MEs are posted using \lstinline$PtlMEAppend$ (cf.~Appendix~\ref{sec:p4md}). We extend this call to enable registering handlers with additional arguments to identify the three handlers, the shared memory setup, the initial memory state to initialize the shared memory, and the handler's memory region at the host (if needed). \hpara{HPU memory} The ME requires a handle that identifies an HPU shared memory space to run the handler in. HPU memory is allocated using the \lstinline$PtlHPUAllocMem$ function (see~Appendix~\ref{sec:hpumem}) at the host (before handler installation). This explicit management allows the user to re-use the same HPU memory for multiple MEs. HPU memory remains valid until it is deallocated. If multiple incoming messages match MEs that specify the same HPU memory then the handlers should perform concurrency control. \hpara{flow control} If an incoming packet arrives and matches an ME but no HPU execution contexts are available, the NIC may trigger flow control for the respective portal table entry. This is symmetric to the situation where the host runs out of compute resources and fails to post new MEs to the NIC. 
In a flow control situation, packets arriving at a specific portal table entry are dropped until the entry is re-enabled. Note that this can happen during the processing of a message. In this case, the completion handler is invoked and notified through the flag \lstinline$flow_control_triggered$. \subsubsection{Header Handler} The header handler is called exactly once per message and no other handler for this message is started before the header handler completes. It has access to only the header fields that can include user-defined headers (the first bytes of the payload). User-defined headers are declared statically in a struct to enable fast parsing in hardware. Yet, the struct offers flexibility as it guarantees no specific memory layout. For example, pre-defined headers could be in special registers while user-defined headers can reside in HPU memory. The struct is static such that it can only be used on the right-hand side of expressions. This makes it possible to implement it using registers. \subsubsection{Payload Handler} The payload handler is called after the header handler completes for packets carrying a payload. The passed payload does not include the part of the user-header that was specified by \lstinline$user_hdr$. Multiple instances of the payload handler can be executed in parallel and the programmer must account for concurrent execution. Payload handlers share all HPU memory coherently. The illusion of private memory can be created by the programmer, yet no protection exists. To create private memory, the system offers a compile-time constant \lstinline$PTL_NUM_HPUS$ that contains the number of handler execution units. Note that each unit may be used to process multiple payload handlers serially but only \lstinline$PTL_NUM_HPUS$ handlers can be active simultaneously at any given time. Furthermore, a runtime constant \lstinline$PTL_MY_HPU$ allows a running handler to determine on which HPU it is running. 
Handlers may not migrate between HPUs while they are running. These two constants allow the user to allocate arrays of size \lstinline$PTL_NUM_HPUS$ and index into them using \lstinline$PTL_MY_HPU$ to emulate HPU-private data. \subsubsection{Completion Handler} The completion handler is called once per message after the header handler and all payload handlers have completed but before the completion event is delivered to the host. The handler can be used for final data collection or cleanup tasks after the message has been processed. The value in \lstinline$dropped_bytes$ indicates how many bytes of payload data have been dropped by payload handlers. Bytes can either be dropped by payload handlers returning a variant of \lstinline$DROP$ or if a flow-control situation occurs. The flag \lstinline$flow_control_triggered$ indicates that flow control was triggered during the processing of this message and thus some packets may have been dropped without being processed by payload handlers. The pointer \lstinline$state$ points at the initial data in HPU memory. This data may have been initialized by the host or header handler. All handlers can perform various actions as described before. The detailed interfaces for all calls are specified in Appendix~\ref{sec:hdlract}. \section{Prototyping \texorpdfstring{\MakeLowercase{s}}{s}PIN{}} We now describe two prototype implementations of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} as an NISA. The first architecture represents a \emph{discrete} network card (``dis'') that is attached to the CPU via a chip-to-chip interconnect such as PCI express (PCIe). The second architecture represents an \emph{integrated} network card (``int'') that is on the same chip as the CPU cores and attached via a fast signaling protocol such as the Advanced eXtensible Interface (AXI). \subsection{HPU Design} \hpara{intro} The HPU architecture is an integral part of the \texorpdfstring{\MakeLowercase{s}}{s}PIN{} design. 
We briefly describe some design, optimization, and customization ideas without proposing any particular architecture. We assume that most of today's NIC architectures can be re-used. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} can be conceptualized as being equivalent to installing custom mini-firmware for each application on the NIC. \hpara{HPU} The handler processing unit (HPU) should have fastest (ideally single-cycle) access to local memory and packet buffers. To achieve this, it could be connected to packet buffers directly. HPU memory is not cached. Most HPU instructions should be executed in a single cycle and the documentation should be explicit about instruction costs. Handlers should be invoked immediately after a packet arrives or the previous handler completes. Handlers require no initialization, loading, or other boot activities because all their context is pre-loaded and memory is pre-initialized. HPUs can be implemented using massive multithreading to utilize the execution units most efficiently. For example, if handler threads wait for DMA accesses, they could be descheduled to make room for different threads. Only the context would need to be stored, similarly to GPU architectures. \hpara{needed memory overhead} Handler execution will delay each message that is processed. This requires enough HPU cores to process at full bandwidth and additional memory. The required memory overhead can be computed using Little's law. If we assume a handler executes between 10 and 500 instructions at 2.5GHz and IPC=1, we expect a maximum delay of 200ns per packet. With 1Tb/s, we can calculate the overhead as 1 Tb/s $\cdot$ 200ns = 25 kB. We expect that this can easily be made available and more space can be added to hide more latency, probably up to several microseconds. \hpara{can be implemented anywhere} \texorpdfstring{\MakeLowercase{s}}{s}PIN{} can be implemented in multiple different environments. 
On a discrete NIC, one can take advantage of the existing packet processing infrastructure and buffer space. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} can also be added to an SoC to steer messages to the correct cores for processing. At the other extreme, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} can be implemented in a core with an integrated NIC as a small layer between the transceiver and the core. It could even use the pipeline of a super-scalar core with tagged instructions. \subsection{Simulation Setup} To evaluate \texorpdfstring{\MakeLowercase{s}}{s}PIN{} at scale, we combine two open-source simulators that have been vetted extensively by the research community: LogGOPSim~\cite{hoefler-loggopsim} to simulate the network of parallel distributed memory applications and gem5~\cite{Binkert:2011:GS:2024716.2024718} to simulate various CPU and HPU configurations. LogGOPSim supports the simulation of MPI applications, injection of system noise~\cite{hoefler-noise-sim,hoefler-collnetnoise}, and has been calibrated and validated on InfiniBand clusters~\cite{hoefler-loggopsim}. The cycle-accurate gem5 simulator supports a large number of different CPU configurations and is thus an ideal choice for our designs. In our setup, LogGOPSim is driving the simulation by running a trace-based discrete-event loop. Traced events are all Portals 4 and MPI functions as well as the invocation of handlers. LogGOPSim invokes gem5 for each handler execution and measures the execution time. The two simulators communicate via a special memory-mapped region in gem5 through which an executing handler can issue \emph{simcalls} from gem5 to LogGOPSim. Simcalls enable simulated applications in gem5 to invoke functionality in LogGOPSim, for example, to insert new messages into the discrete-event queue. 
Overall, this combination of trace-based network simulation and cycle-accurate execution-based CPU simulation enables high-accuracy and efficient modeling of the complete \texorpdfstring{\MakeLowercase{s}}{s}PIN{} system. We parametrize for a future InfiniBand system using the LogP model extended with a packet-level simulation to model the $L$ (Latency) parameter more accurately. The injection overhead is not parallelizable, thus, we use $o=65ns$ (injection overhead), which we measured on a modern InfiniBand system. Similarly, we expect the message rate to stay approximately similar, around 150 million messages per second for Mellanox ConnectX-4~\cite{mlx-talk} and thus set $g=6.7ns$ (inter-message gap). As bandwidth, we expect networks to deliver 400 Gb/s around the deployment time of~\texorpdfstring{\MakeLowercase{s}}{s}PIN{} and thus set $G=2.5ps$ (inter-bit gap). The latency is determined by a model for a packet-switched network where we assume a switch traversal time of $50ns$ (as measured on modern switches) and a wire length of $10m$ (delay of $33.4ns$). We construct a fat tree network from 36-port switches. The model is simulated using the LogGOPSim MPI simulator that has been shown to be within 10\% accuracy when compared with real runs~\cite{hoefler-noise-sim}. We model each NIC to have four 2.5GHz ARM Cortex A15 out-of-order HPU cores using the ARMv8-A 32-bit ISA. We configure the cores without cache and with gem5's SimpleMemory module configured as scratchpad memory that can be accessed in $k$ cycles (we use $k=1$ in the paper). Endo et al.~\cite{gem5-microarch} demonstrated that the average absolute error of a comparable ARM Cortex A15 was only 7\% when compared to real hardware. Messages are matched in hardware and only header packets search the full matching queue. A matched header packet will install a channel into a fast content-addressable memory (CAM) for the remaining packets. 
We assume that matching a header packet takes 30ns (cf.~\cite{Underwood:2011:EFC:2057181.2057595}) and each following packet takes 2ns for the CAM lookup. We assume that matching and the network gap ($g$) can proceed in parallel. We model the host CPU as eight 2.5GHz Intel Haswell cores with 8 MiB cache and a DRAM latency and bandwidth of 51 ns and 150 GiB/s, respectively. A similar configuration has been analyzed by Akram et al.~\cite{akram201686}. \subsection{DMA and Memory Contention} HPU accesses to host memory are performed via DMA. We extended the simulator by adding support to model contention for host memory. This contention either happens through the north-bridge via PCIe (discrete NIC) or through the memory controller of the CPU (integrated NIC). We model DMA at each host as a simple LogGP system~\cite{pciemodel,Martinasso:2016:PCP:3014904.3014989}. We set $o=0$ and $g=0$ because these times are already captured by the cycle-accurate gem5 simulation when initiating the request. We set $L$ and $G$ depending on the discrete or integrated HPU configuration as follows. The discrete NIC is connected through an off-chip interconnect such as PCI express. We use 32-lane PCI express version 4 as a candidate system with a latency of $L=250ns$, and $G=15.6ps$ (64 GiB/s). The integrated NIC is connected directly to the chip's memory controller, which allows a much lower latency of $L=50ns$ and the same bandwidth as the main CPU, $G=6.7ps$ (150 GiB/s). The DMA time is added to the message transmission when the NIC delivers data into host memory (e.g., for every message in RDMA and Portals 4), for HPU \lstinline$PutFromHost$ calls, and when the HPU invokes DMA routines to main memory. \subsection{Microbenchmarks} We first demonstrate the parameters of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} with a set of microbenchmarks before we show a series of use cases for real-world applications. 
\begin{figure*}[ht!]% \centering \subfloat[Schematic Ping Pong (thick lines show multi-packet messages, DMA transactions at source omitted for clarity)]{ \includegraphics[height=3.72cm]{dia/ping-pong_cropped.pdf} \label{fig:pingpongdiagram} }% \hfill \subfloat[Ping Pong (integrated NIC)]{{ \includegraphics[height=3.72cm]{rplot/plot_ping_pong_integrated_processed} \label{fig:pingpongintegrated} }}% \hfill \subfloat[Ping Pong (discrete NIC)]{{ \includegraphics[height=3.72cm]{rplot/plot_ping_pong_discrete_processed} \label{fig:pingpongdiscrete} }}% \hfill \subfloat[Accumulate (both types)]{{ \includegraphics[height=3.72cm]{rplot/plot_accumulate_processed} \label{fig:accumulate} }}% \vspace{-1em} \caption{Ping pong and remote accumulate comparing RDMA, Portals 4, and various \texorpdfstring{\MakeLowercase{s}}{s}PIN{} implementations}% \vspace{-1em} % \end{figure*} \subsubsection{Ping-Pong Latency} We compare our two \texorpdfstring{\MakeLowercase{s}}{s}PIN{} systems with standard RDMA as well as Portals 4 with a simple ping-pong benchmark. This illustrates the basic capabilities of processing messages on the NIC. For RDMA and Portals 4, all messages need to be stored to and loaded from main memory. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} can avoid this memory traffic and reply directly from the NIC buffer, leading to a lower latency and less memory traffic at the host. Figure~\ref{fig:pingpongdiagram} illustrates the following explanations of time spent on the CPU, the host memory, the NIC, and its memory when executing ping-pong. All variants start a transmission from the main CPU and the message travels for time $L$ through the network. For RDMA, the pong message is sent by the main CPU. Thus, the destination CPU polls for a completion entry of the incoming ping message, performs message matching, and immediately posts the pong message. The completion will only appear after the whole message has been deposited into host memory. 
Processing occurs on the CPU; therefore, system noise may delay the operation. For Portals 4, the pong message is pre-set up by the destination CPU and the reply is automatically triggered after the incoming message has been deposited into host memory. Thus, system noise on the main CPU will not influence the pong message. Even though the message itself is automatically triggered, the data is fetched via DMA from the CPU's main memory as in the RDMA case. In \texorpdfstring{\MakeLowercase{s}}{s}PIN{} ping-pong, the ping message may invoke header, payload, and/or completion handlers. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} gives us multiple options for generating the pong message: (1) (store) the ping message consists of a single packet and a pong can be issued with a put from device, (2) (store) the ping message is larger than a packet and the pong message is issued with put from host using the completion handler after the message is delivered to host memory, and (3) (stream) a payload handler could generate a pong put from device for each incoming packet. Here the NIC would act as a packet processor and split a multi-packet message into many single-packet messages. The first two correspond to store-and-forward processing for different message sizes while the last corresponds to fully streaming operation. The performance of ping-pong for all configurations is shown for integrated \texorpdfstring{\MakeLowercase{s}}{s}PIN{} implementations in Figure~\ref{fig:pingpongintegrated} and for discrete implementations in Figure~\ref{fig:pingpongdiscrete}. The latency difference is more pronounced in the discrete setting due to the higher DMA latency. Large messages benefit in both settings from the streaming approach where data is never committed to the host memory. The full handler code is shown in Appendix~\ref{sec:pingpongcode}. 
\subsubsection{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} Accumulate} In the second microbenchmark we evaluate \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s interaction with local memory at the destination. For this, we choose a simple accumulate benchmark where we send an array of double-complex numbers to be multiplied element-wise with an array of the same type at the destination. The multiplication is performed either on the CPU or by the NIC/HPU. This example represents an operation that is not typically supported as a NIC atomic in RDMA or Portals 4 NICs. Yet, it can easily be implemented using \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. If the operation were supported by the NIC directly, the performance would be similar to \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. In an RDMA implementation, the data would be delivered into a temporary buffer that is read by the CPU and then accumulated into the destination buffer. Here, the NIC writes the array to host memory and notifies the CPU, which then reads two arrays from host memory and writes the result back. So if the data is of size $N$, we have two $N$-sized read and two $N$-sized write transactions. In \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, the packets arrive and are scheduled to different HPUs. Each HPU fetches the destination data from host memory, applies the operation, and writes the result back. For an array of size $N$, we only read $N$ bytes and write $N$ bytes. Thus, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} halves the memory load compared to RDMA and Portals 4. However, because the data has to be moved twice through the bus to and from host memory, the bus latency may slow down processing of small messages. Many NICs employ caching of atomic operations to hide the DMA latency~\cite{aries} by relaxing the memory coherence---\texorpdfstring{\MakeLowercase{s}}{s}PIN{} can use similar techniques, but we decided to model a coherent system with the latency overhead. Figure~\ref{fig:accumulate} shows the accumulate results.
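The per-packet update each HPU performs can be sketched as follows; the handler signature is a hypothetical stand-in, and the DMA'd host slice is modeled as a plain C array.

```c
#include <complex.h>
#include <stddef.h>

/* Element-wise complex multiply-accumulate as performed per packet.
 * The host_dst array stands in for the destination slice that the HPU
 * fetches from and writes back to host memory via DMA; the handler
 * signature is a sketch, not the paper's exact API. */

typedef double complex dcomplex;

static void accumulate_handler(const dcomplex *incoming, /* packet payload */
                               dcomplex *host_dst,       /* host memory slice */
                               size_t elem_offset, size_t nelems)
{
    /* read N elements, multiply, write N elements: half the memory
     * traffic of the RDMA variant described in the text */
    for (size_t i = 0; i < nelems; i++)
        host_dst[elem_offset + i] *= incoming[i];
}
```

Because each invocation covers one packet's slice, independent packets can be processed by different HPUs in parallel.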
As expected, the latency for small accumulates is higher for \texorpdfstring{\MakeLowercase{s}}{s}PIN{} than for RDMA because the data has to be fetched via DMA to the HPU first. This is especially pronounced for the discrete NIC configuration where we see the 250ns DMA latency. However, due to \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s streaming parallelism and the resulting pipelining of DMA requests, processing becomes significantly faster for larger accumulates. The full handler code is shown in Appendix~\ref{sec:accumulatecode}. \paragraph{How many HPUs are needed?} We use this example to discuss one of the most important design choices of a \texorpdfstring{\MakeLowercase{s}}{s}PIN{} NIC: the number of HPU cores. Each packet is processed by an HPU, and multiple packets belonging to the same or different messages may be processed in parallel. The number of HPUs needed to guarantee line-rate execution can be modeled by Little's law: if we assume an average execution time per packet of $\overline{T}$ and an expected arrival rate of $\overline{\Delta}$, then we need $\overline{T}\cdot \overline{\Delta}$ HPUs in the system. With a fixed bandwidth ($1/G$), the arrival rate only depends on the packet size $s$ and the gap $g$ such that $\overline{\Delta}=\min\{1/g, 1/(G\cdot s)\}$. \begin{figure}[h!]
\vspace{-0.6em} \includegraphics[width=\columnwidth]{rplot/numbers_of_hpus_processed} \vspace{-2.3em} \caption{HPUs needed depending on $\overline{T}$ and $s$.} \vspace{-1.0em} \label{fig:hpusneeded} \centering \end{figure} \begin{figure*}[t]% \centering \subfloat[Broadcast on a binomial tree (discrete NIC)]{ \includegraphics[height=3.82cm]{rplot/plot_bcast_processed} \label{fig:broadcast} }% \hfill \subfloat[Matching protocols (left: small messages, right: large messages; top: recv called before arrival, bottom: after arrival)]{{ \includegraphics[height=3.82cm]{dia/rendezvous.pdf} \label{fig:protocols} }}% \hfill \subfloat[Application overview]{{ \footnotesize \sf \begin{tabular}[b]{lllll} \toprule \textbf{program} & \textbf{p} & \textbf{msgs} & \textbf{ovhd} & \textbf{spdup}\\ \midrule MILC & 64 & 5.7M & 5.5\% & 3.6\%\\ POP & 64 & 772M & 3.1\% & 0.7\%\\ coMD & 72 & 5.3M & 6.1\% & 3.7\%\\ coMD & 360 & 28.1M & 6.5\% & 3.8\%\\ Cloverleaf & 72 & 2.7M & 5.2\% & 2.8\%\\ Cloverleaf & 360 & 15.3M & 5.6\% & 2.4\%\\ \bottomrule \end{tabular} \vspace{5em} \label{tab:apps} }}% \vspace{-1em} \caption{Broadcast and message matching protocols implemented using \texorpdfstring{\MakeLowercase{s}}{s}PIN{}}% % \end{figure*} For our parameters, this means $12.5\mathrm{Mmps} \leq \overline{\Delta}\leq 150\mathrm{Mmps}$ (Mmps = million messages per second). Figure~\ref{fig:hpusneeded} shows how many HPUs are needed to guarantee line rate for different packet sizes and processing times. With our design of 8 HPUs, we can support any packet size if the handler takes less than $\hat{T}_s=53ns$. Beyond a packet size of $g/G=335$\,B, the link bandwidth becomes the bottleneck and we can support full line rate as long as the handler executes in less than $\hat{T}_l(s)=8Gs$. For full 4KiB packets, $\hat{T}_l(4,096)=650ns$.
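The sizing arithmetic can be checked directly. A small sketch, taking the link bandwidth as $1/G = 50\cdot10^9$\,B/s and the peak packet rate as $1/g = 150$\,Mmps (our reading of the stated parameters, which reproduces the quoted crossover and handler-time bounds):

```c
#include <math.h>

/* Little's-law sizing of the HPU count.  LINK_BW and MAX_RATE are our
 * reading of the stated parameters; they reproduce the ~335 B crossover
 * and the ~650 ns large-packet bound quoted in the text. */

#define LINK_BW  50e9   /* bytes per second (1/G), assumed */
#define MAX_RATE 150e6  /* packets per second (1/g), assumed */

/* expected arrival rate for packet size s: min{1/g, 1/(G*s)} */
static double arrival_rate(double s)
{
    double bw_rate = LINK_BW / s;
    return bw_rate < MAX_RATE ? bw_rate : MAX_RATE;
}

/* HPUs needed for mean handler time T (seconds) at packet size s (bytes) */
static double hpus_needed(double T, double s)
{
    return T * arrival_rate(s);
}

/* longest handler time sustainable with n HPUs at packet size s */
static double max_handler_time(int n, double s)
{
    return n / arrival_rate(s);
}
```

For 8 HPUs this yields roughly 53\,ns at minimal packet sizes and roughly 655\,ns at 4\,KiB packets, matching the numbers above.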
\subsubsection{\texorpdfstring{\MakeLowercase{s}}{s}PIN{} Offloaded Broadcast} For the last microbenchmark, we demonstrate the design of distributed algorithms in \texorpdfstring{\MakeLowercase{s}}{s}PIN{} using a broadcast operation. We implement a binomial tree algorithm, which would require logarithmic space on a Portals 4 NIC and would thus be limited in scalability. In \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, the algorithm is not limited in scalability, although it occupies one HPU for its execution. We implemented the broadcast operation in RDMA on the CPU, in Portals 4 as predefined triggered operations, and with \texorpdfstring{\MakeLowercase{s}}{s}PIN{} using store-and-forward as well as streaming HPU kernels. As for ping-pong, the store-and-forward mode sends messages that are smaller than a packet directly from the device and from host memory otherwise. Its performance is always within 5\% of the streaming mode for single-packet messages and of Portals 4 for multi-packet messages; we therefore omit the store-and-forward mode from the plots. Figure~\ref{fig:broadcast} shows the small-message (8 B) and large-message (64 KiB) cases for varying numbers of processes and the different implementations. We observe the benefit of direct forwarding for small messages as well as streaming forwarding for large messages. We only show data for the discrete NIC configuration to maintain readability. The integrated NIC shows slightly smaller differences, but \texorpdfstring{\MakeLowercase{s}}{s}PIN{} is still 7\% and 5\% faster than RDMA and Portals 4 at 1,024 processes, respectively. The full handler code is shown in Appendix~\ref{sec:broadcastcode}. All benefits known from collective offloading implementations~\cite{hemmert2010using,Roweth:2005:OGR:1104994.1105020,6008920} such as asynchronous progression and noise-resilience remain true for \texorpdfstring{\MakeLowercase{s}}{s}PIN{}.
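The rank arithmetic such a forwarding handler performs on the NIC can be sketched with a standard binomial-tree child computation; this formulation is ours, for illustration (the paper's handler code is in Appendix~\ref{sec:broadcastcode}).

```c
/* Children of `rank` in a binomial tree over nprocs processes rooted at
 * rank 0 (a standard formulation, shown for illustration).  A forwarding
 * handler would issue one put per child; latency-optimal variants walk
 * the masks from the largest subtree down. */

static int binomial_children(int rank, int nprocs, int *children)
{
    int n = 0;
    for (int mask = 1; mask < nprocs; mask <<= 1) {
        if (rank & mask)        /* rank sits inside a subtree of this size */
            break;
        int child = rank | mask;
        if (child < nprocs)
            children[n++] = child;
    }
    return n;
}
```

Since the child set is computed from \lstinline$rank$ and \lstinline$nprocs$ alone, the handler needs only constant state, which is why the \texorpdfstring{\MakeLowercase{s}}{s}PIN{} variant does not face the logarithmic-space limitation mentioned above.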
As opposed to existing offloading frameworks that restrict the collective algorithms (e.g., to pre-defined trees), \texorpdfstring{\MakeLowercase{s}}{s}PIN{} supports arbitrary algorithms (including pipeline and double-tree~\cite{hoefler-moor-collectives}) due to the flexible programmability and high forwarding performance of the HPUs. In fact, the very low overheads for HPU packet processing suggest new streaming algorithms for collective operations. We leave a more detailed investigation for future work. \section{Use cases for \texorpdfstring{\MakeLowercase{s}}{s}PIN{}} We now discuss several more complex use cases for \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. The idea is not to present full applications, for which our cycle-accurate simulation environment would be too slow, but to present detailed simulation results of the critical pieces of real applications. \subsection{Asynchronous Message Matching} High-speed interconnection networks attempt to offload as much work as possible to the NIC. This is simple for remote memory access (RMA) programming models where the source determines the destination address and the NIC simply deposits the data into the correct virtual address (as specified in the packet). However, this requires some form of distributed agreement on the destination buffer offset. Message passing simplifies the program logic and allows the target process to determine the local buffer location by calling \lstinline$recv$. This simplification at the programming level complicates the offloading of the matching because the correct destination address may only be known after the packets of the message arrive. Protocols to implement message passing over RDMA networks are typically executed by the CPU~\cite{Woodall:2006:HPR:2091359.2091382}. However, progressing these protocols requires synchronous interaction with the CPU.
Thus, communication/computation overlap for rendezvous messages as well as nonblocking collective operations is often hindered~\cite{hoefler-ib-threads}. These issues led to the development of specialized protocol offload engines that \texorpdfstring{\MakeLowercase{s}}{s}PIN{} generalizes to a universal network instruction set architecture (NISA). Figure~\ref{fig:protocols} illustrates the matching process for small messages (left) and large messages (right) as well as the cases where the receive is posted before the message arrives (top) or after the message arrives (bottom). The matching mechanism of Portals 4 and \texorpdfstring{\MakeLowercase{s}}{s}PIN{} allows for offloading progression and matching of small message transmissions. If the receive is posted before the first packet arrives, it installs a matching entry (filled circle) and the NIC will deposit the data into the correct memory at the target process upon receiving the message. Otherwise, as shown in case III, the packets will match a default action (hollow circle) that stores the message into a predetermined location. When the matching receive is called later, the CPU finds the message, copies the data into the receive buffer, and completes immediately. This saves a data copy in case I, while RDMA always performs a copy (similar to case III). If the data is too large to be buffered, the process is more complex because it requires synchronization between the sender and the receiver. Ideally, this is fully offloaded to the receiver's NIC (without the need to synchronize on the receive-side CPU). Barrett et al.~\cite{barrett2011using} propose a protocol for Portals 4 where the receiver monitors the number of received bytes; a message that writes more than the eager threshold triggers a get to the source that is matched to the correct pre-set-up memory.
Unfortunately, this protocol is not practical due to the following limitations: (1) it requires triggered gets to be set up for each of the $P-1$ potential sources, requiring $\Omega(P)$ memory per process; (2) it requires additional match-bits to keep a message counter that is used to identify the correct matching entry at the source; and (3) it does not support wildcard receives (e.g., \lstinline$MPI_ANY_SOURCE$). In \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, we implement a practical protocol that avoids all three limitations as illustrated in the right half of Figure~\ref{fig:protocols}. If the receive was called before the message arrived (case II), it sets up a header handler and a payload handler for the first message (filled circle at receiver). The header handler checks whether the message is large or small (determined by its size) and falls back to the normal Portals 4 handling for small messages. If the message is large, the handler interprets the first and second user-header as the total message size and the tag at the source. Then, the header handler uses these two fields to issue a get operation to the source. This get matches a descriptor that has been set up during the send (filled circle at source). The payload handler then deposits the payload of the message at the beginning of the host's memory descriptor. If the message arrived before the receive was called (case IV), the handler logic is executed by the main CPU. The main benefits of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} compared to RDMA are one less copy in the small-message case and completely asynchronous progress in the large-message case. We simulate the influence of fully-offloaded execution using LogGOPSim. The overhead of the local copy can be significant: the network deposits data at a rate of 50 GiB/s, while the local memory only delivers 150 GiB/s, and each copy adds a read and a write on top of the deposit itself. This can lead to a copy overhead of up to 30\%.
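The header-handler decision just described can be sketched as follows; the types, return codes, \lstinline$spin_get()$, and the eager threshold are hypothetical stand-ins, and only the decision logic follows the protocol in the text.

```c
#include <stddef.h>
#include <stdint.h>

/* Decision logic of the rendezvous header handler (case II).  All names
 * and the threshold value are assumptions for this sketch. */

#define EAGER_THRESHOLD 4096  /* assumed small/large cutoff in bytes */

enum { PROCEED_SMALL, GET_ISSUED };

typedef struct {
    size_t   msg_size;  /* first user header: total message size */
    uint64_t src_tag;   /* second user header: tag at the source */
    int      source;    /* source rank */
} header_t;

/* records the get we issue, for inspection in this sketch */
static struct { int issued; int target; uint64_t match; size_t len; } last_get;

static void spin_get(int target, uint64_t match_bits, size_t len)
{
    last_get.issued = 1; last_get.target = target;
    last_get.match  = match_bits; last_get.len = len;
}

static int rendezvous_header_handler(const header_t *h)
{
    if (h->msg_size <= EAGER_THRESHOLD)
        return PROCEED_SMALL;  /* fall back to normal Portals 4 handling */
    /* large message: pull the data with a get that matches the
     * descriptor pre-set-up at the sender, without CPU involvement */
    spin_get(h->source, h->src_tag, h->msg_size);
    return GET_ISSUED;
}
```

Because the get's match bits come from the message itself, no per-source state is needed, which is how the protocol sidesteps limitation (1).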
We only considered point-to-point operations because collective communication may use specialized protocols. We simulated traces for several real-world and proxy applications. Due to simulation overheads, we only execute relatively short runs between 20 and 600 seconds. Yet, we measure full application execution time including initialization and output (from \verb@MPI_Init@ to \verb@MPI_Finalize@); thus, we expect the speedups for longer runs to be higher. We discuss the results below and summarize them in Table~\ref{tab:apps}. \paragraph{MILC} The MIMD Lattice Computation (su3\_rmd) is used to study Quantum Chromodynamics (QCD), the theory of the strong interaction~\cite{bernard1991studying}, on a finite regular hypercubic grid of points in four dimensions with mostly neighbor interactions. We traced MILC on 64 processes where it spent 5.5\% of its execution time in point-to-point communications. In the simulation, MILC exchanged 5,743,212 messages and generated a total of 48M events. Fully offloaded matching protocols improved the overall execution time by 3.6\%. \paragraph{POP} The Parallel Ocean Program~\cite{jones2005practical} (POP) models general ocean circulation and is used in several climate modeling applications. The logically (mostly) rectangular problem domain is decomposed into two-dimensional blocks with nearest-neighbor communications and global exchanges. We traced POP on 64 processes where it spent 3.1\% of its execution time in point-to-point communications. In the simulation, POP exchanged 772,063,149 messages and generated a total of 1.5B events. Fully offloaded matching protocols improved the overall execution time by 0.7\%. \paragraph{coMD} The codesign app for molecular dynamics is part of the Mantevo proxy application suite~\cite{heroux2009improving}. It features the Lennard-Jones potential and the Embedded Atom Method potential.
We traced coMD on 72 processes where it spent 6.1\% of its execution time in point-to-point communications. In the simulation, coMD exchanged 5,337,575 messages and generated a total of 22M events. Fully offloaded matching protocols improved the execution time by 3.7\%. \paragraph{Cloverleaf} Cloverleaf is also part of the Mantevo proxy applications~\cite{heroux2009improving} and implements a two-dimensional Eulerian formulation to investigate materials under stress. We traced Cloverleaf on 72 processes where it spent 5.2\% of its execution time in point-to-point communications. In the simulation, Cloverleaf exchanged 2,677,705 messages and generated a total of 12M events. Fully offloaded matching protocols improved the overall execution time by 2.8\%. We remark that offloading message matching and asynchronous transmissions is not limited to MPI. For example, Kim et al.~\cite{Kim:2003:ETC:781498.781506} propose an asynchronous task offloading model that could also be implemented with \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. \subsection{MPI Datatypes} Communicated data is often not consecutive in host memory. For example, in a simple three-dimensional stencil code, only two of the six communicated halos are consecutive in host memory. Most applications use process-local copying of the data to marshal it into a consecutive buffer for sending and from a consecutive buffer for receiving. As Schneider et al.~\cite{mpi-ddt-benchmark} point out, this is often not considered part of the communication overhead even though it is, in practice, part of the communication. Furthermore, they showed that the data marshaling time can be up to 80\% of the communication time for real-world applications because it is performed at both the send and receive side. Data marshaling can be implemented in \texorpdfstring{\MakeLowercase{s}}{s}PIN{} without the extra memory copy, potentially reducing the communication overhead by up to 80\%.
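As a concrete illustration, consider the common case of a strided layout described by a $\langle$\textit{start, stride, blocksize}$\rangle$ tuple: the address arithmetic an offloaded unpack handler needs is a few integer operations per packet. A sketch with names of our choosing, not the paper's handler code:

```c
#include <stddef.h>

/* Address arithmetic for unpacking into a strided (vector) layout
 * <start, stride, blocksize>: map a byte offset in the packed message
 * to its destination offset in host memory, and count the contiguous
 * DMA writes one packet needs.  Names are illustrative. */

typedef struct { size_t start, stride, blocksize; } vec_t;

static size_t strided_offset(const vec_t *v, size_t msg_off)
{
    size_t block  = msg_off / v->blocksize;  /* which block */
    size_t within = msg_off % v->blocksize;  /* position inside it */
    return v->start + block * v->stride + within;
}

/* a payload handler issues one DMA write per (partial) block the packet
 * covers; each write of `chunk` bytes targets strided_offset(v, off) */
static int packet_dma_writes(const vec_t *v, size_t off, size_t len)
{
    int n = 0;
    while (len > 0) {
        size_t chunk = v->blocksize - off % v->blocksize;
        if (chunk > len) chunk = len;
        off += chunk; len -= chunk; n++;
    }
    return n;
}
```

For example, with a 2.5 KiB stride and 1.5 KiB blocks, a 4 KiB packet at the start of the message splits into three DMA writes.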
A datatype processing library could implement this transparently to the MPI user and upload a handler for each message. The HPUs would compute the correct offsets on the NIC and DMA the data into the final location. Here, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} not only improves the performance of the communication but also relieves the memory subsystem of the host. Without loss of generality, we focus on the most common case, strided access, which can be represented with MPI vector datatypes. Most of today's networking interfaces (e.g., Portals 4 or OFED) support iovecs to specify nonconsecutive accesses. However, they require $\mathcal{O}(n)$ storage to specify $n$ blocks to be copied, even for strided access. With vector datatypes, strided access can be expressed as an $\mathcal{O}(1)$ tuple of $\langle$\textit{start, stride, blocksize, count}$\rangle$, where \textit{count} blocks of size \textit{blocksize} are copied beginning from address \textit{start}. \begin{figure}[h!] \includegraphics[width=\columnwidth]{dia/datatypes_cropped.pdf} \vspace{-2.2em} \caption{Processing vector datatypes in payload handlers} \vspace{-1.2em} \label{fig:datatypes} \centering \end{figure} \begin{figure*}[t]% \centering \subfloat[Strided receive with varying blocksizes]{ \includegraphics[height=3.62cm]{rplot/plot_datatype_processed} \label{fig:datatype_results} }% \hfill \subfloat[Distributed RAID using RDMA (left) and \texorpdfstring{\MakeLowercase{s}}{s}PIN{} (right)]{ \includegraphics[height=3.62cm]{dia/storage_cropped.pdf} \label{fig:storage} }% \hfill \subfloat[Update time in a distributed RAID-5 system]{ \includegraphics[height=3.62cm]{rplot/plot_raid_processed} \label{fig:storage_data} }% \vspace{-1em} \caption{Strided datatype and distributed RAID performance comparing RDMA/Portals 4 and \texorpdfstring{\MakeLowercase{s}}{s}PIN{}}% \vspace{-1em} % \end{figure*} Figure~\ref{fig:datatypes} illustrates how the payload handlers copy the data into the correct positions for a strided layout.
The figure shows three packets of a stream that are deposited as a strided type into main memory. The user only specifies the tuple $\langle$\textit{start, stride=2.5 KiB, blocksize=1.5 KiB, count=8}$\rangle$ and the MTU is 4 KiB (the illustration omits headers for clarity). The packets can be processed by the payload handlers in any order or in parallel because each packet carries its offset in the message and the payload handler computes the correct offset in the block. Arrows represent DMA writes and their width indicates the size of the transaction. The full code is shown in Appendix~\ref{sec:datatypecode}. To demonstrate the performance of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} in practice, we simulate an execution of datatype processing (unpack) at the destination. For this, we choose a fixed message size of 4 MiB and vary the blocksize. We keep the stride fixed at twice the blocksize. Figure~\ref{fig:datatype_results} shows the results. The DMA overhead for small transfers dominates up to a blocksize of 256; beyond that, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} is able to deposit the data nearly at line rate (50 GiB/s) while RDMA remains at a bandwidth of around 8.7 GiB/s due to the additional strided copies. \subsection{Distributed RAID Storage} After demonstrating \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s benefits for parallel applications, we now show that it can also benefit system software on large-scale compute systems. For example, filesystems often use replication at the object storage layer involving multiple nodes to improve reliability of the data~\cite{weber2007high}. We assume that the inode-to-block lookup has been performed by the client that addresses the correct blocks at the storage server. Blocks are accessed through the network and if a block is updated, the parity block on another server needs to be updated as well.
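The parity maintenance this requires, $p' = p \oplus n \oplus n'$, is plain XOR arithmetic that a handler can apply directly in host memory. A sketch using the common delta-shipping formulation (our choice for illustration, not necessarily the paper's exact protocol): the data node sends $d = n \oplus n'$ and the parity node folds $d$ into its block with one read and one write.

```c
#include <stddef.h>
#include <stdint.h>

/* XOR parity maintenance, p' = p ^ n ^ n', in the delta-shipping
 * formulation (an assumption of this sketch): the data node ships
 * d = n ^ n', and the parity handler folds d into the parity block
 * in host memory, needing only one read and one write per block. */

/* at the data node: compute the delta to ship to the parity node */
static void xor_delta(const uint8_t *n_old, const uint8_t *n_new,
                      uint8_t *delta, size_t len)
{
    for (size_t i = 0; i < len; i++)
        delta[i] = n_old[i] ^ n_new[i];
}

/* at the parity node (payload handler): fold the delta into the parity */
static void parity_update(uint8_t *parity, const uint8_t *delta, size_t len)
{
    for (size_t i = 0; i < len; i++)
        parity[i] ^= delta[i];
}
```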
The computation is simple: $p'= p \oplus n' \oplus n$, where $p$ and $p'$ are the old and new parity blocks and $n$ and $n'$ are the old and new data blocks, respectively. Filesystem nodes are often accessed through RDMA (e.g., object data servers in Lustre). Since replication is totally transparent to the client, RDMA cannot be used directly because it would reply before the parity node is updated. Thus, such systems implement a more complex protocol using the target's CPU as shown in the left part of Figure~\ref{fig:storage}. With \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, the server NIC can issue requests to update the parity without involving the server's CPU. Furthermore, the parity node's NIC can apply the update in host memory. This can easily be used to implement a traditional disk-backed storage server or a distributed memory system similar to RAMCloud~\cite{ousterhout2010case}. In both cases, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} offloads most of the storage protocol to the NIC. We show a simple latency/bandwidth benchmark comparing in-memory storage using four data nodes and one parity node in a RAID-5 configuration. For this test, we update contiguous memory of growing size strided across the four data nodes and measure the time until all ACKs are received (after updating the parity node). Figure~\ref{fig:storage_data} shows the performance comparing RDMA and \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. The results demonstrate comparable performance for small messages and the significantly higher bandwidth of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} for large block transfers, the common case for parallel filesystems. To demonstrate \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s influence on real-world workloads, we simulate five traces obtained from the Storage Performance Council~\cite{spc-traces}. The first two traces represent OLTP applications running at a large financial institution.
The remaining three are I/O traces from a popular search engine. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} improves the processing time of all traces by between 2.8\% and 43.7\%; the integrated \texorpdfstring{\MakeLowercase{s}}{s}PIN{} NIC with the financial traces showed the largest speedup. The full handler code is shown in Appendix~\ref{sec:reedsolomoncode}. \subsection{Other Use Cases} We have demonstrated all key features of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} in the previous sections. However, many applications, tools, and system services can benefit from \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. Here, we outline other use cases for which we have to omit a detailed analysis due to space restrictions. \paragraph{Distributed Key-Value Stores} Distributed key-value stores provide a storage infrastructure where data is identified by keys that simplify the application's storage interface~\cite{fitzpatrick2004distributed}. They can be compared to large distributed two-level hash tables: the first level determines the target node and the second level determines a location at that node. Let $(k,v)$ represent a key-value pair with $k\in \mathcal{K}$ and $v\in \mathcal{V}$. We assume that there exist two hash functions $H_1(x): \mathcal{K} \mapsto \{0..P-1\}$ and $H_2(x): \mathcal{K} \mapsto \{0..N-1\}$, where $P$ and $N$ are the number of nodes and the hash-table size per node, respectively. We assume $H_1$ and $H_2$ are reasonably balanced with respect to the expected key distribution but not generally free of conflicts. Various high-performance RDMA implementations with varying complexity exist, ranging from replicated state machines~\cite{Poke:2015:DHS:2749246.2749267} and HPC storage layers~\cite{Docan:2010:DIC:1851476.1851481} to distributed databases~\cite{Dragojevic:2014:FFR:2616448.2616486,Dragojevic:2015:NCD:2815400.2815425}.
We now describe how \texorpdfstring{\MakeLowercase{s}}{s}PIN{} could be used to offload the insert function: a client that wants to insert the KV pair $(k,v)$ first computes $H_1(k)$ to determine the target node $p$. Then, it computes $H_2(k)$ and crafts a message $(H_2(k),len(k),k,v)$ (where $len(k)$ is the size of the key in bytes) to be sent to node $p$. We use a header handler to allocate memory to deposit $v$ and link it to the correct position $H_2(k)$ in the hash table. Depending on the hash table structure (e.g., closed or open), the handler may need to walk through a list in host memory. To avoid backing up the network, the header handler would abort after a fixed number of steps and deposit the work item for later processing by the main CPU. Other functions such as get or delete can be implemented in a similar way. \paragraph{Conditional Read} Many distributed database problems scan remote tables using simple attributes. For example, the statement \texttt{SELECT name FROM employees WHERE id = 100} may cause a full scan of the (distributed) table \texttt{employees}. Reading all data of this table via RDMA would be a waste of network bandwidth. Since our current handler definition does not allow interception and filtering of the data for a get operation, we implement our own request-reply protocol: the request message contains the filter criterion and a memory range, and the reply message contains only the data that matches. The more complex query offload model described by Kumar et al.~\cite{Kumar:2006:EPN:1898953.1899010} can also be implemented in \texorpdfstring{\MakeLowercase{s}}{s}PIN{}. \paragraph{Distributed Transactions} Distributed transactions require bookkeeping of the addresses that have been accessed during the transaction. Complex protocols using memory protection mechanisms are employed for multi-core CPUs~\cite{Herlihy:1993:TMA:173682.165164}. We can use \texorpdfstring{\MakeLowercase{s}}{s}PIN{} to log remote accesses to local memory.
For this, we introspect the headers of all incoming RDMA packets in the header handler and record the accesses in main memory. The introspection can be performed at line rate, and transaction management is then performed at commit time on the host by evaluating the logs. \paragraph{Simple Graph Kernels} Many graph algorithms invoke very simple functions for each vertex. For example, a BFS only checks whether a vertex has been visited before and assigns it a number at the first visit. Single-source shortest path (SSSP) algorithms update a vertex's distance with the minimum of its current distance and the preceding vertex's distance plus the weight of the connecting edge. In distributed settings, the traversal can cross node boundaries. Then, messages are sent from the originating vertex to the destination vertex on the remote node. A message contains the new distance (i.e., the distance of the source vertex plus the edge weight). The remote handler then (atomically) checks if the destination vertex needs to be updated and conditionally performs the update. This is typically implemented by receiving messages in batches into buffers and processing them on the main CPU. Yet, this requires storing the message data to memory and loading it back just to discard it after the update. With \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, we can define an offloaded handler that processes the updates immediately and saves memory bandwidth. \paragraph{Fault-tolerant Broadcast} There are many different ways to implement a fault-tolerant broadcast. Some rely on failure detectors and a two-phase broadcast-and-check protocol, where the root restarts the broadcast with a different tree if nodes have failed~\cite{buntinas2012scalable}. Others redundantly send messages in a virtual topology such as a binomial graph~\cite{angskun2007binomial}. The former rely on failure detectors, which cannot easily be implemented in current RDMA or Portals 4 networks.
The latter guarantee reliable delivery for fewer than $\log_2 P$ failures and often outperform the broadcast-and-check protocols in practice. Usually, these protocols are implemented with the help of the main CPU to forward messages. This means that all $\log_2 P$ redundant messages must be delivered to host memory. We can use \texorpdfstring{\MakeLowercase{s}}{s}PIN{} to accelerate such protocols by only delivering the first message to the user. This would enable a transparent reliable broadcast service offered by the network. \section{Related work} We already discussed the relation of \texorpdfstring{\MakeLowercase{s}}{s}PIN{} to active messages and switch-based packet processing such as P4 in Section~\ref{sec:background}. Programmable NICs have existed in special-purpose environments. For example, Quadrics QSNet~\cite{petrini2002quadrics} offered a programming interface that was used to offload collectives~\cite{1303191}, and an early Portals implementation~\cite{pedretti-offload} used a programmable NIC. However, QSNet had to be programmed at a very low level and was rather limited, which hindered wider adoption. Myrinet provided open firmware that allowed researchers to implement their own modules in C on the specialized NIC cores~\cite{1392618}. As opposed to these constrained solutions, \texorpdfstring{\MakeLowercase{s}}{s}PIN{} defines a simple offload interface for network operations, similar to the ones that have widely been adopted for compute offloading. High-speed packet processing frameworks used for router implementations, software-defined networking~\cite{1392618}, and P4~\cite{bosshart2014p4} provide similar functions. They relate well to \texorpdfstring{\MakeLowercase{s}}{s}PIN{} in that the key idea is to apply simple functions to packets. However, these frameworks are not designed to interact with host memory, and their execution units are stateless and thus much less powerful than \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s HPUs.
\section{Discussion} \paragraph{Will \texorpdfstring{\MakeLowercase{s}}{s}PIN{} NICs be built?} With \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, we define an offloading interface for NICs (which we call NISA) and we outline the requirements for a NIC microarchitecture. Using our results from simulating ARM CPUs, a vendor could immediately build such a NIC. In fact, we are aware of several vendors that will release smart NICs this year that can be programmed to support \texorpdfstring{\MakeLowercase{s}}{s}PIN{} with a similar microarchitecture. \texorpdfstring{\MakeLowercase{s}}{s}PIN{} enables the development of a vendor-independent ecosystem, just like MPI, where third parties can develop libraries for domain-specific handlers. Network acceleration could then be offered, very much like NVIDIA's cuBLAS or NCCL libraries, for domains outside of HPC, such as machine learning and data analytics, to impact bigger markets. \paragraph{Can \texorpdfstring{\MakeLowercase{s}}{s}PIN{} work with libraries other than Portals 4?} Yes; for example, it would be straightforward to define \texorpdfstring{\MakeLowercase{s}}{s}PIN{}'s handlers for OFED or Cray's uGNI. Here, the three handlers would not be attached to a matching entry but to a queue pair, and they would be invoked for every arriving message. One can also define \texorpdfstring{\MakeLowercase{s}}{s}PIN{} for connection-less protocols such as SHMEM or Cray's DMAPP. Here, one would define handlers to be executed for messages arriving at certain contexts or address ranges (similar to an ME). We chose to demonstrate the functionality with the most complex interface, Portals 4; the principles remain the same for others. \paragraph{Can \texorpdfstring{\MakeLowercase{s}}{s}PIN{} be executed on network switches?} The definition of handlers allows them to be executed at any switch in the network, with some limitations.
Since switches are not associated with a host, the handler functions that put to and get from host memory are not available, and DMA commands cannot be issued. Yet, the handlers can manipulate packets and use their shared memory to keep state. We believe that this extension would be simple, but we defer a detailed analysis of use cases to future work. \paragraph{What if \texorpdfstring{\MakeLowercase{s}}{s}PIN{} handlers run too long?} In general, handlers may run for a very long time, and incorrect handlers may not even terminate. We recommend killing handlers after a fixed number of cycles and moving the interface into flow control. However, this flow-control behavior is specific to Portals 4. In general, one can imagine various ways to deal with slow handlers. We do not recommend backing up data into the (most likely lossless) interconnect because a bad handler may block the whole network. Instead, arriving packets that cannot be processed can be dropped, and the user, once notified of this event, can tune the handlers until they perform at line rate. \section{Summary and Conclusions} \balance We define \texorpdfstring{\MakeLowercase{s}}{s}PIN{}, a vendor-independent and portable interface for network acceleration. We discuss a reference implementation for Portals 4 and develop and share a simulation infrastructure that combines a network and a microarchitecture simulator to analyze the benefits of network offloading\footnote{https://spcl.inf.ethz.ch/Research/Parallel\_Programming/sPIN}. We show several use cases of how it can be used in real-world parallel applications as well as system services for data management. Our simulations demonstrate significant speedups for real applications as well as important kernels.
We believe that \texorpdfstring{\MakeLowercase{s}}{s}PIN{} will change the way we approach networking and how we design NICs in the future---it will make exposing the specialized data movement cores on the NIC to the user simple and enable the development of a sophisticated ecosystem. \subsection*{Acknowledgments} TH edited the manuscript, developed the original idea, and specified the interface with input from RB and RG. SG developed the simulation toolchain and implemented the first prototype. KT implemented the handler codes and performed all experiments. We thank Keith Underwood, James Dinan, Charles Giefer, and Sayantan Sur for helpful discussions during the initial design. The interface and theory were developed during a research stay of the first author at Sandia National Laboratories. \bibliographystyle{ACM-Reference-Format}
\section{Critical metrics of the eigenvalue functionals} Let $M$ be a closed manifold of dimension $n \geq 2$ and let $k$ be a positive integer. Before introducing the notion of critical metric of the functional $\lambda_{k}$, notice that this functional is not scaling invariant. Therefore, we will restrict $\lambda_{k}$ to the set of metrics of given volume. In view of Theorem \ref{main}, a natural way to introduce the notion of critical metric is the following: \begin{definition} A metric $g$ on $M$ is said to be ``critical'' for the functional $\lambda_{k}$ if, for any volume-preserving analytic deformation $(g_{t})_{t}$ of $g$ with $g_{0}=g$, the left and the right derivatives of $\lambda_{k}(g_{t})$ at $t=0$ satisfy $$ {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^-} \times {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^+} \leq 0. $$ \end{definition} It is easy to see that $$ {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^+} \leq 0 \leq {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^-}\Longleftrightarrow \lambda_{k}(g_{t}) \leq \lambda_{k}(g)+o(t)$$ and $$ {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^-} \leq 0 \leq {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^+}\Longleftrightarrow \lambda_{k}(g_{t}) \geq \lambda_{k}(g)+o(t).$$ Therefore, $g$ is critical for $\lambda_{k}$ if, for any volume-preserving analytic deformation $(g_{t})_{t}$ of $g$, one of the following inequalities holds: $$ \lambda_{k}(g_{t}) \leq \lambda_{k}(g) +o(t)$$ or $$ \lambda_{k}(g_{t}) \geq \lambda_{k}(g) +o(t).$$ Of course, if $g$ is a local maximizer or a local minimizer of $\lambda_{k}$, then $g$ is critical in the sense of the previous definition. Throughout the sequel, we will denote by $S^{2}_{0}(M,g)$ the space of covariant $2$-tensors $h$ satisfying $\int_{M} \hbox {trace}_{g}h v_{g}= \int_{M} (g,h) v_{g}=0$, endowed with its natural $L^2$ norm induced by $g$.
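The role of the normalization defining $S^{2}_{0}(M,g)$ can be seen from the first variation of the volume. If $(g_{t})_{t}$ is a volume-preserving deformation of $g$ with $h={d \over dt}g_{t}\big|_{t=0}$, then, since ${d \over dt}v_{g_{t}}\big|_{t=0}={1 \over 2}(\hbox{trace}_{g}h)\, v_{g}$, we get $$ 0={d \over dt}\mbox{vol}(g_{t})\Big|_{t=0}={1 \over 2}\int_{M}\hbox{trace}_{g}h \, v_{g}={1 \over 2}\int_{M}(g,h)\, v_{g},$$ so that $h$ automatically belongs to $S^{2}_{0}(M,g)$.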
\begin{proposition}\label{prop 31} If $g$ is a critical metric for the functional $\lambda_{k}$ on $M$, then, $\forall h \in S^{2}_{0}(M,g)$, the quadratic form $$ Q_{h}(u)=-\int_M\left( du\otimes du+ {1 \over 4} \Delta_{g}u^{2}g,h\right) v_g$$ is indefinite on $E_{k}(g).$ \end{proposition} \begin{proof} Let $h \in S^{2}_{0}(M,g)$. The deformation of $g$ defined for small $t$ by $ g_{t}=\displaystyle{\left[{\mbox{vol}(g) \over \mbox{vol}(g+th)}\right]^{2/n}(g+th)}$, where $\mbox{vol}(g)$ is the Riemannian volume of $(M,g)$, is volume-preserving and depends analytically on $t$ with ${d \over dt}g_{t}\big|_{t=0}=h$. Using Theorem \ref{main}, we see that, if $g$ is critical, then the operator $P_{k,h}$ admits a nonnegative and a nonpositive eigenvalue on $E_{k}(g)$, which means that the quadratic form $Q_{h}$ is indefinite (Lemma \ref{lem 21}). \end{proof} In the case where $\lambda_{k}(g)>\lambda_{k-1}(g)$ or $\lambda_{k}(g)<\lambda_{k+1}(g)$, one can show that the converse of Proposition \ref{prop 31} is also true. Indeed, we have the following \begin{proposition}\label{prop 32} Let $g$ be a Riemannian metric on $M$ such that $\lambda_{k}(g)>\lambda_{k-1}(g)$ or $\lambda_{k}(g)<\lambda_{k+1}(g)$. Then $g$ is critical for the functional $\lambda_{k}$ if and only if, $\forall h \in S^{2}_{0}(M,g)$, the quadratic form $Q_{h}$ is indefinite on $E_{k}(g)$. \end{proposition} \begin{proof} Let $(g_{t})_{t}$ be an analytic volume-preserving deformation of $g$ and let $h={d \over dt} g_{t}\big|_{t=0}$. Since $\mbox{vol}(g_{t})$ is constant with respect to $t$, the tensor $h$ belongs to $S^{2}_{0}(M,g)$ (indeed, $\int_{M} (g,h) v_{g}=2\,{d \over dt} \hbox{vol}(g_{t})\big|_{t=0}=0$). The indefiniteness of $Q_{h}$ implies that the operator $P_{k,h}=\Pi_{k} \circ \Delta'$ admits both non-negative and non-positive eigenvalues on $E_{k}(g)$ (see Lemma \ref{lem 21}). The result follows immediately from Theorem \ref{main} (iii) and (iv).
\end{proof} The indefiniteness of $Q_{h}$ on $E_{k}(g)$ for all $h \in S^{2}_{0}(M,g)$ can be interpreted intrinsically in terms of the eigenfunctions of $\lambda_{k}(g)$ as follows. \begin{lemma}\label {lem 31} Let $g$ be a Riemannian metric on $M$. The two following conditions are equivalent: \begin{enumerate} \item[i)] $\forall h \in S^{2}_{0}(M,g)$, the quadratic form $Q_{h}$ is indefinite on $E_{k}(g)$. \item[ii)] There exists a finite family $\left\{u_{1}, \cdots , u_{d}\right\}\subset E_{k}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ such that $$ \sum_{i \leq d} du_{i} \otimes du_{i}=g.$$ \end{enumerate} \end{lemma} \begin{proof} The proof of ``(i) implies (ii)'' uses the same arguments as in the proof of Theorem 1.1 of \cite{EI1}. For the sake of completeness, we will recall the main steps. First, we introduce the convex set $K \subset S^{2}(M,g)$ given by $$K= \left\{\sum_{j \in J} \left[du_{j}\otimes du_{j} + {1 \over 4} \Delta_{g}u_{j}^{2}\, g\right]; u_{j}\in E_{k}(g), \, J \subset \mathbb N, \, J \,\hbox{finite} \right\}.$$ Let us first show that $g\in K$. Indeed, if $g \notin K$, then, applying a classical separation theorem in the finite-dimensional subspace of $S^{2}(M,g)$ generated by $K$ and $g$, endowed with the $L^{2}$ inner product induced by $g$, we deduce the existence of a $2$-tensor $h\in S^{2}(M,g)$ such that $\int_{M}(g,h) v_{g}>0$ and, $\forall\ T \in K, \int_{M} (T,h) v_{g} \leq 0. $ The tensor $$h_{0}= h- \left(\displaystyle{{1 \over n \ \mbox{vol}(g)}}\int_{M} (g,h) v_{g}\right)g$$ belongs to $S^{2}_{0}(M,g)$ and we have, $\forall u \in E_{k}(g)$, $u \ne 0$, $$Q_{h_{0}}(u)= - \int_{M} (du \otimes du+{1 \over 4}\Delta_{g}u^{2}g,h)v_{g} + {\int_{M}(g,h)v_{g} \over n \,\mbox{vol}(g)} \int_{M} |du|^{2} v_{g}$$ $$\geq {\lambda_{k}(g) \over n \,\mbox{vol}(g)} \int_{M} (g,h)v_{g} \int_{M}u^{2} v_{g}.$$ Since $\int_{M} (g,h) v_{g} > 0$, the quadratic form $Q_{h_{0}}$ is positive definite, which contradicts the assumption (i).
Now, $g\in K$ means that there exist $u_{1}, \cdots , u_{d} \in E_{k}(g)$ such that \begin{equation}\label{5} \sum_{i \leq d} \left(du_{i} \otimes du_{i} + {1 \over 4} \Delta_{g}u_{i}^{2}\, g\right)=g. \end{equation} Hence, since $\Delta_{g} u_{i}^{2}=2(\lambda_{k}(g)u_{i}^{2}-|du_{i}|^{2})$, we obtain after taking the trace in (\ref{5}), $${\lambda_{k}(g) \over 2}\sum_{i \leq d}u_{i}^{2}= 1+ {n-2 \over 2n} \sum_{i \leq d} |du_{i}|^{2}.$$ For $n=2$, we immediately get $\sum_{i \leq d} u_{i}^{2}= {2 \over \lambda_{k}(g)}$ and, for $n\ge 3$, we consider the function $f:=\sum_{i \leq d}u_{i}^{2} - {n \over \lambda_{k}(g)}$ and observe that it satisfies $$ (n-2) \Delta_{g}f= 2(n-2)(\lambda_{k}(g) \sum_{i \leq d} u_{i}^{2} - \sum_{i \leq d}|du_{i}|^{2})=-4\lambda_{k}(g) f.$$ Thus, $f=0$ (the Laplacian being a non-negative operator) and, then, $\forall n\ge 2$, $\sum_{i \leq d} u_{i}^{2}= {n \over \lambda_{k}(g)}$. Replacing in (\ref{5}), we obtain $$ \sum_{i \leq d} du_{i} \otimes du_{i}=g.$$ Conversely, let $u_{1}, \cdots, u_{d}$ be as in (ii). This means that the map $x\in M \longmapsto u(x)=(u_{1}(x), \cdots , u_{d}(x)) \in \mathbb R^{d}$ is an isometric immersion. The vector $\Delta u(x)= (\Delta u_{1}(x),\cdots , \Delta u_{d}(x))= \lambda_{k}(g) u(x)$ represents the mean curvature vector field of the immersed submanifold $u(M)$. Hence, $\forall x \in M$, the position vector $u(x)$ is normal to $u(M)$ which implies that $u(M)$ is contained in a sphere of $\mathbb R^{d}$ centered at the origin. Thus, $\sum_{i \leq d}u_{i}^{2}$ is constant on $M$. Consequently, since $\sum_{i \leq d} du_{i} \otimes du_{i}=g$ and $\Delta_{g}\big(\sum_{i \leq d}u_{i}^{2}\big)=0$, we have, $\forall h \in S^{2}_{0}(M,g),$ $$ \sum_{i \leq d} Q_{h}(u_{i})= -\int_{M}\Big(\sum_{i \leq d} du_{i} \otimes du_{i}+ {1 \over 4} \Delta_{g}\big(\sum_{i \leq d}u_{i}^{2}\big)\, g\,,h\Big) v_{g}= -\int_{M} (g,h)\, v_{g}=0.$$ It follows that $Q_{h}$ is indefinite on $E_{k}(g)$. \end{proof} Propositions \ref{prop 31}, \ref{prop 32} and Lemma \ref {lem 31} lead to the following characterization of critical metrics of $\lambda_{k}$: \begin{theorem} \label {th 31} Let $g$ be a Riemannian metric on $M$.
\begin{enumerate} \item[i)] If $g$ is a critical metric of the functional $\lambda_{k}$, then there exists a finite family $\left\{u_{1}, \cdots , u_{d}\right\} \subset E_{k}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ such that $$ \sum_{i \leq d} du_{i} \otimes du_{i} =g. $$ \item[ii)] Assume that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$. Then $g$ is a critical metric of the functional $\lambda_{k}$ if and only if there exists a finite family $\left\{u_{1}, \cdots , u_{d}\right\} \subset E_{k}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ such that $$ \sum_{i \leq d} du_{i} \otimes du_{i} =g. $$ \end{enumerate} According to Theorem \ref{th 31} (ii), the standard metrics $g$ of compact rank one symmetric spaces are critical metrics of the functionals $\lambda_{k}$, for any $k$ such that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$. More generally, this is the case for all compact Riemannian homogeneous spaces with irreducible isotropy representation. Indeed, if $\left\{u_{1}, \cdots , u_{d}\right\}$ is an $L^{2}(g)$-orthonormal basis of $E_{k}(g)$, then the tensor $\sum_{i \leq d}du_{i} \otimes du_{i}$ is invariant under the isometry group action, which implies that it is proportional to $g$ (Schur's Lemma). In \cite{EI3}, we studied the notion of critical metrics of the trace of the heat kernel $Z_{g}(t)= \sum e^{-\lambda_{k}(g) t}$, considered as a functional on the set of metrics of given volume, and we obtained various characterizations of these critical metrics. An immediate consequence of Theorem \ref{th 31} and \cite[Theorem 2.2]{EI3} is the following \begin{corollary} Let $g$ be a Riemannian metric on $M$. If $g$ is a critical metric of the trace of the heat kernel for every time $t>0$, then $g$ is a critical metric of the functional $\lambda_{k}$ for all $k$ such that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$.
\end{corollary} In particular, the flat metrics $g_{sq}$ and $g_{eq}$ on the 2-torus $\mathbb{T}^{2}$, associated with the square lattice $\mathbb Z^{2}$ and the equilateral lattice $\mathbb Z (1,0)\oplus \mathbb Z (1/2,\sqrt{3}/2)$ respectively, are critical metrics of the functionals $\lambda_{k}$ for all $k$ such that $\lambda_{k}> \lambda_{k-1}$ or $\lambda_{k}< \lambda_{k+1}$ with respect to the corresponding metric (see \cite{EI3}). Other examples of critical metrics can be obtained as Riemannian products of the previous examples (see \cite{EI3}). As we noticed in the proof of Lemma \ref {lem 31}, the condition $ \sum_{i \leq d} du_{i} \otimes du_{i} =g$, with $u_{i} \in E_{k}(g)$, implies that the map $u=(u_{1}, \cdots , u_{d})$ is an isometric immersion of $(M,g)$ into a $(d-1)$-dimensional sphere. In particular, the rank of $u$ is at least $n$. Therefore we have the following \begin{corollary} If $g$ is a critical metric of the functional $\lambda_{k}$, then $$ \dim E_{k}(g) \geq \dim M+1.$$ Moreover, the equality implies that $(M,g)$ is isometric to a Euclidean sphere. \end{corollary} In the particular case where a metric $g$ is a local maximizer of $\lambda_{k}$ (that is, $\lambda_{k}(g_{t})\leq \lambda_{k}(g)$ for any volume-preserving deformation $(g_{t})_{t}$ of $g$), we have the additional necessary condition that $ \lambda_{k}(g)=\lambda_{k+1}(g)$ (see Proposition \ref{cor 43} below). For a local minimizer, we have $ \lambda_{k}(g)=\lambda_{k-1}(g)$. In particular, the functional $\lambda_{1}$ admits no local minimizing metric. This result was obtained by the authors in \cite{EI1} using different arguments. We end this section with the following result in the spirit of Berger's work \cite{B}: \begin{corollary} \label{cor33} Let $g$ be a Riemannian metric on $M$.
Let $p \geq 1$ and $q \geq p$ be two integers such that $$ \lambda_{p-1}(g) < \lambda_{p}(g)=\lambda_{p+1}(g)= \dots = \lambda_{q}(g)< \lambda_{q+1}(g).$$ The metric $g$ is critical for the functional $ \sum_{i=p}^{q}\lambda_{i}$ if and only if there exists an $L^{2}(M,g)$-orthonormal basis $u_{1}, u_{2}, \dots , u_{m}$ of $E_{p}(g)$ such that $\sum_{i=1}^{m}du_{i} \otimes du_{i}$ is proportional to $g$. \end{corollary} \begin{proof} The multiplicity of $\lambda_{p}(g)$ is $m=q-p+1$. Let $(g_{t})_{t}$ be a volume-preserving analytic deformation of $g$ and $h={d \over dt} g_{t}\big|_{t=0} \in S_{0}^{2}(M,g)$. Let $\Lambda_{1}(t), \cdots, \Lambda_{m}(t)$ and $v_{1}(t), \cdots, v_{m}(t)$ be the families of eigenvalues and orthonormal eigenfunctions of $\Delta_{g_{t}}$ depending analytically on $t$ and such that $\Lambda_{1}(0)= \cdots = \Lambda_{m}(0)=\lambda_{p}(g)$, as in the proof of Theorem \ref{main}. For sufficiently small $t$, one has $$ \sum_{i=p}^{q}\lambda_{i}(g_{t})=\sum_{i=1}^{m}\Lambda_{i}(t).$$ Hence, $\sum_{i=p}^{q}\lambda_{i}(g_{t})$ is differentiable at $t=0$ and one has (see the proof of Theorem \ref{main} and Lemma \ref{lem 21}), $ \frac{d}{dt} \sum_{i=p}^{q}\lambda_{i}(g_{t})\big|_{t=0} = \sum_{i=1}^{m}\Lambda'_{i}(0)= \sum_{i=1}^{m} Q_{h}(v_{i})$, with $v_{i}:=v_{i}(0)$. Therefore, $g$ is critical for $\sum_{i=p}^{q}\lambda_{i}$ if and only if, $ \forall h \in S_{0}^{2}(M,g), \sum_{i=1}^{m} Q_{h}(v_{i})=0$. As in the proof of Lemma \ref{lem 31}, this last condition means that $\sum_{i \le m} dv_{i} \otimes dv_{i}$ is proportional to $g$. \end{proof} \section{Critical metrics of the eigenvalue functionals in a conformal class} Let $M$ be a closed manifold of dimension $n \geq 2$.
For any Riemannian metric $g$ on $M$, we will denote by $C(g)$ the set of metrics which are conformal to $g$ and have the same volume as $g$, i.e., $$ C(g)= \left\{e^{\alpha}g \,;\, \alpha \in \mathcal{C}^{\infty}(M)\, \hbox {and} \; \mbox{vol}(e^{\alpha}g)= \mbox{vol}(g)\right\}.$$ Let $k$ be a positive integer. The purpose of this section is to study critical metrics of the functional $\lambda_{k}$ restricted to a conformal class $C(g)$. \begin{definition} A metric $g$ is said to be critical for the functional $\lambda_{k}$ restricted to $C(g)$ if, for any analytic deformation $\left\{g_{t}= e^{\alpha_{t}}g\right\}\subset C(g)$ with $g_{0}=g$, we have $$ {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^-} \times {d \over dt}\lambda_{k}(g_{t})\big|_{t=0^+} \leq 0.$$ \end{definition} In the sequel, we denote by $\mathcal{A}_{0}(M,g)$ the set of smooth functions $\varphi$ with zero mean on $M$, that is, $\int_{M} \varphi \, v_{g}=0$. In the spirit of Propositions \ref{prop 31} and \ref{prop 32}, we obtain, in the conformal setting, the following \begin{proposition}\label{prop 41} Let $g$ be a Riemannian metric on $M$. \begin{enumerate} \item[i)] If $g$ is a critical metric of the functional $\lambda_{k}$ restricted to $C(g)$, then, $\forall \varphi \in \mathcal{A}_{0}(M,g)$, the quadratic form $$ q_{\varphi}(u)= \int_{M} (\lambda_{k}(g)u^{2}-{n-2 \over n}|du|^{2})\varphi\, v_{g}$$ is indefinite on $E_{k}(g)$. \item[ii)] Assume that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$. The metric $g$ is critical for the functional $\lambda_{k}$ restricted to $C(g)$ if and only if, $\forall \varphi \in \mathcal{A}_{0}(M,g)$, the quadratic form $q_{\varphi}$ is indefinite on $E_{k}(g)$. \end{enumerate} \end{proposition} \begin{proof} i) Let $\varphi \in \mathcal{A}_{0}(M,g)$.
The conformal deformation of $g$ given by $$ g_{t}:= \left[ {\mbox{vol}(g) \over \mbox{vol}(e^{t\varphi}g)}\right]^{2 \over n} e^{t \varphi}g,$$ belongs to $C(g)$ and depends analytically on $t$ with $ {d \over dt} g_{t}\big|_{t=0}= \varphi g $. Following the arguments of the proof of Proposition \ref{prop 31}, we show that the criticality of $g$ for $\lambda_{k}$ restricted to $C(g)$ implies the indefiniteness of the quadratic form $Q_{\varphi g}$ on $E_{k}(g)$. Applying Lemma \ref{lem 21}, we observe that $Q_{\varphi g}= -\frac{n}{2}q_{\varphi}$. ii) Let $g_{t}= e^{\alpha_{t}}g \in C(g)$ be an analytic deformation of $g$. Since $\mbox{vol}(g_{t})$ is constant with respect to $t$, the function $\varphi = {d \over dt}\alpha_{t}\big|_{t=0}$ belongs to $\mathcal{A}_{0}(M,g)$. Applying Theorem \ref{main} (iii) and (iv) and Lemma \ref{lem 21} with $h=\varphi g$, we get the result. \end{proof} \begin{lemma}\label{lem 41} Let $g$ be a Riemannian metric on $M$. The two following conditions are equivalent: \begin{enumerate} \item[i)] $\forall \ \varphi \in \mathcal{A}_{0}(M,g)$, the quadratic form $q_{\varphi}$ is indefinite on $E_{k}(g)$. \item[ii)] There exists a finite family $\left\{u_{1}, \cdots , u_{d}\right\} \subset E_{k}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ such that $$ \sum_{i \leq d} u_{i}^{2}=1.$$ \end{enumerate} \end{lemma} \begin{proof} ``(i) implies (ii)'': We introduce the convex set $$ H=\left\{\sum_{i \in I}\left[\lambda_{k}(g)u_{i}^{2}-{n-2 \over n}|du_{i}|^{2}\right]; \, u_{i} \in E_{k}(g), \, I \subset \mathbb N, \, I \,\hbox {finite}\right\}.$$ Using the same arguments as in the proof of Lemma \ref{lem 31}, we show that the constant function $1$ belongs to $H$. Hence, after multiplying the $u_{i}$'s by a suitable constant, there exist $u_{1}, \cdots , u_{d} \in E_{k}(g)$ such that $$ \sum_{i \leq d} (\lambda_{k}(g)u_{i}^{2}-{n-2 \over n}|du_{i}|^{2})=\frac{2}{n}\lambda_{k}(g). $$ For $n=2$, we immediately get $\sum_{i \leq d} u_{i}^{2}= 1$.
For $n\ge 3$, we set $f= \sum_{i \leq d}u_i^2 -1 $ and get, after a straightforward calculation, $$ {n-2 \over 4} \Delta_{g}f= - \lambda_{k}(g) f.$$ Thus, $f=0$ and $\sum_{i \leq d}u_{i}^{2}=1$. \noindent ``(ii) implies (i)'': let $u_{1}, \cdots , u_{d} \in E_{k}(g)$ be such that $\sum_{i \leq d}u_{i}^{2}=1$. One has $$ \sum_{i \leq d} |du_{i}|^{2}= -{1 \over 2}\Delta_{g}\sum_{i \leq d}u_{i}^{2}+ \lambda_{k}(g) \sum_{i \leq d}u_{i}^{2}= \lambda_{k}(g).$$ Therefore, $\forall \varphi \in \mathcal{A}_{0}(M,g)$, $$\sum_{i \leq d} q_{\varphi}(u_{i})= \frac{2}{n}\int_{M} \lambda_{k}(g) \varphi v_{g}=0 $$ which implies the indefiniteness of $q_{\varphi}$. \end{proof} Proposition \ref{prop 41} and Lemma \ref{lem 41} lead to the following \begin{theorem}\label{th 41} Let $g$ be a Riemannian metric on $M$. \begin{enumerate} \item [i)]If $g$ is a critical metric of the functional $\lambda_{k}$ restricted to $C(g)$, then there exists a finite family $\left\{u_{1}, \cdots , u_{d}\right\} \subset E_{k}(g)$ of eigenfunctions associated with $\lambda_{k}$ such that $\sum_{i \leq d}u_{i}^{2}=1$. \item[ii)]Assume that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$. Then, $g$ is critical for the functional $\lambda_{k}$ restricted to $C(g)$ if and only if, there exists a finite family $\left\{u_{1}, \cdots , u_{d}\right\} \subset E_{k}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ such that $$ \sum_{i \leq d} u_{i}^{2}=1.$$ \end{enumerate} \end{theorem} The Riemannian metric $g$ of any homogeneous Riemannian space $(M,g)$ is a critical metric of the functional $\lambda_{k}$ restricted to $C(g)$ for all $k$ such that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$. Indeed, any $L^{2}(g)$-orthonormal basis $\left\{u_{i}\right\}_{i \leq d}$ of $E_{k}(g)$ is such that $\sum_{i \leq d}u_{i}^{2}$ is constant on $M$. 
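A basic example illustrating Theorem \ref{th 41} is the round sphere: on the unit sphere $\mathbb{S}^{n}\subset \mathbb{R}^{n+1}$ endowed with its standard metric $g$, the restrictions $x_{1}, \cdots, x_{n+1}$ of the Euclidean coordinate functions are eigenfunctions associated with $\lambda_{1}(g)=n$ (an eigenvalue of multiplicity $n+1$) and satisfy $$ \sum_{i \leq n+1} x_{i}^{2}=1,$$ so that $g$ is critical for $\lambda_{1}$ restricted to $C(g)$. Moreover, since the inclusion $\mathbb{S}^{n}\hookrightarrow \mathbb{R}^{n+1}$ is isometric, we also have $\sum_{i \leq n+1} dx_{i}\otimes dx_{i}=g$, in accordance with Theorem \ref{th 31}.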
In \cite[Theorem 4.1]{EI3}, we proved that a metric $g$ is critical for the trace of the heat kernel restricted to $C(g)$ if and only if its heat kernel $K$ is constant on the diagonal of $M \times M$. This last condition implies that, $\forall k \in \mathbb N^{\ast}$, any $L^{2}(g)$-orthonormal basis $\left\{u_{i}\right\}_{i \leq d}$ of $E_{k}(g)$ is such that $\sum_{i \leq d}u_{i}^{2}$ is constant on $M$. Hence, we have the following \begin{corollary}\label{cor 41} Let $g$ be a Riemannian metric on $M$ and let $K$ be the heat kernel of $(M,g)$. If, $\forall t>0$, the function $x \in M \longmapsto K(t,x,x)$ is constant, then the metric $g$ is critical for the functional $\lambda_{k}$ restricted to $C(g)$ for all $k$ such that $\lambda_{k}(g)> \lambda_{k-1}(g)$ or $\lambda_{k}(g)< \lambda_{k+1}(g)$. \end{corollary} An immediate consequence of Theorem \ref{th 41} is the following \begin{corollary}\label{cor 42} If $g$ is a critical metric of the functional $\lambda_{k}$ restricted to $C(g)$, then $\lambda_{k}(g)$ is a degenerate eigenvalue, that is, $$ \dim E_{k}(g) \geq 2.$$ \end{corollary} This last condition means that at least one of the following holds: $\lambda_{k}(g)= \lambda_{k-1}(g)$ or $\lambda_{k}(g)= \lambda_{k+1}(g)$. In the case when $g$ is a local maximizer or a local minimizer, we have the following more precise result. \begin{proposition}\label{cor 43} \begin{enumerate} \item[i)] If $g$ is a local maximizer of the functional $\lambda_{k}$ restricted to $C(g)$, then $\lambda_{k}(g)=\lambda_{k+1}(g)$. \item[ii)] If $g$ is a local minimizer of the functional $\lambda_{k}$ restricted to $C(g)$, then $\lambda_{k}(g)=\lambda_{k-1}(g)$. \end{enumerate} \end{proposition} \begin{proof} Assume that $g$ is a local maximizer and that $\lambda_{k}(g)< \lambda_{k+1}(g)$. Let $\varphi \in \mathcal{A}_{0}(M,g)$ and let $g_{t}=e^{\alpha_{t}}g \in C(g)$ be a volume-preserving analytic deformation of $g$ such that ${d \over dt}g_{t}\big|_{t=0}= \varphi g$.
Denote by $\Lambda_{1}(t), \cdots, \Lambda_{m}(t)$, with $m=\dim E_{k}(g)$, the associated family of eigenvalues of $\Delta_{g_{t}}$, depending analytically on $t$ and such that $\Lambda_{1}(0)= \cdots= \Lambda_{m}(0)=\lambda_{k}(g)$ (see the proof of Theorem \ref{main}). By continuity, we have, for sufficiently small $t$ and all $i \leq m$, $$\Lambda_{i}(t) < \lambda_{k+1}(g_{t}).$$ Hence, $\forall i \leq m$ and $\forall t$ sufficiently small, $$ \Lambda_{i}(t) \leq \lambda_{k}(g_{t}) \leq \lambda_{k}(g)=\Lambda_{i}(0).$$ Consequently, $\Lambda'_{i}(0)=0$ for all $i \leq m$. Since $\Lambda'_{1}(0), \cdots , \Lambda'_{m}(0)$ are eigenvalues of the operator $\Pi_{k} \circ \Delta'$ (by Theorem \ref{main}), this operator is identically zero on $E_{k}(g)$. Applying Lemma \ref{lem 21}, we deduce that, $\forall \varphi \in \mathcal{A}_{0}(M,g)$, $ Q_{\varphi g}\equiv 0$ on $E_{k}(g)$. Thus, $\forall u \in E_{k}(g)$, there exists a constant $c$ so that $$ |du|^{2}+ {n \over 4} \Delta_{g}u^{2}= c.$$ Integrating, we get $ c= {\lambda_{k}(g) \over \mbox{vol}(g)} \int_{M} u^{2} v_{g}$. Since $\Delta_{g}u^{2}=2(\lambda_{k}(g)u^{2}-|du|^{2})$, we obtain $${n \over 2} u^{2} - {n-2 \over 2\lambda_{k}(g)}|du|^{2}= {1 \over \mbox{vol}(g)}\int_{M} u^{2} v_{g}.$$ Let $x_{0} \in M$ be a point where $u^{2}$ achieves its maximum. At $x_{0}$, we have $du(x_{0})=0$ and, then, $$ {n \over 2} \max u^{2}= {n \over 2} u^{2}(x_{0})= {1 \over \mbox{vol}(g)}\int_{M} u^{2} v_{g}$$ which leads to a contradiction (since $u$ is not constant and ${n \over 2} \geq 1$).\\ A similar proof works for (ii). \end{proof} In \cite{CE}, Colbois and the first author proved that \begin{equation}\label{CE} \sup_{g' \in C(g)} \lambda_{k+1}(g')^{n \over 2} - \sup_{g' \in C(g)} \lambda_{k}(g')^{n \over 2}\geq n^{n \over 2} \omega_{n}, \end{equation} where $\omega_n$ is the volume of the unit Euclidean sphere of dimension $n$.
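Proposition \ref{cor 43} can be checked on a classical example: by Hersch's theorem, the round metric $g_{0}$ on $\mathbb{S}^{2}$ maximizes $\lambda_{1}$ among all metrics of given area, and in particular within $C(g_{0})$. Accordingly, $\lambda_{1}(g_{0})=\lambda_{2}(g_{0})$, as predicted by assertion (i): the first nonzero eigenvalue of the round sphere has multiplicity $3$.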
An immediate consequence of this result and Proposition \ref{cor 43} is the following \begin{corollary}\label{cor44} Let $g$ be a Riemannian metric on $M$. Assume that $g$ maximizes the functional $\lambda_{k}$ restricted to $C(g)$, that is, $$\lambda_{k}(g)=\sup_{g' \in C(g)} \lambda_{k}(g').$$ Then $g$ can maximize neither $\lambda_{k+1}$ nor $\lambda_{k-1}$ (for $k \geq 2$) on $C(g)$. \end{corollary} More precisely, if $g$ maximizes $\lambda_{k}$ on $C(g)$, then, using Proposition \ref{cor 43} and (\ref{CE}), $$\lambda_{k+1}(g)^{n \over 2} \leq \sup_{g' \in C(g)} \lambda_{k+1}(g')^{n \over 2} - n^{n \over 2} \omega_{n}.$$ Finally, we have the following conformal version of Corollary \ref{cor33} \begin{corollary} \label{cor45} Let $g$ be a Riemannian metric on $M$. Let $p \geq 1$ and $q \geq p$ be two integers such that $$ \lambda_{p-1}(g) < \lambda_{p}(g)=\lambda_{p+1}(g)= \dots = \lambda_{q}(g)< \lambda_{q+1}(g).$$ The metric $g$ is critical for the functional $ \sum_{i=p}^{q}\lambda_{i}$ restricted to $C(g)$ if and only if there exists an $L^{2}(M,g)$-orthonormal basis $u_{1}, u_{2}, \dots , u_{m}$ of $E_{p}(g)$ such that $\sum_{i=1}^{m}u_{i}^{2}$ is constant on $M$. \end{corollary} \section{Critical metrics of the eigenvalue ratios functionals} Let $M$ be a closed manifold of dimension $n \geq 2$ and let $k$ be a positive integer. This section deals with the functional $ g \longmapsto \displaystyle{{\lambda_{k+1}(g)\over \lambda_{k}(g)}}$. This functional is invariant under scaling, so it is not necessary to fix the volume of the metrics under consideration. If $(g_{t})_{t}$ is any analytic deformation of a metric $g$, then $t \longmapsto \displaystyle{{\lambda_{k+1}(g_{t})\over \lambda_{k}(g_{t})}}$ admits left and right derivatives at $t=0$ (Theorem \ref{main}).
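The scale invariance is immediate: for a constant $c>0$, we have $\Delta_{cg}=c^{-1}\Delta_{g}$, hence $\lambda_{k}(cg)=c^{-1}\lambda_{k}(g)$ for every $k$, and $$ {\lambda_{k+1}(cg)\over \lambda_{k}(cg)}={c^{-1}\lambda_{k+1}(g)\over c^{-1}\lambda_{k}(g)}={\lambda_{k+1}(g)\over \lambda_{k}(g)}.$$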
\begin{definition} i) A metric $g$ is said to be critical for the ratio $\displaystyle{{\lambda_{k+1}\over \lambda_{k}}}$ if, for any analytic deformation $(g_{t})$ of $g$, the left and right derivatives of $\displaystyle{{\lambda_{k+1}(g_{t})\over \lambda_{k}(g_{t})}}$ at $t=0$ have a nonpositive product.\\ ii) The metric $g$ is said to be critical for the ratio functional $\displaystyle{{\lambda_{k+1}\over \lambda_{k}}}$ restricted to the conformal class $C(g)$ if the condition above holds for any conformal analytic deformation $g_{t}=e^{\alpha_{t}}g$ of $g$. \end{definition} Let $g$ be a Riemannian metric on $M$. For any covariant 2-tensor $h \in S^{2}(M)$, we introduce the following operator $$ \tilde{P}_{k,h}: E_{k}(g) \otimes E_{k+1}(g)\longrightarrow E_{k}(g) \otimes E_{k+1}(g)$$ defined by $$ \tilde{P}_{k,h}= \lambda_{k+1}(g) P_{k,h}\otimes Id_{E_{k+1}(g)}-\lambda_{k}(g) Id_{E_{k}(g)}\otimes P_{k+1,h},$$ where $P_{k,h}$ is defined in Lemma \ref{lem 21}. The quadratic form naturally associated with $\tilde{P}_{k,h}$ is denoted by $\tilde{Q}_{k,h}$ and is given by, $\forall u \in E_{k}(g) \, \hbox {and}\, \forall v \in E_{k+1}(g)$, $$ \tilde{Q}_{k,h}(u\otimes v)=\lambda_{k+1}(g) \left\|v\right\|^{2}_{L^{2}(g)} Q_{h}(u)- \lambda_{k}(g) \left\|u\right\|^{2}_{L^{2}(g)} Q_{h}(v),$$ where $ Q_{h}(u)= -\int_{M} (du \otimes du + {1 \over 4} \Delta_{g}u^{2}g,h) v_{g}.$ Of course, if $\lambda_{k+1}(g)=\lambda_{k}(g)$, then $g$ is a global minimizer of the ratio ${\lambda_{k+1}\over \lambda_{k}}$. Notice that, thanks to Colin de Verdi\`ere's result \cite{CV}, $\forall k\ge 1$, any closed manifold $M$ carries a metric $g$ such that $\lambda_{k+1}(g)=\lambda_{k}(g)$.
A general characterization of critical metrics of ${\lambda_{k+1}\over \lambda_{k}}$ is given by the following \begin{proposition}\label{prop 51} A Riemannian metric $g$ on $M$ is critical for the functional ${\lambda_{k+1}\over \lambda_{k}}$ if and only if, $\forall h \in S^{2}(M)$, the quadratic form $\tilde{Q}_{k,h}$ is indefinite on $ E_{k}(g) \otimes E_{k+1}(g)$. \end{proposition} \begin{proof} The case where $\lambda_{k+1}(g)=\lambda_{k}(g)$ is obvious ($\tilde{Q}_{k,h} (u\otimes u)=0$). Assume that $\lambda_{k+1}(g)>\lambda_{k}(g)$, let $(g_{t})_{t}$ be an analytic deformation of $g$ and set $h={d \over dt}g_{t}\big|_{t=0}$. From Theorem \ref{main}, ${d \over dt}\lambda_{k}(g_{t})\big|_{t=0^-}$ and ${d \over dt}\lambda_{k}(g_{t})\big|_{t=0^+}$ are the least and the greatest eigenvalues of $P_{k,h}$ on $E_{k}(g)$ respectively.\\ Similarly, ${d \over dt}\lambda_{k+1}(g_{t})\big|_{t=0^-}$ and ${d \over dt}\lambda_{k+1}(g_{t})\big|_{t=0^+}$ are the greatest and the least eigenvalues of $P_{k+1,h}$ on $E_{k+1}(g)$ respectively. Therefore, $$\lambda_{k}(g)^{2} {d \over dt}{\lambda_{k+1}(g_{t})\over \lambda_{k}(g_{t})}\big|_{t=0^-} =\lambda_{k}(g) {d \over dt}\lambda_{k+1}(g_t)\big|_{t=0^-}- \lambda_{k+1}(g) {d \over dt}\lambda_{k}(g_t)\big|_{t=0^-}$$ is the opposite of the least eigenvalue of $\tilde{P}_{k,h}$ on $E_{k}(g) \otimes E_{k+1}(g)$, and $$\lambda_{k}(g)^{2} {d \over dt}{\lambda_{k+1}(g_{t})\over \lambda_{k}(g_{t})}\big|_{t=0^+} =\lambda_{k}(g) {d \over dt}\lambda_{k+1}(g_t)\big|_{t=0^+}- \lambda_{k+1}(g) {d \over dt}\lambda_{k}(g_t)\big|_{t=0^+}$$ is the opposite of the greatest eigenvalue of $\tilde{P}_{k,h}$ on $E_{k}(g) \otimes E_{k+1}(g)$. Hence, the criticality of $g$ for $\lambda_{k+1}/\lambda_{k}$ is equivalent to the fact that $\tilde{P}_{k,h}$ admits both a nonnegative and a nonpositive eigenvalue, which is equivalent to the indefiniteness of $\tilde{Q}_{k,h}.$ \end{proof} \begin{lemma}\label{lem 51} Let $g$ be a Riemannian metric on $M$.
The two following conditions are equivalent: \begin{enumerate} \item [i)] $\forall h \in S^{2}(M)$, the quadratic form $\tilde{Q}_{k,h}$ is indefinite on $E_{k}(g) \otimes E_{k+1}(g)$. \item[ii)] There exist two finite families $\left\{u_{1}, \cdots, u_{p}\right\}\subset E_{k}(g)$ and $\left\{v_{1}, \cdots, v_{q}\right\}\subset E_{k+1}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ and $\lambda_{k+1}(g)$ respectively, such that $$ \sum _{i \leq p} (du_{i} \otimes du_{i} + {1 \over 4} \Delta_{g}u_{i}^{2}\,g)=\sum_ {j \leq q} (dv_{j} \otimes dv_{j} + {1 \over 4} \Delta_{g}v_{j}^{2}\,g).$$ \end{enumerate} \end{lemma} \begin{proof} ``(i) implies (ii)'': Let us introduce the two following convex cones $$K_{1}=\left\{\sum_{i \in I}(du_{i} \otimes du_{i}+ {1 \over 4}\Delta_{g}u_{i}^{2}\, g); \, u_{i}\in E_{k}(g), \, I\subset \mathbb N, \, I \, \hbox {finite}\right\}\subset S^{2}(M)$$ and $$K_{2}=\left\{\sum_{j \in J}(dv_{j} \otimes dv_{j}+ {1 \over 4}\Delta_{g}v_{j}^{2}\, g); \, v_{j}\in E_{k+1}(g), \, J\subset \mathbb N, \, J \, \hbox {finite}\right\}\subset S^{2}(M).$$ It suffices to prove that $K_{1}$ and $ K_{2}$ have a nontrivial intersection. 
Indeed, otherwise, applying classical separation theorems, we show the existence of a 2-tensor $h \in S^{2}(M)$ such that, $\forall \ T_{1}\in K_{1},\, T_{1}\ne 0$, $$ \int_{M}(T_{1},h) v_{g} >0$$ and, $\forall \ T_{2}\in K_{2}$, $$ \int_{M}(T_{2},h) v_{g} \leq 0.$$ Therefore, $\forall u \in E_{k}(g)$ and $\forall v \in E_{k+1}(g)$, with $u \ne 0$ and $v \ne 0$, one has $Q_{h}(u)<0$, $Q_{h}(v) \ge 0$ and \begin{eqnarray*} \tilde{Q}_{k,h}(u \otimes v)&=&\lambda_{k+1}(g)\left\|v\right\|^{2}_{L^{2}(g)}Q_{h}(u)-\lambda_{k}(g)\left\|u\right\|^{2}_{L^{2}(g)}Q_{h}(v)\\ &\le& \lambda_{k+1}(g)\left\|v\right\|^{2}_{L^{2}(g)}Q_{h}(u)<0, \end{eqnarray*} which implies that $\tilde{Q}_{k,h}$ is negative definite on $E_{k}(g)\otimes E_{k+1}(g)$, contradicting the indefiniteness assumed in (i).\\ \noindent``(ii) implies (i)'': Let $\left\{u_{i}\right\}_{i \leq p}$ and $ \left\{v_{j}\right\}_{j \leq q}$ be as in (ii). From the identity in (ii), we get, after taking the trace and integrating, $$ \sum_{i \leq p} \int_{M}|du_{i}|^{2} v_{g}=\sum_{j \leq q} \int_{M}|dv_{j}|^{2} v_{g},$$ which gives, $$ \lambda_{k}(g) \sum_{i \leq p}\left\|u_{i}\right\|^{2}_{L^{2}(g)}=\lambda_{k+1}(g) \sum_{j \leq q}\left\|v_{j}\right\|^{2}_{L^{2}(g)}.$$ Now, $$\sum_{i,j}\tilde{Q}_{k,h}(u_{i}\otimes v_{j})=\sum_{i,j}\lambda_{k+1}(g) \left\|v_{j}\right\|^{2}_{L^{2}(g)} Q_{h}(u_{i})- \lambda_{k}(g) \left\|u_{i}\right\|^{2}_{L^{2}(g)} Q_{h}(v_{j}).$$ Assumption (ii) implies that $ \sum_{i \leq p}Q_{h}(u_{i})= \sum_{j \leq q}Q_{h}(v_{j}).$ Therefore, $$ \sum_{i,j}\tilde{Q}_{k,h}(u_{i}\otimes v_{j})=\left(\sum_{j \leq q}\lambda_{k+1}(g) \left\|v_{j}\right\|^{2}_{L^{2}(g)} - \sum_{i \leq p}\lambda_{k}(g) \left\|u_{i}\right\|^{2}_{L^{2}(g)}\right)\sum_{i \leq p} Q_{h}(u_{i})=0.$$ Hence, $\tilde{Q}_{k,h}$ is indefinite on $E_{k}(g) \otimes E_{k+1}(g)$.
\end{proof} Consequently, we have proved the following \begin{theorem}\label{th 51} A metric $g$ on $M$ is critical for the functional $\displaystyle{{\lambda_{k+1}\over \lambda_{k}}}$ if and only if there exist two families $\left\{u_{1}, \cdots, u_{p}\right\}\subset E_{k}(g)$ and $\left\{v_{1}, \cdots, v_{q}\right\}\subset E_{k+1}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ and $\lambda_{k+1}(g)$, respectively, such that \begin{equation}\label{7} \sum _{i \leq p} du_{i} \otimes du_{i} -\sum_ {j \leq q} dv_{j} \otimes dv_{j} =\alpha g \end{equation} for some $\alpha \in \mathcal{C}^{\infty}(M)$, and \begin{equation}\label{8} \lambda_{k}(g) \sum_{i \leq p}u_{i}^{2}-\lambda_{k+1}(g) \sum_{j \leq q}v_{j}^{2}={n-2 \over n} (\sum_{i \leq p}|du_{i}|^{2}-\sum_{j \leq q}|dv_{j}|^{2}). \end{equation} \end{theorem} Indeed, a straightforward calculation shows that the two equations (\ref{7}) and (\ref{8}) are equivalent to the condition (ii) of Lemma \ref{lem 51}. \begin{corollary} If $g$ is a critical metric of the functional ${\lambda_{k+1} \over \lambda_{k}}$, with $\lambda_{k+1}(g) \ne \lambda_{k}(g)$, then $$ \inf \left\{\dim E_{k}(g),\dim E_{k+1}(g)\right\} \geq 2. $$ \end{corollary} \begin{proof} Let $\left\{u_{i}\right\}_{i \leq p}\subset E_{k}(g)$ and $\left\{v_{j}\right\}_{j \leq q}\subset E_{k+1}(g)$ be two families of eigenfunctions satisfying (\ref{7}) and (\ref{8}) above. Taking the trace in (\ref{7}) and using (\ref{8}) we get \begin{equation}\label{9} \alpha= {1 \over n}(\sum_{i \leq p}|du_{i}|^{2}-\sum_{j \leq q}|dv_{j}|^{2})={1 \over 4}\Delta_{g}(\sum_{j \leq q}v_{j}^{2}- \sum_{i \leq p}u_{i}^{2}). \end{equation} Assume that $\alpha=0$. Using (\ref{8}) and (\ref{9}), we deduce that both $\sum_{i \leq p}u_{i}^{2}$ and $\sum_{j \leq q}v_{j}^{2}$ are constant on $M$. Since the $u_{i}$'s and the $v_{j}$'s are not constant, we get the result. Assume now $\alpha \ne 0$. 
Since $\int_{M} \alpha\ v_{g}=0$ (see (\ref {9})), the function $\alpha$ takes both positive and negative values. Let $x \in M$ such that $\alpha(x)>0$. From (\ref{7}), the quadratic form $\sum _{i \leq p} du_{i} \otimes du_{i} $ is clearly positive definite on $T_{x}M$. Hence, the family $\left\{du_{i}\right\}$ has maximal rank at $x$. This shows that $\dim E_{k}(g) \geq n$. At a point $x \in M$ where $\alpha(x)<0$, the quadratic form $\sum_ {j \leq q} dv_{j} \otimes dv_{j}$ is positive definite on $T_{x}M$ and, then, $\dim E_{k+1}(g) \geq n$. \end{proof} When we deal with critical metrics of the functional ${\lambda_{k+1} \over \lambda_{k}}$ restricted to $C(g)$, only tensors of the form $h= \varphi\, g$, with $\varphi \in \mathcal{C}^{\infty}(M)$, are involved. The corresponding quadratic forms on $E_{k}(g) \otimes E_{k+1}(g)$ are given by $$ \tilde{q}_{k,\varphi}(u \otimes v)= \lambda_{k+1}(g)\left\|v\right\|^{2}_{L^{2}(g)} q_{\varphi}(u)-\lambda_{k}(g)\left\|u\right\|^{2}_{L^{2}(g)} q_{\varphi}(v).$$ Following the steps of the proof of Proposition \ref{prop 51}, we can show that: \begin{proposition} A Riemannian metric $g$ on $M$ is critical for the functional ${\lambda_{k+1} \over \lambda_{k}}$ restricted to $C(g)$ if and only if, $\forall \varphi \in \mathcal{C}^{\infty}(M)$, the quadratic form $\tilde{q}_{k,\varphi}$ is indefinite on $E_{k}(g) \otimes E_{k+1}(g)$. 
\end{proposition} Replacing the convex cones $K_{1}$ and $K_{2}$ in the proof of Lemma \ref{lem 51} by $$ H_{1}= \left\{\sum_{i \in I}(\lambda_{k}(g) u_{i}^{2}- {n-2 \over n} |du_{i}|^{2}); \, u_{i}\in E_{k}(g),\,I \subset \mathbb N,\, I \hbox{ finite}\right\}\subset L^{2}(M,g)$$ and $$ H_{2}= \left\{\sum_{j \in J}(\lambda_{k+1}(g) v_{j}^{2}- {n-2 \over n} |dv_{j}|^{2}); \, v_{j}\in E_{k+1}(g),\,J \subset \mathbb N,\, J \hbox{ finite}\right\}\subset L^{2}(M,g),$$ we can show, by the same arguments, that the indefiniteness of $\tilde{q}_{k,\varphi}$ for all $\varphi \in \mathcal{C}^{\infty}(M)$, is equivalent to the fact that $H_{1}$ and $H_{2}$ have a non-trivial intersection. Therefore, one has: \begin{theorem}\label{th 52} A Riemannian metric $g$ on $M$ is critical for the functional ${\lambda_{k+1} \over \lambda_{k}}$ restricted to $C(g)$ if and only if, there exist two families $\left\{u_{1}, \cdots, u_{p}\right\}\subset E_{k}(g)$ and $\left\{v_{1}, \cdots, v_{q}\right\}\subset E_{k+1}(g)$ of eigenfunctions associated with $\lambda_{k}(g)$ and $\lambda_{k+1}(g)$, respectively, such that $$\lambda_{k}(g) \sum_{i \leq p}u_{i}^{2}-\lambda_{k+1}(g) \sum_{j \leq q}v_{j}^{2}={n-2 \over n} (\sum_{i \leq p}|du_{i}|^{2}-\sum_{j \leq q}|dv_{j}|^{2}).$$ \end{theorem} \begin{remark} In dimension $2$, the condition of Theorem \ref{th 52} amounts to $$ \sum_{j \leq q}v_{j}^{2}= {\lambda_{k}(g) \over \lambda_{k+1}(g)}\sum_{i \leq p} u_{i}^{2}.$$It is clear that in this case, if $\lambda_{k+1}(g) \ne \lambda_{k}(g)$, then at least one of the eigenvalues $\lambda_{k}(g)$ and $\lambda_{k+1}(g)$ is degenerate. \end{remark}
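For completeness, the ``straightforward calculation'' invoked after Theorem \ref{th 51} can be spelled out; it only uses the identity $\Delta_g u^2 = 2u\,\Delta_g u - 2|du|^2$ for an eigenfunction $u$ (a proof sketch supplementing the original argument, not part of it):

```latex
% Condition (ii) of Lemma \ref{lem 51} is equivalent to the pair (7)-(8):
\begin{align*}
\sum_{i\le p} du_i\otimes du_i-\sum_{j\le q} dv_j\otimes dv_j
  &=\tfrac14\,\Delta_g\Big(\sum_{j\le q} v_j^2-\sum_{i\le p} u_i^2\Big)\,g
  =:\alpha\, g,
\intertext{which is (7); taking traces gives
$n\alpha=\sum_{i\le p}|du_i|^2-\sum_{j\le q}|dv_j|^2$. On the other hand,
$\Delta_g u^2=2u\,\Delta_g u-2|du|^2$ yields}
\tfrac14\,\Delta_g\Big(\sum_{j\le q} v_j^2-\sum_{i\le p} u_i^2\Big)
  &=\tfrac12\Big(\lambda_{k+1}(g)\sum_{j\le q} v_j^2-\sum_{j\le q}|dv_j|^2\Big)
   -\tfrac12\Big(\lambda_{k}(g)\sum_{i\le p} u_i^2-\sum_{i\le p}|du_i|^2\Big).
\end{align*}
% Equating the two expressions for $\alpha$ and rearranging gives exactly (8).
```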
\section{Introduction} Bus systems are the backbone of public transportation in the US, carrying over 47\% of all public passenger trips and 19,380 million passenger miles \cite{neff20172016}. For the majority of US cities, which do not have the urban form or budget to build expensive transit infrastructure such as subways, buses are the most important transit mode thanks to their relatively low cost and large capacity. Nonetheless, the bus system is also one of the most unpredictable transit modes. Our study found that the average on-time performance across all routes of the Nashville bus system was only 57.79\% (see Section~\ref{opt18:sec:sensitivity}). The unpredictability of delay has been cited as the top reason why people avoid bus systems in many cities \cite{apta2017report}. Providing reliable transit service is a critical but difficult task for every metropolis in the world. To evaluate service reliability, transit agencies have developed various indicators that quantify public transit systems through several key performance measurements from different perspectives \cite{benn1995bus}. In the past, a number of technological and sociological solutions have helped to evaluate and reduce bus delay. Common indicators of public transit system evaluation include schedule adherence, on-time performance, and total trip travel time. In order to track transit service status, transit agencies have installed automatic vehicle location (AVL) devices on buses to track their real-time locations. However, the accuracy of AVL in urban areas is quite limited due to the low sampling rate (one sample per minute) and the impact of tall buildings on GPS devices.
To retain some basic control during bus operation, public transit agencies often use time point strategies: time points are designated bus stops, deployed along bus routes, that transit vehicles try to reach at scheduled times, providing better arrival and departure time synchronization. An effective approach for improving bus on-time performance is creating timetables that maximize the probability of on-time arrivals by examining the actual delay patterns. When designing schedules for real-world transport systems (e.g., buses, trains, container ships or airlines), transport planners typically adopt a tactical-planning approach \cite{fan2006optimal}. Conventionally, metro transit engineers analyze the historical data and adjust the scheduled times from past experience, which is time consuming and error prone. A number of studies have been conducted to improve bus on-time performance by reliable and automatic timetabling. Since the timetable scheduling problem is recognized to be NP-hard \cite{wu2016multi}, many researchers have employed heuristic algorithms to solve it. The most popular solutions include ad-hoc heuristic search algorithms (e.g., greedy algorithms), neighborhood search (e.g., simulated annealing (SA) and tabu search (TS)), evolutionary search (e.g., genetic algorithms) and hybrid search \cite{szeto2011simultaneous}. \begin{figure*}[t] \centering \includegraphics[scale=0.3]{opt_images/toolbox.png} \caption{The proposed toolbox for bus on-time performance optimization.
City planners provide the bus schedule, historical trip information, and the desired on-time range and layover time, and obtain an optimized timetable as well as the estimated on-time performance.} \label{opt18:fig:toolbox} \end{figure*} However, there are few stochastic optimization models that focus on optimizing bus timetables with the objective of maximizing the probability of bus arrivals at timepoints with delays within a desired on-time range (e.g., one minute early to five minutes late), which is widely used as a key indicator of bus service quality in the US \cite{arhin2014bus}. A timepoint is a bus stop that is designed to accurately record the timestamps when buses arrive at and leave the stop. Bus drivers use timepoints to synchronize with the scheduled time. For example, to quantify bus on-time arrival performance, many regional transit agencies use the range of [-1,+5] minutes relative to the scheduled bus stop time as the on-time standard for evaluating bus performance from historical data \cite{arhin2014bus}. The actual operation of bus systems is vulnerable to many internal and external factors. The external factors include urban events (e.g., concerts, sporting events), severe weather conditions, road construction, passenger and bicycle loading/offloading, etc. One of the most common internal factors is the coupling between two consecutive bus trips, where the arrival delay of the previous trip causes a departure delay of the next trip. Furthermore, there are monthly and seasonal variations in the actual delay patterns, yet most transit agencies publish a uniform timetable for the next several months despite these variations. How to cluster the patterns and optimize timetables separately remains an open problem.
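To make the on-time criterion concrete, the sketch below computes this metric for a handful of made-up delay values (the window defaults to the [-1,+5]-minute standard discussed above; this is an illustration, not MTA code or data):

```python
# On-time performance: the fraction of timepoint arrivals whose delay
# (in minutes, negative = early) falls inside the agency's window
# [t_early, t_late], here defaulting to the [-1, +5] standard.
def on_time_performance(delays, t_early=-1.0, t_late=5.0):
    on_time = sum(1 for d in delays if t_early <= d <= t_late)
    return on_time / len(delays)

# Hypothetical delays recorded at timepoints along one route.
delays = [-3.0, -0.5, 0.0, 2.0, 4.5, 6.0, 12.0, 1.0]
print(on_time_performance(delays))  # 0.625 -- 5 of 8 arrivals on time
```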
Furthermore, heuristic optimization techniques have attracted considerable attention, but finding the optimal values of their hyper-parameters is difficult, since these depend on the nature of the problem and the specific implementation of the heuristic algorithm, and are generally problem specific. \hspace{1 em} \textbf{Research Contributions:} In this paper, the monthly and seasonal delay patterns are studied, and outlier analysis and clustering analysis on bus travel times are carried out to group months with similar patterns together. The feature vectors are aggregated by routes, trips, directions, timepoint segments and months, encompassing the mean, median and standard deviation of the historical travel times. This work significantly extends prior work on the problem \cite{sun2017unsupervised}. Along with a greedy algorithm and a Genetic Algorithm (GA), a swarm-based optimization algorithm has been introduced in this work, where the semi-autonomous agents in the swarm update their status guided by the full knowledge of the entire population state. Thus, the Particle Swarm Optimization (PSO) \cite{488968} algorithm is employed in this work to generate new timetables for month clusters that share similar delay patterns, with the goal of testing both evolutionary computing and swarm intelligence approaches. It is observed that the optimized on-time performance averaged across all bus routes is higher with PSO than with GA. Also, the execution times of PSO are much lower than those of GA and are more stable, indicating less variability of results over different runs. Sensitivity analysis on choosing the optimal hyper-parameters for the proposed heuristic optimization algorithms is also presented. A stability analysis of the respective algorithms has been put forward by studying the on-time performance and execution time over several runs. The overall workflow of the proposed optimization mechanisms is illustrated in Figure~\ref{opt18:fig:toolbox}.
\hspace{1 em} The rest of the paper is organized as follows: Section~\ref{opt18:sec:relatedwork} compares our work with related work on transit timetabling; Section~\ref{opt18:sec:formulation} presents the problem formulation; Section~\ref{opt18:sec:data} presents the details of the transit data stores; Section~\ref{opt18:sec:solutions} discusses the timetable optimization mechanisms used; Section~\ref{opt18:sec:evaluations} evaluates the performance of the optimization mechanisms and presents sensitivity analysis results; Section~\ref{opt18:sec:conclusion} presents concluding remarks and future work. \section{Related Work and Challenges} \label{opt18:sec:relatedwork} This section compares our system with related work on transit timetable scheduling. A number of studies have been conducted to provide timetabling strategies for various objectives: (1) minimizing average waiting time \cite{wang2017data}, (2) minimizing transfer time and cost \cite{chakroborty1995optimal}\cite{hairong2009optimal}\cite{szeto2011simultaneous}, (3) minimizing total travel time \cite{nayeem2014transit}, (4) maximizing the number of simultaneous bus arrivals \cite{eranki2004model}\cite{ibarra2012synchronization}, (5) minimizing the cost of transit operation \cite{ting2005schedule}, and (6) minimizing a mix of costs (both the user's and the operator's) \cite{chakroborty2003genetic}. The design of timetables with maximal synchronization of bus routes without bus bunching has been researched by Ibarra-Rojas et al. \cite{ibarra2012synchronization}. The bus synchronization strategy has been discussed from the perspective of taking waiting time into account at transfer stops in the work of Eranki et al. \cite{eranki2004model}. An improved GA minimizing passenger transfer time while considering traffic demands has been explored by Yang et al. \cite{hairong2009optimal}. Traffic and commuter demand has also been considered in the work by Wang et al. \cite{wang2017data}.
Other than optimization algorithms, several deep learning techniques \cite{DBLP:journals/corr/abs-1905-13294} have been applied to bus scheduling problems \cite{10.1007/978-3-319-31753-3_44}. Nayeem et al. \cite{nayeem2014transit} set up the optimization problem over several criteria, such as minimizing travel time and number of transfers and maximizing passenger satisfaction. A route design and neighborhood search through a genetic algorithm minimizing the number of transfers has been discussed by Szeto et al. \cite{szeto2011simultaneous}. Zhong et al. \cite{psotraffic} used improved Particle Swarm Optimization to identify bus rapid transit routes optimized to serve the maximum number of passengers. \subsection{Research Challenges} \textbf{\hspace{1 em} (a) Clustering Monthly and Seasonal Variations in Historical Arrival Data:} Studying the historical travel times at segments can be an effective way to set bus timetables. However, existing work does not consider the monthly and seasonal variation in historical data, and this variation can be utilized for better scheduling. Generating one timetable for all months may not be the best solution. As traffic and delay patterns are prone to change across seasons and times, we generate clusters grouping months with an unsupervised algorithm and develop optimization strategies for the generated clusters. We evaluate the proposed mechanism via simulation. The cluster-specific schedule is shown to further increase the on-time performance compared to generating one uniform timetable. \vspace{1 em} \textbf{(b) Computing Efficiently and Accurately in the Solution Space:} Transit performance optimization techniques rely on historical delay data to set up new timetables. However, the large amount of historical data makes it a challenge to compute efficiently. For example, Nashville MTA updates the bus schedule every 6 months, but each time there are about 160,000 historical record entries to use. Moreover, the solution space is typically very large under constraints (e.g., sufficient dwelling time at bus stops, adequate layover time between trips, etc.). A suitable optimization algorithm is necessary for efficient and accurate computation. Since this is a discrete-variable optimization problem, gradient-based methods cannot be used and gradient-free methods need to be considered. A naive algorithm for discrete optimization is exhaustive search, i.e., every feasible time is evaluated and the optimum is chosen. Exhaustive search works for a small finite number of choices, but cannot be used for high-dimensional problems.
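A back-of-the-envelope sketch of why exhaustive search breaks down: assuming, purely for illustration, that each timepoint's scheduled time may be shifted by one of 11 candidate offsets, the number of feasible timetables grows exponentially with the number of timepoints:

```python
# Why exhaustive search is hopeless: if each timepoint's scheduled time
# may take one of c candidate offsets (say -5..+5 minutes in 1-minute
# steps, so c = 11), the number of feasible timetables is c**n.
def search_space_size(n_timepoints, n_candidates):
    return n_candidates ** n_timepoints

print(search_space_size(10, 11))  # one 10-timepoint trip: 25937424601
print(search_space_size(10 * 40, 11) > 10 ** 400)  # 40 such trips: True
```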
Genetic algorithms \cite{chakroborty1995optimal}\cite{chakroborty2003genetic}, as well as particle swarm optimization \cite{488968}, are commonly used to solve such problems heuristically. Thus we consider applying the genetic algorithm and particle swarm optimization (PSO) in this context. Section~\ref{opt18:sec:solutions} describes the key steps of how we apply the greedy, genetic and PSO algorithms to solve the timetable optimization problem. \section{Problem Formulation} \label{opt18:sec:formulation} \begin{table}[tb] \centering \caption{The scheduled time and recorded actual arrival and departure time of two sequential trips that use the same bus of route 4 on Aug. 8, 2016. The arrival delay at the last timepoint of the first trip accumulates at the first timepoint of the second trip.} \label{opt18:tbl:example} \begin{tabular}{|l|l|l|l|l|l|} \hline & & \multicolumn{4}{c|}{Timepoints} \\ \hline & & MCC4\_14 & SY19 & PRGD & GRFSTATO \\ \hline \hline \multirow{3}{*}{Trip 1} & Scheduled Time & 10:50 AM & 11:02 AM & 11:09 AM & 11:18 AM \\ \cline{2-6} & Actual Arrival Time & 10:36 AM & 11:10 AM & 11:18 AM & 11:27 AM \\ \cline{2-6} & Actual Departure Time & 10:50 AM & 11:10 AM & 11:18 AM & 11:30 AM \\ \hline \hline \multirow{3}{*}{Trip 2} & Scheduled Time & 11:57 AM & 11:40 AM & 11:25 AM & 11:20 AM \\ \cline{2-6} & Actual Arrival Time & 12:11 PM & 11:51 AM & 11:34 AM & 11:27 AM \\ \cline{2-6} & Actual Departure Time & 12:11 PM & 11:51 AM & 11:34 AM & 11:30 AM \\ \hline \end{tabular} \end{table} Typically, transit delays are affected not only by external factors (such as traffic, weather, travel demand, etc.), but also by internal factors. For example, accumulated delay on previous trips may delay consecutive trips by affecting the initial departure time of the next trip.
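This internal delay propagation can be sketched with a toy helper (a simplification that ignores dwell and recovery effects; the 9-minute and 2-minute figures mirror the example of Table~\ref{opt18:tbl:example}):

```python
# Delay propagation between consecutive trips served by the same vehicle:
# the bus cannot start the next trip before it arrives, so any arrival
# delay beyond the scheduled layover becomes departure delay. This toy
# helper ignores dwell and recovery effects.
def propagated_delay(arrival_delay_min, layover_min):
    return max(0.0, arrival_delay_min - layover_min)

# Trip 1 arrives 9 minutes late with only 2 minutes of scheduled layover,
# so Trip 2 starts at least 7 minutes late.
print(propagated_delay(9.0, 2.0))   # 7.0
print(propagated_delay(9.0, 15.0))  # 0.0 -- a long layover absorbs it
```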
In order to illustrate the problem context, for simplicity and without loss of generality, we take two sequential bus trips of route 4 in Nashville as an example (the scheduled times and the actual arrival and departure times recorded on Aug. 8, 2016 are shown in Table~\ref{opt18:tbl:example}) to describe the optimization problem. On each service day, after the vehicle of the first trip (121359) arrives at the last stop (timepoint GRFSTATO) with a scheduled time of 11:18 AM, the second trip is scheduled to depart using the same vehicle from the same stop at 11:20 AM. On Aug. 8, 2016, the arrival time at the last stop (timepoint GRFSTATO) of the first trip (121359) was exceptionally late by 9 minutes, which contributed to the 10-minute departure delay at the beginning of the second trip. Since the scheduled layover time between the two trips is only 2 minutes (between 11:18 AM and 11:20 AM), any large delay on the first trip is very likely to propagate to the next trip. Therefore, the optimization problem should involve a process that considers not only the travel delay on segments, but also the improper layover time between trips. Figure~\ref{opt:fig:att_rsd} illustrates the large variation of the bus travel time distribution. The example shows travel time data collected from bus trips that depart at a specific time of the day on route 3 in Nashville. The coefficient of variation (also known as relative standard deviation), which is a standardized measure of dispersion of a probability distribution, is very high at all timepoints along the route. The complexity and uncertainty of travel times introduce great challenges to the task of timetable optimization. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=1.0\columnwidth]{opt_images/complexity.png}} \caption{(a) A route segment on bus route 3 leaving downtown; (b) the variance of actual travel time and (c) the relative standard deviation of actual travel times on a bus route segment in the time period between Sept.
1, 2016 and Feb. 28, 2017.} \label{opt:fig:att_rsd} \end{center} \end{figure} \subsection{Problem Definition} \label{opt18:sec:formulation:problem} For a given bus trip schedule $b$, let $H = \{h_{1},h_{2},...,h_{m}\}$ be a set of $m$ historical trips, with each trip passing $n$ timepoints $\{s_{1},s_{2},...,s_{n}\}$. The \textit{on-time performance} of the bus trip schedule $b$ is then defined as the ratio of an indicator function $I(h_{i}, s_{j})$ summed over all timepoints for all historical trips to the product of the total number of historical trips and the total number of timepoints: \begin{equation} OTP(b) = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} I(h_{i}, s_{j}). \end{equation} The indicator function $I(h_i, s_j)$ is 1 if $d_{i,j} \in [t_{early}, t_{late}]$ and 0 otherwise, where $d_{i,j} = t^{arrival}_{h_{i}, s_{j}} - T^{arrival}_{h_{i}, s_{j}}$ is the actual delay when arriving at timepoint $s_{j}$, and $t_{early}$ and $t_{late}$ are two time parameters pre-defined by the transit authority as a measure of schedule adherence. The objective is to design a schedule optimization procedure that generates new $T^{departure}_{h,s}$ values maximizing the on-time performance. \section{Data Store} \label{opt18:sec:data} \subsection{Data Sources} We established a cloud data store and reliable transmission mechanisms to collect and archive the transit data feeds. The Nashville Metropolitan Transit Authority (MTA) updates the bus schedule information every six months and provides the schedule to the public via GTFS files. In order to coordinate and track the actual bus operations along routes, MTA has deployed sensor devices at special bus stops (called timepoints) to accurately record arrival and departure times. In Nashville, there are over 2,700 bus stops all over the city and 573 of them are timepoint stops. City planners and MTA engineers analyze the arrival and departure records regularly to update the transit schedule. The details of the datasets are as follows: \begin{itemize} \item \textit{Static GTFS.
} This dataset defines the static information of the bus schedule and associated geographic information, such as routes, trips, stops, departure times, service days, etc. The dataset is provided in a standard transit schedule format called the General Transit Feed Specification (GTFS). \item \textit{GTFS-realtime.} This dataset records real-time transit information in GTFS-realtime format, including bus locations, trip updates and service alerts. The GTFS-realtime feed is collected and stored at one-minute intervals. \item \textit{Timepoints.} This dataset provides accurate and detailed historical arrival and departure records at timepoint stops. The information includes route, trip, timepoint, direction, vehicle ID, operator, actual arrival and departure time, etc. The dataset is not available in real-time but is collected manually by Nashville MTA at the end of each month. \end{itemize} Even though only the timepoint datasets are utilized in this study, the proposed method is not limited to them and can use surrogate data sources: (1) automatic passenger counter (APC) data: APC datasets record both passenger counts and departure/arrival times at stops; (2) the GTFS-realtime feed: the real-time bus locations reported by the automatic vehicle locator (AVL) installed on buses. Compared with the timepoint datasets, APC data also provides accurate times at normal stops, and thus it is the most suitable alternative dataset. However, GTFS-realtime suffers from a low sampling rate and low accuracy in the city and may reduce the performance of the proposed mechanism. \subsection{Data Cleaning} Since raw transit datasets often contain missing, duplicate and erroneous samples, preprocessing is a necessary step to prepare a clean, high-quality dataset. The missing data issue occurs due to hardware or network problems. Generally, samples with missing data can be dropped or filled with specific or average values.
Duplicated data (e.g., a bus trip recorded more than once) will oversample certain delay values and bias the delay dataset. We drop the trips with no historical records and remove duplicated records. Outliers are values that are distant from most of the observed data; in their presence, clustering can be inappropriate. The k-means clustering algorithm is also sensitive to outliers present in the data. The approach taken here is to calculate the Median Absolute Deviation (MAD), a robust measure of statistical dispersion. The MAD of a data set $X=(x_1, x_2,..., x_n)$ can be calculated as: $MAD = median( |x_i - median(X)| )$. For a normal distribution, the scaled MAD, defined as MAD/0.6745, is approximately equal to the standard deviation. $x_i$ is considered an outlier if the difference between $x_i$ and the median is larger than three times the scaled MAD (i.e., approximately three standard deviations). \section{Timetable Optimization Mechanisms} \label{opt18:sec:solutions} \subsection{Month Grouping by Clustering Analysis} \label{opt18:sec:clustering} This section introduces a clustering analysis mechanism that groups months with similar transit delay patterns together; the results are later used to generate separate timetables for each group. \textit{Feature Engineering.} We assume the monthly delay patterns can be represented by the mean, median and standard deviation derived from the historical delay data. Considering a bus trip consisting of $n$ timepoints, there are $n-1$ segments between the timepoints.
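The MAD rule from the data-cleaning step above can be sketched as follows (a minimal illustration with made-up travel times, not the production pipeline):

```python
import statistics

# MAD-based outlier screen from the data-cleaning step: scaled MAD
# (MAD / 0.6745) approximates the standard deviation for normal data,
# and records more than 3 scaled-MADs from the median are flagged.
def mad_outliers(xs, k=3.0):
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs)
    scaled_mad = mad / 0.6745
    return [x for x in xs if abs(x - med) > k * scaled_mad]

# Hypothetical segment travel times (minutes); 45.0 is a recording glitch.
times = [12.0, 13.0, 12.5, 11.8, 12.2, 13.1, 45.0]
print(mad_outliers(times))  # [45.0]
```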
The mean value $\mu$, the median value $m$, and the standard deviation $\sigma$ of the historical travel times for each timepoint segment in each month are combined to generate feature vectors representing the historical delay data distribution: \begin{equation} [\mu_1,m_1,\sigma_1,\mu_2,m_2,\sigma_2,...,\mu_{n-1},m_{n-1},\sigma_{n-1}] \label{equa-distribution-a-month} \end{equation} \textit{Month Clustering.} Clustering is an unsupervised learning technique for grouping similar data. We employ the k-means algorithm to identify homogeneous groups in which months share similar patterns. The trip data per month is first normalized and then clustered using the feature vectors (in Equation~\ref{equa-distribution-a-month}) by the k-means algorithm: \begin{equation} \argmin\limits_{S} \sum_{i=1}^{k} \sum_{x \in S_{i}}\| x - \mu_{i} \|^{2} \label{equa:k-means} \end{equation} where $\mu_{i}$ is the mean of all data points in cluster $S_{i}$. Determining the optimal number of clusters in a data set is a fundamental issue in partitioning clustering. For k-means algorithms, the number of clusters is a hyper-parameter that needs to be set manually; an upper bound is set in advance. The Elbow \cite{kodinariya2013review}, Silhouette \cite{rousseeuw1987silhouettes} and gap statistic \cite{tibshirani2001estimating} methods are popular direct and statistical methods for finding the optimal number of clusters. In particular, silhouette analysis is employed in this study to measure how close each point is to the others within one cluster. The silhouette score $s(i)$ is defined as: \begin{equation} \begin{aligned} s(i) = \frac{b(i) - a(i)}{max\{a(i), b(i)\}} \end{aligned} \label{eq:silhouete} \end{equation} where, for each data point with index $i$ in a cluster, $a(i)$ is the average distance between the point and the rest of the data points in the same cluster, and $b(i)$ is the smallest average distance between the point and the points of every other cluster.
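A minimal, self-contained sketch of the feature construction and month grouping described above (toy travel times and a naive two-cluster k-means stand in for the real normalized pipeline):

```python
import statistics

# Build the per-month feature vector [mean, median, std] for each segment
# (Equation 1), then group months with a tiny two-cluster k-means.
def features(times_per_segment):
    vec = []
    for ts in times_per_segment:
        vec += [statistics.mean(ts), statistics.median(ts), statistics.pstdev(ts)]
    return vec

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_means(points, iters=10):
    centers = [points[0], points[-1]]  # naive seeding: first and last point
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            nearest = 0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
            groups[nearest].append(p)
        # Recompute centroids, keeping the old center if a group empties.
        centers = [[statistics.mean(c) for c in zip(*g)] if g else centers[k]
                   for k, g in enumerate(groups)]
    return groups

# One segment, four months of made-up times: May-July alike, August slower.
months = {
    "May": features([[10, 11, 10, 12]]),
    "Jun": features([[10, 12, 11, 11]]),
    "Jul": features([[11, 10, 11, 12]]),
    "Aug": features([[16, 18, 17, 19]]),
}
groups = two_means(list(months.values()))
print([len(g) for g in groups])  # [3, 1] -- August forms its own cluster
```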
Some other clustering techniques applicable to this kind of problem can be found in \cite{8301693}. \textit{Example.} Figure~\ref{fig:month_cluster} plots the [mean, standard deviation, median] vectors of the monthly travel time for a segment (WE23-MCC5\_5) on a bus trip of route 5 (Figure~\ref{fig:combinedfigs}(a)). From Figure~\ref{fig:month_cluster}, the monthly variation of the data is evident, and hence two clusters ([May, June, July] and [August]) can be formed from these four months of data to prepare distinct schedules for the clusters. \begin{figure}[t] \vspace{-0.1in} \begin{center} \centerline{\includegraphics[scale=0.17]{opt_images/clustered_months2.png}} \caption{The feature vectors [mean, standard deviation, median] of the travel time in 4 months of 2016 for a segment (WE23-MCC5\_5) on a bus trip of route 5. } \label{fig:month_cluster} \end{center} \vspace{-0.3in} \end{figure} \subsection{Estimating On-time Performance of Transit Schedules} \label{opt18:sec:on-time-estimate} \textit{Historical Dwell Time Estimation.} Travel demand at bus stops is an important statistic for setting up proper scheduled times. However, for bus systems without automatic passenger counters (APCs), historical travel demand (represented by the number of commuters boarding) is not available in the original datasets. To obtain demand patterns, we utilize historical arrival and departure times to estimate the dwell time caused by passengers. In particular, we consider the following two scenarios in the historical records: (1) when a bus arrives at a stop earlier than the scheduled time, the waiting time between the scheduled time and the actual departure time is used; (2) when it arrives later than the scheduled time, the waiting time between the actual arrival time and the actual departure time is used.
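The two dwell-time cases above can be sketched as follows (the times are illustrative, echoing the worked example of Table~\ref{opt18:tbl:example}):

```python
from datetime import datetime, timedelta

# The two dwell-time cases: an early bus waits until its scheduled time
# regardless of passengers, so only time past the scheduled time counts;
# for a late bus the dwell is simply departure minus arrival.
def passenger_dwell(scheduled, arrival, departure):
    if arrival <= scheduled:                   # case (1): early arrival
        return max(timedelta(0), departure - scheduled)
    return departure - arrival                 # case (2): late arrival

t = lambda s: datetime.strptime(s, "%H:%M")
# Early: arrives 10:58, scheduled 11:02, departs 11:04 -> 2 minutes.
print(passenger_dwell(t("11:02"), t("10:58"), t("11:04")))  # 0:02:00
# Late: arrives 11:05, departs 11:06 -> 1 minute.
print(passenger_dwell(t("11:02"), t("11:05"), t("11:06")))  # 0:01:00
```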
As shown in Table~\ref{opt18:tbl:example}, for the timepoint SY19 on trip 1 with a scheduled time of 11:02 AM: \begin{itemize} \item If the bus arrived early at 10:58 AM and departed at 11:04 AM, it would have waited there at least 4 minutes (the difference between the scheduled and actual arrival times) irrespective of the presence of passengers, so the dwell time caused by passengers is calculated as the additional time taken for departure after the scheduled time (11:04 AM - 11:02 AM = 2 minutes). \item On the other hand, if the bus arrived late at 11:05 AM and departed at 11:06 AM, the dwell time caused by passengers is calculated as the additional time spent after the actual arrival time (11:06 AM - 11:05 AM = 1 minute). \end{itemize} \textit{Arrival Time Estimation.} The arrival time of a bus at a stop is impacted by two factors: (1) the travel times on the segments before the stop, and (2) the dwell times at the previous stops. We assume that a bus will wait until the scheduled time if it arrives early, and that the historical travel time between two timepoints remains the same in the simulation. To estimate the arrival time, the historical dwell time caused by commuters (which in turn is representative of the historical travel demand) is added to the arrival time at each timepoint. If the resulting sum is earlier than the new scheduled time, the simulation stalls until the new scheduled time is reached. Given the simulated departure time $st^{depart}_{h, s_{j}}$ at the previous timepoint $s_j$, the actual travel time $t^{arrive}_{s_{j+1}}-t^{depart}_{s_j}$ between $s_j$ and $s_{j+1}$, and the dwell time $t^{dwell}_{s_{j+1}}$, the simulated departure time $st^{depart}_{h, s_{j+1}}$ at timepoint $s_{j+1}$ can be computed.
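This propagation rule can be sketched as follows (names are illustrative; times are in minutes): the simulated departure at $s_{j+1}$ is the later of the new scheduled time and the simulated departure at $s_j$ plus the historical segment travel time plus the estimated dwell time.

```python
def simulate_departure(sched_next, sim_depart_prev, segment_travel, dwell_next):
    # A bus that would be ready before the new scheduled time holds
    # until that time; otherwise it departs as soon as travel and
    # dwell are complete.
    return max(sched_next, sim_depart_prev + segment_travel + dwell_next)
```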
The simulated departure time $st^{depart}_{h, s_{j+1}}$ at $s_{j+1}$ is expressed in terms of the new schedule time $T^{depart}_{h, s_{j+1}}$ as: \begin{equation} st^{depart}_{h, s_{j+1}} = \max(T^{depart}_{h, s_{j+1}}, st^{depart}_{h, s_{j}}+(t^{arrive}_{s_{j+1}}-t^{depart}_{s_j})+t^{dwell}_{s_{j+1}}) \end{equation} \subsection{Timetable Optimization Using a Greedy Algorithm} \label{opt18:subsec:greedy} We employed a greedy algorithm that adjusts the scheduled arrival times sequentially for the succeeding segments between timepoints. The main objective is to optimize the bus arrival time at succeeding timepoints such that the new optimized schedule maximizes the probability that bus arrivals between any two consecutive stops have a delay bounded in the desired range [$t_{early}, t_{late}$]. We utilize the empirical cumulative distribution function (CDF) to evaluate the percentage of historical delays in the desired range instead of assuming that the data is drawn from any specific distribution (e.g., a Gaussian distribution). An empirical CDF is a non-parametric estimator of the CDF of a random variable. The empirical CDF of a variable $x$ is defined as: \begin{equation} \hat{F}_n(x) = \hat{P}_n(X\leq x) = n^{-1} \sum_{i=1}^{n} I(x_i \leq x) \label{equa:ecdf} \end{equation} where $I(\cdot)$ is an indicator function: \begin{equation} I(x_i \leq x) = \begin{cases} 1, & \text{if}\ x_i \leq x \\ 0, & \text{otherwise} \end{cases} \label{equa:ecdf-indicator} \end{equation} Then the probability mass of $x$ in the range [$x+t_{early},x+t_{late}$] can be calculated using the following equation: \begin{equation} \begin{aligned} \hat{F}_n(x+t_{late})-\hat{F}_n(x+t_{early}) \\ = n^{-1} \sum_{i=1}^{n} I(x+t_{early} \leq x_i \leq x+t_{late}) \label{equa:ecdf-range} \end{aligned} \end{equation} \subsection{Timetable Optimization Using Heuristic Algorithms} Optimizing the performance of transit vehicle schedules is a multidimensional problem whose objective function is nonconvex, consisting of several troughs and ridges.
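Concretely, the empirical-CDF criterion above reduces to counting the fraction of historical delays that fall inside the shifted on-time window. The sketch below is self-contained; the delay values used in the test are invented for illustration.

```python
def ontime_fraction(delays, x, t_early, t_late):
    """Fraction of historical delays x_i with x + t_early <= x_i <= x + t_late,
    i.e. F_n(x + t_late) - F_n(x + t_early) up to boundary conventions."""
    hits = sum(1 for d in delays if x + t_early <= d <= x + t_late)
    return hits / len(delays)
```

Because this is a purely counting estimator, it makes no distributional assumption about the delays, matching the non-parametric treatment in the text.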
Hence, to compute a well-scheduled routing strategy within acceptable time constraints, an approach powered by solution-estimation techniques such as evolutionary algorithms and metaheuristics can be considered. \subsubsection{Genetic Algorithm} The genetic algorithm \cite{Goldberg:1989:GAS:534133} is a heuristic optimization algorithm inspired by biological evolution. Its basic steps include initialization, selection, crossover, mutation, and termination. The timetable for each trip is determined by the scheduled departure time at the first stop together with the scheduled travel time between any two subsequent timepoints along the trip. Since our goal is to update timetables to make bus arrivals more on time, we use the scheduled travel times between timepoints as chromosomes in the population, and use the on-time performance estimation mechanism proposed in Section~\ref{opt18:sec:on-time-estimate} as the objective function. The chromosome of an individual solution in the genetic algorithm is a vector of integers representing travel times between subsequent timepoints. To reduce the search space and match real-world scenarios, the travel time in each individual is re-sampled to a multiple of 60 seconds, i.e., restricted to whole minutes. The performance of this algorithm is governed by hyperparameters such as the population size and the crossover and mutation rates, which control the algorithm's exploitation and exploration capability. The choice of these hyperparameters is explained in detail in Section~\ref{opt18:sec:evaluations}. \subsubsection{Particle Swarm Optimization} Eberhart and Kennedy \cite{488968} proposed particle swarm optimization (PSO), a population-based stochastic optimization algorithm which, unlike gradient descent techniques, can work with non-differentiable objective functions without requiring gradient information.
The interested reader is directed to \cite{psoreview} by Sengupta et al. for a detailed treatment of the algorithm. PSO has been shown to satisfactorily solve a wide array of complex real-life engineering problems, usually out of the scope of deterministic algorithms \cite{7060145}\cite{BOUYER2018172}\cite{Banks2008}. PSO exploits the collective intelligence arising out of the grouping behavior of flocks of birds or schools of fish. This manifestation of grouping is termed `emergence', a phenomenon in which a cohort of individuals from a social network accomplishes a task beyond their individual capability. Likewise, each particle in the swarm represents a potential solution to the multi-dimensional problem to be optimized. \textbf{Initialization} Each particle has a position, which can be thought of as a collection of coordinates representing the particle's location in a specific region of the multidimensional hyperspace. As a particle is a potential solution to the problem, the particle's position vector has the same dimensionality as the problem. The velocity associated with each particle is the measure of the step size and the direction in which it should move in the next iteration. Each particle in the swarm maintains an n-dimensional vector of travel times. At first, the position of each particle in the population is initialized with a set of travel times between the timepoints, each randomly selected between the minimum and maximum of the aggregated actual historical data. With swarm size \textit{p}, every particle \textit{i} ($1 \leq i \leq p$) maintains a position vector $x_{i}$=($x_{i1}$,$x_{i2}$,$x_{i3}$,...,$x_{in}$), a velocity vector $v_{i}$=($v_{i1}$,$v_{i2}$,$v_{i3}$,...,$v_{in}$), and a personal-best vector $p_{i}$=($p_{i1}$,$p_{i2}$,$p_{i3}$,...,$p_{in}$). \textbf{Optimization} At each iteration, the position of a particle is updated and compared with the personal best (\textit{pbest}) obtained so far.
If the fitness of the position obtained at the current iteration is higher (as this is a fitness maximization problem) than that of the \textit{pbest} obtained up to the previous iteration, then the current position becomes the personal best, or \textit{pbest}; otherwise \textit{pbest} remains unchanged. Thus the best position of a particle obtained so far is stored as \textit{pbest}. The global best, or \textit{gbest}, is updated when the population's overall current best, i.e., the best of the \textit{pbest}s, is better than that found in the previous iteration. After initializing positions and velocities, each particle updates its velocity based on its previous velocity weighted by an inertia factor, along with a component proportional to the difference between its current position and \textit{pbest}, weighted by a cognition acceleration coefficient, and another component proportional to the difference between its current position and \textit{gbest}, weighted by a social acceleration coefficient. This is the socio-cognitive model of PSO, and it facilitates information exchange between members of the swarm. Since all members are free to interact with each other, the flow of information is unrestricted and the PSO algorithm is said to have a `fully-connected' topology. While updating the velocity, a particle's reliance on its own personal best is dictated by its cognitive ability, and its reliance on the entire swarm's best solution is dictated by its social interactive nature. Hence these components of the velocity are weighted by the cognition acceleration coefficient \textit{c1} and the social acceleration coefficient \textit{c2}, respectively. The new positions of the particles are updated as the vector sum of the previous positions and the current velocities.
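A minimal sketch of one such update step follows; the inertia and acceleration values used as defaults are illustrative, not the tuned values from the sensitivity analysis later in the paper.

```python
import random

def pso_step(x, v, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update followed by the pbest rule,
    for a fitness-maximization problem."""
    r1, r2 = random.random(), random.random()
    # inertia term + cognitive pull towards pbest + social pull towards gbest
    v = [w * vj + c1 * r1 * (pb - xj) + c2 * r2 * (gb - xj)
         for xj, vj, pb, gb in zip(x, v, pbest, gbest)]
    x = [xj + vj for xj, vj in zip(x, v)]
    if fitness(x) > fitness(pbest):  # keep the better position
        pbest = list(x)
    return x, v, pbest
```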
Thus the positions of the particles are updated, aiming towards intelligent exploration of the search space and subsequent exploitation of the promising regions, in order to find the optimal solution based on fitness optimization of the stated problem. After each iteration, the velocity and position of a particle are updated as follows: \begin{equation} v_{i,j}(t+1)=w \cdot v_{i,j}(t)+c_{1} r_{1}(t)(p_{i,j}(t)-x_{i,j}(t))+c_{2} r_{2}(t)(p_{g,j}(t)-x_{i,j}(t)) \end{equation} \begin{equation} x_{i,j}(t+1)=x_{i,j}(t)+v_{i,j}(t+1) \end{equation} $v_{i,j}$ and $x_{i,j}$ represent the velocity and position of the \textit{i-th} particle in the \textit{j-th} dimension. The cognition and social acceleration coefficients are denoted by $c_{1}$ and $c_{2}$, whereas $r_{1}$ and $r_{2}$ are random numbers uniformly distributed between 0 and 1. $p_{i,j}$ represents a particle's personal best and $p_{g,j}$ represents the global best of the population. \textit{w} acts as an inertia weight factor controlling the exploration and exploitation of new positions in the search space, and $t$ denotes the iteration number. The problem is formulated as a fitness maximization problem in order to obtain optimal travel times that improve on-time performance. Hence the personal best of a particle is updated as follows at the end of each iteration.
\begin{equation} p_{i,j}(t+1) = \begin{cases} p_{i,j}(t), & \text{if}\ fitness(x_{i,j}(t+1)) \leq fitness(p_{i,j}(t))\\ x_{i,j}(t+1), & \text{if}\ fitness(x_{i,j}(t+1)) > fitness(p_{i,j}(t))\\ \end{cases} \end{equation} \begin{algorithm}[t] \small \vspace{-0.0in} \KwData{$D \leftarrow$ Historical timepoint datasets} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{ (1) [$t_{early}$,$t_{late}$] $\leftarrow$ on-time range, (2) $maxIter$ $\leftarrow$ maximum number of iterations, (3) $npop$ $\leftarrow$ population size, (4) $w$ $\leftarrow$ inertia weight, (5) $c1$ $\leftarrow$ cognition acceleration coefficient, (6) $c2$ $\leftarrow$ social acceleration coefficient, (7) $h$ $\leftarrow$ bus trip for optimization, (8) $upperLimit$ $\leftarrow$ upper limit of the number of clusters } \Output{Optimized schedule $b$ at timepoints for bus trip $h$} GetAllTimepoints($D$, $h$)\; GetHistoricalData($D$, $h$)\; $monthClusters \leftarrow$ ClusterMonthData($upperLimit$)\; \For{monthCluster $\in$ $monthClusters$}{ $P \leftarrow$ []\; \For{population size $npop$}{ Initialize each particle with a random position and velocity: $P \leftarrow P \cup InitialIndividual()$\; } \While{$maxIter$ is not reached}{ Evaluate the fitness function \textit{J} at each particle's position \textit{x}\; \If{\textit{J(x)} > \textit{J(pbest)}}{\textit{pbest} $\leftarrow$ \textit{x}\;} Update $gbest$ if the population's overall current best is better than that of the previous iteration\; Update the velocity of each particle according to the velocity update equation\; Update the position of each particle according to the position update equation\; } Return \textit{gbest} as the optimized schedule $b$ at timepoints for bus trip \textit{h}\; } \caption{Particle Swarm Optimization algorithm for bus on-time performance optimization} \label{algorithm:particle swarm optimization} \end{algorithm} \textbf{Termination} The termination condition set for PSO is the predefined maximum number of
iterations. Since the optimized on-time performance differs for each trip, the termination condition is not set as a predefined upper limit on the fitness value. With the other hyperparameters fixed, PSO can produce the optimal solution in approximately 30 iterations for this problem. The pseudo code for PSO is given in Algorithm 3. Historical timepoint datasets are used to conduct the particle swarm optimization for this problem. The input includes the on-time range, the maximum number of iterations, the population size, the inertia weight, the cognition and social acceleration coefficients, the bus trip, and the upper limit on the number of month clusters. \section{Evaluation of the Results} \label{opt18:sec:evaluations} \subsection{Evaluating the Clustering Analysis} \label{opt18:sec:sensitivity} To evaluate the effectiveness of the clustering analysis, we compared the optimized on-time performance with and without the clustering step: (1) months are not clustered and a single timetable is generated for all months; (2) month clustering is conducted first and the optimization algorithm is applied to the different month groups to generate separate timetables. Table~\ref{opt18:tbl:exp1} shows the original and optimized on-time performance averaged across all bus routes. Using the genetic algorithm without the clustering step improved the original performance from 57.79\% to 66.24\%. Adding the clustering step, which groups months with similar patterns, improved the performance to 68.34\%. \begin{table}[tbp] \centering \small \caption{Comparison of original and optimized on-time performance averaged across all bus routes for GA without and with clustering and PSO with clustering respectively. } \begin{tabular}{|c|c|c|c|c|} \hline & Original & GA w/o. Clustering & GA w. Clustering & PSO w. Clustering \\ \hline On-time Perf.
& 57.79\% & 66.24\% & 68.34\% & 68.93\% \\ \hline \end{tabular} \label{opt18:tbl:exp1} \end{table} \begin{figure}[tphb] \begin{center} \centerline{\includegraphics[scale=0.35]{opt_images/stability.png}} \caption{Simulation results of on-time performance and execution times over 10 runs of GA and PSO.} \label{opt18:fig:stability} \end{center} \end{figure} \subsection{Comparing Optimization Performance of Greedy, Genetic and PSO Algorithms} The original on-time performance and the optimized on-time performance using the greedy algorithm, the genetic algorithm, and PSO are illustrated in Figure~\ref{opt18:fig:on-time}. While all the algorithms improve the on-time performance, the genetic algorithm and PSO outperform the greedy algorithm because they optimize the schedule for all timepoint segments on each trip jointly. The original on-time performance across all bus routes is 57.79\%. The greedy algorithm improved it to 61.42\%, and the genetic algorithm improved it further to 68.34\%. The PSO algorithm achieved a slightly better optimized on-time performance of 68.93\%. Figure~\ref{opt18:fig:stability} shows the simulation results of the stability analysis for GA and PSO. Even though GA and PSO achieved similar on-time performance, with PSO surpassing GA by a small margin, the execution times of PSO are much lower than those of GA and are more stable.
\begin{figure}[tphb] \begin{center} \centerline{\includegraphics [scale =0.2]{opt_images/on-time3.png}} \caption{The original on-time performance and the optimized on-time performance using the greedy algorithm, the genetic algorithm with/without clustering analysis, and the PSO algorithm.} \label{opt18:fig:on-time} \end{center} \end{figure} \subsection{Sensitivity Analysis on the Hyper-parameters of the Genetic Algorithm} \label{opt18:sec:sensitivity:results} We designed three simulations with different hyper-parameters: (1) population sizes ranging from 10 to 110, (2) crossover rates ranging from 0.1 to 1.0, and (3) mutation rates ranging from 0.1 to 1.0. Real-world data is collected from Route 5, one of the major bus routes, which connects downtown Nashville and the southwest communities of Nashville. The route contains 6 timepoint stops and 5 segments between them. Bus trips in the direction from downtown are selected. The goal is to maximize the on-time performance of these trips by optimizing the schedule times at the 6 timepoint stops.
\begin{figure}[tphb] \begin{center} \centerline{\includegraphics[scale = 0.62]{opt_images/newcombined2.PNG}} \caption{(a) Timepoints on bus route 5 in Nashville \cite{sun2017unsupervised}, (b) The on-time performance and overall execution time for different population sizes for GA, (c) The on-time performance and overall execution time for different crossover rates, which control the exploitation ability of the GA, (d) The on-time performance and overall execution time for different mutation rates, which control the exploration ability of the GA, (e) The on-time performance and overall execution time for different inertia weights, which control the exploration of new regions of the search space in PSO, (f) The on-time performance and overall execution time for different cognition acceleration coefficients \textit{c1} in PSO, (g) The on-time performance and overall execution time for different social acceleration coefficients \textit{c2} in PSO, (h) The on-time performance and overall execution time for various population sizes in PSO} \label{fig:combinedfigs} \end{center} \end{figure} Figure~\ref{fig:combinedfigs}(b) shows the simulation results for different population sizes. Increasing the population size from 10 to 90 results in better on-time performance; however, increasing the size even further does not improve the on-time performance. On the other hand, the total time increases linearly as the population size grows, so a population size of around 90 is optimal. Figure~\ref{fig:combinedfigs}(c) illustrates the results of using different crossover rates. The optimized on-time performance remains almost the same across the crossover range, but there is a significant difference in the total execution time. The crossover rate impacts the exploitation ability; a proper crossover rate in the middle of the range can speed up convergence to an optimal point. Figure~\ref{fig:combinedfigs}(d) shows the simulation results for different mutation rates.
The total execution time is small when the mutation rate is either very small or very large. The mutation rate controls the exploration ability. During the optimization, a small mutation rate ensures that the best individuals in a population do not vary too much in the next iteration and thus stabilize around the optimal points faster. We therefore suggest setting a very small mutation rate when running the proposed algorithm. \subsection{Sensitivity Analysis on the Hyper-parameters of Particle Swarm Optimization} We designed four simulation setups with different hyper-parameters: (1) the inertia weight \textit{w} ranging from 1 to 8, (2) the cognition acceleration coefficient \textit{c1} ranging from 1 to 8, (3) the social acceleration coefficient \textit{c2} ranging from 1 to 8, and (4) the number of particles ranging from 2 to 36. Real-world data on bus timings is collected from Route 8, one of the major bus routes, which connects Music City Central and Lipscomb University in Nashville. The route contains 5 timepoint stops and 4 segments between them. The goal is to maximize the on-time performance of these trips by optimizing the schedule times at the 5 timepoint stops. Figure~\ref{fig:combinedfigs}(e) shows the simulation results of varying the inertia weight \textit{w}. The optimized on-time performance is at its peak, with low execution time, when \textit{w} is close to 5; performance deteriorates and execution time increases as \textit{w} moves away from 5. So an optimal value for \textit{w} is somewhere around 5. Figure~\ref{fig:combinedfigs}(f) shows the simulation results of varying the cognition acceleration coefficient \textit{c1}. The particle has a velocity component towards its own best position weighted by \textit{c1}, hence the term `cognitive'.
The optimized on-time performance improves, with low execution time, as \textit{c1} increases from 3 to 5; beyond that, the performance deteriorates and the execution time increases. So an optimal value for \textit{c1} lies within this range. Figure~\ref{fig:combinedfigs}(g) shows the simulation results of varying the social acceleration coefficient \textit{c2}. The particle has a velocity component towards the global best position weighted by \textit{c2}, hence the term `social'. The optimized on-time performance improves, with low execution time, when \textit{c2} is equal to 5. A value of 4 for \textit{c2} also produces good results, but with an increase in execution time. Overall, however, \textit{c2} affects the on-time performance only within a range of about two percent: PSO can sometimes produce optimal or near-optimal performance when all other hyperparameters are fixed, and is thus not sensitive to a particular hyperparameter, which is the case here. So an optimal value for \textit{c2} is close to 5, maintaining a ratio of approximately 1:1:1 among \textit{w}, \textit{c1}, and \textit{c2}. Figure~\ref{fig:combinedfigs}(h) shows the simulation results of varying the population size. The optimized on-time performance is maximized when the number of particles reaches 30. The execution time increases with the number of particles, so it is better to choose a number of particles that yields the best trade-off between accuracy and execution time. The population size can therefore be chosen as 30 in this case, as it yields equally efficient results with a relatively small execution time.
Although this sensitivity analysis gives good insight into the choice of hyperparameters, varying the hyperparameters may produce better results on specific routes. \section{Conclusion} \label{opt18:sec:conclusion} In this paper, we presented research findings within a bus on-time performance optimization framework that significantly extends our prior work \cite{sun2017unsupervised} by proposing a stochastic optimization toolchain and presenting sensitivity analyses for choosing optimal hyper-parameters. In particular, we describe an unsupervised analysis mechanism for clustering months with similar delay patterns to generate new timetables. A classical, fully-connected PSO is benchmarked against a greedy algorithm as well as a genetic algorithm in order to optimize the schedule times to maximize the fraction of bus trips that arrive within the desired on-time range. We observe that the PSO implementation improves the bus on-time performance compared to the other heuristics while requiring less execution time across all routes. Simulations of optimization performance as well as sensitivity analyses on the hyper-parameters of the GA and PSO algorithms are conducted. The results indicate strategies for choosing between the genetic algorithm and PSO, and for selecting optimal hyper-parameters, guided by the specifics of the problem and resource availability. Given this extensive study on applying guided random search techniques to bus on-time performance optimization and on selecting hyperparameters that yield promising results, extending this generalizable architecture to other real-world optimization problems is worthwhile future work. \section*{Acknowledgments} This work is supported by the National Science Foundation under award numbers CNS-1528799, CNS-1647015, and 1818901, and by a TIPS grant from Vanderbilt University.
We acknowledge the support provided by our partners from Nashville Metropolitan Transport Authority. \bibliographystyle{splncs04}
\section{Introduction} The Mahowald invariant $$M : \pi_*(S^0) \rightsquigarrow \pi_{*}(S^0), \quad \alpha \mapsto M(\alpha)$$ is a construction for producing nontrivial classes in the stable homotopy groups of spheres from classes in lower stable stems. The chromatic filtration organizes the stable stems into $v_n$-periodic families \cite{Rav84}. These $v_n$-families are completely understood when $n\leq 1$ and fairly well-understood when $n \leq 2$, but they are much more mysterious for larger $n$. Computations of Sadofsky \cite{Sad92}, Mahowald and Ravenel \cite{MR93}, Bruner \cite{Bru98}, Behrens \cite{Beh06} \cite{Beh07}, and the author \cite{Qui19c} suggest that the Mahowald invariant of a $v_n$-periodic class is often $v_{n+1}$-periodic. Therefore the Mahowald invariant gives a (conjectural) means of studying mysterious $v_n$-periodic families, $n \geq 1$, by relating them to less mysterious $v_{n-1}$-periodic families. The motivating example of this phenomenon comes from the pioneering work of Mahowald and Ravenel \cite{MR93}. Adams constructed $v_1$-periodic elements $\{v_1^{4i} \eta^j, v_1^{4i}8\sigma : 1 \leq j \leq 3, i \geq 0\}$ in \cite{Ada66}. These can be constructed using the non-nilpotent $v_1$-self-map on the mod two Moore spectrum, iterated Toda brackets, and connective real K-theory. We review their definition in Section \ref{Section:Prime}, but for now we emphasize that their nontriviality depended on geometric input related to connective real K-theory. Mahowald and Ravenel gave another construction of these $v_1$-periodic elements using the Mahowald invariant. More precisely, they showed that for all $i \geq 0$, $$M(2^{4i+j}) \ni \begin{cases} v_1^{4i}\eta^j \quad & \text{ if } 1 \leq j \leq 3, \\ v^{4(i-1)}_1 8\sigma \quad & \text{ if } j=0. 
\end{cases}$$ Although this computation also uses connective real K-theory, we regard this as an independent construction of Adams' $v_1$-periodic families since it does not appeal to the existence of a non-nilpotent $v_1$-self-map on the mod two Moore spectrum. Periodic phenomena in the motivic stable stems is not well-understood; we briefly recall what is known in the $\c$-motivic case. Work of Levine allows one to lift classical $v_n$-periodic families into the $\c$-motivic stable stems. Subsequent work of Andrews \cite{And18}, Gheorghe \cite{Ghe17b}, and Krause \cite{Kra18} shows that in addition to these $v_n$-periodic lifts, there exist ``exotic periodic families" in the $\c$-motivic stable stems. For example, Andrews constructed exotic $w_1$-periodic families of elements analogous to Adams' $v_1$-periodic families mentioned above. Despite these interesting results, we still lack a complete understanding of periodic phenomena in the $\c$-motivic stable stems. Over other fields of characteristic zero, even less is known. We refer the reader to \cite[Sec. 4-5]{IO18} for an extensive discussion of the $\r$- and $\c$-motivic stable stems. The $\c$-motivic Mahowald invariant $$M^\c : \pi_{**}^\c(S^{0,0}) \to \pi_{**}^\c(S^{0,0}), \quad \alpha \mapsto M^\c(\alpha)$$ was introduced in \cite{Qui19a} to study these families over $\Spec(\c)$. In particular, it was shown that $$M^\c(2^{4i+j}) \ni \begin{cases} v_1^{4i}\eta \quad & \text{ if } j=1, \\ v^{4i}_1 \tau \eta^j \quad & \text{ if } j = 2,3, \\ v^{4(i-1)}_1 8\sigma \quad & \text{ if } j=0. \end{cases}$$ and $$ M^\c(\eta^{4i+j}) \ni \begin{cases} w_1^{4i} \nu \quad & \text{ if } j=1, \\ w^{4i}_1 \nu^j \quad & \text{ if } j=2,3, \\ w^{4(i-1)}_1 \eta^2 \eta_4 \quad & \text{ if } j=0. \end{cases}$$ In \cite{Qui19b}, $\r$-motivic lifts of these $v_1$- and $w_1$-periodic elements were constructed and identified as $\r$-motivic Mahowald invariants. 
The goal of this paper is to obtain analogous computations over general base fields of characteristic not two. Very little is currently known about periodicity in the motivic stable stems over a general base field $F$ of characteristic not two. Work of Morel \cite{Mor12} shows that $\pi^F_{m,n}(S^{0,0}) = 0 $ for $m<n$, and the $0$-line $\bigoplus_{n \in \z} \pi^F_{n,n}(S^{0,0})$ is Milnor-Witt K-theory $K^{MW}_*(F)$. R{\"o}ndigs-Spitzweck-{\O}stv{\ae}r computed the $1$-line $\bigoplus_{n \in \z} \pi^F_{n+1,n}(S^{0,0})$ in \cite{RSO19}, and they showed that there is a short exact sequence of Nisnevich sheaves $$0 \to K^M_{2-n}(-)/24 \to \pi_{n+1,n}^{(-)}(S^{0,0}) \to \pi_{n+1,n}^{(-)} f_0(KQ)$$ where $K^M$ is Milnor K-theory and $f_0(KQ)$ is the effective cover of Hermitian K-theory. Other infinite computations have been done after inverting the Hopf map $\eta \in \pi_{1,1}^F(S^{0,0})$ by Guillou-Isaksen \cite{GI15}\cite{GI16}, Andrews-Miller \cite{AM17}, R{\"o}ndigs \cite{Ron16}, and Wilson \cite{Wil18}. We refer the reader to \cite[Sec. 6]{IO18} for a survey of other computations over general base fields. Our first main theorem proves that many of Adams' $v_1$-periodic families can be lifted to the $F$-motivic stable stems, where $F$ is an arbitrary field of characteristic not two. The first part of the theorem applies in the case $\chara(F) > 2$. Recall that over $\Spec(\c)$, one has $v_1$-periodic families of the form $\{ v^{4i}_1 \eta, v^{4i}_1 \tau \eta^2, v^{4i}_1 \tau \eta^2, v^{4i}_1 8 \sigma \}_{i \geq 0}$ which can be constructed as iterated Toda brackets using \cite{Qui19a}. Moreover, the Betti realization \cite{MV99} of these classes recovers Adams' classical elements \cite{Ada66} with the same names. Let $g : F \to \overline{F}$ be the inclusion of $F$ into its algebraic closure and note that $\pi_{s,w}^{\overline{F}}(S^{0,0}) \cong \pi_{s,w}^{\c}(S^{0,0})$ when $s \geq w \geq 0$ or $s<w$ by \cite{WO17}. 
\begin{thm*}[Theorem \ref{Thm:FamiliesGeneral}, Part (1)] Suppose that $\chara(F) > 2$. Let $\alpha \in \{ v^{4i}_1 \eta, v^{4i}_1 \tau \eta^2, v^{4i}_1 \tau \eta^3, v^{4i}_1 8 \sigma \}_{i \geq 0} \subset \pi_{**}^{\overline{F}}(S^{0,0})$. There exists a nontrivial class $\tilde{\alpha} \in \pi_{**}^F(S^{0,0})$ such that $g^*(\tilde{\alpha}) = \alpha$. \end{thm*} Perioidicity is more subtle in characteristic zero. Although one has motivic analogs of the classical periodicity operators and $v^4_1$-periodic families over $\Spec(\c)$ \cite[Sec. 5]{Qui19a}, some of these are not well-defined over $\Spec(\r)$ or $\Spec(\q)$. \begin{exm} The following three phenomena occur working over $\Spec(\r)$ but not over $\Spec(\c)$. \begin{enumerate} \item The class $16\sigma \in \pi_{7,4}^\r(S^{0,0})$ is nonzero \cite[Fig. 3]{DI16a}. In particular, the Toda bracket $v_1^4(-) = \langle \sigma, 16, \alpha \rangle$ cannot be defined. \item Recall from \cite[Thm. 5.12]{Qui19a} that $8\sigma \in M^{\c}(2^4) \subset \pi_{7,4}^\c(S^{0,0})$. As a consequence of the previous fact, one can show that $8\sigma \notin M^\r((2+\rho\eta)^4) \subset \pi_{7,4}^\r(S^{0,0})$ but $16\sigma \in M^\r((2+\rho\eta)^4) \subset \pi_{7,4}^\r(S^{0,0})$. This suggests that the $F$-motivic Mahowald invariant ``detects" differences in the order of $(2+\rho\eta)$-torsion in the image of a conjectural $F$-motivic $J$-homomorphism. \item The class $Ph_1 \in Ext^{5,14,5}_{A^\c}(\m_2^\c,\m_2^\c)$ detects $v_1^4 \eta \in \pi_{9,5}^\c(S^{0,0})$, but it is not in the image of base change along $\r \to \c$ by \cite[Lem. 5.7]{DI16a}. In particular, the class $Ph_1$ does not survive in the $\rho$-Bockstein spectral sequence which calculates $Ext^{***}_{A^\r}(\m_2^\r,\m_2^\r)$ from $Ext^{***}_{A^\c}(\m_2^\c,\m_2^\c)$. \end{enumerate} \end{exm} Despite these obstructions to defining $v_1$-periodic families in characteristic zero, we are able to construct two nontrivial infinite families. 
Let $g : F \to \overline{F}$ be the inclusion of $F$ into its algebraic closure. \begin{thm*}[Theorem \ref{Thm:FamiliesGeneral}, Part (2)] Suppose that $\chara(F) = 0$. Let $\alpha \in \{ v^{4i}_1 \tau \eta^2, v^{4i}_1 \tau \eta^3\}_{i \geq 0} \subset \pi_{**}^{\overline{F}}(S^{0,0})$. There exists a nontrivial class $\tilde{\alpha} \in \pi_{**}^F(S^{0,0})$ such that $g^*(\tilde{\alpha}) = \alpha$. \end{thm*} Note that since $\tau \eta^4 = 0$ in $\pi_{**}^{\bar{F}}(S^{0,0})$, these classes cannot be seen using Wilson's $\eta$-local computations over the rationals \cite{Wil18}. Therefore the theorem provides two new infinite families in the $F$-motivic stable stems for any field $F$ of characteristic zero. In this work, we extend the definition of the motivic Mahowald invariant to any field $F$ of characteristic not two. Let $M^F(\alpha)$ denote the $F$-motivic Mahowald invariant of a class $\alpha$ in the $F$-motivic stable stems. In \cite{Qui19a} and \cite{Qui19b}, we showed that the infinite families above are realized as the $\c$-motivic Mahowald invariants of $2^i$, $i \geq 1$, and the $\r$-motivic Mahowald invariants of $(2+\rho \eta)^i$, $i \equiv 2,3 \mod 4$. Our second main theorem shows that this is true over any field of characteristic not two. \begin{thm*}[Theorem \ref{Thm:MotMI2i}] Suppose that $\chara(F) > 2$. Then the $F$-motivic Mahowald invariant of $(2+\rho \eta)^i$ is given by $$ M^F((2+\rho\eta)^{4i+j}) \ni \begin{cases} v^{4i}_1 \eta \quad & \text{ if } j =1, \\ v^{4i}_1 \tau \eta^2 \quad & \text{ if } j=2,\\ v^{4i}_1 \tau \eta^3 \quad & \text{ if } j=3, \\ v^{4i}_1 8\sigma \quad & \text{ if } j=4. \end{cases} $$ Suppose that $\chara(F) = 0$. Then the $F$-motivic Mahowald invariant of $(2+\rho \eta)^i$ is given by $$ M^F((2 + \rho \eta)^{4i+j}) \ni \begin{cases} v^{4i}_1 \tau \eta^2 \quad & \text{ if } j=2,\\ v^{4i}_1 \tau \eta^3 \quad & \text{ if } j=3.
\end{cases} $$ \end{thm*} \subsection{Outline} In Section \ref{Section:Prelim}, we recall results from forthcoming work of Gepner-Heller \cite{GH18} and Gregersen-Heller-Kylling-Rognes-{\O}stv{\ae}r \cite{GHKRO18}. In particular, we discuss equivariant motivic homotopy theory, a motivic analog of Lin's Theorem, and the compatibility of both of these with base-change. We then extend the definition of the motivic Mahowald invariant to base fields $F$ of characteristic not two. We also prove the main technical lemma which is used in Section \ref{Section:General} to infer $F$-motivic Mahowald invariants from our computations over $\Spec(\f_q)$ and $\Spec(\q)$ (from Section \ref{Section:Prime}), and $\Spec(\c)$ (from \cite{Qui19a}). In Section \ref{Section:Prime}, we discuss $v_1$-periodicity over algebraically closed fields. We then define $v_1$-periodic families over $\Spec(\f_q)$ (where $q$ is an odd prime), $\Spec(\q_\nu)$ (where $\nu$ is any prime), and $\Spec(\q)$. Our primary tool is the $\rho$-Bockstein spectral sequence introduced by Hill in \cite{Hil11}. We also prove that the map $(2 + \rho \eta)^4$ is null on certain stunted motivic lens spectra using the Atiyah-Hirzebruch spectral sequence and base-change. In Section \ref{Section:General}, we use our computations from Section \ref{Section:FiniteFields} to compute $M^{\f_q}(2^i)$, $i \geq 1$, and we use our computations from Section \ref{Section:Rationals} to compute $M^\q((2+\rho \eta)^i)$ for $i \equiv 2,3\mod 4$. In both cases, we follow the proof from \cite[Sec. 5]{Qui19b} which is modified from the proof of \cite[Thm. 2.17]{MR93}. We then use the key comparison lemma from Section \ref{Section:Prelim} to compute the motivic Mahowald invariants of $(2+\rho\eta)^i$ over any field $F$ of characteristic not two. 
The key point is that $F$ fits into a sequence of field extensions \begin{align*} \f_q \to F \to \overline{F}, \quad & \text{ if } \chara(F)=q \neq 0, \\ \q \to F \to \overline{F}, \quad & \text{ if } \chara(F) =0. \end{align*} The motivic Mahowald invariants of $(2+\rho \eta)^i$ agree over $\Spec(\f_q)$, $\Spec(\q)$, and $\Spec(\overline{F})$ in the congruence classes of $i$ where we can calculate them, so by compatibility with base-change, they must agree over $\Spec(F)$ as well. \subsection{Notation} We employ the following notation and conventions throughout: \begin{enumerate} \item $k$, $F$, and $L$ are fields of characteristic not two. \item $\f_q$ is the finite field with $q$ elements, where $q$ is an odd prime. \item $SH(k)$ is the motivic stable homotopy category over $\Spec(k)$. \item $S^{0,0}$ is the motivic sphere spectrum. \item Everything is implicitly $(2,\eta)$-complete. \item $\m_2^k$ is the mod two motivic cohomology of a point over $\Spec(k)$. \item $A^k$ (resp. $A^k_*$) is the motivic (resp. dual motivic) Steenrod algebra over $\Spec(k)$. \item $Ext^{s,f,w}_{A^k}$ denotes the cohomology of the $k$-motivic Steenrod algebra in stem $s$, Adams filtration $f$, and motivic weight $w$. \end{enumerate} \subsection{Acknowledgements} The author thanks Jonas Irgens Kylling first and foremost. Many of the ideas and insights in this paper were discovered in collaboration with him during the author's visit to the University of Oslo in August 2018 and the subsequent months. The author also thanks Tom Bachmann, Mark Behrens, Jeremiah Heller, Dan Isaksen, Paul Arne {\O}stv{\ae}r, and an anonymous referee for helpful discussions. This project was partially completed at the Newton Institute workshop on equivariant and motivic homotopy theory in 2018. The author gratefully thanks the Newton Institute and the University of Oslo for their support. The author was partially supported by NSF grant DMS-1547292.
\section{Motivic Lin's Theorem and the motivic Mahowald invariant revisited}\label{Section:Prelim} In this section, let $f : k \to F$ be a field extension and let $f^* : SH(k) \to SH(F)$ be the corresponding base-change functor. \subsection{Motivic Lin's Theorem revisited}\label{Section:MotTate} We begin by defining geometric universal spaces and geometric classifying spaces following \cite{GH18}. Let $\cp(Sm_k^G)$ be the category of motivic $G$-spaces over $\Spec(k)$ \cite{GH18}. \begin{defin}[{\cite[Def. 3.2]{GH18}}] Let $\cf$ be a family of subgroups of $G$. The \emph{universal motivic $\cf$-space over $\Spec(k)$} is the object $E_{gm}\cf_k \in \cp(Sm_k^G)$ whose value on $X \in Sm^G_k$ is $$E_{gm}\cf_k(X) = \begin{cases} \emptyset \quad &\text{ if } X^H \neq \emptyset \text{ for some } H \notin \cf, \\ pt \quad&\text{ else.} \end{cases} $$ When $\cf$ is the family of proper subgroups of $G$, we will use the notation $E_{gm}G := E_{gm}\cf$ and define $$B_{gm}G := (E_{gm}G) / G.$$ In this case, $B_{gm}G$ is the geometric classifying space originally defined by Morel-Voevodsky \cite{MV99} and Totaro \cite{Tot99}. \end{defin} Geometric universal spaces are preserved under base-change by \cite[Prop. 3.8]{GH18} and satisfy motivic analogs of many useful properties of universal spaces in classical homotopy theory. \begin{defin}[{\cite{Gre12}}]\label{Def:Proj} For all $n \in \z$, the \emph{stunted motivic lens spectrum} $\underline{L}^\infty_n$ is defined by setting $$\underline{L}^\infty_n := Th(n\gamma \to B_{gm}C_2)$$ where $\gamma$ is the tautological line bundle over $B_{gm}C_2$. Define $$\underline{L}^\infty_{-\infty} := \underset{n}{\holim} \underline{L}^\infty_{-n}.$$ \end{defin} The key property of this construction is the following theorem, which was proven by Gregersen over fields of characteristic zero with finite virtual cohomological dimension, and by Gregersen-Heller-Kylling-Rognes-{\O}stv{\ae}r over fields of characteristic not two.
\begin{thm}[{\cite[Thm. 2.0.2]{Gre12}\cite{GHKRO18}}]\label{Thm:MotLinsThm} The map $$S^{-1,0} \to \underline{L}^\infty_{-\infty}$$ induces a $\pi_{**}$-isomorphism after $(2,\eta)$-completion. \end{thm} We will use this in the next section to define the $k$-motivic Mahowald invariant. We conclude this section by recording a useful property of stunted motivic lens spectra. \begin{lem}[{\cite{GHKRO18}}]\label{Lem:ProjBaseChange} Let $n \in \z$. Then $\underline{L}^\infty_n$ is preserved under base-change. \end{lem} \subsection{The motivic Mahowald invariant revisited}\label{Section:MotMI} Using the results of the previous section, we can extend the definition of the motivic Mahowald invariant from \cite[Sec. 2]{Qui19a} to general base fields. \begin{defin}~\label{Def:MotMI} Let $\alpha \in \pi^k_{s,t}(S^{0,0})$. We define the \emph{$k$-motivic Mahowald invariant} of $\alpha$, denoted $M^k(\alpha)$, as follows. Consider the coset of completions of the following diagram \[ \begin{tikzcd} S^{s,t} \arrow[dashed,rr] \arrow{d}{\alpha} && S^{-2N+1,-N} \vee S^{-2N+2,-N+1} \arrow{d} \\ S^{0,0} \arrow{r}{\simeq} & \Sigma^{1,0} \underline{L}^\infty_{-\infty} \arrow{r} & \Sigma^{1,0} \underline{L}^\infty_{-N} \end{tikzcd} \] where $N>0$ is minimal so that the left-hand composition is nontrivial. If the composition of the dashed arrow with the projection onto the higher dimensional sphere is nontrivial, we define the $k$-motivic Mahowald invariant $M^k(\alpha)$ to be the coset of completions composed with the projection onto the higher dimensional sphere. Otherwise, the composition of the dashed arrow with the projection onto the higher dimensional sphere is trivial and we define the $k$-motivic Mahowald invariant $M^k(\alpha)$ to be the coset of completions composed with the projection onto the lower dimensional sphere. We illustrate this convention in the examples later in this section. \end{defin} Recall that the ``Squeeze Lemmas'' from \cite[Sec.
3]{Qui19b} were used to compute generalized Mahowald invariants by comparing them under functors such as equivariant Betti realization and geometric fixed points. Since $\underline{L}^\infty_{-N}$ is preserved by base-change, we obtain the following comparison lemma which will be essential in the last section. \begin{lem}\label{Lem:Squeeze} Let $f : k \to F$. Suppose $\alpha,\beta \in \pi^{F}_{**}(S^{0,0})$ such that $\beta \in M^F(\alpha)$. Suppose further that there exist $\alpha', \beta' \in \pi^k_{**}(S^{0,0})$ such that $f^*(\alpha') = \alpha$ and $f^*(\beta') = \beta$. Then $$|M^k(\alpha')| \leq |\beta'|.$$ Further, if $|M^k(\alpha')| = |\beta'|$, then $\beta' \in M^k(\alpha')$. \end{lem} \begin{proof} Suppose that the $F$-motivic Mahowald invariant $M^F(\alpha)$ is defined by the commutative diagram \[ \begin{tikzcd} S^{s,t} \arrow{d}{\alpha} \arrow[rr,dashed,"\beta"] && S^{-2N+1,-N} \vee S^{-2N+2,-N+1} \arrow{d} \\ S^{0,0} \arrow{r}{\simeq} & \Sigma^{1,0} \underline{L}^\infty_{-\infty} \arrow{r} & \Sigma^{1,0} \underline{L}^\infty_{-N}, \end{tikzcd} \] so in particular $N>0$ is minimal so that the left-hand composite is nontrivial. Then the left-hand composite in the commutative diagram \[ \begin{tikzcd} S^{s,t} \arrow{d}{\alpha'} \arrow[rr,dashed,"\beta'"] && S^{-2N+1,-N} \vee S^{-2N+2,-N+1} \arrow{d} \\ S^{0,0} \arrow{r}{\simeq} & \Sigma^{1,0} \underline{L}^\infty_{-\infty} \arrow{r} & \Sigma^{1,0} \underline{L}^\infty_{-N} \end{tikzcd} \] must also be nontrivial since the first commutative diagram can be obtained from this one by applying $f^*$. Since there may be some $N' < N$ such that the left-hand composite is nontrivial, we only obtain an inequality as in the statement of the lemma. However, if $|\beta'| = |M^k(\alpha')|$, then $N$ is minimal and $\beta' \in M^k(\alpha')$ by definition, which proves the last claim. \end{proof} Applying the previous lemma to a composite of base-change functors gives the following.
\begin{cor} Let $k \overset{f}{\to} F \overset{g}{\to} L$ be a sequence of field extensions. Suppose that we have $\alpha,\beta \in \pi^k_{**}(S^{0,0})$, $\alpha', \beta' \in \pi^F_{**}(S^{0,0})$, and $\alpha'', \beta'' \in \pi^L_{**}(S^{0,0})$ such that $g^*(\alpha') = \alpha''$, $g^*(\beta') = \beta''$, $f^*(\alpha) = \alpha'$, $f^*(\beta) = \beta'$, $\beta \in M^k(\alpha)$, and $\beta'' \in M^L(\alpha'')$. Then $\beta' \in M^F(\alpha')$. \end{cor} \section{$v_1$-periodic families over prime fields}\label{Section:Prime} Over $\Spec(\c)$, one can construct $v_1$-periodic families via iterated Toda brackets following \cite{Qui19a}. We begin this section by constructing analogous families over any algebraically closed field using a result of Wilson-{\O}stv{\ae}r \cite{WO17}. We then construct lifts of these infinite families in $\pi_{**}^F(S^{0,0})$ where $F = \f_q$ ($q$ an odd prime), $F = \q_\nu$ ($\nu$ any prime), and $F = \q$. That is, we construct families which base-change to the $v_1$-periodic families in the algebraic closure. We also study the $F$-motivic homotopy of certain stunted motivic lens spectra. \subsection{$v_1$-periodicity over algebraically closed fields}\label{Section:Closed} In this section, we define some infinite families over algebraically closed fields. We begin by recalling the analogous families from the classical and $\c$-motivic settings.
In the classical setting, Adams constructed infinite families $\{v^{4i}_1 \eta^j, v^{4i}_1 8\sigma: 1 \leq j \leq 3, i \geq 0\}$ as the (nontrivial) composites \begin{align*} S^{j} \overset{\widetilde{\eta^j}}{\longrightarrow} S^{0}/2 \overset{v^{4i}_1}{\longrightarrow} \Sigma^{-8i} S^{0}/2 \to S^{-8i+1}, \\ S^{0} \hookrightarrow S^{0}/2 \overset{v^{4i}_1}{\longrightarrow} \Sigma^{-8i} S^{0}/2 \to S^{-8i+1}, \end{align*} where $S^{0}/2$ is the mod two Moore spectrum, $\widetilde{\eta^j} : S^j \to S^0/2$ is a lift of $\eta^j \in \pi_j(S^0)$ to the top cell of $S^0/2$, and $v^{4}_1 : S^{0}/2 \to \Sigma^{-8} S^{0}/2$ is a non-nilpotent self-map of $S^0/2$ \cite{Ada66}. These classes are detected by the classes $\{P^i h_1^j, P^i h_0^3 h_3: 1 \leq j \leq 3, i \geq 0\}$ in the Adams spectral sequence, where $P^i(-)$ is the Massey product $P(-) := \langle h_3, h_0^4, - \rangle$ iterated $i$ times. We discussed $\c$-motivic lifts of these infinite families in \cite{Qui19a}. In particular, one can define $P(-) := \langle h_3, h_0^4, - \rangle$ in the $\c$-motivic Adams spectral sequence and define infinite families $\{v^{4i}_1 \eta, v^{4i}_1 \tau \eta^2, v^{4i}_1 \tau \eta^3, v^{4i}_1 8\sigma : i \geq 0\}$ as the classes detected by $\{P^i h_1, P^i \tau h_1^2, P^i \tau h_1^3, P^i h_0^3 h_3: i \geq 0\}$. All of these classes are nontrivial since they are permanent cycles (for degree reasons) and their Betti realizations are Adams' classical families. We can construct analogs of these classes over arbitrary algebraically closed fields of characteristic not two using the following theorem of Wilson and Wilson-{\O}stv{\ae}r: \begin{thm}\label{Thm:WO} Let $\overline{F}$ be an algebraically closed field of exponential characteristic $q \neq 2$. For all $s \geq w \geq 0$, there are isomorphisms $\pi^{\overline{F}}_{s,w}(S^{0,0})[q^{-1}] \cong \pi^\c_{s,w}(S^{0,0})[q^{-1}]$. \end{thm} \begin{proof} If $q > 2$, this follows from \cite[Thm. 1.1]{WO17}.
If $q = 1$, i.e. if $\chara(\overline{F}) = 0$, this follows from the proof of \cite[Prop. 7]{Wil18}. \end{proof} \begin{defin} We define $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8 \sigma$ to be the classes in $\pi_{**}^{\overline{F}}(S^{0,0})$ corresponding to the classes with the same names in $\pi_{**}^\c(S^{0,0})$ under the isomorphism in Theorem \ref{Thm:WO}. \end{defin} Our primary goal in the remainder of this section is to construct lifts of these classes to prime fields. \subsection{Computations over finite fields of prime order}\label{Section:FiniteFields} We start by constructing some infinite families in $\pi_{**}^{\f_q}(S^{0,0})$ using the $\rho$-Bockstein spectral sequence \cite{Hil11} and base-change along $\f_q \to \overline{\f}_q$. We break the analysis into two cases (Lemmas \ref{Lem:Extq1} and \ref{Lem:Extq3}) depending on the congruence class of $q$ modulo four; the results are summarized in Theorem \ref{Thm:FamiliesFiniteFields} for future reference. We then study the $\f_q$-motivic homotopy of a certain stunted lens spectrum. The computations in this section serve as a warm-up for the analogous computations over $\Spec(\q_\nu)$ ($\nu$ any prime) in Sections \ref{Section:pAdicRationals} and \ref{Section:Rationals}. They will also be used in Section \ref{Section:PrimeMI}. We note that the $\rho$-Bockstein spectral sequence converging to $\pi_{**}^{\f_q}(kq)$ was studied by Kylling \cite{Kyl15} and the $\rho$-Bockstein spectral sequence for $\pi_{**}^{\f_q}(S^{0,0})$ was studied by Wilson \cite{Wil16} and Wilson-{\O}stv{\ae}r \cite{WO17}. We refer the reader to their work, as well as \cite{Hil11} and \cite{GHIR17}, for further applications of the $\rho$-Bockstein spectral sequence. \begin{defin} The \emph{Milnor-Witt degree} of a class $\alpha \in \pi_{s,w}^k(S^{0,0})$ is defined to be $MW(\alpha) := s-w$. \end{defin} Recall that in \cite[Sec.
5]{Qui19b}, we constructed classes $v^{4i}_1 \tau \eta^j \in \pi_{**}^\r(S^{0,0})$ for all $i \geq 0$ and $2 \leq j \leq 3$. The class $v^{4i}_1 \tau \eta^j$ is (by definition) detected by $P^i \tau h_1^j \in Ext^{***}_{A^\r}$ where $P(-)$ is the matric Massey product $$P(x) := \left\langle \begin{bmatrix} h_3 & \rho^3 h_1^2 \end{bmatrix}, \begin{bmatrix} h_0^4 \\ c_0 \end{bmatrix}, x \right\rangle.$$ Note that if $\rho^3 = 0$ in $\m_2^k$, then this operator simplifies to the Massey product $\langle h_3, h_0^4, - \rangle$ studied in \cite{Qui19a} which lifted the periodicity operator introduced by Adams in \cite{Ada66b}. The following lemma allows us to construct analogous classes in $Ext^{***}_{A^F}$ where $F$ is any field of characteristic not two. We refer the reader to \cite[Sec. 2.1]{IO18} for a review of Milnor K-theory $K^M_*(F)$. \begin{lem}[{\cite[Lem. 4.1]{KW18}}]\label{Lem:KW} Let $Ext^i$ denote the $i$-th $Ext$-group. Over a field of characteristic not two, we have for each $i$ an extension $$0 \to Ext^i_{A^\r} \otimes_{\f_2[\rho]} K^M_*(F)/2 \to Ext^i_{A^F} \to Tor_1^{\f_2[\rho]}(Ext^{i+1}_{A^\r}, K^M_*(F)/2) \to 0.$$ \end{lem} In particular, there is an injective map $$\phi_F : Ext^i_{A^\r} \otimes_{\f_2[\rho]} K_*^M(F)/2 \to Ext^i_{A^F}$$ for any field $F$ of characteristic not two. We recall the following calculation of the Milnor K-theory of finite fields for the reader's convenience. \begin{thm}[{\cite{Mil70}\cite[Ex. 2.6]{IO18}}] The Milnor K-theory of a finite field $\f_q$ is given by $$K^M_*(\f_q) \cong \z[u]/(u^2, (q-1)u)$$ where $u = [a]$ is the class of any generator $a \in \f_q^\times \cong K_1^M(\f_q)$. In particular, we have $$K^M_*(\f_q)/2 \cong \begin{cases} \f_2[u]/u^2 \quad & \text{ if } q \equiv 1 \mod 4, \\ \f_2[\rho]/\rho^2 \quad & \text{ if } q \equiv 3 \mod 4, \end{cases} $$ where $\rho = [-1]$.
\end{thm} \begin{lem}\label{Lem:Extq1} The following statements hold over $\Spec(\f_q)$ where $q \equiv 1 \mod 4$: \begin{enumerate} \item For all $i \geq 0$, the classes $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$ are well-defined classes in $Ext_{\f_q}$. \item The classes above are permanent cycles in the $\f_q$-motivic Adams spectral sequence. \item The classes above detect nontrivial classes in $\pi_{**}^{\f_q}(S^{0,0})$ which base-change to the classes with the same name in $\pi_{**}^{\overline{\f}_q}(S^{0,0})$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item We have $Ext_{\f_q} \cong Ext_\c \otimes E(u)$ where $|u| = (0,-1,-1)$ by \cite[Prop. 7.1]{WO17}. These classes are just the images under $\phi_F$ of the classes in $Ext_\c$ with the same name tensored with $1$. \item Adams differentials preserve motivic weight and decrease stem by one. It follows from \cite[Prop. 5.4]{Qui19a} that there are no possible targets for Adams differentials on these classes. \item The statement about base-change is clear since $\pi_{**}^{\overline{\f}_q}(S^{0,0}) \cong \pi_{**}^\c(S^{0,0})$ and base-change sends classes in $\pi_{**}^{\f_q}(S^{0,0})$ to classes with the same name in $\pi_{**}^{\overline{\f}_q}(S^{0,0})$. Since base-change induces a map of Adams spectral sequences and the classes with the same name in the $\overline{\f}_q$-motivic Adams spectral sequence are nonzero, it follows from Part (2) that the classes we are interested in detect nonzero classes in the $\f_q$-motivic Adams spectral sequence. \end{enumerate} \end{proof} \begin{lem}\label{Lem:Extq3} The following statements hold over $\Spec(\f_q)$ where $q \equiv 3 \mod 4$: \begin{enumerate} \item For all $i \geq 0$, the classes $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$ are permanent cycles in the $\rho$-Bockstein spectral sequence.
\item There are no classes in higher $\rho$-Bockstein filtration which contribute to the same tridegrees of $Ext_{\f_q}$ as the classes above. \item The classes above are permanent cycles in the $\f_q$-motivic Adams spectral sequence. \item The classes above detect nontrivial classes in $\pi_{**}^{\f_q}(S^{0,0})$ which base-change to the classes with the same name in $\pi_{**}^{\overline{\f}_q}(S^{0,0})$. \end{enumerate} \end{lem} \begin{proof} We will implicitly use the fact that $\rho^2=0$ in $\m_2^{\f_q}$ in this proof. In particular, this implies that the $\rho$-Bockstein spectral sequence collapses at $E_2$ so we only need to consider $d_1$-differentials. \begin{enumerate} \item We provide the proof for $P^i h_1$; the other cases are similar. The cases $i = 0$ and $i=1$ follow from \cite{WilCharts}. In general, we have $MW(P^i h_1) = 4i$, $stem(P^i h_1) = 8i+1$, and $filt(P^i h_1) = 4i+1$. The possible targets of a $\rho$-Bockstein differential have the form $\rho z$ where $z \in Ext^{8i+1, 4i+2, 4i+2}_{A^\c}$, but this group is zero by \cite[Prop. 5.4]{Qui19a}. \item For $P^i h_1$, $P^i \tau h_1^2$, and $P^i h_0^3 h_3$, this is clear from inspection of $Ext_\c$ in \cite{Isa14b} along with \cite[Prop. 5.4]{Qui19a}. On the other hand, the class $\rho P^i h_0 h_2$ in the $E_1$-page of the $\rho$-Bockstein spectral sequence converging to $Ext_{\f_q}$ could contribute to the same tridegree as $P^i \tau h_1^2$. However, we have $d_1(P^i \tau h_2) = \rho P^i h_0 h_2$ for all $i \geq 0$, so this does not occur. \item This follows from \cite[Prop. 5.4]{Qui19a} by similar arguments since Adams differentials preserve motivic weight and decrease stem by one. \item The base-change statement is clear from Part (1).
Since base-change induces a map of Adams spectral sequences and the classes with the same name in the $\overline{\f}_q$-motivic Adams spectral sequence are nonzero, it follows from Part (2) that the classes we are interested in detect nonzero classes in the $\f_q$-motivic Adams spectral sequence. \end{enumerate} \end{proof} We summarize the key results from the previous two lemmas in the following definition and theorem. \begin{defin} We define $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8\sigma$ to be the classes in $\pi_{**}^{\f_q}(S^{0,0})$ detected by $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$, respectively. \end{defin} \begin{thm}\label{Thm:FamiliesFiniteFields} The classes $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8\sigma$ in $\pi_{**}^{\f_q}(S^{0,0})$ are nonzero. \end{thm} \begin{prop}\label{Prop:16NullFinite} Let $L_m$ be the subcomplex of $\underline{L}^\infty_{-\infty}$ with cells in dimensions $-5+8m \leq d \leq 2+8m$. Over $\f_q$, the degree $16$ map is null on $L_m$ for all $m \in \z$. \end{prop} \begin{rem2} The subcomplex $L_0$ may be constructed as follows; the cases $m \neq 0$ can be obtained by suspension. Let $X = \underline{L}_{-3}^\infty = Th(-3\gamma \to B_{gm}C_2)$ and let $Y$ denote the simplicial $2$-skeleton of $X$, so $Y$ has cells in simplicial dimensions $-6 \leq d \leq 2$. Then $L_0$ is the cofiber of the inclusion of the $-6$-cell into $Y$. A cell diagram for $L_0$ can be produced from \cite[Fig. 1]{Qui19a} by restricting to the values $-5 \leq d \leq 2$ along the horizontal axis. \end{rem2} \begin{rem2} We will freely use the computation of $\pi_{**}^{\f_q}(S^{0,0})$ in low dimensions in the following proof. More precisely, we need to know $\pi_{-2k \mp \epsilon, -k \mp \epsilon}^{\f_q}$ for all $-3 \leq k \leq 3$ and $\epsilon \in \{0,1\}$.
There are no Adams differentials in the tridegrees of the $\f_q$-motivic Adams spectral sequence computing these groups \cite[Sec. 7.3-7.4]{WO17}. With this in mind, these groups can be obtained as follows: \begin{enumerate} \item When $q \equiv 1 \mod 4$, the relevant motivic homotopy groups satisfy $\pi_{**}^{\f_q}(S^{0,0}) \cong \pi_{**}^\c(S^{0,0})\{1\} \oplus \pi_{*+1,*+1}^{\c}(S^{0,0}) \{u\}.$ This follows from the previous statement about Adams differentials and the calculation of the Adams $E_2$-page in \cite[Prop. 7.1]{WO17}. \item When $q \equiv 3 \mod 4$, the relevant motivic homotopy groups can be obtained from the $E_2$-page of the $\rho$-Bockstein spectral sequence depicted in \cite[Fig. 2]{DI16a} by replacing infinite $\rho$-towers $x[\rho]$ by $x[\rho]/(\rho^2)$ and inserting classes $y \tau^{2i+1}\rho$, $i \geq 0$, for any $y \in \pi_{**}^\c(S^{0,0})$ which supports nontrivial $h_0$-multiplication but is not $h_0$-divisible. This follows from the previous statement about Adams differentials and the $\rho$-Bockstein spectral sequence $d_1$-differential $d_1(\tau) = \rho h_0$ from \cite[Prop. 3.2]{DI16a}. \end{enumerate} \end{rem2} \begin{proof} The isomorphism $\pi_{**}^{\overline{\f}_q}(S^{0,0}) \cong \pi_{**}^\c(S^{0,0})$ and \cite[Prop. 5.11]{Qui19a} imply that the result holds over $\overline{\f}_q$. Let $X := L_m \wedge D^{\f_q} L_m$ where $D^{\f_q}(-) := F(-,S^{0,0})$ is the $\f_q$-motivic Spanier-Whitehead dual functor. By the argument from the proof of \cite[Prop. 5.11]{Qui19b}, we see that the class detecting $16$ in $\pi_{0,0}^{\f_q}(X)$ must base-change to a $\tau$-torsion class in $\pi_{0,0}^{\overline{\f}_q}(X)$. The $\f_q$-motivic homotopy group $\pi_{0,0}^{\f_q}(X)$ may be computed via the Atiyah-Hirzebruch spectral sequence arising from the filtration of $X$ by topological dimension. As in the proof of \cite[Prop.
5.11]{Qui19b}, the possible contributions to $\pi_{0,0}^{\f_q}(X)$ in the Atiyah-Hirzebruch spectral sequence have the form $\alpha[2k\pm \epsilon, k\pm \epsilon]$ with $\alpha \in \pi_{-2k \mp \epsilon, -k \mp \epsilon}^{\f_q}(S^{0,0})$ where $-3 \leq k \leq 3$ and $\epsilon \in \{0,1\}$. In addition to the classes $\alpha$ which base-change to zero in $\pi_{**}^{\overline{\f}_q}(S^{0,0})$, we may omit classes which appeared in the proof of \cite[Prop. 5.11]{Qui19b} since the attaching map structure of $X$ is independent of the base field. When $q \equiv 1 \mod 4$, the classes $$\alpha \in \{uh_3h_0^i, 1 \leq i \leq 3; uh_2h_0; uh_0^j, j \geq 1\}$$ lie in the correct bidegrees of $\pi_{**}^{\f_q}(S^{0,0})$, base-change to $\tau$-torsion classes (actually to zero) in $\pi_{**}^{\overline{\f}_q}(S^{0,0})$, and have not been considered in the proof of \cite[Prop. 5.11]{Qui19b}. We rule them out using the Atiyah-Hirzebruch spectral sequence. Recall that the differentials in the Atiyah-Hirzebruch spectral sequence correspond to the attaching maps in $X$. The $d_1$-differentials correspond to attaching maps detected by $h_0$, which in turn correspond to a nontrivial action of $Sq^1$ in $H^{**}(X)$. The action of $Sq^1$ on $H^{**}(X)$ may be calculated using the Cartan formula, the known action of $Sq^1$ on $H^{**}(L_0)$ (as depicted in \cite[Fig. 1]{Qui19a}), and the known action of $Sq^1$ on $H^{**}(DL_0)$ (calculated for a Spanier-Whitehead dual using the action of $Sq^1$ on $H^{**}(L_0)$ and the fact that $\chi Sq^1 = Sq^1$, where $\chi$ is conjugation in the motivic Steenrod algebra). \begin{enumerate} \item The classes $u h_3 h_0^i$, $1 \leq i \leq 3$, are the targets of $d_1$- and $d_8$-differentials. We verify this claim in the case $m=0$ for simplicity; the other cases follow from $8$-fold periodicity of the action of $Sq^i$ on $H^{**}(\underline{L}^\infty_{-N})$ (where $N$ is any integer) for $i \leq 4$.
The classes $uh_3 h_0^i$, $1 \leq i \leq 3$, all lie in $\pi_{6,3}^{\f_q}(S^{0,0})$, so we must study the action of $Sq^1$, $Sq^2$, $Sq^4$, and $Sq^8$ on the generators for an additive basis of $H^{-6,-3}(X)$. Any such generator has the form $e_{i} \otimes f_{k}$ where $e_{i} \in H^{i,j}(L_0)$ and $f_{k} \in H^{k,\ell}(DL_0)$ with $i+k=-6$ and $j+\ell = -3$. The only options are then $e_{-5} \otimes f_{-1}$ and $e_{-4} \otimes f_{-2}$. We have $$Sq^1(e_{-5} \otimes f_{-1}) = Sq^1(e_{-5}) \otimes f_{-1} + e_{-5} \otimes Sq^1(f_{-1}) = e_{-4} \otimes f_{-1},$$ $$Sq^1(e_{-4} \otimes f_{-2}) = Sq^1(e_{-4}) \otimes f_{-2} + e_{-4} \otimes Sq^1(f_{-2}) = e_{-4} \otimes f_{-1}.$$ We may take a basis to be $\{e_{-5} \otimes f_{-1}, e_{-5} \otimes f_{-1} + e_{-4} \otimes f_{-2}\}$, so we see that the classes $uh_0^i h_3$ on the cell corresponding to $e_{-5} \otimes f_{-1}$ are targets of $d_1$-differentials. We claim that the remaining classes $uh_0^i h_3$ on the cell corresponding to $e_{-5} \otimes f_{-1} + e_{-4} \otimes f_{-2}$ are the targets of $d_8$-differentials. To see this, we apply the Cartan formula to calculate $$Sq^8(e_{-5} \otimes f_{-1} + e_{-4} \otimes f_{-2}) = e_{0} \otimes f_{2}.$$ We therefore have the claimed $d_8$-differential if we can show that the classes $u h_0^i$ survive on the cell corresponding to $e_0 \otimes f_2$ up to the $E_8$-page of the Atiyah-Hirzebruch spectral sequence. \item The class $uh_2h_0$ supports a $d_1$-differential. \item The classes $uh_0^j$, $j \geq 1$, are the targets of $d_1$-differentials. \end{enumerate} When $q \equiv 3 \mod 4$, there are no classes which lie in the correct bidegrees, base-change to $\tau$-torsion classes in $\pi_{**}^{\overline{\f}_q}(S^{0,0})$, and were not considered in the proof of \cite[Prop. 5.11]{Qui19b}. We have ruled out all classes which could detect $16$ in $\pi_{0,0}^{\f_q}(X)$, so it must be zero.
\end{proof} \subsection{Computations over the $\nu$-adic rationals}\label{Section:pAdicRationals} We now make the analogous computations over $\Spec(\q_\nu)$. The computations of this section serve as input for Section \ref{Section:Rationals}. The mod two Milnor K-theory of $\q_\nu$ is given below; see \cite[Pg. 10]{Wil18} for details. \[K^M_*(\q_\nu)/2 \cong \begin{cases} \f_2[\pi, u]/(\pi^2, u^2) \quad & \text{ if } \nu \equiv 1 \mod 4, \\ \f_2[\pi, \rho]/(\rho^2, \rho \pi + \pi^2) \quad & \text{ if } \nu \equiv 3 \mod 4, \\ \f_2[\pi,\rho,u]/(\rho^3,u^2,\pi^2,\rho u, \rho \pi, \rho^2 + u \pi) \quad & \text{ if } \nu = 2. \end{cases} \] \begin{lem} The following statements hold over $\Spec(\q_\nu)$ for $\nu \equiv 1 \mod 4$: \begin{enumerate} \item For all $i \geq 0$, the classes $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$ are well-defined classes in $Ext_{\q_\nu}$. \item The classes above are permanent cycles in the $\q_\nu$-motivic Adams spectral sequence. \item The classes above detect nontrivial classes in $\pi_{**}^{\q_\nu}(S^{0,0})$ which base-change to the classes with the same name in $\pi_{**}^{\overline{\q}_\nu}(S^{0,0})$. \end{enumerate} \end{lem} \begin{proof} This follows from the same argument as Lemma \ref{Lem:Extq1}; the key point is that adjoining one more power of $u$ does not carry us out of the isomorphism range in \cite[Prop. 5.4]{Qui19a}. \end{proof} \begin{lem} The following statements hold over $\Spec(\q_\nu)$ where $\nu = 2$ or $\nu \equiv 3 \mod 4$: \begin{enumerate} \item For all $i \geq 0$, the classes $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$ are permanent cycles in the $\rho$-Bockstein spectral sequence. \item There are no classes in higher $\rho$-Bockstein filtration which contribute to the same tridegrees of $Ext_{\q_\nu}$ as the classes above. \item The classes above are permanent cycles in the $\q_\nu$-motivic Adams spectral sequence. 
\item The classes above detect nontrivial classes in $\pi_{**}^{\q_\nu}(S^{0,0})$ which base-change to the classes with the same name in $\pi_{**}^{\overline{\q}_\nu}(S^{0,0})$. \end{enumerate} \end{lem} \begin{proof} This follows from the same argument as Lemma \ref{Lem:Extq3}. \end{proof} \begin{defin} Let $\nu$ be any prime. We define $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8\sigma$ to be the classes in $\pi_{**}^{\q_\nu}(S^{0,0})$ detected by $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$, respectively. \end{defin} \begin{thm}\label{Thm:FamiliespAdic} The classes $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8\sigma$ in $\pi_{**}^{\q_\nu}(S^{0,0})$ are nonzero. \end{thm} \begin{proof} A slight modification of the proof from the case $F = \f_q$ implies that the classes $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$, $i \geq 0$, are permanent cycles in the $\q_\nu$-motivic Adams spectral sequence. The images of these classes under base-change along $f : \q_\nu \to \overline{\q}_\nu$ detect nonzero classes in the $\overline{\q}_\nu$-motivic Adams spectral sequence. Therefore they must detect nonzero classes in the ${\q_\nu}$-motivic Adams spectral sequence. \end{proof} \begin{prop}\label{Prop:16NullpAdic} Let $L_m$ be the subcomplex of $\underline{L}^\infty_{-\infty}$ with cells in dimensions $-5+8m \leq d \leq 2 + 8m$. Over $\q_\nu$, the degree $16$ map is null on $L_m$ for all $m \in \z$. \end{prop} \begin{proof} The reductions from the proof of Proposition \ref{Prop:16NullFinite} carry over \emph{mutatis mutandis}. Let $X := L_m \wedge D^{\q_\nu} L_m$. We split the analysis into three cases.
When $\nu \equiv 1 \mod 4$, the classes $$\alpha \in \{\pi u \tau h_3 h_1^2; \pi u \tau c_0 h_1; \pi u h_3 h_0^i, 1 \leq i \leq 3; \pi u h_2 h_0 \}$$ lie in the correct bidegrees of $\pi_{**}^{\q_\nu}(S^{0,0})$, base-change to $\tau$-torsion classes (actually to zero) in $\pi_{**}^{\overline{\q}_\nu}(S^{0,0})$, and have not been considered in the proof of \cite[Prop. 5.11]{Qui19b}. \begin{enumerate} \item The classes $\pi u \tau h_3 h_1^2$ and $\pi u \tau c_0 h_1$ are the targets of $d_2$-differentials. \item The classes $ \pi u h_3 h_0^i$, $1 \leq i \leq 3$, are the targets of $d_1$-differentials. \item The class $\pi u h_2 h_0$ is the target of a $d_1$-differential. \end{enumerate} When $\nu \equiv 3 \mod 4$ (resp. $\nu = 2$), the only new classes are the ones appearing in the case where $\nu \equiv 1 \mod 4$, with $\pi u$ replaced by $\pi \rho$ (resp. $\pi u$ replaced by $\rho^2$). The same argument carries through to eliminate these classes. We have ruled out any classes which could detect $16$ in $\pi_{0,0}^{\q_\nu}(X)$, so it must be zero. \end{proof} \subsection{$v_1$-periodic families over the rationals}\label{Section:Rationals} In this section, we construct the infinite families $v^{4i}_1 \tau \eta^2$ and $v^{4i}_1 \tau \eta^3$ in $\pi_{**}^\q(S^{0,0})$. We also study the $\q$-motivic homotopy of $L_m$. In both cases, we employ previous calculations along with the motivic Hasse principle developed in \cite[Sec. 4-5]{OO13}. The construction of the $\r$-motivic May spectral sequence \cite[Sec. 4.3]{Qui19b} may be modified by replacing $\m_2^\r$ by $\m_2^\q$ and $(A^\r)^\vee$ by $(A^\q)^\vee$ to obtain the $\q$-motivic May spectral sequence. The discussion in \cite[Sec. 
5.1]{Qui19b} carries over to define a $\q$-motivic periodicity operator $$P_\q(x) := \left\langle \begin{bmatrix} h_3 & \rho^3 h_1^2 \end{bmatrix}, \begin{bmatrix} h_0^4 \\ c_0 \end{bmatrix}, x \right\rangle$$ on the elements $x \in Ext^{***}_{A^\q}(\m_2^\q, \m_2^\q)$ which are both $h_0^4$- and $c_0$-torsion. \begin{lem} The following statements hold over $\Spec(\q)$: \begin{enumerate} \item For all $i \geq 0$, the classes $P^i_\q \tau h_1^2$ and $P^i_\q \tau h_1^3$ are nontrivial in $Ext^{***}_{A^\q}(\m_2^\q,\m_2^\q)$. \item The classes above are permanent cycles in the $\q$-motivic Adams spectral sequence. \item The classes above detect nontrivial classes in $\pi_{**}^{\q}(S^{0,0})$ which base-change to the classes with the same name in $\pi_{**}^{\overline{\q}}(S^{0,0})$. \end{enumerate} \end{lem} \begin{proof} The analogous statements hold over every completion $\q_\nu$ of $\q$: \begin{itemize} \item Over $\r$, the statements follow from \cite{Qui19b}. \item Over $\q_\nu$ with $\nu$ a prime, the statements follow from Section \ref{Section:pAdicRationals}. The identification of the relevant elements as (matric) Massey products in $Ext^{***}_{A^{\q_\nu}}(\m_2^{\q_\nu},\m_2^{\q_\nu})$ follows from the construction of $\phi_F$ in the proof of \cite[Lem. 4.1]{KW18}. \end{itemize} The map $K^M_*(\q)/2 \to \prod_\nu K^M_*(\q_\nu)/2$ induces an injective map between the $E_1$-pages of the motivic May spectral sequences converging to $Ext^{***}_{A^\q}$ and $\prod_\nu Ext^{***}_{A^{\q_\nu}}$. Then $(1)$ is clear since the same results hold over every completion of $\q$. The motivic Hasse map also induces a map of motivic Adams spectral sequences, so $(2)$ follows similarly. Finally, $(3)$ follows from quotienting by $\rho$ as in \cite[Sec. 5.1]{Qui19b}. \end{proof} \begin{defin} We define $v^{4i}_1 \tau \eta^2$ and $v^{4i}_1 \tau \eta^3$ to be the classes in $\pi_{**}^\q(S^{0,0})$ detected by $P^i_\q \tau h_1^2$ and $P^i_\q \tau h_1^3$, respectively.
\end{defin} \begin{thm} The classes $v^{4i}_1 \tau \eta^2$ and $v^{4i}_1 \tau \eta^3$ are nonzero in $\pi_{**}^\q(S^{0,0})$. \end{thm} \begin{prop}\label{Prop:16NullRationals} Let $L_m$ be the subcomplex of $\underline{L}^\infty_{-\infty}$ with cells in dimensions $-5+8m \leq d \leq 2 + 8m$. Over $\q$, the degree $16$ map is null on $L_m$ for all $m \in \z$. \end{prop} \begin{proof} The analogous results hold over every completion $\q_\nu$ of $\q$: \begin{enumerate} \item Over $\r$, this follows from \cite[Sec. 5.2]{Qui19b}. \item Over $\q_\nu$ with $\nu$ a prime, this follows from Section \ref{Section:pAdicRationals}. \end{enumerate} The result then follows from base-change and the motivic Hasse map. In particular, we see that the induced map from the $E_2$-page of the $\q$-motivic Adams spectral sequence to the product over all completions of the $E_2$-page of the $\q_\nu$-motivic Adams spectral sequence is injective in the relevant tridegrees. Since there are no differentials in this range (by base-change and the analogous statements over $\Spec(\r)$ and $\Spec(\q_\nu)$), the map in homotopy groups is injective in the relevant bidegrees. The result follows. \end{proof} \section{Motivic Mahowald invariant of $(2+\rho\eta)^i$}\label{Section:General} We now apply the computations from Section \ref{Section:Prime} to compute $M^F((2+\rho\eta)^i)$ for all $i \geq 0$ if $\chara(F) > 2$ and for all $i \equiv 2,3 \mod 4$ if $\chara(F) = 0$. \subsection{Prime field Mahowald invariants}\label{Section:PrimeMI} We begin by computing over prime fields. That is, we compute the $\f_q$-Mahowald invariant of $2^i$, $i \geq 1$, where $q > 2$ is prime, and the $\q$-Mahowald invariant of $(2+\rho \eta)^i$, $i \equiv 2,3 \mod 4$. \begin{thm}\label{Thm:InitialMotMI2i} Let $i \geq 1$. The $\f_q$-Mahowald invariant of $2^i$ is given by \[ M^{\f_q}(2^{4i+j}) \ni \begin{cases} v^{4i}_1 8\sigma \quad & j=0,\\ v^{4i}_1\eta &j=1,\\ v^{4i}_1 \tau \eta^2 &j=2,\\ v^{4i}_1 \tau \eta^3 &j=3.
\end{cases} \] \end{thm} \begin{proof} The proof is analogous to the proof of \cite[Thm. 5.12]{Qui19b}; we summarize the necessary changes below. \begin{enumerate} \item The isomorphism $\pi_{**}^{\overline{\f}_q}(S^{0,0}) \cong \pi_{**}^\c(S^{0,0})$ implies that $M^{\overline{\f}_q}(2^{4i+j})$ has the desired form. Base-change along $\f_q \to \overline{\f}_q$ and Lemma \ref{Lem:Squeeze} then give the upper bound $|M^{\f_q}(2^{4i+j})| \leq |M^{\overline{\f}_q}(2^{4i+j})| = |M^\c(2^{4i+j})|$. Moreover, if this bound is tight, then $M^{\f_q}(2^{4i+j})$ must base-change to $M^{\overline{\f}_q}(2^{4i+j})$. By Section \ref{Section:FiniteFields}, the classes in the theorem statement are the only classes which do this. \item Proposition \ref{Prop:16NullFinite} and low-dimensional computations show that the inequality is an equality; compare with the proof of \cite[Thm. 2.17]{MR93}. \end{enumerate} \end{proof} \begin{thm} Let $i \geq 0$ and let $j \in \{2,3\}$. The $\q$-Mahowald invariant of $(2+\rho\eta)^{4i+j}$ is given by \[ M^\q((2+\rho \eta)^{4i+j}) \ni \begin{cases} v^{4i}_1 \tau \eta^2 \quad & j=2, \\ v^{4i}_1 \tau \eta^3 \quad & j=3. \end{cases} \] \end{thm} \begin{proof} We follow the same proof idea as above. \begin{enumerate} \item Lemma \ref{Lem:Squeeze} applied to the isomorphism $\pi_{**}^{\overline{\q}}(S^{0,0}) \cong \pi_{**}^\c(S^{0,0})$ implies that $M^{\overline{\q}}((2+\rho\eta)^{4i+j})$ has the desired form. Base-change along $\q \to \overline{\q}$ and Lemma \ref{Lem:Squeeze} then give the upper bound $|M^{\q}(2^{4i+j})| \leq |M^{\overline{\q}}(2^{4i+j})| = |M^\c(2^{4i+j})|$. Moreover, if this inequality is actually an equality, then $M^{\q}(2^{4i+j})$ must base-change to $M^{\overline{\q}}(2^{4i+j})$; by Section \ref{Section:Rationals}, the classes in the theorem statement are the only possible classes with this property.
\item Proposition \ref{Prop:16NullRationals} and low-dimensional computations show that the inequality is an equality; compare with the proof of \cite[Thm. 2.17]{MR93}. \end{enumerate} \end{proof} \subsection{$F$-Mahowald invariants of $2^i$}\label{Section:GeneralMI} We now apply the base-change functor, Lemma \ref{Lem:Squeeze}, and our computations over $\Spec(\f_q)$, $\Spec(\q)$, and $\Spec(\c)$ to compute the $F$-Mahowald invariants of $2^i$ for any field $F$ of characteristic not two. \begin{notn} We will use the notation $$k \overset{f}{\to} F \overset{g}{\to} L$$ to denote any of the following sequences of field extensions: $$\f_q \to F \to \overline{F} \quad \text{ or } \quad \q \to F \to \overline{F}.$$ \end{notn} The classes in the $F$-motivic Adams spectral sequence may be defined by comparison with the $\rho$-Bockstein spectral sequence in $SH(k)$ and $SH(L)$. \begin{defin} Let $F$ be any field as above. \begin{enumerate} \item Suppose $\chara(F) > 2$. We define $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8\sigma$ to be the classes in $\pi_{**}^{F}(S^{0,0})$ detected by $P^i h_1$, $P^i \tau h_1^2$, $P^i \tau h_1^3$, and $P^i h_0^3 h_3$, respectively, in the $F$-motivic Adams spectral sequence. \item Suppose $\chara(F) = 0$. We define $v^{4i}_1 \tau \eta^2$ and $v^{4i}_1 \tau \eta^3$ to be the classes in $\pi_{**}^F(S^{0,0})$ detected by $P^i \tau h_1^2$ and $P^i \tau h_1^3$, respectively, in the $F$-motivic Adams spectral sequence. \end{enumerate} \end{defin} \begin{thm}\label{Thm:FamiliesGeneral} Let $F$ be any field as above. \begin{enumerate} \item Suppose $\chara(F) > 2$. The classes $v^{4i}_1 \eta$, $v^{4i}_1 \tau \eta^2$, $v^{4i}_1 \tau \eta^3$, and $v^{4i}_1 8\sigma$ in $\pi_{**}^{F}(S^{0,0})$ are nonzero. \item Suppose $\chara(F) = 0$. The classes $v^{4i}_1 \tau \eta^2$ and $v^{4i}_1 \tau \eta^3$ in $\pi_{**}^{F}(S^{0,0})$ are nonzero.
\end{enumerate} \end{thm} \begin{proof} The classes are all permanent cycles by base-change along $k \to F$ and the results of Section \ref{Section:Prime}. They are nonzero by base-change along $F \to L$ along with the isomorphism $\pi_{**}^{L}(S^{0,0}) \cong \pi_{**}^\c(S^{0,0})$ from Theorem \ref{Thm:WO}. \end{proof} \begin{thm}\label{Thm:MotMI2i} Suppose that $\chara(F) > 2$. Then the $F$-motivic Mahowald invariant of $(2+\rho \eta)^i$ is given by $$ M^F((2+\rho\eta)^{4i+j}) \ni \begin{cases} v^{4i}_1 \eta \quad & \text{ if } j=1, \\ v^{4i}_1 \tau \eta^2 \quad & \text{ if } j=2,\\ v^{4i}_1 \tau \eta^3 \quad & \text{ if } j=3, \\ v^{4i}_1 8\sigma \quad & \text{ if } j=4. \end{cases} $$ Suppose that $\chara(F) = 0$. Then the $F$-motivic Mahowald invariant of $(2+\rho \eta)^i$ is given by $$ M^F((2 + \rho \eta)^{4i+j}) \ni \begin{cases} v^{4i}_1 \tau \eta^2 \quad & \text{ if } j=2,\\ v^{4i}_1 \tau \eta^3 \quad & \text{ if } j=3. \end{cases} $$ \end{thm} \begin{proof} Applying Lemma \ref{Lem:Squeeze} shows that $$|M^k(2^i)| \leq |M^F(2^i)| \leq |M^L(2^i)|.$$ Theorem \ref{Thm:InitialMotMI2i} identifies the left-hand side and Theorem \ref{Thm:WO} identifies the right-hand side with $|M^\c(2^i)|$ which was computed in \cite{Qui19a}. These sides agree; the theorem follows. \end{proof} \bibliographystyle{plain}
\section{Introduction} The detection of the Higgs boson is one of the most ambitious aims of the CERN large hadron collider (LHC). Precision measurements strongly suggest the presence of this excitation in the hundred GeV range, but its very existence seems to be in contrast with our understanding of naturalness. Any alternative electro-weak symmetry breaking (EWSB) sector has to explain this tension. If the forthcoming experiments detect a light CP-even scalar but no accompanying states -- the latter being too heavy or too broad to be directly observed -- it would turn out to be crucial to understand whether the light scalar \textit{is} the actual Higgs, namely the physical component of an electroweak doublet. In this letter we would like to address this issue using a phenomenological approach. A parametric separation between the mass of the light scalar and those of the heavy states is most naturally justified by a strongly coupled EWSB theory in which the scalar emerges as an approximate Goldstone boson of some broken global symmetries. We focus on two classes of models which are consistent with our assumptions. The first class is a deformation of the standard model~ Higgs sector, in which the CP-even scalar emerges as the fourth excitation of a light composite Higgs doublet. The physics of this strongly interacting light Higgs (SILH) has been studied in detail in~\cite{SILH}. The second class of models includes scenarios in which the strong EWSB sector is a nearly conformal field theory (CFT) that admits a light dilaton in the spectrum~\cite{GGS}. In this class the CP-even scalar emerges as an approximate Goldstone boson of the breaking of the conformal symmetry down to the Poincar\'e group, and it is not directly related to the EWSB sector. Appropriate field theories are expected to reproduce the invoked mechanism in a quantum mechanical context. However, a generic strong dynamics is not a realistic candidate.
A heuristic motivation is that in the latter case the conformal breaking is governed by the running of the gauge coupling; in the IR the coupling is strong and there is no small parameter suppressing the dilaton mass compared to the dynamically generated scale~\cite{RZ}. Lattice simulations confirm these conclusions. This picture changes if the small coupling is taken to be $1/N$, where $N$ is the number of fundamental constituents. Evidence in favor of this mainly comes from the gauge/gravity correspondence (see~\cite{Piai} for a recent example). The Randall-Sundrum model~\cite{RS}, as well as several other models for physics beyond the standard model~ belonging to the same universality class, are well known realizations of the light dilaton scenario. Supersymmetry then offers a simple way to relate the presence of a light $\eta'$ and the dilaton\footnote{I am indebted to Sergio Cecotti for an illuminating discussion of this point.}. Indeed, in a supersymmetric theory the scale current and the axial current are part of the same multiplet. If there exists a limit (the large N limit in a nonsupersymmetric theory!) in which the anomalous axial symmetry is exact, then in a non-chiral vacuum an approximate Goldstone boson (GB), the so called $\eta'$, would emerge along with a CP-even pseudo-GB partner, i.e. the dilaton. All of these considerations suggest that theories with many fundamental degrees of freedom may represent the natural framework for the realization of the light dilaton scenario. This would have implications in the study of the phenomenological signatures of this class of models. It is very important to appreciate that, in the case in which the operators responsible for EWSB and for the spontaneous breaking of the CFT coincide, the Higgs and the dilaton are the very same field. This is what happens at the classical level in the standard model~\cite{Ellis}, and what is expected to occur in well calibrated walking technicolor scenarios~\cite{Sannino}.
In this limit the SILH and the light dilaton effective theories overlap. This intrinsic ambiguity suggests that a phenomenological discrimination between the two scenarios is, in general, a highly non-trivial task. Nevertheless, we believe that the identification of features that most naturally characterize one theory as opposed to the other, and in particular a study of how natural the above limit is and what its phenomenological implications are, has physical relevance. This is one of the aims of the present study. \section{Phenomenology of a light scalar} A light and chargeless CP-even scalar can arise as the physical excitation of a Higgs doublet, but in principle it may also arise as a generic pseudo Goldstone boson of an appropriately engineered strong dynamics. In the latter case the scalar field would have no direct relation with the Higgs sector. In this paper we address the question of whether these two classes of scenarios can be disentangled under the hypothesis that the only new physics detected in the forthcoming experiments is the light CP-even scalar. To be definite we choose a representative of each class: from the first class we consider the strongly interacting light Higgs scenario (SILH), while from the second we consider the strongly interacting light dilaton (SILD) scenario. Following the notation introduced in~\cite{SILH}, we describe our models by means of three free parameters: $f$, $g_\rho$, and $m$. The first parameter is the energy scale $f$ at which the strong dynamics spontaneously breaks its approximate global symmetry, thereby leading to the appearance of a Goldstone doublet (SILH) or singlet (SILD). The broken symmetry can be either a global one, as in the case of the SILH scenario, or a space-time symmetry, as in the case of the dilaton scenario. The scale $f$ is basically a measure of the strength of the Goldstone boson interactions.
The second parameter, $g_\rho$, is the typical coupling of the massive resonances, which in particular is expected to enter the definition of the masses of the heavy composites as a proportionality factor, $m_\rho\propto g_\rho$. The statement that the new physics is strong compared to the standard model~ can be expressed via the relation $g_{SM}\ll g_\rho$, where $g_{SM}$ is a typical standard model~ coupling and $g_\rho=4\pi$ for a maximally strong dynamics. The third parameter is the mass of the pseudo Goldstone boson, $m$, which crucially depends on the explicit source of symmetry breaking. The rules of non-linear realizations are expected to hold up to corrections of order $m^2/f^2\ll1$. The physics of both the SILH and SILD can be captured by the effective lagrangian of a generic chargeless CP-even scalar $S$: \begin{eqnarray}\label{gen} {\cal L}_{eff} &=&\frac{1}{2}m_V^2V_\mu^2\left(1+2a_1\frac{S}{v}+a_2^2\frac{S^2}{v^2}\right)\\\nonumber &+&\bar \psi_i\psi_j \left[m_i\delta^{ij}\left(1+b\frac{S}{v}\right)+b^{ij}\frac{S}{v}\right]\\\nonumber &+&c\frac{g_{SM}^2}{16\pi^2}F_{\mu\nu}^2\frac{S}{v}+\dots \end{eqnarray} We allowed for the presence of a higher dimensional operator $SF_{\mu\nu}^2$, where $F_{\mu\nu}$ is the photon or gluon field strength, because no such coupling is present at leading order, and it may play a phenomenological role if the scale of new physics is not too high~\cite{MW}. The standard model~ Higgs boson is just a particular case of the theory~(\ref{gen}) with $a_1=a_2=b=1$ and $b_{ij}=c=0$. For completeness we mention that integrating out the top quark induces a coupling $c=O(1)$. \paragraph{The SILH:} In the SILH model the full Higgs doublet emerges as a Goldstone field, the massive composites being naturally at the scale $m_\rho= g_\rho f$. The explicit violation of the global symmetry induces the generation of a potential for the doublet and eventually implies EWSB.
In the broken electro-weak~ phase the Higgs boson is the only physical and light relic of the strong dynamics. A generic coupling of the SILH to the standard model~ vectors and fermions differs from that of the standard model~ Higgs by an $O(v^2/f^2)$ correction, and approaches the latter as the heavy states decouple, $v\ll f$. Although this regime appears unnatural, it turns out that a mild suppression $v<f$ is sufficient for fitting the electro-weak~ precision data. In this model the authors of~\cite{SILH} found that the physics of the light scalar can be effectively described by the lagrangian~(\ref{gen}) with: \begin{eqnarray}\label{SILH} a_1=1-c_H\frac{v^2}{2f^2};\quad a_2^2=1-2c_H\frac{v^2}{f^2};\quad b=1-(c_y+c_H/2)\frac{v^2}{f^2};\quad b_{ij}=0;\quad c=0, \end{eqnarray} where $c_{y,H}$ are $O(1)$ parameters defined in~\cite{SILH}. In the rest of the paper we will be mainly concerned with the dilaton physics. We refer the reader to the paper~\cite{SILH} for more details on the SILH scenario. \paragraph{The SILD:} In the SILD the only Goldstone mode emerging at a scale $f$ is the dilaton itself, while EWSB is triggered by a strong dynamics at a somewhat lower scale $v=250$ GeV. For practical purposes the standard model~ symmetry can be taken to be non-linearly realized below the characteristic mass scale of the strong Higgs sector, $m_\rho=g_\rho v$. Notice that the light scalar $S$ in this case is not the physical excitation of a Higgs doublet. The couplings of the dilaton to the standard model~ fields can be derived by applying the theory of phenomenological lagrangians. For completeness we now briefly review the main aspects of the procedure; for a detailed derivation see for instance~\cite{Salam}. The conformal symmetry is an extension of the Poincar\'e algebra which includes scale invariance as well as the so called special conformal transformations. By definition, a CFT is necessarily scale invariant, but the converse need not be true.
Let us first focus on the implications of scale invariance on the effective field theory. Given a local operator ${\cal O}(x)$ of scaling dimension $\Delta$, the transformation under dilatations $x^\mu\rightarrow e^\lambda x^\mu$, where $\lambda$ is an arbitrary constant, is given by ${\cal O}(x)\rightarrow e^{\lambda\Delta}{\cal O}(e^\lambda x)$. The theory is invariant under scaling if the lagrangian has dimension $\Delta=4$. For $\Delta\neq4$ the scale symmetry can be locally restored by introducing an appropriate GB field $\sigma$, the dilaton. This field transforms inhomogeneously under a dilatation, $\sigma\rightarrow\sigma+f\lambda$, and thus signals the spontaneous breakdown of scale invariance in the $\sigma=0$ vacuum. In complete analogy with the non-linear realization of internal symmetries, it is convenient to introduce a field transforming linearly under scale transformations: \begin{eqnarray}\label{dilaton} \chi(x)\equiv fe^{\sigma(x)/f}\rightarrow e^\lambda\chi(e^\lambda x) \end{eqnarray} For practical purposes we can identify $\bar\chi=\chi-f=\sigma+O(\sigma^2)$ as our dynamical field rather than $\sigma$ itself. Because $\sigma$ and $\bar\chi$ generate the same one-particle states, the S-matrices of the two fields do coincide. A dimension-4 operator can finally be obtained by replacing the dimension $\Delta$ operator ${\cal O}$ with ${\cal O}(\chi/f)^{4-\Delta}$. At the classical level this is basically what the standard model~ Higgs does~\cite{Ellis},\cite{GGS}.
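As a simple consistency check of the transformation law~(\ref{dilaton}), note that the canonical kinetic term for $\chi$ is scale invariant by itself: under $x^\mu\rightarrow e^\lambda x^\mu$ one has $\partial_\mu\chi(x)\rightarrow e^{2\lambda}\left(\partial_\mu\chi\right)(e^\lambda x)$, so that \begin{eqnarray} \int d^4x\,\frac{1}{2}\,\partial_\mu\chi\partial^\mu\chi\;\rightarrow\;\int d^4x\,\frac{e^{4\lambda}}{2}\left(\partial_\mu\chi\partial^\mu\chi\right)(e^\lambda x)=\int d^4x'\,\frac{1}{2}\left(\partial_\mu\chi\partial^\mu\chi\right)(x'), \end{eqnarray} after the change of variables $x'=e^\lambda x$. The same counting shows that any operator of total dimension 4 built out of $\chi$ and the matter fields yields a scale invariant action, which is precisely what the replacement ${\cal O}\rightarrow{\cal O}(\chi/f)^{4-\Delta}$ achieves.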
To appreciate this, let us derive the leading couplings of the dilaton to the spin-1 and spin-1/2 fields and to the Goldstone $SU(2)$ matrix $U$ of the standard model~ $SU(2)\times U(1)\rightarrow U(1)$ symmetry breaking pattern by assuming that the standard model~ fields have a classical scaling: \begin{eqnarray}\label{embed} {\cal L} &=&\frac{v^2}{4}Tr|D_\mu U|^2\left(\frac{\chi}{f}\right)^2+m_i\bar \psi_i U\psi_i\left(\frac{\chi}{f}\right)+\dots\\\nonumber &=& \frac{1}{2}m_V^2V_\mu^2\left(1+\frac{\bar\chi}{f}\right)^2+m_i\bar \psi_i \psi_i\left(1+\frac{\bar\chi}{f}\right)+\dots \end{eqnarray} The model~(\ref{embed}) corresponds to the case \begin{eqnarray}\label{SILD} a_1=a_2=\frac{v}{f};\quad b=\frac{v}{f};\quad b_{ij}=0 \end{eqnarray} and proves that, apart from a $v/f$ rescaling, the couplings of the dilaton are formally those of a fundamental Higgs. The $\Delta>4$ operators are either irrelevant for our purposes or severely constrained by electro-weak~ precision data\footnote{In this regard we should mention that a light dilaton is expected to alleviate the tension between the scale $m_\rho$ of new physics and the electro-weak~ precision measurements by screening the large corrections of the strong dynamics to the precision parameters. The effect is estimated to be small -- see the quantitative discussion in~\cite{Barbieri} -- so that the higher dimensional operators can be neglected in our discussion. Yet, it would be interesting to understand the relation between this screening effect and the slight improvement on the electro-weak~ $S$-parameter fit that a walking dynamics, as opposed to a generic strongly coupled theory, is expected to cause.}. An exception appears to be the coupling to the unbroken gauge bosons. The latter arises as an effect of the scale-dependence of the renormalized coupling $g(\mu)$, i.e. the so called scale anomaly.
The prescription for the non-linear realization of the scale symmetry illustrated above may be equivalently restated by saying that the renormalization scale $\mu$ must be compensated by an appropriate power of the dilaton field $\mu\rightarrow\mu\chi/f$. Hence, if we assume that the gauge symmetry of the standard model~ is part of the CFT, the coupling \begin{eqnarray} -\frac{1}{4g_{SM}^2}F_{\mu\nu}^2=+\frac{\beta_{SM}}{2g_{SM}}F_{\mu\nu}^2\frac{\bar\chi}{f}+\dots \end{eqnarray} should be added to~(\ref{embed}). (Notice that on the right hand side of the equation the vectors have been canonically normalized). This is equivalent to the introduction of a coefficient: \begin{eqnarray}\label{c} c=\frac{8\pi^2\beta_{SM}}{g_{SM}^3}\frac{v}{f} \end{eqnarray} in~(\ref{gen}). The phenomenology of~(\ref{SILD}) plus~(\ref{c}) has been studied by the authors of~\cite{GGS}. Because the coefficient~(\ref{c}) can potentially be larger than the top loop contribution, the anomalous term may well represent the most accessible signal of the SILD, especially in the ambiguous regime $v\sim f$~\cite{GGS}. If the standard model~ gauge symmetry represents an explicit breaking of the scale invariance, then the dilaton couplings to the unbroken gauge group are no longer predicted by the trace anomaly, and $c$ is no longer given by~(\ref{c}). The latter coupling is more generally mediated by mixing between composites and standard model~ vectors, and by the integration of charged heavy composites. The resulting operator has been included in~(\ref{gen}) with a model-dependent $c=O(v/f)$ factor. Such a contribution can in principle be sizable if the scale invariant theory has a large number of degrees of freedom at the scale $g_{\rho}f$, and may compete with the top loop irrespective of whether the gauge symmetry is or is not embedded into the broken CFT.
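To get a feeling for the size of~(\ref{c}), let us assume for definiteness the one-loop normalization $\beta_{SM}=-b_0\,g_{SM}^3/(16\pi^2)$; then \begin{eqnarray} c=\frac{8\pi^2\beta_{SM}}{g_{SM}^3}\frac{v}{f}=-\frac{b_0}{2}\frac{v}{f}, \end{eqnarray} so that for the gluonic operator, with $b_0=11-2n_f/3$ and $n_f=6$, one finds $|c|=(7/2)\,v/f$. This rough estimate illustrates how the anomalous term can compete with the $c=O(1)$ top loop contribution unless $v/f$ is small.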
In the next few sections we will also argue that the lagrangian~(\ref{embed}) is not accurate if the Higgs sector has non-negligible anomalous dimensions, as expected from a strongly coupled sector, or if the CFT flavor structure is non-trivial, as expected in a generic model addressing the standard model~ fermion hierarchy. In particular we will see that the relations $a_1^2=a_2^2$ and $b^{ij}=0$ are not characteristic features of the light dilaton scenario. We conclude this section by mentioning that a phenomenologically acceptable realization of both the SILH and SILD scenarios requires the existence of an explicit breaking of the global symmetries of the strong dynamics, i.e. the generation of a mass $m$ for the light scalar. The source of explicit breaking is typically the standard model~ in the SILH scenario~\cite{SILH}, while, as already stressed in the introduction, it may be the standard model~ as well as the strong dynamics itself in the SILD model. A deformation of the CFT generally implies corrections to the dilaton vacuum expectation value with respect to the symmetric solution. The true vacuum can thus differ from $f$, causing a shift in the dilaton couplings. The relation $f<v$ may in principle be allowed in this case, but the validity of our phenomenological approach would then come into question. We will comment more on the magnitude of the ratio $v/f$, and its physical meaning, in a following section. For the moment we emphasize that the phenomenology of the SILD approaches that of a SILH in the regime $v\sim f$, and that of an ordinary Higgsless theory for $v/f\ll1$. \paragraph{Implications of the special conformal transformations:} The additional constraints coming from the special conformal transformations affect the derivative couplings only.
Given a local operator ${\cal O}$ of weight $\Delta$ and transforming under the Lorentz group as an irreducible representation with infinitesimal generators $S_{\mu\nu}$, we can define a conformally invariant derivative as~\cite{Salam} \begin{eqnarray} \label{der}D_\mu{\cal O}=\left[\partial_\mu+\left(iS_{\mu\nu}-\Delta\eta_{\mu\nu}\right)\partial_\nu\log\chi \right]{\cal O}.\end{eqnarray} An important point to be noticed is that the non-linear realization of the full conformal group does not require the introduction of additional Goldstone bosons: the broken special conformal transformations do not generate new physical excitations from the vacuum. At leading order in an expansion in external momenta, the derivative interaction of the dilaton field required by~(\ref{der}) has physical impact only if ${\cal O}$ is a scalar field with non-vanishing vacuum expectation value (see section 5). Being mainly interested in the dilaton couplings to spin-$1$ and spin-$1/2$ fields, we conclude that the leading low energy effective field theory is not sensitive to whether the UV completion possesses a full conformal invariance or just a scaling symmetry. \paragraph{The radion:} The model~(\ref{SILD}) plus~(\ref{c}) has a dual interpretation in terms of the RS1 scenario~\cite{RS} with a heavy Higgs and the standard model~ fields placed on the IR brane. Yet, as soon as the SM fields move away from the IR brane the relations~(\ref{SILD}) and~(\ref{c}) no longer hold. The presence of the IR brane in the bulk AdS background can be interpreted as a spontaneous breaking of the conformal symmetry of the strongly coupled 4D dual theory, and the gauge/gravity interpretation of the dilaton is given in terms of the radion field~\cite{RZ},\cite{AHPR}.
The dynamical description of the radion field is encoded in the perturbed line element~\cite{CGR} \begin{eqnarray}\label{rad} ds^2&=&e^{-2ky+Qe^{2ky}}\eta_{\mu\nu}dx^\mu dx^\nu-\left(1-Qe^{2ky}\right)^2dy^2\\\nonumber &=&e^{-2kw}\eta_{\mu\nu}dx^\mu dx^\nu-dw^2+\dots \end{eqnarray} where $Q(x)$ represents the 4D field appearing in the effective field theory. The solution $Q=0$ is the conventional Randall-Sundrum background. In the second line of~(\ref{rad}) we made a change of variables to the new coordinate $w(y,x)$, defined by $2kw=2ky-Qe^{2ky}$. In terms of it the metric can be expressed in a compact form where the dynamical field $w$ determines the spacetime-dependent proper length of the extra dimension with classical vacuum $y$. At the quantum level the extra dimension can thus be thought of as a physical degree of freedom, the dilaton of the dual theory. Notice in fact that the model preserves a dilatation symmetry $x^\mu\rightarrow e^\lambda x^\mu$ at the perturbed level if the field $w$ transforms inhomogeneously, $kw\rightarrow \lambda +kw$, in complete analogy with $\sigma$ in~(\ref{dilaton}). The second line of~(\ref{rad}) has the same form as the naive ansatz for the radion profile proposed originally by Randall and Sundrum~\cite{RS}, up to $O\left(\partial_\mu w\right)$ terms having subleading impact on the 4D physics. This clarifies why the physics of the actual dynamical mode $Q$ agrees (up to derivative couplings and hierarchically suppressed corrections) with the results found for the naive ansatz (see for instance~\cite{RZ},\cite{GW},\cite{CGK},\cite{Peloso}, and references therein), although the latter is not a dynamical mode. See the Appendix for a discussion of this point.
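The identification of the two lines of~(\ref{rad}) can be verified explicitly: from $2kw=2ky-Qe^{2ky}$ one has $e^{-2kw}=e^{-2ky+Qe^{2ky}}$, so the 4D part of the metric matches exactly, while \begin{eqnarray} dw=\left(1-Qe^{2ky}\right)dy-\frac{e^{2ky}}{2k}\,\partial_\mu Q\,dx^\mu \end{eqnarray} shows that $dw^2$ reproduces $\left(1-Qe^{2ky}\right)^2dy^2$ up to the $O\left(\partial_\mu Q\right)$ terms left implicit in the second line of~(\ref{rad}).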
Thanks to the equivalence theorem, at external momenta $p^2\gg m_V^2$ the three Goldstone boson fields $\pi^a$ belonging to the unitary matrix $U=e^{i\pi^a\sigma^a/v}$, where $\sigma^a$ are the Pauli matrices, can be identified with the longitudinally polarized vector bosons $V_L^a$. Thus, we can directly probe the strong dynamics by focusing on the physics of the scalars $\pi^a$ and $S$, and rewriting the first line of~(\ref{gen}) as\footnote{This section has some overlap with the analysis presented in~\cite{contino} and~\cite{Grojean}.} \begin{eqnarray} \label{Spp} \frac{v^2}{4}Tr\left(\partial_\mu U^\dagger\partial^\mu U\right)\left(1+2a_1\frac{S}{v}+a_2^2\frac{S^2}{v^2}\right).\end{eqnarray} The lagrangian~(\ref{Spp}) should be supplemented with a potential for the scalar $S$; however, since we are mainly interested in the high energy behavior of the amplitudes, this term can be discarded. Typical values of the parameters $a_1$ and $a_2$ for the SILH are given up to $O(v^2/f^2)$ in~(\ref{SILH}), and for the SILD in~(\ref{SILD}), under the assumption of negligible anomalous dimensions. In both cases we see that the elastic scattering of longitudinally polarized vectors violates perturbative unitarity at the scale of compositeness of the Higgs sector: \begin{eqnarray}\label{pppp} {\cal A}(\pi^a\pi^a\rightarrow\pi^b\pi^b)= \frac{s}{v^2}(1-a_1^2),\end{eqnarray} with $s$ the Mandelstam center of mass energy and $a\neq b$. These processes can be effectively used to discriminate between the two scenarios only if we are able to probe the heavy resonances, at scales of order $g_\rho v$ and $g_\rho f$ for the SILD and SILH respectively. Even if the heavy states are not detected, this channel will provide crucial information regarding the strength of the Higgs sector; see~\cite{maina} for a recent study.
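A rough estimate of the scale at which~(\ref{pppp}) saturates unitarity follows from projecting the (angle-independent) amplitude on the s-wave, $a_0=\frac{s}{16\pi v^2}(1-a_1^2)$, and imposing $|a_0|\leq1/2$: \begin{eqnarray} \sqrt{s}\lesssim\sqrt{8\pi}\,\frac{v}{\sqrt{1-a_1^2}}, \end{eqnarray} which for the SILH values~(\ref{SILH}), $1-a_1^2\simeq c_Hv^2/f^2$, gives $\sqrt{s}\lesssim\sqrt{8\pi}\,f/\sqrt{c_H}$, i.e. parametrically the compositeness scale of the Higgs sector, as anticipated above.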
Another interesting channel is the elastic scattering $2S\rightarrow2\pi$~\cite{Rattazzi}, for which \begin{eqnarray}\label{SSpp} {\cal A}(SS\rightarrow\pi^a\pi^a)= \frac{s}{v^2}(a_1^2-a_2^2).\end{eqnarray} If we neglect the anomalous dimensions of the Higgs sector of the CFT, $a_1^2=a_2^2$, and the latter amplitude vanishes for $S$ the dilaton, as in the standard model Higgs scenario; the observation of high energy $2S\rightarrow2\pi$ events in this limit would be a characterizing feature of the SILH. More generally, if we allow for non-zero anomalous dimensions of the CFT Higgs sector (we will see in section 5 that this implies $a_1^2\neq a_2^2$), then~(\ref{SSpp}) would represent indisputable evidence in favor of the compositeness of the EWSB sector. If we were able to isolate the strong dynamics from the explicit symmetry breaking, though, the Goldstone boson sectors of the SILH and SILD scenarios would look radically different. On the one hand, the SILH scenario, like the fundamental Higgs of the standard model, possesses an $O(4)$ symmetry rotating the 4 scalars $\pi^a,S$. At scales $f^2\gg p^2\gg m^2$ the Goldstones $\pi$ effectively recombine with the SILH into the full order parameter, i.e. the Higgs doublet, and the $O(4)$ symmetry manifests itself in relations between the $S,\pi$ scattering amplitudes\footnote{The $O(v^2/f^2)$ corrections are next to leading order in the expansion adopted in~\cite{SILH}, but are already present at the 2-derivative level.} \begin{eqnarray}\label{O} {\cal A}(SS\rightarrow2\pi^a)={\cal A}(2\pi^a\rightarrow2\pi^b)\left(1+O\left(\frac{v^2}{f^2}\right)\right), \end{eqnarray} (compare~(\ref{SSpp}) with~(\ref{pppp}) by making use of~(\ref{SILH})), and in the suppression of $O(4)$ violating events such as $S\rightarrow2\pi$. On the other hand, no such symmetry is present in the SILD model or any model in which the light scalar $S$ is not part of the weak doublet. 
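As an illustration of~(\ref{O}), one can plug in the leading-order SILH values commonly quoted in the literature, $a_1=1-c_H\xi/2$ and $a_2^2=1-2c_H\xi$ with $\xi=v^2/f^2$ (the expressions~(\ref{SILH}) are not reproduced in this excerpt, so these values are an assumption of the sketch):

```latex
1-a_1^2 = c_H\,\xi + O(\xi^2), \qquad
a_1^2-a_2^2 = \left(1-c_H\xi\right)-\left(1-2c_H\xi\right)+O(\xi^2)
            = c_H\,\xi + O(\xi^2),
```

so that both~(\ref{pppp}) and~(\ref{SSpp}) grow as $c_H\,s/f^2$, and their ratio is $1+O(v^2/f^2)$ as stated in~(\ref{O}).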
In the latter case, relations of the type~(\ref{O}) are not observed, whereas events like $\chi\rightarrow2\pi$ are allowed by the strong dynamics and are therefore enhanced at high energies. These observations reveal that a discrimination between the two models may be feasible in the limit in which the corrections in~(\ref{O}) are sufficiently small. For example, if the SILH had, say, a $v^2/f^2$ of the order of a few percent, while the SILD had $v/f\sim1$ at the ten percent level, then the two scalars would look very much like the fundamental Higgs boson as far as the couplings to the standard model fields are concerned, but they would present radically different UV physics in their couplings to the Goldstone bosons. First, the SILH would satisfy the crucial relation~(\ref{O}), where the amplitudes are enhanced at high energy, ${\cal A}\sim s/f^2$, unlike those of a fundamental Higgs. This would lead to interesting observable consequences, like the sum rule, eq.~(4.19) of~\cite{SILH}. Second, observations of energy enhanced off-shell $O(4)$ violating processes $S\rightarrow 2V_L$ (in vector boson fusion as well as associated production events) would be a distinctive signature of the dilaton model. These processes may for example be tested as excesses in $t\bar t\rightarrow \chi\rightarrow V_LV_L'(\rightarrow 4l,ll\nu\nu)$ events at large invariant mass. The deviations in the hard process $t\bar t\rightarrow S\rightarrow V_LV_L$ are potentially significant, and reach $100\%$ at moderately high energy scales $\sqrt{s}>3m$ even for $O(10\%)$ departures from the standard model couplings. Given such a sensitivity in the hard process, the potential of these events for the extraction of the relevant Higgs couplings is remarkable. At the LHC, even after a relatively low luminosity $\sim30$ fb$^{-1}$, it would be possible to achieve accuracies as high as $5-10\%$, thus approaching the level at which such deviations would stand out as a $\sim2\sigma$ effect~\cite{Zepp}. 
Processes involving only the light scalar $S$ are generally more difficult to observe but are also useful to probe the strong dynamics. At low energies the scalar self-couplings are dominated by the potential interactions, and the extraction of the trilinear ($g_{SSS}$) and quartic ($g_{SSSS}$) couplings would provide useful information on the source of explicit breaking of the Goldstone symmetry~\cite{SILH},~\cite{GGS}. Large $O(1)$ deviations from the standard model Higgs trilinear coupling may in principle be observed at the LHC in a limited mass range and after an integrated luminosity of about $300$ fb$^{-1}$ is attained. An international linear collider (ILC) would certainly be better suited, and may be able to test the triple coupling with a much larger accuracy, of order $10{\%}$. At sufficiently large energies these measurements are, however, substantially altered by the strong dynamics. In the absence of an explicit breaking of the global invariance the scalar self-couplings are derivatively induced. The SILH self-couplings are governed in the high energy regime by the $O(4)$ invariance and, for instance: \begin{eqnarray}\label{SSS} {\cal A}_{SILH}(2S\rightarrow2S)= c_H\frac{s}{f^2}. \end{eqnarray} The SILD lagrangian is, up to $O(p^4)$, \begin{eqnarray}\label{self} \frac{(\partial_\mu\chi)^2}{2}+\frac{c_3}{(4\pi)^2}\frac{\partial_\mu\chi\partial_\nu\chi\partial^\mu\partial^\nu\chi} {\chi^3}+\frac{c_4}{(4\pi)^2}\frac{(\partial_\mu\chi\partial_\nu\chi)^2}{\chi^4}+\dots, \end{eqnarray} where for brevity we neglected operators that give vanishing contributions on shell. The coefficients $c_i$ can be estimated using naive dimensional analysis\footnote{As opposed to the chiral lagrangian, the interactions start at $O(p^4)$ -- although this property is a bit obscured in terms of the variable $\sigma=f\log\chi/f$. 
In a cutoff regularization scheme, we can estimate the coefficients by requiring for example that the operators $c_i$ ($i=3,4$) induce small corrections on the kinetic term (this is not a relevant correction on shell, but our conclusions are completely general). This condition requires $c_i<(4\pi/g_\rho)^i$, where we cut off the loop integrals at the physical scale $g_\rho f$. For $g_\rho\sim4\pi$ our estimate $c_i=O(1)$ agrees with the rules proposed in~\cite{SILH}.}, and are expected to be of order unity in a strong dynamics. Both $2\chi\rightarrow2\chi$ and the (``$O(4)$ violating'') process $2\chi\rightarrow\chi$ start at $O(p^4)$ and are suppressed in the regime of validity of our effective description. In both the SILH and the SILD, the strong dynamics modifies significantly the scalar vertices at energies of the order of $f$ or bigger. This observation is potentially relevant for the SILD physics at a linear collider, especially in the regime $v\sim f$, as the $O(p^4)$ terms would come into play at the scales ($500-1000$ GeV) at which the ILC operates. For instance, for $v/f$ deviating from unity at the $10\%$ level, a perturbative description of the $V_LV_L$ scattering may be delayed up to $\sim4$ TeV, whereas a strong $\chi$ self-coupling would enter around $1$ TeV. \section{Dilaton couplings to standard model fermions} Tree level flavor violations mediated by a composite Higgs are typically suppressed if the Higgs is a SILH~\cite{Contino}. We now show that this conclusion does not apply to many realistic realizations of the SILD scenario. The dilaton couplings to the standard model fermions are expected not to mediate flavor changing neutral currents (FCNC) only if the standard model fermions couple to CFT operators with flavor universal scaling dimensions. This is certainly realized if the standard model fermions all have the same scaling representation. 
If this is the case, the most general Yukawa coupling for the dilaton can be written after EWSB as \begin{eqnarray}\label{FC} F_{ij}v\,\bar\psi_i\psi_j\left(\frac{\chi}{f}\right)^{\Delta}=m_{ij}\bar\psi_i\psi_j\left(1+\Delta\frac{\bar\chi}{f}+\dots\right) \end{eqnarray} where $\Delta$ is an arbitrary number, $i,j$ are flavor indices, and $F_{ij}$ is a dimensionless scale invariant function, which we can identify with the Yukawa matrix. If the CFT operators do not have a flavor universal scaling, or if the CFT is broken, a non-scale invariant function of the dilaton would emerge in the low energy dynamics, $F_{ij}=F^{(0)}_{ij}+F^{(1)}_{ij}\bar\chi/f+\dots$, with the $F^{(a)}_{ij}$'s naturally of the same order. Equivalently, the dimension $\Delta$ in~(\ref{FC}) would become flavor-dependent and would generally lead to flavor violating events. It is instructive to show how these conclusions arise by looking at specific UV realizations. One can identify two distinct ways to generate the standard model flavor structure. The first is the one implemented in composite Higgs models and (extended) technicolor models, and contains the minimal flavor violation (MFV) class. This is based on the coupling \begin{eqnarray}\label{class1} {\cal L}_{1}=y_{ij,a}\bar\psi_i\psi_j{\cal O}_{a}, \end{eqnarray} where the CFT operators ${\cal O}_{a}$ have generally different scaling dimensions $\Delta_{a}$. 
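The expansion on the right-hand side of~(\ref{FC}) is simply the Taylor series of the power law around the vacuum, $\chi=f+\bar\chi$:

```latex
\left(\frac{\chi}{f}\right)^{\Delta}
=\left(1+\frac{\bar\chi}{f}\right)^{\Delta}
=1+\Delta\,\frac{\bar\chi}{f}
 +\frac{\Delta(\Delta-1)}{2}\,\frac{\bar\chi^2}{f^2}+O(\bar\chi^3),
```

with $m_{ij}=F_{ij}v$; a flavor-dependent $\Delta$ therefore feeds directly into a flavor-dependent linear dilaton coupling.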
After EWSB the low energy effective theory is obtained by integrating out the energetic fluctuations of the CFT and reads \begin{eqnarray}\label{1} {\cal L}_{1}&=&y_{ij,a}\,\bar\psi_i\psi_j \langle{\cal O}_{a}\rangle\left(\frac{\chi}{f}\right)^{\Delta_{a}}+\frac{y^2}{m_\rho^2}\bar\psi\psi\bar\psi\psi+\dots\\\nonumber &=&m_{ij}\bar\psi_i\psi_j\left(1+\Delta_{ij}\frac{\bar\chi}{f}+\dots\right)+\frac{y^2}{m_\rho^2}\bar\psi\psi\bar\psi\psi+\dots \end{eqnarray} The coefficient of the four fermion operator is a symbolic representation of the sum $y^2/m_\rho^2= y_{ij,a}D_{ab}y_{kl,b}$, where the $D$ matrix originates from the propagator of the operators ${\cal O}_{a}$. It is important to emphasize that the FCNC couplings are present even if the standard model fermions are part of the CFT, for example if $\Delta_a=4-\Delta_i-\Delta_j$. For a universal dimension $\Delta_{a}=\Delta$ we recover~(\ref{FC}), that is $\Delta_{ij}=\Delta$. The second class approaches the flavor problem by employing a seesaw type mechanism. This is the class usually implemented in the Randall-Sundrum scenarios with standard model fermions in the bulk~\cite{PG}, see~\cite{CP} for the dual interpretation. 
The bare lagrangian now reads: \begin{eqnarray}\label{class2} {\cal L}_{2}=\lambda^{ia}_L\psi^i_L{\cal O}^a_R+\lambda^{jb}_R\psi^j_R{\cal O}^b_L \end{eqnarray} and leads to an effective field theory of the form \begin{eqnarray}\label{2} {\cal L}_{2}&=&\lambda^{ia}_L\lambda^{jb}_R\langle D^{ab}\rangle\psi_L^i\psi_R^j\left(\frac{\chi}{f}\right)^{\Delta_R^a+\Delta_L^b-4}+ \frac{\lambda^4}{m_\rho^2}\bar\psi\psi\bar\psi\psi+\dots\\\nonumber &=&\lambda^{ia}_L\lambda^{jb}_R\langle D^{ab}\rangle\psi_L^i\psi_R^j \left(1+(\Delta_R^a+\Delta_L^b-4)\frac{\bar\chi}{f}+\dots\right)+ \frac{\lambda^4}{m_\rho^2}\bar\psi\psi\bar\psi\psi+\dots\\\nonumber &=&m_{ij}\bar\psi_i\psi_j\left(1+\Delta_{ij}\frac{\bar\chi}{f}+\dots\right)+ \frac{\lambda^4}{m_\rho^2}\bar\psi\psi\bar\psi\psi+\dots \end{eqnarray} The appearance of the power of the dilaton follows from the expansion of the operator $\int d^4y{\cal O}_R^a(x){\cal O}_L^b(y)$ in the process of CFT integration. Again, if the dimensions $\Delta_{L,R}$ did not depend on the family labels $a,b$, the SILD coupling would be flavor diagonal, $\Delta_{ij}=\Delta_R+\Delta_L-4$. For a derivation of eq.~(\ref{2}) in the Randall-Sundrum model see~\cite{Toharia}. If the CFT is flavor anarchic we expect $\Delta_{ij}$ in~(\ref{1}) and (\ref{2}) to be a generic matrix with no hierarchical relations between the components. We will assume this is the case in the following. The leading FCNC effects can thus be described by the following lagrangian: \begin{eqnarray}\label{FCNC} \bar \psi_i\psi_j \left[m_i\delta_{ij}\left(1+b\frac{\bar\chi}{f}\right)+\sqrt{m_im_j}b_{ij}\frac{\bar\chi}{f}\right]+C_{ijkl}\bar\psi_i\psi_j\bar\psi_k\psi_l, \end{eqnarray} where the factor $\sqrt{m_im_j}$ in front of the flavor violating term characterizes both classes of models illustrated above. The flavor mixing term $b_{ij}$ is proportional to the anomalous dimensions of the CFT operators, and it is expected to be an $O(1)$ parameter in a strongly coupled dynamics. 
This parameter is severely constrained by FCNC bounds. Tree level exchanges of the dilaton lead to four fermion interactions with coefficients $C\sim(mb^{ij}/f)^2/m_\chi^2$. The strongest bound for a generic left-right mixing and CP violating operator comes from $K\bar K$ mixing. The UTFit analysis~\cite{UTFit} reports the bound $Im(C)\lesssim(10^5$ TeV$)^{-2}$ (we are neglecting subleading logarithmic running effects), which translates into \begin{eqnarray}\label{b} |b_{ij}|\frac{v}{f}\lesssim10^{-2}\left(\frac{m_\chi}{100\, GeV}\right). \end{eqnarray} If $v<f$, the bound~(\ref{b}) may be saturated for $b_{ij}=O(1)$ and a light dilaton. In this case the dominant decay channel is $\chi\rightarrow b\bar b$, and flavor violating processes may have relatively large branching ratios, $BR(\chi\rightarrow b\bar s)\sim5\%$. In the regime $v\sim f$, the $b_{ij}$ must instead be negligible compared to the flavor diagonal couplings. More generally, the flavor violating coupling becomes phenomenologically relevant at relatively large dilaton masses. In the latter case, however, the dominant decay mode is expected to be into massive vectors, as for a fundamental Higgs, and branching ratios into fermions become much less accessible. One can estimate flavor violating events to be down by an order $BR(\chi\rightarrow t\bar c)\sim10^{-4}$ in this limit. Production channels like $t\rightarrow\chi c$ may still play a role~\cite{Toharia}. The four fermion contact terms in~(\ref{FCNC}) have coefficients of the order $C_{ijkl}\sim \sqrt{y_iy_jy_ky_l}/m_\rho^2$, where $m_i\sim y_iv$ is a standard model fermion mass. The phenomenological bounds on the latter can be obtained from the bounds on the dilaton couplings by noting the correspondence $m_\rho\sim fm_\chi/(|b_{ij}|v)>10$ TeV. In a generic theory with heavy composites around the scale $4\pi v$, this bound cannot be satisfied and one should find a way to push the flavor violating effects to higher scales. 
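The translation of the UTfit limit into~(\ref{b}) is a one-line estimate; the sketch below reproduces it numerically, with illustrative light-quark masses (our inputs, not taken from the text) and the identification $m\rightarrow\sqrt{m_sm_d}$ for the $K\bar K$ operator.

```python
# Order-of-magnitude check of eq. (b): the UTfit limit Im(C) < (1e5 TeV)^-2
# on the Delta S = 2 operator, with the tree-level dilaton exchange
# C ~ (sqrt(m_s m_d) b_sd / f)^2 / m_chi^2, bounds |b_sd| v / f.
# Quark masses below are illustrative values, not taken from the text.
import math

m_s, m_d = 0.095, 0.0047     # strange/down masses in GeV (assumed)
v = 246.0                     # electroweak scale in GeV
m_chi = 100.0                 # reference dilaton mass in GeV
C_max = (1e5 * 1e3) ** -2     # (1e5 TeV)^-2 expressed in GeV^-2

# (sqrt(m_s m_d) b / f)^2 / m_chi^2 < C_max   =>   b * v / f < ...
bound = math.sqrt(C_max) * m_chi * v / math.sqrt(m_s * m_d)
print(bound)   # ~1e-2, matching |b_ij| v/f < 1e-2 (m_chi / 100 GeV)
```

Since the bound scales linearly with $m_\chi$, the factor $(m_\chi/100\,{\rm GeV})$ in~(\ref{b}) is reproduced as well.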
This can be done by adjusting the theory in such a way that the only physics at the scale $g_\rho v\sim4\pi v$ is the strong Higgs sector, for which FCNC effects are negligible. If our model can account for a splitting between the CFT breaking scale and the electroweak vacuum, the 4-fermion coupling $C_{ijkl}$ of~(\ref{FCNC}) would be suppressed by a high scale $m_\rho=g_\rho f$ and the constraint~(\ref{b}) would translate into $f\sim$ TeV for a maximally strong dynamics ($g_\rho\sim 4\pi$). To conclude, models of type~(\ref{class1}) are better suited to embedding MFV, while those of type~(\ref{class2}) seem more appropriate to explain the fermion hierarchy. From the above qualitative estimates we see that $v/f\lesssim$ few$\times0.1$ in generic models, while $v\sim f$ is compatible with FCNC observations only if MFV is at work, in which case $b_{ij}=0$ by definition. \section{Higgs-dilaton mixing} As long as scale invariance is not explicitly broken, a hierarchical separation $v\ll f$ is unnatural in a CFT. Under this assumption, the dilaton is an exact Nambu-Goldstone boson and parameterizes a flat direction. Its effective lagrangian, obtained after integrating out all the other fields, contains no potential (see eq.~(\ref{self})): the only CFT invariant candidate, $V=a\chi^4$, would imply an unbroken vacuum $\langle\chi\rangle=0$, which would contradict our original hypothesis $\langle\chi\rangle=f$. The integration of light particles at the scale $v$ must therefore exactly compensate the contribution of the heavy composites at the scale $f$. Because by dimensional analysis the former give $a_{light}\sim (v/f)^4$, the smaller the ratio $v/f$, the larger the fine-tuning required to accommodate a vanishing potential. This explains why, if the full standard model is embedded into a spontaneously broken CFT, the natural cut-off scale for new physics cannot be far from the weak scale. 
The same conclusion is found in the context of the Randall-Sundrum scenario if the standard model is placed on the IR brane, where the cut-off suppressing higher dimensional operators is naturally at the weak scale irrespective of the mass of the Kaluza-Klein modes. If the weak scale is somewhat smaller than $f$, and if the CFT is not badly broken, it makes sense to consider an effective theory for the strong EWSB sector of the SILD scenario and the dilaton itself. We focus on a simplified theory in which the Higgs sector is described by an interpolating Higgs doublet, and for simplicity we further assume that the anomalous dimensions can be algebraically summed\footnote{This can be made rigorous for appropriate local operators at leading $1/N$ order and for a certain class of SUSY theories with R-symmetry.}. Our results are completely general, though. Under our simplifying assumptions the most general action for the strongly coupled Higgs is constructed out of the conformally covariant derivative~(\ref{der}): \begin{eqnarray} \left(D_\mu-\Delta\frac{\partial_\mu\chi}{\chi}\right)H, \end{eqnarray} where $D_\mu=\partial_\mu+iA_\mu$ is the usual gauge covariant derivative, and the potential \begin{eqnarray} V(\chi,H)=\chi^4\hat V\left(\frac{H^\dagger H}{\chi^{2\Delta}}\right). \end{eqnarray} Both kinetic and potential terms induce a mixing between the Higgs and the dilaton after EWSB. For the case of a dimension $\Delta=1$ Higgs, the kinetic mixing coincides with the one induced in warped models by the non-minimal gravitational coupling $\xi RH^\dagger H$ with $\xi=1/6$~\cite{GRW}, see~\cite{CGK} for a detailed analysis. A deviation from the conformal factor $\xi=1/6$ accounts for the breaking of special conformal symmetries, and it amounts to including an additional $(1-6\xi) H^\dagger D_\mu H\partial^\mu\chi/\chi+h.c.$ operator in our 4D language. In the conformal limit, the system can be easily diagonalized by defining the scale-invariant field $\hat H=H(f/\chi)^\Delta$. 
The mass eigenstates are then the physical component of the scale-invariant field, $\hat H^t=(0,v+\hat h)$, and the dilaton $\bar\chi=\chi-f$. Notice that, in order to be compatible with the background solution $\chi=f$, $|H|^2=v^2$, the potential must satisfy $\hat V=\hat V'=0$ on the vacuum. These conditions automatically ensure that no mass term is generated for the dilaton. The leading coupling between the dilaton and the physical Higgs $\hat h$ is dictated by scale invariance: $$ \frac{m_\rho^2}{2}\hat h^2\left(1+4\frac{\bar\chi}{f}+\dots\right), $$ where $m_\rho^2=\hat V''v^2$. The mass of the physical composite Higgs $\hat h$ is of order $m_\rho\lesssim4\pi v$ in a strong dynamics and its fluctuations may be neglected at scales $E<m_\rho$. Because of the $H-\chi$ mixing, all the couplings of the gauge eigenstate $H$ induce vertices for the SILD. In terms of the mass eigenstates, the electromagnetically neutral component of the doublet $H$ reads: \begin{eqnarray}\label{h-d} h=(\hat h+v)\left(1+\Delta \frac{\bar\chi}{f}+\frac{1}{2}\Delta(\Delta-1)\frac{\bar\chi^2}{f^2}+O(\bar\chi^3)\right). \end{eqnarray} If we integrate out the heavy state $\hat h$, we see that the dilaton couples as a Higgs up to a universal $\Delta v/f$ rescaling and modulo universality violations induced by the anomalous dimension $\Delta-1$ of the strong Higgs. In particular, we see that the relation $a_1^2=a_2^2$ in~(\ref{gen}) characterizes the SILD scenario in the limit of small anomalous dimensions for the Higgs sector. In principle, one may expect a large dimension $\Delta$ to compensate the suppression $v/f$ so that $\Delta v/f\sim1$; however, the dimension of the Higgs cannot be arbitrarily large if we require our model to be self-consistent. For example, assuming that the top Yukawa coupling remains perturbative up to the naive dimensional analysis scale $m_\rho\sim4\pi v$ requires $\Delta<2$~\cite{CTC}. 
Once sources of explicit symmetry breaking of the CFT are taken into account, a non-zero dilaton mass $m$ is generated and the decoupling procedure described above is no longer valid. As long as $m^2\ll m_\rho^2$ holds, we expect our results to be accurate. \paragraph{The dilaton as the Higgs:} The quantity $v/f$ is an actual measure of the Higgs contribution to the spontaneous CFT breaking, and parameterizes the amount of $h-\bar\chi$ mixing. This is explicit in~(\ref{h-d}) and it can be equivalently derived as a general consequence of the algebra of the generators, see eq.~(19) in~\cite{Fan}. For $v\sim f$ the dilaton-Higgs mixing is maximal, and in the extreme scenario $v=f$ the Higgs itself is the dilaton. The scenario in which the Higgs and the dilaton coincide naturally accounts for a parametric separation between the Higgs mass and the naive dimensional analysis scale $4\pi v$. However, both electroweak precision parameters and flavor measurements are potentially crucial tests for this class. It is therefore of some interest to study the viability of this scenario by considering explicit and computable models in which this feature is realized. A tractable candidate may be found by considering the Randall-Sundrum scenario. In principle, in order to realize our program we should be able to control the physics responsible for the generation of the IR brane. This physics is related to operators with large dimensions -- $\Delta=O(N)$ -- which are dual to heavy stringy modes that we cannot control on the 5D side~\cite{RZ}. Nevertheless, consistently with our leading $1/N$ approximation, we find that warped higgsless theories~\cite{higgsless} behave as models in which the Higgs field can be identified with the dilaton. 
To see this, we first introduce the conformal coordinate $z=\log y$, in terms of which the AdS geometry can be written as \begin{eqnarray}\label{AdS} ds^2=\frac{1}{z^2}\left(\eta_{\mu\nu}dx^\mu dx^\nu-dz^2\right), \end{eqnarray} where the curvature has been normalized to 1 for simplicity. In higgsless models the Higgs sector can be idealized as a 5D field $\phi$ infinitely peaked on the IR brane, which we assume to be placed at some finite $z=z_{IR}$. Let us see how this translates into the dual strong dynamics. Any bulk field $\phi$ on the gravity side is believed to be dual to some operator ${\cal O}$ on the gauge side. The 4D Fourier transform of a scalar field $\phi$ of 5D mass $m^2>0$ on the AdS geometry~(\ref{AdS}) is given by $\phi(p,z)=c(p)z^2(J_\nu(pz)+\beta_p Y_\nu(pz))$, where $\nu^2=4+m^2$ and $\beta_p$ is determined by the boundary conditions. In the UV region $z\rightarrow0$ we have $\phi\rightarrow Az^{\Delta}+\varphi_0z^{4-\Delta}$, where $\Delta=2+\nu$ represents the scaling dimension of the dual scalar operator ${\cal O}$ and corresponds to the positive root of $m^2=\Delta(\Delta-4)$. The AdS/CFT correspondence instructs us to identify the parameters $A$ and $\varphi_0$ as the vacuum expectation value and the source of the operator ${\cal O}$, respectively~\cite{Klebanov}. By switching off the source $\varphi_0$ and taking the limit of very large dimension $\Delta$ (i.e. large 5D mass, $m^2\rightarrow\infty$) we find $\phi(p,z)=c(p)z^2J_\nu(pz)\rightarrow Az^{\nu+2}=A(p)z^\Delta$. Using the identification introduced in~\cite{Klebanov} we finally write: \begin{eqnarray} \phi=\frac{\langle{\cal O}\rangle}{2\Delta-4}z^\Delta. \end{eqnarray} For a translationally invariant vacuum, no propagating mode exists in the $m\rightarrow\infty$ limit and the bulk field reduces to a non-dynamical background. 
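As a check of the quoted identifications (a standard step, filled in here for convenience): for a $p$-independent profile $\phi=z^a$, the scalar equation of motion in the background~(\ref{AdS}) reduces to

```latex
\phi''-\frac{3}{z}\,\phi'-\frac{m^2}{z^2}\,\phi=0
\;\Rightarrow\;
a(a-1)-3a-m^2=0
\;\Rightarrow\;
a=2\pm\sqrt{4+m^2}=2\pm\nu\,,
```

so the two independent behaviors are exactly $z^\Delta$ and $z^{4-\Delta}$ with $\Delta=2+\nu$, and $\Delta(\Delta-4)=\nu^2-4=m^2$ as stated.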
Now, if the operator ${\cal O}$ acquires a vacuum expectation value $\langle{\cal O}\rangle=cf^\Delta$, then the corresponding 5D profile in the limit $\Delta\rightarrow\infty$ can be approximated by a step function: \begin{eqnarray} \phi\,\rightarrow\; \left\{ \begin{array}{ccc} 0\quad\;\,\texttt{for}\;\, zf<1, \\ \infty\quad\texttt{for}\;\,zf>1. \end{array}\right. \end{eqnarray} This is exactly the physics at the origin of the IR brane in the RS model, provided we identify $z_{IR}=1/f$~\cite{RZ}. The argument can be generalized to the case in which a number of operators ${\cal O}_i$ contribute to the spontaneous breaking of the CFT by an amount $\langle{\cal O}_i\rangle=c_if^{\Delta_i}$. As long as their dimensions are very large, $\Delta_i=O(N)$, the excitations of these operators are not captured by our leading $1/N$ approximation, and the information contained in the coefficients $c_i$ is lost. In this limit the effective theory is equivalent to having a single operator ${\cal O}_1$ with vacuum $f$. Any field $\phi$ \textit{exactly} localized on the IR brane can be interpreted as the dual description of ${\cal O}_1$. In particular, without loss of generality, we can identify the Higgs as our candidate and conclude that the above model is equivalent to a higgsless scenario of EWSB with $v=f$, as anticipated. This model is characterized by standard model field masses of order $g_{SM}f$ and CFT composites at the scale $g_\rho f$. The well known tension between perturbativity of the strong sector and the fit to precision data -- in particular the flavor constraints outlined in the previous section and the electroweak precision parameters -- is manifest. The question of whether subleading corrections in the $1/N$ expansion can alleviate this tension while keeping a relatively light dilaton remains open. 
\section{Discussion and conclusions} If the forthcoming experiments discover a light and chargeless CP-even scalar but no additional heavy states, it would be a priority to understand whether the detected spin-0 particle is the physical excitation of a light Higgs doublet, be it composite or fundamental, or not. In order to answer this question we focused on two scenarios of strong EWSB. The first includes the fundamental Higgs doublet model and is known as the SILH class~\cite{SILH}. The second scenario describes a broken CFT with an emerging light dilaton~\cite{GGS}, the strongly-interacting light dilaton (SILD) class. The phenomenology of the latter has been extensively studied in the context of the Randall-Sundrum scenario, where it is generally accompanied by a light composite Higgs (for a recent study see~\cite{csaki}). We find it useful to attack the problem from the broader perspective of 4D effective field theories. Our primary aim was the identification of model-independent signals characterizing the dilaton with respect to the SILH scenario; questions regarding the compatibility of the models with the electroweak precision parameters (in particular $\hat S$) were not considered. We used a phenomenological approach and characterized the models by using three parameters: the scale at which the (approximate) symmetry of the strong dynamics is spontaneously broken ($f$), the strength of the interactions among the resonances ($g_\rho$), and the mass of the light scalar ($m$). An explicit breaking of the global symmetries of the strong sector is necessary to generate the mass $m$. The physically relevant aspect is how much the standard model is involved in the breaking. 
Given a competitive low energy phenomenology for the light scalars (in particular, similar couplings to the standard model fermions), the two scenarios are in principle distinguishable at high energies, as the heavy states are expected at parametrically different scales, $g_{\rho}v\lesssim4\pi v$ for the SILD and $g_{\rho}f\lesssim4\pi f$ for the SILH. In the low energy regime, however, a discrimination between a dilaton and a Higgs field, either fundamental or composite, is possible in specific regions of the parameter space only as the result of combined fits and sufficiently high statistics. A detailed estimate of the actual experimental significance of our conclusions is a model-dependent issue and has not been addressed here. At the LHC, the measurement of rates times branching ratios ($\sigma\times BR$) in many channels is expected to provide useful pieces of information regarding the couplings of the light scalar for a sufficiently high integrated luminosity. Under some reasonable assumptions on the new physics model\footnote{One of the assumptions that characterize the analysis of~\cite{Zepp} and~\cite{LPRZD} is that the couplings of the light scalar to two $W$'s or $Z$'s are bounded from above by those of the standard model Higgs. These assumptions are satisfied in both the dilaton scenario and the SILH (see~\cite{Low}), as far as our perturbative description holds.} and for an intermediate scalar mass $m$, it should be possible to probe relative deviations $\delta$ from the standard model Higgs couplings in the range $|\delta|>10\%$, see for instance~\cite{Zepp} and~\cite{LPRZD}. With these accuracies, if the SILD has an $f$ within $10\%$ or less of $v$, the LHC will not be able to tell the difference between the two scenarios~\cite{GGS}. A more promising environment is thus provided by a linear collider like the ILC, at which a precision at the percent level in the extraction of $\delta$ from $\sigma\times BR$ may be achieved. 
Strong deviations from the SILH physics are likely to be observed in the dilaton couplings to gluons or photons, $\chi\rightarrow2g,2\gamma$, mediated by the conformal anomaly~\cite{GRW,GGS}. If the standard model gauge symmetry explicitly breaks the CFT, the situation is a bit more subtle, because the coefficients entering such processes are very much model-dependent. Nevertheless, we emphasize that observable departures in the production/decay into massless vectors are expected to be relevant in theories with a large number of fundamental constituents, which is quite a natural framework for obtaining a light dilaton. A characteristic feature of the SILD -- or any model in which the scalar is not part of a weak doublet -- would be the observation of energy enhanced $O(4)$-violating processes in $V_LV_L\rightarrow\chi$ events (vector boson fusion as well as associated production) at high virtuality, for instance in the promising channel $gg\rightarrow\chi\rightarrow V_LV_L'\rightarrow4l,l^\pm\nu\nu$. To reach sufficiently high energies, $p^2\gg m^2$, a hadron collider seems more adequate, but the precision may not be sufficient due to QCD uncertainties and the large background. We found that the dilaton generally mediates tree level flavor violating processes in the effective theory. In contrast, a SILH does not if the alignment mechanism described in~\cite{Contino} applies. The flavor violating parameters are directly related to the flavor non-universality of the CFT operators that couple to the standard model fermions, and are present irrespective of whether the standard model is or is not embedded into the CFT. Potentially testable branching ratios for flavor violating processes like $\chi\rightarrow b\bar s$ are natural features of a SILD. Other events characterizing the dilaton model and involving standard model fermions were considered in~\cite{Fan}. 
The CFT necessarily introduces four fermion contact interactions in the low energy effective theory as well, and the current bounds on flavor violation can be translated into a bound on the ratio $v/f$. Except for somewhat ad hoc scenarios in which the minimal flavor violation paradigm is at work, for which no real explanation of the standard model fermion hierarchy is given, we found that $v/f\lesssim$ few$\times0.1$ is a realistic estimate. Additional phenomenological bounds on this ratio come from LEP, and apply to a light dilaton, $m<110$ GeV. These are discussed in~\cite{GGS}. In the preferable case $v<f$, a somewhat lighter Higgs sector compared to the scale $m_\rho=g_\rho f$ is required. We analyzed the implications of this assumption on the Higgs-dilaton mixing and studied the decoupling of the physical excitations in the limit in which the explicit CFT breaking source is negligible. It is intriguing to further consider the possibility of a large Higgs-dilaton mixing, $v\sim f$, especially in light of a possible realization in a walking dynamics context~\cite{Sannino}. We showed that the existing warped higgsless models of EWSB~\cite{higgsless} are tractable incarnations of this idea. In summary, if no significant deviations in the magnitude of the standard model couplings to the light scalar (especially in the gauge sector) are observed, no $O(4)$-violating process is detected, and no tree level FCNC exchange is measured, then it would be fair to say that the observed scalar is a (composite) Higgs. \acknowledgments It is a pleasure to acknowledge Christophe Grojean and Andreas Weiler for stimulating discussions, Riccardo Rattazzi for bringing this topic to my attention, and Roberto Contino and Michael Graesser for comments on the manuscript. This work has been partially supported by the Italian INFN and MIUR under the program ``Fundamental Constituents of the Universe'' and the EU Network ``UniverseNet'' (MRTN-CT-2006-035863), and by the U.S. 
Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396.
\section{Introduction} Consider a random source which evolves on a finite set. It follows from existing literature, see for example \cite{Shannon} and \cite{GallagerInformationTheory} (Pages 491-500, in particular, see Definition (9.8.3) and Theorem 9.8.3 for achievability), that the limit of the normalized rate-distortion functions of block-independent approximations of a stationary, ergodic source is equal to the rate-distortion function of the source. Specializing this theorem to irreducible, aperiodic Markoff chains, it follows that the limit of rate-distortion functions of block-independent approximations of an irreducible, aperiodic Markoff chain which starts in the stationary distribution is equal to the rate-distortion function of this Markoff chain. It is known that the rate-distortion function of an irreducible, aperiodic Markoff chain is independent of its initial distribution (this follows from \cite{Gray}). In this paper, it will be proved that the limit of the normalized rate-distortion functions of block-independent approximations of an irreducible, aperiodic Markoff chain is independent of its initial distribution. It follows, then, that the rate-distortion function of an irreducible, aperiodic Markoff chain and the limit of the normalized rate-distortion functions of its block-independent approximations are equal, and that these functions are independent of the initial distribution of the Markoff chain. Literature on rate-distortion theory is vast. The seminal works are \cite{Shannon} and \cite{ShannonReliable}. An early work on rate-distortion theory for random processes is \cite{Kolmogorov}. Much of the classical point-to-point literature on rate-distortion theory is subsumed under the books \cite{GallagerInformationTheory} and \cite{Gray}. Another reference is \cite{Berger}. The reader is referred to these three books and references therein for the literature on rate-distortion theory. 
In particular, the reader is referred to \cite{Gray} because non-stationary sources are dealt with in great detail in this book, and the concern here is with a non-stationary process, albeit a non-stationary Markoff chain. For background on Markoff chains, the reader is referred to \cite{Gnedenko}, \cite{Shiryaev}, and \cite{Feller1}. \section{Notation and definitions} \label{NotAndDef} $\mathbb X$ and $\mathbb Y$ denote the source input and source reproduction spaces respectively. Both are assumed to be finite sets. Assume that $\mathbb X = \mathbb Y$. Assume that the cardinality of $\mathbb X$ is greater than or equal to $2$. $d: \mathbb X \times \mathbb Y \rightarrow [0, \infty )$ is the single-letter distortion measure. Assume that $d(x,x) = 0 \ \forall x \in \mathbb X$ and that $d(x,y) > 0$ if $x \neq y$. Denote \begin{align} D_{\max} \triangleq \max_{x \in \mathbb X, y \in \mathbb Y}d(x,y), \quad D_{\min} \triangleq \min _{\{x \in \mathbb X, y \in \mathbb Y \ | \ d(x,y) > 0\}} d(x,y) \end{align} In what follows, the distortion levels will be assumed to be strictly greater than $0$. For $x^n \in \mathbb X^n, y^n \in \mathbb Y^n$, the $n$-letter distortion measure is defined additively: \begin{align} d^n(x^n, y^n) \triangleq \sum_{i=1}^n d(x^n(i), y^n(i)) \end{align} where $x^n(i)$ denotes the $i^{th}$ component of $x^n$, and likewise for $y^n$. Let $X_1, X_2, \ldots$ be a Markoff chain with transition probability matrix $P$, where each $X_i$ is a random variable on $\mathbb X$. For $x, x' \in \mathbb X$, $p_{xx'}$ denotes the probability that the Markoff chain is in state $x'$ at time $t+1$ given that it is in state $x$ at time $t$; $p_{xx'}$ is independent of $t$. Assume that the Markoff chain is irreducible and aperiodic. This implies that it has a stationary distribution, henceforth denoted by $\pi$, which will be reserved exclusively for the stationary distribution. In order to specify the Markoff chain completely, we need to specify its initial distribution. 
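As a concrete illustration (not part of the paper's formal development), the stationary distribution $\pi$ of an irreducible, aperiodic chain can be computed numerically, and the convergence of $\pi' P^\tau$ to $\pi$ for an arbitrary initial distribution $\pi'$, which is used repeatedly below, checked directly. The transition matrix here is an arbitrary example, not taken from the paper.

```python
import numpy as np

# Illustrative transition matrix P of an irreducible, aperiodic Markoff
# chain on a 3-letter alphabet (values chosen arbitrarily for this sketch).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1:
# it is the left eigenvector of P for eigenvalue 1, normalised.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

# Irreducibility and aperiodicity imply pi' P^tau -> pi for any initial
# distribution pi'.
pi_prime = np.array([1.0, 0.0, 0.0])
pi_tau = pi_prime @ np.linalg.matrix_power(P, 50)
assert np.allclose(pi_tau, pi, atol=1e-8)
```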
If $X_1 \sim \pi'$, denote the Markoff chain $(X_1, X_2, \ldots)$ by $X_{[\pi', P]}$. Recall that $P$ is the transition probability matrix of the Markoff chain. $X_{[\pi',P]}$ will be called the Markoff $X_{[\pi',P]}$ chain. $X^n_{[\pi', P]}$ will denote $(X_1, X_2, \ldots, X_n)$. The above-mentioned assumptions, that $\mathbb X = \mathbb Y$, that $d(x,x) = 0$ and $d(x,y)> 0$ if $x \neq y$, that the distortion levels are strictly greater than zero, and that the Markoff chain is irreducible and aperiodic, will be made throughout this paper and will not be re-stated. A rate $R$ source-code is a sequence $<e^n, f^n>_1^\infty$, where $e^n: \mathbb X^n \rightarrow \{1, 2, \ldots, 2^{\lfloor nR \rfloor} \}$ and $f^n: \{1, 2, \ldots, 2^{\lfloor nR \rfloor} \} \rightarrow \mathbb Y^n$. We say that rate $R$ is achievable for source-coding the Markoff $X_{[\pi', P]}$ source within distortion-level $D$ under the expected distortion criterion if there exists a rate $R$ source code $<e^n, f^n>_1^\infty$ such that \begin{align} \limsup_{n \to \infty} E \left [ \frac{1}{n} d^n(X^n_{[\pi', P]}, f^n(e^n(X^n_{[\pi', P]}))) \right ] \leq D \end{align} The infimum of all achievable rates is the rate-distortion function $R^E_{X_{[\pi', P]}}(D)$. The block-independent approximation (henceforth shortened to BIA) $X^T_{[\pi',P]}$ source is a sequence of random vectors $(S_1, S_2, \ldots, S_n, \ldots)$, where the $S_i$ are independent and, $\forall i$, $S_i \sim X^T_{[\pi',P]}$. To simplify notation, we will sometimes denote $(S_1, S_2, \ldots)$ by $S$. $S^n$ will denote $(S_1, S_2, \ldots, S_n)$. Note that the BIA $X^T_{[\pi',P]}$ source is an i.i.d. vector source and will also be called the vector i.i.d. $X^T_{[\pi',P]}$ source. Since the BIA $X^T_{[\pi',P]}$ source is an i.i.d. vector source, the rate-distortion function for it is defined in exactly the same way as for an i.i.d. source. 
The details are as follows: The source input space for the BIA $X^T_{[\pi',P]}$ source is $\mathbb X^T$ and the source reproduction space is $\mathbb Y^T$. Denote these by $\mathbb S$ and $\mathbb T$ respectively. A generic point in $\mathbb S$ is a $T$-length sequence $s$. The $i^{th}$ component of $s$ is denoted by $s(i)$. A generic point in $\mathbb T$ is a $T$-length sequence $t$. The $i^{th}$ component of $t$ is denoted by $t(i)$. The single-letter distortion measure is denoted by $d'$ and is defined as $d'(s,t) \triangleq \sum_{j=1}^T d(s(j),t(j))$. For $s^n \in \mathbb S^n$, $t^n \in \mathbb T^n$, the $n$-letter distortion measure $d'^n$ is defined additively: $d'^n(s^n, t^n) \triangleq \sum_{i=1}^n d'(s^n(i),t^n(i))$. Note that $s$ can be thought of as either a scalar in $\mathbb S$ or a $T$-dimensional vector in $\mathbb X^T$. With this identification, $d' = d^T$ and $d'^n$ can be thought of as $d^{nT}$. A rate $R$ source code is a sequence $<e^n, f^n>_1^\infty$, where $e^n: \mathbb S^n \rightarrow \{1, 2, \ldots, 2^{\lfloor nR \rfloor} \}$ and $f^n: \{1, 2, \ldots, 2^{\lfloor nR \rfloor} \} \rightarrow \mathbb T^n$. We say that rate $R$ is achievable for source-coding the BIA $X^T_{[\pi',P]}$ source within distortion-level $D$ under the expected distortion criterion if there exists a sequence of rate $R$ source codes $<e^n, f^n>_1^\infty$ such that \begin{align} \limsup_{n \to \infty} E \left [ \frac{1}{n} d'^n(S^n, f^n(e^n(S^n))) \right ] \leq D \end{align} The infimum of all achievable rates corresponding to a given distortion level $D$ is the operational rate-distortion function at that distortion level, henceforth denoted by $R^E_{X^T_{[\pi',P]}}(D)$. 
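To make the BIA construction concrete, the following sketch draws $n$ independent blocks $S_1, \ldots, S_n$, each distributed as $X^T_{[\pi',P]}$; the two-state chain and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],          # illustrative transition matrix
              [0.2, 0.8]])
pi_prime = np.array([0.5, 0.5])    # illustrative initial distribution pi'

def sample_block(T):
    """Draw one block distributed as X^T_{[pi', P]}."""
    s = np.empty(T, dtype=int)
    s[0] = rng.choice(2, p=pi_prime)
    for i in range(1, T):
        s[i] = rng.choice(2, p=P[s[i - 1]])
    return s

def sample_bia(n, T):
    """The BIA X^T_{[pi', P]} source: n i.i.d. blocks (S_1, ..., S_n)."""
    return np.stack([sample_block(T) for _ in range(n)])

S = sample_bia(n=2000, T=8)
# Blocks are mutually independent; memory exists only within a block.
assert S.shape == (2000, 8)
```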
The normalized rate-distortion function at block-length $T$ and distortion level $D$ is defined as \begin{align} \frac{1}{T} R^E_{X^T_{[\pi',P]}}(TD) \end{align} and the limit is \begin{align} \label{Normalized} \lim_{T \to \infty} \frac{1}{T} R^E_{X^T_{[\pi',P]}}(TD) \end{align} The theorems in this paper prove the equality of $R^E_{X_{[\pi', P]}}(D)$ and (\ref{Normalized}), and that these functions do not depend on $\pi'$. These theorems are stated in Section \ref{Lemma}. Before that, we carry out a discussion on the rate-distortion function of a non-stationary Markoff chain. \subsection{Discussion} To be entirely correct, the rate-distortion function of a Markoff source should be defined as follows: Let $n$ be the block-length. Denote $U_i \triangleq X_{(i-1)n+1}^{in}$. Each $U_i$ is thus a random vector of length $n$. Let $<e^n, f^n>_1^\infty$ be a source code for the source $X_{[\pi', P]}$. When the block length is $n$, we would like to use the source-code successively over all intervals of time of block-length $n$. Thus, it is more logical to define the distortion as: \begin{align} \limsup_{n \to \infty} \sup_{i \in \mathbb N} E \left [\frac{1}{n} d^n(U_i, f^n(e^n (U_i))) \right ] \end{align} and correspondingly define the rate-distortion function. This does not end up making a difference, and hence, we stick to the originally given definition for distortion. Note that if $\pi' = \pi$, the stationary distribution, the $\sup$ in the above definition can be removed since the distribution of $X_{(i-1)n+1}$ is independent of $i$. \section{The theorems} \label{Lemma} \begin{thm}\label{OT1} $R^E_{X_{[\pi', P]}}(D) = R^E_{X_{[\pi,P]}}(D)$, where $\pi$ is the stationary distribution and $\pi'$ is an arbitrary probability distribution on $\mathbb X$. \end{thm} \begin{proof} Follows from \cite{Gray}, or see Appendix \ref{EQM} for an independent proof tailored for Markoff chains. 
\end{proof} \begin{thm} \label{OT2} For $D > 0$, \begin{align} \label{MainTheorem} \lim_{T \to \infty} \frac{1}{T} R^E_{X^T_{[\pi', P]}}(TD) \ \mbox{exists, and is independent of $\pi'$} \end{align} \end{thm} This theorem will be proved in Section \ref{Pf}. \begin{thm}\label{OT3} \begin{align} R^E_{X_{[\pi',P]}}(D) = R^E_{X_{[\pi,P]}}(D) = \lim_{T \to \infty} \frac{1}{T} R^E_{X^T_{[\pi, P]}}(TD) =\lim_{T \to \infty} \frac{1}{T} R^E_{X^T_{[\pi', P]}}(TD) \end{align} where $\pi$ is the stationary distribution and $\pi'$ is an arbitrary distribution on $\mathbb X$. \end{thm} \begin{proof} Follows from Theorems \ref{OT1}, \ref{OT2} and \cite{GallagerInformationTheory}, Pages 490-500. \end{proof} In order to prove Theorem \ref{OT2}, we need more notation; this is the subject of the next section. The theorem is proved in the section following the next. \section{Further notation} \label{FN} The information-theoretic rate-distortion function of the vector i.i.d. $X^T_{[\pi',P]}$ source is denoted and defined as \begin{align} R^I_{X^T_{[\pi',P]}}(D) \triangleq \inf_{\mathbb W } I(X^T; Y^T) \end{align} where $X^T \sim X^T_{[\pi',P]}$ and $\mathbb W$ is the set of $W: \mathbb S \rightarrow \mathbb P(\mathbb T)$ defined as \begin{align} \mathbb W \triangleq \left \{ W \ \left |\ \sum_{s \in \mathbb S, t \in \mathbb T} p_{X^T_{[\pi',P]}}(s)W(t|s) d'(s,t) \leq D \right . \right \} \end{align} where $p_{X^T_{[\pi',P]}}$ denotes the distribution corresponding to $X^T_{[\pi',P]}$. Note that this is the usual definition of the information-theoretic rate-distortion function for an i.i.d. source; just that the source under consideration is vector i.i.d. Let $s \in \mathbb S$. Denote by $J_{\tau}$ the projection transformation: $ J_{\tau}(s) \triangleq (s(\tau+1), s(\tau+2), \ldots, s(T)) $. Fix $s$. Denote $ \mathbb A \triangleq \{t \in \mathbb S \ | \ J_{\tau}(t) = J_{\tau}(s) \} $. 
Under the distribution induced by $X^T_{[\pi', P]}$, the probability of the set $\mathbb A$ is \begin{align} \pi'^{(\tau)}(s(\tau+1))\prod_{i=\tau+1}^{T-1} p_{s(i)s(i+1)} \end{align} for some distribution $\pi'^{(\tau)}$ on $\mathbb X$ which satisfies $\pi'^{(\tau)}(x) \to \pi(x)$ as $\tau \to \infty$ $\forall x \in \mathbb X$. Note further that if $\pi'=\pi$, then $\pi'^{(\tau)} = \pi$. For $x \in \mathbb X$, denote $ \pi'^{(\tau)}(x) = \pi(x) + {\delta}^{(\tau)}(x) $ where ${\delta}^{(\tau)}(x) \to 0$ as $\tau \to \infty$. ${\delta}^{(\tau)}(x)$ may be negative. Denote by $J_{\tau}(X^T_{[\pi', P]})$ the probability distribution on $\mathbb X^{T-\tau}$ which causes the probability of a sequence $r \in \mathbb X^{T-\tau}$ to be \begin{align} \pi'^{(\tau)}(r(1)) \prod_{i=1}^{T-\tau-1} p_{r(i)r(i+1)} \end{align} Note that $J_{\tau}(X^T_{[\pi', P]})$ is the marginal of $X^T_{[\pi', P]}$ on the last $T-\tau$ dimensions. An i.i.d. source can be formed from $J_{\tau}(X^T_{[\pi', P]})$ by taking a sequence of independent random vectors, each distributed as $J_{\tau}(X^T_{[\pi', P]})$. This will be called the vector i.i.d. $J_{\tau}(X^T_{[\pi', P]})$ source. The rate-distortion function for the vector i.i.d. $J_{\tau}(X^T_{[\pi', P]})$ source is defined in the same way as the rate-distortion function for the vector i.i.d. $X^T_{[\pi', P]}$ source: For $T-\tau$ length sequences, the single-letter distortion measure is defined as $d''(p,q) = \sum_{i=1}^{T-\tau} d(p(i), q(i))$ where $p \in \mathbb X^{T-\tau}$, $q \in \mathbb Y^{T-\tau}$. The $n$-letter distortion measure is defined additively: $d''^n(p^n,q^n) = \sum_{i=1}^n d''(p^n(i), q^n(i))$ where $p^n \in (\mathbb X^{T-\tau})^n$ and $q^n \in (\mathbb Y^{T-\tau})^n$. A sequence of rate $R$ source codes is a sequence $<e^n, f^n>_1^\infty$, where $e^n: (\mathbb X^{T-\tau})^n \rightarrow \{1, 2, \ldots, 2^{\lfloor nR \rfloor} \}$ and $f^n: \{1, 2, \ldots, 2^{\lfloor nR \rfloor} \} \rightarrow (\mathbb Y^{T-\tau})^n$. 
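The claim that $J_{\tau}(X^T_{[\pi', P]})$ is the marginal of $X^T_{[\pi', P]}$ on the last $T-\tau$ coordinates, with first-symbol law $\pi'^{(\tau)} = \pi' P^\tau$, can be checked by brute-force enumeration for a small alphabet. The chain below is an arbitrary illustrative example, not taken from the paper.

```python
import itertools
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])           # illustrative transition matrix
pi_prime = np.array([1.0, 0.0])      # illustrative initial distribution pi'
T, tau = 5, 2

def prob(first_dist, seq):
    """Probability of seq when the first symbol has law first_dist."""
    p = first_dist[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= P[a, b]
    return p

# Marginal of X^T_{[pi', P]} on the last T - tau coordinates, by summing
# over all possible values of the first tau symbols.
marginal = {}
for s in itertools.product(range(2), repeat=T):
    key = s[tau:]
    marginal[key] = marginal.get(key, 0.0) + prob(pi_prime, s)

# J_tau(X^T_{[pi', P]}) assigns r the mass pi'^(tau)(r(1)) prod p_{r(i)r(i+1)},
# where pi'^(tau) = pi' P^tau is the law of X_{tau+1}.
pi_tau = pi_prime @ np.linalg.matrix_power(P, tau)
for r in itertools.product(range(2), repeat=T - tau):
    assert np.isclose(marginal[r], prob(pi_tau, r))
```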
The rate-distortion functions for the i.i.d. $J_{\tau}(X^T_{[\pi', P]})$ source, when the distortion measure is $d''$, are defined analogously as for the i.i.d. $X^T_{[\pi', P]}$ vector source; the details are omitted. Denote the operational rate-distortion function for the vector i.i.d. $J_{\tau}(X^T_{[\pi', P]})$ source by $R^E_{J_{\tau}(X^T_{[\pi', P]})}(\cdot)$ and denote the information-theoretic rate-distortion function for the same source by $R^I_{J_{\tau}(X^T_{[\pi', P]})}(\cdot)$. For the same reason as that stated before regarding $d'$, $d'' = d^{T-\tau}$ and $d''^n$ can be thought of as $d^{n(T-\tau)}$. \section{{Proof of Theorem \ref{OT2}}} \label{Pf} Before we prove the theorem, note the following: \begin{lemma}\label{ConvexLemma} Let $f: [0, \infty) \rightarrow [0, \infty )$ be a convex $\cup$, non-increasing function. Let $f(0) = K$. Let $0 < a < a'$. Then, \begin{align} f(a) - f(a') \leq \frac{K}{a}(a'-a) \end{align} \end{lemma} \begin{proof} \begin{align} & f(a)-f(a') \leq \frac{K}{a}(a'-a) \impliedby \frac{f(a)-f(a')}{a'-a} \leq \frac{f(0)-f(a)}{a} \\ \impliedby & \frac{f(a')-f(a)}{a'-a} \geq \frac{f(a)-f(0)}{a-0} \impliedby \left (1-\frac{a}{a'} \right )f(0) + \frac{a}{a'} f(a') \geq f(a) \nonumber \\ \impliedby & \mbox{Definition of convexity, see for example \cite{V}} \nonumber \end{align} \end{proof} This lemma is a direct consequence of the definition of convexity, and it will be used crucially in the proof of the theorem, which follows below. Proof of Theorem \ref{OT2}: \begin{proof} By the rate-distortion theorem, $R^E_{X^T_{[\pi', P]}}(TD) = R^I_{X^T_{[\pi', P]}}(TD)$. 
Comparing definitions with \cite{GallagerInformationTheory}, Page 491, \begin{align} \frac{1}{T}R^I_{X^T_{[\pi, P]}}(TD) \ \mbox{ (notation in this document)} = R_T(D) \ \mbox{(notation in \cite{GallagerInformationTheory})} \end{align} By Theorem 9.8.1 in \cite{GallagerInformationTheory}, it follows that \begin{align}\label{LimExists} \lim_{T \to \infty} \frac{1}{T} R^E_{X^T_{[\pi, P]}}(TD) \ \mbox{exists} \end{align} (\ref{LimExists}) will be used crucially towards the end of the proof. The proof follows three steps: \begin{enumerate} \item Bound the difference between $R^E_{J_{\tau} (X^T_{[\pi', P]})}(\cdot)$ and $R^E_{J_{\tau} (X^T_{[\pi, P]})}(\cdot)$. \item Relate $R^E_{J_{\tau}(X^T_{[\pi', P]})}(\cdot)$ and $R^E_{X^T_{[\pi', P]}}(\cdot)$. \item Use these relations to prove the desired result by computing various bounds. \end{enumerate} The first step in the proof is to come up with a bound for the difference between $R^E_{J_{\tau} (X^T_{[\pi', P]})}(\cdot)$ and $R^E_{J_{\tau} (X^T_{[\pi, P]})}(\cdot)$. To this end, we first do the same for $R^I_{J_{\tau} (X^T_{[\pi', P]})}(\cdot)$ and $R^I_{J_{\tau} (X^T_{[\pi, P]})}(\cdot)$. Denote the distribution corresponding to $J_{\tau}(X^T_{[\pi', P]})$ on $\mathbb X^{T-\tau}$ by $Q'$, and the distribution corresponding to $J_{\tau}(X^T_{[\pi, P]})$ by $Q$. The $l^1$ distance between $Q'$ and $Q$ is \begin{align}\label{l1QQ'} l^1(Q',Q) & \triangleq \sum_{x^{T-\tau} \in \mathbb X^{T-\tau}} \left |Q'(x^{T-\tau}) - Q(x^{T-\tau} )\right | \\ &= \sum_{x^{T-\tau} \in \mathbb X^{T-\tau}} \left |{\pi'}^{(\tau)}(x^{T-\tau}(1)) - \pi(x^{T-\tau}(1)) \right | \prod_{i=1}^{T-\tau-1}p_{x^{T-\tau}(i)x^{T-\tau}(i+1)} \nonumber \\ &=\sum_{x \in \mathbb X} |\delta^{(\tau)}(x)| \nonumber \\ & \triangleq \delta^{(\tau)} \nonumber \end{align} In the above calculation, we have used the fact that if $\pi'=\pi$, then $\pi'^{(\tau)} = \pi$. 
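This collapse of the $l^1$ distance (the common transition factors sum to one, leaving only the disagreement in the first-symbol law) can be verified numerically for a small chain; all values below are illustrative assumptions, not from the paper.

```python
import itertools
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])           # illustrative transition matrix
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()                       # stationary distribution
pi_prime = np.array([1.0, 0.0])      # non-stationary start

def prob(first_dist, seq):
    """Probability of seq when the first symbol has law first_dist."""
    p = first_dist[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= P[a, b]
    return p

L = 5                                # length T - tau of the sequences
prev = np.inf
for tau in (0, 2, 8):
    pi_tau = pi_prime @ np.linalg.matrix_power(P, tau)   # pi'^(tau)
    l1 = sum(abs(prob(pi_tau, s) - prob(pi, s))
             for s in itertools.product(range(2), repeat=L))
    # l^1(Q', Q) equals sum_x |delta^(tau)(x)| exactly ...
    assert np.isclose(l1, np.abs(pi_tau - pi).sum())
    # ... and decays as tau grows, since pi'^(tau) -> pi.
    assert l1 < prev
    prev = l1
```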
Condition (Z) stated in \cite{Hari} holds based on the assumptions we have made, so Lemma 2 in \cite{Hari} can be applied, and it follows that for $\tau$ sufficiently large (reasoning stated below after a few lines) and any $T > \tau$, \begin{align} \label{BoundJtaupipi'} \left | \frac{1}{T-\tau} R^I_{J_{\tau}(X^T_{[\pi', P]})}((T-\tau)D) - \frac{1}{T-\tau} R^I_{J_{\tau}(X^T_{[\pi, P]})}((T-\tau)D) \right | \leq K \delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}} \end{align} where \begin{align} K = \frac{1}{T-\tau} \frac{7d^*}{\tilde d} \log \left (\left |\mathbb X^{T-\tau} \right | \left |\mathbb Y^{T-\tau} \right | \right ) \end{align} In (\ref{BoundJtaupipi'}), $\delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}}$ is defined as zero if $\delta^{(\tau)}$ is zero. $|\mathbb X^{T-\tau}|$ and $|\mathbb Y^{T-\tau}|$ denote the cardinalities of the input and output spaces on which the random source ${J_{\tau}(X^T_{[\pi', P]})}$ is defined. $d^*$ is defined as \begin{align} d^* \triangleq \max_{x^{T-\tau} \in \mathbb X^{T-\tau}, y^{T-\tau} \in \mathbb Y^{T-\tau}} d''(x^{T-\tau}, y^{T-\tau}) = (T-\tau) D_{\max} \end{align} and $\tilde d$ is defined as \begin{align} \tilde{d} \triangleq \min_{\{ x^{T-\tau} \in \mathbb X^{T-\tau}, y^{T-\tau} \in \mathbb Y^{T-\tau}\ | \ d''(x^{T-\tau}, y^{T-\tau}) > 0\}} d''(x^{T-\tau}, y^{T-\tau}) = (T-\tau) D_{\min} \end{align} It follows that \begin{align} K = 7 \frac{D_{\max}}{D_{\min}} (\log (|\mathbb X|) + \log (|\mathbb Y|)) \end{align} Note that $K$ is a constant independent of $T, \tau, D$. Also, we said above that (\ref{BoundJtaupipi'}) holds for $\tau$ sufficiently large: this is because, by Lemma 2 in \cite{Hari}, we need $\tau$ large enough so that \begin{align} \label{deltabound} \delta^{(\tau)} \leq 4 \frac{D_{\min}}{D_{\max}} \end{align} which is possible considering the fact that $\delta^{(\tau)} \to 0$ as $\tau \to \infty$, and it is for this reason that we require $\tau$ to be sufficiently large. 
Note that the bound in (\ref{deltabound}) is independent of $T, \tau$. It then follows from (\ref{BoundJtaupipi'}) and the equality of information-theoretic and operational rate-distortion functions for i.i.d. sources, that for $\tau$ sufficiently large and any $T > \tau$, \begin{align} \label{Jpipi'relation} \left | \frac{1}{T-\tau} R^E_{J_{\tau}(X^T_{[\pi', P]})}((T-\tau)D) - \frac{1}{T-\tau} R^E_{J_{\tau}(X^T_{[\pi, P]})}((T-\tau)D) \right | \leq K \delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}} \end{align} The bound (\ref{Jpipi'relation}) will be used crucially later, towards the end of the proof. The next step is to relate $R^E_{J_{\tau}(X^T_{[\pi', P]})}(\cdot)$ and $R^E_{X^T_{[\pi', P]}}(\cdot)$. We will argue the following: \begin{align} \label{REJBound1} R^E_{X^T_{[\pi', P]}}((T-\tau)D + \tau D_{\max}) \leq R^E_{J_{\tau}(X^T_{[\pi', P]})}((T-\tau)D) \end{align} and \begin{align} \label{REJBound2} R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) \leq R^E_{X^T_{[\pi', P]}}(TD) \end{align} The rough idea behind the proof of (\ref{REJBound1}) is the following: Given a sequence of rate $R$ source codes for the vector i.i.d. $J_{\tau}(X^T_{[\pi', P]})$ source, we can use the same sequence of rate $R$ source-codes for the vector i.i.d. $X^T_{[\pi', P]}$ source by not coding the time-slots which were not projected onto when defining $J_{\tau}(X^T_{[\pi', P]})$. These discarded slots incur a maximum distortion of $\tau D_{\max}$ per symbol of $X^T_{[\pi', P]}$. (\ref{REJBound1}) follows. See Appendix \ref{App} for the precise argument. The rough idea behind the proof of (\ref{REJBound2}) is the following: Consider a two-dimensional random vector $(A,B)$ on some space and the i.i.d. source obtained by taking i.i.d. copies of $(A,B)$. Consider a distortion measure which is additive over the two dimensions. Consider, also, the i.i.d. source formed by taking i.i.d. copies of $A$. Then, for a given distortion level, the rate-distortion function of the vector i.i.d. 
$(A,B)$ source is greater than or equal to the rate-distortion function of the i.i.d. $A$ source. This is stated more rigorously in Appendix \ref{App}. Note that $J_{\tau}(X^T_{[\pi, P]})$ is a projection of $X^T_{[\pi, P]}$ onto certain dimensions and the distortion measure over these dimensions is additive. (\ref{REJBound2}) follows from this. Next, we get to Step 3. Assuming $TD>\tau D_{\max}$, by replacing $D$ in (\ref{REJBound1}) by \begin{align} D = \frac{TD-\tau D_{\max}}{T-\tau} \end{align} and by (\ref{REJBound2}), it follows that \begin{align} \label{BKBKBound} R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) \leq R^E_{X^T_{[\pi', P]}}(TD) \leq R^E_{J_{\tau}(X^T_{[\pi', P]})} ( TD-\tau D_{\max}) \end{align} It follows from (\ref{BKBKBound}) by rearranging, that \begin{align} \label{BBound} 0 \leq R^E_{X^T_{[\pi', P]}}(TD) - R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) \leq R^E_{J_{\tau}(X^T_{[\pi', P]})} ( TD-\tau D_{\max}) - R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) \end{align} By noting that $R^E_{J_{\tau}(X^T_{[\pi', P]})}(D)$ is a non-increasing, convex $\cup$ function of $D$ which is upper bounded by $(T-\tau)\log |\mathbb X|$ at $D = 0$, it follows, assuming that $TD > \tau D_{\max}$, by Lemma \ref{ConvexLemma} that \begin{align}\label{BBBBBound} R^E_{J_{\tau}(X^T_{[\pi', P]})} ( TD-\tau D_{\max}) - R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) \leq \tau D_{\max} \log |\mathbb X| \end{align} From (\ref{BBBBBound}) and (\ref{BBound}), it follows that \begin{align} \label{E1} \lim_{T \to \infty} \left [ \frac{1}{T}R^E_{X^T_{[\pi', P]}}(TD) - \frac{1}{T}R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) \right ] = 0 \end{align} Note further, by noting that $R^E_{X^T_{[\pi', P]}}(TD) \leq T \log |\mathbb X|$, that \begin{align}\label{E2} \lim_{T \to \infty} \left | \frac{1}{T-\tau}R^E_{X^T_{[\pi', P]}}(TD) - \frac{1}{T}R^E_{X^T_{[\pi', P]}}(TD) \right | \leq \lim_{T \to \infty} \frac{\tau}{T(T-\tau)} T \log|\mathbb X| = 0 \end{align} Also, by noting that $R^E_{J_{\tau}(X^T_{[\pi', P]})}(D)$ is a 
non-increasing, convex $\cup$ function of $D$ which is upper bounded by $(T-\tau)\log|\mathbb X|$, it follows by use of Lemma \ref{ConvexLemma} that \begin{align}\label{E3} \lim_{T \to \infty} \left | \frac{1}{T-\tau}R^E_{J_{\tau}(X^T_{[\pi', P]})}(TD) - \frac{1}{T-\tau}R^E_{J_{\tau}(X^T_{[\pi', P]})}((T-\tau)D) \right | \leq \lim_{T \to \infty} \frac{\log|\mathbb X|}{(T-\tau)D}\tau D = 0 \end{align} It follows, then, from (\ref{E1}), (\ref{E2}), (\ref{E3}) and by noting that \begin{align} \lim_{n \to \infty} a_n + \lim_{n \to \infty} b_n + \lim_{n \to \infty} c_n = \lim_{n \to \infty} (a_n + b_n + c_n) \end{align} if the three limits on the left hand side exist (follows from definitions, see for example \cite{V}), that \begin{align} \label{Anotherbound} \lim_{T \to \infty} \left [ \frac{1}{T}R^E_{X^T_{[\pi', P]}}(TD) - \frac{1}{T-\tau}R^E_{J_{\tau}(X^T_{[\pi', P]})}((T-\tau)D) \right ] = 0 \end{align} From (\ref{Jpipi'relation}) and (\ref{Anotherbound}), it follows by use of the triangle inequality that, for $\tau$ sufficiently large and $T > \tau$, \begin{align} \label{EEE1} \left | \frac{1}{T}R^E_{X^T_{[\pi', P]}}(TD) - \frac{1}{T-\tau}R^E_{J_{\tau}(X^T_{[\pi, P]})}((T-\tau)D) \right | \leq K\delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}} + \kappa_{\tau,T} \end{align} for some $\kappa_{\tau,T} \to 0$ as $T \to \infty$. The above equation holds for $\pi' = \pi$ too, that is, for $\tau$ sufficiently large and $T > \tau$, \begin{align}\label{EEE2} \left | \frac{1}{T}R^E_{X^T_{[\pi, P]}}(TD) - \frac{1}{T-\tau}R^E_{J_{\tau}(X^T_{[\pi, P]})}((T-\tau)D) \right | \leq K\delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}} + \eta_{\tau,T} \end{align} for some $\eta_{\tau,T} \to 0$ as $T \to \infty$. 
From (\ref{EEE1}) and (\ref{EEE2}), by use of the triangle inequality, it follows that for $\tau$ sufficiently large and $T > \tau$, \begin{align} \label{KKK1} \left | \frac{1}{T}R^E_{X^T_{[\pi', P]}}(TD) - \frac{1}{T}R^E_{X^T_{[\pi, P]}}(TD) \right | \leq 2K \delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}} + \eta_{\tau,T} + \kappa_{\tau,T} \end{align} From (\ref{KKK1}) and (\ref{LimExists}), and by noting that $\delta^{(\tau)} \log \frac{1}{\delta^{(\tau)}} \to 0$ as $\tau \to \infty$, $\eta_{\tau,T} \to 0$ as $T \to \infty$, and $\kappa_{\tau,T} \to 0$ as $T \to \infty$, it follows that \begin{align}\label{KKK2} \lim_{T \to \infty} \frac{1}{T}R^E_{X^T_{[\pi', P]}}(TD) \ \mbox{exists and is independent of $\pi'$} \end{align} This finishes the proof. \end{proof} The assumptions $\mathbb X=\mathbb Y$, $d(x,x) =0$, and $d(x,y) > 0$ if $x \neq y$ which have been made are not necessary, and can be replaced by weaker assumptions. Nothing is lost in the idea of the proof by making these assumptions, and they prevent one from having to consider pathological cases; for these reasons they have been made. \section{$\psi$-mixing sources or a variant?} A set of sources to which this result may be generalizable with the proof technique used is $\psi$-mixing sources or close variants, appropriately defined. See \cite{Prohorov}, \cite{Bradley} and \cite{paperTogether2} for mixing of sources and \cite{Bradley}, \cite{paperTogether2}, in particular, for results on $\psi$-mixing sources. The main property (among others) that made $\psi$-mixing sources amenable to the result in \cite{paperTogether2} is the decomposition in Lemma 1 in \cite{paperTogether2}, wherein a stationary $\psi$-mixing source is written as a convex combination of an i.i.d. distribution and another general distribution, where the i.i.d. distribution dominates as memory is lost with time. 
Precisely, the equation is Equation (19) in \cite{paperTogether2}: \begin{align} \label{EqPsiConvex} \Pr(X_{t+\tau+1}^{t+\tau+T} \in \mathbb B|X_1^t \in \mathbb A) = (1-\lambda_\tau)P_T(\mathbb B) + \lambda_{\tau} P'_{t, \tau, T, \mathbb A}(\mathbb B) \end{align} where $\lambda_{\tau} \to 0$ as $\tau \to \infty$. This lemma, though, required stationarity. If a variant of (\ref{EqPsiConvex}) were to hold for non-stationary sources, then the result in this paper might generalize to such sources. Irreducible, aperiodic Markoff chains satisfy this property, with $P_T(\mathbb B)$ taken as the stationary distribution, and $P'$ some distribution depending on the initial distribution of the Markoff chain. An important bound in proving Theorem \ref{OT2} in this paper is the bound (\ref{l1QQ'}) on the $l^1$ distance between $Q$ and $Q'$. A similar bound will hold for sources which satisfy (\ref{EqPsiConvex}) or a variant. Similarly, proving (\ref{REJBound1}) and (\ref{REJBound2}) in the proof of Theorem \ref{OT2}, or similar equations, may also be possible. The rest of the proof of Theorem \ref{OT2} consists of bounding various differences of `close by' rate-distortion functions, and this may be possible too. This is just an idea at this point and needs to be studied carefully to see if any of this is at all possible. \section{Recapitulation and research directions} In this paper, it was proved that the limit of the normalized rate-distortion functions of block-independent approximations of an irreducible, aperiodic Markoff chain is independent of the initial distribution and is equal to the rate-distortion function of the Markoff chain. It would be worthwhile trying to generalize this theorem to ergodic sources to the extent possible, not necessarily Markoff sources, in particular, to $\psi$-mixing sources; this would not only make the result general, but also shed light on the `internal workings' of rate-distortion theory. 
Further, it would be worthwhile trying to prove this result using existing literature, in particular, to see if it follows directly from some result, for example, in \cite{Gray}; this would help with generalization and insight into the `internal workings' of rate-distortion theory, too. \section{Acknowledgements} The author is infinitely grateful to Prof. Robert Gray for tens of hours of his time spent in insightful discussions with the author. \bibliographystyle{IEEEtran}
\section{Introduction} Recently \cite{Herdeiro:2011ck,Coelho:2012sy}, we have studied the collision of two $D$-dimensional Aichelburg-Sexl (AS) shock waves \cite{Aichelburg:1970dh}, using a method first developed (in $D=4$) by D'Eath and Payne \cite{D'Eath:1992hb,D'Eath:1992hd,D'Eath:1992qu}, with the goal of obtaining the radiated energy. This method is conceptually and technically elaborate, involving both analytical and numerical studies. Remarkably, the fraction of radiated energy - which we refer to as the \textit{inelasticity of the collision} - agrees in first order perturbation theory, within the numerical error of the method (less than 0.1\%), with the simple formula \begin{equation} \epsilon_{\rm 1st \, order}= \frac{1}{2}-\frac{1}{D} \ . \label{miracle} \end{equation} Asymptotically, $\epsilon_{\rm 1st \, order} \rightarrow 1/2$, which agrees with the \textit{bound}, $\epsilon_{\rm AH}$, obtained by computing the apparent horizon (AH) on the past light cone \cite{Eardley:2002re} (or on the future one \cite{Yoshino:2005hi}) for a head-on collision of two AS shock waves: \begin{equation} \epsilon_{\rm AH}= 1-\frac{1}{2}\left(\frac{D-2}{2}\frac{\Omega_{D-2}}{\Omega_{D-3}}\right)^{\frac{1}{D-2}} \ , \qquad \lim_{D\rightarrow\infty}\epsilon_{\rm AH}=\frac{1}{2} \ , \end{equation} where $\Omega_n$ is the volume of the unit $n$-sphere. Moreover, the trend with $D$ observed from $\epsilon_{\rm 1st \, order}$ agrees, qualitatively, with that of $\epsilon_{\rm AH}$ (cf. Fig. 3 in \cite{Coelho:2012sy}). An immediate question is whether the appeal and simplicity of the result \eqref{miracle} is preserved in higher order perturbation theory. To answer it, the $D$-dimensional formalism developed in \cite{Herdeiro:2011ck,Coelho:2012sy} must be extended beyond linearised theory and, in particular, so must the radiation extraction method. 
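The two trends quoted above can be reproduced numerically. The sketch below (an illustrative check using only the formulas in the text, with $\Omega_n = 2\pi^{(n+1)/2}/\Gamma((n+1)/2)$) evaluates $\epsilon_{\rm 1st \, order}$ and $\epsilon_{\rm AH}$ and verifies that both stay below, and approach, $1/2$ as $D$ grows; gamma functions are handled in log form so the sphere-volume ratio stays finite at large $D$.

```python
import math

def eps_first_order(D):
    """First-order inelasticity: 1/2 - 1/D."""
    return 0.5 - 1.0 / D

def eps_ah(D):
    """Apparent-horizon bound on the inelasticity.
    Omega_{D-2}/Omega_{D-3} = sqrt(pi) * Gamma((D-2)/2) / Gamma((D-1)/2),
    from Omega_n = 2 pi^{(n+1)/2} / Gamma((n+1)/2); lgamma avoids overflow."""
    ratio = math.sqrt(math.pi) * math.exp(
        math.lgamma((D - 2) / 2) - math.lgamma((D - 1) / 2))
    return 1.0 - 0.5 * ((D - 2) / 2 * ratio) ** (1.0 / (D - 2))

# D = 4 gives the Eardley-Giddings value 1 - 1/sqrt(2) ~ 0.2929.
assert abs(eps_ah(4) - (1 - 1 / math.sqrt(2))) < 1e-12
# Both quantities stay below 1/2 and approach it as D grows.
for D in (4, 6, 10, 100, 1000):
    assert eps_first_order(D) < 0.5 and eps_ah(D) < 0.5
assert eps_ah(1000) > 0.49 and eps_first_order(1000) > 0.49
```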
In \cite{Herdeiro:2011ck,Coelho:2012sy}, we have approached the problem of extracting the gravitational radiation by using the Landau-Lifshitz pseudo-tensor, which was straightforward to apply for the setup therein. Indeed, the first order calculation amounts to approximating the outgoing radiation by an isotropic flux, with a value obtained as the limit of the flux at the axis computed in linearised theory. This allowed us to obtain a relatively simple expression from the pseudo-tensor, since only the radiation in the direction of the symmetry axis had to be computed. In higher order perturbation theory, on the other hand, we will have to include higher order contributions to the news function, following \cite{D'Eath:1992hb,D'Eath:1992hd,D'Eath:1992qu}, which means that the pseudo-tensor components become more complicated, making this method less manageable and clear. We shall therefore, in this paper, discuss the higher order perturbative formalism with a different radiation extraction method, valid for generic axially symmetric spacetimes, based on the Bondi mass loss formula in $D$ dimensions~\cite{Tanabe:2011es,Tanabe:2012fg}. In particular, we shall obtain a formula for the inelasticity which is valid to all orders (cf. \eqref{epsilon}), and which makes closer contact with the original Payne and D'Eath computation \cite{D'Eath:1992hb,D'Eath:1992hd,D'Eath:1992qu}, since it uses the natural generalisation to higher $D$ of the Bondi news function used by these authors. In linearised theory, of course, the result obtained using the Bondi mass loss formula coincides with the pseudo-tensor method. The second part of this paper is devoted to the study of a collision of shocks obtained from infinitely boosted charged particles. 
Such shocks were constructed in \cite{Lousto:1988ua,Ortaggio:2006gh} and considered in \cite{Yoshino:2006dp,Gingrich:2006qh} to estimate an upper bound on the inelasticity, motivated by TeV gravity scenarios \cite{ArkaniHamed:1998rs,Dimopoulos:2001hw, Giddings:2001bu} (see \cite{Cardoso:2012qm}, Sec. 4, for a recent review). In performing this study, our goal is to probe the validity of a perturbative method to study a process that, at its core, includes a non-perturbative phenomenon - black hole formation. As it turns out, this example illustrates quite well how the method ceases to work, and creates a clear contrast with the neutral case. The reason why the method fails in such a case is that the repulsive nature of the charged gravitational source implies that an important contribution to the radiation, in the perturbative approach, is obtained from the strong field region. In reality, this contribution, or at least an important part of it, should be cloaked by a horizon, as it would be manifest in a non-perturbative computation. Thus, in this example, the perturbative approach entangles the physical signal with a significant spurious signal, making the method uninformative. By contrast, in the neutral case, the spurious signal appears to be subleading. This paper is organised as follows. In Section \ref{eq:generalframework} we shall set up the higher dimensional perturbation theory for the problem at hand and derive the formula for the inelasticity, leaving some technical details, related to the transformations between the various coordinate systems involved, to Appendix \ref{app:Bondi}. In Section \ref{sec_charge} we discuss the charged case. Although we have tried to make the discussion self-contained, the construction uses some details provided in \cite{Herdeiro:2011ck,Coelho:2012sy}.
Two technical issues, the gauge fixing to de Donder coordinates and the evaluation of the integral solution for the perturbative metric, are organised in Appendices \ref{gaugetransformation} and \ref{app:simplify}. We close with some final remarks - Section \ref{sec_final_rem}. \section{Higher order shock wave perturbation theory} \label{eq:generalframework} In~\cite{Herdeiro:2011ck} we have shown that, in a boosted frame, the metric on the future light cone of the collision of two higher dimensional Aichelburg-Sexl shock waves is given by a perturbative series: \begin{equation} g_{\mu\nu}=\nu^{\frac{2}{D-3}}\left[\eta_{ \mu \nu}+h_{\mu \nu}\right]=\nu^{\frac{2}{D-3}}\left[\eta_{ \mu \nu}+\sum_{i=1}^\infty\left(\frac{\lambda}{\nu}\right)^i h_{\mu\nu}^{(i)}\right]\ .\label{eq:pertexpansion} \end{equation} Here, $\lambda,\nu$ are the energy parameters of the weak/strong shock in the boosted frame, respectively, and the background flat metric in null coordinates is \begin{equation} ds^2\equiv \eta_{{\mu}{\nu}}dx^{{\mu}} dx^{{\nu}} = -2d{u} d{v} + \delta_{{i}{j}}dx^{{i}} dx^{{j}} =-2d{u} d{v}+d\rho^{2} + \rho^2 d {\Omega}^2_{D-3} \ . \end{equation} $d\Omega^2_{D-3}$ is the line element of the unit $(D-3)$-sphere. We call this coordinate system \textit{Brinkmann coordinates} \cite{Brinkmann}, where the retarded and advanced times $(\sqrt{2}{u},\sqrt{2}{v})$ are $(t-z,t+z)$ in terms of Minkowski coordinates, and $\{x^{{i}}\}$ are the remaining Cartesian coordinates on the plane of the shocks, ${i}=1\ldots D-2$, such that the transverse radius is ${\rho}=\sqrt{x^{{i}}x_{{i}}}$. In this Section we will have two further types of coordinates: i) \textit{de Donder} coordinates and ii) \textit{Bondi} coordinates, both to be introduced and explained below. Since the de Donder coordinates coincide with Brinkmann coordinates outside the future light cone of the collision, we adopt the same notation for Brinkmann and de Donder coordinates (as in~\cite{Herdeiro:2011ck}).
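For later use, note that on axially symmetric functions of $(u,v,\rho)$ the flat wave operator associated with this metric takes the form (a standard computation, quoted here for convenience)
\begin{equation}
\Box=-2\,\partial_{{u}}\partial_{{v}}+\partial_{\rho}^2+\frac{D-3}{\rho}\,\partial_{\rho} \ ,
\end{equation}
since the transverse plane is $(D-2)$-dimensional and axial symmetry removes all angular derivatives.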
The superposition of two shock waves produces boundary conditions on $ u=0$ (the location of the strong shock) which go up to second order. Thus, in second order perturbation theory, the boundary conditions are exact. Suggestively, for $D=4$, the second order result coincides with the outcome of numerical relativity simulations \cite{Sperhake:2008ga}. This sharpens the motivation to pursue this computation (at least) to second order. The general perturbative method consists of the following steps. Once we have the boundary data for the perturbation $h_{\mu\nu}|_{{u}=0}$, we insert the ansatz~\eqref{eq:pertexpansion} into the Einstein equations and equate order by order. The components of the metric perturbations do not decouple immediately, but we can perform a gauge transformation to the so-called de Donder coordinates so that they indeed decouple. We take the perturbative gauge transformation in the form \begin{equation} x^{ \mu}\rightarrow x^{\mu}+\sum_{i=1}^{+\infty} \left(\frac{\lambda}{\nu}\right)^i\xi^{(i)\mu}(x^{\alpha}) \ , \end{equation} where the vector $\xi^{(i)\mu}(x^{\alpha})$ is to be determined order by order so that the de Donder gauge condition is obeyed; i.e. for $u>0$ \begin{equation} \label{gauge} \bar{h}^{(i)\alpha\beta}_{\phantom{(i)\alpha\beta},\beta}=0 \ , \end{equation} (barred quantities are trace reversed). Now $h^{(i)\alpha \beta}$ are the metric perturbations in de Donder coordinates. Using~\eqref{eq:pertexpansion} and~\eqref{gauge}, the $i$-th order perturbation components obey decoupled wave equations \begin{equation}\label{eq:nth_order_Feq} \Box h^{(i)}_{\mu\nu}=T^{(i-1)}_{\mu\nu}\left[h_{\alpha\beta}^{(j<i)}\right] \ , \end{equation} where the source on the right hand side is generated by the lower order perturbations, and can be computed explicitly order by order.
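At first order this is the standard transformation of linearised theory: under the change of coordinates above, the perturbation transforms as
\begin{equation}
h^{(1)}_{\mu\nu}\rightarrow h^{(1)}_{\mu\nu}-\partial_{\mu}\xi^{(1)}_{\nu}-\partial_{\nu}\xi^{(1)}_{\mu} \ ,
\end{equation}
so that condition \eqref{gauge} is achieved by solving $\Box\,\xi^{(1)}_{\mu}=\partial^{\nu}\bar{h}^{(1)}_{\mu\nu}$ for the untransformed perturbation, with residual freedom parametrised by harmonic vectors, $\Box\,\xi^{\mu}=0$. At higher orders the transformation rule acquires additional terms, quadratic in lower order quantities.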
The general integral solution of~\eqref{eq:nth_order_Feq} can be written using the Green's function method (see Theorem~6.3.1 of~\cite{Friedlander:112411}) \begin{equation} \label{eq:sol_orderbyorder} h^{(i)}_{\mu\nu}=F.P.\int_{u'>0}d^{D}y'\, G(y,y')\left[T^{(i-1)}_{\mu\nu}(y')+2\delta(u')\partial_{v'}h^{(i)}_{\mu\nu}(y')\right] \ , \end{equation} where $F.P.$ denotes the finite part of the integral, $y=\left\{u,v,x_i\right\}$ and we have used the fact that the source only has support in $u>0$. Due to the axial symmetry of the problem, a basis of vectors and tensors on the transverse plane, constructed from $x^i$ and $\delta_{ij}$, is \begin{equation}\label{eq:basis_tensors} \Gamma_i\equiv \dfrac{x_i}{\rho} \; \;, \; \; \; \; \delta_{ij} \; \;, \; \; \; \; \Delta_{ij}\equiv \delta_{ij}-(D-2)\Gamma_i\Gamma_j \;, \end{equation} where we have chosen the last tensor to be traceless. Then, the metric perturbations in de Donder coordinates are decomposed into seven functions of $(u,v,\rho)$, here denoted $A,B,C,E,F,G,H$, in the following way: \begin{eqnarray} &h_{uu}\equiv A=A^{(1)}+A^{(2)}+\ldots \qquad &h_{ui}\equiv B \,\Gamma_i =(B^{(1)}+B^{(2)}+\ldots)\Gamma_i \nonumber\\ &h_{uv}\equiv C=C^{(1)}+C^{(2)}+\ldots \qquad &h_{vi}\equiv F \,\Gamma_i =(F^{(1)}+F^{(2)}+\ldots)\Gamma_i \label{app:gen_perts}\\ &h_{vv}\equiv G=G^{(1)}+G^{(2)}+\ldots \qquad &h_{ij}\equiv E \,\Delta_{ij}+H\, \delta_{ij} = (E^{(1)}+\ldots) \Delta_{ij}+(H^{(1)}+\ldots) \delta_{ij} \ .\qquad\nonumber \end{eqnarray} With this setup, using the boundary condition on $u=0$ and the solution~\eqref{eq:sol_orderbyorder}, one can find the metric perturbations by solving for these scalars after suitable contractions of~\eqref{eq:sol_orderbyorder} with the tensors~\eqref{eq:basis_tensors}. 
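For the contractions just mentioned, the following identities, which follow from $\Gamma_i\Gamma^i=1$, are useful:
\begin{equation}
\delta^{ij}\Delta_{ij}=(D-2)-(D-2)\,\Gamma_i\Gamma^i=0 \ , \qquad \Gamma^{i}\Delta_{ij}=\Gamma_j-(D-2)\,\Gamma_j=-(D-3)\,\Gamma_j \ .
\end{equation}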
\subsection{Extracting the Gravitational Radiation} \label{sec_ext_gr} As mentioned in the Introduction, we shall here construct an energy extraction method which is the higher $D$ generalisation of the original method used by D'Eath and Payne, based on Bondi's news function. The $D$ dimensional extension of the news function formalism has been recently addressed in~\cite{Tanabe:2011es,Tanabe:2012fg}, where the following mass loss formula for the Bondi mass $M_{B}$ was derived: \begin{equation}\label{eq:BondiMassLoss} \dfrac{dM_B}{d\hat\tau}=-\dfrac{1}{32\pi G_D}\int_{S^{D-2}}\dot{\mathfrak{h}}_{\hat I\hat J}^{[1]}\dot{\mathfrak{h}}^{[1]\hat I\hat J}d\Omega_{D-2} \ . \end{equation} The dot denotes derivative with respect to the retarded time $\hat \tau$; the Latin indices $\hat I$ are raised with the metric components $g_{\hat I\hat J}$; all these quantities, together with the metric functions appearing inside the integral, will be defined in the following. The geometry we are considering in de Donder coordinates has the generic form \begin{equation}\label{eq:metricDeDonder} ds^2=ds^2_{Flat}+h_{uu}du^2+2h_{ui}dx^{i}du+h_{vv}dv^2+2h_{uv}dudv+2h_{vi}dx^idv+h_{ij} dx^i dx^j \ . \end{equation} We can transform it to coordinates $x^{\mu'}=\{\tau,r,\theta,\phi^i\}$, where $r=\sqrt{\rho^2+z^2}$, $\tau=t-r$, $\theta$ is the angle with the $z$ axis and $\phi^i$ are the angles on the transverse plane; primed indices denote these new coordinates. Then the metric reads: \begin{equation}\label{eq:metricDeDonderAngular} ds^2=ds^2_{Flat}+h_{\tau\tau}d\tau^2+2h_{\tau r}d\tau dr+h_{rr}dr^2+2h_{\tau \theta}d\tau d\theta+h_{\theta\theta}d\theta^2+2h_{r\theta}drd\theta+r^2\sin^2\theta h_{\phi\phi}d\Omega_{D-3}^2 \ , \end{equation} where the various components $h_{\mu' \nu'}$ are defined in Appendix~\ref{app:Bondi} in terms of the seven aforementioned scalar functions.
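For reference, since $t=\tau+r$, the flat part of the metric in these coordinates reads
\begin{equation}
ds^2_{Flat}=-d\tau^2-2\,d\tau dr+r^2\left(d\theta^2+\sin^2\theta\, d\Omega^2_{D-3}\right) \ .
\end{equation}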
Next, we change to Bondi coordinates $\{x^{\hat \mu}\}$, related to these intermediate coordinates through the transformation \begin{equation}\label{eq:ChangeBondi} x^{\mu'}= x^{\mu'}(x^{\hat \alpha}) \ , \end{equation} such that the new $g_{\hat r \hat r}$ and $g_{\hat r \hat \theta}$ metric components vanish, and the form of the metric becomes \begin{equation}\label{eq:metricBondi} ds^2=g_{\hat \tau \hat \tau}d\hat\tau^2+2g_{\hat \tau \hat r}d\hat \tau d\hat r+2g_{\hat I \hat \tau}dx^{\hat I}d\hat\tau+g_{\hat I \hat J}dx^{\hat I}dx^{\hat J} \ . \end{equation} Here $x^{\hat I}=\left\{\hat\theta,\hat \phi ^i\right\}$ and the radial coordinate $\hat r$ is chosen such that \begin{equation}\label{eq:AerialR} \sqrt{|g_{\hat I\hat J}|}=\hat r^{D-2}\Omega_{D-2}\ . \end{equation} Once such coordinates are found, and assuming an asymptotically flat spacetime containing gravitational radiation, it follows that asymptotically~\cite{Tanabe:2011es} \begin{equation}\label{eq:BondiDecay} \dfrac{g_{\hat I \hat J}}{\hat r^2}=\Omega_{\hat I \hat J}+\mathfrak{h}_{\hat I\hat J} =\Omega_{\hat I \hat J}+\sum_{k\geq 0}\dfrac{\mathfrak{h}_{\hat I \hat J}^{[k+1]}}{\hat r^{D/2+k-1}} \ , \end{equation} where $\Omega_{\hat I \hat J}$ is the metric on the unit $(D-2)$-sphere and $k$ runs over all integers for $D$ even and over semi-integers for $D$ odd. This defines the $\mathfrak{h}_{\hat I \hat J}^{[1]}$ components appearing in the mass loss formula~\eqref{eq:BondiMassLoss}. Due to the axial symmetry of our problem one need not, in fact, transform the angles on the transverse plane, i.e. $\phi^i=\hat\phi^i$, and \begin{equation} \mathfrak{h}_{\hat I \hat J}dx^{\hat I} dx^{\hat J}=\mathfrak{h}_{\hat \theta \hat \theta}\,d{\hat\theta}^2+\mathfrak{h}_{\hat \phi \hat \phi}\sin^2\hat\theta\, \Omega_{ij}d\hat\phi^id\hat\phi^j \ .
\end{equation} From condition~\eqref{eq:AerialR}, we can then eliminate $\mathfrak{h}_{\hat \theta \hat \theta}$ in terms of $\mathfrak{h}_{\hat \phi \hat \phi }$ and find that asymptotically \begin{equation} \mathfrak{h}_{\hat \theta \hat \theta}\rightarrow-(D-3)\mathfrak{h}_{\hat \phi \hat \phi}\ , \end{equation} so the mass loss formula can be written as a $\hat \theta$ angular integral of the following angular power flux \begin{equation}\label{eq:BondiMassSimpler} \dfrac{dM_B}{d\hat\tau d\cos\hat\theta}=-\dfrac{(D-2)(D-3)\Omega_{D-3}}{32\pi G_D}\lim_{\hat r\rightarrow +\infty}\left[\hat r\hat \rho^{\frac{D-4}{2}} \dot{\mathfrak{h}}_{\hat \phi \hat \phi}\right]^2 \; . \end{equation} Using the general form of our metric in de Donder coordinates~\eqref{eq:metricDeDonder}, we have constructed the coordinate transformation~\eqref{eq:ChangeBondi} in Appendix~\ref{app:Bondi}. One then shows that \begin{equation}\label{eq:BondiMassFinal} \dfrac{dM_B}{d\hat\tau d\cos\hat\theta}=-\dfrac{(D-2)(D-3)\Omega_{D-3}}{32\pi G_D}\lim_{\hat r\rightarrow +\infty}\left[\hat r\hat \rho^{\frac{D-4}{2}} \left(\dot E+\dot H\right)\right]^2 \ , \end{equation} where all functions are evaluated with Bondi coordinates. An important remark is that our derivation does not rely on the metric being perturbative, and therefore it is valid \textit{non-perturbatively}. The assumptions are simply: i) The metric is expressed in de Donder coordinates as in Eq.~\eqref{eq:metricDeDonder}; ii) the metric is axisymmetric, cf. Eq.~\eqref{app:gen_perts}; iii) the spacetime is asymptotically flat and contains gravitational radiation, cf. Eq.~\eqref{eq:BondiDecay}. Furthermore, asymptotically, the Bondi coordinates used in~\eqref{eq:BondiMassSimpler} will approach de Donder coordinates, from the construction in Appendix~\ref{app:Bondi}. 
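Incidentally, the elimination of $\mathfrak{h}_{\hat \theta \hat \theta}$ leading to \eqref{eq:BondiMassSimpler} can be made explicit. Condition \eqref{eq:AerialR} fixes the determinant of $g_{\hat I\hat J}/\hat r^2$ to that of the round sphere and, since $\det(\Omega+\mathfrak{h})=\det\Omega\,\big(1+\Omega^{\hat I\hat J}\mathfrak{h}_{\hat I\hat J}+\ldots\big)$, at leading order in $1/\hat r$
\begin{equation}
0=\Omega^{\hat I\hat J}\mathfrak{h}_{\hat I\hat J}=\mathfrak{h}_{\hat\theta\hat\theta}+(D-3)\,\mathfrak{h}_{\hat\phi\hat\phi} \ ,
\end{equation}
where the factor $D-3$ counts the transverse angles in the axially symmetric decomposition above.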
So, in general, we can express the mass loss formula in de Donder coordinates as \begin{equation} \frac{dM_B}{d{\tau}d\cos \theta}=-\frac{(D-2)(D-3)\Omega_{D-3}}{64\pi G_D} \lim_{r \rightarrow + \infty} \left[r\rho^\frac{(D-4)}{2}\left(E_{,v} +H_{,v}+E_{,u}+H_{,u}\right) \right]^2\ . \label{eq:bondiadapted} \end{equation} Observe that the $\partial_u$ terms correspond to fluxes across $v={\rm constant}$ surfaces, which are supposed to vanish on $\theta=\pi$, as argued in~\cite{Herdeiro:2011ck}, for the problem of gravitational shock wave collisions (indeed we have checked this numerically). We shall now specialise the general formula~\eqref{eq:bondiadapted}, so as to facilitate the application of this result to the perturbative problem of shock wave collisions. We assume the spacetime has an ADM energy scale $2\mu$, with which we construct a length scale $L^{D-3}=8\pi G_D \mu/\Omega_{D-3}$. Then, taking units with $L=1$, and dividing by the total ADM energy scale, the inelasticity factor, i.e. the fraction of energy radiated into gravitational waves, is \begin{equation} \epsilon_{\rm radiated}= \int_{-1}^{1} \tfrac{d\cos\theta}{2}\lim_{r\rightarrow +\infty}\int d\tau W(\tau,r,\theta)^2\equiv \int_{-1}^{1} \tfrac{dx}{2}C(x)\label{epsilon} \ , \end{equation} with \begin{equation} W(\tau,r,\theta)\equiv\sqrt{\frac{(D-2)(D-3)}{8}}\, \, r\rho^\frac{D-4}{2}\left(E_{,v} +H_{,v}+E_{,u}+H_{,u}\right) \label{app:waveformDef} \ . \end{equation} $C(x)$ is the generalisation of the Bondi news function (integrated over $\tau$) to higher dimensions. In the appropriate limit, this (more general) result reduces to the one obtained in \cite{Herdeiro:2011ck} using the Landau-Lifshitz method. In the particular case of the perturbative method for shock wave collisions, the framework is valid for $\theta$ close to $\pi$ ($x \sim -1$).
The approximation usually taken is then to expand around the axis, and to extrapolate off the axis by integrating the truncated expansion over $\theta$. Besides being axially symmetric, our system is invariant under reflections $z\leftrightarrow -z$ so $C(x)$ must be even. Then \begin{equation} \epsilon_{\rm radiated}= \int_{-1}^{1} \tfrac{dx}{2}\sum_{n=0}^{+\infty}C_n(x^2-1)^{n}=\sum_{n=0}^{+\infty}\dfrac{C_n(-2)^nn!}{(2n+1)!!}\label{epsilonexpandee} \ . \end{equation} If the news function $C(x)$ is analytic, then the expansion close to the axis is indeed sufficient, provided that $\lim_{n\rightarrow +\infty}|C_{n+1}/C_n|\leq 1$. Furthermore, since we have an extra suppression factor in~\eqref{epsilonexpandee}, we expect higher orders to become increasingly less important. The approximation used by D'Eath and Payne~\cite{D'Eath:1992qu} in $D=4$ is an isotropy approximation, so only $C_0$ is kept, which gives (to second order in perturbation theory) $\epsilon_{\rm radiated}=0.163$. This is in agreement with the latest numerical relativity simulations of ultra-relativistic particle or black hole collisions at large boost~\cite{East:2012mb,Sperhake:2008ga}, which seems to indicate that the angular corrections ($C_n$ for $n>0$) are small in $D=4$. \section{$D$ dimensional shock wave collisions with charge} \label{sec_charge} \label{section2} In this Section we shall analyse an example of shock wave collisions with an electric charge parameter. We will apply the formalism developed in the previous Section while testing its assumptions and commenting on its limitations. This example will make clear that the perturbative construction is only applicable if the bulk of the gravitational radiation is generated far away from the strongly curved region of space-time.
\subsection{The $D$-dimensional metric} \label{dmetric} The geometry of the $D$-dimensional Reissner-Nordstr\"om solution with mass $M$ and charge $Q$ is \cite{Tangherlini:1963bw} \begin{equation} ds^2 = -V(r)dt^2 + \frac{dr^2}{V(r)} + r^2 d\Omega^2_{D-2}\ , \end{equation} where \begin{equation} V(r)=1- \frac{16\pi G_D M}{ (D-2) \Omega_{D-2}}\frac{1}{r^{D-3}}+\frac{8\pi G_D Q^2}{(D-2)(D-3)} \frac{1}{r^{2(D-3)}}\ . \end{equation} It is intuitive, as first argued by Pirani \cite{Pirani}, that the gravitational field of a mass should become increasingly similar to that of a gravitational plane wave as its speed is increased. For the case of a RN `particle', the corresponding Aichelburg-Sexl shock wave is found by boosting this black hole and then taking simultaneously the limit of infinite boost $\gamma$ and vanishing mass and charge, keeping fixed \cite{Lousto:1988ua} \begin{equation} \mu=\gamma M \ , \qquad \mathcal{Q}^2=\gamma Q^2 \ . \end{equation} The resulting geometry for a particle moving in the $+z$ direction in Brinkmann coordinates is \begin{equation} ds^2 = -2d{u} d{v} + d{\rho}^{2} + {\rho}^2 d {\Omega}^2_{D-3}+\sqrt{2}\kappa \Phi({\rho}) \delta({u}) d{u}^2\ ,\label{AiSe} \end{equation} where $\kappa\equiv 8\pi G_D \mu/\Omega_{D-3}$. The function $\Phi$ depends only on ${\rho}$ (and on the charge, through the parameter $a/\kappa$) and takes the form \cite{Yoshino:2006dp,Gingrich:2006qh} \begin{equation} \Phi({\rho},a/\kappa)=-\frac{2a/\kappa}{(2D-7){\rho}^{2D-7}}+\left\{ \begin{array}{ll} -2\ln({\rho})\ , & D=4\ \vspace{2mm}\\ \displaystyle{ \frac{2}{(D-4){\rho}^{D-4}}}\ , & D>4\ \label{phidef} \end{array} \right. \ , \end{equation} where \begin{equation} a\equiv \frac{8\pi^2G_D\mathcal{Q}^2}{D-3}\frac{(2D-5)!!}{(2D-4)!!} \ . \end{equation} The above coordinates are discontinuous at the shock.
Transforming to Rosen coordinates $(\bar{u},\bar{v},\bar{x}^i)$ \cite{Rosen}, which are continuous at the shock (see \cite{Herdeiro:2011ck} for the explicit transformation), the geometry for two oppositely directed shock waves, with equal charge parameter $a$ in \eqref{phidef}, and equal energy $\mu$ may be written everywhere as a simple superposition of the two individual geometries, except in the future of the collision. Moreover, in a boosted frame, moving with respect to the $({u},{v})$ chart with velocity $\beta$ in the $-z$ direction, the oppositely directed shock waves keep their form, but acquire new energy parameters, respectively, \begin{equation} \kappa \rightarrow e^\alpha \kappa\equiv \nu \ , \qquad \kappa \rightarrow e^{-\alpha} \kappa\equiv \lambda \ , \end{equation} where $e^\alpha=\sqrt{(1+\beta)/(1-\beta)}$. In this boosted frame, the geometry reads \begin{multline}ds^2 = -2d{\bar u} d{\bar v} + \left[\Big(1+\dfrac{\nu {\bar u} \theta({\bar u})}{\sqrt{2}}\Phi''\Big)^2+ \Big(1+\dfrac{\lambda {\bar v} \theta({\bar v})}{\sqrt{2}}\Phi''\Big)^2-1\right]d{\bar \rho}^{2} \\ + {\bar \rho}^2\left[\Big(1+ \frac{\nu {\bar u} \, \theta({\bar u})}{\sqrt{2} \bar \rho}\Phi'\Big)^2 + \Big(1+ \frac{\lambda {\bar v} \, \theta({\bar v})}{\sqrt{2} \bar \rho}\Phi'\Big)^2-1\right] d\Omega^2_{D-3} \ ,\label{collision} \end{multline} which is valid everywhere except in the future light cone of ${\bar u}={\bar v}=0$. \subsection{Setting up the perturbative computation and the boundary conditions} To set up a perturbative computation and derive the geometry in the future light cone of the collision, we proceed as in \cite{Herdeiro:2011ck}. In the boosted frame one shock carries much more energy than the other and we thus treat the weak shock (traveling in the $-z$ direction) as a perturbation of the geometry of the strong shock (traveling in the $+z$ direction).
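Note, in passing, that the product of the two energy parameters is boost invariant, while their ratio is the expansion parameter of \eqref{eq:pertexpansion}:
\begin{equation}
\lambda\nu=\kappa^2 \ , \qquad \frac{\lambda}{\nu}=e^{-2\alpha}=\frac{1-\beta}{1+\beta} \ ,
\end{equation}
so a large boost, $\beta\rightarrow 1$, makes the weak shock a small perturbation of the strong one.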
The geometry of the latter is flat for $\bar{u}>0$; thus we make a perturbative expansion of the Einstein equations around flat space-time in this region. We shall make the perturbative expansion in Brinkmann coordinates. Moreover, we choose to work with the following rescaled dimensionless coordinates $({u},{v},x^i)\rightarrow\kappa^{1/(D-3)}({u},{v},x^i)$ and rescaled profile function $\Phi({\rho};a/\kappa)\rightarrow\kappa^{-\frac{D-4}{D-3}}\Phi({\rho};a/\kappa^2)$. Thus, hereafter, $\Phi({\rho})\equiv\Phi\left({\rho};a/\kappa^2\right)$ and all coordinates are dimensionless. The boundary conditions for the perturbative computation are given by the geometry (\ref{collision}) in the limit ${u}=0^+$, yielding only the first two orders in~\eqref{eq:pertexpansion} (notice that these boundary conditions are exact, albeit written in a perturbative form):\footnote{The coordinate transformation from Rosen coordinates \eqref{collision} to Brinkmann coordinates is adapted to the strong shock. In particular, the metric remains continuous at $\bar{v}=0$ after the coordinate transformation. This behaviour at $\bar{v}=0$ is consistent with the propagation of the initial data from $u=0$, as can be checked numerically.} \begin{eqnarray} h^{(1)}_{{u}{u}}&=&-\Phi'^2k({v},{\rho})\ ,\qquad h^{(1)}_{{u}{i}}=\frac{x_{{i}}}{{\rho}}\sqrt{2}\Phi'k({v},{\rho}) \ , \\ h^{(1)}_{{i}{j}}&=&-2\delta_{{i}{j}}h({v},{\rho})-2\frac{x_{{i}}x_{{j}}}{{\rho}^2}\left(k({v},{\rho})-h({v},{\rho})\right) \ , \end{eqnarray} and \begin{eqnarray} h^{(2)}_{uu}=\frac{\Phi'^2}{2}k(v,\rho)^2 \ ,\qquad h^{(2)}_{ui}=-\frac{x_i}{\rho}\frac{\Phi'}{\sqrt{2}}k(v,\rho)^2\ , \\ h^{(2)}_{ij}=\delta_{ij}h(v,\rho)^2-\frac{x_ix_j}{\rho^2}\left(k(v,\rho)^2-h(v,\rho)^2\right) \ , \end{eqnarray} where \begin{equation} h(v,\rho)\equiv-\frac{\Phi'}{2\rho}(\sqrt{2}v-\Phi)\theta(\sqrt{2}v-\Phi)\ ,\qquad k(v,\rho)\equiv-\frac{\Phi''}{2}(\sqrt{2}v-\Phi)\theta(\sqrt{2}v-\Phi) \ .
\end{equation} \begin{figure}[t] \includegraphics[scale=0.2]{pastlightconecharge} \caption{Evolution of the weak shock null generators (incident blue, green and brown arrows) from the viewpoint of Brinkmann coordinates in the boosted frame in $D>4$. For $u<0$ they are at $v=0$; then the generators undergo a discontinuity in $v$ at $u=0$, which is $\rho$ dependent and negative for small $\rho$. They jump to the collision surface (green and brown lines). Generically, the rays gain shear and: i) for large $\rho$ (green part of the curve) focus along the caustic (red line); ii) for small $\rho$ (brown part of the curve) diverge. Undeflected rays are drawn in green.} \label{lightcone} \end{figure} The step function in the previous equations jumps at the collision, which, in Brinkmann coordinates, occurs at $u=0$, $v= \Phi/\sqrt{2}$. As the weak shock null generators (traveling along $v=0,u<0$) reach the strong shock at $u=0$ there is a discontinuity in Brinkmann coordinates. To understand what occurs, we consider the trajectory of a weak shock null generator that, before the collision, obeys the parametric equations \begin{equation} u(\Lambda)=\Lambda \ , \qquad v(\Lambda)=0 \ , \qquad \rho(\Lambda)= \xi \ . \end{equation} Then, after the shock, its trajectory is given by \cite{Herdeiro:2011ck} \begin{equation} u(\Lambda)=\Lambda \ , \qquad v(\Lambda) = \theta(\Lambda)\left(\frac{\Phi(\xi)}{\sqrt{2}} + \frac{\Lambda \Phi'(\xi)^2}{4}\right)\ , \qquad \rho(\Lambda)= \xi\left(1-\frac{\sqrt{2}\Lambda \theta(\Lambda)}{\xi^{D-2}}\right) \ . \end{equation} Indeed, the $v$ coordinate jumps from $v=0$ to the surface $v=\Phi/\sqrt{2}$, i.e. the \textit{collision surface} - Fig. \ref{lightcone}. After this jump the $v$ coordinate of the trajectory increases, \textit{except} at points where $ \Phi'(\xi)=0$. In the uncharged case there were no extrema of this profile function. But in the charged case there is one maximum. 
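This extremum can be located explicitly by differentiating the rescaled profile \eqref{phidef} for $D>4$:
\begin{equation}
\Phi'(\rho)=\frac{2\,a/\kappa^2}{\rho^{2D-6}}-\frac{2}{\rho^{D-3}}=-\frac{2}{\rho^{D-3}}\left(1-\frac{a/\kappa^2}{\rho^{D-3}}\right) \ ,
\end{equation}
which has a single zero, at $\rho_\star=(a/\kappa^2)^{1/(D-3)}$, with $\Phi'>0$ for $\rho<\rho_\star$ and $\Phi'<0$ for $\rho>\rho_\star$, so $\Phi$ attains its maximum there. Comparing with the deflection angle given below, one sees that $\tan\alpha=-\Phi'(\rho)/\sqrt{2}$, so the undeflected rays sit exactly at this extremum.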
The ray incident at the corresponding value of $\xi$ will follow a path of $v={\rm constant}$ after the collision. This is possible for a null trajectory because such a ray is \textit{undeflected}. In general the rays are deflected by an angle $\alpha$ in the $u-\rho$ plane, such that \begin{equation} \tan \alpha=\frac{\sqrt{2}}{\rho^{D-3}}\left(1-\frac{a/\kappa^2}{\rho^{D-3}}\right) \ . \end{equation} It follows that the weak shock null generators become convergent (towards the symmetry axis) for $\rho>(a/\kappa^2)^{1/(D-3)}$, divergent for $\rho<(a/\kappa^2)^{1/(D-3)}$ and there are undeflected generators for $\rho=(a/\kappa^2)^{1/(D-3)}$ - Fig. \ref{rays}. This is qualitatively very different from the uncharged case where all rays are convergent (see Fig. 4 in \cite{Herdeiro:2011ck}); the divergent behaviour is caused by the repulsive gravitational effect of the charge. Finally, for an observation point $\mathcal{P}$ far from the collision and near the axis, Fig. \ref{rays} suggests four bursts of radiation, associated to the four optical paths that reach $\mathcal{P}$. The order in which the four rays should arrive at $\mathcal{P}$ defines their numbering. The first (second) ray comes from the same (opposite) side of the axis as $\mathcal{P}$ and from a low redshift region (cf. Fig. \ref{lightcone}). The third (fourth) ray comes from the same (opposite) side of the axis as $\mathcal{P}$ and from a high redshift region. \begin{figure}[t] \includegraphics[scale=0.25]{rays_charged2} \caption{Diagram illustrating (a section of) the \textit{spatial} trajectories of the null generators of the weak shock, exhibiting their behaviour after $u=0$. The introduction of charge leads to a divergent behaviour of the rays for $\rho<(a/\kappa^2)^{1/(D-3)}$ and the existence of the undeflected rays (green lines). Moreover, amongst the convergent rays there is one family of maximal deflection (dashed red and blue lines). The convergent rays will meet at the axis forming a caustic.
For points outside the axis, far away from the collision, such as point $\mathcal{P}$, the diagram suggests four radiation peaks associated to rays 1-4. We shall see that this interpretation matches the wave forms exhibited in Section \ref{sec_num_rel}. An analogous Figure was presented in \cite{Yoshino:2006dp}.} \label{rays} \end{figure} \subsection{Gauge Fixing at $u=0$ and Future Development of the Metric} \label{sec_gauge_fix} To the future of $u=0$, we use the perturbative expansion introduced in Section~\ref{eq:generalframework} in de Donder gauge. In first order perturbation theory, the gauge fixing condition~\eqref{gauge} does not affect the radiative components in $h^{(1)}_{ij}$; we refer the interested reader to Appendix~\ref{gaugetransformation} for the details. Using the results in Appendix~\ref{gaugetransformation} we find the initial conditions \begin{eqnarray}\label{eq:Indpt-Scalars_init1} E^{(1)}(0,v,\rho)&=&\left(\frac{\Phi'_1}{\rho}+\frac{2D-5}{D-2}\frac{\Phi'_2}{\rho}\right)(\sqrt{2}v-\Phi)\theta(\sqrt{2}v-\Phi) \ , \\ \label{eq:Indpt-Scalars_init2} H^{(1)}(0,v,\rho)&=&\frac{(D-3)(D-4)}{2(D-2)}\frac{\Phi'_2}{\rho}(\sqrt{2}v-\Phi)\theta(\sqrt{2}v-\Phi) \ , \end{eqnarray} where $\Phi_{1,2}$ are, respectively, the charge (in)dependent parts of $\Phi$. To obtain the relevant scalar functions in first order, $E^{(1)},H^{(1)}$, in the future light cone of the collision, we use the integral solution~\eqref{eq:sol_orderbyorder} \begin{equation} \label{eq:sol_orderbyordercharged} h^{(1)}_{\mu\nu}=F.P.\int_{u'>0}d^{D}y'\, G(y,y')\left[T^{(0)}_{\mu\nu}(y')+2\delta(u')\partial_{v'}h^{(1)}_{\mu\nu}(y')\right] \ . \end{equation} The source that must be considered in first order perturbation theory, $T^{(0)}_{\mu\nu}$, is the energy momentum tensor associated to the Maxwell field of the \textit{background} geometry.
By taking the limit described at the beginning of section \ref{dmetric} also for the Maxwell field, one computes that the energy-momentum tensor associated to the shock with support at $u=0$ has a single non-vanishing component: \begin{equation} T_{uu}=\frac{\mathcal{Q}^2\pi}{\rho^{2D-5}}\frac{(2D-5)!!}{(2D-4)!!}\delta(u) \ . \label{bsource} \end{equation} First observe that this energy-momentum tensor \textit{does not} have support on $u>0$. Second, this energy-momentum tensor does not source the radiative components of the first order perturbation $h_{ij}^{(1)}$, since only the $T_{uu}$ component is non-vanishing. Finally, this energy-momentum tensor is completely taken into account already by the strong shock geometry, and hence by the initial conditions considered. Indeed, the Einstein tensor computed from (\ref{AiSe}) reads \begin{equation} G_{uu}=\frac{(D-3)a}{\rho^{2D-5}}\delta(u) \ , \end{equation} which, of course, solves the Einstein equations, $G_{uu}=8\pi G_D T_{uu}$, with the source (\ref{bsource}); matching the coefficients recovers precisely the definition of $a$ in Section \ref{dmetric}. As for the weak shock, it sources an energy-momentum tensor with support on $\bar{v}=0$ that, in principle, contributes to the radiative components via the first term in \eqref{eq:sol_orderbyordercharged}. In this paper we shall focus on the second term in \eqref{eq:sol_orderbyordercharged}, which suffices to demonstrate the difficulties in applying the perturbative method to shock waves with charge. The determination of the radiative scalar functions $E^{(1)}, H^{(1)}$ is summarised in Appendix~\ref{app:simplify}. \subsection{Integration Limits} We can now discuss the domain for the time integration in (\ref{epsilon}). In the uncharged case, we observed in \cite{Herdeiro:2011ck} that both the beginning of the radiation burst and its peak, as observed at some space-time point $\mathcal{P}$ in the future of the collision, could be understood by a simple ray analysis, similar to that displayed in Figs.
\ref{lightcone} and \ref{rays}, together with an analysis of the intersection of the past light-cone of $\mathcal{P}$ with the collision surface. A similar analysis can be made for the charged case. Let $\mathcal{P}$ have space-time coordinates $(u,v,\rho)$ or, equivalently, $(\tau, r,\theta)$. An observation point close to the axis and far away from the collision is struck by four rays - Fig. \ref{rays}. These arrive at retarded times $\tau_i$, $i=1,\ldots,4$, where $\tau_i=\tau_i(r,\theta)$ are determined by solving \begin{eqnarray} r\sin\theta=s\bar{\rho}\left(1+\Phi'(\bar{\rho})\frac{\tau+2r\sin^2\frac{\theta}{2}}{2\bar{\rho}}\right)\ ,\\ \tau\left(1-\frac{\Phi'^2(\bar{\rho})}{4}\right)+2r\left(\cos^2\frac{\theta}{2}-\frac{\Phi'^2(\bar{\rho})}{4}\sin^2\frac{\theta}{2}\right)=\Phi(\bar{\rho}) \ , \label{retardedtimes} \end{eqnarray} with $s=+1$, for $\tau_1$ and $\tau_3$, and $s=-1$, for $\tau_2$ and $\tau_4$, simultaneously determining the auxiliary variable $\bar{\rho}$, which now (unlike in the uncharged case) has two solutions for each of the two values of $s$, reflecting the two rays coming from each side (cf. Fig. \ref{rays}). A qualitative difference with respect to the uncharged case is, however, that the past light cone of $\mathcal{P}$ will have a non-vanishing intersection with the collision surface at \textit{all} retarded times, corresponding to points very close to the axis. We shall now argue, however, that this contribution is unphysical and should be neglected.\footnote{Numerically, this contribution is small but non-zero, in contrast to the neutral case where there is no domain of integration prior to $\tau_1$.} In Rosen coordinates (cf. Section \ref{section2}), the collision occurs at $\bar{u}=0=\bar{v}$. In these (continuous) coordinates, the future light cone of the collision has therefore two branches: $\bar{u}=0, \bar{v}>0$ and $\bar{u}>0, \bar{v}=0$.
In terms of the Brinkmann coordinates these conditions read: \begin{equation} u=0 \ , \ \ v>\frac{\Phi(\rho)}{\sqrt{2}} \ , \qquad {\rm and} \qquad u>0 \ , \ \ v=\frac{\Phi(\bar{\rho})}{\sqrt{2}}+u\frac{\Phi'(\bar{\rho})^2}{4} \ , \ \ {\rm where} \ \ \rho=\bar{\rho}\left(1+\frac{u\Phi'(\bar{\rho})}{\sqrt{2}\bar{\rho}}\right) \ . \label{branches} \end{equation} The second of these branches determines a surface which intersects the worldline of $\mathcal{P}$ at the retarded times $\tau_i$ computed from (\ref{retardedtimes}). In particular, prior to $\tau_1$, the worldline of $\mathcal{P}$ is \textit{not} in the future light-cone of the collision. Thus, the radiation observed along the worldline of $\mathcal{P}$ before $\tau_1$ is not causally connected to the collision and consequently we neglect it. \subsection{Numerical Results} \label{sec_num_rel} \begin{figure}[t] \includegraphics[scale=0.66,clip=true,trim= 0 0 0 0]{RedW1D6.eps} \hspace{1cm} \includegraphics[scale=0.66,clip=true,trim= 0 0 0 0]{RedW2D6.eps} \vspace{3mm}\\ \includegraphics[scale=0.66,clip=true,trim= 0 0 0 0]{RedW1D8.eps}\hspace{1cm} \includegraphics[scale=0.66,clip=true,trim= 0 0 0 0]{RedW2D8.eps} \caption{\label{fig:RedWFCharged} {\em Left panels:} First wave form signals with a time scale set by $\Delta \tau_1\equiv \tau_2-\tau_1$, suitably rescaled. {\em Right panels:} Second wave form signals with a time scale set by $\Delta \tau_2\equiv \tau_4-\tau_3$, suitably rescaled. The top panels are for $D=6$ and the bottom ones for $D=8$. The signals were generated for $a=0.01,0.1$ and $1$ with no variation in shape.} \end{figure} In Fig.~\ref{fig:RedWFCharged}, we display some wave forms for $D=6,8$ obtained from the numerical integration of the contributions in Eqs.~\eqref{wf1}, \eqref{wf2}, \eqref{eq:Ecommau} and~\eqref{eq:Hcommau}. The integration was done using the numerical code developed in~\cite{Herdeiro:2011ck,Coelho:2012sy}.
We have found two radiation signals as expected from the geometrical optics analysis. The left-hand-side plots, corresponding to the signal coming from rays 1 and 2 in Fig. \ref{rays}, coincide precisely with the wave forms computed without charge in~\cite{Herdeiro:2011ck,Coelho:2012sy}. The second signal (right-hand-side plots) is associated with rays 3 and 4 in Fig. \ref{rays}; these are the rays that are incident close to the axis. In a non-linear computation with horizon formation, a considerable part of this signal should be caught inside the black hole horizon, and should therefore be absent from the viewpoint of an asymptotic observer. The evidence that supports this statement comes from considering the apparent horizon. Indeed, cutting off the $\rho'$ integration region inside the exterior apparent horizon (defined in \cite{Yoshino:2006dp}), the second burst of radiation (between $\tau_3$ and $\tau_4$) vanishes. Both re-scaled signals are actually independent of the charge parameter for large $r$. This should not be surprising for rays that are incident far away from the center of the collision, since for large $\rho$ the charge contribution to the gravitational field decays faster. However, the second (anomalous) re-scaled signal is also independent of the charge parameter, so it gives a constant contribution to $\epsilon_{\rm radiated}$ when integrated in time. In particular, this means that the result is discontinuous in the $a\rightarrow 0$ limit, as compared to the $a=0$ result. This is clearly related to the fact that the source is always repulsive for non-vanishing $a$. We have extracted the two contributions to $\epsilon_{\rm radiated}$ for large $r$, and found that the contribution from the first wave form coincides (within a numerical error of less than 1\%) with the $a=0$ computation. For $D=6$ we found $0.332\pm 0.004$ and for $D=8$ we found $0.374\pm 0.003$.
If we add the contribution from the anomalous wave form signal, we get respectively $0.695\pm 0.004$ and $0.876\pm 0.002$, independently of $a$. Up to now we have focused on the even $D$ case. For $D$ odd, we expect the method to become even less meaningful, since the Green's function for odd $D$ has support not only on the past light cone, but also \textit{inside} the past light cone. Since the shock wave profile becomes repulsive at the center (see Fig.~\ref{lightcone}), we will get contributions to the integrals which come from highly deflected rays that went through the highly curved and non-linear region of spacetime. We should also mention that for $D=4$, the integration of the wave form does not even converge for large $r$ to extract a finite $\epsilon_{\rm radiated}$. These results, altogether, indicate the breakdown of the perturbative method, and clarify its regime of validity. To summarise, the perturbative method should capture the relevant physics only when the optical rays arriving at the observation point pass exclusively through weak field regions \cite{Herdeiro:2011ck,Coelho:2012sy,D'Eath:1992hb,D'Eath:1992hd,D'Eath:1992qu}. \section{Final Remarks} \label{sec_final_rem} An ultra-relativistic particle collision is a highly dynamical process and also, if a black hole forms as a result, a strong field process. It is then quite remarkable that a perturbative method, such as the one originally proposed in \cite{D'Eath:1992hb,D'Eath:1992hd,D'Eath:1992qu}, can be used to extract relevant physics, such as the inelasticity of the process. A good way to test the method is to generalise it. In \cite{Herdeiro:2011ck,Coelho:2012sy} we have extended it to higher dimensions, revealing a remarkably simple pattern in first order perturbation theory. Second order perturbation theory is then the next goal and, in this problem, one of particular importance.
On the one hand, it will test whether the simple pattern observed in \cite{Coelho:2012sy} is a special property of linear theory. On the other hand, the initial data are exact at second order. Furthermore, to this order, an agreement is found (in $D=4$) with numerical relativity simulations. In this paper we have paved the road for this second order computation, by presenting the setup for higher order perturbation theory and formulas for extracting the inelasticity, based on a generalisation of the Bondi mass formula to higher $D$. It remains to compute the relevant scalar functions in second order and to perform the numerical integrations. We shall report on this elsewhere. Another generalisation considered in this paper was the collision of shock waves with a charge parameter, reminiscent of the Reissner-Nordstr\"om black holes from which they were obtained. These collisions have been considered before for apparent horizon computations \cite{Yoshino:2006dp,Gingrich:2006qh}, from which bounds on the inelasticity of the process can be obtained. The generic observed behaviour can be described as follows (for simplicity we consider only shock waves with equal charge parameter). Firstly, including the charge parameter increases the value of $\epsilon_{\rm AH}$, suggesting that a larger fraction of the energy is radiated away. Intuitively, this may be associated with the fact that the charge term in the Reissner-Nordstr\"om (RN) black hole yields a repulsive effect. Secondly, beyond a certain value of the charge parameter no apparent horizon can be found. Again, this is expected from the RN solution, which has a limit on the charge-to-mass ratio, beyond which no event horizon exists. Thirdly, these results are independent of the relative sign of the charges of the infinitely boosted initial particles, in agreement with the observation that gravity is the dominant interaction in trans-Planckian scattering \cite{'tHooft:1987rb}.
The behaviour observed using the perturbative method at first order shows a qualitative agreement with the first and third observations above. But, as emphasized already, the perturbative method lacks legitimacy in this problem, since there is a considerable amount of radiation that originates in the strong field region, where we have no reason to trust the method. Thus, a more reasonable stance is to regard this charged example as an illustration of how the perturbative method can fail. \begin{acknowledgments} F.C., M.S. and C.R. are funded by FCT through the grants SFRH/BD/60272/2009, SFRH/BPD/69971/2010 and SFRH/BPD/77223/2011. The work in this paper is also supported by the FCT grants CERN/FP/116341/2010, PTDC/FIS/098962/2008, PTDC/FIS/098025/2008, PTDC/FIS/116625/2010, the Marie Curie action NRHEP--295189-FP7-PEOPLE-2011-IRSES, the FCT strategic project PEst-C/CTM/LA0025/2011 and by Funda\c{c}\~{a}o Calouste Gulbenkian through the ``Programa de Est\'imulo \`{a} Investiga\c{c}\~{a}o.'' \end{acknowledgments}
\section{Introduction} \par We propose a quantization of a scalar field inside the Schwarzschild black hole. We adopt the conventional canonical quantization approach and the well-known classical fact that in the Schwarzschild\ interior test particles move inexorably inward. It thus seems reasonable to assume that in the quantum theory we take the radial momentum operator to be {\it positive} definite in the direction of {\it decreasing} $r$ co-ordinate. However, the inclusion of out-going modes with positive outward radial wave-number $k_{\omega}$ would also be required by completeness. These then have to be interpreted as in-going modes with positive inward wave-number but of opposite charge. This generalises the CPT theorem as usually formulated in Minkowski space. There is a well-known lucid explanation for the existence of antiparticles in Minkowski space due to Feynman; an interesting thought experiment was given by Weinberg \cite{wein}. Here we obtain a generalization to the black hole interior. From the axiomatic point of view the main issue is the spectrum condition, viz., the existence of a positive definite operator whose spectrum would guarantee the existence of a ground state. For the case of curved spacetime, Haag et al.~\cite{haag1} have formulated this as the principle of local stability of the Wightman functions. It requires that the support of the two-point function be restricted to the forward, i.e., here, the radially inward lightcone. We show that this principle holds for our quantization. Candelas and Jensen \cite{jencan} extended the Feynman Green function with Hartle-Hawking boundary conditions to the interior of the Schwarzschild black hole. This Green function obeys a periodic boundary condition in the time co-ordinate, and thus assumes a Kubo-Martin-Schwinger-type state as the state of lowest energy. This is appropriate to the presence of an asymptotic heat bath.
The boundary conditions that our prescription entails are similar to those of Boulware \cite{Boul}, who determined the vacuum state for the global Schwarzschild manifold. We provide an {\it a priori} motivation for the vacuum state strictly in the interior and construct a causal propagator. Our results agree with those of Boulware in the interior. However, our prescription leaves open the possibility of matching with different quantizations in the exterior. This paper is organised as follows. In section 2 we begin with the quantization conditions and the re-interpretation of the modes; in section 3 we introduce the radial momentum operator. In section 4 we construct the causal propagator. Section 5 contains discussion and outlook. \section{Quantization Inside the Schwarzschild Black Hole} \par The Schwarzschild spacetime is described by the spherically symmetric line element \begin{equation} \label{lineelem} ds^{2} = (1 - \frac{2m}{r}) dt^{2} - \frac{dr^{2}}{1 - \frac{2m}{r}} - r^{2} d{\theta}^{2} - r^{2}\sin^{2}\theta {d\phi}^{2} \ee in the $\{t,r,\theta,\phi\}$ coordinate system. These coordinates permit separation of variables and are suitable for finding the mode functions. We shall also need to use the so-called tortoise co-ordinate \cite{mtw} $r^*=r+2m\ln|r/2m-1|$, in which the metric is \begin{equation} \label{linestar} ds^{2} = (1 - \frac{2m}{r}) (dt^{2} - dr^{*2}) - r^{2} d{\theta}^{2} - r^{2}\sin^{2}\theta {d\phi}^{2} \ee In this form the $t-r$ part is conformal to 1+1 Minkowski space. For $r<2m$, $r^*$ ranges from $0$ to $-\infty$ as a monotonically decreasing function of $r$. The spacetime has the Killing symmetries corresponding to time translation invariance $(t \rightarrow t + \tau)$ and rotational invariance. It has an event horizon at $r=r_{H}=2m$. The Killing vector $\partial/\partial t$ is timelike in the exterior, null on the horizon and spacelike in the interior.
The vector $\partial/\partial r$ is spacelike in the exterior and timelike in the interior. Thus, inside the black hole, the roles of time and space are reversed. Furthermore, any particle inside the black hole is inexorably drawn to the singularity. In other words, the future lightcone of a particle in the interior does not cross the horizon and necessarily terminates at the singularity. This is what motivates our prescription for quantization. For simplicity we consider a charged massless minimally coupled scalar field. The scheme, however, can be extended to any realistic field. Accordingly, defining the canonically conjugate momentum \begin{equation} \label{canmom} \Pi_r = \frac{\partial L}{\partial(\partial_{r^*} \phi)} = \partial_{r^*} \phi^{\dag}, \ee we impose the radial quantization conditions \begin{equation} \label{qcondr1} [\phi(r,t,\Omega),\frac{\partial \phi^{\dag}}{\partial r^{*}}(r,t',\Omega')] = i \delta_{\Sigma}(t,\Omega;t',\Omega') = i [-g(\Sigma)]^{-1/2} \ \delta(t - t') \ \delta(\Omega - \Omega') \ee \begin{equation} \label{qcondr2} [\phi(r,t,\Omega), \phi^{\dag}(r,t',\Omega')] = 0 = [\frac{\partial}{\partial r^{*}}\phi(r,t,\Omega), \frac{\partial}{\partial r^{*}} \phi^{\dag}(r,t',\Omega')] \ee These are ``equal-$r$ commutation relations'' for the field. Quantization is performed on constant-$r$ hypersurfaces near the horizon in the interior (denoted by $\Sigma$), which are Cauchy surfaces in the black hole interior. Consider the local stability requirement of \cite{haag1}. Paraphrased for the present case, it requires that the 2-point function $W^{(2)}(z_{1},z_{2})$ has support on the forward, i.e., radially {\it inward} light cone \begin{equation} p \cdot p \geq 0 \ee with the radial component of the momentum \begin{equation} p^r \leq 0 \ee Our quantization conditions (\ref{qcondr2}) state that $\phi$, and separately $\Pi_r$, commute at the same $r$, whereas according to (\ref{qcondr1}) the $\phi$-$\Pi_r$ commutator is nontrivial in the inward-directed light-cone.
Thus these can be taken to be an implementation of the local stability requirement. Minkowski space quantization on hypersurfaces of constant $t$ is covariant under rigid Lorentz transformations of the Cartesian $t-\vec x$ coordinates. The background geometry of the black hole in Schwarzschild\ co-ordinates suggests a preferred slicing for quantization as chosen above. The hypersurfaces of constant $r$ are also those with ${\rm Tr}K={\rm constant}$, where $K$ is the extrinsic curvature of the hypersurface. This choice of foliation is standard in the dynamical formulation of gravity \cite{wijaw}. Here the resulting field theory will be covariant under constant translations of the $t$ co-ordinate and rigid rotations centred on the origin already chosen. A quantization scheme based on any other co-ordinates will be inequivalent to the present one. We now proceed to obtain a representation of this algebra by introducing creation and annihilation operators. Consider the massless scalar wave equation $\Box \phi = 0$ with $\Box$ the appropriate d'Alembertian operator for the black hole interior. Separating this in the $\{t,r,\theta,\phi\}$ coordinates, we obtain the mode functions \begin{equation}\label{modefunc} h_{\omega lm}(r,t,\Omega) \sim \frac{R_{\omega l}(r)}{r} \ Y_{lm}(\theta ,\phi) \ e^{-i\omega t} \end{equation} where $Y_{lm}$ are the spherical harmonics.
In the co-ordinate $r^*$, the equation satisfied by $R_{\omega l}$ is \begin{equation} \label{reqn} \frac{d^{2}R_{\omega l}}{dr^{*2}} + (\omega^{2} - [\frac{l(l+1)}{r^{2}} + \frac{2m}{r^{3}}][1 - \frac{2m}{r}])R_{\omega l} = 0 \end{equation} Writing this in the form \begin{equation} \frac{d^{2}R_{\omega l}}{dr^{*2}} + k_{\omega}^{2} R_{\omega l} = 0 \end{equation} we can identify the positive definite quantity \begin{eqnarray} k_{\omega}^{2} &=& \omega^{2} - V^{\rm eff} \label{veffdef}\\ &\equiv& \omega^{2} - [\frac{l(l+1)}{r^{2}} + \frac{2m}{r^{3}}][1 - \frac{2m}{r}] \label{krdef} \end{eqnarray} with the radial wave-number squared of the mode. $V^{\rm eff}$ vanishes at the horizon, so that $k_{\omega}=\omega$ there. Further, close to the horizon, i.e., for $2m-r\ll 2m$, $R_{\omega l} \sim e^{\pm i(k_{\omega}r^{*} - \omega t)}$. We shall from now on take $k_{\omega}$ to be the positive square root of the above equation, and choose $h_{\omega lm}$ to be those mode functions which satisfy near the horizon\footnote{The $e^{-i\omega t}\psi^l(r,-\omega)$ of \cite{Boul} correspond to our $h^*_{\omega lm}$} \begin{eqnarray} \frac{\partial}{\partial r^{*}} h_{\omega lm}(r,t,\Omega) &=& -i k_{\omega} \ h_{\omega lm}(r,t,\Omega) \nonumber \\ &\phantom{=}&\hskip 20 pt ({\rm for}\quad r<2m\quad {\rm and}\quad 2m-r\ll 2m) \label{posmode} \end{eqnarray} i.e., the positive definite eigenvalues are associated with {\it ingoing} radial momentum. This is analogous to the condition for positive frequency modes in Minkowski space. With appropriate normalisation, these modes satisfy the completeness relation \begin{equation} \label{comprln} \sum_{lm} \int d\omega \ h_{\omega lm}(r,t,\Omega) \ h_{\omega lm}(r,t',\Omega') = \delta_{\Sigma}(r,t,\Omega;r,t',\Omega') = \frac{\delta(t - t') \ \delta(\Omega - \Omega')}{r^2} \ee where $\Sigma$ is the surface of quantization, a constant-$r$ hypersurface near the horizon, as before.
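These properties can be checked numerically. The following minimal sketch (in illustrative units with $m=1$, and with $l=1$ chosen by hand; neither value is taken from the text) verifies that the tortoise co-ordinate $r^*$ decreases monotonically in the interior, and that $V^{\rm eff}\leq 0$ for $r<2m$ with $V^{\rm eff}\to 0$ at the horizon, so that $k_\omega^2=\omega^2-V^{\rm eff}\geq\omega^2$ is indeed positive definite there:

```python
import numpy as np

def tortoise(r, m=1.0):
    """Tortoise coordinate r* = r + 2m ln|r/2m - 1|."""
    return r + 2.0 * m * np.log(np.abs(r / (2.0 * m) - 1.0))

def v_eff(r, l, m=1.0):
    """Effective potential [l(l+1)/r^2 + 2m/r^3][1 - 2m/r] from Eq. (veffdef)."""
    return (l * (l + 1) / r**2 + 2.0 * m / r**3) * (1.0 - 2.0 * m / r)

m = 1.0
r = np.linspace(0.05, 2.0 * m - 1e-6, 2000)  # interior: 0 < r < 2m

# r* decreases monotonically from 0 (r -> 0) to -infinity (r -> 2m)
rs = tortoise(r, m)
assert np.all(np.diff(rs) < 0)
assert abs(tortoise(1e-8, m)) < 1e-6  # r* -> 0 as r -> 0

# V_eff <= 0 inside the horizon, hence k_w^2 = w^2 - V_eff >= w^2 > 0,
# and V_eff -> 0 at the horizon, where k_w = w.
V = v_eff(r, l=1, m=m)
assert np.all(V <= 0.0)
assert abs(v_eff(2.0 * m - 1e-9, l=1, m=m)) < 1e-6
```

Since $dr^*/dr=1/(1-2m/r)<0$ inside the horizon, the monotonic decrease is expected analytically; the sketch merely confirms the statements above at sampled points.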
In the mode expansion we now take the summation over modes with the parameter $\omega$ taking both positive and negative values, while $k_{\omega}$ remains positive. Accordingly, the interior Fourier expansion near the horizon is \begin{equation} \label{modeexp} \phi_{in} = \sum_{lm} \int^{\infty}_{-\infty} \frac{d\omega}{\sqrt{2\pi} \thinspace r\thinspace \sqrt{2k_{\omega}} } (a_{\omega} \ e^{-i(k_{\omega}r^{*} - \omega t)} Y_{lm} + b_{\omega}^{\dag} \ e^{i(k_{\omega}r^{*} - \omega t)} Y^{*}_{lm}) \ee where we have written $h_{\omega} \sim e^{-i(k_{\omega}r^{*} - \omega t)}/r$ as the asymptotic form near the horizon of the general modes $h_{\omega lm}$. The indices $l,m$ on $a_{\omega}$ and $b_{\omega}$ have been suppressed. Now imposing the quantization conditions (\ref{qcondr1}), (\ref{qcondr2}), we obtain for the expansion parameters \begin{equation} \label{qcondab} [a_{\omega}, a_{\omega'}^{\dag}] = \delta(\omega - \omega') = [b_{\omega}, b_{\omega'}^{\dag}] \ee and that all other commutators vanish. We can justify the interpretation of modes implied by conditions (\ref{qcondab}) by paraphrasing the CPT invariance argument of Weinberg \cite{wein}. Consider an observer in the interior who performs an experiment in which a particle is created at point A and is ``later'', i.e., further down the radial direction, destroyed at B. This radial (temporal in the interior) ordering of events cannot be reversed for any classical events by another observer without a superluminal Lorentz transformation. However, for events separated by spacetime intervals less than the Compton wavelength of the particle, this is not guaranteed. In this case it is possible to observe a particle of the same charge being annihilated at B before being created at A, thus travelling backward in $r$. The only way causality can be maintained is to insist that a particle of opposite charge travelled forward in $r$, emitted from B and absorbed at A.
Eqs.~(\ref{qcondab}) are consistent with this interpretation of the modes. \section{Recovering the Spectrum Condition and CPT} Next we need to verify that the algebra of the operators corresponding to spacetime symmetries is realised, and in particular that a positive definite operator exists, whose spectrum guarantees the existence of a ground state upon which the spectrum generated by the creation operators can be built. To begin with, we demand the existence of a vacuum or no-particle state $\mid 0^I \rangle$ satisfying \begin{equation} \label{gstate} a_{\omega} \mid 0^{I} \rangle = b_{\omega} \mid 0^{I} \rangle = 0, \ee The global Killing symmetries of time translation (spacelike in the interior) and rotations on surfaces of constant $r$ are implemented by the operators \begin{equation} K_{t} = \int d \Lambda \ T^{t}_{t} \ee \begin{equation} K_{\theta} = \int d \Lambda \ T^{\theta}_{\theta} \ee \begin{equation} K_{\varphi} = \int d \Lambda \ T^{\varphi}_{\varphi} \ee with $d\Lambda$ denoting the induced 3-volume on constant-$r$ hypersurfaces and with $T$ components obtained in the usual way from the matter Lagrangian. These can be thought of as, respectively, the infalling mass-energy at any fixed radius and the angular momentum content of the quantum field $\phi$. The operator $K_t$ is not positive definite, but it is the counterpart of the energy operator in the exterior. In the vacuum introduced above, we find \begin{equation} \langle 0^{I} \mid K_{t} \mid 0^{I} \rangle = \langle 0^{I} \mid K_{\theta} \mid 0^{I} \rangle = \langle 0^{I} \mid K_{\varphi} \mid 0^{I} \rangle = 0 \ee The above vanishing of the expectation values is clear because integrals of the form $\int^{\infty}_{-\infty} d\omega \ \omega \ldots$ and sums $\sum^{l}_{m=-l}$ etc. appear.
The quantum dynamics is now generated by the operator \begin{equation} K_{r} = \int d \Lambda \ T^{r}_{r} \ee The radial momentum density is \begin{equation} T^{r}_{r} = \Pi_{r} \partial_{r^*} \phi + \Pi^{\dag}_{r} \partial_{r^*} \phi^{\dag} - L \ee \begin{equation} \ \ = \frac{1}{\frac{2m}{r} - 1} [\partial_{r^*} \phi^{\dag} \ \partial_{r^*} \phi + \dot{\phi}^{\dag} \ \dot{\phi} ] \ee This is clearly positive definite in the black hole interior. Promoted to a quantum operator, $K_r$ is thus positive definite. Substituting the mode expansion (\ref{modeexp}) and using the expression for the volume element on the constant-$r$ hypersurface near the horizon \begin{equation} d\Lambda = (2m/r - 1)^{1/2} r^2 \ \sin\theta \ dt \ d\theta \ d\varphi \ee we obtain the following expression for the normal-ordered $K_r$ \begin{equation} :K_r:\ =\ r (\frac{2m}{r} - 1)^{-1/2} \int^{\infty}_{-\infty} d \omega \ k_{\omega} \ (a^{\dag}_{\omega} a_{\omega} + b^{\dag}_{\omega} b_{\omega}) \ee This makes the state defined through Eq.~(\ref{gstate}) a genuine ground state as well as a no-particle state. Note that the sign in front of $k_{\omega}$ in the above equation is due to our choice $k_{\omega}>0$ below Eq.~(\ref{krdef}). The ground state thus characterised has been shown to be stable \cite{Boul}. One may think of it as the adiabatic vacuum useful to a freely infalling observer. The latter should detect particle production, and this production will be finite if the vacuum specified here is used as the template with which to compare his local ground states. This is similar to what happens in Friedmann-Robertson-Walker geometries, which have a conformal timelike Killing vector. In the present case we expect copious particle production as the singularity is approached, much as in the collapsing phase of relevant FRW metrics. The normal ordering prescription used may seem arbitrary.
But the effect of the discarded infinite contribution manifests itself as higher-derivative terms in the effective action for gravity. It is possible to choose a renormalisation prescription such that, when one returns to the gauge specified, the numerical values of the renormalised operators will be the same as those obtained by simple normal ordering. Finally, CPT invariance can be realised by requiring \begin{equation} C \phi C^{\dag} = \phi^{\dag} \ee which implies the effect of $C$ is \begin{equation} a_{\omega} \leftrightarrow b_{\omega} \ee Similarly, a restricted parity operator corresponding to reflection in the black hole origin is given by \begin{equation} P^{\Omega} \phi(r,t,\theta,\varphi) {P^{\Omega}}^{\dag} = \phi(r,t,\pi-\theta,\varphi+\pi) \ee which implies, under $P^{\Omega}$, \begin{equation} a_{\omega,l,m} \leftrightarrow a_{\omega,l,-m} (-1)^{l} \ee and for vector operators \begin{equation} P^{\Omega} V^{\mu}(r,t,\theta,\varphi) {P^{\Omega}}^{\dag} = V_{\mu}(r,t,\pi-\theta,\varphi+\pi) \ee The $P^\Omega$ symmetry is complemented by reversal symmetry of the $t$ co-ordinate \begin{equation} \tilde{T} \phi(r,t,\Omega) {\tilde{T}}^{\dag} = \phi(r,-t,\Omega) \ee such that $\tilde{T}$ is unitary. This gives \begin{equation} a_{\omega} \leftrightarrow a_{-\omega} {\rm \ and \ \ } b_{\omega} \leftrightarrow b_{-\omega} \ee There is no global symmetry reversing the arrow of dynamical evolution, since $r$-translations are not a symmetry. However, in the adiabatic approximation, a local and antiunitary $r$-reversal operator $\tilde{R}(r)$ may be assumed to exist such that for small increments $\Delta r$, \begin{equation} \tilde{R}(r) \phi(r+\Delta r,t,\Omega) {\tilde{R}(r)}^{\dag} = \phi(r-\Delta r,t,\Omega) \ee \section{The propagator} Since the inexorable direction of propagation is along the inward radial vector (corresponding to the $t$ variable in the exterior), the propagator in the interior is expected to be $r$-ordered.
This is analogous to $t$-ordering in the exterior. \noindent Such a propagator can be written as \begin{equation} iG(x,x') = \ <0^{I}| R \phi(x) \phi^{\dag}(x') |0^{I}> \ee \begin{equation} iG(x,x') = \theta(r^* - r'^{*}) <0^{I}|\phi(x) \phi^{\dag}(x') |0^{I}> \ + \ \theta(r'^{*} - r^*) <0^{I}| \phi^{\dag}(x') \phi(x) |0^{I}> \ee The $\theta$ function defined above is the usual step function. This propagator satisfies the equation \begin{equation} \Box_{x} G(x,x') = \delta(x,x') =[-g(x)]^{-1/2} \delta^{4} (x - x') \ee thus making $G(x,x')$ a Green's function for the wave equation in the Schwarzschild interior geometry. Near the horizon, the field can be expanded as in Eq.~(\ref{modeexp}). Using the integral representation of the theta function, the expression for $G(r^*,t;{r^*}',t')$ (suppressing the angular co-ordinates) becomes \begin{equation} G(r^*,t;{r^*}',t') = \lim_{\epsilon \rightarrow 0+} \int^{\infty}_{-\infty} \frac{d\omega \ d\Lambda}{(2 \pi)^{2} \ r \ r'} \frac{ e^{-i \Lambda (r^{*} - r'^{*}) + i \omega (t - t')}}{\Lambda^{2} - k_{\omega}^{2} + i \epsilon} \ee \section{Conclusion} We have shown that, taking into account the classically inexorable inward motion and incorporating the requirements of CPT, a unique quantization can be obtained for a matter field in the black hole interior. The quantization is of the kind possible in spaces with a conformal Killing vector, such as FRW universes. It can be taken to be the QFT set up by the freely infalling observer, the equivalent of a comoving observer in FRW spacetime. We have also shown that imposing microcausality dictates the form of the propagator, at least in the co-ordinates used. The quantization is expected to break down close to the classical singularity, where semiclassical techniques fail. It would be interesting to ask whether any signature of Hawking radiation can be detected in this quantization. This requires matching of this QFT to the standard QFT in the exterior. \vspace{2cm}
\section{Introduction} Periodic driving is a powerful tool to engineer a variety of unique and fascinating phenomena in many-body systems, such as the Mott-insulator-superfluid transition\cite{Mott1}, dynamical gauge fields\cite{DynGF1,DynGF2}, many-body echoes\cite{mbe1, mbe2, mbe3}, and the realization of topological band structures\cite{ExpFloTopo1,ExpFloTopo2}. Recently, Floquet systems have also brought new possibilities to simulate phases with broken time-translation symmetry, called discrete time crystals (DTC), and this has attracted much attention from both experimentalists \cite{DTCexp1,DTCexp2,DTCexp3,DTCexp4,DTCexp5,DTCexp6,DTCexp7,DTCexp8,DTCexp9,DTCexp10,DTCexp11,DTCexp12} and theorists\cite{DTCthe1,DTCthe2,DTCthe3,DTCthe4,DTCthe5,DTCthe6,DTCthe7,DTCthe8}. The concept of the time crystal was originally proposed by Wilczek\cite{Wilczek}. However, it was ruled out by the no-go theorem in thermal equilibrium systems\cite{nogo1,nogo2}. Then Shivaji Sondhi's~\cite{Phase2016Sondhi} and Chetan Nayak's~\cite{Floquet2016Nayak} groups generalized the concept of the time crystal and proposed the DTC, which exhibits the unique property that the expectation values of generic observables manifest a sub-harmonic oscillation. For example, the kicked Ising chain model with disordered interactions, where spins collectively flip after one period and return to their initial state after two periods, is a canonical realization of the DTC. In a non-interacting spin chain system, taking $\hat{U}=\exp{(-i\theta\sum_j\hat{X}_j)}$ with $\theta=\pi/2$ as a Floquet evolution operator is a straightforward method to flip all spins in one period. Here, $\hat{X}_j$ is a spin operator acting on site $j$. However, when $\theta$ deviates slightly from $\pi/2$, the period of observables also deviates from twice the Floquet period. This means that the sub-harmonic response induced by $\hat{U}=\exp{(-i\theta\sum_j\hat{X}_j)}$ is a fine-tuned result and is easily destroyed by perturbations.
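The fine-tuning can be made explicit with a minimal single-spin sketch (an illustration in the spirit of the example above, not a computation from this paper): repeated application of $\hat U=\exp(-i\theta\hat X)$ to $|\!\uparrow\rangle$ gives $\langle\hat Z\rangle_n=\cos(2\theta n)$, which is a perfect period-2 signal only at $\theta=\pi/2$ and drifts for any detuning $\delta$:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def z_after_periods(theta, n_periods):
    """<Z> after each application of U = exp(-i*theta*X) to the state |up>."""
    U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * X  # valid since X^2 = I
    psi = np.array([1.0, 0.0], dtype=complex)
    out = []
    for _ in range(n_periods):
        psi = U @ psi
        out.append((psi.conj() @ Z @ psi).real)
    return np.array(out)

# theta = pi/2: perfect sub-harmonic response, <Z>_n = (-1)^n.
z_exact = z_after_periods(np.pi / 2, 6)
assert np.allclose(z_exact, [-1, 1, -1, 1, -1, 1])

# Detuned theta = pi/2 + delta: <Z>_n = (-1)^n cos(2*delta*n), so the
# period-2 pattern decays; without interactions the response is fine-tuned.
delta = 0.05
z_det = z_after_periods(np.pi / 2 + delta, 40)
n = np.arange(1, 41)
assert np.allclose(z_det, (-1.0) ** n * np.cos(2 * delta * n))
assert np.max(np.abs(z_det - (-1.0) ** n)) > 1.5
```

The last assertion shows the detuned signal wandering far from the ideal $(-1)^n$ pattern already after a few tens of periods.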
This simple example implies that the robustness of the sub-harmonic response is a crucial property of the DTC. According to previous results, many-body localization and pre-thermalization may provide two mechanisms to stabilize the sub-harmonic response. Besides, topologically protected anomalous edge states \cite{FloEdge1,FloEdge2,FloEdge3} also suggest another mechanism for generating a robust DTC, for the following reasons. Firstly, edge states in topological insulators (superconductors) are protected by symmetries. As long as the symmetries are not broken and the gap does not close, topological edge states are stable and robust. Secondly, Floquet topological insulators (superconductors) host anomalous edge states with quasi-energy $\pi/T$, which can generate a sub-harmonic response to the driving frequency $2\pi/T$. Although the relation between $\pi$-mode edge states in Floquet topological systems and the DTC has been discussed in previous research\cite{TopoDTC1,TopoDTC2,TopoDTC3,TopoDTC4,TopoDTC5,TopoDTC6}, detailed and systematic studies of the topologically protected DTC in a more general spin chain model are still lacking. Bridging Floquet topological superconductors to the topologically protected DTC in a general periodically driven spin chain model and explicitly analyzing the dynamics of observables is significantly helpful for a deeper understanding of the existence and robustness of the DTC. In this work, we investigate the existence of the DTC phase in a general Floquet spin chain model. Such a periodically driven spin chain can be mapped to a Floquet superconductor through the Jordan-Wigner transformation, after which the model takes the form of a Majorana fermion chain. This model intrinsically possesses particle-hole symmetry. Furthermore, this system can be classified into the D class or the BDI class, depending on whether chiral symmetry is preserved.
The topological Floquet superconductor exhibits a special kind of topologically non-trivial phase, where anomalous edge states with quasi-energy $\pi / T$ exist. In order to observe a robust DTC, the observable should be selected as the anomalous edge mode or the end spin. We numerically demonstrate that both observables manifest a sub-harmonic response with a generic initial product state. Finally, we also confirm the robustness of the DTC by adding symmetry-preserving and symmetry-breaking perturbations. The paper is organized as follows. In section \ref{sec:model}, we describe a general periodically driven spin chain model and map it to a Floquet superconductor. In section \ref{sec:topoClass}, we discuss the topological classification of this model, calculate the topological invariants and obtain the phase diagrams. In section \ref{sec:TC}, we demonstrate the existence of the DTC resulting from the Floquet superconductors, by selecting the observable as the anomalous edge mode or the end spin. Finally, in section \ref{sec:Robust}, we examine how the robustness of the DTC is protected by the topologically non-trivial phase, by adding symmetry-preserving and symmetry-breaking perturbations. \section{Model: Periodically Driven Spin Chain} \label{sec:model} We consider a periodically driven spin-$1/2$ chain as illustrated in Fig. \ref{fig:Setup} (a), \begin{eqnarray} \label{Ht12} \hat{H}(t)=\begin{cases} \hat{H}_{1}, & \text{for}\quad nT\leqslant t<nT+t_{1}\\ \hat{H}_{2}, & \text{for}\quad nT+t_{1}\leqslant t<(n+1)T \end{cases}, \end{eqnarray} where \begin{equation} \begin{aligned} \label{HiSpin} &\hat{H}_{m}=J_{m}^{xx}\sum_{j}\hat{X}_{j}\hat{X}_{j+1}+J_{m}^{yy}\sum_{j}\hat{Y}_{j}\hat{Y}_{j+1}\\ &+J_{m}^{xy}\sum_{j}\hat{X}{}_{j}\hat{Y}_{j+1}+J_{m}^{yx}\sum_{j}\hat{Y}_{j}\hat{X}_{j+1}+h_{m}^{z}\sum_{j}\hat{Z}{}_{j}.
\end{aligned} \end{equation} $\hat{X}_j$, $\hat{Y}_j$ and $\hat{Z}_j$ are spin operators (in the form of Pauli matrices) acting on the $j$-th site, and the chain contains $N$ spins. $J_m^{xx}$, $J_m^{yy}$, $J_m^{xy}$, $J_m^{yx}$ represent the strengths of the nearest-neighbour spin interactions and $h_m^z$ the transverse field during the $m$-th time interval. By employing the Jordan-Wigner transformation \begin{equation} \begin{aligned} \hat{X}_{j}&=(\hat{c}_{j}^{\dagger}+\hat{c}_{j})e^{i\pi\sum_{l<j}\hat{c}_{l}^{\dagger}\hat{c}_{l}},\\ \hat{Y}_{j}&=-i(\hat{c}_{j}^{\dagger}-\hat{c}_{j})e^{i\pi\sum_{l<j}\hat{c}_{l}^{\dagger}\hat{c}_{l}},\\ \hat{Z}_{j}&=2\hat{c}_{j}^{\dagger}\hat{c}_{j}-1, \end{aligned} \end{equation} where $\hat{c}_j^\dagger$ and $\hat{c}_j$ are the fermionic creation and annihilation operators on the $j$-th site, the Hamiltonian in Eq.~(\ref{HiSpin}) can be mapped to a fermionic system, \begin{equation} \begin{aligned} \label{Hcc} \hat{H}_{m}&=\sum_{j}\left(J_{m}^{xx}-J_{m}^{yy}-iJ_{m}^{xy}-iJ_{m}^{yx}\right)\hat{c}_{j}^{\dagger}\hat{c}_{j+1}^{\dagger}\\ &+\sum_{j}\left(J_{m}^{xx}+J_{m}^{yy}+iJ_{m}^{xy}-iJ_{m}^{yx}\right)\hat{c}_{j}^{\dagger}\hat{c}_{j+1}\\ &+\frac{h_{m}^{z}}{2}\sum_{j}(2\hat{c}_{j}^{\dagger}\hat{c}_{j}-1)+h.c.\\ \end{aligned} \end{equation} \begin{figure} \includegraphics[width=8cm]{Setup.pdf} \caption{(a) Schematic of the spin chain. (b) Schematic of the Majorana chain. (c) Quasi-energy spectrum $\exp(-i\epsilon_{\mu}T)$ with $\epsilon_\mu$ the quasi-energy excitation. $0$ and $\pi/T$ denote the two edge-state quasi-energy excitations. (d) Eigenvalues of the Floquet evolution operator $\hat{U}_T$.
The quasi-particle operator $\hat{\beta}^\dagger_\pi$ excites a state $|E\rangle$ with eigenenergy $E$ to another state $|E + \pi / T\rangle$.} \label{fig:Setup} \end{figure} Furthermore, by defining $\hat{\gamma}_{j, 1}=\frac{1}{\sqrt{2}}\left(\hat{c}_{j}^{\dagger}+\hat{c}_{j}\right)$ and $\hat{\gamma}_{j, 2}=i \frac{1}{\sqrt{2}}\left(\hat{c}_{j}-\hat{c}_{j}^{\dagger}\right)$, we obtain the Hamiltonian in the Majorana representation, as illustrated in Fig. \ref{fig:Setup} (b), \begin{equation} \begin{aligned} \label{Hgg} \hat{H}_m&= 2 J_{m}^{x x} \sum_{j} i \hat{\gamma}_{j, 2} \hat{\gamma}_{j+1,1}-2 J_{m}^{y y} \sum_{j}i \hat{\gamma}_{j, 1} \hat{\gamma}_{j+1,2} \\ &+2 J_{m}^{x y} \sum_{j} i \hat{\gamma}_{j, 2} \hat{\gamma}_{j+1,2}-2 J_{m}^{y x} \sum_{j}i \hat{\gamma}_{j, 1} \hat{\gamma}_{j+1,1} \\ &+2 h_{m}^{z} \sum_{j} i \hat{\gamma}_{j, 2} \hat{\gamma}_{j, 1}. \\ \end{aligned} \end{equation} Both Hamiltonians $\hat{H}_1$ and $\hat{H}_2$ in Eq. (\ref{Hgg}) are quadratic, so they can be rewritten as \begin{equation} \hat{H}_m=\frac{1}{2}\hat{\Psi}^\dagger H_m \hat{\Psi}, \end{equation} with \begin{equation} \label{Psigamma} \hat{\Psi}^{\dagger}=\left(\begin{array}{ccccc} \hat{\gamma}_{1,1} & \hat{\gamma}_{1,2} & \hat{\gamma}_{2,1} & \hat{\gamma}_{2,2} & ...\end{array}\right), \end{equation} where $H_m$ is a $2N\times 2N$ anti-symmetric matrix. Hereafter, $\hat{H}$ ($\hat{U}$) represents an operator and $H$ ($U$) represents a matrix. Applying the Fourier transformation $\hat{\gamma}_{j,m}=\frac{1}{\sqrt{N}}\sum_{k}\hat{\gamma}_{k,m}e^{ikj}$, the Hamiltonian takes the form \begin{align} \hat{H}_m= &\sum_{k}\left(\begin{array}{cc} \hat{\gamma}_{k,1}^\dagger & \hat{\gamma}_{k,2}^\dagger \end{array}\right) \left[d^{0}_m(k)I+\bm{d}_m(k)\cdot\bm{\sigma}\right] \left(\begin{array}{c} \hat{\gamma}_{k,1}\\ \hat{\gamma}_{k,2} \end{array}\right), \end{align} where $\bm{\sigma} = (\sigma_x,\sigma_y,\sigma_z)$ denotes the vector of Pauli matrices.
$d^0_m(k)$ and $\bm{d}_m(k)$ are given by \begin{equation} \begin{aligned} d_{m}^{0}(k) &=\left(J_{m}^{y x}-J_{m}^{x y}\right) \sin (k), \\ d_{m}^{x}(k) &=\left(J_{m}^{y y}-J_{m}^{x x}\right) \sin (k), \\ d_{m}^{y}(k) &=h_{m}^{z}+\left(J_{m}^{x x}+J_{m}^{y y}\right)\cos (k), \\ d_{m}^{z}(k) &=\left(J_{m}^{x y}+J_{m}^{y x}\right) \sin (k). \end{aligned} \end{equation} For the periodically driven Hamiltonian in Eq.~(\ref{Ht12}), the Floquet evolution operator $\hat{U}_T$ and the effective Hamiltonian $\hat{H}_{\text{eff}}$ are defined by \begin{align} \label{expHeff} \hat{U}_{T}&=\hat{\mathcal{T}}\exp\left(-i\int_{0}^{T}\hat{H}(t)dt\right)=\exp(-i\hat{H}_{2}t_{2})\exp(-i\hat{H}_{1}t_{1})\nonumber\\ &=\exp(-i\hat{H}_{\rm{eff}}T), \end{align} where $t_2+t_1=T$. Based on the quadratic form of $\hat{H}_1$ and $\hat{H}_2$ in Eq.~(\ref{Hgg}), we have proven in Appendix \ref{app:a} that the Floquet effective Hamiltonian is also quadratic, which means $\hat{H}_{\rm{eff}}$ also takes the form \begin{equation} \hat{H}_{\rm{eff}}= \frac{1}{2}\hat{\Psi}^{\dagger} H_{\rm{eff}} \hat{\Psi}. \end{equation} In order to obtain the eigenenergies of $\hat{H}_{\rm{eff}}$, we diagonalize the matrix $H_{\rm{eff}}$ in real space as \begin{equation} H_{{\rm eff}}=V\Lambda^{\epsilon}V^{\dagger}, \end{equation} where \begin{equation} V=\left(\begin{array}{cccc} |V_{1}\rangle & |V_{2}\rangle & ... & |V_{2N}\rangle\end{array}\right), \end{equation} \begin{equation} \Lambda^{\epsilon}=\left(\begin{array}{cccc} \epsilon_{1}\\ & \epsilon_{2}\\ & & \ddots\\ & & & \epsilon_{2N} \end{array}\right). \end{equation} $|V_{\mu}\rangle$ is the eigenvector of $H_{\rm{eff}}$ with eigenvalue $\epsilon_{\mu}$ such that $H_{{\rm eff}}|V_{\mu}\rangle=\epsilon_{\mu}|V_{\mu}\rangle$. $\epsilon_{\mu}$ is the Bogoliubov quasi-particle excitation spectrum of $\hat{H}_{\rm{eff}}$, which is restricted to the Floquet Brillouin zone $[-\pi/T,\pi/T)$.
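As a concrete cross-check (not part of the paper's numerics), the quasi-particle spectrum can be read off directly from the eigenphases of the one-period evolution built from the Bloch blocks $d^0_m(k)I+\bm{d}_m(k)\cdot\bm{\sigma}$ above. The following Python sketch uses the D-class-like parameter point $(\theta_1,\theta_2)=(\pi/2,3\pi/4)$; the chiral-breaking coupling `Jc` is an arbitrary illustrative value.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bloch_h(k, Jxx, Jyy, Jxy, Jyx, hz):
    """d^0(k) I + d(k).sigma, with the d-vector components given above."""
    d0 = (Jyx - Jxy) * np.sin(k)
    dx = (Jyy - Jxx) * np.sin(k)
    dy = hz + (Jxx + Jyy) * np.cos(k)
    dz = (Jxy + Jyx) * np.sin(k)
    return d0 * I2 + dx * sx + dy * sy + dz * sz

def quasi_energies(k, t=1.0, J1xx=np.pi / 2, h2z=3 * np.pi / 4, Jc=0.3):
    """Quasi-energies eps(k) of U(k) = exp(-i H2 t) exp(-i H1 t),
    folded into the Floquet Brillouin zone [-pi/T, pi/T)."""
    T = 2 * t
    H1 = bloch_h(k, Jxx=J1xx, Jyy=0.0, Jxy=Jc, Jyx=Jc, hz=0.0)
    H2 = bloch_h(k, Jxx=0.0, Jyy=0.0, Jxy=Jc, Jyx=Jc, hz=h2z)
    U = expm(-1j * H2 * t) @ expm(-1j * H1 * t)
    # eigenvalues of U are exp(-i eps T)
    return np.sort(-np.angle(np.linalg.eigvals(U)) / T)
```

At $k=0$ both step Hamiltonians are proportional to $\sigma_y$, so the eigenphase is simply $(\theta_1+\theta_2)$ folded into the Floquet zone, and the two quasi-energies come in a particle-hole pair $\pm\epsilon(k)$.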
Then, the Floquet effective Hamiltonian $\hat{H}_{\rm{eff}}$ in this quasi-particle representation can be rewritten as \begin{equation} \label{Heffhat} \hat{H}_{{\rm eff}}=\frac{1}{2}\hat{\Psi}^{\dagger}V\Lambda^{\epsilon}V^{\dagger}\hat{\Psi}=\frac{1}{2}\hat{\Phi}^{\dagger}\Lambda^{\epsilon}\hat{\Phi}, \end{equation} where $\hat{\Phi}$ is composed of Bogoliubov quasi-particles $\hat{\Phi}^{\dagger}=\left(\begin{array}{cccc} \hat{\alpha}_{1}^{\dagger} & \hat{\alpha}_{2}^{\dagger} & \cdots & \hat{\alpha}_{2N}^{\dagger}\end{array}\right)$ and $\hat{\alpha}^\dagger_\mu=\hat{\Psi}^\dagger|V_\mu\rangle$. Due to the particle-hole symmetry constraint, the eigenvalues and eigenvectors of $H_{\rm{eff}}$ must come in pairs \begin{equation} \begin{aligned} \label{UFV} H_{\rm{eff}}|V_\mu\rangle&=\epsilon_\mu|V_\mu\rangle,\\ H_{\rm{eff}}|V_\mu^\ast\rangle&=-\epsilon_\mu|V_\mu^\ast\rangle. \end{aligned} \end{equation} Substituting these relations into Eq.~({\ref{Heffhat}}), the effective Hamiltonian $\hat{H}_{\rm{eff}}$ becomes \begin{equation} \label{HatHeff} \hat{H}_{{\rm eff}}=\sum_{\epsilon_{\mu}>0}\epsilon_{\mu}(\hat{\alpha}_{\mu}^{\dagger}\hat{\alpha}_{\mu}-\frac{1}{2}). \end{equation} The eigenstates of $\hat{H}_{\rm{eff}}$ are given by $|\{n_{\mu}\}\rangle=\prod_{\mu}(\hat{\alpha}_{\mu}^{\dagger})^{n_{\mu}}|0\rangle$, where $n_\mu=0,1$ is the occupation number of the quasi-particle mode $\mu$. Then the energy spectrum of this system can be expressed as \begin{align} E&=\langle\hat{H}_{{\rm eff}}\rangle\nonumber\\ &=\sum_{\epsilon_{\mu}>0}\epsilon_{\mu}(\langle\hat{\alpha}_{\mu}^{\dagger}\hat{\alpha}_{\mu}\rangle-\frac{1}{2})\nonumber\\ &=E_0+\sum_{\mu}\epsilon_\mu{n}_{\mu}, \end{align} where $E_0=-\frac{1}{2}\sum_{\epsilon_{\mu}>0}\epsilon_\mu$ is the ground-state energy and different configurations of ${n}_\mu$ generate the $2^N$ eigenenergies of the spin chain system.
To observe the DTC in this periodically driven spin chain, we first concentrate on the evolution of the operator $\hat{\alpha}^\dagger_\mu$ \begin{align} \label{alphamu} \hat{\alpha}^\dagger_{\mu}(nT)&=\hat{U}_{T}^{-n}\hat{\alpha}^\dagger_{\mu}(0)\hat{U}_{T}^{n} =\hat{\alpha}^\dagger_{\mu}(0)e^{inT\epsilon_{\mu}}. \end{align} For a given initial state, the expectation value of the operator $\hat{\alpha}^\dagger_{\mu}$ oscillates with period ${2\pi}/{\epsilon_\mu}$. In general, this oscillation is not rigid and is easily perturbed, so the behavior of the operators $\hat{\alpha}^\dagger_{\mu}(nT)$ cannot be regarded as a signal of the DTC phase. However, there is an exception due to anomalous edge states in Floquet topological systems. Such systems exhibit two kinds of edge states with eigenenergies $\epsilon_\mu=0$ and $\epsilon_\mu=\pi/T$, protected by topological invariants $\nu_0$ and $\nu_\pi$, respectively. Here, we mainly focus on the edge states with eigenenergy $\epsilon_\mu=\pi/T$, satisfying $H_{\rm {eff}}|W\rangle =(\pi/T)|W\rangle$, and we define $\hat{\beta}^\dagger_\pi=\hat{\Psi}^\dagger|W\rangle$. The expectation value of the operator $\hat{\beta}^{\dagger}_\pi$ oscillates exactly with period $2T$, as follows from Eq.~(\ref{alphamu}). More importantly, this oscillation period is protected by the Floquet topologically non-trivial phases, so that the DTC is robust against symmetry-preserving perturbations. To further demonstrate the DTC induced by Floquet topological superconductors, we systematically discuss the phase diagrams of Floquet topological superconductors in section \ref{sec:topoClass} and then discuss the DTC for different observables, including anomalous edge operators and end spin operators, in section \ref{sec:TC}.
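The existence of an anomalous $\pi/T$ edge mode can be checked numerically in real space. The sketch below (illustrative, not the paper's implementation) builds open-chain single-particle matrices whose Bloch blocks reproduce the D-class $H_1(k)$ and $H_2(k)$ of section \ref{sec:topoClass} at $(\theta_1,\theta_2)=(\pi/2,3\pi/4)$, where $\nu_\pi=-1$; the chiral-breaking coupling `Jc = 0.15` is an arbitrary choice. A $\pi$ mode shows up as an eigenvalue of $U_T$ pinned near $e^{-i\pi}=-1$.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def open_chain(A, B, N):
    """Block-tridiagonal matrix with on-site block A and hopping block B,
    whose Bloch transform is A + B e^{ik} + B^dagger e^{-ik}."""
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for j in range(N):
        H[2*j:2*j+2, 2*j:2*j+2] = A
        if j < N - 1:
            H[2*j:2*j+2, 2*j+2:2*j+4] = B
            H[2*j+2:2*j+4, 2*j:2*j+2] = B.conj().T
    return H

def floquet_op(N=30, t=1.0, J1xx=np.pi / 2, h2z=3 * np.pi / 4, Jc=0.15):
    # hopping blocks reproducing H1(k) = -J1xx(sin k sx - cos k sy) + 2 Jc sin k sz
    # and H2(k) = h2z sy + 2 Jc sin k sz
    B1 = 0.5j * J1xx * sx + 0.5 * J1xx * sy - 1j * Jc * sz
    B2 = -1j * Jc * sz
    H1 = open_chain(np.zeros((2, 2), dtype=complex), B1, N)
    H2 = open_chain(h2z * sy, B2, N)
    return expm(-1j * H2 * t) @ expm(-1j * H1 * t)

U = floquet_op()
lams = np.linalg.eigvals(U)
pi_mode_gap = np.min(np.abs(lams + 1))  # distance of the closest eigenphase to -1
```

For an open chain of $N=30$ unit cells the deviation `pi_mode_gap` is exponentially small, consistent with a topologically protected $\pi$ mode.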
\section{Floquet topological phases} \label{sec:topoClass} Following the standard approach to studying topological superconductors, we calculate the topological invariants of the effective Hamiltonian in momentum space. $\hat{H}_{\rm{eff}}$ is expressed as \begin{equation} \hat{H}_{{\rm eff}}=\sum_{k}\left(\begin{array}{cc} \hat{\gamma}_{k,1}^\dagger & \hat{\gamma}_{k,2}^\dagger\end{array}\right) H_{\rm{eff}}(k) \left(\begin{array}{c} \hat{\gamma}_{k,1}\\ \hat{\gamma}_{k,2} \end{array}\right), \end{equation} where $H_{\rm{eff}}(k)$ is obtained from \begin{equation} \begin{aligned} U(k)&=\exp[-iH_{2}(k)t_{2}]\exp[-iH_{1}(k)t_{1}] \\ &=\exp[-iH_{{\rm eff}}(k)T] \end{aligned} \end{equation} according to Appendix \ref{app:a}. Here, $d_{{\rm eff}}^{0}(k)$ and $\boldsymbol{d}_{{\rm eff}}(k)$ are defined through $H_{{\rm eff}}(k)=d_{{\rm eff}}^{0}(k)I+\boldsymbol{d}_{{\rm eff}}(k)\cdot\boldsymbol{\sigma}$. In the following, we require $\epsilon(k)=d_{{\rm eff}}^{0}(k)\pm|\boldsymbol{d}_{{\rm eff}}(k)|\neq 0,\,\pi / T$ for any $k\in[-\pi,\pi)$, since a topological superconductor should be gapped. For a one-dimensional two-band system with an intrinsic particle-hole constraint, topologically non-trivial phases can only be classified into classes BDI (with chiral symmetry) and D (without chiral symmetry) \cite{foot1,classTopo}. In the following, we discuss the calculation of the topological invariants for classes D and BDI, respectively. \subsection{D Class} For simplicity, we take $J_1^{yy}=J_2^{yy}=0, J_2^{xx}=h_1^z=0$, and non-zero coupling strengths $J_{1}^{xy}=J_{2}^{xy}=J_{1}^{yx}= J_{2}^{yx}$ to break the chiral symmetry. Hereafter, we take $t_1=t_2=t$. Then $H_1(k)$ and $H_2(k)$ are given by \begin{align} &H_1(k)=-J_1^{xx}(\sin(k)\sigma_x-\cos(k)\sigma_y)+(J_{1}^{xy} + J_1^{yx})\sin(k)\sigma_z,\nonumber\\ &H_2(k)=h_2^z\sigma_y+(J_{1}^{xy} + J_1^{yx})\sin(k)\sigma_z.
\end{align} Following the approach in Ref.~\cite{FloquetTopo1}, the topological invariants $\nu_0$ and $\nu_\pi$ of Floquet superconductors take the forms \begin{equation} \begin{aligned} \label{Q0Qp} \nu_{0}\nu_{\pi}&=\operatorname{sgn}\left[{\rm{Pf}}(M_{0})\right] \operatorname{sgn}\left[{\rm{Pf}}(M_{\pi})\right], \\ \nu_{0}&=\operatorname{sgn}\left[{\rm{Pf}}(N_{0})\right] \operatorname{sgn}\left[{\rm{Pf}}(N_{\pi})\right], \end{aligned} \end{equation} where $\operatorname{Pf}\left[X\right]$ is the Pfaffian of a skew matrix $X$ and $M_{k}=\log\left[U(k)\right], N_{k}=\log\left[\sqrt{U(k)}\right]$. Since both $\log({X})$ and $\log(\sqrt{X})$ are multivalued functions, for $k=0$ and $k=\pi$ we have \begin{equation} \label{MNxik} M_{k}=-i\xi_{M}(k)\sigma_{y},\quad N_{k}=-i\xi_{N}(k)\sigma_{y}, \end{equation} with \begin{equation} \begin{aligned} \label{xiMNk} \xi_{M}(k)+2z\pi&=J_1^{xx}t\cos(k)+h_2^zt,\\ \xi_{N}(k)+2z\pi&=\frac{J_1^{xx}t\cos(k)+h_2^zt}{2}. \end{aligned} \end{equation} Here, $z$ is taken as an appropriate integer so that $\xi_{M/N}(k)$ is constrained to the interval $[-\pi,\pi)$. Finally, we obtain \begin{equation} \begin{aligned} \nu_{0}\nu_{\pi}&={\rm sgn}[\sin(P_{+})\sin(P_{-})],\\ \nu_{0}&={\rm sgn}[\sin(\frac{P_{+}}{2})\sin(\frac{P_{-}}{2})], \end{aligned} \end{equation} with \begin{equation} \begin{aligned} P_{+}&=h_{2}^{z}t+J_{1}^{xx}t,\\ P_{-}&=h_{2}^{z}t-J_{1}^{xx}t. \end{aligned} \end{equation} In Fig.~\ref{fig:phase1} (a, b), we show the phase diagrams of the topological invariants $\nu_0$ and $\nu_\pi$ on the $\theta_1$-$\theta_2$ plane by taking $\theta_1=J_1^{xx}t$ and $\theta_2=h_2^{z}t$. For the D class, the system is classified by a $\mathbb{Z}_2$ topological invariant. $\nu_0$ and $\nu_\pi$ take values $\pm 1$, and $\nu_\pi=-1$ indicates an anomalous topologically non-trivial phase. The quasi-energy spectrum as a function of $\theta_1\in[- \pi, \pi)$ is plotted in Fig.~\ref{fig:phase1} (c).
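The closed-form invariants above are easy to tabulate. The following minimal sketch (illustrative code, not the paper's implementation) simply evaluates ${\rm sgn}[\sin(P_{\pm})]$ and ${\rm sgn}[\sin(P_{\pm}/2)]$ at a given point of the $\theta_1$-$\theta_2$ plane:

```python
import numpy as np

def d_class_invariants(theta1, theta2):
    """Z2 invariants (nu0, nu_pi) from the Pfaffian formulas above,
    valid away from gap closings where sin(P_+) or sin(P_-) vanishes."""
    Pp, Pm = theta2 + theta1, theta2 - theta1
    nu0 = np.sign(np.sin(Pp / 2) * np.sin(Pm / 2))
    nu0_nupi = np.sign(np.sin(Pp) * np.sin(Pm))
    return int(nu0), int(nu0_nupi / nu0)
```

At $(\theta_1,\theta_2)=(\pi/2,3\pi/4)$, the parameter point of Fig.~\ref{fig:phase1} (d), this gives $(\nu_0,\nu_\pi)=(1,-1)$, i.e. an anomalous phase with a $\pi$ mode but no $0$ mode.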
The gapless $0$-mode and $\pi$-mode must exist where the topological invariants $\nu_0 = - 1$ and $\nu_\pi = - 1$, respectively. Consequently, the topologically protected discrete time crystal can be observed in the regime $\nu_\pi = - 1$. Furthermore, because of particle-hole symmetry, the eigenvalues and eigenvectors of $H_{\rm{eff}}$ always come in pairs, as shown in Eq.~(\ref{UFV}). Thus, for the edge states with quasi-energies $\epsilon_\mu=0$ or $\pi/T$, the eigenvectors $|V_\mu\rangle$ and $|V_\mu^*\rangle$ span a degenerate subspace, from which we can construct a purely real vector and a purely imaginary vector as $|V_\mu^R\rangle=(|V_\mu\rangle+|V_\mu^*\rangle)/\sqrt{2}$ and $|V_\mu^I\rangle=(|V_\mu\rangle-|V_\mu^*\rangle)/\sqrt{2}$. In Fig.~\ref{fig:phase1} (d), we show the probability distribution of an anomalous edge state $|W^{R}\rangle =(|W\rangle+|W^*\rangle)/\sqrt{2}$. For convenience, here and after, the index pair $\{j,s\}$ ($j=1,2,\ldots,N$ and $s=1,2$) in Eq. \eqref{Psigamma} is relabelled as $\{2j+s-2\}$ for the index of $\hat{\gamma}$, as well as for $|W^{R}\rangle$. \begin{figure} \includegraphics[width=8cm]{Phase_1.pdf} \caption{(a, b) Phase diagrams of the D class topological superconductor (without chiral symmetry). The regions where the topological invariant takes the value $- 1$ indicate a topologically non-trivial superconductor; the others are topologically trivial. The topological invariant $\nu_0$ is shown in (a) and $\nu_\pi$ in (b). (c) Quasi-energy spectrum of the topological superconductor under open boundary conditions with parameters $\theta_1 \in [- \pi, \pi), \theta_2 = 3 \pi / 4$ along the black solid line shown in (a, b). (d) $\pi$-mode edge state $|W^R\rangle$ with parameters $(\theta_1, \theta_2) = (\pi / 2, 3 \pi / 4)$. Here, we set $J_1^{xx} t=\theta_1, h_2^z t=\theta_2$.} \label{fig:phase1} \end{figure} \subsection{BDI Class} For simplicity, we keep only the coupling strengths $J_1^{xx}=J_2^{xx}$ and $h_2^z$ non-zero and set all other parameters to zero, so that the system obeys chiral symmetry.
With these parameters, the chiral operator is taken as $U_S=\sigma_z$ and we have \begin{equation} \begin{aligned} U_S^\dagger U^{a, b} U_S = (U^{a, b})^\dagger, \end{aligned} \end{equation} where \begin{equation} \begin{aligned} {U}^{a}&=\exp(-iH_{1}t_{1}/2)\exp(-iH_{2}t_{2})\exp(-iH_{1}t_{1}/2)\\ &=\exp({-iH_{\rm{eff}}^a} T), \\ U^{b}&=\exp(-iH_{2}t_{2}/2)\exp(-iH_{1}t_{1})\exp(-iH_{2}t_{2}/2)\\ &=\exp({-iH_{\rm{eff}}^b} T), \end{aligned} \end{equation} and \begin{equation} \begin{aligned} &H_1(k)=-J_1^{xx}(\sin(k)\sigma_x-\cos(k)\sigma_y),\\ &H_2(k)=h_2^z\sigma_y-J_1^{xx}(\sin(k)\sigma_x-\cos(k)\sigma_y). \end{aligned} \end{equation} Here, we introduce $U^{a}$ and $U^b$, both related to $U(k)$ by unitary transformations, so that the chiral operator is explicitly $\sigma_z$. Based on $U^a$ and $U^b$, the topological invariants $\nu_0$ and $\nu_\pi$ can be calculated as follows \cite{FloquetTopo2}, \begin{equation} \nu_{0,\pi}=\frac{\nu_a\pm\nu_b}{2}, \end{equation} where $\nu_{a}$ and $\nu_{b}$ are the winding numbers defined as \begin{equation} \nu_{a,b}=\frac{1}{2\pi}\int dk\frac{-d_{a,b}^{x}(k)\partial_{k}d_{a,b}^{y}(k)+d^{y}_{a,b}(k)\partial_{k}d^x_{a,b}(k)}{\left(d_{a,b}^{x}(k)\right)^{2}+\left(d_{a,b}^{y}(k)\right)^{2}}, \end{equation} and $d_{a,b}^{x,y}(k)$ is given by \begin{align} H_{\rm{eff}}^{a,b}(k)&=d_{a,b}^x(k)\sigma_x+ d_{a,b}^y(k)\sigma_y. \end{align} In Fig.~\ref{fig:phase2} (a) and (b), we show the phase diagrams of the topological invariants $\nu_0$ and $\nu_\pi$ on the $\theta_1$-$\theta_2$ plane. For the BDI class, the system is classified by a $\mathbb{Z}$ topological invariant. Both $\nu_0$ and $\nu_\pi$ take integer values, and $\nu_\pi\neq0$ indicates anomalous topologically non-trivial phases. The quasi-energy spectrum as a function of $\theta_1\in[- \pi, \pi)$ is plotted in Fig.~\ref{fig:phase2} (c).
In contrast to the D class, the topological invariants $\nu_0$ and $\nu_\pi$ in the BDI class can take values $\pm 2, \pm 3, \cdots$, leading to a larger degeneracy of edge states. For example, for $(\theta_1,\theta_2)=(3\pi/8,7\pi/8)$, we have $\nu_\pi=2$, which results in four $\pi$-mode edge states. These four edge states are labeled as $|W_\eta\rangle$ and $|W_\eta^*\rangle$ with $\eta = 1, 2$. In Fig.~\ref{fig:phase2} (d), we show the probability distributions of the anomalous edge states $|W^R_{\eta = 1}\rangle$ and $|W^R_{\eta = 2}\rangle$, respectively. \begin{figure} \includegraphics[width=8cm]{Phase_2.pdf} \caption{(a, b) Phase diagrams of the BDI class topological superconductor (with chiral symmetry). The regions with non-zero topological invariants indicate a topologically non-trivial superconductor; the others are topologically trivial. The topological invariant $\nu_0$ is shown in (a) and $\nu_\pi$ in (b). (c) Quasi-energy spectrum of the topological superconductor under open boundary conditions with parameters $\theta_1 \in [- \pi, \pi), \theta_2 = 7 \pi / 8$ along the black solid line shown in (a, b). (d) $\pi$-mode edge states $|W^R\rangle$ with parameters $(\theta_1, \theta_2) = (3 \pi / 8, 7 \pi / 8)$. Here, we set $\theta_1 = J_1^{xx} t = J_2^{xx} t, \theta_2 = h_2^z t$. } \label{fig:phase2} \end{figure} \section{Sub-harmonic oscillations of the DTC} \label{sec:TC} \begin{figure}[htp] \includegraphics[width=8cm]{Dynamics.pdf} \caption{ (a, b) Dynamics of the anomalous edge mode $\hat{\beta}_\pi^R$. (c, d) Distribution of the response function $A_\beta(\omega)$. The frequency distributions in (c) and (d) are the Fourier transforms of the dynamics in (a) and (b), respectively. (a, c) with the same parameters as Fig.~\ref{fig:phase1} (d) correspond to the D class, while (b, d) with the same parameters as Fig.~\ref{fig:phase2} (d) correspond to the BDI class.
The initial state is taken as a product state polarized along the $x$ direction.} \label{fig:dyn3} \end{figure} \begin{figure}[t] \includegraphics[width=8cm]{Frequency_Distribution.pdf} \caption{(a, b) Upper panels: the values of the topological invariants in different regimes. Lower panels: frequency distributions of the response function for the observable $\hat{X}_1$; the color represents the amplitude of $A_X(\omega)$. Parameters of (a) are the same as the black solid line in Fig.~\ref{fig:phase1}, while parameters of (b) are the same as the black solid line in Fig.~\ref{fig:phase2}.} \label{fig:fredis} \end{figure} In this section, we examine the sub-harmonic oscillations of the DTC in topologically non-trivial phases with $\nu_\pi=-1$ in the D class and $\nu_\pi\neq0$ in the BDI class, by selecting the anomalous edge mode and the end spin as observables. We first take the anomalous edge mode as the observable for the DTC, \begin{equation} \hat{\beta}_{\pi}^{R}=\frac{\hat{\beta}_{\pi}+\hat{\beta}_{\pi}^{\dagger}}{\sqrt{2}}. \end{equation} As derived in Eq.~(\ref{alphamu}), the time evolution of the observable $\hat{\beta}_\pi^R$ is expressed as \begin{equation} \begin{aligned} \langle\hat{\beta}_\pi^R(nT)\rangle&=\langle\psi(0)|\hat{U}_{T}^{-n}\hat{\beta}_{\pi}^{R}\hat{U}_{T}^{n}|\psi(0)\rangle\\ &=(-1)^{n}\langle\psi(0)|\hat{\beta}_{\pi}^{R}|\psi(0)\rangle. \end{aligned} \end{equation} It is obvious that the dynamics of $\langle\hat{\beta}_\pi^R(nT)\rangle$ manifests an exact sub-harmonic response for a generic initial state with $\langle\psi(0)|\hat{\beta}_{\pi}^{R}|\psi(0)\rangle\neq0$. After a Fourier transformation of $\langle\hat{\beta}_\pi^R(nT)\rangle$, we obtain the response function \begin{equation} \label{Oomega} A_\beta(\omega)=\frac{1}{n_{\rm{max}}}\sum_{n=1}^{n_{\rm{max}}}\langle\hat{\beta}_\pi^R(nT)\rangle e^{i\omega nT}, \end{equation} where $n_{\text{max}}$ represents the maximal number of periods.
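Equation~(\ref{Oomega}) is a plain discrete Fourier sum. A minimal sketch, using a synthetic period-$2T$ signal $(-1)^n$ in place of a simulated $\langle\hat{\beta}_\pi^R(nT)\rangle$ (all numerical choices here are illustrative):

```python
import numpy as np

def response(signal, T=2.0):
    """|A(omega)| = (1/n_max) |sum_{n=1}^{n_max} s(nT) e^{i omega n T}|
    on a frequency grid covering one Floquet zone [0, 2 pi / T)."""
    n_max = len(signal)
    n = np.arange(1, n_max + 1)
    omegas = np.linspace(0, 2 * np.pi / T, 400, endpoint=False)
    A = np.abs([np.sum(signal * np.exp(1j * w * n * T)) for w in omegas]) / n_max
    return omegas, A

# perfect sub-harmonic signal <beta(nT)> ~ (-1)^n over 256 periods
s = (-1.0) ** np.arange(1, 257)
w, A = response(s)
w_peak = w[np.argmax(A)]  # the peak sits at omega = pi / T
```

For the perfect period-$2T$ signal the response is maximal, $|A(\pi/T)|=1$, and any deviation of the oscillation from rigid period doubling spreads weight away from $\omega=\pi/T$.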
In Fig.~\ref{fig:dyn3} (a) and (b), we show the expectation values of $\hat{\beta}^R_{\pi}$ as a function of evolution time for the D class and the BDI class, respectively. Correspondingly, the response functions $A_\beta(\omega)$ are plotted in Fig.~\ref{fig:dyn3} (c) and (d). $\langle\hat{\beta}^R_{\pi}\rangle$ oscillates with doubled period and $A_\beta(\omega)$ peaks at $\omega = \pi / T$, which confirms the sub-harmonic response of the observable $\hat{\beta}^R_{\pi}$ in the regime of Floquet topologically non-trivial phases with anomalous edge states. However, we have to admit that $\hat{\beta}^R_{\pi}$ is difficult to engineer and observe in experiments, and the concrete form of the operator $\hat{\beta}^R_{\pi}$ in the spin basis depends on the coupling parameters. Therefore, it is important to choose a more accessible observable that is easily measured in experiments. Another observable for the DTC is the end spin operator $\hat{X}_1$, since the anomalous edge states have finite weight on the end sites. Actually, the end spin $\hat{X}_1$ is nothing but $\hat{\gamma}_1$ after the Jordan-Wigner transformation, which can be decomposed as \begin{equation} \label{gamma1} \hat{\gamma}_{1}=\sum_{\eta=1}^{n_{\rm{edge}}}|W_{\eta} \rangle_1\hat{\beta}_{\pi,\eta} + \sum_{\epsilon_\mu\neq\pi/T}|V_{\mu} \rangle_1\hat{\alpha}_{\mu} + h.c., \end{equation} where $|W_{\eta} \rangle_1$ and $|V_{\mu} \rangle_1$ represent the first element of the vectors $|W_{\eta} \rangle$ and $|V_{\mu} \rangle$, respectively. $n_{\rm edge}$ represents the number of pairs of edge states.
Dynamics of the observable $\hat{X}_{1}$ is then given by \begin{equation} \begin{aligned} \langle\hat{X}_{1}(nT)\rangle&=\langle\psi(0)|\hat{U}_{T}^{-n}\hat{\gamma}_{1}\hat{U}_{T}^{n}|\psi(0)\rangle\\ &=(-1)^{n}\sum_{\eta=1}^{n_{\rm{edge}}}2{\rm{Re}}(|W_{\eta}\rangle_1\langle\psi(0)|\hat{\beta}_{\pi,\eta}|\psi(0)\rangle)\\ &+\sum_{\epsilon_{\mu}\neq\pi/T}2{\rm Re}(|V_{\mu}\rangle_1 e^{-inT\epsilon_{\mu}}\langle\psi(0)|\hat{\alpha}_{\mu}|\psi(0)\rangle). \end{aligned} \label{eq:endspinevo} \end{equation} In the D class, there is only one pair of edge states with $n_{\rm{edge}}=1$, localized around the Majorana operators $\hat{\gamma}_1$ and $\hat{\gamma}_{2N}$. Therefore, $|W_{1}\rangle_1$ is finite, so that $\langle\hat{X}_{1}(nT)\rangle$ manifests a sub-harmonic response at $\omega=\pi/T$, as shown in the second line of Eq.~(\ref{eq:endspinevo}). In Fig.~\ref{fig:fredis} (a), we show the response function $A_X(\omega)$ of $\hat{X}_1$, which peaks at $\omega=\pi/T$ when $\nu_\pi=-1$. These numerical results indicate the presence of the DTC for the end spin observable. In the BDI class, according to the bulk-edge correspondence \cite{asboth2016short}, $n_{\rm{edge}}=|\nu_\pi|$, and for $\nu_\pi>0$ ($\nu_\pi<0$) the edge states at the left end are localized on $\hat{\gamma}_{1}$ ($\hat{\gamma}_{2}$). Therefore, $|W_{\eta}\rangle_1$ is finite when $\nu_\pi>0$, while $|W_{\eta}\rangle_1$ is exactly $0$ when $\nu_\pi<0$. As shown in Fig.~\ref{fig:fredis} (b), $A_X(\omega)$ peaks at $\omega =\pi/T$ when $\nu_\pi>0$, which also indicates the presence of the DTC for the BDI class. Note that the DTC regime, where $A_X(\omega)$ peaks at $\omega=\pi/T$, does not exactly match the topological region with $\nu_\pi=-1$ ($\nu_\pi>0$) for the D (BDI) class. We attribute this phenomenon to the finite-size effect, which shifts the quasi-energy of the edge mode away from exactly $\pi/T$.
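The finite-size shift of the edge-mode quasi-energy away from $\pi/T$ can be made quantitative with a real-space construction whose Bloch blocks reproduce the D-class $H_1(k)$ and $H_2(k)$ (an illustrative sketch; `Jc = 0.15` and the chain lengths are arbitrary choices, and this is not the computation of Appendix \ref{app:b}): the deviation decays rapidly with $N$ in the topological phase, while at a trivial parameter point no eigenphase comes close to $-1$ at all.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pi_mode_deviation(N, theta1, theta2, t=1.0, Jc=0.15):
    """|eps*T - pi| for the eigenphase of U_T closest to -1, on an open
    D-class chain of N unit cells."""
    J, h = theta1 / t, theta2 / t
    B1 = 0.5j * J * sx + 0.5 * J * sy - 1j * Jc * sz   # hopping block of H1
    B2 = -1j * Jc * sz                                 # hopping block of H2
    H1 = np.zeros((2 * N, 2 * N), dtype=complex)
    H2 = np.zeros((2 * N, 2 * N), dtype=complex)
    for j in range(N):
        H2[2*j:2*j+2, 2*j:2*j+2] = h * sy
        if j < N - 1:
            H1[2*j:2*j+2, 2*j+2:2*j+4] = B1
            H1[2*j+2:2*j+4, 2*j:2*j+2] = B1.conj().T
            H2[2*j:2*j+2, 2*j+2:2*j+4] = B2
            H2[2*j+2:2*j+4, 2*j:2*j+2] = B2.conj().T
    U = expm(-1j * H2 * t) @ expm(-1j * H1 * t)
    lams = np.linalg.eigvals(U)
    return np.min(np.abs(np.angle(-lams)))  # angular distance from -1

# topological point (nu_pi = -1): deviation shrinks rapidly with N
devs = [pi_mode_deviation(N, np.pi / 2, 3 * np.pi / 4) for N in (8, 16)]
# trivial point (nu_pi = +1): no eigenphase near pi/T
dev_trivial = pi_mode_deviation(16, np.pi / 4, np.pi / 8)
```

Under these assumptions the $\pi$-mode splitting is exponentially small in $N$, so the mismatch between the DTC regime and the exact topological boundaries shrinks quickly with system size.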
In order to check the finite-size effect, numerical results for different lengths of the chain are shown in Appendix \ref{app:b}. \section{Robustness of the DTC} \label{sec:Robust} In order to demonstrate the robustness of the DTC induced by topological superconductors, we investigate the frequency distributions of the response function for $\hat{X}_1$ after adding symmetry-preserving and symmetry-breaking perturbations. For both the D class and the BDI class, the Hamiltonian obeys particle-hole symmetry, which means the Hamiltonian \eqref{Hcc} is invariant under the particle-hole transformation $\hat{c}^\dagger_j\rightarrow(-1)^{j}\hat{c}_j$, $\hat{c}_j\rightarrow(-1)^{j}\hat{c}^\dagger_j$, and $i\rightarrow{-i}$. In the following, we add the operator $\sum_j \delta_j\hat{Z}_j$ as the particle-hole symmetry-preserving perturbation and $\sum_j \delta_j\hat{X}_j$ as the symmetry-breaking perturbation, where $\delta_j$ are disordered parameters. As shown in Fig.~\ref{fig:fredisdis} (a) and (b), the frequency distributions of Eq.~(\ref{Oomega}), for the D class and the BDI class respectively, are almost unaffected by the symmetry-preserving perturbation. In comparison, the sub-harmonic response collapses after adding symmetry-breaking perturbations for both classes. These results suggest that the sub-harmonic response is robust to symmetry-preserving perturbations, which confirms that the DTC originates from the topologically non-trivial phases. \begin{figure}[t] \includegraphics[width=8cm]{Frequency_Distribution_Disorder.pdf} \caption{(a-d) Frequency distributions of the response function for the observable $\hat{X}_1$; the color represents the amplitude of $A_X(\omega)$. In (a) and (b), $\sum_j \delta_j \hat{Z}_j$ is added, which preserves the particle-hole symmetry, while in (c) and (d) $\sum_j \delta_j \hat{X}_j$ is added, which breaks the particle-hole symmetry.
Parameters of (a, c) are the same as the black solid line in Fig.~\ref{fig:phase1}, while parameters of (b, d) are the same as the black solid line in Fig.~\ref{fig:phase2}. $\delta_j$ is randomly distributed in $[- 0.1 h_2^z, 0.1 h_2^z]$, and the perturbation is added only in the second interval of each period. } \label{fig:fredisdis} \end{figure} \section{Summary} \label{sec:sum} We have systematically studied the Floquet time crystal in a periodically driven spin chain model by elaborating the topological classifications and phase diagrams after mapping it to a Majorana chain. We have shown that the dynamics of the observable edge mode operator or end spin operator indeed exhibits robust sub-harmonic oscillations, which is a typical signature of the DTC. Moreover, the rigidity and robustness of the DTC are rooted in the topologically non-trivial phases. Furthermore, since the system is a general spin chain model and can be solved exactly, the approach can potentially be generalized to other interacting or dissipative systems. Our results might be helpful for a deeper understanding of the mechanisms behind other kinds of DTCs. Besides, the model is easy to implement, and the topological DTC for the end spin is readily realizable in experiments. {\bf Acknowledgements} We would like to thank Hui Zhai, Wei Zheng, Chengshu Li, Yanting Cheng, Haifeng Tang and Fan Yang for helpful discussions. This work has been supported by the National Natural Science Foundation of China (Grant No. 12204428).
\section{Introduction} The notion of an abstract Riemannian manifold raises the question of whether every such manifold can be isometrically realized as a submanifold of Euclidean space. This problem has been given an affirmative answer in the smooth category by Nash \cite{Nash:1954:CII}, provided that the codimension of the submanifold is sufficiently large. If one asks the more specific question of whether a given 2-dimensional Riemannian manifold $(M,g)$ can be isometrically immersed into Euclidean $3$-space, not too much is known. There are general local existence results for real analytic metrics \cite{Spivak:1975:CIDG} and for smooth metrics under certain curvature assumptions \cite{Lin:1986:LIE}. Non-existence results are easier to come by: for instance, Hilbert's classical result that the hyperbolic plane does not admit an isometric immersion into $\mathbb{R}^3$, or the fact that a compact non-positively curved $2$-dimensional Riemannian manifold cannot be isometrically immersed into $\mathbb{R}^3$. \begin{figure}[h] \center \includegraphics[width=.85\textwidth]{DoubleTorus.png} \caption{\label{fig:genus2-hyperbolic} A smooth, almost isometric immersion of the Riemannian surface of genus 2 with constant curvature shown on the left found by the algorithm in \cite{Chern:2018:shape}. } \end{figure} Surprisingly though, if one relaxes the smoothness of the immersion, every $2$-dimen\-sional Riemannian manifold $(M,g)$ admits a $C^1$-isometric immersion $f\colon M\to \mathbb{R}^3$ into Euclidean space \cite{Nash:1954:CII,Kuiper:1955:CII,Gromov:1986:PDR}. Unfortunately, neither the original existence proofs nor the recent explicit constructions of such isometric immersions~\cite{Borrelli:2012:FTT,Borrelli:2013:IES} reflect much of the underlying geometry of $(M,g)$, as shown in Figure~\ref{fig:C1-flattorus}. On the other hand, there are piecewise linear embeddings of a flat torus which make visible its intrinsic geometry (see Figure~\ref{fig:linear-flatorus}). 
In a more general vein, one could attempt to find isometric immersions in the class of piecewise smooth immersions $f\colon M\to \mathbb{R}^3$, that is, local topological embeddings whose restrictions to the closed faces of a triangulation of $M$ are smooth. Experiments carried out with a recently developed numerical algorithm~\cite{Chern:2018:shape} provide support for the following conjecture. \begin{Conjecture*} Given a Riemannian surface $(M,g)$, there exists a piecewise smooth isometric immersion $f\colon M\to \mathbb{R}^3$ in each regular homotopy class. \end{Conjecture*} The added detail---to realize a given intrinsic geometry within a prescribed regular homotopy class---is advantageous in applications to computer graphics~\cite{Chern:2018:shape} and also for the theoretical approach to the isometric immersion problem. It is the latter which will be discussed in this paper. Our objective is to rephrase the isometric immersion problem of an oriented Riemannian surface $(M,g)$ into Euclidean space $\mathbb{R}^3$ as a variational problem with parameters whose minima, if they were to exist, converge (for limiting parameter values) to isometric immersions $f\colon M\to \mathbb{R}^3$ in a given regular homotopy class. As was pointed out already, for a generic metric $g$ there will be no smooth isometric immersion into $\mathbb{R}^3$, let alone one within a prescribed regular homotopy class. But experiments with the aforementioned algorithm~\cite{Chern:2018:shape} give some credence to our conjecture that there should be minima in the larger class of piecewise smooth immersions. Adjusting the parameters in our functional, the Willmore energy $\int H^2$, the average of the squared mean curvature of the immersion, is one of its contributors and hence immersions close to a minimizer will avoid excessive creasing.
This has the effect that potential minimizers of our functional reflect the intrinsic geometry of $(M,g)$ well, in contrast to the $C^1$-isometric immersions by Nash and Kuiper~\cite{Nash:1954:CII,Kuiper:1955:CII,Gromov:1986:PDR}. In order to explain our approach in more detail, we first relax the original problem to that of finding a {\em conformal} immersion $f\colon M\to\mathbb{R}^3$ of a compact {\em Riemann surface} $M$ in a given regular homotopy class. It is known that such a conformal immersion always exists in the smooth category~\cite{Garsia:1962:ASES,Ruedy:1971:EORS}, and hence our variational problem will have a minimizer if we turn off the contribution from the Willmore energy. But keeping the Willmore energy in the functional has the effect that potential minimizers will minimize the Willmore energy in a given conformal and regular homotopy class, that is, will be constrained Willmore minimizers. There are partial characterizations of constrained Willmore minimizers when $M$ has genus one~\cite{KuwSch:2001:WFSIE, SchNdy:2014:CCWM, HelNdy:2017:ECWM}, but hardly anything---besides existence~\cite{KuwSch:2001:WFSIE} if the Willmore energy is below $8\pi$---is known in higher genus, even though there are some conjectures \cite{HelPed:2017:TCWC}. One of our future goals is to develop a discrete algorithm based on the approach outlined in this paper to find conformal immersions of a compact Riemann surface in a fixed regular homotopy class minimizing the Willmore energy. Given a (not necessarily conformal) immersion $f\colon M\to\mathbb{R}^3$, we can decompose its derivative $df\in\Omega^1(M,\mathbb{R}^3)$ uniquely into $df=\omega\circ B$ where $\omega\in \Gamma(\Conf(TM,\mathbb{R}^3))$ is a conformal, nowhere vanishing $\mathbb{R}^3$-valued $1$-form and $B\in\Gamma(\End(TM))$ is a positive, self-adjoint (with respect to any conformal metric) endomorphism with $\det B=1$. Obviously $B=\Id$ if and only if $f$ is conformal.
The space of conformal $1$-forms $\Conf(TM,\mathbb{R}^3)$ is a principal bundle with stretch rotations $\mathbb{R}_{+}{\bf SO}(3)$ as a structure group acting from the left. In particular, any two $\omega,\,\tilde{\omega}\in \Gamma(\Conf(TM,\mathbb{R}^3))$ are related via $\tilde{\omega}= h\, \omega$ for a unique $h\colon M\to \mathbb{R}_{+}{\bf SO}(3)$. Two immersions $f,\,\tilde{f}\colon M\to\mathbb{R}^3$ are regularly homotopic if and only if their derivatives $df$ and $d\tilde{f}$ are homotopic~\cite{Smale:1959:CIT,Hirsch:1959:IOM}. The space of positive, self-adjoint bundle maps $B\in\Gamma(\End(TM))$ with $\det B=1$ is contractible, and we obtain the equivalent reformulation that $f,\, \tilde{f}\colon M\to\mathbb{R}^3$ are regularly homotopic if and only if their corresponding $\omega$ and $\tilde{\omega}$ are homotopic in $\Gamma(\Hom(TM,\mathbb{R}^3))$. As will be detailed in Section~\ref{sec:spinbundles}, any nowhere vanishing $\omega\in \Gamma(\Conf(TM,\mathbb{R}^3))$ induces a spin bundle $L\to M$ and $\omega=(\psi,\psi)$ for a unique (up to sign) nowhere vanishing section $\psi\in\Gamma(L)$ where $(\cdot,\cdot)\colon L\times L\to \Hom(TM,\H)$ denotes the spin pairing. Since homotopic $\omega\in \Gamma(\Conf(TM,\mathbb{R}^3))$ give rise to isomorphic spin bundles, we obtain a description of regular homotopy classes of immersions via isomorphism classes of their induced spin bundles $L\to M$. If the genus of $M$ is $p$, there are $2^{2p}$ many non-isomorphic spin bundles and hence $2^{2p}$ many regular homotopy classes of immersions $f\colon M\to \mathbb{R}^3$ (see Figure~\ref{fig:tori-spinstructures}). \begin{figure}[b] \center \includegraphics[width=.9\textwidth]{figs/flattoruscomp.png} \caption{\label{fig:linear-flatorus} The angular defects of the embedded, piecewise linear torus shown on the left are the same at all vertices and their sum is zero, implying that the induced metric is flat. 
The right image shows a smooth, almost isometric immersion of another flat torus found by the algorithm in \cite{Chern:2018:shape}.} \end{figure} Fixing a regular homotopy class, that is, a spin bundle $L\to M$, our aim is to find a nowhere vanishing section $\psi\in\Gamma(L)$ in such a way that the $\mathbb{R}^3$-valued conformal $1$-form $\omega=(\psi,\psi) \in \Gamma(\Conf(TM,\mathbb{R}^3))$ is exact. In this case, the primitive $f\colon M\to \mathbb{R}^3$ of $\omega=df$ is a conformal immersion in the given regular homotopy class. We show (see also \cite{icmpaper, klassiker}) that the closedness of $\omega=(\psi,\psi)$ is equivalent to the non-linear Dirac equation \begin{equation}\label{eq:Dirac} \dbar \psi +\tfrac{1}{2}H J\psi (\psi, \psi)=0, \end{equation} where $\dbar$ is the Dirac structure (see Lemma~\ref{lem:Diracstructure}) on the spin bundle $L$. The function $H\colon M\to \mathbb{R}$ is the mean curvature, calculated with respect to the induced metric $|df|^2$, of the resulting conformal immersion $f\colon \tilde{M}\to \mathbb{R}^3$ on the universal cover with translation periods. As we shall discuss in Section~\ref{sec:variational}, the Dirac equation~\eqref{eq:Dirac} can be given a variational characterization: for non-negative coupling constants $\epsilon=(\epsilon_1,\epsilon_2,\epsilon_3)$, we consider the family of variational problems $E_{\epsilon}\colon \Gamma(L^{\times})\to\mathbb{R}$ on nowhere vanishing sections of $L$ given by \begin{equation}\label{eq:functional} E_{\epsilon}(\psi)=\epsilon_1\int_M \tfrac{\langle*\dbar\psi\wedge\dbar\psi\rangle}{|\psi|^2}+ (\epsilon_2-\epsilon_1)\int_M\tfrac{{\langle *\dbar\psi\wedge\psi(\psi,\psi)\rangle}^2}{|\psi|^8}+ (\epsilon_3-\epsilon_1)\int_M\tfrac{{\langle *\dbar\psi\wedge J\psi(\psi,\psi)\rangle}^2}{|\psi|^8}\,.
\end{equation} Here $|\cdot|^2\colon L\to |K|$ denotes the half-density valued quadratic form $|\psi|^2=|(\psi,\psi)|$ on $L$ and $\langle \cdot, \cdot \rangle\colon L\times L\to |K|$ is the half-density valued inner product obtained via polarization. The complex structure $*$ on $TM^{*}$ is the {\em negative} of the Hodge-star on $1$-forms. It is worth noting that the functional $E_{\epsilon}$ is conformally invariant, that is, well-defined on the Riemann surface $M$, and independent of constant scalings of $\psi$. In particular, we could normalize $\psi$ by restricting to the $L^4$-sphere of sections satisfying $\int_M|\psi|^4=1$. The last integral in \eqref{eq:functional} turns out to be the Willmore functional $\int_M H^2|\psi|^4$ and the first two integrals measure, in $L^2$, the failure of the non-linear Dirac equation~\eqref{eq:Dirac} to hold. Thus, for $\epsilon_3=0$ and $\epsilon_1, \epsilon_2>0$, the functional attains its minimum value $E_{\epsilon}(\psi)=0$ at nowhere vanishing sections $\psi$ which correspond to---in general rather singular---conformal immersions. It is therefore conceivable that minimizers of $E_{\epsilon}$ for $\epsilon_3>0$, which has the effect of keeping the Willmore energy as a regularizer, will converge as $\epsilon_3$ tends to zero to smooth conformal immersions of $M$ minimizing the Willmore energy, that is, constrained Willmore surfaces. Since the Dirac equation only guarantees the closedness of $(\psi,\psi)$, the resulting conformal immersion given by $df=(\psi,\psi)$ generally will have translation periods which are controlled by adding the squared lengths of the period integrals $|\int_{\gamma} (\psi,\psi)|^2$ to the functional~\eqref{eq:functional}. As it turns out, this strategy works surprisingly well~\cite{Chern:2018:shape} when searching for isometric immersions $f\colon M\to\mathbb{R}^3$ of a compact, oriented Riemannian surface $(M,g)$.
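For orientation, we recall the classical benchmark for the Willmore energy appearing as the last term in \eqref{eq:functional} (a standard fact, included only as a reference point): for the round sphere of radius $r$ in $\mathbb{R}^3$ one has $H=\tfrac{1}{r}$, and hence
\[
\int_{S^2}H^2\,dA=\frac{1}{r^2}\cdot 4\pi r^2=4\pi\,,
\]
which is the absolute minimum of the Willmore energy among all closed surfaces in $\mathbb{R}^3$, attained precisely by round spheres.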
Since the conformal class of $g$ gives $M$ the structure of a Riemann surface, we can consider the family of functionals~\eqref{eq:functional} with the additional {\em isometric constraint} \[ |\psi|^4=g\,. \] Then the resulting minimizers under the procedure described above will be isometric immersions $f\colon M\to\mathbb{R}^3$ whose Willmore energy is ``small''. The resulting surfaces provide examples of how piecewise smooth, and sometimes even smooth, isometric immersions of compact Riemannian surfaces might look, as can be seen in Figures~\ref{fig:genus2-hyperbolic}, \ref{fig:linear-flatorus}, and \ref{fig:genus2-abeljac}. We should point out that spinorial descriptions of surfaces have been applied to a variety of problems, both in the discrete~\cite{Crane:2011:STD,Crane:2013:RFC,Zi:2018:DIEDO,Hoffmann:2018:Discrete} and smooth settings~\cite{kamberov, icmpaper, klassiker, KusSch:1996:SRSS,Taimanov:2008:S3DLG, Konopelchenko:2000:WR,heller-lawson}. The present paper is novel in that it focuses on the spinorial construction of conformal and isometric immersions of surfaces in $\mathbb{R}^3$ from a purely intrinsic point of view. \section{Spin bundles and regular homotopy classes}\label{sec:spinbundles} Given a (not necessarily oriented or compact) $2$-dimensional manifold $M$, we will discuss how to relate a regular homotopy class of immersions $f\colon M \to \mathbb{R}^3$ to an isomorphism class of spin bundles $L$ over $M$. The material is somewhat folklore \cite{Smale:1959:CIT,Hirsch:1959:IOM,Pinkall:1985:RHC,icmpaper,klassiker}, even though there seems to be no single source one could reference. Recall that two smooth immersions $f,\tilde{f}\colon M\to \mathbb{R}^3$ are {\em regularly homotopic} if and only if there is a smooth homotopy via immersions $f_{t}\colon M\to \mathbb{R}^3$ with $f_0=f$ and $f_1=\tilde{f}$.
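Before turning to spin bundles, it may help to recall two classical instances of these notions (standard facts, included only for orientation). Since the number of regular homotopy classes will turn out to be $2^{2p}$ for a closed oriented surface of genus $p$, we have
\[
p=0:\ 2^{2p}=1\,,\qquad p=1:\ 2^{2p}=4\,,\qquad p=2:\ 2^{2p}=16\,.
\]
In particular, any two immersions $S^2\to\mathbb{R}^3$ are regularly homotopic, which is Smale's theorem underlying sphere eversion, and a torus immerses in exactly four regular homotopy classes, shown in Figure~\ref{fig:tori-spinstructures}.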
It is well known~\cite{Smale:1959:CIT,Hirsch:1959:IOM} that two immersions $f$ and $\tilde{f}$ are regularly homotopic if and only if their derivatives $df$ and $d\tilde{f}$ are smoothly homotopic as sections in $\Hom(TM,\mathbb{R}^3)$. \begin{Definition}\label{def:spinbundle} A {\em spin bundle} over $M$ is a right quaternionic line bundle $L\to M$ together with a non-degenerate quaternionic skew-Hermitian pairing \begin{equation}\label{eq:pairing} (\cdot,\cdot)\colon L\times L\to \Hom(TM,\H)\,, \end{equation} which we refer to as a {\em spin pairing}. Two spin bundles $L,\tilde{L}\to M$ are isomorphic if there is a bundle isomorphism $T\colon L\to\tilde{L}$ intertwining their respective spin pairings. \end{Definition} Later in the paper we use the extension of the spin pairing to the $2$-form valued pairing \[ (\cdot,\cdot)\colon \Hom(TM,L) \times L\to \Lambda^2 TM^{*}\otimes\H\,,\qquad (\mu,\psi)_{X,Y}:=(\mu_X,\psi)_Y-(\mu_Y,\psi)_X \] obtained by inserting an $L$-valued $1$-form $\mu$ on the left, where $X,Y\in TM$. Requiring the skew-Hermitian property \[ \overline{(\mu,\psi)}=-(\psi,\mu) \] to persist in this setting necessitates the analogous definition \[ (\cdot,\cdot)\colon L\times \Hom(TM,L)\to \Lambda^2 TM^{*}\otimes\H\,,\qquad (\psi,\mu)_{X,Y}=(\psi,\mu_X)_Y-(\psi,\mu_Y)_X \] when inserting the $L$-valued $1$-form $\mu$ on the right. Note that by transversality a quaternionic line bundle $L\to M$ always has a nowhere vanishing smooth section $\psi\in\Gamma(L)$. We denote the (right) $\H^{\times}$ principal bundle obtained by removing the zero-section of $L$ by $L^{\times}$; then $\Gamma(L^{\times})$ is the space of nowhere vanishing sections. The following are immediate consequences of the definition of a spin bundle: \begin{enumerate} \item $\overline{(\psi,\psi)}=-(\psi,\psi)$, so that $(\psi,\psi)\in \Hom(TM,\mathbb{R}^3)$ is an $\mathbb{R}^3$-valued $1$-form where we identify $\mathbb{R}^3=\Im \H$.
\vspace{1em} \item Any two sections $\psi,\varphi\in\Gamma(L^{\times})$ differ by a nowhere vanishing function $\lambda\in C^{\infty}(M,\H)$, say $\varphi=\psi\lambda$, and hence \[ (\varphi, \varphi)=(\psi\lambda,\psi\lambda)=\bar{\lambda}(\psi,\psi)\lambda \] is pointwise related to $(\psi,\psi)$ by a stretch rotation in $\mathbb{R}^3$. Therefore, the Riemannian metrics $|(\varphi, \varphi)|^2 =|\lambda|^4|(\psi,\psi)|^2$ are conformally equivalent and $M$ inherits a conformal structure, rendering $(\psi,\varphi)\in\Omega^1(M,\H)$ conformal. Hence an oriented $M$ becomes a Riemann surface, in which case we denote its complex structure by $*\colon TM^*\to TM^*$, the negative of the Hodge-star operator on $1$-forms. Since the $(\psi,\varphi)\in \Hom(TM,\H)$ are now conformal $1$-forms, there is, at each point of $M$, a unique (quaternionic linear) complex structure $J\in\End(L)$ such that \begin{equation}\label{eq:J-relate} *(\psi,\varphi)=(J\psi,\varphi)=(\psi, J\varphi) \end{equation} for all $\psi,\varphi\in L$. In particular, $L$ becomes a rank $2$ complex vector bundle. \end{enumerate} Note that if we had started from a Riemann surface in our Definition~\ref{def:spinbundle} of a spin bundle, the existence of the complex structure $J\in\Gamma(\End(L))$ and this last compatibility relation~\eqref{eq:J-relate} would become part of the axioms. \begin{Example}[The induced spin bundle]\label{ex:inducedspin} Let $f\colon M\to\mathbb{R}^3$ be a (not necessarily conformal) immersion of a conformal surface. We can uniquely decompose the derivative \begin{equation}\label{eq:polar} df=\omega\circ B \end{equation} into a nowhere vanishing conformal $1$-form $\omega\in\Gamma(\Conf(TM,\mathbb{R}^3))$ and a positive, self-adjoint (with respect to any conformal metric) bundle isomorphism $B\in\Gamma(\End(TM))$ with $\det B=1$. Note that $B=\Id$ if and only if $f$ is conformal.
We define the induced spin bundle to be the trivial quaternionic line bundle \[ L_f:=M\times \H \] together with the spin pairing \[ (\psi,\varphi)= \bar{\psi}\,\omega\,\varphi\,. \] Note that, by construction, $\omega=(1,1)$ with $1\in\Gamma(L_f)$ the constant section. \begin{figure} \center \includegraphics[width=.85\textwidth]{figs/flattorusC1.jpg} \caption{\label{fig:C1-flattorus}A square flat torus can be isometrically \(C^1\)-embedded into Euclidean 3-space \cite{Borrelli:2013:IES}. Pictures by the H{\'e}v{\'e}a project.} \end{figure} In case $M$ is oriented, and thus a Riemann surface, the immersion $f\colon M\to\mathbb{R}^3$ has an oriented normal $N\colon M\to S^2$, which, viewed as an imaginary quaternion, satisfies $N^2=-1$. Then the conformal $1$-form $\omega\in\Gamma(\Conf(TM,\mathbb{R}^3))$ satisfies $*\omega=N\omega$ and $J\varphi:=-N\varphi$, for $\varphi\in L_f$, is the unique complex structure on $L_f$ which satisfies the compatibility properties \eqref{eq:J-relate}. \end{Example} \begin{Theorem}\label{thm:spin-homotop} The assignment $f\mapsto L_f$ is a bijection between regular homotopy classes of immersions $f\colon M\to\mathbb{R}^3$ and isomorphism classes of spin bundles $L\to M$. \end{Theorem} \begin{proof} Let $f_t\colon M\to \mathbb{R}^3$ be a regular homotopy between two (not necessarily conformal) immersions $f_0=f$ and $f_1=\tilde{f}$. Then their derivatives $df, d\tilde{f}$ are homotopic by~\cite{Smale:1959:CIT,Hirsch:1959:IOM}, and therefore, by uniqueness of \eqref{eq:polar}, we have a homotopy $\omega_t$ in $\Gamma(\Conf(TM,\mathbb{R}^3))$ connecting $\omega_0=\omega$ and $\omega_1=\tilde{\omega}$. Since $\Conf(TM,\mathbb{R}^3)$ is an $\mathbb{R}_{+}{\bf SO}(3)$ principal bundle, there exists a smooth family of maps $h_t\colon M\to \mathbb{R}_{+}{\bf SO}(3)$, starting at the identity $h_0=1$, with $\omega_t=h_t\,\omega$.
Whence we conclude that there is a lift $\lambda_t\colon M\to \H^{\times}$ of $h_t$ with $\lambda_0=1$ and $\omega_t=\bar{\lambda}_t\omega \lambda_t$, and in particular we have \[ \tilde{\omega}=\bar{\lambda}_1\omega \lambda_1\,. \] The latter implies that the map $\varphi\mapsto T(\varphi):=\lambda_1\varphi$ is an isomorphism between the induced spin bundles $L_f$ and $L_{\tilde{f}}$. In order to show the converse, let $L\to M$ be a spin bundle and choose a nowhere vanishing section $\psi\in\Gamma(L^{\times})$. Then $\omega=(\psi,\psi)\in\Gamma(\Conf(TM,\mathbb{R}^3))$ is a maximal rank $2$ conformal bundle map. By Smale's theorem \cite{Smale:1959:CIT}, there exists an immersion $f\colon M\to\mathbb{R}^3$ with $df$ homotopic to $\omega$ in $\Gamma(\Hom(TM,\mathbb{R}^3))$. From what was said before, we can conclude that $L\cong L_f$. Since all sections of $\Gamma(L^{\times})$ are homotopic, the regular homotopy class of the resulting immersion is independent of the nowhere vanishing section chosen. In particular, immersions constructed from isomorphic spin bundles are regularly homotopic. \end{proof} So far, we were mainly concerned with the differential topological properties of spin bundles. To understand how to construct conformal and isometric immersions from spin bundles, we additionally need to investigate their holomorphic aspects. Let $L\to M$ be a spin bundle over a Riemann surface $M$, in which case the spin pairing \eqref{eq:pairing} is compatible by \eqref{eq:J-relate} with the complex structures on $M$ and $L$. To fix notation and for future use, we list a number of properties of spin bundles over a Riemann surface that follow immediately from their definition.
\begin{enumerate} \item The complex line subbundles \[ E_{\pm}=\{\varphi\in L\,;\, J\varphi=\pm\varphi i\}\subset L \] are isomorphic via quaternionic multiplication by $j$ on the right, and thus as a complex rank $2$ bundle $L\cong E\oplus E$ is isomorphic to the double of the complex line bundle $E= E_{+}\cong E_{-}$. This isomorphism is also quaternionic linear provided $E\oplus E$ has the right quaternionic structure given by the Pauli matrices. \item The spin pairing \eqref{eq:pairing} restricts to a non-degenerate complex pairing $E\times E\to K$ with values in the canonical bundle $K$ of $M$, exhibiting $E\to M$ as a complex spin bundle, that is $E^2\cong K$. The holomorphic structure $\dbar^K$ of the canonical bundle is given by the exterior derivative $d$ on $\Gamma(K)=\Omega^{1,0}(M,\mathbb{C})$. The isomorphism $E^2\cong K$ induces a unique holomorphic structure $\dbar^{E}$ on $E$ such that $\dbar^K=\dbar^E\otimes\dbar^E$ or, equivalently, \[ d (\psi,\varphi)=(\dbar^{E}\psi,\varphi)+(\psi, \dbar^{E} \varphi) \] for $\psi,\varphi\in\Gamma(E)$. In particular, if $M$ is compact, $\deg E= p-1$ is half of the degree of the canonical bundle, where $p=\text{genus}\,M$. Furthermore, $E$ with $\dbar^E$ is a holomorphic spin bundle and since there are $2^{2p}$ many holomorphic square roots of the canonical bundle $K$---the half lattice points in the Picard torus of isomorphism classes of degree $p-1$ holomorphic line bundles---there are $2^{2p}$ many isomorphism classes of holomorphic spin bundles $E\to M$ over a compact Riemann surface. \end{enumerate} \begin{Lemma}\label{lem:Diracstructure} Let $L\to M$ be a spin bundle. Then there exists a unique operator \[ \dbar\colon \Gamma(L)\to\Gamma(\bar{K}L) \] called the {\em Dirac structure}, with the following properties. \begin{enumerate} \item $\dbar$ is complex linear, that is $[J,\dbar]=0$. 
\item $\dbar$ satisfies the product rule \[ \dbar(\psi\lambda)=(\dbar \psi)\lambda+ (\psi \,d\lambda)^{0,1} \] over quaternion valued functions $\lambda\in C^{\infty}(M,\H)$, where $(\cdot)^{0,1}$ denotes the usual type decomposition of complex vector bundle valued $1$-forms. In particular, $\dbar$ is a (right) quaternionic and (left) complex linear first order elliptic operator. \item $\dbar$ is compatible with the spin pairing \begin{equation}\label{eq:d-comp} d(\psi,\varphi)=(\dbar\psi,\varphi)+(\psi, \dbar\varphi)\,. \end{equation} \end{enumerate} \end{Lemma} \begin{proof} Since $L\cong E\oplus E$ is the double of a complex holomorphic spin bundle $E$, the operator $\dbar:=\dbar^{E}\oplus\dbar^{E}$ can be shown to fulfill the requirements of the lemma. Any other operator satisfying the properties of the lemma has to be of the form $\dbar+\alpha$ with $\alpha\in\Gamma(\bar{K})$ a $1$-form of type $(0,1)$. But then \eqref{eq:d-comp} implies that $\alpha=0$. \end{proof} \begin{Corollary} Let $M$ be a compact oriented surface of genus $p$. Then there are $2^{2p}$ many isomorphism classes of spin bundles $L\to M$ and therefore, by Theorem~\ref{thm:spin-homotop}, also $2^{2p}$ many regular homotopy classes of immersions $f\colon M\to\mathbb{R}^3$. \end{Corollary} \begin{proof} We know that a spin bundle $L\to M$ induces a unique complex structure on $M$ and $L$. Since $L\cong E\oplus E$ for a holomorphic spin bundle $E\to M$, we conclude that there are $2^{2p}$ many isomorphism classes of spin bundles $L\to M$. \end{proof} At this point it is helpful to briefly review the notion of a quaternionic holomorphic structure \cite{icmpaper} on a quaternionic line bundle $L\to M$ over a Riemann surface. Such a structure is given by an operator \[ D\colon \Gamma(L)\to\Gamma(\bar{K}L) \] satisfying the product rule \[ D(\psi\lambda)=(D \psi)\lambda+ (\psi \,d\lambda)^{0,1} \] over quaternion valued functions $\lambda\in C^{\infty}(M,\H)$. 
Note that choosing $\lambda\in\H$ constant, the product rule implies that $D$ is quaternionic linear. If $L\to M$ is a spin bundle, then we can demand the quaternionic holomorphic structure to be compatible with the spin pairing. \begin{Definition}\label{def:holospin} Let $L\to M$ be a spin bundle over a Riemann surface. A quaternionic holomorphic structure $D\colon \Gamma(L)\to\Gamma(\bar{K}L)$ is called a {\em quaternionic holomorphic spin structure}, if $D$ is compatible with the spin pairing \begin{equation}\label{eq:D-compatible} d(\psi,\varphi)=(D\psi,\varphi)+(\psi, D\varphi) \end{equation} where $\psi,\varphi\in\Gamma(L)$. \end{Definition} Note that by Lemma~\ref{lem:Diracstructure}, the Dirac structure $\dbar$ on a spin bundle $L\to M$ is a quaternionic holomorphic spin structure, in fact the unique one commuting with the complex structure $J$ on $L$. The general quaternionic holomorphic spin structure $D$ will not commute with $J$, and therefore will have a decomposition \begin{equation}\label{eq:D-decompose} D=D_{+}+D_{-} \end{equation} into $J$ commuting and $J$ anti-commuting parts. The component $D_{+}=\dbar+\alpha$, a complex holomorphic structure on $L$, differs from the Dirac structure $\dbar$ by a $(0,1)$-form $\alpha\in\Gamma(\bar{K})$, where we identify $\End_{+}(L)\cong \underline{\mathbb{C}}$. The component $D_{-}\in\Gamma(\bar{K}\End_{-}(L))$ is a $(0,1)$-form with values in the complex antilinear endomorphisms $\End_{-}(L)$. In order to characterize quaternionic holomorphic spin structures, it is helpful to identify $\bar{K}\End_{-}(L)$ with {\em half-densities}. Recall that the bundle of half-densities is the real, oriented line bundle $|K|\to M$, whose fiber over $x\in M$ is given by $|K|_x=\mathbb{R}\sqrt{g_x}$, where $g$ is a Riemannian metric in the conformal class of $M$. 
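Concretely (an elementary unravelling of the definition), in a local conformal coordinate $z=x+iy$ on $M$ a half-density is an expression $u\,|dz|$ with $u$ a real function, which transforms with the factor $|\tfrac{dz}{dw}|$ under a holomorphic change of coordinate $z=z(w)$; its square
\[
(u\,|dz|)^2=u^2\,|dz|^2\ \cong\ u^2\,dx\wedge dy
\]
is a density, in accordance with the identification of $|K|^2$ with $2$-forms used below.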
The half-density valued quadratic form \begin{equation}\label{eq:quadraticform} |\cdot|^2:L\to |K|\,,\qquad |\psi|^2:=|(\psi,\psi)| \end{equation} on the spin bundle $L$ can be polarized to the non-degenerate, symmetric inner product \begin{equation}\label{eq:anglebrac} \langle \cdot, \cdot \rangle\colon L\times L\to |K|\,. \end{equation} We frequently will identify $|K|^2\cong \Lambda^2 TM^*$ by assigning a metric $g$ its volume $2$-form $\vol_g$. Since $|\psi|^4\in\Gamma(|K|^2)$ for $\psi\in\Gamma(L)$, a spin bundle carries the conformally invariant $L^4$-metric $\int_M |\psi|^4$ on $\Gamma(L)$. \begin{Lemma}\label{lem:eta} Let $L\to M$ be a spin bundle over a Riemann surface. Then the complex line bundle $\bar{K}\End_{-}(L)$ is isomorphic to the complexified half-density bundle \[ |K|\otimes\mathbb{C} \cong \bar{K}\End_{-}(L)\colon U+VJ\mapsto (U+VJ)\eta \] where $\eta\in\Gamma(\bar{K}\End_{-}(L)|K|^{-1})$ is the nowhere vanishing section \[ \eta:=J\psi\frac{(\psi,\cdot)}{|\psi|^2}\,. \] Notice that $\eta$ is well-defined independent of the choice of the nowhere vanishing section $\psi\in\Gamma(L^{\times})$. \end{Lemma} \begin{proof} If $\tilde{\psi}=\psi\lambda$ is another nowhere vanishing section of $L$, then \[ \tilde{\psi}\frac{(\tilde{\psi},\cdot)}{|\tilde{\psi}|^2}=\psi\lambda\frac{\bar{\lambda}(\psi,\cdot)}{|\lambda|^2|{\psi}|^2}=\psi\frac{(\psi,\cdot)}{|\psi|^2} \] which shows that $\eta$ is well-defined. It remains to verify that $*\eta = -J\eta$, that is, $\eta\in\Gamma(\bar{K}\End_{-}(L)|K|^{-1})$. Let $\psi\in\Gamma(L^{\times})$ be a nowhere vanishing section so that $J\psi=\psi N$ with $N^2=-1$. Then, using the compatibility relation~\eqref{eq:J-relate}, we obtain \[ *\eta=J\psi\frac{*(\psi,\cdot)}{|\psi|^2}=J\psi\frac{(J\psi,\cdot)}{|\psi|^2}=J\psi\frac{(\psi N,\cdot)}{|\psi|^2}=J\psi(-N)\frac{(\psi,\cdot)}{|\psi|^2}=-J\eta, \] which finishes the proof of the lemma. 
\end{proof} With these preparations, we can now give a characterization of quaternionic holomorphic spin structures, which can also be found in~\cite{icmpaper}, albeit from a slightly different perspective. \begin{Lemma}\label{lem:spin-decompose} Every quaternionic holomorphic spin structure $D$ on a spin bundle $L\to M$ over a Riemann surface is of the form \[ D=\dbar+U\eta \] with $\dbar$ the Dirac structure and $U\in\Gamma(|K|)$ a real half-density, the {\em Dirac potential}. \end{Lemma} \begin{figure} \center \begin{picture}(385,80)(0,0) \put(-10,-10){\includegraphics[height=3.3cm]{tunneledsphere_final.pdf}} \put(75,0){(a)} \put(175,0){(b)} \put(275,0){(c)} \put(365,0){(d)} \end{picture} \caption{\label{fig:tunneled sphere} Consider the Riemannian torus obtained by identifying the two boundary loops of the surface shown on the left (a). We believe that this torus does not admit a \(C^\infty\)-immersion into \(\mathbb{R}^3\). However, we can cut the surface by a plane (b) and reflect its upper part to obtain the surface (c). Applying this construction to the lower part of the surface results in the piecewise smooth isometric immersion of the torus (d).} \end{figure} \begin{proof} From~\eqref{eq:D-decompose} and Lemma~\ref{lem:eta}, we know that \[ D=\dbar+\alpha+(U+JV)\eta \] with $\dbar$ the Dirac structure, $\alpha\in\Gamma(\bar{K})$, and $U,V\in\Gamma(|K|)$ half-densities. By Lemma~\ref{lem:Diracstructure} the Dirac structure already fulfills $d(\psi,\varphi)=(\dbar\psi,\varphi)+(\psi,\dbar\varphi)$. Thus, $D$ is a quaternionic holomorphic spin structure if and only if the relation \begin{equation}\label{eq:condition} ((\alpha+(U+JV)\eta)\psi,\varphi)+(\psi, (\alpha+(U+JV)\eta)\varphi)=0 \end{equation} holds. To evaluate this condition, we will use the following identities, which are easy to verify. \begin{enumerate} \item Every $\alpha\in\Gamma(\bar{K})$ is of the form $\alpha=\beta+*\beta J$ for a unique real $1$-form $\beta\in\Omega^{1}(M,\mathbb{R})$, and then \[ (\alpha \psi,\varphi)+(\psi,\alpha\varphi)=2(\beta-*\beta)\wedge (\psi,\varphi)\,. \] \item If $\psi\in\Gamma(L^{\times})$ is nowhere vanishing, then \[ |\psi|^2\left((\eta\psi,\varphi)+(\psi,\eta\varphi)\right)=0 \] and \[ |\psi|^2 \left( (J\eta\psi,\varphi)+(\psi,J\eta\varphi) \right)=2(\psi,\psi)\wedge(\psi,\varphi)\,. \] \end{enumerate} Since the spin pairing is quaternionic skew-Hermitian and $\eta$ is quaternionic linear, we may put $\varphi=\psi\in\Gamma(L^{\times})$ in \eqref{eq:condition}. Together with the above identities, \eqref{eq:condition} unravels to \begin{align*} 0&=\left((\alpha+(U+JV)\eta)\psi,\psi\right)+(\psi, (\alpha+(U+JV)\eta)\psi)\\ &=2(\beta-*\beta)\wedge (\psi,\psi)+ 2\frac{V}{|\psi|^2}(\psi,\psi)\wedge(\psi,\psi)\,. \end{align*} Letting $J\psi=\psi N$ with $N^2=-1$, we deduce from the properties of the spin pairing~\eqref{eq:J-relate} that $(\psi,\psi)$ anti-commutes with $N$. Hence, the $\mathbb{R}^3$-valued $2$-forms $(\beta-*\beta)\wedge (\psi,\psi)$ and $\frac{V}{|\psi|^2}(\psi,\psi)\wedge(\psi,\psi)$ in the last relation take values in complementary subspaces of $\mathbb{R}^3$. Therefore, $D=\dbar+\alpha+(U+JV)\eta$ is a quaternionic holomorphic spin structure if and only if $V=0$ and $\beta-*\beta=0$, which implies $\beta=0$ and thus $\alpha=0$.
\end{proof} In Example~\ref{ex:inducedspin} we showed how an immersion $f\colon M\to \mathbb{R}^3$ of a surface $M$ induces a spin bundle $L_f\to M$. In case $f$ is a conformal immersion of a Riemann surface, the induced spin bundle $L_f$ additionally carries an induced quaternionic holomorphic spin structure. \begin{Example}[Induced quaternionic holomorphic structure]\label{ex:inducedD} Let $M$ be a Riemann surface and $f\colon M\to\mathbb{R}^3$ a conformal immersion. Since $B=\Id$ in the decomposition~\eqref{eq:polar}, the spin pairing of the induced spin bundle $L_f=M\times \H$ is given by \[ (\psi,\varphi)=\bar{\psi}\,df\, \varphi \] for $\psi,\varphi\in\Gamma(L_f)$. In particular, $df=(1,1)$ for the constant section $1\in\Gamma(L_f^{\times})$. If $N\colon M\to S^2$ with $N^2=-1$ denotes the Gauss normal map of $f$, the conformality condition reads \[ *df=N\,df = -df\,N \,. \] The complex structure $J\in\Gamma(\End(L_f))$ on $L_f$ is given by the quaternionic linear endomorphism \[ J\varphi:=-N\varphi \] for $\varphi\in\Gamma(L_f)$ and the compatibility relations \eqref{eq:J-relate} hold. There is a natural quaternionic holomorphic structure on $L_f$ given by the $(0,1)$-part of the trivial connection \[ D=d^{\,0,1}\colon \Gamma(L_f)\to\Gamma(\bar{K}L_f)\colon \varphi\mapsto \frac{1}{2}(d\varphi +J*d\varphi)\,. \] In order to verify that $D$ is in fact a quaternionic holomorphic spin structure, we need to verify the compatibility \eqref{eq:D-compatible} with the spin pairing \begin{align*} d(\psi,\varphi)&=d(\bar{\psi}\,df\,\varphi)=d\bar{\psi}\wedge df\, \varphi-\bar{\psi}\,df\wedge d\varphi = \overline{d^{\,0,1}\psi}\wedge df\, \varphi-\bar{\psi}\,df\wedge d^{\,0,1}\varphi\\ &= (D\psi,\varphi)+(\psi,D\varphi)\,. \end{align*} Here we have used $\overline{d^{\,1,0}\psi}\wedge df= df\wedge d^{\,1,0}\varphi=0$ by type considerations.
Therefore, $D=d^{\,0,1}$ is a quaternionic holomorphic spin structure on $L_f$, and as such decomposes by Lemma~\ref{lem:spin-decompose} into \[ D=\dbar+U\eta \] with $\dbar$ the Dirac structure and $U\in\Gamma(|K|)$ the Dirac potential. One can easily compute \cite{icmpaper, klassiker, coimbra} that the Dirac potential is given by the mean curvature half-density $U=\tfrac{1}{2}H|df|$, where $H\colon M\to \mathbb{R}$ is the mean curvature of $f$ calculated with respect to its induced metric $|df|^2$. The constant section $1\in\Gamma(L_f^{\times})$ lies in the kernel of $D=d^{\,0,1}$, which is expressed by the non-linear Dirac equation \[ \dbar 1+ \tfrac{1}{2}H J 1(1,1) =0\,. \] Here we used $|df|=|(1,1)|=|1|^2$ and the definition of $\eta$ in Lemma~\ref{lem:eta} with $\psi=1\in\Gamma(L_f^{\times})$. The non-linear Dirac equation will be the starting point for our construction of conformal and isometric immersions in a given regular homotopy class. \end{Example} \section{Conformal and isometric immersions} \label{sec:variational} Given a Riemann surface $M$, we want to construct a conformal immersion $f\colon M\to\mathbb{R}^3$ with small Willmore energy in a given regular homotopy class. By Theorem~\ref{thm:spin-homotop}, a regular homotopy class is given by a choice of spin bundle $L\to M$ that comes equipped with the Dirac structure $\dbar$ from Lemma~\ref{lem:Diracstructure}. Any nowhere vanishing section $\psi\in\Gamma(L^{\times})$ gives rise to a putative derivative $(\psi,\psi)\in\Gamma(\Conf(TM,\mathbb{R}^3))$ of a conformal immersion in the regular homotopy class defined by $L$. The problem is that, in general, $(\psi,\psi)$ will not be closed, which is necessary for the existence of a conformal immersion $f\colon M\to\mathbb{R}^3$ satisfying $df=(\psi,\psi)$. \begin{Lemma}\label{lem:nonlinearDiracimm} Let $L\to M$ be a spin bundle over the Riemann surface $M$ and $\psi\in\Gamma(L^{\times})$ a nowhere vanishing section of $L$. 
Then the conformal 1-form $(\psi,\psi)\in\Gamma(\Conf(TM,\mathbb{R}^3))$ is closed if and only if $\psi$ solves the non-linear Dirac equation \begin{equation}\label{eq:nonlinearDirac} \dbar \psi + \frac{1}{2}HJ \psi (\psi,\psi) =0 \end{equation} for some real-valued function $H\colon M\to\mathbb{R}$. \begin{figure}[b] \center \includegraphics[width=.7\textwidth]{figs/tori.png} \caption{\label{fig:tori-spinstructures}The four regular homotopy classes of a torus.} \end{figure} The resulting conformal immersion $f\colon \widetilde{M}\to\mathbb{R}^3$ on the universal cover with translation periods satisfying $df=(\psi,\psi)$ has Gauss normal map $N\colon M\to S^2$ given by $J\psi=:-\psi N$. The induced spin bundle satisfies $L_f\cong L$, and the induced quaternionic holomorphic spin structure $d^{\,0,1}$ on $L_f$ corresponds, under this isomorphism, to the quaternionic holomorphic spin structure $D= \dbar+\frac{1}{2}HJ \psi (\psi,\cdot)$. In particular, $H$ is the mean curvature of $f$ calculated with respect to the induced conformal metric $|df|^2=|\psi|^4$. \end{Lemma} \begin{Remark} Strictly speaking, Examples~\ref{ex:inducedspin} and~\ref{ex:inducedD} are stated for a (conformal) immersion $f\colon M\to\mathbb{R}^3$ without periods, but all constructions only use information about the derivative $df$. Hence a (conformal) immersion $f\colon \tilde{M}\to\mathbb{R}^3$ on the universal cover with translation periods induces a spin bundle $L_f\to M$ together with the induced quaternionic holomorphic structure $d^{\,0,1}$ over $M$. \end{Remark} \begin{proof} From Lemma~\ref{lem:spin-decompose} we know that \[ D=\dbar+\frac{1}{2}HJ \psi (\psi,\cdot) =\dbar+U\eta \,, \] where $U=\tfrac{1}{2}H|\psi|^2$, is a quaternionic holomorphic spin structure. In particular, $D$ is compatible \eqref{eq:D-compatible} with the spin pairing \[ d(\psi,\psi)=(D\psi,\psi)+(\psi,D\psi)\,.
\] Therefore, if $\psi$ solves the non-linear Dirac equation, which, expressed in terms of $D$, reads $D\psi=0$, the conformal $1$-form $(\psi,\psi)$ is closed. The converse follows from computations similar to the proof of Lemma~\ref{lem:spin-decompose}. Trivializing $L\cong M\times\H$ via the nowhere vanishing section $\psi\in\Gamma(L^{\times})$ provides the isomorphism $L\cong L_f$. The remaining statements follow from Example~\ref{ex:inducedD}. \end{proof} Given a spin bundle $L\to M$, our goal is to set up a variational problem with parameters \[ E_{\epsilon}\colon \Gamma(L^{\times})\to\mathbb{R} \] on the space of nowhere vanishing sections of $L$, whose minima will give rise to conformal immersions $f\colon M\to\mathbb{R}^3$. From the previous Lemma~\ref{lem:nonlinearDiracimm} we know that a nowhere vanishing section $\psi\in\Gamma(L^{\times})$ gives rise to a conformal immersion (with translation periods) whose derivative satisfies $df=(\psi,\psi)$, provided that $\psi$ solves the non-linear Dirac equation~\eqref{eq:nonlinearDirac}. In other words, $\psi$ has to satisfy \[ \dbar \psi = U\eta\psi \] for some real half-density $U\in\Gamma(|K|)$. In general, since $\psi$ is nowhere vanishing, $\dbar\psi= Q\psi$ for a $(0,1)$-form $Q\in\Gamma(\bar{K}\End(L))$ with values in the endomorphisms of $L$. Decomposing $Q$ into the sum $Q=Q_{+}+Q_{-}$ of the $J$ commuting part $Q_{+}=\alpha\in\Gamma(\bar{K})$ and the $J$ anti-commuting part $Q_{-}=(U+VJ)\eta$ with $U,V\in\Gamma(|K|)$ real half-densities, we obtain \begin{equation}\label{eq:decomposition} \dbar\psi= \alpha\psi+(U+VJ)\eta\,\psi\,. \end{equation} Thus, $\psi$ solves the non-linear Dirac equation if and only if $\alpha=0$ and $V=0$. Before continuing, it is worthwhile to discuss the geometric implications of these conditions.
\begin{Remark} The nowhere vanishing section $\psi\in\Gamma(L^{\times})$ gives rise to the conformal $1$-form $(\psi,\psi)\in\Gamma(\Conf(TM,\mathbb{R}^3))$ which is the putative derivative $df$ of a conformal immersion $f\colon M\to\mathbb{R}^3$. From Lemma~\ref{lem:nonlinearDiracimm} the candidate for the Gauss normal map $N\colon M\to S^2$ of $f$ is given by $J\psi=:-\psi N$. We can decompose the rank 2 complex bundle $L\to M$ into the sum of complex line bundles, the $\mp N$ eigenspaces \[ L_{\pm}=\{\varphi\in L\,;\, J\varphi=\mp\varphi N\} \] of the complex structure $J\in\End(L)$. Then $L_{+}\subset L$ is a trivial line bundle via the nowhere vanishing section $\psi\in\Gamma(L_{+})$. Since \[ *(\psi,\psi)=N(\psi,\psi)=- (\psi,\psi)N \] due to \eqref{eq:J-relate}, we have the well-defined complex line bundle isomorphism \begin{equation}\label{eq:isomorph} L_{+}^2\to K N^* TS^2\colon \psi^2\mapsto (\psi,\psi)\,. \end{equation} The Dirac structure induces complex holomorphic structures $\dbar_{\pm}$ on the summands $L=L_{+}\oplus L_{-}$. Since $*\eta=-J\eta$, the decomposition \eqref{eq:decomposition} \[ \dbar\psi= \alpha\psi+(U+VJ)\eta\,\psi \] is adapted to the splitting $\bar{K}L= \bar{K}L_{+}\oplus \bar{K}L_{-}$. Therefore, $\alpha=0$ if and only if the isomorphism \eqref{eq:isomorph} is holomorphic, that is, $\dbar_{+} $ is the trivial holomorphic structure. In other words, $\alpha$ measures the failure of \eqref{eq:isomorph} to be holomorphic. Since $U\in\Gamma(|K|)$ is the putative mean curvature half-density, it remains to uncover the geometric meaning of the half density $V\in\Gamma(|K|)$ in \eqref{eq:decomposition}. The derivative $dN\in\Omega^1(M,N^*TS^2)$ of the candidate Gauss map $N\colon M\to S^2$ can be decomposed into conformal and anti-conformal $\mathbb{R}^3$-valued $1$-forms \[ dN=dN_{+}+dN_{-}= \frac{2}{|\psi|^2}(U(\psi,\psi) +V*(\psi,\psi)) + q\,. 
\] If $(\psi,\psi)=df$ were closed, then the latter would be the decomposition of the shape operator $dN$ into the trace part $H\,df$ and the trace-free part $q$, the Hopf differential. Therefore, $V=0$ is exactly the condition that the shape operator $dN$ is self-adjoint for one (and hence any) conformal metric on $M$. Incidentally, the above discussion of the geometric content of the decomposition \eqref{eq:decomposition} also gives an algorithmic answer to the question ``when is a map $N\colon M\to S^2$ from a {\em compact} Riemann surface $M$ the Gauss normal map of a conformal immersion?'' We first choose a spin bundle $L\to M$ which comes with a complex structure $J\in\Gamma(\End(L))$ compatible \eqref{eq:J-relate} with the Riemann surface structure of $M$. According to Theorem~\ref{thm:spin-homotop}, the spin bundle $L$ encodes one of the $2^{2p}$ regular homotopy classes of the resulting conformal immersion, where $p\in\mathbb{N}$ denotes the genus of $M$. The eigenspace decomposition \[ L_{\pm}=\{\varphi\in L\,;\, J\varphi=\mp\varphi N\} \] defines the two complex line subbundles $L_{\pm}\subset L$ and we need $L_{+}\to M$ to admit a global nowhere vanishing section $\psi\in\Gamma(L_{+}^{\times})$. In other words, $L_{+}$ has to be trivializable, which is equivalent to $\deg L_{+}=0$. Due to \eqref{eq:isomorph} this is guaranteed if and only if $\deg N= 1-p$, that is, $N$ has the correct degree required by the Gauss-Bonnet Theorem. Moreover, we have seen that $L_{+}$ needs to be holomorphically trivial, which puts $2p=\dim_{\mathbb{R}} \Jac(M)$ real conditions on $N$. Having chosen an $N$ satisfying those conditions, it remains to check whether the half-density $V\in\Gamma(|K|)$ in the decomposition \eqref{eq:decomposition} vanishes.
Note that globally the only remaining freedom is to rescale $\psi$ by a non-vanishing complex number $\lambda\in\mathbb{C}^{\times}$, which has the effect of a real scaling and a rotation of the complex half density $U+VJ$. Provided that such a constant rotation renders this complex half density real, there will be a conformal immersion (with translation periods) $f\colon \tilde{M}\to \mathbb{R}^3$ whose Gauss normal map is given by $N$. \end{Remark} After this brief interlude describing the geometric ramifications of the requirements $\alpha=0$ and $V=0$ in the decomposition \eqref{eq:decomposition}, which guarantee that the conformal $\mathbb{R}^3$-valued $1$-form $(\psi,\psi)\in\Gamma(\Conf(TM,\mathbb{R}^3))$ is closed, we shift towards the variational aspects of those conditions. On a compact Riemann surface $M$ the requirements $\alpha=0$ and $V=0$ are equivalent to the vanishing of the sum of their $L^2$-norms $ \int_M *\bar{\alpha}\wedge \alpha+ \int_M V^2=0\,. $ Put differently, our variational problem should be designed to measure, in $L^2$, the failure of $\psi\in\Gamma(L^{\times})$ to solve the non-linear Dirac equation~\eqref{eq:nonlinearDirac}. In the following lemma we calculate the possible contributions to our functional. \begin{Lemma}\label{lem:contributions} Let $L\to M$ be a spin bundle, $\dbar$ the Dirac structure on $L$, and $\psi\in \Gamma(L^{\times})$ a nowhere vanishing section. Then we have the following expressions for the components of $\dbar\psi$ in the decomposition \eqref{eq:decomposition}: \begin{enumerate} \item $\langle*\dbar\psi\wedge \dbar\psi\rangle=|\psi|^2(*\bar{\alpha}\wedge\alpha+|U|^2+|V|^2)$, \item $\langle*\dbar\psi\wedge \eta\psi\rangle=|\psi|^2 U$, \item $\langle*\dbar\psi\wedge J\eta\psi\rangle=|\psi|^2 V$. 
\end{enumerate} \end{Lemma} \begin{proof} The real half-density valued inner product \eqref{eq:anglebrac} on the quaternionic line bundle $L$ can always be seen as the real part of a quaternionic Hermitian symmetric inner product. Thus, we have \[ \quad \langle \psi\lambda, \psi\mu\rangle=\Re(\bar{\lambda}\mu)|\psi|^2 \] for $\lambda,\mu\in\H$ and $\psi,\varphi\in L$. Moreover, the compatibility~\eqref{eq:J-relate} of the complex structure $J\in\Gamma(\End(L))$ with the spin pairing implies \[ \langle \psi,\psi\rangle=\langle J\psi,J\psi\rangle\quad\text{and thus}\quad \langle \psi,J\psi\rangle=0\,. \] We therefore also have \[ \langle a\psi, b\psi\rangle=\Re(\bar{a}b)|\psi|^2 \] for $a,b\in\mathbb{C}$. Recall that $\eta\psi=J\psi\frac{\omega}{|\psi|^2}$ with $\omega=(\psi,\psi)\in\Omega^1(M,\mathbb{R}^3)$ so that $\bar{\omega}=-\omega$. The $(0,1)$-form $\alpha\in\Gamma(\bar{K})$ can be written as $\alpha=\beta+*\beta J$ for a real $1$-form $\beta\in\Omega^1(M,\mathbb{R})$. Applying the above identities, after some calculations we obtain the following results. \begin{gather*} \langle *\alpha\psi\wedge \alpha\psi\rangle=\Re(*\bar{\alpha}\wedge\alpha)|\psi|^2\\ |\psi|^2 \langle *\alpha\psi\wedge \eta\psi\rangle= 2\Re(*\beta\wedge\omega)|\psi|^2 = 0\\ |\psi|^4\langle \eta\psi\wedge \eta\psi\rangle=-\Re(\omega\wedge\omega)|\psi|^2=0\\ |\psi|^4\langle J\eta\psi\wedge J \eta\psi\rangle=-\Re(\omega\wedge\omega)|\psi|^2=0\\ \langle *\eta\psi\wedge \eta\psi\rangle=\frac{1}{|\psi|^4}\Re(*\bar{\omega}\wedge\omega)|\psi|^2=|\psi|^2 \end{gather*} In the third and fourth relation we used the fact that the 2-form $\omega\wedge \omega$ takes values in the orthogonal complement in $\mathbb{R}^3$ of the image of the conformal $1$-form $\omega$. In the last relation, we also used the identification $\Lambda^2 TM^*\cong |K|^2$ of $2$-forms with conformal metrics on $M$. 
Applying those formulas, we deduce \begin{align*} \langle *\dbar\psi\wedge\dbar\psi\rangle&=\Re(*\bar{\alpha}\wedge \alpha)|\psi|^2+ U^2\langle *\eta\psi\wedge\eta\psi\rangle+V^2\langle J*\eta\psi\wedge J\eta\psi\rangle\\ &=|\psi|^2(|\alpha|^2+|U|^2+|V|^2)\,, \end{align*} which is the first identity of the lemma. The second identity follows from \[ \langle*\dbar\psi\wedge \eta\psi\rangle=\langle U*\eta\psi\wedge \eta\psi\rangle = U|\psi|^2 \] and the third follows likewise. \end{proof} As we have discussed, a nowhere vanishing section $\psi\in\Gamma(L^{\times})$ gives rise to the closed, $\mathbb{R}^3$-valued $1$-form $(\psi,\psi)$---and thus to a conformal immersion with translation periods---if and only if $ \int_M *\bar{\alpha}\wedge \alpha+ \int_M V^2=0 $. Here the $(0,1)$-form $\alpha\in\Gamma(\bar{K})$ and the real half-density $V\in \Gamma(|K|)$ are the components in the decomposition \eqref{eq:decomposition} of $\dbar\psi$ which, due to the previous lemma, we can express in terms of the section $\psi$. \begin{Theorem}\label{thm:functional} Let $L\to M$ be a spin bundle over a compact Riemann surface and denote by $\dbar$ the Dirac structure. For non-negative $\epsilon=(\epsilon_1,\epsilon_2,\epsilon_3)$ the family of functionals \[ E_{\epsilon}\colon \Gamma(L^{\times})\to \mathbb{R} \] on nowhere vanishing sections of $L$, given by \[ E_{\epsilon}(\psi)=\epsilon_1\int_M \tfrac{\langle*\dbar\psi\wedge\dbar\psi\rangle}{|\psi|^2}+ (\epsilon_2-\epsilon_1)\int_M\tfrac{{\langle *\dbar\psi\wedge\psi(\psi,\psi)\rangle}^2}{|\psi|^4}+ (\epsilon_3-\epsilon_1)\int_M\tfrac{{\langle *\dbar\psi\wedge J\psi(\psi,\psi)\rangle}^2}{|\psi|^4}\, \] is well-defined on the Riemann surface $M$ and invariant under constant, non-zero scalings of $\psi$. In particular, one could constrain the functional to the $L^4$-sphere of sections satisfying $\int_M |\psi|^4=1$.
For $\epsilon_3=0$ and arbitrary $\epsilon_1,\epsilon _2>0$ the functional assumes its minimum value zero at a section $\psi\in\Gamma(L^{\times})$, which gives rise to a conformal immersion (with translation periods) $f\colon \tilde{M}\to \mathbb{R}^3$ satisfying $df=(\psi,\psi)$ in the prescribed regular homotopy class given by the spin bundle $L$. \end{Theorem} The proof of the theorem follows immediately from Lemma~\ref{lem:contributions}, in which the various terms of the functional $E_{\epsilon}$ are calculated. It should be noted that, in order to guarantee exactness of the closed $1$-form $(\psi, \psi)$, the functional $E_{\epsilon}$ needs to be augmented by the sum of the squares of the periods $\sum_{\gamma}|\int_{\gamma}(\psi,\psi)|^2$, where $\gamma$ ranges over a basis of the homology group $H_1(M,\mathbb{Z})$. This being said, in the sequel we will always assume that our resulting immersions are defined on $M$. \begin{Remark} For $\epsilon_3>0$ the functional $E_{\epsilon}$ contains as a contribution the Willmore energy $\int_M U^2$ of the resulting immersion. It is therefore tempting to minimize $E_{\epsilon}$ for $\epsilon_3>0$ while taking $\epsilon_3\to 0$. The resulting conformal immersion would then be a constrained Willmore surface, that is, a minimizer for the Willmore energy in a fixed conformal and regular homotopy class. At the moment there is no evidence that this strategy, which involves $\Gamma$-convergence of our functionals, might be successful. The development of an algorithm based on Theorem~\ref{thm:functional} to carry out experiments is a work in progress. \end{Remark} We finish this section with a discussion of how to adapt our variational approach to find isometric immersions $f\colon M\to\mathbb{R}^3$ of an oriented Riemannian surface $(M,g)$ in a given regular homotopy class described by a spin bundle $L\to M$. Every oriented Riemannian surface $(M,g)$ has a unique Riemann surface structure in which $g$ is a conformal metric.
The induced metric of the immersion $f$, constructed from a nowhere vanishing section $\psi\in\Gamma(L^{\times})$ satisfying the non-linear Dirac equation, is given by \[ |df|^2=|(\psi,\psi)|^2=|\psi|^4\,. \] Hence, we need to minimize our functional under the constraint $|\psi|^4=g$ in order to find an isometric immersion. \begin{figure} \center \includegraphics[width=.5\textwidth]{figs/AbelJacobiDuck_smooth.png} \caption{\label{fig:genus2-abeljac}The Abel--Jacobi map of a compact Riemann surface $M$ induces a Riemannian metric on $M$. The Gaussian curvature of this metric vanishes at the Weierstrass points. The picture shows an almost isometric smooth realization of the Abel--Jacobi metric on the abstract genus 2 surface in Figure~\ref{fig:genus2-hyperbolic} computed by the algorithm in \cite{Chern:2018:shape}. The six Weierstrass points lie on the intersection of the surface with its axis of symmetry.} \end{figure} \begin{Theorem}\label{thm:isometric} Let $L\to M$ be a spin bundle over a compact, oriented Riemannian surface and denote by $\dbar$ the Dirac structure on $L$ (where we think of $M$ as a Riemann surface). For non-negative $\epsilon=(\epsilon_1,\epsilon_2,\epsilon_3)$ the family of functionals \[ E_{\epsilon}\colon \Gamma(L^{\times})\to \mathbb{R} \] on nowhere vanishing sections of $L$, given by \[ E_{\epsilon}(\psi)=\epsilon_1\int_M \tfrac{\langle*\dbar\psi\wedge\dbar\psi\rangle}{|\psi|^2}+ (\epsilon_2-\epsilon_1)\int_M\tfrac{{\langle *\dbar\psi\wedge\psi(\psi,\psi)\rangle}^2}{|\psi|^4}+ (\epsilon_3-\epsilon_1)\int_M\tfrac{{\langle *\dbar\psi\wedge J\psi(\psi,\psi)\rangle}^2}{|\psi|^4}\, \] subject to the constraint $|\psi|^4=g$, is well-defined on the Riemannian surface $(M,g)$. 
For $\epsilon_3=0$ and arbitrary $\epsilon_1,\epsilon _2>0$ the functional assumes its minimum value, zero, at a section $\psi\in\Gamma(L^{\times})$ which gives rise to an isometric immersion $f\colon M\to \mathbb{R}^3$ satisfying $df=(\psi,\psi)$ in the prescribed regular homotopy class given by the spin bundle $L$. \end{Theorem} For a generic Riemannian surface $(M,g)$ there will not exist a smooth isometric immersion into $\mathbb{R}^3$, even though there always is a $C^1$ isometric immersion \cite{Nash:1954:CII, Kuiper:1955:CII}. The methods to construct $C^1$ isometric immersions result in surfaces in $\mathbb{R}^3$ that do not reflect the intrinsic geometry of $(M,g)$ well (see Figure~\ref{fig:C1-flattorus}). On the other hand, minimizing $E_{\epsilon}$ for non-zero $\epsilon_3>0$, that is, with the Willmore energy $\int_M |U|^2$ turned on as a contribution to the functional, we expect the limiting isometric immersion as $\epsilon_3\to 0$ to have small Willmore energy and thus avoid excessive creasing. This has indeed been carried out experimentally with an algorithm based on Theorem~\ref{thm:isometric}, which is detailed in~\cite{Chern:2018:shape}. These experiments give some credence to our conjecture that there should be a piecewise smooth isometric immersion of any Riemannian surface $(M,g)$ in a given regular homotopy class. Again, a theoretical analysis of this conjecture would involve an understanding of the $\Gamma$-convergence properties of our family of functionals $E_{\epsilon}$. \bibliographystyle{acm}
\section{Introduction} In this work we study the scalar-isoscalar resonances $f_{0}(1370)$, $f_{0}(1500)$ and $f_{0}(1710)$ \cite{PDG} in the framework of the extended Linear Sigma Model (eLSM) \cite{susden, dilaton, dick, dilnf3}. The eLSM is built according to two fundamental properties of QCD: chiral symmetry and dilatation invariance, the former being spontaneously broken by a Mexican-hat potential, the latter being explicitly broken in such a way as to mimic the trace anomaly of QCD \cite{schechter, migdal}. As a consequence of these two requirements, the eLSM Lagrangian contains only a finite number of terms. Moreover, (axial)vector d.o.f. are included from the very beginning in the effective model. The inclusion of the latter in a chirally invariant way makes the model more complete and has a strong influence on the phenomenology. In Ref. \cite{susden} the eLSM was first studied, both in the baryonic and mesonic sectors, in the presence of two flavours, $N_{f}=2$. The inclusion of the dilaton/glueball field for $N_{f}=2$ has been carried out in Ref. \cite{dilaton}, where it has been shown that the glueball is predominantly contained in either $f_{0}(1500)$ or $f_{0}(1710)$ (the former case being favoured). Still, the study was not complete because scalar mesons containing the $s$-quark were not included. A detailed study of the eLSM for three flavours, $N_{f}=3$, has been performed in Ref. \cite{dick}, where a very good description of various meson masses and decays was achieved. Yet, the dilaton, although formally included to justify the form of the Lagrangian, was inert (i.e., with zero width, in agreement with the large-$N_{c}$ limit). The scalar-isoscalar system was studied with the nonstrange quarkonium field $\sigma _{N}\equiv (\bar{u}u+\bar{d}d)/\sqrt{2}$ and with the hidden-strange quarkonium field $\sigma_{S}\equiv \bar{s}s$ only. Ref.
\cite{dick} shows through a fit to experimental data that these quark-antiquark states are heavy (above 1 GeV); similarly, the scalar isovector and isodoublet states are identified with the heavy resonances $a_{0}(1450)$ and $K_{0}^{\star }(1430),$ respectively. Consequently, the scalar mesons below 1 GeV shall be interpreted differently: tetraquarks or molecular/dynamically generated states are the most prominent options, see e.g. Refs. \cite{tetraquark,lowscalars} and refs. therein. In the scalar-isoscalar sector a three-body mixing problem must be solved: the three bare fields $\sigma _{N},$ $\sigma _{S},$ and, in addition, the dilaton $G\equiv gg$ mix and generate the physical resonances $f_{0}(1370)$, $f_{0}(1500)$ and $f_{0}(1710)$. The aim is to determine the mixing matrix in such a way that the masses and the decays of $f_{0}(1370)$, $f_{0}(1500)$ and $f_{0}(1710)$ are in agreement with the experimental results listed in Ref. \cite{PDG}. Such a mixing problem was investigated in a variety of phenomenological models, see Refs. \cite{review,close,scalars} and refs. therein. However, a full calculation involving a \emph{full} chiral approach has not yet been achieved for $N_{f}=3$. In Ref. \cite{dilnf3} the first step in this direction has been performed, but the attention was focused on the masses only (no decays were calculated). Based on the parameter determination obtained in Ref. \cite{dick}, a correct description of the masses of $f_{0}(1370)$, $f_{0}(1500)$ and $f_{0}(1710)$ implies that the predominant glueball content is located in the resonance $f_{0}(1710)$ (and not in $f_{0}(1500)$). In this work we continue the study of this system by calculating, for the first time, the decay widths of the scalar-isoscalar states. It is shown that the mixing matrix of Ref. \cite{dilnf3} implies too large decay widths for $f_{0}(1500)$ and $f_{0}(1710),$ and must be discarded. Therefore, we search for other solutions.
The system is extremely dependent on subtle issues such as constructive and destructive interference effects; for this reason, a fit (which is the most straightforward thing to do) could not yet deliver acceptable results. Nevertheless, an interesting partial solution could be obtained by a simple `guess and try' procedure. In this solution, the gluon condensate is very large and the glueball is to a very good extent described by the resonance $f_{0}(1710)$ only. The phenomenology of $f_{0}(1710)$ and $f_{0}(1370)$ can be qualitatively described, but the kaon-kaon decay of $f_{0}(1500)$ is still too large. Future studies will show if this novel solution is phenomenologically acceptable or not. This proceedings contribution is organized as follows: In Sec. 2 we present the effective Lagrangian of the eLSM, in Sec. 3 we discuss the results, and in Sec. 4 we provide a summary and an outlook for work in progress. \section{The eLSM Lagrangian} For the purpose of studying the mixing behavior in vacuum of the scalar-isoscalar mesons below 2 GeV we use the eLSM Lagrangian developed in Refs.
\cite{susden,dilaton, dick}:
\begin{align}
\mathcal{L}& =\mathcal{L}_{dil}+\mathrm{Tr}[(D^{\mu }\Phi )^{\dag }(D_{\mu }\Phi )]-m_{0}^{2}\left( \frac{G}{G_{0}}\right)^{2}\mathrm{Tr}[\Phi ^{\dag }\Phi ]-\lambda _{1}(\mathrm{Tr}[\Phi ^{\dag }\Phi ])^{2}-\lambda _{2}\mathrm{Tr}[(\Phi ^{\dag }\Phi )^{2}] \notag \\
& +c_{1}(\det \Phi -\det \Phi ^{\dag })^{2}+\mathrm{Tr}[H(\Phi ^{\dag }+\Phi )]+\mathrm{Tr}\left[ \left( \frac{m_{1}^{2}}{2}\left( \frac{G}{G_{0}}\right)^{2}+\Delta \right) \left( L^{\mu 2}+R^{\mu 2}\right) \right] \notag \\
& -\frac{1}{4}\mathrm{Tr}\left[ L^{\mu \nu 2}+R^{\mu \nu 2}\right] +\frac{h_{1}}{2}\mathrm{Tr}[\Phi ^{\dag }\Phi ]\,\mathrm{Tr}[L_{\mu }L^{\mu }+R_{\mu }R^{\mu }]+h_{2}\mathrm{Tr}[\Phi ^{\dag }L_{\mu }L^{\mu }\Phi +\Phi R_{\mu }R^{\mu }\Phi ^{\dag }] \notag \\
& +2h_{3}\mathrm{Tr}[\Phi R_{\mu }\Phi ^{\dag }L^{\mu }]+\ldots\,,
\label{Lagrangian}
\end{align}
where $D^{\mu }\Phi =\partial ^{\mu }\Phi -ig_{1}(L^{\mu }\Phi -\Phi R^{\mu })$. The nonstrange $\sigma _{N}\equiv \left( \bar{u}u+\bar{d}d\right)/\sqrt{2}$ and the strange $\sigma _{S}\equiv \bar{s}s$ bare quark-antiquark mesons are contained in the (pseudo)scalar multiplet
\begin{equation}
\Phi =\frac{1}{\sqrt{2}}\left(
\begin{array}{ccc}
\frac{(\sigma _{N}+a_{0}^{0})+i(\eta _{N}+\pi ^{0})}{\sqrt{2}} & a_{0}^{+}+i\pi ^{+} & K_{0}^{\star +}+iK^{+} \\
a_{0}^{-}+i\pi ^{-} & \frac{(\sigma _{N}-a_{0}^{0})+i(\eta _{N}-\pi ^{0})}{\sqrt{2}} & K_{0}^{\star 0}+iK^{0} \\
K_{0}^{\star -}+iK^{-} & \bar{K}_{0}^{\star 0}+i\bar{K}^{0} & \sigma _{S}+i\eta _{S}
\end{array}
\right) .
\label{phimatex}
\end{equation}
The matrices $L_{\mu }$ and $R_{\mu }$ describe (axial)vector fields, see Ref. \cite{dick} for details.
The scalar glueball $G\equiv gg$ is described by the dilaton Lagrangian $\mathcal{L}_{dil}$, in which a logarithmic term with the energy scale $\Lambda$ mimics the trace anomaly of the pure Yang-Mills sector of QCD \cite{schechter, migdal}:
\begin{equation}
\mathcal{L}_{dil}=\frac{1}{2}(\partial _{\mu }G)^{2}-\frac{1}{4}\frac{m_{G}^{2}}{\Lambda ^{2}}\left( G^{4}\ln \left\vert \frac{G}{\Lambda }\right\vert -\frac{G^{4}}{4}\right) \text{ .}
\label{ldil}
\end{equation}
The three scalar fields $\sigma _{N}$, $\sigma _{S},$ and $G$ condense, leading to the following shifts: $\sigma _{N}\rightarrow \sigma _{N}+\phi _{N}$, $\sigma _{S}\rightarrow \sigma _{S}+\phi _{S},$ and $G\rightarrow G+G_{0}$. As a consequence, bilinear mixing terms $\sim \sigma _{N}\sigma _{S}$, $G\sigma _{N}$ and $G\sigma _{S}$ arise. The corresponding potential reads:
\begin{equation}
V(\sigma _{N},G,\sigma _{S})=\frac{1}{2}(\sigma _{N},G,\sigma _{S})\left(
\begin{array}{ccc}
m_{\sigma _{N}}^{2} & z_{G\sigma _{N}} & z_{\sigma _{N}\sigma _{S}} \\
z_{G\sigma _{N}} & M_{G}^{2} & z_{G\sigma _{S}} \\
z_{\sigma _{N}\sigma _{S}} & z_{G\sigma _{S}} & m_{\sigma _{S}}^{2}
\end{array}
\right) \left(
\begin{array}{c}
\sigma _{N} \\
G \\
\sigma _{S}
\end{array}
\right) \text{ ,}
\label{pot}
\end{equation}
where $z_{G\sigma _{N}}=-2m_{0}^{2}{\phi _{N}}/G_{0}$, $z_{G\sigma _{S}}=-2m_{0}^{2}{\phi _{S}}/G_{0}$ and $z_{\sigma _{N}\sigma _{S}}=2\lambda_{1}{\phi _{N}}{\phi _{S}}$. The physical states are obtained upon diagonalization:
\begin{equation}
\left(
\begin{array}{c}
f_{0}(1370) \\
f_{0}(1500) \\
f_{0}(1710)
\end{array}
\right) =B\left(
\begin{array}{c}
\sigma _{N}\equiv \left( \bar{u}u+\bar{d}d\right) /\sqrt{2} \\
G\equiv gg \\
\sigma _{S}\equiv \bar{s}s
\end{array}
\right) \text{ .}
\label{trafo}
\end{equation}
The aim is to determine the mixing matrix $B$. \section{Results} We use the parameters determined in Ref. \cite{dick}.
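The mixing matrix $B$ of Eq.~(\ref{trafo}) follows from diagonalizing the real symmetric mass matrix of Eq.~(\ref{pot}). A minimal numerical sketch (the entries below, in GeV$^2$, are placeholders chosen for illustration, not the fitted eLSM parameters):

```python
import numpy as np

# Placeholder bare squared masses and mixing terms (GeV^2); illustrative
# values only, not the parameters fitted in the eLSM.
m_sigN2, M_G2, m_sigS2 = 1.30, 2.50, 2.00
z_GN, z_GS, z_NS = 0.25, 0.35, 0.15

# Mass matrix in the (sigma_N, G, sigma_S) basis, cf. Eq. (pot).
M2 = np.array([
    [m_sigN2, z_GN,  z_NS],
    [z_GN,    M_G2,  z_GS],
    [z_NS,    z_GS,  m_sigS2],
])

# For a real symmetric matrix, eigh returns ascending eigenvalues and an
# orthonormal eigenvector matrix; the rows of B give the bare-field
# content of each physical state.
masses2, vecs = np.linalg.eigh(M2)
B = vecs.T
masses = np.sqrt(masses2)   # physical masses in GeV
```

Because the mass matrix is real and symmetric, $B$ is automatically orthogonal, so the squared entries of each row sum to one and can be read as admixture probabilities.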
These parameters allow for a correct description of a variety of vacuum properties of mesons up to 1.5 GeV. However, four parameters could not be uniquely determined in Ref. \cite{dick}: these are $m_{G}$, $\Lambda$, $\lambda _{1}$ and $h_{1}$. In Ref. \cite{dilnf3} three of them ($m_{G}$, $\Lambda$ and $\lambda _{1}$) were determined in order to reproduce the measured masses of the resonances $f_{0}(1370)$, $f_{0}(1500)$ and $f_{0}(1710)$: $M_{f_{0}(1370)}=(1200$-$1500)$ MeV, $M_{f_{0}(1500)}=(1505\pm 6)$ MeV and $M_{f_{0}(1710)}=(1720\pm 6)$ MeV \cite{PDG}. A solution in which $f_{0}(1500)$ is predominantly gluonic could not be found. On the contrary, the masses could be well described for $m_{G}=1.580$ GeV, $\Lambda =0.93$ GeV, and $\lambda _{1}=2.03.$ The resulting mixing matrix reads:
\begin{equation}
B=\left(
\begin{array}{ccc}
0.92 & -0.39 & 0.05 \\
-0.22 & -0.40 & 0.89 \\
-0.33 & -0.83 & -0.4
\end{array}
\right) \text{ .}
\end{equation}
We have now tested this scenario by evaluating the decay widths (the corresponding mathematical expressions will be presented in a forthcoming publication \cite{preparation}). For $h_{1}=0$ (large-$N_{c}$ limit), one finds the decay widths into kaons and pions: $\Gamma _{f_{0}(1710)\rightarrow \pi \pi }=0.83$ GeV, $\Gamma _{f_{0}(1710)\rightarrow KK}=0.42$ GeV, $\Gamma _{f_{0}(1500)\rightarrow \pi \pi }=0.22$ GeV, $\Gamma _{f_{0}(1500)\rightarrow KK}=1.14$ GeV, $\Gamma _{f_{0}(1370)\rightarrow \pi \pi }=1.78$ GeV, $\Gamma _{f_{0}(1370)\rightarrow KK}=0.89$ GeV. These results are clearly \emph{too large} and cannot be cured by varying the only remaining free parameter $h_{1}$ (which should anyhow be small). Thus, the decay widths do not support this scenario as physical. Note that such a large decay width of the predominantly glueball state is in agreement with the study of Ref. \cite{ellis}. The search for an acceptable solution is extremely difficult due to interference effects in the decay amplitudes.
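Summing the quoted partial widths makes the problem explicit (a quick bookkeeping check using only the numbers above):

```python
# Partial widths (GeV) of the first mixing solution, as quoted in the text
# for h_1 = 0 (large-N_c limit).
widths = {
    "f0(1370)": {"pipi": 1.78, "KK": 0.89},
    "f0(1500)": {"pipi": 0.22, "KK": 1.14},
    "f0(1710)": {"pipi": 0.83, "KK": 0.42},
}

# Total two-pseudoscalar width implied for each resonance.
totals = {name: sum(channels.values()) for name, channels in widths.items()}
```

Every total exceeds a GeV, far broader than the observed resonances, which is why this mixing scenario is discarded.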
A direct fit to the known decay widths was so far not successful. This is why we have searched for a solution by trying to guess the right region of parameter space. First, we use as an input the bare glueball mass $m_{G}=1.7$ GeV, in agreement with lattice QCD \cite{Morningstar}. Then, since $f_{0}(1710)$ was too broad in the previous solution, we have increased the value of the dimensionful parameter $\Lambda$: for $\Lambda \simeq 2$ GeV the resonance $f_{0}(1710)$ is sufficiently narrow. By further tuning $\lambda _{1}\simeq -10$ and $h_{1}\simeq -5,$ we obtain the mixing matrix
\begin{equation}
B=\left(
\begin{array}{ccc}
0.90 & -0.05 & 0.41 \\
-0.42 & -0.03 & 0.90 \\
-0.04 & -0.99 & -0.0
\end{array}
\right) \text{ .}
\label{bnum}
\end{equation}
The resonance $f_{0}(1710)$ is (almost) a pure glueball. The masses determined by these parameters are acceptable: $M_{f_{0}(1370)}=1.06$ GeV, $M_{f_{0}(1500)}=1.48$ GeV and $M_{f_{0}(1710)}=1.70$ GeV. The resulting decay widths are: $\Gamma _{f_{0}(1710)\rightarrow \pi \pi }=0.082$ GeV, $\Gamma _{f_{0}(1710)\rightarrow KK}=0.064$ GeV, $\Gamma _{f_{0}(1500)\rightarrow \pi \pi }=0.14$ GeV, $\Gamma _{f_{0}(1500)\rightarrow KK}=0.13$ GeV, $\Gamma _{f_{0}(1370)\rightarrow \pi \pi }=0.12$ GeV, $\Gamma _{f_{0}(1370)\rightarrow KK}=0.07$ GeV. Thus, while the decays of $f_{0}(1370)$ and $f_{0}(1710)$ are at least in qualitative agreement with experiment, this is not the case for $f_{0}(1500)$: its decays are still too large. Note also that the very large value of $\Lambda$ implies a large gluon condensate: lattice results \cite{Lattice} suggest that $\Lambda \lesssim 0.6$ GeV, see the discussion in Ref. \cite{dilaton}. Thus, at this level this solution is not yet conclusive, but it points to an interesting direction in which to look: a large bare glueball mass in agreement with the lattice (1.7 GeV) and a large value of the gluon condensate.
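Since the rows of $B$ give the bare-field content of the physical states, the squared entries of the glueball column quantify the gluonic admixture directly. A quick check with the matrix quoted above (entries as printed, so the rows are only approximately normalized due to rounding):

```python
import numpy as np

# Mixing matrix of Eq. (bnum): rows = (f0(1370), f0(1500), f0(1710)),
# columns = (sigma_N, G, sigma_S); entries as quoted in the text (rounded).
B = np.array([
    [ 0.90, -0.05,  0.41],
    [-0.42, -0.03,  0.90],
    [-0.04, -0.99, -0.00],
])

# |B_{i,G}|^2: the glueball admixture probability of each resonance.
glueball_fraction = B[:, 1] ** 2
```

The glueball fraction of $f_{0}(1710)$ comes out at about 98\%, confirming the statement that in this solution $f_{0}(1710)$ is almost a pure glueball.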
\section{Conclusions and outlook} In this work we have studied the masses and the decays of the resonances $f_{0}(1370)$, $f_{0}(1500)$ and $f_{0}(1710)$ within the eLSM. Presently, the favoured glueball candidate seems to be $f_{0}(1710),$ in agreement with Refs. \cite{scalars, chen}, but further studies are needed. Namely, decay widths which are, at least qualitatively, in agreement with data could only be found for a large (possibly too large) value of the gluon condensate. Another possibility is an improvement of the underlying effective model of Ref. \cite{dick}, by studying the influence of a quadratic mass term in the (pseudo)scalar sector. This is a minimal change of Ref. \cite{dick}, which however can have interesting phenomenological implications due to the fact that the strange current quark mass is not negligible. For values of the gluon condensate in agreement with the lattice, a not too broad glueball can only be found if destructive interferences between the different amplitudes occur. This is why an improved numerical analysis, which allows one to study in detail the whole parameter space, would also be helpful. More generally, one can extend the study to glueballs with other quantum numbers. In Ref. \cite{psg} the pseudoscalar glueball was investigated within the eLSM. A similar program can be carried out for a tensor glueball with a mass of about 2.2 GeV, see e.g. Ref. \cite{tensor} and refs. therein, as well as for heavier glueballs, such as the vector and pseudovector glueball states, with lattice-predicted masses of about $3.8$ and $3$ GeV, respectively. \section*{Acknowledgments} The authors thank I. Mishustin, D. Parganlija, and D. H. Rischke for useful discussions. S. J. acknowledges support from H-QM and HGS-HIRe. \section*{References}
\section{Introduction} The tidal disruption, and subsequent accretion, of a star by the supermassive black hole (SMBH) at the centre of a galaxy offers a novel probe of an otherwise quiescent population of massive black holes. These tidal disruption events (TDEs) result in luminous flares which in the last two decades have been observed across a wide range of observing frequencies, including hard X-rays (e.g. Cenko et al. 2012), soft X-rays (e.g. Greiner et al. 2000), optical and UV (e.g. Gezari et al. 2008, van Velzen et al. 2021), infrared (e.g. Jiang et al. 2016, van Velzen et al. 2016b), and radio (e.g. Alexander et al. 2016, Goodwin et al. 2022). TDEs harbouring powerful radio and X-ray bright jets have also been discovered (e.g. Burrows et al. 2011). The emission from these sources typically rises to its peak in a matter of weeks to months, before fading away over the subsequent months and years. In recent years the number of TDEs observed at soft X-ray energies has greatly increased, with a current population of approximately 20 sources (Saxton {\it et al}. 2021). X-ray bright TDEs represent a particularly interesting TDE sub-population, as physical parameters of a TDE system can in principle be inferred from the modelling of their X-ray spectral energy distribution (Mummery \& Balbus 2020, Wen {\it et al}. 2020). The spectral energy distributions of X-ray observations of TDEs have historically been modelled with a single temperature blackbody function (e.g., Brown et al. 2017, Holoien et al. 2018, van Velzen et al. 2019, Wevers et al. 2019, Stein et al. 2020, Cannizzaro et al. 2021, Hinkle et al. 2021; although see Wen {\it et al}. 2020 for a more realistic slim disc model), which allows two parameters to be inferred: the temperature and `size' of an emitting region.
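To illustrate this two-parameter description (a self-contained sketch with invented numbers, not a fit to real data): for a single-temperature blackbody $F_\nu = \pi B_\nu(T)\,R^2/D^2$, the temperature follows from the spectral peak, $h\nu_{\rm peak}\simeq 2.82\,kT$ for $B_\nu$, and the emitting size from the overall normalization.

```python
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants

def planck_intensity(nu, T):
    """Standard Planck specific intensity B_nu (cgs, no geometric factors)."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

# Synthetic single-temperature blackbody "data"; all values are invented.
T_true, R_true = 5e5, 1e12          # K, cm
D = 3.086e26                        # cm (roughly 100 Mpc)
nu = np.logspace(15, 18, 3000)      # Hz
F = np.pi * planck_intensity(nu, T_true) * (R_true / D)**2

# Temperature from Wien's displacement law for B_nu: h nu_peak = 2.821 k T.
i_peak = int(np.argmax(F))
T_fit = h * nu[i_peak] / (2.821 * k)

# Emitting radius from the normalization at the peak frequency.
R_fit = D * np.sqrt(F[i_peak] / (np.pi * planck_intensity(nu[i_peak], T_fit)))
```

Both parameters are recovered to sub-percent accuracy here precisely because the synthetic spectrum really is a single blackbody; the systematic errors discussed below arise when this procedure is applied to genuinely multi-temperature disc emission.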
The inferred size of the X-ray emitting region is a parameter of interest as, assuming that this emission results from a disc with inner edge at the innermost stable circular orbit (ISCO), it can be used as an estimate for the TDE's central black hole mass, $R_{\rm model} \simeq R_{\rm ISCO} \propto GM_{\rm BH}/c^2$. The black hole mass at the centre of a TDE is an important physical parameter, as it strongly correlates with the peak luminosity of the disc which forms in the aftermath of a TDE (Mummery \& Balbus 2021). Therefore, the radial size of an X-ray bright TDE disc could, if measured correctly, be used to understand more general properties of both the TDE and the SMBH. In a recent paper Mummery (2021) demonstrated that the parameters inferred from fitting a single temperature blackbody model to a spectrum produced by a multi-temperature accretion disc would, for the typical parameter space of interest for TDEs, suffer from substantial systematic errors. This prevents these parameters from being interpreted physically (e.g., Wevers {\it et al}. 2019, Gezari 2021), and inferences cannot therefore be made about the more general properties of these sources from their X-ray spectra. Mummery (2021) put forward an alternative X-ray spectral model, derived in the context of relativistic thin discs, which does not suffer from these systematic errors. This new model takes as input two parameters of physical interest, a radial scale $R_p$, and a temperature scale $T_p$. As the parameters inferred from fitting this new model to X-ray spectral data correspond to physical properties of the TDE disc system, self-consistent inferences about the properties of X-ray emitting TDE discs can be made for the first time. Of particular relevance is the possibility of computing the bolometric luminosities of TDE accretion discs directly, and self-consistently, from their X-ray spectrum.
This luminosity will be closely related to the disc's accretion state, which determines the fundamental physical and observable properties of the flow. It is the broader goal of this work to test the hypothesis that the thermal X-ray emission observed from many TDE candidates stems from accretion discs in ``soft'' accretion states similar to those observed in Galactic X-ray binary systems. This will be done by both testing whether the inferred model parameters make physical sense (i.e., do the radial scales inferred track the expected ISCO location), and by inferring the Eddington ratios of this TDE population which, if the hypothesis is correct, should lie in the range $0.01 \lesssim f_{\rm edd} \lesssim 1.0$. This hypothesis is premised on the supposed scale (i.e., black hole mass) invariance of the black hole accretion process. TDEs represent an excellent probe of the scale-invariance of black hole accretion, as they evolve on much shorter timescales than active galactic nuclei. Evidence for, or against, the scale invariance of black hole accretion would be a result of fundamental theoretical interest. In addition, the development of a mapping between TDE physics and the better studied properties of X-ray binaries would allow for a deeper understanding of the dominant emission processes of TDEs. To test this hypothesis, in this paper we fit the new Mummery (2021) X-ray model to the X-ray spectra of a comprehensive sample of X-ray bright TDEs. We find strong evidence that thermal TDEs behave as ``scaled up'' analogues of Galactic X-ray binaries, and that at least some black hole accretion states are mass independent. The radial sizes measured from TDE X-ray spectra lie at those scales expected from the black hole mass inferred from the TDE's $M-\sigma$ relationship, in contrast with previous studies using pure blackbody models.
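The two quantitative checks described above can be sketched numerically. The following sketch assumes a Schwarzschild ISCO ($R_{\rm ISCO}=6GM/c^2$) and the standard hydrogen electron-scattering Eddington luminosity; the black hole mass and luminosity are illustrative values, not measurements from this paper.

```python
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # cgs units

def r_g(M_bh):
    """Gravitational radius GM/c^2, in cm."""
    return G * M_bh / c**2

def r_isco_schwarzschild(M_bh):
    """ISCO radius of a non-spinning black hole: 6 GM/c^2, in cm."""
    return 6.0 * r_g(M_bh)

def eddington_ratio(L_bol, M_bh):
    """f_edd = L_bol / L_edd, with L_edd = 1.26e38 (M/M_sun) erg/s."""
    return L_bol / (1.26e38 * M_bh / M_sun)

# Illustrative TDE-scale numbers: M = 10^6.5 M_sun, L_bol = 10^44 erg/s.
M_bh = 10**6.5 * M_sun
f_edd = eddington_ratio(1.0e44, M_bh)
```

For these assumed values the Eddington ratio lands in the soft-state range $0.01 \lesssim f_{\rm edd} \lesssim 1$, and the expected radial scale is a few times $10^{11}$ cm, the order of magnitude against which fitted $R_p$ values can be compared.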
Quantitatively, the amplitudes of these radial sizes typically correspond to $1-10$ times the gravitational radius of the $M-\sigma$ black hole mass. In addition, using both the temperature and radial scale of the inner regions of the TDE discs, we show that the bolometric luminosity of these sources peaks at levels which are sub-Eddington, but larger than the typical hard-state transitional scales observed in Galactic X-ray binaries. At late times we show that the bolometric luminosity of these TDEs remains exponentially larger than their observed X-ray luminosity. This provides a natural solution to the missing energy problem. Finally, we show that the peak disc luminosities of some of our sample are smaller than their optical peak luminosities, implying that not all of the early-time optical and UV emission of TDEs can be sourced from reprocessed disc emission, and an additional energy source is required. This could be in the form of shocks in the disc circularisation process (as argued by e.g., Shiokawa et al. 2015, Piran et al. 2015, Bonnerot \& Stone 2021, Bonnerot et al. 2021). The layout of this paper is as follows. In section \ref{sec2}, we recap the disc model introduced in Mummery (2021). In section \ref{sec3} we present the TDE sample used in this study; we discuss the fitting techniques and procedure in section \ref{sec4}. In section \ref{sec5} we analyse the results of the spectral fits, before concluding in section \ref{conc}. Some additional data tables and figures are presented in the appendices. Optical spectroscopic redshifts are available for all of the sources in our sample, and these are converted to luminosity distances assuming a flat $\Lambda$ cold dark matter cosmology with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$ = 0.3 and $\Omega_\Lambda = 0.7$. \section{The new model}\label{sec2} An accretion disc in the ``soft'' state produces broadband emission well described by a superposition of black bodies. 
Each disc region has both a different emitting temperature and a different emitting area, and contributes its own blackbody spectrum to the total disc spectrum. In a relativistic disc system the observed emission is modified by both gravitational red-shifting, and by the Doppler boosting of the local emission by the disc fluid's relativistic rotational velocities. Finally, the opacity of the disc atmosphere may differ from that of pure absorption, with electron scattering important in many regimes. When scattering of photons is dominant, the local emission will appear hotter than expected {(Shimura and Takahara 1995, Done et al. 2012)}, as photons which originated deeper in the disc are observed {(Davis et al. 2006)}. Combining these effects, the observed thermal emission from an accretion disc is generally given by the following expression \begin{equation}\label{fullmodel} F_\nu = \frac{1}{D^2}\iint_{\cal S} {f_\gamma^3 f_{\rm col}^{-4} B_\nu (\nu/f_\gamma, f_{\rm col} T)} ~\text{d}b_1\text{d} b_2. \end{equation} Here ${\cal S}$ is the surface of the disc, and $b_1$ and $b_2$ are Cartesian image plane photon impact parameters. The function $B_\nu$ is the Planck function: \begin{equation} B_\nu(\nu, T) = \frac{2\pi h \nu^3}{ c^2} \frac{1 }{ \exp\left({h\nu / kT}\right) - 1} . \end{equation} The factor $f_\gamma$ is the photon energy-shift factor, defined as the ratio of the observed photon frequency $\nu$ to the emitted photon frequency in the disc's rest frame $\nu_{\rm emit}$, $f_\gamma \equiv \nu/\nu_{\rm emit}$. The constants $h, k$ and $c$ are the Planck constant, Boltzmann constant, and the speed of light respectively. $T$ is the temperature of the disc, a function of both disc radius and time, $T(r,t)$. Finally, $f_{\rm col}$ is the `colour-correction' factor, which is included to model disc {opacity} effects. This correction factor generally takes a value $f_{\rm col} \sim 2.3$ for typical TDE disc temperatures (Done {\it et al}. 2012). 
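As a concrete illustration, equation (\ref{fullmodel}) simplifies considerably in the Newtonian, face-on limit, where $f_\gamma = 1$ and the image-plane area element reduces to $2\pi R\, {\rm d}R$. The following sketch (Python, CGS units; the function names and the illustrative setup are ours, not part of any published code) evaluates this limit numerically:

```python
import numpy as np

H, K, C = 6.626e-27, 1.381e-16, 2.998e10  # Planck, Boltzmann, speed of light (CGS)

def planck(nu, T):
    """Planck function B_nu as defined in the text (includes the 2*pi factor)."""
    return 2.0 * np.pi * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def disc_flux_newtonian(nu, T_of_R, R_in, R_out, D, f_col=2.3, n=4096):
    """Newtonian, face-on limit of the full flux integral: f_gamma = 1,
    and the impact-parameter area element d(b1) d(b2) reduces to 2*pi*R dR."""
    R = np.logspace(np.log10(R_in), np.log10(R_out), n)
    y = 2.0 * np.pi * R * f_col**-4 * planck(nu, f_col * T_of_R(R))
    # trapezoidal rule on the logarithmically spaced radial grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (R[1:] - R[:-1]))) / D**2
```

Summing `planck` over annuli in this way is exactly the multi-temperature superposition described above; the relativistic factors $f_\gamma^3$ and the ray-traced impact parameters are what equation (\ref{fullmodel}) adds on top of this limit.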
The parameter $D$ is the luminosity distance to the source. For a more detailed derivation and discussion of this expression we refer to Section 2 of Mummery \& Balbus (2021). While this expression may look unwieldy, many TDE discs have the fortunate property of being relatively cool, with their spectra peaking below the lower bandpass of X-ray telescopes, $kT \ll 0.3$ keV. This means that X-ray observations of TDE discs probe the quasi-Wien tail of the disc spectrum, a limit in which equation (\ref{fullmodel}) becomes analytically tractable. In Mummery \& Balbus (2021) it was shown that equation (\ref{fullmodel}) can be integrated by performing a Laplace expansion about the hottest regions of the disc. A Laplace expansion proceeds by defining a small parameter, in our case \begin{equation} \varepsilon \equiv {k\T_p \over h \nu} \ll 1, \end{equation} where $\T_p$ is the peak temperature of the disc at a given time. For $\varepsilon \ll 1$, the Planck function can be replaced by its Wien-tail form \begin{equation} B_\nu(\nu, T) \simeq \frac{2\pi h \nu^3}{ c^2} \exp\left(-{h\nu \over kT}\right) , \end{equation} an approximation which introduces exponentially small corrections. In the limit $\varepsilon \ll 1$, the Wien-tail flux observed across an X-ray bandpass results physically from only a very small region of the disc, with the contributions from all other disc regions exponentially suppressed. We may therefore spatially Taylor expand the temperature profile in the exponential, and perform the integral (equation \ref{fullmodel}) term by term. 
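The claim that the Wien substitution introduces only exponentially small corrections is simple to verify: writing $x = h\nu/kT = 1/\varepsilon$, the relative error of $e^{-x}$ against the full Planck factor $1/(e^{x} - 1)$ is exactly $e^{-x} = e^{-1/\varepsilon}$. A minimal numerical check (Python; illustrative only, the function name is ours):

```python
import numpy as np

def wien_relative_error(eps):
    """Relative error of the Wien form exp(-1/eps) against the full
    Planck factor 1/(exp(1/eps) - 1), where eps = k*T/(h*nu)."""
    exact = 1.0 / np.expm1(1.0 / eps)
    wien = np.exp(-1.0 / eps)
    return abs(wien - exact) / exact
```

For $\varepsilon = 0.1$ the error is $e^{-10} \approx 5\times10^{-5}$, while for $\varepsilon = 1$ it has grown to $e^{-1} \approx 0.37$, illustrating why the expansion is restricted to $\varepsilon \ll 1$.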
The resulting Laplace expansion has the form (Mummery \& Balbus 2021) \begin{equation}\label{LaplaceExpansion} F_\nu(\varepsilon) = F_0 \,\varepsilon^\gamma\, \exp\left(-{1 \over \varepsilon}\right) \, \sum_{n=0}^{\infty} \, \alpha_n \varepsilon^n , \end{equation} where $F_0$ and $\alpha_n$ are temperature-independent constants, and $\gamma$ depends on the precise properties of the temperature maximum (this will be further discussed below). A key mathematical point here is that the X-ray spectral model built upon this Laplace expansion has only a finite parameter space of applicability, and can only be used when $kT_p \ll h\nu$. The corrections involved in approximating the solution of equation (\ref{fullmodel}) with equation (\ref{LaplaceExpansion}) are exponentially small in the small $\varepsilon$ limit. However, these corrections are not small in the opposite limit, and this model will quickly break down when $\varepsilon \sim 1$. The corrections to this model in the $\varepsilon > 1$ limit cannot be mitigated by keeping additional terms in the summation, which is not guaranteed to converge there. On one hand, if one fits an X-ray model derived in this manner to data and finds a set of best fit parameters which imply $\varepsilon \sim 1$ then, even if the fit is formally acceptable, the inferred parameters cannot be physically interpreted. On the other hand, this model will converge exponentially to the full solution for $\varepsilon < 1$ and therefore, in its regime of applicability, will be highly accurate. 
In Mummery \& Balbus (2021) the Laplace expansion solution of equation (\ref{fullmodel}) was found, and it is given by the following expression \begin{multline}\label{MB} F_\nu(R_p, \T_p) = \frac{4\pi \xi_1 h\nu^3}{c^2f_{\rm col}^4}\left( \frac{R_p}{D} \right)^2 \left(\frac{k \T_p}{h \nu} \right)^\gamma \exp\left(- \frac{h\nu}{k \T_p} \right) \\ \times \left[ 1 + \xi_2\left(\frac{k \T_p}{h \nu} \right) + \xi_3\left(\frac{k \T_p}{h \nu} \right)^{2} +\, \dots \right]. \end{multline} Here we have defined $\T_p \equiv \max(f_{\rm col} f_\gamma T)$, the maximum ``effective'' temperature in the accretion disc. The radius $R_p$ corresponds to the image plane co-ordinate of this hottest region. The constant $\gamma$ depends on assumptions about both the inclination angle of the disc and the disc inner boundary condition, and is limited to the range $1/2 \leq \gamma \leq 3/2$. We note that $\gamma = 1/2$ for a vanishing ISCO stress disc observed precisely face on. The positive constants $\xi_1, \xi_2$ and $\xi_3$ are all order unity, $\xi_1 \simeq 2.19, \xi_2 \simeq 3.50, \xi_3 \simeq 1.50$ (Mummery \& Balbus 2021). Here `$\dots$' denotes higher order terms which scale $\propto ({k \T_p}/{h \nu} )^{n }$ (where $n \geq 3$), and which can be safely neglected provided we are in the parameter space of interest. While this equation was derived in the context of relativistic thin discs, it is actually more broadly applicable than just the ``thin'' disc limit. The only assumptions inherent to this modelling are that each disc radius emits like a colour-corrected blackbody, and that there exists some disc radius $R_p$ where the disc temperature peaks, at a level below the observed frequency $k\T_p \ll h\nu_l$. These assumptions will still be valid in the ``slim'' disc limit, and thus this model will produce valid descriptions of the X-ray spectra of accreting sources at high Eddington ratios $f_{\rm edd} \sim 1$. 
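For concreteness, equation (\ref{MB}) truncated after the $\xi_3$ term can be evaluated as follows (a minimal Python sketch in CGS units; the function name and default arguments are ours, and the effective peak temperature $\T_p = f_{\rm col} f_\gamma T_p$ is passed in directly):

```python
import numpy as np

H, K, C = 6.626e-27, 1.381e-16, 2.998e10     # Planck, Boltzmann, c (CGS)
XI1, XI2, XI3 = 2.19, 3.50, 1.50             # Mummery & Balbus (2021) constants

def wien_disc_flux(nu, R_p, Tp_eff, D, gamma=0.5, f_col=2.3):
    """Truncated Laplace-expansion flux, in erg s^-1 cm^-2 Hz^-1.
    Tp_eff is the effective peak temperature f_col * f_gamma * T_p.
    Only valid in the Wien tail, eps = k*Tp_eff/(h*nu) << 1."""
    eps = K * Tp_eff / (H * nu)
    series = 1.0 + XI2 * eps + XI3 * eps**2
    return (4.0 * np.pi * XI1 * H * nu**3 / (C**2 * f_col**4)
            * (R_p / D)**2 * eps**gamma * np.exp(-1.0 / eps) * series)
```

Checking that the best-fit $\varepsilon$ remains well below unity across the fitted bandpass is then a one-line sanity test on any set of inferred parameters.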
The X-ray spectrum of an accretion disc observed in the Wien-tail can therefore be described by just three parameters: $R_p$, $T_p$ and $\gamma$. Two of these parameters, $R_p$ and $T_p$, encode physical information about the disc system itself. The value of $R_p$ will be driven primarily by the central black hole's mass, for example, a parameter of real physical interest. In addition, the bolometric luminosity of the accretion disc will scale as $R_p^2 T_p^4$, and can therefore be constrained directly from the detailed modelling of X-ray spectra (section \ref{sec5}). \section{The data set}\label{sec3} We search the literature for all promising TDE candidates for which good quality X-ray data (in the Swift/XRT, XMM-Newton and NICER archives) are publicly available. We include only those TDEs detected prior to April 2020. This yields a list of $\sim$20 sources (Table \ref{table:sample}), excluding ROSAT TDE candidates for which only ROSAT data are available (because these spectra are not retrievable). We furthermore collect the peak integrated UV (blackbody) luminosities from the literature, as well as velocity dispersion measurements that can be converted to black hole mass estimates through the M--$\sigma$ relation. Galactic hydrogen column densities ($n_H$) are taken from the HI4PI survey (HI4PI Collaboration, 2016). \subsection{XMM-Newton data reduction} Observations were downloaded from the XMM-Newton Science Archive and processed with the XMM-Newton Science Analysis System (SAS v19.1.0; Gabriel et al. 2004). EPIC-pn event files were filtered for background solar flares, following the standard processing thread with a cut at a count rate of 0.4 c/s in the 10--12 keV band. Source spectra were extracted using a circular region about the source, and local background spectra were produced from a source-free region on the same CCD. \subsection{NICER} AT2019dsg and AT2020ksf were observed by {\it NICER} on multiple epochs (see Table \ref{table:results}). 
We started the {\it NICER} data analysis with the raw level-1 files that we downloaded from the publicly accessible High Energy Astrophysics Science Archive Research Center (HEASARC; \href{https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl}{https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl}). We used the {\it nicerl2} task to extract the cleaned event lists. To produce the Good Time Intervals (GTIs), we used the default values for all parameters except for the overshoot, the undershoot, and the overshoot expression. These were screened at a later stage to minimize the loss of data. For a detailed discussion of this procedure see Pasham et al. (2021). X-ray spectra were filtered for so-called hot detectors, and the ancillary response files (arfs) and the redistribution matrix files (rmfs) were generated using the tools {\it nicerarf} and {\it nicerrmf}, respectively. The background spectra were estimated using the 3c50 model (Remillard et al. 2022). \subsection{Swift/XRT} We use the Swift online XRT tool to extract (stacked) X-ray spectra of all sources with sufficient coverage. If significant variability was reported (or observed) we make stacked spectra for different time ranges, as indicated in Table \ref{table:1}. Swift X-ray spectra typically contain far lower photon counts than XMM or NICER spectra. As such we typically rely on XMM and NICER spectra for our analysis, using Swift spectra only for those sources lacking observations with these instruments (see Table \ref{table:results}). \subsection{Magellan/MagE optical spectroscopy} The host galaxy of 2XMM J1847 was observed on 19 August 2021 with the Magellan Echellette spectrograph (MagE), mounted on the Magellan Baade telescope located at Las Campanas Observatory, Chile. We used a 0.7 arcsec slit, which delivers a FWHM spectral resolution of 50 km s$^{-1}$ at 4000 \AA. The spectrum was reduced using the dedicated MagE data reduction pipeline (Kelson et al. 2000, Kelson 2003). 
Subsequently, the spectrum was normalised to the continuum by fitting a low-order spline function to remove the strong curvature from each échelle order, masking strong emission and absorption lines that may distort the continuum identification. We then followed the method of Wevers et al. (2017) to measure the stellar velocity dispersion by combining the penalized pixel fitting routine (Cappellari 2017) with the Elodie stellar spectral library (Prugniel et al. 2007). We measured a velocity dispersion of $\sigma = 91 \pm 4 $ km s$^{-1}$. {This can be turned into an estimate for the 2XMM J1847 black hole mass using the $M-\sigma$ relationship. Explicitly this $\sigma$ measurement implies a black hole mass of $\log_{10}M_{\rm BH}/M_\odot$ = 6.6 $\pm$ 0.4 (Ferrarese \& Ford 2005), or $\log_{10}M_{\rm BH}/M_\odot$ = 6.4 $\pm$ 0.4 (McConnell \& Ma 2013).} \begin{table*} \caption{Sample used in this work together with some basic properties. Galactic column densities are taken from the HI4PI survey (HI4PI collaboration, 2016). The references are: [1] Holoien et al. 2016a, [2] Wen et al. 2020, [3] Holoien et al. 2016b, [4] Gezari et al. 2017, [5] Wevers et al. 2019a, [6] Wyrzykowski et al. 2016, [7] Kajava et al. 2020, [8] Cannizzaro et al. 2021, [9] van Velzen et al. 2019, [10] Wevers et al. 2019b, [11] Wevers 2020, [12] Lin et al. 2017, [13] Saxton et al. 2017, [14] Lin et al. 2015, [15] Wevers et al. 2022, [16] Saxton et al. 2012, [17] Saxton et al. 2019, [18] Lin et al. 2011, [19] Short et al. 2020, [20] Wevers et al. 2019a, [21] Liu et al. 2019, [22] Nicholl et al. 2020, [23] Pasham et al. 2020, [24] Stein et al. 2021. } \centering \begin{tabular}{cccccccc} Source & RA & Dec. 
& Redshift & Distance & Galactic $n_H$ & $\sigma$ & Reference \\ & & & & (Mpc) & (10$^{20}$ cm$^{-2}$) & (km/s) & \\\hline ASASSN--14li &12 48 15.23& 17 46 26.4 & 0.0206 & 90 & 1.9 & 78 $\pm$ 2 & [1], [2] \\ ASASSN--15oi &20 39 09.03&--30 45 20.8& 0.051 & 216 & 4.8 & 61$\pm$7 & [3], [4], [5]\\%\citet{Holoien2016, Gezari2017, Wevers2019a} \\ OGLE16aaa &01 07 20.88 &--64 16 20.7& 0.1655 & 800 & 2.7 & --- & [6], [7] \\ AT2019dsg &20 57 02.974 &+14 12 15.86& 0.0512 & 224 & 6.5 & 94 $\pm$ 1 & [8], [24]\\%\citet{Cannizzaro2021}\\ AT2018zr &07 56 54.54& +34 15 43.61& 0.071 & 322 & 4.4 & --- & [9]\\%\citet{vanVelzen2020}\\ AT2018fyk &22 50 16.09&--44 51 53.50& 0.059 & 264 & 1.15 & 158$\pm$1 & [10], [11] \\ 3XMM J1500 &15 00 52.07 & +01 54 53.8 &0.145 & 692 & 4.1 & 59$\pm$3 & [12] \\ XMMSL1 J0740 &07 40 08.09 & --85 39 31.3& 0.0173 & 73 & 5.3 & 112$\pm$ 3& [13], [11] \\ 3XMM J1521 &15 21 30.72 &+07 49 16.5&0.179 & 866 & 2.7 & 58$\pm$2 & [14] \\ GSN069 &01 19 08.663 & --34 11 30.52& 0.018 & 69 & 2.3 &64 $\pm$ 4 & [15] \\ SDSSJ1201 &12 01 36.03 &+30 03 05.5& 0.146 & 700 & 1.3 & 122$\pm$4 &[16] \\ XMMSL2 J1446 &14 46 05.22 &+68 57 31.1& 0.029 & 127 & 1.7 &167 $\pm$15& [17], [11] \\ 2XMM J1847 &18 47 25.12&--63 17 25.3& 0.035 & 156 & 6.3 & 91$\pm$4 & [18] \\ AT2018hyz &10 06 50.871 &+01 41 34.08& 0.0457 & 204 & 2.7 & 57 $\pm$ 1& [19] \\ RBS1032 &11 47 26.69 &+49 42 57.7& 0.026 & 114 & 1.4 & 49 $\pm$ 7 & [20] \\ AT2019azh &08 13 16.95 &+22 38 54.03& 0.022 & 96 & 4.2 & 77 $\pm$ 2 & [21]\\%\citet{Liu2019, Wevers2020}\\ 2MASX J0249 &02 49 17.31 &--04 12 52.1 & 0.019 & 83 & 3.2 & 43 $\pm$ 4& [20] \\ AT2019qiz &04 46 37.88 & --10 13 34.90& 0.0151 & 66 & 6.6 & 71 $\pm$ 2 & [22] \\ AT2020ksf &21 35 27.26&--18 16 35.54& 0.0923 & 426 & 3.6 & --- & [23] \\ \hline \end{tabular} \label{table:sample} \end{table*} \begin{table*} \caption{Best fit parameters obtained through spectral fitting. 
Cash statistics are used to find the best fit parameters, and are indicated in the Fit stat column together with the number of degrees of freedom (d.o.f.). The uncertainties indicate the 90 per cent confidence intervals, obtained by varying the parameters and finding $\Delta$stat = 2.7. The spectral range over which the fit was performed is also listed. $^\dagger$ indicates a column density fixed to the Galactic value (see text). We only display the best fitting value of $\gamma$ in this table. The $\gamma$ parameter is poorly constrained by our fits and the uncertainty interval on each measurement should be understood to include the entire allowed range $\gamma \in 1/2$ -- $3/2$.} \label{table:results} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{cccccccccc} Source & Instrument & Photon counts & MJD & Spectral range & $n_H$ & $R_p$ & $T_p$ & $\gamma$ & Fit statistic \\ &&& (days) & (keV) & (10$^{20}$ cm$^{-2}$) & ($10^{12}$ cm) & ($10^{5}$ K) & & (stat$\big/$d.o.f.) \\\hline ASASSN--14li & XMM/RGS & 9200 & 56997& 0.35 -- 1.2 & $6.1^{+2.2}_{-1.5}$ & $6.1^{+3.7}_{-0.5}$ & $3.7^{+0.6}_{-0.1}$ & 0.5 & 353/266 \\ ASASSN--14li & XMM/RGS & 36150 & 56999& 0.35 -- 1.2 & $4.6^{+0.8}_{-0.7}$ & $4.8^{+0.8}_{-0.2}$ & $3.8^{+0.2}_{-0.1}$ & 0.5 & 1267/873 \\ ASASSN--14li & XMM/RGS & 9600 & 57024& 0.35 -- 1.2 & $6.5^{+1.4}_{-1.2}$ & $6.4^{+1.7}_{-0.4}$ & $3.6^{+0.2}_{-0.04}$ & 0.5 & 378/269 \\ ASASSN--14li & XMM/RGS & 2086 & 57213& 0.35 -- 1.2 & $4.0^{+0.5}_{-0.5}$ & $4.8^{+4.5}_{-0.7}$ & $3.2^{+0.6}_{-0.1}$ & 0.5 & 79/72 \\ ASASSN--14li & XMM/RGS & 5700 & 57367& 0.35 -- 1.2 & $4.0^{+0.5}_{-0.5}$ & $5.2^{+4.1}_{-0.7}$ & $3.0^{+0.4}_{-0.1}$ & 0.5 & 222/232 \\ ASASSN--14li & XMM/PN & 22400 & 57399 & 0.2 -- 1.0 & $3.7^{+0.9}_{-0.4}$ & $5.1^{+1.5}_{-0.3}$ & $2.9^{+0.4}_{-0.1}$ & 0.5 & 214/157 \\ ASASSN--14li & XMM/PN & 18400 & 57544 & 0.2 -- 1.0 & $3.9^{+0.6}_{-0.5}$ & $6.0^{+1.6}_{-0.4}$ & $2.6^{+0.2}_{-0.1}$ & 0.5 & 171/157\\ ASASSN--14li & XMM/PN & 9800 & 57727 & 
0.2 -- 1.0 & $3.9^{+0.8}_{-0.6}$ & $6.2^{+3.2}_{-0.5}$ & $2.5^{+0.3}_{-0.2}$ & 0.5 & 133/157 \\ ASASSN--14li & XMM/PN & 8200 & 57912& 0.2 -- 1.0 & $3.7^{+0.9}_{-0.8}$ & $8.0^{+6.1}_{-0.7}$ & $2.2^{+0.4}_{-0.1}$ & 0.5 & 129/157 \\ ASASSN--14li & XMM/PN & 7600 & 58093 & 0.2 -- 1.0 &$1.5^{+0.8}_{-0.6}$ & $2.6^{+2.4}_{-0.2}$ & $2.5^{+0.4}_{-0.2}$ & 0.5 & 175/157 \\\hline ASASSN--15oi & XMM/PN & 600 & 57324 & 0.2 -- 1.0 & $4.8^\dagger$ & $0.58^{+0.22}_{-0.16}$ & $5.0^{+0.5}_{-0.7}$ & 1.5 & 139/157 \\ ASASSN--15oi & XMM/PN & 4050 & 57482 & 0.2 -- 1.0 & $4.8^\dagger$ & $2.8^{+0.8}_{-0.3}$ & $3.3^{+0.3}_{-0.1}$ & 0.5 & 179/157 \\\hline OGLE16aaa & XMM/PN & 260 & 57548 & 0.2 -- 1.0 & $3.3^{+3.4}_{-3.3}$ & $1.7^{+1.4}_{-0.8}$ & $4.1^{+1.9}_{-1.2}$ & 0.5 & 112/157\\ OGLE16aaa & XMM/PN & 4500 & 57722 & 0.2 -- 1.0 & $1.3^{+0.6}_{-0.9}$ & $1.5^{+0.5}_{-0.6}$ & $6.1^{+0.4}_{-1.1}$ & 0.5 & 173/157\\ \hline AT2019dsg & NICER & 13000 & 58624 &0.3 -- 1.0& $8.6^{+1.4}_{-0.8}$ & $1.8^{+1.3}_{-0.3}$ & $4.9^{+0.8}_{-0.1}$ & 0.5 & 10.5/13 \\ AT2019dsg & NICER & 3800 & 58625 &0.3 -- 1.0& $10.0^{+1.7}_{-3.1}$ & $2.8^{+1.4}_{-1.4}$ & $5.6^{+0.5}_{-0.9}$ & 1.5 & 14/12 \\ AT2019dsg & NICER & 1300 & 58630 &0.3 -- 1.0& $8.1^{+2.4}_{-0.7}$ & $1.8^{+2.8}_{-0.5}$ & $4.8^{+1.2}_{-0.5}$ & 0.5 & 8/12 \\ AT2019dsg & NICER & 750 & 58633 &0.3 -- 1.0& $9.1^{+2.9}_{-1.1}$ & $2.3^{+4.7}_{-0.9}$ & $4.5^{+1.3}_{-0.5}$ & 1.4 & 13/12 \\ AT2019dsg & NICER & 660 & 58634 &0.3 -- 1.0& $6.3^{+2.7}_{-0.8}$ & $1.1^{+1.9}_{-0.5}$ & $5.0^{+1.7}_{-0.6}$ & 0.6 & 10/11\\ \hline AT2018zr & XMM/PN & 300 & 58220 & 0.2 -- 1.0 & $14.3^{+2.7}_{-3.3}$ & $0.071^{+0.040}_{-0.036}$ & $9.5^{+0.3}_{-1.1}$ & 1.5 & 198/157\\ AT2018zr & XMM/PN & 185 & 58242 & 0.2 -- 1.0 & $17.1^{+8.9}_{-5.1}$ & $0.095^{+0.045}_{-0.040}$ & $8.7^{+0.5}_{-2.2}$ & 1.5 & 165/157 \\ \hline 3XMM J1521 & XMM/PN & 3050 & 51778 &0.2 -- 1.2 & $2.1^{+1.3}_{-0.8}$ & $0.32^{+0.03}_{-0.03}$ & $8.2^{+1.7}_{-0.5}$ & 0.5 & 232/198\\\hline GSN 069 & XMM/PN & 55000 & 56996 
&0.2 -- 1.0& $3.3^{+0.3}_{-0.3}$ & $0.67^{+0.06}_{-0.02}$ & $3.8^{+0.2}_{-0.05}$ & 0.5 & 256/158\\\hline 2XMM J1847 & XMM/PN &2000 & 53985 &0.2 -- 1.2& $6.3^\dagger$ & $0.20^{+0.03}_{-0.03}$ & $7.5^{+0.4}_{-0.4}$ & 1.5 & 176/175 \\ 2XMM J1847 & XMM/PN &18000 & 54206 & 0.2 -- 1.2& $9.6^{+0.4}_{-0.8}$ & $0.55^{+0.05}_{-0.07}$ & $8.2^{+0.2}_{-0.4}$ & 1.5 & 254/198 \\\hline AT2019azh & Swift &240& 58553 - 58634 &0.3 -- 10& $5.4^{+8.0}_{-5.3}$ & $0.6^{+3.8}_{-0.5}$ & $4.3^{+1.8}_{-1.5}$ & 1.5 & 90/82 \\ AT2019azh & Swift &2500& 58767 - 58977 &0.3 -- 10& $4.2^\dagger$ & $1.4^{+1.1}_{-0.1}$ & $3.8^{+1.0}_{-0.05}$ & 0.5 & 55/65 \\\hline 2MASX J0249 & XMM/PN &1780& 53930 &0.2 -- 1.2 & $31.8^{+10.2}_{-3.8}$ & $0.23^{+0.52}_{-0.09}$ & $6.8^{+1.6}_{-0.6}$ & 0.5 & 260/219 \\\hline AT2020ksf & NICER &14450& 59187 - 59189&0.3 -- 1.5& $3.3^{+1.4}_{-0.6}$ & $0.8^{+0.3}_{-0.05}$ & $7.5^{+1.0}_{-0.1}$& 0.5 & 22/22 \\ AT2020ksf & NICER &11230& 59191 - 59195&0.3 -- 1.4& $6.2^{+1.0}_{-0.9}$ & $1.7^{+0.5}_{-0.2}$ & $5.7^{+0.4}_{-0.2}$ & 0.5 & 27/19\\ AT2020ksf & Swift &440 &59179 - 59205 &0.3 -- 1.5& $5.2^{+6.2}_{-3.5}$ & $1.1^{+2.1}_{-0.5}$ & $6.3^{+2.1}_{-0.7}$ & 0.5 & 45/65 \\\hline \end{tabular} \end{table*} \section{Analysis}\label{sec4} \subsection{Model implementation} The model described by equation (\ref{MB}) takes as input four parameters, one of which -- the source-observer distance $D$ -- is fixed at the observed distance of the TDE's host galaxy. The remaining three parameters ($R_p$, $\T_p$ and the index $\gamma$) are allowed to vary for each observation. The index $\gamma$, which determines the leading power-law behaviour of the disc spectrum, is the only parameter which is constrained theoretically, as it must lie in the following range: \begin{equation} 1/2 \leq \gamma \leq 3/2 . \end{equation} Physically, the parameter $\gamma$ varies depending on two factors. 
First, on the observed geometry of the hottest region of the accretion disc (whether the disc is observed edge-on or face-on). Second, on whether the disc temperature maximum occurs at the disc inner edge, or in the bulk of the disc body. The following values are known from theory: \begin{itemize} \item $\gamma = 1/2$, disc observed face on, with the temperature maximum inside the disc body $R_{\rm in} < R_p < R_{\rm out}$. \item $\gamma = 1$, disc observed edge on, with the temperature maximum inside the disc body $R_{\rm in} < R_p < R_{\rm out}$. \item $\gamma = 1$, disc observed face on, with the temperature maximum at the disc inner edge $R_{\rm in} = R_p$. \item $\gamma = 3/2$, disc observed edge on, with the temperature maximum at the disc inner edge $R_{\rm in} = R_p$. \end{itemize} The properties of $\gamma$ are discussed in more detail in Balbus (2014) and Mummery \& Balbus (2020a). In practice, the observed accretion disc spectrum is only weakly dependent on $\gamma$, which cannot be strongly constrained from observations. In this work we therefore treat $\gamma$ as a nuisance parameter, letting it vary between its allowed limits at each epoch. In fact, the 1-$\sigma$ uncertainties on $\gamma$ typically fill the entire allowed range of $\gamma \in 1/2$ -- $3/2$, and as such the $\gamma$ parameter merely extends the uncertainty range of the parameters $R_p$ and $T_p$. The quantity $\T_p$ is given by $\T_p \equiv f_{\rm col} f_\gamma T_p$, where $T_p$ is the hottest physical temperature in the accretion disc. The XSPEC model takes the physical peak temperature $T_p$ as an input. We then compute the disc colour-correction factor using the Done {\it et al}. (2012) model: \begin{align}\label{col1} &f_{\rm col}(T_p) = \left(\frac{72\, {\rm keV}}{k T_p}\right)^{1/9}, \quad T_p > 1\times10^5 {\rm K}. 
\\ &f_{\rm col}(T_p) = \left(\frac{T_p}{3\times10^4 {\rm K}} \right)^{0.83}, \quad 3\times10^4 {\rm K} < T_p < 1\times10^5 {\rm K} .\label{col2} \\ &f_{\rm col}(T_p) = 1, \quad T_p < 3\times 10^4 {\rm K} . \label{col3} \end{align} This model is routinely used for the modelling of AGN disc spectra. It is likely that the disc conditions in TDEs will be most similar to those in AGN (though AGN discs will be more radially extended), and so this model should accurately capture TDE disc colour-correction effects. The peak temperatures of TDE discs are typically in the high temperature regime $T_p > 1\times 10^5$ K. The disc colour-correction factor is therefore only weakly temperature dependent, $f_{\rm col} \propto T_p^{-1/9}$, with typical value $f_{\rm col} \simeq 2.3$. The final component of computing $\T_p$ is the photon energy-shift factor $f_\gamma$. This quantity depends on numerous factors which lie beyond the scope of this model, chiefly the black hole spin, the disc-observer inclination angle, and the radius in gravitational units at which the temperature maximum occurs. As none of these parameters can be determined from the existing observations, we use the energy-shift value associated with the ISCO radius of a Schwarzschild black hole, observed face-on: $f_\gamma = 1/\sqrt{2}$. The factor $f_\gamma$ does not vary strongly over the entire parameter space of black hole spin, disc radius and observer inclination angle, and so uncertainty in its value will not cause significant errors to propagate into the inferred disc parameters (this is verified in section \ref{sec5}). \subsection{Fitting procedure} The model described above is fit to the data (as summarised in Tables \ref{table:sample}, \ref{table:results}) using Xspec v12.11.1 in HEASoft v6.28 (Arnaud 1996). We account for Galactic absorption by including hydrogen column densities ($n_H$) as measured by HI4PI (HI4PI Collaboration, 2016) and tabulated in Table \ref{table:sample}, using a {\tt TBabs} model in Xspec. 
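The piecewise colour-correction model of equations (\ref{col1})--(\ref{col3}) can be written compactly as follows (Python; the function name is ours, and 72 keV is converted to Kelvin via $1\,{\rm eV} \simeq 1.16 \times 10^4$ K):

```python
def f_col(T_p):
    """Done et al. (2012) colour-correction factor; T_p in Kelvin."""
    T_72keV = 72.0e3 * 1.1605e4          # 72 keV expressed as a temperature (K)
    if T_p < 3.0e4:
        return 1.0
    if T_p < 1.0e5:
        return (T_p / 3.0e4) ** 0.83
    return (T_72keV / T_p) ** (1.0 / 9.0)
```

At a typical TDE peak temperature of $T_p \simeq 4\times10^5$ K this returns $f_{\rm col} \simeq 2.3$, the value quoted in the text.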
Initially this is left as a free parameter, but if the best fit value is significantly below the Galactic value, we fix it to the Galactic value instead. This may happen if the signal-to-noise ratio is low, which can be assessed by either the number of photons in the spectrum or the number of degrees of freedom, both of which are tabulated in Tables \ref{table:1}, \ref{table:results}. {We verified that the finding of sub-Galactic $n_H$ was a result of poor data quality and not a model deficiency by re-fitting these spectra with the {\tt diskbb} model. We also found sub-Galactic $n_H$ for {\tt diskbb}, supporting the low signal-to-noise hypothesis. } The normalisation of the disc model is set by the distance, so the model norm parameter is fixed to 1. We use the unbinned spectra in combination with Cash statistics (Cash 1979) to find the best fit parameters, unless otherwise indicated; for the NICER data we use the optimal binning scheme of Kaastra \& Bleeker (2016). The spectral range that is used in the fitting is also listed in Table \ref{table:results}, as it can differ depending on the instrument response, signal-to-noise ratio of the data, as well as the spectral shape. For example, a pure thermal spectrum will tend to be noise-dominated for $E \gtrsim 1$--$1.5$ keV, while a thermal + power-law spectrum will contain signal above the background out to higher energies. In the latter case, the expectation (and observation, e.g. Wevers et al. 2019) is that the thermal component dominates at low energies $\lesssim 1$ keV while the non-thermal component dominates at higher energies. For sources with a prominent power-law spectral component present we fit the data using a joint model ({\tt tdediscspec + powerlaw}) to describe the entire energy range with signal above the noise. Finally, uncertainties are assessed by using the Xspec error command. 
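For reference, the Cash (1979) statistic used throughout these fits has a simple closed form for Poisson-distributed counts. A minimal implementation (Python; the function name is ours, and we use the convention in which bins with zero counts contribute $2m_i$):

```python
import numpy as np

def cash_stat(data, model):
    """Cash (1979) statistic C = 2*sum(m - d + d*ln(d/m)) for Poisson counts,
    with the d*ln(d/m) term set to zero in empty bins."""
    d = np.asarray(data, dtype=float)
    m = np.asarray(model, dtype=float)
    stat = 2.0 * np.sum(m - d)
    pos = d > 0
    stat += 2.0 * np.sum(d[pos] * np.log(d[pos] / m[pos]))
    return stat
```

A perfect model ($m_i = d_i$ in every bin) gives $C = 0$; the 90 per cent confidence intervals quoted in this work correspond to the parameter values at which the statistic rises by 2.7 from its minimum.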
In cases where this leads to divergent results, we instead perform a manual stepwise exploration of the fit statistic surface and determine 90 per cent confidence intervals by finding the parameter values for which $\Delta$stat = 2.7, using the steppar command. For ASASSN--14li, the presence of additional ionized absorption has been reported (Kara et al. 2018). We therefore tested the effect of including an additional absorption component (using the {\tt gabs} model) on the accretion disc model parameters (primarily $n_H$, $T_p$ and $R_p$), finding that this can lead to variations of $\sim$25 per cent in these parameters. This is typically within the reported error budget. The best fit parameters and their uncertainties are presented in Table \ref{table:results}. Some example best-fitting X-ray spectra are displayed in Figure \ref{fig:examplespectra} in Appendix \ref{secB}. \subsection{Sources which must be removed from the sample} We reiterate that the corrections to equation (\ref{MB}) are large and poorly constrained when the disc temperature is of order the bandpass energy $k\T_p \sim E_l$. We cannot mitigate this effect by keeping additional higher order terms in the Laplace expansion (equation \ref{LaplaceExpansion}), and must instead simply remove these sources from our sample. The best fitting parameter values inferred from the spectral fits to the sources XMMSL1 J0740, XMMSL2 J1446, 3XMM J1500, AT2018fyk and SDSS J1201 all lie outside of the regime of validity of the derivation of the underlying physical model (equation \ref{MB}). Unfortunately, this means that the parameters inferred for these sources cannot be physically interpreted. Many of the sources which have inferred temperatures which are ``too high'' to be interpreted within our model also have a strong power-law component present in their spectra, in addition to a thermal component (e.g. 
AT2018fyk, XMMSL1 J0740, XMMSL2 J1446). It is likely that these temperature values are not physical, and are merely an artifact of fitting a thermal model added to a distinct power-law component to a system where the thermal and non-thermal components are likely to be intrinsically linked. The addition of a distinct power-law component will only ever approximate the effect of Comptonisation on the Wien spectrum, and the model temperature of these sources is likely strongly biased upwards by these simple model artifacts. Three sources in our sample, AT2018hyz, AT2019qiz, and RBS1032, have a very low number of photon counts: 50 (AT2018hyz), 65 (AT2019qiz) and 120 (RBS1032). For these sources the fits were too degenerate to compute error ranges on the best fitting parameters, and we remove them from our sample on data quality grounds. As these eight sources are either not in the regime where the observed emission is dominated by the Wien-tail of thermal disc emission, or do not have sufficient photon counts for a robust statistical analysis, we do not include them in our analysis going forward. \section{Results and implications}\label{sec5} \begin{figure} \centering \includegraphics[width=\linewidth]{TemperatureVsRadius.pdf} \caption{The inferred peak disc temperatures plotted against the radii at which this temperature occurred for all of the TDEs in our sample. We see a clear anti-correlation between temperature maximum and radius, which follows a $T_p \propto R_p^{-1/4}$ relationship (black dashed curve). This is the exact relationship one would expect to find if the radius scaled linearly with the black hole mass, and the bolometric luminosity of these sources was a fixed fraction of the black hole Eddington luminosity. 
} \label{fig:fig3} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{RadiusVsSigmaCurrent.pdf} \caption{The radial location inferred from the X-ray spectra, plotted against the TDE host galaxies' velocity dispersion $\sigma$. The dashed lines display the values of 1, 6 and 10 $GM_{\sigma}/c^2$, while the gray contours show the typical scatter in the $M-\sigma$ relationship (0.4 dex). The typical radii inferred from TDE X-ray spectra lie in the parameter space which would be expected if these radii tracked the ISCO radius of the TDE's black hole. } \label{fig:fig2} \end{figure} The parameter values inferred from each X-ray spectral fit are displayed in Table \ref{table:results}. We find peak temperature values which range between $2 \times 10^5 < T_p ({\rm K}) < 1 \times 10^6$, and radii which span roughly two orders of magnitude $10^{11} < R_p ({\rm cm}) < 10^{13}$. This large radial range is not surprising given that our sample includes sources with velocity dispersion measurements ($\sigma$) which span a factor $\sim 2.5$. Supermassive black hole masses are known to correlate strongly with velocity dispersion, $M_{\rm BH} \sim \sigma^{\alpha}$, with $\alpha \sim 4.5$--$5.5$. One interesting trend that is immediately obvious in our results is that the peak temperatures of TDE discs are strongly anti-correlated with the inferred disc radii (Fig. \ref{fig:fig3}). Interestingly, the correlation approximately follows $T_p \propto R_p^{-1/4}$ (black dashed curve, Fig. \ref{fig:fig3}). This scaling is exactly what would be expected if the radius $R_p$ scaled linearly with black hole mass, $R_p \propto M_{\rm BH}$, and the bolometric luminosity was a fixed fraction of the Eddington luminosity, $L_{\rm bol} \propto R_p^2 T_p^4 \propto L_{\rm edd} \propto M_{\rm BH}$. We reiterate that there is no intrinsic link between the temperature and radial parameters within the model itself, and that this correlation is a property of the sources themselves. 
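The origin of this scaling can be made explicit in one line: combining $L_{\rm bol} \propto R_p^2 T_p^4$ with the two assumptions stated above, $R_p \propto M_{\rm BH}$ and $L_{\rm bol} \propto L_{\rm edd} \propto M_{\rm BH}$, gives \begin{equation} T_p^4 \propto \frac{L_{\rm bol}}{R_p^2} \propto \frac{M_{\rm BH}}{M_{\rm BH}^2} \propto R_p^{-1} \quad \Longrightarrow \quad T_p \propto R_p^{-1/4} . \end{equation}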
It appears therefore that there are two key physical correlations which deserve further attention: a trend between bolometric luminosity and central black hole mass, and between the X-ray radius and black hole mass. Those sources with both a high quality X-ray spectrum and a measurement of the galaxy central velocity dispersion $\sigma$ are ideal targets for an analysis of any $R_p-M_{\rm BH}$ scaling relationship. In Fig. \ref{fig:fig2} we plot the X-ray radial measurements against velocity dispersion for the eight sources in our sample where both measurements are available. We display by dashed lines the values of 1, 6 and 10 $GM_{\sigma}/c^2$, with the mass values taken from the $M-\sigma$ relationship. The typical scatter ($\pm 0.4$ dex) in the $M-\sigma$ relationship is shown by the gray contours surrounding each curve. The typical radii inferred from TDE X-ray spectra lie in the parameter space which would be expected if these radii tracked the ISCO radius of the black hole. This is in sharp contrast with the findings of previous works which relied on the use of a pure blackbody profile to describe the X-ray emission (e.g. Brown et al. 2017, Holoien et al. 2018, van Velzen et al. 2019, Wevers et al. 2019, Stein et al. 2020, Cannizzaro et al. 2021, Hinkle et al. 2021), which typically find X-ray radii far smaller than the gravitational radius of the host black hole. A simple power-law fit, $R_p = A \sigma^B$, finds \begin{equation} B = 2.1 \pm 1.4, \end{equation} which is somewhat shallower than the indices found for the $M_{\rm BH}-\sigma$ relationship (e.g., Ferrarese \& Ford 2005, McConnell \& Ma 2013). Given our small sample size, we do not suggest that this represents evidence for a break in the $M-\sigma$ relationship.
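The log-space power-law fitting procedure can be sketched as follows. The $(\sigma, R_p)$ values below are hypothetical stand-ins invented for illustration (they are not the measured values of Table \ref{table:results}), and a simple bootstrap over ordinary least squares stands in for a full regression treatment of the heteroscedastic uncertainties.

```python
import math
import random

random.seed(1)

# Hypothetical (sigma [km/s], R_p [cm], 1-sigma error in log10 R_p) values,
# standing in for the eight sources with both measurements available.
# These numbers are invented for illustration only.
data = [(45, 3.2e11, 0.30), (60, 5.6e11, 0.25), (70, 7.7e11, 0.30),
        (80, 1.0e12, 0.20), (90, 1.3e12, 0.25), (100, 1.6e12, 0.30),
        (110, 1.9e12, 0.20), (120, 2.3e12, 0.25)]

def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
           / sum((x - xbar)**2 for x in xs)

# Bootstrap: resample log10(R_p) uniformly within its 1-sigma error bar and
# refit the log-space slope B on each of 1000 realisations.
slopes = []
for _ in range(1000):
    xs = [math.log10(s) for s, _, _ in data]
    ys = [math.log10(r) + random.uniform(-dr, dr) for _, r, dr in data]
    slopes.append(lsq_slope(xs, ys))

slopes.sort()
B_med = slopes[len(slopes) // 2]
B_lo, B_hi = slopes[160], slopes[840]  # ~16th and ~84th percentiles
print(f"B = {B_med:.2f} (+{B_hi - B_med:.2f} / -{B_med - B_lo:.2f})")
```

The spread of the bootstrap slopes plays the role of the quoted uncertainty on $B$; with real data the error bars on both axes would be propagated in the same way.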
However, future large samples of X-ray bright thermal state TDEs may represent the best probe of the $M-\sigma$ relationship at low $\sigma$, as TDEs preferentially occur around black holes with lower masses (e.g., Magorrian \& Tremaine 1999, Stone \& Metzger 2016, Wevers et al. 2017). In the following sub-sections we discuss a number of the implications of the results presented in Table \ref{table:results}. We begin with a discussion of the Eddington ratios of X-ray bright TDEs. \subsection{The Eddington ratio of TDE discs} It is well known that many accretion disc systems show dramatic changes in their behaviour as a function of the disc's Eddington ratio (Fender 2001, Maccarone 2003, Maccarone et al. 2003, Fender \& Belloni 2004, Fender et al. 2004, Koerding, Jester \& Fender 2006, Ruan et al. 2019, Wevers 2020, Wevers et al. 2021). In particular, X-ray binary discs inhabit the so-called ``soft state'', spectrally analogous to the thermal TDE X-ray spectra studied in this paper, when their luminosity spans roughly $L_{\rm bol} \sim 0.01 - 1 L_{\rm edd}$. With both the value and radial location of the peak TDE disc temperature determined from observations, the bolometric luminosity of the TDE disc can be self-consistently estimated. In a Newtonian theory of gravity, the bolometric disc luminosity is given by \begin{equation} L_{\rm bol} = \int^{R_{\rm out}}_{R_{\rm in}} 2\pi R\, 2\sigma_{SB} T^4(R) \,\,{\rm d}R , \end{equation} where $R_{\rm in}$ and $R_{\rm out}$ are the inner and outer disc radii respectively, and $\sigma_{SB}$ is the Stefan-Boltzmann constant. In a relativistic disc the only modification of this expression is the deviation of the local disc area element from $2\pi R \,{\rm d}R$. This minor simplification will not greatly affect our results.
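As a concrete check, the Newtonian luminosity integral above can be evaluated numerically for a disc truncated at the temperature peak and following the classical $T \propto R^{-3/4}$ profile outside it, and compared with the closed form that the integral yields analytically. The parameter values below are illustrative assumptions, chosen only to be of the right order for our sample.

```python
import math

sigma_SB = 5.670e-5  # Stefan-Boltzmann constant (cgs)

# Illustrative (assumed) disc parameters, roughly the observed scales
T_p = 4.0e5          # peak disc temperature, K
R_p = 1.0e12         # radius of the temperature peak, cm
R_out = 50.0 * R_p   # outer disc radius

def T(R):
    """Classical steady-disc temperature profile exterior to R_p."""
    return T_p * (R / R_p)**-0.75

# Trapezoidal quadrature of L_bol = int 2 pi R . 2 sigma_SB T^4(R) dR,
# evaluated on a logarithmic grid in R (substituting R = e^x).
N = 20000
a, b = math.log(R_p), math.log(R_out)
grid = [a + (b - a) * i / N for i in range(N + 1)]
integrand = [4.0 * math.pi * sigma_SB * math.exp(x)**2 * T(math.exp(x))**4
             for x in grid]
L_num = (b - a) / N * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))

# Performing the integral analytically for this truncated profile gives
# L_bol = 4 pi sigma_SB R_p^2 T_p^4 (1 - R_p / R_out)
L_ana = 4.0 * math.pi * sigma_SB * R_p**2 * T_p**4 * (1.0 - R_p / R_out)

print(L_num / L_ana)  # -> 1.0000... (agreement to quadrature accuracy)
```

The logarithmic grid is convenient here because the integrand falls off as $1/R$, so equal steps in $\ln R$ sample it uniformly.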
If we assume that there is minimal emission interior to the radius at which the temperature peaks, and that the temperature falls exterior to this radius according to the classical $T(R) \propto R^{-3/4}$ accretion profile, we have \begin{equation}\label{LBOL} L_{\rm bol} = 4\pi\sigma_{SB} R_p^2 T_p^4 \left[ 1- \left(\frac{R_p}{R_{\rm out}}\right)\right] . \end{equation} It is important to note here that this luminosity corresponds to the disc-frame ``de-absorbed'' luminosity, as the effects of absorption of photons between the TDE and the observer have been modelled out with the inclusion of the {\tt TBabs} model (Table \ref{table:results}). We generally find low values for the absorbing hydrogen column density $n_H$ (Fig. \ref{fig:nsigma}), and so any errors introduced by incorrect modelling of the absorption are likely to be minimal. {We test the validity of the simplifications used in this modelling in Appendix C. We find that the errors associated with these simplifications are typically at the $\sim$ factor $2$ level, with equation \ref{LBOL} on average slightly overestimating the bolometric luminosity of the system. A factor $2$ uncertainty is less than the error range introduced by the uncertainty in the fitted parameters for most observations in our sample (Table \ref{table:results}). } We take $R_{\rm out}$ to be the circularisation radius of the TDE, whereby \begin{equation} R_{\rm out} = 94 \beta \, R_g \, \left({M_{\rm BH} \over 10^6M_\odot}\right)^{-2/3} \end{equation} for a solar-type star. {In this expression $\beta$ is the ``penetration factor'' of the orbit, the ratio of the disrupted star's orbital pericentre $R_{\rm peri}$ to the tidal radius ($\beta = R_{\rm peri}/R_T \leq 1$). We will assume that $\beta = 1$ for the remainder of this paper.
}This choice of outer radius has only a small effect on the inferred bolometric luminosity for our TDE sample\footnote{Choosing $R_{\rm out} \to \infty$ changes the individual luminosity values of our sample by no more than 15\% for any data point, and typically by $\sim 5$\%. On the other hand, if a TDE originated from a highly relativistic disruption with pericenter radius $R_{\rm peri} \sim R_I$ ($\beta \ll 1$), then the resulting bolometric luminosity may be smaller than the values given here. At late times the outer radius of all TDE discs will be large as a result of the conservation of angular momentum of the accreting disc material. } which, as they generally have small inferred $M_{\rm BH, \sigma}$, have large outer radii (Figs. \ref{fig:bolmsig}, \ref{fig:fig1}). Using the values of $R_p$ and $T_p$ inferred from the TDEs' X-ray spectra, and the black hole masses inferred from the $M_{\rm BH}-\sigma$ relationship, we plot the disc bolometric luminosity versus the $M_{\rm BH} - \sigma$ black hole mass in Figure \ref{fig:bolmsig}. The inset shows the peak bolometric luminosity observed from each source. The black dashed curve shows the Eddington luminosity \begin{equation} L_{\rm edd} = 1\times 10^{45} \left({M_{\rm BH} \over 8 \times 10^6 M_\odot}\right) \,\, {\rm erg/s}, \end{equation} while the red dashed curve is equal to $0.01 L_{\rm edd}$. \begin{figure} \centering \includegraphics[width=\linewidth]{BolometricLuminosityVsMsigma.pdf} \caption{The inferred bolometric disc luminosity plotted against the black hole mass implied from the $M-\sigma$ relationship. Inset: the peak bolometric luminosity of each TDE plotted against $M-\sigma$ mass. We see a clear positive correlation between peak bolometric luminosity and black hole mass (best fit displayed by the blue dashed curve in the inset). Each TDE observation is, within the error bars, consistent with being between $L = L_{\rm edd}$ (black dashed curve) and $L = 0.01 L_{\rm edd}$ (red dashed curve).
} \label{fig:bolmsig} \end{figure} In Figure \ref{fig:bolmsig} we see a clear positive correlation between the peak bolometric luminosity and black hole mass of our TDE sample. A power-law fit to the sample, $L_{\rm bol, peak} = A M_{\rm BH, \sigma}^B$, gives \begin{equation} B = 0.93 \pm 0.33, \end{equation} i.e., consistent with a linear relationship within the one standard deviation uncertainties, suggesting that TDE discs form at constant Eddington ratio. The best-fitting amplitude $A$ in this relationship corresponds to an Eddington ratio of $f_{\rm edd} = 0.37^{+0.46}_{-0.21}$. We further quantify this result by performing a power-law fit to these data which takes into account the heteroscedastic measurement uncertainties. We use the Python implementation of the {\tt linmix} package (Kelly 2007) to perform linear regression in log-space: \begin{equation} \log_{10}(L_{\rm bol, peak}) = \alpha + \beta \log_{10}(M_{{\rm BH},\sigma}), \end{equation} {with $L_{\rm bol, peak}$ in units of erg s$^{-1}$, and $M_{{\rm BH},\sigma}$ in solar masses. } To determine $\alpha$ and $\beta$ with their uncertainties, we bootstrap the data by uniformly sampling within the 1-$\sigma$ error bars and performing the correlation analysis on 1000 realizations. These results are consistent with a linear relationship, with median and standard deviation given by $\alpha = 37 \pm 2$ and $\beta = 1.1 \pm 0.3$ (see the inset of Figure \ref{fig:bolmsig}). In Figure \ref{fig:bolmsig} we see that each TDE observation is, within the error bars, consistent with lying in the range $0.01 L_{\rm edd} < L < L_{\rm edd}$. This can be seen more clearly in Figure \ref{fig:edrat}, where we plot the Eddington {luminosity} ratio of each TDE \begin{equation} f_{\rm edd} \equiv L_{\rm bol}/L_{\rm edd} .
\end{equation} It is interesting, but perhaps not surprising, that the TDEs we have examined in this paper have Eddington ratios which are at the same level as those of X-ray binaries in their ``soft'' state, $0.01 \leq f_{\rm edd} \leq 1$. The TDEs that are modelled successfully by equation (\ref{MB}) are themselves in a thermally dominated state. Our results suggest that the soft accretion state properties of black hole discs are universal, and in particular are independent of black hole mass. \begin{figure} \centering \includegraphics[width=\linewidth]{EddingtonRatioVsSigmaMassCurrent.pdf} \caption{The Eddington {luminosity} ratio of the TDE sources with both a high quality X-ray spectrum and a galactic velocity dispersion measurement. The Eddington ratio is computed assuming that the TDE black hole mass is given by the $M_{\rm BH}-\sigma$ value, and that the bolometric luminosity is given by equation \ref{LBOL}. Every TDE is, within the uncertainties, consistent with having a sub-Eddington luminosity $f_{\rm edd} \leq 1$. In addition, all TDE sources are, within the uncertainties, consistent with having a luminosity higher than the hard-state transition luminosity seen in X-ray binaries, $f_{\rm edd} \geq 0.01$. } \label{fig:edrat} \end{figure} Finally, in Fig. \ref{fig:edrat}, we plot as a red dashed curve the canonical `fallback rate' calculation of the Eddington {mass accretion rate} ratio $f_{\rm fb} \equiv {\dot M_{\rm fb} / {\dot M}_{\rm edd}} \propto M_{\rm BH}^{-3/2}$ (Rees 1988), assuming a solar-mass star and a nominal accretion efficiency of $0.1$. We find no evidence for a fundamental link between the observed Eddington {luminosity} ratio of TDE discs and the fallback {accretion} rate {ratio}. {For disc systems which are out of inflow equilibrium, such as a TDE disc, the accretion rate does not represent a fundamental constant which encapsulates the physical properties of the disc.
This is because the accretion rate in these systems varies with both disc radius and time. It is likely that the bolometric luminosity of a TDE disc will better trace the mass accretion rate across its inner edge (at the ISCO), while the fallback rate represents the mass flux into the disc's {\it outer} edge. The mass fluxes at the inner and outer edges of the disc are not equal and may in fact be very different (e.g., Mummery \& Balbus 2021). In addition, it is possible that winds launched in the early stages of TDE accretion remove material from the flow, restricting the luminosity to near Eddington values. Finally, steady-state discs at high mass accretion rates are known to deviate from a linear luminosity--accretion rate relationship (e.g., Abramowicz et al. 1998, Jiang et al. 2019), which may further explain why the fall-back rate fails to describe the observed luminosity evolution of our sources. } \subsection{(A lack of) Absorption in TDE X-ray spectra} \begin{figure} \centering \includegraphics[width=\linewidth]{nHVsMsigma.pdf} \caption{The absorbing column depth of the {\tt TBabs} model, plotted against the $M_{\rm BH} - \sigma$ black hole mass, for the TDEs in our sample. There is no correlation between $n_H$ and black hole mass, and all values of $n_H$ are relatively small, $n_H \ll 10^{22}$ cm${}^{-2}$. Points displayed without vertical error bars are fixed at the galactic value for the column density. } \label{fig:nsigma} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{evolving_nH.pdf} \caption{The evolving absorbing column depth of the {\tt TBabs} model, plotted against the time since the first X-ray observation of ASASSN-14li and AT2019dsg. The final observation of ASASSN-14li shows that the absorption has fallen to the level of the galactic column density in its direction (denoted by the dotted lines).
} \label{fig:n_t} \end{figure} The results of the previous sub-section indicate that the bolometric luminosities of the TDEs in our sample are limited by the Eddington luminosity of their host black holes. It is important to verify that this result is not produced by an increasing absorbing column density at low black hole masses, which could potentially act as the dominant systematic effect in our sample. In Figure \ref{fig:nsigma} we plot the absorbing column density $n_H$ from the {\tt TBabs} model against the $M_{\rm BH}-\sigma$ black hole mass of the TDEs in our sample. There is no correlation between $n_H$ and black hole mass, and all values of $n_H$ are relatively small\footnote{The one potential exception to this finding is 2MASXJ0249, which has a moderate absorbing column density $n_H \simeq 3 \times 10^{21}$ cm${}^{-2}$. We note that the detailed spectral modelling of Strotjohann {\it et al}. (2016) found that 2MASXJ0249 was best described by a complex absorber with both cold and ionised gas. }, $n_H < 10^{21}$ cm${}^{-2}$. A systematic effect resulting from an increase in $n_H$ around low black hole mass TDEs can therefore be ruled out as driving the $L_{\rm bol}-M_{\rm BH}$ correlation of the previous sub-section. This result is important but perhaps unsurprising, as we have selected a sample of bright thermal X-ray sources which will by construction have modest absorbing column depths. It is interesting to examine the evolution of the absorbing column density for the two TDEs with the best temporal coverage, ASASSN-14li and AT2019dsg (Figure \ref{fig:n_t}). Both sources are observed to have absorbing column densities above the galactic level (denoted by dotted lines) at early times. At the latest times it appears that, particularly for ASASSN-14li, the absorbing column density evolves to the galactic level (as also found by Kara et al. 2018, Wen et al. 2020).
One interpretation of this behaviour would be a wind of material which is launched in the early stages of the TDE, which at early times acts as an absorbing column, but which is not present, or is significantly weaker, at late times. \subsection{Measuring black hole masses from TDE X-ray spectra} The correlation between $R_p$ and $\sigma$ (Figure \ref{fig:fig2}) suggests that the X-ray radius $R_p$ can be used as a measurement of the black hole mass. This is unsurprising, as the parameter $R_p$ corresponds to the observed location at which the TDE disc temperature is a maximum. As this radial co-ordinate will likely correspond to the region around the ISCO (Mummery 2021), it will scale linearly with the physical radius of the TDE's accretion disc. As highlighted in Mummery (2021), however, there are two compounding parameter degeneracies which must be taken into account before this inversion can be made. The first relates to the inclination angle between the TDE disc and the observer, which modifies the projected image plane radius of the temperature region. The second intrinsic degeneracy between radius and mass relates to the black hole's spin, which changes the size of the ISCO in units of the black hole's mass. To examine the effects of varying black hole spin and disc-observer inclination angle on the inferred radial size parameter $R_p$, we numerically simulate a number of 0.3--2 keV X-ray spectra with known black hole masses, spins and inclinations, and fit them with the model of equation \ref{MB}. The mock X-ray spectra are generated by solving the relativistic thin disc equations, and ray-tracing the resulting temperature profiles (see e.g., Mummery \& Balbus 2020 for a description of the algorithms used to solve both the disc evolution and photon orbit equations).
The mock spectra were then fit with equation \ref{MB}, and the parameter $R_p$ was compared to the (known) value of both the ISCO radius $R_I$ and the (known) gravitational radius $R_g \equiv GM_{\rm BH}/c^2$. In Figure \ref{fig:Rc} we display the inferred $R_p$ parameter, in units of the gravitational radius $R_g$ (which we denote by $Y \equiv R_p/R_g$), as a function of black hole spin and inclination angle. Typically the radius inferred from the spectral observations lies in the range of 1 to 10 gravitational radii, with extremes a factor of two higher (lower) for low (high) spins and low (high) inclinations. The ratio of $R_p$ to the location of the ISCO radius is shown in Figure \ref{fig:DR}, indicating that a measurement of $R_p$ should provide a good estimate of the size of the TDE disc's ISCO for most inclination angles, with large errors only at high ($\theta_{\rm inc} > 80^\circ$) and low ($\theta_{\rm inc} < 10^\circ$) inclinations. The ratio $R_I/R_p$ is principally dependent only on the inclination of the disc, with higher inclinations leading to $R_p < R_I$, and smaller inclinations to $R_p > R_I$. While the unknown value of $\theta_{\rm inc}$ introduces scatter into the $R_p-M_{\rm BH}$ relationship, if the TDE black hole spins are assumed to be uniformly distributed, and the observed inclination angles are assumed to be distributed uniformly on a sphere, then the mean value of the ratio $R_I/R_p$ across the whole population is found to be $0.97$. The distribution of the black hole masses one will infer from the radial parameter $R_p$ for a population of TDEs will depend on the intrinsic distributions of both the inclination angle $\theta$ and the black hole spin $a$. It is likely that TDE sources will be distributed randomly on a sphere, meaning that $\cos\theta$ will be uniformly distributed. The distribution of TDE black hole spins, however, is much less certain. In Fig.
\ref{fig:Ydf} we plot the probability density functions (PDFs, solid curves) and cumulative distribution functions (CDFs, dashed curves) of the variable $Y = R_p/R_g$ (i.e., the radius inferred from the X-ray spectra in units of the black hole mass) for three different intrinsic TDE spin distributions. In blue the spin distribution is assumed to be uniform; the green curves are for a spin distribution\footnote{The notation $p(a)$ here denotes the probability density function of the black hole spin. } $p(a) \propto a^2$ (i.e., higher spins are favoured), while the red curves are for a spin distribution $p(a) \propto (a-1)^2$ (i.e., lower spins are favoured). While the choice of black hole spin distribution does affect the black hole mass estimate, within the uncertainties inherent to this sort of modelling (the systematic offsets between the different distributions are smaller than their widths), it is unlikely to affect the interpretation of our results. \begin{figure} \centering \includegraphics[width=\linewidth]{ContourRp_Rg.png} \caption{The distribution of $Y = c^2 R_p/GM_{\rm BH}$ as a function of black hole spin and inclination angle. Increasing either the spin or inclination angle causes the inferred $R_p$ parameter to decrease. } \label{fig:Rc} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{ContourRI_Rp.png} \caption{The distribution of the ratio $R_I/R_p$ as a function of black hole spin and inclination angle. While there remains some trend with spin, this ratio is principally dependent on $\theta$. The white line at approximately $\theta \simeq 50^\circ$ shows that the model performs best at this inclination.
} \label{fig:DR} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Rp_distributions.pdf} \caption{Probability density function (PDF, solid curves) and cumulative distribution function (CDF, dashed curves) of $Y = c^2 R_p/GM_{\rm BH}$, for three different assumptions of the underlying SMBH spin distribution. The blue curves are for a uniform spin distribution ($p(a) = 1$), the green curves are for a spin distribution $p(a) \propto a^2$ (i.e., higher spins are favoured), while the red curves are for a spin distribution $p(a) \propto (a-1)^2$ (i.e., lower spins are favoured). } \label{fig:Ydf} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{MeanRadiusVsMSigmaCurrent.pdf} \caption{The average of the peak-temperature radial location inferred from the X-ray spectra, plotted against the black hole mass implied by the $M-\sigma$ relationship. Plotted as a brown solid curve is the circularisation radius of a TDE, expected to be the disc {\it outer} edge. Shown as dashed curves are the values 1, 2 and 6 $GM/c^2$. The blue shaded region shows the 99\% confidence region of the uniform black hole spin distribution (Fig. \ref{fig:Ydf}). The TDE sources studied in this paper occupy the expected region of $R_p-M_{\rm BH, \sigma}$ parameter space. } \label{fig:fig1} \end{figure} In Fig. \ref{fig:fig1} we plot the mean radius inferred from the X-ray spectral measurements of each TDE against the black hole masses inferred from the $M-\sigma$ relationship, using our measurements of the velocity dispersion (Table \ref{table:sample}). Shown as dashed curves are the values 1, 2 and 6 $GM/c^2$. The blue shaded region shows the 99\% confidence region of the uniform black hole spin distribution (Fig. \ref{fig:Ydf}). As is clear in Fig.
\ref{fig:fig1}, despite the large scatter inherent to the $M-\sigma$ relationship, and the uncertainty in our measurements of $R_p$, the TDE sources studied in this paper occupy the expected region of $R_p-M_{\rm BH, \sigma}$ parameter space. Assuming a uniform spin distribution, we can calibrate a radius-to-mass conversion factor $X$, defined as \begin{equation} \left( { M_{{\rm BH}, R_p} \over 10^6 M_\odot}\right) = X \left({R_p \over 10^{12} \, {\rm cm}}\right) . \end{equation} We find a mean value of $X$ of \begin{equation}\label{conversion_factor} \overline{X} \simeq 4.9^{+7.1}_{-3.0}, \end{equation} {where the error range denotes the 1$\sigma$ confidence interval (note that this interval corresponds to roughly $\pm0.4$ dex).} From this conversion we can compute an X-ray spectral-fit black hole mass for each TDE. Note that this particular value of $\overline{X}$ implies that $R_p \simeq 1.4 R_g$, and that a radius $R_p = 2 \times 10^{11}$ cm corresponds to a black hole mass of $\sim 10^6 M_\odot$ (i.e., all sources bar AT2018zr studied in this work are consistent with having a black hole mass $M>10^6 M_\odot$). In Table \ref{tab:mass} we collate the black hole masses inferred from the mean X-ray radius of the TDEs examined in this paper. As a measure of the uncertainty in this conversion we include error ranges which correspond to the 5\%-95\% confidence region of the radius-to-mass conversion distribution, assuming a uniform distribution of black hole spins (blue curve, Figure \ref{fig:Ydf}). We note that two of these sources, ASASSN-14li and ASASSN-15oi, have had black hole masses inferred from their X-ray spectra using relativistic slim disc modelling (Wen et al. 2020). It is encouraging that our inferred best-fitting masses ($M_{14{\rm li}} = 10 \times 10^6 M_\odot$, $M_{15{\rm oi}} = 5 \times 10^6 M_\odot$) are both consistent with the values found from the more complex models of Wen et al.
(2020): $M_{14{\rm li}} = 10 \times 10^6 M_\odot$, $M_{15{\rm oi}} = 4 \times 10^6 M_\odot$, albeit with wider uncertainties. \begin{table} \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{c|c} Source & $\log_{10} M_{\rm BH}/M_\odot$ \\ \hline ASASSN-14li & $7.19^{+0.62}_{-0.66}$ \\ \hline ASASSN-15oi & $6.67^{+0.63}_{-0.66}$ \\ \hline OGLE16aaa & $6.66^{+0.62}_{-0.66}$ \\ \hline AT2018zr & $5.37^{+0.62}_{-0.66}$ \\ \hline AT2019azh & $6.46^{+0.62}_{-0.66}$ \\ \hline AT2019dsg & $6.74^{+0.63}_{-0.66}$ \\ \hline AT2020ksf & $6.53^{+0.62}_{-0.67}$\\ \hline GSN069 & $6.28^{+0.62}_{-0.66}$ \\ \hline 2XMM J1847 & $6.02^{+0.62}_{-0.66}$ \\ \hline 2MASX J0249 & $5.81^{+0.62}_{-0.66}$\\ \hline 3XMM J1521 & $5.96^{+0.62}_{-0.66}$\\ \hline \end{tabular} \caption{The black hole masses inferred from the mean X-ray radius of the TDEs examined in this paper. The error ranges correspond to the 5\%-95\% confidence region of the radius-to-mass conversion distribution, assuming a uniform distribution of black hole spins (blue curve, Figure \ref{fig:Ydf}). The values themselves correspond to the 50th percentile of the distribution. } \label{tab:mass} \end{table} {The inversion procedure developed here has relatively large intrinsic scatter, of order $\sim 0.6$ dex. It may be possible to shrink this uncertainty further by incorporating additional spectral information (the measured disc temperature) into the parameter inversion procedure. This approach has promise, as the disc temperature is also found to scale with the disc radius (Fig. \ref{fig:fig3}) as a result of the near-universal Eddington luminosity ratio found in this work, and it may be an interesting future extension of this analysis. However, the fact that a given TDE source will be accreting with a near-Eddington luminosity is not guaranteed to be true {\it a priori}, and it seems reasonable to assume that sources with $f_{\rm edd} \neq 1$ will exist at some non-zero rate in the total TDE population.
The time-dependent cooling of the disc temperature may also complicate the inversion procedure, and we therefore do not incorporate the disc temperature into the analysis performed here. } With this radius-to-mass conversion factor determined, we plot in Figures \ref{fig:bolrad}, \ref{fig:edratRP} and \ref{fig:nr} the bolometric luminosity, Eddington ratio and absorbing column depth against $M_{{\rm BH}, R_p}$, the black hole mass determined from each X-ray radius. We again find that the bolometric luminosity scales approximately linearly with black hole mass, and that the Eddington ratios of these discs are limited to the range $0.01 \leq f_{\rm edd} \leq 1$. Note that the values of the luminosities and Eddington ratios of the sources with $\sigma$ measurements in Figs. \ref{fig:bolrad} and \ref{fig:edratRP} differ slightly from those of Figs. \ref{fig:bolmsig} and \ref{fig:edrat}. This is due to the use of the $M_{\rm BH} - R_p$ black hole mass values in calculating $L_{\rm bol}$ and $f_{\rm edd}$, rather than the $M_{\rm BH} - \sigma$ values. In addition, once again, no relationship between absorbing column density and black hole mass is found. This result indicates that the finding of Eddington-limited TDE accretion is robust, and does not depend on the use of the $M_{\rm BH}-\sigma$ relationship. \begin{figure} \centering \includegraphics[width=\linewidth]{BolometricLuminosityVsMassFromRadius.pdf} \caption{The inferred bolometric disc luminosity plotted against the $M_{\rm BH}-R_p$ mass for all of the TDEs in our sample. We see a clear positive correlation between bolometric luminosity and black hole mass. This is the exact relationship one would expect to find if the bolometric luminosity of these sources was a fixed fraction of the black hole's Eddington luminosity. } \label{fig:bolrad} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{EddingtonRatioVsMassFromRadius.pdf} \caption{The Eddington {luminosity} ratio of the TDE sources in our sample.
The Eddington ratio is computed assuming that the TDE black hole mass is given by the $M-R_p$ conversion of equation \ref{conversion_factor}, and that the bolometric luminosity is given by equation \ref{LBOL}. Every TDE is, within the uncertainties, consistent with having a sub-Eddington luminosity $f_{\rm edd} \leq 1$. In addition, all TDE sources are, within the uncertainties, consistent with having a luminosity higher than the hard-state transition scale seen in X-ray binaries $f_{\rm edd} \geq 0.01$. } \label{fig:edratRP} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{nHVsMRp.pdf} \caption{ The neutral absorbing column depth of the {\tt TBabs} model, plotted against the $M_{\rm BH} - R_p$ black hole mass, for the TDEs in our sample. There is no correlation between $n_H$ and black hole mass, and all values of $n_H$ are relatively small $n_H \ll 10^{22}$ cm${}^{-2}$. Points displayed without vertical error bars are fixed at the galactic value for the column density. } \label{fig:nr} \end{figure} \subsection{The missing energy problem} The integrated observed energy in TDE X-ray light curves typically reaches values of $E_X \sim 10^{50}$ erg (e.g., Holoien et al. 2014b), far below the energies expected from the accretion of a significant fraction of the incoming stellar mass $M_{\rm acc} \sim 0.1 M_\odot$ ($E_{\rm rad} \sim 10^{52}$ erg). By modelling the X-ray and UV light curves of a number of TDEs, Mummery (2021b) argued that the discrepancy between the observed and expected radiated energies could be explained by the large ($\eta_X \sim 10-100$) bolometric corrections inherent to TDE discs, a line of reasoning also discussed by Saxton {\it et al}. (2021). This argument can be further tested with the results of our X-ray spectral fitting. 
{We will now demonstrate analytically that the observed X-ray luminosity of a typical TDE disc will be an exponentially small fraction of the total disc luminosity, and that this fraction further decreases exponentially as a function of time (i.e., as the disc cools one observes a smaller and smaller fraction of the total energy emitted in the X-rays). } The {observed} X-ray luminosity of the accretion disc is given by the integral of the disc spectrum (eq. \ref{MB}) over the X-ray bandpass of the telescope\footnote{This calculation explicitly assumes that the disc remains in a thermal state independent of the disc temperature. This is unlikely to be universally valid: at the very lowest temperatures TDE discs will likely transition into a harder state, where accretion energy is diverted into creating hot coronal electrons. This will increase $L_X$, and $\eta_X$ will subsequently be reduced. }: \begin{equation} L_X = 4\pi D^2\int_{\nu_l}^{\nu_u} F_\nu(\nu, R_p, T_p, \gamma, D) \,\,{\rm d}\nu, \end{equation} where $h\nu_l = 0.3$ keV and $h\nu_u = 10$ keV. Once again, this value corresponds to the disc-frame ``de-absorbed'' luminosity. Combined with the bolometric luminosities of the previous sub-section, we can calculate another key parameter of the TDE system: the bolometric correction $\eta_X$, defined as \begin{equation} \eta_X \equiv L_{\rm bol}/L_X . \end{equation} This bolometric correction quantifies the factor by which the total energy liberated from the accretion disc exceeds that observed at X-ray frequencies. As both the X-ray and bolometric disc luminosities have leading dependencies which scale as $R_p^2$, the bolometric correction depends only on the peak disc temperature $T_p$ and the $\gamma$ parameter\footnote{Formally the bolometric correction also depends on the disc's {outer} radius, but this effect is minimal (of order a few percent for typical TDE black hole masses). }.
An analytical estimate of the bolometric correction can be obtained by approximating \begin{equation} L_{\rm bol} \simeq 4\pi \sigma_{SB} R_p^2 T_p^4, \end{equation} and, using equation \ref{MB} (extending the upper integration limit to $+\infty$ introduces exponentially small corrections): \begin{multline} L_X \simeq {16 \pi^2 R_p^2 \xi_1 h \over c^2 f_{\rm col}^4} \int_{\nu_l}^\infty \nu^3 \\ \left[ \left(\frac{k \T_p}{h \nu} \right)^\gamma + \xi_2\left(\frac{k \T_p}{h \nu} \right)^{1+\gamma} + \xi_3\left(\frac{k \T_p}{h \nu} \right)^{2 + \gamma} \right] \exp\left(- \frac{h\nu}{k \T_p} \right) {\rm d}\nu . \end{multline} The solution to this integral can be written in terms of incomplete $\Gamma$ functions, defined as: \begin{equation} \Gamma(s, z) \equiv \int_z^\infty t^{s-1} \exp(-t) \, {\rm d}t . \end{equation} Explicitly, after defining $t \equiv h\nu/k\T_p$, we have \begin{multline} L_X \simeq {16 \pi^2 R_p^2 \xi_1 h \over c^2 f_{\rm col}^4} \left({k\T_p \over h}\right)^4 \\ \int_{h\nu_l/k\T_p}^\infty \left[ t^{3 - \gamma} + \xi_2 t^{2-\gamma} + \xi_3 t^{1 - \gamma} \right] \exp\left(- t \right) {\rm d}t , \end{multline} with solution \begin{multline} L_X = {16 \pi^2 R_p^2 \xi_1 h \over c^2 f_{\rm col}^4} \left({k\T_p \over h}\right)^4 \\ \left[\Gamma\left(4 - \gamma, {h\nu_l \over k\T_p}\right) + \xi_2 \Gamma\left(3 - \gamma, {h\nu_l \over k\T_p}\right) + \xi_3 \Gamma\left(2 - \gamma, {h\nu_l \over k\T_p} \right) \right]. \end{multline} Therefore \begin{multline}\label{etax} \eta_X \simeq {2\pi^4 \over 15 \xi_1} \Bigg[\Gamma\left(4 - \gamma, {h\nu_l \over k\T_p}\right) + \xi_2 \Gamma\left(3 - \gamma, {h\nu_l \over k\T_p}\right) \\ + \xi_3 \Gamma\left(2 - \gamma, {h\nu_l \over k\T_p} \right) \Bigg]^{-1}. 
\end{multline} The large-$z$ asymptotic behaviour of the incomplete $\Gamma$ function is \begin{equation} \Gamma(s, z\rightarrow \infty) \sim z^{s-1} e^{-z}, \end{equation} and so \begin{equation} \eta_X \simeq {2\pi^4 \over 15 \xi_1} \left({h\nu_l \over k\T_p} \right)^{\gamma-3} \exp\left({h\nu_l \over k\T_p} \right) \simeq 5.93 \, \left({h\nu_l \over k\T_p} \right)^{\gamma-3} \exp\left({h\nu_l \over k\T_p} \right), \end{equation} for low disc temperatures $k\T_p \ll h\nu_l$. As a TDE disc cools with time, the bolometric correction of its light curves will therefore grow exponentially. {This equation demonstrates that because the X-ray portion of a typical TDE disc spectrum is in the Wien-tail, the integrated X-ray energy may only correspond to an {\it exponentially suppressed} fraction of the total disc energy. This results simply from X-ray observations probing only the very highest energy photons emitted from the disc, the number of which decreases exponentially for energies above the peak of the disc spectrum. This suppression factor is, for temperatures $\sim 100$ eV, of order 10 (depending on $\gamma$), and can be as high as 100 for $k\T_p \sim 50$ eV, both typical TDE disc temperatures. A magnitude $\sim 100$ correction is typically what is required to resolve the ``missing energy problem'' of an observed X-ray bright TDE. The extreme limit of the effect discussed here corresponds to optically-bright TDEs which have peak disc temperatures $\sim 25$ eV, and which are completely unobservable at X-ray frequencies. } In Figure \ref{fig:etaX}, we plot the (numerically calculated) bolometric correction of the TDEs in this study against the peak temperature of the disc inferred from each X-ray spectrum. The bolometric correction of the TDEs examined in this study ranges from $\sim2$ up to $\sim 100$, meaning that the radiated energy inferred from a TDE's X-ray light curve can dramatically underestimate its total radiated energy.
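The exponential growth of the bolometric correction with decreasing disc temperature can be checked numerically; a minimal sketch of the asymptotic expression above, taking the quoted prefactor of $5.93$ (the helper name is ours, not from the paper):

```python
import math

def eta_X_asymptotic(kT_p_eV, gamma, h_nu_l_eV=300.0):
    """Asymptotic bolometric correction eta_X = L_bol / L_X, valid for
    k*T_p << h*nu_l. The prefactor 5.93 = 2 pi^4 / (15 xi_1) is taken
    from the equation in the text; h*nu_l = 0.3 keV is the soft X-ray
    bandpass edge."""
    z = h_nu_l_eV / kT_p_eV
    return 5.93 * z**(gamma - 3.0) * math.exp(z)

# The correction is of order 10 at kT_p = 100 eV and grows rapidly as
# the disc cools, consistent with the values quoted in the text.
print(eta_X_asymptotic(100.0, 1.0))  # roughly 13
print(eta_X_asymptotic(50.0, 1.0))   # roughly 66
```

For $\gamma = 3/2$ the correction at $k\T_p \sim 50$ eV exceeds 100, matching the range quoted above.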
Compounding this effect, the bolometric correction of a TDE disc grows exponentially as the accretion disc cools. This can be seen in the growth of the bolometric corrections of ASASSN-14li and AT2019dsg, the best temporally sampled TDEs in our study. Both sources have bolometric corrections which grow by roughly an order of magnitude over the course of many observations. \begin{figure} \centering \includegraphics[width=\linewidth]{BolometricCorrection.pdf} \caption{The X-ray ``bolometric correction'' factor, which relates the observed disc X-ray luminosity to the disc's bolometric luminosity. The bolometric correction of the TDEs examined in this study ranges from $\sim2$ up to $\sim 100$, meaning that the radiated energy inferred from the X-ray light curve will dramatically underestimate the total radiated energy. The grey shaded region represents the analytical approximation of equation \ref{etax}, for $1/2 \leq \gamma \leq 3/2$. Note that the bolometric correction of ASASSN-14li grows exponentially at late times. } \label{fig:etaX} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{example_lcs.pdf} \caption{The evolving X-ray (diamonds) and bolometric (circles) luminosity of the two well-sampled TDEs in our sample: AT2019dsg and ASASSN-14li. It is clear that the bolometric luminosity of a TDE is always significantly larger than its observed X-ray luminosity. At late times this bolometric correction grows exponentially (as is particularly apparent for ASASSN-14li). The black dashed and purple dot-dashed late time evolution profiles are the theoretical predictions of Mummery \& Balbus (2020), see text. } \label{fig:example_lcs} \end{figure} The total radiated energy inferred from AT2019dsg's bolometric light curve was a factor of 10 higher than that inferred from its X-ray light curve, reaching a value $E_{{\rm rad}, {\rm 19dsg}} \simeq 2 \times 10^{50}$ erg $\sim 10 E_{{\rm rad}, {\rm X-ray}}$.
However, our observations only span a temporal baseline of roughly 10 days, and a total radiated energy budget is hard to extrapolate from this data set. For ASASSN-14li, however, we have observations spanning more than 1200 days, and a robust estimate of the total radiated energy can be determined. We find \begin{equation} E_{{\rm rad}, {\rm 14li}} \simeq 1.17 \times 10^{52} \,\, {\rm erg}, \end{equation} which corresponds to an accreted mass \begin{equation} M_{\rm acc} \simeq 0.11 \left({ 0.057 \over \eta}\right) \, M_\odot , \end{equation} where $\eta$ is the mass-to-light efficiency of the accretion process ($\eta = 0.057$ is the value appropriate for thin disc accretion onto a Schwarzschild black hole). An accreted mass of this magnitude is exactly as would be expected from the tidal disruption of a star of mass $M_\star \sim 0.2 M_\odot$, and there is therefore no missing energy. This high value for ASASSN-14li's accreted mass is comparable to the values found from significantly more complex models of ASASSN-14li's evolving light curves, namely the results of Mummery \& Balbus (2020) and Wen {\it et al}. (2020). \begin{table*} \caption{The maximum early time UV/optical black body luminosity of the six TDEs with excellent early time optical and UV data, compared to the peak of the disc luminosity derived in this paper (i.e., the maximum bolometric luminosity of all the epochs studied here). For sources where the peak disc luminosity was significantly later than the optical/UV peak (ASASSN-15oi, OGLE16aaa and AT2019azh), we also display the disc luminosity at a time closest to optical peak. The column $\Delta t$ denotes the time offset between peak optical/UV and disc luminosity measurements (positive indicating the optical/UV precedes the disc measurement).
Four of the six TDEs have early time optical/UV luminosities in excess of their peak disc luminosities, suggesting that the early time UV/optical emission cannot be solely powered by the reprocessing of disc emission. } \centering \begin{tabular}{cccccc} Source & $L_{{\rm BB, max}}$ & $L_{\rm disc, max}$ & $L_{{\rm BB, max}}\big/L_{\rm disc, max}$ & $\Delta t $& Reference \\ & (erg/s) & (erg/s) & & (days) & \\ \hline ASASSN-14li & $1.0 \times 10^{44}$ & $3.1 \times 10^{44}$ & 0.32 & 41 & Holoien et al. 2016a \\ \hline ASASSN-15oi & $1.3 \times 10^{44}$ & $4.8 \times 10^{43}$ & 2.7 & 234 & Holoien et al. 2016b \\ "" & "" & $1.45 \times 10^{43}$ & 9.0 & 76 & "" \\ \hline OGLE16aaa & $2.1 \times 10^{44}$ & $2.1 \times 10^{44}$ & 1.0 & 315 & van Velzen et al. 2021 \\ "" & "" & $5.6 \times 10^{43}$ & 3.75 & 141 & "" \\ \hline AT2018zr & $5.6 \times 10^{43} $ & $3.7 \times 10^{42}$ & 15.1 & 40 & van Velzen et al. 2021 \\ \hline AT2019azh & $2.8 \times 10^{44}$ & $2.7 \times 10^{43}$ & 10.4 & 200 & van Velzen et al. 2021 \\ "" & "" & $9.0\times 10^{42}$ & 31.1 & 30 & "" \\ \hline AT2019dsg & $2.9 \times 10^{44}$ & $4.9 \times 10^{44}$ & 0.59 & 18 & van Velzen et al. 2021\\ \hline \end{tabular} \label{tab:uv} \end{table*} The large TDE bolometric correction can be seen most explicitly in Figure \ref{fig:example_lcs}. Here we plot the bolometric (circles) and X-ray (diamonds) light curves of ASASSN-14li and AT2019dsg. The light curves of ASASSN-14li are the most illuminating, as they span the longest temporal baseline. At late times ($t-t_0 > 100$ days) the bolometric luminosity of ASASSN-14li falls off by a factor $\sim 5$, while its observed X-ray luminosity falls off by a factor $\sim 150$.
As a test that the evolution of both light curves is driven by the cooling of an accretion disc of fixed area, we plot the two asymptotic theoretical light curve models of Mummery \& Balbus (2020), namely \begin{equation} L_{\rm bol} \propto t^{-n}, \end{equation} and \begin{equation} L_X \propto t^{-n/2} \exp\left(- A \left({t + t_0\over t_0}\right)^{n/4} \right) , \end{equation} where \begin{equation} A_{14{\rm li}} \equiv {0.3 {\rm keV} \over k T_{p, 14{\rm li}}} \simeq 5.75 . \end{equation} These expressions are derived under the assumption that the only time dependence inherent in the TDE system is the peak disc temperature cooling according to \begin{equation} T_p \propto t^{-n/4}, \quad n \simeq 0.75 . \end{equation} We see in Fig. \ref{fig:example_lcs} that both theoretical profiles provide an excellent description of the evolution. Clearly, the low values of the observed radiated energy inferred from X-ray observations do not preclude large masses from being accreted onto the central black hole. Instead, the bolometric disc luminosity remains exponentially larger than the X-ray luminosity as the disc cools, and the majority of the energy content is released at far-UV frequencies (where the disc spectrum peaks), which are not observed. \subsection{The early time optical/UV emission} \begin{figure} \centering \includegraphics[width=\linewidth]{DiscAndBBLuminosityVsRadius.pdf} \caption{The maximum disc (circles) and UV/optical black body (diamonds) luminosities of the six TDEs in this sample plotted against the radial size inferred from the X-ray spectrum (error bars may be smaller than the marker sizes). While the disc luminosity appears to correlate with the X-ray emission radius (and therefore black hole mass), the UV/optical luminosity does not. In addition, the UV/optical blackbody luminosity exceeds the disc luminosity for four of the six sources. This suggests that the optical/UV luminosity is not produced by reprocessing of the disc luminosity.
} \label{fig:uvdisc} \end{figure} TDEs are often observed to be extremely bright at optical and UV frequencies at early times. This early time UV/optical emission typically decays away over $\sim$ years (e.g., Holoien et al. 2016a, 2016b), and is not produced by direct emission from an accretion disc. At later times (typically a few hundred days post initial disruption), this early emission has decayed away and the remaining observed optical/UV emission is dominated by {direct} emission from the disc. This direct disc UV emission is characterised by a much slower, near time-independent, evolution (van Velzen et al. 2019, Mummery \& Balbus 2020, Mummery 2021b). The physical origin of this early time emission is contested, with some models assuming that the emission is produced by the reprocessing of accretion disc luminosity by an outflowing photosphere (e.g., Metzger \& Stone 2016, Dai et al. 2018, Nicholl et al. 2020), or by reprocessing from a cooling envelope (Metzger 2022). Other models posit that this emission is produced by shocks between the debris streams during the disc formation process (e.g., Shiokawa et al. 2015, Piran et al. 2015, Bonnerot \& Stone 2021, Bonnerot et al. 2021). One way in which the physical origin of the early time emission can be probed using the results of this study is by comparing the peak luminosity of this early time UV/optical component with the peak accretion disc luminosity inferred from X-ray spectral measurements. A fundamental requirement for any accretion-powered scenario is an ionizing (accretion disc) luminosity that is equal to or larger than the reprocessed (UV/optical) component. This is directly testable with our new results, as we now have estimates of the intrinsic accretion disc luminosity of each TDE source.
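This energy-budget test amounts to a simple ratio comparison; a minimal sketch using the peak luminosity values collated in Table \ref{tab:uv}:

```python
# Peak early-time UV/optical blackbody and peak disc luminosities (erg/s),
# transcribed from Table "tab:uv" of the text.
L_bb = {"ASASSN-14li": 1.0e44, "ASASSN-15oi": 1.3e44, "OGLE16aaa": 2.1e44,
        "AT2018zr": 5.6e43, "AT2019azh": 2.8e44, "AT2019dsg": 2.9e44}
L_disc = {"ASASSN-14li": 3.1e44, "ASASSN-15oi": 4.8e43, "OGLE16aaa": 2.1e44,
          "AT2018zr": 3.7e42, "AT2019azh": 2.7e43, "AT2019dsg": 4.9e44}

# Sources whose early UV/optical output meets or exceeds the peak disc
# luminosity cannot be powered purely by reprocessed disc emission.
excess = [s for s in L_bb if L_bb[s] / L_disc[s] >= 1.0]
print(sorted(excess))  # four of the six sources
```

This reproduces the conclusion drawn from the table: four of the six TDEs require an additional energy source at early times.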
In Table \ref{tab:uv} we collate the peak bolometric disc luminosity and the peak `black body' luminosity found from the early time optical/UV emission of the six sources with excellent early time optical/UV coverage in the literature. This optical/UV luminosity value results from fitting a single temperature blackbody to the early time optical emission (which usually provides an acceptable fit to the observations), which is then integrated over a broad range of frequencies (typically corresponding to wavelengths of 0.03--3 microns) to produce a luminosity. For sources where the peak disc luminosity was significantly later than the optical/UV peak (ASASSN-15oi, OGLE16aaa and AT2019azh), we also display the disc luminosity at a time closest to optical peak. The column $\Delta t$ denotes the time offset between peak optical/UV and disc luminosity measurements. From the comparison in Table \ref{tab:uv}, we see that four of the six TDEs have early time optical/UV `blackbody' luminosities in excess of their peak disc luminosities, suggesting that the early time UV/optical emission cannot be solely powered by the reprocessing of disc emission, and that some additional source of energy input is required. The analysis in Table \ref{tab:uv} is likely to be somewhat conservative, as clearly not all of the accretion disc luminosity can power the early optical/UV emission: these sources are detected at X-ray energies. We reiterate that the column density of neutral intervening material found for the sources in our sample is low ($n_H \ll 10^{22}$ cm${}^{-2}$), and so it is unlikely that this effect can be explained by our observations ``missing'' some of the intrinsic disc luminosity. There are other, more circumstantial, lines of reasoning that suggest that the early optical/UV luminosity is not sourced by reprocessed disc emission. The evolutionary properties of the disc and early emission are often substantially different.
ASASSN-14li has, for example, a bolometric disc luminosity that varies by a factor $\sim 5$ over $\sim 1000$ days (Fig. \ref{fig:example_lcs}), while its optical/UV luminosity displays a much more pronounced decay (Holoien et al. 2016a). In addition, the early time optical/UV luminosity of this sample does not correlate with the radius inferred from the X-ray spectral fit (i.e., the source's black hole mass; although see Hammerstein et al. (2022) for evidence of a correlation between peak blackbody luminosity and host galaxy mass), while the peak bolometric disc luminosity does (Figs. \ref{fig:bolmsig}, \ref{fig:bolrad}, \ref{fig:uvdisc}). It seems unlikely that the observed reprocessed accretion luminosity would have fundamentally different scaling properties than the accretion luminosity it is sourced from. It therefore seems likely that shocks between the debris streams are important in powering at least some of the early time optical/UV emission, for at least some TDEs. \section{Conclusions}\label{conc} In this paper we have analysed a uniform sample of TDEs observed at X-ray energies, with a new X-ray spectral disc model with physically interpretable parameters. Of our initial sample of 19 X-ray bright TDEs, 11 inhabit a region of parameter space where this model is valid and we had sufficient data for a detailed analysis of their properties. Our key results are the following:\\ (1) The bolometric luminosity of these TDE accretion discs is limited by the Eddington luminosity of their host black holes. A strong linear correlation between peak bolometric luminosity and $M_{\rm BH}-\sigma$ mass is found, indicating that thermal X-ray bright TDE discs form at near-universal Eddington luminosity ratios. Quantitatively, this Eddington luminosity ratio was found to be $f_{\rm edd} \sim 0.4^{+0.5}_{-0.2}$.
(2) This correlation cannot be explained by a systematic increase in the neutral absorbing column of low black hole mass TDEs (as may be expected if lower mass BHs have highly super-Eddington accretion rates, driving strong accretion disc winds), as the column depths of the sources in our sample do not correlate with black hole mass. We find low levels of neutral intervening material ($n_H \ll 10^{22}$ cm${}^{-2}$) for all TDEs of our sample. (3) This correlation is robust, and does not depend on the use of the $M_{\rm BH}-\sigma$ relationship in computing the TDEs' black hole masses. The radii inferred from the X-ray spectra of our TDE sample lie in the parameter space expected for the ISCO radii of black hole masses corresponding to the observed galactic velocity dispersion (Figs. \ref{fig:fig2}, \ref{fig:fig1}), and can be used to infer an independent black hole mass measurement for each TDE. Using the mass measurements inferred from the TDEs' X-ray radii we again find a linear correlation between bolometric disc luminosity and black hole mass, and disc Eddington luminosity ratios of $\sim 10$\% (Fig. \ref{fig:edratRP}, Section 5.3). (4) We demonstrate how the small radiated energies inferred from TDE X-ray light curves can be understood through the large bolometric corrections inherent to their spectra. As TDE discs cool, the bolometric luminosity of their discs remains exponentially higher than their observed X-ray luminosities, and the X-ray-to-bolometric conversion factor can reach $\sim 100$. The bolometric radiated energy of ASASSN-14li, the best observed source in our sample, is $\sim 1 \times 10^{52}$ erg ($M_{\rm acc} \sim 0.1 M_\odot$), meaning that it has no ``missing'' energy. (5) We demonstrate that the early time optical and UV luminosities of many of the sources in our sample exceed their bolometric disc luminosities by a significant margin.
This suggests that additional energy sources must be present at early times, and that the early optical emission seen from some TDE sources cannot be explained by the reprocessing of accretion disc emission.\\ It is important to remember that our conclusions are the result of an analysis of TDEs observed to have bright, thermal, X-ray spectra, and this must be kept in mind when generalising these conclusions to the entire TDE population. This work does however provide strong evidence that thermal X-ray TDEs behave as ``scaled up'' analogues of Galactic X-ray binaries in the soft state, and that at least some black hole accretion states are black hole mass independent. The large TDE samples expected to be uncovered with all-sky X-ray surveys such as eROSITA provide a unique opportunity to probe the occupation fraction and properties of black holes at the low mass end of the galaxy/SMBH mass function. In the near future, the bottleneck in constraining SMBH demographics with TDEs will shift from the sample size to a lack of adequate follow-up observations. Here we have shown that with only a few hundred soft X-ray photons (readily available in most X-ray observations), it is possible to estimate the SMBH mass following a TDE. This will provide a complementary avenue to TDE optical light curve models, many of which are not self-consistent and/or incorporate phenomenological physics to make up for our lack of understanding of the UV/optical emission mechanism. \section*{Data availability statement} The data used in this manuscript will be made available on Zenodo post publication. The Zenodo doi is 10.5281/zenodo.7533374. An XSPEC-ready implementation of the disc fitting function is available at this url: \href{https://www.github.com/andymummeryastro/TDEdiscXraySpectrum}{github.com/andymummeryastro/TDEdiscXraySpectrum}. \section*{Acknowledgements} This work was supported by a Leverhulme Trust International Professorship grant [number LIP-202-014].
For the purpose of Open Access, AM has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. TW warmly thanks the Space Telescope Science Institute for its hospitality during the completion of this work. Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile (PI: Pasham). We acknowledge the use of public data from the Swift data archive.
\section{Concluding Remarks} In order to generate a suitable and excellent land-use configuration solution objectively and reduce the heavy burden of urban planning specialists, we proposed an automatic land-use configuration planner framework. This framework generates the land-use solution based on the context embedding of a virgin area. Specifically, we first obtained the residential community and its context based on the latitude and longitude of residential areas. We then extracted the explicit features of the context from three aspects: (1) value-added space; (2) POI distribution; (3) traffic condition. Afterward, we mapped the explicit feature vectors to the geographical spatial graph as the attributes of the corresponding nodes. Next, we utilized a graph embedding technique to fuse all explicit features and spatial relations in the context together to obtain the context embedding. Then we distinguished excellent and terrible land-use configuration plans based on expert knowledge. Finally, the context embedding and the excellent and terrible plans were input into our LUCGAN to learn the distribution of excellent plans. Once the model converges, LUCGAN can generate a suitable and excellent land-use configuration solution based on the context embedding. Ultimately, we conducted extensive experiments to demonstrate the effectiveness of our automatic planner. \section{Experiment Results} In this section, we conduct extensive experiments and case studies to answer the following questions: \begin{enumerate} \item Does our proposed automatic planner (LUCGAN) outperform the baseline methods? \item What is the difference between the contexts of excellent land-use configuration plans and terrible plans? \item What do generated land-use configuration plans look like? \item What proportion does each POI category occupy in generated plans? \item What do the generated distributions of different POI categories look like in a generated plan?
\end{enumerate} \subsection{Data Description} We use the following datasets for our study: \begin{enumerate} \item \textbf{Residential Community:} The residential community dataset contains 2990 residential communities in Beijing, where each residential community is associated with its latitude and longitude. \item \textbf{POI:} The Beijing POI dataset includes 328668 POI records from 2011, where each POI item includes its latitude, longitude and category. The POI information is shown in Table \ref{poi_lists}. \item \textbf{Taxi Trajectories:} The taxi trajectories dataset is collected from a Beijing taxi company, where each record contains trip ID, distance (m), travel time (s), average speed (km/h), pick-up and drop-off time, and pick-up and drop-off point. \item \textbf{Public Transportation:} This dataset logs the transactions of buses in Beijing between 2012 and 2013. It contains 1734247 bus trips across 718 bus lines. We use this dataset to obtain the public transportation situation. \item \textbf{House Price:} The house price dataset includes five consecutive months of house price data for each residential community in Beijing between 2011 and 2012, collected from the Soufang website. \item \textbf{Check-In:} This dataset contains the Weibo check-in records in Beijing between 2011 and 2013, where each check-in item has its longitude, latitude, check-in time and check-in place. We utilize this dataset to analyze the vibrancy of an area.
\end{enumerate} \begin{table}[htbp] \vspace{-0.4cm} \small \centering \setlength{\abovecaptionskip}{0.cm} \caption{POI category list} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{cccc} \toprule code & POI category & code & POI category \\ \midrule 0 & road & 10 & tourist attraction \\ 1 & car service & 11 & real estate \\ 2 & car repair & 12 & government place \\ 3 & motorbike service & 13 & education \\ 4 & food service & 14 & transportation \\ 5 & shopping & 15 & finance\\ 6 & daily life service & 16 & company\\ 7 & recreation service & 17 & road furniture\\ 8 & medical service & 18 & specific address \\ 9 & lodging & 19 & public service\\ \bottomrule \end{tabular}} \label{poi_lists} \vspace{-0.4cm} \end{table} \subsection{Evaluation Metrics} Because evaluating the quality of an urban land-use configuration is an open question, there is no standard measurement. In this paper, we evaluate the quality of generated planning solutions from multiple aspects to demonstrate the effectiveness of our framework: (1) \textbf{Scoring Model}. We build a random forest model based on the excellent and terrible land-use configuration plans. The model gives higher scores to excellent land-use configuration plans and lower scores to terrible plans. Given the generated land-use configuration solutions, the scoring model can be utilized to quantify their quality. (2) \textbf{Visualization}. In order to explore the generated solutions, we select one representative sample to visualize from multiple aspects. We can observe the solutions directly in this way, which is helpful for learning the differences between our planner and the baselines. \subsection{Baseline Methods} We compare the performance of our framework (LUCGAN) against the following three baseline methods: \begin{enumerate} \item \textbf{VAE}: an encoder-decoder paradigm algorithm.
The encoder encodes image data into a latent embedding; the decoder decodes the embedding back into the original data. In this experiment, we input excellent land-use configurations into the VAE to learn the distribution of excellent solutions by minimizing the reconstruction loss. Then, once the VAE converges, we utilize the decoder to generate a solution based on the context environment embedding. \item \textbf{AVG}: generates the land-use configuration by calculating the mean value of all excellent land-use plans, which reflects the average level of all excellent samples. However, this method cannot provide a customized solution based on different context environments. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{image/overall.pdf} \vspace{-0.3cm} \caption{The quality score for different generation methods.} \label{score_model} \vspace{-0.8cm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{image/context_visual.png} \vspace{-0.6cm} \caption{Visualization for different contexts.} \label{emb_context} \vspace{-0.55cm} \end{figure} \item \textbf{MAX}: generates the land-use configuration solutions by applying a max operation over all excellent land-use plans. The result of this method reflects the most dominant POI categories in each geographical block. As with AVG, MAX also cannot generate a customized solution based on different context environments. \end{enumerate} We conduct all experiments on an x64 machine with an Intel i9-9920X 3.50GHz CPU, 128GB RAM and Ubuntu 18.04.
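The random-forest scoring model described in the evaluation metrics can be sketched as follows. This is a minimal illustration only: the grid size, the flattening scheme, and the random stand-in plans are hypothetical, whereas real plans would come from the land-use dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_blocks, n_categories = 100, 20      # hypothetical grid size and POI categories
dim = n_blocks * n_categories

# Stand-ins for flattened land-use configurations: "excellent" plans are
# shifted so the two classes are separable for this illustration.
good_plans = rng.random((50, dim)) + 0.5
bad_plans = rng.random((50, dim))
X = np.vstack([good_plans, bad_plans])
y = np.array([1] * 50 + [0] * 50)     # 1 = excellent, 0 = terrible

scorer = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Quality score of a (generated) plan = predicted probability of the
# "excellent" class; higher is better.
score = scorer.predict_proba(good_plans[:1])[0, 1]
```

Generated solutions from LUCGAN and the baselines can then be ranked by passing their flattened configurations through `scorer.predict_proba`.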
\begin{figure*}[!thb] \setlength{\abovecaptionskip}{-2pt} \centering \subfigure[LUCGAN]{\label{fig:LUCGAN}\includegraphics[width=0.33\linewidth]{{image/poi_gan_visual.pdf}}} \subfigure[VAE]{\label{fig:vae}\includegraphics[width=0.33\linewidth]{{image/poi_vae_visual.pdf}}} \subfigure[MAX]{\label{fig:max}\includegraphics[width=0.33\linewidth]{{image/poi_max_visual.pdf}}} \caption{Comparison of land-use configuration solutions produced by different generation methods.} \label{fig:generated_solution} \vspace{-0.5cm} \end{figure*} \begin{figure*}[!thb] \setlength{\abovecaptionskip}{-2pt} \centering \subfigure[LUCGAN]{\label{fig:LUCGAN_poi}\includegraphics[width=0.33\linewidth]{{image/gan_result.pdf}}} \subfigure[VAE]{\label{fig:vae_poi}\includegraphics[width=0.33\linewidth]{{image/vae_result.pdf}}} \subfigure[MAX]{\label{fig:max_poi}\includegraphics[width=0.33\linewidth]{{image/max_result.pdf}}} \caption{Comparison of the proportion of each POI category in different generated solutions.} \label{fig:category_result} \vspace{-0.5cm} \end{figure*} \subsection{Overall Performance} Figure \ref{score_model} shows the quality scores produced by the scoring model for the different generation methods. An interesting phenomenon is that the MAX method ranks first among all methods. A possible explanation is that the scoring model only captures the distribution of the original excellent plans. The MAX method incorporates all excellent plans via a max operation, so the generated solution reflects the dominant POI categories of each geographical block, which is also inherent in the original data distribution. Therefore, the scoring model gives the highest score to the MAX method. Although the MAX method ranks first, this does not indicate that the MAX method is better than LUCGAN, because the MAX method produces only one kind of planning solution no matter what the context environment is.
However, LUCGAN can customize solutions based on different context environment embeddings. In addition, the score of LUCGAN is also high, which indicates that LUCGAN captures the intrinsic structure of the excellent plan distribution. LUCGAN is therefore an effective and flexible generation method for land-use configurations. \subsection{Study of the Context Environment} Intuitively, the context environment is vital for generating the land-use configuration: different context environments should produce different kinds of land-use configuration solutions. In order to explore the difference between the context environments of excellent plans and bad plans, we apply the t-SNE algorithm to the context environment embeddings. We randomly choose 500 good and 500 bad context embeddings to visualize. Figure \ref{emb_context} shows the visualization result for the different kinds of context environments. We find that the pattern of good contexts and the pattern of bad contexts are clearly distinguishable, which confirms that it is reasonable to exploit the context environment for generating land-use configuration solutions. \subsection{Study of the Geographical Distribution Generated by Different Approaches} In order to observe the generated land-use configuration solutions clearly, we pick a representative solution to visualize. Since the generated solution has multiple channels and each channel contains many blocks, we merge the channels of the solution into one by setting the dominant POI category as the final result for each geographical block. The merged solution reflects the POI distribution in geographical space. Figure \ref{fig:generated_solution} shows the visualization result. Different color blocks represent different POI categories. The solution generated by LUCGAN is regular and organized, with different POI categories intersecting with each other. In contrast, the distributions of the solutions generated by VAE and MAX are so chaotic that no clear patterns emerge.
This experiment indicates the superior effectiveness of LUCGAN compared with the other baseline methods. \begin{figure*}[!thb] \setlength{\abovecaptionskip}{-0pt} \centering \subfigure[car service]{\label{fig:channel1}\includegraphics[width=0.24\linewidth]{{image/channel1.png}}} \subfigure[food service]{\label{fig:channel4}\includegraphics[width=0.24\linewidth]{{image/channel4.png}}} \subfigure[daily life service]{\label{fig:channel6}\includegraphics[width=0.24\linewidth]{{image/channel6.png}}} \subfigure[recreation service]{\label{fig:channel7}\includegraphics[width=0.24\linewidth]{{image/channel7.png}}} \vspace{-0.3cm} \subfigure[tourist attraction]{\label{fig:channel10}\includegraphics[width=0.24\linewidth]{{image/channel10.png}}} \subfigure[real estate]{\label{fig:channel11}\includegraphics[width=0.24\linewidth]{{image/channel11.png}}} \subfigure[government place]{\label{fig:channel12}\includegraphics[width=0.24\linewidth]{{image/channel12.png}}} \subfigure[education]{\label{fig:channel13}\includegraphics[width=0.24\linewidth]{{image/channel13.png}}} \vspace{-0.3cm} \subfigure[transportation]{\label{fig:channel14}\includegraphics[width=0.24\linewidth]{{image/channel14.png}}} \subfigure[specific address]{\label{fig:channel18}\includegraphics[width=0.24\linewidth]{{image/channel18.png}}} \subfigure[finance]{\label{fig:channel15}\includegraphics[width=0.24\linewidth]{{image/channel15.png}}} \subfigure[company]{\label{fig:channel16}\includegraphics[width=0.24\linewidth]{{image/channel16.png}}} \vspace{-0.1cm} \caption{Visualization for different POI categories of one generated solution.} \label{fig:visual_solution} \vspace{-0.6cm} \end{figure*} \subsection{Study of The POI Proportion Generated by Different Approaches} In this experiment, our purpose is to generate a vibrant residential community that contains diverse POI categories and abundant economic activity.
After obtaining the generated solutions from the different generation methods, we count the number of POIs in each category for each solution and visualize the proportion of each POI category. Figure \ref{fig:category_result} compares the different generation methods. In Figure \ref{fig:LUCGAN_poi}, we observe that the solution generated by LUCGAN contains all POI categories, and categories 4 (food service), 5 (shopping), 6 (daily life service), and 7 (recreation service) occupy a large proportion. These four POI types are all closely related to economic activity, so the solutions generated by LUCGAN satisfy our design scheme. Figure \ref{fig:vae_poi} shows the POI proportions in the VAE result: categories 1 (car service) and 17 (road furniture) are missing, so the POI diversity of VAE is incomplete. Figure \ref{fig:max_poi} shows the POI proportions in the MAX result: every POI category occupies a nearly equal proportion, which indicates that the MAX method is rigid; its flexibility is poorer than that of VAE and LUCGAN. \subsection{Study of The Generation for Each POI Category} We aim to examine the plausibility of the generated configuration for each POI category. To that end, we visualize the POI distribution for each category. Due to the page limitation, we randomly select 12 POI categories of one generated solution for visualization. In Figure \ref{fig:visual_solution}, a darker colored block represents a larger POI count in that block. An interesting observation is that the POI distributions of different categories show their own unique patterns. For example, transportation spots are more concentrated, while food-service-related POIs are more dispersed across the area; the distribution of car service spots is very similar to that of recreation service spots, possibly because recreation venues occupy many parking lots, which attract car services.
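The per-category proportion computation above can be sketched as follows (a toy illustration; the function name and tensor shapes are assumptions, not the authors' code):

```python
import numpy as np

def poi_proportions(config):
    # config: (n, n, m) configuration tensor of POI counts;
    # sum over the spatial grid to get the total count per category,
    # then normalize to proportions.
    counts = config.sum(axis=(0, 1))
    return counts / counts.sum()

config = np.zeros((2, 2, 4))
config[..., 0] = 1     # four POIs of category 0 (one per block)
config[0, 0, 2] = 4    # four POIs of category 2 in one block
p = poi_proportions(config)   # proportions over the 4 categories
```

A category with proportion zero (like categories 1 and 3 here) corresponds to a missing POI type, which is exactly the diversity gap observed for the VAE baseline.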
To sum up, the above experiments show that LUCGAN can effectively and flexibly generate land-use configuration solutions based on the context environment embedding. \section{Introduction} Urban planning is an interdisciplinary and complex process that involves public policy, social science, engineering, architecture, landscape, and other related fields. In this paper, we use urban planning to refer to the effort of designing land-use configurations, a reduced yet essential task of urban planning. Effective urban planning can help mitigate the operational and social vulnerabilities of an urban system, such as high taxes, crime, traffic congestion and accidents, pollution, depression, and anxiety. Due to the high complexity of urban systems, such planning tasks are mostly completed by professional planners; however, human planning is time-consuming. The recent advances in deep learning, particularly deep adversarial learning, provide great potential for teaching a machine to imagine and create. This observation motivates us to rethink urban planning in the era of artificial intelligence: What role can deep learning play in urban planning? Can machines learn to compute land-use configurations automatically and quickly, at a human level of capability? If so, machines can serve as planning assistants, and human planners can finally adjust machine-generated plans for specific needs. All of the above evidence shows that it is appealing to develop a data-driven, AI-enabled automated urban planner. However, three unique challenges arise: (1) How can we quantify a land-use configuration plan? (2) How can we develop a machine learning framework that can learn the good and the bad of existing urban communities in terms of land-use configuration policies? (3) How can we evaluate the quality of generated land-use configurations? Next, we introduce our research insights and solutions to address these challenges.
First, since we aim to teach a machine to reimagine the land-use configuration of an area, it is critical to define a machine-perceivable structure for a land-use configuration. In practice, the land-use configuration plan of a given area is visually defined by a set of Points of Interest (POIs) along with their locations (e.g., latitudes and longitudes) and urban functionality categories (e.g., shopping, banks, education, entertainment, residential). A closer look at such a visually perceived land-use configuration reveals that it is indeed a high-dimensional indicator of what we should put into an unplanned area, and where. A land-use configuration exhibits not just location-location statistical autocorrelation but also location-functionality statistical correlation. To capture such statistical correlations, we propose to represent a land-use configuration plan as a latitude-longitude-channel tensor, where each channel is a specific category of POIs distributed across the unplanned area, and the value of an entry in the tensor is the number of POIs. In this way, the tensor describes not just the location-location interactions of POIs but also their location-function interactions. Second, after defining the quantitative version of a land-use configuration, our next question is how to teach a machine to automatically generate one. We analyze large-scale urban residential community data and make an important observation: 1) an urban community can be viewed as an attributed node in a socioeconomic network (the city), and this node proactively interacts with surrounding nodes (environments); 2) the coupling, interaction, and coordination between a community and its surrounding environments significantly influence the livability, vibrancy, and quality of the community.
Based on this observation, we propose to convert the land-use configuration planning problem into a new objective: to teach a machine to generate a land-use configuration tensor given the surrounding context/area. In other words, the problem is reduced to learning a conditional probability function that maps a surrounding context representation to a well-planned configuration tensor, rather than a poorly planned one. The recently emerging deep adversarial learning paradigm provides great potential for addressing this reduced objective. We reformulate the task as adversarial learning, in which: 1) a neural generator acts as a machine planner that generates a land-use configuration; 2) the generator conditions on a pattern feature representation of the surrounding spatial contexts; 3) the surrounding context representation is learned via self-supervised representation learning over spatial graphs; 4) a neural discriminator classifies whether a generated land-use configuration is well-planned (positive) or poorly planned (negative); 5) a new mini-max loss function guides the generator to learn from the strengths of well-planned areas and the weaknesses of poorly planned areas. Third, it is traditionally a very challenging open question to evaluate the quality of a generated land-use configuration. The most solid and sound validation would be to collaborate with urban developers and city governments to implement an AI-generated configuration in an unplanned area and observe the development of the area over the following years. However, this is not realistic. In this paper, we exploit two strategies to assess the generated configurations: 1) we build a scoring model that outputs the score of a configuration by learning from training data; specifically, we utilize a machine learning model to learn the data distribution of the original land-use configuration samples.
Thus, after we obtain the generated solutions, the scoring model can assign each one a score. 2) We invite experienced regional planning experts to evaluate the quality of the generated solutions; in this paper, we have human experts perform analyses on multiple case studies. In summary, we develop an adversarial learning framework to generate effective land-use configurations by learning from urban geography, human mobility, and socioeconomic data. Specifically, our contributions are: \iffalse 1) We reformulate the automated urban planning task as a generative learning process to generate planning solutions based on surrounding spatial contexts. 2) We quantify the urban planning solution as a latitude-longitude-channel tensor by considering planning as a reconfiguration process of POIs, where each channel is a specific category of POIs. 3) We develop two quantitative measurements, including a scoring model and expert visual evaluation to evaluate the quality of urban planning solutions. 4) We conduct extensive experiment and visualization to show the effectiveness of our proposed framework. \fi 1) We develop a latitude-longitude-channel tensor to quantify a land-use configuration plan. 2) We propose a socioeconomic interaction perspective that understands urban planning as a process of optimizing the coupling between a community and its surrounding environments. 3) We reformulate the automated urban planning problem into an adversarial learning framework in which a machine generator maps surrounding spatial contexts to a configuration tensor. 4) Although evaluation is challenging, we conduct extensive experiments and visualizations with real-world data from multiple aspects to show the value of our method. \section*{Acknowledgment} This research was partially supported by the National Science Foundation (NSF) via the grant numbers: 1755946, I2040950, 2006889.
\bibliographystyle{ACM-Reference-Format} \section{Automatic planner for land-use configuration} In this section, we first introduce how to quantify the surrounding context/area. Then, we detail how we measure the quality of land-use configuration solutions. Finally, we describe how to train a generative model that serves as an automated urban planner. \subsection{Explicit Feature Extraction for Context Environments} The land-use configuration solution of an unplanned area has a strong relationship with its contexts. For example, if there are many commercial zones in the contexts of the unplanned area, we should avoid placing redundant POIs of the same category in the plan, so that the unplanned area acquires functions different from its contexts; such complementarity benefits the development of, and interaction between, the virgin area and its contexts. Thus, we capture the intrinsic characteristics of the contexts by extracting multiple explicit features. There are many possible indicators of context environments; here, we select four views to capture the features of the contexts: \begin{enumerate} \item \textbf{Value-added Space.} Commonly, the variation of housing prices reflects the value-added potential of an area. Thus, we calculate the changing trend of the housing prices of the contexts $[C_1 \sim C_8]$ over six consecutive months. Here, we take the context $C_1$ as an example to explain the calculation. First, we obtain the housing price list over $t$ months. Then, we compute the changing trend by subtracting each previous month's price from the current month's price, obtaining the trend of $C_1$ as $\mathbf{v}_1=[v_1^1,v^2_1,...,v^{t-1}_1]$, where $v_1^i$ is the value of the changing trend at the $i$-th month. Finally, we collect the housing price trends of all contexts together.
The collected result is denoted as $\mathbf{V} = [\mathbf{v}_1,\mathbf{v}_2,...,\mathbf{v}_8]$, where the matrix $\mathbf{V} \in \mathbb{R}^{8 \times (t-1)}$. \item \textbf{POI Ratio.} Since various POIs provide diverse services to residents, the ratio of different types of POIs is a good indicator of the functions of an area. Therefore, we calculate the POI ratio of the contexts $[C_1 \sim C_8]$. Taking $C_1$ as an example: first, we sum the counts of each POI category to form a feature vector; then, we divide each item in the feature vector by the total number of POIs across all categories. We obtain the POI ratio of $C_1$, denoted by $\mathbf{r}_1 = [r_1^1,r^2_1,...,r^m_1]$, where $r_1^i$ is the ratio of the $i$-th POI category in $C_1$ and $m$ is the total number of POI categories. Finally, we collect the POI ratios of all contexts together, denoted as $\mathbf{R} = [\mathbf{r}_1,\mathbf{r}_2,...,\mathbf{r}_8]$, where the matrix $\mathbf{R} \in \mathbb{R}^{8\times m}$. \item \textbf{Public Transportation.} Public transportation is a popular travel mode owing to its convenience and low cost, so it is a vital factor for describing human mobility patterns. Thus, we extract features related to public transportation to describe the public traffic situation of the contexts $C_1 \sim C_8$. We take $C_1$ as an example to show the calculation details.
We calculate the public transportation feature vector from five perspectives: (1) the leaving volume of $C_1$ in one day, denoted by $o_1^1$; (2) the arriving volume of $C_1$ in one day, denoted by $o_1^2$; (3) the transition volume of $C_1$ in one day, denoted by $o_1^3$; (4) the density of bus stops in $C_1$, denoted by $o_1^4$, i.e., the number of bus stops per square meter; (5) the average smart card balance in $C_1$, denoted by $o_1^5$, which reflects people's economic expenditure on travel. The public transportation feature vector of $C_1$ is then $\mathbf{o}_1 = [o_1^1,o^2_1,...,o^5_1]$. Finally, we collect the public transportation feature vectors of all contexts together, denoted as $\mathbf{O} = [\mathbf{o}_1,\mathbf{o}_2,...,\mathbf{o}_8]$, where the matrix $\mathbf{O} \in \mathbb{R}^{8 \times 5}$. \item \textbf{Private Transportation.} Taxis are another important means of travel, and taxi trajectory data reflect people's flow and the traffic congestion of an area. Thus, we extract features of the private transportation condition of the contexts $[C_1 \sim C_8]$. Taking $C_1$ as an example, we compute private transportation features from the following five perspectives: (1) the leaving volume of $C_1$ in one day, denoted by $u_1^1$; (2) the arriving volume of $C_1$ in one day, denoted by $u_1^2$; (3) the transition volume of $C_1$ in one day, denoted by $u_1^3$; (4) the average taxi driving velocity per hour in $C_1$, denoted by $u_1^4$; (5) the average taxi commute distance in $C_1$, denoted by $u_1^5$. The private transportation feature vector of $C_1$ is then $\mathbf{u}_1 = [u_1^1,u^2_1,...,u^5_1]$. Ultimately, we collect the private transportation feature vectors of all contexts together.
The collected result is denoted as $\mathbf{U}=[\mathbf{u}_1,\mathbf{u}_2,...,\mathbf{u}_8]$, where the matrix $\mathbf{U} \in \mathbb{R}^{8 \times 5}$. \end{enumerate} We thus obtain the explicit feature set of the contexts $C_1 \sim C_8$. The set contains four kinds of features, $[\mathbf{V},\mathbf{R},\mathbf{O},\mathbf{U}]$, which describe the context environments from four perspectives. \subsection{Explicit Features as Node Attributes: Constructing the Spatial Attributed Graph} The context environments wrap around the residential community from different directions, resulting in spatial correlations among areas. This phenomenon motivates us to exploit spatial graphs to capture such correlations. Specifically, Figure \ref{spatial_graph} shows the graph structure, in which the blue nodes represent the contexts, the orange node is the residential community, and an edge between two nodes reflects their connectivity. \begin{figure}[htbp] \centering \vspace{-0.3cm} \includegraphics[width=0.45\linewidth]{image/spatial_graph.pdf} \vspace{-0.1cm} \caption{The graph structure between the residential community and its surrounding spatial contexts. } \vspace{-0.4cm} \label{spatial_graph} \end{figure} Then, to fuse the spatial relationships and the explicit features of the contexts, we construct a spatial attributed graph: formally, we attach each context's explicit feature vector to the corresponding node of the spatial graph as its node attributes. Figure \ref{graph_feature} illustrates the construction of the spatial attributed graph, which contains not only the explicit feature vectors of the contexts but also the spatial relations among them.
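As a minimal sketch, two of the explicit context features above, the house-price changing trend ($\mathbf{v}_1$) and the POI ratio ($\mathbf{r}_1$), might be computed as follows; the function names and toy inputs are hypothetical, not the authors' code:

```python
import numpy as np

def price_trend(prices):
    # Changing trend: current month's price minus the previous month's,
    # turning t monthly prices into a length-(t-1) trend vector.
    prices = np.asarray(prices, dtype=float)
    return prices[1:] - prices[:-1]

def poi_ratio(counts):
    # Divide each category count by the total number of POIs in the context.
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

v1 = price_trend([100, 103, 101, 106])   # trend over 4 months
r1 = poi_ratio([2, 0, 6])                # ratios over 3 POI categories
```

Stacking such per-context vectors for all eight contexts yields the matrices $\mathbf{V}$ and $\mathbf{R}$ described above.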
\begin{figure}[htbp] \centering \vspace{-0.3cm} \includegraphics[width=\linewidth]{image/graph_feature.pdf} \vspace{-0.1cm} \caption{The illustration of constructing spatial attributed graphs: Each feature vector is mapped to the corresponding nodes by a column-wise strategy.} \vspace{-0.5cm} \label{graph_feature} \end{figure} \subsection{Learning Representation of the Spatial Attributed Graph} Figure \ref{gae_feature} shows the spatial representation learning framework we develop to preserve and fuse the explicit features and the spatial relationships of the contexts. Formally, we denote the spatial attributed graph $G$ by $G=(\mathbf{X},\mathbf{A})$, where $\mathbf{A}$ is the adjacency matrix that expresses the accessibility among different nodes, and $\mathbf{X}$ is the feature matrix of the graph; here, $\mathbf{X}=[\mathbf{V},\mathbf{R},\mathbf{O},\mathbf{U}]$, concatenated along the feature dimension. To obtain the latent graph embedding $\mathbf{z}$, we minimize the reconstruction loss between the original graph $G$ and the reconstructed graph $\widehat{G}$ via an encoding-decoding framework. The encoder has two Graph Convolutional Network (GCN) layers. The first GCN layer takes $\mathbf{X}$ and $\mathbf{A}$ as input and outputs the feature matrix $\widehat{\mathbf{X}}$ in a low-dimensional space. Thus, the encoding process can be formulated as: \begin{equation} \widehat{\mathbf{X}} = GCN_1(\mathbf{X},\mathbf{A})=\mathrm{ReLU}(\widehat{\mathbf{D}}^{-\frac{1}{2}}\mathbf{A}\widehat{\mathbf{D}}^{-\frac{1}{2}}\mathbf{XW}_{1}) \end{equation} where $\widehat{\mathbf{D}}$ is the diagonal degree matrix, $\mathbf{W}_1$ is the weight matrix of $GCN_1$, and the layer is activated by the ReLU function. Since the latent embedding $\mathbf{z}$ is sampled from a prior normal distribution, the second GCN layer is responsible for estimating the parameters of that distribution.
Formally, the second GCN layer takes $\widehat{\mathbf{X}}$ and $\mathbf{A}$ as input and outputs the mean $\bm{\mu}$ and the variance $\bm{\delta}^2$. The calculation of the second GCN layer can thus be formulated as: \begin{equation} \bm{\mu},\log(\bm{\delta}^2) = GCN_2(\widehat{\mathbf{X}},\mathbf{A}) = \widehat{\mathbf{D}}^{-\frac{1}{2}}\mathbf{A} \widehat{\mathbf{D}}^{-\frac{1}{2}}\widehat{\mathbf{X}}\mathbf{W}_2 \end{equation} where $\mathbf{W}_2$ is the weight matrix of $GCN_2$. Next, we use the reparameterization trick to approximate the sampling operation and obtain the latent representation $\mathbf{z}$: \begin{equation} \mathbf{z}=\bm{\mu}+\bm{\delta} \times \epsilon \end{equation} where $\epsilon \sim \mathcal{N}(0,1)$. The decoding module takes $\mathbf{z}$ as input and outputs the reconstructed adjacency matrix $\widehat{\mathbf{A}}$. The decoding step can be formulated as: \begin{equation} \widehat{\mathbf{A}} = \sigma(\mathbf{z}\mathbf{z}^T) \end{equation} where $\sigma$ is the sigmoid function. Note that each entry of $\mathbf{z}\mathbf{z}^T$ is the inner product of two node embeddings, i.e., $\left \|\mathbf{z}_i\right \| \left \|\mathbf{z}_j\right\| \cos\theta_{ij}$; this inner product operation helps capture the spatial correlation among different contexts. During the training phase, we minimize the joint loss function $\mathcal{L}$: \begin{equation} \mathcal{L} = \sum \limits_{i=1}^{N} \underbrace{ KL[q(\mathbf{z}|\mathbf{X},\mathbf{A}) || p(\mathbf{z})] }_{\text{KL Divergence between $q(.)$ and $p(.)$}} + \overbrace{ \sum_{j=1}^{S} \left \| \mathbf{A}-\widehat{\mathbf{A}} \right \|^2 }^{\text{Loss between $\mathbf{A}$ and $\widehat{\mathbf{A}}$}} \label{equ:loss} \end{equation} where $N$ is the dimension of $\mathbf{z}$; $S$ is the total number of nodes in $\mathbf{A}$; $q$ denotes the approximate posterior distribution of $\mathbf{z}$; and $p$ denotes its prior distribution.
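The reparameterization and inner-product decoding steps (Eqs. 3 and 4) can be sketched numerically as follows; the node count, embedding dimension, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_delta2):
    # z = mu + delta * eps, with eps drawn from a standard normal;
    # log_delta2 is log(delta^2), so delta = exp(0.5 * log_delta2).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_delta2) * eps

def decode(z):
    # Inner-product decoder: A_hat = sigmoid(z z^T).
    logits = z @ z.T
    return 1.0 / (1.0 + np.exp(-logits))

# 9 nodes (the community plus its 8 contexts), embedding dimension 4.
mu = np.zeros((9, 4))
z = reparameterize(mu, np.zeros_like(mu))
A_hat = decode(z)   # reconstructed 9 x 9 adjacency, entries in (0, 1)
```

Because the decoder only depends on pairwise inner products, the reconstructed adjacency is symmetric by construction.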
$\mathcal{L}$ includes two parts: the first is the Kullback-Leibler divergence between the standard prior distribution $\mathcal{N}(0,1)$ and the distribution of $\mathbf{z}$, and the second is the squared error between $\mathbf{A}$ and $\widehat{\mathbf{A}}$. The training process tries to make $\widehat{\mathbf{A}}$ close to $\mathbf{A}$ and the distribution of $\mathbf{z}$ similar to $\mathcal{N}(0,1)$. Finally, we apply global average aggregation over $\mathbf{z}$ to obtain a graph-level representation, which is the latent representation of all context environments. \begin{figure}[t] \centering \vspace{-0.3cm} \includegraphics[width=\linewidth]{image/GAE_feature.pdf} \vspace{-0.6cm} \caption{The proposed representation learning model to obtain surrounding context representations by minimizing the reconstruction loss of spatial attributed graphs.} \vspace{-0.6cm} \label{gae_feature} \end{figure} \subsection{Land-use Configuration Quantification and the Quality Measurement} A land-use configuration indicates the locations of different types of POIs and thus needs an appropriate quantitative format to be fed into a learning model. To that end, we regard the POI distribution of an area as its land-use configuration, and construct a multi-channel tensor to represent it, where each channel is the POI distribution over the geographical area for one POI category. Figure \ref{poi_dis} shows an example of a land-use configuration. We first divide an unplanned area into $n\times n$ squares, then count the number of POIs of each category in each square; each POI category forms one channel of the land-use configuration. In this way, we obtain a land-use configuration as a multi-channel tensor.
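The tensor construction just described can be sketched as follows (a toy example; the helper name, grid size, and POI list are assumptions, not the authors' code):

```python
import numpy as np

def build_config(pois, n, m):
    # Build the latitude-longitude-channel tensor: pois is a list of
    # (lat_idx, lon_idx, category) triples after binning each POI into
    # the n x n grid; each entry counts the POIs of one category.
    config = np.zeros((n, n, m))
    for i, j, c in pois:
        config[i, j, c] += 1
    return config

pois = [(0, 0, 1), (0, 0, 1), (3, 2, 0)]   # three binned POIs
config = build_config(pois, n=4, m=3)       # (4, 4, 3) tensor
```

Summing a channel over the grid recovers the total count of that POI category, which is what the POI-proportion experiment above relies on.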
\begin{figure}[!thbp] \centering \vspace{-0.3cm} \includegraphics[width=0.9\linewidth]{image/poi_dis_of_R.pdf} \vspace{-0.2cm} \caption{The construction of the longitude-latitude-channel configuration tensor, where the value of each entry is the number of POIs of a specific category within a specific latitude range and a specific longitude range.} \label{poi_dis} \vspace{-0.5cm} \end{figure} Next, another big challenge is how to evaluate the quality of the land-use configuration of a residential community. Because urban planning is a complex field, urban planning specialists evaluate the quality of a land-use configuration solution from multiple aspects. In our framework, we provide a quality hyperparameter $Q$ for users, who can set the value of $Q$ to distinguish the quality of land-use configuration solutions. In our experiments, we choose the POI diversity and the check-in frequency of an area as the quality standard. Formally, we first compute the total number of mobile check-in events of an area, denoted by $freq$, and the POI diversity of the area, denoted by $div$. We then combine the two indicators via their harmonic mean, $Q=\frac{2\times freq \times div}{freq+div}$ \cite{wang2018ensemble}. If $Q > 0.5$, the solution is regarded as excellent; otherwise, it is judged to be a poor solution. \subsection{Generating Excellent Land-use Configuration Solutions by GAN} Generative adversarial networks (GANs) are popular deep generative models whose framework is well suited to generating realistic data samples in an adversarial manner; in the computer vision field, GANs have achieved tremendous success. We therefore utilize the GAN framework to generate excellent land-use configuration solutions for an unplanned area according to the representation of its context environments.
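The quality score $Q$ defined above, the harmonic mean of check-in frequency and POI diversity, can be sketched as follows (assuming both indicators are already normalized to $[0,1]$; function names are illustrative):

```python
def quality(freq, div):
    # Harmonic mean of check-in frequency and POI diversity:
    # Q = 2 * freq * div / (freq + div).
    return 2 * freq * div / (freq + div)

def is_excellent(freq, div, threshold=0.5):
    # A solution counts as excellent when Q exceeds the threshold.
    return quality(freq, div) > threshold
```

Because the harmonic mean is dominated by the smaller indicator, an area must score reasonably well on both vibrancy measures to be labeled excellent.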
\begin{figure*}[t] \vspace{-0.3cm} \centering \includegraphics[width=0.75\linewidth]{image/gan_structure.pdf} \caption{The automatic land-use configuration planner.} \vspace{-0.5cm} \label{GAN_framework} \end{figure*} Figure \ref{GAN_framework} shows the structure of our automated land-use configuration planner. In general, real land-use configurations fall into two categories: excellent and terrible. The purpose of the automated planner is to generate an excellent land-use configuration plan based on the context embedding. Formally, we feed the context embedding into the generator to generate a land-use configuration solution. To improve the generative ability, the discriminator classifies the excellent plans as positive and the terrible plans as negative. Algorithm \ref{alg:auto_planner} details the training phase. \begin{algorithm}[hbtp] \tcp{start training.} \For{number of training iterations}{ \tcp{update discriminator firstly.} \For{n steps}{ Sample minibatch of $m$ excellent land-use configuration samples $\left \{ \mathbf{E}^1,\mathbf{E}^2,...,\mathbf{E}^m \right \}$.\\ Sample minibatch of $m$ context information embedding samples $\left \{ \mathbf{z}^1,\mathbf{z}^2,...,\mathbf{z}^m \right \}$. \\ Generate land-use configuration samples with the generator, $\left \{ \mathbf{F}^1,\mathbf{F}^2,...,\mathbf{F}^m \right \}$, where $\mathbf{F}^i=G(\mathbf{z}^i)$.\\ Sample minibatch of $m$ terrible land-use configuration samples $\left \{ \mathbf{T}^1,\mathbf{T}^2,...,\mathbf{T}^m \right \}$. \\ Update the discriminator by ascending its gradient:\\ ~\\ $\bigtriangledown_{\theta_d} \frac{1}{m}\sum_{i=1}^{m} [ log(D(\mathbf{E}^i)) + log(1-D(\mathbf{F}^i))$\\ $+ log(1-D(\mathbf{T}^i)) ].$ ~\\ } \tcp{update generator secondly.} Sample minibatch of $m$ context information embedding samples $\left \{ \mathbf{z}^1,\mathbf{z}^2,...,\mathbf{z}^m \right \}$.
\\ Update the generator by descending its gradient:\\ ~\\ $\bigtriangledown_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} log(1-D(G(\mathbf{z}^i))).$ ~\\ } \caption{Minibatch adaptive moment estimation training of the automatic land-use configuration model. We adjust a hyperparameter $f$ to change the update frequency of the discriminator weights.} \label{alg:auto_planner} \end{algorithm} In Algorithm \ref{alg:auto_planner}, we first update the parameters of the discriminator while fixing those of the generator. We feed the excellent, terrible, and generated land-use configuration samples into the discriminator, which outputs classification results activated by the sigmoid function; it should give higher scores to excellent samples than to terrible and generated samples. Next, we fix the discriminator and update the parameters of the generator: the context embedding vectors are fed into the generator, which outputs generated land-use configuration solutions; these are then fed into the discriminator to judge their quality, and the resulting gradient is used to update the generator and improve its generative ability. Finally, we obtain an automatic land-use configuration planner when the GAN model converges: given the context embedding, the generator can produce an excellent land-use configuration for the unplanned area. \section{Problem Statement and Framework Overview} \subsection{Definitions} \subsubsection{Central area and its Contexts} In this paper, a central area is a square, unplanned area centered on a geographical location (i.e., a latitude and longitude); in this study, the size of a central area is $1\,km^2$. The contexts of a central area wrap around it from different directions.
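The three-term discriminator objective in Algorithm \ref{alg:auto_planner} can be sketched numerically as follows; the stand-in discriminator and the sample values are toy assumptions, not the authors' networks:

```python
import numpy as np

def discriminator_objective(D, excellent, generated, terrible):
    # One-minibatch value of the objective the discriminator ascends:
    # D(excellent) is pushed toward 1, while both generated and terrible
    # samples are pushed toward 0.
    return np.mean(np.log(D(excellent))
                   + np.log(1 - D(generated))
                   + np.log(1 - D(terrible)))

D = lambda x: 1.0 / (1.0 + np.exp(-x))   # toy sigmoid "discriminator"
obj = discriminator_objective(D,
                              excellent=np.array([3.0]),
                              generated=np.array([-3.0]),
                              terrible=np.array([-3.0]))
```

The objective is bounded above by zero and approaches it when the discriminator scores the three sample types correctly, which is the regime the inner loop of Algorithm \ref{alg:auto_planner} drives toward.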
\begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{image/context.pdf} \caption{The geographical definitions of a central area and its surrounding spatial contexts.} \label{residential_context} \vspace{-0.5cm} \end{figure} Figure \ref{residential_context} shows the spatial relationship between a central area and its contexts. There are many POIs in both the central area and its contexts, and, intuitively, the future development of an unplanned area is affected by its contexts. \subsection{Problem Statement} Urban planning is a complex field. An excellent urban planning solution requires experts with extensive domain knowledge and experience, and a great deal of design time. To reduce this heavy burden on experts and to generate suitable urban planning solutions objectively, we propose an automatic urban planner that produces excellent solutions based on the context environments. Here, we narrow the meaning of urban planning to land-use configuration, which makes the problem easier to model. Formally, let $R$ be a virgin area with contexts $[C_1 \sim C_8]$, and let $\mathbf{M}$ be the land-use configuration plan, structured as a multi-channel image in which each channel is the distribution of one POI category. We are given an explicit feature matrix $\mathbf{F}$ describing the context environments, where $\mathbf{F} \in \mathbb{R}^{8 \times K}$, $8$ is the number of contexts, and $K$ is the dimension of the explicit feature vector of each context. \begin{figure}[!thbp] \centering \vspace{-0.3cm} \includegraphics[width=0.8\linewidth]{image/Framework} \vspace{-0.2cm} \caption{An overview of the proposed framework, which includes three steps: 1) we first collect multiple data sources, such as urban community data (housing prices), point-of-interest data, and human mobility data (taxicab GPS traces).
2) We then propose a spatial graph representation learning method to learn the representation of the surrounding contexts. 3) We develop an adversarial land-use configuration machine to automate planning and generate recommended configurations. } \label{framework} \vspace{-0.5cm} \end{figure} The purpose of our framework is to take the explicit feature matrix $\mathbf{F}$ as input and output a corresponding excellent land-use configuration solution $\mathbf{M}$. \subsection{Framework Overview} Figure \ref{framework} shows an overview of our proposed method (LUCGAN). The framework has two main parts: (i) learning the representation of the contexts of the virgin area; (ii) generating an excellent land-use configuration solution for the virgin area. In the first part, we extract explicit features of the contexts covering value-added space, POI distribution, and public and private transportation conditions. Then, we construct a graph structure to capture the geographical spatial relationship between the virgin area and its contexts, and map the explicit features of the contexts onto the graph as attributes of the corresponding nodes. The attributed spatial graph incorporates all characteristics of the contexts. Next, we utilize a variational graph auto-encoder (VGAE) to obtain the latent representation of the contexts; thus, the first part yields the final representation of the contexts of the virgin area. In the second part, we input the latent representation of the contexts, excellent land-use configuration samples, and terrible land-use configuration samples into an extended GAN, which is capable of generating land-use configuration solutions based on the context embedding. Moreover, we customize a new GAN loss that makes the model learn the distribution of excellent plans while staying away from the terrible plans.
Finally, when the model converges, the generator of the extended GAN can produce suitable and excellent land-use configuration solutions, in an objective manner, based on the latent context embedding. \section{Related Work} \textbf{Representation Learning.} The objective of representation learning is to obtain a low-dimensional representation of the original data in a latent space. In general, there are three types of representation learning models: (1) probabilistic graphical models, (2) manifold learning models, and (3) auto-encoder models. Probabilistic graphical models build a complex Bayesian network system to learn representations of the uncertain knowledge buried in the original data \cite{qiang2019learning}. Manifold learning models infer the low-dimensional manifold of the original data based on neighborhood information using non-parametric approaches \cite{zhu2018image}. Auto-encoder models learn latent representations by minimizing the reconstruction loss between the original and reconstructed data \cite{otto2019linearly}. In this paper, we utilize the auto-encoder paradigm to learn the representation of the spatial context environment. With the development of deep representation learning techniques, there have been many successful spatial representation applications \cite{fu2018representing,wang2018learning,zhang2019unifying,fu2019efficient}. For instance, Wang et al. capture features of GPS trajectory data using spatio-temporal embedding learning techniques, and the embedding is then used to analyze driving behavior \cite{wang2019spatiotemporal}. Du et al. propose a new spatial representation learning framework that captures the static and dynamic characteristics of spatial entities, and utilize the learned representation to improve real estate price prediction \cite{8970913}. \textbf{Generative Adversarial Networks.} Generative adversarial networks have been a very active research field in recent years \cite{zhang2020curb,zhang2019trafficgan}.
GAN algorithms can be classified into three categories from a task-driven view. (1) Semi-supervised learning GANs. Since a completely labeled data set is usually difficult to obtain, semi-supervised learning GANs can utilize unlabeled or partially labeled data to train an excellent classifier \cite{ding2018semi,liu2020catgan}. For instance, Akcay et al. design a semi-supervised GAN anomaly detection framework that only uses normal data during the training phase \cite{akcay2018ganomaly}. (2) Transfer learning GANs. Many researchers utilize transfer learning GANs to transfer knowledge among different domains \cite{hoffman2018cycada,tzeng2017adversarial}. For instance, Choi et al. build a unified generative adversarial network to translate images across different domains \cite{choi2018stargan}. (3) Reinforcement learning GANs. Generative models are combined with reinforcement learning (RL) to improve generative performance \cite{sarmad2019rl}. For instance, Ganin et al. combine reinforcement learning and GANs to synthesize high-quality images \cite{ganin2018synthesizing}. \textbf{Urban Planning.} Urban planning is a complex research field \cite{adams1994urban}. Specialists need to consider many factors, such as government policy and environmental protection, to design a suitable land-use configuration plan \cite{niemela1999ecology}. For example, some researchers focus on constructing urban planning solutions for human health and well-being \cite{barton2013healthy}. Meanwhile, many researchers form land-use configuration solutions based on the development of real estate \cite{ratcliffe2004urban}. It is therefore difficult to generate a suitable and excellent urban planning solution objectively. To the best of our knowledge, we are the first to propose an automatic land-use configuration planner to address this problem.
\section*{Required Metadata} \section*{Current code version} \begin{table}[H] \begin{tabular}{|l|p{6.5cm}|p{6.5cm}|} \hline \textbf{Nr.} & \textbf{Code metadata description} & \textbf{Please fill in this column} \\ \hline C1 & Current code version & cWB-6.4.0 \\ \hline C2 & Permanent link to code/repository used for this code version & DOI: \url{https://doi.org/10.5281/zenodo.4419902} ; \hspace{0.3cm} git repository: \url{https://gitlab.com/gwburst/public/library} \\ \hline C3 & Code Ocean compute capsule & \\ \hline C4 & Legal Code License & GNU General Public License v3.0. \\ \hline C5 & Code versioning system used & git \\ \hline C6 & Software code languages, tools, and services used & C++, Python, shell, HTML. \\ \hline C7 & Compilation requirements, operating environments \& dependencies & C++ compiler supporting C++11 (tested with GCC $8$ and ICC $19$), Python 2.7, CERN ROOT 6, HEALPix, CFITSIO, LAL, ligo-segments 1.0.1, ligo-gracedb 2.5.0. \\ \hline C8 & If available Link to developer documentation/manual & \url{https://gwburst.gitlab.io/documentation/latest/html/overview.html} \\ \hline C9 & Support email for questions & \url{https://gitlab.com/gwburst/public/library/issues}\\ \hline \end{tabular} \caption{Code metadata (mandatory)} \label{tab:meta} \end{table} \linenumbers \fi \section{Motivation and significance} \label{sec:intro} The first direct detection of gravitational waves was accomplished by the LIGO-Virgo Collaboration in 2015, almost one century after the initial prediction by Albert Einstein \cite{Einstein1916}. On September 14, 2015, at 09:50:45 UTC, the low-latency instance of \textit{coherent WaveBurst} (cWB) \cite{CWBwavelet:2004km,CWBlikenet:2005kmrm,Klimenko:2015ypf,Klimenko:2008fu,gwburst.gitlab.io,cwb-doc}, an unmodeled search pipeline for the prompt detection of generic gravitational-wave (GW) signals, identified and reconstructed a chirping signal (see also Fig.
\ref{f:SIMexamples}) in the data from the first observing run of the advanced LIGO detectors located in Hanford (WA) and Livingston (LA) \footnote{The advanced Virgo interferometer in Cascina, Italy joined the GW network later on, in 2017.} and reported it within three minutes of data acquisition \cite{Abbott:2016blz}. Follow-up analyses by the LIGO-Virgo Collaboration established that the signal (later named GW150914) was consistent with the merger of a binary black hole (BBH). This historic detection was achieved thanks to the extreme sensitivity of the advanced GW detectors and to sophisticated signal-processing methods, such as those implemented within cWB, used to separate signals from noise and from the remaining systematics. The cWB pipeline is designed to detect a wide class of gravitational-wave signals and reconstruct their waveforms with minimal assumptions on the source model. It also extracts additional properties such as bandwidth, duration, sky location, and polarization state. cWB searches for GWs both in low-latency mode (with a latency of a few minutes) and offline. The low-latency reconstructed sky location \cite{Aasi:2013wya,PhysRevD.83.102001} can be promptly shared with the partner observatories \cite{emfollow}, which search for coincident electromagnetic counterparts. As an example of an offline analysis, cWB has been used for targeted searches for core-collapse supernovae (CCSNe) \cite{Iess:2020yqj,PhysRevD.101.084002,Srivastava:2019fcb,Abbott:2018qee,Abbott:2016tdt} -- one of the next most anticipated GW sources, and one of the highest-priority tasks for GW detectors.
cWB is also well suited for the detection of compact binary coalescences (CBC): CBC sources, formed from combinations of neutron stars (NS) and black holes (BH), are among the most efficient emitters of gravitational waves, and when observed in conjunction with electromagnetic or neutrino signals they yield important astrophysical insights \cite{TheLIGOScientific:2017qsa,Abbott:2018wiz,Monitor:2017mdv,ANTARES:2017bia,LIGOScientific:2019eut}. The cWB algorithms are robust with respect to a variety of CBC features including higher multipoles, high mass ratios, misaligned spins, eccentric orbits and possible deviations from general relativity, which may create mismatches between signal waveforms and simulated CBC templates. Recently, cWB played a significant role both in the identification of higher multipoles for GW190814, an event associated with the coalescence of a binary system with the most unequal mass ratio yet measured with gravitational waves \cite{Abbott:2020khf}, and in the detection of GW190521, the most massive and most distant black hole merger yet observed in gravitational waves, the first direct detection of an intermediate-mass black hole binary \cite{Abbott:2020tfl}. \section{Software description} \label{sec:descr} The goal of cWB is to identify coherent GW transients on a network of GW detectors with minimal assumptions on signal morphology. First, as shown in Fig. \ref{f:arch}, data streams from all detectors are conditioned with a regression algorithm \cite{Tiwari:2015ofa} which identifies and removes persistent lines and noise artifacts; next, data are converted to the time-frequency (TF) domain with the Wilson-Daubechies-Meyer (WDM) wavelet transform, which has very good localization properties, both in time and frequency \cite{Necula:2012zz}. The data are then whitened, and those pixels whose energy is larger than a given threshold are retained for further analysis (see, e.g., Fig. \ref{f:TFexamples}).
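The whitening-and-threshold step just described can be caricatured in a few lines of numpy; in the sketch below a plain windowed FFT stands in for the WDM transform, and the window, threshold, and function name are our own illustrative choices, not cWB code.

```python
import numpy as np

def select_tf_pixels(strain, nfft=64, threshold=3.0):
    # Tile the data into time-frequency pixels with windowed FFTs
    # (a crude stand-in for the WDM transform used by cWB).
    nseg = len(strain) // nfft
    segs = strain[:nseg * nfft].reshape(nseg, nfft)
    tf = np.abs(np.fft.rfft(segs * np.hanning(nfft), axis=1))
    # Crude whitening: normalize each frequency column by its median over time.
    white = tf / (np.median(tf, axis=0) + 1e-12)
    # Retain only the pixels whose whitened energy exceeds the threshold.
    return white**2 > threshold**2

rng = np.random.default_rng(0)
strain = rng.standard_normal(4096)
t = np.arange(1024) / 1024.0
strain[1000:2024] += 5.0 * np.sin(2 * np.pi * 100.0 * t)  # loud toy transient
mask = select_tf_pixels(strain)
```

Most pure-noise pixels fall below threshold, while the pixels covering the loud transient survive and would be handed on to the coherent likelihood stage.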
The selected TF pixels from all detectors are combined in a constrained likelihood function that depends on the source sky position and takes into account the corresponding antenna patterns of the interferometers and time delays between interferometer pairs: after maximizing the constrained likelihood, a candidate event is identified when a specific measure of signal coherence, calculated on the ensemble of the selected TF pixels, exceeds a predetermined threshold \cite{Klimenko:2008fu,Klimenko:2005xv,Klimenko:2006rh}. \subsection{Software Architecture} \label{sec:arch} The core computational tasks are all performed by a specialized C++ library, the \textit{Wavelet Analysis Tool} (WAT) library, and are embedded within the CERN ROOT data-analysis framework \cite{Brun:1997pa,Antcheva:2009zz}. ROOT classes are also used as building blocks for data processing (e.g. I/O), data analysis (e.g. post-production statistical analysis), and visualization. ROOT-derived classes and cWB classes are also accessible via CLING, the ROOT just-in-time interpreter \cite{Vasilev:2012ev}, and can be used with a C++ interactive shell or within ROOT C++ macros. Python scripts are used for the most crucial tasks of cWB low-latency analysis, such as uploading triggers to GraceDB (Gravitational-wave Candidate Event Database) \cite{gracedb}, a web service that provides early-warning information about candidate events. cWB is built with the Frame Library \cite{framelib} to access data frames (the standard format for LIGO-Virgo data) and usually with support for skymap grids via HEALPix \cite{Gorski:2004by}, for the astronomical data format FITS \cite{CFITSIO} and for GW waveforms through the LALsuite library \cite{lalsuite}.
In order to quickly adapt to the new needs that arise with the growing theoretical and experimental understanding of GWs, cWB is complemented by a large set of user-selectable options and by the possibility for the user to execute custom plugins which can be called at different stages of the pipeline. \subsection{Software Functionalities} \label{sec:function} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{Architecture-2.pdf} \caption{Functional diagram of the cWB pipeline. White boxes show the processing steps; gray boxes show the external input. During PRE-PRODUCTION the pipeline initializes by reading the production configuration file, which specifies the parameters that steer production. It also reads the \textit{frame files}, which contain strain and other data, and \textit{veto files}, which mark those time intervals that must be excluded from analysis because of low data quality. User plugins that contain user-defined code to carry out targeted analyses are also loaded in this stage. Next, during PRODUCTION, the pipeline searches for triggers, and when one is found it reconstructs the gravitational waveform. In the POST-PRODUCTION stage, events are selected according to conditions specified in the initialization files and veto files, and finally, the pipeline outputs a report in the form of a properly formatted web page. } \label{f:arch} \end{figure} The user can steer the pipeline behavior by setting parameter values in two distinct configuration files, one for the production stage and the other for the post-production stage, as indicated in the functional diagram in Fig. \ref{f:arch}. The production configuration file selects the interferometers (up to 8 detectors from a list of existing, possible future, or custom interferometers), the time segments to be included in the analysis, the parameters that regulate signal conditioning, etc.
A full list of all production stage parameters is available in the online documentation of cWB \cite{cWB-params}. The production configuration file may also specify user-defined plugins, i.e. C++ code that can be called by cWB at different stages of the analysis. Using plugins, the user can customize the analysis without directly modifying the cWB source code \cite{cwb-plugins}. The post-production configuration file specifies further selections to be applied to web reports. The complete description of the post-production parameters is also available in the online documentation \cite{cWB-pparams}. Finally, cWB evaluates the accidental noise background by repeating each search several thousand times on time-shifted data, and it injects specific waveforms into background data to estimate the statistical fluctuations of the reconstruction process. For each reconstructed event, cWB estimates a large set of parameters and test statistics and can optionally produce a \textit{Coherent Event Display} (CED), the summary web page for the event. It consists of a number of sections, with each section showing plots on different aspects of the reconstruction \cite{ced}. All in all, cWB is quite efficient in carrying out its tasks. For example, a job that estimates the equivalent of 1 year of background noise for a BBH search on a two-detector network and runs on a single-core modern CPU takes approximately 6 hours. Parallelization is easy to achieve by splitting data into shorter time segments and by assigning them to separate jobs that replicate the same analysis. Therefore, a larger analysis like a background estimate over thousands of years of time-lagged data can be completed in a matter of hours on the Caltech LIGO HPC cluster \cite{Huerta_2017}, where it can be run concurrently over thousands of cores. As far as memory allocation is concerned, the standard cWB setup requires about 1.5~GByte per core.
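The time-shift idea behind this background estimate can be illustrated with a toy two-detector trigger list; the trigger rates, coincidence window, and function names below are our own illustrative choices, not cWB's interfaces.

```python
import numpy as np

T_OBS = 1000.0  # toy observation time in seconds

def coincidences(t1, t2, window=0.01):
    # Number of trigger pairs from the two detectors closer than `window`.
    return int(np.sum(np.abs(t1[:, None] - t2[None, :]) < window))

def background_counts(t1, t2, lags, window=0.01):
    # Coincidences that survive an unphysical time lag can only be
    # accidental, so the lagged counts sample the noise background.
    return [coincidences(t1, (t2 + lag) % T_OBS, window) for lag in lags]

rng = np.random.default_rng(1)
t1 = np.sort(rng.uniform(0.0, T_OBS, 200))  # detector-1 noise triggers
t2 = np.sort(rng.uniform(0.0, T_OBS, 200))  # detector-2 noise triggers
lags = np.arange(1.0, 101.0)                # 100 unphysical lags
bkg = background_counts(t1, t2, lags)
```

Each unphysical lag replays the same coincidence search on effectively independent data, which is how a limited stretch of real data can be reused to accumulate a large effective background livetime.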
A three-detector network brings about only a moderate increase in memory allocation, while runtime approximately triples with respect to the two-detector network, because of the increased complexity of the analysis. \section{Illustrative Examples} \label{sec:example} The easiest way to test cWB functionalities is to install it as an image in a virtual environment: cWB images are available both for VirtualBox \cite{vbox} and Docker \cite{docker}. Full instructions on how to get the latest cWB images and run cWB are given in the cWB User Manual FAQs \cite{cwb-doc}. \subsection{The GW150914 example: cWB waveform reconstruction} \label{GWOSC_example1} By using the command {\tt cwb\_gwosc} \cite{cwb-gwosc}, it is straightforward to reproduce the full analysis of GW150914, the first detected GW event. The command loads GW data available from the Gravitational Wave Open Science Center (GWOSC) \cite{GWOS}, which includes the original raw data, the power spectral densities (PSDs), and the parameter-estimation (PE) posterior samples for all GW detections. The command line to execute for the GW150914 event is {\small \colorlet{shadecolor}{yellow!20} \begin{shaded} {\tt cwb\_gwosc GW=GW150914 {\color{green}all} } \end{shaded} } which downloads GW data from GWOSC, sets up a working directory with all default settings, runs the analysis\footnote{Execution time is $\approx 2$ minutes on a commercial Intel i7 CPU laptop ($\approx +10\%$ on docker), excluding the initial GW data download.} and finally produces a CED \footnote{The CED can be visualized directly by replacing the command option {\tt {\color{green}all}} with {\tt {\color{green}xced}}. } \cite{cwb-ced}. Fig. \ref{f:SIMexamples} (left) shows the multi-resolution TF map for the cWB reconstruction of GW150914.
\subsubsection{cWB reconstruction on simulated data} \label{GWOSC_example2} The posterior samples made available at GWOSC can be injected into simulated noise and reconstructed by cWB in order to estimate the variability of the reconstruction process for GW150914. The {\tt cwb\_gwosc} command is used to perform this analysis: {\small \colorlet{shadecolor}{yellow!20} \begin{shaded} {\tt cwb\_gwosc GW=GW150914 SIM=true {\color{green}all} } \end{shaded} } This instruction downloads the noise power spectral densities (PSDs) and a set of posterior samples for GW150914. Next, the PSDs are used to simulate colored, stationary Gaussian noise, and a random sample waveform is added to the data. Finally, cWB runs as in the previous example and reports the results on a CED. Fig. \ref{f:SIMexamples} (right) shows a comparison between injected and reconstructed waveforms in the frequency domain. \begin{figure*}[h] \includegraphics[clip,width=0.49\textwidth]{l_tfmap_scalogram.png} \hfil \includegraphics[clip,width=0.49\textwidth]{H1_wf_white_inj_rec_fft.png} \caption{\label{f:SIMexamples} cWB waveform reconstruction of GW150914 as a color-coded TF map of the signal likelihood (left panel) and a comparison in the frequency domain (right panel) between a random posterior sample for GW150914 (black) and the corresponding cWB reconstruction (red).} \end{figure*} \subsubsection{cWB WDM transform on GW data and TF pixel selection} The intermediate steps of the cWB analysis can be visualized with the {\tt cwb\_inet2G} command, which offers plenty of options \cite{cwb-inet}. This is demonstrated for the pixel selection step of the example in Sec.
\ref{GWOSC_example1} by running the following commands: {\small \colorlet{shadecolor}{yellow!20} \begin{shaded} {\tt cwb\_inet2G config/user\_parameters CSTRAIN 1 {\color{green}'--tool wdm --ifo H1 --type white --draw true'} } {\tt cwb\_inet2G config/user\_parameters SUPERCLUSTER 1 {\color{green}'--tool sparse --ifo H1 --type supercluster --draw true --mode 2'} } \end{shaded} } Various plots are created as interactive objects and sorted within the ROOT browser (as described in the documentation \cite{cwb-doc}); among these, some TF plots (properly zoomed in time and frequency) have been selected to be shown in Fig. \ref{f:TFexamples}. The plots display the TF pixels from the WDM transforms for Hanford data at the time of GW150914 for three different TF resolutions, both before and after the pixel selection step. For more examples, see the cWB Manual \cite{cwb-doc}. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{SoftwareX_Fig2.pdf} \caption{\label{f:TFexamples} Scalograms of Hanford data around GW150914: upper row, TF maps showing all data; lower row, TF maps showing only pixels retained for likelihood analysis. The columns correspond to different TF resolutions: left column, 1/32~s$\;\times\;$16~Hz; middle column, 1/16~s$\;\times\;$8~Hz; right column, 1/8~s$\;\times\;$4~Hz. } \end{figure*} \section{Impact} \label{sec:impact} For more than a decade, coherent WaveBurst has been used for the analysis of GW data and for the production of scientific results, such as the all-sky searches for burst signals targeting a wide class of generic bursts \cite{Abbott:2019prv,Abbott:2016ezn}. The cWB pipeline has also contributed to the detection and analysis of binary black hole events published in the LIGO-Virgo Catalogs \cite{LIGOScientific:2018mvr,Abbott:2020niy} and to the LIGO-Virgo electromagnetic follow-up program \cite{LIGOScientific:2019gag}.
Dedicated cWB searches for intermediate-mass black hole binaries were also conducted, setting stringent limits that approach interesting astrophysical rates \cite{Salemi:2019ovz,Abbott:2017iws}. Most importantly, cWB boosted exceptional discoveries, like the first detection of gravitational waves from a merger of two black holes. cWB was the first LIGO algorithm to report the event with low latency. The results of the cWB analysis were published in the GW150914 discovery paper \cite{Abbott:2016blz}. The cWB contribution has been acknowledged in the official description \cite{Nobel} of the 2017 Nobel prize in physics awarded to Rainer Weiss, Barry C. Barish and Kip S. Thorne. The broader impact of cWB has been highlighted also by Prof. Yves Meyer, the recipient of the 2017 Abel Prize, who acknowledged the cWB wavelet analysis in his prize lecture ``Detection of gravitational waves and time-frequency wavelets'' at the University of Oslo on May 24, 2017 \cite{AbelLecture}. More recently, cWB played a key role in the detection of the first intermediate-mass black hole \cite{Abbott:2020tfl}, as briefly mentioned in Sec.~\ref{sec:intro}. Although cWB has been designed with gravitational data analysis in mind, it integrates search-and-reconstruction techniques for low-SNR signals in the audio band that may find useful applications in other fields, such as, for example, in the field of sound pattern recognition. \iffalse As an instance, take the sound recognition software developed by the Cornell Lab of Ornithology and the Chemnitz University of Technology \cite{birdnet} where an artificial neural network (ANN) acts on simple spectrograms (another implementation of a TF map) to recognize bird songs. Such a task may become awkward in crowded environments, with several noise sources. A combination of wavelet representation with direction-finding -- similar to that implemented in cWB -- might improve the recognition capabilities of that software.
It is important to mention that the use of ANNs to recognize patterns in the TF map has also been considered in cWB to search for CBC \cite{Vinciguerra:2017psh} or CCSN \cite{Astone:2018uge} events. \fi \section{Conclusions} \label{sec:conc} The cWB pipeline has some distinctive strengths: it is fast and efficient, it is highly flexible, and it is extensible -- both with specific plugins and, e.g., with the use of machine learning algorithms for recognition of patterns in the TF maps. It is ready to analyze data from larger (future) GW networks: networks composed of up to 8 detectors, either chosen among the defaults or directly defined, are allowed. These strengths may be crucial in view of the planned upgrades to GW observatories \cite{collaboration2020prospects}, which shall bring about increased sensitivities with correspondingly higher event rates: we expect cWB to play a key role in both low-latency and offline analysis. \section{CRediT author statement} M. Drago: Formal analysis, Software, Writing - Review \& Editing; V. Gayathri: Formal analysis; S. Klimenko: Conceptualization, Project administration, Supervision, Methodology, Software, Writing - Review \& Editing; C. Lazzaro: Formal analysis; E. Milotti: Methodology, Writing - Original Draft and Review \& Editing; G. Mitselmakher: Conceptualization; V. Necula: Methodology, Software; B. O'Brien: Formal analysis; G.A. Prodi: Supervision, Methodology, Writing - Review \& Editing; F. Salemi: Writing - Original Draft and Review \& Editing, Software, Methodology, Formal analysis; M. Szczepanczyk: Formal analysis; S. Tiwari: Formal analysis; V. Tiwari: Formal analysis, Methodology; G. Vedovato: Software, Methodology, Formal analysis, Data Curation, Visualization, Writing - Review \& Editing; I. Yakushin: Software, Formal analysis, Data Curation.
\section{Conflict of Interest} No conflict of interest exists: we wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome. \section*{Acknowledgements} \label{sec:ack} cWB makes use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center \cite{gwosc}, a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de la Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. S. Tiwari is supported by the University of Zurich Forschungskredit Nr.~FK-19-114 and the Swiss National Science Foundation. Finally, we gratefully acknowledge the support of the State of Niedersachsen/Germany, of the National Science Foundation (NSF) and of the Istituto Nazionale di Fisica Nucleare (INFN) for the provision of computational resources. \bibliographystyle{elsarticle-num}
\section{Introduction} Let $G$ be an undirected simple graph on $n$ vertices with vertex set $V=\{v_1,v_2,\ldots,v_n\}$. The \emph{adjacency matrix} of $G$ is an $n\times n$ matrix $A(G)$ whose $(i,j)$-entry is $1$ if $v_i$ is adjacent to $v_j$ and is $0$ otherwise. The \emph{degree} of $v_i$, denoted by $d_i$, is the number of edges incident to $v_i$. The matrix $L(G)=D(G)-A(G)$ is called the \emph{Laplacian matrix} of $G$, where $D(G)$ is the $n\times n$ diagonal matrix whose $(i,i)$-entry is $d_i$. The eigenvalues of $L(G)$ are called the \emph{Laplacian eigenvalues} of $G$ (L-eigenvalues for short). Since $L(G)$ is a positive semidefinite matrix, the L-eigenvalues of $G$ are non-negative and the smallest eigenvalue equals zero. All the L-eigenvalues together with their multiplicities are called the \emph{Laplacian spectrum} of $G$ (L-spectrum for short), denoted by $\emph{Spec}_L(G)$. A graph $G$ is called DLS (determined by its Laplacian spectrum) if $H\cong G$ for any graph $H$ with $\emph{Spec}_L(H)=\emph{Spec}_L(G)$. Throughout this paper, we denote the multiplicity of the L-eigenvalue $\mu$ by $m(\mu)$, and the \emph{diameter} of $G$ by $d(G)$. Since connected graphs with few distinct eigenvalues possess nice combinatorial properties, they have aroused a lot of interest for several matrices, such as the adjacency matrix \cite{Dam3,Dam1,Dam2,Huang}, the Laplacian matrix \cite{Dam,Mohammadian,12}, the signless Laplacian matrix \cite{F} and the normalized Laplacian matrix \cite{Dam4}. We denote by $\mathcal{G}(n,k)$ the set of connected graphs of order $n$ having an L-eigenvalue of multiplicity $k$. It is well known that $\mathcal{G}(n,n-1)=\{K_n\}$. Das \cite{Das} proved that $\mathcal{G}(n,n-2)=\{K_{\frac{n}{2},\frac{n}{2}},K_{1,n-1},K_n-e\}$, and Mohammadian and Tayfeh-Rezaie \cite{Mohammadian} gave a partial characterization of the graphs in $\mathcal{G}(n,n-3)$. Motivated by their work, we will complete the characterization of the remaining graphs in $\mathcal{G}(n,n-3)$.
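These definitions are easy to check numerically. The short sketch below (plain numpy, with names of our own choosing) builds $L(K_5)$ and recovers the L-spectrum $\{[5]^4,[0]^1\}$: a single nonzero L-eigenvalue of multiplicity $n-1$, which is exactly the situation behind $\mathcal{G}(n,n-1)=\{K_n\}$.

```python
import numpy as np

def laplacian(A):
    # L(G) = D(G) - A(G), where D(G) is the diagonal degree matrix.
    return np.diag(A.sum(axis=1)) - A

def l_spectrum(A):
    # L(G) is real symmetric positive semidefinite, so eigvalsh applies;
    # rounding suppresses floating-point noise in the eigenvalues.
    return np.round(np.linalg.eigvalsh(laplacian(A)), 8)

n = 5
K_n = np.ones((n, n)) - np.eye(n)  # adjacency matrix of the complete graph K_5
spec = l_spectrum(K_n)             # ascending order: 0, then 5 with multiplicity 4
```

The eigenvalues come back in ascending order, with the smallest equal to zero and all others non-negative, as stated above.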
Moreover, we show that all the graphs in $\mathcal{G}(n,n-3)$ are DLS. \section{Preliminaries} The following result is due to Godsil. \begin{thm}[\cite{God}]\label{2-thm-1}Let $Q=\begin{pmatrix}I_m\mid O\end{pmatrix}^T$ be an $n\times m$ matrix, and let $A$ be an $n\times n$ real symmetric matrix with eigenvalues $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n$. If the eigenvalues of $B=Q^TAQ$ are $\mu_1\geq\mu_2\geq\cdots\geq\mu_m$, then $\lambda_i\ge\mu_i\ge\lambda_{n-m+i}$ ($i=1,\cdots,m$). Furthermore, if $\mu_i=\lambda_i$ for some $i\in[1,m]$, then $B$ has a $\mu_i$-eigenvector $z$ such that $Qz$ is a $\lambda_i$-eigenvector of $A$. \end{thm} Laplacian eigenvalues enjoy many useful properties. \begin{lem}[\cite{Cvet}]\label{2-lem-1} Let $G$ be a graph on $n$ vertices with Laplacian eigenvalues $\mu_1\ge\mu_2\ge\cdots\ge\mu_{n-1}\ge\mu_n=0$. Then we have the following results.\\ (i) Denote by $m(0)$ the multiplicity of $0$ as a Laplacian eigenvalue and by $w(G)$ the number of connected components of $G$. Then $w(G)=m(0)$.\\ (ii) $G$ has exactly two distinct Laplacian eigenvalues if and only if $G$ is a union of some complete graphs of the same order and some isolated vertices.\\ (iii) The Laplacian eigenvalues of $\bar{G}$ are given by $\mu_i(\bar{G})=n-\mu_{n-i}$ for $i=1,2,\ldots,n-1$ and $\mu_n(\bar{G})=0$.\\ (iv) If $H$ is a graph on $m$ vertices with Laplacian eigenvalues $\mu_1'\ge\mu_2'\ge\cdots\ge\mu_m'=0$, then the Laplacian spectrum of $G\nabla H$ is \[\{n+m,m+\mu_1,m+\mu_2,\ldots,m+\mu_{n-1},n+\mu_1',n+\mu_2',\ldots,n+\mu_{m-1}',0\}.\] \end{lem} The following bound on the diameter is well known. \begin{lem}[\cite{Cvet}]\label{2-lem-2} Let $G$ be a connected graph on $n\geq3$ vertices with $s$ distinct Laplacian eigenvalues. Then $d(G)\leq s-1$. \end{lem} A graph $G$ is said to be a \emph{cograph} if it contains no induced $P_4$. There is a useful characterization of cographs.
\begin{lem}[\cite{Corneil}]\label{2-lem-3} Given a graph $G$, the following statements are equivalent:\\ 1) $G$ is a cograph.\\ 2) The complement of any connected subgraph of $G$ with at least two vertices is disconnected.\\ 3) Every connected induced subgraph of $G$ has diameter less than or equal to $2$. \end{lem} \section{Characterization of $\mathcal{G}(n,n-3)$} Recall that $\mathcal{G}(n,k)$ is the set of connected graphs with $m(\mu)=k$ for some L-eigenvalue $\mu$. If $k=n-1$, then $G$ has exactly two distinct L-eigenvalues, and so $\mathcal{G}(n,n-1)=\{K_n\}$ by Lemma \ref{2-lem-1} (ii). If $k=n-2$, then $G$ has exactly three distinct L-eigenvalues, and Das \cite{Das} proved that $\mathcal{G}(n,n-2)=\{K_{\frac{n}{2},\frac{n}{2}},K_{1,n-1},K_n-e\}$. For a graph $G\in\mathcal{G}(n,n-3)$, we see that $G$ has three or four distinct L-eigenvalues. In terms of the number and multiplicity of Laplacian eigenvalues, we can divide $\mathcal{G}(n,n-3)$ into five classes: $$\left\{ \begin{array}{ll} \mathcal{G}_1(n,n-3)=\{G\in\mathcal{G}(n,n-3)|\emph{Spec}_L(G)=[(\alpha)^{n-3},(\beta)^2,0]\} \\ \mathcal{G}_2(n,n-3)=\{G\in\mathcal{G}(n,n-3)|\emph{Spec}_L(G)=[(\alpha)^2,(\beta)^{n-3},0]\} \\ \mathcal{G}_3(n,n-3)=\{G\in\mathcal{G}(n,n-3)|\emph{Spec}_L(G)=[(\alpha)^{n-3},\beta,\gamma,0]\} \\ \mathcal{G}_4(n,n-3)=\{G\in\mathcal{G}(n,n-3)|\emph{Spec}_L(G)=[\alpha,(\beta)^{n-3},\gamma,0]\} \\ \mathcal{G}_5(n,n-3)=\{G\in\mathcal{G}(n,n-3)|\emph{Spec}_L(G)=[\alpha,\beta,(\gamma)^{n-3},0]\} \end{array} \right. $$ Mohammadian and Tayfeh-Rezaie \cite{Mohammadian} determine the classes of graphs in $\mathcal{G}_3(n,n-3)$, $\mathcal{G}_4(n,n-3)$ and $\mathcal{G}_5(n,n-3)$, all of which we collect in the following theorem. 
\begin{thm}[\cite{Mohammadian}, Theorems 8, 9, 10]\label{3-thm-1} For an integer $n\geq5$, we have\\ (i) $\mathcal{G}_3(n,n-3)=\{K_{n-3}\nabla\overline{K}_{1,2}\}$, and its Laplacian spectrum is \begin{equation}\label{eq-1} \emph{Spec}_{L}(K_{n-3}\nabla\overline{K}_{1,2})=\{[n]^{n-3},[n-1]^1,[n-3]^1,[0]^1\}. \end{equation} (ii) $\mathcal{G}_4(n,n-3)=\{K_1\nabla2K_{\frac{n-1}{2}},\overline{K}_{\frac{n}{3}}\nabla2K_{\frac{n}{3}},K_{n-1}+e,G_{n,r}\}$, where $G_{n,r}$ is obtained from two copies of $K_r\nabla (n-r)K_1$ by joining the vertices of the two independent sets $(n-r)K_1$ (shown in Fig. \ref{fig-1}). Furthermore, their Laplacian spectra are given by \begin{equation}\label{eq-2} \left\{\begin{array}{l} \emph{Spec}_{L}(K_1\nabla2K_{\frac{n-1}{2}})=\{[n]^1,[\frac{n+1}{2}]^{n-3},[1]^1,[0]^1\}\\ \emph{Spec}_{L}(\overline{K}_{\frac{n}{3}}\nabla2K_{\frac{n}{3}})=\{[n]^1,[\frac{2n}{3}]^{n-3},[\frac{n}{3}]^1,[0]^1\}\\ \emph{Spec}_{L}(K_{n-1}+e)=\{[n]^1,[n-1]^{n-3},[1]^1,[0]^1\}\\ \emph{Spec}_{L}(G_{n,r})=\{[\frac{1}{2}(3n-2r\pm\sqrt{n^2+4nr-4r^2})]^1,[n]^{2n-3},[0]^1\}\\ \end{array}\right.. \end{equation} (iii) $\mathcal{G}_5(n,n-3)=\{K_{2,n-2},K_{\frac{n}{2},\frac{n}{2}}+e,K_{1,n-1}+e\}$, and their Laplacian spectra are given by \begin{equation}\label{eq-3} \left\{\begin{array}{l} \emph{Spec}_{L}(K_{2,n-2})=\{[n]^1,[n-2]^1,[2]^{n-3},[0]^1\}\\ \emph{Spec}_{L}(K_{\frac{n}{2},\frac{n}{2}}+e)=\{[n]^1,[\frac{n}{2}+2]^1,[\frac{n}{2}]^{n-3},[0]^1\}\\ \emph{Spec}_{L}(K_{1,n-1}+e)=\{[n]^1,[3]^1,[1]^{n-3},[0]^1\}\\ \end{array}\right..
\end{equation} \end{thm} \begin{figure} \centering \unitlength 3.6mm \linethickness{0.4pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(16,9)(0,0) \put(1,7){\line(1,0){5}} \put(1,7){\line(5,-2){5}} \put(1,7){\line(5,-6){5}} \put(1,5){\line(5,2){5}} \put(1,5){\line(1,0){4}} \put(5,5){\line(1,0){1}} \put(6,5){\line(-5,-4){5}} \put(1,1){\line(1,0){5}} \put(6,7){\line(-5,-6){5}} \put(1,5){\line(5,-4){5}} \put(6,7){\line(1,0){4}} \multiput(10,7)(-.06666667,-.03333333){60}{\line(-1,0){.06666667}} \put(6,5){\line(1,-1){4}} \put(10,1){\line(-1,0){4}} \put(6,1){\line(1,1){4}} \multiput(10,5)(-.06666667,.03333333){60}{\line(-1,0){.06666667}} \put(10,7){\line(-2,-3){4}} \put(6,7){\line(2,-3){4}} \put(6,5){\line(1,0){4}} \put(10,7){\line(1,0){5}} \put(10,7){\line(5,-2){5}} \put(10,7){\line(5,-6){5}} \put(10,5){\line(5,2){5}} \put(10,5){\line(1,0){4}} \put(14,5){\line(1,0){1}} \put(15,5){\line(-5,-4){5}} \put(10,1){\line(1,0){5}} \put(15,7){\line(-5,-6){5}} \put(10,5){\line(5,-4){5}} \multiput(.93,4.93)(0,-.8){6}{{\rule{.4pt}{.4pt}}} \multiput(5.93,4.93)(0,-.75){5}{{\rule{.4pt}{.4pt}}} \multiput(9.93,4.93)(0,-.8){6}{{\rule{.4pt}{.4pt}}} \multiput(14.93,4.93)(0,-.8){6}{{\rule{.4pt}{.4pt}}} \put(1,4){\oval(2,8)[]} \put(6,4){\oval(2,8)[]} \put(10,4){\oval(2,8)[]} \put(15,4){\oval(2,8)[]} \put(1,9){\makebox(0,0)[cc]{\small $K_r$}} \put(6.3,9){\makebox(0,0)[cc]{\small $(n-r)K_1$}} \put(10.5,9){\makebox(0,0)[cc]{\small $(n-r)K_1$}} \put(15,9){\makebox(0,0)[cc]{\small $K_r$}} \put(1,7){\circle*{.5}} \put(1,5){\circle*{.5}} \put(1,1){\circle*{.5}} \put(6,7){\circle*{.5}} \put(6,5){\circle*{.5}} \put(6,1){\circle*{.5}} \put(10,7){\circle*{.5}} \put(10,5){\circle*{.5}} \put(10,1){\circle*{.5}} \put(15,7){\circle*{.5}} \put(15,5){\circle*{.5}} \put(15,1){\circle*{.5}} \end{picture} \caption{\footnotesize The graph $G_{n,r}$.} \label{fig-1} \end{figure} For the complete characterization of $\mathcal{G}(n,n-3)$, it remains to determine the graphs of 
$\mathcal{G}_1(n,n-3)$ and $\mathcal{G}_2(n,n-3)$. First, we introduce a tool that will be used frequently. \begin{lem}\label{3-lem-1} Let $A$ be a real symmetric matrix of order $n\geq6$ having an eigenvalue $\alpha\neq0$ with $m(\alpha)=n-m$, where $1\leq m\leq n-2$, and let $M$ be a principal submatrix of $A$ of order $m+2$. Then $\alpha$ is also an eigenvalue of $M$ with $m(\alpha)\ge2$, and for each fixed $k$ with $1\le k\le m+2$ there exists an eigenvector $z=(z_1,z_2,\ldots,z_{m+2})^T$ of $M$ with respect to $\alpha$ such that $z_k=0$. Furthermore, $\sum_{i=1}^{m+2}z_i=0$ if $A=L(G)$ for a graph $G$. \end{lem} \begin{proof} Let $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$ be the eigenvalues of $A$ and $\mu_1\ge\mu_2\ge\cdots\ge\mu_{m+2}$ the eigenvalues of $M$. By our assumption, there exists some $1\le i\le n$ such that $\lambda_i=\cdots=\lambda_{i+n-m-1}=\alpha\neq0$. By Theorem \ref{2-thm-1}, we get that $\alpha=\lambda_{i+n-m-2}\le\mu_i\le\lambda_i=\alpha$ and $\alpha=\lambda_{i+n-m-1}\le\mu_{i+1}\le\lambda_{i+1}=\alpha$. Therefore, we have $\mu_i=\mu_{i+1}=\alpha$. Suppose that $x=(x_1,\ldots,x_{m+2})^T$ and $y=(y_1,\ldots,y_{m+2})^T$ are two linearly independent eigenvectors of $M$ with respect to $\alpha$. For each fixed integer $1\le k\le m+2$, a suitable linear combination of $x$ and $y$ gives an eigenvector $z=(z_1,\ldots,z_{m+2})^T$ satisfying $z_k=0$. Furthermore, by Theorem \ref{2-thm-1}, we know that $z^*=Qz$ is an eigenvector of $A$ with respect to $\alpha$, where $Q=\begin{pmatrix}I_{m+2}\mid O\end{pmatrix}^T$. Since the all-ones vector $j$ is an eigenvector of $L(G)$ with respect to $0$, we have $(z^*)^Tj=\sum_{i=1}^{m+2}z_i=0$. \end{proof} \begin{lem}\label{3-lem-2} Let $G$ be a connected graph on $n\geq 2m$ vertices with $m(\alpha)=n-m$, where $\alpha$ is an L-eigenvalue of $G$. Then $\alpha$ is integral. \end{lem} \begin{proof} Let $f(x)$ be the characteristic polynomial of $L(G)$.
As $L(G)$ has only integer entries, $f(x)$ is a monic polynomial with integer coefficients. Let $p(x)$ be the minimal polynomial of $\alpha$; then $p(x)\in \mathbb{Z}[x]$ is irreducible in $\mathbb{Q}[x]$ and $(p(x))^{n-m}\mid f(x)$. Suppose that $p(x)$ has degree at least $2$. Then $p(x)$ has another root $\beta\ne0$, which is also a Laplacian eigenvalue of $G$ with multiplicity $n-m$. Since $0$ is also a Laplacian eigenvalue of the connected graph $G$, it follows that $2(n-m)\le n-1$, which implies $n\leq 2m-1$, a contradiction. Thus, we have $p(x)=x-\alpha$ and the result follows. \end{proof} \begin{lem}\label{3-lem-3} Let $G\in\mathcal{G}(n,n-3)$ with $n\geq6$. Then none of $J_1(=C_5)$, $J_2$ and $J_3$ (shown in Fig. \ref{fig-2}) can be an induced subgraph of $G$. \end{lem} \begin{proof} By way of contradiction, suppose that $G$ has $J_i$ as an induced subgraph, and let $N_i$ be the principal submatrix of $L(G)$ corresponding to $J_i$, for $i=1,2,3$. Let $\alpha$ be the Laplacian eigenvalue of $G$ with multiplicity $n-3$. Then, by Lemma \ref{3-lem-1}, $\alpha$ is an eigenvalue of $N_i$ with $m(\alpha)\geq2$, and for each fixed $k\in\{1,\ldots,5\}$ there exists an eigenvector $z$ of $N_i$ with respect to $\alpha$ such that $z_k=0$ and $\sum_{i=1}^5z_i=0$. First, suppose that $J_1=C_5$ is an induced subgraph of $G$ and $N_1$ is given by \[N_1=\left(\begin{array}{ccccc}d_1&-1&0&0&-1\\-1&d_2&-1&0&0\\0&-1&d_3&-1&0\\0&0&-1&d_4&-1\\-1&0&0&-1&d_5\end{array}\right)\begin{array}{l}v_1\\v_2\\v_3\\v_4\\u\end{array}.\] By Lemma \ref{3-lem-1}, there exists an eigenvector $x=(0,x_2,x_3,x_4,x_5)^T$ satisfying $x_2+x_3+x_4+x_5=0$ such that $N_1x=\alpha x$. From the first entry of $N_1x=\alpha x$, we have $x_2+x_5=0$, and therefore $x_3+x_4=0$. If one of $x_2$ and $x_5$ equals zero, then $x_2=x_5=0$. Considering the second entry of both sides of $N_1x=\alpha x$, we get $x_3=0$, and so $x_4=0$. This leads to $x=0$, a contradiction.
If one of $x_3$ and $x_4$ equals zero, then $x_3=x_4=0$. Considering the third entry of both sides of $N_1x=\alpha x$, we get $x_2=0$, and so $x_5=0$. This leads to $x=0$, a contradiction. Thus $x_2,x_3,x_4,x_5\ne 0$. Without loss of generality, we may assume that $x=(0,1,a,-a,-1)^T$. Considering the fourth and the fifth entries of both sides of $N_1x=\alpha x$, we have \[d_5-a=\alpha=d_4+1-\frac{1}{a}.\] Since $m(\alpha)=n-3$ and $n\geq6$, $\alpha$ is integral by Lemma \ref{3-lem-2}. Then $a$ and $\frac{1}{a}$ are both integral. Thus, we have $a=\pm1$, and so $\alpha=d_5-1=d_4$ when $a=1$ or $\alpha=d_5+1=d_4+2$ when $a=-1$. It follows that \begin{equation}\label{eq-5}d_4=d_5-1.\end{equation} On the other hand, by Lemma \ref{3-lem-1}, there also exists an eigenvector $y=(y_1,0,y_3,y_4,y_5)^T$ satisfying $y_1+y_3+y_4+y_5=0$ such that $N_1y=\alpha y$. From the second entry of $N_1y=\alpha y$, we have $y_1+y_3=0$, and therefore $y_4+y_5=0$. If one of $y_1$ and $y_3$ equals zero, then $y_1=y_3=0$. Considering the first entry of both sides of $N_1y=\alpha y$, we get $y_5=0$, and then $y_4=0$. This leads to $y=0$, a contradiction. If one of $y_4$ and $y_5$ equals zero, then $y_4=y_5=0$. Considering the fourth entry of both sides of $N_1y=\alpha y$, we get $y_3=0$, and then $y_1=0$. This leads to $y=0$, a contradiction. Thus, $y_1,y_3,y_4,y_5\ne 0$. Without loss of generality, we may assume that $y=(b,0,-b,1,-1)^T$. Considering the fourth and the fifth entries of both sides of $N_1y=\alpha y$, we have \[d_4+b+1=\alpha=d_5+b+1.\] It follows that $d_4=d_5$, which contradicts (\ref{eq-5}).
Second, assume that $J_2$ is an induced subgraph of $G$ and $N_2$ is given by \[N_2=\left(\begin{array}{ccccc}d_1&-1&0&0&-1\\-1&d_2&-1&0&-1\\0&-1&d_3&-1&0\\0&0&-1&d_4&-1\\-1&-1&0&-1&d_5\end{array}\right) \begin{array}{l}v_1\\v_2\\v_3\\v_4\\u\end{array}.\] By Lemma \ref{3-lem-1}, there exists an eigenvector $x=(x_1,x_2,x_3,x_4,0)^T$ satisfying $x_1+x_2+x_3+x_4=0$ such that $N_2x=\alpha x$. From the fifth entry of $N_2x=\alpha x$, we have $x_1+x_2+x_4=0$, which implies that $x_3=0$. Then from the third entry of $N_2x=\alpha x$, we get $x_2+x_4=0$, which implies that $x_1=0$, and then from the first entry we have $x_2=0$. Thus, $x_4=0$ and hence $x=0$, a contradiction. Finally, assume that $J_3$ is an induced subgraph of $G$ and $N_3$ is given by \[N_3=\left(\begin{array}{ccccc}d_1&-1&0&0&-1\\-1&d_2&-1&0&-1\\0&-1&d_3&-1&-1\\0&0&-1&d_4&-1\\-1&-1&-1&-1&d_5\end{array}\right) \begin{array}{l}v_1\\v_2\\v_3\\v_4\\u\end{array}.\] By Lemma \ref{3-lem-1}, there exists an eigenvector $x=(x_1,x_2,0,x_4,x_5)^T$ satisfying $x_1+x_2+x_4+x_5=0$ such that $N_3x=\alpha x$. Considering the third, the first and the fourth entries of both sides of $N_3x=\alpha x$ successively, we get $x=0$, a contradiction.
\end{proof} \begin{figure}[htbp] \begin{center} \unitlength 3.6mm \linethickness{0.4pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(24,9)(0,0) \thicklines \put(2,8){\line(1,0){4}} \multiput(6,8)(.0333333,-.1){30}{\line(0,-1){.1}} \multiput(7,5)(-.03370787,-.03370787){89}{\line(0,-1){.03370787}} \multiput(4,2)(-.03370787,.03370787){89}{\line(0,1){.03370787}} \multiput(1,5)(.0333333,.1){30}{\line(0,1){.1}} \multiput(9,5)(.0333333,.1){30}{\line(0,1){.1}} \put(10,8){\line(1,0){4}} \multiput(14,8)(.0333333,-.1){30}{\line(0,-1){.1}} \multiput(15,5)(-.03370787,-.03370787){89}{\line(0,-1){.03370787}} \multiput(12,2)(-.03370787,.03370787){89}{\line(0,1){.03370787}} \multiput(17,5)(.0333333,.1){30}{\line(0,1){.1}} \put(18,8){\line(1,0){4}} \multiput(22,8)(.0333333,-.1){30}{\line(0,-1){.1}} \multiput(23,5)(-.03370787,-.03370787){89}{\line(0,-1){.03370787}} \multiput(20,2)(-.03370787,.03370787){89}{\line(0,1){.03370787}} \put(12,2){\line(-1,3){2}} \put(18,8){\line(1,-3){2}} \put(20,2){\line(1,3){2}} \put(2,8){\circle*{.5}} \put(6,8){\circle*{.5}} \put(10,8){\circle*{.5}} \put(14,8){\circle*{.5}} \put(18,8){\circle*{.5}} \put(22,8){\circle*{.5}} \put(1,5){\circle*{.5}} \put(7,5){\circle*{.5}} \put(9,5){\circle*{.5}} \put(15,5){\circle*{.5}} \put(17,5){\circle*{.5}} \put(23,5){\circle*{.5}} \put(20,2){\circle*{.5}} \put(12,2){\circle*{.5}} \put(4,2){\circle*{.5}} \put(0.3,5){\makebox(0,0)[cc]{\footnotesize$v_1$}} \put(2,8.7){\makebox(0,0)[cc]{\footnotesize$v_2$}} \put(6,8.7){\makebox(0,0)[cc]{\footnotesize$v_3$}} \put(7.7,5){\makebox(0,0)[cc]{\footnotesize$v_4$}} \put(10,5){\makebox(0,0)[cc]{\footnotesize$v_1$}} \put(10,8.7){\makebox(0,0)[cc]{\footnotesize$v_2$}} \put(14,8.7){\makebox(0,0)[cc]{\footnotesize$v_3$}} \put(15.7,5){\makebox(0,0)[cc]{\footnotesize$v_4$}} \put(18,5){\makebox(0,0)[cc]{\footnotesize$v_1$}} \put(18,8.7){\makebox(0,0)[cc]{\footnotesize$v_2$}} \put(22,8.7){\makebox(0,0)[cc]{\footnotesize$v_3$}} 
\put(23.7,5){\makebox(0,0)[cc]{\footnotesize$v_4$}} \put(4.6,2){\makebox(0,0)[cc]{\footnotesize$u$}} \put(12.6,2){\makebox(0,0)[cc]{\footnotesize$u$}} \put(20.6,2){\makebox(0,0)[cc]{\footnotesize$u$}} \put(4,0.5){\makebox(0,0)[cc]{\small$J_1=C_5$}} \put(12,0.5){\makebox(0,0)[cc]{\small$J_2$}} \put(20,0.5){\makebox(0,0)[cc]{\small$J_3$}} \end{picture} \caption{The graphs in Lemma \ref{3-lem-3}} \label{fig-2} \end{center} \end{figure} \begin{lem}\label{3-lem-4} Let $G\in\mathcal{G}(n,n-3)$ with $n\ge 6$. If $G\neq G_{n,r}$ (shown in Fig. \ref{fig-1}), then $\bar{G}$ is disconnected. \end{lem} \begin{proof} By Lemma \ref{2-lem-3}, it suffices to show that $G$ contains no induced $P_4$. Assume by contradiction that $G$ contains an induced $P_4=v_1v_2v_3v_4$. If $G$ has three distinct Laplacian eigenvalues, then $d(G)=2$ by Lemma \ref{2-lem-2}. Therefore, there exists a vertex $u\in V(G)$ such that $u\sim v_1,v_4$, since otherwise $d(v_1,v_4)\ge3$. It follows that at least one of $J_1$, $J_2$ and $J_3$ will be an induced subgraph of $G$, which contradicts Lemma \ref{3-lem-3}. If $G$ has four distinct Laplacian eigenvalues, then $G\in\{K_1\nabla2K_{\frac{n-1}{2}},\overline{K}_{\frac{n}{3}}\nabla2K_{\frac{n}{3}},K_{n-1}+e,G_{n,r}\}$ by Theorem \ref{3-thm-1}(ii); since $G\neq G_{n,r}$, each of the remaining graphs has a disconnected complement, and the result follows directly. \end{proof} We now give the characterization of the graphs in $\mathcal{G}_1(n,n-3)$ and $\mathcal{G}_2(n,n-3)$. \begin{thm}\label{3-thm-3} For an integer $n\geq6$, we have \\ (i) $\mathcal{G}_1(n,n-3)=\{3K_1\nabla K_{n-3},C_4\nabla K_{n-4}\}$ and their Laplacian spectra are given by \begin{equation}\label{eq-7} \left\{\begin{array}{l} \emph{Spec}_{L}(3K_1\nabla K_{n-3})=\{[n]^{n-3},[n-3]^2,[0]^1\}\\ \emph{Spec}_{L}(C_4\nabla K_{n-4})=\{[n]^{n-3},[n-2]^2,[0]^1\}\\ \end{array}\right..
\end{equation} (ii) $\mathcal{G}_2(n,n-3)=\{K_2\nabla(n-2)K_1,K_1\nabla K_{\frac{n-1}{2},\frac{n-1}{2}},K_{\frac{n}{3},\frac{n}{3},\frac{n}{3}}\}$ and their Laplacian spectra are given by \begin{equation}\label{eq-8} \left\{\begin{array}{l} \emph{Spec}_{L}(K_2\nabla(n-2)K_1)=\{[n]^2,[2]^{n-3},[0]^1\}\\ \emph{Spec}_{L}(K_1\nabla K_{\frac{n-1}{2},\frac{n-1}{2}})=\{[n]^2,[\frac{n+1}{2}]^{n-3},[0]^1\}\\ \emph{Spec}_{L}(K_{\frac{n}{3},\frac{n}{3},\frac{n}{3}})=\{[n]^2,[\frac{2n}{3}]^{n-3},[0]^1\}\\ \end{array}\right. \end{equation} \end{thm} \begin{proof} Let $G\in\mathcal{G}_1(n,n-3)\cup\mathcal{G}_2(n,n-3)$. Then $G$ has three distinct Laplacian eigenvalues, say $\alpha>\beta>0$, and so $G\neq G_{n,r}$. By Lemma \ref{3-lem-4}, we know that $\bar{G}$ is disconnected, and so $\alpha=n$ from Lemma \ref{2-lem-1} (i) and (iii). First suppose that $\alpha=n$ has multiplicity $n-3$, so that $G$ has Laplacian spectrum $\{n^{n-3},\beta^2,0\}$. Then, by Lemma \ref{2-lem-1}, the L-spectrum of $\bar{G}$ is given by \[\mathrm{Spec}_L(\bar{G})=\{[n-\beta]^2,[0]^{n-2}\},\] which implies that $\bar{G}$ has $n-2$ connected components, say $\bar{G}=G_1\cup G_2\cup\cdots\cup G_{n-2}$. It is easy to see that $\bar{G}=K_3\cup(n-3)K_1$ or $2K_{2}\cup(n-4)K_1$. Consequently, $G=3K_1\nabla K_{n-3}$ or $G=C_4\nabla K_{n-4}$. Thus (i) holds. Next suppose that $\beta$ has multiplicity $n-3$, so that $G$ has Laplacian spectrum $\{n^2,\beta^{n-3},0\}$. Similarly, by Lemma \ref{2-lem-1}, we have \[\mathrm{Spec}_L(\bar{G})=\{[n-\beta]^{n-3},[0]^3\},\] which implies that $\bar{G}$ has three connected components, say $\bar{G}=G_1\cup G_2\cup G_3$. It is routine to verify that $\bar{G}=2K_1\cup K_{n-2}$, $K_1\cup 2K_{\frac{n-1}{2}}$ or $3K_{\frac{n}{3}}$. Hence $G=K_2\nabla (n-2)K_1$, $K_1\nabla K_{\frac{n-1}{2},\frac{n-1}{2}}$ or $K_{\frac{n}{3},\frac{n}{3},\frac{n}{3}}$. Thus (ii) holds.
Note that all of the graphs we have found are joins of two graphs; hence, by Lemma \ref{2-lem-1} (iv), we obtain their Laplacian spectra, which are shown in (\ref{eq-7}) and (\ref{eq-8}). \end{proof} \begin{remark}\label{re-1} By using the software SageMath, we obtain the graphs of $\mathcal{G}(n,n-3)$ with $n=4$ and $n=5$, namely \[\left\{\begin{array}{l} \mathcal{G}(4,1)=\{P_4,K_{1,3}+e\}\\ \mathcal{G}(5,2)=\{C_5,K_1\nabla C_4,K_2\nabla3K_1\}\\ \end{array}\right.\] \end{remark} \begin{cor}\label{3-cor-1} Let $G\in\mathcal{G}(n,n-3)$ with $n\geq4$. Then $G$ is determined by its Laplacian spectrum. \end{cor} \begin{proof} Let $H$ be a connected graph on $n$ vertices with $\emph{Spec}_L(H)=\emph{Spec}_L(G)$. Then $H\in\mathcal{G}(n,n-3)$, and the result follows by pairwise comparison of the Laplacian spectra of the graphs in $\mathcal{G}(n,n-3)$. \end{proof}
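The spectra listed above lend themselves to a direct numerical check. The following sketch (ours, not part of the paper; the helper function names are our own) builds the Laplacians of two of the graphs for $n=8$ with numpy and compares their spectra with the formulas of Theorem \ref{3-thm-1}(iii) and Theorem \ref{3-thm-3}(i):

```python
import numpy as np

def laplacian(adj):
    """Laplacian matrix L = D - A of a graph given by a 0/1 adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

def complete_bipartite(p, q):
    """Adjacency matrix of K_{p,q}."""
    n = p + q
    a = np.zeros((n, n))
    a[:p, p:] = 1.0
    a[p:, :p] = 1.0
    return a

def join_empty_complete(s, t):
    """Adjacency matrix of sK_1 nabla K_t (join of an empty graph with a clique)."""
    n = s + t
    a = np.ones((n, n)) - np.eye(n)
    a[:s, :s] = 0.0  # the s independent vertices are pairwise non-adjacent
    return a

def spectrum(adj):
    """Sorted Laplacian spectrum, rounded to suppress floating-point noise."""
    return sorted(np.round(np.linalg.eigvalsh(laplacian(adj)), 8).tolist())

n = 8
# Theorem: Spec_L(K_{2,n-2}) = {[n]^1, [n-2]^1, [2]^{n-3}, [0]^1}
assert spectrum(complete_bipartite(2, n - 2)) == sorted([n, n - 2] + [2] * (n - 3) + [0])
# Theorem: Spec_L(3K_1 nabla K_{n-3}) = {[n]^{n-3}, [n-3]^2, [0]^1}
assert spectrum(join_empty_complete(3, n - 3)) == sorted([n] * (n - 3) + [n - 3] * 2 + [0])
```

Both multisets agree with the stated formulas, and the same check goes through for the other graphs in the theorems.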
\section{Introduction} The simplest form of the three-body problem is called the restricted three-body problem (RTBP), in which a particle of infinitesimal mass moves in the gravitational field of two massive bodies orbiting according to the exact solution of the two-body problem. In the circular problem, the two finite masses are fixed in a coordinate system rotating at the orbital angular velocity, with the origin (axis of rotation) at the centre of mass of the two bodies. Lagrange showed that in this rotating frame there are five stationary points at which the massless particle would remain fixed if placed there. Three of these points lie on the line connecting the two finite masses: one between the masses and one outside each of the masses. The other two stationary points, called the triangular points, are located equidistant from the two finite masses, at a distance equal to the separation of the finite masses; in the classical case they are stable. The two masses and the triangular stationary points are thus located at the vertices of equilateral triangles in the plane of the circular orbit. There is a group of enthusiasts who want to set up a colony at the $L_5$ point of the Earth-Moon system. As already noted, because $L_4$ and $L_5$ are stable points of equilibrium, they have been proposed as sites for large self-contained \lq\lq Space colonies\rq\rq, an idea developed and advocated by the late \cite{O'Neill1974}. The three-body problem has interesting applications for artificial satellites and future space colonization. The triangular points of the Sun-Jupiter or Sun-Earth system would be convenient sites to locate future space colonies. The application of these results to realistic problems is obvious. The classical restricted three-body problem is generalized here to include the force of radiation pressure, the Poynting-Robertson (P-R) effect and the oblateness effect.
The photogravitational restricted three body problem arises from the classical problem when at least one of the interacting bodies exerts radiation pressure, for example, in binary star systems (both primaries radiating). The photogravitational restricted three body problem has been studied under different aspects by \citet*{Radzievskii1950}, \citet*{Chernikov1970}, \citet*{Bhatnagar1979}, \citet*{Schuerman1980}, \citet*{KushvahBR2006} and \citet*{KushvahetalNr2007}. The Poynting-Robertson drag, named after John Henry Poynting and Howard Percy Robertson, is a process by which solar radiation causes dust grains in a solar system to slowly spiral inward. \cite{Poynting1903} considered the effect of the absorption and subsequent re-emission of sunlight by small isolated particles in the solar system. His work was later modified by \cite{Robertson1937}, who used a precise relativistic treatment to first order in the ratio of the velocity of the particle to that of light. The location and stability of the five Lagrangian equilibrium points in the planar, circular restricted three-body problem were investigated by \citet*{Murray1994} for the case in which the third body is acted on by a variety of drag forces. The approximate locations of the displaced equilibrium points are calculated for small mass ratios, and a simple criterion for their linear stability is derived. They showed that if $a_1$ and $a_3$ denote the coefficients of the linear and cubic terms in the characteristic equation derived from a linear stability analysis, then an equilibrium point is asymptotically stable provided $0<a_1< a_3$. In cases where $a_1\approx0$ or $a_1\approx a_3$, the point is unstable, but there is a difference in the e-folding time scales of the shifted $L_4$ and $L_5$ points such that the $L_4$ point, if it exists, is less unstable than the $L_5$ point. The results are applied to a number of general and specific drag forces.
They have shown that, contrary to intuition, certain drag forces produce asymptotic stability of the displaced triangular equilibrium points, $L_4$ and $L_5$. \citet*{KushvahBR2006} examined the linear stability of the triangular equilibrium points in the generalised photogravitational restricted three body problem with Poynting-Robertson drag and concluded that the triangular equilibrium points are unstable due to Poynting-Robertson drag. \citet*{KushvahetalHr2007} performed higher order normalizations in the generalized photogravitational restricted three body problem with Poynting-Robertson drag. \cite{Deprit1967} investigated the nonlinear stability of the triangular points by applying Moser's modified version of Arnold's theorem (1961). \cite{Bhatnagaretal1983} studied the effect of perturbations on the nonlinear stability of the triangular points. \cite{BR1997} studied nonlinear stability in the generalized restricted three body problem, generalized in the sense that the infinitesimal body and one of the primaries are taken to be oblate spheroids. \cite{SubbaRK1997} examined the effect of oblateness on the nonlinear stability of $L_4$ in the restricted three body problem. Hence we aim to study the nonlinear stability of the triangular points in our problem. To examine the nonlinear stability of the triangular points we use the KAM theorem [the work of \citet*{Kolmogorov1957}, extended by \citet*{Arnold1961} and \citet*{Moser1962}]. Moser's conditions are utilised in this study by employing the iterative scheme of Henrard for transforming the Hamiltonian into Birkhoff's normal form with the help of the double D'Alembert series. We have found the second order coefficients in the frequencies. For this we have obtained the partial differential equations which are satisfied by the third order homogeneous components of the fourth order part of the Hamiltonian $H_4$ and second order polynomials in the frequencies.
We have found the coefficients of the sine and cosine terms in the homogeneous components of order three; these are the critical terms. We have eliminated these critical terms by properly choosing the coefficients in the polynomials. We have then obtained the values of the coefficients $A,B,C$ occurring in the fourth order part of the normalized Hamiltonian in the KAM theorem, and have applied the KAM theorem to examine the conditions of nonlinear stability. Using the first condition of the theorem, we have found two critical mass ratios $\mu_{c1},\ \mu_{c2}$ where this condition fails. Taking the second order coefficients, we have calculated the determinant $D$ occurring in the second condition of the theorem, and from this we have found the third critical mass ratio $\mu_{c3}$ where the second condition of the theorem fails. We conclude that the triangular points are stable for all mass ratios in the range of stability except for these three critical mass ratios, at which the KAM theorem fails. The stability conditions differ from the classical case and from earlier studies owing to the radiation pressure, oblateness and P-R drag.
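As a minimal numerical sketch (ours, not part of the paper): in the unperturbed classical limit $q_1=1$, $A_2=W_1=0$, the linear stability analysis carried out in the next section reduces to the classical characteristic equation $\lambda^4+\lambda^2+\frac{27}{4}\mu(1-\mu)=0$, whose discriminant condition $1-27\mu(1-\mu)>0$ yields Routh's critical mass ratio $\mu_{c_0}\approx0.038521$ quoted below:

```python
import math

def discriminant(mu):
    """Discriminant of the classical characteristic equation
    lambda^4 + lambda^2 + (27/4) mu (1 - mu) = 0 (quadratic in lambda^2)."""
    return 1.0 - 27.0 * mu * (1.0 - mu)

# Routh's critical mass ratio: the root of 1 - 27 mu (1 - mu) = 0 below 1/2.
mu_c0 = (1.0 - math.sqrt(23.0 / 27.0)) / 2.0
assert abs(mu_c0 - 0.038521) < 1e-6

# The discriminant changes sign (stable -> unstable) as mu crosses mu_c0.
assert discriminant(mu_c0 - 1e-4) > 0 > discriminant(mu_c0 + 1e-4)

# For a stable mu the roots are purely imaginary: lambda^2 = -omega^2 < 0,
# with 0 < omega_2 < 1/sqrt(2) < omega_1 < 1.
mu = 0.01
s = math.sqrt(discriminant(mu))
w1sq, w2sq = (1.0 + s) / 2.0, (1.0 - s) / 2.0
assert w1sq > 0.5 > w2sq > 0
assert abs(w1sq * w2sq - 27.0 * mu * (1.0 - mu) / 4.0) < 1e-12
```

The perturbed condition derived later shifts this boundary by terms proportional to $\epsilon$, $A_2$ and $W_1$.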
\section{First Order Normalization} \label{sec:1storder} We use the method of \cite{Whittaker1965} for the transformation of $H_2$ into the normal form. The equations of motion are as in \citet*{KushvahBR2006} and are given by \begin{eqnarray} \ddot{x}-2n\dot{y}&=U_x ,\quad \mbox{where}\quad U_x=\frac{\partial{U_1}}{\partial{x}}-\frac{W_{1}n_1}{r^2_1}\\ \ddot{y}+2n\dot{x}&=U_y,\hspace{.85in}U_y=\frac{\partial{U_1}}{\partial{y}}-\frac{W_{1}n_2}{r^2_1}\\ U_1&={\displaystyle}{\frac{n^2(x^2+y^2)}{2}}+\frac{{(1-\mu) }{q_1}}{r_1}+\frac{\mu}{r_2}+\frac{\mu{A_2}}{2r^3_2} \end{eqnarray} \begin{eqnarray*} r^2_1&=&{(x+\mu)}^2+y^2, \quad r^2_2={(x+\mu-1)}^2+y^2,\\\nonumber n^2&=&1+\frac{3}{2}A_2,\\\nonumber n_1&=&\frac{{(x+\mu)}[{(x+\mu)}\dot{x}+y\dot{y}]}{r^2_1}+\dot{x}-ny,\\ n_2&=&\frac{y[{(x+\mu)}\dot{x}+y\dot{y}]}{r^2_1}+\dot{y}+n{(x+\mu)} \end{eqnarray*} Here $W_1=\frac{(1-\mu)(1-q_1)}{c_d}$ and $\mu=\frac{m_2}{m_1+m_2}\leq\frac{1}{2}$, where $m_1,m_2$ are the masses of the primaries; $A_2=\frac{r^2_e-r^2_p}{5r^2}$ is the oblateness coefficient, where $r_e$ and $r_p$ are the equatorial and polar radii, respectively, and $r$ is the distance between the primaries; $c_d= 299792458$ is the dimensionless velocity of light; and $q_1=\bigl(1-\frac{F_p}{F_g}\bigr)$ is the mass reduction factor expressed in terms of the particle's radius $a$, density $\rho$ and radiation pressure efficiency factor $\chi$ (in the C.G.S. system), i.e., $q_1=1-{\displaystyle}{\frac{5.6\times{10^{-5}}\chi}{a\rho}}$. The assumption $q_1=\mathrm{constant}$ amounts to neglecting fluctuations in the beam of solar radiation and the effect of the planet's shadow; obviously $q_1\leq1$.
The triangular equilibrium points are given by $U_x=0$, $U_y=0$, $y\neq{0}$; then we have \begin{eqnarray} x_*&=&x_0\Biggl\{1\nonumber\\&&-{\displaystyle}{\frac{nW_1\bigl[{(1-\mu) }{\Bigl(1+\frac{5}{2}A_2\Bigr) }+\mu{(1-\frac{A_2}{2}) }{\frac{\delta ^2}{2} }\bigr]}{3\mu{(1-\mu) }{y_0 x_0}}}\nonumber\\&&-{\frac{\delta ^2}{2} }\frac{A_2}{x_0}\Biggr\} \label{eq:1x}\\ y_*&=&y_0\Biggl\{1\nonumber\\&&-{\displaystyle}{\frac{nW_1\delta^2\bigl[2\mu-1-\mu(1-\frac{3A_2}{2}){\frac{\delta ^2}{2} }+7{(1-\mu) }\frac{A_2}{2}\bigr]}{3\mu{(1-\mu) }{y^3_0}}}\nonumber\\&&-{\displaystyle}{\frac{\delta^2\bigl(1-{\frac{\delta ^2}{2} }\bigr)A_2}{y^2_0}}\Biggr\}^{1/2}\label{eq:Ly}\end{eqnarray} where $x_0={\frac{\delta ^2}{2} }-\mu$, $y_0=\pm\delta\bigl(1-\frac{\delta^2}{4}\bigr)^{1/2}$ and $\delta=q^{1/3}_1$, as in \cite{KushvahBR2006}. The Lagrangian function of the problem can be written as \begin{eqnarray} L&=&\frac{1}{2}(\dot{x}^2+\dot{y}^2)+n(x\dot{y}-\dot{x}y)+\frac{n^2}{2}(x^2+y^2)\nonumber\\&&+\frac{{(1-\mu) }{q_1}}{r_1}+\frac{\mu}{r_2}+\frac{\mu{A_2}}{2r^3_2}\nonumber\\ &&+W_1\Bigl\{\frac{{(x+\mu)}\dot{x}+y\dot{y}}{2r^2_1}-n \arctan{\frac{y}{{(x+\mu)}}}\Bigr\}\nonumber\\ \end{eqnarray} and the Hamiltonian is $H=-L+p_x\dot{x}+p_y\dot{y}$, where $p_x,p_y$ are the momenta coordinates given by \[ p_x=\frac{\partial{L}}{\partial{\dot{x}}}=\dot{x}-ny+\frac{W_1}{2r_1^2}{(x+\mu)}, \]\[ p_y=\frac{\partial{L}}{\partial{\dot{y}}}=\dot{y}+nx+\frac{W_1}{2r_1^2}y \] For simplicity we suppose $q_1=1-\epsilon$, with $|\epsilon|\ll1$; then the coordinates of the triangular equilibrium point $L_4$ can be written in the form \begin{eqnarray} x&=&\frac{\gamma}{2}-\frac{\epsilon}{3}-\frac{A_2}{2}+\frac{A_2 \epsilon}{3}\nonumber\\&&-\frac{(9+\gamma)}{6\sqrt{3}}W_1-\frac{4\gamma \epsilon}{27\sqrt{3}}W_1 \\ y&=&\frac{\sqrt{3}}{2}\Bigl\{1-\frac{2\epsilon}{9}-\frac{A_2}{3}-\frac{2A_2 \epsilon}{9}\nonumber\\&&+\frac{(1+\gamma)}{9\sqrt{3}}W_1-\frac{4\gamma \epsilon}{27\sqrt{3}}W_1\Bigr\} \end{eqnarray} where $\gamma=1-2\mu$.
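In the classical limit $\epsilon=A_2=W_1=0$ (so $n=1$) these series reduce to the familiar $L_4$ coordinates $x=\gamma/2=\frac{1}{2}-\mu$, $y=\frac{\sqrt{3}}{2}$. A quick numerical sketch (ours, not from the paper; the function names are our own) confirms that this point annihilates the gradient of the unperturbed effective potential:

```python
import math

def classical_l4(mu):
    """Classical L4 coordinates: the epsilon = A2 = W1 = 0 limit of the series above."""
    gamma = 1.0 - 2.0 * mu
    return gamma / 2.0, math.sqrt(3.0) / 2.0

def effective_gradient(x, y, mu):
    """Gradient of the classical rotating-frame potential U1 with n = 1, q1 = 1, A2 = 0."""
    r1 = math.hypot(x + mu, y)        # distance to the primary of mass 1 - mu
    r2 = math.hypot(x + mu - 1.0, y)  # distance to the primary of mass mu
    ux = x - (1.0 - mu) * (x + mu) / r1**3 - mu * (x + mu - 1.0) / r2**3
    uy = y - (1.0 - mu) * y / r1**3 - mu * y / r2**3
    return ux, uy

mu = 0.01
ux, uy = effective_gradient(*classical_l4(mu), mu)
# At L4 both distances equal 1 and the gradient vanishes (up to round-off).
assert abs(ux) < 1e-12 and abs(uy) < 1e-12
```

The perturbed coordinates $(x_*,y_*)$ then shift away from this point by terms of order $\epsilon$, $A_2$ and $W_1$, as in equations (\ref{eq:1x}) and (\ref{eq:Ly}).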
We shift the origin to $L_4$. For that, we change $x\rightarrow {x_*}+x$ and $y\rightarrow{y_*}+y$. Let $a=x_*+\mu$, $b=y_*$, so that \begin{eqnarray} a&=&\frac{1}{2} \Biggl\{1-\frac{2\epsilon}{3}-A_2+\frac{2A_2 \epsilon}{3}\nonumber\\&&-\frac{(9+\gamma)}{3\sqrt{3}}W_1-\frac{8\gamma \epsilon}{27\sqrt{3}}W_1 \Biggr\}\\ b&=&\frac{\sqrt{3}}{2}\Bigl\{1-\frac{2\epsilon}{9}-\frac{A_2}{3}-\frac{2A_2 \epsilon}{9}\nonumber\\&&+\frac{(1+\gamma)}{9\sqrt{3}}W_1-\frac{4\gamma \epsilon}{27\sqrt{3}}W_1\Bigr\} \end{eqnarray} Expanding $L$ in a power series of $x$ and $y$, we get \begin{eqnarray} L&=&L_0+L_1+L_2+L_3+\cdots \\ H&=&H_0+H_1+H_2+H_3+\cdots\nonumber\\&& =-L+p_x{\dot{x}}+p_y{\dot{y}} \end{eqnarray} where $L_0,L_1,L_2,L_3 \ldots$ are \begin{eqnarray} L_0&=&\frac{3}{2}-\frac{2\epsilon}{3}-\frac{\gamma \epsilon}{3}+\frac{ 3 \gamma A_2}{4}-\frac{3 A_2 \epsilon}{2}-\gamma A_2 \nonumber \\ &&-\frac{\sqrt{3}W_1}{4}+\frac{2\gamma}{3\sqrt{3}}W_1\nonumber\\&&-\frac{ \epsilon W_1}{3\sqrt{3}}-\frac{23\epsilon W_1}{54\sqrt{3}}-n \arctan{\frac{b}{a}} \end{eqnarray} \begin{eqnarray} L_1&&=\dot{x}\bigl\{-\frac{\sqrt{3}}{2}-\frac{5 A_2 }{8\sqrt{3}}+\frac{7\epsilon A_2}{12\sqrt{3}}+\frac{4 W_1}{9}-\frac{\gamma W_1}{18}\bigr\}\nonumber \\&&+ \dot{y}\bigl\{\frac{1}{2}-\frac{\epsilon}{3}-\frac{A_2 }{8}+\frac{\epsilon A_2}{12\sqrt{3}}-\frac{ W_1}{6\sqrt{3}}+\frac{2 \epsilon W_1}{3\sqrt{3}}\bigr\} \nonumber \\ & &-x \bigl\{-\frac{1}{2}+\frac{\gamma}{2}+\frac{9 A_2}{8}+\frac{15\gamma A_2}{8}-\frac{35\epsilon A_2}{12}\nonumber\\&&-\frac{29\gamma \epsilon A_2}{12}+ \frac{3\sqrt{3}W_1}{8}-\frac{2\gamma}{3\sqrt{3}}W_1-\frac{5 \epsilon W_1}{12\sqrt{3}}\bigr\}\nonumber\\&&-y \bigl\{\frac{15\sqrt{3}A_2}{2}+\frac{9\sqrt{3}\gamma A_2}{8}-2\sqrt{3} \epsilon A_2-2\sqrt{3}\gamma \epsilon A_2\nonumber\\&&- \frac{W_1}{8}+\gamma W_1-\frac{43 \epsilon }{36}W_1 \bigr\} \end{eqnarray} \begin{eqnarray} L_2&=&\frac{(\dot x^2+ \dot y^2)}{2}+n(x\dot y-\dot x y)+ \frac{n^2}{2}(x^2+y^2)\nonumber\\&&-Ex^2-Fy^2-G xy
\end{eqnarray} \begin{equation} L_3=-\frac{1}{3!}\left\{x^3T_1+3x^2yT_2+3xy^2T_3+y^3T_4+6T_5\right\}\label{eq:L3} \end{equation} \begin{eqnarray} L_4&=&-\frac{1}{4!}\Bigl\{N_1x^4+4N_2x^3y+6N_3x^2y^2\nonumber\\&&+4N_4xy^3+24N_6\Bigr\}\label{eq:L4} \end{eqnarray} where \begin{eqnarray} E&=&\frac{1}{16}\Bigl[ 2-6\epsilon- 3A_2- \frac{31A_2\epsilon}{2}-\frac{69W_1}{6\sqrt{3}} \nonumber\\&&+\gamma \bigl\{2\epsilon+12A_2+ \frac{A_2\epsilon}{3}+\frac{199W_1}{6\sqrt{3}}\bigr\}\Bigr] \end{eqnarray} \begin{eqnarray} F&=&\frac{-1}{16}\Bigl[ 10-2\epsilon+21A_2- \frac{717A_2\epsilon}{18}-\frac{67W_1}{6\sqrt{3}} \nonumber \\&+&\gamma \bigl\{6\epsilon- \frac{293A_2\epsilon}{18}+\frac{187W_1}{6\sqrt{3}}\bigr\}\Bigr] \end{eqnarray} \begin{eqnarray} G&=&\frac{\sqrt{3}}{8}\Bigl[2\epsilon+6A_2- \frac{37A_2\epsilon}{2}-\frac{13W_1}{2\sqrt{3}} \nonumber \\&&-\gamma \bigl\{6\epsilon-\frac{\epsilon}{3}+13A_2- \frac{33A_2\epsilon}{2}+\frac{11W_1}{2\sqrt{3}}\bigr\}\Bigr]\nonumber \\ \end{eqnarray} $T_i,N_j,(i=1,\dots,5,\ j=1,\dots,6 )$ are as in \citet*{KushvahetalHr2007}. The second order part $H_2$ of the corresponding Hamiltonian takes the form \begin{equation} H_2=\frac{p_x^2+p_y^2}{2}+n(yp_x-xp_y)+Ex^2+Fy^2+Gxy \end{equation} To investigate the stability of the motion, as in \cite{Whittaker1965}, we consider the following set of linear equations in the variables $x, y$: \begin{equation*} \begin{array}{l c l} -\lambda p_x& = & \frac{\partial{H_2}}{\partial x},\\&&\\ -\lambda p_y& = & \frac{\partial{H_2}}{\partial y},\\ \end{array} \quad \begin{array}{l c l } \lambda x& = & \frac{\partial{H_2}}{\partial p_x}\\&&\\ \lambda y& = & \frac{\partial{H_2}}{\partial p_y}\\ \end{array} \end{equation*} i.e.
\begin{equation} AX=\mathbf{0} \label{eq:ax}\end{equation} where \begin{equation} X=\left[\begin{array}{c} x\\ y\\ p_x\\ p_y \end{array}\right], \quad A=\left[\begin{array}{c c c c} 2E & G&\lambda& -n\\ G&2F&n&\lambda\\ -\lambda& n& 1& 0\\ -n & -\lambda& 0& 1\end{array}\right] \end{equation} For a nontrivial solution we require $|A|=0$, which implies that the characteristic equation corresponding to the Hamiltonian $H_2$ is given by \begin{equation} \lambda^4+2(E+F+n^2)\lambda^2+4EF -G^2+n^4-2n^2(E+F)=0 \label{eq:ch} \end{equation} The discriminant of this characteristic equation is \begin{equation} D=4(E+F+n^2)^2-4\bigl\{4EF-G^2+n^4-2n^2(E+F)\bigr\} \end{equation} Stability is assured only when $D>0$, i.e. \begin{eqnarray} \mu<\mu_{c_0}-0.221896\epsilon +2.103887A_2 +\nonumber\\ 0.493433\epsilon A_2 +0.704139 W_1 + 0.401154\epsilon W_1\label{eq:muc0} \end{eqnarray} where $\mu_{c_0}=0.038521$ is Routh's critical mass ratio. When $D>0$, the roots $\pm i \omega_1$ and $\pm i \omega_2$ ($\omega_1,$ $\omega_2$ being the long- and short-periodic frequencies) are related to each other as \begin{eqnarray} \omega_1^2+\omega_2^2&=& 1-\frac{\gamma \epsilon}{2}+\frac{3\gamma A_2}{2}+\frac{83\epsilon A_2}{12}-\frac{W_1}{24\sqrt{3}}\nonumber\\ \label{eq:w1+w2} \end{eqnarray} \begin{eqnarray} \omega_1^2\omega_2^2&=&\frac{27}{16} -\frac{27\gamma^2}{16}+\frac{9\epsilon}{8}+\frac{9\gamma\epsilon}{8} +\frac{117\gamma A_2}{16}\nonumber\\ &&-\frac{241\epsilon A_2}{32}+\frac{35W_1}{16\sqrt{3}}-\frac{55 \sqrt{3}\gamma W_1}{16}\label{eq:w1w2}\\ &&(0<\omega_2<\frac{1}{\sqrt{2}}<\omega_1<1)\nonumber\end{eqnarray} From (\ref{eq:w1+w2}) and (\ref{eq:w1w2}) it may be noted that $\omega_j$ $(j=1,2)$ satisfy \begin{eqnarray} &&\gamma^2= 1+\frac{4\epsilon}{9}-\frac{107\epsilon A_2}{27}+\frac{2\gamma \epsilon }{3}-\frac{25W_1}{27\sqrt{3}}\nonumber\\&& +\biggl(-\frac{16}{27}+\frac{32\epsilon}{243}+\frac{208 A_2}{81}\nonumber\\&&-\frac{8\gamma A_2}{27}-\frac{4868\epsilon A_2}{729}+\frac{296W_1}{243\sqrt{3}}\biggr)\omega_j^2\nonumber\\ &&
+\biggl(\frac{16}{27}-\frac{32\epsilon}{243}-\frac{208 A_2}{81}\nonumber\\&&-\frac{1880\epsilon A_2}{729}-\frac{2720W_1}{2187\sqrt{3}}\biggr)\omega_j^4 \end{eqnarray} Alternatively, it can also be seen that if $u=\omega_1\omega_2$, then (\ref{eq:w1w2}) gives \begin{eqnarray} \gamma^2&=& 1+\frac{4\epsilon}{9}-\frac{107\epsilon A_2}{27}-\frac{25W_1}{27\sqrt{3}}\nonumber\\&&+\gamma\biggl(\frac{2\epsilon }{3}+\frac{1579\epsilon A_2}{324}-\frac{55\gamma W_1}{9\sqrt{3}}\biggr)\nonumber\\&& +\biggl(-\frac{16}{27}+\frac{32\epsilon}{243}+\frac{208 A_2}{81}\nonumber\\&&-\frac{1880\epsilon A_2}{729}+\frac{320W_1}{243\sqrt{3}}\biggr)u^2 \end{eqnarray} Following the method for reducing $H_2$ to the normal form, as in \cite{Whittaker1965}, we use the transformation \begin{equation}X=JT \label{eq:XJT}\end{equation} \begin{equation*} \mbox{where}\, X=\left[\begin{array}{c} x\\y\\p_x\\p_y\end{array}\right],J=[J_{ij}]_{1\leq i, j \leq 4},\ T=\left[\begin{array}{c} Q_1\\Q_2\\P_1\\P_2\end{array}\right] \end{equation*} where $J_{ij}$ are as in \citet*{KushvahetalHr2007},\, $P_i= (2 I_i\omega_i)^{1/2}\cos{\phi_i},$ $Q_i= (\frac{2 I_i}{\omega_i})^{1/2}\sin{\phi_i},$ $(i=1,2)$. The transformation changes the second order part of the Hamiltonian into the normal form \begin{equation} H_2=\omega_1I_1-\omega_2I_2\end{equation} The general solution of the corresponding equations of motion is \begin{equation} I_i=\mbox{const.}, \quad \phi_i=\pm \omega_i t+\mbox{const.},\ (i=1,2) \label{eq:intmt} \end{equation} If the oscillations about $L_4$ are exactly linear, Eqs.~(\ref{eq:intmt}) represent the integrals of motion and the corresponding orbits will be given by \begin{eqnarray}x&=&J_{13}\sqrt{2\omega_1I_1}\cos{\phi_1}+J_{14}\sqrt{2\omega_2I_2}\cos{\phi_2}\label{eq:xb110}\end{eqnarray} \begin{eqnarray}
y&=&J_{21}\sqrt{\frac{2I_1}{\omega_1}}\sin{\phi_1}+J_{22}\sqrt{\frac{2I_2}{\omega_2}}\sin{\phi_2}\nonumber\\&&+J_{23}\sqrt{2I_1\omega_1}\cos{\phi_1}+J_{24}\sqrt{2I_2\omega_2}\cos{\phi_2}\label{eq:yb101} \end{eqnarray} \section{Second Order Normalization} \label{sec:2ndorder} In order to perform Birkhoff's normalization, we use Henrard's method (\citealt{Deprit1967}), in which the coordinates $(x,y)$ of the infinitesimal body are expanded in double D'Alembert series $x=\sum_{n\geq1}B_n^{1,0},\quad y=\sum_{n\geq 1}B_n^{0,1}$, where the homogeneous components $B_n^{1,0}$ and $B_n^{0,1}$ of degree $n$ are of the form \begin{eqnarray} &&\sum_{0\leq{m}\leq{n}} I_1^{\frac{n-m}{2}}I_2^{\frac{m}{2}}\sum_{(p,q)}\bigl[C_{n-m,m,p,q} \cos{(p\phi_1+q\phi_2)}\nonumber\\&&+S_{n-m,m,p,q} \sin{(p\phi_1+q\phi_2)}\bigr] \end{eqnarray} The conditions in the double summation are: (i) $p$ runs over those integers in the interval $0\leq p\leq n-m$ that have the same parity as $n-m$; (ii) $q$ runs over those integers in the interval $-m\leq q\leq m$ that have the same parity as $m$. Here $I_1$, $I_2$ are the action momenta coordinates, which are to be taken as constants of integration, and $\phi_1$, $\phi_2$ are angle coordinates to be determined as linear functions of time in such a way that $\dot\phi_1=\omega_1+\sum_{n\geq 1}f_{2n}(I_1,I_2),\ \dot\phi_2=-\omega_2+\sum_{n\geq 1}g_{2n}(I_1,I_2)$, where $\omega_1,\omega_2$ are the basic frequencies and $f_{2n}$, $g_{2n}$ are of the form \begin{eqnarray} f_{2n}&=&\sum_{0\leq m\leq n}{f'}_{2(n-m),2m}I_1^{n-m}I_2^m\\ g_{2n}&=&\sum_{0\leq m\leq n}{g'}_{2(n-m),2m}I_1^{n-m}I_2^m \end{eqnarray} The first order components $B_1^{1,0}$ and $B_1^{0,1}$ are the values of $x$ and $y$ given by (\ref{eq:xb110}) and (\ref{eq:yb101}).
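The parity conditions (i)-(ii) can be enumerated mechanically. The following sketch (ours, not part of the paper; the helper name is our own) lists the admissible $(p,q)$ pairs for small $n$ and $m$ and confirms that the degree-one components involve only $\cos$/$\sin$ of $\phi_1$ and $\phi_2$ separately, as in the linear solution:

```python
def admissible_indices(n, m):
    """Index pairs (p, q) allowed in the degree-n homogeneous component B_n:
    p in [0, n-m] with the parity of n - m, and q in [-m, m] with the parity of m."""
    ps = [p for p in range(0, n - m + 1) if (p - (n - m)) % 2 == 0]
    qs = [q for q in range(-m, m + 1) if (q - m) % 2 == 0]
    return [(p, q) for p in ps for q in qs]

# Degree one: the only admissible arguments are phi_1 (m = 0) and phi_2 (m = 1),
# matching the first order components B_1, which contain phi_1 and phi_2 alone.
assert admissible_indices(1, 0) == [(1, 0)]
assert admissible_indices(1, 1) == [(0, -1), (0, 1)]

# Degree three with m = 1: mixed harmonics 2*phi_1 +/- phi_2 appear.
assert admissible_indices(3, 1) == [(0, -1), (0, 1), (2, -1), (2, 1)]
```

This bookkeeping is what keeps the iterative scheme finite at each order: only finitely many harmonics can occur in each $B_n$.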
In order to find out the second order components $B_2^{1,0},B_2^{0,1}$ we consider Lagrange's equations of motion \begin{equation} \frac{d}{dt}(\frac{\partial L}{\partial \dot x })-\frac{\partial L}{\partial x }=0, \quad \frac{d}{dt}(\frac{\partial L}{\partial \dot y })-\frac{\partial L}{\partial y }=0 \end{equation} \begin{equation} {i.e.}\,\left.\begin{array}{l c l} \ddot x-2n\dot y+(2E-n^2)x+Gy&=&\frac{\partial L_3}{\partial x }+\frac{\partial L_4}{\partial x }\\ &&\\\ddot y+2n\dot x+(2F-n^2)y+Gx&=&\frac{\partial L_3}{\partial y }+\frac{\partial L_4}{\partial y }\end{array} \right\}\label{eq:lgeq}\end{equation} Since $x$ and $y$ are double D'Alembert series, the products $x^jy^k$ $(j\geq0,k\geq0,j+k>0)$ and the time derivatives $\dot x ,\dot y ,\ddot x, \ddot y $ are also double D'Alembert series. We can write \[\dot x=\sum_{n\geq 1} \dot x_n,\,\dot y=\sum_{n\geq 1} \dot y_n,\,\ddot x=\sum_{n\geq 1} \ddot x_n,\,\ddot y=\sum_{n\geq 1} \ddot y_n \] where $\dot x_n ,\dot y_n ,\ddot x_n, \ddot y_n $ are homogeneous components of degree $n$ in $I_1^{1/2},I_2^{1/2}$, i.e.
\begin{eqnarray} \dot x &=& \frac{d}{dt}\sum_{n\geq 1}B_n^{1,0}\nonumber\\&=&\sum_{n\geq 1}\Biggl[\frac{\partial B_n^{1,0}}{\partial{\phi_1}}(\omega_1+f_2+f_4+\cdots)\nonumber\\&+&\frac{\partial B_n^{1,0}}{\partial{\phi_2}}(-\omega_2+g_2+g_4+\cdots)\Biggr]\end{eqnarray} We write the first three components $\dot x_1 ,\dot x_2 ,\dot x_3$ of $\dot x$: \begin{eqnarray} \dot x_1&=&\omega_1\frac{\partial B_1^{1,0}}{\partial{\phi_1}}-\omega_2\frac{\partial B_1^{1,0}}{\partial{\phi_2}}=DB_1^{1,0}\\ \dot x_2&=&\omega_1\frac{\partial B_2^{1,0}}{\partial{\phi_1}}-\omega_2\frac{\partial B_2^{1,0}}{\partial{\phi_2}}=DB_2^{1,0}\\ \dot x_3&=&\omega_1\frac{\partial B_3^{1,0}}{\partial{\phi_1}}-\omega_2\frac{\partial B_3^{1,0}}{\partial{\phi_2}}+f_2\frac{\partial B_1^{1,0}}{\partial{\phi_1}}+g_2\frac{\partial B_1^{1,0}}{\partial{\phi_2}}\nonumber\\ &=&DB_3^{1,0}+f_2\frac{\partial B_1^{1,0}}{\partial{\phi_1}}+g_2\frac{\partial B_1^{1,0}}{\partial{\phi_2}} \end{eqnarray} where \begin{equation}D\equiv \omega_1\frac{\partial\ }{\partial{\phi_1}}-\omega_2\frac{\partial\ }{\partial{\phi_2}}\end{equation} Similarly, the three components $\ddot x_1 ,\ddot x_2 ,\ddot x_3$ of $\ddot x$ are \begin{eqnarray*} \ddot x_1 &=&D^2B_1^{1,0},\, \ddot x_2=D^2B_2^{1,0},\\&& \ddot x_3=D^2B_3^{1,0}+2\omega_1f_2\frac{\partial^2B_1^{1,0}}{\partial\phi_1^2}-2\omega_2g_2\frac{\partial^2B_1^{1,0}}{\partial\phi_2^2} \end{eqnarray*} In a similar manner we can write the components of $\dot y$ and $\ddot y$.
Putting the values of $x, y, \dot x ,\dot y ,\ddot x $ and $\ddot y$ in terms of double D'Alembert series in Eq.~(\ref{eq:lgeq}), we get \begin{eqnarray} &&\left(D^2+2E-1-\frac{3}{2}A_2\right)B_2^{1,0}\nonumber\\&&-\left\{2\left( 1+\frac{3}{4}A_2\right)D-G\right\}B_2^{0,1} =X_2 \label{eq:x2} \end{eqnarray} \begin{eqnarray}&& \left\{2\left( 1+\frac{3}{4}A_2\right)D+G\right\}B_2^{1,0}\nonumber\\&&+\left(D^2+2F-1-\frac{3}{2}A_2\right)B_2^{0,1}=Y_2\label{eq:y2} \end{eqnarray} where \[X_2=\left[\frac{\partial L_3}{\partial x}\right]_{x=B_1^{1,0},y=B_1^{0,1}},\,\, Y_2=\left[\frac{\partial L_3}{\partial y}\right]_{x=B_1^{1,0},y=B_1^{0,1}}\] These are two simultaneous partial differential equations in $B_2^{1,0}$ and $B_2^{0,1}$. We solve these equations to find $B_2^{1,0}$ and $B_2^{0,1}$; from (\ref{eq:x2}) and (\ref{eq:y2}), \begin{equation} \triangle_1 \triangle_2B_2^{1,0}=\Phi_2,\, \triangle_1 \triangle_2B_2^{0,1}=-\Psi_2 \label{eq:phi_si}\end{equation} \[\mbox{where} \quad \triangle_1=D^2+\omega_1^2, \triangle_2=D^2+\omega_2^2 \] \begin{equation} \Phi_2=(D^2+2F-n^2)X_2+(2nD-G)Y_2 \label{eq:phi2} \end{equation} \begin{equation} \Psi_2=(2nD+G)X_2-(D^2+2E-n^2)Y_2 \label{eq:psi2} \end{equation} Eq.~(\ref{eq:phi_si}) can be solved for $B_2^{1,0}$ and $B_2^{0,1}$ by using the formula \begin{equation*}\frac{1}{\triangle_1\triangle_2}\left\{\begin{array}{c}\cos(p\phi_1+q\phi_2)\\ \mbox{or} \\\sin(p\phi_1+q\phi_2)\end{array}\right.=\frac{1}{\triangle_{p,q}}\left\{\begin{array}{c}\cos(p\phi_1+q\phi_2)\\\mbox{or} \\\sin(p\phi_1+q\phi_2)\end{array}\right.\end{equation*} where \begin{equation*}\triangle_{p,q}=\left[ \omega_1^2-(\omega_1p-\omega_2q)^2\right]\left[ \omega_2^2-(\omega_1p-\omega_2q)^2\right] \end{equation*} provided $\triangle_{p,q}\neq0$. Since $\triangle_{1,0}=0$ and $\triangle_{0,1}=0$, the terms $\cos\phi_1,\sin\phi_1,\cos\phi_2,\sin\phi_2$ are the critical terms; $\Phi_2$ and $\Psi_2$ are free from such terms.
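The behaviour of the divisors $\triangle_{p,q}$ is easy to verify numerically; the sketch below (illustrative only, with arbitrary frequencies in the admissible range $0<\omega_2<\frac{1}{\sqrt{2}}<\omega_1<1$) confirms that $\triangle_{1,0}=\triangle_{0,1}=0$, so that the first harmonics are critical, while the divisors actually needed for the second order solution stay away from zero.

```python
def divisor(p, q, w1, w2):
    """Delta_{p,q} = [w1^2 - (p w1 - q w2)^2] * [w2^2 - (p w1 - q w2)^2]."""
    s = (p * w1 - q * w2) ** 2
    return (w1 ** 2 - s) * (w2 ** 2 - s)
```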
By condition (1) of Moser's theorem, $k_1\omega_1+k_2\omega_2\neq 0$ for all pairs $(k_1,k_2)$ of integers such that $|k_1|+|k_2|\leq4$; therefore each of $\omega_1, \omega_2, \omega_1\pm2\omega_2,\omega_2\pm2\omega_1$ is different from zero, and consequently none of the divisors $\triangle_{0,0}, \triangle_{0,2}, \triangle_{2,0}, \triangle_{1,1}, \triangle_{1,-1}$ is zero. The second order components $B_2^{1,0}, B_2^{0,1}$ are as follows: \begin{eqnarray} B_2^{1,0}&=&r_1I_1+r_2I_2+r_3I_1\cos2\phi_1\nonumber\\&&+r_4I_2\cos2\phi_2 +r_5I_1^{1/2}I_2^{1/2}\cos(\phi_1-\phi_2)\nonumber\\&&+r_6I_1^{1/2}I_2^{1/2}\cos(\phi_1+\phi_2)+r_7I_1\sin2\phi_1\nonumber\\&&+r_8I_2\sin2\phi_2 +r_9I_1^{1/2}I_2^{1/2}\sin(\phi_1-\phi_2)\nonumber\\&&+r_{10}I_1^{1/2}I_2^{1/2}\sin(\phi_1+\phi_2)\label{eq:b2az} \end{eqnarray} \begin{eqnarray} B_2^{0,1}&=&-\Bigl\{s_1I_1+s_2I_2+s_3I_1\cos2\phi_1\nonumber\\&&+s_4I_2\cos2\phi_2+s_5I_1^{1/2}I_2^{1/2}\cos(\phi_1-\phi_2)\nonumber\\ && +s_6I_1^{1/2}I_2^{1/2}\cos(\phi_1+\phi_2)+s_7I_1\sin2\phi_1\nonumber\\&&+s_8I_2\sin2\phi_2+s_9I_1^{1/2}I_2^{1/2}\sin(\phi_1-\phi_2)\nonumber\\&&+s_{10}I_1^{1/2}I_2^{1/2}\sin(\phi_1+\phi_2)\Bigr\}\label{eq:b2za} \end{eqnarray} where $r_i,s_i\ (i=1,\dots,10)$ are as in \citet*{KushvahetalHr2007}. Using the transformation $x=B_1^{{1,0}}+B_2^{{1,0}}$ and $y=B_1^{{0,1}}+B_2^{{0,1}}$, the third order part $H_3$ of the Hamiltonian in $I_1^{1/2},I_2^{1/2}$ is of the form \begin{equation}\label{eq:H3} H_3=A_{3,0}I_1^{3/2}+A_{2,1}I_1I_2^{1/2}+A_{1,2}I_1^{1/2}I_2+A_{0,3}I_2^{3/2} \end{equation} We can verify that in Eq.~(\ref{eq:H3}) $A_{3,0}$ vanishes identically, as in \cite{Deprit1967}. Similarly, the other coefficients $A_{2,1},$ $A_{1,2},$ $A_{0,3}$ are also found to vanish identically. \section{Second Order Coefficients in the Frequencies} \label{sec:2coef} In order to find the second order coefficients $f_{2,0}, f_{0,2}, g_{2,0}, g_{0,2}$ in the polynomials $f_2$ and $g_2$, we proceed as in \cite{Deprit1967}.
Proceeding as in (\ref{eq:phi_si}), we find \begin{equation} \triangle_1 \triangle_2B_3^{1,0}=\Phi_3-2f_2P-2g_2Q\label{eq:phi3} \end{equation} \begin{equation} \triangle_1\triangle_2B_3^{0,1}=\Psi_3-2f_2U-2g_2V \label{eq:psi3} \end{equation} where \begin{equation} \Phi_3=\Bigl[D^2+2F-n^2\Bigr]X_3+\bigl(2nD-G\bigr)Y_3\label{eq:Phi3}\\ \end{equation} \begin{equation} \Psi_3=-\bigl(2nD+G\bigr)X_3+\Bigl[D^2+2E-n^2\Bigr]Y_3\label{eq:Psi3}\end{equation} \begin{eqnarray} P&=&\left[ D^2+2F-n^2 \right]\left[\omega_1\frac{\partial^2B_1^{1,0}}{\partial\phi_1^2}-n\frac{\partial B_1^{0,1}}{\partial \phi_1}\right]\nonumber\\&&+(2nD-G)\left[\omega_1\frac{\partial^2B_1^{0,1}}{\partial\phi_1^2}+n\frac{\partial B_1^{1,0}}{\partial \phi_1}\right]\label{eq:P} \end{eqnarray} \begin{eqnarray} Q&=&-\left[ D^2+2F-n^2 \right]\left[\omega_2\frac{\partial^2B_1^{1,0}}{\partial\phi_2^2}-n\frac{\partial B_1^{0,1}}{\partial \phi_2}\right]\nonumber\\&&-(2nD-G)\left[\omega_2\frac{\partial^2B_1^{0,1}}{\partial\phi_2^2}+n\frac{\partial B_1^{1,0}}{\partial \phi_2}\right]\label{eq:Q} \end{eqnarray} \begin{eqnarray} U&=&-(2nD+G)\left[\omega_1\frac{\partial^2B_1^{1,0}}{\partial\phi_1^2}-n\frac{\partial B_1^{0,1}}{\partial \phi_1}\right]\nonumber\\&+&\left[ D^2+2E-n^2 \right]\left[\omega_1\frac{\partial^2B_1^{0,1}}{\partial\phi_1^2}+n\frac{\partial B_1^{1,0}}{\partial \phi_1}\right]\label{eq:U} \end{eqnarray} \begin{eqnarray} V&=&(2nD+G)\left[\omega_2\frac{\partial^2B_1^{1,0}}{\partial\phi_2^2}-n\frac{\partial B_1^{0,1}}{\partial \phi_2}\right]\nonumber\\&-&\left[ D^2+2E-n^2 \right]\left[\omega_2\frac{\partial^2B_1^{0,1}}{\partial\phi_2^2}+n\frac{\partial B_1^{1,0}}{\partial \phi_2}\right]\label{eq:V} \end{eqnarray} \begin{equation} X_3=\frac{\partial}{\partial x}(L_3+L_4), \quad Y_3=\frac{\partial}{\partial y}(L_3+L_4) \end{equation} i.e.
\begin{eqnarray} X_3&=&\frac{T_1}{2}x^2+T_2 xy+\frac{T_3}{2}y^2+\frac{N_1}{6}x^3+\frac{N_2}{2}x^2y\nonumber\\&&+\frac{N_3}{2}xy^2+\frac{N_4}{6}y^3+\frac{\partial{T_5}}{\partial{x}}+\frac{\partial{N_6}}{\partial{x}}\\ Y_3&=&\frac{T_2}{2}x^2+T_3 xy+\frac{T_4}{2}y^2+\frac{N_2}{6}x^3+\frac{N_3}{2}x^2y\nonumber\\&&+\frac{N_4}{2}xy^2+\frac{N_5}{6}y^3+\frac{\partial{T_5}}{\partial{y}}+\frac{\partial{N_6}}{\partial{y}} \end{eqnarray} Eqs.~(\ref{eq:Phi3}) and (\ref{eq:Psi3}) are the partial differential equations satisfied by the third order components $B_3^{1,0}, B_3^{0,1}$ and by the second order polynomials $f_2,g_2$ in the frequencies. We do not need to find the components $B_3^{1,0}$ and $B_3^{0,1}$ themselves. We find the coefficients of $\cos\phi_1, \sin \phi_1, \cos\phi_2$ and $\sin\phi_2$ in the right hand sides of (\ref{eq:Phi3}) and (\ref{eq:Psi3}). These are the critical terms, since $\triangle_{1,0}=\triangle_{0,1}=0$. We eliminate these terms by properly choosing the coefficients in the polynomials \begin{equation} f_2=f_{2,0}I_1+f_{0,2}I_2, \quad g_2=g_{2,0}I_1+g_{0,2}I_2\label{eq:f2} \end{equation} Further, we find that \begin{eqnarray} f_{2,0}=\frac{(\mbox{coefficient of } \cos\phi_1 \mbox { in } \Phi_3)}{2 (\mbox{ coefficient of } \cos\phi_1 \mbox{ in }P)}=A \\ f_{0,2}=g_{2,0}=\frac{(\mbox{coefficient of } \cos\phi_2 \mbox { in } \Phi_3)}{2( \mbox{ coefficient of } \cos\phi_2 \mbox{ in } Q)}=B \\g_{0,2}=\frac{(\mbox{coefficient of } \cos\phi_2 \mbox { in }\Psi_3)}{2 (\mbox{ coefficient of } \cos\phi_2 \mbox{ in } Q)}=C \\\nonumber \end{eqnarray} where \begin{eqnarray} A&=&A_{1,1}+(A_{1,2}+A_{1,3}\gamma)\epsilon+(A_{1,4}\nonumber\\&&+A_{1,5}\gamma)A_2+(A_{1,6}+A_{1,7}\gamma)W_1 \end{eqnarray} \begin{eqnarray} B&=&B_{1,1}+(B_{1,2}+B_{1,3}\gamma)\epsilon+(B_{1,4}\nonumber\\&&+B_{1,5}\gamma)A_2+(B_{1,6}+B_{1,7}\gamma)W_1 \end{eqnarray} \begin{eqnarray} C&=&C_{1,1}+(C_{1,2}+C_{1,3}\gamma)\epsilon+(C_{1,4}\nonumber\\&&+C_{1,5}\gamma)A_2+(C_{1,6}+C_{1,7}\gamma)W_1
\end{eqnarray} where $A_{1,i},B_{1,i}$ and $C_{1,i}\ (i=1,\dots,7)$ are as in Appendix I. \section{Stability} \label{sec:stab} Condition (i) of the KAM theorem fails when $\omega_1=2\omega_2$ or $\omega_1=3\omega_2$. \subsection{Case (i)}\label{subsect:i} \begin{equation} \mbox{When}\quad \omega_1=2\omega_2\label{eq:KAMi}\end{equation} Then from (\ref{eq:KAMi}) and (\ref{eq:w1w2}) we have \begin{eqnarray} &&\mu^2\left(-\frac{27}{4}-\frac{3\epsilon}{2}-\frac{117A_2}{4}-\frac{221W_1}{15\sqrt{3}}\right)\nonumber\\&&+\mu\left(\frac{27}{4}-\frac{107\epsilon}{100}+\frac{3021A_2}{100}+\frac{4291W_1}{120\sqrt{3}}\right)\nonumber\\&&-\frac{4}{25}+\frac{407\epsilon}{200}-\frac{12A_2}{25}-\frac{23991W_1}{200\sqrt{3}}=0 \end{eqnarray} Solving for $\mu$, we have \begin{eqnarray} &&\mu_{c1}= 0.024294 - 0.312692\epsilon\nonumber\\&&- 0.036851A_2 + 1.001052 W_1\label{eq:muc1} \end{eqnarray} \subsection{Case (ii)}\label{subsect:ii} \begin{equation} \mbox{When} \quad\omega_1=3\omega_2\label{KAMii}\end{equation} Proceeding as in Sect.~\ref{subsect:i}, we have \begin{eqnarray} &&\mu^2\left(-\frac{27}{4}-\frac{3\epsilon}{2}-\frac{117A_2}{4}-\frac{99 \sqrt{3}W_1}{20}\right)\nonumber\\&&+\mu\left(\frac{27}{4}-\frac{93\epsilon}{100}+\frac{2979A_2}{100}+\frac{119 \sqrt{3}W_1}{10}\right)\nonumber\\&&-\frac{9}{100}+\frac{393\epsilon}{200}-\frac{27A_2}{100}-\frac{4777W_1}{400\sqrt{3}}=0 \end{eqnarray} Solving for $\mu$, we have \begin{eqnarray} &&\mu_{c2}=0.013516 - 0.29724\epsilon\nonumber\\&&- 0.019383 A_2 + 1.007682 W_1\label{eq:muc2} \end{eqnarray} The normalized Hamiltonian up to fourth order is \begin{eqnarray}\label{eq:H_Norm4} H=\omega_1I_1-\omega_2I_2+\frac{1}{2}(AI_1^2+2BI_1I_2+CI_2^2)+\dots \end{eqnarray} Calculating the determinant $D$ occurring in condition (ii) of the KAM theorem, we have \[D=-(A\omega_2^2+2B\omega_1\omega_2+C\omega_1^2)\] Putting in the values of $A, B$, and $C$, and writing $u=\omega_1\omega_2$, we have \begin{eqnarray}
&&D=\frac{644u^4-541u^2+36}{8(4u^2-1)(25u^2-4)}+(D_2+D_3\gamma)\epsilon\nonumber\\&&+(D_4+D_5\gamma)A_2+(D_6+D_7\gamma)W_1\label{eq:D} \end{eqnarray} The second condition of the KAM theorem is satisfied if, in the interval $0<\mu<\mu_{c0}$ [where $\mu_{c0}$ is as in (\ref{eq:muc0})], the mass parameter does not take the value $\mu_{c3}$ which makes $D=0$. To find $\mu_{c3}$, we note that when $\epsilon=A_2=W_1=0$, then from (\ref{eq:D}), $D$ becomes zero if and only if \[644u^4-541u^2+36=0\] This implies that \begin{equation} u^2=\frac{541-\sqrt {199945}}{1288}=0.072863=u_0\ \mbox{(say)}\end{equation} Writing $u^2=\frac{27(1-\gamma^2)}{16}$,\, $\gamma=1-2\mu $, and then solving the above, we get \begin{equation} \gamma=\gamma_0=0.978173\dots,\, \mu=\mu_0=0.010914\dots \end{equation} When $\epsilon,A_2, W_1$ are not zero, we assume that $D$ is zero if \begin{equation} \mu=\mu_0+\alpha_1 \epsilon+\alpha_2A_2+\alpha_3W_1 \end{equation} \begin{equation} \gamma=\gamma_0-2(\alpha_1 \epsilon+\alpha_2A_2+\alpha_3W_1) \end{equation} \[\mbox{where}\quad \gamma_0=1-2\mu_0 \] \begin{eqnarray} &&u^2=u_0+(u_1+\alpha_1 u_2)\epsilon\nonumber\\&&+(u_3+\alpha_2u_4)A_2+(u_5+\alpha_3u_6)W_1\label{eq:u} \end{eqnarray} with \begin{eqnarray*} &&u_1=\frac{27}{16}\gamma_0^2+\frac{9}{8}\gamma_0+\frac{9}{8},u_3=\frac{117(1-\gamma^2_0)}{16},\nonumber\\&&u_2=u_4=\frac{27\gamma_0}{4},u_6=\frac{27\gamma_0}{4\sqrt{3}},\nonumber\\&&u_5=\frac{27\gamma_0^2+165\gamma_0+35}{16\sqrt{3}} \end{eqnarray*} and $\alpha_i\ (i=1,2,3)$ are to be determined.
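The unperturbed chain $u_0\rightarrow\gamma_0\rightarrow\mu_0$ quoted above can be reproduced numerically; the check below is purely illustrative (recall that $u_0$ denotes the value taken by $u^2$).

```python
import math

# Smaller root of 644 u^4 - 541 u^2 + 36 = 0, solved as a quadratic in u^2;
# the discriminant 541^2 - 4*644*36 = 199945 is the radicand quoted above
u0 = (541 - math.sqrt(541 ** 2 - 4 * 644 * 36)) / (2 * 644)

# Invert u^2 = 27 (1 - gamma^2) / 16 and gamma = 1 - 2 mu
gamma0 = math.sqrt(1 - 16 * u0 / 27)
mu0 = (1 - gamma0) / 2
```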
From (\ref{eq:D}), $D$ is zero when \begin{eqnarray} &D&=\frac{644u^4-541u^2+36}{8(4u^2-1)(25u^2-4)}+(D_2+D_3\gamma)\epsilon\nonumber\\&&+(D_4+D_5\gamma)A_2+(D_6+D_7\gamma)W_1=0\label{eq:Dzero}\end{eqnarray} Making use of (\ref{eq:u}) in (\ref{eq:Dzero}) and equating to zero the coefficients of $\epsilon,A_2$ and $W_1$, we get \begin{eqnarray} \alpha_1&=&-\frac{1}{u_2(1288u_0-541)}\Bigl\{(1288u_0-541)u_1\nonumber\\&&+8(D_2^0+D_3^0\gamma_0)(4u_0-1)(25u_0-4)\Bigr\} \end{eqnarray} \begin{eqnarray}\alpha_2&=&-\frac{1}{u_4(1288u_0-541)}\Bigl\{(1288u_0-541)u_3\nonumber\\&&+8(D_4^0+D_5^0\gamma_0)(4u_0-1)(25u_0-4)\Bigr\} \end{eqnarray} \begin{eqnarray}\alpha_3&=&-\frac{1}{u_6(1288u_0-541)}\Bigl\{(1288u_0-541)u_5\nonumber\\&&+8(D_6^0+D_7^0\gamma_0)(4u_0-1)(25u_0-4)\Bigr\}\end{eqnarray} where $D_n^0$ $(n=2,3,\dots,7)$ are the $D_n$ of Appendix II evaluated for the unperturbed problem. Numerical computation yields \[\alpha_1=-0.120489\dots,\ \alpha_2=-0.373118\dots,\]\[ \alpha_3=2.904291\dots\] Then we have \begin{eqnarray} &&\mu_{c3}=\mu_0+\alpha_1\epsilon+\alpha_2A_2+\alpha_3W_1 \nonumber\\&&=0.010914-0.120489 \epsilon\nonumber\\&&-0.373118A_2+2.904291 W_1\label{eq:muc3}\end{eqnarray} Hence, in the interval $0<\mu<\mu_{c0}$ both conditions of the KAM theorem are satisfied, and therefore the triangular point is stable except for the three mass ratios $\mu_{ci}\ (i=1,2,3)$. \section{Analytical Study}\label{sec:ASob} \subsection{Observation I}\label{subsect:obs1} Consider $A_2=0$, $q_1=1$ $(W_1=0)$; the problem then reduces to the classical restricted three body problem. From equations (\ref{eq:1x}) and (\ref{eq:Ly}) we get \[\quad x=\frac{1}{2}-\mu, \quad y=\pm\frac{\sqrt{3}}{2} \] From (\ref{eq:muc0}), stability is assured when $\mu<\mu_{c0}$, where $\mu_{c0}=0.038521$.
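As an illustrative cross-check of the classical values, $\mu_{c0}$ is the Routh critical mass ratio $\frac{1}{2}\bigl(1-\sqrt{23/27}\bigr)$, and $\mu_{c1}$, $\mu_{c2}$ follow from the resonances $\omega_1=k\omega_2$ $(k=2,3)$ combined with $\omega_1^2+\omega_2^2=1$ and $\omega_1^2\omega_2^2=\frac{27}{16}(1-\gamma^2)$:

```python
import math

# Routh bound for linear stability of the triangular point (classical case)
mu_c0 = (1 - math.sqrt(23 / 27)) / 2

def resonant_mu(k):
    """Classical mass ratio at the resonance w1 = k*w2, using
    w1^2 + w2^2 = 1 and w1^2 w2^2 = 27 (1 - gamma^2) / 16, gamma = 1 - 2 mu."""
    w2sq = 1.0 / (k ** 2 + 1)   # from w1 = k*w2 and w1^2 + w2^2 = 1
    usq = k ** 2 * w2sq ** 2    # u^2 = w1^2 w2^2
    gamma = math.sqrt(1 - 16 * usq / 27)
    return (1 - gamma) / 2
```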
The relations between $\omega_1$ and $\omega_2$ in (\ref{eq:w1+w2}) and (\ref{eq:w1w2}) are given by \begin{eqnarray}&& \omega_1^2+\omega_2^2=1,\, \omega_1^2\omega_2^2=\frac{27}{16}\left(1-\gamma^2\right) \\&&(0<\omega_2<\frac{1}{\sqrt{2}}<\omega_1<1)\nonumber\end{eqnarray} From (\ref{eq:muc1}), (\ref{eq:muc2}), and (\ref{eq:muc3}) we find that the triangular points are stable in the range of linear stability except at the three mass ratios \begin{eqnarray}\mu_{c1}&=& 0.024294\label{eq:muc1class} \\ \mu_{c2}&=&0.013516 \label{eq:muc2class}\\ \mu_{c3}&=&0.010914\label{eq:muc3class} \end{eqnarray} and the $D$ occurring in the second condition of the KAM theorem, found from (\ref{eq:D}), is \begin{equation}D=\frac{644u^4-541u^2+36}{8(4u^2-1)(25u^2-4)}\label{eq:Dclass}\end{equation} where $u=\omega_1\omega_2$. All the above results agree exactly with those of \cite{Deprit1967}. Now we have $\epsilon=1-q_1$ and $W_1=\frac{(1-\mu)\epsilon}{c_d}$; in the classical case write $D=D_0$. Figure~\ref{fig:f1} describes the instability range in the classical case, and Fig.~\ref{fig:f2} shows the point $\omega_1=0.924270$, $\omega_2=0.381742$, $D_0=0$, when $A_2=0$, $q_1=1$; with $\frac{1}{\sqrt{2}}<\omega_1<1$ and $0<\omega_2<\frac{1}{\sqrt{2}}$, the value of $\gamma$ is $0.978173$. \begin{figure} \epsscale{1}\plotone{f1.eps} \caption{$A_2=0, q_1=1$}\label{fig:f1} \end{figure} \begin{figure} \epsscale{1}\plotone{f2.eps} \caption{$A_2=0, q_1=1, \omega_1=0.924270, \omega_2=0.381742,\ D_0= 0$}\label{fig:f2} \end{figure} \subsection{Observation II}\label{subsect:obs2} Consider the case when $A_2=0$, $q_1\neq1$ $(W_1\neq0)$, i.e., the photogravitational restricted three body problem with P-R drag, in which the bigger primary is taken to be a radiating body and the smaller primary is spherically symmetric.
The coordinates of the triangular equilibrium points are given by \begin{equation} x=x_0\biggl\{1-\displaystyle{\frac{W_1[{(1-\mu) }+\mu{\frac{\delta ^2}{2} }]}{3\mu{(1-\mu) } x_0 y_0}}\biggr\}\end{equation} \begin{equation} y=y_0\biggl\{1-\displaystyle{\frac{W_1\delta^2[2\mu-1-\mu{\frac{\delta ^2}{2} }]}{6\mu{(1-\mu) } y^3_0}}\biggr\} \end{equation} This result coincides with that of \citet*{Schuerman1980}, where \( x_0={\frac{\delta ^2}{2} }-\mu\) and \( y_0=\pm\delta\biggl(1-\frac{\delta^4}{4}\biggr)^{1/2}\); for $q_1=1$ (so that $\delta=1$) these reduce to \( x=\frac{1}{2}-\mu,\, y=\pm\frac{\sqrt{3}}{2} \). Substituting $\epsilon=1-q_1$, $W_1=\frac{(1-\mu)\epsilon}{c_d}$, $\mu=\mu_{ci}\ (i=0,1,2,3)$, and $A_2=0$ in (\ref{eq:muc1}), (\ref{eq:muc2}), and (\ref{eq:muc3}), we find that the triangular equilibrium points are stable in the range of linear stability except at the three mass ratios \begin{eqnarray} \mu_{c1}&=& 0.024294 -0.312692(1 - q_1)\nonumber\\&& + \frac{0.976732(1 - q_1)}{c_d}\label{eq:muc1pr}\\ \mu_{c2}&=&0.013516 - 0.29724(1 - q_1)\nonumber\\&& + \frac{0.994062(1 - q_1)}{c_d}\label{eq:muc2pr}\\ \mu_{c3}&=&0.010914-0.120489(1 - q_1)\nonumber\\&&+\frac{2.87259 (1 - q_1)}{c_d}\label{eq:muc3pr} \end{eqnarray} We observe from Table~\ref{tbl-1} and Fig.~\ref{fig:f3} that the critical mass ratios decrease as the radiation pressure increases; these results are similar, but not identical, to those of \cite{Papadakis99}. \begin{figure}[t] \epsscale{1} \plotone{f3.eps} \caption{Stability region $\mu_{ci}$ $(i=1,2,3)$--$q_1$, when $A_2=0$, $W_1\neq0$ }\label{fig:f3} \end{figure} \subsection{Observation III}\label{subsect:obs3} When $A_2\neq0$, $q_1=1$ $(W_1=0)$: in this observation the smaller primary is considered to be an oblate spheroid, and radiation pressure (with its P-R drag) is not considered.
The triangular equilibrium points are given by \begin{eqnarray} x&=&\frac{1-2\mu-A_2}{2}\label{lxob}\\ y&=&\pm\frac{\sqrt{3}}{2}\Bigl\{1-\frac{A_2}{3}\Bigr\}\label{lyob} \end{eqnarray} which are similar, but not identical, to the results of \cite{Bhatnagaretal1983} and \cite{Chandraetal2004}. In this case the triangular equilibrium points are stable in the nonlinear sense except at three mass ratios at which Moser's condition fails, which are given by \begin{eqnarray} \mu_{c1}&=& 0.024294 - 0.036851A_2 \\ \mu_{c2}&=&0.013516 - 0.019383 A_2 \\ \mu_{c3}&=&0.010914-0.373118A_2 \end{eqnarray} The stability regions are shown in the $A_2$--$\mu_{ci}$ $(i=1,2,3)$ diagram (Fig.~\ref{fig:f4}); the outer line corresponds to $\mu_{c1}$, the second line to $\mu_{c2}$, and the innermost line to $\mu_{c3}$. It is clear from Table~\ref{tbl-2} that $\mu$ decreases as $A_2$ increases. These results agree with \cite{Markellosetal96, Bhatnagaretal1983}. \begin{figure}[t] \epsscale{1} \plotone{f4.eps} \caption{Stability region $\mu_{ci}$ $(i=1,2,3)$--$A_2$, when $ q_1=1$, $W_1=0$}\label{fig:f4} \end{figure} \subsection{Observation IV}\label{subsect:obs4} When $A_2\neq 0$, $q_1\neq1$ $(W_1\neq0)$: this is the most general case considered here. The triangular equilibrium points are given by (\ref{eq:1x}) and (\ref{eq:Ly}); clearly they are functions of the oblateness coefficient $A_2$ and the P-R drag term $W_1$.
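The linear trends of Observations II and III can be checked directly against Tables~\ref{tbl-1} and \ref{tbl-2}; in the sketch below the slopes are those quoted above, and for the Table~\ref{tbl-1} comparison we assume (as the tabulated values suggest) that the $(1-q_1)/c_d$ Poynting--Robertson contribution is negligible for large $c_d$.

```python
CLASSICAL = {1: 0.024294, 2: 0.013516, 3: 0.010914}

def mu_c_oblate(i, A2):
    """Observation III (q1 = 1, W1 = 0): linear drop with oblateness A2."""
    slope = {1: -0.036851, 2: -0.019383, 3: -0.373118}
    return CLASSICAL[i] + slope[i] * A2

def mu_c_radiation(i, q1):
    """Observation II (A2 = 0): linear drop with 1 - q1, dropping the
    (1 - q1)/c_d Poynting-Robertson term (assumed negligible, large c_d)."""
    slope = {1: -0.312692, 2: -0.29724, 3: -0.120489}
    return CLASSICAL[i] + slope[i] * (1 - q1)
```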
Substituting $\epsilon=1-q_1$, $W_1=\frac{(1-\mu)\epsilon}{c_d}$, $\mu=\mu_{ci}\ (i=0,1,2,3)$ in (\ref{eq:muc1}), (\ref{eq:muc2}), and (\ref{eq:muc3}), we get the new formulae \begin{eqnarray}\mu_{c1}&=& 0.024294 - 0.312692(1-q_1)\nonumber\\&-& 0.036851A_2 + \frac{0.976732(1 - q_1)}{c_d}\label{eq:muc1apr}\\ \mu_{c2}&=&0.013516 - 0.29724(1-q_1)\nonumber\\&-& 0.019383 A_2 + \frac{0.994062(1 - q_1)}{c_d}\label{eq:muc2apr}\\ \mu_{c3}&=&0.010914-0.120489 (1-q_1)\nonumber\\&-&0.373118A_2+\frac{2.87259 (1 - q_1)}{c_d}\label{eq:muc3apr}\end{eqnarray} Using (\ref{eq:muc1apr})--(\ref{eq:muc3apr}) we have drawn the $\mu$--$A_2$--$q_1$ 3D diagrams (Fig.~\ref{fig:f5}). In the first diagram the uppermost plane is due to $\mu_{c1}$, the middle plane to $\mu_{c2}$, and the innermost plane to $\mu_{c3}$; the second view shows the value $\mu_{c0}=0.035829$. From these diagrams we conclude that the stability region is reduced by the P-R drag and the oblateness of the smaller primary. Nevertheless, the triangular equilibrium points are stable in the range of linear stability except at the three mass ratios at which the KAM theorem fails, while they are unstable in the linear case [see \cite{Murray1994, KushvahBR2006}]. \section{Conclusion} \label{sec:conc} Using the method of \cite{Whittaker1965}, we have seen that the second order part $H_2$ of the Hamiltonian is transformed into the normal form $H_2=\omega_1I_1-\omega_2I_2$, and that the third order part $H_3$ of the Hamiltonian in $I_1^{1/2},I_2^{1/2}$ is zero. We conclude that the stability region is reduced by the P-R drag and the oblateness of the smaller primary. Nevertheless, the triangular equilibrium points are stable in the nonlinear sense in the range of linear stability, except for the three mass ratios $\mu_{ci}\ (i=1,2,3)$ at which the KAM theorem fails, while they are unstable in the linear case [see \cite{Murray1994, KushvahBR2006}]. These results agree with those found by \cite{Deprit1967} and others. \acknowledgments{ We are thankful to D.S.T.
Government of India, New Delhi for sanctioning the project DST/MS/140/2K dated 02/01/2004 on this topic. We are also thankful to IUCAA Pune for providing financial assistance for library visits and for computing facilities.} \begin{table*} \tabletypesize{\scriptsize}\caption{$A_2=0,q_1\neq1,(W_1\neq0)$}\label{tbl-1} \begin{tabular}{crrr} \tableline\tableline $q_1$ & $\mu_{c1}$ & $\mu_{c2}$ & $\mu_{c3}$ \\ \tableline 0.95 & 0.00866& -0.001346 & 0.00488921 \\ 0.96 &0.011786 & 0.0016263 & 0.006094 \\ 0.97 & 0.014913 & 0.0045987 & 0.007299\\ 0.98 & 0.018040 & 0.0075712& 0.008504\\ 0.99 & 0.02117 & 0.010544& 0.00970878 \\ 1.00 & 0.024294 & 0.013516 & 0.0109137\\ \tableline \end{tabular} \end{table*} \begin{table*} \tabletypesize{\scriptsize} \caption{$A_2\neq 0, q_1=1,(W_1=0)$\label{tbl-2}} \begin{tabular}{crrr} \tableline\tableline $A_2$ & $\mu_{c1}$ & $\mu_{c2}$ & $\mu_{c3}$ \\ \tableline 0.0&0.024294 &0.01352 &0.010914\\ 0.1 &0.020609&0.01158 &-0.026398\\ 0.2&0.016924&0.009639 &-0.06371\\ 0.3&0.013239 &0.007701&-0.101022\\ 0.4 & 0.009554 &0.005763&-0.138334\\ 0.5 &0.005869 &0.003825 &-0.175645\\ 0.6&0.002184&0.001886&-0.212957\\ 0.7&-0.001501 &-0.000052 &-0.250269\\ \tableline \end{tabular} \end{table*} \begin{figure*}[h] \epsscale{1.5} \plottwo{f5.eps}{f6.eps} \caption{Both graphs show the stability region $\mu_{ci}$ $(i=1,2,3)$--$q_1$--$A_2$; the second graph shows $\mu_{c0}=0.035829$}\label{fig:f5} \end{figure*}
\section{Contrastive Feature Generation} \label{Sec:Contrast_Features} In Section~\ref{Sec:Contrast}, we motivated the usage of gradients of a pre-trained network $f()$ to obtain contrast. However, the minimal change in the network $f()$ needed to encompass the contrast cannot always be obtained from the gradient alone. In this section, we consider three techniques to generate contrastive features based on gradients. \subsection{Gradients} \label{Subsec:Grads} Neural networks whose objectives can be formulated as a differentiable empirical loss function are trained using backpropagation~\cite{rumelhart1986learning}. Backpropagation iteratively adjusts the weight and bias parameters $w_l$ and $b_l$ of the network to minimize the difference between the predicted output vector and the desired output vector, where the difference is estimated using an empirical loss function. Given a loss function $J(W, b, x)$, the gradient w.r.t. the weight in layer $l$ is given by $\nabla{J}/\nabla{w_l}$. This gradient is used to update the weights as \begin{equation}\label{Eq:Gradient_Update} w'_l = w_l - \alpha \frac{\nabla{J}}{\nabla{w_l}}, \end{equation} where $w_l$ and $w'_l$ are the weights before and after the update, $\alpha$ is the learning rate, and $\nabla{J}/\nabla{w_l}$ is the change required to minimize the error. We construct the output and the loss function so as to obtain contrastive features from a network. Given a network $f()$ trained on $N$ classes, we find the contrast of a datapoint $x$, with label $y$, to all $N$ possible classes. We do so by backpropagating $N$ separate losses, one for each class, for the datapoint $x$ and its corresponding one-hot encoded label $y$. \subsection{Gradients Through L-BFGS} \label{Subsec:LBFGS} Gradients indicate the direction of parameter search that reduces the cost function in neural networks. However, these gradients do not provide the distance that needs to be traveled in the defined direction.
Rather, during training, a learning rate is initialized that controls the distance to be updated in the direction specified by the gradients. Limited memory-BFGS (L-BFGS) is a quasi-Newtonian~\cite{dennis1977quasi} method of optimization that approximates the inverse Hessian of the objective function $J()$, and thereby estimates both the direction and the corresponding distance of the update required in the network parameters~\cite{nocedal1980updating}. \subsection{Gradients from adversarial perturbation} \label{Subsec:Adversarial} Continuing the notations established in Secs.~\ref{Sec:Intro} and~\ref{Sec:Theory}, an $L$-layered network $f$ is trained to distinguish between $N$ classes using original distortion-free images. For every image $x$ in the distortion-free dataset, the targeted Fast Gradient Sign Method~\cite{goodfellow2014explaining} (FGSM) is applied to create $N$ adversarial images to obtain $x$'s distance to the decision boundaries. For a target $i \in [1,N]$, an adversarial noise $\epsilon \text{sign}(\nabla_x J(W,x,i))$ is added to the image $x$. $J(W,x,i)$ refers to the cost function used to train the network with parameters $W$. $\nabla_x$ is the gradient of the cost function w.r.t. the input $x$. An $\epsilon$ of $0.1$ is used in this work. Adversarial noise is added to the input over $k$ iterations until the classification changes to $i$, i.e., $f(x_{k-1}+\epsilon \text{sign}(\nabla_x J(W,x_k,i))) = i$. The absolute value of the gradient of the cost function with respect to filter $W_L^i$ is summed up over $k$ iterations, i.e., $r_i = \sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^i} J(W,x_j,i))$ where $r_i$ is the feature for the $i^{\text{th}}$ class.
$i$ is then iterated over all $N$ classes to obtain multi-directional features of $x$ to all decision boundaries. All corresponding $r_i$ are concatenated to obtain the final feature $r_x = [r_1 r_2 \dots r_N]$. Hence the final multi-directional feature is given by \vspace{-1mm} \begin{equation}\label{eq:Feature} r_x = \Bigg[\sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^1} J(W,x_j,1)) \sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^2} J(W,x_j,2)) \dots \sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^N} J(W,x_j,N))\Bigg] \end{equation} $r_x$ is the proposed proxy second-order representation of $x$. Note that if the dimensionality of the $(L-1)^{\text{th}}$ layer from Eq.~\ref{eq:Filter} is $d_{L-1}\times 1$, then the dimensionality of every subfeature $r_i$ is also $d_{L-1} \times 1$. The concatenated final feature has a dimensionality of $(N d_{L-1})\times 1$. For reference, if the network $f_L$ is VGG-16 trained on CIFAR-10, $d_{L-1}$ is $512\times 1$ and $r_x$ is $5120\times 1$. This representation can become prohibitively large as the number of classes increases. Simple methods to offset this increase and speed up $r_x$ generation are discussed in Sec.~\ref{subsec:Enhancements}. \section{Conclusion} \label{Sec:Discussion} In this paper, we illustrate the existence of implicit contrast within trained neural networks. We abduce this contrast and provide a robust inference scheme that reasons contrastively. We also visualize such a posteriori reasons as visual explanations that add context to existing causal explanations. The underlying principle that allows extracting such contrastive reasons is twofold: 1) the definition of contrast, which allows a datapoint to adopt multiple labels and exist in multiple manifolds, and 2) the existence of gradients, which provide a non-heuristic means to travel between such manifolds. \begin{comment} Both the proposed methods require generating adversarial images $N$ times during testing phase. This might compromise applications that require fast throughput.
Borrowing the notations from Section~\ref{Sec:Method}, generating $r_x$ takes around 0.28s for every image on VGG-16. The reason for the large cost is the multiple backpropagation that happen over all the $16$ feature extraction modules to create adversarial noise in $x$ domain. However, the generation of perceptually similar adversarial images in the spatial $x$ domain is not our goal. Instead of generating adversarial images as $x+\epsilon \text{sign}(\nabla_x J(W,x,i))_k$, we generate adversarial data points in the $f_{L-1}$ domain. That is, instead of backpropagating to $x$ and taking the partial derivative $\nabla_x J(W,x,i))$ w.r.t $x$, we backpropagate only one layer to $f_{L-1}$ and obtain the partial derivative w.r.t $f_{L-1}(x)$. Hence, in place of of adversarial images, we create adversarial data points in $f_{L-1}(x)$. Conceptually, the gradients still provide the direction for the filters towards the data points. Accumulating gradients, until the targeted projection $P_i^'$ exceeds the original classification, provides the second order representation $r_x$. The remainder of the method remains the same. Generating $r_x$ in this fashion takes 0.0056s for every image. The average accuracies across levels for all six distortions is presented in Table~\ref{Tab:Timing}. The performance of the $\nabla_{f_{L-1}(x)}$ RR and NR methods are comparable to their $\nabla_x$ counterparts among all distortions. This alternate feature generation method can be used in datasets with large $N$ and huge feature extraction modules. A feed-forward network provides a prediction $P$ for an image $x$ in ${\mathcal{O}}(1)$ time complexity. For the proposed contrastive inference, we backpropagate over $N$ classes for one image. Hence, the total complexity to extract the contrastive feature, $r_x$, is ${\mathcal{O}}(N)$. We posit that the complexity of the feature extraction is derived from the serial nature of our implementation. 
The author in~\cite{goodfellow2015efficient} provides ways of extracting individual gradients from minibatches of data. By creating multiple copies of $x$ all with ground truths given by the modified Kronecker delta function from Eq.~\ref{eq:Kroenecker}, the extraction of $r_x$ can be accelerated. \end{comment} \section{Results} \label{sec:Results} In Section~\ref{Sec:Contrast}, the contrastive reasons were used to demonstrate the visual explanations that can complement existing \emph{`Why P?'} explanations. In this section, we demonstrate the robustness claim made in Section~\ref{Sec:Inference}. We apply feed-forward and contrastive inference on existing neural networks and show that: 1) contrastive inference performs similarly to feed-forward inference when $f()$ is trained and tested on pristine data, and 2) contrastive inference outperforms its feed-forward counterpart when the testing data is distorted. Distortions include image acquisition errors, severe environmental conditions during acquisition, and transmission and storage errors, among others. Current techniques that alleviate the drop in feed-forward accuracy require training using distorted data. The authors in~\cite{vasiljevic2016examining} show that finetuning or retraining networks using distorted images increases the performance of classification under the same distortion. However, performance does not generalize well between different distortions. For instance, training on Gaussian-blurred images does not guarantee a performance increase on motion-blurred images~\cite{vasiljevic2016examining, geirhos2018generalisation}. Other proposed methods include training on style-transferred images~\cite{geirhos2018imagenet}, training on adversarial images~\cite{hendrycks2019benchmarking}, and training on simulated noisy virtual images~\cite{Temel2017_CURETSR}. All these works require additional training data not belonging to $X$.
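For concreteness, the per-class gradient accumulation of Sec.~\ref{Subsec:Adversarial} can be sketched on a toy stand-in model. In the snippet below a linear softmax classifier plays the role of the final layer $f_L$; the toy weights, the value of $\epsilon$, and the descent sign convention on the targeted loss are illustrative assumptions, not the implementation used in our experiments.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def contrastive_feature(W, x, eps=0.1, max_iter=100):
    """Accumulate |grad_{W_i} J| over targeted FGSM-style steps toward each
    class i, then concatenate the N subfeatures into r_x = [r_1 ... r_N].
    Here J(W, x, i) = -log softmax_i(W x) stands in for the training loss."""
    N, d = W.shape
    feats = []
    for i in range(N):
        xi, r_i = x.astype(float).copy(), np.zeros(d)
        for _ in range(max_iter):
            p = softmax(W @ xi)
            r_i += np.abs((p[i] - 1.0) * xi)   # |dJ/dW_i| for target row i
            if np.argmax(p) == i:              # stop once f(x_k) = i
                break
            g_x = (p - np.eye(N)[i]) @ W       # dJ/dx for the targeted loss
            xi -= eps * np.sign(g_x)           # step that lowers J, toward class i
        feats.append(r_i)
    return np.concatenate(feats)               # dimensionality (N * d,)
```

For a trained deep network the same loop would instead backpropagate through $f$ to obtain $\nabla_x J$ and $\nabla_{W_L^i}J$ before concatenating the $N$ subfeatures.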
In this paper, we show that contrastive inference increases classification accuracy on the $19$ considered distortions from the CIFAR-10-C dataset~\cite{hendrycks2019benchmarking}. This increase in accuracy is induced by the contrastive features extracted from the pristine data, not through training with distorted data. \vspace{-3mm} \subsection{Results on Pristine Data} \label{Subsec:In-Distribution} We train four networks, ResNet-18, 34, 50, and 101~\cite{he2016deep}, on the CIFAR-10~\cite{krizhevsky2009learning} trainset and test them on the CIFAR-10 testset. The training set of CIFAR-10 has $50000$ images from $10$ classes, with each class having $5000$ sample images. The networks are trained in PyTorch for $200$ epochs on an NVIDIA 1080Ti GPU with a batch size of $128$ using SGD optimization. Learning rates of $0.1$, $0.004$, and $0.0008$ are used for epochs $0-60$, $60-120$, and $160-200$, respectively, along with a momentum of $0.9$ throughout the training procedure. PyTorch's horizontal flipping transformation is used as a data augmentation technique. The testset of CIFAR-10 consists of $10000$ images, with each class represented by $1000$ images. The results of all networks derived using Eq.~\ref{eq:FF-Infer} are shown as feed-forward results in Table~\ref{table:Original_Results}. \begin{table}[h!] \small \centering \caption{Feed-Forward Inference vs Contrastive Inference on the CIFAR-$10$ test set} \vspace{-1mm} \begin{tabular}{c | c c c c} \hline ResNet & 18 & 34 & 50 & 101 \\ [0.5ex] \hline\hline Feed-Forward (\%) & $91.02$ & $93.01$ & $93.09$ & $93.11$ \\ [1ex] \hline Gradients (\%) & $90.94$ & $93.14$ & $92.88$ & $92.73$ \\ [1ex] \hline \end{tabular} \label{table:Original_Results}\vspace{-7mm} \end{table} For all $50000$ images in the training set, we extract the contrastive features $r_x$. $r_x$ is richer in terms of dimensionality and contrastive information.
For instance, for every image in ResNet-18, the feed-forward network provides a $64\times 1$ feature as $y^{L-1}_{feat}$ from Eq.~\ref{eq:FF-Infer}. Using these feed-forward features, the proposed contrastive features, $r_x$ from Eq.~\ref{eq:r_dis}, are extracted with a dimensionality of $640\times 1$. These features are normalized. $\mathcal{H}$, with the structure given in Table~\ref{table:Structure}, is trained on the $50000$ training contrastive features for $200$ epochs using the same procedure as the original network. The time to train $\mathcal{H}$ is dependent on its structure and is shown in Table~\ref{table:Structure}. Hence, for contrastive inference, our full framework consists of the original trained network, $f()$, and an MLP, $\mathcal{H}$. During testing, all $10000$ images from the testset are passed through the original network. The contrastive features from each image are extracted individually. These features are normalized and passed through the trained MLP. The overall accuracy is reported in Table~\ref{table:Original_Results}. As can be seen, the contrastive results are comparable to those of the feed-forward procedure. These results are a validation of the Bayesian limiting case scenario expressed in Eq.~\ref{eq:probability}. When the network $f()$ is well trained on a distribution $X$, $Pr(i|P,x\in X)$, where $i\in [1,N]$, provides no new information regarding any $i$ other than $P$. Hence, the results from Eq.~\ref{eq:probability} are only influenced by the $64\times 1$ feature extracted using $i = P$ in Eq.~\ref{eq:FF-Infer}. \vspace{-3mm} \subsection{Results on Distorted Data} \label{Subsec:Distortion} \begin{figure*}[!htb] \begin{center} \minipage{\textwidth}% \includegraphics[width=\linewidth]{Figs/Robustness.pdf} \endminipage \vspace{-3mm} \caption{Visualization of accuracy gains (in red) of using the proposed contrastive inference over feed-forward inference on CIFAR-10-C for four networks among $19$ distortions.
Within each distortion, the distortion levels are averaged and the averaged results are shown.}\vspace{-0.7cm}\label{fig:Robustness_Distortionwise} \end{center} \end{figure*} \begin{comment} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Levelwise-Results.png} \endminipage \vspace{-3mm} \caption{Visualization of accuracy gains (in red) of using the proposed contrastive inference over feed-forward inference on CIFAR-10-C for four networks among $5$ distortion levels. Within each distortion level, the individual distortions are averaged and the averaged results are shown.}\vspace{-0.7cm}\label{fig:Robustness_Levelwise} \end{center} \end{figure*} \end{comment} \begin{figure*}[!htb] \begin{center} \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Data.png} \endminipage \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Iter.png} \endminipage \caption{(a) Contrastive Inference vs. Feed-Forward Inference when $f()$ is trained on limited data. (b) Contrastive Inference vs. Feed-Forward Inference when $f()$ is trained for a limited time.}\label{fig:f-effect}\vspace{-0.7cm} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Data_Tests.png} \endminipage \caption{Contrastive Inference vs. Feed-Forward Inference under limited training data on (a) Brightness distortion, (b) Motion blur distortion, (c) Gaussian noise distortion.}\label{fig:Data-Tests}\vspace{-0.6cm} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Epoch_Tests.png} \endminipage \caption{Contrastive Inference vs.
Feed-Forward Inference across epochs under (a) Brightness distortion, (b) Motion blur distortion, (c) Gaussian noise distortion.}\label{fig:Epoch-Tests}\vspace{-0.7cm} \end{center} \end{figure*} In this section, we use CIFAR-10-C~\cite{hendrycks2019benchmarking} to validate the performance of contrastive inference on distorted data. CIFAR-10-C provides $19$ distortions on the testset of CIFAR-10. It also provides $5$ increasing levels of degradation for each distortion. Hence, every distortion has $50000$ images to test on. The $19$ distortions produce $950,000$ images for testing a network trained on the original $50000$ images. The same networks, ResNet-18, 34, 50, and 101, trained in Section~\ref{Subsec:In-Distribution} are used for the experiments in this section. The results are shown in Fig.~\ref{fig:Robustness_Distortionwise}. The blue bars depict the results when the network $f()$ infers the predictions of the distorted images in a feed-forward fashion. The red bars depict the contrastive accuracy gain over the feed-forward predictions with the inclusion of $\mathcal{H}$. The results of all $19$ distortions averaged over $5$ distortion levels are shown. Among the $4$ networks and in every distortion category, there is an increase in the performance of contrastive inference over feed-forward inference. However, the increase is not consistent across the distortion categories. The results of all $4$ networks increase substantially, by at least $5\%$, in $8$ of the $19$ categories. These distortions include Gaussian blur, Gaussian noise, glass blur, motion blur, pixelate, shot noise, speckle noise, and zoom blur. The highest increase is $8.22\%$ on glass blur for ResNet-18. These are global distortions, i.e., the rule used to corrupt a pixel does not change based on the individual pixel.
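As a concrete example of such a global distortion, additive Gaussian noise applies one corruption rule to every pixel, irrespective of its location or content. The sketch below is illustrative only; the $\sigma$ value is an assumption, not the CIFAR-10-C setting:

```python
import numpy as np

def gaussian_noise(img, sigma=0.08, seed=0):
    """Apply a 'global' distortion: i.i.d. Gaussian noise added to every
    pixel, independent of the pixel's position or value."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in [0, 1]
```

By contrast, distortions such as brightness or contrast change shift global image statistics rather than corrupting pixels independently.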
Other distortions, including contrast and brightness, where the proposed method's performance gains are less than $3\%$, change the global and local characteristics of the image, including its mean and contrast. Neural networks are actively trained to ignore such changes so that their effects are not propagated beyond the first few layers. The contrast obtained from the last layer is by itself insufficient to characterize the underlying data without the noise. This could be addressed by deriving contrast from earlier layers, but doing so is beyond the scope of this paper. Also, the contrastive performance gain increases as the level of distortion increases. This is because, with higher distortion, we move further away from the Bayesian limiting case in Eq.~\ref{eq:probability}. In other words, the information in $Pr(i|P,x\in X)$ increases. For ResNet-18, the average level $1$ contrastive gain across all $19$ distortions is $1.36\%$, while its level $5$ gain across distortions is $5.26\%$. \begin{table*} \centering \caption{Contrastive Inference averaged accuracies with different loss functions for ResNet-18 on CIFAR-10-C.}\vspace{-1mm} \begin{tabular}{c c c c c c c c c c} \hline Feed-Forward & MSE & CE & BCE & L1 & L1-M & Smooth L1 & Smooth L1-M & NLL & SoftMargin \\ [0.5ex] \hline\hline $67.89\%$ & $71.35\%$ & $69.42\%$ & $70.24\%$ & $69.09\%$ & $69.92\%$ & $69.22\%$ & $70.01\%$ & $70.93\%$ & $70.91\%$ \\ [1ex] \hline\vspace{-7mm} \end{tabular} \label{table:Losses} \end{table*} \vspace{-3mm} \subsubsection{Effect of loss in gradient generation} Note that the gradient generation process in Eq.~\ref{eq:CI-features} depends on the choice of loss function. We test the impact of this choice on the performance of contrastive inference. We compare multiple loss functions, whose results averaged across distortions and levels are provided in Table~\ref{table:Losses}.
Feed-forward accuracy is the result obtained from the network $f()$, MSE is the modified Mean Squared Error from Eq.~\ref{eq:Kroenecker}, CE is Cross Entropy, BCE is Binary Cross Entropy, L1 is Manhattan distance, L1-M is the modified version of the L1 distance when $\delta^M_i$ from Eq.~\ref{eq:Kroenecker} is backpropagated, Smooth L1 is the leaky extension of the Manhattan distance, Smooth L1-M is the modified version of Smooth L1 similar to L1-M, NLL is the Negative Log Likelihood loss, and SoftMargin is the modified hinge loss. All these loss functions are taken from PyTorch. In Table~\ref{table:Losses}, the performance of all contrastive inference modalities with different loss functions exceeds that of feed-forward inference. The proposed modified MSE outperforms the nearest loss by $0.42\%$ in accuracy and is used in all our experiments. \begin{comment} \subsubsection{Level-wise Performance Gain} The results in Fig.~\ref{fig:Robustness_Levelwise} are categorized based on the distortion level and presented. All $19$ categories of distortion are averaged for each level and their respective feed-forward accuracy and contrastive gains are shown. Note that Level $0$ refers to results on pristine data. It is a graphical representation of Table~\ref{table:Original_Results}. The rest of the $5$ levels are on distorted data. From Section~\ref{subsec:Math_Theory}, the contrastive inference approaches the ideal case of training on distorted data when the entropy of $y_{L-1}$ is maximum. As the distortions increase, the network $f()$ is increasingly less confident of its prediction. Hence, $y_{L-1} = f_{L-1}(z)$ approaches all $\frac{1}{N}$ or maximum entropy. This is the same as backpropagating contrasts assuming all logits are $M$ from Eq.~\ref{eq:r_dis}. Hence, as the distortion level increases, the contrastive gain also increases. This is apparent from the results in Fig.~\ref{fig:Robustness_Levelwise}.
\end{comment} \vspace{-3mm} \subsection{Effect of $f()$}\label{subsec:f-Effect} We further analyze ResNet-18 trained on CIFAR-10 to assess the efficacy of contrastive features when $f()$ is \emph{not} well trained, i.e., when contrast has not yet been learned implicitly. In this subsection, our goal is to ascertain that the performance gain obtained by contrast is not due to a poorly trained $f()$ or a statistical anomaly. We consider two cases of such an $f()$: when $f()$ is trained for a limited time, and when $f()$ is trained with limited data. ResNet-18 is trained on CIFAR-10 with the same learning parameter setup as in Section~\ref{Subsec:In-Distribution} and Table~\ref{table:Original_Results}. \subsubsection{Training and testing under limited time}\label{subsubsec:LWLT} For the limited time experimental setup, ResNet-18 is trained for $200$ epochs. The network states at every third epoch from $1$ to $198$, along with epoch $200$, are saved. This provides $67$ versions of $f()$ at different stages of training. Each $f()$ is tested on the $10,000$ CIFAR-10 testing images and the Top-$1$ accuracy is plotted in blue in Fig.~\ref{fig:f-effect}b. The gradients $r_x$ for all $67$ states are extracted for the $50,000$ training samples. These gradients are used to train $67$ separate $\mathcal{H}$ with the structure provided in Table~\ref{table:Structure} and a similar parameter setup as before. The gradients from the $10,000$ testing samples are extracted separately for each of the $67$ ResNets and tested. The results are plotted in red in Fig.~\ref{fig:f-effect}b. Note the sharp spikes at epochs $60$ and $120$, where there is a drop in the learning rate. Hence, when the training domain is the same as the testing domain, contrastive and feed-forward features provide statistically similar performance across varying states of $f()$, in accordance with Eq.~\ref{eq:probability}.
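For intuition on what each per-checkpoint $r_x$ extraction involves, consider the simplest case of a single linear head $y = Wf$: the gradient of a one-hot MSE loss with respect to $W$ has a closed form and can be stacked class by class. The numpy sketch below is illustrative only; the actual pipeline backpropagates through the trained network in PyTorch, and the plain one-hot target here stands in for the modified Kronecker delta of Eq.~\ref{eq:Kroenecker}:

```python
import numpy as np

def contrastive_features(W, f, n_classes):
    """Sketch: for a linear head y = W @ f, compute the gradient of a
    one-hot MSE loss w.r.t. W for every contrast class i, and stack the
    per-class gradient rows into one feature vector r_x."""
    y = W @ f                                    # logits, shape (N,)
    feats = []
    for i in range(n_classes):
        target = np.zeros(n_classes)
        target[i] = 1.0                          # one-hot contrast target
        dJ_dy = 2.0 * (y - target) / n_classes   # MSE gradient at logits
        dJ_dW = np.outer(dJ_dy, f)               # backprop one layer to W
        feats.append(dJ_dW[i])                   # row for contrast class i
    return np.concatenate(feats)                 # (N * d,), cf. 640 for ResNet-18
```

One such vector is computed per image and per saved checkpoint, which is what makes the extraction cost scale with $N$.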
We now consider the case when a network $f()$ is trained for limited epochs on domain $X$ and tested on $Z$ from the CIFAR-10-C distortions. The $67$ trained models of ResNet-18 are tested on three distortions from CIFAR-10-C: motion blur, brightness, and Gaussian noise. These three distortion types were chosen to represent the spectrum of the proposed method's performance increase over feed-forward inference. From the results in Fig.~\ref{fig:Robustness_Distortionwise}, contrastive inference achieved one of its highest performance gains on Gaussian noise, its lowest performance gain on brightness, and an average increase on motion blur. The results in Fig.~\ref{fig:Epoch-Tests} indicate that after around $60$ epochs, the feed-forward network has implicitly learned contrasts sufficiently to discriminate between classes. This is seen in both the motion blur and Gaussian noise experiments. The results from brightness indicate that contrastive inference follows feed-forward inference across epochs. \subsubsection{Training and testing with limited data}\label{subsubsec:LimitedData} For the limited data experiment, ResNet-18 is trained on only a subset of the available data. CIFAR-10 consists of $50,000$ training images with each class consisting of $5,000$ images. We randomly sample a fixed number of images from each class and train $f()$ using these samples. The results are plotted in Fig.~\ref{fig:f-effect}a. Ten separate ResNets are trained, with the models having access to a random $100$, $200$, $300$, $400$, $500$, $600$, $700$, $800$, $900$, and $1000$ images per class, respectively. Validation is conducted on all $10,000$ testing images and the results are plotted in blue in Fig.~\ref{fig:f-effect}a. Contrastive features are extracted for the same random images that the base $f()$ was trained on, and $10$ separate $\mathcal{H}$ are trained on these gradients.
$r_z$ for the $10,000$ testing images is extracted separately for all ten instances of $f()$ and passed through the trained $\mathcal{H}$ to obtain contrastive results. These are plotted in red. Similar to the results in Table~\ref{table:Original_Results}, contrastive inference is statistically similar to feed-forward inference. We then consider the case when a network $f()$ is trained on limited data from $X$ but tested on distorted data $Z$ from CIFAR-10-C. We consider the same distortions as before from Section~\ref{subsubsec:LWLT}. The results of feed-forward and contrastive inference are plotted in Fig.~\ref{fig:Data-Tests}. For the Gaussian noise distortion, contrastive inference outperforms feed-forward inference even with only $300$ training images per class. However, this is not the case for motion blur and brightness, where contrastive inference follows feed-forward inference. \begin{table*} \centering \caption{Performance of Contrastive Inference vs Feed-Forward Inference on the VisDA Dataset} \vspace{-1.5mm} \begin{tabular}{c c c c c c c c c c c c c c} \hline & Plane & Cycle & Bus & Car & Horse & Knife & Bike & Person & Plant & Skate & Train & Truck & All \\ [0.5ex] \hline Feed-Forward & $27.6$ & $7.2$ & $\textbf{38.1}$ & $54.8$ & $43.3$ & $\textbf{4.2}$ & $\textbf{72.7}$ & $\textbf{8.3}$ & $28.7$ & $22.5$ & $\textbf{87.2}$ & $2.9$ & $38.1$\\ Contrastive & $\textbf{39.9}$ & $\textbf{27.6}$ & $19.6$ & $\textbf{79.9}$ & $\textbf{73.5}$ & $2.7$ & $46.6$ & $6.5$ & $\textbf{43.8}$ & $\textbf{30}$ & $73.6$ & $\textbf{4.3}$ & $\textbf{43.6}$\\ \hline \end{tabular} \label{table:VisDA} \end{table*} \vspace{-3mm} \subsection{Effect of training data}\label{subsec:D-Effect} We analyze the impact of training data in three additional cases: a) the base network $f()$ is trained on noisy images, b) the training and testing data are of a higher resolution than the $32\times 32 \times 3$ CIFAR-10 images, and c) the training and testing data are significantly different, as in the VisDA
dataset~\cite{peng2017visda}. In all three cases, we \emph{do not} use data from the training domain to learn $\mathcal{H}$. \vspace{-3mm} \subsubsection{Results on training with noisy data}\label{subsubsec:NoiseTrain} In Sections~\ref{Subsec:Distortion} and \ref{subsec:f-Effect}, $\mathcal{H}$ is used as a plug-in on top of a neural network $f()$ trained on pristine data $X$. Consider a network $f'()$ that has been trained on distorted data. We apply contrastive inference to $f'()$ and show that there is a performance gain when using $\mathcal{H}$. In this experimental setup, we augment the training data of CIFAR-10 with six distortions (Gaussian blur, salt and pepper, Gaussian noise, overexposure, motion blur, and underexposure) to train a ResNet-18 network $f'()$. We then test the network on CIFAR-10 test data corrupted by the same six distortions at $5$ progressively increasing levels. The distortions were obtained from~\cite{temel2018cure}. Note that while $f'()$ is trained on distortions, $\mathcal{H}$ is trained only on the original CIFAR-10 training data. Compared to feed-forward inference on the augmented model $f'()$, the proposed contrastive inference increases accuracy by a total of $1.12\%$ across all distortions and levels. On the highest, level $5$, distortion in the blur category, the increase is $6.87\%$. \vspace{-3mm} \subsubsection{Results on STL dataset} The proposed approach is implemented on higher resolution images of size $96\times96\times3$ from the STL-10 dataset~\cite{coates2011analysis}. The ResNet-18 architecture is adopted with an extra linear layer to account for the change in resolution. Note that STL-10 does not have a standardised distorted version. Hence, we use the same distortions from Section~\ref{subsubsec:NoiseTrain} to corrupt the STL-10 testset. Contrastive inference increases accuracy by an average of $2.56\%$ in all but the underexposure distortion. In underexposure, the accuracy drops by $1.05\%$.
In level $5$ of both blur categories, the contrastive performance gain is $6.89\%$. The decrease in performance in the underexposure distortion can be attributed to the change in low-level statistical characteristics that are discarded by the network's initial layers. \vspace{-3mm} \subsubsection{Results on VisDA dataset}\label{subsubsec:DA} We show validation results on a synthetic-to-real domain shift dataset called VisDA~\cite{peng2017visda} in Table~\ref{table:VisDA}. An ImageNet pre-trained ResNet-$18$ architecture is finetuned on synthetically generated images from the VisDA dataset and tested on real images from the same VisDA dataset. The dataset has $12$ classes with $152k$ training images. While there is an overall performance gain of $5.48\%$, the individual class accuracies indicate room for improvement. The feed-forward predictions, $P$, on \texttt{Knife}, \texttt{Person}, \texttt{Bike}, and \texttt{Train} are either too confident or their confidence is low. Hence, the $Pr(i|P)$ term in Eq.~\ref{eq:probability} is adversely affected by $P$, which in turn affects the contrastive predictions $\Tilde{y}$. \begin{comment} \subsubsection{Domain Shift Due to Dataset Difference} \label{Subsec:Domain_Shift} Domain shifts between datasets occur due to changes in lighting, camera pose, background, and acquisition sensors, among others. In this paper, we demonstrate the performance of contrastive methods against the feed-forward technique on CIFAR-10~\cite{krizhevsky2009learning}, STL~\cite{coates2011analysis}, and Office~\cite{saenko2010adapting} datasets. CIFAR-10 and STL consist of $9$ common classes. The sources of these images are however different, and the pair has been used as a common method of validating domain adaptation algorithms. The STL dataset has $5000$ training images and $8000$ testing images, each of size $96\times 96 \times 3$, derived from the Flickr website. For the experiments described in this section, we downsample STL images to $32\times 32 \times 3$ to match the CIFAR-10 dataset.
We also choose $9$ classes from each dataset. Hence we are left with $45000$ training images and $4500$ testing images. The Office dataset consists of images from $31$ classes derived from three sources: Webcam, DSLR, and Amazon. \input{Sections/tab_DA.tex} \end{comment} \section{Contrastive Explanations} \label{Sec:Contrast} Explanations are a set of rationales used to understand the reasons behind a decision~\cite{kitcher1962scientific}. In this section, we visually inspect the reasons behind decisions by answering \emph{`Why P, rather than Q?'} questions between the predicted class $P$ and the contrast class $Q$ for a network $f()$. We modify the popular Grad-CAM~\cite{selvaraju2017grad} explanation technique to obtain our contrastive visualizations. We first describe Grad-CAM before detailing the necessary modifications. \vspace{-3mm} \subsection{Grad-CAM} \label{subsec:Gradcam} Grad-CAM is used to visually justify the decision $P$ made by a classification network by answering \emph{`Why P?'}. The activations from the last convolutional layer of a network are used to create these visualizations since they possess high-level semantics while maintaining spatial information. For any class $i \in [1,N]$, the logit $y_i$ is backpropagated to the feature map $A_l$, where $A_l = f_l(x)$ and $l$ is the last convolutional layer. The gradients at every feature map are $\frac{\partial y_i}{\partial A_l^k}$ for a channel $k$. These gradients are global average pooled to obtain the importance score $\alpha_k$ of every feature map in the $l^{th}$ layer and $k^{th}$ channel. The individual maps $A_l^k$ are weighted by their importance scores $\alpha_k$ and summed to obtain a heat map. The Grad-CAM map at layer $l$ for class $i$ is given by $L^i = \text{ReLU}(\sum_{k=1}^K \alpha_k A^k_l )$. The Grad-CAM from an ImageNet-pretrained VGG-16~\cite{Simonyan15} for a correctly classified Spoonbill image is visualized in Fig.~\ref{fig:Contrast_examples}c.
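For concreteness, the Grad-CAM map $L^i$ described above reduces to a few array operations once the activations $A_l$ and the gradients $\partial y_i / \partial A_l$ at the last convolutional layer are available (e.g., captured with framework hooks); a minimal numpy sketch:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM at the last conv layer: global-average-pool the gradients
    of logit y_i w.r.t. each feature map A_l^k to get alpha_k, then form
    ReLU(sum_k alpha_k * A_l^k). Inputs have shape (K, H, W)."""
    alphas = gradients.mean(axis=(1, 2))             # (K,) importance scores
    cam = np.tensordot(alphas, activations, axes=1)  # weighted sum over k
    return np.maximum(cam, 0.0)                      # ReLU
```

The ReLU keeps only the regions that positively support class $i$; the resulting low-resolution map is then upsampled onto the input image for visualization.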
The red-highlighted regions in Fig.~\ref{fig:Contrast_examples}c explain why VGG-16 chose Spoonbill as the decision $P$. Hence, Grad-CAM visually explains the observed causality \emph{`Why P?'}. \vspace{-3mm} \subsection{Contrast Visualizations}\label{subsec:Contrast_Visuals} In Grad-CAM, the importance score $\alpha_k$, which is derived by backpropagating the logit $y_i$, weighs the activations in a layer $l$ based on $A_l^k$'s contribution towards the classification. The activations are projections on network parameters and hence have access to both the causal and contrastive information. Therefore, to extract contrastive explanations, the contrast importance score $\alpha_k^c$ must be the global average pooled contrastive features, i.e., $\alpha_k^c = \sum_u\sum_v \nabla_{W_l} J(W,x,P,Q)$, where $u,v$ index the spatial dimensions at layer $l$. This is achieved by backpropagating $J(W,x,P,Q)$ within the Grad-CAM framework to obtain the contrastive maps for class $Q$. Hence, while $\alpha_k$ highlights \emph{`Why P?'}, $\alpha_k^c$ denotes \emph{`Why P, rather than Q?'}. Note that there can be $N$ contrastive maps for a network trained to discriminate between $N$ classes. The contrast-emphasized regions for selected classes are shown in Fig.~\ref{fig:Contrast_examples}d-g. In Fig.~\ref{fig:Contrast_examples}d, VGG-16 indicates that the contrast between a spoonbill and its notion of the flamingo class resides in the lack of an S-shaped neck for a spoonbill. Similarly, it translates to not detecting white feathers in Fig.~\ref{fig:Contrast_examples}d to contrast between a spoonbill and a crane. The contrast between a band-aid and a spoonbill is in the presence of the neck and legs in the spoonbill. This is highlighted in Fig.~\ref{fig:Contrast_examples}f. Fig.~\ref{fig:Contrast_examples}e indicates that VGG-16 contrasts between a pig and a spoonbill based on the neck of the spoonbill.
The body and feather colors of the spoonbill are de-emphasized, but the shape of its legs and neck contribute towards VGG-16's decision. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Reasoning.png} \endminipage \caption{Contrastive explanations (CE) on Recognition. (a) Input $x$. (b) Grad-CAM of $x$ for predicted class $P$. (c) Representative image of nearest class $Q$. (d) CE for class $Q$. (e) Representative image of random class $Q$. (f) CE for random class $Q$ in (e). (g) CE when $P = Q$.}\label{fig:Reasoning} \end{center} \end{figure*} \subsection{Analysis} \label{subsec:Grad-CAM_Analysis} Consider an $L$-layered network $f()$ trained using the cross entropy loss $J()$ to differentiate between $N$ classes. The output of the network for an image $x$ after the last $L^{th}$ layer and before applying the loss is $y^L_{feat} = f_L(x)$, where $y^L_{feat}$ is a $1\times N$ vector containing the logits for each class. For brevity, we express $y^L_{feat}$ as $y$ in the remainder of this section. The general cross entropy loss for the label $i$ where $i \in [1,N]$ is, \begin{equation}\label{eq:CE} J(W, x, P, i) = -y_i + \sum_{j=1}^{N} e^{y_j}, \text{ where } y_j = f_L(x). \end{equation} During training, $i$ is the true label. \noindent\textbf{Why P:} Note that in Grad-CAM the logit of the class of interest is backpropagated. To ask \emph{`Why P?'}, the logit corresponding to class $P$, i.e., $y_P$, is backpropagated. We represent the backpropagated variable in Grad-CAM as $J_G(P, P)$ where, \begin{equation}\label{eq:Grad-CAM} J_G(P, P) = y_P. \end{equation} \noindent\textbf{Why P, rather than Q: }Contrast maps over $N$ classes are obtained by backpropagating the loss between the predicted class and a contrast class $Q$. We represent this backpropagated variable, obtained from Eq.~\ref{eq:CE}, as $J_C(P, Q)$, where $Q \in [1,N]$ and $Q\neq P$.
Approximating the exponent with its second order Taylor series expansion, we have, \begin{equation} J_C(P, Q) = -y_Q + \sum_{j=1}^{N} \bigg(1 + y_j + \frac{y_j^2}{2}\bigg). \end{equation} Note that for a well trained network $f()$, the logits of all but the predicted class are low. Hence $\sum_{j=1}^{N} \frac{y_j^2}{2} \approx \frac{y_P^2}{2}$. Substituting, \begin{equation}\label{eq:J_Full} J_C(P, Q) = -y_Q + N + \sum_{j=1}^N y_j + \frac{y_P^2}{2}. \end{equation} The quantity in Eq.~\ref{eq:J_Full} is differentiated, hence nulling the effect of the constant $N$. For a well trained network $f()$, small changes in $W$ do not adversely affect the sum of all logits $\sum_{j=1}^N y_j$. Hence, approximating its gradient by $0$ and discarding it, we can obtain $J_C(P, Q)$ as a function of two logits, $y_Q$ and $y_P$, given by, \begin{equation}\label{eq:J_Final} J_C(P, Q) = -y_Q + \frac{y_P^2}{2}. \end{equation} Compare Eq.~\ref{eq:J_Final} against Eq.~\ref{eq:Grad-CAM}. From Eq.~\ref{eq:Grad-CAM}, only $y_P$, the logit for class $P$, is backpropagated to obtain importance scores. Hence, the importance score $\alpha_k$ highlights features in the learned $l^{th}$ layer manifold where $f_l(x)$ projects onto patterns that justify $P$. In Eq.~\ref{eq:J_Final}, the backpropagated gradients are a function of $-y_Q$ and $y_P$. Hence, the contrast importance score $\alpha_k^c$ highlights the non-common features between classes $P$ and $Q$. These non-common features span the difference between $m_P$ and $m_Q$ from Fig.~\ref{fig:Contrast_examples}a. Recall that $m_P$ is the learned manifold where $x$ is classified as $P$ and $m_Q$ is the hypothetical contrast manifold where $x$ is labeled as $Q$. \noindent\textbf{Why P, rather than P: }When $Q = P$, Eq.~\ref{eq:J_Final} is written as, \begin{equation}\label{eq:J_PP} J_C(P, P) = -y_P + \frac{y_P^2}{2}.
\end{equation} Note that the importance scores when backpropagating the first term in $J_C(P, P)$ are the negative of the backpropagated variable in Eq.~\ref{eq:Grad-CAM}. The authors of Grad-CAM~\cite{selvaraju2017grad} claim that backpropagating the negative score $-y_P$ provides counterfactual explanations. Since $J_C(P, P)$ is a function of both $y_P$ and $-y_P$, our results are not in the same counterfactual modality. However, since we are backpropagating a loss function of manifold $m_P$ against itself, the importance scores highlight those features in the image which prevent $f()$ from predicting $x$ as $P$ with a higher confidence. If $J()$ were MSE, this modality reads as \emph{`Why not P with 100\% confidence?'}. \begin{comment} Also, since our contrastive gradients follow the Grad-CAM procedure for obtaining the contrastive explanation for $Q$, the gradients $\frac{\partial J(W, x, P, Q)}{\partial W_l}$ at layer $l$ are global average pooled to obtain the importance scores. $\frac{y_P^2}{2}$ is constant across all $K$ channels and then normalized. Hence, we can represent the loss as a function of only the $y_Q$ and $y_j$ terms by accounting for the differentiation and normalization. Applying Eq.~\ref{eq:J_Full} to all possible values of $Q$ except $P$ and averaging the results over $N-1$ contrast maps, we have, \begin{equation}\label{eq:J_Avg} \begin{split} \frac{1}{N-1}\sum_{i = 1, i\neq P}^N J(f, P, i) &= \frac{1}{N-1}\bigg[-\sum_{i = 1, i \neq P}^N y_i + \sum_{j=1}^N y_j \bigg],\\ \frac{1}{N-1}\sum_{i = 1, i\neq P}^N J(f, P, i) &= \frac{1}{N-1}y_P, \end{split} \end{equation} Hence averaging $N-1$ contrast maps approximates to backpropagating a scaled $y_P$, which is the same as Grad-CAM. Note that the scaling affects the importance score across all $K$ channels, which is then normalized.
\end{comment} \subsection{Qualitative Results} \label{subsec:Explanations_Qual} \subsubsection{Experiments} In this section, we consider contrastive explanations on large-scale and fine-grained recognition. Large-scale datasets, like ImageNet~\cite{ILSVRC15}, consist of a wide variety of classes. Fine-grained recognition is the subordinate categorization of similar objects, such as different types of birds or cars, among themselves~\cite{yang2012unsupervised}. We consider the Stanford Cars~\cite{KrauseStarkDengFei-Fei_3DRR2013} and CURE-TSR~\cite{Temel2017_CURETSR} traffic sign recognition datasets for fine-grained recognition. On ImageNet, we use PyTorch's ImageNet pretrained VGG-$16$~\cite{simonyan2014very} architecture to show results in Fig.~\ref{fig:Reasoning}. VGG-$16$ is chosen to be consistent with Grad-CAM. Note that we tested generating contrastive explanations on other architectures including AlexNet~\cite{krizhevsky2012imagenet}, SqueezeNet~\cite{iandola2016squeezenet}, VGG-$19$~\cite{simonyan2014very}, ResNet-$18,34,50,101$~\cite{he2016deep}, and DenseNet-$161$~\cite{huang2017densely}. On the Stanford Cars dataset, we replace and train the final fully connected layer of an ImageNet pre-trained VGG-$16$ architecture to discriminate between $196$ classes. For CURE-TSR, we use the trained network provided by the authors in~\cite{Temel2017_CURETSR}. The results from the fine-grained datasets and the cat-dog image used in Grad-CAM~\cite{selvaraju2017grad} are shown in Fig.~\ref{fig:Reasoning}. Note that the time taken to generate a contrastive explanation is the same as for Grad-CAM. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Wrong_Reasoning.png} \endminipage \caption{Contrastive explanations (CE) on Recognition. (a) Input $x$. (b) Grad-CAM of $x$ for predicted class $P$. (c) Representative image of nearest class $Q$. (d) CE for class $Q$. (e) Representative image of random class $Q$.
(f) CE for random class $Q$ in (e). (g) CE when $P = Q$.}\label{fig:Wrong_Reasoning} \end{center} \end{figure*} \subsubsection{Results} The visual answers to \emph{`Why P, rather than Q?'} from different datasets are provided in Fig.~\ref{fig:Reasoning}. We visualize the considered images $x$ in column (a) of Fig.~\ref{fig:Reasoning}. Their respective Grad-CAM~\cite{selvaraju2017grad} explanations are shown in Fig.~\ref{fig:Reasoning}b. Our contrastive explanations are visualized in columns (d), (f), and (g) of Fig.~\ref{fig:Reasoning}. The question answered by each contrastive explanation is shown below the image. Representative images of the considered \emph{Q} class are shown alongside in columns (c) and (e), respectively. Note that the network does not see these representative images to produce the explanations. It bases its explanations on its notion of the \emph{Q} class. The contrastive explanations provide an interesting insight into the decisions of neural networks. For instance, the network's explanation as to \emph{`Why Bull Mastiff, rather than Boxer?'} in Fig.~\ref{fig:Reasoning}d is human interpretable. From the Boxer's representative image, the two dogs differ in the shape of their jowls. This is shown in the red highlights below the Bull Mastiff's jaws. Note that the red highlighted portion in the contrastive explanation is below the red highlighted region from Grad-CAM's explanation. This leads humans to interpret that the network's notion of the difference between the two breeds is the size and/or shape of the jowls. However, contrastive explanations need not always be human interpretable. Consider the case when a network trained on a traffic sign dataset, CURE-TSR~\cite{Temel2017_CURETSR}, is asked \emph{`Why No-Left, rather than Stop?'}. The absence of the letters that spell STOP in the sign is the intuitive human response to the above question.
However, from the contrastive explanation in Fig.~\ref{fig:Reasoning}f, the network highlights the bottom left corner of the image. Among the 14 traffic signs that the network is trained to differentiate in the dataset, the Stop sign is the only class with an octagonal shape. Hence, the network has learned to check for a straight side in the bottom left. The absence of this side in $x$ indicates to the network that $x$ is not a STOP sign. This clearly illustrates the difference between the notions of classes held by humans and machines. When the difference between $P$ and $Q$ is not fine-grained, the contrastive explanations are similar to Grad-CAM explanations. This is illustrated by showing explanations between the predicted class $P$ and random classes in Fig.~\ref{fig:Reasoning}f for the ImageNet and Stanford Cars datasets. The difference between a dog and a bluejay is in the face of the dog - the same region highlighted by Grad-CAM to make decisions. The input Bugatti Veyron's sloping hood is sufficiently different from the boxy hood of the Audi that it is highlighted. The same region is used to classify it as a Bugatti. Fig.~\ref{fig:Reasoning}g provides contrastive explanations when $P$ is backpropagated. We use the question \emph{`Why not P with 100\% confidence?'} as the explanation modality. However, as noted before, this question is loss dependent. The results on the cat-dog image are human interpretable - the presence of the cat prevents the network from making a confident decision. \vspace{-0mm} \subsubsection{Results on noisy data}\label{subsubsec:Noisy_explanations} $P$ is the network prediction and hence is not controlled in the statement \emph{`Why P, rather than Q?'}. In this section, we add noise to $x$ to illustrate the effect of noise, as well as of an incorrect classification $P$, on contrastive explanations. The results are presented in Fig.~\ref{fig:Wrong_Reasoning}.
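The noise perturbation used in this experiment can be generated with a simple additive Gaussian model. The sketch below is illustrative only; the noise levels (\texttt{sigma}) and the $[0,1]$ image range are assumptions, not the exact settings used in our experiments:

```python
import numpy as np

def add_gaussian_noise(x, sigma, seed=None):
    """Add zero-mean Gaussian noise to an image in [0, 1] and clip back to range."""
    rng = np.random.default_rng(seed)
    noisy = x + rng.normal(loc=0.0, scale=sigma, size=x.shape)
    return np.clip(noisy, 0.0, 1.0)

# Probe explanations at increasing noise levels: small noise keeps the
# prediction P; large noise may flip it (e.g., spoonbill -> window screen).
image = np.full((224, 224, 3), 0.5)   # stand-in for the spoonbill image
mild = add_gaussian_noise(image, 0.05, seed=0)
severe = add_gaussian_noise(image, 0.5, seed=0)
```

Re-running the explanation pipeline on \texttt{mild} and \texttt{severe} then yields the second and third rows of Fig.~\ref{fig:Wrong_Reasoning}, respectively.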
The first row shows the pristine image of a spoonbill along with its Grad-CAM and contrastive explanations. The second row illustrates the scenario when Gaussian noise is added to the spoonbill image. This noise, however, is insufficient to change the prediction $P$. In the third row, the noisy spoonbill image is classified as a window screen. In all three rows, both the Grad-CAM and the contrastive explanations change. We first analyze Grad-CAM. For the pristine spoonbill image, the network infers based on the body of the spoonbill. When small noise is added, the correct classification is made based primarily on the legs. When a large amount of noise is added, the network incorrectly predicts the spoonbill as a window screen based on features around the bird. Among the contrastive explanations for why the image is not a flamingo, the network consistently highlights the neck. The results for the crane also highlight the body. The results for \emph{`Why not window screen with 100\% confidence?'} highlight the face and tail of the bird, which is intuitive. Hence, contrastive explanations provide additional context and information that is not available from observed causal explanations. We propose to tie inference to contrastive patterns rather than associative feed-forward patterns so as to obtain additional features to infer from. In other words, not only does the bird image have to have the requisite patterns for a spoonbill, it also has to satisfy the \emph{`Why not Q?'} modality where $Q\in [1,N]$. Inference based on all possible contrastive features is contrastive inference. In Section~\ref{Sec:Inference}, we illustrate the robustness of contrastive inference. \vspace{-3mm} \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Inference.png} \endminipage \caption{Feed-Forward Inference vs Contrastive Inference. (a) The features in blue before the final fully connected layer represent the feed-forward features. (b) The features for contrastive inference represent the change in a learned model when given a contrast hypothesis.}\label{fig:Contrastive_Inference}\vspace{-3mm} \end{center} \end{figure*} \section{Contrastive Features} \label{sec:Abduction} In visual space, we define contrast as the perceived difference between two known quantities. In this paper, we assume that the knowledge of the two quantities is provided by a trained neural network $f()$. The knowledge present in a feed-forward classification network is discriminatory, and its reasoning process is inductive. In other words, given a neural network $f()$ that is trained to classify between $N$ classes, the network recognizes patterns to infer a class $P$ for any given image $x$. We first describe the feed-forward features used to make inductive decisions before providing a representation-space definition of contrast. \vspace{-3mm} \subsection{Feed-Forward Features}\label{subsec:FF-Features} Consider an $L$-layered classification network $f()$ trained on $x \in X$ for a certain task. $f()$ stores knowledge about the task on $X$ in the parameters of its layers, $W_l$ and $b_l$, $\forall l \in [1,L]$. Given an input image $x$, traditional feed-forward networks that reason inductively project the image on these parameters to make a decision $P$.
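This projection-and-argmax decision rule at the final layer can be sketched in a few lines; the dimensions and random weights below are illustrative assumptions:

```python
import numpy as np

def feed_forward_decision(y_feat, W_L, b_L):
    """Project the penultimate feature onto the N class filters and pick the max.

    y_feat: (d,) feature from f_{L-1}; W_L: (d, N) weights; b_L: (N,) biases.
    """
    logits = W_L.T @ y_feat + b_L          # one projection per class filter
    return int(np.argmax(logits)), logits  # predicted class P and raw scores

# Toy example: d = 4 features, N = 3 classes.
rng = np.random.default_rng(0)
y_feat = rng.normal(size=4)
W_L, b_L = rng.normal(size=(4, 3)), np.zeros(3)
P, logits = feed_forward_decision(y_feat, W_L, b_L)
```

The filter with the maximum projection determines the inferred class, which is the standard inductive inference mechanism in classification networks.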
The intermediate activations from a layer $l$ in $f()$ are termed $y_{feat}^l$. These activations are projections on the span of the weights $W_l, \forall l \in [1,L-1]$, and are the features used to make the decision $P$; hence, the activations $y_{feat}^l$ are the feed-forward features. In both the explanation and inference applications, we use $y_{feat}^l$ as feed-forward inductive features. $y^l_{feat}$ are used to obtain explanations for decisions using Grad-CAM~\cite{selvaraju2017grad}, as described in Section~\ref{Sec:Contrast}. If $y_{feat}^{L-1}$ are the feed-forward features at the $(L-1)^{th}$ layer, a task-specific mechanism is used to infer the feed-forward prediction $P$, as explored in Section~\ref{Sec:Inference}. \vspace{-3mm} \subsection{Contrastive Features}\label{subsec:Contrastive-Features} Note that contrastive reasons are a subset of all possible abductive reasons. We adopt the definition of abduction from~\cite{velazquez2013epistemic}, which defines abduction as \emph{a process of belief change that is triggered by an observation and guided by the knowledge and belief that an agent has the ability to derive}. Hence, contrast is a measure of the change that an input image $x$ creates in the parameters of a trained network against the contrast class. In terms of the neural network representation space, the contrast between classes $P$ and $Q$ for an image $x$ is the difference between the manifolds that predict $x$ as class $P$ and as class $Q$. The network parameters $W_l$ and $b_l$ span a manifold where the given image $x$ belongs to a class $i, i \in [1,N]$. A toy classification example is shown in Fig.~\ref{fig:Contrast_examples}, where a learned manifold, $m_P$, is visualized in blue. On the learned manifold, a spoonbill is classified as a spoonbill. A hypothetical contrastive manifold, $m_Q$, shown in purple, differs from the blue manifold in that it classifies the spoonbill as a flamingo.
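One way to probe the change needed to move from the learned manifold toward such a contrastive manifold is to backpropagate a loss between the predicted class and the contrast class and read off the weight gradients. A minimal sketch with a one-layer softmax classifier follows; the linear model, the cross-entropy contrast loss, and all dimensions are illustrative assumptions, not the deep networks used in our experiments:

```python
import numpy as np

def contrastive_gradient(W, b, x, Q):
    """Gradient of cross-entropy against contrast class Q with respect to W.

    For a one-layer softmax classifier, d(CE)/dW = (softmax(Wx + b) - onehot(Q)) x^T.
    """
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax probabilities
    err = p.copy()
    err[Q] -= 1.0                     # p - onehot(Q)
    return np.outer(err, x)           # contrastive feature for class Q

rng = np.random.default_rng(0)
N, d = 3, 4
W, b, x = rng.normal(size=(N, d)), np.zeros(N), rng.normal(size=d)
P = int(np.argmax(W @ x + b))                                  # predicted class
grads = [contrastive_gradient(W, b, x, Q) for Q in range(N)]   # N contrast maps
```

Since $\|p - e_Q\|$ is smallest when $Q$ equals the predicted class, backpropagating against $P$ itself produces the smallest change, matching the intuition that the trained manifold already explains the prediction.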
The difference between the two manifolds is contrast. Note that $m_Q$ is hypothetical, and hence the difference between the two cannot be directly measured. In this paper, we measure the change required to obtain the contrastive manifold from the trained manifold. We use gradients to measure this change. The usage of gradients to characterize model change is not new. Neural networks whose objectives can be formulated as a differentiable loss function are trained using backpropagation~\cite{rumelhart1986learning}. The authors in~\cite{kwon2019distorted} used gradients with respect to weights to characterize distortions for sparse and variational autoencoders. Fisher Vectors use gradients to characterize the change that data creates within networks~\cite{jaakkola1999exploiting}, and were extended to classify images~\cite{sanchez2013image}. In detection applications on anomalous~\cite{kwon2020backpropagated}, novel~\cite{kwon2020novelty}, and out-of-distribution~\cite{lee2020gradients} data, gradients are used to characterize model change. We extract contrast for a class $Q$ when an image $x$ is predicted as $P$ by backpropagating a loss between $P$ and $Q$. Hence, for a loss function $J()$, contrast is $\nabla_{W_l} J(W,x,P,Q)$, where $W$ are the network weights. Note that $J()$ is a measure of contrastivity between $P$ and $Q$. $Q$ can be any one of the $[1,N]$ classes. Hence, there are $N$ possible contrastive features, given by $\nabla_{W_l} J(W,x,P,i)$ at any layer $l$ for $i \in [1,N]$. The feed-forward features $y^l_{feat}$ and the proposed contrastive features are analogous in the sense that they provide mechanisms to infer or justify decisions. In Sections~\ref{Sec:Contrast} and~\ref{Sec:Inference}, we demonstrate the applicability of these contrastive features in two applications: contrastive explanations and contrastive inference. \section{Related Works} \label{Sec:LitReview} In Section~\ref{sec:introduction}, we introduced abductive reasoning as an alternative reasoning scheme to existing induction that generalizes better to new situations. Abductive reasoning requires multiple hypotheses to infer decisions. Hypotheses are defined as answers to logical questions that aid inference. In this section, we first describe the possible logical questions - causal, counterfactual, and contrastive.
We then motivate our choice of contrastive questions to obtain abductive hypotheses. \subsection{Abductive Reasoning} \noindent \textbf{Abductive Reasoning: }Consider a classification network where, given a datapoint $x$, the network predicts the label of $x$ as $P$. The factors that cause $P$ are termed causal reasons~\cite{pearl2009causal, halpern2005causes}. An explanation for the generation of label $P$ can be defined as the answer to the question \emph{`Why P?'}. For instance, for a classification algorithm, Grad-CAM~\cite{selvaraju2017grad} provides visual explanations for \emph{`Why P?'}. However, these explanations are based on observational causality~\cite{lopez2017discovering}. Observed causality can hence be differentiated from interventionist causality, where the explanations for a predicted label $P$ change in response to active interventions. Active interventions can further be divided into two scenarios based on the location of the intervention - either in the data itself or in the predictions~\cite{mcgill1993contrastive}. Intervening in the data itself provides answers to \emph{`What if?'} questions. These answers are counterfactual explanations. However, all possible interventions in data can be long, complex, and impractical~\cite{lopez2017discovering}. Even with interventions, it is challenging to estimate if the resulting effect is a consequence of the intervention or due to other uncontrolled interventions~\cite{bottou2013counterfactual}. The second type of intervention occurs in the network predictions. These are \emph{`Why P, rather than Q?'} questions. The answers to these questions form contrastive reasons. The answers to observed causal, counterfactual, and contrastive questions constitute abductive reasons. Before describing the current formulations of abductive reasoning and frameworks, we relate the considered observed causal and contrastive reasons to Fig.~\ref{fig:Concept}.
\noindent \textbf{Abductive and Contrastive Reasons: }In this paper, we specifically choose contrastive questions to obtain abductive hypotheses and extract answers from neural networks by intervening in the predictions made. This is because pre-trained neural networks already have implicit contrastive knowledge. We first motivate this claim. In Fig.~\ref{fig:Concept}, consider that the knowledge base is a trained neural network $f()$. During testing, it is given the image of a spoonbill $x$, with the task of predicting the label $P$ of $x$. Consider that $f()$ correctly predicts the label $P$ as spoonbill. The feed-forward reason for $P$ is the detection of a pink and round body and a straight neck. These are reasons for \emph{`Why spoonbill?'} and represent observed causal reasons. The contrastive reasons answer the questions \emph{`Why spoonbill, rather than flamingo?'} and \emph{`Why spoonbill, rather than crane?'}. Here, $Q$ is either flamingo or crane. The contrastive reasons in Fig.~\ref{fig:Concept} visualize the knowledge base's notion of the difference between a spoonbill and a flamingo and between a spoonbill and a crane. While $P$ is constrained by the prediction from $f()$, $Q$ is user-defined and can be used to ask the network for other explanations, including \emph{`Why spoonbill, rather than band-aid?'} or \emph{`Why spoonbill, rather than pig?'}, as long as the network $f()$ has learned these classes. $Q$ can be any of the learned classes in the network. If $f()$ is trained to classify between $N$ classes, $Q$ can take any value in $[1, N]$. Hence, for a recognition network with some prediction $P$ and trained to discriminate between $N$ classes, there are potentially $N$ possible contrastive reasons. The network acts as a discriminatory knowledge base that has implicitly learned the contrastive information between $P$ and $Q$. Therefore, we choose the contrastive model of abductive reasoning in our framework.
\noindent\textbf{Reasoning versus Explanations: }Reasoning is the process by which a network $f()$ infers a decision $P$. The authors in~\cite{goguen1983reasoning} consider reasoning to be a mental process that can only be surmised based on how it manifests - in terms of explanations and inference. Explanations are comprehensible justifications of reasoning. In other words, explanations are justifications that a network $f()$ provides after a decision $P$ is made. Inference is the outcome of reasoning, i.e., the decision $P$ itself. Therefore, for all trained networks, we can ask and extract observed causal~\cite{selvaraju2017grad}, counterfactual~\cite{goyal2019counterfactual}, and contrastive explanations~\cite{prabhushankar2020contrastive}. Similarly, inference can occur based on causal, counterfactual, or contrastive features. They can then be termed causal, counterfactual, or contrastive inference. Two of these explanatory schemes and inference mechanisms are shown in Fig.~\ref{fig:Concept}. Technically, the explanations come after decisions, but we use explanations as a visual substitute for reasoning in Fig.~\ref{fig:Concept}. \subsection{Abductive Inference and Learning} \noindent \textbf{Abductive Inference: }Abductive inference is generally described as Inference to the Best Explanation (IBE)~\cite{harman1965inference}. While inductive inference associates learned patterns and extrapolates them to unseen test cases, IBE postulates all possible explanations for the occurrence of the test case and picks the class with the \emph{best} explanation. Note that the novelty of existing works is derived from finding a metric that describes which explanation is \emph{best}. Traditionally, the simplicity of an explanation was seen as a stand-in for the best explanation. Recent works for IBE in machine learning are postulated under the abductive framework.
\noindent \textbf{Abductive Framework: }Abductive reasoning and abductive inference together form the abductive framework. Abductive reasoning is common-sense reasoning rather than mathematical reasoning. As such, its utility lies in specific real-world scenarios, including police investigations~\cite{carson2009abduction}, medical diagnostics~\cite{magnani1992abductive}, automobile diagnostics~\cite{finin1989abductive}, archaeology~\cite{thagard1997abductive}, ontology~\cite{elsenbroich2006case}, and psychology~\cite{shank1998extraordinary}, among others. In all these cases, the formulation of abduction is based on first-order logic, whereas modern deep learning algorithms have far exceeded the capabilities of first-order logic. In machine learning, an abductive framework is primarily seen as a logical process. The authors in~\cite{kakas2000abductive} describe the logical process of abduction and contrast it against an inductive logical process. The authors in~\cite{dai2019bridging} instantiate abductive reasoning as a first-order logical program that runs in parallel to an existing perception model to decipher mathematical equations in an image. The first-order program requires knowledge concepts that are input into the system apart from the data itself. A generic CNN serves as the perception model that recognizes the symbols in the given image, while an external abductive model holds knowledge of the structure of equations and of the recursive bitwise operations. \subsection{Contrastive Inference and Learning} In this paper, we divine the knowledge concepts intrinsically from the perception network. We do so by defining the knowledge concept as the difference between the $P$ and $Q$ entities in the contrastive reasoning statement \emph{`Why P, rather than Q?'}. The name is derived from the psychological concept of contrastive inference.
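For contrast with this intrinsic approach, the external-knowledge-base style of abduction described earlier can be sketched schematically. The toy probabilities and the arithmetic rule below are assumptions for illustration; this is not the cited implementation.

```python
# Schematic sketch: a perception model emits per-symbol label
# distributions, and an external knowledge base (here, integer arithmetic
# over "a+b=c") validates the reading and abduces a correction for the
# least confident digit when the consistency check fails.
def consistent(symbols):
    """Knowledge base: does the parse 'a+b=c' hold arithmetically?"""
    a, _, b, _, c = symbols
    return int(a) + int(b) == int(c)

def abduce(per_symbol_probs):
    # Feed-forward (inductive) reading: most probable symbol per position.
    reading = [max(p, key=p.get) for p in per_symbol_probs]
    if consistent(reading):
        return "".join(reading)
    # Abduction: revise the least confident digit so the KB is satisfied.
    digit_positions = (0, 2, 4)  # positions of a, b, c in "a+b=c"
    weakest = min(digit_positions, key=lambda i: per_symbol_probs[i][reading[i]])
    for d in "0123456789":
        candidate = reading[:weakest] + [d] + reading[weakest + 1:]
        if consistent(candidate):
            return "".join(candidate)
    return "".join(reading)  # no consistent repair found

# The perception model misreads the result digit "2" as "3", with low confidence.
probs = [{"1": 0.9}, {"+": 0.99}, {"1": 0.9}, {"=": 0.99}, {"3": 0.4, "2": 0.35}]
# abduce(probs) -> "1+1=2"
```

Note how the knowledge concepts (the equation structure and the arithmetic) must be supplied from outside the data, which is exactly the requirement our intrinsic derivation avoids.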
\noindent\textbf{Contrastive Inference: }In developmental psychology, contrastive inference is a mechanism that allows for the divination of entities that are not mentioned linguistically~\cite{kronmuller2014show}. For instance, the phrase \emph{`Hand me the large cup'} implies the existence of multiple cups of varying sizes without the need to mention their existence explicitly. Human-interpretable explanations have been shown to be contrastive, i.e., explanations are based on a contrast to a known fact. Contrastive inference provides a way of obtaining pragmatic explanations and hence is a form of abductive reasoning~\cite{kronmuller2014show,lipton2003inference}. \noindent\textbf{Contrastive Learning: }The term contrastive learning denotes a framework wherein negative samples of training data that are semantically similar to positive samples are mined~\cite{arora2019theoretical,NIPS2013_5007,chen2020simple}. In~\cite{NIPS2013_5007}, contrastive learning is used to differentiate between specific topics within mixture models. The authors in~\cite{chen2020simple} propose a contrastive learning framework where multiple data augmentation strategies are used to train a network overhead with a contrastive loss. Note that the proposed method does not derive from this existing contrastive learning setup. \noindent \textbf{Proposed Contrastive Inference: }We now comment on the difference between the proposed contrastive inference and existing abductive frameworks. In any given scenario, an abductive inference framework makes one of $N$ possible decisions based on $N$ independent hypotheses that are extracted to verify the validity of the $N$ separate decisions. The hypotheses are extracted from a knowledge base; one hypothesis is then chosen and its corresponding decision made. Consider such a framework from~\cite{dai2019bridging}, where an external model serves as the knowledge base, applied to large-scale recognition datasets.
In the example of Fig.~\ref{fig:Concept}, the knowledge base must be manually initialized with the neck and body categories for existing reasoning mechanisms to function well. Consider the number of semantic categories requiring manual allocation if the differentiation is among the $1000$ classes of the ImageNet dataset. Initializing the knowledge base is computationally tedious and defeats the purpose of learning from data. In this paper, we address this challenge by extracting contrast-based abductive reasons directly from the network. We start by defining contrast both in the visual space and in the representation space spanned by the neural network. \begin{comment} we first define contrast as a way of obtaining abductive reasons. The proposed contrastive inference framework learns the base network $f()$ in a supervised feed-forward fashion. We then extract the contrast between the predicted class of a data point $P$ and a contrast class $Q$. These are denoted as contrastive reasons and visualized. The data is then represented as a measure of change that it creates in the trained base network against all classes. Such a representation is shown to be robust to distortions. The proposed inference framework is novel in its use of the base network $f()$, the interpretation of abduction and contrast, and its applicability in the inference stage across pretrained networks. \noindent\textbf{Data characterization using gradient based features} Use of gradients to characterize data is not new. The authors in \noindent\textbf{Domain Adaptation} The authors in~\cite{saenko2010adapting, hendrycks2019benchmarking, Temel2017_CURETSR, geirhos2018generalisation} have demonstrated the vulnerability of feed-forward neural networks to domain shifts during test time. Domain shifts due to changes in lightning, camera pose, background, acquisition sensors, and synthetic-to-real shifts.
The techniques proposed to counter degradation in accuracy under these domain shifts are broadly classified under domain adaptation~\cite{saenko2010adapting, sun2016deep}. Generally, domain adaptation techniques propose methods to align statistics and distributions between source and target domains by utilizing a few images from target distribution. Note that a different network is trained for every new domain to be adapted to. \noindent\textbf{Robustness to Distortion} Distortions include image acquisition errors, environmental conditions during acquisition, transmission and storage errors among others. CIFAR-10-C~\cite{hendrycks2019benchmarking} consists of $19$ real world distortions each of which has five levels of degradation. Hence, a single target domain adaptation algorithm would require training a network $19\times5$ times. Therefore, domain adaptation techniques are generally not used in this setting. Current techniques that alleviate the drop in feed-forward accuracy require distorted training data. The authors in~\cite{vasiljevic2016examining} show that finetuning or retraining networks using distorted images increases the performance of recognition under the same distortion. However, performance between different distortions is not generalized well. For instance, training on gaussian blurred images does not guarantee a performance increase in motion blur images~\cite{vasiljevic2016examining, geirhos2018generalisation}. Other proposed methods include training on style-transferred images~\cite{geirhos2018imagenet}, training on adversarial images~\cite{hendrycks2019benchmarking}, and training on simulated noisy virtual images~\cite{Temel2017_CURETSR}. All these works require additional data during training. \end{comment} \section{Introduction}\label{sec:introduction}} \IEEEPARstart{S}{upervised} learning entails learning by association. 
In the field of psychology, associative learning involves the formation of previously unknown associations between stimuli and responses~\cite{byrne2014learning}. In supervised machine learning, the stimulus is the data provided to the algorithm and the response is the label required of the algorithm. During training, the algorithm is conditioned to associate the data with its label. Hence, the goal of training is to learn the patterns in the data that either cause or correlate with the given label. During testing, the algorithm searches for the learned patterns to predict the label. This act of predicting the label is termed inference; the means of arriving at the inference is reasoning. An example of supervised machine learning is object recognition, where a label is inferred based on the learned patterns in a given image. Recent advances in machine learning have enabled state-of-the-art performances in recognition tasks~\cite{krizhevsky2012imagenet,he2016deep}. Specifically, recognition algorithms have surpassed the top-5 human error rate of $5.1\%$ on ImageNet~\cite{russakovsky2015imagenet}, where the error rate measures the inaccurate predictions made by humans or by the network. The reduction in error rate can be traced to advancements in the learning process~\cite{kingma2014adam}, regularization~\cite{srivastava2014dropout}, and hardware utilization~\cite{li2017analysis}, among others. However, all these advancements rely on the traditional feed-forward associative reasoning and inference scheme. The author in~\cite{olshausen201427} examines the conventional feed-forward model of inference in both human vision and machine vision, where a set of neurons extracts object-selective features in a hierarchical fashion. Such a feed-forward representation leads to a task-specific, training-data-specific inference mechanism. For instance, a change in the data domain during testing requires retraining to obtain a new representation conducive to the test domain.
This is because the feed-forward model follows an inductive reasoning approach to inference. Inductive reasoning is a branch of causal inference in perception~\cite{shams2010causal}. Inference based on induction provides a conclusion with uncertainty, which allows for speculation regarding the cause. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Concept.png} \endminipage \caption{Discriminative features are identified in Feed-Forward reasoning frameworks. Contrasts between observed facts and a known knowledge base are used as reasons in Contrastive Inference.}\label{fig:Concept} \end{center} \end{figure*} Contrary to an inductive framework, an abductive framework creates a hypothesis and tests its validity without considering the cause. Abductive reasoning allows humans to better generalize to new situations. Extensive work has been conducted to understand the development of the human brain and of visual perception based on abductive reasoning~\cite{national2018people}. This form of reasoning was introduced by the philosopher Charles Sanders Peirce~\cite{peirce1931collected}, who saw abduction as a reasoning process from effect to cause~\cite{paul1993approaches}. In contrast, induction conjectures general laws from particular instances. Peirce further connected induction and abduction by stating that \emph{Induction is an argument which sets out from a hypothesis, resulting from previous Abduction}. In this paper, we follow this principle to modify inductively trained feed-forward neural networks so that they reason and infer abductively for the task of object recognition. In other words, we use an inductively trained neural network to extract multiple hypotheses regarding the input's class and use them as features for inference. Specifically, we extract hypotheses based on the network's assertion of contrast between classes.
We call this inference mechanism \emph{contrastive inference} and its corresponding reasoning \emph{contrastive reasoning}. To explain this concept of contrast, let us take the toy example of two subjects distinguishing between three classes of birds: spoonbill, flamingo, and crane. This is illustrated in Fig.~\ref{fig:Concept}. All three are shallow-water birds with fine-grained physical differences. Spoonbills and flamingos have round and pink bodies; however, they differ primarily in the shape of their necks. A flamingo has an S-shaped neck while a spoonbill does not. Spoonbills and cranes have long necks but differently colored bodies. Consider that images of all three birds have been shown to two subjects beforehand, thereby training them to distinguish between the three birds. Given a new test image of a spoonbill, subject A correctly recognizes the image as a spoonbill based on the shape and color of the bird's body and its beak. Subject B also recognizes the image as a spoonbill but does so by noticing that the neck is \emph{not} S-shaped and the body is \emph{not} white. While both subjects infer correctly, their reasoning mechanisms are complementary. Subject B infers the \emph{contrast} between known facts, the S-shaped neck and the white body, and their absence in the test image to make the right classification. Hence, subject B constructs multiple hypotheses and contrastively reasons why the bird cannot be either a flamingo or a crane. Inference that is built on these contrastive reasons or features is termed \emph{contrastive inference}. On the contrary, subject A reasons inductively to infer the correct class by identifying the patterns in the image corresponding to a spoonbill. This is the feed-forward inference currently used by neural networks.
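The two reasoning routes in the toy example can be made concrete with a small sketch; the attribute encoding below is an assumption for illustration only, not a learned representation.

```python
# Purely illustrative encoding of the toy example above; the attribute
# names and values are assumptions for the sketch, not learned features.
KNOWN = {
    "spoonbill": {"neck": "straight", "body": "pink"},
    "flamingo":  {"neck": "s-shaped", "body": "pink"},
    "crane":     {"neck": "straight", "body": "white"},
}

def feed_forward(test):
    """Subject A: pick the class whose known attributes best match the image."""
    return max(KNOWN, key=lambda c: sum(KNOWN[c][k] == v for k, v in test.items()))

def contrastive(test):
    """Subject B: rule out every class that disagrees with the image."""
    survivors = [c for c, attrs in KNOWN.items()
                 if all(attrs[k] == v for k, v in test.items())]
    return survivors[0] if len(survivors) == 1 else None

image = {"neck": "straight", "body": "pink"}  # the test spoonbill
# Both routes agree on "spoonbill", via complementary reasoning.
```

The matching route mirrors subject A's inductive inference, while the elimination route mirrors subject B's hypothesize-and-rule-out contrastive inference.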
\noindent \textbf{Paper structure and novelty: }In this paper, we rethink existing inductive-reasoning-based feed-forward networks and provide an alternative reasoning and inference scheme based on abduction. We first describe abductive reasoning and its usage in artificial intelligence and machine learning in Section~\ref{Sec:LitReview}. We provide a structure to the possible abductive reasons and introduce contrastive reasons as a subset of abductive reasons. We further show that current formulations of abductive reasoning do not allow for large-scale, learning-based solutions. The novelty of our work lies in formulating abductive reasons as contrasts that can be extracted from trained deep networks. This is motivated and described in Section~\ref{sec:Abduction}. We extract and visualize these contrastive reasons in Section~\ref{Sec:Contrast} as contrastive visual explanations. We then qualitatively examine the results and the insights provided by such contrastive explanations. In Section~\ref{Sec:Inference}, we describe the inference mechanism that uses the contrastive reasons to make decisions. We finally enhance the robustness of existing models to noise in Section~\ref{sec:Results} and conclude in Section~\ref{Sec:Discussion}. \begin{comment} The contributions of this paper include: \begin{itemize} \item Formulating and defining contrast and contrastive reasons as functions of pre-trained neural networks. \item Extracting contrastive reasons and visualizing them as explanations. \item Enhancing the robustness of existing neural networks to noise. \end{itemize} \end{comment} \begin{comment} Consider a neural network $f()$ trained to classify images among one of $N$ classes on a dataset $X$. Training occurs by asking the network to predict labels and backpropagating the errors to correct the network weights $W$ and biases $b$. The network learns to correlate patterns in data $X$ with the label in $Y, \forall Y \in [1,N]$.
This is a feed-forward inductive learning framework. During inference, the network correlates the patterns it observes in a test image $\Tilde{x}$ with the corresponding label $\Tilde{y}$ that it has trained for. Hence, the inductively trained feed-forward neural network has inferred abductively. Instead of feed-forward inference We introduce a term inspired by psychology - contrastive inference - as a special case of the abductive framework. Contrastive inference is a mechanism that allows for the divination of entities that are not mentioned linguistically~\cite{kronmuller2014show}. For instance, the phrase '\emph{Hand me the large cup}' implies the existence of multiple cups of varying sizes without the need for mentioning their existence explicitly. Contrastive inference has been studied in the field of developmental psychology~\cite{kronmuller2014show,lipton2003inference} as a way of obtaining pragmatic explanations. Human-interpretable explanations have been shown to be contrastive - explanations are based on a contrast to a known fact~\cite{miller2018contrastive, miller2018explanation}.To solidify this concept of contrast, let us take the toy example of two subjects distinguishing between Tom and Top Cat as illustrated in Fig.~\ref{fig:Concept}. Both cats differ in the color of their furs. Top Cat has yellow fur while Tom has darker gray fur. Consider that images of both Tom and Top Cat have been shown to two subjects beforehand. Also, the subjects have been trained to distinguish between the two cats based on the color of their fur. Given a new test image, subject A correctly identifies the image as Tom based on the darker gray shade of fur. Subject B also identifies the image as Tom but does so by noticing that the fur is \emph{not} yellow. While both the subjects classify correctly, the inference mechanism is complementary. 
Subject B infers the \emph{contrast} between a known fact - the yellow fur of Top Cat - and its lack thereof in test image to make the right classification. Inference that makes use of these contrastive explanations or features is termed as \emph{contrastive inference}. On the contrary, subject A infers the correct class based on identifying the gray fur in the image, which is based on the feed-forward inference that is currently being used by neural networks. In this paper, for a given image $x$, we hypothesise that $x$ independently belongs to all $N$ possible classes and observe the contrast it creates in a pretrained model. The contrast refers to what it expects the image $x$ to be in a feed-forward fashion vs what we hypothesise it is. This contrast to all possible $N$ classes is used as a feature to describe $x$ so that a conclusion is made. This is illustrated in Fig.~\ref{fig:Concept}. Consider a binary image classifier trained to differentiate between two cats - Tom and Top Cat. Both cats differ in the color of their furs. Top Cat has yellow fur while Tom has darker gray fur. Consider that images of both Tom and Top Cat have been shown to two subjects beforehand. Also, the subjects have been trained to distinguish between the two cats based on the color of their fur. Given a new test image, subject A correctly identifies the image as Tom based on the darker gray shade of fur. Subject B also identifies the image as Tom but does so by noticing that the fur is \emph{not} yellow. While both the subjects classify correctly, the inference mechanism is complementary. Subject B infers the \emph{contrast} between a known fact - the yellow fur of Top Cat - and its lack thereof in test image to make the right classification. Inference that makes use of these contrastive explanations or features is termed as \emph{contrastive inference}. 
On the contrary, subject A infers the correct class based on identifying the gray fur in the image, which is based on the feed-forward inference that is currently being used by neural networks. In recent times, deep learning has allowed for \emph{better} inference predictions. \emph{Better} is measured through the number of correct accuracy. This has come about due to advances in learning processes, regularization, hardware utilization among others. Recent advances in machine learning has allowed state-of-the-art performances in image classification tasks~\cite{russakovsky2015imagenet}\cite{krizhevsky2012imagenet}\cite{he2016deep}. Specifically, classification algorithms surpassed top-5 human error rate of $5.1\%$ on ImageNet~\cite{russakovsky2015imagenet}. But the system of reasoning has remained the same Contrastive inference is a mechanism that allows for the divination of entities that are not mentioned linguistically~\cite{kronmuller2014show}. For instance, the phrase '\emph{Hand me the large cup}' implies the existence of multiple cups of varying sizes without the need for mentioning their existence explicitly. Contrastive inference has been studied in the field of developmental psychology~\cite{kronmuller2014show,lipton2003inference} as a way of obtaining pragmatic explanations. Human-interpretable explanations have been shown to be contrastive - explanations are based on a contrast to a known fact. \textbf{Supervised - Inductive - Causal - Counterfactual - Contrastive - Abductive - Paper summary} Consider a classification framework where given a datapoint $x$ and a trained network $f(X)$, trained on $x\in X$, provides a label $P$. An explanation for the generation of label $P$ can be defined as the answer to the question \emph{'Why P?'}. The factors that cause $P$ are termed causal reasons~\cite{pearl2009causal, halpern2005causes}. 
For instance, for a classification algorithm, Grad-CAM~\cite{selvaraju2017grad} provides visual explanations for \emph{'Why P?'}. However, these explanations are based on observational causality~\cite{lopez2017discovering}. Observed causality can hence be differentiated from interventionist causality where the explanations for a predicted label $P$ changes in response to active interventions. Active interventions can further be divided into two scenarios based on the location of interventions - either in the data itself or in the predictions made~\cite{mcgill1993contrastive}. Intervening within data itself provides answers to \emph{'What if?'} questions. This is termed as counterfactual explanations. However, all possible interventions in data can be long and complex. Moreover, observed real data rarely affords interventions~\cite{lopez2017discovering}. Even with interventions, it is challenging to estimate if the resulting effect is a consequence of the intervention or due to other uncontrolled interventions~\cite{bottou2013counterfactual}. The second type of interventions can occur in the network predictions. These are questions of the type \emph{'Why P, rather than Q?'} questions. This is called abductive reasoning. \begin{figure}[!htb] \begin{center} \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/ContrastiveVsFF.pdf} \endminipage \caption{Discriminative features are identified in Feed-Forward Inference. Contrast between an observed fact and a known knowledge base is used as features in Contrastive Inference.}\label{fig:Concept \end{center} \end{figure} Supervised learning entails learning by association. In the field of psychology, associative learning involves the formation of previously unknown associations between stimuli and responses~\cite{byrne2014learning}. In supervised machine learning, stimuli refers to the data provided to the algorithm and response is the label required of the algorithm. 
During training, the algorithm is conditioned to associate between the data and its label. The algorithm learns the patterns in data that cause the label. During testing, the algorithm searches for the learned pattern to choose the label. The act of predicting the label is termed as inference. The justification for the chosen label is termed as reasoning. Consider a binary classifier trained to differentiate between two cats - Tom and Top Cat. Tom has gray fur while Top Cat has yellow fur. Current supervised learning algorithms learn to associate the color of the fur with the names of the cats. At inference, the algorithms identify the color of fur to label the cat. This is termed as associative or feed-forward inference. However, after learning the color differences between the two cats, if an inference mechanism identifies that the color of the fur is not that of the other cat. We term this scheme of inference as contrastive inference. \begin{figure*}[!htb] \begin{center} \minipage{0.99\textwidth}% \includegraphics[width=\linewidth]{Figs/Contrastive_Reasons.png} \endminipage \caption{Discriminative features are identified in Feed-Forward Inference. Contrast between an observed fact and a known knowledge base is used as features in Contrastive Inference.}\label{fig:Visualizations \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/FeaturesCombined.pdf} \endminipage \caption{Contrastive Feature Generation. (a) $m(cat)$ in blue is the original manifold that recognizes a cat as a cat. $\Tilde{m}(frog)$ is the contrastive hypothesis provided. Change between the two is termed contrast. (b) Contrastive feature generation across hypotheses.}\label{fig:Contrastive_Features} \end{center} \end{figure*} Traditionally, human visual perception is seen as an inference problem~\cite{olshausen201427}. 
The author in~\cite{olshausen201427} examines the conventional feed-forward model of inference in both human vision and machine vision, where a set of neurons extract object selective features in a hierarchical fashion. These features are pooled to create a global representation that is invariant to pose, and shape among others. Finally, a label is attached to each representation. Such a feed-forward representation leads to a task-specific, trained-data specific inference mechanism. For instance, a change in data domain during testing requires retraining to obtain a new representation conducive to test domain. This is because the feed-forward model follows a inductive approach to inference. This concept is first formalized. Note that inductive reasoning is a branch of causal inference in perception~\cite{shams2010causal}. Inference based on induction provides a conclusion with uncertainty which allows for speculation regarding cause. Common techniques in deep learning including~\cite{krizhevsky2012imagenet, he2016deep} use inductive inference. Contrary to inductive inference, abductive inference creates a hypothesis and tests its validity without considering the cause. In this paper, for a given image $x$, we hypothesise that $x$ independently belongs to all $N$ possible classes and observe the contrast it creates in a pretrained model. The contrast refers to what it expects the image $x$ to be in a feed-forward fashion vs what we hypothesise it is. This contrast to all possible $N$ classes is used as a feature to describe $x$ so that a conclusion is made. Extensive work has been conducted to understand the development of human brain and visual perception based on abductive reasoning~\cite{national2018people}. Abductive reasoning allows humans to better generalize to new situations. 
This form of reasoning was introduced by the philosopher Charles Sanders Peirce~\cite{peirce1931collected}, who saw abduction as a reasoning process from effect to cause~\cite{paul1993approaches}. In contrast, induction conjectures general laws from particular instances. Peirce further connected induction and abduction by stating that \emph{Induction is an argument which sets out from a hypothesis, resulting from previous Abduction}. We demonstrate that contrastive inference provides similar results as feed-forward inference in classification networks. However, when the test domain $Z$ is not the same as the training domain $X$, we illustrate that the proposed contrastive inference exceeds the performance of feed-forward inference. Domain shift can be because of change in sensor, background, and image acquisition under environmental challenges, distortions, acquisition errors among others. We also illustrate that the proposed contrastive inference can be applied on HVS related tasks. Section~\ref{sec:Contrastive Features} introduces contrastive features and provides a mechanism to obtain them. In Section~\ref{sec:Contrastive Inference}, we place these contrastive features in an inference setting. Section~\ref{sec:Exp} motivates and describes the applications considered to illustrate the benefits of contrastive inference against feed-forward inference. We finally conclude the paper in Section~\ref{sec:Conclusion}. In this paper, we combine contrastive explanations, abductive reasoning, and neural networks to provide inference frameworks. In Section~\ref{Sec:Contrast}, we formalize contrast for pre-trained neural networks. In Section~\ref{Sec:LitReview}, we go through other works in the field. We introduce the proposed methods for the inference framework in Section~\ref{Sec:Inference}. The validations are provided in Section~\ref{Sec:Results}. 
We close the paper in Section~\ref{Sec:Discussion} by discussing the broader implications of the proposed reasoning framework. \end{comment} \section{Analysis} \label{Sec:Analysis} \begin{figure*}[!htb] \begin{center} \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Data.png} \endminipage \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Iter.png} \endminipage \caption{Contrastive Feature Generation. (a) $m(cat)$ in blue is the original manifold that recognizes a cat as a cat. $\Tilde{m}(frog)$ is the contrastive hypothesis provided. Change between the two is termed contrast. (b) Contrastive feature generation across hypotheses.}\label{fig:Contrastive_Features} \end{center} \end{figure*} In this section, we analyze the loss component that is used to derive contrast. In addition, we report the timing requirements for each of the methods described in Section~\ref{Sec:Contrast_Features}. We also provide a methodology to accelerate the derivation of contrast by adversarial perturbations. The results of the accelerated method are compared against those of the standard method described in Section~\ref{Subsec:Adversarial}. Finally, we analyze the effect of a \emph{well-trained} versus a \emph{badly trained} $f()$ on the overall derivation of contrast. \subsection{Effect of Loss Functions on Contrast} \label{Subsec:Loss} \subsection{Timing Analysis} \label{Subsec:Timing} \textbf{Accelerating adversarial gradients} \subsection{Contrastive Inference as a Plug-In} \label{Subsec:Plug-In} \textbf{Distortion based Plug-In} \textbf{Domain Shift based Plug-In} \subsection{Effect of Training of $f()$} \label{Subsec:Training} Here, we compare the performance of the contrast extracted from the base network $f()$ when it is well trained against when it is not. We consider three scenarios: $f()$ trained with and without data augmentation, $f()$ trained with limited data, and $f()$ trained in limited time.
Here, we ascertain that the performance gain obtained by contrast is not due to a poorly trained $f()$. In fact, we show that the contrast obtained by a well-trained network $f()$ is more robust than that of a poorly trained $f()$. \textbf{Contrast with and without Data Augmentation} Data augmentation is a technique first used in~\cite{lecun1998gradient} to avoid overfitting, correct class imbalance, and artificially inflate the size of a dataset~\cite{shorten2019survey}. A number of techniques are used for data augmentation, including geometrical transformations, random erasing, and adversarial training, among others. In this paper, we use simple geometrical transformations to enhance the performance of the base networks $f()$. We test how contrastive inference changes with and without augmentation on $f()$. We then train our MLP $H()$ with the same geometrically transformed images and show that the results improve. This shows that any method that increases the accuracy of existing networks works on contrastive features as well. \begin{table}[h!] \small \centering \caption{Data Augmentation Based Contrastive Inference} \vspace{1mm} \begin{tabular}{||c c c c||} \hline ResNet $f()$ & $f()$ w/o D/A & $f()$ with D/A & $H()$ with D/A \\ [0.5ex] \hline\hline Feed-Forward (\%) & $91.02$ & $93.01$ & $93.09$ \\ [1ex] \hline Contrast-G (\%) & $90.94$ & $93.14$ & $92.88$ \\ [1ex] \hline \end{tabular} \label{table:Original_Results} \end{table} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Data_Tests.png} \endminipage \caption{Contrastive Feature Generation. (a) $m(cat)$ in blue is the original manifold that recognizes a cat as a cat. $\Tilde{m}(frog)$ is the contrastive hypothesis provided. Change between the two is termed contrast.
(b) Contrastive feature generation across hypotheses.}\label{fig:Contrastive_Features} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Epoch_Tests.png} \endminipage \caption{Contrastive Feature Generation. (a) $m(cat)$ in blue is the original manifold that recognizes a cat as a cat. $\Tilde{m}(frog)$ is the contrastive hypothesis provided. Change between the two is termed contrast. (b) Contrastive feature generation across hypotheses.}\label{fig:Contrastive_Features} \end{center} \end{figure*} \textbf{Contrast when $f()$ is trained on limited data} \textbf{Contrast when $f()$ is trained in limited time} \section{Contrastive Inference} \label{Sec:Inference} In Section~\ref{Sec:LitReview}, we stated that abductive reasoning is more robust to new conditions and stimuli. In this section, we use the obtained contrastive features to validate this statement. Contrastive reasons, like abductive reasons, generalize under new conditions. We train on pristine images and test on noisy images to verify robustness. Neural networks have shown a vulnerability to distortions between train and test domains~\cite{Temel2017_CURETSR}. We first describe the feed-forward inference from existing neural networks that use inductive reasoning and feed-forward features from Section.~\ref{subsec:FF-Features} to make decisions. This is followed by our proposed contrastive inference framework. \subsection{Feed-Forward Inference}\label{subsec:FF-Inference} In Section~\ref{subsec:FF-Features}, we described the activations that form the feed-forward features that result in inductive decisions. Continuing the notations used in Section~\ref{subsec:FF-Features}, we consider the inference mechanism for the specific task of classification. For a network $f()$ that is trained to classify between $N$ classes, the last layer is commonly a fully connected layer consisting of $N$ weights or filters. 
During inference, the feed-forward features at the $(L-1)^{\text{th}}$ layer, $y_{feat}^{L-1} = f_{L-1}(x)$, are projected independently onto each of the $N$ filters. The filter with the maximum projection is inferred as the class to which $x$ belongs. Mathematically, the feed-forward features $y_{feat}^l$ and the network prediction $P$ are related as, \begin{align} y_{feat}^l = & f_{l}(x), \forall l \in [1, L-1],\\ P = & \operatorname*{arg\,max} (W_L^T y_{feat}^{L-1} + b_L), \label{eq:FF-Infer}\\ \forall W_L\in \Re^{d_{L-1}\times N}, & y_{feat}\in \Re^{d_{L-1}\times 1}, b_L\in \Re^{N\times 1}, \end{align} where $W_L$ and $b_L$ are the parameters of the final linear layer. \vspace{-3mm} \subsection{Contrastive Inference} For the application of classification, the workflow is shown in Fig.~\ref{fig:Contrastive_Inference}. For a neural network pretrained on $N$ classes from domain $X$, an image $z \in Z$ such that $Z \neq X$ is provided. For $f()$ that is not adapted to the test domain $Z$, the prediction is incorrect, as illustrated in Fig.~\ref{fig:Contrastive_Inference}. We hypothesize, in turn, each of the $N$ possible classes as the prediction. By feeding in the new hypothesis $Q$, we obtain contrastive features from $f()$. We represent the data $x$ as a measure of the change that it creates in the trained base network against all classes. These features are used to obtain the correct prediction. We first expand on the data representation before obtaining predictions from it. \subsubsection{Contrastive Data Representation} For discriminative networks, contrast for class $1$ is provided by backpropagating the label of class $1$. The gradient is proportional to the loss function $J(W,x,P,1)$, where $W$ is the weight parameter and $J()$ is a convex loss function. Specifically, the manifold change is in the direction of $\nabla_{W_l} J(W,x,P,1)$ for weights in layer $l\in [1,L]$. This term is the gradient of the loss for class $1$ with respect to the weights in layer $l$.
As shown in Fig.~\ref{fig:Contrastive_Inference}b, we backpropagate over all $N$ classes to obtain contrastive features across all classes, given by $r_i = \nabla_{W_l} J(W,x,P,i)$. The final contrastive feature $r_x$ for an image $x$ is given by concatenating all individual contrasts. Hence, \begin{equation}\label{eq:r_dis} \begin{gathered} r_i = (\nabla_{W_l} J(W,x,P,i)), \forall l \in [1,L], \forall i \in [1,N] \\ r_x = [r_1, r_2 \dots r_N] \end{gathered} \end{equation} The loss function $J$ is taken with respect to the logits after the final layer. This loss is representative of the contrast between the predicted output $P = f(x)$ and the contrast hypothesis $i \in [1,N]$. The loss $J()$ need not be the same loss used to train the network. In Section~\ref{Sec:Contrast}, we used cross entropy to visualize explanations. Access to quantitative results allows for testing other loss functions during inference. In Section~\ref{sec:Results}, we compare a number of available loss functions based on classification accuracy. In this work, we use a modified MSE loss function $J = \text{MSE}(f_L(x), \delta^M_i)$, where $\delta^M_i$ is a modified Kronecker delta function given by, \begin{equation}\label{eq:Kroenecker} \delta^{M}_{i} = \begin{cases} M, & \text{if } i=\text{class},\\ 0, & \text{otherwise} \end{cases} \end{equation} where $M$ is the mean of the maximum output logit $y_P = \max f(x)$ taken over all $x$ in the training set. We use $M$ instead of $1$ because we want the network to be as confident of the contrast as it is of the prediction, and we penalize it accordingly. Note that we now have our contrastive features $r_x$ for a datapoint $x$. \subsubsection{Inference Based on Contrastive Data Representation} Once $r_x$ is obtained for all $N$ classes, the contrastive feature is analogous to $y_{feat}^{L-1}$ from Eq.~\ref{eq:FF-Infer}. Similar to $y_{feat}$, $r_x$ is processed as in Eq.~\ref{eq:FF-Infer}.
However, $y_{feat}$ is of dimension $\Re^{d_{L-1}\times 1}$ while $r_x$ is of dimension $\Re^{(N\times d_{L-1})\times 1}$, since it is a concatenation of $N$ gradients. To account for the larger dimension, we classify $r_x$ by training a simple Multi-Layer Perceptron (MLP), which we henceforth refer to as $\mathcal{H}$, on top of the $r_x$ derived from the training data. The structure of the MLP depends on the size $(N \times d_{L-1}) \times 1$ and is provided in the implementation details in Section~\ref{Subsec:In-Distribution}. Once $r_x$ passes through $\mathcal{H}$, the argument of the maximum logit, $\Tilde{y}$, is inferred as the class of $x$. Expressing $\Tilde{y}$ mathematically, we have, \begin{align} r_x = [r_1, r_2, \dots r_N]&, \forall r_i = (\nabla_{W_l} J(W,x,P,i)),\label{eq:CI-features}\\ \Tilde{y} &= \operatorname*{arg\,max} (\mathcal{H}(r_x)).\label{eq:CI-Inrer} \end{align} Notice the similarity between the feed-forward prediction $P$ and the contrastive prediction $\Tilde{y}$. Our goal is to keep the feed-forward and contrastive workflows as similar as possible, changing only the feed-forward features to contrastive features, so as to isolate the effectiveness of contrastive reasoning during inference. Since the proposed contrastive inference follows an abductive reasoning approach, we borrow the mathematical interpretation of abductive reasoning to formulate contrastive inference. The authors in~\cite{douven2015probabilistic} suggest that abductive reasoning is a non-Bayesian process and provide an update rule for training.
Adapting this to the inference stage, we have, \begin{equation}\label{eq:probability} \Tilde{y} = \argmax_i{\mathcal{F} \big[ P, {\mathcal{H}}(Pr(i|P), r_x)\big]}, \end{equation} where $\Tilde{y}$ is the prediction from contrastive inference from Eq.~\ref{eq:CI-Inrer}, $P$ is the prediction of feed-forward inference from Eq.~\ref{eq:FF-Infer}, $Pr(i|P)$ is the probability that the true prediction is the contrast class $i \in [1,N]$ conditioned on the given feed-forward prediction $P$, and $r_x$ is the contrastive feature. Consider Fig.~\ref{fig:Contrastive_Inference}. Given the image of a spoonbill $x$, let the feed-forward inference predict $P$ as a flamingo. $r_x$ is extracted from $f()$ and used to train another classifier that represents ${\mathcal{H}}$ from Eq.~\ref{eq:probability}. The entire contrastive inference framework requires both the feed-forward prediction and ${\mathcal{H}}$. This framework is represented by the function ${\mathcal{F}}$ from Eq.~\ref{eq:probability}; ${\mathcal{F}}$ is the proposed contrastive inference. While feed-forward inference obeys Bayes' rule to obtain $P$, Eq.~\ref{eq:probability} is non-Bayesian. It depends not only on the learned network $f()$, but also on the contrastive features $r_x$ derived from class hypotheses, i.e., features that reason from effect to cause. However, in the limiting case when a network is well trained, Eq.~\ref{eq:probability} behaves in a Bayesian fashion. This is because of the $Pr(i|P)$ term. In the ideal case of a well trained network $f()$ with train and test data from the same distribution, $Pr(i|P) = 1$ for $i = P$ and $0$ otherwise. We test this hypothesis in Table~\ref{table:Original_Results}, where networks $f()$ trained and tested on distribution $X$ perform with the same accuracy in both feed-forward and contrastive settings.
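The contrastive pipeline above (Eqs.~\ref{eq:r_dis}, \ref{eq:Kroenecker}, and \ref{eq:CI-Inrer}) can be sketched end to end. The snippet below uses a random linear layer as a stand-in for the final layer of $f()$ and a one-hidden-layer $\mathcal{H}$; all sizes, the random weights, and the value of $M$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, M, hidden = 8, 4, 5.0, 16        # illustrative sizes; M stands in
                                        # for the mean training logit
W_L = rng.normal(size=(d, N))           # final-layer filters of f()
y_feat = rng.normal(size=(d,))          # f_{L-1}(x), penultimate feature
logits = W_L.T @ y_feat                 # f_L(x)
P = int(np.argmax(logits))              # feed-forward prediction (Eq. FF-Infer)

# Contrastive features (Eq. r_dis with the modified Kronecker target of
# Eq. Kroenecker): for each hypothesized class i, the gradient of
# MSE(f_L(x), delta^M_i) w.r.t. the i-th filter gives sub-feature r_i.
sub_features = []
for i in range(N):
    target = np.zeros(N)
    target[i] = M                               # delta^M_i
    dJ_dlogit_i = 2.0 * (logits[i] - target[i]) / N
    sub_features.append(y_feat * dJ_dlogit_i)   # dJ/dW_L[:, i], shape (d,)
r_x = np.concatenate(sub_features)              # concatenated contrast, (N*d,)

# Contrastive inference (Eq. CI-Inrer): a small MLP H classifies r_x.
W1 = rng.normal(size=(hidden, N * d)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(N, hidden));     b2 = np.zeros(N)
h = np.maximum(W1 @ r_x + b1, 0.0)              # ReLU hidden layer
y_tilde = int(np.argmax(W2 @ h + b2))           # contrastive prediction
```

In the actual framework, $\mathcal{H}$ is trained on contrastive features extracted from the training set; here its weights are random purely to keep the sketch self-contained.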
\vspace{-3mm} \begin{comment} \subsection{Analysis} In this section we analyze the proposed contrastive inference framework based on existing abductive reasoning interpretation and loss interpretation. \subsubsection{Interpretation based on abductive reasoning} Since the proposed contrastive inference follows an abductive reasoning approach, we borrow the mathematical interpretation of abductive reasoning to formulate contrastive inference. The authors in~\cite{douven2015probabilistic} suggest that abductive reasoning is a non-bayesian process and provide an update rule for training. Adapting this to the inference stage, we have, \begin{equation}\label{eq:probability} \Tilde{y} = \argmax_i{\mathcal{F} \big[ P, {\mathcal{H}}(Pr(i|P), r_x)\big]}, \end{equation} where $\Tilde{y}$ is the prediction from contrastive inference from Eq.~\ref{eq:CI-Inrer}, $y_{pred}$ is the prediction of feed-forward inference from Eq.~\ref{eq:FF-Infer}, $Pr(i|P)$ is the probability that the true prediction is class $i \in [1,N]$ conditioned on the given feed-forward prediction $P$, and $r_x$ is the contrastive feature. Note that $r_i$ is obtaining by conditioning the contrast class $i, i \in [1,N]$ on $P$. Consider Fig~\ref{fig:Contrastive_Inference}. Given the image of a spoonbill $x$, let the feed-forward inference predict $P$ as a flamingo. $r_x$ is extracted from $f()$ which is used to train another classifier that represents ${\mathcal{H}}$ from Eq.~\ref{eq:probability}. The entire contrastive inference framework requires both feed-forward prediction and ${\mathcal{H}}$. This framework is represented by the function ${\mathcal{F}}$ from Eq.~\ref{eq:probability}. ${\mathcal{F}}$ is the proposed contrastive inference. While feed-forward inference obeys Bayes rule to obtain $y_{pred}$, Eq~\ref{eq:probability} is non-Bayesian. It depends not only on the learned network $f()$, but also on contrastive features $r_x$ derived from class hypotheses - features that represent effect to cause. 
However, in the limiting case when a network is well trained, Eq.~\ref{eq:probability} behaves in a Bayesian fashion. This is because of the $Pr(i|P)$ term. In an ideal case, for a well trained network $f()$ with train and test data from the same distribution, $Pr(i|P) = 1$ for $i = P$ and $0$ otherwise. We test this hypothesis in Table~\ref{table:Original_Results} when networks $f()$ trained and tested on distribution $X$, perform with the same accuracy in both feed-forward and contrastive settings. \subsubsection{Interpretation based on loss analysis} \label{subsec:Math_Theory} Consider a neural network $f()$ trained on $N$ instances sampled from a distribution $X$. Hence the training set is given by $\{(x_1,y_1), (x_2,y_2), ...(x_N,y_N)\}$. The goal of the network $f()$ is to learn to predict the correct $y = \{y_1, y_2. ... y_N\}$ given an instance $x$. The network is said to have been generalized when given an instance $x' \in X$ not in the original training set, $f()$ predicts the correct $y$. In this paper, we consider the case when a network trained on $X$, is asked to predict on $X$'s distorted distribution $Z$. Consider a network $f()$ that is trained using the least squares empirical loss function. Following the author's from~\cite{geman1992neural}, this loss can be decomposed in the following way: \begin{align}\label{eq:B-V} E[(f(x;X) - y)^2|x,X] & = (E_X[f(x;X)] \\ & - E[y|x])^2 + E_{X}[(f(x;X) \\ & - E_{X}[f(x;X)])^2] \end{align} The first term in Eq.~\ref{eq:B-V} is the bias and the second term is the variance. The bias indicates how far the network $f()$ is to the true predictor that outputs the ground truth $y$. The second term measures the variance of $f()$ for multiple samples from $X$. The bias error occurs in the selection of the network itself. For instance, choosing ResNet-151 when the choice should have been AlexNet creates bias. In this work, we take existing architectures and imbue contrast. Hence, the bias term remains unchanged. 
The second term, variance, can further be decomposed based on the Fisher Information Matrix. Consider a datum $z$ not part of the training set. It's variance is given by~\cite{cohn1994neural}, \begin{equation}\label{eq:Fisher} var(z) = \bigg[\frac{\partial P}{\partial \theta}\bigg]^T \bigg[\frac{\partial^2 J(W, x)}{\partial^2 \theta}\bigg] \bigg[\frac{\partial P}{\partial \theta}\bigg]. \end{equation} Physically, Eq.~\ref{eq:Fisher} formalizes the variance of a datum as the model's estimate of the expected square distance between the output $P$ and the true output $y$. Consider the case when the datum $x'$ is not from the training distribution $\mathcal{D}$, but from a shifted $\mathcal{D_{\alpha}}$ distribution. For a datum $x' \in \mathcal{D}$, we expect this distance to be small when $y = \hat{y}$, and large when $y \neq \hat{y}$. This quantity is captured by the $\bigg[\frac{\partial^2 J(W, x)}{\partial^2 \theta}\bigg]$ term. Consider the case of maximum entropy for this loss term. For an $x_{\alpha} \in \mathcal{D}$, the variance of $x_{\alpha}$ can be reduced by minimizing Eq.~\ref{eq:Fisher}. This occurs when $x_{\alpha}$ is equidistant to all classes. Equidistant in the manifold indicates equiprobable using softmax probability. And hence, the logits $f_L(x)$ before the argmax are all equal to $1/N$ where $N$ is the number of classes. $J(W,x_{\alpha})$ is constructed as a squared error between $f_L(x)$ and a one-hot vector of dimensionality $N\times 1$ with the argument at $\hat{y}$ being 1 and others $0$. Consider the input training data $x$, except as ordered pairs in the introspective domain. Hence, the dataset is given by $\{(x_1, y_1), (x_1, y_2), ... (x_1, y_N), (x_2, y_1), (x_2, y_2) ... (x_2, y_N), ...$ \noindent $(x_M, y_1), (x_M, y_2) ...(x_M, y_N)$. For a single datum $x_i, \forall i \in [1,M]$, the variance over for a sample with class $i$ is proportional to $\bigg[\frac{\partial^2 J(W, x, i)}{\partial^2 \theta}\bigg], \forall i \in [1.N]$. 
Hence, minimizing this variance includes minimizing the cost function $J(W, x, i)$ over all $i \in [1,N]$. Hence, it is the squared error between a $N\times 1$ vector of ones and $f(x)$. For an ideal and well trained network $f()$, the output logits resemble a one-hot vector with the value $1$ at the true label $y$, and zeros everywhere else. Note that this is the mirror of the case when we considered $x_{\alpha}$. Hence, minimizing the variance of the sample $x$ in the contrastive domain, minimizes the variance of a sample $x_{\alpha} \in \mathcal{D_{\alpha}}$. \end{comment} \section{Introduction}\label{sec:introduction}} \else \section{Introduction} \label{sec:introduction} \fi \IEEEPARstart{T}{his} demo file is intended to serve as a ``starter file'' for IEEE Computer Society journal papers produced under \LaTeX\ using IEEEtran.cls version 1.8b and later. I wish you the best of success. \hfill mds \hfill August 26, 2015 \subsection{Subsection Heading Here} Subsection text here. \subsubsection{Subsubsection Heading Here} Subsubsection text here. \section{Conclusion} The conclusion goes here. \appendices \section{Proof of the First Zonklar Equation} Appendix one text goes here. \section{} Appendix two text goes here. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The authors would like to thank... \ifCLASSOPTIONcaptionsoff \newpage \fi \section{Conclusion} In this paper, we characterized images as the contrast between a trained model and the change in the trained model given a certain hypothesis about the image. We constructed and analyzed three measures of such contrast in our proposed inference framework. The philosophical and mathematical interpretations of this inference framework was presented. Based on these interpretations, we motivated the principle of generalization that benefit from using contrastive inference. 
\bibliographystyle{ieee_fullname} \section{Contrastive Feature Generation} \label{Sec:Contrast_Features} In Section~\ref{Sec:Contrast}, we motivated the usage of gradients of a pre-trained network $f()$ to obtain contrast. However, the minimum change in the network $f()$ needed to encompass the contrast may not always be obtainable from the gradient alone. In this section, we consider three techniques to generate contrastive features based on gradients. \subsection{Gradients} \label{Subsec:Grads} Neural networks whose objectives can be formulated as a differentiable empirical loss function are trained using backpropagation~\cite{rumelhart1986learning}. Backpropagation iteratively adjusts the weight and bias parameters $w_l$ and $b_l$ of the network to minimize the difference between the predicted output vector and the desired output vector. The difference is estimated using an empirical loss function. Given a loss function $J(W, b, x)$, the gradient w.r.t. the weights in layer $l$ is given by $\partial{J}/\partial{w_l}$. This gradient is used to update the weights as, \begin{equation}\label{Eq:Gradient_Update} w'_l = w_l - \alpha \frac{\partial{J}}{\partial{w_l}}, \end{equation} where $w_l$ and $w'_l$ are the weights before and after the update, $\alpha$ is the learning rate, and $\partial{J}/\partial{w_l}$ is the change required to minimize the error. We construct the output and the loss function so as to obtain contrastive features from a network. Given a network $f()$ trained on $N$ classes, we find the contrast of a datapoint $x$ with label $y$ to all possible $N$ classes. We do so by backpropagating each of the $N$ class labels in turn and collecting the resulting weight gradients, as formalized in Eq.~\ref{eq:r_dis}. \subsection{Gradients Through L-BFGS} \label{Subsec:LBFGS} Gradients indicate the direction of parameter search that reduces the cost function in neural networks. However, these gradients do not provide the distance that needs to be traveled in the defined direction.
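The direction-versus-distance point can be made concrete with a scalar instance of Eq.~\ref{Eq:Gradient_Update}; the quadratic loss below is purely illustrative.

```python
# Scalar illustration of the gradient update, assuming an illustrative
# loss J(w) = (w - 3)^2 with dJ/dw = 2(w - 3). The gradient fixes only
# the direction of the update; the distance travelled along it is set
# entirely by the learning rate alpha.
w = 0.0
grad = 2.0 * (w - 3.0)        # dJ/dw at w = 0 -> -6.0
w_small = w - 0.01 * grad     # alpha = 0.01 -> w' = 0.06
w_large = w - 0.10 * grad     # alpha = 0.10 -> w' = 0.60
```

Both steps move toward the minimizer $w = 3$, yet land at very different points along the same direction.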
Rather, during training, a learning rate is initialized that controls the distance to be updated in the direction specified by the gradients. Limited-memory BFGS (L-BFGS) is a quasi-Newtonian~\cite{dennis1977quasi} approximate second-order method of optimization~\cite{nocedal1980updating} that approximates the inverse Hessian of the objective function $J()$ to estimate both the direction and the corresponding distance of the update required in the network parameters. \subsection{Gradients from adversarial perturbation} \label{Subsec:Adversarial} Continuing the notations established earlier, an $L$-layered network $f$ is trained to distinguish between $N$ classes using original distortion-free images. For every image $x$ in the distortion-free dataset, the targeted Fast Gradient Sign Method~\cite{goodfellow2014explaining} (FGSM) is applied to create $N$ adversarial images that measure $x$'s distance to the decision boundaries. For a target $i \in [1,N]$, an adversarial noise $\epsilon \text{sign}(\nabla_x J(W,x,i))$ is added to the image $x$. $J(W,x,i)$ refers to the cost function used to train the network with parameters $W$, and $\nabla_x$ is the gradient of the cost function w.r.t. the input $x$. An $\epsilon$ of $0.1$ is used in this work. Adversarial noise is added to the input over $k$ iterations until the classification changes to $i$, i.e., $f(x_{k-1}+\epsilon \text{sign}(\nabla_x J(W,x_{k-1},i))) = i$. The absolute value of the gradient of the cost function with respect to filter $W_L^i$ is summed up over the $k$ iterations, i.e., $r_i = \sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^i} J(W,x_j,i))$, where $r_i$ is the feature for the $i^{\text{th}}$ class.
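The iterative accumulation for a single target class can be sketched as follows, using a linear softmax classifier as a stand-in for $f()$; the toy model, sizes, iteration cap, and sign convention of the targeted step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, N, eps, target = 6, 3, 0.1, 2          # illustrative sizes and target
W = rng.normal(size=(d, N))               # linear stand-in for f()

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=(d,))
r_i = np.zeros(d)                          # accumulated |gradient| sub-feature
for _ in range(50):                        # safety cap on iterations
    p = softmax(W.T @ x)
    if int(np.argmax(p)) == target:        # stop once the prediction flips to i
        break
    # Cross-entropy toward the target class: J = -log p_target, so
    # dJ/dlogits = p - onehot(target); the chain rule gives dJ/dx and
    # dJ/dW[:, target] for this linear model.
    g_logits = p.copy()
    g_logits[target] -= 1.0
    grad_x = W @ g_logits                  # dJ/dx
    r_i += np.abs(x * g_logits[target])    # |dJ/dW_L^i| accumulated over steps
    x = x - eps * np.sign(grad_x)          # targeted FGSM step (descends J)
```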
$i$ is then iterated over all $N$ classes to obtain multi-directional features of $x$ to all decision boundaries. All corresponding $r_i$ are concatenated to obtain the final feature $r_x = [r_1, r_2 \dots r_N]$. Hence, the final multi-directional feature is given by, \vspace{-1mm} \begin{equation}\label{eq:Feature} r_x = \Bigg[\sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^1} J(W,x_j,1)), \sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^2} J(W,x_j,2)), \dots, \sum_{j=0}^{k-1}\text{abs}(\nabla_{W_L^N} J(W,x_j,N))\Bigg] \end{equation} $r_x$ is the proposed proxy second-order representation of $x$. Note that if the dimensionality of the $(L-1)^{\text{th}}$ layer from Eq.~\ref{eq:Filter} is $d_{L-1}\times 1$, then the dimensionality of every subfeature $r_i$ is also $d_{L-1} \times 1$. The concatenated final feature has a dimensionality of $(N\times d_{L-1})\times 1$. For reference, if the network $f$ is VGG-16 trained on CIFAR-10, $d_{L-1}$ is $512\times 1$ and $r_x$ is $5120\times 1$. This representation can become prohibitively large as the number of classes increases. Simple methods to offset this increase and speed up the generation of $r_x$ are discussed in Sec.~\ref{subsec:Enhancements}. \section{Conclusion} \label{Sec:Discussion} In this paper, we illustrate the existence of implicit contrast within trained neural networks. We abduce this contrast and provide a robust inference scheme that reasons contrastively. We also visualize such a posteriori reasons as visual explanations that add context to existing causal explanations. The underlying principles that allow extracting such contrastive reasons are: 1) the definition of contrast, which allows a datapoint to adopt multiple labels and exist in multiple manifolds, and 2) the existence of gradients that provide a non-heuristic means to travel between such manifolds. \begin{comment} Both the proposed methods require generating adversarial images $N$ times during the testing phase. This might compromise applications that require fast throughput.
Borrowing the notations from Section~\ref{Sec:Method}, generating $r_x$ takes around 0.28s for every image on VGG-16. The reason for the large cost is the multiple backpropagation that happen over all the $16$ feature extraction modules to create adversarial noise in $x$ domain. However, the generation of perceptually similar adversarial images in the spatial $x$ domain is not our goal. Instead of generating adversarial images as $x+\epsilon \text{sign}(\nabla_x J(W,x,i))_k$, we generate adversarial data points in the $f_{L-1}$ domain. That is, instead of backpropagating to $x$ and taking the partial derivative $\nabla_x J(W,x,i))$ w.r.t $x$, we backpropagate only one layer to $f_{L-1}$ and obtain the partial derivative w.r.t $f_{L-1}(x)$. Hence, in place of of adversarial images, we create adversarial data points in $f_{L-1}(x)$. Conceptually, the gradients still provide the direction for the filters towards the data points. Accumulating gradients, until the targeted projection $P_i^'$ exceeds the original classification, provides the second order representation $r_x$. The remainder of the method remains the same. Generating $r_x$ in this fashion takes 0.0056s for every image. The average accuracies across levels for all six distortions is presented in Table~\ref{Tab:Timing}. The performance of the $\nabla_{f_{L-1}(x)}$ RR and NR methods are comparable to their $\nabla_x$ counterparts among all distortions. This alternate feature generation method can be used in datasets with large $N$ and huge feature extraction modules. A feed-forward network provides a prediction $P$ for an image $x$ in ${\mathcal{O}}(1)$ time complexity. For the proposed contrastive inference, we backpropagate over $N$ classes for one image. Hence, the total complexity to extract the contrastive feature, $r_x$, is ${\mathcal{O}}(N)$. We posit that the complexity of the feature extraction is derived from the serial nature of our implementation. 
The author in~\cite{goodfellow2015efficient} provides ways of extracting individual gradients from minibatches of data. By creating multiple copies of $x$ all with ground truths given by the modified Kronecker delta function from Eq.~\ref{eq:Kroenecker}, the extraction of $r_x$ can be accelerated. \end{comment} \section{Results} \label{sec:Results} In Section~\ref{Sec:Contrast}, the contrastive reasons were used to demonstrate the visual explanations that can complement existing \emph{`Why P?'} explanations. In this section, we substantiate the robustness claim made in Section~\ref{Sec:Inference}. We apply feed-forward and contrastive inference on existing neural networks and show that: 1) contrastive inference performs similarly to feed-forward inference when $f()$ is trained and tested on pristine data, and 2) contrastive inference outperforms its feed-forward counterpart when the testing data is distorted. Distortions include image acquisition errors, severe environmental conditions during acquisition, and transmission and storage errors, among others. Current techniques that alleviate the drop in feed-forward accuracy require training using distorted data. The authors in~\cite{vasiljevic2016examining} show that finetuning or retraining networks using distorted images increases the performance of classification under the same distortion. However, performance does not generalize well across distortions. For instance, training on Gaussian-blurred images does not guarantee a performance increase on motion-blurred images~\cite{vasiljevic2016examining, geirhos2018generalisation}. Other proposed methods include training on style-transferred images~\cite{geirhos2018imagenet}, training on adversarial images~\cite{hendrycks2019benchmarking}, and training on simulated noisy virtual images~\cite{Temel2017_CURETSR}. All these works require additional training data not belonging to $X$.
In this paper, we show that contrastive inference increases classification accuracy on the $19$ distortions of the CIFAR-10-C dataset~\cite{hendrycks2019benchmarking}. This increase in accuracy is induced by the contrastive features extracted from the pristine data, not by training with distorted data. \vspace{-3mm} \subsection{Results on Pristine Data} \label{Subsec:In-Distribution} We train four networks, ResNet-18, 34, 50, and 101~\cite{he2016deep}, on the CIFAR-10~\cite{krizhevsky2009learning} trainset and test them on the CIFAR-10 testset. The training set of CIFAR-10 has $50000$ images from $10$ classes, with each class having $5000$ sample images. The networks are trained in PyTorch for $200$ epochs on an NVIDIA 1080Ti GPU with a batch size of $128$ using SGD optimization. Learning rates of $0.1, 0.004,$ and $0.0008$ are used for epochs $0$--$60$, $60$--$120$, and $120$--$200$, respectively, along with a momentum of $0.9$ throughout the training procedure. PyTorch's horizontal flipping transformation is used as a data augmentation technique. The testset of CIFAR-10 consists of $10000$ images, with each class represented by $1000$ images. The results of all networks derived using Eq.~\ref{eq:FF-Infer} are shown as feed-forward results in Table~\ref{table:Original_Results}. \begin{table}[h!] \small \centering \caption{Feed-Forward Inference vs. Contrastive Inference on the CIFAR-$10$ test set} \vspace{-1mm} \begin{tabular}{c | c c c c} \hline ResNet & 18 & 34 & 50 & 101 \\ [0.5ex] \hline\hline Feed-Forward (\%) & $91.02$ & $93.01$ & $93.09$ & $93.11$ \\ [1ex] \hline Gradients (\%) & $90.94$ & $93.14$ & $92.88$ & $92.73$ \\ [1ex] \hline \end{tabular} \label{table:Original_Results}\vspace{-7mm} \end{table} For all $50000$ images in the training set, we extract the contrastive features $r_x$. $r_x$ is richer in terms of dimensionality and contrastive information.
For instance, for every image in ResNet-18, the feed-forward network provides a $64\times 1$ feature $y^{L-1}_{feat}$ from Eq.~\ref{eq:FF-Infer}. Using these feed-forward features, the proposed contrastive features $r_x$ from Eq.~\ref{eq:r_dis} are extracted with a dimensionality of $640\times 1$. These features are normalized. $\mathcal{H}$, with the structure given in Table~\ref{table:Structure}, is trained on the $50000$ training contrastive features for $200$ epochs using the same procedure as the original network. The time to train $\mathcal{H}$ depends on its structure and is shown in Table~\ref{table:Structure}. Hence, for contrastive inference, our full framework consists of the original trained network $f()$ and an MLP $\mathcal{H}$. During testing, all $10000$ images from the testset are passed through the original network. The contrastive features of each image are extracted individually. These features are normalized and passed through the trained MLP. The overall accuracy is reported in Table~\ref{table:Original_Results}. As can be seen, the contrastive results are comparable to those of the feed-forward procedure. These results validate the Bayesian limiting-case scenario expressed in Eq.~\ref{eq:probability}. When the network $f()$ is well trained on a distribution $X$, $Pr(i|P,x\in X)$, where $i\in [1,N]$, provides no new information regarding any $i$ other than $P$. Hence, the results from Eq.~\ref{eq:probability} are only influenced by the $64\times 1$ feature extracted using $i = P$ in Eq.~\ref{eq:FF-Infer}. \vspace{-3mm} \subsection{Results on Distorted Data} \label{Subsec:Distortion} \begin{figure*}[!htb] \begin{center} \minipage{\textwidth}% \includegraphics[width=\linewidth]{Figs/Robustness.pdf} \endminipage \vspace{-3mm} \caption{Visualization of accuracy gains (in red) of using the proposed contrastive inference over feed-forward inference on CIFAR-10-C for four networks among $19$ distortions.
Within each distortion, the distortion levels are averaged and the results shown.}\vspace{-0.7cm}\label{fig:Robustness_Distortionwise} \end{center} \end{figure*} \begin{comment} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Levelwise-Results.png} \endminipage \vspace{-3mm} \caption{Visualization of accuracy gains (in red) of using the proposed contrastive inference over feed-forward inference on CIFAR-10-C for four networks among $5$ distortion levels. Within each distortion level, the individual distortions are averaged and the results shown.}\vspace{-0.7cm}\label{fig:Robustness_Levelwise} \end{center} \end{figure*} \end{comment} \begin{figure*}[!htb] \begin{center} \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Data.png} \endminipage \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Iter.png} \endminipage \caption{(a) Contrastive inference vs. Feed-Forward Inference when $f()$ is trained on limited data. (b) Contrastive inference vs. Feed-Forward Inference when $f()$ is trained in limited time.}\label{fig:f-effect}\vspace{-0.7cm} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Data_Tests.png} \endminipage \caption{Contrastive Inference vs. Feed-Forward Inference under limited training data on (a) Brightness distortion, (b) Motion blur distortion, (c) Gaussian noise distortion.}\label{fig:Data-Tests}\vspace{-0.6cm} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Epoch_Tests.png} \endminipage \caption{Contrastive Inference vs. 
Feed-Forward Inference across epochs under (a) Brightness distortion, (b) Motion blur distortion, (c) Gaussian noise distortion.}\label{fig:Epoch-Tests}\vspace{-0.7cm} \end{center} \end{figure*} In this section, we use CIFAR-10-C~\cite{hendrycks2019benchmarking} to validate the performance of contrastive inference on distorted data. CIFAR-10-C provides $19$ distortions on the test set of CIFAR-10. It also provides $5$ increasing levels of degradation in each distortion. Hence, every distortion has $50000$ images to test on, and the $19$ distortions together produce $950,000$ images during testing on a network trained on the original $50000$ images. The same ResNet-18, 34, 50, and 101 networks trained in Section~\ref{Subsec:In-Distribution} are used for the experiments in this section. The results are shown in Fig.~\ref{fig:Robustness_Distortionwise}. The blue bars depict the results when the network $f()$ infers the predictions of the distorted images in a feed-forward fashion. The red bars depict the contrastive accuracy gain over the feed-forward predictions with the inclusion of $\mathcal{H}$. The results of all $19$ distortions averaged over the $5$ distortion levels are shown. For all $4$ networks and in every distortion category, contrastive inference outperforms feed-forward inference. However, the increase is not consistent across the distortion categories. The results of all $4$ networks increase substantially, by at least $5\%$, in $8$ of the $19$ categories: gaussian blur, gaussian noise, glass blur, motion blur, pixelate, shot noise, speckle noise, and zoom blur. The highest increase is $8.22\%$, on glass blur for ResNet-18. These are global distortions, i.e., the criteria and methodology for distorting pixels do not change based on individual pixels.
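The evaluation protocol just described can be sketched in a few lines. CIFAR-10-C ships each corruption as precomputed arrays, so the snippet below instead generates a hypothetical Gaussian-noise corruption at five severities (the noise scales are illustrative, not the benchmark's) and averages a classifier's accuracy over the levels, as done for each bar of the figure; `predict` is a stand-in for any classifier.

```python
import numpy as np

def gaussian_noise(x, severity):
    """Corrupt images in [0, 1] with zero-mean Gaussian noise.

    The five noise scales mimic CIFAR-10-C's increasing severity
    levels; the actual benchmark ships precomputed .npy arrays.
    """
    scale = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    rng = np.random.default_rng(severity)
    noisy = x + rng.normal(0.0, scale, size=x.shape)
    return np.clip(noisy, 0.0, 1.0)

def accuracy(predict, images, labels):
    """Top-1 accuracy of a predict(images) -> labels classifier."""
    return float(np.mean(predict(images) == labels))

def corruption_accuracy(predict, images, labels):
    """Accuracy averaged over the 5 severity levels of one corruption,
    mirroring the per-distortion averaging in the results figure."""
    accs = [accuracy(predict, gaussian_noise(images, s), labels)
            for s in range(1, 6)]
    return float(np.mean(accs))
```

In the actual experiments, `predict` would wrap either the feed-forward ResNet or the full $f()$ plus $\mathcal{H}$ contrastive pipeline, and repeating this over all $19$ corruptions covers the $950,000$ test images.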
Other distortions, including contrast and brightness, where the proposed method's performance gains are less than $3\%$, change the global and local characteristics of the image, including its mean and contrast. Neural networks are actively trained to ignore such changes so that their effects are not propagated beyond the first few layers. The contrast obtained from the last layer is by itself insufficient to characterize the underlying data without the noise. This could be addressed by deriving contrast from earlier layers, but doing so is beyond the scope of this paper. Also, the contrastive performance gain increases as the level of distortion increases. This is because, with higher distortion, we move further away from the Bayesian limiting case in Eq.~\ref{eq:probability}. In other words, the information in $Pr(i|P,x\in X)$ increases. For ResNet-18, the average level $1$ contrastive gain across all $19$ distortions is $1.36\%$, while its level $5$ gain across distortions is $5.26\%$. \begin{table*} \centering \caption{Contrastive Inference averaged accuracies with different loss functions for ResNet-18 on CIFAR-10-C.}\vspace{-1mm} \begin{tabular}{c c c c c c c c c c} \hline Feed-Forward & MSE & CE & BCE & L1 & L1-M & Smooth L1 & Smooth L1-M & NLL & SoftMargin \\ [0.5ex] \hline\hline $67.89\%$ & $71.35\%$ & $69.42\%$ & $70.24\%$ & $69.09\%$ & $69.92\%$ & $69.22\%$ & $70.01\%$ & $70.93\%$ & $70.91\%$ \\ [1ex] \hline\vspace{-7mm} \end{tabular} \label{table:Losses} \end{table*} \vspace{-3mm} \subsubsection{Effect of loss in gradient generation} Note that the gradient generation process is stochastic in Eq.~\ref{eq:CI-features}. We test the impact of the choice of loss functions in Eq.~\ref{eq:CI-features} on the performance of contrastive inference. We compare multiple loss functions, whose distortion-wise and level-wise averaged results are provided in Table~\ref{table:Losses}.
Feed-forward accuracy is the result obtained from the network $f()$, MSE is the modified Mean Squared Error from Eq.~\ref{eq:Kroenecker}, CE is Cross Entropy, BCE is Binary Cross Entropy, L1 is Manhattan distance, L1-M is the modified version of L1 distance when $\delta^M_i$ from Eq.~\ref{eq:Kroenecker} is backpropagated, Smooth L1 is the leaky extension of Manhattan distance, Smooth L1-M is the modified version of Smooth L1 similar to L1-M, NLL is Negative Log Likelihood loss, and SoftMargin is the modified hinge loss. All these loss functions are taken from PyTorch. In Table~\ref{table:Losses}, the performance of all contrastive inference modalities with different loss functions exceeds that of feed-forward inference. The proposed modified MSE outperforms the nearest loss by $0.42\%$ in accuracy and is used in all our experiments. \begin{comment} \subsubsection{Level-wise Performance Gain} The results in Fig.~\ref{fig:Robustness_Levelwise} are categorized based on the distortion level and presented. All $19$ categories of distortion are averaged for each level and their respective feed-forward accuracy and contrastive gains are shown. Note that Level $0$ refers to results on pristine data. It is a graphical representation of Table~\ref{table:Original_Results}. The rest of the $5$ levels are on distorted data. From Section~\ref{subsec:Math_Theory}, the contrastive inference approaches the ideal case of training on distorted data when the entropy of $y_{L-1}$ is maximum. As the distortions increase, the network $f()$ is increasingly not-confidant of its prediction. Hence, $y_{L-1} = f_{L-1}(z)$ approaches all $\frac{1}{N}$ or maximum entropy. This is the same as backpropagating contrasts assuming all logits are $M$ from Eq.~\ref{eq:r_dis}. Hence, as the distortion level increases, the contrastive gain also increases. This is apparent from the results in Fig.~\ref{fig:Robustness_Levelwise}.
\end{comment} \vspace{-3mm} \subsection{Effect of $f()$}\label{subsec:f-Effect} We further analyze ResNet-18 trained on CIFAR-10 to see the efficacy of contrastive features when $f()$ is \emph{not} well trained, i.e., when contrast has not yet been learned implicitly. In this subsection, our goal is to ascertain that the performance gain obtained by contrast is not due to a poorly trained $f()$ or a statistical anomaly. We consider two cases of such an $f()$: when $f()$ is trained for a limited time, and when $f()$ is trained with limited data. ResNet-18 is trained on CIFAR-10 with the same learning parameter setup as in Section~\ref{Subsec:In-Distribution} and Table~\ref{table:Original_Results}. \subsubsection{Training and testing under limited time}\label{subsubsec:LWLT} For the limited time experimental setup, ResNet-18 is trained for $200$ epochs. The network states at every multiple of $3$ epochs from $3$ to $198$, along with epoch $200$, are saved. This provides $67$ versions of $f()$ at different stages of training. Each $f()$ is tested on $10,000$ CIFAR-10 testing images and the Top-$1$ accuracy is plotted in blue in Fig.~\ref{fig:f-effect}b. The gradients $r_x$ for all $67$ states are extracted for the $50,000$ training samples. These gradients are used to train $67$ separate $\mathcal{H}$ with the structure provided in Table~\ref{table:Structure} and a similar parameter setup as before. The gradients from the $10,000$ testing samples are extracted separately for each of the $67$ ResNets and tested. The results are plotted in red in Fig.~\ref{fig:f-effect}b. Note the sharp spikes at epochs $60$ and $120$, where the learning rate drops. Hence, when the training domain is the same as the testing domain, contrastive and feed-forward features provide statistically similar performance across varying states of $f()$, in accordance with Eq.~\ref{eq:probability}.
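Concretely, each contrastive feature $r_x$ backpropagates a loss between the network output and a contrast hypothesis. For the final linear layer this gradient has a closed form, which the sketch below computes with plain numpy under a squared-error loss; `W` and `feat` are stand-ins for the trained last-layer weights and the $64\times 1$ feed-forward feature, and keeping row $i$ of the gradient for hypothesis $i$ is our illustrative indexing choice, not necessarily the paper's exact construction.

```python
import numpy as np

def contrastive_features(W, feat):
    """Gradient-based contrastive features for a final linear layer.

    For each contrast hypothesis i, backpropagate the squared error
    between the logits y = W @ feat and the one-hot target e_i:
        dL/dW = 2 * outer(y - e_i, feat).
    The magnitude of row i of this gradient (the row tied to the
    hypothesis) is kept per hypothesis and concatenated, giving an
    N*d-dimensional feature, which is then L2-normalized.
    """
    n_classes = W.shape[0]
    y = W @ feat                                   # logits, shape (N,)
    parts = []
    for i in range(n_classes):
        target = np.zeros(n_classes)
        target[i] = 1.0                            # contrast hypothesis e_i
        grad = 2.0 * np.outer(y - target, feat)    # (N, d) weight gradient
        parts.append(np.abs(grad[i]))              # gradient magnitude
    r_x = np.concatenate(parts)                    # (N * d,)
    return r_x / (np.linalg.norm(r_x) + 1e-12)
```

With $d = 64$ and $N = 10$ this yields the $640\times 1$ normalized features on which $\mathcal{H}$ is trained; in practice the gradients come from autograd rather than a closed form.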
We now consider the case when a network $f()$ is trained for a limited number of epochs on domain $X$ and tested on $Z$ from the CIFAR-10-C distortions. The $67$ trained models of ResNet-18 are tested on three distortions from CIFAR-10-C: motion blur, brightness, and Gaussian noise. These three distortion types were chosen to represent the spectrum of the proposed method's performance increase over feed-forward inference. From the results in Fig.~\ref{fig:Robustness_Distortionwise}, contrastive inference achieved one of its highest performance gains on Gaussian noise, its lowest on brightness, and an average increase on motion blur. The results in Fig.~\ref{fig:Epoch-Tests} indicate that after around $60$ epochs, the feed-forward network has implicitly learned contrasts sufficiently to discriminate between classes. This is seen in both the motion blur and Gaussian noise experiments. The results from brightness indicate that contrastive inference follows feed-forward inference across epochs. \subsubsection{Training and testing with limited data}\label{subsubsec:LimitedData} For the limited data experiment, ResNet-18 is trained on only a subset of the available data. CIFAR-10 consists of $50,000$ training images with each class consisting of $5,000$ images. We randomly sample a fixed number of images from each class and train $f()$ using these samples. The results are plotted in Fig.~\ref{fig:f-effect}a. Ten separate ResNets are trained, each having access to $100$, $200$, $300$, $400$, $500$, $600$, $700$, $800$, $900$, and $1000$ randomly sampled images per class, respectively. Validation is conducted on all $10,000$ testing images and the results are plotted in blue in Fig.~\ref{fig:f-effect}a. Contrastive features are extracted for the same random images that the base $f()$ was trained on and $10$ separate $\mathcal{H}$ are trained on these gradients.
$r_z$ for the $10,000$ testing images is separately extracted for all ten instances of $f()$ and passed through the trained $\mathcal{H}$ to obtain contrastive results. These are plotted in red. Consistent with the results in Table~\ref{table:Original_Results}, contrastive inference is statistically similar to feed-forward inference. We then consider the case when a network $f()$ is trained on limited data from $X$ but tested on distorted data $Z$ from CIFAR-10-C. We consider the same three distortions as in Section~\ref{subsubsec:LWLT}. The results of feed-forward and contrastive inference are plotted in Fig.~\ref{fig:Data-Tests}. For the Gaussian noise distortion, contrastive inference outperforms feed-forward inference even with only $300$ training images per class. However, this is not the case for motion blur and brightness, where contrastive inference follows feed-forward inference. \begin{table*} \centering \caption{Performance of Contrastive Inference vs Feed-Forward Inference on VisDA Dataset} \vspace{-1.5mm} \begin{tabular}{c c c c c c c c c c c c c c} \hline & Plane & Cycle & Bus & Car & Horse & Knife & Bike & Person & Plant & Skate & Train & Truck & All \\ [0.5ex] \hline Feed-Forward & $27.6$ & $7.2$ & $\textbf{38.1}$ & $54.8$ & $43.3$ & $\textbf{4.2}$ & $\textbf{72.7}$ & $\textbf{8.3}$ & $28.7$ & $22.5$ & $\textbf{87.2}$ & $2.9$ & $38.1$\\ Contrastive & $\textbf{39.9}$ & $\textbf{27.6}$ & $19.6$ & $\textbf{79.9}$ & $\textbf{73.5}$ & $2.7$ & $46.6$ & $6.5$ & $\textbf{43.8}$ & $\textbf{30}$ & $73.6$ & $\textbf{4.3}$ & $\textbf{43.6}$\\ \hline \end{tabular} \label{table:VisDA} \end{table*} \vspace{-3mm} \subsection{Effect of training data}\label{subsec:D-Effect} We analyze the impact of training data in three additional cases: a) the base network $f()$ is trained on noisy images, b) the training and testing data are of a higher resolution than the $32\times 32 \times 3$ CIFAR-10 images, and c) the training and testing data are significantly different, as in the VisDA
dataset~\cite{peng2017visda}. In all three cases, we \emph{do not} use data from the training domain to learn $\mathcal{H}$. \vspace{-3mm} \subsubsection{Results on training with noisy data}\label{subsubsec:NoiseTrain} In Sections~\ref{Subsec:Distortion} and \ref{subsec:f-Effect}, $\mathcal{H}$ is used as a plug-in on top of a neural network $f()$ trained on pristine data $X$. Consider a network $f'()$ that has been trained on distorted data. We apply contrastive inference on $f'()$ and show that there is a performance gain when using $\mathcal{H}$. In this experimental setup, we augment the training data of CIFAR-10 with six distortions - gaussian blur, salt and pepper, gaussian noise, overexposure, motion blur, and underexposure - to train a ResNet-18 network $f'()$. We then test the network on CIFAR-10 test data corrupted by the same six distortions at $5$ progressively increasing degrees. The distortions were obtained from~\cite{temel2018cure}. Note that while $f'()$ is trained on distortions, $\mathcal{H}$ is trained only on the original CIFAR-10 training data. Compared to feed-forward inference on the augmented model $f'()$, the proposed contrastive inference increases accuracy by a total of $1.12\%$ across all distortions and levels. On the highest, level $5$, distortion in the blur category, the increase is $6.87\%$. \vspace{-3mm} \subsubsection{Results on STL dataset} The proposed approach is implemented on higher resolution images of size $96\times96\times3$ in the STL-10 dataset~\cite{coates2011analysis}. The ResNet-18 architecture is adopted with an extra linear layer to account for the change in resolution. Note that STL-10 does not have a standardised distorted version. Hence, we use the same distortions from Section~\ref{subsubsec:NoiseTrain} to corrupt the STL-10 test set. The accuracy of contrastive inference increases by an average of $2.56\%$ on all but the underexposure distortion. In underexposure, the accuracy drops by $1.05\%$.
In level $5$ of both blur categories, the contrastive performance gain is $6.89\%$. The decrease in performance in the underexposure distortion can be attributed to the change in low-level statistical characteristics that are discarded by the network's initial layers. \vspace{-3mm} \subsubsection{Results on VisDA dataset}\label{subsubsec:DA} We show validation results on a synthetic-to-real domain shift dataset called VisDA~\cite{peng2017visda} in Table~\ref{table:VisDA}. An ImageNet pre-trained ResNet-$18$ architecture is fine-tuned on synthetically generated images from the VisDA dataset and tested on real images in the same VisDA dataset. The dataset has $12$ classes with $152k$ training images. While there is an overall performance gain of $5.48\%$, the individual class accuracies indicate room for improvement. The feed-forward predictions, $P$, on \texttt{Knife}, \texttt{Person}, \texttt{Bike}, and \texttt{Train} are either too confident or not confident enough. Hence, the $Pr(i|P)$ term in Eq.~\ref{eq:probability} is adversely affected by $P$, which in turn affects the contrastive predictions $\Tilde{y}$. \begin{comment} \subsubsection{Domain Shift Due to Dataset Difference} \label{Subsec:Domain_Shift} Domain shifts between datasets occur due to changes in lightning, camera pose, background, and acquisition sensors among others. In this paper, we demonstrate the performance of contrastive methods against the feed-forward technique on CIFAR-10~\cite{krizhevsky2009learning}, STL~\cite{coates2011analysis}, and Office~\cite{saenko2010adapting} datasets. Cifar-10 and STL consist of $9$ common classes. The source of these images are however different and has been used as a common method of validating domain adaptation algorithms. STL dataset has $5000$ training images and $8000$ testing images each of size $96\times 96 \times 3$ derived from Flickr website. For the experiments described in this section, we downsample STL images to $32\times 32 \times 3$ to match CIFAR-10 dataset.
We also choose $9$ classes from each dataset. Hence we are left with $45000$ training images and $4500$ testing images Office dataset consists of images from $31$ classes derived from three sources - Webcam, DSLR, and Amazon. \input{Sections/tab_DA.tex} \end{comment} \section{Contrastive Explanations} \label{Sec:Contrast} Explanations are a set of rationales used to understand the reasons behind a decision~\cite{kitcher1962scientific}. In this section, we visually inspect the reasons behind decisions by answering \emph{`Why P, rather than Q?'} questions between the predicted class $P$ and the contrast class $Q$ for a network $f()$. We modify the popular Grad-CAM~\cite{selvaraju2017grad} explanation technique to obtain our contrastive visualizations. We first describe Grad-CAM before detailing the necessary modifications. \vspace{-3mm} \subsection{Grad-CAM} \label{subsec:Gradcam} Grad-CAM is used to visually justify the decision $P$ made by a classification network by answering \emph{`Why P?'}. The activations from the last convolutional layer of a network are used to create these visualizations since they possess high-level semantics while maintaining spatial information. For any class $i, \forall i \in [1,N]$, the logit $y_i$ is backpropagated to the feature map $A_l$ where $A_l = f_l(x)$ and $l$ is the last convolutional layer. The gradients at every feature map are $\frac{\partial y_i}{\partial A_l^k}$ for a channel $k$. These gradients are global average pooled to obtain the importance score $\alpha_k$ of every feature map in the $l^{th}$ layer and $k^{th}$ channel. The individual maps $A_l^k$ are multiplied by their importance scores $\alpha_k$, summed over channels, and passed through a ReLU to obtain a heat map. The Grad-CAM map at layer $l$ and class $i$ is given by $L^i = ReLU(\sum_{k=1}^K \alpha_k A^k_l )$. The Grad-CAM from an ImageNet-pretrained VGG-16~\cite{Simonyan15} for a correctly classified Spoonbill image is visualized in Fig.~\ref{fig:Contrast_examples}c.
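Given the activations $A_l^k$ and the gradients of the class logit with respect to them (obtained via autograd in practice, e.g., a backward hook in PyTorch), the map reduces to a few array operations; a minimal numpy sketch:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map L^i = ReLU(sum_k alpha_k * A_l^k).

    activations: (K, H, W) feature maps A_l^k of the last conv layer.
    gradients:   (K, H, W) gradients dy_i/dA_l^k for the chosen class.
    alpha_k is the global-average-pooled gradient of channel k.
    """
    alpha = gradients.mean(axis=(1, 2))              # (K,) importance scores
    cam = np.tensordot(alpha, activations, axes=1)   # weighted channel sum
    return np.maximum(cam, 0.0)                      # ReLU
```

Upsampling the resulting $H\times W$ map to the input resolution gives the overlays shown in the figures; the contrastive variant replaces the backpropagated logit with a loss, changing only how `gradients` is produced.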
The red-highlighted regions in Fig.~\ref{fig:Contrast_examples}c explain why VGG-16 chose Spoonbill as the decision $P$. Hence, Grad-CAM visually explains the observed causality \emph{`Why P?'}. \vspace{-3mm} \subsection{Contrast Visualizations}\label{subsec:Contrast_Visuals} In Grad-CAM, the importance score $\alpha_k$, which is derived by backpropagating the logit $y_i$, weighs the activations in a layer $l$ based on $A_l^k$'s contribution towards the classification. The activations are projections on network parameters and hence have access to both the causal and contrastive information. Therefore, to extract contrastive explanations, the contrast importance score $\alpha_k^c$ must be global average pooled contrastive features, i.e., $\alpha_k^c = \sum_u\sum_v \nabla_{W_l} J(W,x,P,Q)$, where $u,v$ index the spatial dimensions at layer $l$. This is achieved by backpropagating $J(W,x,P,Q)$ within the Grad-CAM framework to obtain the contrastive maps for class $Q$. Hence, while $\alpha_k$ highlights \emph{`Why P?'}, $\alpha_k^c$ denotes \emph{`Why P, rather than Q?'}. Note that there can be $N$ contrastive maps for a network trained to discriminate between $N$ classes. The contrast-emphasized regions for selected classes are shown in Fig.~\ref{fig:Contrast_examples}d-g. In Fig.~\ref{fig:Contrast_examples}d, VGG-16 indicates that the contrast between a spoonbill and its notion of the flamingo class resides in the spoonbill's lack of an S-shaped neck. Similarly, the contrast between a spoonbill and a crane translates to not detecting white feathers in Fig.~\ref{fig:Contrast_examples}d. The contrast between a band-aid and a spoonbill is in the presence of the spoonbill's neck and legs. This is highlighted in Fig.~\ref{fig:Contrast_examples}f. Fig.~\ref{fig:Contrast_examples}e indicates that VGG-16 contrasts between a pig and a spoonbill based on the neck of the spoonbill.
The body and feather colors of the spoonbill are de-emphasized, but the shape of its legs and neck contribute towards VGG-16's decision. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Reasoning.png} \endminipage \caption{Contrastive explanations (CE) on Recognition. (a) Input $x$. (b) Grad-CAM of $x$ for predicted class $P$. (c) Representative image of nearest class $Q$. (d) CE for class $Q$. (e) Representative image of random class $Q$. (f) CE for random class $Q$ in (e). (g) CE when $P = Q$.}\label{fig:Reasoning} \end{center} \end{figure*} \subsection{Analysis} \label{subsec:Grad-CAM_Analysis} Consider an $L-$layered network $f()$ trained using the cross entropy loss $J()$ to differentiate between $N$ classes. The output of the network for an image $x$ after the last $L^{th}$ layer and before applying the loss is $y^L_{feat} = f_L(x)$, where $y^L_{feat}$ is a $1\times N$ vector containing the logits for each class. For brevity, we express $y^L_{feat}$ as $y$ in the remainder of this section. The general cross entropy loss for the label $i$ where $i \in [1,N]$ is, \begin{equation}\label{eq:CE} J(W, x, P, i) = -y_i + \sum_{j=1}^{N} e^{y_j}, \text{ where } y = f_L(x). \end{equation} During training, $i$ is the true label. \noindent\textbf{Why P:} Note that in Grad-CAM the logit of the class of interest is backpropagated. To ask \emph{`Why P?'}, the logit corresponding to class $P$, i.e., $y_P$, is backpropagated. We represent the backpropagated variable in Grad-CAM as $J_G(P, P)$ where, \begin{equation}\label{eq:Grad-CAM} J_G(P, P) = y_P. \end{equation} \noindent\textbf{Why P, rather than Q: }Contrast maps over $N$ classes are obtained by backpropagating the loss between the predicted class $P$ and the contrast class $Q$. We represent this backpropagated variable as $J_C(P, Q)$ in Eq.~\ref{eq:CE}, where $Q \in [1,N]$ and $Q\neq P$.
Approximating the exponent with its second-order Taylor series expansion, we have, \begin{equation} J_C(P, Q) = -y_Q + \sum_{j=1}^{N} \bigg(1 + y_j + \frac{y_j^2}{2}\bigg). \end{equation} Note that for a well trained network $f()$, the logits of all but the predicted class are small. Hence $\sum_{j=1}^{N} \frac{y_j^2}{2} \approx \frac{y_P^2}{2}$. Substituting, \begin{equation}\label{eq:J_Full} J_C(P, Q) = -y_Q + N + \sum_{j=1}^N y_j + \frac{y_P^2}{2}. \end{equation} The quantity in Eq.~\ref{eq:J_Full} is differentiated, hence nullifying the effect of the constant $N$. For a well trained network $f()$, small changes in $W$ do not adversely affect the sum of all logits $\sum_{j=1}^N y_j$. Hence, approximating its gradient as $0$ and discarding it, we obtain $J_C(P, Q)$ as a function of two logits, $y_Q$ and $y_P$, given by, \begin{equation}\label{eq:J_Final} J_C(P, Q) = -y_Q + \frac{y_P^2}{2}. \end{equation} Compare Eq.~\ref{eq:J_Final} against Eq.~\ref{eq:Grad-CAM}. From Eq.~\ref{eq:Grad-CAM}, only $y_P$, the logit for class $P$, is backpropagated to obtain importance scores. Hence, the importance score $\alpha_k$ highlights features in the learned $l^{th}$-layer manifold where $f_l(x)$ projects onto patterns that justify $P$. In Eq.~\ref{eq:J_Final}, the backpropagated gradients are a function of $-y_Q$ and $y_P$. Hence, the contrast importance score $\alpha_k^c$ highlights the non-common features between classes $P$ and $Q$. These non-common features span the difference between $m_P$ and $m_Q$ from Fig.~\ref{fig:Contrast_examples}a. Recall that $m_P$ is the learned manifold where $x$ is classified as $P$, and $m_Q$ is the hypothetical contrast manifold where $x$ is labeled as $Q$. \noindent\textbf{Why P, rather than P: }When $Q = P$, Eq.~\ref{eq:J_Final} is written as, \begin{equation}\label{eq:J_PP} J_C(P, P) = -y_P + \frac{y_P^2}{2}.
\end{equation} Note that the importance scores when backpropagating the first term in $J_C(P, P)$ are the negative of the backpropagated variable in Eq.~\ref{eq:Grad-CAM}. The authors of Grad-CAM~\cite{selvaraju2017grad} claim that backpropagating the negative score $-y_P$ provides counterfactual explanations. Since $J_C(P, P)$ is a function of both $y_P$ and $-y_P$, our maps are not in the same counterfactual modality. However, since we are backpropagating a loss function of the manifold $m_P$ against itself, the importance scores highlight those features in the image that prevent $f()$ from predicting $x$ as $P$ with higher confidence. If $J()$ were MSE, this modality reads as \emph{`Why not P with 100\% confidence?'}. \begin{comment} Also, since our contrastive gradients follow the Grad-CAM procedure for obtaining contrastive explanation for $Q$, the gradients $\frac{\partial J(W, x, P, Q)}{\partial W_l}$ at layer $l$ are global average pooled to obtain the importance scores. $\frac{y_^P^2}{2}$ is constant across all $K$ channels and then normalized. Hence, we can represent the loss as a function of only the $y_Q$ and $y_j$ terms by accounting for the differentiation and normalization. Applying Eq.~\ref{eq:J_Full} to all possible values of $Q$ except $P$ and averaging the results over $N-1$ contrast maps, we have, \begin{equation}\label{eq:J_Avg} \begin{split} \frac{1}{N-1}\sum_{i = 1, i\neq P}^N J(f, P, i) &= \frac{1}{N-1}\bigg[-\sum_{i = 1, i \neq P}^N y_i + \sum_{j=1}^N y_j \bigg],\\ \sum_{i = 1, i\neq P}^N J(f, P, i) &= \frac{1}{N-1}y_P, \end{split} \end{equation} Hence averaging $N-1$ contrast maps approximates to backpropagating scaled $y_P$ which is the same as Grad-CAM. Note that the scaling affects the importance score across all $K$ channels which is then normalized.
\end{comment} \subsection{Qualitative Results} \label{subsec:Explanations_Qual} \subsubsection{Experiments} In this section, we consider contrastive explanations on large-scale and fine-grained recognition. Large-scale datasets, like ImageNet~\cite{ILSVRC15}, consist of a wide variety of classes. Fine-grained recognition is the subordinate categorization of similar objects, such as different types of birds or cars, among themselves~\cite{yang2012unsupervised}. We consider the Stanford Cars~\cite{KrauseStarkDengFei-Fei_3DRR2013} and traffic sign recognition CURE-TSR~\cite{Temel2017_CURETSR} datasets for fine-grained recognition. On ImageNet, we use PyTorch's ImageNet pretrained VGG-$16$~\cite{simonyan2014very} architecture to show results in Fig.~\ref{fig:Reasoning}. VGG-$16$ is chosen to be consistent with Grad-CAM. Note that we tested generating contrastive explanations on other architectures including AlexNet~\cite{krizhevsky2012imagenet}, SqueezeNet~\cite{iandola2016squeezenet}, VGG-$19$~\cite{simonyan2014very}, ResNet-$18,34,50,101$~\cite{he2016deep}, and DenseNet-$161$~\cite{huang2017densely}. On the Stanford Cars dataset, we replace and train the final fully connected layer of an ImageNet pre-trained VGG-$16$ architecture to discriminate between $196$ classes. For CURE-TSR, we use the trained network provided by the authors in~\cite{Temel2017_CURETSR}. The results from the fine-grained datasets and the cat-dog image used in Grad-CAM~\cite{selvaraju2017grad} are shown in Fig.~\ref{fig:Reasoning}. Note that the time taken to generate a contrastive explanation is the same as for Grad-CAM. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Wrong_Reasoning.png} \endminipage \caption{Contrastive explanations (CE) on Recognition. (a) Input $x$. (b) Grad-CAM of $x$ for predicted class $P$. (c) Representative image of nearest class $Q$. (d) CE for class $Q$. (e) Representative image of random class $Q$.
(f) CE for random class $Q$ in (e). (g) CE when $P = Q$.}\label{fig:Wrong_Reasoning} \end{center} \end{figure*} \subsubsection{Results} The visual answers to \emph{`Why P, rather than Q?'} from different datasets are provided in Fig.~\ref{fig:Reasoning}. We visualize the considered images $x$ in column (a) of Fig.~\ref{fig:Reasoning}. Their respective Grad-CAM~\cite{selvaraju2017grad} explanations are shown in Fig.~\ref{fig:Reasoning}b. Our contrastive explanations are visualized in columns (d), (f), and (g) of Fig.~\ref{fig:Reasoning}. The questions answered by each of the contrastive explanations are shown below the images. Representative images of the considered \emph{Q} class are shown alongside in columns (c) and (e), respectively. Note that the network does not see these representative images to produce the explanations. It bases its explanations on its notion of the \emph{Q} class. The contrastive explanations provide an interesting insight into the decisions of neural networks. For instance, the network's explanation as to \emph{`Why Bull Mastiff, rather than Boxer?'} in Fig.~\ref{fig:Reasoning}d is human interpretable. From the Boxer's representative image, the two dogs differ in the shape of their jowls. This is shown in the red highlights below the Bull Mastiff's jaws. Note that the red-highlighted portion in the contrastive explanation is below the red-highlighted region from Grad-CAM's explanation. This leads humans to interpret that the network's notion of the difference between the two breeds is the size and/or shape of jowls. However, contrastive explanations need not always be human interpretable. Consider the case when a network trained on a traffic sign dataset, CURE-TSR~\cite{Temel2017_CURETSR}, is asked \emph{`Why No-Left, rather than Stop?'}. The intuitive human response to this question is that the letters spelling STOP are absent from the sign.
However, from the contrastive explanation in Fig.~\ref{fig:Reasoning}f, the network highlights the bottom left corner of the image. Among the $14$ traffic signs that the network is trained to differentiate in the dataset, the Stop sign is the only class with a hexagonal shape. Hence, the network has learned to check for a straight side in the bottom left. The absence of this side in $x$ indicates to the network that $x$ is not a STOP sign. This clearly illustrates the difference between the notions of classes held by humans and machines. When the difference between $P$ and $Q$ is not fine-grained, the contrastive explanations are similar to Grad-CAM explanations. This is illustrated by showing explanations between the predicted $P$ and random classes in Fig.~\ref{fig:Reasoning}f for the ImageNet and Stanford Cars datasets. The difference between a dog and a bluejay is in the face of the dog - the same region highlighted by Grad-CAM to make decisions. The input Bugatti Veyron's sloping hood is sufficiently different from the boxy hood of the Audi that it is highlighted. The same region is used to classify it as a Bugatti. Fig.~\ref{fig:Reasoning}g provides contrastive explanations when $P$ is backpropagated. We use the question \emph{`Why not P with 100\% confidence?'} as the explanation modality. However, as noted before, this question is loss-dependent. The results on the cat-dog image are human interpretable - the presence of the cat prevents the network from making a confident decision. \vspace{-0mm} \subsubsection{Results on noisy data}\label{subsubsec:Noisy_explanations} $P$ is the network prediction and hence is not controlled in the statement \emph{`Why P, rather than Q?'}. In this section, we add noise to $x$ to illustrate the effect of noise, as well as of an incorrect classification $P$, on contrastive explanations. The results are presented in Fig.~\ref{fig:Wrong_Reasoning}.
The first row shows the pristine image of the spoonbill along with Grad-CAM and contrastive explanations. The second row illustrates the scenario when the spoonbill image has Gaussian noise added to it. This noise, however, is insufficient to change the prediction $P$. In the third row, the noisy spoonbill image is classified as a window screen. In all three rows, both the Grad-CAM and contrastive explanations change. We first analyze Grad-CAM. For the pristine spoonbill image, the network infers based on the body of the spoonbill. When a small amount of noise is added, the correct classification is made based primarily on the legs. When a large amount of noise is added, the network incorrectly predicts the spoonbill as a window screen based on features around the bird. Among the contrastive explanations for why the image is not a flamingo, the network consistently highlights the neck. The results for the crane also highlight the body. The results for \emph{`Why not window with 100\% confidence?'} highlight the face and tail of the bird, which is intuitive. Hence, contrastive explanations provide additional context and information that is not available from observed causal explanations. We propose to tie inference to contrastive patterns rather than associative feed-forward patterns so as to obtain additional features to infer from. In other words, not only does the bird image have to have the requisite patterns for a spoonbill, it also has to satisfy the \emph{`Why not Q?'} modality, where $Q\in [1,N]$. Inference based on all possible contrastive features is contrastive inference. In Section~\ref{Sec:Inference}, we illustrate the robustness of contrastive inference. \vspace{-3mm} \begin{comment} \textbf{Current Feed-Forward -> Contrast -> Gradients} We first consider inference in existing supervised feed-forward neural networks like VGG-16, ResNets-18,34,50, and 101 in a classification setting.
Consider an $L$ layered neural network $f()$ trained to classify a distribution $X$ into $N$ classes. This network can be broken into two components: feature extraction stage $f_1 - f_{L-1}$ and a classification stage $f_L$. $f_L$ is commonly a fully connected layer consisting of $N$ filters each of dimensionality $d_{L-1}\times 1$. During inference, given an image $x\in X$, the feed-forward hierarchy of the network produces a feature $f_{L-1}(x)$ of dimensionality $d_{L-1}\times 1$ that is projected independently onto each of the $N$ filters. The filter with the maximum projection is inferred as the class to which the image $x$ belongs. Mathematically, \begin{equation}\label{eq:Filter} \begin{gathered} y_{feat} = f_{L-1}(x), \forall y\in \Re^{N\times 1},\\ y_{pred} = \operatorname*{arg\,max} (W_L^T y_{feat} + b_L),\\ \forall W_L\in \Re^{d_{L-1}\times N}, f_{L-1}(x)\in \Re^{d_{L-1}\times 1}, b_L\in \Re^{N\times 1}, \end{gathered} \end{equation} where $y_{feat}$ is the feed-forward feature before the last layer and $y_{pred}$ is the filter onto which $y_{feat}$ projects maximally. $W_L$ and $b_L$ are the weight and bias parameters in the last layer, respectively. All the weights in $f()$ combined, provide the knowledge base, e.g., the color of the fur in Fig~\ref{fig:Concept}. The extracted features $y_{feat}$ are both derived and used in a feed-forward manner. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Contrastive_Features.pdf} \endminipage \caption{Contrastive Feature Generation. (a) $m(cat)$ in blue is the original manifold that recognizes a cat as a cat. $\Tilde{m}(frog)$ is the contrastive hypothesis provided. Change between the two is termed contrast. (b) Contrastive feature generation across hypotheses.}\label{fig:Contrastive_Features} \end{center} \end{figure*} Contrast represents the minimum possible change required to take the intervention into consideration. 
In this paper, we extract contrast from the network $f()$. In Fig~\ref{fig:Concept}, contrast was obtained from knowing the contents of the knowledge base. A pretrained neural network $f()$ stores knowledge in the parameters of every layer. The layer parameters $W_l$ and $b_l$ represent the knowledge base that is learned to differentiate between trained classes. We obtain contrast from $f()$ by providing contrastive hypotheses to the network. Consider the trained manifold $f_{l}(), \forall l \in [1,L]$. This manifold is a function of an image and its class. Given a sample from the training distribution with the correct class, the sample is expected to project onto the span of weights. However, if the same sample is provided with a different class, the network weights move to encompass the new sample within its span. The movement is proportional to gradients and provides the disparity between the learned class and the contrastive class. We use these gradients as contrastive features. A toy example is shown in Fig.~\ref{fig:Contrastive_Features}a. A network is trained to correctly classify a cat image as a cat. This is illustrated as the trained blue manifold $m(cat)$. However, if the same image is provided with a frog label, the manifold shifts to a different space where the cat image can be classified as a frog, illustrated by the purple manifold $\Tilde{m}(frog)$. In this case, gradients of $W_1$ represent contrast - expected \emph{fact} from knowledge manifold vs provided contrastive hypothesis. Note that contrast represents distance and hence we only use the magnitude of gradients in this work. Consider a neural network $f()$ trained to classify a distribution $X$ into $N$ classes. Given an instance $x\in X$ during testing, the network produces an output $y$ such that $y = f(x)$. The observed output can thus be denoted as $f(y|x)$. 
Causal and counterfactual inference usually takes the form of $f(y|\text{do}(x))$ where there is an intervention in $x$ or utilization of $do-calculus$ to obtain causality. Contrastive features on the other hand require interventions in the output. Hence the output of contrastive features takes the form of $f(\text{do}(y)|x)$. \end{comment} \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Inference.png} \endminipage \caption{Feed-Forward Inference vs Contrastive Inference. (a) The features in blue before the final fully connected layer represent the feed-forward features. (b) The features for contrastive inference represent the change in a learned model when given a contrast hypothesis.}\label{fig:Contrastive_Inference}\vspace{-3mm} \end{center} \end{figure*} \section{Contrastive Features} \label{sec:Abduction} In visual space, we define contrast as the perceived difference between two known quantities. In this paper, we assume that the knowledge of the two quantities is provided by a trained neural network $f()$. The knowledge present in a feed-forward classification network is discriminatory and its reasoning process is inductive. In other words, given a neural network $f()$ that is trained to classify between $N$ classes, the network recognizes patterns to infer a class $P$ for any given image $x$. We first describe the feed-forward features used to make inductive decisions before providing a representation space definition for contrast. \vspace{-3mm} \subsection{Feed-Forward Features}\label{subsec:FF-Features} Consider an $L$-layered classification network $f()$ trained on $x \in X$ for a certain task. $f()$ stores knowledge about the task on $X$ in the parameters of its layers - $W_l$ and $b_l$, $\forall l \in [1,L]$. Given an input image $x$, traditional feed-forward networks that reason inductively project the image on these parameters to make a decision $P$. 
The intermediate activations from a layer $l$ in $f()$ are termed $y_{feat}^l$. These activations are projections on the span of the weights $W_l, \forall l \in [1,L-1]$. $y_{feat}^l$ are the features used to make the decision $P$ and hence, activations $y_{feat}^l$ are the feed-forward features. In both the explanation and inference applications, we use $y_{feat}^l$ as feed-forward inductive features. $y^l_{feat}$ are used to obtain explanations for decisions using Grad-CAM~\cite{selvaraju2017grad} which is described in Section~\ref{Sec:Contrast}. If $y_{feat}^{L-1}$ are the feed-forward features at the $(L-1)^{th}$ layer, a task specific mechanism is used to infer the feed-forward prediction $P$ which is explored in Section~\ref{Sec:Inference}. \vspace{-3mm} \subsection{Contrastive Features}\label{subsec:Contrastive-Features} Note that contrastive reasons are a subset of all possible abductive reasons. We adopt the definition of abduction from~\cite{velazquez2013epistemic} who define abduction as \emph{a process of belief change that is triggered by an observation and guided by the knowledge and belief that an agent has the ability to derive}. Hence, contrast is a measure of change created by an input image $x$ in the parameters of a trained network against the contrast class. In terms of neural network representation space, contrast between classes $P$ and $Q$ for an image $x$ is the difference between the manifolds that predict $x$ as class $P$ and $x$ as $Q$. The network parameters $W_l$ and $b_l$ span a manifold where the given image $x$ belongs to a class $i, i \in [1,N]$. A toy classification example is shown in Fig.~\ref{fig:Contrast_examples} where a learned manifold, $m_P$, is visualized in blue. On the learned manifold, a spoonbill is classified as a spoonbill. A hypothetical contrastive manifold, $m_Q$, is shown in purple that differs from the blue manifold in that it classifies a spoonbill as a flamingo. 
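To make the manifold-shift intuition concrete, consider a hedged toy example (not the full method): the last linear-softmax layer of a classifier, where the cross-entropy gradient with respect to the weights has the closed form $\nabla_W J = x\,(p - e_Q)^\top$ for a contrast class $Q$. Its magnitude measures how far the learned manifold must move to accommodate the contrastive hypothesis. All weights below are hypothetical:

```python
import math

# Toy "trained" linear-softmax classifier: 4-dim features, 3 classes.
# W (4x3) and b are hypothetical stand-ins for a trained last layer.
W = [[ 0.9, -0.2,  0.1],
     [ 0.1,  0.8, -0.3],
     [-0.4,  0.2,  0.7],
     [ 0.3, -0.1,  0.2]]
b = [0.0, 0.1, -0.1]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def contrast_magnitude(x, contrast_class):
    """L2 norm of dJ/dW for cross-entropy against contrast_class.

    For a linear-softmax layer, dJ/dW[i][j] = x[i] * (p[j] - 1{j == Q}).
    """
    z = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
         for j in range(len(b))]
    p = softmax(z)
    grad_sq = 0.0
    for i in range(len(x)):
        for j in range(len(b)):
            err = p[j] - (1.0 if j == contrast_class else 0.0)
            grad_sq += (x[i] * err) ** 2
    return math.sqrt(grad_sq)

x = [1.0, 0.2, -0.5, 0.3]   # sample the toy classifier assigns to class 0
mags = [contrast_magnitude(x, q) for q in range(3)]
# The predicted class needs the smallest weight change; every contrast
# class Q != P forces a larger shift of the learned manifold.
```

Because the gradient for the predicted class is smallest, its magnitude already behaves like a contrast measure: the further $Q$ sits from the network's belief, the larger the required change.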
The difference between the two manifolds is contrast. Note that $m_Q$ is hypothetical and hence the difference between the two cannot be directly measured. In this paper, we measure the change required to obtain the contrastive manifold from the trained manifold. We use gradients to measure this change. Usage of gradients to characterize model change is not new. Neural networks whose objectives can be formulated as a differentiable loss function are trained using backpropagation~\cite{rumelhart1986learning}. The authors in~\cite{kwon2019distorted} used gradients with respect to weights to characterize distortions for sparse and variational autoencoders. Fisher Vectors use gradients to characterize the change that data creates within networks~\cite{jaakkola1999exploiting}, an approach later extended to classify images~\cite{sanchez2013image}. In detection applications on anomalous~\cite{kwon2020backpropagated}, novel~\cite{kwon2020novelty}, and out-of-distribution~\cite{lee2020gradients} data, gradients are used to characterize model change. We extract contrast for class $Q$ when an image $x$ is predicted as $P$ by backpropagating a loss between $P$ and $Q$. Hence, for a loss function $J()$, contrast is $\nabla_{W_l} J(W,x,P,Q)$, where $W$ are the network weights. Note that $J()$ is a measure of contrastivity between $P$ and $Q$. $Q$ can be any one of the $[1,N]$ classes. Hence, there are $N$ possible contrastive features at any layer $l$, given by $\nabla_{W_l} J(W,x,P,i)$ for $i \in [1,N]$. The feed-forward features $y^l_{feat}$ and the proposed contrastive features are analogous in the sense that they provide mechanisms to infer or justify decisions. In Sections~\ref{Sec:Contrast} and~\ref{Sec:Inference}, we demonstrate the applicability of these contrastive features in two applications: contrastive explanations and contrastive inference. \begin{comment} We construct the output and the loss function to obtain contrastive features from a network. 
Given a network $f()$, trained on $N$ classes, we find contrast for a datapoint $x$ with a label $y$ to all possible $N$ classes. We do so by backpropagating $N$ Hence, instead of instantiating an independent knowledge base like in, we extract the relationships $P$ between concepts directly from the perception model $f()$. These concepts are specifically designed to differentiate between classes and hence is termed contrastive reasoning. For the same spoonbill the contrastive question defining the knowledge concept as the difference between $P$ and $Q$ entities from the abductive reasoning statement \emph{'Why P, rather than Q?'}. For instance, in Fig.~\ref{fig:concept}, a neural network is trained to recognize both spoonbills and flamingos as separate classes. Thus, the network has access to the discriminative knowledge that separates the two classes. This knowledge is stored in the network's weight and bias parameters, termed as $W$ and $b$ respectively. These parameters span a manifold where the given image $x$ belongs to a class $i, i \in [1,N]$. A toy classification example is shown in Fig.~\ref{fig:contrast} where a learned manifold is visualized in blue. On the learned manifold, a spoonbill is classified as a spoonbill. A hypothetical contrastive manifold is shown in purple that differs from the blue manifold in that it recognizes a spoonbill as a flamingo. The same figure holds for regression networks, where the manifolds exist in a continuous space rather than discrete space. In terms of neural network representation space, contrast is the difference between the manifolds that predict $x$ as $P$ and $x$ as $Q$. In this paper, instead of directly measuring the difference between learned and contrastive manifolds, we measure the change required to obtain the contrastive manifold from the learned manifold. We use gradients to measure this change. Usage of gradients to characterize model change in not new. 
The authors in~\cite{kwon2019distorted} used gradients with respect to weights to characterize distortions for sparse and variational autoencoders. Fisher Vectors use gradients to characterize the change that data creates within networks~\cite{jaakkola1999exploiting} which were extended to classify images~\cite{sanchez2013image}. We formalize the framework for generating contrastive reasons in this section. We first define contrast from a We adopt the definition of abduction from~\cite{velazquez2013epistemic} to define contrast. \cite{velazquez2013epistemic} defines abduction as \emph{a process of belief change that is triggered by an observation and guided by the knowledge and belief that an agent has the ability to derive}. Considering $P$ as the We adopt the framework for abductive reasoning from the authors in~\cite{dai2019bridging}. They instantiate abductive reasoning as first-order logical program that runs in parallel to existing perception models to decipher mathematical equations in an image. The first-order program requires knowledge concepts that are input into the system apart from the data itself. They use a generic CNN as the perception model to recognize the symbols in the given image. An external abductive model $B$ is defined which consists of knowledge of the structure of equations, and the recursive bitwise operations. Consider such a model applied to large scale recognition datasets. In the example of Fig.~\ref{fig:Concept}, the knowledge base must be initialized with the neck and body category for existing reasoning mechanisms to function well. Consider the number of semantic categories if the differentiation is among $1000$ classes in ImageNet dataset. Initializing the knowledge base becomes both computationally tedious and defeats the purpose of learning from data. 
To alleviate this shortcoming, we adopt the definition of abduction from~\cite{velazquez2013epistemic} who define abduction as \emph{a process of belief change that is triggered by an observation and guided by the knowledge and belief that an agent has the ability to derive}. Hence, instead of instantiating $B$ as an independent knowledge base, we extract the relationships $P$ between concepts directly from the perception model $f()$. These concepts are specifically designed to differentiate between classes and hence we call it contrastive reasoning. Note that Abductive learning involves deriving a set of hypotheses that can be used to explain a given set of facts. Reasoning through abduction, is that one or more of the derived hypotheses, if true, can be used to explain the occurrence. Abductive reasoning provides a non-monotonic reasoning paradigm to overcome the limitations and false conclusions from inductive reasoning. \textbf{Abductive Learning Framework:} The framework proposes a simultaneous perception and reasoning framework for existing neural network architectures. We adopt the definition of abduction from~\cite{velazquez2013epistemic} who define abduction as \emph{a process of belief change that is triggered by an observation and guided by the knowledge and belief that an agent has the ability to derive}. Abductive reasoning is a common-sense based reasoning rather than mathematical reasoning. As such, its work in literature is limited to using it in specific real world scenarios - police investigations~\cite{carson2009abduction}, medical diagnostics~\cite{magnani1992abductive}, automobile diagnostics~\cite{finin1989abductive}, archaeology~\cite{thagard1997abductive}, and ontology~\cite{elsenbroich2006case}, psychology~\cite{shank1998extraordinary} among others. In all these cases, the formulation of abduction is based on first-order logic. Modern deep learning algorithms have far exceeded the capabilities of first-order logic. 
Hence, the current formulations of abduction are insufficient to obtain hypotheses from neural networks. We comment on the difference between contrastive inference and abductive inference. Abductive reasoning provides $N$ independent hypotheses to verify the validity of $N$ separate decisions respectively. Based on some criteria, one of the $N$ hypotheses is chosen and its corresponding decision made. This form of decision making is termed as abductive inference. In this paper, we differentiate abductive inference from contrastive inference in the way that the abductive reasons are used. Contrastive inference uses patterns within these hypotheses to infer decisions instead of selecting these hypotheses. We adopt the definition of abduction from~\cite{velazquez2013epistemic} who define abduction as \emph{a process of belief change that is triggered by an observation and guided by the knowledge and belief that an agent has the ability to derive}. Consider an observation $x$ derived from a distribution $P$. Consider a neural network $f()$ trained on the same distribution $P$. Given $x$, $f()$ predicts the corresponding label of $x$ as $\Tilde{y}$. Given that the task is classification, $\Tilde{y}$ refers to a class among $N$ trained classes. Hence, $f()$ predicts $Pr(y'|x)$. During training, the goal is to have the prediction $\Tilde{y}$ the same as the ground truth $y$. An empirical loss function $J()$ is constructed that measures the true sampled distribution We first present the task of abductive reasoning in existing literature. Abductive reasoning involves the definition of a first-order logical program that runs either in parallel to or on top of existing perception models. The first-order program requires knowledge concepts that are input into the system apart from the data itself. For instance, the authors in~\cite{dai2019bridging} decipher mathematical equations in an image based on an abductive learning framework. 
They use a generic CNN as the perception model to recognize the symbols in the given image. An external abductive model $B$ is defined which consists of knowledge of the structure of equations, and the recursive bitwise operations. Consider such a model applied to large scale recognition datasets. In the example of Fig.~\ref{fig:Concept}, the knowledge base must be initialized with the neck and body category for existing reasoning mechanisms to function well. Consider the number of semantic categories if the differentiation is among $1000$ classes in ImageNet dataset. Initializing the knowledge base becomes both computationally tedious and defeats the purpose of learning from data. In this paper, we propose to replace this manual allocation of first-order programming using the perception model itself. We posit that since a neural network has learned to differentiate between $1000$ classes, it has learned the fine-grained differences between the $1000$ classes. In such a scenario, our goal is to extract the knowledge concepts $B$ from the network $f()$ itself. \subsection{Problem Setting} We first describe an abductive learning framework. Consider a distribution $\mathcal{D}$ consisting of data $X$ and their corresponding labels $Y$. $\mathcal{D}$ is sampled from $\mathcal{D}$ and consists of $M$ data points given by $\mathcal{D} = {(x_1, y_1), (x_2, y_2), \dots (x_M, y_M)}$. This forms the training set for a neural network $f()$. The neural network is trained on the $M$ data points to discriminate between $N$ classes for any given image $x$. The network $f()$ then acts as the knowledge base $K$ from which relationships between classes $C$ are extracted. Note that these relationships are \emph{contrasts} between the pseudo-label assigned by the network and the contrast label to be extracted in. Hence, the system consists of Consider a distribution $(X,Y)$ from which individual $(x,y)$ data are sampled. $x$ is the image data and $y$ is its corresponding label. 
Consider $M$ samples derived from $X$. This sampled distribution is $\Tilde{X}$. Note that $\Tilde{X}$ is comprised of $(x_1, y_1), (x_2, y_2), \dots (x_M, y_M)$. The goal of a classification network $f()$ is to correctly predict the output $y \in [1, M]$ given an input $x$. \subsection{Grad-CAM} \label{subsec:Gradcam} Grad-CAM is used to visually justify the decisions made by a neural network. The actiivations from the last convolutional layer of a network are used to create these visualizations since they possess high-level semantics while maintaining spatial information. For any class $i, \forall i \in [1,N]$, the logit $y_i$ is backpropagated to the feature map $A_l$ where $A_l = f_l(x)$ and $l$ is the last convolutional layer. Let there be $K$ channels in this $l^{th}$ layer. The gradients at every feature map are $\frac{\partial y_i}{\partial A_l^k}$. These gradients are the same size as the kernels, $u$ and $v$ and the same number as the channels $K$. The gradients are global average pooled across $u$ and $v$ to obtain importance scores $\alpha_k$ of every feature map in $l^{th}$ layer. The individual maps $A_l^k$ are multiplied by their importance scores $\alpha_k$ and averaged to obtain a heat map of size $u,v$. Only the positive values in this map are used since they represent the values that have a positive influence for class $i$. Hence, Grad-CAM map at layer $l$ and class $i$ is given by $L^i = ReLU(\sum_{k=1}^K \alpha_k A^k_l )$. \end{comment} \section{Related Works} \label{Sec:LitReview} In Section~\ref{sec:introduction}, we introduced abductive reasoning as an alternative reasoning scheme to existing induction that better generalizes to new situations. Abductive reasoning requires multiple hypotheses to infer decisions. Hypotheses are defined as answers to logical questions that aid inference. In this section, we first describe the possible logical questions - causal, counterfactual, and contrastive. 
We then motivate our choice of contrastive questions to obtain abductive hypotheses. \subsection{Abductive Reasoning} \noindent \textbf{Abductive Reasoning: }Consider a classification network that, given a datapoint $x$, predicts the label of $x$ as $P$. The factors that cause $P$ are termed causal reasons~\cite{pearl2009causal, halpern2005causes}. An explanation for the generation of label $P$ can be defined as the answer to the question \emph{`Why P?'}. For instance, for a classification algorithm, Grad-CAM~\cite{selvaraju2017grad} provides visual explanations for \emph{`Why P?'}. However, these explanations are based on observational causality~\cite{lopez2017discovering}. Observed causality can hence be differentiated from interventionist causality, where the explanations for a predicted label $P$ change in response to active interventions. Active interventions can further be divided into two scenarios based on the location of the interventions - either in the data itself or in the predictions~\cite{mcgill1993contrastive}. Intervening within the data itself provides answers to \emph{`What if?'} questions. These answers are counterfactual explanations. However, enumerating all possible interventions in data can be long, complex, and impractical~\cite{lopez2017discovering}. Even with interventions, it is challenging to estimate if the resulting effect is a consequence of the intervention or due to other uncontrolled interventions~\cite{bottou2013counterfactual}. The second type of interventions can occur in the network predictions. These are questions of the type \emph{`Why P, rather than Q?'}. The answers to these questions form contrastive reasons. The answers to observed causal, counterfactual, and contrastive questions constitute abductive reasons. Before describing the current formulations of abductive reasoning and frameworks, we relate the considered observed causal and contrastive reasons to Fig.~\ref{fig:Concept}. 
\noindent \textbf{Abductive and Contrastive Reasons: }In this paper, we specifically choose contrastive questions to obtain abductive hypotheses and extract answers from neural networks by intervening in the predictions made. This is because pre-trained neural networks already have implicit contrastive knowledge. We first motivate this claim. In Fig.~\ref{fig:Concept}, consider that the knowledge base is a trained neural network $f()$. During testing, it is given the image of a spoonbill $x$, with the task of predicting the label $P$ of $x$. Consider that $f()$ correctly predicts the label $P$ as spoonbill. The feed-forward reason for $P$ is the detection of a pink, round body and a straight neck. These are reasons for \emph{`Why spoonbill?'} and represent observed causal reasons. The contrastive reasons answer the questions \emph{`Why spoonbill, rather than flamingo?'} and \emph{`Why spoonbill, rather than crane?'}. Here, $Q$ is either flamingo or crane. The contrastive reasons in Fig.~\ref{fig:Concept} visualize the knowledge base's notion of the difference between a spoonbill and a flamingo and between a spoonbill and a crane. While $P$ is constrained by the prediction from $f()$, $Q$ is user-defined and can be used to ask the network for other explanations, including \emph{`Why spoonbill, rather than band-aid?'} or \emph{`Why spoonbill, rather than pig?'}, as long as the network $f()$ has learned these classes. $Q$ can be any of the learned classes in the network. If $f()$ is trained to classify between $N$ classes, $Q$ can take any value in $[1, N]$. Hence, for a recognition network with some prediction $P$ and trained to discriminate between $N$ classes, there are potentially $N$ possible contrastive reasons. The network acts as a discriminatory knowledge base that has implicitly learned the contrastive information between $P$ and $Q$. Therefore, we choose the contrastive model of abductive reasoning in our framework. 
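A network trained on $N$ classes thus admits $N$ contrastive questions, one per class. As a toy sketch with hypothetical softmax probabilities (using the fact that, for a linear-softmax last layer, the cross-entropy gradient norm factors as $\|x\|\,\|p - e_Q\|$, so $\|p - e_Q\|$ tracks the per-question contrast):

```python
import math

# Hypothetical softmax output of a 4-class network; the prediction P is
# the argmax (here class 0).
p = [0.70, 0.15, 0.10, 0.05]
N = len(p)
P = p.index(max(p))

# One contrastive question per trained class Q, and the size of the
# manifold change ||p - e_Q|| that answering each question induces.
questions = [f"Why class {P}, rather than class {q}?" for q in range(N)]
contrast = [math.sqrt(sum((p[j] - (1.0 if j == q else 0.0)) ** 2
                          for j in range(N))) for q in range(N)]
```

The question with $Q = P$ demands the smallest change, mirroring the fact that the trained manifold already classifies $x$ as $P$.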
\noindent\textbf{Reasoning versus Explanations: }Reasoning is the process by which a network $f()$ infers a decision $P$. The authors in~\cite{goguen1983reasoning} consider reasoning to be a mental process which can only be surmised based on how it manifests - in terms of explanations and inference. Explanations are comprehensible justifications of reasoning. In other words, explanations are justifications that a network $f()$ provides after a decision $P$ is made. Inference is the outcome of reasoning, i.e., the decision $P$ itself. Therefore, for all trained networks, we can ask for and extract observed causal~\cite{selvaraju2017grad}, counterfactual~\cite{goyal2019counterfactual}, and contrastive explanations~\cite{prabhushankar2020contrastive}. Similarly, inference can occur based on causal, counterfactual, or contrastive features. It can then be termed causal, counterfactual, or contrastive inference, respectively. Two of these explanatory schemes and inference mechanisms are shown in Fig.~\ref{fig:Concept}. Technically, the explanations come after decisions, but we use explanations as a visual substitute for reasoning in Fig.~\ref{fig:Concept}. \subsection{Abductive Inference and Learning} \noindent \textbf{Abductive Inference: }Abductive inference is generally described as Inference to the Best Explanation (IBE)~\cite{harman1965inference}. While inductive inference associates learned patterns and extrapolates them to unseen test cases, IBE postulates all possible explanations for the occurrence of the test case and picks the class with the \emph{best} explanation. Note that the novelty of existing works is derived from finding a metric that determines which explanation is \emph{best}. Traditionally, the simplicity of an explanation was seen as a stand-in for the best explanation. Recent works for IBE in machine learning are postulated under the abductive framework. 
\noindent \textbf{Abductive Framework: }The combination of abductive reasoning and inference together forms the abductive framework. Abductive reasoning is a common-sense-based form of reasoning rather than mathematical reasoning. As such, its utility is in specific real-world scenarios - police investigations~\cite{carson2009abduction}, medical diagnostics~\cite{magnani1992abductive}, automobile diagnostics~\cite{finin1989abductive}, archaeology~\cite{thagard1997abductive}, ontology~\cite{elsenbroich2006case}, and psychology~\cite{shank1998extraordinary} among others. In all these cases, the formulation of abduction is based on first-order logic. Modern deep learning algorithms have far exceeded the capabilities of first-order logic. In machine learning, an abductive framework is primarily seen as a logical process. The authors in~\cite{kakas2000abductive} describe the logical process of abduction and contrast it against an inductive logical process. The authors in~\cite{dai2019bridging} instantiate abductive reasoning as a first-order logical program that runs in parallel to existing perception models to decipher mathematical equations in an image. The first-order program requires knowledge concepts that are input into the system apart from the data itself. They use a generic CNN as the perception model to recognize the symbols in the given image. An external abductive model is defined, which consists of knowledge of the structure of equations and the recursive bitwise operations. \subsection{Contrastive Inference and Learning} In this paper, we divine the knowledge concepts intrinsically from the perception network. We do so by defining the knowledge concept as the difference between the $P$ and $Q$ entities from the contrastive reasoning statement \emph{`Why P, rather than Q?'}. The name is derived from the psychological concept of contrastive inference. 
\noindent\textbf{Contrastive Inference: }In developmental psychology, contrastive inference is a mechanism that allows for the divination of entities that are not mentioned linguistically~\cite{kronmuller2014show}. For instance, the phrase \emph{`Hand me the large cup'} implies the existence of multiple cups of varying sizes without the need for mentioning their existence explicitly. Human-interpretable explanations have been shown to be contrastive - explanations are based on a contrast to a known fact. Contrastive inference provides a way of obtaining pragmatic explanations and hence is a form of abductive reasoning~\cite{kronmuller2014show,lipton2003inference}. \noindent\textbf{Contrastive Learning: } The term contrastive learning denotes a framework wherein negative samples of training data that are semantically similar to positive samples are mined~\cite{arora2019theoretical, NIPS2013_5007, chen2020simple}. In~\cite{NIPS2013_5007}, contrastive learning is used to differentiate between specific topics within mixture models. The authors in~\cite{chen2020simple} propose a contrastive learning framework where multiple data augmentation strategies are used to train a network head with a contrastive loss. Note that the proposed method does not derive from the existing contrastive learning setup. \noindent \textbf{Proposed Contrastive Inference: }We comment on the difference between the proposed contrastive inference and existing abductive frameworks. In any given scenario, an abductive inference framework makes one of $N$ possible decisions based on $N$ independent hypotheses that are extracted to verify the validity of the $N$ separate decisions. The hypotheses are extracted based on a knowledge base. One of the hypotheses is then chosen and its corresponding decision made. Consider such a framework from~\cite{dai2019bridging}, where an external model serves as the knowledge base. Consider such a model applied to large-scale recognition datasets. 
In the example of Fig.~\ref{fig:Concept}, the knowledge base must be manually initialized with the neck and body category for existing reasoning mechanisms to function well. Consider the number of semantic categories requiring manual allocation if the differentiation is among $1000$ classes in ImageNet dataset. Initializing the knowledge base becomes both computationally tedious and defeats the purpose of learning from data. In this paper, we address this challenge by extracting contrast based abductive reasons directly from the network. We start by defining contrast both in the visual space and in the representation space spanned by the neural network. \begin{comment} we first define contrast as a way of obtaining abductive reasons. The proposed contrastive inference framework learns the base network $f()$ in a supervised feed-forward fashion. We then extract the contrast between the predicted class of a data point $P$ and a contrast class $Q$. These are denoted as contrastive reasons and visualized. The data is then represented as a measure of change that it creates in the trained base network against all classes. Such a representation is shown to be robust to distortions. The proposed inference framework is novel in its use of the base network $f()$, the interpretation of abduction and contrast, and its applicability in the inference stage across pretrained networks. \noindent\textbf{Data characterization using gradient based features} Use of gradients to characterize data is not new. The authors in \noindent\textbf{Domain Adaptation} The authors in~\cite{saenko2010adapting, hendrycks2019benchmarking, Temel2017_CURETSR, geirhos2018generalisation} have demonstrated the vulnerability of feed-forward neural networks to domain shifts during test time. Domain shifts due to changes in lightning, camera pose, background, acquisition sensors, and synthetic-to-real shifts. 
\end{comment} \IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}} \IEEEPARstart{S}{upervised} learning entails learning by association.
In the field of psychology, associative learning involves the formation of previously unknown associations between stimuli and responses~\cite{byrne2014learning}. In supervised machine learning, the stimulus is the data provided to the algorithm and the response is the label required of it. During training, the algorithm is conditioned to associate the data with its label. Hence, the goal of training an algorithm is to learn the patterns in data that either cause or correlate with the given label. During testing, the algorithm searches for the learned patterns to predict the label. This act of predicting the label is termed inference. The means of arriving at the inference is reasoning. An example of supervised machine learning is object recognition, where a label is inferred based on the learned patterns in a given image. Recent advances in machine learning have enabled state-of-the-art performance in recognition tasks~\cite{krizhevsky2012imagenet, he2016deep}. Specifically, recognition algorithms have surpassed the top-5 human error rate of $5.1\%$ on ImageNet~\cite{russakovsky2015imagenet}. Error rate measures the fraction of incorrect predictions made by humans or the network. The reduction in error rate can be traced to advancements in the learning process~\cite{kingma2014adam}, regularization~\cite{srivastava2014dropout}, and hardware utilization~\cite{li2017analysis}, among others. However, all these advancements rely on the traditional feed-forward-based associative reasoning and inference scheme. The author in~\cite{olshausen201427} examines the conventional feed-forward model of inference in both human vision and machine vision, where a set of neurons extracts object-selective features in a hierarchical fashion. Such a feed-forward representation leads to a task-specific, training-data-specific inference mechanism. For instance, a change in data domain during testing requires retraining to obtain a new representation conducive to the test domain.
This is because the feed-forward model follows an inductive reasoning approach to inference. Inductive reasoning is a branch of causal inference in perception~\cite{shams2010causal}. Inference based on induction provides a conclusion with uncertainty, which allows for speculation regarding the cause. \begin{figure*}[!htb] \begin{center} \minipage{1\textwidth}% \includegraphics[width=\linewidth]{Figs/Concept.png} \endminipage \caption{Discriminative features are identified in feed-forward reasoning frameworks. Contrasts between observed facts and a known knowledge base are used as reasons in contrastive inference.}\label{fig:Concept} \end{center} \end{figure*} Contrary to an inductive framework, an abductive framework creates a hypothesis and tests its validity without considering the cause. Abductive reasoning allows humans to better generalize to new situations. Extensive work has been conducted to understand the development of the human brain and visual perception based on abductive reasoning~\cite{national2018people}. This form of reasoning was introduced by the philosopher Charles Sanders Peirce~\cite{peirce1931collected}, who saw abduction as a reasoning process from effect to cause~\cite{paul1993approaches}. In contrast, induction conjectures general laws from particular instances. Peirce further connected induction and abduction by stating that \emph{Induction is an argument which sets out from a hypothesis, resulting from previous Abduction}. In this paper, we follow this principle to modify inductively trained feed-forward neural networks to reason and infer abductively for the task of object recognition. In other words, we use an inductively trained neural network to extract multiple hypotheses regarding the input's class and use them as features for inference. Specifically, we extract hypotheses based on the network's assertion of contrast between classes.
We call this inference mechanism \emph{contrastive inference} and its corresponding reasoning \emph{contrastive reasoning}. To explain this concept of contrast, let us take the toy example of two subjects distinguishing between three classes of birds: spoonbills, flamingos, and cranes. This is illustrated in Fig.~\ref{fig:Concept}. All three are shallow-water birds with fine-grained physical differences. Spoonbills and flamingos have round and pink bodies. However, they differ primarily in the shape of their necks: a flamingo has an S-shaped neck while a spoonbill does not. Spoonbills and cranes have long necks but differently colored bodies. Consider that images of all three birds have been shown to two subjects beforehand, thereby training them to distinguish between the three birds. Given a new test image of a spoonbill, subject A correctly recognizes the image as a spoonbill based on the shape and color of the bird's body and its beak. Subject B also recognizes the image as a spoonbill but does so by noticing that the neck is \emph{not} S-shaped and the body is \emph{not} white. While both subjects infer correctly, their reasoning mechanisms are complementary. Subject B infers the \emph{contrast} between known facts (the S-shaped neck, the white body) and their absence in the test image to make the right classification. Hence, subject B constructed multiple hypotheses and contrastively reasoned why the bird can be neither a flamingo nor a crane. Inference that is built on these contrastive reasons or features is termed \emph{contrastive inference}. On the contrary, subject A reasons inductively to infer the correct class based on identifying the patterns in the image corresponding to a spoonbill. This is the feed-forward inference currently used by neural networks.
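The complementary behaviors of subjects A and B can be mimicked in a short, purely illustrative Python sketch. The bird attributes and their boolean encoding below are our own toy assumptions, not part of the proposed method:

```python
# Toy sketch of the spoonbill example: subject A (feed-forward) matches the
# learned pattern of each class; subject B (contrastive) rules out the classes
# whose known facts the observation contradicts. Attributes are assumptions.
BIRDS = {
    "spoonbill": {"s_shaped_neck": False, "white_body": False, "pink_body": True},
    "flamingo":  {"s_shaped_neck": True,  "white_body": False, "pink_body": True},
    "crane":     {"s_shaped_neck": False, "white_body": True,  "pink_body": False},
}

def feed_forward(obs):
    # Subject A: pick the class whose pattern best matches the observation.
    return max(BIRDS, key=lambda c: sum(obs[k] == v for k, v in BIRDS[c].items()))

def contrastive(obs):
    # Subject B: eliminate every class contradicted by the observation.
    remaining = [c for c in BIRDS
                 if all(obs[k] == v for k, v in BIRDS[c].items())]
    return remaining[0] if remaining else None

test_image = {"s_shaped_neck": False, "white_body": False, "pink_body": True}
print(feed_forward(test_image), contrastive(test_image))  # spoonbill spoonbill
```

Both subjects reach the same answer; the difference is the evidence used, which is the distinction the proposed framework exploits.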
\noindent \textbf{Paper structure and novelty: }In this paper, we rethink existing inductive-reasoning-based feed-forward networks and provide an alternative reasoning and inference scheme based on abduction. We first describe abductive reasoning and its usage in artificial intelligence and machine learning in Section~\ref{Sec:LitReview}. We provide a structure to the possible abductive reasons and introduce contrastive reasons as a subset of abductive reasons. We further show that current formulations of abductive reasoning do not allow for large-scale learning-based solutions. The novelty of our work is in formulating abductive reasons as contrasts that can be extracted from trained deep networks. This is motivated and described in Section~\ref{sec:Abduction}. We extract and visualize these contrastive reasons in Section~\ref{Sec:Contrast} as contrastive visual explanations. We then qualitatively examine the results and the insights provided by such contrastive explanations. In Section~\ref{Sec:Inference}, we describe the inference mechanism that uses the contrastive reasons to make decisions. We finally enhance the robustness of existing models to noise in Section~\ref{sec:Results} and conclude in Section~\ref{Sec:Discussion}. \begin{comment}
\end{comment} \section{Analysis} \label{Sec:Analysis} \begin{figure*}[!htb] \begin{center} \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Data.png} \endminipage \minipage{0.49\textwidth}% \includegraphics[width=\linewidth]{Figs/CiNN_Iter.png} \endminipage \caption{Analysis of the derived contrast: (a) effect of the amount of training data, and (b) effect of the number of training iterations.}\label{fig:CiNN_Analysis} \end{center} \end{figure*} In this section, we analyze the loss component that is used to derive contrast. In addition, we report the timing requirements for each of the methods described in Section~\ref{Sec:Contrast_Features}. We also provide a methodology to accelerate the derivation of contrast using adversarial perturbations. The results of the accelerated method are compared against the standard method described in Section~\ref{Subsec:Adversarial}. Finally, we analyze the effect of a \emph{well trained} vs. a \emph{badly trained} $f()$ on the overall derivation of contrast. \subsection{Effect of Loss Functions on Contrast} \label{Subsec:Loss} \subsection{Timing Analysis} \label{Subsec:Timing} \textbf{Accelerating adversarial gradients} \subsection{Contrastive Inference as a Plug-In} \label{Subsec:Plug-In} \textbf{Distortion based Plug-In} \textbf{Domain Shift based Plug-In} \subsection{Effect of Training of $f()$} \label{Subsec:Training} Here, we compare the performance of contrast extracted from the base network $f()$ when it is well trained against when it is not. We consider three scenarios: $f()$ trained with and without data augmentation, $f()$ trained with limited data, and $f()$ trained for limited time.
Here, we ascertain that the performance gain obtained by contrast is not due to a poorly trained $f()$. In fact, we show that the contrast obtained from a well trained network $f()$ is more robust than that obtained from a poorly trained $f()$. \textbf{Contrast with and without Data Augmentation} Data augmentation is a technique first used in~\cite{lecun1998gradient} to avoid overfitting, correct class imbalance, and artificially inflate the size of the dataset~\cite{shorten2019survey}. A number of techniques are used for data augmentation, including geometric transformations, random erasing, and adversarial training, among others. In this paper, we use simple geometric transformations to enhance the performance of the base networks $f()$. We test how contrastive inference changes with and without augmentation on $f()$. We then train our MLP $H()$ with the same geometrically transformed images and show that the accuracy increases. This shows that any method that increases the accuracy of existing networks benefits contrastive features as well. \begin{table}[h!] \small \centering \caption{Data Augmentation Based Contrastive Inference} \vspace{1mm} \begin{tabular}{||c c c c||} \hline ResNet $f()$ & $f()$ w/o D/A & $f()$ with D/A & $H()$ with D/A \\ [0.5ex] \hline\hline Feed-Forward (\%) & $91.02$ & $93.01$ & $93.09$ \\ [1ex] \hline Contrast-G (\%) & $90.94$ & $93.14$ & $92.88$ \\ [1ex] \hline \end{tabular} \label{table:Original_Results} \end{table} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Data_Tests.png} \endminipage \caption{Performance of contrastive inference when the base network $f()$ is trained with limited data.}\label{fig:Data_Tests} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \minipage{0.9\textwidth}% \includegraphics[width=\linewidth]{Figs/Epoch_Tests.png} \endminipage \caption{Performance of contrastive inference when the base network $f()$ is trained for a limited number of epochs.}\label{fig:Epoch_Tests} \end{center} \end{figure*} \textbf{Contrast when $f()$ is trained on limited data} \textbf{Contrast when $f()$ is trained in limited time} \section{Contrastive Inference} \label{Sec:Inference} In Section~\ref{Sec:LitReview}, we stated that abductive reasoning is more robust to new conditions and stimuli. In this section, we use the obtained contrastive features to validate this statement. Contrastive reasons, like abductive reasons, generalize under new conditions. We train on pristine images and test on noisy images to verify robustness. Neural networks have shown a vulnerability to distortions between train and test domains~\cite{Temel2017_CURETSR}. We first describe the feed-forward inference of existing neural networks, which uses inductive reasoning and the feed-forward features from Section~\ref{subsec:FF-Features} to make decisions. This is followed by our proposed contrastive inference framework. \subsection{Feed-Forward Inference}\label{subsec:FF-Inference} In Section~\ref{subsec:FF-Features}, we described the activations that form the feed-forward features that result in inductive decisions. Continuing the notation of Section~\ref{subsec:FF-Features}, we consider the inference mechanism for the specific task of classification. For a network $f()$ that is trained to classify between $N$ classes, the last layer is commonly a fully connected layer consisting of $N$ weights or filters.
During inference, the feed-forward features at the $(L-1)^{\text{th}}$ layer, $y_{feat}^{L-1} = f_{L-1}(x)$, are projected independently onto each of the $N$ filters. The filter with the maximum projection is inferred as the class to which $x$ belongs. Mathematically, the feed-forward features $y_{feat}^l$ and the network prediction $P$ are related as, \begin{align} y_{feat}^l &= f_{l}(x), \forall l \in [1, L-1],\\ P &= \operatorname*{arg\,max} (W_L^T y_{feat}^{L-1} + b_L), \label{eq:FF-Infer}\\ &\forall W_L\in \Re^{d_{L-1}\times N}, y_{feat}\in \Re^{d_{L-1}\times 1}, b_L\in \Re^{N\times 1}, \end{align} where $W_L$ and $b_L$ are the parameters of the final linear layer. \vspace{-3mm} \subsection{Contrastive Inference} For the application of classification, the workflow is shown in Fig.~\ref{fig:Contrastive_Inference}. For a pretrained neural network trained on $N$ classes in domain $X$, an image $z \in Z$ such that $Z \neq X$ is provided. Since $f()$ is not adapted to the test domain $Z$, the prediction is incorrect, as illustrated in Fig.~\ref{fig:Contrastive_Inference}. We hypothesize, in turn, that the actual prediction is each of the $N$ possible classes. By feeding in each new hypothesis $Q$, we obtain contrastive features from $f()$. We represent the data $x$ as a measure of the change that it creates in the trained base network against all classes. These features are used to obtain the correct prediction. We first expand on the data representation before obtaining predictions from it. \subsubsection{Contrastive Data Representation} For discriminative networks, the contrast for class $1$ is obtained by backpropagating class $1$. The gradient is proportional to the loss function $J(W,x,P,1)$, where $W$ is the weight parameter and $J()$ is a convex loss function. Specifically, the manifold change is in the direction of $\nabla_{W_l} J(W,x,P,1)$ for weights in layer $l\in [1,L]$. This term is the gradient of the loss for class $1$ with respect to the weights in layer $l$.
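As a rough pure-Python sketch of these two operations, consider a toy final layer with $d_{L-1}=2$ features and $N=3$ classes. The weights, bias, and feature values below are illustrative assumptions, and a mean-squared-error loss with a one-hot target (using $M=1$ for simplicity) stands in for the convex loss $J()$:

```python
# Toy final layer: feed-forward prediction followed by the gradient-based
# contrast for a single hypothesized class. All numbers are illustrative.
W = [[0.9, -0.2, 0.1],   # W_L: d_{L-1} x N, here 2 x 3
     [0.3,  0.8, -0.5]]
b = [0.0, 0.1, -0.1]     # b_L
y_feat = [1.0, 0.5]      # y_feat^{L-1} = f_{L-1}(x)

# Feed-forward inference: project y_feat onto each of the N filters.
logits = [sum(W[k][j] * y_feat[k] for k in range(2)) + b[j] for j in range(3)]
P = max(range(3), key=lambda j: logits[j])

def contrast(i, M=1.0):
    # Gradient of the MSE loss between the logits and a target that places
    # M on the hypothesized class i (a simplified stand-in for J(W, x, P, i)).
    delta = [M if j == i else 0.0 for j in range(3)]
    # d/dW[k][j] of mean_j (logits_j - delta_j)^2
    return [2.0 / 3 * y_feat[k] * (logits[j] - delta[j])
            for k in range(2) for j in range(3)]

r_1 = contrast(1)  # contrastive reason for hypothesized class 1
```

Backpropagating every class hypothesis in this manner, rather than only class $1$, yields the full contrastive feature.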
As shown in Fig.~\ref{fig:Contrastive_Inference}b, we backpropagate over all $N$ classes to obtain contrastive features across all classes, given by $r_i = \nabla_{W_l} J(W,x,P,i)$. The final contrastive feature $r_x$ for an image $x$ is given by concatenating all individual contrasts. Hence, \begin{equation}\label{eq:r_dis} \begin{gathered} r_i = (\nabla_{W_l} J(W,x,P,i)), \forall l \in [1,L], \forall i \in [1,N] \\ r_x = [r_1, r_2 \dots r_N] \end{gathered} \end{equation} The loss function $J$ is taken with respect to the logits after the final layer. This loss is representative of the contrast between the predicted output $P = f(x)$ and the contrast class $i \in [1,N]$. The loss $J()$ need not be the same loss used to train the network. In Section~\ref{Sec:Contrast}, we used cross-entropy to visualize explanations. Access to quantitative results allows for testing other loss functions during inference. In Section~\ref{sec:Results}, we test a number of available loss functions based on classification accuracy. In this work, we use a modified MSE loss function, $J = \text{MSE}(f_L(x), \delta^M_i)$, where $\delta^M_i$ is a modified Kronecker delta function given by, \begin{equation}\label{eq:Kroenecker} \delta^{M}_{i} = \begin{cases} M, & \text{if } i=\text{class},\\ 0, & \text{otherwise} \end{cases} \end{equation} where $M$ is the mean of the output logits $y_P = \text{max }f(x)$ taken over all $x$ in the training set. We use $M$ instead of $1$ because we want the network to be as confident of the contrast as it is of the prediction, and we penalize it accordingly. Note that we now have our contrastive features $r_x$ for a datapoint $x$. \subsubsection{Inference Based on Contrastive Data Representation} Once $r_x$ is obtained for all $N$ classes, the contrastive feature is analogous to $y_{feat}^{L-1}$ from Eq.~\ref{eq:FF-Infer}. Similar to $y_{feat}$, $r_x$ is processed as in Eq.~\ref{eq:FF-Infer}.
However, $y_{feat}$ is of dimension $\Re^{d_{L-1}\times 1}$ while $r_x$ is of dimension $\Re^{(N\times d_{L-1})\times 1}$, since it is a concatenation of $N$ gradients. To account for the larger dimension, we classify $r_x$ by training a simple Multi Layer Perceptron (MLP), which we henceforth refer to as $\mathcal{H}$, on top of the $r_x$ derived from the training data. The structure of the MLP depends on the size $(N \times d_{L-1}) \times 1$ and is provided in the implementation details in Section~\ref{Subsec:In-Distribution}. Once $r_x$ passes through $\mathcal{H}$, the argument of the maximum logit, $\Tilde{y}$, is inferred as the class of $x$. Expressing $\Tilde{y}$ mathematically, we have, \begin{align} r_x = [r_1, r_2, \dots r_N]&, \forall r_i = (\nabla_{W_l} J(W,x,P,i)),\label{eq:CI-features}\\ \Tilde{y} &= \operatorname*{arg\,max} (\mathcal{H}(r_x)).\label{eq:CI-Inrer} \end{align} Notice the similarity between the feed-forward prediction $P$ and the contrastive prediction $\Tilde{y}$. Our goal is to keep the feed-forward and contrastive workflows as similar as possible, changing only the feed-forward features to contrastive features, so as to showcase the effectiveness of contrastive reasoning during inference. Since the proposed contrastive inference follows an abductive reasoning approach, we borrow the mathematical interpretation of abductive reasoning to formulate contrastive inference. The authors in~\cite{douven2015probabilistic} suggest that abductive reasoning is a non-Bayesian process and provide an update rule for training.
Adapting this to the inference stage, we have, \begin{equation}\label{eq:probability} \Tilde{y} = \argmax_i{\mathcal{F} \big[ P, {\mathcal{H}}(Pr(i|P), r_x)\big]}, \end{equation} where $\Tilde{y}$ is the prediction from contrastive inference from Eq.~\ref{eq:CI-Inrer}, $P$ is the prediction of feed-forward inference from Eq.~\ref{eq:FF-Infer}, $Pr(i|P)$ is the probability that the true prediction is the contrast class $i \in [1,N]$ conditioned on the given feed-forward prediction $P$, and $r_x$ is the contrastive feature. Consider Fig~\ref{fig:Contrastive_Inference}. Given the image of a spoonbill $x$, let the feed-forward inference predict $P$ as a flamingo. $r_x$ is extracted from $f()$ and is used to train another classifier, which represents ${\mathcal{H}}$ from Eq.~\ref{eq:probability}. The entire contrastive inference framework requires both the feed-forward prediction and ${\mathcal{H}}$. This framework is represented by the function ${\mathcal{F}}$ from Eq.~\ref{eq:probability}; ${\mathcal{F}}$ is the proposed contrastive inference. While feed-forward inference obeys Bayes rule to obtain $P$, Eq~\ref{eq:probability} is non-Bayesian. It depends not only on the learned network $f()$, but also on the contrastive features $r_x$ derived from class hypotheses - features that reason from effect to cause. However, in the limiting case when a network is well trained, Eq.~\ref{eq:probability} behaves in a Bayesian fashion because of the $Pr(i|P)$ term. In an ideal case, for a well trained network $f()$ with train and test data from the same distribution, $Pr(i|P) = 1$ for $i = P$ and $0$ otherwise. We test this hypothesis in Table~\ref{table:Original_Results}, where networks $f()$ trained and tested on distribution $X$ perform with the same accuracy in both the feed-forward and contrastive settings.
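To make the feature-extraction step concrete, the following pure-Python sketch computes $r_x$ for a hypothetical single-linear-layer network with the modified Kronecker-delta target of Eq.~\ref{eq:Kroenecker}. All names, weights, and dimensions are illustrative, and the gradient is taken only with respect to the final-layer weights rather than all layers $l\in[1,L]$ as in Eq.~\ref{eq:r_dis}.

```python
def forward(W, b, x):
    # logits of a toy single linear layer: y_j = sum_k W[j][k] x[k] + b[j]
    return [sum(W[j][k] * x[k] for k in range(len(x))) + b[j]
            for j in range(len(b))]

def grad_wrt_W(W, b, x, target):
    # gradient of J = sum_j (y_j - target_j)^2 with respect to the entries of W
    y = forward(W, b, x)
    return [2.0 * (y[j] - target[j]) * x[k]
            for j in range(len(b)) for k in range(len(x))]

def contrastive_features(W, b, x, M):
    # r_x = [r_1, ..., r_N]: one gradient per contrast class i, using the
    # modified Kronecker delta target: M at position i, 0 elsewhere
    N = len(b)
    r_x = []
    for i in range(N):
        target = [M if j == i else 0.0 for j in range(N)]
        r_x.extend(grad_wrt_W(W, b, x, target))
    return r_x

# toy example: N = 3 classes, d = 2 features
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
b = [0.0, 0.0, 0.0]
x = [2.0, 1.0]
r = contrastive_features(W, b, x, M=3.0)
print(len(r))  # N * (N * d) = 18, matching the concatenated dimension of r_x
```

In the full method the MLP $\mathcal{H}$ would then be trained on such vectors; only the contrastive representation itself is sketched here.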
\vspace{-3mm} \begin{comment} \subsection{Analysis} In this section we analyze the proposed contrastive inference framework based on existing abductive reasoning interpretation and loss interpretation. \subsubsection{Interpretation based on abductive reasoning} Since the proposed contrastive inference follows an abductive reasoning approach, we borrow the mathematical interpretation of abductive reasoning to formulate contrastive inference. The authors in~\cite{douven2015probabilistic} suggest that abductive reasoning is a non-bayesian process and provide an update rule for training. Adapting this to the inference stage, we have, \begin{equation}\label{eq:probability} \Tilde{y} = \argmax_i{\mathcal{F} \big[ P, {\mathcal{H}}(Pr(i|P), r_x)\big]}, \end{equation} where $\Tilde{y}$ is the prediction from contrastive inference from Eq.~\ref{eq:CI-Inrer}, $y_{pred}$ is the prediction of feed-forward inference from Eq.~\ref{eq:FF-Infer}, $Pr(i|P)$ is the probability that the true prediction is class $i \in [1,N]$ conditioned on the given feed-forward prediction $P$, and $r_x$ is the contrastive feature. Note that $r_i$ is obtaining by conditioning the contrast class $i, i \in [1,N]$ on $P$. Consider Fig~\ref{fig:Contrastive_Inference}. Given the image of a spoonbill $x$, let the feed-forward inference predict $P$ as a flamingo. $r_x$ is extracted from $f()$ which is used to train another classifier that represents ${\mathcal{H}}$ from Eq.~\ref{eq:probability}. The entire contrastive inference framework requires both feed-forward prediction and ${\mathcal{H}}$. This framework is represented by the function ${\mathcal{F}}$ from Eq.~\ref{eq:probability}. ${\mathcal{F}}$ is the proposed contrastive inference. While feed-forward inference obeys Bayes rule to obtain $y_{pred}$, Eq~\ref{eq:probability} is non-Bayesian. It depends not only on the learned network $f()$, but also on contrastive features $r_x$ derived from class hypotheses - features that represent effect to cause. 
However, in the limiting case when a network is well trained, Eq.~\ref{eq:probability} behaves in a Bayesian fashion. This is because of the $Pr(i|P)$ term. In an ideal case, for a well trained network $f()$ with train and test data from the same distribution, $Pr(i|P) = 1$ for $i = P$ and $0$ otherwise. We test this hypothesis in Table~\ref{table:Original_Results} when networks $f()$ trained and tested on distribution $X$, perform with the same accuracy in both feed-forward and contrastive settings. \subsubsection{Interpretation based on loss analysis} \label{subsec:Math_Theory} Consider a neural network $f()$ trained on $N$ instances sampled from a distribution $X$. Hence the training set is given by $\{(x_1,y_1), (x_2,y_2), ...(x_N,y_N)\}$. The goal of the network $f()$ is to learn to predict the correct $y = \{y_1, y_2. ... y_N\}$ given an instance $x$. The network is said to have been generalized when given an instance $x' \in X$ not in the original training set, $f()$ predicts the correct $y$. In this paper, we consider the case when a network trained on $X$, is asked to predict on $X$'s distorted distribution $Z$. Consider a network $f()$ that is trained using the least squares empirical loss function. Following the author's from~\cite{geman1992neural}, this loss can be decomposed in the following way: \begin{align}\label{eq:B-V} E[(f(x;X) - y)^2|x,X] & = (E_X[f(x;X)] \\ & - E[y|x])^2 + E_{X}[(f(x;X) \\ & - E_{X}[f(x;X)])^2] \end{align} The first term in Eq.~\ref{eq:B-V} is the bias and the second term is the variance. The bias indicates how far the network $f()$ is to the true predictor that outputs the ground truth $y$. The second term measures the variance of $f()$ for multiple samples from $X$. The bias error occurs in the selection of the network itself. For instance, choosing ResNet-151 when the choice should have been AlexNet creates bias. In this work, we take existing architectures and imbue contrast. Hence, the bias term remains unchanged. 
The second term, variance, can further be decomposed based on the Fisher Information Matrix. Consider a datum $z$ not part of the training set. It's variance is given by~\cite{cohn1994neural}, \begin{equation}\label{eq:Fisher} var(z) = \bigg[\frac{\partial P}{\partial \theta}\bigg]^T \bigg[\frac{\partial^2 J(W, x)}{\partial^2 \theta}\bigg] \bigg[\frac{\partial P}{\partial \theta}\bigg]. \end{equation} Physically, Eq.~\ref{eq:Fisher} formalizes the variance of a datum as the model's estimate of the expected square distance between the output $P$ and the true output $y$. Consider the case when the datum $x'$ is not from the training distribution $\mathcal{D}$, but from a shifted $\mathcal{D_{\alpha}}$ distribution. For a datum $x' \in \mathcal{D}$, we expect this distance to be small when $y = \hat{y}$, and large when $y \neq \hat{y}$. This quantity is captured by the $\bigg[\frac{\partial^2 J(W, x)}{\partial^2 \theta}\bigg]$ term. Consider the case of maximum entropy for this loss term. For an $x_{\alpha} \in \mathcal{D}$, the variance of $x_{\alpha}$ can be reduced by minimizing Eq.~\ref{eq:Fisher}. This occurs when $x_{\alpha}$ is equidistant to all classes. Equidistant in the manifold indicates equiprobable using softmax probability. And hence, the logits $f_L(x)$ before the argmax are all equal to $1/N$ where $N$ is the number of classes. $J(W,x_{\alpha})$ is constructed as a squared error between $f_L(x)$ and a one-hot vector of dimensionality $N\times 1$ with the argument at $\hat{y}$ being 1 and others $0$. Consider the input training data $x$, except as ordered pairs in the introspective domain. Hence, the dataset is given by $\{(x_1, y_1), (x_1, y_2), ... (x_1, y_N), (x_2, y_1), (x_2, y_2) ... (x_2, y_N), ...$ \noindent $(x_M, y_1), (x_M, y_2) ...(x_M, y_N)$. For a single datum $x_i, \forall i \in [1,M]$, the variance over for a sample with class $i$ is proportional to $\bigg[\frac{\partial^2 J(W, x, i)}{\partial^2 \theta}\bigg], \forall i \in [1.N]$. 
Hence, minimizing this variance entails minimizing the cost function $J(W, x, i)$ over all $i \in [1,N]$, which is the squared error between an $N\times 1$ vector of ones and $f(x)$. For an ideal, well-trained network $f()$, the output logits resemble a one-hot vector with the value $1$ at the true label $y$ and zeros everywhere else. Note that this is the mirror of the case when we considered $x_{\alpha}$. Hence, minimizing the variance of the sample $x$ in the contrastive domain minimizes the variance of a sample $x_{\alpha} \in \mathcal{D_{\alpha}}$. \end{comment} \section{Conclusion} In this paper, we characterized images as the contrast between a trained model and the change in the trained model given a certain hypothesis about the image. We constructed and analyzed three measures of such contrast in our proposed inference framework. The philosophical and mathematical interpretations of this inference framework were presented. Based on these interpretations, we motivated the principle of generalization that benefits from using contrastive inference. \bibliographystyle{ieee_fullname}
\section{Introduction} A hypergraph is a pair $\mathcal{H}=(V,E)$ where $V$ is a vertex set and $E\subseteq 2^V$ is an edge set. For a fixed set of positive integers $R$, we say $\mathcal{H}$ is an $R$-uniform hypergraph, or $R$-graph for short, if the cardinality of each edge belongs to $R$. If $R=\{k\}$, then an $R$-graph is simply a $k$-uniform hypergraph or a $k$-graph. Given an $R$-graph $\mathcal{H} = (V,E)$ and a set $S \in \binom{V}{s}$, let $d(S)$ denote the number of edges containing $S$ and let $\delta_s(\mathcal{H})$ be the minimum $s$-degree of $\mathcal{H}$, i.e., the minimum of $d(S)$ over all $s$-element sets $S \in \binom{V}{s}$. When $s = 2$, $\delta_2(\mathcal{H})$ is also called the minimum \textit{co-degree} of $\mathcal{H}$. Given a hypergraph $\mathcal{H}$, the \textit{$2$-shadow} (or \textit{shadow}) of $\mathcal{H}$, denoted by $\partial(\mathcal{H})$, is the simple $2$-uniform graph $G=(V,E)$ such that $V(G)=V(\mathcal{H})$ and $uv\in E(G)$ if and only if $\{u,v\}\subseteq h$ for some $h\in E(\mathcal{H})$. Note that $\delta_2(\mathcal{H}) \geq 1$ if and only if $\partial(\mathcal{H})$ is a complete graph. In this case, we say $\mathcal{H}$ is {\em covering}. There are several notions of a path or a cycle in hypergraphs. A \textit{Berge path} of length $t$ is a collection of $t$ hyperedges $h_1, h_2, \ldots, h_{t} \in E$ and $t+1$ vertices $v_1, \ldots, v_{t+1}$ such that $\{v_i, v_{i+1}\} \subseteq h_i$ for each $i\in [t]$. Similarly, a $k$-graph $\mathcal{H} = (V,E)$ is called a \textit{Berge cycle} of length $t$ if $E$ consists of $t$ distinct edges $h_1, h_2, \ldots, h_t$ and $V$ contains $t$ distinct vertices $v_1, v_2, \ldots, v_t$ such that $\{v_i, v_{i+1}\} \subseteq h_i$ for every $i\in [t]$, where $v_{t+1} \equiv v_1$. Note that the edges of a Berge cycle or path may contain vertices other than the $v_i$'s. Gerbner and Palmer \cite{GP} extended the definition of Berge paths and Berge cycles to general graphs.
In particular, given a simple graph $G$, a hypergraph $\mathcal{H}$ is called \emph{Berge-$G$} if there is an injection $i\colon V(G)\to V(\mathcal{H})$ and a bijection $f\colon E(G) \to E(\mathcal{H})$ such that for all $e=uv \in E(G)$, we have $\{i(u), i(v)\} \subseteq f(e)$. We say an $R$-graph $\mathcal{H}$ on $n$ vertices contains a \textit{Hamiltonian Berge cycle (path)} if it contains a Berge cycle (path) of length $n$ (or $n-1$). We say $\mathcal{H}$ is {\em Berge-Hamiltonian} if it contains a Hamiltonian Berge cycle. Bermond, Germa, Heydemann, and Sotteau \cite{BGHS} showed a Dirac-type theorem for Berge cycles. We showed in \cite{LW-codegree} that for every finite set $R$ of positive integers, there is an integer $n_0=n_0(R)$ such that every covering $R$-uniform hypergraph $\mathcal{H}$ on $n$ ($n\geq n_0$) vertices contains a Berge cycle of length $s$ for any $3\leq s\leq n$. In particular, every covering $R$-graph on sufficiently large $n$ vertices is Berge-Hamiltonian. Extremal problems related to Berge hypergraphs have been receiving increasing attention lately. For Tur\'an-type results, let $ex_k(n,G)$ denote the maximum number of hyperedges in a $k$-uniform Berge-$G$-free hypergraph on $n$ vertices. Gy\H{o}ri, Katona and Lemons \cite{GKL} showed that for a $k$-graph $\mathcal{H}$ containing no Berge path of length $t$, if $t\geq k+2 \geq 5$, then $e(\mathcal{H}) \leq \frac{n}{t}\binom{t}{k}$; if $3\leq t\leq k$, then $e(\mathcal{H}) \leq \frac{n(t-1)}{k+1}$. Both bounds are sharp. The remaining case of $t=k+1$ was settled by Davoodi, Gy\H{o}ri, Methuku and Tompkins \cite{Davoodi}. For cycles of a given length, Gy\H{o}ri and Lemons \cite{GL1, GL2} showed that $ex_k(n, C_{2t}) = \Theta(n^{1+1/t})$. The same asymptotic upper bound holds for odd cycles of length $2t+1$ as well. The problem of avoiding all Berge cycles of length at least $k$ has been investigated in a series of papers \cite{KL, FKL, FKL2, Ergemlidze, GLSZ}.
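The Berge-$G$ definition above (an injection on vertices plus a bijection onto distinct hyperedges) can be checked by brute force on small instances. The following sketch is for illustration only: the helper names are hypothetical and the search is exponential, trying every vertex injection and backtracking over assignments of distinct hyperedges.

```python
from itertools import permutations

def assign(G_edges, H_edges, emb, used):
    # backtracking: match each graph edge to an unused hyperedge containing
    # the images of both endpoints (the bijection f in the definition)
    if not G_edges:
        return True
    (u, v), rest = G_edges[0], G_edges[1:]
    for idx, h in enumerate(H_edges):
        if idx not in used and emb[u] in h and emb[v] in h:
            if assign(rest, H_edges, emb, used | {idx}):
                return True
    return False

def contains_berge(G_vertices, G_edges, H_vertices, H_edges):
    # brute force over injections i: V(G) -> V(H)
    k = len(G_vertices)
    for image in permutations(H_vertices, k):
        emb = dict(zip(G_vertices, image))
        if assign(G_edges, H_edges, emb, used=frozenset()):
            return True
    return False

# a Berge triangle: three hyperedges pairwise sharing the core vertices 1, 2, 3
H = [{1, 2, 4}, {2, 3, 5}, {1, 3, 6}]
tri = [('a', 'b'), ('b', 'c'), ('a', 'c')]
print(contains_berge(['a', 'b', 'c'], tri, [1, 2, 3, 4, 5, 6], H))  # True
```

Note that the hyperedges must be pairwise distinct: a host with only two hyperedges can never contain a Berge triangle, even if every pair of core vertices is covered.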
For general results on the maximum size of a Berge-$G$-free hypergraph for an arbitrary graph $G$, see for example \cite{GMP, GMT, PTTW}. For Ramsey-type results, define $R_c^k(BG_1, \ldots, BG_c)$ as the smallest integer $n$ such that for any $c$-edge-coloring of a complete $k$-uniform hypergraph on $n$ vertices, there exists a Berge-$G_i$ subhypergraph with color $i$ for some $i$. Salia, Tompkins, Wang and Zamora \cite{STWZ} showed that $R_2^3(BK_s, BK_t) = s+t-3$ for $s,t \geq 4$ and $\max(s,t) \geq 5$. For higher uniformity, they showed that $R_2^4(BK_t, BK_t) = t+1$ for $t \geq 6$ and $R_2^k(BK_t, BK_t)=t$ for $k\geq 5$ and $t$ sufficiently large. Independently and more generally, Gerbner, Methuku, Omidi and Vizer \cite{GMOV} showed that $R_c^k(BK_n) = n$ if $k > 2c$; $R_c^k(BK_n) = n+1$ if $k = 2c$ and obtained various bounds on $R_c^k(BK_n)$ when $k < 2c$. Similar investigations have also been started independently by Axenovich and Gy\'arf\'as \cite{AG}, who focus on the Ramsey number of small fixed graphs where the number of colors may go to infinity. Very recently, we \cite{LW-cRamsey} defined a new type of Ramsey number, namely the \emph{cover Ramsey number}, denoted as $\hat{R}^R(BG_1, BG_2)$, as the smallest integer $n_0$ such that for every covering $R$-uniform hypergraph $\mathcal{H}$ on $n \geq n_0$ vertices and every $2$-edge-coloring (blue and red) of $\mathcal{H}$, there is either a blue Berge-$G_1$ or a red Berge-$G_2$ subhypergraph. We show that for every $k\geq 2$, $R(G_1, G_2) \leq \hat{R}^{[k]}(BG_1, BG_2) \leq c_k \cdot R(G_1, G_2)^3$ for some constant $c_k$. Moreover, $\hat{R}^{\{k\}}(BK_t, BK_t) > (1+o(1))\frac{\sqrt{2}}{e} t2^{t/2}$ for sufficiently large $t$ and $\hat{R}^{[k]}(BG,BG) \leq c(d,k)n$ if $\Delta(G)\leq d$. It occurs to us that the cover Ramsey number for Berge hypergraphs behaves more like the classical Ramsey number than the Ramsey number of Berge hypergraphs defined in \cite{AG, STWZ, GMOV}.
This inspires us to extend the investigation to the analogous \emph{cover Tur\'an number} for Berge hypergraphs. In particular, given a fixed graph $G$ and a finite set of positive integers $R \subseteq [k]$, we define the \emph{$R$-cover Tur\'an number} of $G$, denoted as \emph{$\hat{\rm{ex}}_R(n, G)$}, as the maximum number of edges in the shadow graph of a Berge-$G$-free $R$-graph on $n$ vertices. The \emph{$R$-cover Tur\'an density}, denoted as $\hat{\pi}_R(G)$, is defined as \[\hat{\pi}_R(G) = \displaystyle\limsup_{n\to \infty} \frac{\hat{\rm{ex}}_R(n, G)}{\binom{n}{2}}.\] When $R$ is clear from the context, we ignore $R$ and use \emph{cover Tur\'an number} and \emph{cover Tur\'an density} for short. A graph $G$ is called \textit{$R$-degenerate} if $\hat{\pi}_R(G) = 0$. For ease of reference, when $R=\{k\}$, we simply denote $\hat{\pi}_R(G)$ as $\hat{\pi}_k(G)$ and call $G$ \textit{$k$-degenerate} if $\hat{\pi}_{\{k\}}(G) = 0$. We remark that the cover Tur\'an number of a graph differs by at most a constant factor between uniform and non-uniform host hypergraphs. In particular, we show the following proposition. \begin{proposition}\label{prop:uniform-versus-non} Let $R$ be a finite set of positive integers with $\min(R) = m \geq 2$ and $\max(R) = M$. Then, given a fixed graph $G$, $$\max_{r\in R}\hat{\rm{ex}}_r(n,G) \leq \hat{\rm{ex}}_{R}(n,G) \leq \frac{\binom{M}{2}}{\binom{m}{2}} \hat{\rm{ex}}_m(n,G).$$ \end{proposition} Indeed, the first inequality is clear from the definition. For the second inequality, suppose we have an $R$-graph $\mathcal{H}$ with more than $\binom{M}{2}/\binom{m}{2} \cdot \hat{\rm{ex}}_m(n,G)$ edges in its shadow. For each hyperedge $h$ in $\mathcal{H}$, shrink it to a hyperedge of size $m$ by uniformly and randomly picking $m$ vertices in $h$. Call the resulting hypergraph $\mathcal{H}'$. It is easy to see that for any edge $e \in E(\partial(\mathcal{H}))$, $\Prob{e \in E(\partial(\mathcal{H}'))} \geq \binom{m}{2}/\binom{M}{2}$.
Hence by linearity of expectation, the expected number of edges in $\partial(\mathcal{H}')$ is more than $\hat{\rm{ex}}_m(n,G)$. It follows that there exists a way to shrink $\mathcal{H}$ to an $m$-graph with at least $\hat{\rm{ex}}_m(n,G)+1$ edges in its shadow. Thus, by definition of the cover Tur\'an number, $\mathcal{H}'$ contains a Berge copy of $G$, which corresponds to a Berge-$G$ in $\mathcal{H}$. \begin{remark} Note that Proposition \ref{prop:uniform-versus-non} implies that if a graph $G$ is $k$-degenerate (where $k\geq 2$), then it is $R$-degenerate for any finite set $R$ satisfying $\min(R) \geq k$. In particular, a bipartite graph is $k$-degenerate for all $k\geq 2$. \end{remark} In this paper, we determine the cover Tur\'an density of all graphs when the uniformity of the host graph equals $3$. We first establish a general upper bound for the cover Tur\'an density of graphs. \begin{theorem}\label{thm:general-upper} For any fixed $k\geq 2$, any fixed graph $G$ and any fixed $\epsilon > 0$, there exists $n_0$ such that for any $n\geq n_0$, $$\hat{\rm{ex}}_{k}(n,G) \leq \left ( 1- \frac{1}{\chi(G)-1} + \epsilon \right ) \binom{n}{2}.$$ \end{theorem} We remark that Theorem \ref{thm:general-upper} holds when the host hypergraph is non-uniform as well, i.e., we can replace $k$ with any fixed finite set of positive integers $R$. If $\chi(G)>k$, there is a construction giving the matching lower bound. Partition the vertex set into $t:=\chi(G)-1$ equitable parts $V=V_1\cup V_2 \cup \cdots \cup V_t$. Let $\mathcal{H}$ be the $k$-uniform hypergraph on the vertex set $V$ consisting of all $k$-tuples intersecting each $V_i$ on at most one vertex. The shadow graph is simply the Tur\'an graph with $ (1- \frac{1}{\chi(G)-1}+o(1))\binom{n}{2}$ edges. The shadow graph is $t$-partite and hence contains no copy of $G$, since $\chi(G) = t+1$. It follows that $\mathcal{H}$ is Berge-$G$-free.
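The key probability bound in the proof of Proposition~\ref{prop:uniform-versus-non} can be checked exactly: a fixed pair inside an edge of size $h$ survives the random shrinking with probability $\binom{h-2}{m-2}/\binom{h}{m} = \binom{m}{2}/\binom{h}{2} \geq \binom{m}{2}/\binom{M}{2}$. A minimal sketch of this computation (function name hypothetical):

```python
from math import comb

def survive_prob(h_size, m):
    # probability that a fixed pair {u, v} inside an edge of size h_size
    # survives when an m-subset of that edge is chosen uniformly at random
    return comb(h_size - 2, m - 2) / comb(h_size, m)

m, M = 3, 6
for h_size in range(m, M + 1):            # every edge size r in R obeys m <= r <= M
    p = survive_prob(h_size, m)
    assert p >= comb(m, 2) / comb(M, 2)   # the bound used in the proof

print(survive_prob(6, 3))  # C(3,2)/C(6,2) = 3/15 = 0.2
```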
Therefore, we have the following theorem: \begin{theorem}\label{thm:general} For any $k\geq 2$, and any fixed graph $G$ with $\chi(G)\geq k+1$, we have $$\hat{\pi}_{k}(G) = 1- \frac{1}{\chi(G)-1}.$$ \end{theorem} Given a simple graph $G$ on $n$ vertices $v_1, \ldots, v_n$ and a sequence of $n$ positive integers $s_1, \ldots, s_n$, we denote by $B = G(s_1, \ldots, s_n)$ the $(s_1, \ldots, s_n)$-\emph{blowup} of $G$ obtained by replacing every vertex $v_i\in G$ with an independent set $I_i$ of $s_i$ vertices, and by replacing every edge $(v_i, v_j)$ of $G$ with a complete bipartite graph connecting the independent sets $I_i$ and $I_j$. If $s = s_1 = s_2 = \cdots = s_n$, we simply write $G(s_1, \ldots, s_n)$ as $G(s)$, where $s$ is called the \emph{blowup factor}. We also define a \emph{generalized blowup} of $G$, denoted by $G(s_1, \ldots, s_n; M)$ where $M \subseteq E(G) \subseteq \binom{[n]}{2}$, as the graph obtained by replacing every vertex $v_i\in G$ with an independent set $I_i$ of $s_i$ vertices, replacing every edge $(v_i, v_j)$ of $E(G)\backslash M$ with a complete bipartite graph connecting $I_i$ and $I_j$, and replacing every edge $(v_i, v_j) \in M$ with a maximal matching connecting $I_i$ and $I_j$. When $M = \emptyset$, we simply write $G(s_1, \ldots, s_n; M)$ as the standard blowup $G(s_1, \ldots, s_n)$. We first want to characterize the class of degenerate graphs when the host hypergraph is $3$-uniform. Observe that $\hat{\rm{ex}}_k(n,G)\leq \binom{k}{2} ex_k(n, G)$. This implies that any graph $G$ satisfying $ex_k(n,G) = o(n^2)$ is $k$-degenerate. In particular, by results of \cite{GL1, GL2, GP, PTTW}, any cycle of fixed length at least $4$ and any $K_{2,t}$ are $3$-degenerate. For triangles, Gr\'osz, Methuku and Tompkins \cite{GMT} showed that the uniformity threshold of a triangle is $5$, which implies that $C_3$ is $5$-degenerate. Moreover, there are constructions which show that $C_3$ is neither $3$-degenerate nor $4$-degenerate.
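The generalized blowup $G(s_1,\ldots,s_n;M)$ defined above can be constructed explicitly. The following sketch (function name hypothetical, vertices 0-indexed) builds it as an edge set and checks the edge count of the graph $C_3(s,s,s;\{\{1,2\}\})$ used in the characterization of $3$-degenerate graphs: the matched base edge contributes $s$ edges and each full base edge contributes $s^2$.

```python
from itertools import product

def blowup(base_edges, sizes, matched=()):
    # generalized blowup G(s_1, ..., s_n; M): vertex i becomes the independent
    # set {(i, 0), ..., (i, s_i - 1)}; base edges listed in `matched` become
    # maximal matchings, all other base edges become complete bipartite graphs
    match_set = {frozenset(e) for e in matched}
    edges = set()
    for (i, j) in base_edges:
        if frozenset((i, j)) in match_set:
            for t in range(min(sizes[i], sizes[j])):
                edges.add(frozenset({(i, t), (j, t)}))
        else:
            for a, c in product(range(sizes[i]), range(sizes[j])):
                edges.add(frozenset({(i, a), (j, c)}))
    return edges

# C_3(s, s, s; {{0,1}}) with 0-indexed vertices: one matched edge, two full ones
s = 3
E = blowup([(0, 1), (1, 2), (2, 0)], [s, s, s], matched=[(0, 1)])
print(len(E))  # s + 2 * s^2 = 21
```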
For $K_{s,t}$ where $s,t\geq 3$, it is shown \cite{PTTW, GMT, AS} that $ex_r(n,K_{s,t}) = \Theta(n^{r-\frac{r(r-1)}{2s}})$. Thus in this case, the corresponding results on the Berge Tur\'an number do not imply that $K_{s,t}$ is degenerate with respect to the cover Tur\'an density. In this paper, we classify all degenerate graphs when the host hypergraph is $3$-uniform. \begin{theorem}\label{thm:degenerate} Given a simple graph $G$, $\hat{\pi}_3(G) = 0$ if and only if $G$ satisfies both of the following conditions: \begin{enumerate}[(1)] \item $G$ is triangle-free, and there exists an induced bipartite subgraph $B \subseteq G$ such that $V(G)-V(B)$ is a single vertex. \item There exists a bipartite subgraph $B \subseteq G$ such that $E(G)-E(B)$ is a matching (possibly empty) within one of the parts of $B$. \end{enumerate} \end{theorem} \begin{corollary}\label{cor:degenerate-characterize1} Given a simple graph $G$, $\hat{\pi}_3(G) = 0$ if and only if $G$ is contained in both $C_5(1,s,s,s,s)$ and $C_3(s,s,s;\{\{1,2\}\})$ for some positive integer $s$. \end{corollary} \begin{figure}[htb] \begin{center} \begin{minipage}{.2\textwidth} \resizebox{3cm}{!}{\input{char1A.tikz}} \end{minipage} \hspace{2cm} \begin{minipage}{.2\textwidth} \resizebox{4cm}{!}{\input{char1B.tikz}} \end{minipage} \end{center} \caption{$C_5(1,s,s,s,s)$ and $C_3(s,s,s;\{\{1,2\}\})$} \label{fig:embed2} \end{figure} \begin{corollary}\label{cor:degenerate-characterize2} Given a simple graph $G$, $\hat{\pi}_3(G) = 0$ if and only if $G$ is a subgraph of one of the graphs in Figure \ref{fig:embed3}.
\end{corollary} \begin{figure}[htb] \begin{center} \begin{minipage}{.2\textwidth} \resizebox{4.5cm}{!}{\input{char2A.tikz}} \end{minipage} \hspace{2cm} \begin{minipage}{.2\textwidth} \resizebox{4.5cm}{!}{\input{char2B.tikz}} \end{minipage} \end{center} \caption{Characterization of $3$-degenerate graphs.} \label{fig:embed3} \end{figure} With Theorem \ref{thm:general-upper} and Theorem \ref{thm:degenerate}, we can then determine the cover Tur\'an density of all graphs when $k = 3$. The results are summarized in the following theorem. \begin{theorem}\label{thm:cover-density-3} Given a simple graph $G$, \[ \hat{\pi}_3(G) = \begin{cases} 1-\frac{1}{\chi(G)-1} & \textrm{if $\chi(G)\geq 4$,}\\ 0 & \textrm{if $G$ satisfies the condition in Theorem \ref{thm:degenerate},} \\ \frac{1}{2} & \textrm{otherwise.} \end{cases} \] \end{theorem} For $3$-cover Tur\'an number, we also show the following proposition: \begin{proposition}\label{prop:cover-number} Let $G$ be a connected bipartite graph such that every edge is contained in a $C_4$ and every two vertices in the same part have a common neighbor. Then $$\hat{\rm{ex}}_3(n,G) = \Theta(ex(n,G)).$$ \end{proposition} \begin{proof} The fact that $\hat{\rm{ex}}_3(n,G) = O(ex(n,G))$ is a consequence of Proposition \ref{prop:uniform-versus-non}. For the lower bound, consider an extremal $G$-free graph $H$ with $ex(n,G)$ edges. It follows that there is a bipartite subgraph $H'= A\cup B$ of $H$ which is $G$-free and contains at least $\frac{1}{2}ex(n,G)$ edges. We then construct a $3$-graph $\mathcal{H}$ as follows. For each $a\in A$, replace $a$ with two new vertices $a_1, a_2$. The vertex set $B$ remains the same. For each $e = \{a,b\} \in E(H')$ with $a \in A$, $b\in B$, we have a hyperedge $\{a_1, a_2, b\}$ in $\mathcal{H}$. We claim that $\mathcal{H}$ contains no Berge-$G$. 
Indeed, if there is any Berge-$G$ in $\mathcal{H}$, then one of the following two cases must happen: \begin{description} \item Case 1: An edge in $G$ is mapped to $\{a_1, a_2\}$ for some $a \in A$. However, note that there is no $C_4$ containing $a_1 a_2$ in $\partial(\mathcal{H})$, while every edge of $G$ is contained in a $C_4$. This gives us a contradiction. \item Case 2: Two vertices of $G$ from the same part are mapped to $\{a_1, a_2\}$ for some $a \in A$. In this case, by our assumption, the two vertices mapped to $a_1, a_2$ have a common neighbor $w$ in $G$. However, by our construction there are no two distinct hyperedges embedding $a_1 w$ and $a_2 w$, a contradiction. \end{description} Hence it follows that $\mathcal{H}$ is Berge-$G$-free and has $\Omega(ex(n,G))$ hyperedges. \end{proof} \begin{remark} We give a class of graphs satisfying the conditions in Proposition \ref{prop:cover-number}. Let $B = B_1\cup B_2$ be an arbitrary connected bipartite graph with minimum degree $2$ such that each part has a vertex that is adjacent to all the vertices in the other part. It is easy to check that $B$ satisfies the conditions in Proposition \ref{prop:cover-number}. \end{remark} Using Proposition \ref{prop:cover-number}, we have the following corollary on the asymptotics of the cover Tur\'an number of $K_{s,t}$. \begin{corollary} For positive integers $t\geq s\geq 2$, we have $$\hat{\rm{ex}}_3(n,K_{s,t}) = \Theta(ex(n,K_{s,t})).$$ \end{corollary} The following questions would be interesting for further investigation: \begin{question} Characterize all $k$-degenerate graphs or determine the $\{k\}$-cover Tur\'an density of all graphs for $k\geq 4$. \end{question} \begin{question} Determine the asymptotics of the cover Tur\'an number of the $3$-degenerate graphs in Theorem \ref{thm:degenerate}. \end{question} \section{Proof of Theorem \ref{thm:general-upper}} \begin{proof}[Proof of Theorem \ref{thm:general-upper}] Let $k \geq 2$ and $G$ be a fixed graph with $\chi(G) \geq 2$. Let $\epsilon > 0$.
Suppose $\mathcal{H}$ is an edge-minimal $k$-uniform hypergraph on sufficiently large $n$ vertices such that $$\abs*{E(\partial(\mathcal{H}))} \geq \left ( 1- \frac{1}{\chi(G)-1} + \epsilon \right ) \binom{n}{2}.$$ Our goal is to show that $\mathcal{H}$ contains a Berge copy of $G$. For ease of reference, set $H = \partial(\mathcal{H})$. Let $M = k^2/\epsilon$. Let $H'$ be the subgraph of $H$ obtained by deleting all the edges $uv$ from $H$ with co-degree $d(\{u,v\}) \geq M$ in $\mathcal{H}$. \begin{claim} $\abs*{E(H')} \geq \left ( 1- \frac{1}{\chi(G)-1} + \epsilon/2 \right ) \binom{n}{2}$. \end{claim} \begin{proof} Let $L = E(H)\backslash E(H')$. By double counting, the number of hyperedges containing some edge in $L$ is at least $|L|M/\binom{k}{2}$. Since $\mathcal{H}$ is assumed to be edge-minimal, every hyperedge $h$ contains a vertex pair that is contained only in $h$. Hence $|E(\mathcal{H})| \leq \binom{n}{2}$. It follows that $$|L|M/\binom{k}{2} \leq \abs{E(\mathcal{H})} \leq \binom{n}{2},$$ which implies that $$|L| \leq \frac{k^2}{2M} \binom{n}{2} \leq \frac{\epsilon}{2}\binom{n}{2}.$$ This completes the proof of the claim. \end{proof} Let $G'$ be the blowup of $G$ by a factor of $b = Mv(G)^2k$, i.e., $G' = G(b)$. Suppose $V(G) = \{v_1, \ldots, v_s\}$ and $V_i$ is the blown-up independent set in $G'$ that corresponds to $v_i$. Recall the celebrated Erd\H{o}s-Stone-Simonovits theorem \cite{ES1,ES2}, which states that for a fixed simple graph $F$, $ex(n,F) = \left ( 1 -\frac{1}{\chi(F)-1}+o(1) \right ) \binom{n}{2}$. Since $\chi(G') = \chi(G)$, it follows by the Erd\H{o}s-Stone-Simonovits theorem that for sufficiently large $n$, $H'$ contains $G'$ as a subgraph. Our goal is to give an embedding $f$ of $G$ into $G'$ so that $f(v_i) \in V_i$ for all $1\leq i \leq s$ and every edge of $G$ is embedded in a distinct hyperedge in $\mathcal{H}$. For ease of reference, set $L_j = \{v_1, \ldots, v_j\}$.
For $1\leq t\leq s$ and $v\in V(G)$, set $N_t(v) = N_G(v) \cap L_t$. For $i = 1$, just embed $v_1$ to an arbitrary vertex in $V_1$. Suppose that $v_1, \ldots, v_t$ are already embedded and edges in $G[L_t]$ are already embedded in distinct hyperedges. We now want to embed $v_{t+1}$ into an appropriate vertex in $V_{t+1}$, i.e., we want to find a vertex $u \in V_{t+1}$ such that there are distinct unused hyperedges embedding the edges from $u$ to $f(N_t(v_{t+1}))$. Note that each vertex $u$ in $V_{t+1}$ is adjacent to all vertices in $f(N_t(v_{t+1}))$ in $G'$. Let $S_t(u) = \{u\} \times f(N_t(v_{t+1}))$, i.e., $S_t(u)$ is the set of vertex pairs which contain $u$ and another vertex in $f(N_t(v_{t+1}))$. Recall that $|V_{t+1}| = Mv(G)^2k$. At most $e(G)(k-2)$ vertices in $V_{t+1}$ are contained in hyperedges that are already used. For any of the remaining vertices $u \in V_{t+1}$, if there are no distinct hyperedges embedding all vertex pairs in $S_t(u)$, that means some hyperedge contains at least two vertex pairs $u w_1, u w_2$ in $S_t(u)$. Note that $d_{H'}(\{w_1, w_2\}) \leq M$ by the definition of $H'$. Thus the number of vertices $u\in V_{t+1}$ such that there exists some hyperedge containing at least two vertex pairs in $S_t(u)$ is at most $$ \binom{t}{2}M (k-2) \leq \frac{Mv(G)^2k}{2}.$$ Since $|V_{t+1}| = Mv(G)^2k$, it follows that there exists some $u \in V_{t+1}$ such that $u$ is not contained in any hyperedge already used and there is no hyperedge containing at least two vertex pairs in $S_t(u)$. It follows that there are distinct unused hyperedges containing all vertex pairs in $S_t(u)$. Set $f(v_{t+1})$ to be this $u$. By induction, we can then conclude that $\mathcal{H}$ contains a Berge copy of $G$. This completes the proof of Theorem \ref{thm:general-upper}. \end{proof} \section{Proof of Theorem \ref{thm:degenerate}} \subsection{Regularity Lemma} The proof of Theorem \ref{thm:degenerate} uses the Szemer\'edi Regularity Lemma. 
Given a graph $G$ and two disjoint vertex sets $X,Y \subseteq V(G)$, let $e(X,Y)$ denote the number of edges intersecting both $X$ and $Y$. Define $d(X,Y) = e(X,Y)/|X||Y|$ as the \emph{edge density} between $X$ and $Y$. The pair $(X,Y)$ is called $\epsilon$-regular if for all $X'\subseteq X$, $Y' \subseteq Y$ with $|X'| \geq \epsilon |X|$ and $|Y'| \geq \epsilon |Y|$, we have $\abs{d(X,Y)-d(X',Y')} \leq \epsilon$. We say a vertex partition $V = V_0 \cup V_1 \cup \cdots \cup V_k$ is \emph{equipartite} (with the \emph{exceptional set} $V_0$) if $|V_i| = |V_j|$ for all $i,j \in [k]$. The vertex partition $V = V_0 \cup V_1 \cup \cdots \cup V_k$ is said to be $\epsilon$-regular if all but at most $\epsilon k^2$ pairs $(V_i, V_j)$ with $1\leq i < j \leq k$ are $\epsilon$-regular and $|V_0| \leq \epsilon n$. The extremely powerful Szemer\'edi regularity lemma states the following: \begin{theorem}\cite{Szemeredi}\label{thm:regLemma} For every $\epsilon$ and $m$, there exist $N_0$ and $M$ such that every graph $G$ on $n\geq N_0$ vertices admits an $\epsilon$-regular partition $V_0 \cup V_1 \cup \cdots \cup V_k$ satisfying $m\leq k\leq M$. \end{theorem} An $\epsilon$-regular pair satisfies the following simple lemma. \begin{lemma}\label{lem:large-neighbor} Suppose $(X,Y)$ is an $\epsilon$-regular pair of density $d$. Then for every $Y'\subseteq Y$ of size $|Y'| \geq \epsilon |Y|$, there exist fewer than $\epsilon |X|$ vertices in $X$ that have fewer than $(d-\epsilon)|Y'|$ neighbors in $Y'$. \end{lemma} \begin{proof} Let $Y'\subseteq Y$ with $|Y'| \geq \epsilon |Y|$. Let $X'$ be the set of vertices of $X$ that have fewer than $(d-\epsilon)|Y'|$ neighbors in $Y'$. Note that $d(X',Y') < d-\epsilon$, which, by the $\epsilon$-regularity of $(X,Y)$, can only happen if $|X'| < \epsilon|X|$. \end{proof} Using Lemma \ref{lem:large-neighbor}, we will show the following lemma using the standard embedding technique. \begin{lemma}\label{lem:clique-large-neighbor} Fix a positive integer $s$.
Suppose $(X,Y)$ is an $\epsilon$-regular pair of density $d$ such that $\epsilon \leq 1/4s$, $(d-\epsilon)^s \geq 4\epsilon$ and $|X|,|Y|\geq 4s/(d-\epsilon)^s$. Then there exist disjoint subsets $A,C \subseteq X$ and $B,D \subseteq Y$ such that $|A| = |B| = s$, $|C|\geq \epsilon |X|$, $|D| \geq \epsilon |Y|$, and there are complete bipartite graphs between $A$ and $D$, between $B$ and $C$, as well as between $A$ and $B$. \end{lemma} \begin{proof} Write $A = \{a_1, \ldots, a_s\}$ and $B = \{b_1, \ldots, b_s\}$. We first embed the vertices $a_1, \ldots, a_s$ into $X$ one at a time. After embedding the $k^{\textrm{th}}$ vertex, we will show that the following condition is satisfied: $$\abs*{Y\cap \displaystyle\bigcap_{i=1}^k N(a_i)} \geq (d-\epsilon)^k|Y|.$$ The condition is trivially satisfied when $k = 0$. Suppose that we have already embedded the vertices $a_1, \ldots, a_t$ for some $t \geq 0$. Let $Y_t' = Y\cap \bigcap_{i=1}^t N(a_i)$. By induction, $|Y_t'| \geq (d-\epsilon)^t|Y| > \epsilon|Y|$. Hence by Lemma \ref{lem:large-neighbor}, at least $\left ((1-\epsilon)|X|-s\right )$ vertices in $X$ have at least $(d-\epsilon)|Y_t'|$ neighbors in $Y_t'$. Embed $a_{t+1}$ to one of these $\left ((1-\epsilon)|X|-s\right )$ vertices, and it is easy to see that $$\abs*{Y\cap \displaystyle\bigcap_{i=1}^{t+1} N(a_i)} \geq (d-\epsilon)|Y_t'| \geq (d-\epsilon)^{t+1}|Y|.$$ Now we want to embed the vertices $b_1, \ldots, b_s$ into $Y_s'$ one at a time. The process is entirely the same as long as $$(d-\epsilon)^{s}(|X|-s) \geq \epsilon|X|$$ and $$(d-\epsilon)^s|Y|-\epsilon|Y|-s\geq 1,$$ which are satisfied by our assumptions on $d$, $|X|$ and $|Y|$. \end{proof} \subsection{Constructions for Theorem \ref{thm:degenerate}}\label{sec:lower-bound-1} Before we prove Theorem \ref{thm:degenerate}, we first give two constructions and show that if $G$ does not satisfy the conditions $(1)$ and $(2)$ in Theorem \ref{thm:degenerate}, then at least one of the constructions does not contain a Berge copy of $G$.
In particular, suppose $A, B$ are two disjoint sets of vertices enumerated as $A = \{a_1, \dots, a_{n/2}\}$ and $B = \{b_1, \dots, b_{n/2}\}$. Let $\mathcal{H}_1$ be a $3$-uniform hypergraph such that $V(\mathcal{H}_1) = A \cup B$ and $E(\mathcal{H}_1) = \{\{a_i, b_j, b_{j+1}\}: a_i \in A, \textrm{ $j$ is odd}\}$. Let $\mathcal{H}_2$ be a $3$-uniform hypergraph such that $V(\mathcal{H}_2) = A \cup B$ and $E(\mathcal{H}_2) = \{\{b_1, a_i, b_{j}\}: a_i \in A, b_j \in B \backslash \{b_1\}\}$. Observe that $$\lim_{n\to \infty} \frac{|E(\partial(\mathcal{H}_1))|}{\binom{n}{2}} = \displaystyle\lim_{n\to\infty} \frac{|E(\partial(\mathcal{H}_2))|}{\binom{n}{2}} = \frac{1}{2}.$$ \begin{claim}\label{cl:forward} If $\hat{\pi}_3(G) = 0$, then conditions $(1)$ and $(2)$ of Theorem \ref{thm:degenerate} must hold. \end{claim} \begin{proof} Suppose that $\hat{\pi}_3(G) = 0$. We claim that $(1)$ and $(2)$ must hold. First observe that $\mathcal{H}_1$ contains no Berge triangle. Hence $G$ must be triangle-free, since otherwise $\mathcal{H}_1$ would be Berge-$G$-free. Now note that given a hypergraph $\mathcal{H}$, if $\partial(\mathcal{H})$ is $G$-free, then $\mathcal{H}$ must be Berge-$G$-free. Observe that $\partial(\mathcal{H}_1)$ contains a bipartite subgraph $B \subseteq \partial(\mathcal{H}_1)$ such that $E(\partial(\mathcal{H}_1))-E(B)$ is a matching (possibly empty) within one part of $B$. Hence if there is no such bipartite subgraph in $G$, then $\partial(\mathcal{H}_1)$ is $G$-free, implying that $\mathcal{H}_1$ is Berge-$G$-free. Since $\hat{\pi}_3(G) = 0$, it follows that $G$ must satisfy condition $(2)$. Similarly, observe that $\partial(\mathcal{H}_2)$ satisfies condition $(1)$. Hence if $G$ does not satisfy condition $(1)$, then $\mathcal{H}_2$ is Berge-$G$-free, which contradicts that $\hat{\pi}_3(G) = 0$. Therefore we can conclude that $(1)$ and $(2)$ must hold for $G$.
\end{proof} \subsection{Proof of Theorem \ref{thm:degenerate}} The forward direction is proved in Claim \ref{cl:forward}. It remains to show that if $G$ satisfies the conditions $(1)$ and $(2)$ in Theorem \ref{thm:degenerate}, then $\hat{\pi}_3(G) = 0$. Suppose not, i.e., $\hat{\pi}_3(G) \geq d$ for some $d > 0$. Our goal is to show that for every $3$-graph $\mathcal{H}$ on (sufficiently large) $n$ vertices and at least $d \binom{n}{2}$ edges in $\partial(\mathcal{H})$, $\mathcal{H}$ contains a Berge copy of $G$. Assume first that $\mathcal{H}$ is edge-minimal while maintaining the same shadow. It follows that in every hyperedge $h$ of $\mathcal{H}$, there exists some $e \in \binom{h}{2}$ such that $e$ is contained only in $h$. Moreover, note that since each hyperedge covers at most $3$ edges in $\partial(\mathcal{H})$, we have that $$|E(\mathcal{H})| \geq \frac{1}{3}|E(\partial(\mathcal{H}))| \geq \frac{d}{3}\binom{n}{2}.$$ Call an edge $e \in \partial(\mathcal{H})$ \emph{uniquely embedded} if there exists a unique hyperedge $h \in E(\mathcal{H})$ containing $e$. Now randomly partition $V(\mathcal{H})$ into three sets $X,Y,Z$ of the same size. Let $e(X,Y,Z)$ denote the number of hyperedges of $\mathcal{H}$ intersecting each of the sets $X,Y,Z$ in at most one vertex. It is easy to see that $\textrm{E}[e(X,Y,Z)] = \frac{2}{9} |E(\mathcal{H})|$. Hence there exists a $3$-partite subhypergraph $\mathcal{H}_1$ of $\mathcal{H}$ with vertex partition $X\cup Y\cup Z$ such that $|E(\mathcal{H}_1)| \geq \frac{2}{9} |E(\mathcal{H})|$. Note that each hyperedge $h$ of $\mathcal{H}_1$ contains some $e \in \binom{h}{2}$ that is uniquely embedded. Hence there are at least $\frac{2}{9}|E(\mathcal{H})|$ uniquely embedded edges in $\partial(\mathcal{H}_1)$. Without loss of generality, assume that there are at least $\frac{2}{27}|E(\mathcal{H})|$ uniquely embedded edges between the vertex sets $X$ and $Y$ in $\partial(\mathcal{H}_1)$.
Let $\mathcal{H}'$ be the subhypergraph of $\mathcal{H}_1$ consisting of those hyperedges that contain a uniquely embedded edge between $X$ and $Y$. For ease of reference, let $H' = \partial(\mathcal{H}')$ and let $H'[X\cup Y]$ be the subgraph of $\partial(\mathcal{H}')$ induced by $X\cup Y$. Note that $H'[X\cup Y]$ is bipartite with at least $\frac{2}{27}|E(\mathcal{H})| \geq \frac{2d}{81}\binom{n}{2} = d'\binom{n}{2}$ edges. Let $\epsilon = \epsilon(s,d'/2)$ be small enough so that $\epsilon$ satisfies the assumptions in Lemma \ref{lem:clique-large-neighbor}. Applying the regularity lemma to $H'[X\cup Y]$, we can find an $\epsilon$-regular partition in which there exist two parts $X'\subseteq X, Y'\subseteq Y$ such that $(X', Y')$ is an $\epsilon$-regular pair with edge density at least $d'/2$. Moreover, $|X'|,|Y'| \geq n/M$ for some constant $M > 0$. By Lemma \ref{lem:clique-large-neighbor}, we can find disjoint subsets $A,C \subseteq X'$ and $B,D \subseteq Y'$ such that $|A| = |B| = 2s$, $|C|\geq \epsilon|X'|$, $|D| \geq \epsilon |Y'|$, and there are complete bipartite graphs between $A$ and $D$, between $B$ and $C$, as well as between $A$ and $B$. Now consider the subhypergraph $\hat{\mathcal{H}} =\mathcal{H}'[C\cup D\cup Z]$ of $\mathcal{H}'$ induced by the vertex set $C\cup D\cup Z$, i.e., all hyperedges in $\hat{\mathcal{H}}$ contain vertices only in $C\cup D\cup Z$. Given a vertex set $S \subseteq V(\hat{\mathcal{H}})$, define $\hat{d}_S(v)$ as the number of neighbors of $v$ in $S$ in $\partial(\hat{\mathcal{H}})$. \begin{claim}\label{cl:1p4clus} If there exists some $z \in Z$ such that $\hat{d}_C(z) \geq 2s$ and $\hat{d}_D(z) \geq 2s$, then $\mathcal{H}'$ contains a Berge-$C_5(1,s,s,s,s)$ as a subhypergraph. \end{claim} \begin{proof} Denote the $C_5(1,s,s,s,s)$ that we wish to embed as $\{v_1\} \cup V_2\cup V_3\cup V_4\cup V_5$. Let $v_1= z$. Let $C_z, D_z$ be the sets of neighbors of $z$ in $C$ and $D$ respectively in $\partial(\hat{\mathcal{H}})$.
We wish to embed $V_2$ in $C_z$, $V_3$ in $B$, $V_4$ in $A$ and $V_5$ in $D_z$. Note that $|C_z|,|D_z|\geq 2s$ by our assumption. Pick an arbitrary $s$ of the vertices of $C_z$ to be $V_2$. For each vertex pair $\{z,w\}$ where $w \in V_2$, there exists a hyperedge $h \subseteq C \cup D \cup Z$ containing $\{z,w\}$. Use $h$ to embed $\{z,w\}$. Observe that at most $s$ vertices in $D_z$ or $B$ are contained in hyperedges embedding the edges from $z$ to $V_2$. Since $|D_z|\geq 2s$, we can set $V_5$ to be an arbitrary $s$ vertices of $D_z$ that are not contained in any hyperedge embedding the edges from $z$ to $V_2$. Similarly, since $|A|, |B| \geq 2s$, we can set $V_3$ and $V_4$ to be arbitrary $s$ vertices in $B$ and $A$, respectively, that are not contained in any hyperedge embedding the edges from $z$ to $V_2$ and from $z$ to $V_5$. We then have distinct hyperedges (in $\hat{\mathcal{H}}$ only) embedding the edges from $z$ to $V_2$ and $z$ to $V_5$, $V_2$ to $V_3$ and $V_4$ to $V_5$, respectively. Moreover, recall that by our choice of $X'$ and $Y'$, vertex pairs between $V_4$ and $V_5$ are uniquely embedded (with the third vertex in $Z$), i.e., there exist distinct hyperedges embedding them. Hence, we obtain a Berge-$C_5(1,s,s,s,s)$ in $\mathcal{H}'$. \end{proof} Now observe that $|C|\geq \epsilon |X'|$, $|D| \geq \epsilon |Y'|$. Hence by the $\epsilon$-regularity of $(X',Y')$, the number of edges $e(C,D)$ in $\partial(\hat{\mathcal{H}})$ satisfies $$e(C,D) \geq \left(\frac{d'}{2}-\epsilon\right)|C||D| \geq \left(\frac{d'}{2}-\epsilon\right)\epsilon^2|X'||Y'|\geq \left(\frac{d'}{2}-\epsilon\right)\epsilon^2 \frac{n^2}{M^2} = cn^2,$$ where $c$ is a constant depending on $\epsilon$ and $d'$. \begin{claim}\label{cl:matching3clus} If $\mathcal{H}'$ contains no Berge-$C_5(1,s,s,s,s)$ as a subhypergraph, it must contain a Berge-$F$ where $F$ is any triangle-free subgraph of $C_3(s,s,s;$ $\{\{1,2\}\})$.
\end{claim} \begin{proof} By Claim \ref{cl:1p4clus}, since $\mathcal{H}'$ contains no Berge-$C_5(1,s,s,s,s)$ as a subhypergraph, it follows that for every $z\in Z$, one of $\hat{d}_C(z)$, $\hat{d}_D(z)$ must be smaller than $2s$. Let $Z_1$ be the set of vertices $z \in Z$ with $\hat{d}_C(z) < 2s$, and $Z_2$ be the set of vertices $z \in Z$ with $\hat{d}_D(z) < 2s$. Let $e(Z_1, D)$ and $e(Z_2, C)$ denote the number of edges between $Z_1$ and $D$, and between $Z_2$ and $C$, respectively, in $\partial(\hat{\mathcal{H}})$. Since $e(C,D) \geq cn^2$ and every hyperedge in $\hat{\mathcal{H}}$ contains a vertex in $Z$, it follows that at least one of $e(Z_1, D)$ and $e(Z_2, C)$ must be $\Omega(n^2)$. WLOG, suppose $e(Z_1, D) \geq c'n^2$ for some $c' > 0$. Recall the classical result of K\H{o}v\'ari, S\'os and Tur\'an \cite{KST}, who showed that $ex(n, K_{r,t}) = O(n^{2-1/r})$ for $r\leq t$. By this bound on the Tur\'an number of complete bipartite graphs, we have that for sufficiently large $n$, $\partial(\hat{\mathcal{H}})[D\cup Z_1]$ contains a complete bipartite graph $K_{(2s)^{s+1},(2s)^{s+1}}$. For ease of reference, call this complete bipartite graph $K$. Let $F$ be an arbitrary triangle-free subgraph of $C_3(s,s,s;$ $\{\{1,2\}\})$. We now show that $\hat{\mathcal{H}}$ contains a Berge-$F$ subhypergraph. Let $C_1$ be the collection of vertices $v$ in $C$ such that there is some hyperedge containing $v$ and one of the edges in $K$. Observe that for each $v\in C_1$, $\hat{d}_{Z_1 \cap K}(v) \leq s$, otherwise we obtain a Berge-$C_5(1,s,s,s,s)$ in $\mathcal{H}'$. Moreover, recall that for every $z \in Z_1$, $\hat{d}_C(z) < 2s$. It follows that there must be an edge $x_1 y_1 \in \partial(\hat{\mathcal{H}})$ with $x_1 \in C_1, y_1\in Z_1$ such that at least $(2s)^{s}$ vertices in $D \cap K$ form a hyperedge containing $x_1y_1$. Now consider the subgraph $K'$ of $K$ induced by these $(2s)^{s}$ vertices in $D\cap K$ as well as the non-neighbors of $x_1$ in $Z_1 \cap K$.
Observe that $K'$ is also a complete bipartite graph with at least $(2s)^{s}$ vertices in each part. Hence by the same logic, we can find another edge $x_2 y_2 \in \partial(\hat{\mathcal{H}})$ with $x_2 \in C_1, y_2 \in Z_1$ such that at least $(2s)^{s-1}$ vertices in $D\cap K'$ form hyperedges containing $x_1 y_1$ and $x_2 y_2$ respectively. Continuing this process for $s$ steps, it is not hard to see that we can find a Berge-$F$ subhypergraph in $\hat{\mathcal{H}}$. \end{proof} In summary, if $\mathcal{H}$ is a $3$-graph with at least $d\binom{n}{2}$ edges in $\partial(\mathcal{H})$ for some $d> 0$ and $n$ sufficiently large, then $\mathcal{H}$ contains either a Berge-$C_5(1,s,s,s,s)$ or a Berge-$F$, where $F$ is any triangle-free subgraph of $C_3(s,s,s;\{\{1,2\}\})$. Moreover, observe that if $G$ satisfies the conditions $(1)$ and $(2)$ in Theorem \ref{thm:degenerate}, then $G$ is a subgraph of both $C_5(1,s,s,s,s)$ and $C_3(s,s,s;\{\{1,2\}\})$. Hence it follows that $\hat{\pi}_3(G) = 0$. This completes the proof of the theorem. It is easy to see that Theorem \ref{thm:degenerate} implies Corollary \ref{cor:degenerate-characterize1}. In the remainder of this section, we show that Corollary \ref{cor:degenerate-characterize1} and Corollary \ref{cor:degenerate-characterize2} are indeed equivalent. \begin{proof}[Proof of Corollary \ref{cor:degenerate-characterize2}] It suffices to show that a graph $G$ is contained in both $C_5(1,s,s,s,s)$ and $C_3(s,s,s;$ $\{\{1,2\}\})$ (for some $s$) if and only if $G$ is a subgraph of one of the graphs in Figure \ref{fig:embed3}. We follow the labelling in Figure \ref{fig:embed2}. The backward direction is easy.
For the forward direction, there are two cases: \begin{figure}[htb] \begin{center} \begin{minipage}{.2\textwidth} \resizebox{5cm}{!}{\input{proof_charA.tikz}} \end{minipage} \hspace{3cm} \begin{minipage}{.2\textwidth} \resizebox{4.5cm}{!}{\input{proof_charB.tikz}} \end{minipage} \end{center} \caption{Equivalence of the characterizations in Corollaries \ref{cor:degenerate-characterize1} and \ref{cor:degenerate-characterize2}. } \label{fig:embed4} \end{figure} \begin{description} \item Case 1: Without loss of generality, $v_1$ is in $B$. Let $v_2\in C$ be the vertex matched to $v_1$. Let $B'=B\setminus \{v_1\}$, and $C'=C\setminus \{v_2\}$. Note that $G-v_1$ is a bipartite graph, i.e., $V(G)-v_1 = U_1 \cup U_2$. Without loss of generality, we can assume $B'\subseteq U_1$, $C'\subseteq U_2$ and $v_2 \in U_2$ by properly swapping the two ends of the matching edges between $B$ and $C$ if needed. Since $G- v_1$ is bipartite, the vertex set $A$ is partitioned into two parts $A_1 \subseteq U_1, A_2\subseteq U_2$. Let $A_1', A_2'$ be the neighbors of $v_1$ in $A_1, A_2$ respectively, and $A_1'', A_2''$ be the non-neighbors of $v_1$ in $A_1, A_2$ respectively. Recall that $v_2\in U_2$. It follows that $v_2$ has no neighbors in $A_2' \cup A_2''$. Moreover, since $G$ is triangle-free, $v_2$ also has no neighbors in $A_1'$. It then follows that $G$ can be embedded into the first graph of Figure \ref{fig:embed3} in the same way as labelled in Figure \ref{fig:embed4} (note that there are no edges between $v_1$ and $A_2''$). \item Case 2: $v_1$ is in $A$. Since $G-v_1$ is bipartite, we can write $V(G)-v_1 = U_1 \cup U_2$. WLOG, assume that $B\subseteq U_1$ and $C\subseteq U_2$ by properly swapping the two ends of the matching edges between $B$ and $C$ if needed. Moreover, write $A = A_1 \cup A_2\cup \{v_1\}$ where $A_1 \subseteq U_1$ and $A_2 \subseteq U_2$. Write $B = B' \cup B''$, $C =C' \cup C''$ such that $B'$ and $C'$ are the neighbors of $v_1$ in $B$ and $C$ respectively.
Since $G$ is triangle-free, it follows that $v_1$ has no neighbors in $B''$ and $C''$. It then follows that $G$ can be embedded into the second graph of Figure \ref{fig:embed3} in the same way as labelled in Figure \ref{fig:embed4}. \end{description} \end{proof} \section{Proof of Theorem \ref{thm:cover-density-3}} If $\chi(G) \geq 4$, we are done by Theorem \ref{thm:general}. If $\chi(G) \leq 3$ and $G$ is not degenerate, the two hypergraphs we constructed in Section \ref{sec:lower-bound-1} provide the lower bound $1/2$, which is also an upper bound by Theorem \ref{thm:general-upper}. Theorem \ref{thm:degenerate} resolves the case when $G$ is degenerate.
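As an illustrative numerical sanity check (outside the proofs), the limiting shadow densities of the constructions $\mathcal{H}_1$ and $\mathcal{H}_2$ from Section \ref{sec:lower-bound-1} can be computed directly for finite $n$. The Python sketch below uses ad-hoc vertex labels and assumes $n/2$ is even, so that the pairs $\{b_j, b_{j+1}\}$ with $j$ odd tile $B$.

```python
from itertools import combinations

def shadow(hyperedges):
    """The 2-shadow: all vertex pairs covered by some hyperedge."""
    return {frozenset(p) for h in hyperedges for p in combinations(h, 2)}

def construction_H1(half):
    """E(H1) = {{a_i, b_j, b_{j+1}} : a_i in A, j odd}; `half` = n/2, assumed even."""
    return [frozenset({("a", i), ("b", j), ("b", j + 1)})
            for i in range(1, half + 1) for j in range(1, half, 2)]

def construction_H2(half):
    """E(H2) = {{b_1, a_i, b_j} : a_i in A, b_j in B minus {b_1}}."""
    return [frozenset({("b", 1), ("a", i), ("b", j)})
            for i in range(1, half + 1) for j in range(2, half + 1)]

def shadow_density(hyperedges, n):
    """|E(shadow)| divided by binomial(n, 2)."""
    return len(shadow(hyperedges)) / (n * (n - 1) / 2)

# The densities approach 1/2 as n grows, matching the displayed limit.
for half in (10, 50, 200):
    n = 2 * half
    print(n, shadow_density(construction_H1(half), n),
          shadow_density(construction_H2(half), n))
```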
\section{Introduction} Magnetic resonance imaging (MRI) is one of the most important tools for clinical diagnosis of various diseases, due to its excellent and versatile soft tissue contrast. Clinical MRI is based on expert interpretation of anatomical images of varying contrasts and thus tends to retain a level of subjectivity. Quantitative MRI (qMRI) methods, such as measurements of different relaxation times, allow this subjectivity to be reduced by providing numerical values on which diagnostic assessment or predictions of tissue properties can be based. However, such quantitative MRI measurements necessarily take more time than standard anatomical imaging. For example, in \trho{} mapping \cite{gilani2016,borthakur2006}, typically 5--7 sets of measurements with varying spin lock times are collected to estimate the \trho{} map. Such measurements will thus take 5--7 times longer than acquiring similar anatomical images, often approaching 10 minutes for a stack of quantitative 2D images. \trho{} imaging is based on tilting the magnetization into the $xy$-plane and then locking the magnetization with a spin-lock pulse of certain amplitude and duration. Quantitative mapping, i.e. the measurement of the \trho{} relaxation time constant, is realized by repeating the \trho{} preparation with several different durations of the spin-lock pulse, and collecting the full MR image for each of these preparations. The \trho{} MRI contrast is particularly sensitive to molecular processes occurring at the frequency ($\omega_1$) of the spin-lock pulse, corresponding to the amplitude of the pulse: $\omega_1 = \gamma B_1$, where $\gamma$ is the gyromagnetic ratio, which ties the magnetic field strength (of the RF pulse) $B_1$ to its resonance frequency. Generally, spin-lock pulses operate at and are limited to frequencies that correspond to slow molecular processes that are often both biologically important and altered in disease-related changes.
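For concreteness, the relation $\omega_1 = \gamma B_1$ can be turned into numbers (the values below are generic illustrations, not parameters of this study): for protons, $\gamma/2\pi \approx 42.58$ MHz/T, so microtesla-scale lock pulses correspond to lock frequencies of a few hundred hertz.

```python
# Proton gyromagnetic ratio over 2*pi, in Hz per tesla (a standard constant).
GAMMA_BAR_1H = 42.577e6

def spin_lock_frequency_hz(b1_tesla):
    """Spin-lock (nutation) frequency f1 = gamma * B1 / (2*pi) for protons."""
    return GAMMA_BAR_1H * b1_tesla

# An 11.74 uT spin-lock pulse locks at roughly 500 Hz.
f1 = spin_lock_frequency_hz(11.74e-6)
```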
The \trho{} relaxation time has been reported as a promising biomarker for numerous tissues and diseases, such as different disorders of the brain \cite{kettunen2000,grohn2000}, cardiomyopathy \cite{wangc2015}, liver fibrosis \cite{wangyx2011}, musculoskeletal disorders \cite{borthakur2006,wangl2015,kajabi2021} and many others. For a broader overview of \trho{} relaxation and its applications, the reader is referred to the reviews by Gilani and Sepponen \cite{gilani2016}, Wang and Regatte \cite{wangl2015} and Borthakur et al. \cite{borthakur2006}. Staying still in the scanner for extended periods of time can prove to be challenging, for example, for pediatric patients. The excessively long data acquisition times are also operationally impractical because they lead to a small number of studies that can be performed daily with a single MRI device. Quantitative MRI and \trho{} imaging in particular can thus greatly benefit from using undersampled measurements, which are a natural and efficient way to reduce the scanning time for a single qMRI experiment. When using undersampled data, conventional MR image reconstruction methods, such as regridding \cite{Jac+91}, may lead to insufficient reconstruction quality. The usage of compressed sensing (CS) \cite{CRT06,Don06} methods, where an iterative reconstruction method is used together with a sparsifying transform of the image, has proven highly successful with undersampled MRI data \cite{JFL15}. The usage of CS methods for \trho{} imaging has been previously studied, for example, in \cite{Zhu+15,Pan+16,Zib+18,Zib+20}. In \cite{Zhu+15}, the authors used principal component analysis and dictionary learning in the first in-vivo application of CS to \trho{} reconstruction. In \cite{Pan+16}, the authors used spatial TV together with Autocalibrating Reconstruction for Cartesian sampling (ARC) to accelerate the measurements. In \cite{Zib+18}, the authors compared 12 different sparsifying transforms in 3D-\trho{} mapping.
The regularization model combining spatial TV with 2nd order contrast TV was found to perform the best, with satisfactory results with acceleration factor (AF, i.e., the number of data points in the fully sampled data divided by the number of data points used in the reconstruction) up to 10 when using cartesian 3D sampling together with parallel imaging. In \cite{Zib+20}, both cartesian and radial data were reconstructed using various different regularization methods. The authors reached acceptable accuracy with AF up to 4 for the cartesian data, whereas with the radial data, the accuracy was acceptable with AF up to 10. When using CS for \trho{} mapping, the image series with varying spin-lock durations $T_\mathrm{SL}$ is first reconstructed, followed by a pixel-by-pixel non-linear least squares fit of a monoexponential (or a biexponential) signal model to the reconstructed image intensity data to obtain the desired \trho{} relaxation time map. Since the exponential signal model combining the \trho{} and varying $T_\mathrm{SL}$ is well known, a direct, embedded model may also be used to reconstruct the desired \trho{} map directly from the k-space measurement data without the intermediate step of reconstructing the separate intensity maps corresponding to different $T_\mathrm{SL}$. The direct one-step reconstruction utilizing the embedded model has clear advantages over the sequential two-step reconstruction model. First, it reduces the number of unknowns in the reconstruction problem significantly; for example, for measurements with 7 spin-lock times, the number of unknowns may be reduced from $14N$ (one complex image for each contrast) to just $3N$ (\trho{}, \szero{} and a single phase map), where $N$ is the number of pixels or voxels in a single image.
Secondly, it allows regularization of the parameter map of interest, i.e., the \trho{} parameter map in case of \trho{} mapping, instead of posing regularization on the complex-valued images corresponding to different contrasts in the intermediate step. Thirdly, since the signal model is embedded in the reconstruction, there is no need to decide what type of a contrast regularization model fits the data best. A disadvantage of the embedded model is that it transforms the MRI inversion into a non-linear problem. The resulting non-linear optimization problem can, however, be solved conveniently with, for example, the non-linear primal-dual proximal splitting algorithm \cite{Val14}. In this work, we propose an embedded parameterization model to directly reconstruct the \trho{}, \szero{}, and phase maps from the k-space measurement data, and use the non-linear primal-dual proximal splitting algorithm to solve the problem. The proposed model is tested with 2D simulated radial phantom data and 2D cartesian \emph{ex vivo} mouse kidney data. The proposed embedded model is compared with two CS models: one with spatial TV and TV over the $T_\mathrm{SL}$ contrasts, which, we believe, is generally the most commonly used CS model in MRI, and a second CS model with spatial TV and second order contrast TV, which in \cite{Zib+18} was found to perform the best out of 12 different CS models for \trho{} mapping. The first CS model is labeled ``CS S1+C1'' and the second CS model is labeled ``CS S1C2'' throughout the paper. The models are named slightly differently, since in the first model, the spatial and contrast TV components are separate with two different regularization parameters, and in the second model the spatial TV and the second order contrast TV are under the same root with a single regularization parameter. In the cartesian test case, results from a direct iFFT model are also shown as reference.
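To make the two labels concrete, the penalty structures can be sketched as follows (a schematic with forward differences and isotropic spatial TV; the discretization actually used in the reconstructions may differ):

```python
import numpy as np

def _d(u, axis, order=1):
    """Forward differences along `axis`, zero-padded back to u.shape."""
    d = np.diff(u, n=order, axis=axis)
    pad = [(0, 0)] * u.ndim
    pad[axis] = (0, order)
    return np.pad(d, pad)

def penalty_s1_plus_c1(u, a_s, a_c):
    """'CS S1+C1': separate spatial TV and first-order contrast TV terms,
    each with its own weight; u has shape (nx, ny, C)."""
    tv_s = np.sum(np.sqrt(_d(u, 0) ** 2 + _d(u, 1) ** 2))  # isotropic spatial TV
    tv_c = np.sum(np.abs(_d(u, 2)))                        # TV over T_SL contrasts
    return a_s * tv_s + a_c * tv_c

def penalty_s1c2(u, a):
    """'CS S1C2': spatial differences and second-order contrast differences
    under a single root, with a single weight."""
    return a * np.sum(np.sqrt(_d(u, 0) ** 2 + _d(u, 1) ** 2
                              + _d(u, 2, order=2) ** 2))
```

A stack that decays linearly along the contrast dimension has zero second-order contrast differences, so it is penalized by the contrast term of S1+C1 but not by the contrast part of S1C2.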
Reconstructions from both the CS models and the iFFT model are followed by the mono-exponential pixel-by-pixel \trho{} fit. \section{Reconstruction methods} \subsection[Embedded T1rho model]{Embedded \trho{} model}\label{ssec:embeddedModel} The measurement model for embedded mono-exponential \trho{} mapping is \begin{equation} m = K(\szeroTwo,\trhoTwo,\theta)+e, \end{equation} where the vectors \szero{}, \trho{}, and $\theta\in\mathbb{R}^N$ are the parameter maps to be reconstructed. The complex measurement vector $m\in\mathbb{C}^{CM}$ is composed of $C$ contrasts each consisting of $M$ k-space measurements. Further, we denote the complex-valued measurement noise by $e\in\mathbb{C}^{CM}$, and the non-linear forward model by $K: \mathbb{R}^{3N} \to \mathbb{C}^{CM}$. The non-linear forward model can be further decomposed to \begin{equation} K(\szeroTwo,\trhoTwo,\theta) = AB(D(\szeroTwo,\trhoTwo,\theta)), \end{equation} where $A$ is the block-diagonal matrix containing the Fourier transform operations. In case of cartesian measurements, the blocks of $A$ read $A^c=U^c\mathcal{F}$, where $U^c$ is the undersampling pattern used with the measurements with contrast index $c$, and $\mathcal{F}$ is the Fourier transform. In case of non-cartesian measurements, we approximate the forward model using the non-uniform fast Fourier transform (NUFFT \cite{FS03}), i.e., $A^c=P^c\mathcal{F}L^c$, where $P^c$ is an interpolation and sampling matrix, and $L^c$ is a scaling matrix. 
Furthermore, $D$ maps the \szero{} and \trho{} parameter maps to magnitude images as \begin{align} D(\szeroTwo,\trhoTwo,\theta)=&\left[\begin{array}{c} \mathrm{diag}(\szeroTwo)\cdot \mathrm{diag}(\exp(-T_{\mathrm{SL}_1}/\trhoTwo))\\ \vdots\\ \mathrm{diag}(\szeroTwo)\cdot\mathrm{diag}(\exp(-T_{\mathrm{SL}_C}/\trhoTwo))\\ \mathrm{diag}(\theta) \end{array}\right]\\ :=& \left[\begin{array}{c} \mathrm{diag}(r_1)\\ \vdots\\ \mathrm{diag}(r_C)\\ \mathrm{diag}(\theta) \end{array}\right], \end{align} where $\exp$ denotes elementwise exponentiation, and the division of the scalars $T_{\mathrm{SL}_i}$ by the vector \trho{} is likewise done elementwise. Moreover, $B$ maps the magnitude and phase components of the images to real and complex components, and can be expressed as \begin{equation} B(r_1,...,r_C,\theta)=\left[\begin{array}{c} \mathrm{diag}(r_1)\cdot\mathrm{diag}(\cos\theta)\\ \mathrm{diag}(r_1)\cdot\mathrm{diag}(\sin\theta)\\ \vdots\\ \mathrm{diag}(r_C)\cdot\mathrm{diag}(\cos\theta)\\ \mathrm{diag}(r_C)\cdot\mathrm{diag}(\sin\theta) \end{array}\right]. \end{equation} Here too, the $\sin$ and $\cos$ are to be interpreted as elementwise operations. Note that if the phase maps vary between contrasts, the model can be easily modified to reconstruct separate phase maps for all contrasts instead of reconstructing only a single phase map. In a typical \trho{} measurement, however, the contrast preparation is usually well-separated from the imaging segment and thus the phase can be expected to be the same between the otherwise identical image acquisition segments. In the embedded reconstruction, we use total variation regularization for the \szero{} and \trho{} maps, and L2-norm regularization for the spatial gradient of the phase map. We also limit the \szero{} and \trho{} parameter maps above a small positive value. 
With these, the minimization problem reads \begin{multline} \min_{\szeroTwo,\trhoTwo,\theta}||K(\szeroTwo,\trhoTwo,\theta)-m||_2^2 +\alpha_1 \mathrm{TV_S}(\szeroTwo) +\alpha_2 \mathrm{TV_S}(\trhoTwo) \\ +\alpha_3 ||\nabla_\mathrm{S} \theta||_2^2 +\delta_\mathrm{a_1}(\szeroTwo)+\delta_\mathrm{a_2}(\trhoTwo) \label{eq:emb_t1rho_model} \end{multline} where $\alpha_1$, $\alpha_2$, and $\alpha_3$ are the regularization parameters for the \szero{}, \trho{}, and phase maps, respectively, and $a_1$ and $a_2$ are the small positive lower bounds imposed on the \szero{} and \trho{} maps, respectively. \subsubsection[Solving the embedded T1rho reconstruction problem]{Solving the embedded \trho{} reconstruction problem} The non-linear, non-smooth optimization problem \eqref{eq:emb_t1rho_model} is solved using the non-linear primal-dual proximal splitting algorithm proposed in \cite{Val14}, which is described in Algorithm~\ref{alg:nl-pdps} in its most general form. The algorithm applied to the embedded \trho{} reconstruction is described in more detail in the Appendix. \begin{algorithm} \caption{Non-linear primal-dual proximal splitting \cite[Algorithm 2.1]{Val14}} \label{alg:nl-pdps} \begin{algorithmic} \State Choose $\omega\geq 0$ and $\tau,\sigma$ \State such that $\tau\sigma (\sup_{k=1,...,i}\@ifstar{\oldnorm}{\oldnorm*}{\nabla H(x^k)}^2)<1$. \While{Not reached stopping criterion} \State $x^{i+1} := (I+\tau\partial G)^{-1}(x^i-\tau[\nabla H(x^i)]^*y^i)$ \State $\bar{x}^{i+1} := x^{i+1} + \omega(x^{i+1}-x^i)$ \State $y^{i+1} := (I+\sigma\partial F^*)^{-1}(y^i+\sigma H(\bar{x}^{i+1}))$ \EndWhile \end{algorithmic} \end{algorithm} In our implementation, $x=(\szeroTwo^\mathrm{T}, \trhoTwo^\mathrm{T}, \theta^\mathrm{T})^\mathrm{T}$, and we initialize the \szero{} and phase parts of $x^0$ using the iFFT or the adjoint of the NUFFT of the $T_\mathrm{SL}=0$ measurements. The \trho{} map is initialized to a constant value of 20, and the dual variable $y$ is initialized to 0.
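For the cartesian case, the forward evaluation $K(\szeroTwo,\trhoTwo,\theta)$ used inside the iteration can be sketched as follows (a simplified single-coil Python sketch with one boolean sampling mask per contrast; the array shapes and the orthonormal FFT scaling are illustrative choices, not necessarily those of our implementation):

```python
import numpy as np

def forward_K(s0, t1rho, theta, tsl_list, masks):
    """Embedded T1rho forward model: parameter maps -> undersampled k-space.

    s0, t1rho, theta : real 2-D parameter maps (t1rho strictly positive)
    tsl_list         : spin-lock durations T_SL, one per contrast
    masks            : boolean k-space sampling mask U^c per contrast
    Returns one masked k-space array per contrast.
    """
    phase = np.exp(1j * theta)                   # B: magnitude/phase -> complex image
    out = []
    for tsl, mask in zip(tsl_list, masks):
        img = s0 * np.exp(-tsl / t1rho) * phase  # D: mono-exponential T1rho decay
        out.append(mask * np.fft.fft2(img, norm="ortho"))  # A^c = U^c F
    return out
```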
The non-linear mapping $H:\mathbb{R}^{3N}\to \mathbb{C}^{CM+6N}$ contains the non-linear forward model $K$ and the discrete difference matrices. In addition, we use varying primal step sizes for the different blocks of the embedded reconstruction, i.e. different $\tau_i$ parameters for the \szero{}, \trho{}, and phase updates \cite{MJV20}. This essentially replaces the scalar step length parameter $\tau$ in Algorithm~\ref{alg:nl-pdps} with the diagonal matrix \begin{equation} \mathrm{T} = \left[\begin{array}{ccc} \tau_1 I_N & 0 & 0\\ 0 & \tau_2 I_N & 0\\ 0 & 0 & \tau_3 I_N\\ \end{array}\right]. \end{equation} The step parameters $\tau_1$, $\tau_2$, and $\tau_3$ are derived from the norm of the corresponding block of the matrix $\nabla K$. Here, we only use the non-linear part $K$ of $H$ to estimate the step lengths, as the linear part of $H$ has only a minor impact on the norm of $\nabla H$. We set the parameter $\sigma$ to $\sigma=1/\max(\tau_i)$ and use $\omega=1$ for the relaxation parameter. Since the block-diagonal matrix $A$ is linear and can be normalized to 1, we have $\@ifstar{\oldnorm}{\oldnorm*}{\nabla K} = \@ifstar{\oldnorm}{\oldnorm*}{J_BJ_D}$. 
Furthermore, the product of the Jacobians reads \begin{multline} \label{eq:J_BJ_D} J_BJ_D =\\ \left[ \begin{smallmatrix} \mathrm{diag}(\cos\theta) E_1 & \mathrm{diag}((T_\mathrm{SL_1}/(\trhoTwo\odot\trhoTwo))\odot\cos\theta) r_1 & -\mathrm{diag}(\sin\theta)r_1 \\ \mathrm{diag}(\sin\theta) E_1 & \mathrm{diag}((T_\mathrm{SL_1}/(\trhoTwo\odot\trhoTwo))\odot\sin\theta) r_1 & \mathrm{diag}(\cos\theta)r_1 \\ \mathrm{diag}(\cos\theta) E_2 & \mathrm{diag}((T_\mathrm{SL_2}/(\trhoTwo\odot\trhoTwo))\odot\cos\theta) r_2 & -\mathrm{diag}(\sin\theta)r_2 \\ \mathrm{diag}(\sin\theta) E_2 & \mathrm{diag}((T_\mathrm{SL_2}/(\trhoTwo\odot\trhoTwo))\odot\sin\theta) r_2 & \mathrm{diag}(\cos\theta)r_2 \\ \vdots & \vdots & \vdots \\ \mathrm{diag}(\cos\theta) E_C & \mathrm{diag}((T_\mathrm{SL_C}/(\trhoTwo\odot\trhoTwo))\odot\cos\theta) r_C & -\mathrm{diag}(\sin\theta)r_C \\ \mathrm{diag}(\sin\theta) E_C & \mathrm{diag}((T_\mathrm{SL_C}/(\trhoTwo\odot\trhoTwo))\odot\sin\theta) r_C & \mathrm{diag}(\cos\theta)r_C \\ \end{smallmatrix}\right], \\ \ \end{multline} where $E_i=\mathrm{diag}(\exp(-T_\mathrm{SL_i}/\trhoTwo))$, $r_i=\mathrm{diag}(\szeroTwo)\cdot E_i$, and $\odot$ is the Hadamard product, i.e., elementwise multiplication. Now, since the matrix $J_BJ_D$ consists of only diagonal blocks, and the index of the maximum value is the same for all $E_i$, it is straightforward to estimate the $\tau_i$ from the norms of the maximum values of the column-blocks of \eqref{eq:J_BJ_D}, yielding \begin{align} \tau_1 =& \sqrt{{\sum_{i=1}^C \@ifstar{\oldnorm}{\oldnorm*}{\exp(-T_\mathrm{SL_i}/\trhoTwo)}_\infty^2}} \label{eq:tau_1}\\ \tau_2 =& \sqrt{\sum_{i=1}^C \@ifstar{\oldnorm}{\oldnorm*}{r_i(T_\mathrm{SL_i}/(\trhoTwo\odot\trhoTwo))}_\infty^2}\label{eq:tau_2}\\ \tau_3 =& \sqrt{\sum_{i=1}^C \@ifstar{\oldnorm}{\oldnorm*}{\szeroTwo\odot\exp(-T_\mathrm{SL_i}/\trhoTwo)}_\infty^2} \label{eq:tau_3}.
\end{align} In addition, we recalculate the norms at every iteration and update $\tau_i$ and $\sigma$ whenever the new step is smaller than the previously used one. In our experience, however, these step lengths may prove too small, and in some cases larger step lengths, especially for the \trho{} update step, may be used to obtain faster convergence. In this work, we used a multiplier of 50 for the \trho{} update step $\tau_2$ in the radial simulation. Note that the step length criterion of Algorithm~\ref{alg:nl-pdps} still holds with the multiplier, since $\tau_2\cdot\sigma$ remains small due to the selection of $\sigma$. \subsection{Compressed sensing reference methods} We compare the embedded model to two CS models, which comprise a complex-valued reconstruction of the images with different spin-lock times, followed by a pixel-by-pixel non-linear least squares fit of the monoexponential signal model to obtain the \trho{} and \szero{} parameter maps. The first CS reconstruction model uses spatial total variation together with first-order total variation over the varying $T_\mathrm{SL}$ contrasts (labeled CS S1+C1), and the second uses spatial total variation together with second-order total variation over the varying $T_\mathrm{SL}$ contrasts (labeled CS S1C2). The measurement model for a single contrast image is \begin{equation} \label{eq:cs_meas_model} m^c =A^c u^c+e^c, \end{equation} where the superscript $c$ denotes the contrast index, $m^c\in\mathbb{C}^M$ is the k-space data vector for contrast index $c$, $u^c\in\mathbb{C}^N$ is the image vector, $e^c\in\mathbb{C}^M$ is the complex-valued noise vector, and $A^c$ is the forward model, which depends on the measurement sequence and undersampling pattern and is described in more detail in Section~\ref{ssec:embeddedModel}.
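Evaluating the step-length estimates of Eqs.~\eqref{eq:tau_1}--\eqref{eq:tau_3} only requires elementwise maps and $\infty$-norms over the current parameter maps. The sketch below is one possible NumPy implementation (the function name and the $(C,N)$ array layout are our own assumptions), returning $\sigma=1/\max_i\tau_i$ as described above:

```python
import numpy as np

def step_lengths(S0, T1rho, TSL):
    """Norm-based step-length estimates from the S0 and T1rho maps (length N)
    and the spin-lock times TSL (length C), following Eqs. (tau_1)-(tau_3)."""
    E = np.exp(-TSL[:, None] / T1rho[None, :])   # C x N decay factors exp(-T_SL/T1rho)
    r = S0[None, :] * E                          # elementwise r_i = diag(S0) E_i
    tau1 = np.sqrt(np.sum(np.max(np.abs(E), axis=1) ** 2))
    tau2 = np.sqrt(np.sum(np.max(np.abs(r * TSL[:, None] / T1rho[None, :] ** 2), axis=1) ** 2))
    tau3 = np.sqrt(np.sum(np.max(np.abs(r), axis=1) ** 2))
    sigma = 1.0 / max(tau1, tau2, tau3)
    return tau1, tau2, tau3, sigma
```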
With the measurement model of \eqref{eq:cs_meas_model}, spatial total variation, and total variation over the contrasts, the CS minimization problem reads \begin{equation} u^*=\argmin_{u} \@ifstar{\oldnorm}{\oldnorm*}{Au-m}_2^2+ \alpha \mathrm{TV_S}(u) + \beta \mathrm{TV_C}(u), \label{eq:cs_model} \end{equation} where $A$ is a block-diagonal matrix containing the forward transforms $A^c$ corresponding to each image, $u\in\mathbb{C}^{NC}$ is all the images vectorized, such that $C$ is the number of contrasts, and $m\in\mathbb{C}^{MC}$ is all the k-space measurements vectorized. Further, $\mathrm{TV_S}$ denotes spatial total variation, $\mathrm{TV_C}$ denotes total variation over the contrasts, and $\alpha$ and $\beta$ are the regularization parameters of the spatial and contrast TV, respectively. The second CS minimization problem, which uses a combined spatial TV and second-order contrast TV with a single regularization parameter, reads \begin{equation} u^*=\argmin_{u} \@ifstar{\oldnorm}{\oldnorm*}{Au-m}_2^2+ \alpha \mathrm{TV_{SC}}(u), \label{eq:cs_s1c2_model} \end{equation} where \begin{equation*} \mathrm{TV_{SC}}(u)=\sum_k{\sqrt{(\nabla_\mathrm{x} u)_k^2+(\nabla_\mathrm{y} u)_k^2+(\nabla_\mathrm{c}^2 u)_k^2 }}. \end{equation*} Here, $\nabla_\mathrm{x}$ and $\nabla_\mathrm{y}$ are the horizontal and vertical spatial discrete forward difference operators, respectively, $\nabla_\mathrm{c}^2$ is the second-order discrete difference operator in the contrast direction, and $k$ is an index that runs over all the pixels in the set of images. Both of the minimization problems (\ref{eq:cs_model}, \ref{eq:cs_s1c2_model}) are solved using the popular primal-dual proximal splitting algorithm of Chambolle and Pock \cite{CP11}.
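The combined regularizer $\mathrm{TV_{SC}}$ is straightforward to evaluate once the three difference operators are fixed. The sketch below assumes a real-valued image stack of shape $(C, n_y, n_x)$, forward spatial differences that are zero at the far image boundary, and a second-order contrast difference taken on interior contrasts only; these boundary conventions are our assumptions, as the text does not specify them:

```python
import numpy as np

def tv_sc(u):
    """TV_SC of a real image stack u with shape (C, ny, nx):
    sum over pixels of sqrt(dx^2 + dy^2 + dcc^2), with forward spatial
    differences (zero at the far boundary) and a second-order contrast
    difference on interior contrasts (an assumed boundary convention)."""
    dx = np.zeros_like(u)
    dx[:, :, :-1] = u[:, :, 1:] - u[:, :, :-1]
    dy = np.zeros_like(u)
    dy[:, :-1, :] = u[:, 1:, :] - u[:, :-1, :]
    dcc = np.zeros_like(u)
    dcc[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2 + dcc ** 2)))
```

In the CS models the images are complex-valued; here real arrays are used to keep the sketch short.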
Finally, in the CS models (and the iFFT model), we fit the mono-exponential \trho{} signal equation \begin{equation} \label{eq:t1rhomodel} [\mathrm{T}_{\mathrm{1\rho},k}^*,\mathrm{S}_{0,k}^*]=\argmin_{\mathrm{T}_\mathrm{1\rho},\mathrm{S}_0} \@ifstar{\oldnorm}{\oldnorm*}{|u_k|-\mathrm{S}_0\exp(-T_\mathrm{SL}/\mathrm{T}_{1\rho})}^2 \end{equation} pixel by pixel to the reconstructed intensity images obtained by solving either \eqref{eq:cs_model} or \eqref{eq:cs_s1c2_model}. Here $|u_k|=(|u_k^1|,...,|u_k^C|)$ is the vector of reconstructed intensity values at pixel location $k$ for $T_\mathrm{SL}$ contrasts 1 to $C$, and similarly $T_\mathrm{SL}$ is the vector of $T_\mathrm{SL}$ values of contrasts 1 to $C$. Note that the final \szero{} estimate is obtained from the mono-exponential model fit instead of taking the intensity values from the reconstructions with $T_\mathrm{SL}=0$. \section{Materials and Methods} \subsection{Simulated golden angle radial data} The simulation of the radial measurement data was based on the Shepp-Logan phantom of dimensions $128\times 128$, which was zero-filled to dimensions $192\times 192$. The \trho{} values of the target were set between 20 and 120 ms. The intensity with $T_\mathrm{SL}=0$ was set to a maximum of 1, and the phase of the target was set to $2\pi x/192$, where $x$ is the horizontal coordinate of the pixel. The images of the simulated \trho{}, \szero{}, and phase maps are shown in Figure~\ref{fig:simu_simuParts}. To generate the varying $T_\mathrm{SL}$ measurements, spin-lock times of 0, 4, 8, 16, 32, 64, and 128 ms were used. For each $T_\mathrm{SL}$, 302 (i.e., $\sim192\cdot\pi/2$) golden angle \cite{Win+07} spokes were generated. This corresponds to full sampling for equispaced radial spokes with image dimensions $192\times 192$ in the sense that the distance between spokes at their outermost points satisfies the Nyquist criterion \cite{BKZ04}.
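The pixel-by-pixel fit of \eqref{eq:t1rhomodel} is a non-linear least squares problem; a common lightweight surrogate, shown below, is a log-linear fit of $\log|u| = \log \mathrm{S}_0 - T_\mathrm{SL}/\mathrm{T}_{1\rho}$, which recovers the parameters exactly for noiseless positive data and is often used to initialize the non-linear fit. This is a sketch of the idea only, not the fitting routine used in the paper:

```python
import numpy as np

def loglinear_t1rho_fit(TSL, intensities):
    """Fit |u| = S0 * exp(-TSL / T1rho) via linear least squares on
    log|u| = log(S0) - TSL / T1rho. Sketch/initializer only; the reference
    methods in the text use a non-linear least squares fit instead."""
    slope, intercept = np.polyfit(TSL, np.log(intensities), 1)
    return -1.0 / slope, np.exp(intercept)   # (T1rho, S0)
```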
Finally, complex Gaussian noise at 5 \% of the mean of the absolute values of the full noiseless simulation was added to the simulated measurements. \begin{figure} \centering \includegraphics[width=\columnwidth]{im/simu_simuParts.png} \caption{The simulated \trho{}, \szero{}, and phase parameter maps for the radial golden angle \trho{} phantom simulation.} \label{fig:simu_simuParts} \end{figure}{} \subsection{Cartesian data from ex vivo mouse kidney} Experimental \emph{ex vivo} data from a mouse kidney was obtained from a separate study. The data was collected in compliance with ethical permits (ESAVI-2020-006283) at 9.4 T using a 19 mm quadrature RF volume transceiver (RAPID Biomedical GmbH, Rimpar, Germany) and a VnmrJ3.1 Varian/Agilent DirectDrive console. \trho{} relaxation data was collected using a refocused \trho{} preparation scheme \cite{witschey2007} with a spin-lock frequency of 500 Hz and $T_\mathrm{SL}$ = 0, 8, 16, 32, 64, and 128 ms. The \trho{}-prepared data, i.e., \trho{}-weighted images, were collected using a fast spin echo sequence with a repetition time of 5 s, an effective echo time of 5.5 ms, an echo train length of 8, a slice thickness of 1 mm, a field-of-view of $17\times 17$ mm, and an acquisition matrix of $192\times 192$. In the end, only spin-lock times up to 64 ms were used in the reconstruction, as the signal intensity of the longest spin-lock time was close to the noise level and had minimal or no effect on the reconstruction. \subsection{Reconstruction specifics} The radial data from the 2D phantom was reconstructed with the embedded model and the two CS models with acceleration factors 1, 5, 10, 20, 30, 50, and 101 (rounded to the nearest integer). In \trho{} imaging, the images measured with varying spin-lock times are expected to have high redundancy in the sense that the images are expected to be structurally similar with decreasing intensity as $T_\mathrm{SL}$ increases, making complementary k-space sampling warranted.
In complementary k-space sampling, the subsampling pattern of each measured contrast is different from the others, meaning that each sampling adds to the spatial information gained at the other contrasts. Golden angle radial sampling is especially well suited for this, as the measurements are inherently complementary (i.e., each new spoke has a different path in the k-space compared to the previous ones) and each measured spoke traverses the central (low-frequency) part of the k-space, which contains significant information on the intensity level of the images. Thus, we sampled the golden angle data such that, for example, with an acceleration factor of 10, the first contrast used the first 30 spokes out of the 302 total, the second contrast used spokes 31 through 60, and so on, to achieve complementary k-space sampling. In the embedded model, the phase regularization parameter was set to a constant value of 0.01, and the other regularization parameters were varied over a wide range. In the CS models, the regularization parameters were also varied over a wide range to find the best parameters. The reconstructions shown use the regularization parameters that yielded the smallest \trho{} RMSE with respect to the ground truth phantom. The cartesian \emph{ex vivo} mouse kidney data was reconstructed with the embedded, the iFFT, and the two CS methods with acceleration factors 2, 3, 4, and 5 (rounded to the nearest integer). Undersampling was done by taking a number of full k-space rows corresponding to the desired acceleration factor, since cartesian data collection in MRI scanners is done line by line. For the undersampled reconstructions, 1/4 of the total sampled k-space rows were taken from around the center to include the zero frequency and enough low-frequency data in all contrasts. Half of the remaining 3/4 were taken from the top part and the other half from the bottom part.
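The complementary assignment of golden angle spokes to contrasts described above can be written compactly; the sketch below (function name and defaults are ours) reproduces the example numbers: with an acceleration factor of 10, each of the 7 contrasts gets 30 consecutive spokes, and with a factor of 101 each gets 3 spokes, 21 in total:

```python
import numpy as np

def complementary_spokes(af, n_full=302, n_contrasts=7):
    """Assign disjoint runs of consecutive golden-angle spokes to each T_SL
    contrast: with acceleration factor af, each contrast gets
    round(n_full / af) spokes (defaults match the radial simulation)."""
    n_per = int(round(n_full / af))
    return [np.arange(c * n_per, (c + 1) * n_per) for c in range(n_contrasts)]
```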
To achieve complementary sampling, the rows from the top and bottom parts were selected such that all rows were first selected once in random order, before continuing to sample from the full set of rows again. In the \emph{ex vivo} test case, too, the phase regularization parameter of the embedded model was set to a constant value, here 0.0001, and the other parameters of the embedded and both CS models were varied over a wide range to find the optimal \trho{} estimate. The embedded model reconstructions were compared to the embedded reconstruction with full data, and likewise the CS and iFFT model reconstructions were compared to the corresponding reconstructions with full data, as the true \trho{} map is not available. Thus, the RMSEs reflect each model's relative tolerance for undersampling compared to the situation where fully sampled data are available for the particular reconstruction model. \section{Results} \subsection{Simulated golden angle radial data} With the radial simulated phantom data, all the methods produce reconstructions with similar RMSE when using full data (acceleration factor 1). With undersampled data, the embedded model outperforms both CS models as measured by the RMSE of both the \trho{} (Figure~\ref{fig:simu_t1rho}) and \szero{} (Figure~\ref{fig:simu_s0}) maps with all acceleration factors, and the improvement increases with larger acceleration factors. \begin{figure} \centering \includegraphics[width=\columnwidth]{im/simu_t1rho.png} \caption{The \trho{} maps of the radial simulation reconstructed with the embedded model and the two CS models, and the RMSEs of the reconstructions as compared to the true values used in the simulation. The top row contains the CS S1+C1 model, and the middle row the CS S1C2 model \trho{} parameter maps obtained from the monoexponential fit of \eqref{eq:t1rhomodel}, and the bottom row contains the embedded model reconstructions.
Columns 2-5 show the \trho{} parameter maps at acceleration factors 5, 10, 30, and 101. Images are cropped to content.} \label{fig:simu_t1rho} \end{figure}{} \begin{figure} \centering \includegraphics[width=\columnwidth]{im/simu_s0.png} \caption{The \szero{} maps of the radial simulation reconstructed with the embedded model and the two CS models, and the RMSEs of the reconstructions as compared to the true values used in the simulation. The \szero{} maps shown here are from the same reconstructions as the \trho{} maps shown in Figure~\ref{fig:simu_t1rho}. The top row contains the CS S1+C1 model, and the middle row the CS S1C2 model \szero{} parameter maps obtained from the monoexponential fit of \eqref{eq:t1rhomodel}, and the bottom row contains the embedded model reconstructions. Columns 2-5 show the \szero{} parameter maps at acceleration factors 5, 10, 30, and 101. Images are cropped to content.} \label{fig:simu_s0} \end{figure}{} \begin{figure} \centering \includegraphics[width=\columnwidth]{im/simu_rmse.png} \caption{The RMSEs of the \trho{} (left) and \szero{} (right) maps of the radial simulation with the embedded model and the two CS models at acceleration factors 1, 5, 10, 20, 30, 50, and 101.} \label{fig:simu_rmse} \end{figure}{} The \trho{} maps computed using the CS models are also visibly noisier, as the CS models do not allow direct regularization of the \trho{} map (Figure~\ref{fig:simu_t1rho}). With an acceleration factor of 101, the reconstructions of both CS models start to break down, whereas the embedded model still reconstructs the target reasonably well, with RMSE values below those of the CS models at acceleration factors of 20--30 (Figures~\ref{fig:simu_t1rho},~\ref{fig:simu_s0} and~\ref{fig:simu_rmse}).
\subsection{Cartesian data from ex vivo mouse kidney} In the cartesian \emph{ex vivo} test case, the performance of the embedded and CS models in their relative tolerance for undersampling is similar with acceleration factor 2, and both CS models perform slightly worse than the embedded model with acceleration factor 3 (Figures~\ref{fig:expeKidney_t1rho},~\ref{fig:expeKidney_s0} and~\ref{fig:expeKidney_rmse}). With acceleration factor 4, the performance of the CS models is already clearly worse than that of the embedded model. While both of the CS models fail in the reconstruction with acceleration factor 5, the embedded model still exhibits a tolerance for undersampling similar to that with the smaller acceleration factors. The undersampled iFFT reconstructions shown for reference perform worse than the CS or the embedded model reconstructions with all the acceleration factors. \begin{figure} \centering \includegraphics[width=\columnwidth]{im/expeKidney_t1rho.png} \caption{The \trho{} maps of the cartesian \emph{ex vivo} mouse kidney data with the iFFT, CS S1+C1, CS S1C2 and embedded models, and the RMSEs as compared to the corresponding model reconstructions with full data. The top row contains the iFFT, the second row the CS S1+C1, and the third row the CS S1C2 model \trho{} parameter maps obtained from the monoexponential fit of \eqref{eq:t1rhomodel}, and the bottom row contains the \trho{} maps obtained from the embedded model reconstructions. Columns 1-4 show the parameter maps corresponding to acceleration factors 1, 3, 4, and 5. Images are cropped to content.} \label{fig:expeKidney_t1rho} \end{figure}{} \begin{figure} \centering \includegraphics[width=\columnwidth]{im/expeKidney_s0.png} \caption{The \szero{} maps of the cartesian \emph{ex vivo} mouse kidney data with the iFFT, CS S1+C1, CS S1C2 and embedded models, and the RMSEs as compared to the corresponding model reconstructions with full data.
The \szero{} maps shown here are from the same reconstructions as the \trho{} maps shown in Figure~\ref{fig:expeKidney_t1rho}. The top row contains the iFFT, the second row the CS S1+C1, and the third row the CS S1C2 model \szero{} parameter maps obtained from the monoexponential fit of \eqref{eq:t1rhomodel}, and the bottom row contains the \szero{} maps obtained from the embedded model reconstructions. Columns 1-4 show the parameter maps corresponding to acceleration factors 1, 3, 4, and 5. Images are cropped to content.} \label{fig:expeKidney_s0} \end{figure}{} \begin{figure} \centering \includegraphics[width=\columnwidth]{im/expeKidney_rmse.png} \caption{The RMSEs of the \trho{} (left) and \szero{} (right) maps of the cartesian \emph{ex vivo} mouse kidney data with the embedded, CS S1+C1, CS S1C2, and iFFT models at acceleration factors 2, 3, 4, and 5.} \label{fig:expeKidney_rmse} \end{figure}{} \section{Discussion} In this work, we proposed a non-linear, embedded \trho{} model for direct quantitative \trho{} reconstruction. The model is solved using the non-linear primal-dual proximal splitting algorithm \cite{Val14}. We compared the embedded model reconstructions to two compressed sensing reconstructions followed by a mono-exponential \trho{} fit in a radial simulated test case and a cartesian \emph{ex vivo} test case. In the cartesian test case, we also show results from iFFT reconstructions followed by the \trho{} fit. In the simulated test case, where the RMSE metric with respect to the true target image is available, the embedded model outperformed both of the CS models with improvement increasing towards the higher acceleration factors. In the experimental test case with Cartesian ex vivo mouse kidney data, the RMSEs reflect the relative tolerance of the method with respect to the case where the fully sampled data was available for that particular method. 
In this case, the embedded model and the CS models had similar RMSE for acceleration factor 2, and for higher acceleration factors the embedded model exhibited clearly better tolerance for undersampling, indicating that the embedded model would allow the use of higher acceleration factors than the CS models. The two CS models perform quite similarly, with the second-order contrast TV model CS S1C2 performing slightly better overall than the CS S1+C1 model in the simulated test case. The same observation can be made in the cartesian test case up to acceleration factor 4. In the cartesian test case, the CS S1+C1 model has a smaller RMSE than CS S1C2 with acceleration factor 5, but in this case both of the CS models failed to produce useful \trho{} or \szero{} maps. From a practical point of view, the second-order contrast TV model with the implementation described in \cite{Zib+18} is also more convenient than the CS S1+C1 model, as it requires selecting only a single regularization parameter. The embedded model is, however, slower to compute than the CS models. For example, our MATLAB code for the radial simulation data with $\mathrm{AF}=5$ took 104 minutes for the embedded model and 26 minutes for the CS S1+C1 model. For the experimental cartesian data, the difference was larger: for example, for $\mathrm{AF}=2$, the embedded model took 75 minutes to compute, while the CS S1+C1 model converged to the stopping criterion in under a minute. The computation times could, however, be shortened, for example, by optimizing the code, running the code on a GPU, or loosening the stopping criteria, since we ran the iterations to rather strict criteria. In the radial simulated test case, the embedded model reconstructs the target quite well even with an acceleration factor of 101, using only 3 spokes per $T_\mathrm{SL}$ contrast and 21 spokes in the whole reconstruction. In the cartesian test case, the acceleration factors that can be reached are much smaller.
Even though the target used in the radial simulation is rather simple, it is evident that the radial sampling pattern, particularly with golden angle sampling, where the k-space spokes are complementary and pass through the center part of the k-space, allows much higher acceleration factors than a cartesian line-by-line sampling pattern. This is due to the undersampling artefacts in radial sampling (i.e., streaking) being more noise-like in the transform domain than the undersampling artefacts that arise in cartesian sampling \cite{LDP07,BUF07}. This finding is aligned with the findings of \cite{Zib+20}. Testing the proposed embedded model with radial experimental data, \emph{in vivo} data, 3D data, and parallel imaging data is an interesting direction for future work, and our hypothesis is that similar results, with the embedded model outperforming the CS models, are to be expected. In addition, the embedded \trho{} model could be tested with other regularizers, such as total generalized variation \cite{BKP10}, which balances between minimizing the first- and second-order differences of the signal. As the contrast manipulation scheme of the signal acquisition and the quantitative signal equation are the only major aspects that change between different qMRI contrasts, the proposed method can easily be adapted to fit other qMRI cases as well. Besides other qMRI methods, other aspects where embedded modelling could offer further benefits are \trho{} dispersion imaging \cite{akella2004,keenan2015}, where the data are acquired at multiple spin-locking amplitudes, and reducing RF energy deposition by asymmetric data reduction for the different spin-lock times (i.e., less data for the long spin-lock pulses). More generally, shorter scan times may allow for longer spin-lock durations and/or higher amplitude pulses, as the specific absorption rate of RF energy can be minimized by acquiring less data for the most demanding pulses.
Alternatively, multi-contrast embedded modelling could offer further avenues for data reduction. \section{Conclusions} In this work, we proposed an embedded \trho{} reconstruction method, which directly reconstructs the \trho{}, \szero{}, and phase maps from the measurement data. The reconstruction method also allows direct regularization of these parameter maps, and thus \emph{a priori} information about the parameter maps may be incorporated into the reconstruction. We also showed that the proposed method outperforms two compressed sensing models in two test cases, especially when using higher acceleration factors. \begin{appendices} \section{Algorithm details} The minimization problem we are solving using NL-PDPS reads \begin{multline} \min_{\szeroTwo,\trhoTwo,\theta}||K(\szeroTwo,\trhoTwo,\theta)-m||_2^2 +\alpha_1 \mathrm{TV_S}(\szeroTwo) +\alpha_2 \mathrm{TV_S}(\trhoTwo)\\ +\alpha_3 ||\nabla_\mathrm{S} \theta||_2^2 +\delta_\mathrm{a_1}(\szeroTwo)+\delta_\mathrm{a_2}(\trhoTwo). \label{eq:emb_t1rho_model_2} \end{multline} For the algorithm, the minimization problem is written as \begin{equation} \min_u F(H(u))+G(u), \end{equation} where $u=(\szeroTwo^\mathrm{T},\trhoTwo^\mathrm{T},\theta^\mathrm{T})^\mathrm{T}:=(u_1^\mathrm{T},u_2^\mathrm{T},u_3^\mathrm{T})^\mathrm{T}$, and \begin{equation} H(u)= \left[\begin{array}{c} K(u)\\ \nabla_\mathrm{D} u\\ \end{array}\right], \mathrm{where\ } \nabla_\mathrm{D} = \left[\begin{array}{ccc} \nabla_\mathrm{x} & 0 & 0\\ \nabla_\mathrm{y} & 0 & 0\\ 0 & \nabla_\mathrm{x} & 0\\ 0 & \nabla_\mathrm{y} & 0\\ 0 & 0 & \nabla_\mathrm{x}\\ 0 & 0 & \nabla_\mathrm{y}\\ \end{array}\right], \end{equation} where $\nabla_\mathrm{x}$ and $\nabla_\mathrm{y}$ are discrete forward differences in the horizontal and vertical directions. 
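For small image sizes, the forward-difference operators $\nabla_\mathrm{x}$ and $\nabla_\mathrm{y}$ entering $\nabla_\mathrm{D}$ can be built explicitly with Kronecker products. The dense sketch below assumes column-major vectorization and zero rows at the far boundary (our conventions; in practice one would use sparse matrices or matrix-free differences):

```python
import numpy as np

def spatial_diff_ops(ny, nx):
    """Dense forward-difference matrices acting on a vectorized ny-by-nx image
    (column-major vectorization assumed), with zero rows at the far boundary
    so that constant images map to zero."""
    def d1(n):
        D = -np.eye(n) + np.eye(n, k=1)   # forward difference in 1D
        D[-1, :] = 0.0                    # zero row at the boundary
        return D
    Dx = np.kron(d1(nx), np.eye(ny))      # horizontal differences
    Dy = np.kron(np.eye(nx), d1(ny))      # vertical differences
    return Dx, Dy
```

The block matrix $\nabla_\mathrm{D}$ is then the block-diagonal stack of $[\nabla_\mathrm{x};\nabla_\mathrm{y}]$ applied to each of the three parameter maps.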
Further, the functional $F$ is divided into four parts matching the parts of the minimization problem as \begin{align} F_1(p_1) =& \frac{1}{2}\@ifstar{\oldnorm}{\oldnorm*}{p_1-m}_2^2 \\ F_2(p_2) =& \alpha_1\@ifstar{\oldnorm}{\oldnorm*}{\@ifstar{\oldabs}{\oldabs*}{p_2}}_1\\ F_3(p_3) =& \alpha_2\@ifstar{\oldnorm}{\oldnorm*}{\@ifstar{\oldabs}{\oldabs*}{p_3}}_1\\ F_4(p_4) =& \alpha_3\@ifstar{\oldnorm}{\oldnorm*}{p_4}_2^2, \end{align} and the $p_i$ are obtained by \begin{align} p_1 =& K(u) \\ p_2 =& \nabla_\mathrm{S} u_1 := \left[\begin{array}{c} \nabla_\mathrm{x} u_1\\ \nabla_\mathrm{y} u_1\\ \end{array}\right] \\ p_3 =& \nabla_\mathrm{S} u_2 := \left[\begin{array}{c} \nabla_\mathrm{x} u_2\\ \nabla_\mathrm{y} u_2\\ \end{array}\right]\\ p_4 =& \nabla_\mathrm{S} u_3 := \left[\begin{array}{c} \nabla_\mathrm{x} u_3\\ \nabla_\mathrm{y} u_3\\ \end{array}\right]. \end{align} Here, $|p_2|$ and $|p_3|$ denote the isotropic gradient magnitudes; for example, the elements of $|p_2|$ are $(|p_2|)_i=\sqrt{(\nabla_\mathrm{x}\szeroTwo)_i^2+(\nabla_\mathrm{y}\szeroTwo)_i^2}$. Similarly, the functional $G$ has two parts, which read \begin{align} G_1(u) =& \delta_\mathrm{a_1}(\szeroTwo) \\ G_2(u) =& \delta_\mathrm{a_2}(\trhoTwo). \end{align} Now, the proximal operators (also called the resolvent operators) of the convex conjugates of the $F_i$, i.e., $F_i^*$, and of $G_1$ and $G_2$ read \begin{align} (I+\sigma\partial F_1^*)^{-1}(v_1) =& \frac{v_1-\sigma m}{1+\sigma} \\ (I+\sigma\partial F_2^*)^{-1}(v_2) =& \frac{v_2}{\max(1,|v_2|/\alpha_1)}\\ (I+\sigma\partial F_3^*)^{-1}(v_3) =& \frac{v_3}{\max(1,|v_3|/\alpha_2)}\\ (I+\sigma\partial F_4^*)^{-1}(v_4) =& \frac{v_4}{\sigma/(2\alpha_3)+1}\\ (I+\tau\partial G_1)^{-1}(u) =& P_{1,a_1}(u) = \begin{cases} u_1,& u_1\geq a_1\\ a_1,& u_1<a_1 \end{cases}\\ (I+\tau\partial G_2)^{-1}(u) =& P_{2,a_2}(u) = \begin{cases} u_2,& u_2\geq a_2\\ a_2,& u_2<a_2 \end{cases}. \end{align} With these, we can write Algorithm~\ref{alg:nl-pdps_precise}.
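Each of these proximal maps is a cheap pointwise operation. The NumPy sketch below mirrors the formulas above, using real-valued arrays for brevity (in the reconstruction, $v_1$ and $m$ are complex, and $|v_2|$, $|v_3|$ are the pixelwise isotropic gradient magnitudes):

```python
import numpy as np

def prox_F1_conj(v, m, sigma):
    """(I + sigma dF1*)^{-1}: dual step of the data-fidelity term."""
    return (v - sigma * m) / (1.0 + sigma)

def prox_tv_conj(vx, vy, alpha):
    """(I + sigma dF2*)^{-1} / (I + sigma dF3*)^{-1}: pointwise projection of
    the dual gradient field onto the ball of radius alpha (sigma-independent)."""
    scale = np.maximum(1.0, np.sqrt(vx ** 2 + vy ** 2) / alpha)
    return vx / scale, vy / scale

def prox_F4_conj(v, sigma, alpha3):
    """(I + sigma dF4*)^{-1}: dual step of the quadratic phase regularizer."""
    return v / (sigma / (2.0 * alpha3) + 1.0)

def prox_G(u, a):
    """(I + tau dG)^{-1}: projection onto the pointwise constraint u >= a."""
    return np.maximum(u, a)
```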
The step lengths $\tau_i$ are chosen according to Eqs.~\eqref{eq:tau_1}--\eqref{eq:tau_3}. \begin{algorithm} \caption{Embedded \trho{} with NL-PDPS \cite[Algorithm 2.1]{Val14}} \label{alg:nl-pdps_precise} \begin{algorithmic} \State $\mathrm{Choose}\ \omega\geq 0\ \mathrm{and}\ \tau_\ell,\sigma$ \State $\mathrm{s.t.}\ \tau_\ell\sigma (\sup_{k=1,...,i}\@ifstar{\oldnorm}{\oldnorm*}{[\nabla H(x^k)]_\ell}^2)<1.$ \State $\mathrm{T} = \mathrm{diag}(\tau_1 I_N,\ \tau_2 I_N,\ \tau_3 I_N).$ \While{Not reached stopping criterion} \State $\tilde{u}^{i+1} \leftarrow u^i-\mathrm{T}[\nabla H(u^i)]^*[v_1^{i^\mathrm{T}} v_2^{i^\mathrm{T}} v_3^{i^\mathrm{T}} v_4^{i^\mathrm{T}}]^\mathrm{T}$ \State $u^{i+1} \leftarrow P_{2,a_2}(P_{1,a_1}(\tilde{u}^{i+1}))$ \State $\bar{u}^{i+1} \leftarrow u^{i+1} + \omega(u^{i+1}-u^i)$ \State $v_1^{i+1} \leftarrow (I+\sigma\partial F_1^*)^{-1}(v_1^i+\sigma K(\bar{u}^{i+1}))$ \State $v_2^{i+1} \leftarrow (I+\sigma\partial F_2^*)^{-1}(v_2^i+\sigma \nabla_\mathrm{S}\bar{u}_1^{i+1})$ \State $v_3^{i+1} \leftarrow (I+\sigma\partial F_3^*)^{-1}(v_3^i+\sigma \nabla_\mathrm{S}\bar{u}_2^{i+1})$ \State $v_4^{i+1} \leftarrow (I+\sigma\partial F_4^*)^{-1}(v_4^i+\sigma \nabla_\mathrm{S}\bar{u}_3^{i+1})$ \EndWhile \end{algorithmic} \end{algorithm} \end{appendices} \printbibliography \end{document}
\section{Introduction} Van der Waals heterobilayers consisting of vertically stacked monolayer transition-metal dichalcogenide semiconductors (TMDs) form atomically sharp interfaces with type-II band alignment \cite{chiu2015determination,wilson2017determination}. This band alignment enables the formation of interlayer excitons (IXs): Coulomb-bound states of electrons and holes spatially separated in different monolayers. The reduced overlap of the electron and hole wavefunctions gives rise to long IX radiative lifetimes (compared to intralayer excitons) \cite{rivera2015observation, miller2017long} that can be further tailored by the momentum mismatch between the carriers \cite{choi2021twist}. The spatial separation of the IX carriers also results in a large permanent out-of-plane electric dipole moment, which enables a large tunability of the exciton energy by externally applied electric fields \cite{ciarrocchi2019polarization,jauregui2019electrical,baek2020highly}. The combination of long lifetimes and large binding energies \cite{rivera2018interlayer,Torun2018} positions IXs in TMD heterostructures as an exciting platform to explore many-body exciton-exciton phenomena such as dipolar interactions of IXs in the low-density regime \cite{kremser2020discrete,li2020dipolar} or in the high-density regime, where signatures of coherent excitonic many-body quantum states and high-temperature exciton (boson) condensation have been predicted \cite{fogler2014high,wu2015theory,berman2016high} and observed \cite{sigl2020signatures,wang2019evidence}. The robust and long-lived IXs in TMD heterobilayers also offer novel opportunities to realize atomically thin optoelectronic devices such as lasers \cite{paik2019interlayer} and excitonic transistors that can operate at room temperature \cite{unuchek2018room}.
Beyond the large permanent dipole moment and strong Coulomb interactions, the compelling concept of a moir{\'e} superlattice \cite{kang2013electronic} emerges in TMD heterobilayers due to the lattice mismatch and any relative twist angle between the constituent monolayers. In MoSe$_2$/WSe$_2$ heterobilayers, the moir{\'e} superlattice results in a periodic potential landscape for IXs \cite{zhang2017interlayer} (with a periodicity that depends on the relative crystallographic alignment of the layers) in which three trapping sites with three different local atomic registries arise \cite{yu2017moire,wu2018theory,yu2018brightened}. Experimental evidence of IXs trapped in a moir{\'e}-induced potential has been reported in MoSe$_2$/WSe$_2$ heterobilayers with twist angles near 0$^\circ$, 21.8$^\circ$ and 60$^\circ$ at cryogenic temperatures \cite{seyler2019signatures,brotons2020spin,baek2020highly}. These localized IXs present emission linewidths below 100 $\mu$eV \cite{seyler2019signatures,brotons2020spin,baek2020highly} and photon antibunching \cite{baek2020highly}, clear hallmarks of quantum-confined excitons. Moreover, the trapped IXs show well-defined magneto-optical properties: the $g$-factors of the trapped IXs depend on the relative valley alignment (i.e., stacking configuration) between the layers hosting the carriers, while their optical selection rules are determined by the atomic registry of the trapping site \cite{seyler2019signatures,brotons2020spin}. Together, these magneto-optical properties provide compelling evidence for the moir{\'e} potential as the origin of the IX trapping. The important role the moir{\'e} potential plays in the confinement of IXs is further supported by the twist-dependent IX diffusion in TMD heterobilayers, in which the diffusion length of the IX ensemble depends on the moir{\'e} periodicity \cite{choi2020moire,yuan2020twist}. 
However, the narrow emission linewidths observed for single trapped IXs contrast with the broad photoluminescence (PL) spectra observed in similar MoSe$_2$/WSe$_2$ heterostructures \cite{hanbicki2018double,ciarrocchi2019polarization,calman2020indirect,wang2019giant,jauregui2019electrical,miller2017long,sigl2020signatures,choi2021twist}, which show IX emission bands with linewidths of $4-6$ meV in the cleanest samples \cite{ciarrocchi2019polarization,calman2020indirect,sigl2020signatures,delhomme2020flipping}, two orders of magnitude broader than single trapped IXs. Such contrasting observations have opened a debate regarding a unified picture of the nature of IX emission in TMD heterobilayers, and in particular in the prototypical heterostructure: MoSe$_2$/WSe$_2$ heterobilayers \cite{tartakovskii2020moire}. To date, clear experimental evidence which can marry these two regimes (narrow linewidth trapped IX vs broad linewidth ensemble IX) is missing. Further, gate doping of MoSe$_2$/WSe$_2$ heterobilayers has been shown to lead to the formation of charged IX (trions) with spin-singlet and spin-triplet configurations \cite{jauregui2019electrical,joe2021electrically}. However, the trapped or delocalized nature of IX trions, and all their possible spin-valley configurations, have yet to be addressed. Here, we investigate the magneto-optical properties of IXs in a gate-tunable MoSe$_2$/WSe$_2$ heterobilayer. We tune the density of IXs by scanning the excitation power over five orders of magnitude and report a clear and continuous evolution from the narrow emission of single trapped IXs to broad ensemble IX peaks, for which IXs with both spin-triplet and spin-singlet configuration are observed. In the high excitation power regime, we observe an energetic blue-shift of the IXs due to repulsive dipolar interactions. 
We estimate the density of IXs from the power-dependent evolution of the IX PL and find that, even at the highest excitation powers employed in our measurements ($\sim$80 $\mu$W), the estimated density of optically-generated IXs is two orders of magnitude smaller than the estimated density of moir{\'e} traps ($\sim4\cdot10^{12}$ cm$^{-2}$) in our heterostructure. Polarization-resolved PL measurements under a vertical magnetic field confirm that the narrow and broad IX PL present identical magneto-optical properties, demonstrating that both the quantum-dot-like and the broad IX PL arise from IXs trapped in moir{\'e} potentials with the same atomic registry. Moreover, we investigate the formation of negatively-charged IX trions and find that the trion creation originates from on-site charging of the trapping potentials. The magneto-optical properties of the negatively-charged IXs reveal three different negative trion species with contrasting spin-valley configurations. Interestingly, we observe both intervalley and intravalley IX trions with spin-triplet optical transitions, a consequence of the absence of a dark exciton state in MoSe$_2$/WSe$_2$ heterobilayers. The identification of the various neutral and charged exciton species provides new insight into the multi-peaked IX spectra of heterobilayers, while the localized nature of the IX ensemble emission in MoSe$_2$/WSe$_2$ heterobilayers demonstrates the important role of the moir{\'e} potential in their magneto-optical properties. These findings highlight the necessity of considering the spatial pinning of the IXs to the moir{\'e} lattice in order to achieve an accurate description of many-body exciton-exciton phenomena in TMD heterobilayers.
\section{Magneto-optics of spin-singlet and spin-triplet neutral IX\lowercase{s}} Figure \ref{fig1}(a) shows a sketch of the dual-gated heterobilayer device we employ, which consists of a ML MoSe$_2$ and a ML WSe$_2$ vertically stacked with a twist angle $\Delta\theta\sim 56.4 \pm 0.2^\circ$ (2$H$ stacking). The twist angle in our heterobilayer is beyond the theoretically proposed critical angle for lattice reconstruction \cite{rosenberger2020twist,weston2020atomic,andersenexcitons}, ensuring minimal moir{\'e} domain formation. The heterobilayer was encapsulated in hexagonal boron nitride (hBN). Graphene layers act as electrical contacts for the top, bottom and heterobilayer gates (see Ref. \cite{baek2020highly} for more details). Via voltage-dependent reflectance spectroscopy, we observe clear signatures of strongly correlated electronic states in both the conduction (CB) and valence (VB) bands at numerous fractional filling values of the moir{\'e} superlattice (see Suppl. Note 1). Our observations, consistent with recent reports of correlated insulating states in angle-aligned WSe$_2$/WS$_2$ heterobilayer samples \cite{tang2020simulation,regan2020mott,xu2020correlated,liu2020excitonic}, confirm the quality of our heterobilayer sample and the formation of a moir{\'e} superlattice. However, an in-depth analysis of this experiment is beyond the scope of the current manuscript. Next, we investigate the evolution of the low-temperature (T = 4 K) confocal PL spectrum as a function of the IX density (with all gates grounded). The density of optically-generated IXs in our sample can be varied by changing the power of the continuous-wave excitation laser ($P_{exc}$) \cite{wang2019optical}. We excite resonantly with the 1$s$ state of the intralayer A exciton in ML MoSe$_2$ ($\lambda = 759$ nm) and scan $P_{exc}$ over five orders of magnitude.
Figure \ref{fig1}(b) shows a color plot of the full evolution of the PL spectrum at a representative spot in the grounded heterobilayer for 0.4 nW $ \leq P_{exc} \leq$ 80 $\mu$W, while Fig. \ref{fig1}(c) presents linecuts extracted from Fig. \ref{fig1}(b) for $P_{exc}$ of different orders of magnitude. At low excitation powers ($P_{exc} \leq$ 10 nW), the PL spectrum reveals several discrete narrow emission lines with energies in the range $\sim$1.39--1.41 eV, consistent with recently reported values for neutral IXs trapped in the moir{\'e} potential landscape \cite{seyler2019signatures,brotons2020spin}. Magneto-optical studies in WSe$_2$/MoSe$_2$ heterobilayers with 2$H$ stacking have shown that the moir{\'e}-trapped IXs arise from optical transitions involving the lowest spin-orbit-split CB of MoSe$_2$ at $\pm K$ \cite{seyler2019signatures,brotons2020spin, baek2020highly}. This band configuration results in spin-triplet optical transitions for the trapped IXs (see Fig. \ref{fig1}(d)). Although such spin-flip transitions are normally forbidden in ML TMDs \cite{wang2018colloquium}, they can be brightened due to the selection rules dictated by the resulting interlayer atomic registry of the heterostructures, as theoretically predicted \cite{yu2018brightened} and experimentally shown \cite{ciarrocchi2019polarization,wang2019giant,joe2021electrically,zhang2019highly}. \begin{figure*} \begin{center} \includegraphics[scale=0.6]{Figure_1.png} \end{center} \caption{Power dependence of moir{\'e}-trapped IXs. \textbf{(a)} Sketch of the dual-gated WSe$_2$/MoSe$_2$ heterobilayer. Graphite layers are used as contacts for the top, bottom, and heterobilayer gates, while hBN layers ($D=$ 18 nm) are used as dielectric spacers. \textbf{(b)} Color plot with the full evolution of the low-temperature (T = 4 K) confocal PL spectrum of a representative spot in the undoped heterobilayer for 0.4 nW $ \leq P_{exc} \leq$ 80 $\mu$W in logarithmic scale.
\textbf{(c)} Linecuts extracted from the color plot in (b) for $P_{exc}$ values of different orders of magnitude. The spectra have been shifted vertically and normalized by the values indicated in the figure for visualization purposes. \textbf{(d)} Schematics of the spin–valley configuration of the proposed optical transitions with the corresponding selection rules for $H_h^h$ atomic registry \cite{yu2018brightened}. \textbf{(e)} Magnetic-field dependence of IX PL in the low- and high-excitation regimes (bottom and top panels, respectively) for $\sigma^+$- and $\sigma^-$-resolved confocal collection (left and right panels, respectively). \textbf{(f)} Magnetic-field dependence of the Zeeman splitting measured for IX$^0_T$ and IX$^0_S$ at high $P_{exc}$ (top panel), and for a representative moir{\'e}-trapped IX at low $P_{exc}$ (bottom panel).} \label{fig1} \end{figure*} For the lowest excitation powers employed, only a few tens of IXs are generated ($\sim$10 to 40 depending on the spatial position in the sample). Figure S1 shows the full evolution of the PL spectrum measured at a second spatial location of the heterobilayer in a similar $P_{exc}$ range. The PL measured at both spatial locations shows the same overall behavior under increasing excitation power. First, the emission intensity of the few discrete narrow lines saturates with increasing power (see Figs. S2(a) and S2(b)), a hallmark of few-level quantum-confined systems. Simultaneously, as $P_{exc}$ increases, more trapping sites are populated with IXs and we progressively lose the ability to resolve individual spectral lines, since they merge into an IX ensemble band. The IX ensemble band blue-shifts with increasing $P_{exc}$. At excitation powers of $\sim2~\mu$W, a second IX emission peak appears at higher energy and continuously blue-shifts with increasing $P_{exc}$.
For the highest $P_{exc}$ used in our experiments (80 $\mu$W), the two IX peaks exhibit linewidths between 5 and 10 meV (depending on the spatial location on the sample) and a consistent energy splitting of $\sim$24 meV. To unambiguously identify whether these peaks arise from band-edge states at $\pm K$ and disentangle the spin–valley configuration of each IX emission band, we perform helicity-resolved magneto-optical spectroscopy measurements in the Faraday configuration under linearly polarized ($\pi$) excitation at 1.63 eV (759 nm). Figure \ref{fig1}(e) shows the magnetic-field ($B_z$) dependence of IX PL in the low- and high-excitation regimes (bottom and top panels, respectively) for $\sigma^+$- and $\sigma^-$-resolved confocal collection (left and right panels, respectively). A clear Zeeman splitting with increasing $B_z$ is observed for the IX PL in both excitation regimes. The low-energy ensemble band (IX$^0_T$) observed at high $P_{exc}$ shows the same polarization dependence with $B_z$ as the few individually resolved moir{\'e}-trapped IXs. However, the high-energy ensemble band (IX$^0_S$) presents the opposite polarization. Figure \ref{fig1}(f) shows the $B_z$-dependence of the experimental Zeeman splitting ($\Delta E_z$) values measured for IX$^0_T$ and IX$^0_S$ (top panel) and a representative single moir{\'e}-trapped IX (bottom panel), as extracted from Lorentzian fits of the experimental data. The Zeeman splitting is defined as $\Delta E_z(B_z) = E^{\sigma^+}(B_z)-E^{\sigma^-}(B_z)= g\mu_BB_z$, with $g$ an effective $g$-factor, $\mu_B$ the Bohr magneton and $E^{\sigma^\pm}$ the $B_z$-dependent energies of the IX transitions with $\sigma^\pm$ polarization. Linear fits reveal $g$-factors of $-15.71\pm0.05$, $12.24\pm0.13$, and $-15.98\pm0.02$ for IX$^0_T$, IX$^0_S$ and the moir{\'e}-trapped single IX, respectively.
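As a simple numerical cross-check (our sketch, not the analysis code used for these fits), the linear Zeeman relation $\Delta E_z = g\mu_B B_z$ can be evaluated for the fitted $g$-factors to recover the splitting expected at the maximum field:

```python
# Sketch: Zeeman splitting implied by the fitted g-factors quoted in the
# text; the Bohr magneton in eV/T is a standard constant (CODATA).
MU_B = 5.7883818060e-5  # Bohr magneton (eV/T)

def zeeman_splitting_meV(g_factor, b_field_t):
    """Return Delta_E_z = E(sigma+) - E(sigma-) in meV."""
    return g_factor * MU_B * b_field_t * 1e3

for label, g in [("IX0_T", -15.71), ("IX0_S", 12.24), ("trapped IX", -15.98)]:
    print(f"{label}: {zeeman_splitting_meV(g, 5.0):+.2f} meV at 5 T")
```

At 5 T the triplet band is thus expected to split by roughly $-4.5$ meV, a magnitude comparable to the ensemble linewidths quoted above, which is why the polarization-resolved detection is essential.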
Since the carrier spin, the valley index and the atomic orbital involved in the optical transitions are each associated with a magnetic moment \cite{seyler2019signatures,nagler2017giant,xu2014spin,xiao2012coupled}, the measured $g$-factors provide valuable information about the nature of the transitions. In this regard, the band-picture model presented in Ref. \cite{aivazian2015magnetic} has successfully predicted both the magnitude and the sign of the IX $g$-factors in 2$H$- and 3$R$-stacked MoSe$_2$/WSe$_2$ heterobilayers \cite{seyler2019signatures, ciarrocchi2019polarization,wang2019giant,joe2021electrically,brotons2020spin,baek2020highly}, with predictions in excellent agreement with more rigorous theoretical descriptions of the IX Zeeman shifts \cite{wozniak2020exciton}. Taking into account the reported effective masses of $m^*_h = 0.37$ $m_0$ and $m^*_e = 0.84$ $m_0$ for holes in the topmost valence band of WSe$_2$ \cite{kormanyos2015k} and electrons in the lowest CB of MoSe$_2$ at $\pm K$ \cite{larentis2018large,goryca2019revealing} (with $m_0$ the free electron rest mass), we estimate theoretical $g$-factor values of $-15.8$ and $11.8$ for the IX$^0_T$ and IX$^0_S$ transitions, respectively \cite{brotons2020spin}. The good agreement between the experimental and calculated $g$-factor values confirms the spin-valley configurations of both excitonic transitions and corroborates the identification of IX$^0_T$ and IX$^0_S$ in the high excitation regime (see Fig. \ref{fig1}(d) for a level schematic, the optical transitions, and the corresponding selection rules \cite{yu2018brightened}), in agreement with recent works \cite{ciarrocchi2019polarization, wang2019giant,joe2021electrically,zhang2019highly}. These results bring to light a striking property of these IX transitions: unlike their constituent ML TMDs \cite{echeverry2016splitting,wang2018colloquium}, TMD heterobilayers do not host dark excitons (i.e. spin-forbidden optical transitions).
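The quoted theoretical values can be reproduced with back-of-the-envelope arithmetic. The decomposition below, a spin contribution of 2 per carrier and a valley-orbital contribution of $2m_0/m^*$ per carrier, is our own hedged reconstruction of the band-picture estimate rather than code from the cited references:

```python
# Hedged reconstruction of the band-picture g-factor estimate; the
# spin/valley decomposition and its signs are our assumptions, chosen to
# be consistent with the values quoted in the text (-15.8 and 11.8).
m_h = 0.37  # WSe2 topmost-VB hole mass (units of m0), from the text
m_e = 0.84  # MoSe2 lowest-CB electron mass (units of m0), from the text

valley_term = 2.0 * (1.0 / m_h + 1.0 / m_e)  # valley-orbital contribution
g_singlet = valley_term + 4.0     # reproduces the quoted +11.8
g_triplet = -(valley_term + 8.0)  # reproduces the quoted -15.8

print(f"g(IX0_S) ~ {g_singlet:+.2f}, g(IX0_T) ~ {g_triplet:+.2f}")
```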
In the rest of the work, we focus on the properties of the IX ensemble emission peaks with different spin configurations (IX$^0_T$ and IX$^0_S$) observed at high IX densities. \begin{figure*} \begin{center} \includegraphics[scale= 0.6]{Figure_2.png} \end{center} \caption{Dipolar interactions of an ensemble of trapped IXs. \textbf{(a)} Ratio of integrated PL intensities between the IX$^0_T$ and IX$^0_S$ peaks (IX$^0_T$/IX$^0_S$) as a function of $P_{exc}$ extracted from Fig. \ref{fig1}(b) (black dots). The inset shows the evolution of the integrated PL intensities for each individual peak in the same $P_{exc}$ range. The red, blue and green solid lines represent fits of the experimental data to the rate equation model described in the Suppl. Mat. \textbf{(b)} Energy splitting between IX$^0_S$ and IX$^0_T$ ($\Delta E_{S-T}$) as a function of the excitation power (black dots). The inset shows the energy shifts of the individual peaks. The red solid line represents a fit of the experimental $\Delta E_{S-T}$ to Eq. (\ref{Eq:energy_splitting}), while the green and blue solid lines are fits of the measured $\Delta E_{S,T}$ to Eq. (\ref{Eq:plate_capacitor}) and the shaded areas represent the uncertainty interval corresponding to $d$ values ranging from 0.4 nm \cite{baek2020highly} to 1 nm \cite{nagler2017interlayer}. \textbf{(c)} Estimated power-dependent densities of IXs with spin-singlet (green) and spin-triplet (blue) configurations. The thickness of the lines indicates the confidence interval estimated from the fits to the experimental data. The black dashed line indicates the density of moir{\'e} traps estimated for our heterostructure, while the orange shaded area represents the density of sites for MoSe$_2$/WSe$_2$ heterobilayers with stacking angles between 54.4$^\circ$ and 58.4$^\circ$.
\textbf{(d)} Sketch depicting the transition between the low- and high-excitation power regimes for IXs (red spheres) trapped in the moir{\'e} sites of a MoSe$_2$/WSe$_2$ heterobilayer. At low $P_{exc}$, only a few IXs are localized in the moir{\'e} sites lying inside the circular confocal collection spot (not to scale). As $P_{exc}$ increases, more and more optically-generated IXs are trapped in the moir{\'e} sites, giving rise to a broad PL emission band originating from an ensemble of moir{\'e}-trapped IXs. Even at the highest $P_{exc}$ used in this work, less than $\sim2\%$ of moir{\'e} sites contain trapped IXs.} \label{fig2} \end{figure*} \section{Dipolar interactions of an ensemble of trapped IX\lowercase{s}} To understand the link between the individually resolved moir{\'e}-trapped IXs and the ensemble IX emission, we focus on the power-dependent blue-shifts observed in the PL energy of both IX$^0_T$ and IX$^0_S$. The peak blue-shifts originate from the repulsive dipolar exciton-exciton interaction of IXs, which arises as a consequence of the large permanent electrical dipole moment induced by the spatial separation of the exciton carriers \cite{butov1999magneto,nagler2017interlayer,wang2018electrical,unuchek2019valley}. The power-dependent energy shifts for IX$^0_T$ ($\Delta E_{T}$) and IX$^0_S$ ($\Delta E_{S}$) can be expressed as $\Delta E_{S,T}(P_{exc}) = E_{S,T}(P_{exc})-E_{S,T}^0$, with $E_{S,T}(P_{exc})$ and $E_{S,T}^0$ being the energy of the corresponding exciton species at $P_{exc}$ and vanishing excitation, respectively.
Such excitation-dependent energy shifts can be quantitatively estimated using the plate capacitor formula \cite{butov1999magneto,nagler2017interlayer}:
%
\begin{align}
\Delta E_{S,T}(P_{exc})=4\pi N_{S,T}(P_{exc})e^2d/\varepsilon,
\label{Eq:plate_capacitor}
\end{align}
%
where $N_{S,T}$ is the exciton density of the corresponding IX configuration, $e$ is the electron charge, $d$ is the interlayer distance, and $\varepsilon$ is the dielectric constant (see Suppl. Note 3 for the derivation and applicability of Eq. (\ref{Eq:plate_capacitor})). Therefore, Eq. (\ref{Eq:plate_capacitor}) allows us to estimate the interlayer exciton density at different excitation powers from the experimentally measured energy blue-shift ($N_{S,T}=\Delta E_{S,T} \varepsilon/(4\pi e^2d)$) \cite{butov1999magneto,nagler2017interlayer,jauregui2019electrical,li2020dipolar}. For example, from the data shown in Fig. \ref{fig1}(b) we estimate $\Delta E_{T}\sim0.56$ meV at $P_{exc}=4$ nW. Assuming $\varepsilon = 7.4\,\varepsilon_0$ \cite{gao2017interlayer} (with $\varepsilon_0$ the vacuum permittivity) and $d$ ranging from $\sim0.4$ nm \cite{baek2020highly} to 1 nm \cite{nagler2017interlayer}, we estimate $N_T(4$ nW$)\approx 1.8\cdot10^{9} - 4.6\cdot10^{9}$ cm$^{-2}$. The estimated value of $N_T$ suggests the presence of $\sim$18 - 46 trapped IX$_T^0$ in a spot of about 1 $\mu m^2$, which corresponds roughly to the area of the confocal spot in our measurements. The estimated number of trapped IX$_T^0$ agrees well with the total number of peaks that can be experimentally resolved at $P_{exc}=4$ nW. Similarly, from the estimated total $\Delta E_{T}$ of $\sim$8.06 meV for IX$_T^0$ in Fig. \ref{fig1}(b), we estimate $N_T\approx 2.6\cdot10^{10} - 6.6\cdot10^{10}$ cm$^{-2}$ at the highest excitation powers used in our experiments. For such IX densities, the IXs present an average exciton-exciton distance $\langle r_{IX}\rangle\sim 42-67$ nm (see Suppl.
Note 4 for more details). The estimated $\langle r_{IX}\rangle$ in our experiments is of the same order of magnitude as the $\langle r_{IX}\rangle$ inferred in other 2D excitonic systems for which dipolar interactions are typically observed, for example indirect excitons in III-V quantum wells \cite{butov1999magneto} and MoSe$_2$/WSe$_2$ heterobilayers \cite{nagler2017interlayer,li2020dipolar,jauregui2019electrical}. The estimated values of $N_T$ and $\langle r_{IX}\rangle$ shed some light on the nature of the IX PL emission bands observed at high $P_{exc}$. The stacking angle in our heterostructure ($\Delta\theta = 56.4 \pm 0.2^\circ$ \cite{baek2020highly}) gives rise to an estimated density $N_{total}\sim1.6\cdot10^{13}$ cm$^{-2}$ of moir{\'e} trapping sites with three different local atomic configurations: $H_h^h$, $H_h^X$ and $H_h^M$ \cite{yu2017moire}, where $H_{h}^{\mu}$ denotes an $H$-type stacking with either $h$ the hexagon centre, $X$ the chalcogen site or $M$ the metal site vertically aligned with the hexagon centre ($h$) of the hole layer. $N_{total}$ yields a density $N_{moir\Acute{e}}=N_{total}/3\sim4.2\cdot10^{12}$ cm$^{-2}$ of moir{\'e} sites with the same atomic registry. Therefore, our analysis reveals that the estimated density of optically-generated IXs ($N_{IX}$) is around two orders of magnitude smaller than $N_{moir\Acute{e}}$; less than 2$\%$ of the moir{\'e} sites are filled with IXs. We note that the highest $N_{IX}$ achieved in our experiments corresponds to the lowest $N_{IX}$ investigated by Wang \textit{et al.}, who reported an optically driven Mott transition from IXs to a charge-separated electron--hole plasma in a 3$R$-MoSe$_2$/WSe$_2$ heterobilayer with a similar moir{\'e} period ($\Delta\theta=4^{\circ}$) for $N_{IX}>$ $3\cdot10^{12}$ cm$^{-2}$ (i.e., for $N_{IX}>N_{moir\Acute{e}}$) \cite{wang2019optical}. In order to provide an additional degree of confidence in the IX density estimated from Eq.
(\ref{Eq:plate_capacitor}), we estimate the magnitudes of $N_T$ and $N_S$ from the power-dependent results shown in Fig. \ref{fig1}(b). Figure \ref{fig2}(a) shows the ratio of integrated PL intensities between the IX$^0_T$ and IX$^0_S$ peaks as a function of $P_{exc}$ extracted from Fig. \ref{fig1}(b) (black dots). The inset shows the evolution of the integrated PL intensities for each individual peak in the same $P_{exc}$ range. In the range of $P_{exc}$ for which both peaks coexist, we observe that the intensity ratio decreases from $\sim$105 to $\sim$5 with increasing $P_{exc}$. To quantitatively evaluate the PL intensities of the two IX peaks, we employ a rate equation model (see Suppl. Note 5). The solid lines in Fig. \ref{fig2}(a) represent fits of the experimental data for both the PL intensity ratio (red) and the PL intensities of the IX$^0_T$ (blue) and IX$^0_S$ (green) peaks to the rate equation model. The fits allow us to confidently estimate the power-dependent relative densities of IXs with spin-singlet and spin-triplet configurations. Next, we show in Fig. \ref{fig2}(b) the experimentally-measured energy splitting between IX$^0_S$ and IX$^0_T$ ($\Delta E_{S-T}$), where we observe that increasing $P_{exc}$ gives rise to a reduction of $\Delta E_{S-T}$ by up to 3 meV, well beyond the uncertainty associated with the experimental determination of the energy splitting. From Eq. (\ref{Eq:plate_capacitor}), the density-dependent $\Delta E_{S-T}$ can be calculated as
%
\begin{align}
\Delta E_{S-T}=\Delta E_{S-T}^0 +\Delta N_{S-T}(P_{exc})4\pi e^2d/\varepsilon,
\label{Eq:energy_splitting}
\end{align}
%
with $\Delta E_{S-T}^0=E_S^0-E_T^0$ and $\Delta N_{S-T}(P_{exc})=N_{S}(P_{exc})-N_{T}(P_{exc})$. From Eq.
(\ref{Eq:energy_splitting}) it is straightforward to see that the decrease of $\Delta E_{S-T}$ with increasing $P_{exc}$ has its origin in the higher density of IX$^0_T$ in the range of $P_{exc}$ used in our experiments, which leads to $\Delta N_{S-T}(P_{exc})<0$. Figure \ref{fig2}(b) shows the fit of the experimental $\Delta E_{S-T}$ (black dots) to Eq. (\ref{Eq:energy_splitting}) (red solid line), where we have used the relative IX densities shown in Fig.~\ref{fig2}(c), and have assumed an average $d$ value of 0.7 nm \cite{li2020dipolar}. The inset shows a comparison of the experimental and calculated energy shift for each individual exciton band, where the shaded areas represent the confidence interval corresponding to energy shifts calculated using $d$ values ranging from 0.4 nm \cite{baek2020highly} to 1 nm \cite{nagler2017interlayer}. The good agreement between the experimental and calculated values allows us to estimate the order of magnitude of the absolute densities for IXs with both spin configurations, which are shown in Fig. \ref{fig2}(c). \begin{figure*}[t] \begin{center} \includegraphics[scale=0.6]{Figure_3.png} \end{center} \caption{Interlayer exciton trions in WSe$_2$/MoSe$_2$. \textbf{(a),(b)} Gate-voltage-controlled PL of IXs in the neutral ($0< V_g \leq0.1$ V) and electron-doping ($V_g \geq 0.1$ V) regimes at $B_z$ = 0 T, for $P_{exc}=20$ nW (a) and $P_{exc}=40$ $\mu$W (b) at 4 K. In (b), the PL intensity has been multiplied by a factor of 20 in the spectral range delimited by the vertical dashed lines for visualization purposes. \textbf{(c)} Linecuts extracted from the color plots in (a) and (b) (black and red solid lines, respectively) in the neutral ($V_g=0$ V) and electron doping regimes ($V_g=0.6$ V). The spectra have been normalized by the values indicated in the figure, and the linecuts at $V_g=0.6$ V have been shifted vertically for visualization purposes.
The dashed vertical lines indicate the central energies of the different ensemble peaks, illustrating the power-induced blue-shift ($\Delta E(P_{exc})$) and the energy splittings arising from the binding energies of the charged IXs with spin triplet ($\Delta E_{IX_T}$) and spin singlet ($\Delta E_{IX_S}$) configuration.} \label{fig3} \end{figure*} The black dashed line in Fig. \ref{fig2}(c) shows the estimated $N_{moir\Acute{e}}$ for our sample. In agreement with the analysis from the plate-capacitor approximation (Eq. (\ref{Eq:plate_capacitor})), the results in Fig. \ref{fig2}(c) corroborate that, even for the highest excitation powers used in our experiments, the estimated densities of optically-generated IXs are around two orders of magnitude smaller than $N_{moir\Acute{e}}$. The orange shaded area in Fig. \ref{fig2}(c) represents the estimated $N_{moir\Acute{e}}$ for MoSe$_2$/WSe$_2$ heterobilayers with stacking angles between 54.4$^\circ$ and 58.4$^\circ$, showing that this conclusion holds true even for a large range of stacking angles. Our results represent a bridge between the quantum-dot-like \cite{seyler2019signatures,brotons2020spin,baek2020highly} and the broad PL emission peaks \cite{hanbicki2018double,ciarrocchi2019polarization,wang2019giant,jauregui2019electrical,miller2017long,sigl2020signatures,choi2021twist} previously observed for IX ensembles in MoSe$_2$/WSe$_2$ heterobilayers at different excitation powers. Figure \ref{fig2}(d) shows a sketch depicting the transition between the low- and high-density regimes. At low $P_{exc}$, only a few IXs (red spheres) are localized in the moir{\'e} sites lying inside our circular confocal collection spot (not to scale). The spatial and spectral isolation of these trapped excitons is likely aided by dipolar repulsion, which minimizes the probability for excitons to populate neighboring moir{\'e} sites. 
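The density estimates discussed in this section can be checked numerically. The sketch below is our own reimplementation (SI constants; the lattice constant $a\approx0.33$ nm and the hexagonal moir{\'e}-cell geometry are assumptions) of Eq. (\ref{Eq:plate_capacitor}) applied exactly as printed, together with the moir{\'e}-site counting:

```python
import math

# Hedged reimplementation of the density estimates; constants and
# geometry conventions are our assumptions, chosen to reproduce the
# values quoted in the text.
E = 1.602176634e-19      # elementary charge (C)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def ix_density_cm2(delta_e_mev, d_nm, eps_r=7.4):
    """IX density from the measured blue-shift, Eq. (1) as printed."""
    delta_e_j = delta_e_mev * 1e-3 * E
    n_m2 = delta_e_j * eps_r * EPS0 / (4 * math.pi * E**2 * d_nm * 1e-9)
    return n_m2 * 1e-4  # m^-2 -> cm^-2

def moire_site_density_cm2(twist_deg, a_nm=0.33):
    """Moire sites of one atomic registry (one per cell), near-60deg twist."""
    theta = math.radians(abs(60.0 - twist_deg))     # effective angle
    period_nm = a_nm / (2 * math.sin(theta / 2))    # moire period
    cell_area_cm2 = (math.sqrt(3) / 2) * (period_nm * 1e-7) ** 2
    return 1.0 / cell_area_cm2

n_low = ix_density_cm2(0.56, d_nm=1.0)   # ~1.8e9 cm^-2 at 4 nW
n_high = ix_density_cm2(8.06, d_nm=0.4)  # ~6.6e10 cm^-2 at maximum power
n_moire = moire_site_density_cm2(56.4)   # ~4.2e12 cm^-2
spacing_nm = math.sqrt(2 / (math.sqrt(3) * n_high)) * 1e7  # hexagonal spacing
print(f"filling at max power: {n_high / n_moire:.1%}")
```

With these conventions the exciton spacing at maximum power comes out at $\sim$42 nm for $d=0.4$ nm, and the filling fraction stays below 2$\%$, matching the values quoted in the text.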
As $P_{exc}$ increases, more and more optically-generated IXs are trapped in the moir{\'e} sites, giving rise to a broad PL emission band originating from the ensemble of moir{\'e}-trapped IXs. Even at the highest $P_{exc}$ used in this work, less than 2$\%$ of moir{\'e} sites contain trapped IXs. This behaviour agrees well with the magneto-optical properties measured for single confined IXs and the IX$^0_T$ ensemble PL band. The polarization selection rules of moir{\'e}-trapped excitons are dictated by the local atomic registry of the moir{\'e} trapping site \cite{yu2017moire,yu2018brightened}. The same optical selection rules observed for both single confined IXs and the IX$^0_T$ ensemble PL band indicate that only moir{\'e} sites with a local atomic registry $H_h^h$ are responsible for the IX trapping \cite{seyler2019signatures,zhang2019highly,brotons2020spin,baek2020highly}. Moreover, the optical selection rules of the IX$^0_S$ ensemble PL peaks further corroborate this assignment, since $H_h^h$ is the only local atomic registry in $2H$-stacked TMD heterobilayers that results in opposite circularly-polarized transitions for IXs with spin-triplet and spin-singlet configurations \cite{yu2018brightened}. We note that for MoSe$_2$/WSe$_2$ heterobilayers with $3R$-stacking (i.e. $\Delta\theta\sim0^\circ$), the magneto-optical selection rules suggest that the PL emission at high excitation power arises from spin-singlet and spin-triplet IXs with a local stacking registry $R^X_h$ \cite{ciarrocchi2019polarization, joe2021electrically}. \section{{Spin-valley configurations of negative IX trions in 2$H$-M\lowercase{o}S\lowercase{e}$_2$/WS\lowercase{e}$_2$}} \begin{figure*}[t] \begin{center} \includegraphics[scale=0.6]{Figure_4.png} \end{center} \caption{Interlayer exciton trions in WSe$_2$/MoSe$_2$.
\textbf{(a)} Color plot with the full evolution of the PL spectrum of negative IX trions for 0.2 nW $ \leq P_{exc} \leq$ 40 $\mu$W at $V_g=2$ V in logarithmic scale, in which four different peaks can be resolved (as indicated by the arrows). \textbf{(b)} PL spectrum acquired for the intermediate $P_{exc}$ value indicated by the white dashed line in (a) (2 $\mu$W) at $V_g=2$ V, fitted to four Lorentzian peaks corresponding to various exciton species: IX$^-_{T, inter}$ (blue), IX$^-_{T,intra}$ (gray), IX$^-_{S,inter}$ (green), and IX$^{-'}_T$ (red). \textbf{(c)} Schematic representation of the charge configurations for IX$^-_{T, inter}$ (left), IX$^-_{T,intra}$ (middle) and IX$^-_{S,inter}$ (right) showing the optical transitions that involve a hole in the topmost valence band of WSe$_2$ at $K$. \textbf{(d)} Magnetic-field dependence of the IX trion PL for $\sigma^+$- (purple) and $\sigma^-$-polarized (orange) collection in the range $0\leq B_z\leq5$ T under $V_g=$ 1 V and $P_{exc}=40$ $\mu$W. \textbf{(e)} Zeeman splitting of each IX trion species (dots) as extracted from fits of the experimental data shown in (d). The solid lines represent linear fits of the experimental data.} \label{fig4} \end{figure*} In this section, we investigate the formation of negatively charged trapped IXs with both spin-triplet and spin-singlet configurations. The Fermi energy in our WSe$_2$/MoSe$_2$ heterobilayer can be continuously tuned with an external gate voltage $V_g$. The color plots of Figs. \ref{fig3}(a) and \ref{fig3}(b) show the effects of electron doping on the PL of the trapped IXs at low ($P_{exc}=20$ nW) and high ($P_{exc}=40~\mu$W) excitation powers, respectively. The $V_g$-dependent evolution of the IX PL shows the same overall behaviour for both the low and high IX density regimes, and resembles the doping dependence of the intralayer A exciton of monolayer MoSe$_2$ measured in a sample region where the MoSe$_2$ does not overlap with the WSe$_2$ layer (see Suppl.
Note 6). For $0< V_g \lesssim 0.1$ V, the PL spectrum is dominated by neutral excitons: IX$^0_T$ at low IX density (Fig. \ref{fig3}(a)), and both IX$^0_T$ and IX$^0_S$ at high IX density (Fig. \ref{fig3}(b)). At high IX density, the IX$^0_T$ and IX$^0_S$ ensemble peaks present linewidths of $\sim5.2$ meV and $\sim7$ meV, respectively, on par with the cleanest MoSe$_2$/WSe$_2$ samples \cite{ciarrocchi2019polarization,calman2020indirect,sigl2020signatures}. For $V_g \gtrsim 0.1$ V, new red-shifted peaks appear at energies $\sim$7 ($\sim$4) meV below the IX$^0_T$ (IX$^0_S$) peaks, indicating the formation of negatively charged IX trions with different spin configurations (see linecuts in Fig. \ref{fig3}(c)). The measured energy differences between the neutral and charged exciton peaks are attributed to the binding energies of the charged IXs, in good agreement with recently reported values for MoSe$_2$/WSe$_2$ heterostructures with $3R$ stacking \cite{jauregui2019electrical,joe2021electrically}. The new trion peaks coexist with IX$^0_T$ and IX$^0_S$ over a narrow range of applied voltages until IX$^0_T$ and IX$^0_S$ eventually vanish with increasing $V_g$. We note that the observation of IX trions in the quantum-emitter (low-excitation) regime is novel. At high excitation powers, the PL spectrum in the $n$-doped region shows broad emission shoulders in both the low- and high-energy tails of the IX trion peak with spin-triplet configuration. In order to spectrally resolve the emission bands, we plot the PL of IXs for $n$ doping ($V_g=2$ V) as a function of $P_{exc}$ (see Fig. \ref{fig4}(a)). Figure \ref{fig4}(b) shows the PL spectrum acquired for an intermediate $P_{exc}$ value of 2 $\mu$W at $V_g=2$ V, as indicated by the white dashed line in Fig. \ref{fig4}(a). At this excitation power, we clearly resolve four emission peaks. The brightest peak in the spectrum (at $\sim$ 1.39 eV) corresponds to intervalley IX trions with spin-triplet optical transitions (IX$^-_{T, inter}$).
At higher emission energies, we observe two additional peaks with an energy splitting of $\sim$12 meV. These two peaks show a similar integrated PL intensity across the entire range of $P_{exc}$ (see Suppl. Note 7), suggesting that their corresponding charge configurations involve the same band-edge energy levels. We attribute the low-energy and high-energy peaks of this doublet to intravalley (IX$^-_{T,intra}$) and intervalley (IX$^-_{S,inter}$) trions with spin-triplet and spin-singlet configuration, respectively. Figure \ref{fig4}(c) shows a schematic representation of the charge configurations for IX$^-_{T, inter}$ (left), IX$^-_{T,intra}$ (middle), and IX$^-_{S,inter}$ (right) with optical transitions that involve a hole in the topmost valence band of WSe$_2$ at $K$. The PL spectrum in Fig. \ref{fig4}(b) also shows a broad emission peak at lower energies than IX$^-_{T, inter}$ (IX$^{-'}_T$). Although its origin is not yet fully understood, this feature has also recently been observed in the reflectivity of $n$-doped ML WSe$_2$ \cite{van2019probing, wang2020observation} and the PL of $n$-doped MoS$_2$ \cite{roch2020first,klein2021controlling}, and attributed to either a Mahan-like exciton \cite{roch2020first} or an exciton-plasmon-like excitation \cite{van2019probing, wang2020observation}. We find that, contrary to the behaviour of IX$^-_{T, inter}$, IX$^-_{S,inter}$ and IX$^-_{T,intra}$, the integrated emission intensity of IX$^{-'}_T$ increases with the carrier concentration. Moreover, increasing the electron concentration also leads to a linear increase of the energy splitting between IX$^{-}_{T,inter}$ and IX$^{-'}_T$ (see Suppl. Note 8). Finally, the charge configurations for the different charged exciton species depicted in Fig. \ref{fig4}(c) highlight a striking difference between IX$^-_{S,inter}$ and IX$^-_{T,intra}$.
In IX$^-_{S,inter}$ the photon emission arises from electron-hole recombination between the topmost VB and the top spin-split CB, whereas for IX$^-_{T,intra}$ the absence of a dark exciton ground state in MoSe$_2$/WSe$_2$ heterostructures enables electron-hole recombination between the topmost VB and the bottommost spin-split CB. This behaviour contrasts with negative trions in W-based TMDs, in which both intravalley and intervalley negative trions present electron-hole recombination involving the same top spin-split CB \cite{lyons2019valley}. Moreover, since IX$^-_{S,inter}$ is the only trion species in which the electron-hole recombination involves the top spin-split CB, the $V_g$-dependent relative integrated PL intensities of the different trion species should provide an estimate of the energy splitting between the spin-split CBs in MoSe$_2$ \cite{joe2021electrically}. We observe that the PL intensity from IX$^-_{S,inter}$ overtakes the combined PL intensity from IX$^-_{T, inter}$ and IX$^-_{T,intra}$ at $V_g \sim$ 6.1 V (see Suppl. Note 9), which corresponds to an energy splitting of $\sim$ 21 meV, in good agreement with the calculated spin-orbit splitting of the MoSe$_2$ CBs at $\pm K$ (23 meV) \cite{kormanyos2014spin}. Therefore, the spin-valley configurations depicted in Fig. \ref{fig4}(c) for the different trion states suggest that, in contrast to intravalley negative trions in W-based TMDs, IX$^-_{S,inter}$ and IX$^-_{T,intra}$ should present different magneto-optical properties even though both involve the topmost CB. To investigate the magneto-optical properties of the different trion peaks, we measure their PL as a function of $B_z$ at $P_{exc}=40$ $\mu$W and $V_g=$ 1 V, for which both the intensity and linewidth of the different peaks allow us to clearly monitor their emission properties.
Figure \ref{fig4}(d) shows the magnetic field dependence of the IX trion PL for $\sigma^+$- (purple) and $\sigma^-$-polarized (orange) collection in the range $0\leq B_z\leq5$ T. The results reveal a clear Zeeman splitting for the three IX trion species: a positive vertical $B_z$ field leads to a red-shift of the $\sigma^+$-polarized emission for both IX$^-_{T, inter}$ and IX$^{-}_{T,intra}$, while it induces a slight blue-shift of the $\sigma^+$-polarized emission for IX$^-_{S,inter}$. Such contrasting behaviour can also be seen in the experimental Zeeman splitting of each IX trion species, as shown in Fig. \ref{fig4}(e): IX$^-_{T, inter}$ and IX$^-_{T,intra}$ present negative slopes (negative $g$-factors), whereas IX$^-_{S,inter}$ exhibits a positive one. Linear fits of the measured Zeeman splittings reveal $g$-factors of $-15.44\pm0.06$, $-15.1\pm0.4$ and $12.31\pm0.11$ for IX$^-_{T, inter}$, IX$^-_{T,intra}$ and IX$^-_{S,inter}$, respectively. The extracted values show that IX$^-_{T, inter}$ and IX$^-_{S,inter}$ have $g$-factors with opposite signs and different magnitudes, as previously observed for IX$^0_T$ and IX$^0_S$ in the results shown in Fig. \ref{fig1}(f). More importantly, the fits also reveal that IX$^-_{T, inter}$ and IX$^-_{T,intra}$ present $g$-factors with the same sign and very similar magnitude, which confirms that the electron-hole recombination in these trions involves exactly the same CB and VB, corroborating the origin of the IX$^-_{T,intra}$ peak. We note that the particular band alignment and optical selection rules in 2$H$-MoSe$_2$/WSe$_2$ heterobilayers allow the formation of intervalley and intravalley IX trions by optical pumping of both the upper and lower CBs of MoSe$_2$, even in the neutral doping regime (see Suppl. Note 10). These results suggest the importance of revisiting the interpretation of multiple IX emission peaks in previous reports. Finally, the fits reveal a dependence of the trion $g$-factors on the carrier concentration. 
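The quoted $g$-factors follow from linear fits of the splitting between the two circularly polarized emission energies. Assuming the standard definition of the exciton valley Zeeman splitting (our notation, not spelled out in the text), the fitted model is simply:

```latex
% Valley Zeeman splitting of a trion peak: the difference between the
% \sigma^+- and \sigma^--polarized emission energies is linear in B_z,
\Delta E_Z(B_z) \;\equiv\; E_{\sigma^+}(B_z) - E_{\sigma^-}(B_z) \;=\; g \, \mu_B \, B_z \, ,
% so a linear fit of \Delta E_Z versus B_z yields the effective g-factor.
% With \mu_B \simeq 57.9\,\mu\mathrm{eV/T}, the value g = -15.44 quoted for
% IX^-_{T,inter} corresponds to a splitting of about -0.89 meV/T.
```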
Figure S7 shows the evolution of the $g$-factor of the brightest peak in our spectra (IX$^-_{T, inter}$) as a function of electron doping. Similar to the case of the $g$-factor of quasiparticles in a 2D electron gas \cite{janak1969g} and of intralayer exciton polarons in ML TMDs \cite{roch2019spin, wang2018strongly, back2017giant}, we observe a change of $g$ with increasing electron doping. This change in the effective $g$-factor with increasing carrier doping has been attributed to many-body interactions and phase-space filling effects \cite{back2017giant,wang2018strongly}. Figure \ref{fig4}(e) also shows the measured Zeeman splitting for the excitonic feature IX$^{-'}_T$. The negative slope of the Zeeman splitting of this peak with increasing $B_z$ reveals selection rules similar to those of IX$^-_{T, inter}$. However, as with the exact origin of IX$^{-'}_T$, the extracted $g$-factor of $-25.3\pm0.5$ is not yet understood. Finally, we note a quadratic red-shift of the central energy of the Zeeman-split IX ensemble peaks with increasing magnetic field (see Suppl. Note 12), which is absent at low IX densities, highlighting the importance of strong exciton-exciton interactions at high IX densities. \section{Conclusion and Perspectives} In summary, we investigate neutral and negatively charged IXs trapped in moir{\'e} confinement potentials in a gate-tunable 2$H$-WSe$_2$/MoSe$_2$ heterostructure. The homogeneous linewidths of the ensemble IX peaks in our sample ($\sim$ 5 meV for neutral and charged species) are on par with the best reported, confirming the high quality of the sample thanks to hBN encapsulation and control of the Fermi level. By scanning the excitation power over five orders of magnitude, we tune the density of optically generated neutral excitons and observe a continuous evolution of the trapped IX density, from a few isolated quantum emitters to a large ensemble. 
In the high-excitation regime, neutral IXs with spin-triplet (IX$^0_T$) and spin-singlet (IX$^0_S$) configurations are identified. The IX ensemble peaks blue-shift with increasing density due to dipolar repulsion. From this effect, we are able to estimate $N_{IX}$ and find that, even at the highest IX density we measure, $N_{IX} \ll N_{moir\acute{e}}$. We discover that the magneto-optical properties of the trapped IXs are identical regardless of their density: both the narrow quantum-dot-like and the broad ensemble IX emission originate from IXs localised in moir{\'e} potential traps with the same local atomic registry ($H_h^h$). Moreover, the optical selection rules of the IX$^0_S$ ensemble further corroborate this assignment, since $H_h^h$ is the only local atomic registry in $2H$-stacked TMD heterobilayers that results in opposite circularly-polarized transitions for IXs with spin-triplet and spin-singlet configurations. Next, the Fermi energy was tuned to create negatively charged moir{\'e}-trapped IXs, which we observe in both the low- and high-IX-density regimes. To our knowledge, the observation of charged moir{\'e}-trapped IXs in the low-density regime is novel and an exciting avenue for future investigations. Binding energies for the on-site negatively charged IX are found to be 7 meV (4 meV) for the spin-triplet (spin-singlet) trion configuration. Our magneto-optical measurements at high IX densities reveal a fine structure for negative IX trions with spin-triplet configuration; we clearly resolve intravalley (IX$^-_{T,intra}$) and intervalley (IX$^-_{T,inter}$) IX trions. We note that, using moderate excitation powers, the formation of intervalley and intravalley IX trions is observed even in the neutral-doping regime, which creates a multi-peaked PL spectrum consisting of six possible neutral and charged IX species (IX$^0_T$, IX$^0_S$, IX$^-_{T,intra}$, IX$^-_{T,inter}$, IX$^-_{S,inter}$, and IX$^{-'}_T$). 
These results suggest it could be fruitful to revisit interpretations of multiple-peaked IX spectra from MoSe$_2$/WSe$_2$ heterobilayers in previous reports, particularly those using ungrounded devices and high excitation powers. We remark that the unified picture of narrow-linewidth IX emitters and broad ensemble IX peaks reconciles contrasting IX spectra reported from nominally similar WSe$_2$/MoSe$_2$ heterobilayer samples. This unified picture provides further valuable evidence about the nature of quantum emitters in moir{\'e} heterostructures, complementing previous results \cite{seyler2019signatures,brotons2020spin,baek2020highly}. Thus, although the narrow-linewidth peaks have an inhomogeneous energy distribution similar to that of defect-related single-photon emitters in 2D materials, the properties of the moir{\'e} quantum emitters are identical to those of the ensemble IXs and arise from the intrinsic symmetry of the moir{\'e} confining potential at the specific atomic registry. Further, in the low-excitation regime, spatial isolation of trapped IXs is likely aided by dipolar repulsion, which minimizes the probability for excitons to populate neighboring moir{\'e} sites. Finally, in the high-excitation regime, the results highlight the importance of considering the spatial pinning of the ensemble IXs to the moir{\'e} lattice in order to achieve an accurate description of many-body exciton-exciton phenomena in TMD heterobilayers: the dipole interactions will depend on, and can be tuned by, the twist angle.
\section{Introduction} Online judges are systems that provide fully automated evaluation of algorithms that solve computational problems submitted by their users. The term online judge was introduced by Kurnia, Lim, and Cheang in 2001 as an online platform that supports fully automated, real-time evaluation of source code, binaries, or even textual output submitted by participants competing in a particular challenge \cite{Kurnia_2001}. In fact, online judge systems have a much longer history, dating back to 1961, when they emerged at Stanford University \cite{Forsythe_1965}. In the past, their most popular application was supporting the organization of competitive programming contests, such as the ACM International Collegiate Programming Contest (ICPC) and the International Olympiad in Informatics (IOI), and archiving the problems used during such competitions \cite{Khera_1993}. Nowadays, however, their application is much wider, including the organization of challenges dedicated to solving data science and optimization problems. Formally, an online judge system is an online service performing any of the following steps of the evaluation procedure in a cloud computing infrastructure \cite{Wasik_2018}: \begin{enumerate} \item collects, compiles sources if needed, and verifies the executability of the resultant binary $b$; \item assesses solution $b$ based on a set of specific test cases, $T$, defined for the particular computational problem $\Pi$ in a reliable, homogeneous evaluation environment; \item computes the aggregated status $s$ and the evaluation score $v$ based on the statuses and scores of all considered test cases. \end{enumerate} The most popular application of online judge systems is archiving problems from algorithmic contests. The first system in this category that gained significant popularity was UVa Online Judge, followed by many others such as Codeforces, SPOJ, TopCoder, and POJ \cite{Wasik_2018}. 
The number of programming problems stored by popular online judge systems is so large that methods have been created to classify them manually (e.g., uHunt) or automatically \cite{Yoon_2006}. Another popular application of online judge systems is the support of education, e.g., in teaching programming or algorithms and data structures, with systems such as CheckIO, CodinGame, or Codeboard. There are even online judges that extend e-learning platforms such as Moodle. Online judges such as HackerRank or Qualified also support recruiters by verifying the programming skills of job candidates. Finally, there are development platforms, such as DOMjudge or Mooshak, that allow integrating selected mechanisms provided by an online judge system into a custom web service. Detailed classifications and the features provided by various online judges are described in \cite{Ihantola2015,Combefis_2014,Wasik_2018}. The approach offered by online judge platforms has had such a large impact that the concept they utilize has been named cloud-based evaluation or, strictly following the cloud computing naming scheme, Evaluation as a Service (EaaS). At least two international workshops devoted to this topic have been organized recently \cite{Muller_2016,Hopfgartner_2015}. \section{Methods - Optil.io platform} In this paper, we briefly present how we used the Evaluation as a Service architecture to implement the Optil.io platform, which is focused on evaluating data science and optimization problems \cite{Wasik_2016}. From the user's perspective, the EaaS methodology implemented in the Optil.io platform works as follows: \begin{enumerate} \item The user submits an algorithm solving a particular computational problem through a web interface. The algorithm can be submitted in the form of source code, which will be compiled in the provided computational infrastructure, or as a binary executable. 
\item The platform verifies that the submission can be executed properly and, in the case of source code, that it compiles successfully. \item The submission is executed in a homogeneous, safe computing infrastructure on a benchmark of dedicated test cases. During the execution, the evaluation engine verifies that the submission does not exceed strict resource limitations (maximal processing time, RAM utilization, disk storage limit) and does not induce runtime errors. \item Based on the results calculated during the evaluation, a ranking of all considered submissions is presented. \end{enumerate} An important feature of the EaaS methodology is the possibility of reliable and continuous evaluation of submissions, at any moment in time, whenever they are submitted. In this respect, EaaS differs from a simple web form that allows only for the collection of submissions over the Internet. The EaaS approach developed as a part of the Optil.io platform provides reliable and continuous evaluation of algorithms solving complex optimization and data science problems. As web platforms implemented according to the EaaS architecture can easily be used to share solutions to various problems, including those originating from the data science and optimization area, they are often extended to utilize the crowdsourcing concept. The term \textit{crowdsourcing} was introduced in 2006 by Jeff Howe \cite{Howe_2006}. However, the concept of crowdsourcing, understood as outsourcing work to a vast, usually unnamed, network of people in the form of an open call, is quite old. One of its first applications was the discovery of a method for measuring the longitude of a ship in 1714, for which a prize was offered by the British government. Since that time, the concept of crowdsourcing has been utilized many times, but its rapid growth started after the development of the Internet in the 1990s \cite{Wasik_2015}. 
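The enumerated workflow above can be sketched as a minimal evaluation loop. This is a simplified illustration, not the actual Optil.io implementation: sandboxing, compilation, and RAM/disk limits are omitted, and the binary path, test-case format, and scoring callback are hypothetical.

```python
import subprocess

def evaluate(binary_path, test_cases, time_limit_s=1800):
    """Run a submitted binary on each test case and aggregate the results.

    Each test case is a dict with an ``input`` string and a ``scorer``
    callback mapping the program's standard output to a numeric score.
    """
    results = []
    for case in test_cases:
        try:
            proc = subprocess.run(
                [binary_path],
                input=case["input"],
                capture_output=True,
                text=True,
                timeout=time_limit_s,  # enforce the processing-time limit
            )
        except subprocess.TimeoutExpired:
            results.append(("TIME_LIMIT", 0))
            continue
        if proc.returncode != 0:       # runtime error detected
            results.append(("RUNTIME_ERROR", 0))
            continue
        results.append(("OK", case["scorer"](proc.stdout)))
    # Aggregated status s and evaluation score v, as in the formal definition.
    status = "OK" if all(s == "OK" for s, _ in results) else "FAILED"
    return status, sum(v for _, v in results)
```

Communication over standard input/output, as in the sketch, matches the simple channel between algorithms and the evaluation engine described below.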
Successful examples of its application are services such as Wikipedia or OpenStreetMap. An extended review of crowdsourcing systems available on the world wide web (WWW) can be found in the survey by Doan et al. \cite{Doan_2011}, and a discussion of the nature of crowdsourcing can be found in the paper by Estelles and Gonzalez \cite{Estelles-Arolas_2012}. Crowdsourcing can bring many benefits. As observed by Francis Galton in 1907, the collective opinion of a crowd of individuals can be much more precise than the opinion of any single individual from the crowd. This is the basic assumption of the so-called ``wisdom of the crowd'' \cite{Galton_1907}, which later evolved into the idea of collective intelligence (CI). CI is a more general concept, usually defined as an intelligence emerging from the collaboration, collective efforts, and competition of many individuals. In recent years, many platforms supporting crowdsourcing have been implemented, such as InnoCentive or CrowdAnalytix. Moreover, programming challenges can be very successful in solving complex science- and industry-inspired computing problems. This has been proven especially in the data mining field by the Kaggle platform \cite{Dhar_2013}. However, there are also many other successful examples of such events, e.g., Dream Challenges, the ROADEF Challenge, the VeRoLog Solver Challenge, or TopCoder. Organizing a typical challenge requires, first, publishing a description of the challenge on the Internet and, next, collecting submissions from a crowd of practitioners. Submissions are usually uploaded either as binaries, executed by the judges after the challenge submission deadline, or as textual output files generated by contestants executing their own code on a predefined benchmark set of test cases provided by the judges. 
A very similar approach is implemented in the Optil.io platform, which makes it possible not only to submit algorithms and evaluate them in the cloud, but also to organize programming challenges whose objective is to solve difficult optimization or data science problems. Challenges organized on the Optil.io platform employ continuous evaluation built on the EaaS architecture. Participants can submit source code developed in any programming language. To prevent overfitting of solutions to test instances, we can divide the tests into public and private sets. To ensure smooth communication between the algorithms and the evaluation engine, we developed a simple communication channel based on the standard input/output. Privacy and intellectual property rights to the data submitted by users are regulated by the terms of service, challenge rules, and privacy policy prepared by a cooperating lawyer and accepted by all participants of a challenge. In general, they leave the intellectual property with the author, and only the winner has to provide his or her solution to the challenge organizer in order to receive the prize. \section{Results} \begin{figure*}[ht!] \centering \includegraphics[width=0.75\textwidth]{lumberjack_ranking} \caption{The results of the challenge dedicated to solving a variant of the orienteering problem. The horizontal axis presents the progress of the challenge in days. The vertical axis presents the total, aggregated score obtained by the solutions (relative to the best submission, which is assigned 100 points). Each data point represents a single submission. The yellow line presents how the score obtained by the best solution increased over time. The top 5 contestants in the final standings are marked with colors other than red.\label{fig:ranking}} \end{figure*} We have already organized four challenges using the Optil.io platform. Two of them were focused on simple combinatorial problems: variants of the facility location and orienteering problems. 
The remaining two challenges were dedicated to supporting the PACE Challenge, organized in collaboration with other European universities: Saarland University, Friedrich-Schiller-University Jena, and University Paris Dauphine. The goal of the Parameterized Algorithms and Computational Experiments (PACE) Challenge is to investigate the applicability of algorithmic ideas studied and developed in the subfields of multivariate, fine-grained, parameterized, or fixed-parameter tractable algorithms. The 2017 and 2018 editions of PACE were hosted on the Optil.io platform. Before that, the organizers had run the solutions submitted by users only once, at the end of the contest, using manually executed scripts. The Optil.io platform provided them with an automatic, continuous evaluation method. Contestants could submit their algorithms to the platform and have them assessed almost instantly on 200 instances. 100 of them were public: participants could see their results on these instances in real time and compare them with others. The other 100 instances were private. The results of evaluation on these instances were visible only to the organizers, and after the challenge deadline they were used to determine the winner. For each instance, an algorithm could use up to 30 minutes of processor time, so a full evaluation required up to 100 hours of computation per submission. Thanks to parallelization, contestants could check their results on the public instances after just around 90 minutes. The organization of all of these challenges allowed us to observe many interesting behaviors emerging from the crowdsourcing approach. We could observe how contestants worked on improving their solutions, how much time they required to find the optimal solution for particular instances, and how the best result obtained by any of the contestants changed over time. An example plot presenting the progress of a challenge is shown in Figure \ref{fig:ranking}. 
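The evaluation times quoted above are consistent with a simple back-of-the-envelope check; the implied worker count below is our own rough estimate, not a figure reported by the platform.

```python
# Quoted figures: 200 instances (100 public + 100 private),
# up to 30 minutes of processor time per instance.
per_instance_min = 30
n_total = 200
n_public = 100

# Worst-case sequential cost of one submission, in hours:
sequential_total_h = n_total * per_instance_min / 60   # 100.0 hours, as in the text

# Public feedback arrived after roughly 90 minutes, so the implied
# degree of parallelism over the public set is about:
implied_workers = n_public * per_instance_min / 90     # roughly 33 parallel jobs
```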
\section{Acknowledgments} All authors were supported by the National Center for Research and Development, Poland [grant no. LIDER/004/103/L-5/13/NCBR/2014]. Moreover, the development tools used during the Optil.io project (JIRA and Bitbucket) were shared by the PLGrid infrastructure. \bibliographystyle{named}
\section{Introduction} The unitarity method introduced in~\cite{Bern:1994zx, Bern:1994cg} is designed to compute any scattering amplitude by matching its unitarity cuts onto the corresponding cuts of its expansion in a basis of master integrals~\cite{MasterIntegrals} with rational coefficients. Each of these coefficients can be determined quantitatively from prior knowledge of the master integrals and the singularity structure of the amplitude. Just as the master integrals form a basis for amplitudes, their unitarity cuts have uniquely identifiable analytic properties and can be used as a basis for the cuts of any amplitude. Therefore, the coefficients of the linear combination can be extracted systematically through phase-space integration (instead of complete loop integration). Recently, unitarity-based methods for one-loop amplitudes have been the subject of intense investigation, through different implementations of the cut-constraints \cite{Cachazo:2004by,Bena:2004xu,Cachazo:2004dr,Britto:2004nj,Britto:2004nc,Britto:2005ha,Brandhuber:2005jw,Britto:2006sj,Anastasiou:2006jv,Mastrolia:2006ki,Britto:2006fc,Anastasiou:2006gt,Britto:2007tt,OPP,Forde:2007mi,Kilgore:2007qr,BjerrumBohr:2007vu,Ellis:2007br,Giele:2008ve}. The holomorphic anomaly of unitarity cuts~\cite{Cachazo:2004by,Cachazo:2004kj} simplifies the phase-space integration dramatically: cut-integrals can be done analytically by evaluating residues of a complex function in spinor variables~\cite{SpinorFormalism}, reducing the problem of so-called tensor reduction to one of {\em algebraic} manipulation. Accordingly, in~\cite{Britto:2005ha,Britto:2006sj}, a systematic method was introduced to evaluate any finite four-dimensional unitarity cut, yielding compact expressions for the coefficients of the master integrals. This method was successfully applied to the last remaining pieces of the cut-constructible part of the six-gluon amplitude in QCD. 
The same method, based on the spinor integration of the phase space, was later extended to the evaluation of generalised cuts in $D$ dimensions~\cite{Anastasiou:2006jv,Mastrolia:2006ki,Britto:2006fc,Anastasiou:2006gt}, which is essential for the complete determination of any amplitude in dimensional regularization \cite{DDimU,Bern:1995db,Rozowsky:1997dm}. In this paper, we carry out the extension to the massive case of the analytic results presented in \cite{Britto:2007tt}, stemming from an original study of compact formulas for the coefficients of the master integrals \cite{Britto:2006fc}. Following the same logic as in \cite{Britto:2007tt}, we now present general formulas for the coefficients of the master integrals which can be evaluated without performing any integration. These formulas depend on input variables (indices, momenta and associated spinors) that are specific to the initial cut-integrand, which is assembled from tree-level amplitudes. The value of a given coefficient is thus obtained simply by pattern-matching, that is, by specializing the value of the input variables to be inserted in the general formulas. The implementation of the general formulas into automatic tools is straightforward, as done for the current investigation with the program {\tt S@M}~\cite{Maitre:2007jq}. In this paper, since the formulas for the coefficients are obtained via massive double cuts in $D$ dimensions, we do not present results for the coefficients of cut-free functions like tadpoles and bubbles with massless external momentum (which can be expressed in terms of tadpoles as well). The coefficients of such functions could be fixed either by imposing the expected UV-behaviour of the amplitude, as described in \cite{Bern:1995db}, or computed with other techniques applicable in massive calculations~\cite{OPP,Forde:2007mi,Kilgore:2007qr,Ellis:2007br,Giele:2008ve}. The paper is organized as follows. 
In Section 2, we describe the structure of the decomposition of one-loop amplitudes in terms of master integrals. In Section 3, we explain the double-cut integration with spinor variables, which leads to the formulas for the coefficients of the master integrals, presented in Section 4. In Sections 5 and 6, we apply our formulas to two examples of one-loop scattering amplitudes, respectively $gH \to gg$ and $gg \to gg$, where the Higgs mass and the mass of the internal fermion (in both cases) are kept as free parameters. In Section 7, we present both analytical and numerical methods to obtain, finally, the explicit coefficients of the dimensionally shifted master integrals. In Appendix A, we record the translation between our basis of integrals and the ones used in the literature for the examples discussed in Sections 5 and 6. In Appendix B, we present a proof of the decomposition into the dimensionally shifted basis, with rational coefficients independent of $\epsilon$. In other words, we prove that the coefficients given by our algebraic expressions are polynomial in our extra-dimensional variable $u$. As a byproduct, we have produced equivalent and simpler algebraic functions for the evaluation of the coefficients. \section{Decomposition in terms of master integrals} We define the $n$-point scalar function with non-uniform masses as follows:\footnote{For ease of presentation, we are omitting the prefactor $i(-1)^{n+1}(4\pi)^{D/2}$ (which was included for example in \cite{Bern:1995db}).} \begin{eqnarray} I_n(M_1,M_2,m_1,\ldots,m_{n-2}) \equiv \int {d^{4-2\epsilon} p \over (2\pi)^{4-2\epsilon}}{1\over (p^2-M_1^2) ((p-K)^2-M_2^2) \prod_{j=1}^{n-2} ((p-P_j)^2-m_{j}^2)}.~~~\label{n-scalar} \end{eqnarray} Giele, Kunszt and Melnikov \cite{Giele:2008ve} have given the decomposition of any one-loop amplitude in $D$ dimensions in terms of master integrals, represented here pictorially. 
\begin{eqnarray} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \GOval(0,0)(27,27)(0){1} \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$A_n^{(D)}$}}} \end{picture} \hspace*{1.0cm} &=& \qquad \! e^{(0)} \hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-17,-20)(17,-20) \Line(-17,-20)(-25,10) \Line(17,-20)(25,10) \Line(-25,10)(0,25) \Line(+25,10)(0,25) \Line(0,25)(0,35) \Line(-25,10)(-32,15) \Line(+25,10)(+32,15) \Line(-17,-20)(-25,-30) \Line( 17,-20)(25,-30) \Text(0,0)[]{{\tiny{$I_5^{(D)}$}}} \end{picture} \nonumber \\ & & \nonumber \\ & & + \quad d^{(0)} \hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$I_4^{(D)}$}}} \end{picture} \hspace*{1cm} + \quad d^{(2)} \hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$I_4^{(D+2)}$}}} \end{picture} \hspace*{1cm} + \quad d^{(4)} \hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$I_4^{(D+4)}$}}} \end{picture} \nonumber \\ & & \nonumber \\ & & + \quad c^{(0)}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,0) \Line(-20,20)(20,0) \Line(-20,-20)(-20,20) \Line(-20,-20)(-30,-30) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(-5,0)[]{{\tiny{$I_3^{(D)}$}}} \Line(-20,20)(-30,30) \end{picture} \hspace*{1cm} + \quad c^{(2)}\hspace*{1cm} \begin{picture}(0,0)(0,0) 
\SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,0) \Line(-20,20)(20,0) \Line(-20,-20)(-20,20) \Line(-20,-20)(-30,-30) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(-3,0)[]{{\tiny{$I_3^{\!(\!D\!+\!2\!)}$}}} \Line(-20,20)(-30,30) \end{picture} \nonumber \\ & & \nonumber \\ & & + \quad b^{(0)}\hspace*{1.0cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(0,0)[]{{\tiny{$I_2^{(D)}$}}} \Oval(0,0)(20,20)(0) \end{picture} \hspace*{1.0cm} + \quad b^{(2)}\hspace*{1.0cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(0,0)[]{{\tiny{$I_2^{(D+2)}$}}} \Oval(0,0)(20,20)(0) \end{picture} \hspace*{1.0cm} + \quad a^{(0)}\hspace*{1.0cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(0,-20)(-10,-30) \Line(0,-20)( 10,-30) \Text(0,0)[]{{\tiny{$I_1^{(D)}$}}} \Oval(0,0)(20,20)(0) \end{picture} \label{DdimMasterDeco} \end{eqnarray} \vspace*{0.5cm} \noindent Here, with reference to \cite{Giele:2008ve}: {\it i)} we have absorbed the residual $D$-dependence of the coefficients in the definition of the master integrals; {\it ii)} for ease of notation, we have given as understood the sums on the partition of the $n$-points of the amplitude in the number of points corresponding to each master integral. Thus, the coefficients $e,d,c,b,a$ in Eq.(\ref{DdimMasterDeco}) are independent of $D$. If, on both sides of Eq.(\ref{DdimMasterDeco}), we apply the standard decomposition of the $D=4-2\epsilon$ dimensional loop variable, $L$, in a four-dimensional component, $\tilde{\ell}$, and its $(-2\epsilon)$-dimensional orthogonal complement, $\mu$, \begin{eqnarray} L = \W \ell + \mu \ . 
\Label{L-stddeco} \end{eqnarray} then the integration measure becomes \begin{eqnarray} \int d^{4-2\epsilon} L = \int d^{-2\epsilon}\mu \ \int d^4 \W \ell \ , \end{eqnarray} namely the composition of a four-dimensional integration and an integration over a $(-2\epsilon)$-dimensional mass-like parameter. By taking the $\mu$-integral to be understood, the four-dimensional integration on both sides of Eq.(\ref{DdimMasterDeco}), can be read as follows: \begin{eqnarray} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \GOval(0,0)(27,27)(0){1} \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$A_n^{(4)}$}}} \end{picture} \hspace*{1.0cm} &=& \Big(e^{(0)} \ \pi(\mu^2) + d_2(\mu^2) \Big) \hspace*{0.7cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$I_4^{(4)}$}}} \end{picture} \hspace*{1.0cm} + c_1(\mu^2)\hspace*{0.7cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,0) \Line(-20,20)(20,0) \Line(-20,-20)(-20,20) \Line(-20,-20)(-30,-30) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(-5,0)[]{{\tiny{$I_3^{(4)}$}}} \Line(-20,20)(-30,30) \end{picture} \hspace*{1.0cm} + b_1(\mu^2)\hspace*{1.0cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(0,0)[]{{\tiny{$I_2^{(4)}$}}} \Oval(0,0)(20,20)(0) \end{picture} \hspace*{1.0cm} + a^{(0)}\hspace*{0.7cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(0,-20)(-10,-30) \Line(0,-20)( 10,-30) \Text(0,0)[]{{\tiny{$I_1^{(4)}$}}} \Oval(0,0)(20,20)(0) \end{picture} \label{4dimMasterDeco} \end{eqnarray} \vspace*{0.5cm} \noindent where $d_n(\mu^2), c_n(\mu^2),$ and $b_n(\mu^2)$ are polynomials of degree $n$ in $\mu^2$, as discussed in Appendix 
\ref{polynomialproof}, \begin{eqnarray} d_2(\mu^2) &=& d^{(0)} + d^{(2)} \mu^2 + d^{(4)} (\mu^2)^2 \ , \\ c_1(\mu^2) &=& c^{(0)} + c^{(2)} \mu^2 \ , \\ b_1(\mu^2) &=& b^{(0)} + b^{(2)} \mu^2 \ , \end{eqnarray} whereas $\pi(\mu^2)$ is non-polynomial in $\mu^2$ and corresponds to the coefficients of the reduction of the pentagon to boxes, which occurs in $D=4$. The polynomial structure of $d_n(\mu^2), c_n(\mu^2),$ and $b_n(\mu^2)$ is responsible for the dimensionally shifted integrals appearing in Eq.(\ref{DdimMasterDeco}), because the $\mu$-integration can be performed trivially by absorbing the extra powers of $\mu^2$ into the integration measure, according to \cite{Bern:1995db}: \begin{eqnarray} \int {d^{ - 2 \epsilon} \mu \over (2\pi)^{ - 2 \epsilon}} \ (\mu^2)^r \ f( \mu^2) &=& - \epsilon (1-\epsilon) (2-\epsilon) \cdots (r-1-\epsilon) (4\pi)^r \int {d^{2r - 2 \epsilon} \mu \over (2\pi)^{2r - 2 \epsilon}} \ f( \mu^2) \ . \end{eqnarray} The presence of $\pi(\mu^2)$ in the coefficient of the four-dimensional box is a unique signature of the pentagon. We conclude that the reconstruction of the four-dimensional kernel of any one-loop amplitude, given in Eq.(\ref{4dimMasterDeco}), contains all the information for the complete reconstruction of the amplitude in $D$ dimensions, given in Eq.(\ref{DdimMasterDeco}). 
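As a concrete check of the dimensional-shift identity, consider its lowest nontrivial case, $r=1$ (this specialization is ours, obtained simply by setting $r=1$ above):

```latex
% For r = 1 the prefactor -\epsilon(1-\epsilon)\cdots(r-1-\epsilon)
% collapses to -\epsilon, and the measure is shifted by two dimensions:
\int {d^{-2\epsilon} \mu \over (2\pi)^{-2\epsilon}} \ \mu^2 \ f(\mu^2)
\;=\; -\,\epsilon \, (4\pi) \int {d^{2-2\epsilon} \mu \over (2\pi)^{2-2\epsilon}} \ f(\mu^2) \ .
% This is how the term d^{(2)}\mu^2 of d_2(\mu^2) generates the
% coefficient of the shifted box I_4^{(D+2)} in Eq.(\ref{DdimMasterDeco}).
```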
In the following pages, we present the general formulas of the coefficients of the box, $I_4^{(4)}$, triangle, $I_3^{(4)}$, and bubble, $I_2^{(4)}$, obtained from the double cut of Eq.(\ref{4dimMasterDeco}), \begin{eqnarray} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \GOval(0,0)(27,27)(0){1} \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$A_n^{(4)}$}}} \DashLine(0,30)(0,-30){3} \end{picture} \hspace*{1.0cm} &=& \Big(e^{(0)} \ \pi(\mu^2) + d_2(\mu^2) \Big) \hspace*{0.7cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \Text(0,0)[]{{\tiny{$I_4^{(4)}$}}} \DashLine(0,30)(0,-30){3} \end{picture} \hspace*{1.0cm} + c_1(\mu^2)\hspace*{0.7cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,-20)(20,0) \Line(-20,20)(20,0) \Line(-20,-20)(-20,20) \Line(-20,-20)(-30,-30) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(-5,0)[]{{\tiny{$I_3^{(4)}$}}} \Line(-20,20)(-30,30) \DashLine(0,30)(0,-30){3} \end{picture} \hspace*{1.0cm} + b_1(\mu^2)\hspace*{1.0cm} \begin{picture}(0,0)(0,0) \SetScale{0.65} \SetWidth{1.0} \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Text(0,0)[]{{\tiny{$I_2^{(4)}$}}} \Oval(0,0)(20,20)(0) \DashLine(0,30)(0,-30){3} \end{picture} \label{4dimCutMasterDeco} \end{eqnarray} \vspace*{0.5cm} \noindent Since the formulas for the coefficients are obtained via double cuts, we do not present the results for the coefficients of cut-free functions like tadpoles and bubbles with massless external momentum (which can be expressed in terms of tadpoles as well). 
Their coefficients could be fixed either by imposing the expected UV-behaviour of the amplitude, as described in \cite{Bern:1995db}, or computed with alternative techniques~\cite{OPP,Forde:2007mi,Kilgore:2007qr,Ellis:2007br,Giele:2008ve}. \section{The double cut phase space integration} In this section, we review the $D$-dimensional unitarity method \cite{Anastasiou:2006jv, Anastasiou:2006gt} as applied in cases with arbitrary masses \cite{Britto:2006fc}. Our goal is to describe the structure of the cut integrand, from which we will directly read off the coefficients using the formulas of the following section. The formulas will be the massive analogs of the ones in \cite{Britto:2007tt}. Recall the phase space integration of a standard (double) cut in $D=4-2\epsilon$ dimensions. We use the usual decomposition of the $D$-dimensional loop variable, $L$, into a four-dimensional component, $\W \ell$, and a transverse $(-2\epsilon)$-dimensional remnant, $\mu$, \begin{eqnarray} L = \W \ell + \mu \ . \Label{L-stddeco-2} \end{eqnarray} The integration measure becomes \begin{eqnarray} \int d^{4-2\epsilon} L = \int d^{-2\epsilon}\mu \int d^4 \W \ell = {(4\pi)^{\epsilon} \over \Gamma(-\epsilon)} \int d\mu^2 \ (\mu^2)^{-1-\epsilon} \int d^4 \W \ell \ , \end{eqnarray} namely the composition of a four-dimensional integration and an integration over a $(-2\epsilon)$-dimensional mass-like parameter. In order to write the four-dimensional part in terms of spinor variables associated with a massless momentum, we proceed with the following change of variables: \begin{eqnarray} \W \ell= \ell+z K,~~~~~\ell^2=0, ~~~\Label{changing} \end{eqnarray} where $\ell$ is a massless momentum and $K$ is the momentum across the cut, fixed by the kinematics. Accordingly, the four-dimensional integral measure becomes \begin{eqnarray} \int d^4\W \ell= \int dz~ d^4\ell~ \delta^+(\ell^2) (2 \ell \cdot K) \ .
\end{eqnarray} The Lorentz-invariant phase-space (LIPS) of a double cut in the $K^2$-channel is defined by the presence of two $\delta$-functions imposing the cut conditions: \begin{eqnarray} \int d^{4-2\epsilon} \Phi = \int d^{4-2\epsilon} L \ \delta(L^2-M_1^2) \ \delta((L-K)^2-M_2^2). \end{eqnarray} Here $M_1$ and $M_2$ are the masses of the cut lines. By using the decomposition of the loop variable in Eq.(\ref{L-stddeco}), the four-dimensional integral can be separated, so that \begin{eqnarray} \int d^{4-2\epsilon} \Phi = {(4\pi)^{\epsilon} \over \Gamma(-\epsilon)} \int d\mu^2 \ (\mu^2)^{-1-\epsilon} \ \int d^{4} \phi, \end{eqnarray} where the four-dimensional LIPS is \begin{eqnarray} \int d^{4} \phi & = & \int d^4 \W \ell \ \delta(\W \ell^2- M_1^2-\mu^2) \ \delta((\W \ell-K)^2-M_2^2-\mu^2) \ . \end{eqnarray} The change of variables in Eq.(\ref{changing}), and the $z$-integration (trivialized by the presence of $\delta$'s), bring the four-dimensional LIPS to the form \begin{eqnarray} \int d^4 \phi &=& \int d^4\ell \ \delta^+(\ell^2) \ \delta((1-2z)K^2-2 \ell \cdot K+M_1^2-M_2^2), \Label{double-cut-measure} \end{eqnarray} where \begin{eqnarray} z = { (K^2+M_1^2-M_2^2)- \sqrt{\Delta[K, M_1, M_2]- 4 K^2 \mu^2}\over 2 K^2},~~~~\Label{solve-z} \end{eqnarray} with \begin{eqnarray} \Delta[K,M_1,M_2]\equiv (K^2)^2+(M_1^2)^2+(M_2^2)^2-2 K^2 M_1^2 -2 K^2 M_2^2- 2M_1^2 M_2^2 \ . ~~~\Label{Delta-KMM} \end{eqnarray} We remark that the value of $z$ in Eq.(\ref{solve-z}) is frozen to be the proper root ($K^2>0$) of the quadratic argument of $\delta(z(1-z)K^2 + z(M_1^2-M_2^2)-M_1^2-\mu^2)$, coming from $\delta(\W \ell^2- M_1^2-\mu^2)$.
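As a quick numeric sanity check (our own illustration, at an arbitrarily chosen kinematic point), one can verify that the frozen value of $z$ in Eq.(\ref{solve-z}) is indeed a root of the quadratic $z(1-z)K^2 + z(M_1^2-M_2^2)-M_1^2-\mu^2$:

```python
# Numeric check that the frozen z of Eq.(solve-z) solves the quadratic
# delta-function argument coming from the cut conditions.
import math

def Delta(K2, M1sq, M2sq):
    """Kaellen-type function Delta[K, M1, M2] of Eq.(Delta-KMM)."""
    return K2**2 + M1sq**2 + M2sq**2 - 2*K2*M1sq - 2*K2*M2sq - 2*M1sq*M2sq

def z_frozen(K2, M1sq, M2sq, mu2):
    """Proper root of the quadratic, Eq.(solve-z), for K^2 > 0."""
    return ((K2 + M1sq - M2sq)
            - math.sqrt(Delta(K2, M1sq, M2sq) - 4*K2*mu2)) / (2*K2)

K2, M1sq, M2sq, mu2 = 10.0, 1.0, 2.0, 0.5   # sample kinematic point (our choice)
z = z_frozen(K2, M1sq, M2sq, mu2)
quadratic = z*(1 - z)*K2 + z*(M1sq - M2sq) - M1sq - mu2
assert abs(quadratic) < 1e-12
```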
For later convenience, one can redefine the $\mu^2$-integral measure as \begin{eqnarray*} \int d\mu^2 (\mu^2)^{-1-\epsilon} = \left( { \Delta[K,M_1,M_2]\over 4 K^2}\right)^{-\epsilon} \int_0^1 du \ u^{-1-\epsilon}, \end{eqnarray*} where the relation between $u$ and $\mu^2$ is given by \begin{eqnarray} u\equiv {4 K^2\mu^2 \over \Delta[K,M_1,M_2]},~~~~~~~\mu^2=\left( { u\Delta[K,M_1,M_2]\over 4 K^2}\right).~~~\Label{u-def}\end{eqnarray} We observe that the domain of $u$, i.e., $u\in [0,1]$, follows from the kinematical constraints, as discussed in \cite{Britto:2006fc}. Finally, after the above rearrangement, the $D$-dimensional Lorentz-invariant phase-space of a double cut in the $K^2$-channel can be written in a suitable form, \begin{eqnarray} \int d^{4-2\epsilon} \Phi &=& \chi(\epsilon,K,M_1,M_2) \ \int_0^1 du \ u^{-1-\epsilon} \int d^4\phi, \label{Ddim:2PLEcut} \end{eqnarray} where \begin{eqnarray} \chi(\epsilon,K,M_1,M_2) &=& {(4\pi)^{\epsilon} \over \Gamma(-\epsilon)} \left( {\Delta[K,M_1,M_2]\over 4 K^2} \right)^{-\epsilon}, \end{eqnarray} and where $d^4\phi$ was given in Eq.(\ref{double-cut-measure}). By using the definition of $u$ given in Eq.(\ref{u-def}), we can write \begin{eqnarray} z ={ {\alpha} -{\beta} \sqrt{1-u}\over 2},~~~~\Label{z-sol-u}\end{eqnarray} where \begin{eqnarray} {\alpha} = { K^2+M_1^2-M_2^2\over K^2},~~~~{\beta}={\sqrt{\Delta[K, M_1, M_2]}\over K^2}.~~~\Label{ab-def}\end{eqnarray} Notice that when $M_1=M_2=0$ we have ${\alpha}={\beta}=1$, thus reproducing the massless case. A useful relation between $z$ and $u$ is the following: \begin{eqnarray} (1-2z) +{ M_1^2-M_2^2\over K^2} = {\beta} \sqrt{1-u}.~~~\Label{z-rel-1}\end{eqnarray} This relation will be used in Appendix \ref{polynomialproof} to prove that the coefficients given in this paper are polynomials in $u$, or equivalently $\mu^2$. As discussed in the previous section, this feature is essential for the straightforward reconstruction of dimensionally-shifted master integrals.
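A similar numeric check (again at an assumed sample kinematic point) confirms that substituting $\mu^2 = u\,\Delta/(4K^2)$ from Eq.(\ref{u-def}) into Eq.(\ref{solve-z}) reproduces Eq.(\ref{z-sol-u}), and that the relation Eq.(\ref{z-rel-1}) holds for all $u$:

```python
# Consistency of the two parametrizations of z: Eq.(solve-z) in terms
# of mu^2, and Eq.(z-sol-u) in terms of u, related by Eq.(u-def).
import math

def Delta(K2, M1sq, M2sq):
    """Kaellen-type function Delta[K, M1, M2] of Eq.(Delta-KMM)."""
    return K2**2 + M1sq**2 + M2sq**2 - 2*K2*M1sq - 2*K2*M2sq - 2*M1sq*M2sq

K2, M1sq, M2sq = 10.0, 1.0, 2.0   # sample kinematic point (our choice)
alpha = (K2 + M1sq - M2sq) / K2
beta  = math.sqrt(Delta(K2, M1sq, M2sq)) / K2

for u in (0.0, 0.25, 0.5, 0.99):
    mu2 = u * Delta(K2, M1sq, M2sq) / (4*K2)          # Eq.(u-def)
    z_mu = ((K2 + M1sq - M2sq)
            - math.sqrt(Delta(K2, M1sq, M2sq) - 4*K2*mu2)) / (2*K2)
    z_u = (alpha - beta*math.sqrt(1 - u)) / 2          # Eq.(z-sol-u)
    assert abs(z_mu - z_u) < 1e-12
    # the relation Eq.(z-rel-1) used in Appendix (polynomialproof):
    assert abs((1 - 2*z_u) + (M1sq - M2sq)/K2 - beta*math.sqrt(1 - u)) < 1e-12
```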
\bigskip The main feature of a double-cut LIPS parametrized as in Eqs.(\ref{Ddim:2PLEcut},\ref{double-cut-measure}) is that the kernel of the integration is represented by the four-dimensional integral. In fact, the $u$-integration (or equivalently, the $\mu^2$-integration) is simply responsible for the appearance of {\it dimensionally shifted} master integrals. Thus, our interest in the extraction of the coefficients of the master integrals from a four-dimensional massive double cut, see Eq.(\ref{4dimCutMasterDeco}), translates into focusing the discussion only on $\int d^4\phi$. The $D$-dimensional double cut of any one-loop amplitude is, in general form, \begin{eqnarray} \int d^{4-2\epsilon} \Phi \ A_L^{\rm tree} \times A_R^{\rm tree} = \chi(\epsilon,K,M_1,M_2) \ \int_0^1 du \ u^{-1-\epsilon} \int d^4\phi \ A_L^{\rm tree} \times A_R^{\rm tree} \ , \end{eqnarray} where $A_L^{\rm tree}$ and $A_R^{\rm tree}$ are the two tree-level amplitudes on the left and right side of the cut. As discussed above, the kernel of the integration is represented by the four-dimensional part, \begin{eqnarray} \int d^4\phi \ A_L^{\rm tree} \times A_R^{\rm tree} \ . \end{eqnarray} We proceed from the formula (\ref{double-cut-measure}) by introducing spinor variables according to \cite{Cachazo:2004kj}, \begin{eqnarray} \int d^4\ell \ \delta^+(\ell^2) &=& \int \vev{\ell~d\ell}[\ell~d\ell] \int t ~dt \end{eqnarray} and performing the integral over $t$ trivially, with the second delta function.
The general expression of the double cut integral will then be \begin{eqnarray} & & \int d^4\phi \ A_L^{\rm tree} \times A_R^{\rm tree} \nonumber \\ &=& \int d^4\ell \delta^+(\ell^2)\delta((1-2z)K^2-2 \ell \cdot K+M_1^2-M_2^2) { \prod_j \gb{a_j|\W\ell|b_j}\over \prod_i ((\W\ell-K_i)^2-m_i^2-\mu^2)} ~~~~\Label{gen-integrand} \\ & = & \int d^4\ell \delta^+(\ell^2)\delta((1-2z)K^2-2 \ell \cdot K+M_1^2-M_2^2) { \prod_j \gb{a_j|\ell+zK|b_j}\over \prod_i ( K_i^2+M_1^2-m_i^2-2 (\ell+zK)\cdot K_i)} \nonumber \\& = & \int \vev{\ell~d\ell}[\ell~d\ell] \int t dt \delta((1-2z)K^2+t\gb{\ell|K|\ell}+M_1^2-M_2^2) { \prod_j (z\gb{a_j| K|b_j}+ t\gb{\ell|P_j|\ell}) \over \prod_i ( K_i^2+M_1^2-m_i^2-2 z K\cdot K_i+t \gb{\ell|K_i|\ell})} \nonumber \end{eqnarray} Here we have used $\W\ell^2= M_1^2+\mu^2$. Notice that $\gb{a|\W\ell|b}=-2\W\ell\cdot P$, with $P=|a\rangle [b|$. After using the remaining delta function to perform the integral over $t$, we have \begin{eqnarray} \int \vev{\ell~d\ell}[\ell~d\ell]\left((1-2z)+{M_1^2-M_2^2\over K^2} \right){ (K^2)^{n+1}\over \gb{\ell|K|\ell}^{n+2}} {\prod_{j=1}^{n+k} \gb{\ell|R_j|\ell}\over \prod_{i=1}^k \gb{\ell|Q_i|\ell}}~~~\Label{frame}\end{eqnarray} where \begin{eqnarray} P_j&=&|a_j\rangle [b_j| \ , \\ R_j & = &-\left((1-2z)+{M_1^2-M_2^2\over K^2} \right)P_j- { z(2P_j\cdot K)\over K^2}K,~~~~\Label{R-def} \\ Q_j & = & -\left((1-2z) +{M_1^2-M_2^2\over K^2} \right)K_j+ {K_j^2+M_1^2-m_j^2-2 z K\cdot K_j\over K^2}K~~~~\Label{Q-def} \end{eqnarray} The vectors $P_j, R_j, Q_j$ do not depend on the loop variables, but rather only on the external kinematics, and especially on the momentum across the cut, $K$. Applying (\ref{frame}) to the master integrals, we find the following results. \begin{itemize} \item (1) Bubble: $k=0$, $n+k=0$. \begin{eqnarray} \int \vev{\ell~d\ell}[\ell~d\ell]\left((1-2z)+{M_1^2-M_2^2\over K^2} \right){ (K^2)\over \gb{\ell|K|\ell}^2}\end{eqnarray} \item (2) Triangle: $k=1$, $n+k=0$. 
\begin{eqnarray} \int \vev{\ell~d\ell}[\ell~d\ell]\left((1-2z)+{M_1^2-M_2^2\over K^2} \right){ 1\over \gb{\ell|K|\ell}\gb{\ell|Q|\ell}}\end{eqnarray} \item (3) Box: $k=2$, $n+k=0$. \begin{eqnarray} \int \vev{\ell~d\ell}[\ell~d\ell]\left((1-2z)+{M_1^2-M_2^2\over K^2} \right){ (K^2)^{-1}\over \gb{\ell|Q_1|\ell}\gb{\ell|Q_2|\ell}}\end{eqnarray} \end{itemize} These formulas are the extension to the massive case of the corresponding ones given in \cite{Britto:2007tt}. We notice that the masses enter only through the definitions of $P_j$, $R_j$, and $Q_j$. Therefore the spinor integration performed in the massless case \cite{Britto:2007tt} remains valid in this case as well. \bigskip The expression of the cut-integrand in Eq.(\ref{frame}), with its indices, $n$ and $k$, and its vectors $P_j$, $R_j$, and $Q_j$, is the key to constructing the coefficients. In the next section we present general formulas for the coefficients of the master-integrals (boxes, triangles and bubbles), which depend on exactly these input parameters. Accordingly, given a specific amplitude (or integral), one can obtain its decomposition in terms of master-integrals {\em without any integration}. Every coefficient is obtained from the general formulas simply by substituting the input parameters characterizing the specific amplitude. These parameters are obtained by pattern-matching onto the reference form in Eq.(\ref{frame}). \section{\label{sec:formulas}Formulas for the coefficients of master integrals} The coefficients of master integrals are obtained by the procedure described in the previous section, which is a straightforward generalization of the massless case \cite{Britto:2007tt}. We list the results in this section. In fact, the expressions take the same form as in the massless case; the mass dependence enters directly through the definitions (\ref{R-def}) and (\ref{Q-def}), and through these formulas into the definitions (\ref{box-null}), (\ref{tri-null}).
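A defining property of the massless projections $P_{sr,1}$, $P_{sr,2}$ of Eq.(\ref{box-null}) below (and of their triangle analogues in Eq.(\ref{tri-null}), obtained by replacing $Q_r \to K$) is that they are null vectors, so that their spinors are well defined. A minimal numeric sketch (our own illustration, with arbitrarily chosen massive momenta) makes this explicit:

```python
# The massless projections P_{sr,1}, P_{sr,2} built from two massive
# momenta Q_s, Q_r via Delta_sr are null by construction.
import math

def mdot(a, b):
    """Minkowski product with signature (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

Qs = (3.0, 1.0, 0.0, 1.0)   # sample massive momenta (our choice)
Qr = (2.0, 0.0, 1.0, 0.0)

Delta_sr = (2*mdot(Qs, Qr))**2 - 4*mdot(Qs, Qs)*mdot(Qr, Qr)
for sign in (+1, -1):
    x = (-2*mdot(Qs, Qr) + sign*math.sqrt(Delta_sr)) / (2*mdot(Qr, Qr))
    P = tuple(qs + x*qr for qs, qr in zip(Qs, Qr))
    assert abs(mdot(P, P)) < 1e-10   # P_{sr,1} and P_{sr,2} are massless
```

Replacing `Qr` by the cut momentum $K$ checks the triangle projections of Eq.(\ref{tri-null}) in the same way.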
\subsection{Box coefficient} \vspace*{1cm} \begin{figure}[h] \begin{center} \begin{picture}(0,0)(0,0) \SetScale{0.7} \SetWidth{1.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \DashLine(0,30)(0,-30){3} \Text(30,30)[]{\footnotesize{$K_r$} } \Text(-30,+30)[]{\footnotesize{$K_s$} } \Text(+40,-30)[]{\footnotesize{$K-K_r$} } \end{picture} \hspace*{4.0cm} \begin{picture}(0,0)(0,0) \SetScale{0.7} \SetWidth{1.0} \Line(-25,0)(0,25) \Line(0,25)(25,0) \Line(25,0)(0,-25) \Line(0,-25)(-25,0) \Line(-35,0)(-25,0) \Line(25,0)(35,0) \Line(0,25)(0,35) \Line(0,-25)(0,-35) \DashLine(-12,30)(-12,-30){3} \Text(-30,0)[]{\footnotesize{$K$} } \Text(10,+30)[]{\footnotesize{$K_r$} } \Text(10,-30)[]{\footnotesize{$K_s$} } \end{picture} \end{center} \vspace*{0.5cm} \caption{Double-cut of Box functions} \label{fig:BoxCoeff} \end{figure} \noindent The formula for the coefficient of either of the box-functions with external kinematics as shown in Fig.\ref{fig:BoxCoeff} is \begin{eqnarray} C[Q_r,Q_s,K] & = & {(K^2)^{2+n}\over 2}\left({\prod_{j=1}^{k+n} \gb{P_{sr,1}|R_j |P_{sr,2}}\over \gb{P_{sr,1}|K |P_{sr,2}}^{n+2}\prod_{t=1,t\neq r,s}^k \gb{P_{sr,1}|Q_t |P_{sr,2}}}+ \{P_{sr,1}\leftrightarrow P_{sr,2}\} \right),~~\Label{box-exp}\end{eqnarray} where \begin{eqnarray} \Delta_{sr} &=& (2Q_s \cdot Q_r)^2-4 Q_s^2 Q_r^2 \nonumber \\ P_{sr,1} &=& Q_s + \left( {-2Q_s \cdot Q_r + \sqrt{\Delta_{sr}}\over 2Q_r^2} \right) Q_r \nonumber \\ P_{sr,2} &=& Q_s + \left( {-2Q_s \cdot Q_r - \sqrt{\Delta_{sr}}\over 2Q_r^2} \right) Q_r~~~~\Label{box-null} \end{eqnarray} \subsection{Triangle coefficient} \vspace*{1cm} \begin{figure}[h] \begin{center} \begin{picture}(0,0)(0,0) \SetScale{0.7} \SetWidth{1.0} \Line(+20,-20)(-20,0) \Line(+20,20)(-20,0) \Line(+20,-20)(+20,20) \Line(+20,-20)(+30,-30) \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(+20,20)(+30,30) \DashLine(0,30)(0,-30){3}
\Text(-30,0)[]{\footnotesize{$K$}} \Text(+30,+30)[]{\footnotesize{$K_s$}} \end{picture} \end{center} \vspace*{0.5cm} \caption{Double-cut of a Triangle} \label{fig:TriCoeff} \end{figure} \noindent The formula for the coefficient of the triangle function with external kinematics as shown in Fig.\ref{fig:TriCoeff} is \begin{eqnarray} C[Q_s,K] & = & { (K^2)^{1+n}\over 2}\frac{1}{(\sqrt{\Delta_s})^{n+1}}\frac{1}{(n+1)! \vev{P_{s,1}~P_{s,2}}^{n+1}} \nonumber \\ & & \times \frac{d^{n+1}}{d\tau^{n+1}}\left.\left({\prod_{j=1}^{k+n} \vev{P_{s,1}-\tau P_{s,2} |R_j Q_s|P_{s,1}-\tau P_{s,2}}\over \prod_{t=1,t\neq s}^k \vev{P_{s,1}-\tau P_{s,2}|Q_t Q_s |P_{s,1}-\tau P_{s,2}}} + \{P_{s,1}\leftrightarrow P_{s,2}\}\right)\right|_{\tau=0},~~~~~\Label{tri-exp}\end{eqnarray} where \begin{eqnarray} \Delta_{s} &=& (2Q_s \cdot K)^2-4 Q_s^2 K^2 \nonumber \\ P_{s,1} &=& Q_s + \left({-2Q_s \cdot K + \sqrt{\Delta_{s}}\over 2K^2} \right) K \nonumber \\ P_{s,2} &=& Q_s + \left({-2Q_s \cdot K - \sqrt{\Delta_{s}}\over 2K^2} \right) K ~~~~\Label{tri-null} \end{eqnarray} Note that the triangle coefficient is present only when $n\geq -1$. 
\subsection{Bubble coefficient} \vspace*{1cm} \begin{figure}[h] \begin{center} \begin{picture}(0,0)(0,0) \SetScale{0.7} \SetWidth{1.0} \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Oval(0,0)(20,20)(0) \DashLine(0,30)(0,-30){3} \Text(-30,0)[]{\footnotesize{$K$}} \end{picture} \end{center} \vspace*{0.5cm} \caption{Double-cut of a Bubble} \label{fig:BubCoeff} \end{figure} \noindent The formula for the coefficient of the bubble function with the external momentum $K$, shown in Fig.\ref{fig:BubCoeff}, is \begin{eqnarray} C[K] = (K^2)^{1+n} \sum_{q=0}^n {(-1)^q\over q!} {d^q \over ds^q}\left.\left( {\cal B}_{n,n-q}^{(0)}(s)+\sum_{r=1}^k\sum_{a=q}^n \left({\cal B}_{n,n-a}^{(r;a-q;1)}(s)-{\cal B}_{n,n-a}^{(r;a-q;2)}(s)\right)\right)\right|_{s=0},~~~~~\Label{bub-exp} \end{eqnarray} where \begin{eqnarray} {\cal B}_{n,t}^{(0)}(s)\equiv {d^n\over d\tau^n}\left.\left( {1 \over n! [\eta|\W \eta K|\eta]^{n}} {(2\eta\cdot K)^{t+1} \over (t+1) (K^2)^{t+1}}{\prod_{j=1}^{n+k} \vev{\ell|R_j (K+s\eta)|\ell}\over \vev{\ell~\eta}^{n+1} \prod_{p=1}^k \vev{\ell| Q_p(K+s\eta)|\ell}}|_{\ket{\ell}\to |K-\tau \W \eta|\eta] }\right)\right|_{\tau= 0},~~~\Label{cal-B-0}\end{eqnarray} \begin{eqnarray} & & {\cal B}_{n,t}^{(r;b;1)}(s) \equiv {(-1)^{b+1}\over b! \sqrt{\Delta_r}^{b+1} \vev{P_{r,1}~P_{r,2}}^b}{d^b \over d\tau^{b}} \left({1\over (t+1)} {\gb{P_{r,1}-\tau P_{r,2}|\eta|P_{r,1}}^{t+1}\over \gb{P_{r,1}-\tau P_{r,2}|K|P_{r,1}}^{t+1}}\right. \nonumber \\ & & \times \left.\left. {\vev{P_{r,1}-\tau P_{r,2}|Q_r \eta|P_{r,1}-\tau P_{r,2}}^{b} \prod_{j=1}^{n+k} \vev{P_{r,1}-\tau P_{r,2}|R_j (K+s\eta)|P_{r,1}-\tau P_{r,2}}\over \vev{P_{r,1}-\tau P_{r,2}|\eta K|P_{r,1}-\tau P_{r,2}}^{n+1} \prod_{p=1,p\neq r}^k \vev{P_{r,1}-\tau P_{r,2}| Q_p(K+s\eta)|P_{r,1}-\tau P_{r,2}}}\right)\right|_{\tau=0},~~~\Label{cal-B-r-1}\end{eqnarray} \begin{eqnarray} & & {\cal B}_{n,t}^{(r;b;2)}(s) \equiv {(-1)^{b+1}\over b! 
\sqrt{\Delta_r}^{b+1} \vev{P_{r,1}~P_{r,2}}^{b}}{d^{b} \over d\tau^{b}} \left({1\over (t+1)} {\gb{P_{r,2}-\tau P_{r,1}|\eta|P_{r,2}}^{t+1}\over \gb{P_{r,2}-\tau P_{r,1}|K|P_{r,2}}^{t+1}}\right. \nonumber \\ & & \times \left.\left. {\vev{P_{r,2}-\tau P_{r,1}|Q_r \eta|P_{r,2}-\tau P_{r,1}}^{b} \prod_{j=1}^{n+k} \vev{P_{r,2}-\tau P_{r,1}|R_j (K+s\eta)|P_{r,2}-\tau P_{r,1}}\over \vev{P_{r,2}-\tau P_{r,1}|\eta K|P_{r,2}-\tau P_{r,1}}^{n+1} \prod_{p=1,p\neq r}^k \vev{P_{r,2}-\tau P_{r,1}| Q_p(K+s\eta)|P_{r,2}-\tau P_{r,1}}}\right)\right|_{\tau=0}.~~~\Label{cal-B-r-2}\end{eqnarray} where $\Delta_r, P_{r,1}, P_{r,2}$ are given by (\ref{tri-null}), and $\eta, \W\eta$ are arbitrary, generically chosen null vectors. Note that the bubble coefficient exists only when $n\geq 0$. \section{Example I: $s_{12}$-channel cut of $A(1^+,2^+,3^+,H)$} In this section as well as the next, we check our formulas by reconstructing some helicity amplitudes contributing to $gH \to gg$ and $gg \to gg$ at NLO in QCD, both known in the literature \cite{Ellis:1987xu,Bern:1995db}. We present our calculations in detail. Our first example is the $s_{12}$-channel cut of $A(1^+,2^+,3^+,H)$. This amplitude was first computed in \cite{Ellis:1987xu}. Here, to facilitate comparison, we follow the setup of \cite{Rozowsky:1997dm}, where the amplitude was rederived using unitarity cuts. At one loop, every Feynman diagram has a massive quark circulating in the loop. The quark mass is denoted by $m$. 
\begin{figure} \begin{eqnarray} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \GOval(0,0)(27,27)(0){1} \SetWidth{1.0} \Gluon(-20,-20)(-30,-30){3}{3} \Gluon(20,-20)(30,-30){3}{3} \Gluon(20,20)(30,30){3}{3} \DashLine(-20,20)(-30,30){3} \GOval(-20,0)(20,8.5)(0){0.7} \GOval(+20,0)(20,8.5)(0){0.7} \DashLine(0,32)(0,-32){2} \Text(-33, -33)[]{\footnotesize {$3$}} \Text(-33, +33)[]{\footnotesize {$H$}} \Text(+33, +33)[]{\footnotesize {$1$}} \Text(+33, -33)[]{\footnotesize {$2$}} \Text(0, +35)[]{\footnotesize {$L_1$}} \Text(0, -35)[]{\footnotesize {$L_2$}} \end{picture} \hspace*{1.5cm} &=& c_4^{1m} \hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \SetWidth{1.0} \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \DashLine(-20,20)(-30,30){3} \DashLine(0,30)(0,-30){2} \end{picture} \hspace*{1cm} + c_{[12|3|H]}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Line(-20,-20)(20,0) \Line(-20,20)(20,0) \Line(-20,-20)(-20,20) \SetWidth{1.0} \Line(-20,-20)(-30,-30) \Line(20,0)(30,10) \Line(20,0)(30,-10) \DashLine(-20,20)(-30,30){3} \DashLine(0,30)(0,-30){2} \end{picture} \hspace*{1cm} + c_{[1|2|3H]}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Line(+20,-20)(-20,0) \Line(+20,20)(-20,0) \Line(+20,-20)(+20,20) \SetWidth{1.0} \Line(+20,-20)(+30,-30) \DashLine(-20,0)(-30,10){3} \Line(-20,0)(-30,-10) \Line(+20,20)(+30,30) \DashLine(0,30)(0,-30){2} \end{picture} \hspace*{1cm} + c_{[12|3H]}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Oval(0,0)(20,20)(0) \SetWidth{1.0} \DashLine(-20,0)(-30,10){3} \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \DashLine(0,30)(0,-30){2} \end{picture} \nonumber \end{eqnarray} \vspace*{0.5cm} \caption{Double-cut in the $s_{12}$-channel for $A(1^+,2^+,3^+,H)$.} \label{fig:s12cut} \end{figure} The $s$-channel cut of $A(1^+,2^+,3^+,H)$ admits a 
decomposition in terms of cuts of master integrals as shown in Fig. \ref{fig:s12cut}. Its expression, given in Eq.(4.20) of \cite{Rozowsky:1997dm}, reads \begin{eqnarray} A(1^+,2^+,3^+,H)|_{s-{\rm channel}}&=& - { i \over (4 \pi)^{2 - \epsilon} } {m^2 \over v^2} { s t \over \spa 1.2 \spa 2.3 \spa 3.1} \Bigg\{ {(t-u) \over 2 t (s-m_H^2)} I_3^{(4)}[4 (m^2 +\mu^2) - s] \nonumber \\ && \qquad - {(s-m_H^2) \over 2 s t} I_3^{(2)}[4 (m^2 +\mu^2) - s] - {1 \over 2} I_4[4 (m^2 +\mu^2) - s] \Bigg\} \ , \end{eqnarray} where $s=s_{12}, \ t=s_{23}, \ u=m_H^2 - s - t$. One can thus read off the following values for the coefficients: \begin{eqnarray} c_4^{1m} &=& - c_0 \ {1 \over 2} \Big(4 (m^2 +\mu^2) - s \Big) \ , \\ c_{[12|3|H]} &=& - c_0 \ {(s-m_H^2) \over 2 s t} \Big(4 (m^2 +\mu^2) - s \Big) \ , \\ c_{[1|2|3H]} &=& c_0 \ {(t-u) \over 2 t (s-m_H^2)} \Big(4 (m^2 +\mu^2) - s \Big) \ , \\ c_{[12|3H]} &=& 0 ~~~\Label{exa-1-bubble}\ . \end{eqnarray} with \begin{eqnarray} c_0 &=& - { i \over (4 \pi)^{2 - \epsilon} } {m^2 \over v^2} { s t \over \spa 1.2 \spa 2.3 \spa 3.1} \ . \end{eqnarray} \subsection{The reconstruction of the coefficients} We now show how to reconstruct the coefficients given above with our formulas from Section \ref{sec:formulas}. We follow the definition of the integrand given by \cite{Rozowsky:1997dm}. By sewing the tree-level amplitudes $A_4^{\rm tree}(-L_1,1^+,2^+,-L_2)$ and $A_4^{\rm tree}(-L_1,3^+,H,-L_2)$ given in Eqs.(4.1-4.2) of \cite{Rozowsky:1997dm}, and using the Dirac equation for a massive fermion, one finds that the four-dimensional integrand of the $s$-cut, $C_{12}$, can be written as:\footnote{Note that here we use the ``twistor'' sign convention for the antiholomorphic spinor product, which is the opposite of the ``QCD'' convention followed by \cite{Rozowsky:1997dm}: $\spb {x}.{y}^{\rm Rozowsky} = - \spb {x}.{y}^{\rm BFM} $.
} \begin{eqnarray} C_{12} = c_{0,1} {N_1 \over D_2} + c_{0,2}{N_2 \over D_2 D_4} \end{eqnarray} where \begin{eqnarray} c_{0,1}=-{ m K^2 s_{23} \over v (K^2-m_H^2) \vev{1~2}\vev{2~3}\vev{3~1}} ~~~~~c_{0,2}= {m \cb{1~2} \over \sqrt{2} v \vev{1~2}} \end{eqnarray} \begin{eqnarray} N_1 &=& m \left[ 4(m^2 + \mu^2)-s_{12}\right] \\ N_2 &=& 8m(m^2 +\mu^2)(\ell_1 \cdot \epsilon_3^+ + k_4 \cdot \epsilon_3^+) +\sqrt{2} m {\gb{1|4|\ell_1|4|3} \over \vev{1~3}} \end{eqnarray} \begin{eqnarray} D_2 & = & (\ell_1-k_1)^2 - \mu^2 - m^2, ~~~D_4 = (\ell_1+k_4)^2 - \mu^2 - m^2 \end{eqnarray} \noindent We need to classify the contributions of $N_1$ and $N_2$ to each coefficient. Observe that $N_1$ is independent of the loop momentum variable, so we consider it as a single term, \begin{eqnarray} N_1 &=& m \left[ 4(m^2 + \mu^2)-s_{12}\right], \end{eqnarray} while $N_2$ is treated as three separate terms, \begin{eqnarray} N_2 = N_{2,1} + N_{2,2} + N_{2,3} \ , \end{eqnarray} where \begin{eqnarray} N_{2,1} &=& 8m(m^2 +\mu^2)(k_4 \cdot \epsilon_3^+) = 8m(m^2 +\mu^2) {\spab{1}.{2}.{3} \over \sqrt{2} \spa 1.3 } \\ N_{2,2} &=& 8m(m^2 +\mu^2)(\ell_1 \cdot \epsilon_3^+) = - 8m(m^2 +\mu^2) {\spab{1}.{\ell_1}.{3} \over \sqrt{2} \spa 1.3 } \\ N_{2,3} &=& \sqrt{2} m {\gb{1|4|\ell_1|4|3} \over \vev{1~3}}. \end{eqnarray} By pattern-matching onto the reference form in Eq.(\ref{frame}), each integrand can be characterised by the parameters given in the following table. \begin{eqnarray} \begin{array}{|c||c|c|c|} \hline {\rm integrand} & n & k & \s{P}_1 = |P_1\rangle [P_1| \\ \hline \hline {N_1 / D_2} & -1 & 1 & - \\ \hline {N_{2,1} /(D_2 D_4)} & -2 & 2 & - \\ \hline {N_{2,2} /(D_2 D_4)} & -1 & 2 & |1 \rangle \ [3|\\ \hline {N_{2,3} /(D_2 D_4)} & -1 & 2 & k_4|3] \ \langle 1|k_4 \\ \hline \end{array} \end{eqnarray} These data are the input values that we need in evaluating the formulas of the coefficients of the master integrals. \\ From this table we draw the following conclusions.
Since $N_1/D_2$ has $k=1$ and $n=-1$, it contributes only to a triangle coefficient; $N_{2,1}$, having $n=-2$, contributes only to the box coefficient; and $N_{2,2}$, $N_{2,3}$, having $n=-1$, contribute to both box and triangle coefficients. There are no bubble contributions at all. Thus we have already reproduced the absence of bubbles, Eq.(\ref{exa-1-bubble}), without any calculation. To apply our formulas, we need to identify the definitions of $K$, $K_1$, and $K_2$. By inspection of $D_2$ and $D_4$, along with the fact that we are working in the $s$-channel cut, we choose the following consistent definitions: \begin{eqnarray} K=k_1+k_2;~~~K_1=k_1;~~~K_2=-k_4. ~~~\Label{ks-for-gggh} \end{eqnarray} Since there is a single massive quark circulating in the loop, we have \begin{eqnarray} ~~~ M_1=M_2=m_j=m,\end{eqnarray} and \begin{eqnarray} z= {1-\sqrt{1-u}\sqrt{1-{4 m^2\over s_{12}}}\over 2} \end{eqnarray} From (\ref{ks-for-gggh}), we use the definition (\ref{Q-def}) to construct \begin{eqnarray} Q_1 &=& -(1-z)k_1 - z k_2 \\ Q_2 &=& \left(1-2z \right)k_4 +\left((1-z){m_H^2 \over K^2} - z \right)K \end{eqnarray} Using (\ref{tri-null}), we also set up the following quantities useful for triangle coefficients: \begin{eqnarray} & & \Delta_1 = (1-2z)^2 (K^2)^2 \\ & & P_{1,1} = (1-2z)k_2,~~~~~P_{1,2}=-(1-2z)k_1\\& & \Delta_2 = (1-2z)^2 (K^2-m_H^2)^2 \\ & & P_{2,1} = -(1-2z)k_3,~~~~~ P_{2,2}=-(1-2z){m_H^2 \over K^2}k_3+(1-2z)(1-{m_H^2 \over K^2})k_4\end{eqnarray} \subsection{The box coefficient $c_4^{1m}$} The box coefficient $c_4^{1m}$ takes contributions from $N_{2,1}$, $N_{2,2}$, $N_{2,3}$: \begin{eqnarray} c_4^{1m} &=& c_{0,2}~8m(m^2+\mu^2)k_4 \cdot \epsilon_3^+ \ C[Q_1,Q_2,K]^{(2,1)} - c_{0,2} {8m(m^2+\mu^2) \over \sqrt{2}\vev{1~3}} \ C[Q_1,Q_2,K]^{(2,2)} \nonumber \\ && + c_{0,2} {\sqrt{2} m \over \vev{1~3}} \ C[Q_1,Q_2,K]^{(2,3)} \end{eqnarray} where $C[Q_r,Q_s,K]$, defined in Eq.(\ref{box-exp}), is \begin{eqnarray} C[Q_r,Q_s,K] & = & {(K^2)^{2+n}\over 2}\left({\prod_{j=1}^{k+n} \gb{P_{sr,1}|R_j |P_{sr,2}}\over \gb{P_{sr,1}|K
|P_{sr,2}}^{n+2}\prod_{t=1,t\neq r,s}^k \gb{P_{sr,1}|Q_t |P_{sr,2}}}+ \{P_{sr,1}\leftrightarrow P_{sr,2}\} \right). \end{eqnarray} \bigskip \noindent $\bullet \ C[Q_1,Q_2,K]^{(2,1)}$ This term, corresponding to $n=-2$, is trivial, since $k=2$ and $N_{2,1}$ has no dependence on the loop variable, \begin{eqnarray} C[Q_1,Q_2,K]^{(2,1)} = 1 \end{eqnarray} \bigskip \noindent $\bullet \ C[Q_1,Q_2,K]^{(2,2)}$ and $C[Q_1,Q_2,K]^{(2,3)}$ $C[Q_1,Q_2,K]^{(2,2)}$ and $C[Q_1,Q_2,K]^{(2,3)}$ both correspond to $n=-1,k=2$. They differ only in the definition of $P_1$. Therefore we can compute them in parallel, and specialize later to the corresponding $P_1$. With $n=-1,k=2$, the expression is \begin{eqnarray} & & C[Q_1,Q_2,K] = {(K^2)\over 2}\left({ \gb{P_{21,1}|R_1 |P_{21,2}}\over \gb{P_{21,1}|K |P_{21,2}} }+ \{P_{21,1}\leftrightarrow P_{21,2}\} \right) \\ & & = -(1-2z){K^2\over 2} \left({\gb{P_{21,1}|P_1 |P_{21,2}}\over \gb{P_{21,1}|K |P_{21,2}} } + {\gb{P_{21,2}|P_1 |P_{21,1}}\over \gb{P_{21,2}|K |P_{21,1}} } \right) + {z (-2K \cdot P_1) } \nonumber \\ & & = -(1-2z){K^2\over 2} \left({\gb{P_{21,1}|P_1 |P_{21,2}}\gb{P_{21,2}|K |P_{21,1}}+\gb{P_{21,2}|P_1 |P_{21,1}}\gb{P_{21,1}|K |P_{21,2}}\over \gb{P_{21,1}|K |P_{21,2}}\gb{P_{21,2}|K |P_{21,1}} } \right) + {z (-2K \cdot P_1) } \nonumber \end{eqnarray} \bigskip \noindent For $|P_1\rangle = |1\rangle \ , |P_1] = |3]$ (so that, for any $S$, one has $2P_1 \cdot S = -\gb{1|S|3}$), one obtains \begin{eqnarray} & & C[Q_1,Q_2,K]^{(2,2)} = \nonumber \\ & & = -(1-2z){K^2\over 2} \left(-{(1-2z)^4 \over z(1-z)}{s_{23} (s_{23}+K^2-m_H^2)\gb{1|2|3} \over K^2} \right) \left({(1-2z)^4 \over z(1-z)}s_{23}(s_{23}+K^2-m_H^2) \right)^{-1} + {z \gb{1|2|3} } \nonumber \\ & & ={\gb{1|2|3}\over 2} \end{eqnarray} \\ For $|P_1\rangle = \s{k}_4|3] \ , |P_1] = \s{k}_4|1\rangle$ (so that $2P_1 \cdot S = -\gb{1|k_4 S k_4|3}$), one gets \begin{eqnarray} & & C[Q_1,Q_2,K]^{(2,3)} = \nonumber \\ & & = -(1-2z){K^2\over 2} \left({\gb{P_{21,1}|P_1
|P_{21,2}}\gb{P_{21,2}|K |P_{21,1}}+\gb{P_{21,2}|P_1 |P_{21,1}}\gb{P_{21,1}|K |P_{21,2}}\over \gb{P_{21,1}|K |P_{21,2}}\gb{P_{21,2}|K |P_{21,1}} } \right) + {z (-2K \cdot P_1) } \nonumber \\ & & = -{\gb{1|2|3}\over 2}m_H^2 \end{eqnarray} \bigskip \noindent $\bullet \ $ The result for $c_4^{1m}$ The total coefficient of our box is: \begin{eqnarray} c_4^{1m} &=& c_{0,2}~8m(m^2+\mu^2)k_4 \cdot \epsilon_3^+ - c_{0,2} {8m(m^2+\mu^2) \over \sqrt{2}\vev{1~3}}({\gb{1|2|3}\over 2}) + c_{0,2} {\sqrt{2} m \over \vev{1~3}}(-m_H^2 {\gb{1|2|3}\over 2}) \nonumber \\ & = & - {m^2 K^2 s_{23} \over 2 v \vev{1~2}\vev{2~3}\vev{3~1}} \left[ {4(m^2+\mu^2) } -m_H^2 \right] \end{eqnarray} Multiplying by $-i/(4\pi)^{2-\epsilon}$, to account for the difference in the definitions of master integrals, we confirm the result of \cite{Rozowsky:1997dm}. \subsection{The triangle coefficient $c_{[1|2|3H]}$} The coefficient $c_{[1|2|3H]}$ gets contributions from $N_{1}$, $N_{2,2}$ and $N_{2,3}$: \begin{eqnarray} c_{[1|2|3H]} & = & c_{0,1} \ N_1 \ C[Q_1,K]^{(1)} - c_{0,2} {8m(m^2+\mu^2) \over \sqrt{2}\vev{1~3}} \ C[Q_1,K]^{(2,2)} + c_{0,2} {\sqrt{2} m \over \vev{1~3}} \ C[Q_1,K]^{(2,3)} \end{eqnarray} where the general triangle coefficient, given in Eq.(\ref{tri-exp}), reads \begin{eqnarray} C[Q_s,K] & = & { (K^2)^{1+n}\over 2}\frac{1}{(\sqrt{\Delta_s})^{n+1}}\frac{1}{(n+1)! \vev{P_{s,1}~P_{s,2}}^{n+1}} \nonumber \\ & & \times \frac{d^{n+1}}{d\tau^{n+1}}\left.\left({\prod_{j=1}^{k+n} \vev{P_{s,1}-\tau P_{s,2} |R_j Q_s|P_{s,1}-\tau P_{s,2}}\over \prod_{t=1,t\neq s}^k \vev{P_{s,1}-\tau P_{s,2}|Q_t Q_s |P_{s,1}-\tau P_{s,2}}} + \{P_{s,1}\leftrightarrow P_{s,2}\}\right)\right|_{\tau=0}. \end{eqnarray} \bigskip \noindent $\bullet \ C[Q_1,K]^{(1)}$ We have already observed that the $N_1$ term is trivial. Here is how that shows up in our formulas. Read $C[Q_s,K]$ for $s=1$, $k=1$, $n=-1$, and no $R_j$. The term inside the parentheses degenerates to 1.
\begin{eqnarray} C[Q_1,K]^{(1)} & = & { 1 \over 2} \left.\left(1 +1 \right)\right|_{\tau=0} = 1 . \end{eqnarray} \bigskip \noindent $\bullet \ C[Q_1,K]^{(2,2)}$ and $C[Q_1,K]^{(2,3)}$ As we said already, $N_{2,2}$ and $N_{2,3}$ differ in the definition of $P_1$. Therefore we start by manipulating the general formula, and only at the very end do we specialize each contribution using the corresponding $P_1$. Since $n=-1,k=2$, there is no derivative at all, so we can set $\tau=0$ from the beginning: \begin{eqnarray} C[Q_s,K] & = & { 1\over 2} \left({ \vev{P_{s,1} |R_1 Q_s|P_{s,1}}\over \prod_{t=1,t\neq s}^k \vev{P_{s,1}|Q_t Q_s |P_{s,1}}} + {\vev{P_{s,2} |R_1 Q_s|P_{s,2}}\over \prod_{t=1,t\neq s}^k \vev{P_{s,2}|Q_t Q_s |P_{s,2}}} \right). \end{eqnarray} The one-mass triangle $(1|2|3H)$ corresponds to the value $s=1$. \begin{eqnarray} C[Q_1,K] & = & -{ 1\over2} \left({ \gb{2 |P_1|1}\over \gb{2|4|1}} + {\gb{1 |P_1|2}\over \gb{1|4|2}} \right) \end{eqnarray} For $|P_1\rangle = |1\rangle \ , |P_1] = |3]$, one gets \begin{eqnarray} C[Q_1,K]^{(2,2)} & = & -{ 1\over2} { \gb{1|2|3} \over s_{23} } \end{eqnarray} For $|P_1\rangle = \s{k}_4|3] \ , |P_1] = \s{k}_4|1\rangle$, one obtains \begin{eqnarray} C[Q_1,K]^{(2,3)} & = & -{ 1\over2} \left({ \gb{2|4|3}\gb{1|4|1}\over \gb{2|4|1}} + {\gb{1|4|3}\gb{1|4|2}\over \gb{1|4|2}} \right) =\gb{1|2|3}\left(1-{m_H^2 \over 2 s_{23}} \right) \end{eqnarray} \bigskip \noindent $\bullet \ $ The result for $c_{[1|2|3H]}$ The total coefficient of triangle $(1|2|3H)$ is: \begin{eqnarray} c_{[1|2|3H]} &= & -{ m K^2 s_{23} \over v (K^2-m_H^2) \vev{1~2}\vev{2~3}\vev{3~1}} m \left[ 4(m^2 + \mu^2)-s_{12}\right] - {m \cb{1~2} \over \sqrt{2} v \vev{1~2}} {8m(m^2+\mu^2) \over \sqrt{2}\vev{1~3}}(-{\gb{1|2|3}\over 2 s_{23}}) \nonumber \\ & & + {m \cb{1~2} \over \sqrt{2} v \vev{1~2}} {\sqrt{2} m \over \vev{1~3}}\gb{1|2|3}\left(1-{m_H^2 \over 2 s_{23}} \right)\nonumber \\ &= & -{m^2 K^2 \left(2 s_{23}+K^2-m_H^2\right)\over 2 v (K^2-m_H^2)\vev{1~2}
\vev{2~3}\vev{3~1}} \left[ 4(m^2 + \mu^2)-m_H^2\right] \end{eqnarray} Multiplying by $i/(4\pi)^{2-\epsilon}$, to account for the difference in the definitions of master integrals, we again confirm the result of \cite{Rozowsky:1997dm}. \subsection{The coefficient $c_{[12|3|H]}$} The coefficient $c_{[12|3|H]}$ gets contributions from $N_{2,2}$ and $N_{2,3}$, therefore can be written as \begin{eqnarray} c_{[12|3|H]} & = & - c_{0,2} {8m(m^2+\mu^2) \over \sqrt{2}\vev{1~3}} \ C[Q_2,K]^{(2,2)} + c_{0,2} {\sqrt{2} m \over \vev{1~3}} \ C[Q_2,K]^{(2,3)} \end{eqnarray} \bigskip \noindent $\bullet \ C[Q_2,K]^{(2,2)}$ and $C[Q_2,K]^{(2,3)}$ The two-mass triangle $(12|3|H)$ corresponds to the value $s=2$, and its coefficient can be obtained from Eq.(\ref{tri-exp}), \begin{eqnarray} C[Q_2,K] & = & { 1\over 2} \left({ \vev{3 |P_1 K |3} \over\vev{3|1|2|3} }- { \cb{3|K P_1|3} \over \cb{3|1|2|3} } \right) \end{eqnarray} By using $|P_1\rangle = |1\rangle \ , |P_1] = |3]$, one gets \begin{eqnarray} C[Q_2,K]^{(2,2)} & = & \left( 1-{m_H^2 \over K^2} \right) {\gb{1|2|3} \over 2 s_{23}} \end{eqnarray} By using $|P_1\rangle = \s{k}_4|3] \ , |P_1] = \s{k}_4|1\rangle$, one obtains \begin{eqnarray} C[Q_2,K]^{(2,3)} & = & { 1\over 2} \left({ \gb{3|4|3}\vev{1|4 K |3} \over\vev{3|1|2|3} }- { \cb{3|K4|3}\gb{1|4|3} \over \cb{3|1|2|3} } \right) = (1-{m_H^2 \over K^2}) m_H^2 {\gb{1|2|3} \over 2 s_{23}} \end{eqnarray} \bigskip \noindent $\bullet \ $ The result of $c_{[12|3|H]}$ The total coefficient of triangle $(12|3|H)$ is: \begin{eqnarray} c_{[12|3|H]} & = & -c_{0,2} {8m(m^2+\mu^2) \over \sqrt{2}\vev{1~3}}\left( 1-{m_H^2 \over K^2}\right) {\gb{1|2|3} \over 2 s_{23}} + c_{0,2} {\sqrt{2} m \over \vev{1~3}}\left( 1-{m_H^2 \over K^2}\right) m_H^2 {\gb{1|2|3} \over 2 s_{23}} \nonumber \\ & & = {m^2 (K^2-m_H^2) \over 2 v \vev{1~2} \vev{2~3}\vev{3~1}} \left[ {4(m^2+\mu^2) }-m_H^2 \right] \end{eqnarray} Multiplying by $i/(4\pi)^{2-\epsilon}$, to account for the difference in the definitions of master 
integrals, we again confirm the result of \cite{Rozowsky:1997dm}. \section{Example II: $s_{23}$-channel cut of $A(1^-,2^-,3^+,4^+)$} Our second example features a non-vanishing bubble coefficient. We study the $t$-channel cut of the gluon amplitude $A(1^-,2^-,3^+,4^+)$, with a massive quark circulating in the loop. (As usual, $t=s_{23}$.) \begin{figure} \begin{eqnarray} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \GOval(0,0)(27,27)(0){1} \SetWidth{1.0} \Gluon(-20,-20)(-30,-30){3}{3} \Gluon(20,-20)(30,-30){3}{3} \Gluon(20,20)(30,30){3}{3} \Gluon(-20,20)(-30,30){3}{3} \GOval(-20,0)(20,8.5)(0){0.7} \GOval(+20,0)(20,8.5)(0){0.7} \DashLine(0,32)(0,-32){2} \Text(-33, -33)[]{\footnotesize {$4$}} \Text(-33, +33)[]{\footnotesize {$1$}} \Text(+33, +33)[]{\footnotesize {$2$}} \Text(+33, -33)[]{\footnotesize {$3$}} \Text(0, +35)[]{\footnotesize {$L_1$}} \Text(0, -35)[]{\footnotesize {$L_2$}} \end{picture} \hspace*{1.5cm} &=& c_4^{0m} \hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Line(-20,-20)(20,-20) \Line(20,-20)(20,20) \Line(20,20)(-20,20) \Line(-20,20)(-20,-20) \SetWidth{1.0} \Line(-20,-20)(-30,-30) \Line(20,-20)(30,-30) \Line(20,20)(30,30) \Line(-20,20)(-30,30) \DashLine(0,30)(0,-30){2} \end{picture} \hspace*{1cm} + c_{[23|4|1]}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Line(-20,-20)(20,0) \Line(-20,20)(20,0) \Line(-20,-20)(-20,20) \SetWidth{1.0} \Line(-20,-20)(-30,-30) \Line(20,0)(30,10) \Line(20,0)(30,-10) \Line(-20,20)(-30,30) \DashLine(0,30)(0,-30){2} \end{picture} \hspace*{1cm} + c_{[2|3|41]}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Line(+20,-20)(-20,0) \Line(+20,20)(-20,0) \Line(+20,-20)(+20,20) \SetWidth{1.0} \Line(+20,-20)(+30,-30) \Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(+20,20)(+30,30) \DashLine(0,30)(0,-30){2} \end{picture} \hspace*{1cm} + c_{[23|41]}\hspace*{1cm} \begin{picture}(0,0)(0,0) \SetScale{0.8} \SetWidth{2.0} \Oval(0,0)(20,20)(0) \SetWidth{1.0}
\Line(-20,0)(-30,10) \Line(-20,0)(-30,-10) \Line(20,0)(30,10) \Line(20,0)(30,-10) \DashLine(0,30)(0,-30){2} \end{picture} \nonumber \end{eqnarray} \vspace*{0.5cm} \caption{Double-cut in the $s_{23}$-channel for $A(1^-,2^-,3^+,4^+)$.} \label{fig:s23cut} \end{figure} The $t$-channel cut of $A(1^-,2^-,3^+,4^+)$ admits a decomposition in terms of cuts of master integrals as shown in Fig. \ref{fig:s23cut}, and its expression was given in Eq.(5.33) of \cite{Bern:1995db}. After converting that expression into our basis of $D$-dimensional master integrals, as done in Appendix \ref{sec:IntegralsTranslation}, it reads \begin{eqnarray} \left. A^{\rm fermion}_4(1^-,2^-,3^+,4^+) \right|_{t-{\rm cut}} &=& \left. {\vev{1~2}^2 \cb{3~4}^2 \over st} \left( {2 \over 3} I_2[1] +{4 \over 3t}I_2[m^2+\mu^2] - {2\over s} I_2[m^2+\mu^2] \nonumber \right. \right. \\ & & \left. \left. +{2t \over s} I_4[(m^2+\mu^2)^2] - t I_4[m^2+\mu^2] \right) \right|_{t-{\rm cut}} \end{eqnarray} One reads the following values for the coefficients: \begin{eqnarray} c_4^{0m} &=& c_0 \ \bigg( {2t \over s} (m^2+\mu^2) - t \bigg) (m^2+\mu^2) \ , \\ c_{[23|4|1]} &=& 0 \ , \\ c_{[2|3|41]} &=& 0 \ , \\ c_{[23|41]} &=& c_0 \ \bigg( {2 \over 3} +{4 \over 3t} (m^2+\mu^2) - {2\over s} (m^2+\mu^2) \bigg) \end{eqnarray} with \begin{eqnarray} c_0 &=& {\vev{1~2}^2 \cb{3~4}^2 \over st} \ . \end{eqnarray} \subsection{The reconstruction of the coefficients} We now apply our formulas of Section \ref{sec:formulas} to construct the coefficients given above. We follow the definition of the integrand given by \cite{Bern:1995db}. 
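As an aside, the spinor products appearing in these coefficients are straightforward to evaluate numerically. The sketch below is our own illustration (the function names and the overall sign chosen for the square bracket are ours; sign conventions for $\cb{x~y}$ vary, as noted in the footnote below, but magnitudes are convention-independent):

```python
import math

def angle_spinor(p):
    """Holomorphic spinor |p> for a massless p = (E, px, py, pz) with E + pz > 0."""
    E, px, py, pz = p
    a = math.sqrt(E + pz)
    return (a, (px + 1j * py) / a)

def vev(p, q):
    """Angle bracket <p q> (antisymmetric in p and q)."""
    p1, p2 = angle_spinor(p)
    q1, q2 = angle_spinor(q)
    return p1 * q2 - p2 * q1

def cb(p, q):
    """Square bracket [p q]; for real momenta, a sign times the conjugate of <p q>.
    The overall sign is a convention choice and does not affect magnitudes."""
    return -vev(p, q).conjugate()

def dot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def massless(E, theta, phi):
    """A massless momentum of energy E pointing in the direction (theta, phi)."""
    st, ct = math.sin(theta), math.cos(theta)
    return (E, E*st*math.cos(phi), E*st*math.sin(phi), E*ct)
```

With these definitions one can check numerically that $\vev{p~q}\cb{q~p} = 2\,p\cdot q$ for positive-energy massless momenta, which fixes, e.g., $s_{12}$ in terms of the brackets.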
By sewing the tree-level amplitude $A_4^{\rm tree}(-L_1,2^-,3^+,L_2)$ and $A_4^{\rm tree}(-L_2,4^+,1^-,L_1)$ given in Eq.(2.3) of \cite{Bern:1995db}, and using the Dirac equation for a massive fermion, it is shown that the four-dimensional integrand of the $t$-cut, $C_{23}$, can be written as:\footnote{Recall that here we use ``twistor'' sign convention for the antiholomorphic spinor product, which is the opposite of the ``QCD'' convention followed by \cite{Bern:1995db} $\spb {x}.{y}^{\rm Bern-Morgan} = - \spb {x}.{y}^{\rm BFM} $. }% \begin{eqnarray} C_{23} = -{2 N_1 + N_2 \over D_1 D_2}, \end{eqnarray} with \begin{eqnarray} N_1 &=& {1 \over s_{23}^2} \gb{1|\ell_1|4}^2 \gb{2|\ell_1|3}^2 \\ N_2 &=& -{1 \over s_{23}}\vev{1~2}\cb{3~4}\gb{1|\ell_1|4}\gb{2|\ell_1|3} \\ D_1 &=& (\ell_1+k_1)^2 - \mu^2 - m^2 \\ D_2 &=& (\ell_1-k_2)^2 - \mu^2 - m^2 \end{eqnarray} By pattern-matching onto the reference form in Eq.(\ref{frame}), each integrand is characterised by the parameters given in the following table. \begin{eqnarray} \begin{array}{|c||c|c|c|c|c|c|} \hline {\rm integrand} & n & k & |P_1\rangle [P_1| & |P_2\rangle [P_2| & |P_3\rangle [P_3| & |P_4\rangle [P_4| \\ \hline \hline \hline {N_{1} /(D_1 D_2)} & 2 & 2 & |1\rangle [4| & |2\rangle [3| & |1\rangle [4| & |2\rangle [3| \\ \hline {N_{2} /(D_1 D_2)} & 0 & 2 & |1\rangle [4| & |2\rangle [3| & - & - \\ \hline \end{array} \end{eqnarray} We define \begin{eqnarray} & & K = k_2+k_3;~~~~~~~K_1=-k_1;~~~~~~K_2=k_2 \\ & & P_1=P_3 = \lambda_1\tilde\lambda_4,~~~~~~~P_2=P_4 = \lambda_2\tilde\lambda_3. \end{eqnarray} Moreover, since we have a quark of mass $m$ circulating in the loop, we take \begin{eqnarray} M_1=M_2=m_j=m. \end{eqnarray} Then, by applying (\ref{solve-z}), we find \begin{eqnarray} z(1-z) = {m^2 + \mu^2 \over K^2}. ~~~~ \Label{z-mu-ex2} \end{eqnarray} For the $N_1$ term, $n=2$. For the $N_2$ term, $n=0$. Both terms give boxes, triangles and bubbles. 
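The constraint (\ref{z-mu-ex2}) fixes $z$ up to a two-fold ambiguity. A quick numerical check (our own sketch, with arbitrary illustrative values for $K^2$ and $m^2+\mu^2$):

```python
import math

def z_roots(K2, M2):
    """Both roots of z(1-z) = M2/K2, with M2 = m^2 + mu^2; real for K2 >= 4*M2."""
    disc = math.sqrt(1.0 - 4.0 * M2 / K2)
    return 0.5 * (1.0 - disc), 0.5 * (1.0 + disc)

K2, M2 = 10.0, 0.7  # illustrative values only
z_minus, z_plus = z_roots(K2, M2)
```

Either root satisfies the relation, and the combination $(1-2z)^2 = 1 - 4(m^2+\mu^2)/K^2$ entering $\Delta_1$ and $\Delta_2$ below is the same for both roots.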
From the definitions (\ref{R-def}), (\ref{Q-def}) we have \begin{eqnarray} Q_1 &=& (1-z)k_1+z k_4,~~~Q_2 = -(1-z)k_2-z k_3 \\ R_1 &=& R_3 = -(1-2z)\lambda_1\tilde\lambda_4,~~~~ R_2 = R_4 = -(1-2z)\lambda_2\tilde\lambda_3. \end{eqnarray} Further, the quantities defined in (\ref{box-null}), (\ref{tri-null}) become \begin{eqnarray} \Delta_{12} &=& (1-2z)^4 s_{12}(K^2+s_{12}) -(1-2z)^2 K^2 s_{12} \\ \Delta_1 & = & (1-2z)^2 (K^2)^2,~~~\\P_{1,1} &=& -(1-2z)k_4,~~~ P_{1,2} = (1-2z)k_1,\\ \Delta_2 &=& (1-2z)^2 (K^2)^2 \\ P_{2,1} &=& (1-2z)k_3,~~~ P_{2,2} = -(1-2z)k_2. \end{eqnarray} \subsection{The box coefficient $c_4^{0m}$} The box coefficient $c_4^{0m}$ receives contributions from both $N_1$ and $N_2$, and can be correspondingly decomposed as: \begin{eqnarray} c_4^{0m} = - {2 \over (K^2)^2} C[Q_1,Q_2,K]^{(1)} + {\vev{1~2}\cb{3~4} \over K^2} C[Q_1,Q_2,K]^{(2)} \ . \end{eqnarray} We discuss the computation of $C[Q_1,Q_2,K]^{(1)}$ and $C[Q_1,Q_2,K]^{(2)}$ in detail, starting from the expression given in Eq.(\ref{box-exp}). \bigskip \noindent $\bullet \ C[Q_1,Q_2,K]^{(1)}$ For the $N_1$ term, with $n=2$, the expression is given by \begin{eqnarray} C[Q_1,Q_2,K]^{(1)} & = & {(K^2)^{4}\over 2}\left({ \gb{P_{21,1}|R_1 |P_{21,2}}^2\gb{P_{21,1}|R_2 |P_{21,2}}^2 \over \gb{P_{21,1}|K|P_{21,2}}^4}+ { \gb{P_{21,2}|R_1 |P_{21,1}}^2\gb{P_{21,2}|R_2 |P_{21,1}}^2 \over \gb{P_{21,2}|K|P_{21,1}}^4} \right) \end{eqnarray} For analytic simplification, the following trace identity is helpful. 
\begin{eqnarray*} \gb{P_1|R|P_2} \gb{P_2|S|P_1} = \mathop{\rm Tr}\left( {1-\gamma_5\over 2} \not{P_1} \not{R} \not{P_2} \not{S}\right) \equiv \mathop{{\rm tr}_-}(P_1 R P_2 S) ~~~~~\Label{spinor-formula}\end{eqnarray*} In terms of vectors, \begin{eqnarray*} \mathop{{\rm tr}_-}(V_1 V_2 V_3 V_4) & = & {1\over 2} ( (2V_1 \cdot V_2)(2 V_3\cdot V_4)+(2V_1 \cdot V_4)(2 V_2\cdot V_3) -(2V_1\cdot V_3)(2 V_2 \cdot V_4) -4i\epsilon_{\mu \nu\sigma \rho}V_1^\mu V_2^\nu V_3^\sigma V_4^\rho) \end{eqnarray*} The coefficient can then be expressed in terms of traces, and evaluated as follows. \begin{eqnarray} & &C[Q_1,Q_2,K]^{(1)}\nonumber \\ & & = {(K^2)^4 \left( (\mathop{{\rm tr}_-}(P_{21,2} K P_{21,1} R_1)\mathop{{\rm tr}_-}(P_{21,2} K P_{21,1} R_2))^2 + (\mathop{{\rm tr}_-}(P_{21,1} K P_{21,2} R_1)\mathop{{\rm tr}_-}(P_{21,1} K P_{21,2} R_2))^2 \right) \over 2 (\mathop{{\rm tr}_-} (P_{21,1} K P_{21,2} K))^4 }\nonumber \\ & & = {(K^2)^4 z^2 (1-z)^2 \cb{3~4}^2 \vev{1~2}^2 \over s_{12}^2 } \end{eqnarray} \bigskip \noindent $\bullet \ C[Q_1,Q_2,K]^{(2)}$ For the $N_2$ term with $n=0$ the expression is given by \begin{eqnarray} C[Q_1,Q_2,K]^{(2)} & = & {(K^2)^{2}\over 2}\left({ \gb{P_{21,1}|R_1 |P_{21,2}}\gb{P_{21,1}|R_2 |P_{21,2}} \over \gb{P_{21,1}|K |P_{21,2}}^{2}}+ \{P_{21,1}\leftrightarrow P_{21,2}\} \right). \end{eqnarray} Combining the two terms over a common denominator, we have \begin{eqnarray} & & C[Q_1,Q_2,K]^{(2)} \nonumber \\ & & = {(K^2)^2 \left( \mathop{{\rm tr}_-}(P_{21,2} K P_{21,1} R_1)\mathop{{\rm tr}_-}(P_{21,2} K P_{21,1} R_2) + \mathop{{\rm tr}_-}(P_{21,1} K P_{21,2} R_1)\mathop{{\rm tr}_-}(P_{21,1} K P_{21,2} R_2) \right) \over 2 (\mathop{{\rm tr}_-} (P_{21,1} K P_{21,2} K))^2 }\nonumber \\ & & = {(K^2)^2 z (1-z) \cb{3~4}\vev{1~2} \over s_{12} } \end{eqnarray} \bigskip \noindent $\bullet \ $ The result for $c_4^{0m}$ We add our two contributions together and replace $z$ using (\ref{z-mu-ex2}).
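As a cross-check, the vector form of the trace identity (\ref{spinor-formula}) can be verified with explicit Dirac matrices. The sketch below is our own illustration, using the Dirac representation with $\gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3$; since the sign of the $\epsilon$-term depends on the $\epsilon_{0123}$ and $\gamma_5$ conventions, the imaginary parts are compared only up to sign:

```python
from itertools import permutations

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
PAULI = ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])

def neg(m):
    return [[-x for x in row] for row in m]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from 2x2 blocks [[a, b], [c, d]]."""
    return [a[0] + b[0], a[1] + b[1], c[0] + d[0], c[1] + d[1]]

# Dirac representation: gamma^0 = diag(1,1,-1,-1), gamma^i off-diagonal in Pauli blocks
GAMMA = [block(I2, Z2, Z2, neg(I2))] + [block(Z2, s, neg(s), Z2) for s in PAULI]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def trace(A):
    return sum(A[i][i] for i in range(4))

def slash(V):
    """gamma_mu V^mu with metric signature (+,-,-,-)."""
    g = (1, -1, -1, -1)
    return [[sum(g[mu] * V[mu] * GAMMA[mu][i][j] for mu in range(4))
             for j in range(4)] for i in range(4)]

# gamma_5 = i gamma^0 gamma^1 gamma^2 gamma^3
G5 = [[1j * x for x in row]
      for row in mmul(mmul(GAMMA[0], GAMMA[1]), mmul(GAMMA[2], GAMMA[3]))]

def tr_minus(V1, V2, V3, V4):
    """tr( (1-gamma5)/2 slash(V1) slash(V2) slash(V3) slash(V4) )."""
    PL = [[((1 if i == j else 0) - G5[i][j]) / 2 for j in range(4)]
          for i in range(4)]
    prod = mmul(mmul(slash(V1), slash(V2)), mmul(slash(V3), slash(V4)))
    return trace(mmul(PL, prod))

def dot(p, q):
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def eps_contract(V1, V2, V3, V4):
    """epsilon_{mu nu si ro} V1^mu V2^nu V3^si V4^ro with eps_{0123} = +1."""
    def sgn(p):
        s = 1
        for i in range(4):
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    s = -s
        return s
    return sum(sgn(p) * V1[p[0]] * V2[p[1]] * V3[p[2]] * V4[p[3]]
               for p in permutations(range(4)))

def tr_minus_formula(V1, V2, V3, V4):
    s = lambda a, b: 2 * dot(a, b)
    return 0.5 * (s(V1, V2) * s(V3, V4) + s(V1, V4) * s(V2, V3)
                  - s(V1, V3) * s(V2, V4)
                  - 4j * eps_contract(V1, V2, V3, V4))
```

The real parts of the two evaluations agree exactly; the $\epsilon$-term is purely imaginary for real momenta, so it is compared in magnitude only.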
The total box coefficient is thus \begin{eqnarray} c_4^{0m} & = & -{2 \over (K^2)^2} {(K^2)^4 z^2 (1-z)^2 \cb{3~4}^2 \vev{1~2}^2 \over s_{12}^2 } +{\vev{1~2}\cb{3~4} \over K^2} {(K^2)^2 z (1-z) \cb{3~4}\vev{1~2} \over s_{12} } \\ & = & {(m^2+\mu^2) \cb{3~4}^2 \vev{1~2}^2 \over s_{12} } \left( 1 - {2(m^2+\mu^2) \over s_{12}} \right) \end{eqnarray} \subsection{The triangle coefficients $c_{[23|4|1]}$ and $c_{[2|3|41]}$} Both terms exhibit the symmetry of the amplitude, so our two triangles are not independent. The triangle coefficients $c_{[23|4|1]}$ and $c_{[2|3|41]}$ receive contributions from both $N_1$ and $N_2$, and can be correspondingly decomposed as: \begin{eqnarray} c_{[23|4|1]}& = & - {2 \over (K^2)^2} C[Q_1,K]^{(1)} + {\vev{1~2}\cb{3~4} \over K^2} C[Q_1,K]^{(2)} \\ c_{[2|3|41]}& = & - {2 \over (K^2)^2} C[Q_2,K]^{(1)} + {\vev{1~2}\cb{3~4} \over K^2} C[Q_2,K]^{(2)} \end{eqnarray} We discuss in parallel first the contribution to both coefficients due to $N_1$, namely $C[Q_1,K]^{(1)}$ and $C[Q_2,K]^{(1)}$, and later the one due to $N_2$, namely $C[Q_1,K]^{(2)}$ and $C[Q_2,K]^{(2)}$, where the triangle coefficient was given in Eq.(\ref{tri-exp}). \bigskip \noindent $\bullet \ C[Q_1,K]^{(1)}$ and $C[Q_2,K]^{(1)}$ Since the $N_1$ term has $n=2$, the triangle coefficient expression is given by \begin{eqnarray} & & C[Q_1,K]^{(1)} = { (K^2)^{3}\over 2}\frac{1}{(\sqrt{\Delta_1})^{3}}\frac{1}{3! \vev{4~1}^{3}} \frac{d^{3}}{d\tau^{3}}\left.\left({\prod_{j=1}^{4} \vev{4-\tau 1 |R_j Q_1|4-\tau 1}\over \vev{4-\tau 1|Q_2 Q_1|4-\tau 1}} + {\prod_{j=1}^{4} \vev{1-\tau 4 |R_j Q_1|1-\tau 4}\over \vev{1-\tau 4|Q_2 Q_1|1-\tau 4}} \right)\right|_{\tau=0}\nonumber \\ & & = {(1-2z) \over 2 }\frac{1}{3!
\vev{4~1}} \frac{d^{3}}{d\tau^{3}}\left.\left({ \tgb{4|Q_1|4}^2 \vev{4-\tau 1, 2}^2 \tgb{3|Q_1|4-\tau 1}^2 \over \vev{4-\tau 1|Q_2 Q_1|4-\tau 1}} + {\tau^4 \tgb{4|Q_1| 4}^2 \vev{1-\tau 4, 2}^2 \tgb{3|Q_1|1-\tau 4}^2 \over \vev{1-\tau 4|Q_2 Q_1|1-\tau 4}} \right)\right|_{\tau=0} \nonumber \\ & & = {(1-2z) (1-z)^2 (K^2)^2\over 12 \vev{4~1}} \frac{d^{3}}{d\tau^{3}}\left.\left({ \vev{4-\tau 1, 2}^2 \tgb{3|Q_1|4-\tau 1}^2 \over \vev{4-\tau 1|Q_2 Q_1|4-\tau 1}} \right)\right|_{\tau=0} \nonumber \\ & & = { (1-z)^2 (K^2)^2 \over 12 } \frac{d^{3}}{d\tau^{3}}\left.\left({ (\vev{4~2}-\tau\vev{1~2})^2 ((1-z)\cb{3~1} + \tau z \cb{3~4})^2 \over -(1-z) \gb{4|3|1}+\tau s_{34} +\tau^2 z \gb{1|3|4}} \right)\right|_{\tau=0} \nonumber \\ & & = 0. \end{eqnarray} A similar calculation shows that \begin{eqnarray} & & C[Q_2,K]^{(1)} = { (K^2)^{3}\over 2}\frac{1}{(\sqrt{\Delta_2})^{3}}\frac{1}{3! \vev{3~2}^{3}} \frac{d^{3}}{d\tau^{3}}\left.\left({\prod_{j=1}^{4} \vev{3-\tau 2 |R_j Q_2|3-\tau 2}\over \vev{3-\tau 2|Q_1 Q_2 |3-\tau 2}} + {\prod_{j=1}^{4} \vev{2-\tau 3 |R_j Q_2|2-\tau 3}\over \vev{2-\tau 3|Q_1 Q_2 |2-\tau 3}} \right)\right|_{\tau=0} \nonumber \\ & & = 0, \end{eqnarray} which can also be seen by the symmetry of the amplitude and the cut. \bigskip \noindent $\bullet \ C[Q_1,K]^{(2)}$ and $C[Q_2,K]^{(2)}$ For the $N_2$ term with $n=0$, the expression is simpler as \begin{eqnarray} & & C[Q_1,K]^{(2)} = -{ (1-2z)(1-z) K^2 \over 2} \frac{d}{d\tau}\left.\left({ \vev{4-\tau 1~2}\tgb{3|Q_1|4-\tau 1} \over \vev{4-\tau 1|Q_2 Q_1|4-\tau 1}} + { \tau^2 \vev{1-\tau 4~2}\tgb{3|Q_1|1-\tau 4}\over \vev{1-\tau 4|Q_2 Q_1 |1-\tau 4}} \right)\right|_{\tau=0} \nonumber \\ & & = -{ (1-2z)(1-z) K^2 \over 2} \frac{d}{d\tau}\left.\left({ (\vev{4~2}-\tau\vev{1~2}) (\tgb{3|Q_1|4} -\tau \tgb{3|Q_1| 1}) \over \vev{4-\tau 1|Q_2 Q_1|4-\tau 1}} \right)\right|_{\tau=0} \nonumber \\ & & = 0. 
\end{eqnarray} A similar calculation shows \begin{eqnarray} & & C[Q_2,K]^{(2)} = {(1-2z) \over 2 } \frac{d}{d\tau}\left.\left({ \vev{3-\tau 2~1}\tgb{4|Q_2|3-\tau 2} \tgb{3|Q_2|3} \over \vev{3-\tau 2|Q_1 Q_2|3-\tau 2}} + { \tau^2 \vev{2-\tau 3~1}\tgb{4|Q_2|2-\tau 3} \tgb{3|Q_2| 3}\over \vev{2-\tau 3|Q_1 Q_2|2-\tau 3}} \right)\right|_{\tau=0} \nonumber \\ & & = 0, \end{eqnarray} which can also be seen by symmetry. \bigskip \noindent {$\bullet \ $The results for $c_{[23|4|1]}$ and $c_{[2|3|41]}$} Every term vanishes separately, so \begin{eqnarray} c_{[23|4|1]}& = & 0,~~~~ c_{[2|3|41]} = 0 . \end{eqnarray} The vanishing of the triangle coefficients is not obvious from the outset; we suspect that there should be a more direct physical argument for it. \subsection{The bubble coefficient $c_{[23|41]}$} The bubble coefficient $c_{[23|41]}$ receives contributions from both $N_1$ and $N_2$, and can be correspondingly decomposed as: \begin{eqnarray} c_{[23|41]}& = & - {2 \over (K^2)^2} C[K]^{(1)} + {\vev{1~2}\cb{3~4} \over K^2} C[K]^{(2)} \end{eqnarray} There is one subtlety regarding the calculation of the bubble coefficient. The formulas involve an arbitrarily chosen, generic auxiliary null vector $\eta$. If $\eta$ coincides with one of the $K_i$, we need to use a modified formula, given in Appendix B.3.1 of \cite{Britto:2007tt}. In this example, we illustrate both options. First, we show the result with a generic choice of $\eta$; second, we use the formulas for the case $\eta=K_1$. Both are suitable for numerical evaluation, while the special choice of $\eta$ may simplify the analytic expression. We will find that the two results agree with each other, as well as with \cite{Bern:1995db}. \subsubsection{Generic reference momentum $\eta$} Let us start with the formulas for generic $\eta$, given in Eq.(\ref{bub-exp}).
There are two terms we need to calculate.\\ \noindent $\bullet \ C[K]^{(1)}$ For the first term, with $N_1$ in the numerator, and $n=2$, the coefficient is \begin{eqnarray} C[K]^{(1)} = (K^2)^3 \sum_{q=0}^2 {(-1)^q\over q!} {d^q \over ds^q}\left.\left( {\cal B}_{2,2-q}^{(0)}(s)+\sum_{r=1}^2\sum_{a=q}^2 \left({\cal B}_{2,2-a}^{(r;a-q;1)}(s)-{\cal B}_{2,2-a}^{(r;a-q;2)}(s)\right)\right)\right|_{s=0}, \label{gggg:bubble-coeff} \end{eqnarray} where \begin{eqnarray} {\cal B}_{2,2-q}^{(0)}(s) = {d^2\over d\tau^2}\left.\left( {1 \over 2 [\eta|\W \eta K|\eta]^{2}} {(2\eta\cdot K)^{3-q} \over (3-q) (K^2)^{3-q}}{\prod_{j=1}^{4} \vev{\ell|R_j (K+s\eta)|\ell}\over \vev{\ell~\eta}^{3} \prod_{p=1}^2 \vev{\ell| Q_p(K+s\eta)|\ell}}|_{\ket{\ell}\to |K-\tau \W \eta|\eta] }\right)\right|_{\tau= 0}, \end{eqnarray} \begin{eqnarray} & & {\cal B}_{2,2-a}^{(r;a-q;1)}(s) = {(-1)^{a-q+1}\over (a-q)! \sqrt{\Delta_r}^{a-q+1} \vev{P_{r,1}~P_{r,2}}^{a-q}}{d^{a-q} \over d\tau^{a-q}} \left({1\over (3-a)} {\gb{P_{r,1}-\tau P_{r,2}|\eta|P_{r,1}}^{3-a}\over \gb{P_{r,1}-\tau P_{r,2}|K|P_{r,1}}^{3-a}}\right. \nonumber \\ & & \times \left.\left. {\vev{P_{r,1}-\tau P_{r,2}|Q_r \eta|P_{r,1}-\tau P_{r,2}}^{a-q} \prod_{j=1}^{4} \vev{P_{r,1}-\tau P_{r,2}|R_j (K+s\eta)|P_{r,1}-\tau P_{r,2}}\over \vev{P_{r,1}-\tau P_{r,2}|\eta K|P_{r,1}-\tau P_{r,2}}^{3} \prod_{p=1,p\neq r}^2 \vev{P_{r,1}-\tau P_{r,2}| Q_p(K+s\eta)|P_{r,1}-\tau P_{r,2}}}\right)\right|_{\tau=0}, \end{eqnarray} \begin{eqnarray} & & {\cal B}_{2,2-a}^{(r;a-q;2)}(s) = {(-1)^{a-q+1}\over (a-q)! \sqrt{\Delta_r}^{a-q+1} \vev{P_{r,1}~P_{r,2}}^{a-q}}{d^{a-q} \over d\tau^{a-q}} \left({1\over (3-a)} {\gb{P_{r,2}-\tau P_{r,1}|\eta|P_{r,2}}^{3-a}\over \gb{P_{r,2}-\tau P_{r,1}|K|P_{r,2}}^{3-a}}\right. \nonumber \\ & & \times \left.\left. 
{\vev{P_{r,2}-\tau P_{r,1}|Q_r \eta|P_{r,2}-\tau P_{r,1}}^{a-q} \prod_{j=1}^{4} \vev{P_{r,2}-\tau P_{r,1}|R_j (K+s\eta)|P_{r,2}-\tau P_{r,1}}\over \vev{P_{r,2}-\tau P_{r,1}|\eta K|P_{r,2}-\tau P_{r,1}}^{3} \prod_{p=1,p\neq r}^2 \vev{P_{r,2}-\tau P_{r,1}| Q_p(K+s\eta)|P_{r,2}-\tau P_{r,1}}}\right)\right|_{\tau=0}. \end{eqnarray} After making some substitutions, and considering the summation ranges of $a$ and $q$, we get \begin{eqnarray} & & {\cal B}_{2,2-a}^{(1;a-q;1)}(s) = 0, \end{eqnarray} \begin{eqnarray} & & {\cal B}_{2,2-a}^{(1;a-q;2)}(s) = {(-1)^{a-q+1}\over (a-q)! \sqrt{\Delta_1}^{a-q+1} \vev{4~1}^{a-q}}{d^{a-q} \over d\tau^{a-q}} \left({1\over (3-a)} {\gb{1-\tau 4|\eta|1}^{3-a}\over \gb{1-\tau 4|K|1}^{3-a}}\right. \nonumber \\ & & \times \left.\left. {\vev{1-\tau 4|Q_1 \eta|1-\tau 4}^{a-q} \vev{-\tau 4|R_1(K+s\eta)|1-\tau 4}^2 \vev{1-\tau 4|R_2(K+s\eta)|1-\tau 4}^2 \over \vev{1-\tau 4|\eta K|1-\tau 4}^{3} \vev{1-\tau 4| Q_2(K+s\eta)|1-\tau 4}}\right)\right|_{\tau=0}. \end{eqnarray} \begin{eqnarray} & & {\cal B}_{2,2-a}^{(2;a-q;1)}(s) = {(-1)^{a-q+1}\over (a-q)! \sqrt{\Delta_2}^{a-q+1} \vev{3~2}^{a-q}}{d^{a-q} \over d\tau^{a-q}} \left({1\over (3-a)} {\gb{3-\tau 2|\eta|3}^{3-a}\over \gb{3-\tau 2|K|3}^{3-a}}\right. \nonumber \\ & & \times \left.\left. {\vev{3-\tau 2|Q_2 \eta|3-\tau 2}^{a-q} \vev{3-\tau 2|R_1(K+s\eta)|3-\tau 2}^2 \vev{3|R_2(K+s\eta)|3-\tau 2}^2 \over \vev{3-\tau 2|\eta K|3-\tau 2}^{3} \vev{3-\tau 2| Q_1(K+s\eta)|3-\tau 2}}\right)\right|_{\tau=0}, \end{eqnarray} \begin{eqnarray} & & {\cal B}_{2,2-a}^{(2;a-q;2)}(s) = 0. 
\end{eqnarray} \bigskip \noindent $\bullet \ C[K]^{(2)}$ For the second term $N_2$ with $n=0$, the expression is much simpler: \begin{eqnarray} C[K]^{(2)} = K^2 \left.\left( {\cal B}_{0,0}^{(0)}(s)+\sum_{r=1}^2 \left({\cal B}_{0,0}^{(r;0;1)}(s)-{\cal B}_{0,0}^{(r;0;2)}(s)\right)\right)\right|_{s=0}, \end{eqnarray} where \begin{eqnarray} {\cal B}_{0,0}^{(0)}(s=0) &=& \left.\left( {(2\eta\cdot K) \over K^2}{\vev{\ell|R_1 K|\ell}\vev{\ell|R_2 K|\ell} \over \vev{\ell~\eta} \prod_{p=1}^2 \vev{\ell| Q_p K|\ell}}\right)\right|_{\ket{\ell}\to |K|\eta] } \nonumber \\ & = &{1 \over K^2} {\cb{\eta|K R_1 |\eta}\cb{\eta|K R_2 |\eta} \over \cb{\eta|K Q_1 |\eta}\cb{\eta|K Q_2 |\eta}} \nonumber \\ &=& - {1 \over K^2} {\cb{\eta~4} \cb{\eta~3} \over \cb{\eta~1}\cb{\eta~2} } \\ {\cal B}_{0,0}^{(1;0;1)}(s=0) & = & - {1 \over \sqrt{\Delta_1} } {\gb{4|\eta|4}\over \gb{4|K|4}} { \vev{4|R_1K|4} \vev{4|R_2K|4} \over \vev{4|\eta K|4} \vev{4|Q_2 K|4}} \nonumber \\ &=& - {1 \over K^2 } {\cb{\eta~4} \vev{4~2} \over \cb{\eta~ 1} \vev{4~3}} \\ {\cal B}_{0,0}^{(1;0;2)}(s=0) &=& 0. \\ {\cal B}_{0,0}^{(2;0;1)}(s=0) & = & - {1 \over \sqrt{\Delta_2} } {\gb{3|\eta|3} \over \gb{3|K|3}} { \vev{3|R_1K|3} \vev{3|R_2K|3} \over \vev{3|\eta K|3} \vev{3|Q_1 K|3}} \nonumber \\ &=& {1 \over K^2 } {\cb{\eta~3} \cb{4~2} \over \cb{\eta~ 2} \cb{1~ 2}} \\ {\cal B}_{0,0}^{(2;0;2)}(s=0) &=& 0. \end{eqnarray} \bigskip \noindent $\bullet$ Results: We have used the numerical routines of {\tt S@M} \cite{Maitre:2007jq} to show that while each single term $\cal{B}$ entering Eq.(\ref{gggg:bubble-coeff}) is $\eta$-dependent, their combination is indeed independent of the choice of $\eta$. The choice $\eta=k_3$ is found to be convenient. (Note that $k_3$ is not proportional to either of $K_1$, $K_2$.) Therefore we set $\eta=k_3$, and we obtain the following analytic result. 
\begin{eqnarray} c_{[23|41]} &=& - {2 \over (K^2)^2} \bigg\{ { \vev{1~3} \cb{1~3} (K^2)^{3}\over \cb{1~2}^2 \vev{4~3}^{2}} \left( {1 \over 2}-4z(1-z) \right) - {5 (K^2)^{4} \over 3 \cb{1~2}^2 \vev{4~3}^{2}}z(1-z) + { (K^2)^{4} \over 6 \cb{1~2}^2 \vev{4~3}^{2}} \nonumber \\ & & + {K^2 \cb{1~3}\vev{1~3} \over \vev{4~3}^2 \cb{1~2}^2} \left( ({1 \over 6} - {2 \over 3}z(1-z)) \vev{1~3}^2 \cb{1~3}^2 + ( {1\over 2}-3z(1-z)) K^2 \vev{1~3} \cb{1~3} \right) \bigg\} + {\vev{1~2}\cb{3~4} \over K^2} {\spb 4.3 \over \spb 1.2} \nonumber \\ &=& {\vev{1~2}^2 \cb{3~4}^2 \over 3s^2t^2} \left( -4 {(m^2+\mu^2) s } +6(m^2+\mu^2)t -2 st \right). \end{eqnarray} We have used the relations $K^2=t$, $\vev{1~3}\cb{1~3}=-s-t$. \subsubsection{Special choice of $\eta$} Alternatively, we discuss the calculation of the bubble coefficient by using the special choice of $\eta=K_1=-k_1$ from the beginning. With this choice, we need to use formulas for the $\cal{B}$ which are slightly different from the ones used in the previous section. They are given in Appendix B.3.1 of \cite{Britto:2007tt}. Our convention for the spinors is: \begin{eqnarray} \ket{\eta}=\ket{1}, \qquad |\eta ]=-|1]. \end{eqnarray} \begin{eqnarray} C[K]_n = (K^2)^{1+n} \sum_{q=0}^n {(-1)^q\over q!} {d^q \over ds^q}\left.\left( {\cal B}_{n,n-q}^{(0)}(s)+\sum_{r=2}^k\sum_{a=q}^n \left({\cal B}_{n,n-a}^{(r;a-q;1)}(s)-{\cal B}_{n,n-a}^{(r;a-q;2)}(s)\right)\right)\right|_{s=0}.~~~~\Label{spe-Re-gen-n-1}\end{eqnarray} \begin{eqnarray} {\cal B}_{n,t}^{(0)}(s) &=& -{d^{n+1}\over d\tau^{n+1}}\left.\left( { 1\over (1-2z) - s z }{[\eta|\W \eta K|\eta]^{-n-1}\over (t+1) (n+1)!} {\prod_{j=1}^{n+k} \vev{\ell|R_j (K+s\eta)|\ell}\over \vev{\ell~\eta}^{n+2} \prod_{p=2}^k \vev{\ell| Q_p(K+s\eta)|\ell}}|_{\ket{\ell}\to |K-\tau \W \eta|\eta] }\right)\right|_{\tau\to 0} \end{eqnarray} Since $k=2$, we can directly set $r=2$: \begin{eqnarray} {\cal B}_{n,t}^{(2;a;1)}(s) & = & { 1\over (1-2z) - s z }{(-1)^{a}\over (1-2z)^{a+1} (K^2)^{a+t+2} a!
\vev{3~2}^a}{d^a \over d\tau^a} \left({ \cb{1~3}^{t+1}\over (t+1)} \right. \nonumber \\ & & \times \left.\left. { \gb{\ell|Q_2|1}^a \prod_{j=1}^{n+2} \vev{\ell|R_j (K+s\eta)|\ell} \over \vev{\ell~1}^{n+1-t-a} \cb{1~4}^{n+2} \vev{4~\ell}^{n+2} }\right) \right|_{\ket{\ell}=\ket{3}-\tau\ket{2}} \end{eqnarray} \begin{eqnarray} {\cal B}_{n,t}^{(2;a;2)}(s) & = & { 1\over (1-2z) - s z }{(-1)^{a}\over (1-2z)^{a+1} (K^2)^{a+t+2} a! \vev{3~2}^a}{d^a \over d\tau^a} \left({ \cb{1~2}^{t+1} \over (t+1)} \right. \nonumber \\ & & \times \left.\left. { \gb{\ell|Q_2|1}^a \prod_{j=1}^{n+2} \vev{\ell|R_j (K+s\eta)|\ell} \over \vev{\ell~1}^{n+1-t-a} \cb{1~4}^{n+2} \vev{4~\ell}^{n+2} }\right) \right|_{\ket{\ell}=\ket{2}-\tau\ket{3}} \end{eqnarray} Now we proceed to evaluate.\\ \noindent $\bullet \ C[K]^{(2)}$ For the $N_2$ term with $n=0$, the evaluation is simple. In particular, there are no derivatives in $s$, so we can set $s=0$ directly. \begin{eqnarray} C[K]^{(2)} = K^2 \left.\left( {\cal B}_{0,0}^{(0)}(s)+ {\cal B}_{0,0}^{(2;0;1)}(s)-{\cal B}_{0,0}^{(2;0;2)}(s)\right)\right|_{s=0}. \end{eqnarray} Choosing $\W\eta=3$, we have \begin{eqnarray} {\cal B}_{0,0}^{(0)}(s=0) & = & {1 \over \cb{1~2} \vev{3~4}},~~~~ {\cal B}_{0,0}^{(2;0;1)}(s=0) = {1 \over K^2}{\cb{1~3}\cb{4~2}\over \cb{1~2}^{2} },~~~{\cal B}_{0,0}^{(2;0;2)}(s=0)=0\end{eqnarray} so the total contribution comes to \begin{eqnarray} C[K]^{(2)} = {\cb{4~3} \over \cb{1~2}} \end{eqnarray} \bigskip \noindent $\bullet \ C[K]^{(1)}$ For the $N_1$ term with $n=2$, the calculation is a bit more involved. \begin{eqnarray} C[K]^{(1)} = (K^2)^{3} \sum_{q=0}^2 {(-1)^q\over q!} {d^q \over ds^q}\left.\left( {\cal B}_{2,2-q}^{(0)}(s)+ \sum_{a=q}^2 \left({\cal B}_{2,2-a}^{(2;a-q;1)}(s)-{\cal B}_{2,2-a}^{(2;a-q;2)}(s)\right)\right)\right|_{s=0}. \end{eqnarray} For the various terms, we have \begin{eqnarray} {\cal B}_{2,2-q}^{(0)}(s) &=& {1 \over (3-q) 3! 
\vev{3~4}^2 \cb{1~2}^2 } {d^{3}\over d\tau^{3}}\left( { (1-2z)^4 (1+s)^2 \over (1-2z) - s z } \times \right.~~~\Label{exm-2-1} \\ & & \left. \left. { (1-\tau)^2 \left( K^2 (1+s) - \tau (K^2 +s\cb{3~1}\vev{1~3})\right)^2 \over \left( (1+s)(1-2z)K^2 + \tau ( (zs+2z-s-1)K^2 - s(1-2z)\cb{3~1}\vev{1~3}) + \tau^2 s (1-z)\cb{3~1}\vev{1~3} \right) } \right)\right|_{\tau\to 0} \nonumber \end{eqnarray} \begin{eqnarray} {\cal B}_{2,2-a}^{(2;a-q;1)}(s) & = & {(1+s)^2 (1-2z)^{3-a+q} \over (1-2z) - s z }{(-1)^{a-1} \vev{3~2}^{2} \cb{1~3}^{3-a}\over (K^2)^{4-q} (a-q)! } ~~~\Label{exm-2-2}\\ & & \times {d^{a-q} \over d\tau^{a-q}} \left( { ((1-z)\cb{1~2}+ \tau z \cb{1~3})^{a-q} (\vev{1~3}-\tau\vev{1~2})^{3-q} (K^2+s\cb{3~1}\vev{1~3}-s\tau \cb{3~1}\vev{1~2})^2 \over (3-a) \cb{1~4}^2 (\vev{4~3}-\tau\vev{4~2})^{4} }\right)\nonumber \end{eqnarray} The term $ {\cal B}_{2,2-a}^{(2;a-q;2)}(s)$ vanishes after taking the derivatives with respect to $s$. The reason is the following. Notice that \begin{eqnarray} & & {\cal B}_{n,t}^{(2;a-q;2)}(s) \\ & & = {(1+s)^2 (1-2z)^{3-a+q} \over (1-2z) - s z }{(-1)^{a-q} \vev{3~2}^2 \cb{1~2}^{t+1} \over (K^2)^{a-q+t+2} (a-q)! \vev{3~2}^{a-q}}{d^{a-q} \over d\tau^{a-q}} \left( { \tau^2 \over (t+1)} \left. { \gb{\ell|Q_2|1}^{a-q} \tgb{3|K+s\eta|\ell}^2 \over \vev{\ell~1}^{-1-t-a+q} \cb{1~4}^{2} \vev{4~\ell}^{4} }\right) \right|_{\ket{\ell}=\ket{2}-\tau\ket{3}}. \nonumber \end{eqnarray} We can see that the $\tau$-derivative vanishes unless $a-q=2$, in which case we get \begin{eqnarray} {\cal B}_{n,t}^{(2;2;2)}(s) & = & {(1+s)^2 (1-2z) \over (1-2z) - s z }{ \vev{3~2}^2 \cb{1~2}^{t+1} \over (K^2)^{4+t} \vev{3~2}^2} \left( { 1\over (t+1)} { (-z)^2 \gb{2|3|1}^2 s^2 \tgb{3|1|2}^2 \over \vev{2~1}^{-3-t} \cb{1~4}^{2} \vev{4~2}^{4} }\right) \end{eqnarray} However, the condition $a-q=2$ implies $a=2,q=0$. 
Therefore we can set $s=0$, and the expression vanishes: \begin{eqnarray} {\cal B}_{n,t}^{(2;2;2)}(s) =0.~~~\Label{exm-2-3} \end{eqnarray} Now we collect the results of (\ref{exm-2-1}),(\ref{exm-2-2}) and (\ref{exm-2-3}). We take $\W\eta=k_3$. Define \begin{eqnarray} C_1 \equiv {\cb{1~3}\vev{1~3} \over (K^2)^{2} \cb{1~2}^2 \vev{4~3}^2 }. \end{eqnarray} Let us begin with the terms with $q=0$: \begin{eqnarray} {\cal B}_{2,2}^{(0)}(s=0) &=& - {K^2(1-2z)^2\over 3\vev{3~4}^2\cb{2~1}^2 } \\{\cal B}_{2,2}^{(2;0;1)}(s=0) & = & C_1 (-{1\over 3})(1-2z)^2 \cb{1~3}^{2} \vev{1~3}^{2}\nonumber \\ {\cal B}_{2,1}^{(2;1;1)}(s=0) & = & C_1 \left( -{1\over 2}(1-2z)^2 \cb{1~3}^2 \vev{1~3}^2 + ({3\over 2}-{9\over 2}z+3z^2) K^2\cb{1~3}\vev{1~3} \right) \nonumber \\ {\cal B}_{2,0}^{(2;2;1)}(s=0) & = & C_1 \left( (-3+6z-3z^2) (K^2)^2 -(1-2z)^2 \vev{1~3}^2\cb{1~3}^2 +(6-18z+12z^2)K^2\cb{1~3} \vev{1~3} \right) \nonumber\end{eqnarray} For $q=1$: \begin{eqnarray} - \left. {d \over ds} {\cal B}_{2,1}^{(0)}(s) \right|_{ s = 0} &=& - {(1-2z)^2 \over 2 \vev{3~4}^2 \cb{1~2}^2 } { \cb{3~1}\vev{1~3} - (1-2z) K^2 \over (1-2z)} \\ - \left. {d \over ds}{\cal B}_{2,1}^{(2;0;1)}(s)\right|_{ s = 0} & = & C_1 \left( (1-2z)^2 \cb{1~3}^2\vev{1~3}^2 + (-1+{7 \over 2} z-3z^2) K^2 \vev{1~3} \cb{1~3}\right) \nonumber \\ - \left.{d \over ds} {\cal B}_{2,0}^{(2;1;1)}(s)\right|_{s=0} &= & C_1 \left( (4-10z+6z^2)(K^2)^2+ 2(1-2z)^2 \cb{1~3}^2\vev{1~3}^2 + (-10+30z-21 z^2)K^2 \cb{1~3}\vev{1~3} \right) \nonumber \end{eqnarray} For $q=2$: \begin{eqnarray} \left. {1 \over 2}{d^2 \over ds^2} {\cal B}_{2,0}^{(0)}(s) \right|_{s=0} &=& { 1 \over \vev{3~4}^2 \cb{1~2}^2 } { z \left( (1-2z)\cb{3~1}\vev{1~3} -(1-z)K^2 \right) }\\ \left. {1 \over 2}{d^2 \over ds^2} {\cal B}_{2,0}^{(2;0;1)}(s)\right|_{s=0} &= & C_1 \left( (-1+2z-z^2) (K^2)^2 -(1-2z)^2 \vev{1~3}^2 \cb{1~3}^2 + (4 - 14z + 12z^2) K^2 \vev{1~3} \cb{1~3} \right) \nonumber \end{eqnarray} All together, we get the following result for the $N_1$ term. 
\begin{eqnarray} C[K]^{(1)} &=& { \vev{1~3} \cb{1~3} (K^2)^{3}\over \cb{1~2}^2 \vev{4~3}^{2}} \left( {1 \over 2}-4z(1-z) \right) - {5 (K^2)^{4} \over 3 \cb{1~2}^2 \vev{4~3}^{2}}z(1-z) + { (K^2)^{4} \over 6 \cb{1~2}^2 \vev{4~3}^{2}} \\ & & + {K^2 \cb{1~3}\vev{1~3} \over \vev{4~3}^2 \cb{1~2}^2} \left( ({1 \over 6} - {2 \over 3}z(1-z)) \vev{1~3}^2 \cb{1~3}^2 + ( {1\over 2}-3z(1-z)) K^2 \vev{1~3} \cb{1~3} \right) \nonumber \end{eqnarray} \bigskip \noindent $\bullet \ $ The result for $c_{[23|41]}$ The total bubble coefficient is: \begin{eqnarray} c_{[23|41]} &=& - {2 \over (K^2)^2} \bigg\{ { \vev{1~3} \cb{1~3} (K^2)^{3}\over \cb{1~2}^2 \vev{4~3}^{2}} \left( {1 \over 2}-4z(1-z) \right) - {5 (K^2)^{4} \over 3 \cb{1~2}^2 \vev{4~3}^{2}}z(1-z) + { (K^2)^{4} \over 6 \cb{1~2}^2 \vev{4~3}^{2}} \nonumber \\ & & \qquad \qquad + {K^2 \cb{1~3}\vev{1~3} \over \vev{4~3}^2 \cb{1~2}^2} \left( ({1 \over 6} - {2 \over 3}z(1-z)) \vev{1~3}^2 \cb{1~3}^2 + ( {1\over 2}-3z(1-z)) K^2 \vev{1~3} \cb{1~3} \right) \bigg\} \nonumber \\ & & + {\vev{1~2}\cb{3~4} \over K^2} {\spb 4.3 \over \spb 1.2} \nonumber \\ &=& {\vev{1~2}^2 \cb{3~4}^2 \over 3s^2t^2} \left( -4 {(m^2+\mu^2) s } +6(m^2+\mu^2)t -2 st \right) \end{eqnarray} where we used the definitions $K^2=t$, $\vev{1~3}\cb{1~3}=-s-t$. \subsection{Comparison with the literature} The $t$-channel cut of $A(1^-,2^-,3^+,4^+)$ admits a decomposition in terms of cuts of master integrals as shown in Fig.\ref{fig:s23cut}. Its expression was given in Eq.(5.33) of \cite{Bern:1995db}, and reads \begin{eqnarray} \left. A^{\rm fermion}_4(1^-,2^-,3^+,4^+) \right|_{t-{\rm cut}} &=& -2 \left. A^{\rm scalar}_4(1^-,2^-,3^+,4^+) \right|_{t-{\rm cut}} - \left. {1 \over (4\pi)^{2-\epsilon}} A_4^{\rm tree} (t J_4-I_2(t))\right|_{t-{\rm cut}} \end{eqnarray} with \begin{eqnarray} \left. A^{\rm scalar}_4(1^-,2^-,3^+,4^+) \right|_{t-{\rm cut}} &=& \left.
{1 \over (4\pi)^{2-\epsilon}} A_4^{\rm tree} \left( {1 \over t} I_2^{(1,3),D=6-2\epsilon} + {1\over s}J_2^{(1,3)} -{t \over s} K_4 \right) \right|_{t-{\rm cut}} \end{eqnarray} and \begin{eqnarray} A^{\rm tree}_4 &=& i{\vev{1~2}^4 \over \vev{1~2}\vev{2~3}\vev{3~4}\vev{4~1}} = -i {\vev{1~2}^2 \cb{3~4}^2 \over K^2 s_{12}} \end{eqnarray} where we neglected the cut-free term, $I_1$ and $I_2(0)$. In standard notation we have $s=s_{12}, \ t=s_{23}, \ u= - s - t = s_{13}$. Now we translate the expression of \cite{Bern:1995db} into our canonical basis, using the identities of Appendix \ref{sec:IntegralsTranslation}. \begin{eqnarray} \left. A^{\rm fermion}_4(1^-,2^-,3^+,4^+) \right|_{t-{\rm cut}} &=& -2 \left. A^{\rm scalar}_4(1^-,2^-,3^+,4^+) \right|_{t-{\rm cut}} - \left. {1 \over (4\pi)^{2-\epsilon}} A_4^{\rm tree} (t J_4-I_2(t))\right|_{t-{\rm cut}} \\ &=& - \left. {1 \over (4\pi)^{2-\epsilon}} A_4^{\rm tree} \left( {2 \over t} I_2^{(1,3),D=6-2\epsilon} + {2\over s}J_2^{(1,3)} -{2t \over s} K_4 + t J_4-I_2(t) \right) \right|_{t-{\rm cut}} \\ &=& - i \left. A_4^{\rm tree} \left( {2 \over t} \left( {t \over 6} I^{\rm BFM}_2[1] +{1 \over 3}I_1 -{2 \over 3}I^{\rm BFM}_2[m^2+\mu^2] \right) + {2\over s} I^{\rm BFM}_2[m^2+\mu^2] \nonumber \right. \right. \\ & & \left. \left. -{2t \over s} I^{\rm BFM}_4[(m^2+\mu^2)^2] + t I^{\rm BFM}_4[m^2+\mu^2] - I^{\rm BFM}_2[1] \right) \right|_{t-{\rm cut}} \\ &=& \left. {\vev{1~2}^2 \cb{3~4}^2 \over st} \left( {2 \over 3} I^{\rm BFM}_2[1] +{4 \over 3t}I^{\rm BFM}_2[m^2+\mu^2] - {2\over s} I^{\rm BFM}_2[m^2+\mu^2] \nonumber \right. \right. \\ & & \left. \left. +{2t \over s} I^{\rm BFM}_4[(m^2+\mu^2)^2] - t I^{\rm BFM}_4[m^2+\mu^2] \right) \right|_{t-{\rm cut}} \end{eqnarray} We have reproduced every one of these coefficients, up to an overall minus sign in the amplitude. 
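The algebra in this translation is mechanical but error-prone. It can be checked numerically by treating the cut master integrals as independent variables and comparing the two linear combinations after the common prefactor is stripped off (a sketch of our own; the cut-free $I_1$ is set to zero, and the overall $\vev{1~2}^2\cb{3~4}^2/(st)$ factor is divided out of both sides):

```python
import random

def check_translation(trials=25, seed=1):
    """Compare the Bern-Morgan combination, after the dimension-shift
    identities, with the bracket multiplying <12>^2[34]^2/(st) above,
    treating the cut master integrals as independent variables."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        s, t = rng.uniform(1.0, 5.0), rng.uniform(1.0, 5.0)
        I2_1, I2_M, I4_M, I4_M2 = (rng.uniform(-1.0, 1.0) for _ in range(4))
        I1 = 0.0  # cut-free tadpole, dropped on the t-cut
        bern = -((2.0 / t) * ((t / 6.0) * I2_1 + I1 / 3.0 - (2.0 / 3.0) * I2_M)
                 + (2.0 / s) * I2_M - (2.0 * t / s) * I4_M2 + t * I4_M - I2_1)
        ours = ((2.0 / 3.0) * I2_1 + (4.0 / (3.0 * t)) * I2_M
                - (2.0 / s) * I2_M + (2.0 * t / s) * I4_M2 - t * I4_M)
        worst = max(worst, abs(bern - ours))
    return worst

max_diff = check_translation()
```

Agreement of the two combinations for random kinematics and random "integral" values confirms that the coefficient bookkeeping of the translation is consistent.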
\section{From polynomials in $u$ to final coefficients} As proven in Appendix \ref{polynomialproof}, the coefficients of 2-, 3-, and 4-point functions in four dimensions are polynomials in $u$ (or equivalently $\mu^2$), of known degree $d$: for boxes, $d= [(n+2)/2]$; for triangles, $d=[(n+1)/2]$; for bubbles, $d=[n/2]$; where $[x]$ denotes the greatest integer less than or equal to $x$. Using this fact, we can generally represent any coefficient of the master integrals as \begin{eqnarray} P_d(u) = \sum_{r=0}^d \ c_r \ u^r \ . \end{eqnarray} The coefficients $c_r$ are in one-to-one correspondence with the coefficients of the {\it shifted-dimension} master integrals (see Section 2). \\ To compute the $c_r$ analytically, one can proceed with the standard differentiations with respect to $u$, at $u=0$: \begin{eqnarray} c_r & = & \left. {1 \over r ! } {d^{r} \over d u^r} P_d(u) \right|_{u=0}. \end{eqnarray} When the differentiations are time-consuming, or the analytic expression is not needed, one can switch to the following numerical procedure and extract the $c_r$ algebraically, by {\it projections}. \begin{enumerate} \item Generate the values $P_{d,k}, (k=0,...,d)$, \begin{eqnarray} P_{d,k} = P_d(u_k) \ , \end{eqnarray} by evaluating $P_d(u)$ at the $(d+1)$-th roots of unity: \begin{eqnarray} u_k = \ e^{-2 \pi i {k / (d+1)}} \ . \end{eqnarray} A polynomial of degree $d$ carries $d+1$ coefficients, so $d+1$ sample points are required to avoid aliasing. \item Using the orthogonality relations for plane waves, one obtains the coefficient $c_r$ simply by the following formula: \begin{eqnarray} c_r &=& {1 \over d+1} \sum_{k=0}^{d} \ P_{d,k} \ e^{2 \pi i r {k /(d+1)} }. \end{eqnarray} \end{enumerate} \acknowledgments RB is supported by Stichting FOM. BF is supported by the Qiu-Shi Professor Fellowship from Zhejiang University, China.
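For illustration, the two-step projection procedure can be sketched numerically; this is a minimal example (NumPy assumed, the function name is hypothetical). Since a degree-$d$ polynomial carries $d+1$ coefficients, the sketch samples $d+1$ roots of unity and projects with the plane-wave orthogonality relation:

```python
import numpy as np

def project_coefficients(P, d):
    """Recover the coefficients c_r of a degree-d polynomial
    P(u) = sum_{r=0}^{d} c_r u^r from its values at roots of unity.
    A degree-d polynomial has d+1 coefficients, so d+1 samples are used."""
    n = d + 1
    # sample points u_k = exp(-2*pi*i*k/n), k = 0, ..., d
    u = np.exp(-2j * np.pi * np.arange(n) / n)
    samples = np.array([P(uk) for uk in u])
    # orthogonality of plane waves: c_r = (1/n) sum_k P(u_k) exp(+2*pi*i*r*k/n)
    phases = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    return (phases @ samples) / n

# example: P(u) = 3 - 2u + 5u^2, i.e. degree d = 2
coeffs = project_coefficients(lambda u: 3 - 2 * u + 5 * u ** 2, 2)
```

By the orthogonality of the plane waves the recovery is exact up to floating-point roundoff.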
\section{Introduction} After string theory indicated that noncommutative (NC) gauge field theory (GFT) could be one of its low-energy effective-theory realisations \cite{Seiberg:1999vs}, studies of noncommutative particle phenomenology began to develop \cite{Hinchliffe:2002km,Trampetic:2008bk}. The aim of the present paper is to find possible experimental signatures and to predict/estimate bounds on the NC spacetime from collider experimental data, in particular from the Standard Model (SM) forbidden, earlier introduced and well-known $Z \to \gamma\gamma$ decay mode, as well as from the invisible $Z \to \bar\nu\nu$ decay. The NC contributions to the latter decay are analyzed and discussed for the first time in this article. Considerable progress has been achieved in the Seiberg-Witten (SW) map and enveloping algebra based models, in which one can deform commutative gauge theories with arbitrary gauge group and representation~\cite{Madore:2000en,Jurco:2000ja,Jurco:2000fb,Jurco:2001my,Jurco:2001rq, Jackiw:2001jb,Calmet:2001na,Horvat:2011qn}. Possible constraints on the $\rm U_\star(1)$ charges \cite{Chaichian:2009uw} are also lifted in this approach. Noncommutative extensions of particle physics models, like the covariant noncommutative SM (NCSM) and the NC GUT models \cite{Calmet:2001na,Horvat:2011qn,Behr:2002wx, Deshpande:2001mu,Duplancic:2003hg,Aschieri:2002mc,Melic:2005fm,Martin:2013gma,Martin:2013lba}, were constructed. These allow a minimal (no new particle content) deformation, at the price that the interactions include infinitely many terms, defined through recursion over (and, in practice, a cut-off at a certain order of) the NC parameter $\theta^{\mu\nu}$. There were also other studies of NC modifications of particle-physics theory \cite{Bichl:2001cq,Ohl:2004tn,Melic:2005su,Alboteanu:2006hh,Buric:2007qx}. Studies of divergences have been performed since the early stage of the NC QFT development \cite{Filk:1996dm}. 
Later on, trouble-free one-loop quantum corrections to noncommutative scalar $\phi^4$ theories \cite{Grosse:2004yu,Magnen:2008pd,Meljanac:2011cs} and to NC QED \cite{Vilar:2009er} were obtained. Also, the SW-map expanded NCSM \cite{Calmet:2001na,Behr:2002wx,Duplancic:2003hg,Melic:2005fm} at first order in $\theta$, albeit breaking Lorentz symmetry, is found to be anomaly free \cite{Martin:2002nr} and to have well-behaved one-loop quantum corrections \cite{Bichl:2001cq,Buric:2006wm,Latas:2007eu,Buric:2007ix, Martin:2009vg,Martin:2009sg,Tamarit:2009iy,Buric:2010wd}. At first order in $\theta$ there are two important interactions that are suppressed and/or forbidden in the SM: the triple neutral gauge boson interaction \cite{Behr:2002wx,Duplancic:2003hg,Melic:2005fm} and the tree-level coupling of neutrinos to photons \cite{Schupp:2002up, Minkowski:2003jg}. Here the expansion and cut-off in powers of the NC parameter $\theta^{\mu\nu}$ corresponds to an expansion in momenta, restricting the range of validity to energies well below the NC scale $\Lambda_{\rm NC}$. Usually this is no problem for experimental predictions, because the lower bound on the NC parameters $\theta^{\mu\nu}=c^{\mu\nu}/\Lambda_{\rm NC}^2$ (the coefficients $c^{\mu\nu}$ being of order one) runs higher than the typical momenta involved in a particular process. However, there are exotic processes in the early universe, as well as processes involving ultra-high-energy cosmic rays \cite{Horvat:2009cm,Horvat:2010sr,Horvat:2011iv,Horvat:2011wh}, in which the typical energy involved is higher than the current experimental bound on the NC scale $\Lambda_{\rm NC}$. Thus, the previous $\theta$-cut-off approximate results are inapplicable. To cure the cut-off approximation, we use $\theta$-exact expressions, inspired by the exact formulas for the SW map \cite{Jurco:2001my,Mehen:2000vs,Liu:2000mja,Okawa:2001mv,Martin:2012aw}, and expand in powers of the gauge fields, as we did in \cite{Horvat:2011iv}. 
In $\theta$-exact models we have studied the UV/IR mixing \cite{Schupp:2008fs,Horvat:2011bs}, the neutrino propagation \cite{Horvat:2011qg} and also some NC photon-neutrino phenomenology \cite{Horvat:2009cm,Horvat:2010sr,Horvat:2011iv,Horvat:2011wh}. Due to the presence of the UV/IR mixing, the $\theta$-exact model is generally regarded as not perturbatively renormalizable, so the relation of quantum corrections to observations is not entirely clear~\cite{Horvat:2010km}. However, we recently observed that the severity of the divergences can be reduced by constraints on both the deformation freedom and the noncommutative parameter $\theta$ \cite{Horvat:2011bs,Horvat:2011qg,Horvat:2013rga}. In a noncommutative $\rm U(1)$ model we found that, via the combination of deformation freedom and appropriate choices of the noncommutative parameter $\theta$, one can remove all divergences from both the one-loop neutrino (neutral fermion) and photon self-energies. In particular, there exists a combination for which the photon self-interaction contribution to the photon self-energy reduces to a single finite term~\cite{Horvat:2013rga}. The deformation freedom was also shown recently to be capable of suppressing the divergent structures at one loop and first order in the noncommutative parameter $\theta$~\cite{Buric:2006wm,Latas:2007eu,Buric:2007ix, Martin:2009vg,Martin:2009sg,Tamarit:2009iy,Buric:2010wd}.\footnote{The photon self-energy can be controlled at any order of the $\theta$-expansion when such parameters are allowed to be renormalized~\cite{Bichl:2001cq}. 
It is also worth noticing that in a $\theta$-exact treatment the momentum power series sum into phase factors and therefore do not increase the superficial UV degree of divergence.} Assuming that control over the divergences in noncommutative phenomenological models can be achieved, either in the $\theta$-expanded or in the $\theta$-exact approach, tree-level amplitudes will always serve as the leading-order contribution to the relevant particle-physics processes in such models, provided perturbation theory is trustworthy. In the particular context of the UV/IR mixing problem it is also very important to mention a complementary approach \cite{hep-th/0606248,Abel:2006wj} in which NC gauge theories are realized as effective QFTs, underlain by some more fundamental theory such as string theory. It was claimed that for a large class of more general QFTs the phenomenological effects of the UV completion above the UV cutoff can be quite successfully modeled by a threshold value of the UV cutoff. So, in the presence of a finite UV cutoff, {\em neither} sort of divergence will ever appear, since the problematic phase factors effectively transform the highest energy scale (the UV cutoff) into the lowest one (the IR cutoff). What is more, not only is the full scope of noncommutativity experienced only in the range delimited by the two cutoffs, but for a high enough NC scale the whole standard model can be placed below the IR cutoff. In this way the UV/IR mixing problem becomes far less pressing, making, at the same time, a study of the theory at the quantum level much more reliable. To study triple neutral gauge boson couplings, in this report we construct the $\theta$-exact pure gauge-sector action of the $\rm SU(2)\times U(1)$ gauge group, while for the NC $Z$-boson-neutrino and photon-neutrino interactions we use the actions from \cite{Horvat:2011qn}. We compute only tree-level processes and therefore treat the NC model as an effective theory. 
The decay of the $Z$ boson into two photons has been used to predict possible experimental signatures of the NCSM \cite{Behr:2002wx,Duplancic:2003hg,Melic:2005fm,Buric:2007qx}. Since, by Bose-symmetry and rotational-invariance arguments \cite{LandauYoung}, a vector particle cannot decay into two massless vector particles, this forbidden decay has very little background from the standard model. Fixing $\theta$ spontaneously breaks the C, P, and/or CP discrete symmetries \cite{Aschieri:2002mc}. See also the general discussions of the C, P, T, and CP properties of the noncommutative interactions in \cite{SheikhJabbari:2000vi} and, in the case of our model, the discussion given in \cite{Melic:2005hb,Tamarit:2008vy}. A breaking of C symmetry occurs in the $Z\to \gamma\gamma$ process. One common approximation in the existing works is that only the vertices linear in the NC parameter $\theta$ were used. In this work we extend the NCSM gauge-sector action for the first time to all orders in $\theta$. We then discuss the decay widths $\Gamma(Z\to \gamma\gamma)$ and $\Gamma(Z\to \nu\nu)$ as functions of the NC scale $\Lambda_{\rm NC}$ for space-like and light-like noncommutativity, which are allowed by the unitarity conditions \cite{Gomis:2000zz,Aharony:2000gz}. \section{The NCSM gauge sector in a $\theta$-exact model} As usual, we consider the star$(\star)$-product formalism for quantum field theory on the deformed Moyal space with a constant NC parameter $\theta$. We start with a $\theta$-exact hybrid SW-map expansion of the noncommutative gauge field $\hat V_\mu$ in terms of commutative component fields, \begin{eqnarray} \hat V_\mu &=& V_\mu^aT^a-\frac{1}{8}\theta^{\rho\tau} \bigg[\Big\{V_\rho^a\stackrel{\star_2}{,}(\partial_\tau V_\mu+F_{\tau\mu})^b\Big\}\big\{T^a,T^b\big\} \nonumber\\ &&+\: \Big\{V_\rho^a\stackrel{\star_{2'}}{,}(\partial_\tau V_\mu+F_{\tau\mu})^b\Big\} \big[T^a,T^b\big] \bigg]. 
\label{compfield} \end{eqnarray} Besides the star($\star$)-commutator term, which starts at first order in $\theta$, one gets a $\star$-anti\-commutator term starting at second order in $\theta$. The generalized star products $\star_2$ and $\star_{2'}$ \cite{Mehen:2000vs,Schupp:2008fs,Horvat:2011bs,Horvat:2011qn} have the following definitions and properties: \begin{eqnarray} \phi(x)\star_2 \psi(x)&=&\psi(x)\star_2 \phi(x) \nonumber\\ &=&\frac{\sin\frac{\partial_1\theta \partial_2}{2}}{\frac{\partial_1\theta \partial_2}{2}}\phi(x_1)\psi(x_2)\bigg|_{x_1=x_2=x}, \nonumber\\ \big[\phi\stackrel{\star}{,}\psi\big] &=&i\theta^{\rho\tau}\partial_\rho\phi\star_2\partial_\tau\psi, \label{star2} \end{eqnarray} \begin{eqnarray} \phi(x)\star_{2'}\psi(x)&=&-\psi(x)\star_{2'} \phi(x) \nonumber\\ &=&\frac{\cos\frac{\partial_1\theta\partial_2}{2}-1} {\frac{\partial_1\theta\partial_2}{2}}\phi(x_1)\psi(x_2)\bigg|_{x_1=x_2=x}, \nonumber\\ \big\{\phi \stackrel{\star}{,} \psi\big\}-\big\{\phi,\psi\big\}&=&\theta^{\rho\tau}\partial_\rho\phi\star_{2'}\partial_\tau\psi, \label{star2'} \end{eqnarray} where $\star_2$ is symmetric and $\star_{2'}$ antisymmetric in its arguments. Both star products are non\-associative. The noncommutative action is defined in the usual way \cite{Buric:2006wm,Schupp:2002up,Minkowski:2003jg,Schupp:2008fs} \begin{equation} S=\int-\frac{1}{2}\Tr \hat F^{\mu\nu}\star \hat F_{\mu\nu}+i \bar{\hat \Psi}\star\fmslash{D}\hat \Psi\,, \label{Sg1} \end{equation} with definitions of the noncommutative covariant derivative and field strength resembling the corresponding expressions of non-abelian Yang-Mills theory: \begin{eqnarray} D_\mu\hat \Psi&=&\partial_\mu\hat \Psi-i[\hat V_\mu\stackrel{\star}{,}\hat \Psi], \nonumber\\ \hat F_{\mu\nu}&=&\partial_\mu \hat V_\nu-\partial_\nu \hat V_\mu-i[\hat V_\mu\stackrel{\star}{,}\hat V_\nu]. 
\label{DF} \end{eqnarray} The noncommutative fields ($\hat V_\mu,\hat \Psi$) in the above action are images of the corresponding commutative fields $V_\mu$ and $\Psi$ under the hybrid SW map. Starting with equations (\ref{compfield}) and (\ref{DF}), from the action (\ref{Sg1}) we obtain the following gauge-action part, written in terms of the commutative component gauge fields: \begin{eqnarray} S_{g}&=&\frac{-1}{2} \int\sum\limits_{a,b}\hbox{tr}(T_aT_b)F^a_{\mu\nu}F^{b\mu\nu} -\int\sum\limits_{a,b,c} \Bigg\{\hbox{tr} T_a\{T_b,T_c\} \nonumber\\ &\cdot& F^{a\mu\nu}\Bigg[\theta^{\rho\tau}\partial_\mu\bigg(V^b_\rho\star_2(\partial_\tau V^c_\nu +F^c_{\tau\nu})\bigg) +i[V^b_\mu\stackrel{\star}{,}V^c_\nu]\Bigg] \nonumber\\ &+&\hbox{tr} T_a[T_b,T_c]F^{a\mu\nu} \Bigg[i\bigg(\{V^b_\mu\stackrel{\star}{,}V^c_\nu\}-\{V^b_\mu,V^c_\nu\}\bigg) \nonumber\\ &-& \theta^{\rho\tau}V^b_\rho\star_{2'}(\partial_\tau V^c_\nu+F^c_{\tau\nu})\Bigg]\Bigg\}. \label{Sg2} \end{eqnarray} The two traces $\hbox{tr} T_a[T_b,T_c]$ and $\hbox{tr} T_a\{T_b,T_c\}$ are both well known in the representation theory of Lie algebras. The first one is proportional to the structure constant $f_{abc}$, with the quadratic Casimir as its coefficient, $ \hbox{tr} T_a[T_b,T_c]=i\hbox{tr} T_a f_{dbc}T_d=iA f_{abc}. $ Thus the different gauge sectors do not mix in this part of the action. The second trace is slightly more complicated, as it is connected with the details of the representation of the gauge group (we denote it by $B_{abc}$). 
Also, using the generalized star product $\star_2$, we can rewrite the star-commutator as $i[V^b_\mu\stackrel{\star}{,}V^c_\nu]=-\theta^{\rho\tau}\partial_\rho V^b_\mu\star_2 \partial_\tau V^c_\nu$, and obtain: \begin{eqnarray} S_{g} &=& \sum\limits_a A\int F^a_{\mu\nu}F^{a\mu\nu} \nonumber\\ &-& \sum\limits_{a,b,c}B_{abc}\int \theta^{\rho\tau}F^{a\mu\nu} \Bigg[\partial_\mu\bigg(V^b_\rho\star_2(\partial_\tau V^c_\nu+F^c_{\tau\nu})\bigg) \nonumber\\ &-& \partial_\rho V^b_\mu\star_2 \partial_\tau V^c_\nu \Bigg]+...=S_{\rm gauge}+.... \label{Sg3} \end{eqnarray} After a series of partial integrations and by noting that $B_{abc}$ is totally symmetric under the permutations of $a$, $b$ and $c$, one arrives at the action \begin{eqnarray} S_{\rm gauge}&=& A\int F^a_{\mu\nu}F^{a\mu\nu} \label{Sg5}\\ -B_{abc}\hspace{-.7cm}&&\int \theta^{\rho\tau}F^{a\mu\nu} \left(\frac{1}{4}F_{\rho\tau}^b\star_2F^c_{\mu\nu}-F^b_{\mu \rho}\star_2 F^c_{\nu\tau}\right)+.... \nonumber \end{eqnarray} Here we see that the triple gauge boson mixing terms are controlled by the cubic Casimir $B_{abc}$ and contain only the generalized star product $\star_2$. When $\star_2$ is reduced to unity by a $\theta$-expansion, this formula recovers the prior first-order result in~\cite{Behr:2002wx}. $B_{abc}$ is in general group and representation dependent. It has a few general properties:\\ $\bullet$ $B_{abc}$ takes on the opposite sign for a representation and its complex conjugate;\\ $\bullet$ $B_{abc}$ vanishes for the adjoint representation of any Lie group;\\ $\bullet$ $B_{abc}$ vanishes for any simple Lie group except ${\rm SU(N}\ge 3)$.\\ For these reasons, $\rm SO(10)$ and $\rm E_6$ GUT models have no additional noncommutative triple gauge boson couplings that are forbidden in the standard model (this attribute was qualified as the ``uniqueness'' of the noncommutative GUT model in \cite{Aschieri:2002mc}). 
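The listed properties of the symmetric trace can be checked directly in the fundamental representations of $\rm SU(2)$ and $\rm SU(3)$; a minimal numerical sketch (NumPy assumed; all names are hypothetical):

```python
import numpy as np

# su(2) fundamental generators T^a = sigma^a / 2 (Pauli matrices)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
T_su2 = sigma / 2

# su(3) fundamental generators T^a = lambda^a / 2 (Gell-Mann matrices)
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T_su3 = lam / 2

def B(T, a, b, c):
    """Totally symmetric trace B_abc = tr T_a {T_b, T_c}."""
    return np.trace(T[a] @ (T[b] @ T[c] + T[c] @ T[b]))

# su(2): {sigma_b, sigma_c} is proportional to the identity and the
# generators are traceless, so B_abc vanishes identically
B2_max = max(abs(B(T_su2, a, b, c))
             for a in range(3) for b in range(3) for c in range(3))

# su(3): B_abc = d_abc / 2; for instance d_118 = 1/sqrt(3)
B_118 = B(T_su3, 0, 0, 7).real

# the complex conjugate representation T_a -> -T_a^* flips the sign of B_abc
B_118_conj = B(-T_su3.conj(), 0, 0, 7).real
```

For $\rm SU(2)$ the symmetric trace vanishes, for $\rm SU(3)$ it reproduces $d_{abc}/2$, and the sign flips in the conjugate representation, in line with the properties quoted above.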
Furthermore, when the trace in the standard model is computed using generators descended from non-$\rm SU(N\ge 3)$ simple gauge groups, for example from $\rm SO(10)$ or $\rm E_6$ GUTs, there is no such coupling either. The standard model and the $\rm SU(5)$ GUT were studied before \cite{Deshpande:2001mu,Behr:2002wx,Aschieri:2002mc,Melic:2005fm}, as both could accommodate the neutral boson mixing couplings $\gamma\gamma\gamma$, $Z\gamma\gamma$, $ZZ\gamma$, $\gamma GG$, $ZGG$, and $ZZZ$. Since in the $\rm SU(5)$ GUT all heavy gauge bosons are charged, the standard model neutral boson couplings cover all the cases. To introduce possible extra neutral gauge bosons, one may consider models with left-right symmetry, like the Pati-Salam $\rm SU(4)\times SU(2)\times SU(2)$ \cite{Pati:1974yy} or trinification $\rm SU(3)^3\times Z_3$ \cite{trinification}. In the Pati-Salam model, the vanishing of $B_{abc}$ for $\rm SU(2)$ forces all mixing to arise from the $\rm SU(4)$ sector, that is, from $YYY$, $YGG$ (and $GGG$) couplings only, thus no mixing involves heavy gauge bosons. The $\rm SU(4)$ sector has either the $\bf 4$ or the $\bf {\bar 4}$ representation. Therefore, up to normalization, the terms \begin{eqnarray} \hbox{tr} Y\{Y,Y\}&=&-\frac{8}{9},\;\;\;\hbox{tr} Y \{G^a,G^b\}=\delta^{ab},\,\; \nonumber\\ \hbox{tr} G^a\{G^b,G^c\}&=&d^{abc}_{(3)}, \label{YYY} \end{eqnarray} are the only non-vanishing neutral-boson coupling components. The trinification $\rm SU(3)^3\times Z_3$ seems more promising, since the left and right symmetry groups are both $\rm SU(3)$. However, in this model, all matter multiplets are of the $\rm Z_3$-symmetric $(3,\bar 3,1)\oplus(\bar 3,1,3)\oplus(1,3,\bar 3)$ type. Thus, when $\rm Z_3$ is maintained, all mixing couplings cancel between $\bf 3$ and $\bf {\bar 3}$.\footnote{An analogous conclusion was drawn from the analysis of the NCSM gauge sector at first order in $\theta$ \cite{Buric:2006wm}. 
Also note that to exploit the possibility of a $Z'\gamma\gamma$ coupling one could probe, for example, the left-right symmetric electroweak model $\rm SU(3)_L\times SU(3)_R\times U(1)_X$ \cite{Dias:2010vt}.} In order to obtain the NCSM triple neutral gauge boson (TGB) interaction terms, in accordance with \cite{Calmet:2001na,Behr:2002wx,Deshpande:2001mu,Aschieri:2002mc,Duplancic:2003hg,Melic:2005fm}, we return to the standard model gauge group, $\rm G_{SM}=U(1)_Y \times SU(2)_L \times SU(3)_C$, write the SM gauge potential as \begin{eqnarray} V^{\mu}=g_Y A^{\mu}Y + g_L\sum^3_{a=1}B_a^{\mu}T^a_L+g_C\sum^8_{b=1}G_b^{\mu}T^b_C\,, \label{V} \end{eqnarray} and choose to sum in the action over all particle representations of the standard model. There are five fermion multiplets per generation and one Higgs multiplet, so we assign six arbitrary weights $\alpha_i,\ i=1...6$, to them. Considering triple gauge boson couplings, we find the non-vanishing elements to be \begin{eqnarray} &\hbox{tr} Y\{T^i_L,T^j_L\}=\hbox{tr} T^i_L\{Y,T^j_L\}=\hbox{tr} T^i_L\{T^j_L,Y\}, \nonumber\\& \hbox{tr} Y\{T^i_C,T^j_C\}=\hbox{tr} T^i_C\{Y,T^j_C\}=\hbox{tr} T^i_C\{T^j_C,Y\}, \nonumber\\& \hbox{tr} T^i_C\{T^j_C,T^k_C\}=A_{2C}\cdot d^{ijk}, \quad {\rm and}\quad \hbox{tr} Y^3. 
\label{TTT} \end{eqnarray} Expanding all possible cyclic permutations explicitly, one reaches the following $\theta$-exact result for the SM gauge boson mixing couplings: \begin{eqnarray} \lefteqn{S_{\rm gauge}= S_{\rm gauge}^{\rm SM}} \label{action3} \\ & &\hspace{-7mm}{}+{g^3_Y}\kappa_1\int {\theta^{\rho\tau}}\, f^{\mu\nu}\left(\frac{1}{4}f_{\rho\tau}\star_2 f_{\mu\nu}-f_{\mu\rho}\star_2 f_{\nu\tau}\right), \nonumber \\ & &\hspace{-7mm}{}+g_Yg_L^2\kappa_2 \int\sum_{i=1}^{3} \theta^{\rho\tau} \Big[B_i^{\mu\nu}\left(\frac{1}{4}f_{\rho\tau}\star_2B^i_{\mu\nu}- 2f_{\mu\rho}\star_2B^i_{\nu\tau}\right) \nonumber \\ & &\hspace{2cm}{} +f^{\mu\nu}\left(\frac{1}{2}B^i_{\rho\tau}\star_2B^{i}_{\mu\nu}-B^i_{\mu\rho}\star_2B^{i}_{\nu\tau}\right) \Big], \nonumber \\ & &\hspace{-7mm}{}+g_Yg_C^2\kappa_3\int\sum_{j=1}^{8} \theta^{\rho\tau}\Big[G_j^{\mu\nu}\left(\frac{1}{4}f_{\rho\tau}\star_2G^j_{\mu\nu}-2f_{\mu\rho}\star_2G^j_{\nu\tau}\right) \nonumber \\ & &\hspace{2cm}{} +f^{\mu\nu}\left(\frac{1}{2}G^j_{\rho\tau}\star_2G^j_{\mu\nu}-G^j_{\mu\rho}\star_2G^j_{\nu\tau}\right) \Big] . \nonumber \end{eqnarray} The couplings $\kappa_i,\;\;i=1,2,3$, as functions of the six weights $\alpha_i$, are parameters of the model: \begin{eqnarray} \kappa_1&=&\frac{1}{2}\left(-\alpha_1-\frac{\alpha_2}{4}+\frac{8 \alpha_3}{9} -\frac{\alpha_4}{9}+\frac{\alpha_5}{36}+\frac{\alpha_6}{4}\right), \nonumber\\ \kappa_2&=&\frac{1}{2}\Big(-\alpha_2+\alpha_5+\alpha_6\Big), \nonumber\\ \kappa_3&=&\frac{1}{2} \Big(2\alpha_3-\alpha_4+\alpha_5\Big), \label{kappa123} \end{eqnarray} with the weights satisfying the positivity conditions, \begin{equation} \alpha_j > 0,\; \forall j,\;j=1,...,6. 
\label{unequa} \end{equation} In order to restore the coupling constants, the $\alpha_i$'s must satisfy the three $\rm G_{SM}$ constraints \cite{Behr:2002wx} \begin{eqnarray} \frac{1}{g_Y^2}&=&2\alpha_1+\alpha_2+\frac{8\alpha_3}{3}+\frac{2\alpha_4}{3}+\frac{\alpha_5}{3}+\alpha_6, \nonumber\\ \frac{1}{g_L^2}&=&\alpha_2+3\alpha_5+\alpha_6, \nonumber\\ \frac{1}{g_C^2}&=&\alpha_3+\alpha_4+2\alpha_5. \label{YLc123} \end{eqnarray} Solutions to the above system of six equations and six inequalities are given in \cite{Behr:2002wx,Duplancic:2003hg}. Additional details of the model are given in \cite{Melic:2005fm,Buric:2006wm}. After performing the electroweak (EW) symmetry breaking, from the action (\ref{action3}) we extract the $Z$ boson-photon and other TGB couplings in terms of the physical fields, which are not present in the commutative SM, \begin{eqnarray} {\cal L}_{Z\gamma\gamma}&=&\Big[g_Y^3\kappa_1\sin\vartheta_W\cos^2\vartheta_W \nonumber\\ & &+g_Yg_L^2\kappa_2\big(\sin^3\vartheta_W -2\sin\vartheta_W\cos^2\vartheta_W\big)\Big]\, \nonumber\\ & & \cdot \;\theta^{\rho\tau} \Big[2Z^{\mu\nu}\big(2A_{\mu\rho}\star_2A_{\nu\tau}-A_{\mu\nu}\star_2A_{\rho\tau}\big) \nonumber\\ & & +A^{\mu\nu}\big(8Z_{\mu\rho}\star_2A_{\nu\tau} -Z_{\rho\tau}\star_2A_{\mu\nu}\big)\Big], \label{L2} \end{eqnarray} where $A_{\mu\nu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ and $Z_{\mu\nu} = \partial_{\mu}Z_{\nu}-\partial_{\nu}Z_{\mu}$ are the Abelian field strengths. Following the observations in \cite{Buric:2006wm,Buric:2007qx,Latas:2007eu,Horvat:2013rga}, one notices that introducing a further deformation in the gauge boson coupling may improve the quantum properties of the model. This gauge deformation freedom $\kappa_g$ can be introduced into \eqref{L2} to reach a $\theta$-exact $Z\gamma\gamma$ coupling equivalent to the linear-in-$\theta$ one given in \cite{Buric:2006wm}, preserving, at the same time, the $\rm U(1)$ gauge symmetry. 
The modified interaction then reads: \begin{eqnarray} {\cal L}_{Z\gamma\gamma}(\kappa_g)&=&\frac{e}{4} \sin2{\vartheta_W}\,{\rm K}_{Z\gamma \gamma}\, {\theta^{\rho\tau}} \label{L2mod}\\ &\cdot& \Big[2Z^{\mu\nu}\big(2A_{\mu\rho}\star_2A_{\nu\tau}-\kappa_gA_{\mu\nu}\star_2A_{\rho\tau}\big) \nonumber\\ &&+ A^{\mu\nu}\big(8Z_{\mu\rho}\star_2A_{\nu\tau} -\kappa_gZ_{\rho\tau}\star_2A_{\mu\nu}\big)\Big], \nonumber \end{eqnarray} where the EW symmetry breaking, via some redefinitions, induces the new dimensionless TGB coupling constant \begin{equation} K_{Z\gamma\gamma}=\frac{1}{2}\Big[g_Y^2\kappa_1+(g_Y^2-2g_L^2)\kappa_2\Big]. \label{KZgaga} \end{equation} Other interactions, like ${\cal L}_{\gamma\gamma\gamma}(\kappa_g)$, can be obtained in the same way. The deformation parameter $\kappa_g$ is originally set to one; however, $\kappa_g=3$ has been found to improve the quantum properties of several related models \cite{Buric:2006wm,Latas:2007eu,Horvat:2013rga}. Thus we shall do our phenomenological analysis of $Z\to\gamma\gamma$ decays for both values, i.e. for $\kappa_g=1,3$. All new redefined dimensionless coupling constants ${\rm K}_{\gamma\gamma\gamma}$, ${\rm K}_{Z\gamma\gamma}$, ..., appear as a consequence of the EW symmetry breaking procedure, and were first introduced in \cite{Behr:2002wx,Duplancic:2003hg,Melic:2005fm}. It is important to note that the particularly interesting coupling ${\rm K}_{Z\gamma\gamma}$ could also vanish. See the details in \cite{Behr:2002wx,Duplancic:2003hg}. In this work we focus on the partial width of the $Z\rightarrow \gamma\gamma$ decay arising from the $\theta$-exact Lagrangian (\ref{L2mod}). 
The gauge-invariant amplitude ${\cal M}_{Z\rightarrow \gamma\gamma}$ for the $Z(k_1)\rightarrow\gamma(k_2)\,\gamma(k_3)$ decay in momentum space reads \begin{eqnarray} {\cal M}_{Z\to \gamma\gamma}&=&-2e \sin2{\vartheta_W}\,{\rm K}_{Z\gamma \gamma}F_{\star_2}(k_1,k_2) \label{ampl}\\ &\cdot&{\Theta^{\mu\nu\rho}_3}(\kappa_g;k_1,-k_2,-k_3) \epsilon_{\mu}(k_1) \epsilon_{\nu}(k_2) \epsilon_{\rho}(k_3), \nonumber \end{eqnarray} where the momentum-dependent function \begin{eqnarray} F_{\star_2}(k_1,k_2)=\frac{\sin\frac{k_1\theta k_2}{2}}{\frac{k_1\theta k_2}{2}}, \label{Ffactor} \end{eqnarray} comes from the $\star_2$-product in the Lagrangian. The tensor ${\Theta^{\mu\nu\rho}_3}(\kappa_g;k_1,k_2,k_3)$ is given by \begin{eqnarray} {\Theta^{\mu\nu\rho}_3}(\kappa_g;k_1,k_2,k_3)&=& -\,(k_1 \theta k_2)\, \nonumber \\ & & \hspace*{-3cm} \cdot\; \Big[(k_1-k_2)^\rho g^{\mu \nu} +(k_2-k_3)^\mu g^{\nu \rho} + (k_3-k_1)^\nu g^{\rho \mu}\Big] \nonumber \\ & & \hspace*{-3cm} -\,\theta^{\mu \nu}\, \Big[ k_1^\rho \, (k_2 k_3) - k_2^\rho \, (k_1 k_3) \Big] \nonumber \\ & & \hspace*{-3cm} -\,\theta^{\nu \rho}\, \Big[ k_2^\mu \, (k_3 k_1) - k_3^\mu \, (k_2 k_1) \Big] \nonumber \\ & & \hspace*{-3cm} -\,\theta^{\rho \mu}\, \Big[ k_3^\nu \, (k_1 k_2) - k_1^\nu \, (k_3 k_2) \Big] \nonumber \\ & & \hspace*{-3cm} +\,(\theta k_2)^\mu \,\Big[g^{\nu \rho}\, k_3^2 - k_3^\nu k_3^\rho\Big] +(\theta k_3)^\mu\,\Big[g^{\nu \rho}\, k_2^2 - k_2^\nu k_2^\rho\Big] \nonumber \\ & & \hspace*{-3cm} +\,(\theta k_3)^\nu \,\Big[g^{\mu \rho}\, k_1^2 - k_1^\mu k_1^\rho \Big] +(\theta k_1)^\nu \,\Big[g^{\mu \rho}\, k_3^2 - k_3^\mu k_3^\rho \Big] \nonumber \\ & & \hspace*{-3cm} +\,(\theta k_1)^\rho \,\Big[g^{\mu \nu}\, k_2^2 - k_2^\mu k_2^\nu \Big] +(\theta k_2)^\rho \,\Big[g^{\mu \nu}\, k_1^2 - k_1^\mu k_1^\nu \Big] \nonumber \\ & & \hspace*{-3cm} +(\kappa_g-1)\,(\theta k_1)^{\mu}\,\Big[g^{\nu \rho}\,(k_3 k_2)-k_3^\nu k_2^\rho\Big] \nonumber \\ & & \hspace*{-3cm} +(\kappa_g-1)\,(\theta k_2)^{\nu} \,\Big[g^{\mu
\rho}\,(k_3 k_1)-k_3^\mu k_1^\rho\Big] \nonumber \\ & & \hspace*{-3cm} +(\kappa_g-1)\,(\theta k_3)^{\rho} \,\Big[g^{\mu \nu}\,(k_2 k_1)-k_2^\mu k_1^\nu\Big]\, , \label{amplit} \end{eqnarray} where the 4-momenta $k_1,k_2,k_3$ are taken to be incoming, satisfying momentum conservation $(k_1+k_2+k_3=0)$. In (\ref{amplit}) the deformation freedom parameter $\kappa_g$ appears symmetrically in the physical gauge bosons entering the interaction point, as one would expect. For $\kappa_g=1$, the tensor (\ref{amplit}) becomes the tensor $\Theta_3((\mu,k_1),(\nu,k_2),(\rho,k_3))$ from \cite{Melic:2005fm}. It is straightforward to see that ${k_1}_\mu\Theta^{\mu\nu\rho}_3={k_2}_\nu\Theta^{\mu\nu\rho}_3={k_3}_\rho\Theta^{\mu\nu\rho}_3=0$, i.e. $\Theta^{\mu\nu\rho}_3(\kappa_g;k_1,k_2,k_3)$ respects the aforementioned $\rm U(1)$ gauge symmetry. \section{Z Decays} To illustrate certain physical effects of our deformed $\theta$-exact construction, we compute the $Z\to\gamma\gamma$ and $Z\to\nu\bar\nu$ decay rates in the $Z$-boson rest frame, which can then readily be compared with the precision $Z$-resonance measurements at $e^+e^-$ colliders, where the $Z$ is produced almost at rest. 
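The transversality ${k_1}_\mu\Theta^{\mu\nu\rho}_3={k_2}_\nu\Theta^{\mu\nu\rho}_3={k_3}_\rho\Theta^{\mu\nu\rho}_3=0$ holds for generic off-shell momenta subject only to $k_1+k_2+k_3=0$, and can be verified numerically from Eq.~(\ref{amplit}); a sketch (NumPy assumed, signature $(+,-,-,-)$, all names hypothetical):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def theta3(kg, k1, k2, k3, th):
    """Tensor Theta_3^{mu nu rho}(kappa_g; k1, k2, k3): all momenta incoming
    with upper indices; th = theta^{mu nu}, antisymmetric, upper indices."""
    low = lambda k: g @ k                          # lower an index
    dot = lambda a, b: a @ g @ b                   # Minkowski product (a b)
    tk = lambda k: th @ low(k)                     # (theta k)^mu
    proj = lambda a, b: dot(a, b) * g - np.outer(a, b)
    k1tk2 = low(k1) @ th @ low(k2)                 # (k1 theta k2)
    T = -k1tk2 * (np.einsum('k,ij->ijk', k1 - k2, g)
                  + np.einsum('i,jk->ijk', k2 - k3, g)
                  + np.einsum('j,ki->ijk', k3 - k1, g))
    T -= np.einsum('ij,k->ijk', th, dot(k2, k3) * k1 - dot(k1, k3) * k2)
    T -= np.einsum('jk,i->ijk', th, dot(k3, k1) * k2 - dot(k2, k1) * k3)
    T -= np.einsum('ki,j->ijk', th, dot(k1, k2) * k3 - dot(k3, k2) * k1)
    T += np.einsum('i,jk->ijk', tk(k2), proj(k3, k3))
    T += np.einsum('i,jk->ijk', tk(k3), proj(k2, k2))
    T += np.einsum('j,ik->ijk', tk(k3), proj(k1, k1))
    T += np.einsum('j,ik->ijk', tk(k1), proj(k3, k3))
    T += np.einsum('k,ij->ijk', tk(k1), proj(k2, k2))
    T += np.einsum('k,ij->ijk', tk(k2), proj(k1, k1))
    T += (kg - 1) * (np.einsum('i,jk->ijk', tk(k1), proj(k3, k2))
                     + np.einsum('j,ik->ijk', tk(k2), proj(k3, k1))
                     + np.einsum('k,ij->ijk', tk(k3), proj(k2, k1)))
    return T

def ward_residual(kg, k1, k2, th):
    """Max |k_mu Theta^{mu nu rho}| over the three legs, k3 from conservation."""
    k3 = -k1 - k2
    T = theta3(kg, k1, k2, k3, th)
    return max(np.abs(np.einsum('i,ijk->jk', g @ k1, T)).max(),
               np.abs(np.einsum('j,ijk->ik', g @ k2, T)).max(),
               np.abs(np.einsum('k,ijk->ij', g @ k3, T)).max())

rng = np.random.default_rng(7)
a = rng.normal(size=(4, 4))
th = a - a.T                                       # generic antisymmetric theta
k1, k2 = rng.normal(size=4), rng.normal(size=4)    # generic off-shell momenta
residuals = [ward_residual(kg, k1, k2, th) for kg in (1.0, 3.0)]
```

The residuals vanish to machine precision for both $\kappa_g=1$ and $\kappa_g=3$, reflecting that the Lagrangian (\ref{L2mod}) is built entirely from Abelian field strengths.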
\subsection{The NCSM $Z \to \gamma\gamma$ decay rate} \noindent The partial $Z \to \gamma\gamma$ decay width obtained from (\ref{ampl}) reads \begin{equation} \label{Z2Gamma:DecayWidthFull} \begin{split} &\Gamma(Z\to\gamma\gamma)= \frac{ \alpha}{24} \sin^2 2\vartheta_W K_{Z\gamma\gamma}^2 M_Z \\& \cdot\;\Bigg[- 8\bigg(\Big(\kappa_g\big(9 \kappa_g-34\big) +35\Big) +2 \frac{|\vec{B_\theta}|^2}{|\vec{E_\theta}|^2} \\& +\big(\kappa_g-1\big) \big(\kappa_g+3\big) \frac{\big(\vec{E_\theta}\vec{B_\theta}\big)^2}{|\vec{E_\theta}|^4}\bigg) \\& + \bigg(2 \Big(\kappa_g\big(11\kappa_g-42\big)+43\Big) +\Big(\kappa_g\big(\kappa_g+2\big)+5\Big)\frac{|\vec{B_\theta}|^2}{ |\vec{E_\theta}|^2} \\& +\big(\kappa_g-1\big) \big(\kappa_g+3\big) \frac{\big(\vec{E_\theta}\vec{B_\theta}\big)^2}{|\vec{E_\theta}|^4}\bigg) \\& \cdot\,\bigg(M_Z^2|\vec{E_\theta}| \; {\rm Si}\Big(\frac{1}{2}M_Z^2|\vec{E_\theta}|\Big) +2\cos\Big(\frac{1}{2}M_Z^2|\vec{E_\theta}|\Big)\bigg) \\& + 2 \bigg( -\big(\kappa_g-1\big)\big(\kappa_g+3\big)\frac{|\vec{B_\theta}|^2}{|\vec{E_\theta}|^2} \Big(1-3\frac{\big(\vec{E_\theta}\vec{B_\theta}\big)^2}{|\vec{E_\theta}|^2|\vec{B_\theta}|^2}\Big) \\& + 2\Big(\kappa_g\big(7\kappa_g-26\big)+27\Big)\bigg) \frac{\sin\Big(\frac{1}{2}M_Z^2|\vec{E_\theta}|\Big)}{\Big(\frac{1}{2}M_Z^2|\vec{E_\theta}|\Big)}\Bigg], \end{split} \end{equation} where we have used the following notation $\theta^2 =(\theta^2)^{\mu}_{\mu} = \theta_{\mu\nu}\theta^{\nu\mu} = 2\left (\vec{E}_{\theta}^2 - \vec{B}_{\theta}^2 \right )$, and $|\vec{E_\theta}|\sim |\vec{B_\theta}|\sim1/\Lambda^2_{\rm NC}$. In (\ref{Z2Gamma:DecayWidthFull}) Si is the sine integral function, $\mbox{\rm Si}(z)= \int_0^z\,dt\, \frac{\sin t}{t}$. 
Expanding Si and the trigonometric functions in power series, one can easily show that the partial width (\ref{Z2Gamma:DecayWidthFull}) recovers the prior leading-order result of \cite{Buric:2007qx}: \begin{equation} \begin{split} \Gamma(Z\to\gamma\gamma)=&\frac{ \alpha}{72} \sin^2 2\vartheta_W K_{Z\gamma\gamma}^2 M_Z^5 \\&\cdot\Big[\big(13\kappa_g^2-50\kappa_g+51\big)|\vec{E_\theta}|^2 \\&+\big(\kappa_g^2+2\kappa_g+3\big)|\vec{B_\theta}|^2\Big]+\mathcal{O}\left(\Lambda_{\rm NC}^{-8}\right). \label{Z2GammaDecay:a} \end{split} \end{equation} The $\theta$-exact rate (\ref{Z2Gamma:DecayWidthFull}) differs considerably from the rate (\ref{Z2GammaDecay:a}) obtained at first order in $\theta$ in the SW/enveloping-algebra $\theta$-expanded model \cite{Buric:2007qx}. The difference appears due to the simultaneous presence of the $\theta$-exact model function $F_{\star_2}(k_1,k_2)$ and of the deformation freedom parameter $\kappa_g$. Both objects ($F_{\star_2}$ and $\kappa_g$) together produce a new, previously unknown mixing term, $(\kappa_g-1) (\kappa_g+3)(\vec{E_\theta}\vec{B_\theta})^2$, which vanishes for $\kappa_g=1,-3$, $\vec{E_\theta}=0$, $\vec{B_\theta}=0$ or $\vec{E_\theta}\perp \vec{B_\theta}$. So, unlike the first-order rate, which depends only on the lengths of $\vec{E}_\theta$ and $\vec{B}_\theta$, the $\theta$-exact rate has one more higher-order rotation-invariant term, $(\vec E_\theta\vec B_\theta)^2$, which would measure the relative angle between $\vec{E}_\theta$ and $\vec{B}_\theta$. At the traditional (undeformed gauge interaction) point $\kappa_g=1$ one would not see this effect. In the $Z$ rest frame the $Z$ momentum is reduced to its time-like component only; therefore $k_1\theta k_2=M_Z {\vec E}_\theta \cdot \vec k_2$. Thus the ${\vec B}_\theta$ component does not contribute to the $F_{\star_2}$-function. Consequently, for space-like noncommutativity the $\theta$-exact rate equals the first-order result, as expected. 
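That the $\theta$-exact width reduces to the first-order expression can be cross-checked numerically by comparing the square bracket of Eq.~(\ref{Z2Gamma:DecayWidthFull}) with Eq.~(\ref{Z2GammaDecay:a}) at small $|\vec E_\theta|$; a sketch (pure Python; the sine integral is evaluated from its Taylor series, adequate for small arguments, and all names are hypothetical):

```python
import math

def Si(x):
    """Sine integral from its Taylor series (adequate for small |x|)."""
    return sum((-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * math.factorial(2 * n + 1))
               for n in range(10))

def bracket_exact(kg, E, Bf, EdotB, MZ=1.0):
    """Square bracket of the theta-exact width, Eq. (Z2Gamma:DecayWidthFull);
    E = |E_theta|, Bf = |B_theta|, EdotB = E_theta . B_theta."""
    r, s = Bf ** 2 / E ** 2, EdotB ** 2 / E ** 4
    p = (kg - 1) * (kg + 3)
    x = 0.5 * MZ ** 2 * E
    t1 = -8 * ((kg * (9 * kg - 34) + 35) + 2 * r + p * s)
    t2 = ((2 * (kg * (11 * kg - 42) + 43) + (kg * (kg + 2) + 5) * r + p * s)
          * (MZ ** 2 * E * Si(x) + 2 * math.cos(x)))
    t3 = 2 * (-p * r * (1 - 3 * EdotB ** 2 / (E ** 2 * Bf ** 2))
              + 2 * (kg * (7 * kg - 26) + 27)) * math.sin(x) / x
    return t1 + t2 + t3

def bracket_first_order(kg, E, Bf, MZ=1.0):
    """Equivalent bracket implied by Eq. (Z2GammaDecay:a), i.e. MZ^4 [...] / 3."""
    return MZ ** 4 * ((13 * kg ** 2 - 50 * kg + 51) * E ** 2
                      + (kg ** 2 + 2 * kg + 3) * Bf ** 2) / 3

E, Bf = 1e-3, 7e-4          # small |E_theta|, |B_theta| in units of 1/M_Z^2
EdotB = 0.3 * E * Bf        # generic relative angle between E_theta and B_theta
ratios = [bracket_exact(kg, E, Bf, EdotB) / bracket_first_order(kg, E, Bf)
          for kg in (1.0, 3.0)]
```

For $|\vec E_\theta|\Lambda_{\rm NC}^2 \ll 1$ the ratio approaches one for both $\kappa_g=1$ and $\kappa_g=3$, while the $(\vec E_\theta\vec B_\theta)^2$ term only enters at higher orders.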
For light-like noncommutativity (also preserving unitarity \cite{Aharony:2000gz}) the full NC effect will still be exhibited. \subsection{The NCSM $Z\to \nu\bar{\nu}$ decay rate} \noindent Since the complete, $\theta$-exact $Z\nu\nu$ interaction on noncommutative spaces was discussed in detail in \cite{Horvat:2011qn}, we shall not repeat it here. We only give the $Z\nu\bar\nu$ vertex, which was first derived in our previous paper \cite{Horvat:2011qn}: \begin{equation} \label{vertexznunubar} \begin{split} \Gamma&_{Z\nu^l\bar\nu^{l'}}= i\frac{e}{\sin 2\vartheta_W}\Bigg\{\bigg[\gamma^\mu +\frac{i}{2}F_{\bullet}(q,k_l) \\&\cdot \sum\limits_{n=1}^{3}U^*_{l'n}U_{ln}\bigg((q\theta k_l)\gamma^\mu +(\theta q)^\mu\fmslash k_l-(\theta k_l)^{\mu}\fmslash q\bigg)\bigg]\frac{1-\gamma^5}{2} \\& +\sum\limits^3_{n=1}\sum\limits^{N+3}_{n'=4} \left((\theta k_l)^{\mu}U_{l'n}(m^D_{n(n'-3)})^*U_{ln'}\frac{1-\gamma^5}{2} \right.\\ &\left. +\Big((\theta q)^\mu-(\theta k_l)^\mu\Big) U^*_{l'n}m^D_{n(n'-3)}U^*_{ln'} \frac{1+\gamma^5}{2}\right)\bigg\}\\ &+\frac{\kappa e}{2}\tan\vartheta_W \bigg\{F_{\star_2}(q,k_l)\delta_{ll'}\bigg[(q\theta k_l)\gamma^\mu \\& +(\theta q)^\mu(\fmslash k_l-m_l)-(\theta k_l)^\mu\fmslash q\bigg]\bigg\}\,, \end{split} \end{equation} where $U$ is the mixing matrix, and $m^D$ denotes the Dirac mass part of the mass matrix \begin{equation} {\bf M}=\begin{pmatrix} 0 & m^D\\ (m^D)^T & m^M \end{pmatrix}\,. \label{mmatrixseesaw1} \end{equation} The indices $l,l'$ run from $1$ to $N+3$ and label the $N+3$ neutrino mass eigenstates. The momentum-dependent factor \begin{equation} F_{\bullet}(q,k_l):=\frac{(e^{-i\frac{q\theta k_l}{2}}-1)}{-i\frac{q\theta k_l}{2}} \label{factorbullet} \end{equation} arises from the generalized $\star$-products, while the constant $\kappa$ measures a correction from the chirality-blind $\star$-commutator coupling between neutrinos and the $Z$ boson. The latter coupling vanishes in the commutative limit, so $\kappa$ may be arbitrarily large. 
The non-$\kappa$-proportional term, on the other hand, is the noncommutative deformation of the standard model $Z$-neutrino coupling, which involves the left-handed neutrinos only. For details see section four in \cite{Horvat:2011qn}. For simplicity we set all neutrino masses to zero in this paper. For massless on-shell neutrinos the terms \begin{equation} [ (\theta q)^{\mu} {\fmslash k_l} -(\theta k_l)^\mu {\fmslash q} ]\, (1-\gamma_5), \label{p'p} \end{equation} in the vertex (\ref{vertexznunubar}) do not contribute to the $Z\to\nu\bar\nu$ amplitude due to the equations of motion. Thus the vertex (\ref{vertexznunubar}) can be written in a form similar to the SM vertex \begin{equation} \frac{ig}{2\cos\vartheta_W}\,\gamma^\mu\,(g_V-g_A\gamma_5), \label{gVgA} \end{equation} with \begin{eqnarray} g_V &=& 1 -\frac{1}{2} \exp\Big(i\frac{M_Z}{2} \, \vec p\vec E_\theta \Big) \nonumber\\ & +& 2i\kappa \sin^2\vartheta_W \sin\Big(\frac{M_Z}{2} \, \vec p\vec E_\theta \Big)~, \label{Z2fbarfNC:gV} \\ g_A &=& 1 -\frac{1}{2} \exp\Big(i\frac{M_Z}{2} \, \vec p\vec E_\theta \Big)~.
\label{Z2fbarfNC:gA} \end{eqnarray} Using the above vertex we obtain the following $Z\to \nu\bar{\nu}$ partial decay width \begin{equation} \begin{split} &\Gamma(Z\to\nu\bar\nu)= \Gamma_{\rm SM}(Z\to\nu\bar\nu)+\Gamma_{\kappa}(Z\to\nu\bar\nu) \\& =\Gamma_{\rm SM}(Z\to\nu\bar\nu) \\& +\frac{\alpha}{3 M_Z |\vec{E_\theta}|} \bigg[\kappa \big(1 -\kappa +\kappa\cos 2 \vartheta _W \big) \sec ^2\vartheta_W\cos\Big(\frac{1}{4}M_Z^2 |\vec{E_\theta}|\Big) \\& -8 \csc ^2 2 \vartheta _W\bigg]\sin\Big(\frac{1}{4}M_Z^2 |\vec{E_\theta}|\Big) \\& +\frac{\alpha M_Z}{12} \bigg[-2 \kappa ^2 +\big(\kappa (2 \kappa -1)+2\big) \sec^2\vartheta _W+2 \csc^2\vartheta _W\bigg] \\&=\Gamma_{\rm SM}(Z\to\nu\bar\nu)+\frac{\alpha M_Z^5|\vec{E_\theta}|^2}{288}\bigg[2 \csc ^2 2 \vartheta _W \\&+\kappa\big((2\kappa-1)\sec ^2\vartheta_W -2\kappa\big)\bigg]+\mathcal{O}\left(\Lambda_{\rm NC}^{-8}\right), \end{split} \label{rateZnunu} \end{equation} whose NC part vanishes when $\vec{E}_\theta\to 0$, i.e. for vanishing $\theta$ or for space-like noncommutativity, but not for light-like noncommutativity. As before, in the $Z$-rest frame the $Z$-momenta reduce to the time-like component only, therefore $q\theta k=M_Z {\vec E}_\theta \vec k$, and the ${\vec B}_\theta$ component does not contribute to the $F_{\star_2}$ and $F_{\bullet}$ functions, respectively. \section{Discussion and conclusion} In the previous section we have shown that the tree-level three-particle decays ($Z\to\gamma\gamma,\;\nu\bar\nu$) in the covariant noncommutative quantum gauge theory based on Seiberg-Witten maps can be computed without an expansion in the noncommutative parameter $\theta$. We obtained for the first time covariant $\theta$-exact triple neutral gauge boson interactions within the NCSM gauge sector. For the computation of the invisible $Z$-decays we also needed the $\theta$-exact $Z\nu\bar\nu$ interactions constructed for the first time in \cite{Horvat:2011qn}.
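The couplings (\ref{Z2fbarfNC:gV})--(\ref{Z2fbarfNC:gA}) and the expansion in (\ref{rateZnunu}) admit a quick numerical cross-check. The sketch below (plain Python, standard input values; function names are ours) confirms that for $\vec p\cdot\vec E_\theta\to 0$ the couplings reduce to the SM neutrino values $g_V=g_A=1/2$, and that the $\theta$-exact NC part of the rate approaches its quadratic small-$|\vec E_\theta|$ form.

```python
import math

ALPHA, MZ, S2W = 1/137.036, 91.1876, 0.23116
TH = math.asin(math.sqrt(S2W))          # Weinberg angle theta_W

def gV(kappa, x):
    """g_V of (Z2fbarfNC:gV) for x = p.E_theta (GeV^-1); complex in general."""
    phase = complex(math.cos(MZ*x/2), math.sin(MZ*x/2))
    return 1 - 0.5*phase + 2j*kappa*S2W*math.sin(MZ*x/2)

def gA(x):
    """g_A of (Z2fbarfNC:gA)."""
    return 1 - 0.5*complex(math.cos(MZ*x/2), math.sin(MZ*x/2))

def dGamma_exact(kappa, E):
    """NC part of (rateZnunu); E = |E_theta| in GeV^-2."""
    sec2, csc2w = 1/math.cos(TH)**2, 1/math.sin(2*TH)**2
    s, c = math.sin(MZ**2*E/4), math.cos(MZ**2*E/4)
    t1 = ALPHA/(3*MZ*E)*(kappa*(1 - kappa + kappa*math.cos(2*TH))*sec2*c
                         - 8*csc2w)*s
    t2 = ALPHA*MZ/12*(-2*kappa**2 + (kappa*(2*kappa - 1) + 2)*sec2
                      + 2/math.sin(TH)**2)
    return t1 + t2

def dGamma_quad(kappa, E):
    """Leading O(|E_theta|^2) term of the same expression."""
    sec2, csc2w = 1/math.cos(TH)**2, 1/math.sin(2*TH)**2
    return ALPHA*MZ**5*E**2/288*(2*csc2w + kappa*((2*kappa - 1)*sec2
                                                  - 2*kappa))

# commutative limit: SM neutrino couplings g_V = g_A = 1/2
assert abs(gV(1, 0.0) - 0.5) < 1e-12 and abs(gA(0.0) - 0.5) < 1e-12

# exact NC rate matches its quadratic expansion for small |E_theta|
for kappa in (0, 1):
    ex, qd = dGamma_exact(kappa, 1e-6), dGamma_quad(kappa, 1e-6)
    assert qd > 0 and abs(ex - qd)/qd < 1e-3
```

Note that the constant terms of the two square brackets in (\ref{rateZnunu}) cancel exactly in the small-$|\vec E_\theta|$ limit, which is what the comparison above probes.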
Focusing on $Z$ decays into two photons and two neutrinos, we have reconsidered previous computations that were done with less sophisticated tools and derived new bounds on the scale of noncommutativity. Let us present here our phenomenological results and compare them to those obtained previously. Noncommutative computations, especially $\theta$-exact ones, are always prone to the question of nonlocal quantum corrections. However, several prior results~\cite{Buric:2006wm,Latas:2007eu,Buric:2007ix,Martin:2009vg,Martin:2009sg, Tamarit:2009iy,Buric:2010wd,Horvat:2011bs,Horvat:2011qg,Horvat:2013rga} have shown that the severity of the novel divergent behavior can be put under control by adjusting the aforementioned deformation parameter(s). We are confident that such control can be extended to an even broader class of models. The tree-level amplitude evaluation is, in any case, of considerable phenomenological importance. In the following we include certain variations of the deformation parameter to illustrate its phenomenological effect as well. (I) First, we shall perform our analysis of $Z\to\gamma\gamma$ decays for the two unitarity-preserving cases, that is, for light-like and for space-like noncommutativity, respectively. Second, since the divergent quantum properties of our NCSM gauge sector are controlled by two highlighted values of the deformation parameter, $\kappa_g=1,3$, we perform our numerical analysis for those two values, too. Now we define the ratio $\Gamma(Z\to\gamma\gamma)/\Gamma(Z)_{\rm{tot,SM}}$, where $\Gamma(Z)_{\rm{tot,SM}}=(2.4952 \pm 0.0023)$ GeV is the full $Z$-boson width \cite{PDG2011}. The $Z$-boson mass is $M_Z=(91.1876\pm 0.0021)$ GeV, the Weinberg angle is given by $\sin^2\vartheta_W = 0.23116$, and the fine structure constant is $\alpha = 1/137.036$ \cite{PDG2011}.
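Plugging these numbers into the first-order formula (\ref{Z2GammaDecay:a}) already indicates the size of the effect at collider-accessible scales. A minimal numerical sketch (light-like $\theta$, $\Lambda_{\rm NC}=1$ TeV, $\kappa_g=1$; function name is ours):

```python
ALPHA, MZ = 1/137.036, 91.1876
S2W, K_ZGG, GAMMA_TOT = 0.23116, 0.33, 2.4952   # Z width in GeV

def gamma_first_order(kappa_g, E2, B2):
    """First-order rate (Z2GammaDecay:a); E2 = |E_theta|^2, B2 = |B_theta|^2."""
    sin2_2w = 4*S2W*(1 - S2W)                   # sin^2(2 theta_W)
    return (ALPHA/72*sin2_2w*K_ZGG**2*MZ**5
            *((13*kappa_g**2 - 50*kappa_g + 51)*E2
              + (kappa_g**2 + 2*kappa_g + 3)*B2))

Lam = 1e3                                       # Lambda_NC = 1 TeV in GeV
E2 = B2 = 1/(2*Lam**4)                          # light-like theta
ratio = gamma_first_order(1, E2, B2)/GAMMA_TOT

# roughly 2e-7: already two orders of magnitude below the experimental
# upper limit 5.2e-5 at a scale of only 1 TeV
assert 1.5e-7 < ratio < 2.5e-7 and ratio < 5.2e-5
```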
Figure \ref{Fig1} displays the main results for the ratio $\Gamma(Z\to\gamma\gamma)/\Gamma(Z)_{\rm{tot,SM}}$ based on Eq.~(\ref{Z2Gamma:DecayWidthFull}), as a function of the scale of noncommutativity $\Lambda_{\rm NC}$, for fixed coupling constant $|K_{Z\gamma\gamma}|=0.33$ and for the two choices of deformation parameter $\kappa_g=1,3$, respectively. Figure~\ref{Fig2} magnifies the low-NC-scale regime of Fig.~\ref{Fig1} for fixed coupling constant $|K_{Z\gamma\gamma}|=0.33$, for fixed gauge-deformation parameter $\kappa_g=3$, and with one more curve for the full rank $\theta^{\mu\nu}$ \cite{Horvat:2013rga} added.\footnote{The unitarity issue in the one-loop photon two-point function of \cite{Gomis:2000zz,Aharony:2000gz} is shown to be absent for the same full rank $\theta$ in the U(1) model of~\cite{Horvat:2013rga} with deformation $\kappa_g=3$.} Note that the first-order result for $\kappa_g=3$ becomes \begin{equation} \begin{split} \Gamma(Z\to\gamma\gamma)=&\frac{ \alpha}{4} \sin^2 2\vartheta_W K_{Z\gamma\gamma}^2 M_Z^5\Big(|\vec{E}_\theta|^2+|\vec{B}_\theta|^2\Big) \\&+\mathcal{O}\big(\Lambda_{\rm NC}^{-8}\big), \end{split} \end{equation} with the two lengths $|\vec{E}_\theta|$ and $|\vec{B}_\theta|$ weighing exactly the same. Therefore the first-order results for all three $\theta$ choices in Figure~\ref{Fig2} coincide with the full result for space-like $\theta$, while the $\theta$-exact results resolve the differences among the $\theta^{\mu\nu}$ choices at low NC scales. An important consequence of this paper is that the coupling $K_{Z\gamma\gamma}$ and the NC scale $\Lambda_{\rm NC}$ are independent parameters, whereas in the $\theta$-expanded models (linear in $\theta$) experiments always measure the combination $|K_{Z\gamma\gamma}|^2/\Lambda_{\rm NC}^4$. Furthermore, at very low $\Lambda_{\rm NC}$ scales the full $\theta$-exact decay rate is sensitive to the tensor structure of $\theta^{\mu\nu}$ for $\kappa_g\neq 1, -3$, as shown in Fig.~\ref{Fig2}.
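The $|K_{Z\gamma\gamma}|^2/\Lambda_{\rm NC}^4$ degeneracy of the $\theta$-expanded prediction can be made explicit in a few lines. In the schematic sketch below (overall constants dropped, function name is ours), the simultaneous rescaling $K\to 2K$, $\Lambda_{\rm NC}\to\sqrt{2}\,\Lambda_{\rm NC}$ leaves the first-order rate unchanged.

```python
def rate_first_order(K, lam, kappa_g=3):
    """First-order Gamma(Z -> gamma gamma) for light-like theta; overall
    constants dropped, since only the K^2/Lambda^4 scaling matters here."""
    E2 = B2 = 1/(2*lam**4)
    return K**2*((13*kappa_g**2 - 50*kappa_g + 51)*E2
                 + (kappa_g**2 + 2*kappa_g + 3)*B2)

# only the combination K^2/Lambda_NC^4 is observable at first order:
# K -> 2K together with Lambda -> sqrt(2)*Lambda leaves the rate invariant
r1 = rate_first_order(0.33, 1000.0)
r2 = rate_first_order(2*0.33, 2**0.5*1000.0)
assert abs(r1 - r2)/r1 < 1e-12
```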
However, these differences are not relevant when the NC scale is much larger than the $Z$-boson mass. This fact is quite transparent in both Figs.~\ref{Fig1} and \ref{Fig2}. In order to maximize the $Z\to\gamma\gamma$ rate, in the above analyses we have used the maximal allowed value $|K_{Z\gamma\gamma}|=0.33$ computed and given in the figures and tables of \cite{Duplancic:2003hg}. This differs from the analysis of the rate computed at first order in $\theta$ in~\cite{Buric:2007qx}, where the lower central value $|K_{Z\gamma\gamma}|=0.05$ from the figures and tables in \cite{Duplancic:2003hg} was used. In any case, the ratio of the two values, $|0.33/0.05|^2\simeq43$, represents just an overall shift factor in any analysis. Finally, it is important to note that the coupling $K_{Z\gamma\gamma}$ could also vanish, producing a zero rate for the $Z \to\gamma\gamma$ decay. \\ \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,angle=0]{Fig1.eps} \end{center} \caption{$\Gamma(Z\to\gamma\gamma)/\Gamma(Z)_{\rm{tot,SM}}$ vs. $\Lambda_{\rm NC}$, for fixed coupling constant $|K_{Z\gamma\gamma}|=0.33$. The black horizontal line is the experimental upper limit $\Gamma(Z\to\gamma\gamma)/\Gamma(Z)_{\rm{tot,SM}}<5.2\cdot 10^{-5}$ \cite{PDG2011}. Dashed and solid curves correspond to the gauge deformation freedom parameter $\kappa_g=1,3$, respectively. Red corresponds to the light-like case $|\vec{E_\theta}|=|\vec{B_\theta}|=1/\sqrt{2}\Lambda_{\rm{NC}}^2$ and $\vec{E_\theta}\vec{B_\theta}=0$ (overlapping with the case $\vec{E_\theta}\vec{B_\theta}=1/2\Lambda_{\rm{NC}}^4$). Black corresponds to the space-like case $|\vec{E_\theta}|=\vec{E_\theta}\vec{B_\theta}=0$ and $|\vec{B_\theta}|=1/\Lambda_{\rm{NC}}^2$.} \label{Fig1} \end{figure} \\ \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,angle=0]{Fig2.eps} \end{center} \caption{$\Gamma(Z\to\gamma\gamma)/\Gamma(Z)_{\rm{tot,SM}}$ vs.
$\Lambda_{\rm NC}$ at very small $\Lambda_{\rm NC}$ scales, for fixed coupling $|K_{Z\gamma\gamma}|=0.33$ and deformation freedom $\kappa_g=3$. The space-like case $|\vec{E_\theta}|=\vec{E_\theta}\vec{B_\theta}=0$, $|\vec{B_\theta}|=1/\Lambda_{\rm{NC}}^2$, which coincides with the first-order approximation for space-like, light-like and full rank $\theta$, is given as a solid (blue) line. The light-like case, $|\vec{E_\theta}|=|\vec{B_\theta}|=1/\sqrt{2}\Lambda_{\rm{NC}}^2,\;\vec{E_\theta}\perp \vec{B_\theta}$, is given as a dotted (red) curve, and the full rank $\theta$ case, $|\vec{E_\theta}|=|\vec{B_\theta}|=1/\sqrt{2}\Lambda_{\rm{NC}}^2,\;\vec{E_\theta}\parallel \vec{B_\theta}$, as a dashed (green) curve, respectively. A small but visible deviation can be seen between the dotted and dashed curves.} \label{Fig2} \end{figure} Inspecting Fig.~\ref{Fig1} we see that the current experimental upper limit $\Gamma(Z\to\gamma\gamma)/\Gamma(Z)_{\rm{tot,SM}}<5.2\cdot 10^{-5}$ \cite{PDG2011} is far too weak to produce any meaningful constraint on the scale of noncommutativity. However, we can certainly expect that further analyses of the LHC experiments will significantly improve the current limit on the $Z\to\gamma\gamma$ partial width. Fig.~\ref{Fig1} also clearly shows that, for example, for a ratio as low as $\sim10^{-15}$ the noncommutative scale is $\Lambda_{\rm{NC}} \stackrel{>}\sim 100$ TeV, thus unobservable at LHC energies. (II) An entirely new analysis of the NC contributions to the $Z\to\nu\bar\nu$ decay rate (\ref{rateZnunu}) is presented next.
A comparison of the experimental $Z\to\nu\bar\nu$ decay width $\Gamma_{\rm invisible}=(499.0\pm 1.5)$ MeV \cite{PDG2011} with its SM theoretical counterpart allows us to set the constraint $\Delta\Gamma=\Gamma(Z\to\nu\bar\nu) - \Gamma_{\rm SM}(Z\to\nu\bar\nu) \lesssim 1$ MeV, from which a bound on the scale of noncommutativity $\Lambda_{\rm{NC}} = {|\vec{E_\theta}|^{-1/2}}\stackrel{>}\sim 120$ GeV is obtained (see Fig.~\ref{Fig3}), for both choices $\kappa =0,1$. It is worth noticing that, since (\ref{rateZnunu}) depends only on $\vec E_{\theta}$, the effect exists only for $\vec E_{\theta}\neq 0$. Thus Fig.~\ref{Fig3} does not include the space-like noncommutativity case. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,angle=0]{Fig3.eps} \end{center} \caption{$\Delta\Gamma(Z\to\nu\bar\nu)$ decay width vs. $\Lambda_{\rm{NC}}$, for $\kappa=1$ (solid) and $\kappa=0$ (dashed), respectively. Black curves correspond to the $\theta$-exact rate, while straight (red) curves correspond to the expanded decay rate.} \label{Fig3} \end{figure} Compared to previous results, the total decay rates are modified by a factor which remains finite throughout all energy scales. Thus, our results behave much better than the $\theta$-expansion method when ultra-high-energy processes are considered. We expect that similar control over the high-energy behavior can be extended to $\theta$-exact perturbation theory involving more external fields in the near future. All of our results show closed/convergent forms (see Figs.~\ref{Fig1} to \ref{Fig3}) throughout the full range of interaction energy scales, thus facilitating further phenomenological applications. This would provide a considerably improved theoretical basis for research work in the field of noncommutative particle phenomenology. \\ The work of R.H., D.K., J.T., and J.Y. is supported by the Croatian Ministry of Science, Education and Sports under Contract No. 098-0982930-2872. The work of A.I.
is supported by the Croatian Ministry of Science, Education and Sports under Contract No. 098-0982930-1016.
\section{Introduction} In this note we will study \emph{lamplighter random walks} on graphs. First we show the equivalence of the lamplighter random walk with the random walk on percolation clusters (Theorem~\ref{thm:pnix=epCx}) by comparing moments. The main result is a spectral resolution of its transition operator (Theorem~\ref{thm:eigenprojections}), which shows in more detail the intimate connection between lamplighter random walks and percolation clusters. The present note complements our previous paper \cite{LehnerNeuhauserWoess:spectrum} where the case of Cayley graphs was treated using group algebra techniques and we refer there for a more detailed discussion. Here we point out certain simplifications which occur when the group structure is neglected. \section{Random Walks on Graphs} Let $G=(V,E)$ be a graph and consider a nearest neighbour random walk on $G$, i.e., a Markov chain $X_n$ with state space $V$ and transition probabilities \begin{equation} \label{eq:rw:p(x,y)} \IP[X_{n+1}=y| X_n=x] = \begin{cases} p(x,y) &\text{if $y\sim x$} \\ 0 & \text{otherwise} \end{cases} \end{equation} where $p(x,y)$ are fixed given probabilities. Associated to this Markov chain is the \emph{transition operator} $$ Tf(x) = \sum_{y\sim x} p(x,y)\,f(y) $$ on the space $$ \ell_2(G) = \bigl\{f:V\to \IC : \sum_{x\in V} \abs{f(x)}^2 < \infty\bigr\} $$ of square summable functions on (the vertices of) $G$. 
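On a finite graph the transition operator is just a stochastic matrix, and the two properties used below, stochasticity and selfadjointness for symmetric $p(x,y)$, are easy to exhibit concretely. A minimal sketch (hypothetical example: simple random walk on the $4$-cycle):

```python
import random

# simple random walk on the 4-cycle 0 - 1 - 2 - 3 - 0
V = list(range(4))
def p(x, y):
    return 0.5 if abs(x - y) % 4 in (1, 3) else 0.0

def T(f):
    """Transition operator Tf(x) = sum_{y ~ x} p(x,y) f(y)."""
    return [sum(p(x, y)*f[y] for y in V) for x in V]

ones = [1.0]*4
assert T(ones) == ones                  # stochasticity: T1 = 1

random.seed(0)
f = [random.random() for _ in V]
g = [random.random() for _ in V]
dot = lambda a, b: sum(u*v for u, v in zip(a, b))
# symmetric p(x,y) makes T selfadjoint on l_2(G)
assert abs(dot(T(f), g) - dot(f, T(g))) < 1e-12
```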
The $n$-step return probability \begin{equation} \label{eq:SRWnstep} p^{(n)}(x,x) = \IP[X_n=x | X_0=x] = \sum_{x_1,x_2,\dots,x_{n-1}} p(x,x_1)\,p(x_1,x_2)\,\dotsm\, p(x_{n-1},x) \end{equation} can be written as $$ p^{(n)}(x,x) = \langle T^n\delta_x,\delta_x\rangle . $$ We will assume throughout that the transition operator is selfadjoint, and in this case the transition probabilities can be interpreted as the moments $$ p^{(n)}(x,x) = \int t^n d\mu(t) $$ of the \emph{Plancherel measure} $$ d\mu(t) = \langle dE(t)\, \delta_x,\delta_x \rangle $$ where $dE$ comes from the spectral resolution $$ T = \int t \,dE(t) $$ of the operator $T$. \section{Lamplighter random walks} Let us define the lamplighter random walk on the graph $G$. We equip every vertex of $G$ with a lamp of $m$ colors or states, one of the colors being black, i.e., the lamp turned off. Denote by $H$ the (finite) set of possible states of a lamp and by $m=\abs{H}$ the number of different states. \begin{Definition} A \emph{configuration} on $G$ is a function $\eta:G\to H$ with finite support which will be interpreted as a state of the whole system of lamps. We denote by $\mathfrak{C}=H^{(G)}$ the set of configurations. \end{Definition} The \emph{switch-walk-switch lamplighter random walk} describes a random walker moving around in the graph according to the law \eqref{eq:rw:p(x,y)}. Before and after each step he or she changes the state of the lamp at the current position at random. This way we obtain a Markov chain on the configuration space $\mathfrak{C}\times G$ with transition probabilities $$ \tilde{p}(\xi,x; \eta,y) = \frac{p(x,y)}{m^2} $$ if $y\sim x$ and $\xi$ and $\eta$ coincide outside $x$ and $y$; otherwise, $\tilde{p}(\xi,x; \eta,y)=0$.
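The switch-walk-switch chain just defined can be built explicitly for a small example. The sketch below (hypothetical $3$-vertex path graph, $m=2$ lamp states) assembles the transition matrix on $\mathfrak{C}\times G$, checks stochasticity, and verifies that the return probability to the all-off configuration equals the path sum weighted by $m^{-\#\{\text{visited vertices}\}}$, the last-visit counting argument made precise below.

```python
from itertools import product

# hypothetical example: path graph 0 - 1 - 2 with simple random walk
V = (0, 1, 2)
NBRS = {0: (1,), 1: (0, 2), 2: (1,)}
def p(x, y):
    return 1.0/len(NBRS[x]) if y in NBRS[x] else 0.0

m = 2                                       # lamp states per vertex
configs = list(product(range(m), repeat=len(V)))
states = [(c, x) for c in configs for x in V]
idx = {s: i for i, s in enumerate(states)}
N = len(states)

# switch-walk-switch rule: probability p(x,y)/m^2 to reach (eta, y)
# whenever y ~ x and eta agrees with xi outside {x, y}
P = [[0.0]*N for _ in range(N)]
for (xi, x) in states:
    for y in NBRS[x]:
        for ax, ay in product(range(m), repeat=2):
            eta = list(xi)
            eta[x], eta[y] = ax, ay
            P[idx[(xi, x)]][idx[(tuple(eta), y)]] += p(x, y)/m**2

assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)   # stochastic matrix

def return_prob(n, i):
    """(P^n)_{ii} by repeated matrix-vector products."""
    v = [1.0 if j == i else 0.0 for j in range(N)]
    for _ in range(n):
        v = [sum(v[k]*P[k][j] for k in range(N)) for j in range(N)]
    return v[i]

def path_sum(x, n):
    """Sum over closed n-step walks weighted by m^(-#visited vertices)."""
    tot = 0.0
    for mid in product(V, repeat=n - 1):
        seq = (x,) + mid + (x,)
        w = 1.0
        for a, b in zip(seq, seq[1:]):
            w *= p(a, b)
        tot += w*m**(-len(set(seq)))
    return tot

iota = (0,)*len(V)                          # all lamps off
for n in (2, 4):
    assert abs(return_prob(n, idx[(iota, 1)]) - path_sum(1, n)) < 1e-12
```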
One can interpret the configuration space $\mathfrak{C}\times G$ again as a graph (the so-called lamplighter graph, see \cite{BartholdiWoess:2005:spectral} for a generalization) and the $n$-step return probability $\tilde{p}^{(n)}(\xi,x; \eta,y)$ can be expressed by the same expression as \eqref{eq:SRWnstep} above. However, because of the assumption that each color is switched with the same probability, there is a simplification. We start with the configuration $\iota$ where all lamps are off and the random walker is in some start vertex $x$. For each path $x=x_0,x_1,\dots,x_n=x$ the intermediate states of the lamps are not important and only the last visit at each site counts, therefore we get \begin{multline*} \IP[\text{at the end all lamps are off}] \\ = \IP[\text{at the last visit each lamp is turned off}] = \left( \frac{1}{m} \right)^{\abs{\{x_0,x_1,\dots,x_n\}}} \end{multline*} and the $n$-step return probability is \begin{equation} \label{eq:LRWnstep} \tilde{p}^{(n)}(\iota,x;\iota,x) = \sum_{x_1,x_2,\dots,x_{n-1}} p(x,x_1)\,p(x_1,x_2)\,\dotsm p(x_{n-1},x) m^{-\abs{\{x_0,x_1,\dots,x_n\}}} . \end{equation} \section{Bernoulli Percolation} Let $0<p<1$. On the same graph $G$, consider Bernoulli site percolation with parameter $p$, i.e., on the probability space $\Omega=\{0,1\}^G$ we consider the independent random variables $(Y_x)_{x\in G}$ with Bernoulli distribution $\IP[Y_x=1]=p$. Given $\omega\in\Omega$, let $A(\omega)$ denote the subgraph of $G$ induced on $\{x : Y_x(\omega)=1\}$ and for any vertex $x\in G$, let $C_x(\omega)$ denote the connected component of $A(\omega)$ containing a vertex $x$, which is called the percolation cluster at $x$. It is well known that for every graph $G$ there is a critical parameter $p_c$ such that for any vertex $x\in G$ a phase transition occurs in the sense that for $p<p_c$ the cluster $C_x$ is almost surely finite and for $p>p_c$ it is infinite with positive probability. 
In order to make use of this fact we recall a combinatorial interpretation of criticality. \begin{Definition} For a subset $A\subseteq G$ we denote its \emph{vertex boundary} by $$ dA = \{y\in G : y\not\in A, y\sim x \text{ for some $x\in A$}\} . $$ For $x\in G$, we denote by $$ \mathcal{C}_x=\{A\subseteq G: x\in A, \text{ $A$ finite and connected}\} $$ the set of finite path-connected neighbourhoods of $x$. In the case of $G=\IZ^2$, these are sometimes called \emph{lattice animals}. \end{Definition} The percolation cluster $C_x$ is finite if and only if $C_x=A$ for some $A\in\mathcal{C}_x$. The latter occurs with probability $$ \IP[C_x=A] = p^{\abs{A}} (1-p)^{\abs{dA}} ; $$ thus for $p<p_c$ we have \begin{equation} \label{equ:sump1-p=1} \sum_{A\in \mathcal{C}_x} p^{\abs{A}} (1-p)^{\abs{dA}} = 1 . \end{equation} Now consider the absorbing random walk on $C=C_x(\omega)$: $$ p_C(y,z) = \begin{cases} p(y,z) &\text{if $y,z\in C$}\\ 0 & \text{otherwise.} \end{cases} $$ Here the $n$-step return probability is \begin{align*} p_C^{(n)}(x,x) &= \sum_{x_1,x_2,\dots,x_{n-1}} p_C(x,x_1)\,p_C(x_1,x_2)\,\dotsm p_C(x_{n-1},x) \\ &= \sum_{x_1,x_2,\dots,x_{n-1}} p(x,x_1)\,p(x_1,x_2)\,\dotsm p(x_{n-1},x)\, \mathbb{1}_{[\{x,x_1,\dots,x_{n-1}\}\subseteq C]} . \end{align*} Taking the expectation of this return probability we get $$ \IE p_C^{(n)}(x,x) = \sum_{x_1,x_2,\dots,x_{n-1}} p(x,x_1)\,p(x_1,x_2)\,\dotsm p(x_{n-1},x)\, p^{\abs{\{x,x_1,\dots,x_{n-1}\}}} $$ since $$ \IE \mathbb{1}_{[\{x,x_1,\dots,x_{n-1}\}\subseteq C]} = \IP [\{x,x_1,\dots,x_{n-1}\}\subseteq C] = p^{\abs{\{x,x_1,\dots,x_{n-1}\}}} . $$ Comparing with formula~\eqref{eq:LRWnstep} we obtain the following generalization of \cite[Theorem~1.1]{LehnerNeuhauserWoess:spectrum}.
\begin{Theorem} \label{thm:pnix=epCx} If we set the percolation parameter $p=1/m$, we have $$ \tilde{p}^{(n)} (\iota,x;\iota,x) = \IE p_C^{(n)}(x,x) $$ and therefore the Plancherel measure of $\widetilde{T}$ coincides with the integrated density of states of the random walk on the percolation cluster $C_x$. \end{Theorem} \section{Eigenfunctions} In this section we construct eigenfunctions of the transition operator $\widetilde{T}$ of the lamplighter random walk. We identify $\mathcal{H} = l_2(\mathfrak{C}\times G)= l_2(\mathfrak{C})\otimes_2 l_2(G)$ with the space of square summable functions on $\mathfrak{C}\times G$. Denote the standard bases of $l_2(\mathfrak{C})$ and $l_2(G)$ by $(e_\eta)_{\eta\in\mathfrak{C}}$ and $(e_x)_{x\in G}$, respectively, and the canonical basis elements of $\mathcal{H}$ by $e_{\eta,x} = e_{\eta}\otimes e_x$. For each $x\in G$ we need two projections, on the one hand the one-dimensional projections $$ P_x e_y = \delta_{xy} e_y $$ on $l_2(G)$ and on the other hand the averaging operators on $l_2(\mathfrak{C})$ given by $$ \Theta_x f(\eta) = \frac{1}{m} \sum_{\eta' } f(\eta') $$ where $\eta'$ runs over all $m$ different configurations which coincide with $\eta$ outside $x$. We will denote its amplification to $\mathcal{H}$ by $\widetilde{\Theta}_x = \Theta_x\otimes I$. Let us denote by $S_{xy}$ the partial isometry $$ S_{xy} e_{z} = \delta_{yz} e_{x} $$ on $l_2(G)$. Using this notation we can write the transition operator $$ \widetilde{T}f(\eta,x) = \sum_{y\sim x} p(x,y)\, \widetilde{\Theta}_x\widetilde{\Theta}_y f(\eta,y) $$ as $$ \widetilde{T}= \sum_x \sum_{y\sim x} p(x,y)\, \Theta_x \Theta_y \otimes S_{yx} $$ where the sum converges in the strong operator topology. Note that the operators $\widetilde{\Theta}_x$ commute with each other and with $S_{yz}$ and therefore they commute with $\widetilde{T}$.
\begin{Definition} For a finite subset $A\subseteq G$ with vertex boundary $dA$ we define the projection $$ \Theta_{A,dA} = \prod_{x\in A} \Theta_x \prod_{y\in dA} (I-\Theta_y) $$ and its amplification to $\mathcal{H}$ by $\widetilde{\Theta}_{A,dA}=\Theta_{A,dA}\otimes I$. \end{Definition} The following lemma collects the main properties of the projections $\Theta_{A,dA}$. \begin{Lemma} \label{lem:ThetaAdA} \begin{enumerate} \item If $A\ne B$ are connected subsets of $G$ with $A\cap B\ne\emptyset$, then $\Theta_{A,dA}\Theta_{B,dB}=0$. \item In the subcritical regime $p<p_c$ and for fixed $x\in G$, the family $(\Theta_{A,dA})_{A\in \mathcal{C}_x}$ is a partition of unity on $l_2(\mathfrak{C})$, i.e., different projections are mutually orthogonal and their strong sum is $I$. \end{enumerate} \end{Lemma} \begin{proof} If $A\ne B$ are connected subgraphs containing a common vertex $x$, then one of them must intersect the vertex boundary of the other in some vertex $y$; there the factor $\Theta_y$ meets its complement $I-\Theta_y$ and the two annihilate each other. Therefore $\Theta_{A,dA}$ and $\Theta_{B,dB}$ are orthogonal. To show completeness, consider the von Neumann algebra $\alg{A}$ generated by $\{\Theta_x : x\in G\}$. The set of vectors $\Omega=\{e_{\eta} : \eta\in \mathfrak{C}\}$ is cyclic for $\alg{A}$, i.e., the linear span of $\{\Theta_x e_{\eta} : x\in G, \eta\in\mathfrak{C}\}$ is dense. Since $\alg{A}$ is a commutative algebra, $\Omega$ is also separating. Thus for the proof of the lemma it suffices to show that $\sum_{A\in \mathcal{C}_x} \Theta_{A,dA} e_{\eta} = e_{\eta}$ for each individual $\eta$. Since we have already shown that the vectors $(\Theta_{A,dA} e_{\eta})_{A\in \mathcal{C}_x}$ are mutually orthogonal, it suffices to show that \begin{equation} \label{eq:sumnormThetaeeta} \sum_{A\in \mathcal{C}_x} \norm{\Theta_{A,dA} e_{\eta}}^2 = 1 .
\end{equation} Now \begin{equation*} \norm{\Theta_{A,dA} e_{\eta}}^2 = \langle \Theta_{A,dA} e_{\eta}, e_{\eta} \rangle = p^{\abs{A}} (1-p)^{\abs{dA}} , \end{equation*} and thus condition \eqref{eq:sumnormThetaeeta} is equivalent to condition \eqref{equ:sump1-p=1}, which is satisfied in the subcritical regime (and sometimes in the critical regime as well). \end{proof} \begin{Theorem} \label{thm:eigenprojections} In the subcritical regime, $( \Theta_{A,dA} \otimes P_A )_{A \subseteq G \text{ finite, connected}}$ is a partition of unity on $\mathcal{H}$ and reduces $\widetilde{T}$: \begin{equation} \label{eq:tThetaPA=ThetaTA} \widetilde{T} \,(\Theta_{A,dA} \otimes P_A) = \Theta_{A,dA} \otimes T_A \end{equation} where $$ T_A = P_A T P_A $$ denotes the truncation of the transition operator $T$ of the simple random walk on $G$ to the subgraph $A$. \end{Theorem} \begin{proof} Indeed it follows from Lemma~\ref{lem:ThetaAdA} that the family $(\Theta_{A,dA}\otimes{} P_x)_{x\in G, A\in \mathcal{C}_x}$ is also a partition of unity: $$ \sum_{x\in G} \sum_{A\in \mathcal{C}_x} \Theta_{A,dA}\otimes P_x = I\otimes I $$ and rearranging this sum we obtain $$ \sum_A \Theta_{A,dA} \otimes P_A = I\otimes I . $$ Now we show that each $\Theta_{A,dA}\otimes P_A$ reduces $\widetilde{T}$. First note that $\Theta_{A,dA}$ commutes with all $\Theta_x$ and therefore $\widetilde{\Theta}_{A,dA}$ commutes with $\widetilde{T}$. It suffices to show equality of the left and right hand sides of \eqref{eq:tThetaPA=ThetaTA} evaluated at $f=e_{\eta,x}$ for every $\eta\in{}\mathfrak{C}{}$ and $x\in{} G$. If $x\not\in A$ then both sides vanish and there is nothing to show.
Assume now that $x\in A$; then \begin{align*} \widetilde{T}\, (\Theta_{A,dA}\otimes P_A)\, e_{\eta,x} &= \widetilde{\Theta}_{A,dA} \widetilde{T} e_{\eta,x} \\ &= \widetilde{\Theta}_{A,dA} \sum_{y\sim x} p(x,y)\, \widetilde{\Theta}_x\widetilde{\Theta}_y e_{\eta,y} \\ &= \widetilde{\Theta}_{A,dA} \sum_{\substack{y\sim x\\ y\in A}} p(x,y)\, e_{\eta,y} \\ &= (\Theta_{A,dA}\otimes T_A)\,e_{\eta,x} . \end{align*} \end{proof} \begin{Corollary} In the subcritical regime there exists a complete orthonormal system of finitely supported eigenfunctions of $\widetilde{T}$. \end{Corollary} \begin{proof} If we denote for each finite connected subgraph $A\subseteq{}G$ by $\{f_a: a\in{} A\}$ a basis of $l_2(A)\subseteq{}l_2(G)$ consisting of eigenfunctions of $T_A$, the eigenspaces of $\widetilde{T}$ are given by $\{\Theta_{A,dA} l_2(\mathfrak{C}{}) \otimes f_a : A \subseteq G\ \text{connected}, a\in A\}$. For every finite connected subset $A\subseteq G$ we apply the Gram-Schmidt procedure to $\{\Theta_{A,dA} e_\eta: \eta\in\mathfrak{C}{}\}$ to obtain a basis of $\Theta_{A,dA}l_2(\mathfrak{C}{})$ consisting of finitely supported functions $(\phi_{A,i})_{i\in \IN}$. This is possible because $\Theta_{A,dA} e_\eta$ has finite support for each $\eta\in\mathfrak{C}$ and $l_2(\mathfrak{C}{})$ is separable. Putting these functions together with the eigenfunctions $f_a$ we obtain the eigenbasis $$ \{ \phi_{A,i}\otimes f_a : A\subseteq G, i\in\IN, a\in A\} . $$ \end{proof} \begin{Remark} In the case where the graph $G$ is a Cayley graph, as considered in \cite{LehnerNeuhauserWoess:spectrum}, the projections $\Theta_{A,dA}\otimes P_A$ are not elements of the group algebra and therefore provide a new partition, different from the one obtained in \cite{LehnerNeuhauserWoess:spectrum}. \end{Remark} \section{Concluding Remarks} \begin{enumerate} \item Similar results hold if the lamps are placed on the edges; see again \cite{LehnerNeuhauserWoess:spectrum} for a discussion.
The preceding considerations also hold when one allows the number of colors (and accordingly the percolation parameter) to vary among the vertices $x$; however, for the sake of simplicity, only identical lamps on all vertices were considered here. \item It is not essential that the projections $\Theta_x$ are averaging operators and $p=1/m$. As discussed in \cite{LehnerNeuhauserWoess:spectrum}, similar deterministic models can be constructed for arbitrary percolation parameters $p$. \item It is still unknown what happens in the supercritical regime, where it is conjectured that continuous spectrum occurs at least in some cases. It may be hoped that the new approach will lead to some insight into this question. \end{enumerate}
\section{Introduction} Tidal interactions likely play a role in a wide variety of astrophysical scenarios, mediating the interactions and influencing the orbital evolution of moons, planets, stars, and compact objects alike \citep{Ogilvie2014}. Despite their wide-reaching relevance, several quantitative details of tidal exchanges in energy and angular momentum have proven difficult to square with observations in both astrophysics and planetary sciences, particularly in situations where one or more of the tidally interacting bodies is rotating. Setting aside complications related to the Coriolis force \citep[e.g.,][]{Ogilvie2004,Ogilvie2009,Ogilvie2013,Wu2005,Wu2005b,Ivanov2007,Goodman2009,Rieutord2010,Lin2021}, relatively little has been done to characterize the effects that changes in stellar and planetary shape due to rotation have on dynamical (i.e., frequency-dependent) tidal distortion and dissipation. The gas giant planets in our own solar system motivate such characterization. Jupiter and Saturn respectively rotate at nearly $30\%$ and $40\%$ of their break-up angular velocities, and are consequently oblate. Moreover, measurements of the shape of the tidal bulge raised on Jupiter by Io---characterized by so-called ``Love numbers'' \citep{Durante2020}---deviate significantly from theoretical predictions for purely static tidal perturbers \citep{Wahl2017,Wahl2020,Nettelmann2019}. This discrepancy has inspired the suggestion that Io's orbit may be in resonance with the natural frequency of an internal oscillation mode (in particular a gravito-inertial mode) of Jupiter \citep{Idini2022b}. However, recent calculations \citep{Lin2023} have cast doubt on the ability of such a resonance to reconcile hydrostatic calculations with the observations. Notably, both \citet{Idini2022b} and \citet{Lin2023} excluded the effects of centrifugal flattening in their calculations of tidally driven oscillations. 
We use spectral methods to directly compute the dissipative tidal response of rapidly rotating and centrifugally flattened planets and stars. Our numerical method is valid for arbitrarily rapid and differential rotation on cylinders, incorporating dissipation via a viscous stress that self-consistently includes rotational flattening. We first apply this method to computing the frequency-dependent, dynamical tidal response of $\gamma=5/3$ and $3/2$ polytropes rotating at up to nearly the mass-shedding limit. We then compute the tidal response for Jupiter interior models that include both stably stratified and convective regions. The latter calculations demonstrate that resonant wave excitation by dynamical tides is in fact capable of reconciling the discrepancy between observations and hydrostatic calculations of Jupiter and Io's interaction, but only if the non-spherical aspects of Jupiter's rotation are accounted for. Our calculations further suggest that a wider set of internal oscillations than considered by \citet{Idini2022b} should make viable candidates for a Jupiter-Io resonance. This paper is structured as follows. Section \ref{sec:methods} introduces our numerical method, and covers relevant background information. Although many of the technical details may be skipped by those interested only in our results, we note that subsection \ref{sec:love} lays out conventions for Love number definitions that are important to interpreting our calculations. Section \ref{sec:poly} then describes the results of our calculations for very rapidly rotating polytropes, and Section \ref{sec:Jup} describes our calculations for Jupiter. We finally conclude in Section \ref{sec:conc}. \section{Methods and background}\label{sec:methods} \subsection{Fluid dynamics} This subsection introduces the equations governing small-amplitude perturbations to oblate gaseous bodies, and our numerical methods for solving them.
\subsubsection{Basic equations} The Newtonian equation of motion for a self-gravitating fluid with pressure $P$, density $\rho$, gravitational potential $\Phi$, and velocity ${\bf u}$ is \begin{equation}\label{eq:EoM0} \dfrac{D{\bf u}}{Dt} =-\frac{\nabla P}{\rho} -\nabla\Phi +{\bf F}, \end{equation} where $D/D_t=\partial_t+{\bf u}\cdot\nabla$ is the convective derivative, and ${\bf F}$ comprises any additional forces. For the case of a viscous fluid subject to a perturbing potential $U$, \begin{equation}\label{eq:F} {\bf F}=-\nabla U + \frac{1}{\rho}\nabla\cdot{\bf T}, \end{equation} where \begin{equation}\label{eq:T} {\bf T}=\mu_v\left[ \nabla{\bf u} +(\nabla{\bf u})^T -\frac{2}{3}(\nabla\cdot{\bf u}){\bf I} \right] \end{equation} is the viscous stress tensor associated with dynamic viscosity $\mu_v$. \autoref{eq:EoM0} must be considered simultaneously with the equation of mass conservation, \begin{equation}\label{eq:cty} \dfrac{D\rho}{Dt} =-\rho\nabla\cdot{\bf u}, \end{equation} Poisson's equation \begin{equation}\label{eq:Poi} \nabla^2\Phi=4\pi G\rho \end{equation} (here $G$ is the gravitational constant), an equation of state, and the thermal energy equation. Ignoring non-adiabatic heating by viscous dissipation, or cooling by radiation, the latter is given by \begin{equation}\label{eq:Therm} \dfrac{D P}{Dt} =-\Gamma_1 P\nabla\cdot{\bf u}, \end{equation} where $\Gamma_1$ is the first adiabatic exponent. \subsubsection{Equilibrium state} To model the steady state of rotating stars and gaseous planets, we construct axisymmetric, time-independent solutions of Equations \eqref{eq:EoM0}-\eqref{eq:Therm} with equilibrium pressure $P_0,$ density $\rho_0,$ gravitational potential $\Phi_0$ and velocity field ${\bf u}_0=\boldsymbol{\Omega}\times{\bf r}=R\Omega(R)\hat{\boldsymbol{\phi}}$. Here $\boldsymbol{\Omega}$ is an angular velocity that we allow to depend on cylindrical $R=r\sin\theta$ (the distance from the rotation axis). 
Ignoring ${\bf F}$, such equilibria satisfy \begin{equation} {\bf G} =R\Omega^2\hat{\bf R} -\nabla \Phi_0, \end{equation} where ${\bf G}=\rho_0^{-1}\nabla P_0$ is an effective gravity that includes centrifugal flattening due to rotation. The equilibrium model of the rotating planet or star provides a natural scale for non-dimensionalization: throughout, we adopt units scaled by the total mass and equatorial radius $R_\text{eq}$ (i.e., $G=M=R_\text{eq}=1$). The relevant time-scale is then dictated by the dynamical frequency $\Omega_d=(GM/R_\text{eq}^3)^{1/2}.$ The primary difficulty in computing rotating stellar and planetary equilibria derives from the fact that the oblate, rotationally flattened surface is not known ahead of time for any but the simplest cases. We use the approach to this free-boundary value problem described in \citet{Dewberry2022b} to compute the polytropic models considered in this work. Note that in combination with stable stratification, such rotation can give rise to baroclinic flows involving meridional circulation \citep{Rieutord2006}, which we neglect.
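To fix scales, the dynamical frequency can be evaluated for a concrete case. A back-of-envelope sketch in SI units, using nominal Jovian parameter values that are assumptions of this illustration rather than quantities taken from our models:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M = 1.898e27       # Jupiter's mass [kg]
R_eq = 7.1492e7    # Jupiter's equatorial radius [m]

# dynamical frequency Omega_d = (G M / R_eq^3)^(1/2)
Omega_d = math.sqrt(G * M / R_eq**3)

# spin frequency for a ~9.925 h rotation period
Omega = 2.0 * math.pi / (9.925 * 3600.0)

print(Omega / Omega_d)  # ~0.3, the bulk rotation rate adopted for the Jupiter models below
```

This recovers the $\Omega/\Omega_d\simeq0.3$ quoted later for the Jupiter calculations.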
\subsubsection{Linearized equations} We write $P=P_0+\delta P$, $\rho=\rho_0+\delta\rho$, $\Phi=\Phi_0+\delta\Phi,$ ${\bf u}={\bf u}_0+{\bf v}$, where $\delta P,\delta\rho,\delta\Phi,{\bf v}$ are small-amplitude Eulerian perturbations with a harmonic dependence $\propto\exp[\text{i}(m\phi-\sigma t)]$ on inertial-frame frequency $\sigma$ and azimuthal wavenumber $m.$ Assuming an adiabatic relationship between Lagrangian pressure and density perturbations \citep{LyndenBell1967}, the fluid dynamic equations can then be linearized to find \begin{align}\label{eq:EoMl} -\text{i}\sigma{\bf v} +{\bf v}\cdot\nabla{\bf u}_0 +{\bf u}_0\cdot\nabla{\bf v} -{\bf G}\beta +(\nabla +\nabla\ln\rho_0)h +\nabla\delta\Phi &\\\notag -\frac{1}{\rho_0}\nabla \cdot \delta{\bf T} &=-\nabla U, \\\label{eq:Tl} \delta{\bf T} -\mu_v[ \nabla{\bf v} +(\nabla{\bf v})^T -(2/3)(\nabla\cdot{\bf v}){\bf I} ]&=0, \\\label{eq:Ctyl} -\text{i}\omega\beta +\frac{1}{\rho_0}\nabla\cdot(\rho_0{\bf v})&=0, \\\label{eq:TEl} -\text{i}\omega (h-c_A^2\beta) +({\bf G}-c_A^2\nabla\ln\rho_0)\cdot {\bf v}&=0, \\\label{eq:Poil} 4\pi G\rho_0\beta-\nabla^2\delta\Phi&=0. \end{align} Here $h=\delta P/\rho_0,$ $\beta=\delta\rho/\rho_0,$ $c_A^2=\Gamma_1P_0/\rho_0,$ and $\omega=\sigma - m\Omega$. For a rigidly rotating body with constant $\Omega$, $\omega$ gives the frequency in the corotating frame. Equations \eqref{eq:EoMl}-\eqref{eq:Poil} can be treated both as an inhomogeneous boundary value problem with $\sigma$ and $U$ specified, and as an eigenvalue problem with $U\equiv0$ and $\sigma$ unknown. As described in Appendices \ref{app:lin} and \ref{app:num}, we use spectral methods to solve both. This process is complicated by the influence of the Coriolis force (which intervenes directly via terms involving ${\bf u}_0$), and centrifugal flattening (which acts through modification of the equilibrium state).
To include the latter, we use a non-orthogonal, surface-matching coordinate system $(\zeta,\theta,\phi)$ \citep{Bonazzola1998} that has been employed by several authors in the calculation of stellar and planetary oscillation modes \citep{Lignieres2006,Reese2006,Reese2009,Reese2013,Reese2021,Ouazzani2012,Xu2017,Dewberry2021,Dewberry2022a,Dewberry2022b} and stellar structure \citep{Rieutord2016}. \subsection{Tides} This section lays out definitions, and introduces previous results from tidal theory that are relevant to the interpretation of our results. \subsubsection{Tidal potential} Assuming an orbital separation ${\bf d}$ sufficiently large for a tidal perturber to be treated as a point-mass $M'$, the tidal potential it imposes on the primary body can be written in terms of a multipole expansion as \citep{Jackson1962} \begin{equation}\label{eq:U} U=-\frac{GM'}{a} \sum_{n=2}^\infty\sum_{m=-n}^n \left(\frac{4\pi}{2n+1}\right) \left(\frac{r}{a}\right)^n Y_n^{m*}(\theta',\phi')Y_n^m(\theta,\phi), \end{equation} where $a(t)=|{\bf d}|$, primes denote (time-dependent) satellite coordinates, and $Y_n^m$ are orthonormalized spherical harmonics of degree $n$\footnote{The interplay between separate harmonics in the tidal potential and the response it induces motivates our use of both $\ell$ and $n$ for spherical harmonic degrees. We generally employ $n$ for degrees in the tidal potential that are summed over, reserving $\ell$ for the harmonic degree of interest in the induced response.} and azimuthal wavenumber $m$. Here we have neglected the degree $n=0$ and $n=1$ terms in the expansion, which respectively have no effect and lead to basic Keplerian motion.
Ignoring orbital eccentricity and inclination, this expansion can be written in the inertial frame as \begin{equation} U=\sum_{n=2}^\infty\sum_{m=-n}^n U_{n m}r^n Y_n^m(\theta,\phi) \exp[-\text{i}\sigma_t t], \end{equation} where $\Omega_o=[G(M+M')/a^3]^{1/2}$ is the mean motion of the perturber, \begin{align} U_{nm}&=-\left(\frac{GM'}{a^{n+1}}\right)\left(\frac{4\pi}{2n+1}\right)Y_n^{m*}(\pi/2,0), \end{align} and $\sigma_t$ are inertial-frame tidal frequencies. For this simplified case of a coplanar and circular orbit, $\sigma_t=m\Omega_o$. Throughout, we adopt a nominal mass ratio of $q=M'/M=10^{-4}.$ For linear tides, this assumption only affects the relationship between $a$ and $\Omega_o$. \subsubsection{Potential Love numbers}\label{sec:love} Fluid motions induced by the perturbing tidal potential will lead to an \textit{external} gravitational response $\Phi'$ that can in turn be expanded as \begin{equation}\label{eq:PhEx} \Phi'=\sum_{n=2}^\infty\sum_{m=-n}^n \Phi'_{n m}r^{-(n+1)} Y_n^m\exp[-\text{i}\sigma_t t]. \end{equation} For the linear tidal perturbation of an axisymmetric body, a coefficient $\Phi'_{\ell m}$ of a given degree $\ell$ and azimuthal wavenumber $m$ can be related to the coefficients $U_{nm}$ in the tidal potential via a linear relation involving potential ``Love numbers'' \citep{Ogilvie2013}: \begin{equation}\label{eq:klnm} \Phi'_{\ell m} =\sum_{n=|m|}^\infty k_{\ell m}^n U_{nm}. \end{equation} A given $k^n_{\ell m}=k^n_{\ell m}(\sigma_t)$ thus specifies the amount to which a harmonic of degree $n$ in the tidal potential drives a gravitational response in degree $\ell$, at a given tidal frequency $\sigma_t$. In a spherically symmetric body $k_{\ell m}^n=0$ when $\ell\not=n$, but this is not true in general; in a rotationally flattened body harmonic coefficients of one degree in the induced tidal response cannot be solely attributed to coefficients of the same degree in the tidal potential.
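The relative sizes of the forcing coefficients control how much this cross-degree mixing matters in practice. Since $U_{nm}\propto a^{-(n+1)}$, the ratio of sectoral to tesseral forcing grows with separation; a minimal numeric sketch (the angular factor $Y_n^{m*}(\pi/2,0)$ is replaced by a placeholder, since it cancels in the scaling check):

```python
import math

def U_coeff(n, a, GMp=1.0, Y=1.0):
    """Tidal-potential coefficient U_{nm}, with the angular factor
    Y_n^{m*}(pi/2, 0) standing in as a placeholder value Y."""
    return -(GMp / a**(n + 1)) * (4.0 * math.pi / (2.0 * n + 1.0)) * Y

# ratio of sectoral (n = 2) to tesseral (n = 4) forcing at two separations
r_near = U_coeff(2, a=10.0) / U_coeff(4, a=10.0)
r_far = U_coeff(2, a=20.0) / U_coeff(4, a=20.0)

# U_{2m}/U_{4m} grows as a^2: doubling a quadruples the ratio
assert math.isclose(r_far / r_near, 4.0)
```

Combined with Kepler's third law ($a\propto\Omega_o^{-2/3}$ at fixed masses), this $a^2$ growth is the geometric origin of the low-frequency behaviour discussed next.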
It is nevertheless still useful to consider the direct ratios \begin{equation}\label{eq:klm} k_{\ell m}=\frac{\Phi'_{\ell m}}{U_{\ell m}} =\frac{1}{U_{\ell m}}\sum_{n=|m|}^\infty k_{\ell m}^{n}U_{nm}, \end{equation} keeping in mind that these may not accurately reflect a causal relationship. In particular, in centrifugally flattened bodies the sectoral ($n=|m|$) part of the tidal potential can produce just as much of a tesseral ($n>|m|$) response as the corresponding tesseral part of the tidal potential. \citet{Dewberry2022a} showed that this sectoral driving of the tesseral response generically produces anomalously large $k_{\ell m}$ for all $\ell>|m|$,\footnote{See also \citet{Idini2022a}, who came to similar conclusions via a different approach.} characterized by a strong dependence on the satellite separation $a$. Specifically $U_{nm}/U_{\ell m}\propto a^{\ell-n}$, so that for large $a$ and $\ell>|m|$ the term $k^{n=|m|}_{\ell m}U_{n=|m|,m}/U_{\ell m}$ dominates the sum over $n$ in Equation \eqref{eq:klm}. For example, the values of $k_{42}$ reported by \citet{Wahl2020} for detailed Jupiter interior models perturbed by a static potential very closely follow the power law $k_{42}\propto a^2.$ Given a self-consistent tidal potential produced by satellites on Keplerian orbits, the spatial dependence $k_{\ell m}\propto a^{\ell -|m|}$ is equivalent to the frequency dependence \begin{equation} k_{\ell m}\propto \Omega_o^{-2(\ell-|m|)/3} \end{equation} as $\Omega_o\rightarrow0$. The ratios $k_{\ell m}$ thus remain functions solely of frequency for a self-consistent tidal potential \citep{Dewberry2022a}. It is helpful to define a ``hydrostatic'' $k_{\ell m}^\text{hs}$ for rigidly rotating bodies via \begin{equation}\label{eq:hs} k_{\ell m}^\text{hs} = \frac{1}{U_{\ell m}} \sum_{n=|m|}^\infty k_{\ell m}^n(\omega_t=0)U_{nm}, \end{equation} where $\omega_t=\sigma_t-m\Omega$.
These $k_{\ell m}^\text{hs}$ are not truly hydrostatic, in that they retain an implicit frequency dependence through $U_{nm}/U_{\ell m},$ but they reproduce previous work employing static satellites at finite separation \citep[e.g.,][]{Wahl2017,Wahl2020,Nettelmann2019}. Consequently they can be used to isolate dynamical (as opposed to purely geometric) effects. \subsubsection{Tidal dissipation}\label{sec:diss} The Love numbers $k_{\ell m}^n=k_{\ell m}^n(\sigma_t)$ are both frequency-dependent and complex-valued, their imaginary parts encoding a phase lag due to dissipation in the tidally perturbed body. For a viscous fluid, the dissipation rate is \begin{equation} D_\nu =-\frac{1}{2}\text{Re}\int_V {\bf v}^*\cdot(\nabla\cdot \delta{\bf T})\, \text{d}V. \end{equation} The energy and angular momentum transferred from the orbit to the primary due to the action of a given component of degree $\ell$ and order $m$ in the tidal potential---tidal power $P$ and torque $T$, respectively---can be computed from the imaginary parts of the Love numbers via \citep{Ogilvie2013} \begin{align} P=\sigma_t\frac{(2\ell + 1)}{8\pi G}R_\text{eq}|U_{\ell m}|^2\text{Im}[k_{\ell m}^\ell] =(\sigma_t/m)T. \end{align} If the tidally perturbed planet or star rotates rigidly, the dissipation rate from a single component of the tidal potential can be related to the tidal power and torque via $D_\nu=P - \Omega T\propto\omega_t\text{Im}[k_{\ell m}^\ell]$ \citep{Ogilvie2013}. Our calculations verify this equality. Assuming the tidal distortion is dissipated, the requirement that $D_\nu$ be positive-definite then implies that the imaginary part of each $k_{\ell m}^\ell(\sigma_t)$ must have the same sign as $\omega_t$ \citep{Ogilvie2014}. Tidal Love numbers additionally satisfy $k_{\ell m}^n(\omega_t)=[k_{\ell,-m}^n(-\omega_t^*)]^*$. \subsubsection{Modal expansion}\label{sec:modal} We compute the tidal response both directly and through an expansion in the tidally driven oscillation modes of the primary.
The latter approach involves a phase space expansion of the form \begin{equation} \left[ \begin{matrix} \boldsymbol{\xi} \\ \partial_t \boldsymbol{\xi} \end{matrix} \right] =\sum_\alpha c_\alpha(t) \left[ \begin{matrix} \boldsymbol{\xi}_\alpha \\ -\text{i}\omega_\alpha\boldsymbol{\xi}_\alpha \end{matrix} \right], \end{equation} where $\boldsymbol{\xi}_\alpha$ and $\omega_\alpha=\sigma_\alpha-m\Omega$ are the Lagrangian displacements and frequencies of eigenmode solutions to Equations \eqref{eq:EoMl}-\eqref{eq:Poil} (in the absence of tidal forcing and viscosity), and $c_\alpha$ are tidally driven amplitudes. This sum over modes indexed by $\alpha$ includes complex conjugate ($\boldsymbol{\xi}_\alpha\mapsto\boldsymbol{\xi}_\alpha^*$) solutions, which are also modes. Ignoring dissipation and differential rotation, the amplitude for an oscillation mode of azimuthal wavenumber $m$ satisfies the equation \citep{Schenk2001} \begin{equation} \dot{c}_{\alpha} +\text{i}\sigma_{\alpha}c_{\alpha} =-\frac{\text{i}}{2\epsilon_{\alpha}}\exp[-\text{i}\sigma_tt] \sum_{n=|m|}^\infty U_{n m}Q_{n m}^\alpha \end{equation} where \begin{align} \epsilon_\alpha &=\omega_\alpha \langle \boldsymbol{\xi}_\alpha,\boldsymbol{\xi}_{\alpha}\rangle +\langle \boldsymbol{\xi}_\alpha, \text{i}\boldsymbol{\Omega}\times\boldsymbol{\xi}_{\alpha} \rangle \\\label{eq:Qlm} Q_{n m}^\alpha &=\langle \boldsymbol{\xi}_\alpha,\nabla (r^n Y_n^m)\rangle =-\frac{(2n+1)}{4\pi}\Phi'_{nm,\alpha}, \end{align} and $\Phi'_{nm,\alpha}$ are contributions to the coefficients in the expansion of Equation \ref{eq:PhEx} that are attributable to each driven mode $\alpha$.
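Seeking a steady response at the tidal frequency yields a particular solution with $c_\alpha\propto 1/(\sigma_\alpha-\sigma_t)$, which diverges at resonance; this can be checked by direct substitution into the amplitude equation. A quick self-consistency sketch for a single mode, with made-up parameter values:

```python
import numpy as np

# made-up single-mode parameters (illustrative only)
sigma_a = 1.3                 # mode frequency
sigma_t = 0.9                 # tidal frequency
eps, U, Q = 0.5, 0.02, 0.7    # normalization, forcing amplitude, overlap

def c_steady(t):
    """Steady amplitude c(t) = -U Q e^{-i sigma_t t} / [2 eps (sigma_a - sigma_t)]."""
    return -U * Q * np.exp(-1j * sigma_t * t) / (2.0 * eps * (sigma_a - sigma_t))

# verify that c_steady solves dc/dt + i sigma_a c = -(i / 2 eps) U Q e^{-i sigma_t t}
t, dt = 2.0, 1e-6
cdot = (c_steady(t + dt) - c_steady(t - dt)) / (2.0 * dt)  # centred difference
lhs = cdot + 1j * sigma_a * c_steady(t)
rhs = -1j * U * Q * np.exp(-1j * sigma_t * t) / (2.0 * eps)
assert np.isclose(lhs, rhs, rtol=1e-4)
```

The $1/(\sigma_\alpha-\sigma_t)$ factor is the Lorentzian-like weight that reappears in the mode sums for the Love numbers.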
The $Q_{nm}^\alpha$ coefficients are often referred to as overlap integrals of a given $n$ and $m.$ Steady-state ``adiabatic'' solutions with $\dot{c}_\alpha=-\text{i}\sigma_tc_\alpha$ have amplitudes given by \begin{equation} c_\alpha =-\frac{\text{exp}[-\text{i}\sigma_tt]} {2\epsilon_{\alpha}(\sigma_\alpha-\sigma_t)}\sum_{n=|m|}^\infty U_{nm}Q_{n m}^\alpha =\sum_{n=|m|}^\infty c_{\alpha}^n\text{exp}[-\text{i}\sigma_tt]. \end{equation} Writing $\Phi'_{\ell m}=\sum_\alpha c_\alpha^\ell\Phi'_{\ell m,\alpha},$ Love numbers $k_{\ell m}^n$ can then be computed by considering the effect of an isolated tidal potential of only one degree: \begin{equation}\label{eq:klmmode} k_{\ell m}^n =\frac{2\pi}{(2\ell+1)} \sum_{\alpha} \frac{Q_{\ell m}^\alpha Q_{nm}^\alpha} {\epsilon_{\alpha}(\sigma_\alpha-\sigma_t)}. \end{equation} Meanwhile, summing over $n$ in the full tidal potential provides \begin{equation} k_{\ell m} =\frac{2\pi}{(2\ell+1)} \sum_{\alpha}\sum_{n=|m|}^\infty \frac{Q_{\ell m}^\alpha Q_{nm}^\alpha} {\epsilon_{\alpha}(\sigma_\alpha-\sigma_t)} \left(\frac{U_{nm}}{U_{\ell m}}\right). \end{equation} \section{Fully isentropic and stratified polytropes}\label{sec:poly} In this section we describe the results from tidal calculations for simple but very rapidly rotating polytropic models with equilibrium pressure and density related by $P_0\propto \rho_0^\gamma$. We consider two polytropic relations: $\gamma=5/3$ and $\gamma=3/2$. Together with a constant first adiabatic exponent $\Gamma_1=5/3$, $\gamma=5/3$ and $\gamma=3/2$ polytropes are neutrally and stably stratified throughout (respectively). Our $\gamma=5/3$, neutrally stratified polytropes might be taken as reasonable models for fully convective compact objects or M-dwarfs. Meanwhile the $\gamma=3/2,$ stably stratified polytropes approximate some intermediate mass stars.
Aside from their general applicability, the separate cases of fully isentropic and fully stratified stars provide a useful introduction to the partially stratified models of Jupiter considered in Section \ref{sec:Jup}. For both values of $\gamma$, we compute the $m=2$ tidal response for oblate models rotating at up to $99\%$ of the dynamical frequency $\Omega_d=(GM/R_\text{eq}^3)^{1/2}$. $\Omega_d$ provides a rough approximation to the critical ``mass-shedding'' limit at which the stars become unbound at the equator \citep[for $\gamma=5/3$ and $3/2,$ the mass-shedding limits are $\Omega\simeq1.02\Omega_d$ and $1.01\Omega_d$, respectively;][]{Dewberry2022b}. \begin{figure*} \centering \includegraphics[width=\textwidth]{n15poly_k222_vs_k22_Nz100Nm128No20Nr200_mnu1.0e-05.pdf} \caption{Real (top) and imaginary (bottom) parts of $k_{22}^2$ for rigidly rotating isentropic $\gamma=5/3$ polytropes with dynamic viscosity $\mu_v=10^{-5}$, plotted as a function of $\omega_t.$ From dark to light, line colors indicate increasingly rapid rotation. The top plot transitions from a log to linear scale at $Re[k_{22}^2]=1$. The faint dashed lines show values of $k_{22}$ obtained for a tidal potential including degrees $n=2-12$. The peaks in $Im[k_{22}^2]\Omega/\omega_t$ correspond to tidal resonances with fundamental modes and inertial modes.} \label{fig:n15_k222} \end{figure*} \subsection{Unstratified $\gamma=5/3$ polytropes} The panels in Fig. \ref{fig:n15_k222} show the real (top) and imaginary (bottom) parts of Love numbers $k_{22}^2$ (Equation \ref{eq:klnm}) as a function of tidal frequency $\omega_t$, computed for isentropic $\gamma=5/3$ polytropes perturbed by a purely quadrupolar ($\ell=m=2$) tidal potential. The top panel employs a symmetric log-scale that transitions to linear at $Re[k_{22}^2]=1$. From dark to light, the line colors indicate polytropic models with increasingly rapid rotation. 
For comparison, the dashed lines show direct ratios $k_{22}$ (Equation \ref{eq:klm}) computed for an $m=2$ tidal potential including degrees $n=2$ to $n=12.$ For the most part $k_{22}\simeq k_{22}^2$, indicating (unsurprisingly) that the quadrupolar response of the star is dominated by the quadrupolar part of the tidal potential. The calculations shown in Fig. \ref{fig:n15_k222} involved a constant dynamic viscosity $\mu_v=10^{-5}$ (in units with $G=M=R_\text{eq}=1$). This is large from an astrophysical perspective, but sufficiently small to reveal the important dynamical features of the model. \begin{figure*} \centering \includegraphics[width=\textwidth]{n15poly_k424_vs_k42_Nz100Nm128No20Nr200_mnu1.0e-05.pdf} \caption{Same as Fig. \ref{fig:n15_k222}, but showing $k_{42}^4$ Love numbers computed for $\gamma=5/3$ polytropes perturbed by an isolated $\ell=m=4$ potential (solid lines). The dashed lines again show $k_{42}$ profiles computed by perturbing with a tidal potential including degrees $n=2-12,$ and the top panel transitions from log to linear at $Re[k_{42}^4]=10^{-1}$. Because of centrifugal flattening, the $k_{42}$ profiles disagree significantly with $k_{42}^4.$} \label{fig:n15_k424} \end{figure*} Resonances with internal oscillation modes produce sharp sign changes in $Re[k_{22}^2]$ and corresponding extrema in $Im[k_{22}^2]\Omega/\omega_t.$ Note that $Im[k_{22}^2]\Omega/\omega_t$ remains strictly positive, since sign$(Im[k_{\ell m}^n])=$sign$(\omega_t)$ \citep[see Section \ref{sec:diss}; ][]{Ogilvie2013}. The strong resonances at tidal frequencies $\omega_t/\Omega\lesssim -2$ and $\omega_t/\Omega\gtrsim 1$ correspond to retrograde and prograde fundamental modes (f-modes) with predominantly sectoral ($\ell\simeq m=2$) structure in their eigenfunctions.
With faster and faster rotation, the natural frequencies of these oscillations become smaller in magnitude compared with the rotation rate \citep[e.g.,][]{Dewberry2022a}, and the resonances consequently move inward on an x-axis scaled by $\Omega$. For rotation rates $\Omega\gtrsim0.56$, higher degree ``tesseral'' ($\ell>m$) f-modes appear at higher frequencies in the bottom panel. They produce smaller resonances because of smaller spatial overlap with the $Y_2^2$ harmonic; in a non-rotating, spherically symmetric star the overlap integrals of tesseral f-modes with the sectoral tide vanish entirely. The bottom panel in Fig. \ref{fig:n15_k222} also illustrates some additional resonant peaks that remain fixed close to $\omega_t/\Omega\simeq-1.2$ and $\omega_t/\Omega\simeq0.6$ as the rotation rate increases. These resonances are produced by inertial modes \citep[e.g.,][]{Wu2005}, whose primary restoring force is the Coriolis force. Inertial modes form a dense spectrum in the (rotating-frame) frequency range $-2\Omega<\omega<2\Omega,$ but only the longest wavelength modes couple strongly enough with the tidal potential to produce visible features in Fig. \ref{fig:n15_k222}. The solitary peaks near $\omega_t/\Omega\simeq-1.2$ correspond to the longest wavelength retrograde inertial mode, while the sequence of peaks with $\omega_t/\Omega\lesssim0.6$ is due to prograde inertial modes. The latter grow in amplitude with increasing rotation because of mixing (avoided crossing) with the prograde sectoral f-mode, as described in \citet{Dewberry2022a} for isentropic $\gamma=2$ polytropes. Fig. \ref{fig:n15_k424} plots the real and imaginary parts of $k_{42}^4$ computed for the same $\gamma=5/3$ polytropes as shown in Fig. \ref{fig:n15_k222}. Since inertial oscillations generically couple more strongly to tesseral components of the tidal potential than sectoral \citep{Ogilvie2009,Ogilvie2013}, they feature more prominently in Fig. \ref{fig:n15_k424} than in Fig. \ref{fig:n15_k222}.
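For orientation, the frequency scale of these f-mode resonances can be estimated with the classical Kelvin formula for a non-rotating, uniform-density sphere, $\omega^2=\tfrac{2\ell(\ell-1)}{2\ell+1}\,GM/R^3$; this is a textbook estimate included for context, not a result drawn from our numerical models:

```python
import math

def kelvin_f_mode(l):
    """Kelvin f-mode frequency (in units of the dynamical frequency Omega_d)
    for a non-rotating, uniform-density sphere."""
    return math.sqrt(2.0 * l * (l - 1) / (2.0 * l + 1.0))

print(kelvin_f_mode(2))  # ~0.89 Omega_d for the quadrupolar f-mode
```

Rotation then splits these frequencies into the prograde and retrograde branches visible in the figures.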
Fig. \ref{fig:n15_k424} also demonstrates much more dramatic disagreement between the Love number $k_{42}^4$ and the direct ratio $k_{42}$ (dashed lines). The apparent resonance in $k_{42}$ at $\omega_t/\Omega=-2$ has nothing to do with oscillations, instead reflecting the fact that as $\Omega_o\rightarrow0$ the tesseral response of centrifugally flattened bodies becomes dominated by the sectoral tide \citep[see Section \ref{sec:love};][]{Dewberry2022a,Idini2022a}. Additionally, $Im[k_{42}]/\omega_t$ becomes negative over several ranges of frequency, disappearing from the log-scale of the plot in the bottom panel. Since $Im[k_{\ell m}^n]/\omega_t$ is strictly positive, negative values of $Im[k_{\ell m}]/\omega_t$ indicate frequency regimes where the induced tidal response in one harmonic may be dominated by a different harmonic in the tidal potential. \begin{figure*} \centering \includegraphics[width=\textwidth]{n2poly_k222_vs_k22_Nz100Nm128No20Nr200_mnu1.0e-05.pdf} \caption{Same as Fig. \ref{fig:n15_k222}, but for $\gamma=3/2$ polytropes. Instead of inertial modes, the stable stratification in these stars supports gravito-inertial and Rossby modes. Filled circles indicate the frequencies corresponding to the cross-sections shown in Fig. \ref{fig:n2_ros}.} \label{fig:n2_k222} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{n2poly_k424_vs_k42_Nz100Nm128No20Nr200_mnu1.0e-05.pdf} \caption{Same as Fig. \ref{fig:n15_k424}, but for $\gamma=3/2$ polytropes.
Rotation causes g-mode eigenfunctions to overlap with multiple spherical harmonics, in turn leading to less regular sequences of peaks in $Im[k_{42}^4]$ with increasingly rapid rotation.} \label{fig:n2_k424} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{n2_modemo_Nz100Nr200Nm128No20nu1.0e-05_vr.pdf} \caption{Cross-sections illustrating the gravitational (left) and radial velocity (right) perturbations induced in $\gamma=3/2$ polytropes at the tidal frequencies indicated by the filled circles in Fig. \ref{fig:n2_k222} (bottom). With increasingly rapid rotation, the induced tidal response involves more spherical harmonic degrees.} \label{fig:n2_ros} \end{figure*} \subsection{Stratified $\gamma=3/2$ polytropes} Figs. \ref{fig:n2_k222}-\ref{fig:n2_k424} are the same as Figs. \ref{fig:n15_k222}-\ref{fig:n15_k424}, but for stably stratified $\gamma=3/2$ polytropes. The resonances depicted in Figs. \ref{fig:n2_k222}-\ref{fig:n2_k424} consequently correspond to a different selection of internal oscillation modes. Along with f-modes, the peaks at higher frequencies $|\omega_t/\Omega|\gtrsim0.5$ are produced by gravito-inertial modes (g-modes), which are primarily restored by buoyancy. As shown in the bottom panel of \autoref{fig:n2_k222}, for a given rotation rate only a handful of long wavelength g-modes give rise to significant features in $Im[k_{22}^2]$ for this value of viscosity. Although the g-mode resonances are regularly spaced in $\omega_t/\Omega,$ their eigenfunctions can differ significantly from the g-modes of non-rotating stars. Rotation can confine g-mode eigenfunctions to the equator, and also mix together modes that in the limit $\Omega\rightarrow0$ have different harmonic degrees but nearly degenerate frequencies. The latter effect leads to ``rosette'' patterns in the oscillations' kinetic energy distributions \citep{Ballot2012,Takata2013,Dewberry2021}. The cross-sections (slices along the rotation axis) shown in Fig.
\ref{fig:n2_ros} demonstrate the gravitational (left) and radial velocity (right) perturbations of the tidal response computed (using the full tidal potential) at the frequencies indicated by the filled circles in Fig. \ref{fig:n2_k222} (bottom). We plot the imaginary part of $\delta\Phi$ (and similarly the real part of $v_r$) because it better illustrates the structure of the resonant waves than the real part (with the phase chosen in Appendix \ref{app:num}, $Re[\delta\Phi]$ is dominated at all frequencies by the structure of non-resonantly driven f-modes). With increasingly rapid rotation, the induced wave patterns couple across a larger number of spherical harmonic degrees. Along with gravito-inertial and rosette waves at larger frequencies, Figs. \ref{fig:n2_k222} and \ref{fig:n2_k424} (bottom) reveal an additional family of resonant modes at $\omega_t/\Omega\simeq-0.2$. ``Rossby'' modes are purely retrograde oscillations restored by both the Coriolis and buoyancy forces \citep{Townsend2003}. With relatively large tidal overlap integrals, Rossby modes may be important to tidal dissipation in super-synchronously rotating white dwarfs \citep{Fuller2014}. \section{Tides in partially stratified bodies: application to Jupiter}\label{sec:Jup} In this section we consider the tidal response of Jupiter interior models that are simple but self-consistently flattened by rotation. Particular motivation for this application comes from the fact that Juno measurements of Jupiter's interaction with Io produce values of $k_{42}\simeq1.29$ \citep{Durante2020} that differ significantly from theoretical calculations of $k_{42}^\text{hs}\simeq1.74$ \citep{Wahl2017,Wahl2020,Nettelmann2019}. We find that dynamical tides are capable of reconciling this discrepancy, essentially due to dynamical driving of the tesseral response by the sectoral tide.
\subsection{Interior models} We focus on $\gamma=2$ polytropes with Jupiter's bulk rotation rate $\Omega/\Omega_d\simeq 0.3$, and different profiles of stable stratification introduced via modification of $\Gamma_1.$ In particular, we assume the functional form \begin{equation} \Gamma_1(\zeta) =2 + \frac{A}{2} \left\{ 1 -\cos\left[ 2\pi\left( \frac{\zeta - \zeta_\text{i}} {\zeta_\text{o}-\zeta_\text{i}} \right) \right] \right\}. \end{equation} Here $A$ describes the amplitude of the deviation from isentropy, $\zeta$ is a dimensionless quasi-radial coordinate equal to one on the surface (see Appendix \ref{app:lin}), and $\zeta_\text{i},\zeta_\text{o}$ delimit the boundaries of a stably stratified region. Along with an isentropic model with $A=0,$ we consider partially stratified models characterized by $A=2,[\zeta_\text{i},\zeta_\text{o}]=[-0.7,0.7]$ and $A=0.5,[\zeta_\text{i},\zeta_\text{o}]=[0.59,0.76]$. Recent models involving a wide, stably stratified ``dilute'' core \citep{Wahl2017b,Militzer2022} motivate the former parameterization, while the latter produces a narrower band of stratification in the outer envelope \citep{Stevenson2022}. These profiles for stable stratification are intended only to capture the relevant wave dynamics, and not to serve as detailed interior models for Jupiter. \begin{figure*} \centering \includegraphics[width=\textwidth]{costrat_k222_vs_k22_Nz100Nm128No20Nr200_mnu1.0e-06.pdf} \caption{Profiles of $k_{22}^2$ like those of Fig. \ref{fig:n15_k222}, but for parameterized Jupiter interior models that are neutrally stratified (black), partially stratified with an expansive dilute core (blue), and partially stratified with a stable region in the envelope (orange). These calculations include a dynamic viscosity $\mu_v=10^{-6}.$} \label{fig:jup_k222} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{costrat_k424_vs_k42_Nz100Nm128No20Nr200_mnu1.0e-06.pdf} \caption{Same as Fig.
\ref{fig:n15_k424}, but for the Jupiter interior models of Fig. \ref{fig:jup_k222}. The filled circles denote frequencies corresponding to the cross-sections shown in Fig. \ref{fig:jupeaks}.} \label{fig:jup_k424} \end{figure*} \subsection{Tidal wave mixing} Figs. \ref{fig:jup_k222}-\ref{fig:jup_k424} are similar to Figs. \ref{fig:n15_k222}-\ref{fig:n15_k424} and \ref{fig:n2_k222}-\ref{fig:n2_k424}. The solid lines plot real and imaginary parts of the Love numbers $k_{22}^2$ and $k_{42}^4$ describing $\ell=2$ and $\ell=4$ responses to isolated tidal potentials of the same degree, while the faint dashed lines show the ratios $k_{22}$ and $k_{42}$ computed by imposing a tidal potential including degrees $n=2-12.$ The black, blue and orange colors respectively indicate calculations for the completely isentropic, dilute core, and envelope stratification models introduced in the previous subsection. We adopt a dynamic viscosity of $\mu_v=10^{-6}$ for the calculations shown in these figures. \begin{figure*} \centering \includegraphics[width=\textwidth]{costrat_peakmaps_Nz100Nm128No20Nr200_mnu1.0e-06_vrlmxNone.pdf} \caption{Cross-sections illustrating the gravitational (left) and radial velocity (right) perturbations induced in dilute core (top row) and envelope stratification (bottom row) models, at the tidal frequencies indicated by the filled circles in Fig. \ref{fig:jup_k424} (bottom). Interior white lines indicate boundaries between the convective and stably stratified regions. The partial stratification of these models leads to waves with gravito-inertial character in the stratified regions, and purely inertial character in the convective regions.} \label{fig:jupeaks} \end{figure*} The orange and blue curves demonstrate a more complicated spectrum of resonances than the polytropes considered in Section \ref{sec:poly}, owing to the fact that these Jupiter models possess \emph{both} convective and stably stratified regions. 
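The parameterized stratification profiles used in these models can be written down in a few lines; a minimal sketch of $\Gamma_1(\zeta)$, assuming (as an implementation choice of this illustration, not a statement from the models) that $\Gamma_1=2$ outside $[\zeta_\text{i},\zeta_\text{o}]$:

```python
import numpy as np

def Gamma1(zeta, A, zi, zo):
    """First adiabatic exponent: 2 (neutral for a gamma = 2 polytrope) outside
    [zi, zo], raised by a smooth cosine bump of amplitude A inside."""
    zeta = np.asarray(zeta, dtype=float)
    bump = 0.5 * A * (1.0 - np.cos(2.0 * np.pi * (zeta - zi) / (zo - zi)))
    return np.where((zeta >= zi) & (zeta <= zo), 2.0 + bump, 2.0)

# dilute-core parameterization quoted in the text
A, zi, zo = 2.0, -0.7, 0.7
assert np.isclose(Gamma1(zo, A, zi, zo), 2.0)        # matches the isentrope at the edge
assert np.isclose(Gamma1(0.0, A, zi, zo), 2.0 + A)   # peak deviation at the midpoint
```

The cosine form guarantees that $\Gamma_1$ joins the isentropic value continuously at both edges of the stratified layer, so the buoyancy frequency turns on smoothly.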
Inertial wave spectra are generically dense in frequency space \citep[e.g.,][]{Papaloizou1982}. All gravito-inertial modes with rotating-frame frequencies $\omega\in[-2\Omega,2\Omega]$ consequently have the capacity to mix with inertial waves in adjacent convective regions, so the waves forced by the tidal potential possess an inherently mixed character in the frequency range $\omega_t\in[-2\Omega,2\Omega]$. This is illustrated by the cross-sections in Fig. \ref{fig:jupeaks}, which (like Fig. \ref{fig:n2_ros}) show the gravitational and radial velocity perturbations induced by the full (multiple-degree) tidal potential at the resonant frequencies indicated by filled circles in Fig. \ref{fig:jup_k424} (bottom). The white lines indicate boundaries between the convective and stably stratified regions, with calculations for the dilute core and envelope stratification models shown in the top and bottom rows (respectively). The cross-sections in Fig. \ref{fig:jupeaks} exhibit similar structure to the tidal waves computed by \citet{Lin2023} without the inclusion of centrifugal flattening. The gravito-inertial waves additionally exhibit some formation of rosette patterns, while the non-specular reflection of inertial waves off of the boundary between the stably stratified regions and the outer envelope leads to short-wavelength beams of inertial waves propagating at an angle that depends on $\omega_t$ \citep{Ogilvie2009,Rieutord2010,Ogilvie2013,Lin2021,Lin2023}. Although this scattering to shorter wavelengths can enhance tidal dissipation at some frequencies, \citet{Lin2021} showed that the largest peaks in dissipation still correspond to underlying flows resembling the longest wavelength inertial modes of fully isentropic models. 
From the perspective of the modal expansion described in Section \ref{sec:modal}, the susceptibility of a given mixed mode $\alpha$ to excitation by a tidal potential of degree $\ell$ boils down to the requirement of a large ratio between overlap integrals $Q_{\ell m}^\alpha$ and $\epsilon_\alpha$ coefficients. This requirement in turn filters for waves with some long-wavelength structure (since the tidal potential is itself long-wavelength), regardless of whether those waves are also scattered to shorter wavelengths. Most importantly, the induced gravito-inertial and inertial waves overlap with multiple spherical harmonic degrees, regardless of whether the perturbing tidal potential includes one or multiple harmonics. \begin{figure*} \centering \includegraphics[width=\textwidth]{dk42_model_cpr_nu1.0e-06_inset.pdf} \caption{Per cent deviations in the real part of $k_{42}$ from the hydrostatic $k_{42}^\text{hs}$ (Equation \ref{eq:hs}), for the same models as Figs. \ref{fig:jup_k222}-\ref{fig:jup_k424}. As shown in the inset, resonances involving mixed waves at $\omega_t/\Omega\simeq-1.5$ (middle panels of Fig. \ref{fig:jupeaks}) lead to deviations of $\simeq-10\%$ to $-15\%$. Deviations of this magnitude are sufficient to reconcile observations and hydrostatic calculations to within $3\sigma$ \citep{Idini2022b}.} \label{fig:dk42} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{costrat_k42_contributions_Nz100Nm128No20Nr200_mnu1.0e-06.pdf} \caption{Curves describing the relative contributions of the degree $n=2$ (solid) and $n=4$ (dashed) parts of the tidal potential to the degree $\ell=4$ part of Jupiter's response. The $n=4$ contribution is inconsequential compared to that of $n=2$. Physically, this reflects the fact that the mixed waves of Fig. \ref{fig:jupeaks} overlap with both $n=2$ and $n=4$, but the quadrupolar forcing has a much larger amplitude at the tidal frequencies of the Galilean moons. 
The faint, thick lines show calculations with a lower viscosity ($\mu_v=10^{-7}$).} \label{fig:k42n} \end{figure*} \subsection{Jupiter's dynamical $k_{42}$} Although our calculations produce qualitatively similar waves to those described by \citet{Lin2023}, with our inclusion of centrifugal flattening---excluded by both \citet{Idini2022b} and \citet{Lin2023}---we observe a much stronger dynamical impact on the tesseral ratios $k_{\ell m}$ with $\ell>m.$ Fig. \ref{fig:dk42} plots the per cent deviation of $Re[k_{42}]$ profiles computed for the three models shown in Figs. \ref{fig:jup_k222}-\ref{fig:jup_k424} from ``hydrostatic'' $Re[k_{42}^\text{hs}]$ (see Equation \ref{eq:hs}) values comparable with those computed by \citet{Wahl2017,Wahl2020} and \citet{Nettelmann2019}. We compute $k_{42}^\text{hs}$ profiles with the modal expansion described in Section \ref{sec:modal}, using oscillations calculated in the inviscid limit for the neutrally stratified model. Note that the isentropic and stratified Jupiter models considered here agree precisely in the limit $\omega_t\rightarrow0,$ so this approach yields a hydrostatic $k_{42}^\text{hs}$ that is relevant for all three. The inset highlights the frequency range relevant to Jupiter's Galilean moons, whose $m=2$ tidal frequencies are indicated by dashed grey lines. The solid black line again corresponds to the neutrally stratified model. Except for a feature near $\omega_t/\Omega\simeq-1.14$ corresponding to a resonance with the longest wavelength retrograde inertial mode, this model exhibits only gradual variation in $Re[k_{42}]$ due to the non-resonant influence of f-modes at much larger frequencies. As noted by \citet{Idini2022a}, this contributes a deviation of $\simeq-4\%$ from $k_{42}^\text{hs}$ at Io's frequency, which is insufficient to reconcile the tension between observations and hydrostatic calculations. 
On the other hand, the orange and blue curves (corresponding to the models with stratified regions in the outer envelope and core, respectively) exhibit much larger deviations. In particular, tidal wave excitation close to $\omega_t/\Omega\simeq-1.5$ (the middle cross-sections in Fig. \ref{fig:jupeaks}) induces deviations in $Re[k_{42}]$ of $10$--$15\%$ relative to the local $Re[k_{42}^\text{hs}]$. Such deviations are sufficient for agreement with the observed $k_{42}$ to within $3\sigma$ \citep{Idini2022a}. This result contrasts with the calculations of \citet{Lin2023}, which suggested that dynamically excited waves (with notably similar morphology to those shown in Fig. \ref{fig:jupeaks}) were incapable of significantly modifying the real part of $k_{42}$ (cf. their Figs. 6 and 8). The difference lies in our inclusion of centrifugal flattening, as demonstrated by Fig. \ref{fig:k42n}. The curves in this figure plot $Re[k_{42}^n - k_{42}^n(\omega_t=0)]U_{n2}/U_{42}$ for $n=2$ (solid) and $n=4$ (dashed). This quantity describes the extent to which driving by the tidal potential of degree $n$ contributes to the response in degree $\ell=4.$ With the inclusion of centrifugal flattening, the Love numbers $k_{42}^2$ and $k_{42}^4$ are comparable for all of the waves shown in Fig. \ref{fig:jupeaks}. However, since $U_{22}\gg U_{42}$ as $\Omega_o\rightarrow0$ (i.e., as $\omega_t/\Omega\rightarrow-2$), the contribution from $n=2$ invariably dwarfs that from $n=4.$ Fig. \ref{fig:k42n} clearly demonstrates that wave coupling with the degree $n=2$ part of the tidal potential is the most important for the degree $\ell=4$ part of the tidal response. We note that the dynamic viscosity used here ($\mu_v=10^{-6}$ in units with $G=M=R_\text{eq}=1$) is relatively large, ranging from Ekman numbers of $Ek=\mu_v/(\rho_0 \Omega R_\text{eq}^2)\simeq 10^{-6}$ at $r=0$ to $Ek\simeq10^{-3}$ close to the surface.
The simplifying assumption of a constant dynamic (rather than kinematic) viscosity in particular leads to stronger damping in the outer envelope than may be realistic. However, we do not expect this to affect our main results: the faint, wider curves in Fig. \ref{fig:k42n} show calculations with a smaller $\mu_v=10^{-7}.$ Decreasing viscosity does little to affect the real parts of $k_{42}$ away from the strongest resonances, and only causes larger deviations close to resonance. Moreover, the results shown in Fig. \ref{fig:dk42} and Fig. \ref{fig:k42n} do not require moving particularly close to the strongest resonances (the widths of which vanish as $\mu_v\rightarrow0$). The inherently mixed character of the resonantly driven waves shown in Fig. \ref{fig:jupeaks} means that labelling any of them as a g-mode of a particular harmonic degree is inappropriate. Nevertheless, we identify some of the resonances shown in Fig. \ref{fig:k42n} as involving what originate as $\ell=m=2$ g-modes in the non-rotating regime. We propose that resonances involving these oscillations provide as viable a candidate for the observed dynamical variation in Jupiter's $k_{42}$ as the $\ell=4$ g-modes considered by \citet{Idini2022b}. \citet{Idini2022b} used an identification of the resonance with $\ell=4$ g-modes to infer an extended dilute core. Allowing for the possibility that Io may instead be in resonance with an $\ell\simeq2$ g-mode may lead to modification of these expectations for stable stratification in Jupiter. \section{Conclusions}\label{sec:conc} We have introduced a spectral numerical method for self-consistently computing the viscous tidal response of rapidly rotating, oblate planets and stars with arbitrary internal structures. We have applied this method to fully isentropic (Figs. \ref{fig:n15_k222}-\ref{fig:n15_k424}) and fully stratified (Figs. \ref{fig:n2_k222}-\ref{fig:n2_ros}) polytropes with rotation rates up to nearly the mass-shedding limit. 
We have also computed the tidal response for models of Jupiter's interior that include both stably stratified and convective regions (Figs. \ref{fig:jup_k222}-\ref{fig:jupeaks}). Contrary to recent work excluding centrifugal flattening \citep{Lin2023}, we find (Fig. \ref{fig:dk42}) that tidally excited oscillations in Jupiter are capable of reconciling a discrepancy between observed \citep{Durante2020} and predicted \citep{Nettelmann2019,Wahl2020} values of $k_{42}$ (the ratio between $\ell=4,m=2$ coefficients in multipole expansions of Jupiter's tidal response and Io's tidal potential). We find that in centrifugally flattened models, $\ell=2$ driving of gravito-inertial and inertial waves contributes most significantly to Jupiter's $\ell=4$ response (Fig. \ref{fig:k42n}). Our results indicate that a wider set of internal oscillations than considered by \citet{Idini2022b} (in particular those originating as $\ell=2$ g-modes in the non-rotating regime) may serve as viable candidates for a Jupiter-Io resonance. Evaluating resonances with these additional oscillations in a range of realistic interior models may lead to modified constraints on Jupiter's stable stratification. \section*{Acknowledgements} I thank Jim Fuller, Dong Lai, and Yanqin Wu for helpful conversations. I gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference \#CITA 490888-16]. \section*{Data Availability} The data underlying this work will be provided upon reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} Star formation activities in the extreme outer regions of gas-rich disk galaxies have recently come to be discussed extensively not only in the context of star formation laws in low-density environments of galaxies but also in the context of the formation and evolution of disk galaxies (e.g., Ferguson et al. 1998a, b; Leli\`evre \& Roy 2000; Cuillandre et al. 2001; Martin \& Kennicutt 2001; de Blok \& Walter 2003; Gil de Paz et al. 2005; Thilker et al. 2005). Recent observational studies of M83 by the {\it Galaxy Evolution Explorer (GALEX)} have discovered UV-bright stellar complexes associated with filamentary HI structures in the extreme outer disk at $R \sim 4R_{\rm HII}$, where $R_{\rm HII}$ corresponds to the radius where the majority of HII regions are detected (Thilker et al. 2005). Furthermore, a number of small isolated HII regions have recently been discovered at projected distances of up to 30 kpc from their nearest galaxy (e.g., NGC 1533) in the {\it Survey for Ionization in Neutral Gas Galaxies (SINGG)} (e.g., Meurer 2004; Ryan-Weber et al. 2003). It has been discussed whether the observed properties of these recent star formation activities in the extreme outer HI regions of disk galaxies can be understood in terms of local gravitational instability within the HI disks (e.g., Ferguson et al. 1998a; Martin \& Kennicutt 2001). Ferguson et al. (1998a) suggested that a simple picture of local gravitational instability cannot explain self-consistently the observed radial distributions of HI gas and H$\alpha$ flux of star-forming regions. Star formation (traced by H$\alpha$ emission) in dwarf irregular galaxies (e.g., ESO 215-G?009) with extended HI disks is observed to occur in their very outer parts, where the gas densities are well below the critical density for star formation (e.g., Warren et al. 2004). Thus it remains unclear what controls the star formation activities in the extreme outer regions of disk galaxies.
Using numerical simulations of galaxy formation based on the cold dark matter (CDM) model, Font et al. (2001) demonstrated that dark matter subhalos predicted in the CDM model play only a minor role in the heating of the disk, owing to the very small number of subhalos approaching the solar radius of 8.5 kpc. The sizes of the HI disks of gas-rich galaxies are generally observed to be significantly larger than those of their optical disks, $R_{\rm s}$ (Broeils \& van Woerden 1994), and some fraction of low-luminosity galaxies have HI gas envelopes extending out to 4--7 $R_{\rm s}$ (e.g., Hunter 1997). No theoretical attempts have been made to investigate the dynamical impact of dark matter subhalos (hereafter referred to as the ``dark impact'' for convenience) on {\it the extended HI disks of galaxies}, though several numerical studies have already investigated the influences of triaxial halos with figure rotation and of tidal galaxy interactions on the evolution of the extended HI disks (e.g., Theis 1999; Bekki \& Freeman 2002; Masset \& Bureau 2003; Bekki et al. 2005a, b). The purpose of this Letter is to propose that the dynamical interaction between dark matter subhalos and extended HI gas disks can be important for better understanding the origin of the recent star formation observed in the extreme outer regions of disk galaxies. By using hydrodynamical simulations of the dark impact on galaxy-scale HI disks, we show that local fine structures (e.g., filaments and holes) can be formed by the dark impact in the HI disks. We discuss whether star formation can occur in the high-density regions of the apparently filamentary structures formed by the dark impact. We suggest that characteristic fine structures formed by the dark impact in the HI disk of a galaxy {\it with apparently no interacting visible dwarfs close to the disk} can indirectly probe the presence of dark matter subhalos that frequently pass through the outer part of the HI disk.
\section{Model} We investigate how the extended HI disk of a galaxy embedded in a {\it fixed} dark matter halo with the total mass of $M_{\rm h}$ is dynamically influenced by a {\it single subhalo} that is orbiting the galaxy and represented by a {\it point mass}. In order to elucidate the essence of the dynamical effects of the dark impact more clearly, we adopt the above somewhat idealized model; we will describe the successive and cumulative impact of {\it numerous} subhalos with a reasonable spatial distribution within a {\it live} dark matter halo of a galaxy in our forthcoming papers (Bekki \& Chiba 2005, in preparation). We adopt the density distribution of the NFW halo (Navarro, Frenk \& White 1996) suggested from CDM simulations for the fixed dark matter halo: \begin{equation} {\rho}(r)=\frac{\rho_{s}}{(r/r_{\rm s})(1+r/r_{\rm s})^2}, \end{equation} where $r$, $\rho_{s}$, and $r_{\rm s}$ are the spherical radius, the characteristic density of a dark halo, and the scale length of the halo, respectively. The $c$ parameter of the NFW halo is set to be 10.0. The total baryonic mass ($M_{\rm b}$) of the galaxy is assumed to be $0.1M_{\rm h}$. Henceforth, all masses are measured in units of $M_{\rm b}$ and distances in units of $r_{\rm s}$, unless otherwise specified. Velocity and time are measured in units of $v$ = $ (GM_{\rm b}/r_{\rm s})^{1/2}$ and $t_{\rm dyn}$ = $(r_{\rm s}^{3}/GM_{\rm b})^{1/2}$, respectively, where $G$ is the gravitational constant and assumed to be 1.0 in the present study. If we adopt $M_{\rm b}$ = 6.0 $\times$ $10^{10}$ $ \rm M_{\odot}$ and $r_{\rm s}$ = 10.5\,kpc as fiducial values, then $v$ = 1.57 $\times$ $10^{2}$\,km\,s$^{-1}$ and $t_{\rm dyn}$ = 6.55 $\times$ $10^{7}$ yr. The gas disk with the total mass of $M_{\rm g}$ is composed of $10^5$ SPH (Smoothed Particle Hydrodynamics) particles, and an isothermal equation of state with a sound speed of $0.02 v$ is used for the gas.
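As a quick arithmetic check, the fiducial unit conversions quoted above can be reproduced directly from the definitions of $v$ and $t_{\rm dyn}$ (a minimal sketch; the physical constants below are standard values rounded to four significant figures):

```python
# Physical constants (standard values, rounded)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
kpc = 3.086e19       # kiloparsec, m
yr = 3.156e7         # year, s

M_b = 6.0e10 * M_sun # fiducial baryonic mass
r_s = 10.5 * kpc     # fiducial scale length

v = (G * M_b / r_s) ** 0.5           # velocity unit v = (G M_b / r_s)^{1/2}, m/s
t_dyn = (r_s**3 / (G * M_b)) ** 0.5  # time unit t_dyn = (r_s^3 / G M_b)^{1/2}, s

print(v / 1e3)       # ~157 km/s
print(t_dyn / yr)    # ~6.55e7 yr
```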
The gas is assumed to have a uniform radial density distribution (rather than an exponential one) and is distributed over $2r_{\rm s} \le R \le 4r_{\rm s}$, because we intend to investigate the influence of the dark impact on the dynamical evolution of the outer gas disk. A reasonable dynamical model of the Galaxy embedded in the NFW halo has $r_{\rm s} \sim 0.6 R_{\rm s}$, where $ R_{\rm s}$ is the stellar disk size (Bekki et al. 2005c). Therefore the adopted gaseous distribution corresponds to that for $1.2R_{\rm s} \le R \le 2.4R_{\rm s}$ and thus can be regarded as reasonable for investigating the gas dynamics of the outer HI disks of galaxies. The gas disk is assumed to be influenced only by its host dark halo (not by the inner stellar disk component), because our interest is in the evolution of the very outer part of the gas disk, where the gravitational field of the dark matter halo dominates. The mass of the subhalo ($M_{\rm sb}$) is set to be a free parameter that controls the strength of the dark impact and ranges from $10^{-4} M_{\rm h}$ to $10^{-2} M_{\rm h}$. Although the initial position (${\bf X}_{\rm sb}$) and velocity (${\bf V}_{\rm sb}$) are set to be free parameters, we show the results of the models with ${\bf X}_{\rm sb}$ = ($x$,$y$,$z$) = ($3r_{\rm s}$, 0, $0.5r_{\rm s}$) and ${\bf V}_{\rm sb}$ = ($v_{\rm x}$,$v_{\rm y}$,$v_{\rm z}$) = (0, 0, $0.5V_{\rm c}$), where $V_{\rm c}$ is the circular velocity at ${\bf X}_{\rm sb}$ derived for the NFW halo. The results of other models with different ${\bf X}_{\rm sb}$ and ${\bf V}_{\rm sb}$ will be described in our forthcoming papers (Bekki \& Chiba 2005). We mainly show the results of the ``fiducial model'' with $M_{\rm g}=0.01M_{\rm h}$ and $M_{\rm sb}=0.01M_{\rm h}$, which shows most clearly the roles of the dark impact in the dynamical evolution of HI gas disks in the present study. All the calculations related to the above hydrodynamical evolution have been carried out on the GRAPE board (Sugimoto et al.
1990) at the Astronomical Data Analysis Center (ADAC) at the National Astronomical Observatory of Japan. The gravitational softening parameter in the GRAPE5-SPH code (Bekki \& Chiba 2005) is fixed at 0.019 in our units, and the time integration of the equation of motion is performed by using the predictor-corrector method with a multiple time step scheme. \section{Result} Figure 1 describes how gaseous fine structures are formed by the dark impact in the extended gas disk for the fiducial model with $M_{\rm sb}/M_{\rm h}=0.01$. As the dark matter subhalo passes through the thin gas disk, it tidally disturbs the disk in a moderately strong manner. Owing to the small ratio of $M_{\rm sb}/M_{\rm h}=0.01$, the tidal field of the subhalo is not strong enough to trigger the formation of global, non-axisymmetric structures (e.g., spiral arms and bars) and warps. The dark impact however can form local fine structures that look like ``filaments'' in the $x$-$y$ projection and ``chimneys'' in the $x$-$z$ projection at $T=1.1$. As the subhalo approaches the gas disk, it gravitationally attracts gaseous particles along its path. The vertical distribution of gaseous particles close to the subhalo passing through the gas disk follows the wake induced by the subhalo. The particles consequently appear to be levitated (or lifted up) during and after the passage of the subhalo through the gas disk. Accordingly, chimney-like structures can be clearly seen if the HI disk is viewed edge-on. These fine structures can be clearly seen in other models with $M_{\rm sb}/M_{\rm h} > 0.001$ and can thus be regarded as characteristic of the local gaseous structures formed by the dark impact, though the details of their morphologies differ appreciably from one another. Figure 2 shows the two-dimensional (2D) distribution of the projected gaseous densities (${\mu}_{\rm g}$) of the simulated kpc-scale fine structures shown in Figure 1.
Because of the tidal compression of gas by the dark impact, some parts of the structures show ${\mu}_{\rm g}$ higher than the threshold gas density for star formation (${\mu}_{\rm thres} \sim 3\,{\rm M}_{\odot}\,{\rm pc}^{-2}$; Hunter et al. 1998). ${\mu}_{\rm g}$ can be as high as $\sim 14\,{\rm M}_{\odot}\,{\rm pc}^{-2}$, and about 4 \% of the local regions (i.e., 105 among 2500 cells) in Figure 2 have ${\mu}_{\rm g} > {\mu}_{\rm thres}$. Although we do not model star formation in the present simulations, these results imply that star formation in the extreme outer parts of gaseous disks of galaxies is highly likely to be triggered by the dark impact. Figure 3 shows how the strength of the dark impact controls the time evolution of ${\mu}_{\rm g}$ of the outer gaseous disks of galaxies. Larger values of $M_{\rm sb}/M_{\rm h}$ mean that the subhalos impart a stronger dynamical impact on the gas disks. It is clear from Figure 3 that (1) the maximum density of local structures formed by the stronger dark impact is higher and (2) the number fraction of cells with ${\mu}_{\rm g} > {\mu}_{\rm thres}$ is larger for the stronger dark impact. These results imply that star formation can occur over wider regions of the extended HI disks of galaxies when the HI disks are more strongly influenced by the dark matter subhalos. The long-term evolution of local fine structures formed by the dark impact should be investigated and described in our future work with models of star formation, because star formation and its feedback (e.g., supernova explosions) are highly likely to influence the evolution of the structures. However, it would be instructive to describe some results possibly characteristic of the long-term evolution derived in the present models without star formation. Figure 4 shows the 2D distribution of ${\mu}_{\rm g}$ in the model with $M_{\rm sb}/M_{\rm h}=0.001$ at $T=6$.
As shown in this figure, a kpc-scale hole surrounded by appreciably higher density regions can finally be formed as a result of the dark impact. This result implies that, even without energetic thermal and kinematic feedback from massive OB stars and supernovae, giant HI holes can be formed in the extended HI disks of galaxies by the dark impact. Thus the dark impact provides a new mechanism for the formation of the giant HI holes observed in gas-rich dwarf irregular galaxies (e.g., the LMC). The observed giant HI hole in NGC 6822 (e.g., de Blok \& Walter 2003) might well be formed by the dynamical impact of a very faint companion dwarf (observed as the ``NW cloud'') on the HI disk: the physical mechanism of the HI hole formation can be essentially the same as the dark impact. Thus the HI hole in the NGC 6822 system implies the viability of the dark impact scenario of HI hole formation. We confirm that the HI fine structures can be formed by the dark impact in models with different $M_{\rm g}$ ($<0.05 M_{\rm h}$), though the details of the HI morphologies depend on $M_{\rm g}$ and the initial orbits of the subhalos (i.e., ${\bf V}_{\rm sb}$). More pronounced chimney-like structures can be formed in the models with more highly inclined subhalo orbits. Apparently filamentary structures with higher gas densities can be formed by the dark impact in the model with a higher initial gas density of $M_{\rm g} = 0.02 M_{\rm h}$. \section{Discussions and conclusions} Although the present study has suggested that the dark impact can be responsible for star formation in the outer HI gas disks of galaxies, it remains unclear what roles the dark impact has in the evolution of HI disks {\it within the stellar disks of galaxies.} Although the probability of dark matter subhalos approaching galactic stellar disks is low (Font et al. 2001), we here suggest the following two possible roles of the dark impact.
One is that unique local structures of OB stars, young clusters, and super-associations, such as the galactic belt and the Gould belt in the Galaxy (e.g., Stothers \& Frogel 1974), can result from the dark impact. The other is that the giant HI holes without bright optical stellar counterparts (e.g., star clusters hosting the supernova explosions that form giant HI holes) observed in some low-luminosity galaxies can be due to the dark impact. These speculative suggestions need to be investigated in a quantitative way by our future high-resolution simulations of the dark impact on the inner HI gas disks of galaxies. The present study has shown that the previous passages of dark matter subhalos through the extended HI disks of galaxies can be imprinted on fine structures (e.g., filaments and holes) in the HI disks. The present model also predicts that if a galaxy-scale halo with an extended HI gas disk has $\sim 500$ subhalos, $\sim 8$ HI fine structures can be formed by the dark impact every 1 Myr. Recent HI observations have revealed that some fraction of galaxies have very extended HI disks (e.g., Hunter 1997; Warren et al. 2004), though the total number of galaxies whose outer HI structures have been extensively investigated is still very small. Accordingly, we suggest that if galaxy halos are composed of numerous subhalos, as the CDM model predicts, future high-resolution HI observations of the fine structures of extended gas disks for a statistically significant number of galaxies can {\it indirectly} probe the existence of the subhalos and thereby provide some constraints on the possible spatial distributions and kinematics of the subhalos. We also suggest that HI gas disks with apparently no interacting (visible) dwarf galaxies close to the disks would be the best observational targets for probing subhalos in galaxies. \acknowledgments We are grateful to the referee Fabio Governato for valuable comments, which helped to improve the present paper. K.B.
acknowledges the financial support of the Australian Research Council throughout the course of this work. The numerical simulations reported here were carried out on GRAPE systems kindly made available by the Astronomical Data Analysis Center (ADAC) at the National Astronomical Observatory of Japan (NAOJ). \clearpage
\section{Proof of Lemma~\ref{lemma}} \label{sec_lemma} Below we prove a stronger version of Lemma~\ref{lemma} from the main text. Instead of maximizing the QFI on the left-hand side of Eq.~\eqref{eq_thm_bound} over input states $\rho$, we explicitly provide a concrete state for which the bound holds. \setcounter{lemma}{0} \begin{lemma}[stronger version] Let $\{ e^{-i\theta T} \}_\theta$ be a finite-dimensional representation of the unitary group $U(1)$ with a Hermitian generator $T$ and $\mathcal{U}^\theta$ be a channel corresponding to a rotation $e^{-i\theta T}$. Let $\ket{t_+}$ and $\ket{t_-}$ be two orthogonal and normalized eigenvectors of $T$, which correspond to the maximal $t_+$ and minimal $t_-$ eigenvalues of $T$. Then, for $\ket{\psi} = \frac{\ket{t_+} + \ket{t_-}}{\sqrt 2}$ and any $\theta$-dependent channel $\mathcal{C}^\theta$ we have \begin{equation} \label{eq_lemma_strong} \max_\theta F[\mathcal{C}^\theta (\ketbra{\psi}{\psi})] \geq \min_\theta \left[1- 8 \d{\mathcal{C}^\theta, \mathcal{U}^\theta}^2\right](t_+ - t_-)^2. \end{equation} \end{lemma} \begin{proof} We start by defining a $U(1)$-convolved channel $\mathcal{C}^\theta_{\mathrm{con}}$ for the channel $\mathcal{C}^\theta$ in the following way \begin{equation} \mathcal{C}^\theta_{\mathrm{con}} = \intd{\theta'}{\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta+\theta'}}, \end{equation} where $d\theta'$ corresponds to the normalized Haar measure of the group $U(1)$, i.e., $\int \t{d} \theta' = 1$ and the interval of integration is $[0,2\pi]$. By making a substitution $\theta'' = \theta + \theta'$ we obtain \begin{equation} \mathcal{C}^\theta_{\mathrm{con}} = \intd{\theta''}{\mathcal{U}^{\theta}\circ\mathcal{U}^{-\theta''}\circ\mathcal{C}^{\theta''}} = \mathcal{U}^{\theta}\circ\mathcal{C}_{\mathrm{con}}, \end{equation} where we define a $\theta$-independent channel $\mathcal{C}_{\mathrm{con}} = \intd{\theta'}{\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta'}}$. 
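The substitution argument above can be illustrated numerically. The following toy sketch (not from the text; the generator $T=\sigma_z/2$, the amplitude-damping noise, and the 16-point grid are illustrative choices) verifies the identity $\mathcal{C}^\theta_{\mathrm{con}} = \mathcal{U}^{\theta}\circ\mathcal{C}_{\mathrm{con}}$ for a qubit:

```python
import numpy as np

# Toy check of C^theta_con = U^theta o C_con for a qubit with T = sigma_z / 2.
# Superoperators act on row-major vec(rho): U(.)U^dag -> kron(U, conj(U)).
def u_super(theta):
    u = np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])
    return np.kron(u, u.conj())

# A (non-covariant) amplitude-damping channel as the theta-independent noise:
gamma = 0.3
k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
amp_damp = np.kron(k0, k0.conj()) + np.kron(k1, k1.conj())

def c_super(theta):                       # C^theta = AmpDamp o U^theta
    return amp_damp @ u_super(theta)

# A uniform 16-point grid integrates the finitely many Fourier modes of the
# integrand exactly, so it reproduces the Haar average over U(1):
grid = 2 * np.pi * np.arange(16) / 16
c_con = sum(u_super(-t) @ c_super(t) for t in grid) / 16

theta = 0.7
c_con_theta = sum(u_super(-t) @ c_super(theta + t) for t in grid) / 16
print(np.allclose(c_con_theta, u_super(theta) @ c_con))  # True
```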
We can show that the Bures distance between the $U(1)$-convolved channel $\mathcal{C}^\theta_{\mathrm{con}}$ and the unitary rotation channel $\mathcal{U}^\theta$ is upper-bounded by the distance between $\mathcal{C}^\theta$ and $\mathcal{U}^\theta$ maximized over $\theta$, i.e., \begin{equation} \label{eq_dist_con} \d{\mathcal{C}^\theta_{\mathrm{con}},\mathcal{U}^\theta} \leq \max_\theta \d{\mathcal{C}^\theta,\mathcal{U}^\theta}. \end{equation} We infer the above inequality from the following sequence of equalities and inequalities \begin{eqnarray} \left[1-\d{\mathcal{C}^\theta_{\mathrm{con}},\mathcal{U}^\theta}^2\right]^2 &=& [\mathcal{F}(\mathcal{C}^\theta_{\mathrm{con}}, \mathcal{U}^\theta)]^2 = [\mathcal{F}(\mathcal{C}_{\mathrm{con}},\id)]^2 = \min_{\ket{\Phi}} \bra{\Phi} \left[\mathcal{C}_{\mathrm{con}}(\ketbra{\Phi}{\Phi})\right]\ket{\Phi}\\ &=& \min_{\ket{\Phi}} \bra{\Phi} \left[\intd{\theta'} {\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta'}(\ketbra{\Phi}{\Phi})}\right] \ket{\Phi} \geq \intd{\theta'}{\min_{\ket{\Phi}} \bra{\Phi} \left[\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta'} (\ketbra{\Phi}{\Phi})\right] \ket{\Phi}}\\ &=& \intd{\theta'}{\left[\mathcal{F}(\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta'},\id)\right]^2} \geq \min_\theta \left[\mathcal{F}(\mathcal{C}^{\theta},\mathcal{U}^{\theta})\right]^2 = \min_\theta \left[1-\d{\mathcal{C}^\theta,\mathcal{U}^\theta}^2\right]^2. \end{eqnarray} We can upper-bound the QFI of the output state $\mathcal{C}^\theta_{\mathrm{con}}(\rho)$ of the $U(1)$-convolved channel $\mathcal{C}^\theta_{\mathrm{con}}$ by the QFI of $\mathcal{C}^\theta(\rho)$ maximized over $\theta$, where $\rho$ is an arbitrary input state. 
Namely, \begin{eqnarray} \label{eq_fisher_con} F[\mathcal{C}^\theta_{\mathrm{con}}(\rho)] &=& F\left[\intd{\theta'}{\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta+\theta'}(\rho)}\right] \leq \intd{\theta'}{F\left[\mathcal{U}^{-\theta'}\circ\mathcal{C}^{\theta+\theta'}(\rho)\right]} = \intd{\theta'}{F\left[\mathcal{C}^{\theta+\theta'}(\rho)\right]} \leq \max_\theta F\left[\mathcal{C}^{\theta}(\rho)\right], \end{eqnarray} where in the first inequality we use convexity of the QFI, and then in the following equality we use invariance of the QFI under the $\theta$-independent unitary rotation channel $\mathcal{U}^{-\theta'}$. Now, we are going to derive the following lower bound on the QFI of $\mathcal{C}^\theta_{\mathrm{con}}(\ketbra{\psi}{\psi})$ \begin{equation} \label{eq_fisher_psi} F[\mathcal{C}^\theta_{\mathrm{con}}(\ketbra{\psi}{\psi})] \geq \left[1 - 8\d{\mathcal{C}^\theta_{\mathrm{con}},\mathcal{U}^\theta}^2\right] (t_+ - t_-)^2, \end{equation} which combined with Eqs.~\eqref{eq_fisher_con}~and~\eqref{eq_dist_con} leads to the inequality in our lemma. Let $\ket{t_-}, \ket{t_1}, \ldots, \ket{t_\tau},\ket{t_+}$ denote all mutually orthogonal and normalized eigenvectors of $T$, whose corresponding eigenvalues form a non-decreasing sequence, i.e., $t_- \leq t_1 \leq \ldots \leq t_\tau \leq t_+$. Let $\mathcal{H}_\pm = \t{span}(\ket{t_+},\ket{t_-})$ be the space spanned by $\ket{t_+}$ and $\ket{t_-}$ and $\mathcal{H}^\perp_\pm = \t{span}(\ket{t_1},\ldots,\ket{t_\tau})$ be the space orthogonal to $\mathcal{H}_\pm$. We define a decohering channel $\mathcal{D}$ as follows \begin{equation} \mathcal{D}(\cdot) = \Pi_\pm \cdot \Pi_\pm + \Pi^\perp_\pm \cdot \Pi^\perp_\pm, \end{equation} where $\Pi_\pm$ and $\Pi^\perp_\pm$ denote the projectors onto $\mathcal{H}_\pm$ and $\mathcal{H}^\perp_\pm$, respectively. Note that $\mathcal{D}$ does not depend on the parameter $\theta$ and $\mathcal{D}\circ \mathcal{U}^\theta = \mathcal{U}^\theta \circ \mathcal{D}$. 
Lastly, we choose an orthonormal basis $\mathcal{B} = \{\ket{\psi}, \ket{\psi^\perp}, \ket{t_1}\ldots, \ket{t_\tau}\}$, where $\ket{\psi^\perp} = \frac{\ket{t_+}-\ket{t_-}}{\sqrt{2}}$. Let $f$ denote the fidelity between $\mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi})$ and $\ket{\psi}$, i.e., $f = f[\mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi}),\ketbra{\psi}{\psi}]$. Then, we have \begin{eqnarray} \label{eq_fid_con} f^2 &=& \{f[\mathcal{U}^\theta\circ\mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi}),\mathcal{U}^\theta(\ketbra{\psi}{\psi})] \}^2 \geq [\mathcal{F}(\mathcal{C}^\theta_{\mathrm{con}},\mathcal{U}^\theta)]^2 = \left[1-\d{\mathcal{C}^\theta_{\mathrm{con}},\mathcal{U}^\theta}^2\right]^2 \geq 1-2\d{\mathcal{C}^\theta_{\mathrm{con}},\mathcal{U}^\theta}^2. \end{eqnarray} Also, we can express $\mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi})$ in a generic block form in the basis $\mathcal{B}$ as \begin{equation} \mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi}) = \mat{c|c}{ (f^2+q-f^2 q)\rho_{\pm} & *\\ \hline * & (1-q)(1-f^2) \rho^{\perp}_{\pm}},\quad \rho_{\pm} = \frac{1}{f^2+q-f^2q}\mat{cc}{ f^2 & c \\ c^* & q-f^2 q}, \end{equation} where $q\in[0,1]$, $c$ is some complex number and the density matrices $\rho_{\pm}$ and $\rho^\perp_{\pm}$ are supported on $\mathcal{H}_\pm$ and $\mathcal{H}^\perp_\pm$, respectively. Note that $\rho_{\pm}$ and $\rho^\perp_{\pm}$ do not depend on the parameter $\theta$. Let us write the generator $T$ in the basis $\mathcal{B}$ as \begin{equation} T = T_\pm \oplus T^\perp_\pm,\quad T_{\pm} = \frac{1}{2}\mat{cc}{t_+ + t_- & t_+ - t_- \\ t_+ - t_- & t_+ + t_-},\quad T_\pm^\perp = \t{diag}(t_1,\ldots,t_\tau), \end{equation} where operators $T_\pm$ and $T^\perp_\pm$ act on $\mathcal{H}_\pm$ and $\mathcal{H}^\perp_\pm$. 
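The change of basis just introduced can be verified directly: in the basis $\{\ket{\psi},\ket{\psi^\perp}\}$, the diagonal generator $\t{diag}(t_+,t_-)$ takes the stated form of $T_\pm$ (a numerical sketch with arbitrary test eigenvalues):

```python
import numpy as np

tp, tm = 1.5, -0.5  # arbitrary test values for t_+ and t_-
# Columns of H are |psi> and |psi_perp> expressed in the |t_+>, |t_-> basis:
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T_pm = H.T @ np.diag([tp, tm]) @ H        # generator in the {psi, psi_perp} basis
expected = 0.5 * np.array([[tp + tm, tp - tm],
                           [tp - tm, tp + tm]])
print(np.allclose(T_pm, expected))  # True
```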
Then, we have \begin{eqnarray} F[\mathcal{C}^\theta_{\mathrm{con}}(\ketbra{\psi}{\psi})] &=& F[\mathcal{U}^\theta\circ\mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi})] \geq F[\mathcal{D}\circ \mathcal{U}^\theta\circ\mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi})] = F[\mathcal{U}^\theta\circ\mathcal{D}\circ \mathcal{C}_{\mathrm{con}}(\ketbra{\psi}{\psi})] \\ &=& F\left[(f^2+q-f^2 q) e^{-i\theta T_\pm} \rho_{\pm} e^{i\theta T_\pm} \oplus (1-q)(1-f^2) e^{-i\theta T^\perp_\pm} \rho^{\perp}_{\pm} e^{i\theta T^\perp_\pm}\right]\\ &=& (f^2+q-f^2 q) F\left(e^{-i\theta T_\pm} \rho_{\pm} e^{i\theta T_\pm}\right) + (1-q)(1-f^2) F\left(e^{-i\theta T^\perp_\pm} \rho^{\perp}_{\pm} e^{i\theta T^\perp_\pm}\right)\\ &\geq& (f^2+q-f^2 q) F\left(e^{-i\theta T_\pm} \rho_{\pm} e^{i\theta T_\pm}\right), \end{eqnarray} where we use the facts that: (i) the QFI does not increase under the parameter-independent channel $\mathcal{D}$ and (ii) for any state with a block-diagonal structure, which is invariant under the parameter change, the QFI is a weighted sum of the QFI corresponding to the respective blocks. Let $\ket{i}$ be an eigenvector of $\rho_\pm$ corresponding to an eigenvalue $\lambda_i$ for $i=1,2$. Then, using a general formula for the QFI in a unitary parameter-encoding model we obtain \begin{equation} F(e^{-i \theta T_\pm } \rho_\pm e^{i\theta T_\pm}) = \sum_{i,j=1}^2 \frac{2 \lvert\bra{i} T_\pm \ket{j}\rvert^2(\lambda_i - \lambda_j)^2}{\lambda_i + \lambda_j} = \left(\frac{f^2-q+f^2 q}{f^2+q-f^2 q}\right)^2 (t_+ - t_-)^2, \end{equation} which leads to the following lower-bound \begin{equation} F[\mathcal{C}^\theta_{\mathrm{con}}(\ketbra{\psi}{\psi})] \geq \frac{(f^2-q+f^2 q)^2}{f^2+q-f^2 q} (t_+ - t_-)^2. \end{equation} The above inequality has to hold for any $q\in[0,1]$. 
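The closed form for $F(e^{-i\theta T_\pm}\rho_\pm e^{i\theta T_\pm})$ obtained above can be spot-checked numerically in the illustrative case $c=0$, where the eigenbasis of $\rho_\pm$ is simply $\{\ket{\psi},\ket{\psi^\perp}\}$ (the parameter values below are arbitrary test choices):

```python
import numpy as np

f, q = 0.9, 0.4            # arbitrary test values
tp, tm = 1.5, -0.5         # arbitrary test eigenvalues t_+, t_-
norm = f**2 + q - f**2 * q
rho = np.diag([f**2, q * (1 - f**2)]) / norm   # rho_pm with c = 0
T = 0.5 * np.array([[tp + tm, tp - tm], [tp - tm, tp + tm]])  # T_pm

# General eigendecomposition formula F = sum_ij 2|<i|T|j>|^2 (l_i-l_j)^2/(l_i+l_j):
lam, vec = np.linalg.eigh(rho)
F = 0.0
for i in range(2):
    for j in range(2):
        if lam[i] + lam[j] > 0:
            Tij = vec[:, i].conj() @ T @ vec[:, j]
            F += 2 * abs(Tij)**2 * (lam[i] - lam[j])**2 / (lam[i] + lam[j])

closed = ((f**2 - q + f**2 * q) / norm)**2 * (tp - tm)**2
print(np.isclose(F, closed))  # True
```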
In particular, by setting $q=1$ (which also happens to maximize the right-hand side of the above inequality for $f\geq \frac{1}{2}$) and using Eq.~\eqref{eq_fid_con} we finally arrive at \begin{equation} F[\mathcal{C}^\theta_{\mathrm{con}}(\ketbra{\psi}{\psi})] \geq (2f^2-1)^2 (t_+ - t_-)^2 \geq (4f^2-3) (t_+ - t_-)^2 \geq \left[1-8\d{\mathcal{C}^\theta,\mathcal{U}^\theta}^2\right] (t_+ - t_-)^2, \end{equation} which gives Eq.~\eqref{eq_fisher_psi} and finishes the proof of our lemma. \end{proof} \section{Derivation of fundamental metrological bounds} \label{sec:bound} For completeness, we provide a derivation of fundamental metrological bounds along the lines developed in~\cite{Escher2011,Demkowicz2012, Kolodynski2013,Demkowicz2014} with a straightforward generalization, which allows the channels acting on different physical subsystems to be different. Similarly as in~\cite{Demkowicz2014}, we consider the most general adaptive strategy when deriving the bound. Consider an action of a quantum channel $\mathcal{N}^\theta$ on some state $\ket{\psi}$, which can be written using its Kraus representation \begin{equation} \rho^\theta = \mathcal{N}^{\theta}(\ketbra{\psi}{\psi}) = \sum_k K_k^\theta \ketbra{\psi}{\psi} K_k^{\theta\dagger}. \end{equation} Let $W^{\theta}$ be a unitary dilation of $\mathcal{N}^\theta$ on an extended Hilbert space $\mathcal{H}\otimes \mathcal{H}_{E}$, where $\mathcal{H}_E$ is an ancillary space, such that \begin{equation} \mathcal{N}^{\theta}(\ketbra{\psi}{\psi}) = \t{Tr}_E\left( W^{\theta} \ketbra{\psi}{\psi} \otimes \ketbra{0}{0}_E W^{\theta \dagger} \right), \end{equation} where $\ket{0}$ is some arbitrary state of the ancillary system. The Kraus operators are related to a particular unitary dilation $W^\theta$ via $K_k^{\theta} = {_E}\bra{k} W^{\theta} \ket{0}_E$. 
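The relation between a dilation and its Kraus operators can be made concrete with a small numerical example (illustrative only; the dephasing channel and the input state are arbitrary choices, not taken from the text). The isometry $V=\sum_k K_k\otimes\ket{k}_E$ implements the action of $W^\theta$ on $\ket{\cdot}\otimes\ket{0}_E$, and tracing out the environment reproduces the channel:

```python
import numpy as np

p = 0.25
K = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]  # qubit dephasing
kets = np.eye(2)
V = sum(np.kron(Kk, kets[:, [k]]) for k, Kk in enumerate(K))  # 4x2 isometry, env second

assert np.allclose(V.conj().T @ V, np.eye(2))  # isometry <=> trace preservation

rho = np.array([[0.7, 0.3], [0.3, 0.3]])
big = V @ rho @ V.conj().T                     # state on system (x) environment
# partial trace over the environment (second tensor factor)
out = big.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(out, sum(Kk @ rho @ Kk.conj().T for Kk in K))
```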
We can write a purification of the output state as \begin{equation} \ket{\Psi^{\theta}} = W^{\theta} \ket{\psi} \otimes \ket{0} = \sum_k K_k^{\theta} \ket{\psi} \otimes \ket{k}. \end{equation} The QFI of $\rho^{\theta}$ is upper-bounded by the QFI of its purification. Using the formula for the QFI of a pure state we have \begin{equation} F(\ket{\Psi^{\theta}}) = 4 \left(\braket{\dot{\Psi}^{\theta}}{\dot{\Psi}^{\theta}} - \left|\braket{\dot{\Psi}^{\theta}}{\Psi^{\theta}}\right|^2 \right) = 4 \left(\bra{\psi}\sum_k \dot{K}_k^{\theta \dagger} \dot{K}_k^{\theta} \ket{\psi} - \left| \sum_k \bra{\psi}\dot{K}^{\theta \dagger}_k K_k^{\theta} \ket{\psi}\right|^2 \right) \leq 4 \bra{\psi}\sum_k \dot{K}_k^{\theta \dagger}\dot{K}_{k}^{\theta} \ket{\psi}, \end{equation} where $|\dot{\Psi}^\theta\rangle$ and $\dot{K}_k^{\theta}$ denote the derivatives of $\ket{\Psi^\theta}$ and $K_k^{\theta}$ with respect to $\theta$. Since the above inequality is valid for any purification of the output state, we can obtain the tightest bound by minimizing the right-hand side over different purifications, which, in turn, is equivalent to minimizing over Kraus representations $\{K_k^\theta\}_k$ of $\mathcal{N}^{\theta}$. Thus, \begin{equation} F(\rho^{\theta}) \leq 4 \min_{\{K_k^{\theta}\}_k}\bra{\psi}\sum_k \dot{K}_k^{\theta \dagger} \dot{K}_k^{\theta} \ket{\psi}. \end{equation} Finally, we may bound the QFI optimized over input states as follows \begin{equation} \label{eq:qfibound} \max_{\ket{\psi}} F(\rho^{\theta}) \leq 4 \max_{\ket{\psi}} \min_{\{K_k^{\theta}\}_k} \bra{\psi}\sum_k \dot{K}_k^{\theta \dagger} \dot{K}_k^{\theta} \ket{\psi} \leq 4 \min_{\{K_k^{\theta} \}_k} \max_{\ket{\psi}} \bra{\psi}\sum_k \dot{K}_k^{\theta \dagger}\dot{K}_k^{\theta} \ket{\psi} = 4 \min_{\{K_k^{\theta}\}_k} \left\| \sum_k \dot{K}_k^{\theta \dagger} \dot{K}_k^{\theta} \right\|, \end{equation} where $\| \cdot \|$ is the operator norm and we use the max-min inequality.
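As a quick numerical illustration of the bound (not part of the derivation), consider qubit dephasing with the canonical Kraus operators $K_0^\theta=\sqrt{1-p}\,U^\theta$, $K_1^\theta=\sqrt{p}\,\sigma_z U^\theta$ and generator $T=\sigma_z/2$; since $\sigma_z$ commutes with $T$, the true QFI can be computed exactly and equals $(1-2p)^2$, while the right-hand side evaluated for this (suboptimal) Kraus representation gives $4\langle\psi|T^2|\psi\rangle=1$:

```python
import numpy as np

def qfi_unitary(rho, T):
    """QFI of exp(-i*theta*T) rho exp(i*theta*T) via the eigen-decomposition formula."""
    lam, vec = np.linalg.eigh(rho)
    F = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            if lam[i] + lam[j] > 1e-12:
                Tij = vec[:, i].conj() @ T @ vec[:, j]
                F += 2 * abs(Tij)**2 * (lam[i] - lam[j])**2 / (lam[i] + lam[j])
    return F

p = 0.2
Z = np.diag([1.0, -1.0])
T = Z / 2
plus = np.array([1.0, 1.0]) / np.sqrt(2)
# dephasing commutes with the phase encoding, so rho_theta = U rho_0 U^dag
rho0 = (1 - p) * np.outer(plus, plus) + p * Z @ np.outer(plus, plus) @ Z

true_qfi = qfi_unitary(rho0, T)            # equals (1-2p)^2 for this model
# For K_k^theta = K_k U(theta): sum_k Kdot^dag Kdot = T (sum_k K^dag K) T = T^2
bound = 4 * plus.conj() @ (T @ T) @ plus   # equals 1 here
assert true_qfi <= bound + 1e-12
assert np.isclose(true_qfi, (1 - 2 * p) ** 2)
```

The gap between $0.36$ and $1$ is exactly what the minimization over Kraus representations in the formula above is designed to close.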
As a result, we have an upper bound on the achievable QFI of the output state as a function of the Kraus operators defining the quantum channel itself. \begin{figure}[t] \centering \includegraphics[width=0.55\columnwidth]{fig_adaptive} \caption{ The most general form of an adaptive metrological scheme, which uses $\r$ parameter-encoding channels $\mathcal{N}_1^\theta,\ldots,\mathcal{N}_r^\theta$. The control unitaries $\{V_i\}_i$ allow one to entangle the probe with an arbitrarily large ancillary system, which together with a collective measurement of the output state $\rho^\theta$ guarantees the full generality of the setup.} \label{fig:adaptive} \end{figure} The power of the above formula only becomes evident when one views the channel $\mathcal{N}^{\theta}$ as composed of $r$ independent channels $\mathcal{N}_1^{\theta},\ldots,\mathcal{N}_r^{\theta}$; see Fig.~\ref{fig:adaptive}. Here we consider the most general adaptive strategy, where each individual channel $\mathcal{N}_i^\theta$ is followed by a unitary $V_i$ that potentially entangles the probe with an arbitrarily large ancillary system. At the end, all the systems are measured using a collective measurement. In particular, if we appropriately choose the unitaries $V_i$ to be swap operators, then this scheme can encompass a parallel scheme, where $r$ different subsystems are initially prepared in some arbitrary (possibly entangled) state and $r$ channels $\mathcal{N}_1,\ldots,\mathcal{N}_r$ act simultaneously on the corresponding subsystems $1,\ldots,r$.
The action of the channel $\mathcal{N}^{\theta}$ can be written as \begin{equation} \rho^{\theta}=\mathcal{N}^{\theta}(\rho) = \sum_{\boldsymbol{k}}K_{\boldsymbol{k}}^{\theta} \rho K_{\boldsymbol{k}}^{\theta \dagger}, \quad K_{\boldsymbol{k}}^{\theta} = K_{\r,k_{\r}}^{\theta}V_{\r} K_{\r-1,k_{\r-1}}^{\theta}V_{\r-1} \ldots K_{1,k_1}^{\theta} V_1\textrm{ for $\boldsymbol{k}=\{k_1,\dots,k_{\r}\}$}, \end{equation} where we use a boldface index $\boldsymbol{k}$ to combine all the indices of Kraus operators of individual channels $\mathcal{N}_i$. Let us for a moment fix the Kraus representation of each channel $\mathcal{N}_i$. For brevity of notation, we write $\mathrm{K}_{i,k} := K_{i,k}^{\theta} V_i$, where we suppress the explicit dependence on $\theta$. According to Eq.~\eqref{eq:qfibound}, the upper bound on the QFI of the output state is \begin{equation} F(\rho^{\theta}) \leq 4 \left\| \sum_{\boldsymbol{k}} \dot{K}_{\boldsymbol{k}}^{\theta \dagger} \dot{K}_{\boldsymbol{k}}^{\theta}\right\| = 4 \left\|\sum_{\boldsymbol{k}} \sum_{i,j=1}^{\r} \mathrm{K}_{1,k_1}^\dagger \dots\dot{\mathrm{K}}_{i,k_i}^\dagger \dots \mathrm{K}_{\r,k_{\r}}^\dagger \mathrm{K}_{\r,k_{\r}} \dots \dot{\mathrm{K}}_{j,k_j} \dots \mathrm{K}_{1,k_1} \right\|. \end{equation} Note the following property of the operator norm \begin{equation} \label{eq:normineq} \left\|{\sum_k L_{k}^{\dagger} A L_k}\right\| \leq \| A \|\, \left\| \sum_k L_k^{\dagger} L_k \right\| \end{equation} valid for any operators $A$ and $L_k$.
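This operator-norm property can be spot-checked numerically with random operators (an illustration only, not a proof):

```python
import numpy as np

# Check || sum_k L_k^dag A L_k || <= ||A|| * || sum_k L_k^dag L_k ||
# for random complex A and L_k.
rng = np.random.default_rng(42)
norm = lambda M: np.linalg.norm(M, 2)   # operator (spectral) norm

for _ in range(100):
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    Ls = [rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)) for _ in range(3)]
    lhs = norm(sum(L.conj().T @ A @ L for L in Ls))
    rhs = norm(A) * norm(sum(L.conj().T @ L for L in Ls))
    assert lhs <= rhs + 1e-9
```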
Making use of the above property together with the triangle inequality and the trace preserving condition $\sum_k \mathrm{K}_{k}^{\dagger} \mathrm{K}_k = \openone$ we get \begin{equation} \label{eq:fishbound1} F(\rho^{\theta}) \leq 4 \left[\sum_i \left\| \sum_{k_i}\dot{\mathrm{K}}_{i,k_i}^{\dagger} \dot{\mathrm{K}}_{i,k_i} \right\| +\sum_{i<j} \left\| \sum_{k_i,\dots,k_j}\dot{\mathrm{K}}^{k_i \dagger}_{i} \dots {\mathrm{K}}^{k_j \dagger}_{j} \dot{\mathrm{K}}^{k_j}_j \dots {\mathrm{K}}^{k_i}_{i} + h.c.\right\|\right]. \end{equation} When analysing the second summation term, note that the trace preserving condition implies that an operator $iA = \sum_k \dot{\mathrm{K}}^{k \dagger} \mathrm{K}^k$ is anti-hermitian. Then, we have \begin{equation} \left\|\sum_k \dot{\mathrm{K}}^{k \dagger} i A \mathrm{K}^k + h.c.\right\| = \left\| \sum_k (\dot{\mathrm{K}}^k + i \mathrm{K}^k)^\dagger A (\dot{\mathrm{K}}^k + i \mathrm{K}^k) - \dot{\mathrm{K}}^{k \dagger} A \dot{\mathrm{K}}^k - {\mathrm{K}}^{k \dagger} A \mathrm{K}^k \right\|. \end{equation} Using the triangle inequality together with Eq.~\eqref{eq:normineq} we get \begin{equation} \left\|\sum_k \dot{\mathrm{K}}^{k \dagger} i A \mathrm{K}^k + h.c.\right\| \leq 2 \| A\|\left(\left\|\sum_k \dot{\mathrm{K}}^{k \dagger} \dot{\mathrm{K}}^k\right\| + \left\| \sum_k \dot{\mathrm{K}}^{k \dagger} \mathrm{K}^{k} \right\| +1\right). \end{equation} This leads us to the final bound on the QFI, where we include a minimization over the Kraus representations \begin{equation} \label{eq:unversalbound} F(\rho^{\theta}) \leq 4 \min_{\{K_{i,k}^{\theta}\}} \left[\sum_i \|\alpha_i\| + \sum_{i \neq j} \|\beta_i\| (\|\alpha_j\| + \| \beta_j\| + 1)\right], \quad \alpha_i = \sum_k \dot{K}_{i,k}^{\theta \dagger}\dot{{K}}_{i,k}^{\theta}, \quad \beta_i= \sum_{k} \dot{{K}}_{i,k}^{\theta \dagger} K_{i,k}^{\theta}.
\end{equation} Note that we can substitute $\mathrm{K}_{i,k}$ with $K_{i,k}^{\theta}$ as this replacement amounts to multiplying Kraus operators by unitary operations, which does not affect the operator norms in Eq.~\eqref{eq:unversalbound}. For typical models of decoherence, such as the erasure or depolarizing noise, one can find Kraus representations $\{ K^\theta_{i,k}\}_k$ satisfying $\beta_i=0$, which leads to powerful upper bounds that scale linearly in the number of subsystems, i.e., \begin{equation} \label{eq:unversalboundbeta0} F(\rho^{\theta}) \leq 4 \sum_i \min_{\{K_{i,k}^{\theta}\}_k, \beta_i=0} \|\alpha_i\| \leq 4 \r \max_i \min_{\{K_{i,k}^{\theta}\}_k, \beta_i=0} \|\alpha_i\|. \end{equation} This, in turn, indicates the impossibility of achieving the Heisenberg scaling of precision \cite{Fujiwara2008, Escher2011,Demkowicz2012, Kolodynski2013,Demkowicz2014}. Note that we can also arrive at the same conclusion as long as $\beta_i = 0$ for all but a constant number of channels. In order to obtain an explicit bound one needs to perform a minimization over Kraus representations $\{K_{i,k}^\theta\}_k$ for each individual channel $\mathcal{N}_i$. This can be easily done numerically using a semi-definite program as described in \cite{Demkowicz2012, Kolodynski2013, Demkowicz2014, Demkowicz2017, Zhou2018, Zhou2020}. Here, we will not discuss this optimization procedure; rather, we simply provide the optimal Kraus representations for the erasure and depolarizing noise. We remark that since the bound is given in terms of a minimization over Kraus representations, any Kraus representation provides a valid bound. Note that in our work we consider the adaptive scheme as depicted in Fig.~\ref{fig_schemes}(a) with the total number of channels $r = n m$, where $n$ represents the number of physical subsystems on which respective parameter-encoding channels act in parallel, and $m$ is the number of adaptive steps.
As such, Fig.~\ref{fig_schemes}(a) is a special case of the more general Fig.~\ref{fig:adaptive}. \subsection{Erasure noise} In order to describe the erasure channel $\mathcal{N}_e$ acting on a $d$-dimensional subsystem it is convenient to introduce an additional level of the subsystem labeled by $\ket{d+1}$. This way we can formally work with the same number of subsystems and the fact that a given subsystem is lost is simply indicated by its internal state being $\ket{d+1}$. We therefore work using the basis $\{\ket{1},\dots,\ket{d}, \ket{d+1}\}$. The canonical Kraus operators for the erasure noise are \begin{equation} K_0 = \sqrt{1-p} \sum_{i=1}^d \ketbra{i}{i},\quad K_{d+1} =\ketbra{d+1}{d+1},\quad K_k = \sqrt{p} \ketbra{d+1}{k}\textrm{ for $k=1,\dots, d$}, \end{equation} where $p$ is the loss probability, $K_0$ represents the event of no loss, $K_{k}$ represents the event when the subsystem is in a state $\ket{k}$, which is lost, and $K_{d+1}$ represents simply the fact that a lost subsystem remains lost. When the unitary parameter encoding with a generator $T = \t{diag}(t_1,\dots,t_d)$ is additionally considered we get \begin{equation} K_k^{\theta} = K_k e^{-i T \theta}. \end{equation} Note that the generator $T$ should be formally understood as $T \oplus 0$, where $0$ stands for the lack of phase encoding on the lost subsystem. Without loss of generality we assume that the generator $T$ is shifted in a way that the maximal $t_+$ and minimal $t_-$ eigenvalues have the same absolute values, i.e., $t_+ = -t_- = \Delta T/2$. The above Kraus representation is not helpful with obtaining strong metrological bounds as $\beta \neq 0$ and the bound would scale quadratically in the number of subsystems. However, we can consider an equivalent Kraus representation \begin{equation} \tilde{K}_0^{\theta} = K_0^{\theta}, \quad \tilde{K}_k^{\theta} = e^{i c_k \theta} K_k^{\theta}, \quad \tilde{K}_{d+1}^{\theta} = K_{d+1}^{\theta}. 
\end{equation} When evaluated at $\theta = 0$, this representation results in operators $\alpha$ and $\beta$ of the form \begin{equation} \beta = i \sum_{k=1}^{d+1} (t_k-p c_k) \ketbra{k}{k}, \quad \alpha = \sum_{k=1}^{d+1} [(1-p) t_k^2+p(c_k-t_k)^2]\ketbra{k}{k}. \end{equation} In order to have $\beta=0$ we need to choose $c_k = t_k/p$. Then, the diagonal elements of $\alpha$ are $t_k^2 (1-p)/p$ and the operator norm of $\alpha$ corresponds to their largest absolute value, which is $t_+^2 (1-p)/p = (\Delta T)^2 (1-p)/(4p)$. Thus, when we consider $r$ erasure channels, each with the corresponding loss probability $p_i$, the QFI of the output state of an arbitrary adaptive strategy that involves using these $r$ channels is upper-bounded by \begin{equation} \label{eq:qfierasure} F(\rho^{\theta}) \leq F^{\uparrow}_e = \sum_{i=1}^{\r} (\Delta T_i)^2 \frac{1-p_i}{p_i}, \end{equation} where $\Delta T_i$ is the spectrum spread of the generator $T_i$ acting on the subsystem $i$. \subsection{Depolarizing channel} The simplest way to derive a bound for the depolarizing channel $\mathcal{N}_{d}$ is to note that it can be regarded as a composition of the erasure channel $\mathcal{N}_{e}$ described in the previous subsection with a channel that takes the state $\ket{d+1}$ and returns the maximally mixed state on the $d$-dimensional Hilbert space spanned by $\ket{1},\dots,\ket{d}$ while on other states it acts as the identity channel. Since the QFI does not increase under the action of any parameter-independent channels, any bound derived for the erasure channel with a given loss probability $p$ is valid for the depolarization channel, which replaces the state with the maximally mixed state with probability $p$. However, such a bound is not tight. To obtain a better bound one needs to perform a minimization over Kraus representations of the actual channel. Below we provide an analytical form of the optimal bound for $d=2$. 
This tighter bound may be used in Eq.~\eqref{eq_thm_bound} in the main text provided physical subsystems $A_i$ correspond to qubits. Kraus operators of the qubit isotropic depolarizing model are \begin{equation} K_{0}=\sqrt{1- \frac{3p}{4}} \openone, \quad K_k=\sqrt{\frac{p}{4}}\,\sigma^{k} \textrm{ for $k=1,2,3$}, \end{equation} where $\sigma^k$ are Pauli matrices, and $1-p$ is the effective shrinking factor of the qubit's Bloch vector. We consider the unitary encoding operation, which rotates the qubit around the $z$ axis, i.e., $U^{\theta} = \exp(- i T \theta )$ with the generator $T = \sigma^Z \Delta T/2$, where $\Delta T$ represents the spectrum spread of the generator. The effective Kraus operators are $K_k^{\theta} = K_k e^{-i T \theta}$. This representation again does not yield $\beta=0$, and we need to find another representation to obtain a tighter bound. The optimal Kraus representation has the following structure \begin{equation} \mat{c}{\tilde{K}_0^{\theta}\\ \tilde{K}_1^{\theta}\\ \tilde{K}_2^{\theta}\\ \tilde{K}_3^{\theta} } = \mat{cccc}{\cos (a\theta)& 0 & 0 & i\sin(a\theta) \\ 0& \cos(b\theta) & - \sin (b\theta) & 0 \\ 0& \sin (b \theta) & \cos(b\theta) & 0 \\ i\sin(a\theta)& 0 & 0 & \cos(a\theta)} \mat{c}{{K}_0^{\theta}\\ {K}_1^{\theta}\\ {K}_2^{\theta}\\ {K}_3^{\theta}}. \end{equation} Then, the operators $\alpha$ and $\beta$ are \begin{equation} \alpha =\frac{1}{4}\left((\Delta T)^2 +\Delta T\left(2 b p - 2a \sqrt{p(4-3p)}\right)+ 2 b^2p - 2a^2(p-2)\right) \openone, \quad \beta = \frac{i}{2}\left( \Delta T + b p -a \sqrt{p(4 - 3p)}\right) \sigma^Z. \end{equation} In order to have $\beta=0$ we set $a= (b p +\Delta T)/\sqrt{p(4-3p)}$. Then, the minimal value of the norm of $\alpha$ is attained at $b=(2-p)\Delta T/[2p(2p-3)]$ and equals $\| \alpha \| = (\Delta T)^2(1-p)^2/[2p(3-2p)]$.
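Both noise constructions can be verified numerically. The following sketch (an illustrative check with arbitrary test values $\Delta T=2$, $p=0.3$; not part of the derivation) confirms that the chosen Kraus representations give $\beta=0$, with $\|\alpha\|=(\Delta T)^2(1-p)/(4p)$ for erasure and $\|\alpha\|=(\Delta T)^2(1-p)^2/[2p(3-2p)]$ for the qubit depolarizing channel:

```python
import numpy as np

dT, p = 2.0, 0.3

def norm(A):
    return np.linalg.norm(A, 2)  # operator norm

# --- erasure (qubit + flag level |3>) ---
T = np.diag([dT / 2, -dT / 2, 0.0])
ket = np.eye(3)
K = [np.sqrt(1 - p) * np.diag([1, 1, 0]),
     np.sqrt(p) * np.outer(ket[2], ket[0]),
     np.sqrt(p) * np.outer(ket[2], ket[1]),
     np.outer(ket[2], ket[2])]
c = [0.0, (dT / 2) / p, (-dT / 2) / p, 0.0]    # c_k = t_k / p (zero for K_0, K_{d+1})
Kdot = [1j * (c[k] * K[k] - K[k] @ T) for k in range(4)]
alpha = sum(Kd.conj().T @ Kd for Kd in Kdot)
beta = sum(Kd.conj().T @ K[k] for k, Kd in enumerate(Kdot))
assert norm(beta) < 1e-12
assert np.isclose(norm(alpha), dT**2 * (1 - p) / (4 * p))

# --- qubit depolarizing, with the optimal mixing of Kraus operators ---
s = [np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]
Tq = dT / 2 * s[3]
K = [np.sqrt(1 - 3 * p / 4) * s[0]] + [np.sqrt(p / 4) * s[k] for k in (1, 2, 3)]
b = (2 - p) * dT / (2 * p * (2 * p - 3))
a = (b * p + dT) / np.sqrt(p * (4 - 3 * p))
Mdot = np.array([[0, 0, 0, 1j * a], [0, 0, -b, 0], [0, b, 0, 0], [1j * a, 0, 0, 0]])
Kdot = [sum(Mdot[i, j] * K[j] for j in range(4)) - 1j * K[i] @ Tq for i in range(4)]
alpha = sum(Kd.conj().T @ Kd for Kd in Kdot)
beta = sum(Kd.conj().T @ K[i] for i, Kd in enumerate(Kdot))
assert norm(beta) < 1e-10
assert np.isclose(norm(alpha), dT**2 * (1 - p)**2 / (2 * p * (3 - 2 * p)))
```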
Thus, given $\r$ different subsystems the fundamental bound on estimating $\theta$ in the presence of the depolarizing noise is \begin{equation} F[\rho^{\theta}] \leq F^{\uparrow}_d = \sum_{i=1}^{\r} (\Delta T_i)^2 \frac{2 (1-p_i)^2}{p_i(3-2p_i)}, \end{equation} where $p_i$ is the strength of the depolarizing noise on the subsystem $i$. For any $p_i \in (0,1)$ the right-hand side is smaller than the right-hand side of Eq.~\eqref{eq:qfierasure}, and hence the depolarizing bound is tighter than the erasure bound. \twocolumngrid
\section{Introduction} First principles electronic structure methods have been used to describe and explain a wide range of properties for different condensed matter systems. A critical step is the accurate determination of the ground state atomic structure, since many important properties of a material can change dramatically depending on the structure. Because of the balance between accuracy and computational cost, density functional theory (DFT) has become a commonly used method to find equilibrium geometries of both molecules and extended systems. The primary reason for this is the availability of forces with little extra computational cost over the energy calculation. Using a typical quasi-Newton minimization algorithm, the local minimum of a potential energy surface can be found in ${\cal O}(N_{DOF})$ steps, where $N_{DOF}$ is the number of degrees of freedom to be optimized. This favorable scaling has made it possible to find minima for many systems of interest. However, in many situations, including transition metals, excited states, and weak binding, current density functional theories may not be accurate enough even for structures, and more accurate post-Hartree-Fock methods that scale as ${\cal O}(N_e^{5-7})$, where $N_e$ is the number of electrons, can often be too computationally expensive. Quantum Monte Carlo (QMC), a stochastic approach to solving the many-body Schr\"odinger equation, offers favorable scaling to obtain the total energy, ${\cal O}(N_e^{2-3})$, and has been shown to provide near chemical accuracy in many different systems\cite{jindra_feo, jeff_benchmark}. However, there are two major challenges in using QMC methods to obtain high precision minimum energy structures. The first is that the techniques so far proposed to calculate forces in diffusion Monte Carlo all have large variances and error terms that depend on the quality of the trial wave function, which is often poor in systems where DFT fails and one would like to apply QMC methods.
In fact, despite much work in recent years\cite{chiesa_force,assaraf_force, mella_force, badinski_force1, badinski_force2,filippi_correlated}, QMC forces using the highly accurate diffusion Monte Carlo method have not been applied to more than a few degrees of freedom, although the simpler variational Monte Carlo technique has been applied to more\cite{attaccalite_md}. The second challenge is the stochastic nature of Monte Carlo algorithms, which provides uncertainty in any estimator that decreases only as the square root of the computer time. Reducing the uncertainty enough to resolve the minimum structure accurately using forces or total energies is often prohibitively expensive computationally. Methods such as the stochastic gradient approximation\cite{monro, harju_sga} that are able to operate in the presence of noise suffer from this large uncertainty in the forces. As a result, there are no geometry optimizations of more than three\cite{lester_fit} degrees of freedom, to our knowledge. In this article, we describe an algorithm that uses the already-accurate total energies from QMC to obtain minimum energy geometries with well-defined stochastic uncertainties for multiple degrees of freedom. The algorithm consists of two major parts. One is a sequence of minimizations along one-dimensional lines. The use of 1D minimizations allows us to use efficient fits to determine the minimum precisely. The second part is a quadratic fit of the many-dimensional energy surface to determine the new search directions. Both of these parts are completely aware of the stochastic uncertainty present in the evaluations of the energy, an important feature obtained by the use of Bayesian inference. We apply this approach to the hydrogen-transfer model of H$_2$O-OH$^-$ and show that our method can help clarify challenging problems that require accurate calculation of the electronic ground state. DFT and Hartree-Fock calculations were performed using the GAMESS\cite{gamess} package.
We used soft pseudopotentials\cite{Dolg_psp_qmc} to remove the core electrons and a TZP-quality contracted Gaussian basis to represent the one-particle orbitals. All-electron calculations were performed using the aug-cc-pVQZ\cite{dunning_basis} basis to check the basis set and pseudopotential errors. All QMC calculations were performed using the QWalk\cite{qwalk} program. For energies, we used fixed-node diffusion Monte Carlo (DMC) with a time step of 0.02--0.05 Hartree$^{-1}$, converged for the properties of interest. The trial function was a Slater-Jastrow function with hybrid PBE0\cite{pbe0} orbitals, one- and two-body terms in the Jastrow, and further checks of the localization error with a three-body Jastrow factor. Further details can be found in e.g. Refs.~\cite{Foulkes_review, qwalk}. \begin{figure} \includegraphics[width=\columnwidth]{algorithmv2} \caption{(color online) An outline of the algorithm. See text for details.} \label{fig:algorithm} \end{figure} Our minimization algorithm is similar in spirit to many successful minimization algorithms in that it is based on minimizing along directions and updating an estimate of the Hessian matrix to find the diagonal directions. However, in our approach the Hessian matrix is inferred using the Bayesian interpretation of probability, which has the effect of making the algorithm very robust to stochastic uncertainty. Here we present the case when only the value can be calculated, but gradients of the objective function can be included easily if they are available, increasing the efficiency. By using inference, we are able to make very efficient use of the available data to find the Hessian matrix without having to reduce the stochastic uncertainties to small values that cost large amounts of computational time to achieve. The kernel of the algorithm is a sequence of line minimizations. We show that a sequence of uncertain line minimizations will obtain the true minimum on average.
Such a sequence can then be viewed as a generator of random variables whose expectation value is the true minimum. Suppose that on the first step we start with the approximate minimum ${\bf x}^{(0)}$. We then define $\delta {\bf x}^{(0)}={\bf x}^{(0)}-{\bf m}$, where ${\bf m}$ is the vector of the unknown true minima. We wish to design our process such that the expectation value of $\delta {\bf x}^{(n)}$ equals zero as $n \to \infty$. Within the quadratic region around the minimum, the potential energy surface is given by \begin{equation} E({\bf x})=\delta {\bf x}^T H \delta {\bf x}+E_0, \end{equation} where $H$ is the symmetric Hessian matrix and $E_0$ is the minimum total energy. On minimizing along each direction $i$, there are two components of the distance from the true minimum. The first is deterministic and comes about from non-zero $\delta x_j$ for $j\neq i$. The second is the stochastic error from the uncertainty in the line minimization, which we can estimate using the Bayesian techniques above. We find that \begin{equation} \delta x_i^{(1)}=\chi_i^{(1)}-\sum_{j \neq i}\frac{H_{ij}}{H_{ii}}\delta x_j^{(0)}, \label{eqn:minimization_error} \end{equation} where $\chi_i^{(1)}$ is a random number. For the $n$th iteration, \begin{equation} \delta x_i^{(n)}= \chi_i^{(n)}-\sum_{j \neq i} \frac{H_{ij}}{H_{ii}} \chi_j^{(n-1)} +\sum_{j \neq i} \sum_{k \neq j} \frac{H_{ij}H_{jk}}{H_{ii}H_{jj}} \chi_k^{(n-2)}+\ldots \label{eqn:avg} \end{equation} The minimum along each line is found using line fitting, as described in the EPAPS document. This allows for a very efficient determination of the minimum. One can see from Eqn~\ref{eqn:avg} that the smaller the off-diagonal matrix elements of the Hessian are, the less interference directions have on each other--for a diagonal Hessian, only one minimization in each direction is necessary once we are in the quadratic regime.
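The recursion above can be illustrated with a toy simulation (our own sketch, not the authors' implementation): coordinate-wise line minimizations on a 2D quadratic, with zero-mean Gaussian noise added to each 1D minimum, drift toward the true minimum, and averaging the iterates recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[2.0, 0.6], [0.6, 1.5]])   # positive-definite Hessian (arbitrary)
m = np.array([0.3, -0.7])                # true minimum (arbitrary)
sigma = 0.05                             # std. dev. of the line-minimum noise chi

x = np.array([2.0, 2.0])
iterates = []
for sweep in range(400):
    for i in range(2):
        j = 1 - i
        # exact 1D minimizer along coordinate i: x_i = m_i - (H_ij/H_ii)(x_j - m_j)
        x[i] = m[i] - H[i, j] / H[i, i] * (x[j] - m[j]) + sigma * rng.normal()
    iterates.append(x.copy())

est = np.mean(iterates[50:], axis=0)     # discard transient, average the rest
assert np.all(np.abs(est - m) < 0.02)
```

The averaging step works because, by the recursion above, the stationary iterates scatter around ${\bf m}$ with zero mean.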
We can use the information from the line minimizations to estimate the Hessian as the minimization proceeds. We parameterize the quadratic region with a set of parameters ${\bf c}_Q$, including the elements of the Hessian matrix, the minima, and the minimum energy. After performing the line fits, we have distributions of line fit parameters, each given by ${\bf c}_\ell$ for line $\ell$. Using Bayes' theorem, the likelihood function $L$ of the quadratic parameters is given by \begin{equation} L({\bf c}_Q |D ) \propto p(D|{\bf c}_Q) = \prod_\ell \int p(D_\ell | {\bf c}_\ell) p({\bf c}_\ell | {\bf c}_Q) d {\bf c}_\ell , \end{equation} where $D$ is the set of function evaluations and stochastic uncertainties given by e.g. QMC and each $D_\ell$ is the subset of function evaluations used to minimize along a given line $\ell$. Since $p({\bf c}_\ell | {\bf c}_Q)$ is not based on stochastic data, it is a delta function that forces ${\bf c}_\ell$ to be consistent with ${\bf c}_Q$ as follows. Since in the quadratic region, $E({\bf x})=\delta {\bf x}^T H \delta {\bf x} +E_0$, minimizing along a direction ${\bf v}$ from a starting position ${\bf x}_0$ gives the one-dimensional function of $t$, the position along the line: \begin{equation} E(t)=({\bf x}_0+t{\bf v}-{\bf m})^T H ({\bf x}_0+t{\bf v}-{\bf m})+E_0.
\end{equation} This gives the following constraints on each set of line parameters for the minimum, curvature, and minimum function value ${\bf c}_\ell$: \begin{equation} t_m=- \frac{ {\bf v}^T H ({\bf x}_0-{\bf m}) + ({\bf x}_0-{\bf m})^T H {\bf v} }{ 2 {\bf v}^TH{\bf v} } \end{equation} \begin{equation} \left.\frac{d^2E}{dt^2} \right|_{t_m} = 2 {\bf v}^T H {\bf v} \end{equation} \begin{equation} E(t_m)= ({\bf x}_0+t_m{\bf v} -{\bf m})^T H ({\bf x}_0 + t_m {\bf v} - {\bf m}) + E_0 \end{equation} The net result of this transformation is a set of parameters that includes the Hessian matrix, the location of the minimum, the objective function at the minimum, and the parameters used for the line fits above, subject to the three constraints implied by the ${\bf c}_Q$'s. We examine the properties of this probability distribution function in the EPAPS material. It turns out that the maximum likelihood estimator is typically accurate enough to determine the Hessian, which is then diagonalized to obtain new search directions. We outline a single iteration of the algorithm in Fig~\ref{fig:algorithm}. This step operates in two principal regimes. The first is far away from the minimum. In this regime, the Hessian inference method behaves similarly to a deterministic direction set method, with the directions being determined by the Hessian inference. The second regime is when the calculation has mostly converged and the deterministic error in Eqn~\ref{eqn:minimization_error} is small compared to the stochastic error. In this regime, deterministic direction choices are particularly useless, but the inferred Hessian method is able to automatically account for the stochastic error. Once in this ``stochastic regime,'' we can use Eqn~\ref{eqn:avg} to justify averaging the ${\bf x}^{(n)}$'s to obtain a more precise estimate of the minimum.
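The line-minimum and curvature constraints above are easy to verify numerically; the following sketch (with a randomly generated positive-definite Hessian, illustration only) checks that $t_m$ minimizes $E(t)$ and that the second derivative equals $2{\bf v}^TH{\bf v}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
H = A @ A.T + 3 * np.eye(3)            # random positive-definite Hessian
x0, m, v = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
E0 = -1.2

def E(t):
    d = x0 + t * v - m
    return d @ H @ d + E0

# t_m = -v^T H (x0 - m) / (v^T H v) for symmetric H
tm = -v @ H @ (x0 - m) / (v @ H @ v)
assert E(tm) <= min(E(tm - 1e-3), E(tm + 1e-3))
# finite-difference curvature matches 2 v^T H v
curv = (E(tm + 1e-3) - 2 * E(tm) + E(tm - 1e-3)) / 1e-6
assert np.isclose(curv, 2 * v @ H @ v, rtol=1e-4)
```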
For stochastic functions, the performance of the Hessian inference is much higher than that of traditional methods such as Powell's method\cite{powell}, as we show in Fig~\ref{fig:performance}. Since Powell's method uses the concept of conjugate directions to find the search directions, the uncertain minimizations cause the algorithm to fail quickly. Even when the directions are reset every sweep, they can become linearly dependent within a single sweep, causing a high failure rate for even moderate dimensionality. \begin{figure} \includegraphics[width=\columnwidth]{distributions} \caption{(color online) Estimates of reliability for the Hessian inference method versus Powell line minimization. There are several runs for each number of degrees of freedom, each with a different potential energy surface and relative starting position. The potential surface was a randomly generated positive definite quadratic surface with a random minimum and a small cubic perturbation. The condition number of the matrices was typically 50 or less. We added a Gaussian random variable to the values of the potential energy surface to simulate uncertain evaluation. Convergence was judged when the RMS distance to the true minimum converged to the stochastic floor of the minimization.} \label{fig:performance} \end{figure} To show the value of optimizing geometries within QMC, we apply the method to the H$_2$O-OH$^-$ complex (Fig~\ref{fig:pes_scan}), which is present in liquid water and important in many systems in condensed matter, biology, and chemistry. The shape of this potential energy surface is crucial to understanding hydrogen transfer in water. In this case, we use our knowledge of the system to choose fixed search directions for efficiency reasons, omitting the Hessian inference method. For systems that are not as easily decomposed, the Hessian inference is invaluable.
It has been noted\cite{perdew_h3o2} that current DFT functionals disagree on the ground state structure of this complex. The potential minima are the non-centrosymmetric structure (A) and the centrosymmetric structure (B). Hartree-Fock and second-order M{\o}ller-Plesset perturbation theory (MP2) find that structure A is lower in energy, with a barrier to transfer the proton. This is the traditional picture of this structure. The local density approximation and generalized gradient approximation (PBE\cite{pbe}) of DFT find that structure B is lower in energy. Using our QMC line minimization method without constraints (seven degrees of freedom) we find structure A to be the minimum-energy structure. The results are summarized in Table~\ref{table:geom}. DMC differs {\em qualitatively} from the DFT results in that structure A is the minimum, and {\em quantitatively} from MP2, since the oxygen-oxygen distance in MP2 is much smaller than in DMC for structure A. \begin{table} \caption{H$_2$O'-OH$^-$. The non-QMC methods are optimized using conjugate gradient routines in GAMESS. The asterisked DMC result corresponds to constraining the DMC geometry search to the symmetry of structure B. } \label{table:geom} \begin{tabular}{lccc} {\bf Method} & {\bf O-O' } & {\bf O'-H } & {\bf Structure type} \\ \hline LDA & 2.448 & 1.224 & B \\ PBE & 2.470 & 1.235 & B\\ MP2 & 2.469 & 1.123 & A\\ DMC & 2.491(2) & 1.111(3) & A \\ DMC* & 2.469(3) & 1.235(2) & B\\ \end{tabular} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{energies} \caption{(color online) The relative DMC energies of geometries obtained by minimizing different potential energy surfaces. The line is a guide to the eye. Stochastic error bars are the size of the symbols. } \label{fig:pes_scan} \end{figure} In Fig~\ref{fig:pes_scan}, we present the DMC energies of several minima.
Without the geometry optimization algorithm, we would use minima from some other methods, for example, PBE and MP2 to obtain structures B and A respectively, then evaluate the total energy using DMC. In this case, since the MP2 approximation obtains a poor O-O distance, DMC predicts a much higher energy for structure A and thus favors the centrosymmetric geometry. However, this is qualitatively incorrect, as we can see from the DMC-optimized structures, which predict structure B to be about 0.015 eV higher in energy than structure A. It is notable that 0.015 eV is a very small energy difference on the scale of chemical bonding, which is why a small error in the geometry by using a lower level theory is enough to reverse the ordering. This energy difference is an upper bound to the barrier to transfer the hydrogen, and thus at room temperature the hydrogen is free to transfer between the oxygen atoms. This may have important implications for the development of effective solvent models for water. In this work, we have viewed the DMC potential energy surface as a given without attempting to improve the accuracy over a simple Slater-Jastrow form; however, our method can also be used to optimize the nodal surface directly and thus further improve the accuracy of the DMC calculation. Similarly, any Monte Carlo method that depends on continuous parameters could put the stochastic line minimization algorithm to use. Stochastic line minimization may find application in experiments whose objective is to obtain a precise minimum in many-dimensional space, but the experiments are difficult to perform precisely. If it is instead easier to perform many iterations of the experiment, then the minimization method that we have outlined may be applicable. 
The scaling for a quadratic potential is ${\cal O}(N_{DOF}^2)$ if only values are used; if even approximate gradients are available, the scaling can be reduced to approximately ${\cal O}(N_{DOF})$, as discussed in the EPAPS document. The algorithm is quite general, requiring only evaluations of the objective function with a known distribution, and can thus be applied to many minimization problems. At its core, it is a method for converting noisy value calculations into precise minima with rigorous error bounds, so it can be applied in any of the many problems where that is necessary. In electronic structure, diffusion Monte Carlo is accurate and applicable to many systems beyond the ground state of molecules, including solid structures and excited states\cite{schautz_excited}. The scaling for DMC geometry optimization is then ${\cal O}(N_{DOF}^{1-2}N_e^{2-3})$, depending on the size of the system and whether gradients are available. This favorable scaling combined with the already known high accuracy of quantum Monte Carlo could open up new levels of accuracy in structure determination in condensed matter and chemical physics. This work was supported by the Department of Energy under grant No. DE-SC0002623 and the NSF under grant No. 0425914. We wish to thank NERSC and Teragrid as well for providing computational resources. We would also like to acknowledge Yosuke Kanai for useful discussions.
\section{Introduction} \label{sec:introduction} \vspace*{-3pt} Egocentric videos, also referred to as first-person videos, have been frequently advocated to provide a unique perspective into object interactions~\cite{kanade2012first,fathi2011understanding, mayol2004interaction}. These capture the viewpoint of an object close to that perceived by the user during interactions. Consider, for example, `turning a door handle'. Similar appearance and motion information will be captured from an egocentric perspective as multiple people turn a variety of door handles. Several datasets have been made available to the research community focusing on object interactions from head-mounted~\cite{de2008guide, fathi2011learning, Fathi2012,Damen2014a, lee2012discovering} and chest-mounted~\cite{pirsiavash2012detecting} cameras. These incorporate ground truth labels that mark the \textit{start} and the \textit{end} of each object interaction, such as `open fridge', `cut tomato' and `push door'. These temporal bounds are the basis for automating action detection, localization and recognition. They are thus highly influential in the ability of an algorithm to distinguish one interaction from another. As temporal bounds vary, the segments may contain different portions of the untrimmed video from which the action is extracted. Humans can still recognize an action even when the video snippet varies or contains only part of the action. Machines are not yet as robust, given that current algorithms strongly rely on the data and the labels we feed to them. Should these bounds be incorrectly or inconsistently annotated, the ability to learn as well as assess models for action recognition would be adversely affected. In this paper, we uncover inconsistencies in defining temporal bounds for object interactions within and across three egocentric datasets. We show that temporal bounds are often ill-defined, with limited insight into how they have been annotated.
We systematically show that perturbations of temporal bounds influence the accuracy of action recognition, for both hand-crafted features and fine-tuned classifiers, even when the tested video segment significantly overlaps with the ground truth segment. While this paper focuses on unearthing inconsistencies in temporal bounds, and assessing their effect on object interaction recognition, we take a step further by proposing an approach for consistently labeling temporal bounds inspired by studies of the human mindset. \vspace*{2pt} \noindent\textbf{Main Contributions} \hspace{4pt} More specifically, we: \begin{itemize} \vspace*{-4pt} \item Inspect the consistency of temporal bounds for object interactions \textit{across and within} three datasets for egocentric object interactions. We demonstrate that current approaches are highly subjective, with visible variability in temporal bounds when annotating instances of the same action; \vspace*{-6pt} \item Evaluate the robustness of two state-of-the-art action recognition approaches, namely Improved Dense Trajectories~\cite{Wang2013} and Convolutional Two-Stream Network Fusion~\cite{feichtenhofer2016convolutional}, to changes in temporal bounds. We demonstrate that the recognition rate drops by 2-10\% when temporal bounds are modified, albeit within an Intersection-over-Union of more than 0.5; \vspace*{-6pt} \item Propose, inspired by studies in Psychology, the Rubicon Boundaries to assist in consistent temporal boundary annotations for object interactions; \vspace*{-6pt} \item Re-annotate one dataset using the Rubicon Boundaries, and show more than 4\% increase in recognition accuracy, with improved per-class accuracies for most classes in the dataset.
\end{itemize} \vspace*{-6pt} We next review related works in Section~\ref{sec:relatedWork}, before embarking on inspecting labeling consistencies in Section~\ref{sec:temporalBoundaries}, evaluating recognition robustness in Section~\ref{sec:experiments} and proposing and evaluating the Rubicon Boundaries in Section~\ref{sec:rubiconBoundaries}. The paper concludes with an insight into future directions. \section{Related Work} \label{sec:relatedWork} In this section, we review all papers that, to the best of our knowledge, ventured into the consistency and robustness of temporal bounds for action recognition. \vspace*{6pt} \noindent \textbf{Temporal Bounds in Non-Egocentric Datasets} \hspace{4pt} The leading work of Satkin and Hebert~\cite{satkin2010modeling} first pointed out that determining the temporal extent of an action is often subjective, and that action recognition results vary depending on the bounds used for training. They proposed to find the most discriminative portion of each segment for the task of action recognition. Given a loosely trimmed training segment, they exhaustively search for the cropping that leads to the highest classification accuracy, using hand-crafted features such as HOG, HOF \cite{laptev2008learning} and Trajectons \cite{matikainen2009trajectons}. Optimizing bounds to maximize discrimination between class labels has also been attempted by Duchenne~\textit{et al.}~\cite{duchenne2009automatic}, where they refined loosely labeled temporal bounds of actions, estimated from film scripts, to increase accuracy across action classes. Similarly, two works evaluated the optimal segment length for action recognition~\cite{schindler2008action,yang2014effective}. From the \textit{start} of the segment, 1-7 frames were deemed sufficient in~\cite{schindler2008action}, with rapidly diminishing returns as more frames were added.
More recently,~\cite{yang2014effective} showed that 15-20 frames were enough to recognize human actions from 3D skeleton joints. Interestingly, assessing the effect of temporal bounds is still an active research topic within novel deep architectures. Recently, Peng~\textit{et al.}~\cite{peng2016multi} assessed how frame-level classifications using multi-region two-stream CNN are pooled to achieve video-level recognition results. The authors reported that stacking more than 5 frames worsened the action detection and recognition results for the tested datasets, though only compared to a 10-frame stack. The problem of finding optimal temporal bounds is much akin to that of action localization in untrimmed videos~\cite{wang2016temporal, lea2016segmental, huang2016connectionist}. Typical approaches attempt to find similar temporal bounds to those used in training, making them equally dependent on manual labels and thus sensitive to inconsistencies in the ground truth labels. An interesting approach that addressed reliance on training temporal bounds for action recognition and localization is that of Gaidon \textit{et al.} \cite{gaidon2013temporal}. They noted that action recognition methods rely on the temporal bounds of test videos strictly containing an action, and in \textit{exactly} the same fashion as the training segments. They thus redefined an action as a sequence of key atomic frames, referred to as actoms. The authors learned the optimal sequence of actoms per action class with promising results. More recently, Wang \textit{et~al.}~\cite{Wang_2016_CVPR} represented actions as a transformation from a precondition state to an effect state. The authors attempted to learn such transformations as well as locate the end of the precondition and the start of the effect.
However, both approaches~\cite{gaidon2013temporal, Wang_2016_CVPR} rely on manual annotations of actoms~\cite{gaidon2013temporal} or action segments~\cite{Wang_2016_CVPR}, which are potentially as subjective as the temporal bounds of the actions themselves. \vspace*{6pt} \noindent \textbf{Temporal Bounds in Egocentric Datasets} \hspace{4pt} Compared to third-person action recognition (e.g. 101 action classes in~\cite{soomro2012ucf101} and 157 action classes in~\cite{sigurdsson2016hollywood}), egocentric datasets have a smaller number of classes (5-44 classes~\cite{de2008guide, fathi2011learning, Fathi2012, Damen2014a, lee2012discovering, pirsiavash2012detecting, zhou2015temporal}), with considerable ambiguities (e.g. `turn on' vs `turn off' tap). Comparative recognition results have been reported on these datasets in~\cite{spriggs2009temporal, taralova2011source, Singh16, li2015delving, Ryoo_2015_CVPR, ma2016going}. Previously, three works noted the challenge and difficulty in defining temporal bounds for egocentric videos~\cite{spriggs2009temporal,Damen2014a,zhou2015temporal}. In~\cite{spriggs2009temporal}, Spriggs \textit{et al.} discussed the level of granularity in action labels (e.g.~`break egg' vs `beat egg in a bowl') for the CMU dataset~\cite{de2008guide}. They also noted the presence of temporally overlapping object interactions (e.g. `pour' while `stirring'). In~\cite{wray2016sembed}, multiple annotators were asked to provide temporal bounds for the same object interaction. The authors showed variability in annotations, yet did not detail what instructions were given to annotators when labeling these temporal bounds. In~\cite{zhou2015temporal}, the human ability to order pairwise egocentric segments was evaluated as the snippet length varied. The work showed that human perception improves as the size of the segment increases to~60 frames, then levels off.
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/pourExampleBig_reduced.png} \caption{Annotations for action `pour sugar/oil' from BEOID, GTEA Gaze+ and CMU. Aligned key frames are shown along with ground truth annotations (red). The yellow rectangle encloses the motion strictly involved in `pour'.} \label{fig:differentBoundariesExample} \end{figure*} To summarize, temporal bounds for object interactions in egocentric video have been overlooked and no previous work has attempted to analyze the influence of consistency of temporal bounds or the robustness of representations to variability in these bounds. This paper particularly attempts to answer both questions: \textit{how consistent are current temporal bound labels in egocentric datasets?} and \textit{how sensitive are action recognition results to changes in these temporal bounds?} We next delve into answering these questions. \vspace*{-6pt} \section{Temporal Bounds: Inspecting Inconsistency} \label{sec:temporalBoundaries} \vspace*{-2pt} Current egocentric datasets are annotated for a number of action classes, described using a verb-noun label. Each class instance is annotated with its label as well as the temporal bounds (i.e.~\textit{start} and \textit{end} times) that delimit the frames used to learn the action model. Little information is typically provided on how these manually labeled temporal bounds are acquired. In Section~\ref{subsec:labellingInCurrentDatasets}, we compare labels across and within egocentric datasets. We then discuss in Section~\ref{subsec:multiple} how variability is further increased when multiple annotators for the same action are employed. \subsection{Labeling in Current Egocentric Datasets} \label{subsec:labellingInCurrentDatasets} We study ground truth annotations for three public datasets, namely BEOID~\cite{Damen2014a}, GTEA Gaze+~\cite{Fathi2012} and CMU~\cite{de2008guide}.
Observably, many annotations take the \textit{start} and \textit{end} of an action to be, respectively, the first and last frames in which the hands are visible in the field of view. Other annotations tend to segment an action more strictly, including only the most relevant physical object interaction within the bounds. Figure~\ref{fig:differentBoundariesExample} illustrates an example of three different temporal bounds for the `pour' action across the three datasets. Frames marked in red are those that have been labeled in the different datasets as containing the `pour' action. The annotated temporal bounds in this example vary remarkably: (i) BEOID's are the tightest; (ii) The start of GTEA Gaze+'s segment is delayed: in fact, the first frame of the annotated segment already shows some oil in the pan; (iii) CMU's segment includes picking up the oil container and putting it down before and after pouring. These conclusions extend to other actions in the three datasets. We observe that annotations are also inconsistent within the same dataset. Figure \ref{fig:boundariesInconsistency} shows three intra-dataset annotations. (i) For the action `open door' in BEOID, one segment includes the hand reaching the door, while the other starts with the hand already holding the door's handle; (ii) For the action `cut pepper' in GTEA Gaze+, in one segment the user already holds the knife and cuts a single slice of the vegetable. The second segment includes the action of picking up the knife, and shows the subject slicing the whole pepper through several cuts. Note that the length difference between the two segments is considerable: the segments are respectively 3 and 80 seconds long; (iii) For the action `crack egg' in CMU, only the first segment shows the user tapping the egg against the bowl. While the figure shows three examples, such inconsistencies have been discovered throughout the three datasets.
However, we generally observe that GTEA Gaze+ shows more inconsistencies, which could be due to the dataset size, as it is the largest among the evaluated datasets. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{figures/boundariesInconsistencyStreched_reduced.png} \caption{Inconsistency of temporal bounds within datasets. Two segments from each action are shown with considerable differences in start and end times.} \label{fig:boundariesInconsistency} \end{figure} \vspace*{-4pt} \subsection{Multi-Annotator Labeling} \label{subsec:multiple} \vspace*{-2pt} As noted above, defining when an object interaction begins and finishes is highly subjective. There is usually little agreement when different annotators segment the same object interaction. To assess this variability, we collected 5 annotations for several object interactions from an untrimmed video of the BEOID dataset. First, annotators were only informed of the class name and asked to label the start and the end of the action. We refer to these annotations as \textit{conventional}. We then asked a different set of annotators to annotate the same object interactions following our proposed Rubicon Boundaries (RB) approach which we will present in Section \ref{sec:rubiconBoundaries}. Figure~\ref{fig:beoidAnnotations_boxPlots} shows per-class box plots for the Intersection-over-Union (IoU) measure for all pairs of annotations. RB annotations demonstrate improved consistency for all classes. For conventional annotations, we report an average IoU~=~0.63 and a standard deviation of~0.22, whereas for RB annotations we report increased average IoU~=~0.83 with a lower standard deviation of~0.11.
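The agreement measure used above can be sketched as follows. This is an illustrative snippet, not the authors' code, and the annotation values are hypothetical: it computes the mean pairwise temporal IoU over a set of (start, end) annotations of the same action instance.

```python
from itertools import combinations

def temporal_iou(a, b):
    """IoU of two temporal segments given as (start, end) in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_pairwise_iou(annotations):
    """Average IoU over all annotator pairs for one action instance."""
    ious = [temporal_iou(p, q) for p, q in combinations(annotations, 2)]
    return sum(ious) / len(ious)

# Five hypothetical annotations of a single object interaction
anns = [(10.0, 12.0), (9.8, 12.1), (10.2, 12.4), (9.5, 11.9), (10.1, 12.2)]
agreement = mean_pairwise_iou(anns)
```

Averaging this per-instance agreement over all annotated instances of a class yields per-class statistics of the kind reported above.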
\begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{figures/beoidIoUPlot2.pdf} \caption{IoU comparison between conventional (red) and RB (blue) annotations for several object interactions.} \label{fig:beoidAnnotations_boxPlots} \end{figure} To assess how consistency changes as more annotations are collected, we employ annotators via Amazon Mechanical Turk (MTurk) for two object interactions from BEOID, namely the actions of `scan card' and `wash cup', for which we gathered 45 conventional and RB labels. Box plots for MTurk labels are included in Figure~\ref{fig:beoidAnnotations_boxPlots}, showing marginal improvements with RB annotations as well. We will revisit RB annotations in detail in Section~\ref{sec:rubiconBoundaries}. In the next Section, we assess the robustness of current action recognition approaches to variations in temporal boundaries. \vspace*{-6pt} \section{Temporal Bounds: Assessing Robustness} \label{sec:experiments} \vspace*{-4pt} To assess the effect of temporal bounds on action recognition, we systematically vary the \textit{start} and \textit{end} times of annotated segments for the three datasets, and report comprehensive results on the effect of such alterations. Results are evaluated using 5-fold cross validation. For training, only ground truth segments are considered. We then classify \textit{both} the original ground truth and the generated segments. We provide results using Improved Dense Trajectories \cite{Wang2013} encoded with Fisher Vector \cite{Sanchez2013} (IDT FV)\footnote{IDT features have been extracted using GNU Parallel \cite{Tange2011a}.} and Convolutional Two-Stream Network Fusion for Video Action Recognition (2SCNN) \cite{feichtenhofer2016convolutional}. The encoded IDT FV features are classified with a linear SVM. Experiments on 2SCNN are carried out using the provided code and the proposed VGG-16 architecture pre-trained on ImageNet and tuned on UCF101~\cite{soomro2012ucf101}.
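The evaluation protocol can be outlined as follows. This is an illustrative sketch only: synthetic features and a simple nearest-centroid classifier stand in for the actual IDT FV + linear SVM and 2SCNN pipelines. The key point mirrored from the text is that classifiers are trained on ground-truth segments only, then tested on both ground-truth and perturbed segments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cls, per_cls, dim = 5, 40, 16
y = np.repeat(np.arange(n_cls), per_cls)
X_gt = rng.normal(0.0, 1.0, (y.size, dim)) + 3.0 * y[:, None]  # gt features
X_gen = X_gt + rng.normal(0.0, 0.5, X_gt.shape)                # perturbed

def nearest_centroid(train_X, train_y, test_X):
    """Toy stand-in for a linear classifier: predict the nearest class mean."""
    cents = np.stack([train_X[train_y == c].mean(0) for c in np.unique(train_y)])
    d = ((test_X[:, None, :] - cents[None]) ** 2).sum(-1)
    return d.argmin(1)

folds = np.arange(y.size) % 5          # simple 5-fold assignment
acc_gt, acc_gen = [], []
for k in range(5):
    tr, te = folds != k, folds == k
    # Train on ground-truth segments only; test on gt and generated ones
    acc_gt.append((nearest_centroid(X_gt[tr], y[tr], X_gt[te]) == y[te]).mean())
    acc_gen.append((nearest_centroid(X_gt[tr], y[tr], X_gen[te]) == y[te]).mean())
```

Per-fold accuracies for ground-truth and generated segments are then averaged, as in the tables below.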
We fine-tune the spatial, temporal and fusion networks on each fold's training set. Theoretically, the two action recognition approaches are likely to respond differently to variations in start and end times. Specifically, 2SCNN averages the classification responses of the fusion network obtained on $n$ frames randomly extracted from a test video $v$ of length $|v|$. In our experiments, $n = \min(20, |v|)$. Such a strategy should lend 2SCNN some degree of resilience. IDT densely samples feature points in the first frame of the video, whereas in the following frames only new feature points are sampled to replace the missing ones. This entails that IDT FV should be more sensitive to start (specifically) and end time variations, at least for shorter videos. This fundamental difference makes both approaches interesting to assess for robustness. \begin{table}[t] \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{l|c|c|c} \hline Dataset & N. of $gt$ segments & N. of $gen$ segments & Classes \\ \hline BEOID~\cite{Damen2014a} & 742 & 16691 & 34 \\ GTEA Gaze+~\cite{Fathi2012} & 1141 & 22221 & 42 \\ CMU~\cite{de2008guide} & 450 & 26160 & 31 \\ \hline \end{tabular}} \caption{Number of ground truth/generated segments and number of classes for BEOID, GTEA Gaze+ and CMU.} \label{table:datasetsInfo} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/datasetLengthDistributionGteaOriginal.pdf} \caption{Distribution of segment lengths across the three datasets.} \label{fig:datasetLengthDistribution} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/beoidResults.pdf} \caption{BEOID: classification accuracy vs IoU, start/end shifts and length difference between $gt$ and generated segments.} \label{fig:beoidResults} \end{figure*} \begin{figure*}[t!]
\centering \includegraphics[width=\textwidth]{figures/cmuResults.pdf} \caption{CMU: classification accuracy vs IoU, start/end shifts and length difference.} \label{fig:cmuResults} \end{figure*} \subsection{Generating Segments} \label{subsec:generatingSegments} Let $v_{gt}$ be a ground truth action segment obtained by cropping an untrimmed video from time $s_{gt}$ to time $e_{gt}$, which denote the annotated ground truth start and end times. We vary both $s_{gt}$ and $e_{gt}$ in order to generate new action segments with different temporal bounds. More precisely, let $s_{gen}^{0} = s_{gt} - \Delta$ and let $s_{gen}^{n} = s_{gt} + \Delta$. The set containing candidate start times is defined as: \begin{equation*} \mathcal{S} = \{s_{gen}^{0}, s_{gen}^{0} + \delta, s_{gen}^{0} + 2\delta, ..., s_{gen}^{0} + (n-1) \delta, s_{gen}^{n}\} \end{equation*} Analogously, let $e_{gen}^{0} = e_{gt} - \Delta$ and let $e_{gen}^{n} = e_{gt} + \Delta$; the set containing candidate end times is defined as: \begin{equation*} \mathcal{E} = \{e_{gen}^{0}, e_{gen}^{0} + \delta, e_{gen}^{0} + 2\delta, ..., e_{gen}^{0} + (n-1) \delta, e_{gen}^{n}\} \end{equation*} To accumulate the set of generated action segments, we take all possible combinations of $s_{gen} \in \mathcal{S}$ and $e_{gen} \in \mathcal{E}$ and keep only those for which the Intersection-over-Union between $[s_{gt}, e_{gt}]$ and $[s_{gen}, e_{gen}]$ is at least $0.5$. In all our experiments, we set $\Delta = 2$ and $\delta = 0.5$ seconds. \subsection{Comparative Evaluation} \label{subsec:results} Table \ref{table:datasetsInfo} reports the number of ground truth and generated segments for BEOID, GTEA Gaze+ and CMU. Figure \ref{fig:datasetLengthDistribution} illustrates the segments' length distribution for the three datasets, showing considerable differences: BEOID and GTEA Gaze+ contain mostly short segments (1-2.5 seconds), although the latter also includes videos up to 40 seconds long.
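The segment-generation protocol above can be sketched as follows, under the stated settings $\Delta = 2$\,s and $\delta = 0.5$\,s (illustrative code, not the authors' implementation):

```python
import numpy as np

def generate_segments(s_gt, e_gt, Delta=2.0, delta=0.5, min_iou=0.5):
    """Perturb the ground-truth start/end on a grid of step `delta`
    within +/- Delta, keeping segments whose temporal IoU with the
    ground truth is at least `min_iou`."""
    starts = np.arange(s_gt - Delta, s_gt + Delta + 1e-9, delta)
    ends = np.arange(e_gt - Delta, e_gt + Delta + 1e-9, delta)
    kept = []
    for s in starts:
        for e in ends:
            if e <= s:
                continue
            inter = max(0.0, min(e, e_gt) - max(s, s_gt))
            union = (e - s) + (e_gt - s_gt) - inter
            if inter / union >= min_iou:
                kept.append((float(s), float(e)))
    return kept

segs = generate_segments(10.0, 14.0)   # a hypothetical 4-second action
```

With these settings each ground-truth segment yields up to $9 \times 9 = 81$ start/end combinations before the IoU filter is applied.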
CMU has longer segments, with the majority ranging from 5 to 15 seconds. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/gteaResultsOriginal.pdf} \caption{GTEA Gaze+: classification accuracy vs IoU, start/end shifts and length difference.} \label{fig:gteaResults} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.9\textwidth]{figures/classAccuracyComparisonGteaOriginal3.pdf} \caption{Per-class accuracy differences. Most classes exhibit a drop in accuracy when testing generated segments.} \label{fig:classAccuracyComparison} \end{figure*} \noindent{\textbf{BEOID~\cite{Damen2014a}} \hspace{4pt}} is the evaluated dataset with the most consistent and the tightest temporal bounds. When testing the ground truth segments using both IDT FV and 2SCNN, we observe high accuracy for ground truth segments, respectively 85.3\% and 93.5\%, as shown in Table~\ref{table:accuracyResultsAll}. When classifying the generated segments, we observe a drop in accuracy of 9.9\% and 9.7\% respectively. Figure \ref{fig:beoidResults} shows detailed results where accuracy is reported vs IoU, start/end shifts and length difference between ground truth and generated segments. We particularly show the results for shifts in the start and the end times independently. A \textit{negative start shift} implies that a generated segment begins before the corresponding ground truth segment, and a \textit{negative end shift} implies that a generated segment finishes before the corresponding ground truth segment. These terms are used consistently throughout this section. Results show that: (i) as IoU decreases the accuracy drops consistently for IDT FV and 2SCNN, which questions both approaches' robustness to temporal bounds alterations; (ii) IDT FV exhibits lower accuracy with both negative and positive start/end shifts; (iii)~IDT FV similarly exhibits lower accuracy with negative and positive length differences.
This is justified as BEOID segments are tight; by expanding a segment we include new potentially noisy or irrelevant frames that confuse the classifiers; (iv) 2SCNN is more robust to length difference, which is understandable as it randomly samples a maximum of 20 frames regardless of the length. While these are somewhat expected, we also note that (v)~2SCNN is robust to \textit{positive start shifts}. \begin{table}[t] \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{l|c|c||c|c} \hline Dataset & $\textcolor{idtColor}{\text{IDT FV}}_{gt}$ & $\textcolor{idtColor}{\text{IDT FV}}_{gen}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gt}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gen}$ \\ \hline BEOID & 85.3 & 75.4 & 93.5 & 83.8 \\ CMU & 54.9 & 52.8 & 76.0 & 71.7 \\ GTEA Gaze+ & 45.4 & 43.3 & 61.2 & 59.6 \\ \hline \end{tabular}} \caption{Classification accuracy for ground truth and generated segments for BEOID, CMU and GTEA~Gaze+.} \label{table:accuracyResultsAll} \end{table} \noindent{\textbf{CMU~\cite{de2008guide}} \hspace{4pt}} is the dataset with longer ground truth segments. Table \ref{table:accuracyResultsAll} compares results obtained for CMU's ground truth and generated segments. For this dataset, IDT FV accuracy drops by 2.1\% for the generated segments, whereas 2SCNN drops by 4.3\%. In Figure \ref{fig:cmuResults}, CMU consistently shows low robustness for both IDT FV and 2SCNN. As IoU changes from $> 0.9$ to $> 0.5$, we observe a drop of more than 20\% in accuracy for both. However, due to the long average length of segments in CMU, the effect of shifts in start/end times as well as length differences is not visible for IDT FV. Interestingly for 2SCNN, the accuracy slightly improves with \textit{positive start shift}, \textit{negative end shift} and \textit{negative length difference}. This suggests that CMU's ground truth bounds are somewhat loose and that tighter segments are likely to contain more discriminative frames.
\noindent{\textbf{GTEA Gaze+~\cite{Fathi2012}} \hspace{4pt}} is the dataset with the most inconsistent bounds, based on our observations. Table~\ref{table:accuracyResultsAll} shows that accuracy for IDT FV drops by 2.1\%, while overall accuracy for 2SCNN drops marginally (1.6\%). This should not be mistaken for robustness, and that is evident when studying the results in Figure~\ref{fig:gteaResults}. For all variations (i.e. start/end shifts and length differences), the generated segments achieve higher accuracy for both IDT FV and 2SCNN. When labels are inconsistent, shifting temporal bounds does not systematically alter the visual representation of the tested segments. The generated segments tend to include (or exclude) frames that increase the similarity between the testing and training segments. Figure \ref{fig:classAccuracyComparison} reports per-class differences between generated and ground truth segments. Positive values entail that the accuracy for the given class is higher when testing the generated segments, and vice versa. Horizontal lines indicate the average accuracy difference. In total, 63\% of classes in all three datasets exhibit a drop in accuracy when using IDT FV compared to 80\% when using 2SCNN. \begin{table}[h] \centering \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{l|c|c||c|c} \hline Dataset & $\textcolor{cnnColor}{\text{2SCNN}}_{gt}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gen}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gt}^{\textcolor{augColor}{aug}}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gen}^{\textcolor{augColor}{aug}}$\\ \hline BEOID & \textbf{93.5} & 83.8 & 92.3 & 86.6 \\ GTEA Gaze+ & \textbf{61.2} & 59.6 & 57.9 & 58.1 \\ \hline \end{tabular}} \caption{2SCNN data augmentation results.} \label{table:dataAugmentation} \end{table} \noindent{\textbf{Data augmentation:} \hspace{4pt}} {For completeness, we evaluate the performance when using temporal data augmentation methods on two datasets.
Generated segments in Section~\ref{subsec:generatingSegments} are used to augment training. We double the size of the training sets, taking random samples for augmentation. Test sets remained unchanged. Results are reported in Table \ref{table:dataAugmentation}. While we observe an increase in robustness, we also notice a drop in accuracy for ground truth segments, of 1\% and 4\% for BEOID and GTEA Gaze+, respectively.} In conclusion, we note that both IDT FV and 2SCNN are sensitive to changes in temporal bounds for both consistent and inconsistent annotations. Approaches that improve robustness using data augmentation could be attempted; however, a broader look at how the methods could be inherently more robust is needed, particularly for CNN architectures. \vspace*{-6pt} \section{Labeling Proposal: The Rubicon Boundaries} \label{sec:rubiconBoundaries} \vspace*{-4pt} The problem of defining consistent temporal bounds of an action is most akin to the problem of defining consistent bounding boxes of an object. Attempts to define guidelines for annotating objects' bounding boxes started nearly a decade ago. Among others, the VOC Challenge 2007~\cite{everingham2010pascal} proposed what has become the standard for defining the bounding box of an object in images. These consistent labels have been used to train state-of-the-art object detection and classification methods. In this same spirit, in this Section we propose an approach to consistently segment the temporal scope of an object interaction. \noindent \textbf{Defining RB:} The Rubicon Model of Action Phases \cite{gollwitzer1990action}, developed in the field of Psychology, posits an action as a goal a subject desires to achieve and identifies the main sub-phases a person goes through in order to complete the action. First, a person decides which goal he wants to achieve.
After forming his intention, he enters the so-called \textit{pre-actional} phase, that is, a phase in which he plans to perform the action. Following this stage, the subject acts towards goal achievement in the \textit{actional phase}. The two phases are delimited by three transition points: the initiation of prior motion, the start of the action and the goal achievement. The model is named after the historical event of Caesar crossing the Rubicon river, which became a metaphor for deliberately proceeding past a point of no return, which in our case is the transition point that signals the beginning of an action. We take inspiration from this model, specifically from the aforementioned transition points, and define two phases for an object interaction: \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/rubiconBoundaries3b23_reduced.png} \caption{Rubicon Boundaries labeling examples for three object interactions.} \label{fig:rubiconBoundariesExample} \end{figure} \vspace*{2pt} \noindent\textbf{\textit{Pre-actional phase}} This sub-segment contains the preliminary motion that directly precedes the goal, and is required for its completion. When multiple motions can be identified, the pre-actional phase should contain only the last one;\\ \textbf{\textit{Actional phase}} This is the main sub-segment containing the motion strictly related to the fulfillment of the goal. The actional phase starts immediately after the pre-actional phase. In the following section, we refer to a label as an RB annotation when the \textit{beginning} of an object interaction aligns with the \textit{start} of the pre-actional phase and the \textit{ending} of the interaction aligns with the \textit{end} of the actional phase. Figure \ref{fig:rubiconBoundariesExample} depicts three object interactions labeled according to the Rubicon Boundaries. The top sequence illustrates the action of cutting a pepper.
The sequence shows the subject fetching the knife before cutting the pepper and taking it off the plate. Based on the aforementioned definitions, the pre-actional phase is limited to the motion of moving the knife towards the pepper in order to slice it. This is directly followed by the actional phase where the user cuts the pepper. The actional phase ends as the goal of `cutting' is completed. The middle sequence illustrates the action of opening a fridge, showing a person approaching the fridge, reaching towards the handle before pulling the fridge door open. In this case, the pre-actional phase would be the reaching motion, while the actional phase would be the pulling motion. \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/preActIoU2.pdf} \vspace*{-6pt} \caption{IoU comparison among the pre-actional phase (green), the actional phase (yellow) and their concatenation (blue) for several object interactions of BEOID.} \label{fig:preActIoU} \end{figure} \noindent \textbf{Evaluating RB:} We evaluate our RB proposal for consistency and intuitiveness, as well as accuracy and robustness. \noindent \textbf{(i) Consistency:} We already reported consistency results in Section \ref{subsec:multiple}, where RB annotations exhibit higher average overlap and less variation for all the evaluated object interactions: average IoU for all pairs of annotators increased from 0.63 for conventional boundaries to 0.83 for RB. Figure \ref{fig:preActIoU} illustrates per-class IoU box plots for the pre-actional and the actional phases separately, along with the concatenation of the two. For 7 out of the 13 actions, the actional phase was more consistent than the pre-actional phase, and for 12 out of the 13 actions, the concatenation of the phases showed the highest consistency. \noindent \textbf{(ii) Intuitiveness:} While RB showed higher consistency in labeling, any new approach for temporal boundaries would require a shift in practice.
We collect RB annotations from university students as well as from MTurk annotators. Locally, students successfully used the RB definitions to annotate videos with no assistance. However, this has not been the case for MTurk annotators for the two object interactions `wash cup' and `scan card'. The MTurk HIT provided the formal definition of the \textit{pre-actional} and \textit{actional} phases, then ran two multiple-choice control questions to assess the ability of annotators to distinguish these phases from a video. The annotators had to select from textual descriptions what the pre-actional and the actional phases entailed. For both object interactions, only a fourth of the annotators answered the control questions correctly. Two possible explanations could be given, namely: annotators were accustomed to the conventional labeling method {and} did not spend sufficient time to study the definitions, or the definitions were difficult to understand. Further experimentation is needed to understand the cause. \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/gteaRBvsOriginalCombined.pdf} \caption{GTEA Gaze+: class accuracy difference between conventional and RB annotations. \textcolor[rgb]{0,0,0}{Some classes achieved higher accuracy only with RB\textsubscript{act}, while others did only with the full RB segment. Bold highlights such cases.} } \label{fig:gteaRBClassAccuracyComparison} \end{figure} \noindent \textbf{(iii) Accuracy:} We annotated GTEA Gaze+ using the Rubicon Boundaries, by employing three people to label its 1141 segments\footnote{{RB labels and {video of results are available on project webpage:} \url{http://www.cs.bris.ac.uk/~damen/Trespass/}}}. For these experiments, we asked annotators to label both the pre-actional and the actional phase.
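From each annotated pair of phases, two segment variants can be derived: the actional phase alone and the concatenation of the two phases. A small sketch of this bookkeeping (our own helper, written for illustration only):

```python
def rb_labels(pre, act, tol=1e-6):
    """Derive the two RB segment variants from one annotation.

    pre and act are (start, end) tuples in seconds for the pre-actional
    and actional phases.  Returns (rb_act, rb): the actional phase alone
    and the concatenated RB segment.  By definition the actional phase
    starts immediately after the pre-actional one, which we check up to
    a small tolerance.
    """
    if abs(pre[1] - act[0]) > tol:
        raise ValueError("actional phase must start where the pre-actional phase ends")
    rb_act = act
    rb = (pre[0], act[1])
    return rb_act, rb
```

For example, a pre-actional phase (2.0, 5.0) followed by an actional phase (5.0, 9.0) yields the variants (5.0, 9.0) and (2.0, 9.0).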
{In Table \ref{table:gteaRBResults2}, we} report results for the actional phase alone (RB\textsubscript{act}) as well as the concatenation of the two phases (RB){,} using 2SCNN on the same 5 folds from Section~\ref{subsec:results}. The concatenated RB segments proved the most accurate, leading to an increase of more than 4\% in accuracy compared to conventional ground truth segments. {Temporal augmentation {on conventional labels ($\textcolor{originalColor}{\text{Conv}}^{\textcolor{augColor}{aug}}_{gt}$)} results in a drop of accuracy by 7.7\% compared with the RB segments, highlighting that consistent labeling cannot be substituted with data augmentation. } Figure \ref{fig:gteaRBClassAccuracyComparison} shows the per-class accuracy difference between the two sets of RB annotations and the conventional labels. When using RB\textsubscript{act}, 21/42 classes improved, whereas accuracy dropped for 11 classes compared to the conventional annotations. When using the full RB segment, 23/42 classes improved, while 10 classes were better recognized with the conventional annotations. In the two cases, 10 and 9 classes respectively remained unchanged. Given that the experimental setup was identical to that used for the conventional annotations, the boost in accuracy can be ascribed solely to the new action boundaries. Indeed, the RB approach helped the annotators to more consistently segment the object interactions contained in GTEA Gaze+, which is one of the most challenging datasets for egocentric action recognition. \noindent \textbf{(iv) Robustness:} Table \ref{table:gteaRBResults2} {also} compares the newly annotated RB segments to generated segments with varied start and end times, as explained in Section~\ref{subsec:generatingSegments}. While RB$_{gen}$ shows higher accuracy than the Conventional$_{gen}$ segments (59.6\% as reported in Table~\ref{table:accuracyResultsAll}), we still observe a clear drop in accuracy between $gt$ and $gen$ segments.
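The $gen$ segments perturb the labeled boundaries. As a stand-in for the generation procedure of Section~\ref{subsec:generatingSegments} (which is not restated here), the following sketch assumes each endpoint is shifted independently and uniformly by up to a fraction of the segment length; the shift model and parameter values are our assumption for illustration:

```python
import random

def jitter_segment(start, end, max_frac=0.5, rng=random):
    """Generate a segment with perturbed boundaries (illustrative model).

    Each endpoint of the labeled segment (start, end) is shifted
    independently by a uniform draw in [-max_frac, max_frac] times the
    segment length.  Degenerate draws fall back to the original label.
    """
    length = end - start
    s = start + rng.uniform(-max_frac, max_frac) * length
    e = end + rng.uniform(-max_frac, max_frac) * length
    if e - s <= 0:  # discard degenerate (empty or inverted) segments
        return start, end
    return s, e
```

Feeding such jittered segments to a classifier trained on the ground-truth labels is one way to probe its sensitivity to boundary variations.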
Interestingly, we {observe} improved robustness when using the actional phase alone. Given that the actional segment's start is closer in time to the beginning of the object interaction, when varying the start of the segment we are effectively including part of the pre-actional phase in the generated segment, which assists in making actions more discriminative. Importantly, we {show} that RB annotations improved both consistency and accuracy of annotations on the largest dataset of egocentric object interactions. \textcolor[rgb]{0,0,0}{We believe these} form a solid basis for further discussions and experimentation on consistent labeling of temporal boundaries. \vspace*{-6pt} \section{Conclusion and Future Directions} \label{sec:conclusion} \vspace*{-4pt} \begin{table}[t] \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}} \hline $\textcolor{originalColor}{\text{Conv}}_{gt}$ & $\textcolor{originalColor}{\text{Conv}}^{\textcolor{augColor}{aug}}_{gt}$ & $\textcolor{cnnColor}{\stackrel{\text{act}}{\text{RB}}}_{gt}$ & $\textcolor{cnnColor}{\stackrel{\text{act}}{\text{RB}}}_{gen}$ & $\textcolor{rbColor}{\text{RB}}_{gt}$ & $\textcolor{rbColor}{\text{RB}}_{gen}$ \\ \hline 61.2 & 57.9 & 64.9 & 63.2 & \textbf{65.6} & 61.7 \\ \hline \end{tabular}} \caption{{GTEA Gaze+: 2SCNN classification accuracy comparison for conventional annotations (ground truth and augmented) and RB labels (ground truth and generated).}} \label{table:gteaRBResults2} \end{table} Annotating temporal bounds for object interactions is the basis for supervised action recognition algorithms. In this work, we uncovered inconsistencies in temporal bound annotations within and across three egocentric datasets. We assessed the robustness of both hand-crafted features and fine-tuned end-to-end recognition methods, and demonstrated that both IDT FV and 2SCNN are susceptible to variations in start and end times.
We then proposed an approach to consistently label temporal bounds for object interactions. We foresee three potential future directions: \noindent{\textbf{Other NN architectures?} \hspace{2pt}} \textcolor[rgb]{0,0,0}{W}hile 2SCNN randomly samples frames from a video segment, the classification accuracy is still sensitive to variations in temporal bounds. Other architectures, particularly those that model temporal progression using Recurrent networks (including LSTMs), rely on labeled training samples and would thus equally benefit from consistent labeling. Evaluating the robustness of such networks is an interesting future direction. \noindent{\textbf{How can robustness to temporal boundaries be achieved?} \hspace{2pt} Developing classification methods that are inherently robust to temporal boundaries, while still learning from supervised annotations, is a topic for future work. As deep architectures reportedly outperform hand-crafted features and other classifiers, architectures that are designed to handle variations in start and end times are desired.} \noindent{\textbf{Which temporal granularity?} \hspace{2pt}} \textcolor[rgb]{0,0,0}{T}he Rubicon Boundaries address consistent labeling of temporal bounds, \textcolor[rgb]{0,0,0}{but} they do not address the concern of granularity of the action. Is the action of cutting a whole tomato composed of several short cuts, or is it one long action? The Rubicon Boundaries model discusses actions relative to the goal a person wishes to accomplish. The granularity of an object interaction is another matter, and annotating the level of granularity consistently has not been addressed yet. Expanding the Rubicon Boundaries to enable annotating the granularity would require further investigation. \noindent \textbf{Data Statement \& Ack:} \hspace{4pt} {Public datasets were used in this work; no new data were created as part of this study. RB annotations are available on the project's webpage.
Supported by EPSRC DTP and EPSRC LOCATE (EP/N033779/1).} {\small \bibliographystyle{ieee}
\section{Introduction} We consider a coupled non-autonomous system with time-dependent coefficients which describes the interaction of a homogeneous viscous fluid, occupying a domain $ \EuScript O$ bounded by the solid walls of the container $S$ and a horizontal boundary $\Om$ on which a thin nonlinear elastic plate is placed. The motion of the fluid is described by the linearized 3D Navier--Stokes equations. To describe deformations of the plate we consider a generalized plate model which accounts only for transversal displacements and covers a general large deflection Karman type model (see, e.g., \cite{lagnese}). However, our results can also be applied in the cases of nonlinear Berger and Kirchhoff plates. \smallskip\par Our mathematical model is formulated as follows. \par Let $ \EuScript O\subset \R^3$ be a bounded domain with a sufficiently smooth boundary $\partial \EuScript O$. We assume that $\partial \EuScript O=\overline{\Omega}\cup \overline{S}$, where $\Om\cap S=\emptyset$ and $$ \Om\subset\{ x=(x_1;x_2;0)\, :\,x'\equiv (x_1;x_2)\in\R^2\} $$ with the smooth contour $\Gamma=\partial\Om$, and $S$ is a surface which lies in the subspace $\R^3_- =\{ x_3\le 0\}$. The exterior normal on $\partial \EuScript O$ is denoted by $n$; we have $n=(0;0;1)$ on $\Om$. We consider the following Navier--Stokes equations in $ \EuScript O$ for the fluid velocity field $v=v(x,t)=(v^1(x,t);v^2(x,t);v^3(x,t))$ and for the pressure $p(x,t)$: \begin{equation}\label{fl.1} \mu(t) v_t-\Delta v+\nabla p=f(x,t)\quad {\rm in\quad} \EuScript O \times(\tau,+\infty),\;\tau\in \mathbb R, \end{equation} \begin{equation}\label{fl.2} \di v=0 \quad {\rm in}\quad \EuScript O \times(\tau,+\infty),\;\tau\in \mathbb R, \end{equation} where $f(x, t)$ is a volume force.\par We supplement (\ref{fl.1}) and (\ref{fl.2}) with the no-slip boundary conditions imposed on the velocity field $v=v(x,t)$: \begin{equation}\label{fl.4} v=0 ~~ {\rm on}~S; \quad v\equiv(v^1;v^2;v^3)=(0;0;u_t) ~~{\rm on} ~ \Om.
\end{equation} Here $u=u(x,t)$ is the transversal displacement of the plate occupying $\Om$ and satisfying the following equation: \begin{equation*} \rho(t) u_{tt} + \De^2 u + \cF(u)=g(x,t)+p|_\Om,\; ~~{\rm in}~~ \Omega \times (\tau, \infty),\;\tau\in \mathbb R, \end{equation*} where $g(x,t)$ is a given body force on the plate and $\cF(u)$ is a nonlinear feedback force which will be specified later. \par We impose clamped boundary conditions on the plate \begin{equation} u|_{\pd\Om}=\left.\frac{\pd u}{\pd n} \right|_{\pd\Om}=0 \label{plBC} \end{equation} and supply \eqref{fl.1}--\eqref{plBC} with initial data of the form \begin{equation} v(x,\tau)=v_\tau,\quad u(x,\tau)=u_\tau^0, \quad u_t(x,\tau)=u_\tau^1. \label{IC} \end{equation} We note that \eqref{fl.2} and \eqref{fl.4} imply the following compatibility condition \begin{equation}\label{Com-con} \int_\Om u_t(x',t) dx'=0 \quad \mbox{for all}~~ t\ge \tau,\;\tau\in \mathbb R. \end{equation} Indeed, by the divergence theorem, $0=\int_{\EuScript O}\di v\, dx=\int_{\partial\EuScript O}(v,n)\, dS=\int_\Om u_t(x',t)\, dx'$. Condition \eqref{Com-con} is fulfilled if and only if \[ \int_\Om u(x',t) dx'=const \quad \mbox{for all}~~ t\ge \tau,\;\tau\in \mathbb R, \] which can be interpreted as preservation of the volume of the fluid. \par \medskip \par In this paper our main point of interest is well-posedness and long-time dynamics of solutions to the coupled problem \eqref{fl.1}--\eqref{IC} for the velocity $v$ and the displacement $u$. \par We consider this problem under rather general hypotheses concerning the nonlinearity. These hypotheses cover the cases of von Karman, Berger and Kirchhoff plates. We show that problem \eqref{fl.1}--\eqref{IC} generates a process on a family of time-dependent energy spaces. Our main result states that under some natural conditions on the feedback forces system \eqref{fl.1}--\eqref{IC} possesses a pullback attractor.
To establish these results we adjust the compensated compactness approach, widely used for autonomous dynamical systems (see \cite{cl-jdde}, \cite{cl-mem} and \cite[Chapters 7,8]{cl-book} and also the references therein), to processes and pullback attractors. \par The mathematical studies of the long-time behavior of autonomous problems of fluid--structure interaction in the case of viscous fluids and elastic plates/bodies have a long history (see, e.g., \cite{Chu_2010, CF, ChuRyz2011, F} and references therein).\par In the present work we investigate the existence of a pullback attractor in the case of time-dependent coefficients in the main parts of the equations. Only a few papers are devoted to the existence of pullback attractors for such problems \cite{Carab, Conti, Duane}. In the paper \cite{Carab} a wave equation with a time-dependent coefficient in front of the damping term is considered; consequently, the energy space does not depend on the time parameter. The papers \cite{Conti, Duane} deal with wave equations with time-dependent coefficients in front of the second derivatives with respect to time and to the space variable, respectively. In both works the existence of a time-dependent pullback attractor in a scale of spaces is established. \par In our paper we consider, for the first time to the best of our knowledge, an interaction model for a Newtonian fluid and a plate with time-dependent coefficients in front of the time derivatives. The peculiarity of the problem considered consists in the absence of any mechanical damping in the plate component and the strong coupling of the fluid and plate components. \par We prove the well-posedness of the system considered and investigate the long-time dynamics of solutions to the coupled problem \eqref{fl.1}--\eqref{IC}. In order to show the existence of a time-dependent pullback attractor we derive an abstract result on the asymptotic compactness of processes on time-dependent spaces. \par The paper is organized as follows.
In Section 2 we introduce notations, recall some properties of Sobolev type spaces with non-integer indices on bounded domains and collect some regularity properties of the (stationary) Stokes problem which we use in the further considerations. The main notions from the theory of pullback attractors and new abstract results are presented in Section 3. Our main result in Section 4 is Theorem~\ref{WP} on well-posedness and the existence of a time-dependent absorbing set. Our main result in Section 5 states the existence of a pullback attractor. The argument is based on the property established in Theorem~\ref{pr:com}. \section{Spaces and notations.} Now we introduce the Sobolev type spaces which are used in what follows (see e.g. \cite{Triebel78}).\par Let $D$ be a sufficiently smooth domain and $s\in\R$. We denote by $H^s(D)$ the Sobolev space of order $s$ on a set $D$, which we define as the restriction (in the sense of distributions) of the space $H^s(\R^d)$ (introduced via the Fourier transform). We denote by $\|\cdot \|_{s,D}$ the norm in $H^s(D)$, which we define by the relation \[ \|u\|_{s,D}^2=\inf\left\{\|w\|_{s,\R^d}^2\, :\; w\in H^s(\R^d),~~ w=u ~~ \rm{on}~~D \right\}. \] We also use the notation $\|\cdot \|_{D}=\|\cdot \|_{0,D}$ for the corresponding $L_2$-norm and, similarly, $(\cdot,\cdot)_D$ for the $L_2$ inner product. We denote by $H^s_0(D)$ the closure of $C_0^\infty(D)$ in $H^s(D)$ (\wrt $\|\cdot \|_{s,D}$) and introduce the spaces \[ H^s_*(D):=\left\{f\big|_D\, :\; f\in H^s(\R^d), \; {\rm supp }\, f\subset \overline{D}\right\},\quad s\in \R. \] Since the extension by zero of an element of $H^s_*(D)$ gives us an element of $H^s(\R^d)$, the spaces $H^s_*(D)$ can be treated not only as functional spaces defined on $D$ (and contained in $H^s(D)$) but also as (closed) subspaces of $ H^s(\R^d)$. Below we need them to describe boundary traces on $\Om\subset\partial \EuScript O$.
We endow the classes $H^s_*(D)$ with the induced norms $\|f \|^*_{s,D}= \| f \|_{s,\R^d}$ for $f\in H^s_*(D)$. It is clear that \[ \|f \|_{s,D}\le \|f \|^*_{s,D}, \quad f\in H^s_*(D). \] It is known (see \cite[Theorem 4.3.2/1]{Triebel78}) that $C_0^\infty(D)$ is dense in $H^s_*(D)$ and \begin{align*} & H^s_*(D)\subset H^s_0(D)\subset H^s(D),~~~ s\in\R;\\ & H^s_0(D) = H^s(D),~~~ -\infty< s\le 1/2;\\ & H^s_*(D)= H^s_0(D),~~~ -1/2< s<\infty,~~ s-1/2\not\in \{ 0,1,2,\ldots\}. \end{align*} In particular, $H^s_*(D)= H^s_0(D)= H^s(D)$ for $|s|<1/2$. By \cite[Remark 4.3.2/2]{Triebel78} we also have that $H^s_*(D)\neq H^s(D)$ for $|s|>1/2$. Note that in the notations of \cite{LiMa_1968} the space $H^{m+1/2}_*(D)$ is the same as $H^{m+1/2}_{00}(D)$ for every $m= 0,1,2,\ldots$, and for $s=m+\si$ with $0<\si<1$ we have \[ \|u \|^*_{s,D}=\left\{ \| u\|^2_{s,D} +\sum_{|\al|=m}\int_D \,\frac{|D^\al u(x)|^2}{d(x,\pd D)^{2\si}}\, dx \right\}^{1/2}, \] where $d(x,\pd D)$ is the distance between $x$ and $\pd D$. The norm $\|\cdot \|^*_{s,D}$ is equivalent to $\|\cdot \|_{s,D}$ in the case when $s>-1/2$ and $s-1/2\not\in\{ 0,1,2,\ldots\}$, but not equivalent in general. \par Understanding adjoint spaces \wrt the duality between $C_0^\infty(D)$ and $[C_0^\infty(D)]'$, by Theorems 4.8.1 and 4.8.2 from \cite{Triebel78} we also have that \begin{align*} [H^s_*(D)]'= H^{-s}(D),~ s\in\R, ~~~\mbox{and} ~~~ [H^s(D)]' = H_*^{-s} (D),~ s\in (-\infty,1/2). \end{align*} To describe fluid velocity fields we introduce the following scale of spaces. \par Let $\mathscr{C}( \EuScript O)$ be the class of $C^\infty$ vector-valued solenoidal (i.e., divergence-free) functions $ v=(v^1;v^2;v^3)$ on $\overline{\EuScript O}$ which vanish in a neighborhood of $S$ and are such that $v^1=v^2=0$ on $\Om$. We denote by $X_t$ the closure of $\sC( \EuScript O)$ \wrt the $L_2$-norms $$ \|\cdot\|^2_{X_t}=\mu(t)\|\cdot\|^2_{L_2( \EuScript O)}, $$ and by $Y$ the closure \wrt the $H^1( \EuScript O)$-norm.
One can see that \[ X_t=\left\{ v=(v^1;v^2;v^3)\in [L_2( \EuScript O)]^3\, :\; {\rm div}\, v=0;\; \gamma_n v\equiv (v,n)=0~\mbox{on}~ S\right\},\quad t\in\mathbb R, \] and \[ Y=\left\{ v=(v^1;v^2;v^3)\in [H^1( \EuScript O)]^3\, \left| \begin{array}{l} {\rm div}\, v=0,\; v=0~\mbox{on}~ S, \\ v^1=v^2=0~\mbox{on}~\Om \end{array} \right. \right\}. \] The space $Y$ is endowed with the norm $\|\cdot\|_Y= \|\nabla\cdot\|_{L_2( \EuScript O)}$. For some details concerning spaces of this type we refer to \cite{temam-NS}, for instance.\par We also need Sobolev spaces consisting of functions with zero average on the domain $\Om$, namely we consider the space \[ \widehat{L}_2(\Om)=\left\{u\in L_2(\Om): \int_\Om u(x') dx' =0 \right\} \] and also the scale of time-dependent spaces \[\widehat{L}_t^2(\Omega)=\left\{ u\in \widehat{L}_2(\Omega): \|u\|_{\widehat{L}_t^2(\Omega)}^2=\rho(t) \|u\|_{\Omega}^2\right\},\quad t\in\mathbb R,\] which coincide with $\widehat{L}_2(\Om)$ as sets and differ only by their norms. We use the notation $\widehat H^s(\Om)=H^s(\Om)\cap\widehat L_2(\Om)$ for $s>0$ with the standard $H^s(\Om)$-norm. The notations $\widehat H^s_*(\Om)$ and $\widehat H^s_0(\Om)$ have a similar meaning. \section{Abstract results on attractors.} We begin with some definitions from the theory of processes. \begin{definition} \label{d1} A two-parameter family $U(t,\tau): H_\tau\to H_t$, $t\ge \tau$, $\tau\in \mathbb R$, of operators in a scale of Banach spaces $H_t$, $t\in \mathbb R$, is called a process if \begin{itemize} \item $U(\tau, \tau)=I$; \item $U(t, s)U(s, \tau)=U(t, \tau),\; t\ge s\ge \tau,\;\tau\in \mathbb R$. \end{itemize} \end{definition} \begin{definition} \label{d2} A family of sets $\EuScript B=\{B_t\}_{t\in\mathbb R}$ with $B_t\subset H_t$ is positively invariant if $U(t,\tau)B_\tau\subset B_t$ for all $t\ge\tau$.
\end{definition} \begin{definition} \label{d3} A family of bounded sets $\EuScript B=\{B_t\}_{t\in\mathbb R}$ with $B_t\subset H_t$ is uniformly bounded if there exists $R>0$ such that $B_t\subset \mathbb B_t(R)=\{z\in H_t:\, \|z\|_{H_t}\le R \}$ for any $ t\in\mathbb R$. \end{definition} To study the asymptotic behavior of the operators $U(t, \tau)$ we need to define a suitable object which attracts solutions of the system originating sufficiently far in the past. In order to do so we need to introduce the notions of absorption and attraction. \begin{definition} \label{d4} A family of uniformly bounded sets $\EuScript B=\{B_t\}_{t\in\mathbb R}$ is time-dependent absorbing if for any $R>0$ there exists $ \Theta=\Theta(R)$ such that $U(t,\tau)\mathbb B_\tau(R)\subset B_t$ for any $\tau\le t-\Theta$. \end{definition} The process $U(t, \tau)$ is called dissipative whenever it admits a time-dependent absorbing family. \begin{definition} \label{d31} The time-dependent $\omega$-limit of a time-dependent absorbing family $\EuScript B=\{B_t\}_{t\in\mathbb R}$, $B_t\subset H_t$, is the family $\Omega=\{\omega_t(\EuScript B)\}_{t\in\mathbb R}$, where \begin{equation} \omega_t(\EuScript B)=\bigcap\limits_{y\le t}\overline{\bigcup\limits_{\tau\le y}U(t, \tau)B_\tau}. \end{equation} \end{definition} \begin{definition} \label{d5} A family of uniformly bounded sets $\EuScript K=\{K_t\}_{t\in\mathbb R}$ is pullback attracting if for every uniformly bounded family $ \EuScript B=\{B_t\}_{t\in\mathbb R}$ $$\lim\limits_{\tau\to-\infty}\delta_t(U(t,\tau)B_\tau, K_t)=0,$$ where $\delta_t(B,C)=\sup\limits_{x\in B}\inf\limits_{y\in C}\|x-y\|_{H_t}$ denotes the Hausdorff semidistance. \end{definition} Now we are in a position to define asymptotic compactness and the pullback attractor. \begin{definition} \label{d6} A process is asymptotically compact if there exists a pullback attracting family of compact sets $\EuScript K=\{K_t\}_{t\in\mathbb R}$, $K_t\subset H_t$.
\end{definition} \begin{definition} \label{d7} The pullback attractor is the smallest element of the family of pullback attracting families $\mathbb K=\{\EuScript K=\{K_t\}_{t\in\mathbb R}\}$, where the $K_t\subset H_t$ are compact in the corresponding spaces. \end{definition} The classical approach (see, e.g. \cite{}) to the verification of asymptotic compactness of a process consists in finding a decomposition $U(t, \tau)=U_0(t, \tau)+U_1(t, \tau)$ with the properties $$\|U_0(t, \tau)z\|_{H_t}\le Ce^{-\delta(t-\tau)},\;C,\delta>0,\;z\in H_\tau,$$ and $$\sup\limits_{t\ge \tau}\|U_1(t, \tau)z\|_{R_t}\le M,$$ where $R_t$ is a Banach space compactly embedded into $H_t$. However, for the system considered it is not obvious how to get such a decomposition due to the strong coupling (the fluid and plate components cannot be split in the construction of the Galerkin approximations). Therefore, we need to derive another criterion for asymptotic compactness. In order to do this we adjust to our situation the method of compensated compactness. \begin{theorem} \label{pr:com} Let $\EuScript D=\{D_t\}_{t\in\mathbb R}$ be a time-dependent absorbing family of a process $U(t,\tau):H_\tau\to H_t$, and assume that for any $\varepsilon>0$ there exists $T_0=T_0(\varepsilon)>0$ such that for any $y_1, y_2\in D_{t-T_0}$ \begin{equation} \label{ac} \|U(t,t-T_0)y_1-U(t,t-T_0)y_2\|_{H_t}\le \varepsilon+\Phi_{T_0, t}(y_1, y_2), \end{equation} where the function $\Phi_{T_0, t}(y_1, y_2): D_{t-T_0}\times D_{t-T_0}\to \mathbb R$ possesses the property \begin{equation} \label{p}\liminf\limits_{n\to\infty}\liminf\limits_{m\to\infty}\Phi_{T_0, t}(y_n, y_m)=0 \end{equation} for any sequence $\{y_n\}\subset D_{t-T_0}$. Then the process $U(t,\tau)$ is asymptotically compact; more precisely, $\Omega=\{\omega_t(\EuScript D)\}_{t\in\mathbb R}$ is a pullback attracting family of compact sets. \end{theorem} \begin{proof} We can assume without loss of generality that $\EuScript D$ is positively invariant. Otherwise, we can substitute $D_t$ with $\bigcup\limits_{\tau\le t-\Theta}U(t,\tau)D_\tau\subset D_t$.\par We fix $T>0$.
Obviously, we have the representation \[\omega_t(\EuScript D)=\bigcap\limits_{k\in\mathbb N}C_k^t,\] where \[C_k^t=\overline{U(t, t-kT)D_{t-kT}}.\] Now we need to check that \begin{equation} \label{inkl} C_{k+1}^t\subset C_k^t. \end{equation} Indeed, due to the invariance of the family $\EuScript D$ we have $U(t-kT, t-(k+1)T)D_{t-(k+1)T}\subset D_{t-kT}$; consequently, \begin{multline*}C_{k+1}^t=\overline{U(t, t-(k+1)T)D_{t-(k+1)T}}\\=\overline{U(t, t-kT)U(t-kT, t-(k+1)T)D_{t-(k+1)T}}\subset \overline{U(t, t-kT)D_{t-kT}}=C_{k}^t. \end{multline*} Therefore, we have a sequence of nonempty closed sets $$ C_1^t\supset C_2^t\supset...\supset C_{k}^t\supset C_{k+1}^t \supset... $$ To show that $\omega_t(\EuScript D)$ is nonempty and compact it remains to prove that \begin{equation} \label{alphalim} \lim\limits_{k\to\infty}\alpha(C_k^t)=0, \end{equation} where $\alpha(\cdot)$ denotes the Kuratowski measure of noncompactness in $H_t$. Due to \eqref{inkl} \[\alpha(C_k^t)=\alpha(C_k^t\cup C_{k+1}^t)=\max\left(\alpha(C_k^t), \alpha(C_{k+1}^t)\right);\] consequently, \begin{equation}\label{ainkl} \alpha(C_k^t)\ge \alpha(C_{k+1}^t) \end{equation} for any $k\in\mathbb N$. It follows readily from \eqref{ainkl} that to show \eqref{alphalim} it is sufficient to prove that for any $\varepsilon>0$ there exists $k_0\in\mathbb N$ such that $\alpha(C_{k_0}^t)\le \varepsilon$.\par We argue by contradiction. Suppose that there exists $\varepsilon_0>0$ such that for any $k\in \mathbb N$ \begin{equation} \label{ep} \alpha(C_k^t)>6\varepsilon_0. \end{equation} For this $\varepsilon_0$ we choose $T_0=T_0(\varepsilon_0)$ such that \eqref{ac} and \eqref{p} hold. There exist $k_0\in\mathbb N$ and $0<\delta_0<T$ such that $T_0=k_0T-\delta_0$. We use the notation $\EuScript L_0=U(t, t-T_0)D_{t-T_0}=U(t, t-k_0T+\delta_0)D_{t-k_0T+\delta_0}$. Then \begin{multline} \label{l} C_{k_0}^t=U(t, t-k_0T)D_{t-k_0T}=U(t, t-k_0T+\delta_0)U(t-k_0T+\delta_0, t-k_0T)D_{t-k_0T}\\\subset U(t, t-T_0) D_{t-T_0}=\EuScript L_0.
\end{multline} It follows from \eqref{ep} and \eqref{l} that \[\alpha(\EuScript L_0)\ge \alpha(C_{k_0}^t)>6\varepsilon_0.\] This implies that there exists a sequence $\{y_n\}_{n=1}^\infty\subset D_{t-T_0}$ such that for any $n,m\in \mathbb N$ with $n\ne m$ \[2\varepsilon_0\le\|U(t,t-T_0)y_n-U(t,t-T_0)y_m\|_{H_t}\le \varepsilon_0+\Phi_{T_0, t}(y_n, y_m)\] and, therefore, \[\Phi_{T_0, t}(y_n, y_m)\ge \varepsilon_0,\] which contradicts \eqref{p}. This means that $\Omega=\{\omega_t(\EuScript D)\}_{t\in\mathbb R}$ is a pullback attracting family of compact sets. \end{proof} \section{Well-posedness and existence of an absorbing set.} In this section we prove the existence and uniqueness of solutions to the problem considered, the generation of a continuous process, and the existence of its time-dependent absorbing set. We introduce the scale of phase spaces \[H_t=X_t\times \widehat {H_0^2(\Omega)}\times \widehat{L_t^2(\Omega)}\] equipped with the norm \[\|W\|_{H_t}^2=\mu(t)\|v\|_{L^2(\EuScript O)}^2+\|\Delta u\|_{L^2(\Omega)}^2+\rho(t)\|u_t\|_{L^2(\Omega)}^2,\;\;W=(v,u,u_t).\] Now we impose assumptions on the parameters of problem \eqref{fl.1}--\eqref{IC} (cf. \cite{Conti, Ma}).\par {\bf Assumptions on $\mu$ and $\rho$.}\par \begin{enumerate} \item[(A1)] $\mu(t), \;\rho(t)>0$. \item[(A2)] $\mu(t), \rho(t) \in C^1(\mathbb R)$ are decreasing functions. \item[(A3)] There exists $L>0$ such that \[\sup\limits_{t\in \mathbb R}(|\mu(t)|+|\mu'(t)|+|\rho(t)|+|\rho'(t)|)\le L.\] \item[(A4)] $\lim\limits_{t\to +\infty}\mu(t)=0$, $\lim\limits_{t\to +\infty}\rho(t)=0$. \end{enumerate} {\bf Assumptions on $\cF$.}\par \begin{enumerate} \item[(F1)] There exists $\epsilon>0$ such that $F$ is locally Lipschitz from $H_0^{2-\epsilon}(\Om)$ into $H^{-1/2}(\Om)$, i.e. \[\| F(u_1)-F(u_2)\|_{-1/2,\Omega}\le C_R\|u_1-u_2\|_{2-\epsilon, \Omega}\] for any $u_1, u_2\in H_0^2(\Omega)$ possessing the property $\|u_i\|_{2,\Omega}\le R$, $i=1,2$.
\item[(F2)] There exists a $C^1$-functional $\Pi(u)$ on $H_0^2(\Omega)$ such that $F(u)=\Pi'(u)$ and $\Pi(u)\le Q(\|u\|_{2,\Omega})$, where the function $Q$ is increasing. \item[(F3)] There exist $0<\nu<1$ and $C\ge 0$ such that \[(1-\nu)\|\Delta u\|_\Om^2+\Pi(u)+C\ge 0\] for any $u\in H_0^2(\Omega)$. \item[(F4)] There exist $a_1, a_2\ge 0$ and $0<\nu<1$ such that \[(F(u), u)\ge a_1\Pi(u)-a_2-(1-\nu)\|\Delta u\|_\Omega^2.\] \end{enumerate} {\bf Assumptions on $f$ and $g$.}\par \begin{enumerate} \item[(G1)] $ f\in L_{\it{loc}}^2(\mathbb R;Y')$, $ g\in L_{\it{loc}}^2(\mathbb R;H^{-1/2}(\Omega))$. \item[(G2)] There exist $\sigma_0,\; C_{f,g}>0$ such that for any $t\in \mathbb R$ and $\sigma\in [0,\sigma_0]$ \[\int\limits_{-\infty}^t e^{-\sigma(t-s)}\left(\|f(s)\|_{Y'}^2+\|g(s)\|_{-1/2,\Omega}^2\right)ds\le C_{f, g}.\] \end{enumerate} \begin{remark} We note that assumption (A4) is imposed in order to consider the problem in genuinely time-dependent spaces. Otherwise, due to assumption (A2) we obtain the equivalence of the norms \[\|W\|_{H_t}^2 \le \|W\|_{H_\tau}^2\le \max\left\{1,\frac{\mu(\tau)}{\mu(t)} , \frac{\rho(\tau)}{\rho(t)}\right\}\|W\|_{H_t}^2.\] \end{remark} \begin{remark} Examples of functions satisfying assumptions (G1) and (G2) are periodic functions or $e^{-\kappa t},\;\;\kappa>0$. \end{remark} We define the spaces of test functions $$ \EuScript L_T=\left\{\psi=(\phi, b): \left|\begin{array}{l} \phi\in L^2(\tau,T;[H^1(\EuScript O)]^3),\; \phi_t\in L^2(\tau,T;[L_t^2(\EuScript O)]^3),\\ {\rm div}\, \phi=0,\;\phi|_S=0,\;\phi|_\Omega=(0,0,b),\\ b\in L^2(\tau,T,\widehat{H_0^2(\Omega)}), \; b_t\in L^2(\tau,T,\widehat{L_t^2(\Omega)}) \end{array}\right.\right\} $$ and $\EuScript L_T^0=\{\psi\in \EuScript L_T\, :\, \psi(T)=0\}.$ \par In order to make our statements precise we need to introduce the definition of weak solutions to problem \eqref{fl.1}--\eqref{IC}.
\begin{definition} A pair of functions $(v(t),u(t))$ is said to be a weak solution to problem \eqref{fl.1}--\eqref{IC} on a time interval $[\tau,T]$ if \begin{itemize} \item $W(t)=(v(t), u(t), u_t(t))\in L_\infty(\tau,T;H_t);$ \item $v\in L_2(\tau,T;Y)$, $u_t\in L_2(\tau,T;[H_*^{1/2}(\Omega)]^2)$; \item $ u(\tau)=u_\tau^0;$ \item For almost all $ t\in [\tau,T]$ \begin{equation} \label{com} v(t)|_{\Omega}=u_t(t); \end{equation} \item For every $\psi=(\phi, b)\in \EuScript L_T^0$ the following equality holds \begin{multline} \label{sol_def} -\int\limits_\tau^T\mu(t) (v, \phi_t)_\EuScript O dt-\frac12\int\limits_\tau^T\mu'(t) (v, \phi)_\EuScript O dt+\int\limits_\tau^T (\nabla v, \nabla \phi)_\EuScript O dt\\ -\int\limits_\tau^T\rho(t) (u_t, b_t)_\Omega dt-\frac12\int\limits_\tau^T\rho'(t) (u_t, b)_\Omega dt+\int\limits_\tau^T (\Delta u, \Delta b)_\Omega dt+\int\limits_\tau^T (F(u), b)_\Omega dt\\ =\int\limits_\tau^T (f(t), \phi)_\EuScript O dt+\int\limits_\tau^T (g(t), b)_\Omega dt+\mu(\tau) ( v_\tau, \phi(\tau))_\EuScript O\\ +\rho(\tau) (u_\tau^1, b(\tau))_\Omega.
\end{multline} \end{itemize} \end{definition} The following theorem holds true. \begin{theorem} \label{WP} Under assumptions (A1)-(A4), (F1)-(F3), (G1)-(G2) problem \eqref{fl.1}--\eqref{IC} generates a strongly continuous process $U(t, \tau ) : H_\tau \to H_t$, $t \ge \tau$, $\tau\in \mathbb R$, satisfying the following continuous dependence property: for every pair of initial data $W_\tau^i = (v_\tau^i, u_\tau^{0i}, u_\tau^{1i}) \in H_\tau$ such that $\|W_\tau^i\|_{H_\tau} \le R$, $i = 1, 2$, $R>0$, the difference of the corresponding solutions satisfies \begin{equation} \label{contin} \|U(t, \tau)W_\tau^1 - U(t, \tau)W_\tau^2\|_{H_t} \le e^{K(t-\tau)}\|W_\tau^1 - W_\tau^2\|_{H_\tau} , \;t \ge\tau, \end{equation} for some constant $K = K(R) \ge 0$.\par The energy equality \begin{multline}\label{energy} \sE(v(t), u(t), u_t(t))+ \int_\tau^t \|\g v\|^2 ds- \frac12\int_\tau^t \mu'(s) \| v\|^2 ds -\frac12 \int_\tau^t \rho'(s) \| u_s\|^2 ds \\=\sE(v_\tau, u_\tau^0, u_\tau^1) +\int_\tau^t(f, v)_\EuScript O ds +\int_\tau^t (g, u_s)_\Om ds \end{multline} holds for every $t>\tau$, where the energy functional $\sE$ is defined by the relation \begin{equation}\label{en-def} \sE(v, u, u_t)=E(v,u,u_t)+\Pi(u), \end{equation} where \begin{equation}\label{en-def1} E(v,u,u_t)=\frac12\left[\mu(t)\|v\|^2_\EuScript O+ \rho(t) \|u_t\|^2_\Om+\| \Delta u\|_\Om^2\right]. \end{equation} \end{theorem} \begin{proof} The proof is quite standard and relies on the method of Galerkin approximations (see e.g.). We place it here for the sake of completeness. \par {\em Step 1.
Existence.} \par Let $\{e_i\}_{i\in \N}$ be the orthonormal basis in $\widetilde X_t=\{v\in X_t: \; (v,n)_\Om=0\}$ consisting of the eigenvectors of the Stokes problem \begin{equation} \label{stokes} -\De e_i +\nabla p_i =\lambda_i e_i \quad \mbox{in} \; \EuScript O,~~~ {\rm div}\, e_i=0, \quad e_i|_{\pd\EuScript O}=0, \end{equation} where $0<\lambda_1\le \lambda_2\le \cdots$ are the corresponding eigenvalues. The existence of solutions to \eqref{stokes} can be shown in the same way as in \cite{temam-NS}.\par We define the operator $N: [\widehat L^2(\Om)]^2 \mapsto [H^{1/2}(\EuScript O)]^3$ by the formula \begin{equation}\label{fl.n0} Nu=v ~~\mbox{iff}~~\left\{ \begin{array}{l} -\Delta v+\nabla p=0, \quad \di v=0 \quad {\rm in}\quad \EuScript O; \\ v=0 ~~ {\rm on}~\pd\EuScript O\setminus \Om; \quad v=u~~{\rm on} ~ \Om. \end{array}\right. \end{equation} The operator $N$ possesses the following properties \cite{}: \begin{equation} \label{n} N:\, [\widehat H^s_*(\Om)]^2\mapsto [H^{1/2+s}(\EuScript O)]^3\cap X_t \end{equation} continuously for every $s\ge -1/2 $, and \begin{equation} \label{n1} \|Nu\|_{1/2+s, \EuScript O}\le C\|u\|^*_{s, \Om},\quad u\in [H^s_*(\Om)]^2. \end{equation}\par We also introduce the positive self-adjoint operator $A=\Delta^2$ with the domain $\sD(A)= (H^4\cap \widehat H_0^2)(\Omega)$. It is easy to see that $\sD(A^{1/2})=\widehat H_0^2(\Omega)$.
Denote by $\{g_i\}_{i\in\N}$ the orthonormal basis in $\widehat L_2(\Om)$ which consists of eigenfunctions of the operator $A$ \begin{equation} \label{101} Ag_i=\kappa_i g_i \end{equation} with the eigenvalues $0<\kappa_1\le\kappa_2\le\ldots$.\par Let $\varphi_i=N g_i$, where the operator $N$ is defined by (\ref{fl.n0}).\par We define an approximate solution as a pair of functions $(v_{n,m}; u_n)$: \begin{equation} v_{n,m}(t)=\sum_{i=1}^m \alpha_i(t)e_i +\sum_{j=1}^{2n} \dot{\beta}_j(t)\varphi_j, \quad u_n(t)=\sum_{j=1}^{2n}\beta_j(t)g_j \label{approx_sol} \end{equation} which satisfy the relations \begin{equation} \mu(t)\left(\dot{\alpha}_k(t) +\sum_{j=1}^{2n} \ddot{\beta}_j(t)(\varphi_j,e_k)_\EuScript O\right)+\lambda_k \alpha_k(t)+\sum_{j=1}^{2n} \dot{\beta}_j(t)(\g \varphi_j, \g e_k)_\EuScript O =(f, e_k)_\EuScript O \label{e_eq} \end{equation} for $k=1,...,m$, and \begin{multline} \mu(t)\left(\sum_{i=1}^m \dot{\alpha}_i(t)(e_i, \varphi_k)_\EuScript O+\sum_{j=1}^{2n} \ddot{\beta}_j(t)(\varphi_j, \varphi_k)_\EuScript O\right)+\rho (t)\ddot{\beta}_k(t)\\ + \sum_{i=1}^m \alpha_i(t)(\g e_i, \g \varphi_k)_\EuScript O + \sum_{j=1}^{2n} \dot{\beta}_j(t) (\g \varphi_j, \g \varphi_k)_\EuScript O +\kappa_k \beta_k(t)+\\ +(F(u_n(t)), g_k) = (f(t), \varphi_k)_\EuScript O +(g(t), g_k)_\Om \label{phi_eq} \end{multline} for $k=1,\dots,2n$. This system of ordinary differential equations \eqref{e_eq}--\eqref{phi_eq} is endowed with the initial data \[ v_{n,m}(\tau)=\Pi_m(v_\tau-Nu_\tau^1)+NP_n u_\tau^1, \] \[ u_n(\tau)=P_nu_\tau^0, \;\; \dot{u}_n(\tau)=P_nu_\tau^1 , \] where $\Pi_m$ is the orthoprojector onto $Lin\{e_j: j=1,\ldots,m\}$ in $\widetilde{X_t}$ and $P_n$ is the orthoprojector onto $Lin\{g_i : i=1,\ldots,n\}$ in $\widehat L_t^2(\Om)$. Since $\Pi_m$ and $P_n$ are spectral projectors we have that \begin{equation}\label{id-conv} (v_{n,m}(\tau); u_n(\tau); \dot{u}_n(\tau))\to (v_\tau;u_\tau^0;u_\tau^1),\;\text{strongly in }\; H_\tau,\; m,n\to\infty.
\end{equation} Arguing as in \cite{ChuRyz2011} we infer that the system \eqref{e_eq}--\eqref{phi_eq} has a unique solution on any time interval $[\tau,T]$. \par It follows from (\ref{approx_sol}) that \[ v_{n,m}(t)=\sum_{i=1}^m \alpha_i(t)e_i + N[\pd_t u_n(t)], \] where $N$ is given by (\ref{fl.n0}). This implies the following boundary compatibility condition \begin{equation}\label{nm-comp} v_{n,m}(t)=\pd_t u_n(t)~~ \mbox{on}~~ \Om. \end{equation} Multiplying \eqref{e_eq} by $\al_k(t)$ and \eqref{phi_eq} by $\dot\beta_k(t)$, after summation we obtain an energy relation of the form \eqref{energy} for the approximate solutions $(v_{n,m}; u_n)$ (for a similar argument we refer to \cite{ChuRyz2011}). Assumptions (A2), (F2), (F3), (G1) together with the trace theorem imply the following a priori estimate: \begin{multline}\label{a-pri0} \sup_{t\in [\tau,T]}\left[\mu(t)\|v_{n,m}(t)\|_\EuScript O^2 +\rho(t)\| \pd_t u_{n}(t)\|_\Om^2+ \|\Delta u_n(t)\|^2_\Om\right]\\ +\int_\tau^T\|\nabla v_{n,m}(t)\|_\EuScript O^2 dt + \int_\tau^T \| \pd_t u_{n}(t)\|_{[H_*^{1/2}(\Om)]^2}^2 dt \le C(T, \|\cW_\tau\|_{H_\tau}^2) \end{multline} for any existence interval $[\tau,T]$ of approximate solutions, where the constant $C(T, \|\cW_\tau\|_{H_\tau}^2)$ does not depend on $n$ and $m$. In particular, this implies that any approximate solution can be extended to any time interval by the standard procedure, i.e., the solution is global. It also follows from \eqref{a-pri0} that the sequence $\{(v_{n,m}; u_n; \pd_t u_n)\}$ contains a subsequence such that \begin{align} &(v_{n,m}; u_n; \pd_t u_n) \rightharpoonup (v; u; \pd_t u) \quad \ast\mbox{-weakly in } L_\infty(\tau,T; H_t),\label{ux-conv} \\ &v_{n,m} \rightharpoonup v \quad \mbox{weakly in } L_2(\tau,T;Y).
\label{u_conv} \end{align} Moreover, by the Aubin-Dubinsky theorem (see, e.g., \cite[Corollary~4]{Simon}) we can assert that \begin{align} &u_n \rightarrow u \quad \mbox{strongly in } C(\tau,T; \widehat H^{2-\epsilon}_0(\Om)) \label{u-strong} \end{align} for every $\epsilon>0$. Besides, the trace theorem yields \begin{equation}\label{xt-conv} \pd_t u_n \rightharpoonup \pd_t u \quad \mbox{weakly in } L_2(\tau,T; [H^{1/2}_*(\Om)]^2). \end{equation} One can see that $(v_{n,m}; u_n; \pd_t u_n)(t)$ satisfies \eqref{sol_def} with the test function $\phi$ of the form \begin{equation}\label{phi-pq} \phi=\phi_{l,q}=\sum_{i=1}^l \gamma_i(t)e_i +\sum_{j=1}^q\delta_j(t)\varphi_j, \end{equation} where $l\le m$, $q\le n$ and $\gamma_i$, $\delta_j$ are scalar absolutely continuous functions on $[\tau,T]$ such that $\dot{\ga}_i,\dot{\delta}_j\in L_2(\tau,T)$ and $\gamma_i(T)=\delta_j(T)=0$. Thus using (\ref{ux-conv})--(\ref{u_conv}) we can pass to the limit and show that $(v; u; \pd_t u)(t)$ satisfies \eqref{sol_def} with $\phi=\phi_{l,q}$, where $l$ and $q$ are arbitrary. By (\ref{id-conv}) and (\ref{u-strong}) we have $\cW(\tau)=\cW_\tau$. The compatibility condition \eqref{com} follows from (\ref{nm-comp}) and (\ref{xt-conv}). \par To conclude the proof of the existence of weak solutions we only need to show that any function $\psi$ in $\EuScript L_T^0$ can be approximated by a sequence of functions of the form (\ref{phi-pq}). This can be done in the following way. We first approximate the corresponding boundary value $b$ by a finite linear combination $h$ of $\xi_j$, and then we approximate the difference $\psi-Nh$ (with $N$ defined by (\ref{fl.n0})) by a finite linear combination of the $e_i$. The limit transition in the nonlinear terms is quite standard, so we omit it here. Thus the existence of weak solutions is proved.\par {\em Step 2. Energy equality.} \par To prove the energy equality for a weak solution we follow the scheme presented in \cite{KochLa_2002}.
We introduce a finite difference operator $D_h$, depending on a small parameter $h$. Let $g$ be a bounded function on $[\tau,T]$ with values in some Hilbert space. We extend $g(t)$ to all $t\in \R$ by defining $g(t)=g(\tau)$ for $t<\tau$ and $g(t)=g(T)$ for $t>T$. With this extension we denote \[ g^+_h(t)=g(t+h)-g(t), \quad g^-_h(t)=g(t)-g(t-h), \quad D_h g(t)=\frac 1{2h}(g^+_h(t)+ g^-_h(t)). \] Properties of the operator $D_h$ are collected in Proposition 4.3 of \cite{KochLa_2002}. \par Taking $\phi(t)=\int_t^T\chi(s)ds\cdot \phi$ in (\ref{sol_def}), where $\chi$ is a smooth scalar function and $\phi$ belongs to the space \begin{equation}\label{space-W} \widehat{Y}=\left\{ \phi\in Y \left| \; \phi|_\Om= b \in \widehat H^2_0(\Om) \right. \right\}, \end{equation} one can see that the weak solution $(v(t); u(t))$ satisfies the relation \begin{multline} \label{wc} \mu(t) (v(t), \phi)_{\EuScript O} +\rho(t)(u_t(t), b)_\Om = \mu(\tau)(v_\tau, \phi)_{\EuScript O} + \rho(\tau)(u_\tau^1, b)_\Om+ \int_\tau^t\big[ \frac12\mu'(s)(v, \phi)_{\EuScript O}\\+\frac12\rho'(s)(u_t, b)_{\Om}-(\g v, \g \phi)_{\EuScript O}-(\Delta u, \Delta b)_\Om -(F(u), b)_\Om+(f, \phi)_{\EuScript O}+(g, b)_\Om\big] ds \end{multline} for all $t\in [\tau,T]$ and $\phi\in \widehat{Y}$ with $\phi\big\vert_\Om=b$.\par The vector $(v(t), u(t), u_t(t))$ is weakly continuous in $H_t$ for any weak solution $(v(t), u(t))$ to problem \eqref{fl.1}--\eqref{IC}. Indeed, it follows from (\ref{wc}) that $(v(t), u(t))$ satisfies the relation \[ \mu(t)(v(t),\phi)_\EuScript O = \mu(\tau) (v_\tau, \phi)_{\EuScript O} +\int_\tau^t\left[\frac12\mu'(s)(v, \phi)_{\EuScript O} - (\g v,\g \phi)_{\EuScript O} + (f(s), \phi)_{\EuScript O} \right] ds \] for almost all $t\in [\tau,T]$ and for all $\phi\in Y_0=\{ v\in Y :\, v|_\Om=0\}\subset\widehat {Y}\subset Y$, where $\widehat{Y}$ is given by \eqref{space-W}. This implies that $v(t)$ is weakly continuous in $Y_0'$.
Since $X_t\subset Y'_0$, for any $\tau<t<T$ we can apply the Lions lemma (see \cite[Lemma 8.1]{LiMa_1968}) and conclude that $v(t)$ is weakly continuous in $X_t$. The same lemma gives us weak continuity of $u(t)$ in $\widehat H_0^2(\Om)$. Now using (\ref{wc}) again with $\phi\in \widehat{Y}$ we conclude that $ t\mapsto (u_t(t), b)_\Om $ is continuous for every $b\in \widehat H_0^2(\Om)$. This implies that $ t\mapsto u_t(t)$ is weakly continuous in $[L_2(\Om)]^2 $. Using weak continuity of weak solutions, we can extend the variational relation in \eqref{sol_def} to the class of test functions from $\EuScript L_T$ (instead of $\EuScript L_T^0$) by an appropriate limit transition. More precisely, one can show that any weak solution $(v; u)$ satisfies the relation \begin{multline} \label{sol_def_t} -\int\limits_\tau^T\mu(t)(v,\phi_t)_{\EuScript O}dt+\int\limits_\tau^T (\g v, \g \phi)_{\EuScript O} dt-\int\limits_\tau^T\rho(t) (u_t, b_{t})_\Om dt+\int\limits_\tau^T(F(u), b) dt\\+\int\limits_\tau^T ( \Delta u, \Delta b)_\Om dt=\mu(\tau)(v_\tau, \phi(\tau))_{\EuScript O}+\rho(\tau)(u_\tau^1, b(\tau))_\Om-\mu(T)(v(T), \phi(T))_{\EuScript O}\\-\rho(T)(u_t(T), b(T))_\Om+\frac12\int\limits_\tau^T\mu'(t)(v,\phi)_{\EuScript O}dt+\frac12\int\limits_\tau^T\rho'(t) (u_t, b)_\Om dt \\+\int\limits_\tau^T(f, \phi)_{\EuScript O} dt+\int\limits_\tau^T(g, b)_{\Omega} dt, \end{multline} for every $\psi=(\phi, b)\in\EuScript L_T$.\par Let $(v(t), u(t))$ be a weak solution to problem \eqref{fl.1}--\eqref{IC}. Now we use \begin{equation} \label{98} \phi=\frac 1{2h}\int_{t-h}^{t+h} v(s) ds \end{equation} as a test function in \eqref{sol_def_t}. For the shell component we take the test function $b=\phi|_\Om=D_h u$ -- the same as the one used in \cite{KochLa_2002} for the full Karman model.
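The mechanism behind this choice of test function is already visible in the scalar case: for a scalar function $g$ extended by constants outside $[\tau,T]$, one has $\int_\tau^T g\, D_h g\, dt \to \frac12\big(g(T)^2-g(\tau)^2\big)$ as $h\to 0$. A minimal numerical sketch of this scalar analogue (the function $g$ and all parameters are illustrative choices, not taken from the paper):

```python
import numpy as np

# Scalar illustration of the operator D_h g(t) = (g(t+h) - g(t-h)) / (2h),
# with g extended by constants outside [tau, T], and of the limit
#   int_tau^T g * D_h g dt  ->  (g(T)^2 - g(tau)^2) / 2   as h -> 0.
tau, T = 0.0, 1.0
g = lambda t: np.sin(3.0 * t)  # illustrative smooth function


def dh_error(h, n=400001):
    """Absolute deviation of the discretised integral from its limit."""
    t = np.linspace(tau, T, n)
    # constant extension realised by clipping the argument to [tau, T]
    dhg = (g(np.clip(t + h, tau, T)) - g(np.clip(t - h, tau, T))) / (2 * h)
    approx = np.sum(g(t) * dhg) * (t[1] - t[0])  # Riemann sum of g * D_h g
    exact = 0.5 * (g(T) ** 2 - g(tau) ** 2)
    return abs(approx - exact)


print(dh_error(1e-2), dh_error(1e-3))  # the error shrinks roughly like h
```

The only contribution in the limit comes from boundary layers of width $h$ at $\tau$ and $T$, which is why the constant extension matters.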
\par Arguing as in the proof of Proposition 4.3 of \cite{KochLa_2002} we can infer \begin{multline} \label{l1} \lim\limits_{h\to 0} \left(\int\limits_\tau^T\mu(t)(v(t), D_h v(t))_{\EuScript O}dt-\frac12\int\limits_\tau^T\mu'(t)(v(t), \frac 1{2h}\int\limits_{t-h}^{t+h} v(s)ds)_{\EuScript O} dt\right)\\=\frac12\left(\mu(T)\|v(T)\|_{\EuScript O}^2-\mu(\tau)\|v(\tau)\|_{\EuScript O}^2\right) \end{multline} and \begin{multline} \label{l2} \lim\limits_{h\to 0} \left(\int\limits_\tau^T\rho(t)(u_{t}(t), D_h u_t(t))_\Om dt-\frac12\int\limits_\tau^T\rho'(t)(u_t(t), D_h u(t))_\Om dt\right)\\=\frac12\left(\rho(T)\|u_t(T)\|_{\Om}^2-\rho(\tau)\|u_t(\tau)\|_{\Om}^2\right). \end{multline} Then, relying on \eqref{sol_def_t}, \eqref{l1}, and \eqref{l2} we can conclude the proof. All the arguments for the fluid component in our model are the same as in \cite{cr-full-karman}, and the arguments for the plate component are analogous to those presented in the proof of Lemma 4.1 of \cite{KochLa_2002}. This makes it possible to prove the energy equality in \eqref{energy}.\par Continuity of weak solutions with respect to $t$ can be obtained in the standard way from the energy equality and weak continuity (see \cite[Ch.~3]{LiMa_1968} and also \cite{KochLa_2002}).\par {\em Step 3. Continuity with respect to the initial data and uniqueness.} \par It follows from the energy equality \eqref{energy} and (F3) that if $\|W_\tau\|_{H_\tau}\le R$, then there exists $C(R)>0$ such that $\|U(t, \tau)W_\tau\|_{H_t}\le C(R)$. Consequently, the Gronwall lemma and (F1) yield estimate \eqref{contin}. The uniqueness of solutions follows. \end{proof} Now we are in a position to show the existence of a time-dependent absorbing family. \begin{lemma} \label{lem} Let $t \ge \tau$. Let $U(t, \tau )W_\tau$ be the solution of \eqref{fl.1}--\eqref{IC} with initial time $\tau$ and initial data $W_\tau \in H_\tau$.
Then, if (F4) holds, there exist $\omega>0$, $K \ge 0$ and an increasing positive function $Q$ such that \begin{equation} \label{dis} \|U(t, \tau )W_\tau\|_{H_t} \le Q(\|W_\tau\|_{H_\tau})e^{-\omega(t-\tau)} + K,\;\; \tau \le t.\end{equation} \end{lemma} \begin{proof} We construct the Lyapunov functional of the form \begin{equation} \label{lap} L(t)=\EuScript E(t)+\delta\left(\mu(t)(v, Nu)_{\EuScript O}+\rho(t)(u_t, u)_\Om\right). \end{equation} It is easy to see from (F3) and the properties of the operator $N$ that there exist $c_i>0$, $i=\overline{1,4}$ such that \begin{equation} \label{est} -c_1+c_2E(t)\le L(t)\le c_3 \EuScript E(t)+c_4, \end{equation} where $E(t)$ is defined by \eqref{en-def1}. All the calculations below can be performed on Galerkin approximations. It follows from the energy equality \eqref{energy} and (F4) that \begin{multline*}\frac{d}{dt}L(t)=-\|\g v\|_{\EuScript O}^2 +\frac12\rho'(t)\|u_t\|_\Om^2+\frac12\mu'(t)\|v\|_{\EuScript O}^2+(f, v)_{\EuScript O}+(g, u_t)_{\Om}+\delta\mu(t)(v, Nu_t)_{\EuScript O}\\ -\delta\|\Delta u\|_\Om^2+\delta \rho(t) \|u_t\|_\Om^2 +\delta \rho'(t) (u_t, u)_\Om+\delta \mu'(t) (v, Nu)_{\EuScript O}\\-\delta (F(u), u)_\Om+\delta(f, Nu)_{\EuScript O}+\delta (g, u)_\Om-\delta (\g v, \g Nu)_{\EuScript O}\le -\omega L(t)+C(\|f\|_{\EuScript O}^2+\|g\|_{-1/2, \Om}^2) \end{multline*} for some $\omega, C>0$. Consequently, \[\frac{d}{dt}L(t)+\omega L(t)\le C(\|f\|_{\EuScript O}^2+\|g\|_{-1/2, \Om}^2)\] and using the Gronwall lemma, (G2), (F2), and \eqref{est} we come to \eqref{dis}. \end{proof} Lemma \ref{lem} yields the existence of a time-dependent absorbing family with the entering time $\Theta=\max\{0, \frac 1\omega \log\frac{Q(R)}{1+K}\}$. \section{Pullback attractor.} In order to establish the existence of a pullback attractor to the system considered, our remaining task is to show estimate \eqref{ac}.
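The decay estimate \eqref{dis} above rests on the Gronwall step $\frac{d}{dt}L+\omega L\le C$. As a toy sanity check, forward Euler integration of the extremal scalar case confirms the exponential bound (all constants here are illustrative, not derived from the model):

```python
import numpy as np

# Toy scalar check of the Gronwall step: if L' <= -omega*L + C, then
#   L(t) <= L(0) * exp(-omega * t) + C / omega.
# We integrate the extremal case L' = -omega*L + C with forward Euler
# and compare against the bound. (omega, C, L0 are illustrative values.)
def gronwall_ok(omega=0.7, C=2.0, L0=10.0, T=20.0, dt=1e-3):
    t = np.arange(0.0, T, dt)
    L = np.empty_like(t)
    L[0] = L0
    for i in range(1, len(t)):
        # explicit Euler step for the worst-case equality
        L[i] = L[i - 1] + dt * (-omega * L[i - 1] + C)
    bound = L0 * np.exp(-omega * t) + C / omega
    return bool(np.all(L <= bound + 1e-9))


print(gronwall_ok())  # True
```

The trajectory decays to the absorbing level $C/\omega$, mirroring how the constant $K$ in \eqref{dis} absorbs the forcing terms.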
\begin{lemma} Let $W^i(t)=(v^i(t), u^i(t), u_{t}^i(t))$, $i=\overline{1,2}$ be two weak solutions to problem \eqref{fl.1}--\eqref{IC} with initial conditions $W_\tau^i\in H_{t-T_0}$, $\|W_\tau^i\|_{H_{t-T_0}}\le R$. Then, for any $\varepsilon>0$ there exist $T_0>0$ and a positive constant $C(T_0, R)$ such that \begin{equation} \label{est1} \|W^1(t)-W^2(t)\|_{H_{t}}\le \varepsilon +C(T_0, R)\max\limits_{[t-T_0, t]}(\|u^1(s)-u^2(s)\|_{2-\epsilon, \Om}^2), \end{equation} for any $\epsilon>0$. \end{lemma} \begin{proof} We use the notation $W(t)=(v(t), u(t), u_t(t))=W^1(t)-W^2(t)$. It follows from the energy inequality that \begin{equation} \label{91} \frac{d}{d\xi} E(\xi)\le -\|\g v\|_{\EuScript O}^2+\mu'(\xi)\|v\|_{\EuScript O}^2+\rho'(\xi)\|u_\xi\|_\Om^2+(F(u^1)-F(u^2), u_\xi)_\Om. \end{equation} Integrating \eqref{91} over the interval $[s, t]$ and then $[t-T_0, t]$ we come to \begin{multline} \label{92} T_0 E(t)\le \int\limits_{t-T_0}^tE(s)ds-\int\limits_{t-T_0}^t \int\limits_{s}^t\|\g v\|_{\EuScript O}^2d\xi ds+\int\limits_{t-T_0}^t \int\limits_{s}^t\mu'(\xi)\|v\|_{\EuScript O}^2d\xi ds\\+\int\limits_{t-T_0}^t \int\limits_{s}^t\rho'(\xi)\|u_\xi\|_\Om^2d\xi ds+\int\limits_{t-T_0}^t \int\limits_{s}^t(F(u^1)-F(u^2), u_\xi)_\Om d\xi ds. \end{multline} It follows from the trace theorem and assumption (F1) that for any $\sigma>0$ \begin{multline} \label{93} \left|\;\int\limits_{t-T_0}^t \int\limits_{s}^t(F(u^1)-F(u^2), u_\xi)_\Om d\xi ds\right|\le \sigma \int\limits_{t-T_0}^t \int\limits_{s}^t\|\g v\|_{\EuScript O}^2d\xi ds\\+C(T_0, R, \sigma)\max\limits_{[t-T_0, t]}\|u\|_{2-\epsilon, \Om}^2. \end{multline} Integrating \eqref{91} over the interval $[t-T_0, t]$ and taking into consideration (A2) and (F1) we obtain \begin{multline} \label{94} E(t)+\int\limits_{t-T_0}^t\|\g v\|_{\EuScript O}^2ds\le C(R)\int\limits_{t-T_0}^t\|u\|_{2-\epsilon,\Om}^2ds+E(t-T_0)\le C(T_0, R)\max\limits_{[t-T_0, t]}\|u\|_{2-\epsilon, \Om}^2+C(R).
\end{multline} It is a straightforward consequence of the trace theorem that \begin{equation} \label{95} \int\limits_{t-T_0}^t E(s)ds\le C\int\limits_{t-T_0}^t\|\g v\|_{\EuScript O}^2ds+\int\limits_{t-T_0}^t\|u\|_{2, \Om}^2ds. \end{equation} Now we estimate the last term in \eqref{95}. Substituting into \eqref{sol_def} $b=u$ and $\phi=Nu$ and choosing $\tau=t-T_0$ and $T=t$ we arrive at \begin{multline*} \int\limits_{t-T_0}^t\|u\|_{2, \Om}^2ds\le \int\limits_{t-T_0}^t\rho(s)\|u_s\|_{\Om}^2ds-\rho(t)(u_t(t), u(t))_\Om+ \rho(t-T_0)(u_t(t-T_0), u(t-T_0))_\Om\\+\int\limits_{t-T_0}^t(F(u^1)-F(u^2), u_s)_\Om ds+\int\limits_{t-T_0}^t\mu(s)(v, Nu_s)_{{\EuScript O}} ds -\int\limits_{t-T_0}^t(\g v, \g Nu)_{{\EuScript O}} ds\\- \mu(t)(v(t), Nu(t))_{{\EuScript O}}+ \mu(t-T_0)(v(t-T_0), Nu(t-T_0))_{{\EuScript O}}+\int\limits_{t-T_0}^t\mu'(s)(v, Nu)_{{\EuScript O}} ds. \end{multline*} Relying on the properties of the operator $N$, the trace theorem, and (A3) we have the estimate \begin{equation} \label{96} \int\limits_{t-T_0}^t\|u\|_{2, \Om}^2ds\le C(T_0, R)\max\limits_{[t-T_0, t]}\|u\|_{2-\epsilon, \Om}^2+C\int\limits_{t-T_0}^t\|\g v\|_{\EuScript O}^2ds+C(R). \end{equation} Combining \eqref{92}--\eqref{96} and choosing $\sigma$ small enough in \eqref{93} we obtain \[E(t)\le \frac{C(R)}{T_0}+C(T_0, R)\max\limits_{[t-T_0, t]}\|u\|_{2-\epsilon, \Om}^2,\] which leads immediately to the assertion of the lemma. \end{proof} Now we formulate our main result. \begin{theorem} The process $U(t, \tau)$ generated by problem \eqref{fl.1}--\eqref{IC} possesses a pullback attractor. \end{theorem} \begin{proof} There exists a time-dependent absorbing family and we have Lemma 2 at hand. Therefore, to use Theorem 1 it remains to show that \eqref{p} holds true for $\Phi_{T_0,t}(W^1,W^2)=C(T_0, R)\max\limits_{[t-T_0, t]}\|u^1(s)-u^2(s)\|_{2-\epsilon, \Om}^2$.
Let $W^n(t)=(v^n(t), u^n(t), u_t^n(t))$ be a sequence of solutions to problem \eqref{fl.1}--\eqref{IC} corresponding to initial data $W^n \in D_{t-T_0}$, i.e., $\|W^n\|_{H_{t-T_0}}\le R$. Then, it follows from Lemma 1 that up to a subsequence $$ \begin{array}{l} u^n(s)-u^m(s) \to 0,\;\; \text{weak-* in}\;\; L_\infty(t-T_0, t; \widehat H_0^2(\Om)),\\ u_s^n(s)-u_s^m(s) \to 0 ,\;\;\text{weakly in} \;\; L_2(t-T_0, t; H_*^{1/2}(\Om)). \end{array} $$ By Aubin's compactness lemma \cite{Simon}, we have \eqref{p}. This together with Theorem 1 completes the proof. \end{proof} \textbf{Acknowledgements.}\par The author is grateful to Irina Kmit, Humboldt University of Berlin, for fruitful discussions and valuable comments.\par The author was partially supported by the Volkswagen Foundation grant within the framework of the international project ``Modeling, Analysis, and Approximation Theory toward Applications in Tomography and Inverse Problems.''
\section{Degree Boundedness} \label{sec:boundedness} The boundedness problem (Def. \ref{def:decision}, point \ref{decision:bound}) asks for the existence of a bound on the degree (Def. \ref{def:degree}) of the models of a sentence $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}})$. Intuitively, the $\bound{\Delta}{\predname{A}}$ problem has a negative answer if and only if there are increasingly large unfoldings (i.e., expansions of a formula by replacement of a predicate atom with one of its definitions) of $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ repeating a rule that contains an interaction atom involving a parameter of the rule, which is always bound to the same component. \ifLongVersion For instance, the rule $\mathit{Worker}(x) \leftarrow \exists y ~.~ \interactwo{x}{out}{y}{in} * \compact{y} * \mathit{Worker}(x)$ (Example \ref{ex:star}) declares an unbounded number of interactions $\interactwo{x}{out}{y}{in}$ involving the component to which $x$ is bound.\fi We formalize the notion of unfolding below: \begin{definition}\label{def:unfolding} Given a predicate $\predname{A}$ and a sequence $(\mathsf{r}_1, i_1), \ldots, (\mathsf{r}_n, i_n) \in \left(\Delta\times\mathbb{N}\right)^+$, where $\mathsf{r}_1$ is the rule $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi \in \Delta$, the \emph{unfolding} $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \unfold{(\mathsf{r}_1, i_1) \ldots (\mathsf{r}_n, i_n)}{\Delta} \psi$ is inductively defined as \begin{inparaenum}[(1)] % \item $\psi = \phi$ if $n=1$, and % \item$\psi$ is obtained from $\phi$ by replacing its $i_1$-th predicate atom $\predname{B}(y_1, \ldots, y_{\arityof{\predname{B}}})$ with $\psi_1[x_1/y_1, \ldots, x_{\arityof{\predname{B}}}/y_{\arityof{\predname{B}}}]$, where $\predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}}) \unfold{(\mathsf{r}_2,i_2) \ldots (\mathsf{r}_n,i_n)}{\Delta} \psi_1$ is an 
unfolding, if $n>1$. % \end{inparaenum} \end{definition} We show that the $\bound{\Delta}{\predname{A}}$ problem can be reduced to the existence of increasingly large unfoldings or, equivalently, of a cycle in a finite directed graph, built by a variant of the least fixpoint iteration algorithm used to solve the satisfiability problem (Fig. \ref{fig:base-graph}). \begin{definition}\label{def:dependency} Given satisfiable base tuples $\mathfrak{t},\mathfrak{u} \in \mathsf{SatBase}$ and a rule from $\Delta$: \vspace*{-.25\baselineskip} \[\mathsf{r} ~:~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(z^1_1, \ldots, z^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(z^h_1, \ldots, z^h_{\arityof{\predname{B}_h}})\] where $\phi$ is a quantifier- and predicate-free formula, we write $(\predname{A},\mathfrak{t}) \depof{\mathsf{r}}{i} (\predname{B},\mathfrak{u})$ if and only if $\predname{B} = \predname{B}_i$ and there exist satisfiable base tuples $\mathfrak{t}_1, \ldots, \mathfrak{u} = \mathfrak{t}_i, \ldots, \mathfrak{t}_h \in \mathsf{SatBase}$, such that \(\mathfrak{t} \in \proj{\big(\basetupleof{\phi}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=1}^h \mathfrak{t}_\ell[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\big)}{x_1, \ldots, x_{\arityof{\predname{A}}}}\). We define the directed graph with edges labeled by pairs $(\mathsf{r},i) \in \Delta \times \mathbb{N}$: \vspace*{-.5\baselineskip} \begin{align*} \graphof{\Delta} & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \big(\set{\defnof{\Delta} \times \mathsf{SatBase}}, \set{\tuple{(\predname{A},\mathfrak{t}),(\mathsf{r},i),(\predname{B},\mathfrak{u})} \mid (\predname{A},\mathfrak{t}) \depof{\mathsf{r}}{i} (\predname{B},\mathfrak{u})}\big) \end{align*} \end{definition} \begin{figure}[t!]
{\small\begin{algorithmic}[0] \State \textbf{input}: a SID $\Delta$ \State \textbf{output}: $\graphof{\Delta} = (V,E)$ \end{algorithmic}} {\small\begin{algorithmic}[1] \State initially $V := \emptyset$, $E := \emptyset$ \For{$\predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi \in \Delta$, with $\phi$ quantifier- and predicate-free} \State $V := V \cup \left(\set{\predname{A}} \times \proj{\basetupleof{\phi}{\set{x_1,\ldots,x_{\arityof{\predname{A}}}}}}{x_1,\ldots,x_{\arityof{\predname{A}}}}\right)$ \EndFor \While{$V$ or $E$ still change} \For{$\mathsf{r} : \predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \Asterisk_{\ell=1}^h \predname{B}_\ell(z^\ell_1,\ldots,z^\ell_{\arityof{\predname{B}_\ell}}) \in \Delta$} \If{there exist $(\predname{B}_1,\mathfrak{t}_1), \ldots, (\predname{B}_h,\mathfrak{t}_h) \in V$} \State $\mathcal{X} := \proj{\left(\basetupleof{\phi}{\set{x_1,\ldots,x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=1}^h \mathfrak{t}_\ell[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\right)}{x_1,\ldots,x_{\arityof{\predname{A}}}}$ \State $V := V \cup (\set{\predname{A}} \times \mathcal{X})$ \State $E := E \cup \set{\tuple{(\predname{A},\mathfrak{t}),(\mathsf{r},\ell),(\predname{B}_\ell,\mathfrak{t}_\ell)} \mid \mathfrak{t} \in \mathcal{X},\ell\in\interv{1}{h}}$ \EndIf \EndFor \EndWhile \end{algorithmic}} \caption{Algorithm for the Construction of $\graphof{\Delta}$} \label{fig:base-graph} \vspace*{-\baselineskip} \end{figure} The graph $\graphof{\Delta}$ is built by the algorithm in Fig. \ref{fig:base-graph}, a slight variation of the classical Kleene iteration algorithm for the computation of the least solution of the constraints of the form (\ref{eq:basesid})\ifLongVersion (see Fig. \ref{fig:base-sets})\fi. 
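For intuition, the fixpoint loop of Fig. \ref{fig:base-graph} can be rendered schematically as follows: base tuples are abstracted to opaque hashable values, and the composition/projection step of Def. \ref{def:dependency} is delegated to a caller-supplied \texttt{compose} function, so this is an illustrative sketch rather than the paper's actual construction. A cycle in the resulting graph then witnesses a negative answer to the boundedness problem:

```python
from itertools import product


def build_graph(rules, compose):
    # rules: list of (head, body, phi) where body is a tuple of predicate
    # names and phi stands for the quantifier- and predicate-free part of
    # the rule; compose(phi, picks) abstracts the otimes/projection step,
    # returning the set of satisfiable base tuples derived for the head
    # from phi and one already-derived base tuple per body atom.
    V, E = set(), set()
    changed = True
    while changed:  # Kleene iteration until (V, E) stabilise
        changed = False
        for r, (head, body, phi) in enumerate(rules):
            choices = [{t for (B, t) in V if B == b} for b in body]
            for picks in product(*choices):  # yields () for base rules
                for t in compose(phi, picks):
                    if (head, t) not in V:
                        V.add((head, t))
                        changed = True
                    for i, (b, u) in enumerate(zip(body, picks), 1):
                        edge = ((head, t), (r, i), (b, u))
                        if edge not in E:
                            E.add(edge)
                            changed = True
    return V, E


def has_cycle(V, E):
    # DFS with three colours; a grey-to-grey edge closes a cycle
    succ = {}
    for src, _lab, dst in E:
        succ.setdefault(src, set()).add(dst)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = dict.fromkeys(V, WHITE)

    def dfs(v):
        colour[v] = GREY
        for w in succ.get(v, ()):
            if colour[w] == GREY or (colour[w] == WHITE and dfs(w)):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and dfs(v) for v in V)
```

On a toy self-recursive system (one base rule and one recursive rule for \texttt{A}) the graph acquires a self-loop and \texttt{has\_cycle} reports unboundedness, while a non-recursive system yields an acyclic graph.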
A path $(\predname{A}_1,\mathfrak{t}_1) \depof{\mathsf{r}_1}{i_1} (\predname{A}_2,\mathfrak{t}_2) \depof{\mathsf{r}_2}{i_2} \ldots \depof{\mathsf{r}_n}{i_n} (\predname{A}_n,\mathfrak{t}_n)$ in $\graphof{\Delta}$ induces a unique unfolding $\predname{A}_1(x_1,\ldots,x_{\arityof{\predname{A}_1}}) \unfold{(\mathsf{r}_1,i_1) \ldots (\mathsf{r}_n,i_n)}{\Delta} \phi$ (Def. \ref{def:unfolding}). Since the vertices of $\graphof{\Delta}$ are pairs $(\predname{A},\mathfrak{t})$, where $\mathfrak{t}$ is a satisfiable base tuple, and the edges of $\graphof{\Delta}$ reflect the construction of the base tuples from the least solution of the constraints (\ref{eq:basesid}), the outcome $\phi$ of this unfolding is always a satisfiable formula. \begin{myLemmaE}\label{lemma:unfold-soundness} Given a path $(\predname{A}_0,\mathfrak{t}_0) \depof{\mathsf{r}_1}{i_1} \ldots \depof{\mathsf{r}_n}{i_n} (\predname{A}_n,\mathfrak{t}_n)$ in $\graphof{\Delta}$, where $\mathfrak{t}_0 = (\basecomps, \baseinteracs, \pureform)$, a state map $\varrho$ and a store $\nu$ such that $(\emptyset,\emptyset,\varrho) \models^\nu \pi$, there exists a configuration $(\mathcal{C},\mathcal{I},\varrho)$, such that $(\mathcal{C},\mathcal{I},\varrho) \models^\nu_\Delta \phi$, where $\predname{A}_0(x_1, \ldots, x_{\arityof{\predname{A}_0}}) \unfold{(\mathsf{r}_1,i_1) \ldots (\mathsf{r}_n,i_n)}{\Delta} \phi$ is the unique unfolding corresponding to the path. \end{myLemmaE} \begin{proofE} Let $\mathsf{r}_1$ be the following rule: \[\predname{A}_0(x_1, \ldots, x_{\arityof{\predname{A}_0}}) \leftarrow \phi \text{, where } \phi = \exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=2}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})\] where $\psi*\pi$ is a quantifier- and predicate-free formula and $\pi$ is, moreover, pure. The proof goes by induction on the length $n \geq 1$ of the path. For the base case $n=1$, by Def.
\ref{def:dependency}, the edge $(\predname{A}_0,\mathfrak{t}_0) \depof{\mathsf{r}_1}{i_1} (\predname{A}_1,\mathfrak{t}_1)$ implies the existence of base tuples $\mathfrak{u}_\ell \in \leastbaseof{\predname{B}_\ell}$, for all $\ell \in \interv{2}{h}$, such that $\predname{B}_{i_1} = \predname{A}_1$, $\mathfrak{u}_{i_1-1} = \mathfrak{t}_1$ and: \[\mathfrak{t}_0 \in \proj{\left(\basetupleof{\psi*\pi}{\set{x_1,\ldots,x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=2}^h \mathfrak{u}_\ell[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\right)}{x_1,\ldots,x_{\arityof{\predname{A}}}}\] Let $\mathfrak{u}_\ell \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \basetuplen{\ell}$, for all $\ell \in \interv{2}{h}$ and $\pi' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \pi * \Asterisk_{\ell=2}^h \pi_\ell$. Since $\mathfrak{t}_0$ is satisfiable, there exists a store $\nu'$ that agrees with $\nu$ over $x_1, \ldots, x_{\arityof{\predname{A}_0}}$, such that, moreover: \[\begin{array}{rl} \nu'(x) = \nu'(y) \text{ only if } x \formeq{\pi'} y, & \text{for all } x, y \in \fv{\psi * \pi'} \cup \hspace*{3.5cm} (\dagger) \\ & \bigcup_{\ell=2}^h \big(\comps^\sharp_\ell \cup \set{z_i \mid \tuple{z_1, \ldots, z_n} \in \interacs^\sharp_\ell(\tau), \tau\in\mathsf{Inter}}\big) \end{array}\] We define the configurations $(\mathcal{C}_1,\mathcal{I}_1,\varrho), \ldots, (\mathcal{C}_h,\mathcal{I}_h,\varrho)$ inductively, as follows: \begin{compactitem} % \item $\mathcal{C}_1 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{\nu'(y) \mid \compact{y} \text{ occurs in } \psi}$, % \item $\mathcal{I}_1 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{(\nu'(z_1), p_1, \ldots, \nu'(z_t), p_t) \mid \tuple{z_1.p_1, \ldots, z_t.p_t} \text{ occurs in } \psi}$, % \item for all $\ell \in \interv{2}{h}$, assuming $\mathcal{C}_1, \ldots, \mathcal{C}_{\ell-1}$ and $\mathcal{I}_1, \ldots, \mathcal{I}_{\ell-1}$ are defined, let: \[\begin{array}{rcl}
\mathcal{D}_\ell & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \bigcup_{i=1}^{\ell-1} \mathcal{C}_i \cup \bigcup_{i=\ell+1}^h \nu'(\comps^\sharp_i) \\ \mathcal{J}_\ell & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \bigcup_{i=1}^{\ell-1} \mathcal{I}_i \cup \bigcup_{i=\ell+1}^h \nu'(\interacs^\sharp_i) \end{array}\] % \end{compactitem} We prove first that $\mathcal{D}_\ell \cap \nu'(\comps^\sharp_\ell) = \emptyset$ and $\mathcal{J}_\ell \cap \nu'(\interacs^\sharp_\ell) = \emptyset$ (we prove only the first point, the second uses a similar reasoning), by induction on $\ell\in\interv{2}{h}$. For the base case $\mathcal{D}_2 \cap \nu'(\comps^\sharp_2) = \emptyset$, we prove the points below: \begin{compactitem} % \item $\mathcal{C}_1 \cap \nu'(\comps^\sharp_2) = \emptyset$: suppose, for a contradiction, that there exists $c \in \mathcal{C}_1 \cap \nu'(\comps^\sharp_2)$, then $c = \nu'(y)$, for a component atom $\compact{y}$ from $\psi$ and $c = \nu'(x)$, for some $x \in \comps^\sharp_2$. By ($\dagger$), we obtain $x \formeq{\pi'} y$, contradicting the existence of $\mathfrak{t}_0$. % \item $\nu'(\comps^\sharp_i) \cap \nu'(\comps^\sharp_2) = \emptyset$, for all $i \in \interv{3}{h}$: suppose, for a contradiction, that there exists $c \in \nu'(\comps^\sharp_i) \cap \nu'(\comps^\sharp_2)$, then $c = \nu'(x) = \nu'(y)$, for some $x \in \comps^\sharp_i$ and $y \in \comps^\sharp_2$. By ($\dagger$), we obtain $x \formeq{\pi'} y$, contradicting the existence of $\mathfrak{t}_0$. % \end{compactitem} We assume that $\mathcal{D}_j \cap \nu'(\comps^\sharp_j) = \emptyset$, for all $j \in \interv{2}{\ell-1}$. By Lemma \ref{lemma:sat-soundness}, there exist configurations $(\mathcal{C}_j, \mathcal{I}_j, \varrho)$, such that $\mathcal{C}_j \cap \mathcal{D}_j = \emptyset$ ($\ddagger$) and $(\mathcal{C}_j, \mathcal{I}_j, \varrho) \models^{\nu'} \predname{B}_j(x_1, \ldots, x_{\arityof{\predname{B}_j}})$, for all $j \in \interv{2}{\ell-1}$.
We prove $\mathcal{D}_\ell \cap \nu'(\comps^\sharp_\ell) = \emptyset$, by showing the following points: \begin{compactitem} % \item $\mathcal{C}_j \cap \nu'(\comps^\sharp_\ell) = \emptyset$, for all $j \in \interv{1}{\ell-1}$: suppose, for a contradiction, that there exists $c \in \mathcal{C}_j \cap \nu'(\comps^\sharp_\ell)$, for some $j \in \interv{1}{\ell-1}$, then $c \in \mathcal{C}_j \cap \mathcal{D}_j$, because $\nu'(\comps^\sharp_\ell) \subseteq \mathcal{D}_j$, in contradiction with $\mathcal{C}_j \cap \mathcal{D}_j = \emptyset$ ($\ddagger$). % \item $\nu'(\comps^\sharp_j) \cap \nu'(\comps^\sharp_\ell) = \emptyset$, for all $j \in \interv{2}{\ell-1}$: suppose, for a contradiction, that there exists $c \in \nu'(\comps^\sharp_j) \cap \nu'(\comps^\sharp_\ell)$, then $c = \nu'(x) = \nu'(y)$, for some $x \in \comps^\sharp_j$ and $y \in \comps^\sharp_\ell$. By ($\dagger$), we obtain $x \formeq{\pi'} y$, contradicting the existence of $\mathfrak{t}_0$. % \end{compactitem} Consequently, $\mathcal{D}_\ell \cap \nu'(\comps^\sharp_\ell) = \emptyset$, for all $\ell\in\interv{2}{h}$. By Lemma \ref{lemma:sat-soundness}, there exists a configuration $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho)$, such that $\mathcal{C}_\ell \cap \mathcal{D}_\ell = \emptyset$ and $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho) \models^{\nu'} \predname{B}_\ell(x_1, \ldots, x_{\arityof{\predname{B}_\ell}})$, for all $\ell\in\interv{2}{h}$. We obtain that $\mathcal{C}_i \cap \mathcal{C}_j = \emptyset$ and $\mathcal{I}_i \cap \mathcal{I}_j = \emptyset$, for all $1 \leq i < j \leq h$, meaning that the configuration $(\mathcal{C},\mathcal{I},\varrho) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C}_1, \mathcal{I}_1, \varrho) \bullet \ldots \bullet (\mathcal{C}_h, \mathcal{I}_h, \varrho)$ is defined, which leads to $(\mathcal{C},\mathcal{I},\varrho) \models^{\nu'} \phi$. \vspace*{\baselineskip}\noindent For the inductive step $n>1$, by Def.
\ref{def:dependency}, there exist base tuples $\mathfrak{u}_\ell \in \leastbaseof{\predname{B}_\ell}$, for all $\ell \in \interv{2}{h}$, such that $\predname{A}_1 = \predname{B}_{i_1}$, $\mathfrak{u}_{i_1-1} = \mathfrak{t}_1$ and: \[\mathfrak{t}_0 \in \proj{\left(\basetupleof{\psi*\pi}{\set{x_1,\ldots,x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=2}^h \mathfrak{u}_\ell[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\right)}{x_1,\ldots,x_{\arityof{\predname{A}}}}\] Then there exists a store $\nu'$ that agrees with $\nu$ over $x_1, \ldots, x_{\arityof{\predname{A}_0}}$ and satisfies ($\dagger$). Let $\nu'' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu'[x_1/z^{i_1}_1, \ldots, x_{\arityof{\predname{A}_1}}/z^{i_1}_{\arityof{\predname{A}_1}}]$. By the inductive hypothesis, since $(\predname{A}_1,\mathfrak{t}_1) \depof{\mathsf{r}_2}{i_2} \ldots \depof{\mathsf{r}_n}{i_n} (\predname{A}_n,\mathfrak{t}_n)$ is a path in $\graphof{\Delta}$, there exists a configuration $(\mathcal{C}', \mathcal{I}', \varrho)$, such that $(\mathcal{C}', \mathcal{I}', \varrho) \models^{\nu''} \predname{A}_1(z^{i_1}_1, \ldots, z^{i_1}_{\arityof{\predname{A}_1}})$, because $(\mathcal{C}', \mathcal{I}', \varrho) \models^{\nu'} \phi_1$, for the unfolding $\predname{A}_1(x_1, \ldots, x_{\arityof{\predname{A}_1}}) \unfold{(\mathsf{r}_2, i_2) \ldots (\mathsf{r}_n, i_n)}{\Delta} \phi_1$.
The required configuration is defined as $(\mathcal{C},\mathcal{I},\varrho) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C}_1,\mathcal{I}_1,\varrho) \bullet \ldots \bullet (\mathcal{C}_{i_1-1},\mathcal{I}_{i_1-1},\varrho) \bullet (\mathcal{C}',\mathcal{I}',\varrho) \bullet (\mathcal{C}_{i_1+1},\mathcal{I}_{i_1+1},\varrho) \bullet \ldots \bullet (\mathcal{C}_h,\mathcal{I}_h,\varrho)$, where $(\mathcal{C}_1,\mathcal{I}_1,\varrho)$, $\ldots$, $(\mathcal{C}_{i_1-1},\mathcal{I}_{i_1-1},\varrho)$ and $(\mathcal{C}_{i_1+1},\mathcal{I}_{i_1+1},\varrho)$, $\ldots$, $(\mathcal{C}_h,\mathcal{I}_h,\varrho)$ are defined as in the base case, by taking $\nu''$ instead of $\nu'$ and defining, for all $\ell \in \interv{2}{h} \setminus \set{i_1}$: \[\begin{array}{rcl} \mathcal{D}_\ell & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \mathcal{C}' \cup \bigcup_{i=1}^{\ell-1} \mathcal{C}_i \cup \bigcup_{i=\ell+1}^h \nu'(\comps^\sharp_i) \\ \mathcal{J}_\ell & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \mathcal{I}' \cup \bigcup_{i=1}^{\ell-1} \mathcal{I}_i \cup \bigcup_{i=\ell+1}^h \nu'(\interacs^\sharp_i) \end{array}\] The proof of the fact that $\mathcal{C}_1$, $\ldots$, $\mathcal{C}_{i_1-1}$, $\mathcal{C}'$, $\mathcal{C}_{i_1+1}$, $\ldots$, $\mathcal{C}_h$ and $\mathcal{I}_1$, $\ldots$, $\mathcal{I}_{i_1-1}$, $\mathcal{I}'$, $\mathcal{I}_{i_1+1}$, $\ldots$, $\mathcal{I}_h$ are pairwise disjoint, respectively, follows by the same argument as in the base case. 
\qed \end{proofE} \begin{myLemmaE}\label{lemma:unfold-completeness} Given an unfolding $\predname{A}_0(x_1, \ldots, x_{\arityof{\predname{A}_0}}) \unfold{(\mathsf{r}_1,i_1) \ldots (\mathsf{r}_n,i_n)}{\Delta} \phi$, a configuration $(\mathcal{C},\mathcal{I},\varrho)$ and a store $\nu$, such that $(\mathcal{C},\mathcal{I},\varrho)\models^\nu_\Delta \phi$, the graph $\graphof{\Delta}$ has a path $(\predname{A}_0,\basetuplen{0}) \depof{\mathsf{r}_1}{i_1} \ldots$ \\ $\depof{\mathsf{r}_n}{i_n} (\predname{A}_n,\basetuplen{n})$, for some $\basetuplen{0}, \ldots, \basetuplen{n} \in \mathsf{SatBase}$, such that $\nu(\comps^\sharp_0) \subseteq \mathcal{C}_0$, $\nu(\interacs^\sharp_0) \subseteq \mathcal{I}_0$ and $(\emptyset,\emptyset,\varrho) \models^\nu \pi_0$. \end{myLemmaE} \begin{proofE} Let $\mathsf{r}_1$ be the following rule: \[\predname{A}_0(x_1, \ldots, x_{\arityof{\predname{A}_0}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=2}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})\] where $\psi*\pi$ is a quantifier- and predicate-free formula and $\pi$ is, moreover, pure. The proof is by induction on the length $n \geq 1$ of the path.
For the base case $n=1$, we have $\phi = \exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=2}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, hence there exists a store $\nu'$, that agrees with $\nu$ over $x_1, \ldots, x_{\arityof{\predname{A}_0}}$, and configurations $(\mathcal{C}_1, \mathcal{I}_1, \varrho), \ldots, (\mathcal{C}_h, \mathcal{I}_h, \varrho)$, such that: \begin{compactitem} % \item $(\mathcal{C}_1, \mathcal{I}_1, \varrho) \models^{\nu'} \psi * \pi$, % \item $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho) \models^{\nu'}_{\Delta} \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, for all $\ell \in \interv{2}{h}$, and % \item $\gamma = (\mathcal{C}_1, \mathcal{I}_1, \varrho) \bullet \ldots \bullet (\mathcal{C}_h, \mathcal{I}_h, \varrho)$. % \end{compactitem} We consider the following base tuples: \begin{compactitem} % \item $\pbasetuplen{1} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \basetupleof{\psi*\pi}{\set{x_1, \ldots, x_{\arityof{\predname{A}_0}}}}$, % \item for all $\ell \in \interv{2}{h}$, there exist $\pbasetuplen{\ell} \in \leastbaseof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]$, such that $\mathcal{C}_\ell \subseteq \nu'(\overline{\comps}^\sharp_\ell)$, $\mathcal{I}_\ell \subseteq \nu'(\overline{\interacs}^\sharp_\ell)$ and $(\emptyset,\emptyset,\varrho) \models^{\nu'} \overline{\pi}_\ell$, by Lemma \ref{lemma:sat-completeness}. % \end{compactitem} By similar argument to the one from the proof of Lemma \ref{lemma:sat-completeness} (base case), we show that the composition $\bigotimes_{\ell=1}^h \pbasetuplen{\ell}$ is defined and let $\basetuplen{0} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \proj{\left(\bigotimes_{\ell=1}^h \pbasetuplen{\ell} \right)}{x_1, \ldots, x_{\arityof{\predname{A}}}}$. 
Moreover, we obtain $\nu(\comps^\sharp_0) \subseteq \mathcal{C}_0$, $\nu(\interacs^\sharp_0) \subseteq \mathcal{I}_0$ and $(\emptyset,\emptyset,\varrho) \models^\nu \pi_0$, as in the proof of Lemma \ref{lemma:sat-completeness}. Then, by Def. \ref{def:dependency}, $\graphof{\Delta}$ has an edge $(\predname{A}_0, \basetuplen{0}) \depof{\mathsf{r}_1}{i_1} (\predname{B}_{i_1-1}, \pbasetuplen{i_1-1})$. \vspace*{\baselineskip}\noindent For the inductive step $n>1$, let $\predname{B}_{i_1-1}(x_1, \ldots, x_{\arityof{\predname{B}_{i_1-1}}}) \unfold{(\mathsf{r}_2,i_2) \ldots (\mathsf{r}_n,i_n)}{\Delta} \phi_1$ be an unfolding, such that $\phi$ is obtained from $\exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=2}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, by replacing $\predname{B}_{i_1-1}(z^{i_1-1}_1, \ldots, z^{i_1-1}_{\arityof{\predname{B}_{i_1-1}}})$ with $\phi_1[x_1/z^{i_1-1}_1, \ldots, x_{\arityof{\predname{B}_{i_1-1}}}/z^{i_1-1}_{\arityof{\predname{B}_{i_1-1}}}]$. 
Since $(\mathcal{C},\mathcal{I},\varrho) \models^\nu_\Delta \phi$, there exists a store $\nu'$, that agrees with $\nu$ over $x_1, \ldots, x_{\arityof{\predname{A}_0}}$, and configurations $(\mathcal{C}_1, \mathcal{I}_1, \varrho), \ldots, (\mathcal{C}_h, \mathcal{I}_h, \varrho)$, where $\gamma = (\mathcal{C}_1, \mathcal{I}_1, \varrho) \bullet \ldots \bullet (\mathcal{C}_h, \mathcal{I}_h, \varrho)$ and the following hold: \begin{compactitem} % \item $(\mathcal{C}_1, \mathcal{I}_1, \varrho) \models^{\nu'} \psi * \pi$, % \item $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho) \models^{\nu'}_{\Delta} \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, for all $\ell \in \interv{2}{h} \setminus \set{i_1+1}$, % \item $(\mathcal{C}_{i_1-1}, \mathcal{I}_{i_1-1}, \varrho) \models^{\nu'}_\Delta \phi_1[x_1/z^{i_1-1}_1, \ldots, x_{\arityof{\predname{B}_{i_1-1}}}/z^{i_1-1}_{\arityof{\predname{B}_{i_1-1}}}]$, hence $(\mathcal{C}_{i_1-1}, \mathcal{I}_{i_1-1}, \varrho) \models^{\nu'}_\Delta \predname{B}_{i_1-1}(z^{i_1-1}_1, \ldots, z^{i_1-1}_{\arityof{\predname{B}_{i_1-1}}})$. % \end{compactitem} We consider the following base tuples: \begin{compactitem} % \item $\pbasetuplen{1} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \basetupleof{\psi*\pi}{\set{x_1, \ldots, x_{\arityof{\predname{A}_0}}}}$, % \item for all $\ell \in \interv{2}{h} \setminus \set{i_1+1}$, there exist $\pbasetuplen{\ell} \in \leastbaseof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]$, such that $\mathcal{C}_\ell \subseteq \nu'(\overline{\comps}^\sharp_\ell)$, $\mathcal{I}_\ell \subseteq \nu'(\overline{\interacs}^\sharp_\ell)$ and $(\emptyset,\emptyset,\varrho) \models^{\nu'} \overline{\pi}_\ell$, by Lemma \ref{lemma:sat-completeness}. 
% \item $\graphof{\Delta}$ has a path $(\predname{B}_{i_1-1},\basetuplen{1}) \depof{\mathsf{r}_2}{i_2} \ldots \depof{\mathsf{r}_n}{i_n}$ $(\predname{A}_n,\basetuplen{n})$, such that $\mathcal{C}_{i_1-1} \subseteq \nu'(\comps^\sharp_1)$, $\mathcal{I}_{i_1-1} \subseteq \nu'(\interacs^\sharp_1)$ and $(\emptyset,\emptyset,\varrho) \models^{\nu'} \pi_1$, by the inductive hypothesis. % \end{compactitem} By an argument similar to the one from Lemma \ref{lemma:sat-completeness}, the composition $\pbasetuplen{} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \bigotimes_{\ell=1}^{i_1-2} \pbasetuplen{\ell} \otimes \basetuplen{1} \otimes \bigotimes_{\ell=i_1}^h \pbasetuplen{\ell}$ is defined and let $\basetuplen{0} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \proj{\pbasetuplen{}}{\set{x_1, \ldots, x_{\arityof{\predname{A}_0}}}}$. Finally, the conditions $\nu(\comps^\sharp_0) \subseteq \mathcal{C}_0$, $\nu(\interacs^\sharp_0) \subseteq \mathcal{I}_0$ and $(\emptyset,\emptyset,\varrho) \models^\nu \pi_0$ follow from a similar argument to the one used in Lemma \ref{lemma:sat-completeness}. \qed \end{proofE} An \emph{elementary cycle} of $\graphof{\Delta}$ is a path from some vertex $(\predname{B},\mathfrak{u})$ back to itself, such that $(\predname{B},\mathfrak{u})$ does not occur on the path, except at its endpoints. The cycle is, moreover, \emph{reachable} from $(\predname{A},\mathfrak{t})$ if and only if there exists a path $(\predname{A},\mathfrak{t}) \depof{\mathsf{r}_1}{i_1} \ldots \depof{\mathsf{r}_n}{i_n} (\predname{B},\mathfrak{u})$ in $\graphof{\Delta}$. We reduce the complement of the $\bound{\Delta}{\predname{A}}$ problem, namely the existence of an infinite set of models of $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ of unbounded degree, to the existence of a reachable elementary cycle in $\graphof{\Delta'}$, where $\Delta'$ is obtained from $\Delta$, as described in the following. 
First, we consider, for each predicate $\predname{B} \in \defnof{\Delta}$, a predicate $\predname{B}'$, of arity $\arityof{\predname{B}}+1$, not in $\defnof{\Delta}$ i.e., the set of predicates for which there exists a rule in $\Delta$. Second, for each rule \(\predname{B}_0(x_1, \ldots, x_{\arityof{\predname{B}_0}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \Asterisk_{\ell=2}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}) \in \Delta\), where $\phi$ is a quantifier- and predicate-free formula and $\interactionvars{\phi} \subseteq \fv{\phi}$ denotes the subset of variables occurring in interaction atoms in $\phi$, the SID $\Delta'$ has the following rules: \begin{eqnarray} \predname{B}'_0(x_1, \ldots, x_{\arityof{\predname{B}_0}}, x_{\arityof{\predname{B}_0}+1}) & \leftarrow & \exists y_1 \ldots \exists y_m ~.~ \phi * \Asterisk_{\xi \in \interactionvars{\phi}} x_{\arityof{\predname{B}_0}+1} \not= \xi * \nonumber \\ & & \hspace{1.5cm} \Asterisk_{\ell=2}^h \predname{B}'_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}, x_{\arityof{\predname{B}_0}+1}) \label{eq:interaction-not-bound} \\ \predname{B}'_0(x_1, \ldots, x_{\arityof{\predname{B}_0}}, x_{\arityof{\predname{B}_0}+1}) & \leftarrow & \exists y_1 \ldots \exists y_m ~.~ \phi * x_{\arityof{\predname{B}_0}+1} = \xi * \nonumber \\ & & \hspace{1.5cm} \Asterisk_{\ell=2}^h \predname{B}'_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}, x_{\arityof{\predname{B}_0}+1}) \label{eq:interaction-bound} \\ \text{for each variable } \xi & \in & \interactionvars{\phi} \text{, that occurs in an interaction atom in $\phi$.} \nonumber \end{eqnarray} Intuitively, there exists a family of models (with respect to $\Delta$) of $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ of unbounded degree if and only if these are models of $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}+1} ~.~ \predname{A}'(x_1, 
\ldots, x_{\arityof{\predname{A}}+1})$ (with respect to $\Delta'$) and the last parameter of each predicate $\predname{B}' \in \defnof{\Delta'}$ can be mapped, in each of these models, to a component that occurs in unboundedly many interactions. The latter condition is equivalent to the existence of an elementary cycle, containing a rule of the form (\ref{eq:interaction-bound}), that is, moreover, reachable from some vertex $(\predname{A}',\mathfrak{t})$ of $\graphof{\Delta'}$, for some $\mathfrak{t}\in\mathsf{SatBase}$. This reduction is formalized below: \begin{myLemmaE}\label{lemma:length-unfolding} Let $\predname{A}$ be a predicate and $\gamma$ be a model of $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. Then there exists an unfolding $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \unfold{w}{\Delta} \psi$ of length $\lenof{w} \geq \frac {\log(\degreeof{\gamma}) - \log \beta_1} {\log \beta_2}$, where $\beta_1$ is the maximal number of component and interaction atoms and $\beta_2$ is the maximal number of predicate atoms, occurring in a rule of $\Delta$. \end{myLemmaE} \begin{proofE} Let $\gamma$ be a configuration, $\nu$ be a store and $\predname{A}$ be a predicate, such that $\gamma \models_\Delta^{\nu} \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. We consider the derivation tree $T$ induced by the definition of the $\models_\Delta^\nu$ relation. The nodes of $T$ are labelled by statements of the form $\gamma' \models_\Delta^{\nu'} \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$. We start from the root labelled by $\gamma \models_\Delta^{\nu} \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ and define the children of a node inductively.
For each node $\gamma' \models_\Delta^{\nu'} \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$, there exists a rule: \[\mathsf{r} ~:~ \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(z^1_1, \ldots, z^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(z^h_1, \ldots, z^h_{\arityof{\predname{B}_h}})\] and configurations $\gamma_0, \ldots, \gamma_h$, such that $\gamma' = \gamma_0 \bullet \dots \bullet \gamma_h$, $\gamma_0 \models_\Delta^{\nu''} \phi$ and $\gamma_i \models_\Delta^{\nu''} \predname{B}_i(z^i_1, \ldots, z^i_{\arityof{\predname{B}_i}})$ for every $i \in \interv{1}{h}$, where $\phi$ is a predicate-free formula and $\nu''$ is a store that agrees with $\nu'$ over $x_1, \dots, x_{\arityof{\predname{B}}}$. We define $\nu_\ell \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu''[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]$, for all $\ell \in \interv{1}{h}$. Then the node $\gamma' \models_\Delta^{\nu'} \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$ has $h$ children in $T$, where the $\ell$-th child is labelled by $\gamma_\ell \models_\Delta^{\nu_\ell} \predname{B}_\ell(x_1, \ldots, x_{\arityof{\predname{B}_\ell}})$, for all $\ell \in \interv{1}{h}$. The construction is finite since $\gamma \models_\Delta^{\nu} \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ has a finite inductive definition. We now consider the degree of the configurations which occur in $T$. By Def. \ref{def:composition}, we obtain $\degreeof{\gamma'} \leq \degreeof{\gamma_0} + \sum_{i=1}^h \degreeof{\gamma_i}$. With each $\gamma_i$ associated with a child of this node (except $\gamma_0$), we obtain that $\degreeof{\gamma}$ does not exceed $\beta_1$ times the number of nodes in $T$.
Since $h \leq\beta_2$, the height $n$ of $T$ is related to the degree $\degreeof{\gamma}$ by the inequality $\degreeof{\gamma} \leq \beta_1 \times \sum_{k=0}^n {\beta_2}^k \leq \beta_1 \times {\beta_2}^{n+1}$, leading to: \[n+1 \geq \frac {\log \degreeof{\gamma} - \log \beta_1} {\log \beta_2}\] Finally, with $T$ of height $n$, there exists a branch in $T$ (starting from the root) of length exactly $n+1$. Yet each branch of $T$ corresponds to an unfolding $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \unfold{w}{\Delta} \psi$, with $w$ obtained by concatenating, for every node $(\gamma,\nu,\predname{A})$ of the branch (from root to leaf), the pair $(\mathsf{r},i)$ consisting of: \begin{compactitem} % \item the rule $\mathsf{r} \in\Delta$ used to unfold $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, and % \item the position $i$ of this node among its siblings in $T$ (take $i=1$ for the root). % \end{compactitem} This unfolding has the required length, which concludes the proof. \qed \end{proofE} \begin{lemmaE}\label{lemma:unbounded} There exists an infinite sequence of configurations $\gamma_1, \gamma_2, \ldots$ such that $\gamma_i \models_\Delta \exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ and $\degreeof{\gamma_i} < \degreeof{\gamma_{i+1}}$, for all $i\geq1$, if and only if $\graphof{\Delta'}$ has an elementary cycle containing a rule (\ref{eq:interaction-bound}), reachable from a node $(\predname{A}',\mathfrak{t})$, for some $\mathfrak{t}\in\mathsf{SatBase}$. \end{lemmaE} \begin{proofE} ``$\Rightarrow$'' Let $\nu_1, \nu_2, \ldots$ be stores such that $\gamma_i \models_\Delta^{\nu_i} \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, for all $i \geq 1$.
By Lemma \ref{lemma:length-unfolding}, there exist unfoldings $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \unfold{w_i}{\Delta} \phi_i$ of lengths $\lenof{w_1} < \lenof{w_2} < \ldots$, such that $\gamma_i \models_\Delta^{\nu_i} \phi_i$, for all $i \geq 1$. For each configuration $\gamma_i$, let $d_i \in \mathbb{C}$ be a component, such that $\degreeof{\gamma_i} = \degreenode{d_i}{\gamma_i}$ (Def. \ref{def:degree}). By induction on $\lenof{w_i}\geq1$, we build unfoldings $\predname{A}'(x_1, \ldots, x_{\arityof{\predname{A}}}, x_{\arityof{\predname{A}}+1}) \unfold{w'_i}{\Delta'} \phi'_i$ that bind $x_{\arityof{\predname{A}}+1}$ to all variables bound to $d_i$, using rules of type (\ref{eq:interaction-bound}). By Lemma \ref{lemma:unfold-completeness}, $w'_1, w'_2, \ldots$ are labels of paths from $\graphof{\Delta'}$, that start in $(\predname{A}',\mathfrak{t}_1), (\predname{A}',\mathfrak{t}_2), \ldots$, respectively. Since $\graphof{\Delta'}$ is finite, we can choose an infinite subsequence of paths that start in the same node of $\graphof{\Delta'}$ and repeat the same vertex, with a rule of type (\ref{eq:interaction-bound}) in between. \vspace*{\baselineskip}\noindent ``$\Leftarrow$'' Let $(\predname{A}',\mathfrak{t}) \depof{\mathsf{r}'_1}{i_1} \ldots \depof{\mathsf{r}'_n}{i_n} (\predname{B}_n,\mathfrak{t}_n) \Depof{\mathsf{r}'_{n+1}}{i_{n+1}} \ldots \Depof{\mathsf{r}'_{n+p}}{i_{n+p}} (\predname{B}_n,\mathfrak{t}_n)$ be a path in $\graphof{\Delta'}$, such that one of the rules $\mathsf{r}'_{n+1}, \ldots, \mathsf{r}'_{n+p}$ is of the form (\ref{eq:interaction-bound}) and let $w'_i \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathsf{r}'_1,i_1) \ldots (\mathsf{r}'_n,i_n) [(\mathsf{r}'_{n+1},i_{n+1}) \ldots (\mathsf{r}'_{n+p},i_{n+p})]^i$, for all $i \geq 1$.
By Lemma \ref{lemma:unfold-soundness}, there exist unfoldings $\predname{A}'(x_1, \ldots, x_{\arityof{\predname{A}}+1}) \unfold{w'_i}{\Delta'} \phi'_i$, stores $\nu_i$ and configurations $\gamma_i$, such that $\gamma_i \models^{\nu_i} \phi'_i$. We define: \[\delta_i \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \cardof{\set{(c_1,p_1,\ldots,c_n,p_n) \in \mathcal{I}_i \mid \nu_i(x_{\arityof{\predname{A}}+1})=c_j, j \in \interv{1}{n}}} \text{, for all $i \geq 1$}\] where $\gamma_i \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C}_i, \mathcal{I}_i, \varrho_i)$. Since $\gamma_i \models^{\nu_i} \phi'_i$ and one of the rules $\mathsf{r}'_{n+1}, \ldots, \mathsf{r}'_{n+p}$ is of type (\ref{eq:interaction-bound}), the sequence $\delta_1, \delta_2, \ldots$ is strictly increasing. Moreover, we have $\delta_i \leq \degreeof{\gamma_i}$, for all $i \geq 1$, hence there exists a sequence of integers $1 \leq i_1 < i_2 < \ldots$ such that $\degreeof{\gamma_{i_j}} < \degreeof{\gamma_{i_{j+1}}}$, for all $j \geq 1$. \qed \end{proofE} The complexity result below uses a similar argument on the maximal size of (hence the number of) base tuples as in Theorem \ref{thm:sat}, leading to similar complexity gaps: \begin{theoremE}\label{thm:bound} $\klbound{\Delta}{\predname{A}}{k}{\infty}$ is in $\mathsf{co}$-$\mathsf{NP}$, $\klbound{\Delta}{\predname{A}}{\infty}{\ell}$ is in $\mathsf{EXP}$\ and $\bound{\Delta}{\predname{A}}$ is in $2\mathsf{EXP}$. \end{theoremE} \begin{proofE} Lemma \ref{lemma:unbounded} shows the reduction of the complement of $\bound{\Delta}{\predname{A}}$ to the existence of a reachable cycle in the graph $\graphof{\Delta'}$, where $\Delta'$ is constructed from $\Delta$ in polynomial time. Moreover, we have $\maxarityof{\Delta'} = \maxarityof{\Delta}+1$ and $\maxintersize{\Delta'} = \maxintersize{\Delta}$. We distinguish the three cases below: \begin{compactitem} \item $k<\infty$, $\ell=\infty$: in this case, we can define a non-deterministic algorithm as follows.
We guess the solution $(\tuple{W_1,\ldots, W_K}, W_{K+1}, \tuple{i_1,i_2,\ldots, i_n})$, where: \begin{compactitem} % \item $\tuple{W_1,\ldots,W_K}$ defines an acyclic witness for a satisfiable least solution of $\predname{A}'$ in $\Delta'$ constructed as in the proof of Thm.~\ref{thm:sat}; % \item $W_{K+1} = (T_{K+1}, r_{K+1}, e_{K+1,1},\ldots,e_{K+1,h_{K+1}})$ is similar to a regular entry $W_i$, that is, it contains a base tuple $T_{K+1}$, an index $r_{K+1}$ of a rule of $\Delta'$ and indices $e_{K+1,1}$, $\ldots$, $e_{K+1,h_{K+1}} \in \{1,\ldots, K\}$ such that $T_{K+1}$ is computed correctly by applying the rule $r_{K+1}$ from base tuples $T_{e_{K+1,1}},\ldots, T_{e_{K+1,h_{K+1}}}$ as explained in the proof of Thm.~\ref{thm:sat}; % \item $\tuple{i_1,i_2,\ldots,i_n}$ defines an acyclic path starting at the initial node in the directed acyclic graph defined by $W$, that is, $1 = i_1 < i_2 < \ldots < i_n \le K$ and moreover $i_{j+1} \in \{e_{i_j,1},\ldots, e_{i_j,h_{i_j}}\}$ for all $j\in\{1,2,\ldots,n-1\}$; % \item the path $\tuple{i_1,i_2,\ldots,i_n}$ can be \emph{closed} into a witness reachable cycle from $i_1$ by using $W_{K+1}$, that is, whenever (i) the rules $r_{i_n}$ and $r_{K+1}$ define the same predicate, and moreover $T_{i_n} = T_{K+1}$, (ii) the intersection $X = \{e_{K+1,1},\ldots,e_{K+1,h_{K+1}}\} \cap \{i_1, i_2, \ldots, i_n\} \not= \emptyset$, and (iii) if $i_j = \min X$, that is, the cycle starts at $i_j$, then at least one of the rules used along the cycle $r_{i_j}, r_{i_{j+1}}, \ldots, r_{i_{n-1}}, r_{K+1}$ is of the form (\ref{eq:interaction-bound}). % \end{compactitem} The solution is of linear size $\mathcal{O}(\size{\Delta'})$ by the same arguments as in the proof of Thm.~\ref{thm:sat}. Therefore, it can be guessed in polynomial time, and moreover checked in polynomial time following the conditions above.
This implies the membership of the complement problem in $\mathsf{NP}$, hence $\klbound{\Delta}{\predname{A}}{k}{\infty}$ is in $\mathsf{co}$-$\mathsf{NP}$. % \item $k=\infty$, $\ell<\infty$: in this case, using the algorithm from Fig. \ref{fig:base-graph}, the graph $\graphof{\Delta'}$ is constructed in time $2^{\mathit{poly}(\size{\Delta'})}$, as previously explained in the proof of Theorem \ref{thm:sat}. Finding a reachable cycle with the additional properties required by Lemma \ref{lemma:unbounded} can be done in two additional steps, namely first building the SCC decomposition of $\graphof{\Delta'}$ and then checking reachability of the SCCs containing edges derived from rules of the form (\ref{eq:interaction-bound}) from the SCCs containing vertices $(\predname{A}',\mathfrak{t})$. Both steps can be done in linear time in the size of $\graphof{\Delta'}$, e.g., using Tarjan's algorithm for the SCC decomposition and standard graph traversals. Therefore, the overall time complexity remains $2^{\mathit{poly}(\size{\Delta'})}$, and as such $\klbound{\Delta}{\predname{A}}{\infty}{\ell}$ is in $\mathsf{EXP}$. % \item $k=\infty$, $\ell=\infty$: following the same argument as in the previous point and noticing that the graph $\graphof{\Delta'}$ is constructed in time $2^{2^{\mathit{poly}(\size{\Delta'})}}$, we conclude that $\bound{\Delta}{\predname{A}}$ is in $2\mathsf{EXP}$. \qed \end{compactitem} \end{proofE} \noindent Moreover, the construction of $\graphof{\Delta'}$ allows us to prove the following cut-off result: \begin{propositionE}\label{prop:bound-cutoff} Let $\gamma$ be a configuration and $\nu$ be a store, such that $\gamma \models_\Delta^\nu \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$.
If $\klbound{\Delta}{\predname{A}}{k}{\ell}$ then \begin{inparaenum}[(1)] % \item $\degreeof{\gamma}=\mathit{poly}(\size{\Delta})$ if $k<\infty$, $\ell=\infty$, % \item $\degreeof{\gamma}=2^{\mathit{poly}(\size{\Delta})}$ if $k=\infty$, $\ell < \infty$ and % \item $\degreeof{\gamma}=2^{2^{\mathit{poly}(\size{\Delta})}}$ if $k=\infty$, $\ell=\infty$. \end{inparaenum} \end{propositionE} \begin{proofE} First, we show that, in all cases, the degree is bounded by $2^{B^*} \cdot L \cdot I$, where $B^*$ is the maximal length of a satisfiable base tuple in $\Delta'$, $L$ is the number of predicates in $\Delta'$ and $I$ is the maximal number of interactions defined in a rule in $\Delta'$. The maximal length $B^*$ of a satisfiable base tuple was considered in the proof of Thm.~\ref{thm:sat} to derive an upper bound on the number of distinct satisfiable base tuples for a SID. Then, $2^{B^*} \cdot L$ is a bound on the number of nodes in the graph $\graphof{\Delta'}$, as for every predicate there are at most $2^{B^*}$ satisfiable base tuples associated with it. Meanwhile, this value is also a bound on the length of the longest acyclic path in $\graphof{\Delta'}$. We are interested in acyclic paths because cycles in $\graphof{\Delta'}$ are guaranteed to never connect (use in interactions) the extra variable introduced in $\Delta'$ (otherwise the system would not be of bounded degree). But then, along the acyclic paths, at most $I$ interactions are defined at each step, hence the bound of $2^{B^*} \cdot L \cdot I$ on the total number of interactions that can involve the extra variable. Second, let us observe that both $L$ and $I$ are the same in $\Delta'$ and in $\Delta$ and equal to $\mathcal{O}(\size{\Delta})$.
Moreover, it was shown in the proof of Thm.~\ref{thm:sat} that $B^* = 2\alpha + 2\alpha^2 + p^{\min(\alpha,\beta)} \alpha^{\min(\alpha,\beta)}$, for $\alpha = \maxarityof{\Delta'} = \maxarityof{\Delta}+1$ and $\beta = \maxintersize{\Delta'} = \maxintersize{\Delta}$, where $p = \cardof{\mathcal{P}}$ is the number of ports. Hence, we distinguish the three cases, respectively (i) $B^* = \mathcal{O}(1)$ if $k<\infty$, $\ell=\infty$, (ii) $B^* = \mathit{poly}(\size{\Delta})$ if $k=\infty$, $\ell < \infty$ and (iii) $B^* = 2^{\mathit{poly}(\size{\Delta})}$ if $k=\infty$, $\ell=\infty$. Using the above in the expression $2^{B^*} \cdot L \cdot I$, we obtain the values of the bound as stated in the proposition. \qed \end{proofE} \section{Conclusions and Future Work} We study the satisfiability and entailment problems in a logic used to write proofs of correctness for dynamically reconfigurable distributed systems. The logic views the components and interactions of the network as resources and reasons also about the local states of the components. We reuse existing techniques for Separation Logic \cite{Reynolds02}, showing that our configuration logic is more expressive than \textsf{SL}, a fact which is confirmed by a number of complexity gaps. Closing these gaps and finding tight complexity classes in the more general cases is considered for future work. In particular, we aim at lifting the boundedness assumption on the degree of the configurations that must be considered to check the validity of entailments. \section{Definitions} \label{sec:definitions} We denote by $\mathbb{N}$ the set of positive integers. For a set $A$, we define $A^1 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} A$, $A^{i+1} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} A^i \times A$, for all $i \geq 1$, and $A^+ = \bigcup_{i\geq1} A^i$, where $\times$ denotes the Cartesian product. We denote by $\pow{A}$ the powerset of $A$ and by $\mpow{A}$ the power-multiset (set of multisets) of $A$.
The cardinality of a finite set $A$ is denoted as $\cardof{A}$. By writing $A \subseteq_{\mathit{fin}} B$ we mean that $A$ is a finite subset of $B$. Given integers $i$ and $j$, we write $\interv{i}{j}$ for the set $\set{i,i+1,\ldots,j}$, assumed to be empty if $i>j$. For a tuple $\vec{t} = \tuple{t_1, \ldots, t_n}$, we define $\lenof{\vec{t}} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} n$, $\at{\vec{t}}{i} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} t_i$ and $\at{\vec{t}}{\interv{i}{j}} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \tuple{t_i, \ldots, t_j}$. By writing $x=\mathit{poly}(y)$, for given $x,y\in\mathbb{N}$, we mean that there exists a polynomial function $f : \mathbb{N} \rightarrow \mathbb{N}$, such that $x \leq f(y)$. \subsection{Configurations} We model distributed systems as hypergraphs, whose vertices are \emph{components} (i.e., the nodes of the network) and whose hyperedges are \emph{interactions} (i.e., describing the way the components communicate with each other). The components are taken from a countably infinite set $\mathbb{C}$, called the \emph{universe}. We consider that each component executes its own copy of the same \emph{behavior}, represented as a finite-state machine $\mathbb{B}=(\mathcal{P},\mathcal{Q},\arrow{}{})$, where $\mathcal{P}$ is a finite set of \emph{ports}, $\mathcal{Q}$ is a finite set of \emph{states} and $\arrow{}{} \subseteq \mathcal{Q} \times \mathcal{P} \times \mathcal{Q}$ is a transition relation. Intuitively, each transition $q \arrow{p}{} q'$ of the behavior is triggered by a visible event, represented by the port $p$. For instance, the behavior of the components of the token ring system from Fig. \ref{fig:ring} (a) is $\mathbb{B}=(\set{\mathit{in},\mathit{out}}, \set{\mathsf{H},\mathsf{T}}, \set{\mathsf{H}\arrow{\mathit{in}}{}\mathsf{T}, \mathsf{T}\arrow{\mathit{out}}{}\mathsf{H}})$.
The universe $\mathbb{C}$ and the behavior $\mathbb{B}=(\mathcal{P},\mathcal{Q},\arrow{}{})$ are considered fixed in the rest of this paper. We introduce a logic for describing infinite sets of \emph{configurations} of distributed systems with unboundedly many components and interactions. A configuration is a snapshot of the system, describing the topology of the network (i.e., the set of present components and interactions) together with the local state of each component: \begin{definition}\label{def:configuration} A \emph{configuration} is a tuple $\gamma = (\mathcal{C},\mathcal{I}, \varrho)$, where: \begin{compactitem} % \item $\mathcal{C} \subseteq_{\mathit{fin}} \mathbb{C}$ is a finite set of \emph{components}, that are present in the configuration, % \item $\mathcal{I} \subseteq_{\mathit{fin}} (\mathbb{C}\times\mathcal{P})^+$ is a finite set of \emph{interactions}, where each interaction is a sequence $(c_1, p_1, \ldots, c_n, p_n) \in (\mathbb{C} \times \mathcal{P})^n$ that binds together the ports $p_1, \ldots, p_n$ of the pairwise distinct components $c_1, \ldots, c_n$, respectively. % \item $\varrho : \mathbb{C} \rightarrow \mathcal{Q}$ is a \emph{state map} associating with each (possibly absent) component a state of the behavior $\mathbb{B}$, such that the set $\set{c \in \mathbb{C} \mid \varrho(c) = q}$ is infinite, for each $q \in \mathcal{Q}$. % \end{compactitem} \end{definition} The last condition requires that there is an infinite pool of components in each state $q \in \mathcal{Q}$; since $\mathbb{C}$ is infinite and $\mathcal{Q}$ is finite, this condition can always be met. For example, the configurations of the token ring from Fig. \ref{fig:ring} (a) are $(\{c_1, \ldots, c_n\}, \{(c_i,\mathit{out},c_{(i \mod n) + 1},\mathit{in}) \mid i \in \interv{1}{n}\}, \varrho)$, where $\varrho:\mathbb{C}\rightarrow\set{\mathsf{H},\mathsf{T}}$ is a state map.
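The ring construction above can also be checked concretely. The sketch below is only an illustration, outside the formal development: the string component names and the encoding of an interaction $(c_1, p_1, \ldots, c_n, p_n)$ as a flat Python tuple are assumptions of the sketch, not notation from the paper.

```python
# Illustrative sketch (not part of the formal development): components are
# strings and an interaction (c_1, p_1, ..., c_n, p_n) is a flat tuple of
# alternating components and ports.

def ring_interactions(components):
    """Interactions {(c_i, out, c_{(i mod n)+1}, in) | i in [1, n]} of the ring,
    written here with 0-based indices."""
    n = len(components)
    return {(components[i], "out", components[(i + 1) % n], "in")
            for i in range(n)}

def occurrences(c, interactions):
    """Number of interactions in which component c occurs; inter[::2] selects
    the components of the flat tuple."""
    return sum(c in inter[::2] for inter in interactions)

comps = ["c1", "c2", "c3", "c4"]
inters = ring_interactions(comps)
# Every component occurs in exactly two interactions: once on its out port
# and once on its in port.
assert all(occurrences(c, inters) == 2 for c in comps)
```

This makes explicit why every such ring configuration has degree two, a fact stated later for the system of Fig. \ref{fig:ring} (a).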
The ring topology is described by the set of components $\{c_1, \ldots, c_n\}$ and interactions $\{(c_i,\mathit{out}, c_{(i \mod n) + 1},\mathit{in}) \mid i \in \interv{1}{n}\}$. Intuitively, an interaction $(c_1, p_1, \ldots, c_n, p_n)$ synchronizes transitions labeled by the ports $p_1, \ldots, p_n$ from the behaviors (i.e., replicas of the state machine $\mathbb{B}$) of $c_1, \ldots, c_n$, respectively. The interactions are classified according to their sequence of ports, called the \emph{interaction type}; let $\mathsf{Inter} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \mathcal{P}^+$ denote the set of interaction types. An interaction type models, for instance, the passing of a certain kind of message (e.g., request, acknowledgement, etc.). From an operational point of view, two interactions that differ by a permutation of indices, e.g., $(c_1, p_1, \ldots, c_n, p_n)$ and $(c_{i_1}, p_{i_1}, \ldots, c_{i_n}, p_{i_n})$ such that $\set{i_1, \ldots, i_n} = \interv{1}{n}$, are equivalent, since the set of transitions is the same; nevertheless, we choose to distinguish them in the following, exclusively for reasons of simplicity. \ifLongVersion Note that Def. \ref{def:configuration} allows configurations with interactions that involve absent components (i.e., not from the set $\mathcal{C}$ of present components in the given configuration). The following definition distinguishes such configurations: \begin{definition}\label{def:tightness} Let $\gamma = (\mathcal{C},\mathcal{I},\varrho)$ be a configuration. An interaction $(c_1, p_1, \ldots, c_n, p_n)$ is \emph{loose} in $\gamma$ if and only if $c_i \not\in \mathcal{C}$, for some $i \in \interv{1}{n}$. If $\mathcal{I}$ contains at least one interaction that is loose in $\gamma$, we say that $\gamma$ is \emph{loose}. An interaction (resp. configuration) that is not loose is said to be \emph{tight}. \end{definition} For instance, every configuration of the system from Fig.
\ref{fig:ring} (a) is tight and becomes loose if a component is deleted. Moreover, the reconfiguration program from Fig. \ref{fig:ring} (c) manipulates tight configurations only. On the other hand, loose configurations are useful for the definition of a composition operation, as the union of disjoint sets of components and interactions: \else Below we define the composition of configurations, as the union of disjoint sets of components and interactions: \fi \begin{definition}\label{def:composition} The composition of two configurations $\gamma_i = (\mathcal{C}_i, \mathcal{I}_i, \varrho)$, for $i = 1,2$, such that $\mathcal{C}_1 \cap \mathcal{C}_2 = \emptyset$ and $\mathcal{I}_1 \cap \mathcal{I}_2 = \emptyset$, is defined as $\gamma_1 \bullet \gamma_2 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C}_1 \cup \mathcal{C}_2, \mathcal{I}_1 \cup \mathcal{I}_2, \varrho)$. The composition $\gamma_1 \bullet \gamma_2$ is undefined if $\mathcal{C}_1 \cap \mathcal{C}_2 \neq \emptyset$ or $\mathcal{I}_1 \cap \mathcal{I}_2 \neq \emptyset$. \end{definition} \ifLongVersion Note that a tight configuration may be the result of composing two loose configurations, whereas the composition of tight configurations is always tight. The example below shows that, in most cases, a non-trivial decomposition of a tight configuration necessarily involves loose configurations: \begin{example}\label{ex:composition} Let $\gamma_i = (\mathcal{C}_i, \mathcal{I}_i, \varrho)$ be loose configurations, where $\mathcal{C}_i = \set{c_i}$, $\mathcal{I}_i = \set{(c_i, \mathit{out}, c_{3-i}, \mathit{in})}$, for all $i = 1,2$. Then $\gamma \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \gamma_1 \bullet \gamma_2$ is the tight configuration $\gamma = (\set{c_1, c_2}, \set{(c_1, \mathit{out}, c_2, \mathit{in}), (c_2, \mathit{out}, c_1, \mathit{in})}, \varrho)$.
The only way of decomposing $\gamma$ into two tight subconfigurations $\gamma'_1$ and $\gamma'_2$ is to take $\gamma'_1 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \gamma$ and $\gamma'_2 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\emptyset, \emptyset, \varrho)$, or vice versa. \hfill$\blacksquare$ \end{example} \fi In analogy with graphs, the \emph{degree} of a configuration is the maximum number of interactions from the configuration that involve a (possibly absent) component: \begin{definition}\label{def:degree} The \emph{degree} of a configuration $\gamma = (\mathcal{C}, \mathcal{I}, \varrho)$ is defined as $\degreeof{\gamma} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \max_{c\in\mathbb{C}} \degreenode{c}{\gamma}$, where $\degreenode{c}{\gamma} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \cardof{\{(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I} \mid c = c_i,~ i \in \interv{1}{n}\}}$. \end{definition} For instance, the configuration of the system from Fig. \ref{fig:ring} (a) has degree two. \subsection{Configuration Logic} Let $\mathbb{V}$ and $\mathbb{A}$ be countably infinite sets of \emph{variables} and \emph{predicates}, respectively. For each predicate $\predname{A} \in \mathbb{A}$, we denote its arity by $\arityof{\predname{A}}$. The formul{\ae} of the \emph{Configuration Logic} (\textsf{CL}) are described inductively by the following syntax: \[\phi := \predname{emp} \mid \compact{x} \mid \interacn{x_1}{p_1}{x_n}{p_n} \mid \compin{x}{q} \mid x=y \mid x\neq y \mid \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \mid \phi * \phi \mid \exists x ~.~ \phi\] where $x, y, x_1, \ldots \in \mathbb{V}$, $q \in \mathcal{Q}$ and $\predname{A} \in \mathbb{A}$. A formula $\compact{x}$, $\interacn{x_1}{p_1}{x_n}{p_n}$, $\compin{x}{q}$ and $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ is called a \emph{component}, \emph{interaction}, \emph{state} and \emph{predicate} atom, respectively.
Sometimes, we use the shorthand $\compactin{x}{q} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \compact{x} * \compin{x}{q}$. Intuitively, the formula $\compactin{x}{q} * \compactin{y}{q'} * \interactwo{x}{out}{y}{in} * \interactwo{x}{in}{y}{out}$ describes a configuration consisting of two distinct components, denoted by the values of $x$ and $y$, in states $q$ and $q'$, respectively, and two interactions binding the $\mathit{out}$ port of one to the $\mathit{in}$ port of the other component. \ifLongVersion For instance, $\gamma = \gamma_1 \bullet \gamma_2$ from Example \ref{ex:composition} is such a configuration. \fi A formula is said to be \emph{pure} if and only if it consists of state atoms, equalities and disequalities. A formula with no occurrences of predicate atoms (resp. existential quantifiers) is called \emph{predicate-free} (resp. \emph{quantifier-free}). A variable is \emph{free} if it does not occur within the scope of an existential quantifier; we denote by $\fv{\phi}$ the set of free variables of $\phi$. A \emph{sentence} is a formula with no free variables. A \emph{substitution} $\phi[x_1/y_1 \ldots x_n/y_n]$ replaces simultaneously every free occurrence of $x_i$ by $y_i$ in $\phi$, for all $i \in \interv{1}{n}$. Before defining the semantics of \textsf{CL}\ formul{\ae}, we introduce the set of inductive definitions that assigns meaning to predicates: \begin{definition}\label{def:sid} A \emph{set of inductive definitions (SID)} $\Delta$ consists of \emph{rules} $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi$, where $x_1, \ldots, x_{\arityof{\predname{A}}}$ are pairwise distinct variables, called \emph{parameters}, such that $\fv{\phi} \subseteq \set{x_1, \ldots, x_{\arityof{\predname{A}}}}$. The rule $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi$ \emph{defines} $\predname{A}$ and we denote by $\defn{\Delta}{\predname{A}}$ the set of rules from $\Delta$ that define $\predname{A}$.
\end{definition} Note that having distinct parameters in a rule is without loss of generality, as e.g., a rule $\predname{A}(x_1, x_1) \leftarrow \phi$ can be equivalently written as $\predname{A}(x_1, x_2) \leftarrow x_1 = x_2 * \phi$. As a convention, we shall always use the names $x_1, \ldots, x_{\arityof{\predname{A}}}$ for the parameters of a rule that defines $\predname{A}$. The semantics of \textsf{CL}\ formul{\ae} is defined by a satisfaction relation $\gamma \models^\nu_\Delta \phi$ between configurations and formul{\ae}. This relation is parameterized by a \emph{store} $\nu : \mathbb{V} \rightarrow \mathbb{C}$ mapping the free variables of a formula into components from the universe (possibly absent from $\gamma$) and an SID $\Delta$. We write $\nu[x \leftarrow c]$ for the store that maps $x$ into $c$ and agrees with $\nu$ on all variables other than $x$. The definition of the satisfaction relation is by induction on the structure of formul{\ae}, where $\gamma = (\mathcal{C}, \mathcal{I}, \varrho)$ is a configuration (Def. 
\ref{def:configuration}): \[\begin{array}{rclcl} \gamma & \models^\nu_\Delta & \predname{emp} & \iff & \mathcal{C} = \emptyset \text{ and } \mathcal{I} = \emptyset \\ \gamma & \models^\nu_\Delta & \compact{x} & \iff & \mathcal{C} = \set{\nu(x)} \text{ and } \mathcal{I} = \emptyset \\ \gamma & \models^\nu_\Delta & \interacn{x_1}{p_1}{x_n}{p_n} & \iff & \mathcal{C} = \emptyset \text{ and } \mathcal{I} = \set{(\nu(x_1), p_1, \ldots, \nu(x_n), p_n)} \\ \gamma & \models^\nu_\Delta & \compin{x}{q} & \iff & \gamma \models^\nu_\Delta \predname{emp} \text{ and } \varrho(\nu(x)) = q \\ \gamma & \models^\nu_\Delta & x \sim y & \iff & \gamma \models^\nu_\Delta \predname{emp} \text{ and } \nu(x)\sim\nu(y) \text{, for all } \sim \in \set{=,\neq} \\ \gamma & \models^\nu_\Delta & \predname{A}(y_1, \ldots, y_{\arityof{\predname{A}}}) & \iff & \gamma \models^\nu_\Delta \phi[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}] \text{, for some rule } \\ &&&& \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi \text{ from } \Delta \\ \gamma & \models^\nu_\Delta & \phi_1 * \phi_2 & \iff & \text{exist } \gamma_1, \gamma_2 \text{, such that } \gamma = \gamma_1 \bullet \gamma_2 \text{ and } \gamma_i \models^\nu_\Delta \phi_i \text{, for } i = 1,2 \\ \gamma & \models^\nu_\Delta & \exists x ~.~ \phi & \iff & \gamma \models^{\nu[x \leftarrow c]}_\Delta \phi \text{, for some } c \in \mathbb{C} \end{array}\] If $\phi$ is a sentence, the satisfaction relation $\gamma \models^\nu_\Delta \phi$ does not depend on the store, written $\gamma \models_\Delta \phi$, in which case we say that $\gamma$ is a \emph{model} of $\phi$. If $\phi$ is a predicate-free formula, the satisfaction relation does not depend on the SID, written $\gamma \models^\nu \phi$. A formula $\phi$ is \emph{satisfiable} if and only if the sentence $\exists x_1 \ldots \exists x_n ~.~ \phi$ has a model, where $\fv{\phi} = \set{x_1, \ldots, x_n}$. 
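The satisfaction relation can be prototyped for quantifier-free, predicate-free formul{\ae} over finite configurations. The following Python sketch (the tuple encoding of formul{\ae} is ours) implements the separating conjunction naively, by enumerating all splits of the component and interaction sets; this is exponential, but suffices to experiment with small configurations.

```python
from itertools import combinations

def splits(s):
    """All ways of splitting a finite set into two disjoint parts."""
    elems = list(s)
    for r in range(len(elems) + 1):
        for left in combinations(elems, r):
            yield set(left), set(elems) - set(left)

def sat(gamma, nu, rho, phi):
    """Does configuration gamma = (C, I) with state map rho satisfy phi under
    store nu?  Formulas are tuples: ('emp',), ('comp', x),
    ('inter', x1, p1, ..., xn, pn), ('state', x, q), ('eq', x, y),
    ('neq', x, y) and ('star', phi1, phi2)."""
    C, I = gamma
    tag = phi[0]
    if tag == 'emp':
        return not C and not I
    if tag == 'comp':
        return C == {nu[phi[1]]} and not I
    if tag == 'inter':
        # even positions of the atom are variables, odd positions are ports
        inter = tuple(nu[a] if k % 2 == 0 else a for k, a in enumerate(phi[1:]))
        return not C and I == {inter}
    if tag == 'state':
        return not C and not I and rho[nu[phi[1]]] == phi[2]
    if tag in ('eq', 'neq'):
        same = nu[phi[1]] == nu[phi[2]]
        return not C and not I and (same if tag == 'eq' else not same)
    if tag == 'star':
        return any(sat((C1, I1), nu, rho, phi[1]) and sat((C2, I2), nu, rho, phi[2])
                   for C1, C2 in splits(C) for I1, I2 in splits(I))
    raise ValueError(f"unknown connective {tag}")
```

For instance, the two-component configuration from the earlier example satisfies the separating conjunction of its two component atoms and two interaction atoms.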
A formula $\phi$ \emph{entails} a formula $\psi$, written $\phi \models_\Delta \psi$, if and only if, for any configuration $\gamma$ and store $\nu$, we have $\gamma \models^\nu_\Delta \phi$ only if $\gamma \models^\nu_\Delta \psi$. \subsection{Separation Logic} \label{sec:sl} Separation Logic (\textsf{SL}) \cite{Reynolds02} will be used in the following to prove several technical results concerning the decidability and complexity of certain decision problems for \textsf{CL}. To keep the paper self-contained, we define \textsf{SL}\ below. The syntax of \textsf{SL}\ formul{\ae} is described by the following grammar: \[\phi := \predname{emp} \mid x_0 \mapsto (x_1, \ldots, x_\mathfrak{K}) \mid x = y \mid x \neq y \mid \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \mid \phi * \phi \mid \exists x ~.~ \phi\] where $x, y, x_0, x_1, \ldots \in \mathbb{V}$, $\predname{A} \in \mathbb{A}$ and $\mathfrak{K} \geq 1$ is an integer constant. Formul{\ae} of \textsf{SL}\ are interpreted over finite partial functions $\mathsf{h} : \mathbb{C} \rightharpoonup_{\scriptscriptstyle\mathit{fin}} \mathbb{C}^\mathfrak{K}$, called \emph{heaps}\footnote{We use the universe $\mathbb{C}$ here for simplicity; the definition works with any countably infinite set.}, by a satisfaction relation $\mathsf{h} \Vdash^\nu_\Delta \phi$, defined inductively as follows: \[\begin{array}{rclcl} \mathsf{h} & \Vdash^\nu_\Delta & \predname{emp} & \iff & \mathsf{h} = \emptyset \\ \mathsf{h} & \Vdash^\nu_\Delta & x_0 \mapsto (x_1,\ldots,x_\mathfrak{K}) & \iff & \dom{\mathsf{h}} = \set{\nu(x_0)} \text{ and } \mathsf{h}(\nu(x_0)) = \tuple{\nu(x_1), \ldots, \nu(x_\mathfrak{K})} \\ \mathsf{h} & \Vdash^\nu_\Delta & \phi_1 * \phi_2 & \iff & \text{there exist } \mathsf{h}_1, \mathsf{h}_2 \text{ such that } \dom{\mathsf{h}_1} \cap \dom{\mathsf{h}_2} = \emptyset, \\ &&&& \mathsf{h} = \mathsf{h}_1 \cup \mathsf{h}_2 \text{ and } \mathsf{h}_i \Vdash^\nu_\Delta \phi_i \text{, for both } i = 1,2 \\ \end{array}\] where $\dom{\mathsf{h}}
\stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{c \in \mathbb{C} \mid \mathsf{h}(c) \text{ is defined}}$ is the domain of the heap and (dis-)equalities, predicate atoms and existential quantifiers are defined same as for \textsf{CL}. \subsection{Decision Problems} We define the decision problems that are the focus of the upcoming sections. As usual, a decision problem is a class of yes/no queries that differ only in their input. In our case, the input consists of an SID and one or two predicates, written between square brackets. \begin{definition}\label{def:decision} We consider the following problems, for a SID $\Delta$ and predicates $\predname{A}, \predname{B} \in \mathbb{A}$: \begin{enumerate} % \item\label{decision:sat} $\sat{\Delta}{\predname{A}}$: is the sentence $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ satisfiable for $\Delta$? % \ifLongVersion \item\label{decision:tight} $\tight{\Delta}{\predname{A}}$: is every model $\gamma$ of the sentence $\exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ a tight configuration? \fi % \item\label{decision:bound} $\bound{\Delta}{\predname{A}}$: is the set $\set{\degreeof{\gamma} \mid \gamma \models_\Delta \exists x_1 \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})}$ finite? % \item\label{decision:entl} $\entl{\Delta}{\predname{A}}{\predname{B}}$: does $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \models_\Delta \exists x_{\arityof{\predname{B}}+1} \ldots \exists x_{\arityof{\predname{A}}} ~.~ \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$ hold? % \end{enumerate} \end{definition} We define the size of a formula $\phi$ as the total number of occurrences of symbols needed to write it down, denoted by $\sizeof{\phi}$. 
The size of a SID $\Delta$ is $\sizeof{\Delta} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \sum_{\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi \in \Delta} \sizeof{\phi} + \arityof{\predname{A}} + 1$. Other parameters of a SID $\Delta$ are its: \begin{compactitem} % \item \emph{maximal arity}, denoted as $\maxarityof{\Delta} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \max\set{\arityof{\predname{A}} \mid \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi \in \Delta}$, \item \emph{width}, denoted as $\maxwidthof{\Delta} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \max\set{\sizeof{\phi} \mid \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi \in \Delta}$, % \item \emph{maximal interaction size}, denoted as $\maxintersize{\Delta} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \max\{n \mid \interacn{x_1}{p_1}{x_n}{p_n} \text{ occurs in } \phi, \\ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi \in \Delta\}$. \end{compactitem} For each decision problem $\prob{\Delta}{\predname{A},\predname{B}}$, we consider its $(k,\ell)$-bounded versions $\klprob{\Delta}{\predname{A},\predname{B}}{k}{\ell}$, obtained by restricting the predicates and interaction atoms occurring in $\Delta$ to $\maxarityof{\Delta} \leq k$ and $\maxintersize{\Delta} \leq \ell$, respectively, where $k$ and $\ell$ are either positive integers or infinity. We consider, for each $\prob{\Delta}{\predname{A},\predname{B}}$, the subproblems $\klprob{\Delta}{\predname{A},\predname{B}}{k}{\ell}$ corresponding to the three cases \begin{inparaenum}[(1)] \item $k<\infty$ and $\ell=\infty$, \item $k=\infty$ and $\ell<\infty$, and \item $k=\infty$ and $\ell=\infty$. \end{inparaenum} As we explain next, this is because, for the decision problems considered (Def. \ref{def:decision}), the complexity for the case $k<\infty, \ell<\infty$ matches the one for the case $k<\infty, \ell=\infty$.
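The parameters above can be computed directly from a structured representation of the rules. The Python sketch below (the encoding of atoms is ours) measures width simply as the number of atoms in a rule body, a coarse proxy for $\sizeof{\phi}$.

```python
def sid_parameters(rules):
    """rules: list of (pred_name, n_params, body_atoms), with interaction
    atoms ('inter', x1, p1, ..., xn, pn) and predicate atoms
    ('pred', name, args).  Returns (maximal arity, maximal interaction size,
    width), where width counts atoms rather than symbols."""
    max_arity = max(n for _, n, _ in rules)
    inter_sizes = [len(atom[1:]) // 2
                   for _, _, body in rules
                   for atom in body if atom[0] == 'inter']
    max_inter = max(inter_sizes, default=0)
    width = max(len(body) for _, _, body in rules)
    return max_arity, max_inter, width
```

A chain-like SID with binary predicates and binary interactions, for instance, has maximal arity two and maximal interaction size two.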
\ifLongVersion Moreover, for each problem $\prob{\Delta}{\predname{A}}$ (resp. $\prob{\Delta}{\predname{A},\predname{B}}$), we consider its general version $\prob{\Delta}{\phi}$ (resp. $\prob{\Delta}{\phi,\psi}$), where $\phi$ and $\psi$ are \textsf{CL}\ formul{\ae}, whose predicates are interpreted by the rules in $\Delta$. The generalized problems $\prob{\Delta}{\phi}$ involving one predicate atom (points \ref{decision:sat} and \ref{decision:bound} of Def. \ref{def:decision}) can be reduced to their restricted versions $\prob{\Delta}{\predname{A}}$, by introducing a fresh predicate $\predname{A}_\phi$ (not occurring in $\Delta$), of arity $n\geq0$ and a rule $\predname{A}_\phi(x_1, \ldots, x_n) \leftarrow \phi$, where $\fv{\phi} = \set{x_1, \ldots, x_n}$. This reduction is linear in the size of the input and changes none of the following complexity results. Concerning the generalized entailment problem $\entl{\Delta}{\phi}{\psi}$, the reduction to the problem $\entl{\Delta}{\predname{A}}{\predname{B}}$ (Def. \ref{def:decision} \ref{decision:entl}) might affect its decidability status, which is subject to syntactic restrictions on the rules in $\Delta$ (details will be given in \S\ref{sec:entailment}). Satisfiability (\ref{decision:sat}) and entailment (\ref{decision:entl}) arise naturally during verification of reconfiguration programs. For instance, $\sat{\Delta}{\phi}$ asks whether the set of configurations described by a specification $\phi$ (e.g., a pre-, post-condition, or a loop invariant) is empty or not (e.g., an empty precondition typically denotes a vacuous verification condition), whereas $\entl{\Delta}{\phi}{\psi}$ is used as a side condition for the Hoare rule of consequence, as in, e.g., the proof from Fig. \ref{fig:ring} (c). Moreover, entailments must be proved when checking inductiveness of a user-provided loop invariant.
In contrast, the applications of the tightness (\ref{decision:tight}) and boundedness (\ref{decision:bound}) problems are less obvious and require a few explanations. The $\tight{\Delta}{\phi}$ problem is relevant in the context of compositional verification of distributed systems. Suppose we have a distributed system consisting of two interacting subsystems, whose sets of initial configurations are described by $\phi_1$ and $\phi_2$, respectively, i.e., the initial configurations of the system are described by $\phi_1 * \phi_2$. The compositional verification of a reconfiguration program $\mathsf{P}$ reduces checking the validity of a Hoare triple $\hoare{\phi_1 * \phi_2}{P}{\psi_1 * \psi_2}$ to checking the validity of the simpler triples $\hoare{\phi_i}{P}{\psi_i}$, for $i = 1,2$. Unfortunately, this appealing method faces the problem of \emph{interference} between the subsystems described by $\phi_1$ and $\phi_2$, namely that the loose interactions of $\phi_i$ might connect to present components of $\phi_{3-i}$ and change their states during the execution. In this case, it is sufficient to infer the sets of cross-boundary interactions $\mathcal{F}_{i,3-i}$, describing those interactions from $\phi_i$ that connect to components from $\phi_{3-i}$, and check the validity of the triples $\hoare{\phi_i * \mathcal{F}_{3-i,i}}{P}{\psi_i * \mathcal{F}_{3-i,i}}$, under a relaxed semantics which considers that the interactions in $\mathcal{F}_{3-i,i}$ can fire anytime, or according to the order described by some regular language. However, if $\tight{\Delta}{\phi_1}$ (resp. $\tight{\Delta}{\phi_2}$) has a negative answer, the set of cross-boundary interactions may be unbounded, hence not representable by a finite separating conjunction of interaction atoms $\mathcal{F}_{1,2}$ (resp. $\mathcal{F}_{2,1}$).
Thus, the tightness problem is important in establishing necessary conditions under which a compositional proof rule can be applied to checking correctness of reconfigurations in a distributed system. The $\bound{\Delta}{\phi}$ problem is used to check a necessary condition for the decidability of entailments, i.e., $\entl{\Delta}{\phi}{\psi}$. If $\bound{\Delta}{\phi}$ has a positive answer, we can reduce the problem $\entl{\Delta}{\phi}{\psi}$ to an entailment problem for \textsf{SL}, which is always interpreted over heaps of bounded degree \cite{EchenimIosifPeltier21}. Otherwise, the decidability status of the entailment problem is open, for configurations of unbounded degree, such as the one described by the example below. \begin{example}\label{ex:star} The following SID describes star topologies with a central controller connected to an unbounded number of worker stations: \[\mathit{Star}(x) \leftarrow \compact{x} * \mathit{Worker}(x),~ \mathit{Worker}(x) \leftarrow \predname{emp} \mid \exists y ~.~ \interactwo{x}{out}{y}{in} * \compact{y} * \mathit{Worker}(x)~ \blacksquare\] \end{example} \fi \section{Entailment} \label{sec:entailment} This section is concerned with the entailment problem $\entl{\Delta}{\predname{A}}{\predname{B}}$, that asks whether $\gamma \models^\nu_\Delta \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} ~.~ \predname{B}(x_1,\ldots,x_{\arityof{\predname{B}}})$, for every configuration $\gamma$ and store $\nu$, such that $\gamma \models^\nu_\Delta \predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}})$. For instance, the proof from Fig. \ref{fig:ring} (c) relies on the following entailments, which occur as the side conditions of the Hoare logic rule of consequence: \[\begin{array}{l} \ring{h}{t}(y) \models_\Delta \exists x \exists z .
\compactin{y}{\mathsf{H}} * \interac{y}{out}{z}{in} * \chain{h-1}{t}(z,x) * \interac{x}{out}{y}{in} \\ \compactin{z}{\mathsf{H}} * \interac{z}{out}{x}{in} * \chain{h-1}{t}(x,y) * \interac{y}{out}{z}{in} \models_\Delta \ring{h}{t}(z) \end{array}\] By introducing two fresh predicates $\predname{A}_1$ and $\predname{A}_2$, defined by the rules: \begin{align} \predname{A}_1(x_1) & \leftarrow \exists y \exists z . \compactin{x_1}{\mathsf{H}} \!*\! \interac{x_1}{out}{z}{in} * \chain{h-1}{t}(z,y) \!*\! \interac{y}{out}{x_1}{in} \label{eq:right-rule} \\[-1mm] \predname{A}_2(x_1,x_2) & \leftarrow \exists z . \compactin{x_1}{\mathsf{H}} * \interac{x_1}{out}{z}{in} * \chain{h-1}{t}(z,x_2) * \interac{x_2}{out}{x_1}{in} \label{eq:left-rule} \end{align} the above entailments are equivalent to $\entl{\Delta}{\ring{h}{t}}{\predname{A}_1}$ and $\entl{\Delta}{\predname{A}_2}{\ring{h}{t}}$, respectively, where $\Delta$ consists of the rules (\ref{eq:right-rule}) and (\ref{eq:left-rule}), together with the rules that define the $\ring{h}{t}$ and $\chain{h}{t}$ predicates (\S\ref{sec:running-example}). We show that the entailment problem is undecidable, in general (Thm. \ref{thm:entl-undecidable}), and recover a decidable fragment, by means of three syntactic conditions, typically met in our examples. These conditions use the following notion of \emph{profile}: \begin{definition}\label{def:profile} The \emph{profile} of a SID $\Delta$ is the pointwise greatest function $\profile{\Delta} : \mathbb{A} \rightarrow \pow{\mathbb{N}}$, mapping each predicate $\predname{A}$ into a subset of $\interv{1}{\arityof{\predname{A}}}$, such that, for each rule $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \phi$ from $\Delta$, each atom $\predname{B}(y_1, \ldots, y_{\arityof{\predname{B}}})$ from $\phi$ and each $i \in \profile{\Delta}(\predname{B})$, there exists $j \in \profile{\Delta}(\predname{A})$, such that $x_j$ and $y_i$ are the same variable. 
\end{definition} The profile identifies the parameters of a predicate that are always replaced by a variable $x_1, \ldots, x_{\arityof{\predname{A}}}$ in each unfolding of $\predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}})$, according to the rules in $\Delta$; it is computed by a greatest fixpoint iteration, in time $\mathit{poly}(\size{\Delta})$. \begin{definition}\label{def:pcr} A rule $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \Asterisk_{\ell=1}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, where $\phi$ is a quantifier- and predicate-free formula, is said to be: \begin{enumerate} % \item\label{it:progressing} \emph{progressing} if and only if $\phi = \compact{x_1} * \psi$, where $\psi$ consists of interaction atoms involving $x_1$ and (dis-)equalities, such that $\bigcup_{\ell=1}^h \set{z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}} = \set{x_2, \ldots, x_{\arityof{\predname{A}}}} \cup \set{y_1, \ldots, y_m}$, % \item\label{it:connected} \emph{connected} if and only if, for each $\ell\in\interv{1}{h}$ there exists an interaction atom in $\psi$ that contains both $z^\ell_1$ and a variable from $\set{x_1} \cup \set{x_i \mid i \in \profile{\Delta}(\predname{A})}$, % \item\label{it:restricted} \emph{equationally-restricted (e-restricted)} if and only if, for every disequation $x \neq y$ from $\phi$, we have $\set{x,y} \cap \set{x_i \mid i \in \profile{\Delta}(\predname{A})} \neq\emptyset$. % \end{enumerate} A SID $\Delta$ is \emph{progressing}, \emph{connected} and \emph{e-restricted} if and only if each rule in $\Delta$ is \emph{progressing}, \emph{connected} and \emph{e-restricted}, respectively. \end{definition} For example, the SID consisting of the rules from \S\ref{sec:running-example}, together with rules (\ref{eq:right-rule}) and (\ref{eq:left-rule}) is progressing, connected and e-restricted. 
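The greatest-fixpoint computation of the profile mentioned above can be sketched in Python as follows (the rule encoding is ours; only the predicate atoms of a rule body matter for the profile):

```python
def profile(sid):
    """Greatest-fixpoint computation of the profile of an SID.
    sid maps each predicate name to a nonempty list of rules; a rule is a
    pair (params, atoms), where params lists the parameter names and atoms
    lists the predicate atoms (name, args) of the rule body."""
    arity = {a: len(rules[0][0]) for a, rules in sid.items()}
    # start from the top element: every parameter index is in the profile
    prof = {a: set(range(1, arity[a] + 1)) for a in sid}
    changed = True
    while changed:
        changed = False
        for a, rules in sid.items():
            for params, atoms in rules:
                kept = {params[j - 1] for j in prof[a]}
                for b, args in atoms:
                    for i in list(prof[b]):
                        # drop i when parameter i of b is not instantiated by
                        # a parameter that a itself keeps in its profile
                        if args[i - 1] not in kept:
                            prof[b].discard(i)
                            changed = True
    return prof
```

For a rule $\predname{A}(x_1,x_2) \leftarrow \ldots * \predname{B}(x_2,y)$ with $y$ existentially quantified, the iteration drops the second parameter of $\predname{B}$ from the profile.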
\begin{myTextE} For a configuration $\gamma = (\mathcal{C},\mathcal{I},\varrho)$, let: \[\nodesof{\gamma} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \mathcal{C} \cup \Set{c_i \mid (c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}, i \in \interv{1}{n}}\] be the set of (possibly absent) components that occur in $\gamma$. \end{myTextE} \begin{myLemmaE}\label{lemma:progressing-sid} Given a progressing SID $\Delta$ and a predicate $\predname{A}\in\defnof{\Delta}$, for any configuration $\gamma = (\mathcal{C}, \mathcal{I}, \varrho)$ and store $\nu$, such that $\gamma\models^\nu_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, we have $\set{\nu(x_1), \ldots, \nu(x_{\arityof{\predname{A}}})} \subseteq \nodesof{\gamma} = \mathcal{C}$. \end{myLemmaE} \begin{proofE} We proceed by fixpoint induction on the definition of $\gamma \models^\nu_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. By definition, there exists a progressing rule $$\mathsf{r} ~:~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \compact{x_1} * \psi * \Asterisk_{\ell=1}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$$ a store $\nu'$ and configurations $\gamma_0, \dots, \gamma_h$ such that: \begin{compactitem} % \item $\gamma = \gamma_0 \bullet \dots \bullet \gamma_h$, % \item $\nu(x_i) = \nu'(x_i)$ for all $i \in \interv{1}{\arityof{\predname{A}}}$, % \item $\gamma_0 \models_\Delta^{\nu'} \compact{x_1} * \psi$, and % \item $\gamma_\ell \models_\Delta^{\nu'} \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$ for all $\ell \in \interv{1}{h}$. % \end{compactitem} For $1\leq \ell\leq h$, let $\nu_\ell(x_i) = \nu'(z^\ell_i)$ for $1\leq i\leq \arityof{\predname{B}_\ell}$. 
Now apply the induction hypothesis on the derivation of $\gamma_\ell \models_\Delta^{\nu_\ell} \predname{B}_\ell(x_1, \ldots, x_{\arityof{\predname{B}_\ell}})$ to obtain that $\set{\nu_\ell(x_1), \ldots, \nu_\ell(x_{\arityof{\predname{B}_\ell}})} \subseteq \nodesof{\gamma_\ell}$. Since $\mathsf{r}$ is progressing, we have: \begin{equation} \begin{split} &\set{\nu(x_1), \ldots, \nu(x_{\arityof{\predname{A}}})} \subseteq \set{\nu'(x_1), \ldots, \nu'(x_{\arityof{\predname{A}}})} \cup \set{\nu'(y_1), \ldots, \nu'(y_m)} \\ &= \set{\nu'(x_1)} \cup \bigcup_{\ell=1}^h \set{\nu'(z^\ell_1), \ldots, \nu'(z^\ell_{\arityof{\predname{B}_\ell}})} = \set{\nu'(x_1)} \cup \bigcup_{\ell=1}^h \set{\nu_\ell(x_1), \ldots, \nu_\ell(x_{\arityof{\predname{B}_\ell}})} \\ &\subseteq \nodesof{\gamma_0} \cup \bigcup_{\ell=1}^h \nodesof{\gamma_\ell} = \nodesof{\gamma} \nonumber \text{\qed} \end{split} \end{equation} \end{proofE} We recall that $\defn{\Delta}{\predname{A}}$ is the set of rules from $\Delta$ that define $\predname{A}$ and denote by $\defs{\Delta}{\predname{A}}$ the least superset of $\defn{\Delta}{\predname{A}}$ containing the rules that define a predicate from a rule in $\defs{\Delta}{\predname{A}}$. The following result shows that the entailment problem becomes undecidable as soon as the connectivity condition is even slightly lifted: \begin{theoremE}\label{thm:entl-undecidable} $\entl{\Delta}{\predname{A}}{\predname{B}}$ is undecidable, even when $\Delta$ is progressing and e-restricted, and only the rules in $\defs{\Delta}{\predname{A}}$ are connected (the rules in $\defs{\Delta}{\predname{B}}$ may be disconnected). \end{theoremE} \begin{proofE} By a reduction from the known undecidable problem of universality of context-free languages \cite{BarHillel61}. 
A context-free grammar $G = \tuple{N,T,S,\Delta}$ consists of a finite set $N$ of nonterminals, a finite set $T$ of terminals, a start symbol $S \in N$ and a finite set $\Delta$ of productions of the form $A \rightarrow w$, where $A \in N$ and $w \in (N \cup T)^*$. Given finite strings $u, v \in (N \cup T)^*$, the step relation $u \Rightarrow v$ replaces a nonterminal $A$ of $u$ by the right-hand side $w$ of a production $A \rightarrow w$ and $\Rightarrow^*$ denotes the reflexive and transitive closure of $\Rightarrow$. The language of $G$ is the set $\lang{G}$ of finite strings $w \in T^*$, such that $S \Rightarrow^* w$. The problem $T^* \subseteq \lang{G}$ is known as the universality problem and is known to be undecidable. Moreover, we assume w.l.o.g.\ that: \begin{itemize} \item $T = \set{0,1}$, because every terminal can be encoded as a binary string, % \item $\lang{G}$ does not contain the empty string $\epsilon$, because computing a grammar $G'$ such that $\lang{G'} = \lang{G} \cap T^+$ is possible and, moreover, we can reduce from the modified universality problem $T^+ \subseteq \lang{G'}$ instead of the original $T^* \subseteq \lang{G}$, % \item $G$ is in Greibach normal form, i.e.\ it contains only production rules of the form $\predname{B}_0 \rightarrow b \predname{B}_1 \ldots \predname{B}_n$, where $\predname{B}_0, \ldots, \predname{B}_n \in N$, for some $n \geq 0$ and $b \in T$. \end{itemize} Let $\mathcal{P} = \set{p_0,p_1}$ be a set of ports. For each nonterminal $\predname{B}_0 \in N$, we have a predicate $\predname{B}_0$ of arity two and a rule $\predname{B}_0(x_1,x_2) \leftarrow \exists y_1 \ldots \exists y_n ~.~ \compact{x_1} * \interac{x_1}{p_b}{y_1}{p_b} * \predname{B}_1(y_1,y_2) * \ldots * \predname{B}_n(y_n,x_2)$, for each rule $\predname{B}_0 \rightarrow b \predname{B}_1 \ldots \predname{B}_n$ of $G$.
Moreover, we consider the rules $\predname{A}(x_1,x_2) \leftarrow \exists z ~.~ \interac{x_1}{p_a}{z}{p_a} * \predname{A}(z,x_2)$ and $\predname{A}(x_1,x_2) \leftarrow \interac{x_1}{p_a}{x_2}{p_a}$, for all $a \in \set{0,1}$. Let $\Delta$ be the SID containing the above rules. It is easy to check that the SID is progressing and e-restricted and that, moreover, the rules from $\defs{\Delta}{\predname{A}}$ are connected. Finally, $\predname{A}(x_1,x_2) \models_\Delta S(x_1,x_2)$ if and only if $T^+ \subseteq \lang{G}$. \qed \end{proofE} On the positive side, we prove that $\entl{\Delta}{\predname{A}}{\predname{B}}$ is decidable if $\Delta$ is progressing, connected and e-restricted, assuming further that $\bound{\Delta}{\predname{A}}$ has a positive answer. In this case, the bound on the degree of the models of $\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ is effectively computable, using the algorithm from Fig. \ref{fig:base-graph} (see Prop. \ref{prop:bound-cutoff} for a cut-off result); we denote this bound by $\mathfrak{B}$ throughout this section. The proof uses a reduction of $\entl{\Delta}{\predname{A}}{\predname{B}}$ to a similar problem for \textsf{SL}, shown to be decidable \cite{EchenimIosifPeltier21}. We recall the definition of \textsf{SL}, interpreted over heaps $\mathsf{h} : \mathbb{C} \rightharpoonup_{\scriptscriptstyle\mathit{fin}} \mathbb{C}^\mathfrak{K}$, introduced in \S\ref{sec:sl}. \textsf{SL}\ rules are denoted as $\xannot{}{}{\predname{A}}(x_1, \ldots, x_{\#(\xannot{}{}{\predname{A}})}) \leftarrow \phi$, where $\phi$ is an \textsf{SL}\ formula, such that $\fv{\phi} \subseteq \set{x_1, \ldots, x_{\#(\xannot{}{}{\predname{A}})}}$ and \textsf{SL}\ SIDs are denoted as ${\overline{\Delta}}$. The profile $\profile{{\overline{\Delta}}}$ is defined for \textsf{SL}\ in the same way as for \textsf{CL}\ (Def. \ref{def:profile}).
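The grammar encoding used in the undecidability proof above is mechanical; the Python fragment below sketches it for nonterminal predicates, emitting rules in an ad-hoc ASCII syntax of ours ($[x1]$ for component atoms, $\langle x.p, y.p\rangle$ written as `<x.p, y.p>`, and `E` for $\exists$).

```python
def cfg_to_sid(productions):
    """Each Greibach production B0 -> b B1 ... Bn is given as
    (B0, b, [B1, ..., Bn]).  Returns the corresponding CL rules as strings,
    threading the chain B1(y1,y2) * ... * Bn(yn,x2) through fresh
    variables y1, ..., yn (purely illustrative output)."""
    rules = []
    for head, b, nts in productions:
        ys = [f"y{k}" for k in range(1, len(nts) + 1)]
        ends = ys + ["x2"]
        body = ["[x1]", f"<x1.p{b}, {ends[0]}.p{b}>"]
        body += [f"{nt}({ends[k]},{ends[k + 1]})" for k, nt in enumerate(nts)]
        prefix = "E " + " ".join(ys) + ". " if ys else ""
        rules.append(f"{head}(x1,x2) <- {prefix}" + " * ".join(body))
    return rules
```

For the grammar $S \rightarrow 0S \mid 1$, this produces one recursive rule and one base rule for the predicate $S$.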
\begin{definition}\label{def:pcr-sl} An \textsf{SL}\ rule $\xannot{}{}{\predname{A}}(x_1, \ldots, x_{\#(\xannot{}{}{\predname{A}})}) \leftarrow \phi$ from a SID ${\overline{\Delta}}$ is said to be: \begin{enumerate} % \item \emph{progressing} if and only if $\phi = \exists t_1 \ldots \exists t_m ~.~ x_1 \mapsto (y_1, \ldots, y_\mathfrak{K}) * \psi$, where $\psi$ contains only predicate and equality atoms, % \item \emph{connected} if and only if $z_1 \in \set{x_i \mid i \in \profile{{\overline{\Delta}}}(\xannot{}{}{\predname{A}})} \cup \set{y_1, \ldots, y_\mathfrak{K}}$, for every predicate atom $\xannot{}{}{\predname{B}}(z_1,\ldots,z_{\#(\xannot{}{}{\predname{B}})})$ from $\phi$. % % \end{enumerate} \end{definition} Note that the definitions of progressing and connected rules are different for \textsf{SL}, compared to \textsf{CL}\ (Def. \ref{def:pcr}); in the rest of this section, we rely on the context to distinguish progressing (connected) \textsf{SL}\ rules from progressing (connected) \textsf{CL}\ rules. Moreover, e-restricted rules are defined in the same way for \textsf{CL}\ and \textsf{SL}\ (point \ref{it:restricted} of Def. \ref{def:pcr}). A tight upper bound on the complexity of the entailment problem between \textsf{SL}\ formul{\ae}, interpreted by progressing, connected and e-restricted SIDs, is given below: \begin{theorem}[\cite{EchenimIosifPeltier21}]\label{thm:sl-entailment} The \textsf{SL}\ entailment problem is in $2^{2^{\mathit{poly}(\width{{\overline{\Delta}}} \cdot \log\size{{\overline{\Delta}}})}}$, for progressing, connected and e-restricted SIDs. \end{theorem} The reduction of $\entl{\Delta}{\predname{A}}{\predname{B}}$ to \textsf{SL}\ entailments is based on the idea of viewing a configuration as a logical structure (hypergraph), represented by an undirected \emph{Gaifman graph}, in which every tuple from a relation (hyperedge) becomes a clique \cite{Gaifman82}.
In a similar vein, we encode a configuration, of degree at most $\mathfrak{B}$, by a heap of degree $\mathfrak{K}$ (Def. \ref{def:gaifman-heap}), such that $\mathfrak{K}$ is defined using the following integer function: \vspace*{-.5\baselineskip} \[\pos{i}{j}{k} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} 1 + \mathfrak{B} \cdot \sum_{\ell=1}^{j-1} \lenof{\tau_\ell} + i \cdot \lenof{\tau_j} + k\] where $\mathsf{Inter} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{\tau_1, \ldots, \tau_M}$ is the set of interaction types and $\mathcal{Q} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{q_1, \ldots, q_N}$ is the set of states of the behavior $\mathbb{B}=(\mathcal{P},\mathcal{Q},\arrow{}{})$ (\S\ref{sec:definitions}). Here $i \in \interv{0}{\mathfrak{B}-1}$ denotes an interaction of type $j \in \interv{1}{M}$ and $k \in \interv{0}{N-1}$ denotes a state. We use $M$ and $N$ throughout the rest of this section, to denote the number of interaction types and states, respectively. For a set $\mathcal{I}$ of interactions, let $\xituples{\mathcal{I}}{j}{c} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \{\tuple{c_1, \ldots, c_n} \mid (c_1,p_1, \ldots, c_n,p_n) \in \mathcal{I},~ \tau_j = \tuple{p_1, \ldots, p_n},~ c \in \set{c_1,\ldots,c_n}\}$ be the tuples of components from an interaction of type $\tau_j$ from $\mathcal{I}$, that contain a given component $c$. 
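As a quick sanity check of this arithmetic, the position function can be transcribed directly into code (a Python sketch, for illustration only; the list \texttt{tau\_len} holds the lengths $\lenof{\tau_1}, \ldots, \lenof{\tau_M}$, and \texttt{B}, \texttt{N} stand for $\mathfrak{B}$ and $N$):

```python
def pos(i, j, k, tau_len, B):
    """pos(i, j, k) = 1 + B * sum_{l < j} |tau_l| + i * |tau_j| + k.
    For j = M + 1 (used only with i = 0) the term i * |tau_j| vanishes."""
    base = 1 + B * sum(tau_len[:j - 1]) + k
    return base + (i * tau_len[j - 1] if j <= len(tau_len) else 0)

def record_length(tau_len, B, N):
    """K, the length of the tuples stored in the heap: pos(0, M + 1, N)."""
    return pos(0, len(tau_len) + 1, N, tau_len, B)
```

For example, with a single interaction type of length $2$, $\mathfrak{B} = 2$ and $N = 3$ states, the record length is $1 + 2 \cdot 2 + 3 = 8$.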
\begin{definition}\label{def:gaifman-heap} Given a configuration $\gamma = (\mathcal{C},\mathcal{I},\varrho)$, such that $\degreeof{\gamma} \leq \mathfrak{B}$, a \emph{Gaifman heap} for $\gamma$ is a heap $\mathsf{h} : \mathbb{C} \rightharpoonup_{\scriptscriptstyle\mathit{fin}} \mathbb{C}^\mathfrak{K}$, where $\mathfrak{K} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \pos{0}{M+1}{N}$, $\dom{\mathsf{h}} = \nodesof{\gamma}$ and, for all $c_0 \in \dom{\mathsf{h}}$, such that $\mathsf{h}(c_0) = \tuple{c_1,\ldots,c_\mathfrak{K}}$, the following hold: \begin{enumerate} % \item\label{it21:gaifman-heap} $c_1 = c_0$ if and only if $c_0 \in \mathcal{C}$, % \item\label{it22:gaifman-heap} for all $j \in \interv{1}{M}$, $\xituples{\mathcal{I}}{j}{c_0} = \set{\vec{c}_1, \ldots, \vec{c}_s}$ if and only if there exist integers $0 \leq k_1 < \ldots < k_s < \mathfrak{B}$, such that $\tuple{\mathsf{h}(c_0)}_{\ipos{k_i}{j}} = \vec{c}_i$, for all $i \in \interv{1}{s}$, where $\ipos{i}{j} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \interv{\pos{i-1}{j}{0}}{\pos{i}{j}{0}}$ are the entries of the $i$-th interaction of type $\tau_j$ in $\mathsf{h}(c_0)$, % \item\label{it23:gaifman-heap} for all $k \in \interv{1}{N}$, we have $\tuple{\mathsf{h}(c_0)}_{\spos{k}} = c_0$ if and only if $\varrho(c_0) = q_k$, where the entry $\spos{k} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \pos{0}{M+1}{k-1}$ in $\mathsf{h}(c_0)$ corresponds to the state $q_k \in \mathcal{Q}$. % \end{enumerate} We denote by $\gaifman{\gamma}$ the set of Gaifman heaps for $\gamma$. \end{definition} Intuitively, if $\mathsf{h}$ is a Gaifman heap for $\gamma$ and $c_0 \in \dom{\mathsf{h}}$, then the first entry of $\mathsf{h}(c_0)$ indicates whether $c_0$ is present (condition \ref{it21:gaifman-heap} of Def. \ref{def:gaifman-heap}), the next $\mathfrak{B} \cdot \sum_{j=1}^M \lenof{\tau_j}$ entries are used to encode the interactions of each type $\tau_j$ (condition \ref{it22:gaifman-heap} of Def. 
\ref{def:gaifman-heap}), whereas the last $N$ entries are used to represent the state of the component (condition \ref{it23:gaifman-heap} of Definition \ref{def:gaifman-heap}). Note that the encoding of configurations by Gaifman heaps is not unique: two Gaifman heaps for the same configuration may differ in the order of the tuples from the encoding of an interaction type and the choice of the unconstrained entries from $\mathsf{h}(c_0)$, for each $c_0 \in \dom{\mathsf{h}}$. On the other hand, if two configurations have the same Gaifman heap encoding, they must be the same configuration. \begin{figure}[t!] \vspace*{-\baselineskip} \begin{subfigure}[b]{.4\textwidth} \scalebox{0.55}{\input{tikz_three_comps.tex}} \centerline{\footnotesize(a)} \end{subfigure}% \begin{subfigure}[b]{.6\textwidth} \scalebox{0.4}{\input{tikz_stack.tex}} \centerline{\footnotesize(b)} \end{subfigure}% \vspace*{-.5\baselineskip} \caption{Gaifman Heap for a Chain Configuration} \label{fig:gaifman-heap} \vspace*{-1.5\baselineskip} \end{figure} \begin{example}\label{ex:gaifman-heap} Fig. \ref{fig:gaifman-heap} (b) shows a Gaifman heap for the configuration in Fig. \ref{fig:gaifman-heap} (a), where each component belongs to at most $2$ interactions of type $\tuple{\mathit{out},\mathit{in}}$. \hfill$\blacksquare$ \end{example} \begin{myTextE} We say that a configuration $\gamma'$ is a \emph{subconfiguration} of $\gamma$, denoted $\gamma' \sqsubseteq \gamma$, if and only if $\gamma = \gamma' \bullet \gamma''$, for some configuration $\gamma''$. The following lemma builds Gaifman heaps for subconfigurations: \begin{lemma}\label{lemma:gaifman-subconfig} Given configurations $\gamma$ and $\gamma'$, such that $\gamma' \sqsubseteq \gamma$, if $\mathsf{h} \in \gaifman{\gamma}$, then $\mathsf{h}' \in \gaifman{\gamma'}$, where $\dom{\mathsf{h}'} = \dom{\mathsf{h}} \cap \nodesof{\gamma'}$ and $\mathsf{h}'(c) = \mathsf{h}(c)$, for all $c\in\dom{\mathsf{h}'}$. 
\end{lemma} \proof{ We have $\dom{\mathsf{h}'} = \dom{\mathsf{h}} \cap \nodesof{\gamma'} = \nodesof{\gamma} \cap \nodesof{\gamma'} = \nodesof{\gamma'}$, because $\dom{\mathsf{h}} = \nodesof{\gamma} \supseteq \nodesof{\gamma'}$. The points (\ref{it21:gaifman-heap}-\ref{it23:gaifman-heap}) of Def. \ref{def:gaifman-heap} follow by easy inspection. \qed} \end{myTextE} We build a \textsf{SL}\ SID ${\overline{\Delta}}$ that generates the Gaifman heaps of the models of the predicate atoms occurring in a progressing \textsf{CL}\ SID $\Delta$. The construction associates with each variable $x$ that occurs free or bound in a rule from $\Delta$ a unique $\mathfrak{K}$-tuple of variables $\gaifimg{x} \in \mathbb{V}^\mathfrak{K}$ that represents the image of the store value $\nu(x)$ in a Gaifman heap $\mathsf{h}$, i.e., $\mathsf{h}(\nu(x)) = \nu(\gaifimg{x})$. Moreover, we consider, for each predicate symbol $\predname{A} \in \defnof{\Delta}$, an annotated predicate symbol $\annotate{}{\predname{A}}$ of arity $\arityof{\annotate{}{\predname{A}}} = (\mathfrak{K}+1)\cdot\arityof{\predname{A}}$, where $\iota : \interv{1}{\arityof{\predname{A}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$ is a map associating with each parameter $i \in \interv{1}{\arityof{\predname{A}}}$ and each interaction type $\tau_j$, for $j \in \interv{1}{M}$, a set of integers $\iota(i,j)$ denoting the positions of the encodings of the interactions of type $\tau_j$, involving the value of $x_i$, in the models of $\annotate{}{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{A}}}})$ (point \ref{it22:gaifman-heap} of Def. \ref{def:gaifman-heap}). 
Then ${\overline{\Delta}}$ contains rules of the form: \begin{align} \annotate{}{\predname{A}}(x_1, \ldots, x_{\#(\predname{A})},\gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}}) & \leftarrow \label{rule:slsid} \\ \exists y_1 \ldots \exists y_m \exists \gaifimg{y_1} \ldots \exists \gaifimg{y_m} ~.~ \overline{\psi} * \pi ~* & \Asterisk_{\ell=1}^h ~\annotate{\ell}{\predname{B}}(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)}, \gaifimg{z^\ell_1}, \ldots, \gaifimg{z^\ell_{\#(\predname{B}^\ell)}}) \nonumber \end{align} for which $\Delta$ has a \emph{stem rule} \(\predname{A}(x_1, \ldots, x_{\#(\predname{A})}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=1}^h \predname{B}^\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}^\ell}})\), where $\psi*\pi$ is a quantifier- and predicate-free formula and $\pi$ is the conjunction of equalities and disequalities from $\psi*\pi$. However, not all rules (\ref{rule:slsid}) are considered in ${\overline{\Delta}}$, but only the ones meeting the following condition: \begin{definition}\label{def:well-formed} A rule of the form (\ref{rule:slsid}) is \emph{well-formed} if and only if, for each $i \in \interv{1}{\arityof{\predname{A}}}$ and each $j \in \interv{1}{M}$, there exists a set of integers $Y_{i,j} \subseteq \interv{0}{\mathfrak{B}-1}$, such that: \begin{compactitem} % \item $\cardof{Y_{i,j}} = \cardof{\xiatoms{\psi,\pi}{j}{x_i}}$, where $\xiatoms{\psi,\pi}{j}{x}$ is the set of interaction atoms $\interacn{z_1}{p_1}{z_n}{p_n}$ from $\psi$ of type $\tau_j = \tuple{p_1, \ldots, p_n}$, such that $z_s \formeq{\pi} x$, for some $s \in \interv{1}{n}$, % \item $Y_{i,j} \subseteq \iota(i,j)$ and $\iota(i,j) \setminus Y_{i,j} = \xipos{j}{x_i}$, where $\xipos{j}{x} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \bigcup_{\ell=1}^h \bigcup_{k=1}^{\arityof{\predname{B}^\ell}} \set{\iota^\ell(k,j) \mid x \formeq{\pi} z^\ell_k}$ is the set of positions used to encode the interactions of type $\tau_j$ 
involving the store value of the parameter $x$, in the sub-configuration corresponding to an atom $\predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)})$, for some $\ell\in\interv{1}{h}$. % \end{compactitem} \end{definition} We denote by ${\overline{\Delta}}$ the set of well-formed rules (\ref{rule:slsid}) in which, moreover: \[\begin{array}{l} \overline{\psi} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} x_1 \mapsto \gaifimg{x_1} ~*~ \Asterisk_{\!\!x\in\fv{\psi}} ~\compstate{\psi}{x} ~*~ \Asterisk_{\!\!i=1}^{\!\!\arityof{\predname{A}}} ~\interparam{\psi}{x_i} \text{, where:} \\[1mm] \compstate{\psi}{x} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \Asterisk_{\compact{x} \text{ occurs in } \psi} ~\tuple{\gaifimg{x}}_1 = x ~*~ \Asterisk_{\compin{x}{q_k} \text{ occurs in } \psi} ~\tuple{\gaifimg{x}}_{\spos{k}} = x \\[1mm] \interparam{\psi}{x_i} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \Asterisk_{\!\!j=1}^{\!\!M} \Asterisk_{\!\!p=1}^{\!\!r_j} ~\tuple{\gaifimg{x_i}}_{\ipos{j}{k^j_p}} = \vec{x}^j_p \text{ and } \set{k^j_1, \ldots, k^j_{r_j}} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \iota(i,j) \setminus \xipos{j}{x_i} \end{array}\] Here, for two tuples of variables $\vec{x} = \tuple{x_1, \ldots, x_k}$ and $\vec{y} = \tuple{y_1, \ldots, y_k}$, we denote by $\vec{x}=\vec{y}$ the formula $\Asterisk_{i=1}^k x_i=y_i$. Intuitively, the \textsf{SL}\ formula $\compstate{\psi}{x}$ realizes the encoding of the component and state atoms from $\psi$, in the sense of points (\ref{it21:gaifman-heap}) and (\ref{it23:gaifman-heap}) from Def. \ref{def:gaifman-heap}, whereas the formula $\interparam{\psi}{x_i}$ realizes the encodings of the interactions involving a parameter $x_i$ in the stem rule (point \ref{it22:gaifman-heap} of Def. \ref{def:gaifman-heap}). In particular, the definition of $\interparam{\psi}{x_i}$ uses the fact that the rule is well-formed. 
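Note that the set $Y_{i,j}$ of Def. \ref{def:well-formed}, if it exists, is uniquely determined as $\iota(i,j) \setminus \xipos{j}{x_i}$, so well-formedness is effectively checkable. A minimal sketch of the check for one pair $(i,j)$ (Python, for illustration; the argument names are ours):

```python
def well_formed_entry(iota_ij, stem_atoms, child_positions):
    """A set Y with |Y| = stem_atoms, Y a subset of iota_ij and
    iota_ij minus Y = child_positions exists iff child_positions is a
    subset of iota_ij and |iota_ij minus child_positions| = stem_atoms;
    Y is then forced to be iota_ij minus child_positions."""
    return child_positions <= iota_ij and len(iota_ij - child_positions) == stem_atoms
```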
\begin{myLemmaE}\label{lemma:gaifman-soundness} Let $\Delta$ be a progressing SID and $\predname{A} \in \defnof{\Delta}$ be a predicate, such that $\gamma \models^\nu \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, for some configuration $\gamma=(\mathcal{C},\mathcal{I},\varrho)$ and store $\nu$. Then, for each heap $\mathsf{h} \in \gaifman{\gamma}$, there exists a map $\iota : \interv{1}{\arityof{\predname{A}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$ and a store $\overline{\store}$, such that the following hold: \begin{enumerate} % \item\label{it1:gaifman-soundness} $\overline{\store}(x_i) = \nu(x_i) \in \dom{\mathsf{h}}$ and $\overline{\store}(\gaifimg{x_i}) = \mathsf{h}(\nu(x_i))$, $\forall i \in \interv{1}{\arityof{\predname{A}}}$, % \item\label{it2:gaifman-soundness} $\xituples{\mathcal{I}}{j}{\overline{\store}(x_i)} = \Set{\tuple{\overline{\store}(\gaifimg{x_i})}_{\ipos{j}{k}} \mid k \in \iota(i,j)}$, $\forall i \in \interv{1}{\arityof{\predname{A}}}~ \forall j \in \interv{1}{M}$, % \item\label{it3:gaifman-soundness} $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \annotate{}{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}})$. % \end{enumerate} \end{myLemmaE} \begin{proofE} By induction on the definition of $\gamma \models^\nu \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, assume that $\gamma \models^\nu\exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=1}^h \predname{B}^\ell(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)})$, where $\predname{A}(x_1, \ldots, x_{\#(\predname{A})}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=1}^h \predname{B}^\ell(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)})$ is a rule from $\Delta$, such that $\psi*\pi$ is quantifier- and predicate-free and $\pi$ is the conjunction of equalities and disequalities from $\psi*\pi$. 
Then there exists a store $\nu'$ that agrees with $\nu$ over $x_1, \ldots, x_{\arityof{\predname{A}}}$, and configurations $\gamma_0 = (\mathcal{C}_0,\mathcal{I}_0,\varrho), \ldots, \gamma_h = (\mathcal{C}_h,\mathcal{I}_h,\varrho)$, such that: \begin{compactitem} % \item $\gamma_0 \models^{\nu'}_\Delta \psi*\pi$, % \item $\gamma_\ell \models^{\nu'}_\Delta \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, for all $\ell \in \interv{1}{h}$, and % \item $\gamma = \gamma_0 \bullet \ldots \bullet \gamma_h$. % \end{compactitem} We define the heaps $\mathsf{h}_0, \ldots, \mathsf{h}_h$, as follows: \begin{compactitem} % \item for each $\ell\in\interv{1}{h}$, let $\dom{\mathsf{h}_\ell} = \nodesof{\gamma_\ell}$, $\mathsf{h}_\ell(c) = \mathsf{h}(c)$, for all $c\in\dom{\mathsf{h}_\ell}$, % \item $\mathsf{h}_0 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \mathsf{h} \setminus (\bigcup_{\ell=1}^h \mathsf{h}_\ell)$. % \end{compactitem} By Lemma \ref{lemma:gaifman-subconfig}, we obtain that $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$, for all $\ell\in\interv{1}{h}$. Note that $\mathsf{h} = \mathsf{h}_0 \cup \ldots \cup \mathsf{h}_h$, by construction; we prove that the domains of $\mathsf{h}_0, \ldots, \mathsf{h}_h$ are pairwise disjoint, i.e., that $\dom{\mathsf{h}_i} \cap \dom{\mathsf{h}_j} = \emptyset$, for all $0 \leq i < j \leq h$. If $i = 0$, we have $\dom{\mathsf{h}_i} \cap \dom{\mathsf{h}_j} = \emptyset$, by the definition of $\mathsf{h}_i$. Else, suppose, for a contradiction, that $c \in \dom{\mathsf{h}_i} \cap \dom{\mathsf{h}_j}$, for some $1 \leq i < j \leq h$. Then $c \in \nodesof{\gamma_i} \cap \nodesof{\gamma_j}$. Since $\gamma_\ell \models \exists x_1 \ldots \exists x_{\arityof{\predname{B}_\ell}} ~.~ \predname{B}_\ell(x_1, \ldots, x_{\arityof{\predname{B}_\ell}})$, by Lemma \ref{lemma:progressing-sid}, we obtain $c \in \mathcal{C}_i \cap \mathcal{C}_j$, which contradicts the fact that $\gamma_i \bullet \gamma_j$ is defined. 
Next, we apply the inductive hypothesis to find stores $\overline{\store}_\ell$ and maps $\iota^\ell$ such that, for $\ell \in \interv{1}{h}$, we have: \begin{compactitem} % \item $\overline{\store}_\ell(z^\ell_i) = \nu'(z^\ell_i) \in \dom{\mathsf{h}_\ell}$ and $\mathsf{h}_\ell(\nu'(z^\ell_i)) = \overline{\store}_\ell(\gaifimg{z^\ell_i})$, $\forall i \in \interv{1}{\arityof{\predname{B}_\ell}}$, % \item $\xituples{\mathcal{I}_\ell}{j}{\overline{\store}_\ell(z^\ell_i)} = \Set{\tuple{\overline{\store}_\ell(\gaifimg{z^\ell_i})}_{\ipos{j}{k}} \mid k \in \iota^\ell(i,j)}$, $\forall i \in \interv{1}{\#(\predname{B}_\ell)}~ \forall j \in \interv{1}{M}$, and % \item $\mathsf{h}_\ell \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}_\ell} \annotate{}{\predname{B}_\ell}(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}, \gaifimg{z^\ell_1}, \ldots, \gaifimg{z^\ell_{\#(\predname{B}_\ell)}})$. % \end{compactitem} First, for each $i \in \interv{1}{\arityof{\predname{A}}}$ and each $j \in \interv{1}{M}$, we define $\iota(i,j) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{k^{i,j}_1, \ldots, k^{i,j}_{s_{i,j}}} \cup \bigcup_{\ell=1}^h \bigcup_{k=1}^{\arityof{\predname{B}_\ell}} \Set{\iota^\ell(k,j) \mid x_i \formeq{\pi} z^\ell_k}$, where: \begin{compactitem} % \item $\xituples{\mathcal{I}_0}{j}{\nu(x_i)} = \set{\vec{c}_1, \ldots, \vec{c}_{s_{i,j}}}$, and % \item $0 \leq k^{i,j}_1 < \ldots < k^{i,j}_{s_{i,j}} < \mathfrak{B}$ are integers, such that $\tuple{\mathsf{h}(\nu(x_i))}_{\ipos{j}{k^{i,j}_\ell}} = \vec{c}_\ell$, $\forall \ell \in \interv{1}{s_{i,j}}$; the existence of these integers is stated by point (\ref{it22:gaifman-heap}) of Def. \ref{def:gaifman-heap}, relative to $\nu(x_i)$. 
% \end{compactitem} Second, we define the store $\overline{\store}$ as follows: \begin{compactitem} % \item $\overline{\store}(x_1) = \nu(x_1)$ and $\overline{\store}(\gaifimg{x_1}) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \mathsf{h}(\nu(x_1))$, % \item $\overline{\store}(z^\ell_i) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu'(z^\ell_i)$, $\forall \ell\in\interv{1}{h}~ \forall i\in\interv{1}{\arityof{\predname{B}_\ell}}$, % \item $\overline{\store}(\gaifimg{z^\ell_i}) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \mathsf{h}(\nu'(z^\ell_i))$, $\forall \ell\in\interv{1}{h}~ \forall i\in\interv{1}{\arityof{\predname{B}_\ell}}$, % \item $\overline{\store}$ is arbitrary everywhere else. % \end{compactitem} The points (\ref{it1:gaifman-soundness}) and (\ref{it2:gaifman-soundness}) of the statement follow from the definitions of $\overline{\store}$ and $\iota$, respectively. To prove point (\ref{it3:gaifman-soundness}), suppose, for a contradiction, that there exists $k \in \set{k^{i,j}_1, \ldots, k^{i,j}_{s_{i,j}}} \cap \iota^\ell(t,j)$, for some $i \in \interv{1}{\arityof{\predname{A}}}$, $j \in \interv{1}{M}$, $t\in\interv{1}{\arityof{\predname{B}_\ell}}$ and $\ell\in\interv{1}{h}$, such that $x_i \formeq{\pi} z^\ell_t$. Then there exists a tuple of components $\vec{c} \in \xituples{\mathcal{I}_0}{j}{\nu(x_i)}$, such that $\vec{c} = \tuple{\overline{\store}_\ell(\gaifimg{z^\ell_t})}_{\ipos{j}{k}} \in \xituples{\mathcal{I}_\ell}{j}{\overline{\store}_\ell(z^\ell_t)}$. Hence $\mathcal{I}_0 \cap \mathcal{I}_\ell \neq \emptyset$, which contradicts the fact that the composition $\gamma_0 \bullet \gamma_\ell$ is defined. 
Hence, the rule: \begin{align*} \annotate{}{\predname{A}}(x_1, \ldots, x_{\#(\predname{A})},\gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}}) & \leftarrow \\ \exists y_1 \ldots \exists y_m \exists \gaifimg{y_1} \ldots \exists \gaifimg{y_m} ~.~ \overline{\psi} * \pi ~* & \Asterisk_{\ell=1}^h ~\annotate{\ell}{\predname{B}}(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)}, \gaifimg{z^\ell_1}, \ldots, \gaifimg{z^\ell_{\#(\predname{B}^\ell)}}) \end{align*} is well-formed and thus belongs to ${\overline{\Delta}}$. To obtain $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \annotate{}{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}})$, given the definition: \[\overline{\psi} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} x_1 \mapsto \gaifimg{x_1} * \Asterisk_{\!\!x\in\fv{\psi}} ~\compstate{\psi}{x} * \Asterisk_{\!\!i=1}^{\!\!\arityof{\predname{A}}} ~\interparam{\psi}{x_i}\] it is sufficient to prove the following points: \begin{compactitem} % \item $\mathsf{h}_0 \Vdash^{\overline{\store}} x_1 \mapsto \gaifimg{x_1}$: by the definition of $\overline{\store}$, we have $\overline{\store}(\gaifimg{x_1}) = \mathsf{h}(\nu(x_1))$, hence it is sufficient to prove that $\dom{\mathsf{h}_0} = \set{\nu(x_1)}$. ``$\subseteq$'' Let $c \in \dom{\mathsf{h}_0}$ be a component. By the definition of $\mathsf{h}_0 = \mathsf{h} \setminus \bigcup_{\ell=1}^h \mathsf{h}_\ell$, we have $\dom{\mathsf{h}_0} = \dom{\mathsf{h}} \setminus \bigcup_{\ell=1}^h \dom{\mathsf{h}_\ell} = \nodesof{\gamma} \setminus \bigcup_{\ell=1}^h \nodesof{\gamma_\ell} = \nodesof{\gamma_0}$, because $\mathsf{h} \in \gaifman{\gamma}$ and $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$, for all $\ell \in \interv{1}{h}$. Since $\gamma_0 \models^{\nu'}_{\Delta} \psi$, we have $c = \nu'(x)$, for some $x \in \fv{\psi}$. 
Suppose, for a contradiction, that $x$ and $x_1$ are not the same variable. Then $x \in \set{z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}}$, for some $\ell \in \interv{1}{h}$, because $\Delta$ is progressing (Def. \ref{def:pcr}) and, by Lemma \ref{lemma:progressing-sid}, we obtain $c \in \nodesof{\gamma_\ell}$, a contradiction. Hence $c = \nu'(x_1) = \nu(x_1)$. ``$\supseteq$'' Because $\Delta$ is progressing, $\psi = \compact{x_1} * \varphi$, hence $\nu(x_1) = \nu'(x_1) \in \nodesof{\gamma_0} = \dom{\mathsf{h}_0}$, because $\gamma_0 \models^{\nu'} \psi$. % \item $\emptyset \Vdash^{\overline{\store}} \compstate{\psi}{x}$, for each $x\in\fv{\psi}$: by the definition of $\overline{\store}$, $\mathsf{h} \in \gaifman{\gamma}$ and points \ref{it21:gaifman-heap} and \ref{it23:gaifman-heap} of Def. \ref{def:gaifman-heap}. % \item $\emptyset \Vdash^{\overline{\store}} \interparam{\psi}{x_i}$, for each $i \in \interv{1}{\arityof{\predname{A}}}$: by the definition of $\overline{\store}$, $\mathsf{h} \in \gaifman{\gamma}$, the definition of $\iota(i,j)$, for all $i \in \interv{1}{\arityof{\predname{A}}}$ and $j\in\interv{1}{M}$, and point \ref{it22:gaifman-heap} of Def. \ref{def:gaifman-heap}. % \item $\emptyset \Vdash^{\overline{\store}} \pi$: because $(\emptyset,\emptyset,\varrho) \models^{\nu'} \pi$ and $\overline{\store}$ agrees with $\nu'$ over $\fv{\pi}$. \qed % \end{compactitem} \end{proofE} \begin{myLemmaE}\label{lemma:gaifman-completeness} Let $\Delta$ be a progressing SID and $\predname{A} \in \defnof{\Delta}$ be a predicate, such that $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \annotate{}{\predname{A}}(x_1, \ldots, x_{\#(\predname{A})}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}})$, for a map $\iota : \interv{1}{\#(\predname{A})} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$ and a store $\overline{\store}$. 
Then, the following hold: \begin{enumerate} % \item\label{it1:gaifman-completeness} $\overline{\store}(x_i) \in \dom{\mathsf{h}}$ and $\mathsf{h}(\overline{\store}(x_i)) = \overline{\store}(\gaifimg{x_i})$, for all $i \in \interv{1}{\arityof{\predname{A}}}$, % \item\label{it2:gaifman-completeness} there exists a configuration $\gamma$, such that $\mathsf{h} \in \gaifman{\gamma}$ and $\gamma \models^{\overline{\store}}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. % \end{enumerate} \end{myLemmaE} \begin{proofE} By fixpoint induction on the definition of $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \annotate{}{\predname{A}}(x_1, \ldots, x_{\#(\predname{A})}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}})$. Consider the following well-formed rule from ${\overline{\Delta}}$: \begin{align*} \annotate{}{\predname{A}}(x_1, \ldots, x_{\#(\predname{A})},\gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}}) & \leftarrow \\ \exists y_1 \ldots \exists y_m \exists \gaifimg{y_1} \ldots \exists \gaifimg{y_m} ~.~ \overline{\psi} * \pi ~* & \Asterisk_{\ell=1}^h ~\annotate{\ell}{\predname{B}}(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)}, \gaifimg{z^\ell_1}, \ldots, \gaifimg{z^\ell_{\#(\predname{B}^\ell)}}) \end{align*} such that \(\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}'} \overline{\psi} * \pi ~* \Asterisk_{\ell=1}^h ~\annotate{\ell}{\predname{B}}(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)}, \gaifimg{z^\ell_1}, \ldots, \gaifimg{z^\ell_{\#(\predname{B}^\ell)}})\), where $\overline{\store}'$ is a store that agrees with $\overline{\store}$ over $x_1, \ldots, x_{\#(\predname{A})}$ and $\gaifimg{x_1}, \ldots, \gaifimg{x_{\#(\predname{A})}}$. 
\vspace*{\baselineskip}\noindent(\ref{it1:gaifman-completeness}) If $i = 1$ then $x_1 \mapsto \gaifimg{x_1}$ is a subformula of $\overline{\psi}$, thus $\overline{\store}(x_1) = \overline{\store}'(x_1) \in \dom{\mathsf{h}}$ and $\mathsf{h}(\overline{\store}(x_1)) = \overline{\store}'(\gaifimg{x_1}) = \overline{\store}(\gaifimg{x_1})$. Otherwise, because $\Delta$ is progressing, $x_i \in \set{z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}}$, for some $\ell \in \interv{1}{h}$ and point (\ref{it1:gaifman-completeness}) follows from the inductive hypothesis. \vspace*{\baselineskip}\noindent(\ref{it2:gaifman-completeness}) There exist heaps $\mathsf{h}_0, \ldots, \mathsf{h}_h$, such that the following hold: \begin{compactitem} % \item $\dom{\mathsf{h}_i} \cap \dom{\mathsf{h}_j} = \emptyset$, for all $0 \leq i < j \leq h$ and $\mathsf{h} = \mathsf{h}_0 \cup \ldots \cup \mathsf{h}_h$, % \item $\mathsf{h}_0 \Vdash^{\overline{\store}'} \overline{\psi} * \pi$, % \item $\mathsf{h}_\ell \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}'} \annotate{\ell}{\predname{B}}(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}^\ell)}, \gaifimg{z^\ell_1}, \ldots, \gaifimg{z^\ell_{\#(\predname{B}^\ell)}})$, for all $\ell \in \interv{1}{h}$. % \end{compactitem} By the inductive hypothesis, there exist configurations $\gamma_1 = (\mathcal{C}_1, \mathcal{I}_1, \varrho_1), \ldots, \gamma_h = (\mathcal{C}_h, \mathcal{I}_h, \varrho_h)$, such that $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$ and $\gamma_\ell \models^{\overline{\store}'}_\Delta \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\#(\predname{B}_\ell)})$, for all $\ell \in \interv{1}{h}$. 
We define the configuration $\gamma_0 = (\mathcal{C}_0, \mathcal{I}_0, \varrho_0)$, as follows: \begin{compactitem} % \item $\mathcal{C}_0 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{\overline{\store}(x_1)}$, % \item $\mathcal{I}_0 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{(\overline{\store}'(z_1), p_1, \ldots, \overline{\store}'(z_n), p_n) \mid \interacn{z_1}{p_1}{z_n}{p_n} \text{ occurs in } \psi}$, % \item $\varrho_0(\overline{\store}'(z)) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} q_k$ if and only if $\compin{z}{q_k}$ occurs in $\psi$, otherwise $\varrho_0$ is arbitrary. % \end{compactitem} Moreover, we define the state map $\varrho$ as $\varrho(c) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \varrho_\ell(c)$ if $c \in \dom{\mathsf{h}_\ell}$, for all $\ell\in\interv{0}{h}$ and $\varrho(c)$ is arbitrary, for $c \not\in \bigcup_{\ell=0}^h \dom{\mathsf{h}_\ell}$. Since $\dom{\mathsf{h}_0}, \ldots, \dom{\mathsf{h}_h}$ are pairwise disjoint, $\varrho$ is properly defined. First, we prove that the composition $(\mathcal{C}_0, \mathcal{I}_0, \varrho) \bullet \ldots \bullet (\mathcal{C}_h, \mathcal{I}_h, \varrho)$ is defined, namely that, for all $0 \leq i < j \leq h$, we have: \begin{compactitem} % \item $\mathcal{C}_i \cap \mathcal{C}_j = \emptyset$: If $i = 0$ then either $\mathcal{C}_0 = \emptyset$, in which case we are done, or $\mathcal{C}_0 = \set{\overline{\store}(x_1)} = \set{\overline{\store}'(x_1)}$. By the definition of $\overline{\psi} = x_1 \mapsto \gaifimg{x_1} * \varphi$ and $\mathsf{h}_0 \Vdash^{\overline{\store}'} \overline{\psi}$, we obtain $\overline{\store}'(x_1) \in \dom{\mathsf{h}_0}$. Since $\dom{\mathsf{h}_0} \cap \dom{\mathsf{h}_j} = \emptyset$, we obtain $\overline{\store}(x_1) \not\in \dom{\mathsf{h}_j}$. Since $\mathsf{h}_j \in \gaifman{\gamma_j}$, we obtain $\mathcal{C}_j \subseteq \nodesof{\gamma_j} = \dom{\mathsf{h}_j}$, thus $\mathcal{C}_i \cap \mathcal{C}_j = \emptyset$. 
Else $i > 0$ and, since $\mathsf{h}_i \in \gaifman{\gamma_i}$ and $\mathsf{h}_j \in \gaifman{\gamma_j}$, we obtain $\mathcal{C}_i \subseteq \nodesof{\gamma_i} = \dom{\mathsf{h}_i}$ and $\mathcal{C}_j \subseteq \nodesof{\gamma_j} = \dom{\mathsf{h}_j}$. But $\dom{\mathsf{h}_i} \cap \dom{\mathsf{h}_j} = \emptyset$, leading to $\mathcal{C}_i \cap \mathcal{C}_j = \emptyset$. % \item $\mathcal{I}_i \cap \mathcal{I}_j = \emptyset$: If $i = 0$, by the definition of $\mathcal{I}_0$, each interaction from $\mathcal{I}_0$ is of the form $(\overline{\store}'(z_1), p_1, \ldots, \overline{\store}'(z_n), p_n)$, such that $\interacn{z_1}{p_1}{z_n}{p_n}$ is an interaction atom occurring in $\psi$. Since, moreover, $\Delta$ is progressing, we have $x_1 \in \set{z_1, \ldots, z_n}$, hence $\overline{\store}(x_1) \in \set{\overline{\store}'(z_1), \ldots, \overline{\store}'(z_n)}$. Let $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}_j$ be an interaction. Since $\mathsf{h}_j \in \gaifman{\gamma_j}$, we obtain $\set{c_1, \ldots, c_n} \subseteq \nodesof{\gamma_j} = \dom{\mathsf{h}_j}$, hence $\set{c_1, \ldots, c_n} \cap \dom{\mathsf{h}_0} = \set{c_1, \ldots, c_n} \cap \set{\overline{\store}(x_1)} = \emptyset$, leading to $\mathcal{I}_i \cap \mathcal{I}_j = \emptyset$, because the choices of $(\overline{\store}'(z_1), p_1, \ldots, \overline{\store}'(z_n), p_n) \in \mathcal{I}_i$ and $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}_j$ are arbitrary. Else, $i > 0$ and let $(c^i_1, p_1, \ldots, c^i_n, p_n) \in \mathcal{I}_i$, $(c^j_1, p_1, \ldots, c^j_n, p_n) \in \mathcal{I}_j$ be two interactions of the same type. Since $\mathsf{h}_i \in \gaifman{\gamma_i}$ and $\mathsf{h}_j \in \gaifman{\gamma_j}$, we have $\set{c^i_1, \ldots, c^i_n} \subseteq \nodesof{\gamma_i} = \dom{\mathsf{h}_i}$ and $\set{c^j_1, \ldots, c^j_n} \subseteq \nodesof{\gamma_j} = \dom{\mathsf{h}_j}$, respectively. 
Since $\dom{\mathsf{h}_i} \cap \dom{\mathsf{h}_j} = \emptyset$, we obtain $\set{c^i_1, \ldots, c^i_n} \cap \set{c^j_1, \ldots, c^j_n} = \emptyset$. Since the choices of $(c^i_1, p_1, \ldots, c^i_n, p_n) \in \mathcal{I}_i$ and $(c^j_1, p_1, \ldots, c^j_n, p_n) \in \mathcal{I}_j$ are arbitrary, we obtain $\mathcal{I}_i \cap \mathcal{I}_j = \emptyset$. % \end{compactitem} Consequently, we define $\gamma \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C}_0, \mathcal{I}_0, \varrho) \bullet \ldots \bullet (\mathcal{C}_h, \mathcal{I}_h, \varrho)$ and conclude by proving the following points: \begin{compactitem} % \item $\mathsf{h} \in \gaifman{\gamma}$: we prove that $\dom{\mathsf{h}} = \bigcup_{\ell=0}^h \dom{\mathsf{h}_\ell} = \nodesof{\gamma}$, as required by Def. \ref{def:gaifman-heap}. The conditions (\ref{it21:gaifman-heap}-\ref{it23:gaifman-heap}) for $c_0=\overline{\store}(x_1)$ are by the definition of $\overline{\psi}$; for $c_0 \in \dom{\mathsf{h}_\ell}$ these conditions follow from $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$. ``$\subseteq$'' Let $c \in \dom{\mathsf{h}}$ be a component. If $c \in \dom{\mathsf{h}_0} = \set{\overline{\store}(x_1)}$, then $c \in \mathcal{C}_0 \subseteq \nodesof{\gamma_0} \subseteq \nodesof{\gamma}$. Else $c \in \dom{\mathsf{h}_\ell}$, for some $\ell \in \interv{1}{h}$, then $c \in \nodesof{\gamma_\ell} \subseteq \nodesof{\gamma}$, because $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$. ``$\supseteq$'' Let $c \in \nodesof{\gamma} = \bigcup_{\ell=0}^h \nodesof{\gamma_\ell}$ be a component. If $c \in \nodesof{\gamma_0}$ then either $c \in \mathcal{C}_0$ or $c$ occurs in some interaction from $\mathcal{I}_0$. If $c \in \mathcal{C}_0$ then $c = \overline{\store}(x_1) \in \dom{\mathsf{h}_0} \subseteq \dom{\mathsf{h}}$, by the definition of $\mathcal{C}_0$. Else there exists an interaction $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}_0$, such that $c \in \set{c_1, \ldots, c_n}$. 
In this case $c = \overline{\store}'(z)$, for some variable $z$ that occurs in an interaction atom from $\psi$. Since $\Delta$ is progressing, $z \in \set{z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}}}$, for some $\ell \in \interv{1}{h}$. Because $\gamma_\ell \models^{\overline{\store}'}_\Delta \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, we obtain $c = \overline{\store}'(z) \in \nodesof{\gamma_\ell}$, by Lemma \ref{lemma:progressing-sid}, and $c \in \dom{\mathsf{h}_\ell} \subseteq \dom{\mathsf{h}}$, because $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$. If $c \in \nodesof{\gamma_\ell}$, for some $\ell \in \interv{1}{h}$, we have $c \in \dom{\mathsf{h}_\ell} \subseteq \dom{\mathsf{h}}$, because $\mathsf{h}_\ell \in \gaifman{\gamma_\ell}$. % \item $\gamma \models^{\overline{\store}}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$: Let the stem of the above rule from ${\overline{\Delta}}$ be: \[\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \psi * \pi * \Asterisk_{\ell=1}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})\] Since $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho_\ell) \models^{\overline{\store}'}_\Delta \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$ and $\varrho$ agrees with $\varrho_\ell$ over $\nodesof{\gamma_\ell}$, it follows that $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho) \models^{\overline{\store}'}_\Delta \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, for all $\ell \in \interv{1}{h}$. Moreover, $(\mathcal{C}_0, \mathcal{I}_0, \varrho) \models^{\overline{\store}'} \psi$ by definition, and $(\emptyset, \emptyset, \varrho) \models^{\overline{\store}'} \pi$, because $\emptyset \Vdash^{\overline{\store}'} \pi$. 
Altogether, we obtain $\gamma \models^{\overline{\store}'}_\Delta \psi * \pi * \Asterisk_{\ell=1}^h \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, leading to $\gamma \models^{\overline{\store}}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. \qed % \end{compactitem} \end{proofE} We state below the main result of this section on the complexity of the entailment problem. The upper bounds follow from a many-one reduction of $\entl{\Delta}{\predname{A}}{\predname{B}}$ to the \textsf{SL}\ entailment $\xannot{}{\iota}{\predname{A}}(x_1,\ldots,x_{\arityof{\predname{A}}}, \gaifimg{x_1}, \ldots,\gaifimg{x_{\arityof{\predname{A}}}}) \slmodels_{\scriptscriptstyle\slsid} \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} \exists \gaifimg{x_{\arityof{\predname{A}}+1}} \ldots \exists \gaifimg{x_{\arityof{\predname{B}}}} ~.~ \\ \xannot{}{\iota'}{\predname{B}}(x_1,\ldots,x_{\arityof{\predname{B}}}, \gaifimg{x_1},\ldots,\gaifimg{x_{\arityof{\predname{B}}}})$, in combination with the upper bound provided by Theorem \ref{thm:sl-entailment}, for \textsf{SL}\ entailments. If $k < \infty$, the complexity is tight for \textsf{CL}, whereas gaps occur for $k=\infty, \ell<\infty$ and $k=\infty, \ell=\infty$, due to the cut-off on the degree bound (Prop. \ref{prop:bound-cutoff}), which impacts the size of ${\overline{\Delta}}$ and the time needed to generate it from $\Delta$. \begin{theoremE}\label{thm:entailment} If $\Delta$ is progressing, connected and e-restricted and, moreover, $\bound{\Delta}{\predname{A}}$ has a positive answer, then $\klentl{\Delta}{\predname{A}}{\predname{B}}{k}{\ell}$ is in $2\mathsf{EXP}$, $\klentl{\Delta}{\predname{A}}{\predname{B}}{\infty}{\ell}$ is in $3\mathsf{EXP}$\ $\cap$ $2\mathsf{EXP}$-hard, and $\entl{\Delta}{\predname{A}}{\predname{B}}$ is in $4\mathsf{EXP}$\ $\cap$ $2\mathsf{EXP}$-hard. \end{theoremE} \begin{proofE} The proof consists of three parts.
(1) We reduce $\entl{\Delta}{\predname{A}}{\predname{B}}$ to an equivalent \textsf{SL}\ entailment problem, for a progressing, connected and e-restricted SID. (2) This reduction provides upper bounds for $\klentl{\Delta}{\predname{A}}{\predname{B}}{k}{\ell}$, in the cases $k,\ell<\infty$, $k=\infty,\ell<\infty$ and $k=\ell=\infty$, respectively. (3) We give a lower bound for $\entl{\Delta}{\predname{A}}{\predname{B}}$, by reduction from \textsf{SL}\ entailment. \vspace*{\baselineskip}\noindent(1) We prove that, for each map $\iota : \interv{1}{\arityof{\predname{A}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$ there exists a map $\iota' : \interv{1}{\arityof{\predname{B}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$, such that: \begin{align*} \predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}}) & \models_\Delta & \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} ~.~ \predname{B}(x_1,\ldots,x_{\arityof{\predname{B}}}) ~\iff \\ \xannot{}{\iota}{\predname{A}}(x_1,\ldots,x_{\arityof{\predname{A}}},\gaifimg{x_1},\ldots,\gaifimg{x_{\arityof{\predname{A}}}}) & \slmodels_{\scriptscriptstyle\slsid} & \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} \exists \gaifimg{x_{\arityof{\predname{A}}+1}} \ldots \exists \gaifimg{x_{\arityof{\predname{B}}}} ~.~ \\ && \xannot{}{\iota'}{\predname{B}}(x_1,\ldots,x_{\arityof{\predname{B}}},\gaifimg{x_1},\ldots,\gaifimg{x_{\arityof{\predname{B}}}}) \end{align*} \noindent``$\Rightarrow$'' Let $\mathsf{h}$ be a heap and $\overline{\store}$ be a store, such that $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \xannot{}{\iota}{\predname{A}}(x_1,\ldots,x_{\arityof{\predname{A}}},\gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{A}}}})$.
By Lemma \ref{lemma:gaifman-completeness}, we have $\mathsf{h}(\overline{\store}(x_i)) = \overline{\store}(\gaifimg{x_i})$, for all $i \in \interv{1}{\arityof{\predname{A}}}$ and, moreover, there exists a configuration $\gamma$, such that $\mathsf{h} \in \gaifman{\gamma}$ and $\gamma \models^{\overline{\store}}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. By the hypothesis, we obtain $\gamma \models^{\overline{\store}}_\Delta \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} ~.~ \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$, hence there exists a store $\overline{\store}'$ that agrees with $\overline{\store}$ over $x_1, \ldots, x_{\arityof{\predname{A}}}$, such that $\gamma \models^{\overline{\store}'}_\Delta \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$. By Lemma \ref{lemma:gaifman-soundness}, there exists a store $\overline{\store}''$ that agrees with $\overline{\store}'$ over $x_1, \ldots, x_{\arityof{\predname{B}}}$, such that $\mathsf{h}(\overline{\store}''(x_i)) = \overline{\store}''(\gaifimg{x_i})$, for all $i \in \interv{1}{\arityof{\predname{B}}}$ and $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}''} \xannot{}{\iota'}{\predname{B}}(x_1, \ldots, x_{\arityof{\predname{B}}}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{B}}}})$, for some map $\iota' : \interv{1}{\arityof{\predname{B}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$, because $\mathsf{h} \in \gaifman{\gamma}$. Hence, $\overline{\store}''$ agrees with $\overline{\store}$ over $x_1, \ldots, x_{\arityof{\predname{A}}}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{A}}}}$, thus we obtain \(\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} \exists \gaifimg{x_{\arityof{\predname{A}}+1}} \ldots \exists \gaifimg{x_{\arityof{\predname{B}}}} ~.~ \xannot{}{\iota'}{\predname{B}}(x_1, \ldots, x_{\arityof{\predname{B}}}, \gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{B}}}})\).
\vspace*{\baselineskip}\noindent``$\Leftarrow$'' Let $\gamma$ be a configuration, $\nu$ be a store such that $\gamma \models_\Delta^\nu \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$ and $\mathsf{h} \in \gaifman{\gamma}$ be a heap. Clearly, such a heap exists, for any given configuration, by Def. \ref{def:gaifman-heap}. By Lemma \ref{lemma:gaifman-soundness}, there exists a map $\iota : \interv{1}{\arityof{\predname{A}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$ and a store $\overline{\store}$ that agrees with $\nu$ over $x_1, \ldots, x_{\arityof{\predname{A}}}$, such that $\overline{\store}(\gaifimg{x_i}) = \mathsf{h}(\nu(x_i))$, for all $i \in \interv{1}{\arityof{\predname{A}}}$ and $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}} \xannot{}{\iota}{\predname{A}}(x_1,\ldots,x_{\arityof{\predname{A}}},\gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{A}}}})$. By the hypothesis, we have $\mathsf{h} \slmodels_{\scriptscriptstyle\slsid}^{\overline{\store}'} \xannot{}{\iota'}{\predname{B}}(x_1,\ldots,x_{\arityof{\predname{B}}},\gaifimg{x_1},\ldots,\gaifimg{x_{\arityof{\predname{B}}}})$, for some map $\iota' : \interv{1}{\arityof{\predname{B}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$ and a store $\overline{\store}'$ that agrees with $\overline{\store}$ over $x_1,\ldots,x_{\arityof{\predname{A}}}$ and $\gaifimg{x_1}, \ldots, \gaifimg{x_{\arityof{\predname{A}}}}$. By Lemma \ref{lemma:gaifman-completeness}, we have $\overline{\store}'(\gaifimg{x_i}) = \mathsf{h}(\overline{\store}(x_i)) = \overline{\store}(\gaifimg{x_i})$, for all $i \in \interv{1}{\arityof{\predname{A}}}$ and there exists a configuration $\gamma'$, such that $\mathsf{h} \in \gaifman{\gamma'}$ and $\gamma' \models^{\overline{\store}'}_\Delta \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$. Since $\mathsf{h} \in \gaifman{\gamma} \cap \gaifman{\gamma'}$, by Def.
\ref{def:gaifman-heap}, we obtain $\gamma=\gamma'$, hence $\gamma \models^{\overline{\store}'}_\Delta \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$. Since, moreover, $\overline{\store}'$, $\overline{\store}$ and $\nu$ all agree over $x_1, \ldots, x_{\arityof{\predname{A}}}$, we obtain $\gamma \models^\nu_\Delta \exists x_{\arityof{\predname{A}}+1} \ldots \exists x_{\arityof{\predname{B}}} ~.~ \predname{B}(x_1, \ldots, x_{\arityof{\predname{B}}})$. \vspace*{\baselineskip}\noindent Since $\Delta$ is progressing and connected, ${\overline{\Delta}}$ is progressing and connected as well. Moreover, ${\overline{\Delta}}$ is e-restricted, because $\Delta$ is e-restricted and the construction of ${\overline{\Delta}}$ only introduces equalities, not disequalities. \vspace*{\baselineskip}\noindent(2) The upper bound relies on the result of \cite[Theorem 32]{EchenimIosifPeltier21b}, which gives a $2^{2^{\poly{\width{{\overline{\Delta}}} \cdot \log\size{{\overline{\Delta}}}}}}$ upper bound for \textsf{SL}\ entailments. Note that the number of variables in each rule from ${\overline{\Delta}}$ is the number of variables in its stem rule multiplied by $\mathfrak{K}+1$, hence \(\maxwidthof{{\overline{\Delta}}} \leq \maxwidthof{\Delta} \cdot (\mathfrak{K}+1) = \maxwidthof{\Delta} \cdot \mathfrak{B} \cdot \maxintersize{\Delta} \cdot 2^{\mathcal{O}(\maxintersize{\Delta})}\), because $\mathfrak{K} = \pos{0}{M+1}{N} = 1 + \mathfrak{B} \cdot \sum_{\ell=1}^M \lenof{\tau_\ell} + N = \mathfrak{B} \cdot \maxintersize{\Delta} \cdot 2^{\mathcal{O}(\maxintersize{\Delta})}$.
The time needed to build ${\overline{\Delta}}$ and its size are bounded as follows: \[\begin{array}{rcl} \sizeof{{\overline{\Delta}}} & \leq & \cardof{{\overline{\Delta}}} \cdot \maxwidthof{{\overline{\Delta}}}\text{, since there are $2^{\mathfrak{B} \cdot M \cdot \maxarityof{\Delta}}$ maps $\iota : \interv{1}{\arityof{\predname{A}}} \times \interv{1}{M} \rightarrow 2^{\interv{0}{\mathfrak{B}-1}}$} \\ & \leq & 2^{\mathfrak{B} \cdot M \cdot \maxarityof{\Delta}} \cdot \cardof{\Delta} \cdot \maxwidthof{{\overline{\Delta}}} \\ & = & 2^{\mathfrak{B} \cdot 2^{\maxintersize{\Delta}} \cdot \maxarityof{\Delta}} \cdot \cardof{\Delta} \cdot \mathfrak{B} \cdot \maxintersize{\Delta} \cdot 2^{\mathcal{O}(\maxintersize{\Delta})} \\ & = & \sizeof{\Delta} \cdot 2^{\mathit{poly}{(\mathfrak{B} \cdot 2^{\maxintersize{\Delta}} \cdot \maxarityof{\Delta})}} \end{array}\] By Prop. \ref{prop:bound-cutoff}, we consider the following cases: \begin{compactitem} % \item if $k,\ell < \infty$ then $\mathfrak{B}=\mathit{poly}{(\sizeof{\Delta})}$, thus $\sizeof{{\overline{\Delta}}} = 2^{2^{\mathcal{O}(\sizeof{\Delta})}}$ % \item if $k=\infty$ and $\ell < \infty$ then $\mathfrak{B} = 2^{\mathit{poly}(\size{\Delta})}$, thus $\sizeof{{\overline{\Delta}}} = 2^{2^{2^{\mathcal{O}(\sizeof{\Delta})}}}$ % \item if $k=\infty$ and $\ell=\infty$ then $\mathfrak{B} = 2^{2^{\mathit{poly}(\size{\Delta})}}$, thus $\sizeof{{\overline{\Delta}}} = 2^{2^{2^{2^{\mathcal{O}(\sizeof{\Delta})}}}}$ % \end{compactitem} \vspace*{\baselineskip}\noindent(3) The $2\mathsf{EXP}$-hard lower bound for $\klentl{\Delta}{\predname{A}}{\predname{B}}{\infty}{\ell}$ and $\entl{\Delta}{\predname{A}}{\predname{B}}$ is obtained by reduction from the \textsf{SL}\ entailment problem $\xannot{}{}{\predname{A}}(x_1, \ldots, x_k) \slmodels_{\scriptscriptstyle\slsid} \xannot{}{}{\predname{B}}(x_1, \ldots, x_k)$, where ${\overline{\Delta}}$ is a progressing and connected SID, with no disequalities \cite[Theorem
18]{EchenimIosifPeltier20}. Note that the maximum arity of $\Delta$ cannot be bounded by a constant, in order to obtain $2\mathsf{EXP}$-hardness of the \textsf{SL}\ entailment problem, hence the lower bound does not apply to $\klentl{\Delta}{\predname{A}}{\predname{B}}{k}{\ell}$. The idea of the reduction is to encode each \textsf{SL}\ atomic proposition of the form $x \mapsto (y_1, \ldots, y_\mathfrak{K})$ by the formula $\compact{x} * \interacn{x}{p_0}{y_\mathfrak{K}}{p_\mathfrak{K}}$. Then each model $\mathsf{h}$ of an \textsf{SL}\ predicate atom $\xannot{}{}{\predname{A}}(x_1, \ldots, x_{\#(\xannot{}{}{\predname{A}})})$ is represented by a configuration $\gamma = (\mathcal{C}, \mathcal{I}, \varrho)$, such that $\mathcal{C} = \dom{\mathsf{h}}$ and $\mathcal{I} = \set{(c_0,p_0,\ldots,c_\mathfrak{K},p_\mathfrak{K}) \mid \mathsf{h}(c_0) = \tuple{c_1, \ldots, c_\mathfrak{K}}}$. Since ${\overline{\Delta}}$ is progressing and connected, the \textsf{CL}\ SID $\Delta$, obtained from the reduction, is progressing and connected. Since, moreover, the reduction does not introduce disequalities, $\Delta$ is trivially e-restricted. Because the reduction takes polynomial time, we obtain a $2\mathsf{EXP}$-hard lower bound. \qed \end{proofE} \section{Introduction} Distributed systems are increasingly used as critical parts of the infrastructure of our digital society, as in e.g., datacenters, e-banking and social networking. In order to address maintenance (e.g., replacement of faulty and obsolete network nodes by new ones) and data traffic issues (e.g., managing the traffic inside a datacenter \cite{DBLP:journals/comsur/Noormohammadpour18}), the distributed systems community has recently put massive effort into designing algorithms for \emph{reconfigurable systems}, whose network topologies change at runtime \cite{DBLP:journals/sigact/FoersterS19}.
However, dynamic reconfiguration is an important source of bugs that may result in e.g., denial of service or even data corruption\footnote{ \url{https://status.cloud.google.com/incident/appengine/19007}}. This paper contributes to a logical framework that addresses the timely problems of formal \emph{modeling} and \emph{verification} of reconfigurable distributed systems. The basic building blocks of this framework are \begin{inparaenum}[(i)] % \item a Hoare-style program proof calculus \cite{AhrensBozgaIosifKatoen21} used to write formal proofs of correctness of reconfiguration programs, and % \item an invariant synthesis method \cite{BozgaIosifSifakis21} that proves the safety (i.e., absence of reachable error configurations) of the configurations defined by the assertions that annotate a reconfiguration program. % \end{inparaenum} These methods are combined to prove that an initially correct distributed system cannot reach an error state, following the execution of a given reconfiguration sequence. The assertions of the proof calculus are written in a logic that defines infinite sets of configurations, consisting of \emph{components} (i.e., processes running on different nodes of the network) connected by \emph{interactions} (i.e., multi-party channels along which messages between components are transferred). Systems that share the same architectural style (e.g., pipeline, ring, star, tree, etc.) and differ by the number of components and interactions are described using inductively defined predicates. Such configurations can be modified either by \begin{inparaenum}[(a)] % \item\label{it:reconfiguration} adding or removing components and interactions (reconfiguration), or % \item\label{it:havoc} changing the local states of components, by firing interactions.
\end{inparaenum} The assertion logic views components and interactions as \emph{resources}, which can be created or deleted, in the spirit of resource logics \emph{\`{a} la} Bunched Implications \cite{PymOHearn99}, or Separation Logic \cite{Reynolds02}. The main advantage of using resource logics is their support for \emph{local reasoning} \cite{CalcagnoOHearnYan07}: reconfiguration actions are specified by pre- and postconditions mentioning only the resources involved, while framing out the rest of the configuration. The price to pay for this expressive power is the difficulty of automating the reasoning in these logics. This paper makes several contributions in the direction of proof automation, by studying the complexity of the \emph{satisfiability} and \emph{entailment} problems, for the configuration logic under consideration. Additionally, we study the complexity of \ifLongVersion two robustness properties \cite{JansenKatelaanMathejaNollZuleger17}, namely \emph{tightness} (are all interactions entirely connected to components?) and \else a robustness property \cite{JansenKatelaanMathejaNollZuleger17}, namely \fi \emph{degree boundedness} (is every component involved in a bounded number of interactions?). In particular, the latter problem is used as a prerequisite for defining a fragment with a decidable entailment problem. \subsection{Motivating Example} \label{sec:running-example} The logic studied in this paper is motivated by the need for an assertion language that supports reasoning about dynamic reconfigurations in a distributed system. For instance, consider a distributed system consisting of a finite (but unknown) number of \emph{components} (processes) placed in a ring, executing the same finite-state program and communicating via \emph{interactions} that connect the \emph{out} port of a component to the \emph{in} port of its right neighbour, in a round-robin fashion, as in Fig. \ref{fig:ring} (a).
The behavior of a component is a machine with two states, $\mathsf{T}$ and $\mathsf{H}$, denoting whether the component has a token ($\mathsf{T}$) or not ($\mathsf{H}$). A component $c_i$ without a token may receive one, by executing a transition $\mathsf{H} \arrow{\mathit{in}}{} \mathsf{T}$, simultaneously with its left neighbour $c_j$, that executes the transition $\mathsf{T} \arrow{\mathit{out}}{} \mathsf{H}$. Then, we say that the interaction $(c_j, \mathit{out}, c_i, \mathit{in})$ has fired, moving a token one position to the right in the ring. Note that there can be more than one token, moving independently in the system, as long as no token overtakes another token. The token ring system is formally specified by the following inductive rules: \begin{align*} \ring{h}{t}(x) & \leftarrow \exists y \exists z ~.~ \compactin{x}{q} * \interactwo{x}{out}{z}{in} * \chain{h'}{t'}(z,y) * \interactwo{y}{out}{x}{in} \\ \chain{h}{t}(x,y) & \leftarrow \exists z.~\compactin{x}{q} * \interac{x}{out}{z}{in} * \chain{h'}{t'}(z,y) \\ \chain{0}{1}(x,x) & \leftarrow \compactin{x}{\mathsf{T}} \hspace*{8mm} \chain{1}{0}(x,x) \leftarrow \compactin{x}{\mathsf{H}} \hspace*{8mm} \chain{0}{0}(x,x) \leftarrow \compact{x} \\ \text{where } h' & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \left\{\begin{array}{ll} \max(h-1,0) & \text{, if } q = \mathsf{H} \\ h & \text{, if } q = \mathsf{T} \end{array}\right. \text{ and } t' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \left\{\begin{array}{ll} \max(t-1,0) & \text{, if } q = \mathsf{T} \\ t & \text{, if } q = \mathsf{H} \end{array}\right. \end{align*} The predicate $\ring{h}{t}(x)$ describes a ring with at least two components, such that at least $h$ (resp. $t$) components are in state $\mathsf{H}$ (resp. $\mathsf{T}$).
The ring consists of a component $x$ in state $q$, described by the formula $\compactin{x}{q}$, an interaction from the $\mathit{out}$ port of $x$ to the $\mathit{in}$ port of another component $z$, described as $\interactwo{x}{out}{z}{in}$, a separate chain of components stretching from $z$ to $y$ ($\chain{h'}{t'}(z,y)$), and an interaction connecting the $\mathit{out}$ port of component $y$ to the $\mathit{in}$ port of component $x$ ($\interactwo{y}{out}{x}{in}$). Inductively, a chain consists of a component $\compactin{x}{q}$, an interaction $\interactwo{x}{out}{z}{in}$ and a separate $\chain{h'}{t'}(z,y)$. Fig. \ref{fig:ring} (b) depicts the unfolding of the inductive definition of the token ring, with the existentially quantified variables $z$ from the above rules $\alpha$-renamed to $z^1, z^2, \ldots$ to avoid confusion. \begin{figure}[t!] \begin{center} \begin{minipage}{.55\textwidth} \hspace*{-5mm}\centerline{\input{ring3.pdf_t}} \end{minipage} \begin{minipage}{.44\textwidth} {\begin{lstlisting} $\set{\ring{h}{t}(y)}$ /*assuming $h\geq2,~t\geq1$*/ $\set{\boxaround{\compactin{y}{\mathsf{H}} * \interac{y}{out}{z}{in} * \chain{h-1}{t}(z,x)} * \interac{x}{out}{y}{in}}$ disconnect(x.$\mathit{out}$, y.$\mathit{in}$); $\set{\boxaround{\compactin{y}{\mathsf{H}}} * \interac{y}{out}{z}{in} * \boxaround{\chain{h-1}{t}(z,x)}}$ disconnect(y.$\mathit{out}$, z.$\mathit{in}$); $\set{\compactin{y}{\mathsf{H}} * \boxaround{\chain{h-1}{t}(z,x)}}$ delete(y); $\set{\boxaround{\chain{h-1}{t}(z,x)}}$ connect(x.$\mathit{out}$, z.$\mathit{in}$) $\set{\chain{h-1}{t}(z,x) * \interac{x}{out}{z}{in}}$ $\set{\ring{h-1}{t}(z)}$ \end{lstlisting}} \vspace*{-.7\baselineskip} \centerline{\footnotesize(c)} \end{minipage} \end{center} \vspace*{-\baselineskip} \caption{Inductive Specification and Reconfiguration of a Token Ring} \vspace*{-2\baselineskip} \label{fig:ring} \end{figure} A \emph{reconfiguration program} takes as input a mapping of program variables to components and executes a
sequence of \emph{basic operations} i.e., component/interaction creation/deletion, involving the components and interactions denoted by these variables. For instance, the reconfiguration program in Fig. \ref{fig:ring} (c) takes as input three adjacent components, mapped to the variables $\mathtt{x}$, $\mathtt{y}$ and $\mathtt{z}$, respectively, removes the component $\mathtt{y}$ together with its left and right interactions and reconnects $\mathtt{x}$ directly with $\mathtt{z}$. Programming reconfigurations is error-prone, because the interleaving between reconfiguration actions and interactions in a distributed system may lead to bugs that are hard to trace. For instance, if a reconfiguration program removes the last component in state $\mathsf{T}$ (resp. $\mathsf{H}$) from the system, no token transfer interaction may fire and the system deadlocks. We prove absence of such errors using a Hoare-style proof system \cite{AhrensBozgaIosifKatoen21}, based on the above logic as assertion language. For instance, the proof from Fig. \ref{fig:ring} (c) shows that the reconfiguration sequence applied to a component $\mathtt{y}$ in state $\mathsf{H}$ (i.e., $\compactin{y}{\mathsf{H}}$) in a ring with at least $h\geq2$ components in state $\mathsf{H} $ and at least $t\geq1$ components in state $\mathsf{T}$ leads to a ring with at least $h-1$ components in state $\mathsf{H}$ and at least $t$ in state $\mathsf{T}$; note that the states of the components may change during the execution of the reconfiguration program, as tokens are moved by interactions. \ifLongVersion For reasons of proof scalability, a basic operation is specified only with regard to the components and interactions required to avoid faulting. For instance $\hoare{\compactin{x}{q}}{\predname{delete}(x)}{\predname{emp}}$ (resp. $\hoare{\interactwo{x}{out}{y}{in}}{\predname{disconnect}(\mathtt{x}.\mathit{out}, \mathtt{y}.\mathit{in})}{\predname{emp}}$) means that $\predname{delete}$ (resp. 
$\predname{disconnect}$) requires a component (resp. interaction) and returns an empty configuration, whereas $\hoare{\predname{emp}}{\predname{connect}(\mathtt{x}.\mathit{out},\mathtt{y}.\mathit{in})}{\interactwo{x}{out}{y}{in}}$ means that $\predname{connect}$ requires nothing and creates an interaction between the given ports of the given components. These local specifications are plugged into a context described by a frame formula $F$, using the \emph{frame rule} $\hoare{\phi}{\predname{P}}{\psi} \Rightarrow \hoare{\phi*\boxaround{F}}{\predname{P}}{\psi*F}$; for readability, the frame formul{\ae} (from the preconditions of the conclusion of the frame rule applications) are enclosed in boxes. \else The proof in Fig. \ref{fig:ring} (c) uses \emph{local axioms} specifying, for each basic operation, only those components and interactions required to avoid faulting, with a \emph{frame rule} $\hoare{\phi}{\predname{P}}{\psi} \Rightarrow \hoare{\phi*\boxaround{F}}{\predname{P}}{\psi*F}$; for readability, the frame formul{\ae} (from the preconditions of the conclusion of the frame rule applications) are enclosed in boxes. \fi The proof also uses the \emph{consequence rule} $\hoare{\phi}{P}{\psi} \Rightarrow \hoare{\phi'}{P}{\psi'}$ that applies if $\phi'$ is stronger than $\phi$ and $\psi'$ is weaker than $\psi$. The side conditions of the consequence rule require checking the validity of the entailments $\ring{h}{t}(y) \models \exists x \exists z ~.~ \interac{x}{out}{y}{in} * \compactin{y}{\mathsf{H}} * \interac{y}{out}{z}{in} * \chain{h-1}{t}(z,x)$ and $\chain{h-1}{t}(z,x) * \interac{x}{out}{z}{in} \models \ring{h-1}{t}(z)$, for all $h\geq2$ and $t\geq1$. These side conditions can be automatically discharged using the results on the decidability of entailments given in this paper. Additionally, checking the satisfiability of a precondition is used to detect trivially valid Hoare triples. 
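The reconfiguration of Fig. \ref{fig:ring} (c) can be sanity-checked on a concrete instance. The following Python sketch is ours and not part of the formal development: it models a configuration as a set of components, a set of interactions and a state map, fires one token-transfer interaction, then executes the disconnect/disconnect/delete/connect sequence; all identifiers (\texttt{Config}, \texttt{fire}, \texttt{remove\_component}) are illustrative.

```python
from dataclasses import dataclass, field

# Illustrative model (ours, not the paper's formal semantics): a configuration
# is a set of components, a set of interactions and a state map.
@dataclass
class Config:
    components: set = field(default_factory=set)
    interactions: set = field(default_factory=set)  # tuples (src, 'out', dst, 'in')
    state: dict = field(default_factory=dict)       # component -> 'H' or 'T'

def make_ring(states):
    """Ring c_0 -> c_1 -> ... -> c_{n-1} -> c_0 with the given token states."""
    g = Config()
    n = len(states)
    for i, q in enumerate(states):
        g.components.add(i)
        g.state[i] = q
        g.interactions.add((i, 'out', (i + 1) % n, 'in'))
    return g

def fire(g, inter):
    """Fire a token transfer: the left component does T -> H, the right H -> T."""
    src, _, dst, _ = inter
    assert inter in g.interactions
    assert g.state[src] == 'T' and g.state[dst] == 'H'
    g.state[src], g.state[dst] = 'H', 'T'

def remove_component(g, x, y, z):
    """The reconfiguration sequence of the figure: cut y out, close the ring."""
    g.interactions.remove((x, 'out', y, 'in'))  # disconnect(x.out, y.in)
    g.interactions.remove((y, 'out', z, 'in'))  # disconnect(y.out, z.in)
    g.components.remove(y)                      # delete(y) ...
    del g.state[y]                              # ... y must be in state H here
    g.interactions.add((x, 'out', z, 'in'))     # connect(x.out, z.in)

g = make_ring(['H', 'H', 'T'])   # h = 2 components in state H, t = 1 in state T
fire(g, (2, 'out', 0, 'in'))     # a token moves from c_2 to c_0 before we act
remove_component(g, 0, 1, 2)     # delete c_1, which is in state H
assert len(g.components) == 2                  # one H-component fewer
assert sorted(g.state.values()) == ['H', 'T']
assert (0, 'out', 2, 'in') in g.interactions   # the ring is closed again
```

The final assertions mirror the postcondition $\ring{h-1}{t}(z)$: one fewer component in state $\mathsf{H}$, the same number in state $\mathsf{T}$, and the ring closed again; the interleaved \texttt{fire} illustrates why the proof must tolerate state changes during reconfiguration.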
\subsection{Related Work} \label{sec:related-work} The formal modeling of the coordinating architectures of component-based systems has received much attention, with the development of architecture description languages (ADL), such as BIP \cite{basu2006modeling} or REO \cite{Arbab04}. Many such ADLs have extensions that describe programmed reconfiguration, e.g., \cite{DR-BIP-STTT,KrauseMaraikarLazovikArbab11}, classified according to the underlying formalism used to define their operational semantics: \emph{process algebras} \cite{CavalcanteBO15,magee1996dynamic}, \emph{graph rewriting} \cite{taentzer1998dynamic,DBLP:journals/scp/WermelingerF02,LeMetayer}, \emph{chemical reactions} \cite{wermelinger1998towards} (see the surveys \cite{bradbury2004survey,rumpe2017classification}). Unfortunately, only a few ADLs support formal verification, mainly in the flavour of runtime verification \cite{BucchiaroneG08,DormoyKL10,LanoixDK11,DBLP:conf/sac/El-HokayemBS21} or finite-state model checking \cite{Clarke08}. Parameterized verification of unbounded networks of distributed processes uses mostly hard-coded coordinating architectures (see \cite{BloemJacobsKhalimovKonnovRubinVeithWidder15} for a survey). A first attempt at specifying architectures by logic is the \emph{interaction logic} of Konnov et al. \cite{KonnovKWVBS16}, a combination of Presburger arithmetic with monadic uninterpreted function symbols, that can describe cliques, stars and rings. More structured architectures (pipelines and trees) can be described using a second-order extension \cite{MavridouBBS17}. However, these interaction logics are undecidable and lack support for automated reasoning. Specifying parameterized component-based systems by inductive definitions is not new. \emph{Network grammars} \cite{ShtadlerGrumberg89,LeMetayer,Hirsch} use context-free grammar rules to describe systems with linear (pipeline, token-ring) architectures obtained by composition of an unbounded number of processes.
In contrast, we use predicates of unrestricted arities to describe architectural styles that are, in general, more complex than trees. Moreover, we write inductive definitions using a resource logic, suitable also for writing Hoare logic proofs of reconfiguration programs, based on local reasoning \cite{CalcagnoOHearnYan07}. Local reasoning about concurrent programs has traditionally been the focus of Concurrent Separation Logic (\textsf{CSL}), based on a parallel composition rule \cite{DBLP:journals/tcs/OHearn07}, initially with a non-interfering (race-free) semantics \cite{Brookes:2016} and later combining ideas of assume- and rely-guarantee \cite{Owicki1978,DBLP:phd/ethos/Jones81} with local reasoning \cite{FengFerreiraShao07,Vafeiadis07} and abstract notions of framing \cite{Dinsdale-Young10,Dinsdale-Young13,Farka21}. However, the body of work on CSL deals almost entirely with shared-memory multithreading programs, instead of distributed systems, which are the aim of our work. In contrast, we develop a resource logic in which the processes do not just share and own resources, but become mutable resources themselves. The techniques developed in this paper are inspired by existing techniques for similar problems in the context of Separation Logic (\textsf{SL}) \cite{Reynolds02}. For instance, we use an abstract domain similar to the one defined by Brotherston et al. \cite{DBLP:conf/csl/BrotherstonFPG14} for checking satisfiability of symbolic heaps in \textsf{SL}\ and reduce a fragment of the entailment problem in our logic to \textsf{SL}\ entailment \cite{EchenimIosifPeltier21}. In particular, the use of existing automated reasoning techniques for \textsf{SL}\ has pointed out several differences between the expressiveness of our logic and that of \textsf{SL}.
First, the configuration logic describes hypergraph structures, in which the edges are $\ell$-tuples of vertices, for $\ell\geq2$, instead of the directed graphs described by \textsf{SL}; here $\ell$ is a parameter of the problem, and considering $\ell$ to be a constant strictly decreases the complexity of the problem. Second, the degree (number of hyperedges containing a given vertex) is unbounded, unlike in \textsf{SL}, where the degree of heaps is constant. Therefore, we dedicate an entire section (\S\ref{sec:boundedness}) to the problem of deciding the existence of a bound (and computing a cut-off) on the degree of the models of a formula, used as a prerequisite for the encoding of the entailment problems from the configuration logic as \textsf{SL}\ entailments. \section{Satisfiability} \label{sec:satisfiability} We show that the satisfiability problem (Def. \ref{def:decision}, point \ref{decision:sat}) is decidable, using a method similar to the one pioneered by Brotherston et al.~\cite{DBLP:conf/csl/BrotherstonFPG14}, for checking satisfiability of inductively defined symbolic heaps in \textsf{SL}. We recall that a formula $\pi$ is \emph{pure} if and only if it is a separating conjunction of equalities, disequalities and state atoms. \begin{definition}\label{def:closure} The \emph{closure} $\closureof{\pi}$ of a pure formula $\pi$ is the limit of the sequence $\pi^0, \pi^1, \pi^2, \ldots$ such that $\pi^0 = \pi$ and, for each $i \geq 0$, $\pi^{i+1}$ is obtained by joining (with $*$) all of the following formul{\ae} to $\pi^i$: \begin{compactitem} % \item $x = z$, where $x$ and $z$ are the same variable, or $x = y$ and $y = z$ both occur in $\pi^i$, % \item $x \neq z$, where $x = y$ and $y \neq z$ both occur in $\pi^i$, or % \item $\compin{y}{q}$, where $\compin{x}{q}$ and $x = y$ both occur in $\pi^i$. % \end{compactitem} \end{definition} Because only finitely many such formul{\ae} can be added, the sequence of pure formul{\ae} from Def.
\ref{def:closure} is bound to stabilize after polynomially many steps. A pure formula is satisfiable if and only if its closure does not contain contradictory literals i.e., $x = y$ and $x \neq y$, or $\compin{x}{q}$ and $\compin{x}{q'}$, for $q \neq q' \in \mathcal{Q}$. We write $x \formeq{\pi} y$ (resp. $x ~\formneq{\pi} y$) if and only if $x = y$ (resp. $x \neq y$) occurs in $\closureof{\pi}$ and $\notof{x \formeq{\pi} y}$ (resp. $\notof{x ~\formneq{\pi} y}$) whenever $x \formeq{\pi} y$ (resp. $x \formneq{\pi} y$) does not hold. Note that e.g., $\notof{x \formeq{\pi} y}$ is not the same as $x ~\formneq{\pi} y$. \begin{myLemmaE}\label{lemma:pureform-sat} A pure formula $\pi$ is satisfiable if and only if the following hold: \begin{enumerate} % \item for all $x, y \in \fv{\pi}$, $x=y$ and $x\neq y$ do not occur both in $\closureof{\pi}$, % \item for all $x \in \fv{\pi}$ and $q \neq r \in \mathcal{Q}$, $\compin{x}{q}$ and $\compin{x}{r}$ do not occur both in $\closureof{\pi}$. % \end{enumerate} \end{myLemmaE} \begin{proofE} A pure formula $\pi$ is satisfiable if and only if there exists a store $\nu$ and a configuration $(\emptyset, \emptyset, \varrho)$, such that $(\emptyset,\emptyset,\varrho) \models^\nu \pi$. ``$\Leftarrow$'' It is easy to see that $\formeq{\pi}$ is an equivalence relation, for each pure formula $\pi$. Given any state map $\varrho$, we define $\nu$ by assigning each equivalence class of $\formeq{\pi}$ a distinct component $c$, such that $\varrho(c)=q$ if $\compin{y}{q}$ occurs in $\pi$, for a variable $y$ in the class. By the conditions of the Lemma, $\varrho$ and $\nu$ are well defined and we have $(\emptyset,\emptyset,\varrho) \models^\nu \pi$, by definition. ``$\Rightarrow$'' If $(\emptyset,\emptyset,\varrho) \models^\nu \pi$ then $(\emptyset,\emptyset,\varrho) \models^\nu \closureof{\pi}$, because each additional formula in $\closureof{\pi}$ is a logical consequence of $\pi$. 
Since $\closureof{\pi}$ is satisfiable, the two conditions of the Lemma must hold. \qed \end{proofE} \emph{Base tuples} constitute the abstract domain used by the algorithms for checking satisfiability (point \ref{decision:sat} of Def. \ref{def:decision}) and boundedness (point \ref{decision:bound} of Def. \ref{def:decision}), defined as follows: \begin{definition}\label{def:base-tuple} A \emph{base tuple} is a triple $\mathfrak{t} = (\comps^\sharp, \interacs^\sharp, \pi)$, where: \begin{compactitem} % \item $\comps^\sharp \in \mpow{\mathbb{V}}$ is a multiset of variables denoting present components, % \item $\interacs^\sharp : \mathsf{Inter} \rightarrow \mpow{\mathbb{V}^+}$ maps each interaction type $\tau \in \mathsf{Inter}$ into a multiset of tuples of variables of length $\lenof{\tau}$ each, and % \item $\pi$ is a pure formula. % \end{compactitem} A base tuple is called \emph{satisfiable} if and only if $\pi$ is satisfiable and the following hold: \begin{compactenum} % \item\label{it1:base-tuple} for all $x,y \in \comps^\sharp$, $\notof{x \formeq{\pi} y}$, % \item\label{it2:base-tuple} for all $\tau \in \mathsf{Inter}$, $\tuple{x_1, \ldots, x_{\lenof{\tau}}}, \tuple{y_1, \ldots, y_{\lenof{\tau}}} \in \interacs^\sharp(\tau)$, there exists $i \in \interv{1}{\lenof{\tau}}$ such that $\notof{x_i \formeq{\pi} y_i}$, % \item\label{it3:base-tuple} for all $\tau \in \mathsf{Inter}$, $\tuple{x_1, \ldots, x_{\lenof{\tau}}} \in \interacs^\sharp(\tau)$ and $1 \leq i < j \leq \lenof{\tau}$, we have $\notof{x_i \formeq{\pi} x_j}$. % \end{compactenum} We denote by $\mathsf{SatBase}$ the set of satisfiable base tuples. \end{definition} Note that a base tuple $(\basecomps, \baseinteracs, \pureform)$ is unsatisfiable if $\comps^\sharp$ contains the same variable twice, or if $\interacs^\sharp(\tau)$ contains the same tuple of variables twice for some interaction type $\tau$; hence the use of multisets in the definition of base tuples.
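The closure of Def.~\ref{def:closure} and the conditions of Def.~\ref{def:base-tuple} translate directly into code. The following Python sketch (all names hypothetical; literals are encoded as tuples, and the symmetry of equalities, left implicit in the definition, is closed explicitly) illustrates the satisfiability checks for pure formul{\ae} and base tuples:

```python
from itertools import product

# Literals of a pure formula: ('eq', x, y), ('neq', x, y), ('in', x, q).
def closure(pi):
    """Naive saturation implementing the closure of a pure formula."""
    lits = set(pi)
    variables = {t[1] for t in lits} | {t[2] for t in lits if t[0] != 'in'}
    lits |= {('eq', x, x) for x in variables}          # x = x for every variable
    changed = True
    while changed:
        changed = False
        frozen = list(lits)
        for a in frozen:
            if a[0] == 'eq' and ('eq', a[2], a[1]) not in lits:
                lits.add(('eq', a[2], a[1])); changed = True   # symmetry
        for a, b in product(frozen, repeat=2):
            new = None
            if a[0] == 'eq' and b[0] == 'eq' and a[2] == b[1]:
                new = ('eq', a[1], b[2])               # transitivity
            elif a[0] == 'eq' and b[0] == 'neq' and a[2] == b[1]:
                new = ('neq', a[1], b[2])              # x = y, y != z  =>  x != z
            elif a[0] == 'in' and b[0] == 'eq' and a[1] == b[1]:
                new = ('in', b[2], a[2])               # x in q, x = y  =>  y in q
            if new is not None and new not in lits:
                lits.add(new); changed = True
    return lits

def satisfiable_pure(pi):
    """No contradictory literals may occur in the closure."""
    c = closure(pi)
    if any(k == 'eq' and ('neq', x, y) in c for (k, x, y) in c):
        return False
    states = {}                                        # each component in one state
    return all(states.setdefault(x, q) == q for (k, x, q) in c if k == 'in')

def satisfiable_base_tuple(comps, inters, pi):
    """comps: list (multiset) of variables; inters: dict mapping an
    interaction type to a list (multiset) of variable tuples."""
    if not satisfiable_pure(pi):
        return False
    c = closure(pi)
    eq = lambda x, y: x == y or ('eq', x, y) in c
    # (1) no two component occurrences may denote the same component
    if any(eq(x, y) for i, x in enumerate(comps) for y in comps[i + 1:]):
        return False
    for tuples in inters.values():
        for i, t in enumerate(tuples):
            # (2) two interactions of the same type differ at some position
            if any(all(eq(a, b) for a, b in zip(t, u)) for u in tuples[i + 1:]):
                return False
            # (3) within one interaction, all positions are distinct
            if any(eq(t[a], t[b]) for a in range(len(t))
                   for b in range(a + 1, len(t))):
                return False
    return True
```

For instance, a tuple whose $\comps^\sharp$ contains both $x$ and $y$ with $x = y$ in $\pi$ violates condition \ref{it1:base-tuple} and is rejected by the check above.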
It is easy to see that checking the satisfiability of a given base tuple $(\basecomps, \baseinteracs, \pureform)$ can be done in time $\mathit{poly}(\cardof{\comps^\sharp}+\sum_{\tau\in\mathsf{Inter}} \cardof{\interacs^\sharp(\tau)}+\sizeof{\pi})$. We define a partial \emph{composition} operation on satisfiable base tuples, as follows: \[ (\comps^\sharp_1, \interacs^\sharp_1, \pi_1) \otimes (\comps^\sharp_2, \interacs^\sharp_2, \pi_2) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\comps^\sharp_1 \cup \comps^\sharp_2, \interacs^\sharp_1 \cup \interacs^\sharp_2, \pi_1 * \pi_2) \] where the union of multisets is lifted to functions $\mathsf{Inter} \rightarrow \mpow{\mathbb{V}^+}$ in the usual way. The composition operation $\otimes$ is undefined if $(\comps^\sharp_1, \interacs^\sharp_1, \pi_1) \otimes (\comps^\sharp_2, \interacs^\sharp_2, \pi_2)$ is not satisfiable e.g., if $\comps^\sharp_1 \cap \comps^\sharp_2 \neq \emptyset$, $\interacs^\sharp_1(\tau) \cap \interacs^\sharp_2(\tau) \neq \emptyset$, for some $\tau\in\mathsf{Inter}$, or $\pi_1 * \pi_2$ is not satisfiable. Given a pure formula $\pi$ and a set of variables $X$, the projection $\proj{\pi}{X}$ removes from $\pi$ all atomic propositions $\alpha$, such that $\fv{\alpha} \not\subseteq X$. 
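Under the same hypothetical encoding (multisets as `collections.Counter`, pure formul{\ae} as lists of literals), the partial composition $\otimes$ admits the following minimal sketch; `satisfiable` stands for any check of Def.~\ref{def:base-tuple}, and `None` models the cases where the composition is undefined:

```python
from collections import Counter

def compose(t1, t2, satisfiable):
    """Partial composition of base tuples (C1, I1, pi1) x (C2, I2, pi2).
    Returns None when the result is unsatisfiable, i.e. undefined."""
    (c1, i1, p1), (c2, i2, p2) = t1, t2
    comps = c1 + c2                                  # multiset union of components
    inters = {tau: i1.get(tau, Counter()) + i2.get(tau, Counter())
              for tau in set(i1) | set(i2)}          # pointwise multiset union
    result = (comps, inters, p1 + p2)                # pi1 * pi2
    return result if satisfiable(result) else None
```

E.g., with a check that rejects any repeated component occurrence, composing two tuples that share a component variable yields `None`, matching the case $\comps^\sharp_1 \cap \comps^\sharp_2 \neq \emptyset$.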
The \emph{projection} of a base tuple $(\comps^\sharp,\interacs^\sharp,\pi)$ on a variable set $X$ is formally defined below: \[\begin{array}{rcl} \proj{(\comps^\sharp,\interacs^\sharp,\pi)}{X} & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \left(\comps^\sharp \cap X, \lambda \tau ~.~ \set{\tuple{x_1, \ldots, x_{\lenof{\tau}}} \in \interacs^\sharp(\tau) \mid x_1, \ldots, x_{\lenof{\tau}} \in X}, \proj{\closureof{\constrof{\interacs^\sharp} * \pi}}{X}\right) \\ \text{where } \constrof{\interacs^\sharp} & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \Asterisk_{\tau\in\mathsf{Inter}}\Asterisk_{\tuple{x_1, \ldots, x_{\lenof{\tau}}} \in \interacs^\sharp(\tau)} \Asterisk_{1 \leq i < j \leq \lenof{\tau}} ~x_i \neq x_j \end{array}\] The \emph{substitution} operation $(\comps^\sharp, \interacs^\sharp, \pi)[x_1/y_1, \ldots, x_n/y_n]$ replaces simultaneously each $x_i$ with $y_i$ in $\comps^\sharp$, $\interacs^\sharp$ and $\pi$, respectively. \begin{myTextE} For a store $\nu$, we denote by $\nu[x_1/y_1, \ldots, x_n/y_n]$ the store such that $\nu[x_1/y_1, \ldots, x_n/y_n](x_i) = \nu(y_i)$ and agrees with $\nu$ over $\mathbb{V}\setminus\set{x_1, \ldots, x_n}$. \end{myTextE} We lift the composition, projection and substitution operations to sets of satisfiable base tuples, as usual. \begin{myLemmaE}\label{lemma:pure-subst} Given a formula $\phi$ and a substitution $\sigma = [x_1/y_1, \ldots, x_n/y_n]$, for any configuration $(\mathcal{C}, \mathcal{I}, \varrho)$ and store $\nu$, $(\mathcal{C},\mathcal{I},\varrho) \models^\nu_\Delta \phi\sigma$ only if $(\mathcal{C},\mathcal{I},\varrho) \models^{\nu\sigma} \phi$. \end{myLemmaE} \begin{proofE} By induction on the definition of $\models^\nu_\Delta$. \qed \end{proofE} Next, we define the base tuple corresponding to a quantifier- and predicate-free formula $\phi = \psi * \pi$, where $\psi$ consists of component and interaction atoms and $\pi$ is pure. 
Moreover, since we are interested only in those components and interactions that are visible through a given indexed set of parameters $X = \set{x_1, \ldots, x_n}$, we denote by $\reprof{y}{X}{\pi}$, for a variable $y$, the parameter $x_i$ of least index such that $y \formeq{\pi} x_i$, or $y$ itself if no such parameter exists. We define the following set of base tuples: \[\begin{array}{rcl} \basetupleof{\phi}{X} & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \left\{\begin{array}{ll} \set{(\comps^\sharp, \interacs^\sharp, \pi)} & \text{, if } (\comps^\sharp, \interacs^\sharp, \pi) \text{ is satisfiable} \\ \emptyset & \text{, otherwise} \end{array}\right. \\ \text{where } \comps^\sharp & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \set{\reprof{x}{X}{\pi} \mid \compact{x} \text{ occurs in } \psi} \\ \interacs^\sharp & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \lambda \tuple{p_1, \ldots, p_s} . \Set{\Tuple{\reprof{y_1}{X}{\pi}, \ldots, \reprof{y_s}{X}{\pi}} \mid \interacn{y_1}{p_1}{y_s}{p_s} \text{ occurs in } \psi} \end{array}\] We consider a tuple of variables $\overrightarrow{\mathcal{X}}$, having a variable $\basevarof{\predname{A}}$ ranging over $\pow{\mathsf{SatBase}}$, for each predicate $\predname{A}$ that occurs in $\Delta$.
With these definitions, each rule of $\Delta$: \begin{equation*} \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(z^1_1, \ldots, z^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(z^h_1, \ldots, z^h_{\arityof{\predname{B}_h}}) \end{equation*} where $\phi$ is a quantifier- and predicate-free formula, induces the constraint: \vspace*{-.5\baselineskip} \begin{equation}\label{eq:basesid} \hspace*{-3.5mm}\basevarof{\predname{A}} \supseteq \proj{\big(\basetupleof{\phi}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=1}^h \basevarof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\big)}{x_1, \ldots, x_{\arityof{\predname{A}}}} \end{equation} Let $\asid^\sharp$ be the set of such constraints, corresponding to the rules in $\Delta$, and let $\mu\basevartuple.\basesid$ be the tuple of least solutions of the constraint system generated from $\Delta$, indexed by the tuple of predicates that occur in $\Delta$, such that $\leastbaseof{\predname{A}}$ denotes the entry of $\mu\basevartuple.\basesid$ corresponding to $\predname{A}$. Since the composition and projection are monotonic operations, such a least solution exists and is unique. Moreover, since $\mathsf{SatBase}$ is finite, the least solution can be attained in a finite number of steps, using a standard Kleene iteration\ifLongVersion(see Fig. \ref{fig:base-sets})\fi. \ifLongVersion \begin{figure}[t!]
{\small\begin{algorithmic}[0] \State \textbf{input}: a SID $\Delta$ \State \textbf{output}: $\mu\basevartuple.\basesid$ \end{algorithmic}} {\small\begin{algorithmic}[1] \State initially $\mu\basevartuple.\basesid := \lambda \predname{A} ~.~ \emptyset$ \For{$\predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi \in \Delta$, with $\phi$ quantifier- and predicate-free} \State $\leastbaseof{\predname{A}} := \leastbaseof{\predname{A}} \cup \proj{\basetupleof{\phi}{\set{x_1,\ldots,x_{\arityof{\predname{A}}}}}}{x_1,\ldots,x_{\arityof{\predname{A}}}}$ \EndFor \While{$\mu\basevartuple.\basesid$ still changes} \For{$\mathsf{r} : \predname{A}(x_1,\ldots,x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \Asterisk_{\ell=1}^h \predname{B}_\ell(z^\ell_1,\ldots,z^\ell_{\arityof{\predname{B}_\ell}}) \in \Delta$} \If{there exist $\mathfrak{t}_1 \in \leastbaseof{\predname{B}_1}, \ldots, \mathfrak{t}_h \in \leastbaseof{\predname{B}_h}$} \State $\leastbaseof{\predname{A}} := \leastbaseof{\predname{A}} ~\cup~ \proj{\left(\basetupleof{\phi}{\set{x_1,\ldots,x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=1}^h \mathfrak{t}_\ell[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\right)}{x_1,\ldots,x_{\arityof{\predname{A}}}}$ \EndIf \EndFor \EndWhile \end{algorithmic}} \caption{Algorithm for the Computation of the Least Solution} \label{fig:base-sets} \end{figure} \fi \begin{myTextE} Given a base tuple $(\comps^\sharp, \interacs^\sharp, \pi)$ and a store $\nu$, we define the following sets of components and interactions, respectively: \begin{align*} \nu(\comps^\sharp) & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{\nu(x) \mid x \in \comps^\sharp} \\ \nu(\interacs^\sharp) & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \bigcup_{\tuple{p_1, \ldots, p_n}\in\mathsf{Inter}} \set{(\nu(x_1), p_1, \ldots, \nu(x_n), p_n) \mid (x_1, \ldots, x_n) \in
\interacs^\sharp(\tuple{p_1, \ldots, p_n})} \end{align*} \end{myTextE} We state below the main result leading to an elementary recursive algorithm for the satisfiability problem (Thm. \ref{thm:sat}). \begin{myLemmaE}\label{lemma:sat-soundness} Given a base tuple $(\basecomps, \baseinteracs, \pureform) \in \leastbaseof{\predname{A}}[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$, a state map $\varrho$ and a store $\nu$ such that $(\emptyset, \emptyset, \varrho) \models^\nu \pi$, a set of components $\mathcal{D}$ disjoint from $\nu(\comps^\sharp)$ and a set of interactions $\mathcal{J}$ disjoint from $\nu(\interacs^\sharp)$, there exists a configuration $(\mathcal{C},\mathcal{I},\varrho)$, such that $(\mathcal{C},\mathcal{I},\varrho) \models^\nu_\Delta \predname{A}(y_1, \ldots, y_{\arityof{\predname{A}}})$, $\mathcal{C} \cap \mathcal{D} = \emptyset$ and $\mathcal{I} \cap \mathcal{J} = \emptyset$. \end{myLemmaE} \begin{proofE} Let $\sigma \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} [x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$ be a substitution and $\basetuplen{0} \in \leastbaseof{\predname{A}}$ be a base tuple, such that $(\basecomps, \baseinteracs, \pureform) = \basetuplen{0}\sigma$. Since $(\emptyset, \emptyset, \varrho) \models^\nu \pi$, we obtain $(\emptyset, \emptyset, \varrho) \models^{\nu_0} \pi_0$, by Lemma \ref{lemma:pure-subst}, where we define $\nu_0 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu\sigma$. Let \[\mathcal{K} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \mathcal{D} \cup \set{c_i \mid (c_1, p_1, \ldots, c_n, p_n) \in \mathcal{J},~ i \in \interv{1}{n}}\] The proof is by fixpoint induction on the definition of $\basetuplen{0}$.
Assume that: \[\basetuplen{0} \in \proj{\big(\basetupleof{\psi*\pi'}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=2}^h \leastbaseof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\big)}{x_1, \ldots, x_{\arityof{\predname{A}}}}\] for a rule \(\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \psi * \pi' * \Asterisk_{\ell=2}^h\predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})\) of $\Delta$, such that $\psi * \pi'$ is quantifier-free, $\psi$ consists of component and interaction atoms and $\pi'$ is the largest pure subformula of $\psi * \pi'$. Then there exist base tuples $\basetuplen{1}, \ldots, \basetuplen{h}$, such that: \begin{compactitem} % \item $\basetuplen{1} \in \basetupleof{\psi*\pi'}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}}$, % \item $\basetuplen{\ell} \in \leastbaseof{\predname{B}_{\ell}}[x_1/z^{\ell}_1, \ldots, x_{\arityof{\predname{B}_{\ell}}}/z^{\ell}_{\arityof{\predname{B}_{\ell}}}]$, for all $\ell \in \interv{2}{h}$, % \item $\basetuplen{0} = \proj{\big(\basetuplen{1} \otimes \ldots \otimes \basetuplen{h}\big)}{x_1, \ldots, x_{\arityof{\predname{A}}}}$. % \end{compactitem} From the first and last points, we deduce $\pi_0 = \proj{ \closureof{ \pi' * \Asterisk_{\ell=2}^{h} (\constrof{\interacs^\sharp_\ell} * \pi_{\ell}) }}{x_1, \ldots, x_{\arityof{\predname{A}}}}$. 
Let $\pi'' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \pi' * \Asterisk_{\ell=2}^{h} (\constrof{\interacs^\sharp_\ell} * \pi_{\ell})$ and define a store $\nu'_0$ by assigning each $\formeq{\pi''}$-equivalence class the following component: \begin{compactitem} % \item $\nu_0(x_i)$, if $x_i$ belongs to the class, for some $i \in \interv{1}{\arityof{\predname{A}}}$, % \item else, if the class is disjoint from $\set{x_1, \ldots, x_{\arityof{\predname{A}}}}$ and $\compin{y}{q}$ occurs in $\pi'$, for a variable $y$ in the class, we assign $c \in \mathbb{C}\setminus\mathcal{K}$, such that $\varrho(c) = q$; since $\pi'$ is satisfiable, there are no two state atoms $\compin{y}{q}$ and $\compin{z}{r}$, such that $y \formeq{\pi'} z$ and $q \neq r$ in $\pi'$ and, moreover, choosing $c$ is always possible, by the last point of Def. \ref{def:configuration}, % \item otherwise, the class is assigned an arbitrary component $c \in \mathbb{C} \setminus \mathcal{K}$. % \end{compactitem} Such a store exists, because $\pi''$ is satisfiable and, moreover, $(\emptyset,\emptyset,\varrho) \models^{\nu'_0} \pi''$, hence also $(\emptyset,\emptyset,\varrho) \models^{\nu'_0} \pi_{\ell}$, for all $\ell\in\interv{2}{h}$.
We define two sequences of sets of components $\mathcal{C}_1, \ldots, \mathcal{C}_{h}$ and interactions $\mathcal{I}_1, \ldots, \mathcal{I}_{h}$, as follows: \begin{compactitem} % \item $\mathcal{C}_1 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{\nu'_0(y) \mid \compact{y} \text{ occurs in } \psi}$, % \item $\mathcal{I}_1 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \set{(\nu'_0(z_1), p_1, \ldots, \nu'_0(z_t), p_t) \mid \interacn{z_1}{p_1}{z_t}{p_t} \text{ occurs in } \psi}$, % \item for all $\ell \in \interv{2}{h}$, assume that $\mathcal{C}_1, \ldots, \mathcal{C}_{\ell-1}$ and $\mathcal{I}_1, \ldots, \mathcal{I}_{\ell-1}$ have been defined and let us define: \[\begin{array}{rcl} \mathcal{D}_\ell & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \mathcal{D} ~\cup~ \bigcup_{j=1}^{\ell-1} \mathcal{C}_j ~\cup~ \bigcup_{j=\ell+1}^h \nu'_0(\comps^\sharp_j) \\ \mathcal{J}_\ell & \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} & \mathcal{J} ~\cup~ \bigcup_{j=1}^{\ell-1} \mathcal{I}_j ~\cup~ \bigcup_{j=\ell+1}^h \nu'_0(\interacs^\sharp_j) \end{array}\] First, we prove that $\mathcal{D}_\ell \cap \nu'_0(\comps^\sharp_\ell) = \emptyset$ and $\mathcal{J}_\ell \cap \nu'_0(\interacs^\sharp_\ell) = \emptyset$ (we prove the first point, the second using a similar argument). Suppose, for a contradiction, that $c \in \mathcal{D}_\ell \cap \nu'_0(\comps^\sharp_\ell)$. We distinguish the following cases: \begin{compactitem} % \item if $c \in \mathcal{D} \cap \nu'_0(\comps^\sharp_\ell)$, then $c \in \mathcal{D} \cap \nu'_0(\comps^\sharp_0)$, because $\comps^\sharp_\ell \subseteq \comps^\sharp_0$, contradiction with $\mathcal{D} \cap \nu'_0(\comps^\sharp_0) = \mathcal{D} \cap \nu(\comps^\sharp) = \emptyset$. 
% \item else, if $c \in \mathcal{C}_j \cap \nu'_0(\comps^\sharp_\ell)$, for some $j \in \interv{1}{\ell-1}$, then $c \in \mathcal{C}_j \cap \mathcal{D}_j$, because $\nu'_0(\comps^\sharp_\ell) \subseteq \mathcal{D}_j$, contradiction with the inductive hypothesis $\mathcal{C}_j \cap \mathcal{D}_j = \emptyset$. % \item otherwise, $c \in \nu'_0(\comps^\sharp_j) \cap \nu'_0(\comps^\sharp_\ell)$, hence there exist variables $y_j \in \comps^\sharp_j$ and $y_\ell \in \comps^\sharp_\ell$, such that $y_j \formeq{\pi''} y_\ell$, contradiction with the fact that $\basetuplen{j} \otimes \basetuplen{\ell}$ is satisfiable. % \end{compactitem} Second, we apply the inductive hypothesis to obtain configurations $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho)$, such that $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho) \models^{\nu'_0}_\Delta \predname{B}_\ell(t^\ell_1, \ldots, t^\ell_{\arityof{\predname{B}_\ell}})$, $\mathcal{C}_\ell \cap \mathcal{D}_\ell = \emptyset$ and $\mathcal{I}_\ell \cap \mathcal{J}_\ell = \emptyset$, for all $\ell \in \interv{2}{h}$. By the definitions of $\mathcal{D}_\ell$ and $\mathcal{J}_\ell$, the sets $\mathcal{C}_\ell$ and $\mathcal{I}_\ell$ are pairwise disjoint, respectively, hence the composition $(\mathcal{C}, \mathcal{I}, \varrho) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \scalebox{2}{$\comp$}_{\ell=1}^h (\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho)$ is defined. Moreover, $(\mathcal{C}, \mathcal{I}, \varrho) \models^{\nu'_0}_\Delta \psi * \pi' * \Asterisk_{\ell=2}^h \predname{B}_\ell(t^\ell_1, \ldots, t^\ell_{\arityof{\predname{B}_\ell}})$, hence $(\mathcal{C}, \mathcal{I}, \varrho) \models^{\nu'_0}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, leading to $(\mathcal{C}, \mathcal{I}, \varrho) \models^{\nu} \predname{A}(y_1, \ldots, y_{\arityof{\predname{A}}})$. 
Finally, we are left with proving that $\mathcal{C} \cap \mathcal{D} = \emptyset$ and $\mathcal{I} \cap \mathcal{J} = \emptyset$ (we prove the first point only, the second using a similar argument). Since $\mathcal{C} = \bigcup_{\ell=1}^h \mathcal{C}_\ell$, this is equivalent to proving the following: \begin{compactitem} % \item $\mathcal{C}_1 \cap \mathcal{D} = \emptyset$: suppose, for a contradiction, that $\nu'_0(y) \in \mathcal{D}$, for a variable $y$, such that $\compact{y}$ occurs in $\psi$. By the definition of $\nu'_0$, we have either $\nu'_0(y) \in \nu'_0(\comps^\sharp_0)$ or $\nu'_0(y) \in \mathcal{K}$. Since $\nu'_0(\comps^\sharp_0) \cap \mathcal{D} = \mathcal{K} \cap \mathcal{D} = \emptyset$, both cases lead to a contradiction. % \item $\mathcal{C}_\ell \cap \mathcal{D} = \emptyset$, for all $\ell \in \interv{2}{h}$: because $\mathcal{C}_\ell \cap \mathcal{D}_\ell = \emptyset$ and $\mathcal{D} \subseteq \mathcal{D}_\ell$, by definition of $\mathcal{D}_\ell$, for all $\ell \in \interv{2}{h}$. \qed % \end{compactitem} \end{compactitem} \end{proofE} \begin{myLemmaE}\label{lemma:sat-completeness} Given a predicate atom $\predname{A}(y_1, \ldots, y_{\arityof{\predname{A}}})$, a store $\nu$ and a configuration $(\mathcal{C},\mathcal{I},\varrho)$, such that $(\mathcal{C},\mathcal{I},\varrho) \models^\nu_\Delta \predname{A}(y_1, \ldots, y_{\arityof{\predname{A}}})$, there exists a base tuple $(\comps^\sharp, \interacs^\sharp, \pi) \in \leastbaseof{\predname{A}}[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$, such that $\nu(\comps^\sharp) \subseteq \mathcal{C}$, $\nu(\interacs^\sharp) \subseteq \mathcal{I}$ and $(\emptyset, \emptyset, \varrho) \models^\nu \pi$. \end{myLemmaE} \begin{proofE} By fixpoint induction on the definition of the satisfaction relation $\models^\nu_\Delta$.
Since $(\mathcal{C},\mathcal{I},\varrho) \models^\nu_\Delta \predname{A}(y_1, \ldots, y_{\arityof{\predname{A}}})$, by Lemma \ref{lemma:pure-subst}, we have $(\mathcal{C},\mathcal{I},\varrho) \models^{\nu_0}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, where $\nu_0 \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$. Hence, $\Delta$ has a rule \(\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \psi * \pi' * \Asterisk_{\ell=2}^h\predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})\), such that $\psi * \pi'$ is quantifier-free, $\psi$ consists of component and interaction atoms and $\pi'$ is pure, and there exists a store $\nu'_0$, which agrees with $\nu_0$ over $x_1, \ldots, x_{\arityof{\predname{A}}}$, and configurations $(\mathcal{C}_1, \mathcal{I}_1, \varrho), \ldots, (\mathcal{C}_h, \mathcal{I}_h, \varrho)$, such that: \begin{compactitem} % \item $(\mathcal{C}_1, \mathcal{I}_1, \varrho) \models^{\nu'_0} \psi * \pi'$, % \item $(\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho) \models^{\nu'_0}_\Delta \predname{B}_\ell(z^\ell_1, \ldots, z^\ell_{\arityof{\predname{B}_\ell}})$, for all $\ell \in \interv{2}{h}$, and % \item $(\mathcal{C},\mathcal{I},\varrho) = (\mathcal{C}_1, \mathcal{I}_1, \varrho) \bullet \ldots \bullet (\mathcal{C}_h, \mathcal{I}_h, \varrho)$.
% \end{compactitem} We consider the following base tuples: \begin{compactitem} % \item $\basetuplen{1} \in \basetupleof{\psi * \pi'}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}}$, which is nonempty because $(\mathcal{C}_1, \mathcal{I}_1, \varrho) \models^{\nu'_0} \psi * \pi'$, % \item $\basetuplen{\ell} \in \leastbaseof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]$, such that $\nu'_0(\comps^\sharp_\ell) \subseteq \mathcal{C}_\ell$, $\nu'_0(\interacs^\sharp_\ell) \subseteq \mathcal{I}_\ell$ and $(\emptyset, \emptyset, \varrho) \models^{\nu'_0} \pi_\ell$, whose existence is guaranteed by the inductive hypothesis, for all $\ell \in \interv{2}{h}$. % \end{compactitem} By the definition of $\basetupleof{\psi * \pi'}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}}$ and the fact that $(\mathcal{C}_1, \mathcal{I}_1, \varrho) \models^{\nu'_0} \psi * \pi'$, we obtain $\nu'_0(\comps^\sharp_1) = \mathcal{C}_1$ and $\nu'_0(\interacs^\sharp_1) = \mathcal{I}_1$. Since the composition $\scalebox{2}{$\comp$}_{\ell=1}^h (\mathcal{C}_\ell, \mathcal{I}_\ell, \varrho)$ is defined, the sets $\mathcal{C}_1, \ldots, \mathcal{C}_h$ and $\mathcal{I}_1, \ldots, \mathcal{I}_h$ are pairwise disjoint, respectively.
Since $\nu'_0(\comps^\sharp_\ell) \subseteq \mathcal{C}_\ell$ and $\nu'_0(\interacs^\sharp_\ell) \subseteq \mathcal{I}_\ell$, for all $\ell \in \interv{1}{h}$, we deduce that $\bigotimes_{\ell=1}^h \basetuplen{\ell}$ is satisfiable, because: \begin{compactitem} % \item for all $1 \leq i < j \leq h$, for any two variables $y \in \comps^\sharp_i$ and $z \in \comps^\sharp_j$ we have $\notof{y \formeq{\pi'} z}$, because $\nu'_0(\comps^\sharp_i) \cap \nu'_0(\comps^\sharp_j) = \emptyset$, % \item for all $1 \leq i < j \leq h$, all $\tau \in \mathsf{Inter}$, for any two tuples $\tuple{y_1, \ldots, y_{\lenof{\tau}}} \in \interacs^\sharp_i(\tau)$ and $\tuple{z_1, \ldots, z_{\lenof{\tau}}} \in \interacs^\sharp_j(\tau)$, we have $\notof{y_k \formeq{\pi'} z_k}$, for some $k \in \interv{1}{\lenof{\tau}}$, because $\nu'_0(\interacs^\sharp_i) \cap \nu'_0(\interacs^\sharp_j) = \emptyset$, % \item for each tuple $\tuple{y_1, \ldots, y_{\lenof{\tau}}} \in \interacs^\sharp_\ell(\tuple{p_1, \ldots, p_n})$, for $\ell \in \interv{1}{h}$, we have $\notof{y_i \formeq{\pi'} y_j}$, for all $1 \leq i < j \leq n$, because $(\nu'_0(y_1), p_1, \ldots, \nu'_0(y_n), p_n) \in \mathcal{I}_\ell$, hence $\nu'_0(y_1), \ldots,\nu'_0(y_n)$ are pairwise distinct, % \item $(\emptyset, \emptyset, \varrho) \models^{\nu'_0} \pi' * \Asterisk_{\ell=2}^h \pi_\ell$, hence $(\emptyset, \emptyset, \varrho) \models^{\nu'_0} \pi' * \Asterisk_{\ell=2}^h \constrof{\interacs^\sharp_\ell} * \pi_\ell$, by the previous point. % \end{compactitem} Then we define $\basetuplen{0} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \proj{\left(\bigotimes_{\ell=1}^h \basetuplen{\ell} \right)}{x_1, \ldots, x_{\arityof{\predname{A}}}}$ and $(\basecomps, \baseinteracs, \pureform) \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \basetuplen{0}[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$.
By the definition of $\asid^\sharp$, we have: \[\leastbaseof{\predname{A}} \supseteq \proj{\big(\basetupleof{\psi*\pi'}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=2}^h \leastbaseof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\big)}{x_1, \ldots, x_{\arityof{\predname{A}}}}\] and, since, by the construction of $\basetuplen{0}$, \[\basetuplen{0} \in \proj{\big(\basetupleof{\psi*\pi'}{\set{x_1, \ldots, x_{\arityof{\predname{A}}}}} \otimes \bigotimes_{\ell=2}^h \leastbaseof{\predname{B}_\ell}[x_1/z^\ell_1, \ldots, x_{\arityof{\predname{B}_\ell}}/z^\ell_{\arityof{\predname{B}_\ell}}]\big)}{x_1, \ldots, x_{\arityof{\predname{A}}}}\] we obtain $\basetuplen{0} \in \leastbaseof{\predname{A}}$, leading to $(\basecomps, \baseinteracs, \pureform) \in \leastbaseof{\predname{A}}[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$. Next, we check that $\nu(\comps^\sharp) \subseteq \bigcup_{\ell=1}^h \nu'_0(\comps^\sharp_\ell) \subseteq \bigcup_{\ell=1}^h \mathcal{C}_\ell = \mathcal{C}$ and $\nu(\interacs^\sharp) \subseteq \bigcup_{\ell=1}^h \nu'_0(\interacs^\sharp_\ell) \subseteq \bigcup_{\ell=1}^h \mathcal{I}_\ell = \mathcal{I}$. 
Finally, the requirement $(\emptyset,\emptyset,\varrho) \models^\nu \pi$ follows from the following: \begin{compactitem} % \item $\pi = \pi_0[x_1/y_1, \ldots, x_{\arityof{\predname{A}}}/y_{\arityof{\predname{A}}}]$, by the definition of $(\basecomps, \baseinteracs, \pureform)$, % \item $(\emptyset,\emptyset,\varrho) \models^{\nu'_0} \pi'$ and $(\emptyset,\emptyset,\varrho) \models^{\nu'_0} \pi_\ell$, for all $\ell \in\interv{2}{h}$, % \item $\pi_0 = \proj{\closureof{\pi' * \Asterisk_{\ell=2}^h \constrof{\interacs^\sharp_\ell} * \pi_\ell}}{x_1, \ldots, x_{\arityof{\predname{A}}}}$, where $(\emptyset,\emptyset,\varrho) \models^{\nu'_0} \constrof{\interacs^\sharp_\ell}$ follows from the satisfiability of $\basetuplen{\ell}$, for all $\ell \in \interv{2}{h}$. \qed % \end{compactitem} \end{proofE} \begin{lemmaE}\label{lemma:sat} $\sat{\Delta}{\predname{A}}$ has a positive answer if and only if $\leastbaseof{\predname{A}} \neq \emptyset$. \end{lemmaE} \begin{proofE} ``$\Leftarrow$'' follows from Lemma \ref{lemma:sat-soundness} and ``$\Rightarrow$'' follows from Lemma \ref{lemma:sat-completeness}. \qed \end{proofE} If the maximal arity of the predicates occurring in $\Delta$ is bounded by a constant $k$, no satisfiable base tuple $(\basecomps, \baseinteracs, \pureform)$ can have a tuple $\tuple{y_1, \ldots, y_{\lenof{\tau}}} \in \interacs^\sharp(\tau)$, for some $\tau\in\mathsf{Inter}$, such that $\lenof{\tau} > k$, since all variables $y_1, \ldots, y_{\lenof{\tau}}$ are parameters denoting distinct components (point \ref{it3:base-tuple} of Def. \ref{def:base-tuple}). Hence, the upper bound on the size of a satisfiable base tuple is constant, in both the $k<\infty, \ell<\infty$ and $k<\infty, \ell=\infty$ cases, which are, moreover, indistinguishable complexity-wise (i.e., both are $\mathsf{NP}$-complete).
In contrast, in the cases $k=\infty, \ell<\infty$ and $k=\infty, \ell=\infty$, the upper bound on the size of satisfiable base tuples is polynomial and simply exponential in $\sizeof{\Delta}$, incurring a complexity gap of one and two exponentials, respectively. The theorem below states the main result of this section: \begin{theoremE}\label{thm:sat} $\klsat{\Delta}{\predname{A}}{k}{\infty}$ is $\mathsf{NP}$-complete for $k\ge 4$, $\klsat{\Delta}{\predname{A}}{\infty}{\ell}$ is $\mathsf{EXP}$-complete and $\sat{\Delta}{\predname{A}}$ is in $2\mathsf{EXP}$. \end{theoremE} \begin{proofE} \emph{Membership (upper bounds).} For non-negative integers $m \le n$ denote by $\nordsubsets{n}{m} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \frac{n!}{(n-m)!}$ the number of ordered $m$-element subsets of an $n$-element set. Let $\alpha=\maxarityof{\Delta}$, $\beta=\maxintersize{\Delta}$ and $p = \cardof{\mathcal{P}}$. The maximum length of a satisfiable base tuple is $B \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \alpha + (\sum_{j=1}^{\min(\alpha,\beta)} p^j \cdot \nordsubsets{\alpha}{j}) + (2\alpha^2 + \alpha)$, that is, the size of the set of components $\comps^\sharp$ plus the size of the set of interactions $\interacs^\sharp$ plus the length of the longest pure formula $\pi$. In general, for any non-negative integer $j$ there exist at most $p^j$ interaction types of arity $j$ with ports from $\mathcal{P}$; moreover, for any such interaction type there exist at most $\nordsubsets{\alpha}{j}$ interactions relating distinct components from an $\alpha$-element set. Moreover, no such interaction exists if $j > \alpha$ or $j > \beta$. For any $u \le \alpha$ it holds that $\sum_{j=1}^u p^j \cdot \nordsubsets{\alpha}{j} \le p^u \alpha^u$ (an easy check by induction on $u$).
We use the inequality above with $u=\min(\alpha,\beta)$ and obtain that $B \le 2\alpha + 2\alpha^2 + p^{\min(\alpha,\beta)} \alpha^{\min(\alpha,\beta)} \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} B^*$. We distinguish the three cases:\begin{compactenum} % \item $k<\infty$, $\ell=\infty$: since $\alpha \le k$, $\alpha$ is constant and $B^* = \mathcal{O}(1)$, % \item $k=\infty$, $\ell < \infty$: since $\beta\le \ell$ and $\alpha=\mathcal{O}(\size{\Delta})$, we have $B^*=\mathit{poly}(\size{\Delta})$, % \item $k=\infty$, $\ell=\infty$: since $\alpha=\mathcal{O}(\size{\Delta})$, we have $B^*=2^{\mathit{poly}(\size{\Delta})}$. % \end{compactenum} Let $N \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} 2^{B^*}$, that is, (an over-approximation of) the total number of base tuples. Clearly, $N$ is constant in case (1), and respectively $2^{\mathit{poly}(\size{\Delta})}$ and $2^{2^{\mathit{poly}(\size{\Delta})}}$ in cases (2) and (3). Let $L$ be the number of predicates occurring in $\Delta$ and $H$ be the maximum number of predicates used in a term in $\Delta$. Let us observe that both $L$ and $H$ are in general $\mathcal{O}(\size{\Delta})$. Then the least solution $\mu\basevartuple.\basesid$ has at most $N$ base tuples for each predicate, hence at most $L \cdot N$ base tuples. Furthermore, for each rule of $\Delta$ the time to check and/or produce the base tuple $(\comps^\sharp_0,\interacs^\sharp_0,\pi_0)$ with respect to the rule constraint (\ref{eq:basesid}) and given arguments $(\comps^\sharp_j,\interacs^\sharp_j,\pi_j)_{j=1,h}$ is polynomial $\mathit{poly}(B^*,\size{\Delta})$. That is, both composition and projection take at most $(H+1) B^* + \size{\Delta}^3$ time, as they need to process (union or scan) at most $H+1$ base tuples of length $B^*$ each, plus the closure of a pure formula with at most $\size{\Delta}$ variables. \begin{compactenum} \item $k < \infty$, $\ell=\infty$: We define a non-deterministic algorithm as follows. Let $(\Delta,\predname{A})$ be the input instance.
We guess a witness $\tuple{W_1,\ldots,W_K}$ for a least solution, where $1 \le K \le L\cdot N$ and each entry $W_i$ is of the form $(T_i,r_i,e_{i,1},\ldots,e_{i,h_i})$, where $T_i$ is a base tuple, $r_i$ an index of a rule of $\Delta$ and $e_{i,1},\ldots,e_{i,h_i}$ are index values from $\set{i+1,\ldots,K}$ for $0 \le h_i \le H$. The length of every witness entry is therefore at most $B^* + \lceil \log_2(\size{\Delta}) \rceil + H \lceil \log_2(L\cdot N) \rceil$. As $N$ is constant when $k < \infty$, and $L$ and $H$ are $\mathcal{O}(\size{\Delta})$, it follows that the number of guesses needed to build $\tuple{W_1,\ldots,W_K}$ is polynomial. We now check that $\tuple{W_1,\ldots,W_K}$ indeed represents a valid computation of a base tuple of the least solution, i.e., $(\basecomps, \baseinteracs, \pureform) \in \leastbaseof{\predname{A}}$. For this, we need to check: (a) every entry is well-formed, that is, the rule indexed by $r_i$ instantiates precisely $h_i$ predicates; moreover, for every $1 \le j \le h_i$ the index $e_{i,j}$ designates an entry $W_{e_{i,j}}$ whose rule defines the $j$-th predicate instantiated by the rule $r_i$; (b) the base tuple of every entry is satisfiable and correctly computed, that is, $T_i$ is the result of applying the constraint (\ref{eq:basesid}) for rule $r_i$ with actual arguments $T_{e_{i,1}},\ldots, T_{e_{i,h_i}}$ from the referred entries; (c) the rule $r_1$ of the first entry $W_1$ defines the predicate $\predname{A}$. Again, as $B^*$ and $N$ are constant in this case, all these checks are done in polynomial time. Since both the generation and the checking of the witness are polynomial time, this ensures membership in $\mathsf{NP}$. \item $k = \infty$, $\ell < \infty$: Consider the computation of the least solution $\mu\basevartuple.\basesid$ using standard Kleene iteration. At every step, a rule of $\Delta$ and a tuple of at most $H$ argument base tuples are selected to produce a new base tuple.
Thus, in the worst case, at most $\size{\Delta}$ rules, in combination with at most $N^H$ tuples of base-tuple arguments, need to be selected and evaluated. If no new base tuple is generated, the fixpoint is reached and the algorithm stops. Since there are at most $L \cdot N$ base tuples in the least solution, the total time is therefore $L\cdot N \cdot \size{\Delta} \cdot N^H \cdot t(B^*,\size{\Delta})$, where $t(B^*,\size{\Delta})$ is the (polynomial) time to process one selection. It is easy to check that this bound is $2^{\mathit{poly}(\size{\Delta})}$, since $N=2^{\mathit{poly}(\size{\Delta})}$ in this case.
\item $k=\infty$, $\ell=\infty$: Following the same reasoning as in the previous case, the complexity is $2^{2^{\mathit{poly}(\size{\Delta})}}$, as $N=2^{2^{\mathit{poly}(\size{\Delta})}}$ in this case.
\end{compactenum}
\emph{Hardness (lower bounds)}. The fragment of \textsf{CL}\ restricted to $*$, $=$ and $\not=$ is equisatisfiable with the fragment of \textsf{SL}\ restricted to $*$, $=$ and $\not=$. The satisfiability of the latter \textsf{SL}\ fragment has been proved $\mathsf{NP}$-hard if the arities of predicates are bounded by a constant $k \ge 3$ \cite[Theorem 4.9]{DBLP:conf/csl/BrotherstonFPG14} and $\mathsf{EXP}$-hard in general \cite[Theorem 4.15]{DBLP:conf/csl/BrotherstonFPG14}. Yet, the reductions considered in these proofs rely on the use of a predefined \emph{nil} constant symbol in the \textsf{SL}\ logic; this constant can nevertheless be replaced by a variable consistently propagated along the SID, at the price of increasing the arities of all predicates by one. It follows immediately that $\klsat{\Delta}{\phi}{k}{\infty}$ is $\mathsf{NP}$-hard for $k\ge 4$, and that $\klsat{\Delta}{\phi}{\infty}{\ell}$ and $\sat{\Delta}{\phi}$ are both $\mathsf{EXP}$-hard.
\qed \end{proofE}

\begin{example}\label{ex:sat-worst-case} The doubly-exponential upper bound for the algorithm computing the least solution of a system of constraints of the form (\ref{eq:basesid}) is necessary, in general, as illustrated by the following worst-case example. Let $n$ be a fixed parameter and consider the $n$-ary predicates $A_1,\ldots,A_n$ defined by the following SID: \[\begin{array}{rcll} A_i(x_1,\ldots,x_n) & \rightarrow & \Asterisk_{j=0}^{n-i} ~A_{i+1}(x_1,\ldots,x_{i-1}, [x_i,\ldots,x_n]^j) & \text{, for all } i \in \interv{1}{n-1} \\ A_n(x_1,\ldots,x_n) & \rightarrow & \interacn{x_1}{p}{x_n}{p} \\ A_n(x_1,\ldots,x_n) & \rightarrow & \predname{emp} \end{array}\] where, for a list of variables $x_i,\ldots,x_n$ and an integer $j\geq0$, we write $[x_i,\ldots,x_n]^j$ for the list rotated to the left $j$ times (e.g., $[x_1,x_2,x_3,x_4,x_5]^2=x_3,x_4,x_5,x_1,x_2$). In this example, starting with $A_1(x_1,\ldots,x_n)$, one eventually obtains predicate atoms $A_n(x_{i_1},\ldots,x_{i_n})$, for every permutation $x_{i_1},\ldots,x_{i_n}$ of $x_1,\ldots,x_n$. Since $A_n$ may choose whether or not to create an interaction with that permutation of variables, the total number of base tuples generated for $A_1$ is $2^{n!}$. That is, the fixpoint iteration generates $2^{2^{\mathcal{O}(n \log n)}}$ base tuples, whereas the size of the input of $\sat{\Delta}{\predname{A}}$ is $\mathit{poly}(n)$. \hfill$\blacksquare$ \end{example}

\section{Tightness} \label{sec:tightness} The tightness problem (Def. \ref{def:decision}, point \ref{decision:tight}) is the complement of a problem slightly stronger than satisfiability (\ref{decision:sat}): given a SID $\Delta$ and a formula $\phi$, such that $\fv{\phi} = \set{x_1, \ldots, x_n}$, the \emph{looseness problem} $\loose{\Delta}{\predname{A}}$ asks for the existence of a loose configuration $\gamma$ (Def. \ref{def:tightness}), such that $\gamma \models_\Delta \exists x_1 \ldots \exists x_n ~.~ \phi$.
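Before proceeding, note that the combinatorial core of Example \ref{ex:sat-worst-case} --- that composing one left rotation per level reaches every permutation of $x_1,\ldots,x_n$ --- can be checked mechanically for small $n$. The following sketch is purely illustrative (the function names are ours, not part of the development):

```python
from itertools import product
from math import factorial

def rotate_suffix(xs, i, j):
    """Rotate the suffix xs[i:] left by j positions, mirroring [x_i,...,x_n]^j."""
    return xs[:i] + xs[i:][j:] + xs[i:][:j]

def reachable_permutations(n):
    """Follow the SID rules A_1 -> ... -> A_n: at level i (1-based) the suffix
    starting at position i is rotated left by some j in 0..n-i."""
    perms = set()
    for choices in product(*(range(n - i + 1) for i in range(1, n))):
        xs = tuple(range(1, n + 1))
        for i, j in enumerate(choices, start=1):
            xs = rotate_suffix(xs, i - 1, j)
        perms.add(xs)
    return perms

# [x1,x2,x3,x4,x5]^2 = x3,x4,x5,x1,x2, as in the example
assert rotate_suffix((1, 2, 3, 4, 5), 0, 2) == (3, 4, 5, 1, 2)

# every permutation of n variables is reached
for n in range(2, 7):
    assert len(reachable_permutations(n)) == factorial(n)
```

Since each of the $n!$ permutations may independently contribute an interaction or not, the fixpoint for $A_1$ indeed collects $2^{n!}$ distinct base tuples.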
We establish upper and lower bounds for the complexity of the looseness problem by reductions to and from the satisfiability problem. The bounds for the tightness problem then follow by standard complementation of the complexity classes for the looseness problem.

\paragraph{From Looseness to Satisfiability.} Let $\Delta$ be a given SID and $\predname{A}$ be a predicate. For each predicate $\predname{B}$ that occurs in $\Delta$, we consider a fresh predicate ${\bpred'}$, not occurring in $\Delta$, such that $\arityof{{\bpred'}}=\arityof{\predname{B}}+1$. The SID ${\widetilde{\Delta}}$ extends $\Delta$ as follows: for each rule of $\Delta$ of the form: \[\predname{B}_0(x_1, \ldots, x_{\arityof{\predname{B}_0}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})\] where $\phi$ is a quantifier- and predicate-free formula, ${\widetilde{\Delta}}$ contains the rule: \[{\bpred'}_0(x_1, \ldots, x_{\arityof{\predname{B}_0}+1}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi ~*~ x_{\arityof{\predname{B}_0}+1} = z ~*~ \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) ~* \ldots *~ \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})\] if $z$ occurs in an interaction atom from $\phi$, and the rule: \[\begin{array}{rcl} {\bpred'}_0(x_1, \ldots, x_{\arityof{\predname{B}_0}+1}) & \leftarrow & \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * {\bpred'}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}}, x_{\arityof{\predname{B}_0}+1}) \\ && \hspace*{2cm} * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}}) \end{array}\] for each $i \in \interv{1}{h}$, if $h \geq 1$.
Moreover, for each rule of $\Delta$ of the form above with no predicate atoms (i.e., $h=0$), ${\widetilde{\Delta}}$ contains the rule: \[\begin{array}{rcll} {\bpred'}_0(x_1, \ldots, x_{\arityof{\predname{B}_0}+1}) & \leftarrow & \exists y_1 \ldots \exists y_m ~.~ \phi * \compact{x_{\arityof{\predname{B}_0}+1}} \end{array}\] Finally, there is a fresh predicate $\looseof{\predname{A}}$, of arity $\arityof{\predname{A}}$, with a rule: \[\looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y ~.~ {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}}, y) * \compact{y}\] Intuitively, the last parameter of a ${\bpred'}$ predicate binds to an arbitrary variable of an interaction atom. A configuration $\gamma$ is loose if and only if the value (component) of some variable occurring in an interaction atom is absent, in which case this component can be added to $\gamma$ (without clashing with a component already present in $\gamma$) by the last rule. The reduction is polynomial, since the number of rules in ${\widetilde{\Delta}}$ is linear in the number of rules in $\Delta$ and the size of each newly added rule exceeds that of the original rule only by a constant. The following lemma states the correctness of the reduction:

\begin{lemma}\label{lemma:loose-sat} Given a SID $\Delta$ and a predicate $\predname{A}$, the problem $\loose{\Delta}{\predname{A}}$ has a positive answer if and only if the problem $\sat{{\widetilde{\Delta}}}{\looseof{\predname{A}}}$ has a positive answer. \end{lemma}

\proof{ ``$\Rightarrow$'' Let $\gamma \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C}, \mathcal{I}, \varrho)$ be a loose configuration, such that $\gamma \models^{\nu}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, for some store $\nu$. Since $\gamma$ is loose, there exists an interaction $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}$, such that $c_i \not\in \mathcal{C}$, for some $i \in \interv{1}{n}$.
We prove that $\gamma \models_{{\widetilde{\Delta}}}^{\nu[y \leftarrow c_i]} {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}},y)$, by fixpoint induction on the definition of $\gamma \models^{\nu}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. This is sufficient, because then we obtain $(\mathcal{C}\cup\set{c_i},\mathcal{I},\varrho) \models_{{\widetilde{\Delta}}}^{\nu} \looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}})$, thus $\sat{{\widetilde{\Delta}}}{\looseof{\predname{A}}}$ has a positive answer. Consider the rule of $\Delta$: \[\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})\] where $\phi$ is quantifier- and predicate-free and $c'_1, \ldots, c'_m \in \mathbb{C}$ are components, such that $(\mathcal{C},\mathcal{I},\varrho) \models_{\Delta}^{\nu'} \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$, where $\nu' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu[y_1 \leftarrow c'_1, \ldots, y_m \leftarrow c'_m]$. We distinguish the following cases:
\begin{compactitem}
\item if $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}$ because of an interaction atom $\interacn{z_1}{p_1}{z_n}{p_n}$ from $\phi$, such that $\nu'(z_i) = c_i$, for all $i \in \interv{1}{n}$, then $(\mathcal{C},\mathcal{I},\varrho) \models_{{\widetilde{\Delta}}}^{\nu'[y \leftarrow c_i]} \phi * y = z_i * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$, hence $(\mathcal{C},\mathcal{I},\varrho) \models_{{\widetilde{\Delta}}}^{\nu[y \leftarrow c_i]} {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}}, y)$, by the definition of ${\widetilde{\Delta}}$.
\item else $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}$ because of a configuration $\gamma'$, such that $\gamma = \gamma' \bullet \gamma''$, for some configuration $\gamma''$, and $\gamma' \models^{\nu'}_\Delta \predname{B}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}})$. By the inductive hypothesis, we obtain $\gamma' \models^{\nu'[y \leftarrow c_i]}_{\widetilde{\Delta}} {\bpred'}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}},y)$, hence $\gamma \models^{\nu[y \leftarrow c_i]}_{\widetilde{\Delta}} {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}},y)$, because ${\widetilde{\Delta}}$ contains the rule: \[\begin{array}{rcl} {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}+1}) & \leftarrow & \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * {\bpred'}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}}, x_{\arityof{\predname{A}}+1}) \\ && \hspace*{2cm} * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}}) \end{array}\] and $\gamma'' \models^{\nu'}_\Delta \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_{i-1}(t^{i-1}_1, \ldots, t^{i-1}_{\arityof{\predname{B}_{i-1}}}) * \predname{B}_{i+1}(t^{i+1}_1, \ldots, t^{i+1}_{\arityof{\predname{B}_{i+1}}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$ follows from $\gamma = \gamma' \bullet \gamma''$.
\end{compactitem}

\noindent``$\Leftarrow$'' Let $\gamma \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C},\mathcal{I},\varrho)$ be a configuration and $\nu$ be a store, such that $\gamma \models^{\nu}_{{\widetilde{\Delta}}} \looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}})$.
Since the only rule of ${\widetilde{\Delta}}$ that defines $\looseof{\predname{A}}$ is: \[\looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y ~.~ {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}}, y) * \compact{y}\] there exists a component $c \in \mathcal{C}$, such that $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu[y \leftarrow c]}_{\widetilde{\Delta}} {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}}, y)$. We prove the following:
\begin{compactitem}
\item there exists an interaction $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}$, such that $c_i = c$, for some $i \in \interv{1}{n}$, and
\item $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$,
\end{compactitem}
by fixpoint induction on the definition of $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu[y \leftarrow c]}_{\widetilde{\Delta}} {\apred'}(x_1, \ldots, x_{\arityof{\predname{A}}}, y)$. Based on the definition of ${\widetilde{\Delta}}$, we distinguish the following cases, where $\phi$ is quantifier- and predicate-free, $c'_1, \ldots, c'_m \in \mathbb{C}$ are components and $\nu' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} \nu[y_1 \leftarrow c'_1, \ldots, y_m \leftarrow c'_m]$:
\begin{itemize}
\item $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu'[y \leftarrow c]}_{\widetilde{\Delta}} \phi ~*~ y = z ~*~ \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) ~* \ldots *~ \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$, where $z$ occurs in an interaction atom from $\phi$. In this case, there exists an interaction $(c_1, p_1, \ldots, c_n, p_n) \in \mathcal{I}$, such that $c=c_i$, for some $i \in \interv{1}{n}$.
Moreover, $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, because $\Delta$ has a rule: \[\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})\] such that $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu}_\Delta \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) ~* \ldots *~ \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$.
\item $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu'[y \leftarrow c]}_{\widetilde{\Delta}} \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * {\bpred'}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}}, y) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$. In this case, there exist configurations $\gamma'$ and $\gamma''$, such that $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) = \gamma' \bullet \gamma''$, $\gamma' \models^{\nu'[y \leftarrow c]}_{\widetilde{\Delta}} {\bpred'}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}}, y)$ and $\gamma'' \models^\nu_\Delta \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$. By the inductive hypothesis, there exists an interaction $(c_1,p_1,\ldots,c_n,p_n)$ in $\gamma'$, such that $c=c_i$, for some $i\in\interv{1}{n}$, and $\gamma' \models^\nu_\Delta \predname{B}_i(t^i_1, \ldots, t^i_{\arityof{\predname{B}_i}})$.
Then $(c_1,p_1,\ldots,c_n,p_n) \in \mathcal{I}$ and $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^\nu_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$, since $\Delta$ has a rule: \[\predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \ldots \exists y_m ~.~ \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) * \ldots * \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})\] such that $\gamma' \bullet \gamma'' \models^{\nu'}_\Delta \phi * \predname{B}_1(t^1_1, \ldots, t^1_{\arityof{\predname{B}_1}}) ~* \ldots *~ \predname{B}_h(t^h_1, \ldots, t^h_{\arityof{\predname{B}_h}})$.
\item $(\mathcal{C}\setminus\set{c},\mathcal{I},\varrho) \models^{\nu'[y \leftarrow c]}_{\widetilde{\Delta}} \phi * \compact{y}$: this case contradicts the semantics of \textsf{CL}. \qed
\end{itemize}}

\paragraph{From Satisfiability to Looseness.} Given a SID $\Delta$ and a predicate $\predname{A}$, we build a SID ${\widetilde{\Delta}}$ that defines a predicate $\looseof{\predname{A}}$, of equal arity, not occurring in $\Delta$, such that $\sat{\Delta}{\predname{A}}$ has a positive answer if and only if there exist a loose configuration $\gamma$ and a store $\nu$, such that $\gamma \models^\nu_{\widetilde{\Delta}} \looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}})$. The rules of ${\widetilde{\Delta}}$ are the rules of $\Delta$, to which the following rule is added, for some ports $p_1, p_2 \in \mathcal{P}$: \[\looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}}) \leftarrow \exists y_1 \exists y_2 ~.~ \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) * \interactwo{y_1}{p_1}{y_2}{p_2}\] This reduction is polynomial, because we add a single rule, of size linear in $\arityof{\predname{A}}$.
The following lemma states the correctness of the reduction: \begin{lemma}\label{lemma:sat-loose} Given a SID $\Delta$ and a predicate $\predname{A}$, the problem $\sat{\Delta}{\predname{A}}$ has a positive answer if and only if the problem $\loose{{\widetilde{\Delta}}}{\looseof{\predname{A}}}$ has a positive answer. \end{lemma} \proof{ ``$\Rightarrow$'' If $\sat{\Delta}{\predname{A}}$ has a positive answer, there exists a configuration $\gamma \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\mathcal{C},\mathcal{I},\varrho)$ and a store $\nu$, such that $\gamma \models^{\nu}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. Consider the configuration $\gamma' \stackrel{\scriptscriptstyle{\mathsf{def}}}{=} (\emptyset, \set{(c_1,p_1,c_2,p_2)}, \varrho)$, for some components $c_1,c_2 \not\in \mathcal{C}$. Then the composition $\gamma \bullet \gamma'$ is defined and we have $\gamma \bullet \gamma' \models^{\nu[y_1\leftarrow c_1,y_2 \leftarrow c_2]}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) * \interactwo{y_1}{p_1}{y_2}{p_2}$, leading to $\gamma \bullet \gamma' \models^\nu_{\widetilde{\Delta}} \looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}})$. Moreover, $\gamma\bullet\gamma'$ is loose, because $c_1, c_2 \not\in\mathcal{C}$. ``$\Leftarrow$'' If $\gamma \models^\nu_{\widetilde{\Delta}}\looseof{\predname{A}}(x_1, \ldots, x_{\arityof{\predname{A}}})$, we necessarily have $\gamma \models^{\nu[y_1 \leftarrow c_1, y_2 \leftarrow c_2]}_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}}) * \interactwo{y_1}{p_1}{y_2}{p_2}$, for some components $c_1, c_2 \in \mathbb{C}$, hence there exists a configuration $\gamma'$, such that $\gamma' \models^\nu_\Delta \predname{A}(x_1, \ldots, x_{\arityof{\predname{A}}})$. 
\qed}

The polynomial reductions from Lemmas \ref{lemma:loose-sat} and \ref{lemma:sat-loose} establish the following complexity bounds for the tightness problem:

\begin{theorem}\label{thm:tightness} $\kltight{\Delta}{\predname{A}}{k}{\infty}$ is $\mathsf{co}$-$\mathsf{NP}$-complete, $\kltight{\Delta}{\predname{A}}{\infty}{\ell}$ is $\mathsf{EXP}$-complete and $\tight{\Delta}{\predname{A}}$ is in $2\mathsf{EXP}$. \end{theorem}

\proof{ Since $\klloose{\Delta}{\predname{A}}{k}{\infty}$ is polynomially reducible to $\klsat{\Delta}{\predname{A}}{k+1}{\infty}$ (Lemma \ref{lemma:loose-sat}), by Theorem \ref{thm:sat} we obtain that $\klloose{\Delta}{\predname{A}}{k}{\infty}$ is in $\mathsf{NP}$. Moreover, since $\klsat{\Delta}{\predname{A}}{k}{\infty}$ is polynomially reducible to $\klloose{\Delta}{\predname{A}}{k}{\infty}$ (Lemma \ref{lemma:sat-loose}), by Theorem \ref{thm:sat} we obtain that $\klloose{\Delta}{\predname{A}}{k}{\infty}$ is $\mathsf{NP}$-complete. Because $\kltight{\Delta}{\predname{A}}{k}{\infty}$ is the complement of $\klloose{\Delta}{\predname{A}}{k}{\infty}$, we obtain that $\kltight{\Delta}{\predname{A}}{k}{\infty}$ is $\mathsf{co}$-$\mathsf{NP}$-complete. The remaining bounds are obtained by the same polynomial reductions, together with the fact that the tightness problem is the complement of the looseness problem, for any $k$ and $\ell$, either integer constants or infinity. \qed}
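The upper bounds above ultimately rest on computing least solutions of systems of constraints of the form (\ref{eq:basesid}) by plain Kleene iteration: apply every rule to all tuples of already derived base tuples until saturation. A minimal, generic sketch is given below; the interface (`rules` as triples of a head predicate, argument predicates, and an evaluation function) is of our own choosing and not from the development above:

```python
from itertools import product

def kleene_least_solution(rules):
    """Least solution of a rule-based constraint system by Kleene iteration.
    `rules` is a list of (head, arg_predicates, evaluate) triples, where
    `evaluate` maps one derived base tuple per argument predicate to a new
    base tuple, or None when the rule constraint is unsatisfiable."""
    solution = {}            # predicate -> set of derived base tuples
    changed = True
    while changed:           # saturate: stop when no new base tuple appears
        changed = False
        for head, arg_preds, evaluate in rules:
            pools = [solution.get(p, set()) for p in arg_preds]
            # product() snapshots the pools, so updating `solution` is safe
            for args in product(*pools):
                t = evaluate(*args)
                if t is not None and t not in solution.get(head, set()):
                    solution.setdefault(head, set()).add(t)
                    changed = True
    return solution

# toy system: A derives {1}; A extends a derived tuple by one element, up to 3
rules = [
    ("A", (), lambda: frozenset({1})),
    ("A", ("A",), lambda t: frozenset(t | {min(max(t) + 1, 3)})),
]
assert kleene_least_solution(rules)["A"] == {
    frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})}
```

With $L \cdot N$ possible base tuples and $N^H$ argument tuples per rule, this loop directly yields the $L\cdot N \cdot \size{\Delta} \cdot N^H \cdot t(B^*,\size{\Delta})$ bound used above.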
\section{Introduction} Let $C \subset \mathbb{P}^2$ be an irreducible plane curve of degree $d \ge 4$ over an algebraically closed field $k$ of characteristic $p \ge 0$, and let $k(C)$ be its function field. We consider the projection $\pi_P: C \dashrightarrow \mathbb{P}^1$ from a point $P \in \mathbb{P}^2$. If $\pi_P$ is separable, then the Galois closure of $k(C)/\pi_P^*k(\mathbb{P}^1)$ is denoted by $L_P$, and the Galois group is denoted by $G_P$. Following Pirola and Schlesinger, we say that $P$ is {\it uniform} if $G_P$ is the full symmetric group (see \cite{pirola-schlesinger}). It follows from results of Yoshihara and Pirola--Schlesinger that there exist only finitely many non-uniform points $P \in \mathbb{P}^2$, at least when $p=0$ and $C$ is smooth (see \cite{pirola-schlesinger, yoshihara}). The following problem is natural: {\it for different points $P_1$ and $P_2 \in \mathbb{P}^2$, when does $L_{P_1}=L_{P_2}$ hold?} However, this problem has never been considered, except for the case where $L_{P_i}=k(C)$ for $i=1, 2$. In this article, we settle this problem. Let $X$ be a smooth projective curve. For a finite subgroup $H$ of ${\rm Aut}(X)$ and a point $Q \in X$, the quotient map is denoted by $f_H: X \rightarrow X/H$ and the image $f_H(Q)$ is denoted by $\overline{Q}^H$. We show the following theorem. \begin{theorem} \label{maincor} Let $H$, $G_1$, and $G_2 \subset {\rm Aut}(X)$ be finite subgroups such that $H \subset G_1 \cap G_2$, and let $P_1$ and $P_2 \in X$.
Then, the five conditions
\begin{itemize}
\item[(a)] $X/{G_1} \cong \Bbb P^1$ and $X/{G_2} \cong \Bbb P^1$,
\item[(b)] $G_1 \cap G_2=H$,
\item[(c)] $H' \vartriangleleft G_i, \ H' \subset H\Rightarrow H'=\{1\}$, for $i=1, 2$,
\item[(d)] $\sum_{h \in H}h(P_1)+\sum_{\sigma \in G_1} \sigma (P_2)=\sum_{h \in H}h(P_2)+\sum_{\tau \in G_2} \tau (P_1)$, and
\item[(e)] $HP_1 \ne HP_2$
\end{itemize}
are satisfied if and only if there exists a birational embedding $\varphi: X/H \rightarrow \mathbb P^2$ of degree $(G_1:H)+1$ such that $\varphi(\overline{P_1}^H)$ and $\varphi(\overline{P_2}^H)$ are different smooth points of $\varphi(X/H)$, and $L_{\varphi(\overline{P_i}^H)}=k(X)$ and $G_{\varphi(\overline{P_i}^H)}=G_i$ for $i=1, 2$. \end{theorem} Since any plane curve is a quotient curve of the smooth model of the Galois closure at each point, all plane curves $C$ with two different smooth points $P_1$ and $P_2 \in C$ such that $L_{P_1}=L_{P_2}$ are completely described by Theorem \ref{maincor}. In Section 2, we prove a generalization of Theorem \ref{maincor}, in order to present the proof in a more general setting. In Section 3, we show the following theorem for the Hermitian curve, which may be surprising. \begin{theorem} \label{hermitian} Let $p>0$, let $q$ be a power of $p$, and let a positive integer $m$ divide $q^2-1$. The Hermitian curve defined by $$ X^qZ+XZ^q-Y^{q+1}=0 $$ is denoted by $X$. Then, there exist a plane curve $C \subset \mathbb{P}^2$ of degree $q^3+1$ and distinct smooth points $P_1$ and $P_2$ of $C$ such that $L_{P_1}=L_{P_2}=k(X)$ and $G_{P_i} \cong N_1 \rtimes C_m$ for $i=1, 2$, where $N_1$ is a Sylow $p$-group of ${\rm Aut}(X)$ and $C_m$ is a cyclic group of order $m$. In particular, the points $P_1$ and $P_2$ are not uniform. \end{theorem} \section{Proof of the main theorem} In this section, we prove Theorem \ref{maincor} and its generalization.
If $H$ is a normal subgroup of a subgroup $G \subset {\rm Aut}(X)$, then there exists a natural homomorphism $G \rightarrow {\rm Aut}(X/H); \sigma \mapsto \overline{\sigma}^H$, where $\overline{\sigma}^H$ corresponds to the restriction $\sigma^*|_{k(X)^H}$. The image is denoted by $\overline{G}^H$, which is isomorphic to $G/H$. The following theorem generalizes Theorem \ref{maincor}, since the case $N_1=N_2=\{1\}$ recovers Theorem \ref{maincor}. \begin{theorem} \label{main} Let $N_1$, $N_2$, $H$, $G_1$, and $G_2 \subset {\rm Aut}(X)$ be finite subgroups such that $N_i \subset H \subset G_i$ and $N_i \vartriangleleft G_i$ for $i=1, 2$, and let $P_1$ and $P_2 \in X$. Then, the five conditions
\begin{itemize}
\item[(a)] $X/{G_1} \cong \Bbb P^1$ and $X/{G_2} \cong \Bbb P^1$,
\item[(b)] $G_1 \cap G_2=H$,
\item[(c)] $H' \vartriangleleft G_i$, $N_i \subset H' \subset H$ $\Rightarrow$ $H'=N_i$, for $i=1, 2$,
\item[(d)] $\sum_{h \in H}h(P_1)+\sum_{\sigma \in G_1} \sigma (P_2)=\sum_{h \in H}h(P_2)+\sum_{\tau \in G_2} \tau (P_1)$, and
\item[(e)] $HP_1 \ne HP_2$
\end{itemize}
are satisfied if and only if there exists a birational embedding $\varphi: X/H \rightarrow \mathbb P^2$ of degree $(G_1:H)+1$ such that $\varphi(\overline{P_1}^H)$ and $\varphi(\overline{P_2}^H)$ are different smooth points of $\varphi(X/H)$, and $L_{\varphi(\overline{P_i}^H)}=k(X)^{N_i}$ and $G_{\varphi(\overline{P_i}^H)}=\overline{G_i}^{N_i}$ for $i=1, 2$. \end{theorem} \begin{proof} We consider the only-if part. By condition (e), $\overline{P_1}^H \ne \overline{P_2}^H$. Let $D$ be the divisor appearing in condition (d). Since $$\sum_{\sigma \in G_1}\sigma(P_2)=\sum_{H\sigma \in H \setminus G_1}\sum_{h \in H} h\sigma(P_2), $$ it follows that $$(f_H)_*D=|H|\left(\overline{P_1}^H+\sum_{H\sigma \in H\setminus G_1}\overline{\sigma(P_2)}^H\right)=|H|\left(\overline{P_2}^H+\sum_{H\tau \in H \setminus G_2}\overline{\tau(P_1)}^H\right)$$ as divisors on $X/H$.
Therefore, $$ \overline{D}^H:=\overline{P_1}^H+\sum_{H\sigma \in H\setminus G_1}\overline{\sigma(P_2)}^H=\overline{P_2}^H+\sum_{H\tau \in H \setminus G_2}\overline{\tau(P_1)}^H. $$ Let $q_i: X/H \rightarrow \mathbb{P}^1$ be the morphism induced by the extension $k(X)^H/k(X)^{G_i}$. Note that $$f_H^*(\overline{Q}^H)=\sum_{h \in H}h(Q) \ \mbox{ and } \ (q_i \circ f_H)^*(q_i(\overline{Q}^H))=\sum_{\sigma \in G_i}\sigma(Q)$$ for each point $Q \in X$ (see, for example, \cite[III.7.1, III.7.2, III.8.2]{stichtenoth}). It follows that $\overline{D}^H-\overline{P_1}^H$ coincides with the pull-back $q_1^*(q_1(\overline{P_2}^H))$, since $(f_H)^*(\overline{D}^H-\overline{P_1}^H)=\sum_{\sigma \in G_1}\sigma(P_2)=(f_H)^*(q_1^*(q_1(\overline{P_2}^H)))$, and $(f_H)_*(f_H)^*(D')=|H|D'$ for any divisor $D' \in {\rm Div}(X/H)$. Let $f$ and $g \in k(X)^H$ be generators of $k(X)^{G_1}$ and $k(X)^{G_2}$ such that $(f)_{\infty}=\overline{D}^H-\overline{P_1}^H$ and $(g)_{\infty}=\overline{D}^H-\overline{P_2}^H$, by (a), where $(f)_{\infty}$ is the pole divisor of $f$. Then, $f, g \in \mathcal{L}(\overline{D}^H)$. Let $\varphi: X/H \rightarrow \Bbb P^2$ be given by $(f:g:1)$. To prove that $\varphi$ is birational onto its image, we show that $k(X)^H=k(f, g)$. Since $k(X)/k(f)$ is Galois, there exists a subgroup $H_1$ of $G_1$ such that $H_1={\rm Gal}(k(X)/k(f, g))$. Similarly, there exists a subgroup $H_2$ of $G_2$ such that $H_2={\rm Gal}(k(X)/k(f, g))$. Since $G_1 \cap G_2=H$ by condition (b), $H_1=H_2=H$. Hence, the morphism $\varphi$ is birational onto its image. The sublinear system of $|\overline{D}^H|$ corresponding to $\langle f, g, 1\rangle$ is base-point-free, since ${\rm supp}(\overline{D}^H) \cap {\rm supp}((f)+\overline{D}^H)=\{\overline{P_1}^H\}$ and ${\rm supp}(\overline{D}^H) \cap {\rm supp}((g)+\overline{D}^H)=\{\overline{P_2}^H\}$. Therefore, $\deg \varphi(X/H)=\deg \overline{D}^H$, and the morphism $(f:1)$ (resp.
$(g:1)$) coincides with the projection from the smooth point $\varphi(\overline{P_1}^H) \in \varphi(X/H)$ (resp. $\varphi(\overline{P_2}^H) \in \varphi(X/H)$). The Galois closure of $k(X)^H/k(X)^{G_i}$ coincides with $k(X)^{N_i}$, by condition (c). We consider the if part. Since $\pi_{\varphi(\overline{P_i}^H)}^*k(\mathbb{P}^1)=(k(X)^{N_i})^{\overline{G_i}^{N_i}}=k(X)^{G_i}$, condition (a) is satisfied. Since $k(X)^{N_i}$ is the Galois closure of $k(X)^H/k(X)^{G_i}$, condition (c) is satisfied. By the assumption, $H \subset G_1 \cap G_2$. To prove (b), we take a suitable system of coordinates so that $\varphi(\overline{P_1}^H)=(0:1:0)$ and $\varphi(\overline{P_2}^H)=(1:0:0)$. Then, $k(X)^{G_1}=k(x)$ and $k(X)^{G_2}=k(y)$. For $\sigma \in G_1 \cap G_2$, $\sigma^*(x)=x$ and $\sigma^*(y)=y$. Since $k(X)^H=k(x,y)$, $\sigma \in H$. Condition (b) is satisfied. Since $\varphi(\overline{P_1}^H) \ne \varphi(\overline{P_2}^H)$, condition (e) is satisfied. Let $\overline{D}^H$ be the divisor induced by the intersection of $\varphi(X/H)$ and the line $\ell:=\overline{\varphi(\overline{P_1}^H)\varphi(\overline{P_2}^H)}$, where $\overline{\varphi(\overline{P_1}^H)\varphi(\overline{P_2}^H)}$ is the line passing through the points $\varphi(\overline{P_1}^H)$ and $\varphi(\overline{P_2}^H)$. We can consider the line $\ell$ as a point in the images of $\pi_{\varphi(\overline{P_1}^H)}\circ\varphi$ and $\pi_{\varphi(\overline{P_2}^H)}\circ\varphi$. Since $\overline{P_2}^H \in \varphi^{-1}(\ell)$, $$ (\pi_{\varphi(\overline{P_1}^H)} \circ \varphi)^*\ell=\sum_{H\sigma \in H\setminus G_1}\overline{\sigma(P_2)}^H. $$ Since this divisor coincides with $\overline{D}^H-\overline{P_1}^H$, it follows that $$ \overline{D}^H=\overline{P_1}^H+\sum_{H\sigma \in H\setminus G_1}\overline{\sigma(P_2)}^H=\overline{P_2}^H+\sum_{H\tau \in H\setminus G_2}\overline{\tau(P_1)}^H. $$ Considering the divisor $f_H^*(\overline{D}^H)$, condition (d) is satisfied.
\end{proof} \begin{remark} A similar result holds for ``outer'' points. In this case, we consider a $6$-tuple $(G_1, G_2, H, N_1, N_2, Q)$ with $Q \in X$ such that $G_1$, $G_2$, $H$, $N_1$ and $N_2$ satisfy conditions (a), (b) and (c), and $\sum_{\sigma \in G_1}\sigma(Q)=\sum_{\tau \in G_2}\tau(Q)$ holds. \end{remark} \section{Examples} In this section, we assume that the characteristic $p$ is positive and $q$ is a power of $p$. The finite field of $q^2$ elements is denoted by $\mathbb{F}_{q^2}$. We consider the Hermitian curve $X \subset \mathbb{P}^2$ of degree $q+1$, which is defined by $X^qZ+XZ^{q}-Y^{q+1}=0$. \begin{proof}[Proof of Theorem \ref{hermitian}] Let $P_1=(1:0:0)$ and let $P_2=(0:0:1) \in X$. We consider subgroups $$N_1=\{\sigma_{a, b}: (X:Y:Z) \mapsto (X+a^qY+b Z:Y+a Z:Z) \ | \ a, b \in \mathbb{F}_{q^2}, b^q+b=a^{q+1} \}, $$ $$N_2=\{(X:Y:Z) \mapsto (X: a X+Y: b X+a^q Y+Z) \ | \ a, b \in \mathbb{F}_{q^2}, b^q+b=a^{q+1} \}, $$ and $$ C_{q^2-1}=\{\eta_c: (X: Y: Z) \mapsto (c^{q+1}X: c Y :Z) \ | \ c \in \mathbb{F}_{q^2} \setminus \{0\} \} $$ of ${\rm Aut}(X)$. Let $H$ be a subgroup of $C_{q^2-1}$ and let $G_i=\langle N_i, H \rangle \cong N_i \rtimes H$. We prove that conditions (a), (b), (c), (d) and (e) in Theorem \ref{maincor} are satisfied for the $5$-tuple $(G_1, G_2, H, P_1, P_2)$. Since it is known that $X/N_i \cong \mathbb{P}^1$, by L\"{u}roth's theorem, condition (a) is satisfied. Since $G_1 \cap G_2 \subset \{\sigma \in G_1 \ | \ \sigma(P_2)=P_2\}=H$, condition (b) is satisfied. Since $(\sigma_{a, b}^{-1} \eta_{c} \sigma_{a, b})^*y=c y+a(c-1)$ for $\sigma_{a, b} \in N_1$ and $\eta_c \in H$, $H$ does not contain a normal subgroup of $G_1$ other than $\{1\}$. Condition (c) is satisfied. It is well known that the cardinality of the set $X(\mathbb{F}_{q^2})$ of all $\mathbb{F}_{q^2}$-rational points of $X$ is equal to $q^3+1$, and that $N_i$ acts transitively on the set $X(\mathbb{F}_{q^2}) \setminus \{P_i\}$ for $i=1, 2$.
Since $$ \sum_{h \in H}h(P_1)+\sum_{\sigma \in G_1}\sigma(P_2)=\sum_{Q \in X(\mathbb{F}_{q^2})} |H|Q=\sum_{h \in H}h(P_2)+\sum_{\tau \in G_2}\tau(P_1), $$ condition (d) is satisfied. Since $HP_1=\{P_1\} \ne \{P_2\}=HP_2$, condition (e) is satisfied. This completes the proof of Theorem \ref{hermitian}. \end{proof} If $ms=q+1$ and $c$ is a primitive $(q^2-1)$-th root of unity, then the subgroup $C_m$ of $C_{q^2-1}$ of order $m$ is generated by $\eta_{c^{s(q-1)}} \in C_{q^2-1}$. Since $\eta_{c^{s(q-1)}}^*(x)=x$ and $\eta_{c^{s(q-1)}}^*(y^m)=y^m$, the quotient curve $X/C_m$ has a plane model defined by $x^q+x=y^s$. Therefore, the following holds. \begin{corollary} Let $ms=q+1$, and let $X$ be the Hermitian curve of degree $q+1$. Then, for the curve $x^q+x=y^s$, there exist a plane model $C$ of degree $q^3+1$ and distinct smooth points $P_1$ and $P_2 \in C$ such that $L_{P_i} \cong k(X)$ and $G_{P_i} \cong N_1 \rtimes C_m$ for $i=1, 2$. \end{corollary} \begin{remark} A result similar to Theorem \ref{hermitian} holds for the rational, Suzuki or Ree curve (see \cite[Sections 12.2 and 12.4]{hkt} for the properties of the Suzuki or Ree curves). \end{remark}
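The point count $\#X(\mathbb{F}_{q^2}) = q^3+1$ invoked in the proof above can be verified by brute force for small prime $q$. The following sketch, which is ours and purely illustrative, builds $\mathbb{F}_{p^2}$ naively as $\mathbb{F}_p[t]/(f)$ for a brute-forced irreducible quadratic $f$:

```python
def hermitian_point_count(p):
    """Count the F_{p^2}-rational points of the Hermitian curve
    X^q Z + X Z^q - Y^{q+1} = 0 with q = p prime (expected: q^3 + 1)."""
    q = p
    # find a monic irreducible quadratic t^2 + b t + c over F_p (no roots)
    b, c = next((b, c) for b in range(p) for c in range(p)
                if all((x * x + b * x + c) % p for x in range(p)))

    def mul(u, v):                     # multiply u0 + u1 t and v0 + v1 t,
        u0, u1 = u                     # reducing with t^2 = -b t - c
        v0, v1 = v
        return ((u0 * v0 - c * u1 * v1) % p,
                (u0 * v1 + u1 * v0 - b * u1 * v1) % p)

    def power(u, n):
        r = (1, 0)
        for _ in range(n):
            r = mul(r, u)
        return r

    field = [(a0, a1) for a0 in range(p) for a1 in range(p)]
    count = 1                          # the unique point at infinity (1:0:0)
    for x in field:
        for y in field:                # affine points: x^q + x = y^{q+1}
            xq = power(x, q)
            if ((xq[0] + x[0]) % p, (xq[1] + x[1]) % p) == power(y, q + 1):
                count += 1
    return count

assert hermitian_point_count(2) == 2 ** 3 + 1   # 9 points over F_4
assert hermitian_point_count(3) == 3 ** 3 + 1   # 28 points over F_9
```

This is only a numerical sanity check of the classical maximality of the Hermitian curve, not part of the proof.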
\section{Introduction} \label{intro} {\it Extreme events} (also called critical transitions, disasters, catastrophes and crises) are among the most important yet least understood features of many natural and human-made processes. Examples include destructive earthquakes, El-Ni\~nos, economic depressions, stock-market crashes, and major terrorist acts. Extreme events are relatively rare, and at the same time they inflict the lion's share of the damage to population, economy, and environment. Accordingly, studying extreme events is pivotal both for fundamental predictive understanding of complex systems and for disaster preparedness (see \cite{KBS03,AJK05} and references therein). In this paper we work within a framework that emphasizes mechanisms underlying the formation of extreme events. Prominent among such mechanisms is {\it direct cascading} or {\it fragmentation}. Among other applications, this mechanism is at the heart of the study of 3D turbulence \cite{Fri96}. A statistical model of direct cascade is conveniently given by branching processes; they describe populations in which each individual can produce descendants (offspring) according to some probability distribution. A branching process may incorporate {\it spatial} dynamics, several types of particles (multi-type processes), age-dependence (random lifetimes of particles), and immigration due to external driving forces \cite{AN04}. In many real-world systems, observations are only possible within a specific domain of the phase space of the system. Accordingly, we consider here a system with an {\it unobservable} source of external driving ultimately responsible for extreme events. We assume that observations can only be made on a {\it subspace} of the phase space. The direct cascade (branching) within the system starts with the injection of the largest particles at the source.
These particles split into smaller and smaller ones, while spreading away from the source and eventually reaching the subspace of observations. An important goal of the observer is to locate the external driving source. The distance between the observation subspace and the source thus becomes a natural control parameter. An extreme event in this system can be defined as the emergence of a large particle in the observation subspace. Clearly, as the source approaches the subspace of observation, the total number of observed particles increases, the bigger particles become relatively more frequent, and the probability of an extreme event increases. In this paper, we give a complete quantitative description of this phenomenon for an age-dependent multi-type branching diffusion process with immigration in $\mathbb{R}^n$. It turns out that our model closely reproduces the major premonitory patterns of extreme events observed in hierarchical complex systems. Extreme events in such systems are preceded by a transformation of the size distribution of the permanent background activity (see {\it e.g.,} \cite{KBS03}). In particular, the overall activity increases, with a shift in favor of relatively strong, although still sub-extreme, events. This was first established by the analysis of multiple fracturing and seismicity \cite{KB96,RKB97}, and later generalized to socio-economic processes \cite{KSA05}. Our results suggest a simple universal mechanism behind such premonitory patterns. \section{Model} \label{model} The system consists of particles indexed by their {\it generation} $k=0,1,\dots$. Particles of zero generation ({\it immigrants}) are injected into the system by an external forcing. Particles of any generation $k>0$ are produced as a result of splitting of particles of generation $k-1$. Immigrants ($k=0$) are born at the origin ${\bf x}:=(x_1,\dots,x_n) = {\bf 0}$ according to a homogeneous Poisson process with intensity $\mu$.
Each particle lives for some random time $\tau$ and then transforms (splits) into a random number $\beta$ of particles of the next generation. The probability laws of the lifetime $\tau$ and branching $\beta$ are rank-, time-, and space-independent. New particles are born at the location of their parent at the moment of splitting. The lifetime distribution is exponential: $\mathsf{P}\{\tau<t\} = 1 - e^{-\lambda\,t},$ $\lambda>0$. The conditional probability that a particle transforms into $j\ge 0$ new particles ($j=0$ means that it disappears) given that the transformation took place is denoted by $p_j$. The probability generating function for the number $\beta$ of new particles is thus \begin{equation} \label{branching_pgf} h(s) = \sum_j p_j\,s^j. \end{equation} The expected number of offspring (also called the {\it branching number}) is $B:=E(\beta)=h'(1)$ (see {\it e.g.}, \cite{AN04}). Each particle diffuses in $\mathbb{R}^n$ independently of other particles. This means that the position density $p({\bf x,y},t)$ of a particle that was born at instant $0$ at point ${\bf y}$ solves the equation \begin{equation} \frac{\partial p}{\partial t} = D\left(\sum_i \frac{\partial^2}{\partial x_i^2}\right)p \equiv D\bigtriangleup_{{\bf x}} p \label{diffusion} \end{equation} with the initial condition $p({\bf x,y},0) = \delta({\bf x-y})$. The solution of (\ref{diffusion}) is given by \begin{equation} p({\bf x,y},t)= \left(4\,\pi\,D\,t\right)^{-n/2} \exp\left\{-\frac{|{\bf x-y}|^2}{4\,D\,t}\right\}, \label{sol} \end{equation} where $|{\bf x}|^2 = \sum_i x_i^2$. It is convenient to introduce particle {\it rank} $r:=r_{\rm max}-k$ for an arbitrary integer $r_{\rm max}$ and thus consider particles of ranks $r\le r_{\rm max}$. This reflects our focus on direct cascading, which often assumes that particles with larger size ({\it e.g.}, length, volume, mass, energy, momentum, {\it etc.}) split into smaller ones according to an appropriate conservation law.
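The model just specified is easy to simulate directly. The following Monte Carlo sketch is our own illustration (not the authors' code), with assumed parameter values and a deterministic offspring law $\beta \equiv 2$, so that $B = 2$:

```python
import math
import random

def simulate(T=50.0, mu=1.0, lam=1.0, D=1.0, n=3, k_max=6, seed=0):
    """Monte Carlo sketch of the branching diffusion with immigration.
    Immigrants (generation 0) arrive at the origin as a Poisson process of
    intensity mu; each particle lives an Exp(lam) lifetime, diffuses with
    coefficient D, and splits into beta = 2 offspring (so B = 2).
    Returns a list of (generation, birth_time, birth_position) records."""
    rng = random.Random(seed)
    stack = []
    t = rng.expovariate(mu)
    while t < T:                      # immigrant arrivals on [0, T]
        stack.append((0, t, [0.0] * n))
        t += rng.expovariate(mu)
    records = []
    while stack:
        k, tb, x = stack.pop()
        records.append((k, tb, x))
        tau = rng.expovariate(lam)    # exponential lifetime
        td = tb + tau
        if td >= T or k + 1 > k_max:
            continue
        # Brownian displacement over the lifetime: each coordinate is
        # Normal(0, 2*D*tau), matching the heat kernel (sol)
        xd = [xi + rng.gauss(0.0, math.sqrt(2.0 * D * tau)) for xi in x]
        for _ in range(2):            # beta = 2 offspring of generation k+1
            stack.append((k + 1, td, xd))
    return records
```

With $B=2$ the population of each successive generation roughly doubles until the time horizon cuts the cascade off, in line with the exponential rank distribution derived below.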
Figure~\ref{fig_example} illustrates the model dynamics. \begin{figure} \centering\includegraphics[width=.22\textwidth]{slice_a.eps} \centering\includegraphics[width=.22\textwidth]{slice_b.eps} \centering\includegraphics[width=.22\textwidth]{slice_c.eps} \centering\includegraphics[width=.22\textwidth]{slice_d.eps} \caption{Example of a 3D model population. Different panels show 2D subspaces of the model 3D space at different distances $|{\bf x}|$ to the origin. Model parameters are $\mu=\lambda=1$, $D=1$, $B=2$. Circle size is proportional to the particle rank. Different shades correspond to populations from different immigrants; the descendants of earlier immigrants have a lighter shade. The clustering of particles is explained by the splitting histories. Note that, closer to the origin, the particle activity changes significantly, indicating the increased probability of an extreme event. } \label{fig_example} \end{figure} \section{Spatio-temporal particle rank distributions} \label{results} The model described above is a superposition of independent branching processes generated by individual immigrants. We consider first the case of a single immigrant; then we expand these results to the case of multiple immigrants. Finally, we analyze the rank distribution of particles. Proofs of all statements will be published in a forthcoming paper. \subsection{Single immigrant} \label{single} Let $p_{k,i}(G,{\bf y},t)$ be the conditional probability that at time $t\ge 0$ there exist $i\ge 0$ particles of generation $k\ge 0$ within spatial region $G\subset\mathbb{R}^n$ given that at time 0 a single immigrant was injected at point ${\bf y}$. The corresponding generating function is \begin{equation} F_k(G,{\bf y},t;s) = \sum_{i}p_{k,i}(G,{\bf y},t)s^i.
\end{equation} \begin{proposition} The generating functions $F_k(G,{\bf y},t;s)$ solve the following recursive system of non-linear partial differential equations: \begin{equation} \frac{\partial}{\partial t} F_k =D\bigtriangleup_{\bf y} F_k-\lambda\,F_k+\lambda\,h\left(F_{k-1}\right),\quad k\ge 1, \label{pgf} \end{equation} with the initial conditions $F_k(G,{\bf y},0;s)\equiv 1$, $k\ge 1$, and \begin{equation} F_0(G,{\bf y},t;s) = (1-P) + P\,s, \label{pgf_ini} \end{equation} where $P:=e^{-\lambda\,t}\int_{G} p({\bf x,y},t)d{\bf x}.$ \label{prop_F} \end{proposition} Next, consider the expected number $\bar A_k(G,{\bf y},t)$ of generation $k$ particles at instant $t$ within the region $G$ produced by a single immigrant injected at point ${\bf y}$ at time $t=0$. It is given by the following partial derivative (see {\it e.g.}, \cite{AN04}) \begin{equation} \bar A_k(G,{\bf y},t):=\frac{\partial F_k(G,{\bf y},t;s)}{\partial s}\mid_{s=1}. \end{equation} Consider also the expectation density $A_k({\bf x},{\bf y},t)$ that satisfies, for any $G\subset\mathbb{R}^n$, \begin{equation} \bar A_k(G,{\bf y},t)=\int_G A_k({\bf x},{\bf y},t)\,d{\bf x}. \label{gs} \end{equation} \begin{corollary} The expectation densities $A_k({\bf x},{\bf y},t)$ solve the following recursive system of linear partial differential equations: \begin{equation} \frac{\partial A_k}{\partial t} =D\bigtriangleup_{\bf x} A_k-\lambda\,A_k+\lambda\,B\,A_{k-1},\quad k\ge 1, \label{exp_ave} \end{equation} with the initial conditions $A_k({\bf x},{\bf y},0)\equiv 0,~k\ge 1,$ \begin{eqnarray} A_0({\bf x,y},0) &=& \delta({\bf y-x}),\nonumber\\ A_0({\bf x,y},t) &=& e^{-\lambda\,t} p({\bf x,y},t),~t>0. \end{eqnarray} The solution to this system is given by \begin{eqnarray} A_k({\bf x,y},t) = \frac{(\lambda\,B\,t)^k}{k!}A_0({\bf x,y},t). \end{eqnarray} \label{col} \end{corollary} The system \eqref{exp_ave} has a transparent intuitive meaning.
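The closed form in the corollary can be verified numerically against the system \eqref{exp_ave}. The sketch below is our own illustrative check in dimension $n=1$ with assumed parameter values ($\lambda=1$, $B=2$, $D=1$); the finite-difference residual of the PDE vanishes up to discretization error:

```python
import math

def A(k, x, y, t, lam=1.0, B=2.0, D=1.0):
    """Closed-form expectation density A_k(x, y, t) in n = 1 dimension:
    A_k = (lam*B*t)^k / k! * exp(-lam*t) * heat kernel (sol)."""
    p = math.exp(-(x - y) ** 2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)
    return (lam * B * t) ** k / math.factorial(k) * math.exp(-lam * t) * p

def residual(k, x, y, t, h=1e-4, lam=1.0, B=2.0, D=1.0):
    """Central-difference approximation of
    dA_k/dt - (D * d^2 A_k/dx^2 - lam * A_k + lam * B * A_{k-1});
    should vanish up to O(h^2) discretization error."""
    dt = (A(k, x, y, t + h) - A(k, x, y, t - h)) / (2 * h)
    dxx = (A(k, x + h, y, t) - 2 * A(k, x, y, t) + A(k, x - h, y, t)) / h ** 2
    return dt - (D * dxx - lam * A(k, x, y, t) + lam * B * A(k - 1, x, y, t))
```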
The rate of change of the expectation density $A_k({\bf x},{\bf y},t)$ is affected by the three processes: diffusion of the existing particles of generation $k$ in $\mathbb{R}^n$ (first term in the rhs of \eqref{exp_ave}), splitting of the existing particles of generation $k$ at the rate $\lambda$ (second term), and splitting of generation $k-1$ particles that produce on average $B$ new particles of generation $k$ (third term). \subsection{Multiple immigrants} \label{multiple} Here we expand the results of the previous section to the case of multiple immigrants that appear at the origin according to a homogeneous Poisson process with intensity $\mu$. The expectation ${\mathcal{A}}_k$ of the number of particles of generation $k$ is given, according to the properties of expectations, by \begin{eqnarray} {\mathcal{A}}_k({\bf x},t) = \int_0^t A_k({\bf x},{\bf 0},s)\,\mu\,ds \end{eqnarray} The steady-state spatial distribution ${\mathcal{A}}_k({\bf x})$ corresponds to the limit $t\to \infty$ and is given by \begin{equation} {\mathcal{A}}_k(z)=\frac{\mu} {\lambda\,k!}\left(\frac{B}{2}\right)^k \left(\frac{2\,\pi\,D}{\lambda}\right)^{-n/2}\, z^{\nu}\,K_{\nu}(z). \label{Az} \end{equation} Here $z:=|{\bf x}|\sqrt{\lambda/D}$, $\nu=k-n/2+1$ and $K_{\nu}$ is the modified Bessel function of the second kind. \subsection{Rank distribution and spatial deviations} \label{GR} Recall that the particle rank is defined as $r=r_{\rm max}-k$. The spatially averaged steady-state rank distribution is a pure exponential law with index $B$: \begin{eqnarray} A_k=\int\limits_{{\mathbb{R}}^n}\int\limits_0^{\infty} A_k({\bf x,0},t) \mu\,dt\,d{\bf x} =\frac{\mu}{\lambda}\,B^k\propto B^{-r}. \label{pureexp} \end{eqnarray} To analyze deviations from the pure exponent, we consider the ratio $\gamma_k({\bf x})$ between the number of particles of two consecutive generations: \begin{equation} \gamma_k({\bf x}):=\frac{{\mathcal{A}}_k({\bf x})}{{\mathcal{A}}_{k+1}({\bf x})}. 
\label{gamma} \end{equation} For the purely exponential rank distribution, $A_k({\bf x}) = c\,B^k$, the value of $\gamma_k({\bf x})=1/B$ is independent of $k$ and ${\bf x}$; while deviations from the pure exponent will cause $\gamma_k$ to vary as a function of $k$ and/or ${\bf x}$. Combining \eqref{Az} and \eqref{gamma} we find \begin{equation} \gamma_k({\bf x})=\frac{2\,(k+1)}{B\,z}\, \frac{K_{\nu}(z)}{K_{\nu+1}(z)}, \label{gamma1} \end{equation} where, as before, $z:=|{\bf x}|\,\sqrt{\lambda/D}$ and $\nu=k-n/2+1$. \begin{proposition} The asymptotic behavior of the function $\gamma_k(z)$ is given by \begin{eqnarray} \lim\limits_{z\to 0}\gamma_k(z)&=&\left\{ \begin{array}{cc} \infty,&\nu\le 0,\\ \displaystyle\frac{1}{B}\left(1+\frac{n}{2\,\nu}\right),& \nu>0, \end{array}\right. \label{gammaz0}\\ \gamma_k(z)&\sim& \frac{2(k+1)}{B\,z},~{z\to\infty},~{\rm fixed~}k, \label{gammazinf}\\ \gamma_k(z)&\sim& \frac{1}{B}\left(1+\frac{n}{2\,\nu}\right),~{k\to\infty}, ~{\rm fixed~}z. \label{gammakinf} \end{eqnarray} \label{gammalim} \end{proposition} Proposition \ref{gammalim} allows one to describe all deviations of the particle rank distribution from the pure exponential law \eqref{pureexp}. Figure~\ref{fig_Ak} illustrates our findings. First, Eq.~\eqref{gammakinf} implies that at any spatial point, the distribution asymptotically approaches the exponential form as rank $r$ decreases (generation $k$ increases). Thus the deviations can only be observed at the largest ranks (small generation numbers). Analysis of the large-rank distribution is done using Eqs.~\eqref{gammaz0},\eqref{gammazinf}. Near the origin, where the immigrants enter the system, Eq.~\eqref{gammaz0} implies that $\gamma_k(z) > \gamma_{k+1}(z)>1/B$ for $\nu >0$. Hence, one observes the {\it upward deviations} from the pure exponent: for the same number of rank $r$ particles, the number of rank $r+1$ particles is larger than predicted by \eqref{pureexp}. 
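Both the steady-state formula \eqref{Az} and the limits of Proposition~\ref{gammalim} can be probed numerically. The sketch below is our own illustrative check with assumed parameters; $K_\nu$ is evaluated through its integral representation $K_\nu(z)=\int_0^\infty e^{-z\cosh u}\cosh(\nu u)\,du$, \eqref{Az} is compared with the direct time integral of $\mu A_k$ for $n=2$, and the $k\to\infty$ limit \eqref{gammakinf} is verified:

```python
import math

def simpson(f, a, b, m=4000):
    # composite Simpson rule (m must be even)
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

def K(nu, z):
    # modified Bessel function of the second kind via its integral
    # representation, truncated at u = 12 (ample for the arguments used here)
    return simpson(lambda u: math.exp(-z * math.cosh(u)) * math.cosh(nu * u),
                   0.0, 12.0)

def A_steady(k, xnorm, mu=1.0, lam=1.0, D=1.0, B=2.0, n=2):
    # closed form (Az)
    z = xnorm * math.sqrt(lam / D)
    nu = k - n / 2 + 1
    return (mu / (lam * math.factorial(k)) * (B / 2) ** k
            * (2 * math.pi * D / lam) ** (-n / 2) * z ** nu * K(nu, z))

def A_time_integral(k, xnorm, mu=1.0, lam=1.0, D=1.0, B=2.0, n=2):
    # direct integral of mu * A_k(x, 0, t) over t in (0, infinity)
    def f(t):
        if t <= 0.0:
            return 0.0
        return (mu * (lam * B * t) ** k / math.factorial(k) * math.exp(-lam * t)
                * (4 * math.pi * D * t) ** (-n / 2)
                * math.exp(-xnorm ** 2 / (4 * D * t)))
    return simpson(f, 0.0, 60.0, m=6000)

def gamma_k(k, z, B=2.0, n=3):
    # the ratio (gamma1) between consecutive generations
    nu = k - n / 2 + 1
    return 2 * (k + 1) / (B * z) * K(nu, z) / K(nu + 1, z)
```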
The same behavior is in fact observed for $\nu\le 0$ (the details will be published elsewhere). In addition, for $\nu\le 0$ the ratios $\gamma_k(z)$ do not merely deviate from $1/B$, but diverge to infinity at the origin. Away from the origin, according to Eq.~\eqref{gammazinf}, we have $\gamma_k(z)<\gamma_{k+1}(z)<1/B$, which implies {\it downward deviations} from the pure exponent: for the same number of rank $r$ particles, the number of rank $r+1$ particles is smaller than predicted by \eqref{pureexp}. \begin{figure} \centering\includegraphics[width=.45\textwidth]{Ak_dim.eps} \caption{Deviations from self-similarity: Expected number $A_k(z)$ of generation $k$ particles at distance $z$ from the origin (cf. Proposition \ref{gammalim}). The distance $z$ is increasing (from top to bottom line in each panel) as $z=10^{-3},2,5,10,20$. Model dimension is $n=1$ (panel A), $n=3$ (panel B), $n=5$ (panel C), and $n=10$ (panel D). Other model parameters: $\mu=\lambda=1$, $D=1$, $B=2$, $r_{\rm max}=21$. One can clearly see the transition from downward to upward deviation of the rank distributions from the pure exponential form as we approach the origin.} \label{fig_Ak} \end{figure} \section{Discussion} Motivation for this work is the problem of prediction of extreme events in complex systems. Our point of departure is a classical model of spatially distributed population of particles of different ranks governed by direct cascade of branching and external driving. In the probability theory this model is known as the age-dependent multi-type branching diffusion process with immigration \cite{AN04}. We introduce here a new approach to the study of this process. We assume that observations are only possible on a subspace of the system phase space while the source of external driving remains unobservable. The natural question under this approach is the dependence of size-distributions of particles on the distance to the source. 
The complete analytical solution to this problem is given by Proposition~\ref{prop_F}. It is natural to consider rank as a logarithmic measure of the particle size. If we assume a size-conservation law in the model, the exponential rank distribution derived in \eqref{pureexp} corresponds to a self-similar, power-law distribution of particle sizes, characteristic of many complex systems. Thus, Proposition~\ref{gammalim} describes space-dependent deviations from self-similarity (see also Fig.~\ref{fig_Ak}); in particular, deviations premonitory to an extreme event. The numerical experiments (that will be published elsewhere) confirm the validity of our analytical results and asymptotics in a finite model. The model studied here exhibits very rich and intriguing premonitory behavior. Figure~\ref{fig_example} shows several 2D snapshots of a 3D model at different distances from the source. One can see that, as the source approaches, the following changes in the background activity emerge: a) The intensity (total number of particles) increases; b) Particles of larger size become relatively more numerous; c) Particle clustering becomes more prominent; d) The correlation radius increases; e) Coherent structures emerge. In other words, the model exhibits a broad set of premonitory phenomena previously observed heuristically in real and modeled systems: multiple fracturing \cite{RKB97}, seismicity \cite{KB96}, socio-economics \cite{KSA05}, percolation \cite{ZWG04}, hydrodynamics, hierarchical models of extreme event development \cite{KBS03}. These phenomena are at the heart of earthquake prediction algorithms well validated during 20 years of forward world-wide tests (see {\it e.g.,} \cite{KBS03}). In this paper we analyse only the first-moment properties of the system; such properties can explain the premonitory intensity increase (item a above) and transformation of the particle rank distribution (item b).
At the same time, the framework developed here allows one to quantitatively analyze other premonitory phenomena; this can be readily done by considering the higher-moment properties. \acknowledgments This study was supported in part by NSF grant No. ATM 0620838.
\section{Introduction} Given a collection of 2D polygons, a \emph{gluing} describes a closed surface by specifying how to glue (a part of) each edge of these polygons onto (a part of) another edge. Alexandrov's uniqueness theorem~\cite{alex} states that any valid gluing that is homeomorphic to a sphere and that does not yield a total facial angle greater than $2\pi$ at any point, corresponds to the surface of a unique convex 3D polyhedron (doubly covered convex polygons are also regarded as polyhedra). Note that the original polygonal pieces might need to be folded to obtain this 3D surface. Unfortunately, the proof of Alexandrov's theorem is highly non-constructive. The only known approximation algorithm to find the vertices of this polyhedron~\cite{kpd09-approx} has a pseudopolynomial running time that is impractically large in $n$, where $n$ is the total complexity of the gluing. In particular, its running time depends on $n$ as $\tilde{O}(n^{578.5})$, and it also depends on the aspect ratio of the polyhedral metric, the Gaussian curvature at its vertices, and the desired precision of the solution. There is no known exact algorithm for reconstructing the 3D polyhedron, and in fact the coordinates of the vertices of the polyhedron might not even be expressible as a closed formula~\cite{bannister2014galois}. Enumerating all possible valid gluings is also not an easy task, as the number of gluings can be exponential even for a single polygon~\cite{DDLO02}. However, one valid gluing can be found in polynomial time using dynamic programming~\cite{DO07,lo96-dynprog}. Complete enumerations of gluings and the resulting polyhedra are only known for very specific cases such as the Latin cross~\cite{ddlop99} and a single regular convex polygon~\cite{DO07}. The special case when the polygons to be glued together are all identical regular $k$-gons, and the gluing is \emph{edge-to-edge}, was recently studied by the first two authors of this paper~\cite{kl17-hex}.
For $k>6$, the only two possibilities are two $k$-gons glued into a doubly-covered $k$-gon, or one $k$-gon folded in half (if $k$ is even). When $k=6$, the number of hexagons that can be glued into a convex polyhedron is unbounded. However, for non-flat polyhedra of this type there are at most ten possible graph structures. For six structures out of these ten, the gluings realizing them have been found. For doubly-covered 2D polygons, all the possible polygons and the gluings forming them have been characterized. In this paper we continue this study by thoroughly considering the case of $k=5$, i.e., gluing regular pentagons edge to edge. This setting differs substantially from the case of hexagons, since it is not possible to produce a flat vertex by gluing regular pentagons. Therefore both the number of possible graph structures and the number of possible gluings are finite and small enough to study each of them individually. We start by enumerating all edge-to-edge gluings of regular pentagons satisfying the conditions of Alexandrov's theorem (Section~\ref{sec:enum}). After that we solve the problem of establishing the graph structure of the convex polyhedra corresponding to each such gluing $G$. Using the existing methods (implementation~\cite{sech} of the Bobenko-Izmestiev algorithm~\cite{boben}), we obtain an approximate polyhedron $P$ for gluing $G$. With the help of a computer program, we generate a certificate that the edges of these approximate polyhedra are present in the sought polyhedra. In particular, we upper bound the discrepancy in vertex coordinates between the unique convex polyhedron corresponding to $G$ and a given approximate polyhedron (Theorem~\ref{thm:precision}), which implies a sufficient condition for the polyhedron to have a certain edge (Theorem~\ref{thm:code-whattocheck}). Our computer program checks this condition automatically.
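The counting argument of Section~\ref{sec:enum} (where it is shown that a vertex of the glued polyhedron joins one, two, or three pentagons, with Gaussian curvature $7\pi/5$, $4\pi/5$, or $\pi/5$ respectively) can be confirmed by brute force. The sketch below is our own illustration, not the enumeration program used in the paper; $x$, $y$, $z$ count the vertices of the three curvature types and $N$ is the number of pentagons:

```python
def vertex_type_solutions():
    """Nonnegative integer solutions of the Gauss-Bonnet system
    7x + 4y + z = 20 (total curvature 4*pi, in units of pi/5) and
    x + 2y + 3z = 5N (each pentagon contributes five corners)."""
    sols = []
    for x in range(3):                 # 7x <= 20
        for y in range(6):             # 4y <= 20
            z = 20 - 7 * x - 4 * y
            if z < 0:
                continue
            corners = x + 2 * y + 3 * z
            if corners % 5 == 0 and corners > 0:
                sols.append((x, y, z, corners // 5))
    return sols
```

Every solution has $N$ even, $N \le 12$, and $x+y+z = 2+1.5N$, in agreement with Proposition~\ref{prop:upperbound}.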
For non-simplicial approximate polyhedra $P$, to prove that there are no additional edges present in the sought polyhedra, we resort to ad-hoc geometric methods, using symmetry arguments and reconstructing the process of gluing the polyhedron (Section~\ref{section:geom}). While the main outcome of this work is the full list of the convex polyhedra that are obtained by gluing regular pentagons edge to edge (Section~\ref{section:complete}), the methods for obtaining it are of independent interest and may be applied to other problems of the same flavour. \section{Preliminaries and definitions} In this section we review definitions and previous results that are necessary for the rest of this paper. We start with some basic notions. By a polyhedron we mean a three-dimensional polytope, and, unless stated otherwise, all the polyhedra we are considering are convex. A doubly-covered convex polygon is also regarded as a convex polyhedron. A polyhedron is called \emph{simplicial} if all its faces are triangles. Consider an edge $e$ of a polyhedron, and let $f_1$ and $f_2$ be the two faces of the polyhedron that are incident to $e$. We call a vertex in $f_1$ or $f_2$ {\it opposite to $e$} if it is not incident to $e$. If $f_1$ and $f_2$ are triangles, then there are exactly two vertices opposite to~$e$, see Figure~\ref{fig:oppPoly}. \begin{figure} \centering \begin{minipage}[b]{0.41\textwidth} \centering \includegraphics[height=40mm]{figs/opposite} \caption{Vertices $u_1$ and $u_2$ are opposite \\ to edge $e$ of polyhedron $P$.} \label{fig:oppPoly} \end{minipage}\hspace{1.15cm} \begin{minipage}[b]{0.41\textwidth} \centering \tikz[scale=1.5]{ \input{figs/inscope_g-b_ex} } \caption{Gaussian curvature of \\ the vertices of a convex pentahedron.} \label{fig:gauscurv} \end{minipage} \end{figure} \begin{definition} \label{def:gauscurv} Let $P$ be a convex polyhedron.
The \emph{Gaussian curvature} at a vertex $v$ of $P$ equals $2\pi - \sum_{j=1}^t{\alpha^v_j}$, where $t$ is the number of faces of $P$ incident to $v$, and $\alpha^v_j$ is the angle at $v$ of the $j$-th face incident to $v$. \end{definition} Since $P$ is convex, the Gaussian curvature at each vertex of $P$ is non-negative. \begin{theorem}[Gauss--Bonnet, 1848] The sum of the Gaussian curvatures at all vertices of a 3D polyhedron $P$ equals $4\pi$. \end{theorem} For an example, see Figure~\ref{fig:gauscurv}, which shows a convex pentahedron and the values of the Gaussian curvature at each of its vertices. \begin{definition} \label{def:gluingDef} {\it A gluing $G$} is a collection of polygons $T_1 \ldots T_n$ equipped with an equivalence relation $\sim$ on their boundaries describing how the polygons should be glued to one another. \end{definition} \begin{definition} \label{def:polyMetric} The {\it polyhedral metric} $M$ of a gluing $G$ is the intrinsic metric of the simplicial complex corresponding to $G$: the distance between two points of the gluing is the infimum of the lengths of the polygonal lines joining the points such that each segment of the line lies within one of the polygons $T_1 \ldots T_n$. \end{definition} We denote the distance between points $p$, $q$ of $G$ by $|pq|$. \begin{definition} \label{def:alexCond} A gluing $G$ (and the polyhedral metric corresponding to it) is said to satisfy \emph{Alexandrov's conditions} if: \begin{itemize} \item[a)] the topological space produced by $G$ is homeomorphic to a sphere, and \item[b)] the total sum of angles at each of the vertices of $G$ is at most $2\pi$. \end{itemize} \end{definition} \begin{theorem}[Alexandrov, 1950,~\cite{alex}] \label{thm:alexandrov} If a gluing $G$ satisfies Alexandrov's conditions then this gluing corresponds to a unique convex polyhedron $\P (G)$: that is, the polyhedral metric of $G$ and the shortest-path metric of the surface of $\P (G)$ are equivalent.
\end{theorem} The correspondence to a polyhedron described in this theorem intuitively means that $\P (G)$ can be glued from the polygons of $G$ in accordance with the relation~$\sim$. Note that the polygons of $G$ need not correspond to faces of $\P (G)$. Recall that a chord of a polygon $Q$ is any segment connecting two points on the boundary of $Q$ that lies completely inside $Q$. \begin{definition} \label{def:netDef} For a polyhedron $P$, a net of $P$ is a gluing $G = (T_1 \ldots T_n, \sim)$ of $P$ together with a set of chords of the polygons $T_i$ that do not intersect each other except possibly at endpoints. Those chords represent creases, i.e. lines along which $P$ should be folded from this polygon. \end{definition} \section{Gluing regular pentagons together} \label{sec:enum} In this section, we describe how to enumerate all the edge-to-edge gluings of regular pentagons. \subsection{How many pentagons can we glue and which vertices can we obtain?} Let $P$ be a convex polyhedron obtained by gluing several regular pentagons edge to edge. The vertices of $P$ are clearly vertices of the pentagons. The sum of facial angles around a vertex $v$ of $P$ equals $3\pi/5$ (the interior angle of a regular pentagon) times the number of pentagons glued together at $v$. Since the Gaussian curvature at $v$ is in $(0,2\pi)$, the number of pentagons glued at $v$ can be either one, two, or three. The Gaussian curvature at $v$ is then respectively $7\pi/5$, $4\pi/5$, or $\pi/5$. Note that, as opposed to the case of regular hexagons, it is not possible to produce a vertex of curvature $0$ (which would be a flat point on the surface of $P$) by gluing several pentagons. Therefore all the vertices of the pentagons must correspond to vertices of $P$. \begin{prop} \label{prop:upperbound} Suppose $P$ is a convex polyhedron obtained by gluing edge-to-edge $N$ regular pentagons. Then: (a) $P$ has $2 + 1.5N$ vertices in total. In particular, $N$ must be even. (b) $N$ is at most $12$.
\end{prop} \begin{proof} From the above discussion, the vertices of $P$ can be subdivided into three types according to their Gaussian curvature: (1) the ones of curvature $7\pi/5$, (2) $4\pi/5$, and (3) $\pi/5$. Let us denote the number of vertices of types 1, 2 and 3, respectively, by $x, y, z$. Then we have the following system of two equations: \[ \begin{cases} 7x+4y+z=20 \\ x+2y+3z = 5N \end{cases} \] The first equation is implied by the Gauss-Bonnet theorem; the second one is obtained by counting the vertices of pentagons, since each polyhedron vertex of type 1, 2 and 3 corresponds to respectively one, two or three pentagon vertices. (a) By summing up the equations after multiplying the first one by $0.1$ and the second one by $0.3$, we obtain that $x+y+z = 2 + 1.5N$. (b) Since $x,y,z$ are non-negative integers, from the first equation we derive that the maximum value of $N$ is attained at $x = 0, y = 0, z = 20$. This assignment corresponds to $N = 12$ by the second equation. \end{proof} \subsection{Enumerating all possible gluings.} \label{sec:method} We used a computer program to list all the non-isomorphic gluings of this type. Our program is a simple modification of the one that enumerates the gluings of hexagons~\cite{kl17-hex}. The gluings are depicted in Figures \ref{fig:net21}, \ref{fig:net22}, \ref{fig:net41}, \ref{fig:net42}, \ref{fig:net43}, \ref{fig:net6}, \ref{fig:net8}, \ref{fig:net12}. \input{graphics/list2} \section{A complete list of all shapes obtained by gluing pentagons} \label{section:complete} Below is the list of all polyhedra that can be obtained by gluing regular pentagons. For those polyhedra that are simplicial, their graph structure is confirmed by applying the method of Section~\ref{section:algorithmic}; for the others the proof is geometric and is done in Section~\ref{section:geom}. \begin{itemize} \item 2 pentagons:\vspace{-3.5mm} \begin{itemize} \item doubly-covered regular pentagon, see Figures~\ref{fig:1},~\ref{fig:net21}.
\item simplicial hexahedron with 5 vertices (3 vertices of degree 4, and 2 vertices of degree 3), see Figures~\ref{fig:2},~\ref{fig:net22}. \end{itemize} \end{itemize} \vspace{-3.5mm} \input{graphics/list4} \begin{itemize} \item 4 pentagons:\vspace{-3.5mm} \begin{itemize} \item simplicial dodecahedron with 8 vertices (2 vertices of degree 5 and 6 vertices of degree 4), see Figures~\ref{fig:4-1},~\ref{fig:net41}. \item octahedron with 8 vertices (4 vertices of degree 4 and 4 vertices of degree 3) and 4 quadrilateral and 4 triangular faces. It is a truncated biprism, see Figures~\ref{fig:4-2},~\ref{fig:net42}. \item hexahedron with 8 vertices each of degree 3 and 6 quadrilateral faces. This is a parallelepiped, see Figures~\ref{fig:4-3},~\ref{fig:net43}. \end{itemize} \end{itemize} Note that $\P_{4,1}$, $\P_{4,2}$, $\P_{4,3}$ can be glued from a single common polygon by altering the relation $\sim$. \input{graphics/list6} \begin{itemize} \item 6 pentagons: simplicial octadecahedron (18-hedron) with 11 vertices (5 vertices of degree 6, 6 vertices of degree 4), see Figures~\ref{fig:6},~\ref{fig:net6}. \item 8 pentagons: simplicial icositetrahedron (24-hedron) with 14 vertices (2 vertices of degree 6, 12 vertices of degree 5), see Figures~\ref{fig:8},~\ref{fig:net8}. \item 12 pentagons: regular dodecahedron with 20 vertices of degree 3 and 12 pentagonal faces, see Figures~\ref{fig:12},~\ref{fig:net12}. \end{itemize} We now proceed with a description of how to determine the graph structures of the polyhedra in this list. We separately confirm the presence of the edges (Section~\ref{section:algorithmic}) and prove that no additional edges are present in the quadrilateral faces of $\P_{4,2}$ and $\P_{4,3}$ (Section~\ref{section:geom}). \section{An algorithmic method to verify the graph structure of a glued polyhedron} \label{section:algorithmic} Consider a polyhedral metric $M$ that satisfies Alexandrov's conditions and thus corresponds to a unique polyhedron $\P$.
Suppose we have a polyhedron $P$ that approximates $\P$. That is, vertices of $P$ are in one-to-one correspondence with the cone points of $M$ (and thus with the vertices of $\P$). In this section we show how to check whether the graph structure of $\P$ contains all the edges of $P$. We will be using the following notation: $v_1$, $v_2$, $v_3$,~$\ldots$ for the vertices of $\P$; $u_1$, $u_2$, $u_3$,~$\ldots$ for the corresponding vertices of $P$; $V$, $E$, $F$ for the number of vertices, edges and faces of $P$ respectively; $\mathcal D$ for the maximum degree of a vertex of $P$; $L$ for the length of the longest edge of $P$; $\ball{u}$ for the ball in $\br^3$ of radius $r$ centered at the point $u$. We also know the lengths of edges and distances between vertices of $\P$ since those are lengths of shortest paths between cone points of metric $M$. Let the \emph{discrepancy} of an edge $u_iu_j$ of $P$ be the absolute value of the difference between the length of that edge and the distance between the corresponding vertices $v_i$ and $v_j$ of $\P$. Let \emph{maximum edge discrepancy} $\mu$ of $P$ be the maximum discrepancy for all edges of $P$. Similarly, for any facial angle $u_ju_iu_k$ of $P$, let \emph{discrepancy} of this angle be the absolute value of the difference between the values of $u_ju_iu_k$ and of the angle between the corresponding shortest paths in $\P$; let the maximum angle discrepancy $\gamma$ of $P$ be the maximum discrepancy for all the facial angles of $P$. We base our check on the following theorem. \begin{theorem} \label{thm:precision} Suppose $\mu$ is the maximum edge discrepancy between $P$ and $\P$, $\gamma$ is the maximum angle discrepancy between $P$ and $\P$, $\mathcal D$ is the maximum degree of a vertex of $P$. If $\mathcal D \gamma < \pi / 2$, then each vertex of $\P$ lies within an $r$--ball centered at the corresponding vertex of $P$, where \begin{equation} r = E^2 \cdot L \cdot 2 \sin ( \mathcal D \gamma / 2 ) + E \mu. 
\end{equation} \end{theorem} We defer its proof to Section~\ref{section:precis}, and for now we focus on describing our check, using the theorem as a black box. Let $u_iu_j$ be an edge of $P$ and let $u_a$, $u_b$ be the two vertices of $P$ opposite to the edge $u_iu_j$ (see Figure~\ref{fig:oppPoly}). We want to check that there does not exist a plane intersecting all four $r$--balls centered at $u_i$, $u_j$, $u_a$, $u_b$ respectively. Assume without loss of generality that the plane passing through $u_a$, $u_i$, $u_j$ is not vertical and that $P$ lies below that plane (otherwise apply a rigid transformation to $P$ so that this becomes true). Note that we can always do this since $P$ is convex. Consider three planes $\Pi_1$, $\Pi_2$, $\Pi_3$ tangent to $\ball{u_i}$, $\ball{u_j}$, $\ball{u_a}$ such that: \begin{itemize} \item $\Pi_1$ is below $\ball{u_i}$, $\ball{u_j}$ and above $\ball{u_a}$, \item $\Pi_2$ is below $\ball{u_i}$ and above $\ball{u_j}$, $\ball{u_a}$, \item $\Pi_3$ is below $\ball{u_j}$ and above $\ball{u_i}$, $\ball{u_a}$. \end{itemize} \begin{figure}[h] \centering \def\scopescale{0.77} \input{figs/tang_plane} \caption{Plane $\Pi_1$ tangent to $\ball{u_i}$, $\ball{u_j}$, $\ball{u_a}$.} \label{fig:tangPlane} \end{figure} \begin{theorem} \label{thm:code-whattocheck} If $u_b$ lies below $\Pi_1$, $\Pi_2$ and $\Pi_3$ and the distance from $u_b$ to each of the planes $\Pi_1$, $\Pi_2$ and $\Pi_3$ is greater than $r$, then the edge $v_iv_j$ must be present in $\P$. \end{theorem} An example can be seen in Figure~\ref{fig:tangPlane}: plane $\Pi_1$ is tangent to $\ball{u_i}$, $\ball{u_j}$, $\ball{u_a}$. Point $u_{b,1}$ is below $\Pi_1$, and point $u_{b,2}$ is above $\Pi_1$; the distance from each of the points to $\Pi_1$ is greater than $r$. To prove this theorem, we need the following lemma. \begin{lemma} \label{lm:ballsPlaneCase} Let $\ball\ule$ and $\ball\uri$ be two disks in $\br^2$ whose centers $\ule$, $\uri$ lie on the $x$ axis.
Let $u$ be a point with $x_u > x_{\uri}$ and $y_u < 0$. If $u$ lies below the common tangent of the disks that is above $\ball\ule$ and below $\ball\uri$, then there is no line passing through $\ball\ule$, $\ball\uri$, and $u$. \end{lemma} An example for this lemma can be seen in Figure~\ref{fig:ballsPlaneCaseEx}. Point $u_1$ is above the tangent, so there may be a line passing through it and the two disks. Point $u_2$ is below the tangent, so no line through $\ball\ule$, $\ball\uri$, $u_2$ is possible. \input{figs/thm5/ballsPlaneCaseEx} \begin{proof} Consider the set of points in $\br^2$ covered by all lines passing through $\ball\ule$, $\ball\uri$. We are interested in its lower boundary, which corresponds to the lowest line passing through these disks. \input{figs/thm5/tangRaise} Consider a line passing through the disks. If it is not tangent to $\ball \ule$ from above, it can be made lower by raising its intersection with $\ball\ule$, see Figure~\ref{fig:tangRaiseA}. If it is not tangent to $\ball\uri$ from below, it can also be made lower by lowering its intersection with $\ball\uri$, see Figure~\ref{fig:tangRaiseB}. Therefore, any line passing through $\ball\ule$, $\ball\uri$ is higher than the common tangent of these disks when $x > x_{\uri}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:code-whattocheck}] We can assume that points $u_i$, $u_j$ lie on the $y$-axis, see Figure~\ref{fig:tangPlane}. For each pair $(x,y)$ we want to find the minimum $z$ such that there is a plane passing through $\ball{u_i}$, $\ball{u_j}$, $\ball{u_a}$, and $(x,y,z)$. Let us consider three cases: (1) $y_{u_i} \le y \le y_{u_j}$, (2) $y \le y_{u_i}$, (3) $y_{u_j} \le y$. Consider case 1. Project everything on the plane $y=0$.
The projections of $\ball{u_i}$ and $\ball{u_j}$ coincide, and a plane $\Pi$ passing through these disks can be lowered by matching the projections of its intersections with the disks, thus making the projection of $\Pi$ a line. Now we can apply Lemma~\ref{lm:ballsPlaneCase} to the projection to get the plane $\Pi_1$ from the statement of the theorem. Consider case 2. Project everything on a plane orthogonal to the segment $u_ju_a$. Using a similar argument and applying Lemma~\ref{lm:ballsPlaneCase}, we get the plane $\Pi_2$ from the statement. Case 3 is symmetric to case 2 and gives us the plane $\Pi_3$. Therefore, all points of $\ball{u_b}$ should lie below the planes $\Pi_1$, $\Pi_2$, $\Pi_3$, which yields the condition that the distance between $u_b$ and each of the planes is greater than $r$. \end{proof} The check suggested in Theorem~\ref{thm:code-whattocheck} requires $O(1)$ time, and has to be performed once for every edge $u_iu_j$ of $P$. This implies the following. \begin{theorem} \label{thm:checkGraphStructure} Given a polyhedral metric $M$ satisfying Alexandrov's conditions and an approximation $P$ for the polyhedron $\P$ that corresponds to $M$, there is a procedure to verify for each edge of $P$ whether it is present in $\P$. The procedure answers ``yes'' only for those edges that are present in $\P$, and it answers ``inconclusive'' if the approximation $P$ is not precise enough. The procedure requires time $\mathcal O(E)$. \end{theorem} Inconclusive answers occur if a plane exists that intersects all four $r$--balls even though there is an edge connecting two of the vertices. In such a case, the precision has to be increased by replacing $P$ with a polyhedron that has smaller discrepancies in edge lengths and angle values, and repeating the procedure.
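As a rough numerical illustration (separate from the paper's own Haskell implementation), the radius bound of Theorem~\ref{thm:precision} can be evaluated directly. The values of $E$ and $\mathcal D$ below are made-up sample values; $L$, $\gamma$, $\mu$ are of the same order as those reported for the computed approximations.

```python
import math

def precision_radius(E, L, D, gamma, mu):
    """Radius bound r = E^2 * L * 2*sin(D*gamma/2) + E*mu from Theorem thm:precision.

    Raises ValueError when the hypothesis D * gamma < pi/2 of the theorem fails.
    """
    if D * gamma >= math.pi / 2:
        raise ValueError("Theorem requires D * gamma < pi / 2")
    return E ** 2 * L * 2 * math.sin(D * gamma / 2) + E * mu

# Illustrative parameters: mu ~ 1e-7, gamma ~ 1e-6, L ~ 2.5 as in the text;
# E = 15 edges and maximum degree D = 5 are assumptions for this sketch only.
r = precision_radius(E=15, L=2.5, D=5, gamma=1e-6, mu=1e-7)
```

For parameters of this order the bound gives $r \sim 10^{-3}$, small enough for the tangent-plane check above to succeed.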
Theorem~\ref{thm:checkGraphStructure} yields that if $\P$ is simplicial, we can verify its whole graph structure in time $\mathcal O(E)$ without any additional effort. However, if there are faces in $\P$ with four or more vertices, the absence of the edges that are diagonals of these faces has to be proved, which requires some creativity. For non-simplicial shapes glued from pentagons such proofs are given in Section~\ref{section:geom}. To obtain the polyhedron $P$ one can use the algorithm developed by Kane et al.~\cite{kpd09-approx} or the one by Bobenko and Izmestiev~\cite{boben}. Each of them outputs a polyhedron $P$ that approximates $\P$. In this work we used the implementation of the latter presented by Sechelmann~\cite{sech}. It gave us an approximation with $\mu \sim 10^{-7}$, $\gamma \sim 10^{-6}$, $L \sim 2.5$. These parameters allowed for $r \sim 10^{-3}$, which was enough to verify the presence of all the suggested edges. To do so, we developed a program that checks the condition of Theorem~\ref{thm:code-whattocheck}. Its source code can be found in our bitbucket repository\footnote{\url{bitbucket.org/boris-a-zolotov/diplomnaia-rabota-19/src/master/praxis/haskell}}. \subsection{Proof of Theorem~\ref{thm:precision}} \label{section:precis} We now proceed with the proof of Theorem~\ref{thm:precision}. To prove it, we need the following lemma. \begin{lemma} \label{lm:singlesegm} Let $pq$, $pq'$ be line segments in $\br^3$ with $|pq| = \ell$. If there are two real numbers $\varepsilon$, $\theta$ with $\varepsilon > 0$ and $0 < \theta < \tfrac{\pi}{2}$ such that \[ \ell-\varepsilon \le |pq'| \le \ell+\varepsilon \text{\quad and\quad} \magl qpq' \le \theta, \] then\quad $|qq'| \le 2 \ell \sin\frac{\theta}{2} + \varepsilon.
\quad\refstepcounter{equation}\hfill(\theequation)\label{eq:singlesegm}$ \end{lemma} \begin{proof} $pq'$ can be obtained from $pq$, as shown in Figure~\ref{fig:edgeOffset}, by a composition $\tau \circ \rho$ of \begin{itemize} \item[(1)] a rotation $\rho$ around $p$ by an angle at most $\theta$, \item[(2)] a homothety $\tau$ with center $p$ and ratio $\lambda$, where $\lambda$ is some real number with $\frac{\ell-\varepsilon}{\ell} \le \lambda \le \frac{\ell+\varepsilon}{\ell}$. \end{itemize} \begin{figure}[h] \centering \input{figs/edgeOffset} \caption{After a segment is rotated by at most $\theta$ and its length is changed by at most $\varepsilon$, its endpoint $q$ moves by at most $\ell \cdot 2\sin\tfrac{\theta}{2} + \varepsilon$.} \label{fig:edgeOffset} \end{figure} First, it is clear that $\dist\left( \rho(q),\ \tau(\rho(q)) \right)\le\varepsilon$, since $\tau$ is defined so as to add not more than $\varepsilon$ to a segment of length $\ell$. Now we estimate $\dist (q, \rho(q))$. It is at most $\ell \cdot 2\sin ( \theta / 2)$, which is the length of the base of an isosceles triangle with sides equal to $\ell$ and angle $\theta$ at the apex. Combining the above estimates with the triangle inequality concludes the proof. \end{proof} \begin{figure} \centering \includegraphics[scale=1.1]{figs/angleOffset} \caption{The angle between the edge of $\P$ and the edge of $P$ is less than $\mathcal D\gamma$.} \label{fig:angleOffset} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:precision}] Place $P$ and $\P$ in such a way that \begin{enumerate} \item a pair of their corresponding vertices, $u_1$ in $P$ and $v_1$ in $\P$, coincide, \item a pair of corresponding edges, $e'$ incident to $u_1$ in $P$ and $e$ incident to $v_1$ in $\P$, lie on the same ray, and \item a pair of corresponding faces, $f'$ in $P$ incident to $u_1$ and $e'$ and $f$ in $\P$ incident to $v_1$ and $e$, lie in the same half-plane.
\end{enumerate} Consider a pair of corresponding vertices, $u$ in $P$ and $v$ in $\P$. In order to estimate $|uv|$, consider a shortest path $\pi_1 = u_1w'_1w'_2\ldots w'_ku$ in the graph structure of the polyhedron $P$. It consists of edges of $P$ and need not be a geodesic shortest path from $u_1$ to $u$. The vertices of $\pi_1$ correspond to the vertices of another path $\pi_2 = v_1w_1w_2\ldots w_kv$ in $\P$. Since $\pi_1$ is a simple path, it contains at most $E$ edges and therefore its total length is at most $EL$. We now focus on the paths themselves, not on the polyhedra. Path $\pi_2$ can be obtained from $\pi_1$ by a sequence of changes of edge directions (see Figures~\ref{fig:angleOffset},~\ref{fig:anglePathOffset}) and edge lengths (see Figure~\ref{fig:edgePathOffset}). Let us estimate by how much the endpoint $u$ of path $\pi_1$ can move when this sequence of changes is applied. Denote $w_0' \coloneqq u_1$, $w_0 \coloneqq v_1$ and assume that for each $j = 1, \ldots, i$ the edge $w_{j-1}' w_j'$ is parallel to $w_{j-1} w_j$. Then, by the triangle inequality, the angle $\alpha$ between $w'_i w'_{i+1}$ and $w_i w_{i+1}$ is at most $\mathcal D\gamma$, see Figure~\ref{fig:angleOffset}. Rotate the path $w_i' \ldots w_k' u$ around $w_i'$ by the angle $\alpha$ so that $w_i' w_{i+1}'$ and $w_i w_{i+1}$ become parallel. The distance $|w_i' u|$ is at most $EL$, so, by Lemma~\ref{lm:singlesegm}, every time we apply such a rotation, the endpoint $u$ of path $\pi_1$ moves by at most $E L \cdot 2 \sin (\mathcal D \gamma / 2)$. Since there are at most $E$ vertices in the path and $E$ rotations are applied, the endpoint $u$ moves by at most \begin{equation} \label{eq:anglePathOffset} E^2 \cdot L \cdot 2 \sin\ll \frac{\mathcal D \gamma}{2} \rr.
\end{equation} \begin{figure} \centering \begin{subfigure}[b]{0.44\columnwidth} \centering \includegraphics[scale=1.15]{figs/anglePathOffset} \caption{\ } \label{fig:anglePathOffset} \end{subfigure} ~ \centering \begin{subfigure}[b]{0.44\columnwidth} \centering \includegraphics[scale=1.15]{figs/edgePathOffset} \caption{\ } \label{fig:edgePathOffset} \end{subfigure} \caption{Illustration for the proof of Theorem~\ref{thm:precision}: {\itshape (a)}~A rotation by an angle less than $\mathcal D\gamma$ is applied to the path $w'_i\ldots w'_k$. {\itshape (b)}~The edge $w'_iw'_{i+1}$ is lengthened or shortened by not more than $\mu$. \vspace{-3mm}} \end{figure} Now that the directions of all the edges in path $\pi_1$ coincide with the directions of the edges in path $\pi_2$, we can make the lengths of corresponding edges match. If the length of a single edge of a path in $P$ is changed by at most $\mu$, and the other edges are not changed (as shown in Figure~\ref{fig:edgePathOffset}), then the end of the path also moves by not more than $\mu$. Therefore, after we adjust the lengths of all the edges, the endpoint $u$ of path $\pi_1$ moves by at most $E \cdot \mu.~\refstepcounter{equation}\hfill(\theequation)\label{eq:edgePathOffset}$ Combining~(\ref{eq:anglePathOffset}) and~(\ref{eq:edgePathOffset}) implies that in total the point $u$ has moved by at most \begin{equation} E^2 \cdot L \cdot 2 \sin ( \mathcal D \gamma / 2 ) + E \mu. \end{equation} This completes the proof. \end{proof} \section{Geometric methods to determine graph structure} \label{section:geom} In this section we give the last part of the proof that the polyhedra corresponding to the gluings listed in Section~\ref{section:complete} have the same graph structure as the polyhedra listed in the same section. That is, we prove that the quadrilateral faces of $P_{4,2}$, $P_{4,3}$ correspond to quadrilateral faces of $\P_{4,2}$, $\P_{4,3}$, i.e., that certain edges are not present in $\P_{4,2}$, $\P_{4,3}$.
\subsection{Quadrilateral faces of $\P_{4,2}$} Recall that $\P_{4,2}$ is the polyhedron that corresponds to the gluing $G_{4,2}$ (see Figure~\ref{fig:net42}). Let $A, B, \ldots, H$ denote the vertices of $G_{4,2}$, see Figure~\ref{fig:4-2razv}. We have already established by the methods of Section~\ref{section:algorithmic} that $\P_{4,2}$ has the edges that are shown in the net in Figure~\ref{fig:net42} (black lines). We now prove the following. \begin{theorem} \label{thm:4-2symm} For the polyhedron $\P_{4,2} = \P(G_{4,2})$, each of the 4-tuples of vertices $(G,H,C,D)$, $(A,B,H,G)$, $(E,F,C,B)$, $(A,D,F,E)$ forms a quadrilateral face of $\P_{4,2}$. \end{theorem} \begin{proof} \begin{figure}[h] \input{figs/4-2symm} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.2cm]{figs/s/s422} \caption{The net of $\P_{4,2}$.} \label{fig:4-2razv} \end{figure} Observe first that there are two vertical planes such that $\P_{4,2}$ is symmetric with respect to both of them: (1) the plane $\zeta$ that passes through edge $GH$ (the common side of two pentagons), and the midpoints $M_1$, $M_2$, $M_3$ of edges $AD$, $EF$, $BC$ respectively (see Figure~\ref{fig:4-2symm}); and (2) the plane $\zeta'$ that passes through edge $EF$ and the midpoints of edges $AB$, $GH$ and $DC$. Indeed, the polyhedron $\P_{4,2}$ is symmetric with respect to the plane $\zeta$, since the segment $HM_2$ cuts the pentagon $EFCHB$ in half (colored orange in Figures~\ref{fig:4-2symm} and~\ref{fig:4-2razv}), and the segment $GM_2$ does the same with the pentagon $FEAGD$ (colored yellow in Figures~\ref{fig:4-2symm} and~\ref{fig:4-2razv}). The argument for the plane $\zeta'$ is analogous. Suppose for the sake of contradiction that $BF$ is an edge of $\P_{4,2}$. Then the segment $EC$ must also be an edge due to the symmetry with respect to the plane $\zeta$. However, the segments $BF$ and $EC$ cross inside the pentagon $EFCHB$ and thus cannot both be chords of the net of $\P_{4,2}$. We arrive at a contradiction.
By the same argument $EC$ cannot be an edge of $\P_{4,2}$. Therefore $EFCB$ is a quadrilateral face of $\P_{4,2}$. The existence of the quadrilateral faces $GHCD,\ ABHG,\ ADFE$ is implied by a symmetric argument. This completes the proof. \end{proof} \subsection{Quadrilateral faces of $\P_{4,3}$} Recall that $\P_{4,3}$ is the polyhedron that corresponds to the gluing $G_{4,3}$ (see Figure~\ref{fig:net43}). Again let $A, B, \ldots, H$ denote the vertices of $G_{4,3}$, see Figure~\ref{fig:4-3razv}. The chords shown in the net in Figure~\ref{fig:net43} (black lines) have already been proven to correspond to edges of $\P_{4,3}$. We now prove the following. \begin{theorem} \label{thm:4-3symm} For the polyhedron $\P_{4,3} = \P(G_{4,3})$, each of the 4-tuples of vertices $(E,A,B,F)$, $(E,A,D,H)$, $(C,G,F,B)$, $(C,G,H,D)$, $(A,B,C,D)$, $(E,F,G,H)$ forms a quadrilateral face of $\P_{4,3}$. In particular, each of these faces is a parallelogram. \end{theorem} \begin{proof} \begin{figure}[h] \input{figs/4-3parall} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.2cm]{figs/s/s432} \caption{The net of $\P_{4,3}$.} \label{fig:4-3razv} \end{figure} We show that there is a convex polyhedron with the net as in Figure~\ref{fig:4-3razv} that satisfies the claim. By Alexandrov's theorem such a polyhedron is unique and is exactly $\P_{4,3}$. The pentagon $EAFHA$ (colored green in Figure~\ref{fig:4-3razv}) is folded along its diagonals $EF$ and $EH$ and glued along its edge $EA$. We use one degree of freedom to place it so that it is symmetric with respect to the plane through $E$, $A$ and $M_1$, where $M_1$ is the midpoint of $HF$. Let us now take another pentagon $AFBDH$ and glue one of its vertices to $A$. Place this pentagon in such a way that the plane $ADB$ is parallel to the plane $EHF$ (see the orange pentagon in Figure~\ref{fig:4-3razv}). Now we glue these two pentagons along the edges $AF$ and $AH$ without changing the position of the triangle $ADB$.
Since $\magl FEA + \magl EAF + \magl FAB = \pi$, the points $E, A, B, F$ are coplanar and form a parallelogram. By analogous arguments, $EADH$, $CGFB$, and $CGHD$ are parallelograms as well. It is easy to see that the shape we just obtained by gluing the pentagons $EAFHA$ and $AHDBF$ is still symmetric with respect to the plane $EAM_1$, and the planes $EHF$ and $ADB$ are parallel. Now let us show that the points $H, D, B, F$ are coplanar and form a square $HDBF$. Indeed, all of its sides have equal length, being sides of a regular pentagon, and it has an axis of symmetry passing through the midpoints $M_1$ and $M_2$ of its opposite sides. Now if we glue the two halves of the polyhedron along this common square, the triangles $CDB$ and $ADB$ will be coplanar, since $\magl CM_2M_1 = \magl EM_1M_2$ and $\magl EM_1M_2 + \magl M_1M_2A = \pi$. Since $|AD| = |DC| = |CB| = |BA|$ as diagonals of a regular pentagon, $ADCB$ is a rhombus. By a similar argument, $EHGF$ is a rhombus as well. This completes the proof. \end{proof}
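The angle identity used in the proof rests on standard regular-pentagon angles: the angle between a side and an adjacent diagonal is $\pi/5$, and the interior angle is $3\pi/5$, so $\pi/5 + 3\pi/5 + \pi/5 = \pi$. A small numerical sanity check of these angles (illustrative only, not part of the proof):

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b in the triangle a-b-c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Vertices of a regular pentagon inscribed in the unit circle.
P = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
     for k in range(5)]

side_diag = angle_deg(P[1], P[0], P[2])   # side P0P1 vs. diagonal P0P2: 36 degrees
interior = angle_deg(P[0], P[1], P[2])    # interior angle of the pentagon: 108 degrees
```

The two side–diagonal angles and one interior angle indeed sum to $180^\circ$.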
\section{Introduction}\label{sec:intro} In recent years, massive multiple-input single-output (MISO) has been actively investigated for fifth-generation (5G) and future wireless communication systems due to its significant gain in spectral efficiency \cite{marzetta2010noncooperative}. Because of the large number of antennas, however, the high hardware cost and considerable power consumption become key challenges. In massive MISO systems, the use of cheap and power-efficient building blocks, e.g., digital-to-analog converters (DACs) or analog-to-digital converters (ADCs), has attracted the most interest as a promising low-power solution \cite{spencer2004introduction,larsson2014massive}. Considering the same clock frequency and resolution, it is known that DACs have lower power consumption than ADCs, and research on low-resolution DACs has therefore often been neglected. However, in downlink multiuser massive MISO systems, the number of transmit antennas at the base station (BS) is much larger than the number of receive antennas. In this context, we should consider the DACs' power consumption, cost, and computational complexity. In downlink systems, conventional precoding methods such as zero-forcing (ZF) and regularized ZF (RZF) achieve almost optimal performance effectively \cite{peel2005vector}. These linear precoding schemes have a low complexity and are widely used in wireless communication with high-resolution DACs (e.g., 12 bits). But in practice, massive MISO must be built with low-cost DACs, because the power consumption due to quantization increases exponentially as the resolution increases. Various non-linear precoding methods with phase-shift-keying (PSK) constellations have been studied actively in such systems, which can achieve good performances with low complexities \cite{park2019construction,li2018massive,castaneda20171,landau2017branch}.
However, the above methods cannot be straightforwardly applied to the more practical quadrature-amplitude-modulation (QAM) constellations since in QAM constellations the decision regions are bounded. Recently, various precoding methods for QAM constellations have been investigated in \cite{amor201716,jacobsson2016nonlinear,sohrabi2018one,jacobsson2017quantized,jedda2018quantized}. Leveraging the fact that 16-QAM symbols can be obtained as the superposition of two QPSK symbols, the authors in \cite{amor201716} formulated an optimization problem using gradient projection to obtain 1-bit transmit vectors, and stored them in a look-up-table per coherent channel. In \cite{jacobsson2016nonlinear}, non-linear 1-bit precoding methods for massive MIMO with QAM constellations have been proposed, which are enabled by semi-definite relaxation and $\ell_\infty$-norm relaxation. However, these methods do not provide an elegant complexity-performance trade-off. Thus, it is necessary to investigate a precoding method with an attractive performance and low complexity under QAM constellations, which is the major subject of this paper. \textcolor{blue}{Some precoding methods with 1-bit DACs follow the minimum mean square error (MMSE) design criterion. In \cite{castaneda20171}, the C1PO method is proposed based on bi-convex relaxation. Due to the complexity of the matrix inversion in C1PO, C2PO has been proposed as a low-complexity algorithmic variant. The paper shows attractive error-rate performance with efficient complexity for low-order modulations. In \cite{chen2019mmse}, the MMSE-based one-bit precoding method MMSE-ERP is developed with enhanced receive processing capability at the mobile stations. The algorithm is based on a combination of alternating minimization, a projected gradient method, and an equilibrium constraint. The performance of MMSE-ERP is significant.
Also, \cite{wang2018finite} provides the IDE algorithm that exploits an alternating direction method of multipliers (ADMM) framework, as well as a complexity-efficient variant called IDE2. Both IDE and IDE2 achieve excellent error-rate performance. The constructive interference (CI) design criterion is similar to ours. The works \cite{li2020interference, li20201bit,li2018massive} introduce many symbol-level precoding methods that follow the CI design criterion. Symbol scaling is an efficient algorithm that achieves good performance with PSK. In \cite{li20201bit,li2020interference}, the optimization problems are defined with both equality constraints and inequality constraints. Based on these problems, a partial branch-and-bound (P-BB) and an ordered partial sequential update (OPSU) achieve near-optimal performance and significant performance, respectively. In \cite{jedda2018quantized}, a maximum safety margin (MSM) design criterion exploiting constructive interference is adopted. Also, an MSM algorithm and its analysis are provided for constant envelope precoding with PSK and QAM.} In this paper, we present a novel direction to construct a 1-bit transmit signal vector for a downlink MU-MISO system with 1-bit DACs. Toward this, our contributions are summarized as follows: \begin{itemize} \item We first derive the so-called {\em feasibility condition} which ensures that each user's noiseless observation is placed in a desired decision region. That is, if a transmit signal is constructed to satisfy the feasibility condition, each user can recover a desired signal successfully at a higher signal-to-noise ratio (SNR). \item Incorporating robustness to an additive noise into the feasibility condition, we show that our problem to construct a 1-bit transmit signal vector can be formulated as a mixed integer linear programming (MILP) problem. This problem can be optimally solved via a linear programming (LP) relaxation and branch-and-bound.
\item Furthermore, we propose a low-complexity algorithm to solve the MILP by introducing a novel greedy algorithm, which can almost achieve the optimal performance with much lower computational complexity. \item Via simulation results, we demonstrate that the proposed method can outperform the state-of-the-art methods. Furthermore, the complexity comparisons of the proposed and existing methods demonstrate the potential of the proposed direction and algorithm. \end{itemize} This paper is organized as follows. In Section~\ref{sec:pre}, we provide useful notations and definitions, and describe the system model. In Section \ref{sec:structure}, we propose an efficient method to construct a transmit signal vector for downlink MU-MISO systems with 1-bit DACs. Moreover, low-complexity methods are proposed in Section \ref{sec:low complexity methods}. Section \ref{sec:simulation} provides simulation results. Conclusions are provided in Section~\ref{sec:conclusion}. \section{Preliminaries}\label{sec:pre} In this section, we provide useful notations used throughout the paper, and then describe the system model. \subsection{Notation} The lowercase and uppercase bold letters represent column vectors and matrices, respectively. The symbol $(\cdot)^{{\sf T}}$ denotes the transpose of a vector or a matrix. For any vector $\bf x$, $x_i$ represents the $i$-th component of $\bf x$. Let $\left[a:b\right]\stackrel{\Delta}{=}\{a,a+1,\ldots,b\}$ for any integers $a$ and $b$ with $a<b$. The notation $\text{card}(\mathcal{U})$ denotes the number of elements of a finite set $\mathcal{U}$. The rank of a matrix ${\bf A}$ is denoted as ${\rm rank}({\bf A})$. $\Re({\bf a})$ and $\Im({\bf a})$ represent the real and imaginary parts of a complex vector ${\bf a}$, respectively. For any $x\in\mathbb{C}$, we let \begin{equation}\label{eq:1} g(x)=[\Re(x), \Im(x)]^{{\sf T}}, \end{equation} and the inverse mapping of $g$ is denoted as $g^{-1}$.
Also, $g$ and $g^{-1}$ are component-wise operations, i.e., $g([x_1,x_2]^{{\sf T}})=[\Re(x_1),\Im(x_1),\Re(x_2),\Im(x_2)]^{{\sf T}}$. For a complex value $x$, its real-valued matrix expansion $\phi(x)$ is defined as \begin{equation}\label{eq:2} \phi(x)=\left[{\begin{array}{cc} \Re(x)& -\Im(x)\\ \Im(x)& \Re(x) \end{array}}\right]. \end{equation} As an extension to a vector, the operation $\phi$ is applied in an element-wise manner as \begin{equation} \phi([x_1,x_2]^{{\sf T}})=[\phi(x_1)^{{\sf T}},\phi(x_2)^{{\sf T}}]^{{\sf T}}. \end{equation} $\bar{\bf{1}}_n$ denotes the length-$n$ all-one vector, and $\otimes$ indicates the Kronecker product operator. \subsection{System Model} We consider a downlink MU-MISO system where a BS equipped with $N_t \gg K$ transmit antennas serves $K$ single-antenna users. As a natural extension of our earlier work in \cite{park2019construction}, which focuses on PSK constellations only, this paper considers $4^n$-QAM with $n\geq 2$. Let ${\cal C}$ denote the set of constellation points of $4^n$-QAM. Also, letting ${\bf x}=[x_1,\ldots,x_{N_t}]^{{\sf T}}$ be a transmit vector at the BS, the received signal vector ${\bf y}\in\mbox{\bb C}^{K}$ at the $K$ users is given as \begin{equation}\label{eq:3} {\bf y} = \sqrt{\rho}{\bf H}{\bf x}+{\bf z}, \end{equation} where ${\bf H}\in\mathbb{C}^{K\times N_t}$ denotes the frequency-flat Rayleigh fading channel, each component of which follows a complex Gaussian distribution with zero mean and unit variance, and ${\bf z}\in\mathbb{C}^{K\times 1}$ denotes the additive Gaussian noise vector whose elements are distributed as complex Gaussian random variables with zero mean and unit variance, i.e., $z_i \sim \mathcal{CN}(0,\sigma^2=1)$. The SNR is defined as ${\sf SNR}=\rho/\sigma^2$, where $\rho$ denotes the per-antenna power constraint. Throughout the paper, it is assumed that the channel matrix ${\bf H}$ is perfectly known at the BS.
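For intuition, the model in (\ref{eq:3}) with each entry of ${\bf x}$ restricted to $\{\pm1\pm j\}$ by the 1-bit DACs (as formalized in (\ref{eq:5})) can be sketched numerically. The quantized-ZF precoder below is only a naive illustrative baseline, not the method proposed in this paper, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nt, rho = 4, 64, 1.0  # arbitrary sample dimensions

# Frequency-flat Rayleigh channel: entries distributed as CN(0, 1).
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=K)  # sample messages

# Naive baseline: quantize a zero-forcing solution entrywise to {+-1 +-1j}.
x_zf = H.conj().T @ np.linalg.solve(H @ H.conj().T, s)
x = np.sign(x_zf.real) + 1j * np.sign(x_zf.imag)  # 1-bit DAC constraint

z = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
y = np.sqrt(rho) * H @ x + z  # received signal vector
```

Quantizing the ZF output this way satisfies the hardware constraint but distorts the noiseless observations, which is precisely why a dedicated construction of ${\bf x}$ is needed.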
Given a message vector ${\bf s}\in\mathcal{C}^K$, the BS needs to construct a transmit vector ${\bf x}$ such that each user $k$ can recover the desired message $s_k$ successfully. Toward this, our goal is to construct a precoding function ${\cal P}$: \begin{equation}\label{eq:4} {\bf x}=\mathcal{P}({\bf H},{\bf s}), \end{equation} which produces a transmit vector ${\bf x}$ from the channel matrix ${\bf H}$ and the message vector ${\bf s}$. Focusing on the impact of 1-bit DACs on the downlink precoding, we assume that the BS is equipped with 1-bit DACs while all $K$ users are equipped with infinite-resolution ADCs. To isolate the performance impact of 1-bit DACs, each component $x_i$ of the transmit vector ${\bf x}$ is restricted as \begin{equation}\label{eq:5} \Re(x_i)\ \rm{and}\ \Im(\mathit{x_i})\in \{-1,1\}. \end{equation} Since this restriction causes a severe non-linearity, conventional precoding methods, developed by exploiting linearity, cannot ensure an attractive performance. The objective of this paper is to construct a precoding function $\mathcal{P}({\bf H},{\bf s})$ with a manageable complexity that is suitable for the considered non-linear MISO channels. \section{The Proposed Transmit-Signal Vectors}\label{sec:structure} We formulate an optimization problem to construct a transmit vector ${\bf x}$ under $4^n$-QAM. In particular, this problem can be represented as a manageable MILP. We remark that our earlier work on PSK \cite{park2019construction} cannot be employed here since, unlike in PSK, the decision regions of $4^n$-QAM are bounded (see Fig.~\ref{fig:2}).
For ease of exposition, an equivalent real-valued expression is used as follows: \begin{equation}\label{eq:9} \tilde{{\bf y}} = \sqrt{\rho}\tilde{{\bf H}}\tilde{{\bf x}}+\tilde{{\bf z}}, \end{equation} where $\tilde{{\bf y}}=g({\bf y})$, $\tilde{{\bf x}}=g({\bf x})$, $\tilde{{\bf z}}=g({\bf z})$, and $\tilde{{\bf H}}=\phi({\bf H})\in\mathbb{R}^{2K\times 2N_t}$ denotes the real-valued expansion matrix of ${\bf H}$. Before explaining the main result, we provide useful definitions which are used throughout the paper. \vspace{0.1cm} \begin{definition}\label{def1}{\em (Decision region)} For any constellation point $s\in\mathcal{C}$, the decision region of $s$ is defined as \begin{equation}\label{eq:6} \mathcal{R}(s) \triangleq \left\{y\in\mathbb{C}:|y-s| \le \min_{c\in\mathcal{C}:c\ne s}|y-c|\right\}. \end{equation} This region means that a received signal $y \in\mathcal{R}(s)$ is detected as $s$. In addition, the real-valued decision region is given as \begin{equation} \tilde{\mathcal{R}}(s)=g\left(\mathcal{R}(s)\right). \end{equation}\hfill$\blacksquare$ \end{definition} \vspace{0.1cm} \begin{definition}\label{def2}{\em (Base region)} A base region $\tilde{\mathcal{B}}_i\subseteq \mathbb{R}^2, \forall i\in\left[0:3\right]$, is defined as \begin{equation}\label{eq:7} \tilde{\mathcal{B}}_{i} \triangleq \{\alpha_{i}^1{\bf m}_{i}^1+\alpha_{i}^2{\bf m}_{i}^2: \alpha_{i}^1,\alpha_{i}^2>0\}, \end{equation} where ${\bf m}_i^\ell$ represents a basis vector with \begin{equation}\label{eq:8} {\bf m}_{i}^{\ell} = \begin{cases} g\left(\sqrt{2}\cos(\frac{\pi}{4}(1+2i))\right) & \mbox{if }\ell=1 \\ g\left(j\sqrt{2}\sin(\frac{\pi}{4}(1+2i))\right) & \mbox{if }\ell=2.
\end{cases} \end{equation}\hfill$\blacksquare$ \end{definition} \vspace{0.1cm} \begin{definition}\label{def3}{\em (Partial matrix)} A partial matrix ${\bf A}_{\mathcal{U}}\in\mathbb{R}^{m\times \text{card}(\mathcal{U})}$ is defined as \begin{equation} {\bf A}_{\mathcal{U}}\triangleq[({\bf A}^{{\sf T}})_{u_1},({\bf A}^{{\sf T}})_{u_2},\ldots,({\bf A}^{{\sf T}})_{u_{\text{card}(\mathcal{U})}}]^{{\sf T}}, \end{equation} where ${\bf A}\in\mathbb{R}^{m\times n}$, $\mathcal{U}\triangleq \{u_1,u_2,\ldots,u_{\text{card}(\mathcal{U})}\} \subseteq[1:n]$ and $({\bf A}^{{\sf T}})_k$ denotes the $k$-th row of ${\bf A}^{{\sf T}}$. \end{definition} \hfill$\blacksquare$ \vspace{0.1cm} \begin{definition}\label{def4}{\em (Partial vector)} A partial vector ${\bf x}_{\mathcal{U}}\in\mathbb{R}^{\text{card}(\mathcal{U})\times 1}$ is defined as \begin{equation} {\bf x}_{\mathcal{U}}\triangleq[x_{u_1},x_{u_2},\ldots,x_{u_\text{card}(\mathcal{U})}]^{{\sf T}}, \end{equation} where ${\bf x}\in\mathbb{R}^{n\times 1}$, $\mathcal{U}\triangleq \{u_1,u_2,\ldots,u_{\text{card}(\mathcal{U})}\} \subseteq[1:n]$. \end{definition} \vspace{0.1cm} \begin{figure}[!t] \begin{center} \includegraphics[width=0.35\textwidth]{figure2.pdf} \end{center} \caption{Description of the decision regions for $4^2$-QAM with adaptive $\tau$.} \label{fig:2} \end{figure} In the sequel, the decision region in Definition~\ref{def1} will be represented by the intersections of the $n$ base regions in Definition~\ref{def2} with proper shift values. This representation makes it easier to formulate an optimization problem. First of all, we need to decide the size of the bounded decision regions, i.e., the parameter $\tau$ in Fig.~\ref{fig:2} should be determined. Note that $2\tau=d_{\rm min}$ denotes the minimum Euclidean distance of the given constellation points. In PSK, $\tau$ is always infinite regardless of a channel matrix, whereas in $4^n$-QAM, it should be well-optimized.
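As a sanity check of the real-valued expression in (\ref{eq:9}), one can verify numerically that $g({\bf H}{\bf x})=\phi({\bf H})\,g({\bf x})$, where $\phi$ is applied to ${\bf H}$ blockwise. The sketch below assumes the interleaved block layout consistent with the component-wise definition of $g$:

```python
import numpy as np

def g(v):
    """Interleave real and imaginary parts: [Re v1, Im v1, Re v2, Im v2, ...]."""
    v = np.asarray(v)
    out = np.empty(2 * v.size)
    out[0::2], out[1::2] = v.real, v.imag
    return out

def phi(H):
    """Real-valued expansion of a complex matrix: each entry becomes the
    2x2 block [[Re, -Im], [Im, Re]] of eq. (2), placed blockwise."""
    H = np.asarray(H)
    out = np.empty((2 * H.shape[0], 2 * H.shape[1]))
    out[0::2, 0::2], out[0::2, 1::2] = H.real, -H.imag
    out[1::2, 0::2], out[1::2, 1::2] = H.imag, H.real
    return out

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
# g(H @ x) coincides with phi(H) @ g(x), the identity behind eq. (9).
```

This identity is what lets the complex model (\ref{eq:3}) be handled entirely in the real domain.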
Specifically, $\tau$ should be chosen as large as possible to ensure a reliable performance, provided that a noiseless received signal belongs to the corresponding decision regions at all the $K$ users. From now on, we will explain how to construct a transmit-signal vector ${\bf x}$ for a given decision size $\tau$. Throughout the paper, it is assumed that all $K$ users share the identical decision size $\tau$ for tractability of the optimization. Given $4^n$-QAM, each symbol is indexed by a length-$n$ quaternary vector $(i_1,...,i_n)$ with $i_j \in [0:3]$, i.e., \begin{equation} \mathcal{C}=\left\{s_{(0,\ldots,0)}^{n},s_{(0,\ldots,1)}^{n},\ldots,s_{(3,\ldots,3)}^{n}\right\}. \end{equation} Each constellation point can be represented as a linear combination of the $n$ basis symbols $c_i$'s as \begin{equation}\label{eq:13} s_{(i_1,\ldots, i_n)}^{n} \triangleq \tau \sum_{l=1}^n 2^{n-l} c_{i_l} = \tau {s'}_{(i_1,\ldots, i_n)}^{n}. \end{equation} Here, a normalized constellation point and the basis symbols are defined as \begin{equation}\label{eq:12} {s'}_{(i_1,\ldots, i_n)}^{n} \triangleq \sum_{l=1}^n 2^{n-l} c_{i_l}, \end{equation} where the $c_i$'s, the four fundamental constellation points, are given as \begin{equation} c_i \triangleq \sqrt{2}\left\{{\cos\left({\frac{\pi}{4}} (1+2i)\right)+j\sin\left({\frac{\pi}{4}} (1+2i)\right)}\right\}, \end{equation} for $i\in[0:3]$. For ease of expression, we represent the constellation $\mathcal{C}$ and the corresponding decision regions $\mathcal{R}( s^{n}_{(i_1,\ldots, i_n)})$ in the corresponding real-valued forms: \begin{equation}\label{eq:14} \tilde{\mathcal{C}} = \left\{g(s_{(0,\ldots,0)}^{n}),g(s_{(0,\ldots,1)}^{n}),\ldots, g(s_{(3,\ldots, 3)}^{n})\right\}, \end{equation} and \begin{equation}\label{eq:15} \tilde{\mathcal{R}}\left( s^{n}_{(i_1, \ldots, i_n)}\right) = g\left(\mathcal{R}\left( s^{n}_{(i_1, \ldots ,i_n)}\right)\right).
\end{equation} A transmit vector ${\bf x}$ must ensure that the noiseless received signal at the $k$-th user (i.e., $r_k={\bf h}_k^T{\bf x}$) is placed in the corresponding decision region for every user $k\in[1:K]$. This necessary condition implies that ${\bf x}$ should satisfy \begin{align}\label{eq:region_constraint} g(r_k) &\in \tilde{\mathcal{R}}\left( s^{n}_{(\mu_{k,1},\ldots, \mu_{k,n})}\right), k \in [1:K], \end{align} where $r_k={\bf h}_k^T{\bf x}$ denotes the noiseless received signal (i.e., $y_k ={\bf h}_k^T{\bf x} +z_k$). \vspace{0.2cm} \noindent{\bf Feasibility condition:} The condition in (\ref{eq:region_constraint}) can be rewritten so that the optimization problem can be interpreted as an LP problem. The decision region in (\ref{eq:region_constraint}) can be expressed as the intersection of the $n$ shifted base regions in Definition~\ref{def2}: \begin{align}\label{eq:18} \tilde{\mathcal{R}}\left( s^{n}_{(i_1, \ldots, i_n)}\right)\triangleq \tilde{\mathcal{B}}_{i_1}\bigcap_{l=2}^n \left\{\tilde{\mathcal{B}}_{i_l}+2^{n-(l-1)}g\left(s_{(i_1,\ldots, i_{l-1})}^{l-1}\right)\right\}, \end{align}where the shifted base region with a bias $c$ is defined as \begin{equation}\label{eq:19} \tilde{\mathcal{B}}_{i}+c \triangleq \{\alpha_{i}^1{\bf m}_{i}^1+\alpha_{i}^2{\bf m}_{i}^2+c: \alpha_{i}^{1},\alpha_{i}^{2}>0\}.
\end{equation} Then, the condition in (\ref{eq:region_constraint}) holds if $g(r_k)$ can be represented by the following $n$ linear equations with some positive coefficients, i.e., \begin{align}\label{eq:20} g(r_k) &= \alpha_{k,1}^1{\bf m}_{\mu_{k,1}}^1+\alpha_{k,1}^2{\bf m}_{\mu_{k,1}}^2+2^{n}g(0) \\ & = \alpha_{k,2}^1{\bf m}_{\mu_{k,2}}^1+\alpha_{k,2}^2{\bf m}_{\mu_{k,2}}^2+2^{n-1}g(s_{(\mu_{k,1})}^{1}) \nonumber \\ &\;\; \vdots \nonumber \\ & = \alpha_{k,n}^1{\bf m}_{\mu_{k,n}}^1+\alpha_{k,n}^2{\bf m}_{\mu_{k,n}}^2+2^1g(s_{(\mu_{k,1},\ldots ,\mu_{k,n-1})}^{n-1}), \nonumber \end{align} for some $\alpha_{k,1}^1,\alpha_{k,1}^2, \ldots,\alpha_{k,n}^1,\alpha_{k,n}^2 > 0$. The condition in (\ref{eq:20}) is called a {\em feasibility} condition since it guarantees that $r_k\in \mathcal{R}\left( s^{n}_{(\mu_{k,1},\ldots,\mu_{k,n})}\right)$ for $k\in [1:K]$. In other words, if this condition is satisfied, all $K$ users can reliably detect the desired messages at high SNR. \begin{example}Assuming $4^2$-QAM, we explain how to obtain the feasibility condition in (\ref{eq:18}). Consider the decision region $\mathcal{R}(s^{2}_{(0,2)})$. From Fig.~\ref{fig:2}, the decision region is represented by the intersection of the two base regions $\mathcal{B}_0$ (i.e., the infinite region with the blue basis in Fig.~\ref{fig:2}) and {$\mathcal{B}_2+2^1s_{(0)}^{1}$} (i.e., the infinite region with the red basis in Fig.~\ref{fig:2}). Thus, the decision region (i.e., the gray region in Fig.~\ref{fig:2}) is represented as \begin{equation}\label{eq:21} \mathcal{R}\left(s^{2}_{(0,2)}\right) \triangleq \left\{\mathcal{B}_{0}+2^2g(0)\right\}\cap \left\{\mathcal{B}_{2}+2^1s_{(0)}^{1}\right\}.
\end{equation} Also, from Definition 2, the above condition can be represented by the following two linear equations: \begin{align}\label{eq:23} g(r_k) =& \alpha_{k,1}^1{\bf m}_{0}^1+\alpha_{k,1}^2{\bf m}_{0}^2+2^2g(0),\nonumber \\ = &\alpha_{k,2}^1{\bf m}_{2}^1+\alpha_{k,2}^2{\bf m}_{2}^2+2^1 g\left(s_{(0)}^{1}\right), \end{align} for some positive coefficients $\alpha_{k,1}^1,\alpha_{k,1}^2,\alpha_{k,2}^1,\alpha_{k,2}^2>0$. This is equivalent to the condition in (\ref{eq:20}). In the same way, we can verify the feasibility condition in (\ref{eq:18}). \hfill$\blacksquare$ \end{example} \vspace{0.1cm} We are now ready to derive the MILP problem that generates a good transmit vector ${\bf x}$ under 1-bit DAC constraints. We first represent the feasibility condition in matrix form. Define the $n$ copies of the channel vector ${\bf h}_k$ as \begin{equation} {\bf H}^k\stackrel{\Delta}{=} \bar{\bf{1}}_n \otimes {\bf h}_k=[\underbrace{{\bf h}_k^{\sf T},\ldots, {\bf h}_k^{\sf T}}_{n}]^{\sf T}, \label{eq:26} \end{equation} where ${\bf h}_k$ denotes the $k$-th row of ${\bf H}$. Also, the corresponding real-valued expression is denoted as \begin{equation} \tilde{{\bf H}}^k=\phi({\bf H}^k). \end{equation} Accordingly, the $n$-extended received vector at the $k$-th user is written as \begin{align}\label{eq:27} {\bf r}^k&\triangleq g({\bf H}^k{\bf x}) \nonumber\\ &=\tilde{{\bf H}}^k\tilde{{\bf x}} =\bar{\bf{1}}_n \otimes g(r_k). \end{align} We next express the right-hand side of \eqref{eq:20}, i.e., the linear constraints, in matrix form. From Definition 2, we let \begin{align} {\bf M}_i \triangleq [{\bf m}_{i}^1 \ {\bf m}_{i}^2]&= \begin{bmatrix} \Re(c_{i}) & 0 \\ 0 & \Im(c_{i}) \end{bmatrix} \nonumber\\ &= \begin{bmatrix} \sqrt{2}\cos\left({\frac{\pi}{4}} (1+2i)\right) & 0 \\ 0 & \sqrt{2}\sin\left({\frac{\pi}{4}} (1+2i)\right) \end{bmatrix}.
\label{eq:25} \end{align} We notice that ${\bf M}_i$ is a symmetric and orthogonal matrix, since \begin{align}\label{eq:sym_or} {\bf M}_{i}{\bf M}_{i} = \begin{bmatrix} \cos\left({\frac{\pi}{2}} (1+2i)\right)+1 & 0 \\ 0 & -\cos\left({\frac{\pi}{2}} (1+2i)\right)+1 \end{bmatrix}={\bf I}. \end{align} Since the decision region of a $4^n$-QAM constellation point is formed as the intersection of $n$ shifted base regions, we need a compact representation that captures both the base regions and the shifts (equivalently, the biases). The former is addressed by the basis matrix ${\bf M}^{\mu_k}$ and the coefficient vector $\hbox{\boldmath$\alpha$}^k$, which are respectively written as \begin{align} {\bf M}^{\mu_k} &\triangleq \mbox{diag}({\bf M}_{\mu_{k,1}},\ldots,{\bf M}_{\mu_{k,n}}) \label{eq:28}\\ \hbox{\boldmath$\alpha$}^k &\triangleq[\alpha_{k,1}^1,\alpha_{k,1}^2,\ldots,\alpha_{k,n}^1,\alpha_{k,n}^2]^{\sf T}. \end{align}Lastly, the biases are collected, normalized by $\tau$, into the bias vector ${\bf b}^{\mu_k}$, which is defined from (\ref{eq:13}) as \begin{align}\label{eq:29} {\bf b}^{\mu_k} \triangleq g\left([2^n\cdot0, 2^{n-1}\cdot {s'}_{(\mu_{k,1})}^{1}, \ldots ,2^1\cdot {s'}_{(\mu_{k,1},\ldots,\mu_{k,n-1})}^{n-1}]^{\sf T}\right)\nonumber\\ =\frac{1}{\tau}g\left([2^n\cdot0, 2^{n-1}\cdot s_{(\mu_{k,1})}^{1}, \ldots ,2^1\cdot s_{(\mu_{k,1},\ldots,\mu_{k,n-1})}^{n-1}]^{\sf T}\right). \end{align} From \eqref{eq:28}-\eqref{eq:29}, the matrix form of the $k$-th user's feasibility condition (\ref{eq:20}) is given as \begin{equation}\label{eq:31} {\bf r}^k={\bf M}^{\mu_k}\boldsymbol{\alpha}^k+\tau{\bf b}^{\mu_k}.
\end{equation} Leveraging the expression designed for each user, we construct the cascaded matrix form of the feasibility conditions for all $K$ users as \begin{equation}\label{eq:32} \Bar{{\bf r}}=\Bar{{\bf H}}\tilde{{\bf x}}=\Bar{{\bf M}}\bar{\boldsymbol{\alpha}}+\tau\Bar{{\bf b}}, \end{equation} where \begin{align*} \Bar{{\bf M}} &\triangleq \mbox{diag}({\bf M}^{\mu_1},\ldots,{\bf M}^{\mu_K}),\; \Bar{{\bf H}} \triangleq [(\tilde{{\bf H}}^{1})^{\sf T},\ldots,(\tilde{{\bf H}}^{K})^{\sf T}]^{\sf T} \\ \Bar{{\bf r}} &\triangleq [({\bf r}^{1})^{\sf T},\ldots,({\bf r}^{K})^{\sf T}]^{\sf T},\; \Bar{{\bf b}} \triangleq [({\bf b}^{\mu_1})^{\sf T},\ldots,({\bf b}^{\mu_K})^{\sf T}]^{\sf T} \\ \bar{\boldsymbol{\alpha}}&\triangleq [(\boldsymbol{\alpha}^1)^{\sf T},\ldots,(\boldsymbol{\alpha}^K)^{\sf T}]^{\sf T}. \end{align*} Thus, the feasibility condition in (\ref{eq:32}) is rewritten as \begin{equation}\label{eq:37} \bar{\boldsymbol{\alpha}}=\underbrace{\bar{{\bf M}}\bar{{\bf H}}}_{\triangleq{\boldsymbol{\Lambda}}}\tilde{{\bf x}}-\tau\underbrace{\bar{{\bf M}}\bar{{\bf b}}}_{\triangleq{\boldsymbol{\Lambda}_b}}, \end{equation} using the fact that $\bar{{\bf M}}^{-1}=\bar{{\bf M}}$ from (\ref{eq:sym_or}). We remark that $\boldsymbol{\Lambda}\in\mathbb{R}^{2nK\times2N_t}$ and $\boldsymbol{\Lambda}_b\in\mathbb{R}^{2nK\times1}$ are fully determined by the channel matrix $\bar{{\bf H}}$ and the users' messages $\{\mu_k: k\in[1:K]\}$. \noindent{\bf Robustness:} A feasible transmit vector can provide attractive performance in the high-SNR regime. However, robustness to additive Gaussian noise is not guaranteed. To enhance robustness, one reasonable approach is to place the noiseless received signal near the center of its decision region. Namely, we aim to move the noiseless signal away from the boundaries of the decision region.
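To make the constructions in (\ref{eq:13}), (\ref{eq:25}), and (\ref{eq:20}) concrete, here is a minimal NumPy sketch (the function names are ours) that builds the basis symbols $c_i$, the basis matrices ${\bf M}_i$, and the feasibility coefficients, exploiting ${\bf M}_i^{-1}={\bf M}_i$:

```python
import numpy as np

def c(i):
    """Fundamental constellation point c_i = sqrt(2) * exp(j*pi/4*(1+2i))."""
    return np.sqrt(2) * np.exp(1j * np.pi / 4 * (1 + 2 * i))

def g(z):
    """Complex-to-real mapping g(z) = [Re(z), Im(z)]^T."""
    return np.array([z.real, z.imag])

def M(i):
    """Basis matrix M_i = diag(Re(c_i), Im(c_i)); note M_i @ M_i = I."""
    return np.diag(g(c(i)))

def alphas(r, mu, biases):
    """Coefficients of the feasibility condition: alpha_l = M_{mu_l}(g(r) - bias_l),
    using M_i^{-1} = M_i.  The point r is feasible iff every entry is positive."""
    return [M(m) @ (g(r) - b) for m, b in zip(mu, biases)]
```

For instance, in the QPSK case ($n=1$, bias $g(0)$), `alphas(1+1j, (0,), [np.zeros(2)])` yields positive coefficients, so $1+j$ lies in the region of $c_0$.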
Taking this goal into account, we formulate the following optimization problem: \begin{align}\label{eq:39} &\mathcal{P}_1:&& \max_{\tilde{{\bf x}},\tau} \min\{\alpha_{k,j}^i: i=1,2,\ j\in[1:n],\ k\in[1:K]\} \\ & \text{s.t.} &&\bar{\boldsymbol{\alpha}}=\boldsymbol{\Lambda}\tilde{{\bf x}}-\tau\boldsymbol{\Lambda}_{b},\nonumber\\ &&& \alpha_{k,j}^1,\alpha_{k,j}^2 > 0,\ j\in[1:n],\ k\in[1:K],\nonumber\\ &&& \tilde{{\bf x}}\in\{-1,1\}^{2N_t} \nonumber. \end{align} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{figure4.pdf} \caption{The normalized noiseless received signals of $4^2$-QAM.} \label{fig:4} \end{figure} We now explain how to solve the problem $\mathcal{P}_1$ efficiently. Depending on how the decision parameter $\tau$ is handled, we consider two different scenarios: i) a fixed $\tau$ irrespective of the channel matrix ${\bf H}$; ii) a channel-dependent $\tau$. Clearly, the second scenario incurs more overhead since the BS needs to transmit $\tau$ to the $K$ users more frequently. For the first scenario, we employ the asymptotic result provided in \cite{sohrabi2018one}, where $\tau$ is fully determined as a function of $N_t$, $n$, and $K$: \begin{equation}\label{eq:10} \tau \stackrel{\Delta}{=} {\frac{\sqrt{{2/ \pi}}}{6}} \sqrt{\frac{2\rho {N_t}^2}{\tilde{f}(K,n)}}, \end{equation} where \begin{equation}\label{eq:11} \tilde{f}(K,n) = K\frac{2^n+1}{3(2^n-1)}+2\sqrt{K\frac{(2^n+1)(2^{2n}-4)}{ 22.5(2^n-1)^3}}.
\end{equation} Leveraging the fixed $\tau$ in (\ref{eq:10}), $\mathcal{P}_1$ can be formulated as the following MILP: \begin{align}\label{eq:43} &\mathcal{P}_2:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\boldsymbol{\Lambda}_i\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_{b,i} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& \tilde{{\bf x}}\in\{-1,1\}^{2N_t}, \end{align} where $\boldsymbol{\Lambda}_i$ and $\boldsymbol{\Lambda}_{b,i}$ denote the $i$-th row of $\boldsymbol{\Lambda}$ and $\boldsymbol{\Lambda}_b$, respectively. For the second scenario, $\tau$ is set as $\tau \stackrel{\Delta}{=} t$, which in general varies with the channel matrix ${\bf H}$. This choice is motivated by the fact that $t$ is a lower bound on the coefficients $\bar{\hbox{\boldmath$\alpha$}}$ and that, owing to the normalized $\bar {{\bf M}}$ and $\bar{{\bf b}}$, the coefficients directly indicate how far the noiseless signal is from a detection boundary. Accordingly, the optimization problem to find the decision parameter $\tau$ and the transmit vector ${\bf x}$ simultaneously is defined as \begin{align}\label{MILP4} &\mathcal{P}_3:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\frac{1}{1+\boldsymbol{\Lambda}_{b,i}}\boldsymbol{\Lambda}_i\tilde{{\bf x}} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& \tilde{{\bf x}}\in\{-1,1\}^{2N_t}. \end{align} {\color[rgb]{0,0,0.90} The proposed MILP problems $\mathcal{P}_2$ and $\mathcal{P}_3$ can be solved via the well-known branch-and-bound (B\&B) method as in \cite{landau2017branch}, which, when implemented correctly, identifies the precoding vector that is optimal under the respective criterion. Furthermore, the partial B\&B in \cite{li2020interference} is a near-optimal method. } However, its computational complexity is too expensive for a realistic implementation \cite{landau2017branch}.
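For reference, the channel-independent decision size in (\ref{eq:10})--(\ref{eq:11}) is straightforward to evaluate; a sketch (the argument names are ours; `rho` is the power $\rho$):

```python
import math

def f_tilde(K, n):
    """f~(K, n) from eq. (11)."""
    a = K * (2**n + 1) / (3 * (2**n - 1))
    b = 2 * math.sqrt(K * (2**n + 1) * (2**(2 * n) - 4) / (22.5 * (2**n - 1)**3))
    return a + b

def fixed_tau(Nt, K, n, rho=1.0):
    """Channel-independent decision size tau from eq. (10)."""
    return math.sqrt(2 / math.pi) / 6 * math.sqrt(2 * rho * Nt**2 / f_tilde(K, n))
```

Since $\tilde f(K,n)$ does not depend on $N_t$, this $\tau$ grows linearly in the number of BS antennas.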
In the following section, we present low-complexity methods to solve the optimization problems $\mathcal{P}_2$ and $\mathcal{P}_3$ efficiently. \begin{remark} Fig.~\ref{fig:4} verifies the proposed approach, where $10^4$ normalized noiseless signals, i.e., ${\bf H}{\bf x}$, are plotted with $N_t=8$, $K=2$, and $n=2$. The blue points depict the noiseless received signals when ZF precoding in \cite{peel2005vector} is used under the assumption of infinite resolution. In contrast, the red points show the noiseless received signals when the proposed 1-bit transmit vectors, obtained from the solutions of $\mathcal{P}_2$, are used. Fig.~\ref{fig:4} clearly shows that the red points provide more robustness than the blue points even with the low-resolution data converters. \hfill$\blacksquare$ \end{remark} \section{Low-Complexity Precoding Methods}\label{sec:low complexity methods} In this section, we present efficient algorithms to solve the MILP problems $\mathcal{P}_2$ and $\mathcal{P}_3$. We first solve the LP problem obtained by relaxing the integer constraint in $\mathcal{P}_3$ to a bounded interval: \begin{align}\label{eq:46} &\mathcal{P}_4:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\frac{1}{1+\boldsymbol{\Lambda}_{b,i}}\boldsymbol{\Lambda}_i\tilde{{\bf x}} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& -1\le\tilde{x}_j\le 1,\ j\in[1:2N_t]. \end{align} Similarly, $\mathcal{P}_2$ can be reformulated as the following LP problem with the relaxed constraint: \begin{align}\label{LP5} &\mathcal{P}_5:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\boldsymbol{\Lambda}_i\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_{b,i} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& -1\le\tilde{x}_j\le 1,\ j\in[1:2N_t], \end{align} where $\tau$ is given in (\ref{eq:10}).
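A sketch of solving the relaxed problem $\mathcal{P}_5$ with an off-the-shelf LP solver (here SciPy's HiGHS backend; the stacked variable $[\tilde{{\bf x}};t]$ and the sign flips are our own encoding, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def solve_relaxed_lp(Lam, Lam_b, tau):
    """P5: max t  s.t.  Lam_i @ x - tau*Lam_b_i >= t,  -1 <= x_j <= 1,  t >= 0.
    Stack z = [x; t]; linprog minimizes, so the objective is -t."""
    m, d = Lam.shape
    c = np.zeros(d + 1)
    c[-1] = -1.0
    # Lam_i @ x - t >= tau*Lam_b_i   <=>   -Lam_i @ x + t <= -tau*Lam_b_i
    A_ub = np.hstack([-Lam, np.ones((m, 1))])
    b_ub = -tau * Lam_b
    bounds = [(-1, 1)] * d + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:-1], res.x[-1]  # (x_LP, achieved margin t)
```

On a toy instance with $\boldsymbol{\Lambda}={\bf I}$ and $\boldsymbol{\Lambda}_b={\bf 0}$, the optimum pushes every relaxed entry to the box boundary, illustrating why most entries of $\tilde{{\bf x}}_{\rm LP}$ already satisfy the 1-bit constraint.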
The above problems can be efficiently solved via the simplex method \cite{luenberger1984linear}, and the corresponding relaxed LP solution is denoted as $\tilde{{\bf x}}_{\rm LP}$. We then refine the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$ via a greedy algorithm (see Algorithm 1) so that it satisfies the one-bit constraints. Starting from $\tilde{{\bf x}}_{\rm LP}$, the main steps of this second stage are: 1) choosing an antenna index $i$; 2) testing the possible values of that antenna, i.e., $\tilde{x}_i \in \{-1,1\}$; 3) calculating the set of scaling coefficients for each tested value of $\tilde{x}_i$; and 4) setting $\tilde{x}_i=j$, where $j\in\{-1,1\}$ is the value that maximizes the minimum element of the coefficient vector. \begin{algorithm}[t]\label{al:1} \caption{Greedy Algorithm} \textbf{Input:} $\tilde{{\bf x}}_{\rm LP}\in\mathbb{R}^{2N_t\times1}$, $\boldsymbol{\Lambda}\in\mathbb{R}^{2nK\times2N_t}$, $\boldsymbol{\Lambda}_b\in\mathbb{R}^{2nK\times1}$ and $\tau\in\mathbb{R}^{+}$. \textbf{Initialization:} $\tilde{{\bf x}}=\tilde{{\bf x}}_{\rm LP}$ (obtained by either $\mathcal{P}_4$ or $\mathcal{P}_5$).
\begin{algorithmic} \For{$i=1:2N_t$} \For{$j\in \{-1,1\}$} \State $\tilde{x}_i=j$ and $\bar{\boldsymbol{\alpha}}^{(j)}=\boldsymbol{\Lambda}\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_b$ \EndFor \State Update $\tilde{x}_i\leftarrow \operatornamewithlimits{argmax}_{j\in\{-1,1\}}\{\min(\bar{\boldsymbol{\alpha}}^{(j)})\}$ \EndFor \State \textbf{Output:} $\tilde{{\bf x}}\in\mathbb{R}^{2N_t\times1}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t]\label{al:2} \caption{\textcolor{blue}{Partial Greedy Algorithm}} \textbf{Input:} $\tilde{{\bf x}}_{\rm LP}\in\mathbb{R}^{2N_t\times1}$, $\boldsymbol{\Lambda}\in\mathbb{R}^{2nK\times2N_t}$, $\boldsymbol{\Lambda}_b\in\mathbb{R}^{2nK\times1}$, $\tau\in\mathbb{R}^{+}$, $\mathcal{O}\subseteq[1:2N_t]$, $\mathcal{Q}\subseteq[1:2N_t]$. \textbf{Initialization:} $\tilde{{\bf x}}=\tilde{{\bf x}}_{\rm LP}$ (obtained by either $\mathcal{P}_4$ or $\mathcal{P}_5$),\\ $\boldsymbol{\Lambda}_k=\tau\boldsymbol{\Lambda}_b-\boldsymbol{\Lambda}_{\mathcal{O}}\tilde{{\bf x}}_{\mathcal{O}}$. \begin{algorithmic} \For{$i=1:\text{card}(\mathcal{Q})$} \For{$j\in \{-1,1\}$} \State $\tilde{x}_{\mathcal{Q}_i}=j$ and $\bar{\boldsymbol{\alpha}}^{(j)}=\boldsymbol{\Lambda}_{\mathcal{Q}}\tilde{{\bf x}}_{\mathcal{Q}}- \boldsymbol{\Lambda}_k$ \EndFor \State Update $\tilde{x}_{\mathcal{Q}_i}\leftarrow \operatornamewithlimits{argmax}_{j\in\{-1,1\}}\{\min(\bar{\boldsymbol{\alpha}}^{(j)})\}$ \EndFor \State \textbf{Output:} $\tilde{{\bf x}}\in\mathbb{R}^{2N_t\times1}$ \end{algorithmic} \end{algorithm} \subsection{Greedy algorithms} The solution $\tilde{{\bf x}}_{\rm LP}$ of $\mathcal{P}_4$ or $\mathcal{P}_5$ is obtained via the simplex method \cite{luenberger1984linear} rather than the interior-point method \cite{den2012interior}. The simplex method finds an optimal solution that is an extreme point of the constraint set of $\mathcal{P}_4$ or $\mathcal{P}_5$.
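Algorithm 1 translates almost line by line into NumPy; a sketch with 0-based indexing (the function name is ours):

```python
import numpy as np

def full_greedy(x_lp, Lam, Lam_b, tau):
    """Algorithm 1: coordinate-wise rounding of the relaxed LP solution.
    Each entry is set to the value in {-1, +1} that maximizes the minimum
    feasibility coefficient min(Lam @ x - tau * Lam_b)."""
    x = x_lp.copy()
    for i in range(len(x)):
        best_j, best_margin = -1.0, -np.inf
        for j in (-1.0, 1.0):
            x[i] = j
            margin = np.min(Lam @ x - tau * Lam_b)
            if margin > best_margin:
                best_j, best_margin = j, margin
        x[i] = best_j  # keep the better of the two tested values
    return x
```

Each coordinate pass costs one matrix-vector product per tested value, which is the $2\times 2nK\times 2N_t$ count used in the complexity analysis below.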
Via numerical tests, we have confirmed that the extreme points are basic feasible solutions most of whose entries already satisfy the 1-bit constraint. A theoretical proof is non-trivial and left as future work. From the LP solution $\tilde{{\bf x}}_{\rm LP}$, the sets $\mathcal{O}$ and $\mathcal{Q}$ are obtained as \begin{align} \mathcal{O}&=\{i:\tilde{x}_i\in\{-1,1\},\ i\in[1:2N_t]\}\\ \mathcal{Q}&=\{i:-1<\tilde{x}_i<1,\ i\in[1:2N_t]\}. \end{align} To alleviate the computational complexity of the greedy algorithm, a partial greedy algorithm is suggested based on $\tilde{{\bf x}}_{\rm LP}$, i.e., the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$. Unlike the full greedy algorithm, the second stage of the partial greedy algorithm performs the greedy search only on the entries in $\mathcal{Q}$ while the elements in $\mathcal{O}$ are kept unchanged (see Algorithm 2, which uses Definitions~\ref{def3} and \ref{def4}). This diminishes the size of the search space, thereby reducing the complexity dramatically. \subsection{Computational complexity}\label{subsec:complexity} We compare the proposed algorithms with the existing methods in terms of the computational complexity measured by the total number of real-valued multiplications. We first evaluate the complexity of the optimal method based on an exhaustive search that explores all possible signal candidates $\tilde{{\bf x}}\in\{-1,1\}^{2N_t}$. Since each candidate requires $2nK\cdot2N_t$ operations to compute the coefficients in the feasibility condition in \eqref{eq:37}, the total complexity of the exhaustive search is computed as \begin{equation}\label{eq:49} \mathcal{X}_e=4nKN_t\cdot2^{2N_t}. \end{equation} We next focus on the computational complexity of the proposed algorithms, which consist of the LP solver and its greedy refinement.
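The partial greedy refinement (Algorithm 2) can be sketched in the same style, with the index sets $\mathcal{O}$ and $\mathcal{Q}$ extracted from $\tilde{{\bf x}}_{\rm LP}$ (the numerical tolerance is our assumption):

```python
import numpy as np

def partial_greedy(x_lp, Lam, Lam_b, tau, tol=1e-9):
    """Algorithm 2: greedy search only over Q = {i : |x_lp[i]| < 1},
    keeping the already-binary entries (set O) fixed."""
    x = x_lp.copy()
    O = [i for i, v in enumerate(x) if abs(abs(v) - 1.0) <= tol]
    Q = [i for i, v in enumerate(x) if abs(abs(v) - 1.0) > tol]
    Lam_k = tau * Lam_b - Lam[:, O] @ x[O]  # fixed contribution of set O
    for i in Q:
        margins = {}
        for j in (-1.0, 1.0):
            x[i] = j
            margins[j] = np.min(Lam[:, Q] @ x[Q] - Lam_k)
        x[i] = max(margins, key=margins.get)
    return x
```

Note that $\boldsymbol{\Lambda}_{\mathcal{Q}}\tilde{{\bf x}}_{\mathcal{Q}}-\boldsymbol{\Lambda}_k$ equals the full margin $\boldsymbol{\Lambda}\tilde{{\bf x}}-\tau\boldsymbol{\Lambda}_b$, so the refinement optimizes the same objective as Algorithm 1 over far fewer coordinates.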
For the LP solver, we use the simplex method, whose computational complexity for an LP with standard constraints (i.e., ${\bf A}{\bf x}\ge{\bf b}$, where ${\bf A}\in\mathbb{R}^{m\times n}$, ${\bf b}\in\mathbb{R}^{m\times 1},{\bf x}\in \mathbb{R}^{n\times1}$) is given as \cite{dantzig1998linear,den2012interior}: \begin{align} \mathcal{X}_{\rm LP} &= \mathbbm{i}_{\rm LP}\cdot\{(m+1)(n+1)+2 m\}\nonumber\\ &=\mathbbm{i}_{\rm LP}\cdot\{m\times n+3m+n+1\},\label{eq:52} \end{align} where $\mathbbm{i}_{\rm LP}$ is the number of iterations of the simplex method. \textcolor{blue}{The simplex method may visit all $2^{2N_t}$ vertices in the worst case. Fortunately, the expected complexity of the method is proven to be polynomial \cite{10.1145/990308.990310}.} Because of the randomness of $\boldsymbol{\Lambda}$ and $\boldsymbol{\Lambda}_b$, $\mathbbm{i}_{\rm LP}$ varies with the constraints; however, its average can be approximated as $\mathbbm{i}_{\rm LP} \approx \alpha \times m$, where $\exp{(\alpha)} = \log_2{(2+\frac{n}{m})}$ \cite{shu1993linear}. Note that the number of iterations depends only on the constraints of the LP, not on the objective function. Overall, the complexity of the simplex method in our system is represented as \begin{equation} \mathcal{X}_{\rm LP} = (\log\{\log_2{(2+\frac{n}{m})}\}\times m)\cdot(m \times n+3m+n+1). \label{eq:X_LP} \end{equation} Also, the quantized LP denotes the algorithm that directly quantizes the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$ into a 1-bit transmit vector using the {\it sign} function, i.e., ${\bf x}_q={\rm sign}({\bf x}_{\rm LP})$. Thus, its complexity is the same as that of the LP: \begin{equation} \mathcal{X}_q = \mathcal{X}_{\rm LP}.
\end{equation} Next, the complexity of the full-greedy method is obtained as \begin{align}\label{eq:53} \mathcal{X}_{\rm F-greedy}&=2\times2nK\times2N_t \end{align} based on the fact that the algorithm needs to sequentially search over $\tilde{x}_i\in\{-1,1\}, \forall i\in[1:2N_t]$ and each iteration requires $2nK$ operations. Similarly, the complexity of the partial-greedy algorithm is represented as \begin{equation} \label{eq:X_partial} \mathcal{X}_{\rm P-greedy}=2\times2nK\times \text{card}(\mathcal{Q}). \end{equation} Combining \eqref{eq:X_LP}, \eqref{eq:53}, and \eqref{eq:X_partial}, the total computational complexities of the proposed methods are computed as \begin{align} \mathcal{X}_{\rm pro1}&=\mathcal{X}_{\rm LP}+\mathcal{X}_{\rm F-greedy}\nonumber\\ &=\mathcal{X}_{\rm LP}+8nKN_t,\\ \mathcal{X}_{\rm pro2}&=\mathcal{X}_{\rm LP}+\mathcal{X}_{\rm P-greedy}\nonumber\\ &=\mathcal{X}_{\rm LP}+4nK\text{card}(\mathcal{Q}).\label{eq:54} \end{align} As a benchmark method, we consider a low-complexity method, the symbol-scaling method (SS) \cite{li2018massive} whose computational complexity is given as \begin{equation}\label{eq:50} \mathcal{X}_{\rm SS}=4N_t^2+24nKN_t-2nK. \end{equation} \textcolor{blue}{Also, computation complexities of SQUID, C1PO and IDE \cite{wang2018finite,castaneda20171,jacobsson2016nonlinear} are given as \begin{align} \mathcal{X}_{\rm SQUID} = &\mathbbm{i}_{\rm SQUID}(2 N_t K+N_t) \nonumber\\ &+\frac{1}{3}K^3+2N_t K(K+1)+K^2, \end{align} \begin{align} \mathcal{X}_{\rm C1PO} = \mathbbm{i}_{\rm C1PO}(2 N_t K^2+N_t)+\frac{1}{3}K^3+N_t K^2, \end{align} \begin{align} \mathcal{X}_{\rm IDE} = \mathbbm{i}_{\rm IDE}(N_t K+\frac{4}{3}K^3+5N_t K+3N_t+K)\nonumber \\ +N_t K^2, \end{align} where $\mathbbm{i}_{\rm x}$ denotes the number of iterations during the ${\rm x}$ algorithms, respectively. 
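The complexity expressions above are easy to evaluate numerically; for instance, the simplex cost in (\ref{eq:52}) and (\ref{eq:X_LP}) can be sketched as follows (the iteration count is the cited heuristic average, not an exact number):

```python
import math

def X_LP(m, n):
    """Simplex complexity: i_LP * ((m+1)(n+1) + 2m), with the average
    iteration estimate i_LP ~ m * ln(log2(2 + n/m))."""
    i_lp = m * math.log(math.log2(2 + n / m))
    return i_lp * ((m + 1) * (n + 1) + 2 * m)

# For the 4^2-QAM setting with Nt = 64, K = 8: m = 2nK = 32 constraints
# and n = 2Nt = 128 variables.
cost = X_LP(32, 128)
```

The per-iteration factor uses the algebraic identity $(m+1)(n+1)+2m = mn+3m+n+1$ from (\ref{eq:52}).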
For the simulations and TABLE~\ref{table:1}, we set $\mathbbm{i}_{\rm SQUID}=100,\ \mathbbm{i}_{\rm C1PO}=24,\ \mathbbm{i}_{\rm IDE}=100$, which are the settings provided in the respective papers \cite{wang2018finite,castaneda20171,jacobsson2016nonlinear}.} The P-BB and OPSU algorithms were proposed in \cite{li2020interference} as low-complexity methods for QAM constellations. Recall that the full B\&B has a prohibitive complexity for real equipment \cite{landau2017branch}. P-BB is also based on $\tilde{{\bf x}}_{\rm LP}$ and the corresponding sets $\mathcal{O}$ and $\mathcal{Q}$. In detail, P-BB fixes the entries of $\tilde{{\bf x}}_{\rm LP}$ indexed by $\mathcal{O}$ and reconstructs only the remaining entries (i.e., those indexed by $\mathcal{Q}$) via B\&B. Thereby, the P-BB algorithm significantly reduces the complexity compared to the full B\&B. Unfortunately, in the worst case, it needs to search the whole set $\{-1,1\}^{\text{card}(\mathcal{Q})}$, causing high complexity when there are many users. The OPSU algorithm is essentially a greedy algorithm based on B\&B. The complexity of P-BB ($\mathcal{X}_{\rm P-BB}$) cannot be specified exactly since the B\&B search tree varies case by case. Naturally, the computational complexity of OPSU ($\mathcal{X}_{\rm OPSU}$) is lower than $\mathcal{X}_{\rm P-BB}$ \cite{li2020interference}. The complexity of OPSU is represented as \begin{align} \mathcal{X}_{\rm OPSU} &= \mathcal{X}_{\rm LP}+2(2K+2K) \text{card}(\mathcal{Q}) \\ &=\mathcal{X}_{\rm LP}+8K\text{card}(\mathcal{Q}). \end{align} Consequently, we have \begin{equation}\label{com_pbb,opsu} \mathcal{X}_{\rm P-BB} \gg \mathcal{X}_{\rm OPSU}. \end{equation} \textcolor{blue}{The complexity analysis in TABLE~\ref{table:1} relies on assumptions such as a single alternation stage, the cardinality of the partial set, and the number of LP iterations.
Moreover, the pivoting of the simplex method depends on ${\rm rank}({\bf A})$, where ${\bf A}$ is the constraint matrix of the standard LP form. From \textit{Lemma~\ref{Lemma_1}} \cite{li2020interference}, ${\rm rank}(\boldsymbol{\Lambda})$, which equals the rank of the LP constraints for OPSU and P-BB, is $2K$. Therefore, in practice, there is not much difference between the complexity of OPSU and that of our proposed methods. For a realistic comparison, we thus demonstrate run-time simulations in Figs.~\ref{fig:runtime} and \ref{fig:runtime_detail}.} \begin{lemma}\label{Lemma_1} ${\rm rank}(\boldsymbol{\Lambda})=2K$ for a flat Rayleigh-fading channel ${\bf H}$ with probability 1. \textit{Proof:} See Appendix \ref{proof_Lemma_1}. \end{lemma} \section{Simulation Results}\label{sec:simulation} In this section, we verify the superiority of the proposed methods by comparing their symbol-error-rate (SER) performances with those of existing methods. The simulations include the following methods: \begin{itemize} \item Zero forcing {\bf(ZF)}: The conventional ZF method for unquantized MU-MIMO systems, which serves as the lower bound for the 1-bit quantized methods. \item Quantized zero forcing {\bf (QZF)}: The direct 1-bit quantization of ZF. \item Symbol scaling {\bf (SS)}: The low-complexity method proposed in \cite{li2018massive}. \item Quantized LP {\bf (QLP)}: The direct 1-bit quantization of the solution from $\mathcal{P}_4$ in Figs. \ref{fig:7}, \ref{fig:8} and $\mathcal{P}_5$ in Figs. \ref{fig:5}, \ref{fig:6}. \item Partial branch and bound {\bf (P-BB)}: The method for QAM constellations proposed in \cite{li2020interference}. \item Ordered partial sequential update {\bf (OPSU)}: The ordered greedy method based on B\&B proposed in \cite{li2020interference}. \textcolor{blue}{ \item Squared-infinity norm Douglas-Rachford splitting {\bf (SQUID)}: The well-known algorithm that has excellent performance in \cite{jacobsson2016nonlinear}.
\item Biconvex 1-bit precoding {\bf (C1PO, C2PO)}: The low-complexity algorithms for arbitrary constellations in \cite{castaneda20171}. \item Iterative discrete estimation {\bf (IDE)}: The iterative algorithm based on ADMM in \cite{wang2018finite}. \item ADMM-based precoding {\bf (ADMM-Leo)}: The efficient algorithm based on ADMM in \cite{chu2019efficient}. \item MSM method {\bf (MSM)}: The constant-envelope precoding for the MSM design criterion in \cite{jedda2018quantized}. \item MMSE with enhanced receive processing {\bf (MMSE-ERP)}: The MMSE-based precoding using alternating minimization in \cite{chen2019mmse}.} \item Full-greedy {\bf (F-greedy)}: The proposed method 1, the LP-based full greedy algorithm in Algorithm 1. \item Partial-greedy {\bf (P-greedy)}: The proposed method 2, the LP-based partial greedy algorithm in Algorithm 2. \end{itemize} Recall that ${\sf SNR}$ is defined as the per-antenna signal-to-noise ratio, i.e., $\rho/\sigma^2$. \begin{table*}[h] \caption{Comparison of the computational complexities for the simulated settings} \begin{center} \begin{tabular}{|c|c|c|} \hline Precoding methods & $4^2$-QAM, $N_t=64, K=8$& $4^3$-QAM, $N_t=128, K=8$\\ \hline Exhaustive search& $1.4\times10^{42}$ &$1.4\times10^{81}$ \\ \hline P-BB& $69481\ll\cdot\leq1.34\cdot10^8$ &$263030\ll\cdot\leq4\cdot10^8$ \\ \hline OPSU& 69481 & 263030 \\ \hline QLP& 131320 & 643100 \\ \hline F-greedy& 139510 &667680 \\ \hline P-greedy& 133240 &645980 \\ \hline SS& 40928 &139216 \\ \hline SQUID (iteration=100)& 118250 &236270 \\ \hline C1PO (iteration=24)& 31915 &62123 \\ \hline IDE (iteration=100)& 757960 &1446900 \\ \hline \end{tabular} \label{table:1} \end{center} \end{table*} In addition, we evaluate the computational complexities based on the analyses in Section \ref{subsec:complexity}.
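The closed-form entries of TABLE~\ref{table:1} can be reproduced directly from the expressions in Section~\ref{subsec:complexity}; for instance, the symbol-scaling complexity in (\ref{eq:50}):

```python
def X_SS(Nt, K, n):
    """Symbol-scaling complexity, eq. (50): 4*Nt^2 + 24*n*K*Nt - 2*n*K."""
    return 4 * Nt**2 + 24 * n * K * Nt - 2 * n * K

print(X_SS(64, 8, 2))   # -> 40928  (4^2-QAM, Nt = 64,  K = 8)
print(X_SS(128, 8, 3))  # -> 139216 (4^3-QAM, Nt = 128, K = 8)
```

Both values match the SS row of TABLE~\ref{table:1}.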
For the complexities of the OPSU and P-greedy algorithms in TABLE~\ref{table:1}, we use the average $\text{card}(\mathcal{Q})$ obtained from $10^5$ simulations: it is $14.988$ in the setting ($4^2$-QAM, $N_t=64$, $K=8$) and $14.998$ in the setting ($4^3$-QAM, $N_t=128$, $K=8$). TABLE~\ref{table:1} shows that the computational complexities of the proposed methods are moderate \cite{li20201bit}. The following two scenarios are considered according to the overhead required to convey the decision-region size $\tau$: \begin{itemize} \item[] \textbf{Scenario i)} $\tau$ is chosen regardless of the channel matrix ${\bf H}$, i.e., $\tau$ is determined by (\ref{eq:10}). The corresponding results are provided in Figs. \ref{fig:5} and \ref{fig:6}. Here, $\tilde{{\bf x}}_{\rm LP}$ and $\tau$, the inputs of the proposed methods (i.e., the F-greedy and P-greedy algorithms), are determined from $\mathcal{P}_5$. \\ \item[] \textbf{Scenario ii)} $\tau$ is adaptively determined by each scheme; the corresponding results are provided in Figs. \ref{fig:7} and \ref{fig:8}. In this scenario, the solutions of $\mathcal{P}_4$, namely $\tilde{{\bf x}}_{\rm LP}$ and $\tau(\stackrel{\Delta}{=} t)$, are fed to the proposed algorithms. Although $\tau$ must be transmitted whenever the channel ${\bf H}$ changes, the performance gain is attractive.
\end{itemize} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure_16qam,64,8,fixed_tau.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=64, $K$=8, and $4^2$-QAM with a fixed $\tau$.} \label{fig:5} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure_64qam,128,8,fixed_tau.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=128, $K$=8, and $4^3$-QAM with a fixed $\tau$.} \label{fig:6} \end{figure} Fig.~\ref{fig:5} shows the SER performance comparisons of the above algorithms for downlink MU-MISO systems with 1-bit DACs, where $N_t=64$, $K=8$, and $4^2$-QAM. Without 1-bit constraints, the ZF method provides optimal performance with infinite-resolution DACs, which can be interpreted as a lower bound for the 1-bit constrained methods. Note that, in all simulation settings, we cannot evaluate the performance of the MILP due to its unmanageable complexity. At high SNR, we observe that all 1-bit precoding methods, including the quantized LP, suffer from a severe error floor except the proposed methods. To overcome the error floor, the F-greedy and P-greedy algorithms determine the entries of ${\bf x}_{\rm LP}$ so that they belong to $\{-1,1\}$ while preserving feasibility and robustness. Fig.~\ref{fig:6} shows the SER performance comparisons for the configuration of $N_t=128$, $K=8$, and $4^3$-QAM, showing a similar trend. By construction, our formulation keeps every candidate inside its decision region via the intersections of the $n$ shifted base regions; consequently, the resulting MILP and LP problems involve only inequality constraints and no equality constraints.
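For reference, the QZF baseline used in these comparisons admits a one-line sketch (the per-antenna power normalization shown here is our own assumption):

```python
import numpy as np

def quantized_zf(H, s):
    """1-bit quantized ZF: compute the infinite-resolution ZF signal
    x_zf = H^H (H H^H)^{-1} s, quantize each real dimension to +-1,
    then scale so that the total transmit power is fixed."""
    x_zf = H.conj().T @ np.linalg.solve(H @ H.conj().T, s)
    return (np.sign(x_zf.real) + 1j * np.sign(x_zf.imag)) / np.sqrt(2 * H.shape[1])
```

Because quantization destroys the exact channel inversion, the residual multi-user interference does not vanish with SNR, which is the source of the error floor observed for QZF.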
\begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure_16qam,64,8,adaptive_tau.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=64, $K$=8, and $4^2$-QAM with adaptive $\tau$.} \label{fig:7} \end{figure} Fig.~\ref{fig:7} shows the performance comparisons for the MU-MISO case where $N_t=64$, $K=8$, and $4^2$-QAM with adaptive $\tau$. We observe that the performances of the proposed methods are the closest to the optimal one; moreover, the deviation between the full greedy and partial greedy methods is negligible, which means that $\mathcal{P}_4$ provides a near-optimal $\tau$ and the refinement of ${\bf x}_{\rm{LP}}$ over $\mathcal{Q}$ is sufficient. At high SNR, the proposed methods show a larger performance gain over P-BB, a near-optimal method \cite{li2020interference}, which further validates that the $\tau$ obtained from $\mathcal{P}_4$ is properly chosen. \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure_64qam,128,8,adaptive_tau.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=128, $K$=8, and $4^3$-QAM with adaptive $\tau$.} \label{fig:8} \end{figure} Fig.~\ref{fig:8} shows a similar trend for the system where $N_t=128$, $K=8$, and $4^3$-QAM with adaptive $\tau$. Note that P-BB and OPSU find $\tau$ by alternating updates (restricted to a single alternation in our experiments), whereas the proposed method fixes $\tau$ from $\mathcal{P}_4$ in one shot. The performances in Figs.~\ref{fig:7} and \ref{fig:8} show that the $\tau$ found at once by $\mathcal{P}_4$ is reasonable.
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure_runtime.pdf} \caption{Run-time versus the number of BS antennas for precoding methods, where $K$=8, and $4^2$-QAM with adaptive $\tau$.} \label{fig:runtime} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure_runtime_detail.pdf} \caption{Run-time versus the number of BS antennas for the QLP, OPSU, F-greedy, and P-greedy algorithms, where $K$=8, and $4^2$-QAM with adaptive $\tau$.} \label{fig:runtime_detail} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure_smallscaleMIMO.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=8, $K$=2, and $4^2$-QAM with adaptive $\tau$.} \label{fig:smallscale} \end{figure} \textcolor{blue}{Fig.~\ref{fig:smallscale} shows the performance of IP, the solution of $\mathcal{P}_3$, in small-scale MIMO. The other algorithms perform poorly in this setting, whereas the proposed methods achieve suitable performance and IP attains near-optimal performance. Unfortunately, the high complexity of the integer program, mainly caused by a large-scale antenna array, prevents its simulation in the other settings. The performance loss of IP is caused by the adaptive $\tau$: the $\tau$ obtained by IP is generally smaller than that of ZF with infinite resolution, owing to the restricted candidate set under 1-bit quantization. We already provided the computational complexity of the considered methods in Section~\ref{subsec:complexity}. However, those estimates rely on several assumptions. For a concrete comparison, we measure the run-times of the methods on the same machine over $10^4$ simulations. Figs.~\ref{fig:runtime} and \ref{fig:runtime_detail} show that the proposed methods have short run-times. In Table~\ref{table:1}, the complexity of OPSU is almost half that of F-greedy; in practice, however, their run-times are almost the same. In addition, unlike the other methods, the run-time of LP depends mainly on the number of users.
We thus expect that the performance loss from low resolution can be compensated by more BS antennas without a complexity penalty.} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure_estimation_error.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs under channel estimation error $\epsilon$, where $N_t$=64, $K$=8, and $4^2$-QAM with adaptive $\tau$ at ${\sf SNR}=10$\,dB.} \label{fig:channel_error} \end{figure} \textcolor{blue}{Fig.~\ref{fig:runtime} highlights the advantage of our algorithms in terms of computational complexity. The run-time of each algorithm is averaged over $10^4$ simulations. The run-times of our algorithms are attractive for large-scale antenna arrays, i.e., massive MIMO systems. From the performance figures and Fig.~\ref{fig:runtime}, the proposed algorithms perform on par with the near-optimal P-BB while requiring about 10 times less run-time. Furthermore, we exploit the simplex method, whose complexity depends mainly on the number of users, so the run-times of the LP-based methods barely increase with the number of BS antennas. Hence, one can consider compensating the performance loss by deploying more antennas at the BS instead of increasing the power of all transmit antennas. } \textcolor{blue}{ We further examine the robustness of our algorithms to channel estimation errors. At the BS, we model the imperfect CSI as \begin{equation} {\bf H}_{e} = \sqrt{1-\epsilon}{\bf H} + \sqrt{\epsilon}{\bf E}, \end{equation} where $\epsilon\in[0,1]$ and ${\bf E}\in\mathbbm{C}^{K\times N_t}$. Thus, $\epsilon=0$, $\epsilon\in(0,1)$, and $\epsilon=1$ correspond to perfect CSI, partial CSI, and no CSI, respectively. In Fig.~\ref{fig:channel_error}, the proposed algorithms still achieve near-optimal performance at ${\sf SNR}=10$\,{\rm dB} under imperfect CSI. } \section{Conclusion}\label{sec:conclusion} We proposed a method for constructing 1-bit transmit signal vectors for downlink MU-MISO systems with QAM constellations.
In this regard, we derived the linear feasibility constraints which ensure that each user can recover the desired message successfully, and transformed them into a cascaded matrix form. From this, we constructed a mixed integer linear programming (MILP) problem whose solution generates a 1-bit transmit vector that satisfies the feasibility conditions and guarantees robustness to noise. To address the computational complexity of MILP, we proposed an LP-relaxed algorithm consisting of two steps: 1) solving the relaxed LP; 2) refining the LP solution to fit the 1-bit constraint. Via simulation results, we demonstrated that the proposed methods achieve better performance with lower complexity than the benchmarks. One promising future direction is to further reduce the complexity of the proposed method without performance loss. \begin{appendices} \section{Proof of Lemma 1}\label{proof_Lemma_1} Let $\mathcal{W}$ denote the space spanned by the rows of $\bar{{\bf H}}$. Any ${\bf w}^{{\sf T}}\in\mathcal{W}$ is represented as a linear combination of the rows of $\bar{{\bf H}}$: \begin{equation}\label{eq_appA:1} {\bf w}^{{\sf T}} = {\bf v}^{{\sf T}}\bar{{\bf H}}, \end{equation} where ${\bf v}^{{\sf T}}$ is a $1\times2nK$ vector. $\bar{{\bf M}}$ is full rank because it is a nonsingular diagonal matrix by (\ref{eq:28}). It has $2nK$ linearly independent rows which span a $2nK$-dimensional space. Therefore, there exists a $1\times2nK$ vector ${\bf u}^{{\sf T}}$ such that \begin{equation}\label{eq_appA:2} {\bf v}^{{\sf T}} = {\bf u}^{{\sf T}}\bar{{\bf M}}.
\end{equation} Then, by (\ref{eq_appA:1}) and (\ref{eq_appA:2}), ${\bf w}^{{\sf T}}$ is represented as \begin{equation} {\bf w}^{{\sf T}} = {\bf v}^{{\sf T}}\bar{{\bf H}} = ({\bf u}^{{\sf T}}\bar{{\bf M}})\bar{{\bf H}} = {\bf u}^{{\sf T}}(\bar{{\bf M}}\bar{{\bf H}}). \end{equation} That is, ${\bf w}^{{\sf T}}$ is a linear combination of the rows of $\bar{{\bf M}}\bar{{\bf H}}$ as well as a linear combination of the rows of $\bar{{\bf H}}$. By the definition of rank, \begin{equation} \text{rank}(\boldsymbol{\Lambda}) = \text{rank}(\bar{{\bf M}}\bar{{\bf H}}) = \text{rank}(\bar{{\bf H}}). \end{equation} ${\bf H}$ is a frequency-flat Rayleigh fading channel matrix that has full rank, and the real and imaginary parts of ${\bf H}$ are i.i.d. We can see that $\text{rank}(\bar{{\bf H}}) = 2K$ based on (\ref{eq:26}), (\ref{eq:32}), and $\text{rank}(\tilde{{\bf H}}) = 2K$. Overall, the proof of Lemma~\ref{Lemma_1} is complete since the rank of $\boldsymbol{\Lambda}$ is $2K$. \end{appendices} \section*{Acknowledgment} This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1A2C1099836). \bibliographystyle{IEEEtran} \section{Introduction}\label{sec:intro} In recent years, massive multiple-input single-output (MISO) systems have been actively investigated for fifth-generation (5G) and future wireless communication systems due to their significant gain in spectral efficiency \cite{marzetta2010noncooperative}. However, because of the large number of antennas, the high hardware cost and considerable power consumption become key challenges. In massive MISO systems, the use of cheap and power-efficient building blocks, e.g., digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), has attracted considerable interest as a promising low-power solution \cite{spencer2004introduction,larsson2014massive}.
For the same clock frequency and resolution, DACs are known to consume less power than ADCs, and research on low-resolution DACs has therefore often been neglected. However, in downlink multiuser massive MISO systems, the number of transmit antennas at the base station (BS) is much larger than the number of receive antennas. In this context, the power consumption, cost, and computational complexity of the DACs must be taken into account. In downlink systems, conventional linear precoding methods such as zero-forcing (ZF) and regularized ZF (RZF) achieve near-optimal performance effectively \cite{peel2005vector}. These linear precoding schemes have low complexity and are widely used in wireless communication with high-resolution DACs (e.g., 12 bits). In practice, however, massive MISO must be built with low-cost DACs, since the power consumption due to quantization increases exponentially with the resolution. Various non-linear precoding methods for phase-shift-keying (PSK) constellations have been actively studied for such systems, which can achieve good performance with low complexity \cite{park2019construction,li2018massive,castaneda20171,landau2017branch}. However, the above methods cannot be straightforwardly applied to the more practical quadrature-amplitude-modulation (QAM) constellations, since in QAM constellations the decision regions are bounded. Recently, various precoding methods for QAM constellations have been investigated in \cite{amor201716,jacobsson2016nonlinear,sohrabi2018one,jacobsson2017quantized,jedda2018quantized}. Leveraging the fact that 16-QAM symbols can be obtained as the superposition of two QPSK symbols, the authors in \cite{amor201716} formulated an optimization problem using gradient projection to obtain 1-bit transmit vectors, and stored them in a look-up table per channel coherence block.
In \cite{jacobsson2016nonlinear}, non-linear 1-bit precoding methods for massive MIMO with QAM constellations have been proposed, enabled by semi-definite relaxation and $\ell_\infty$-norm relaxation. However, these methods do not provide an elegant complexity-performance trade-off. Thus, it is necessary to investigate a precoding method with attractive performance and low complexity under QAM constellations, which is the major subject of this paper. In this paper, we present a novel direction to construct a 1-bit transmit signal vector for a downlink MU-MISO system with 1-bit DACs. Toward this, our contributions are summarized as follows: \begin{itemize} \item We first derive the so-called {\em feasibility condition} which ensures that each user's noiseless observation is placed in the desired decision region. That is, if a transmit signal is constructed to satisfy the feasibility condition, each user can recover the desired signal successfully at high signal-to-noise ratio (SNR). \item Incorporating robustness to additive noise into the feasibility condition, we show that our problem of constructing a 1-bit transmit signal vector can be formulated as a mixed integer linear programming (MILP) problem. This problem can be optimally solved via an LP relaxation and branch-and-bound. \item Furthermore, we propose a low-complexity algorithm to solve the MILP by introducing a novel greedy algorithm, which almost achieves the optimal performance with much lower computational complexity. \item Via simulation results, we demonstrate that the proposed method can outperform the state-of-the-art methods. Furthermore, the complexity comparisons of the proposed and existing methods demonstrate the potential of the proposed direction and algorithm. \end{itemize} This paper is organized as follows. In Section~\ref{sec:pre}, we provide useful notations and definitions, and describe the system model.
In Section~\ref{sec:structure}, we propose an efficient method to construct a transmit signal vector for downlink MU-MISO systems with 1-bit DACs. Section~\ref{sec:simulation} provides simulation results. Conclusions are provided in Section~\ref{sec:conclusion}. \section{Preliminaries}\label{sec:pre} In this section, we provide useful notations used throughout the paper, and then describe the system model. \subsection{Notation} The lowercase and uppercase bold letters represent column vectors and matrices, respectively. The symbol $(\cdot)^{{\sf T}}$ denotes the transpose of a vector or a matrix. For any vector $\bf x$, $x_i$ represents the $i$-th component of $\bf x$. Let $\left[a:b\right]\stackrel{\Delta}{=}\{a,a+1,\ldots,b\}$ for any integers $a$ and $b$ with $a<b$. The notation $\text{card}(\mathcal{U})$ denotes the number of elements of a finite set $\mathcal{U}$. $\Re({\bf a})$ and $\Im({\bf a})$ represent the real and imaginary parts of a complex vector ${\bf a} \in \mbox{\bb C}$, respectively. For any $x\in\mathbb{C}$, we let \begin{equation}\label{eq:1} g(x)=[\Re(x), \Im(x)]^{{\sf T}}, \end{equation} and the inverse mapping of $g$ is denoted as $g^{-1}$. Also, $g$ and $g^{-1}$ operate component-wise, i.e., $g([x_1,x_2]^{{\sf T}})=[\Re(x_1),\Im(x_1),\Re(x_2),\Im(x_2)]^{{\sf T}}$. For a complex value $x$, its real-valued matrix expansion $\phi(x)$ is defined as \begin{equation}\label{eq:2} \phi(x)=\left[{\begin{array}{cc} \Re(x)& -\Im(x)\\ \Im(x)& \Re(x) \end{array}}\right]. \end{equation} As an extension to a vector, the operation $\phi$ is applied element-wise, i.e., \begin{equation} \phi([x_1,x_2]^{{\sf T}})=[\phi(x_1)^{{\sf T}},\phi(x_2)^{{\sf T}}]^{{\sf T}}. \end{equation} \subsection{System Model} We consider a downlink MU-MISO system where a BS equipped with $N_t \gg K$ transmit antennas serves $K$ single-antenna users.
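The mappings $g(\cdot)$ and $\phi(\cdot)$ above translate directly into code. The following minimal numpy sketch (function names are ours) uses the interleaved real/imaginary layout of $g$ and verifies the key identity $g({\bf H}{\bf x})=\phi({\bf H})g({\bf x})$ that underlies the real-valued system model:

```python
import numpy as np

def g(x):
    """Component-wise real expansion: C^n -> R^{2n}, [Re x1, Im x1, ...]."""
    x = np.atleast_1d(np.asarray(x, dtype=complex))
    out = np.empty(2 * x.size)
    out[0::2] = x.real
    out[1::2] = x.imag
    return out

def phi(H):
    """Real-valued 2x2-block expansion of a complex matrix (element-wise)."""
    H = np.atleast_2d(np.asarray(H, dtype=complex))
    m, n = H.shape
    out = np.empty((2 * m, 2 * n))
    out[0::2, 0::2] = H.real
    out[0::2, 1::2] = -H.imag
    out[1::2, 0::2] = H.imag
    out[1::2, 1::2] = H.real
    return out
```

Each $2\times2$ block of `phi(H)` is the matrix $\phi(h_{ij})$ of (2), so complex matrix-vector multiplication commutes with the real expansion.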
As a natural extension of our earlier work in \cite{park2019construction}, which focuses on PSK constellations only, this paper considers $4^n$-QAM with $n\geq 2$. Let ${\cal C}$ denote the set of constellation points of $4^n$-QAM. Also, letting ${\bf x}=[x_1,\ldots,x_{N_t}]^{{\sf T}}$ be a transmit vector at the BS, the received signal vector ${\bf y}\in\mbox{\bb C}^{K}$ at the $K$ users is given as \begin{equation}\label{eq:3} {\bf y} = \sqrt{\rho}{\bf H}{\bf x}+{\bf z}, \end{equation} where ${\bf H}\in\mathbb{C}^{K\times N_t}$ denotes the frequency-flat Rayleigh fading channel, whose components follow a complex Gaussian distribution with zero mean and unit variance, and ${\bf z}\in\mathbb{C}^{K\times 1}$ denotes the additive Gaussian noise vector whose elements are distributed as complex Gaussian random variables with zero mean and unit variance, i.e., $z_i \sim \mathcal{CN}(0,\sigma^2=1)$. The signal-to-noise ratio (SNR) is defined as ${\sf SNR}=\rho/\sigma^2$, where $\rho$ denotes the per-antenna power constraint. Throughout the paper, it is assumed that the channel matrix ${\bf H}$ is perfectly known at the BS. Given a message vector ${\bf s}\in\mathcal{C}^K$, the BS needs to construct a transmit vector ${\bf x}$ such that each user $k$ can recover the desired message $s_k$ successfully. Toward this, our goal is to construct such a precoding function ${\cal P}$: \begin{equation}\label{eq:4} {\bf x}=\mathcal{P}({\bf H},{\bf s}), \end{equation} which produces a transmit vector ${\bf x}$ from the channel matrix ${\bf H}$ and the message vector ${\bf s}$. Focusing on the impact of 1-bit DACs on the downlink precoding, we assume that the BS is equipped with 1-bit DACs while all $K$ users are equipped with infinite-resolution ADCs. To isolate the performance impact of 1-bit DACs, each component $x_i$ of the transmit vector ${\bf x}$ is restricted as \begin{equation}\label{eq:5} \Re(x_i),\ \Im(x_i)\in \{-1,1\}.
\end{equation} Since this restriction causes a severe non-linearity, conventional precoding methods, developed by exploiting linearity, cannot ensure an attractive performance. The objective of this paper is to construct a precoding function $\mathcal{P}({\bf H},{\bf s})$ with a manageable complexity that is suitable for the considered non-linear MISO channels. \section{The Proposed Transmit-Signal Vectors}\label{sec:structure} We formulate an optimization problem to construct a transmit vector ${\bf x}$ under $4^n$-QAM. In particular, this problem can be represented as a manageable mixed integer linear programming (MILP) problem. We remark that our earlier work on PSK \cite{park2019construction} cannot be employed here because the decision regions of $4^n$-QAM are bounded (see Fig.~\ref{fig:2}). For ease of exposition, an equivalent real-valued expression is used as follows: \begin{equation}\label{eq:9} \tilde{{\bf y}} = \sqrt{\rho}\tilde{{\bf H}}\tilde{{\bf x}}+\tilde{{\bf z}}, \end{equation} where $\tilde{{\bf y}}=g({\bf y})$, $\tilde{{\bf x}}=g({\bf x})$, $\tilde{{\bf z}}=g({\bf z})$, and $\tilde{{\bf H}}=\phi({\bf H})\in\mathbb{R}^{2K\times 2N_t}$ denotes the real-valued expansion matrix of ${\bf H}$. Before explaining the main result, we provide useful definitions which are used throughout the paper. \vspace{0.1cm} \begin{definition}\label{def1}{\em (Decision region)} For any constellation point $s\in\mathcal{C}$, the decision region of $s$ is defined as \begin{equation}\label{eq:6} \mathcal{R}(s) \triangleq \left\{y\in\mathbb{C}:|y-s| \le \min_{c\in\mathcal{C}:c\ne s}|y-c|\right\}. \end{equation} This region means that a received signal $y \in\mathcal{R}(s)$ is detected as $s$. In addition, the real-valued decision region is given as \begin{equation} \tilde{\mathcal{R}}(s)=g\left(\mathcal{R}(s)\right).
\end{equation}\hfill$\blacksquare$ \end{definition} \vspace{0.1cm} \begin{definition}\label{def2}{\em (Base region)} A base region $\tilde{\mathcal{B}}_i\subseteq \mathbb{R}^2, \forall i\in\left[0:3\right]$, is defined as \begin{equation}\label{eq:7} \tilde{\mathcal{B}}_{i} \triangleq \{\alpha_{i}^1{\bf m}_{i}^1+\alpha_{i}^2{\bf m}_{i}^2: \alpha_{i}^1,\alpha_{i}^2>0\}, \end{equation} where ${\bf m}_i^\ell$ represents a basis vector with \begin{equation}\label{eq:8} {\bf m}_{i}^{\ell} = \begin{cases} g\left(\sqrt{2}\cos(\frac{\pi}{4}(1+2i))\right) & \mbox{if }\ell=1 \\ g\left(j\sqrt{2}\sin(\frac{\pi}{4}(1+2i))\right) & \mbox{if }\ell=2. \end{cases} \end{equation}\hfill$\blacksquare$ \end{definition} \vspace{0.1cm} \begin{definition}\label{def3}{\em (Partial matrix)} A partial matrix ${\bf A}_{\mathcal{U}}\in\mathbb{R}^{m\times \text{card}(\mathcal{U})}$ is defined as \begin{equation} {\bf A}_{\mathcal{U}}\triangleq[({\bf A}^{{\sf T}})_{u_1},({\bf A}^{{\sf T}})_{u_2},\ldots,({\bf A}^{{\sf T}})_{u_{\text{card}(\mathcal{U})}}]^{{\sf T}}, \end{equation} where ${\bf A}\in\mathbb{R}^{m\times n}$, $\mathcal{U}\triangleq \{u_1,u_2,\ldots,u_{\text{card}(\mathcal{U})}\} \subseteq[1:n]$, and $({\bf A}^{{\sf T}})_k$ denotes the $k$-th row of ${\bf A}^{{\sf T}}$. \end{definition} \vspace{0.1cm} \begin{definition}\label{def4}{\em (Partial vector)} A partial vector ${\bf x}_{\mathcal{U}}\in\mathbb{R}^{\text{card}(\mathcal{U})\times 1}$ is defined as \begin{equation} {\bf x}_{\mathcal{U}}\triangleq[x_{u_1},x_{u_2},\ldots,x_{u_\text{card}(\mathcal{U})}]^{{\sf T}}, \end{equation} where ${\bf x}\in\mathbb{R}^{n\times 1}$, $\mathcal{U}\triangleq \{u_1,u_2,\ldots,u_{\text{card}(\mathcal{U})}\} \subseteq[1:n]$.
\end{definition} \vspace{0.1cm} \begin{figure}[!t] \begin{center} \includegraphics[width=0.35\textwidth]{figure2.pdf} \end{center} \caption{Description of the decision regions for $4^2$-QAM with a parameter $\tau$.} \label{fig:2} \end{figure} In the sequel, the decision region in Definition~\ref{def1} will be represented by the intersection of the $n$ base regions in Definition~\ref{def2} with proper shift values. This representation makes it easier to formulate an optimization problem. First of all, we need to decide the size of the bounded decision regions, i.e., the parameter $\tau$ in Fig.~\ref{fig:2} should be determined. Note that $2\tau=d_{\rm min}$ denotes the minimum Euclidean distance of the given constellation points. In PSK, $\tau$ is always infinite regardless of the channel matrix, whereas in $4^n$-QAM it should be well-optimized. Specifically, $\tau$ should be chosen as large as possible to ensure a reliable performance, provided that the noiseless received signal belongs to the corresponding decision region at all $K$ users. From now on, we will explain how to construct a transmit-signal vector ${\bf x}$ for a decision size $\tau$. Throughout the paper, it is assumed that all $K$ users have the identical decision size $\tau$ to keep the optimization tractable. Given $4^n$-QAM, each symbol is indexed by a length-$n$ quaternary vector $(i_1,\ldots,i_n)$ with $i_j \in [0:3]$, i.e., \begin{equation} \mathcal{C}=\left\{s_{(0,\ldots,0)}^{(n)},s_{(0,\ldots,1)}^{(n)},\ldots,s_{(3,\ldots,3)}^{(n)}\right\}. \end{equation} Each constellation point can be represented as a linear combination of the $n$ basis symbols $c_i$ as \begin{equation}\label{eq:13} s_{(i_1,\ldots, i_n)}^{(n)} \triangleq \tau \sum_{l=1}^n 2^{n-l} c_{i_l} = \tau {s'}_{(i_1,\ldots, i_n)}^{(n)}.
\end{equation} Here, the normalized constellation point and the basis symbols are defined as \begin{equation}\label{eq:12} {s'}_{(i_1,\ldots, i_n)}^{(n)} \triangleq \sum_{l=1}^n 2^{n-l} c_{i_l}, \end{equation} where the four fundamental constellation points $c_i$ are given as \begin{equation} c_i \triangleq \sqrt{2}\left\{{\cos\left({\frac{\pi}{4}} (1+2i)\right)+j\sin\left({\frac{\pi}{4}} (1+2i)\right)}\right\}, \end{equation} for $i\in[0:3]$. For ease of expression, we represent the constellation $\mathcal{C}$ and the corresponding decision regions $\mathcal{R}(s^{(n)}_{(i_1,\ldots, i_n)})$ in the corresponding real-valued forms: \begin{equation}\label{eq:14} \tilde{\mathcal{C}} = \left\{g(s_{(0,\ldots,0)}^{(n)}),g(s_{(0,\ldots,1)}^{(n)}),\ldots, g(s_{(3,\ldots, 3)}^{(n)})\right\}, \end{equation} and \begin{equation}\label{eq:15} \tilde{\mathcal{R}}\left(s^{(n)}_{(i_1, \ldots, i_n)}\right) = g\left(\mathcal{R}\left(s^{(n)}_{(i_1, \ldots ,i_n)}\right)\right). \end{equation} A transmit vector ${\bf x}$ should ensure that the noiseless received signal at the $k$-th user, i.e., $r_k={\bf h}_k^{\sf T}{\bf x}$, is placed in the corresponding decision region for every user $k\in[1:K]$. This necessary condition implies that ${\bf x}$ should satisfy \begin{align}\label{eq:region_constraint} g(r_k) &\in \tilde{\mathcal{R}}\left(s^{(n)}_{(\mu_{k,1},\ldots, \mu_{k,n})}\right), \end{align} for $k \in [1:K]$, where $r_k={\bf h}_k^{\sf T}{\bf x}$ denotes the noiseless part of the received signal $y_k ={\bf h}_k^{\sf T}{\bf x} +z_k$. \vspace{0.2cm} \noindent{\bf Feasibility condition:} The condition in (\ref{eq:region_constraint}) can be rewritten in a way that the optimization problem can be interpreted as an LP problem.
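As a quick numerical sanity check on (\ref{eq:13}), the constellation can be generated directly from the four basis symbols. The sketch below (pure Python, our own naming) reproduces the standard $\{\pm1,\pm3\}^2$ grid for $4^2$-QAM with $\tau=1$, confirming that $d_{\rm min}=2\tau$ holds:

```python
import itertools
import math

def basis_symbol(i):
    """Fundamental QPSK points c_i = sqrt(2) * exp(j*pi/4*(1+2i)), i in 0..3."""
    ang = math.pi / 4 * (1 + 2 * i)
    return math.sqrt(2) * complex(math.cos(ang), math.sin(ang))

def qam_constellation(n, tau):
    """All 4^n-QAM points s = tau * sum_l 2^(n-l) * c_{i_l}, cf. (13)."""
    pts = []
    for idx in itertools.product(range(4), repeat=n):
        s = sum(2 ** (n - l - 1) * basis_symbol(i) for l, i in enumerate(idx))
        pts.append(tau * s)
    return pts
```

For $n=2$ and $\tau=1$ this yields 16 distinct points with minimum Euclidean distance $2$, matching $2\tau=d_{\rm min}$.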
The decision region in (\ref{eq:region_constraint}) can be expressed as the intersection of the $n$ shifted base regions in Definition~\ref{def2}: \begin{align}\label{eq:18} \tilde{\mathcal{R}}\left(s^{(n)}_{(i_1, \ldots, i_n)}\right)\triangleq \tilde{\mathcal{B}}_{i_1}\bigcap_{l=2}^n \left\{\tilde{\mathcal{B}}_{i_l}+2^{n-(l-1)}g\left(s_{(i_1,\ldots, i_{l-1})}^{(l-1)}\right)\right\}, \end{align}where the shifted base region with a bias $c$ is defined as \begin{equation}\label{eq:19} \tilde{\mathcal{B}}_{i}+c \triangleq \{\alpha_{i}^1{\bf m}_{i}^1+\alpha_{i}^2{\bf m}_{i}^2+c: \alpha_{i}^{1},\alpha_{i}^{2}>0\}. \end{equation} Then, the condition in (\ref{eq:region_constraint}) holds if $g(r_k)$ can be represented by the following $n$ linear equations with positive coefficients, i.e., \begin{align}\label{eq:20} g(r_k) &= \alpha_{k,1}^1{\bf m}_{\mu_{k,1}}^1+\alpha_{k,1}^2{\bf m}_{\mu_{k,1}}^2+2^{n}g(0) \\ & = \alpha_{k,2}^1{\bf m}_{\mu_{k,2}}^1+\alpha_{k,2}^2{\bf m}_{\mu_{k,2}}^2+2^{n-1}g(s_{(\mu_{k,1})}^{(1)}) \nonumber \\ &\;\; \vdots \nonumber \\ & = \alpha_{k,n}^1{\bf m}_{\mu_{k,n}}^1+\alpha_{k,n}^2{\bf m}_{\mu_{k,n}}^2+2^1g(s_{(\mu_{k,1},\ldots ,\mu_{k,n-1})}^{(n-1)}), \nonumber \end{align} for some $\alpha_{k,1}^1,\alpha_{k,1}^2, \ldots,\alpha_{k,n}^1,\alpha_{k,n}^2 > 0$. The condition in (\ref{eq:20}) is called a {\em feasibility} condition as it guarantees that $r_k\in \mathcal{R}\left(s^{(n)}_{(\mu_{k,1},\ldots,\mu_{k,n})}\right)$ for $k\in [1:K]$. In other words, if this condition is satisfied, all $K$ users can reliably detect the desired messages at high SNR. \begin{example}Assuming $4^2$-QAM, we explain how to obtain the feasibility condition in (\ref{eq:18}). Consider the decision region $\mathcal{R}(s^{(2)}_{(0,2)})$.
From Fig.~\ref{fig:2}, the decision region is represented by the intersection of the two base regions $\tilde{\mathcal{B}}_0$ (i.e., the infinite region with the blue basis in Fig.~\ref{fig:2}) and {$\tilde{\mathcal{B}}_2+2^1g(s_{(0)}^{(1)})$} (i.e., the infinite region with the red basis in Fig.~\ref{fig:2}). Thus, the decision region (i.e., the gray region in Fig.~\ref{fig:2}) is represented as \begin{equation}\label{eq:21} \tilde{\mathcal{R}}\left(s^{(2)}_{(0,2)}\right) \triangleq \left\{\tilde{\mathcal{B}}_{0}+2^2g(0)\right\}\cap \left\{\tilde{\mathcal{B}}_{2}+2^1g\left(s_{(0)}^{(1)}\right)\right\}. \end{equation} Also, from Definition 2, the above condition can be represented by the following two linear equations: \begin{align}\label{eq:23} g(r_k) =& \alpha_{k,1}^1{\bf m}_{0}^1+\alpha_{k,1}^2{\bf m}_{0}^2+2^2g(0),\nonumber \\ = &\alpha_{k,2}^1{\bf m}_{2}^1+\alpha_{k,2}^2{\bf m}_{2}^2+2^1 g\left(s_{(0)}^{(1)}\right), \end{align} for some positive coefficients $\alpha_{k,1}^1,\alpha_{k,1}^2,\alpha_{k,2}^1,\alpha_{k,2}^2>0$. This is equivalent to the condition in (\ref{eq:20}). In the same way, we can verify the feasibility condition in (\ref{eq:18}). \hfill$\blacksquare$ \end{example} \vspace{0.1cm} We are now ready to derive the MILP problem which generates an optimal transmit vector ${\bf x}$ under the 1-bit DAC constraint. We first represent the feasibility condition in a matrix form. Define the $n$ copies of the channel vector ${\bf h}_k$ as \begin{equation} {\bf H}^k\stackrel{\Delta}{=} \bar{\bf{1}}_n \otimes {\bf h}_k=[\underbrace{{\bf h}_k^{\sf T},\ldots, {\bf h}_k^{\sf T}}_{n}]^{\sf T}, \label{eq:26} \end{equation} where ${\bf h}_k$ denotes the $k$-th row of ${\bf H}$, $\bar{\bf{1}}_n$ denotes the length-$n$ all-one vector, and $\otimes$ denotes the Kronecker product. Also, the corresponding real-valued expression is denoted as \begin{equation} \tilde{{\bf H}}^k=\phi({\bf H}^k).
\end{equation} Accordingly, the $n$-extended received vector at the $k$-th user is written as \begin{align}\label{eq:27} {\bf r}^k&\triangleq g({\bf H}^k{\bf x}) \nonumber\\ &=\tilde{{\bf H}}^k\tilde{{\bf x}} =\bar{\bf{1}}_n \otimes g(r_k). \end{align} We next express the right-hand side of \eqref{eq:20}, i.e., the linear constraints, in a matrix form. From Definition 2, we let \begin{align} {\bf M}_i \triangleq [{\bf m}_{i}^1 \ {\bf m}_{i}^2]&= \begin{bmatrix} \Re(c_{i}) & 0 \\ 0 & \Im(c_{i}) \end{bmatrix} \nonumber\\ &= \begin{bmatrix} \sqrt{2}\cos\left({\frac{\pi}{4}} (1+2i)\right) & 0 \\ 0 & \sqrt{2}\sin\left({\frac{\pi}{4}} (1+2i)\right) \end{bmatrix}. \label{eq:25} \end{align} We notice that ${\bf M}_i$ is a symmetric and orthogonal matrix, since \begin{align}\label{eq:sym_or} {\bf M}_{i}{\bf M}_{i} = \begin{bmatrix} \cos\left({\frac{\pi}{2}} (1+2i)\right)+1 & 0 \\ 0 & -\cos\left({\frac{\pi}{2}} (1+2i)\right)+1 \end{bmatrix}={\bf I}. \end{align} Since the decision region of a $4^n$-QAM constellation point is formed as the intersection of $n$ shifted base regions, we need a compact representation that captures both the base regions and the shifts (biases). The former is addressed by the basis matrix ${\bf M}^{\mu_k}$ and the coefficient vector $\hbox{\boldmath$\alpha$}^k$, which are respectively written as \begin{align} {\bf M}^{\mu_k} &\triangleq \mbox{diag}({\bf M}_{\mu_{k,1}},\ldots,{\bf M}_{\mu_{k,n}}) \label{eq:28}\\ \hbox{\boldmath$\alpha$}^k &\triangleq[\alpha_{k,1}^1,\alpha_{k,1}^2,\ldots,\alpha_{k,n}^1,\alpha_{k,n}^2]^{\sf T}.
\end{align}Lastly, the biases are collected into the normalized bias vector ${\bf b}^{\mu_k}$, which is defined from (\ref{eq:13}) as \begin{align}\label{eq:29} {\bf b}^{\mu_k} \triangleq g\left([2^n\cdot0, 2^{n-1}\cdot {s'}_{(\mu_{k,1})}^{(1)}, \ldots ,2^1\cdot {s'}_{(\mu_{k,1},\ldots,\mu_{k,n-1})}^{(n-1)}]^{\sf T}\right)\nonumber\\ =\frac{1}{\tau}g\left([2^n\cdot0, 2^{n-1}\cdot s_{(\mu_{k,1})}^{(1)}, \ldots ,2^1\cdot s_{(\mu_{k,1},\ldots,\mu_{k,n-1})}^{(n-1)}]^{\sf T}\right). \end{align} From \eqref{eq:28}-\eqref{eq:29}, the matrix form of the $k$-th user's feasibility condition (\ref{eq:20}) is given as \begin{equation}\label{eq:31} {\bf r}^k={\bf M}^{\mu_k}\boldsymbol{\alpha}^k+\tau{\bf b}^{\mu_k}. \end{equation} Leveraging the expression derived for each user, we construct the cascaded matrix form of the feasibility condition for all $K$ users as \begin{equation}\label{eq:32} \Bar{{\bf r}}=\Bar{{\bf H}}\tilde{{\bf x}}=\Bar{{\bf M}}\bar{\boldsymbol{\alpha}}+\tau\Bar{{\bf b}}, \end{equation} where \begin{align*} \Bar{{\bf M}} &\triangleq \mbox{diag}({\bf M}^{\mu_1},\ldots,{\bf M}^{\mu_K}),\; \Bar{{\bf H}} \triangleq [(\tilde{{\bf H}}^{1})^{\sf T},\ldots,(\tilde{{\bf H}}^{K})^{\sf T}]^{\sf T} \\ \Bar{{\bf r}} &\triangleq [({\bf r}^{1})^{\sf T},\ldots,({\bf r}^{K})^{\sf T}]^{\sf T},\; \Bar{{\bf b}} \triangleq [({\bf b}^{\mu_1})^{\sf T},\ldots,({\bf b}^{\mu_K})^{\sf T}]^{\sf T} \\ \bar{\boldsymbol{\alpha}}&\triangleq [(\boldsymbol{\alpha}^1)^{\sf T},\ldots,(\boldsymbol{\alpha}^K)^{\sf T}]^{\sf T}. \end{align*} Thus, the feasibility condition in (\ref{eq:32}) is rewritten as \begin{equation}\label{eq:37} \bar{\boldsymbol{\alpha}}=\underbrace{\bar{{\bf M}}\bar{{\bf H}}}_{\triangleq{\boldsymbol{\Lambda}}}\tilde{{\bf x}}-\tau\underbrace{\bar{{\bf M}}\bar{{\bf b}}}_{\triangleq{\boldsymbol{\Lambda}_b}}, \end{equation} where we used the fact that $\bar{{\bf M}}^{-1}=\bar{{\bf M}}$ from (\ref{eq:sym_or}).
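Once $\boldsymbol{\Lambda}$ and $\boldsymbol{\Lambda}_b$ are formed, checking a candidate $\tilde{{\bf x}}$ against (\ref{eq:37}) reduces to a single matrix-vector product. A small numpy sketch with our own naming, using an identity matrix as a toy stand-in for the channel-dependent $\boldsymbol{\Lambda}$:

```python
import numpy as np

def feasibility_margin(x_tilde, Lam, Lam_b, tau):
    """Minimum coefficient of alpha = Lam @ x - tau * Lam_b.

    The candidate x_tilde is feasible iff the returned margin is
    strictly positive, i.e., every alpha-coefficient in (37) is > 0.
    """
    alpha = Lam @ x_tilde - tau * Lam_b
    return float(alpha.min())

# Toy stand-in: Lam = I and Lam_b = 0 give alpha = x, so margin = min(x).
margin = feasibility_margin(np.ones(4), np.eye(4), np.zeros(4), tau=0.5)
```

The margin is exactly the objective that the robustness formulation maximizes, so the same routine doubles as an evaluation metric for any precoder output.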
We remark that $\boldsymbol{\Lambda}\in\mathbb{R}^{2nK\times2N_t}$ and $\boldsymbol{\Lambda}_b\in\mathbb{R}^{2nK\times1}$ are fully determined by the channel matrix $\bar{{\bf H}}$ and the users' messages $\{\mu_k: k\in[1:K]\}$. \noindent{\bf Robustness:} A feasible transmit vector can provide an attractive performance in the high-SNR regime. However, robustness to the additive Gaussian noise is not yet guaranteed. To enhance the robustness, one reasonable way is to place the noiseless received signal at the center of its decision region; namely, we aim to move the noiseless signal away from the decision boundaries. Taking this goal into account, we formulate the following optimization problem: \begin{align}\label{eq:39} &\mathcal{P}_1:&& \max_{\tilde{{\bf x}},\tau} \min\{\alpha_{k,j}^i: i=1,2,\ j\in[1:n],\ k\in[1:K]\} \\ & \text{s.t.} &&\bar{\boldsymbol{\alpha}}=\boldsymbol{\Lambda}\tilde{{\bf x}}-\tau\boldsymbol{\Lambda}_{b},\nonumber\\ &&& \alpha_{k,j}^1,\alpha_{k,j}^2 > 0,\ j\in[1:n],\ k\in[1:K],\nonumber\\ &&& \tilde{{\bf x}}\in\{-1,1\}^{2N_t} \nonumber. \end{align} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{figure4.pdf} \caption{The normalized noiseless received signals of $4^2$-QAM.} \label{fig:4} \end{figure} From now on, we will explain how to solve the problem $\mathcal{P}_1$ efficiently. Depending on how the decision parameter $\tau$ is handled, we consider two different scenarios: i) a fixed $\tau$ irrespective of the channel matrix ${\bf H}$; ii) a channel-dependent $\tau$. Clearly, the second scenario requires more overhead since in this case the BS needs to transmit $\tau$ to the $K$ users more frequently.
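For intuition at toy scale, $\mathcal{P}_1$ with a given $\tau$ can be solved exactly by enumerating all $2^{2N_t}$ one-bit vectors. The brute-force sketch below (our naming, illustrative only) is hopeless at massive-MIMO dimensions, which is precisely what motivates the relaxation-based methods:

```python
import itertools
import numpy as np

def solve_p1_bruteforce(Lam, Lam_b, tau):
    """Maximize min(alpha) over x in {-1,1}^dim with alpha = Lam @ x - tau*Lam_b.

    Exhaustive search over all sign patterns; tractable only for tiny dim.
    """
    dim = Lam.shape[1]
    best_x, best_t = None, -np.inf
    for bits in itertools.product((-1.0, 1.0), repeat=dim):
        x = np.array(bits)
        t = np.min(Lam @ x - tau * Lam_b)
        if t > best_t:
            best_x, best_t = x, t
    return best_x, best_t
```

The returned `best_t` is the exact optimum of the inner max-min objective for the given $\tau$, and serves as a ground-truth baseline when validating faster heuristics on small instances.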
For the first scenario, we employ the asymptotic result provided in \cite{sohrabi2018one}, where $\tau$ is fully determined as a function of $N_t$, $n$, and $K$: \begin{equation}\label{eq:10} \tau \stackrel{\Delta}{=} {\frac{\sqrt{{2/ \pi}}}{6}} \sqrt{\frac{2\rho {N_t}^2}{\tilde{f}(K,n)}}, \end{equation} where \begin{equation}\label{eq:11} \tilde{f}(K,n) = K\frac{2^n+1}{3(2^n-1)}+2\sqrt{K\frac{(2^n+1)(2^{2n}-4)}{ 22.5(2^n-1)^3}}. \end{equation} Leveraging the fixed $\tau$ in (\ref{eq:10}), $\mathcal{P}_1$ can be formulated as the following MILP: \begin{align}\label{eq:43} &\mathcal{P}_2:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\boldsymbol{\Lambda}_i\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_{b,i} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& \tilde{{\bf x}}\in\{-1,1\}^{2N_t}, \end{align} where $\boldsymbol{\Lambda}_i$ and $\boldsymbol{\Lambda}_{b,i}$ denote the $i$-th rows of $\boldsymbol{\Lambda}$ and $\boldsymbol{\Lambda}_b$, respectively. For the second scenario, $\tau$ is set as $\tau \stackrel{\Delta}{=} t$. In general, this value can change with the channel matrix ${\bf H}$. This choice is motivated by the fact that $t$ is the lower bound of the coefficients $\bar{\hbox{\boldmath$\alpha$}}$ and that, due to the normalized $\bar {{\bf M}}$ and $\bar{{\bf b}}$, the coefficients directly signify how far the received signal is from a decision boundary. Accordingly, the optimization problem to find the decision parameter $\tau$ and the transmit vector ${\bf x}$ simultaneously is defined as \begin{align}\label{MILP4} &\mathcal{P}_3:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\frac{1}{1+\boldsymbol{\Lambda}_{b,i}}\boldsymbol{\Lambda}_i\tilde{{\bf x}} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& \tilde{{\bf x}}\in\{-1,1\}^{2N_t}.
\end{align} The proposed MILP problems $\mathcal{P}_2$ and $\mathcal{P}_3$ can be solved via the well-known branch-and-bound (B\&B) method \cite{landau2017branch}, which achieves a near-optimal performance. However, its computational complexity is too expensive for a practical implementation \cite{landau2017branch}. In the following section, we present a low-complexity method to solve the optimization problems $\mathcal{P}_2$ and $\mathcal{P}_3$ efficiently. \begin{remark} Fig.~\ref{fig:4} verifies the proposed approach, where $10^4$ normalized noiseless signals ${\bf H}{\bf x}$ are plotted for $N_t=8$, $K=2$, and $n=2$. The blue points depict the noiseless received signals when the ZF precoding in \cite{peel2005vector} is used under the assumption of infinite resolution. In contrast, the red points show the noiseless received signals when the proposed 1-bit transmit vectors, obtained from the solutions of $\mathcal{P}_2$, are used. Fig.~\ref{fig:4} clearly shows that the red points provide more robustness than the blue points even with low-resolution data converters. \hfill$\blacksquare$ \end{remark} \section{Low-Complexity Precoding Methods} In this section, we present efficient algorithms to solve the MILP problems $\mathcal{P}_2$ and $\mathcal{P}_3$. We first solve the LP problem obtained by relaxing the integer constraint in $\mathcal{P}_3$ to a bounded interval: \begin{align}\label{eq:46} &\mathcal{P}_4:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\frac{1}{1+\boldsymbol{\Lambda}_{b,i}}\boldsymbol{\Lambda}_i\tilde{{\bf x}} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& -1\le\tilde{x}_j\le 1,\ j\in[1:2N_t].
\end{align} Similarly, $\mathcal{P}_2$ can be reformulated as the following LP problem with the relaxed linear constraint: \begin{align}\label{LP5} &\mathcal{P}_5:&& \operatornamewithlimits{argmax}_{\tilde{{\bf x}},t}\ \ t \nonumber\\ & \text{s.t.} &&\boldsymbol{\Lambda}_i\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_{b,i} \ge t,\ i\in[1:2nK], \nonumber\\ &&& t>0, \nonumber\\ &&& -1\le\tilde{x}_j\le 1,\ j\in[1:2N_t], \end{align} where $\tau$ is given in (\ref{eq:10}). The above problems can be efficiently solved via the simplex method \cite{luenberger1984linear}, and the corresponding relaxed LP solution is denoted as $\tilde{{\bf x}}_{\rm LP}$. Then, we refine the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$ via a greedy algorithm (see Algorithm 1) so that it satisfies the one-bit constraints. Starting from the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$, i.e., $\tilde{{\bf x}}_{\rm LP}$, the main idea behind the second stage is, for each antenna index $i$, to test the possible values $\tilde{x}_i \in \{-1,1\}$, to calculate the resulting set of scaling coefficients for each choice, and finally to set $\tilde{x}_i=j$, where $j\in\{-1,1\}$ is the value that maximizes the minimum element of the coefficients. \begin{algorithm}[t]\label{al:1} \caption{Greedy Algorithm} \textbf{Input:} $\tilde{{\bf x}}_{\rm LP}\in\mathbb{R}^{2N_t\times1}$, $\boldsymbol{\Lambda}\in\mathbb{R}^{2nK\times2N_t}$, $\boldsymbol{\Lambda}_b\in\mathbb{R}^{2nK\times1}$ and $\tau\in\mathbb{R}^{+}$. \textbf{Initialization:} $\tilde{{\bf x}}=\tilde{{\bf x}}_{\rm LP}$ (obtained by either $\mathcal{P}_4$ or $\mathcal{P}_5$).
\begin{algorithmic} \For{$i=1:2N_t$} \For{$j\in \{-1,1\}$} \State $\tilde{x}_i=j$ and $\bar{\boldsymbol{\alpha}}^{(j)}=\boldsymbol{\Lambda}\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_b$ \EndFor \State Update $\tilde{x}_i\leftarrow \operatornamewithlimits{argmax}_{j\in\{-1,1\}}\{\min(\bar{\boldsymbol{\alpha}}^{(j)})\}$ \EndFor \State \textbf{Output:} $\tilde{{\bf x}}\in\mathbb{R}^{2N_t\times1}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t]\label{al:2} \caption{Partial Greedy Algorithm} \textbf{Input:} $\tilde{{\bf x}}_{\rm LP}\in\mathbb{R}^{2N_t\times1}$, $\boldsymbol{\Lambda}\in\mathbb{R}^{2nK\times2N_t}$, $\boldsymbol{\Lambda}_b\in\mathbb{R}^{2nK\times1}$, $\tau\in\mathbb{R}^{+}$, $\mathcal{O}\subseteq[1:2N_t]$, $\mathcal{Q}\subseteq[1:2N_t]$. \textbf{Initialization:} $\tilde{{\bf x}}=\tilde{{\bf x}}_{\rm LP}$ (obtained by either $\mathcal{P}_4$ or $\mathcal{P}_5$). \begin{algorithmic} \For{$i=1:\text{card}(\mathcal{Q})$} \For{$j\in \{-1,1\}$} \State $\tilde{x}_{\mathcal{Q}_i}=j$ and $\bar{\boldsymbol{\alpha}}^{(j)}=\boldsymbol{\Lambda}\tilde{{\bf x}}-\tau \boldsymbol{\Lambda}_b$ \EndFor \State Update $\tilde{x}_{\mathcal{Q}_i}\leftarrow \operatornamewithlimits{argmax}_{j\in\{-1,1\}}\{\min(\bar{\boldsymbol{\alpha}}^{(j)})\}$ \EndFor \State \textbf{Output:} $\tilde{{\bf x}}\in\mathbb{R}^{2N_t\times1}$ \end{algorithmic} \end{algorithm} \subsection{Greedy algorithms} $\tilde{{\bf x}}_{\rm LP}$, the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$, is obtained via the simplex method \cite{luenberger1984linear} instead of the interior-point method \cite{den2012interior}. The simplex method finds an optimal solution that is an extreme point of the constraint set of $\mathcal{P}_4$ or $\mathcal{P}_5$. The extreme points are basic feasible solutions, most of whose entries already satisfy the 1-bit constraint \cite{luenberger1984linear}.
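As a sketch of this first stage, the relaxed LP can be handed to an off-the-shelf solver; the snippet below uses SciPy's HiGHS-based linprog as a stand-in for a simplex implementation, with the shapes of Lam and Lam_b assumed for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def solve_relaxed_lp(Lam, Lam_b, tau):
    """Sketch of the P_5-type relaxation: maximize t over z = [x; t]
    subject to Lam @ x - tau * Lam_b >= t and -1 <= x_j <= 1."""
    m, d = Lam.shape
    c = np.zeros(d + 1)
    c[-1] = -1.0                                  # minimize -t <=> maximize t
    A_ub = np.hstack([-Lam, np.ones((m, 1))])     # t - Lam @ x <= -tau * Lam_b
    b_ub = -tau * Lam_b
    bounds = [(-1.0, 1.0)] * d + [(0.0, None)]    # box on x, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d], res.x[-1]                   # (x_LP, achieved margin t)
```

The $\mathcal{P}_4$ form follows from the same routine by pre-scaling each row of Lam by $1/(1+\Lambda_{b,i})$ and setting tau to zero.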
From $\tilde{{\bf x}}_{\rm LP}$, the solution of the LP, the sets $\mathcal{O}$ and $\mathcal{Q}$ are obtained as \begin{align} \mathcal{O}&=\{i:\tilde{x}_i\in\{-1,1\},\ i\in[1:2N_t]\}\\ \mathcal{Q}&=\{i:-1<\tilde{x}_i<1,\ i\in[1:2N_t]\}. \end{align} To alleviate the computational complexity of the greedy algorithm, a partial greedy algorithm is suggested based on $\tilde{{\bf x}}_{\rm LP}$, i.e., the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$. Unlike the full greedy algorithm, the second stage of the partial greedy algorithm performs the greedy search only on the entries in $\mathcal{Q}$ while the elements in $\mathcal{O}$ remain unchanged (see Algorithm 2). This diminishes the size of the search space, thereby reducing the complexity dramatically. \subsection{Computational complexity}\label{subsec:complexity} We compare the proposed algorithms with the existing methods in terms of the computational complexity measured by the total number of real-valued multiplications. We first evaluate the complexity of the optimal method based on an exhaustive search that explores all possible signal candidates $\tilde{{\bf x}}\in\{-1,1\}^{2N_t}$. Since each candidate requires $2nK\cdot2N_t$ operations to generate the magnitudes of the coefficients in the feasibility conditions in \eqref{eq:37}, the total complexity of the exhaustive search is computed as \begin{equation}\label{eq:49} \mathcal{X}_e=4nKN_t\cdot2^{2N_t}. \end{equation} We next focus on the computational complexity of the proposed algorithms, which consist of the LP solver of $\mathcal{P}_4$ and its greedy refinement.
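The greedy refinement stage described above can be sketched as follows; passing Q=None performs the full pass of Algorithm 1, while passing the fractional index set reproduces the partial pass of Algorithm 2 (variable names are illustrative):

```python
import numpy as np

def greedy_refine(x_lp, Lam, Lam_b, tau, Q=None):
    """Algorithms 1 and 2: set each selected entry to the sign in {-1, +1}
    that maximizes the minimum coefficient min(Lam @ x - tau * Lam_b)."""
    x = np.asarray(x_lp, dtype=float).copy()
    if Q is None:                 # full greedy: refine every entry
        Q = range(len(x))
    for i in Q:
        best_j, best_val = -1.0, -np.inf
        for j in (-1.0, 1.0):
            x[i] = j
            val = (Lam @ x - tau * Lam_b).min()
            if val > best_val:
                best_j, best_val = j, val
        x[i] = best_j
    return x

def fractional_set(x_lp, tol=1e-6):
    """The set Q: entries of the LP solution not already equal to +-1."""
    return [i for i, v in enumerate(x_lp) if abs(abs(v) - 1.0) > tol]
```

With Q = fractional_set(x_lp), the entries indexed by $\mathcal{O}$ are left untouched, as in Algorithm 2.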
For the LP solver, we use the simplex method, whose computational complexity for the standard LP constraints (i.e., ${\bf A}{\bf x}\ge{\bf b}$, where ${\bf A}\in\mathbb{R}^{m\times n}$, ${\bf b}\in\mathbb{R}^{m\times 1}$, ${\bf x}\in \mathbb{R}^{n\times1}$) is given as \cite{dantzig1998linear,den2012interior}: \begin{align} \mathcal{X}_{\rm LP} &= \mathbbm{i}_{\rm LP}\cdot\{(m+1)(n+1)+2 m\}\nonumber\\ &=\mathbbm{i}_{\rm LP}\cdot\{m\times n+3m+n+1\},\label{eq:52} \end{align} where $\mathbbm{i}_{\rm LP}$ is the number of iterations of the simplex method. Because of the randomness of $\boldsymbol{\Lambda}$ and $\boldsymbol{\Lambda}_b$, $\mathbbm{i}_{\rm LP}$ varies according to the constraints; however, it can be approximated as $\mathbbm{i}_{\rm LP} \approx \alpha \times m$, where $\exp{(\alpha)} = \log_2{(2+\frac{n}{m})}$ \cite{shu1993linear}. Note that the number of iterations depends only on the constraints of the LP, not on the objective function. Overall, the complexity of the simplex method in the system is represented as \begin{equation} \mathcal{X}_{\rm LP} = (\log\{\log_2{(2+\frac{n}{m})}\}\times m)\cdot(m \times n+3m+n+1). \label{eq:X_LP} \end{equation} Also, the quantized LP denotes the algorithm that directly quantizes the solution of $\mathcal{P}_4$ or $\mathcal{P}_5$ into a 1-bit transmit vector using the {\it sign} function, i.e., ${\bf x}_q={\rm sign}({\bf x}_{\rm LP})$. Thus, the corresponding complexity is the same as that of the LP: \begin{equation} \mathcal{X}_q = \mathcal{X}_{\rm LP}. \end{equation} Next, the complexity of the full-greedy method is obtained as \begin{align}\label{eq:53} \mathcal{X}_{\rm F-greedy}&=2\times2nK\times2N_t \end{align} based on the fact that the algorithm sequentially searches over $\tilde{x}_i\in\{-1,1\}, \forall i\in[1:2N_t]$, and each iteration requires $2nK$ operations.
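As a quick numeric sketch of \eqref{eq:X_LP}, treating the iteration-count approximation above as given:

```python
import math

def simplex_iterations(m, n):
    """i_LP ~ alpha * m with exp(alpha) = log2(2 + n/m) (approximation above)."""
    return math.log(math.log2(2 + n / m)) * m

def x_lp_complexity(m, n):
    """Eq. (X_LP): real multiplications of the simplex solver for an LP
    with m inequality constraints and n variables."""
    return simplex_iterations(m, n) * (m * n + 3 * m + n + 1)
```

For instance, with $m=2nK=32$ constraints and $n=2N_t=128$ variables ($4^2$-QAM, $N_t=64$, $K=8$), this evaluates to roughly $1.3\times10^5$ real multiplications.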
Similarly, the complexity of the partial-greedy algorithm is represented as \begin{equation} \label{eq:X_partial} \mathcal{X}_{\rm P-greedy}=2\times2nK\times \text{card}(\mathcal{Q}). \end{equation} Combining \eqref{eq:X_LP}, \eqref{eq:53}, and \eqref{eq:X_partial}, the total computational complexities of the proposed methods are computed as \begin{align} \mathcal{X}_{\rm pro1}&=\mathcal{X}_{\rm LP}+\mathcal{X}_{\rm F-greedy}\nonumber\\ &=\mathcal{X}_{\rm LP}+8nKN_t,\\ \mathcal{X}_{\rm pro2}&=\mathcal{X}_{\rm LP}+\mathcal{X}_{\rm P-greedy}\nonumber\\ &=\mathcal{X}_{\rm LP}+4nK\text{card}(\mathcal{Q}).\label{eq:54} \end{align} As a benchmark, we consider a low-complexity method, the symbol-scaling (SS) method \cite{li2018massive}, whose computational complexity is given as \begin{equation}\label{eq:50} \mathcal{X}_{\rm SS}=4N_t^2+24nKN_t-2nK. \end{equation} A partial branch-and-bound (P-BB) algorithm and an ordered partial sequential update (OPSU) algorithm were proposed for QAM constellations with low complexity in \cite{li2020interference}. Recall that the full B\&B (F-BB) has a prohibitive complexity for practical equipment \cite{landau2017branch}. Thus, P-BB is also based on $\tilde{{\bf x}}_{\rm LP}$ and the corresponding sets $\mathcal{O}$ and $\mathcal{Q}$. In detail, P-BB fixes the entries of $\tilde{{\bf x}}_{\rm LP}$ with indices in $\mathcal{O}$ and reconstructs only the remaining entries (i.e., $\tilde{x}_i,\ i\in\mathcal{Q}$) via B\&B. The P-BB algorithm significantly reduces the complexity compared to F-BB. Unfortunately, in the worst case, it needs to search the entire set $\{-1,1\}^{\text{card}(\mathcal{Q})}$, causing a huge complexity when the number of users is large. The OPSU algorithm is essentially a greedy algorithm based on B\&B. The complexity of P-BB ($\mathcal{X}_{\rm P-BB}$) cannot be specified exactly, since the B\&B search tree varies from case to case.
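The closed-form counts above are easy to tabulate; the following sketch reproduces them as plain functions:

```python
def x_exhaustive(N_t, n, K):
    # Eq. (49): exhaustive search, 4nK*N_t * 2^(2N_t)
    return 4 * n * K * N_t * 2 ** (2 * N_t)

def x_full_greedy(N_t, n, K):
    # Eq. (53): full greedy refinement, 2 * 2nK * 2N_t = 8nK*N_t
    return 8 * n * K * N_t

def x_partial_greedy(n, K, card_Q):
    # Eq. (X_partial): partial greedy, 2 * 2nK * card(Q)
    return 4 * n * K * card_Q

def x_symbol_scaling(N_t, n, K):
    # Eq. (50): symbol-scaling benchmark
    return 4 * N_t**2 + 24 * n * K * N_t - 2 * n * K
```

For $4^2$-QAM with $N_t=64$, $K=8$ ($n=2$), these give $\mathcal{X}_{\rm F-greedy}=8192$ and $\mathcal{X}_{\rm SS}=40\,928$, consistent with the entries of Table~\ref{table:1}.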
Naturally, the computational complexity of OPSU ($\mathcal{X}_{\rm OPSU}$) is much lower than $\mathcal{X}_{\rm P-BB}$ \cite{li2020interference}. The complexity of OPSU is represented as \begin{align} \mathcal{X}_{\rm OPSU} &= \mathcal{X}_{\rm LP}+2(2K+2K) \text{card}(\mathcal{Q}) \\ &=\mathcal{X}_{\rm LP}+8K\text{card}(\mathcal{Q}). \end{align} Hence, we have: \begin{equation}\label{com_pbb,opsu} \mathcal{X}_{\rm P-BB} \gg \mathcal{X}_{\rm OPSU}. \end{equation} For a fair comparison, the complexities of all methods except P-BB are thus compared via the above expressions. \section{Simulation Results}\label{sec:simulation} In this section, we verify the superiority of the proposed methods by comparing the symbol-error-rate (SER) performances of various methods. For the simulations, the following methods are used: \begin{itemize} \item Zero forcing {\bf(ZF)}: The conventional ZF method for unquantized MU-MIMO systems, which serves as a lower bound for the 1-bit quantized methods. \item Symbol scaling {\bf (SS)}: The low-complexity method proposed in \cite{li2018massive}. \item Quantized zero forcing {\bf (QZF)}: The direct 1-bit quantization of ZF. \item Quantized LP {\bf (QLP)}: The direct 1-bit quantization of the solution from $\mathcal{P}_4$ in Figs. \ref{fig:7}, \ref{fig:8} and $\mathcal{P}_5$ in Figs. \ref{fig:5}, \ref{fig:6}. \item Partial branch and bound {\bf (P-BB)}: The method for QAM constellations proposed in \cite{li2020interference}. \item Ordered partial sequential update {\bf (OPSU)}: The ordered greedy method based on B\&B proposed in \cite{li2020interference}. \item Full-greedy {\bf (F-greedy)}: The proposed method 1, the LP-based full greedy algorithm in Algorithm 1. \item Partial-greedy {\bf (P-greedy)}: The proposed method 2, the LP-based partial greedy algorithm in Algorithm 2. \end{itemize} Recall that ${\sf SNR}$ is defined as the per-antenna signal-to-noise power ratio, i.e., $\rho/\sigma^2$.
\begin{table*}[h] \caption{Comparison of computational complexities for the simulation settings} \begin{center} \begin{tabular}{|c|c|c|} \hline Precoding methods & $4^2$-QAM, $N_t=64, K=8$& $4^3$-QAM, $N_t=128, K=8$\\ \hline Exhaustive search& $1.4\times10^{42}$ &$1.4\times10^{81}$ \\ \hline P-BB& $69481\ll\cdot\leq1.34\cdot10^8$ &$263030\ll\cdot\leq4\cdot10^8$ \\ \hline OPSU& 69481 & 263030 \\ \hline QLP& 131320 & 643100 \\ \hline F-greedy& 139510 &667680 \\ \hline P-greedy& 133240 &645980 \\ \hline SS& 40928 &139216 \\ \hline \end{tabular} \label{table:1} \end{center} \end{table*} In addition, we evaluate the computational complexities based on the analyses in Section \ref{subsec:complexity}. In Table~\ref{table:1}, we assume that each method performs only a single iteration, because the number of iterations of alternating algorithms such as P-BB and OPSU cannot be measured accurately. For the complexities of the OPSU and P-greedy algorithms in Table \ref{table:1}, we use the average $\text{card}(\mathcal{Q})$ obtained from $10^5$ simulations: the average $\text{card}(\mathcal{Q})$ is $14.988$ in the setting ($4^2$-QAM, $N_t=64$, $K=8$) and $14.998$ in the setting ($4^3$-QAM, $N_t=128$, $K=8$). Table~\ref{table:1} shows that the computational complexities of the proposed methods are moderate \cite{li20201bit}. Note that the following two scenarios are considered according to the overhead for conveying the decision-region parameter $\tau$: \textbf{Scenario i)} In this scenario, $\tau$ is chosen independently of the channel matrix ${\bf H}$, i.e., $\tau$ is determined by (\ref{eq:10}). The corresponding results are provided in Figs. \ref{fig:5} and \ref{fig:6}. The inputs of our proposed methods (i.e., the F-greedy and P-greedy algorithms), $\tilde{{\bf x}}_{\rm LP}$ and $\tau$, are determined from $\mathcal{P}_5$. \textbf{Scenario ii)} In this scenario, $\tau$ is chosen in a channel-dependent way; the corresponding results are provided in Figs. \ref{fig:7} and \ref{fig:8}.
In Scenario ii), the solutions of $\mathcal{P}_4$, namely $\tilde{{\bf x}}_{\rm LP}$ and $\tau(\stackrel{\Delta}{=} t)$, are used as the inputs of the proposed algorithms. Although the channel-dependent $\tau$ must be transmitted for each channel realization, the performance gain is very attractive. \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure.5,16qam,64,8,fixed.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=64, $K$=8, and $4^2$-QAM with a fixed parameter $\tau$.} \label{fig:5} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure.7,64qam,128,8,fixed.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=128, $K$=8, and $4^3$-QAM with a fixed parameter $\tau$.} \label{fig:6} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure.6,16qam,64,8,not fixed.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=64, $K$=8, and $4^2$-QAM with a channel-dependent parameter $\tau$.} \label{fig:7} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure.8.64qam,128,8,not fixed.pdf} \caption{Performance comparisons of precoding methods for the downlink MU-MISO systems with 1-Bit DACs, where $N_t$=128, $K$=8, and $4^3$-QAM with a channel-dependent parameter $\tau$.} \label{fig:8} \end{figure} Fig.~\ref{fig:5} shows the SER performance comparisons of the above algorithms for downlink MU-MISO systems with 1-bit DACs, where $N_t=64$, $K=8$, and $4^2$-QAM. Without 1-bit constraints, the ZF method provides the optimal performance with infinite-resolution DACs. This can be interpreted as the lower bound for the above 1-bit constrained methods. Note that, in all simulation settings, we cannot evaluate the performance of the MILP due to its unmanageable complexity.
At high SNR, we observe that all 1-bit precoding methods, including the quantized LP, suffer from a severe error floor except the proposed methods. To overcome the error floor, we apply the F-greedy and P-greedy algorithms, which determine the entries of ${\bf x}_{\rm LP}$ such that they belong to $\{-1,1\}$ while keeping the feasibility and robustness. Fig.~\ref{fig:6} shows the SER performance comparisons for the configuration of $N_t=128$, $K=8$, and $4^3$-QAM, showing a similar trend. By construction, our formulation keeps all candidates within the decision region, owing to the expression of each decision region as the intersection of the $n$ shifted base regions. As a result, our MILP and LP problems contain only inequality constraints, without equality constraints. Fig.~\ref{fig:7} shows the performance comparisons for the MU-MISO case where $N_t=64$, $K=8$, and $4^2$-QAM with a channel-dependent parameter $\tau$. We observe that the performances of the proposed methods are the closest to the optimal performance; moreover, the deviation between the full greedy method and the partial greedy method is negligible, which means that $\mathcal{P}_4$ provides a near-optimal $\tau$ and the refinement of ${\bf x}_{\rm{LP}}$ over $\mathcal{Q}$ is sufficient. At high SNR, the proposed methods show a performance gain even over P-BB, which is a near-optimal method \cite{li2020interference}; this further validates that the $\tau$ from $\mathcal{P}_4$ is properly chosen. Fig.~\ref{fig:8} shows the same behavior in the systems where $N_t=128$, $K=8$, and $4^3$-QAM with a channel-dependent parameter $\tau$. Although we assume a single iteration in the complexity comparison, in the actual algorithms P-BB and OPSU find $\tau$ in an alternating manner, whereas the proposed methods fix $\tau$ from $\mathcal{P}_4$ in one shot. The performances in Fig.~\ref{fig:7} and Fig.~\ref{fig:8} show that the $\tau$ found at once in $\mathcal{P}_4$ is reasonable.
\section{Conclusion}\label{sec:conclusion} We proposed a construction of 1-bit transmit signal vectors for downlink MU-MISO systems with QAM constellations. In this regard, we derived the linear feasibility conditions which ensure that each user can recover the desired message successfully, and transformed them into a cascaded matrix form. From this, we constructed a mixed integer linear programming (MILP) problem whose solution generates a 1-bit transmit vector that satisfies the feasibility conditions and guarantees robustness to noise. To address the computational complexity of the MILP, we proposed an LP-relaxed algorithm consisting of two steps: 1) solve the relaxed LP; 2) refine the LP solution to fit the 1-bit constraint. Via simulation results, we demonstrated that the proposed methods achieve better performance with lower complexity compared with the benchmarks. One promising future direction is to further reduce the complexity of the proposed method without performance loss. \section*{Acknowledgment} This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1A2C1099836). \bibliographystyle{IEEEtran}
\section{INTRODUCTION} The high depletion factors found for Fe in the interstellar medium (ISM), down to $\log(\mbox{Fe}/\mbox{H})-\log(\mbox{Fe}/\mbox{H})_\odot=-2.3$ \citep{sav96}, and the relatively high cosmic abundance of this element, imply that Fe is a very important contributor to the mass of refractory dust grains \citep*{sof94}. The high depletion factors also imply that the destruction of a small quantity of dust grains will translate into a significant, i.e. measurable, increase of the Fe abundance in the gaseous phase. Hence, the study of the Fe abundance in the gas of different regions where different conditions prevail can be used to identify the processes that govern the evolution of dust in the ISM.\@ In the diffuse ISM, the depletion patterns found for all available elements, including Fe, have led to the identification of shock waves as the main destruction mechanism of dust (\citealt{jen04} and references therein). In \ion{H}{2} regions, the lack of strong lines from other refractory elements and the reasons outlined above imply that Fe is the best choice to study depletion trends. Such a study, based on the Fe abundances measured in several Galactic \ion{H}{2} regions, suggests that energetic photons are responsible for the destruction of some dust grains in these nebulae \citep{rod96,rod02}. Fe is expected to be in three ionization states in \ion{H}{2} regions: $\mbox{Fe}/\mbox{H}=\mbox{Fe}^{+}/\mbox{H}^{+}+ \mbox{Fe}^{++}/\mbox{H}^{+}+\mbox{Fe}^{3+}/\mbox{H}^{+}$, so that the measurement of \ion{Fe}{2}, \ion{Fe}{3}, and \ion{Fe}{4} emission lines will allow us to determine the Fe abundance in these nebulae. [\ion{Fe}{2}] and [\ion{Fe}{3}] lines, although weak, have already been observed in several \ion{H}{2} regions and starburst galaxies (e.g.\@ \citealt{izo99,rod02}). 
Most of the optical [\ion{Fe}{2}] lines are affected by fluorescence effects \citep{rod99,ver00}, but the Fe$^{+}$ abundance can be estimated from a few lines that are almost insensitive to fluorescence. The Fe$^{+}$ abundance turns out to be low in most of the \ion{H}{2} regions studied to date \citep{rod02}, as expected from the low ionization potential for this ion (16.2 eV). On the other hand, [\ion{Fe}{4}] lines are much weaker than [\ion{Fe}{2}] and [\ion{Fe}{3}] lines and hence very difficult to observe. Therefore, the Fe abundance in \ion{H}{2} regions is usually obtained from [\ion{Fe}{3}] lines and an ionization-correction factor ($ICF$), derived from photoionization models, to account for the contribution of Fe$^{3+}$. The relation \begin{equation} \label{eq1} \frac{\mbox{Fe}}{\mbox{O}}=ICF\,\frac{\mbox{Fe}^{++}}{\mbox{O}^+}= \frac{x(\mbox{O}^+)}{x(\mbox{Fe}^{++})}\frac{\mbox{Fe}^{++}}{\mbox{O}^+}, \end{equation} where $x(X^{n+})$ stands for the ionization fraction of the $X^{n+}$ ion, is especially well suited for determining the Fe abundance from optical observations of \ion{H}{2} regions, since (1) the ionization potentials of the O and Fe ions are similar (30.6 and 54.8 eV for Fe$^{++}$ and Fe$^{3+}$, 35.3 and 54.9 eV for O$^{+}$ and O$^{++}$), and (2) both O$^{+}$ and O$^{++}$ can be measured from strong optical lines and one can get the O abundance $\mbox{O}/\mbox{H}=\mbox{O}^{+}/\mbox{H}^{+} + \mbox{O}^{++}/\mbox{H}^{+}$, and hence also $\mbox{Fe}/\mbox{H}$ from $\mbox{Fe}/\mbox{O}$. The values of the above $ICF$ and their dependence, if any, with the degree of ionization can be found using grids of photoionization models. The available grids of models \citep{sta90,gru92} imply that a constant value for this $ICF$, $x(\mbox{O}^{+})/x(\mbox{Fe}^{++})=1.1$, should provide a good estimate of the total Fe abundance to within $\pm0.2$~dex. 
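As a numerical illustration of equation (\ref{eq1}), the sketch below applies the constant $ICF=1.1$ to hypothetical ionic abundances (the input values are illustrative, not measurements from this paper):

```python
def fe_over_o(fe_pp_h, o_p_h, icf=1.1):
    """Eq. (1): Fe/O = ICF * (Fe++/H+) / (O+/H+), with the constant
    ICF = x(O+)/x(Fe++) = 1.1 implied by the earlier model grids."""
    return icf * fe_pp_h / o_p_h

def fe_over_h(fe_pp_h, o_p_h, o_pp_h, icf=1.1):
    """Fe/H = (Fe/O) * (O/H), with O/H = O+/H+ + O++/H+."""
    return fe_over_o(fe_pp_h, o_p_h, icf) * (o_p_h + o_pp_h)
```

For example, hypothetical values Fe++/H+ = 1e-6, O+/H+ = 1e-4, and O++/H+ = 3e-4 give Fe/O = 1.1e-2 and Fe/H = 4.4e-6.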
However, available measurements of some weak [\ion{Fe}{4}] lines \citep{rub97,rod03} imply Fe$^{3+}$ abundances that are smaller, by factors 3--8, than the values implied by equation (\ref{eq1}) with an $ICF$ equal to 1.1. This ``[\ion{Fe}{4}] discrepancy'' translates into an uncertainty of up to a factor of 5 in the Fe abundances derived for a wide range of objects, from the nearby Orion Nebula to the low metallicity blue compact galaxy SBS~0335$-$052. Thus, the discrepancy has important implications for our understanding of the evolution of dust in \ion{H}{2} regions, the dependence of dust depletion factors on metallicity, and the chemical history of low metallicity dwarf galaxies. In this paper we study the ionization equilibrium of Fe using photoionization models that incorporate recently improved values for all the atomic data relevant to the problem. We compare the new model results with the available observational data, discuss the possible reasons behind the [\ion{Fe}{4}] discrepancy, and study its effect on the reliability of the derived Fe abundances. \section{MODEL RESULTS} We have used the photoionization code NEBULA (see, e.g., \citealt{rub91a,rub91b} and references therein) to calculate a grid of spherically symmetric models of constant density ionized by a single star. We use this grid to determine the value of $x(\mbox{O}^+)/x(\mbox{Fe}^{++})$, the $ICF$ in equation (\ref{eq1}), and its dependence on the degree of ionization given by $x(\mbox{O}^+)/x(\mbox{O}^{++})=\mbox{O}^+/\mbox{O}^{++}$.
We have updated NEBULA with the atomic data derived recently from improved calculations that are relevant to the problem: photoionization cross sections for Fe$^{+}$, Fe$^{++}$, O$^{0}$ and O$^{+}$ \citep{nah94,nah96a,kj02b,ver96}, recombination coefficients for Fe$^{++}$, Fe$^{3+}$, O$^{+}$ and O$^{++}$ \citep{nah96b,nah97,nah99}, the charge-exchange reactions involving these ions (\citealt{kin96}; and the ORNL/UGA Charge Transfer Database for Astrophysics\footnote{http://www-cfadc.phy.ornl.gov/astro/ps/data/home}); and the NLTE model stellar atmospheres of \citet*{stern03} for solar metallicity with surface gravity $\log(g)=4$. The photoionization cross section of Fe$^{+}$ was constructed using both the calculated values of \citet{nah94} and the experimental ones of \citet{kj02b}, following the prescriptions given by the latter authors. Other recent upgrades to the code are described by \citet{sim04}. Our grid of 36 models covers the following parameter space: effective temperature of the ionizing star $T_{\rm eff}=35\,000$, $40\,000$, $45\,000$, and 50\,000~K; total nucleon density $N=100$, 1000, and $10\,000$~cm$^{-3}$; and ``Orion metallicity'' Z$_{\rm Orion}$ ($\mbox{He}/\mbox{H}=0.1$, $\mbox{C}/\mbox{H}=3.3\times10^{-4}$, $\mbox{N}/\mbox{H}=4.5\times10^{-5}$, $\mbox{O}/\mbox{H}=4.0\times10^{-4}$, $\mbox{Ne}/\mbox{H}=8.1\times10^{-5}$, $\mbox{S}/\mbox{H}=2.2\times10^{-5}$, $\mbox{Ar}/\mbox{H}=4.5\times10^{-6}$, $\mbox{Si}/\mbox{H}=3.0\times10^{-6}$, $\mbox{Fe}/\mbox{H}=3.0\times10^{-6}$), Z$_{\rm Orion}/10$, and Z$_{\rm Orion}/30$. All the fluxes of the ionizing stars were normalized to get a total number of ionizing photons s$^{-1}$ for hydrogen of $10^{49}$, but we checked that this has no effect on the $ICF$: we ran two new models with $10^{51}$ ionizing photons s$^{-1}$ and found that they follow the same trend of $x(\mbox{O}^+)/x(\mbox{Fe}^{++})$ versus $\mbox{O}^+/\mbox{O}^{++}$ defined by the original grid.
This same consistent behavior was found when we used one of the supergiant models of \citet{stern03}, with $T_{\rm eff}=40\,000$~K and $\log(g)=3.4$, as the ionizing star. The photoionization cross sections for the O and, especially, the Fe ions show some sharply peaked resonances arising from excitations to autoionizing states (quasi-bound states, above the ionization threshold). The energies at which these resonances occur, their widths and their peak intensities can be uncertain by a few percent or more, as seen in the direct comparison with experimental data in the few instances where the latter are available (e.g.\@ \citealt{kj02a,kj02b} for O$^+$ and Fe$^+$, respectively). For this reason, and for reasons of computational ease, the photoionization cross sections are usually smoothed or fitted with simple functions. We used the analytic fits of \citet{ver96} for O$^{0}$ and O$^{+}$, and, following \citet*{baum98}, we smoothed the photoionization cross sections for Fe$^{+}$ and Fe$^{++}$ by performing a convolution with a Gaussian of width 3\% of the energy. The fluxes of the stellar atmospheres were also smoothed by convolving with a Gaussian of width 1\% of the energy. Figure~\ref{sgv} shows the values of the $ICF$ in equation (\ref{eq1}), $x(\mbox{O}^{+})/x(\mbox{Fe}^{++})$, obtained from various models as a function of the degree of ionization given by $\mbox{O}^{+}/\mbox{O}^{++}$. The results of previous ionization models \citep{sta90,gru92} for metallicities that go from solar to 1/50 of solar are also shown for comparison. Our new models show lower values for the $ICF$, and a small dependence on the degree of ionization. It can also be seen in Figure~\ref{sgv} that the results for solar or near solar metallicity show slightly larger values for the $ICF$ than the results for lower metallicities. This is due to the relatively high optical depth reached in the outer parts of these solar-metallicity models at the O$^+$ ionization edge.
At lower metallicities this optical depth becomes negligible. This dependence on the metallicity is small and will not be further considered. A least-squares fit to the new model results in Figure~\ref{sgv} leads to the ionization-correction scheme: \begin{equation} \label{eq2} \frac{\mbox{Fe}}{\mbox{O}}= \frac{x(\mbox{O}^+)}{x(\mbox{Fe}^{++})}\frac{\mbox{Fe}^{++}}{\mbox{O}^+}= 0.9\,\biggl(\frac{\mbox{O}^{+}}{\mbox{O}^{++}}\biggr)^{0.08}\, \frac{\mbox{Fe}^{++}}{\mbox{O}^+}. \end{equation} \section{COMPARISON WITH THE OBSERVATIONS} The $ICF$ implied by the models can be compared with the values derived empirically for the handful of objects where the observed spectra include any [\ion{Fe}{4}] line (along with diagnostic lines, [\ion{O}{2}], [\ion{O}{3}], and [\ion{Fe}{3}] lines). To the objects considered by \citet{rod03}, we have added several objects where [\ion{Fe}{4}]~$\lambda6739.8$ has been observed recently: the \ion{H}{2} regions M42 \citep{est04} and NGC~3576 \citep{gar04}, and three of the planetary nebulae (PNe) observed by \citet{liu04a}: NGC~6210, NGC~6826, and NGC~6884. \citet{liu04a} provided the intensities of other [\ion{Fe}{4}] lines in some objects, but all of them are blends or possible misidentifications. Physical conditions and ionic abundances have been derived following the same procedure outlined in \citet{rod03}, except that the new values for the Fe$^{3+}$ transition probabilities of \citet{fro04} have been used for all the objects. The Fe$^{3+}$ abundances implied by these new data differ from the values presented by \citet{rod03} by less than 20\%. Table~\ref{t1} shows the physical conditions used in the abundance determination; Table~\ref{t2} shows the ionic and total abundances derived for all the objects. 
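Equation (\ref{eq2}) is straightforward to apply in practice; the sketch below contrasts it with the constant $ICF=1.1$ implied by the earlier grids:

```python
def icf_new(o_p_over_o_pp):
    """Eq. (2): ICF = x(O+)/x(Fe++) = 0.9 * (O+/O++)**0.08,
    the least-squares fit to the new model grid."""
    return 0.9 * o_p_over_o_pp ** 0.08

ICF_OLD = 1.1  # constant value implied by the earlier model grids

# At O+/O++ = 1 the new ICF equals 0.9, and it decreases slowly
# toward higher degrees of ionization (smaller O+/O++), staying
# below the old constant over the modeled range.
```

Multiplying a measured Fe$^{++}$/O$^{+}$ ratio by icf_new(o_p_over_o_pp) gives the Fe/O estimate of equation (\ref{eq2}).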
In all the objects but two (N88A and SBS~0335$-$052), the abundances have been derived with the usual two-zone scheme: we used the electron temperature implied by the [\ion{O}{3}] diagnostic lines, $T_e$[\ion{O}{3}], to derive the O$^{++}$ and Fe$^{3+}$ abundances, and $T_e$[\ion{N}{2}] to derive the O$^+$ and Fe$^{++}$ abundances. In N88A and SBS~0335$-$052, we used $T_e$[\ion{O}{3}] to derive all ionic abundances. We could have used instead an uncertain estimate of $T_e$[\ion{N}{2}] obtained from $T_e$[\ion{O}{3}] and one of the existing relations between the two $T_e$'s (either empirical or derived from photoionization models), but this would not change our results in a significant way. For example, with the relation of \citet*{cam86} we obtain $T_e\mbox{[\ion{N}{2}]}=12\,900$~K in N88A~bar. If we had used this $T_e$ instead of $T_e\mbox{[\ion{O}{3}]}=14\,200$~K to derive the O$^+$ and Fe$^{++}$ abundances in this object, the total Fe and O abundances presented in Table~\ref{t2} would change by less than 0.05~dex, and the value of any of the ionic ratios we will be considering would remain within the error bars. As in \citet{rod03}, it has been assumed that $\mbox{O}/\mbox{H}= \mbox{O}^+/\mbox{H}^++\mbox{O}^{++}/\mbox{H}^+$, $\mbox{Fe}/\mbox{H}= \mbox{Fe}^{+}/\mbox{H}^++\mbox{Fe}^{++}/\mbox{H}^++\mbox{Fe}^{3+}/\mbox{H}^+$, and that the Fe$^+$ abundance is negligible in those objects showing a high degree of ionization. The spectra of the PNe and SBS~0335$-$052 show some lines from high ionization ions, such as [\ion{O}{4}], [\ion{Fe}{5}], [\ion{Fe}{6}] or [\ion{Fe}{7}], but we expect that only traces of these ions are likely to be present in our sample objects. \citet{liu04b} calculated $\mbox{O}^{3+}/\mbox{H}^+\sim3\times10^{-5}$ from the [\ion{O}{4}] line at $25.9$~$\mu$m measured by the Infrared Space Observatory ({\sl ISO}) in NGC~6884. This $\mbox{O}^{3+}$ abundance is just 8\% of our adopted O abundance. 
\citeauthor{liu04b} also estimated that the contribution of $\mbox{O}^{3+}$ to the total abundance would be $\sim10$ times lower for NGC~6210 and completely negligible for NGC~6826. Furthermore, since $\mbox{Fe}^{3+}$ has an ionization potential very close to that of $\mbox{O}^{++}$ ($54.8$ and $54.9$~eV, respectively), $\mbox{Fe}^{4+}$ is not likely to have a significant concentration. Similar considerations, based on the $\mbox{He}^{++}/\mbox{He}^+$ ratio, were used by \citet{rod03} to conclude that the concentrations of $\mbox{O}^{3+}$ and $\mbox{Fe}^{4+}$ are likely to be very low for the other high ionization objects. The values of $x(\mbox{O}^{+})/x(\mbox{Fe}^{++})$ versus $\mbox{O}^{+}/\mbox{O}^{++}$ implied by the results in Table~\ref{t2} are compared with the model results in Figure~\ref{ru}. The new model results, although closer to the measured values than were the previous model predictions, are still unable to explain the measured values. We constructed additional models with a different geometry, where the ionized gas is located in a shell around the star. We used Z$_{\rm Orion}/10$, $T_{\rm eff}=45\,000$ and $50\,000$~K, $N=100$, 1000, and $10\,000$~cm$^{-3}$, and internal radii for the shell in the range 0.1--5.4 pc. Typical results, shown in Figure~\ref{ru} as stars, are very close to those obtained from the spherical models. We also calculated the $ICF$ implied by different lines of sight through the spherical model with Z$_{\rm Orion}/10$, $T_{\rm eff}=45\,000$~K, and $N=100$~cm$^{-3}$. The results followed the same trend defined by the shell models, and did not help in explaining the discrepancy. Hence, we did not pursue this approach further. One could also speculate that the discrepancy is due to the fact that we are comparing simple constant-density models with complex real objects. \citet*{moo04} found that this can lead to significant errors in ratios like $\mbox{O}^{+}/\mbox{O}^{++}$. 
However, according to the models, Fe$^{++}$ and O$^+$ should have similar concentrations, and [\ion{O}{2}] and [\ion{Fe}{3}] lines should form in similar regions in the nebula, and hence these ions should be affected by similar systematic effects. \citet{rod03} discussed the most likely explanations of this discrepancy, namely, errors in the collision strengths used to derive the Fe$^{++}$ and Fe$^{3+}$ abundances or errors in the $ICF$ derived from models, probably arising from errors in the input parameters governing the ionization equilibrium. If the model-predicted $ICF$ is seriously wrong, then the trend defined in Figure~\ref{ru} by the observed objects will lead to an $ICF$ that should be more reliable than the one predicted by the model results. The trend is only clearly defined for those objects whose degree of ionization is within the range covered by the models (i.e. with $\log(\mbox{O}^{+}/\mbox{O}^{++})$ above $\sim-1.35$). We note that since our photoionization models are tailored for \ion{H}{2} regions, the highest $T_{\rm eff}$ we are considering is $50\,000$~K.\@ For this $T_{\rm eff}$, there are very few ionizing photons with energies above $\sim54$~eV and O$^{++}$ and Fe$^{3+}$ (both with ionization potentials above 54~eV) are not expected to be substantially further ionized. The three objects with higher degree of ionization in Figure~\ref{ru} are PNe, where the central stars can reach or surpass $T_{\rm eff}$ of $100\,000$~K. Even if these objects do not have significant concentrations of either O$^{3+}$ or Fe$^{4+}$, as discussed above, they might have small amounts of these ions that could lead to a change in the trend followed by the $ICF$ with $\log(\mbox{O}^{+}/\mbox{O}^{++})$. Hence, a change in the trend at $\log(\mbox{O}^{+}/\mbox{O}^{++})\sim-1.4$ does not seem unlikely. Thus we limit the relationship to define an $ICF$ for lower degrees of ionization. 
A least-squares linear fit to the data for those objects with $\log(\mbox{O}^{+}/\mbox{O}^{++})>-1.35$ in Figure~\ref{ru} leads to the ionization-correction scheme: \begin{equation} \label{eq3} \frac{\mbox{Fe}}{\mbox{O}}= \frac{x(\mbox{O}^+)}{x(\mbox{Fe}^{++})}\frac{\mbox{Fe}^{++}}{\mbox{O}^+}= 1.1\,\biggl(\frac{\mbox{O}^{+}}{\mbox{O}^{++}}\biggr)^{0.58}\, \frac{\mbox{Fe}^{++}}{\mbox{O}^+}, \end{equation} which should be valid for $-1.35<\log(\mbox{O}^{+}/\mbox{O}^{++})<-0.1$. For $\log(\mbox{O}^{+}/\mbox{O}^{++})\geq-0.1$, the concentrations of Fe$^{++}$ and O$^+$ will grow, making these ions the dominant ionization states, and a constant $ICF$: \begin{equation} \label{eq4} \frac{\mbox{Fe}}{\mbox{O}}=\frac{\mbox{Fe}^{+}+\mbox{Fe}^{++}}{\mbox{O}^+}, \end{equation} would be the preferred choice. The contribution of Fe$^+$ will still be very small for most objects \citep{rod02}. In \S~\ref{why} we consider what changes in the relevant input parameters that affect the ionization equilibrium of the models would be needed to explain the discrepancy. Changes in the values of the collision strengths are considered in \S~\ref{col}. \section{AN ERROR IN THE MODELS' $ICF$? \label{why}} The $ICF$ implied by the photoionization models depends on the following factors: (1) the number of photoionizations of Fe$^{++}$ and O$^{+}$, which in turn depends on the photoionization cross sections for these ions and on the spectral energy distribution of the ionizing flux, (2) the number of radiative and dielectronic recombinations of Fe$^{3+}$ and O$^{++}$, which depends on the recombination coefficients, and (3) the rate of the charge-exchange reactions leading to recombinations of Fe$^{3+}$ and O$^{++}$. The smoothing of the photoionization cross sections and the stellar atmospheres could introduce errors in the number of photoionizations computed in the models. 
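The empirical scheme is piecewise in the degree of ionization (eq. [3] in its fitted range, eq. [4] above it). A minimal Python sketch follows; the function name, variable names, and sample ionic abundances are ours and purely illustrative.

```python
import math

def fe_over_o_empirical(o_plus, o_pp, fe_pp, fe_plus=0.0):
    """Empirical ionization-correction scheme of eqs. (3)-(4).
    Ionic abundances are relative to H+; Fe+ is usually negligible."""
    log_r = math.log10(o_plus / o_pp)
    if log_r < -1.35:
        raise ValueError("outside the fitted range of eq. (3)")
    if log_r < -0.1:
        # eq. (3): Fe/O = 1.1 * (O+/O++)**0.58 * Fe++/O+
        return 1.1 * (o_plus / o_pp)**0.58 * fe_pp / o_plus
    # eq. (4): Fe+ (if measured), Fe++ and O+ are the dominant ions
    return (fe_plus + fe_pp) / o_plus

# Invented ionic abundances with log(O+/O++) = -1:
print(fe_over_o_empirical(o_plus=2e-5, o_pp=2e-4, fe_pp=5e-8))
```

The steeper 0.58 exponent of equation (3), compared with 0.08 in the model-based scheme, is what encodes the discrepancy discussed below.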
In order to constrain these errors, we calculated the number of photoionizations of Fe$^{++}$ implied by the original stellar fluxes and photoionization cross section of Fe$^{++}$ by integrating the product of these two quantities between $h\nu_0=30.65$~eV, the ionization potential of Fe$^{++}$, and $h\nu_1=54.4$~eV, the ionization potential of He$^+$ and the maximum energy considered in the photoionization models. We compared the result with the number of photoionizations implied by the smoothed stellar fluxes and found that the original value was 1\% lower than the smoothed one for $T_{\rm eff}=35\,000$~K and 18, 6, and 9\% higher for $T_{\rm eff}=40\,000$, $45\,000$, and 50\,000~K, respectively. These differences are far too small to change our results in a significant way. We consider now the effect of changes in the stellar ionizing flux. The ionizing fluxes could be wrong because of uncertainties in the stellar atmosphere models, or because we are using the results for models with solar metallicity whereas lower metallicities might be more appropriate. We can constrain the kind of changes we need to check by noting that the smoothed photoionization cross sections for Fe$^{++}$ and O$^{+}$ are very similar for energies above the O$^{+}$ ionization threshold (Fig.~\ref{ip}). Therefore, only a change in the ionizing flux in the energy range between the two ionization thresholds (i.e. between $30.65$ and $35.12$ eV) can have a significant effect on the $ICF$. A lower ionizing flux in this energy range will change the $ICF$ in the right direction to solve the discrepancy. We did a test with two of the model atmospheres, dividing their fluxes by up to a factor of ten in the energy interval of interest (see Fig.~\ref{at}), thus bringing the fluxes in this interval very close to zero. We achieved this by dividing the fluxes by the function $1+ 9 \exp[-0.5((E-2.145)/0.06)^2]$, where $E$ is the energy in Ry.
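The suppression function used in this test is easy to reproduce; the sketch below simply evaluates it (all numerical constants are taken from the text, the function name is ours).

```python
import math

def flux_divisor(E_ry):
    """Factor by which the stellar fluxes were divided:
    1 + 9 exp[-0.5 ((E - 2.145)/0.06)^2], with E in Rydbergs.
    It peaks at a factor of 10 and falls back to 1 away from the peak."""
    return 1.0 + 9.0 * math.exp(-0.5 * ((E_ry - 2.145) / 0.06) ** 2)

print(flux_divisor(2.145))        # 10.0 at the peak
print(flux_divisor(2.145 + 0.3))  # essentially 1.0 a few widths away
```

The narrow width (0.06 Ry) confines the suppression to the interval of interest while leaving the rest of the ionizing continuum untouched.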
We ran two models using these new stellar fluxes (with $N=100$~cm$^{-3}$ and Z$_{\rm Orion}$), and found that the decrease in the $ICF$ was $\simeq0.07$ dex, far too small to explain the discrepancy. We believe that this rules out any uncertainty in the stellar ionizing flux distributions as the main cause behind the discrepancy. We are then left with errors in the atomic data governing the ionization equilibrium of O and Fe as the possible explanations of the discrepancy. Since the Fe ions are more complex than the O ions, their atomic data are more difficult to calculate and hence more uncertain. We will center our discussion on the effects of changes in the ionization and recombination data for Fe. Changes in the data for O going in the opposite direction from those we will consider for the Fe data would also help in explaining the discrepancy, but we note that the discrepancy was first discovered as related to Fe by considering tailor-made models for M42, i.e., the discrepancy seems to be independent of the degree of ionization given by the O ions \citep{rub97}. We consider the $\mbox{Fe}^{++}\longleftrightarrow\mbox{Fe}^{3+}$ ionization equilibrium, since Fe$^+$ has a low concentration in all the objects of our sample. 
At a given point in a nebula, the ionization-equilibrium equation for these two ions is given by (see, e.g.\@ \citealt{ost89}): \begin{equation} \label{eq5} N(\mbox{Fe}^{++})\int_{\nu_0}^{\nu_1}\frac{4\pi J_\nu}{h\nu}\, \sigma(\mbox{Fe}^{++})\,d\nu =N(\mbox{Fe}^{3+})N_e\,\alpha(\mbox{Fe}^{++},T_e)+ N(\mbox{Fe}^{3+})\,N(\mbox{H}^0)\,\delta(T_e), \end{equation} where $N(X)$ is the volume density of $X$, $J_\nu$ is the mean intensity of the radiation field at the point, $\sigma(\mbox{Fe}^{++})$ is the photoionization cross section for Fe$^{++}$, $\alpha(\mbox{Fe}^{++},T_e)$ is the total (radiative plus dielectronic) recombination coefficient of $\mbox{Fe}^{3+}$, and $\delta(T_e)$ is the rate coefficient of the charge-exchange reaction $\mbox{Fe}^{3+} + \mbox{H}^0 \rightarrow \mbox{Fe}^{++} + \mbox{H}^+$. For a given $\mbox{O}^{+}/\mbox{O}^{++}$ we can get a lower value of the $ICF$ $x(\mbox{O}^{+})/x(\mbox{Fe}^{++})$ by decreasing the photoionization cross section for Fe$^{++}$ or increasing either the recombination coefficient of Fe$^{3+}$ or the rate of the aforementioned charge-exchange reaction. We selected two models to use as templates. They are ionized by stars with $T_{\rm eff}=40\,000$~K and $10^{49}$ ionizing photons s$^{-1}$, and $T_{\rm eff}=50\,000$~K, $10^{51}$ ionizing photons s$^{-1}$; both have $N=100$~cm$^{-3}$ and Z$_{\rm Orion}/10$. We then tested sequentially the effect on the results of these models of changes by a factor of 2 in the photoionization cross section, and by factors of 2 and 10 in each of the recombination coefficient and the rate of the charge-exchange reaction. We did not consider a change by a factor of 10 in the photoionization cross section of Fe$^{++}$ because the comparisons of calculated cross sections with the available experimental data (e.g.\@ \citealt{kj02a,kj02b} for O$^+$ and Fe$^+$, respectively) usually show better agreement. 
The results from the two model templates and from the test calculations are shown as connected open circles in Figure~\ref{cha}, where it can be seen that a change by a factor of 10 in the recombination data will be needed in order to reproduce the observed results. A factor of 10 uncertainty would not be unexpected for the dielectronic part of a recombination coefficient or for the rate of a charge-exchange reaction \citep{fer03}. \section{ERRORS IN THE COLLISION STRENGTHS FOR Fe$^{++}$ OR Fe$^{3+}$? \label{col}} Figure~\ref{cham}a shows the comparison between the $x(\mbox{O}^{+})/x(\mbox{Fe}^{++})$ $ICF$ implied by models and observations when either the derived Fe$^{++}$ abundances are divided by a factor $2.5$ or the Fe$^{3+}$ abundances are multiplied by the same factor. Figure~\ref{cham}b shows the same comparison for $\mbox{Fe}^{++}/\mbox{Fe}^{3+}$ as a function of $\mbox{O}^{+}/\mbox{O}^{++}$. It can be seen that this factor of 2.5 change in the relative abundances of Fe$^{++}$ and Fe$^{3+}$ would lead to an agreement between observations and models. It might then look promising that the recent calculations of collision strengths for Fe$^{++}$ by \citet{mcl02} differ from the previous results we are using here \citep{zha96} by factors up to 2. This new atomic data might then imply Fe$^{++}$ abundances lower by a factor of 2, thus reducing the discrepancy in Fig.~\ref{ru}. However, \citet{mcl02} calculate the collision strengths only for transitions between terms, whereas the fine-structure values are needed to derive Fe$^{++}$ abundances and, most important in the present context, to check their reliability by comparing the predicted relative line intensities with the observed ones. 
The atomic data we are using here for [\ion{Fe}{3}] do seem to be reliable, since they lead to consistent abundances for the various [\ion{Fe}{3}] lines (\citealt{rod02,rod03}) and, when the [\ion{Fe}{3}] lines are used as an electron density diagnostic, they lead to values similar to those implied by the usual diagnostics such as [\ion{S}{2}], [\ion{Cl}{3}] or [\ion{Ar}{4}] \citep{gar04,est04}. However, it is possible to preserve this consistency while changing the values of the collision strengths so that they lead to lower Fe$^{++}$ abundances. Since the upper levels of the optical [\ion{Fe}{3}] lines are mainly populated through collisional transitions from the ground term, the relative intensities of the lines we are considering will not change significantly if all the collision strengths for transitions originating in the ground term are changed by a similar factor. It is therefore suggestive that all the term-averaged collision strengths of \citet{zha96} for transitions from the ground term are lower than the results given by \citet{mcl02} by factors $\sim2$. A test calculation shows that if the collision strengths of \citet{zha96} for transitions from the ground term are enhanced by a factor of 2, the [\ion{Fe}{3}] line ratios remain mostly unaffected and the Fe$^{++}$ abundances are lower by a factor of $\sim2$. Hence, this change in the Fe$^{++}$ collisional data would explain most, if not all, the discrepancy. New calculations that provide the collision strengths for the fine-structure levels will be extremely valuable in order to test this idea. On the other hand, the discrepancy might be due to errors in the atomic data for Fe$^{3+}$, which are difficult to test through a comparison between observed and predicted relative line intensities because the lines are very weak and difficult to measure. 
Our new model results along with the new observational results for M42 and, especially, for NGC~3576 in Figure~\ref{ru}, which are very close to the expected ones, allow us to rule out the large uncertainties in the collision strengths, of factors 6--7, contemplated by \citet{rod03}, and to settle for changes by factors 2--3. A comparison between the results in Figures~\ref{cha} and \ref{cham} shows that both the explanation involving errors in the collision strengths of Fe$^{3+}$ or Fe$^{++}$ by a factor 2--3 and the explanation requiring a change in the recombination data for Fe$^{3+}$ (the recombination coefficient or the rate of the charge-exchange reaction with H$^0$) by a factor of 10 look equally plausible (it must be taken into account that the model results show some dispersion around their defined trends). Of course, the final explanation for the discrepancy might require a combination of causes, but the fact that we cannot decide on the most likely or on the more important one introduces an uncertainty in the Fe abundances calculated for the various objects. In the next section, we assess this uncertainty and try to see what constraints we can place on the Fe abundance in the nebulae of our sample. \section{CONSTRAINING THE IRON ABUNDANCES} The last two columns in Table~\ref{t2} show the Fe abundances derived for all the objects from the sum of the ionic abundances (col.~[8]) and from the Fe$^{++}$ abundance and the $ICF$ (see eq.~[\ref{eq2}]) implied by our photoionization models (col.~[9]).
If the model-predicted $ICF$ is seriously wrong, the best values for the Fe abundance will be those shown in column~(8); if the discrepancy is only due to errors in the Fe$^{3+}$ collision strengths, the best values will be those in column~(9); and if the discrepancy is mainly due to errors in the Fe$^{++}$ collision strengths by the factor of $\sim2$ suggested by the calculations of \citet{mcl02} (see \S~\ref{col}), the best values will be those shown in column~(9) lowered by $\sim0.3$~dex. Figure~\ref{dep} shows the depletion factors for the Fe/O abundance ratio ($[\mbox{Fe}/\mbox{O}]=\log(\mbox{Fe}/\mbox{O})-\log(\mbox{Fe}/\mbox{O})_\odot$) implied by these three possibilities as a function of the O/H abundance ratio. We note that if the discrepancy is due to some combination of the aforementioned causes, the errors required in any of the atomic data are likely to be lower than those considered above, and the depletion factors will consequently be intermediate between those shown in the three panels of Figure~\ref{dep}. We use Fe/O to calculate depletion factors because our objects have different metallicities (see the values of the O/H abundance ratio in Table~\ref{t2}) but are likely to have a near solar value for Fe/O (considering the abundances in gas and dust), or at least, their intrinsic Fe/O abundance will show less variation than either O/H or Fe/H. Indeed, the Fe/O abundance ratio has been found to be near solar or slightly ($\sim0.2$~dex) below solar for the Magellanic Clouds and for other low-metallicity dwarf galaxies (see, e.g., \citealt{ven01}; \citealt*{she01}, and references therein). We have used $\log(\mbox{Fe}/\mbox{O})_\odot=-1.2$, a value that agrees to within $\pm0.1$~dex with recent determinations of the solar Fe and O abundances \citep{hol01,asp04,mel04}. Even if we do not know which are the correct values for the nebular Fe abundances, several things can be inferred from the results in Figure~\ref{dep}.
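The depletion factor plotted in the figure is a one-line computation once gas-phase Fe/H and O/H are in hand. A short sketch follows; the sample abundances are invented for illustration, not taken from Table~2.

```python
import math

def fe_o_depletion(fe_h, o_h, log_fe_o_sun=-1.2):
    """[Fe/O] = log10(Fe/O) - log10(Fe/O)_sun, with the adopted
    solar value log(Fe/O)_sun = -1.2 (good to about +/-0.1 dex)."""
    return math.log10(fe_h / o_h) - log_fe_o_sun

# Invented gas-phase abundances giving a cold-cloud-like depletion:
print(round(fe_o_depletion(fe_h=2.5e-7, o_h=5e-4), 2))  # about -2.1
```

A depletion of $-2$ dex means only $\sim1$\% of the Fe is in the gas phase, which is why the choice among columns (8) and (9) matters at the tenths-of-a-dex level but not for the qualitative conclusion.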
Since we are considering quite extreme variations of the atomic data entering in the abundance determinations, we can use the range of abundances as a reasonable constraint on the true Fe abundances. An inspection of the depletion factors in Figure~\ref{dep} shows that the Galactic \ion{H}{2} regions (M42, NGC~3576) and PNe (IC~4846, NGC~6210, NGC~6826, and NGC~6884) have depletion factors in the range $-1.3$ to $-2.0$, intermediate between the values observed for warm and cold clouds in the Galactic ISM, where typical depletion factors are $\sim-1.2$ and $\sim-2.2$, respectively \citep{sav96}. Thus most of the Fe atoms are in dust grains in these nebulae: less than $\sim5$\% of their total Fe abundance is present in the gas. The depletion factor in LMC~30~Doradus, where the metallicity is about $0.2$~dex lower, is at the higher end of the above range, $\sim-1.4$. In SMC~N88A, with about $0.5$~dex lower metallicity than in the Galactic objects, the depletion factor is lower, in the range $-0.5$ to $-1.1$. The blue compact galaxy SBS~0335$-$052, with a metallicity $-1.3$~dex below those of the Galactic nebulae, shows a somewhat lower depletion, in the range $-0.4$ to $-0.9$. The depletions could be somewhat lower for the metal-poor objects if their intrinsic Fe/O abundance ratios are below solar, as commented above. This trend of higher depletions at higher metallicities, which was also shown to hold for a smaller sample of objects by \citet{rod02}, is consistent with what we know about depletion factors in the ISM of the Magellanic Clouds \citep{wel99,wel01} and with a recent measurement of the gas to dust ratio in the SMC \citep{bot04}. A similar trend has been found to hold for Damped Ly$\alpha$ systems \citep{vla04}. The trend could arise from a low efficiency of dust-formation processes at low metallicities or, at least for the objects in our sample, from a high dust-destruction rate due to the harsh radiation fields usually found in metal-poor galaxies. 
Further measurements of both [\ion{Fe}{3}] and [\ion{Fe}{4}] lines in a sample of metal-poor galaxies would help to constrain this issue. We note that the depleted Fe atoms are not likely to be in the form of silicates in N88A.\@ \citet*{roc87} did not find the $9.7$~$\mu$m silicate feature in the IR spectrum of this \ion{H}{2} region, and \citet{kurt99} derived a solar value for the Si/O abundance ratio. Furthermore, \citet{wel01} found that Si (and Mg) are essentially undepleted in the SMC ISM, concluding that silicates cannot be an important component of dust in this galaxy. This lack of silicate dust does not seem to be a common feature of low-metallicity galaxies, since the $9.7$~$\mu$m feature has been detected in SBS~0335$-$052 \citep{hou04}. The depleted Fe may be in the form of oxides or metallic grains, or deposited onto carbon grains. Such Fe-containing non-silicate dust grains are also considered to be an important dust component in our own Galaxy, where less than half of the depleted Fe can be accounted for with silicates (see \citealt{whi03} and references therein). \section{SUMMARY AND CONCLUSIONS} We have presented a detailed analysis of the current discrepancy between the observationally derived and model predicted concentrations of Fe ions in ionized nebulae. We have calculated new photoionization models that incorporate state-of-the-art values for all the atomic data relevant to the problem. The predicted Fe ionic concentrations have been compared with those implied by the available observational data. Our new model results are closer to the observed values than previous calculations, but there is still a discrepancy that translates into an uncertainty in the derived Fe abundances of a factor up to $3.7$. We have studied the possible reasons for this discrepancy and conclude that the most likely explanations are those due to uncertainties in the atomic data. 
We are able to find a satisfactory agreement between the model predictions and the observations in three different ways: (1) increasing either the total recombination coefficient or the rate of the charge exchange reaction with H$^0$ for Fe$^{3+}$ by a factor of $\sim10$, (2) decreasing the collision strengths for Fe$^{3+}$ by a factor of 2--3, and (3) increasing the collision strengths for Fe$^{++}$ by a factor of 2--3. Of course, if errors in different atomic data are involved, the above factors need not be as large. Since we are considering quite drastic changes in the atomic data involved in the abundance calculation, we feel justified in using the Fe abundances implied by the three possible explanations listed above as a way to constrain the true Fe abundances in the gas of our sample objects. Our set of Galactic \ion{H}{2} regions and PNe have Fe depletion factors ($\log(\mbox{Fe}/\mbox{O})-\log(\mbox{Fe}/\mbox{O})_\odot$) below $-1.3$; most of the Fe atoms are deposited onto dust grains in these nebulae. Only a few per cent or less of their Fe atoms are in the gas phase. The extragalactic \ion{H}{2} regions of our sample (LMC~30~Doradus, SMC~N88A, and SBS~0335$-$052) show somewhat lower depletions and help define a trend of increasing depletion with increasing metallicity (see Fig.~\ref{dep}). The depletion factor in SBS~0335$-$052, one of the most metal-deficient galaxies known, is only poorly constrained: it should be in the range $-0.9$ to $-0.4$. The exact amount of depletion in this interesting object will not be known until the [\ion{Fe}{4}] discrepancy is fully explained. \acknowledgments We thank Janet Simpson and Grazyna Stasi\'nska for helpful comments. The comments of an anonymous referee also helped to improve the paper. MR acknowledges support from Mexican CONACYT project J37680-E.\@ Support for RHR was from the NASA Long-Term Space Astrophysics (LTSA) Program and the Astrophysics Data Program (ADP). 
RHR thanks Scott McNealy for providing a Sun workstation. The models were run on Cray computers at NASA/Ames (Consolidated Supercomputer Management Office) and at JPL; time on the latter was provided by funding from JPL Institutional Computing and Information Services and the NASA Offices of Earth Science, Aeronautics, and Space Science.
\section{Introduction} Let $\N$ denote the set of non-negative integers, $X$ be a separable and infinite dimensional Banach space over the real or complex scalar field $\K$, and let $\mathcal{B}(X)$ denote the algebra of bounded linear operators on $X$. An operator $T \in \mathcal{B}(X)$ is called {\it hypercyclic} if there exists $x \in X$ such that $\{T^nx:n\in \N\}$ is dense in $X$ and such a vector $x$ is said to be a hypercyclic vector for $T$. By $HC(T)$, we will denote the set of hypercyclic vectors for $T$. Disjointness in hypercyclicity was introduced independently by Bernal \cite{Ber07} and by B\`es and Peris \cite{BePe07} in 2007. For $N \geq 2$, operators $T_1, \ldots,T_N \in \mathcal{B}(X)$ are called {\it disjoint hypercyclic} or {\it d-hypercyclic} if the direct sum operator $T_1 \oplus \dots \oplus T_N$ has a hypercyclic vector of the form $(x, \ldots, x) \in X^N$. Such a vector $x \in X$ is called a d-hypercyclic vector for $T_1, \ldots,T_N$. By $d$-$HC(T_1, \ldots,T_N)$, we will denote the set of d-hypercyclic vectors for $T_1, \ldots,T_N$. If $d$-$HC(T_1, \ldots,T_N)$ is dense, $T_1, \ldots,T_N$ are said to be {\it densely d-hypercyclic}. It is well known that for $T \in \mathcal{B}(X)$, the set $HC(T)$ is either empty or dense and it is non-empty if and only if $T$ is {\it topologically transitive}, that is, for any two non-empty open $U,V \subset X$, there exists a positive integer $n$ such that $U \cap T^{-n}(V) \neq \emptyset$. Similarly, B\`es and Peris \cite{BePe07} proved that operators $T_1, \ldots,T_N$ are densely d-hypercyclic if and only if they are {\it d-topologically transitive}, that is, for any non-empty open $U, V_1, \ldots, V_N \subset X$, there exists a positive integer $n$ such that $U \cap T_1^{-n}(V_1) \cap \ldots \cap T_N^{-n}(V_N) \neq \emptyset$.
In \cite{SaSh14}, contrary to the single operator case, Sanders and Shkarin showed the existence of d-hypercyclic operators which are not densely d-hypercyclic and, therefore, fail to be d-topologically transitive. We next recall a sufficient condition for d-hypercyclicity, a natural extension of the Hypercyclicity Criterion which has played a significant role in linear dynamics. Note that for $N = 1$, the following definition gives the single operator version of the Hypercyclicity Criterion. \begin{definition} \cite{BePe07}\label{def:CoHC} {\rm Let $(n_k)$ be a strictly increasing sequence of positive integers. We say that $T_1,\dots,T_N\in \mathcal{B}(X)$ satisfy the {\it d-Hypercyclicity Criterion with respect to $(n_k)$} provided there exist dense subsets $X_0, X_1,\dots, X_N$ of $X$ and mappings $S_{m, k}:X_m\to X$ with $1\le m \le N, \ k\in \N$ satisfying \begin{equation} \label{eq:CoHC} \begin{aligned} T_m^{n_k} &\underset{k\to\infty}{\longrightarrow} 0 \ \ \ \ \ \mbox{ pointwise on $X_0$,} \\ S_{m, k} &\underset{k\to\infty}{\longrightarrow} 0 \ \ \ \ \ \mbox{ pointwise on $X_m$, and } \\ \left( T_m^{n_k}S_{i,k}-\delta_{i,m}\, Id_{X_m} \right) &\underset{k\to\infty}{\longrightarrow } 0 \ \ \ \ \ \mbox{ pointwise on $X_m$ $(1\le i\le N)$.} \end{aligned} \end{equation} In general, we say that $T_1,\dots,T_N$ satisfy the d-Hypercyclicity Criterion if there exists some sequence $(n_k)$ for which \eqref{eq:CoHC} is satisfied.} \end{definition} \begin{theorem} \cite[Theorem 2.7]{BePe07} \label{T:HDCoHC} $T_1,\dots, T_N \in \mathcal{B}(X)$ satisfy the d-Hypercyclicity Criterion if and only if for each $r\in\N$, the direct sum operators $\bigoplus_{i=1}^{r}T_1, \dots , \bigoplus_{i=1}^{r}T_N$ are d-topologically transitive on $X^r$. \end{theorem} In \cite{BePe99}, B\`es and Peris showed that an operator $T \in \mathcal{B}(X)$ satisfies the Hypercyclicity Criterion if and only if it is {\it weakly mixing}, that is, $T \oplus T$ is topologically transitive.
An older result of Furstenberg \cite{Fu67} also shows that $T$ is weakly mixing if and only if for each $r\in\N$, the direct sum operator $\bigoplus_{i=1}^{r}T$ is topologically transitive. In a landmark result, De la Rosa and Read \cite{DeRe09} constructed a Banach space that supports a hypercyclic operator which is not weakly mixing, and thus fails to satisfy the Hypercyclicity Criterion. In the disjointness case, the picture is again different. We say $T_1,\dots, T_N \in \mathcal{B}(X)$ are {\it d-weakly mixing} if $T_1 \oplus T_1, \ldots, T_N \oplus T_N$ are d-topologically transitive on $X^2$. In \cite{SaSh14}, Sanders and Shkarin also showed that every Banach space supports d-weakly mixing operators which fail to satisfy the d-Hypercyclicity Criterion. Combining the above-mentioned results, we have the following implications:\\ \begin{center} {\it d-Hypercyclicity Criterion $\Rightarrow$ d-weakly mixing $\Rightarrow$ d-topologically transitive $\Rightarrow$ d-hypercyclic} \end{center} \vspace{2mm} \noindent and none of the reverse implications hold. In this short note, we are interested in the disjoint version of a strong recurrence property of hypercyclic operators, called frequent hypercyclicity, which was introduced by Bayart and Grivaux \cite{BaGr06}. An operator $T \in \mathcal{B}(X)$ is called {\it frequently hypercyclic} if there exists some $x \in X$ such that for every non-empty open set $U \subset X$ the set $\{n : T^nx \in U\} \subset \N$ has positive lower density. Such a vector $x$ is called a {\it frequently hypercyclic vector} for the operator $T$. Recall that the lower density of a set $A \subset \N$ is defined by \begin{equation*} \mbox{\underline{dens}} A:=\liminf_{N\to \infty}\frac{card\{n \leq N: n\in A\}}{N}. \end{equation*} Frequent hypercyclicity clearly implies hypercyclicity.
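The lower density in this definition is easy to approximate numerically: one computes the counting ratio for increasing $N$ and takes the liminf. A small Python sketch (the helper function is ours; elements $n \leq N$, including $0$ if present, are counted):

```python
def density_upto(A, N):
    """Finite counting ratio card{ n <= N : n in A } / N; its liminf
    over N is the lower density dens(A) from the definition above."""
    return sum(1 for n in A if n <= N) / N

evens = set(range(0, 10001, 2))
squares = {n * n for n in range(101)}
print(density_upto(evens, 10000))    # near 1/2: positive lower density
print(density_upto(squares, 10000))  # near 0: the squares are too sparse
```

So an orbit that visits $U$ only along the squares would be hypercyclic but not frequently hypercyclic; frequent hypercyclicity demands visits along a set like the even numbers.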
In fact, Grosse-Erdmann and Peris \cite{GrPe05} showed that frequently hypercyclic operators are weakly mixing and, therefore, they satisfy the Hypercyclicity Criterion. We say $T_1, \ldots , T_N \in \mathcal{B}(X)$ with $N \geq 2$ are {\it d-frequently hypercyclic} if there exists a vector $x$ in $X$ such that the vector $(x, \dots, x)$ is a frequently hypercyclic vector for the direct sum operator $T_1 \oplus \cdots \oplus T_N$ on $X^N$. Clearly, d-frequent hypercyclicity implies d-hypercyclicity; however, it is not clear whether d-frequent hypercyclicity implies d-weakly mixing or even d-topological transitivity (dense d-hypercyclicity). In the next section, we will answer in the negative the following questions posed in \cite{MaSa16}:\\ \begin{question} If $T_1,T_2 \in \mathcal{B}(X)$ are d-frequently hypercyclic, must they be d-weakly mixing? Must they satisfy the d-Hypercyclicity Criterion?\\ \end{question} \section{d-Frequently hypercyclic operators which fail to be d-weakly mixing} For any $T, T_1, \ldots, T_N \in \mathcal{B}(X)$, let $FHC(T)$ and $d$-$FHC(T_1, \ldots, T_N)$ denote the sets of frequently hypercyclic vectors of $T$ and d-frequently hypercyclic vectors of $T_1, \ldots, T_N$, respectively. Note that if $(f_1, \ldots, f_N) \in HC(T \oplus \ldots \oplus T)$ then the vectors $f_1, \ldots, f_N$ must be linearly independent. By modifying the results of Sanders and Shkarin in \cite{SaSh14}, we first show the existence of d-frequently hypercyclic operators which fail to be d-weakly mixing. \begin{lemma}\label{lemma_invertible} Let $T,L$ be in $\mathcal{B}(X)$ and $L$ be invertible. If $S := L^{-1}TL$, then $f \in d$-$FHC(T, S)$ if and only if $(f, Lf) \in FHC(T \oplus T)$. \end{lemma} \begin{proof} If $f \in d$-$FHC(T, S)$, then for any two non-empty, open sets $U,V \subset X$ we have \[ \mbox{\underline{dens}}\{n \in \N: T^n f \in U, S^n f \in L^{-1}(V) \} > 0.
\] This gives that $\mbox{\underline{dens}}\{n \in \N: T^n f \in U, T^n L f \in V \} > 0$ and, therefore, $(f, Lf) \in FHC(T \oplus T)$. Now if we assume $(f, Lf) \in FHC(T \oplus T)$ and $U,V \subset X$ are non-empty and open, then by invertibility of $L$, we can say that \[ \mbox{\underline{dens}}\{n \in \N: T^n f \in U, T^n Lf \in L(V) \} > 0. \] But, this implies $\mbox{\underline{dens}}\{n \in \N: T^n f \in U, L^{-1}T^n L f \in V \} > 0$ and, thus, $f \in d$-$FHC(T, S)$. \end{proof} \begin{lemma}\label{lemma_iterations} Let $T$ be in $\mathcal{B}(X)$ and $(f,g) \in FHC(T \oplus T)$. \begin{enumerate} \item For any $r \in \N$, $(T^r f, g) \in FHC(T \oplus T)$. \item For any non-zero $c \in \K$, $(f, f + cg) \in FHC(T \oplus T)$. \end{enumerate} \end{lemma} \begin{proof} To prove the first statement, let $U,V \subset X$ be any two non-empty open sets and $r \in \N$. We have that \[ \mbox{\underline{dens}}\{n \in \N: T^n f \in T^{-r}(U), T^n g \in V \} > 0, \] which implies $\mbox{\underline{dens}}\{n \in \N: T^n(T^rf) \in U, T^n g \in V \} > 0$, and $(T^r f, g) \in FHC(T \oplus T)$. For the second statement, let $c \in \K$ with $c \neq 0$ and $U,V \subset X$ be non-empty and open sets. Choose $u_0 \in U$ and $v_0 \in V$ and define $x_0 := v_0 - u_0$. Choose $\varepsilon > 0$ such that $B(u_0, \varepsilon) \subset U$ and $B(v_0, \varepsilon) \subset V$ where $B(x, r)$ denotes the ball centered at $x \in X$ with radius $r > 0$. By our assumption, we have that \[ \mbox{\underline{dens}}\{n \in \N: T^n f \in B(u_0, \varepsilon / 2), T^n g \in c^{-1} B(x_0, \varepsilon / 2) \} > 0. \] This implies $\mbox{\underline{dens}}\{n \in \N: T^n f \in B(u_0, \varepsilon / 2), T^n f + cT^n g \in B(v_0, \varepsilon) \} > 0$, and therefore, \[ \mbox{\underline{dens}}\{n \in \N: T^n f \in U, T^n (f + cg) \in V \} > 0. \] Thus, $(f, f + cg) \in FHC(T \oplus T)$. 
\end{proof} \begin{theorem}\label{theorem_weakly_mixing} \label{theorem_izmir} Let $X$ be a separable Banach space and $T \in \mathcal{B}(X)$ such that $T \oplus T$ is frequently hypercyclic on $X \times X$. Then, there exists $S \in \mathcal{B}(X)$ such that $T, S$ are densely d-frequently hypercyclic but they fail to be d-weakly mixing. \end{theorem} \begin{proof} Let $(f,g) \in FHC(T \oplus T)$ where $f$ and $g$ are linearly independent. By the Hahn-Banach Theorem, there exists a linear functional $\lambda$ in the dual space $X^*$ such that $\lambda(f) = 1$ and $\lambda(g) = 0$. Define $L \in \mathcal{B}(X)$ by $Lx = x +\lambda(x)g$ for $x \in X$. Then $L$ is an invertible operator with inverse $L^{-1}x = x - \lambda(x)g$. Now, define $S := L^{-1}TL$. First, we show that $T, S$ are densely d-frequently hypercyclic. To this end, let $U \subset X$ be non-empty and open and choose $r \in \N$ such that $T^r f \in U$ and $\lambda(T^r f) \neq 0$; such an $r$ exists because the orbit of $f$ under $T$ is dense in $X$ and the set $U \cap \{x \in X : \lambda(x) \neq 0\}$ is non-empty and open. Since $(f,g) \in FHC(T \oplus T)$, we have that $(T^r f,g) \in FHC(T \oplus T)$ by Lemma \ref{lemma_iterations}(1) and $(T^r f, T^r f + \lambda(T^r f )g) \in FHC(T \oplus T)$ by Lemma \ref{lemma_iterations}(2). The last expression means $(T^r f, LT^r f) \in FHC(T \oplus T)$, which in turn gives that $T^r f \in d$-$FHC(T, S)$ by Lemma \ref{lemma_invertible}. Now, in order to reach a contradiction, assume that $T \oplus T$, $S \oplus S$ are d-hypercyclic. Then there exists $(x,y) \in X \times X$ such that the set $$ \{(T^n x, T^n y, S^n x, S^n y): n \geq 0\} = \{(T^n x, T^n y, L^{-1}T^n Lx, L^{-1}T^n Ly): n \geq 0\} $$ is dense in $X^4$. Therefore, $(x, y, Lx, Ly) \in HC(T \oplus T \oplus T \oplus T)$. As in Lemma \ref{lemma_iterations}, from the last statement we can derive \[ (x, y, Lx - x, Ly - y) \in HC(T \oplus T \oplus T \oplus T). \] This implies $(x, y, \lambda(x)g, \lambda(y)g) \in HC(T \oplus T \oplus T \oplus T)$, or in particular, $(\lambda(x)g, \lambda(y)g) \in HC(T \oplus T)$. 
But, this is a contradiction since the vectors $\lambda(x)g$ and $\lambda(y)g$ are linearly dependent. \end{proof} In \cite{DeFrGrPe12}, De la Rosa et al. showed that any complex separable infinite-dimensional Banach space with an unconditional Schauder decomposition supports an operator $T$ such that $T \oplus T$ is frequently hypercyclic. Indeed, $T$ has a perfectly spanning set of eigenvectors associated to unimodular eigenvalues, and so does $T\oplus T$. Hence, we have the following. \begin{corollary} \label{corollary_existence} Every complex separable infinite-dimensional Banach space with an unconditional Schauder decomposition supports a densely d-frequently hypercyclic tuple of operators which fails to be d-weakly mixing and, therefore, fails to satisfy the d-Hypercyclicity Criterion. \end{corollary} Chan \cite{Ch01} showed that the hypercyclic operators on a separable infinite-dimensional Hilbert space form a dense subset of the algebra of continuous linear operators in the strong operator topology and, later, this result was extended to Fr\'echet spaces by B\`es and Chan \cite{BeCh03}. In \cite{MaSa20}, the first author and Sanders showed that for any d-hypercyclic $T_1, \ldots, T_N \in \mathcal{B}(X)$ the set of operators $S \in \mathcal{B}(X)$ such that $T_1, \ldots, T_N, S$ remain d-hypercyclic is SOT-dense in $\mathcal{B}(X)$. Motivated by these results, we give the next theorem. \begin{theorem} \label{theorem_lima} Let $X$ be a separable Banach space and $T \in \mathcal{B}(X)$ such that $T \oplus T$ is frequently hypercyclic on $X \times X$. Then the set $$ \{S \in \mathcal{B}(X): T, S \text{ are d-frequently hypercyclic but do not satisfy the d-Hypercyclicity Criterion}\} $$ is SOT-dense in $\mathcal{B}(X)$. \end{theorem} \begin{proof} Let $T \in \mathcal{B}(X)$ so that $T \oplus T$ is frequently hypercyclic on $X \times X$. 
Let $A \in \mathcal{B}(X)$ be arbitrary and $$\mathcal{U}_{e_1,\ldots,e_N, \epsilon} := \{B \in \mathcal{B}(X): \|B(e_i) - A(e_i)\| < \epsilon, 1 \leq i \leq N\}$$ be a SOT-neighborhood of $A$ where $e_1,\ldots,e_N \in X$ are linearly independent and $\epsilon >0$. Since $X$ supports a frequently hypercyclic operator, $X$ is infinite dimensional and one can find $f_1, \ldots, f_N \in X$ such that $\|f_i - A(e_i)\| < \epsilon$ for $1 \leq i \leq N$, and $e_1,\ldots, e_N, f_1, \ldots, f_N$ are linearly independent. Now choose $x_1, \ldots, x_N, Tx_1, \ldots, Tx_N \in FHC(T)$ and $(f,g) \in FHC(T \oplus T)$ so that the set $$ \mathcal{I} := \{e_1,\ldots, e_N, f_1, \ldots, f_N, x_1, \ldots, x_N, Tx_1, \ldots, Tx_N, f, g\} $$ is linearly independent. For each $1 \leq i \leq N$, pick $x_i^*, y_i^* \in X^*$ such that $x_i^*(e_i) = x_i^*(x_i) = 1$ with $x_i^* \equiv 0$ on $\mathcal{I} \backslash \{e_i, x_i\}$ and $y_i^*(f_i) = y_i^*(Tx_i) = 1$ with $y_i^* \equiv 0$ on $\mathcal{I} \backslash \{f_i, Tx_i\}$. Let $E := span\{e_1,\ldots, e_N, f_1, \ldots, f_N\}$, $F := span \{x_1, \ldots, x_N, Tx_1, \ldots, Tx_N\}$, and $Z := \bigcap_{i=1}^N (\ker x_i^* \cap \ker y_i^*)$. Then, $X = E \oplus Z = F \oplus Z$. Now, note that $(f,g) \in FHC(T \oplus T) \cap (Z \times Z)$. By the Hahn-Banach theorem, we can choose a $\lambda \in X^*$ such that $\lambda(f) = 1$ and $\lambda(g) = 0$ and define $L \in \mathcal{B}(X)$ as follows: For any $x \in X = E \oplus Z$, there exist unique $y \in E$ and $z \in Z$ such that $x = y+ z$ with $y$ in the form $$ y = \sum_{j=1}^N \alpha_j e_j + \sum_{j=1}^N \beta_j f_j. $$ Then define $L(x)$ as $$ L(x) = \sum_{j=1}^N \alpha_j x_j + \sum_{j=1}^N \beta_j Tx_j + z + \lambda(z)g. $$ It is easy to see that $L$ is invertible where for any $x \in X = F \oplus Z$ with $x = y + z$, $y = \sum_{j=1}^N \alpha_j x_j + \sum_{j=1}^N \beta_j Tx_j \in F$, and $z \in Z$, we have $$ L^{-1}(x) = \sum_{j=1}^N \alpha_j e_j + \sum_{j=1}^N \beta_j f_j + z - \lambda(z)g. 
$$ Now, if we define $S := L^{-1}TL$, then $S \in \mathcal{U}_{e_1,\ldots,e_N, \epsilon}$ since $Se_i = f_i$ for $1 \leq i \leq N$. By Lemma \ref{lemma_iterations}, $(f,g) \in FHC(T \oplus T)$ implies that $(f, f + \lambda(f)g) \in FHC(T \oplus T)$. Since $f \in Z$, we have $Lf = f + \lambda(f)g$; thus $(f, Lf) \in FHC(T \oplus T)$ and, as before, $f \in d$-$FHC(T, S)$ by Lemma \ref{lemma_invertible}. Lastly, we need to show that $T, S$ do not satisfy the d-Hypercyclicity Criterion. By Theorem \ref{T:HDCoHC}, it is enough to show that $\bigoplus_{i=1}^{4N+2}T, \bigoplus_{i=1}^{4N+2}S$ cannot be d-topologically transitive. By way of contradiction, assume that $(u_1, \ldots, u_{4N+2})$ is a d-hypercyclic vector for the direct sums $\bigoplus_{i=1}^{4N+2}T, \bigoplus_{i=1}^{4N+2}S$. This means that $$ \{(T^nu_1, \ldots, T^nu_{4N+2}, S^nu_1, \ldots, S^nu_{4N+2}): n \geq 0\} $$ is dense in $X^{8N+4}$. This, in turn, implies that $$ (u_1, \ldots, u_{4N+2}, Lu_1, \ldots, Lu_{4N+2}) \in HC\left(\bigoplus_{i=1}^{8N+4}T\right). $$ By Lemma \ref{lemma_iterations}, we conclude that $$ (u_1, \ldots, u_{4N+2}, Lu_1 - u_1, \ldots, Lu_{4N+2} - u_{4N+2}) \in HC\left(\bigoplus_{i=1}^{8N+4}T\right), $$ or \begin{equation}\label{eq:Lx-x} (Lu_1 - u_1, \ldots, Lu_{4N+2} - u_{4N+2}) \in HC\left(\bigoplus_{i=1}^{4N+2}T\right). \end{equation} Now, it is enough to observe that, for $1 \leq i \leq 4N+2$, we have $$ Lu_i - u_i \in span\{e_1,\ldots, e_N, f_1, \ldots, f_N, x_1, \ldots, x_N, Tx_1, \ldots, Tx_N, g\}, $$ and, therefore, the set $\{Lu_i - u_i : 1 \leq i \leq 4N+2\}$ of $4N+2$ vectors in a $(4N+1)$-dimensional subspace is linearly dependent, contradicting (\ref{eq:Lx-x}). \end{proof} We remind the reader that determining whether $T\oplus T$ is frequently hypercyclic whenever $T$ is frequently hypercyclic has remained an open problem since Bayart and Grivaux \cite{BaGr06} first posed it in 2006. 
Therefore, we cannot remove the condition of frequent hypercyclicity of $T\oplus T$ in Theorem \ref{theorem_izmir} and Theorem \ref{theorem_lima}. However, we have a different panorama concerning $\mathcal{U}$-frequent hypercyclicity and reiterative hypercyclicity. We get these notions when we replace the positive lower density condition in the definition of frequent hypercyclicity by positive upper density and positive upper Banach density, respectively (see \cite{BeMePePu16}). A recent result by Ernst et al. \cite{ErEsMe} asserts that $T$ is $\mathcal{U}$-frequently hypercyclic (reiteratively hypercyclic) if and only if $T\oplus T$ is $\mathcal{U}$-frequently hypercyclic (reiteratively hypercyclic). Now, observe that the proofs of Theorem \ref{theorem_izmir} and Theorem \ref{theorem_lima} adapt easily to the notions of $\mathcal{U}$-frequent hypercyclicity and reiterative hypercyclicity without any conditions on $T \oplus T$. To the best of our knowledge, the following question is open: \begin{question} Does every separable Banach space support a reiteratively hypercyclic operator? \end{question} In \cite{BeMaPeSh12}, B\`es et al. showed the existence of a mixing operator $T$ such that $T, T^2$ are not d-mixing and asked whether there exists a mixing operator $T$ such that $T, T^2$ are not even d-topologically transitive. Using a result in ergodic Ramsey theory, the second author \cite{Pu17} answered this question in the affirmative by showing that the same operator $T$ given by B\`es et al. is also chaotic, and $T, T^2$ fail to be d-topologically transitive. So, we pose the following question: \begin{question} Is there an example of a frequently hypercyclic operator $T$ for which $T,T^2$ are not d-topologically transitive? 
\end{question} We end this note by restating the following open question, which was also posed in \cite{MaSa16}: \begin{question} If $T_1,T_2 \in \mathcal{B}(X)$ are d-frequently hypercyclic, must they be densely d-hypercyclic (equivalently, d-topologically transitive)? \end{question}
\section{Introduction} \vspace{-.2cm} This paper gives a categorical structure to each of the classes of Lie bialgebras, Manin triples, classical $r$-matrices, $\calo$-operators and pre-Lie algebras by introducing their morphisms. The morphisms are compatible with the natural correspondences among these classes, so that these correspondences become functors. \vspace{-.2cm} \subsection{Lie bialgebras, Manin triples and the related structures} The Lie bialgebra is the algebraic structure corresponding to a Poisson-Lie group. It is also the classical structure of a quantized universal enveloping algebra~\mcite{CP,D}. The great importance of Lie bialgebras is reflected by their close relationships with several other fundamental notions. First, Lie bialgebras are characterized by Manin triples and matched pairs of Lie algebras~\mcite{D1}. In fact, there is a one-one correspondence between Lie bialgebras and Manin triples of Lie algebras. The same holds for Lie bialgebras and matched pairs of Lie algebras associated to coadjoint representations. Furthermore, solutions of the classical Yang-Baxter equation, or the classical $r$-matrices, naturally give rise to coboundary Lie bialgebras \cite{CP,STS}. Moreover, such solutions are provided by $\calo$-operators, which in turn are provided by pre-Lie algebras. See~\cite{Bai2,Bai3,Bu,Ku} for more details. These close relations can be summarized in the following diagram, where some of the correspondences go both ways, shown by arrows in both directions, and some are even one-one correspondences, shown by double arrows in both directions. 
\vspace{-.1cm} \begin{equation} \begin{split} \xymatrix{ &&&\text{matched pairs of}\atop \text{Lie algebras} \\ \text{pre-Lie}\atop\text{ algebras} \ar@<.4ex>[r] & \mathcal{O}\text{-operators on}\atop\text{Lie algebras}\ar@<.4ex>[l] \ar@<.4ex>[r]& \text{solutions of}\atop \text{CYBE} \ar@<.4ex>[l] \ar@2{->}[r]& \text{Lie}\atop \text{ bialgebras} \ar@2{<->}[d] \ar@2{<->}[u] \\ &&& \text{Manin triples of}\atop \text{Lie algebras} & } \end{split} \mlabel{eq:bigdiag} \end{equation} \vspace{-.6cm} \subsection{Existing morphisms of the structures} For further studies and applications of the important structures and their relations in the above diagram, it is fundamental to understand them in the context of categories. The first step in this direction is to define suitable morphisms for these structures to make them into categories, so that their relations can be made precise as functors and equivalences among these categories. Unfortunately, the understanding of their morphisms is quite limited and can be summarized as follows. \vspace{-.2cm} \subsubsection{Morphisms of Lie bialgebras and Manin triples} A notion of homomorphisms of Lie bialgebras has been defined in analogy with homomorphisms of associative bialgebras and has been applied in the important quantization of Lie bialgebras~\mcite{CP,En,EK,EK1}. The notion of isomorphisms of two Manin triples is also defined and agrees with the isomorphisms of Lie bialgebras. However, the expected extension of isomorphisms of Manin triples to homomorphisms does not agree with the homomorphisms of their corresponding Lie bialgebras~\mcite{CP}. Explicitly, let $f$ be a homomorphism between two Lie bialgebras $(\g_1, \delta_1)$ and $(\g_2,\delta_2)$. Then $f:\g_1\mto \g_2$ is a homomorphism of Lie algebras and $f^*:\g_2^*\rightarrow \g_1^*$ is a homomorphism of the dual Lie algebras. 
It is natural to expect that $f$ should also give a homomorphism of the corresponding Manin triples $(\g_1\bowtie \g_1^*,\g_1,\g_1^*)$ and $(\g_2\bowtie \g_2^*,\g_2,\g_2^*)$. In fact, as the most natural choice, the map $f+f^*$ should be a homomorphism between the two Lie algebras $\g_1\bowtie \g_1^*$ and $\g_2\bowtie \g_2^*$. However, this does not hold, even in the case when $\g_1=\g_2$. \vspace{-.1cm} \subsubsection{Morphisms of classical $r$-matrices, $\calo$-operators and pre-Lie algebras} Recently, a notion of weak homomorphisms of classical $r$-matrices was introduced in~\mcite{TBGS} in association with a notion of homomorphisms of the corresponding $\calo$-operators. But this notion of weak homomorphisms is defined only for the skew-symmetric classical $r$-matrices (hence only for triangular Lie bialgebras). It is natural to ask what to expect without the skew-symmetric condition, so that the anticipated homomorphisms of classical $r$-matrices are compatible with suitably defined homomorphisms of Lie bialgebras. Homomorphisms of pre-Lie algebras are naturally defined, but it is not known how they could be preserved under the correspondence between pre-Lie algebras and $\calo$-operators. \vspace{-.2cm} \subsection{Our approach} Thus, so far, our understanding of the important classes of objects in \meqref{eq:bigdiag} and their relationships mostly remains at the level of sets and set correspondences. Morphisms for these objects are either not defined, or are defined but not compatible with the correspondences. The goal of this paper is to introduce morphisms for all the classes in the diagram so that the diagram gives functors and natural equivalences of the resulting categories. Since the homomorphisms we will define for the various classes of objects are compatible with the correspondences among the classes, we use the uniform term of {\bf \weak homomorphisms} for the new homomorphisms in all the classes. To reach our goal, we utilize two strategies. 
The first strategy is a polarization process that allows us to first consider endomorphisms instead of homomorphisms. The second strategy is a change of order of the operations which equip Lie algebras with two extra structures: bialgebras and endomorphisms. \vspace{-.1cm} \subsubsection{Polarization} Our strategy of polarization is in a sense similar to polarizing a homogeneous polynomial in one variable to a multilinear form in several variables by linear substitutions and derivations~\mcite{Pr}. The depolarization is then evaluation along the diagonal. For us, depolarization of morphisms in a category gives endomorphisms. So for each of the classes of constructions such as Lie bialgebras, instead of attempting to define morphisms between any two of them, we focus on the special case of endomorphisms on any given object. Taking Lie bialgebras as an example, we first define endomorphisms of a Lie bialgebra, instead of homomorphisms between any two Lie bialgebras. This depolarization does not reduce the level of difficulty by itself, but allows us to look at the endomorphisms of, say, a Lie bialgebra from a different angle and to apply the strategy of changing the order of operations. \vspace{-.1cm} \subsubsection{Endomorphisms of Lie algebras and bialgebras of Lie algebras with endomorphisms} With the polarization reduction, our goal is to define endomorphisms of Lie bialgebras and the related structures in diagram \meqref{eq:bigdiag}. To define endomorphisms for a Lie bialgebra, we can regard it as equipping an extra endomorphism structure to the Lie bialgebra which, by itself, is obtained from equipping a bialgebra structure to a Lie algebra. Thus we are looking at the composition of two processes of equipping two extra structures to a given structure, beginning with a Lie algebra. Our second strategy is to switch the order of these two processes. This is shown in the following diagram for the instance of Lie bialgebras. 
Each of the other classes in Diagram~\meqref{eq:bigdiag} is also obtained from equipping a Lie algebra with an extra structure, so can be treated in the same way. \vspace{-1cm} \begin{equation} \begin{split} \xymatrix{ \text{\small Lie algebras} \ar[rrr]^{\text{\small endomorphism}} \ar[d]^{\text{\small bialgebraization}} &&& \text{\small Endo Lie algebras} \ar[d]^{\text{\small bialgebraization}} \\ \text{\small Lie bialgebras}\atop {=\text{\small Bi-Lie algebras}} \ar[rrr]^{\text{\small endomorphism}} &&& {\text{\small Bi-endo Lie algebras}} \atop \text{\small =Endo Lie bialgebras} } \end{split} \mlabel{eq:diag} \end{equation} \vspace{-.4cm} In this diagram, the vertical arrows are equipping a given structure with a suitable bialgebra structure and the horizontal arrows are equipping a given structure with suitable endomorphisms. Our goal for a notion of endomorphisms of Lie bialgebras, which amounts to endomorphisms of the bialgebra structures on Lie algebras, is to start from the Lie algebras in the diagram and go downward and then right. Instead of attacking this apparently challenging task, we go first right and then downward; that is, we first equip Lie algebras with endomorphisms, called {\bf endo Lie algebras}. We then attempt to equip endo Lie algebras with a bialgebra structure. The notion of an endo Lie algebra is interesting in its own right: an associative algebra, in particular a commutative associative algebra, equipped with an injective endomorphism is called a difference algebra~\mcite{Co,Le}, introduced as an algebraic study of difference equations and a close analogue of differential algebras~\mcite{PS}. The same strategy is applied to the other structures in Diagram~\meqref{eq:bigdiag}: instead of defining endomorphisms of a Lie algebra with the various extra structures in the diagram, we define the various extra structures on an endo Lie algebra, resulting in the following diagram. 
\vspace{-.2cm} \begin{equation} \begin{split} \xymatrix{ &&&\text{matched pairs of}\atop \text{endo Lie algebras} \\ \text{endo} \atop \text{pre-Lie algebras} \ar@<.4ex>[r] & \mathcal{O}\text{-operators on}\atop\text{endo Lie algebras}\ar@<.4ex>[l] \ar@<.4ex>[r]& \text{solutions of}\atop \text{endo CYBE} \ar@<.4ex>[l] \ar@2{->}^{}[r]& \text{bialgebras of}\atop \text{endo Lie algebras} \ar@2{<->}_{}[d] \ar@2{<->}^{}[u] \\ &&& \text{Manin triples of}\atop \text{endo Lie algebras} & } \end{split} \mlabel{eq:bigdiagend} \end{equation} \vspace{-.4cm} Once constructed, each of the structures on endo Lie algebras naturally gives rise to a notion of endomorphisms of this structure on Lie algebras. Once this is done, as noted above, a polarization process extends the notion of endomorphisms to a notion of homomorphisms, equipping each class of objects with a category structure. Furthermore, the correspondences among the classes are naturally functors and, in the case of one-one correspondences, equivalences. These are summarized in the following enriched diagram of \meqref{eq:bigdiag}. Here the labels on the arrows indicate the results where the functors and equivalences are established. It is remarkable that the natural categorical structure of pre-Lie algebras is compatible with the category of $\calo$-operators introduced here. In fact, there is a pair of adjoint functors between the two categories. 
\vspace{-.6cm} \begin{equation} \mlabel{eq:bigdiagcat} \begin{split} \xymatrix{ &&&&&& {\begin{subarray}{c} \text{category of}\\ \text{matched pairs of}\\ \text{Lie algebras} \end{subarray}}\\ {\begin{subarray}{c} \text{category of}\\ \text{pre-Lie} \\ \text{algebras}\end{subarray}} \ar@<.3ex>[rr]^{\rm Prop~\mref{pro:prelieo}}_{\rm Prop~\mref{pro:otopl}}&& {\begin{subarray}{c} \text{category of } \\ \mathcal{O}\text{-operators on} \\ \text{Lie algebras} \end{subarray}} \ar@<.3ex>[ll]\ar@<.3ex>[rr]^{\rm Cor~ \mref{cor:zero}}&& {\begin{subarray}{c} \text{category of } \\ \text{solutions of}\\ \text{CYBE} \end{subarray}} \ar@<.4ex>[ll]^{\rm {Thm}~\mref{pp:oprm}} \ar@2{->}^{\rm {Thm}~\mref{co:r12}}[rr]&& {\begin{subarray}{c} \text{category of} \\ \text{Lie bialgebras}\end{subarray}} \ar@2{<->}_{\rm Thm~\mref{thm:equivalence111}}[d] \ar@2{<->}^{\rm Thm~\mref{thm:equivalence111}}[u] \\ &&&&&& {\begin{subarray}{c} \text{category of} \\ \text{Manin triples of}\\ \text{Lie algebras}\end{subarray}} } \end{split} \end{equation} \vspace{-.5cm} \subsection{Outline of the paper} We first introduce in Section~\mref{sec:endolie} the notion of an endo Lie algebra and formulate a bialgebra theory for endo Lie algebras together with their equivalences to matched pairs and Manin triples of endo Lie algebras. We then interpret a bialgebra of endo Lie algebras as an endomorphism of a Lie bialgebra, which then naturally generalizes to a homomorphism of Lie bialgebras that is compatible with that of Manin triples (and matched pairs) of Lie algebras, showing that the correspondences among Lie bialgebras, matched pairs of Lie algebras, and Manin triples of Lie algebras are equivalences of categories (Theorem~\mref{thm:equivalence111}). We then extend in Section~\mref{sec:rmat} the classical relations of Lie bialgebras with the classical Yang-Baxter equation as well as classical $r$-matrices to the context of endo Lie algebras. 
This naturally gives rise to a notion of \weak homomorphisms for any $r$-matrices, not just the skew-symmetric ones. This notion is shown to be compatible with the \weak homomorphisms of Lie bialgebras, leading to a functor of the corresponding categories (Theorem~\mref{co:r12}). Finally, in Section~\mref{sec:oop}, we give the notion of $\calo$-operators on endo Lie algebras and apply it to define \weak homomorphisms of $\calo$-operators in a way that is compatible with the \weak homomorphisms of classical $r$-matrices in Section~\mref{sec:rmat} (Theorem~\mref{pp:oprm} and Corollary~\mref{cor:zero}). This notion of \weak homomorphisms of $\calo$-operators is moreover compatible with the natural homomorphisms of pre-Lie algebras, giving rise to a pair of adjoint functors between the corresponding two categories (Propositions~\mref{pro:prelieo} and \mref{pro:otopl}). We also consider a case where all the constructions can be given explicitly, providing natural examples of \weak isomorphisms of Lie bialgebras that are not the previously defined isomorphisms. This further justifies the significance of the \weak homomorphisms of Lie bialgebras introduced in this paper. \smallskip \noindent {\bf Notations. } Throughout this paper, all vector spaces, tensor products, and linear homomorphisms are over a fixed field $K$. Let $\End(V)$ denote the space of linear operators on a vector space $V$. The vector spaces and Lie algebras are finite dimensional unless otherwise specified. \vspace{-.3cm} \section{Endo Lie algebras and their bialgebras} \mlabel{sec:endolie} In this section we introduce the notion of endo Lie algebras and give the equivalent structures of bialgebras, matched pairs and Manin triples for endo Lie algebras. \vspace{-.1cm} \subsection{Endo Lie algebras and their representations} \mlabel{sec:rep} We first introduce the notion of a representation of an endo Lie algebra characterized by a semi-direct product. 
We then introduce the notion of a \drep of an endo Lie algebra in order to construct a reasonable representation on the dual space. \begin{defi} An {\bf endo Lie algebra} is a triple $(\frak g,[\;,\;],\phi)$, or simply $(\frak g, \phi)$, where $ (\frak g,[\;,\;])$ is a Lie algebra and $\phi:\g \rightarrow \g$ is a Lie algebra endomorphism. \end{defi} As an analogue of a Lie algebra representation, we have \begin{defi} A {\bf representation} of an endo Lie algebra $(\frak g, \phi)$ is a triple $(V, \rho, \alpha)$ where $(V, \rho)$ is a representation of the Lie algebra $\g$ and $\alpha\in \End(V)$ such that \begin{equation} \alpha (\rho(x)(v))=\rho(\phi(x))(\alpha(v)),\;\;\forall x\in \g, v\in V. \mlabel{eq:repn} \end{equation} Two representations $(V_1, \rho_1, \alpha_1)$ and $(V_2, \rho_2, \alpha_2)$ of an endo Lie algebra $(\g, \phi)$ are called {\bf equivalent} if there exists a linear isomorphism $\varphi:V_1\rightarrow V_2$ such that \begin{equation} \varphi (\rho_1(x)(v))=\rho_2(x)(\varphi(v)),\quad \varphi \alpha_1 (v)=\alpha_2\varphi(v),\quad \forall x\in \g, v\in V_1. \mlabel{de:2.1} \end{equation} \end{defi} With $\ad$ denoting the adjoint representation of the Lie algebra $\g$, the triple $(\g, \ad,\phi)$ is naturally a representation of the endo Lie algebra $(\g,\phi)$, called the {\bf adjoint representation} of $(\g,\phi)$. For vector spaces $V_1$ and $V_2$, and linear maps $\phi_1:V_1\rightarrow V_1$ and $\phi_2:V_2\rightarrow V_2$, we write $\phi_1+\phi_2$ for the linear map \begin{equation} \phi_{V_1\oplus V_2}: V_1\oplus V_2\mto V_1\oplus V_2, \quad \phi_{V_1\oplus V_2}(v_1+v_2):=\phi_1(v_1)+\phi_2(v_2),\quad \forall v_1\in V_1, v_2\in V_2. \mlabel{eq:Liehom} \end{equation} For a Lie algebra $\g$, a linear space $V$ and a linear map $\rho:\g\rightarrow {\rm End}(V)$, define a multiplication $[\cdot,\cdot]_\ltimes$ on $\g\oplus V$ by \begin{equation} [x+u,y+v]_\ltimes:=[x,y]+\rho(x)v-\rho(y)u,\;\;\forall x,y\in \g, u,v\in V. 
\mlabel{eq:semipro} \end{equation} As is well-known, $\g\oplus V$ is a Lie algebra if and only if $(V,\rho)$ is a representation of $\g$. The Lie algebra is denoted by $\g\ltimes_\rho V$ and is called the {\bf semi-direct product} of $\g$ and $V$. Similarly for an endo Lie algebra, we have \begin{pro} Let $(\g,\phi)$ be an endo Lie algebra, $(V, \rho)$ a representation of the Lie algebra $\g$ and $\alpha$ a linear operator on $V$. Then $(\g\ltimes_{\rho}V, \phi+\alpha)$ is an endo Lie algebra if and only if $(V,\rho,\alpha)$ is a representation of $(\g,\phi)$. The resulting endo Lie algebra $(\g\ltimes_{\rho} V, \phi+\alpha)$ is called the {\bf semi-direct product} of the endo Lie algebra $(\g,\phi)$ and its representation $(V,\rho,\alpha)$. \mlabel{pro:lhpsemi} \end{pro} This result follows as a special case of matched pairs of endo Lie algebras in Theorem~\mref{thm:3.1} in which the Lie algebra $\h:=V$ is the abelian Lie algebra. We next turn our attention to dual representations of endo Lie algebras. Denote the usual pairing between the dual space $V^*$ and $V$ by \begin{equation}\mlabel{eq:pair} \langle\, ,\, \rangle : V^*\times V\mto K, ~~\langle w^*, v \rangle :=w^*(v), \quad \forall v\in V, w^*\in V^*. \end{equation} For a linear map $\varphi: V\mto W$, the transpose of $\varphi$ is defined by \begin{equation} \mlabel{eq:trans} \varphi^*: W^*\mto V^*, \quad \varphi^*(w^*)(v):=w^* (\varphi(v)),\;\;\forall w^*\in W^*,v\in V. \end{equation} For a representation $(V,\rho)$ of a Lie algebra $\g$, its {\bf dual representation} is the linear map defined by \begin{equation} \rho^*: \g\mto \End(V^*), \quad \rho^*(x):=-\rho(x)^* ,~~ \forall x\in \g. \mlabel{eq:2.5} \end{equation} To obtain the dual representation of an endo Lie algebra, an extra condition is needed. \begin{lem} \mlabel{lem:admrep} Let $(\g, \phi)$ be an endo Lie algebra. Let $(V, \rho)$ be a representation of the Lie algebra $\g$. 
For $\beta\in \End(V)$, the triple $(V^*,\rho^*,\beta^*)$ is a representation of $(\g,\phi)$ if and only if $\beta$ satisfies \begin{eqnarray} \beta (\rho(\phi(x))(v))=\rho(x) (\beta(v)),\;\;\forall x\in \g, v\in V. \mlabel{eq:dualrepn} \end{eqnarray} \end{lem} \begin{proof} By Eq.~\meqref{eq:repn}, the statement that the triple $(V^*,\rho^*,\beta^*)$ is a representation of $(\g,\phi)$ means that $$ \beta^*\circ \rho^*(x)=\rho^*(\phi(x))\circ \beta^*, \quad \forall x\in \g.$$ Then the lemma follows from Eqs.~\meqref{eq:trans}, \meqref{eq:2.5} and the nondegeneracy of the pairing in Eq.~\meqref{eq:pair}. \end{proof} We reserve a name for this property due to its pivotal role in our study. \begin{defi} Let $(\g, \phi)$ be an endo Lie algebra. Let $(V, \rho)$ be a representation of the Lie algebra $\g$ and let $\beta\in \End(V)$. We say that $\beta$ {\bf \dreping the endo Lie algebra $(\g,\phi)$ on $(V,\rho)$} if $(V^*,\rho^*,\beta^*)$ is a representation of $(\g,\phi)$, that is, Eq.~\meqref{eq:dualrepn} holds. When $(V,\rho)$ is taken to be the adjoint representation $(\g,\ad)$ of the Lie algebra $\g$, we simply say that $\beta$ {\bf \dreping $(\g,\phi)$}. \mlabel{de:admop} \end{defi} For later applications, we display some direct consequences. By Lemma~\mref{lem:admrep} we have \vspace{-.1cm} \begin{cor} Let $(\g, \phi)$ be an endo Lie algebra. A linear operator $\psi$ on $\g$ \dreping $(\g,\phi)$ if and only if \vspace{-.3cm} \begin{eqnarray} \psi[\phi(x),y]=[x,\psi(y)],\;\;\forall x,y\in \g. \mlabel{eq:Lieadmiss} \end{eqnarray} \end{cor} By Proposition~\mref{pro:lhpsemi} and Definition~\mref{de:admop}, we also have \begin{cor} Let $(\g,\phi)$ be an endo Lie algebra, $(V,\rho)$ be a representation of $\g$ and $\beta\in \End(V)$. If $\beta$ \dreping $(\g,\phi)$ on $(V,\rho)$, then we have the semi-direct product endo Lie algebra $(\g\ltimes_{\rho^*}V^*,\phi+\beta^*)$. 
\mlabel{co:dualsemipro} \end{cor} \vspace{-.4cm} \subsection{Matched pairs of endo Lie algebras} We first recall the concept of a matched pair of Lie algebras~\mcite{Maj,T}. \vspace{-.1cm} \begin{defi} \mlabel{de:match} A {\bf matched pair of Lie algebras} is a quadruple $(\g,\h,\rho,\mu)$, where $\g:=(\g, [\;,\;]_\g)$ and $\h:=(\h, [\;,\;]_\h)$ are Lie algebras, $\rho: \g\mto \End (\h)$ and $\mu: \h\mto \End(\g)$ are linear maps such that \begin{enumerate} \item $(\g,\mu)$ is a representation of $(\h, [\;,\;]_\h)$, \item $(\h, \rho)$ is a representation of $(\g, [\;,\;]_\g)$ and \item \mlabel{it:mat3} the following compatibility conditions hold: for any $x,y\in \g$ and $a,b\in \h$, \begin{eqnarray} \rho(x)[a,b]_\h-[\rho(x)a,b]_\h-[a,\rho(x)b]_\h+\rho(\mu(a)x)b-\rho(\mu(b)x)a&=&0, \mlabel{eq:Liemp1}\\ \mu(a)[x,y]_\g-[\mu(a)x,y]_\g-[x,\mu(a)y]_\g+\mu(\rho(x)a)y-\mu(\rho(y)a)x&=&0. \end{eqnarray} \end{enumerate} \end{defi} For Lie algebras $(\g, [\;,\;]_\g)$, $(\h, [\;,\;]_\h)$ and linear maps $\rho: \g\mto \End(\h)$, $\mu:\h\mto \End(\g)$, define a multiplication on the direct sum $\g\oplus \h$ by \begin{equation} [x+a, y+b]_{\bowtie}:=[x,y]_\g+\mu(a)y- \mu(b)x+\rho(x)b-\rho(y)a+[a,b]_\h, \ \forall x,y \in \g, a, b\in \h. \mlabel{eq:3.7} \end{equation} Then by~\mcite{T}, $(\g\oplus \h, [\;,\;]_{\bowtie})$ is a Lie algebra if and only if $(\g,\h,\rho,\mu)$ is a matched pair of $\g$ and $\h$. We denote the resulting Lie algebra $(\g\oplus \h, [\;,\;]_{\bowtie})$ by $\g\bowtie_{\rho}^{\mu} \h$ or simply $\g\bowtie \h$. Further, for any Lie algebra $\frak l$ whose underlying vector space is a vector space direct sum of two Lie subalgebras $\g$ and $\h$, there is a matched pair $(\g,\h,\rho,\mu)$ such that there is an isomorphism from the resulting Lie algebra $(\g\oplus \h, [\;,\;]_{\bowtie})$ via Eq.~(\mref{eq:3.7}) to the Lie algebra $\frak l$ and the restrictions of the isomorphism to $\g$ and $\h$ are the identity maps. 
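As a quick consistency check between Eqs.~(\mref{eq:3.7}) and (\mref{eq:semipro}), consider the special case where $\h$ is an abelian Lie algebra and $\mu=0$. Then the conditions in Definition~\mref{de:match} reduce to the requirement that $(\h,\rho)$ be a representation of $(\g, [\;,\;]_\g)$, and the bracket in Eq.~(\mref{eq:3.7}) reduces to
\begin{equation*}
[x+a, y+b]_{\bowtie}=[x,y]_\g+\rho(x)b-\rho(y)a, \quad \forall x, y\in \g, a, b\in \h,
\end{equation*}
which is precisely the semi-direct product bracket in Eq.~(\mref{eq:semipro}); that is, $\g\bowtie_{\rho}^{0} \h=\g\ltimes_{\rho} \h$.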
\begin{defi} A {\bf matched pair of endo Lie algebras} is a quadruple $((\g,\phi_\g),(\h,\phi_\h),\rho,\mu)$, where $(\g, \phi_\g)$ and $(\h, \phi_\h)$ are endo Lie algebras, $\rho: \g\mto \End (\h)$ and $\mu: \h\mto \End(\g)$ are linear maps such that \begin{enumerate} \item $(\g,\mu,\phi_\g)$ is a representation of the endo Lie algebra $(\h, \phi_\h)$, \mlabel{it:emat1} \item $(\h, \rho,\phi_\h)$ is a representation of the endo Lie algebra $(\g, \phi_\g)$, \mlabel{it:emat2} \item $(\g,\h,\rho,\mu)$ is a matched pair of Lie algebras. \mlabel{it:emat3} \end{enumerate} \end{defi} We have the following characterization of matched pairs of endo Lie algebras. \begin{thm} Let $(\g, \phi_\g)$ and $(\h, \phi_\h)$ be endo Lie algebras and let $(\g,\h,\rho,\mu)$ be a matched pair of the Lie algebras $\g$ and $\h$. Then the pair $(\g\bowtie \h,\phi_{\g}+\phi_{\h})$ is an endo Lie algebra if and only if $((\g, \phi_\g), (\h, \phi_\h), \rho, \mu)$ is a matched pair of endo Lie algebras. \mlabel{thm:3.1} \end{thm} As noted after Proposition~\mref{pro:lhpsemi}, the proposition follows from the theorem as the special case when the linear space $V$ is regarded as an abelian Lie algebra $\h$. \begin{proof} Let $x,y\in\g$ and $a,b\in\h$. Then we have \begin{eqnarray*} (\phi_{\g}+\phi_{\h})([x+a,y+b]_{\bowtie}) &=&\phi_\g([x,y]_\g)+\phi_\h(\rho(x)(b))-\phi_\g(\mu(b)(x))\\ &&+\phi_\g(\mu(a)(y))-\phi_\h(\rho(y)(a))+\phi_\h([a,b]_\h),\\ ~[(\phi_{\g}+\phi_{\h})(x+a),(\phi_{\g}+\phi_{\h})(y+b)]_{\bowtie} &=&[\phi_\g(x),\phi_\g(y)]_\g+\rho(\phi_\g(x))\phi_\h(b)- \mu(\phi_\h(b))\phi_\g(x)\\ && +\mu(\phi_\h(a))\phi_\g(y)-\rho(\phi_\g(y))\phi_\h(a) +[\phi_\h(a),\phi_\h(b)]_\h. 
\end{eqnarray*} Note that the equality of the left hand sides means that $(\g\bowtie \h,\phi_{\g}+\phi_{\h})$ is an endo Lie algebra, while taking $x=b=0$ and then $a=y=0$ in the equality of the right hand sides yields the first two conditions for $((\g, \phi_\g), (\h, \phi_\h), \rho, \mu)$ to be a matched pair of endo Lie algebras. Thus the conclusion follows. \end{proof} \vspace{-.4cm} \subsection{Manin triples of endo Lie algebras} We recall the concept of a Manin triple of Lie algebras. See~\mcite{CP} for details. \begin{defi} \mlabel{de:1.4} A bilinear form $\frakB\in(\g\ot\g)^*$ on a Lie algebra $\g$ is called {\bf invariant} if \begin{equation} \mlabel{eq:1.3} \mathfrak{B}([x,y], z)=\mathfrak{B}(x, [y,z]),~~\forall~x, y, z\in \g. \end{equation} \end{defi} As an analog of the notion of a Frobenius (associative) algebra, a Lie algebra with a nondegenerate symmetric invariant bilinear form is called a {\bf quadratic Lie algebra}. Let $(\g,[\;,\;]_\g)$ be a Lie algebra. Suppose that there is a Lie algebra structure $[\;,\;]_{\g^*}$ on its dual space $\g^*$ and a Lie algebra structure on the vector space direct sum $\g\oplus \g^\ast$ which contains both $(\g,[\;,\;]_\g)$ and $(\g^*,[\;,\;]_{\g^*})$ as Lie subalgebras. Define a bilinear form on $\g\oplus \g^*$ by \begin{equation} \frakB_d(x + a^* ,y + b^* ): = \langle x,b^*\rangle + \langle a^* , y\rangle, \quad \forall a^*, b^* \in \g^* , x, y \in \g. \mlabel{eq:3.9} \end{equation} If $\frakB_d$ is invariant, then $(\g\oplus \g^*,\frakB_d)$ is a quadratic Lie algebra and the triple $(\g\oplus\g^*,\g, \g^*)$ of Lie algebras is called a (standard) {\bf Manin triple of Lie algebras} associated to $(\g,[\;,\;]_\g)$ and $(\g^*,[\;,\;]_{\g^*})$. This Lie algebra on $\g\oplus \g^*$ comes from a matched pair of $\g$ and $\g^*$ in Eq.~(\mref{eq:3.7}), and hence will also be denoted by $\g\bowtie\g^*$. Indeed, we have \begin{thm} \mlabel{thm:frob} $($\mcite{CP}$)$ Let $(\g,[\;,\;]_\g)$ be a Lie algebra. 
Suppose that there is a Lie algebra structure $[\;,\;]_{\g^*}$ on its dual space $\g^\ast$. Then there is a Manin triple of Lie algebras associated to $(\g,[\;,\;]_\g)$ and $(\g^*,[\;,\;]_{\g^*})$ if and only if $(\g,\g^*, \ad_\g^*,\ad_{\g^*}^*)$ is a matched pair of Lie algebras. \end{thm} For endo Lie algebras, we give the following definition. \begin{defi} \mlabel{de:1.3} A {\bf quadratic endo Lie algebra} is a triple $(\g,\phi,\frakB)$ where $(\g,\phi)$ is an endo Lie algebra and $(\g,\frakB)$ is a quadratic Lie algebra. Let $\widehat{\phi}:\g\mto \g$ denote the {\bf adjoint linear transformation of $\phi$} under the nondegenerate bilinear form $\frakB$: \begin{equation} \mlabel{eq:adjoint} \mathfrak{B}(\phi(x), y)=\mathfrak{B}(x, \widehat{\phi}(y)),\;\;\forall x,y\in \g. \end{equation} \end{defi} \begin{pro} Let $(\g,\phi,\frakB)$ be a quadratic endo Lie algebra. Let $\widehat \phi$ be the adjoint of $\phi$. Then $\widehat{\phi}$ \dreping $(\g,\phi)$. In other words, $(\g^*, \ad^*,{\widehat{\phi}\,}^*)$ is a representation of the endo Lie algebra $(\g,\phi)$. Furthermore, as representations of $(\g,\phi)$, $(\g, \ad, \phi)$ and $(\g^*, \ad^*,{\widehat{\phi}\,}^*)$ are equivalent. Conversely, let $(\g,\phi)$ be an endo Lie algebra and $\psi\in \End(\g)$ dually represent $(\g,\phi)$. If the representation $(\g^*, \ad^*,\psi^*)$ of $(\g,\phi)$ is equivalent to $(\g,\ad,\phi)$, then there exists a nondegenerate invariant bilinear form $\frakB$ on $\g$ such that $\widehat{\phi}=\psi$. \mlabel{pp:frobadm} \end{pro} \begin{proof} For any $x,y,z\in \g$, we have \begin{eqnarray*} 0&=& \frakB([\phi(x),\phi(y)],z)-\frakB(\phi[x,y],z)\\ &=&\frakB(\phi(x),[\phi(y),z])-\frakB([x,y],\widehat\phi(z))\\ &=&\frakB(x,\widehat\phi([\phi(y),z]))-\frakB(x,[y,\widehat\phi(z)]). \end{eqnarray*} Thus $\widehat{\phi}([\phi(y),z])=[y,\widehat\phi(z)].$ Hence $(\g^*, \ad^*,{\widehat{\phi\,}}^*)$ is a representation of $(\g,\phi)$. 
Define a linear map $\varphi:\g\mto \g^*$ by $$\varphi(x)(y):=\frakB(x,y),\;\;\forall x,y\in \g.$$ Since the bilinear form $\frakB$ is nondegenerate, the linear map $\varphi$ is a linear isomorphism. Moreover, for any $x,y,z\in \g$, we have \begin{eqnarray*} \langle \varphi (\ad(x)y),z\rangle&=&\frakB([x,y],z)=\frakB([z,x],y)=\langle \varphi (y), [z,x]\rangle =\langle \ad^*(x)\varphi(y), z\rangle,\\ \langle \varphi (\phi(x)), y\rangle &=&\frakB(\phi(x), y)=\frakB(x, \widehat \phi(y))=\langle \varphi(x), \widehat{\phi}(y)\rangle=\langle {\widehat{\phi}\,}^*(\varphi(x)),y\rangle. \end{eqnarray*} Hence $(\g, \ad, \phi)$ is equivalent to $(\g^*, \ad^*,{\widehat{\phi\,}}^*)$ as representations of $(\g,\phi)$. Conversely, suppose that $\varphi:\g\rightarrow \g^*$ is a linear isomorphism giving the equivalence between $(\g, \ad, \phi)$ and $(\g^*,\ad^*,\psi^*)$. Define a bilinear form $\frakB$ on $\g$ by $$\frakB(x,y):=\langle \varphi(x), y\rangle,\;\;\forall x,y\in \g.$$ Then an argument similar to the one above shows that $\frakB$ is a nondegenerate invariant bilinear form on $\g$ such that $\widehat{\phi}=\psi$. \end{proof} We now extend the notion of Manin triples from the context of Lie algebras to that of endo Lie algebras. \begin{defi} \mlabel{de:3.4} Let $(\g, \phi)$ be an endo Lie algebra. Suppose that $(\g^*,\psi^*)$ is also an endo Lie algebra. A {\bf Manin triple of endo Lie algebras} associated to $(\g,\phi)$ and $(\g^*,\psi^*)$ is a Manin triple $(\g\bowtie \g^*, \g,\g^*)$ of Lie algebras such that $(\g\bowtie \g^*,\phi+ \psi^*, \frakB_d)$ is a quadratic endo Lie algebra. We use $((\g\bowtie \g^*,\phi+\psi^*),(\g,\phi),(\g^*,\psi^*))$ to denote this Manin triple. \end{defi} \begin{lem} Let $(\g\bowtie \g^*,\phi+ \psi^*, \frakB_d)$ be a quadratic endo Lie algebra. \begin{enumerate} \item The adjoint $\widehat{ \phi+\psi^*}$ of $\phi+\psi^*$ with respect to $\frakB_d$ is $\psi+ \phi^*$.
Further $\psi+\phi^*$ \dreping the endo Lie algebra $(\g\bowtie \g^*, \phi+\psi^*)$. \mlabel{it:abas1} \item $\psi$ \dreping the endo Lie algebra $(\g, \phi)$. \mlabel{it:abas2} \item $\phi^*$ \dreping the endo Lie algebra $(\g^*, \psi^*)$. \mlabel{it:abas3} \end{enumerate} \mlabel{lem:abas} \end{lem} \begin{proof} (\mref{it:abas1}) For any $x,y\in \g, a^*,b^*\in \g^*$, we apply Eq.~\meqref{eq:3.9} to give \begin{eqnarray*} \frakB_d\big((\phi+\psi^*)(x+a^*),y+b^*\big)&=&\langle \phi(x),b^*\rangle + \langle \psi^*(a^*), y\rangle\\ &=&\langle x, \phi^*(b^*)\rangle + \langle a^*, \psi(y)\rangle\\ &=&\frakB_d(x+a^*, (\psi+\phi^*)(y+b^*)). \end{eqnarray*} Hence the adjoint $\widehat{ \phi+\psi^*}$ of $\phi+\psi^*$ with respect to $\frakB_d$ is $\psi+ \phi^*$. By Proposition~\mref{pp:frobadm}, for the quadratic Lie algebra $(\g\bowtie \g^*,\phi+ \psi^*, \frakB_d)$, the linear map $\widehat{ \phi+\psi^*}=\psi+\phi^*$ \dreping $(\g\bowtie \g^*, \phi+\psi^*)$. \smallskip \noindent (\mref{it:abas2}) By Item~(\mref{it:abas1}), $\psi+\phi^*$ \dreping $(\g\bowtie \g^*, \phi+\psi^*)$. By Eq.~(\mref{eq:Lieadmiss}), this is the case if and only if for any $x,y\in \g, a^*,b^*\in \g^*$, \begin{eqnarray*} (\psi+\phi^*)([(\phi+\psi^*)(x+a^*),y+b^*]_{\bowtie})=[x+a^*,(\psi+\phi^*)(y+b^*)]_{\bowtie}. \end{eqnarray*} Now taking $a^*=b^*=0$ in the above equation, we have $\psi[\phi(x),y]_\g=[x,\psi(y)]_\g,$ that is, $\psi$ \dreping $(\g,\phi)$. \smallskip \noindent (\mref{it:abas3}) Likewise, taking $x=y=0$ in the above equation yields $ \phi^*[\psi^*(a^*),b^*]_{\g^*}=[a^*,\phi^*(b^*)]_{\g^*},$ that is, $\phi^*$ \dreping $(\g^*, \psi^*)$. \end{proof} Enriching Theorem~\mref{thm:frob} to the context of endo Lie algebras, we obtain \begin{thm} Let $(\g,\phi)$ be an endo Lie algebra. Suppose that there is an endo Lie algebra structure $(\g^*,\psi^*)$ on its dual space $\g^\ast$. 
Then there is a Manin triple of endo Lie algebras $((\g\bowtie \g^*,\phi+\psi^*), (\g,\phi),(\g^*,\psi^*))$ associated to $(\g,\phi)$ and $(\g^*,\psi^*)$ if and only if $((\g,\phi),(\g^*,\psi^*), \ad_\g^*,\ad^*_{\g^*})$ is a matched pair of endo Lie algebras. \mlabel{thm:3.7} \end{thm} \begin{proof} ($\Longrightarrow$) By the assumption, there is a Manin triple of endo Lie algebras $((\g\bowtie \g^*,\phi+\psi^*), (\g,\phi),(\g^*,\psi^*))$ associated to $(\g,\phi)$ and $(\g^*,\psi^*)$. Then in particular $(\g\bowtie \g^*,\g,\g^*)$ is a Manin triple of Lie algebras associated to $\g$ and $\g^*$. Hence by Theorem~\mref{thm:frob}, $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ is a matched pair of Lie algebras for which the Lie algebra on $\g\oplus \g^*$ is the Lie algebra $\g\bowtie \g^*$. Since the endomorphism on $\g\bowtie \g^*$ is $\phi+\psi^*$, by Lemma~\mref{lem:abas}, $(\g^*,\ad_\g^*,\psi^*)$ and $(\g,\ad_{\g^*}^*,\phi)$ are representations of the endo Lie algebras $(\g,\phi)$ and $(\g^*,\psi^*)$ respectively. Hence $((\g,\phi),(\g^*,\psi^*), \ad_\g^*,\ad^*_{\g^*})$ is a matched pair of endo Lie algebras. \smallskip \noindent ($\Longleftarrow$) If $((\g,\phi),(\g^*,\psi^*), \ad_\g^*,\ad^*_{\g^*})$ is a matched pair of endo Lie algebras, then $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ is a matched pair of Lie algebras. Hence by Theorem~\mref{thm:frob} again, $\frakB_d$ is invariant on $\g\bowtie \g^*$. By Theorem~\mref{thm:3.1}, the matched pair of endo Lie algebras also equips the Lie algebra $\g\bowtie \g^*$ with the endomorphism $\phi+\psi^*$, giving us a quadratic endo Lie algebra. This is exactly the required Manin triple of endo Lie algebras. \end{proof} \vspace{-.4cm} \subsection{Endo Lie bialgebras} With our previous preparations, we are ready to introduce the notion of endo Lie bialgebras, as an enrichment of the notion of Lie bialgebras, which we now recall, referring the reader to~\mcite{CP} for details. \begin{thm} \mlabel{thm:md} Let $(\g,[\;,\;]_\g)$ be a Lie algebra.
Suppose that there is a Lie algebra structure $(\g^*, [\;,\;]_{\g^*})$ on the linear dual $\g^*$ of $\g$. Let $\delta:\g\mto \g\ot \g$ denote the linear dual of the multiplication $[\;,\;]_{\g^*}:\g^*\ot \g^*\mto \g^*$ on $\g^*$. Then the quadruple $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ is a matched pair of Lie algebras if and only if, for any $x,y\in \g$, we have \begin{equation} \mlabel{eq:3.14} \delta[x,y]_\g=(\ad_\g(x)\otimes\id+\id\otimes \ad_\g(x))\delta(y)-(\ad_\g(y)\otimes\id+\id\otimes \ad_\g(y))\delta(x). \end{equation} \end{thm} Under our assumption of finite dimension, the existence of the Lie algebra structure $(\g^*,[\;,\;]_{\g^*})$ is equivalent to the condition that $(\g,\delta)$ is a {\bf Lie coalgebra}~\mcite{Mi}, which is defined without the dimensional restriction. \begin{defi} \mlabel{de:lieco} A linear space $\g$ with a linear map $\delta:\g\mto \g\ot \g$ is called a {\bf Lie coalgebra} if $\delta$ is {\bf coantisymmetric}, in the sense that $\delta=-\tau \delta$ for the flip map $\tau:\g\ot \g\mto \g\ot \g$, and satisfies the {\bf co-Jacobi identity}: \begin{equation} (\id +\sigma+\sigma^2)(\id \ot \delta)\delta =0, \end{equation} where $\sigma(x\ot y\ot z):=z\ot x\ot y$ for $x, y, z\in \g$. \end{defi} Combining a Lie algebra and a Lie coalgebra gives \begin{defi} \mlabel{de:bial} A {\bf Lie bialgebra} is a triple $(\g,[\;,\;]_\g,\delta)$, where $\g:=(\g,[\;,\;]_\g)$ is a Lie algebra, $(\g,\delta)$ is a Lie coalgebra (that is, $(\g^*,\delta^*)$ is a Lie algebra when $\g$ is finite dimensional) and Eq.~(\mref{eq:3.14}) holds. \end{defi} The notion of a Lie bialgebra applies to Lie algebras of any dimension. Under the finite-dimension condition, the notion is characterized by matched pairs of Lie algebras because of Theorem~\mref{thm:md}, and then by Manin triples of Lie algebras thanks to Theorem~\mref{thm:frob}. With the extra structure of endomorphisms, we also have the following equivalent characterizations.
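Before stating these characterizations, we record a simple example of a Lie bialgebra for illustration. On the two-dimensional Lie algebra $\g$ with basis $\{e_1,e_2\}$ and bracket $[e_1,e_2]_\g=e_2$, define $\delta:\g\mto \g\ot \g$ by
$$\delta(e_1):=e_1\ot e_2-e_2\ot e_1,\qquad \delta(e_2):=0.$$
Then $\delta$ is coantisymmetric and satisfies the co-Jacobi identity, and checking Eq.~(\mref{eq:3.14}) on the pair $(e_1,e_2)$ shows that $(\g,[\;,\;]_\g,\delta)$ is a Lie bialgebra. The induced Lie bracket on $\g^*$ is determined by $[e_1^*,e_2^*]_{\g^*}=e_1^*$.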
\begin{thm} \mlabel{thm:rbbial} \mlabel{thm:rbinfbialg} Let $(\g, [\;,\;]_\g,\phi)$ be an endo Lie algebra. Suppose that there is an endo Lie algebra $(\g^*,[\;,\;]_{\g^*},\psi^*)$ on the linear dual $\g^*$ of $\g$. Let $\delta:\g\mto \g\ot \g$ denote the linear dual of the multiplication $[\;,\;]_{\g^*}:\g^*\ot \g^*\mto \g^*$ on $\g^*$. Then the following statements are equivalent. \begin{enumerate} \item The quadruple $((\g,\phi),(\g^*,\psi^*), \ad_\g^*,\ad^*_{\g^*})$ is a matched pair of endo Lie algebras; \mlabel{it:rbb1} \item \mlabel{it:rbb2} There is a Manin triple of endo Lie algebras associated to the endo Lie algebras $(\g, [\;,\;]_\g,\phi)$ and $(\g^*,[\;,\;]_{\g^*},\psi^*)$; \item \mlabel{it:rbb3} The triple $(\g,[\;,\;]_\g,\delta)$ is a Lie bialgebra. Furthermore, the linear operators $\phi^*$ and $\psi$ dually represent $(\g^*,\psi^*)$ and $(\g,\phi)$ respectively. \end{enumerate} \end{thm} \begin{proof} The equivalence (\mref{it:rbb1}) $\Longleftrightarrow$ (\mref{it:rbb2}) is given in Theorem~\mref{thm:3.7}. By definition, Item~(\mref{it:rbb1}) means that $(\g,\g^*,\ad_\g^*,\ad_{\g^*}^*)$ is a matched pair of Lie algebras and that the dual representation conditions in Item~(\mref{it:rbb3}) hold. Since the matched pair condition is equivalent to the triple $(\g,[\;,\;]_\g,\delta)$ being a Lie bialgebra by Theorem~\mref{thm:md}, we obtain the equivalence (\mref{it:rbb1}) $\Longleftrightarrow$ (\mref{it:rbb3}). \end{proof} Given the aforementioned well-known equivalent characterizations of a Lie bialgebra by a matched pair and a Manin triple of Lie algebras, the characterizations in Theorem~\mref{thm:rbinfbialg} should lead to a notion of a bialgebra structure for endo Lie algebras, given by Theorem~\mref{thm:rbinfbialg}~\meqref{it:rbb3}. We first analyze the related conditions.
\begin{defi} \mlabel{lem:rbcorb} An {\bf endo Lie coalgebra} is a Lie coalgebra $(\g,\delta)$ together with a Lie coalgebra endomorphism $\psi:\g \mto \g$, that is, a $\psi\in \End(\g)$ such that \begin{eqnarray} \mlabel{eq:corb} (\psi\ot \psi)\delta=\delta \psi. \end{eqnarray} \end{defi} \vspace{-.1cm} Under the finite-dimension condition, Eq.~\meqref{eq:corb} is equivalent to the condition that $\psi^*:\g^*\mto \g^*$ is an endomorphism of the Lie algebra $\g^*$. Note that the dual representation conditions in Theorem~\mref{thm:rbinfbialg}~(\mref{it:rbb3}) are either defined by or can be rephrased as the following conditions, which do not refer to the dual space $\g^*$ and its operations: \begin{eqnarray} (\id\otimes \phi)\delta&=&(\psi\otimes\id)\delta \phi, \mlabel{eq:pduqrd}\\ \psi[\phi(x),y]_\g&=&[x,\psi(y)]_\g, \quad \forall x, y\in \g. \mlabel{eq:pduql} \end{eqnarray} We are thus led to the key notion of endo Lie bialgebras, which applies to vector spaces of any dimension and indeed to any modules, just like its classical counterpart of Lie bialgebras. \vspace{-.2cm} \begin{defi} \mlabel{de:rbbial} An {\bf endo Lie bialgebra} is a quintuple $(\g,[\;,\;]_\g,\delta,\phi,\psi)$ or simply a triple $((\g,\phi),\delta,\psi)$ in which \begin{enumerate} \item $(\g,[\;,\;]_\g,\delta)$ is a Lie bialgebra, \item $(\g,[\;,\;]_\g, \phi)$ is an endo Lie algebra, \item $(\g,\delta,\psi)$ is an endo Lie coalgebra, \item the compatibility conditions in Eqs.~(\mref{eq:pduqrd}) -- (\mref{eq:pduql}) are satisfied. \end{enumerate} \end{defi} Returning to the finite-dimensional case, we immediately have \begin{cor} Consider a quintuple $(\g,[\;,\;]_\g,\delta,\phi,\psi)$ where $(\g,\phi)$ is an endo Lie algebra. Then the quintuple is an endo Lie bialgebra if and only if any one $($and hence all$)$ of the equivalent conditions in Theorem~\mref{thm:rbbial} is satisfied.
\mlabel{co:rbb} \end{cor} \vspace{-.3cm} \subsection{Homomorphisms of Lie bialgebras and Manin triples} As both the main motivation and application of our study of endo Lie bialgebras, we introduce a new notion of homomorphisms of Lie bialgebras that is compatible with those of Manin triples and matched pairs. Then we compare this notion with the existing notion of Lie bialgebra homomorphisms. \vspace{-.2cm} \subsubsection{New homomorphisms for Lie bialgebras} We can rewrite Definition~\mref{de:rbbial} in terms of morphisms of Lie bialgebras. \begin{defi} \mlabel{de:auto-weakhomLie} A {\bf \weak endomorphism} of a Lie bialgebra $(\g,[\;,\;]_\g,\delta)$ consists of a Lie algebra homomorphism $\phi:\g\mto\g$ and a Lie coalgebra homomorphism $\psi:\g\rightarrow \g$ satisfying Eqs.~(\mref{eq:pduqrd}) -- (\mref{eq:pduql}). \end{defi} Then we immediately have \begin{pro} The quintuple $(\g,[\;,\;]_\g,\delta,\phi,\psi)$ is an endo Lie bialgebra if and only if $(\phi,\psi)$ is a \weak endomorphism of the Lie bialgebra $(\g,[\;,\;]_\g,\delta)$. \end{pro} Definition~\mref{de:auto-weakhomLie} motivates us to give the following notion of homomorphisms between any two Lie bialgebras. \begin{defi} \mlabel{de:liebialghom} Let $(\g,[\;,\;]_\g,\delta_\g)$ and $(\h,[\;,\;]_\h,\delta_\h)$ be Lie bialgebras. A {\bf \weak homomorphism of Lie bialgebras} from $(\g,[\;,\;]_\g,\delta_\g)$ to $(\h,[\;,\;]_\h,\delta_\h)$ is a pair $(\phi,\psi)$ of linear maps such that \begin{enumerate} \item $\phi:\g\mto \h$ is a homomorphism of Lie algebras, \item $\psi:\h\mto \g$ is a homomorphism of Lie coalgebras, \item the polarizations of Eqs.~\meqref{eq:pduqrd} -- \meqref{eq:pduql} hold: \begin{eqnarray} (\id_\g\otimes \phi)\delta_\g&=&(\psi\otimes\id_\h)\delta_\h \phi, \mlabel{eq:pp1}\\ \psi[\phi(x),y]_\h&=&[x,\psi(y)]_\g, \quad \forall x\in \g, y\in \h.
\mlabel{eq:pp2} \end{eqnarray} \end{enumerate} If both $\phi$ and $\psi$ are bijective, the pair is called a {\bf \weak isomorphism of Lie bialgebras}. Let $\LB$ denote the category of Lie bialgebras with the morphisms thus defined. \end{defi} The benefit of \weak homomorphisms of Lie bialgebras is that they are compatible with the following naturally defined morphisms of Manin triples, derived from Manin triples of endo Lie algebras. \vspace{-.2cm} \begin{defi} Let $(\g\bowtie\g^*,\g,\g^*)$ and $(\h\bowtie\h^*,\h,\h^*)$ be Manin triples of Lie algebras. A {\bf \weak homomorphism} between them is a Lie algebra homomorphism $$f:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$$ that restricts to Lie algebra homomorphisms $$f|_\g:\g\mto \h,\quad f|_{\g^*}:\g^*\mto \h^*.$$ If $f$ is bijective, it is called a {\bf \weak isomorphism of Manin triples}. Let $\MT$ denote the category of Manin triples with the morphisms thus defined. \mlabel{de:whomMT} \end{defi} This notion is justified by the equivalence in the case of endomorphisms. \begin{pro} Let $(\g\bowtie\g^*,\g,\g^*)$ be a Manin triple of Lie algebras. Then a linear map $f\in\End(\g \bowtie \g^*)$ is a \weak endomorphism of the Manin triple if and only if $((\g\bowtie\g^*,f),(\g,f|_\g),(\g^*,f|_{\g^*}))$ is a Manin triple of endo Lie algebras associated to $(\g,f|_\g)$ and $(\g^*,f|_{\g^*})$. \mlabel{pp:endmt} \end{pro} Due to the correspondence between endo Lie bialgebras and Manin triples of endo Lie algebras given in Theorem~\mref{thm:rbinfbialg}, we obtain \begin{pro} Let $(\g,[\;,\;]_\g,\delta)$ be a Lie bialgebra and $(\g\bowtie\g^*,\g,\g^*)$ be the corresponding Manin triple. Then $(\phi,\psi)$ is a \weak endomorphism of the Lie bialgebra $(\g,[\;,\;]_\g,\delta)$ if and only if $\phi+\psi^*$ is a \weak endomorphism of the Manin triple $(\g\bowtie\g^*,\g,\g^*)$.
\end{pro} Now we show that the correspondence of Lie bialgebras with Manin triples established by Theorems~\mref{thm:frob} and~\mref{thm:md} gives rise to an equivalence of the corresponding categories $\LB$ and $\MT$. \begin{pro} \mlabel{thm:catequiv} Assume that all the spaces are finite dimensional. \begin{enumerate} \item Let $(\g,[\;,\;]_\g,\delta_\g)$ and $(\h,[\;,\;]_\h,\delta_\h)$ be Lie bialgebras. Let $(\g\bowtie\g^*,\g,\g^*)$ and $(\h\bowtie\h^*,\h,\h^*)$ be the corresponding Manin triples of Lie algebras. There is a bijection between the set $\Hom_{\bf LB}(\g,\h)$ of \weak homomorphisms between the Lie bialgebras and the set $\Hom_{\bf MT}(\g\bowtie\g^*,\h\bowtie \h^*)$ of \weak homomorphisms between the Manin triples. The bijection is given by sending $(\phi,\psi)$ to $f:=\phi+ \psi^*$ and sending $f$ to $(f|_\g, (f|_{\g^*})^*)$. \mlabel{it:catequiv1} \item \mlabel{it:catequiv2} The correspondence in \eqref{it:catequiv1} gives an equivalence from the category $\LB$ of Lie bialgebras to the category $\MT$ of Manin triples. \end{enumerate} \end{pro} \begin{proof} \meqref{it:catequiv1}. Assume that $(\phi,\psi)$ is a \weak homomorphism of the Lie bialgebras. Let $x,y\in \g, a^*,b^*\in \g^*$. Then for $f:=\phi+\psi^*:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$ we have {\small\begin{eqnarray*} f([x+a^*,y+b^*]_{\bowtie} )&=& \phi([x,y]_\g)+ \phi(\ad^*_{\g^*}(a^*)y-\ad^*_{\g^*}(b^*)x)\\ &\mbox{}& +\psi^*(\ad_\g^*(x)b^*-\ad_\g^*(y)a^*)+\psi^*([a^*,b^*]_{\g^*}),\\ ~[f(x+a^*),f(y+b^*)]_{\bowtie}&=&[\phi(x),\phi(y)]_\h+\ad_{\h^*}^*(\psi^*(a^*))\phi(y)-\ad_{\h^*}^*(\psi^*(b^*))\phi(x)\\ && +\ad_\h^*(\phi(x))\psi^*(b^*)-\ad_\h^*(\phi(y))\psi^*(a^*)+[\psi^*(a^*),\psi^*(b^*)]_{\h^*}.
\end{eqnarray*}} Since $\phi:\g\rightarrow \h$ and $\psi^*:\g^*\mto \h^*$ are homomorphisms of Lie algebras, we have $$\phi([x,y]_\g)=[\phi(x),\phi(y)]_\h, \quad \psi^*([a^*,b^*]_{\g^*})=[\psi^*(a^*),\psi^*(b^*)]_{\h^*}.$$ By Eq.~(\mref{eq:pp1}), we have $$\phi(\ad^*_{\g^*}(a^*)y)=\ad_{\h^*}^*(\psi^*(a^*))\phi(y), \quad \phi(\ad^*_{\g^*}(b^*)x)=\ad_{\h^*}^*(\psi^*(b^*))\phi(x).$$ By Eq.~(\mref{eq:pp2}), we have $$\psi^*(\ad_\g^*(x)b^*)=\ad_\h^*(\phi(x))\psi^*(b^*),\quad \psi^*(\ad_\g^*(y)a^*)=\ad_\h^*(\phi(y))\psi^*(a^*).$$ Therefore $f$ is a \weak homomorphism of Manin triples. Conversely, by a similar argument, if $f$ is a \weak homomorphism of Manin triples, then $(f|_\g, (f|_{\g^*})^*)$ is a \weak homomorphism of the corresponding Lie bialgebras. \meqref{it:catequiv2}. It follows from \meqref{it:catequiv1} directly. \end{proof} Due to the correspondence between matched pairs and Manin triples of Lie algebras, we can define \weak homomorphisms of matched pairs of Lie algebras directly from Definition~\mref{de:whomMT}. \begin{defi} Let $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ and $(\h,\h^*,\ad_\h^*,\ad^*_{\h^*})$ be matched pairs of Lie algebras. A {\bf \weak homomorphism} between them is a Lie algebra homomorphism $$f:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$$ that restricts to Lie algebra homomorphisms $$f|_\g:\g\mto \h,\quad f|_{\g^*}:\g^*\mto \h^*.$$ If $f$ is bijective, it is called a {\bf \weak isomorphism of matched pairs}. Let $\MP$ denote the category of such matched pairs of Lie algebras with the morphisms thus defined. \mlabel{de:whomMP} \end{defi} \begin{rmk} We only define the \weak homomorphisms for matched pairs of the form $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$. This is enough for our purpose. A \weak homomorphism between any two matched pairs can be defined in the same way. \end{rmk} \vspace{-.2cm} On the other hand, by Theorem~\mref{thm:3.1} we also have the following compatibility of \weak homomorphisms of matched pairs.
\begin{pro} Let $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ be a matched pair of Lie algebras. Then a linear map $f:\g\bowtie \g^*\rightarrow \g\bowtie \g^*$ is a \weak endomorphism of this matched pair of Lie algebras if and only if $((\g,f|_\g),(\g^*,f|_{\g^*}), \ad_\g^*,\ad_{\g^*}^*)$ is a matched pair of endo Lie algebras. \mlabel{pp:endmp} \end{pro} We further have \begin{pro} \mlabel{pro:MP} \begin{enumerate} \item \mlabel{it:mp1} Let $(\g\bowtie\g^*,\g,\g^*)$ and $(\h\bowtie\h^*,\h,\h^*)$ be Manin triples of Lie algebras and let $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ and $(\h,\h^*,\ad_\h^*,\ad^*_{\h^*})$ be the corresponding matched pairs of Lie algebras. Then a linear map $f:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$ is a \weak homomorphism of Manin triples if and only if $f$ is a \weak homomorphism of matched pairs. \item \mlabel{it:mp2} The correspondence in \eqref{it:mp1} gives an equivalence from the category $\MP$ of matched pairs of the form $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$ to the category $\MT$ of Manin triples. \end{enumerate} \end{pro} \begin{proof} \meqref{it:mp1} follows directly from Definitions~\mref{de:whomMT} and~\mref{de:whomMP}. \smallskip \noindent \meqref{it:mp2} By \meqref{it:mp1}, there is a bijection between the set of \weak homomorphisms $f:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$ of Manin triples and the set of \weak homomorphisms $f:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$ of matched pairs by sending $f$ to $f$ itself. This gives the desired equivalence. \end{proof} Combining Propositions~\mref{thm:catequiv} and \mref{pro:MP}, we obtain the following three-way equivalence of categories. \begin{thm}\label{thm:equivalence111} Under the finite-dimensional assumption, the following categories are equivalent. \begin{enumerate} \item the category $\LB$ of Lie bialgebras; \item the category $\MT$ of Manin triples; \item the category ${\bf MP}$ of matched pairs of the form $(\g,\g^*,\ad_\g^*,\ad^*_{\g^*})$.
\end{enumerate} \mlabel{co:maplie} \end{thm} \vspace{-.3cm} \subsubsection{Comparison with the existing notion of morphisms of Lie bialgebras} We now compare the notion of \weak homomorphisms of Lie bialgebras in Definition~\mref{de:liebialghom} with the existing notions of homomorphisms of Lie bialgebras and Manin triples~\mcite{CP}. \begin{defi} (\mcite{CP}) A {\bf homomorphism of Lie bialgebras} from $(\g,[\;,\;]_\g,\delta_\g)$ to $(\h,[\;,\;]_\h,\delta_\h)$ is a linear map $f:\g\mto \h$ that is both a Lie algebra homomorphism and a Lie coalgebra homomorphism: $\delta_\h f=(f\otimes f)\delta_\g.$ If $f$ is also bijective, then $f$ is called an {\bf isomorphism of Lie bialgebras}. \mlabel{de:hom} \end{defi} The following result shows that in the bijective case, our notion of \weak homomorphisms coincides with the usual notion of isomorphisms of Lie bialgebras. \begin{pro}\label{pro:iso} Let $(\g,[\;,\;]_\g,\delta_\g)$ and $(\h,[\;,\;]_\h,\delta_\h)$ be two Lie bialgebras. Then $(\g,[\;,\;]_\g,\delta_\g)$ is isomorphic to $(\h,[\;,\;]_\h,\delta_\h)$ if and only if there exists a Lie algebra isomorphism $\phi:\g\mto\h$ such that $(\phi,\phi^{-1})$ is a \weak isomorphism. \end{pro} \begin{proof} Suppose that $\phi:\g\mto \h$ is an isomorphism of Lie bialgebras and take $\psi:=\phi^{-1}$. Since $\phi$ is an isomorphism of Lie algebras, so is $\psi$, and hence $\psi[\phi(x),y]_\h=[\psi\phi(x),\psi(y)]_\g=[x,\psi(y)]_\g$ for all $x\in \g, y\in \h$, which is Eq.~(\mref{eq:pp2}). Since $\phi$ is an isomorphism of Lie coalgebras, so is $\psi$, and applying $\psi\otimes \id_\h$ to $\delta_\h \phi=(\phi\otimes \phi)\delta_\g$ gives Eq.~(\mref{eq:pp1}). Hence $(\phi,\phi^{-1})$ is a \weak isomorphism. Conversely, if $(\phi,\phi^{-1})$ is a \weak isomorphism, then $\phi^{-1}$, and hence $\phi$, is an isomorphism of Lie coalgebras, so $\phi$ is an isomorphism of Lie bialgebras. \end{proof} \vspace{-.2cm} \begin{rmk} However, in general, homomorphisms of Lie bialgebras and \weak homomorphisms of Lie bialgebras are not related. For example, when $\g=\h$, $f+f^*$ is usually not an endomorphism of the Lie algebra $\g\bowtie \g^*$. \end{rmk} Next we consider another notion of homomorphisms of Manin triples of Lie algebras, which originates from the notion of isomorphisms of Manin triples~\mcite{CP}. \begin{defi} \mlabel{de:homMT} Let $(\g\bowtie\g^*,\g,\g^*)$ and $(\h\bowtie\h^*,\h,\h^*)$ be Manin triples of Lie algebras.
A {\bf \strong homomorphism} between them is a Lie algebra homomorphism $$f:\g\bowtie \g^*\rightarrow \h\bowtie\h^*$$ that restricts to Lie algebra homomorphisms $$f|_\g:\g\mto \h,\quad f|_{\g^*}:\g^*\mto \h^*$$ and is compatible with the bilinear forms from the Manin triples: \begin{equation} \mlabel{eq:MT2b} \frakB_{\g,d}(x,y)=\frakB_{\h,d}(f(x),f(y)),\;\;\forall x,y\in \g\bowtie \g^*. \end{equation} \end{defi} A bijective \strong homomorphism between two Manin triples is exactly the known notion of an {\bf isomorphism} between two Manin triples~\mcite{CP}. In general, a \strong homomorphism of Manin triples is a \weak homomorphism plus the compatibility condition in Eq.~\meqref{eq:MT2b}. This extra condition has a quite significant consequence. \begin{pro} Let $(\g\bowtie\g^*,\g,\g^*)$ and $(\h\bowtie\h^*,\h,\h^*)$ be two Manin triples of Lie algebras. Let $f$ be a \strong homomorphism between them, and write $\phi:=f|_\g:\g\mto \h$ and $\psi^*:=f|_{\g^*}:\g^*\mto \h^*$. Then $\psi\phi=\id.$ In particular, $\phi$ is injective and $\psi$ is surjective. \mlabel{pp:mthom} \end{pro} \begin{proof} By Eq.~(\mref{eq:MT2b}), we have $\frakB_{\g,d}( x,a)=\frakB_{\h,d}(\phi(x),\psi^*(a))=\frakB_{\g,d}( \psi\phi(x),a)$ for all $x\in \g,a\in \g^*. $ Hence $\psi\phi=\id$. \end{proof} We make the following remarks on the various notions of homomorphisms of Lie bialgebras and of Manin triples. \begin{rmk} \begin{enumerate} \item It is also known that an isomorphism of Lie bialgebras amounts to an isomorphism of the corresponding Manin triples. \item In general, the homomorphisms of Lie bialgebras in Definition~\mref{de:hom} do not correspond to \strong homomorphisms of Manin triples in Definition~\mref{de:homMT}. Indeed, since the underlying Lie algebra $\g$ in a Lie bialgebra corresponds to the Lie subalgebra in a Manin triple $(\g\bowtie\g^*,\g,\g^*)$, it is naturally expected that the homomorphism of the Lie bialgebra is given by $f|_\g$ in the homomorphism $f=f|_\g+f|_{\g^*}$ of the Manin triple.
Then by Proposition~\mref{pp:mthom}, the homomorphism $f|_\g$ of Lie bialgebras will need to be injective. This is an unusually strong restriction. \item The injectivity condition is due to the compatibility condition in Eq.~\meqref{eq:MT2b}. Eliminating this condition leaves us with the notion of \weak homomorphisms of Manin triples in Definition~\mref{de:whomMT}. As shown in Proposition~\mref{thm:catequiv}, this notion is compatible with the notion of \weak homomorphisms of Lie bialgebras in Definition~\mref{de:liebialghom}. \end{enumerate} \end{rmk} To finish the discussion in this section, we compare the notion of \weak homomorphisms of Lie bialgebras with the notion of weak homomorphisms introduced in \mcite{TBGS}. \begin{defi} Let $(\g,[\;,\;],\delta_1)$ and $(\g,[\;,\;],\delta_2)$ be Lie bialgebras. A {\bf weak homomorphism} from $(\g,[\;,\;],\delta_2)$ to $(\g,[\;,\;],\delta_1)$ consists of a Lie algebra homomorphism $\phi:\g\rightarrow \g$ and a Lie coalgebra homomorphism $\psi:(\g,\delta_2)\rightarrow (\g,\delta_1)$ such that \begin{equation}\psi[\phi(x),y]=[x,\psi(y)],\;\;\forall x,y\in \g. \mlabel{eq:req3} \end{equation} If, in addition, both $\phi$ and $\psi$ are linear isomorphisms, then $(\phi,\psi)$ is called a {\bf weak isomorphism} from $(\g,[\;,\;],\delta_2)$ to $(\g,[\;,\;],\delta_1)$. \end{defi} Note that the above notions of weak homomorphisms and weak isomorphisms are defined only when the two Lie bialgebras have the same underlying Lie algebra $\g$. Further, they are mainly used in~\mcite{TBGS} for triangular Lie bialgebras, that is, Lie bialgebras constructed from skew-symmetric classical $r$-matrices. In this case, Eq.~\meqref{eq:pp2} is the same as Eq.~\meqref{eq:req3}, and Eq.~(\mref{eq:pp1}) holds automatically, which implies that the two notions of \weak and weak homomorphisms of Lie bialgebras coincide.
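To illustrate the difference between the two notions, consider the two-dimensional Lie algebra $\g$ with basis $\{e_1,e_2\}$ and bracket $[e_1,e_2]_\g=e_2$, with the Lie bialgebra structure given by $\delta(e_1):=e_1\ot e_2-e_2\ot e_1$ and $\delta(e_2):=0$. Define $\phi,\psi\in \End(\g)$ by
$$\phi(e_1):=e_1,\quad \phi(e_2):=0,\qquad \psi(e_1):=0,\quad \psi(e_2):=e_2.$$
Then $\phi$ is a Lie algebra endomorphism, $\psi$ is a Lie coalgebra endomorphism, and Eqs.~(\mref{eq:pduqrd}) -- (\mref{eq:pduql}) hold by a direct computation, so $(\phi,\psi)$ is a \weak endomorphism of this Lie bialgebra; equivalently, $(\g,[\;,\;]_\g,\delta,\phi,\psi)$ is an endo Lie bialgebra. Since $\psi\phi=0\neq \id$, Proposition~\mref{pp:mthom} shows that $(\phi,\psi)$ does not come from any \strong endomorphism of the corresponding Manin triple.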
\vspace{-.3cm} \section{Coboundary endo Lie bialgebras and homomorphisms of classical $r$-matrices} \mlabel{sec:rmat} \vspace{-.1cm} In this section, we study coboundary endo Lie bialgebras and introduce the notions of \weak homomorphisms of classical $r$-matrices. A \weak homomorphism between two classical $r$-matrices gives a \weak homomorphism of their corresponding Lie bialgebras. \vspace{-.2cm} \subsection{Coboundary endo Lie bialgebras} Let $\g$ be a Lie algebra. For a given $r\in \g\otimes \g$, define $\delta_r:\g\mto \g\ot\g$ by \begin{equation} \delta_r(x):=(\id\otimes \ad(x)+\ad(x)\otimes \id)(r), \quad \forall x\in \g. \mlabel{eq:4.1} \end{equation} The following is an important construction of Lie bialgebras. \begin{pro} \mlabel{rmk:4.2} {\rm(\cite{CP})} Let $(\g,[\;,\;])$ be a Lie algebra and $r\in \g\otimes \g$. Then $(\g,[\;,\;],\delta_r)$ is a Lie bialgebra, which is called a {\bf coboundary Lie bialgebra}, if and only if for all $x\in \g$, \begin{equation} \big(\ad(x)\otimes \id+\id\otimes\ad(x)\big)(r+\tau(r))=0, \mlabel{eq:4.111} \end{equation} \begin{equation}\mlabel{eq:4.2} \big(\ad(x)\ot \id\ot \id+\id\ot\ad(x)\ot\id+\id\otimes \id\otimes \ad(x)\big)([r_{12},r_{13}]+[r_{13},r_{23}]+[r_{12},r_{23}])=0. \end{equation} Here $\tau:\g\otimes \g\rightarrow \g\otimes \g$ is the flip map and, writing $r=\sum_ia_i\otimes b_i$, we denote $$ [r_{12},r_{13}]=\sum_{i,j}[a_i,a_j]\otimes b_i\otimes b_j,\quad [r_{13},r_{23}]=\sum_{i,j}a_i\otimes a_j\otimes [b_i,b_j],\quad [r_{12},r_{23}]=\sum_{i,j}a_i\otimes [b_i,a_j]\otimes b_j. $$ \end{pro} For endo Lie bialgebras, we similarly define \begin{defi} \mlabel{de:4.1} An endo Lie bialgebra $((\g,\phi), \delta, \psi)$ is called {\bf coboundary} if $\delta=\delta_r$ for some $r\in \g\ot \g$. \end{defi} Then we obtain \begin{thm} \mlabel{thm:pq} Let $(\g, \phi)$ be an endo Lie algebra and $\psi$ dually represent $(\g,\phi)$. Let $r\in \g\otimes \g$.
Then the linear map $\delta_r$ induces an endo Lie bialgebra $((\g, \phi), \delta_r, \psi)$ if and only if $r$ satisfies Eqs.~\meqref{eq:4.111} and \meqref{eq:4.2}, and for any $x\in \g$, the following conditions hold: \begin{eqnarray}\mlabel{eq:corbo} (\psi\ad(x)\ot\id) (\id\ot \psi-\phi\ot \id)(r)+(\id\ot \psi\ad(x))(\psi\ot \id-\id\ot \phi)(r)&=&0,\\ \mlabel{eq:P*admissible1} (\ad(x)\ot \id+\id\ot \ad \phi(x))(\id\ot \phi-\psi\ot \id)(r)&=&0. \end{eqnarray} \end{thm} \begin{proof} By Definition~\ref{de:rbbial} and Proposition~\ref{rmk:4.2}, $((\g, \phi), \delta_r, \psi)$ is an endo Lie bialgebra if and only if $r$ satisfies Eqs.~\meqref{eq:4.111} and \meqref{eq:4.2}, and Eqs.~\meqref{eq:corb} and ~\meqref{eq:pduqrd} hold. Set $r=\sum_ia_i\otimes b_i$. For $x\in\g$, we have \begin{eqnarray*} &&(\psi\otimes \psi)\delta_r(x)-\delta_r\psi(x)\\ &=&\sum_i(\psi([x,a_i])\otimes \psi(b_i)+\psi(a_i)\otimes \psi([x,b_i])-[\psi(x),a_i]\otimes b_i-a_i\otimes [\psi(x),b_i])\\ &=&(\psi\ad(x)\otimes \id )(\id\otimes \psi)(r)+(\id\otimes\psi\ad(x))(\psi\otimes\id)(r)\\ &&-\sum_i\big(\psi[x,\phi(a_i)]\otimes b_i+a_i\otimes \psi[x,\phi(b_i)]\big)\\ &=&(\psi\ad(x)\otimes \id )(\id\otimes \psi)(r)+(\id\otimes\psi\ad(x))(\psi\otimes\id)(r)\\ &&-(\psi\ad(x)\otimes \id)(\phi\otimes \id)(r) -(\id\otimes \psi\ad(x))(\id\otimes \phi)(r)\\ &=&(\psi\ad(x)\ot\id) (\id\ot \psi-\phi\ot \id)(r)+(\id\ot \psi\ad(x))(\psi\ot \id-\id\ot \phi)(r). \end{eqnarray*} Thus Eq.~\meqref{eq:corb} holds if and only if Eq.~\meqref{eq:corbo} holds. 
Similarly we have \begin{eqnarray*} && (\id\otimes \phi)\delta_r(x)-(\psi\otimes\id)\delta_r \phi(x)\\ &=&\sum_i\big([x,a_i]\otimes \phi(b_i)+a_i\otimes [\phi(x),\phi(b_i)]-\psi[\phi(x),a_i]\otimes b_i-\psi(a_i)\otimes [\phi(x),b_i]\big)\\ &=&(\ad(x)\otimes\id)(\id\otimes \phi)(r)+(\id\otimes \ad\phi(x))(\id\otimes \phi)(r)\\ &&-(\id\otimes \ad\phi(x))(\psi\otimes \id)(r)-(\ad(x)\otimes\id)(\psi\otimes \id)(r)\\ &=&(\ad(x)\ot \id+\id\ot \ad \phi(x))(\id\ot \phi-\psi\ot \id)(r). \end{eqnarray*} Thus Eq.~\meqref{eq:pduqrd} holds if and only if Eq.~\meqref{eq:P*admissible1} holds. Therefore the conclusion holds. \end{proof} Consequently we have the following conclusion on \weak homomorphisms of Lie bialgebras. \begin{cor} Let $(\g,[\;,\;])$ be a Lie algebra and $\phi:\g\rightarrow \g$ be a Lie algebra endomorphism. Let $\psi:\g\rightarrow \g$ be a linear map satisfying Eq.~\meqref{eq:pduql}. Let $r\in \g\otimes \g$. Then $(\g,[\;,\;],\delta_r)$ is a Lie bialgebra and $(\phi,\psi)$ is a \weak endomorphism of Lie bialgebras if and only if Eqs.~\meqref{eq:4.111}--\meqref{eq:P*admissible1} are satisfied. \end{cor} As an application of Theorem~\ref{thm:pq}, we obtain the following doubles from endo Lie bialgebras which are analogues of doubles from Lie bialgebras. \begin{thm} Let $((\g, \phi),\delta, \psi)$ be an endo Lie bialgebra. Let $\alpha:\g^*\mto \g^*\ot \g^*$ be the linear dual of the multiplication on $\g$. Then $((\g^*,\psi^*),-\alpha,\phi^*)$ is also an endo Lie bialgebra. Further there is an endo Lie bialgebra structure on the direct sum $\g\oplus \g^*$ of the underlying vector spaces of $\g$ and $\g^*$ which contains the two endo Lie bialgebras as endo Lie sub-bialgebras. \mlabel{thm:4.7} \end{thm} \begin{proof} Denote the product on the Lie algebra $\g^*$ by $[\;,\;]_{\g^*}$. By \mcite{CP}, $(\g^*,[\;,\;]_{\g^*},-\alpha)$ is a Lie bialgebra. 
Moreover, $\psi$ \dreping the endo Lie algebra $(\g,\phi)$ whose algebra structure is given by $-\alpha$ if and only if $\psi$ \dreping the endo Lie algebra $(\g,\phi)$ whose algebra structure is given by $\alpha$. Therefore with the fact that $\phi^*$ \dreping $(\g^*,\psi^*)$, we obtain that $((\g^*,\psi^*),-\alpha,\phi^*)$ is an endo Lie bialgebra. Let $\{e_1, e_2, \cdots, e_n\}$ be a basis of $\g$ and $\{e^1, e^2, \cdots, e^n\}$ its dual basis. Let $r=\sum\limits^n_{i=1}e_i\otimes e^i$. Consider the Lie algebra $\g\bowtie \g^*$ induced by the matched pair $(\g,\g^*,\ad^*_\g,\ad^*_{\g^*})$. Define $$\delta_r(u)=(\id\otimes \ad_{\g\bowtie \g^*}(u)+\ad_{\g\bowtie \g^*}(u)\otimes \id)(r),\;\;\forall u\in \g\bowtie \g^*.$$ By Lemma~\mref{lem:abas}, $(\g\bowtie \g^*, \phi+\psi^*)$ is an endo Lie algebra that is \dreped by $(\psi+\phi^*)$. Hence Eq.~\meqref{eq:pduql} holds. Since \vspace{-.2cm} \begin{eqnarray*} ((\phi+\psi^*)\otimes \id- \id\otimes (\psi+ \phi^*))(r) &=&\sum^n_{i=1}(\phi(e_i)\otimes e^i-e_i\otimes \phi^*(e^i))=0,\\ ((\psi+\phi^*)\otimes \id- \id\otimes (\phi+ \psi^*))(r) &=&\sum^n_{i=1}(\psi(e_i)\otimes e^i-e_i\otimes \psi^*(e^i))=0, \end{eqnarray*} Eqs.~(\mref{eq:corbo})--(\mref{eq:P*admissible1}) hold. By \mcite{CP}, we know that $r$ satisfies Eqs.~\meqref{eq:4.111} and \meqref{eq:4.2} and hence $(\g\bowtie \g^*,[\;,\;]_{\bowtie},\delta_r)$ is a Lie bialgebra containing $(\g,[\;,\;]_\g,\delta)$ and $(\g^*,[\;,\;]_{\g^*},-\alpha)$ as Lie sub-bialgebras. Thus $((\g\bowtie \g^*, \phi+\psi^*), \delta_r, \psi+\phi^*)$ is an endo Lie bialgebra. It is obvious that it contains $((\g, \phi),\delta, \psi)$ and $((\g^*,\psi^*),-\alpha,\phi^*)$ as endo Lie sub-bialgebras. This completes the proof. \end{proof} Therefore there is the following construction of \weak homomorphisms on the doubles of Lie bialgebras. \begin{cor} Let $(\g,[\;,\;],\delta)$ be a Lie bialgebra and $(\phi,\psi)$ be a \weak endomorphism.
Let $\alpha:\g^*\mto \g^*\ot \g^*$ be the linear dual of the multiplication on $\g$. Then $(\g^*,[\;,\;]_{\g^*},-\alpha)$ is a Lie bialgebra, where $[\;,\;]_{\g^*}$ is the Lie bracket given by the linear dual of $\delta$, and $(\psi^*,\phi^*)$ is a \weak endomorphism. Furthermore, there is a Lie bialgebra structure on the direct sum $\g\oplus \g^*$ of the underlying vector spaces of $\g$ and $\g^*$ which contains the two Lie bialgebras as Lie sub-bialgebras and $(\phi+\psi^*,\psi+\phi^*)$ is a \weak homomorphism. \end{cor} \vspace{-.3cm} \subsection{Homomorphisms of classical $r$-matrices} We now lift the relation between classical $r$-matrices and Lie bialgebras to the level of categories. Theorem~\ref{thm:pq} immediately gives \begin{cor} \mlabel{cor:4.10} Let $(\g, \phi)$ be an endo Lie algebra and $\psi$ dually represent $(\g,\phi)$. Let $r\in \g\otimes \g$. Then the linear map $ \delta_r$ given by Eq.~\meqref{eq:4.1} induces an endo Lie bialgebra $((\g, \phi), \delta_r, \psi)$ if Eq.~\meqref{eq:4.111} and the following equations hold: \begin{equation}\mlabel{eq:4.10} [r_{12},r_{13}]+[r_{13},r_{23}]+[r_{12},r_{23}]=0, \end{equation} \begin{equation}\mlabel{eq:4.11} (\phi\otimes \id-\id\otimes \psi)(r)=0, \end{equation} \begin{equation}\mlabel{eq:4.11a} (\psi\otimes \id-\id\otimes \phi)(r)=0. \end{equation} \end{cor} Eq.~(\mref{eq:4.10}) is just the classical Yang-Baxter equation (CYBE) in a Lie algebra, and a solution of the CYBE in a Lie algebra is also called a {\bf classical $r$-matrix}. A Lie bialgebra $(\g,[\;,\;],\delta)$ is called {\bf quasi-triangular} if it is obtained from a classical $r$-matrix $r$ by Eq.~\meqref{eq:4.1} and is called {\bf triangular} if it is obtained from a skew-symmetric classical $r$-matrix $r$ (i.e. $r=-\tau(r)$). Moreover, it is straightforward to show that if $r$ is skew-symmetric, then Eq.~(\mref{eq:4.11}) holds if and only if Eq.~(\mref{eq:4.11a}) holds. \begin{defi} Let $(\g, \phi)$ be an endo Lie algebra. Let $r\in\g\otimes \g$ and $\psi\in\End(\g)$.
Then Eq.~(\mref{eq:4.10}) with conditions given by Eqs.~(\mref{eq:4.11}) and (\mref{eq:4.11a}) is called the {\bf $\psi$-classical Yang-Baxter equation ($\psi$-CYBE) in $(\g, \phi)$}. \mlabel{de:4.11} \end{defi} As in the case of endo Lie bialgebras, solutions of the $\psi$-CYBE in an endo Lie algebra motivate a notion of morphisms of classical $r$-matrices that is compatible with \weak homomorphisms of Lie bialgebras. \begin{defi}\mlabel{defi:isorr} Let $\g,\h$ be Lie algebras and $r_{\g},\;r_{\h}$ be classical $r$-matrices in $\g$ and $\h$ respectively. A {\bf \weak homomorphism} from $r_{\g}$ to $r_{\h}$ consists of a Lie algebra homomorphism $\phi:\g\rightarrow\h$ and a linear map $\psi:\h\mto \g$ satisfying \begin{eqnarray}(\psi\otimes {\rm id}_\h)(r_\h)&=&({\rm id}_\g\otimes \phi)(r_\g),\mlabel{eq:reqqq1}\\ ({\rm id}_\h\otimes \psi)(r_\h)&=&(\phi\otimes {\rm id}_\g)(r_\g),\mlabel{eq:reqqq2}\\ \psi[\phi(x),y]_\h&=&[x,\psi(y)]_\g,\;\;\forall x\in \g,y\in \h.\mlabel{eq:reqqq3} \end{eqnarray} If $\phi$ and $\psi$ are also linear isomorphisms, then $(\phi,\psi)$ is called a {\bf \weak isomorphism} from $r_\g$ to $r_\h$. Let ${\bf Cr}$ denote the category of classical $r$-matrices with the morphisms thus defined. \end{defi} Then by Definitions~\ref{de:4.11} and~\ref{defi:isorr}, we obtain \begin{pro} \mlabel{pp:endocybe} Let $(\g, \phi)$ be an endo Lie algebra and $\psi\in \End(\g)$ dually represent $(\g, \phi)$. Then $r\in \g\ot \g$ is a solution of the $\psi$-CYBE in the endo Lie algebra $(\g,\phi)$ if and only if $r\in\g\ot\g$ is a classical $r$-matrix and $(\phi,\psi)$ is a \weak endomorphism on $r$. \end{pro} Recall from~\mcite{CP} that two classical $r$-matrices $r_1$ and $r_2$ in a Lie algebra $\g$ are said to be {\bf equivalent} if there is a Lie algebra isomorphism $\phi:\g\mto\g$ such that $(\phi\otimes \phi)(r_1)=r_2$. \begin{cor} Let $\g$ be a Lie algebra and $r_1,\;r_2$ be classical $r$-matrices in $\g$.
Then $r_1$ is equivalent to $r_2$ if and only if there exists a Lie algebra isomorphism $\phi:\g\mto\g$ such that $(\phi,{\phi^{-1}})$ is a \weak isomorphism from $r_1$ to $r_2$. \mlabel{co:r-eq} \end{cor} \begin{proof} If $\phi:\g\mto\g$ is an equivalence from $r_1$ to $r_2$, then it is straightforward to check that $(\phi,{\phi^{-1}})$ satisfies Eqs.~(\mref{eq:reqqq1})--(\mref{eq:reqqq3}). Conversely, Eq.~(\mref{eq:reqqq1}) implies $(\phi\otimes \phi)(r_1)=r_2$. \end{proof} \begin{rmk} When both $r_1$ and $r_2$ are skew-symmetric $r$-matrices in the same Lie algebra $\g$, Eq.~(\mref{eq:reqqq1}) holds if and only if Eq.~(\mref{eq:reqqq2}) holds. On the other hand, there is a notion of weak homomorphism between two skew-symmetric $r$-matrices in a Lie algebra $\g$ given in \mcite{TBGS}, defined by Eqs.~(\mref{eq:reqqq1}) and (\mref{eq:reqqq3}) only, that is, without Eq.~(\mref{eq:reqqq2}). The two notions coincide in the skew-symmetric case. Note that Definition~\ref{defi:isorr} is valid without the skew-symmetric restriction and in different Lie algebras. \end{rmk} Recall that for a Lie algebra $\g$ and a classical $r$-matrix $r$ in $\g$ satisfying Eq.~\meqref{eq:4.111}, the triple $(\g,[\;,\;],\delta_r)$ is a quasi-triangular Lie bialgebra. On the level of categories, we obtain \begin{thm} \mlabel{co:r12} Let $\g,\h$ be Lie algebras and $r_\g,r_\h$ be classical $r$-matrices in $\g$ and $\h$ respectively satisfying Eq.~\meqref{eq:4.111}. If $(\phi, \psi)$ is a \weak homomorphism of the classical $r$-matrices from $r_\g$ to $r_\h$, then $(\phi,\psi)$ is a \weak homomorphism of the corresponding Lie bialgebras from $(\g,[\;,\;]_\g,\delta_{r_\g})$ to $(\h,[\;,\;]_\h, \delta_{r_\h})$. This correspondence defines a functor from the category ${\bf Crs}$ of classical $r$-matrices satisfying Eq.~\meqref{eq:4.111}, as a full sub-category of ${\bf Cr}$, to the category ${\bf QTLB}$ of quasi-triangular Lie bialgebras, as a full sub-category of ${\bf LB}$.
\end{thm} \begin{proof} For the given pair $(\phi,\psi)$, an argument similar to the proof of Theorem~\mref{thm:pq} yields \begin{enumerate} \item \mlabel{it:pq1-1} $(\psi\ot \psi)\delta_{r_\h}=\delta_{r_\g} \psi$ if and only if for any $x\in \h$, \begin{eqnarray}\mlabel{eq:corbob} &&(\psi\ad_\h(x)\ot\id_\g) ((\id_\h\ot \psi)(r_\h)-(\phi\ot \id_\g)(r_\g))\nonumber\\&&+(\id_\g\ot \psi\ad_\h(x))((\psi\ot \id_\h)(r_\h)-(\id_\g\ot \phi)(r_\g))=0. \end{eqnarray} \item \mlabel{it:pq2-1} $(\id_\g\otimes \phi)\delta_{r_\g}=(\psi\otimes\id_\h)\delta_{r_\h} \phi$ if and only if for any $x\in \g$, \begin{eqnarray}\mlabel{eq:P*admissible1b} &&(\ad_\g(x)\ot \id_\h+\id_\g\ot \ad_\h \phi(x))((\psi\ot \id_\h)(r_\h)-(\id_\g\ot \phi)(r_\g))=0. \end{eqnarray} \end{enumerate} Since $(\phi,\psi)$ is a \weak homomorphism of classical $r$-matrices, the differences $(\id_\h\ot \psi)(r_\h)-(\phi\ot \id_\g)(r_\g)$ and $(\psi\ot \id_\h)(r_\h)-(\id_\g\ot \phi)(r_\g)$ vanish by Eqs.~\meqref{eq:reqqq2} and \meqref{eq:reqqq1}. Thus $(\phi,\psi)$ is a \weak homomorphism of Lie bialgebras from $(\g,[\;,\;]_\g,\delta_{r_\g})$ to $(\h,[\;,\;]_\h, \delta_{r_\h})$. The rest of the proof is straightforward. \end{proof} Specializing to the skew-symmetric case gives the conclusion of \cite[Proposition 7.17]{TBGS}, which was obtained there by a different approach using $\mathcal O$-operators. \section{Homomorphisms of $\calo$-operators and pre-Lie algebras} \mlabel{sec:oop} We define $\mathcal O$-operators on endo Lie algebras and endo pre-Lie algebras. Then an analysis similar to that of the previous sections leads us to define \weak homomorphisms of $\mathcal O$-operators and of pre-Lie algebras, giving rise to the categories of $\mathcal O$-operators and of pre-Lie algebras. Moreover, there are natural functors between these categories and further to the category of triangular Lie bialgebras, completing Diagram~\meqref{eq:bigdiagcat}. As in the previous sections, underneath these functors among categories, there are correspondences among the $\calo$-operator and pre-Lie algebra structures on endo Lie algebras. The discussion is similar and will not be elaborated further.
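Before developing $\calo$-operators, we record a small worked example of the notions of Section~\mref{sec:rmat}. It is a sketch of ours (not taken from the cited literature), carried out in the two-dimensional non-abelian Lie algebra.

```latex
\begin{ex}
Let $\g$ have basis $\{e_1,e_2\}$ with $[e_1,e_2]=e_2$, and take the skew-symmetric
element $r=e_1\otimes e_2-e_2\otimes e_1$. A direct computation gives
$$[r_{12},r_{13}]=e_2\otimes e_1\otimes e_2-e_2\otimes e_2\otimes e_1,\quad
[r_{13},r_{23}]=e_1\otimes e_2\otimes e_2-e_2\otimes e_1\otimes e_2,$$
$$[r_{12},r_{23}]=e_2\otimes e_2\otimes e_1-e_1\otimes e_2\otimes e_2,$$
so the three terms cancel and Eq.~\meqref{eq:4.10} holds, while Eq.~\meqref{eq:4.111}
holds since $r+\tau(r)=0$. Thus $r$ is a skew-symmetric classical $r$-matrix. For
$\lambda\in K$, let $\phi$ be the Lie algebra endomorphism with $\phi(e_1)=e_1$,
$\phi(e_2)=\lambda e_2$ and let $\psi$ be the linear map with $\psi(e_1)=\lambda e_1$,
$\psi(e_2)=e_2$. Then
$$(\phi\otimes\id)(r)=e_1\otimes e_2-\lambda\, e_2\otimes e_1=(\id\otimes\psi)(r),\quad
(\psi\otimes\id)(r)=\lambda\, e_1\otimes e_2-e_2\otimes e_1=(\id\otimes\phi)(r),$$
so Eqs.~\meqref{eq:4.11} and \meqref{eq:4.11a} hold and $r$ is a solution of the
$\psi$-\cybe in $(\g,\phi)$. Since $\psi[\phi(x),y]=[x,\psi(y)]$ on basis elements,
$(\phi,\psi)$ is also a \weak endomorphism on $r$ in the sense of
Definition~\mref{defi:isorr}. Finally, $r^\sharp(e^1)=e_2$ and $r^\sharp(e^2)=-e_1$,
which anticipates the $\calo$-operator perspective developed below.
\end{ex}
```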
\subsection{$\mathcal{O}$-operators on endo Lie algebras and homomorphisms of $\calo$-operators} \mlabel{ss:roo} For a vector space $\g$, through the isomorphism $\g\ot \g\cong \Hom(\g^*,K)\ot \g\cong \Hom(\g^*,\g)$, any $r\in \g\otimes \g$ is identified with a map from $\g^*$ to $\g$ which we denote by $r^\sharp$. Explicitly, writing $r=\sum_{i}a_i\otimes b_i$, then \begin{equation} r^\sharp:\g^*\mto \g, \quad r^\sharp(a^*)=\sum_{i}\langle a^*, a_i \rangle b_i, ~~\forall a^*\in \g^*. \mlabel{eq:4.12} \end{equation} Note that $r$ is skew-symmetric if and only if \begin{equation} \langle r^\sharp(a^*), b^*\rangle+\langle a^*,r^\sharp(b^*)\rangle=0,\;\;\forall a^*,b^*\in \g^*.\label{eq:skew-symmetry} \end{equation} Recall that an {\bf $\mathcal O$-operator} on a Lie algebra $\g$ associated to a representation $(V,\rho)$ is a linear map $T:V\rightarrow \g$ satisfying \begin{equation}\mlabel{eq:4.18} [T(u),T(v)]=T(\rho(T(u))v-\rho(T(v))u), \;\;\forall u, v\in V. \end{equation} For an endo Lie algebra, the corresponding notion is \begin{defi} \mlabel{de:4.17} Let $(\g, \phi)$ be an endo Lie algebra. Let $(V, \rho)$ be a representation of the Lie algebra $\g$ and $\alpha:V\mto V$ be a linear map. A linear map $T: V\mto \g$ is called an {\bf $\mathcal{O}$-operator on $(\g,\phi)$ associated to $(V, \rho)$ and $\alpha$} if $T$ satisfies Eq.~\meqref{eq:4.18} and \begin{equation}\mlabel{eq:4.17} \phi T=T \alpha. \end{equation} If in addition, $(V,\rho,\alpha)$ is a representation of $(\g,\phi)$, then $T$ is called an {\bf $\mathcal{O}$-operator associated to $(V, \rho,\alpha)$.} \end{defi} We have the following relationship between $\calo$-operators for endo Lie algebras and solutions of the CYBE in endo Lie algebras. \begin{pro} Let $(\g, \phi)$ be an endo Lie algebra and $\psi:\g\mto \g$ be a linear map. Suppose that $r\in \g\otimes \g$ is skew-symmetric. 
Then $r$ is a solution of the $\psi$-\cybe in $(\g, \phi)$ if and only if $r^\sharp$ is an $\mathcal{O}$-operator on $(\g,\phi)$ associated to $(\g^*, \ad^*)$ and $\psi^*$. If in addition, $\psi$ \dreping $(\g, \phi)$, then $r$ is a solution of the $\psi$-\cybe in $(\g, \phi)$ if and only if $r^\sharp$ is an $\mathcal{O}$-operator on $(\g, \phi)$ associated to the representation $(\g^*,\ad^*,\psi^*)$. \mlabel{ex:4.20} \end{pro} \begin{proof} By \mcite{Ku}, $r$ is a solution of the \cybe in the Lie algebra $\g$ if and only if \begin{equation} [r^\sharp(a^*),r^\sharp(b^*)]=r^\sharp({\ad}^*(r^\sharp(a^*))b^*-\ad^*(r^\sharp(b^*))a^*), \quad \forall a^*, b^*\in \g^*, \mlabel{eq:4.161} \end{equation} that is, $r^\sharp:\g^*\mto \g$ is an $\mathcal O$-operator on $\g$ associated to the representation $(\g^*,\ad^*)$. Moreover, writing $r=\sum_{i=1}^{n}a_i\otimes b_i$, for any $a^*\in \g^*$ we have $$ r^\sharp(\psi^*(a^*))=\sum\limits_{i=1}^{n}\langle \psi^*(a^*), a_i \rangle b_i= \sum\limits_{i=1}^{n}\langle a^*, \psi(a_i) \rangle b_i, \quad \phi(r^\sharp(a^*))=\sum\limits_{i=1}^{n}\langle a^*, a_i \rangle \phi(b_i). $$ So $\phi r^\sharp=r^\sharp \psi^*$ if and only if Eq.~\meqref{eq:4.11a} holds, which is equivalent to Eq.~\meqref{eq:4.11} by the skew-symmetry of $r$. This completes the proof. \end{proof} We next show that the notion of $\calo$-operators for endo Lie algebras naturally gives the following notion of morphisms of $\calo$-operators for Lie algebras. \begin{defi} Let $T_\g$ and $T_\h$ be $\mathcal O$-operators on Lie algebras $\g$ and $\h$ associated to representations $(V_\g,\rho_\g)$ and $(V_\h,\rho_\h)$ respectively.
A {\bf (\weak) homomorphism of $\calo$-operators} from $T_\g$ to $T_\h$ consists of a Lie algebra homomorphism $\phi:\g\mto\h$ and a linear map $\alpha:V_\g\mto V_\h$ such that for all $x\in\g, v\in V_\g$, \begin{eqnarray} \alpha\rho_\g(x)(v)&=&\rho_\h(\phi(x))(\alpha(v)),\mlabel{defi:isocon2b}\\ T_\h\alpha &=&\phi T_\g.\mlabel{defi:isocon1b} \end{eqnarray} In particular, if $\phi$ and $\alpha$ are invertible, then $(\phi,\alpha)$ is called an {\bf isomorphism} from $T_\g$ to $T_\h$. Let ${\bf OP}$ denote the category of $\mathcal O$-operators with the morphisms thus defined. \mlabel{defi:isoOb} \end{defi} Indeed, we immediately have \begin{cor} Let $(\g, \phi)$ be an endo Lie algebra. Let $(V,\rho)$ be a representation of the Lie algebra $\g$ and $\alpha:V\rightarrow V$ be a linear map. Then $(V,\rho,\alpha)$ is a representation of $(\g,\phi)$ and $T$ is an $\mathcal{O}$-operator on $(\g,\phi)$ associated to $(V, \rho,\alpha)$ if and only if $T$ is an $\mathcal O$-operator on the Lie algebra $\g$ associated to the representation $(V,\rho)$ and $(\phi,\alpha)$ is an endomorphism on the $\mathcal O$-operator $T$. \mlabel{cor:homO} \end{cor} We now show that the notion of homomorphisms of $\calo$-operators is compatible with \weak homomorphisms of classical $r$-matrices. \begin{thm} \mlabel{pp:oprm} Let $r_\g,\;r_\h$ be skew-symmetric classical $r$-matrices in Lie algebras $\g$ and $\h$ respectively. Let $\phi:\g\mto \h$ be a Lie algebra homomorphism and $\psi:\h\mto \g$ be a linear map. Then $(\phi, \psi)$ is a \weak homomorphism of classical $r$-matrices from $r_\g$ to $r_\h$ if and only if $(\phi,\psi^*)$ is a homomorphism of $\calo$-operators from $r_\g^\sharp$ to $r_\h^\sharp$.
This correspondence defines an equivalence from the category ${\bf SCr}$ of skew-symmetric classical $r$-matrices, as a full sub-category of the category ${\bf Crs}$ of classical $r$-matrices satisfying Eq.~\meqref{eq:4.111}, to the category ${\bf SOP}_{\rm coad}$ of $\mathcal O$-operators on Lie algebras associated to the coadjoint representations satisfying Eq.~\meqref{eq:skew-symmetry}, as a full subcategory of ${\bf OP}$ of $\mathcal O$-operators. \end{thm} \begin{proof} We only need to prove the first conclusion. Let $x\in \g, y\in \h$ and $a^*\in \g^*$. Then \begin{eqnarray*} \langle \psi^*\ad_\g^*(x)a^*, y\rangle&=&\langle \ad_\g^*(x)a^*,\psi(y)\rangle=-\langle a^*,[x,\psi(y)]_\g\rangle,\\ \langle \ad_\h^*(\phi(x))(\psi^*(a^*)),y\rangle&=&-\langle \psi^*(a^*),[\phi(x),y]_\h\rangle=-\langle a^*,\psi[\phi(x),y]_\h\rangle. \end{eqnarray*} Hence $\psi^*\ad_\g^*(x)a^*=\ad_\h^*(\phi(x))(\psi^*(a^*))$ if and only if $[x,\psi(y)]_\g=\psi[\phi(x),y]_\h$. Let $a^*\in\g^*$ and $b^*\in \h^*$. Then we have \begin{eqnarray*} \langle r_\h^\sharp\psi^*(a^*),b^*\rangle&=&\langle r_\h, \psi^*(a^*)\otimes b^*\rangle=\langle r_\h,-b^*\otimes \psi^*(a^*)\rangle =\langle (\id_\h\otimes \psi)(r_\h),-b^*\otimes a^*\rangle,\\ \langle \phi r_\g^\sharp(a^*),b^*\rangle&=&\langle r_\g, a^*\otimes \phi^*(b^*)\rangle=\langle r_\g,-\phi^*(b^*)\otimes a^*\rangle=\langle (\phi\otimes \id_\g)(r_\g),-b^*\otimes a^*\rangle. \end{eqnarray*} Hence $r_\h^\sharp\psi^*(a^*)=\phi r_\g^\sharp(a^*)$ if and only if $(\psi\otimes \id_\h)(r_\h)=(\id_\g\otimes \phi)(r_\g)$, and if and only if $(\id_\h\otimes \psi)(r_\h)=(\phi\otimes \id_\g)(r_\g)$. Therefore the conclusion follows. \end{proof} \begin{rmk} When $\g=\h$, the above conclusion is exactly \cite[Proposition~7.10]{TBGS}. \end{rmk} \subsection{From $\mathcal O$-operators to classical $r$-matrices} \mlabel{ss:oor} In Section~\mref{ss:roo}, we saw that a skew-symmetric solution of the CYBE gives an $\calo$-operator associated to the coadjoint representation.
Going in the opposite direction, an $\mathcal O$-operator gives rise to a solution of the CYBE in the semi-direct product Lie algebra as follows. \begin{lem}\label{lem:rt}{\rm (\cite{Bai2})} Let $\g$ be a Lie algebra and $(V,\rho)$ be a representation. Let $T: V\mto \g$ be a linear map which is identified with an element in $(\g\ltimes_{\rho^*} V^*)\otimes (\g\ltimes_{\rho^*} V^*)$ $($through ${\rm Hom}(V,\g)\cong V^*\otimes \g\subseteq (\g\ltimes_{\rho^*} V^*)\otimes (\g\ltimes_{\rho^*} V^*)$$)$. Then \begin{equation} r_{\scriptscriptstyle{T}}:=T-\sigma(T) \mlabel{eq:oor} \end{equation} is a skew-symmetric classical $r$-matrix in the Lie algebra $\g\ltimes_{\rho^*} V^*$ if and only if $T$ is an $\mathcal O$-operator on $\g$ associated to $(V,\rho)$. \end{lem} We now lift Lemma~\ref{lem:rt} to the level of morphisms, that is, we use a \weak homomorphism of $\calo$-operators on Lie algebras to induce a \weak homomorphism of classical $r$-matrices in the corresponding semi-direct product Lie algebras. Since the latter Lie algebras are much larger, extra constraints are needed to give a well-defined correspondence. \begin{thm}\mlabel{cor:maincon} Let $\g$ and $\h$ be Lie algebras, and $(V_\g,\rho_\g), (V_\h,\rho_\h)$ be representations of $\g$ and $\h$ respectively. Let $T_\g: V_\g\to \g$ and $T_\h:V_\h\to \h$ be $\calo$-operators, and $r_{\scriptscriptstyle{T_\g}}$ and $r_{\scriptscriptstyle{T_\h}}$ be the corresponding skew-symmetric classical $r$-matrices defined in Lemma~\mref{lem:rt}. Let $(\phi,\alpha)$ be a homomorphism of $\mathcal O$-operators from $T_\g$ to $T_\h$.
Then for linear maps $\psi:\h\to \g$ and $\beta:V_\h\to V_\g$, the pair $(\phi+ \beta^*,\psi+\alpha^*)$ is a \weak homomorphism from $r_{\scriptscriptstyle{T_\g}}$ to $r_{\scriptscriptstyle{T_\h}}$ if and only if Eq.~\meqref{eq:reqqq3} and the following equations hold \begin{eqnarray} \label{eq:ttt}T_\g\beta&=&\psi T_\h,\\ \label{eq:hh1} \beta(\rho_\h(\phi(x))b)&=&\rho_\g(x)(\beta(b)),\;\;\forall x\in \g,b\in V_\h,\\ \beta(\rho_\h(y)\alpha(a))&=&\rho_\g(\psi(y))a,\;\;\forall y\in \h,a\in V_\g.\label{eq:hh2} \end{eqnarray} \end{thm} \begin{proof} It is straightforward to deduce that the linear map $\phi+\beta^*:\g\ltimes_{\rho_\g^*}V_\g^*\rightarrow \h\ltimes_{\rho_\h^*}V_\h^*$ is a homomorphism of Lie algebras if and only if $\phi$ is a homomorphism of Lie algebras and Eq. \meqref{eq:hh1} holds. Let $\{e_1, e_2,\cdots, e_n\}$ be a basis of $V_\g$ and $\{e^1, e^2, \cdots, e^n\}$ be its dual basis. Let $\{f_1, f_2,\cdots, f_m\}$ be a basis of $V_\h$ and $\{f^1, f^2,\cdots, f^m\}$ be its dual basis. Then the $2$-tensors of the $\calo$-operators $T_\g$ and $T_\h$ are $\sum_{i=1}^n T_\g(e_i)\otimes e^i$ and $\sum_{j=1}^m T_\h(f_j)\otimes f^j$ respectively. Hence \vspace{-.1cm} $$r_{\scriptscriptstyle{T_\g}}=\sum_{i=1}^n (T_\g(e_i)\otimes e^i-e^i\otimes T_\g(e_i)),\;\;r_{\scriptscriptstyle{T_\h}}=\sum_{j=1}^m (T_\h(f_j)\otimes f^j-f^j\otimes T_\h(f_j)),$$ giving \vspace{-.3cm} {\small \begin{eqnarray*} ((\phi+\beta^*)\otimes {\rm id}_{\g\ltimes_{\rho_\g^*} V_\g^*})(r_{\scriptscriptstyle{T_\g}})&=&\sum_{i=1}^n(\phi T_\g(e_i)\otimes e^i-\beta^*(e^i)\otimes T_\g(e_i)),\\ ({\rm id_{\h\ltimes_{\rho_\h^*} V_\h^*}}\otimes (\psi+\alpha^*))(r_{\scriptscriptstyle{T_\h}})&=&\sum_{j=1}^m(T_\h(f_j)\otimes \alpha^*(f^j)-f^j\otimes \psi T_\h(f_j)). 
\end{eqnarray*} } Further, {\small \begin{eqnarray*} \sum_{i=1}^n \beta^*(e^i)\otimes T_\g(e_i)&=&\sum_{i=1}^n\sum_{j=1}^m\langle \beta^*(e^i),f_j\rangle f^j\otimes T_\g(e_i)= \sum_{j=1}^m f^j\otimes \sum_{i=1}^n\langle e^i,\beta(f_j)\rangle T_\g(e_i)\\ &=&\sum_{j=1}^m f^j\otimes T_\g(\sum_{i=1}^n\langle \beta(f_j), e^i\rangle e_i)=\sum_{j=1}^m f^j\otimes T_\g \beta(f_j), \end{eqnarray*} } and similarly, $\sum_{j=1}^m T_\h(f_j)\otimes \alpha^*(f^j)= \sum_{i=1}^n T_\h \alpha (e_i)\otimes e^i$. Then we obtain $$((\phi+\beta^*)\otimes {\rm id}_{\g\ltimes_{\rho_\g^*} V_\g^*})(r_{\scriptscriptstyle{T_\g}})=({\rm id_{\h\ltimes_{\rho_\h^*} V_\h^*}}\otimes (\psi+\alpha^*))(r_{\scriptscriptstyle{T_\h}})$$ if and only if Eqs. \eqref{defi:isocon1b} and \eqref{eq:ttt} hold. One similarly derives that $$( {\rm id}_{\g\ltimes_{\rho_\g^*} V_\g^*}\otimes(\phi+\beta^*))(r_{\scriptscriptstyle{T_\g}})=( (\psi+\alpha^*)\otimes{\rm id_{\h\ltimes_{\rho_\h^*} V_\h^*}})(r_{\scriptscriptstyle{T_\h}})$$ if and only if Eqs. \eqref{defi:isocon1b} and \eqref{eq:ttt} hold. Finally, it is straightforward to deduce that Eq.~\meqref{eq:reqqq3} holds (where $\g$ is replaced by $\g\ltimes_{\rho_\g^*}V_\g^*$, $\h$ by $\h\ltimes_{\rho_\h^*}V_\h^*$, $\phi$ by $\phi+\beta^*$ and $\psi$ by $\psi+\alpha^*$) if and only if Eqs.~\meqref{eq:reqqq3}, \meqref{defi:isocon2b} and (\ref{eq:hh2}) hold. Therefore the proof is completed. \end{proof} Applying Theorems~\mref{cor:maincon} and ~\ref{co:r12}, we obtain a large supply of \weak homomorphisms of Lie bialgebras. \begin{cor} \mlabel{co:opliebial} Under the assumption of Theorem~\mref{cor:maincon}, if the linear maps $\phi,\alpha,\psi,\beta$ satisfy Eqs.~\meqref{eq:reqqq3} and \meqref{eq:ttt}--\meqref{eq:hh2}, then the pair $(\phi+ \beta^*,\psi+\alpha^*)$ is a \weak homomorphism between the Lie bialgebras $(\g\ltimes_{\rho_\g^*} V_\g^*,\delta_{r_{\scriptscriptstyle{T_\g}}})$ and $(\h\ltimes_{\rho_\h^*} V_\h^*,\delta_{r_{\scriptscriptstyle{T_\h}}})$. 
\end{cor} To obtain from Theorem~\mref{cor:maincon} a functor from the category of $\calo$-operators to the category of classical $r$-matrices, there needs to be a consistent choice of $\psi$ and $\beta$. \begin{cor}\label{cor:zero} Under the assumption of Theorem~\mref{cor:maincon}, the assignment of objects $$ (T:V_\g\to \g) \mapsto r_{\scriptscriptstyle{T}},$$ and the assignment of morphisms $$ \big((\phi,\alpha):T_\g\to T_\h\big) \mapsto \big((\phi,\alpha^*): r_{\scriptscriptstyle{T_\g}}\to r_{\scriptscriptstyle{T_\h}}\big)$$ define a functor from the category ${\bf OP}$ of $\mathcal O$-operators to the category ${\bf SCr}$ of skew-symmetric classical $r$-matrices. \end{cor} \begin{proof} Under the assumption of Theorem~\mref{cor:maincon}, it is obvious that $\beta=\psi=0$ satisfies Eqs.~\meqref{eq:reqqq3} and \meqref{eq:ttt}--\meqref{eq:hh2}. Hence the conclusion holds. \end{proof} The following two examples give other choices for $\beta$ and $\psi$ in Theorem~\mref{cor:maincon}. \begin{ex}\label{co:con} In Theorem~\mref{cor:maincon}, take $\g=\h, V_\g=V_\h=V$ and $\rho_\g=\rho_\h=\rho$. Further take $\beta=\pm \alpha$ and $\psi=\pm \phi$. In accordance with Theorem~\mref{cor:maincon}, assume that $T:V\rightarrow \g$ is an $\mathcal O$-operator on $\g$ associated to $(V,\rho)$ satisfying $T\alpha=\phi T$ and the following equations: \begin{eqnarray} && \alpha\rho(x)=\rho(\phi(x))\alpha, \quad \mlabel{eq:semi1} \rho(x)\alpha=\alpha\rho(\phi(x)),\\ && [\phi^2(x),\phi(y)]=[x,\phi(y)], \quad \alpha\rho(x)\alpha=\rho(\phi(x)), \quad \forall x\in \g.\mlabel{eq:semi4} \end{eqnarray} Then $r_{\scriptscriptstyle{T}}=T-\sigma(T)$ is a skew-symmetric classical $r$-matrix in the Lie algebra $\g\ltimes_{\rho^*} V^*$ and $(\phi \pm \alpha^*,\pm \phi+\alpha^*)$ is a \weak endomorphism on $r_{\scriptscriptstyle{T}}$. Moreover, $(\phi\pm\alpha^*,\pm\phi+\alpha^*)$ is a \weak endomorphism of the Lie bialgebra $(\g\ltimes_{\rho^*} V^*,\delta_{r_{\scriptscriptstyle{T}}})$.
\end{ex} \vspace{-.2cm} \begin{ex} Let the Lie algebras $\g, \h$, the representations $(V_\g,\rho_\g), (V_\h,\rho_\h)$, the linear maps $\phi, \alpha$ and the $\calo$-operators $T_\g, T_\h$ be as in Theorem~\mref{cor:maincon}. Further assume that $(\phi,\alpha)$ is an isomorphism of $\mathcal O$-operators from $T_\g$ to $T_\h$, that is, $\phi$ and $\alpha$ are linear bijections. For $0\ne \theta\in K$, take $\beta=\theta \alpha^{-1}$ and $\psi=\theta \phi^{-1}$. Then Eq.~(\mref{eq:reqqq3}) holds automatically since it is equivalent to the fact that $\phi$ is an isomorphism of Lie algebras from $\g$ to $\h$. Also Eqs.~\meqref{eq:ttt}, \meqref{eq:hh1} and \meqref{eq:hh2} hold since $(\phi,\alpha)$ is an isomorphism of $\mathcal O$-operators from $T_\g$ to $T_\h$. Thus by Theorem~\ref{cor:maincon}, $(\phi+\theta(\alpha^{-1})^*,\theta \phi^{-1}+\alpha^*)$ is a \weak isomorphism between the skew-symmetric classical $r$-matrices $r_{\scriptscriptstyle{T_\g}}$ and $r_{\scriptscriptstyle{T_\h}}$. Furthermore, $(\phi+\theta(\alpha^{-1})^*,\theta \phi^{-1}+\alpha^*)$ is a \weak isomorphism between the Lie bialgebras $(\g\ltimes_{\rho_\g^*} V_\g^*,\delta_{r_{\scriptscriptstyle{T_\g}}})$ and $(\h\ltimes_{\rho_\h^*} V_\h^*,\delta_{r_{\scriptscriptstyle{T_\h}}})$ in Corollary~\mref{co:opliebial}. \end{ex} \vspace{-.2cm} \subsection{Functors among $\calo$-operators, pre-Lie algebras and Lie bialgebras} \mlabel{sec:dend} Now we consider the category of pre-Lie algebras and obtain an adjoint pair of functors between it and the category of $\calo$-operators. \vspace{-.1cm} \begin{defi} Let $A$ be a vector space with a bilinear product denoted by $\cdot$. Then $(A, \cdot)$ is called a {\bf pre-Lie algebra} if \begin{equation} x\cdot(y\cdot z)-(x \cdot y)\cdot z=y\cdot (x\cdot z)-(y\cdot x)\cdot z,\;\;\forall x,y,z\in A.
\mlabel{eq:PreLie} \end{equation} A {\bf homomorphism} $f$ from a pre-Lie algebra $(A,\cdot_A)$ to $(B,\cdot_B)$ is defined as usual, that is, $f$ is a linear map satisfying $f(x\cdot_A y)=f(x)\cdot_B f(y)$ for all $x,y\in A$. Let {\bf PL} denote the category of pre-Lie algebras with the morphisms thus defined. \end{defi} For any $a$ in a pre-Lie algebra $A$, let $L(a)$ denote the left multiplication operator, that is, $L(a)b=a\cdot b$ for all $b\in A$. Furthermore, define the linear map \vspace{-.1cm} $$L: A\mto \End(A),~a\mapsto L(a).$$ As is well known, for a pre-Lie algebra $(A,\cdot)$, the commutator \begin{equation} [a,b]=a\cdot b-b\cdot a,\;\; \forall a,b\in A\mlabel{eq:5.41}\end{equation} defines a Lie algebra $(\g(A),[\;,\;])$, called the {\bf sub-adjacent Lie algebra} of $(A,\cdot)$. Moreover, $(A,L)$ is a representation of the Lie algebra $(\g(A),[\;,\;])$~\mcite{Bai2}. \begin{defi} An {\bf \plhp} is a triple $(A,\cdot, \phi)$ consisting of a pre-Lie algebra $(A,\cdot)$ and a pre-Lie algebra endomorphism $\phi$ on $A$.\mlabel{de:5.1} \end{defi} \vspace{-.2cm} \begin{pro} Let $(A, \cdot, \phi)$ be an \plhp. Then $(\g(A), [\;,\;], \phi)$ is an endo Lie algebra, where $[\;,\;]$ is given by Eq.~\meqref{eq:5.41}. Moreover, $(A, L, \phi)$ is a representation of the endo Lie algebra $(\g(A), [\;,\;], \phi)$. Conversely, let $(\g(A), [\;,\;], \phi)$ be an endo Lie algebra. Suppose that there is a bilinear product $\cdot$ on $A$ such that Eq.~\meqref{eq:5.41} holds and $(A, L, \phi)$ is a representation of $(\g(A), [\;,\;], \phi)$. Then $(A,\cdot, \phi)$ is an endo pre-Lie algebra. \mlabel{pro:5.3} \end{pro} \vspace{-.1cm} \begin{proof} This follows from a direct check using Eq.~(\mref{eq:repn}) and Definition~\mref{de:5.1}.
\end{proof} \newpage \begin{rmk} From the viewpoint of operads, due to the second part of the above conclusion, the operad of endo pre-Lie algebras is the bisuccessor (2-splitting) of the operad of endo Lie algebras, which is consistent with the fact that the operad of pre-Lie algebras is the bisuccessor of the operad of Lie algebras~\cite{BBGN}. \end{rmk} \vspace{-.2cm} \begin{defi} Let $(A, \cdot, \phi)$ be an \plhp. Let $[\;,\;]$ be the product given by Eq.~(\mref{eq:5.41}). The triple $(\g(A),[\;,\;] , \phi)$ is called the {\bf sub-adjacent endo Lie algebra of $(A, \cdot, \phi)$} and $(A, \cdot, \phi)$ is called a {\bf compatible \plhp structure on the endo Lie algebra $(\g(A), [\;,\;], \phi)$}. \mlabel{de:5.4} \end{defi} Then Proposition~\mref{pro:5.3} and Corollary~\ref{cor:homO} give the following conclusion. \begin{cor} Let $(A, \cdot, \phi)$ be an \plhp. Then the identity map $\id_A:A\rightarrow \g(A)$ on $A$ is an $\mathcal O$-operator of the sub-adjacent endo Lie algebra $(\g(A), [\;,\;], \phi)$ associated to the representation $(A, L, \phi)$. Equivalently, let $A$ be a pre-Lie algebra and $\phi:A\rightarrow A$ be a pre-Lie homomorphism. Then the identity map $\id_A$ on $A$ is an $\mathcal O$-operator of the sub-adjacent Lie algebra $\g(A)$ associated to the representation $(A,L)$ and $(\phi,\phi)$ is an endomorphism of $\mathcal O$-operators on $\id$. \mlabel{co:denid} \end{cor} \vspace{-.2cm} The second part of the above conclusion can be generalized as follows. \begin{pro} Let $(A,\cdot_A)$ and $(B,\cdot_B)$ be pre-Lie algebras and $\phi:A\mto B$ be a pre-Lie algebra homomorphism. The assignments \begin{eqnarray*} (A,\cdot) & \mapsto& (\id_A:A\mto \g(A)),\\ \phi& \mapsto & ((\phi,\phi): \id_A\mto \id_B) \end{eqnarray*} \vspace{-.1cm} define a functor $F$ from the category $\PL$ of pre-Lie algebras to the category $\OP$ of $\calo$-operators.
\mlabel{pro:prelieo} \end{pro} \begin{proof} Since $\phi$ is also a homomorphism between the sub-adjacent Lie algebras and $\id_B\phi=\phi\id_A$, the pair $(\phi,\phi)$ is a (\weak) homomorphism between the $\calo$-operators $\id_A$ and $\id_B$. The other axioms of functors are easy to verify. \end{proof} In the other direction, by~\mcite{Bai2}, for a representation $(V,\rho)$ of a Lie algebra $\g$ and an $\calo$-operator $T:V\mto \g$ on $\g$ associated to $(V,\rho)$, the operation $u\cdot_T v:= \rho(T(u))v,~ u, v\in V$, defines a pre-Lie algebra $(V,\cdot_T)$. \begin{pro} Let $(\g,[\;,\;]_\g)$, $(\h,[\;,\;]_\h)$ be Lie algebras and $(V_\g, \rho_\g)$, $(V_\h, \rho_\h)$ be representations of $(\g,[\;,\;]_\g)$, $(\h,[\;,\;]_\h)$ respectively. Let $T_\g: V_\g\mto \g$ be an $\mathcal{O}$-operator associated to $(V_\g, \rho_\g)$ and $T_\h: V_\h\mto \h$ be an $\mathcal{O}$-operator associated to $(V_\h, \rho_\h)$. Suppose that $(\phi,\alpha)$ is a homomorphism of $\mathcal O$-operators from $T_\g$ to $T_\h$. Then $\alpha$ is a homomorphism of pre-Lie algebras from $(V_\g,\cdot_{T_\g})$ to $(V_\h,\cdot_{T_\h})$. The assignments \vspace{-.2cm} \begin{eqnarray*} T_\g &\mapsto& (V_\g,\cdot_{T_\g}),\\ (\phi,\alpha)&\mapsto& \alpha, \end{eqnarray*} define a functor $G$ from the category $\OP$ of $\calo$-operators to the category {\bf PL} of pre-Lie algebras. Furthermore, the functor $G$ is right adjoint to the functor $F$ in Proposition~\mref{pro:prelieo}. \mlabel{pro:otopl} \end{pro} \vspace{-.2cm} \begin{proof} For any $u,v\in V_\g$, we have $$\alpha(u\cdot_{T_\g} v)=\alpha(\rho_\g(T_\g(u))v)=\rho_\h(\phi(T_\g(u)))\alpha(v)=\rho_\h(T_\h\alpha(u))\alpha(v)=\alpha(u)\cdot_{T_\h} \alpha(v).$$ Hence $\alpha$ is a homomorphism of pre-Lie algebras from $(V_\g,\cdot_{T_\g})$ to $(V_\h,\cdot_{T_\h})$. The other axioms of functors are easily verified.
To prove the adjointness of the functors $F$ and $G$, we only need to show that, for any $(A,\cdot)\in \PL$ and $T:W\mto \h$ in $\OP$, there is a bijection $$ \Hom_\OP(F(A,\cdot),T)\cong \Hom_\PL((A,\cdot),GT)$$ that is natural in both arguments. The left hand side consists of pairs $(\phi,\alpha)$ with $\phi:\g(A)\mto \h$ and $\alpha:A\mto W$ such that $\phi \id_A=T\alpha$, that is, $\phi=T\alpha$, and the right hand side consists of pre-Lie algebra homomorphisms $\alpha:(A,\cdot)\mto (W,\cdot_T)$. Then we see that the natural bijection can be given by sending $(\phi,\alpha)$ to $\alpha$ whose inverse is sending $\alpha$ to $(T\alpha,\alpha)$. \end{proof} Utilizing Proposition~\mref{pro:prelieo}, we apply homomorphisms of pre-Lie algebras to obtain three explicitly constructed examples of \weak homomorphisms of classical $r$-matrices and hence of Lie bialgebras. \begin{pro} \mlabel{co:plbialg} Let $(A, \cdot)$ be a pre-Lie algebra. Then $r_\id$ is a skew-symmetric classical $r$-matrix in the Lie algebra $\g(A)\ltimes_{L^*}A^*$ and $(\g(A)\ltimes_{L^*}A^*,\delta_{r_\id})$ is a Lie bialgebra. Furthermore, let $\phi:A\rightarrow A$ be an endomorphism of pre-Lie algebras satisfying \begin{eqnarray}\mlabel{eq:5.8ab} (\phi^2-\id)(x)\cdot \phi(y)=\phi(x)\cdot(\phi^2-\id)(y)=0, \quad \forall x,y\in A. \end{eqnarray} Then $(\phi \pm \phi^*,\pm \phi+\phi^*)$ is a \weak endomorphism on both the triangular $r$-matrix $r_\id$ and the Lie bialgebra $(\g(A)\ltimes_{L^*}A^*,\delta_{r_\id})$. \end{pro} \begin{proof} Note that in this case, Eq.~(\mref{eq:5.8ab}) is equivalent to Eqs.~\meqref{eq:semi1}--\meqref{eq:semi4}. Therefore the conclusion follows from Example~\mref{co:con}. \end{proof} Now let $(A,\cdot_A), (B,\cdot_B)$ be pre-Lie algebras. Then $r_{\id_A}$ and $r_{\id_B}$ are skew-symmetric classical $r$-matrices in the Lie algebras $\g(A)\ltimes_{L^*_A}A^*$ and $\g(B)\ltimes_{L^*_B}B^*$ respectively. 
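As a consistency check on the adjunction between $F$ and $G$ just established (this verification is ours and is not part of the original argument), one can compute the unit of the adjunction explicitly:

```latex
% For a pre-Lie algebra (A,\cdot), the composite GF(A,\cdot) carries the
% product induced by the O-operator \id_A, namely
u \cdot_{\id_A} v \;=\; L\big(\id_A(u)\big)\,v \;=\; u\cdot v,
\qquad \forall u,v\in A,
% so that GF(A,\cdot)=(A,\cdot). Under the bijection
%   \Hom_{\OP}(F(A,\cdot),\id_A)\;\cong\;\Hom_{\PL}((A,\cdot),GF(A,\cdot)),
% the identity morphism (\id_{\g(A)},\id_A) on F(A,\cdot) corresponds to
\eta_{(A,\cdot)} \;=\; \id_A \colon (A,\cdot)\longrightarrow GF(A,\cdot).
% Since the unit is an isomorphism, the left adjoint F is fully faithful.
```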
Moreover, $(\g(A)\ltimes_{L^*_A}A^*,\delta_{r_{\id_A}})$ and $(\g(B)\ltimes_{L^*_B}B^*,\delta_{r_{\id_B}})$ are triangular Lie bialgebras. We can thus give the following two results. \begin{pro} Let $\phi:(A,\cdot_A) \rightarrow (B,\cdot_B)$ be a pre-Lie algebra homomorphism. Then $(\phi,\phi^*)$ is both a \weak homomorphism of classical $r$-matrices from $r_{\id_A}$ to $r_{\id_B}$ and a \weak homomorphism of Lie bialgebras from $(\g(A)\ltimes_{L^*_A}A^*,\delta_{r_{\id_A}})$ to $(\g(B)\ltimes_{L^*_B}B^*,\delta_{r_{\id_B}})$. \end{pro} \begin{proof} The conclusion follows from Corollaries~\ref{cor:zero} and ~\ref{co:opliebial}. \end{proof} \begin{pro} Let $\phi:(A,\cdot_A) \rightarrow (B,\cdot_B)$ and $\psi:(B,\cdot_B)\rightarrow (A,\cdot_A)$ be pre-Lie algebra homomorphisms such that $\psi\phi=\id_A$. Then for any $0\neq \theta\in K$, the pair $(\phi+\theta{\psi}^*,\theta \psi+\phi^*)$ is both a \weak homomorphism of skew-symmetric classical $r$-matrices from $r_{\id_A}$ to $r_{\id_B}$ and a \weak homomorphism of Lie bialgebras from $(\g(A)\ltimes_{L^*_A}A^*,\delta_{r_{\id_A}})$ to $(\g(B)\ltimes_{L^*_B}B^*,\delta_{r_{\id_B}})$. In particular, if $\phi$ is a linear bijection, then $(\phi+\theta{\phi^{-1}}^*,\theta \phi^{-1}+\phi^*)$ is both a \weak isomorphism of skew-symmetric classical $r$-matrices from $r_{\id_A}$ to $r_{\id_B}$ and a \weak isomorphism of Lie bialgebras from $(\g(A)\ltimes_{L^*_A}A^*,\delta_{r_{\id_A}})$ to $(\g(B)\ltimes_{L^*_B}B^*,\delta_{r_{\id_B}})$. \end{pro} \begin{proof} If $\psi$ is a homomorphism of pre-Lie algebras and $\psi\phi=\id_A$, then $\theta \psi$ satisfies $\id_A \theta \psi=\theta \psi \id_B$, Eqs.~\meqref{eq:reqqq3}, \meqref{eq:hh1} and \meqref{eq:hh2} where $\psi$ is replaced by $\theta \psi$ and $\beta$ by $\theta\psi$. Therefore the first conclusion follows from Theorem~\ref{cor:maincon}. The special case when $\phi$ is an isomorphism then follows directly or from Example~\ref{co:opliebial}. 
\end{proof} \vspace{-.2cm} \begin{rmk} Note that when $\phi$ is invertible and $\theta\ne 0$, the inverse of $\phi+\theta{\phi^{-1}}^*$ is $\phi^{-1}+\theta^{-1}\phi^*$. Thus by Proposition~\ref{pro:iso}, $\phi+\theta{\phi^{-1}}^*$ gives an isomorphism of Lie bialgebras from $(\g(A)\ltimes_{L^*_A}A^*,\delta_{r_{\id_A}})$ to $(\g(B)\ltimes_{L^*_B}B^*,\delta_{r_{\id_B}})$ if and only if $\theta=1$. Therefore, when $\theta\ne 1$, the above isomorphism of pre-Lie algebras provides non-trivial examples of \weak isomorphisms of Lie bialgebras which are not the usual isomorphisms of Lie bialgebras. \end{rmk} \noindent {\bf Acknowledgments.} This work is supported by the National Natural Science Foundation of China (Grant Nos. 11771190, 11931009 and 11922110). C. Bai is also supported by the Fundamental Research Funds for the Central Universities and the Nankai ZhiDe Foundation. \vspace{-.2cm}
\section{Introduction} While much is known about the invariants of conformal manifolds, the same cannot be said for the invariants of submanifolds in conformal geometries. Codimension-1 embedded submanifolds (or {\em hypersurfaces}) are important for applications in geometric analysis and physics. An extremely interesting example is the Willmore equation \begin{equation}\label{Wore} \bar{\Delta} H +2 H(H^2-K)=0, \end{equation} for an embedded surface $\Sigma$ in Euclidean 3-space $\mathbb{E}^3$. Here $H$ and $K$ are, respectively, the mean and Gau\ss\ curvatures, while $\bar\Delta$ is the Laplacian induced on $\Sigma$. We shall term the left hand side of this equation the {\it Willmore invariant}; as given, this quantity is invariant under M\"obius transformations of the ambient $\mathbb{E}^3$. A key feature is the linearity of its highest order term, $\bar{\Delta} H$. This linearity is important for PDE problems, but also means that the Willmore invariant should be viewed as a fundamental curvature quantity. In the 1992 article~\cite{ACF}, Andersson, Chrusciel and Friedrich (ACF) (building on the works~\cite{AMO,AMO1,AMO2}) identified a conformal surface invariant that obstructs smooth boundary asymptotics for a Yamabe solution on a conformally compact 3-manifold (and gave some information on the obstructions in dimension $n+1=d>3$). It is straightforward to show that this invariant is the same as that arising from the variation of the Willmore energy; in particular its specialisation to surfaces in $\mathbb{E}^3$ agrees with (\ref{Wore}). We show how tools from conformal geometry can be used to describe and compute the asymptotics of the Yamabe problem on a conformally compact manifold. This reveals higher order hypersurface conformal invariants that generalise the curvature obstruction found by~ACF. 
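As an elementary illustration of~(\ref{Wore}) (this worked example is ours, added for orientation), round spheres and planes in $\mathbb{E}^3$ are solutions:

```latex
% For the round sphere of radius r in \mathbb{E}^3, both principal curvatures
% equal 1/r, so the mean and Gauss curvatures satisfy
H=\frac{1}{r},\qquad K=\frac{1}{r^{2}},\qquad H^{2}-K=0 .
% Since H is constant on \Sigma, \bar{\Delta}H=0, and hence
\bar{\Delta} H + 2H\,(H^{2}-K)=0 ,
% so every round sphere (and, trivially, every plane, where H=K=0)
% satisfies the Willmore equation.
```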
In particular, for hypersurfaces of arbitrary even dimension this yields higher order conformally invariant analogues of the usual Willmore equation on surfaces in 3-space. The construction also leads to a general theory for constructing and treating conformal hypersurface invariants along the lines of holography and the Fefferman-Graham programme for constructing invariants of a conformal structure via their Poincar\'e-Einstein and ``ambient'' metrics~\cite{FGrnew}. In this announcement, we focus only on the main results; a detailed account of this general theory will be presented elsewhere~\cite{GWnew}. \section{The problem} \label{prob} Given a Riemannian $d$-manifold $(M,g)$ with boundary $\Sigma:=\partial M$, one may ask whether there is a smooth real-valued function $u$ on $M$ satisfying the following two conditions: \begin{enumerate} \item $u$ is a defining function for $\Sigma$ ({\it i.e.}, $\Sigma$ is the zero set of $u$, and $\boldsymbol{d} u_x\neq 0$ $\forall x\in \Sigma$); \item $\bar{g}:=u^{-2}g$ has scalar curvature ${\rm Sc}^{\bar{g}}=-d(d-1)$. \end{enumerate} Here $\boldsymbol{d}$ is the exterior derivative. We assume $d\geq 3$ and all structures are $C^\infty$. Assuming $u>0$ and setting $u=\rho^{-2/(d-2)}$, part (2) of this problem gives the Yamabe equation. The problem fits nicely into the framework of conformal geometry: Recall that a conformal structure $\boldsymbol{c}$ on a manifold is an equivalence class of metrics where the equivalence relation $\widehat{g}\sim g$ means that $\widehat{g}= \Omega^2 g$ for some positive function $\Omega$. The line bundle $(\Lambda^d TM)^2$ is oriented and for $w\in \mathbb{R}$ the bundle of {\it conformal densities} of weight~$w$, denoted $\mathcal{E}[w]$, is defined to be the oriented $\frac{w}{2d}$-root of this (we use the same notation for bundles as for their smooth section spaces). Locally each $g\in \boldsymbol{c}$ determines a volume form and, squaring this, globally a section of $(\Lambda^d T^*M)^2$.
So, on a conformal manifold $(M,\boldsymbol{c})$ there is a canonical section $\mbox{\boldmath{$ g$}}$ of $S^2T^* M\otimes \mathcal{E}[2]$ called the conformal metric. Thus each metric $g\in \boldsymbol{c}$ is naturally in $1:1$ correspondence with a (strictly) positive section $\tau$ of $\mathcal{E}[1]$ via $g=\tau^{-2} \mbox{\boldmath{$ g$}}$. Also, the Levi-Civita connection $\nabla$ of $g$ preserves $\tau$, and hence~$\mbox{\boldmath{$ g$}}$. Thus we are led to the conformally invariant equation on a weight 1 density $\sigma\in \mathcal{E}[1]$ \begin{equation}\label{Ytwo} S(\sigma):= \big(\nabla \sigma \big)^2 - \frac{2}{d} \sigma\, \Big(\Delta +\frac{\rm Sc}{2(d-1)}\Big) \sigma = 1 , \end{equation} where $\mbox{\boldmath{$ g$}}$ and its inverse are used to raise and lower indices, $\Delta = \mbox{\boldmath{$ g$}}^{ab}\nabla_a\nabla_b$ and ${\rm Sc}$ means $\mbox{\boldmath{$ g$}}^{bd}R_{ab}{}^a{}_d$, with $R$ the Riemann tensor. Choosing $\boldsymbol{c}\ni g=\tau^{-2} \mbox{\boldmath{$ g$}}$, equation~(\ref{Ytwo}) becomes exactly the PDE obeyed by the smooth function $u=\sigma/\tau$ solving part (2) of the problem above. Since $u$ is a defining function, $\sigma$ is a {\em defining density} for $\Sigma$: it is a section of $\mathcal{E}[1]$, its zero locus ${\mathcal Z}(\sigma)=\Sigma$, and $\nabla \sigma_x\neq 0$ $\forall x\in \Sigma$. For our purpose we only need to treat the problem formally (so it applies to any hypersurface): \begin{problem}\label{Riemannsfirststep} Let $\Sigma$ be an embedded hypersurface in a conformal manifold $(M,\boldsymbol{c})$ with $d\geq 3$. Given a defining density $\sigma$ for $\Sigma$, find a new, smooth, defining density $\bar \sigma$ such that \begin{equation}\label{ind} S(\bar{\sigma})=1 + \bar{\sigma}^{\ell} A_\ell\, , \end{equation} for some $A_\ell\in \mathcal{E}[-\ell]$, where $\ell \in\mathbb{N}\cup\{\infty\}$ is as high as possible.
\end{problem} \section{The main results} \label{results} Here we use the notation $\O(\sigma^{\ell})$ to denote an additive term $\sigma^{\ell} A$ for some smooth $A\in \mathcal{E}[-\ell]$. \begin{theorem} \label{obstr} Let $\Sigma$ be an oriented embedded hypersurface in $(M,\boldsymbol{c})$, where $d\geq 3$. Then: \\ $\bullet$ There is a distinguished defining density $\bar\sigma\in \mathcal{E}[1]$ for $\Sigma$, unique to $\O(\bar\sigma^{d+1})$, such that \begin{equation}\label{ddens} S({\bar \sigma})=1+\bar\sigma^d B_{\bar \sigma}\, , \end{equation} where $B_{\bar \sigma} \in \mathcal{E}[-d]$ is smooth on $M$. Given any defining density $\sigma$, $\bar \sigma$ depends smoothly on $(M,\boldsymbol{c},\sigma)$ via a canonical formula~$\bar\sigma(\sigma)$. \\ $\bullet$ $\mathcal B:=B_{\bar\sigma(\sigma)}\big|_\Sigma$ is independent of $\sigma$ and is a natural invariant determined by $(M,\boldsymbol{c},\Sigma)$. \end{theorem} For any {\it unit conformal defining density} $\bar\sigma$ satisfying Eq.~(\ref{ddens}) of the Theorem, it is straightforward, although tedious, to calculate ${\mathcal B}$. For $d=3$ we obtain \begin{equation}\label{ourWilly} \mathcal B= 2 \big(\bar \nabla_{(i} \bar \nabla_{j)\circ} + H\, \mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij} +R^\top_{(ij)\circ}\big) \mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij}\, , \end{equation} where $\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}$ is the trace-free part of the second fundamental form ${\bf\rm I\hspace{-.2mm}I}_{ij}$, $R^\top_{(ij)\circ}$ is the trace-free part of the projection of the ambient Ricci tensor along~$\Sigma$, and $\bar\nabla$ is the Levi-Civita connection of the metric on $\Sigma$ induced by $g$. Equation~(\ref{ourWilly}) agrees with~\cite[Theorem 1.3]{ACF} and \cite{G+Yuri} and, by using the Gau\ss--Codazzi equations, agrees with~(\ref{Wore}) for $\Sigma$ in $\mathbb{E}^3$. (We note that Eq.~(\ref{ddens}) is consistent with~\cite[Lemma 2.1]{ACF}.)
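A simple sanity check of~(\ref{ourWilly}), which we note in passing (it is not stated in the sources cited above): every term on the right-hand side carries a factor of the trace-free second fundamental form, so the obstruction vanishes on totally umbilic hypersurfaces,

```latex
\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}=0
\quad\Longrightarrow\quad
\mathcal B = 0 ,
% consistent, for planes and round spheres in \mathbb{E}^3, with the fact
% that these are solutions of the Willmore equation (\ref{Wore}).
```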
For~$d=4$ and (specializing to) conformally flat structures~$\boldsymbol{c}$, evaluated on~$g \in \boldsymbol{c}$ with~$g$ flat, our result for the {\em obstruction density} ${\mathcal B}$ of Theorem \ref{obstr} is \begin{equation}\label{4willy} {\mathcal B}=\frac1{6}\Big((\bar\nabla_k\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij})\, (\bar\nabla^k\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij})+ 2\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij}\bar\Delta\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}+\frac32\, (\bar\nabla^k\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ik})\, (\bar\nabla_l\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{il}) -2\bar{\mbox{\sf J}}\, \mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij} +(\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij})^2 \Big)\, . \end{equation} For $d\geq 5$ odd, we prove that the obstruction density~${\mathcal B}$ has a linear highest order term, namely $\bar\Delta^{(d-1)/2} H$ (up to multiplication by a non-zero constant). So: ${\mathcal B}$ is an analogue of the Willmore invariant; it can be viewed as a fundamental conformal curvature invariant for hypersurfaces; as an obstruction it is an analogue of the Fefferman--Graham obstruction tensor~\cite{FGrnew}. We see this as follows. From the algorithm for calculating ${\mathcal B}$ it is easily concluded that it is a natural invariant (in terms of a background metric); indeed, it is given by a formula polynomially involving the second fundamental form and its tangential (to $\Sigma$) covariant derivatives, as well as the curvature of the ambient manifold $M$ and its covariant derivatives. To calculate the leading term we linearise this formula by computing the infinitesimal variation of~${\mathcal B}$.
It suffices to consider an $\mathbb{R}$-parametrised family of embeddings of $\mathbb{R}^{d-1}$ in $\mathbb{E}^{d}$, with corresponding defining densities $\sigma_t$ and such that the zero locus ${\mathcal Z}(\sigma_0)$ is the $x^{d}=0$ hyperplane (where $x^i$ are the standard coordinates on $\mathbb{E}^{d}=\mathbb{R}^{d}$) so that ${\mathcal B}|_{t=0}=0$. Then, applying $\frac{\partial}{\partial t}\mid _{t=0}$ (denoted by a dot), we obtain the following: \begin{proposition}\label{Bnature} The variation of the obstruction density is given by $$ \dot {\mathcal B} = \left\{ \begin{array}{ll} a\cdot \bar{\Delta}^{(d+1)/2} \dot{\sigma} + \mbox{\em lower order terms}\, ,&d-1 \mbox{ even, with $a\neq 0$ a constant, }\\[2mm] \mbox{\em non-linear terms}\, ,& d-1 \mbox{ odd.} \end{array}\right. $$ \end{proposition} This establishes the result, as in this setting the highest order term in the variation of mean curvature is $\frac{1}{d-1}\bar{\Delta} \dot \sigma$. It also shows that when $n$ is odd the general formula for ${\mathcal B}$ may be expressed so that it has no linear term. Employing methods from tractor calculus~\cite{BEG} and the notion of a holographic formula as introduced in~\cite{GW}, a simple closed formula for the obstruction density in any dimension can be obtained. Key ingredients of this result are the tractor bundle associated to a conformal structure and the Thomas $D$-operator $D^A$. Also needed is the projector $\Sigma^A_B:=\delta^A_B -N^A N_B$ from the tractor bundle along $\Sigma$ to the orthogonal complement of the normal tractor $N^A$. The latter is isomorphic to the tractor bundle of the hypersurface~$\Sigma$~\cite{Goal} and $\bar D_A$ is its intrinsic Thomas-$D$ operator. For an explanation of these details see Section~\ref{proofs} below as well as~\cite{BEG,GWnew}. In these terms our result is as follows: \begin{theorem} Let $\bar \sigma$ be a unit conformal defining density.
Then, the ASC obstruction density $\mathcal B$ is given by the holographic formula $$ (-1)^{n+1}\, \frac{n!(n+2)!}{4} \, \mathcal B= \bar D_A \Big[\Sigma^A_B\Big((\bar I.D)^n \bar I^B - (\bar I.D)^{n-1}[X^B K]\Big)\Big]\Big|_\Sigma\, , $$ where $K:=P_{AB}P^{AB}$, $P^{AB}:=\widehat{D}^A\bar I^B$ and $\bar I^A=\widehat{D}^A \bar \sigma$. \end{theorem} \begin{remark} In the above Theorem, the operator $(\bar I.D)^n$ is an example of a sequence of holographic formul\ae\ for tractor twistings of the conformally invariant GJMS operators of~\cite{GJMS}. These are a sequence of conformally invariant operators built from powers of the Laplacian with subleading curvature corrections; the simplest examples of these are the Yamabe operator (or conformally invariant wave operator) and the Paneitz operator. \end{remark} The $d=3$ invariant~(\ref{ourWilly}) is the variation of the Willmore energy $E=\int_\Sigma \mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij}$, while in $d=4$, for $\boldsymbol{c}$ conformally flat, it can be shown that the invariant~(\ref{4willy}) is the variation of $E=\int_\Sigma \mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{jk}\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{k}{}^i$. The na\"ive conjecture that powers of traces of the trace-free second fundamental form yield integrands for $d>4$ energy functionals is bound to fail for $d$ odd because of the leading behavior of the obstruction density given in Proposition~\ref{Bnature} (see~\cite{Guven,G+Yuri} for a study of conformally invariant $d=4$ bending energies). It seems likely that the higher dimensional obstruction densities are variational; it is therefore interesting to speculate whether they are the same as or closely linked to the variations of the submanifold conformal anomalies studied in~\cite{GrW}. A related question is their relevance for entanglement entropy.
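Returning to the $d=3$ energy above, we record a standard computation (added here for convenience) showing that $E$ differs from the classical Willmore energy $\int_\Sigma H^2$ only by a topological term, so the two functionals have the same first variation:

```latex
% With principal curvatures \kappa_1,\kappa_2, so that H=(\kappa_1+\kappa_2)/2
% and K=\kappa_1\kappa_2, one finds
\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}\,
\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij}
=\tfrac{1}{2}\,(\kappa_1-\kappa_2)^{2}
=2\,(H^{2}-K) ,
% and hence, by the Gauss--Bonnet theorem, for a closed surface
% \Sigma in \mathbb{E}^3,
\int_\Sigma \mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}_{ij}\,
\mathring{{\bf\rm I\hspace{-.2mm} I}}{\hspace{.2mm}}^{ij}
= 2\int_\Sigma H^{2} \;-\; 4\pi\,\chi(\Sigma) .
```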
Recently in~\cite{Ast} variations of the Willmore energy were employed in a study of surfaces maximizing entanglement entropies. \subsection{A holographic approach to submanifold invariants}\label{holo} Given a conformal manifold $(M,c)$ and a section $\sigma$ of $\mathcal{E}[1]$ one may construct density-valued conformal invariants that couple the data of the jets of the conformal structure with the jets of the section $\sigma$. In the setting of Theorem \ref{obstr}, consider such an invariant $U$ (say), which uses the section $\bar\sigma$ of the Theorem. Suppose that, at every point, $U$ involves~$\bar\sigma$ non-trivially but uses no more than the $d$-jet of $\bar\sigma$. Then it follows from the first part of Theorem \ref{obstr} that $U|_\Sigma$ is determined by $(M,c,\Sigma)$ and so is a conformal invariant of $\Sigma$. On the interior, the formula for $U$ as calculated in the scale $\bar\sigma$ (so using $\bar\sigma$ to trivialize the density bundles) is then a regular Riemannian invariant of $(M,\bar{g})$ (where $\bar{g}={\bar\sigma}^{-2}\mbox{\boldmath{$ g$}}$) which corresponds holographically to the submanifold invariant $U|_\Sigma$. \section{The ideas behind the main proofs}\label{proofs} On a conformal manifold $(M,c)$, although there is no canonical connection on $TM$, there is a canonical linear connection $\nabla^{\mathcal{T}}$ on a rank $d+2$ vector bundle known as the tractor bundle and denoted $\mathcal{E}^A$ in an abstract index notation. A choice of metric $g\in c$ determines an isomorphism $ \mathcal{E}^A \stackrel{g}{\cong} \mathcal{E}[1]\oplus T^*\!M[1]\oplus \mathcal{E}[-1] $. This connection preserves a metric $h_{AB}$ on $\mathcal{E}^A$ that we may therefore use to raise and lower tractor indices. For $V^A=(\sigma, \mu^a, \rho)$ and $W^A=(\tau, \nu^a,\kappa)$ this is given by $h(V,W)=h_{AB}V^A W^B=\sigma \kappa +\mbox{\boldmath{$ g$}}_{ab}\mu^a\nu^b+\rho\tau=:V.W$.
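In a fixed scale $g\in c$ the tractor metric is thus a simple quadratic form whose signature can be read off directly (a standard observation, recorded here for orientation):

```latex
% For V^A \stackrel{g}{=} (\sigma,\mu^a,\rho),
h(V,V) \;=\; 2\,\sigma\rho+\mbox{\boldmath{$ g$}}_{ab}\,\mu^{a}\mu^{b}
\;=\; \tfrac{1}{2}(\sigma+\rho)^{2}-\tfrac{1}{2}(\sigma-\rho)^{2}
      +\mbox{\boldmath{$ g$}}_{ab}\,\mu^{a}\mu^{b} ,
% so h_{AB} has Lorentzian signature (d+1,1) on the rank d+2 tractor bundle.
```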
Closely linked to $\nabla^\mathcal{T}$ is an important, second order conformally invariant operator $D^A:\mathcal{E} [w]\to \mathcal{E}^A[w-1]$; when $w\neq 1-\frac d2$, we denote $\frac{1}{d+2w-2}$ times this by $\widehat{D}$, where $ \widehat{D}^A\sigma\stackrel{g}{=} (\sigma,~ \nabla_a \sigma,~-\frac{1}{d}(\Delta +{\mbox{\sf J}})\sigma )$, for the case $\sigma\in \mathcal{E}[1]$, and $2{\mbox{\sf J}}={\rm Sc}^g/(d-1)$. For $\sigma$ a scale, or even a defining density, we shall write $I^A_\sigma:=\widehat{D}^A \sigma$, which we call the {\em scale tractor}. Now $S(\sigma)$ from above is just $S(\sigma)=I^2_\sigma:= h_{AB}I^A_\sigma I^B_\sigma$, so the equation (\ref{Ytwo}) has the nice geometric interpretation $I_\sigma^2=1$~\cite{Goal}, and this is critical for our treatment. Note that it is essentially trivial to solve (\ref{ind}) for the case $\ell=1$. Theorem \ref{obstr} is then proved inductively via the following Lemma. The Lemma also yields an algorithm for explicit formulae for the expansion, which we cannot explain fully here; through this and related results the naturality of ${\mathcal B}$ can be seen. \begin{lemma}\label{Isquare} Suppose $I^2_\sigma=S(\sigma)$ satisfies (\ref{ind}) for $\ell=k \geq 1$. Then, if $k\neq d$, there exists $f_k\in \mathcal{E}[-k]$ such that the scale tractor $I_{\sigma'}$ of the new defining density $\sigma':=\sigma+\sigma^{k+1} f_k$ satisfies (\ref{ind}) for $\ell=k+1$. When $k=d$ and $\sigma':=\sigma+\sigma^{d+1} f$, then for any $f\in\mathcal{E}[-d]$, $$ I_{\sigma'}^{\, 2}=I_\sigma^{\, 2}+\O(\sigma^{d+1})\, . $$ \end{lemma} \begin{proof}[Proof - Idea] First, by the scale tractor definition, we have $$ \big(\widehat{D} \sigma'\big)^2=I_\sigma^2 +\frac2d \, I_\sigma.D \big(\sigma^{k+1} f_k\big) + \Big[\widehat{D} \big(\sigma^{k+1} f_k\big)\Big]^2\, . $$ Tractor calculus identities show that the last term is $\O(\sigma^{k+1})$, while $I^2_\sigma=1+\sigma^k A_k$.
Crucially, the operators $\sigma$ (acting by multiplication) and $\frac{1}{I_\sigma^2}I_\sigma. D$ generate an $\mathfrak{sl}(2)$ \cite{GW}. Using standard ${\mathcal U}\big(\mathfrak{sl}(2)\big)$ identities, we compute that $f_k:= -d\, A_k/(2(d-k)(k+1))$ which deals with the $k\neq d$ cases; the same computation gives the $k=d$ conclusion. \end{proof} \begin{proof}[Proof of Proposition \ref{Bnature} - sketch] The key idea is that for each $t$ we can replace $\sigma_t$ with the corresponding normalised defining density $\bar{\sigma}_t$ which solves $ I^2_{\bar{\sigma}_t}=1+{{\bar{\sigma}}_t}^{d}\mathcal{B}_{\bar{\sigma}_t} $, via Theorem \ref{obstr}, while maintaining smooth dependence on $t$. Then it is easy to prove that ${\mathcal B}_{\bar{\sigma}_0}|_{{\mathcal Z}(\bar\sigma_{t=0})}=0$, while $\partial(I^2_{\bar{\sigma}_t})/\partial t|_{t=0}$ is proportional to $I. D \dot{\bar\sigma}$. So applying $\frac{\partial}{\partial t}\mid _{t=0}$ implies that $\dot{\bar\sigma}$ solves a linear $I. D$ boundary problem up to $\O (\bar{\sigma}^{d})$ with obstruction $\dot{{\mathcal B}}_{\bar{\sigma}}$. Using~\cite[Theorem 4.5]{GW} we can easily deduce the conclusion. \end{proof} \subsection{Acknowledgements} Prior to this work, ARG had discussions of this problem with Fernando Marques and then Pierre Albin and Rafe Mazzeo. We are indebted for the insights so gained.
\section{Introduction} Tidal stream energy is being developed rapidly as a promising complement to intermittent wind and solar energy, owing to the high predictability of tides \citep{Adcock2021}. To ensure tidal energy is successfully embedded into the future net-zero carbon energy mix, we need to understand the flow physics driving the efficiency of tidal turbine arrays when deployed at a large scale. Due to the confined flow environment in which these turbines operate, we cannot simply infer tidal array hydrodynamics from existing knowledge of wind farm aerodynamics \citep{Porte-Agel2020,Nishino2020}. For instance, relatively shallow waters may restrict the ability of turbine wakes to expand vertically, potentially affecting the recovery of wakes in a large turbine array. Future tidal arrays will comprise multiple rows of turbines, whose design requires consideration of an appropriate spacing between turbines as well as of their impact on the tidal channel flow dynamics. These micro- (turbine to turbine) and macro-scale (array to tidal channel) interactions need to be considered when designing tidal arrays in order to find the optimal trade-off between energy extraction and minimal changes to the tidal flow \citep{DeDominicis2018}. Large tidal arrays deployed in a tidal channel add resistance to its flow dynamics, which, if too large, can considerably obstruct and modify the overall flow. Therefore, to optimise the design of large tidal arrays, the operating conditions of individual turbines must be tuned for a given tidal channel (e.g. straight or variable-section), tidal forcing and turbine density \citep{Vennell2010,Vennell2011,Vennell2015}. At the micro-scale, the power production capability of tidal turbines is driven by turbulent wake mixing and acceleration of bypass flow in-between turbines. In relatively closely packed arrays, limiting the negative wake-turbine interactions is often key to minimising power losses \citep{Stallard2013}.
However, local blockage due to a small lateral spacing between devices (as well as a shallow water depth confining the flow) may lead to local flow acceleration that enhances individual turbine power, as observed in experimental tests, e.g. \citet{Stallard2013,Noble2020}. The effect of local blockage has also been investigated using the Linear Momentum Actuator Disc Theory (LMADT) for a single lateral row of turbines \citep{Garrett2007,Nishino2012,Nishino2013,Vogel2016,Creed2017} and two rows of turbines \citep{Draper2014}. These studies suggest that two staggered rows of turbines tend to be more efficient than two perfectly aligned (or centred) rows of turbines, but less efficient than a single row of the same total number of turbines with the same array width. Whilst the above findings are important for the performance of arrays with a small number of rows, future tidal arrays will require turbines to be deployed in several rows to generate a sufficiently large amount of energy, which may cause macro-scale flow interactions between the array and the tidal channel. \citet{Vennell2010,Vennell2011} combined the LMADT with a simple theoretical tidal channel flow model to analyse how the resistance and lateral spacing of turbines within each row should be tuned for a given number of rows deployed across a given tidal channel, to maximise the total power generation. However, existing theoretical tidal array models based on the LMADT, including the two-row model of \citet{Draper2014}, do not fully account for the complex effect of turbulent wake mixing. The Vennell-type array models assume that the streamwise spacing between rows is large enough for individual turbine wakes to be fully mixed after each row, whereas the two-row model of \citet{Draper2014} assumes that the two rows are close enough to each other for the effect of wake mixing to be negligible. 
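To fix ideas on the magnitude of the blockage benefit, recall the single-row LMADT result of \citet{Garrett2007} for an ideal turbine occupying a fraction $B$ of the channel cross-sectional area (the worked numbers below are ours):

```latex
C_{P,\max} \;=\; \frac{16}{27}\,\left(1-B\right)^{-2} ,
% which recovers the unblocked Betz limit 16/27 \approx 0.593 as B \to 0,
% while, e.g., a blockage ratio B = 0.2 raises the bound by a factor
% (1-0.2)^{-2} \approx 1.56, to C_{P,\max} \approx 0.93.
```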
In narrow tidal channels or straits, turbines in a multi-row array may operate in a fully-waked scenario (similar to perfectly aligned layout) during half of the tidal cycle (e.g. ebb tide), but they may operate in partly-waked conditions (akin to staggered layouts) during the other half of the cycle (e.g. flood tide) \citep{GarciaNovo2018}. This asymmetry between ebb and flood tides further complicates the design of optimal array configurations, requiring an understanding of the performance and hydrodynamics of a given array design for various incident flow characteristics. Many existing studies looking into tidal array optimisation have adopted low-fidelity flow models, such as two-dimensional shallow water models \citep{Culley2016}, analytical wake models, e.g. Gaussian models \citep{Stansby2015}, and aforementioned theoretical models based on the LMADT \citep{Nishino2013,Draper2014}, while high-fidelity simulations have been restricted to relatively small arrays with a limited number of configurations, due to their large computational expense \citep{Afgan2013,Chawdhary2017,Ouro2019JFS}. However, as the performance of tidal devices in arrays is driven by wake-turbine interactions as well as bathymetry-induced turbulence \citep{Stallard2013,Ouro2018FTC}, turbulence-resolving approaches such as Large-Eddy Simulation (LES) are valuable for yielding reliable hydrodynamic results as well as for building more accurate low-order models that can improve array optimisation tools. In this paper, we investigate the flow characteristics and efficiency of infinitely large tidal arrays with perfectly aligned and staggered configurations, combining predictions from two contrasting approaches: actuator disc theory and LES. Infinitely large arrays represent an asymptotic case in which the flow passing the turbine rows is fully developed (i.e. flow statistics become identical for all rows).
This is similar to large wind farms, in which such flow conditions may be attained approximately after 10 to 15 rows, depending on atmospheric stability conditions \citep{Porte-Agel2020,Sharma2018}. We consider a wide range of streamwise and lateral turbine spacings to understand how the array efficiency can be maximised (from the micro-scale perspective) by balancing the negative impact of turbine wakes impinging on downstream turbines against the positive local blockage effects. The paper is structured as follows: in \S\ref{sec:theory} we introduce an extended theoretical model developed for periodic turbine arrays with perfectly aligned and staggered configurations. Details of the 28 LES runs comprising aligned and staggered layouts are presented in \S\ref{sec:testcases}, with results on flow characteristics and hydrodynamic coefficients in \S\ref{sec:results}. In \S\ref{sec:discussion} we provide further discussion on the comparison between the predictions obtained from the theoretical analysis and LES, followed by the main conclusions in \S\ref{sec:conclusion}.
\section{Theoretical analysis}\label{sec:theory}
We start with a simple theoretical analysis of the efficiency of an infinitely large array of ideal turbines in a steady, uniform and vertically confined flow. The analysis is based on the work of \citet{Draper2014}, who extended the LMADT for laterally confined flows \citep{Garrett2007,Houlsby2008} to predict an upper limit to the efficiency of two aligned or staggered rows of turbines. This two-row analysis is further extended in this study to investigate an infinite number of aligned or staggered rows of tidal turbines, following the idea of the hybrid inviscid-viscous approach recently proposed by \citet{Nishino2019}. Figure \ref{fig:theory1} illustrates an example of a periodic staggered array of tidal turbines.
The key assumption employed in the hybrid inviscid-viscous approach is that the streamwise extent of the region in which the expansion of the flow through each turbine takes place is much shorter than the distance between each row of turbines. With this assumption, we hypothetically divide the flow field into two types of zones; namely the inviscid flow zones, which are analysed using the LMADT neglecting the effect of viscous (or turbulent) mixing, and the viscous flow zones, which are modelled separately to account for the effect of mixing. This approach is also in line with the results of a recent LES study of flow past a periodic array of actuator discs \citep{West2020} showing that the effects of inviscid and viscous (turbulent) flow processes are dominant, respectively, in the vicinity of the turbines and in the rest of the flow field. \begin{figure} \vspace{0.2cm} \centerline{\includegraphics[width=.9\linewidth]{Fig_Theory_1.png}} \caption{Schematic of the flow past a periodic staggered array of tidal turbines, divided into the hypothetical inviscid (inv.) and viscous (visc.) flow zones. The rectangular region enclosed by the magenta dashed line corresponds to that depicted in figure \ref{fig:theory2}.} \label{fig:theory1} \end{figure} \begin{figure} \vspace{0.2cm} \centerline{\includegraphics[width=.9\linewidth]{Fig_Theory_2.png}} \caption{Schematic of the quasi-1D theoretical model for a periodic staggered array of tidal turbines, with examples of how cross-sectional flow patterns may appear in a corresponding 3D flow problem. The three vertical thin black lines indicate the locations of stations 1, 4 and 5 as well as the (non-dimensional) cross-sectional average velocity, $\psi = u_\mathrm{av}/u_\mathrm{ref}$, for the superposed plot of $u/u_\mathrm{ref}$ at each station, whereas the thick black lines show the profiles of $u/u_\mathrm{ref}$ at the three stations (calculated for the case with $B=0.2$, $K=3$ and $m=0.7$). 
Reproduced from \citet{Nishino2019} with minor modifications.} \label{fig:theory2} \end{figure} \subsection{Staggered rows of actuator discs}\label{sec:theory_s} A schematic of the theoretical model for the staggered case is shown in figure \ref{fig:theory2}. Here we consider a straight local flow passage containing only one-half of an actuator disc due to the periodic and symmetric nature of the array as shown in figure \ref{fig:theory1}. The cross-sectional area of the flow passage is $A/B$, where $A$ is the half-disc area and $B$ is the area blockage ratio. Although the figure is depicted in a two-dimensional manner, this is still a quasi-one-dimensional flow model as we do not consider any variation of flow quantities (velocity and pressure) over the cross-section of each streamtube in the inviscid zone. Assuming that the flow is incompressible, the average velocity over the cross-section of the entire flow passage, $u_\mathrm{av}$, does not change in the streamwise direction. Following the common notations used in the LMADT \citep{Houlsby2008} we define four stations within the inviscid zone: station 1 is at the inlet of the inviscid zone where the core-flow streamtube (that encompasses the actuator disc) starts to expand, stations 2 and 3 are immediately upstream and downstream of the disc, respectively, and station 4 is at the outlet of the inviscid zone where the pressure is equalised between the core- and bypass-flow streamtubes. In addition, we describe the outlet of the viscous zone as station 5 (which is station 1 for the next row of discs). The pressure at stations 1 to 5 is denoted by $p_1$ to $p_5$, respectively, whereas the velocity is described using the velocity coefficients $\alpha$ and $\beta$ with subscripts as in figure \ref{fig:theory2}. Each velocity coefficient represents the ratio of the velocity there to a reference velocity, $u_\mathrm{ref}$. In the following we take the core-flow velocity at station 1 as $u_\mathrm{ref}$ (i.e. 
$\alpha_1 = 1$) for convenience. We now consider the conservation of mass, momentum and energy in the inviscid flow zone in a similar manner to the work of \citet{Draper2014}. First, since the Bernoulli equation must be satisfied to conserve energy in each of the three bypass-flow streamtubes between stations 1 and 4, we obtain \begin{eqnarray} && p_1-p_4 = \frac{1}{2}\rho u_\mathrm{ref}^2 \left( \beta_{4a}^2-1 \right), \label{eq:te1}\\ && p_1-p_4 = \frac{1}{2}\rho u_\mathrm{ref}^2 \left( \beta_{4b}^2-\beta_5^2 \right), \label{eq:te2}\\ && p_1-p_4 = \frac{1}{2}\rho u_\mathrm{ref}^2 \left( \alpha_8^2-\alpha_5^2 \right), \label{eq:te3} \end{eqnarray} where $\rho$ is the fluid density. Substituting (\ref{eq:te2}) and (\ref{eq:te3}), respectively, into (\ref{eq:te1}) gives \begin{eqnarray} && \beta_{4b} = \left( \beta_{4a}^2 + \beta_{5}^2 - 1 \right)^{1/2}, \label{eq:tb4b}\\ && \alpha_{8} = \left( \beta_{4a}^2 + \alpha_{5}^2 - 1 \right)^{1/2}. \label{eq:ta8} \end{eqnarray} As the Bernoulli equation must also be satisfied in the upstream part (between stations 1 and 2) and downstream part (between stations 3 and 4) of the core-flow streamtube, we also obtain \begin{eqnarray} p_2-p_3 = p_1-p_4 + \frac{1}{2}\rho u_\mathrm{ref}^2 \left( 1-\alpha_4^2 \right), \label{eq:te4} \end{eqnarray} which, together with (\ref{eq:te1}), leads to an expression for the half-disc thrust, $T$, as \begin{eqnarray} T = (p_2-p_3)A = \frac{1}{2}\rho u_\mathrm{ref}^2 A \left( \beta_{4a}^2 - \alpha_4^2 \right), \label{eq:tthrust} \end{eqnarray} and thus an expression for the half-disc power, $P$, as \begin{eqnarray} P = T\alpha_2 u_\mathrm{ref} = \frac{1}{2}\rho u_\mathrm{ref}^3 A \alpha_2\left( \beta_{4a}^2 - \alpha_4^2 \right). \label{eq:tpower} \end{eqnarray} Hence, to obtain $P$ for a given $\alpha_2$, for example, we need to know how $\alpha_4$ and $\beta_{4a}$ depend on $\alpha_2$. 
By considering the conservation of mass in the `main' bypass-flow streamtube between station 1 (where the velocity is $\beta_5 u_\mathrm{ref}$) and station 4 (where the velocity is $\beta_{4b} u_\mathrm{ref}$) we obtain \begin{eqnarray} \beta_5 u_\mathrm{ref} A \left(\frac{1}{B} - \frac{\alpha_2}{\alpha_4} - \gamma \right) = \beta_{4b} u_\mathrm{ref} A \left(\frac{1}{B} - \frac{\alpha_2}{\alpha_4} - \frac{\gamma - \alpha_2}{\beta_{4a}} - \gamma \right), \label{eq:tmass} \end{eqnarray} which leads to \begin{eqnarray} \alpha_4 = \frac{\alpha_2}{\dfrac{1}{B} - \dfrac{\beta_{4b}}{\beta_{4a}} \left( \dfrac{\gamma - \alpha_2}{\beta_{4b} - \beta_5} \right) - \gamma}, \label{eq:ta4} \end{eqnarray} where \begin{eqnarray} \gamma = \frac{\alpha_2 \alpha_5}{\alpha_4 \alpha_8} \label{eq:tgamma} \end{eqnarray} is the area ratio for the wake flow upstream of the disc as depicted in figure \ref{fig:theory2}. Note that this wake originates from the disc located two rows upstream (and its velocity increases from $\alpha_5 u_\mathrm{ref}$ to $\alpha_8 u_\mathrm{ref}$ when it passes through the row immediately upstream) due to the staggered configuration. 
As for $\beta_{4a}$ we consider the conservation of momentum for the entire flow passage to obtain \begin{equation} \begin{split} & (p_1 - p_4)\frac{A}{B} - T \\ & = \rho u_\mathrm{ref}^2 A \left[ \alpha_2 (\alpha_4 - 1) + (\gamma - \alpha_2)(\beta_{4a} - 1) + \gamma \alpha_8 (\alpha_8 - \alpha_5) + \beta_5 \left( \frac{1}{B} - \frac{\alpha_2}{\alpha_4} -\gamma \right) \left( \beta_{4b} - \beta_5 \right) \right] , \label{eq:tmom} \end{split} \end{equation} which, together with (\ref{eq:te1}) and (\ref{eq:tthrust}), leads to \begin{equation} \begin{split} \left( 1-B \right)\beta_{4a}^2 - 2B \left( \gamma-\alpha_2 \right) \beta_{4a} &- 2B \left[ \alpha_2 \alpha_4 - \frac{1}{2}\alpha_4^2 + \gamma \left( \alpha_8^2 - \alpha_5 \alpha_8 - 1 \right) \right] \\ &-2\beta_5 \left( \beta_{4b} - \beta_5 \right) \left[ 1-B \left( \frac{\alpha_2}{\alpha_4} + \gamma \right) \right] - 1 = 0. \label{eq:tb4a} \end{split} \end{equation} The above set of equations for the inviscid zone is not closed as it includes $\alpha_5$ and $\beta_5$, which need to be modelled considering the effect of mixing in the viscous zone. There are several possible ways to model the effect of mixing but here we employ a very simple approach using a single non-dimensional input parameter called the mixing factor, $m$, which represents the completeness of mixing in the viscous zone. Specifically, the value of $m$ (between 0 and 1) describes how much the velocity of flow at each cross-sectional position returns to the cross-sectional average velocity of the entire flow passage, $u_\mathrm{av}$, as the flow passes through the viscous zone. 
By applying this to the wake flow of the disc we obtain \begin{eqnarray} \alpha_5 = m\psi + \left( 1-m \right) \alpha_4 , \label{eq:ta5} \end{eqnarray} where $\psi = u_\mathrm{av}/u_\mathrm{ref}$ is the non-dimensional cross-sectional average velocity and this can be calculated from the velocity profile at station 1, for example, as \begin{eqnarray} \psi = B \left[ \gamma \left( \alpha_8 + 1 \right) + \beta_5 \left( \frac{1}{B} - \frac{\alpha_2}{\alpha_4} - \gamma \right) \right] . \label{eq:tpsi} \end{eqnarray} For the bypass flow, however, the difficulty is that the number of flow passages at station 4 does not agree with that at station 1, since the actuator disc creates the additional narrow bypass streamtube that has $\beta_{4a}$ at station 4. To obtain a closed set of equations for this periodic flow problem within the framework of quasi-one-dimensional modelling, here we assume that the narrow bypass flow immediately outside of the wake is `merged' or fully mixed with the main bypass flow regardless of the value of $m$. This assumption seems reasonable since the difference between $\beta_{4a}$ and $\beta_{4b}$ tends to be small as depicted in figure \ref{fig:theory2}, and eventually leads to \begin{eqnarray} \beta_5 = m\psi + \left( 1-m \right) \beta_{4m} , \label{eq:tb5} \end{eqnarray} where $\beta_{4m}$ is the `area-weighted' average of $\beta_{4a}$ and $\beta_{4b}$, i.e., \begin{eqnarray} \beta_{4m} = \frac{\left( \gamma - \alpha_2 \right) + \beta_5 \left( \dfrac{1}{B} - \dfrac{\alpha_2}{\alpha_4} - \gamma \right)}{\dfrac{\left( \gamma - \alpha_2 \right)}{\beta_{4a}} + \dfrac{\beta_5}{\beta_{4b}} \left( \dfrac{1}{B} - \dfrac{\alpha_2}{\alpha_4} - \gamma \right)} . 
\label{eq:tb4m} \end{eqnarray} It should be noted that $\alpha_9 = m\psi + \left( 1-m \right) \alpha_8$ is required together with (\ref{eq:ta5}) to (\ref{eq:tb4m}) to conserve the total mass in the viscous zone, but this is automatically satisfied as we enforce $\alpha_9=\alpha_1=1$ in this analysis due to the periodicity of the flow. Finally, the thrust and power coefficients of the disc are expressed (for a given cross-sectional average velocity of the entire flow, $u_\mathrm{av}=\psi u_\mathrm{ref}$) as \begin{eqnarray} && C_T = \frac{T}{\frac{1}{2}\rho \left( \psi u_\mathrm{ref} \right)^2 A} = \frac{\beta_{4a}^2 - \alpha_4^2}{\psi^2} , \label{eq:tct}\\ && C_P = \frac{P}{\frac{1}{2}\rho \left( \psi u_\mathrm{ref} \right)^3 A} = \frac{\alpha_2\left( \beta_{4a}^2 - \alpha_4^2 \right)}{\psi^3} . \label{eq:tcp} \end{eqnarray} For convenience, we also define the resistance coefficient (or local thrust coefficient) of the disc as \begin{eqnarray} K = \frac{T}{\frac{1}{2}\rho \left( \alpha_2 u_\mathrm{ref} \right)^2 A} , \label{eq:tk} \end{eqnarray} which, together with (\ref{eq:tthrust}), gives \begin{eqnarray} \alpha_2 = \left( \frac{\beta_{4a}^2 - \alpha_4^2}{K} \right)^{1/2} . \label{eq:ta2} \end{eqnarray} In summary, the above theoretical model for an infinite number of staggered rows of identical ideal turbines consists of three non-dimensional input parameters ($B$, $K$, $m$), ten non-dimensional unknowns to be determined ($\alpha_2$, $\alpha_4$, $\alpha_5$, $\alpha_8$, $\beta_{4a}$, $\beta_{4b}$, $\beta_{4m}$, $\beta_5$, $\gamma$, $\psi$) and a set of ten equations to be solved numerically: (\ref{eq:tb4b}), (\ref{eq:ta8}), (\ref{eq:ta4}), (\ref{eq:tgamma}), (\ref{eq:tb4a}), (\ref{eq:ta5}), (\ref{eq:tpsi}), (\ref{eq:tb5}), (\ref{eq:tb4m}) and (\ref{eq:ta2}).
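The ten-equation system summarised above can be solved with a standard multivariate root finder. The following Python sketch is our own illustration (not code from this study); the solver choice, variable names and initial guess are assumptions, with the residuals transcribed directly from the equations above. For $m=1$ the system reduces to the two-parameter problem of \citet{Garrett2007}, which provides a convenient check of the implementation.

```python
import numpy as np
from scipy.optimize import fsolve

def staggered_residuals(x, B, K, m):
    """Residuals of equations (tb4b), (ta8), (tmass), (tgamma), (tb4a),
    (ta5), (tpsi), (tb5), (tb4m) and (ta2) for the staggered model."""
    a2, a4, a5, a8, b4a, b4b, b4m, b5, g, psi = x
    Y = a2 / a4              # alpha_2/alpha_4 (core streamtube area term)
    X = 1.0 / B - Y - g      # main bypass-flow area term at station 1
    return [
        b4b**2 - (b4a**2 + b5**2 - 1.0),                          # (tb4b)
        a8**2 - (b4a**2 + a5**2 - 1.0),                           # (ta8)
        b5 * X - b4b * (X - (g - a2) / b4a),                      # (tmass)
        g * a4 * a8 - a2 * a5,                                    # (tgamma)
        (1.0 - B) * b4a**2 - 2.0 * B * (g - a2) * b4a
        - 2.0 * B * (a2 * a4 - 0.5 * a4**2
                     + g * (a8**2 - a5 * a8 - 1.0))
        - 2.0 * b5 * (b4b - b5) * (1.0 - B * (Y + g)) - 1.0,      # (tb4a)
        a5 - (m * psi + (1.0 - m) * a4),                          # (ta5)
        psi - B * (g * (a8 + 1.0) + b5 * X),                      # (tpsi)
        b5 - (m * psi + (1.0 - m) * b4m),                         # (tb5)
        b4m * ((g - a2) / b4a + (b5 / b4b) * X)
        - ((g - a2) + b5 * X),                                    # (tb4m)
        K * a2**2 - (b4a**2 - a4**2),                             # (ta2)
    ]

def solve_staggered(B, K, m, x0):
    x, _, ier, msg = fsolve(staggered_residuals, x0, args=(B, K, m),
                            full_output=True)
    assert ier == 1, msg
    a2, a4, b4a, psi = x[0], x[1], x[4], x[9]
    return a2 * (b4a**2 - a4**2) / psi**3, x     # C_P from (tcp)

# Example: B = 0.2 with K = 2(1+B)^3/(1-B)^2 (the single-row optimum),
# sweeping m downwards and reusing each solution as the next initial guess.
B = 0.2
K = 2.0 * (1.0 + B)**3 / (1.0 - B)**2
x0 = np.array([0.6, 0.35, 1.0, 1.3, 1.3, 1.3, 1.3, 1.0, 1.2, 1.0])
cp_st = {}
for m in [1.0, 0.95, 0.9, 0.85, 0.8, 0.75, 0.7]:   # continuation in m
    CP, x0 = solve_staggered(B, K, m, x0)
    cp_st[m] = CP
```

At $m=1$ this recovers the limit $C_P=(16/27)/(1-B)^2 \approx 0.926$ for $B=0.2$, and the sweep in $m$ reproduces the slight increase of $C_P$ just below $m=1$ that is discussed for the staggered case later in this section.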
\subsection{Aligned rows of actuator discs}\label{sub:theory_a} Compared to the staggered case, the theoretical model for the aligned case becomes simpler as only two bypass-flow streamtubes (instead of three) need to be considered in the inviscid zone. This is because the wake encounters the disc immediately downstream (instead of two rows downstream). In the following, we again consider the conservation of mass, momentum and energy in the inviscid zone, and the simplified mixing process in the viscous zone, to derive a similar but smaller set of equations for the aligned case. By applying the Bernoulli equation to the two bypass-flow streamtubes and the core-flow streamtube in the same manner as in the staggered case, we obtain (\ref{eq:te1}), (\ref{eq:te2}) and (\ref{eq:te4}), which lead to (\ref{eq:tb4b}), (\ref{eq:tthrust}) and (\ref{eq:tpower}) for $\beta_{4b}$, $T$ and $P$, respectively. Note that these equations are identical for both aligned and staggered cases, whereas (\ref{eq:te3}) and (\ref{eq:ta8}) are only for the staggered case (since the third bypass-flow streamtube does not exist in the aligned case). Next, from the conservation of mass in the `main' bypass-flow streamtube, we obtain \begin{eqnarray} \beta_5 u_\mathrm{ref} A \left(\frac{1}{B} - \frac{\alpha_2}{\alpha_4} \right) = \beta_{4b} u_\mathrm{ref} A \left[\frac{1}{B} - \frac{\alpha_2}{\alpha_4} - \frac{1}{\beta_{4a}}\left( \frac{\alpha_2}{\alpha_4}-\alpha_2 \right) \right], \label{eq:t2mass} \end{eqnarray} which leads to an expression for $\alpha_4$ as \begin{eqnarray} \alpha_4 = \frac{\alpha_2 \left( 1 + \dfrac{1}{\beta_{4a}} - \dfrac{\beta_5}{\beta_{4b}} \right)}{\dfrac{1}{B}\left( 1 - \dfrac{\beta_5}{\beta_{4b}} \right) + \dfrac{\alpha_2}{\beta_{4a}} }. 
\label{eq:t2a4} \end{eqnarray} For $\beta_{4a}$ we consider the conservation of momentum for the entire flow to obtain \begin{equation} \begin{split} & (p_1 - p_4)\frac{A}{B} - T \\ & = \rho u_\mathrm{ref}^2 A \left[ \alpha_2 (\alpha_4 - 1) + \left( \frac{\alpha_2}{\alpha_4} - \alpha_2 \right)(\beta_{4a} - 1) + \beta_5 \left( \frac{1}{B} - \frac{\alpha_2}{\alpha_4} \right) \left( \beta_{4b} - \beta_5 \right) \right] , \label{eq:t2mom} \end{split} \end{equation} which, together with (\ref{eq:te1}) and (\ref{eq:tthrust}), leads to \begin{equation} \begin{split} \left( 1-B \right)\beta_{4a}^2 - 2B \left( \frac{\alpha_2}{\alpha_4}-\alpha_2 \right) \beta_{4a} &- 2B \left( \alpha_2 \alpha_4 - \frac{1}{2}\alpha_4^2 - \frac{\alpha_2}{\alpha_4} \right) \\ &-2\beta_5 \left( \beta_{4b} - \beta_5 \right) \left( 1-B \frac{\alpha_2}{\alpha_4} \right) - 1 = 0. \label{eq:t2b4a} \end{split} \end{equation} For $\beta_5$ we need to consider the effect of mixing in the viscous zone. By employing the same approach as in the staggered case, we obtain (\ref{eq:tb5}) for the aligned case as well. Note, however, that $\psi$ and $\beta_{4m}$ are different for the aligned case, which are \begin{eqnarray} \psi = B \left[ \frac{\alpha_2}{\alpha_4} + \beta_5 \left( \frac{1}{B} - \frac{\alpha_2}{\alpha_4} \right) \right] , \label{eq:t2psi} \end{eqnarray} and \begin{eqnarray} \beta_{4m} = \frac{\left( \dfrac{\alpha_2}{\alpha_4} - \alpha_2 \right) + \beta_5 \left( \dfrac{1}{B} - \dfrac{\alpha_2}{\alpha_4} \right)}{\dfrac{1}{\beta_{4a}}\left( \dfrac{\alpha_2}{\alpha_4} - \alpha_2 \right) + \dfrac{\beta_5}{\beta_{4b}} \left( \dfrac{1}{B} - \dfrac{\alpha_2}{\alpha_4} \right)} . \label{eq:t2b4m} \end{eqnarray} We also need (\ref{eq:ta5}) together with the above equations to conserve the total mass in the viscous zone, but this is automatically satisfied since here we enforce $\alpha_5=\alpha_1=1$ due to the periodicity of the flow. 
Finally, $C_T$, $C_P$ and $K$ are all defined as in the staggered case, yielding the same equations (\ref{eq:tct}) to (\ref{eq:tk}) and thus (\ref{eq:ta2}) for $\alpha_2$. In summary, the theoretical model for the aligned case consists of three non-dimensional input parameters ($B$, $K$, $m$), seven non-dimensional unknowns ($\alpha_2$, $\alpha_4$, $\beta_{4a}$, $\beta_{4b}$, $\beta_{4m}$, $\beta_5$, $\psi$) and a set of seven equations to be solved numerically: (\ref{eq:tb4b}), (\ref{eq:tb5}), (\ref{eq:ta2}), (\ref{eq:t2a4}), (\ref{eq:t2b4a}), (\ref{eq:t2psi}) and (\ref{eq:t2b4m}).
\subsection{Example solutions}\label{sub:theory_e}
Here we present some example solutions of the above theoretical model, first for the aligned case and then for the staggered case. As this is a three-parameter ($B$-$K$-$m$) problem, we start by fixing the disc resistance coefficient, $K$, at its optimal value for the complete mixing case, i.e., $m=1$. When $m=1$, our problem (for both aligned and staggered cases) reduces to the two-parameter ($B$-$K$) problem of \citet{Garrett2007} (hereafter referred to as GC07); hence, this optimal value for $m=1$ is known to be $K=2(1+B)^3/(1-B)^2$, which we refer to as $K_\mathrm{GC07}$. Figure \ref{fig:theory3} shows how the performance of aligned rows of such `suboptimal' actuator discs (with $K=K_\mathrm{GC07}$) changes with $m$, at two different blockage ratios, $B=0.05$ and $0.2$. As can be expected intuitively, the power coefficient $C_P$ of aligned discs decreases monotonically as the mixing factor $m$ decreases (regardless of the blockage ratio). This agrees with the common observation that the power of aligned rows of turbines tends to decrease as we reduce the streamwise spacing between the rows, noting that the mixing tends to be less complete (i.e., $m$ tends to be small) when the spacing is small.
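This monotonic decrease can be verified by solving the seven-equation aligned system of \S\ref{sub:theory_a} numerically. The sketch below is our own illustration (not code from the study); the root-finding routine, variable names and initial guess are assumptions, while the residuals follow the equations in the text.

```python
import numpy as np
from scipy.optimize import fsolve

def aligned_residuals(x, B, K, m):
    """Residuals of equations (tb4b), (t2mass), (t2b4a), (t2psi),
    (tb5), (t2b4m) and (ta2) for the aligned model."""
    a2, a4, b4a, b4b, b4m, b5, psi = x
    Y = a2 / a4              # alpha_2/alpha_4
    X = 1.0 / B - Y          # main bypass-flow area term at station 1
    return [
        b4b**2 - (b4a**2 + b5**2 - 1.0),                      # (tb4b)
        b5 * X - b4b * (X - (Y - a2) / b4a),                  # (t2mass)
        (1.0 - B) * b4a**2 - 2.0 * B * (Y - a2) * b4a
        - 2.0 * B * (a2 * a4 - 0.5 * a4**2 - Y)
        - 2.0 * b5 * (b4b - b5) * (1.0 - B * Y) - 1.0,        # (t2b4a)
        psi - B * (Y + b5 * X),                               # (t2psi)
        b5 - (m * psi + (1.0 - m) * b4m),                     # (tb5)
        b4m * ((Y - a2) / b4a + (b5 / b4b) * X)
        - ((Y - a2) + b5 * X),                                # (t2b4m)
        K * a2**2 - (b4a**2 - a4**2),                         # (ta2)
    ]

def solve_aligned(B, K, m, x0):
    x, _, ier, msg = fsolve(aligned_residuals, x0, args=(B, K, m),
                            full_output=True)
    assert ier == 1, msg
    a2, a4, b4a, psi = x[0], x[1], x[2], x[6]
    return a2 * (b4a**2 - a4**2) / psi**3, x     # C_P from (tcp)

# Example: B = 0.2 with K fixed at K_GC07 = 2(1+B)^3/(1-B)^2, sweeping m
# downwards and reusing each solution as the next initial guess.
B = 0.2
K_GC07 = 2.0 * (1.0 + B)**3 / (1.0 - B)**2
x0 = np.array([0.6, 0.4, 1.3, 1.3, 1.3, 1.0, 1.0])
cp_al = {}
for m in [1.0, 0.95, 0.9, 0.85, 0.8, 0.75, 0.7]:   # continuation in m
    CP, x0 = solve_aligned(B, K_GC07, m, x0)
    cp_al[m] = CP
```

At $m=1$ the solution recovers the GC07 value $C_P=(16/27)/(1-B)^2 \approx 0.926$ for $B=0.2$, and $C_P$ decreases monotonically as $m$ is reduced, consistent with figure \ref{fig:theory3}.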
We also present in figure \ref{fig:theory3} the values of $\alpha_2(\beta_{4a}^2-\alpha_4^2)$ and $1/\psi^3$, to explain why $C_P$ decreases as $m$ decreases. As can be seen from (\ref{eq:tcp}), $C_P$ is equivalent to the product of these two values; the former is a different power coefficient defined using the velocity upstream of the disc ($u_\mathrm{ref}$) instead of the cross-sectional average velocity ($u_\mathrm{av}$), and the latter represents the effect of the difference between $u_\mathrm{ref}$ and $u_\mathrm{av}$. Here we can see that the value of $\alpha_2(\beta_{4a}^2-\alpha_4^2)$ actually increases as $m$ decreases. This is essentially because the local blockage effect is enhanced (or the `effective' blockage ratio increases) when the upstream core flow is slower than the upstream bypass flow \citep{Draper2016}. This enhancement of the local blockage effect caused by the incomplete wake mixing of the upstream disc, however, is not strong enough to compensate for the `loss' of power possessed by the upstream core flow compared to the `original' power possessed by the cross-sectionally averaged flow (i.e., the increase rate of $\alpha_2(\beta_{4a}^2-\alpha_4^2)$ is smaller than the decrease rate of $1/\psi^3$); therefore, $C_P$ eventually decreases as $m$ decreases. \begin{figure} \vspace{0.2cm} \centerline{\includegraphics[width=.7\linewidth]{Fig_Theory_3.png}} \caption{Theoretical prediction of the effect of mixing on the performance of aligned rows of actuator discs at two different blockage ratios: ($a$) $B=0.05$ and ($b$) $B=0.2$. 
The disc resistance coefficient is fixed at the optimal value for the case with complete mixing ($m=1$), i.e., $K=2(1+B)^3/(1-B)^2$.} \label{fig:theory3} \end{figure} \begin{figure} \vspace{0.2cm} \centerline{\includegraphics[width=1.\linewidth]{Fig_Theory_4.png}} \caption{Theoretical $C_P$ versus $K$ for the aligned case at ($a$) $B=0.05$ and ($b$) $B=0.2$; and ($c$) the maximum $C_P$ obtained by varying $K$ for a given set of $B$ and $m$.} \label{fig:theory4} \end{figure} Although the above analysis was for `suboptimal' discs with $K=K_\mathrm{GC07}$, the same relationship between $C_P$ and $m$ is generally found for aligned discs with any given $K$ values. Figures \ref{fig:theory4}($a$) and ($b$) show $C_P$ versus $K$ curves for five different $m$ values, again at $B=0.05$ and $0.2$, respectively, demonstrating the trend. It can also be seen that the optimal $K$ value (to maximise $C_P$) tends to decrease as $m$ decreases, although the $C_P$ obtained at $K=K_\mathrm{GC07}$ is still close to the maximum $C_P$ for a given $m$ (especially when the blockage ratio is high). Figure \ref{fig:theory4}($c$) shows how the maximum $C_P$, or $C_{P\mathrm{max}}$, changes with $m$. As can be expected, for aligned rows of actuator discs, $C_{P\mathrm{max}}$ also decreases monotonically as $m$ decreases. Next, we focus on staggered rows of actuator discs. Similarly to the aligned case, we start with the performance of `suboptimal' discs with $K=K_\mathrm{GC07}$, which is shown in figures \ref{fig:theory5}($a$) and ($b$) for $B=0.05$ and $B=0.2$, respectively. At $B=0.05$, the value of $1/\psi^3$ slightly increases (meaning that the upstream core flow velocity becomes slightly higher than the cross-sectional average velocity) as $m$ decreases from 1 to about 0.95, and then decreases as $m$ further decreases. 
In contrast, the value of $\alpha_2(\beta_{4a}^2-\alpha_4^2)$ first decreases slightly and then increases as $m$ decreases from 1; this is again due to changes in the `effective' blockage ratio, i.e., the local blockage effect is enhanced (or diminished) when the upstream core flow is surrounded by a faster (or slower) bypass flow. However, the increase (or decrease) rate of $\alpha_2(\beta_{4a}^2-\alpha_4^2)$ tends to be smaller than the decrease (or increase) rate of $1/\psi^3$, and hence $C_P$ follows the trend of $1/\psi^3$, i.e., $C_P$ slightly increases first and then decreases as $m$ decreases from 1. This initial increase in $1/\psi^3$ (and thus in $C_P$) is more clearly seen at $B=0.2$, demonstrating that the beneficial effect of the staggered layout (allowing the velocity upstream of the disc to become higher than the cross-sectional average velocity) is enhanced at a higher blockage ratio. Figure \ref{fig:theory5}($c$) compares $C_P$ versus $B$ curves for the complete mixing case ($m=1$) and for the `optimal mixing' case ($m=m_{C_P}$, where $m_{C_P}$ is the value of $m$ that maximises $C_P$), again for staggered rows of `suboptimal' discs ($K=K_\mathrm{GC07}$). Also plotted together are $m_{C_P}$ and $m_{\psi}$, the latter of which represents the value of $m$ that minimises $\psi$ (and thus maximises $1/\psi^3$). It can be seen how the increase in $C_P$ due to the beneficial effect of the staggered layout is enhanced, and how the optimal level of mixing decreases, as we increase the blockage ratio. It is also worth noting that the difference between $m_{C_P}$ and $m_{\psi}$ is small, especially at low blockage ratios. This reflects the dominant influence of $1/\psi^3$ on $C_P$ as described earlier in figures \ref{fig:theory5}($a$) and ($b$). The $C_P$ values presented above are for `suboptimal' discs (with $K=K_\mathrm{GC07}$) and can therefore be increased further by adjusting $K$ for a given $m$ (or for a given streamwise distance between the rows). 
However, this additional increase in $C_P$ achieved by varying $K$ is rather small for the staggered case. Figures \ref{fig:theory6}($a$) and ($b$) show $C_P$ versus $K$ curves for five different $m$ values, again at $B=0.05$ and $0.2$, respectively. When $B$ is small, $K_\mathrm{GC07}$ tends to be already close to the optimal $K$ value for a given $m$; hence, the benefit of further adjusting $K$ is small. When $B$ is large, $K_\mathrm{GC07}$ is not so close to the optimal $K$ for a given $m$, but the $C_P$ versus $K$ curve tends to have a flatter peak; hence again, the additional gain in $C_P$ by adjusting $K$ is small. Figure \ref{fig:theory6}($c$) shows the maximum $C_P$ (achieved by adjusting $K$) versus $m$ curves for five different blockage ratios. The curves for $B=0.05$ and $0.2$ are in fact very similar to the $C_P$ versus $m$ curves for $K=K_\mathrm{GC07}$ plotted earlier in figures \ref{fig:theory5}($a$) and ($b$). At $B=0.2$, for example, the highest $C_P$ value achieved with $K=K_\mathrm{GC07}$ is only 1.1$\%$ lower than that achieved by optimising $K$. \begin{figure} \vspace{0.2cm} \centerline{\includegraphics[width=1.\linewidth]{Fig_Theory_5.png}} \caption{Theoretical prediction of the performance of staggered rows of actuator discs: the effect of mixing at ($a$) $B=0.05$ and ($b$) $B=0.2$; and ($c$) comparison of $C_P$ versus $B$ between the complete mixing case ($m=1$) and optimal mixing case ($m=m_{C_P}$). Plotted together are $m_{C_P}$ and $m_{\psi}$, which are the values of $m$ to maximise $C_P$ and to minimise $\psi$, respectively. 
The disc resistance coefficient is fixed at the optimal value for $m=1$, i.e., $K=2(1+B)^3/(1-B)^2$.} \label{fig:theory5} \end{figure} \begin{figure} \vspace{0.2cm} \centerline{\includegraphics[width=1.\linewidth]{Fig_Theory_6.png}} \caption{Theoretical $C_P$ versus $K$ for the staggered case at ($a$) $B=0.05$ and ($b$) $B=0.2$; and ($c$) the maximum $C_P$ obtained by varying $K$ for a given set of $B$ and $m$.} \label{fig:theory6} \end{figure} In summary, these example solutions of the simple three-parameter ($B$-$K$-$m$) theoretical model suggest the following: (i) For an infinitely large array of staggered rows of tidal turbines, there is an optimal streamwise turbine spacing (for a given cross-sectional blockage ratio or a given lateral turbine spacing) to maximise the power of each turbine relative to the power of the cross-sectional average flow. This is an outcome of the combined effects of local blockage and wake mixing, i.e., this optimum exists because the local flow velocity upstream of each disc can become higher than the cross-sectional average velocity (and this does help to increase the relative power despite reducing the `effective' blockage ratio) when the streamwise turbine spacing is reasonably (not excessively) large for the wake mixing behind each disc to be largely (but not entirely) completed. (ii) The optimal turbine resistance to maximise this relative power also depends on both lateral and streamwise turbine spacing, but the optimal value for a single row, namely $K=2(1+B)^3/(1-B)^2$, is expected to yield close to the maximum relative power. (iii) For an infinitely large array of aligned rows of turbines, however, this relative power is maximised simply when the streamwise turbine spacing is large enough for the wake mixing to be entirely completed; hence, the optimal turbine resistance becomes identical to that for a single row.
Finally, it should be remembered that, in the real world, the array of turbines cannot be infinitely large and that the cross-sectional average velocity (which was considered as a fixed parameter in the above analysis) will depend on how the flow resistance caused by the entire array alters the natural tidal-channel-scale momentum balance \citep[see, e.g.,][]{Vennell2012,Vennell2015,Gupta2017}. Therefore, the true optimal turbine resistance (to maximise the power of turbines in a given tidal channel) will differ from that discussed in the above analysis.
\section{Large-eddy simulation set-up}\label{sec:testcases}
The theoretical model presented above has been developed on the assumption that the flow around each turbine is inviscid and steady. However, real turbine wakes are unsteady, with coherent flow structures developed at the rotor level (e.g. tip vortices) and further downstream (e.g. wake meandering). Thus, we now investigate how the theoretical predictions of the performance of infinitely large tidal arrays compare with high-fidelity numerical predictions using Large-Eddy Simulation (LES). We adopt the well-validated in-house LES code Digital Offshore Farms Simulator (DOFAS), in which turbine blades are represented using an Actuator Line Method (ALM) \citep{Ouro2019JFS} and the flow solver is fully parallelised using the Message Passing Interface (MPI), providing great computational scalability and performance \citep{Ouro2019CAF}. Details of the flow solver are provided in the Appendix. We intentionally set the flow conditions to be as similar as possible to the conditions considered in the theoretical analysis, whilst ensuring that the physical dimensions of the turbines are close to those of real tidal stream turbines. The 1 MW DEEPGen IV tidal stream turbine design from the ReDAPT project is adopted, and the details of the blade hydrodynamic data used for the ALM are available in \citet{Scarlett2019}.
The turbines have a diameter ($D$) of 12 m, rotating at a constant speed that corresponds to a tip-speed ratio of 4.0 (which is known to be optimal for the case of single-turbine operation), and include 10 m long ($0.8D$) nacelles. For convenience, we set the cross-sectional average velocity $U_0$ to 2.0 m s$^{-1}$ and consider this as our reference velocity, which yields a rotational speed of $\Omega = 1.33$ rad s$^{-1}$ and a full-revolution period of $T$ = 4.724 s. The computational domain is 432 m long ($L$), 144 m wide ($W$) and 24 m high ($H$), equivalent to $36D \times 12D \times 2D$, which is close to the $6\pi H \times 2\pi H \times H$ commonly used in turbulent channel-flow simulations. Turbines are vertically centred at mid-water depth, i.e. $z=H/2$, to reduce vertical asymmetry effects that may complicate the comparison between LMADT and LES. A fixed time step $\Delta t$ is set to 0.045 s together with a uniform spatial resolution $\Delta x$ equal to 0.25 m, yielding a total of 48 mesh elements across the rotor diameter, which is similar to the resolution adopted in other LES-ALM studies, e.g. \citet{Churchfield2013,Yang2019,Foti2019}. Hence, the domain is divided into 1692 $\times$ 576 $\times$ 96 grid cells over the three spatial directions, with a total of about 93.5 $\times$ 10$^6$ elements. Simulations are run using 864 processors on three High-Performance Computing facilities, namely Supercomputing Wales, GW4 Isambard, and ARCHER. In the present LES, we represent infinite turbine arrays by imposing periodic boundary conditions in the streamwise and transverse directions. The flow is driven by a pressure-gradient term $\Pi_i$, which enforces a constant mass flux across the entire domain such that the cross-sectional average velocity remains equal to $U_0$.
Fixing the bulk velocity enables a direct comparison between the LES and the theoretical analysis in \S\ref{sec:theory}, unlike fixing $\Pi_i$ as in other infinitely large wind-farm simulations \citep{Calaf2010,Sharma2018}. Note that, to simulate a real large tidal array, both the bulk velocity and $\Pi_i$ would need to vary depending on the macro-scale momentum balance over the entire tidal channel, which is outside the scope of the present study. Shear-free conditions are adopted at the top boundary to represent a free surface, whilst the bottom shear stress is calculated using wall functions for a hydrodynamically smooth wall, similarly to \citet{Kang2014}. A representation of the computational domain is presented in figure \ref{fig:compdomain} together with instantaneous flow structures visualised using Q-criterion iso-surfaces for one of the configurations studied.

\begin{figure} \centerline{\includegraphics[width=.95\linewidth]{Fig_CompDomain_annotated.pdf}} \caption{Dimensions of the computational domain and instantaneous flow structures generated in the ST-9x6 case, visualised using iso-surfaces of the Q-criterion.} \label{fig:compdomain} \end{figure}

An initial precursor simulation was performed for over 60 eddy turnover time units ($t_e$), corresponding to nearly 27 flow-through times ($FT = L / U_0$), in order to generate fully developed turbulent flow conditions to be used as the initial flow field for each array simulation. We perform 28 infinitely large array simulations, whose details are presented in table \ref{table:setup}, which provides the normalised streamwise and transverse separations between turbines ($S_x/D$, $S_y/D$), the local blockage ratio $B$ (the ratio of the total projected rotor area per row to the open-channel cross-sectional area, i.e. $B = n_y \pi D^2 /(4 H W)$, where $n_y$ is the number of turbines per row), the physical simulated time in terms of flow-through times, and the number of computed rotor revolutions.
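Because the domain is periodic in the transverse direction, the number of turbines per row follows from the lateral spacing as $n_y = W/S_y$, and the blockage values listed in table \ref{table:setup} can be reproduced directly (a minimal sketch using the domain dimensions quoted above):

```python
import math

D, H, W = 12.0, 24.0, 144.0       # rotor diameter, depth, domain width [m]

def local_blockage(Sy_over_D):
    """B = n_y * pi * D^2 / (4 * H * W), with n_y = W / S_y turbines per row."""
    n_y = W / (Sy_over_D * D)     # turbines per row in the periodic domain
    return n_y * math.pi * D ** 2 / (4 * H * W)

for s in (12, 6, 4, 3):
    print(s, round(local_blockage(s), 3))
# S_y/D = 12, 6, 4, 3 -> B = 0.033, 0.065, 0.098, 0.131, matching the table
```

Note that $W$ cancels, i.e. $B = \pi D^2/(4 H S_y)$, so the blockage depends only on the water depth and the lateral spacing.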
Also presented in this table are the key results of each simulation, namely the thrust coefficient ($C_T$), power coefficient ($C_P$), their relative difference ($\Delta C_T$, $\Delta C_P$) from the AL-36x12 configuration as a reference case, and the resistance coefficient ($K$). The arrays adopted in the LES are perfectly aligned and staggered layouts labelled as {AL} and {ST} followed by the device spacing $S_x$x$S_y$, e.g. case {ST-9x6} corresponds to a staggered array with $S_x/D$ = 9 and $S_y/D$ = 6 as in figure \ref{fig:compdomain}. \begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{lcccccccccccc} Case & $S_x/D$ & $S_y/D$ & $B$ & Time ($FT$) & Revs & $C_T$ & $C_P$ & $\Delta C_T$ & $\Delta C_P$ & $K$ \\ AL-36x12 & 36 & 12 & 0.033 & 27.1 & 1257 & 0.767 & 0.287 & 1.000 & 1.000 & 2.55 \\ AL-18x12 & 18 & 12 & 0.033 & 27.1 & 1257 & 0.766 & 0.277 & 0.999 & 0.965 & 2.74 \\ AL-9x12 & 9 & 12 & 0.033 & 27.1 & 1257 & 0.751 & 0.264 & 0.979 & 0.922 & 2.80 \\ AL-6x12 & 6 & 12 & 0.033 & 20.6 & 957 & 0.737 & 0.252 & 0.961 & 0.879 & 2.82 \\ ST-18x12 & 18 & 12 & 0.033 & 25.0 & 1160 & 0.777 & 0.292 & 1.014 & 1.021 & 2.29 \\ ST-9x12 & 9 & 12 & 0.033 & 25.0 & 1160 & 0.774 & 0.291 & 1.009 & 1.016 & 2.18 \\ ST-6x12 & 6 & 12 & 0.033 & 25.0 & 1160 & 0.774 & 0.292 & 1.009 & 1.019 & 2.07 \\ \\ AL-36x6 & 36 & 6 & 0.065 & 24.6 & 1141 & 0.786 & 0.301 & 1.025 & 1.050 & 2.20 \\ AL-18x6 & 18 & 6 & 0.065 & 26.6 & 1233 & 0.785 & 0.302 & 1.024 & 1.054 & 2.11 \\ AL-9x6 & 9 & 6 & 0.065 & 20.9 & 972 & 0.772 & 0.293 & 1.006 & 1.021 & 2.12 \\ AL-6x6 & 6 & 6 & 0.065 & 20.8 & 967 & 0.764 & 0.285 & 0.996 & 0.996 & 2.02 \\ ST-18x6 & 18 & 6 & 0.065 & 24.8 & 1151 & 0.784 & 0.300 & 1.022 & 1.045 & 2.28 \\ ST-9x6 & 9 & 6 & 0.065 & 23.4 & 1088 & 0.783 & 0.299 & 1.020 & 1.045 & 2.25 \\ ST-6x6 & 6 & 6 & 0.065 & 25.0 & 1160 & 0.777 & 0.293 & 1.013 & 1.024 & 2.27 \\ \\ AL-36x4 & 36 & 4 & 0.098 & 24.7 & 1146 & 0.787 & 0.307 & 1.026 & 1.073 & 1.92 \\ AL-18x4 & 18 & 4 & 0.098 & 25.0 & 1160 & 0.787 & 
0.310 & 1.026 & 1.082 & 1.89 \\ AL-9x4 & 9 & 4 & 0.098 & 23.3 & 1083 & 0.777 & 0.300 & 1.013 & 1.046 & 1.88 \\ AL-6x4 & 6 & 4 & 0.098 & 23.4 & 1088 & 0.765 & 0.285 & 0.997 & 0.995 & 2.08 \\ ST-18x4 & 18 & 4 & 0.098 & 23.1 & 1074 & 0.787 & 0.310 & 1.027 & 1.082 & 1.93 \\ ST-9x4 & 9 & 4 & 0.098 & 23.4 & 1088 & 0.781 & 0.314 & 1.018 & 1.094 & 1.82 \\ ST-6x4 & 6 & 4 & 0.098 & 19.9 & 924 & 0.780 & 0.312 & 1.017 & 1.089 & 1.78 \\ \\ AL-36x3 & 36 & 3 & 0.131 & 27.1 & 1257 & 0.796 & 0.316 & 1.038 & 1.102 & 2.09 \\ AL-18x3 & 18 & 3 & 0.131 & 22.7 & 1054 & 0.796 & 0.316 & 1.038 & 1.104 & 1.90 \\ AL-9x3 & 9 & 3 & 0.131 & 23.8 & 1102 & 0.780 & 0.300 & 1.017 & 1.047 & 1.99 \\ AL-6x3 & 6 & 3 & 0.131 & 22.2 & 1030 & 0.767 & 0.285 & 1.000 & 0.996 & 2.19 \\ ST-18x3 & 18 & 3 & 0.131 & 26.9 & 1246 & 0.785 & 0.319 & 1.024 & 1.112 & 1.58 \\ ST-9x3 & 9 & 3 & 0.131 & 22.0 & 1020 & 0.787 & 0.320 & 1.027 & 1.116 & 1.64 \\ ST-6x3 & 6 & 3 & 0.131 & 23.3 & 1083 & 0.788 & 0.320 & 1.028 & 1.116 & 1.64 \end{tabular} \caption{Details of the LES cases with streamwise and spanwise spacing ($S_x/D$, $S_y/D$), local blockage ratio ($B$), simulated physical time in terms of flow-through times, number of revolutions, hydrodynamic coefficients ($C_T$ and $C_P$), their variation compared to the reference case AL-36x12, and resistance coefficient ($K$).} \label{table:setup} \end{center} \end{table} \section{Large-eddy simulation results} \label{sec:results} First, we focus on the array layouts with $S_x/D$ = 9 in order to elucidate how the flow field varies with $S_y/D$. The time-averaged and instantaneous flow fields are presented in \S\ref{sec:flowfield}, followed by some turbulence statistics in \S\ref{sec:turbulence}. 
We then present the wake centre-line velocities for all 28 cases in \S\ref{sec:centreline} to analyse the effects of both $S_x/D$ and $S_y/D$ on the wake recovery, and finally in \S\ref{sec:farmefficiency} the variations of the power and thrust coefficients are presented and compared with the theoretical predictions. Hereafter, the time-averaged value of any variable is denoted by $\langle \cdot \rangle$ and the instantaneous fluctuation, obtained from the Reynolds decomposition, by $(\cdot)'$.

\subsection{Streamwise velocity field} \label{sec:flowfield}

We present in figure \ref{fig:les_cont_vel} contours of the normalised time-averaged streamwise velocity ($\langle u \rangle/U_0$), comparing aligned and staggered layouts with the same streamwise spacing of $S_x/D=9$ and lateral spacings of $S_y/D$ = 6, 4 and 3. For aligned arrays, reducing the lateral separation between devices in the same row increases the local blockage, which induces a larger flow acceleration in the bypass flow between them; this is most evident for AL-9x3, in which the bypass-flow velocity is approximately 20\% higher than the bulk velocity. In perfectly staggered arrays, the wake generated behind each turbine is mostly recovered by the time it reaches the following row at a downstream distance of $S_x$. Then, owing to the lateral blockage caused by the turbines located on both sides, the mostly recovered wake accelerates further before impinging on the turbine located in the next row, i.e. $2S_x$ downstream of the turbine that originally generated the wake. For layouts with low lateral blockage, i.e. $S_y/D$ = 6 and 4, there is a wider lateral spread of the wakes, which is clearly observed in figure \ref{fig:les_cont_vel} for the aligned cases. However, in staggered configurations this lateral wake expansion appears limited compared to the corresponding aligned cases.
\begin{figure} \centerline{\includegraphics[width=.99\linewidth]{Fig_AL_ST_LES_U_ann.pdf}} \caption{Contours of time-averaged streamwise velocity $\langle u \rangle/U_0$ at hub height for aligned and staggered cases with $S_x/D$ = 9 and lateral spacing $S_y/D$ = 6 (top), 4 (mid) and 3 (bottom).} \label{fig:les_cont_vel} \end{figure} The different spreading rates of the time-averaged turbine wakes are partly explained by their instantaneous behaviour. In figure \ref{fig:les_cont_instvel} we present contours of instantaneous streamwise velocities ($u/U_0$) for the previous cases shown in figure \ref{fig:les_cont_vel} with $S_x/D$ = 9 and changing $S_y$, which reveals the pronounced meandering nature of the wakes with large oscillation amplitudes \citep{Foti2019}. In aligned configurations, the meandering of the wake is affected by the lateral distance between turbines in the same row, i.e., increasing $S_y$ leads to a larger meandering amplitude. This is presumably because a higher lateral blockage can constrain the spanwise wake motion. It can also be seen that there is a larger wake meandering amplitude in aligned arrays compared to their staggered counterparts. The lateral wake motion due to wake meandering seems almost negligible for ST-9x3, explaining the narrow time-averaged wakes shown earlier in figure \ref{fig:les_cont_vel}. \begin{figure} \centerline{\includegraphics[width=.99\linewidth]{Fig_AL_ST_LES_uinst_ann.png}} \caption{Contours of instantaneous streamwise velocity $u/U_0$ at hub height for aligned and staggered cases with $S_x/D$ = 9 and lateral spacing $S_y/D$ = 6 (top), 4 (mid) and 3 (bottom). 
Same legend as figure \ref{fig:les_cont_vel}.} \label{fig:les_cont_instvel} \end{figure}

\subsection{Turbulence statistics} \label{sec:turbulence}

We further analyse the flow field in figures \ref{fig:les_cont_vel_up} and \ref{fig:les_cont_vel_vp} with contours of turbulence intensity in the streamwise ($\sigma_u/U_0$, with $\sigma_u = \langle u'u' \rangle^{0.5}$) and transverse ($\sigma_v/U_0$, with $\sigma_v = \langle v'v' \rangle^{0.5}$) directions, again for the cases with $S_x/D$ = 9. These second-order statistics indicate that arranging turbines in an aligned layout leads to notably stronger flow unsteadiness both inside and outside the wake of each turbine compared to the staggered cases. This agrees with the earlier observation that the wake meandering is stronger in the aligned cases. In the near-wake region, in which tip vortices are still coherent and maintain a shear layer between the core flow and the bypass flow \citep{Ouro2019JFS}, both streamwise and spanwise turbulence intensities are substantially higher in the aligned cases than in the staggered cases. Low turbulence intensity regions in the staggered cases are evident in the flow bypassing each turbine, coinciding with the regions in which the wake of an upstream turbine is almost fully recovered, as shown earlier in figures \ref{fig:les_cont_vel} and \ref{fig:les_cont_instvel}. The results also show that increasing the blockage ratio $B$ (or reducing the distance $S_y$ between turbines in the same row) reduces the turbulence intensity irrespective of the overall turbine arrangement, as a consequence of constraining the meandering motion.
\begin{figure} \centerline{\includegraphics[width=.99\linewidth]{Fig_AL_ST_LES_up_ann2.pdf}} \caption{Contours of streamwise turbulence intensity ($\sigma_u/U_0$) at hub height for aligned and staggered cases with $S_x/D$ = 9 and lateral spacing $S_y/D$ = 6 (top), 4 (mid) and 3 (bottom).} \label{fig:les_cont_vel_up} \end{figure} \begin{figure} \centerline{\includegraphics[width=.99\linewidth]{Fig_AL_ST_LES_vp_ann2.pdf}} \caption{Contours of spanwise turbulence intensity ($\sigma_v/U_0$) at hub height for aligned and staggered cases with $S_x/D$ = 9 and lateral spacing $S_y/D$ = 6 (top), 4 (mid) and 3 (bottom).} \label{fig:les_cont_vel_vp} \end{figure} As can be expected from the above results, the wake meandering also notably affects the distribution of turbulent momentum exchange, which is described in figure \ref{fig:les_cont_vel_upvp} with contours of the Reynolds shear stress $-\langle u'v' \rangle/U_0^2$ at hub height. For each array layout, narrowing the lateral spacing reduces the level of Reynolds shear stress, indicating that there is a lower level of momentum exchange between the wakes and the surrounding bypass flows. Staggered layouts consistently attain a lower shear stress level compared to the aligned layouts irrespective of the number of turbines deployed. \begin{figure} \centerline{\includegraphics[width=.99\linewidth]{Fig_AL_ST_LES_upvp_ann.pdf}} \caption{Contours of Reynolds shear stress ($-\langle u'v' \rangle/U_0^2$) obtained at hub height for aligned and staggered cases with $S_x/D$ = 9 and lateral spacing $S_y/D$ = 6 (top), 4 (mid) and 3 (bottom).} \label{fig:les_cont_vel_upvp} \end{figure} \subsection{Centre-line velocities} \label{sec:centreline} Next, we quantitatively compare the mean streamwise velocity ($\langle u \rangle /U_0$) distribution along the wake centre-line in figure \ref{fig:les_Ucentreline} for every configuration simulated (see table \ref{table:setup}). 
Note that for $S_x/D$ = 18, 9 and 6 we plot the velocity distribution over two rows of turbines, including a shaded area representing the location of the turbines in the second row. When the streamwise spacing is relatively small ($S_x/D$ = 9 and 6), the values of $\langle u \rangle/U_0$ behind the turbine nacelles increase faster for the lower lateral blockage cases, i.e. $S_y/D$ = 12 and 6. This faster wake recovery in lower blockage cases results from a larger entrainment of ambient flow into the wake, enhanced by the stronger meandering behaviour, as seen earlier in the distribution of Reynolds shear stresses in figure \ref{fig:les_cont_vel_upvp}. Although lower blockage ratios (or larger $S_y/D$ values) tend to result in higher wake recovery rates in the near-wake region, this trend does not hold in the far-wake region. In the aligned cases with $S_x/D$ = 18, the values of $\langle u\rangle/U_0$ are slightly above unity just upstream of the turbines for configurations with $S_y/D \leq $ 6, whereas for $S_y/D$ = 12 the wake velocity is not fully recovered to the bulk velocity value despite the high wake recovery rate in the near-wake region. Further decreasing $S_x/D$ to 9 in aligned cases reduces the amount of wake recovery for every $S_y/D$, resulting in $\langle u\rangle/U_0$ of about 0.9 to 0.95 at one rotor-diameter upstream of the turbines ($x/D$ = 8 and 17). It is clearly seen that the wakes in the higher blockage cases (AL-9x4 and AL-9x3) have a lower recovery rate in the near-wake region but a higher recovery rate in the far-wake region. The same trend can be observed in the smallest streamwise spacing cases ($S_x/D$ = 6), where the streamwise velocities upstream of the turbines are even lower due to the lack of space for the wake to recover.
Overall, for aligned arrays we may conclude that $S_x/D$ is the main design parameter determining how much the wake recovers before approaching the downstream turbine, with $S_y/D$ contributing to a lesser extent by changing the turbulent wake characteristics.

\begin{figure} \centerline{\includegraphics[width=.99\linewidth]{Fig_Ucentreline_LES.png}} \caption{Distribution of time-averaged streamwise velocities ($\langle u\rangle/U_0$) along the wake centre-line at hub height for aligned (left) and staggered (right) configurations with $S_x/D$ = 36 (top), 18 (mid-top), 9 (mid-bottom) and 6 (bottom), and $S_y/D$ = 12 (black), 6 (blue), 4 (green) and 3 (orange).} \label{fig:les_Ucentreline} \end{figure}

In staggered arrays with $S_x/D$ = 18, the centre-line mean velocity notably increases at around $x/D$ = 18, which coincides with the location of the second row of turbines, shifted laterally with respect to the first row. This flow acceleration is due to the local blockage caused by the turbines in the second row, and is thus more noticeable when the lateral spacing $S_y/D$ is small. For example, for ST-18x3 and ST-18x4 the maximum velocity is about 1.1$U_0$, whereas for ST-18x6 and ST-18x12 it is about 1.02$U_0$. This agrees qualitatively with the theoretical analysis presented earlier in \S\ref{sub:theory_e}, i.e., the local flow velocity upstream of each turbine may exceed the cross-sectional average velocity in staggered arrays. Reducing the streamwise spacing $S_x/D$ makes this additional local flow acceleration less noticeable, but the rate at which the wake velocity recovers still varies with $S_y/D$, being higher for the higher blockage cases with $S_y/D$ = 4 and 3. Another distinct feature of time-averaged wakes in staggered arrays is that, after passing the streamwise location of the second row (i.e., $x \geq S_x$), the centre-line mean velocity remains fairly constant until about two rotor-diameters upstream of the next turbine.
Finally, comparing aligned and staggered cases for a given streamwise spacing, we can confirm that the velocity recovery rate in the near-wake region is higher in the former than in the latter. This is due to stronger wake mixing enhanced by the wake meandering, as shown earlier.

\subsection{Array efficiency} \label{sec:farmefficiency}

Having established the impact of array layout and turbine spacing on the flow field, we now focus on the thrust and power coefficients of the turbines to analyse their efficiency in the simulated infinitely large tidal arrays. For each configuration, time-averaged hydrodynamic coefficients are further averaged over all turbines comprising the array, and the results are presented in figure \ref{fig:les_cxcp} with error bars indicating the standard deviation of array-averaged values. For aligned layouts with a given $S_y/D$, both thrust and power coefficients tend to attain their maximum values at $S_x/D$ = 36, but the results at $S_x/D$ = 18 are similar, as this streamwise spacing is still large enough for the wakes to almost fully recover. However, when $S_x/D$ is further reduced, both $C_T$ and $C_P$ drop considerably, in line with the decrease in the mean streamwise velocity upstream of each turbine shown earlier in figure \ref{fig:les_Ucentreline}. In contrast, the performance of turbines in staggered configurations appears less sensitive to $S_x/D$.

\begin{figure} \centerline{\includegraphics[width=.9\linewidth]{Fig_CxCp_LES_bars.pdf}} \caption{Results of $C_T$ (top) and $C_P$ (bottom) obtained from the LES for the aligned ($\square$) and staggered ($\bigcirc$) layouts.} \label{fig:les_cxcp} \end{figure}

Comparing arrays with large lateral spacing ($S_y/D$ = 12), staggered configurations consistently provide higher $C_T$ and $C_P$ values than their aligned counterparts, with larger differences for shorter streamwise spacing.
For cases with $S_y/D$ = 6 and 4, staggered configurations still tend to provide higher $C_T$ and $C_P$ than their aligned counterparts, although the differences are negligibly small at $S_x/D$ = 18. Further reducing the lateral spacing to $S_y/D$ = 3 leads to a higher $C_T$ in the aligned case than in the staggered case at $S_x/D$ = 18, but again $C_P$ values are consistently higher in the staggered cases. These results suggest that, as expected, staggered configurations are in general more efficient than their aligned counterparts. Next, we analyse the temporal fluctuations of thrust and power through their root-mean-square (rms) values. The results averaged over all turbines comprising the array are presented in figure \ref{fig:les_rmscxcp}, which shows that decreasing $S_x/D$ leads to an increase in the fluctuations of both thrust and power, as expected from the enhanced unsteadiness of the approaching flow. Adopting a larger lateral spacing also increases the load fluctuations compared to cases with the same streamwise spacing and a smaller lateral spacing. This is in line with the strength of wake oscillations increasing with $S_y/D$, as shown earlier in figure \ref{fig:les_cont_instvel}. It is also worth noting that turbines in aligned configurations tend to experience larger load variations than their staggered counterparts, especially as $S_x/D$ decreases. These results suggest that staggered configurations not only enhance the array's efficiency (i.e., mean power coefficient) but can also reduce the load fluctuations and thus the long-term fatigue damage of turbine rotors.
\begin{figure} \centerline{\includegraphics[width=.95\linewidth]{Fig_rmsCxCp_LES.pdf}} \caption{Results of root-mean-squared temporal fluctuations of $C_T$ (left) and $C_P$ (right) obtained from the LES of the considered array layouts.} \label{fig:les_rmscxcp} \end{figure}

\section{Discussion}\label{sec:discussion}

The layout of turbines in a large tidal array has important implications for its power generation capability. This is because the flow through a large array is characterised by complex wake-turbine interactions involving the effects of local blockage and wake mixing, both of which are functions of the layout of turbines. In this study we aimed to understand the impact of turbine layout on infinitely large tidal arrays, using a new theoretical model based on the LMADT and high-fidelity LES-ALM, focusing on how the array efficiency changes depending on the turbine resistance, the local blockage ratio within each row of turbines ($B$ in the theoretical analysis, which decreases with increasing $S_y$ in the LES, i.e. $B \propto S_y^{-1}$) and the completeness of wake mixing between each row ($m$ in the theoretical analysis, which increases with $S_x$ in the LES). It should be borne in mind that the theoretical model predicts overall trends and upper-bound estimates of the array efficiency at almost negligible computational cost, whereas the LES-ALM resolves the details of the complex turbulent flow field within the array but at a high computational expense. Thus, these are two highly contrasting approaches to this fluid flow problem. Before comparing and further discussing the theoretical and LES-ALM results, we emphasise two key differences between these two approaches: (i) In the LES-ALM the operating point of the turbines (i.e., rotational speed) is kept constant for all cases, resulting in slightly different $K$ values, as presented in table \ref{table:setup}.
No further tuning of the rotational speed to obtain a constant $K$ (to match the theoretical analysis) is performed, due to the high computational cost this would require. Alternatively, a fairer comparison with the theoretical analysis could be made using LES with an Actuator Disc Model (ADM), but we adopted an ALM in this study as it allows a more realistic representation of the turbulent flow field within the array. Care must therefore be taken when comparing the theoretical results for a constant $K$ value with scattered LES results with slightly different $K$ values. (ii) The current theoretical model does not provide an explicit relationship between the mixing factor $m$ and the array configuration, whereas the LES-ALM automatically predicts the wake recovery rate for a given configuration by resolving the turbulent flow field. To make a direct comparison, here the mixing factor in the theoretical analysis is assumed to be $m = 1 - (S_x/D)^{-1}$, which is arguably the simplest model relating $m$ to $S_x/D$ without knowing any detailed characteristics of turbine wake mixing for a given array configuration a priori. We now compare array efficiency predictions between the theoretical analysis (assuming $K$ = 2) and LES-ALM in figure \ref{fig:comparison_CP_LxD}. Since the theoretical $C_P$ values are for ideal turbines (or actuator discs) and are not directly comparable to $C_P$ values for real rotors, here we normalise $C_P$ with a reference power coefficient $C_{P_{ref}}$, which is defined for the theoretical analysis and LES-ALM separately. For the LES-ALM, $C_{P_{ref}}$ is the power coefficient obtained for the sparsest array (AL-36x12), in which turbine-to-turbine interactions are deemed negligible, whereas for the theoretical analysis, it is the power coefficient for $B=0.033$, $K=2$ and $m=1$.
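The assumed mixing-factor closure and the $C_{P_{ref}}$ normalisation used for the LES results can be illustrated numerically (a minimal sketch; the comparison against the quoted $\Delta C_P$ holds only to three decimals, as the tabulated coefficients are rounded):

```python
def mixing_factor(Sx_over_D):
    """Assumed closure m = 1 - (S_x/D)^(-1): mixing completeness grows with spacing."""
    return 1.0 - 1.0 / Sx_over_D

for s in (36, 18, 9, 6):
    print(s, round(mixing_factor(s), 3))
# S_x/D = 36, 18, 9, 6 -> m = 0.972, 0.944, 0.889, 0.833

# LES normalisation: C_P of each case divided by that of the sparsest
# array AL-36x12, e.g. case AL-18x12 from the table of LES cases:
CP_ref = 0.287
print(round(0.277 / CP_ref, 3))   # 0.965, matching the quoted Delta C_P
```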
Hence, $C_P/C_{P_{ref}}$ plotted in figure \ref{fig:comparison_CP_LxD} represents the relative change in $C_P$ due to the effect of different turbine layouts. Overall, there is a qualitative agreement in $C_P/C_{P_{ref}}$ between the two approaches. In aligned arrays, the efficiency monotonically decreases as $S_x/D$ is reduced (except when $1 - (S_x/D)^{-1}$ is close to unity and $S_y/D \leq$ 6), as turbines increasingly operate in the wake of upstream turbines. The rate of decrease in $C_P/C_{P_{ref}}$ differs between the theoretical and LES results, largely due to the simple relationship between $m$ and $S_x/D$ assumed to make this comparison. It is therefore expected that the agreement would improve if the relationship between $m$ and $S_x/D$ were modelled more accurately in future work. For staggered arrays, again the theoretical predictions agree qualitatively with the LES, showing that $C_P/C_{P_{ref}}$ is insensitive to the streamwise spacing, at least within the range of conditions considered here. However, the effect of lateral spacing $S_y/D$ is slightly over-predicted by the theoretical model compared to the LES.

\begin{figure} \centerline{\includegraphics[width=.95\linewidth]{Fig_LMADT_LES_Cp_LxD_final.png}} \caption{Comparison of normalised power coefficient ($C_P$/$C_{P_{ref}}$) predicted with the theory (assuming $K=2$ and $m = 1 - (S_x/D)^{-1}$) and computed from the LES.} \label{fig:comparison_CP_LxD} \end{figure}

The above comparison of $C_P/C_{P_{ref}}$ suggests that the new theoretical tidal array model is promising, as it seems to capture the combined effects of local blockage and wake mixing qualitatively correctly, despite not accounting for some key transient flow phenomena, such as blade tip vortices and wake meandering, which are well captured in the LES-ALM.
As the efficiency of turbines in a large array is determined mainly by the time-averaged flow field within the array, further improvements of the theoretical model could be made in future studies using LES. In particular, further results of LES-ALM for a wider range of parameters would allow us to empirically model the mixing factor $m$ as a function of both $S_x/D$ and $S_y/D$, where it would be important to account for the dependency of wake meandering and other transient flow phenomena (that collectively determine the wake recovery rate) on the layout of turbines. It should be remembered, however, that the efficiency of real tidal arrays would depend not only on micro-scale flow interactions within the array, namely the turbine-to-turbine interactions studied here, but also on macro-scale flow interactions outside the array \citep{Vennell2012,Vennell2015,Gupta2017}. \section{Conclusions}\label{sec:conclusion} This paper has investigated the performance of tidal stream turbines in an infinitely large array, using two different approaches: a quasi-one-dimensional theoretical model based on the Linear Momentum Actuator Disc Theory (LMADT), and Large-Eddy Simulation with an Actuator Line Method (LES-ALM). Two different types of turbine layouts were considered in this study, i.e., perfectly aligned and staggered layouts. For the LES-ALM, 28 different arrays with various streamwise ($6 \leq S_x/D \leq 36$) and lateral ($3 \leq S_y/D \leq 12$) turbine spacing were considered. For the theoretical analysis, a hybrid inviscid-viscous approach was employed to model an infinitely large array using only three input parameters, namely the local blockage ratio $B$, disc resistance coefficient $K$ and wake mixing factor $m$. Our LES results have shown that the lateral spacing has a pronounced effect on the characteristics of wake meandering. In particular, the amplitude of wake meandering is found to decrease as $S_y/D$ is reduced. 
The main consequence of this change in wake dynamics is that a lower amplitude of wake meandering leads to less entrainment of momentum from the surrounding bypass flow into the wake, and thus a lower wake recovery rate. However, this negative effect of small lateral spacing on the wake recovery rate is observed mainly in the near-wake region, and the completeness of wake recovery (i.e., how much the wake velocity has recovered before the wake approaches the next turbine) tends to depend more on the streamwise spacing, especially for aligned arrays. We have also confirmed from our LES results that, in staggered arrays, the wake experiences an additional acceleration when it passes through the laterally shifted row of turbines immediately downstream. This additional acceleration is due to the effect of local blockage, which is enhanced when the lateral spacing is small. When $S_y/D$ is sufficiently small, the centre-line wake velocity is found to even exceed the bulk velocity, resulting in a high turbine power for a fixed bulk velocity. Resolving the turbulent flow field with LES has also allowed us to study the temporal fluctuations of turbine loads in a large tidal array, showing that these are approximately twice as large in aligned arrays as in staggered arrays for a given turbine spacing. Whilst the LES results have revealed the complexity of the turbulent flow phenomena that collectively determine the performance of turbines in a large tidal array, the simple theoretical model has captured the basic trend of turbine performance in both aligned and staggered arrays qualitatively correctly. In particular, the theoretical model suggests that there is an optimal streamwise spacing to maximise the performance of turbines (or the power of turbines for a fixed bulk velocity) in a large staggered array.
This optimum exists because the local flow velocity upstream of each turbine can exceed the bulk velocity only when the streamwise spacing is reasonably (but not excessively) large, such that the mixing between locally faster and slower flows is largely (but not entirely) completed within that streamwise distance. However, both theoretical and LES results show that, at least within the range of conditions tested, the effect of streamwise spacing is smaller than that of lateral spacing, i.e., the performance of turbines in staggered arrays depends more on $S_y/D$ than on $S_x/D$. We have also observed some quantitative differences between the theoretical predictions and the LES. One of the main causes of these differences is that, to make a comparison between the two approaches, we have assumed a simple relationship between the mixing factor $m$ (representing the completeness of mixing after each row of turbines) and the streamwise spacing between rows. Further LES results for a wider range of parameters would be helpful in future studies to develop an empirical model of $m$ as a function of both streamwise and lateral spacing. The results obtained in this study will help to understand and improve the performance of tidal turbines in future large tidal arrays. It should be borne in mind, however, that the performance of real tidal arrays may depend not only on micro-scale (turbine-to-turbine) flow interactions within the array but also on macro-scale flow interactions between the array and the tidal-channel flow, the latter of which was outside the scope of this study.

\section*{Declaration of interests}

The authors report no conflict of interest.

\section*{Acknowledgements}

This research has been partially funded by the UK's Engineering and Physical Sciences Research Council (EPSRC) (grant number EP/R51150X/1).
The first author would like to acknowledge the support of the Supercomputing Wales project, which is partially funded by the European Regional Development Fund (ERDF) via the Welsh Government, and the Isambard project, funded by the EPSRC (EP/P020224/1), the GW4 alliance, the Met Office, Cray and Arm. This work also used the ARCHER UK National Supercomputing Service (http://www.archer.ac.uk). The second author would like to thank Dr Scott Draper for useful discussions on LMADT.

\section*{Appendix A: Large-eddy simulation code DOFAS}\label{app:DOFAS}

We used the Digital Offshore Farms Simulator (DOFAS) \citep{Ouro2019JFS}, an in-house LES code fully parallelised with the Message Passing Interface (MPI), which also features a hybrid MPI/OpenMP scheme to maximise its computational performance \citep{Ouro2019CAF}. In DOFAS, the spatial domain is divided into rectangular sub-domains and discretised using Cartesian grids with staggered storage of velocities, i.e. velocity components are computed at the cell faces whilst pressure and scalar values are calculated at the cell centres. This scheme allows an even subdivision of the computational region into sub-domains, enabling the simulations to be performed efficiently in parallel.
The governing equations solved in DOFAS are the spatially filtered incompressible Navier--Stokes equations: \begin{eqnarray} && \frac{\partial u_i}{\partial x_i} = 0, \\ && \frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho} \frac{\partial p}{\partial x_i} + (\nu + \nu_t) \frac{\partial^2 u_i}{\partial x_j^2} + f_t + \Pi_i, \label{eq:ns2} \end{eqnarray} where $u_i = (u,v,w)^T$ is the vector of spatially filtered velocities, $x_i = (x,y,z)^T$ is the coordinate vector, $\rho$ denotes the fluid density, $p$ is the relative pressure, $\nu$ is the kinematic viscosity of the fluid, and $f_t$ is a source term resulting from the actuator line method and immersed boundary forcing used for the representation of turbine rotors and nacelles, respectively. $\Pi_i$ is a source term representing the driving pressure gradient, responsible for keeping a constant flow rate when periodic boundary conditions are used in the streamwise direction. The eddy viscosity $\nu_t$ is calculated using the Wall-Adapting Local Eddy-viscosity (WALE) sub-grid scale model of \citet{Nicoud1999}. The velocity field is spatially discretised using a fourth-order central-difference scheme. Simulations are advanced in time using a fractional-step method, with a three-step low-storage Runge-Kutta scheme to obtain the non-solenoidal velocity field by explicitly computing the convection and diffusion terms, which is then corrected after the Poisson pressure equation is solved using an efficient multigrid solver \citep{Cevheri2016}. For the representation of solid bodies, DOFAS adopts a discrete direct-forcing Immersed Boundary Method (IBM) with pointwise interpolating delta functions, which has been validated in studies including tidal turbines \citep{Ouro2017JFS,Ouro2018FTC}, geophysical flows \citep{Ouro2017POF}, rough open-channel flows \citep{Stoesser2010,Bomminayuni2011,Nikora2019JFM}, and fluid-structure interaction \citep{Kara2015b,Ouro2019PRF}.
In the present work, the IBM is adopted to represent the turbine nacelles, using the $\phi_4$ delta function for the interpolation procedures. Turbine rotors are represented using an Actuator Line Model (ALM), validated in \citet{Ouro2019JFS}, which discretises the blades into a set of $N_L$ points evenly spaced as a function of the mesh resolution. The ALM has been proven to provide an adequate description of the wake dynamics of wind and tidal turbines \citep{Breton2017}. In this study we set turbine rotors to rotate at a constant speed, $\Omega$, and use prescribed lift and drag coefficients of hydrofoils, tabulated for a range of angles of attack, to obtain the lift and drag forces at every point comprising the turbine blades. From the drag and lift forces we calculate the thrust force $T$ and tangential force $Q$, and thus determine the generated power $P=Q \Omega$. Finally, the coefficients of thrust ($C_T$) and power ($C_P$) are computed as \begin{eqnarray} && C_T = \frac{T}{\frac{1}{2}\rho\pi (D/2)^2 U_0^2} \label{eq:CT}\\ && C_P = \frac{P}{\frac{1}{2}\rho\pi (D/2)^2 U_0^3} \label{eq:CP} \end{eqnarray} After the hydrodynamic forces are computed from the ALM at each time step, the force exerted by every Lagrangian point comprising the turbine rotors (as well as the force computed from the IBM for the nacelles) is transferred back to the fluid grid to correct the Eulerian velocity field. This interpolation is performed using an isotropic Gaussian projection \citep{Shen2005}, \begin{equation} f_{L_{ALM}} (x_i) = \frac{1}{\varepsilon^3 \pi^{3/2}} \exp \left( - \frac{r_L^2}{\varepsilon^2} \right) \end{equation} where $r_L$ denotes the radial distance between the marker $L$ and the considered cell face $i$, and $\varepsilon$ is the interpolation stencil width, set to $3.0\Delta x_i$. A Prandtl-type tip-loss correction is applied to the ALM forcing near the blade tip, as a function of the number of blades and the tip-speed ratio \citep{Shen2005}.
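To make the coefficient definitions and the projection kernel concrete, the following minimal Python sketch evaluates equations (\ref{eq:CT})--(\ref{eq:CP}) and the isotropic Gaussian weight; the function names are illustrative and are not part of DOFAS.

```python
import math

def thrust_coefficient(T, rho, D, U0):
    # C_T = T / (0.5 * rho * pi * (D/2)^2 * U0^2)
    return T / (0.5 * rho * math.pi * (D / 2)**2 * U0**2)

def power_coefficient(Q, omega, rho, D, U0):
    # C_P = P / (0.5 * rho * pi * (D/2)^2 * U0^3), with P = Q * Omega
    return (Q * omega) / (0.5 * rho * math.pi * (D / 2)**2 * U0**3)

def gaussian_projection(r_L, eps):
    # Isotropic Gaussian kernel spreading the force of a Lagrangian marker
    # onto a fluid cell at radial distance r_L; eps is the stencil width
    # (set to three grid spacings in the paper)
    return math.exp(-(r_L / eps)**2) / (eps**3 * math.pi**1.5)
```

Note that both coefficients are normalised by the same swept-area dynamic pressure term, differing only in the power of $U_0$.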
\bibliographystyle{jfm}
\section{Introduction} Interprocedural data flow analysis incorporates the effects of procedure calls on the callers and callees. A context-insensitive analysis does not distinguish between distinct calls to a procedure. This causes the propagation of data flow values across interprocedurally invalid paths (i.e. paths in which calls and returns may not match), resulting in a loss of precision. A context-sensitive analysis restricts the propagation to valid paths and hence is more precise. The two most general methods of precise flow and context-sensitive analysis are the {\em Functional\/} approach and the {\em Call Strings\/} approach~\cite{call-strings}. The functional approach constructs summary flow functions for procedures by reducing compositions and meets of flow functions of individual statements to a single flow function, which is used directly at call statements. However, constructing summary flow functions may not be possible in general. The tabulation method of the functional approach overcomes this restriction by enumerating the functions as pairs of input-output data flow values for each procedure, but requires a finite lattice. The call strings method remembers calling contexts in terms of unfinished calls as call strings. However, it requires an exponentially large number of call strings. The technique of value-based termination of call string construction~\cite{cs-vbt} uses data flow values to restrict the combinatorial explosion of contexts and improves the efficiency significantly without any loss of precision. Graph reachability based interprocedural analysis~\cite{ifds,ths-ide} is a special case of the functional approach. Formally, it requires flow functions \text{$2^A\mapsto 2^A$} to distribute over the meet operation so that they can be decomposed into meets of flow functions \text{$A\mapsto A$}.
Here $A$ can be either a finite set $D$ (for IFDS problems~\cite{ifds}) or a mapping \text{$D\mapsto L$} (for IDE problems~\cite{ths-ide}) from a finite set $D$ to a lattice of values $L$. Intuitively, $A$ represents a node in the graph and a function \text{$A\mapsto A$} decides the nature of the edge from the node representing the argument to the node representing the result. Flow function composition then reduces to a transitive closure of the edges, resulting in paths in the graph. The efficiency and precision of interprocedural analyses are heavily affected by the precision of the underlying call graph. This is especially important for object-oriented languages like Java, where virtual method invocations cause an explosion of spurious call edges if the call graph is constructed naively. Soot~\cite{soot} has been a stable and popular choice for hundreds of client analyses for Java programs, though it has traditionally lacked an interprocedural framework. Bodden \cite{ifds-soot} has recently implemented support for interprocedural analysis using graph reachability. The main limitation of this approach is that it is not suitable for general data flow frameworks with non-distributive flow functions, such as heap reference analysis or points-to analysis. For example, consider the Java statement \texttt{x = y.n} to be processed for points-to analysis. If we have points-to edges \text{$y \rightarrow o_1$} and \text{$o_1.n \rightarrow o_2$} before the statement (where $o_1$ and $o_2$ are heap objects), then it is not possible to correctly deduce that the edge \text{$x \rightarrow o_2$} should be generated after the statement if we consider each input edge independently. The flow function for this statement is a function of the points-to graph as a whole and cannot be decomposed into independent functions of each edge and then merged to get a correct result.
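As an illustrative sketch (not the actual Soot code), the flow function for \texttt{x = y.n} can be written over the whole points-to graph. The output edge for \texttt{x} depends on a pair of input edges taken together, which is why the function cannot be decomposed into the independent per-edge functions that IFDS/IDE requires:

```python
def load_field(pts, x, y, field):
    """Flow function for the statement `x = y.field` over a points-to graph.

    `pts` maps a variable to a set of abstract objects, and an
    (object, field) pair to the set of objects that field may reference.
    """
    out = {k: set(v) for k, v in pts.items()}
    out[x] = set()
    for o in pts.get(y, set()):                # edge  y -> o
        out[x] |= pts.get((o, field), set())   # edge  o.field -> o'
    return out
```

With edges $y \rightarrow o_1$ and $o_1.n \rightarrow o_2$ in the input, the result contains $x \rightarrow o_2$; dropping either input edge loses that conclusion.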
We have implemented a generic framework for performing flow and context-sensitive interprocedural data flow analysis that does not require flow functions to be distributive. However, the flow functions must be monotonic and the lattice of data flow values must be finite. The framework uses value-based contexts and is an adaptation of the tabulation method of the functional approach and the modified call strings method. Our implementation is agnostic to any analysis toolkit or intermediate representation, as it is parameterized using generic types. Since our core classes are similar to the intraprocedural framework of Soot, it integrates with Soot's Jimple IR seamlessly. We have instantiated our framework with a flow and context-sensitive points-to analysis in Soot, which enables the construction of call graphs that are far more precise than those constructed by Soot's \textsc{spark} engine. The rest of the paper is organized as follows: Section~\ref{sec-background} describes our method. Section~\ref{sec-framework} outlines the API of our implementation framework. Section~\ref{sec-results} presents the results of call graph construction. Finally, Section~\ref{sec-conclusion} concludes the paper by describing the current status and future possibilities of our work. \section{Interprocedural Analysis Using Value Contexts} \label{sec-background} The tabulation method of the functional approach~\cite{call-strings} and the modified call strings approach~\cite{cs-vbt} both revolve around the same key idea: if two or more calls to a procedure $p$ have the same data flow value (say $x$) at the entry of $p$, then all of them will have an identical data flow value (say $y$) at the exit of $p$. The tabulation method uses this idea to enumerate flow functions in terms of pairs of input-output values \text{$(x,y)$}, whereas the modified call strings method uses it to partition call strings based on input values, reducing the number of call strings significantly.
The two methods lead to an important conclusion: using data flow values as contexts of analysis can avoid re-analysis of procedure bodies. We make this idea explicit by defining a \textit{value context} \text{$X = \langle \text{\sf\em method}\xspace, \text{\sf\em entryValue}\xspace\rangle$}, where \text{\sf\em entryValue}\xspace is the data flow value at the entry to a procedure \text{\sf\em method}\xspace. Additionally, we define a mapping \text{$\text{\sf\em exitValue}\xspace(X)$} which gives the data flow value at the exit of \text{\sf\em method}\xspace. As data flow analysis is an iterative process, this mapping may change over time (although it will follow a descending chain in the lattice). The new value is propagated to all callers of \text{\sf\em method}\xspace if and when this mapping changes. With this arrangement, intraprocedural analysis can be performed for each value context independently, handling flow functions in the usual way; only procedure calls need special treatment. Although the number of value contexts created per procedure is theoretically proportional to the size of the lattice in the worst case, we have found that in practice the number of distinct data flow values reaching each procedure is often very small. This is especially true for heap-based analyses that use bounded abstractions, due to the locality of references in recursive paths. This claim is validated in Section~\ref{sec-results}, in which we present the results of a points-to analysis.
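The role of a value context as a memoisation key can be sketched as follows (illustrative Python, not the actual framework API): contexts are cached by the pair $\langle$method, entry value$\rangle$, so a later call that produces an already-seen entry value reuses the cached context instead of triggering re-analysis of the procedure body.

```python
class ValueContext:
    """A <method, entryValue> pair; exit_value starts at TOP and is
    refined monotonically (along a descending chain) as analysis converges."""
    def __init__(self, method, entry_value, top):
        self.method = method
        self.entry_value = entry_value
        self.exit_value = top

_contexts = {}  # (method, entry value) -> ValueContext

def get_or_create_context(method, entry_value, top=None):
    key = (method, entry_value)   # entry_value must be hashable
    if key not in _contexts:
        # a genuinely new context: its body would now be scheduled for analysis
        _contexts[key] = ValueContext(method, entry_value, top)
    return _contexts[key]
```

A cache hit returns the existing context (and its current exit value); only a miss requires analysing the procedure under the new entry value.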
\subsection*{Algorithm} \renewcommand*\Call[2]{\textproc{#1}(#2)} \begin{figure}[t] \begin{algorithmic}[1] \State \textbf{global} $contexts$, $transitions$, $worklist$ \Procedure{initContext}{$X$} \State $\Call{add}{contexts, X}$ \State Set $\Call{exitValue}{X} \gets \top$ \State Let $m \gets \Call{method}{X}$ \ForAll{nodes $n$ in the body of $m$} \State $\Call{add}{worklist, \langle X, n \rangle}$ \State Set $\Call{in}{X,n} \gets \top$ and $\Call{out}{X,n} \gets \top$ \EndFor \State Set $\Call{in}{X,\Call{entryNode}{m}} \gets \Call{entryValue}{X}$ \EndProcedure \Procedure{doAnalysis}{} \State $\Call{initContext}{\langle \texttt{main}, BI \rangle}$ \While{$worklist$ is not empty} \State Let $\langle X, n \rangle \gets \Call{removeNext}{worklist}$ \If{n is not the entry node} \State Set $\Call{in}{X,n} \gets \top$ \ForAll{predecessors $p$ of $n$} \State Set $\Call{in}{X,n} \gets \Call{in}{X,n} \sqcap \Call{out}{X,p}$ \EndFor \EndIf \State Let $a \gets \Call{in}{X,n}$ \If{$n$ contains a method call} \State Let $m \gets \Call{targetMethod}{n}$ \State Let $x \gets \Call{callEntryFlowFunction}{X, m, n, a}$ \State Let $X' \gets \langle m, x \rangle$ \Comment $x$ is the entry value at $m$ \State Add an edge $\langle X, n \rangle \rightarrow X'$ to $transitions$ \If{$X' \in contexts$} \State Let $y \gets \Call{exitValue}{X'}$ \State Let $b_1 \gets \Call{callExitFlowFunction}{X, m, n, y}$ \State Let $b_2 \gets \Call{callLocalFlowFunction}{X, n, a}$ \State Set $\Call{out}{X, n} \gets b_1 \sqcap b_2$ \Else \State $\Call{initContext}{X'}$ \EndIf \Else \State Set $\Call{out}{X,n}~\gets~\Call{normalFlowFunction}{X, n, a}$ \EndIf \If{$\Call{out}{X,n}$ has changed} \ForAll{successors $s$ of $n$} \State $\Call{add}{worklist, \langle X, s \rangle}$ \EndFor \EndIf \If{$n$ is the exit node} \State Set $\Call{exitValue}{X} \gets \Call{out}{X,n}$ \ForAll{edges $\langle X', c \rangle \rightarrow X$ in $transitions$} \State $\Call{add}{worklist, \langle X', c \rangle}$ \EndFor \EndIf 
\EndWhile \EndProcedure \end{algorithmic} \caption{Algorithm for performing inter-procedural analysis using value contexts.} \label{fig-algorithm} \end{figure} Figure~\ref{fig-algorithm} provides the overall algorithm. Line 1 declares three globals: a set of contexts that have been created, a transition table mapping a context and call site of a caller method to a target context at the called method and a work-list of context-parametrized control-flow graph nodes whose flow function has to be processed. \usetikzlibrary{backgrounds} \tikzstyle{cfg} = [name=#1, rectangle, draw, minimum width=50, minimum height=15, text centered] \begin{figure*}[!t] \begin{tabular}{@{}c|@{}c} \begin{tabular}{@{}c} \begin{tikzpicture}[node distance=0.7 and -0.5] \node[cfg=m1] {\tt main()}; \node[cfg=m2] [below=of m1, label=left:$n_1$] {\tt p = 5}; \node[cfg=m3] [below=of m2, label=left:$c_1$] {\tt q = f(p, -3)}; \node[cfg=m4] [below=of m3, label=left:$c_4$] {\tt r = g(-q)}; \node[cfg=m5] [below=of m4, label=left:$n_6$] {\tt exit}; \draw[->] (m1) to coordinate (m1-m2) (m2); \draw[->] (m2) to coordinate (m2-m3) (m3); \draw[->] (m3) to coordinate (m3-m4) (m4); \draw[->] (m4) to coordinate (m4-m5) (m5); \node[cfg=f1] [node distance=2.1, right=of m1] {\tt f(a, b)}; \node[cfg=f2] [below=of f1, label=left:$n_2$] {\tt if (...)}; \node[cfg=f3] [below left=of f2, label=left:$n_3$] {\tt c = a * b}; \node[cfg=f4] [below right=of f2, label=left:$c_2$] {\tt c = g(10)}; \node[cfg=f5] [below left=of f4, label=left:$n_4$] {}; \node[cfg=f6] [below=of f5, label=left:$n_5$] {\tt return c}; \draw[->] (f1) to coordinate (f1-f2) (f2); \draw[->] (f2) to coordinate (f2-f3) (f3); \draw[->] (f2) to coordinate (f2-f4) (f4); \draw[->] (f3) to coordinate (f3-f5) (f5); \draw[->] (f4) to coordinate (f4-f5) (f5); \draw[->] (f5) to coordinate (f5-f6) (f6); \node[cfg=g1] [node distance=2.1, right=of f1] {\tt g(u)}; \node[cfg=g2] [below=of g1, label=left:$c_3$] {\tt v = f(-u, u)}; \node[cfg=g3] [below=of g2, 
label=left:$n_6$] {\tt return v}; \draw[->] (g1) to coordinate (g1-g2) (g2); \draw[->] (g2) to coordinate (g2-g3) (g3); \begin{scope}[node distance=0.1, text width=1cm, font=\scriptsize] \node [node distance=0.65, left =of m1-m2] {$\langle X_0, \top \rangle$}; \node [node distance=0.65, left =of m2-m3] {$\langle X_0, p^+ \rangle$}; \node [node distance=0.65, left =of m3-m4] {$\langle X_0, p^+q^- \rangle$}; \node [node distance=0.65, left =of m4-m5] {$\langle X_0, p^+q^-r^- \rangle$}; \node [right=of f1-f2] {$\langle X_1, a^+b^- \rangle$ $\langle X_3, a^-b^+ \rangle$}; \node [node distance=0.8, left =of f2-f3] {$\langle X_1, a^+b^- \rangle$ $\langle X_3, a^-b^+ \rangle$}; \node [right=of f2-f4] {$\langle X_1, a^+b^- \rangle$ $\langle X_3, a^-b^+ \rangle$}; \node [node distance=0.8, left =of f3-f5] {$\langle X_1, a^+b^-c^- \rangle$ $\langle X_3, a^-b^+c^- \rangle$}; \node [right=of f4-f5] {$\langle X_1, a^+b^-c^- \rangle$ $\langle X_3, a^-b^+c^- \rangle$}; \node [right=of f5-f6] {$\langle X_1, a^+b^-c^- \rangle$ $\langle X_3, a^-b^+c^- \rangle$}; \node [right=of g1-g2] {$\langle X_2, u^+ \rangle$}; \node [right=of g2-g3] {$\langle X_2, u^+v^- \rangle$}; \end{scope} \end{tikzpicture} \vspace*{.25cm} \hspace*{.1cm} \\ (a) \ \ \parbox[t]{100mm}{Control flow graphs annotated with context-sensitive data flow values} \end{tabular} & \begin{tabular}{c} \begin{tabular}{c} \begin{tikzpicture}[node distance=0.3 and 0.3] \node[name=top] {$\top$}; \node[below left=of top, name=minus] {$-$}; \node[below=of top, name=zero] {$0$}; \node[below right=of top, name=plus] {$+$}; \node[below=of zero, name=bot] {$\bot$}; \draw (top) to (minus); \draw (top) to (zero); \draw (top) to (plus); \draw (minus) to (bot); \draw (zero) to (bot); \draw (plus) to (bot); \end{tikzpicture} \\ (b) \ \ Lattice for a single variable\rule[-.7em]{0em}{.9em} \end{tabular} \\ \hline\rule{0em}{3.75em} \hspace*{.1cm} \begin{tabular}{|c|c|c|c|} \hline Context & Proc. 
& Entry & Exit \\ \hline \rule{0em}{1em}% $X_0$ & \texttt{main} & $\top$ & $p^+q^-r^-$ \\ $X_1$ & \texttt{f} & $a^+b^-$ & $a^+b^-c^-$ \\ $X_2$ & \texttt{g} & $u^+$ & $u^+v^-$ \\ $X_3$ & \texttt{f} & $a^-b^+$ & $a^-b^+c^-$ \\ \hline \end{tabular} \\ (c) \ \ Value contexts for the program\rule[-.6em]{0em}{2em} \\ \hline \begin{tabular}{c} \begin{tikzpicture}[node distance=0.5] \node[name=X0] {$X_0$}; \node[name=X1, right=of X0] {$X_1$}; \node[name=X2, right=of X1] {$X_2$}; \node[name=X3, right=of X2] {$X_3$}; \draw[->] [bend left] (X0) to node [above] {$c_1$} (X1); \draw[->] [bend left] (X1) to node [above] {$c_2$} (X2); \draw[->] [bend left] (X2) to node [above] {$c_3$} (X3); \draw[->] [bend left] (X3) to node [below] {$c_2$} (X2); \draw[->] [bend right] (X0) to node [below] {$c_4$} (X2); \end{tikzpicture} \\ (d) \ \ Context transition diagram \end{tabular} \end{tabular} \end{tabular} \caption{A motivating example of a non-distributive sign-analysis performed on a program with mutually recursive procedures.} \label{fig-cfg} \end{figure*} The procedure \textsc{initContext} (lines 2-11) initializes a new context with a given method and entry value. The exit value is initialized to the $\top$ element. IN/OUT values at all nodes in the method body are also initialized to $\top$, with the exception of the method's entry node, whose IN value is initialized to the context's entry value. All nodes of this context are added to the work-list. The \textsc{doAnalysis} procedure (lines 12-51) first creates a value context for the \texttt{main} method with some boundary information (BI). Then, data flow analysis is performed using the traditional work-list method, but distinguishing between nodes of different contexts. A node is removed from the work-list and its IN value is set to the meet of the OUT values of its predecessors (lines 16-21). For nodes without a method call, the OUT value is computed using the normal flow function (line 37). 
For call nodes, parameter passing is handled by a call-entry flow function that takes as input the IN value at the node, and the result of which is used as the entry value at the callee context (lines 24-26). The transition from caller context and call-site to callee context is also recorded (line 27). If a context with the target method and computed entry value has not been previously created, then it is initialized now (line 34). Otherwise, the exit value of the target context is used as the input to a call-exit flow function, to handle returned values. A separate call-local flow function takes as input the IN value at the call node, and propagates information about local variables. The results of these two functions are merged into the OUT value of the call node (lines 29-32). Once a node is processed, its successors are added to the work-list if its OUT value has changed in this iteration (lines 39-43). If the node is the exit of its procedure (lines 44-49), then the exit value of its context is set and all its callers are re-added to the work-list. The termination of the algorithm follows from the monotonicity of flow functions and the finiteness of the lattice (which bounds the descending chain as well as the number of value contexts). The algorithm can easily be extended to handle multiple entry/exit points per procedure as well as virtual method calls by merging data flow values across these multiple paths. It can also be easily adapted for backward data flow analyses. \subsection*{Example} \begin{comment} Figure~\ref{fig-cfg} illustrates our algorithm for a program containing two mutually recursive procedures (\texttt{f} and \texttt{g}) and starting at \texttt{main}. The example shows the results of a hypothetical \emph{sign analysis}, which tries to determine whether a scalar local variable is negative, positive or zero. 
\end{comment} Consider the program in Figure~\ref{fig-cfg}~(a), for which we wish to perform a simplified \emph{sign analysis}, to determine whether a scalar local variable is negative, positive or zero. The call from \texttt{main} to \texttt{f} at $c_1$ will only return when the mutual recursion of \texttt{f} and \texttt{g} terminates, which happens along the program path $n_2n_3n_4n_5$. Notice that the arguments to \texttt{f} at call-site $c_3$ are always of opposite signs, causing the value of variable $c$ to be negative after every execution of $n_3$ in this context. Thus, \texttt{f}, and hence \texttt{g}, always returns a negative value. To compute this result using the algorithm described above, we use data flow values that are elements of the lattice in Figure~\ref{fig-cfg}~(b), where $\top$ indicates an uninitialized variable and $\bot$ is the conservative assumption. We use superscripts to map variables to a sign or $\bot$, and omit uninitialized variables. At the start of the program no variables are initialized and hence the analysis starts with the initial value context \text{$X_0 = \langle \texttt{main}, \top \rangle$}. For work-list removal, we will use lexicographical ordering of contexts (newer first) before nodes (reverse post-order). The flow function of $\langle X_0, n_1 \rangle$ is processed first, which makes $p$ positive (written as $p^+$). The next node picked from the work-list is $c_1$, whose call-entry flow function passes one positive and one negative argument to parameters $a$ and $b$ of procedure \texttt{f} respectively. Thus, a new value context \text{$X_1 = \langle \texttt{f}, a^+b^- \rangle$} is created and the transition $\langle X_0, c_1 \rangle \rightarrow X_1$ is recorded. Analysis proceeds by processing $\langle X_1, n_2 \rangle$ and then $\langle X_1, c_2 \rangle$, which creates a new value context $X_2 = \langle \texttt{g}, u^+ \rangle$ due to the positive argument. The transition $\langle X_1, c_2 \rangle \rightarrow X_2$ is recorded.
When $\langle X_2, c_3 \rangle$ is processed, the arguments to \texttt{f} are found to be negative and positive respectively, creating a new value context $X_3 = \langle \texttt{f}, a^-b^+ \rangle$ and a transition $\langle X_2, c_3 \rangle \rightarrow X_3$. The work-list now picks nodes of context $X_3$, and when $\langle X_3, c_2 \rangle$ is processed, the entry value at \texttt{g} is $u^+$, for which a value context already exists -- namely $X_2$. The transition \text{$\langle X_3, c_2 \rangle \rightarrow X_2$} is recorded. The exit value of $X_2$ is at the moment $\top$ because its exit node has not been processed. Hence, the call-exit flow function determines the returned value to be uninitialized and the OUT of $\langle X_3, c_2 \rangle$ gets the value $a^-b^+$. The next node to be processed is $\langle X_3, n_3 \rangle$, whose flow function computes the sign of $c$ to be negative as it is the product of a negative and positive value. The IN value at $\langle X_3, n_4 \rangle$ is $(a^-b^+c^- \sqcap a^-b^+) = a^-b^+c^-$. Thus, the sign of the returned variable $c$ is found to be negative, and this value propagates to the exit node $n_5$. As $n_5$ is the exit node of procedure \texttt{f}, the callers of $X_3$ are looked up in the transition table and added to the work-list. \begin{figure*}[!t] \begin{center} \includegraphics[scale=0.6]{classes.pdf} \end{center} \caption{The class diagram of our generic interprocedural analysis framework.} \label{fig-ipa-classes} \end{figure*} The only caller $\langle X_2, c_3 \rangle$ is now re-processed, this time resulting in a hit for an existing target context $X_3$. The exit value of $X_3$ being $a^-b^+c^-$, the returned variable $v$ gets a negative sign, which propagates to the exit node $n_6$. The callers of $X_2$, namely $\langle X_1, c_2 \rangle$ and $\langle X_3, c_2 \rangle$, are re-added to the work-list.
$\langle X_3, c_2 \rangle$ is processed next, and this time the correct exit value of target context $X_2$, which is $u^+v^-$, is used and the OUT of $\langle X_3, c_2 \rangle$ is set to $a^-b^+c^-$. When its successor $\langle X_3, n_4 \rangle$ is subsequently processed, the OUT value does not change and hence no more nodes of $X_3$ are added to the work-list. Analysis continues with nodes of $X_1$ on the work-list, such as $\langle X_1, c_2 \rangle$ and $\langle X_1, n_3 \rangle$. The sign of $c$ is determined to be negative and this propagates to the end of the procedure. When exit node $\langle X_1, n_5 \rangle$ is processed, the caller of $X_1$, namely $\langle X_0, c_1 \rangle$, is re-added to the work-list. Now, when this node is processed, $q$ is found to be negative. Value-based contexts are useful not only for terminating the analysis of recursive procedures, as shown above, but also as a simple \emph{cache} table for distinct call sites. For example, when $\langle X_0, c_4 \rangle$ is processed, the positive argument results in a hit for $X_2$, and thus its exit value is simply re-used to determine that $r$ is negative. Figure~\ref{fig-cfg}~(c) lists the value contexts for the program and Figure~\ref{fig-cfg}~(d) shows the transitions between contexts at call-sites. A context-insensitive analysis would have merged the signs of $a$ and $b$ across all calls to \texttt{f} and would have resulted in a $\bot$ value for the signs of $c$, $v$, $q$ and $r$. Our context-sensitive method ensures a precise data flow solution even in the presence of recursion. Notice that the flow function for $n_3$ is non-distributive, since $f_{n_3}(a^+b^-) \sqcap f_{n_3}(a^-b^+) = a^+b^-c^- \sqcap a^-b^+c^- = a^{\bot}b^{\bot}c^-$ but $f_{n_3}(a^+b^- \sqcap a^-b^+) = f_{n_3}(a^{\bot}b^{\bot}) = a^{\bot}b^{\bot}c^{\bot}$. Hence this problem does not fit in the IFDS/IDE framework, but such flow functions pose no problem to our algorithm.
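The non-distributivity of $f_{n_3}$ can be checked mechanically. The following small Python sketch (illustrative only) encodes the single-variable sign lattice of Figure~\ref{fig-cfg}~(b) and the flow function of \texttt{c = a * b}:

```python
TOP, BOT = 'T', '_|_'   # lattice: TOP above {-, 0, +}, which sit above BOT

def meet(a, b):
    """Greatest lower bound of two signs in the lattice."""
    if a == TOP: return b
    if b == TOP: return a
    return a if a == b else BOT

def sign_mul(a, b):
    """Abstract sign of a product."""
    if TOP in (a, b): return TOP   # an operand is uninitialized
    if BOT in (a, b): return BOT   # an operand's sign is unknown
    if '0' in (a, b): return '0'
    return '+' if a == b else '-'

def f_n3(env):
    """Flow function for the statement `c = a * b` (node n3)."""
    out = dict(env)
    out['c'] = sign_mul(env['a'], env['b'])
    return out
```

Applying $f_{n_3}$ to the two entry values separately and then meeting the results keeps $c^-$, whereas meeting the inputs first loses it, exactly as in the text.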
\section{Implementation Framework} \label{sec-framework} The implementation framework consists of a handful of core classes as shown in Figure~\ref{fig-ipa-classes}. The use of generic types makes the framework agnostic to any particular toolkit or IR. The classes are parameterized by three types: \texttt{M} represents the type of a method, \texttt{N} represents a node in the control flow graph and \texttt{A} is the type of data flow value used by the client analysis. The framework can be naturally instantiated for Soot using the type parameters \texttt{SootMethod} and \texttt{Unit} for \texttt{M} and \texttt{N} respectively. Users would extend \texttt{ForwardInterProceduralAnalysis} or \texttt{BackwardInterProceduralAnalysis}, which are subclasses of an abstract class \texttt{InterProceduralAnalysis}. The abstract methods \texttt{topValue}, \texttt{boundaryValue}, \texttt{copy} and \texttt{meet} provide a hook for client analyses to express initial lattice values and basic operations on them. The major functionality of the client analysis would be present in the \texttt{*FlowFunction} methods, whose roles were explained in Section~\ref{sec-background}. Additionally, clients are expected to provide a \texttt{ProgramRepresentation} object, which specifies program entry points (for which boundary values are to be defined) and resolves virtual calls. Our framework ships with default program representations for Soot's Jimple IR. The launch point of the analysis is the \texttt{doAnalysis} method, which is implemented as per the algorithm from Figure~\ref{fig-algorithm} in the directional sub-classes. The \texttt{Context} class encapsulates information about a value context. Every context is associated with a method, an entry value and an exit value, each of which can be retrieved using the corresponding \emph{getter} methods. The \texttt{getValueBefore} and \texttt{getValueAfter} methods return data flow values for a context just before and after a node respectively. 
This is the recommended way for accessing the results of the analysis in a context-sensitive manner. A mapping of methods to a list of all its contexts is available through the \texttt{getContexts} method of the \texttt{InterProceduralAnalysis} class. Alternatively, \texttt{getMeetOverValidPathsSolution} can be used to obtain a solution that is computed by merging data flow results across all contexts of each method. The \texttt{DataFlowSolution} class (not shown in the figure) simply provides \texttt{getValueBefore} and \texttt{getValueAfter} methods to access the resulting solution. \section{The Role of Call Graphs} \label{sec-results} We initially developed this framework in order to implement heap reference analysis~\cite{heap-reference-analysis} using Soot, because it could not be encoded as an IFDS/IDE problem. However, even with our general framework, performing whole-program analysis turned out to be infeasible due to a large number of interprocedural paths arising from conservative assumptions for targets of virtual calls. The \textsc{spark} engine~\cite{spark} in Soot uses a flow and context insensitive pointer analysis on the whole program to build the call graph, thus making conservative assumptions for the targets of virtual calls in methods that are commonly used such as those in the Java library. For example, it is not uncommon to find call sites in library methods with 5 or more targets, most of which will not be traversed in a given context. Some call sites can even be found with more than 250 targets! This is common with calls to virtual methods defined in \texttt{java.lang.Object}, such as \texttt{hashCode()} or \texttt{equals()}. When performing whole-program data flow analysis, the use of an imprecise call graph hampers both efficiency, due to an exponential blow-up of spurious paths, and precision, due to the meet over paths that are actually interprocedurally invalid, thereby diminishing the gains from context-sensitivity. 
Soot provides a context-sensitive call graph builder called \textsc{paddle} \cite{paddle}, but this framework can only perform $k$-limited call-site or object-sensitive analysis, and that too in a flow-insensitive manner. We were unable to use \textsc{paddle} with our framework directly because at the moment it is not clear to us how the $k$-suffix contexts of \textsc{paddle} would map to our value contexts. \begin{table*}[t] \centering \begin{tabular}{|c|l|r|r|r|r|r|r|r|r|r|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{Benchmark}} & \multirow{2}{*}{Time} & \multicolumn{2}{c|}{Methods ($M$)} & \multicolumn{2}{c|}{Contexts ($X$)} & \multicolumn{2}{c|}{$X/M$} & \multicolumn{2}{c|}{Clean}\\ \cline{4-11} \multicolumn{2}{|c|}{} & & Total & App. & Total & App. & Total & App. & Total & App.\\ \hline \multirow{5}{*}{SPEC JVM98} & \texttt{compress} & 1.15s & 367 & 54 & 1,550 & 70 & 4.22 & 1.30 & 50 & 47 \\ & \texttt{jess} & 140.8s & 690 & 328 & 17,280 & 9,397 & 25.04 & 28.65 & 34 & 30 \\ & \texttt{db} & 2.19s & 420 & 56 & 2,456 & 159 & 5.85 & 2.84 & 62 & 46 \\ & \texttt{mpegaudio} & 4.51s & 565 & 245 & 2,397 & 705 & 4.24 & 2.88 & 50 & 47 \\ & \texttt{jack} & 89.33s & 721 & 288 & 7,534 & 2,548 & 10.45 & 8.85 & 273 & 270 \\ \hline \multirow{2}{*}{DaCapo 2006} & \texttt{antlr} & 697.4s & 1,406 & 798 & 30,043 & 21,599 & 21.37 & 27.07 & 769 & 727 \\ & \texttt{chart} & 242.3s & 1,799 & 598 & 16,880 & 4,326 & 9.38 & 7.23 & 458 & 423 \\ \hline \end{tabular} \caption{Results of points-to analysis using our framework. ``App.'' refers to data for application classes only.} \label{tab-methods} \end{table*} \subsection*{Call Graph Construction using Points-To Analysis} We have implemented a flow and context-sensitive points-to analysis using our interprocedural framework to build a call graph on-the-fly. This analysis is both a demonstration of the use of our framework and a proposed solution for better call graphs intended for use by other interprocedural analyses.
The data flow value used in our analysis is a points-to graph in which nodes are allocation sites of objects. We maintain two types of edges: $x \rightarrow m$ indicates that the root variable $x$ may point to objects allocated at site $m$, and $m.f \rightarrow n$ indicates that objects allocated at site $m$ may reference objects allocated at site $n$ along the field $f$. Flow functions add or remove edges when processing assignment statements involving reference variables. Nodes that become unreachable from root variables are removed. Type consistency is maintained by propagating only valid casts. The points-to graphs at each statement only maintain objects reachable from variables that are local to the method containing the statement. At call statements, we simulate assignment of arguments to locals of the called method, as well as the assignment of returned values to a local of the caller method. For static fields (and objects reachable from them) we maintain a global flow-insensitive points-to graph. For statements involving static loads/stores we operate on a temporary union of local and global graphs. The call graph is constructed on-the-fly by resolving virtual method targets using type information of receiver objects. Points-to information cannot be precise for objects returned by native methods, and for objects shared between multiple threads (as our analysis is flow-sensitive). Thus, we introduce the concept of a \emph{summary node}, which represents statically unpredictable points-to information and is denoted by the symbol $\bot$. For soundness, we must conservatively propagate this effect to variables and fields that involve assignments to summary nodes. 
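As a concrete illustration of this representation, the following toy sketch (our own simplification, not the paper's implementation) encodes root edges $x \rightarrow m$ and field edges $m.f \rightarrow n$ as Python dictionaries, and prunes field edges whose source objects have become unreachable from root variables:

```python
# Toy points-to graph: `env` maps root variables to sets of allocation
# sites; `heap` maps (object, field) pairs to sets of objects.
def reachable(env, heap):
    """Allocation sites reachable from root variables via field edges."""
    seen = set()
    work = [o for pts in env.values() for o in pts]
    while work:
        o = work.pop()
        if o in seen:
            continue
        seen.add(o)
        for (src, _f), targets in heap.items():
            if src == o:
                work.extend(targets)
    return seen

def prune(env, heap):
    """Drop field edges whose source object is no longer reachable."""
    live = reachable(env, heap)
    return {edge: tgts for edge, tgts in heap.items() if edge[0] in live}
```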
The rules for summarization along different types of assignment statements are as follows: \begin{center} \begin{tabular}{|c|l|} \hline Statement & Rule used in the flow function \\ \hline $x = y$ & If $y \rightarrow \bot$, then set $x \rightarrow \bot$ \\ $x.f = y$ & If $y \rightarrow \bot$, then $\forall o : x \rightarrow o$, set $o.f \rightarrow \bot$ \\ $x = y.f$ & If $y \rightarrow \bot$ or $\exists o : y \rightarrow o$ and $o.f \rightarrow \bot$, \\ & then set $x \rightarrow \bot$ \\ $x = p(a_1, a_2, ...)$ & If $p$ is unknown, then set $x \rightarrow \bot$, and \\ & $\forall o : a_i \rightarrow o$, $\forall f \in fields(o)$ set $o.f \rightarrow \bot$ \\ \hline \end{tabular} \end{center} The last rule is drastically conservative; for soundness we must assume that a call to an unknown procedure may modify the fields of its arguments in any manner, and return any object. An important question is what constitutes an \emph{unknown} procedure. Native methods primarily fall into this category. In addition, if $p$ is a virtual method invoked on a reference variable $y$ and $y \rightarrow \bot$, then we cannot determine precisely what the target of $p$ will be. Hence, we consider this call site to be a \emph{default} site, and do not enter the procedure, assuming worst-case behaviour for its arguments and returned values. A client analysis using the resulting call graph with our framework can choose to do one of two things when encountering a \emph{default} call site: (1) assume worst-case behaviour for its arguments (e.g., in liveness analysis, assume that all arguments and objects reachable from them are live) and carry on to the next statement, or (2) fall back onto Soot's default call graph and follow the targets it gives. A related approach partitions a call graph into calls from application classes and library classes~\cite{application-only-call-graph}.
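The first three rules can be rendered directly in code. The snippet below is a toy rendering of ours (a variable environment plus a field map), not the paper's implementation, with \texttt{BOT} standing for the summary node $\bot$:

```python
# Toy flow functions implementing the summarization rules:
# `env` maps variables to sets of abstract objects, `heap` maps
# (object, field) pairs to sets of objects, BOT is the summary node.
BOT = "BOT"

def copy_stmt(env, x, y):
    """x = y: x points to whatever y points to (including BOT)."""
    env[x] = set(env.get(y, set()))

def store_stmt(env, heap, x, y, f):
    """x.f = y: every object o pointed to by x gains y's targets in o.f."""
    for o in env.get(x, set()) - {BOT}:
        heap.setdefault((o, f), set()).update(env.get(y, set()))

def load_stmt(env, heap, x, y, f):
    """x = y.f: BOT propagates from y itself or from any o.f, o in pts(y)."""
    pts = {BOT} if BOT in env.get(y, set()) else set()
    for o in env.get(y, set()) - {BOT}:
        pts |= heap.get((o, f), set())
    env[x] = pts
```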
Our call graph is partitioned into call sites that we can precisely resolve to one or more valid targets, and those that we cannot, due to statically unpredictable factors. \subsection*{Experimental Results} Table~\ref{tab-methods} lists the results of points-to analysis performed on seven benchmarks. The experiments were carried out on an Intel Core i7-960 with 19.6 GB of RAM running Ubuntu 12.04 (64-bit) and JDK version 1.6.0\_27. Our single-threaded analysis used only one core. The first two columns contain the names of the benchmarks: five are the single-threaded programs from the SPEC JVM98 suite \cite{specjvm98}, while the last two are from the DaCapo suite \cite{dacapo} version 2006-10-MR2. The third column contains the time required to perform our analysis, which ranged from a few seconds to a few minutes. The fourth and fifth columns contain the number of methods analyzed (total and application methods, respectively). The next two columns contain the number of value-contexts created, with the average number of contexts per method in the subsequent two columns. It can be seen that the number of distinct data flow values reaching a method is not very large in practice. As our analysis ignores paths with method invocations on null pointers, it was inappropriate for other benchmarks in the DaCapo suite when using stub classes to simulate the suite's reflective boot process. The use of \emph{default} sites in our call graph has two consequences: (1)~the total number of analyzed methods may be less than the total number of reachable methods, and (2)~methods reachable from \emph{default} call sites (computed using \textsc{spark}'s call graph) cannot be soundly optimized by a client analysis that jumps over these sites. The last column lists the number of \emph{clean} methods, which are not reachable from \emph{default} sites and hence can be soundly optimized. In all but two cases, the majority of application methods are clean.
\begin{table*}[t] \centering \begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|r|} \hline \multicolumn{2}{|c|}{Depth $k=$} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} \\ \hline \multirow{3}{*}{\texttt{compress}} & \textbf{\textsc{fcpa}} & 2 & 5 & 7 & 20 & 55 & 263 & 614 & 2,225 & 21,138 & 202,071 \\ & \textbf{\textsc{spark}} & 2 & 5 & 9 & 22 & 57 & 273 & 1,237 & 23,426 & 545,836 & 12,052,089 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{0} & \textbf{22.2} & \textbf{9.09} & \textbf{3.51} & \textbf{3.66} & \textbf{50.36} & \textbf{90.50} & \textbf{96.13} & \textbf{98.32} \\ \hline \multirow{3}{*}{\texttt{jess}} & \textbf{\textsc{fcpa}} & 2 & 5 & 7 & 30 & 127 & 470 & 4,932 & 75,112 & 970,044 & 15,052,927 \\ & \textbf{\textsc{spark}} & 2 & 5 & 9 & 32 & 149 & 924 & 24,224 & 367,690 & 8,591,000 & 196,801,775 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{0} & \textbf{22.2} & \textbf{6.25} & \textbf{14.77} & \textbf{49.13} & \textbf{79.64} & \textbf{79.57} & \textbf{88.71} & \textbf{92.35} \\ \hline \multirow{3}{*}{\texttt{db}} & \textbf{\textsc{fcpa}} & 2 & 5 & 11 & 46 & 258 & 1,791 & 21,426 & 215,465 & 2,687,625 & 42,842,761 \\ & \textbf{\textsc{spark}} & 2 & 5 & 13 & 48 & 443 & 4,726 & 71,907 & 860,851 & 13,231,026 & 245,964,733 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{0} & \textbf{15.4} & \textbf{4.17} & \textbf{41.76} & \textbf{62.10} & \textbf{70.20} & \textbf{74.97} & \textbf{79.69} & \textbf{82.58} \\ \hline \multirow{3}{*}{\texttt{mpegaudio}} & \textbf{\textsc{fcpa}} & 2 & 14 & 42 & 113 & 804 & 11,286 & 129,807 & 1,772,945 & 27,959,747 & 496,420,128 \\ & \textbf{\textsc{spark}} & 2 & 16 & 46 & 118 & 834 & 15,844 & 250,096 & 4,453,608 & 87,096,135 & 1,811,902,298 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{12} & \textbf{8.7} & \textbf{4.24} & \textbf{3.60} & \textbf{28.77} & \textbf{48.10} & \textbf{60.19} & \textbf{67.90} & \textbf{72.60} \\ \hline 
\multirow{3}{*}{\texttt{jack}} & \textbf{\textsc{fcpa}} & 2 & 18 & 106 & 1,560 & 22,652 & 235,948 & 2,897,687 & 45,480,593 & 835,791,756 & 17,285,586,592 \\ & \textbf{\textsc{spark}} & 2 & 18 & 106 & 1,577 & 27,201 & 356,867 & 5,583,858 & 104,211,833 & 2,136,873,586 & 46,356,206,503 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{1.08} & \textbf{16.72} & \textbf{33.88} & \textbf{48.11} & \textbf{56.36} & \textbf{60.89} & \textbf{62.71} \\ \hline \multirow{3}{*}{\texttt{antlr}} & \textbf{\textsc{fcpa}} & 6 & 24 & 202 & 560 & 1,651 & 4,669 & 18,953 & 110,228 & 975,090 & 11,935,918 \\ & \textbf{\textsc{spark}} & 6 & 24 & 206 & 569 & 1,669 & 9,337 & 107,012 & 1,669,247 & 27,670,645 & 468,973,725 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{0} & \textbf{1.9} & \textbf{1.58} & \textbf{1.08} & \textbf{49.99} & \textbf{82.29} & \textbf{93.40} & \textbf{96.48} & \textbf{97.45} \\ \hline \multirow{3}{*}{\texttt{chart}} & \textbf{\textsc{fcpa}} & 6 & 24 & 217 & 696 & 2,109 & 9,778 & 45,010 & 517,682 & 7,796,424 & 164,476,462 \\ & \textbf{\textsc{spark}} & 6 & 24 & 219 & 714 & 2,199 & 20,171 & 306,396 & 7,676,266 & 192,839,216 & 4,996,310,985 \\ & \textbf{$\Delta$\%} & \textbf{0} & \textbf{0} & \textbf{0.9} & \textbf{2.52} & \textbf{4.09} & \textbf{51.52} & \textbf{85.31} & \textbf{93.26} & \textbf{95.96} & \textbf{96.71} \\ \hline \end{tabular} \caption{Number of $k$-length call graph paths for various benchmarks using \textsc{spark} and \textsc{fcpa} (Flow and Context-sensitive Pointer Analysis).} \label{tab-paths} \end{table*} In order to highlight the benefits of using the resulting call graph, simply listing the number of edges or call sites is not appropriate, as our call graph is context-sensitive. We have thus computed the number of distinct paths in the call graph, starting from the entry point; these are listed in Table~\ref{tab-paths}.
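The path counts themselves come from a simple dynamic program over the call graph. The following sketch is a context-insensitive toy version of ours (the actual count walks the context-sensitive call graph), with the graph given as a plain adjacency map:

```python
# Count the number of length-k call chains starting from the entry
# point, by dynamic programming: counts[n] = number of length-i paths
# ending at node n, updated one step at a time.
def count_paths(graph, entry, k):
    counts = {entry: 1}  # length-0 paths ending at each node
    for _ in range(k):
        step = {}
        for node, c in counts.items():
            for callee in graph.get(node, ()):
                step[callee] = step.get(callee, 0) + c
        counts = step
    return sum(counts.values())
```

Because the count is per fixed length $k$, recursion poses no problem: a cycle simply keeps contributing paths at every length.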
As the total number of call graph paths is possibly infinite (due to recursion), we have counted paths of a fixed length $k$, for $1 \le k \le 10$. For each benchmark, we have counted these paths using call graphs constructed by our Flow and Context-sensitive Pointer Analysis (\textsc{fcpa}) as well as \textsc{spark}, and noted the difference as percentage savings ($\Delta\%$) from using our context-sensitive call graph. The option \texttt{implicit-entry} was set to \texttt{false} for \textsc{spark}. The savings can be clearly observed for $k > 5$. For $k = 10$, \textsc{spark}'s call graph contains more than 96\% spurious paths for three of the benchmarks, and 62--92\% for the remaining four. The gap only widens for larger values of $k$ (for which the number of paths was too large to compute in some cases). Client analyses using our interprocedural framework can be configured to use our context-sensitive call graphs, which avoid these spurious paths, thus enabling efficient and precise solutions. \section{Conclusion and Future Work} \label{sec-conclusion} We have presented a framework for performing value-based context-sensitive interprocedural analysis in Soot. This framework does not require distributivity of flow functions and is thus applicable to a large class of analyses, including those that cannot be encoded as IFDS/IDE problems. Another advantage of our method is the context-sensitive nature of the resulting data flow solution, which can be useful in dynamic optimizations. In order to deal with the difficulties of whole-program analysis performed over an imprecise call graph, we constructed call graphs on-the-fly while performing a flow and context-sensitive points-to analysis. This analysis also demonstrated a sample use of our framework and showed that it is practical to use data flow values as contexts, because the number of distinct data flow values reaching each method is often very small.
The interprocedural framework has been released and is available at \url{https://github.com/rohanpadhye/vasco}. However, our points-to analysis implementation is experimental and makes heavy use of \texttt{HashMap}s and \texttt{HashSet}s, and thus runs out of memory for very large programs. We would like to improve this implementation by using bit-vectors or even BDDs for a compact representation of points-to sets. The precision of our call graphs is still limited in the presence of \emph{default} sites, which are prominent due to the liberal use of \emph{summary} nodes in the points-to graphs. We would like to reduce the number of \emph{summary} nodes by simulating some commonly used native methods in the Java library, and also by preparing a summary of initialized static fields of library classes. Our hope is that these extensions would make our call graph complete, thereby enabling users to precisely perform whole-program analysis and optimization for all application methods in an efficient manner. We believe that improving the precision of program analysis actually helps to improve its efficiency, rather than hampering it. \acks We would like to thank the anonymous reviewers for suggesting the inclusion of a non-distributive example and the separation of call/return flow functions. These changes improved the paper. \bibliographystyle{abbrvnat}
\section{Introduction} The absence of a fully consistent quantum theory of gravity is felt by many theorists as a challenge. The corresponding research is not motivated by present phenomenology. But it is likely that the difficulty one encounters trying to merge general relativity and quantum mechanics reflects our misunderstanding of some basic issues, and the feeling that it might indeed be so is sufficient to trigger activity. There exist several approaches to the problem. In this paper we shall discuss only one of them. \par It is well known that the formulation of the classical theory of gravity can start with the introduction of interacting tensor fields living in a flat auxiliary space, provided one imposes appropriate constraints, in the first place the gauge invariance. The identification of the field with the metric of the physical space-time is then done at the next stage. It might appear that proceeding that way is the best strategy in the quantization program: one has the correct classical theory at the tree level, and computing loops will give quantum corrections. Unfortunately one gets a theory plagued with infinities. Upgrading the symmetry to super-symmetry does not resolve the problem. It is by now commonly admitted that the basic objects should be extended, which complicates the story considerably. In spite of the appeal of super-string theories and of their potential ability to unify interactions, it is fair to say that the progress in this direction is slow, if not uncertain. \par The approach to be discussed starts from an attempt to give a precise meaning to Feynman's path integral over all metrics of a manifold. This is, perhaps, closer to Einstein's geometrical intuition, since the metric remains the central concept of the theory. The price to pay is that one has to adopt the Euclidean version of the latter \cite{haw}.
Someone may object at this point that the equivalence of the Euclidean and Minkowskian theories is questionable in the case of gravity, which renders the whole approach suspect. Notice, however, that Euclidean and Minkowskian gravities have in common several of their salient features: gauge symmetry, perturbative non-renormalizability, a bottomless action. A successful quantization of the former, which is doubtlessly more tractable, is likely to be a prerequisite for the understanding of the latter. \par Because of lack of space we shall leave aside most of the developments concerning quantum gravity in less than four dimensions ($4d$). We would like to stress, however, that the study of $2d$ gravity (random surfaces) triggered by Polyakov's famous paper \cite{pol} has to a large extent inspired the research reviewed below. The following sections express a personal view of the subject. We leave aside, in particular, papers where the dynamical triangulation recipe (see sect.~2) has {\em not} been adopted (see e.g. \cite{berg,ham2}). We apologize to all those whose work has not been given due attention. \section{Discrete theory} The pure gravity theory is defined formally by the path integral \begin{equation} Z = \int [{\cal D}g_{ab}] e^{-S} \label{cz} \end{equation} \noindent Here $S$ is the action, which is usually assumed to have the Einstein-Hilbert form \begin{equation} S = \int_{\cal M} d^dx \sqrt{g} \; (\Lambda - {1 \over {16\pi G}} R \; ) \label{ca} \end{equation} \noindent and ${\cal M}$ is a compact closed manifold. The integration involves, in principle, a summation over all topologies and, for a given topology, the integration over all metrics that can be obtained one from another by a continuous deformation. Actually, for $4d$, the summation over topologies is ill defined, and should be restricted, say, to smooth manifolds.
In $2d$, where the classification of topologies is simple, the number of manifolds grows like a factorial of the genus \cite{wein2}, so that the sum is not Borel summable. The situation in $4d$ is certainly not simpler. In practice, most studies at $d > 2$ use a discrete formulation of the theory and assume the topology is fixed. \par The discretization of a theory invariant under general coordinate transformations is not a trivial matter. The first basic idea is due to Regge \cite{reg}, who suggested replacing the continuous manifold ${\cal M}$ by a collection of flat $d$-simplexes forming a simplicial complex. The curvature then resides on ($d-2$)-dimensional hinges. The second important idea is that of {\em dynamical triangulations} \cite{weing,dav,amb,kaz}: the simplexes are assumed to be equilateral and the sum over metrics is replaced by the sum over all possible ways of gluing them together. For an ensemble of exactly solvable models in $2d$ one can show that the continuum and the discrete version belong to the same universality class when the dynamical triangulation recipe is adopted. \par Let us denote by $N_k$ the number of $k$-simplexes in the complex. For $d=3$ and 4 only two of these quantities are independent. When the simplexes are equilateral, the RHS of (\ref{ca}) discretized \`a la Regge \cite{ham} becomes a linear combination of these two independent numbers, which is remarkably simple. For $4d$ one can write \begin{equation} S = \kappa_4 N_4 -\kappa_2 N_2 , \label{dac} \end{equation} \noindent where $\kappa_2 \sim a^2/G$ and $a$ denotes the lattice unit. The partition function (\ref{cz}) takes the form \begin{equation} Z(\kappa_2, \kappa_4) = \sum_{N_2, N_4} Z_{N_2, N_4} e^{-S} \label{dz} \end{equation} \noindent where \begin{equation} Z_{N_2, N_4} = \sum_{T(N_2, N_4)} W(T) \label{micro} \end{equation} \noindent The sum is over all $4d$ closed manifolds $T(N_2,N_4)$ of, say, spherical topology, with fixed $N_2$ and $N_4$.
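As a toy illustration of (\ref{dac})--(\ref{dz}), the following sketch assembles the grand-canonical partition function from micro-canonical weights $Z_{N_2,N_4}$; the weights used in the test are made up for illustration, since the real ones come from the count of triangulations:

```python
# Toy evaluation of the discretized action and partition function:
# action(n2, n4) = kappa4*N4 - kappa2*N2, and Z is a Boltzmann sum
# over micro-canonical weights Z_{N2,N4}.
import math

def action(n2, n4, kappa2, kappa4):
    return kappa4 * n4 - kappa2 * n2

def partition(micro, kappa2, kappa4):
    """micro maps (N2, N4) -> Z_{N2,N4}; returns the grand sum Z."""
    return sum(w * math.exp(-action(n2, n4, kappa2, kappa4))
               for (n2, n4), w in micro.items())
```

Increasing $\kappa_4$ suppresses large-volume configurations, which is the mechanism behind the critical line discussed below.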
The symmetry factor $W(T)$ equals the number of distinct labelings of the vertices of $T$ divided by $N_0!$. The model is now defined precisely enough to be converted into a computer code. \par The lattice can be further decorated with matter fields, for example with Ising spins. This does not present any conceptual difficulty. We limit ourselves here to pure gravity, since for $d > 2$ the study of models involving matter fields has not been pushed far enough to warrant discussion in a short review. \section{Numerical algorithms} Any two combinatorially equivalent simplicial complexes\footnote{For $d < 7$ triangulated smooth manifolds of the same topology are combinatorially equivalent.} can be connected by a series of moves introduced long ago by Alexander \cite{alex}. A smaller set of local moves has been proposed in the context of lattice gravity \cite{gg,am}. All these moves can be introduced as follows: let a collection of $d$-simplexes in a $d$-dimensional manifold be a part of the boundary of a $(d+1)$-simplex. A move consists in replacing the collection in question by the rest of the boundary. Since a $(d+1)$-simplex has $d+2$ $d$-dimensional faces, there are $d+1$ possible moves of this type. Each move has a reciprocal and, when $d$ is even, there is one self-reciprocal move. It has been pointed out in \cite{bk} that for $d=3$ all Alexander moves can be constructed using these simple ones. The formal proof, valid for all $d < 5$, can be found in ref. \cite{gv}. The ergodicity of the simple moves follows from the fact that all the Alexander moves are reducible to them\footnote{The moves used earlier in the context of $2d$ gravity are identical or reducible to these ones. However, the situation in $2d$ is particularly simple and the ergodicity is proved by rather elementary methods.}.
\par A word of caution is in order at this point: although any two simplicial complexes can be deformed one into another by a finite number of simple local moves, the number of steps needed to connect two lattice configurations might grow so fast with the volume that ergodicity would not be ensured in practice. The possibility of this unpleasant scenario has to be kept in mind. \par The further implementation of these ideas in computer software has been greatly facilitated by the experience gained in developing algorithms appropriate for random surfaces \cite{kaz,adfo,jkp}. In this respect the techniques worked out to simulate the so-called grand-canonical ensemble\footnote{That is, the ensemble where the number of nodes is fluctuating.} of random surfaces \cite{adfo,jkp} are particularly instructive. Anyone wishing to participate in the numerical studies of quantum gravity is advised to start by becoming conversant with them. \par The efficiency of the algorithms is considerably improved when the ergodic local moves are supplemented by global ones, where entire baby-universes are cut out at one place and glued elsewhere \cite{abbjp,aj1}\footnote{These global moves are not ergodic by themselves.}. We shall come to baby-universes later on. \section{Entropy of random manifolds} In order for the theory to make sense, the entropy of manifolds should be an extensive quantity. In other words, the number $Z(N_d)$ of distinct $d$-dimensional simplicial complexes, made up of $N_d$ $d$-simplexes and with fixed topology, should be bounded by $\exp{(c N_d)}$, with $c$ some finite positive number. It has been a surprise for the physicists who got interested in the problem to learn that their colleagues from the maths department have no idea how $Z(N_d)$ behaves when $d > 2$ and $N_d \to \infty$. \par In $2d$ the exponential bound has been proved analytically \cite{tut}. Numerical evidence for such a bound in $3d$ was first given in ref.
\cite{av} for spherical topology, and confirmed by later studies. A similar result has been obtained for $d=4$ in \cite{aj2}. There has been a controversy concerning the validity of this result, but the present consensus is that it is correct (see the review in \cite{cat}). Of course, numerical evidence is not a proof. Hence, several people have presented analytic arguments to the effect that the exponential bound does hold. It seems that these claims rest on too restrictive assumptions, but we do not feel expert enough in topology to develop this point. \section{Phase diagram} Let us take the existence of the exponential bound discussed in the preceding section for granted: \begin{equation} \log Z(\kappa_2, N_4) \sim \kappa_{4crit} (\kappa_2) N_4 + ... \label{zcan} \end{equation} \noindent where \begin{equation} Z(\kappa_2, N_4) = \sum_{N_2} Z_{N_2,N_4} e^{\kappa_2 N_2} \label{zcan2} \end{equation} \noindent and the subleading terms have not been written explicitly, for simplicity. It is clear from (\ref{dz}) that the theory does not exist for $\kappa_4 < \kappa_{4crit}(\kappa_2)$. As $\kappa_4$ approaches the critical line $\kappa_4 = \kappa_{4crit}(\kappa_2)$ from above, the partition function $Z(\kappa_2,\kappa_4)$ develops a singularity. \par Notice that in pure quantum gravity it does not make much sense to attach physical significance to $\Lambda$ and $G$ separately. The content of the theory, as defined by (\ref{cz}), remains unchanged under the rescaling of the metric $g_{ab} \to s g_{ab}$, which corresponds to $\Lambda \to s^{d/2} \Lambda$ and $G \to s^{-d/2+1} G$. Only the invariant combination $\Lambda G^{d/(d-2)}$ is relevant (see the discussion in \cite{kn}). In other words, one has to tune both $\kappa_4$ and $\kappa_2$ in order to define the continuum theory. One needs for that a critical point on the line $\kappa_4 = \kappa_{4crit}(\kappa_2)$. \par Such a point was first discovered in ref.
\cite{bk}, in the context of $3d$ gravity (which in this respect resembles the $4d$ one, except for the order of the transition, see below). It has been found that below that point the system is in a crumpled phase, where the average number of nodes per simplex tends to zero when the number of simplexes is sent to infinity. Above the critical point this ratio tends to a finite limit, so that at least a sensible thermodynamical limit can be defined. An analogous critical point has subsequently been found for $d=4$ \cite{aj3,am2}. Contrary to $3d$, where it is of first order \cite{abkv}, the transition in $4d$ appears to be continuous \cite{aj3,ckr,aj1}. This is what one might hope for, since in $4d$ there should be a place for the graviton, which is absent in $3d$. It is now customary to refer to the crumpled phase as the {\em hot} one. The phase above the critical point is called {\em cold}. A careful analysis \cite{aj1} of the phase structure in $4d$ further demonstrates that the {\em internal} fractal dimension $d_H$ of the manifolds is close to 2 in the cold phase and presumably infinite in the hot one. \par It is worth mentioning at this point that the lattice theory always has a well defined most probable configuration (vacuum state). It appears that this vacuum is not just the state with largest curvature, which on a dynamically triangulated manifold is necessarily finite. The vacuum seems to be nontrivial and is stabilized by the entropy of manifolds, which in this formulation of the discrete theory is defined unambiguously \cite{bk,abkv}. \section{Baby universes} Baby universes (BU) are sub-universes connected to the rest of the universe by a narrow neck. They are Euclidean analogs of black holes, and early speculations concerning Euclidean quantum gravity already introduced this concept \cite{haw}. The numerical simulations of random manifolds have revealed that the emergence of BU is an extremely common phenomenon, in all dimensions that have been considered.
It is very unlikely that a random manifold remains more or less smooth (in the intuitive sense of the word). If one starts a simulation with a smooth manifold, soon there are BU growing out of it. Further, there are BU growing on BU, and so on. The final structure is tree-like. It can be demonstrated analytically in $2d$ \cite{jm} that this tree is a fractal. In $4d$ the tree-like structure is especially manifest near and above the critical point. Actually, in the cold phase, the tree resembles a branched polymer \cite{aj1}. The tree has the topology of a sphere because the algorithm keeps the topology fixed by construction. It is very likely that the typical configuration would remain a collection of sub-universes connected by wormholes if one succeeded in upgrading the algorithm to allow the topology to change. The possible relevance of such a geometry for the cosmological constant problem has been pointed out long ago by Hawking, Coleman and others \cite{haw}. \par The discovery of the tree-like geometry of typical random manifolds with fixed topology is a very important and a very intriguing finding. A quantized manifold does not resemble at all the familiar systems making small quantum fluctuations around a smooth classical configuration. It is perhaps not surprising that the construction of the quantum space-time starting with interacting elementary entities (see the Introduction) is not a simple matter. \par The average number $n(N_B, N_4)$ of BU with a given volume $N_B$ can be found using a combinatorial argument \cite{jm} \footnote{The neck of a BU can be regarded as a puncture on each of the two parts of the manifold it connects.}.
The result is particularly simple when one assumes that \begin{equation} Z(\kappa_2, N_4) \sim N_4^{\gamma-3} e^{\kappa_{4crit} N_4} , \label{zcan3} \end{equation} \noindent which is true in $2d$ and is likely to hold in $4d$ in the vicinity of the critical point: \begin{equation} n(N_B, N_4) \sim N_4 [(1 - {{N_B} \over {N_4}}) N_B]^{\gamma - 2} \; , \; N_B < N_4 \label{buvol} \end{equation} \noindent There exist general arguments \cite{dfj}\footnote{Strictly speaking, this paper deals with random surfaces. However the geometrical arguments employed are certainly of more general validity.} to the effect that generically $\gamma \leq 0$ or else the manifolds degenerate into branched polymers with $\gamma = {1 \over 2}$. Thus the educated guess is that in the sensible sector of the theory the number of BU carrying a {\em finite} fraction of the total volume is $\sim N_4^\gamma$ and tends to a constant or vanishes when $N_4 \to \infty$, i.e. in the continuum limit. \section{Scaling and renormalization group} Recently, much activity has concentrated on the behavior of the discrete theory in the neighbourhood of the critical point. We have no place here to give justice to all this effort and, in particular, to all the facets of the particularly thorough work by Ambj\o rn and Jurkiewicz \cite{aj1} (we have already refered to it on several occasions). \par The geometry of the ensemble of manifolds can be characterized by invariant correlations between local operators $O(x)$. The simplest correlator is the two-point function with $O(x) = 1$. On a lattice it takes the form \begin{equation} G(r, N_4, \kappa_2) = N_4^{-2} \langle \sum_{A,B} \delta (r- \mid x_A-x_B \mid) \rangle_{N_4} \label{pcf} \end{equation} \noindent where $\mid x_A-x_B \mid$ is the geodesic distance between simplexes $A$ and $B$. 
The large distance behavior at fixed $\kappa_2$ is \cite{aw,aj1} \begin{equation} G(r, N_4, \kappa_2) \sim e^{-c(r/N_4^{1/d_H})^{{d_H} \over {d_H-1}}} \label{pcf2} \end{equation} \noindent where $c$ is some constant and $d_H$ is the internal Hausdorff dimension. Both can depend on $\kappa_2$. For large enough $N_4$ \begin{equation} \langle r \rangle_{N_4} \sim N_4^{1/d_H} \label{avr} \end{equation} \noindent The reciprocal relation \begin{equation} \langle N_4 \rangle_r \sim r^{d_H} \label{avN} \end{equation} \noindent also holds \footnote{The LHS is the average number of simplexes in manifolds with two boundaries separated by invariant geodesic distance $r$.}. The finiteness of $d_H$ has been assumed. The scaling manifest in (\ref{pcf2}) has been observed empirically earlier \cite{dbs} in the full range of $r$: $G(r, N_4)$ is mostly a function of $r/\langle r \rangle$. It has been further claimed in \cite{dbs} that this function has an approximately constant shape along trajectories in the $(N_4, \kappa_2)$ plane. \par It is tempting to attack the problem of scaling using the techniques of the real-space renormalization group. The very definition of a blocking procedure is non-trivial in this context: ideally, the blocking should be a self-similarity transformation, a constraint difficult to satisfy when one deals with a random lattice. It has been proposed in \cite{jkk} to define the renormalization group (RG) transformation as the process of cutting the last generation of baby universes, that is, those BU which have no further BU growing on them\footnote{In practice, one cuts only the minimum-neck BU (minBU), which are easy to identify.}. Under this operation the tree gets smaller, in lattice units, and less branched, which is interpreted as reflecting the loss of resolving power. \par Let us keep fixed the {\em physical} volume $V =N_4 a^4$ of the manifold. Consider the moments $\langle r^k \rangle$ of the correlator (\ref{pcf}).
They transform under RG: $\langle r \rangle \to \langle r \rangle - \delta r$, etc. Assuming that $\kappa_2$ is the only coupling relevant for the large-scale geometry, one has along the RG flow \begin{equation} \delta r = r_N \delta \ln{N_4} + r_\kappa \delta \kappa_2 \label{step} \end{equation} \noindent where $r_N$ and $r_\kappa$ are the partial derivatives of $\langle r \rangle$ with respect to $\ln{N_4}$ and $\kappa_2$, respectively. Furthermore, \begin{equation} \delta \ln{1 \over a} = {1 \over 4} \delta \ln{N_4} \label{step2} \end{equation} \noindent From (\ref{step}) and (\ref{step2}) and using computer data one can calculate the $\beta$-function \cite{bkk,bbkp} \begin{equation} \beta(\kappa_2) = {{d \kappa_2} \over {d \ln{{1 \over a}}}} \label{bet} \end{equation} \noindent It is found that the theory possesses an ultra-violet stable fixed point $\kappa_2 = \kappa_{2crit}$. The value of $\kappa_{2crit}$ obtained from RG is close to that found by other methods. Thus, in the neighborhood of the critical point \begin{equation} \beta(\kappa_2) = \beta_0 (\kappa_{2crit} - \kappa_2) , \; \; \beta_0 > 0 \label{beta} \end{equation} \noindent Integrating (\ref{bet}) one gets \begin{equation} a = a_0 \mid \kappa_{2crit} - \kappa_2 \mid^{1 \over {\beta_0}} \label{latstep} \end{equation} \noindent where $a_0$ is an integration constant, which should be given a value, in physical units, in order to define the theory. The RG flow lines are \begin{equation} N_4^{\beta_0/4} \mid \kappa_{2crit} - \kappa_2\mid \; \equiv t = V/a_0^4 \label{rgflow} \end{equation} \noindent The continuum limit is \begin{eqnarray} N_4 \to \infty \nonumber \\ \kappa_2 \to \kappa_{2crit} \nonumber \\ t = {\rm const} \label{cont} \end{eqnarray} These results are closely analogous to those obtained in $2+\epsilon$ dimensions in the continuum framework (see \cite{kn} and references therein).
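As a numerical illustration of (\ref{latstep}) and (\ref{rgflow}), the sketch below checks that $t$ is indeed constant along a flow line; the values of $\beta_0$ and $\kappa_{2crit}$ used in the check are illustrative placeholders, not measured ones:

```python
# Along an RG trajectory, t = N4^(beta0/4) * |kappa2_crit - kappa2|
# is invariant when the linearized beta function is integrated.
def flow_invariant(n4, kappa2, kappa2_crit, beta0):
    return n4 ** (beta0 / 4.0) * abs(kappa2_crit - kappa2)

def kappa2_on_flow(n4, t, kappa2_crit, beta0):
    """kappa_2 as a function of N_4 along the trajectory of constant t."""
    return kappa2_crit - t * n4 ** (-beta0 / 4.0)
```

In particular, $\kappa_2$ approaches $\kappa_{2crit}$ as $N_4 \to \infty$ at fixed $t$, which is precisely the continuum limit (\ref{cont}).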
It follows from the above discussion that one should be careful in interpreting results obtained at fixed $\kappa_2$: the line $\kappa_2 =$ const intersects an infinity of RG trajectories, each representing a distinct version of the theory. \par Other interesting correlators are those obtained setting $O(x) = R(x)$, where $R(x)$ is the scalar curvature. Their integrals are unambiguously defined: \par \begin{equation} m_k(N_4) = {{\partial^k \ln{Z(\kappa_2, N_4)}} \over {\partial \kappa_2^k}} \label{rrcor} \end{equation} \noindent and are the cumulants of the node distribution\footnote{One has $N_2=2(N_4+N_0-2)$.}. Computer data \cite{aj3,ckr,var,bbkp} are compatible with simple finite-size scaling \begin{eqnarray} m_2(N_4) \sim N_4^b f[(\kappa_2 - \kappa_{2crit}) N_4^c] \\ m_3(N_4) \sim N_4^{b+c} f'[(\kappa_2 - \kappa_{2crit}) N_4^c] \label{fss} \end{eqnarray} \noindent This suggests the existence of a finite mass gap scaling to zero when $\kappa_2 \to \kappa_{2crit}$. As long as one is on the lattice the mass gap {\em is expected} to be finite, since the continuum gauge symmetry responsible for the existence of the graviton is absent. According to the preliminary data \cite{bbkp} $\beta_0/4 > c$. This seems to indicate that the mass gap vanishes in the continuum limit (\ref{cont}), as it should. Much more work will be necessary to check the spin of the corresponding particle. \section{Conclusion} Let us conclude with a few words about the open questions. There are, of course, the fundamental questions related to the summation over topologies, the continuation to real time, etc., which require very bright new ideas. There are also more accessible problems within the lattice formalism. The central one is the nature of the continuum limit and, in particular, the search for evidence of the graviton. The next one is a careful study of the interaction of matter fields with geometry.
Is all this sensible, and does it really correspond to a genuine theory of gravity, albeit a Euclidean one? What is certain is that progress in this field is rapid and that the people involved are having a lot of fun!
\section{INTRODUCTION} Motion planning is a major research area in the context of mobile robots, as it deals with the problem of finding a path from an initial state toward a goal state while avoiding collisions \cite{laumond1998robot,latombe2012robot,choset2005principles}.
Traditional path planning methods focus on finding \emph{single nominal paths} in a given \emph{known map}, and the majority of them make the implicit assumption that the agent possesses a lower-level \emph{state feedback} controller for following such a nominal path in the face of external disturbances and imperfect models. In this paper, we instead take the alternative approach of synthesizing a set of output-feedback controllers rather than a single nominal path. Focusing on controllers allows us to directly consider the measurements (outputs) available to the agent, instead of assuming full state knowledge (i.e., perfect localization in the environment); the main advantage of this approach is that it is intrinsically robust to disturbances and limited changes in the environment. The main challenge, however, is that we need to explicitly take into account the type of measurements that are available. For instance, in this paper we consider the case of a mobile robot that can use a monocular camera to detect a set of landmarks (e.g., objects) in the environment. Using image-based information, the robot can compute the bearing (i.e., the direction vector) with respect to the landmarks. The goal of this work is then to synthesize controllers that can use this type of measurement for feedback-based path planning. \myparagraph{Previous works} A well-known class of techniques for motion planning is that of sampling-based methods.
Algorithms such as Probabilistic RoadMaps ({\texttt{PRM}}) \cite{kavraki1996probabilistic}, Rapidly exploring Random Trees ({\texttt{RRT}}, \cite{rrt,lavalle2001randomized}), and the asymptotically optimal Rapidly exploring Random Tree ({\texttt{RRT$^*$}}{}, \cite{karaman2011sampling}) have become popular in the last few years due to their good practical performance and their probabilistic completeness \cite{lavalle2006planning,lavalle2001randomized,karaman2011sampling}; there have also been extensions considering perception uncertainty \cite{renganathan2020towards}. However, these algorithms only provide nominal paths, and assume that a separate low-level controller exists to generate collision-free trajectories at run time. The sampling-based output-feedback approach we build upon was introduced in \cite{Mahroo,Mahroo2}, in which the output-feedback controller takes as input relative displacement measurements with respect to a set of landmarks. This approach is based on solving a sequence of robust min-max Linear Programming (LP) problems on the elements of a convex cell decomposition of the environment, using linear Control Lyapunov Function (CLF) and Control Barrier Function (CBF) constraints to provide stability and safety guarantees, respectively. Additionally, it extends planning beyond discrete nominal paths, as feedback controllers can correct deviations from such paths, and are robust to discrepancies between the planning and real environment maps. The authors in \cite{Mahroo} assume that a polyhedral convex cell decomposition of the environment is available, which greatly reduces the method's applicability. The authors in \cite{Mahroo2} extend \cite{Mahroo} and introduce a representation of the environment obtained via a modified version of the optimal Rapidly-exploring Random Trees ({\texttt{RRT$^*$}}{}) algorithm, combined with a convex cell decomposition of the environment.
In order to observe the environment, modern robots can be equipped with different kinds of sensors, such as Inertial Measurement Units \cite{zhao2011motion}, visual sensors \cite{papanikolopoulos1993visual}, sonar sensors \cite{yata1999fast}, and contact sensors~\cite{lumelsky1990unified}. Among these, monocular cameras are nearly ubiquitous, given their favorable trade-offs with respect to size, power consumption, and the richness of information in the measurements; a peculiarity of this sensor is that, due to perspective projection, it can provide relatively precise \emph{bearing} information (the direction of an object or point in the environment) but not, from a single image alone, \emph{depth} information. Many different techniques have been introduced for monocular navigation and control. An example is visual servoing, where image-based computations are used to find a control law that guides the robot toward a \textit{home} location using bearing alone \cite{lambrinos1998landmark,argyros2001robot,liu2010bearing,tron2014optimization}, or by concurrently estimating depths \cite{pentland1989simple,ens1993investigation,michels2005high}. \myparagraph{Proposed approach and contributions} The goal of the present work is to join the two threads of work mentioned above: we apply the synthesis method of \cite{Mahroo,Mahroo2}, but with controllers that use as input a set of bearing measurements between the robot and a set of landmarks. Since we consider a planning problem, we assume that a map of the environment is available, and that the positions of the landmarks are known and fixed in the reference frame of this map; furthermore, we assume that a global compass direction is available (so that the orientation, but not the position, of the robot in the map is known).
The main contribution of this work is then to provide a way to synthesize bearing-based feedback controllers that can be used to solve the path planning problem by extending the method of~\cite{Mahroo,Mahroo2} via a dynamic rescaling of the measurements. This contribution is tested in both simulations and experiments with real robots. \section{BACKGROUND}\label{sec:background} In this section, we review the system dynamics, bearing direction measurements, CLF and CBF constraints, and the {\texttt{RRT$^*$}}{} method in the context of our proposed work. \subsection{System Dynamics} Consider a driftless control affine system \begin{equation}\label{sys1} \dot{x}= Bu, \end{equation} where $x \in \cX\subset\real{n}$ denotes the state, $u\in\cU\subset\real{m}$ is the system input, and $B\in\real{n\times m}$ defines the linear dynamics of the system. We assume that the system \eqref{sys1} is controllable, that $\cX$ and $\cU$ are polytopic, \begin{align}\label{state_limits} \cX=\{x\mid A_{x}x\leq b_{x}\},&& \cU=\{u\mid A_{u}u\leq b_u\}, \end{align} and that $0\in\cU$. The assumptions above are more restrictive than those in, e.g., \cite{Mahroo2}, but are necessary to allow our solution to the bearing-based control problem. Note that, in our case, $\cX$ will be a convex cell centered around a sample in the tree (Sec.~\ref{sec:simplified-tree}). \subsection{Bearing direction measurements}\label{sec:bearing} In this work, we assume that a global compass direction is available (so that bearing directions can be compared in a common frame of reference) and that the robot only has access to bearing direction measurements (see Fig.~\ref{bearing}).
The bearing direction measurements are defined as \begin{equation}\label{bearing_mes} \beta_i= d_i(x)^{-1}(l_i-x),\;\;\;\; i=\{1,\hdots,N\}, \end{equation} where $N$ is the number of \emph{landmarks} and $d_i$ is the relative displacement measurement between the position of the robot and landmark $l_i$ \begin{equation}\label{distance} d_i(x) = \norm{l_i-x}. \end{equation} The locations of the landmarks ${l}_i$ are assumed to be fixed and known at planning time. Additionally, we assume that at implementation time the robot can measure the bearing direction $\beta_i$ between its position $x$ and a set of landmarks $l_i$. \begin{figure*} \subfloat[]{\label{bearing}{\includegraphics[width=5.5cm]{figures/tikz/fig1a}}} \hfill \subfloat[]{\label{circle}{\includegraphics[width=5.5cm]{figures/tikz/fig1b}}} \hfill \subfloat[]{\label{config}{\includegraphics[width=4.5cm]{figures/tikz/fig1c}}} \caption{a) Bearing direction measurements. Landmarks are shown by blue circles and the robot is shown by a black circle. The robot measures the bearing directions to the landmarks, shown by blue arrows. The positions of the landmarks are known, and bearing direction measurements between landmarks are shown by purple, green, and orange arrows. b) As bearing measurements are unit vectors, the robot sees all the landmarks within 1 unit from itself. c) Modified bearing measurements obtained by assuming that landmark $l_2$ is fixed.} \end{figure*} \subsection{Optimal Rapidly-Exploring Random Tree ({\texttt{RRT$^*$}}{})} \label{sec:rrtstar} The {\texttt{RRT$^*$}}{} sampling-based algorithm is typically used for single-query path planning, but can also be used (as in this paper) to build a tree $\cT=(\cV,\cE)$ representing the free configuration space starting from a given root node.
In this work, we build a tree in the free configuration space by implementing the {\texttt{RRT$^*$}}{} algorithm; the only modification we make to the original {\texttt{RRT$^*$}}{} algorithm is that we store random samples that were found to be in collision with an obstacle in the list $\cV_{\text{collision}}$ instead of discarding them; this list is then returned and used to define the CBF constraints in our algorithm (see Sec.~\ref{sec:tree-CBF}). {\texttt{RRT$^*$}}{} is guaranteed to be probabilistically complete and asymptotically optimal, although these guarantees do not hold with a finite number of samples. \subsection{Convex Decomposition of the Environment by Simplified {\texttt{RRT$^*$}}{} Trees}\label{sec:simplified-tree} We start with a tree $\cT = (\cV,\cE)$ generated by the traditional {\texttt{RRT$^*$}}{} algorithm from Sec. \ref{sec:rrtstar}. Since the number of samples is finite, the generated tree is not optimal, although it has a large number of nodes. We simplify the tree to reduce the number of nodes (while keeping all the samples that are in collision with obstacles) by following the simplified-{\texttt{RRT$^*$}}{} algorithm in \cite{Mahroo2}, and denote the result as $\cT_s=(\cV_s,\cE_s)$. Note that, as a consequence of the simplifying steps above, it is still possible to connect any sample that was discarded from the original {\texttt{RRT$^*$}}{} to the simplified tree with a straight line, which suggests that the simplified tree will be a good road-map representation \cite{Choset:book05} of the free configuration space reachable from the root (up to the effective resolution given by the original sampling). Given the simplified tree $\cT_s=(\cV_s,\cE_s)$, for each node $i\in \cV_s$ in the tree, we define a convex cell $\cX_{ij}$ similar to \cite{Mahroo2}, such that the boundaries of $\cX_{ij}$ are the bisector hyperplanes between node $i$ and the other nodes in the tree, except for node $j$, the parent of node $i$.
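A minimal sketch of this cell construction (in Python, with hypothetical node coordinates; not the authors' implementation) builds $\cX_{ij}$ as an intersection of bisector half-planes $\norm{x-v_i}\leq\norm{x-v_k}$, skipping the parent $j$:

```python
import numpy as np

def cell_halfspaces(nodes, i, j):
    """Half-plane representation {x : A x <= b} of the Voronoi-like cell
    X_ij around node i: one bisector per node k != i, except the parent j.
    ||x - v_i|| <= ||x - v_k||  <=>  2 (v_k - v_i)^T x <= ||v_k||^2 - ||v_i||^2.
    """
    vi = nodes[i]
    A, b = [], []
    for k, vk in enumerate(nodes):
        if k in (i, j):
            continue
        A.append(2.0 * (vk - vi))
        b.append(vk @ vk - vi @ vi)
    return np.array(A), np.array(b)

# Hypothetical simplified-tree vertices; node 1's parent is node 0.
nodes = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
         np.array([2.0, 1.0]), np.array([1.0, -2.0])]
A, b = cell_halfspaces(nodes, i=1, j=0)

inside = lambda x: np.all(A @ x <= b + 1e-12)
print(inside(nodes[1]))                 # the node itself lies in its cell
print(inside(nodes[2]))                 # another tree vertex does not
print(inside(np.array([0.2, 0.0])))     # points toward the parent stay included
```

The last check illustrates the remark below: since the bisector with the parent $j$ is omitted, the cell extends past it toward $j$.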
The polyhedron $\cX_{ij}$ is similar to a Voronoi region~\cite{latombe2012robot} (see Fig.~\ref{fig:CBF-X} for an example). Note that $\cX_{ij}$ contains all the points that are closer to $i$ than to any other vertex in $\cT_s$, but it also includes the parent~$j$. \begin{figure}[t] \centering \includegraphics[width=5.5cm]{figures/tikz/fig2} \caption{The Voronoi-like region $\cX_{ij}$ for edge $(i,j)$, and the corresponding CBF constraints; green points and arrows: other vertices and edges in the tree $\cT_s$; red points: collision samples in $\cV_{\text{collision}}$ that are inside $\cX_{ij}$; black line: boundaries of the configuration space; dashed lines: boundaries of the convex set $\cX_{ij}$; red dotted lines: CBF constraints $h_i$. }\label{fig:CBF-X} \end{figure} The environment is implicitly represented by a simplified tree graph $\cT_s=(\cV_s,\cE_s)$ via sampling. Based on the simplified tree, the environment is decomposed into a set of convex cells. Note that the landmarks $l_i$, from the point of view of our algorithms, can be arbitrary as long as there are at least two landmarks visible from any point $x$ in the free configuration space. The landmarks do not need to be chosen from the samples of the {\texttt{RRT$^*$}}{} algorithm, or from the obstacles. \subsection{Control Lyapunov and Barrier Functions (CLF, CBF)}\label{sec:ECBF} In this section, we review the CLF and CBF constraints, which are differential inequalities that ensure stability and safety (set invariance), respectively, of a control signal $u$ with respect to the dynamics \eqref{sys1}. First, it is necessary to review the following. \begin{definition} Given a sufficiently smooth function $h:\real{n}\to\real{}$ and a vector field $f:\real{n}\to\real{n}$, we use the notation $L_fh=\nabla_x h^\mathrm{T} f$ to denote the Lie derivative of $h$ along $f$, where $\nabla h$ represents the gradient field of $h$.
\end{definition} The Lie derivative of a differentiable function $h(x)$ for the dynamics \eqref{sys1} with respect to the vector field $B$ is denoted as $\cL_{B}h(x)$. Applying this definition to \eqref{sys1} we obtain \begin{equation}\label{Lie} \cL_Bh(x) = \frac{\partial h(x)}{\partial x} Bu. \end{equation} In this work, we assume that Lie derivatives of $h(x)$ of the first order are sufficient \cite{Isidori:book95} (i.e., $h(x)$ has relative degree $1$ with respect to the dynamics \eqref{sys1}); however, the results could be extended to higher relative degrees, as discussed in \cite{Mahroo}. We now pass to the definition of the differential constraints. Consider a continuously differentiable function $V(x):\cX\to\real{}$, $V(x)\geq 0$ for all $x\in\cX$, with $V(x)=0$ for some $x\in\cX$. \begin{definition}\label{def:ECLF} The function $V(x)$ is a \textit{Control Lyapunov Function} (CLF) with respect to \eqref{sys1} if there exist a positive constant $c_v$ and control inputs $u\in \cU$ such that \begin{equation}\label{cons:clf} \begin{aligned} \cL_BV(x)u+c_v V(x)\leq 0,\forall x \in \cX. \end{aligned} \end{equation} Furthermore, \eqref{cons:clf} implies that $\lim_{t\to\infty}V(x(t))=0$. \end{definition} Consider a continuously differentiable function $h(x):\cX\to\real{}$ which defines a safe set $\cC$ such that \begin{equation}\label{set_c} \begin{aligned} \cC&=\{x\in \real{n}|\;h(x)\geq0\},\\ \partial \cC&=\{x\in \real{n}|\;h(x)=0\},\\ {Int}(\cC)&=\{x\in \real{n}|\;h(x)>0\}. \end{aligned} \end{equation} In our setting, the set $\cC$ will represent a convex local approximation of the free configuration space (in the sense that $\cC$ does not contain any sample that was found to be in collision). We say that the set $\cC$ is \emph{forward invariant} (also said \emph{positive invariant} \cite{positiveInvariant}) if $x(t_0) \in \cC$ implies $x(t)\in \cC$, for all $t\geq 0$~\cite{zcbf1}.
\begin{definition}[CBF, \cite{nguyen2016exponential}]\label{def:ECBF} The function $h(x)$ is a \emph{Control Barrier Function} with respect to \eqref{sys1} if there exist a positive constant $c_h$, control inputs $u\in \cU$, and a set $\cC$ such that \begin{equation}\label{cons:cbf} \cL_Bh(x)u+c_hh(x)\geq 0,\forall x \in \cC. \end{equation} Furthermore, \eqref{cons:cbf} implies that the set $\cC$ is forward invariant. \end{definition} \subsection{Stability by CLF}\label{sec:tree-CLF} To stabilize the navigation along an edge of the tree, we define the Lyapunov function $V_{ij}(x)$ as \begin{equation}\label{V} V_{ij}(x)=z_{ij}^T (x-x_{j}), \end{equation} where $z_{ij}= \frac{x_j-x_i}{\norm{x_j-x_i}}$ is the unit vector of the \text{\emph{exit direction}} for edge $(i,j)$, $x_{j} \in \text{\emph{exit face}}$ is the position of the parent of node $i$, and $V_{ij}(x)$ reaches its minimum $V(x)=0$ at $x_j$. Note that the Lyapunov function represents, up to a constant, the distance $d(x,x_j)$ between the current system position and the exit face. By Definition~\ref{def:ECLF}, $V_{ij}(x)$ is a CLF. \subsection{Safety by CBF}\label{sec:tree-CBF} In this section, we construct barrier functions $h_{ij}(x)$ that define a cone representing a local convex approximation of the free space between $i$ and $j$, in the sense that it excludes all samples in $\cV$ that are on the way from $i$ to $j$. In particular, we use the following steps: \begin{enumerate} \item Define the set $\cO_{ij}\subset\cV_{\text{collision}}$ of collision samples that lie in $\cX_{ij}$. \item From the set $\cO_{ij}$, choose two samples $\{o_{u},o_{d}\}$ on the two sides of the edge $(i,j)$ such that they are the closest samples in $\cO_{ij}$ to the edge $(i,j)$. \item Write the equations of the two lines passing through $\{j,o_{u}\}$ and $\{j,o_{d}\}$ in matrix form using $A_{h_{ij}}\in \real{2\times 2}$, $b_{h_{ij}}\in \real{2}$ to define the invariant set \begin{equation} \cC_{ij}=\{x\in\real{2}:A_{h_{ij}}\vct{x}+b_{h_{ij}}>0\}.
\end{equation} \end{enumerate} The corresponding CBF is then defined as \begin{equation}\label{h} h_{ij}(x)=A_{h_{ij}}x+b_{h_{ij}}. \end{equation} An example of the set $\cC_{ij}$ is shown in Fig.~\ref{fig:CBF-X}. Note that the region $\cC_{ij}$ might not include the entire cell $\cX_{ij}$. However, the controller will be designed to satisfy the CBF and CLF constraints over the entire cell $\cX_{ij}$~\cite{Mahroo2}. \subsection{Output Feedback Controller}\label{sec:tree-controller} We assume that the robot can only measure the relative displacements between the robot's position $x$ and the landmarks in the environment, which corresponds to the output function \begin{equation}\label{vec-landmarks} \cY=(L-x\vct{1}^\mathrm{T})^\vee=L^\vee-\cI x=\stack{(l_i- x)}, \end{equation} where $L\in\real{n\times n_l}$ is a matrix of landmark locations $l_i$, $i=1,\hdots,n_l$, where $n_l$ is the number of landmarks, $A^\vee$ represents the vectorized version of a matrix $A$, $\cI=\vct{1}_{n_l} \otimes I_n$, and $\otimes$ is the Kronecker product. We define the feedback controller as \begin{equation}\label{u} u_{ij}=K_{ij}\cY, \end{equation} where $K_{ij} \in \real{m\times nn_l}$ are the feedback gains that need to be found for each cell $\cX_{ij}$. In \eqref{u}, the input to the controller is the set of relative displacement measurements between the position of the robot and the landmarks in the environment. The controller in \eqref{u} is a weighted linear combination of the measured displacements $\cY$. The goal is to design $u$ such that the system is driven toward the exit direction $z_{ij}$ while avoiding obstacles.
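To illustrate the structure of \eqref{u}, the following sketch (Python, with a hypothetical single-cell setup, $B=I$, and gains hand-built from barycentric coordinates rather than obtained from the LP of \cite{Mahroo2}) assembles $\cY$, applies $u=K\cY$, and verifies by forward simulation that the closed loop converges to the exit point $x_j$:

```python
import numpy as np

# Hypothetical 2-D cell with exit point x_j and three known landmarks.
l = [np.array([2.0, 0.5]), np.array([0.5, 2.0]), np.array([-1.0, -1.0])]
x_j = np.array([1.0, 0.5])

# Hand-built gains: pick weights c_i with sum(c_i) = 1 and sum(c_i l_i) = x_j,
# so that u = K Y = sum_i c_i (l_i - x) = x_j - x (a simple stabilizing choice;
# the paper instead obtains K from the CLF/CBF feasibility problem).
M = np.vstack([np.ones(3), np.column_stack(l)])
c = np.linalg.solve(M, np.hstack([1.0, x_j]))
K = np.hstack([ci * np.eye(2) for ci in c])      # K in R^{2 x 6}

def Y(x):
    """Stacked relative displacement measurements (output function)."""
    return np.concatenate([li - x for li in l])

# Euler simulation of xdot = B u with B = I and u = K Y(x).
x = np.array([0.2, 0.2])
for _ in range(200):
    x = x + 0.05 * (K @ Y(x))
print(np.linalg.norm(x - x_j))   # close to 0: the cell's exit point is reached
```

Note that the robot never uses its own position directly: the state only enters through the measured displacements $l_i-x$.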
Assuming that the controller is of the form \eqref{u} and the output function is \eqref{vec-landmarks}, following the approach of \cite{Mahroo,Mahroo2}, and using the CLF-CBF constraints reviewed in Sec.~\ref{sec:background}, we encode our goal in the following feasibility problem: \begin{equation}\label{findK} \begin{aligned} & \textrm{find} \;\;{K_{ij}}\\ & \textrm{subject to:}\\ &\text{CBF:}\;-( \cL_Bh_{ij}(x)u+c^\mathrm{T}_hh_{ij}(x))\leq 0,\\ &\text{CLF:}\;\;\cL_BV_{ij}(x)u+c^\mathrm{T}_v{V_{ij}}(x)\leq 0,\\ &u\in\cU,\;\;\forall x\in \cX_{ij},\;\;(i,j)\in\cE. \end{aligned} \end{equation} The constraints in \eqref{findK} need to be satisfied for all $x$ in the region $\cX_{ij}$, i.e., the same control gains should satisfy the CLF-CBF constraints at every point in the region. In \cite{Mahroo2}, this problem is solved by rewriting \eqref{findK} using a min-max formulation and a Linear Programming (LP) method. Starting from a point $x\in\cX_{ij}$, $u_{ij}$ drives the robot toward $x_j$; the robot then switches its controller to $u_{jq}$ when $x$ reaches the exit face of $\cX_{ij}$. \section{FEEDBACK CONTROL PLANNING WITH BEARING MEASUREMENTS}\label{sec:problem-setup} As mentioned in the introduction, in this paper we consider a robot equipped with a monocular camera that, in general, does not provide the depth of a target object in the image, and instead measures the corresponding relative bearing (Sec.~\ref{sec:bearing}). In this section, we show that, in this case, it is still possible to use the control synthesis method of Sec.~\ref{sec:tree-controller} after rescaling the bearing measurements, such that they are similar to the ideal displacement measurements. We divide the section into three parts.
First, we give details on the rescaling procedure; then, we show that the resulting bearing controller still solves the path planning problem, albeit with modified CLF and CBF conditions; finally, we discuss how to handle the practical problem of a camera with a limited field of view. \subsection{Rescaling of the Bearing Measurements}\label{sec:rescaling} For simplicity, in this section we consider only 2-D environments (however, an extension to the 3-D case is possible with minor modifications). Without loss of generality, we assume that the robot position $x$ is equal to the zero vector (if not, all the computations below are still valid after introducing appropriate shifts by $x$). The bearings $\{\beta_i\}$ can be identified with points on a unitary circle centered at the origin (Fig.~\ref{bearing}). For the current cell (and hence its associated controller) we pick a \emph{fixed} landmark $f$ among those available. We then define $\tilde{l}_f$ as the point on the circle corresponding to $\beta_f$. Our goal is to rescale all the other bearings $i\neq f$ such that they are similar to the full displacements. For this purpose, we define \emph{inter-landmark} bearing directions between landmark $f$ and every other landmark $i\neq f$ as \begin{equation} \beta_i^f= d_i(l_f)^{-1}(l_i-l_f); \end{equation} note that these bearings can be pre-computed from the known locations of the landmarks in the map. For each landmark $i\neq f$, let $P_i^f$ be the line passing through the fixed landmark $\tilde{l}_f$ with direction $\beta_i^f$, and let $P_i$ be the line passing through the robot position $x$ with direction $\beta_i$. We then define the scaled landmark position $\tilde{l}_i$ as the intersection between $P_i^f$ and $P_i$. \begin{lemma} Let $s(x)=d_f(x)$ be the distance between landmark $f$ and the robot; then, we have that $l_i-x=s(x) (\tilde{l}_i-x)$ for each landmark $i$.
\end{lemma} \begin{proof} The triangles $l_f,x,l_i$ and $\tilde{l}_f,x,\tilde{l}_i$ are similar, since they have identical internal angles. Moreover, since $\norm{\tilde{l}_f-x}=1$ by construction, the ratio between the segments $l_f,x$ and $\tilde{l}_f,x$ is equal to $s(x)$. Combining these two facts, we have that the ratio between the segments $l_i,x$ and $\tilde{l}_i,x$ is also $s(x)$; the claim then follows. \end{proof} Our proposed solution is then to compute the scaled displacements $\tilde{\cY}=\stack{(\tilde{l}_i-x)}$, which are then used with the pre-computed controller \begin{equation}\label{eq:u tilde} \tilde{u}_{ij}(x)=K_{ij}\tilde{\cY}. \end{equation} \subsection{Analysis of the Bearing Controller}% \label{sec:bearing_measurement} The following proposition shows that the original displacement-based controller $u_{ij}$ and our proposed bearing-based controller $\tilde{u}_{ij}$ are essentially equivalent from the point of view of path planning. \begin{proposition}\label{prop:speed reparametrization} Assume $s(x)$ is uniformly upper bounded (i.e., $s(x)<\infty$ for all $x\in \cX_{ij}$). The controllers $u_{ij}$ in \eqref{u} and $\tilde{u}_{ij}$ in \eqref{eq:u tilde} produce the same paths (but traced, in general, with different speeds) for the driftless system \eqref{sys1} when started from the same initial condition. \end{proposition} \begin{proof} Let $x$ and $\tilde{x}$ be the trajectories of the system under $u_{ij}$ and $\tilde{u}_{ij}$, respectively. Since both the dynamics and the controllers are linear, we have that $\dot{x}=s\dot{\tilde{x}}$ when evaluated at the same location. This implies that the two curves $x$ and $\tilde{x}$ are the same up to a reparametrization of the velocity. \end{proof} In fact, we can also relate the new controller to the conditions in the synthesis problem~\eqref{findK}. \begin{proposition}\label{prop:new CBF CLF conditions} Assume that $s_{\min}<s(x)\leq s_{\max}$, and that $\tilde{u}=s(x)^{-1}K_{ij}\cY\in \cU$ for all $x\in \cX_{ij}$.
Then $K_{ij}$ is a feasible solution for \eqref{findK} with the modified CLF and CBF conditions: \begin{align} -( \cL_Bh_{ij}\tilde{u}+\tilde{c}_hh_{ij})&\leq 0, \\ \cL_BV_{ij}\tilde{u}+\tilde{c}_vV_{ij}&\leq 0, \end{align} where $\tilde{c}_h=s_{\min}^{-1} c_h$ and $\tilde{c}_v=s_{\max}^{-1} c_v$. \end{proposition} \begin{proof} The claim follows by dividing the original CBF and CLF conditions by $s(x)$, and then using the bounds $s_{\min}$, $s_{\max}$. \end{proof} Note that the bounds on $s(x)$ translate to bounds on the distance between the cell $\cX_{ij}$ and the fixed landmark $l_f$. Taken together, Propositions~\ref{prop:speed reparametrization} and~\ref{prop:new CBF CLF conditions} show that the controller $K_{ij}$ found by assuming a displacement-based controller can be used also for the bearing-based case, although the speed of the resulting trajectories might be more aggressive. \subsection{Control With a Limited Field of View}\label{sec:limited-field-of-view} In the formulation of problem~\eqref{findK}, it is implicitly assumed that the controller has access to all the landmark measurements at all times. However, in practice, a robot will only be able to detect a subset of the landmarks due to a limited field of view or environment occlusions (see Fig.~\ref{FOV}). To tackle this issue, we show in this section that the controller $\tilde{u}$ in \eqref{eq:u tilde} can be designed using all available landmarks (as in the preceding section), but then implemented using only two (or more) visible landmarks. The main idea is to use the two available landmarks to estimate the missing landmarks $\tilde{l}_i$ through the following steps (see also Fig.~\ref{LFOV}): \begin{enumerate} \item Choose one landmark $f$ as the fixed landmark, and denote as $f'$ the other landmark (in the figure, $f=2$ and $f'=1$). Compute $\beta_{i}^f$ and $\beta_i^{f'}$ for all $i\notin \{f,f'\}$.
\item Following the same steps as in Sec.~\ref{sec:rescaling}, compute $\tilde{l}_f$ (from the bearing $\beta_f$) and $\tilde{l}_{f'}$ (via intersection). \item Let $P_i^f$ and $P_i^{f'}$ be the lines passing through $\tilde{l}_f$ and $\tilde{l}_{f'}$ in the directions $\beta_i^f$ and $\beta_i^{f'}$, respectively. \item Compute the non-visible modified landmark $\tilde{l}_i$ from the intersection of $P_i^f$ and $P_i^{f'}$. \end{enumerate} After finding the modified bearings for all landmarks, the controller is implemented as in~Sec.~\ref{sec:rescaling}. \begin{figure} \centering \subfloat[]{\label{LFOV}{\includegraphics[width=5.5cm]{figures/tikz/fig3a.PNG}}} \hfill \subfloat[]{\label{FOV}{\includegraphics[width=2.8cm]{figures/tikz/fig3b.PNG}}} \caption{Implementing the controller with a limited field of view. In Fig.~\ref{LFOV}, landmark $l_3$ is not visible from the robot camera. In Fig.~\ref{FOV}, the modified observation $\tilde{l}_3$ is computed using the relative bearing measurements between landmarks.} \label{fig:my_label} \end{figure} \section{SIMULATION AND EXPERIMENTAL RESULTS} To assess the effectiveness of the proposed algorithm, we ran a set of validations using both MATLAB simulations and experiments using ROS on a Jackal robot by Clearpath Robotics~\cite{Clearpath}. While the optimization problem guarantees exponential convergence of the robot to the stabilization point, in these experiments the velocity control input $u$ has been normalized to achieve constant velocities along the edges of the tree.
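Both the rescaling of Sec.~\ref{sec:rescaling} and the reconstruction steps above reduce to intersecting pairs of lines with known points and directions. The sketch below (Python, with hypothetical coordinates) implements that intersection and numerically verifies the similar-triangles relation $l_i-x=s(x)(\tilde{l}_i-x)$ from the lemma:

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    """Intersection of lines p1 + t d1 and p2 + s d2 (directions not parallel)."""
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t * d1

# Hypothetical robot position and landmarks; f is the fixed landmark.
x = np.array([0.3, -0.2])
l_f, l_i = np.array([2.0, 1.0]), np.array([-1.0, 3.0])

s = np.linalg.norm(l_f - x)                         # s(x) = d_f(x)
beta_f = (l_f - x) / s                              # measured bearing to f
beta_i = (l_i - x) / np.linalg.norm(l_i - x)        # measured bearing to i
beta_i_f = (l_i - l_f) / np.linalg.norm(l_i - l_f)  # precomputed from the map

tilde_l_f = x + beta_f                              # point on the unit circle
# P_i^f through tilde_l_f with direction beta_i^f; P_i through x with beta_i
tilde_l_i = intersect(tilde_l_f, beta_i_f, x, beta_i)

# Similar-triangles relation from the lemma: l_i - x = s(x) (tilde_l_i - x)
print(np.allclose(l_i - x, s * (tilde_l_i - x)))    # True
```

For the limited-field-of-view case, the same `intersect` helper is reused with the two lines $P_i^f$ and $P_i^{f'}$ to recover a landmark that is outside the camera frustum.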
\begin{figure}[t] \newcommand{\inpic}[1]{\includegraphics[width=4cm]{#1}} \centering \subfloat[Configuration space]{ \inpic{figures/MATLAB_experiment/fig4a.PNG} \label{fig:Cspace} } \subfloat[Generated RRT*]{ \inpic{figures/MATLAB_experiment/fig4b.PNG} \label{fig:RRT} }% \subfloat[Simplified tree]{ \inpic{figures/MATLAB_experiment/fig4c.PNG} \label{fig:RRT_simple} } \subfloat[Simulation]{ \inpic{figures/MATLAB_experiment/fig4d.PNG} \label{fig:Path} }% \caption{Fig.~\ref{fig:Cspace} shows the configuration space of a kitchen-like environment, where obstacles are represented in gray. In Fig.~\ref{fig:RRT}, the root for {\texttt{RRT$^*$}}{} is located at the origin, red dots show the samples in collision with the obstacles, and the tree generated by {\texttt{RRT$^*$}}{} is plotted in green. Fig.~\ref{fig:RRT_simple} depicts the corresponding simplified tree. In Fig.~\ref{fig:Path}, the agent is initialized at different start points and moves toward the goal point.} \label{fig:exp} \end{figure} \subsection{MATLAB Simulation} The simulated MATLAB environment is presented in Fig.~\ref{fig:exp}, where the obstacles are represented in gray. For {\texttt{RRT$^*$}}{}, we set the maximum number of iterations to $1500$, choose $\eta=50$, and place the root of the tree at the origin of the environment (bottom left corner). The tree generated by {\texttt{RRT$^*$}}{} and its simplified form are shown in Fig.~\ref{fig:RRT} and Fig.~\ref{fig:RRT_simple}, respectively. Then, we compute a controller for each edge of the simplified tree as described in Sec.~\ref{sec:tree-controller}. Fig.~\ref{fig:Path} shows the resulting trajectories from four initial positions. In all cases, the robot reaches the desired goal location by applying the sequence of controllers found during planning.
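The per-edge gain synthesis used in these experiments can be sketched as follows (Python with \texttt{scipy}; a hypothetical square cell, one barrier half-plane, box input bounds, and a sign convention $V=z^{\mathrm{T}}(x_j-x)$ chosen so that $V\geq 0$; this is a simplified stand-in for the robust min-max LP of \cite{Mahroo2}). It exploits the fact that the CLF, CBF, and input constraints are affine in $x$ and in the entries of $K_{ij}$, so enforcing them at the vertices of a polytopic cell enforces them over the whole cell:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: B = I, unit-square cell, exit point/direction, three
# landmarks, one CBF half-plane h(x) = a_h . x + b_h >= 0, |u|_inf <= 2.
l = [np.array([2.0, 0.5]), np.array([0.5, 2.0]), np.array([-1.0, -1.0])]
verts = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]
x_j, z = np.array([1.0, 0.5]), np.array([1.0, 0.0])   # exit point / direction
a_h, b_h, c_v, c_h, u_max = np.array([0.0, 1.0]), 0.1, 0.5, 1.0, 2.0

def Y(x):
    return np.concatenate([li - x for li in l])

def row(y, a):
    # a^T K y is linear in vec(K) (column-major): a^T K y = kron(y, a) . vec(K)
    return np.kron(y, a)

A_ub, b_ub = [], []
for x in verts:                 # enforce the affine constraints at the vertices
    y = Y(x)
    # CLF (toy convention V = z.(x_j - x)):  -z.Ky + c_v z.(x_j - x) <= 0
    A_ub.append(-row(y, z)); b_ub.append(-c_v * (z @ (x_j - x)))
    # CBF:  a_h.Ky + c_h (a_h.x + b_h) >= 0
    A_ub.append(-row(y, a_h)); b_ub.append(c_h * (a_h @ x + b_h))
    # input bounds |u|_inf <= u_max
    for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        A_ub.append(row(y, e)); b_ub.append(u_max)
        A_ub.append(-row(y, e)); b_ub.append(u_max)

res = linprog(np.zeros(12), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 12)
print(res.status)               # 0: a feasible gain K exists for this cell
K = res.x.reshape(2, 6, order="F")
```

Since every constraint function is affine in $x$ for a fixed gain, feasibility at the four vertices implies feasibility on the whole square.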
\begin{figure}[t] \centering \subfloat[Clearpath Robotics Jackal]{\label{fig:robot}{\includegraphics[width=3.5cm]{figures/Lab_experiment/robot.jpeg} }} \subfloat[AprilTag]{{\label{fig:apriltag-figure}\includegraphics[width=2cm]{figures/Lab_experiment/aprilTag.jpg} }}% \caption{The Jackal robot used for the experiments is shown in Fig.~\ref{fig:robot}. We use AprilTags (Fig.~\ref{fig:apriltag-figure}) as the landmarks for the algorithm.} \label{fig:hardware} \end{figure} \begin{figure} \centering {\includegraphics[width=4cm]{figures/Lab_experiment/Lab_experiment.JPG} } \caption{Experimental environment.} \label{fig:original-env} \end{figure} \subsection{Clearpath Robotics Jackal Experiment} We used the Jackal robot \cite{Clearpath} to test our algorithm in a lab environment. A bird's-eye view of the environment is presented in Fig.~\ref{fig:original-env}. We equipped the Jackal with an Intel RealSense D455 camera \cite{intel}. The Jackal's trajectory is tracked by the OptiTrack motion capture system (with 44 infrared cameras), and we use AprilTags \cite{Wang2016} with unique codes as the landmark fiducials, which are placed in the environment at known positions and orientations with respect to the inertial reference frame of OptiTrack. The data was managed by the Robot Operating System (ROS) \cite{ROS}. The experimental implementation of our algorithm requires taking into account several practical considerations. First, due to the limited field of view of the RealSense camera, which prevents the Jackal from detecting all required fiducials at once, we use the method proposed in Sec.~\ref{sec:limited-field-of-view} to compute the controller based on two fiducials detected by the camera at each time instant. Second, all bearing measurements described in previous sections are assumed to be rotationally aligned with the global inertial reference frame.
To accommodate this assumption, we compute the global rotation of the Jackal with respect to the world reference frame as \begin{equation} ^W\mathbf{R}_{J} = {}^W\mathbf{R}_{AT} \left({}^{J}\mathbf{R}_{AT}\right)^{-1}, \end{equation} where ${}^{J}\mathbf{R}_{AT}$ is the measured rotation of the AprilTag with respect to the Jackal. Then, a normalized bearing ${}^J\mathbf{t}_{AT-J}$ detected by the Jackal can be aligned with the global reference frame as ${}^W\mathbf{t}_{AT-J} = {}^W\mathbf{R}_{J} {}^J\mathbf{t}_{AT-J}$. Finally, previous sections assumed a linear dynamical model for the robot, while the Jackal has unicycle dynamics. We map the original 2D input $u$ to a linear velocity $u_x$ along the $x$ axis of the robot and an angular velocity $\omega_z$ around the $z$ axis using a standard low-level controller: \begin{align} u_x = \small{\dfrac{\alpha}{\left\| u \right\|} \begin{bmatrix} \cos{\varphi} \\ \sin{\varphi} \end{bmatrix}}^\mathrm{T} u,&& \omega_z = \small{\frac{\beta}{\left\| u \right\|} \bmat{0\\0\\1}^\mathrm{T} \left( \begin{bmatrix} \cos{\varphi} \\ \sin{\varphi} \\ 0 \end{bmatrix} \times \begin{bmatrix} u \\ 0 \end{bmatrix} \right)} \end{align} where $\varphi$ is the instantaneous yaw rotation of the robot with respect to the world reference frame, which is extracted from ${}^{C}\mathbf{R}_{AT}$ and ${}^{W}\mathbf{R}_{AT}$, $\times$ represents the 3-D cross product, and $\alpha$ and $\beta$ are user-defined scalar gains, set to $0.1$ and $0.5$, respectively. We moved the Jackal along four different paths in the environment. The Jackal followed the edges of the simplified {\texttt{RRT$^*$}}{} tree and reached the expected goal from all starting positions, despite the fact that the measurements were obtained with vision alone, and despite the different dynamics of the robot. A video of the experiments is available at \url{https://github.com/Mahrooo/NavigationwithBearingMeasurements}.
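The low-level mapping from $u$ to $(u_x,\omega_z)$ above can be sketched as follows (a minimal implementation; the gains match the values $\alpha=0.1$, $\beta=0.5$ reported above, and $u=0$ must be excluded since the expressions divide by $\left\|u\right\|$):

```python
import numpy as np

ALPHA, BETA = 0.1, 0.5  # gains used in the experiments

def unicycle_inputs(u, phi, alpha=ALPHA, beta=BETA):
    """Map a planar velocity command u (world frame) to unicycle inputs.

    Returns (u_x, omega_z) following the expressions above: the projection
    of u on the heading gives the forward speed, and the z component of
    heading x u gives the turn rate.  Assumes u != 0."""
    u = np.asarray(u, dtype=float)
    norm = np.linalg.norm(u)
    heading = np.array([np.cos(phi), np.sin(phi)])
    u_x = alpha / norm * heading @ u
    # z component of the 3-D cross product of the heading with u
    omega_z = beta / norm * (np.cos(phi) * u[1] - np.sin(phi) * u[0])
    return u_x, omega_z

# Command aligned with the heading: pure forward motion at speed alpha.
ux, wz = unicycle_inputs([1.0, 0.0], phi=0.0)  # ux = 0.1, wz = 0.0
```

A command orthogonal to the heading gives $u_x=0$ and $\omega_z=\pm\beta$, i.e.\ a pure rotation toward the commanded direction.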
\section{CONCLUSIONS AND FUTURE WORK}\label{sec:conclusions} In this work, we proposed a novel approach to synthesize a set of output-feedback controllers on the elements of a cell decomposition of the environment, based on the tree generated by a simplified version of the sampling-based {\texttt{RRT$^*$}}{} method. We assumed that the robot is equipped with a monocular camera and that a global compass is available. We introduced a new way to modify the bearing measurements so that they correspond to a uniform scaling of the set of landmarks, and defined output-feedback controllers that take the modified bearing measurements as input. We then computed the output-feedback controllers by solving a robust Linear Program that enforces stability of the system through a CLF constraint and safety through a CBF constraint. We validated our approach both in simulation and in a lab experiment, demonstrating its robustness to mismatches in the dynamical model of the robot and to measurements acquired with a camera with a limited field of view. In future work, we plan to study the robustness of our algorithm to deformations of the environment. In addition, we plan to design output-feedback controllers under uncertainty in the bearing measurements. \bibliographystyle{IEEEtran}
\section{Introduction}\label{secI} In a previous work \cite{TV}, and related to the so-called inverse Galois problem, we raised the question of whether or not a given effective group action on a manifold is determined, or ``described'', by non-classic tensors in general, or more specifically, by vector fields. More precisely: Consider an effective action of a Lie group $G$ on an $m$-manifold $M$, thus we can think of $G$ as a subgroup of the group $\Diff(M)$ of diffeomorphisms of $M$. Given a vector field $X$ on $M$, we say $X$ is a {\it describing vector field} for the $G$-action if the following hold: \begin{enumerate}[label={\rm (\arabic{*})}] \item $X$ is complete and its flow $\zF_t$ commutes with the action of $G$; so $G\leq \Aut(X)$. \item\label{(2)} The group homomorphism $$\begin{array}{ccc} G\zpor{\mathbb R}&\rightarrow& \Aut(X)\\ (g,t)&\mapsto&g\zci\zF_{t} \end{array}$$ is an isomorphism. \end{enumerate} Notice that we compare $\Aut(X)$ with $G\zpor{\mathbb R}$ instead of $G$ since we always have to take into account the flow of $X$ (see Remark \ref{remA}). Within this setting, the main result in \cite{TV} shows that any finite group action on a connected manifold admits a describing vector field. Here we extend this result to toric actions. \begin{theorem*} Consider an effective action of the torus $\mathbb T^n$ on a connected $m$-manifold $M$. Assume that $n\zmei m-1$. Then there exists a describing vector field for this action. \end{theorem*} The proof of Theorem A involves three steps. First, the result is established for free actions (Section \ref{secB}) and then extended to effective ones (Section \ref{secC}), in both cases assuming $m-n\zmai 2$. Finally, the case $m-n=1$ is considered in Section \ref{secD}. \begin{remark}\label{remA} {\rm Notice that according to Proposition \ref{proAA}, if the $\mathbb T^n$-action on $M^m$ is effective then $n\zmei m$.
On one hand, when $n=m$, we can make the identification $M=\mathbb T^n$ endowed with the natural $\mathbb T^n$-action. In this case if $X$ is the fundamental vector field associated to a dense affine vector field on $\mathbb T^n$ (see Section \ref{secA}), then $\Aut(X)=\mathbb T^n$. On the other hand, when $n<m$ no complete vector field $X$ on $M$ verifies $\Aut(X)=\mathbb T^n$. Indeed, assume $\Aut(X)=\mathbb T^n$, and let $X_1 ,\dots,X_n$ be a basis of the Lie algebra of fundamental vector fields and $f_1 ,\dots,f_n$ any $\mathbb T^n$-invariant functions. Then $[X_r ,\zsu_{j=1}^n f_j X_j ]=0$, $r=1,\dots, n$; so the flow of $\zsu_{j=1}^n f_j X_j$ commutes with the action of $\mathbb T^n$ and, as every element of the flow of $X$ belongs to $\Aut(X)$, with this flow too. That is to say, the flow of $\zsu_{j=1}^n f_j X_j$ is included in $\Aut(X)$ and, necessarily, $\Aut(X)\znoi \mathbb T^n$, a {\it contradiction}. Thus our result is ``minimal'' because the flow of $X$ is always included in $\Aut(X)$.} \end{remark} \begin{remark}\label{remB} {\rm Finally, notice that Theorem A cannot be extended to a general compact Lie group. In Section \ref{secE} we construct effective actions of $SO(3)$ (Example \ref{ejeB}), and of a non-connected compact group of dimension two (Example \ref{ejeC}), for which there is no describing vector field. } \end{remark} \noindent \textbf{Terminology:} The reader is assumed to be familiar with our previous paper \cite{TV}. All structures and objects considered are real $C^{\zinf}$ and manifolds are without boundary, unless otherwise stated. For the general questions on Differential Geometry the reader is referred to \cite{KN} and for those on Differential Topology to \cite{HI}.
\medskip \noindent \textbf{Acknowledgements:} The authors would like to thank Prof.\ Arthur Wasserman for suggesting that our original result on finite group actions could be extended to $S^1$-actions, and for his helpful comments on the development of this work. \section{Preliminary results on vector fields}\label{secA} In this section we collect some results on vector fields that are needed in the following sections. On $\mathbb R^k$ we set coordinates $x=(x_1 ,\dots,x_k )$, and define $\zx=\zsu_{j=1}^{k}x_{j}\zpar/\zpar x_{j}$. \begin{lemma}\label{lemA} For any function $g\colon \mathbb R^k \zfl\mathbb R$ with $g(0)=0$ there is a function $f\colon \mathbb R^k \zfl\mathbb R$ such that $\zx\zpu f=g$. \end{lemma} \begin{proof} Although the proof of this result is routine, we outline it here. Let $\mathcal T$ denote the Taylor series of $g$ at $0\zpe\mathbb R^k$. Then there exists a series $\mathcal S$ such that formally $\zx\zpu\mathcal S=\mathcal T$. According to Borel's Theorem \cite[Theorem 1.5.4]{NA} there is a function $\zf$ whose Taylor series at the origin equals $\mathcal S$, and therefore, replacing $g$ with $g-\zx\zpu\zf$, we may suppose $\mathcal T=0$. Since $\zx$ is hyperbolic at the origin, following \cite[Theorem 10, page 38]{RRO} there exists a function $\tilde f\colon\mathbb R^k \zfl\mathbb R$ such that $g-\zx\zpu\tilde f$ vanishes on an open neighborhood $A$ of $0\zpe\mathbb R^k$. Therefore it suffices to show the result when $g_{\zbv A}=0$. But $\mathbb R^k -\{0\}$ can be identified to $S^{k-1}\zpor\mathbb R$ in such a way that $\zx=\zpar/\zpar t$ where $t$ is the variable in $\mathbb R$ and $S^{k-1}\zpor(-\zinf,1)$ corresponds to $B_\ze (0)-\{0\}\zco A$ for some radius $\ze>0$. Finally set $f=\zil_{0}^{t}g\,\mathrm{d}s$ on $S^{k-1}\zpor\mathbb R$ and $f(0)=0$.
\end{proof} Recall that a vector field $T$ on $\mathbb T^n$, endowed with coordinates $\zh=(\zh_1 ,\dots,\zh_n )$, is called {\it affine} if $T=\zsu_{r=1}^{n}a_{r}\zpar/\zpar \zh_{r}$ where $a_1 ,\dots,a_n \zpe \mathbb R$. Then, the trajectories of an affine vector field are dense if and only if $a_1 ,\dots,a_n$ are rationally independent; {\it in this case we say that $T$ is dense}. \begin{lemma}\label{lemB} On $\mathbb R^k \zpor \mathbb T^n$ with coordinates $(x,\zh)=(x_1 ,\dots,x_k ,\zh_1 ,\dots,\zh_n )$ consider the vector field $X=\zx+T$, where $T$ is dense. Then $\L_X$, the set of all vector fields on $\mathbb R^k \zpor \mathbb T^n$ which commute with $X$, is a Lie algebra of dimension $k^2 +n$ with basis $$\left\{ x_j {\frac {\zpar} {\zpar x_{\zlma}}},{\frac {\zpar} {\zpar \zh_r}}\right\},\,\, j,\zlma=1,\dots,k;\, r=1,\dots,n.$$ \end{lemma} \begin{proof} Let $Y\in\L_X$ be the vector field defined as $Y=\zsu_{j=1}^k f_j (x,\zh)\zpar/\zpar x_j +\zsu_{r=1}^n g_r (x,\zh)\zpar/\zpar \zh_r$, and let $\zF_t$ denote its flow. Since $\{0\}\zpor\mathbb T^n$ is compact, $\zF_t$ is defined on $(\{0\}\zpor\mathbb T^n )\zpor(-\ze,\ze)$ for some $\ze>0$. Observe that $\{0\}\zpor\mathbb T^n$ is the set of all points whose $X$-trajectory has compact adherence. Therefore $\zF_t (\{0\}\zpor\mathbb T^n )\zco\{0\}\zpor\mathbb T^n$ for any $t\zpe(-\ze,\ze)$, which implies that $Y$ is tangent to $\{0\}\zpor\mathbb T^n$, and therefore $f_j (0,\zh)=0$, for $j=1,\dots,k$. A computation, taking into account that every $\zpar/\zpar\zh_r$ commutes with $X$, yields $$[X,Y]=\widetilde Y +\zsu_{r=1}^n (X\zpu g_r )\zpar/\zpar \zh_r$$ where $\widetilde Y$ is a functional combination of $\zpar/\zpar x_i$'s. Therefore $X\zpu g_r =0$, i.e.\ $g_r$ is constant along the trajectories of $X$, for $r=1,\dots,n$.
But the $\za$-limit of these trajectories is $\{0\}\zpor\mathbb T^n$ and $g_r$ is constant on this set because $\{0\}\zpor\mathbb T^n$ is the adherence of the trajectory of any of its points, so each $g_r$ is constant on $\mathbb R^k \zpor \mathbb T^n$. By considering $Y-\zsu_{r=1}^n g_r \zpar/\zpar \zh_r$ one may suppose $Y=\zsu_{j=1}^k f_j (x,\zh)\zpar/\zpar x_j$ where each $f_j (\{0\}\zpor\mathbb T^n)=0$. Now from $[X,Y]=0$ it follows that $X\zpu f_j =f_j$, $j=1,\dots,k$. On the other hand given $f\colon\mathbb R^k \zpor\mathbb T^n \zfl\mathbb R$ such that $f(\{0\}\zpor\mathbb T^n)=0$ and $X\zpu f=f$, then $f$ does not depend on $\zh$ and it is linear on $x$. Indeed, if $k=0$ it is obvious; assume $k=1$ for the moment. Then $f=xg$ for some function $g$ since $f$ vanishes on $\{0\}\zpor\mathbb T^n$, and $X\zpu f=f$ becomes $X\zpu g=0$. Therefore $g$ is constant along the trajectories of $X$ and, for the same reason as before, constant on $\mathbb R\zpor\mathbb T^n$. Now suppose $k\zmai 2$. Let $E$ be any vector line in $\mathbb R^k$. As $X$ is tangent to $E\zpor\mathbb T^n$ and the restriction of $\zx$ to $E$ is still the radial vector field, $f\colon E\zpor\mathbb T^n \zfl\mathbb R$ is independent of $\zh$ and linear on $E$ (it is just the case $k=1$). Since the union of all the vector lines $E$ equals $\mathbb R^k$, it follows that $f$ does not depend on $\zh$ and $f\colon\mathbb R^k \zfl \mathbb R$ is linear on each $E$. But, as it is well known, this last property implies that $f\colon\mathbb R^k \zfl \mathbb R$ is linear. In short, every $f_j$ is linear on $x$ and independent of $\zh$. \end{proof} \begin{corollary}\label{cor_de_2.2} If $F\colon \mathbb R^k \zpor \mathbb T^n \zfl \mathbb R^k \zpor \mathbb T^n$ is an automorphism of $X=\zx+T$ and $T$ is dense, then there exist an isomorphism $\zf\colon\mathbb R^k \zfl\mathbb R^k$ and an element $\zl\zpe \mathbb T^n$ such that $F(x,\zh)=(\zf(x),\zh+\zl)$.
\end{corollary} \begin{proof} The diffeomorphism $F$ induces an isomorphism on $\L_X$, the Lie algebra described in Lemma~\ref{lemB}. Now observe that: \begin{enumerate} \item The only elements in $\L_X$ with singularities are the elements $\zsu_{j,\zlma=1}^k a_{j\zlma}x_j\zpar/\zpar x_{\zlma}$, where $(a_{j\zlma})\zpe\GL(k,\mathbb R)$. Besides they give rise to the foliation $d\zh_1 =\dots=d\zh_n =0$ (extend it to $\{0\}\zpor\mathbb T^n$ by continuity); so this foliation is an invariant of $F$. \item The center of $\L_X$ is spanned by $\zx, \zpar/\zpar\zh_1 , \dots,\zpar/\zpar\zh_n$. But the adherences of the trajectories of a vector field $b\zx+b_1 \zpar/\zpar\zh_1 +\dots+b_n \zpar/\zpar\zh_n$ are always tori if and only if $b=0$, so $F$ sends the Lie subalgebra spanned by $\zpar/\zpar\zh_1 ,\dots,\zpar/\zpar\zh_n$ into itself. \end{enumerate} These two facts imply that $F(x,\zh)=(\zf(x),\zq(\zh))$ where $\zq\colon\mathbb T^n \zfl\mathbb T^n$ is an affine transformation of $\mathbb T^n$; that is $$\zq(\zh)=\left(\zsu_{j=1}^{n}c_{1j}\zh_{j},\dots, \zsu_{j=1}^{n}c_{nj}\zh_{j}\right)+\zl$$ where $(c_{\zlma j})\zpe\GL(n,\mathbb Z)$ and $\zl\zpe\mathbb T^n$. As $X=T$ on $\{0\}\zpor\mathbb T^n$, which is an invariant of $F$, it follows that $(a_1 ,\dots,a_n )$ is an eigenvector of $(c_{\zlma j})$ whose eigenvalue equals 1. But in this case $a_1 ,\dots,a_n$ are rationally dependent unless $(c_{\zlma j})=Id$. In short $\zq(\zh)=\zh+\zl$. On the other hand $\zf\colon\mathbb R^k \zfl\mathbb R^k$ has to be an automorphism of $\zx$, which implies that $\zf$ is an isomorphism \cite[Lemma 3.4]{TV}. \end{proof} \begin{lemma}\label{lemC} On $\mathbb R^k \zpor \mathbb T^n$ one considers the vector field $\widetilde X =\tilde\zx +\widetilde T$ where $\tilde\zx=\zsu_{j=1}^{k}\tilde f_{j}(x)\zpar/\zpar x_{j}$ and $\widetilde T=\zsu_{r=1}^{n}\tilde g_{r}(x)\zpar/\zpar \zh_{r}$.
Assume that on $\mathbb R^k$ the following hold: \begin{enumerate}[label={\rm (\alph{*})}] \item $\tilde\zx$ is complete, \item $\tilde\zx(0)=0$ and its linear part at the origin is a positive multiple of the identity, \item the outset of the origin equals $\mathbb R^k$. \end{enumerate} Then there exists a self-diffeomorphism of $\mathbb R^k \zpor \mathbb T^n$ that commutes with the natural $\mathbb T^n$-action and transforms $\widetilde X$ into $$b\zx+\zsu_{r=1}^{n}b_{r} {\frac {\zpar} {\zpar \zh_r}}$$ where $b\zpe\mathbb R^+$ and $b_1 ,\dots,b_n \zpe\mathbb R$. \end{lemma} \begin{proof} By Sternberg's linearization theorem \cite{SST}, see \cite[Proposition 2.1]{TV}, there exists a diffeomorphism $f\colon\mathbb R^k \zfl\mathbb R^k$ transforming $\tilde\zx$ into $b\zsu_{j=1}^k x_j \zpar/\zpar x_j$ with $b>0$. Dividing by $b$ we may suppose $\tilde\zx=\zx$. By Lemma \ref{lemA} there are functions $\zf_1 ,\dots,\zf_n \colon\mathbb R^k \zfl\mathbb R$ such that $\zx\zpu\zf_r =\tilde g_r -\tilde g_r (0)$, $r=1,\dots,n$. Now, if $\zf=(\zf_1 ,\dots,\zf_n )$ and $\tilde\zp\colon\mathbb R^n \zfl\mathbb T^n$ is the canonical covering, then the diffeomorphism $F\colon \mathbb R^k \zpor \mathbb T^n \zfl \mathbb R^k \zpor \mathbb T^n$ given by $F(x,\zh) =(x,\zh-\tilde\zp\zci\zf)$, transforms $\widetilde X$ into $\zx+\zsu_{r=1}^n \tilde g_r (0)\zpar/\zpar\zh_r$. \end{proof} \begin{remark}\label{remB2} {\rm If $\widetilde X$ matches the hypotheses of Lemma \ref{lemC} and $h\colon \mathbb R^k \zfl \mathbb R$ is a positive bounded function, then $h\widetilde X$ satisfies these hypotheses too (when $h$ is regarded as a function on $\mathbb R^k \zpor \mathbb T^n$ in the obvious way). Therefore $h\widetilde X$ can be written as in Lemma \ref{lemC} for a suitable choice of coordinates, and thus the control of its automorphisms becomes simple.} \end{remark} \section{Free actions}\label{secB} In this section we assume there is a free $\mathbb T^n$-action on the connected $m$-manifold $M$, where $m-n\zmai 2$.
This gives rise to a principal fibre bundle $\zp\colon M\zfl B$ whose structure group is $\mathbb T^n$ and $B$ is a connected manifold of dimension $k=m-n\zmai 2$. Then, we construct a suitable vector field on $B$ that is later lifted to $M$ by means of a connection. In order to construct the vector field on $B$, we closely follow along the lines in \cite[Section 3]{TV} applied to the case of the trivial group action on $B$. Consider a Morse function $\zm\colon B\zfl{\mathbb R}$ that is proper and non-negative. Denote by $C$ the set of its critical points, which is closed and discrete, and therefore countable. As $B$ is paracompact, there exists a locally finite family $\{A_{p}\}_{p\zpe C}$ of disjoint open sets such that $p\zpe A_{p}$, $p\zpe C$. Following along the lines in \cite[Section 3]{TV}, there exists a Riemannian metric ${\tilde g}$ on $B$ such that the gradient vector field $Y$ of $\zm$ is complete and, besides, around each $p\zpe C$ there are coordinates $(x_{1},\ldots,x_{k})$ with $p\zeq 0$ and $Y=\zsu_{j=1}^{k}\zl_{j}x_{j}\zpar/\zpar x_{j}$, $\zl_1 ,\dots,\zl_k \zpe\mathbb R -\{0\}$, where \begin{enumerate}[label={\rm (\arabic{*})}] \item $\zl_1 =\dots=\zl_k >0$ if $p$ is a source of $Y$, that is a minimum of $\zm$, \item $\zl_1 =\dots=\zl_k <0$ if $p$ is a sink of $Y$ (a maximum of $\zm$), \item some $\zl_j$ are positive and the rest negative if $p$ is a saddle. \end{enumerate} Note that these properties still hold when $Y$ is multiplied by a positive bounded function, since they only depend on Sternberg's Theorem. Let $I$ be the set of local minima of $\zm$, that is the set of sources of $Y$, and $S_{i}$, $i\zpe I$, the outset of $i$ relative to $Y$. Now Lemma 3.3 of \cite{TV} becomes: \begin{lemma}\label{lemD} The family $\{ S_{i}\}_{i\zpe I}$ is locally finite and the set $\zung_{i\zpe I}S_{i}$ is dense in $B$.
\end{lemma} In what follows, and for technical reasons, one makes use of the notion of \emph{order of nullity} instead of \emph{chain}. More exactly, for every $i\zpe I$ we choose a subset $P_i$ of $A_i$ with $k+1$ points close enough to $i$ but different from it, in such a way that the linear $\za$-limits of their trajectories are in general position (see \cite[pp.\ 319 and 320]{TV} for definitions). Set $P=\zung_{i\zpe I}P_i$. Consider an injective map $p\zpe P\mapsto n_p \zpe\mathbb N -\{0\}$. Let $\mathbb N'$ be the image of $P$. By definition a differentiable object has {\it order $r$ at a point $q$} if its $(r-1)$-jet at this point vanishes but its $r$-jet does not; for instance $Y$ has order one at sources, sinks and saddles. Since $\{A_i \}_{i\zpe I}$ is still locally finite, one may construct a bounded function $\zt\colon B\zfl\mathbb R$ such that $\zt$ is positive on $B-P$ and has order $2n_p$ at every $p\zpe P$. Put $Z=\zt Y$. Then $Z^{-1}(0)=Y^{-1}(0)\zun P$; that is, the singularities of $Z$ are the sources, sinks and saddles of $Y$ plus the points of $P$, which we call {\it artificial singularities} and whose order is even and $\zmai 2$. Note that two different artificial singularities have different orders. Let $R_i$, $i\zpe I$, be the $Z$-outset of $i$. As $S_i -R_i$ is the union of $k+1$ half-trajectories of $Y$, one has: \begin{lemma}\label{lemE} The family $\{ R_{i}\}_{i\zpe I}$ is locally finite and the set $\zung_{i\zpe I}R_{i}$ is dense in $B$. \end{lemma} On the principal fibre bundle $\zp\colon M\zfl B$ consider a connection $\mathcal C$ which is a product around every fibre $\zp^{-1}(p)$, $p\zpe C$ (that is, there exist an open set $p\zpe A\zco B$ and a fiber bundle isomorphism between $\zp\colon \zp^{-1}(A)\zfl A$ and $\zp_1 \colon A\zpor\mathbb T^n \zfl A$ in such a way that $\mathcal C$, regarded on $\zp_1 \colon A\zpor\mathbb T^n \zfl A$, is given by $\mathcal C (q,\zh)=T_q A\zpor \{0\}\zco T_{(q,\zh)}(A\zpor\mathbb T^n )$).
This kind of connection always exists because $\{ A_{p}\}_{p\zpe C}$ is locally finite. Let $Y'$ denote the lift of $Y$ to $M$ by means of $\mathcal C$; that is, $Y'(u)\zpe\mathcal C (u)$ and $\zp_{*}(Y'(u))=Y(\zp(u))$ for every $u\zpe M$. By construction $Y'$ is $\mathbb T^n$-invariant and $Y'(u)=0$ if and only if $Y(\zp(u))=0$. Let $T$ be a dense affine vector field on $\mathbb T^n$ and $T'$ the fundamental vector field, on $M$, associated to $T$ through the action. As the describing vector field we take $X'=(\zt\zci\zp)(Y'+T')$, which clearly is $\mathbb T^n$-invariant and complete. The remainder of this section is devoted to showing that $X'$ is a describing vector field. First we study the behavior of $X'$ near some fibres. If $p$ is a source of $Y$, there exist coordinates $(x_1 ,\dots, x_k )$, about $p\zpe B$, with $p\zeq 0$ and $Y=a\zsu_{j=1}^{k}x_{j}\zpar/\zpar x_{j}$, $a>0$. As the connection is a product around $\zp^{-1}(p)$, these coordinates can be prolonged to a system of coordinates $(x,\zh)$ on a product open set $A\zpor\mathbb T^n$, with the obvious identifications, while $\mathcal C$ is given by the first factor. In this case $$Y'+T'=a\zsu_{j=1}^{k}x_{j}{\frac {\zpar} {\zpar x_j}}+T$$ since $T'$ is just $T$ regarded as a vector field on $A\zpor\mathbb T^n$. The same happens when $p$ is a sink, but with $a<0$. If $p$ is a saddle, then the model of the first part is $\zsu_{j=1}^{k}\zl_{j}x_{j}\zpar/\zpar x_{j}$ with some $\zl_j$ positive and the others negative. Thus, when $p\zpe C$, the torus $\zp^{-1}(p)$ is the adherence of a trajectory of $X'$, this vector field never vanishes on $\zp^{-1}(p)$ and, besides: \begin{enumerate}[label={\rm (\alph{*})}] \item If $p\zpe I$, then $\zp^{-1}(p)$ is the $\za$-limit of some external trajectories but never the $\zw$-limit. \item If $p$ is a sink, then $\zp^{-1}(p)$ is the $\zw$-limit of some external trajectories but never the $\za$-limit.
\item If $p$ is a saddle, then $\zp^{-1}(p)$ is the $\za$-limit of some external trajectories and the $\zw$-limit of other ones. \end{enumerate} On the other hand $(X')^{-1}(0)=\zp^{-1}(P)$. Moreover if $p\zpe P$ then $X'$ has order $2n_p$ at each point of $\zp^{-1}(p)$. If $X'$ is multiplied by a positive, bounded and $\mathbb T^n$-invariant function $\zr\colon M\zfl\mathbb R$, then $\zr=\tilde\zr\zci\zp$ for some positive and bounded function $\tilde\zr\colon B\zfl\mathbb R$. Thus $\zr X'= ((\tilde\zr \zt)\zci\zp)(Y'+T')$. As $\zt$ and $\tilde\zr \zt$ have the same essential properties, the foregoing description still holds for $\zr X'$. In other words, this description is geometric and independent of how trajectories are parameterized. Consider $i\zpe I$ and identify its $Y$-outset $S_i$ to $\mathbb R^k$ in such a way that $i\zeq 0$ and $Y=a\zsu_{\zlma=1}^{k}x_{\zlma}\zpar/\zpar x_{\zlma}$, $a>0$. As $S_i$ is contractible, the fibre bundle $\zp\colon\zp^{-1}(S_i )\zfl S_i$ is trivial; so it can be regarded as $\zp_1 \colon\mathbb R^k \zpor\mathbb T^n\zfl\mathbb R^k$ while $$Y'+T'=a\zsu_{\zlma=1}^{k}x_{\zlma}{\frac {\zpar} {\zpar x_{\zlma}}} +\zsu_{r=1}^{n}g_{r}(x){\frac {\zpar} {\zpar \zh_r}}.$$ Finally Lemma \ref{lemC} allows us to suppose $$Y'+T'=a\zsu_{\zlma=1}^{k}x_{\zlma}{\frac {\zpar} {\zpar x_{\zlma}}} +\zsu_{r=1}^{n}a_{r}{\frac {\zpar} {\zpar \zh_r}}$$ with $a>0$ and $a_1 ,\dots,a_n \zpe\mathbb R$. Moreover since $T'=\zsu_{r=1}^{n}a_{r}\zpar/\zpar \zh_{r}$ at any point of $\{0\}\zpor\mathbb T^n$, the scalars $a_1 ,\dots,a_n$ are rationally independent (recall that $Y'(u)=0$ whenever $Y(\zp(u))=0$). Now it is clear that, for every $p\zpe P_i$ and $u\zpe \zp^{-1}(p)$, there exists a trajectory of $X'=(\zt\zci\zp)(Y'+T')$ whose $\za$-limit and $\zw$-limit are $\zp^{-1}(i)$ and $u$, respectively.
As $n_p \znoi n_{p'}$ when $p\znoi p'$, the existence of this kind of trajectories shows that any automorphism of $X'$ has to send $\zp^{-1}(i)$ into itself and, consequently, the $X'$-outset of $\zp^{-1}(i)$ into itself too (here outset means the set of points whose trajectory has its $\za$-limit included in $\zp^{-1}(i)$). The next question is to determine this outset. First we identify the outset $R_i$ of $i$ with respect to $Z=\zt Y$ to $\mathbb R^k$ in such a way that $i\zeq 0$ and $Z=b\zsu_{\zlma=1}^{k}x_{\zlma}\zpar/\zpar x_{\zlma}$, $b>0$. Again $\zp\colon\zp^{-1}(R_i )\zfl R_i$ is trivial and reasoning as before, taking into account that $X'$ is complete on $\zp^{-1}(R_i )$, which allows us to apply Lemma \ref{lemC}, leads us to the case $\zp^{-1}(R_i )\zeq\mathbb R^k \zpor \mathbb T^n \zfl\mathbb R^k \zeq R_i$ and $$X'=b\zsu_{\zlma=1}^{k}x_{\zlma}{\frac {\zpar} {\zpar x_{\zlma}}} +\zsu_{r=1}^{n}b_{r}{\frac {\zpar} {\zpar \zh_r}}$$ where $b>0$ and $b_1 ,\dots,b_n \zpe \mathbb R$ are rationally independent. Therefore the $X'$-outset of $\zp^{-1}(i)$ equals $\zp^{-1}(R_i)$. Let $F\colon M\zfl M$ be an automorphism of $X'$. Then $F\colon\zp^{-1}(R_i)\zfl \zp^{-1}(R_i)$ is a diffeomorphism. As the trajectories of $\zsu_{r=1}^{n}b_{r}\zpar/\zpar \zh_{r}$ are dense, from Corollary \ref{cor_de_2.2} it follows that $F(x,\zh)=(\zf(x),\zh+\zl)$, where $\zl\zpe \mathbb T^n$ and $\zf\colon\mathbb R^k\zfl\mathbb R^k$ is an isomorphism. For any $p\zpe P_i$ some trajectories of $X'$ with $\za$-limit $\zp^{-1}(i)$ have, as $\zw$-limit, a point of $\zp^{-1}(p)$, that is, a singularity of order $2n_p$. Since $n_p \znoi n_{p'}$ when $p\znoi p'$, the set of these trajectories has to be an invariant of $F$. Regarded on $R_i \zco B$, and taking into account that $X'$ projects onto $Z$, this fact implies that $\zf$ has to map the trajectory of $Z=b\zsu_{\zlma=1}^{k}x_{\zlma}\zpar/\zpar x_{\zlma}$ of $\za$-limit $i$ and $\zw$-limit $p$ into itself.
Thus the direction vector of this curve is an eigenvector of $\zf$ with positive eigenvalue. But there are $k+1$ eigenvectors (as many as points in $P_i$) and they are in general position, so $\zf$ is a positive multiple of the identity. Therefore there exists $t_i$ such that $\zF_{t_i}(x,\zh)=(\zf(x),\zh+\tilde\zl_i)$ where $\zF_{t}$ denotes the flow of $X'$ and $\tilde\zl_i \zpe\mathbb T^n$. Since $\zF_{t}$ and the action of $\mathbb T^n$ commute, this shows the existence of $t_i \zpe\mathbb R$ and $\zl_i \zpe\mathbb T^n$ such that $F=\zl_i \zci\zF_{t_i}$ on $\zp^{-1}(R_i )$. It is easily seen that the family $\{\zp^{-1}( R_{i})\}_{i\zpe I}$ is locally finite and the set $\zung_{i\zpe I}\zp^{-1}( R_{i})$ is dense in $M$. On each $\zp^{-1}( R_{i})$ the action of $\mathbb T^n$ and $F$ commute, so they do on $M$. Thus $F$ induces a diffeomorphism $f\colon B\zfl B$ such that $f\zci\zp=\zp\zci F$. Besides, since $Z$ is the projection of $X'$, our $f$ is an automorphism of $Z$. Now from the expression of $F\colon\zp^{-1}( R_{i})\zfl\zp^{-1}( R_{i})$ it follows that $f=\zf_{t_i}$ on $R_i$, $i\zpe I$, where $\zf_t$ is the flow of $Z$. \begin{lemma}\label{lemF} All $t_i$'s are equal and $f=\zf_t$ for some $t\zpe\mathbb R$. \end{lemma} \begin{proof} Notice that $Z$ has no regular periodic trajectories (here $\dim B\zmai 2$ is needed). Then, the lemma follows from the proof of Lemma 3.7 in \cite{TV}, when $G$ is the trivial group and $X$ becomes $Z$. \end{proof} Therefore composing $F$ with $\zF_{-t}$ allows us to suppose that $f$ is the identity and $F=\zl_i$ on each $\zp^{-1}( R_{i})$, $i\zpe I$. Consider a $\mathbb T^n$-invariant Riemannian metric on $M$. Then $F$ is an isometry on every $\zp^{-1}( R_{i})$ so, by continuity, on $M$. Take $i_0 \zpe I$; then the isometries $F$ and $\zl_{i_0}$ agree on $\zp^{-1}( R_{i_0})$. But on connected manifolds, isometries are determined by their $1$-jet at any point. Therefore $F=\zl_{i_0}$ on $M$.
In other words, $(\zl,t)\zpe \mathbb T^n\zpor{\mathbb R}\zfl \zl\zci\zF_t \zpe\Aut(X')$ is an epimorphism. We now prove injectivity. Assume $\zl\zci\zF_t =Id$, that is $\zF_t =(-\zl)$. Then $\zf_t\colon B\zfl B$ equals the identity because $(-\zl)$ induces this map on $B$. Since $Z$ has no periodic regular trajectories, this implies $t=0$ and, finally, $\zl=0$ because the action of $\mathbb T^n$ is effective. {\it In short $X'$ is a describing vector field for free actions.} \begin{remark}\label{remC} {\rm Note that if $\zr\colon M\zfl\mathbb R$ is a $\mathbb T^n$-invariant, positive and bounded function, then $\zr X'$ is a describing vector field too. Indeed, there exists a function $\tilde\zr\colon B\zfl\mathbb R$ such that $\zr=\tilde\zr\zci\zp$ and it suffices to reason with $\tilde\zr\zt$ instead of $\zt$.} \end{remark} \section{Effective actions}\label{secC} Throughout this section we assume that $m-n\zmai 2$ and the $\mathbb T^n$-action is effective. Our next goal is to construct a describing vector field on $M$. Let $S$ be the set of those points of $M$ whose isotropy (stabilizer) is non-trivial. By Proposition \ref{proAA} the set $M-S$ is dense, open, connected and $\mathbb T^n$-invariant. Moreover the $\mathbb T^n$-action on $M-S$ is free, so we can consider on this set a describing vector field $X'$ as in Section \ref{secB}. Let $\zf$ be a function as in Proposition~\ref{proAB} for $X'$ and $M-S$, and $\widehat X'$ be the vector field on $M$ given by $\zf X'$ on $M-S$ and zero on $S$. Set $\zq=h\zci\zf$ where $h\colon\mathbb R\zfl\mathbb R$ is defined by $h(t)=0$ if $t\zmei 0$ and $h(t)=\exp(-1/t)$ if $t>0$. The function $\zq$, which is $\mathbb T^n$-invariant, vanishes at order infinity at every point of $S$, i.e.\ all its $r$-jets vanish. Besides, it is bounded on $M$ and positive on $M-S$. Now put $\widetilde X=\zq\widehat X'$.
Clearly $\widetilde X$ vanishes at order infinity at $u$ if and only if $u\zpe S$; therefore $S$ is an invariant of $\widetilde X$. Moreover $\widetilde X$ is complete both on $M$ and on $M-S$. By Remark~\ref{remC} our $\widetilde X$ is a describing vector field on $M-S$ since $\widetilde X=(\zq\zf)X'$ on this set. If $F\colon M\zfl M$ is an automorphism of $\widetilde X$ then $F(S)=S$, and $F\colon M-S\zfl M-S$ is an automorphism of $\widetilde X$; so on $M-S$ one has $F=\zl\zci\widetilde\zF_t$, where $\widetilde\zF_t$ is the flow of $\widetilde X$. By continuity $F=\zl\zci\widetilde\zF_t$ everywhere, which implies that $\widetilde X$ is a describing vector field on $M$ (the homomorphism injectivity is inherited from $M-S$). \section{The case $m-n=1$}\label{secD} First assume the action is free, which gives rise to a principal fibre bundle $\zp\colon M\zfl B$ with $B$ connected and of dimension one. Therefore $B$ is $\mathbb R$ or $S^1$ and $\zp\colon M\zfl B$ can be identified to $\zp_1 \colon B\zpor\mathbb T^n \zfl B$. One will need the following result, whose proof is routine. \begin{lemma}\label{lemG} On an open set $0\zpe A\zco\mathbb R$ consider a vector field $X$ such that its $(r-1)$-jet at the origin vanishes but its $r$-jet does not, $r\zmai 1$. Let $\zf_t$ be the flow of $X$. Given $f\colon A\zfl\mathbb R$ and $t_1 ,t_2 \zpe\mathbb R$, if $f=\zf_{t_1}$ on $A\zin (0,\zinf)$ and $f=\zf_{t_2}$ on $A\zin (-\zinf,0)$ then $t_1 =t_2$. \end{lemma} Set $Y=q(q^2 +1)^{-1}\zpar/\zpar x$, where $q=x(x-1)(x-2)(x-3)(x-4)$ when $B=\mathbb R$, and $Y=\sin(3\za)\zpar/\zpar\za$ if $B=S^1$ endowed with the angular coordinate $\za$. Clearly $Y$ is complete. In the first case the sources are $0,2,4$ and the sinks $1,3$, and in the second one they are $0,2\zp/3,4\zp/3$ and $\zp/3,\zp,5\zp/3$, respectively. When $\dim B\zmai 2$ we have created new singularities called artificial.
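The classification of the zeros of $Y$ on $B=\mathbb R$ above can be checked directly: since the factor $(q^2+1)^{-1}$ is positive, a simple zero $x_0$ of $q$ is a source of $Y$ exactly when $q'(x_0)>0$ and a sink when $q'(x_0)<0$. A quick numerical check (Python; not part of the construction):

```python
# Sign check for the zeros of Y = q (q^2 + 1)^{-1} d/dx on B = R, with
# q(x) = x(x-1)(x-2)(x-3)(x-4).  A simple zero x0 of q is a source of Y
# when q'(x0) > 0 and a sink when q'(x0) < 0.

def q(x):
    return x * (x - 1) * (x - 2) * (x - 3) * (x - 4)

def dq(x, h=1e-6):
    # central finite difference; accurate enough to read off the sign
    return (q(x + h) - q(x - h)) / (2 * h)

sources = [x0 for x0 in range(5) if dq(x0) > 0]
sinks = [x0 for x0 in range(5) if dq(x0) < 0]
print(sources, sinks)  # [0, 2, 4] [1, 3], as claimed
```

The same sign test with $\frac{d}{d\za}\sin(3\za)=3\cos(3\za)$ recovers the sources $0,2\zp/3,4\zp/3$ and the sinks $\zp/3,\zp,5\zp/3$ on $S^1$.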
Now, instead of that, one increases the order of the sinks (otherwise the non-singular set has too many components). Let $\zt\colon B\zfl \mathbb R$ be a bounded function which is positive outside the sinks and has order two at $1$ and $\zp/3$, order four at $3$ and $\zp$ and, finally, order six at $5\zp/3$. Set $Z=\zt Y$, which is a complete vector field. This time the $Z$-outsets $\{R_i \}_{i\zpe I}$, where $I=\{0,2,4\}$ if $B=\mathbb R$ or $I=\{0,2\zp/3,4\zp/3\}$ if $B=S^1$, equal those of $Y$, and each of them is an invariant of $Z$ because the $\zw$-limits of its trajectories have different orders or are empty; even more, every side of the outset has to be preserved. On $M=B\zpor\mathbb T^n$ with coordinates $(x,\zh)$ or $(\za,\zh)$ set $X'=\zt(Y+T)$, where $T$ is a dense affine vector field and $\zt, Y, T$ are regarded on $M=B\zpor\mathbb T^n$ in the natural way. Now $(X')^{-1}(0)=(B-\zung_{i\zpe I}R_i )\zpor\mathbb T^n$. Therefore $(X')^{-1}(0)$ is the union of two ($B=\mathbb R$) or three ($B=S^1$) fibres; moreover points of different fibres have different orders. Let $F\colon M\zfl M$ be an automorphism of $X'$; reasoning as in the case $\dim B\zmai 2$ shows that $F=\zl_i \zci \zF_{t_i}$ on $R_i \zpor\mathbb T^n$ for each $i\zpe I$, where $\zF_t$ is the flow of $X'$. Thus the induced automorphism $f\colon B\zfl B$ of $Z$ equals $\zf_{t_i}$ on $R_i$, where $\zf_t$ is the flow of $Z$. Now Lemma \ref{lemG} implies that $t_i =t_j$ if $R_i$ and $R_j$ are contiguous. In short $f=\zf_t$ for some $t\zpe\mathbb R$. The remainder of the proof is similar to that of the case $\dim B\zmai 2$. Finally notice that $\zr X'$ is a describing vector field too if $\zr\colon M\zfl\mathbb R$ is $\mathbb T^n$-invariant, positive and bounded; therefore the result can be extended to the case of an effective action following the lines of Section \ref{secC}. \section{Examples}\label{secE} One starts this section by giving two examples of effective toric actions.
The first one is a general construction on Lie groups. In the second example a describing vector field is constructed for the usual action of $\mathbb T^3$ on $S^5$. On the other hand, two more examples show that the main theorem fails for general compact Lie groups. More precisely, for effective actions of $SO(3)$ (Example \ref{ejeB}) and for effective actions of a non-connected compact group, of dimension two, with abelian Lie algebra (Example \ref{ejeC}). \begin{example}\label{ejeAcero} {\rm Let $G$ be a connected Lie group, with center $ZG$, and two (not necessarily maximal) tori $H,\widetilde H\leq G$. Then there exists a $(H\zpor\widetilde H)$-action on $G$ given by $(h,\tilde h )\zpu g= hg{\tilde h}^{-1}$, whose kernel $K$ equals $\{(h,h)\zbv h\zpe H\zin\widetilde H\zin ZG\}$. Thus an effective action of the torus $(H\zpor\widetilde H)/K$ on $G$ is induced. Now suppose that $G$ is compact with rank $r$ and $ZG$ finite, that is, the center of the Lie algebra of $G$ is zero. Let $H$ be a maximal torus of $G$; set $\widetilde H=H$. Then one obtains an effective action of $\mathbb T^{2r}\zeq (H\zpor H)/K$ on $G$. Moreover the isotropy group of any $g\zpe G$ has two or more elements if and only if $(gHg^{-1})\zin H$ is not included in $ZG$; by Proposition \ref{proAA} this happens for almost no $g\zpe G$.} \end{example} \begin{example}\label{ejeA} {\rm On $S^5 =\{y\zpe \mathbb R^6 \zbv y_{1}^2 +\dots+y_{6}^2 =1\}$ consider the action of $\mathbb T^3$ given by the fundamental vector fields $U_j =-y_{2j}\zpar/\zpar y_{2j-1}+y_{2j-1}\zpar/\zpar y_{2j}$, $j=1,2,3$. In order to construct a describing vector field for this action we follow the lines of Sections \ref{secB} and \ref{secC} up to some minor changes. First observe that the singular set for this action is $S=\{y\zpe S^5 \zbv (y_{1}^2 +y_{2}^2)(y_{3}^2 +y_{4}^2) (y_{5}^2 +y_{6}^2)=0\}$, so the action of $\mathbb T^3$ on $S^5 -S$ is free.
Let $\zp\colon S^5 \zfl\mathbb R^2$ be the map given by $\zp(y)=(y_{1}^2 +y_{2}^2 ,y_{3}^2 +y_{4}^2 )$, and $B$ be the interior of the triangle of vertices $(0,0),(1,0),(0,1)$. Then $\zp\colon S^5 -S\zfl B$ is the $\mathbb T^3$-principal bundle associated to the action of $\mathbb T^3$. A connection $\mathcal C$ for this principal bundle is defined by $Ker(\za_1 \zex\za_2 \zex\za_3 )$ where each $\za_j =(y_{2j-1}^2 +y_{2j}^2 )^{-1}(-y_{2j}dy_{2j-1}+y_{2j-1}dy_{2j})$, which is flat since $d\za_1 =d\za_2 =d\za_3 =0$. The vector fields $$V_r =(y_{5}^2 +y_{6}^2 )\left( y_{2r-1}{\frac {\zpar} {\zpar y_{2r-1}}} +y_{2r}{\frac {\zpar} {\zpar y_{2r}}}\right) -(y_{2r-1}^2 +y_{2r}^2 )\left( y_{5}{\frac {\zpar} {\zpar y_{5}}} +y_{6}{\frac {\zpar} {\zpar y_{6}}}\right)\, ,$$ $r=1,2$, are tangent to $\mathcal C$ and project to $$2(1-x_{1}-x_{2}){\frac {\zpar} {\zpar x_{r}}}\, ,$$ $r=1,2$, on $B$ endowed with coordinates $x=(x_1 ,x_2)$. Set $$Y=2(1-x_{1}-x_{2})x_1 x_2 \left[ \left( x_1 -{\frac {1} {4}}\right) {\frac {\zpar} {\zpar x_{1}}}+ \left( x_2 -{\frac {1} {4}}\right) {\frac {\zpar} {\zpar x_{2}}}\right]\, ,$$ whose lifted vector field through $\mathcal C$ is $$Y'= (y_{1}^2 +y_{2}^2)(y_{3}^2 +y_{4}^2) \left(( y_{1}^2 +y_{2}^2 -1/4)V_1 + ( y_{3}^2 +y_{4}^2 -1/4)V_2\right).$$ Note that $Y'$ extends in a natural way to $S^5$. Moreover $Y'$ vanishes on $S$. Clearly $Y$ on $B$ and $Y'$ on $S^5$ and $S^5 -S$ are complete. Besides $(1/4,1/4)$ is the only source of $Y$. Let $\zt\colon\mathbb R^2 \zfl\mathbb R$ be the function defined by $$\zt(x)=\zr(x)\left((x_1 -1/8)^{2}+(x_2 -1/8)^{2}\right) \left((x_1 -1/8)^{2}+(x_2 -1/4)^{2}\right)^2 \left((x_1 -1/4)^{2}+(x_2 -1/8)^{2}\right)^3$$ where $\zr(x)=x_{1}^{10}x_{2}^{10}(1-x_{1}-x_{2})^{10}$, whose zeros in $B$ are $(1/8,1/8)$ with order two, $(1/8,1/4)$ with order four and $(1/4,1/8)$ with order six.
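The stated vanishing orders of $\zt$ can be checked numerically: along a generic unit ray from each zero, $\zt$ scales like the $k$-th power of the distance, so $k$ can be estimated from the values at two small radii. The following sketch is ours (function names are hypothetical, not from the text).

```python
import math

def tau(x1, x2):
    """The function tau of the example: rho times three radial factors
    of orders 2, 4 and 6 at (1/8,1/8), (1/8,1/4) and (1/4,1/8)."""
    rho = x1**10 * x2**10 * (1 - x1 - x2)**10
    return (rho
            * ((x1 - 1/8)**2 + (x2 - 1/8)**2)
            * ((x1 - 1/8)**2 + (x2 - 1/4)**2)**2
            * ((x1 - 1/4)**2 + (x2 - 1/8)**2)**3)

def vanishing_order(p, u=(0.6, 0.8), e1=1e-3, e2=1e-4):
    """Estimate the order of the zero of tau at p from the scaling of
    tau along the ray p + e*u (u a generically chosen unit vector)."""
    t1 = tau(p[0] + e1 * u[0], p[1] + e1 * u[1])
    t2 = tau(p[0] + e2 * u[0], p[1] + e2 * u[1])
    return math.log(t1 / t2) / math.log(e1 / e2)

for p in [(1/8, 1/8), (1/8, 1/4), (1/4, 1/8)]:
    print(p, round(vanishing_order(p)))
# orders 2, 4 and 6 respectively
```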
Now $X'=(\zt\zci\zp)(Y'+U_1 +e U_2 +e^2 U_3)$, defined on $S^5$, is a describing vector field for the action of $\mathbb T^3$. Indeed, the only difference with respect to the construction of Sections \ref{secB} and \ref{secC} is that every point of $S$ is a singularity of $X'$ with order $\zmai 10$ instead of infinity, but this is not important because the order of the remaining singularities of $X'$ is always $\zmei 6$.} \end{example} \begin{example}\label{ejeB} {\rm Let $H$ be a closed subgroup of a connected Lie group $G$ and $G/H$ be the (quotient) homogeneous space associated to the equivalence relation $g_1 \mathcal R g_2$ if and only if $g_2 =g_1 h$ for some $h\zpe H$. As is well known, $G$ acts on $G/H$ by setting $g\zpu\overline{ g'}=\overline{gg'}$. Now assume $H$ is discrete; then the canonical projection $\zp\colon G\zfl G/H$ is a covering. Moreover a vector field $V$ on $G/H$ commutes with the action of $G$, that is to say with every fundamental vector field, if and only if its lifted vector field $V'$, on $G$, is at the same time left $G$-invariant and right $H$-invariant. If one supposes that $V'$ is left $G$-invariant, this property is equivalent to saying that $V'(e)$, where $e$ is the identity of $G$, is invariant by (the adjoint action of) $H$. Therefore if no element of $T_e G-\{0\}$ is invariant by $H$, then any vector field on $G/H$ which commutes with the action of $G$ identically vanishes. Set $G=SO(3)$ and let $H$ be the subgroup of order four consisting of the identity plus the three rotations of angle $\zp$ around any of the coordinate axes (i.e.\ $H$ is the Klein four-group). Then no element of $T_e SO(3)-\{0\}$ is invariant by $H$. Consider on $M=\mathbb R^k \zpor(SO(3)/H)$, $k\zmai 1$, the action of $SO(3)$ given by $g\zpu(x,\overline{g'})=(x,\overline{gg'})$. {\it This action is effective but it does not have any describing vector field.} Indeed, take a vector field $U$ on $M$ which commutes with the action of $SO(3)$.
Then $U$ has to respect the foliation defined by the orbits of $SO(3)$, so $U=\zsu_{j=1}^k f_j (x)\zpar/\zpar x_j +V$ where $V$ is a vector field tangent to the second factor and $x=(x_1 ,\dots,x_k )$ are the canonical coordinates in $\mathbb R^k$. Since $U$ and $\zsu_{j=1}^k f_j (x)\zpar/\zpar x_j$ commute with the action, $V$ has to commute with the fundamental vector fields on each orbit, hence $V=0$. In other words $U=\zsu_{j=1}^k f_j (x)\zpar/\zpar x_j$. On the other hand if $\zf\colon SO(3)/H\zfl SO(3)/H$ is a diffeomorphism, then $$F\colon\mathbb R^{k}\zpor(SO(3)/H)\zfl\mathbb R^k \zpor(SO(3)/H)$$ given by $F(x,\overline{g})=(x,\zf(\overline{g}))$ is an automorphism of $U$, so $\Aut(U)$ is strictly greater than $SO(3)\zpor\mathbb R$. Another possibility is to consider the action of $SO(3)$ on the sphere $S^2$. Then for each $p\zpe S^2$ there exists a fundamental vector field $X$ such that $p$ is an isolated singularity of $X$. Therefore if $V$ commutes with $X$ then $V(p)=0$; consequently if $V$ commutes with the action of $SO(3)$ on $S^2$ necessarily $V=0$. For the same reason as before, the action of $SO(3)$ on $\mathbb R^k \zpor S^2$, $k\zmai 1$, defined by $g\zpu(x,p)=(x,g\zpu p)$ has no describing vector field.} \end{example} \begin{example}\label{ejeC} {\rm Let $G$ be the group of affine transformations $\zf\colon\mathbb T^2 \zfl\mathbb T^2$ defined by $\zf(\zh)=a\zh+\zl$ where $a=\zmm 1$ and $\zl\zpe\mathbb T^2$. The fundamental vector fields of the natural action of $G$ on $\mathbb T^2$ are $b_1 \zpar/\zpar\zh_1 +b_2 \zpar/\zpar\zh_2$, $b_1 ,b_2 \zpe\mathbb R$. If $V$ is a vector field on $\mathbb T^2$ which commutes with the action of $G$ it has to commute with the fundamental vector fields, therefore $V=c_1 \zpar/\zpar\zh_1 +c_2 \zpar/\zpar\zh_2$, $c_1 ,c_2 \zpe\mathbb R$. But, at the same time, $V$ commutes with $\tilde\zf(\zh)=-\zh$ since $\tilde\zf\zpe G$, which implies $V=0$.
Now reasoning as in Example \ref{ejeB} shows that the effective action of $G$ on $\mathbb R^k \zpor\mathbb T^2$, $k\zmai 1$, defined by $\zf\zpu(x,\zh)=(x,\zf(\zh))$ has no describing vector field. Observe that $G$ is a compact Lie group with abelian Lie algebra, but it is not connected.} \end{example} \section{Two auxiliary results}\label{secAA} Here we include two complementary results which were needed earlier. The first one is a straightforward consequence of the Principal Orbit Theorem on the structure of the orbits of the action of a compact Lie group (see \cite[Theorem IV.3.1]{bredon}). \begin{proposition}\label{proAA} Consider an effective action of $\mathbb T^n$ on a connected $m$-manifold $M$. Let $S$ be the set of those points of $M$ whose isotropy group has two or more elements, i.e.\ the isotropy group is non-trivial. Then the set $M-S$ is connected, dense, open and $\mathbb T^n$-invariant. Moreover $n\zmei m$. \end{proposition} The second result shows how, for connected compact Lie group actions, locally defined invariant vector fields give rise to invariant vector fields defined on the whole manifold. \begin{proposition}\label{proAB} Consider an action of a connected compact Lie group $G$ on an $m$-manifold $M$. Given a vector field $X$ on an open set $A$ of $M$, both of them $G$-invariant, there exists a $G$-invariant bounded function $\zf:M\zfl{\mathbb R}$, which is positive on $A$ and vanishes on $M-A$, such that the vector field ${\hat X}$ on $M$ defined by ${\hat X}=\zf X$ on $A$ and ${\hat X}=0$ on $M-A$ is differentiable and $G$-invariant. \end{proposition} First let us recall some elementary facts on actions. Let $Z$ be a vector field on $M$ and $\widetilde Z$ the vector field depending on a parameter $g\zpe G$ given by $\widetilde Z(g,p)=(g_*)^{-1}(Z(g\zpu p))$. On $G$ consider a bi-invariant volume form whose integral equals $1$ and the measure associated to it, that is, the Haar measure.
Then, since $\widetilde Z(\{p\}\zpor G)\zco T_p M$, the formula $$Z'(p)=\zil_G \widetilde Z(g,p)$$ defines a $G$-invariant vector field $Z'$ on $M$. Moreover if $Z=\zr U$ where $U$ is a $G$-invariant vector field, then $Z'=\zq_\zr U$ where $\zq_\zr$ is the $G$-invariant function constructed from $\zr$ in the usual way, that is $$\zq_\zr(p)=\zil_G \zr(g\zpu p)\, .$$ In order to prove Proposition \ref{proAB}, we start by considering, for $X$ and $A$, a function $\zf\colon M\zfl\mathbb R$ as in \cite[Proposition 5.5, p.\ 329]{TV}, and the vector field $\hat X$ defined by $\hat X=\zf X$ on $A$ and $\hat X=0$ on $M-A$. Now observe that $\hat X'=\zq_\zf X$ on $A$ because on this open set $X$ is $G$-invariant and $\hat X=\zf X$. Thus $\zq_\zf$ is the required function (up to renaming) since $\zq_\zf $ and $\hat X'$ vanish on $M-A$; so {\it Proposition \ref{proAB} is proved}.
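The averaging formula for $Z'$ can be illustrated numerically in the simplest case $G=SO(2)$ acting on the plane by rotations, where the Haar integral is an ordinary average over the circle. This is only an illustrative sketch with an example field of our own choosing.

```python
import numpy as np

def R(t):
    """Rotation by angle t, i.e. the action of an element of SO(2)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def average(Z, p, n=720):
    """Z'(p) = integral over G of (g_*)^{-1} Z(g . p) with normalized
    Haar measure; for G = SO(2) this is a plain circle average."""
    ts = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return sum(R(-t) @ Z(R(t) @ p) for t in ts) / n

# an arbitrary, non-invariant vector field on R^2
Z = lambda p: np.array([p[0] ** 2 + p[1], np.sin(p[0])])

# invariance of the averaged field: Z'(g . p) = g_* Z'(p)
p, g = np.array([0.7, -0.3]), R(1.234)
print(np.allclose(average(Z, g @ p), g @ average(Z, p), atol=1e-8))
# True
```

The check works because the Haar measure is translation invariant, exactly the mechanism used in the proof above.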
\section{Introduction} High porosity silica aerogels are the only known means for studying the effect of impurities on the otherwise completely pure superfluid phases of $^{3}$He. Recently there has been substantial interest in exploring the role of structural anisotropy in the aerogel and how it pertains to superfluid phase stability, textural orientation, spin dynamics, and nucleation. The narrow window of \emph{A}-like phase stability might be explained by the existence of anisotropy in the aerogel structure \cite{Thu98,Vic05,Aoy06}. The possibility of using anisotropic aerogels to stabilize superfluid phases not present in bulk $^{3}$He, such as the polar phase, has also been explored theoretically \cite{Aoy06} and experimentally \cite{Dav06,Elb08,Dav08}. Experimental work has shown that anisotropy affects the orientational degrees of freedom in both the \emph{A}-like and \emph{B}-like phases \cite{Kun07,Dmi07} and anisotropic aerogels allow for the realization of novel spin dynamical states not possible in bulk \cite{Sat08}. The nucleation of the \emph{B}-like phase may also be modified by anisotropy in the aerogel \cite{Dav08,Kad08}. We report on measurements of the transverse acoustic impedance of superfluid $^{3}$He confined in a 97.8\% porous aerogel with well defined axial anisotropy to study the role of anisotropy on phase stability and nucleation. \section{Axial Strain and Optical Birefringence} Optical birefringence in a transparent or translucent material results from an anisotropic dielectric constant, which leads to an optical axis in the material. The optical axis acts as a linear polarizer whose direction and distribution throughout the sample can be determined by viewing the material between crossed polarizers \cite{Hec98}. We have previously shown that axial strain can be used to produce \emph{global} structural anisotropy in high porosity silica aerogel and that this anisotropy can be characterized by optical birefringence \cite{Pol08}. 
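The crossed-polarizer geometry described above can be made quantitative with the standard Jones-calculus result (quoted here as an assumption, not derived in the paper): a birefringent plate of retardance $\delta$ between crossed polarizers transmits a fraction $\sin^2(2\theta)\,\sin^2(\delta/2)$ of the incident intensity, where $\theta$ is the angle between the optical axis and the first polarizer. A minimal numeric sketch:

```python
import math

def transmitted(theta, delta):
    """Fraction of incident intensity passing a birefringent plate of
    retardance delta between crossed polarizers, with the optical axis
    at angle theta to the first polarizer (standard Jones-calculus)."""
    return math.sin(2 * theta) ** 2 * math.sin(delta / 2) ** 2

delta = math.pi / 2  # some nonzero retardance, chosen arbitrarily
for deg in (0, 45, 90, 135):
    print(deg, round(transmitted(math.radians(deg), delta), 3))
# extinction at 0 and 90 deg, maxima at 45 and 135 deg
```

This is why an optical axis along the strain direction produces maximum transmission with the crossed pair at 45$^\circ$/135$^\circ$ to that axis and extinction at 0$^\circ$/90$^\circ$.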
Fig.~\ref{fig1} depicts how axial strain introduces anisotropy into an initially isotropic 98\% porous aerogel. All aerogels discussed here were synthesized at Northwestern University via a one-step sol-gel process followed by supercritical drying \cite{Pol08}. A cylindrical aerogel sample was placed between two crossed polarizers and illuminated with diffuse white light. The sample was oriented vertically and images were captured with a digital camera. The polarizers were oriented with one at 45$^\circ$ and the other at 135$^\circ$ relative to the cylinder axis. Panel 1 in Fig.~\ref{fig1} shows that the unstrained gel does not impose any preferred direction of polarization when placed between crossed polarizers. We found this to be the case for all orientations of the polarizers with respect to the cylinder axis. We also found that light propagating along the cylinder axis was not transmitted before the sample was strained. Consequently, we refer to the nature of this aerogel sample prior to the application of strain as homogeneously isotropic. The increasing intensity of light transmitted through the sample with increasing strain in Panels 2--9 indicates how global anisotropy was introduced as the sample was strained up to 18.6\%. \begin{figure}[t] \centerline{\includegraphics[height=0.07\textheight]{fig1.jpg}} \caption{\label{fig1}(Color online) Optical birefringence of a $98\%$ porosity aerogel, initially isotropic before it was subjected to increasing axial strain. The strain increases from left to right: 1 (unstrained), 2 (2.3\% strain), 3 (4.7\%), 4 (7.0\%), 5 (9.3\%), 6 (11.6\%), 7 (14.0\%), 8 (16.3\%), 9 (18.6\%).} \end{figure} To determine the direction of the optical axis in the aerogel, we rotated the polarizers while keeping them crossed. The rotation sequence for the sample at 18.6\% strain is presented in Fig.~\ref{fig2}.
Intensity maxima (minima), seen when the polarizers are at 45$^\circ$ and 135$^\circ$ (90$^\circ$ and 180$^\circ$), are consistent with the optical axis being oriented along the strain axis. \begin{figure}[h] \centerline{\includegraphics[height=0.07\textheight]{fig2.jpg}} \caption{\label{fig2}(Color online) Optical birefringence of a $98\%$ porosity aerogel strained by 18.6\%. The labels are associated with the rotation of the polarizers relative to the cylinder axis: 1 (45$^\circ$, 135$^\circ$), 2 (60$^\circ$, 150$^\circ$), 3 (75$^\circ$, 165$^\circ$), 4 (90$^\circ$, 180$^\circ$), 5 (105$^\circ$, 195$^\circ$), 6 (120$^\circ$, 210$^\circ$), 7 (135$^\circ$, 225$^\circ$).} \end{figure} \section{Acoustic Impedance Experiment} Transverse acoustic impedance is sensitive to phase transitions of superfluid $^{3}$He in aerogel, and the technique has been used to map out the pressure-temperature phase diagram in unstrained aerogel samples \cite{Ger01,Ger02}. To explore the role of axial strain, we have performed measurements of transverse acoustic impedance at 17.6 MHz in a sample of 97.8\% porous aerogel grown onto the surface of an \emph{AC}-cut quartz piezoelectric transducer and strained by 17\% \cite{Dav08}. The impedance was measured using a continuous wave RF-bridge \cite{DaT08,Ham89}. The sample cell was cooled in liquid $^{3}$He using a dilution refrigerator, followed by adiabatic nuclear demagnetization. We monitored the temperature of the $^{3}$He by measuring the temperature-dependent magnetic susceptibility of a paramagnetic salt (LCMN) \cite{Ham89}, calibrated with respect to the Greywall temperature scale \cite{Gre86}. The sample cell consisted of an open cavity between two parallel transducers separated by two spacer wires of diameter 0.0305 cm. Alongside and between these spacer wires were two smaller wires of diameter 0.0254 cm. In this configuration the cavity was held under tension by a stainless steel spring, see Fig.~\ref{fig3}.
Aerogel was grown directly in the cavity and onto the surfaces of the transducers. Excess aerogel around the cavity was removed such that the outer surfaces of the transducers would be in contact with bulk $^{3}$He. Then the large diameter spacer wires were removed and the spring compressed the aerogel onto the small diameter wires. This procedure resulted in 17\% strain. To verify the existence of global anisotropy in our sample after the compression procedure, we performed optical birefringence measurements on the aerogel cavity after the experiment was completed. Fig.~\ref{fig3} depicts a diagram of the sample cell and the inset shows the birefringence of the aerogel in the acoustic cavity with the polarizers oriented at (0$^\circ$, 90$^\circ$) (top panel) and (45$^\circ$, 135$^\circ$) (lower panel) relative to the surface normal of the transducer. Fig.~\ref{fig3} clearly indicates that the compression cell introduces global anisotropy in our aerogel sample and that the magnitude and direction of the anisotropy are consistent with our previous work on compressed aerogels \cite{Pol08}. \begin{figure}[h] \centerline{\includegraphics[height=0.175\textheight]{fig3.jpg}} \caption{\label{fig3}(Color online) Aerogel acoustic compression cell. Aerogel was grown directly between, and onto, the two quartz transducers and compressed {\it in situ}. The inset depicts optical birefringence of the aerogel with the polarizers oriented at (0$^\circ$, 90$^\circ$) (top panel) and (45$^\circ$, 135$^\circ$) (lower panel) relative to the surface normal of the transducer.} \end{figure} \section{Results and Discussion} On cooling from the normal state we saw a clear indication of the aerogel-superfluid transition, \emph{T}$_{ca}$, and the \emph{A}-like to \emph{B}-like transition, \emph{T}$_{ABa}$. However, on warming the sample from the \emph{B}-like phase we did not observe any other transitions until we reached \emph{T}$_{ca}$ \cite{Dav08}.
Our tracking experiments \cite{Dav08} near \emph{T}$_{ca}$ reveal that the window of \emph{A}-like phase stability is identical to that found in previous studies \cite{Vic05,Ger02}. Therefore, we conclude that 17\% axial strain does not stabilize the \emph{A}-like phase. In Fig.~\ref{fig4} we present results on the amount of supercooling in our axially compressed aerogel. For comparison we include the results of Gervais \emph{et al.} \cite{Ger01,Ger02} and Nazaretski \emph{et al.} \cite{Naz04} on presumably isotropic aerogel samples. We find that there was a pronounced increase in supercooling of the \emph{A}-like phase at pressures below 20 bar for our sample. This indicates that the nucleation mechanism for the \emph{B}-like phase is suppressed by the axial anisotropy of our sample. This implies that the metastability of the \emph{A}-like phase is enhanced by the presence of anisotropic scattering inside the aerogel. \begin{figure} \centerline{\includegraphics[height=0.25\textheight]{fig4.jpg}} \caption{\label{fig4}(Color online) Increased supercooling of the \emph{A}-like to \emph{B}-like transition as a function of pressure for a 17\% axially compressed aerogel (open red circles) along with the previous results of Gervais {\it et al.} \cite{Ger01,Ger02} (open green squares) and Nazaretski {\it et al.} \cite{Naz04} (open blue triangles). Curves are guides to the eye.} \end{figure} \section{Conclusions} Our results show that global anisotropy produced by 17\% axial strain inhibits the nucleation of the \emph{B}-like phase of superfluid $^{3}$He in aerogel but does not modify the thermodynamic stability of the \emph{A}-like phase. \ack We acknowledge support from the National Science Foundation, DMR-0703656. The authors would like to thank J.A. Sauls for valuable theoretical insights. We are indebted to P.E. Wolf and F. Bonnet for introducing us to the technique of optical polarization studies for aerogel characterization. \section{References}
\section{Introduction}\label{Introduction} The Orion Kleinmann-Low (KL) nebula is the closest \citep[414 pc,][]{Menten2007} and most studied high-mass star-forming region in the Galaxy. The prevailing chemistry in this source is particularly complex as a result of the interactions between newly formed protostars, their outflows and their environment. The evaporation of dust mantles and the presence of high gas temperatures produce a wide variety of molecules in the gas phase that are responsible for a spectacularly rich and intense line spectrum \citep{Blake1987, Charnley1997}. The chemical complexity of Orion KL has been demonstrated by an extensive number of line surveys, performed over a variety of frequency ranges from 72 GHz up to 1.9 THz \citep[e.g.,][]{Blake1986, Schilke1997, Bergin2010a}. Finally, millimetre and submillimetre aperture synthesis studies have provided the spatial location and extent of many molecular species \citep[e.g.,][]{Blake1996, Beuther2005, Plambeck2009, Zapata2009}, further indicating the presence of distinct physical/kinematic components that show clear chemical differentiation, with NH- and OH-bearing molecules peaking in different sources (the hot core and compact ridge, respectively). Near- and mid-IR subarcsecond resolution imaging and (sub)millimetre interferometric observations have identified the main sources of luminosity, heating, and dynamics in the region \citep[e.g.,][]{Dougados1993, Gezari1998, Shuping2004, DeBuizer2012}. However, the nature of these objects has been the subject of much debate, as has the question of which source(s) are most responsible for the high luminosity produced by the nebula \citep[$\sim$10$^5$ $L_{\sun}$;][]{Menten1995}. The main sources of activity seem to be radio sources \textit{I} and \textit{n}, located just to the south and southwest of IRc2. 
The Becklin-Neugebauer (BN) object 9$\arcsec$ northwest of the hot core is believed to be internally heated by an embedded massive protostar (or protostars). Proper motion studies have shown that all three sources, \textit{I}, \textit{n}, and BN, are receding from a common origin. This has led to the hypothesis that they were expelled from a massive young stellar system that disintegrated $\sim$500 years ago \citep{Bally2005, Zapata2009, Niederhofer2012, Nissen2012}, and that this explosive event may be the source of heating for the region in and around the hot core \citep{Zapata2009, Zapata2011, Moeckel2012}. The spectacular molecular ``fingers'' that trace the high-velocity outflow emanating from KL in a northwest-southeast direction have also been attributed to this explosive event. Interferometric observations probing the central region of the KL nebula at arcsecond resolution have also revealed another distinct kinematic component, particularly evident in shock tracers such as SiO and SO, that extends in the northeast-southwest direction and has been identified as a low-velocity bipolar outflow originating at source \textit{I} \citep[e.g.,][]{Plambeck1982, Greenhill1998, Niederhofer2012}. This, along with the spectral shape of its continuum emission, has therefore led various authors to argue that source \textit{I} is likely to be an embedded protostar driving this outflow. Zapata et al. (2009, 2011) have proposed instead that the low-velocity outflow is linked to the explosive event and that the region surrounding IRc2 and the hot core is being externally heated by material ejected from the explosion impacting the stationary gas. The nature of the Orion hot core, BN object, and sources \textit{I} and \textit{n}, and the underlying cause of the dynamics and energetics in the KL nebula are thus matters of continued debate.
Reliable measurements of the temperature distribution across the extended gas in the KL nebula can help to better understand the energetics in the region, and therefore the likely driving sources. Methyl cyanide, CH$_3$CN, is an excellent probe of the gas temperature in warm dense environments like Orion KL \citep{Boucher1980}. Its molecular symmetry gives rise to many transitions closely spaced in frequency that span a large range of energies (up to $E_\mathrm{up} \! > \! 1000$ K). These lines can therefore be observed simultaneously at one frequency setting, thus avoiding relative flux calibration uncertainties, and the populations of the different $K$ levels (the so-called $K$-ladder) are coupled by collisions, so that they are dictated by the gas temperature. Nitrogen-bearing complex organic molecules such as methyl cyanide are generally assumed to form on grain mantles in the cold dense phase of the interstellar medium, before being released into the gas phase by evaporation in the hot regions surrounding massive protostars. Their spatial distribution and abundances can therefore provide important constraints on chemical models of these sources. The methyl cyanide emission in Orion KL has been studied extensively, both with single-dish and interferometric observations. Early observations of the millimetre lines of CH$_3$CN \citep{Loren1981, Loren1984, Sutton1986} revealed the presence of warm gas ($\sim$275~K) in the hot core and weaker extended emission from cooler gas ($\sim$100~K) in the quiescent ridge, with derived fractional abundances of 10$^{-11}$ to 10$^{-10}$. More recent interferometric studies \citep{Wilner1994, Beuther2006, Wang2010} have uncovered the complex and clumpy structure within $\sim$10$\arcsec$ of IRc2, yielding kinetic temperatures above 250~K (and even up to 600~K, based on submillimetre transitions) for the hot core and lower values of $\sim$150--250~K toward the compact ridge.
These arcsecond resolution observations suggest much higher CH$_3$CN abundances of 10$^{-8}$ to 10$^{-7}$ in the hot core and compact ridge. We have obtained fully sampled maps of four sets of methyl cyanide $K$-ladder emission lines across a large region centred on Orion IRc2. These maps cover a larger area with improved sensitivity and (single-dish) spatial resolution than those of previous studies, providing a homogeneous dataset that is well-suited to studying the extended emission in the region surrounding the KL nebula. Furthermore, while these data cannot compete with the angular resolution afforded by interferometric techniques, they do not suffer from the problem of missing flux common to many interferometric observations, which tend to emphasise the clumpiness of the gas by filtering out the smooth extended emission. Recovering this extended emission is crucial when studying the structure on larger scales, which is the focus of this paper. Based on these data, we have analysed the kinematics of this extended warm gas and derived maps of temperature and column density across the region. We describe the observations that were obtained and the subsequent reduction of those data in Sect.~\ref{Observations}. The maps are presented in Sect.~\ref{Results:Emission-Maps} and the general kinematic properties of the emitting gas are discussed. We describe the analysis of these maps and the derivation of column densities and rotational temperatures by use of the population diagram method, followed by LVG model fitting in Sect.~\ref{Analysis}. The resulting temperature distributions are presented in Sect.~\ref{Results:Temperature-Maps} and their implications for the source structure are discussed in Sect.~\ref{Discussion}. Our findings are summarised in Sect.~\ref{Conclusions}. 
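The population (rotation) diagram analysis referred to above can be sketched as follows: for optically thin emission in LTE, $\ln(N_u/g_u)$ is linear in $E_u$ with slope $-1/T_\mathrm{rot}$. The Python sketch below uses our own function names, and the intensities are synthetic (built from the $J$=12--11 constants of Table~\ref{Table:Line-Properties} at an assumed 150~K), not the survey data.

```python
import math

K_B, H, C = 1.380649e-16, 6.62607015e-27, 2.99792458e10  # cgs units

def rotation_diagram(W, freq_GHz, Aij, gup, Eup_K):
    """Fit T_rot from K-ladder integrated intensities W [K km/s],
    assuming optically thin LTE: ln(Nu/gu) = const - Eup/T_rot."""
    ys = []
    for w, f, a, g in zip(W, freq_GHz, Aij, gup):
        nu = f * 1e9
        n_up = 8 * math.pi * K_B * nu**2 * (w * 1e5) / (H * C**3 * a)
        ys.append(math.log(n_up / g))
    xm, ym = sum(Eup_K) / len(Eup_K), sum(ys) / len(ys)
    slope = (sum((x - xm) * (y - ym) for x, y in zip(Eup_K, ys))
             / sum((x - xm) ** 2 for x in Eup_K))
    return -1.0 / slope  # T_rot in K

# synthetic J=12-11 ladder generated at T = 150 K, then recovered:
freq = [220.747261, 220.743011, 220.730261, 220.709016, 220.679287]
Aij = [9.24e-4, 9.18e-4, 8.99e-4, 8.66e-4, 8.21e-4]
gup = [50, 50, 50, 100, 50]
Eup = [68.9, 76.0, 97.4, 133.2, 183.1]
W = [g * a * math.exp(-e / 150.0) / f**2 * 1e9
     for g, a, e, f in zip(gup, Aij, Eup, freq)]
print(round(rotation_diagram(W, freq, Aij, gup, Eup), 1))  # 150.0
```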
\section{Observations and data reduction}\label{Observations} \begin{table} \caption{Spectroscopic properties of the observed methyl cyanide lines.} \label{Table:Line-Properties} \centering \begin{tabular}{c@{\ \ }cccrr} \hline\hline \multicolumn{2}{c}{Transition} & Frequency & $A_{i\!j}$ & \multicolumn{1}{c}{$E_\mathrm{up}$} & \multicolumn{1}{c}{$g_\mathrm{up}$} \\ $J'$$\to$$J''$ & $K$ & (GHz) & (s$^{-1}$) & \multicolumn{1}{c}{(K)} & \\ \hline 6--5 & 0 & 110.383500 & 1.11$\times$10$^{-4}$ & 18.5 & 26 \\ 6--5 & 1 & 110.381372 & 1.08$\times$10$^{-4}$ & 25.7 & 26 \\ 6--5 & 2 & 110.374989 & 9.88$\times$10$^{-5}$ & 47.1 & 26 \\ 6--5 & 3 & 110.364354 & 8.33$\times$10$^{-5}$ & 82.8 & 52 \\ 6--5 & 4 & 110.349470 & 6.17$\times$10$^{-5}$ & 132.8 & 26 \\ 6--5 & 5 & 110.330345 & 3.39$\times$10$^{-5}$ & 197.1 & 26 \\ \hline 12--11 & 0 & 220.747261 & 9.24$\times$10$^{-4}$ & 68.9 & 50 \\ 12--11 & 1 & 220.743011 & 9.18$\times$10$^{-4}$ & 76.0 & 50 \\ 12--11 & 2 & 220.730261 & 8.99$\times$10$^{-4}$ & 97.4 & 50 \\ 12--11 & 3 & 220.709016 & 8.66$\times$10$^{-4}$ & 133.2 & 100 \\ 12--11 & 4 & 220.679287 & 8.21$\times$10$^{-4}$ & 183.1 & 50 \\ 12--11 & 5 & 220.641084 & 7.63$\times$10$^{-4}$ & 247.4 & 50 \\ 12--11 & 6 & 220.594423 & 6.92$\times$10$^{-4}$ & 325.9 & 100 \\ 12--11 & 7 & 220.539323 & 6.08$\times$10$^{-4}$ & 418.6 & 50 \\ 12--11 & 8 & 220.475807 & 5.12$\times$10$^{-4}$ & 525.6 & 50 \\ 12--11 & 9 & 220.403900 & 4.03$\times$10$^{-4}$ & 646.7 & 100 \\ 12--11 & 10 & 220.323631 & 2.81$\times$10$^{-4}$ & 782.0 & 50 \\ \hline 13--12 & 0 & 239.137916 & 1.18$\times$10$^{-3}$ & 80.3 & 54 \\ 13--12 & 1 & 239.133313 & 1.17$\times$10$^{-3}$ & 87.5 & 54 \\ 13--12 & 2 & 239.119504 & 1.15$\times$10$^{-3}$ & 108.9 & 54 \\ 13--12 & 3 & 239.096497 & 1.12$\times$10$^{-3}$ & 144.6 & 108 \\ 13--12 & 4 & 239.064299 & 1.07$\times$10$^{-3}$ & 194.6 & 54 \\ 13--12 & 5 & 239.022924 & 1.00$\times$10$^{-3}$ & 258.9 & 54 \\ 13--12 & 6 & 238.972389 & 9.26$\times$10$^{-4}$ & 337.4 & 108 \\ 13--12 & 7 & 
238.912715 & 8.35$\times$10$^{-4}$ & 430.1 & 54 \\ 13--12 & 8 & 238.843926 & 7.30$\times$10$^{-4}$ & 537.0 & 54 \\ 13--12 & 9 & 238.766049 & 6.11$\times$10$^{-4}$ & 658.2 & 108 \\ 13--12 & 10 & 238.679115 & 4.79$\times$10$^{-4}$ & 793.4 & 54 \\ \hline 14--13 & 0 & 257.527383 & 1.48$\times$10$^{-3}$ & 92.7 & 58 \\ 14--13 & 1 & 257.522427 & 1.47$\times$10$^{-3}$ & 99.8 & 58 \\ 14--13 & 2 & 257.507561 & 1.45$\times$10$^{-3}$ & 121.3 & 58 \\ 14--13 & 3 & 257.482792 & 1.41$\times$10$^{-3}$ & 157.0 & 116 \\ 14--13 & 4 & 257.448128 & 1.35$\times$10$^{-3}$ & 207.0 & 58 \\ 14--13 & 5 & 257.403584 & 1.29$\times$10$^{-3}$ & 271.2 & 58 \\ 14--13 & 6 & 257.349179 & 1.20$\times$10$^{-3}$ & 349.7 & 116 \\ 14--13 & 7 & 257.284935 & 1.10$\times$10$^{-3}$ & 442.4 & 58 \\ 14--13 & 8 & 257.210877 & 9.90$\times$10$^{-4}$ & 549.4 & 58 \\ 14--13 & 9 & 257.127035 & 8.62$\times$10$^{-4}$ & 670.5 & 116 \\ 14--13 & 10 & 257.033444 & 7.19$\times$10$^{-4}$ & 805.8 & 58 \\ \hline \end{tabular} \tablefoot{Spectroscopic data taken from the Cologne Database for Molecular Spectroscopy \citep[CDMS;][]{Muller2005}. $A_{i\!j}$ is the Einstein coefficient for spontaneous emission, $E_\mathrm{up}$ is the upper state energy, and $g_\mathrm{up}$ is the upper state degeneracy.} \end{table} \begin{figure*} \centering \includegraphics[width=17cm]{Figures/OrionKL_CH3CN_J=6-5_intensity_maps} \caption{Integrated intensity maps of methyl cyanide $J$=$6_K$--$5_K$ emission across the Orion KL region. Contours are plotted at the 3-sigma level ($3\sigma = 3 \!\times\! T_\mathrm{rms} \, \delta V \sqrt{N}_\mathrm{channels} = 2.1$~K\,km\,s$^{-1}$; grey lines) and from 20 to 140 in 20~K\,km\,s$^{-1}$ intervals (white lines). The positions of the hot core and compact ridge are indicated by a red plus sign and a diagonal cross, respectively. The locations of radio continuum sources BN, \textit{I}, and \textit{n} are also marked on each panel by white triangles. 
Peaks 1 and 2 of the H$_2$ emission are indicated by white stars and the condensation CS1 is marked by a white cross. Extended emission to the northeast of the hot core is clearly visible in the lower $K$ transitions.} \label{Fig:6-5_intensities} \end{figure*} The observational data used in this paper were obtained as part of a 2-dimensional (2D) line survey of the Orion KL nebula, performed at the Institut de Radioastronomie Millim\'etrique (IRAM) 30m telescope in Pico Veleta, Spain. This mapping survey covers a $\sim$$2\arcmin\times2\arcmin$ region around Orion IRc2, with full spatial sampling and continuous frequency coverage across the 1~mm atmospheric window (from 200 to 282 GHz). A complete description of the 2D survey is presented in Marcelino et al. (\textit{in prep.}). We describe here only the details specific to the observations covering the methyl cyanide transitions analysed in this paper. The observations presented here were obtained on 2008 February 16 ($J$=$12_K$--$11_K$), 2010 February 14 ($J$=$6_K$--$5_K$ and $13_K$--$12_K$), and 2012 January 22 ($14_K$--$13_K$). The EMIR single pixel heterodyne receivers were used for all observations except for those performed in 2008, for which the 9-pixel HERA receiver array was used. The $J$=$6_K$--$5_K$, $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$ series of methyl cyanide $K$-ladder transitions were each observed using a single Local Oscillator setting, at frequencies of 109.983, 220.700, 239.100, and 258.000~GHz, respectively. The possible presence of emission arising from the image sideband was checked by using short wobbler-switching observations on the central position with a slightly different frequency for each setting. All maps were centred on the infrared continuum source IRc2 at $\alpha = 5^\mathrm{h}\,35^\mathrm{m}\,14.49^\mathrm{s}$, $\delta = -5^\circ\,22\arcmin\,29.3\arcsec$ (J2000.0) and covered an area of 140$\times$140 arcsec$^2$ with map points separated by 4 arcsec. 
The observations were performed using the \textit{On-The-Fly} (OTF) mapping mode, scanning both in $\alpha$ and $\delta$, with position switching to an emission-free reference position at an offset $(-600\arcsec,0\arcsec)$ with respect to IRc2. The WILMA backend spectrometer was used for all observations, with a total bandwidth of 4 GHz (EMIR) and 1 GHz (HERA) and a spectral resolution of 2 MHz, corresponding to velocity resolutions of 5.4 to 2.3 km\,s$^{-1}$ at 110 and 258 GHz, respectively. Observing conditions were typically good for winter, with opacities $\sim$0.1--0.2 at 1 mm and 1--2 mm of precipitable water vapour (pwv), resulting in system temperatures of 200--300 K, except for the 2010 February period, when observations were performed with opacities of 0.3--0.4 and 5 mm of pwv. In this case, system temperatures of 150 K and 300--400 K were obtained at 110 and 239 GHz, respectively. Intensity calibration was performed using two absorbers at different temperatures and corrected for atmospheric attenuation using the Atmospheric Transmission Model \citep[ATM;][]{Cernicharo1985, Pardo2001}. The telescope half-power-beam-width (HPBW) was 22.3$\arcsec$ at 110~GHz, 11.1$\arcsec$ at 220~GHz, 10.3$\arcsec$ at 239~GHz, and 9.6$\arcsec$ at 258~GHz. The telescope pointing was checked every hour on strong nearby quasars and found to have errors of typically less than 3--4 arcsec. The flux accuracy was checked using repeated observations of the planet Mars and the estimated flux uncertainty is $\sim$25\%. The data were processed using the IRAM GILDAS software package\footnote{See \texttt{http://www.iram.fr/IRAMFR/GILDAS} for more information about the GILDAS software package.}. Data reduction consisted of fitting and removing first-order polynomial baselines, and checking for image sideband contamination and for emission from the reference position. The HERA data required additional processing owing to the differing performance of the individual pixels in the array. 
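The quoted velocity resolutions follow directly from the 2 MHz channel width and the observed line frequencies (Table~\ref{Table:Line-Properties}); as a quick consistency check (a sketch, not part of the reduction pipeline):

```python
# Velocity width of a 2 MHz WILMA channel: dv = c * dnu / nu.
C_KMS = 299792.458          # speed of light in km/s
CHANNEL_WIDTH_MHZ = 2.0     # WILMA spectral resolution

def velocity_resolution(freq_ghz, dnu_mhz=CHANNEL_WIDTH_MHZ):
    """Channel width in km/s at rest frequency freq_ghz."""
    return C_KMS * (dnu_mhz * 1e-3) / freq_ghz

print(round(velocity_resolution(110.383), 1))  # 6-5 lines  -> 5.4 km/s
print(round(velocity_resolution(257.527), 1))  # 14-13 lines -> 2.3 km/s
```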
Spectra from all pixels were averaged to obtain a uniform map spacing of 4$\arcsec$, taking into account the different flux calibration and internal pointing errors of each pixel (Marcelino et al.~\textit{in prep.}). \section{Methyl cyanide emission maps}\label{Results:Emission-Maps} \begin{figure*} \centering \includegraphics[width=17cm]{Figures/OrionKL_CH3CN_J=13-12_intensity_maps} \caption{Integrated intensity maps of methyl cyanide $J$=$13_K$--$12_K$ emission across the Orion KL region. Contours are plotted at the 3-sigma level (4.8~K\,km\,s$^{-1}$, grey) and from 20 to 280 in 20~K\,km\,s$^{-1}$ intervals (white lines). The various sources indicated on the maps are as described in Fig.~\ref{Fig:6-5_intensities}.} \label{Fig:13-12_intensities} \end{figure*} Integrated intensity maps for the $J$=$6_K$--$5_K$ and $J$=$13_K$--$12_K$ sets of transitions are shown in Figs.~\ref{Fig:6-5_intensities} and \ref{Fig:13-12_intensities} (the $J$=$12_K$--$11_K$ and $14_K$--$13_K$ lines show similar distributions). The velocity ranges used to determine the integrated intensities were $[-12, +20]$ km\,s$^{-1}$ for the combined $K$=0$+$1 line emission, $[-2,+20]$ for the $K$=2 lines, to exclude emission from adjacent $K$-ladder lines, and $[-10,+25]$ for the higher $K$ lines, with velocity ranges adjusted slightly in some cases to avoid contamination from other species (see also Figs.~\ref{Fig:6-5_transition_labels} and \ref{Fig:12-11_13-12_14-13_transition_labels}). We note that the emission in the $J$=$12_K$--$11_K$ lines (not shown) has a noticeably smoother distribution than the $13_K$--$12_K$ and $14_K$--$13_K$ lines, despite the similar beam size. This is because the $12_K$--$11_K$ data were obtained with the HERA receiver array, whereas the other data were obtained with the EMIR single pixel receivers. 
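The integrated intensities behind these maps are zeroth moments of the data cube over the velocity windows listed above. A minimal sketch of the operation (the array names are hypothetical; the real reduction was done within GILDAS):

```python
import numpy as np

def moment0(cube, velocities, vmin, vmax):
    """Integrate a spectral cube T(v, y, x) over [vmin, vmax] km/s.

    `cube` holds brightness temperatures in K with the velocity axis
    first; `velocities` gives the LSR velocity of each channel in km/s.
    Returns an integrated-intensity map in K km/s.
    """
    sel = (velocities >= vmin) & (velocities <= vmax)
    dv = abs(np.median(np.diff(velocities)))   # channel width in km/s
    return cube[sel].sum(axis=0) * dv

# e.g. the K=2 window quoted in the text:
# m0_k2 = moment0(cube, velocities, -2.0, 20.0)
```

Narrower or shifted windows (as used for the higher $K$ lines) simply change the `vmin`/`vmax` arguments.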
Data obtained with the HERA array are more complicated to reduce because the different sensitivity and sideband rejection of each pixel must be accounted for (see details in the previous section). In addition, small-scale structure can be lost when averaging the spectra from the various receivers in the array, since minor pointing misalignments between pixels lead to ``blurring out'' of small features when they are averaged together. The $13_K$--$12_K$ observations were also obtained under poor weather conditions, leading to higher noise for these spectra. The $K$-ladder transitions of methyl cyanide have upper levels spanning a wide range of energies, so their emission shows significant differences in spatial extent and morphology. In particular, the lower energy transitions typically show emission over a large expanse of the mapped area, whilst the high-$K$ lines arise in a compact region centred on the hot core, with some extension toward the southwest, suggesting an additional contribution from the compact ridge. Since the Einstein $A_{i\!j}$ coefficients do not drop dramatically with increasing $K$ in a given $K$-ladder (see Table~\ref{Table:Line-Properties}), the sudden decrease in size of the emitting region seen in the $K$=4 and higher lines compared to the lower energy transitions is due to a real change in the physical properties of the gas between the inner and outer parts of Orion KL. This implies that the central region around IRc2 must possess higher temperatures than the gas further out. Emission is detected in all transitions up to $K$=10, but is too weak to detect in higher energy $K$-ladder lines. The elongated shape of the emission in the direction from northeast to southwest is consistent with that found in previous single-dish and interferometric observations of methyl cyanide \citep[e.g.,][]{Wilner1994, Wang2010}. 
However, we find the emission to be extended over a wider region -- more than 40 arcsec across -- compared to that found by interferometric observations, since such observations tend to filter out the extended emission on scales much larger than the synthesized beam size (\citealt{Wang2010}, for example, estimated that their interferometric observations missed up to 65\% of the flux in the more extended low-$K$ lines and filtered out structure on spatial scales greater than $\sim$14 arcsec). As such, the maps presented here provide an excellent dataset with which to study the emission properties of methyl cyanide over a much wider area surrounding Orion KL. \begin{figure*} \centering \includegraphics[width=17cm]{Figures/OrionKL_CH3CN_J=13-12_K=3_channel_maps} \caption{Channel maps of methyl cyanide $J$=$13_3$--$12_3$ emission integrated over the velocity ranges indicated on each panel. Contours are plotted from 5 to 70 in 5 K\,km\,s$^{-1}$ intervals. The various sources indicated on the maps are as described in Fig.~\ref{Fig:6-5_intensities}. The direction of the SiO outflow from source \textit{I} \citep[e.g.][]{Plambeck2009} is indicated by the blue arrow. The extended emission appears at velocities of 7--12 km\,s$^{-1}$, typical of the ridge.} \label{Fig:13-12_velocity_cuts} \end{figure*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=3_pv_map}} \caption{Position-velocity diagram for the methyl cyanide $J$=$13_3$--$12_3$ emission along the northeast axis from IRc2. Distances are measured in the direction with position angle 45$^{\circ}$ from the map centre. Contours are plotted from 0.5 to 14 in 0.5 K intervals. 
A second emission peak is visible $\sim$17 arcsec to the northeast of IRc2, at the position $(+12\arcsec,+12\arcsec)$.} \label{Fig:13-12_pv_diagram} \end{figure} The kinematics of the CH$_{3}$CN emitting gas are revealed by channel maps of the emission integrated over adjacent velocity ranges, as shown in Fig.~\ref{Fig:13-12_velocity_cuts} for the $J$=$13_3$--$12_3$ transition. We choose to study the kinematics using this line because lower $K$-ladder transitions significantly overlap one another, whilst higher $K$-ladder transitions have upper state energies of $\sim$200 K or more and so do not emit in the cooler extended gas. Emission from this line is detected over LSR velocities from approximately \hbox{$-10$ to $+25$ km\,s$^{-1}$}. The broad velocity range of this emission suggests a contribution from gas in the plateau, a region associated with outflows generated by star formation within KL. The extremes of the blue- and red-shifted emission arise in a compact region roughly centred on the hot core and slightly elongated along the direction from northeast to southwest. This is consistent with observations of other molecules that trace the low-velocity outflow \citep{Plambeck1982, Genzel1989, Greenhill1998, Plambeck2009, Niederhofer2012}. The emission peaks at $\sim$6 km\,s$^{-1}$, consistent with the systemic velocity of the hot core, and is located approximately midway between the hot core and compact ridge, coinciding with the radio continuum source \textit{n} (indicated on Fig.~\ref{Fig:13-12_velocity_cuts}). 
Emission from the ridge components (both the compact and extended ridge) typically displays velocities in the range \hbox{$+8$ to $+12$ km\,s$^{-1}$}, and the methyl cyanide emission in this range (shown in the bottom-left panel of Fig.~\ref{Fig:13-12_velocity_cuts}) indeed peaks closer to the compact ridge position -- further evidence of emission from this component -- and extends over a much wider region around KL, therefore likely arising in the extended ridge. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/CH3CN_J=6-5_transition_labels}} \caption{The methyl cyanide $J$=$6_K$--$5_K$ spectrum observed toward IRc2 (the central position of the map). The $K$-ladder lines of the main isotopologue of methyl cyanide are labelled in red and those of CH$_3$$^{13}$CN are labelled in light blue. Note that the $K$=5 transition of methyl cyanide is contaminated by the CH$_3$$^{13}$CN $J$=$6_0$--$5_0$ and $J$=$6_1$--$5_1$ lines, as is the case for all $K$-ladder series.} \label{Fig:6-5_transition_labels} \end{figure} We also plot the position-velocity diagram for the same $J$=$13_3$--$12_3$ line along the northeast direction from IRc2 in Fig.~\ref{Fig:13-12_pv_diagram}. This direction follows the transition from the hot core into the quiescent ridge and the resulting diagram shows the shift from strong emission over a broad velocity range in the vicinity of IRc2 to weaker emission with narrow line width in the extended ridge. A second, minor emission peak occurs approximately 17 arcsec northeast of IRc2, corresponding to the position $(+12\arcsec,+12\arcsec)$. This peak appears at $\sim$10 km\,s$^{-1}$ and is not evident in the integrated intensity or channel maps, since the line is narrow and its integrated intensity therefore does not rise significantly above that of its surroundings. There is also evidence for weak emission displaying broad line wings in the region just before this emission peak. 
Taken together, we argue that these features show an abrupt change in the gas kinematics, where the low-velocity outflow meets the quiescent gas in the extended ridge. As we discuss later, this region also has physical properties that suggest it to be a distinct feature with respect to the surrounding gas. The rms noise sensitivity achieved in these maps is typically $T_\mathrm{rms} \! \approx \! 50$--75~mK for the $J$=$6_K$--$5_K$ spectra and $\approx$100--170~mK for the higher frequency transitions. Given the rich line density produced by Orion KL, this high sensitivity means that many emission lines from other species are detected in the observed spectra, in addition to those of CH$_3$CN. To illustrate this, Figs.~\ref{Fig:6-5_transition_labels} and \ref{Fig:12-11_13-12_14-13_transition_labels} show the observed spectra for the $J$=$6_K$--$5_K$ up to $J$=$14_K$--$13_K$ transitions obtained towards IRc2, the central position of the maps. The observed spectra at all positions have frequency ranges much wider than those shown here and line-free regions of the spectra on both sides of the methyl cyanide $K$-ladder series were used to determine appropriate baselines. The $K$-ladder lines of methyl cyanide, its isotopologue CH$_3$$^{13}$CN, and detectable lines from a variety of other species are indicated on the spectra. In all four cases, the $K$=0 and 1 lines significantly overlap one another, leading to difficulties in fitting their line shapes, as discussed in the next section. In addition, the $K$=5 transition of each series of CH$_3$CN $K$-ladder lines is contaminated by emission from the $K$=0 and 1 transitions of CH$_3$$^{13}$CN. Though weak, these contaminating lines nevertheless contribute to the total flux, making the determination of the $K$=5 line intensity somewhat uncertain (also discussed in the next section). 
Apart from the contamination from its isotopologue, the observed lines of methyl cyanide are mostly unblended, with the exception of the $J$=$12_9$--$11_9$ line, which is completely covered by the $^{13}$CO 2--1 line (see Fig.~\ref{Fig:12-11_13-12_14-13_transition_labels}, top panel) and is therefore excluded from the analysis in this paper. Fitting the methyl cyanide lines with Gaussian profiles therefore provides reasonably accurate integrated intensities, as described in the following section, and allows the kinematics of the emitting gas to be studied in more detail. \begin{figure*} \centering \includegraphics[width=17cm]{Figures/CH3CN_J=12-11_transition_labels} \includegraphics[width=17cm]{Figures/CH3CN_J=13-12_transition_labels} \includegraphics[width=17cm]{Figures/CH3CN_J=14-13_transition_labels} \caption{The methyl cyanide $J$=$12_K$--$11_K$ (\textit{top}), $13_K$--$12_K$ (\textit{middle}), and $14_K$--$13_K$ (\textit{bottom}) spectra observed toward IRc2, identifying the various lines detected in these frequency ranges. The $K$-ladder lines of the main isotopologue of methyl cyanide are labelled in red, those of the CH$_3$$^{13}$CN isotopologue are labelled in light blue, and emission lines from other species are indicated in grey. Note that the $J$=$12_9$--$11_9$ transition suffers strong contamination from the $^{13}$CO $J$=2--1 line.} \label{Fig:12-11_13-12_14-13_transition_labels} \end{figure*} \section{Determining temperatures and column densities}\label{Analysis} Given its regular series of $K$-ladder transitions that are closely grouped in frequency and span a wide range of upper level energies, methyl cyanide is an excellent molecule with which to determine rotational temperatures by use of rotation or population diagram analysis \citep{Turner1991, Goldsmith1999}. 
Several assumptions are made when using these techniques, foremost of which is the assumption that the level populations are thermalised and so are governed by collisional excitation (as opposed to radiative excitation). In this section we describe the methods used to fit Gaussian profiles to the methyl cyanide lines at each position in the maps and the subsequent rotation and population diagram analyses based on the fitted line intensities. A more advanced analysis made by fitting LVG models to the observed line intensities is also described. \subsection{Gaussian fits to the emission lines}\label{Analysis:Gaussian-Fits} Integrated intensities for the methyl cyanide emission lines were determined at each map position by simultaneously fitting Gaussian profiles to the $K$=0--10 lines of each observed set of $K$-ladder transitions (with the exception of the $J$=6--5 transitions, for which all six $K$=0--5 lines were fitted). Emission from the $K$=0 and $K$=1 transitions frequently overlaps, particularly for the broader line widths found in the Orion hot core. Disentangling the integrated intensities produced by these two transitions is therefore difficult, especially since these lines can often exhibit strong opacities. Assuming that all lines arise from the same gas within a given map position, we first constrained the fits by requiring that all transitions within a given $K$-ladder have identical line widths and central LSR velocities. The Gaussian profile fits for the $K$=0 and $K$=1 lines were also constrained by requiring their fluxes to be equal, a reasonable approximation since their upper state energies and line strengths are comparable. In our fitting routine we allowed for two kinematic components -- one narrow, with allowed line widths of 3--10 km\,s$^{-1}$ (FWHM), and one broad, with line widths of 10--25 km\,s$^{-1}$ -- in order to adequately fit the line profiles at some map positions. 
The decision to include one or two kinematic components at each position was made based on their relative goodness of fit using the Bayesian information criterion \citep[BIC;][]{Schwarz1978}. This criterion introduces a penalty term for the number of model parameters in the goodness of fit to prevent overfitting. If the BIC value was lower for a fit with a single kinematic component at a given position, this fit was adopted instead of the two-component fit. Generally, we found that two kinematic components were needed to fit the line emission in the central region surrounding the hot core and compact ridge, but that one was sufficient for the majority of positions covering the more quiescent gas. The unconstrained parameters were allowed to vary freely from position to position within the maps, however some degree of consistency was imposed between adjacent map positions by using the best-fit parameters derived at one position as the initial guess values for fitting the lines at the next adjacent position. The map spectra were fitted in sequence, starting at the central position and moving to each neighbouring position following an outward spiral pattern. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10_gaussian-fits-2-component-central}} \caption{Best-fit Gaussian profiles for the $J$=13--12, $K$=0--10 methyl cyanide lines at the central position in the Orion KL map. Identical line widths and central velocities are assumed for all $K$-ladder transitions within a given kinematic component (see Sect.~\ref{Analysis:Gaussian-Fits} for details). Two kinematic components (one narrow, shown in blue, and one broad, shown in green) have been fitted while allowing their respective line widths and central velocities to vary. The combined fit from all Gaussian profiles is shown in red. The inset panel shows a zoom of the region around the $K$=8--10 lines. 
The residual for the two-component fit is shown in black in the bottom panel; the residual for the best one-component fit is shown in grey for comparison. The residual for the fit using one kinematic component is noticeably worse than that with two.} \label{Fig:13-12_gaussian-fits-centre} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10_gaussian-fits-1-component-offset}} \caption{Best-fit Gaussian profiles for the $J$=13--12, $K$=0--6 methyl cyanide lines at the offset position $(+4\arcsec,-16\arcsec)$ in the Orion KL map. Identical line widths and central velocities were assumed for all $K$-ladder transitions within a given kinematic component (see Sect.~\ref{Analysis:Gaussian-Fits}). The $K$-ladder lines above $K$=6 were not detected at this position, so are excluded from the figure. Only one kinematic component (shown in red) is necessary to obtain a good fit in this case.} \label{Fig:13-12_gaussian-fits-offset} \end{figure} The fit to the $J$=$13_K$--$12_K$ spectrum at the central position of IRc2 is shown in Fig.~\ref{Fig:13-12_gaussian-fits-centre}, illustrating that two kinematic components, one narrow ($\sim$8 km\,s$^{-1}$) and one broad ($\sim$19 km\,s$^{-1}$), both with comparable central velocities (5.7 versus 5.2 km\,s$^{-1}$), fit the observed spectrum remarkably well. Note that a fit with only one component yields residuals twice as large. These kinematic signatures can be assigned to emission arising in the hot core and plateau components, respectively. The narrower hot core component is seen to dominate in the high energy $K$-ladder lines (to the right of the spectrum), consistent with the presence of higher temperatures in this source. 
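The constrained fitting scheme and the BIC-based choice between one and two kinematic components can be sketched as follows. This is a simplified illustration using \texttt{scipy}; the velocity offsets, initial guesses, and data arrays are placeholders, not values from the actual fits:

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM_TO_SIGMA = 2.0 * np.sqrt(2.0 * np.log(2.0))

def k_ladder_model(v_offsets):
    """Build a one-component model for a set of K-ladder lines.

    All lines share a single centroid (vlsr) and a single FWHM, as in
    the constrained fits described in the text; each line keeps its own
    amplitude. `v_offsets` are the fixed velocity offsets of the K
    lines relative to K=0, set by their rest frequencies.
    """
    def model(v, vlsr, fwhm, *amps):
        sigma = fwhm / FWHM_TO_SIGMA
        out = np.zeros_like(v, dtype=float)
        for dv, a in zip(v_offsets, amps):
            out += a * np.exp(-0.5 * ((v - vlsr - dv) / sigma) ** 2)
        return out
    return model

def bic(residuals, n_params):
    """Bayesian information criterion for Gaussian errors:
    BIC = n ln(RSS/n) + k ln(n)."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + n_params * np.log(n)

# One- and two-component fits would each be obtained with curve_fit,
# e.g.  popt, _ = curve_fit(k_ladder_model(offsets), v, spec, p0=p0),
# and the model with the lower BIC retained at each map position.
```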
In contrast, more quiescent regions away from the centre require only one kinematic component to fit the line profiles, as shown in Fig.~\ref{Fig:13-12_gaussian-fits-offset}; the fitted profiles have kinematic properties similar to those of the extended ridge (narrow lines with FWHM of $\sim$5 km\,s$^{-1}$ at central velocities of $\sim$9 km\,s$^{-1}$). Figure~\ref{Fig:Kinematics} shows the kinematic properties (LSR velocity $V_\mathrm{LSR}$ and FWHM line width $\Delta V$) of the best-fit Gaussian profiles for the $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$ sets of $K$-ladder transitions across the Orion KL region. Separate panels show the properties of the narrow component, the broad component, and the intensity-weighted average properties from these fits. The extended CH$_3$CN emission shows a sharp north-south velocity gradient around IRc2, in addition to the more gradual velocity gradient over larger scales previously observed in its $J$=$5_K$--$4_K$ line emission by \citet{Wilner1994} and in other molecules that trace the extended ridge \citep[e.g.,][]{Vogel1984, Womack1991, Ungerechts1997}. Since the most blue-shifted emission extends to the northeast of the hot core, much like the blue-shifted SiO emission studied by \citet{Plambeck2009}, and exhibits broader line widths, this emission is most likely associated with the low-velocity outflow from the KL region. There is an additional emission feature displaying broad line widths ($\gtrsim$16 km\,s$^{-1}$) that extends toward the eastern edge of the maps. The emission in this outer region is generally quite weak, so the derived kinematic properties are less reliable and may be the result of fitting baseline ripples; however, the fact that we consistently see the same region of broad line width emission in all sets of $K$-ladder transitions suggests that it could be a real feature. 
If so, this region may be associated with the high-velocity outflow, as traced by the bright H$_2$ emission stretching northwest and east of Orion KL \citep{Beckwith1978} and the molecular ``fingers'' \citep{Taylor1984, Allen1993} that may have originated from a prior explosive event, as proposed by \citet{Bally2005}, \citet{Zapata2009, Zapata2011}, \citet{Niederhofer2012}, and \citet{Nissen2012}. \begin{figure*} \centering \includegraphics[width=17cm]{Figures/OrionKL_CH3CN_kinematics} \caption{Distribution of the LSR velocity ($V_\mathrm{LSR}$, coloured contours) and line width ($\Delta V$, greyscale maps) of methyl cyanide $J$=$12_K$--$11_K$ (\textit{left}), $J$=$13_K$--$12_K$ (\textit{centre}), and $J$=$14_K$--$13_K$ (\textit{right}) emission across the Orion KL region, as determined from simultaneous Gaussian fits to the $K$=\,0\,--\,10 lines (see Sect.~\ref{Analysis:Gaussian-Fits} for details). The three rows show the kinematic properties from the two-component fits for the narrow component (\textit{top}), the broad component (\textit{middle}), and the intensity-weighted average of the two (\textit{bottom}). Thick white contours on the bottom panels indicate the regions where two kinematic components were required to obtain a good fit to the line profiles. The various sources indicated are as described in Fig.~\ref{Fig:6-5_intensities}.} \label{Fig:Kinematics} \end{figure*} In the following analysis, at positions where two Gaussian components were required to fit the line profiles, we use the intensity-weighted average line width and the total integrated intensity from both components. This is clearly a simplifying assumption, and in the central region of the maps, where various distinct sources contribute to the emission within the beam, deriving physical properties from a single-component analysis can only yield average conditions for the region, masking the more complex underlying structure. 
However, the focus of this paper is on the conditions within the extended gas, outside of the hot core and away from IRc2. In the regions further from the centre, a single kinematic component typically dominates the line intensity, with little need for a second component to fit the line profile. Due to the limited spectral resolution of the data (2--5 km\,s$^{-1}$) and the weaker emission from the more extended gas, we have found that treating the two components separately can lead to inconsistent and sometimes unphysical results. We therefore consider the intensity-weighted average line width of the fitted profiles (which is generally dominated by only one of the components) to be the more reliable estimate in the weaker extended emission. The crowded region encompassing the hot core and compact ridge deserves a more detailed analysis, which is beyond the scope of this paper. Instead, such an analysis has been carried out as part of a separate, dedicated study of the central position that includes all methyl cyanide lines detected in both IRAM and \textit{Herschel}-HIFI spectral line surveys of the source \citep{Crockett2014}. This study treats the distinct kinematic components separately and is therefore better able to discuss the detailed properties of the central part of Orion KL. We assume that the total uncertainty on the integrated line intensity is given by the quadrature sum of the uncertainty from the Gaussian fit and the estimated flux uncertainty of the observation. An additional 25\% uncertainty is assumed for the line intensity of each $K$=5 transition to account for contamination from the overlapping CH$_3$$^{13}$CN lines (see Sect.~\ref{Results:Emission-Maps}). This represents an upper limit to the additional emission contributed by these lines, since they are expected to be weak. 
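The adopted error budget amounts to a simple quadrature sum; a minimal sketch (the function name is ours, and the 25\% terms are the calibration and blend contributions described above):

```python
import math

def line_intensity_uncertainty(intensity, fit_sigma,
                               calib_frac=0.25, k5_blend=False):
    """Total uncertainty on an integrated line intensity (K km/s).

    Quadrature sum of the Gaussian-fit uncertainty and the ~25%
    flux-calibration uncertainty quoted in the observations section;
    K=5 lines receive an extra 25% term for the CH3(13)CN blend.
    """
    terms = [fit_sigma ** 2, (calib_frac * intensity) ** 2]
    if k5_blend:
        terms.append((0.25 * intensity) ** 2)
    return math.sqrt(sum(terms))
```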
\subsection{Constructing rotation diagrams}\label{Analysis:Rotation-Diagrams} Estimates for the excitation temperature and column density of a molecular species can be determined via rotation diagram analysis \citep[e.g.,][]{Turner1991}, by assuming that its level populations are in local thermodynamic equilibrium (LTE), that the emission lines are optically thin, and that the emission arises in a source of uniform density and temperature, with no appreciable background radiation. The details of this analysis technique are described in Appendix~\ref{Appendix:Rotation-Diagrams}. We have performed this analysis and constructed rotation diagrams for the methyl cyanide emission lines at all positions in our maps, thereby obtaining the distribution of rotational temperatures and total column densities across the observed region. The relevant spectroscopic quantities for our observed transitions of methyl cyanide have been taken from the Cologne Database for Molecular Spectroscopy \citep[CDMS;][]{Muller2005} and are listed in Table~\ref{Table:Line-Properties}. As previously discussed, at positions where two kinematic components were necessary to reproduce the line profiles, the total integrated intensity from both components is used in the rotation diagram analysis. Since the majority of the extended emission displays line profiles that are clearly dominated by one kinematic component, it is generally the case that this approach yields the properties for the dominant component. However, for the central region where multiple kinematic components contribute significantly to the total emission, this approach yields the average properties at these positions. This analysis has been performed separately for each set of $K$-ladder transitions. The inferred column densities are beam-averaged and do not account for the possible dilution of emission from unresolved clumps. Therefore, the column densities obtained with this method represent lower limits. 
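In the optically thin LTE limit, the rotation diagram reduces to a straight-line fit of $\ln(N_\mathrm{up}/g_\mathrm{up})$ against $E_\mathrm{up}$, whose slope gives $-1/T_\mathrm{rot}$ and whose intercept gives $\ln(N_\mathrm{tot}/Q)$. A compact sketch of that fit in Python (CGS constants; no opacity or beam-dilution corrections, which are deferred to the population diagram analysis):

```python
import numpy as np

H = 6.62607e-27    # Planck constant (erg s)
K_B = 1.38065e-16  # Boltzmann constant (erg/K)
C = 2.99792458e10  # speed of light (cm/s)

def upper_column(W_K_kms, freq_ghz, A):
    """Upper-level column density (cm^-2) for optically thin emission:
    N_up = 8 pi k nu^2 W / (h c^3 A), with W the integrated intensity."""
    nu = freq_ghz * 1e9
    W = np.asarray(W_K_kms) * 1e5          # K km/s -> K cm/s
    return 8.0 * np.pi * K_B * nu ** 2 * W / (H * C ** 3 * np.asarray(A))

def rotation_diagram(W, freq_ghz, A, g_up, E_up):
    """Least-squares line through ln(N_up/g_up) vs E_up (in K).

    Returns (T_rot in K, intercept = ln(N_tot / Q)).
    """
    y = np.log(upper_column(W, np.asarray(freq_ghz), A) / np.asarray(g_up))
    slope, intercept = np.polyfit(np.asarray(E_up), y, 1)
    return -1.0 / slope, intercept
```

The spectroscopic inputs ($\nu$, $A_{i\!j}$, $g_\mathrm{up}$, $E_\mathrm{up}$) are those listed in Table~\ref{Table:Line-Properties}.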
For each set of $K$-ladder transitions, all the lines are observed simultaneously at one frequency setting and hence with the same beam size, so beam dilution does not affect the relative line strengths. The results from this analysis are presented and discussed in detail in Sect.~\ref{Results:Rotation-Diagrams}. \subsection{Accounting for line opacity and beam dilution}\label{Analysis:Opacity-Dilution} The observed line intensities are further reduced by line opacity when the column of emitting material is large, and by beam dilution when the emitting region is smaller than the solid angle subtended by the telescope beam. In the case of methyl cyanide, the issue of opacity can be especially important for the low-$K$ transitions. Both line opacity and beam dilution are accounted for by the population diagram technique (as distinct from rotation diagrams), described in detail in Appendix~\ref{Appendix:Population-Diagrams}, which we use to further constrain the properties of the emitting gas at each position in the maps. In our analysis, we consider a region of parameter space covering rotational temperatures in the range 10\,--\,800~K, total column densities $N_\mathrm{tot}$\,=\,$10^{14}$\,--\,$10^{18}$~cm$^{-2}$, and beam dilution factors $f_\mathrm{beam}$\,=\,$10^{-2}$\,--\,$10^{0}$. The best-fit parameters obtained from this analysis are presented and discussed in Sect.~\ref{Results:Population-Diagrams}. \subsection{LVG model fits}\label{Analysis:LVG-Model-Fits} Population diagrams offer a powerful tool for determining rotational temperatures and analysing the excitation of a molecule; however, the relation to kinetic temperature is always dependent upon the assumption of local thermodynamic equilibrium. While this may be a reasonable assumption for the densest parts of Orion KL, the more tenuous gas in the extended cloud is likely to be below the critical densities needed to excite the transitions of methyl cyanide studied here ($n_\mathrm{crit} \!\sim\!
10^7$ cm$^{-3}$ at 100 K for the $J$=$12_K$--$11_K$ and $13_K$--$12_K$ transitions), leading to subthermally excited states. Furthermore, the opacities of the low-$K$ transitions of CH$_3$CN are known to be high, leading to a flattening of the slope in the population diagram and resulting in rotational excitation temperatures and total column densities that are over- and under-predicted, respectively. We therefore also consider more sophisticated models of the emission; specifically, radiative transfer models that use the large velocity gradient (LVG) approximation to simplify the line transfer problem. By fitting LVG model predictions to the observed line intensities, it is possible to derive the kinetic temperature $T_\mathrm{kin}$, volume number density of molecular hydrogen $n(\mathrm{H_2})$, total column density of methyl cyanide $N_\mathrm{tot}$, and beam filling factor of the emitting region $f_\mathrm{beam}$. In order to achieve this for each position in our maps, we have run a grid of LVG models to predict line intensities for the observed methyl cyanide transitions over a region of parameter space that spans $T_\mathrm{kin}$=\,50\,--\,500 K, $n(\mathrm{H_2})$\,=\,$10^{4}$\,--\,$10^{8}$ cm$^{-3}$, $N_\mathrm{tot}$\,=\,$10^{14}$\,--\,$10^{18}$ cm$^{-2}$, and $f_\mathrm{beam}$\,=\,$10^{-2}$\,--\,$10^{0}$. We use the Madex LVG code \citep{Cernicharo2012} to perform these calculations, adopting the collisional excitation rates determined by \citet{Green1986} and extrapolated to higher temperatures. Best-fit models for each map position were then determined by performing a $\chi^{2}$ minimisation over the entire grid of models, comparing the predicted integrated line intensities to those obtained by the Gaussian fits to the observed spectra (summing the intensities of both fitted components at positions where two components were used; see Sect.~\ref{Analysis:Gaussian-Fits} for details). 
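The grid search itself is a brute-force $\chi^{2}$ minimisation; a schematic version is given below. The radiative transfer step is not reproduced here (the model intensities would come from the Madex LVG code), so the grid is represented as a generic mapping from parameter tuples to predicted line intensities:

```python
import numpy as np

def best_fit_from_grid(obs, sigma, grid):
    """Brute-force reduced chi-squared minimisation over a model grid.

    `grid` maps parameter tuples (T_kin, n_H2, N_tot, f_beam) to arrays
    of predicted integrated intensities for the observed lines (in the
    actual analysis these come from Madex LVG calculations). `obs` and
    `sigma` are the observed intensities and their uncertainties.
    Returns (best parameter tuple, reduced chi-squared).
    """
    obs = np.asarray(obs, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    ndof = max(obs.size - 4, 1)        # four free parameters
    best = None
    for params, model in grid.items():
        chi2 = np.sum(((obs - np.asarray(model)) / sigma) ** 2) / ndof
        if best is None or chi2 < best[1]:
            best = (params, chi2)
    return best
```

In practice this loop runs once per map position, with the same precomputed grid reused throughout.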
As with the population diagram analysis, we determine the reduced chi-squared for a given set of observed and model line intensities using equation~\ref{Equation:Chi-Squared} (see Appendix~\ref{Appendix:Population-Diagrams} for details). The best-fit parameters are presented and discussed in Sects.~\ref{Results:LVG-Models} and \ref{Results:Combined-Models}. \subsection{Limitations of the analysis methods}\label{Analysis:Limitations} The three techniques discussed above for deriving physical properties from the observed methyl cyanide lines are all subject to certain limitations and make assumptions about the emitting gas. The rotation diagram method yields rotational temperatures, not kinetic temperatures, with the two being equal only when the gas is sufficiently dense that collisions dominate its excitation. In addition, the gas is assumed to be optically thin and the source is assumed to fill the telescope beam; thus the derived column densities are beam-averaged values. The central region of Orion KL is sufficiently dense that methyl cyanide is likely to be thermalised, so that the derived rotational temperatures are reasonably close to the kinetic temperatures. However, the expected column densities are high enough in this region that line opacities become non-negligible, more so in the low-lying $K$-ladder lines than in the higher energy transitions. This reduces the observed intensities of these low-$K$ lines, leading to lower values being derived for their upper state column densities and a ``flattening'' of the slope in the resulting rotation diagram, which yields higher rotational temperatures.
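For reference, the optically thin rotation-diagram fit reduces to a straight-line fit of $\ln(N_u/g_u)$ against $E_u/k$, with slope $-1/T_\mathrm{rot}$ and intercept $\ln(N_\mathrm{tot}/Q)$; a minimal sketch with synthetic numbers (the partition function here is a toy power law, not the CH$_3$CN one):

```python
import numpy as np

# Minimal rotation-diagram sketch (optically thin LTE): fit a straight line
# to ln(N_u/g_u) versus E_u/k; the slope gives -1/T_rot and the intercept
# gives ln(N_tot / Q(T_rot)).  All numbers below are synthetic.

def rotation_diagram(E_u, ln_Nu_gu, Q):
    slope, intercept = np.polyfit(E_u, ln_Nu_gu, 1)
    T_rot = -1.0 / slope
    N_tot = Q(T_rot) * np.exp(intercept)
    return T_rot, N_tot

# Synthetic data generated for T_rot = 200 K and N_tot/Q = 1e12 cm^-2
E_u = np.array([69.0, 122.0, 212.0, 338.0, 500.0])  # upper-state energies (K)
ln_Nu_gu = np.log(1e12) - E_u / 200.0
Q = lambda T: 1.0e3 * (T / 200.0) ** 1.5            # toy partition function
T_rot, N_tot = rotation_diagram(E_u, ln_Nu_gu, Q)   # recovers 200 K, 1e15
```

The flattening effect of opacity described above corresponds to the low-$E_u$ points being pushed downward on this diagram, which makes the fitted slope shallower and hence $T_\mathrm{rot}$ spuriously higher.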
\begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=6-5_RT_Trot_Ntot_map}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=12-11_RT_Trot_Ntot_map}}\\ \vspace{5mm} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_RT_Trot_Ntot_map}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=14-13_RT_Trot_Ntot_map}} \caption{Results of the rotation diagram analysis assuming optically thin emission. Maps of methyl cyanide rotational temperature ($T_\mathrm{rot}$, colour scale) and total beam-averaged column density ($N_\mathrm{tot}$, overlaid contours) across the Orion KL region. Both quantities are determined from rotation diagram analyses of the $J$=$6_K$--$5_K$, $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$ line emission at each map position, neglecting the effects of line opacity and beam dilution (see Sect.~\ref{Analysis:Rotation-Diagrams} for details). Column density contours are plotted at 0.5$\times$10$^{15}$ cm$^{-2}$ intervals, starting at 1.0$\times$10$^{15}$ cm$^{-2}$ (black lines). The various sources indicated are as described in Fig.~\ref{Fig:6-5_intensities}, with the approximate location and extent of the hot zone seen to the northeast of IRc2 also marked by a red circle on each panel (defined in Sect.~\ref{Results:Temperature-Maps}).} \label{Fig:RT_Trot_Ntot_maps} \end{figure*} Population diagram analysis including the effects of line opacity and beam dilution can account for optically thick emission and unresolved source sizes. However, the assumption of LTE remains, and this method cannot provide an indication of the volume density of the emitting gas. There is also significant degeneracy between the total column density and the source dilution factor, since one can increase to compensate for the other. The change in line opacity that comes with increasing column density can help to break this degeneracy somewhat, but not completely.
The slope of the population diagram is also linked to the line opacity, introducing some degeneracy in the temperature-column density plane of parameter space (since the opacity is governed by both), though less so than the coupling between total column and dilution factor. Finally, LVG model fits to the observed line intensities can provide additional constraints on the volume density of the gas, though in the case of Orion KL, where densities are high enough to thermalise the population levels, these models may only provide lower limits for the density. LVG models also suffer the same degeneracies as population diagram techniques, though by combining data from various sets of $K$-ladder transitions observed with different beam sizes, some of this degeneracy can be lifted. The collisional excitation rates for CH$_3$CN used in the LVG models are derived from state-to-state rates, $Q(L,M)$, calculated by \citet{Green1986} for temperatures up to 140 K. Rates for higher temperatures were obtained by extrapolation. This is generally considered to be more reliable than extrapolating to lower temperatures, since the rates tend to vary more smoothly at high temperatures. However, the uncertainties involved are hard to quantify and we therefore adopt the estimated uncertainties given in the original paper, namely somewhere between 50\% and a factor of two. 
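The opacity correction underlying the population-diagram method multiplies each optically thin upper-state column by the factor $C_\tau = \tau/(1-e^{-\tau})$; a minimal sketch with illustrative (not fitted) opacities:

```python
import numpy as np

# Sketch of the opacity correction used in the population-diagram method:
# a line of opacity tau suppresses the apparent upper-state column by the
# factor C_tau = tau / (1 - exp(-tau)), so the true column is the apparent
# (optically thin) value multiplied by C_tau.  Beam dilution further scales
# the observed intensities by f_beam.

def C_tau(tau):
    tau = np.asarray(tau, dtype=float)
    # For very small tau the correction tends to 1 (optically thin limit)
    return np.where(tau > 1e-6, tau / (1.0 - np.exp(-tau)), 1.0)

# Example: the low-K lines are the thickest.  With tau ~ 1.5, the K=0 line's
# apparent column is suppressed by almost a factor of two, which is what
# flattens the diagram's slope and biases T_rot upward.
tau = np.array([1.5, 1.0, 0.4, 0.1])   # K = 0..3, illustrative values
correction = C_tau(tau)                # largest for K=0, ~1 for K=3
```

Because $\tau$ itself depends on both $N_\mathrm{tot}$ and $T_\mathrm{rot}$, this correction is what couples the two parameters in the fit, producing the degeneracy in the temperature-column density plane noted above.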
\begin{table*} \caption{Temperatures and methyl cyanide column densities for IRc2 and the northeastern hot zone derived from analysis of the $K$-ladder emission lines.} \label{Table:Analysis-Results} \centering \begin{tabular}{c c lc c lc c lc} \hline\hline CH$_3$CN & & \multicolumn{2}{c}{Rotation Diagram} & \hspace{5mm} & \multicolumn{2}{c}{Population Diagram} & \hspace{5mm} & \multicolumn{2}{c}{LVG Model Fit} \\ $J'$$\to$$J''$ & & \multicolumn{1}{c}{$T_\mathrm{rot}$} & \multicolumn{1}{c}{$N_\mathrm{beam}$} & & \multicolumn{1}{c}{$T_\mathrm{rot}$} & \multicolumn{1}{c}{$N_\mathrm{source}$} & & \multicolumn{1}{c}{$T_\mathrm{kin}$} & \multicolumn{1}{c}{$N_\mathrm{source}$} \\ & & \multicolumn{1}{c}{(K)} & \multicolumn{1}{c}{(10$^{15}$ cm$^{-2}$)} & & \multicolumn{1}{c}{(K)} & \multicolumn{1}{c}{(10$^{16}$ cm$^{-2}$)} & & \multicolumn{1}{c}{(K)} & \multicolumn{1}{c}{(10$^{16}$ cm$^{-2}$)} \\ \hline \\[-3.5mm] \multicolumn{10}{c}{IRc2 $(+0\arcsec,+0\arcsec)$} \\ \hline \\[-3mm] $6_K$--$5_K$ & & 495 & 14 & & 260 & 20 & & 270 & 7.9 \\[2mm] $12_K$--$11_K$ & & 415 & 6.8 & & 230 & 6.3 & & 190 & 1.3 \\[2mm] $13_K$--$12_K$ & & 329 & 4.9 & & 220 & 6.3 & & 270 & 1.6 \\[2mm] $14_K$--$13_K$ & & 442 & 6.1 & & 290 & 6.3 & & 340 & 1.6 \\[1mm] \hline \\[-3.5mm] \multicolumn{10}{c}{Hot Zone Peak} \\ \hline \\[-3mm] $6_K$--$5_K$ & & 517 & 5.7 & & 300 & 16 & & 370 & 5.0 \\[2mm] $12_K$--$11_K$ & & 415 & 1.4 & & 550 & 6.3 & & 260 & 2.0 \\[2mm] $13_K$--$12_K$ & & 402 & 1.3 & & 400 & 5.0 & & 360 & 0.8 \\[2mm] $14_K$--$13_K$ & & 476 & 0.4 & & 510 & 5.0 & & 460 & 1.0 \\[1mm] \hline \\[-3.5mm] \multicolumn{10}{c}{Hot Zone Average} \\ \hline \\[-3mm] $6_K$--$5_K$ & & 415 & 5.9 & & 220 & 12 & & 270 & 5.4 \\[2mm] $12_K$--$11_K$ & & 313 & 1.7 & & 250 & 4.7 & & 200 & 1.1 \\[2mm] $13_K$--$12_K$ & & 313 & 1.4 & & 290 & 4.8 & & 240 & 1.0 \\[2mm] $14_K$--$13_K$ & & 418 & 1.2 & & 360 & 5.2 & & 350 & 1.0 \\[1mm] \hline \end{tabular} \tablefoot{Hot zone values are quoted for the peak temperature position 
(``Peak'') and for the average over all positions (``Average'') within the circled region on each map (see Figs.~\ref{Fig:RT_Trot_Ntot_maps}, \ref{Fig:PD_Trot_Nbeam_maps}, and \ref{Fig:LVG_Tkin_Nbeam_maps}).} \end{table*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10-rotation-diagram-central}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10-rotation-diagram-hotzone}} \caption{Rotation diagrams for the CH$_3$CN $J$=$13_K$--$12_K$ lines observed at the centre of the map (\textit{left}), corresponding to the location of IRc2, and an offset position (\textit{right}), corresponding to a temperature peak within the northeastern hot zone.} \label{Fig:13-12_rotation-diagrams} \end{figure*} All these methods suffer from the additional limitation that they assume constant temperature and abundance in the emitting gas, and that the emission from each transition line arises from the same source size. More complex models introducing temperature and abundance gradients are beyond the scope of this study and in any case suffer other issues of degeneracy, requiring some prior knowledge of the source structure in order to constrain them. Properties derived from the techniques included in this study are therefore considered average values for the gas observed within the telescope beam at each map position. In the case where the gas resides in unresolved clumps, these properties are average values for the ensemble of clumps within the beam. Given the unknown degree of inaccuracy introduced by failing to account for possible gradients or more complicated source structure, it is difficult to quantify the uncertainties on the derived properties in this analysis. Formal errors determined from fitting slopes to the rotation diagrams and from the $\chi^{2}$ minimisation procedure were generally found to be $\lesssim 30$\% for the rotational and kinetic temperatures and less than a factor of 2 for the column densities. 
These uncertainties rise for positions near the map edges, where the emission is weaker, but we discard these points from our analysis, as discussed in the next section. Since it is difficult to place a value on the uncertainties introduced by the more general assumption of uniform source properties, we prefer to adopt conservative estimates for the uncertainties, namely 30\% for the derived temperatures and a factor of 2--3 for the column densities. \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=6-5_PD_Trot_Nbeam_map}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=12-11_PD_Trot_Nbeam_map}}\\ \vspace{5mm} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_PD_Trot_Nbeam_map}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=14-13_PD_Trot_Nbeam_map}} \caption{Results of the population diagram analysis, accounting for the effects of line opacity and beam dilution. Maps of methyl cyanide rotational temperature ($T_\mathrm{rot}$, colour scale) and beam-averaged column density ($N_\mathrm{beam}$, overlaid contours) across the Orion KL region. Both quantities are determined from population diagram analyses of the $J$=$6_K$--$5_K$, $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$ line emission at each map position (see Sect.~\ref{Analysis:Opacity-Dilution} for details). Column density contours are plotted at 0.5$\times$10$^{15}$ cm$^{-2}$ intervals, starting at 1.0$\times$10$^{15}$ cm$^{-2}$ (black lines). 
The locations of the hot core, compact ridge, radio continuum sources, and the northeastern hot zone are indicated as in previous figures.} \label{Fig:PD_Trot_Nbeam_maps} \end{figure*} \section{Temperature and column density distributions in Orion KL}\label{Results:Temperature-Maps} \subsection{Rotation diagram results}\label{Results:Rotation-Diagrams} Having fitted Gaussians to the methyl cyanide emission lines at each map position (see Sect.~\ref{Analysis:Gaussian-Fits}) and determined rotational excitation temperatures and total beam-averaged column densities from the resulting rotation diagrams (Sect.~\ref{Analysis:Rotation-Diagrams}), we show the temperature and column density distributions inferred from the $J$=6--5, 12--11, 13--12, and 14--13 sets of $K$-ladder emission lines in Fig.~\ref{Fig:RT_Trot_Ntot_maps}. In all the maps presented in this section, we have excluded positions where the emission is too weak to obtain reliable fits to the rotation diagrams. To achieve this, we have imposed a lower limit on the integrated intensity of the $K$=5 line in each set of $K$-ladder transitions, only plotting the results for positions where the total $K$=5 integrated intensity determined from the Gaussian fit corresponds to a 3-sigma detection or better (based on the $T_\mathrm{rms}$ and intensity-weighted line width). The choice of the $K$=5 line for this detection limit serves as a reasonable compromise: imposing a stricter detection limit on the higher energy transitions causes too many points to be excluded, while applying a limit to only the lower $K$ lines includes positions where too few lines are detected to be able to reliably determine the gas properties. The resulting temperature and column density distributions should therefore be reliable within the uncertainties discussed in Sect.~\ref{Analysis:Limitations}.
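The 3-sigma cut on the $K$=5 integrated intensity can be sketched as follows, using the common estimate $\sigma_I \approx T_\mathrm{rms}\sqrt{\Delta v_\mathrm{line}\,\delta v_\mathrm{chan}}$ for the uncertainty on an integrated intensity; all numbers, including the channel width, are illustrative placeholders rather than values from the observations:

```python
import numpy as np

# Sketch of the 3-sigma selection applied to the K=5 line at each position.
# One common estimate of the uncertainty on an integrated intensity is
# sigma_I = T_rms * sqrt(dv_line * dv_chan), with dv_line the fitted line
# width and dv_chan the channel width.

def detected(W, T_rms, dv_line, dv_chan=1.0, snr=3.0):
    """True where integrated intensity W (K km/s) exceeds snr * sigma_I."""
    sigma_I = T_rms * np.sqrt(dv_line * dv_chan)
    return W > snr * sigma_I

# Keep only map positions whose K=5 line passes the cut
W_k5 = np.array([12.0, 0.8, 4.0])      # K km/s at three positions
T_rms = np.array([0.15, 0.15, 0.30])   # K
dv = np.array([8.0, 8.0, 10.0])        # km/s (fitted line widths)
mask = detected(W_k5, T_rms, dv)       # the middle position is rejected
```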
Though the temperature distributions derived from the separate sets of $K$-ladder transitions show some variations, a common feature seen in all the maps is a region of hot ($\sim$300--500 K) gas centred on the hot core (indicated by a red plus sign on each figure) with temperatures dropping off in the direction toward the compact ridge (indicated by a red cross). The warm gas also extends to the northeast of the hot core and in several cases shows a temperature peak at a position approximately 10--15 arcsec away from IRc2, with values reaching above 400~K. The morphology of this extended warm region varies depending on the set of $K$-ladder transitions used to derive the temperatures, but it typically shows an arc-like structure around the northeastern side of IRc2 and an abrupt drop in temperature on its northernmost edge. The quiescent gas further from the KL nebula is cooler ($< 200$ K), with relatively low column densities ($<$5$\times$10$^{14}$ cm$^{-2}$), but the weak emission in the outer regions of the maps means that the signal-to-noise ratio is significantly worse, and attempts to construct rotation diagrams for some positions lead to unphysical temperatures and column densities. These points have therefore been omitted from the maps, as discussed above. The extended methyl cyanide emission around Orion KL has previously been mapped by \citet{Wilner1994}, who infer a temperature and column density toward the CS1 condensation in the northeastern ridge (marked as a white cross in the figures) of 50--80 K and $\sim$10$^{14}$ cm$^{-2}$, respectively. These values are consistent with those we obtain from the rotation diagram analysis of the various $K$-ladder lines considered here, with temperatures in the range 70--100 K and column densities of 1--2$\times$10$^{14}$ cm$^{-2}$.
The region around the BN source displays significantly higher temperatures than the quiescent material, typically 350--400 K, and column densities of order 10$^{15}$ cm$^{-2}$, intermediate between those inferred for the hot core and those for the extended ridge. Since the separation between IRc2 and BN ($\approx$9$\arcsec$) is less than the beam size of the telescope used to obtain these maps, the temperature and column densities we derive for the BN position may be slightly affected by emission from the hot core that falls within the beam. However, there is clear evidence that the methyl cyanide around BN exhibits raised temperatures and somewhat higher column densities than the surrounding medium, consistent with heating by the embedded BN object. The region observed toward the central position of the maps is known to host a number of distinct kinematic sources, including emission from the hot core and plateau, amongst others. Since we analyse the total line emission at each map position, rather than deriving properties for individual kinematic components, our resulting temperatures and column densities for this crowded region are average values from all sources within the beam. A detailed analysis of the separate kinematic components seen at the central position has been carried out in a separate study \citep{Crockett2014} and finds that the derived properties do indeed differ significantly for the distinct source components. At positions further from the centre, however, the emission is typically dominated by a single kinematic component. The area of hot gas to the northeast of IRc2, which we will subsequently refer to as the ``hot zone'', appears in all the temperature maps obtained from the rotation diagram analysis. Peak temperatures in this region vary in location from map to map, but typically occur at or near the position $(+12\arcsec,+8\arcsec)$ relative to the map centre, and always within one beam width of this point. 
Accounting for the variations in the shape and position of the hot zone feature seen in the temperature maps produced by each analysis method, its approximate extent is indicated by a red circle on Fig.~\ref{Fig:RT_Trot_Ntot_maps} and on subsequent maps. Rotation diagrams for the methyl cyanide $J$=$13_K$--$12_K$ lines observed toward IRc2 and toward the peak temperature position within the northeastern hot zone are shown in Fig.~\ref{Fig:13-12_rotation-diagrams}. Table~\ref{Table:Analysis-Results} lists the rotational temperatures and beam-averaged CH$_3$CN column densities we derive at the location of IRc2 from the $6_K$--$5_K$ to $14_K$--$13_K$ sets of $K$-ladder lines. The peak temperatures and corresponding beam-averaged CH$_3$CN column densities that we find within the hot zone, in addition to the average values for this region, are also listed. We adopt conservative estimates for the uncertainties of 30\% for the temperature values and a factor of 2--3 for the column densities, as discussed in Sect.~\ref{Analysis:Limitations}. The upper state column densities for the $K$=10 line on the rotation diagrams in Fig.~\ref{Fig:13-12_rotation-diagrams} (rightmost points) appear to fall outside the general trend displayed by the lower-$K$ lines, especially so for the hot zone position to the northeast of IRc2, where the $K$=10 point is at least a factor of three higher than the $K$=9 value. This deviation may be due to a baseline ripple or other artefact in the raw spectra. If the $K$=10 value is excluded from the fit, then the resulting rotational temperature and beam-averaged column density drop to 361$\pm$65~K and 8.7$\pm$2.7$\times$10$^{14}$~cm$^{-2}$. These values are consistent with those obtained with the $K$=10 point included, within the fit uncertainty, and the temperature remains significantly higher than the surrounding region. 
For IRc2, removing the $K$=10 point leads to a change of only a few degrees in the rotational temperature and negligible change in the best-fit column density. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10-population-diagram-central}}\\ \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10-population-diagram-hotzone}} \caption{Population diagrams comparing observations (blue dots) and best-fit LTE model predictions (red crosses) for CH$_3$CN $J$=$13_K$--$12_K$ lines at the map centre (\textit{top}), corresponding to the location of IRc2, and the offset position (\textit{bottom}), corresponding to a temperature peak within the northeastern hot zone.} \label{Fig:13-12_population-diagrams} \end{figure} The temperatures and column densities derived from the $J$=$6_K$--$5_K$ lines are significantly higher than those derived from the higher $K$-ladder transitions. The higher temperatures are likely due to the fact that the $6_K$--$5_K$ lines have low upper state energies ($<$200 K) and so cannot reliably constrain the temperature when the gas is warm, as is the case in the central region of Orion KL. Conversely, the larger column densities obtained indicate that these lines probe more of the cool gas along the line of sight, owing to their lower critical densities and energies. \subsection{Population diagram results}\label{Results:Population-Diagrams} The distributions of rotational excitation temperature and beam-averaged methyl cyanide column density that we obtain from the population diagram analysis of the $J$=6--5, 12--11, 13--12, and 14--13 sets of $K$-ladder emission lines are shown in Fig.~\ref{Fig:PD_Trot_Nbeam_maps}. The results obtained from this analysis account for line opacity and beam dilution effects (see Sect.~\ref{Analysis:Opacity-Dilution}), so represent a more detailed treatment of the emission with respect to the pure rotation diagram technique (Fig.~\ref{Fig:RT_Trot_Ntot_maps}). 
The temperatures derived in this analysis are typically lower than those obtained from the rotation diagrams discussed above. The main reason for this is the consideration of the line opacity, which can be significant in the $K$=0 and 1 lines, leading to ``flatter'' slopes in the rotation diagrams and correspondingly higher excitation temperatures, as discussed in Sect.~\ref{Analysis:Limitations}. Accounting for the line opacity therefore reduces the inferred excitation temperatures. We find that the temperature and column density distributions derived from the population diagram analysis are broadly similar to those obtained from rotation diagrams. Most notably, the hot zone again appears as a high temperature region to the northeast of IRc2. Some features become more prominent while others are no longer present, due to the introduction of line opacity into the analysis and the weak degeneracy that exists between opacity and excitation temperature (see Sect.~\ref{Analysis:Limitations}). These differences are unlikely to be due to large uncertainties in the fitting procedure, since positions with weak or undetected lines have been excluded from the maps, as discussed at the start of this section. \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=6-5_LVG_Tkin_Nbeam_map}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=12-11_LVG_Tkin_Nbeam_map}}\\ \vspace{5mm} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_LVG_Tkin_Nbeam_map}\hspace{5mm} \includegraphics{Figures/OrionKL_CH3CN_J=14-13_LVG_Tkin_Nbeam_map}} \caption{Results of the LVG model fits. 
Maps of the kinetic temperature ($T_\mathrm{kin}$, colour scale) and beam-averaged column density ($N_\mathrm{beam}$, overlaid contours) across the Orion KL region determined by fitting LVG models to the observed lines of CH$_3$CN $J$=$6_K$--$5_K$, $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$ at each map position (see Sect.~\ref{Analysis:LVG-Model-Fits} for details). Column density contours are plotted at 0.5$\times$10$^{15}$ cm$^{-2}$ intervals, starting at 0.5$\times$10$^{15}$ cm$^{-2}$ (black lines). The locations of the hot core, compact ridge, radio continuum sources, and the northeastern hot zone are indicated as in previous figures.} \label{Fig:LVG_Tkin_Nbeam_maps} \end{figure*} Population diagrams for the $J$=$13_K$--$12_K$ lines observed toward IRc2 and the northeastern hot zone are shown in Fig.~\ref{Fig:13-12_population-diagrams}. The uncertainties listed on the diagrams correspond to the 1-sigma confidence limits from the $\chi^{2}$ distributions and give an indication of how well the properties are constrained by the fits; however, the more conservative uncertainty estimates given in Sect.~\ref{Analysis:Limitations} dominate over these formal 1-sigma limits. From the population diagram analysis of each set of $K$-ladder emission lines, we list the rotational temperatures and source-averaged methyl cyanide column densities that we obtain for IRc2 and the hot zone in Table~\ref{Table:Analysis-Results}. The population diagram analysis of the emission toward the hot zone yields rotational temperatures of 200 to 550 K and source-averaged column densities in the range 10$^{16}$ to 10$^{17}$~cm$^{-2}$. The determination of the beam dilution suffers some degeneracy since we analyse each set of $K$-ladder lines separately (the LVG results discussed in a later section include simultaneous fits to multiple sets of $K$-ladder transitions and are therefore better able to constrain this parameter). 
That said, we obtain beam dilution factors typically of order 0.1 in the central region around the hot core and compact ridge, corresponding to source sizes of a few arcsec. Further from the centre, the dilution factor drops to $f_\mathrm{beam} \lesssim 0.01$ (source sizes below 1 arcsec). This suggests that the methyl cyanide resides in small unresolved clumps, as has been suggested previously for the hot core \citep{Beuther2005, Wang2010}, and that this is likely also the case for the more extended emission seen in the quiescent cloud. Line opacities for the $K$=0 transition (the most optically thick line in each $K$-ladder) determined from the population diagram analysis are typically $\gtrsim$1 near the map centre, i.e., in the vicinity of the hot core and compact ridge, forming a region of high opacity emission that extends in a northeast-southwest direction encompassing these two sources. The $J$=$13_K$--$12_K$ lines also suggest that this region of high opacity might extend toward BN, though the other $K$-ladder results do not reflect this. In the vicinity of the hot zone, population diagram analysis yields opacities of $\sim$0.5, indicating optically thinner emission falling just outside the region of high opacity. However, the $J$=$6_K$--$5_K$ line analysis shows increased opacity (closer to 1) within the hot zone itself. The line opacities quickly drop off to $\ll$1 outside of the central region, though emission toward the CS1 condensation in the extended ridge shows higher opacity ($\sim$0.1--0.5). Since the line opacities are significant in the central part of the maps, the temperatures derived from the population diagram method are likely to be more reliable in this region than those obtained with the rotation diagram technique in the previous section. As discussed above, the inferred rotational temperatures are generally lower when the line opacity is accounted for (see Table~\ref{Table:Analysis-Results}). 
These temperatures are also consistent with those obtained from LVG models (see next section) and are therefore considered more accurate. Further from the centre, lower column densities mean that similar temperatures are derived both with and without accounting for the line opacity; however, the population diagram analysis of the hot zone emission is likely to yield more reliable estimates for the temperatures in that region, since the opacities are again higher. \subsection{LVG model results}\label{Results:LVG-Models} Finally, we show the kinetic temperature and beam-averaged methyl cyanide column density distributions derived from our LVG model analysis in Fig.~\ref{Fig:LVG_Tkin_Nbeam_maps}. The four sets of $K$-ladder transitions are first modelled separately, producing temperature maps that are generally similar to those obtained from the population diagram analysis, in particular the presence of the northeastern hot zone. Column densities peak in the hot core, typically somewhere between the continuum sources \textit{I} and \textit{n}, but a slight elevation toward the hot zone is also seen in some $K$-ladders. In addition, LVG models allow the density of the emitting gas to be constrained. The derived number densities are a few $\times$10$^{6}$ cm$^{-3}$ or higher in both the hot core and compact ridge, falling to $\sim$10$^{5}$ cm$^{-3}$ in the quiescent gas of the extended ridge and lower elsewhere in the region. Best-fit model results for the $J$=$13_K$--$12_K$ lines are represented as population diagrams in Fig.~\ref{Fig:13-12_LVG-population-diagrams} for the IRc2 and hot zone positions. Inferred kinetic temperatures and source-averaged CH$_3$CN column densities are listed in Table~\ref{Table:Analysis-Results}.
We note that excluding the $K$=10 line from the population diagram and LVG model fits for the hot zone position shown in Figs.~\ref{Fig:13-12_population-diagrams} and \ref{Fig:13-12_LVG-population-diagrams} yields temperatures of 310--350~K, which remain consistent with the values listed in Table~\ref{Table:Analysis-Results} (within 1-sigma uncertainties). The population diagram and LVG model fit properties for IRc2 remain unchanged when the $K$=10 point is removed. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10-LVG-pop-diagram-central}}\\ \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=13-12_K=0-10-LVG-pop-diagram-hotzone}} \caption{Population diagrams comparing observations (blue dots) and best-fit LVG model predictions (red crosses) for CH$_3$CN $J$=$13_K$--$12_K$ lines at the map centre (\textit{top}), corresponding to the location of IRc2, and the offset position (\textit{bottom}), corresponding to a temperature peak within the northeastern hot zone.} \label{Fig:13-12_LVG-population-diagrams} \end{figure} The source filling factors (i.e., beam dilution factors) given by the best-fit LVG models indicate the clumpy nature of the CH$_3$CN emitting gas in Orion (see Appendix~\ref{Appendix:Beam-Dilution}). In the region surrounding the hot core and compact ridge, we infer dilution factors of 0.05--0.3, corresponding to source sizes of $\sim$3--6 arcsec. Interferometric observations of methyl cyanide in Orion KL \citep[e.g.,][]{Wang2010} appear to spatially resolve the emission with an arcsecond beam; however, the derived source filling factors from those observations were found to be smaller than unity, suggesting that the observed structures may actually be ensembles of even smaller clumps. Alternatively, such low filling factors could be the result of missing flux.
Given the larger beam size of our single-dish observations, the source filling factors we obtain may therefore be upper limits, more representative of the arcsecond-sized structures. Toward the hot zone, the best-fit LVG models have source filling factors of 0.025--0.05, corresponding to source sizes of $\sim$1.5--3 arcsec. The beam dilution maps generally show a region of low filling factor ($<$0.05) around the hot zone. This may imply that the emitting gas in this region resides in similarly sized clumps to those found in the centre, but more sparsely distributed, or alternatively, that the clumps in this region are about half the size of those seen in the hot core and compact ridge. Toward the extended ridge and the CS1 clump within it, we find source filling factors of 0.04--0.15, giving source sizes of $\sim$2--5 arcsec, similar to the size found by \citet{Wilner1994} for CS1. Outside of the central region and extended ridge, the filling factor drops off rapidly to 0.01 or below, though these values are rather unconstrained since the line opacity is low. \subsection{Combined LVG model results}\label{Results:Combined-Models} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=12-11_J=13-12_J=14-13_LVG_Tkin_Nbeam_map}}\\ \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=12-11_J=13-12_J=14-13_LVG_fbeam_map}}\\ \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_J=12-11_J=13-12_J=14-13_LVG_nH2_map}} \caption{Maps of the kinetic temperature ($T_\mathrm{kin}$, \textit{top}), beam-averaged methyl cyanide column density ($N_\mathrm{beam}$, contours), beam dilution factor ($f_\mathrm{beam}$, \textit{middle}), and H$_2$ number density ($n_\mathrm{H_2}$, \textit{bottom}) across the Orion KL region, determined by simultaneously fitting LVG models to the observed $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$ lines at each position. 
Column density contours are plotted at 0.5$\times$10$^{15}$ cm$^{-2}$ intervals, starting at 1.0$\times$10$^{15}$ cm$^{-2}$ (black). The locations of the hot core, compact ridge, continuum sources, and hot zone are indicated as in previous figures.} \label{Fig:LVG_combo_maps} \end{figure} In order to better constrain the beam dilution and volume densities, the data for the $J$=$12_K$--$11_K$, $13_K$--$12_K$, and $14_K$--$13_K$ lines were fitted simultaneously with a single LVG model at each map position. The $J$=$6_K$--$5_K$ lines were not included in the fit since doing so led to unrealistically low kinetic temperatures ($\sim$100 K for IRc2), which we attribute to the lower energy $6_K$--$5_K$ transitions dominating the goodness of fit and biasing the temperature downward. Since the angular resolution of the $J$=$6_K$--$5_K$ maps is significantly less than that of the higher frequency transitions, there is little additional information lost by excluding these lines from the fit. The resulting maps of kinetic temperature, beam-averaged column density, beam dilution factor, and H$_2$ number density derived from the simultaneous LVG model fits to the $J$=$12_K$--$11_K$, $13_K$--$12_K$, and $14_K$--$13_K$ lines at each map position are shown in Fig.~\ref{Fig:LVG_combo_maps}. These maps clearly reveal the presence of the northeastern hot zone (with $T_\mathrm{kin} \!\approx\! 300$~K) and indicate kinetic temperatures for IRc2 and the hot core of 170--230~K. The methyl cyanide column density peaks just south of IRc2, close to source \textit{n}, at $\sim$8$\times$10$^{16}$~cm$^{-2}$ (corrected for beam dilution). Another column density peak appears to the south of the BN object, though the gas is cooler here, at 130 K. The column density also rises again in the hot zone region and coincides with the temperature peak. The source filling factor distribution is largely the same as that derived from the LVG model fits to the individual $K$-ladders. 
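The conversion between filling factor and source size quoted throughout assumes Gaussian source-beam coupling, $f_\mathrm{beam}=\theta_s^2/(\theta_s^2+\theta_b^2)$; a minimal sketch, with the 10-arcsec beam an assumed illustrative value rather than the actual beam of these maps:

```python
import numpy as np

# Sketch of the filling-factor -> source-size conversion for a Gaussian
# source observed with a Gaussian beam:
#   f_beam = theta_s^2 / (theta_s^2 + theta_b^2),
# inverted to give theta_s = theta_b * sqrt(f / (1 - f)).
# The 10" beam below is an assumed illustrative value.

def source_size(f_beam, theta_beam):
    f = np.asarray(f_beam, dtype=float)
    return theta_beam * np.sqrt(f / (1.0 - f))

theta_b = 10.0                          # assumed beam FWHM (arcsec)
f = np.array([0.3, 0.1, 0.05, 0.01])
theta_s = source_size(f, theta_b)       # ~6.5, 3.3, 2.3, 1.0 arcsec
```

With these assumptions, dilution factors of order 0.1 indeed correspond to source sizes of a few arcsec, while $f_\mathrm{beam} \lesssim 0.01$ implies structures of about an arcsecond or smaller.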
An overall picture of clumpy structure on large scales therefore emerges from the various analyses of these methyl cyanide maps, consistent with other studies of Orion KL. We note, however, that the derived filling factors are rather poorly constrained in some regions. In the central part of the map where we find high line opacities, the uncertainty on the source filling factor is typically less than a factor of 2, but further from the centre the lines are optically thinner and in areas outside of the ridge the filling factor is essentially unconstrained, having uncertainties of an order of magnitude or more. The temperature and column density distributions within the area of the northeastern hot zone reveal several distinct peaks. Similar peaks were also seen in this region for some of the distributions obtained from the previous analyses. Given this varying morphology, it is unlikely that a single, well-defined source is responsible for the generally high temperatures seen in the hot zone. Instead, the inferred distributions are better explained by several unresolved hot clumps situated within a common warm environment. The slight variation in the positions of the temperature peaks is therefore the result of the limited spatial resolution of the maps and uncertainties in the methodology applied. Finally, we show the H$_2$ number density distribution derived from the simultaneous LVG model fits in the bottom panel of Fig.~\ref{Fig:LVG_combo_maps}. High densities ($\ge$10$^{7}$~cm$^{-3}$) are found in the central region close to the hot core and compact ridge, though they seem to peak slightly to the west of both sources. Intermediate densities of order 10$^{6}$~cm$^{-3}$ are present in the area surrounding these two sources and trace the general shape of the extended ridge to the northeast, rising again slightly near the CS1 condensation.
In and around the hot zone, the number densities are somewhat lower, at a few~$\times\,$10$^{5}$~cm$^{-3}$, and are similar to those found in the outer regions of the maps, where densities of about 10$^{5}$~cm$^{-3}$ are maintained across most of the cloud. The inferred densities generally have uncertainties of order $\pm$0.5 dex, based on the 1-sigma confidence limits from the $\chi^{2}$ distributions. \subsection{Abundance distribution of methyl cyanide}\label{Results:Abundance-Map} In order to compare the methyl cyanide emission maps with chemical model predictions, we have derived the CH$_3$CN abundance distribution from the best-fit LVG model results for the simultaneous fit to the three sets of $K$-ladder lines. To do so, we have used a SHARC 350\,$\mu$m continuum emission map of Orion obtained by \citet[][kindly provided by the authors]{Lis1998} and calculated total H$_2$ column densities, $N(\mathrm{H_2})$, at each position by assuming a dust temperature of 50~K, a dust mass opacity coefficient at 350\,$\mu$m of 10.1~cm$^2$\,g$^{-1}$ \citep[column 6 in Table~1 of][corresponding to agglomerated dust grains with thin ice mantles at densities $\sim$10$^{6}$~cm$^{-3}$]{Ossenkopf1994}, a gas-to-dust mass ratio of 100, and that the gas is fully molecular. The beam-averaged CH$_3$CN column density map shown in Fig.~\ref{Fig:LVG_combo_maps} was then divided by the $N(\mathrm{H_2})$ map to obtain the abundance distribution across the Orion KL nebula. The SHARC continuum data have an effective angular resolution of 12$\arcsec$, with 4$\arcsec$ spacing between map points, making them ideally suited for comparison with our methyl cyanide maps. Due to the similar beam sizes, we have used beam-averaged column densities to derive the abundances. 
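The H$_2$ column density derivation described above follows the standard optically thin dust emission relation, $N(\mathrm{H_2}) = S_\nu\,R_\mathrm{gd} / [\Omega_\mathrm{beam}\, B_\nu(T_\mathrm{d})\, \kappa_\nu\, \mu\, m_\mathrm{H}]$. A hedged numerical sketch with the parameters quoted in the text ($T_\mathrm{d}$ = 50 K, $\kappa_{350}$ = 10.1 cm$^2$\,g$^{-1}$, gas-to-dust ratio 100); the mean molecular mass $\mu$ = 2.8 and the 10 Jy/beam example flux are assumptions for illustration, not values from the paper:

```python
import math

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # Planck, light speed, Boltzmann (cgs)
m_H = 1.674e-24                               # hydrogen mass (g)

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k_B * T))

def N_H2(S_nu_jy, theta_fwhm, T_dust=50.0, kappa=10.1,
         gas_to_dust=100.0, mu=2.8):
    """Beam-averaged H2 column density (cm^-2) from a 350 um dust continuum
    flux density S_nu_jy (Jy/beam) in a Gaussian beam of FWHM theta_fwhm
    (arcsec). kappa is the dust opacity per gram of dust (Ossenkopf &
    Henning 1994); mu = 2.8 (mean mass per H2, including He) is an
    assumed value not quoted in the text."""
    nu = c / 350e-4                                   # 350 um in Hz
    omega = math.pi * (theta_fwhm / 206265.0)**2 / (4 * math.log(2))
    return (S_nu_jy * 1e-23 * gas_to_dust
            / (omega * planck(nu, T_dust) * kappa * mu * m_H))

# Hypothetical 10 Jy/beam in a 12 arcsec beam, for illustration only:
print(f"N(H2) ~ {N_H2(10.0, 12.0):.1e} cm^-2")
```

Note that raising $T_\mathrm{d}$ from 50 to 200~K lowers $N(\mathrm{H_2})$ by a factor of $\approx$5.6 at 350\,$\mu$m, which is the origin of the roughly factor-of-5 abundance increase quoted for the central region.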
Adjusting for the beam dilution factors determined from the LVG models, and assuming that the H$_2$ column densities are undiluted in the same beam, the source-averaged abundances would therefore be about an order of magnitude higher. We note that dust temperatures are likely to be higher in the central region of Orion KL, particularly in the hot core and around neighbouring embedded sources. The densities in this region are sufficiently high that the gas and dust temperatures are coupled. Adopting a dust temperature of 200~K instead for the central part of the map would yield CH$_3$CN abundances about a factor of 5 higher. For the bulk of the extended ridge, including the hot zone, lower dust temperatures are more appropriate and our choice of a constant value of 50 K serves as a reasonable average \citep[e.g.,][]{Lis1998, Dupac2001}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Figures/OrionKL_CH3CN_abundance_map}} \caption{Contour map of the CH$_3$CN abundance (with respect to H$_2$) across the Orion KL region, derived from beam-averaged methyl cyanide column densities obtained from simultaneous LVG model fits to multiple $K$-ladder transitions (see Sect.~\ref{Results:Temperature-Maps} for details). Contours are plotted from 2$\times$10$^{-9}$ (white) to 6$\times$10$^{-9}$ (black) at 0.5$\times$10$^{-9}$ intervals. The locations of the hot core, compact ridge, radio continuum sources, and the hot zone are indicated as in previous figures.} \label{Fig:abundance_map} \end{figure} The resulting distribution of methyl cyanide abundance is shown in Fig.~\ref{Fig:abundance_map}. We find abundances of $\sim$6$\times$10$^{-9}$ in the hot core and the northern part of the compact ridge, and a peak value of 7.4$\times$10$^{-9}$ at a position just south of the BN object. 
The CH$_3$CN abundances across the rest of the cloud are generally well below 10$^{-9}$, with the exception of the peak position of the hot zone and an arc extending southeast of it, in which the abundance rises to 3--6$\times$10$^{-9}$. Toward the location of CS1, the CH$_3$CN abundance is $\sim$10$^{-10}$, consistent with the value found by \citet{Wilner1994}. The fractional abundance we derive for the hot core is about an order of magnitude lower than that determined by \citet{Wang2010} from interferometric observations. However, accounting for beam dilution and higher dust temperatures, the source-averaged abundance is between 10$^{-8}$ and 10$^{-7}$, which is consistent with their results. Similarly, the source-averaged abundance at the peak position within the hot zone becomes $\approx$5$\times$10$^{-8}$, putting it close to the abundances of methyl cyanide produced by C-shock models, as discussed in the following section. \section{Discussion}\label{Discussion} \subsection{Cause of the northeastern hot zone}\label{Discussion:Hot-Zone} The region of hot gas that we detect in all four series of observed $K$-ladder transitions and that we have termed the northeastern hot zone lies roughly midway between IRc2 and a condensation known as CS1, located approximately 20--30 arcsec to the northeast of IRc2. This condensation has been observed in both continuum \citep{Mundy1986} and line emission maps \citep{Blake1987, Mangum1993, Wilner1994, Wright1996} and is believed to be a dense clump residing within the ridge, since it displays the same kinematic signature. Interferometric maps of line emission from a number of molecular species indicate that the dense material in CS1 may extend over a region of 10--20 arcsec, with some variation from species to species \citep{Wilner1994, Wright1996, Wiseman1998}.
We therefore propose that the hot gas that we infer at a position approximately 15 arcsec to the northeast of IRc2 is the result of a shock front as the outflowing gas originating in the vicinity of IRc2 impacts the southwestern edge of the CS1 condensation. The heating due to this passing shock front is thus responsible for the high temperatures we find in this region. This scenario is supported by the general morphology and location of the hot zone. The low-velocity outflow in Orion KL is traced by emission from species such as SiO \citep{Plambeck2009}, which shows a clear bipolar morphology in the northeast-southwest direction centred on source \textit{I}. The size of this region is seen to vary in different species, but generally extends about 10--20 arcsec to the northeast of IRc2, consistent with what we find for methyl cyanide. Similarly, \citet{Pardo2005} found that [C\,\textsc{i}] 690\,$\mu$m emission traces an arc around the northeast edge of IRc2 and the hot core (see their figure 2), and also associated this emission with the outflow component. The shape of this emission largely agrees with that of the hot methyl cyanide seen in our maps, showing a bright region to the northeast of IRc2 that begins at approximately the same location as the hot zone and extends toward the CS1 condensation. The clumpy and extended nature of the warm gas we find within the hot zone is also consistent with outflowing material impacting various unresolved clumps in the quiescent ridge. There is therefore good evidence to suggest that this hot zone is linked to the low-velocity outflow. Emission features in the vicinity of the hot zone have been detected previously by several authors. \citet{Habing1991} proposed that the peak that they detected $\sim$10$\arcsec$ northeast of the hot core in the $J$=$19_K$--$18_K$ lines of methyl cyanide was a previously undiscovered embedded source, likely a young protostar. 
However, \citet{Wilner1994} argued that it was more likely that this emission peak is produced by an interaction zone in which the dense quiescent gas of the ridge is being impacted by the low-velocity outflow from Orion KL. This second interpretation is in line with our proposed shock heating scenario and seems more likely, since the properties we derive for the gas in the hot zone (broad line widths, high temperatures, relatively low column densities, and clumpy structure) are more consistent with the presence of shocked gas than an embedded source. Indeed, studies based on the dust and line emission from other molecules have also led to similar conclusions \citep[e.g.,][]{Masson1988, Wright1992}. \begin{figure*} \centering \includegraphics[width=17cm]{Figures/Shock-Model-n=1E5-v=30-dpl=61pc} \caption{Results from a C-shock chemical model with shock velocity $v_\mathrm{s}=30$ km\,s$^{-1}$ and pre-shock gas-phase depletion of 60\% (see Sect.~\ref{Discussion:Formation} for details). \textit{Top}: Fractional abundances of gas-phase CH$_3$CN, HCN, and CN, and grain surface CH$_3$CN and HCN (indicated by the \# prefix) as a function of distance into the shock front. \textit{Bottom}: Profiles of gas temperature (red) and total hydrogen number density (blue) as a function of distance into the shock front.} \label{Fig:Shock-Models} \end{figure*} Furthermore, there have been various studies suggesting an association between methyl cyanide emission and outflowing gas in other sources, such as \citet{Codella2009}, \citet{Leurini2011}, and \citet{Moscadelli2013}. \citet{Csengeri2011} also propose that the weak and extended CH$_3$CN emission observed in DR21(OH) is tracing warm gas associated with low-velocity shocks. This further supports the association of the hot methyl cyanide northeast of IRc2 with shocked gas.
The proposed role of the prior explosive event in heating and shaping the KL region \citep{Zapata2009, Zapata2011, Moeckel2012} could in theory be responsible for creating the hot zone, if ejected material from the explosion were colliding with the extended ridge at that location. However, the kinematics of the methyl cyanide emission at this position tie it more convincingly to the low-velocity outflow originating from source \textit{I}. We can also rule out infrared pumping as a possible explanation for the elevated temperatures in the hot zone. In this scenario, the strong infrared continuum emitted by IRc2 and neighbouring sources could excite the high energy states of the molecule. However, emission from the $\nu_8$=1 vibrationally excited state of methyl cyanide, which is known to be excited by the strong IR continuum within the hot core, is entirely absent in the vicinity of the hot zone. Other mechanisms that might lead to super-thermally excited emission, such as non-Maxwellian state-to-state chemistry as discussed by \citet{Godard2013} for the case of CH$^{+}$, cannot be currently tested, since the gas-phase formation routes of methyl cyanide are still rather uncertain. Based on the location of the hot zone, the broad line widths and high temperatures we derive in that region, and its similar morphology to other tracers of the low-velocity outflow from IRc2, we argue that shock heating caused by the impact of the low-velocity outflow with the edge of the CS1 condensation is the most plausible scenario. \subsection{Formation mechanisms for methyl cyanide}\label{Discussion:Formation} The main formation route of methyl cyanide is still uncertain. It is generally believed to be the product of grain surface chemistry, primarily through the reaction of CH$_3$ and CN in grain mantles, followed by its evaporation at temperatures above $\sim$90 K \citep{Garrod2008a}, though modest amounts of CH$_3$CN may also form in the gas phase as HCN reacts with CH$_3^+$. 
If the higher temperatures found within the hot zone are indeed caused by shock heating of the gas, we might reasonably ask if methyl cyanide is expected to survive in such environments, or even if its abundance might be enhanced. It has indeed been suggested that methyl cyanide may be enhanced in C-shocks \citep{Codella2009}, where the magnetic precursor to the shock front causes efficient sputtering of dust grains and lower velocities prevent dissociation fronts from forming. With this in mind, we have run a series of chemical models appropriate for C-shocks to explore the abundances of CH$_3$CN that might be produced for a range of parameter space appropriate for this region. The model we adopt is the time-dependent gas-grain chemical model of \citet{Viti2011} that includes an analytical treatment of the physical properties of a passing C-shock and a full treatment of the grain-sputtering and pre- and post-shock chemistry within the region, both in the gas and on the grain surfaces. The model begins by following the collapse of a diffuse cloud until it reaches a specified pre-shock density, computing the chemistry as gas-phase species freeze out onto grain mantles and are further processed by grain-surface reactions. Grain-surface formation of CH$_3$CN is included in the model through the reaction of CH$_3$ and CN and via hydrogenation of C$_2$N and C$_2$NH$^+$. The degree of depletion of species onto the grain mantles is estimated by calculating the rate per unit volume at which each species freezes out and this is a function of density, temperature, grain parameters, and, of course, the sticking coefficient. Since the latter, as well as the dust size distribution, are unknown, one can control the total freeze-out by a depletion coefficient (which will depend on the abundance of very small grains as well as the binding energy of each species). Non-thermal desorption due to several mechanisms \citep[see][]{Roberts2007a} is also included in the model. 
The final percentages of freeze-out for our models are 30, 45, and 60\%. Note that the advantage of this approach is that the ice composition is not assumed but is derived by a time-dependent computation of the chemical evolution of the gas-dust interaction process. After this collapse phase, the passing of a C-shock through the cloud is modelled, with the physical properties of the shocked gas described by a parametric model, taken from \citet{Jimenez-Serra2008}, that depends on the assumed pre-shock gas density and shock velocity $v_\mathrm{s}$. The process of grain mantle sputtering in the shock front is treated in detail by the model, leading to efficient release of the grain mantle species back into the gas phase, and the resulting gas-phase chemistry is followed as the cloud cools after the shock front has passed (see \citealt{Viti2011} for further details). Depletion onto grains from the post-shock gas is not taken into account in the model, but would become significant long after the shock front has passed and the gas has cooled sufficiently. In the hot gas immediately after the passing of the shock front, however, it is safe to assume that depletion is negligible. To study the possible enhancement of methyl cyanide abundances as a result of low-velocity C-shocks that may be present in the hot zone, we consider a number of models in which we vary the degree of depletion onto grains and the velocity of the passing shock. We assume a pre-shock gas density of 10$^{5}$ cm$^{-3}$ in all the models, shock velocities of 20, 30, and 40 km\,s$^{-1}$, and depletion levels of 30, 45, and 60\%. Figure~\ref{Fig:Shock-Models} shows the resulting fractional abundances for methyl cyanide and associated species for a model with $v_\mathrm{s}=30$ km\,s$^{-1}$ and 60\% depletion, in addition to the gas temperature and density profiles assumed in the parametric shock model. 
The shock front appears at $\sim$4$\times$10$^{14}$ cm into the model cloud, at which point the methyl cyanide contained in grain mantles (whose abundance is indicated by the label \#CH$_3$CN in the figure) is released into the gas phase due to sputtering of the grain mantles. This produces a twofold increase in the gas-phase abundance of CH$_3$CN, which then remains constant at $\sim$2$\times$10$^{-8}$ throughout the post-shock gas. This jump in methyl cyanide abundance is not dramatic, especially when compared to that of HCN, which increases by an order of magnitude in the shock. The reason for this is that the pre-shock gas-phase abundance of CH$_3$CN is already greater than its grain-surface abundance, due to the relatively low pre-shock density assumed in the model, so even returning the entire mantle reservoir to the gas phase cannot produce a large abundance increase. The degree of freeze-out onto grain mantles depends on a number of model parameters. More CH$_3$CN would be depleted onto grains if either the density were higher, freeze-out were more efficient, or desorption processes were less effective. This would result in a more significant jump in gas-phase methyl cyanide abundance in the post-shock gas. Comparable methyl cyanide abundances of order 10$^{-8}$ have been inferred previously for the hot core by \citet{Wilner1994} and \citet{Wang2010}. The relatively high abundances attained in our chemical models therefore suggest that high densities and warm environments, such as those found around protostars, are not necessarily the only scenarios that can produce significant quantities of methyl cyanide. Given that the beam-averaged column densities and fractional abundances that we infer for the hot zone in the outflow from IRc2 are higher than those in neighbouring regions, moderate abundance enhancement due to the presence of C-shocks seems plausible. 
We find little variation of the post-shock methyl cyanide abundance with shock velocity $v_\mathrm{s}$, since all CH$_3$CN residing on the grain mantles is released back into the gas phase at all velocities considered, so the gas-phase abundance effectively saturates, with no dominant gas-phase reactions that form or destroy it afterwards. Changing the degree of pre-shock depletion of species onto the grains produces a minor change in the post-shock gas-phase abundance, but the limited freeze-out of methyl cyanide in the pre-shock phase due to the relatively low density assumed in the model prevents significant changes, as discussed above. Crucially, the methyl cyanide that is released from grain mantles in the passing shock front remains abundant in the warm post-shock gas and is not rapidly destroyed. C-shock models are thus able to reproduce the methyl cyanide abundances and temperatures that we derive from the emission toward the hot zone. We therefore propose that a shock formation scenario is the most plausible explanation for this component. \section{Summary and conclusions}\label{Conclusions} We have obtained full spatially sampled maps of methyl cyanide emission in four sets of its $K$-ladder transitions, $J$=$6_K$--$5_K$, $J$=$12_K$--$11_K$, $J$=$13_K$--$12_K$, and $J$=$14_K$--$13_K$, across the Orion KL region using the IRAM 30m telescope. These maps beautifully display the extended emission from this molecule over an area approximately 40 arcsec across, well beyond the confines of the hot core and compact ridge components. By fitting Gaussian profiles to the line shapes of each set of $K$-ladder transitions, the kinematics of the gas can be studied across the region and reveal a sharp velocity gradient and the distinct kinematic features of the hot core and plateau components. 
The spectroscopic properties of its emission lines make methyl cyanide an ideal diagnostic of the gas temperature in warm ($>$100 K) and dense ($>$10$^{5}$ cm$^{-3}$) environments. Taking advantage of this, we have used population diagram analysis and LVG modelling to infer the physical conditions [$T_\mathrm{kin}$, $N_\mathrm{tot}$, $n(\mathrm{H_2})$] at each position in our maps. The resulting temperature distributions for the emitting region show a distinct ``hot zone'' surrounding the northeastern edge of the hot core, with inferred temperatures of 300--450 K. This feature is seen in all four sets of $K$-ladder emission lines and emerges from both population diagram and LVG model analyses. The location of the hot zone suggests an association with the northeast-southwest bipolar outflow originating in the vicinity of IRc2. Based on the kinematics and derived physical properties of the methyl cyanide emission in this region, we argue that it is associated with hot shocked gas where the outflow impacts the quiescent material in the extended ridge cloud. We have computed a series of shock chemistry models in order to test this scenario. We find that methyl cyanide can indeed remain present in shock fronts, at high temperatures and elevated abundances. Our main findings are as follows: \begin{itemize} \item Methyl cyanide emission is extended across a 40 arcsec region around IRc2, with temperatures of 100--150 K and inferred densities of order $10^{5}$ cm$^{-3}$ in the quiescent gas. \item The quiescent gas along the molecular ridge shows a global north-south velocity gradient and narrow line widths ($\lesssim$4~km\,s$^{-1}$), while the hot core region at the centre of the maps is clearly identifiable in the CH$_3$CN emission via its distinct kinematic signature, characterised by blue-shifted and broader lines. 
\item CH$_3$CN emission peaks at the hot core position; column densities derived from population diagrams and LVG modelling both show that methyl cyanide is over an order of magnitude more abundant in the hot core than in the ambient medium. \item Temperatures between 200 and 350 K have been determined for the methyl cyanide emission in the hot core and its environs; however, higher temperatures ($\ge$300 K) are seen in a hot zone to the northeast of the hot core. \item We attribute this feature to shock heating by outflowing gas from active star formation in the vicinity of IRc2. This is supported by the coincidence of this region with emission from shock tracers such as SiO and neutral carbon. \item Chemical models of C-shocks indicate that methyl cyanide abundances may be enhanced in shock fronts under physical conditions appropriate for this region. \end{itemize} Methyl cyanide is a powerful diagnostic probe of the gas temperature in warm dense regions of star formation. \hbox{Millimetre} and submillimetre interferometers such as the Plateau de Bure Interferometer, the Submillimeter Array, the Combined Array for Research in Millimeter-wave Astronomy, and the Very Large Array are capable of resolving the clumpy structure within Orion KL and can provide detailed information on its properties, while the Atacama Large Millimeter Array is now able to observe CH$_3$CN with sufficient sensitivity to discern the morphology of the weaker extended emission. Such follow-up observations will be vital to confirm the shock heating scenario we propose here. Combined with legacy datasets such as those presented in this paper, future observations of methyl cyanide promise to unveil the detailed temperature structure in the very hearts of these star-forming regions. \begin{acknowledgements} We are grateful to D.~Lis for providing us with the 350\,$\mu$m continuum map of Orion.
We also thank the referee and the editor, M.~Walmsley, for useful comments which helped to improve an earlier draft of this paper. We thank the Spanish MINECO for funding support from grants CSD2009-00038, AYA2009-07304, and AYA2012-32032. TAB is supported by a JAE-DOC research contract. AP is supported by a JAE-DOC CSIC fellowship co-funded with the European Social Fund under the program `Junta para la Ampliaci\'on de Estudios', by the Spanish MICINN grant AYA2011-30228-C03-02 (co-funded with FEDER funds), and by the AGAUR grant 2009SGR1172 (Catalonia). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \end{acknowledgements}
\section{Introduction} \label{section: intro} In recent years, coding has been reinvented for solving problems in distributed computing systems from different perspectives such as straggler mitigation \cite{lee2017speeding,dutta2016short,ferdinand2016anytime,yu2018straggler,bitar2017minimizing,karakus2017straggler,halbawi2017improving,suh2017matrix,mallick2018rateless,maity2018robust,Aktas2018straggler,wang2018fundamental,ye2018communication,wan2020distributed}, data shuffling \cite{lee2017speeding,attia2019shuffling,Elmahdy2020shuffling,wan2020shuffling}, and robustness \cite{chen2018draco}. In particular, Coded Distributed Computing (CDC), introduced in \cite{li2018fundamental}, offers an efficient approach to reduce the communication load in distributed computing networks such as MapReduce \cite{dean2008mapreduce}. In this type of distributed computing network, in order to compute the output functions, the computation is decomposed into ``Map" and ``Reduce" phases. First, each computing node computes intermediate values (IVs) using local input data files according to the designed Map functions. Then, computed IVs are exchanged among computing nodes and nodes use these IVs as input to the designed Reduce functions to compute output functions. The operation of exchanging IVs is called ``data shuffling" and occurs during the ``Shuffle" phase. This data shuffling severely limits the performance of distributed computing applications due to the very high transmitted traffic load \cite{li2018fundamental}. In \cite{li2018fundamental}, by formulating and characterizing a fundamental tradeoff between ``computation load" in the Map phase and ``communication load" in the Shuffle phase, Li {\em et al.} demonstrated that these two quantities are inversely proportional to each other. This means that if each intermediate value is computed at $r$ carefully chosen nodes, then the communication load in the Shuffle phase can be reduced by a factor of $r$.
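This inverse relationship can be sketched numerically. With $K$ nodes and computation load $r$, the normalized uncoded shuffle load is $1 - r/K$, and coding reduces it by a factor of $r$ \cite{li2018fundamental}; the snippet below simply evaluates these two expressions (an illustration of the stated tradeoff, not new analysis):

```python
def uncoded_load(r, K):
    """Normalized communication load of uncoded MapReduce at computation load r."""
    return 1 - r / K

def coded_load(r, K):
    """Optimal CDC communication load: the uncoded load reduced by a factor of r."""
    return (1 - r / K) / r

K = 10
for r in (1, 2, 5):
    print(f"r={r}: uncoded={uncoded_load(r, K):.2f}, coded={coded_load(r, K):.2f}")
```

For example, with $K=10$ and $r=5$, the coded shuffle load is $0.1$ versus $0.5$ uncoded.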
CDC achieves this multiplicative gain in the Shuffle phase by leveraging coding opportunities created in the Map phase by strategically placing the input files among the computing nodes. However, there are a few limitations of the CDC scheme in \cite{li2018fundamental}. First, it requires an exponentially large number of input files and reduce functions relative to the number of computing nodes. In some cases, the number of files and functions becomes unrealistic and the promised gain cannot be achieved in practice. Second, the number of multicasting groups is exponential in the number of nodes and the computation load. When implementing CDC in \cite{li2018fundamental}, the execution time of the code generation step is proportional to the number of multicasting groups. This counteracts the benefits of CDC in reducing overall execution time. Third, the CDC scheme assumes the computing network is homogeneous in that each computing node has the same computation and storage resources, which limits its effectiveness on heterogeneous computing networks. Some other aspects of CDC have been investigated in the literature. In \cite{ezzeldin2017communication}, Ezzeldin {\em et al.} revisited the computation-communication tradeoff by computing only necessary IVs in each node. The authors proposed a lower bound on the corresponding computation load via a heuristic scheme, which achieves the lower bound under certain parameter regimes. In \cite{song2017benefit}, Song {\em et al.} considered the case where each computing node has access to a random subset of input files and the system is asymmetric. This means that not all output functions depend on the entire data set and we can decide which node computes which functions. The corresponding communication load was characterized.
Later, in \cite{prakash2018coded}, Prakash {\em et al.} extended CDC to graph analytics of Erd\"os-R\'enyi graphs, where the computation at each vertex uses data only from the adjacent vertices. In \cite{Srinivasavaradhan2018distributed}, Srinivasavaradhan {\em et al.} considered the CDC design under a random network topology following an Erd\"os-R\'enyi random graph model. In \cite{konstantinidis2018leveraging}, Konstantinidis {\em et al.} used resolvable designs to reduce the necessary number of files, functions, and number of multicasting groups. Furthermore, they implemented new designs to demonstrate an overall reduction in execution time compared to implementations of \cite{li2018fundamental} for some cases. Thus far, all aforementioned prior works have assumed the CDC network to be homogeneous, that is, the computing nodes of the network have the same amount of storage, computation, and communication resources. Understanding the performance potential and finding achievable designs for heterogeneous networks remains an open problem. The authors in \cite{kiamari2017Globecom} derived a lower bound for the communication load for a CDC network where nodes have varying storage or computing capabilities. The proposed achievable scheme attains the information-theoretic minimum communication load for a system of $3$ nodes. The authors also demonstrated that the parameters of a heterogeneous CDC network can be translated into an optimization problem to find an efficient Map and Shuffle phase design. In \cite{shakya2018distributed}, the authors studied CDC networks with $2$ and $3$ computing nodes where nodes have varying communication load constraints to find a lower bound on the minimum computation load. These works mainly focus on the heterogeneous placement of the files in the Map phase; however, nodes are assumed to have a homogeneous reduce function assignment.
The authors of \cite{xu2019heterogeneous} explore the concept of semi-random file placement and function assignment and develop a heterogeneous computing scheme which can operate on a computing network with arbitrary heterogeneous storage and computation requirements. However, the number of necessary files and functions of this scheme is unclear, as files and functions are assigned as fractions of the entire file library and function set, respectively. Our contributions in this paper are as follows. \begin{itemize} \item First, we establish a novel combinatorial framework for CDC that exploits elegant geometric structures -- the {\em hypercube} for homogeneous networks and the {\em hypercuboid} for heterogeneous networks -- to optimize the tradeoff of communication and computing for such networks. The proposed designs require exponentially fewer input files and multicasting groups than the scheme in \cite{li2018fundamental}. Meanwhile, the resulting computation-communication trade-off maintains the multiplicative gain compared to conventional uncoded MapReduce and achieves the optimal trade-off proposed in \cite{li2018fundamental} asymptotically. \item Second, the proposed hypercuboid design can accommodate large heterogeneous CDC networks where nodes have varying storage and computing capabilities.
This is achieved by the combinatorial design of a heterogeneous network (hypercuboid) consisting of multiple interleaved homogeneous networks (hypercubes) with varying dimensions and the design of efficient file mapping and data shuffle schemes across them. Another novelty of the proposed design is to assign more output functions to nodes with more storage space and computing resources. This is in contrast to previous work, where each node is assigned the same number of output functions \cite{kiamari2017Globecom}. Based on the proposed file and function assignments, we characterize an information-theoretic converse bound, which is tight within a constant factor. To the best of our knowledge, this is the first work that develops an explicit and systematic heterogeneous CDC design with optimality guarantees under certain network parameters. \item Third, this work shows that network heterogeneity can actually reduce the communication load and thus, the fundamental tradeoff of \cite{li2018fundamental} no longer applies in this setting.\footnote{A similar phenomenon was also observed in \cite{xu2019heterogeneous}.} For large heterogeneous networks, we show that the proposed heterogeneous design can achieve a communication load that is strictly less than that of an equivalent homogeneous network of \cite{li2018fundamental}. \end{itemize} The remainder of this paper is organized as follows. In Section \ref{sec: Network Model and Problem Formulation}, we present the network model and problem formulation. Then, we present the proposed combinatorial CDC design and discuss its performance in Section \ref{sec: Hypercube Caching Network Approach} for the homogeneous case and in Section \ref{sec: hets1} for the more general heterogeneous case. In Section \ref{sec: Discussion}, we compare our design to the state-of-the-art design of \cite{li2018fundamental}. Concluding remarks are provided in Section \ref{sec: Conclusion}.
\paragraph*{Notation Convention} We use $|\cdot|$ to represent the cardinality of a set or the length of a vector. Also $[n] := \{1,2,\ldots,n\}$ for some $n\in\mathbb{Z}^+$, where $\mathbb{Z}^+$ is the set of all positive integers, and $\oplus$ represents bit-wise XOR. \section{Network Model and Problem Formulation} \label{sec: Network Model and Problem Formulation} The network model is adopted from \cite{li2018fundamental}. We consider a distributed computing network where a set of $K$ nodes, labeled as $[K]=\{1, \ldots , K \}$, have the goal of computing $Q$ output functions, where computing each function requires access to all $N$ input files. The input files, $\{w_1 , \ldots , w_N \}$, are assumed to be of equal size of $B$ bits each. The set of $Q$ output functions is denoted by $\{ \phi_1 , \ldots , \phi_Q\}$. Each node $k\in [K] $ is assigned to compute a subset of output functions, denoted by $\mathcal{W}_k \subseteq [Q] $ ({\em function assignment}). The result of output function $i \in [Q]$ is $u_i = \phi_i \left( w_1, \ldots , w_N \right)$. Alternatively, an output value of the targeted function $i$ can be computed using the composition of ``Map" and ``Reduce" functions as follows: \begin{equation} u_i = h_i \left( g_{i,1}\left( w_1 \right), \ldots , g_{i,N}\left( w_N \right) \right), \end{equation} where for every output function $i$ there exists a set of $N$ Map functions $g_{i,j}(\cdot), i \in [Q], j \in [N]$, and one Reduce function $h_i(\cdot), i \in [Q]$. Furthermore, we define the output of the Map function, $v_{i,j}=g_{i,j}\left( w_j \right)$, as the intermediate value (IV) resulting from performing the Map function for output function $i$ on file $w_j$. There are $QN$ intermediate values in total and each is assumed to be of size $T$ bits. The MapReduce distributed computing framework allows nodes to compute output functions without having access to all $N$ files.
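As a concrete illustration of this Map/Reduce decomposition, consider a minimal word-count sketch; the file contents, word choices, and helper names below are purely illustrative and not part of the model:

```python
# Hypothetical word-count instance of the decomposition u_i = h_i(g_{i,1}(w_1), ..., g_{i,N}(w_N)).
files = {1: "a b a c", 2: "b b c", 3: "a c c"}   # input files w_1, w_2, w_3
words = {1: "a", 2: "b", 3: "c"}                 # output function i counts word i

def g(i, w_j):
    # Map function g_{i,j}: produces the IV v_{i,j} = count of word i in file w_j.
    return w_j.split().count(words[i])

def h(ivs):
    # Reduce function h_i: combines the N IVs into the output value u_i.
    return sum(ivs)

def phi(i, all_files):
    # Direct computation phi_i(w_1, ..., w_N), requiring access to every file.
    return sum(w.split().count(words[i]) for w in all_files.values())

# The composed Map/Reduce result matches the direct computation for every function.
for i in words:
    assert h([g(i, files[j]) for j in files]) == phi(i, files)
```

The point of the decomposition is that each `g(i, ·)` needs only a single file, so the Map work can be spread across nodes that each hold a subset of the library.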
Instead, each node $k$ has access to $M_k$ out of the $N$ files and we define the set of files available to node $k$ as $\mathcal{M}_k \subseteq \{ w_1, \ldots , w_N\}$ ({\em file mapping}). Collectively, the nodes use the Map functions to compute every IV at least once in the {\em Map} phase. Then, in the {\em Shuffle} phase, nodes multicast the computed IVs among one another via a shared link ({\em shuffle method}). The Shuffle phase is needed so that each node can receive the IVs that it could not compute itself. Finally, in the {\em Reduce} phase, nodes use the Reduce functions with the appropriate IVs as inputs to compute the assigned output functions. Throughout this paper, we consider the following design options. First, we assume each computing node computes all possible IVs from locally available files. This means that $|{\cal M}_k|$ represents both the storage space and the computation load of each node. Second, we consider the design scenario in which each of the $Q$ Reduce functions is computed exactly once ($s=1$) at one node and $|\mathcal{W}_i \cap \mathcal{W}_j| = 0$ for $i\neq j$, where $s$ is defined as the number of nodes that calculate each Reduce function.\footnote{The scenario of $s>1$, meaning that each of the $Q$ Reduce functions is computed at multiple nodes, is called {\it cascaded} distributed computing, introduced in \cite{li2018fundamental}. In this paper, we do not consider this case.} Third, we consider the general scenario where computing nodes can have heterogeneous storage space and computing resources, i.e., heterogeneous sizes of ${\cal M}_k$ and $\mathcal{W}_k, \; \forall k \in [K]$. The proposed schemes accommodate heterogeneous networks in that nodes can be assigned a varying number of files and functions. This distributed computing network design yields two important performance parameters: the computation load, $r$, and the communication load, $L$.
The computation load is defined as the number of times each IV is computed among all computing nodes, or $r = \frac{1}{N}\sum_{k=1}^K|{\cal M}_k|$. In other words, the computation load is the number of IVs computed in the Map phase normalized by the total number of unique IVs, $QN$. The communication load is defined as the amount of traffic (in bits) among all the nodes in the Shuffle phase normalized by $QNT$. \begin{defn} The optimal communication load is defined as \begin{equation} L^*(r) \stackrel{\Delta}{=} \inf\{L: (r, L) \text{ is feasible}\}. \end{equation} \end{defn} \section{Homogeneous Hypercube Computing Approach} \label{sec: Hypercube Caching Network Approach} In this section, we describe the proposed homogeneous CDC design based on the {\em hypercube} combinatorial structure. Our schemes are defined by {\it node grouping}, {\it file mapping}, {\it function assignment} and {\it shuffle method}. Two detailed examples, one two-dimensional and one three-dimensional, are provided to illustrate the fundamental principles of the proposed design. These will be extended to the more general heterogeneous CDC scheme in Section \ref{sec: hets1}. Here, we consider the scenario where the network is homogeneous; in other words, each node is assigned the same number of files and reduce functions. Also, every reduce function is computed exactly once at one node ($s=1$). Every node computes a set of $\eta_2$ distinct functions and $Q=\eta_2 K$ where $\eta_2 \in \mathbb{Z}^+$. The novel combinatorial hypercube design splits the nodes into $r$ disjoint sets, each of size $\frac{K}{r}$, and assigns batches of $\eta_1$ files to one node from each set.\footnote{This scheme can be classified as a resolvable design for CDC, which was introduced in \cite{konstantinidis2018leveraging}. In addition, it also falls into the general framework of the Placement Delivery Array (PDA) designed for Device-to-Device coded caching \cite{wang2017placement}.
} This is analogous to constructing a hypercube lattice of dimension $r$ with sides of length $\frac{K}{r}$ to describe the file placement at the nodes. We use this hypercube approach to better illustrate the examples of our new combinatorial design. We show that the required number of files is $N = \eta_1 \left( \frac{K}{r} \right)^r$ where $\eta_1 \in \mathbb{Z}^+$ and the number of multicasting groups is $G=\left( \frac{K}{r} \right)^r$. We first present a two-dimensional (plane) example where $r=2$. \subsection{Two-Dimensional Example} \label{sec: 2D example} In this example, we propose a distributed computing network based on an $r=2$ dimensional hypercube (plane) lattice where each side has length $\frac{K}{r}=3$. There are $K=6$ computing nodes, each of which has access to $\frac{1}{3}$ of the file library. Each lattice point represents a file and each node has a set of files available to it represented by a line of lattice points, as shown in Fig.~\ref{fig: 2d fig}(a). Specifically, there are two sets of nodes: $\mathcal K_1=\{1,2,3\} $ and $\mathcal K_2=\{4,5,6\} $. Each node in $\mathcal K_1$ (or $\mathcal K_2$) has access to three files, represented by three lattice points along a horizontal (or vertical) line. For instance, node 1 in $\mathcal K_1$ has access to the three files $w_1$, $w_2$ and $w_3$ along the top horizontal line. Similarly, node 5 in $\mathcal K_2$ has access to the three files $w_2$, $w_5$ and $w_8$ along the middle vertical line. Each node is responsible for computing one out of the $Q=6$ reduce functions in the Reduce phase. More specifically, node $i$ computes reduce function $i$. \begin{figure} \centering \includegraphics[width=9cm, height=6.3cm]{2d_fig_3by3_v3} \vspace{-0.7cm} \caption{~\small (a) Lattice plane that defines file availability amongst the $K=6$ computing nodes. Each lattice point represents a file and each node has a set of files available to it represented by a horizontal or vertical line of lattice points.
(b) The IVs used locally and transmitted by each node.} \label{fig: 2d fig} \vspace{-0.4cm} \end{figure} In the Map phase, nodes compute all $Q=6$ IVs from each locally available file. Some IVs are necessary to compute the locally assigned reduce function. For example, as shown in Fig.~\ref{fig: 2d fig}(b), node $1$ computes $v_{1,1}$, $v_{1,2}$ and $v_{1,3}$ and node 5 computes $v_{5,2}$, $v_{5,5}$ and $v_{5,8}$. These IVs do not have to be transmitted and do not contribute to the communication load. However, other IVs are transmitted between nodes. We consider all possible pairs of nodes, termed node groups, consisting of one node from $\mathcal K_1$ and one node from $\mathcal K_2$. For instance, nodes $1$ and $5$ form node groups with each of the three nodes in $\mathcal K_2=\{4,5,6\}$ or $\mathcal K_1=\{1,2,3\}$, respectively. For the node group of $\{1,5\}$, node $1$ has computed $v_{5,1}$ and $v_{5,3}$ and transmits these IVs to node $5$. Notice that node $5$ is incapable of computing these IVs itself because it does not have access to files $w_1$ and $w_3$. Similarly, node $5$ has computed $v_{1,5}$ and $v_{1,8}$ and transmits these IVs to node $1$ because node 1 does not have access to files $w_5$ and $w_8$. Fig.~\ref{fig: 2d fig}(b) also shows the IVs transmitted by each node.
For example, node $1$ will transmit IVs $v_{4,2}$ and $v_{4,3}$ to node $4$, $v_{5,1}$ and $v_{5,3}$ to node $5$, and $v_{6,1}$ and $v_{6,3}$ to node $6$. On the other hand, node 1 will receive its requested IVs $v_{1,4}$ and $v_{1,7}$ from node $4$, $v_{1,5}$ and $v_{1,8}$ from node $5$, and $v_{1,6}$ and $v_{1,9}$ from node $6$. Therefore, node $1$ obtains all the IVs necessary for computing reduce function $1$. In general, by considering all possible node groups, each node receives an IV for every file that it does not have. We can see this is true by recognizing that a node consecutively pairs with the three nodes in either $\mathcal K_1$ or $\mathcal K_2$, and the nodes in either $\mathcal K_1$ or $\mathcal K_2$ collectively have access to all the files. Throughout this paper, we mainly consider the case where each node computes all IVs from its available files, similar to the original CDC work \cite{li2018fundamental}. In this example, each IV is computed twice and $r=2$ since each file is assigned to $2$ nodes. In general, the computation load $r$ is equivalent to the dimension of the hypercube which defines the file placement. Note that nodes will compute some IVs that are never used for transmission, decoding, or computing a reduce function. Fig.~\ref{fig: 2d fig}(b) shows the IVs computed by each node that are utilized. Each node computes $3$ IVs which are necessary for its own reduce function. Also, each node participates in $3$ node pairs, for each of which it needs to compute $2$ IVs to transmit to the other node in the pair. In some applications, it may be possible for nodes to compute only a select set of IVs to reduce the computation load, as presented in \cite{ezzeldin2017communication,woolsey2018new}. In this toy example, we only consider unicasting; therefore, the communication load is equivalent to that of the uncoded scenario, $L=\frac{2}{3}$, i.e., the fraction of files not available at each node.
This can be verified by recognizing that there are $9$ pairs of nodes and, for each pair, $2$ IVs are transmitted by each node. In total, $36$ of the $54$ IVs are transmitted and $L=\frac{2}{3}$. In later examples, we will show how this scheme can be expanded to utilize coded multicasting and outperform the uncoded CDC scheme. \begin{remark} Interestingly, although the general scheme derived from this example reduces to unicast in this case, we observe that there actually exist multicasting opportunities in this example. For instance, node $1$ could transmit $v_{4,2}\oplus v_{5,1}$ to nodes $4$ and $5$ (assuming that nodes $4$ and $5$ compute $v_{5,1}$ and $v_{4,2}$, respectively). In fact, all IVs could be transmitted in coded pairs where a node along one dimension transmits to $2$ nodes aligned along the other dimension, which would reduce the communication load by half.\footnote{This is similar to the scheme outlined in \cite{wang2017placement,9133151} for the analogous coded caching problem. However, as we will see in other examples and as discussed in \cite{wang2017placement}, this scheme does not achieve a multiplicative gain for $r>2$.} \end{remark} In the following, we describe the general scheme for the proposed combinatorial design, which extends to the case $r>2$. \subsection{General Homogeneous Scheme} \label{sec: genscheme_s1} In this subsection, we introduce the general homogeneous scheme for $s=1$ step by step as follows. {\bf Node Grouping 1}: Let $\mathcal K=\{ 1, 2, \cdots, K \}$ denote the set of $K$ nodes. Assume that $\mathcal K$ is split into $r$ equal-sized disjoint sets $\mathcal{K}_1,\ldots ,\mathcal{K}_r$ that each contain $\frac{K}{r}\in \mathbb{Z}^+$ nodes. We define $\mathcal{T}\subset \mathcal K$ as a node group of size $r$ if it contains exactly one node from each $\mathcal{K}_i$, i.e., $|\mathcal{T}\cap\mathcal{K}_i|=1,\text{ }\forall \text{ } i\in [r]$.
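Node Grouping 1 can be sketched in a few lines; the helper below is a hypothetical enumeration in which the node sets are taken as consecutive blocks of $[K]$, and it reproduces the grouping of the two-dimensional example:

```python
from itertools import product

def node_groups(K, r):
    """Split nodes 1..K into r equal-sized disjoint sets and enumerate all
    node groups containing exactly one node from each set (Node Grouping 1).
    Assumes r divides K; the block partition here is an illustrative choice."""
    assert K % r == 0
    size = K // r
    sets = [list(range(i * size + 1, (i + 1) * size + 1)) for i in range(r)]
    groups = [set(t) for t in product(*sets)]
    return sets, groups

sets, groups = node_groups(K=6, r=2)      # the two-dimensional example
assert sets == [[1, 2, 3], [4, 5, 6]]     # K_1 and K_2
assert len(groups) == (6 // 2) ** 2       # X = (K/r)^r = 9 node groups
assert {1, 5} in groups                   # the pair used in the example
```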
There are a total of $X=\left( \frac{K}{r} \right)^r$ possible node groups, denoted by $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$. Furthermore, for each node group $\mathcal T_j$, we define its $i$-th component $\mathcal T_{j,i}=\mathcal T_j \cap \mathcal K_i$ as the node in $\mathcal T_j$ that is chosen from $\mathcal K_i$, where $ i \in [r]$. {\bf Node Group (NG) File Mapping}: Given node groups $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$, we split the $N$ files into $X$ disjoint sets labeled as $\mathcal{B}_{1},\ldots,\mathcal{B}_{X}$. These file sets are of size $\eta_1\in \mathbb{Z}^+$ and $N=\eta_1 X$. Each file set $\mathcal{B}_{i}$ is available only to the nodes in the node group $\mathcal{T}_i$. It follows that if node $k \in [K]$ belongs to a node group $\mathcal T_i$, then the file set $\mathcal B_i$ is available to this node. Hence, by considering all possible node groups $\mathcal T_i$ that node $k$ belongs to, the set of files available to node $k$, denoted by $\mathcal{M}_k$, is given by \begin{equation} \mathcal{M}_k:=\bigcup\limits_{i : k\in \mathcal{T}_i}\mathcal{B}_i. \end{equation} {\bf Function Assignment 1}: The $Q$ reduce functions are split into $K$ equal-size, disjoint subsets labeled as $\mathcal{W}_1, \ldots , \mathcal{W}_K$. Each set contains $\eta_2\in \mathbb{Z}^+$ reduce functions, where $Q = \eta_2 K$. For each $k \in [K]$, define $\mathcal{W}_k$ as the set of reduce functions assigned to node $k$. \begin{remark} By Node Grouping 1 and NG File Mapping, each node set $\mathcal{K}_i$ collectively maps the file library exactly once, and therefore, the file library is mapped $r$ times among all $K$ nodes. Note that, since each file belongs to a unique file set $\mathcal B_i$ and is mapped to a unique set of $r$ nodes (in the node group $\mathcal T_i$), we must have $\frac{1}{N}\sum_{k=1}^K|{\cal M}_k|= \frac{N r}{N}=r$. Moreover, $\eta_1\left( \frac{K}{r} \right)^{r-1}$ files are mapped to each node.
Then, by Function Assignment 1, each node is assigned $\eta_2$ reduce functions and each reduce function is assigned to exactly $s=1$ node. \end{remark} The Map, Shuffle and Reduce phases are defined as follows: {\bf Map Phase}: Each node $k\in [K]$ computes the set of IVs $\{v_{i,j} : i\in [Q], w_j \in \mathcal{M}_k \}$. {\bf Node Group (NG) Shuffle Method}: For every $\alpha\in [X]$, a coded message is multicast by each node $k\in \mathcal{T}_\alpha$ to serve independent requests of the remaining $r-1$ nodes in $\mathcal{T}_\alpha$. Meanwhile, each node $k\in \mathcal{T}_\alpha$ multicasts the same number of coded messages. Here, each IV is requested by a node $z \in \mathcal{T}_\alpha \setminus k$ and must be available to all other nodes in $\mathcal{T}_\alpha \setminus z$ to ensure that each node can successfully decode its desired IVs from the broadcast. Next, we consider an arbitrary node group $\mathcal T_\alpha$ and a node $z \in \mathcal T_\alpha $. Assume that {$z \in \mathcal K_h$}, and thus {$z=\mathcal T_{\alpha,h}= \mathcal T_\alpha \cap \mathcal K_h$}. In the following, we fix the choice of $\alpha, z,h$ and define \begin{equation} \mathcal L_{z,\alpha}=\{ \ell \in [X]: \mathcal T_{\ell,h}\ne z, \mathcal T_{\ell,i}=\mathcal T_{\alpha,i}, \forall i\in[r]\setminus h \}. \label{eq:define_L_z_alpha} \end{equation} Here, the set $ \mathcal L_{z,\alpha}$ includes all indices $\ell \in [X] $ such that the node group $\mathcal T_\ell$ differs from $\mathcal T_{\alpha}$ only in the $h$-th element, i.e., the node chosen from {$\mathcal K_h$}. In other words, since {$z \in \mathcal K_h$}, $\mathcal T_{\ell,h}$ can be any node in $\mathcal K_h$ except for $z$. Note that $h$ is suppressed from the subscript of $\mathcal L_{z,\alpha}$ for notational simplicity.
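A minimal sketch of (\ref{eq:define_L_z_alpha}) is given below, representing each node group as an $r$-tuple whose $i$-th entry is its component in $\mathcal K_i$; the ordering of the groups is an illustrative choice and the helper name is hypothetical:

```python
from itertools import product

def L_set(groups, alpha, h):
    """Return the (1-based) indices ell such that T_ell differs from T_alpha
    only in component h, per the definition of L_{z,alpha}. `groups` is a
    list of r-tuples; component i of a tuple is the node chosen from K_{i+1}."""
    T_a = groups[alpha - 1]
    r = len(T_a)
    return {ell + 1 for ell, T in enumerate(groups)
            if T[h] != T_a[h] and all(T[i] == T_a[i] for i in range(r) if i != h)}

# Two-dimensional example: K_1 = {1,2,3}, K_2 = {4,5,6}; groups ordered by product().
groups = list(product([1, 2, 3], [4, 5, 6]))
# For T_alpha = (1, 4) (index alpha = 1) and z = 4 sitting in component h = 1:
ell = L_set(groups, alpha=1, h=1)
assert all(groups[e - 1][0] == 1 and groups[e - 1][1] != 4 for e in ell)
assert len(ell) == 3 - 1   # |L_{z,alpha}| = K/r - 1 in the homogeneous case
```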
The definition of (\ref{eq:define_L_z_alpha}) ensures that for any $\ell \in \mathcal L_{z,\alpha}$, we have $z \notin \mathcal T_\ell$, but for any other node $z'$ in $\mathcal{T}_\alpha \setminus z$, we have $z' \in \mathcal T_\ell$. It follows that while file set $\mathcal B_\ell$ is not mapped to node $z$, it is mapped to all other nodes $z'$ in $\mathcal T_\alpha \setminus z$. Thus, we see that IVs of the type $\{v_{i,j}, i\in \mathcal{W}_z, w_j \in \mathcal B_\ell\}$ are requested by node $z$ because $z$ does not have $\mathcal B_\ell$, but are available to all nodes in $\mathcal{T}_\alpha \setminus z$ because they all have access to $\mathcal B_\ell$. This key idea is used to create multicast opportunities as follows. Formally, let us define \begin{equation} \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}=\bigcup\limits_{\ell \in \mathcal L_{z,\alpha} }\left\{ v_{i,j}: i\in \mathcal{W}_z, w_j \in \mathcal B_\ell\right\}, \label{eq:def_L_shuffle_1} \end{equation} which contains the IVs requested by node $z$ that are available at all nodes in $\mathcal{T}_\alpha\setminus z$. Furthermore, $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ is split into $r-1$ disjoint subsets of equal size\footnote{In general, $|\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}|$ may not be divisible by $r-1$, in which case the IVs of $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ can be concatenated into a message and split into $r-1$ equal-size segments. This process was presented in \cite{li2018fundamental}.} denoted by $ \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},\sigma_1},\ldots , \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},\sigma_{r-1}} $, where $\{ \sigma_1,\ldots , \sigma_{r-1}\}=\mathcal{T}_\alpha\setminus z$.
Each node $k\in \mathcal{T}_\alpha$ sends the common multicast message \begin{equation} \label{eq: 1_trans_eq1} \bigoplus \limits_{z\in \mathcal{T}_\alpha\setminus k} \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},k} \end{equation} to all nodes $z\in \mathcal{T}_\alpha\setminus k$. {\bf Reduce Phase}: For all $k\in [K]$, node $k$ computes all output values $u_q$ such that $q\in \mathcal{W}_k$. \begin{remark} For the homogeneous case, we have $|\mathcal L_{z,\alpha}|=\frac{K}{r}-1$ because {$\mathcal T_{\ell,h}$} can only be one of the $\frac{K}{r}-1$ nodes in {$\mathcal K_h\setminus z$}. When using Node Grouping 1, NG File Mapping, and Function Assignment 1, each intermediate value set $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ in the NG Shuffle Method contains $\eta_1 \eta_2 \left( \frac{K}{r} - 1\right)$ IVs. \end{remark} In the following, we present a more complex three-dimensional example using the design procedures and notation introduced above. \subsection{Three-Dimensional Example} \label{sec: 3D_1} To demonstrate the general scheme, we construct a computing network using a three-dimensional hypercube as shown in Fig.~\ref{fig: 3d fig}. Each lattice point in the cube, with its index $i\in [27]$ labeled next to the point, represents a different file set $\mathcal B_i=\{w_i\}$, which contains $\eta_1=1$ file. There are a total of $K=9$ nodes, split into three node sets, $\mathcal{K}_1 = \{1,2,3 \}$, $\mathcal{K}_2 = \{4,5,6 \}$, and $\mathcal{K}_3 = \{7,8,9 \}$, aligned along each of the $r=3$ dimensions of the hypercube. Specifically, the three nodes in $\mathcal{K}_1 = \{1,2,3 \}$ are represented by three parallel planes that go from the top surface of the hypercube to the bottom. Node 3 is represented by the green plane that passes through lattice point 7. Nodes 1 and 2 are represented by the two planes (not shown) parallel to the green plane that go through lattice points 1 and 4, respectively.
The three nodes in $\mathcal{K}_2 = \{4,5,6 \}$ are represented by three parallel planes that go from the left surface of the hypercube to the right. Node 5 is represented by the middle plane, shown in red, that goes through lattice point 8, and nodes 4 and 6 are represented by two planes (not shown) parallel to the red plane that go through lattice points 7 and 9, respectively. The nodes in $\mathcal{K}_3 = \{7,8,9 \}$ are represented by three parallel planes that go from the front surface of the hypercube to the back. Node 9 is the blue plane, passing through lattice point 27, and nodes 7 and 8 are represented by two planes (not shown) parallel to the blue plane that go through lattice points 9 and 18, respectively. For the file mapping, each node is assigned all the files indicated by the 9 lattice points on the corresponding plane. For instance, node $5$, represented by the red plane, is assigned the file set $\mathcal{M}_5=\{ w_2, w_5, w_8, w_{11}, w_{14}, w_{17}, w_{20}, w_{23}, w_{26}\}$. For each $i \in [3]$, the size of $\mathcal K_i$ is $\frac{K}{r}=3$, which is the number of lattice points in the $i$-th dimension. Since the three nodes in each set $\mathcal{K}_i$ are aligned along dimension $i$, they collectively store the entire library of 27 files. Since each point $i$ in the lattice is uniquely determined by the intersection of three planes, one from each dimension, the same point also represents a node group $\mathcal T_i$. For instance, node group $\mathcal T_{26}=\{3,5,9\}$ is represented by the three planes, green (node 3), red (node 5), and blue (node 9), intersecting at the single lattice point $i=26$. It is clear that each file $w_i$ is mapped to the $r=3$ nodes in $\mathcal T_i$. Each node $k$ is assigned the $\eta_2=1$ function $\mathcal{W}_k=\{ k\}$, since $Q=K=9$ and each node $k$ is assigned only the $k$-th reduce function.
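As a sanity check, the file mapping of this example can be reproduced programmatically; the lattice indexing $i-1 = a + 3b + 9c$ used below is one choice consistent with the planes just described:

```python
# Sketch of the 3x3x3 hypercube file mapping. Lattice point i in [27] has
# coordinates (a, b, c) with i - 1 = a + 3b + 9c; the K_1 node 1+b, the
# K_2 node 4+a, and the K_3 node 7+c all map file w_i (an indexing chosen
# to match the planes described in the text).
def T(i):
    # node group mapped to lattice point i
    a, rem = (i - 1) % 3, (i - 1) // 3
    b, c = rem % 3, rem // 3
    return {1 + b, 4 + a, 7 + c}

# File sets M_k: every lattice point whose node group contains node k.
M = {k: {i for i in range(1, 28) if k in T(i)} for k in range(1, 10)}

assert M[5] == {2, 5, 8, 11, 14, 17, 20, 23, 26}   # node 5's files (red plane)
assert T(26) == {3, 5, 9}                           # node group T_26
assert all(len(M[k]) == 9 for k in M)               # each node maps 9 files
```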
\begin{figure} \centering \includegraphics[width=9cm, height=6cm]{3d_fig_sized_v3} \vspace{-0.6cm} \caption{~\small (left) Cube lattice which defines file availability amongst the $K=9$ computing nodes. Each lattice point represents a file and each node has a set of files available to it represented by a plane of lattice points. The green, red and blue planes represent the files locally available to nodes $3$, $5$ and $9$, respectively. (right) Intersections of planes which represent files that are locally available to multiple nodes and yield coded multicasting opportunities.} \label{fig: 3d fig} \vspace{-0.4cm} \end{figure} In the Map phase, each node computes all IVs from locally available files. For example, node 5 will compute all possible IVs $\{ v_{i,j} : i \in[Q],w_j\in \mathcal{M}_5 \}$. The subset of IVs $\{ v_{5,j} : w_j\in \mathcal{M}_5 \}$ is used to calculate the function output $u_5$. Furthermore, node $5$ will use the subset of IVs in $\{ v_{i,j} : i\in [Q]\setminus\mathcal{K}_2, w_j\in \mathcal{M}_5 \}$ for transmission and decoding purposes when forming multicasting groups with nodes of $\mathcal{K}_1$ and $\mathcal{K}_3$. Note that, similar to the last example, node $5$, like the other nodes, will compute some IVs that are not utilized. We use the example of node group $\mathcal{T}_\alpha=\mathcal{T}_{26}=\{3,5,9\}$ to explain the Shuffle phase. Within node group $\mathcal{T}_{26}$, node 3 will multicast the summation of two IVs to the nodes in $\mathcal T_{26}\setminus 3$, one intended for node 5 and one intended for node 9. The former must be available at both nodes 3 and 9, and the latter must be available at both nodes 3 and 5. To determine these IVs, we consider the sets $\mathcal{V}_{\{3,9\}}^{\{5\}} $ and $\mathcal{V}_{\{3,5\}}^{\{9\}}$. The set $\mathcal{V}_{\{3,9\}}^{\{5\}}$ contains two IVs requested by node 5 that are available at nodes $3$ and $9$.
To find these two IVs, we replace node $5$ in $\mathcal{T}_{26}$ by one of the other two nodes in $\mathcal K_2$, which are nodes 4 and 6. This way, we obtain two node groups $\mathcal T_{25}=\{3,4,9\}$ and $\mathcal T_{27}=\{3,6,9\}$ that differ from $\mathcal{T}_\alpha$ only in the second element (the element that intersects $\mathcal K_2$). Thus ${\mathcal L_{5,\alpha}}=\{25, 27\}$. This leads to $\mathcal{V}_{\{3,9\}}^{\{5\}} =\mathcal{V}_{\{3,9\}}^{\{5\},3} \bigcup \mathcal{V}_{\{3,9\}}^{\{5\},9} =\{ v_{5,25},v_{5,27} \}$, which contains the two IVs requested by node $5$ that are available at nodes 3 and 9. Similarly, we find ${\mathcal L_{9,\alpha}}=\{8,17\}$ and $\mathcal{V}_{\{3,5\}}^{\{9\}} =\mathcal{V}_{\{3,5\}}^{\{9\},3} \bigcup\mathcal{V}_{\{3,5\}}^{\{9\},5}=\{ v_{9,8},v_{9,17} \} $. Once these two IV sets are found, node 3 transmits the summation of one IV from each set, say $\mathcal{V}_{\{3,9\}}^{\{5\},3} \oplus \mathcal{V}_{\{3,5\}}^{\{9\},3}= v_{5,25} \oplus v_{9,8}$, to nodes 5 and 9. Upon receiving this value, node 5 will subtract $v_{9,8}$ to recover $v_{5,25}$ and node 9 will subtract $v_{5,25}$ to recover $v_{9,8}$. The remaining IV set is found in a similar fashion: $\mathcal{V}_{\{5,9\}}^{\{3\}} = \mathcal{V}_{\{5,9\}}^{\{3\},5} \bigcup \mathcal{V}_{\{5,9\}}^{\{3\},9} =\{ v_{3,20},v_{3,23} \}$. Node 5 will transmit $v_{3,20} \oplus v_{9,17}$ to nodes 3 and 9. Node 9 will transmit $v_{3,23} \oplus v_{5,27}$ to nodes 3 and 5. In this example, each node participates in $9$ multicasting groups and transmits $1$ coded message per group. Each transmission has the equivalent size of $1$ IV.
Therefore, the communication load is $L_{\rm c}=\frac{9 \cdot 9}{QN}= \frac{81}{9\cdot 27}=\frac{1}{3}$, which is half of the uncoded communication load $L_{\rm u}=\frac{2}{3}$, i.e., the fraction of files not available to each node. \subsection{Achievable Trade-off between Computation and Communication Loads} The following theorem evaluates the trade-off between the computation and communication loads for the proposed scheme. \begin{theorem} \label{theorem: s1_hom} By using Node Grouping 1, NG File Mapping, Function Assignment 1, and the NG Shuffle Method, the communication load of the general homogeneous scheme is \begin{eqnarray} \label{eq: theorem s1_hom} L_{\rm c}(r) &=& \frac{K-r}{K\left( r-1 \right)}. \end{eqnarray} \hfill $\square$ \end{theorem} \begin{IEEEproof} Theorem~\ref{theorem: s1_hom} is proved in Appendix \ref{sec: codedHetPrfs1} as a special case of our general heterogeneous design, which is defined in Section \ref{sec: hets1}. \end{IEEEproof} The optimality of this scheme is discussed in Section~\ref{sec: Discussion} by comparing its communication load with that of the state-of-the-art scheme in \cite{li2018fundamental}. \section{Heterogeneous Hypercube Computing Approach} \label{sec: hets1} In this section, we expand the proposed combinatorial hypercube design to accommodate heterogeneous computing networks. As mentioned in the introduction, one key novelty of our design is that nodes are assigned a varying number of files and reduce functions so that, in practice, nodes with more computational capability perform relatively more of the overall MapReduce execution. In this case, the proposed heterogeneous design becomes a hypercuboid, consisting of $P$ interleaved homogeneous hypercube networks. The homogeneous networks, $\mathcal{C}_p, \; \forall p\in[P]$, reflect hypercubes with different dimensions and lengths, representing distinct classes of nodes with varying storage capacity and computation resources.
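As a numerical baseline for the heterogeneous comparisons that follow, the homogeneous load of Theorem~\ref{theorem: s1_hom} can be verified by directly counting the messages sent by the NG Shuffle Method; the sketch below assumes the homogeneous setting with $s=1$ and $r$ dividing $K$:

```python
from fractions import Fraction

def load_by_counting(K, r, eta1=1, eta2=1):
    """Count the IV-equivalents shuffled by the NG Shuffle Method and
    normalize by QN. Each of the X = (K/r)^r node groups has r nodes,
    each multicasting one message of |V| / (r-1) IVs, where
    |V| = eta1 * eta2 * (K/r - 1)."""
    X = (K // r) ** r
    msg_ivs = Fraction(eta1 * eta2 * (K // r - 1), r - 1)
    total_ivs = X * r * msg_ivs
    Q, N = eta2 * K, eta1 * (K // r) ** r
    return total_ivs / (Q * N)

# The direct count matches (K - r) / (K (r - 1)) for several (K, r) pairs,
# including the three-dimensional example with K = 9, r = 3 and L_c = 1/3.
for K, r in [(9, 3), (8, 4), (12, 3)]:
    assert load_by_counting(K, r) == Fraction(K - r, K * (r - 1))
```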
We start with an example and then present the general scheme. \subsection{3-Dimension Hypercuboid Example} This example is presented in Fig.~\ref{fig: het_s1_exp1}, where there are two classes of nodes, $\mathcal C_1=\mathcal K_1 \cup \mathcal K_2$ and $\mathcal C_2=\mathcal K_3$, with different storage capabilities, where $\mathcal{K}_1~=~\{ 1, 2 \}$, $\mathcal{K}_2 = \{ 3, 4 \}$ and $\mathcal{K}_3 = \{ 5, 6, 7 \}$. Each node in $\mathcal C_1$ stores half of the files and each node in $\mathcal C_2$ stores one-third of the files. Each node set, $\mathcal{K}_i$, collectively stores all $N=12$ files. Each file is assigned to a node group $\mathcal{T}_\alpha$ of $3$ nodes such that it contains one node from each of the sets $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$. For example, file $w_1$ is assigned to the nodes of $\{1,3,5 \}$ and file $w_{11}$ is assigned to the nodes of $\{2,3,7 \}$. All of the file assignments are represented by the cuboid in Fig.~\ref{fig: het_s1_exp1}. In the Map phase, the nodes compute all IVs from their locally available files. Since every file is assigned to $3$ nodes, the computation load is $r=3$. Different from previous works in CDC, nodes are assigned a varying number of reduce functions. We assign more reduce functions to nodes which have larger storage and computing capability. Assume that there are $Q=11$ reduce functions. We assign $2$ reduce functions to each node of $\mathcal{K}_1$ and $\mathcal{K}_2$ and just $1$ reduce function to each node of $\mathcal{K}_3$. Specifically, the function assignments are $\mathcal W_1=\{1,2\}$, $\mathcal W_2=\{3,4\}$, $\mathcal W_3=\{5,6\}$, $\mathcal W_4=\{7,8\}$, $\mathcal W_5=\{9\}$, $\mathcal W_6=\{10\}$, and $\mathcal W_7=\{11\}$. The reason for this specific function assignment will become clear when we discuss the Shuffle phase.
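The cuboid file mapping of this example can be enumerated directly. The sketch below assumes $\eta_1=1$ (one file per node triple) and the node labels above; the exact file-to-triple indexing may differ from the one used in the figure.

```python
# Sketch of the cuboid file mapping: K1={1,2}, K2={3,4}, K3={5,6,7}; each of
# the N = 12 files is assigned to one node triple drawn from K1 x K2 x K3.
# eta_1 = 1 and the triple ordering are assumptions for illustration.
from itertools import product

K1, K2, K3 = [1, 2], [3, 4], [5, 6, 7]
triples = list(product(K1, K2, K3))           # node groups T_1, ..., T_12
N = len(triples)                              # 2 * 2 * 3 = 12 files

# M[k] = set of file indices mapped to node k.
M = {k: set() for k in K1 + K2 + K3}
for j, group in enumerate(triples, start=1):  # file w_j -> nodes in 'group'
    for k in group:
        M[k].add(j)

assert N == 12
assert all(len(M[k]) == N // 2 for k in K1 + K2)  # C1 nodes store half
assert all(len(M[k]) == N // 3 for k in K3)       # C2 nodes store one-third
# Every file is mapped to exactly r = 3 nodes, so the computation load is 3.
assert all(sum(j in M[k] for k in M) == 3 for j in range(1, N + 1))
```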
\begin{figure*} \centering \includegraphics[width=16cm]{3d_s=1_het_v2} \put(-38.4,24.5){assigned} \put(-38.4,21){functions \;\;\;\;\; transmits} \put(-51.95,17){node $2$: \;\;\;\;\;\;$3$, $4$ \;\;\;\;\;\;$v_{5,12}\oplus v_{11,7}$ } \put(-51.95,13){node $3$: \;\;\;\;\;\;$5$, $6$ \;\;\;\;\;\;\;$v_{3,5}\oplus v_{11,9}$} \put(-51.95,9){node $7$: \;\;\;\;\;\;\;$11$ \;\;\;\;\;\;\;\;\;$v_{4,5}\oplus v_{6,12}$ } \vspace{-0.2cm} \caption{~\small Representations of a hypercuboid with $P=2$, $r_1=2$, $m_1=2$, $r_2=1$ and $m_2=3$. {\it Left $3$ cuboids}: Depictions of the files mapped to the nodes of $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$, respectively. {\it Rightmost cuboid}: Hypercuboid highlighting the files stored at exactly $2$ nodes of the multicast group $\mathcal{T}_{11}=\{ 2,3,7 \}$. These files, in addition to the assigned functions among these nodes, determine the IVs included in the coded multicasts, which are displayed to the right.} \label{fig: het_s1_exp1} \vspace{-0.4cm} \end{figure*} In the Shuffle phase, the set of multicast groups includes all possible node groups $\mathcal T_{\alpha}$ which contain 1 node from each of the sets $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$. Within each $\mathcal T_{\alpha}$, nodes send coded pairs of IVs to the other two nodes. For example, consider the node set $\mathcal T_{\alpha}=\mathcal T_{11}=\{2,3,7 \}$. Following the notation of the NG Shuffle Method, we have $\mathcal L_{2,\alpha}=\{5\}$. This is because when replacing node $2 \in \mathcal K_1$ in $\mathcal T_{\alpha}$ by a different node in $\mathcal K_1$, we obtain $\mathcal T_{5}=\{1,3,7 \}$. Hence, using $\mathcal W_2=\{3,4\}$ and Eqn.~(\ref{eq:def_L_shuffle_1}), we obtain $\mathcal V^{\{2\}}_{\{3,7\}}=\{v_{3,5}, v_{4,5}\}$, which are the IVs requested by node 2 and computed at nodes $3$ and $7$. Similarly, for node 3, we have $\mathcal L_{3,\alpha}=\{12\}$ and $\mathcal V^{\{3\}}_{\{2,7\}}=\{v_{5,12}, v_{6,12}\}$.
For node 7, we have $\mathcal L_{7,\alpha}=\{7,9\}$, corresponding to $\mathcal T_{7}=\{2,3,5\}$ and $\mathcal T_{9}=\{2,3,6\}$. While $\mathcal L_{7,\alpha}$ is larger than $\mathcal L_{2,\alpha}$ and $\mathcal L_{3,\alpha}$, the function set $\mathcal W_7=\{11\}$ is smaller, and we obtain $\mathcal V^{\{7\}}_{\{2,3\}}=\{v_{11,7}, v_{11,9}\}$, which is the same size as $\mathcal V^{\{2\}}_{\{3,7\}}$ and $\mathcal V^{\{3\}}_{\{2,7\}}$. Using Eqn.~(\ref{eq: 1_trans_eq1}), we see that nodes $2$, $3$, and $7$ transmit $v_{5,12}\oplus v_{11,7}$, $v_{3,5}\oplus v_{11 ,9}$, and $v_{4,5}\oplus v_{6,12}$, respectively. In this example, we see that by assigning a varying number of reduce functions to the nodes we can create symmetry within each node group, $\mathcal{T}_\alpha$, i.e., each node of the group requests the same number of IVs from the other nodes of the group. This symmetry can lead to savings in the communication load. Here, the communication load can be calculated by accounting for the $2\cdot 2 \cdot 3 = 12$ node groups, where within each group there are $3$ transmissions of size $T$ bits. By normalizing by $QNT$, we find that the communication load of the coded scheme is $L_{\rm c} = \frac{36}{12 \cdot 11} = \frac{3}{11}$. We can compare this to the uncoded communication load, where each requested IV is transmitted alone. To compute the uncoded communication load, we count the number of IVs each node requests. Since the $4$ nodes of $\mathcal{K}_1$ and $\mathcal{K}_2$ request $6\cdot 2 = 12$ IVs each and the $3$ nodes of $\mathcal{K}_3$ request $8$ IVs each, we find $L_{\rm u} = \frac{4\cdot 12 + 3 \cdot 8}{12\cdot 11} = \frac{6}{11}$. In this case, $L_{\rm c} = \frac{1}{2}\cdot L_{\rm u}$ since, for the coded Shuffle policy, every requested IV is transmitted in a coded pair. In the general heterogeneous CDC scheme proposed here, we will see that $L_{\rm c} = \frac{1}{r-1}\cdot L_{\rm u}$.
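The load accounting in this example can be verified mechanically with exact arithmetic. This is a sketch of the counting argument only, using the parameters of the example above.

```python
# Sanity check of the loads in the 3-D hypercuboid example:
# K1={1,2}, K2={3,4}, K3={5,6,7}, N = 12 files, Q = 11 reduce functions.
from fractions import Fraction

N, Q, r = 12, 11, 3
groups = 2 * 2 * 3              # one multicast group per node triple
coded_ivs = groups * 3          # 3 coded transmissions of IV size per group
L_c = Fraction(coded_ivs, Q * N)

# Uncoded: each node requests (functions assigned) * (files it does not map).
# C1/C2 nodes map 6/4 of the 12 files and reduce 2/1 functions, respectively.
requests = 4 * 2 * (12 - 6) + 3 * 1 * (12 - 4)
L_u = Fraction(requests, Q * N)

assert L_c == Fraction(3, 11)
assert L_u == Fraction(6, 11)
assert L_c == L_u / (r - 1)     # matches L_c = L_u / (r - 1) with r = 3
```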
\subsection{General Heterogeneous Scheme} \label{sec: gen_het s1} In this subsection, we introduce the general heterogeneous scheme step by step. {\bf Node Grouping 2}: The key idea of Node Grouping 2 is to form one heterogeneous network based on a hypercuboid design that consists of $P$ interleaved homogeneous networks, represented by hypercubes of different dimensions $r_p$ and sizes $m_p$ within the hypercuboid. {The $K$ nodes consist of $P$ disjoint sets denoted by $\mathcal{C}_1,\ldots ,\mathcal{C}_P$, where $\sum_{p=1}^{P}|\mathcal{C}_p|=K$. For each $p\in [P]$, split $\mathcal{C}_p$ into $r_p \in \mathbb{Z}^+$ disjoint subsets, each of size $m_p$, denoted by $\{\mathcal{K}_{n_{p\scaleto{-1}{3.5pt}}+1} ,\ldots , \mathcal{K}_{n_p}\}$, where $n_p=\sum_{i=1}^p r_i$. Hence, the entire network comprises $r$ node sets, $\mathcal{K}_1 ,\ldots , \mathcal{K}_r$, where $r = \sum_{p=1}^{P}r_p$. Consider all possible node groups $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$ of size $r$ that each contains one node from every node set $\mathcal{K}_1 ,\ldots , \mathcal{K}_r$, where $X=\prod_{i=1}^{r}|\mathcal{K}_i|=\prod_{p=1}^{P}m_p^{r_p}$. Denote $\mathcal T_{j,i}=\mathcal T_j \cap \mathcal K_i, \;\forall j \in [X]$ and $\forall i \in [r]$, as the node in $\mathcal T_j$ that is chosen from $\mathcal K_i$. } The file mapping is then determined by the NG File Mapping defined in Section \ref{sec: genscheme_s1} with node groups $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$ defined by Node Grouping 2. \begin{remark} When using Node Grouping 2 and NG File Mapping, we form a hypercuboid made of $P$ interleaved hypercubes of different dimensions. For a given $p\in [P]$, $\mathcal{C}_p$ translates to $r_p$ dimensions of size $m_p$ of the hypercuboid. Moreover, $\mathcal{C}_p$ serves a role similar to that of a single hypercube of dimension $r_p$ as in the homogeneous case. Specifically, $\mathcal{C}_p$ contains $r_p$ node sets $\mathcal K_i$, each of size $m_p$.
Here $m_p$ is the number of lattice points along each dimension of the hypercube. The total number of nodes in $\mathcal C_p$ is thus $r_p m_p$. Nodes in each $\mathcal K_i$ collectively map the file library once. Hence, all nodes in $\mathcal{C}_p$ have the same storage capacity, and each maps a total of $\frac{N}{m_p}$ files. Collectively, nodes in $\mathcal{C}_p$ map the library $r_p$ times. The $P$ disjoint sets $\mathcal C_1, \cdots, \mathcal C_P$ form one hypercuboid with $r$ dimensions, where there are $r_p$ dimensions of size $m_p$ for $p\in[P]$. Hence, each node group $\mathcal{T}_\alpha$ of size $r=\sum_{p=1}^P r_p$, defined in Node Grouping 2, consists of the union of $P$ node groups, with sizes $r_1, \cdots, r_P$, respectively, chosen from each of the $P$ interleaved hypercubes corresponding to $\mathcal C_p, p \in[P]$. Note that, instead of each hypercube operating independently subject to its own computation load, $r_p$, the hypercuboid design takes full advantage of the total computation load, $r$, across the $P$ hypercubes to achieve the gain of $\frac{1}{r-1}$ for the heterogeneous system. \end{remark} {\bf Function Assignment 2}: Define $Y$ as the least common multiple (LCM) of $\{m_1-1, m_2-1, \ldots , m_P-1 \}$. Split the $Q$ functions into $K$ disjoint sets, labeled $\mathcal{W}_{1},\ldots,\mathcal{W}_{K}$, where, in general, the sets may have different sizes; for each $k \in [K]$, $\mathcal{W}_k$ is the set of reduce functions assigned to node $k$. For each $k \in [K]$, $|\mathcal{W}_{k}| =~\frac{\eta_2Y }{m_p - 1}$, where $k \in \mathcal{C}_p$ and $\eta_2 \in \mathbb{Z}^+$ such that $Q~=~\eta_2 Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1}$. The Map and Shuffle phases follow our standard definition from Section \ref{sec: genscheme_s1}, and the NG Shuffle Method is used for the Shuffle phase with node grouping defined by Node Grouping 2. The correctness of the proposed heterogeneous CDC scheme is proved in Appendix \ref{sec_APP:correctness}.
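Function Assignment 2 can be evaluated directly on the earlier 3-dimension example, with $P=2$, $(r_1,m_1)=(2,2)$ and $(r_2,m_2)=(1,3)$. The sketch below assumes $\eta_2=1$ for illustration.

```python
# Hedged sketch: parameters of Function Assignment 2 for the earlier example
# with P = 2, (r_1, m_1) = (2, 2), (r_2, m_2) = (1, 3); eta_2 = 1 is assumed.
from fractions import Fraction
from math import lcm

r_list, m_list = [2, 1], [2, 3]
eta2 = 1

Y = lcm(*(m - 1 for m in m_list))  # LCM of {m_p - 1} = lcm(1, 2) = 2
Q = eta2 * Y * sum(Fraction(rp * mp, mp - 1) for rp, mp in zip(r_list, m_list))

# |W_k| = eta_2 * Y / (m_p - 1) for any node k in class C_p.
per_node = [eta2 * Y // (m - 1) for m in m_list]

assert Y == 2 and Q == 11      # matches the Q = 11 functions in the example
assert per_node == [2, 1]      # 2 functions per C_1 node, 1 per C_2 node
# Total check: r_p * m_p nodes in class p, each assigned per_node[p] functions.
assert sum(r * m * w for r, m, w in zip(r_list, m_list, per_node)) == Q
```

The choice of $Y$ as an LCM is what makes every per-node function count an integer.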
\begin{remark} When using Node Grouping 2, NG File Mapping, Function Assignment 2, and the NG Shuffle Method, we find that each intermediate value set $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ contains $\eta_1 \eta_2 Y$ IVs. \end{remark} \begin{remark} Node Grouping 2 and Function Assignment 2 are more general cases of Node Grouping 1 and Function Assignment 1, respectively. Therefore, the homogeneous scheme of Section \ref{sec: genscheme_s1} is a special case of the general heterogeneous scheme here. By letting $P=1$ such that $\mathcal{C}_1$ is the set of all nodes, we find $r=r_1$, $m_1 = \frac{K}{r}$, $X = \left( \frac{K}{r} \right)^r$ and $Y = \frac{K}{r}-1$. Moreover, each node is assigned $\frac{\eta_2 Y}{m_1-1} = \eta_2$ reduce functions. For file availability, nodes are split into $r$ disjoint, equal-size sets, $\mathcal{K}_1,\ldots,\mathcal{K}_r$, and file sets of size $\eta_1$ are available to sets of nodes which contain exactly one node from each set $\mathcal{K}_1,\ldots,\mathcal{K}_r$. \end{remark} \begin{remark} The proposed hypercuboid design may not accommodate arbitrary heterogeneous storage capacities and computation loads due to its constrained combinatorial structure. In practice, we can group nodes with heterogeneous storage capacity and computation resources to fit a hypercuboid design as closely as possible (similar to ``quantization'') and thereby reap the benefit of taking the heterogeneity of the system into consideration. \end{remark} \subsection{Achievable Trade-off between Computation and Communication Loads} In this section, we first present the communication load of an uncoded Shuffle phase, $L_{\rm u}$, using Node Grouping 2, NG File Mapping, and Function Assignment 2 of the general heterogeneous scheme. Here, an uncoded Shuffle phase means that all the requested IVs are transmitted in a unicast fashion without coded multicasting.
Note that $L_{\rm u}$ represents the fraction of intermediate values which are requested by any node. Then, we demonstrate that the communication load using the proposed hypercuboid scheme and the NG Shuffle Method is $L_{\rm c} = \frac{1}{r-1}\cdot L_{\rm u}$. More formally, we define $L_{\rm u}$ and $L_{\rm c}$ as functions of $m_1 , \ldots , m_P$ and $r_1 , \ldots , r_P$, which define the number of nodes and the corresponding computation load in each node class of the heterogeneous computing network. Then, $L_{\rm u}$ and $L_{\rm c}$ are given in the following theorems. \begin{theorem} \label{theorem: uncoded} By using Node Grouping 2, NG File Mapping, Function Assignment 2, and an uncoded Shuffle phase, the communication load is \begin{align} \label{eq: Lu} L_{\rm u}(m_1 , \ldots , m_P, r_1 , \ldots , r_P) = \frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}. \end{align} \end{theorem} \begin{IEEEproof} Theorem \ref{theorem: uncoded} is proven in Appendix \ref{sec: uncodedHetPrfs1}. \end{IEEEproof} The following theorem states the communication load of the Shuffle phase which uses coded communication. \begin{theorem} \label{theorem: coded} By using Node Grouping 2, NG File Mapping, Function Assignment 2, and the NG Shuffle Method, the communication load of the general heterogeneous scheme is \begin{align} \label{eq: Lc} L_{\rm c} (m_1 , \ldots , m_P, & r_1 , \ldots , r_P) = \frac{1}{r-1}\cdot\frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}} = \frac{1}{r-1} \cdot L_{\rm u}(m_1 , \ldots , m_P, r_1 , \ldots , r_P). \end{align} \end{theorem} \begin{IEEEproof} Theorem \ref{theorem: coded} is proven in Appendix \ref{sec: codedHetPrfs1}. \end{IEEEproof} The communication load $L_{\rm c}$ comprises two parts: the local computing gain, $L_{\rm u}$, and the global computing gain, $\frac{1}{r-1}$. The local computing gain represents the normalized number of IVs that must be shuffled.
As nodes have access to a larger fraction of the files, they inherently request fewer IVs in the Shuffle phase. The global computing gain stems from the fact that, with the coded design, every transmission serves $r-1$ nodes with distinct requests. \subsection{Optimality} The information-theoretic lower bound on the communication load derived in \cite{li2018fundamental} is under the assumption of a homogeneous reduce function assignment. Hence, it does not apply when reduce functions are heterogeneously assigned to the computing nodes. In the following, we discuss the lower bound on the communication load for two scenarios. First, we demonstrate a straightforward lower bound on the communication load when considering all possible file and function assignments for a given $r$ and $K$. Next, we provide a lower bound on the communication load when we use the specific file and function assignments (Node Grouping 2, NG File Mapping and Function Assignment 2) of the heterogeneous design in Section \ref{sec: gen_het s1}. A trivial bound on the communication load is $L \geq 0$. Given $r$ and $K$, the following file and function assignment and Shuffle phase design yields a communication load meeting this bound. Pick $r$ nodes and assign the entire file library to each of these nodes. Furthermore, assign each function to one of the $r$ nodes with access to the entire file library. As every node that is assigned a reduce function is able to compute all the necessary IVs itself, no Shuffle phase is required, so $L=0$. Note that, in this context, we do not consider any storage or computing limitations on the nodes; rather, we show that optimizing the communication load over all possible function and file assignments is not an interesting problem. The question remains as to the optimality of the proposed Shuffle phase of Section \ref{sec: gen_het s1} given the file and reduce function assignments.
Based on the seminal approach introduced in \cite{arbabjolfaei2013capacity,wan2016caching,wan2016optimality,wan2020index} for coded caching with uncoded cache placement, we derive Theorem \ref{theorem: bound}, which provides a lower bound on the entropy of all transmissions in the Shuffle phase given a specific function and file placement and a permutation of the computing nodes. \begin{theorem} \label{theorem: bound} Given a particular file placement, $\mathcal{M}_k,\;\forall k\in[K]$, and function assignment, $\mathcal{W}_k,\;\forall k\in[K]$, in order for every node $k\in[K]$ to have access to all IVs necessary to compute the functions of $\mathcal{W}_k$, the optimal communication load over all achievable shuffle schemes, $L^*$, is bounded by \begin{equation} \label{eq: bound_eq1} L^* \geq \frac{1}{TQN}\sum_{i=1}^{K} H\left(V_{\mathcal{W}_{k_i},:}|V_{:,\mathcal{M}_{k_i}},Y_{\{k_1,\ldots, k_{i-1} \}}\right) \end{equation} where $k_1, \ldots , k_K$ is some permutation of $[K]$, $V_{\mathcal{W}_{k_i},:}$ is the set of IVs necessary to compute the functions of $\mathcal{W}_{k_i}$,\footnote{The notation ``$:$'' is used to denote all possible indices.} $V_{:,\mathcal{M}_{k_i}}$ is the set of IVs which can be computed from the file set $\mathcal{M}_{k_i}$, and $Y_{\{k_1,\ldots, k_{i-1} \}}$ is the union of the set of IVs necessary to compute the functions of $\bigcup_{j=1}^{i-1}\mathcal{W}_{k_j}$ and the set of IVs which can be computed from the files of $\bigcup_{j=1}^{i-1}\mathcal{M}_{k_j}$. \hfill $\square$ \end{theorem} \begin{IEEEproof} Theorem \ref{theorem: bound} is proved in Appendix \ref{appendix: bound}. \end{IEEEproof} In Theorem \ref{thm: optimality} below, we demonstrate that, given Node Grouping 2, NG File Mapping, and Function Assignment 2, the NG Shuffle Method introduced in Section~\ref{sec: genscheme_s1} yields a communication load that is within a constant factor of the lower bound.
\begin{theorem} \label{thm: optimality} For a computing network defined by Node Grouping 2, NG File Mapping, and Function Assignment 2, define $L^*$ to be the infimum of the communication load over all possible Shuffle phases; then we have \begin{equation} L_{\rm c} \leq \frac{2r}{r-1} L^*, \end{equation} where $L_{\rm c}$, given in (\ref{eq: Lc}), is the communication load achieved by using the NG Shuffle Method. \end{theorem} \begin{IEEEproof} Theorem \ref{thm: optimality} is proved in Appendix \ref{sec: opt_pf}. \end{IEEEproof} \section{Discussions} \label{sec: Discussion} In this section, we compare the performance of the proposed schemes to that of the state-of-the-art schemes in \cite{li2018fundamental}. Specifically, we compare the required number of files, the required number of multicast groups, and the communication load. When we compare the performance of the proposed heterogeneous CDC scheme with that of the homogeneous CDC in \cite{li2018fundamental}, we fix the computation load, $r$, the number of files, $N$, and the number of reduce functions, $Q$.\footnote{We adjust $N$ and $Q$ to be the same by using the appropriate $\eta_1$ and $\eta_2$.} The scheme in \cite{li2018fundamental} requires $N_1 = {K \choose r}\eta_1$ input files and $Q_1 = {K \choose s}\eta_2$ reduce functions, where $s$ is the number of nodes that compute each reduce function; in this comparison we consider $s=1$, so that $Q_1 = K\eta_2$. Moreover, the communication load as a function of $K$ and $r$ is \begin{equation} \label{eq: li_se1} L_1(r) = \frac{1}{r}\left(1-\frac{r}{K}\right). \end{equation} \subsection{Homogeneous CDC} Using (\ref{eq: theorem s1_hom}), we observe the following comparison \begin{equation} \frac{L_{\rm c}(r)}{L_1(r)} = \frac{rK}{K-r} \cdot\frac{K-r}{K\left( r-1 \right)} = \frac{r}{r-1}. \end{equation} For most values of $r$, there is an insignificant increase in the communication load for the new combinatorial scheme; furthermore, as $r \rightarrow \infty$, the two schemes yield identical communication loads.
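The constant $\frac{r}{r-1}$ load gap and the file-count comparison can be checked numerically. The sketch below uses $K=24$ and values of $r$ dividing $K$ as illustrative choices.

```python
# Numerical check (illustrative parameters): homogeneous hypercube load
# L_c(r) = (K - r) / (K (r - 1)) versus L_1(r) = (1/r)(1 - r/K), and the
# required file counts (K/r)^r versus C(K, r) (with eta_1 = 1).
from fractions import Fraction
from math import comb

K = 24
for r in (2, 3, 4, 6, 8):        # divisors of K, so K/r is an integer
    L_c = Fraction(K - r, K * (r - 1))
    L_1 = Fraction(1, r) * Fraction(K - r, K)
    assert L_c / L_1 == Fraction(r, r - 1)   # constant-factor gap r/(r-1)
    # The hypercube design needs (K/r)^r files; the scheme of Li et al.
    # needs C(K, r), which grows much faster in r.
    assert (K // r) ** r <= comb(K, r)
```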
Since our proposed homogeneous scheme uses the same function assignment as the scheme in \cite{li2018fundamental}, this hypercube-based design is asymptotically optimal in the information-theoretic sense, in general, without fixing the file and function assignments. These findings are verified through simulation of the communication load, as shown in Fig.~\ref{fig: s1graph}. While both schemes require the same number of output functions, $Q=K\eta_2$, the required number of input files has been drastically reduced in this case. It can be observed that the number of input files for the homogeneous hypercube design is \begin{figure} \vspace{-5cm} \centering \includegraphics[width=12cm]{CDC_1_fig4.pdf} \vspace{-5cm} \caption{~\small A comparison of the resulting communication load for the newly proposed and the state-of-the-art homogeneous distributed computing schemes.} \label{fig: s1graph} \vspace{-0.5cm} \end{figure} \begin{equation} N_{\rm c} = \left( \frac{K}{r} \right)^r\eta_1 \end{equation} while the scheme of \cite{li2018fundamental} requires $N_1 = {K \choose r}\eta_1$ input files. Assuming $r =\Theta (K)$, using Stirling's formula to directly compare the two quantities yields \begin{align} \frac{N_1}{N_{\rm c}} &= \frac{{K \choose r}}{\left( \frac{K}{r}\right)^r} = \frac{K!}{r!(K-r)!\left( \frac{K}{r}\right)^r} = \Theta\left( \frac{\sqrt{2\pi K}\left( \frac{K}{e}\right)^K}{2 \pi \sqrt{r \left( K-r \right)}\left( \frac{r}{e}\right)^r\left( \frac{K-r}{e}\right)^{\left( K-r \right)}} \cdot \frac{1}{\left( \frac{K}{r}\right)^r} \right) \notag\\ &= \Theta\left(\sqrt{\frac{K}{2\pi r (K-r)}}\cdot \left( \frac{K}{K-r} \right)^{K-r} \right) = \Theta\left(\sqrt{\frac{1}{K}}\cdot \left( \frac{K}{K-r} \right)^{K-r} \right)\label{eq: comp_s1_files}. \end{align} When $r<K$ with $r=\Theta(K)$, (\ref{eq: comp_s1_files}) grows exponentially with $K$ and, therefore, our proposed scheme achieves an exponential decrease in the number of required files.
As pointed out in \cite{li2018fundamental,konstantinidis2018leveraging}, the required number of multicast groups is also an important design parameter in CDC. If this number is large, forming the multicast groups may take so long that the gain achieved by CDC is completely lost. It can be seen that the number of required multicast groups for the scheme in \cite{li2018fundamental} is $U_1={K \choose r+1}$, while the required number of multicast groups for the proposed scheme is $U_c = \left(\frac{K}{r}\right)^r$. Hence, by a computation similar to (\ref{eq: comp_s1_files}), it can be seen that \begin{equation} \frac{U_1}{U_c} = \Theta\left(\frac{K-r}{r+1} \cdot \sqrt{\frac{1}{K}}\cdot \left( \frac{K}{K-r} \right)^{K-r} \right), \end{equation} which can grow exponentially with $K$, such that the proposed hypercube scheme reduces the required number of multicast groups exponentially. \begin{remark} The hypercube approach has similar performance compared to the CDC scheme based on the resolvable design proposed in \cite{konstantinidis2018leveraging}; e.g., the required number of input files in \cite{konstantinidis2018leveraging} is $\left(\frac{K}{r}\right)^{r-1}$, which is slightly smaller than that of the proposed hypercube scheme. However, as we discussed in Section~\ref{sec: hets1}, the proposed hypercube scheme extends naturally to heterogeneous CDC networks, while it is unclear how to extend the scheme in \cite{konstantinidis2018leveraging} to heterogeneous CDC networks. \end{remark} \subsection{Heterogeneous CDC} \label{sec: disc_sg1} As shown in (\ref{eq: Lc}), the communication load of the proposed heterogeneous CDC design is $L_{\rm c}(r)=\frac{1}{r-1}L_{\rm u}(r)$, where $\frac{1}{r-1}$ and $L_{\rm u}(r)$ are the global computing gain and the local computing gain, respectively.
In comparison, for the homogeneous design in \cite{li2018fundamental}, we have $L_1(r)=\frac{1}{r} (1-\frac{r}{K})$, where the global computing gain is $\frac{1}{r}$ and the local computing gain is $1-\frac{r}{K}$. Next, we show that even though the proposed heterogeneous design has an inferior global computing gain compared to that of \cite{li2018fundamental} ($\frac{1}{r-1}$ versus $\frac{1}{r}$), it has a better local computing gain, $L_{\rm u}(r)\le (1-\frac{r}{K})$, and hence can achieve a lower communication load, $L_{\rm c}(r) < L_1(r)$, under certain parameter regimes. Since $\sum_{p=1}^{P} \frac{r_p}{r} = 1$ and $\frac{m_p}{m_p-1}$ is a convex function of $m_p$ for $m_p > 1$, using (\ref{eq: Lu}) and Jensen's inequality, we can obtain \begin{align} \frac{1}{L_{\rm u}(r)} = \frac{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}{r} &= \sum_{p=1}^{P} \frac{r_p}{r}\cdot \frac{m_p}{m_p-1} \geq \frac{\sum_{p=1}^{P} \frac{r_p m_p}{r}}{\left(\sum_{p=1}^{P} \frac{r_p m_p}{r}\right) - 1} = \frac{\frac{K}{r}}{\frac{K}{r}-1} = \frac{K}{K-r} \label{eq:Lu} \end{align} where $\sum_{p=1}^{P}r_p m_p = \sum_{p=1}^{P}|\mathcal{C}_p| = K$. {Note that the inequality in (\ref{eq:Lu}) is strictly ``$>$'' if the network is truly heterogeneous, i.e., not all $\{m_p\}$ are equal.} Hence, \begin{equation} L_{\rm u}(r) \leq \frac{K-r}{K} = 1-\frac{r}{K}, \label{eq:upper_bound_local_gain} \end{equation} which shows that the local computing gain of our heterogeneous design is upper bounded by that of the homogeneous design in \cite{li2018fundamental}. Using (\ref{eq: Lc}), we obtain \begin{equation} L_{\rm c}(r)=\frac{1}{r-1}L_{\rm u}(r) \leq \frac{1}{r-1}\cdot \left( 1-\frac{r}{K} \right). \end{equation} Thus, $L_{\rm c}(r)$ can be less than $L_1(r)$ for certain choices of $r$ and $K$. For example, given a heterogeneous network defined by $m_1 = 2$, $r_1 = 4$ and $m_2 = 8$, $r_2 = 2$, we have $r=r_1+r_2=6$ and $K=r_1m_1+r_2m_2=24$. We compare it with a homogeneous network with $r=6$ and $K=24$.
The proposed heterogeneous design has a local computing gain of $L_{\rm u}(r) = \frac{7}{12} \approx 0.583$, which is less than that of the homogeneous design, $1-\frac{r}{K} = \frac{3}{4} = 0.75$, and a communication load of $L_{\rm c} = \frac{7}{60}\approx 0.117$, which is lower than that of the homogeneous design, $L_1(r) = \frac{1}{8} = 0.125$. \begin{remark} In \cite{li2018fundamental}, $L_1(r)$ was proved to be a lower bound on the communication load given $r$ and $K$. However, the proof uses the implicit assumption that every node is assigned the same number of reduce functions. Our new finding is that if the reduce functions can be assigned in a heterogeneous fashion, then the communication load lower bound of \cite{li2018fundamental} does not apply. \end{remark} In Fig.~\ref{fig: het sim_seq1_fig}, we provide additional comparisons of the communication load of the hypercuboid design and the homogeneous scheme of \cite{li2018fundamental} with an equivalent computation load, $r$. Each design has a fixed number of nodes, $K=20$. The heterogeneous network is defined with $P=2$ sets of nodes that map a different number of files and are assigned a different number of reduce functions. Specifically, there are $|\mathcal{C}_1|=2(r-1)$ powerful nodes and $|\mathcal{C}_2|=K-2(r-1)$ weaker nodes, where $r_1=r-1$, $m_1=2$, $r_2=1$ and $m_2=K-2(r-1)$. In other words, the nodes of $\mathcal{C}_1$ each map $\frac{1}{2}$ of the files and the nodes of $\mathcal{C}_2$ each map a $\frac{1}{K-2(r-1)}$ fraction of the files, which can be much less than $\frac{1}{2}$. Fig.~\ref{fig: het sim_seq1_fig} shows that the communication load of the hypercuboid design is less than that of the state-of-the-art homogeneous design of \cite{li2018fundamental} for $4\leq r \leq 7$.
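The numbers in the $m_1=2$, $r_1=4$, $m_2=8$, $r_2=2$ example above follow directly from the closed forms in Theorems \ref{theorem: uncoded} and \ref{theorem: coded}; a quick check with exact arithmetic:

```python
# Check of the heterogeneous example: m1 = 2, r1 = 4, m2 = 8, r2 = 2, giving
# r = 6 and K = 24, compared against a homogeneous network with the same r, K.
from fractions import Fraction

r_list, m_list = [4, 2], [2, 8]
r = sum(r_list)                                      # 6
K = sum(rp * mp for rp, mp in zip(r_list, m_list))   # 24

L_u = Fraction(r) / sum(Fraction(rp * mp, mp - 1) for rp, mp in zip(r_list, m_list))
L_c = L_u / (r - 1)
L_1 = Fraction(1, r) * Fraction(K - r, K)            # homogeneous load of Li et al.

assert (r, K) == (6, 24)
assert L_u == Fraction(7, 12)    # local gain, below 1 - r/K = 3/4
assert L_c == Fraction(7, 60)    # below the homogeneous L_1 = 1/8
assert L_c < L_1 == Fraction(1, 8)
```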
{\bf Comparisons for large networks.} Next, we provide comparisons of the communication load of the proposed heterogeneous scheme and the homogeneous scheme of \cite{li2018fundamental} for networks with a large number of computing nodes, $K$. We consider two cases. Case 1. For the heterogeneous network, assume that $r_1,\ldots , r_P$ and $r$ are fixed, but the fractions of files each node has access to, $\frac{1}{m_1}, \cdots, \frac{1}{m_P}$, decrease as $K$ becomes large. Then, we have \begin{equation} \lim_{K\rightarrow \infty} L_{\rm u}(r)=1 \quad \text{and} \quad \lim_{K\rightarrow \infty} \frac{L_{\rm c}(r)}{L_1(r)} = \frac{r}{r-1}. \end{equation} In other words, $\frac{L_{\rm c}(r)}{L_1(r)} = \Theta (1)$. \begin{figure} \vspace{-5cm} \centering \includegraphics[width=12cm]{cdc_fig5.pdf} \vspace{-5cm} \caption{\small A comparison of the proposed hypercuboid CDC design to the state-of-the-art CDC design of \cite{li2018fundamental} with $K=20$ nodes and an equivalent computation load, $r$. The heterogeneous hypercube is designed with parameters $r_1=r-1$, $m_1=2$, $r_2=1$ and $m_2=K-2(r-1)$. The hypercuboid design has a lower communication load than that of the homogeneous design for $4\leq r \leq 7$.} \label{fig: het sim_seq1_fig} \vspace{-0.5cm} \end{figure} Case 2. For the heterogeneous network, assume that $\frac{r_1}{K}=\beta_1,\ldots , \frac{r_P}{K}=\beta_P$ and $\frac{r}{K}=\beta$ are kept constant as $K$ gets large. The fractions of files available to each node, $\frac{1}{m_1}, \cdots, \frac{1}{m_P}$, are also kept constant. It then follows from (\ref{eq:Lu}) that when the network is truly heterogeneous (not all $\{m_p\}$ are equal), we have \begin{equation} \lim_{K\rightarrow \infty} \frac{L_{\rm c}(r)}{L_1(r)} = \lim_{K\rightarrow \infty} \frac{r}{r-1} \cdot \frac{L_{\rm u}(r)}{1-\frac{r}{K}} =\frac{1}{1-\beta} \Bigg(\frac{1}{ \sum_{p=1}^P \frac{\beta_p}{\beta} \frac{m_p}{m_p-1} }\Bigg) < \frac{1}{1-\beta} (1-\beta)= 1.
\end{equation} This means that for the large networks considered here, the communication load of the proposed heterogeneous scheme is {\it strictly less} than that of the homogeneous scheme. Hence, for some heterogeneous file and computation load assignments, the fundamental trade-off proposed in \cite{li2018fundamental} is ``breakable''. As we discussed before, in the extreme case where there exists a ``super node'' that can store all the files and compute all functions, the communication load is straightforwardly $0$. However, for given heterogeneous storage capacities and computation loads, it is non-trivial to design an achievable CDC scheme whose performance is superior to that of homogeneous CDC under the same total storage and computation load constraints. For the hypercuboid design, the required number of files is $N=X=\prod_{p=1}^{P}m_p^{r_p}$ and the required number of reduce functions is $Q~=~ Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1}$, where $Y$ is the LCM of $\{m_1-1,\ldots ,m_P-1 \}$. Unlike the homogeneous network case, due to the lack of CDC designs for general heterogeneous networks, we cannot compare the proposed scheme to other schemes. Nevertheless, we believe that these numbers can serve as a benchmark for future research on this topic. \section{Conclusions and Future Directions} \label{sec: Conclusion} In this work, we introduced a novel hypercuboid combinatorial approach to design CDC schemes for both homogeneous and heterogeneous distributed computing networks. This new design achieves a significant reduction in the number of required files and functions compared to the state-of-the-art scheme in \cite{li2018fundamental}. Moreover, the proposed schemes maintain a multiplicative computation-communication trade-off and are proven to be asymptotically optimal.
{Most importantly, we provided an explicit and systematic heterogeneous CDC design with optimality guarantees under certain network parameters.} Surprisingly, {we found} that the optimal trade-off derived in \cite{li2018fundamental} no longer applies when functions are heterogeneously assigned and {as a result}, the communication load of a heterogeneous network can be less than that of an equivalent homogeneous CDC network. For the future research direction, first, it will be interesting to design other achievable schemes with heterogeneous function assignments and a more general communication load bound given a set of storage capacity requirements of computing nodes. Second, it is challenging but important to characterize the information theoretic converse given the storage capacity and the computation load constraints of each node without fixing the file and output function assignments. \appendices \section{Proof of Theorems \ref{theorem: s1_hom} and \ref{theorem: coded}} \label{sec: codedHetPrfs1} \begin{comment} For any $n \in [X]$, and for all $z \in \mathcal{T}_n$ where $z \in \mathcal{K}_p$, we find \begin{align} \big| & \mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}} \big| = \left | \mathcal{W}_z \right | \times \left | \left \{ w_j: w_j \notin \mathcal{M}_z, w_j \in \bigcap\limits_{k \in \mathcal{T}_n\setminus z} \mathcal{M}_k , \right \} \right| \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 \left | \left \{ \mathcal{T}_{n'} : \{\mathcal{T}_n\setminus z \} \subset \mathcal{T}_{n'}, z \notin \mathcal{T}_{n'}, n' \in [X] \right \} \right | \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 \left | \left \{ \mathcal{T}_{n'} : \{\mathcal{T}_n\setminus z \} \cup k = \mathcal{T}_{n'}, k \in \mathcal{K}_p\setminus z, \right \} \right | \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 \left | \left \{ k : k \in \mathcal{K}_p\setminus z, \right \} \right | \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 (|\mathcal{K}_p|-1) = \frac{\eta_2Y }{m_p - 1} \cdot 
\eta_1 (m_p - 1)= \eta_1 \eta_2 Y. \end{align} \end{comment} For any $\alpha \in [X]$, and $z \in \mathcal{T}_\alpha$, where $z \in \mathcal{K}_h \subseteq \mathcal{C}_p$, it follows from Eq. (\ref{eq:define_L_z_alpha}), (\ref{eq:def_L_shuffle_1}), and Remark 3 in Section \ref{sec: genscheme_s1} that \begin{equation} \big| \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}} \big| = \left | \mathcal{W}_z \right | \cdot \eta_1 \big| \mathcal L_{z,\alpha} \big|=\left | \mathcal{W}_z \right | \cdot \eta_1 (|\mathcal{K}_p|-1) = \frac{\eta_2Y }{m_p - 1} \cdot \eta_1 (m_p - 1)= \eta_1 \eta_2 Y . \end{equation} We consider $X$ node groups of $r$ nodes each, where, for each group, every node of that group transmits a coded message of size $\big| \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}} \big| / (r-1)$; therefore, the communication load is \begin{align} L_{\rm c} ( m_1 , \ldots & , m_P, r_1 , \ldots , r_P) = \frac{1}{QN}\cdot X \cdot r \cdot \frac{\big| \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}} \big|}{r-1} \\ &= \frac{1}{\left(\eta_2 Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1}\right)\eta_1 X}\cdot X \cdot r \cdot \frac{\eta_1 \eta_2 Y}{r-1} = \frac{1}{r-1}\cdot\frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}. \end{align} For the special homogeneous case, where $P=1$ and $\mathcal{C}_1$ is the set of all nodes, we find $r=r_1$, $m_1 = \frac{K}{r}$ and \begin{align} L_{\rm c} &= \frac{1}{r-1}\cdot\frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}} =\frac{1}{r-1}\cdot\frac{r}{\left(\frac{K}{\frac{K}{r}-1}\right)} =\frac{1}{r-1}\cdot\frac{K-r}{K}. \end{align} This completes the proof of Theorems \ref{theorem: s1_hom} and \ref{theorem: coded}. \section{Proof of Theorem \ref{theorem: uncoded}} \label{sec: uncodedHetPrfs1} For all $p \in [P]$, the number of files a node $k~\in~\mathcal{K}_j~\subseteq~\mathcal{C}_p$ has local access to is \begin{align} |\mathcal{M}_k| &= \eta_1 \prod\limits_{i\in[r]\setminus j}|\mathcal{K}_i| = \frac{\eta_1 X}{|\mathcal{K}_j|} = \frac{N}{m_p}.
\end{align} We count the number of IVs that are requested by any node and normalize by $QN$: \begin{align} L_u & (m_1 , \ldots , m_P, r_1 , \ldots , r_P) = \frac{1}{QN}\sum_{k\in[K]} | \left \{ v_{i,j} : i \in \mathcal{W}_k , w_j \notin \mathcal{M}_k \right \} | \\ &= \frac{1}{QN}\sum_{k\in[K]} \left | \mathcal{W}_k \right|\times \left( N- \left | \mathcal{M}_k \right| \right) = \frac{1}{QN}\sum_{p\in[P]} \sum_{k\in\mathcal{C}_p} \left | \mathcal{W}_k \right|\times \left( N- \left | \mathcal{M}_k \right| \right) \\ &= \frac{1}{QN}\sum_{p\in[P]} \sum_{k\in\mathcal{C}_p} \frac{\eta_2Y }{m_p - 1} \cdot \left( N- \frac{N} {m_p} \right) = \frac{1}{Q}\sum_{p\in[P]} r_p m_p \frac{\eta_2Y }{m_p - 1}\left( \frac{m_p-1}{m_p} \right) \\ &= \frac{\eta_2 Y \sum_{p\in[P]} r_p }{\eta_2 Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1} } = \frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}} \end{align} where $|\mathcal{C}_p| = r_p m_p$ for all $p \in [P]$. This completes the proof of Theorem \ref{theorem: uncoded}. \section{Correctness of Heterogeneous CDC Scheme} \label{sec_APP:correctness} This proof consists of four parts: 1) nodes only compute IVs from locally available files, 2) nodes only transmit locally computed IVs, 3) nodes can decode transmissions with requested IVs, and 4) after the Map and Shuffle phases, nodes have all necessary IVs to compute their reduce functions. For 1), any node $k\in [K]$ computes intermediate values of the set \begin{equation} \label{eq: pf4} \{ v_{i,j} : i \in [Q] , w_j \in \mathcal{M}_k \}. \end{equation} In all cases $w_j \in \mathcal{M}_k$ for any $v_{i,j}$ computed by node $k$; therefore, nodes only compute IVs from locally available files. \begin{comment} We demonstrate 2) and 3) simultaneously by observing a node group, $\mathcal{T}_n$ for any $n \in [X]$, in the Shuffle phase. A node $k \in \mathcal{T}_n$ has computed the intermediate values of $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ for all $z\in \mathcal{T}_n\setminus k$.
This is true because \begin{equation} \bigcap \limits_{a\in\mathcal{T}_n\setminus z}\mathcal{M}_a \subseteq \mathcal{M}_k \end{equation} where $k \in \mathcal{T}_n \setminus z$ and, therefore, \begin{multline}\label{eq: pf3} \mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}=\left\{ v_{i,j}: i\in \mathcal{W}_z,w_j \notin \mathcal{M}_z , \text{ }w_j\in \bigcap \limits_{a\in\mathcal{T}_n\setminus z}\mathcal{M}_a \right\} \subseteq \{ v_{i,j} : i \in [Q] , w_j \in \mathcal{M}_k \}. \end{multline} The right-hand side of (\ref{eq: pf3}) matches that of (\ref{eq: pf4}) and defines a set of intermediate values that node $k$ computes in the Map phase. Since $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ is computed by node $k$ for all $z\in \mathcal{T}_n\setminus k$, it is clear that node $k$ can transmit coded messages consisting of subsets of $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ for all $z\in \mathcal{T}_n\setminus k$ and this satisfies 2). \end{comment} Next, we prove 2) and 3) simultaneously. Consider any node group $\mathcal{T}_\alpha$ and any node $k \in \mathcal{T}_\alpha$. We need to confirm that node $k$ has access to the multicast messages defined in Eq. (\ref{eq:def_L_shuffle_1}) and (\ref{eq: 1_trans_eq1}). This is true because, as discussed above Eq. (\ref{eq:def_L_shuffle_1}), all nodes in $\mathcal{T}_\alpha \setminus z$, including node $k$, have access to the file set $\mathcal B_\ell$ where $\{\mathcal{T}_\alpha \setminus z\} \subset \mathcal{T}_\ell$. To see 3), when a node $z_0 \in \mathcal T_\alpha$ receives a multicast message from another node $k \in \mathcal T_\alpha$ that takes the form of Eq. (\ref{eq:def_L_shuffle_1}), only one term, $\mathcal{V}_{\mathcal{T}_\alpha\setminus z_0}^{\{z_0\},k}$, is its desired message. The other terms are of the form $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},k}$, intended for node $z$, where $z\in \mathcal T_\alpha$ and $z\ne z_0,k$.
Since node $z_0 \in \mathcal T_\alpha \setminus z$, it has access to $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},k}$, and thus can decode its desired message correctly. \begin{comment} In Appendix \ref{sec: codedHetPrfs1} we show that $\left |\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}\right | = \eta_1 \eta_2 Y$ for any $z\in [K]$ and $n$ such that $z \in \mathcal{T}_n$. Furthermore, node $k$ receives transmissions with requested intermediate values from all nodes $z\in \mathcal{T}_n\setminus k$ where each transmission consists of $r-1$ coded intermediate value sets which are the same size. Only one of these sets is a subset of $\mathcal{V}_{\mathcal{T}_n \setminus k}^{\{ k\}}$ and the rest are subsets of $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ for some $z\in \mathcal{T}_n\setminus k$. Therefore, node $k$ can recover its requested intermediate value set since the other intermediate value sets are locally computed and this satisfies 3). \end{comment} To prove 4), we need to show that for a given $z\in \mathcal K_h$, if some file $w_j \notin \mathcal M_z$, then node $z$ must be able to recover its desired IVs $\{v_{i,j}: i\in \mathcal W_z\}$ from multicast messages of the form Eq. (\ref{eq:def_L_shuffle_1}) and (\ref{eq: 1_trans_eq1}). To see this, assume that $w_j \in \mathcal B_{\ell_0}$. Consider node group $\mathcal T_{\ell_0}$. Since $w_j \in \mathcal B_{\ell_0}$ and $w_j \notin \mathcal M_z$, we must have $z \notin \mathcal T_{\ell_0}$. In other words, $\mathcal T_{\ell_0,h} \ne z$. Now, consider another node group $\mathcal T_\alpha$ with $z \in \mathcal T_\alpha$ such that $\mathcal T_\alpha$ and $\mathcal T_{\ell_0}$ differ only in the $h$-th element: $\mathcal T_{\alpha,h}=z$ and $\mathcal T_{\alpha,i}=\mathcal T_{\ell_0,i}$ for any $i \ne h$. As defined in Eq.
(\ref{eq:def_L_shuffle_1}), since $\ell_0 \in \mathcal L_{z,\alpha}$ and $w_j \in \mathcal B_{\ell_0}$, node $z$ will be able to receive its desired IVs $\{v_{i,j}: i\in \mathcal W_z\}$ from the multicast group messages from node group $\mathcal T_\alpha$ according to Eq. (\ref{eq:def_L_shuffle_1}) and (\ref{eq: 1_trans_eq1}). \begin{comment} To demonstrate 4), we start by recognizing that any node $z$ receives all intermediate values included in \begin{align}\label{eq: pf0} \bigcup\limits_{z \in \mathcal{T}_n}\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}} =& \bigcup\limits_{z \in \mathcal{T}_n}\Bigg\{ v_{i,j}: i\in \mathcal{W}_z,w_j \notin \mathcal{M}_z , w_j\in \bigcap \limits_{k\in\mathcal{T}_n\setminus z}\mathcal{M}_k \Bigg\} \nonumber\\ =&\Bigg\{ v_{i,j}: i\in \mathcal{W}_z,w_j \notin \mathcal{M}_z , w_j\in \bigcup\limits_{z \in \mathcal{T}_n} \Bigg\{\bigcap \limits_{k\in\mathcal{T}_n\setminus z}\mathcal{M}_k \Bigg\} \Bigg\} \end{align} where the third line holds because there is only one condition that is dependent on $n$. Moreover, this set includes intermediate values such that node $z$ needs it to compute one of its Reduce functions, node $z$ cannot compute it itself and it is computed from a file in the set \begin{equation}\ \label{eq: pf1} \bigcup\limits_{z \in \mathcal{T}_n}\Bigg\{\bigcap \limits_{k\in\mathcal{T}_n\setminus z}\mathcal{M}_k \Bigg\}. \end{equation} If a set of nodes have a set of files in common then clearly the nodes are a subset of some set, $\mathcal{T}_\ell$, and, therefore, the set of files in (\ref{eq: pf1}) can be defined as the following \begin{equation} \label{eq: pf2} \left \{ \mathcal{B}_\ell : \left \{ \mathcal{T}_n \setminus z \right \} \subset \mathcal{T}_\ell , z \in \mathcal{T}_n \right \}. \end{equation} Furthermore, for any $\mathcal{T}_\ell$ there exists a $\mathcal{T}_n$ such that $\left\{ \mathcal{T}_n \setminus z \right \} \subset \mathcal{T}_\ell $ and $z \in \mathcal{T}_n$.
We can see this is true because given a $\mathcal{T}_\ell$ we can set $\mathcal{T}_n = \left\{ \mathcal{T}_\ell \setminus \sigma \right \} \cup z$ where $\{ \sigma , z\} \in \mathcal{K}_p$. As a result, the set of files in (\ref{eq: pf1}) and (\ref{eq: pf2}) include every file. Therefore, the set defined in (\ref{eq: pf0}), which includes all intermediate values node $z$ receives in the Shuffle phase, is equivalent to \begin{equation} \{ v_{i,j} : i\in \mathcal{W}_z, w_j \notin \mathcal{M}_z\}. \end{equation} As any node $z$ computes intermediate values, $v_{i,j}$, such that $i \in \mathcal{W}_z$ and $w_j \in \mathcal{M}_z$ in the Map phase, it is clear that after the Map and Shuffle phases that node $z$ will have access to the intermediate values necessary to compute its Reduce functions. \end{comment} \section{Proof of Theorem \ref{theorem: bound}} \label{appendix: bound} In this proof, we use the following notation: $\mathcal{K}$ is the set of all nodes, $X_{\mathcal{K}}$ represents the collection of all transmissions by all nodes in $\mathcal{K}$, $\mathcal{W}_{\mathcal{S}}$ is the set of functions assigned to at least one node of $\mathcal{S}$, $\mathcal{M}_{\mathcal{S}}$ is the set of files locally available to at least one node in $\mathcal{S}$, and $V_{\mathcal{W}_{\mathcal{S}_1},\mathcal{M}_{\mathcal{S}_2}}$ is the set of IVs needed to compute the functions of $\mathcal{W}_{\mathcal{S}_1}$ and computed from the files of $\mathcal{M}_{\mathcal{S}_2}$. Finally, we define the following \begin{equation} Y_\mathcal{S} \triangleq \left(V_{\mathcal{W}_\mathcal{S},:},V_{:,\mathcal{M}_\mathcal{S}}\right) \end{equation} where ``$:$'' is used to denote all possible indices.
Given all the transmissions from all nodes, $X_\mathcal{K}$, and IVs which can be locally computed by a node $k$, $V_{:,\mathcal{M}_k}$, node $k$ needs to have access to all IVs necessary for its assigned functions, $V_{\mathcal{W}_k,:}$, therefore \begin{equation} H(V_{\mathcal{W}_k,:} | X_\mathcal{K}, V_{:,\mathcal{M}_k}) = 0. \end{equation} Given this assumption, we find \begin{align} H(X_\mathcal{K}) &\geq H(X_\mathcal{K}|V_{:,M_{k_1}}) = H(X_\mathcal{K},V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) - H(V_{\mathcal{W}_{k_1},:} | X_\mathcal{K}, V_{:,\mathcal{M}_{k_1}}) \nonumber\\ &= H(X_\mathcal{K},V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) \nonumber\\ &= H(V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) + H(X_\mathcal{K}|V_{\mathcal{W}_{k_1},:},V_{:,M_{k_1}})\nonumber\\ &=H(V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) + H(X_\mathcal{K}|Y_{k_1}). \label{eq: claim 1 2} \end{align} Similarly, \begin{align} H&(X_\mathcal{K}|Y_{\{k_1,\ldots k_{i-1}\}}) \geq H(X_\mathcal{K}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) \nonumber\\ &= H(X_\mathcal{K},V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) - H(V_{\mathcal{W}_{k_i},:} | X_\mathcal{K}, V_{:,\mathcal{M}_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) \nonumber \end{align} \begin{align} &= H(X_\mathcal{K},V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) \nonumber\\ &= H(V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) + H(X_\mathcal{K}|V_{\mathcal{W}_{k_i},:},V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}})\nonumber\\ &= H(V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{k_1,\ldots k_{i-1}}) + H(X_\mathcal{K}|Y_{\{k_1,\ldots k_i\}}). \label{eq: claim 1 3} \end{align} Also, since nodes can only transmit IVs from locally available files, we see that $H(X_\mathcal{K}|Y_{\{k_1,\ldots k_K\}}) = 0$. 
By starting with (\ref{eq: claim 1 2}) and iteratively using the relationship of (\ref{eq: claim 1 3}) to account for all $k_i \in \mathcal{K}$, we obtain \begin{equation} \label{eq: th 4 ent sum} H(X_\mathcal{K}) \geq \sum_{i=1}^{K} H\left(V_{\mathcal{W}_{k_i},:}|V_{:,\mathcal{M}_{k_i}},Y_{\{k_1,\ldots, k_{i-1} \}}\right). \end{equation} Moreover, since $H(X_\mathcal{K}) = LTQN$, from (\ref{eq: th 4 ent sum}) we obtain the lower bound on the optimal communication load, $L^*$, of (\ref{eq: bound_eq1}), which proves Theorem \ref{theorem: bound}. \section{Proof of Theorem \ref{thm: optimality}} \label{sec: opt_pf} We define a permutation of the $K$ nodes, $(k_1, \ldots , k_K)$, such that $\{ k_1 , \ldots , k_{m_p} \} = \mathcal{K}_i \subseteq \mathcal{C}_p$ for some $i\in[r]$ and $p\in[P]$ as defined in Section~\ref{sec: gen_het s1}. For $1 \leq j \leq m_p$, given all IVs collectively computed by nodes $k_1 , \ldots , k_j$ and all IVs needed by nodes $k_1 , \ldots , k_{j-1}$ to compute their respective reduce functions, the entropy of the requested IVs of node $k_j$ is \begin{align} H &\left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{k_j} } , Y_{ \{ k_1,\ldots k_{j-1} \} }\right) = H \left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{\{ k_1 , \ldots , k_{j} \}} } \right) = T |\mathcal{W}_{k_j}|\left( N - \bigg| \bigcup\limits_{j' \in [j]} \mathcal{M}_{k_{j'}} \bigg| \right) \nonumber \\ &= T\cdot\frac{\eta_2Y}{m_p-1}\left( N - \sum_{j' \in [j]}|\mathcal{M}_{k_{j'}}| \right) =\frac{T\eta_2Y}{m_p-1}\left( N - \frac{jN}{m_p} \right) = \frac{T\eta_2YN}{(m_p-1)m_p}\left( m_p - j \right). \end{align} Furthermore, since the nodes $k_1 , \ldots , k_{m_p}$ collectively have access to all the $N$ files and compute all $QN$ intermediate values, we see that for $m_p \leq j \leq K$ \begin{equation} H \left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{k_j} } , Y_{ \{ k_1,\ldots k_{j-1} \} }\right) = 0.
\end{equation} By using the bound of Theorem \ref{theorem: bound}, \begin{align} L^* &\geq \frac{1}{QNT}\sum_{j=1}^{m_p-1} H \left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{k_j} } , Y_{ \{ k_1,\ldots k_{j-1} \} }\right) =\frac{1}{Q}\sum_{j=1}^{m_p-1}\frac{\eta_2Y}{(m_p-1)m_p}\left( m_p - j \right) \nonumber\\ &= \frac{\eta_2Y}{Q(m_p-1)m_p}\sum_{j=1}^{m_p-1} j = \frac{\eta_2Y}{Q(m_p-1)m_p}\cdot \frac{m_p(m_p-1)}{2} = \frac{\eta_2Y}{2Q}= \frac{1}{2\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}. \end{align} Finally, we see that \begin{equation} \frac{L_{\rm c}}{L^*} \le \frac{2r}{r-1} \leq 4 \end{equation} for $r\geq 2$. This completes the proof of Theorem \ref{thm: optimality}. \bibliographystyle{IEEEbib} \section{Introduction} \label{section: intro} In recent years, coding has been reinvented for solving problems in distributed computing systems from different perspectives such as straggler mitigation \cite{lee2017speeding,dutta2016short,ferdinand2016anytime,yu2018straggler,bitar2017minimizing,karakus2017straggler,halbawi2017improving,suh2017matrix,mallick2018rateless,maity2018robust,Aktas2018straggler,wang2018fundamental,ye2018communication,wan2020distributed}, data shuffling \cite{lee2017speeding,attia2019shuffling,Elmahdy2020shuffling,wan2020shuffling}, and robustness \cite{chen2018draco}. In particular, Coded Distributed Computing (CDC), introduced in \cite{li2018fundamental}, offers an efficient approach to reduce the communication load in distributed computing networks such as MapReduce \cite{dean2008mapreduce}. In this type of distributed computing network, in order to compute the output functions, the computation is decomposed into ``Map'' and ``Reduce'' phases. First, each computing node computes intermediate values (IVs) using local input data files according to the designed Map functions. Then, computed IVs are exchanged among computing nodes and nodes use these IVs as input to the designed Reduce functions to compute output functions.
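As a toy illustration of this Map/Reduce decomposition (our own example; the files, words, and function names are hypothetical), each Map function $g_{i,j}$ can emit the count of word $i$ in file $w_j$ as the IV $v_{i,j}$, and the Reduce function $h_i$ sums these IVs:

```python
# Files w_1..w_3 as word lists; output function phi_i counts word i overall.
files = {1: ["a", "b", "a"], 2: ["b", "c"], 3: ["a", "c", "c"]}
words = {1: "a", 2: "b", 3: "c"}            # Q = 3 output functions

def g(i, w_j):                              # Map: IV v_{i,j} = g_{i,j}(w_j)
    return w_j.count(words[i])

def h(i, ivs):                              # Reduce: u_i = h_i(v_{i,1},...,v_{i,N})
    return sum(ivs)

u = {i: h(i, [g(i, files[j]) for j in files]) for i in words}
assert u == {1: 3, 2: 2, 3: 3}              # matches direct evaluation of phi_i
```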
The operation of exchanging IVs is called ``data shuffling'' and occurs during the ``Shuffle'' phase. This data exchange severely limits the performance of distributed computing applications due to the very high transmitted traffic load \cite{li2018fundamental}. In \cite{li2018fundamental}, by formulating and characterizing a fundamental tradeoff between ``computation load'' in the Map phase and ``communication load'' in the Shuffle phase, Li {\em et al.} demonstrated that these two quantities are inversely proportional to each other. This means that if each intermediate value is computed at $r$ carefully chosen nodes, then the communication load in the Shuffle phase can be reduced by a factor of $r$. CDC achieves this multiplicative gain in the Shuffle phase by leveraging coding opportunities created in the Map phase by strategically placing the input files among the computing nodes. However, there are a few limitations of the CDC scheme in \cite{li2018fundamental}. First, it requires an exponentially large number of input files and reduce functions relative to the number of computing nodes. In some cases, the number of files and functions becomes unrealistic and the promised gain cannot be achieved in practice. Second, there is an exponential number of multicasting groups compared to the number of nodes and the computation load. When implementing CDC in \cite{li2018fundamental}, the execution time of the code generation step is proportional to the number of multicasting groups. This counteracts the benefits of CDC in reducing overall execution time. Third, the CDC scheme assumes the computing network is homogeneous in that each computing node has the same computation and storage resources, which limits its effectiveness on heterogeneous computing networks. Some other aspects of CDC have been investigated in the literature. In \cite{ezzeldin2017communication}, Ezzeldin {\em et al.} revisited the computation-communication tradeoff by computing only necessary IVs in each node.
The authors proposed a lower bound on the corresponding computation load via a heuristic scheme, which achieves the lower bound under certain parameter regimes. In \cite{song2017benefit}, Song {\em et al.} considered the case where each computing node has access to a random subset of input files and the system is asymmetric. This means that not all output functions depend on the entire data set and we can decide which node computes which functions. The corresponding communication load was characterized. Later, in \cite{prakash2018coded}, Prakash {\em et al.} extended CDC to graph analytics of Erd\"os-R\'enyi graphs, where the computation at each vertex uses data only from the adjacent vertices. In \cite{Srinivasavaradhan2018distributed}, Srinivasavaradhan {\em et al.} considered the CDC design under a random network topology following an Erd\"os-R\'enyi random graph model. In \cite{konstantinidis2018leveraging}, Konstantinidis {\em et al.} used resolvable designs to reduce the necessary number of files, functions, and multicasting groups. Furthermore, they implemented new designs to demonstrate an overall reduction in execution time compared to implementations of \cite{li2018fundamental} for some cases. Thus far, all aforementioned prior works have assumed the CDC network to be homogeneous, that is, the computing nodes of the network have the same amount of storage, computation, and communication resources. Understanding the performance potential and finding achievable designs for heterogeneous networks remains an open problem. The authors in \cite{kiamari2017Globecom} derived a lower bound on the communication load for a CDC network where nodes have varying storage or computing capabilities. The proposed achievable scheme attains the information-theoretic optimal minimum communication load for a system of $3$ nodes.
The authors also demonstrated that the parameters of a heterogeneous CDC network can be translated into an optimization problem to find an efficient Map and Shuffle phase design. In \cite{shakya2018distributed}, the authors studied CDC networks with $2$ and $3$ computing nodes where nodes have varying communication load constraints to find a lower bound on the minimum computation load. These works mainly focus on the heterogeneous placement of the files in the Map phase; however, nodes are assumed to have a homogeneous reduce function assignment. The authors of \cite{xu2019heterogeneous} explore the concept of semi-random file placement and function assignment and develop a heterogeneous computing scheme which can operate on a computing network with arbitrary heterogeneous storage and computation requirements. However, the number of necessary files and functions of this scheme is unclear as files and functions are assigned as fractions of the entire file library and function set, respectively. Our contributions in this paper are as follows. \begin{itemize} \begin{comment} \item First, we propose a novel CDC approach based on a combinatorial design, called {\em hypercube}, for the homogeneous networks, which requires an exponentially less number of input files and multicasting groups as compared to that in \cite{li2018fundamental}. Meanwhile, the resulting computation-communication trade-off maintains the multiplicative gain compared to conventional uncoded MapReduce and achieves the optimal trade-off proposed in \cite{li2018fundamental} asymptotically. \end{comment} \item First, we establish a novel combinatorial framework for CDC that exploits elegant geometric structures, namely the {\em hypercube} for homogeneous networks and the {\em hypercuboid} for heterogeneous networks, to optimize the tradeoff of communication and computing for such networks.
The proposed designs require exponentially fewer input files and multicasting groups than the scheme in \cite{li2018fundamental}. Meanwhile, the resulting computation-communication trade-off maintains the multiplicative gain compared to conventional uncoded MapReduce and achieves the optimal trade-off proposed in \cite{li2018fundamental} asymptotically. \item Second, the proposed hypercuboid design can accommodate large heterogeneous CDC networks where nodes have varying storage and computing capabilities. This is achieved by the combinatorial design of a heterogeneous network (hypercuboid) consisting of multiple interleaved homogeneous networks (hypercubes) with varying dimensions and the design of efficient file mapping and data shuffle schemes across them. Another novelty of the proposed design is to assign more output functions to nodes with more storage space and computing resources. This is in contrast to previous work where each node is assigned the same number of output functions \cite{kiamari2017Globecom}. Based on the proposed file and function assignments, we characterize an information theoretic converse bound, which is tight within a constant factor. To the best of our knowledge, this is the first work that develops an explicit and systematic heterogeneous CDC design with optimality guarantees under certain network parameters. \item Third, this work shows that network heterogeneity can actually reduce the communication load and thus, the fundamental tradeoff of \cite{li2018fundamental} no longer applies in this setting.\footnote{A similar phenomenon was also observed in \cite{xu2019heterogeneous}.} For large heterogeneous networks, we show that the proposed heterogeneous design can achieve a communication load that is strictly less than that of an equivalent homogeneous network based on the design of \cite{li2018fundamental}. \end{itemize} The remainder of this paper is outlined as follows.
In Section \ref{sec: Network Model and Problem Formulation}, we present the network model and problem formulation. Then, we present the proposed combinatorial CDC design and discuss its performance in Section \ref{sec: Hypercube Caching Network Approach} for the homogeneous case and in Section \ref{sec: hets1} for the more general heterogeneous case. In Section \ref{sec: Discussion}, we compare our design to the state-of-the-art design of \cite{li2018fundamental}. Concluding remarks are provided in Section \ref{sec: Conclusion}. \paragraph*{Notation Convention} We use $|\cdot|$ to represent the cardinality of a set or the length of a vector. Also $[n] := \{1,2,\ldots,n\}$ for some $n\in\mathbb{Z}^+$, where $\mathbb{Z}^+$ is the set of all positive integers, and $\oplus$ represents bit-wise XOR. \section{Network Model and Problem Formulation} \label{sec: Network Model and Problem Formulation} The network model is adopted from \cite{li2018fundamental}. We consider a distributed computing network where a set of $K$ nodes, labeled as $[K]=\{1, \ldots , K \}$, has the goal of computing $Q$ output functions, each of which requires access to all $N$ input files. The input files, $\{w_1 , \ldots , w_N \}$, are assumed to be of equal size, $B$ bits each. The set of $Q$ output functions is denoted by $\{ \phi_1 , \ldots , \phi_Q\}$. Each node $k\in [K] $ is assigned to compute a subset of output functions, denoted by $\mathcal{W}_k \subseteq [Q] $ ({\em function assignment}). The result of output function $i \in [Q]$ is $u_i = \phi_i \left( w_1, \ldots , w_N \right)$. Alternatively, an output value of the targeted function $i$ can be computed using the composition of ``Map'' and ``Reduce'' functions as follows.
\begin{equation} u_i = h_i \left( g_{i,1}\left( w_1 \right), \ldots , g_{i,N}\left( w_N \right) \right), \end{equation} where for every output function $i$ there exists a set of $N$ Map functions $g_{i,j}(\cdot), i \in [Q], j \in [N]$ and one Reduce function $h_i(\cdot), i \in [Q]$. Furthermore, we define the output of the Map function, $v_{i,j}=g_{i,j}\left( w_j \right)$, as the intermediate value (IV) resulting from performing the Map function for output function $i$ on file $w_j$. There are $QN$ intermediate values in total and each is assumed to be of size $T$ bits. The MapReduce distributed computing framework allows nodes to compute output functions without having access to all $N$ files. Instead, each node $k$ has access to $M_k$ of the $N$ files and we define the set of files available to node $k$ as $\mathcal{M}_k \subseteq \{ w_1, \ldots , w_N\}$ ({\em file mapping}). Collectively, the nodes use the Map functions to compute every IV at least once in the {\em Map} phase. Then, in the {\em Shuffle} phase, nodes multicast the computed IVs among one another via a shared link ({\em shuffle method}). The Shuffle phase is necessary so that each node can receive the necessary IVs that it could not compute itself. Finally, in the {\em Reduce} phase, nodes use the Reduce functions with the appropriate IVs as inputs to compute the assigned output functions. Throughout this paper, we consider the following design options. First, we assume each computing node computes all possible IVs from locally available files. This means that $|{\cal M}_k|$ represents both the storage space and the computation load of each node.
Second, we consider the design scenario such that each of the $Q$ Reduce functions is computed exactly once ($s=1$) at one node and $|\mathcal{W}_i \cap \mathcal{W}_j| = 0$ for $i\neq j$, where $s$ is defined as the number of nodes which calculate each Reduce function.\footnote{The scenario of $s>1$, meaning that each of the $Q$ Reduce functions is computed at multiple nodes, is called {\it cascaded} distributed computing, introduced in \cite{li2018fundamental}. In this paper, we do not consider this case.} Third, we consider the general scenario where each computing node can have heterogeneous storage space and computing resources, or heterogeneous sizes of ${\cal M}_k$ and $\mathcal{W}_k, \; \forall k \in [K]$. The proposed schemes accommodate heterogeneous networks in that nodes can be assigned a varying number of files and functions. This distributed computing network design yields two important performance parameters: the computation load, $r$, and the communication load, $L$. The computation load is defined as the number of times each IV is computed among all computing nodes, or $r = \frac{1}{N}\sum_{k=1}^K|{\cal M}_k|$. In other words, the computation load is the number of IVs computed in the Map phase normalized by the total number of unique IVs, $QN$. The communication load is defined as the amount of traffic load (in bits) among all the nodes in the Shuffle phase normalized by $QNT$. \begin{defn} The optimal communication load is defined as \begin{equation} L^*(r) \stackrel{\Delta}{=} \inf\{L: (r, L) \text{ is feasible}\}. \end{equation} \end{defn} \section{Homogeneous Hypercube Computing Approach} \label{sec: Hypercube Caching Network Approach} In this section, we describe the proposed homogeneous CDC design based on the {\em hypercube} combinatorial structure. Our schemes are defined by {\it node grouping}, {\it file mapping}, {\it function assignment} and {\it shuffle method}.
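To make the two performance parameters above concrete, the following sketch (a hypothetical toy file mapping, not the proposed hypercube design) computes $r$ and the uncoded communication load for $K=4$ nodes that each store half of $N=4$ files and each compute one of $Q=4$ functions:

```python
K, N = 4, 4                                  # nodes and files; Q = K functions
M = {k: {k, k % N + 1} for k in range(1, K + 1)}   # node k stores 2 of 4 files

r = sum(len(M[k]) for k in M) / N            # computation load: 8/4 = 2.0
# Uncoded shuffle: node k (computing function k) must receive v_{k,j}
# for every file j it does not store.
needed = sum(N - len(M[k]) for k in M)       # 8 IVs cross the network
L_uncoded = needed / (K * N)                 # normalized by Q*N = 16 -> 0.5

assert r == 2.0 and L_uncoded == 1 - r / K
```

An uncoded shuffle thus moves a $1-r/K$ fraction of all IVs; the coded scheme of this paper reduces this by a further factor of $r-1$.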
Two detailed examples, one two-dimensional and one three-dimensional, are provided to illustrate the fundamental principles of the proposed design. These will be extended to the more general heterogeneous CDC scheme in Section \ref{sec: hets1}. In this section, we consider the scenario where the network is homogeneous. In other words, each node is assigned the same number of files and reduce functions. Also, every reduce function is computed exactly once at one node ($s=1$). Every node computes a set of $\eta_2$ distinct functions and $Q=\eta_2 K$ where $\eta_2 \in \mathbb{Z}^+$. The novel combinatorial hypercube design splits the nodes into $r$ disjoint sets each of size $\frac{K}{r}$ and batches of $\eta_1$ files are assigned to one node from each set.\footnote{This scheme can be classified as a resolvable design for CDC, which was introduced in \cite{konstantinidis2018leveraging}. In addition, it also falls into the general framework of the Placement Delivery Array (PDA) designed for Device-to-Device coded caching \cite{wang2017placement}. } This is analogous to constructing a hypercube lattice of dimension $r$ with sides of length $\frac{K}{r}$ to describe the file placement at the nodes. We use this hypercube approach to better illustrate the examples of our new combinatorial design. We show that the required number of files is $N = \eta_1 \left( \frac{K}{r} \right)^r$ where $\eta_1 \in \mathbb{Z}^+$ and the number of multicasting groups is $G=\left( \frac{K}{r} \right)^r$. We first present a two-dimensional (planar) example where $r=2$. \subsection{2-Dimension Example} \label{sec: 2D example} In this example, we propose a distributed computing network based on an $r=2$ dimensional hypercube (planar) lattice where each side has length $\frac{K}{r}=3$. There are $K=6$ computing nodes, each of which has access to $\frac{1}{3}$ of the file library.
Each lattice point represents a file and each node has a set of files available to it represented by a line of lattice points as shown in Fig.~\ref{fig: 2d fig}(a). Specifically, there are two sets of nodes: $\mathcal K_1=\{1,2,3\} $ and $\mathcal K_2=\{4,5,6\} $. Each node in $\mathcal K_1$ (or $\mathcal K_2$) has access to three files, represented by three lattice points along a horizontal (or vertical) line. For instance, node 1 in $\mathcal K_1$ has access to three files $w_1$, $w_2$ and $w_3$ along the top horizontal line. Similarly, node 5 in $\mathcal K_2$ has access to three files $w_2$, $w_5$ and $w_8$, along the middle vertical line. Each node is responsible for computing one out of the $Q=6$ reduce functions in the Reduce phase. More specifically, node $i$ computes reduce function $i$. \begin{figure} \centering \includegraphics[width=9cm, height=6.3cm]{2d_fig_3by3_v3} \vspace{-0.7cm} \caption{~\small (a) Lattice plane that defines file availability amongst the $K=6$ computing nodes. Each lattice point represents a file and each node has a set of files available to it represented by a horizontal or vertical line of lattice points. (b) The IVs used locally and transmitted by each node.} \label{fig: 2d fig} \vspace{-0.4cm} \end{figure} \if In the Map phase, nodes compute intermediate values which are necessary to compute the reduce function they are responsible for. These intermediate values do not have to be transmitted and do not contribute to the communication load. For example, as shown in Fig.~\ref{fig: 2d fig}(b), node $1$ computes $v_{1,1}$, $v_{1,2}$ and $v_{1,3}$ and node 5 computes $v_{5,2}$, $v_{5,5}$ and $v_{5,8}$. Next, we consider all possible pairs of nodes, termed node groups, consisting of one node from $\mathcal K_1$ and one node from $\mathcal K_2$.
For instance, node $1$ can form node groups with any of the three nodes in $\mathcal K_2=\{4,5,6\}$ and node $5$ can form node groups with any of the three nodes in $\mathcal K_1=\{1,2,3\}$. Within each node group of two nodes, each node computes the intermediate values needed by the other node in the group. For the node group of $\{1,5\}$, node $1$ computes $v_{5,1}$ and $v_{5,3}$ and transmits these values to node $5$. Notice that, node $5$ is incapable of computing these intermediate values itself because it does not have access to files $w_1$ and $w_3$. Similarly, node $5$ computes $v_{1,5}$ and $v_{1,8}$ and transmits these intermediate values to node $1$ because node 1 does not have access to files $w_1$ and $w_8$. Fig.~\ref{fig: 2d fig}(b) shows the intermediate values computed at each node. For instance, node 1 computes $v_{1,1}$, $v_{1,2}$, $v_{1,3}$ because node 1 is assigned to compute reduce function $1$ and it has access to files $w_1, w_2, w_3$. Node 1 can form three node groups with node 4, 5 or 6 in $\mathcal K_2$. Thus, it will transmit intermediate values $v_{4,2}$ and $v_{4,3}$ to node 4, $v_{5,1}$ and $v_{5,3}$ to node 5, and $v_{6,1}$ and $v_{6,3}$ to node 6, respectively. On the other hand, node 1 will receive its requested intermediate values $v_{1,4}$ and $v_{1,7}$ from node 4, $v_{1,5}$ and $v_{1,8}$ from node 5, and $v_{1,6}$ and $v_{1,9}$ from node 6. This allows node 1 to obtain all the intermediate values necessary for computing reduction function $1$. In general, by considering all possible node groups, each node receives an intermediate value {\color{green}[] NW: Shall we use the acronym IV?]} for every file that it does not have. We can see this is true by recognizing that a node consecutively pairs with the three nodes in either $\mathcal K_1$ or $\mathcal K_2$, and the nodes in either $\mathcal K_1$ or $\mathcal K_2$ collectively has access to all the files. 
\fi In the Map phase, nodes compute all $Q=6$ IVs from each locally available file. Some IVs are necessary to compute the locally assigned reduce function. For example, as shown in Fig.~\ref{fig: 2d fig}(b), node $1$ computes $v_{1,1}$, $v_{1,2}$ and $v_{1,3}$ and node 5 computes $v_{5,2}$, $v_{5,5}$ and $v_{5,8}$. These IVs do not have to be transmitted and do not contribute to the communication load. However, other IVs are transmitted between nodes. We consider all possible pairs of nodes, termed node groups, consisting of one node from $\mathcal K_1$ and one node from $\mathcal K_2$. For instance, nodes $1$ and $5$ form node groups with each of the three nodes in $\mathcal K_2=\{4,5,6\}$ or $\mathcal K_1=\{1,2,3\}$, respectively. For the node group of $\{1,5\}$, node $1$ has computed $v_{5,1}$ and $v_{5,3}$ and transmits these IVs to node $5$. Notice that, node $5$ is incapable of computing these IVs itself because it does not have access to files $w_1$ and $w_3$. Similarly, node $5$ has computed $v_{1,5}$ and $v_{1,8}$ and transmits these IVs to node $1$ because node 1 does not have access to files $w_1$ and $w_8$. Fig.~\ref{fig: 2d fig}(b) also shows the IVs transmitted by each node. For example, node $1$ will transmit IVs $v_{4,2}$ and $v_{4,3}$ to node $4$, $v_{5,1}$ and $v_{5,3}$ to node $5$ and $v_{6,1}$ and $v_{6,3}$ to node $6$. On the other hand, node 1 will receive its requested IVs $v_{1,4}$ and $v_{1,7}$ from node $4$, $v_{1,5}$ and $v_{1,8}$ from node $5$, and $v_{1,6}$ and $v_{1,9}$ from node $6$. Therefore, node $1$ obtains all the IVs necessary for computing reduce function $1$. In general, by considering all possible node groups, each node receives an IV for every file that it does not have. We can see this is true by recognizing that a node consecutively pairs with the three nodes in either $\mathcal K_1$ or $\mathcal K_2$, and the nodes in either $\mathcal K_1$ or $\mathcal K_2$ collectively have access to all the files. 
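The unicast exchange just described can be verified programmatically. Below is a minimal Python sketch (node and file labels follow Fig.~\ref{fig: 2d fig}; all variable names are our own) that builds the $3\times 3$ placement, checks that every node receives an IV for each of its missing files, and counts the shuffled IVs:

```python
from itertools import product

# Minimal sketch of the 2-D example (K = 6, r = 2, eta_1 = eta_2 = 1).
# Files w_1..w_9 sit on a 3x3 lattice; nodes 1-3 own horizontal lines and
# nodes 4-6 own vertical lines, as in the lattice figure. Labels are ours.
K1, K2 = [1, 2, 3], [4, 5, 6]
M = {k: set() for k in K1 + K2}          # file availability per node
for row, col in product(range(3), range(3)):
    f = 3 * row + col + 1                # file index of w_f
    M[1 + row].add(f)                    # horizontal line
    M[4 + col].add(f)                    # vertical line

# Every node must receive an IV for each file it is missing, gathered from
# its three partners in the other node set.
for b in K1 + K2:
    partners = K2 if b in K1 else K1
    received = set().union(*(M[a] - M[b] for a in partners))
    assert received == set(range(1, 10)) - M[b]

# Uncoded shuffle: in each of the 9 node pairs, each node unicasts the IVs
# the partner cannot compute itself (2 IVs per direction).
sent = sum(len(M[a] - M[b]) + len(M[b] - M[a]) for a, b in product(K1, K2))
total_ivs = 6 * 9                        # Q * N
print(sent, total_ivs)                   # 36 of 54 IVs are shuffled -> L = 2/3
```

Running the sketch confirms that $36$ of the $54$ IVs are exchanged, i.e., a load of $\frac{2}{3}$.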
Throughout this paper, we mainly consider the case where each node computes all IVs from its available files, similar to the original CDC work \cite{li2018fundamental}. In this example, each IV is computed twice and $r=2$ since each file is assigned to $2$ nodes. In general, the computation load $r$ is equivalent to the dimension of the hypercube which defines the file placement. Note that nodes will compute some IVs that are never used for transmission, decoding, or computing a reduce function. Fig.~\ref{fig: 2d fig}(b) shows the IVs computed by each node that are utilized. Each node computes $3$ IVs which are necessary for its own reduce function. Also, each node participates in $3$ node pairs, for each of which it needs to compute $2$ IVs to transmit to the other node in the pair. In some applications, it may be possible for nodes to compute only a select set of IVs to reduce the computation load, as presented in \cite{ezzeldin2017communication,woolsey2018new}. In this toy example, we only consider unicasting; therefore, the communication load is equivalent to that of the uncoded scenario and $L=\frac{2}{3}$, the fraction of files not available at each node. This can be verified by recognizing that there are $9$ pairs of nodes and, for each pair, $2$ IVs are transmitted by each node. In total, $36$ of the $54$ IVs are transmitted and $L=\frac{2}{3}$. In later examples, we will show how this scheme can be expanded to utilize coded multicasting and outperform the uncoded CDC scheme. \begin{remark} Interestingly, although the general scheme derived from this example is equivalent to unicast in this case, we observe that multicasting opportunities do exist in this example. For instance, node $1$ could transmit $v_{4,2}\oplus v_{5,1}$ to nodes $4$ and $5$ (assuming that nodes $4$ and $5$ compute $v_{5,1}$ and $v_{4,2}$, respectively).
In fact, all IVs could be transmitted in coded pairs where a node along one dimension transmits to $2$ nodes aligned along the other dimension, which would reduce the communication load by half.\footnote{This is similar to the scheme outlined in \cite{wang2017placement,9133151} for the analogous coded caching problem. However, as we will see for other examples and as discussed in \cite{wang2017placement}, this scheme does not achieve a multiplicative gain for $r>2$.} \end{remark} In the following, we describe the general scheme for the proposed combinatorial design, which extends to the case $r>2$. \subsection{General Homogeneous Scheme} \label{sec: genscheme_s1} In this subsection, we introduce the general homogeneous scheme for $s=1$ step by step as follows. {\bf Node Grouping 1}: Let $\mathcal K=\{ 1, 2, \cdots, K \}$ denote the set of $K$ nodes. Assume that $\mathcal K$ is split into $r$ equal-sized disjoint sets $\mathcal{K}_1,\ldots ,\mathcal{K}_r$, each containing $\frac{K}{r}\in \mathbb{Z}^+$ nodes. We define $\mathcal{T}\subset \mathcal K$ as a node group of size $r$ if it contains exactly one node from each $\mathcal{K}_i$, i.e., $|\mathcal{T}\cap\mathcal{K}_i|=1,\text{ }\forall \text{ } i\in [r]$. There are a total of $X=\left( \frac{K}{r} \right)^r$ possible node groups, denoted by $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$. Furthermore, for each node group $\mathcal T_j$, we define its $i$-th component $\mathcal T_{j,i}=\mathcal T_j \cap \mathcal K_i$ as the node in $\mathcal T_j$ that is chosen from $\mathcal K_i$, where $ i \in [r]$. {\bf Node Group (NG) File Mapping}: Given node groups $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$, we split the $N$ files into $X$ disjoint sets labeled as $\mathcal{B}_{1},\ldots,\mathcal{B}_{X}$. These file sets are of size $\eta_1\in \mathbb{Z}^+$ and $N=\eta_1 X$. Each file set $\mathcal{B}_{i}$ is available only to the nodes in the node group $\mathcal{T}_i$.
It follows that if node $k \in [K]$ belongs to a node group $\mathcal T_i$, then the file set $\mathcal B_i$ is available to this node. Hence, by considering all possible node groups $\mathcal T_i$ that node $k$ belongs to, the set of its available files, denoted by $\mathcal{M}_k$, is expressed as \begin{equation} \mathcal{M}_k:=\bigcup\limits_{i : k\in \mathcal{T}_i}\mathcal{B}_i. \end{equation} {\bf Function Assignment 1}: The $Q$ reduce functions are split into $K$ equal-size, disjoint subsets labeled as $\mathcal{W}_1, \ldots , \mathcal{W}_K$. Each set contains $\eta_2\in \mathbb{Z}^+$ reduce functions where $Q = \eta_2 K$. For each $k \in [K]$, $\mathcal{W}_k$ is the set of reduce functions assigned to node $k$. \begin{remark} By Node Grouping 1 and NG File Mapping, each node set $\mathcal{K}_i$ collectively maps the file library exactly once, and therefore, the file library is mapped $r$ times among all $K$ nodes. Note that, since each file belongs to a unique file set $\mathcal B_i$ and is mapped to a unique set of $r$ nodes (the node group $\mathcal T_i$), we must have $\frac{1}{N}\sum_{k=1}^K|{\cal M}_k|= \frac{N r}{N}=r$. Moreover, $\eta_1\left( \frac{K}{r} \right)^{r-1}$ files are mapped to each node. Then, by Function Assignment 1, each node is assigned $\eta_2$ reduce functions and each reduce function is assigned to exactly $s=1$ node. \end{remark} The Map, Shuffle and Reduce phases are defined as follows: {\bf Map Phase}: Each node $k\in [K]$ computes the set of IVs $\{v_{i,j} : i\in [Q], w_j \in \mathcal{M}_k \}$. {\bf Node Group (NG) Shuffle Method}: For every $\alpha\in [X]$, a coded message is multicast by each node $k\in \mathcal{T}_\alpha$ to serve independent requests of the remaining $r-1$ nodes in $\mathcal{T}_\alpha$. Hence, each node $k\in \mathcal{T}_\alpha$ multicasts the same number of coded messages.
Here, each IV is requested by a node $z \in \mathcal{T}_\alpha \setminus k$ and must be available to all other nodes in $\mathcal{T}_\alpha \setminus z$ to ensure that each node can successfully decode its own desired IVs from the multicast. Next, we consider an arbitrary node group $\mathcal T_\alpha$ and a node $z \in \mathcal T_\alpha $. Assume that {$z \in \mathcal K_h$}, and thus {$z=\mathcal T_{\alpha,h}= \mathcal T_\alpha \cap \mathcal K_h$}. In the following, we fix the choice of $\alpha, z,h$ and define \begin{equation} \mathcal L_{z,\alpha}=\{ \ell \in [X]: \mathcal T_{\ell,h}\ne z, \mathcal T_{\ell,i}=\mathcal T_{\alpha,i}, \forall i\in[r]\setminus h \}. \label{eq:define_L_z_alpha} \end{equation} Here, the set $ \mathcal L_{z,\alpha}$ includes all indices $\ell \in [X] $ such that the node group $\mathcal T_\ell$ differs from $\mathcal T_{\alpha}$ only in the $h$-th element, i.e., the node chosen from {$\mathcal K_h$}. In other words, since {$z \in \mathcal K_h$}, $\mathcal T_{\ell,h}$ can be any node in $\mathcal K_h$ except for $z$. Note that $h$ is suppressed from the subscript of $\mathcal L_{z,\alpha}$ for notational simplicity. The definition in (\ref{eq:define_L_z_alpha}) ensures that for any $\ell \in \mathcal L_{z,\alpha}$, we have $z \notin \mathcal T_\ell$, but for any other node $z'$ in $\mathcal{T}_\alpha \setminus z$, we have $z' \in \mathcal T_\ell$. It follows that while file set $\mathcal B_\ell$ is not mapped to node $z$, it is mapped to all other nodes $z'$ in $\mathcal T_\alpha \setminus z$. Thus, we see that IVs of the type $\{v_{i,j}: i\in \mathcal{W}_z, w_j \in \mathcal B_\ell\}$ are requested by node $z$ because $z$ does not have $\mathcal B_\ell$, but are available to all nodes in $\mathcal{T}_\alpha \setminus z$ because they all have access to $\mathcal B_\ell$. This key idea is used to create multicast opportunities as follows.
Formally, let us define \begin{equation} \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}=\bigcup\limits_{\ell \in \mathcal L_{z,\alpha} }\left\{ v_{i,j}: i\in \mathcal{W}_z, w_j \in \mathcal B_\ell\right\}, \label{eq:def_L_shuffle_1} \end{equation} which contains the IVs requested by node $z$ that are available at all nodes in $\mathcal{T}_\alpha\setminus z$. Furthermore, $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ is split into $r-1$ disjoint subsets of equal size\footnote{In general, $|\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}|$ may not be divisible by $r-1$, in which case the IVs of $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ can be concatenated into a message and split into $r-1$ equal-size segments. This process was presented in \cite{li2018fundamental}.} denoted by $ \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},\sigma_1},\ldots , \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},\sigma_{r-1}} $ where $\{ \sigma_1,\ldots , \sigma_{r-1}\}=\mathcal{T}_\alpha\setminus z$. Each node $k\in \mathcal{T}_\alpha$ sends the common multicast message \begin{equation} \label{eq: 1_trans_eq1} \bigoplus \limits_{z\in \mathcal{T}_\alpha\setminus k} \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},k} \end{equation} to all nodes $z\in \mathcal{T}_\alpha\setminus k$. {\bf Reduce Phase}: For all $k\in [K]$, node $k$ computes all output values $u_q$ such that $q\in \mathcal{W}_k$. \begin{remark} For the homogeneous case, we have $|\mathcal L_{z,\alpha}|=\frac{K}{r}-1$ because {$\mathcal T_{\ell,h}$} can only be one of the $\frac{K}{r}-1$ nodes in {$\mathcal K_h\setminus z$}. When using Node Grouping 1, NG File Mapping, and Function Assignment 1, we find that in the NG Shuffle Method each intermediate value set $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ contains $\eta_1 \eta_2 \left( \frac{K}{r} - 1\right)$ IVs.
\end{remark} In the following, we present a more complex $3$-dimension example using the design procedures and notation introduced above. \subsection{3-Dimension Example} \label{sec: 3D_1} To demonstrate the general scheme, we construct a computing network using a three-dimensional hypercube as shown in Fig.~\ref{fig: 3d fig}. Each lattice point in the cube, with its index $i\in [27]$ labeled next to the point, represents a different file set $\mathcal B_i=\{w_i\}$ which contains $\eta_1=1$ file. There are a total of $K=9$ nodes, split into three node sets: $\mathcal{K}_1 = \{1,2,3 \}$, $\mathcal{K}_2 = \{4,5,6 \}$, and $\mathcal{K}_3 = \{7,8,9 \}$, aligned along each of the $r=3$ dimensions of the hypercube. Specifically, the three nodes in $\mathcal{K}_1 = \{1,2,3 \}$ are represented by three parallel planes that go from the top surface of the hypercube to the bottom. Node 3 is represented by the green plane that passes through lattice point 7. Nodes 1 and 2 are represented by the two planes (not shown) parallel to the green plane that go through lattice points 1 and 4, respectively. The three nodes in $\mathcal{K}_2 = \{4,5,6 \}$ are represented by three parallel planes that go from the left surface of the hypercube to the right. Node 5 is represented by the middle plane, shown in red, that goes through lattice point 8, and nodes 4 and 6 are represented by two planes (not shown) parallel to the red plane that go through lattice points 7 and 9, respectively. The nodes in $\mathcal{K}_3 = \{7,8,9 \}$ are represented by three parallel planes that go from the front surface of the hypercube to the back. Node 9 is the blue plane, passing through lattice point 27, and nodes 7 and 8 are represented by two planes (not shown) parallel to the blue plane that go through lattice points 9 and 18, respectively. For the file mapping, each node is assigned all the files indicated by the 9 lattice points on the corresponding plane.
For instance, node $5$, represented by the red plane, is assigned the file set $\mathcal{M}_5=\{ w_2, w_5, w_8, w_{11}, w_{14}, w_{17}, w_{20}, w_{23}, w_{26}\}$. For each $i \in [3]$, the size of $\mathcal K_i$ is $\frac{K}{r}=3$, which is the number of lattice points in the $i$-th dimension. Since the three nodes in each set $\mathcal{K}_i$ are aligned along dimension $i$, they collectively store the entire library of 27 files. Since each point $i$ in the lattice is uniquely determined by the intersection of three planes, one from each dimension, the same point also represents a node group $\mathcal T_i$. For instance, node group $\mathcal T_{26}=\{3,5,9\}$ is represented by the three planes, green (node 3), red (node 5), and blue (node 9), intersecting at the single lattice point $i=26$. It is clear that each file $w_i$ is mapped to the $r=3$ nodes in $\mathcal T_i$. Each node $k$ is assigned the $\eta_2=1$ function of $\mathcal{W}_k=\{ k\}$ because $Q=K=9$, so each node $k$ is assigned only the $k$-th reduce function. \begin{figure} \centering \includegraphics[width=9cm, height=6cm]{3d_fig_sized_v3} \vspace{-0.6cm} \caption{~\small (left) Cube lattice which defines file availability amongst the $K=9$ computing nodes. Each lattice point represents a file and each node has a set of files available to it represented by a plane of lattice points. The green, red and blue planes represent the files locally available to nodes $3$, $5$ and $9$, respectively. (right) Intersections of planes which represent files that are locally available to multiple nodes and yield coded multicasting opportunities.} \label{fig: 3d fig} \vspace{-0.4cm} \end{figure} In the Map phase, each node computes all IVs from its locally available files. For example, node 5 will compute all possible IVs $\{ v_{i,j} : i \in[Q],w_j\in \mathcal{M}_5 \}$. The subset of IVs $\{ v_{5,j} : w_j\in \mathcal{M}_5 \}$ is used to calculate the function output $u_5$.
Furthermore, node $5$ will use the subset of IVs in $\{ v_{i,j} : i\in [Q]\setminus\mathcal{K}_2, w_j\in \mathcal{M}_5 \}$ for transmission and decoding purposes when forming multicasting groups with nodes of $\mathcal{K}_1$ and $\mathcal{K}_3$. Note that, similar to the last example, node $5$, like the other nodes, will compute some IVs that are not utilized. We use the example of node group $\mathcal{T}_\alpha=\mathcal{T}_{26}=\{3,5,9\}$ to explain the Shuffle phase. Within node group $\mathcal{T}_{26}$, node 3 will multicast the summation of two IVs to the nodes in $\mathcal T_{26}\setminus 3$, one intended for node 5 and one intended for node 9. The former must be available at both nodes 3 and 9, and the latter must be available at both nodes 3 and 5. To determine these IVs, we consider the sets $\mathcal{V}_{\{3,9\}}^{\{5\}} $ and $\mathcal{V}_{\{3,5\}}^{\{9\}}$. The set $\mathcal{V}_{\{3,9\}}^{\{5\}}$ contains two IVs requested by node 5 that are available at nodes $3$ and $9$. To find these two IVs, we replace node $5$ in $\mathcal{T}_{26}$ by one of the other two nodes in $\mathcal K_2$, which are nodes 4 and 6. This way, we obtain two node groups $\mathcal T_{25}=\{3,4,9\}$ and $\mathcal T_{27}=\{3,6,9\}$ that differ from $\mathcal{T}_\alpha$ only in the second element (the element that intersects $\mathcal K_2$). Thus, ${\mathcal L_{5,\alpha}}=\{25, 27\}$. This leads to $\mathcal{V}_{\{3,9\}}^{\{5\}} =\mathcal{V}_{\{3,9\}}^{\{5\},3} \bigcup \mathcal{V}_{\{3,9\}}^{\{5\},9} =\{ v_{5,25},v_{5,27} \}$, which contains the two IVs requested by node $5$ that are available at nodes 3 and 9. Similarly, we find ${\mathcal L_{9,\alpha}}=\{8,17\}$ and $\mathcal{V}_{\{3,5\}}^{\{9\}} =\mathcal{V}_{\{3,5\}}^{\{9\},3} \bigcup\mathcal{V}_{\{3,5\}}^{\{9\},5}=\{ v_{9,8},v_{9,17} \} $. Once these two IV sets are found, node 3 transmits the summation of one IV from each set, i.e., $\mathcal{V}_{\{3,9\}}^{\{5\},3} \oplus \mathcal{V}_{\{3,5\}}^{\{9\},3}= v_{5,25} \oplus v_{9,8}$, to nodes 5 and 9.
Upon receiving this value, node 5 will subtract $v_{9,8}$ to recover $v_{5,25}$ and node 9 will subtract $v_{5,25}$ to recover $v_{9,8}$. The remaining IV set is found in a similar fashion: $\mathcal{V}_{\{5,9\}}^{\{3\}} = \mathcal{V}_{\{5,9\}}^{\{3\},5} \bigcup \mathcal{V}_{\{5,9\}}^{\{3\},9} =\{ v_{3,20},v_{3,23} \}$. Node 5 will transmit $v_{3,20} \oplus v_{9,17}$ to nodes 3 and 9, and node 9 will transmit $v_{3,23} \oplus v_{5,27}$ to nodes 3 and 5. In this example, each node participates in $9$ multicasting groups and transmits $1$ coded message per group. Each transmission has the equivalent size of $1$ IV. Therefore, the communication load is $L_{\rm c}=\frac{9 \cdot 9}{QN}= \frac{81}{9\cdot 27}=\frac{1}{3}$, which is half of the uncoded communication load $L_{\rm u}=\frac{2}{3}$, the fraction of files not available to each node. \subsection{Achievable Trade-off between Computation and Communication Loads} The following theorem gives the trade-off between the computation and communication loads for the proposed scheme. \begin{theorem} \label{theorem: s1_hom} By using Node Grouping 1, NG File Mapping, Function Assignment 1, and the NG Shuffle Method, the communication load of the general homogeneous scheme is \begin{eqnarray} \label{eq: theorem s1_hom} L_{\rm c}(r) &=& \frac{K-r}{K\left( r-1 \right)}. \end{eqnarray} \hfill $\square$ \end{theorem} \begin{IEEEproof} Theorem~\ref{theorem: s1_hom} is proved in Appendix \ref{sec: codedHetPrfs1} as a special case of our general heterogeneous design, which is defined in Section \ref{sec: hets1}. \end{IEEEproof} The optimality of this scheme is discussed in Section~\ref{sec: Discussion} by comparing its communication load with that of the state-of-the-art scheme in \cite{li2018fundamental}.
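The construction for the $3$-dimension example can be sanity-checked with a short script. The sketch below (all names, including the helper `L_set` for the sets $\mathcal L_{z,\alpha}$, are our own) enumerates the $27$ node groups, verifies that every node requests $\frac{K}{r}-1=2$ IVs per group, and compares the enumerated load against the closed form:

```python
from itertools import product

# Sketch of the homogeneous scheme for the 3-D example (K = 9, r = 3,
# eta_1 = eta_2 = 1): node sets K_1..K_3 are the hypercube dimensions and
# each node group (one node per set) stores one singleton file set.
K, r = 9, 3
node_sets = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
groups = list(product(*node_sets))       # X = (K/r)^r = 27 node groups

def L_set(z, alpha):
    """Our helper: indices of groups differing from groups[alpha] only in
    z's dimension, i.e. the sets L_{z,alpha} of the NG Shuffle Method."""
    h = next(i for i, s in enumerate(node_sets) if z in s)
    return [l for l, T in enumerate(groups)
            if T[h] != z and all(T[i] == groups[alpha][i]
                                 for i in range(r) if i != h)]

# Each node in a group requests K/r - 1 = 2 IVs there; the request set is
# split into r - 1 = 2 segments, so each coded message carries 1 IV.
for alpha, T in enumerate(groups):
    for z in T:
        assert len(L_set(z, alpha)) == K // r - 1

iv_per_message = (K // r - 1) / (r - 1)  # = 1 IV per coded message
Q, N = 9, 27
L_coded = len(groups) * r * iv_per_message / (Q * N)
L_theorem = (K - r) / (K * (r - 1))
print(L_coded, L_theorem)                # both equal 1/3
```

The enumerated load matches Theorem \ref{theorem: s1_hom}: $L_{\rm c}(3)=\frac{9-3}{9\cdot 2}=\frac{1}{3}$.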
\section{Heterogeneous Hypercube Computing Approach} \label{sec: hets1} In this section, we expand the proposed combinatorial hypercube design to accommodate heterogeneous computing networks. As mentioned in the introduction, one key novelty of our design is that nodes are assigned a varying number of files and reduce functions so that, in practice, nodes with more computational capability perform relatively more of the overall MapReduce execution. In this case, the proposed heterogeneous design becomes a hypercuboid, consisting of $P$ interleaved homogeneous hypercube networks. The homogeneous networks, $\mathcal{C}_p, \; \forall p\in[P]$, correspond to hypercubes of different dimensions and side lengths, representing distinct classes of nodes with varying storage capacity and computation resources. We start with an example and then present the general scheme. \subsection{3-Dimension Hypercuboid Example} This example is presented in Fig.~\ref{fig: het_s1_exp1}, where there are two classes of nodes, $\mathcal C_1=\mathcal K_1 \cup \mathcal K_2$ and $\mathcal C_2=\mathcal K_3$, with different storage capability, where $\mathcal{K}_1~=~\{ 1, 2 \}$, $\mathcal{K}_2 = \{ 3, 4 \}$ and $\mathcal{K}_3 = \{ 5, 6, 7 \}$. Each node in $\mathcal C_1$ stores half of the files and each node in $\mathcal C_2$ stores one-third of the files. Each node set $\mathcal{K}_i$ collectively stores all $N=12$ files. Each file is assigned to a node group $\mathcal{T}_\alpha$ of $3$ nodes that contains one node from each of the sets $\mathcal{K}_1$, $\mathcal{K}_2 $ and $\mathcal{K}_3$. For example, file $w_1$ is assigned to the nodes of $\{1,3,5 \}$ and file $w_{11}$ is assigned to the nodes of $\{2,3,7 \}$. All of the file assignments are represented by the cuboid in Fig.~\ref{fig: het_s1_exp1}. In the Map phase, the nodes compute all IVs from their locally available files. Since every file is assigned to $3$ nodes, the computation load is $r=3$.
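The storage asymmetry of the cuboid placement can be seen by enumerating the node groups. The sketch below is a minimal illustration in our own notation; its file numbering is one arbitrary choice and need not match the indexing of Fig.~\ref{fig: het_s1_exp1}, only the structure matters:

```python
from itertools import product

# Sketch of the hypercuboid file mapping for this example (eta_1 = 1):
# each of the N = 12 files is stored by one node group containing a node
# from each of K_1, K_2, K_3. Labels and file numbering are ours.
node_sets = [[1, 2], [3, 4], [5, 6, 7]]  # K_1, K_2, K_3
groups = list(product(*node_sets))       # X = 2 * 2 * 3 = 12 node groups

storage = {k: set() for s in node_sets for k in s}
for file_idx, T in enumerate(groups, start=1):
    for k in T:                          # each file is mapped to r = 3 nodes
        storage[k].add(file_idx)

# Nodes of C_1 (sets K_1, K_2) store N/2 = 6 files each; nodes of C_2
# store N/3 = 4 files each, matching the stated storage capabilities.
print({k: len(files) for k, files in sorted(storage.items())})
```

Each of nodes $1$--$4$ ends up with $6$ of the $12$ files and each of nodes $5$--$7$ with $4$, while every file is mapped to exactly $r=3$ nodes.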
Different from previous works in CDC, nodes are assigned a varying number of reduce functions. We assign more reduce functions to nodes which have larger storage and computing capability. Assume that there are $Q=11$ reduce functions. We assign $2$ reduce functions to each node of $\mathcal{K}_1 $ and $\mathcal{K}_2$ and just $1$ reduce function to each node of $\mathcal{K}_3$. Specifically, the function assignments are $\mathcal W_1=\{1,2\}$, $\mathcal W_2=\{3,4\}$, $\mathcal W_3=\{5,6\}$, $\mathcal W_4=\{7,8\}$, $\mathcal W_5=\{9\}$, $\mathcal W_6=\{10\}$, and $\mathcal W_7=\{11\} $. The reason we assign this specific number of reduce functions to each node will become clear when we discuss the Shuffle phase. \begin{figure*} \centering \includegraphics[width=16cm]{3d_s=1_het_v2} \put(-38.4,24.5){assigned} \put(-38.4,21){functions \;\;\;\;\; transmits} \put(-51.95,17){node $2$: \;\;\;\;\;\;$3$, $4$ \;\;\;\;\;\;$v_{5,12}\oplus v_{11,7}$ } \put(-51.95,13){node $3$: \;\;\;\;\;\;$5$, $6$ \;\;\;\;\;\;\;$v_{3,5}\oplus v_{11,9}$} \put(-51.95,9){node $7$: \;\;\;\;\;\;\;$11$ \;\;\;\;\;\;\;\;\;$v_{4,5}\oplus v_{6,12}$ } \vspace{-0.2cm} \caption{~\small Representations of a hypercuboid with $P=2$, $r_1=2$, $m_1=2$, $r_2=1$ and $m_2=3$. {\it Left $3$ cuboids}: Depictions of the files mapped to the nodes of $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$, respectively. {\it Rightmost cuboid}: Hypercuboid highlighting the files stored at exactly $2$ nodes of the multicast group $\mathcal{T}_{11}=\{ 2,3,7 \}$. These files, in addition to the assigned functions among these nodes, determine the IVs included in the coded multicasts, which are displayed to the right.} \label{fig: het_s1_exp1} \vspace{-0.4cm} \end{figure*} In the Shuffle phase, the set of multicast groups includes all possible node groups $\mathcal T_{\alpha}$ which contain one node from each of the sets $\mathcal{K}_1$, $\mathcal{K}_2 $ and $\mathcal{K}_3$.
Within each $\mathcal T_{\alpha}$, nodes send coded pairs of IVs to the other two nodes. For example, consider the node group $\mathcal T_{\alpha}=\mathcal T_{11}=\{2,3,7 \}$. Following the notation of the NG Shuffle Method, we have $\mathcal L_{2,\alpha}=\{5\}$. This is because when replacing node $2 \in \mathcal K_1$ in $\mathcal T_{\alpha}$ by a different node in $\mathcal K_1$, we obtain $\mathcal T_{5}=\{1,3,7 \}$. Hence, using $\mathcal W_2=\{3,4\}$ and Eqn.~(\ref{eq:def_L_shuffle_1}), we obtain $\mathcal V^{\{2\}}_{\{3,7\}}=\{v_{3,5}, v_{4,5}\}$, which are the IVs requested by node 2 and computed at nodes $3$ and $7$. Similarly, for node 3, we have $\mathcal L_{3,\alpha}=\{12\}$ and $\mathcal V^{\{3\}}_{\{2,7\}}=\{v_{5,12}, v_{6,12}\}$. For node 7, we have $\mathcal L_{7,\alpha}=\{7,9\}$, corresponding to $\mathcal T_{7}=\{2,3,5\}$ and $\mathcal T_{9}=\{2,3,6\}$. While the size of $\mathcal L_{7,\alpha}$ is larger than that of $\mathcal L_{2,\alpha}$ and $\mathcal L_{3,\alpha}$, since $\mathcal W_7=\{11\}$ is smaller, we obtain $\mathcal V^{\{7\}}_{\{2,3\}}=\{v_{11,7}, v_{11,9}\}$, which is of the same size as $\mathcal V^{\{2\}}_{\{3,7\}}$ and $\mathcal V^{\{3\}}_{\{2,7\}}$. Using Eqn.~(\ref{eq: 1_trans_eq1}), we see that nodes $2$, $3$, and $7$ transmit $v_{5,12}\oplus v_{11,7}$, $v_{3,5}\oplus v_{11,9}$, and $v_{4,5}\oplus v_{6,12}$, respectively. In this example, we see that by assigning a varying number of reduce functions to the nodes we create symmetry within each node group $\mathcal{T}_\alpha$, i.e., each node of the group requests the same number of IVs from the other nodes of the group. This symmetry leads to savings in the communication load. Here, the communication load can be calculated by accounting for the $2\cdot 2 \cdot 3 = 12$ node groups, where within each group there are $3$ transmissions of size $T$ bits. By normalizing by $QNT$, we find the communication load of the coded scheme is $L_{\rm c} = \frac{36}{12 \cdot 11} = \frac{3}{11}$.
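This symmetry can also be checked by enumeration. The following sketch (variable names ours; the sizes $|\mathcal W_k|$ are hard-coded from the assignment above) verifies that within every node group each node requests exactly $\eta_1\eta_2 Y = 2$ IVs, and recomputes the coded load:

```python
from itertools import product

# Sketch of the shuffle symmetry in the hypercuboid example.
# Y = lcm(m_1 - 1, m_2 - 1) = lcm(1, 2) = 2; |W_k| from the assignment.
node_sets = [[1, 2], [3, 4], [5, 6, 7]]
groups = list(product(*node_sets))       # 12 multicast groups
Y = 2
W_size = {1: 2, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1, 7: 1}

for T in groups:
    for h, z in enumerate(T):
        # |L_{z,alpha}| = m_h - 1 groups arise from swapping node z out, so
        # node z requests |W_z| * (m_h - 1) IVs within this group.
        assert W_size[z] * (len(node_sets[h]) - 1) == 2   # = eta_1*eta_2*Y

# Each node sends one coded message of Y / (r - 1) = 1 IV per group it joins.
Q, N, r = 11, 12, 3
L_coded = len(groups) * r * (Y / (r - 1)) / (Q * N)
print(L_coded)                           # 36/132 = 3/11
```

The enumeration reproduces $L_{\rm c} = \frac{3}{11}$ from the text.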
We can compare this to the uncoded communication load, where each requested IV is transmitted alone. To compute the uncoded communication load, we count the number of IVs each node requests. Since the $4$ nodes of $\mathcal{K}_1 $ and $\mathcal{K}_2$ request $6\cdot 2 = 12$ IVs each and the $3$ nodes of $\mathcal{K}_3$ request $8$ IVs each, we find $L_{\rm u} = \frac{4\cdot 12 + 3 \cdot 8}{12\cdot 11} = \frac{6}{11}$. In this case, $L_{\rm c} = \frac{1}{2}\cdot L_{\rm u}$ since, under the coded Shuffle policy, every requested IV is transmitted in a coded pair. In the general heterogeneous CDC scheme proposed here, we will see that $L_{\rm c} = \frac{1}{r-1}\cdot L_{\rm u}$. \subsection{General Heterogeneous Scheme} \label{sec: gen_het s1} In this subsection, we introduce the general heterogeneous scheme step by step. {\bf Node Grouping 2}: The key idea of Node Grouping 2 is to form one heterogeneous network based on a hypercuboid design that consists of $P$ interleaved homogeneous networks, represented by hypercubes of different dimensions $r_p$ and sizes $m_p$ within the hypercuboid. {The $K$ nodes consist of $P$ disjoint sets denoted by $\mathcal{C}_1,\ldots ,\mathcal{C}_P$, where $\sum_{p=1}^{P}|\mathcal{C}_p|=K$. For each $p\in [P]$, split $\mathcal{C}_p$ into $r_p \in \mathbb{Z}^+$ disjoint subsets, each of size $m_p$, denoted by $\{\mathcal{K}_{n_{p\scaleto{-1}{3.5pt}}+1} ,\ldots , \mathcal{K}_{n_p}\}$, where $n_p=\sum_{i=1}^p r_i$. Hence, the entire network is comprised of $r$ node sets, $\mathcal{K}_1 ,\ldots , \mathcal{K}_r$, where $r = \sum_{p=1}^{P}r_p$. Consider all possible node groups $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$ of size $r$ that each contain one node from every node set $\mathcal{K}_1 ,\ldots , \mathcal{K}_r$, where $X=\prod_{i=1}^{r}|\mathcal{K}_i|=\prod_{p=1}^{P}m_p^{r_p}$. Denote $\mathcal T_{j,i}=\mathcal T_j \cap \mathcal K_i, \;\forall j \in [X]$ and $\forall i \in [r]$, as the node in $\mathcal T_j$ that is chosen from $\mathcal K_i$.
} The file mapping is then determined by the NG File Mapping defined in Section \ref{sec: genscheme_s1} with the node groups $\mathcal{T}_1,\ldots ,\mathcal{T}_{X}$ defined by Node Grouping 2. \begin{remark} When using Node Grouping 2 and NG File Mapping, we form a hypercuboid made of $P$ interleaved hypercubes of different dimensions. For a given $p\in [P]$, $\mathcal{C}_p$ translates to $r_p$ dimensions of size $m_p$ of the hypercuboid. Moreover, $\mathcal{C}_p$ serves a role similar to that of a single hypercube of dimension $r_p$ as in the homogeneous case. Specifically, $\mathcal{C}_p$ contains $r_p$ node sets $\mathcal K_i$, each of size $m_p$, where $m_p$ is the number of lattice points along each corresponding dimension. The total number of nodes in $\mathcal C_p$ is thus $r_p m_p$. Nodes in each $\mathcal K_i$ collectively map the file library once. Hence, all nodes in $\mathcal{C}_p$ have the same storage capacity: each maps a total of $\frac{N}{m_p}$ files. Collectively, nodes in $\mathcal{C}_p$ map the library $r_p$ times. The $P$ disjoint sets $\mathcal C_1, \cdots, \mathcal C_P$ form one hypercuboid with $r$ dimensions, where there are $r_p$ dimensions of size $m_p$ for each $p\in[P]$. Hence, each node group $\mathcal{T}_\alpha$ of size $r=\sum_{p=1}^P r_p$, defined in Node Grouping 2, is the union of $P$ node groups, of sizes $r_1, \cdots, r_P$, respectively, chosen from each of the $P$ interleaved hypercubes corresponding to $\mathcal C_p, p \in[P]$. Note that, instead of each hypercube operating independently subject to its own computation load $r_p$, the hypercuboid design takes full advantage of the total computation load $r$ across the $P$ hypercubes to achieve the gain of $\frac{1}{r-1}$ for the heterogeneous system. \end{remark} {\bf Function Assignment 2}: Define $Y$ as the least common multiple (LCM) of $\{m_1-1, m_2-1, \ldots , m_P-1 \}$.
Split the $Q$ functions into $K$ disjoint sets, labeled $\mathcal{W}_{1},\ldots,\mathcal{W}_{K}$, where, in general, the sets may be of different sizes. For each $k \in [K]$, $|\mathcal{W}_{k}| =~\frac{\eta_2Y }{m_p - 1}$ where $k \in \mathcal{C}_p$ and $\eta_2 \in \mathbb{Z}^+$ such that $Q~=~\eta_2 Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1}$. For each $k \in [K]$, let $\mathcal{W}_k$ be the set of reduce functions assigned to node $k$. The Map and Reduce phases follow our standard definitions from Section \ref{sec: genscheme_s1}, and the NG Shuffle Method is used for the Shuffle phase with the node grouping defined by Node Grouping 2. The correctness of the proposed heterogeneous CDC scheme is proved in Appendix \ref{sec_APP:correctness}. \begin{remark} When using Node Grouping 2, NG File Mapping, Function Assignment 2, and the NG Shuffle Method, we find that each intermediate value set $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}}$ contains $\eta_1 \eta_2 Y$ IVs. \end{remark} \begin{remark} Node Grouping 2 and Function Assignment 2 are more general cases of Node Grouping 1 and Function Assignment 1, respectively. Therefore, the homogeneous scheme of Section \ref{sec: genscheme_s1} is a special case of the general heterogeneous scheme here. By letting $P=1$ such that $\mathcal{C}_1$ is the set of all nodes, we find $r=r_1$, $m_1 = \frac{K}{r}$, $X = \left( \frac{K}{r} \right)^r$ and $Y = \frac{K}{r}-1$. Moreover, each node is assigned $\frac{\eta_2 Y}{m_1-1} = \eta_2$ reduce functions. For file availability, nodes are split into $r$ disjoint, equal-size sets, $\mathcal{K}_1,\ldots,\mathcal{K}_r$, and file sets of size $\eta_1$ are available to sets of nodes which contain exactly one node from each of $\mathcal{K}_1,\ldots,\mathcal{K}_r$. \end{remark} \begin{remark} The proposed hypercuboid design may not accommodate arbitrary heterogeneous individual memories and computation loads due to its constrained combinatorial structure.
In practice, we can group nodes with heterogeneous storage capacity and computation resources to fit a hypercuboid design as closely as possible (similar to ``quantization'') and thereby reap the benefit of taking the heterogeneity of the system into consideration. \end{remark} \subsection{Achievable Trade-off between Computation and Communication Loads} In this section, we first present the communication load of an uncoded Shuffle phase, $L_{\rm u}$, using Node Grouping 2, NG File Mapping, and Function Assignment 2 of the general heterogeneous scheme. Here, an uncoded Shuffle phase means that all the requested IVs are transmitted in a unicast fashion without coded multicasting. Note that $L_{\rm u}$ represents the fraction of intermediate values which are requested by any node. Then, we demonstrate that the communication load using the proposed hypercuboid scheme and the NG Shuffle Method is $L_{\rm c} = \frac{1}{r-1}\cdot L_{\rm u}$. More formally, we define $L_{\rm u}$ and $L_{\rm c}$ as functions of $m_1 , \ldots , m_P$ and $r_1 , \ldots , r_P$ which define the number of nodes and the corresponding computation load in each node class of the heterogeneous computing network. Then, $L_{\rm u}$ and $L_{\rm c}$ are given in the following theorems. \begin{theorem} \label{theorem: uncoded} By using Node Grouping 2, NG File Mapping, Function Assignment 2, and an uncoded Shuffle phase, the communication load is \begin{align} \label{eq: Lu} L_{\rm u}(m_1 , \ldots , m_P, r_1 , \ldots , r_P) = \frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}. \end{align} \end{theorem} \begin{IEEEproof} Theorem \ref{theorem: uncoded} is proven in Appendix \ref{sec: uncodedHetPrfs1}. \end{IEEEproof} The following theorem states the communication load of the Shuffle phase which uses coded communication.
\begin{theorem} \label{theorem: coded} By using Node Grouping 2, NG File Mapping, Function Assignment 2, and NG Shuffle Method, the communication load of the general heterogeneous scheme is \begin{align} \label{eq: Lc} L_{\rm c} (m_1 , \ldots , m_P, & r_1 , \ldots , r_P) = \frac{1}{r-1}\cdot\frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}} = \frac{1}{r-1} \cdot L_{\rm u}(m_1 , \ldots , m_P, r_1 , \ldots , r_P). \end{align} \end{theorem} \begin{IEEEproof} Theorem \ref{theorem: coded} is proven in Appendix \ref{sec: codedHetPrfs1}. \end{IEEEproof} The communication load $L_{\rm c}$ comprises two parts: the local computing gain, $L_{\rm u}$, and the global computing gain, $\frac{1}{r-1}$. The local computing gain represents the normalized number of IVs that must be shuffled. As nodes have access to a larger fraction of the files, the nodes inherently request fewer IVs in the Shuffle phase. The global computing gain stems from the fact that with the coded design every transmission serves $r-1$ nodes with distinct requests. \subsection{Optimality} The information-theoretic lower bound on the communication load derived in \cite{li2018fundamental} was obtained under the assumption of a homogeneous reduce function assignment. Hence, it does not apply when reduce functions are heterogeneously assigned to the computing nodes. In the following we discuss the lower bound of the communication load for two scenarios. First, we demonstrate a straightforward lower bound on the communication load when considering all possible file and function assignments for a given $r$ and $K$. Next, we provide a lower bound on the communication load when we use the specific file and function assignments (Node Grouping 2, NG File Mapping and Function Assignment 2) of the heterogeneous design in Section \ref{sec: gen_het s1}. A trivial bound on the communication load is $L \geq 0$.
Given $r$ and $K$, the following file and function assignment and Shuffle phase design will yield a communication load meeting this bound. Pick $r$ nodes and assign the entire file library to each of the nodes. Furthermore, for each function, assign it to one of the $r$ nodes with access to the entire file library. As every node that is assigned a reduce function is able to compute all the necessary IVs itself, no Shuffle phase is required such that $L=0$. Note that, in this context, we do not consider any storage or computing limitations on the nodes, rather, we show that optimizing the communication load over all possible function and file assignments is not an interesting problem. The question remains as to the optimality of the proposed Shuffle phase of Section \ref{sec: gen_het s1} given the file and reduce function assignments. Based on the seminal approach introduced in \cite{arbabjolfaei2013capacity,wan2016caching,wan2016optimality,wan2020index} for coded caching with uncoded cache placement, we derive Theorem \ref{theorem: bound} which provides a lower bound on the entropy of all transmissions in the Shuffle phase given a specific function and file placement and a permutation of the computing nodes. 
\begin{theorem} \label{theorem: bound} Given a particular file placement, $\mathcal{M}_k,\;\forall k\in[K]$, and function assignment, $\mathcal{W}_k,\;\forall k\in[K]$, in order for every node $k\in[K]$ to have access to all IVs necessary to compute the functions of $\mathcal{W}_k$, the optimal communication load over all achievable shuffle schemes, $L^*$, is bounded by \begin{equation} \label{eq: bound_eq1} L^* \geq \frac{1}{TQN}\sum_{i=1}^{K} H\left(V_{\mathcal{W}_{k_i},:}|V_{:,\mathcal{M}_{k_i}},Y_{\{k_1,\ldots, k_{i-1} \}}\right) \end{equation} where $k_1, \ldots , k_K$ is some permutation of $[K]$, $V_{\mathcal{W}_{k_i},:}$ is the set of IVs necessary to compute the functions of $\mathcal{W}_{k_i}$,\footnote{The notation ``$:$'' is used to denote all possible indices.} $V_{:,\mathcal{M}_{k_i}}$ is the set of IVs which can be computed from the file set $\mathcal{M}_{k_i}$ and $Y_{\{k_1,\ldots, k_{i-1} \}}$ is the union of the set of IVs necessary to compute the functions of $\bigcup_{j=1}^{i-1}\mathcal{W}_{k_j}$ and the set of IVs which can be computed from files of $\bigcup_{j=1}^{i-1}\mathcal{M}_{k_j}$. \hfill $\square$ \end{theorem} \begin{IEEEproof} Theorem \ref{theorem: bound} is proved in Appendix \ref{appendix: bound}. \end{IEEEproof} In Theorem \ref{thm: optimality} below, we demonstrate that given Node Grouping 2, NG File Mapping, and Function Assignment 2, the NG Shuffle Method introduced in Section~\ref{sec: genscheme_s1} yields a communication load that is within a constant factor of the lower bound. \begin{theorem} \label{thm: optimality} For a computing network defined by Node Grouping 2, NG File Mapping, and Function Assignment 2, define $L^*$ to be the infimum of the communication load over all possible Shuffle phases; then we have \begin{equation} L_{\rm c} \leq \frac{2r}{r-1} L^*, \end{equation} where $L_{\rm c}$, given in (\ref{eq: Lc}), is the communication load achieved by using the NG Shuffle Method.
\end{theorem} \begin{IEEEproof} Theorem \ref{thm: optimality} is proved in Appendix \ref{sec: opt_pf}. \end{IEEEproof} \section{Discussions} \label{sec: Discussion} In this section, we compare the performance of the proposed schemes to the state-of-the-art schemes in \cite{li2018fundamental}. Specifically, we compare the required number of files, the required number of multicast groups and the communication load. When we compare the performance of the proposed heterogeneous CDC scheme with that of the homogeneous CDC in \cite{li2018fundamental}, we fix the computation load, $r$, the number of files, $N$, and the number of reduce functions, $Q$.\footnote{We adjust $N$ and $Q$ to be the same by using the appropriate $\eta_1$ and $\eta_2$.} The scheme in \cite{li2018fundamental} requires $N_1 = {K \choose r}\eta_1$ input files and $Q_1 = {K \choose s}\eta_2$ reduce functions, where $s$ is the number of nodes that compute each reduce function. For $s=1$, the communication load as a function of $K$ and $r$ is \begin{equation} \label{eq: li_se1} L_1(r) = \frac{1}{r}\left(1-\frac{r}{K}\right). \end{equation} \subsection{Homogeneous CDC} Using (\ref{eq: theorem s1_hom}), we observe the following comparison \begin{equation} \frac{L_{\rm c}(r)}{L_1(r)} = \frac{rK}{K-r} \cdot\frac{K-r}{K\left( r-1 \right)} = \frac{r}{r-1}. \end{equation} For most values of $r$ there is an insignificant increase in the communication load for the new combinatorial scheme, and furthermore, as $r \rightarrow \infty$ the two schemes yield identical communication loads. Since our proposed homogeneous scheme uses the same function assignment as the scheme in \cite{li2018fundamental}, this hypercube-based design is asymptotically optimal in the information-theoretic sense, even without fixing the file and function assignments. These findings are verified through simulation of the communication load as shown in Fig.~\ref{fig: s1graph}.
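As a quick numerical sanity check of the ratio above, the two homogeneous loads can be evaluated directly (a minimal sketch; the function names are ours, and exact rational arithmetic is used to avoid floating-point artifacts):

```python
from fractions import Fraction

def L1(K, r):
    # Load of the state-of-the-art homogeneous scheme: (1/r) * (1 - r/K).
    return Fraction(1, r) * (1 - Fraction(r, K))

def Lc_hom(K, r):
    # Load of the proposed homogeneous hypercube scheme: (K - r) / (K * (r - 1)).
    return Fraction(K - r, K * (r - 1))

# The ratio L_c / L_1 equals r / (r - 1) for every valid pair (K, r),
# and approaches 1 as r grows.
for r in (2, 3, 5, 10):
    K = 4 * r  # any K that is a multiple of r with K/r >= 2
    assert Lc_hom(K, r) / L1(K, r) == Fraction(r, r - 1)
```

The loop confirms that the multiplicative gap between the two schemes depends only on $r$, not on $K$.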
While both schemes require the same number of output functions, $Q=K\eta_2$, the required number of input files has been drastically reduced in this case. It can be observed that the number of input files for the homogeneous hypercube design is \begin{figure} \vspace{-5cm} \centering \includegraphics[width=12cm]{CDC_1_fig4.pdf} \vspace{-5cm} \caption{~\small A comparison of the resulting communication load for the newly proposed and the state-of-the-art homogeneous distributed computing schemes.} \label{fig: s1graph} \vspace{-0.5cm} \end{figure} \begin{equation} N_{\rm c} = \left( \frac{K}{r} \right)^r\eta _1 \end{equation} while the scheme of \cite{li2018fundamental} requires $N_1 = {K \choose r}\eta_1$ input files. Assuming $r =\Theta (K)$, applying Stirling's formula to directly compare the two quantities yields \begin{align} \frac{N_1}{N_{\rm c}} &= \frac{{K \choose r}}{\left( \frac{K}{r}\right)^r} = \frac{K!}{r!(K-r)!\left( \frac{K}{r}\right)^r} = \Theta\left( \frac{\sqrt{2\pi K}\left( \frac{K}{e}\right)^K}{2 \pi \sqrt{r \left( K-r \right)}\left( \frac{r}{e}\right)^r\left( \frac{K-r}{e}\right)^{\left( K-r \right)}} \cdot \frac{1}{\left( \frac{K}{r}\right)^r} \right) \notag\\ &= \Theta\left(\sqrt{\frac{K}{2\pi r (K-r)}}\cdot \left( \frac{K}{K-r} \right)^{K-r} \right) = \Theta\left(\sqrt{\frac{1}{K}}\cdot \left( \frac{K}{K-r} \right)^{K-r} \right)\label{eq: comp_s1_files}. \end{align} When $r<K$, we find that (\ref{eq: comp_s1_files}) grows exponentially with $K$ and, therefore, our proposed scheme has an exponential decrease in the number of required files. As pointed out in \cite{li2018fundamental,konstantinidis2018leveraging}, the required number of multicast groups is also an important design parameter in CDC. If this number is large, it may take so long to build such node groups that the gain achieved by CDC is completely lost.
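To make the reduction concrete, the two file requirements can be compared numerically for a few network sizes (a small sketch; `N1` and `Nc` are our names for $N_1$ and $N_{\rm c}$ above):

```python
from math import comb

def N1(K, r, eta1=1):
    # Files required by the state-of-the-art scheme: C(K, r) * eta_1.
    return comb(K, r) * eta1

def Nc(K, r, eta1=1):
    # Files required by the homogeneous hypercube design: (K/r)^r * eta_1.
    assert K % r == 0, "the design requires r to divide K"
    return (K // r) ** r * eta1

# With r = K/2 (so r = Theta(K)), the savings factor N_1/N_c
# grows rapidly as the network scales.
ratios = [N1(K, K // 2) / Nc(K, K // 2) for K in (8, 16, 32)]
assert ratios[0] < ratios[1] < ratios[2]
```

For instance, at $K=8$, $r=4$ the hypercube design needs $2^4=16$ files versus ${8 \choose 4}=70$, and the gap widens quickly with $K$.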
It can be seen that the number of required multicast groups for the scheme in \cite{li2018fundamental} is $U_1={K \choose r+1}$, while the required number of multicast groups of the proposed scheme is $U_c = \left(\frac{K}{r}\right)^r$. Hence, by a computation similar to (\ref{eq: comp_s1_files}), using ${K \choose r+1} = \frac{K-r}{r+1}{K \choose r}$, it can be seen that \begin{equation} \frac{U_1}{U_c} = \Theta\left(\frac{K-r}{r+1} \cdot \sqrt{\frac{1}{K}}\cdot \left( \frac{K}{K-r} \right)^{K-r} \right), \end{equation} which can grow exponentially with $K$; hence, the proposed hypercube scheme reduces the required number of multicast groups exponentially. \begin{remark} The hypercube approach has similar performance compared to the CDC scheme based on the resolvable design proposed in \cite{konstantinidis2018leveraging}, e.g., the required number of input files in \cite{konstantinidis2018leveraging} is $\left(\frac{K}{r}\right)^{r-1}$, which is slightly better than the proposed hypercube scheme. However, as we discussed in Section~\ref{sec: hets1}, the proposed hypercube scheme extends naturally to heterogeneous CDC networks while it is unclear how to extend the scheme in \cite{konstantinidis2018leveraging} to heterogeneous CDC networks. \end{remark} \subsection{Heterogeneous CDC} \label{sec: disc_sg1} As shown in (\ref{eq: Lc}), the communication load of the proposed heterogeneous CDC design is $L_{\rm c}(r)=\frac{1}{r-1}L_{\rm u}(r)$, where $\frac{1}{r-1}$ and $L_{\rm u}(r)$ are the global computing gain and the local computing gain, respectively. In comparison, for the homogeneous design in \cite{li2018fundamental}, we have $L_1(r)=\frac{1}{r} (1-\frac{r}{K})$, where the global computing gain is $\frac{1}{r}$ and the local computing gain is $1-\frac{r}{K}$.
Next, we will show that even though the proposed heterogeneous design has an inferior global computing gain compared to that of \cite{li2018fundamental} ($\frac{1}{r-1}$ versus $\frac{1}{r}$), it has a better local computing gain $L_{\rm u}(r)\le 1-\frac{r}{K}$, and hence can have a better communication load $L_{\rm c}(r) < L_1(r)$ under certain parameter regimes. Since $\sum_{p=1}^{P} \frac{r_p}{r} = 1$ and $\frac{m_p}{m_p-1}$ is a convex function of $m_p$ for $m_p > 1$, using (\ref{eq: Lu}) and Jensen's inequality, we can obtain \begin{align} \frac{1}{L_{\rm u}(r)} = \frac{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}{r} &= \sum_{p=1}^{P} \frac{r_p}{r}\cdot \frac{m_p}{m_p-1} \geq \frac{\sum_{p=1}^{P} \frac{r_p m_p}{r}}{\left(\sum_{p=1}^{P} \frac{r_p m_p}{r}\right) - 1} = \frac{\frac{K}{r}}{\frac{K}{r}-1} = \frac{K}{K-r} \label{eq:Lu} \end{align} where $\sum_{p=1}^{P}r_p m_p = \sum_{p=1}^{P}|\mathcal{C}_p| = K$. Note that the inequality in (\ref{eq:Lu}) is strictly ``$>$'' if the network is truly heterogeneous, i.e., not all $\{m_p\}$ are equal. Hence, \begin{equation} L_{\rm u}(r) \leq \frac{K-r}{K} = 1-\frac{r}{K}, \label{eq:upper_bound_local_gain} \end{equation} which shows that the local computing gain for our heterogeneous design is upper bounded by that of the homogeneous design in \cite{li2018fundamental}. Using (\ref{eq: Lc}), we obtain, \begin{equation} L_{\rm c}(r)=\frac{1}{r-1}L_{\rm u}(r) \leq \frac{1}{r-1}\cdot \left( 1-\frac{r}{K} \right). \end{equation} Thus, $L_{\rm c}(r)$ can be less than $L_1(r)$ for certain choices of $r$ and $K$. For example, given a heterogeneous network defined by $m_1 = 2$, $r_1 = 4$ and $m_2 = 8$, $r_2 = 2$, we have $r=r_1+r_2=6$, $K=r_1m_1+r_2m_2=24$. We compare it with a homogeneous network with $r=6$ and $K=24$.
The proposed heterogeneous design has a local computing gain of $L_{\rm u}(r) = \frac{7}{12} \approx 0.583$, which is less than that of the homogeneous design $1-\frac{r}{K} = \frac{3}{4} = 0.75$, and a communication load of $L_{\rm c} = \frac{7}{60}\approx 0.117$, which is lower than that of the homogeneous design $L_1(r) = \frac{1}{8} = 0.125$. \begin{remark} In \cite{li2018fundamental}, $L_1(r)$ was proved to be a lower bound on the communication load given $r$ and $K$. However, the proof uses the implicit assumption that every node is assigned the same number of reduce functions. Our new finding is that if the reduce functions can be assigned in a heterogeneous fashion, then the communication load lower bound of \cite{li2018fundamental} does not apply. \end{remark} In Fig.~\ref{fig: het sim_seq1_fig}, we provide additional comparisons of the communication load of the hypercuboid design and the homogeneous scheme of \cite{li2018fundamental} with an equivalent computation load, $r$. Each design has a fixed number of nodes $K=20$. The heterogeneous network is defined with $P=2$ sets of nodes that map a different number of files and are assigned a different number of reduce functions. Specifically, there are $|\mathcal{C}_1|=2(r-1)$ powerful nodes and $|\mathcal{C}_2|=K-2(r-1)$ weaker nodes where $r_1=r-1$, $m_1=2$, $r_2=1$ and $m_2=K-2(r-1)$. In other words, the nodes of $\mathcal{C}_1$ each map $\frac{1}{2}$ of the files and the nodes of $\mathcal{C}_2$ each map a $\frac{1}{K-2(r-1)}$ fraction of the files, which can be much less than $\frac{1}{2}$. Fig.~\ref{fig: het sim_seq1_fig} shows that the communication load of the hypercuboid design is less than that of the state-of-the-art homogeneous design of \cite{li2018fundamental} for $4\leq r \leq 7$.
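The worked example above can be reproduced exactly from the load expressions (\ref{eq: Lu}) and (\ref{eq: Lc}) (a sketch using our own helper names; rational arithmetic keeps the fractions exact):

```python
from fractions import Fraction

def L_u(ms, rs):
    # Uncoded load from (eq. Lu): r / sum_p [ r_p * m_p / (m_p - 1) ].
    r = sum(rs)
    return Fraction(r) / sum(Fraction(rp * mp, mp - 1) for mp, rp in zip(ms, rs))

def L_c(ms, rs):
    # Coded load from (eq. Lc): L_u / (r - 1).
    return L_u(ms, rs) / (sum(rs) - 1)

ms, rs = (2, 8), (4, 2)                       # m_1=2, r_1=4 and m_2=8, r_2=2
r = sum(rs)                                   # computation load r = 6
K = sum(m * rp for m, rp in zip(ms, rs))      # K = 24 nodes
L1 = Fraction(1, r) * (1 - Fraction(r, K))    # homogeneous benchmark = 1/8

assert L_u(ms, rs) == Fraction(7, 12)         # local computing gain ~ 0.583
assert L_c(ms, rs) == Fraction(7, 60)         # communication load ~ 0.117
assert L_c(ms, rs) < L1                       # beats the homogeneous load 0.125
```

The same two helpers can be used to sweep the parameter regimes plotted in Fig.~\ref{fig: het sim_seq1_fig}.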
{\bf Comparisons for large networks.} Next, we provide comparisons of the communication load of the proposed heterogeneous scheme and the homogeneous scheme \cite{li2018fundamental} for networks with a large number of computing nodes $K$. We consider two cases. Case 1. For the heterogeneous network, assume that $r_1,\ldots , r_P$ and $r$ are fixed, but the fractions of files each node has access to, $\frac{1}{m_1}, \cdots, \frac{1}{m_P}$, decrease as $K$ becomes large. Then, we have \begin{equation} \lim_{K\rightarrow \infty} L_{\rm u}(r)=1 \quad \text{and} \quad \lim_{K\rightarrow \infty} \frac{L_{\rm c}(r)}{L_1(r)} = \frac{r}{r-1}. \end{equation} In other words, $\frac{L_{\rm c}(r)}{L_1(r)} = \Theta (1)$. \begin{figure} \vspace{-5cm} \centering \includegraphics[width=12cm]{cdc_fig5.pdf} \vspace{-5cm} \caption{\small A comparison of the proposed hypercuboid CDC design to the state-of-the-art CDC design of \cite{li2018fundamental} with $K=20$ nodes and an equivalent computation load, $r$. The heterogeneous hypercuboid is designed with parameters $r_1=r-1$, $m_1=2$, $r_2=1$ and $m_2=K-2(r-1)$. The hypercuboid design has a lower communication load than that of the homogeneous design for $4\leq r \leq 7$.} \label{fig: het sim_seq1_fig} \vspace{-0.5cm} \end{figure} Case 2. For the heterogeneous network, assume that $\frac{r_1}{K}=\beta_1,\ldots , \frac{r_P}{K}=\beta_P$ and $\frac{r}{K}=\beta$ are kept constant as $K$ gets large. The fractions of files available to each node, $\frac{1}{m_1}, \cdots, \frac{1}{m_P}$, are also kept constant. It then follows from (\ref{eq:Lu}) that when the network is truly heterogeneous (not all $\{m_p\}$ are equal), we have \begin{equation} \lim_{K\rightarrow \infty} \frac{L_{\rm c}(r)}{L_1(r)} = \lim_{K\rightarrow \infty} \frac{r}{r-1} \cdot \frac{L_{\rm u}(r)}{1-\frac{r}{K}} =\frac{1}{1-\beta} \Bigg(\frac{1}{ \sum_{p=1}^P \frac{\beta_p}{\beta} \frac{m_p}{m_p-1} }\Bigg) < \frac{1}{1-\beta} (1-\beta)= 1.
\end{equation} This means that for the large networks considered here, the communication load of the proposed heterogeneous scheme is {\it strictly less} than that of the homogeneous scheme. Hence, for some heterogeneous file and computation load assignments, the fundamental trade-off proposed in \cite{li2018fundamental} is ``breakable''. As we discussed before, in the extreme case, where there exists a ``super node'' that can store all the files and compute all functions, the communication load is straightforwardly $0$. However, for given heterogeneous storage capacities and computation loads, it is non-trivial to design an achievable CDC scheme whose performance is superior to that of homogeneous CDC under the same total storage and computation load constraint. For the hypercuboid design, the required number of files is $N=X=\prod_{p=1}^{P}m_p^{r_p}$ and the required number of reduce functions is $Q~=~ Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1}$ where $Y$ is the LCM of $\{m_1-1,\ldots ,m_P-1 \}$. Unlike in the homogeneous network case, due to the lack of CDC designs for general heterogeneous networks, we cannot compare the proposed scheme to other schemes. Nevertheless, we believe that these numbers can serve as a benchmark for future research on this topic.
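For reference, the benchmark file and function counts just stated can be computed mechanically for any hypercuboid parameter set (a sketch under our own naming; `eta1` and `eta2` play the roles of $\eta_1$ and $\eta_2$):

```python
from fractions import Fraction
from math import lcm, prod

def required_resources(ms, rs, eta1=1, eta2=1):
    # N = eta_1 * prod_p m_p^{r_p}, and
    # Q = eta_2 * Y * sum_p [ r_p * m_p / (m_p - 1) ],
    # where Y = lcm(m_1 - 1, ..., m_P - 1) guarantees Q is an integer.
    N = eta1 * prod(m ** rp for m, rp in zip(ms, rs))
    Y = lcm(*(m - 1 for m in ms))
    Q = eta2 * Y * sum(Fraction(rp * m, m - 1) for m, rp in zip(ms, rs))
    assert Q.denominator == 1  # the LCM clears every denominator
    return N, int(Q)

# The example network used earlier (m_1=2, r_1=4 and m_2=8, r_2=2):
N, Q = required_resources((2, 8), (4, 2))
assert (N, Q) == (1024, 72)   # N = 2^4 * 8^2, Q = 7 * (8 + 16/7)
```

Setting $P=1$ recovers the homogeneous counts, e.g., $m_1=4$, $r_1=3$ gives $N=64$ files and $Q=12$ functions.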
Most importantly, we provided an explicit and systematic heterogeneous CDC design with optimality guarantees under certain network parameters. Surprisingly, we found that the optimal trade-off derived in \cite{li2018fundamental} no longer applies when functions are heterogeneously assigned and, as a result, the communication load of a heterogeneous network can be less than that of an equivalent homogeneous CDC network. As for future research directions, first, it would be interesting to design other achievable schemes with heterogeneous function assignments and a more general communication load bound given a set of storage capacity requirements of computing nodes. Second, it is challenging but important to characterize the information theoretic converse given the storage capacity and the computation load constraints of each node without fixing the file and output function assignments. \appendices \section{Proof of Theorems \ref{theorem: s1_hom} and \ref{theorem: coded}} \label{sec: codedHetPrfs1} \begin{comment} For any $n \in [X]$, and for all $z \in \mathcal{T}_n$ where $z \in \mathcal{K}_p$, we find \begin{align} \big| & \mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}} \big| = \left | \mathcal{W}_z \right | \times \left | \left \{ w_j: w_j \notin \mathcal{M}_z, w_j \in \bigcap\limits_{k \in \mathcal{T}_n\setminus z} \mathcal{M}_k , \right \} \right| \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 \left | \left \{ \mathcal{T}_{n'} : \{\mathcal{T}_n\setminus z \} \subset \mathcal{T}_{n'}, z \notin \mathcal{T}_{n'}, n' \in [X] \right \} \right | \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 \left | \left \{ \mathcal{T}_{n'} : \{\mathcal{T}_n\setminus z \} \cup k = \mathcal{T}_{n'}, k \in \mathcal{K}_p\setminus z, \right \} \right | \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 \left | \left \{ k : k \in \mathcal{K}_p\setminus z, \right \} \right | \\ &= \left | \mathcal{W}_z \right | \cdot \eta_1 (|\mathcal{K}_p|-1) = \frac{\eta_2Y }{m_p - 1} \cdot
\eta_1 (m_p - 1)= \eta_1 \eta_2 Y. \end{align} \end{comment} For any $\alpha \in [X]$, and $z \in \mathcal{T}_\alpha$, where $z \in \mathcal{K}_h \subseteq \mathcal{C}_p$, it follows from Eq. (\ref{eq:define_L_z_alpha}), (\ref{eq:def_L_shuffle_1}), and Remark 3 in Section \ref{sec: genscheme_s1} that \begin{equation} \big| \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}} \big| = \left | \mathcal{W}_z \right | \cdot \eta_1 \big| \mathcal L_{z,\alpha} \big|=\left | \mathcal{W}_z \right | \cdot \eta_1 (|\mathcal{K}_h|-1) = \frac{\eta_2Y }{m_p - 1} \cdot \eta_1 (m_p - 1)= \eta_1 \eta_2 Y . \end{equation} We consider $X$ node groups, each consisting of $r$ nodes, where every node of a group transmits a coded message of size $\big| \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}} \big| / (r-1)$; therefore, the communication load is \begin{align} L_{\rm c} ( m_1 , \ldots & , m_P, r_1 , \ldots , r_P) = \frac{1}{QN}\cdot X \cdot r \cdot \frac{\big| \mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\}} \big|}{r-1} \\ &= \frac{1}{\left(\eta_2 Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1}\right)\eta_1 X}\cdot X \cdot r \cdot \frac{\eta_1 \eta_2 Y}{r-1} = \frac{1}{r-1}\cdot\frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}. \end{align} For the special homogeneous case, where $P=1$ and $\mathcal{C}_1$ is the set of all nodes, we find $r=r_1$, $m_1 = \frac{K}{r}$ and \begin{align} L_{\rm c} &= \frac{1}{r-1}\cdot\frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}} =\frac{1}{r-1}\cdot\frac{r}{\left(\frac{K}{\frac{K}{r}-1}\right)} =\frac{1}{r-1}\cdot\frac{K-r}{K}. \end{align} This completes the proof of Theorems \ref{theorem: s1_hom} and \ref{theorem: coded}. \section{Proof of Theorem \ref{theorem: uncoded}} \label{sec: uncodedHetPrfs1} For all $p \in [P]$, the number of files a node $k~\in~\mathcal{K}_j~\subseteq~\mathcal{C}_p$ has local access to is \begin{align} |\mathcal{M}_k| &= \eta_1 \prod\limits_{i\in[r]\setminus j}|\mathcal{K}_i| = \frac{\eta_1 X}{|\mathcal{K}_j|} = \frac{N}{m_p}.
\end{align} We count the number of IVs that are requested by any node and normalize by $QN$: \begin{align} L_{\rm u} & (m_1 , \ldots , m_P, r_1 , \ldots , r_P) = \frac{1}{QN}\sum_{k\in[K]} | \left \{ v_{i,j} : i \in \mathcal{W}_k , w_j \notin \mathcal{M}_k \right \} | \\ &= \frac{1}{QN}\sum_{k\in[K]} \left | \mathcal{W}_k \right|\times \left( N- \left | \mathcal{M}_k \right| \right) = \frac{1}{QN}\sum_{p\in[P]} \sum_{k\in\mathcal{C}_p} \left | \mathcal{W}_k \right|\times \left( N- \left | \mathcal{M}_k \right| \right) \\ &= \frac{1}{QN}\sum_{p\in[P]} \sum_{k\in\mathcal{C}_p} \frac{\eta_2Y }{m_p - 1} \cdot \left( N- \frac{N} {m_p} \right) = \frac{1}{Q}\sum_{p\in[P]} r_p m_p \frac{\eta_2Y }{m_p - 1}\left( \frac{m_p-1}{m_p} \right) \\ &= \frac{\eta_2 Y \sum_{p\in[P]} r_p }{\eta_2 Y \sum_{p=1}^{P}\frac{r_p m_p}{m_p - 1} } = \frac{r}{\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}} \end{align} where $|\mathcal{C}_p| = r_p m_p$ for all $p \in [P]$. This completes the proof of Theorem \ref{theorem: uncoded}. \section{Correctness of Heterogeneous CDC Scheme} \label{sec_APP:correctness} This proof consists of four parts: 1) nodes only compute IVs from locally available files, 2) nodes only transmit locally computed IVs, 3) nodes can decode transmissions with requested IVs and 4) after the Map and Shuffle phases, nodes have all necessary IVs to compute their reduce functions. For 1), any node $k\in [K]$ computes intermediate values of the set \begin{equation} \label{eq: pf4} \{ v_{i,j} : i \in [Q] , w_j \in \mathcal{M}_k \}. \end{equation} In all cases, $w_j \in \mathcal{M}_k$ for any $v_{i,j}$ computed by node $k$; therefore, nodes only compute IVs from locally available files. \begin{comment} We demonstrate 2) and 3) simultaneously by observing a node group, $\mathcal{T}_n$ for any $n \in [X]$, in the Shuffle phase. A node $k \in \mathcal{T}_n$ has computed the intermediate values of $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ for all $z\in \mathcal{T}_n\setminus k$.
This is true because \begin{equation} \bigcap \limits_{a\in\mathcal{T}_n\setminus z}\mathcal{M}_a \subseteq \mathcal{M}_k \end{equation} where $k \in \mathcal{T}_n \setminus z$ and, therefore, \begin{multline}\label{eq: pf3} \mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}=\left\{ v_{i,j}: i\in \mathcal{W}_z,w_j \notin \mathcal{M}_z , \text{ }w_j\in \bigcap \limits_{a\in\mathcal{T}_n\setminus z}\mathcal{M}_a \right\} \subseteq \{ v_{i,j} : i \in [Q] , w_j \in \mathcal{M}_k \}. \end{multline} The right-hand side of (\ref{eq: pf3}) matches that of (\ref{eq: pf4}) and defines a set of intermediate values that node $k$ computes in the Map phase. Since $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ is computed by node $k$ for all $z\in \mathcal{T}_n\setminus k$, it is clear that node $k$ can transmit coded messages consisting of subsets of $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ for all $z\in \mathcal{T}_n\setminus k$ and this satisfies 2). \end{comment} Next, we prove 2) and 3) simultaneously. Consider any node group $\mathcal{T}_\alpha$ and any node $k \in \mathcal{T}_\alpha$. We need to confirm that node $k$ has access to the multicast messages defined in Eq. (\ref{eq:def_L_shuffle_1}) and (\ref{eq: 1_trans_eq1}). This is true because, as discussed above Eq. (\ref{eq:def_L_shuffle_1}), all nodes in $\mathcal{T}_\alpha \setminus z$, including node $k$, have access to the file set $\mathcal B_\ell$ where $\{\mathcal{T}_\alpha \setminus z\} \subset \mathcal{T}_\ell$. To see 3), when a node $z_0 \in \mathcal T_\alpha$ receives a multicast message from another node $k \in \mathcal T_\alpha$ that takes the form of Eq. (\ref{eq:def_L_shuffle_1}), only one term, $\mathcal{V}_{\mathcal{T}_\alpha\setminus z_0}^{\{z_0\},k}$, is its desired message. The other terms are of the form $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},k}$, intended for node $z$, where $z\in \mathcal T_\alpha$ and $z\ne z_0,k$.
Since node $z_0 \in \mathcal T_\alpha \setminus z$, it has access to $\mathcal{V}_{\mathcal{T}_\alpha\setminus z}^{\{z\},k}$, and thus can decode its desired message correctly. \begin{comment} In Appendix \ref{sec: codedHetPrfs1} we show that $\left |\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}\right | = \eta_1 \eta_2 Y$ for any $z\in [K]$ and $n$ such that $z \in \mathcal{T}_n$. Furthermore, node $k$ receives transmissions with requested intermediate values from all nodes $z\in \mathcal{T}_n\setminus k$ where each transmission consists of $r-1$ coded intermediate value sets which are the same size. Only one of these sets is a subset of $\mathcal{V}_{\mathcal{T}_n \setminus k}^{\{ k\}}$ and the rest are subsets of $\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}}$ for some $z\in \mathcal{T}_n\setminus k$. Therefore, node $k$ can recover its requested intermediate value set since the other intermediate value sets are locally computed and this satisfies 3). \end{comment} To prove 4), we need to show that for a given $z\in \mathcal K_h$, if some file $w_j \notin \mathcal M_z$, then node $z$ must be able to recover its desired IVs $\{v_{i,j}: i\in \mathcal W_z\}$ from multicast messages of the form of Eq. (\ref{eq:def_L_shuffle_1}) and (\ref{eq: 1_trans_eq1}). To see this, assume that $w_j \in \mathcal B_{\ell_0}$. Consider node group $\mathcal T_{\ell_0}$. Since $w_j \in \mathcal B_{\ell_0}$ and $w_j \notin \mathcal M_z$, we must have $z \notin \mathcal T_{\ell_0}$. In other words, $\mathcal T_{\ell_0,h} \ne z$. Now, consider another node group $\mathcal T_\alpha$ with $z \in \mathcal T_\alpha$ such that $\mathcal T_\alpha$ and $\mathcal T_{\ell_0}$ differ only in the $h$-th element: $\mathcal T_{\alpha,h}=z$ and $\mathcal T_{\alpha,i}=\mathcal T_{\ell_0,i}$ for any $i \ne h$. As defined in Eq.
(\ref{eq:def_L_shuffle_1}), since $\ell_0 \in \mathcal L_{z,\alpha}$ and $w_j \in \mathcal B_{\ell_0}$, node $z$ will be able to receive its desired IVs $\{v_{i,j}: i\in \mathcal W_z\}$ from the multicast messages of node group $\mathcal T_\alpha$ according to Eq. (\ref{eq:def_L_shuffle_1}) and (\ref{eq: 1_trans_eq1}). \begin{comment} To demonstrate 4), we start by recognizing that any node $z$ receives all intermediate values included in \begin{align}\label{eq: pf0} \bigcup\limits_{z \in \mathcal{T}_n}\mathcal{V}_{\mathcal{T}_n\setminus z}^{\{z\}} =& \bigcup\limits_{z \in \mathcal{T}_n}\Bigg\{ v_{i,j}: i\in \mathcal{W}_z,w_j \notin \mathcal{M}_z , w_j\in \bigcap \limits_{k\in\mathcal{T}_n\setminus z}\mathcal{M}_k \Bigg\} \nonumber\\ =&\Bigg\{ v_{i,j}: i\in \mathcal{W}_z,w_j \notin \mathcal{M}_z , w_j\in \bigcup\limits_{z \in \mathcal{T}_n} \Bigg\{\bigcap \limits_{k\in\mathcal{T}_n\setminus z}\mathcal{M}_k \Bigg\} \Bigg\} \end{align} where the last line holds because there is only one condition that is dependent on $n$. Moreover, this set includes intermediate values such that node $z$ needs it to compute one of its Reduce functions, node $z$ cannot compute it itself and it is computed from a file in the set \begin{equation}\ \label{eq: pf1} \bigcup\limits_{z \in \mathcal{T}_n}\Bigg\{\bigcap \limits_{k\in\mathcal{T}_n\setminus z}\mathcal{M}_k \Bigg\}. \end{equation} If a set of nodes have a set of files in common then clearly the nodes are a subset of some set, $\mathcal{T}_\ell$, and, therefore, the set of files in (\ref{eq: pf1}) can be defined as the following \begin{equation} \label{eq: pf2} \left \{ \mathcal{B}_\ell : \left \{ \mathcal{T}_n \setminus z \right \} \subset \mathcal{T}_\ell , z \in \mathcal{T}_n \right \}. \end{equation} Furthermore, for any $\mathcal{T}_\ell$ there exists a $\mathcal{T}_n$ such that $\left\{ \mathcal{T}_n \setminus z \right \} \subset \mathcal{T}_\ell $ and $z \in \mathcal{T}_n$.
We can see this is true because given a $\mathcal{T}_\ell$ we can set $\mathcal{T}_n = \left\{ \mathcal{T}_\ell \setminus \sigma \right \} \cup z$ where $\{ \sigma , z\} \in \mathcal{K}_p$. As a result, the sets of files in (\ref{eq: pf1}) and (\ref{eq: pf2}) include every file. Therefore, the set defined in (\ref{eq: pf0}), which includes all intermediate values node $z$ receives in the Shuffle phase, is equivalent to \begin{equation} \{ v_{i,j} : i\in \mathcal{W}_z, w_j \notin \mathcal{M}_z\}. \end{equation} As any node $z$ computes intermediate values, $v_{i,j}$, such that $i \in \mathcal{W}_z$ and $w_j \in \mathcal{M}_z$ in the Map phase, it is clear that after the Map and Shuffle phases node $z$ will have access to the intermediate values necessary to compute its Reduce functions. \end{comment} \section{Proof of Theorem \ref{theorem: bound}} \label{appendix: bound} In this proof, we use the following notation: $\mathcal{K}$ is the set of all nodes, $X_{\mathcal{K}}$ represents the collection of all transmissions by all nodes in $\mathcal{K}$, $\mathcal{W}_{\mathcal{S}}$ is the set of functions assigned to at least one node of $\mathcal{S}$, $\mathcal{M}_{\mathcal{S}}$ is the set of files locally available to at least one node in $\mathcal{S}$, and $V_{\mathcal{W}_{\mathcal{S}_1},\mathcal{M}_{\mathcal{S}_2}}$ is the set of IVs needed to compute the functions of $\mathcal{W}_{\mathcal{S}_1}$ and computed from the files of $\mathcal{M}_{\mathcal{S}_2}$. Finally, we define the following \begin{equation} Y_\mathcal{S} \triangleq \left(V_{\mathcal{W}_\mathcal{S},:},V_{:,\mathcal{M}_\mathcal{S}}\right) \end{equation} where ``$:$'' is used to denote all possible indices.
Given all the transmissions from all nodes, $X_\mathcal{K}$, and IVs which can be locally computed by a node $k$, $V_{:,\mathcal{M}_k}$, node $k$ needs to have access to all IVs necessary for its assigned functions, $V_{\mathcal{W}_k,:}$, therefore \begin{equation} H(V_{\mathcal{W}_k,:} | X_\mathcal{K}, V_{:,\mathcal{M}_k}) = 0. \end{equation} Given this assumption, we find \begin{align} H(X_\mathcal{K}) &\geq H(X_\mathcal{K}|V_{:,M_{k_1}}) = H(X_\mathcal{K},V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) - H(V_{\mathcal{W}_{k_1},:} | X_\mathcal{K}, V_{:,\mathcal{M}_{k_1}}) \nonumber\\ &= H(X_\mathcal{K},V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) \nonumber\\ &= H(V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) + H(X_\mathcal{K}|V_{\mathcal{W}_{k_1},:},V_{:,M_{k_1}})\nonumber\\ &=H(V_{\mathcal{W}_{k_1},:}|V_{:,M_{k_1}}) + H(X_\mathcal{K}|Y_{k_1}). \label{eq: claim 1 2} \end{align} Similarly, \begin{align} H&(X_\mathcal{K}|Y_{\{k_1,\ldots k_{i-1}\}}) \geq H(X_\mathcal{K}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) \nonumber\\ &= H(X_\mathcal{K},V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) - H(V_{\mathcal{W}_{k_i},:} | X_\mathcal{K}, V_{:,\mathcal{M}_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) \nonumber \end{align} \begin{align} &= H(X_\mathcal{K},V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) \nonumber\\ &= H(V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}}) + H(X_\mathcal{K}|V_{\mathcal{W}_{k_i},:},V_{:,M_{k_i}},Y_{\{k_1,\ldots k_{i-1}\}})\nonumber\\ &= H(V_{\mathcal{W}_{k_i},:}|V_{:,M_{k_i}},Y_{k_1,\ldots k_{i-1}}) + H(X_\mathcal{K}|Y_{\{k_1,\ldots k_i\}}). \label{eq: claim 1 3} \end{align} Also, since nodes can only transmit IVs from locally available files, we see that $H(X_\mathcal{K}|Y_{\{k_1,\ldots k_K\}}) = 0$. 
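Before combining these relations, it may help to see the counting behind them. Under the assumption that the IVs are independent with $T$ bits each and that every Reduce function is assigned to exactly one node, each conditional entropy term in the chain reduces to a simple count of the IVs that the current node still needs. The following Python sketch evaluates these counts along a node permutation; the placement, function assignment and sizes are a toy, hypothetical example, not taken from the paper:

```python
# Numeric illustration of the entropy-counting terms in the cut-set bound.
# Assumptions (illustrative only): IVs are independent with T bits each,
# and every Reduce function is assigned to exactly one node, so each
# conditional entropy term reduces to a simple count.

T = 1  # bits per intermediate value (illustrative)

# Toy heterogeneous placement: N = 6 files, one Reduce function per node.
files = set(range(6))
M = {1: {0, 1, 2, 3}, 2: {2, 3, 4, 5}, 3: {0, 1, 4, 5}}  # file placement
W = {1: {'f1'}, 2: {'f2'}, 3: {'f3'}}                     # function assignment

def lower_bound_terms(order):
    """Return T*|W_ki|*(N - |union of M_kj for j <= i|) for each step i."""
    seen = set()
    terms = []
    for k in order:
        seen |= M[k]
        terms.append(T * len(W[k]) * (len(files) - len(seen)))
    return terms

terms = lower_bound_terms([1, 2, 3])
print(terms, sum(terms))  # per-node residual IV counts and the resulting bound
```

Summing these terms over the permutation gives a lower bound on $H(X_\mathcal{K})$, exactly as in the telescoped chain above.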
By starting with (\ref{eq: claim 1 2}) and iteratively using the relationship of (\ref{eq: claim 1 3}) to account for all $k_i \in \mathcal{K}$, we obtain \begin{equation} \label{eq: th 4 ent sum} H(X_\mathcal{K}) \geq \sum_{i=1}^{K} H\left(V_{\mathcal{W}_{k_i},:}|V_{:,\mathcal{M}_{k_i}},Y_{\{k_1,\ldots, k_{i-1} \}}\right). \end{equation} Moreover, since $H(X_\mathcal{K}) = LTQN$, from (\ref{eq: th 4 ent sum}) we obtain the lower bound on the optimal communication load, $L^*$, of (\ref{eq: bound_eq1}), proving Theorem \ref{theorem: bound}. \section{Proof of Theorem \ref{thm: optimality}} \label{sec: opt_pf} We define a permutation of the $K$ nodes, $(k_1, \ldots , k_K)$, such that $\{ k_1 , \ldots , k_{m_p} \} = \mathcal{K}_i \subseteq \mathcal{C}_p$ for some $i\in[r]$ and $p\in[P]$ as defined in Section~\ref{sec: gen_het s1}. For $1 \leq j \leq m_p$, given all IVs collectively computed by nodes $k_1 , \ldots , k_j$ and all IVs needed by nodes $k_1 , \ldots , k_{j-1}$ to compute their respective Reduce functions, the entropy of the requested IVs of node $k_j$ is \begin{align} H &\left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{k_j} } , Y_{ \{ k_1,\ldots k_{j-1} \} }\right) = H \left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{\{ k_1 , \ldots , k_{j} \}} } \right) = T |\mathcal{W}_{k_j}|\left( N - \bigg|\bigcup\limits_{j' \in [j]} \mathcal{M}_{k_{j'}}\bigg| \right) \nonumber \\ &= T\cdot\frac{\eta_2Y}{m_p-1}\left( N - \sum_{j' \in [j]}|\mathcal{M}_{k_{j'}}| \right) =\frac{T\eta_2Y}{m_p-1}\left( N - \frac{jN}{m_p} \right) = \frac{T\eta_2YN}{(m_p-1)m_p}\left( m_p - j \right). \end{align} Furthermore, since the nodes $k_1 , \ldots , k_{m_p}$ collectively have access to all the $N$ files and compute all $QN$ intermediate values, we see that for $m_p \leq j \leq K$ \begin{equation} H \left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{k_j} } , Y_{ \{ k_1,\ldots k_{j-1} \} }\right) = 0.
\end{equation} By using the bound of Theorem \ref{theorem: bound}, \begin{align} L^* &\geq \frac{1}{QNT}\sum_{j=1}^{m_p-1} H \left( \mathcal{V}_{ \mathcal{W}_{k_j} , :} | \mathcal{V}_{ : , \mathcal{M}_{k_j} } , Y_{ \{ k_1,\ldots k_{j-1} \} }\right) =\frac{1}{Q}\sum_{j=1}^{m_p-1}\frac{\eta_2Y}{(m_p-1)m_p}\left( m_p - j \right) \nonumber\\ &= \frac{\eta_2Y}{Q(m_p-1)m_p}\sum_{j=1}^{m_p-1} j = \frac{\eta_2Y}{Q(m_p-1)m_p}\cdot \frac{m_p(m_p-1)}{2} = \frac{\eta_2Y}{2Q}= \frac{1}{2\sum_{p=1}^{P}\frac{r_p m_p}{m_p-1}}. \end{align} Finally, we see that \begin{equation} \frac{L_{\rm c}}{L^*} \le \frac{2r}{r-1} \leq 4 \end{equation} for $r\geq 2$. This completes the proof of Theorem \ref{thm: optimality}. \bibliographystyle{IEEEbib}
\subsection*{Introduction} In Newtonian physics, two point-like particles can be in equilibrium if the product of their masses is equal to the product of their charges (we use the units for which $G=c=1$). In General Relativity, until now the equilibrium condition for two particle-like sources, imposed on their physical masses, charges and separating distance, was not known in an explicit and reasonably simple analytical form that would admit a rigorous analysis without a need for numerical experiments. The only exceptional case was the Majumdar-Papapetrou solution \cite{Majumdar:1947, Papapetrou:1947}, for which the charge of each source is equal to its mass. In this case, the equilibrium is independent of the distance between the sources. For each of the static sources of this sort, its outer and inner Reissner-Nordstr\"om horizons coincide, and such sources are called extreme. Accordingly, the sources with two separated horizons are called under-extreme and the sources without horizons -- super-extreme. The problem, which had been under investigation by many researchers and which we solve in the present paper, consists in the search for equilibrium configurations of non-extreme sources. Since the advent of solution generating techniques for stationary axisymmetric Einstein-Maxwell fields, the construction of an exact solution for two charged masses at rest does not represent any principal difficulty. However, in general the asymptotically flat solutions of this kind contain conical singularities on the symmetry axis between the sources, which can be interpreted as the presence of extraneous struts preventing the sources from falling onto or running away from each other. The equilibrium condition just implies the absence of such struts. Naturally, if the metric is known, so is the equilibrium condition.
In the static case, the latter means that the product of the metric coefficients $g_{tt}$ and $g_{\rho\rho}$ (in cylindrical Weyl coordinates) should be equal to unity at the axis where $\rho=0$. However, this equilibrium equation in such general form is usually expressed through a set of formal parameters and is so complicated that its analytical investigation appears to be very difficult. Therefore, it is desirable to have this equation expressed in terms of physical parameters and in a simple enough form, making it accessible for an analytical examination of the possibility of realization of equilibrium. Moreover, this realization should be compatible with the condition of a positive value of the distance between the sources. This task had not been accomplished before, and up to now only some results achieved by numerical calculations were known. The first studies of the equilibrium of non-extreme sources \cite{Barker-O'Connell:1977} - \cite{Azuma-Koikawa:1994} led to contradictory conclusions. The authors of the indicated papers used both exact techniques and pN and ppN approximations. The common opinion expressed in \cite{Barker-O'Connell:1977, Kimura-Ohta:1977} and \cite{Ohta-Kimura:1982} - \cite{Azuma-Koikawa:1994} was that the equilibrium of non-extreme sources is impossible. Nevertheless, in \cite{Tomimatzu:1984} one can find a remark that the analysis performed was insufficient and that the existence of equilibrium configurations for non-extreme objects cannot be excluded. The arguments in favour of such a possibility can also be found in \cite{Bonnor:1981}. The next step, which attracted attention to the problem again, was made by Bonnor in \cite{Bonnor:1993}, where the equilibrium condition for a charged test particle in the Reissner-Nordstr\"om field was analyzed. The examination made there also suggested some plausible assumptions for the exact solutions.
As has been indicated in \cite{Bonnor:1993}, a charged test body can be at rest in the field of the Reissner-Nordstr\"om source only if they both are either extreme (for the test particle the degree of its extremality is defined just by the ratio between its charge and mass), balanced irrespective of distance, or one of them is super-extreme and the other is under-extreme, in which case the equilibrium depends on the distance. There is no way for equilibrium in cases when both sources are either super-extreme or under-extreme. It is worth mentioning that in the very recent papers \cite{Bini-Geralico-Ruffini:2007} a new perturbative solution describing an equilibrium state of a two-body system consisting of a Reissner-Nordstr\"om black hole and a super-extreme test particle has been presented. The whole set of combined Einstein-Maxwell equations has been solved there by using the first-order perturbation approach developed in \cite{Zerilli:1974}, based on the tensor harmonic expansion of both the gravitational and electromagnetic fields and adopting the Regge-Wheeler \cite{Regge-Wheeler:1957} gauge. (The basic equations for combined gravitational and electromagnetic perturbations of the Reissner-Nordstr\"om background were found in decoupled form in other gauges in \cite{Sibgatullin-Alekseev:1974} and, in a decoupled Hamiltonian form, in \cite{Moncrief:1974}.) Both the electromagnetically induced gravitational perturbations and the gravitationally induced electromagnetic perturbations \cite{Johnston-Ruffini-Zerilli:1973}, due to the mass as well as the charge of the particle, have thus been taken into account. The expressions in closed form for both the perturbed metric and the electromagnetic field have been given explicitly in \cite{Bini-Geralico-Ruffini:2007}. It is interesting that the equilibrium equation (which arises in this case as a self-consistency condition for the set of differential equations for perturbations) remains the same as that of Bonnor \cite{Bonnor:1993}.
Bonnor's analysis allows one to expect that qualitatively the same can happen also for two Reissner-Nordstr\"om sources. For two extreme sources this is indeed the case, because it is known that such a generalization exists and leads to the Majumdar-Papapetrou solution. Up to 1997 it remained unknown whether the analogous generalization for non-extreme bodies can be found. The first solid arguments in favour of the existence of a static equilibrium configuration for the ``black hole - naked singularity'' system were presented in \cite{Perry-Cooperstock:1997}. These results have been obtained by means of numerical calculations, and three examples of numerical solutions of the equilibrium equation have been demonstrated. These solutions can correspond to equilibrium configurations free of struts. For a complete proof it would be necessary to show that such configurations indeed consist of two sources separated by a physically sensible distance. However, in \cite{Perry-Cooperstock:1997} it was pointed out that the distance dependence of the equilibrium state is unknown. The authors of \cite{Perry-Cooperstock:1997} also reported that a number of numerical experiments for two black holes and for two naked singularities showed negative outcomes, i.e., none of the tested sets of parameters was able to satisfy the equilibrium equation. These findings are in agreement with Bonnor's test particle analysis. One year later a similar numerical analysis was made in \cite{Breton-Manko-Sanches:1998}. In this paper, we present an exact solution of the Einstein-Maxwell equations which describes the field of two Reissner-Nordstr\"om sources in static equilibrium, as well as the equilibrium condition itself, which turns out to have an unexpectedly simple form expressed in terms of the physical parameters of the sources. This simplicity permits us to prove the validity of the conjectures of the papers \cite{Bonnor:1993} and \cite{Perry-Cooperstock:1997} on an exact analytical level.
It also allows a direct analytical investigation of the physical properties of the equilibrium state of two non-extreme sources. We precede a description of our results with a few words on the methodology of the derivation of our solution. Applying to the derivation of this solution the Inverse Scattering Method for electro-vacuum, developed in \cite{Alekseev:1980, Alekseev:1988} and described in detail in the book \cite{Belinski-Verdaguer:2001}, leads to a not most convenient parametrization of the solution, which gives rise to some subsequent technical difficulties (although there are no principal obstacles to using this approach). Instead, we used the Integral Equation Method \cite{Alekseev:1985, Alekseev:1988}, which opens a shorter way to the desired results. The first step was to construct the solution for the two-pole structure of the monodromy data on the spectral plane with a special choice of parameters providing the asymptotic flatness and the static character of the solution. This corresponds also to the two-pole structure of the Ernst potentials (as functions of the Weyl cylindrical coordinate $z$) on the symmetry axis. Then the expressions for the physical masses and physical charges of both sources were found with the help of the Gauss theorem, and the notion of distance between these sources was also defined. We stress here that the physical character of the masses and charges of the sources follows not only from their definition using the Gauss theorem, but also from the analysis of the limiting case in which one of the sources is a test particle (see the formulae (12), (13) below and the text after them). After that we derived the equilibrium equation in terms of these five physical parameters. A miracle arises if one substitutes this equilibrium equation back into the solution: this results in an impressive simplification of all formulas.
Below we present the final outcome, which is ready for use in practical applications without the need to know any details of its derivation. It is worthwhile to mention that the correctness of our solution has also been confirmed by its direct substitution into the Einstein-Maxwell field equations. \section*{The solution} For our static solution in cylindrical Weyl coordinates, the metric and the electromagnetic vector potential take the forms \begin{eqnarray} ds^2 &=& H dt^2-f(d\rho^2+dz^2)-\dfrac{\rho^2}{H} d\varphi^2,\label{Metric} \\ A_t &=& \Phi,\qquad A_\rho=A_z= A_\varphi=0, \label{Potential}\end{eqnarray} where $H$, $f$ and $\Phi$ are real functions of the coordinates $\rho$ and $z$. These functions take their simplest form in bipolar coordinates, which consist of two pairs of spheroidal variables $(r_1,\theta_1)$, $(r_2,\theta_2)$ defined by their relations to the Weyl coordinates: \begin{equation} \begin{array}{l} \left\{\begin{array}{lccl} \rho=\sqrt{(r_1-m_1)^2-\sigma_1^2}\sin\theta_1,\\[1ex] z=z_1+(r_1-m_1)\cos\theta_1, \end{array}\right.\\[4ex] \left\{\begin{array}{lccl} \rho= \sqrt{(r_2-m_2)^2-\sigma_2^2}\sin\theta_2,\\[1ex] z=z_2+(r_2-m_2)\cos\theta_2. \end{array}\right. \end{array} \end{equation} Here and below, the indices ${}_1$ and ${}_2$ denote the coordinates and parameters related to the Reissner-Nordstr\"om sources located on the symmetry axis at the points $z=z_1$ and $z=z_2$, respectively. A positive constant $\ell$ defined as \begin{equation}\label{distance} \ell=z_2-z_1 \end{equation} characterizes the $z$-distance separating these sources (for definiteness we take $z_2 > z_1$). The constants $m_1$ and $m_2$ are the physical masses of the sources.
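As a quick consistency check of the coordinate relations above, the following Python sketch maps a spheroidal point $(r_k,\theta_k)$ of one source to Weyl coordinates and verifies the confocal-ellipse identity implied by them; all numerical values are illustrative only:

```python
import math

def to_weyl(r, theta, m, sigma2, z_c):
    """Map spheroidal (r, theta) of one source to Weyl (rho, z).

    sigma2 is sigma^2, which may be negative (naked singularity)."""
    rho = math.sqrt((r - m) ** 2 - sigma2) * math.sin(theta)
    z = z_c + (r - m) * math.cos(theta)
    return rho, z

# Illustrative values: a source of mass m = 1 with sigma^2 = 0.25 at z_c = 0.
m, sigma2, z_c = 1.0, 0.25, 0.0
r, theta = 3.0, 0.7
rho, z = to_weyl(r, theta, m, sigma2, z_c)

# sin^2 + cos^2 = 1 translates into the confocal-ellipse identity:
ident = rho**2 / ((r - m)**2 - sigma2) + (z - z_c)**2 / (r - m)**2
print(ident)  # equals 1 up to rounding
```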
Each of the parameters $\sigma_k$ ($k=1,2$) can be either real or pure imaginary, and this property determines whether the corresponding Reissner-Nordstr\"om source is a black hole or a naked singularity: a real value of $\sigma_k$ means that the source is a black hole whose horizon in Weyl coordinates is $\{\rho=0,\,z_k-\sigma_k\le z\le z_k+\sigma_k\}$, while an imaginary $\sigma_k$ corresponds to a naked singularity whose critical spheroid $r_k=m_k$ is $\{0\le \rho\le\vert\sigma_k\vert,\, z=z_k\}$. Thus, the coordinate distance between two black holes (both $\sigma_1$ and $\sigma_2$ real and positive) we define as the distance along the $z$-axis between the nearest points of its intersections with the two horizons, and this distance is $\ell-\sigma_1-\sigma_2$. The distance between the black hole located at the point $z=z_2$ and the naked singularity at the point $z=z_1$ ($\sigma_2$ real and positive but $\sigma_1$ pure imaginary) we define as the distance between the nearest points of the intersections of the symmetry axis with the black hole horizon and the critical spheroid, and this distance is $\ell-\sigma_2$. The distance between two naked singularities (both $\sigma_1$ and $\sigma_2$ pure imaginary) is simply $\ell$: it is the length of the segment between the nearest points of the intersections of the two critical spheroids with the $z$-axis.
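The three case distinctions above can be captured in a few lines of code. The helper below (a hypothetical utility, not from the paper) takes $\ell$ and the two values of $\sigma_k^2$, and subtracts a horizon half-length only when the corresponding $\sigma_k^2$ is positive, i.e., when that source is a black hole:

```python
import math

def separation(ell, sigma1_sq, sigma2_sq):
    """Coordinate separation between two Reissner-Nordstrom sources.

    A real sigma_k (sigma_k^2 > 0) marks a black hole, whose horizon
    half-length sigma_k is subtracted; an imaginary sigma_k
    (sigma_k^2 < 0) marks a naked singularity and subtracts nothing.
    """
    d = ell
    for s2 in (sigma1_sq, sigma2_sq):
        if s2 > 0:
            d -= math.sqrt(s2)
    return d

print(separation(5.0, 1.0, 4.0))    # two black holes: 5 - 1 - 2 = 2.0
print(separation(5.0, -1.0, 4.0))   # BH + naked singularity: 5 - 2 = 3.0
print(separation(5.0, -1.0, -4.0))  # two naked singularities: 5.0
```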
In terms of bipolar coordinates our solution reads: \begin{eqnarray} \label{gtt} H &=&[(r_1-m_1)^2-\sigma_1^2+\gamma^2\sin^2\theta_2]\\ \nonumber &&\times[(r_2-m_2)^2-\sigma_2^2+\gamma^2\sin^2\theta_1] \mathcal{D}^{-2},\\ \label{At}\Phi &=&[(e_1-\gamma)(r_2-m_2)+(e_2+\gamma)(r_1-m_1)\\ \nonumber &&+\gamma(m_1\cos\theta_1+m_2\cos\theta_2)] \mathcal{D}^{-1}, \\ \label{Factorf}f&=&[(r_1-m_1)^2-\sigma_1^2\cos^2\theta_1]^{-1}\\ \nonumber&&\times [(r_2-m_2)^2-\sigma_2^2\cos^2\theta_2]^{-1} \mathcal{D}^2, \end{eqnarray} where \begin{equation}\label{Determinant} \mathcal{D}=r_1 r_2-(e_1-\gamma-\gamma \cos\theta_2) (e_2+\gamma-\gamma \cos\theta_1). \end{equation} In these expressions the quantities $e_1$, $e_2$ represent the physical charges of the sources. The parameter $\gamma$ and the parameters $\sigma_1$, $\sigma_2$ are determined by the relations: \begin{equation}\label{Sigmas} \begin{array}{l} \sigma_1^2=m_1^2-e_1^2+2 e_1\gamma, \quad \sigma_2^2=m_2^2-e_2^2-2 e_2\gamma,\\[1ex] \gamma=(m_2 e_1-m_1 e_2)(\ell+m_1+m_2)^{-1}. \end{array} \end{equation} The formulas (\ref{Metric})-(\ref{Sigmas}) give the exact solution of the Einstein-Maxwell equations if and only if the five parameters $m_1$, $m_2$, $e_1$, $e_2$ and $\ell$ satisfy the following condition \begin{equation}\label{Equilibrium} m_1 m_2=(e_1-\gamma)(e_2+\gamma). \end{equation} The condition (\ref{Equilibrium}) guarantees the equilibrium without any struts on the symmetry axis between the sources. \section*{Properties of the solution} First of all, one can see that the balance equation (\ref{Equilibrium}) does not admit two black holes ($\sigma_1^2>0$, $\sigma_2^2>0$) to be in equilibrium provided that there is some distance between them, that is, if $\ell-\sigma_1-\sigma_2>0$. This is in agreement with the non-existence of static equilibrium configurations of charged black holes, proved under rather general assumptions in \cite{Chrusciel-Tod:2007}.
(To avoid confusion, we mention here that the results of \cite{Chrusciel-Tod:2007} do not apply in the presence of naked singularities.) The equilibrium is also impossible if one of the sources is extreme and the other is non-extreme and a positive distance exists between them, i.e., if $\ell-\sigma_2>0$ for the case $\sigma_1=0$ and $\sigma_2^2>0$ (a negative value of $\sigma_2^2$ is forbidden altogether if $\sigma_1=0$)\,\,\footnote{Non-separated objects, for which the horizons overlap each other or the horizon intersects the critical spheroid, may also be possible, but such cases are not in the scope of this communication.}. The condition (\ref{Equilibrium}) also implies that $\sigma_1^2$ and $\sigma_2^2$ can never both be negative, that is, the equilibrium of two naked singularities is impossible. So, for separated sources an equilibrium may exist either between a black hole and a naked singularity or between two extreme sources. The latter case can be realized only if $\sigma_1=\sigma_2=0$, $\gamma=0$, and it is easy to see that in this case the formulas (\ref{Metric})-(\ref{Sigmas}) reduce to the Majumdar-Papapetrou solution. At spatial infinity the variables $r_1$, $r_2$ coincide and one can choose either of them as the radial coordinate. In this region the fields, as can be seen from (\ref{gtt}) and (\ref{At}), acquire the standard Reissner-Nordstr\"om asymptotic form with the total mass $m_1+m_2$ and the total charge $e_1+e_2$. At the symmetry axis $\cos^2\theta_1=\cos^2\theta_2=1$, and the formulas (\ref{gtt}), (\ref{Factorf}) show that the condition $f H=1$ is satisfied there automatically, i.e., there are no conical singularities. Besides the singularities inherent to the sources themselves, any other kinds of singularities (such as, for example, the off-axis singularities found in the double-Kerr solution in \cite{Bicak-Hoenselaers:1985}) are also absent in our solution.
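To illustrate that the balance condition (\ref{Equilibrium}) indeed selects mixed black-hole/naked-singularity pairs, the following Python sketch fixes $m_1$, $e_1$, $m_2$ and $\ell$ (all values arbitrarily chosen for illustration, not taken from the paper), solves (\ref{Equilibrium}) for $e_2$ by bisection, and then checks the signs of $\sigma_1^2$ and $\sigma_2^2$ from the relations (\ref{Sigmas}):

```python
# Numerical illustration with arbitrary, illustrative parameters:
# fix m1, e1, m2 and the separation ell, then solve the balance condition
# m1*m2 = (e1 - gamma)*(e2 + gamma),  gamma = (m2*e1 - m1*e2)/(ell + m1 + m2)
# for e2 by bisection, and inspect sigma_k^2.
import math

m1, e1 = 1.0, 2.0   # super-extreme source (|e1| > m1)
m2, ell = 1.0, 5.0

def residual(e2):
    gamma = (m2 * e1 - m1 * e2) / (ell + m1 + m2)
    return m1 * m2 - (e1 - gamma) * (e2 + gamma)

# Bisection on a bracket where the residual changes sign.
lo, hi = 0.0, 0.5
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
e2 = 0.5 * (lo + hi)

gamma = (m2 * e1 - m1 * e2) / (ell + m1 + m2)
sigma1_sq = m1**2 - e1**2 + 2 * e1 * gamma   # negative: naked singularity
sigma2_sq = m2**2 - e2**2 - 2 * e2 * gamma   # positive: black hole
print(e2, sigma1_sq, sigma2_sq, ell - math.sqrt(sigma2_sq))
```

For these parameters the solution lands at $e_2\approx 0.33$, giving $\sigma_1^2<0$ (naked singularity) and $\sigma_2^2>0$ (black hole) with a positive separation $\ell-\sigma_2$, in line with the statement above that only mixed pairs (or two extreme sources) can balance.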
The constant $\gamma$ vanishes in the limit $\ell\to\infty$, whence it follows from (\ref{Equilibrium}) that the equilibrium condition asymptotically reduces to the Newtonian form $m_1 m_2=e_1 e_2$ for a large distance between the sources. If one of the sources disappears, e.g. $m_1=e_1=0$, our solution reduces to the exact Reissner-Nordstr\"om solution with the mass $m_2$ and the charge $e_2$ in the standard spherical coordinates $r_2$, $\theta_2$. Let us now turn to the limiting case in which one of the sources can be considered as a test particle. For this we assume that $m_1$ and $e_1$ are infinitesimally small but the ratio $e_1/m_1$ is finite. In this case, in the first non-vanishing order with respect to the constants $m_1$ and $e_1$ the equilibrium condition (\ref{Equilibrium}) gives: \begin{equation}\label{Test} (\ell+m_2)(m_1 m_2-e_1 e_2)=(m_1 e_2-m_2 e_1) e_2. \end{equation} We introduce instead of $m_1$ a new parameter $\mu_1$ defined by the relation: \begin{equation}\label{Defmu} \begin{array}{l} m_1=\mu_1[1-2 m_2(\ell+m_2)^{-1}+e_2^2(\ell+m_2)^{-2}]^{1/2}\\[1ex] \phantom{m_1=}+e_1 e_2(\ell+m_2)^{-1}. \end{array} \end{equation} Now the relation (\ref{Test}) takes the form: \begin{equation}\label{Testequilibrium} \begin{array}{l} m_2-e_2^2(\ell+m_2)^{-1}\\ =e_1 e_2\mu_1^{-1}[1-2 m_2(\ell+m_2)^{-1}+e_2^2(\ell+m_2)^{-2}]^{1/2}. \end{array} \end{equation} This last equation is nothing else but Bonnor's balance condition \cite{Bonnor:1993} for a test particle of rest mass $\mu_1$ and charge $e_1$ in the Reissner-Nordstr\"om field of mass $m_2$ and charge $e_2$. The particle is at rest on the symmetry axis at the point $R=\ell+m_2$, where $R$ is the radius of the standard spherical coordinates of the Reissner-Nordstr\"om solution. If we calculate from (\ref{At}) the potential $\Phi$ in the linear approximation with respect to the small parameters $m_1$ and $e_1$ for the particular case $e_2=0$ (i.e.
for the Schwarzschild background), the result will coincide exactly with the potential which was first found in \cite{Hanni-Ruffini:1971} - \cite{Hanni-Ruffini:1973} in the form of a multipole expansion and then in \cite{Linet:1976} in closed analytical form. The relation (\ref{Defmu}) is important since it clearly exhibits the physical nature of the mass $m_1$ and gives its correct interpretation. This relation shows that the parameters $m_1$, $m_2$ are not the rest masses but represent the total relativistic energy of each source in the external field produced by its partner. Finally, it is worth mentioning that our exact solution remains physically sensible also in the case $e_2=0$. This corresponds to a Schwarzschild black hole of mass $m_2$ hovering freely in the field of a naked singularity of mass $m_1$ and charge $e_1$. Such a configuration exists due to the repulsive nature of gravity in the vicinity of the naked Reissner-Nordstr\"om singularity. \subsection*{Acknowledgements} GAA is thankful to ICRAnet for the financial support and hospitality during his visit to ICRAnet (Pescara, Italy) in May 2006, when this paper was started. The work of GAA was also supported in part by the Russian Foundation for Basic Research (grants 05-01-00219, 05-01-00498, 06-01-92057-CE) and the programs ``Mathematical Methods of Nonlinear Dynamics'' of the Russian Academy of Sciences and ``Leading Scientific Schools'' of the Russian Federation (grant NSh-4710.2006.1). We are especially grateful to R. Price for useful comments which urged us to essentially improve this paper.
\section{Introduction} \label{intro} Deep neural networks (DNNs), especially convolutional neural networks (CNNs) \cite{lenet}, have received tremendous attention due to their ability to surpass human-level accuracy on a wide range of complex tasks such as recognition, classification and detection \cite{deeplearning}. Depending on their size and complexity, these networks achieve different degrees of classification/recognition accuracy. A CNN is a stack of multiple convolutional layers followed by fully-connected layers: the former extract high-level abstractions and features from raw data, whereas the latter are used to learn non-linear combinations of the extracted features. In 2012, a CNN called AlexNet \cite{alexnet} was introduced: it consists of 5 convolutional layers followed by 3 fully-connected layers and achieves a $42.9\%$ misclassification rate (MCR) on the ImageNet dataset. AlexNet contains 2.3M weights and 58.6M weights in its convolutional and fully-connected layers, respectively, performing 1332M operations (i.e., 666M multiply-accumulations) in its convolutional layers and 117.2M operations (i.e., 58.6M multiply-accumulations) in its fully-connected layers. VGGNet-16 \cite{vgg} is another well-known CNN, containing 13 convolutional layers with 14.7M weights and 3 fully-connected layers with 124M weights. VGGNet-16 performs 30.6G operations in its convolutional layers and 248M operations in its fully-connected layers, achieving a $27\%$ MCR on ImageNet. Recently, ResNet-50 \cite{resnet}, containing 49 convolutional layers with 23.5M weights and 1 fully-connected layer with 2M weights, achieved a better MCR (i.e., 22.85\% on ImageNet) by going even deeper. ResNet-50 performs 7G and 4M operations within the two types of layers, respectively.
All these CNNs have won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) \cite{ILSVRC}.\par Although in almost all the aforementioned CNNs the majority of the weights is found in the fully-connected layers, the number of operations is dominated by the convolutions. As a result, the processing time of CNNs is also dominated by the convolutional processes. This issue can easily be addressed by exploiting parallel processing elements (PEs) to increase throughput. However, a straightforward parallelization requires high data movement and bandwidth, leading to high energy consumption \cite{conv_imp2}. It is worth noting that accesses to off-chip memories are more expensive than on-chip storage, as shown in \cite{energy}.\par Pruning techniques were first introduced in \cite{pruning, pruning1} to reduce the number of parameters and memory accesses to off-chip memory. In \cite{pruning}, CPU/GPU implementations were considered, showing that a 3$\times$ to 4$\times$ layer-wise speedup can be obtained for fully-connected layers without any practical speedup for convolutional layers. To accelerate convolutional processes on GPUs and CPUs, a new method was also introduced in \cite{ssl}, achieving up to 5.1$\times$ speedup. The work presented in \cite{isca} introduces a fully-connected accelerator, called the efficient inference engine (EIE), for the pruning technique introduced in \cite{pruning, pruning1}. EIE obtains a 13$\times$ to 307$\times$ speedup and saves 2700$\times$ to 24000$\times$ energy compared to CPUs or GPUs for fully-connected computations. Recently, a new pruning technique and its custom hardware were introduced in \cite{sfc}, using low-cost linear-feedback shift registers (LFSRs) to prune the connectivity of fully-connected layers. This technique also saves up to 90\% energy compared to conventional implementations of fully-connected layers.
However, as discussed earlier, convolutional processes are the bottleneck of the processing time of CNNs.\par During the past few years, many convolutional accelerators with different dataflows have been introduced in the literature \cite{eyeriss1,moons_new,DaDianNao,origami,1dconv,DNPU}. While these ASIC architectures can successfully reduce the energy consumption of convolutional processes and meet the latency constraints of small CNNs such as AlexNet, they fail to exploit the full potential of their architectures, resulting in a low performance efficiency. In fact, there is a huge gap between their peak performance and their average runtime performance. For instance, in \cite{eyeriss1} the architecture known as Eyeriss achieves a peak performance of 84 Gops, where each MAC is counted as two operations. However, its performance efficiency is limited to 55\% and 26\% when performing the convolutional computations of AlexNet and VGGNet-16, respectively.\par To improve the performance efficiency and to accelerate the convolutional processes of VGG and VGG-like networks, a dataflow, called the fully-connected inspired dataflow (FID), and the architecture implementing it were introduced in \cite{TCAS}. This architecture achieves a high performance efficiency of $90\%$ on the convolutional processes of VGGNet-16. Despite its high performance efficiency, high throughput and low silicon area, it is limited to architectures with $3 \times 3$ filters.\par In this paper, we propose a dataflow supporting all filter sizes used in state-of-the-art CNNs by generalizing FID. We provide a theoretical framework showing that the proposed generalized FID (GFID) can perform both the fully-connected and convolutional processes while using the same hardware resources, resulting in a high utilization factor.
We then propose a CNN accelerator based on the proposed GFID that performs both fully-connected and convolutional computations, hereafter referred to as the multi-mode inference engine (MMIE). MMIE is optimized to achieve high performance efficiency and low memory accesses to the off-chip memory, while keeping the power consumption below the budget of mobile/embedded devices. Finally, we evaluate the performance of MMIE on state-of-the-art CNN models (i.e., AlexNet, VGGNet-16 and ResNet-50) and show that MMIE performs the convolutional computations of these CNNs with an $83\%$ minimum performance efficiency. \section{Preliminaries} \label{pre} A fully-connected network is a stack of layers where each neuron is connected to every neuron in the previous and the next layer, and a weight $w$ is associated to each connection. A fully-connected layer performs the following computations with $n$ inputs and $m$ outputs: \begin{equation} y = \text{ReLU}(w_{m\times n}x_{n\times 1} + b_{m\times 1}), \label{fc_comp} \end{equation} where $x$ denotes the input pixels, $y$ the output pixels, $b$ the biases, and $\text{ReLU}$ is the non-linear activation function $\text{ReLU}(x) = \max(0,x)$. According to (\ref{fc_comp}), the fully-connected computational kernel calculates numerous matrix-vector multiplications followed by the $\text{ReLU}$. Due to the parallel memory access requirements of fully-parallel implementations of such networks, a semi-parallel implementation is the typical approach in hardware \cite{TCAS}. In semi-parallel implementations, only a limited number of PEs is instantiated, and the computations for each neuron are performed serially \cite{conv_neuron}.
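As a concrete reference for Eq. (\ref{fc_comp}), the following Python sketch implements the fully-connected forward pass with plain lists; shapes and values are illustrative only:

```python
def relu(v):
    return max(0.0, v)

def fully_connected(w, x, b):
    """y = ReLU(w x + b): w is m x n (list of rows), x has length n, b length m."""
    return [relu(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

# Illustrative 2x3 layer.
w = [[1.0, -2.0, 0.5],
     [0.0, 1.0, 1.0]]
x = [1.0, 1.0, 2.0]
b = [0.5, -3.0]
print(fully_connected(w, x, b))  # -> [0.5, 0.0]
```

A semi-parallel hardware realization would evaluate the inner products of only a few rows of $w$ at a time, which is exactly the serialization described above.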
In fact, different trade-offs between area occupation and latency can be obtained by changing the degree of parallelism.\par Inspired by the organization of the animal visual cortex, it was shown that the connectivity of neurons in convolutional layers can be mathematically described by a convolution operation \cite{conv}. All neurons in a convolutional layer share a set of weights, also referred to as a filter.\par \begin{figure}[!t] \centering \includegraphics[scale = 0.5]{CL.pdf} \caption{The high-dimensional convolutions in a convolutional layer.} \label{CL} \end{figure} The main computational kernel of a convolutional layer involves high-dimensional convolutions, as shown in Fig. \ref{CL}. The convolutional layers take input pixels, which are also called input activation maps, arranged in 3 dimensions (i.e., height $H_{in}$, width $W_{in}$ and channel $C_{in}$), and generate output pixels, which are also called output activation maps, arranged in 3 dimensions (i.e., height $H_{out}$, width $W_{out}$ and channel $C_{out}$). This transformation is the result of the convolution between the input activation maps and a set of $C_{out}$ 3D filters. More precisely, every single 2D $H_{out} \times W_{out}$ plane of the output activation maps is the result of the convolution between the 3D input activation maps and a single 3D filter. In fact, a summation of multiple plane-wise 2D convolutions forms a 3D convolution. At the end, the results of the 3D convolutions are also added to a 1D bias.
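The plane-wise accumulation just described can be sketched as a naive reference implementation (unoptimized, with illustrative tensor sizes, unit stride and no padding):

```python
def conv_layer(x, w, b, stride=1):
    """Naive 3D convolution: x is H_in x W_in x C_in, w is
    H_f x W_f x C_in x C_out (nested lists), b has length C_out.
    Returns y of shape H_out x W_out x C_out."""
    H_in, W_in, C_in = len(x), len(x[0]), len(x[0][0])
    H_f, W_f, C_out = len(w), len(w[0]), len(w[0][0][0])
    H_out = (H_in - H_f) // stride + 1
    W_out = (W_in - W_f) // stride + 1
    y = [[[b[q] for q in range(C_out)] for _ in range(W_out)]
         for _ in range(H_out)]
    for z in range(H_out):
        for t in range(W_out):
            for q in range(C_out):
                for j in range(H_f):
                    for i in range(W_f):
                        for k in range(C_in):
                            y[z][t][q] += x[z*stride + j][t*stride + i][k] * w[j][i][k][q]
    return y

# 3x3x1 input, 2x2x1x1 all-ones filter, zero bias -> 2x2 quadrant sums.
x = [[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]]
w = [[[[1]], [[1]]], [[[1]], [[1]]]]
b = [0]
print(conv_layer(x, w, b))  # -> [[[12], [16]], [[24], [28]]]
```

The six nested loops make explicit why the operation count, and hence the processing time, is dominated by the convolutional layers.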
In summary, with the input activation maps, the output activation maps, the filters and the biases denoted as $X$, $Y$, $W$ and $B$, respectively, the convolutional process can be expressed as \begin{equation} \small Y(z,t,q) = B(q) + \sum_{k = 1}^{C_{in}} \sum_{j = 1}^{H_f} \sum_{i = 1}^{W_f} X((z-1)S + j,(t-1)S + i,k) \times W(j,i,k,q), \nonumber \end{equation} \begin{equation} H_{out} = (H_{in} - H_{f} + S)/S, \nonumber \end{equation} \begin{equation} W_{out} = (W_{in} - W_{f} + S)/S, \label{eq_conv} \end{equation} where $1 \leq z \leq H_{out}$, $1 \leq t \leq W_{out}$ and $1 \leq q \leq C_{out}$. The stride $S$ denotes the number of activation map pixels by which the filter is shifted after each convolution. Contrary to fully-connected layers, convolutional computations are dominated by numerous multiply-accumulate operations (MACs) according to Eq. (\ref{eq_conv}), leading to a high degree of computational complexity. \subsection{Fully-Connected Inspired Dataflow for Convolutional Computations}\label{pre:FID} The fully-connected inspired dataflow (FID) was introduced in \cite{TCAS}. It can be used to efficiently perform the computations of convolutional layers with filter parameter $W_f$ fixed to $3$. Let us note that a 2D convolution is the weighted summation of each pixel of an input image with its neighboring pixels, and consider an input image as a matrix $X_{2\times 8}$, a filter as a matrix $W_{1\times 3}$ and an output as a matrix $Y_{2\times 6}$, such that \[ X = \begin{bmatrix} X_{1} & X_{2} & \dots & X_{8} \\ X_{9} & X_{10} & \dots & X_{16} \\ \end{bmatrix} , W = \begin{bmatrix} W_{1} & W_{2} & W_{3} \\ \end{bmatrix}, \]\vspace{-10pt} \[ Y = \begin{bmatrix} Y_{1} & Y_{2} & \dots & Y_{6} \\ Y_{7} & Y_{8} & \dots & Y_{12} \\ \end{bmatrix}.
\] Assigning each output pixel to a neuron, Table \ref{FID} shows the convolutional process of this example in a way similar to the fully-connected layer computations, where input pixels are read sequentially at each clock cycle (CC) and the neurons share the same input pixels. This example considers $C_{in} = 1$, $C_{out} = 1$, $H_{in} = 2$, $W_{in} = 8$, $H_f = 1$, $W_f = 3$ and $S = 1$. Similar to the fully-connected dataflow, each neuron loads a different weight at each time step, subsequently accumulating the weighted input pixels. The number of time steps required to perform the convolutional computations is equal to the number of input pixels, $H_{in} \times W_{in}$. When passed to the next neuron belonging to the same row of the output activation map, the weights need to be shifted by one position. However, weight passing between neurons of different rows requires a shift of $W_f$ positions, as can be observed between outputs $\#6$ and $\#7$ in Table \ref{FID}.\par \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \renewcommand{\thefootnote}{\alph{footnote}} \centering \caption{The FID for Convolutional Computations.} \scalebox{0.5}{ \Large \begin{tabular}{c c | c c c c c c : c c c c c c} \hline & & \multicolumn{6}{c:}{1$^{st}$ row of output activation map} & \multicolumn{6}{c}{2$^{nd}$ row of output activation map}\\ \hline \multirow{2}{*}{CC} & \multirow{2}{*}{Inputs} & \multicolumn{12}{c}{Outputs} \\ & & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 & \#8 & \#9 & \#10 & \#11 & \#12 \\ \hline \#1 & $X_1\times$ & \colorbox{black!10}{$W_1$} & $0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ \\ \#2 & $X_2\times $ & \colorbox{black!10}{$ W_2$} & \colorbox{black!30}{$W_1$} & $ 0$ & $ 0$ & $0$ & $0$ & $0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \#3 & $X_3\times $ & \colorbox{black!10}{$W_3$} & \colorbox{black!30}{$ W_2$} & \colorbox{black!50}{$ W_1$} & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \#4 &
$X_4\times$ & $ 0$ & \colorbox{black!30}{$ W_3$} & \colorbox{black!50}{$ W_2$} & \colorbox{black!10}{$ W_1$} & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \#5 & $X_5\times $ & $ 0$ & $ 0$ & \colorbox{black!50}{$ W_3$} & \colorbox{black!10}{$ W_2$} & \colorbox{black!30}{$ W_1$} & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \#6 & $X_6\times $ & $ 0$ & $0$ & $0$ & \colorbox{black!10}{$W_3$} & \colorbox{black!30}{$W_2$} & \colorbox{black!50}{$W_1$} & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\ \#7 & $X_7\times$ & $0$ & $0$ & $0$ & $0$ & \colorbox{black!30}{$ W_3$} & \colorbox{black!50}{$W_2$} & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\ \#8 & $X_8\times$ & $0$ & $0$ & $0$ & $0$ & $ 0$ & \colorbox{black!50}{$ W_3$} & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \#9 & $X_9\times$ & $0$ & $ 0$ & $0$ & $0$ & $0$ & $ 0$ & \colorbox{black!10}{$ W_1$} & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $0$ \\ \#10 & $X_{10}\times$ & $0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & \colorbox{black!10}{$ W_2$} & \colorbox{black!30}{$ W_1$} & $0$ & $ 0$ & $ 0$ & $ 0$ \\ \#11 & $X_{11}\times$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $0$ & \colorbox{black!10}{$ W_3$} & \colorbox{black!30}{$W_2$} & \colorbox{black!50}{$ W_1$} & $ 0$ & $0$ & $0$\\ \#12 & $X_{12}\times $ & $ 0$ & $0$ & $0$ & $0$ & $ 0$ & $0$ & $ 0$ & \colorbox{black!30}{$W_3$} & \colorbox{black!50}{$W_2$} & \colorbox{black!10}{$W_1$} & $0$ & $0$\\ \#13 & $X_{13}\times $ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & \colorbox{black!50}{$W_3$} & \colorbox{black!10}{$W_2$} & \colorbox{black!30}{$W_1$} & $0$ \\ \#14 & $X_{14}\times $ & $0$ & $0$ & $0$ & $0$ & $ 0$ & $ 0$ & $0$ & $0$ & $0$ & \colorbox{black!10}{$ W_3$} & \colorbox{black!30}{$W_2$} & \colorbox{black!50}{$ W_1$} \\ \#15 & $X_{15}\times $ & $0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $0$ & $0$ & $ 0$ & \colorbox{black!30}{$W_3$} & \colorbox{black!50}{$W_2$}\\ \#16 & $X_{16}\times$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & 
\colorbox{black!50}{$W_3$}\\ \hline \multicolumn{2}{c}{$\sum$} & $Y_{1}$ & $Y_{2}$ & $Y_{3}$ & $Y_{4}$ & $Y_{5}$ & $Y_{6}$ & $Y_{7}$ & $Y_{8}$ & $Y_{9}$ & $Y_{10}$ & $Y_{11}$ & $Y_{12}$\\ \end{tabular}} \label{FID} \end{table} A direct implementation of the convolutional process in Table \ref{FID} requires a large number of PEs, or neurons, each of them with a low utilization factor (UF). In \cite{TCAS} it was shown that 3 PEs, denoted by different colors in Table \ref{FID}, are sufficient to perform the convolutions: in fact, there are only 3 active neurons at each time step. The three PEs thus receive their inputs at clock cycles $3 \times i + 1$, $3 \times i + 2$ and $3 \times i + 3$, respectively. Their outputs are valid after 3 clock cycles in the given example. So far, we only considered a case with $H_f = 1$. In case of $H_f = 3$, the procedure in Table \ref{FID} has to be repeated two more times: the first iteration with $W_1$, $W_2$ and $W_3$, the second with $W_4$, $W_5$ and $W_6$, and the final one with $W_7$, $W_8$ and $W_9$. Similarly, for higher values of $C_{in}$, the process has to be repeated $C_{in}$ times. Therefore, a memory is required to store the partial values generated by the 3 neurons for each output pixel. In general, $N$ output pixels can be computed using 3 neurons (i.e., PEs) and 3 separate $N/3$-element SRAM memories working in parallel. The unit generating the $N$ output pixels of an output activation map is referred to as a 1D tile. Parallel 1D tiles can also be exploited to generate $p$ out of $C_{out}$ output activation maps in parallel, which reduces both the latency and the memory accesses by a factor of $p$. The input pixels are shared among all the $p$ 1D tiles.
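The FID schedule of Table \ref{FID} can be checked with a short simulation (an illustrative sketch, not the hardware description): one input pixel is consumed per clock cycle, each active neuron accumulates the weighted input, and the peak number of simultaneously active neurons never exceeds $W_f = 3$:

```python
import numpy as np

def fid_schedule(x_row_major, H_in, W_in, w):
    # One input pixel is read per clock cycle; each active neuron
    # (output pixel) accumulates x * w[offset]. Row-wise 1D convolution, S = 1.
    W_f = len(w)
    N_row = W_in - W_f + 1            # output pixels per row
    y = np.zeros(H_in * N_row)
    active_per_cc = []
    for cc in range(H_in * W_in):     # one clock cycle per input pixel
        r, c = divmod(cc, W_in)
        active = 0
        for j in range(N_row):        # output column j of row r
            off = c - j               # weight index used by this output
            if 0 <= off < W_f:
                y[r * N_row + j] += x_row_major[cc] * w[off]
                active += 1
        active_per_cc.append(active)
    return y, max(active_per_cc)

x = np.arange(1.0, 17.0)              # the 2 x 8 example, row-major
w = np.array([2.0, -1.0, 3.0])
y, peak = fid_schedule(x, H_in=2, W_in=8, w=w)  # peak == 3 active neurons
```

The 12 accumulated outputs match a direct row-wise correlation of the input with the filter, confirming the schedule.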
\section{Generalized Fully-Connected Inspired Dataflow (GFID)}\label{sec:GFID} Let us define a generalized form of the FID as a matrix $M$: \begin{equation} \small M = \begin{matrix} \begin{bmatrix} \left.\begin{array}{ccccc} W_1 & 0 & 0 & \cdots & 0 \\[0.3em] W_2 & \rvdots & & & \\[0.3em] & 0 & \rvdots & & \\[0.3em] \hdashline \\[-0.3em] \rvdots & W_1 & & & \\[0.3em] & W_2 & 0 & & \rvdots\\[0.3em] W_{W_f}& & 0 & & \\[0.3em] 0 & \rvdots & W_1 &\rvdots & \\[0.3em] & & W_2 & & \\[0.3em] & W_{W_f} & & & 0\\[0.3em] & 0 & \rvdots & & W_1\\[0.3em] \rvdots& & & & W_2\\[0.3em] & \rvdots & W_{W_f} & & \\[-0.2em] & & \rvdots & \ddots & \rvdots\\[0.3em] 0 & \cdots & 0 & \cdots & W_{W_f}\\[0.3em] \end{array} \right. \end{bmatrix}, \begin{matrix} \left.\begin{array}{l} \MyLBrace{5ex}{S} \\[0.1em] \\[-0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \\[0.3em] \end{array}\right. \end{matrix} \end{matrix} \end{equation} where each column of the matrix $M$ contains at most $W_f$ non-zero elements. The shift amount within each row of the output activation map is equal to $S$, denoted with a dashed line in the matrix $M$. The number of columns of the matrix $M$ equals the number $N$ of output pixels that belong to the same row of the output activation map, while the number of rows of $M$ is the required number of clock cycles.\par In this section, we use the GFID matrix $M$ to represent the different filter sizes used in the state-of-the-art CNNs (i.e., AlexNet, VGGNet and ResNet). AlexNet uses filter sizes of $11 \times 11$ with $S = 4$, $5 \times 5$ with $S = 1$, and $3 \times 3$ with $S = 1$. The filter sizes used in VGGNets are fixed to $3 \times 3$ with $S = 1$. Finally, ResNets use filter sizes of $7 \times 7$ with $S = 2$, $3 \times 3$ with $S = 1$, and $1 \times 1$ with $S = 1$.
\subsection{Filters with $W_f = 3$ and $S = 1$}\label{GFID:3x3} In Section \ref{pre:FID}, we showed that 3 PEs are sufficient to perform the convolutions for a filter size of $3 \times 3$ with $S = 1$. Therefore, a 1D tile containing only 3 neurons can perform the convolutional computations. Considering the convolution of a row of a filter map with its corresponding input pixels, $N + 2$ clock cycles are required to generate the $N$ output pixels which belong to the same row of the output activation map. For instance, in the example given in Table \ref{FID}, 8 clock cycles are required to generate the output pixels of the first row of the output activation map (i.e., the first 6 output pixels). This example can also be expressed using the GFID matrix $M$ as follows: \begin{equation} \small M_{8\times 6} = \begin{bmatrix} \left.\begin{array}{cccccc} W_1 & 0 & 0 & 0 & 0 & 0 \\ W_2 & W_1 & 0 & 0 & 0 & 0 \\ $\colorbox{black!50}{$W_3$}$ & $\colorbox{black!50}{$W_2$}$ & $\colorbox{black!50}{$W_1$}$ & 0 & 0 & 0 \\ 0 & W_3 & W_2 & W_1 & 0 & 0 \\ 0 & 0 & W_3 & W_2 & W_1 & 0 \\ 0 & 0 & 0 & W_3 & W_2 & W_1 \\ 0 & 0 & 0 & 0 & W_3 & W_2 \\ 0 & 0 & 0 & 0 & 0 & W_3 \\ \end{array} \right. \end{bmatrix}. \end{equation} The matrix $M$ also confirms that there are only 3 active neurons at each time step, highlighted in dark gray. \subsection{Filters with $W_f = 5$ and $S = 1$}\label{GFID:5x5} The convolutional computations for filters with $W_f = 5$ and $S = 1$ are performed in a way similar to those of filters with $W_f = 3$ and $S = 1$, with the difference that $5$ neurons are active at each time step. Thus, a 1D tile with 5 PEs can perform the computations for this filter size. Moreover, $N + 4$ clock cycles are required to generate $N$ output pixels which belong to the same row of the output activation map.
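The structure of such GFID matrices is easy to generate programmatically; the sketch below (illustrative Python, not part of the design) rebuilds the $M_{8\times 6}$ example of Section \ref{GFID:3x3} and verifies that $xM$ reproduces the 1D correlation of the input row with the filter:

```python
import numpy as np

def gfid_matrix(w, N, S=1):
    # Build the (S*N + W_f - S) x N GFID matrix M: column j holds the
    # filter taps starting at row j*S.
    W_f = len(w)
    M = np.zeros((S * N + W_f - S, N))
    for j in range(N):
        M[j * S:j * S + W_f, j] = w
    return M

w = np.array([1.0, 2.0, 3.0])          # W_1, W_2, W_3
x = np.arange(1.0, 9.0)                # first input row, X_1 .. X_8
M = gfid_matrix(w, N=6)                # the 8 x 6 matrix M
y = x @ M                              # the first 6 output pixels
```

Each row of $M$ has at most 3 non-zero entries, matching the 3 active neurons per clock cycle.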
\subsection{Filters with $W_f = 1$ and $S = 1$}\label{GFID:1x1} The following matrix $M$ shows the convolutional computations for filters with $W_f = 1$ and $S = 1$: \begin{equation} \small M_{5\times 5} = \begin{bmatrix} \left.\begin{array}{ccccc} $\colorbox{black!50}{$W_1$}$ & 0 & 0 & 0 & 0 \\ 0 & W_1 & 0 & 0 & 0 \\ 0 & 0 & W_1 & 0 & 0 \\ 0 & 0 & 0 & W_1 & 0 \\ 0 & 0 & 0 & 0 & W_1 \\ \end{array} \right. \end{bmatrix}. \end{equation} Contrary to the other filter sizes, this GFID matrix $M$ is square: the number of clock cycles required to generate $N$ output pixels is equal to $N$. As denoted in the matrix $M$, there is only one active neuron at each clock cycle. Consequently, the 1D tile requires only one PE to perform the convolutional computations. \subsection{Filters with $W_f = 7$ and $S = 2$}\label{GFID:7x7} So far, we only considered a stride value $S=1$. However, both AlexNet and ResNet contain layers computing convolutions with $S > 1$. Considering filters with $W_f = 7$ and $S = 2$, the shift amount within each row of the output activation map is equal to 2, as shown in the following matrix $M$: \begin{equation} \small M_{15\times 5} = \begin{bmatrix} \left.\begin{array}{ccccc} W_1 & 0 & 0 & 0 & 0 \\ W_2 & 0 & 0 & 0 & 0 \\ W_3 & W_1 & 0 & 0 & 0 \\ W_4 & W_2 & 0 & 0 & 0 \\ W_5 & W_3 & W_1 & 0 & 0 \\ W_6 & W_4 & W_2 & 0 & 0 \\ $\colorbox{black!50}{$W_7$}$ & $\colorbox{black!50}{$W_5$}$ & $\colorbox{black!50}{$W_3$}$ & $\colorbox{black!50}{$W_1$}$ & 0 \\ 0 & W_6 & W_4 & W_2 & 0 \\ 0 & W_7 & W_5 & W_3 & W_1 \\ 0 & 0 & W_6 & W_4 & W_2 \\ 0 & 0 & W_7 & W_5 & W_3 \\ 0 & 0 & 0 & W_6 & W_4 \\ 0 & 0 & 0 & W_7 & W_5 \\ 0 & 0 & 0 & 0 & W_6 \\ 0 & 0 & 0 & 0 & W_7 \\ \end{array} \right. \end{bmatrix}. \end{equation} While the higher stride value linearly decreases the number of pixels in the output activation maps, it also reduces the number of neurons required to perform the convolutional computations.
For instance, the above matrix $M$ shows that there are only $4$ active neurons at each time step, while the filter width is $W_f=7$. According to the matrix $M$, $15$ clock cycles are required to generate $5$ output pixels in the given example. \subsection{Filters with $W_f = 11$ and $S = 4$}\label{GFID:11x11} The matrix $M$ for filters with $W_f = 11$ and $S = 4$ is as follows: \begin{equation} \small M_{23\times 4} = \begin{bmatrix} \left.\begin{array}{cccc} W_1 & 0 & 0 & 0 \\ W_2 & 0 & 0 & 0 \\ W_3 & 0 & 0 & 0 \\ W_4 & 0 & 0 & 0 \\ W_5 & W_1 & 0 & 0 \\ W_6 & W_2 & 0 & 0 \\ W_7 & W_3 & 0 & 0 \\ W_8 & W_4 & 0 & 0 \\ W_9 & W_5 & W_1 & 0 \\ W_{10} & W_6 & W_2 & 0 \\ $\colorbox{black!50}{$W_{11}$}$ & $\colorbox{black!50}{$W_7$}$ & $\colorbox{black!50}{$W_3$}$ & 0 \\ 0 & W_8 & W_4 & 0 \\ 0 & W_9 & W_5 & W_1 \\ 0 & W_{10} & W_6 & W_2 \\ 0 & W_{11} & W_7 & W_3 \\ 0 & 0 & W_8 & W_4 \\ 0 & 0 & W_9 & W_5 \\ 0 & 0 & W_{10} & W_6 \\ 0 & 0 & W_{11} & W_7 \\ 0 & 0 & 0 & W_8 \\ 0 & 0 & 0 & W_9 \\ 0 & 0 & 0 & W_{10} \\ 0 & 0 & 0 & W_{11} \\ \end{array} \right. \end{bmatrix}. \end{equation} Despite the large filter width, the number of active neurons at each time step is only 3, thanks to the large stride value. However, the number of clock cycles required to generate 4 output pixels is $23$, which is rather high and can result in a long latency. \subsection{Utilization Factor for Different Filter Sizes}\label{GFID:UF} As discussed in Section \ref{pre:FID}, the number of clock cycles required to perform the convolutions using FID is equal to the number of input pixels, and the same holds for GFID. Considering $C_{in} = 1$ and $H_f = 1$, $S \times N + W_f - S$ clock cycles are required to generate $N$ pixels of an output activation map, according to Eq. (\ref{eq_conv}). Let us define the number of required PEs in the 1D tile as $T$.
The number of pixels computed by each neuron is equal to $N/T$ when $N$ is a multiple of $T$. Each neuron also requires $W_f$ clock cycles to generate an output pixel. Therefore, the utilization factor of GFID can be expressed as \begin{equation} \text{UF} = \dfrac{\dfrac{N}{T} \times W_f}{S \times N + W_f -S} \times 100. \label{UF} \end{equation}\par In Section \ref{intro}, we discussed the importance of high performance efficiency. The utilization factor of the PEs in a convolutional accelerator is linearly proportional to its performance efficiency: any increase in the utilization factor of the PEs exploited in the 1D tile results in an increase in performance efficiency. Considering the fact that $W_f$ and $S$ are usually small, a high UF is achieved for a large value of $N$. In other words, the maximum achievable utilization factor can be obtained as \begin{equation} \text{UF$_{max}$} = \lim_{N\to\infty} \text{UF} = \dfrac{W_f}{T \times S} \times 100. \label{UFmax} \end{equation} Eq. (\ref{UFmax}) suggests that the highest performance efficiency is obtained when $N \gg (W_f - S)$. The maximum utilization factors for filters with [$W_f, S$] equal to [1, 1], [3, 1], [5, 1], [7, 2] and [11, 4] are 100\%, 100\%, 100\%, 88\% and 92\%, respectively, showing the high performance efficiency of the proposed GFID.
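The tile sizes $T$ and the limits of Eq. (\ref{UFmax}) can be cross-checked numerically. The following sketch (an illustration under the same $C_{in} = H_f = 1$ assumption) builds a 0/1 occupancy version of $M$ for each $[W_f, S]$ pair and derives $T$ and UF$_{max}$:

```python
import numpy as np
from fractions import Fraction

def gfid_occupancy(w_f, S, N):
    # 0/1 occupancy version of the GFID matrix M: entry (r, j) is 1
    # when output neuron j loads a weight at clock cycle r.
    M = np.zeros((S * N + w_f - S, N), dtype=int)
    for j in range(N):
        M[j * S:j * S + w_f, j] = 1
    return M

def tile_size_and_uf_max(w_f, S, N=64):
    M = gfid_occupancy(w_f, S, N)
    T = int(M.sum(axis=1).max())      # peak number of active neurons
    return T, Fraction(w_f, T * S)    # UF_max = W_f / (T * S)

# The [W_f, S] pairs used by AlexNet, VGGNet and ResNet
results = {cfg: tile_size_and_uf_max(*cfg)
           for cfg in [(1, 1), (3, 1), (5, 1), (7, 2), (11, 4)]}
```

This reproduces $T = 1, 3, 5, 4, 3$ and UF$_{max}$ of $100\%$, $100\%$, $100\%$, $7/8 \approx 88\%$ and $11/12 \approx 92\%$ for the five configurations, respectively.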
\section{Multi-Mode Inference Engine}\label{sec:MMIE} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \renewcommand{\thefootnote}{\alph{footnote}} \caption{Breakdown of Number of PEs Required Per Tile} \centering \small \scalebox{1}{ \begin{tabular}{c c c c c} \hline\hline \multicolumn{1}{c}{Network} & \multicolumn{1}{c}{$H_f \times W_f$} & $S$ & $T$ & \# layers\\ \hline \hline \multirow{3}{*}{AlexNet \cite{alexnet}} & $11\times11$ & 4 & 3 & 1 out of 5 \\ & $5\times5$ & 1 & 5 & 1 out of 5 \\ & $3\times3$ & 1 & 3 & 3 out of 5 \\ \hline \multirow{3}{*}{ResNet-50 \cite{resnet}} & $7\times7$ & 2 & 4 & 1 out of 49 \\ & $3\times3$ & 1 & 3 & 16 out of 49 \\ & $1\times1$ & 1 & 1 & 32 out of 49 \\ \hline \multirow{1}{*}{VGG-16 \cite{vgg}} & $3\times3$ & 1 & 3 & 13 out of 13 \\ \hline \hline \end{tabular}} \label{PE_number} \end{table} In Section \ref{sec:GFID}, we showed that different filter sizes require a different number of PEs per tile. Table \ref{PE_number} summarizes the number of required PEs per tile for each layer of AlexNet, VGGNet-16 and ResNet-50. AlexNet consists of 5 convolutional layers with filter sizes of $11 \times 11$, $5 \times 5$ and $3 \times 3$. Applying the GFID to the AlexNet layers shows that 4 out of 5 layers (i.e., the layers with filter sizes of $11 \times 11$ and $3 \times 3$) require only 3 PEs per tile to perform the computations, while the remaining layer requires 5 PEs per tile. Therefore, $T = 3$ is the most frequent tile size in AlexNet for convolutional processes. In VGGNets \cite{vgg}, the filter size and stride are fixed to $3 \times 3$ and one pixel, respectively; as a result, the whole computations of VGGNets can be performed using 3 PEs per tile. There are different VGGNet models in the literature: in this paper, we use VGGNet-16, which contains 13 convolutional and 3 fully-connected layers, for experimental purposes. Similar to VGGNets, ResNets also come in different flavors.
The first layer of ResNets uses a receptive field of $7 \times 7$ with stride $S = 2$. The filter sizes of the remaining layers are either fixed to $3 \times 3$ (for ResNet-18 and ResNet-34) or a combination of $1 \times 1$ and $3 \times 3$ (for ResNet-50, ResNet-101 and ResNet-152) \cite{resnet}. Therefore, the dominant filter sizes are $1 \times 1$ and $3 \times 3$, which require one and 3 PEs per tile to perform the convolutional computations, respectively. In Table \ref{PE_number}, we report the requirements for ResNet-50. \begin{figure}[!t] \centering \includegraphics[scale = 0.37]{Tile.pdf} \caption{The high-level architecture of a 1D reconfigurable tile.} \label{Tile} \end{figure} Fig. \ref{Tile} shows the high-level architecture of the 1D tile. It consists of two main sub-blocks: the weight generator and $K$ PEs working in parallel. All the PEs share the same input activation pixel while their weights are different. Each PE takes an input activation pixel and its corresponding weight according to the proposed GFID and performs the multiply-accumulate computations for the first row of the first input filter, i.e., $W_1$, $W_2$, $\dots$, $W_{W_f}$. This process takes $W_f$ clock cycles and the computed partial value is stored in a memory of $L$ elements. Afterwards, the PE starts the processing of another output activation pixel, using the same weights. The convolutional computations of the first row of the first input filter require $S \times N + W_f - S$ clock cycles, as discussed in Section \ref{GFID:UF}. Upon reaching this point, the partial value of the first output activation pixel is read from the memory and the computations of the second row of the first input filter are performed for $S \times N + W_f -S$ clock cycles. In general, this procedure is repeated $H_f$ times until the computations of the first filter are finished (i.e., upon completion of $H_f \times (S \times N + W_f -S)$ clock cycles).
At this point, the computations of the second of the $C_{in}$ filter channels start. Upon completion of $C_{in} \times H_f \times (S \times N + W_f -S)$ clock cycles, the output value of each PE is passed through the ReLU and the result is stored in the off-chip memory. So far, we introduced the high-level architecture of the 1D tile and the high-level procedure of the convolutional computations. In order to perform the computations while achieving a high performance efficiency, the number of PEs per tile has to be reconfigurable. In other words, the $K$ instantiated PEs have to dynamically adapt to act as a multiple of $T$ PEs to achieve the maximum possible utilization factor. The closed-form solution for this strategy is \begin{equation} K = \text{LCM}(T_i),~~i \in \{1,3,4,5\}, \end{equation} where LCM denotes the least common multiple. Using this approach, 60 PEs are required to achieve the maximum possible utilization factor for all the network sizes listed in Table \ref{PE_number}. Depending on the required $T$, the 60 PEs can dynamically behave as a set of $T$ PEs. For instance, they can act as 60, 20, 15 and 12 parallel tiles for $T$ equal to 1, 3, 4 and 5, respectively, where each tile contains 1, 3, 4 and 5 PEs. However, using 60 reconfigurable PEs is not trivial and results in a complex address generator.\par Table \ref{PE_number} shows that $T = 1$ and $T = 3$ are the dominant minimum numbers of PEs for the three well-known CNNs. More precisely, the two filters with $W_f = 5$ and $W_f = 7$ have the least impact on the overall performance efficiency, since each of them is used in only one layer. Therefore, we use $K=6$ PEs inside the reconfigurable tile: the reason is twofold. First of all, 6 PEs can easily be used as 2 tiles of 3 PEs or 6 tiles of 1 PE for $T = 3$ and $T = 1$, which are the dominant minimum numbers of PEs for the three well-known CNNs.
Secondly, they can perform the computations for $T = 4$ and $T = 5$ with a minimum level of complexity for the address generator unit. In this case, with $K$ larger than strictly necessary, the number of clock cycles required to perform the convolutional computations remains the same; however, the utilization factor of the PEs decreases. \subsection{Reconfigurable Weight Generator Unit}\label{MMIE:WG} The weight generator unit provides each neuron with the appropriate weight according to the proposed GFID. It consists of 6 register sets, where each set contains 11 registers. The appropriate weight for each neuron is provided by selecting among these shift registers. \begin{figure*}[!t] \centering \subfigure[]{ \includegraphics[scale = 0.35]{WG_3x3.pdf} \label{WG_3x3} } \subfigure[]{ \includegraphics[scale = 0.35]{WG_5x5.pdf} \label{WG_5x5} } \subfigure[]{ \includegraphics[scale = 0.35]{WG_1x1.pdf} \label{WG_1x1} } \subfigure[]{ \includegraphics[scale = 0.35]{WG_7x7.pdf} \label{WG_7x7} } \subfigure[]{ \includegraphics[scale = 0.35]{WG_11x11.pdf} \label{WG_11x11} } \subfigure[]{ \includegraphics[scale = 0.35]{WG_fc.pdf} \label{WG_fc} } \caption{Involved hardware resources and paths in case of convolution computations for (a) $W_f = 3$ and $S = 1$, (b) $W_f = 5$ and $S = 1$, (c) $W_f = 1$ and $S = 1$, (d) $W_f = 7$ and $S = 2$, (e) $W_f = 11$ and $S = 4$, and (f) fully-connected computations.} \label{mult} \end{figure*} \subsubsection{Filters With $W_f = 3$ and $S = 1$}\label{MMIE:WG3x3} As discussed in Section \ref{sec:MMIE}, in case of $W_f = 3$ and $S = 1$, the 1D reconfigurable tile containing 6 neurons can function as two tiles of 3 neurons each. Fig. \ref{WG_3x3} shows the weight generator unit and its working path, highlighted in black, when using $W_f = 3$ and $S = 1$. It is worth noting that the tiles are separated using a dashed line.
Each tile loads the weights of the first row of the first filter (i.e., $W_1$, $W_2$ and $W_3$) through the input ports denoted as In \#1 and In \#2 in Fig. \ref{WG_3x3}. These weights then loop through the first register of each set to provide a one-clock-cycle delay for each neuron, according to Section \ref{GFID:3x3}. Considering Eq. (\ref{UF}), the utilization factor of each neuron for this case can be computed as \begin{equation} \text{UF} = \dfrac{N}{N + 2} \times 100, \end{equation} which approaches 100\% for large values of $N$. \subsubsection{Filters With $W_f = 5$ and $S = 1$}\label{MMIE:WG5x5} In case of $W_f = 5$ and $S = 1$, we use 6 neurons to perform the convolutional processes, although the minimum required number of neurons is 5 for this case (see Section \ref{sec:MMIE}). Therefore, the reconfigurable tile works as a single tile containing 6 PEs, as shown in Fig. \ref{WG_5x5}. The tile takes the first row of the first filter (i.e., $W_1$, $W_2$, $\dots$, and $W_5$) through the input port denoted as In \#1. It then provides the required one-clock-cycle delay for each PE by passing the weights through the first register of each register set, as highlighted in black in Fig. \ref{WG_5x5}. It is worth noting that 6 registers are used in this paradigm, while only 5 of them are required to store the weights. Therefore, the value of one register among the 6 is always zero, to cancel out its effect on the computations. More precisely, we can regard the 5 weights as a set of 6 weights in which one is zero (i.e., $W_1$, $W_2$, $\dots$, $W_5$ and $0$). The utilization factor of each PE can also be expressed as \begin{equation} \text{UF} = \dfrac{5N}{6N + 24} \times 100. \end{equation} In fact, using 6 neurons to perform the convolutions of $W_f = 5$ reduces the maximum achievable utilization factor from 100\% to 83\%.
\subsubsection{Filters With $W_f = 1$ and $S = 1$}\label{MMIE:WG1x1} In Section \ref{GFID:1x1}, we showed that only one PE is sufficient to perform the computations for $W_f = 1$ and $S = 1$. Therefore, the reconfigurable 1D tile can be used as 6 parallel tiles, as depicted in Fig. \ref{WG_1x1}. The 6 tiles are separated using dashed lines and the involved hardware units and paths are highlighted in black. Each tile takes its weight (i.e., $W_1$) at the first clock cycle through the input ports In \#1 to In \#6. Afterwards, the imported weight loops through each tile and the first register of each register set. According to (\ref{UF}), the utilization factor of each PE is equal to 100\% regardless of $N$. \subsubsection{Filters With $W_f = 7$ and $S = 2$}\label{MMIE:WG7x7} Similar to the case of $W_f = 5$ and $S = 1$, 6 PEs are used to compute the convolutions for $W_f = 7$ and $S = 2$, although only 4 neurons would be sufficient. As a result, the reconfigurable tile functions as a single tile containing 6 PEs (see Fig. \ref{WG_7x7}). The tile loads the weights of the first row of the first filter (i.e., $W_1$, $W_2$, $\dots$, and $W_7$) through the input port In \#1, and they loop through the black paths in Fig. \ref{WG_7x7}. In this scheme, the first two registers of each register set are used to provide the required two-clock-cycle delay for each PE, as shown in Section \ref{GFID:7x7}. It is worth mentioning that while 12 registers are used in this case, only 7 of them contain the weights. The utilization factor for this configuration is computed as follows: \begin{equation} \text{UF} = \dfrac{7N}{12N + 30} \times 100. \end{equation} Since 4 PEs are sufficient to perform the computations of this case, using 6 neurons strongly affects the utilization factor, which saturates at roughly 58\% for large values of $N$. However, the final impact of this configuration on the computations of the whole system is negligible, since it is only used for one layer out of 49 in ResNet-50.
\subsubsection{Filters With $W_f = 11$ and $S = 4$}\label{MMIE:WG11x11} Similar to the case with $W_f = 3$ and $S = 1$, 3 PEs are sufficient to perform the convolutional processes when using $W_f = 11$ and $S = 4$. Therefore, the reconfigurable tile functions as two tiles of 3 PEs each. The weights of the first row of the first filter (i.e., $W_1$, $W_2$, $\dots$, and $W_{11}$) are passed through input ports In \#1 and In \#4 to each tile, as shown in Fig. \ref{WG_11x11}. Since a stride value of 4 is used, the first four registers of each register set are used to provide the required four-clock-cycle delay, as shown in Section \ref{GFID:11x11}. A total of 12 registers is used in each tile while only 11 weights exist; therefore, the remaining register is set to zero. The utilization factor for this case is computed as \begin{equation} \text{UF} = \dfrac{11N}{12N + 21} \times 100, \end{equation} achieving up to 92\% for large values of $N$. \subsubsection{Fully-Connected Computations}\label{MMIE:WG_fc} As discussed in Section \ref{pre}, semi-parallel architectures are a common approach to implement fully-connected layers, where the computations of each neuron are performed serially. For instance, considering a single neuron with 512 inputs (i.e., $n = 512$ and $m = 1$), 512 clock cycles are required to perform the computations of (\ref{fc_comp}) using a single PE. We can perform the computations of multiple neurons by instantiating multiple PEs in parallel, as discussed in \cite{TCAS}. In this way, all PEs share the same input pixels while loading different weights. This approach can easily be realized using the proposed reconfigurable tile, as illustrated in Fig. \ref{WG_fc}: the tile passes the 6 incoming parallel weights directly to each PE through the multiplexers highlighted in black. The utilization factor of the PEs for fully-connected computations is 100\%.
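All the per-mode utilization expressions above share one closed form: each of the $N$ outputs needs $W_f$ MACs, spread over the PEs of one logical tile during $S \times N + W_f - S$ clock cycles. The sketch below (illustrative; the per-tile PE counts follow the mode descriptions above, with the 6-PE tile splitting into two logical tiles for the $3\times 3$ and $11\times 11$ modes) evaluates the large-$N$ limits:

```python
from fractions import Fraction

def uf(w_f, S, k_tile, N):
    # Utilization of a logical tile with k_tile instantiated PEs:
    # W_f MACs per output, N outputs, over S*N + W_f - S clock cycles.
    return Fraction(w_f * N, k_tile * (S * N + w_f - S))

# (W_f, S, PEs per logical tile) for the K = 6 reconfigurable tile
modes = {"3x3":   (3, 1, 3),
         "5x5":   (5, 1, 6),
         "1x1":   (1, 1, 1),
         "7x7":   (7, 2, 6),
         "11x11": (11, 4, 3)}

# Large-N limits, in percent: W_f / (k_tile * S) * 100
limits = {m: float(100 * Fraction(w, k * s)) for m, (w, s, k) in modes.items()}
```

The limits come out to 100\%, $\approx$83\%, 100\%, $\approx$58\% and $\approx$92\%, matching the per-mode discussion.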
\subsection{Handling Weight Passing}\label{MMIE:WP} So far, we discussed both the convolutional and fully-connected computations without considering weight passing, for the sake of simplicity. However, weight passing is inevitable in convolutional computations and impacts both the processing time and the utilization factor of the PEs. Weight passing occurs when a tile performs the computations of more than one row of the output activation map. In this case, passing the weights from a neuron of one row to a neuron of another row takes $W_f$ clock cycles regardless of the stride value $S$, resulting in a longer latency and consequently a lower utilization factor for the PEs. The total number of weight passing occurrences for the computations of a single convolutional layer is equal to $H_{out} - 1$. We considered 11 registers for each register set to support weight-passing delays of up to 11 clock cycles. Therefore, in case of weight passing in any of the PEs, the corresponding register set provides the required delay depending on $W_f$. \subsection{Exploiting Parallel Tiles}\label{MMIE:PT} While the proposed reconfigurable tile can efficiently perform both fully-connected and convolutional computations, using a single tile results in a long latency and numerous memory accesses, as discussed in \cite{TCAS}. To address this issue, $p$ reconfigurable tiles are instantiated to generate multiple output activation maps in parallel. Since the reconfigurable tile itself can function as up to 6 parallel tiles, the upper bound on the number of tiles in MMIE is $6p$. The computational latency of MMIE is thus effectively reduced by a factor of $p$ when compared to a single reconfigurable tile. Moreover, the memory accesses are reduced as well, since the input pixels are shared among the parallel tiles (see Fig. \ref{CL}), while each tile is fed by a different set of weights.
Exploiting parallel tiles requires an input bandwidth of $(1 + 6 \times p) \times 16$ bits ($6 \times p \times 16$ for the weights and 16 for the input pixels). However, most embedded and mobile devices cannot provide such a high bandwidth. To overcome this problem, MMIE leverages the pipelining technique first introduced in \cite{TCAS}. As discussed in Section \ref{sec:GFID}, one input pixel is read at each clock cycle, while the $W_f$ weights are read only during the first $W_f$ clock cycles of the convolutional process of the first row of the first input filter. The parameter $W_f$ is also small compared to the processing time of the convolutions for the first row of the first input filter (i.e., $W_f \ll (S \times N + W_f - S)$). More precisely, the input bandwidth from the $W_f^{th}$ clock cycle to the $(S \times N + W_f - S)^{th}$ clock cycle is occupied only by input pixels. Therefore, we can fill this available bandwidth by pipelining the tiles with up to $\lfloor (S \times N + W_f - S)/W_f \rfloor$ stages, while the additional latency overhead is negligible compared to the overall latency of the system. \subsection{Processing Time and Memory Accesses of MMIE}\label{MMIE:PTMA} \subsubsection{Convolutional Processes}\label{MMIE:PTMA:cp} Earlier in Section \ref{sec:MMIE}, we showed that in convolutional processes a single tile computes $N$ out of the $H_{out} \times W_{out}$ pixels of one of the $C_{out}$ output activation maps within $C_{in} \times H_f \times (S \times N + W_f - S)$ clock cycles. We also showed that the total number of weight passing occurrences for the computation of a single convolutional layer is equal to $H_{out} - 1$, which causes an additional $(W_f - 1) \times (H_{out} - 1)$ clock cycles for the computations of each row of the input filters.
Considering $p$ parallel tiles, the number of required clock cycles is expressed as \begin{align} \text{CC} & = \dfrac{W_{out} \times H_{out}}{N} \times (S \times N + W_f - S) \times H_f \times C_{in} \times \left \lceil \dfrac{C_{out}}{p} \right \rceil \nonumber \\ & + (W_f - 1) \times (H_{out} - 1) \times H_f \times C_{in} \times \left \lceil \dfrac{C_{out}}{p} \right \rceil. \label{cc} \end{align} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \renewcommand{\thefootnote}{\alph{footnote}} \caption{The effective values of $N$ and $p$ in MMIE depending on $W_f$ and $S$} \centering \small \scalebox{1}{ \begin{tabular}{c c c c} \hline\hline \multicolumn{1}{c}{$H_f \times W_f$} & $S$ & $N_{eff}$ & $p_{eff}$\\ \hline \hline $11\times11$ & 4 & 192 & 64 \\ $7\times7$ & 2 & 384 & 32 \\ $5\times5$ & 1 & 384 & 32 \\ $3\times3$ & 1 & 192 & 64 \\ $1\times1$ & 1 & 64 & 192 \\ \hline \hline \end{tabular}} \label{eff_par} \end{table} \begin{figure*}[!t] \centering \subfigure[]{ \includegraphics[scale = 0.6]{Main_achitecture.pdf} \label{MMIE} } \subfigure[]{ \includegraphics[scale = 0.25]{layout.png} \label{layout} } \caption{(a) The architecture of MMIE and (b) its layout.} \end{figure*} Eq. (\ref{cc}) suggests that the number of required clock cycles for convolutional computations is independent of $N$ for large values of $N$ (i.e., $S \times N \gg (W_f - S)$) when not considering the weight passing overheads. In Section \ref{MMIE:PT} we showed that input pixels are shared among all tiles and each pixel is read at each clock cycle. This means that the number of memory accesses by input activation maps (MA$_{imaps}$) is equal to the number of clock cycles required to complete the convolution. On the other hand, the weights are read only in the first $W_f$ clock cycles out of a total of $(S \times N + W_f - S)$ clock cycles.
As a result, the number of memory accesses required by filters to compute $N$ out of the $H_{out} \times W_{out}$ pixels of one of the $C_{out}$ output activation maps is equal to $C_{in} \times H_f \times W_f$. In general, the number of memory accesses by filters (MA$_{filters}$) can be computed as follows: \begin{equation} \text{MA$_{filters}$} = H_f \times W_f \times C_{in} \times \left \lceil \dfrac{W_{out} \times H_{out}}{N} \right \rceil \times C_{out}. \end{equation} Finally, the total number of memory accesses (MA) is the sum of the memory accesses by filters, input activation maps and output activation maps, where the number of memory accesses by output activation maps (MA$_{omaps}$) is equal to $W_{out} \times H_{out}$. It is worth noting that while the number of clock cycles and MA$_{imaps}$ are independent of $N$, MA$_{filters}$ depends on it. On the other hand, MA$_{filters}$ is independent of $p$ while the number of clock cycles and MA$_{imaps}$ are not. Note also that while higher values of $p$ and $N$ optimize MMIE towards fewer memory accesses and lower processing latencies, they also increase its power consumption and silicon area. \subsubsection{Fully-Connected Computations}\label{MMIE:PTMA:fc} In Section \ref{MMIE:WG_fc}, we showed that MMIE can perform the fully-connected computations in a similar way to convolutional computations, with each PE loading a different set of weights. The processing time of each PE is thus equal to the number of inputs $n$. The number of clock cycles required to generate $m$ output pixels can be expressed as \begin{equation} \text{No. CC} = \left \lceil \dfrac{m}{p} \right \rceil \times n. \end{equation} Unlike weights, input pixels are shared among PEs. Therefore, the number of memory accesses by input pixels (MA$_{ip}$) is equal to the number of clock cycles required for the fully-connected computations.
Since each output pixel relies on a distinct set of $n$ weights, the number of memory accesses by weights (MA$_{weights}$) is computed as follows: \begin{equation} \text{MA$_{weights}$} = m \times n. \end{equation} The number of memory accesses by output pixels (MA$_{op}$) is equal to $m$. The total number of memory accesses (MA) is again the sum of the memory accesses by weights, input pixels and output pixels. \section{Implementation Results} \label{sec:HIR} In this paper, we optimize MMIE for a low-latency, low-memory access implementation while keeping its power consumption below the power budget of mobile devices, limited to a few hundred mW \cite{yodann}. Fig. \ref{MMIE} shows the architecture of MMIE, which consists of three main sub-blocks: tiles, pipelining stages and a distributor unit. MMIE contains 32 reconfigurable tiles, each containing 6 PEs. Each PE is associated with a 24-bit memory of depth $L = 64$. The pipelining stages provide the required amount of shifts depending on the value of $W_f$ using shift registers and multiplexers, as discussed in Section \ref{MMIE:PT}. The distributor unit provides the required bandwidth for fully-connected weights using shift registers working at a lower frequency than the off-chip memory.\par The parameters $p$ and $N$ affect not only the latency and the number of memory accesses, but also the power and area costs. Therefore, it is possible to obtain different trade-offs between processing time, throughput and implementation costs depending on $p$ and $N$. Since the reconfigurable tile functions differently based on $W_f$ and $S$, the effective values of $N$ and $p$ vary for each case. Table \ref{eff_par} shows the effective values of $N$ and $p$ for AlexNet, VGGNet and ResNet filter sizes. The effective values of $N$ and $p$, denoted as $N_{eff}$ and $p_{eff}$ respectively, have to be used in all the equations reported in this paper that rely on these two values.
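The processing-time and memory-access expressions above can be collected into a small cost model. The sketch below is illustrative only: it mirrors Eq. (\ref{cc}) and the memory-access formulas, with the effective $(N, p)$ pairs of Table \ref{eff_par} hard-coded for convenience; function names are ours.

```python
from math import ceil

# Effective (N, p) per (W_f, S), as listed in Table "eff_par"
EFF = {(11, 4): (192, 64), (7, 2): (384, 32), (5, 1): (384, 32),
       (3, 1): (192, 64), (1, 1): (64, 192)}

def conv_cycles(W_out, H_out, W_f, H_f, S, C_in, C_out, N, p):
    """Clock cycles of one convolutional layer, following Eq. (cc)."""
    per_map = (W_out * H_out / N) * (S * N + W_f - S) * H_f * C_in
    overhead = (W_f - 1) * (H_out - 1) * H_f * C_in  # weight passing
    return (per_map + overhead) * ceil(C_out / p)

def conv_filter_accesses(W_out, H_out, W_f, H_f, C_in, C_out, N):
    """MA_filters: memory accesses by filters for one convolutional layer."""
    return H_f * W_f * C_in * ceil(W_out * H_out / N) * C_out

def fc_cycles(m, n, p):
    """Clock cycles of a fully-connected layer with n inputs, m outputs."""
    return ceil(m / p) * n

def fc_weight_accesses(m, n):
    """MA_weights: each of the m outputs needs a distinct set of n weights."""
    return m * n
```

For a $3 \times 3$, stride-1 layer, for example, the table gives $N_{eff} = 192$ and $p_{eff} = 64$, which would be passed as $N$ and $p$ above.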
MMIE was implemented in TSMC 65 nm GP CMOS technology and its layout is shown in Fig. \ref{layout}. MMIE works at a nominal frequency of 200 MHz for convolutional processes and 40 MHz for fully-connected processes. MMIE performs the fully-connected computations at a lower frequency since they require a high input bandwidth, as each neuron loads its own set of weights. We also used the run-length compression technique introduced in \cite{eyeriss1} to reduce the required bandwidth, and MMIE uses the distributor unit to decode the compressed values. Since MMIE works at a 10$\times$ lower frequency than the off-chip memory for fully-connected computations, the required bandwidth of 193 16-bit values is obtained using this technique. \subsection{Hardware Implementation Results on State-of-the-Art Networks}\label{HIR:breakdown} Fig. \ref{PE} shows the breakdown of the performance efficiency for each layer of AlexNet, VGGNet-16 and ResNet-50 when using MMIE. In our simulations, the input pixels and weight values are quantized to 16 bits with 2 and 15 fractional bits, respectively. It is worth noting that this quantization scheme results in less than 0.5\% accuracy degradation on the aforementioned CNNs using \cite{matconvnet,caffe}. The implementation results show that the lowest performance efficiency of AlexNet and VGGNet-16 is obtained at the first layer of these networks. The number of output activation maps $C_{out}$ of the first layer of AlexNet is 96, while MMIE provides 64 parallel tiles when $W_f = 11$ and $S = 4$. As a result, MMIE achieves a high performance efficiency for the first 64 output activation maps, while the remaining 32 output activation maps are computed using only 32 of the 64 parallel tiles, which explains the low performance efficiency of this layer. On the other hand, MMIE successfully performs the computations of the first layer of VGGNet-16 with a high performance efficiency.
However, since the time required for writing the computed output activation pixels is longer than the computation time, a low performance efficiency is inevitable for this layer. In ResNet-50, layers with a receptive field of $1 \times 1$ show a lower performance efficiency than the other filter sizes, even though it was shown in Section \ref{MMIE:WG1x1} that such a receptive field yields a $100\%$ performance efficiency. This degradation is expected, since $C_{out}$ of the layers with a $1 \times 1$ receptive field is not a multiple of the 192 available parallel tiles. For instance, the number of output activation maps of the second layer of ResNet-50 is 64, while 192 parallel tiles are available; therefore, 128 tiles remain unused for this layer. Fig. \ref{PC} shows the breakdown of the power consumption for each layer of AlexNet, VGGNet-16 and ResNet-50. The power consumption of MMIE follows a descending trend as the number of zeros in the input/output activation maps and filters increases across the layers of AlexNet, VGGNet-16 and ResNet-50. Moreover, it increases as the performance efficiency of the layers rises. The power numbers reported in this paper were obtained by measuring the switching activities of all models.
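For reference, the fixed-point scheme used in these simulations (16-bit values, with 15 fractional bits for weights and 2 for pixels) can be sketched generically; the helper below is illustrative and not part of the hardware:

```python
def quantize_fixed(x, total_bits=16, frac_bits=15):
    """Quantize x to a signed fixed-point grid with `frac_bits` fractional bits.

    Values are rounded to the nearest representable step and saturated to the
    two's-complement range; frac_bits=15 matches the weight format and
    frac_bits=2 the pixel format described in the text.
    """
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale
```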
\par \begin{figure*}[!t] \centering \subfigure[]{ \small \scalebox{1.2}{ \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Layer Number}, ylabel={Performance Efficiency (\%)}, xmin=1, xmax=50, legend style ={ at={(0.5,0.33)}, anchor=north west, draw=black, fill=white,align=left}, cycle list name=black white ] \addplot coordinates{ (1,62.4) (2,79.9) (3,97.3) (4,97.3) (5,97.3) (6,100) (7,100) (8,100) }; \addlegendentry{AlexNet} \addplot coordinates{ (1,34.7) (2,91) (3,90.9) (4,94.8) (5,94.8) (6,96.9) (7,96.9) (8,96.7) (9,97.9) (10,97.9) (11,97.9) (12,97.9) (13,97.9) (14,99.8) (15,100) (16,100) }; \addlegendentry{VGGNet-16} \addplot coordinates{ (1,83.1) (2,25.7) (3,91.2) (4,44.6) (5,31.1) (6,91.2) (7,44.6) (8,31.1) (9,91.2) (10,44.6) (11,51.5) (12,94.1) (13,66.9) (14,63.7) (15,94.1) (16,66.9) (17,63.7) (18,94.1) (19,66.9) (20,63.7) (21,94.1) (22,66.9) (23,60.8) (24,97.1) (25,74.3) (26,63.7) (27,97.1) (28,74.3) (29,63.7) (30,97.1) (31,74.3) (32,63.7) (33,97.1) (34,74.3) (35,63.7) (36,97.1) (37,74.3) (38,63.7) (39,97.1) (40,74.3) (41,83.6) (42,97.1) (43,89.2) (44,85.2) (45,97.1) (46,89.2) (47,85.2) (48,97.1) (49,89.2) (50,100) }; \addlegendentry{ResNet-50} \end{axis} \end{tikzpicture}} \label{PE} } \subfigure[]{ \small \scalebox{1.2}{ \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Layer Number}, ylabel={Power Consumption (mW)}, xmin=1, xmax=50, legend style ={ at={(0.5,0.33)}, anchor=north west, draw=black, fill=white,align=left}, cycle list name=black white ] \addplot coordinates{ (1,194.35) (2,259.96) (3,290.93) (4,288.7) (5,292.01) (6,40.2) (7,37.14) (8,34.89) }; \addlegendentry{AlexNet} \addplot coordinates{ (1,328.3) (2,316.86) (3,313.38) (4,309.3) (5,306.14) (6,297.67) (7,297.95) (8,301.41) (9,296.81) (10,291.95) (11,286.82) (12,290.01) (13,285.05) (14,42.7) (15,41.01) (16,36.7) }; \addlegendentry{VGGNet-16} \addplot coordinates{ (1,263.2) (2,218.17) (3,287.3) (4,265.4) (5,260.19) (6,286.9) (7,271.57) (8,264.18) (9,290.15) (10,266.44) (11,264.01)
(12,286.49) (13,262.8) (14,264.42) (15,285.35) (16,229.07) (17,228.28) (18,265.96) (19,225.17) (20,229.51) (21,272.49) (22,226.95) (23,229.6) (24,269.13) (25,228.78) (26,224.61) (27,267.05) (28,222.62) (29,225.35) (30,266.59) (31,223.62) (32,226.51) (33,268.87) (34,223.43) (35,226.41) (36,264.54) (37,224.92) (38,226.67) (39,261.32) (40,225.84) (41,226.64) (42,264.88) (43,226.84) (44,223.70) (45,262.2) (46,221.8) (47,222.68) (48,257.26) (49,223.28) (50,35.5) }; \addlegendentry{ResNet-50} \end{axis} \end{tikzpicture}} \label{PC} } \subfigure[]{ \small \scalebox{1.2}{ \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Layer Number}, ylabel={Memory Access (MB)}, xmin=1, xmax=50, ymajorgrids=true, grid style=dashed, ymode = log, log basis y = 10, legend style ={ at={(0.47,0.97)}, anchor=north west, draw=black, fill=white,align=left}, cycle list name=black white ] \addplot coordinates{ (1,3.3) (2,4.4) (3,3.5) (4,2.6) (5,1.7) (6,75.9) (7,33.7) (8,8.2) }; \addlegendentry{AlexNet} \addplot coordinates{ (1,8.2) (2,45.2) (3,22.7) (4,42.1) (5,21.4) (6,41.1) (7,41.1) (8,22.3) (9,43.9) (10,43.9) (11,14.5) (12,14.5) (13,14.5) (14,206.6) (15,33.7) (16,8.2) }; \addlegendentry{VGGNet-16} \addplot coordinates{ (1,3.3) (2,1.2) (3,2.9) (4,4) (5,3.6) (6,2.9) (7,4) (8,3.6) (9,2.9) (10,4) (11,1.4) (12,2.9) (13,3.1) (14,2.7) (15,2.9) (16,3.1) (17,2.7) (18,2.9) (19,3.1) (20,2.7) (21,2.9) (22,3.1) (23,1.5) (24,3.7) (25,3.1) (26,3) (27,3.7) (28,3.1) (29,3) (30,3.7) (31,3.1) (32,3) (33,3.7) (34,3.1) (35,3) (36,3.7) (37,3.1) (38,3) (39,3.7) (40,3.1) (41,1.4) (42,6) (43,2.8) (44,3.3) (45,6) (46,2.8) (47,3.3) (48,6) (49,2.8) (50,4.1) }; \addlegendentry{ResNet-50} \end{axis} \end{tikzpicture}} \label{MA} } \subfigure[]{ \small \scalebox{1.2}{ \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Layer Number}, ylabel={Processing Latency (ms)}, xmin=1, xmax=50, ymajorgrids=true, grid style=dashed, ymode = log, log basis y = 10, legend style ={ at={(0.47,0.97)}, anchor=north west, draw=black, 
fill=white,align=left}, cycle list name=black white ] \addplot coordinates{ (1,4.4) (2,7.3) (3,4) (4,3) (5,2) (6,4.9) (7,2.2) (8,0.5) }; \addlegendentry{AlexNet} \addplot coordinates{ (1,6.5) (2,52.9) (3,26.5) (4,50.8) (5,25.4) (6,49.7) (7,49.7) (8,24.9) (9,49.2) (10,49.2) (11,12.3) (12,12.3) (13,12.3) (14,13.4) (15,2.2) (16,0.5) }; \addlegendentry{VGGNet-16} \addplot coordinates{ (1,3.7) (2,1.3) (3,3.3) (4,3) (5,4.3) (6,3.3) (7,3) (8,4.3) (9,3.3) (10,3) (11,1.3) (12,3.2) (13,2) (14,2.1) (15,3.2) (16,2) (17,2.1) (18,3.2) (19,2) (20,2.1) (21,3.2) (22,2) (23,1.1) (24,3.1) (25,1.8) (26,2.1) (27,3.1) (28,1.8) (29,2.1) (30,3.1) (31,1.8) (32,2.1) (33,3.1) (34,1.8) (35,2.1) (36,3.1) (37,1.8) (38,2.1) (39,3.1) (40,1.8) (41,0.8) (42,3.1) (43,1.5) (44,1.57) (45,3.1) (46,1.5) (47,1.57) (48,3.1) (49,1.5) (50,0.3) }; \addlegendentry{ResNet-50} \end{axis} \end{tikzpicture}} \label{PT} } \caption{(a) The performance efficiency, (b) power consumption, (c) memory access and (d) computation latency breakdowns of AlexNet, VGGNet-16 and ResNet-50 at 200 MHz for convolutional processes and 40 MHz for fully-connected computations in TSMC 65 nm CMOS technology.} \label{fig6} \end{figure*} Fig. \ref{MA} shows the breakdown of the memory accesses for each layer of AlexNet, VGGNet-16 and ResNet-50. The memory accesses for each layer of the aforementioned networks are limited to a few MB. More precisely, AlexNet and ResNet-50 layers require a lower number of memory accesses compared to VGGNet-16. While the memory accesses for each layer of AlexNet and ResNet-50 are roughly in the same order, the total memory accesses of ResNet-50 are significantly more due to its numerous layers. The processing latency of each layer also follows a similar trend to the memory accesses as shown in Fig. \ref{PT}. In fact, the latency of each layer in AlexNet and ResNet-50 is limited to a few milliseconds while each layer of VGGNet-16 requires roughly 10$\times$ more clock cycles. 
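Converting the cycle counts of Section \ref{MMIE:PTMA} into the latencies plotted above is simply a matter of dividing by the clock frequency; a trivial helper (illustrative only, using the nominal frequencies stated earlier):

```python
def latency_ms(clock_cycles, freq_mhz=200):
    """Latency in milliseconds at the given clock frequency (in MHz).

    MMIE runs convolutions at 200 MHz and fully-connected layers at 40 MHz,
    so freq_mhz=200 or freq_mhz=40 covers the two operating modes.
    """
    return clock_cycles / (freq_mhz * 1e3)  # freq_mhz * 1e3 cycles per ms
```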
\subsection{Comparison With State-of-the-Art Implementations}\label{HIR:comp} The implementation results of MMIE on AlexNet, VGGNet-16 and ResNet-50 are shown in Table \ref{tab_res}. As discussed in Section \ref{intro}, the MCR of these networks varies depending on their sizes. Therefore, different implementation results are expected when running MMIE on these models. MMIE performs the convolutional and fully-connected computations of AlexNet within 20.8 ms and 7.6 ms while requiring 15.6 MB and 117.8 MB of off-chip memory accesses, respectively. The convolutional and fully-connected processes of VGGNet-16 are performed within 421.8 ms and 16.4 ms and require 375.5 MB and 247.3 MB of memory accesses, respectively. Finally, performing the convolutional and fully-connected computations of ResNet-50 on MMIE requires 103.6 ms and 0.3 ms, while the memory accesses are 154.6 MB and 4.1 MB, respectively. Therefore, AlexNet computations require the lowest latency while its total memory accesses are roughly similar to those of ResNet-50. VGGNet-16 is the most complex network in terms of both processing latency and memory accesses. MMIE also yields 83\%, 94\% and 88\% performance efficiency for the convolutional computations of AlexNet, VGGNet-16 and ResNet-50, respectively.
It is worth mentioning that the performance efficiency of fully-connected computations is roughly 100\% for all the aforementioned networks.\par \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \renewcommand{\thefootnote}{\alph{footnote}} \caption{Comparison of the Baseline Architecture with State-of-the-art Implementations.} \centering \scalebox{0.6}{ \large \begin{threeparttable} \centering \begin{tabular}{c | c c c c c c c c c c} \hline\hline \multicolumn{1}{l}{Reference} & ISSCC'17 \cite{DNPU} & \multicolumn{2}{c}{ISSCC'17 \cite{moons_new}} & \multicolumn{2}{c}{JSSC'17 \cite{eyeriss1}} & TCAS'17 \cite{TCAS} & \multicolumn{3}{c}{This work}\\ \hline \multicolumn{1}{l}{Technology} & NA/65 nm & \multicolumn{2}{c}{UTBB FD-SOI/28 nm} & \multicolumn{2}{c}{TSMC/65 nm} & TSMC/65 nm & \multicolumn{3}{c}{TSMC/65 nm} \\ \multicolumn{1}{l}{Gate Count\tnote{*} (NAND-2)} & NA & \multicolumn{2}{c}{1950k} & \multicolumn{2}{c}{1852k} & 1117k & \multicolumn{3}{c}{1036k}\\ \multicolumn{1}{l}{Core Area\tnote{*} (mm$^2$)} & 16 & \multicolumn{2}{c}{1.87} & \multicolumn{2}{c}{12.52} & 3.5 & \multicolumn{3}{c}{6 (2.45$\times$2.45)}\\ \multicolumn{1}{l}{\# PE} & 768(16b)-3072(4b)\tnote{c}, 64\tnote{f} & \multicolumn{2}{c}{256(16b)-1024(4b)} & \multicolumn{2}{c}{168} & 192 & \multicolumn{3}{c}{192\tnote{c,f}}\\ \multicolumn{1}{l}{On-chip SRAM (kB)} & 290 & \multicolumn{2}{c}{144} & \multicolumn{2}{c}{181.5} & 86 & \multicolumn{3}{c}{36.9}\\ \multicolumn{1}{l}{Nominal Frequency (MHz)} & 50-200\tnote{c,f} & \multicolumn{2}{c}{200\tnote{c}} & \multicolumn{2}{c}{250\tnote{c}} & 200\tnote{c} & \multicolumn{3}{c}{200\tnote{c}, 40\tnote{f}}\\ \multicolumn{1}{l}{Peak Performance (Gops)} & 300(16b)-1200(4b)\tnote{c}, 25\tnote{f} & \multicolumn{2}{c}{102(16b)-408(4b)\tnote{c}} & \multicolumn{2}{c}{84\tnote{c}} & 76\tnote{c} & \multicolumn{3}{c}{76.8\tnote{c}, 15.4\tnote{f}}\\ \multicolumn{1}{l}{Bitwidth (bits)} & 4-16 programmable\tnote{c}, 4-7\tnote{f} & \multicolumn{2}{c}{1-16 
programmable\tnote{c}} & \multicolumn{2}{c}{16 fixed\tnote{c}} & 16 fixed\tnote{c} & \multicolumn{3}{c}{16 fixed\tnote{c,f}}\\ \hline \multicolumn{1}{l}{CNN type for ImageNet} & AlexNet & AlexNet & VGG-16 & AlexNet & VGG-16 & VGG-16 & AlexNet & VGG-16 & ResNet-50\\ \multicolumn{1}{l}{Top-1 Error (\%)} & 42.9 & 42.9 & 27 & 42.9 & 27 & 27 & 42.9 & 27 & 20\\ \multicolumn{1}{l}{Voltage (V)} & 0.77-1.1 & NA & NA & 1 & 1 & 1 & 1 & 1 & 1 \\ \multicolumn{1}{l}{Power\tnote{*} (mW)} & 63\tnote{c}, 3.5\tnote{f} & 44\tnote{c} &26\tnote{c} & 278\tnote{c} & 236\tnote{c} & 260\tnote{c} & 265\tnote{c}, 37\tnote{f} & 301\tnote{c}, 40\tnote{f} & 248\tnote{c}, 35.5\tnote{f}\\ \multicolumn{1}{l}{Total Latency (ms)} & 5.7\tnote{c}, 0.8\tnote{f} & 21.3\tnote{c} & 598.8\tnote{c} & 115.3\tnote{c} & 4309.5\tnote{c} & 453.3\tnote{c} & 20.8\tnote{c}, 7.6\tnote{f} & 421.8\tnote{c}, 16.4\tnote{f} & 103.6\tnote{c}, 0.3\tnote{f}\\ \multicolumn{1}{l}{Throughput (fps)} & 177\tnote{c}, 1.2k\tnote{f} & 47\tnote{c} & 1.67\tnote{c} & 34.7\tnote{c} & 0.7\tnote{c} & 2.21\tnote{c} & 48.1\tnote{c}, 131.6\tnote{f} & 2.2\tnote{c}, 61\tnote{f} & 9.6\tnote{c}, 3.3k\tnote{f}\\ \multicolumn{1}{l}{Performance (Gops)} & 235.4\tnote{c}, 140.6\tnote{f} & 62.6\tnote{c} & 51.3\tnote{c} & 46.1\tnote{c} & 21.4\tnote{c} & 67.7\tnote{c} & 63.9\tnote{c}, 15.4\tnote{f} & 72.5\tnote{c}, 15.1\tnote{f} & 74.5\tnote{c}, 15\tnote{f}\\ \multicolumn{1}{l}{Performance Efficiency} & 50\%\tnote{c}, 562\%\tnote{f} & 38\%\tnote{c} & 32\%\tnote{c} & 55\%\tnote{c} & 26\%\tnote{c} & 89\%\tnote{c} & 83\%\tnote{c}, 100\%\tnote{f} & 94\%\tnote{c}, 98\%\tnote{f} & 88\%\tnote{c}, 97\%\tnote{f}\\ \multicolumn{1}{l}{Energy-Efficiency\tnote{*} (Gops/W)} & 4200\tnote{c}, 40.2k\tnote{f} & 1423\tnote{c} & 1973\tnote{c} & 166\tnote{c} & 90.7\tnote{c} & 260.4\tnote{c} & 241.1\tnote{c}, 416.2\tnote{f} & 240.9\tnote{c}, 377.5\tnote{f} & 300.4\tnote{c}, 422.5\tnote{f}\\ \multicolumn{1}{l}{Memory Access / Batch (MB)} & NA & NA & NA & 15.4\tnote{c} & 
321.1\tnote{c} & 331.7\tnote{c} & 15.6\tnote{c}, 117.8\tnote{f} & 375.5\tnote{c}, 247.3\tnote{f} & 154.6\tnote{c}, 4.1\tnote{f}\\ \hline \hline \end{tabular} \begin{tablenotes} \normalsize \item[*] Including on-chip SRAM. \item[f] Fully-connected. \item[c] Convolutional. \end{tablenotes} \end{threeparttable}} \label{tab_res} \vspace{-10pt} \end{table*} During the past few years, numerous works have been conducted towards ASIC implementations of DNNs. However, most of them were only tested on either small datasets or outdated CNNs, which require orders of magnitude fewer parameters and computations \cite{dac,origami2,origami3,yodann,conv_imp1}. Recently, Google released a custom DNN accelerator, the tensor processing unit (TPU) \cite{TPU}. The TPU is a programmable and reconfigurable accelerator that can perform both fully-connected and convolutional computations. However, its power consumption exceeds the power budgets of embedded devices \cite{TCAS}. In \cite{eyeriss1,eyeriss}, a convolutional accelerator, called Eyeriss, was introduced. Eyeriss was fabricated in 65 nm CMOS technology and tested on AlexNet and VGGNet-16. Eyeriss uses large batch sizes to obtain a lower number of memory accesses, but this method results in a higher computational latency. Eyeriss performs the convolutional computations of AlexNet and VGGNet-16 in 115.3 ms and 4.3 s while requiring 15.4 MB and 321.1 MB of memory accesses and using batch sizes of 4 and 3, respectively. Its performance efficiency is also limited to only 55\% and 26\% on AlexNet and VGGNet-16, resulting in a large silicon area of 12.52 mm$^2$ (1852 kgates). Eyeriss also uses clock gating to reduce its power consumption. \par Recently, a few works have focused on minimizing energy by modulating the precision, frequency and supply voltage of the accelerator for each convolutional layer \cite{moons1,moons2, moons_new}.
In \cite{moons_new}, a precision-scalable convolutional accelerator, fabricated in 28 nm UTBB FD-SOI technology, was introduced. This architecture dynamically adapts itself to the required precision of each layer, instead of using a fixed precision. More precisely, it exploits a reconfigurable multiplier which is able to perform one 16-bit, two 8-bit or four 4-bit multiplications, depending on the required precision. As a result, the dynamic fixed-point technique makes it possible to scale the frequency and supply voltage over time, which results in a lower power/energy consumption. This accelerator performs the convolutional computations of AlexNet in 21.3 ms, and those of VGGNet-16 in 598.8 ms, while its performance efficiency is limited to 38\% and 32\% on average, respectively. Similar to Eyeriss, the low performance efficiency of this architecture results in a large gate count of 1950 kgates.\par In \cite{DNPU}, a DNN accelerator, fabricated in a 65 nm CMOS 1P8M process, was introduced. This accelerator can perform both fully-connected and convolutional computations, using two separate cores and the dynamic fixed-point technique to minimize power/energy consumption. It exploits a reconfigurable 16-bit multiplier for convolutional processes, which allows it to work at a lower frequency and supply voltage. This architecture performs convolutional and fully-connected computations within 5.7 ms and 833 $\mu$s, respectively. The convolutional core contains 768 16-bit reconfigurable PEs, which can be used as 3072 4-bit PEs. Despite its high convolutional throughput, its performance efficiency is limited to $50\%$ on average. The fully-connected core contains only 64 PEs, and uses a quantization table-based matrix multiplication to reduce off-chip memory accesses and remove redundancy. This technique reduces the memory accesses by $75\%$ and avoids $90\%$ of the 16-bit fixed-point multiplications in fully-connected computations \cite{DNPU}.
While the fully-connected core is highly optimized, it requires separate PEs and hardware resources, which leads to a large silicon area of 16 mm$^2$.\par In \cite{TCAS}, a convolutional accelerator was proposed as a first attempt to improve the performance efficiency for filters fixed to $3 \times 3$. This architecture performs the convolutional computations of VGGNet-16 within 453.3 ms and requires 331.7 MB of memory accesses. In this paper, we proposed MMIE, which supports all the filter sizes that require at most 6 parallel PEs in each tile. MMIE can perform both the convolutional and fully-connected computations using the same PEs. Since both Eyeriss and MMIE were implemented in TSMC 65 nm CMOS technology and use 16-bit fixed-point representations, a direct comparison of these two implementations is fair. As shown in Table \ref{tab_res}, MMIE outperforms Eyeriss \cite{eyeriss1} in terms of gate count (1.8$\times$ smaller), latency (5.5$\times$ and 10.2$\times$ lower), throughput (1.4$\times$ and 3.1$\times$ faster), performance efficiency (1.5$\times$ and 3.6$\times$ better) and energy efficiency (1.5$\times$ and 2.7$\times$ higher), while requiring roughly the same number of memory accesses per batch. It is worth noting that a direct comparison of MMIE with the works published in \cite{moons_new,DNPU} is not fair, since they dynamically modulate precision, frequency and supply voltage and use more advanced technology nodes, which allows them to instantiate more PEs while keeping a low power/energy consumption. However, the introduced performance efficiency metric can be used for a fair comparison, as it reflects the performance of the accelerators independently of their technology nodes, precisions and optimization techniques.
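As a sanity check, the comparison factors against Eyeriss quoted above follow directly from the entries of Table \ref{tab_res}; the snippet below (values copied from the table, convolutional mode) recomputes them:

```python
# Table entries for Eyeriss and MMIE (convolutional mode)
eyeriss = {"gates_k": 1852, "lat_alex_ms": 115.3, "lat_vgg_ms": 4309.5,
           "fps_alex": 34.7, "fps_vgg": 0.7, "eff_alex": 55, "eff_vgg": 26,
           "gops_w_alex": 166, "gops_w_vgg": 90.7}
mmie = {"gates_k": 1036, "lat_alex_ms": 20.8, "lat_vgg_ms": 421.8,
        "fps_alex": 48.1, "fps_vgg": 2.2, "eff_alex": 83, "eff_vgg": 94,
        "gops_w_alex": 241.1, "gops_w_vgg": 240.9}

ratios = {
    "gate count":      eyeriss["gates_k"] / mmie["gates_k"],          # ~1.8x
    "AlexNet latency": eyeriss["lat_alex_ms"] / mmie["lat_alex_ms"],  # ~5.5x
    "VGG-16 latency":  eyeriss["lat_vgg_ms"] / mmie["lat_vgg_ms"],    # ~10.2x
    "AlexNet fps":     mmie["fps_alex"] / eyeriss["fps_alex"],        # ~1.4x
    "VGG-16 fps":      mmie["fps_vgg"] / eyeriss["fps_vgg"],          # ~3.1x
    "AlexNet eff.":    mmie["eff_alex"] / eyeriss["eff_alex"],        # ~1.5x
    "VGG-16 eff.":     mmie["eff_vgg"] / eyeriss["eff_vgg"],          # ~3.6x
    "AlexNet Gops/W":  mmie["gops_w_alex"] / eyeriss["gops_w_alex"],  # ~1.5x
    "VGG-16 Gops/W":   mmie["gops_w_vgg"] / eyeriss["gops_w_vgg"],    # ~2.7x
}
```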
Therefore, MMIE achieves a better performance efficiency than the works published in \cite{DNPU} (2$\times$ better) and \cite{moons_new} (2.2$\times$ and 2.9$\times$ better) when performing the convolutions of AlexNet and VGGNet-16. \section{Conclusion}\label{conclusion} CNN accelerators in the literature promise a high peak throughput, but their performance is limited to less than 55\% of that peak when running state-of-the-art networks such as AlexNet, VGGNets and ResNets. We proposed a dataflow inspired by fully-connected computations to perform both convolutional and fully-connected processes with a high utilization factor. We then introduced a multi-mode inference engine (MMIE) based on the proposed dataflow and theoretically formalized its implementation performance. Finally, we implemented MMIE in TSMC 65 nm CMOS technology and tested it on three state-of-the-art networks: AlexNet, VGGNet-16 and ResNet-50. The implementation results show that MMIE performs both the fully-connected and convolutional computations with a performance efficiency no lower than 83\%, outperforming the state of the art also in terms of area occupation. \bibliographystyle{ieeetr}
\section{Introduction} A harmonic polynomial is a complex-valued harmonic function given by: \[H_{n,m}(z)=p(z)+q(\overline{z}),\] where $p$ and $q$ are analytic complex polynomials of degree $n$ and $m$, respectively, and $\overline{z}$ denotes the complex conjugate of $z$. Since $H$ is not analytic, the Fundamental Theorem of Algebra does not apply, and it is natural to ask how many zeros $H$ can have, i.e., how many solutions there are to $H_{n,m}(z)=0$. Assuming $n\ge m$ and denoting by $\mathcal{N}(T)$ the number of zeros of $H$ in a domain $T\subseteq\mathbb{C}$, we can bound $\mathcal{N}$ for a generic choice of $p$ and $q$ as follows: \[n\le\mathcal{N}(\mathbb{C})\le n^2.\] The upper bound is due to Wilmshurst, who applied Bezout's Theorem to the real and imaginary parts of $H(z)=0$ \cite{Wilmshurst}. The lower bound is a consequence of a generalized argument principle. In fact, these bounds are sharp for each $n$, though for $m=1$ the upper bound has been improved to $3n-2$ \cite{Khavinson}, and it has been conjectured for fixed $m$ that the upper bound is linear in $n$ \cite{Saez}, \cite{Lee}, \cite{Wilmshurst}. Given the wide range of values that $\mathcal{N}$ can take and the lack of an explicit formula in $n$ and $m$ for $\mathcal{N}$, the next question is: given an ``arbitrary'' harmonic polynomial, what is the expected value of $\mathcal{N}$? The question has been well studied in the framework of analytic polynomials. Kac initiated the study by deriving an explicit formula for the expected number of zeros of a random real polynomial \cite{Kac}. Subsequently, other authors developed similar formulae for trigonometric polynomials \cite{Dunnage}, complex polynomials over an arbitrary domain \cite{Vanderbei}, and polynomial vector fields \cite{Azais}. In the context of harmonic polynomials, Li and Wei derived an explicit formula for $\mathbb{E}[\mathcal{N}_{n,m}]$ when the coefficients are independent complex Gaussians \cite{LiWei}.
Moreover, they showed that if $H$ satisfies \begin{equation} \label{int:harmonic} H_{n,m}(z)=\sum_{j=0}^na_jz^j+\sum_{j=0}^mb_j\overline{z}^j \end{equation} with $0\le m\le n$, where $a_j$ and $b_j$ are complex Gaussian random variables satisfying: \begin{equation}\label{int:exp} \mathbb{E}[a_j]=\mathbb{E}[b_j]=0\text{ , } \mathbb{E}[a_j\overline{a}_k]=\delta_{jk}\binom{n}{j}\text{ , and } \mathbb{E}[b_j\overline{b}_k]=\delta_{jk}\binom{m}{j} \end{equation} then as $n\rightarrow\infty$, \begin{equation*} \mathbb{E}[\mathcal{N}_{n,m}]\sim\left\{ \begin{array}{ll} \frac{\pi}{4}n^{3/2}&m=n \\n&m=\alpha n+o(n),\ \alpha\in[0,1). \end{array}\right. \end{equation*} Notice that in the case where $m=\alpha n$ the modulus of $p$ is much larger than that of $q$, so that $H$ tends toward an analytic polynomial as $n$ increases. Similarly, in this case $H$ asymptotically obeys the fundamental theorem of algebra. A related result was proved by Lerario and Lundberg. Choosing a slightly different definition of ``random,'' Lerario and Lundberg showed that if in (\ref{int:exp}) one instead defines $\mathbb{E}[b_j\overline{b}_k]=\delta_{jk}\binom{j}{k}$ then \[\mathbb{E}[\mathcal{N}_{n,m}]\sim c_\alpha n^{3/2}\qquad\text{when $m=\alpha n$}\] where $c_\alpha$ is a constant depending only on $\alpha\in(0,1]$ \cite{Lundberg}. Moreover, $c_\alpha\rightarrow\frac{\pi}{4}$ as $\alpha\rightarrow1$, giving the asymptotic value of $\mathbb{E}[\mathcal{N}_{n,m}]$ a satisfying continuity in $\alpha$, a property shared by the particular model of random $H$ that is the focus of this article. A well-studied choice of random coefficients for polynomials is for $a_j$ and $b_j$ to be i.i.d. complex Gaussians (e.g. $\mathbb{E}[a_j\overline{a}_k]=\mathbb{E}[b_j\overline{b}_k]=\delta_{jk}$). The first author provided asymptotics for the expected number of zeros in the cases when $m$ is fixed and when $m = n$ \cite{Thomack}.
Specifically, \[\mathbb{E}[\mathcal{N}_{n,m}]\sim n\text{ as $n\rightarrow\infty$ and $m$ is fixed,}\] and there exist $c_1,c_2>0$ such that for large $n$ \[c_1n\log n\le\mathbb{E}[\mathcal{N}_{n,n}]\le c_2n\log n.\] In this paper, we introduce another Gaussian model of harmonic polynomials, obtained by independently sampling $p$ and $q$ from the Weyl model. That is, we consider the polynomials satisfying (\ref{int:harmonic}) where $a_j$ and $b_j$ are complex Gaussian random variables satisfying: \begin{equation} \mathbb{E}[a_j]=\mathbb{E}[b_j]=0\text{ , }\mathbb{E}[a_j\overline{a}_k] =\delta_{jk}\frac{1}{j!}\text{ , and }\mathbb{E}[b_j\overline{b}_k] =\delta_{jk}\frac{1}{j!}. \end{equation} The number of zeros when $m$ is fixed resembles that of the analogous complex analytic Weyl polynomials studied in \cite{Tao}. The real Weyl polynomials have been found to have an expected number of real zeros asymptotic to $\sqrt{n}$ \cite{Kostlan}. The formula for the expected number of zeros obtained by Li and Wei can be extended to more general Gaussian models of harmonic polynomials (see Theorem \ref{thmLiWeiGeneral} below), including the Weyl model. Using this result along with classical methods in the asymptotic analysis of integrals, we prove the following: \begin{theorem}\label{MainResult} If $\mathcal{N}_{n,m}$ denotes the number of zeros of a random Weyl polynomial, then \begin{equation} \label{MainThm} \mathbb{E}\left[\mathcal{N}_{n,m}(\mathbb{C})\right]\sim\left\{ \begin{array}{ll} n&m\text{ is fixed,} \\\frac{1}{3}m^{3/2}&m=\alpha n+O(1),\ \alpha\in(0,1]. \end{array}\right. \end{equation} \end{theorem} Moreover, the proof of the theorem provides detailed information on the so-called ``first intensity'', the average local density of zeros, which is determined by the integrand appearing in the Kac-Rice formula stated in Theorem \ref{thmLiWeiGeneral}. We observe distinct behavior over three regions (see Figure \ref{denistyPlots}).
The high density in the central core is particularly striking and provokes further study. It would be desirable to find a heuristic explanation of this phenomenon. \begin{figure}[h] \begin{subfigure}[b]{.35\textwidth} \includegraphics[width=\textwidth]{./img/DensityPlot1664crop} \caption{$m=16$, $n=64$} \end{subfigure} \begin{subfigure}[b]{.35\textwidth} \includegraphics[width=\textwidth]{./img/DensityPlot25100crop} \caption{$m=25$, $n=100$} \end{subfigure} \caption{The first intensity function as a density plot for two values of $n$ and $m=0.25n$. In both plots we see three regions: a disc of radius $\sqrt{m}$ containing a high density of zeros, an annulus between radii $\sqrt{m}$ and $\sqrt{n}$ with a less dense, almost constant, distribution of zeros, and the complement of the disc of radius $\sqrt{n}$, which has an extremely low density of zeros.} \label{denistyPlots} \end{figure} \section{A Kac-Rice formula for Gaussian harmonic polynomials} \begin{theorem}\label{thmLiWeiGeneral} The expectation $\mathbb{E}\mathcal{N}_H(T)$ of the number of zeros of \[H_{n,m}(z)=\sum_{j=0}^na_jz^j+\sum_{j=0}^mb_j\overline{z}^j\] with $\mathbb{E}a_j=\mathbb{E}b_j=0$ and $\mathbb{E}a_j\overline{a}_j=\alpha_j$ and $\mathbb{E}b_j\overline{b}_j=\beta_j$ on a domain $T\subset\mathbb{C}$ is given by: \begin{equation} \label{LiWeiGeneral} \mathbb{E}\mathcal{N}_H(T)=\frac{1}{\pi}\int_T\frac{1}{\lvert z\rvert^2} \frac{r_1^2+r_2^2-2r_{12}^2}{r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}dA(z), \end{equation} where $dA(z)$ denotes the Lebesgue measure on the plane, and \begin{align*} r_3=&\sum_{j=0}^n\alpha_j\lvert z\rvert^{2j} +\sum_{j=0}^m\beta_j\lvert z\rvert^{2j},\qquad\qquad\quad r_{12}=\left(\sum_{j=1}^nj\alpha_j\lvert z\rvert^{2j}\right) \left(\sum_{j=1}^mj\beta_j\lvert z\rvert^{2j}\right), \\r_1=&r_3\sum_{j=1}^nj^2\alpha_j\lvert z\rvert^{2j} -\left(\sum_{j=1}^nj\alpha_j\lvert z\rvert^{2j}\right)^2,\ \ r_2=r_3\sum_{j=1}^mj^2\beta_j\lvert z\rvert^{2j} -\left(\sum_{j=1}^mj\beta_j\lvert z\rvert^{2j}\right)^2.
\end{align*} \end{theorem} This theorem is slightly more general than the versions stated in \cite{LiWei} and \cite{Lundberg}: the specific variances of $a_j$ and $b_j$ are replaced by arbitrary positive values $\alpha_j$ and $\beta_j$, though the same proof suffices. \section{Proof of Theorem \ref{MainResult}} Our strategy for finding the asymptotic value of $\mathbb{E}\mathcal{N}_H$ as $n\rightarrow\infty$ is to find an appropriate change of variables, and then use the Lebesgue Dominated Convergence Theorem, or the following generalization of it from \cite[\S4.4]{Royden}: \begin{theorem}[General Lebesgue Dominated Convergence Theorem]\label{GLDCT} Let $\{f_n\}$ be a sequence of measurable functions on $E$ that converges pointwise a.e. on $E$ to $f$. Suppose there is a sequence $\{g_n\}$ of nonnegative measurable functions on $E$ that converges pointwise a.e. on $E$ to $g$ and dominates $\{f_n\}$ on $E$ in the sense that $\lvert f_n\rvert\le g_n$ on $E$ for all $n$. If \[\lim_{n\rightarrow\infty}\int_Eg_n=\int_Eg<\infty,\quad\text{then}\quad\lim_{n\rightarrow\infty}\int_Ef_n=\int_Ef.\] \end{theorem} We proceed by finding the limit of the integrand, then we show that we can find a sequence of integrable bounding functions satisfying the theorem above. We first apply Theorem \ref{thmLiWeiGeneral} and introduce some useful notation. Let \[a_n=\sum_{j=0}^n\frac{\lvert z\rvert^{2j}}{j!},\quad b_n=\sum_{j=1}^nj\frac{\lvert z\rvert^{2j}}{j!},\quad c_n=\sum_{j=1}^nj^2\frac{\lvert z\rvert^{2j}}{j!}.\] Then we may write $r_1$, $r_2$, $r_3$, and $r_{12}$ as follows \begin{equation*} r_3=a_n+a_m,\quad r_{12}=b_nb_m,\quad r_1=r_3c_n-b_n^2,\quad r_2=r_3c_m-b_m^2. \end{equation*} Throughout the proof we may apply changes of variables, in which case it is understood that $a_n$, $b_n$, and $c_n$ denote the same functions with the appropriate change of variables applied. \subsection{Pointwise limit of the first intensities} We begin by addressing the case when $m=\alpha n$, $\alpha\in(0,1]$.
We first notice that we have some convenient relationships between $a_k$, $b_k$, and $c_k$. \begin{equation} b_k=\lvert z\rvert^2a_{k-1}\qquad c_k=\lvert z\rvert^4a_{k-2}+\lvert z\rvert^2a_{k-1} \end{equation}Since the limit of $a_n$ is equal to that of $a_{n-1}$, $a_{n-2}$, $a_m$, $a_{m-1}$, and $a_{m-2}$ we get that as $n\rightarrow\infty$ \begin{equation} \frac{r_1^2+r_2^2-2r_{12}^2}{\lvert z\rvert^2r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}\rightarrow\frac{\sqrt{\lvert z\rvert^2+1}}{2}. \end{equation} However, this pointwise convergence is not dominated by an integrable function. In order to obtain a dominated convergence we will need to perform a change of variable and divide by the constant $n^{3/2}$ which allows us to integrate the limit of the new integrand. We perform the change of variables, first from Cartesian to polar, \begin{equation*} \frac{1}{\pi}\int_{\mathbb{C}}\frac{r_1^2+r_2^2-2r_{12}^2}{\lvert z\rvert^2r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}dA(z)=2\int_0^\infty\frac{r_1^2+r_2^2-2r_{12}^2}{\lvert z\rvert r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}d\lvert z\rvert, \end{equation*} and then using the change of variables $\lvert z\rvert^2=nt$ this becomes \begin{equation}\label{MainInt} \int_0^\infty\frac{r_1^2+r_2^2-2r_{12}^2}{t r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}dt, \end{equation} where $r_1$, $r_2$, $r_3$ and $r_{12}$ are defined as before with $nt$ substituted for $\lvert z\rvert^2$. Note that with this change of variable, the notation $a_n$ becomes ambiguous, thus we clarify that the $n$ in $a_n$ refers only to the upper bound in the index of the sum, that is, $a_{n-1}=\sum_{j=0}^{n-1}(nt)^j/j!$. We use the incomplete Gamma function in combination with Laplace's method to evaluate the limit of the integrand. 
We have the following identity for the incomplete Gamma function when $n\in\mathbb{N}$, \[\Gamma(n,t):=\int_t^\infty x^{n-1}e^{-x}dx=(n-1)!e^{-t}\sum_{j=0}^{n-1}\frac{t^j}{j!}\] which implies \[\sum_{j=0}^n\frac{t^j}{j!}=\frac{e^t\Gamma(n+1,t)}{n!}\] and, for a function $k=k(n)$ which is $O(1)$ as $n\rightarrow \infty$ and such that $\alpha n-k\in\mathbb{N}$, \begin{equation} \label{incGamma} \sum_{j=0}^{\alpha n-k}\frac{(nt)^j}{j!} =\frac{e^{nt}\Gamma(\alpha n-k+1,nt)}{(\alpha n-k)!} =\frac{e^{nt}}{(\alpha n-k)!}\int_{nt}^\infty x^{\alpha n-k}e^{-x}dx. \end{equation} We make the change of variables $x=ny$: \begin{equation} \label{GammaChange} \int_t^\infty(ny)^{\alpha n-k}e^{-ny}n\ dy =n^{\alpha n-k+1}\int_t^\infty e^{(\alpha \ln y-y-\frac{k}{n}\ln y)n}dy. \end{equation} We let $g(y)=\alpha \ln y-y-\frac{k}{n}\ln y$, so that $g'(y)=\frac{1}{y}\left(\alpha-\frac{k}{n}\right)-1$ vanishes only at $y=\alpha-\frac{k}{n}$, where $g$ attains its maximum. For $t<\alpha$ and $k$ independent of $t$, there is a large enough $n$ so that $\alpha-\frac{k}{n}\in(t,\infty)$. We may then use the Laplace method. \begin{align*} \int_t^\infty e^{(\alpha\ln y-y-\frac{k}{n}\ln y)n}dy\sim &\int_{-\infty}^\infty e^{n\left(\left(\alpha-\frac{k}{n}\right) \left(\ln\left(\alpha-\frac{k}{n}\right)-1\right) -\frac{1}{2}\frac{1}{\alpha-\frac{k}{n}} \left(y-\alpha+\frac{k}{n}\right)^2\right)}dy \\=&e^{n\left(\left(\alpha-\frac{k}{n}\right) \left(\ln\left(\alpha-\frac{k}{n}\right)-1\right)\right)} \int_{-\infty}^\infty e^{\frac{-\left(y-\alpha+\frac{k}{n}\right)^2} {2\frac{\alpha n-k}{n^2}}}dy \\=&e^{n\left(\left(\alpha-\frac{k}{n}\right) \left(\ln\left(\alpha-\frac{k}{n}\right)-1\right)\right)} \frac{\sqrt{2\pi(\alpha n-k)}}{n}.
\end{align*} Combining the above asymptotic with (\ref{incGamma}) and (\ref{GammaChange}), it follows from Stirling's formula that \[\lim_{n\rightarrow\infty}\frac{a_{\alpha n-k}}{e^{nt}}=\lim_{n\rightarrow\infty}\frac{a_{n}}{e^{nt}}=1,\] where we simply plug in $1$ for $\alpha$ and $0$ for $k$ to find $a_n$. Then $b_{\alpha n}\sim b_n\sim nte^{nt}$ and $c_{\alpha n}\sim c_n\sim(nt+n^2t^2)e^{nt}$ and \[\lim_{n\rightarrow\infty}\frac{r_1^2+r_2^2-2r_{12}^2}{n^{3/2}t r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}=\frac{1}{2}\sqrt{t},\quad\text{for $t<\alpha$}.\] For $t>\alpha$ we will use the endpoint analysis of the Laplace method as outlined in ~\cite[\S3.3]{Miller}. We begin by using the incomplete gamma function as before, defining $\kappa=\alpha+O(\frac{1}{n})$ so that $\kappa n\in\mathbb{N}$, \[a_{\kappa n}=\frac{e^{nt}n^{\kappa n+1}}{(\kappa n)!}\int_{t}^\infty e^{n(\kappa\log y-y)}dy.\] Now set $g(y)=\kappa\log y-y$. For large enough $n$ we have $\kappa<t$, and hence $g'(y)=\frac{\kappa}{y}-1<0$ on $[t,\infty)$, so that $g$ is maximized at the endpoint $t$. Let $\tilde{g}(y)=g(y)-g(t)$. Then $a_{\kappa n}$ can be rewritten as \[a_{\kappa n}=\frac{n^{\kappa n+1}t^{\kappa n}}{(\kappa n)!}\int_{0}^\infty e^{n\tilde{g}(t+y)}dy.\] Then there is a function, $y(s)$, guaranteed by the implicit function theorem, such that $\tilde{g}(t+y(s))=-s$. We use this as a change of variables and obtain \[a_{\kappa n}=\frac{n^{\kappa n+1}t^{\kappa n}}{(\kappa n)!}\int_{0}^\infty e^{-ns}y'(s)ds\] which is in the form necessary to use Watson's lemma, which we restate in our specific context. \begin{theorem} Suppose $y'(s)$ is smooth in a neighborhood of $s=0$ and absolutely integrable on $[0,\infty)$.
Then the exponential integral \[F(n):=\int_0^\infty e^{-ns}y'(s)ds\] is finite for all $n>0$ and it has the asymptotic expansion \[F(n)\sim\sum_{k=0}^\infty\frac{y^{(k+1)}(0)} {n^{k+1}}\quad\text{as $n\rightarrow\infty$.}\] \end{theorem} One can solve for the derivatives of $y(s)$ by recursively using Taylor polynomials, though the first three in terms of $g(y)$ and its derivatives are computed in ~\cite{Miller}. \[y'(0)=\frac{-1}{g'(t)}=-\frac{t}{\kappa-t},\qquad y''(0)=-\frac{g''(t)}{g'(t)^3}=\frac{\kappa t}{(\kappa-t)^3},\] \[y'''(0)=\frac{g'''(t)g'(t)-3g''(t)^2}{g'(t)^5}=\frac{t}{(\kappa-t)^5}\left(2(\kappa-t)-3\kappa^2\right)\] thus \begin{equation}\label{Milleran} a_{\kappa n}=\frac{(nt)^{\kappa n}}{(\kappa n)!} \left(\frac{t}{t-\kappa}-\frac{\kappa t}{(t-\kappa)^3}\frac{1}{n} -\frac{t(2\kappa-3\kappa^2-2t)}{(t-\kappa)^5}\frac{1}{n^2}+O(n^{-3})\right) \end{equation} as $n\rightarrow\infty$. It follows that \begin{align} b_{\kappa n}=&\frac{(nt)^{\kappa n+1}}{(\kappa n)!} \left(\frac{\kappa}{t-\kappa}-\frac{\kappa t}{(t-\kappa)^3}\frac{1}{n} -\frac{t(2\kappa-3\kappa^2-2t)}{(t-\kappa)^5}\frac{1}{n^2} +O(n^{-3})\right), \label{Millerbn} \\c_{\kappa n}=&\frac{(nt)^{\kappa n+2}}{(\kappa n)!} \left(\frac{\kappa^2}{t(t-\kappa)}+\frac{(\kappa-1)t^2-2t\kappa^2 +\kappa^3}{t(t-\kappa)^3}\frac{1}{n}\right) \label{Millercn} \\ &+\frac{(nt)^{\kappa n+2}}{(\kappa n)!} \left(\frac{(2-\kappa)t^2+(5\kappa^2-2\kappa) t-\kappa^3}{(t-\kappa)^5} \frac{1}{n^2}+O(n^{-3})\right).\nonumber \end{align} We now notice that for $0<\alpha\le t<1$, \[\lim_{n\rightarrow\infty}\frac{(nt)^{\kappa n}}{(\kappa n)!e^{nt}}=0.\] Since $a_n\sim e^{nt}$ is still true when $\alpha\le t<1$ it follows that \[\lim_{n\rightarrow\infty}\frac{r_3}{e^{nt}}=\lim_{n\rightarrow\infty}\frac{a_n}{e^{nt}}=1,\quad\lim_{n\rightarrow\infty}\frac{r_1}{nte^{2nt}}=1,\quad\text{and}\quad\lim_{n\rightarrow\infty}\frac{r_2}{nte^{2nt}}=\lim_{n\rightarrow\infty}\frac{r_{12}}{nte^{2nt}}=0.\] Thus when $0<\alpha\le t<1$, we 
have \[\lim_{n\rightarrow\infty}\frac{r_1^2+r_2^2-2r_{12}^2}{n^{3/2}tr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}=\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}=0.\] Similarly, when $0<\alpha<1<t$, \[\lim_{n\rightarrow\infty}\frac{n!(nt)^{\kappa n}}{(\kappa n)!(nt)^{n}}=0.\] For this region, $a_{\kappa n}$, $b_{\kappa n}$, $c_{\kappa n}$, $a_n$, $b_n$, and $c_n$ are all calculated based on (\ref{Milleran}), (\ref{Millerbn}), and (\ref{Millercn}), where $\kappa=1$ for $a_n$, $b_n$ and $c_n$. Therefore, \[\lim_{n\rightarrow\infty}\frac{r_3}{a_n}=1,\quad\lim_{n\rightarrow\infty}\frac{r_1}{a_nc_n-b_n^2}=1,\quad\text{and}\quad\lim_{n\rightarrow\infty}\frac{r_2}{\frac{(nt)^{2n+2}}{(n!)^2}}=\lim_{n\rightarrow\infty}\frac{r_{12}}{\frac{(nt)^{2n+2}}{(n!)^2}}=0.\] Thus when $0<\alpha<1<t$, \begin{align*} \lim_{n\rightarrow\infty}\frac{r_1^2+r_2^2-2r_{12}^2} {n^{3/2}tr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}} =&\lim_{n\rightarrow\infty}\frac{(a_nc_n-b_n^2)^2} {n^{3/2}ta_n^2(a_nc_n-b_n^2)} \\=&\lim_{n\rightarrow\infty}\frac{1}{n^{3/2}(t-1)^2}=0. \end{align*} In the case that $m$ is fixed, $a_n$, $b_n$, and $c_n$ again have asymptotic growth of $e^{nt}$, $nte^{nt}$, and $(n^2t^2+nt)e^{nt}$, respectively, when $t<1$. In the case that $t>1$, we have the same asymptotic growth given by (\ref{Milleran}), (\ref{Millerbn}), and (\ref{Millercn}), with $\alpha=1$. Clearly, for any fixed $k$ and $\beta$, \[\lim_{n\rightarrow\infty}\frac{k^{\beta}(nt)^k}{e^{nt}}=0\quad\text{so}\quad\lim_{n\rightarrow\infty}\frac{a_m}{e^{nt}}=\lim_{n\rightarrow\infty}\frac{b_m}{e^{nt}}=\lim_{n\rightarrow\infty}\frac{c_m}{e^{nt}}=0\] for $0<t<1$.
Similarly, since for $t>1$ we have $\displaystyle\lim_{n\rightarrow\infty}n!/(nt)^{n-j}=0$, it follows that \[\lim_{n\rightarrow\infty}\frac{a_mn!}{(nt)^n}=\lim_{n\rightarrow\infty}\frac{b_mn!}{(nt)^{n+1}}=\lim_{n\rightarrow\infty}\frac{c_mn!}{(nt)^{n+2}}=0.\] If we compute the pointwise limit when $t<1$, \[\frac{r_1^2+r_2^2-2r_{12}^2}{tr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}\cdot\frac{e^{-4nt}}{e^{-4nt}}\sim\frac{(nt+n^2t^2-n^2t^2)^2+0-0}{t\sqrt{(nt+n^2t^2-n^2t^2)^2-0}}=n.\] Then, for $t>1$, \[\frac{r_1^2+r_2^2-2r_{12}^2}{tr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}\cdot\frac{\frac{(n!)^4}{(nt)^{4n}}}{\frac{(n!)^4}{(nt)^{4n}}}\sim\frac{(\frac{t^3}{(t-1)^4})^2+0-0}{t\left(\frac{t}{t-1}\right)^2\sqrt{(\frac{t^3}{(t-1)^4})^2-0}}=\frac{1}{(t-1)^2}.\] Thus if we divide the integrand by $n$, as $n\rightarrow\infty$ we have a pointwise limit of \begin{equation} \label{lim:fixed} \frac{r_1^2+r_2^2-2r_{12}^2}{ntr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}} \rightarrow\left\{ \begin{array}{ll} 1&t<1\\0&t>1 \end{array}\right. \end{equation} \subsection{Bounding Functions} In this section we obtain the bounding functions necessary to apply Theorem \ref{GLDCT}. When $m=\alpha n$ for $\alpha\in(0,1)$, we divide (\ref{MainInt}) by $m^{3/2}=\alpha^{3/2}n^{3/2}$ in order to establish (\ref{MainThm}): \begin{equation}\label{MainInt2}\frac{r_1^2+r_2^2-2r_{12}^2}{n^{3/2}tr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}. \end{equation} Then we have the following to help us simplify the denominator: \begin{lemma}\label{CSApplication} If $x$ is a positive real number and $a_k=\sum_{j=0}^k\frac{x^j}{j!}$, $b_k=\sum_{j=1}^k\frac{jx^j}{j!}$, and $c_k=\sum_{j=1}^k\frac{j^2x^j}{j!}$, then $(a_n+a_m)c_nc_m\ge c_nb_m^2+c_mb_n^2$, for $n,m\in\mathbb{N}$. \end{lemma} \noindent which can be proven using a couple of applications of the Cauchy-Schwarz inequality.
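Before putting Lemma \ref{CSApplication} to use, we remark that the pointwise limits obtained above are easy to confirm numerically. The sketch below (Python with NumPy; the function names are ours) evaluates the rescaled integrand of (\ref{MainInt}) at a single point $t$; the value should be close to $\frac{1}{2}\sqrt{t}$ when $t<\alpha$, close to $0$ when $t>1$, and, with the normalization by $n$ and $m$ fixed, close to $1$ when $t<1$.

```python
import numpy as np

# Numerical sketch (ours): the integrand of the rescaled Kac-Rice
# integral evaluated at one point t, with nt substituted for |z|^2.
# "power" is the exponent of n in the normalization: 3/2 when m grows
# linearly in n, and 1 when m is fixed.

def sums_t(n, k, t):
    """a_k, b_k, c_k = sum_{j<=k} j^p (nt)^j / j! for p = 0, 1, 2."""
    x = n * t
    a, b, c, term = 1.0, 0.0, 0.0, 1.0
    for j in range(1, k + 1):
        term *= x / j
        a += term
        b += j * term
        c += j * j * term
    return a, b, c

def scaled_integrand(n, m, t, power=1.5):
    an, bn, cn = sums_t(n, n, t)
    am, bm, cm = sums_t(n, m, t)
    r3 = an + am
    r12 = bn * bm
    r1 = r3 * cn - bn**2
    r2 = r3 * cm - bm**2
    num = r1**2 + r2**2 - 2 * r12**2
    den = n**power * t * r3**2 * np.sqrt((r1 + r2)**2 - 4 * r12**2)
    return float(num / den)
```

For instance, with $n=300$, $m=150$, and $t=0.2$ the value is already within $10^{-2}$ of $\frac{1}{2}\sqrt{0.2}$.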
We can find a lower bound on the quantity in the square root: \begin{align*} (r_1+r_2)^2-4r_{12}^2=&r_1^2+r_2^2+2r_1r_2-4r_{12}^2 \\=&r_1^2+r_2^2+2(a_n+a_m)^2c_nc_m-2(a_n+a_m)c_nb_m^2 \\&-2(a_n+a_m)c_mb_n^2-2r_{12}^2 \\\ge& r_1^2+r_2^2+2(a_n+a_m)^2(c_nc_m-c_nc_m)-2r_{12}^2 \\=&r_1^2+r_2^2-2r_{12}^2 \end{align*} resulting in a bounding function for (\ref{MainInt2}) of \begin{align*}\frac{\sqrt{r_1^2+r_2^2-2r_{12}^2}}{n^{3/2}tr_3^2}=&\frac{\sqrt{(r_1-r_2)^2+2(r_1r_2-r_{12}^2)}}{n^{3/2}tr_3^2} \\\le&\frac{r_1-r_2}{n^{3/2}tr_3^2}+\frac{\sqrt{2(r_1r_2-r_{12}^2)}}{n^{3/2}tr_3^{3/2}}, \end{align*} where for the second term we have also used $r_3\ge1$. Then we note that \[\frac{d}{dt}\left[a_k\right]=n\sum_{j=1}^kj\frac{(nt)^{j-1}}{j!}=\frac{1}{t}b_k,\text{ and }\frac{d}{dt}\left[b_k\right]=\frac{1}{t}c_k,\] and thus \[\frac{d}{dt}\left[\frac{b_n-b_m}{a_n+a_m}\right]=\frac{(a_n+a_m)(c_n-c_m)-b_n^2+b_m^2}{t(a_n+a_m)^2}=\frac{r_1-r_2}{tr_3^2}.\] Then \[\int_0^\infty\frac{r_1-r_2}{n^{3/2}tr_3^2}dt=\lim_{t\rightarrow\infty}\frac{\sum_{k=m+1}^nk\frac{(nt)^k}{k!}}{n^{3/2}(a_m+a_n)}=n^{-1/2}.\] Moreover, \[\frac{r_1r_2-r_{12}^2}{n^3t^2r_3^3}=\frac{(a_n+a_m)c_nc_m-c_nb_m^2-c_mb_n^2}{n^3t^2(a_n+a_m)^2}.\] To establish a bound for this portion we will require the following lemma. \begin{lemma}\label{ac-b2} $a_kc_k-b_k^2\le a_{k-1}b_k$. \end{lemma} \begin{proof} We have that $c_k=ntb_{k-1}+b_k$ and $nta_k=b_{k+1}$. Then \[a_kc_k=a_k(ntb_{k-1}+b_k)=b_{k+1}b_{k-1}+a_kb_k\] so \begin{align*}a_kc_k-b_k^2=b_{k+1}b_{k-1}-b_k^2+a_kb_k=&\frac{(nt)^{k+1}}{k!}b_{k-1}-\frac{(nt)^{k}}{(k-1)!}b_k+a_kb_k \\=&\frac{(nt)^{k+1}}{k!}b_{k-1}-\frac{(nt)^{k}}{(k-1)!}b_k+\frac{(nt)^k}{k!}b_k+a_{k-1}b_k. \end{align*} Thus we prove our lemma if $\frac{(nt)^{k+1}}{k!}b_{k-1}-\frac{(nt)^{k}}{(k-1)!}b_k+\frac{(nt)^k}{k!}b_k\le0$.
We use the fact that $b_k=nta_{k-1}$ to say that \[\frac{(nt)^k}{k!}b_k=\frac{(nt)^{k+1}}{k!}a_{k-1}.\] Thus, \begin{align*}\frac{(nt)^{k+1}}{k!}b_{k-1}-\frac{(nt)^{k}}{(k-1)!}b_k+\frac{(nt)^k}{k!}b_k=&\frac{(nt)^{k+1}}{k!}(b_{k-1}+a_{k-1})-\frac{(nt)^{k}}{(k-1)!}b_k \\\le&\frac{(nt)^{k+1}}{k!}((k-1)a_{k-1}+a_{k-1})-\frac{(nt)^{k}}{(k-1)!}b_k \\=&k\frac{(nt)^{k}}{k!}(nta_{k-1})-\frac{(nt)^{k}}{(k-1)!}b_k=0. \end{align*} \end{proof} Equipped with this lemma and the fact that $c_k=ntb_{k-1}+b_k$ and $nta_k=b_{k+1}$ we can see \begin{align*}r_1r_2-r_{12}^2\le &b_n^2a_nb_{m-1}+b_na_na_{n-1}b_m+b_n^2a_{m-1}a_{m-1}+b_na_{n-1}b_{m+1}b_{m-1} \\&+b_{n+1}b_{n-1}b_ma_m+b_na_nb_ma_m+b_{n-1}b_m^2a_m+b_nb_ma_ma_{m-1}. \end{align*} Since $n\ge m$, it is clear each term in this sum is less than or equal to $b_n^2a_nb_{m}$ or $b_na_na_{n-1}b_{m}$ for sufficiently large $n$ and thus we search for an integrable bound on the root of these terms divided by $n^{3/2}tr_3^2$. \[\frac{\sqrt{b_n^2a_nb_{m}}}{n^{3/2}tr_3^2}=\frac{a_{n-1}\sqrt{ta_na_{m-1}}}{(a_n+a_m)^2}\le\sqrt{\frac{ta_{m-1}}{a_n}}\le\left\{\begin{array}{ll} \sqrt{t},& t<\delta\\t^{-(n-m)/2},&t>\delta \end{array}\right.\]which is integrable for $\delta>1$ and $m\le n-4$. \[\frac{\sqrt{b_na_na_{n-1}b_m}}{n^{3/2}tr_3^2}=\frac{a_{n-1}\sqrt{a_na_{m-1}}}{n^{1/2}(a_n+a_m)^2}\le \sqrt{\frac{a_{m-1}}{na_n}}\le\left\{\begin{array}{ll} n^{-1/2},& t<\delta\\n^{-1/2}t^{-(n-m+1)/2},&t>\delta \end{array}\right.\]which is again integrable for $\delta>1$ and $m\le n-3$. Thus for $m=\alpha n$, $\alpha\in(0,1)$ we have the integrable sequence of bounding functions \begin{equation}\label{alphanbound} g_{n,m}(t)=\frac{r_1-r_2}{n^{3/2}tr_3^2}+16\left\{\begin{array}{ll} \sqrt{t+1},&t<\delta\\t^{-(n-m)/2},&t>\delta \end{array}\right._. \end{equation} Notice that we must exclude the case when $\alpha=1$ in order for $g_{n,\alpha n}$ to be integrable for some large $n$, thus we address this case separately. 
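Lemmas \ref{CSApplication} and \ref{ac-b2} are inequalities between finite sums, and it is easy to sanity-check them numerically over random choices of $x$, $n$, and $m$; a quick sketch (Python with NumPy; the function names are ours):

```python
import numpy as np

def abc_sums(x, k):
    """a_k, b_k, c_k = sum_{j<=k} j^p x^j / j! for p = 0, 1, 2."""
    a, b, c, term = 1.0, 0.0, 0.0, 1.0
    for j in range(1, k + 1):
        term *= x / j
        a += term
        b += j * term
        c += j * j * term
    return a, b, c

def check_lemmas(trials=500, seed=0):
    """Spot-check both lemmas; returns True if no counterexample is found."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = float(rng.uniform(0.01, 30.0))
        n = int(rng.integers(2, 50))
        m = int(rng.integers(1, n + 1))
        an, bn, cn = abc_sums(x, n)
        am, bm, cm = abc_sums(x, m)
        # Cauchy-Schwarz lemma: (a_n + a_m) c_n c_m >= c_n b_m^2 + c_m b_n^2
        if (an + am) * cn * cm < (cn * bm**2 + cm * bn**2) * (1 - 1e-9):
            return False
        # Second lemma, with k = n: a_k c_k - b_k^2 <= a_{k-1} b_k
        prev_a = abc_sums(x, n - 1)[0]
        if an * cn - bn**2 > prev_a * bn * (1 + 1e-9):
            return False
    return True
```

The small relative tolerances guard against floating-point cancellation; the inequalities themselves hold exactly.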
When $m=n$, we may reduce the integrand in (\ref{MainInt}) to \begin{equation}\label{eqnMequalsN} \frac{\sqrt{a_nc_n(a_nc_n-b_n^2)}}{2n^{3/2}ta_n^2} \end{equation} We notice \begin{align*} \frac{a_nc_n(a_nc_n-b_n^2)}{n^3t^2a_n^4}=\frac{c_n}{n^2ta_n}\frac{a_nc_n-b_n^2}{nta_n^2}\le\frac{c_n}{n^2ta_n}\frac{a_{n-1}b_n}{nta_n^2}\le\frac{(n^2t^2+nt)a_n}{n^2ta_n}\frac{nta_n}{nta_n^2}=t+\frac{1}{n} \end{align*} giving us an upper bound of $\frac{1}{2}\sqrt{t+1}$ for (\ref{eqnMequalsN}), where the first inequality follows from Lemma \ref{ac-b2}. For the tail we use the fact that $a_n\ge ta_{n-1}$ along with $c_n\le n^2a_n$ and Lemma \ref{ac-b2} to see that \[\frac{\sqrt{a_nc_n(a_nc_n-b_n^2)}}{n^{3/2}ta_n^2}=\sqrt{\frac{c_n}{n^2a_n}}\sqrt{\frac{a_nc_n-b_n^2}{nt^2a_n^2}}\le\sqrt{\frac{a_{n-1}b_n}{nt^2a_n^2}}=\sqrt{\frac{a_{n-1}^2}{ta_n^2}}\le t^{-3/2}\] which gives us a bounding function for $t\in(0,\infty)$ of \[g(t)=\frac{1}{2}\cdot\left\{\begin{array}{ll} \textstyle\sqrt{t+1},&t<\delta\\t^{-3/2},&t>\delta \end{array}\right.\] For $m$ fixed we need to show asymptotic growth of $\mathbb{E}\mathcal{N}$ to be $n$ and thus we divide by this before we take the limit. We again use lemma \ref{CSApplication} and the subsequent simplification used in the $m=\alpha n$ case to bound our integrand by \[\frac{\sqrt{r_1^2+r_2^2-2r_{12}^2}}{ntr_3^2}=\frac{\sqrt{(r_1-r_2)^2+2(r_1r_2-r_{12}^2)}}{ntr_3^2}\le\frac{r_1-r_2}{ntr_3^2}+\sqrt{2}\frac{\sqrt{r_1r_2-r_{12}^2}}{ntr_3^2}.\] Then we have \[\sqrt{2\frac{r_1r_2-r_{12}^2}{n^2t^2r_3^4}}=\sqrt{2\frac{r_3c_nc_m-c_nb_m^2-c_mb_n^2}{n^2t^2r_3^3}}\le\frac{2c_m}{ntr_3}+\frac{r_3c_n-b_n^2-\frac{c_nb_m^2}{c_m}}{ntr_3^2}\]by comparison of arithmetic and geometric means. By manipulating sums, we can easily see that $c_mb_n\le c_nb_m$ so \[\sqrt{2\frac{r_1r_2-r_{12}^2}{n^2t^2r_3^4}}\le\frac{2c_m}{ntr_3}+\frac{r_1-r_{12}}{ntr_3^2}.\] Now, we need to bound $c_m/ntr_3$ by an integrable function. 
\[\frac{c_m}{ntr_3}\le\frac{m^2a_m}{nta_{m+1}}\le\frac{m^3\frac{(nt)^{k-1}}{k!}}{\frac{(nt)^{k-1}}{(k-1)!}+\frac{(nt)^{k+1}}{(k+1)!}}\le\frac{m^3(m+1)}{1+(nt)^{2}}\]which is integrable on $(0,\infty)$ with an integral of $\frac{m^3(m+1)\pi}{2n}$. Furthermore, \[\frac{d}{dt}\left[\frac{2b_n-b_m}{a_n+a_m}\right]=\frac{2r_1-r_2-r_{12}}{tr_3^2}\]which is the remaining portion of our integrand bound. Since \[\lim_{t\rightarrow\infty}\frac{1}{n}\frac{2b_n-b_m}{a_n+a_m}=2,\] this is also integrable on $t\in(0,\infty)$ giving us an integrable bounding function of \[\frac{r_1^2+r_2^2-2r_{12}^2}{ntr_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}}\le \frac{2r_1-r_2-r_{12}}{ntr_3^2}+\frac{m^3(m+1)}{1+(nt)^{2}}.\] \subsection{Applying Theorem \ref{GLDCT}} In all cases we now have the tools to apply the Generalized Dominated Convergence Theorem. For $m=\alpha n$ we have the sequence of bounding functions given in (\ref{alphanbound}) whose limit integral is \begin{equation} \lim_{n\rightarrow\infty}\int_0^\infty g_{n,\alpha n}(t)\,dt=\lim_{n\rightarrow\infty}n^{-1/2}+{\textstyle\frac{16}{3}}(\delta+1)^{3/2}+\frac{8\delta^{-((1-\alpha)n-2)/2}}{n(1-\alpha)-2}={\textstyle\frac{16}{3}}(\delta+1)^{3/2} \end{equation}if $n>2/(1-\alpha)$, and whose limit is \begin{equation} g(t)=\left\{\begin{array}{ll}8\sqrt{t+1},&t<\delta\\0,&t>\delta\end{array}\right. .\end{equation} Thus we have the necessary condition \begin{equation} \lim_{n\rightarrow\infty}\int_0^\infty g_{n,\alpha n}(t)\,dt=\int_0^\infty g(t)\,dt=\frac{16}{3}(\delta+1)^{3/2} \end{equation} and we may apply the theorem to say \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{n^{3/2}}\mathbb{E}\mathcal{N}_H(\mathbb{C})=\int_0^\alpha\frac{1}{2}\sqrt{t}dt=\frac{1}{3}\alpha^{3/2} \end{equation} implying $\mathbb{E}\mathcal{N}_H(\mathbb{C})\sim \frac{1}{3}m^{3/2}$. For $m=n$, the result is the same.
This can be seen by noting \[\int_0^\infty g(t)\,dt=\frac{1}{3}((\delta+1)^{3/2}-1)+\delta^{-1/2}\] and using the Lebesgue Dominated Convergence Theorem. For the $m$ fixed case, we have the sequence of bounding functions, \begin{equation} g_n=\frac{2r_1-r_2-r_{12}}{ntr_3^2}+\frac{m^3(m+1)}{1+(nt)^2} \end{equation} and as we mentioned above \begin{equation} \int_0^\infty\lim_{n\rightarrow\infty}g_n(t)dt=\int_0^1 2\,dt=2=\lim_{n\rightarrow\infty}2+\frac{m^3(m+1)\pi}{2n}=\lim_{n\rightarrow\infty}\int_0^\infty g_n. \end{equation} Applying Theorem \ref{GLDCT}, we get \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{n}\mathbb{E}\mathcal{N}_H(\mathbb{C})=\int_0^1dt=1\quad\implies \quad\mathbb{E}\mathcal{N}_H(\mathbb{C})\sim n. \end{equation} \section{First Intensity Functions}\label{sec1stInt} Inspecting the first intensity functions offers some insight into the distribution of zeros of the Weyl polynomials. Define the first intensity function for the expected number of zeros of $H_{n,m}(z)$ over a domain $T$ by \[I_{n,m}(z)=\frac{r_1^2+r_2^2-2r_{12}^2}{\lvert z\rvert^2r_3^2\sqrt{(r_1+r_2)^2-4r_{12}^2}};\] then for the analytic case we have \[I_{n,0}(z)=\frac{r_1}{\lvert z\rvert^2r_3^2}.\] Interestingly, $I_{n,0}\approx\chi_{\{\lvert z\rvert\le\sqrt{n}\}}$ (in fact, by (\ref{lim:fixed}), in the rescaled variable this becomes an equality as $n \rightarrow \infty$). \begin{figure}[ht] \begin{subfigure}[b]{.25\textwidth} \includegraphics[width=\textwidth]{./img/HarmonicPolyPlotsAnalytic} \end{subfigure}\quad \begin{subfigure}[b]{.3\textwidth} \includegraphics[width=\textwidth]{./img/HarmonicPolyPlotsAnalytic2D} \end{subfigure} \caption{Plot of $I_{100,0}(z)$ in 3 dimensions $(x,y,I(z))$, $z=x+iy$.} \end{figure} This is quite different from the traditional Kac model, in which all the zeros accumulate on the circle of radius $1$.
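This picture can also be observed by direct simulation. The sketch below samples analytic Weyl polynomials and records the fraction of their zeros inside radius $\sqrt{n/2}$; if the zeros were uniformly spread over the disc of radius $\sqrt{n}$, this fraction would be close to $\frac{1}{2}$. (Python with NumPy; we assume, plausibly but without proof, that \texttt{numpy.roots} is accurate enough for these random coefficients.)

```python
import math
import numpy as np

def weyl_roots(n, rng):
    """Zeros of p(z) = sum_j a_j z^j with a_j complex Gaussian, E|a_j|^2 = 1/j!."""
    sd = np.array([1.0 / math.sqrt(math.factorial(j)) for j in range(n + 1)])
    coeff = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
    coeff = coeff * sd / math.sqrt(2.0)
    return np.roots(coeff[::-1])   # np.roots expects the leading coefficient first

def fraction_inside(n, radius, trials=60, seed=1):
    """Fraction of sampled zeros with |z| < radius."""
    rng = np.random.default_rng(seed)
    inside = total = 0
    for _ in range(trials):
        z = weyl_roots(n, rng)
        inside += int(np.sum(np.abs(z) < radius))
        total += z.size
    return inside / total
```

With a few thousand sampled zeros the observed fraction settles near $\frac{1}{2}$, in agreement with the approximately flat profile of $I_{n,0}$ on the disc.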
$I_{n,0}$ is of particular interest since $I_{n,m_0}\rightarrow I_{n,0}$ as $n\rightarrow\infty$ for fixed $m_0$ and, perhaps more importantly, it offers a clue into how $p$ and $q$ contribute to the zeros of $H$. Consider the difference between $I_{100,16}$ and $I_{100,0}$. \begin{figure}[ht]\label{fig:alphan} \begin{subfigure}[b]{.55\textwidth}\label{alphan1} \includegraphics[width=.45\textwidth]{./img/HarmonicPolyPlotsn100m16} \quad \includegraphics[width=.45\textwidth]{./img/HarmonicPolyPlotsn100m162D} \caption{$I_{100,16}(z)$} \end{subfigure} \hspace{-.04\textwidth} \begin{subfigure}[b]{.27\textwidth}\label{alphan2} \includegraphics[width=\textwidth]{./img/HarmonicPolyPlotsn100m16SubtractAnalytic} \caption{$I_{100,16}-I_{100,0}$} \end{subfigure} \caption{} \end{figure} It appears that $q_m$ only contributes to the zeros of $H$ which lie inside the disc of radius $\sqrt{m}$. \section{Conclusion} We have shown that when $m=\alpha n+O(1)$ the expected number of zeros of a random Weyl polynomial is asymptotic to $\frac{1}{3}m^{3/2}$. Moreover, in this paper, as well as in Li and Wei's work on the Kostlan model \cite{LiWei}, Lerario and Lundberg's work on the truncated model \cite{Lundberg}, and Thomack's work on the naive model \cite{Thomack}, we have the result that when $m$ is fixed the number of zeros grows linearly in $n$. This raises the question: Is there a class of Gaussian harmonic polynomials such that the expected number of zeros in the $m$ fixed case increases faster than $n$? We end with a conjecture motivated by the suggestive material in \S\ref{sec1stInt}. \begin{conjecture} For $H_{n,m}(z)$ a random harmonic polynomial following the Weyl model, \[\mathbb{E}\mathcal{N}_{H}(\mathbb{C})= \textstyle\frac{1}{3}m^{3/2}+n+O(\sqrt{n}).\] \end{conjecture} \bibliographystyle{amsplainABBREV}
\section{Introduction} There are different ways to establish large deviation principles (LDP) for dynamical systems. One of them is the so-called ``Laplace method'', which relies on the spectral properties of the Perron-Frobenius, or transfer, operator. This strategy has been developed in a very general and abstract setting by Hennion and Herv\'e in \cite{HH}. They assume that the transfer operator acts on a Banach space of measurable functions and is quasi-compact on it, i.e. it has a spectral gap. The existence of an invariant probability measure follows immediately, and, by using perturbation theory for linear operators, they derive a few other statistical properties. This approach covers a lot of systems, for instance expanding maps of the interval \cite[XII.1]{HH}, Gibbs measures for subshifts of finite type \cite[XII.2]{HH}, and expanding Young towers \cite{LSY1}. Nevertheless, this theory seems inappropriate for expanding (discontinuous) maps in higher dimension, like those treated by several authors \cite{Ad, Bl, BG, Bu, BK, C, Liv2, PGB, S, Th, Tho, Ts}. Indeed, Hennion and Herv\'e assume that the Banach space on which the transfer operator acts consists of measurable functions defined everywhere, but for the higher dimensional systems quoted above, the functional spaces usually considered (bounded variation, quasi-H\"older or Sobolev spaces) consist of equivalence classes of functions modulo the reference measure, and hence their elements are only defined almost everywhere. Furthermore, in \cite{HH}, the Dirac masses must belong to the topological dual of the Banach space, so this theory cannot be applied directly to those systems. Nevertheless it appears that one can slightly modify the proofs from \cite{HH} in order to deal with a Banach space consisting of classes of functions.
In particular we will do so for the functional space of quasi-H\"older functions mostly investigated in \cite{S}, which satisfy an additional algebraic assumption that also plays a role in the Hennion-Herv\'e approach. As a consequence we will get the large deviations principle for such systems and, as far as we know, this result is not present in the literature. We also prove the central limit theorem, but for the latter one can already rely on the Gordin-Liverani theorems \cite{Go, Liv}. \\Actually a weaker result for the large deviations of systems like those considered above has been recently obtained in \cite{AFLV}. We will comment on the difference from the spectral technique presented in this note in Remark 2 below. We anticipate here that the paper \cite{AFLV} furnishes an {\em upper} bound for the deviation functions whenever the correlation functions involving $L^1$ observables decay to zero with a summable rate. In order to check these assumptions for our systems we should further require that the density of the invariant measure is essentially bounded from below, but this assumption is not necessary in the spectral approach discussed later on. \section{Assumptions and statement of the results} We now give the precise assumptions under which the LDP is valid. Let $(X, \mathcal{A}, m)$ be a probability space, and $T : X \to X$ a measurable transformation, nonsingular with respect to $m$. Under these conditions, the Perron-Frobenius operator $P : L^1(m) \to L^1(m)$ is well defined by $Pf = \frac{d m_f}{d m}$, where $m_f(A) = \int_{T^{-1}A} f \, dm$ is absolutely continuous with respect to $m$. We stress here the fact that the functions under consideration are complex valued, as required by the spectral theory we are going to use below. The transfer operator enjoys some classical properties that we summarize below; see \cite{BoGo} or \cite{LM} for more details.
\begin{enumerate} \item {\em Linearity} : $P$ is a linear operator on $L^1(m)$, satisfying $||Pf||_1 \le ||f||_1$ for all $f \in L^1(m)$; \item {\em Positivity} : For all $f \in L^1(m)$ such that $f \ge 0$ $m$-ae, we have $Pf \ge 0$ $m$-ae; \item {\em Preservation of integrals} : For all $f \in L^1(m)$, we have $\int Pf \, dm = \int f \, dm$; \item {\em Duality} : For all $f \in L^1(m)$ and $g \in L^{\infty}(m)$, we have $\int f (g \circ T) \, dm = \int (Pf) g \, dm$; \item {\em Invariant Measures} : $f \in L^1(m)$ is the density of a $T$-invariant probability if and only if $f \ge 0$, $\int f \, dm = 1$ and $Pf = f$. \end{enumerate} ~ \\ \noindent Let us suppose now that we have a subspace $\mathcal{B} \subset L^1(m)$, equipped with a norm $|| \, . \, ||_{\mathcal{B}}$ such that \begin{enumerate} \item $(\mathcal{B}, || \, . \, ||_{\mathcal{B}})$ is a complex Banach space with continuous injection $\mathcal{B} \to L^1(m)$; \item Constant functions lie in $\mathcal{B}$; \item $\mathcal{B}$ is a Banach algebra : there exists $C > 0$ such that for all $f, g \in \mathcal{B}$ we have $fg \in \mathcal{B}$ with $||fg||_{\mathcal{B}} \le C ||f||_{\mathcal{B}} ||g||_{\mathcal{B}}$; \item $\mathcal{B}$ is a complex Banach lattice : for every $f \in \mathcal{B}$, we have $\bar{f}, |f| \in \mathcal{B}$; \item $\mathcal{B}$ is stable under $P$ : $P(\mathcal{B}) \subset \mathcal{B}$; \item $P$ is a bounded operator on $\mathcal{B}$, with spectral radius equal to one; \item $P$ is quasi-compact of diagonal type on $\mathcal{B}$. \end{enumerate} The last assertion means that there exists a decomposition $$P = \sum_{i=1}^s \lambda_i \Pi_i + Q$$ where $\lambda_i$ are complex numbers of modulus $1$, $\Pi_i$ are finite-rank projections satisfying $\Pi_i \Pi_j = 0$ when $i \neq j$ and $Q$ is a bounded operator on $\mathcal{B}$ with spectral radius strictly less than $1$ and satisfying $Q \Pi_i = \Pi_i Q = 0$ for all $i$. 
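A concrete example may help to keep these abstract conditions grounded. For the doubling map $T(x)=2x \bmod 1$ on $X=[0,1)$ with $m$ the Lebesgue measure, the transfer operator has the explicit form $(Pf)(x)=\frac{1}{2}\left[f\left(\frac{x}{2}\right)+f\left(\frac{x+1}{2}\right)\right]$, and properties such as the preservation of integrals and the duality relation can be checked numerically. A sketch (Python with NumPy; the test functions and the grid are our own choices):

```python
import numpy as np

# Sketch (ours): the doubling map T(x) = 2x mod 1 with m = Lebesgue.
# Its transfer operator is (Pf)(x) = (f(x/2) + f((x+1)/2)) / 2; we check
# preservation of integrals and the duality relation on a midpoint grid.

def T(x):
    return (2.0 * x) % 1.0

def P(f):
    return lambda x: 0.5 * (f(x / 2.0) + f(x / 2.0 + 0.5))

N = 1 << 14
grid = (np.arange(N) + 0.5) / N            # midpoint rule on [0, 1)
integral = lambda v: float(v.mean())

f = lambda u: 1.0 + np.cos(4 * np.pi * u)  # test density-like observable
g = lambda u: np.cos(2 * np.pi * u)        # bounded test function
Pf = P(f)

int_f = integral(f(grid))                  # int f dm = 1
int_Pf = integral(Pf(grid))                # should equal int f dm
dual_lhs = integral(f(grid) * g(T(grid)))  # int f (g o T) dm
dual_rhs = integral(Pf(grid) * g(grid))    # int (Pf) g dm
```

For these trigonometric test functions the midpoint sums reproduce the exact integrals up to rounding, so the two sides of the duality relation agree to machine precision.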
The spectrum of $P$ then consists of a finite number of eigenvalues of modulus $1$, with finite multiplicity, and the rest of the spectrum lies in a disc centered at $0$ with radius strictly less than $1$. When $\mathcal{B}$ is compactly injected in $L^1(m)$, this can be deduced from a Lasota-Yorke type inequality, by means of the Ionescu-Tulcea and Marinescu theorem \cite{ITM, H}. See \cite{HH} for precise definitions and results about quasi-compactness. Under those conditions, the existence of a $T$-invariant probability $\mu$ absolutely continuous w.r.t. $m$, such that $\frac{d \mu}{dm} \in \mathcal{B}$, is a classical result : for every $f \in \mathcal{B}$ such that $f \ge 0$, $\int f \, dm = 1$, and so in particular for $f = \mathds{1}$, quasi-compactness implies that the sequence $\frac{1}{n} \sum_{k=0}^{n-1} P^k f$ converges in $\mathcal{B}$ to a function $f^{\star}$ such that the measure $\mu_f$ with $\frac{d\mu_f}{dm} = f^{\star}$ is an acip. Furthermore, $1$ is an eigenvalue of $P$. If we assume that $1$ is a simple eigenvalue of $P$, then there exists a unique acip $\mu$ such that $\frac{d \mu}{ dm} \in \mathcal{B}$. From now on, we will always assume that $1$ is a simple eigenvalue, and that there is no other eigenvalue of modulus $1$. $\mu$ will denote the unique acip, and $v \in \mathcal{B}$ its density. We then have \footnote{When $\varphi \in \mathcal{B}^{\star}$ belongs to the topological dual of $\mathcal{B}$, we denote $<\varphi, f> = \varphi(f)$.
The linear form $f \to \int f \, dm$ belongs to $\mathcal{B}^{\star}$, and we denote it by $m$.}, for all $n \ge 1$ and $f \in \mathcal{B}$ $$P^nf = <m,f> v + Q^n f$$ As a consequence, we get exponential decay of correlations : there exist $C \ge 0$ and $0 \le \lambda < 1$ such that for every $f \in \mathcal{B}$ and every $g \in L^{\infty}(\mu)$ we have $$\left\vert \int f(g \circ T^n) \, d \mu - \int f \, d \mu \int g \, d \mu \right\vert \le C \lambda^n ||f||_{\mathcal{B}} ||g||_{L^{\infty}_{\mu}}$$ Let now $\phi : X \to \mathbb{R}$ be a bounded observable lying in $\mathcal{B}$, with zero mean $\int \phi \, d \mu = 0$. Denote by $S_n$ the Birkhoff sums : $$S_n = \sum_{k=0}^{n-1} \phi \circ T^k$$ We are now able to state the LDP : \begin{The} (Large Deviation Principle) \em \noindent Under the above conditions, the limit $\sigma^2 = \lim_{n \to \infty} \int ( \frac{S_n}{\sqrt{n}} )^2 \, d \mu$ exists, and if $\sigma^2 > 0$, then there exist $\epsilon_0 > 0$ and a rate function $c : \, ]-\epsilon_0, + \epsilon_0[ \to \mathbb{R}$, continuous, strictly convex, vanishing only at $0$, such that for every $0 < \epsilon < \epsilon_0$ and every probability measure $\nu$ with $\nu \ll m$ and $\frac{d \nu}{dm} \in \mathcal{B}$, we have $$\lim_{n \to \infty} \frac{1}{n} \log \nu (S_n > n \epsilon) = - c(\epsilon)$$ \end{The} As an easy consequence of the techniques introduced in the next section, we also get the central limit theorem. We denote by $\mathcal{N}(0, \sigma^2)$ the Dirac mass $\delta_0$ if $\sigma^2 = 0$, and the probability with density $\frac{1}{\sigma \sqrt{2\pi}} e^{- \frac{t^2}{2 \sigma^2}}$ with respect to Lebesgue if $\sigma^2 > 0$.
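As a purely numerical illustration of the LDP (with hypothetical choices not taken from the text: the logistic map $T(x) = 4x(1-x)$, a smooth conjugate of the doubling map, and $\phi(x) = x - \frac{1}{2}$, which has mean zero w.r.t.\ the acip), one can estimate the tail probabilities $\nu(S_n > n\epsilon)$ under $\nu = $ Lebesgue and watch them decay exponentially.

```python
import numpy as np

# Monte Carlo sketch (illustration only): empirical tails of the Birkhoff
# sums of phi(x) = x - 1/2 for the logistic map, started from nu = Lebesgue,
# whose density 1 lies in the Banach space.
rng = np.random.default_rng(0)
M, eps = 200_000, 0.1
x = rng.random(M)
S = np.zeros(M)
probs = {}
for n in range(1, 101):
    S += x - 0.5                                 # S_n = sum_{k<n} phi(T^k x)
    x = np.clip(4.0 * x * (1.0 - x), 0.0, 1.0)   # clip guards float round-off
    if n in (25, 50, 100):
        probs[n] = np.mean(S > n * eps)

rates = {n: -np.log(p) / n for n, p in probs.items()}
print(probs, rates)
```

The probabilities drop sharply with $n$, and the empirical rates $-\frac{1}{n}\log\nu(S_n > n\epsilon)$ remain bounded away from zero, as the theorem predicts.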
\begin{The} (Central Limit Theorem) \em \noindent Under the same assumptions as Theorem 1, $\frac{S_n}{\sqrt{n}}$ converges in distribution to $\mathcal{N}(0, \sigma^2)$ in the probability space $(X, \mathcal{A}, \nu)$ for every probability $\nu$ with $\nu \ll m$ and $\frac{d\nu}{dm} \in \mathcal{B}$ : for every bounded continuous function $g : \mathbb{R} \to \mathbb{R}$, we have $$\lim_{n \to \infty} \int g(\frac{S_n}{\sqrt{n}}) d\nu = \int g \, d \mathcal{N}(0, \sigma^2)$$ \end{The} \begin{Rem} ~ \begin{enumerate} \item Theorems 1 and 2 apply in particular for $\nu = m$ and $\nu = \mu$, so the LDP and the CLT are valid for both reference and invariant measures. \item As we anticipated in the Introduction, the paper \cite{AFLV} gives an upper bound for the large deviation function under related assumptions. In particular, Th. E in \cite{AFLV} states the following, in our notation. Let us suppose that $T$ preserves an ergodic probability measure $\mu$; then let $\mathcal{B} \subset L^1(\mu)$, $\phi\in \mathcal{B}$, and assume that there exists $\xi(n)$ with $\sum_{n=0}^{\infty}\xi(n)<\infty$ such that for all $\psi\in L^1(\mu)$ we have $\left\vert \int \phi(\psi \circ T^n) \, d \mu - \int \phi \, d \mu \int \psi \, d \mu \right\vert \le \xi(n) ||\phi||_{\mathcal{B}} ||\psi||_{L^1_{\mu}}$. Then there exists $\tau=\tau(\phi)>0$ such that, for every $\epsilon>0$, there exists $C=C(\phi,\epsilon)>0$ with $\mu(S_n> n\epsilon)\le Ce^{-\tau n}$. The proof of this result relies on a martingale approximation. Section C in \cite{AFLV} provides examples of systems for which one gets exponential decay of correlations (so that $\xi(n)$ is summable) against $\psi\in L^1(\mu)$; this last assumption requires the density to be bounded from below.
In conclusion, the result of our note extends the previous one in two directions : first, we obtain a full LDP, and not only an upper bound; secondly, we do not have to assume anything about the density beyond the fact that it lies in $L^1$ with respect to the conformal (reference) measure. \end{enumerate} \end{Rem} \section{Proofs} We begin by observing that the existence of the variance $\sigma^2$ results from a straightforward computation : since the sequence $\int \phi(\phi \circ T^n) \, d\mu$ decays exponentially fast, it is absolutely summable, and then we see, by expanding the term $S_n^2$, that the limit $\sigma^2 = \lim_{n \to \infty} \int (\frac{S_n}{\sqrt{n}})^2 \, d \mu$ exists and we have $$\sigma^2 = \int \phi^2 \, d \mu + 2 \sum_{n=1}^{+ \infty} \int \phi (\phi \circ T^n) \, d \mu$$ We assume from now on that $\sigma^2 > 0$. Our proof of the LDP follows closely \cite{HH} except for a minor modification, which will be mentioned later. The same approach was employed in \cite{RBY}. Let $f \in \mathcal{B}$ be the density of the measure $\nu$ with respect to $m$. We will apply the G\"artner-Ellis theorem \cite{DZ, El}, so we are interested in the convergence of the sequence $\frac{1}{n} \log \int e^{\theta S_n} f \, dm$ for $\theta \in \mathbb{R}$ small enough. We introduce the ``Laplace transform'' operators $P_z$, for $z \in \mathbb{C}$, defined by $$P_z(f) = P(e^{z \phi}f), \: f \in \mathcal{B}$$ Assuming for a moment that $P_z$ is well defined, we see immediately that we have $\int e^{\theta S_n} f \, dm = \int P^n_{\theta}(f) \, dm$. In order to prove that $P_z$ is a bounded operator on $\mathcal{B}$, we just have to check that $e^{z \phi} \in \mathcal{B}$. Since $\mathcal{B}$ is a Banach algebra, the sequence $\sum_{k=0}^n \frac{(z\phi)^k}{k!}$ converges in $\mathcal{B}$, and hence in $L^1(m)$.
On the other hand, this sequence converges uniformly, and hence in $L^1(m)$, to $e^{z \phi}$, and so we get that $$e^{z \phi} = \sum_{n=0}^{+\infty} \frac{(z \phi)^n}{n!}$$ in $\mathcal{B}$. It also proves that the map $z \to P_z$ is holomorphic and we have the expansion $$P_z = \sum_{n=0}^{+\infty}\frac{C_n}{n!} z^n$$ where $C_n(f) = P(\phi^n f)$. \\ We can now apply perturbation theory for linear operators to prove the following result. The proof relies on analytic functions of operators, see \cite{DS}, or on the implicit function theorem, see \cite{HH}. For $\theta > 0$, we denote $\mathbb{D}_{\theta} = \{ z \in \mathbb{C} \, / \, |z| < \theta \}$. \begin{Pro} \em There exist $\theta_0 > 0$, $C > 0$, $\eta_1, \eta_2 > 0$ and holomorphic functions $\lambda(.) : \mathbb{D}_{\theta_0} \to \mathbb{C}$, $v(.) : \mathbb{D}_{\theta_0} \to \mathcal{B}$, $\varphi(.) : \mathbb{D}_{\theta_0} \to \mathcal{B}^{\star}$ and $Q(.) : \mathbb{D}_{\theta_0} \to \mathcal{L}(\mathcal{B})$ such that for all $z \in \mathbb{D}_{\theta_0}$ \\ (i) $ \lambda(0) = 1, v(0) = v, \varphi(0) = m, Q(0) = Q$; \\ (ii) $P_z(f) = \lambda(z) <\varphi(z), f> v(z) + Q(z) f$ for all $f \in \mathcal{B}$; \\ (iii) $<\varphi(z), v(z)> = 1$; \\ (iv) $Q(z)v(z) = 0$ and $\varphi(z) Q(z) = 0$; \\ (v) $|\lambda(z)| > 1 - \eta_1$; \\ (vi) $||Q(z)^n|| \le C (1 - \eta_1 - \eta_2)^n$. \end{Pro} So, for all $n \ge 1$, we have $$P_z^n(f) = \lambda(z)^n <\varphi(z), f> v(z) + Q(z)^n f$$ We can say much more about eigenvalues and eigenvectors when $z = \theta$ is real. At this point, we need to show that for every positive function $f \in \mathcal{B}$ with $f \neq 0$, there exists a positive linear form $\varphi \in \mathcal{B}^{\star}$ such that $<\varphi, f> \, > 0$. In the context of \cite{HH}, since functions are defined everywhere, there exists $x \in X$ such that $f(x) > 0$, and so the Dirac mass $\delta_x$ does the job.
In our context, Dirac masses are not available, but the reference measure is usable, since necessarily $<m,f> \, > 0$ : otherwise, $f$ would be $0$ $m$-ae, and so $f = 0$ in $\mathcal{B}$. This was not the case in \cite{HH} because they consider functions defined everywhere, and not equivalence classes. We can also use arguments from complex Banach lattice theory \cite{MeN} : a modification of the Hahn-Banach theorem shows that there exists a positive bounded linear form $\varphi$ on $\mathcal{B}_{\mathbb{R}} = \{ f \in \mathcal{B} \, / \, f(x) \in \mathbb{R} \; m{\rm -ae}\}$, such that $< \varphi, f> = 1$, and then we can extend it to all of $\mathcal{B}$. This argument could be employed in more abstract contexts, where the Banach space $\mathcal{B}$ consists of distribution-like objects and where we do not have good knowledge of its topological dual. \begin{Pro} \em There exists $0 < \theta_1 < \theta_0$ such that for every $\theta \in \mathbb{R}$ with $|\theta| < \theta_1$, we have $\lambda(\theta) > 0$. Furthermore, $v(.)$ and $\varphi(.)$ can be redefined such that $v(\theta) \ge 0$, $\varphi(\theta) \ge 0$. \end{Pro} \begin{proof} As $P_{\theta}$ is a real operator, we have $P_{\theta} \overline{f} = \overline{P_{\theta}f}$ for all $f \in \mathcal{B}$. So, we have $P_{\theta} \overline{v(\theta)} = \overline{P_{\theta} v(\theta)} = \overline{\lambda(\theta)} \, \overline{v(\theta)}$. Since $\lambda(\theta)$ is the unique eigenvalue of $P_{\theta}$ with maximal modulus, we get $\overline{\lambda(\theta)} = \lambda(\theta)$, and hence $\lambda(\theta) \in \mathbb{R}$. Since $\lambda(0) = 1$, by a continuity argument, we obtain $\lambda(\theta) > 0$ for small $\theta$. For $z \in \mathbb{C}$ small enough, $<\varphi(z), \mathds{1}> \neq 0$. We define $\tilde{v}(z) = <\varphi(z), \mathds{1}> v(z)$ and $\tilde{\varphi}(z) = <\varphi(z), \mathds{1}>^{-1} \varphi(z)$. These new eigenfunctions obviously satisfy the conclusions of the previous proposition.
We just have to prove that $\tilde{v}(\theta)$ and $\tilde{\varphi}(\theta)$ are positive for $\theta \in \mathbb{R}$ small enough. By the spectral decomposition of $P_{\theta}$, we see that $\lambda(\theta)^{-n}P_{\theta}^n \mathds{1}$ goes to $\tilde{v}(\theta)$ in $\mathcal{B}$, and hence in $L^1(m)$. We then get $\tilde{v}(\theta) \ge 0$ because $P_{\theta}$ is a positive operator and $\lambda(\theta)$ is positive too. Now, let $\psi(\theta) \in \mathcal{B}^{\star}$ be positive such that $<\psi(\theta), \tilde{v}(\theta)> = 1$. Then, $\lambda(\theta)^{-n}(P_{\theta}^{\star})^n \psi(\theta)$ goes to $<\psi(\theta), v(\theta)> \varphi(\theta) = \tilde{\varphi}(\theta)$, which proves that $\tilde{\varphi}(\theta)$ is a positive linear form. \end{proof} We denote $$\Lambda(\theta) = \log \lambda(\theta)$$ We then have \begin{Pro} \em There exists $0 < \theta_2 < \theta_1$ such that for every $\theta \in \mathbb{R}$ with $|\theta| < \theta_2$ and every $f \in \mathcal{B}$ with $f \ge 0$ and $\int f \, dm = 1$, we have $$\lim_{n \to \infty} \frac{1}{n} \log \int e^{\theta S_n} f \, dm = \Lambda(\theta)$$ \end{Pro} \begin{proof} We have the identity $$\begin{aligned} \int e^{\theta S_n} f \, dm = <m, P_{\theta}^n(f)> & = \lambda(\theta)^n < \varphi(\theta), f> \, <m, v(\theta)> + <m, Q(\theta)^n f> \\ & = \lambda(\theta)^n ( < \varphi(\theta), f> \, <m, v(\theta)> + \lambda(\theta)^{-n} <m, Q(\theta)^n f>) \end{aligned}$$ All involved quantities are positive, hence we can write $$\frac{1}{n} \log \int e^{\theta S_n} f \, dm = \log \lambda(\theta) +\frac{1}{n} \log (<\varphi(\theta), f> \, <m, v(\theta)> + \lambda(\theta)^{-n} <m, Q(\theta)^n f>)$$ Since $$\lim_{\theta \to 0} <\varphi(\theta), f> \, <m, v(\theta)> = 1$$ and since the spectral radius of $Q(\theta)$ is strictly less than $\lambda(\theta)$, it is easy to see that for $\theta$ small enough, we have $$\lim_{n \to \infty} \frac{1}{n} \log (<\varphi(\theta), f> \, <m, v(\theta)> + \lambda(\theta)^{-n} <m,
Q(\theta)^n f>) = 0$$ \end{proof} In order to apply the G\"artner-Ellis theorem, we just have to show that $\Lambda$ is a differentiable function, strictly convex in a neighborhood of $0$. Since $\lambda$ is real-analytic, $\Lambda$ is too. Computations from perturbation theory \footnote{See corollaries III.11 and III.6 in \cite{HH}.} show that $\lambda'(0) = \int \phi \, d \mu = 0$ and $\lambda''(0) = \sigma^2$, so we have $\Lambda''(0) = \frac{\lambda''(0)\lambda(0) - \lambda'(0)^2}{\lambda(0)^2} = \sigma^2 > 0$ and we can now apply the following local version of the G\"artner-Ellis theorem, whose proof can be found in Lemma XIII.2 of \cite{HH} : \begin{Pro} \em For all $n \ge 1$, denote by $\mathbb{P}_n$ a probability measure on some measurable space $(\Omega, \mathcal{T})$, by $\mathbb{E}_n$ the corresponding expectation operator and by $S_n$ a real valued random variable. Assume that on some interval $[- \theta_{\Lambda}, \theta_{\Lambda}]$, $\theta_{\Lambda} > 0$, we have $$\lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}_n[\exp(\theta S_n)] = \Lambda(\theta),$$ where $\Lambda$ is a strictly convex continuously differentiable function satisfying $\Lambda'(0) = 0$. \\ Define $\epsilon_+ = \frac{\Lambda(\theta_{\Lambda})}{\theta_{\Lambda}} > 0$, $\epsilon_- = \frac{\Lambda(-\theta_{\Lambda})}{-\theta_{\Lambda}} < 0$ and $c(\epsilon) = \underset{| \theta | \le \theta_{\Lambda}}{\sup} \{\theta \epsilon - \Lambda(\theta) \}$. Then $c$ is a nonnegative function, strictly convex on $[\epsilon_-, \epsilon_+]$, continuous, vanishing only at $0$, and, for all $0 < \epsilon < \epsilon_0 = \epsilon_+$, we have $$ \lim_{n \to \infty} \frac{1}{n} \log \mathbb{P}_n (S_n > n \epsilon) = - c(\epsilon)$$ \end{Pro} We now prove our Theorem 2.
\begin{proof}[Central Limit Theorem] By L\'evy's continuity theorem, it suffices to show that for all $t \in \mathbb{R}$ $$\lim_{n \to \infty} \int e^{i t \frac{S_n}{\sqrt{n}}} f \, dm= e^{- \frac{t^2 \sigma^2}{2}}$$ We have $$\int e^{it \frac{S_n}{\sqrt{n}}} f \, dm = <m, P_{\frac{it}{\sqrt{n}}}^n(f)> = \lambda(\frac{it}{\sqrt{n}})^n <\varphi(\frac{it}{\sqrt{n}}), f> \, <m, v(\frac{it}{\sqrt{n}})> + <m, Q(\frac{it}{\sqrt{n}})^n f>$$ Since $||Q(\frac{it}{\sqrt{n}})^n|| \to 0$ and $<\varphi(\frac{it}{\sqrt{n}}), f> \, <m, v(\frac{it}{\sqrt{n}})> \to 1$ as $n \to \infty$, we just have to prove that $$\lim_{n \to \infty} \lambda(\frac{it}{\sqrt{n}})^n = e^{- \frac{t^2 \sigma^2}{2}}$$ But Taylor's expansion shows that, in a complex neighborhood of $0$, $$\lambda(z) = \lambda(0) + \lambda'(0) z + \frac{\lambda''(0)}{2}z^2 + z^2 \eta(z) = 1 + \frac{\sigma^2 z^2}{2} + z^2 \eta(z)$$ where $\lim_{z \to 0} \eta(z) = 0$. Then, a standard computation concludes the proof. \end{proof} \section{Application to uniformly expanding maps} The main application of our Theorem 1 will be to multidimensional piecewise uniformly expanding maps, in particular when we equip them with the space of the quasi-H\"older functions. This space, introduced by Keller \cite{K}, developed by Blank~\cite{Bl} and successfully applied by Saussol \cite{S} and subsequently by Buzzi~\cite{Bu} (see also \cite{BK}) and Tsujii~\cite{Ts}, turns out to be very useful in controlling the oscillations of a function under the iteration of the transfer operator across the discontinuities of the map.
Moreover, it satisfies the algebraic assumption 3 in Section 2 above; this condition is not straightforward to verify, and hence the Hennion-Herv\'e theory is not straightforward to apply, if one uses the more conventional spaces of bounded variation functions (see for instance \cite{Ad, BG, C, Liv2, PGB}) or the Sobolev spaces \cite{Tho}; this topic deserves to be investigated in the future \footnote{In fact, if we inspect the proof, we only need the fact that $\mathcal{B}$ is a Banach algebra and $\phi \in \mathcal{B}$ to prove that the operators $P_z$ are well defined and holomorphic in $z$. So we could instead make the weaker assumption that $\phi$ is such that the $P_z$ define a holomorphic family of bounded operators on $\mathcal{B}$ for $z$ in a complex neighbourhood of $0$.}. \\ Let us now recall the precise definitions of our system by following closely the assumptions imposed in \cite{S}. Let $M \subset \mathbb{R}^d$ be a compact subset with $\overline{{\rm int} M} = M$ and piecewise $C^1$ boundary. We denote by $d$ the Euclidean distance and by $m$ the Lebesgue measure on $\mathbb{R}^d$. We can assume without loss of generality that $m(M) = 1$. For $A \subset M$ and $\epsilon > 0$, we denote $B_{\epsilon}(A) = \{x \in \mathbb{R}^d \, / \, d(x, A) \le \epsilon \}$. Let $T : M \to M$ be a measurable map, and suppose there exists $0 < \alpha \le 1$ such that for some small enough $\epsilon_0$ we have : \begin{enumerate} \item There are finitely many disjoint open sets $U_i \subset M$ with $m(M \setminus \cup_{i} U_i) = 0$ such that for each $i$, $T_i := T \vert_{U_i} : U_i \to M$ is $C^{1+\alpha}$ and can be extended on a neighborhood $V_i$ of $U_i$ to a $C^{1+\alpha}$ map $T_i : V_i \to \mathbb{R}^d$ such that $B_{\epsilon_0}(T_iU_i) \subset T_i(V_i)$.
Moreover, each $T_i : V_i \to \mathbb{R}^d$ is injective with $C^{1+\alpha}$ inverse; \item There exists $c > 0$ such that for any $i$, and any $x,y \in T(U_i)$ with $d(x,y) \le \epsilon_0$ we have $$\vert \det DT_i^{-1}(x) - \det DT_i^{-1}(y) \vert \le c \vert \det DT_i^{-1}(x) \vert d(x,y)^{\alpha} ;$$ \item There exists $s(T) < 1$ such that $$\sup_i \sup_{x \in T_i (V_i)} || DT_i^{-1}(x) || < s(T);$$ \item Boundaries of $U_i$ are piecewise $C^1$ codimension one embedded compact submanifolds and we have $\eta_0(T) < 1$ where $$\eta_0(T) = s(T)^{\alpha} + \frac{4s(T)}{1-s(T)}Y(T) \frac{\gamma_{d-1}}{\gamma_d}$$ $$Y(T) = \sup_{x \in \mathbb{R}^d} \sum_i \sharp \{ {\rm smooth \; pieces \; intersecting \; } \partial U_i {\rm \; and \; containing \; } x \}$$ and $\gamma_d = \frac{\pi^{d/2}}{(d/2)!}$ is the $d$-volume of the $d$-dimensional unit ball of $\mathbb{R}^d$. \end{enumerate} The last condition can be greatly weakened, but the condition in \cite{S} is of a very abstract nature, and this one is easier to handle when the boundaries of the $U_i$ are smooth. We now define the functional space on which the transfer operator acts. Let $f \in L^1(\mathbb{R}^d)$. If $A \subset \mathbb{R}^d$ is a Borel subset, we define the oscillation of $f$ over $A$ by $${\rm osc}(f,A) = \underset{x_1, x_2 \in A}{\rm ess \, sup} \, |f(x_1) - f(x_2)|$$ where the essential supremum is taken with respect to the product measure $m \times m$ on $A \times A$. We get a lower semi-continuous, and hence measurable, function $x \to {\rm osc}(f, B_{\epsilon}(x))$. We set $$|f|_{\alpha} = \sup_{0 < \epsilon \le \epsilon_0} \frac{1}{\epsilon^{\alpha}} \int_{\mathbb{R}^d} {\rm osc}(f, B_{\epsilon}(x)) dx$$ We define $$V_{\alpha}(\mathbb{R}^d) = \{f \in L^1(\mathbb{R}^d) \, / \, |f|_{\alpha} < \infty \}$$ and $$V_{\alpha}(M) = \{f \in V_{\alpha}(\mathbb{R}^d) \, / \, {\rm supp} \, f \subset M \}$$ both endowed with the norm $||f||_{\alpha} = ||f||_{L^1_m} + |f|_{\alpha}$.
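As a hedged one-dimensional sketch of the seminorm just defined (grid sizes, radii and the test function are our own choices), $\mathrm{osc}(f, B_\epsilon(x))$ can be approximated by a sliding max-min on a grid. For the indicator of $[0, \frac{1}{2}]$ and $\alpha = 1$, the exact value of $\epsilon^{-\alpha}\int \mathrm{osc}$ is $4$ for every small $\epsilon$: the oscillation equals $1$ on two strips around the jumps, of total measure $4\epsilon$.

```python
import numpy as np

# Grid approximation of osc(f, B_eps(x)) and of |f|_alpha in dimension 1
# (illustration only; the definitions above are stated on R^d).
h = 1e-3
xs = np.arange(-0.25, 1.25, h)
f = ((xs >= 0.0) & (xs <= 0.5)).astype(float)   # indicator of [0, 1/2]

def osc_integral(f, w):
    """Integral over x of osc(f, B_{w*h}(x)), via a sliding window max-min."""
    win = np.lib.stride_tricks.sliding_window_view(f, 2 * w + 1)
    return (win.max(axis=1) - win.min(axis=1)).sum() * h

alpha = 1.0
ratios = {w * h: osc_integral(f, w) / (w * h) ** alpha for w in (20, 50, 100)}
print(ratios)                    # each ratio close to the exact value 4
seminorm = max(ratios.values())  # finite-grid stand-in for the sup over eps
```

The supremum over a few sample radii already stabilises near the exact value; functions of bounded oscillation in this sense are exactly the elements of $V_{\alpha}$.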
Adapting proofs from \cite{K}, we can show that $V_{\alpha}(M)$ is a Banach space, with compact injection in $L^1(M)$. It is proved in \cite{S} that $V_{\alpha}(M)$ is also a Banach algebra, and it is obviously a Banach lattice. So, if we want to apply our previous results to those maps, we are left to prove that the transfer operator for $T$ acts on $V_{\alpha}(M)$ and is quasi-compact of diagonal type. But Saussol proved in this context a Lasota-Yorke inequality (Lemma 4.1 in \cite{S}) which implies the quasi-compactness of the Perron-Frobenius operator. Hence, assuming that the system is mixing, we get the central limit theorem and the large deviations principle for bounded real observables in $V_{\alpha}(M)$.
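As a final hedged illustration (a toy choice of our own, not an example treated in \cite{S}): for the logistic map $T(x) = 4x(1-x)$, smoothly conjugate to the doubling map via $x = \sin^2(\pi y)$ and used here only as a float-friendly stand-in (it is not itself uniformly expanding), the observable $\phi(x) = x - \frac{1}{2}$ has $\int \phi (\phi \circ T^n) \, d\mu = 0$ for every $n \ge 1$, so the series for $\sigma^2$ reduces to $\int \phi^2 \, d\mu = \frac{1}{8}$. The normalised Birkhoff sums should then exhibit this variance, as in Theorem 2.

```python
import numpy as np

# Monte Carlo check of the CLT variance (illustration only).
rng = np.random.default_rng(1)
M, n, burn = 40_000, 200, 50
x = rng.random(M)
for _ in range(burn):                            # burn-in towards the acip
    x = np.clip(4.0 * x * (1.0 - x), 0.0, 1.0)   # clip guards float round-off
S = np.zeros(M)
for _ in range(n):
    S += x - 0.5                                 # Birkhoff sums of phi
    x = np.clip(4.0 * x * (1.0 - x), 0.0, 1.0)
Z = S / np.sqrt(n)
print(Z.mean(), Z.var())                         # mean near 0, variance near 1/8
```

The empirical distribution of $Z$ is close to $\mathcal{N}(0, \frac{1}{8})$, matching the limit predicted by the theorem.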
\section{Introduction}\label{sec:intro} The Large and Small Magellanic Clouds (LMC/SMC), as the closest pair of interacting dwarf satellites of the Milky Way \citep[at distances of \textasciitilde50 and \textasciitilde60~kpc respectively:][]{pietrzynskiDistanceLargeMagellanic2019,graczykDistanceDeterminationSmall2020a}, are ideally situated for detailed study of the influence of tidal interactions on galaxy evolution. The SMC has long been known to be heavily distorted, with a line of sight depth of up to 20~kpc \citep[e.g.][]{hatzidimitriouStellarPopulationsLargescale1989,ripepiVMCSurveyXXV2017} which varies as a function of position angle. It possesses an asymmetric, irregular morphology exhibiting striking differences between the locations of young and old stars \citep[e.g.][]{elyoussoufiVMCSurveyXXXIV2019,mackeySubstructuresTidalDistortions2018}, and kinematic evidence for tidal expansion \citep[e.g.][]{deleoRevealingTidalScars2020a,zivickDecipheringKinematicStructure2021}. The LMC, although more kinematically ordered than the SMC, also displays substantial deviations from a simple rotating disk structure. It has multiple warps \citep{olsenWarpLargeMagellanic2002,choiSMASHingLMCTidally2018}, sharp truncations in the outer disk \citep{mackeySubstructuresTidalDistortions2018}, ring-like overdensities \citep{kunkelDynamicsLargeMagellanic1997,choiSMASHingLMCMapping2018}, and an off-centre stellar bar \citep[e.g.][]{vandermarelMagellanicCloudStructure2001}. Each of these features encodes valuable information about the extensive interaction history of the Clouds. Precise measurements of the masses and orbits of the LMC and SMC, and their internal kinematics, are key to understanding how interactions between both Clouds, and the Milky Way, form the disturbed features observed. 
While the Clouds are strongly suspected to have experienced a close passage \textasciitilde150~Myr ago \citep{zivickProperMotionField2018b}, and are likely just past pericentre on their first infall into the Milky Way potential \citep{kallivayalilThirdEpochMagellanicCloud2013}, particulars of their interactions beyond this remain relatively unconstrained. Recent studies of the star-formation history of the Clouds provide evidence of potential past interactions, with spikes in the global star formation rate of both Clouds \textasciitilde1-2~Gyr ago \citep[e.g.][]{rubeleVMCSurveyXXXI2018a,ruiz-laraLargeMagellanicCloud2020a}. However, these studies have lower time-resolution than dynamical studies, and alone provide limited constraints on, for example, the impact parameter or the relative location and orientation of the Clouds during close interactions. One useful method to explore past dynamical interactions is to study stars in the outskirts of the Clouds. These stars are most strongly susceptible to external perturbations, and the resulting structural and kinematic signatures are more persistent compared to the central regions, where dynamical timescales are much shorter. Recent studies of the Clouds using deep photometric data \citep[e.g.][]{mackey10KpcStellar2016,mackeySubstructuresTidalDistortions2018,pieresStellarOverdensityAssociated2017a} and multi-dimensional phase-space information from Gaia \citep[e.g.][]{belokurovCloudsArms2019,gaiacollaborationGaiaEarlyData2021a} have revealed a wealth of substructure in the periphery of the Magellanic system. Many of these features are thought to be due to dynamical perturbation and, as a result, are ideal targets for studying the history of interactions between the LMC and SMC, and between the Clouds and the Milky Way. 
Of particular interest is a large arm-like feature to the north of the LMC discovered in first year data from the Dark Energy Survey (DES) by \citet[][henceforth referred to as M16]{mackey10KpcStellar2016}. The feature begins \textasciitilde13$^\circ$ due north of the LMC centre where it appears to join the northern outskirts of the LMC disk, and has an on-sky width of \textasciitilde2$^\circ$. Initial photometric analysis, limited by the extent of the DES footprint, traced the substructure for \textasciitilde12.5$^\circ$ eastward. Utilising astrometric proper motion and parallax information provided by Gaia DR2, \citet{belokurovCloudsArms2019} were also able to recover the feature, tracing it for at least an additional \textasciitilde10$^\circ$ beyond the initial discovery. Several papers have attempted to elucidate the origin of the feature using dynamical models, with varying conclusions. \citetalias{mackey10KpcStellar2016} present an $N$-body model of the LMC undergoing infall over \textasciitilde2~Gyr into a 3-component MW potential as described in \citet{gomezItMovesDangers2015}. That simulation produces a qualitatively similar stream of debris in the northern outskirts of the LMC disk, due solely to the tidal influence of the Milky Way (i.e., without requiring the presence of the SMC). In contrast, \citet{beslaLowSurfaceBrightness2016} present $N$-body models of an LMC and SMC interacting in isolation for 6~Gyr, before undergoing infall into a MW halo potential for 1~Gyr. Even prior to entering the MW potential, qualitatively similar asymmetrical spiral structures, formed in the LMC disk after repeated SMC passages, are seen in the LMC's northern outskirts; these persist during infall to the MW potential. \citet{belokurovCloudsArms2019} also show a number of simpler models of tracer particles within high-mass and low-mass LMC potentials, undergoing infall for 1~Gyr into the 3-component MW potential described in \citet{bovyGalpyPythonLIBRARY2015}. 
Models both with and without the presence of an SMC potential form qualitatively similar features in the northern outskirts of the LMC, with the best qualitative match occurring due to the combined influence of both the SMC and MW. With multiple scenarios each reproducing qualitatively similar structures to that observed, the origin of the feature remains uncertain. However, these studies have been fundamentally limited by a lack of kinematic data along the arm. This restricts analysis to only qualitatively reproducing the feature's shape, which -- as demonstrated above -- results in ambiguity regarding its origin. Indeed, \citetalias{mackey10KpcStellar2016} note that line-of-sight (LOS) velocities would assist in distinguishing between material tidally stripped from the LMC, and overdense features in the extended LMC disk. An investigation into the kinematics of the northern arm is therefore critical. In this paper, we present a comprehensive analysis of the LMC's northern arm using data from the Magellanic Edges Survey \citep[MagES:][]{C20}. This spectroscopic survey targets red clump (RC) and red giant branch (RGB) stars in the extreme Magellanic periphery, using the 2dF/AAOmega instrument \citep{lewisAngloAustralianObservatory2dF2002,sharpPerformanceAAOmegaAAT2006} on the 3.9~m Anglo-Australian Telescope (AAT) at Siding Spring Observatory. In conjunction with Gaia astrometry, it is the first large-scale survey to study 3D kinematics in the outskirts of the Clouds. MagES fields are specifically selected to cover low-surface-brightness substructures in the Magellanic periphery -- including the northern arm. With seven fields located across the length of the feature, providing 3D kinematics for hundreds of individual stars, detailed study of the arm's dynamical properties becomes possible. The paper is arranged as follows.
Section~\ref{sec:data} presents an overview of the data, and \S\ref{sec:obsprops} describes the derived kinematic, structural, and abundance properties of the feature. In \S\ref{sec:models} we present new dynamical models of the LMC and SMC undergoing infall into the Milky Way potential, aimed at quantitatively reproducing the kinematics of the northern arm, and discuss the main implications for the origin of this structure. Our conclusions are presented in \S\ref{sec:concs}. \section{Data}\label{sec:data} MagES utilises the 2dF multi-object fibre positioner, and the dual-beam AAOmega spectrograph on the AAT. The 2dF positioner allows for the observation of \textasciitilde350 science targets per 2 degree diameter field. As described in \citet[][henceforth referred to as Paper I]{C20}, we configure the blue arm on AAOmega with the 1500V grating, to give coverage of the MgIb triplet with resolution R\textasciitilde$3700$, and the red arm with the 1700D grating to give coverage of the near-infrared CaII triplet with R\textasciitilde$10000$. \citetalias{C20} also outlines in detail the target selection procedures, observation characteristics, and data reduction pipeline for MagES; here we briefly present details of the observations specific to the northern arm. Seven MagES fields are located along the arm; field positions are shown in Fig.~\ref{fig:map}. We note that with the exception of field 22 (as well as fields 12 and 18 located in the northern LMC disk) all fields along the arm were observed prior to the release of Gaia DR2, and thus selection for those fields was performed without using parallax and proper motion information. As a consequence, the selection efficiency for true Magellanic members in these fields is relatively low -- these correspond to `D' and `M' fields as defined in \citetalias{C20}. We discuss the implications of this in greater detail below. 
\begin{figure*} \includegraphics[height=12cm]{fig1} \caption{Location of observed MagES fields across the Magellanic periphery. Purple circles indicate fields along the LMC's northern arm analysed in this paper, with blue circles indicating other MagES fields. The background image shows the log density of Magellanic red clump and red giant stars per square degree, selected from Gaia DR2 (the target catalogue from which most MagES stars are drawn) as per \protect\cite{belokurovCloudsArms2019}. On this map, north is up and east is to the left; ($\eta, \xi$) are coordinates in a tangent-plane projection centred on the LMC ($\alpha_0=82.25^{\circ}$, $\delta_0=-69.5^{\circ}$). Orange dashed circles mark angular separations of $8^\circ$, $12^{\circ}$, $16^{\circ}$ and $20^{\circ}$ from the LMC centre and $4^\circ$, $8^\circ$ from the SMC centre. The red x-signs mark the location of Canopus -- the second brightest star in the sky, which limits MagES field placement on the northern arm to avoid spectral contamination from scattered light -- and the south celestial pole.} \label{fig:map} \end{figure*} Reduction of the spectra using the \textaltfont{2dFDR} pipeline, and derivation of LOS velocities, are described in \citetalias{C20}. Stars with heliocentric velocity estimates are cross-matched against the Gaia EDR3 catalogue\footnote{While \citetalias{C20} describes cross-matching against Gaia DR2, we have updated our procedures to incorporate the latest astrometry from Gaia EDR3 \citep{gaiacollaborationGaiaEarlyData2021b}.}, and further quality cuts based on Gaia parameters \textaltfont{ruwe}<1.4 and $C^*$<4$\sigma_{C^*}$\footnote{$C^*$ and $\sigma_{C^*}$ are defined using Eqs. 6 and 18 of \cite{rielloGaiaEarlyData2021} respectively.} are applied. The resulting sample of stars includes both true Magellanic stars, and foreground contaminants.
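How such contaminants are handled is described in detail below; as a purely schematic illustration (entirely synthetic numbers, one dimension only, and closed-form weighted moments in place of the six-parameter \textsc{emcee} fit actually used), per-star membership probabilities can be used to weight a Gaussian fit so that foreground stars contribute little to the recovered Magellanic kinematics.

```python
import numpy as np

# Toy sketch (NOT the MagES pipeline): recover a "Magellanic" LOS velocity
# and dispersion from a contaminated sample via probability weighting.
rng = np.random.default_rng(42)
v_mag = rng.normal(280.0, 17.0, 400)   # hypothetical Magellanic members
v_mw = rng.normal(40.0, 70.0, 200)     # hypothetical MW foreground
v = np.concatenate([v_mag, v_mw])

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# membership probability P_i from the two (assumed known) population models
f_mag = 400 / 600
p = f_mag * gauss(v, 280.0, 17.0)
p /= p + (1 - f_mag) * gauss(v, 40.0, 70.0)

# probability-weighted maximum-likelihood moments of the Magellanic Gaussian
v_los = np.sum(p * v) / np.sum(p)
sigma_los = np.sqrt(np.sum(p * (v - v_los) ** 2) / np.sum(p))
print(v_los, sigma_los)   # close to the input values 280 and 17
```

In the real analysis the population models themselves are part of the fit, covariances between proper-motion components are included, and uncertainties come from the posterior sampling rather than from closed-form moments.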
We use a statistical framework, described in detail in \citetalias{C20}, to probabilistically associate stars, based on their kinematics, to either the Clouds, or one of several possible Milky Way contaminant populations. These association probabilities are used to weight the fitting of a multi-dimensional Gaussian distribution describing the aggregate Magellanic kinematic properties of each field: the LOS velocity ($V_{\text{LOS}}$) and dispersion ($\sigma_{\text{LOS}}$), plus the two components of proper motion ($\mu_\alpha$, $\mu_\delta$)\footnote{$\mu_\alpha$ refers to proper motion in the $\alpha\cos(\delta)$ direction, as obtained directly from the Gaia EDR3 source catalogue using the column \textaltfont{PMRA}.} and their dispersions ($\sigma_\alpha$, $\sigma_\delta$). We assume there is no covariance between the LOS velocity and either proper motion component, but do account for covariance between the two proper motion components as presented in Gaia EDR3. Fitting is performed using the Markov Chain Monte Carlo ensemble sampler \textsc{emcee} \citep{foreman-mackeyEmceeMCMCHammer2013} in order to maximise the log-likelihood of the Gaussian model given the data; we report the 68 per cent confidence interval as the $1\sigma$ uncertainty in each of the six fitted parameters. As part of this process, we additionally obtain a fitted estimate of the total fraction of likely Magellanic stars per field. Table~\ref{tab:fieldbase} provides the inferred kinematic properties for each of the seven fields along the northern arm, as well as the number of stars in the field with an individual probability $P_i\geq$50\% of being associated with the Clouds. This number is typically very similar ($\pm$1-2 stars) to that inferred from the fitted total fraction of Magellanic stars. In each case, the number of likely Magellanic stars is significantly lower than the total number of stars observed in the field. 
This is primarily due to the relatively inefficient target selection used in all fields except field 22 (and disk fields 12 and 18 as discussed above). These fields were observed prior to the release of Gaia DR2, and thus target selection was based only on colour-magnitude diagram (CMD) position. As there is moderate Milky Way contamination within the selection boxes used to isolate Magellanic red clump stars (see Fig.~2 of \citetalias{C20}), a significant fraction of the targets observed in these fields are not genuinely Magellanic members. Fields observed later in the survey, after the release of Gaia DR2 (`G' fields), use updated target selection procedures that incorporate kinematic priors, and consequently suffer far less from contamination by non-members. This is demonstrated in field 22, which uses the updated selection procedure. Despite being located near the end of the arm -- where the density of Magellanic stars is intrinsically low, and the density of contaminants is high due to the field's proximity to the Galactic plane -- the number of Magellanic stars detected is comparable to that in e.g. field 15, which is located much closer to the LMC disk where the density of members is significantly higher. Table~\ref{tab:fieldbase} also provides kinematic data for two fields in the northern LMC disk located close to the northern arm, previously discussed in \citetalias{C20} and re-analysed using Gaia EDR3 data in this paper. Notable in Table~\ref{tab:fieldbase} is field 20, which contains no stars with a significant probability of being Magellanic. In addition to using the relatively inefficient CMD-only selection procedure, the field centre is \textasciitilde1$^\circ$ offset from the feature track. This offset was not apparent when the field was initially observed in 2017, as at the time it was located at the extreme limit of the known structure. It is only with astrometric cuts as afforded by Gaia that the feature could be traced further, revealing the offset.
As a result, no stars in this field are convincingly Magellanic in origin, and we therefore exclude this field from further analysis. \begin{table*} \centering \caption{MagES fields along the northern arm and in the nearby northern LMC disk. Columns give the field number and classification as described in \protect\citetalias{C20}; location of the field centre as RA($\alpha$), DEC($\delta$) in J2000.0; on-sky distance of the field from the centre of the LMC ($R_{\text{LMC}}$), number of likely Magellanic stars per field, and aggregate kinematic parameters (described in \S\ref{sec:data}).} \label{tab:fieldbase} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lllllllllll} \hline \multicolumn{1}{>{\centering\arraybackslash}m{1.5cm}}{Field (Class)} & \multicolumn{1}{c}{RA} & \multicolumn{1}{c}{DEC} & \multicolumn{1}{>{\centering\arraybackslash}m{1.3cm}}{$R_{\text{LMC}}$ ($^\circ$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.4cm}}{$N_{\text{Magellanic}}$ ($P_i\geq50\%$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.2cm}}{$V_{\text{LOS}}$ \newline(km~s$^{-1}$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.2cm}}{$\sigma_{\text{LOS}}$\newline (km~s$^{-1}$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.2cm}}{$\mu_\alpha$\newline (mas~yr$^{-1}$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.2cm}}{$\sigma_\alpha$\newline (mas~yr$^{-1}$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.2cm}}{$\mu_\delta$\newline (mas~yr$^{-1}$)} & \multicolumn{1}{>{\centering\arraybackslash}p{1.2cm}}{$\sigma_\delta$\newline (mas~yr$^{-1}$)} \\ \hline 11 (D) & 05 19 42.63 & -56 53 06.88 & 12.7 & 75 & $ 280.8\pm2.2 $ & $ 17.2\pm2.0 $ & $ 1.72\pm0.03 $ & $ 0.13\pm0.03 $ & $ 0.06\pm0.04 $ & $ 0.24\pm0.03 $ \\ 13 (D) & 05 35 05.69 & -55 06 03.11 & 14.6 & 38 & $ 294.3\pm1.7 $ & $ 8.0\pm1.9 $ & $ 1.58\pm0.04 $ & $ 0.12\pm0.06 $ & $ -0.03\pm0.04 $ & $ 0.16\pm0.06 $ \\ 15 (D) & 06 00 07.40 & -54 17 53.14 & 16.0 & 32 & $ 311.9\pm2.7 $ & $ 12.6\pm2.1 $ & $ 1.50\pm0.04 
$ & $ 0.09\pm0.06 $ & $ 0.12\pm0.04 $ & $ 0.06\pm0.05 $ \\ 16 (D) & 06 12 13.07 & -53 52 32.45 & 16.8 & 25 & $ 323.2\pm2.0$ & $ 8.3\pm1.7 $ & $ 1.50\pm0.05 $ & $ 0.11\pm0.07 $& $ 0.24\pm0.04 $ & $ 0.07\pm0.06 $ \\ 19 (M) & 06 40 29.00 & -53 29 04.00 & 18.6 & 13 & $ 351.3\pm4.8 $ & $ 14.5\pm4.5 $ & $ 1.16\pm0.09 $ & $ 0.23\pm0.10 $ & $ 0.57\pm0.07 $ & $0.13\pm0.09 $ \\ 20 (M) & 07 04 01.00 & -53 37 01.00 & 19.9 & 0 & - & - & - & - & - & - \\ 22 (G) & 07 25 34.00 & -52 04 52.00 & 22.8 & 27 & $ 372.9\pm1.6 $ & $ 7.1\pm1.3 $ & $ 1.15\pm0.03 $ & $ 0.06\pm0.04 $ & $ 0.71\pm0.02 $ & $ 0.04\pm0.03 $ \\ 18 (G) & 06 40 00.00 & -62 30 00.00 & 10.7 & 299 & $ 324.5\pm1.2 $ & $ 20.3\pm0.9 $ & $ 1.49\pm0.01 $ &$ 0.11\pm0.01 $ & $ 1.00\pm0.01 $ & $ 0.12\pm0.01 $ \\ 12 (G) & 05 20 00.00 & -59 18 00.00 & 10.3 & 284 & $ 287.1\pm1.5 $& $ 24.8\pm1.1 $ & $ 1.78\pm0.01 $ & $ 0.12\pm0.01 $ & $ 0.20\pm0.01 $ & $ 0.19\pm0.01 $ \\ \hline \end{tabular} \end{adjustbox} \end{table*} In addition to kinematic properties, MagES also reports [Fe/H] estimates for sufficiently bright red giant branch stars, derived from the equivalent width of the 8542\AA{} and 8662\AA{} CaII triplet lines (see \citetalias{C20} and \citealt{dacostaCaIiTriplet2016} for details). However, such stars are only included in the target selection for field 22 along the arm (as well as fields 12 and 18, previously described in \citetalias{C20}). For the fainter red clump stars observed in the remaining fields along the arm, the S/N for any individual star is too low to accurately measure the equivalent width of the two lines, particularly as the 8662\AA{} line is within a region of the spectrum relatively heavily contaminated by night sky emission. Therefore, in order to derive metallicity estimates for these fields, spectra for likely ($P_i$$\geq$50\%) Magellanic stars are shifted into the rest frame using their (geocentric) LOS velocities and then stacked to create a single `representative' RC spectrum for the field. 
This increases the contrast of the two CaII lines relative to the (stochastically over- or under-subtracted) residual night-sky emission, allowing equivalent width measurements to be performed. As the stacked clump stars occupy only a small magnitude range, stacking spectra is not expected to substantially bias the derived equivalent widths, and the resulting [Fe/H] estimates are expected to tend towards the mean metallicity within a given field. All metallicity estimates are assumed to have systematic uncertainties of 0.2~dex (we refer the interested reader to \citetalias{C20} for details). \subsection{An arm-like coordinate system}\label{sec:armloc} For the following analysis, it is convenient to have a coordinate system in which the northern arm, as projected on the sky, is straight -- similar to coordinate systems used to describe stellar streams in the MW halo. However, while coordinate systems for most halo streams can be derived by assuming that the stream follows a great circle on the sky, this is not the case for the northern arm. Consequently, in this section we describe the derivation of a custom coordinate system which follows the track of the structure on the sky, with the origin at the end of the feature nearest the LMC disk. In this derivation, we neglect uncertainties on the position of each individual star, as these are negligibly small (\textasciitilde$0.15''$ in each component). To locate the feature, we use a catalogue of Magellanic red clump and red giant branch stars selected from Gaia DR2 according to \citet{belokurovCloudsArms2019}, which incorporates astrometric, photometric, and quality cuts. This catalogue provides a relatively clean selection of Magellanic stars with contiguous coverage across the entire length of the arm. We calculate an orthographic projection of the stars into Cartesian $X,Y$ coordinates, as per Eq.~2 of \cite{gaiacollaborationGaiaDataRelease2018b}, repeated here as Eq.~\ref{eq:xy}.
Here, $(\alpha_0,\delta_0)=(79.88^\circ,-69.59^\circ)$ is the LMC centre-of-mass (COM) position reported by \citet[][henceforth referred to as vdM14]{vandermarelThirdEpochMagellanicCloud2014} for their `PMs+Old vLOS Sample'. This is a kinematic centre, derived from a simultaneous fit of HST field-aggregate proper motions, combined with LOS velocities for an `old'\footnote{Comprised of carbon stars, AGB and RGB stars that are predominantly older than 1--2~Gyr and therefore similar in age to the red clump population.} stellar sample. This sample is as similar as possible to the data used in the present work. \begin{equation} \label{eq:xy} \begin{split} X &= -\cos(\delta)\sin(\alpha-\alpha_0) \\ Y &= \sin(\delta)\cos(\delta_0) - \cos(\delta)\sin(\delta_0)\cos(\alpha - \alpha_0) \end{split} \end{equation} To describe the feature, we select stars in the region $-20<X<0.8$ and $10<Y<18$, as seen in Fig.~\ref{fig:xysel}. In addition to containing stars associated with the northern arm feature, this region also includes a significant number of stars associated with the outer LMC disk, the high density of which necessitates masking before fitting the northern arm itself. All stars within the solid selection box in panel \textit{a} of Fig.~\ref{fig:xysel} are masked. Additionally, we mask stars within a two-degree diameter circle centred on the Carina dwarf galaxy ($\alpha_C,\delta_C=100.40^\circ, -50.97^\circ$), just north of the feature, as many stars associated with the Carina dwarf pass the selection criteria described in \citet{belokurovCloudsArms2019}.
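Eq.~\ref{eq:xy} is straightforward to evaluate numerically. The following Python sketch (our own code, with the conventional small-angle scaling of the projected coordinates into degrees) reproduces the tabulated $X,Y$ position of field 11 to the precision quoted in Table~\ref{tab:fieldloc}:

```python
import numpy as np

def orthographic_xy(alpha, delta, alpha0=79.88, delta0=-69.59):
    """Orthographic projection of Eq. (xy); all input angles in degrees.
    The dimensionless projected coordinates are scaled by 180/pi so that
    X, Y are expressed in degrees, as in the text."""
    a, d = np.radians(alpha), np.radians(delta)
    a0, d0 = np.radians(alpha0), np.radians(delta0)
    x = -np.cos(d) * np.sin(a - a0)
    y = np.sin(d) * np.cos(d0) - np.cos(d) * np.sin(d0) * np.cos(a - a0)
    return np.degrees(x), np.degrees(y)

# Field 11 centre (05 19 42.63, -56 53 06.88) -> (X, Y) ~ (-0.03, 12.60)
x11, y11 = orthographic_xy(79.9276, -56.8852)
```

By construction the projection is exactly $(0,0)$ at the adopted COM position.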
\begin{figure*} \setlength\tabcolsep{1.5pt} \begin{tabular}{cc} \includegraphics[width=0.52\textwidth]{fig2a} & \vspace{-1pt}\includegraphics[width=0.5\textwidth]{fig2b} \\ \multicolumn{2}{c}{\includegraphics[height=5cm]{fig2c}} \end{tabular} \caption{Normalised density of red clump and red giant branch stars, selected as per \protect\cite{belokurovCloudsArms2019}, in the region surrounding the northern arm, used to derive a feature coordinate system. Panels \textit{a} and \textit{b} have data binned in $0.4^\circ\times0.4^\circ$ squares within an orthographic projection in Cartesian coordinates. Panel \textit{a} shows the full selection of stars, with solid grey lines indicating the regions surrounding the LMC disk and the Carina dwarf galaxy masked prior to fitting the feature track. Also shown is an example demonstrating the calculation of the two feature coordinates ($\phi_1$ and $\phi_2$) for any point ($X,Y$) in the feature; ($\epsilon_X,\epsilon_Y$) is the nearest point on the feature track. Panel \textit{b} shows the post-masking data used in the fitting routine. The best-fitting polynomial (as described in Eq.~\ref{eq:track}) is shown in solid black, with fitted 1$\sigma$ and $2\sigma$ contours represented by dashed grey lines. Centres of MagES fields along the arm are overplotted in orange; the fitted feature track passes close to the centre of each field. Panel \textit{c} shows the full data selection after transformation into the feature coordinates. In this coordinate system, the feature track is a straight line at $\phi_2=0^\circ$. The selection box used to describe the feature location is shown in dashed grey.} \label{fig:xysel} \end{figure*} Remaining stars are binned into $0.4^\circ\times0.4^\circ$ bins to smooth their distribution; smaller bins contain too few stars near the low-density end of the feature, while larger bins do not sufficiently resolve the feature. 
We describe the resulting binned distribution $Z$ by a Gaussian profile in $Y$, as in Eq.~\ref{eq:gaussfit}, where the peak height ($A_Y$), centre ($Y_{\text{T}}$), and width ($\sigma_Y$) are each allowed to vary as an $n$th-order polynomial as a function of $X$. \begin{equation} \label{eq:gaussfit} \begin{split} Z(X,Y) &= A_Y(X) \exp{\left[\frac{-\left(Y - Y_{\text{T}}\left(X\right)\right)^2}{2 \left(\sigma_Y\left(X\right)\right)^2}\right]} \\ A_Y(X) &= a_nX^n +a_{n-1}X^{n-1}+\dots+a_0 \\ \sigma_Y(X) &= b_nX^n +b_{n-1}X^{n-1}+\dots+b_0 \\ Y_{\text{T}}(X) &= c_nX^n +c_{n-1}X^{n-1}+\dots+c_0 \end{split} \end{equation} We perform a least-squares fit to the polynomial coefficients, for all combinations of $n$th-order polynomials up to a maximum of 2\textsuperscript{nd} order in $A_Y$, 2\textsuperscript{nd} order in $\sigma_Y$, and 5\textsuperscript{th} order in $Y_{\text{T}}$; polynomials of higher orders overfit the data, resulting in unrealistic contours particularly near the ends of the feature. The set of coefficients with the lowest sum of squared residuals is taken to define the final track parameters for the arm: Eq.~\ref{eq:track} gives the resulting best-fit equations describing the on-sky feature track. \begin{equation} \label{eq:track} \begin{split} A_Y(X) &= 2.642\times10^{-2}X + 0.7229\\ \sigma_Y(X) &= 8.198\times10^{-3}X + 1.168 \\ Y_{\text{T}}(X) &= -3.386\times10^{-6}X^4 - 1.669\times10^{-3}X^3 \\ &- 6.825\times10^{-2}X^2- 0.7099X + 12.62 \end{split} \end{equation} We note that the coefficients describing the variation in width and peak height are, for the purpose of deriving the feature track, nuisance parameters; it is the polynomial describing the centre position that is of interest. However, we do find the peak height ($A_Y$), indicative of the stellar density, decreases by \textasciitilde70\%, and the feature width ($\sigma_Y$) decreases by \textasciitilde14\% along the length of the structure.
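The fitted model is simple to evaluate; the following Python fragment (our own sketch) encodes Eqs.~\ref{eq:gaussfit} and \ref{eq:track}, with \texttt{np.polyval} taking coefficients ordered from highest to lowest power:

```python
import numpy as np

# Best-fitting polynomial coefficients (Eq. track), highest order first.
A_COEF = [2.642e-2, 0.7229]                                  # peak height A_Y(X)
S_COEF = [8.198e-3, 1.168]                                   # width sigma_Y(X)
T_COEF = [-3.386e-6, -1.669e-3, -6.825e-2, -0.7099, 12.62]   # centre Y_T(X)

def ridge_model(x, y):
    """Gaussian-in-Y ridge model Z(X, Y) of Eq. (gaussfit), evaluated with
    the best-fitting polynomial coefficients of Eq. (track)."""
    a = np.polyval(A_COEF, x)
    s = np.polyval(S_COEF, x)
    yt = np.polyval(T_COEF, x)
    return a * np.exp(-((y - yt) ** 2) / (2.0 * s ** 2))
```

At the origin ($X=0$) the track centre sits at $Y_{\text{T}}=12.62$ with peak height $0.7229$, and the density falls off in $Y$ on either side of the ridge.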
The track and $1\sigma$ width contours from the fit are shown in panel \textit{b} of Fig.~\ref{fig:xysel}, with the MagES field centres marked in orange. Whilst the polynomial fit is not at all constrained by the locations of MagES fields -- which were deliberately selected to be centred on the feature -- it nonetheless passes very close to each field centre. We define the origin, where the arm appears to meet the LMC disk, to sit at $X=0^\circ$. We use the best-fit track to define a coordinate system for the arm, with components denoted $\phi_1$ and $\phi_2$, in which the central track is a straight line at $\phi_2=0$. For each star, we determine the nearest point on the track given by Eq.~\ref{eq:track} (which we refer to as $\epsilon$). The coordinate $\phi_1$ is defined as the distance (or line integral) along Eq.~\ref{eq:track} from $X=0$ to $X=\epsilon_X$. We calculate the direction normal to Eq.~\ref{eq:track} at $\epsilon$, and define $\phi_2$ as the distance along the normal vector from $\epsilon$ to the star's location\footnote{We note this procedure does not result in a 1:1 mapping of $X,Y$ to $\phi_1,\phi_2$ across the entire $X,Y$ domain, as, due to the curvature of the feature, normal vectors to Eq.~\ref{eq:track} eventually intersect at negative values of $\phi_2$. However, these intersections only occur at relatively large negative values of $\phi_2$, within the LMC disk where $\phi_1,\phi_2$ coordinates are not meaningful. In the vicinity of the northern arm, $X,Y$ locations are mapped to unique $\phi_1,\phi_2$ coordinates.}. The outcome of this process is a set of $\phi_1$, $\phi_2$ coordinates for each star; the resulting density plot for stars along the northern arm is shown in panel \textit{c} of Fig.~\ref{fig:xysel}. For convenience, we also transform the MagES field centres into the feature coordinate system. Table~\ref{tab:fieldloc} presents the location of MagES fields along the arm in both Cartesian and feature coordinates.
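In practice this transformation can be performed numerically, by sampling the track densely, taking the arc length to the nearest sample as $\phi_1$, and taking the residual separation as $\phi_2$. A Python sketch follows (our own implementation; the sign convention adopted for $\phi_2$ here is illustrative):

```python
import numpy as np

# Track centre polynomial Y_T(X) (Eq. track), highest order first.
T_COEF = [-3.386e-6, -1.669e-3, -6.825e-2, -0.7099, 12.62]

# Dense sampling of the track from X = 0 (origin) towards negative X.
xs = np.linspace(0.0, -20.0, 20001)
ys = np.polyval(T_COEF, xs)

# phi_1: cumulative arc length along the track from X = 0 to each sample.
seg = np.hypot(np.diff(xs), np.diff(ys))
arclen = np.concatenate([[0.0], np.cumsum(seg)])

def to_feature_coords(x, y):
    """Return (phi_1, phi_2) for a point (X, Y): phi_1 is the arc length to
    the nearest track point, phi_2 the (signed) distance from the track."""
    d2 = (xs - x) ** 2 + (ys - y) ** 2
    i = int(np.argmin(d2))
    # Sign phi_2 by which side of the local tangent the point lies on.
    j = min(i, len(xs) - 2)
    tx, ty = xs[j + 1] - xs[j], ys[j + 1] - ys[j]
    cross = tx * (y - ys[i]) - ty * (x - xs[i])
    return arclen[i], np.sign(cross) * np.sqrt(d2[i])
```

With a sampling step of $0.001^\circ$ the discretisation error in both coordinates is far below the positional uncertainties of the stars.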
When selecting member stars later in our analysis, we define a box of width 2.5$^\circ$ in $\phi_2$, spanning $-0.5^\circ\leq\phi_1\leq25^\circ$ as in panel \textit{c} of Fig.~\ref{fig:xysel}, which describes the location of the feature. \begin{table} \centering \caption{Orthographic Cartesian coordinates centred on the LMC COM, and feature coordinates along and across the northern arm (calculated as in \S\ref{sec:armloc}) for MagES fields.} \label{tab:fieldloc} \begin{tabular}{cllll} \hline Field & \multicolumn{1}{c}{X (deg)} & \multicolumn{1}{c}{Y (deg)} & \multicolumn{1}{c}{$\phi_1$ (deg)} & \multicolumn{1}{c}{$\phi_2$ (deg)} \\ \hline 11 & -0.03 & 12.60 & 0.05 & -0.05 \\ 13 & -2.23 & 14.26 & 2.74 & 0.36 \\ 15 & -5.89 & 14.62 & 6.35 & -0.15 \\ 16 & -7.70 & 14.68 & 8.17 & -0.10 \\ 19 & -11.80 & 13.92 & 12.36 & -0.24 \\ 22 & -18.40 & 12.37 & 19.15 & -0.21 \\ \hline \end{tabular} \end{table} \section{Observed Properties of the Northern Arm}\label{sec:obsprops} \subsection{Metallicity}\label{sec:met} [Fe/H] measurements for MagES fields along the arm, as a function of both LMC galactocentric radius ($R$) and $\phi_1$ distance along the feature, are presented in Fig.~\ref{fig:met}. We find very weak ($<2\sigma$) evidence for a negative metallicity gradient along the feature when performing a least-squares fit to the stacked field measurements, which are expected to trend to the field mean. The gradients we derive ($-0.015\pm0.007$ dex per degree in $R$, and $-0.025\pm0.014$ dex per degree in $\phi_1$) both imply a drop from [Fe/H]\textasciitilde$-0.9$ at the base of the feature, to [Fe/H]\textasciitilde$-1.2$ at the most distant measured point ($R$\textasciitilde$23^\circ$, $\phi_1$\textasciitilde20$^\circ$). Whilst only fields 22 and 12 have multiple metallicity measurements, we do note in these fields a relatively large scatter in [Fe/H], with metallicity measurements covering an \textasciitilde0.5~dex range even in the outermost field 22.
We discuss the implications of the potential decrease in mean [Fe/H] along the feature for estimates of its structure using RC photometry in \S\ref{sec:phot}. \begin{figure} \includegraphics[width=\columnwidth]{fig3} \caption{[Fe/H] measurements for stars in the northern arm and nearby outer LMC disk, as a function of LMC galactocentric radius ($R$: top) and $\phi_1$ distance along the feature (bottom). Red triangles indicate MagES measurements for individual stars, while squares indicate metallicities derived from stacked spectra, which tend to the mean metallicity of the field. The dashed grey line shows the best-fitting metallicity gradient along the feature.} \label{fig:met} \end{figure} Whilst literature [Fe/H] measurements in the outskirts of the LMC are sparse, they are generally consistent with our results. \citet{gradyMagellanicMayhemMetallicities2021} report photometric metallicities along the feature utilising Gaia DR2 photometry of RC/RGB stars, finding [Fe/H] values along the feature similar to our spectroscopic measurements. Any potential gradient along the feature, however, is masked by a large dispersion (defined in that paper as the difference between the 10\textsuperscript{th} and 90\textsuperscript{th} percentile of the distribution) of up to \textasciitilde0.6~dex within each square degree pixel. Both \citet{majewskiDiscoveryExtendedHalolike2008a} and \citet{carreraMETALLICITIESAGEMETALLICITYRELATIONSHIPS2011} find a decrease in the mean metallicity of RGB stars beyond a LMC galactocentric radius of \textasciitilde$7^\circ$, with mean [Fe/H]\textasciitilde$-1$ and a scatter of \textasciitilde1~dex at distances $\geq10^\circ$.
We also note that \citet{munozExploringHaloSubstructure2006} measure a mean [Fe/H]$=-0.67$ and a dispersion of $0.62$~dex -- somewhat higher than our measurements -- for a group of 15 stars in the vicinity of the Carina dwarf (located near the arm-like feature at $\phi_1$\textasciitilde$12.5^\circ$) with heliocentric velocities indicating a potential LMC association. However, cross-matching these stars against Gaia EDR3 returns at least three stars with proper motions strongly inconsistent with an LMC association, suggesting their reported mean metallicity could be too high (assuming the non-Magellanic stars are metal-rich Galactic contaminants). Unfortunately, \citet{munozExploringHaloSubstructure2006} do not report individual [Fe/H] measurements, so we cannot calculate a corrected value. Our [Fe/H] measurements indicate the feature is likely composed of disturbed LMC disk material. The median metallicity near the base of the feature is consistent with measurements in the nearby outer LMC disk fields \citepalias{C20}. Moreover, given the negative metallicity gradient observed at smaller LMC radii \citep{carreraMETALLICITIESAGEMETALLICITYRELATIONSHIPS2011,majewskiDiscoveryExtendedHalolike2008a}, a mild negative metallicity gradient could be expected under the assumption that the feature is an overdensity in the extreme outskirts of the LMC disk, such that stars now located far along the feature originated at larger galactocentric radii than stars near its base. We explore formation mechanisms for the feature using models to test this idea in \S\ref{sec:models}. We can, however, rule out the feature being the disrupted remains of an accreted dwarf satellite of the LMC as discussed in \citetalias{mackey10KpcStellar2016}.
Considering the mass-metallicity relation for dwarf galaxies \citep[as presented in][]{hidalgoMassmetallicityRelationDwarf2017,kirbyUNIVERSALSTELLARMASSSTELLAR2013}, a stellar mass of $\geq10^{7.6}$~M$_\odot$ is required for a mean [Fe/H]$\gtrsim-1.2$, corresponding to an integrated luminosity $M_V\lesssim-11.5$ \citep{mcconnachieOBSERVEDPROPERTIESDWARF2012}. In contrast, \citetalias{mackey10KpcStellar2016} find the integrated luminosity of the feature is only $M_V$\textasciitilde$-7.4$. Even accounting for the increased spatial extent of the feature traced using more recent data, and uncertainties in the mass-metallicity relation, this is still $\gtrsim$30 times fainter than the luminosity of the required satellite. \subsection{Structure}\label{sec:phot} In order to place constraints on the geometry of the feature, we analyse Gaia EDR3 photometry of stars along its length. Although Gaia parallaxes lack the precision to provide useful distances for the Clouds \citep{gaiacollaborationGaiaEarlyData2021a}, the apparent magnitude of the red clump can instead be used as a standardisable candle to provide information about the relative geometry of the feature. However, the apparent magnitude of the red clump is not purely distance dependent: population effects including the age and metallicity of clump stars affect their intrinsic luminosity \citep[see][for a review]{girardiRedClumpStars2016}, and interstellar extinction along the line-of-sight also affects the measured clump magnitude. Determining the relative geometry of the feature therefore requires dereddened photometry, as well as assumptions about its constituent stellar populations.
Following the procedure described in \cite{gaiacollaborationGaiaEarlyData2021c}, we deredden our photometry utilising the \citet{schlegelMapsDustInfrared1998} dust maps, corrected as described in \citet{schlaflyMEASURINGREDDENINGSLOAN2011}, in conjunction with the mean extinction coefficients for the Gaia passbands described in \citet{casagrandeUseGaiaMagnitudes2018}. No correction is made for reddening internal to the Clouds, as this is not expected to be significant in the low-density peripheral regions targeted by MagES \citep[cf.][henceforth referred to as C18]{choiSMASHingLMCTidally2018}. We correct the Gaia $G$-band photometry of sources with 6-parameter astrometric solutions as described in \cite{rielloGaiaEarlyData2021} prior to applying the dereddening procedure. In order to effectively utilise the clump magnitude as a distance estimator along the feature, we initially assume that the stellar population comprising the clump does not vary, and is identical to that in the nearby LMC outer disk. This implies any magnitude differences observed along the feature are due entirely to distance effects. However, as discussed in \S\ref{sec:met}, there is weak evidence for a mild negative metallicity gradient along the feature. In the Gaia $G$ passband (which substantially overlaps the optical V-band investigated in \citealt{girardiPopulationEffectsRed2001}), this is expected to result in an increase in clump luminosity along the feature, as well as a shift to bluer colours. We discuss the scale of this potential population effect and its implications for our results in detail below. To determine an appropriate CMD selection box for RC stars, we utilise dereddened Gaia EDR3 photometry within the northern LMC disk, where the clump is well-populated.
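The dereddening applied to this photometry reduces to subtracting passband-specific multiples of $E(B-V)$ from each magnitude. A schematic Python version follows; the coefficient values below are placeholders for illustration only, not the (colour-dependent) mean coefficients of \citet{casagrandeUseGaiaMagnitudes2018} actually adopted:

```python
import numpy as np

# Illustrative extinction coefficients A_X / E(B-V) for the Gaia passbands.
# PLACEHOLDER values only -- the analysis adopts the mean coefficients of
# Casagrande & VandenBerg (2018), which additionally depend on stellar colour.
K = {"G": 2.5, "BP": 3.0, "RP": 1.8}

def deredden(g, bp, rp, ebv):
    """Return extinction-corrected G_0 and (BP-RP)_0, given E(B-V) from
    the (recalibrated) Schlegel et al. (1998) dust map."""
    g0 = g - K["G"] * ebv
    colour0 = (bp - K["BP"] * ebv) - (rp - K["RP"] * ebv)
    return g0, colour0
```

Note that the colour correction depends only on the difference of the two coefficients, so modest errors common to both passbands largely cancel in $(G_{\text{BP}}-G_{\text{RP}})_0$.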
We select stars within a 1$^\circ$ radius of two MagES disk fields (fields 12 and 18: see Fig.~\ref{fig:map}), with parallax $<0.2$~mas and proper motions within a box centred on the field median motions reported in Table~\ref{tab:fieldbase}, of full width five times the corresponding dispersions (i.e. $\pm2.5\sigma_{\alpha/\delta}$), and passing the quality cuts \textaltfont{ruwe}$<1.4$ and $C^*<4\sigma_{C^*}$. Fig.~\ref{fig:diskcmd} shows the resulting Hess diagrams for the two fields. We define a selection box of $0.85<(G_{\text{BP}}-G_{\text{RP}})_0<1.05$ and $18.0<G_0<19.25$ to select red clump stars. The selection is designed to minimise contamination from the RGB and potential RGB bump (which, at a similar magnitude to the RC, could bias estimation of the clump magnitude), whilst being sufficiently wide in magnitude range to accommodate potential distance variations along the northern arm. The resulting median clump magnitude and colour, and associated dispersion calculated as the standard deviation, are provided for the two fields in Table~\ref{tab:diskphot}. Note the observed \textasciitilde0.1~mag difference in median $G_0$ for these fields is expected due to the inclined disk geometry of the LMC. We test small (\textasciitilde0.25~mag) adjustments to the selection box (including stricter blue and bright magnitude cutoffs to minimise contamination from horizontal branch and blue loop stars respectively) and find these do not significantly affect our results. \begin{table} \caption{\label{tab:diskphot}Median RC magnitude $\left\langle G_0\right\rangle$ and colour $\left\langle(G_{\text{BP}}-G_{\text{RP}})_0\right\rangle$, and associated dispersion, for two MagES northern disk fields. Standard errors on both the median and dispersion are reported.
} \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{lllll} \hline Field & $\left\langle G_0\right\rangle$ & $\sigma_{G_0}$ & $\left\langle(G_{\text{BP}}-G_{\text{RP}})_0\right\rangle$ & $\sigma_{(G_{\text{BP}}-G_{\text{RP}})_0}$ \\ \hline 18 & $18.66\pm0.02$ & $0.24\pm0.01$ & $0.975\pm0.004$ & $0.052\pm0.002$ \\ 12 & $18.75\pm0.02$ & $0.23\pm0.01$ & $0.972\pm0.004$ & $0.052\pm0.002$\\ \hline \end{tabular} \end{adjustbox} \end{table} \begin{figure} \includegraphics[width=0.455\columnwidth]{fig4a} \includegraphics[width=0.499\columnwidth]{fig4b} \caption{Colour-magnitude selection boxes used to isolate red clump stars, overlaid on Hess diagrams of stars within 2$^\circ$ diameter fields centred on MagES fields 18 (panel \textit{a}) and 12 (panel \textit{b}), located in the northern LMC disk. Only stars passing proper motion, parallax, and quality cuts as described in the text are included. The RC selection box (dashed grey line) is designed to minimise contamination from non-RC populations (including RGB, horizontal branch, and blue loop stars) whilst allowing for colour and magnitude shifts along the arm.} \label{fig:diskcmd} \end{figure} We apply the same CMD, parallax, and quality cuts to stars within the feature selection box in $\phi_1/\phi_2$ coordinates presented in \S\ref{sec:armloc}. Unlike in an individual disk field, the mean proper motion varies along the length of the feature, and therefore a simple global proper motion cut is insufficient to minimise contamination. As such, we perform a least-squares fit to each of the two proper motion components measured for the MagES fields along the feature as a function of $\phi_1$, weighted by the proper motion uncertainty. We define each proper motion selection to be a box centred on the resulting fit, with a full width five times the mean proper motion dispersion of all MagES fields along the arm (i.e. $\pm2.5\left\langle\sigma_{\alpha/\delta}\right\rangle$).
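This $\phi_1$-dependent cut can be sketched in Python using the field-aggregate values from Tables~\ref{tab:fieldbase} and \ref{tab:fieldloc} (shown here for $\mu_\delta$ only; the box half-width of 0.3~mas~yr$^{-1}$ approximates $2.5\left\langle\sigma_{\delta}\right\rangle$ and is illustrative):

```python
import numpy as np

# Field-aggregate mu_delta (mas/yr) and uncertainties for arm fields
# 11, 13, 15, 16, 19, 22, with phi_1 positions (deg) from Table fieldloc.
phi1 = np.array([0.05, 2.74, 6.35, 8.17, 12.36, 19.15])
mu_d = np.array([0.06, -0.03, 0.12, 0.24, 0.57, 0.71])
err = np.array([0.04, 0.04, 0.04, 0.04, 0.07, 0.02])

# Uncertainty-weighted linear fit of mu_delta as a function of phi_1.
slope, intercept = np.polyfit(phi1, mu_d, 1, w=1.0 / err)

def in_pm_box(p1, mu, halfwidth=0.3):
    """True for stars within +/- halfwidth of the fitted trend at phi_1 = p1."""
    return abs(mu - (slope * p1 + intercept)) < halfwidth
```

A star is retained in the final selection only if it passes the corresponding cut in both proper motion components.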
The resulting selections are presented in Fig.~\ref{fig:photpmbox}, overlaid on 2D histograms of the underlying proper motion distribution (limited to stars with proper motions $0<\mu_\alpha<3$~mas~yr$^{-1}$ and $-2<\mu_\delta<2$~mas~yr$^{-1}$); the fitted relations follow the underlying overdensities in proper motion space associated with the feature. Our final selection includes only stars which pass the selection in both proper motion components. \begin{figure} \includegraphics[width=\columnwidth]{fig5} \caption{Proper motion selection boxes (dashed grey) used to isolate likely LMC stars along the northern arm. Orange points indicate MagES field aggregate motions, with error bars representing the field aggregate $1\sigma$ dispersion. These are overlaid on 2D histograms of $\mu_\alpha$ (panel \textit{a}) and $\mu_\delta$ (panel \textit{b}) as a function of $\phi_1$ location along the feature for RC stars in the vicinity of the northern arm. } \label{fig:photpmbox} \end{figure} We bin our final selection into segments of $2.5^\circ$ in $\phi_1$, and determine the median $(G_{\text{BP}}-G_{\text{RP}})_0$ colour, $G_0$ magnitude, and associated dispersions (calculated as the standard deviation of the distribution) for each bin. Bins are chosen such that at least 60 stars are present in each bin, in order to provide robust estimates of the colour-magnitude distributions. Fig.~\ref{fig:photprops} shows the resulting photometric trends as a function of the $\phi_1$ distance along the feature; error bars represent the standard error on each parameter. The standard error on all quantities increases along the feature due to the decreasing density of Magellanic stars further from the LMC disk. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{fig6} \caption{Photometric properties of red clump stars calculated within $2.5^\circ$ bins in $\phi_1$ distance along the feature.
Panels show (\textit{a}) median $(G_{\text{BP}}-G_{\text{RP}})_0$ colour, (\textit{b}) standard deviation in the $(G_{\text{BP}}-G_{\text{RP}})_0$ distribution, (\textit{c}) median $G_0$ magnitude, and (\textit{d}) standard deviation in the $G_0$ distribution. The dashed grey line in panel \textit{a} shows a linear least-squares fit to the median colour as a function of $\phi_1$.} \label{fig:photprops} \end{figure} An underlying assumption of the following analysis of the structure's photometric properties is that any contaminant (i.e. non-Magellanic) population of stars within our final selection is uniformly distributed within the CMD selection box along the length of the feature. To test this, we utilise the Besan{\c c}on Model of the Galaxy \citep{robinSyntheticViewStructure2003}\footnote{Accessed as version 1603 through the web service\\ \url{https://model.obs-besancon.fr/}}. We generate an empirical representation of the observed Milky Way contaminant profile within the feature selection box by applying the same CMD, position, parallax, and proper motion selection cuts as for our observed sample to the Model. We find the underlying MW population is distributed relatively uniformly within the CMD selection box, and remains so along the length of the feature, indicating this does not bias either the median RC colour or magnitude inferred from our final selection. The number of contaminant stars within the selection does increase along the length of the feature; we discuss this in further detail below. As seen in Fig.~\ref{fig:photprops}, there is a mild (\textasciitilde0.01~mag) trend to bluer colours as the $\phi_1$ distance along the feature increases, such that the mean colours at either end of the feature differ by approximately $1\sigma$. A linear least-squares fit, weighted by the standard error on the median colour, has a slope of $(-6\pm2)\times10^{-4}$~mag per degree.
Such a trend is qualitatively consistent with the mild trend to lower metallicities with increasing $\phi_1$ as observed in \S\ref{sec:met}: red clump stars become bluer at lower metallicities \citep{girardiRedClumpStars2016}. We test whether the magnitude of this colour shift is consistent with that expected from the metallicity gradient along the feature using PARSEC isochrones \citep{bressanParsecStellarTracks2012}\footnote{Accessed as version 3.4 of the web form \\ \url{http://stev.oapd.inaf.it/cmd}}, assuming the default parameters for IMF and mass loss, in Gaia EDR3 passbands. For isochrones of an 11~Gyr old population\footnote{the best-fitting isochrone for the outer LMC disk in \cite{mackeySubstructuresTidalDistortions2018}.}, at metallicities of [Fe/H]$=-0.9$ and [Fe/H]$=-1.2$ (the maximum inferred metallicity difference along the feature), we calculate the luminosity-weighted mean magnitude in the $G$, $G_{\text{BP}}$, and $G_{\text{RP}}$ passbands for core He-burning stars\footnote{with label `4' in the isochrone.} as in Eqs. 3 and 4 of \citet{girardiPopulationEffectsRed2001}. The calculated (reddening-free) $G_{\text{BP}}-G_{\text{RP}}$ colour difference between the two metallicities is \textasciitilde0.06~mag: significantly larger than the measured \textasciitilde0.01~mag difference in $(G_{\text{BP}}-G_{\text{RP}})_0$ along the feature. This is not unexpected: dispersion in the clump age and metallicity (\textasciitilde0.5~dex as measured in \S\ref{sec:met}), as well as photometric uncertainties, will act to `smear out' the clump colour and reduce the measured colour difference along the feature. We also note the dispersion in $(G_{\text{BP}}-G_{\text{RP}})_0$ remains constant along the length of the feature, implying the underlying scatter within the RC population remains relatively constant along the arm. 
Whilst the most straightforward interpretation of the shift to bluer colours along the arm is an underlying metallicity gradient, it is not the only possibility. Stellar age also affects the median RC colour, with young ($\lesssim$2~Gyr) RC stars significantly bluer than older RC stars \citep{girardiPopulationEffectsRed2001}. However, DECam CMDs in the vicinity of the feature reveal a lack of main sequence stars above an ancient \citep[\textasciitilde11~Gyr:][]{mackeySubstructuresTidalDistortions2018} turnoff. We can thus infer that age is not the dominant driver of the shift in RC colour. In contrast, we cannot rule out the possibility of systematics in the reddening correction affecting the median $(G_{\text{BP}}-G_{\text{RP}})_0$ colour at the level of 0.01~mag, noting the mean $E(B-V)$ value along the feature remains relatively constant at \textasciitilde0.08 between $0^\circ<\phi_1<15^\circ$, but increases to \textasciitilde0.25 at $\phi_1$\textasciitilde22.5$^\circ$. However, we do note the minimal change in $(G_{\text{BP}}-G_{\text{RP}})_0$ colour implies any systematic in the reddening correction must be small. We now consider the gradient in $G_0$ observed along the northern arm. The median clump brightness increases along the feature, from $G_0$\textasciitilde18.83 at the base of the feature to $G_0$\textasciitilde18.63 far from the LMC disk. To check the possible effect of changing metallicity on the $G$-band magnitude, we utilise the same isochrones as described above to quantify the maximum potential difference in $G_0$ due to metallicity, and find the clump is brighter by only \textasciitilde0.06~mag for the more metal-poor population. This is similar to the standard error on the $G_0$ magnitude per bin. Further, as in the case of the RC colour, the large metallicity scatter along the feature is expected to reduce the severity of the observed magnitude difference due to metallicity along the feature.
We therefore conclude that the gradient in $G_0$ observed along the northern arm can be entirely attributed to a change in distance, with the structure becoming closer with increasing $\phi_1$: as would be expected given the inclination ($i$) and orientation ($\Omega$) of the LMC disk. To investigate how closely the northern arm follows the plane of the LMC disk, we calculate the expected magnitude difference along the feature under the assumption of disk geometry. As multiple estimates of the LMC disk geometry exist, even considering only `old' stellar populations, we investigate two geometries which span the range of recent measurements reported in the literature: that from \citetalias{vandermarelThirdEpochMagellanicCloud2014} ($i=34.0^\circ\pm7^\circ$, $\Omega=139.1^\circ\pm4.1^\circ$; similar to that reported by \citealt{gaiacollaborationGaiaEarlyData2021a}), and that from \citetalias{choiSMASHingLMCTidally2018} ($i=25.86^\circ\pm1.4^\circ$, $\Omega=149.23^\circ\pm8.35^\circ$). Accounting for changes in both the LMC galactocentric radius and position angle along the feature, the expected magnitude difference along the feature is $0.16\pm0.03$~mag under the \citetalias{choiSMASHingLMCTidally2018} geometry, and $0.21\pm0.08$~mag under the \citetalias{vandermarelThirdEpochMagellanicCloud2014} geometry: corresponding to end-to-end changes in distance of $3.4\pm0.6$~kpc and $4.1\pm1.4$~kpc respectively. Our measured difference of $0.21\pm0.05$~mag is, within uncertainty, consistent with both of these estimates, if somewhat closer to that of \citetalias{vandermarelThirdEpochMagellanicCloud2014}. We can therefore infer the feature does, to first order, follow the plane of the LMC disk, though the precision of our measurements limits our ability to isolate a preferred disk geometry at these large radii. We also investigate the thickness of the feature using the $G_0$ dispersion of the RC.
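The geometric expectation can be sketched with the standard inclined-disk relation of \citet{vandermarelMagellanicCloudStructure2001}, in which the line-of-sight distance to a point in the disk plane depends on its angular radius and position angle. This is only a sketch: the adopted LMC distance, the sample angles, and the sign convention for $\phi-\theta$ are illustrative assumptions.

```python
import math

def disk_distance(rho_deg, phi_deg, i_deg, theta_deg, d0=50.0):
    # Line-of-sight distance (kpc) to a point in an inclined disk plane:
    # rho = angular radius from the LMC centre, phi = position angle,
    # i = inclination, theta = position angle of the line of nodes.
    # d0 = 50 kpc is an assumed LMC distance for illustration.
    rho, phi = math.radians(rho_deg), math.radians(phi_deg)
    i, theta = math.radians(i_deg), math.radians(theta_deg)
    return d0 * math.cos(i) / (math.cos(i) * math.cos(rho)
                               - math.sin(i) * math.sin(rho)
                               * math.sin(phi - theta))

# Expected magnitude difference between two illustrative points on the
# feature, under the Choi et al. (2018) inclination and line of nodes:
dmag = 5 * math.log10(disk_distance(22.0, 10.0, 25.86, 149.23)
                      / disk_distance(8.0, 20.0, 25.86, 149.23))
```

Evaluating this difference at the endpoints of the feature track, and propagating the geometry uncertainties, gives the expected end-to-end magnitude differences quoted in the text.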
Within a given bin and passband, the measured dispersion $\sigma_{G_0}$ can be parameterised by Eq.~\ref{eq:disp}, where $\sigma_{\text{geo}}$ is the apparent dispersion due to global distance differences along the length of the feature, $\sigma_{\text{int}}$ is the intrinsic dispersion of the clump due to population effects, $\sigma_{\text{err}}$ is dispersion introduced through photometric uncertainties, $\sigma_{\text{depth}}$ is due to the intrinsic thickness of the feature, and $\sigma_{\text{cont}}$ is the apparent broadening of the RC due to the presence of an underlying uniformly-distributed model contaminant population within the selection box. Note that under this parameterisation, $\sigma_{\text{cont}}$ only accounts for Milky Way contamination, and not contamination from non-RC Magellanic populations, such as RGB stars, discussed further below. Of interest is whether $\sigma_{\text{depth}}$ within the feature is comparable to that in the outer LMC disk. \begin{equation}\label{eq:disp} \sigma_{G_0}^2 = \sigma_{\text{geo}}^2 + \sigma_{\text{int}}^2 + \sigma_{\text{err}}^2 + \sigma_{\text{depth}}^2 + \sigma_{\text{cont}}^2 \end{equation} As we bin the data into 2.5$^\circ$ lengths along the feature, within each bin we expect the global distance gradient $\sigma_{\text{geo}}$ to be small: assuming either \citetalias{choiSMASHingLMCTidally2018} or \citetalias{vandermarelThirdEpochMagellanicCloud2014} geometries results in a maximum magnitude difference of \textasciitilde0.05~mag across each bin due to a global distance gradient. We expect $\sigma_{\text{geo}}$ to be similar, if slightly smaller, within our two LMC disk reference fields (MagES fields 12 and 18) as these fields also have a diameter of \textasciitilde$2^\circ$. We subtract the predicted $\sigma_{\text{geo}}$ effect from the measured $G_0$ dispersion both along the feature and within the disk fields prior to comparison.
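Since the terms in Eq.~\ref{eq:disp} add in quadrature, removing the predicted $\sigma_{\text{geo}}$ contribution amounts to a quadrature subtraction; a minimal sketch (the numeric values in the example are illustrative):

```python
import math

def subtract_in_quadrature(sigma_obs, sigma_geo):
    # Remove the apparent dispersion from the global distance gradient;
    # clamp at zero in case the predicted term exceeds the measurement.
    return math.sqrt(max(sigma_obs ** 2 - sigma_geo ** 2, 0.0))

# e.g. an illustrative measured dispersion of 0.13 mag, corrected for
# the ~0.05 mag per-bin distance gradient quoted in the text:
corrected = subtract_in_quadrature(0.13, 0.05)  # 0.12 mag
```

The same correction is applied to the disk reference fields before the feature and disk dispersions are compared.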
As discussed above, the dispersion in RC colour remains constant along the feature, implying similar population effects along its length, and in \S\ref{sec:met} a metallicity dispersion of \textasciitilde0.5~dex is measured along the feature: consistent with the dispersions measured for the two disk fields in \citetalias{C20}. As such, it is not unreasonable to assume the stellar populations within the feature are similar to those in the outer LMC disk, and we can infer that $\sigma_{\text{int}}$ is constant both along the feature, and within the two reference disk fields. Similarly, as we utilise the same photometric dataset and implement the same quality cuts throughout our analysis, we expect $\sigma_{\text{err}}$ to be approximately constant both along the feature, and within the disk. Under these assumptions, any difference in $G_0$ dispersion between the feature and the disk fields is due entirely to a difference in feature thickness, or the effects of contamination. We expect $\sigma_{\text{cont}}$ to be effectively zero within the disk fields, due to the very high density of Magellanic stars compared to the expected MW contamination within the selection (a factor of $\gtrsim$100). In contrast, the level of predicted MW contamination within the selection increases along the length of the feature, increasing by a factor of \textasciitilde3 from the disk fields to the outermost bins along the feature. We hypothesise that this increase in contamination, and associated increase in $\sigma_{\text{cont}}$, is the dominant driver of the increased $G_0$ dispersion measured beyond 10$^\circ$ along the feature. In contrast, for bins within the first 10$^\circ$ of the feature, the predicted number of contaminant stars is significantly less than the total number of observed stars per bin. This implies $\sigma_{\text{cont}}$ is much smaller, if not zero, within these bins. 
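The sense in which a growing contaminant fraction inflates the measured $G_0$ dispersion follows from the standard variance of a two-component mixture; a small illustration (all numbers hypothetical):

```python
def mixture_sigma(f_member, mu_m, sig_m, mu_c, sig_c):
    # Standard deviation of a mixture of members (fraction f_member,
    # mean mu_m, width sig_m) and contaminants (mu_c, sig_c): by the
    # law of total variance, both the component widths and the offset
    # between the component means contribute.
    f = f_member
    mu = f * mu_m + (1 - f) * mu_c
    var = (f * (sig_m ** 2 + (mu_m - mu) ** 2)
           + (1 - f) * (sig_c ** 2 + (mu_c - mu) ** 2))
    return var ** 0.5

# A pure RC sample versus one with 20 per cent broad contaminants:
pure = mixture_sigma(1.0, 18.8, 0.15, 19.0, 0.6)   # = 0.15
mixed = mixture_sigma(0.8, 18.8, 0.15, 19.0, 0.6)  # noticeably larger
```

Even a modest contaminant fraction with a broad magnitude distribution can therefore dominate the apparent broadening in sparsely populated bins.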
When we compare the measured $G_0$ dispersion within these bins to that for the two disk fields, we find these are equal within uncertainty: implying the thickness of the feature is approximately the same as the thickness of the LMC disk. This is further evidence the feature is made from perturbed LMC disk material. To test the hypothesis that contamination is responsible for the observed increase in $G_0$ dispersion along the arm, and to check that contamination is not adversely affecting any of the other measured parameters, we fitted a mixture model that explicitly tries to account for non-RC populations within each bin. The model assumes the density of red clump stars takes the form of a two-dimensional Gaussian on the CMD, while the background density is described by linearly varying terms in both colour and magnitude. The relative fraction of contaminants and members in a given bin is left as a free parameter. We sample the posterior probability distributions for the model parameters using the Markov Chain Monte Carlo ensemble sampler {\sc emcee}. Whilst this approach is more comprehensive in modelling the stellar populations within the selection box than our original method, a disadvantage is that it requires a substantial number of stars per bin to robustly converge. As a result, we can only reliably perform the fit within four bins (of length \textasciitilde5$^\circ$, compared to 2.5$^\circ$ in our original method) along the arm. Nonetheless, our results agree closely with the RC $(G_{\text{BP}}-G_{\text{RP}})_0$ colour and $G_0$ magnitude trends determined from the simple medians, as well as the $(G_{\text{BP}}-G_{\text{RP}})_0$ colour dispersion. The fitted RC dispersion in $G_0$ also remains constant along the full length of the arm, with the non-member fraction increasing by a factor of \textasciitilde2 from $\phi_1$=0 to $\phi_1$=20 degrees. 
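A minimal sketch of the per-bin likelihood for such a mixture model follows; the parameterisation here is illustrative (an axis-aligned 2-D Gaussian clump plus a background varying linearly in colour and magnitude, normalised over the selection box), not necessarily identical to ours, and in practice the posterior built from this would be sampled with {\sc emcee}:

```python
import math

def log_likelihood(params, stars, box):
    # params: member fraction f, clump centre (c0, m0) and widths
    # (sc, sm) on the CMD, and background slopes (a, b) in colour and
    # magnitude. `stars` is a list of (colour, magnitude) pairs and
    # `box` = (c_lo, c_hi, m_lo, m_hi) is the RC selection box.
    f, c0, m0, sc, sm, a, b = params
    c_lo, c_hi, m_lo, m_hi = box
    area = (c_hi - c_lo) * (m_hi - m_lo)
    c_mid, m_mid = 0.5 * (c_lo + c_hi), 0.5 * (m_lo + m_hi)
    total = 0.0
    for c, m in stars:
        # Axis-aligned 2-D Gaussian red clump:
        clump = (math.exp(-0.5 * (((c - c0) / sc) ** 2
                                  + ((m - m0) / sm) ** 2))
                 / (2.0 * math.pi * sc * sm))
        # Linear background, centred so it integrates to 1/area over
        # the box (clipped to stay positive):
        bkg = max(1.0 + a * (c - c_mid) + b * (m - m_mid), 1e-12) / area
        total += math.log(f * clump + (1.0 - f) * bkg)
    return total
```

Sampling the posterior of these seven parameters (with appropriate priors) per bin then yields the clump centre, dispersions, and non-member fraction simultaneously.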
This supports our conclusion that it is the contaminating populations that drive the increasing $G_0$ dispersion along the arm in our simple measurements, while the intrinsic thickness does not substantially change. \subsection{Kinematics}\label{sec:kinematics} Whilst Table~\ref{tab:fieldloc} reports kinematic information in observable units, our finding that the northern arm sits close to the expected plane of the LMC disk and is likely composed of disk material means it is more informative to consider its kinematics in the LMC disk frame. As such, the framework presented in \citet{vandermarelMagellanicCloudStructure2001} and \citet{vandermarelNewUnderstandingLarge2002} is used to transform the observed components into velocities in a cylindrical coordinate system. This coordinate system is aligned with the LMC disk, and has its origin at the LMC centre of mass (COM). As in \citetalias{C20}, we choose the COM to be ($\alpha_0=79.88^\circ$, $\delta_0=-69.59^\circ$) as reported by \citetalias{vandermarelThirdEpochMagellanicCloud2014} for their `PMs+Old $V_{\text{LOS}}$ Sample', and the associated systemic motions applicable for this choice of centre: i.e. $\mu_{\delta,0}=0.287\pm0.054$ mas~yr$^{-1}$, $\mu_{\alpha,0}=1.895\pm0.024$ mas~yr$^{-1}$, and $V_{\text{LOS},0} = 261.1\pm2.2$~km~s$^{-1}$. The orientation of the LMC disk relative to the line-of-sight must also be assumed during this coordinate transform. From \S\ref{sec:phot}, the feature remains roughly within the plane of the LMC disk, though the moderate uncertainties in our measurement preclude distinguishing between varying literature measurements of the disk geometry. As such, for this paper we choose to utilise the \citetalias{choiSMASHingLMCTidally2018} geometry when calculating kinematics in the plane of the LMC disk. This is motivated by preliminary results from Mackey et al. (in prep.), which indicate the inclination of the LMC decreases at large radii.
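Once the observed $(\mu_\alpha, \mu_\delta, V_{\text{LOS}})$ have been rotated into the disk plane via the \citet{vandermarelNewUnderstandingLarge2002} framework, the final step -- resolving an in-plane velocity into radial and azimuthal components about the COM -- is straightforward. A sketch, with the in-plane positions and velocities as hypothetical inputs:

```python
import math

def cylindrical_components(x, y, vx, vy):
    # (x, y): position in the disk plane relative to the COM;
    # (vx, vy): in-plane velocity in the same frame. Returns
    # (V_r, V_theta), with positive V_r directed away from the COM.
    phi = math.atan2(y, x)
    v_r = vx * math.cos(phi) + vy * math.sin(phi)
    v_theta = -vx * math.sin(phi) + vy * math.cos(phi)
    return v_r, v_theta
```

The full transformation also handles the perspective effects from the LMC's systemic motion and the viewing angle, which is why the vdM02 machinery is required rather than this projection alone.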
Using this assumed geometry, we transform the observed kinematic parameters for the feature fields into physical velocities and dispersions in the LMC disk frame. We calculate $V_\theta$, the azimuthal streaming or rotation velocity; $V_r$, the radial velocity in the disk plane; and $V_z$, the vertical velocity perpendicular to the disk plane, as well as dispersions ($\sigma_\theta$, $\sigma_r$, $\sigma_z$) in each of these components. These disk measurements are reported in Table~\ref{tab:armkin}, and Fig.~\ref{fig:armkin} plots each component as a function of $\phi_1$ position along the feature. \begin{table*} \caption{Disk velocities for northern arm feature fields, calculated assuming \citetalias{choiSMASHingLMCTidally2018} disk geometry. In $V_\theta$, positive values indicate clockwise rotation. In $V_r$, positive values indicate movement outward from the LMC COM in the LMC disk plane. In $V_z$, positive values indicate movement perpendicular to the disk plane, in a direction primarily towards the observer: `in front' of the LMC disk.
We also give the number of likely Magellanic stars per field, repeated from Table~\ref{tab:fieldloc}.} \label{tab:armkin} \begin{tabular}{clllllll} \hline Field & $N_{\text{Magellanic}}$ ($P_i\geq50\%$) &\multicolumn{1}{c}{$V_\theta$ (km~s$^{-1}$)}& \multicolumn{1}{c}{$\sigma_\theta$ (km~s$^{-1}$)} & \multicolumn{1}{c}{$V_r$ (km~s$^{-1}$)} & \multicolumn{1}{c}{$\sigma_r$ (km~s$^{-1}$)} & \multicolumn{1}{c}{$V_z$ (km~s$^{-1}$)} & \multicolumn{1}{c}{$\sigma_z$ (km~s$^{-1}$)}\\ \hline 11 & 75 &$54.2 \pm 9.8$ & $29.2\pm7.5$ & $5.7\pm16.7$ & $56.7\pm8.2$ & $9.4\pm6.2$ & $19.9 \pm2.4$ \\ 13 & 38 &$73.4 \pm 11.7$ & $25.7\pm 11.7$ & $-39.3 \pm 16.3$ & $36.1 \pm12.4$ & $19.5 \pm 6.0$ & $12.3 \pm3.5$ \\ 15 & 32 &$67.3 \pm 12.9$ & $19.0 \pm10.7$ & $-47.1 \pm15.0$ & $14.7 \pm9.1$ & $24.2\pm6.2$ & $13.5 \pm2.4$ \\ 16 & 25 &$59.2 \pm 14.2$ & $24.5\pm 13.1$ & $-40.4 \pm 15.8$ & $17.6 \pm10.2$ & $21.6 \pm 6.6$ & $10.4 \pm2.7$ \\ 19 & 13 &$108.8 \pm22.9$ & $47.7\pm 18.6$ & $-30.2 \pm20.0$ & $34.6\pm 14.9$ & $22.4 \pm9.9$ & $17.4 \pm4.5$ \\ 22 & 27 &$47.6 \pm14.6$ & $12.7\pm 5.9$ & $-26.3 \pm 9.8$ & $11.1\pm 5.2$ & $28.3 \pm 6.1$ & $7.2 \pm1.3$ \\ \hline \end{tabular} \end{table*} \begin{figure*} \includegraphics[width=\textwidth]{fig7} \caption{Observed velocities and dispersions for MagES fields along the northern arm as a function of $\phi_1$, calculated assuming \protect\citetalias{choiSMASHingLMCTidally2018} disk geometry. Top panels show, in order, the azimuthal, in-plane radial, and out-of-plane vertical velocities; bottom panels show the velocity dispersions in each component. Positive azimuthal velocities indicate clockwise rotation (i.e. in a direction from North towards West), positive radial velocities indicate movement outward from the LMC COM in the LMC disk plane, and positive vertical velocities indicate movement perpendicular to the disk plane, in a direction primarily towards the observer. 
Grey dashed lines indicate the expected kinematics for an equilibrium disk, with the rotational velocity (and uncertainties) taken from \protect\citetalias{C20}, recalculated assuming the \protect\citetalias{choiSMASHingLMCTidally2018} disk geometry for consistency.} \label{fig:armkin} \end{figure*} With the exception of field 19 (discussed below), clear trends are observed in each of the disk velocity components and their dispersions. Within uncertainty, the azimuthal rotation velocity in each field is consistent with that derived from the two MagES disk fields 12 and 18 in \citetalias{C20}, recalculated using Gaia EDR3 astrometry and the assumption of a \citetalias{choiSMASHingLMCTidally2018} disk geometry to maintain consistency with fields along the northern arm. The dominant source of uncertainty in estimates of the disk kinematics -- both in the \citetalias{C20} values, and those measured here -- is the uncertainty in the assumed disk geometry. The measured azimuthal velocity is also within the uncertainty of that derived for RC sources in \citet{gaiacollaborationGaiaEarlyData2021a}, which is approximately flat at \textasciitilde70~km~s$^{-1}$ (with perhaps a very mild \textasciitilde5~km~s$^{-1}$ downturn at their outermost radii of \textasciitilde8~kpc). In contrast to the relatively ordered and disk-like kinematics observed in the azimuthal velocity, the in-plane radial velocity ($V_r$) and out-of-plane vertical velocity ($V_z$) along the feature are strongly out of equilibrium. Both these values are expected to be near zero in an equilibrium disk, and measurements of these in MagES northern disk fields reported in \citetalias{C20} are consistent with zero within uncertainty. However, the in-plane radial velocity drops sharply from approximately zero (in field 11) to $-40$~km~s$^{-1}$ by the next field \textasciitilde3$^\circ$ away along the feature.
This strongly-inward velocity remains roughly constant along the feature before a slight decrease in magnitude to approximately $-30$~km~s$^{-1}$ in the outermost field 22. In addition, the vertical velocity gradually increases along the feature from near zero to a maximum of nearly 30~km~s$^{-1}$ in field 22: a significant out-of-plane motion. Such dramatic kinematic signatures strongly suggest perturbation by an external gravitational potential: we investigate this possibility in greater detail in \S\ref{sec:models}. Since the geometry of the outer LMC disk is uncertain, it is possible the observed kinematic perturbations are simply reflections of incorrectly assuming \citetalias{choiSMASHingLMCTidally2018} parameters for the disk geometry. To test this, we solve for the disk orientation required for the feature kinematics to simply be a projection of purely rotational motion within the LMC disk plane by simultaneously minimising the sum of squares of $V_r$ and $V_z$. However, we find the derived disk orientations (typically with inclinations approximately $-40^\circ$) are strongly inconsistent with the constraints derived in \S\ref{sec:phot}, indicating genuinely perturbed kinematics. In general, the velocity dispersion in each component decreases along the length of the feature. The azimuthal ($\sigma_\theta$) and vertical ($\sigma_z$) velocity dispersions in field 11 nearest the outer LMC disk are similar to those observed in the nearby outer disk in \citetalias{C20}; the dispersions gradually decrease to approximately half their disk values by the outermost field along the structure. As moving along the feature also increases the galactocentric radius of the fields, this is not surprising: the velocity dispersion is expected to decrease with radius for material within an axisymmetric disk potential \citep{gaiacollaborationGaiaEarlyData2021a,vasilievInternalDynamicsLarge2018b,binneyGalacticDynamics2008b}.
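This consistency check can be sketched as a brute-force search over trial disk orientations. Here `vr_vz_of` is a stand-in for the full vdM02 transformation, assumed to return the per-field $V_r$ and $V_z$ lists implied by a trial $(i, \Omega)$:

```python
def best_orientation(vr_vz_of, i_grid, omega_grid):
    # Minimise sum(V_r^2) + sum(V_z^2) over a grid of trial disk
    # orientations. vr_vz_of(i, omega) is assumed to return two lists
    # of velocities, one entry per field along the feature.
    best = None
    for i in i_grid:
        for omega in omega_grid:
            vr, vz = vr_vz_of(i, omega)
            cost = sum(v * v for v in vr) + sum(v * v for v in vz)
            if best is None or cost < best[0]:
                best = (cost, i, omega)
    return best[1], best[2]
```

In practice a continuous optimiser would be used, but a coarse grid is sufficient to show whether the minimising orientation is compatible with the photometric constraints.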
More interesting is the dispersion of the in-plane radial velocity. The dispersions in the two innermost feature fields ($>$40~km~s$^{-1}$) are similar to that in the nearby outer LMC disk \citepalias[\textasciitilde45~km~s$^{-1}$:][]{C20}; that in the innermost feature field even exceeds this value. This is in stark contrast to the canonical disk value of \textasciitilde25~km~s$^{-1}$ measured at large ($\geq8^\circ$) radii in undisturbed fields \citep{C20,gaiacollaborationGaiaEarlyData2021a,vasilievInternalDynamicsLarge2018b}. As discussed in \citet{wanSkyMapperViewLarge2020}, a large in-plane radial velocity dispersion can be indicative of perturbation by an external gravitational force: in their model, an interaction with the SMC can elevate the radial velocity dispersion in the outskirts of the LMC. In fields further along the feature, the radial velocity dispersion drops to values more consistent with the outer disk, continuing to drop along the length of the feature to \textasciitilde10~km~s$^{-1}$ in the outermost feature field. In Fig.~\ref{fig:armkin}, field 19 is a notable outlier when compared with the kinematic trends observed in the other fields. This is especially true of its azimuthal velocity (\textasciitilde1.8$\sigma$ higher than the other fields) and its three dispersion components (\textasciitilde1$\sigma$ higher than the other fields). In addition, field 19 has generally larger uncertainties for all parameters. Investigation revealed that these characteristics can be traced to a combination of two issues. Firstly, field 19 has substantially fewer Magellanic members than any of the other fields -- a factor of two smaller than adjacent fields 16 and 22. Secondly, it transpires that the peak of one of the contamination populations we model during our membership analysis (see \citetalias{C20}) sits very close (\textasciitilde0.4~mas~yr$^{-1}$) to the kinematic peak for field 19 in the proper motion plane.
As a consequence, our algorithm finds it difficult to robustly distinguish between Magellanic members and non-members. In fields where a large number of Magellanic stars are present, this is not an issue: the contamination populations are generally very broad compared to the narrow Magellanic kinematic peaks, allowing reliable association of stars with the appropriate population. However, the low number of genuinely Magellanic stars in field 19 broadens the observed Magellanic peak in proper motion space, resulting in misclassification of some genuinely Magellanic stars as belonging to the contaminant population (and potentially vice-versa, though the large difference in LOS velocity between Magellanic and non-Magellanic stars typically mitigates misclassification in this direction). This biases the derived Magellanic $\mu_\alpha$; indeed, most of the deviation observed in $V_\theta$ can be directly mapped to the $\mu_\alpha$ component of proper motion. Because of these issues, we do not attribute any physical significance to the fact that field 19 appears to be a kinematic outlier, and downweight its importance when comparing our measurements with numerical models in the next section. \section{Modelling and Analysis}\label{sec:models} In order to interpret the observed properties of the northern arm, we have created a suite of dynamical models of the LMC+SMC+Milky Way system with which to compare our observations. These comprise an existing $N$-body model of the MW and LMC only, presented in \citetalias{mackey10KpcStellar2016}, and five new model ensembles with varying LMC, SMC, and MW masses. Within each of the five new model ensembles, we sample from literature uncertainties in the 6D phase space properties of the LMC and SMC centres in order to investigate the allowed distribution of orbits -- and hence past interactions -- of the Clouds.
We utilise these models to test the relative importance of tidal forces from the MW and SMC in generating structures akin to the northern arm. \subsection{General methodology} While we analyse several different models, calculated using two distinct numerical methods, we utilise a common procedure for making mock observations of these models. The simulations are evolved in Cartesian coordinates which are centred on the present-day location of the Milky Way. Mock observations are made on the final snapshot from the location of the Sun, which is assumed to be at a distance of 8.178~kpc \citep{gravitycollaborationGeometricDistanceMeasurement2019} from the Galactic centre and moving with a velocity of $(11.1,242.5,7.3)$~km~s$^{-1}$ (motivated by the results of \citealt{schonrichLocalKinematicsLocal2010} and \citealt{bovyMILKYWAYCIRCULARVELOCITY2012}). These mock observations are made for the same observables as the real data, i.e. $\alpha$, $\delta$, $D$, $\mu_\alpha$, $\mu_\delta$, $V_{\text{LOS}}$. We subsequently convert these observables into the same (X,Y) coordinate system as the observed data using Eq.~\ref{eq:xy}. Note that in this transformation, we set $\alpha_0$, $\delta_0$ to be the defined LMC centre for each individual model, rather than the observed LMC centre, as the defined centre by design varies between model iterations. To determine the model kinematics within each field for comparison with our observations, we select all particles within a one-degree radius of the central (X,Y) coordinates of each field reported in Table~\ref{tab:fieldloc} -- the same size as a MagES field observed with 2dF. We calculate the resulting median and dispersion of each kinematic component ($V_{\text{LOS}}$, $\mu_\alpha$, and $\mu_\delta$), which are suitable for direct comparison with the equivalent MagES observations.
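The first step of this mock-observation pipeline -- converting Sun-centred Cartesian positions into angular coordinates and a distance -- can be sketched as below (Galactic $l$, $b$ for brevity; converting onward to $\alpha$, $\delta$ applies the standard fixed Galactic-to-equatorial rotation, omitted here):

```python
import math

def galactic_observables(x, y, z):
    # Sun-centred Cartesian coordinates in kpc, with x toward the
    # Galactic centre, y along Galactic rotation and z toward the
    # north Galactic pole; returns Galactic (l, b) in degrees and
    # the heliocentric distance in kpc.
    d = math.sqrt(x * x + y * y + z * z)
    l = math.degrees(math.atan2(y, x)) % 360.0
    b = math.degrees(math.asin(z / d))
    return l, b, d
```

Proper motions and line-of-sight velocities follow analogously by projecting each particle's velocity vector onto the corresponding unit vectors at its position.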
We further convert the model kinematics for each field into the reference frame of the assumed LMC disk plane using the same process as for the observed data, described in \S\ref{sec:kinematics}, to facilitate comparison with the equivalent observations. However, we make one key change to this process, as unlike the observed stars, the true distance to each model particle is known. We therefore utilise the true particle distances to calculate the out-of-plane distance ($z$) relative to the assumed \citetalias{choiSMASHingLMCTidally2018} LMC disk plane for each particle, rather than making the assumption that all particles are in the LMC disk plane ($z=0$) as required for the observations. We use the calculated out-of-plane distances to assess the accuracy of this earlier assumption (see below). We note there are two possible approaches for comparing model fields to MagES fields. The first, which we have adopted, is to select model locations at the same (X,Y) coordinates as each MagES field. However, these positions are not always precisely co-located with any northern overdensity that may appear in a given model. An alternative is to fit a unique feature track following any northern overdensity for each model realisation, using the same method as described in \S\ref{sec:armloc}, and compare model fields centred at the same $\phi_1/\phi_2$ coordinates as each MagES field as reported in Table~\ref{tab:fieldloc}. While this ensures model fields are co-located with any northern overdensity generated in the models, differences in the shape of the feature between model iterations can result in model fields located at significantly different LMC galactocentric radii and position angles. We adopt the first approach described above as this is equivalent to selecting particles at the same projected LMC galactocentric radius and position angle as the MagES fields, ensuring particles feel comparable gravitational forces from the LMC+SMC+MW as the observed stars. 
In comparison, under the second approach outlined above, the different galactocentric radii of the model fields mean the gravitational forces felt by particles at each field location can differ, potentially significantly, between model iterations. The derived kinematics are thus not strictly comparable, even between individual model realisations. Nonetheless, we have tested the second approach by fitting a feature track to each model realisation, selecting all particles within a one-degree radius of the $\phi_1/\phi_2$ coordinates of each MagES field, and calculating the resulting field kinematics. Comparison of the two approaches reveals the choice of field location does not significantly affect the derived model kinematics, nor the resulting conclusions regarding the origin of the feature. \subsection{$N$-body model}\label{sec:nbody} We first compare our data to an existing $N$-body simulation of an LMC flyby of the MW presented in \citetalias{mackey10KpcStellar2016}. The LMC is modelled as a two-component galaxy (stellar disk and NFW halo), with a total mass of $1.4\times10^{11}$M$_\odot$ and stellar disk mass of $4\times10^{9}$M$_\odot$. The disk and halo comprise $10^6$ particles each, with softening lengths of 75 and 500~pc respectively. The disk has a scale radius of 1.5~kpc, and a scale height of 0.3~kpc; the total LMC mass within 8.7~kpc is $1.8\times10^{10}$M$_\odot$, corresponding to a circular velocity of \textasciitilde90~km~s$^{-1}$, consistent with \citetalias{vandermarelThirdEpochMagellanicCloud2014}. The Milky Way is modelled as a three-component system with a bulge, disk, and dark matter halo as described in \cite{gomezItMovesDangers2015}.
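The quoted enclosed mass and circular velocity are mutually consistent under a simple spherical approximation:

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(m_enclosed, r_kpc):
    # Circular velocity implied by the mass enclosed within radius r,
    # treating the enclosed mass distribution as spherical.
    return math.sqrt(G * m_enclosed / r_kpc)

v = v_circ(1.8e10, 8.7)  # ~94 km/s, consistent with the quoted ~90 km/s
```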
The model was integrated for 2~Gyr, with initial positions and velocities of the Milky Way and LMC chosen using backward integration from the current position as in \cite{gomezItMovesDangers2015}, and initial LMC disk orientation chosen to match that reported in \citetalias{vandermarelThirdEpochMagellanicCloud2014}. The resulting present-day LMC position and systemic velocities were within $2\sigma$ of the Galactocentric Cartesian values reported in \cite{kallivayalilThirdEpochMagellanicCloud2013}. Whilst the $N$-body model was run prior to the availability of more recent (i.e. post-\textit{Gaia}) structural and kinematic measurements in the outer LMC disk, we still perform a comparison to the $N$-body model as it surpasses our newer models in several aspects. In particular, it captures the self-gravity of the LMC disk and the deformation of the LMC dark matter halo during infall to the MW potential, both potentially significant in forming the northern arm, and follows the evolution of the LMC for twice as long as the newer models, allowing for a better understanding of the arm's formation timescale. For the present analysis, we have shifted the final LMC position and systemic velocities to new coordinates ($\alpha_0=80.86^\circ$, $\delta_0=-69.89^\circ$, $D_0=49.74$~kpc, $V_{\text{LOS},0}=262.7$ km~s$^{-1}$, $\mu_{\alpha,0}=1.995$ mas~yr$^{-1}$, $\mu_{\delta,0}=0.265$ mas~yr$^{-1}$) in order to more closely match recent estimates of the LMC's systemic properties, and facilitate comparison with our new model suites which include realisations with these same central properties. We stress that as the true endpoint values of the simulation are different to these shifted values, the orbital history of the LMC in this $N$-body model is slightly different to that of later models which have the shifted values as their true endpoints. Some small differences are therefore expected when comparing predictions from this $N$-body model to later model suites. 
In order to verify the applicability of the $N$-body model, we briefly discuss the model kinematics for the two MagES fields located in the northern LMC disk discussed in \citetalias{C20}: any systematic differences between the observed and modelled kinematics in this comparison will likely also occur for fields along the arm. Table~\ref{tab:diskfield} presents the three velocity components in the plane of the LMC disk ($V_\theta$, $V_r$, and $V_z$), as well as dispersions in each of these components, for MagES fields 18 and 12 and the corresponding $N$-body model predictions. We note the observed values are slightly different to those presented in \citetalias{C20} as we re-calculate these using Gaia EDR3 astrometry and assumption of a \citetalias{choiSMASHingLMCTidally2018} disk geometry to maintain consistency with fields along the northern arm. \begin{table*} \caption{Disk velocities for MagES fields 18 and 12, located in the northern LMC disk, derived assuming \citetalias{choiSMASHingLMCTidally2018} disk geometry. Measurements are presented for both observed data, and for the \citetalias{mackey10KpcStellar2016} $N$-body model. As model particles have precisely known positions and kinematics, no uncertainties are reported for the model fields. 
} \label{tab:diskfield} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{cllllllllllll} \hline Field & \multicolumn{2}{c}{$V_\theta$ (km~s$^{-1}$)} & \multicolumn{2}{c}{$\sigma_\theta$ (km~s$^{-1}$)} & \multicolumn{2}{c}{$V_r$ (km~s$^{-1}$)} & \multicolumn{2}{c}{$\sigma_r$ (km~s$^{-1}$)} & \multicolumn{2}{c}{$V_z$ (km~s$^{-1}$)} & \multicolumn{2}{c}{$\sigma_z$ (km~s$^{-1}$)} \\ \hline & Measured & $N$-body & Measured & $N$-body & Measured & $N$-body & Measured & $N$-body & Measured & $N$-body & Measured & $N$-body \\ \hline 18 & $66.0 \pm 12.9$ & 106.8 & $25.6 \pm2.1$ & 9.0 & $19.7 \pm 8.3$ & 20.5 & $25.6 \pm2.0$ & 9.6 & $5.4 \pm 6.8$ & 3.0 & $20.8 \pm1.1$ & 6.1 \\ 12 & $42.2 \pm 7.3$ & 84.9 & $27.5 \pm2.5$ & 6.1 & $25.3 \pm 14.4$ & $-1.8$ & $43.2 \pm3.8$ & 23.3 & $-3.7 \pm 5.4$ & 1.7 & $25.4 \pm1.3$ & 6.4 \\ \hline \end{tabular} \end{adjustbox} \end{table*} From Table~\ref{tab:diskfield}, one clear difference between the $N$-body model and the observations is the velocity dispersions: all three components are significantly lower in the model than in the observations. This directly contributes to the overestimation of the azimuthal velocity ($V_\theta$) by the model. The model does not explicitly set the azimuthal velocity; instead, the circular velocity ($V_{\text{circ}}$) is fixed at a single radius by enforcing an enclosed mass of $1.8\times10^{10}$M$_\odot$ at a radius of 8.7~kpc \citep{vandermarelThirdEpochMagellanicCloud2014}. However, the circular velocity is higher than the azimuthal velocity due to asymmetric drift, with the difference roughly comparable to the disk velocity dispersion at the large radii of these fields. As such, the too-low velocity dispersions in the model directly contribute to its too-high azimuthal velocity. We consequently expect these same discrepancies in fields along the northern arm. In contrast, the radial and vertical velocities are generally similar in both the model and observations.
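Schematically, the asymmetric-drift connection between $V_{\text{circ}}$, the mean streaming velocity $\bar{V}_\theta$, and the dispersions follows from the radial Jeans equation (e.g. \citealt{binneyGalacticDynamics2008b}; tilt terms neglected):

```latex
V_{\rm circ}^2 - \bar{V}_\theta^2
  \simeq \sigma_r^2 \left[ \frac{\sigma_\theta^2}{\sigma_r^2} - 1
  - \frac{\partial \ln \left( \nu \sigma_r^2 \right)}{\partial \ln R} \right]
```

For a tracer density $\nu$ that declines steeply at large $R$, the bracketed term is positive and of order a few, so the drift scales with $\sigma_r^2$: a model with too-small dispersions necessarily places $\bar{V}_\theta$ too close to $V_{\rm circ}$.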
We also assess the geometry of the model, calculating the median out-of-plane distance ($z$)\footnote{Following convention we consider positive $z$ to indicate `above' the disk and negative $z$ to indicate `below' the disk. More informative is to note that in the \citet{vandermarelNewUnderstandingLarge2002} framework, $z$ increases in the direction of the observer such that `above' corresponds to `in front of' the disk plane while `below' corresponds to `behind' the disk plane relative to the observer.} at the location of the two fields. If the model geometry matches the assumed disk geometry, the median distance above the disk plane should be zero. We find the median out-of-plane distances are smaller under the assumption of \citetalias{choiSMASHingLMCTidally2018} disk geometry ($\leq$0.5~kpc for both fields, with field 18 above the disk plane and field 12 below the disk plane) than the assumption of \citetalias{vandermarelThirdEpochMagellanicCloud2014} disk geometry (\textasciitilde0.8-1.6~kpc below the disk plane for fields 18 and 12 respectively). The smaller out-of-plane distances calculated using \citetalias{choiSMASHingLMCTidally2018} geometry imply this is closer to the model inclination, and supports our choice in \S\ref{sec:kinematics} to assume this geometry when transforming observed MagES kinematics into the LMC disk plane. Having established caveats associated with the kinematics of the model, and substantiated the assumption of \citetalias{choiSMASHingLMCTidally2018} disk geometry in calculating these, we now compare the model kinematics to MagES fields along the northern arm. Fig.~\ref{fig:nbody} shows the three disk velocity components, as well as dispersions in each of these, for both the $N$-body model (represented by magenta points) and observations within each field. The figure also shows results for the base-case suite of newer models, discussed in more detail below. 
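Given the true 3-D particle positions, the out-of-plane distance is simply the projection onto the disk-plane unit normal; a sketch (the construction of the normal from $(i, \Omega)$ is omitted, and `n_hat` is assumed already normalised, with positive $z$ directed toward the observer):

```python
def out_of_plane_distance(pos, com, n_hat):
    # Signed perpendicular distance of a particle from the assumed
    # disk plane through the LMC COM: the dot product of the
    # COM-relative position with the plane's unit normal.
    rel = [p - c for p, c in zip(pos, com)]
    return sum(r * n for r, n in zip(rel, n_hat))
```

The median of this quantity over the particles in a field is the statistic compared between the two assumed disk geometries above.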
To improve figure clarity, particularly when comparing several model suites to observations, we plot each field spaced equally along the x-axis, with model points slightly offset from observations. We list the LMC galactocentric radius for each field on the top axis for reference. Whilst overall kinematic trends as a function of position along the feature are similar in both the $N$-body model and the observations, kinematics within each individual field differ. \begin{figure*} \includegraphics[width=\textwidth]{fig8} \caption{Modelled velocities and dispersions for MagES fields along the northern arm, calculated assuming a \protect\citetalias{choiSMASHingLMCTidally2018} disk geometry. Top panels show, in order, the azimuthal ($V_\theta$), radial ($V_r$), and vertical ($V_z$) velocity components, with bottom panels showing the corresponding velocity dispersion in each component. Orange points show the observations and associated $1\sigma$ uncertainties, and magenta diamonds show results from the $N$-body model. Purple box-and-whisker plots show the distribution of the new base-case model suite across 100 realisations: the shaded box shows the 25\textsuperscript{th}--75\textsuperscript{th} percentiles of the distribution, with whiskers representing the 5\textsuperscript{th} and 95\textsuperscript{th} percentiles, and the central shaded line the 50\textsuperscript{th} percentile of the distribution. For clarity, fields are artificially spaced equally along the x-axis, with model points slightly offset. The top axis lists the LMC galactocentric radius $R$, in degrees, for each field.} \label{fig:nbody} \end{figure*} As expected from the analysis of the two MagES disk fields, the azimuthal and vertical velocity dispersions (panels \textit{d} and \textit{f} of Fig.~\ref{fig:nbody}) in all fields are substantially lower than the observations, with the in-plane radial velocity dispersion (panel \textit{e}) also lower in all but two fields mid-way along the arm.
This is likely a reflection of the underestimated velocity dispersions within the model more generally. The model azimuthal velocity (panel \textit{a}) is also significantly higher than the observations in the innermost feature field. This is likely due to the too-low velocity dispersion of the model, as it is in this field that all three components of the velocity dispersion are most significantly underestimated. The model in-plane radial velocities (panel \textit{b}) do show the same general shape as the observations, with a drop to approximately $-35$~km~s$^{-1}$ in field 13, and an increasing velocity moving further along the feature. However, the model radial velocity increases much too sharply compared to the observations, with the outermost field having a predicted radial velocity close to 40~km~s$^{-1}$. This is clearly inconsistent with the strong negative in-plane radial velocity measured along the entire length of the arm. The vertical velocity (panel \textit{c}) follows the same trend as the observations, but offset in magnitude: while increasing along the length of the feature, with the exception of the innermost field it is consistently lower than the observations by \textasciitilde10-15~km~s$^{-1}$. The overall qualitative agreement between model velocity trends and observations suggests it is plausible that the northern arm could be formed solely as a consequence of the tidal force of the Milky Way; however, the quantitative disagreements indicate there must be differences between this specific model realisation -- and associated perturbation -- and the actual LMC. \subsection{Simpler model suites}\label{sec:simple} Whilst the $N$-body model has some qualitative agreement with the observed kinematic trends, it is nonetheless a single model that does not include the SMC.
To learn more about the origin of the northern arm, a suite of models is required to (i) probe the allowed range of physical parameters, such as varying galaxy masses and the effect of SMC interactions, and (ii) account for the effects of uncertainties in the LMC and SMC central positions and systemic velocities on the orbits of the Clouds. As it is prohibitively computationally expensive to run such large model suites as full $N$-body models, we instead generate a suite of simpler models. We note that while there are limitations associated with these simpler models, which we discuss further below, they are valuable as an initial exploration of the allowable parameter space. Our models are inspired by those presented in \cite{belokurovCloudsArms2019}, who modelled the LMC disk as a collection of test-particles initially on circular orbits in a single plane. In order to account for the velocity dispersion of the LMC disk, as well as the disk thickness, we instead initialise the LMC disk as an exponential disk made of test-particles. We model the LMC potential as a rigid exponential disk and a Hernquist \citep{hernquistAnalyticalModelSpherical1990} dark matter halo. For the exponential disk, we use a disk mass of $2\times10^9$M$_\odot$, a scale radius of 1.5~kpc, and a scale height of 0.4~kpc. For the Hernquist profile, we assume a mass of $1.5\times10^{11}$M$_\odot$ \citep[motivated by the results of][]{erkalTotalMassLarge2019} and a scale radius of 20~kpc in our fiducial model. We also consider a lighter LMC model where the Hernquist profile mass is $1.5\times10^{10}$M$_\odot$ and the scale radius is 1~kpc. In both cases, the scale radius is chosen so that the circular velocity is approximately 90~km~s$^{-1}$ at 10~kpc. The disks are initialised using \textsc{agama} \citep{vasilievAGAMAActionbasedGalaxy2019}.
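The fiducial circular-velocity constraint can be verified from the Hernquist rotation curve, $V_c^2(r) = GMr/(r+a)^2$. A minimal sketch, in which we treat the $2\times10^9$\,M$_\odot$ disk as a point mass at these radii (our own rough approximation, reasonable since its 1.5~kpc scale radius is well inside 10~kpc):

```python
import math

# Gravitational constant in kpc (km/s)^2 / Msun
G = 4.300917e-6

def v_circ_hernquist(m, a, r):
    """Circular velocity of a Hernquist (1990) profile:
    M(<r) = m r^2 / (r + a)^2, so Vc^2 = G m r / (r + a)^2."""
    return math.sqrt(G * m * r / (r + a) ** 2)

# Fiducial LMC at r = 10 kpc: 1.5e11 Msun halo with a = 20 kpc,
# plus the 2e9 Msun disk treated as a point mass (approximation)
r = 10.0
v_halo = v_circ_hernquist(1.5e11, 20.0, r)
v_tot = math.sqrt(v_halo**2 + G * 2e9 / r)
print(f"V_circ(10 kpc) ~ {v_tot:.1f} km/s")
```

With these parameters the combined rotation curve is indeed close to 90~km~s$^{-1}$ at 10~kpc, as stated.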
We note that since all of the features examined in this work are focused on the outskirts of the LMC ($>10$~kpc), we only include particles with apocenters larger than 7~kpc when initialising the disk for computational efficiency. The Milky Way is modelled as a three-component system with a bulge, disk, and dark matter halo similar to the \texttt{MWPotential2014} from \citet{bovyGalpyPythonLIBRARY2015}. We use an NFW halo \citep{navarroUniversalDensityProfile1997} with a mass of $8\times10^{11}$M$_\odot$, a scale radius of $16$~kpc, and a concentration of $15.3$. For the disk, we use a Miyamoto-Nagai potential \citep{miyamotoThreedimensionalModelsDistribution1975} with a mass of $6.8\times10^{10}$M$_\odot$, a scale radius of $3$~kpc, and a scale height of $0.28$~kpc. For the bulge, we use a Hernquist profile with a mass of $5\times10^9$M$_\odot$ and a scale length of $0.5$~kpc. We also consider a more massive Milky Way case where the NFW mass is raised to $1.2\times10^{12}$~M$_\odot$ with all other parameters kept the same. The SMC is modelled as a rigid Hernquist profile with a mass of $2.5\times10^9$M$_\odot$ and a scale radius of $0.043$~kpc. We also consider a more massive SMC with a mass of $5\times10^9$M$_\odot$ and a scale radius of 1.26~kpc. In both cases, the scale radius is chosen so that the SMC has a circular velocity of $60$~km~s$^{-1}$ at $2.9$~kpc \citep[motivated by the results of][]{stanimirovicNewLookKinematics2004}. As the entire SMC mass is enclosed within this radius in our models, this results in much smaller scale radii than in e.g. \citet{beslaRoleDwarfGalaxy2012a}, who model an initially more massive SMC which experiences mass loss through repeated interactions with the LMC. As in \cite{erkalTotalMassLarge2019}, we treat each system (i.e. MW, LMC, SMC) as a particle sourcing a potential. This allows us to account for the motion of the Milky Way in response to the LMC.
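The quoted SMC scale radii follow by inverting the Hernquist rotation curve for the scale radius: from $V_c^2 = GMr/(r+a)^2$, one obtains $a = \sqrt{GMr}/V_c - r$. A minimal sketch reproducing both values (the function name is ours):

```python
import math

# Gravitational constant in kpc (km/s)^2 / Msun
G = 4.300917e-6

def hernquist_scale_radius(m, r, v_target):
    """Scale radius a such that a Hernquist profile of mass m has
    circular velocity v_target at radius r:
    Vc^2 = G m r / (r + a)^2  =>  a = sqrt(G m r) / Vc - r."""
    return math.sqrt(G * m * r) / v_target - r

# Fiducial and heavy SMC: Vc = 60 km/s at 2.9 kpc
a_fiducial = hernquist_scale_radius(2.5e9, 2.9, 60.0)  # ~0.043 kpc
a_heavy = hernquist_scale_radius(5.0e9, 2.9, 60.0)     # ~1.26 kpc
print(a_fiducial, a_heavy)
```

Both scale radii quoted in the text ($0.043$ and $1.26$~kpc) are recovered.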
We account for the dynamical friction of the Milky Way on the LMC using the results of \citet{jethwaMagellanicOriginDwarfs2016a}. The LMC and SMC are initialised at their present day locations, then rewound for 1~Gyr in the presence of each other and the Milky Way. At this time, the LMC disk is initialised with \textasciitilde$2.5\times10^6$ tracer particles, and the system is evolved to the present. During initialisation, the LMC disk is aligned such that its geometry matches that from \citetalias{choiSMASHingLMCTidally2018} -- equal to that assumed for our observations in Section~\ref{sec:kinematics}. Due to the rigid nature of the disk potential, the orientation of the disk does not evolve during the simulation. No tracer particles are placed within the SMC potential. We verify that the present-day kinematics of the inner LMC ($R<10$~kpc; noting that particles with apocentres $<7$~kpc are not included in our simulation) remain consistent with those observed in the equilibrium LMC disk at these radii (i.e. $V_{\text{circ}}$\textasciitilde90~km~s$^{-1}$, $V_r\sim V_z\sim0$, $z\sim0$, and each of $\sigma_\theta,\sigma_r,\sigma_z$ approximately constant). This indicates our simulations are suitable for comparison with our observations, and that any deviations from equilibrium at larger radii in our simulations are genuinely the result of perturbations from the Milky Way, SMC, or both. As a summary of our setup, Table~\ref{tab:modpars} shows the properties of each model set. For the `base-case' model set -- our best estimate of realistic parameters for each of the LMC, SMC, and MW -- we run 100 realisations, sampling from within Gaussian uncertainties on the LMC and SMC distances and systemic velocities as presented in Table~\ref{tab:modinitprops}.
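An equilibrium check of this kind amounts to decomposing particle velocities into cylindrical disk-plane components and confirming $V_\theta\sim V_{\text{circ}}$ with $V_r\sim V_z\sim0$. A minimal sketch of the decomposition (the function name and sign convention -- positive $V_\theta$ for counter-clockwise rotation in the $x$-$y$ plane -- are our own illustrative choices):

```python
import numpy as np

def disk_velocities(pos, vel):
    """Decompose Cartesian positions/velocities of shape (N, 3),
    in a frame where the disk lies in the x-y plane, into
    cylindrical components (azimuthal, radial, vertical)."""
    x, y = pos[:, 0], pos[:, 1]
    R = np.hypot(x, y)
    cphi, sphi = x / R, y / R
    v_r = vel[:, 0] * cphi + vel[:, 1] * sphi       # in-plane radial
    v_theta = -vel[:, 0] * sphi + vel[:, 1] * cphi  # azimuthal
    v_z = vel[:, 2]                                 # vertical
    return v_theta, v_r, v_z

# Toy equilibrium check: particles on circular orbits at 10 kpc with
# 90 km/s should give V_theta ~ 90 and V_r ~ V_z ~ 0
phi = np.linspace(0, 2 * np.pi, 100, endpoint=False)
zeros = np.zeros_like(phi)
pos = np.stack([10 * np.cos(phi), 10 * np.sin(phi), zeros], axis=1)
vel = np.stack([-90 * np.sin(phi), 90 * np.cos(phi), zeros], axis=1)
vt, vr, vz = disk_velocities(pos, vel)
```

The velocity dispersions within a field then follow as the standard deviations of these components over the particles in that field.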
Also presented in Table~\ref{tab:modinitprops} are the current-day relative velocities and positions of the SMC compared to the LMC, in the frame of the LMC disk, that result from our sampling of the 12-dimensional LMC/SMC parameter space. \begin{table} \caption{Simulation parameters for each simpler model ensemble.} \label{tab:modpars} \begin{adjustbox}{max width=\columnwidth} \begin{threeparttable} \begin{tabular}{lllll} \hline Ensemble & Realisations & LMC mass & MW mass & SMC mass \\ \hline base-case & 100 & $1.5\times10^{11}$M$_\odot$\tnote{a} & $8\times10^{11}$M$_\odot$\tnote{b} & $2.5\times10^{9}$M$_\odot$\tnote{c} \\ No SMC & 12 & $1.5\times10^{11}$M$_\odot$ & $8\times10^{11}$M$_\odot$ & - \\ Light LMC & 12 & $1.5\times10^{10}$M$_\odot$\tnote{d} & $8\times10^{11}$M$_\odot$ & - \\ Heavy MW & 12 & $1.5\times10^{11}$M$_\odot$ & $1.2\times10^{12}$M$_\odot$\tnote{e} & - \\ Heavy SMC & 12 & $1.5\times10^{11}$M$_\odot$ & $8\times10^{11}$M$_\odot$ & $5\times10^{9}$M$_\odot$\tnote{c} \\ \hline \end{tabular} \begin{tablenotes}[para]\footnotesize \item[a]\protect\citealt{erkalTotalMassLarge2019}; \item[b]\protect\citealt{bovyGalpyPythonLIBRARY2015}; \item[c]\protect\citealt{harrisSpectroscopicSurveyRed2006a}; \item[d]\protect\citetalias{vandermarelThirdEpochMagellanicCloud2014}; \item[e]\protect\citealt{bland-hawthornGalaxyContextStructural2016}. \end{tablenotes} \end{threeparttable} \end{adjustbox} \end{table} \begin{table*} \caption{Model parameters for the present-day systemic properties of the LMC and SMC. Parameters are sampled from a Gaussian distribution centred on the peak value, with a $1\sigma$ width equal to the literature uncertainty on that parameter. The bottom half of the table presents the present-day distribution of the 3D position and velocity of the SMC relative to the LMC, which results from sampling the systemic properties of the Clouds reported in the upper section of the table. 
We report the median value, with the uncertainty values corresponding to the $1\sigma$ width of the distribution.} \label{tab:modinitprops} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{llllp{0.6\linewidth}} \hline Variable & Value & Unit & Reference & Comment \\ \hline LMC $\alpha_0$ & $79.88$ & degrees & \protect\citetalias{vandermarelThirdEpochMagellanicCloud2014} & RA of the LMC COM. Taken from their `PMs+Old $V_{\text{LOS}}$ Sample' result. Held fixed. \\ LMC $\delta_0$ & $-69.59$ & degrees & \protect\citetalias{vandermarelThirdEpochMagellanicCloud2014} & DEC of the LMC COM. Taken from their `PMs+Old $V_{\text{LOS}}$ Sample' result. Held fixed. \\ LMC $V_{\text{LOS},0}$ & $261.1\pm2.2$ & km~s$^{-1}$ & \protect\citetalias{vandermarelThirdEpochMagellanicCloud2014} & LOS velocity of the LMC COM. Taken from their `PMs+Old $V_{\text{LOS}}$ Sample' result. \\ LMC $\mu_{\alpha,0}$ & $-1.895\pm0.024$ & mas~yr$^{-1}$ & \protect\citetalias{vandermarelThirdEpochMagellanicCloud2014} & Proper motion in the $\alpha\cos(\delta)$ direction of the LMC COM. Taken from their `PMs+Old $V_{\text{LOS}}$ Sample' result. \\ LMC $\mu_{\delta,0}$ & $0.287\pm0.054$ & mas~yr$^{-1}$ & \protect\citetalias{vandermarelThirdEpochMagellanicCloud2014} & Proper motion in the $\delta$ direction of the LMC COM. Taken from their `PMs+Old $V_{\text{LOS}}$ Sample' result. \\ LMC $D_0$ & $50.1\pm2.5$ & kpc & \protect\citet{freedmanFinalResultsHubble2001} & Distance to the LMC COM. Used in the \protect\citetalias{vandermarelThirdEpochMagellanicCloud2014} analysis. Whilst more recent (and precise) distance estimates are available \protect\citep[e.g.][]{pietrzynskiDistanceLargeMagellanic2019}, we permit $D_0$ to vary over this range in order to investigate a larger range of allowed LMC orbits. \\ SMC $\alpha_0$ & $13.38$ & degrees & \protect\citet{subramanianThreeDimensionalStructureSmall2012a} & RA of the SMC COM. Held fixed.
\\ SMC $\delta_0$ & $-73.0$ & degrees & \protect\citet{subramanianThreeDimensionalStructureSmall2012a} & DEC of the SMC COM. Held fixed.\\ SMC $V_{\text{LOS},0}$ & $145.6\pm0.6$ & km~s$^{-1}$ & \protect\cite{harrisSpectroscopicSurveyRed2006a} & LOS velocity of the SMC COM. \\ SMC $\mu_{\alpha,0}$ & $0.772\pm0.063$ & mas~yr$^{-1}$ & \protect\citet{kallivayalilThirdEpochMagellanicCloud2013} & Proper motion in the $\alpha\cos(\delta)$ direction of the SMC COM. \\ SMC $\mu_{\delta,0}$ & $-1.117\pm0.061$ & mas~yr$^{-1}$ & \protect\citet{kallivayalilThirdEpochMagellanicCloud2013} & Proper motion in the $\delta$ direction of the SMC COM. \\ SMC $D_0$ & $62.1\pm1.9$ & kpc & \protect\citet{graczykARAUCARIAProjectDistance2013} & Distance to the SMC COM. \\ \hline $r_{\text{SMC}}$ & $23.5\pm1.5$ & kpc & - & Total distance between the LMC and SMC centres of mass. \\ $V_{\text{tot}}$ & $122.6\pm32.0$ & km~s$^{-1}$ & - & Total velocity of the SMC relative to the LMC. \\ \hline \end{tabular} \end{adjustbox} \end{table*} Sampling these parameters results in a range of allowable orbits for the SMC around the LMC, and both Clouds around the Milky Way. In general, the orbit of the Clouds around the Milky Way does not vary significantly between realisations, with the Clouds always just past their first pericentric passage around the Milky Way \citep[cf.][]{kallivayalilThirdEpochMagellanicCloud2013}. However, the orbit of the SMC around the LMC can vary significantly. Fig.~\ref{fig:orbits} shows both the total distance $r$ (top panel)\footnote{$r$, the total distance, is distinct from $R$, which is the in-plane cylindrical radius.} and the height above the disk plane $z$ (bottom panel) of the SMC from the LMC centre of mass as a function of time during each of the 100 realisations of the base-case model setup.
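The per-realisation sampling described above can be sketched as independent Gaussian draws for each free parameter, with the on-sky positions held fixed. A minimal illustration using the tabulated means and $1\sigma$ widths (the dictionary layout and function name are ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)

# (mean, 1-sigma) pairs for the sampled present-day parameters;
# the COM sky positions (alpha_0, delta_0) are held fixed, as in the text
lmc = {
    "v_los": (261.1, 2.2),        # km/s
    "mu_alpha": (-1.895, 0.024),  # mas/yr
    "mu_delta": (0.287, 0.054),   # mas/yr
    "dist": (50.1, 2.5),          # kpc
}
smc = {
    "v_los": (145.6, 0.6),
    "mu_alpha": (0.772, 0.063),
    "mu_delta": (-1.117, 0.061),
    "dist": (62.1, 1.9),
}

def draw_realisations(params, n):
    """Draw n independent Gaussian samples for each (mean, sigma) pair."""
    return {k: rng.normal(mu, sig, size=n) for k, (mu, sig) in params.items()}

lmc_draws = draw_realisations(lmc, 100)
smc_draws = draw_realisations(smc, 100)
```

Each of the 100 draws then fixes the present-day phase-space coordinates from which a realisation's orbit is rewound.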
In general, whilst the orbit of the SMC within the past \textasciitilde250~Myr from today is broadly consistent across all realisations, the orbital history beyond this can diverge quite significantly depending on the specific location in parameter space of each realisation. In the following, we report statistics after running an additional 900 realisations of the base-case model setup (noting these are not full realisations initialised with test particles, but instead simply trace the orbits of the LMC and SMC COM), sampling from the present-day positions and systemic motions of the Clouds as in Table~\ref{tab:modinitprops}, to give 1000 total past orbits. \begin{figure} \includegraphics[width=\columnwidth]{fig9} \caption{Total distance $r$ (top) and out-of-plane distance $z$ (bottom) of the SMC from the LMC as a function of time from the present day, up to the 1~Gyr cutoff of our models. Each grey line represents a single realisation of the base-case model suite, associated with an allowable orbit sampled from the uncertainties in the present-day systemic motions of the Clouds. Points where these lines cross the dashed blue line at $z=0$ in the bottom panel indicate SMC crossings of the LMC disk plane. Minima in the top panel indicate SMC pericentric passages around the LMC. Red and orange lines represent individual realisations which experience, respectively, one and two SMC crossings of the LMC disk plane prior to today, which are discussed further in Section \ref{sec:origin}.} \label{fig:orbits} \end{figure} We find the SMC is currently in the process of crossing the LMC disk plane, with \textasciitilde81\% of orbits having already had a crossing at a distance of $17\pm5$~kpc, and the remaining \textasciitilde19\% set to cross the plane in the very near future.
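Disk-plane crossings of this kind correspond to sign changes in an orbit's $z(t)$ track. A minimal sketch of such a crossing detector, using linear interpolation between time samples (the function name and toy orbit are ours):

```python
import numpy as np

def plane_crossings(t, z):
    """Linearly interpolated times at which an orbit crosses z = 0,
    given sampled times t and out-of-plane distances z."""
    s = np.sign(z)
    idx = np.nonzero(s[:-1] * s[1:] < 0)[0]  # sign changes between samples
    # linear interpolation within each bracketing interval
    return t[idx] - z[idx] * (t[idx + 1] - t[idx]) / (z[idx + 1] - z[idx])

# Toy orbit crossing the plane at t = 250 and t = 750 (arbitrary units)
t = np.linspace(0.0, 1000.0, 2000)
z = np.sin(np.pi * (t - 250.0) / 500.0)
crossings = plane_crossings(t, z)
```

Applied to each realisation's $z(t)$ track, this yields the crossing times and (via the interpolated positions) crossing distances summarised in the text.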
This crossing is likely to affect the dynamics of the LMC disk in the future, but is sufficiently recent (with the median crossing time only $45_{-26}^{+25}$~Myr ago) that it will not have a significant effect on the present-day kinematics of the disk as a whole. We additionally see that in all realisations, the SMC has had a recent pericentric passage around the LMC $147^{+42}_{-31}$~Myr ago \citep[in agreement with][]{zivickProperMotionField2018b}. However, as seen in Fig.~\ref{fig:orbits}, while these are relatively close pericenters, with $r_{\text{peri}} = 8.0^{+2.4}_{-2.0}$~kpc, we also find that they occur significantly below the plane of the LMC disk, with $z_{\text{peri}} = -6.8_{-2.6}^{+2.5}$~kpc. Beyond this, the orbit of the SMC varies significantly. Approximately 51\% of our realisations have a second SMC crossing of the LMC disk $398^{+84}_{-68}$~Myr ago, which occurs across a broad range ($28.8^{+11.4}_{-9.2}$~kpc) of distances\footnote{as these are disk crossings, $z=0$ and thus the in-plane radial distance $R$ is equal to the total distance $r$}. The remaining \textasciitilde49\% of orbits either i) do not quite cross the disk plane but do closely approach it during this time period, or ii) remain significantly behind the LMC's disk plane for the entirety of the 1~Gyr over which our models are run. A handful of models (\textasciitilde9\%) additionally have a third disk crossing $906_{-163}^{+61}$~Myr ago, though we note a much larger fraction would experience this crossing if our models were rewound for a greater length of time than the 1~Gyr for which they are currently run. Due to the increasing uncertainty in the SMC's orbit at earlier times, the particulars of this third crossing are much less robustly constrained than the \textasciitilde400~Myr crossing, with a crossing distance of $53.8_{-46.3}^{+13.1}$~kpc. In fact, \textasciitilde20\% of these third crossings pass within 10~kpc of the LMC center (i.e. 
this occurs in \textasciitilde2\% of the total model set). At around this time, we additionally find a small fraction (\textasciitilde4\%) of our models show a second SMC pericentric passage, again noting this fraction would be substantially increased were our models rewound further than 1~Gyr. These passages have similar pericentric distances to the most recent pericentric passage ($r_{\text{peri}} = 6.2^{+3.8}_{-2.3}$~kpc), but occur at smaller out-of-plane distances ($z_{\text{peri}} = 2.2^{+2.5}_{-1.1}$~kpc) due to the similarly-timed disk crossing in these realisations. Whilst these statistics are for the base-case model setup, we find broadly similar results for the heavy-SMC model setup -- that is, all realisations experience an SMC pericentre \textasciitilde150~Myr ago at a reasonably large out-of-plane distance, a moderate fraction of realisations experience a second SMC crossing of the LMC disk plane \textasciitilde400~Myr ago, and a modest number of realisations have additional disk crossings and pericentric passages \textasciitilde1~Gyr ago (noting again the number of such realisations would increase were our models rewound for a greater length of time). It is important to note that the relative simplicity, and in particular the lack of self-gravity (see \S\ref{sec:caveats} for a detailed discussion), of our models means the interactions described above are only estimates; more realistic models of the Magellanic/Milky Way system will be required to confirm the precise orbit of the SMC relative to the LMC. Nevertheless, our models provide a useful first look at understanding the likely relative importance of different interactions on the northern arm. We defer detailed discussion of the overall effects of these interactions on the LMC disk, as well as which regions of parameter space correspond to different orbits of the SMC, to a forthcoming paper incorporating MagES data across a larger region of the LMC disk (Cullinane et al. in prep).
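The asymmetric uncertainties quoted above (e.g. $147^{+42}_{-31}$~Myr) summarise distributions across realisations. Assuming the conventional 16th/84th-percentile bounds about the median (a common convention we adopt here for illustration; the exact percentiles used are not stated), such a summary can be computed as:

```python
import numpy as np

def summarise(samples):
    """Median with asymmetric 1-sigma bounds from the 15.865th and
    84.135th percentiles of a sample distribution."""
    lo, med, hi = np.percentile(samples, [15.865, 50.0, 84.135])
    return med, med - lo, hi - med

# Sanity check on a symmetric Gaussian: both bounds should recover sigma
rng = np.random.default_rng(0)
draws = rng.normal(147.0, 35.0, size=100_000)
med, minus, plus = summarise(draws)
# med ~ 147, minus ~ plus ~ 35 for this symmetric case
```

For the skewed distributions produced by the orbit ensemble, the two bounds differ, giving the asymmetric errors reported in the text.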
\subsubsection{Model Caveats}\label{sec:caveats} Whilst the ability to explore the large allowable parameter space is a significant advantage of our simple model suites, this approach does have limitations. Particularly important is the lack of self-gravity, which has two significant effects on the system. The first of these is that the gravitational potentials used to model the dark matter halo of each galaxy are unable to deform in response to one another \citep[e.g.][]{garavito-camargoQuantifyingImpactLarge2021}. This can potentially influence both the global orbits of the Clouds, and the response of stars within them. The second is that model particles describing stars in the LMC disk cannot directly affect one another -- i.e. the LMC disk potential is also fixed in shape and orientation -- which can affect the response of the stellar disk to interactions (particularly those which might introduce overdensities to the disk). We discuss in turn these effects on each pair of galaxies in the MW/LMC/SMC system below. We first discuss the effect of self-gravity on the MW/LMC pair, as we can to some extent quantify these effects through comparison of the simpler models to the $N$-body model. In the $N$-body model the gravitational potentials of both galaxies, which in the simpler models are rigid profiles, are allowed to move in response to one another \citep[i.e. the reflex motion of the Milky Way in response to the LMC is captured;][]{gomezItMovesDangers2015}; but this is a global shift in position as opposed to a change in shape. However, models capturing this deformation process \citep[see e.g. Fig.~10 of][]{erkalTotalMassLarge2019} demonstrate the shape of the MW potential is not significantly affected even during the infall of a massive ($1.5\times10^{11}$M$_\odot$) LMC; and at the distance of our outermost field ($R_{\text{LMC}}$\textasciitilde23~kpc), the deformation of the LMC potential is also minimal.
In terms of the disk, as the stellar density is highest near the base of the feature, in the $N$-body model the higher concentration of particles here better maintains the disk kinematics. This contributes to the $N$-body model having a strong negative in-plane radial velocity in field 13, but one much closer to zero in field 11: the stronger LMC gravitational potential at smaller galactocentric radii, in combination with the stronger self-gravity of the disk, helps maintain the disk kinematics near equilibrium levels. In contrast, the lower stellar density in the outskirts of the feature means the self-gravity of the disk contributes negligibly to the overall gravitational potential, with these regions therefore more easily perturbed. However, we find in \S\ref{sec:genmod} below that $V_r$ is less significantly perturbed in the simpler model suites all along the arm compared to the $N$-body model: this is somewhat unexpected, given that the lack of self-gravity in these models should allow for larger perturbations in their kinematics. We therefore conclude the effect of self-gravity in the MW/LMC pair is not responsible for the largely negative in-plane radial velocity along the arm. We next discuss the effect of self-gravity on the LMC/SMC pair. Several studies have investigated the effects of close interactions with a smaller satellite (like the SMC) on a larger host (like the LMC) in fully self-gravitating systems \citep[e.g.][]{berentzenNumericalSimulationsInteracting2003,bekkiFormationOffcenterBar2009,beslaRoleDwarfGalaxy2012a,yozinTidalinducedLopsidednessMagellanictype2014,pardyTidallyInducedOffset2016}. These studies typically assess the effects of a near-direct collision between the two galaxies: that is, a crossing of the host's disk plane which occurs at relatively small host galactocentric radii.
A common finding of each of these studies is that such interactions can introduce asymmetries in the disk of the host, and offsets of up to \textasciitilde2.5~kpc between the dynamical centres of the disk and a central bar. This results in an off-centre and potentially tilted bar, as is observed in the LMC today. Of greater interest to this paper is that these crossings can also produce density waves and features similar to spiral arms out to large radii \citep[$\gtrsim10$~kpc:][]{berentzenNumericalSimulationsInteracting2003,beslaLowSurfaceBrightness2016} relatively shortly after the crossing time \citep[100-200~Myr:][]{berentzenNumericalSimulationsInteracting2003,pardyTidallyInducedOffset2016}, with these features persisting for \textasciitilde Gyr after the crossing \citep{berentzenNumericalSimulationsInteracting2003,yozinTidalinducedLopsidednessMagellanictype2014}. Our simple models would not fully capture these effects. However, we do note this specific type of interaction -- that is, a disk plane crossing at small galactocentric radii -- is not typical of the interactions observed in our models, with this only occurring in \textasciitilde9\% of our models and \textasciitilde900~Myr ago, when uncertainties in the orbit of the SMC are very large. The more recent disk crossing observed in our models, which occurs \textasciitilde400~Myr ago and in \textasciitilde51\% of our models, occurs at a much larger radius ($28.8^{+11.4}_{-9.2}$~kpc) than is typically modelled in these studies. In fact, \citet{bekkiFormationOffcenterBar2009} finds that interactions at larger galactocentric radii ($R\sim5$--$10$~kpc) are unable to produce an off-centre stellar bar in the LMC; and \citet{poggioMeasuringVerticalResponse2021}, while studying the impact of the Sagittarius dwarf on the Milky Way, find that disk crossings at large radii (i.e. which do not align with a simultaneous pericentric passage) affect the MW disk significantly less than crossings at smaller radii.
We therefore expect the \textasciitilde400~Myr disk plane crossing will have a comparatively small effect on the LMC disk as a whole. In addition, as discussed above, our models suggest the SMC's recent pericentric passage \textasciitilde150~Myr ago occurs at a relatively large out-of-plane distance ($-6.8_{-2.6}^{+2.5}$~kpc), with the SMC only approximately now crossing the LMC disk plane. Thus, whilst the radius of the pericentric passage is similar to the interactions modelled in the above studies, we expect its effect on the LMC disk to be commensurately reduced, though \cite{laporteResponseMilkyWay2018a}, in studying the MW/LMC system, find out-of-plane pericentres may introduce mild ($z\sim1$~kpc) warping of the host galaxy disk which would not be captured in our simpler models. Likely a more significant effect of the SMC's recent pericentre is an indirect one: studies of the MW/LMC system \citep[e.g.][]{garavito-camargoHuntingDarkMatter2019} suggest pericentres produce both local and global dark matter wakes in the halo of the host galaxy. These wakes can induce torques on the satellite galaxy \citep{tamfalRevisitingDynamicalFriction2021}, thus affecting its orbit. Along similar lines, \cite{kallivayalilThirdEpochMagellanicCloud2013} note that such dynamical friction effects would result in a more eccentric orbit of the SMC around the LMC, though the magnitude of this effect is not explicitly calculated. As our simple models do not capture these effects, this is a source of increased uncertainty in the orbit of the SMC, and thus its interactions with the LMC, beyond the recent pericentric passage. Finally, we briefly discuss the effect of self-gravity on the MW/SMC pair. In contrast to the MW/LMC pair, we do not capture the effect of dynamical friction from the Milky Way on the SMC. This may affect the recent orbit of the SMC, which in turn would affect specifics of interactions between the LMC and SMC.
However, we expect the direct effect of the LMC -- being much closer and having likely experienced repeated interactions with the SMC prior to the current infall to the Milky Way potential -- is more significant in this case. We additionally note that the gravitational potential used to represent the SMC in our models is relatively simple, particularly given recent findings that indicate it is currently being tidally disrupted by the LMC \citep[e.g.][]{zivickProperMotionField2018b,deleoRevealingTidalScars2020a}. More detailed modelling which captures this disruption, as well as mass loss from the SMC over time (due to likely repeated interactions with the LMC) would be necessary to fully describe these effects and assess how such a varying potential affects the SMC's orbit. The above simplifications mean our models do not capture all the subtleties of interactions between the Clouds, and thus cannot definitively establish the origin of substructures such as the northern arm. However, we stress our aim is qualitative, not quantitative, agreement with observations; and our simpler models do permit an exploration of the allowable parameter space which can indicate the plausibility of various interactions in forming substructures. This ability to isolate which interactions are more or less likely to contribute to the origin of substructures is valuable, as these can be investigated using more detailed models in the future. \subsection{Simple model kinematics along the northern arm}\label{sec:modkin_int} We first discuss the base-case suite of 100 models (represented by purple points in Figs.~\ref{fig:nbody}-\ref{fig:moddist}). Kinematics for this suite are presented in Fig.~\ref{fig:nbody}. The distribution of results across the ensemble are represented by box-and-whisker plots, displaying the 5\textsuperscript{th}, 25\textsuperscript{th}, 50\textsuperscript{th}, 75\textsuperscript{th}, and 95\textsuperscript{th} percentiles within each field. 
For a given field, the spread in ensemble kinematics -- \textasciitilde10~km~s$^{-1}$ in $V_r$ and $V_z$, 10-20~km~s$^{-1}$ in $V_\theta$, and 5-10~km~s$^{-1}$ in each velocity dispersion component -- is due entirely to differences in the orbits of the Clouds parameterised by sampling from within the uncertainties in their central positions and motions. These variations can be of a similar order of magnitude to the observational uncertainties within the fields, and demonstrate the importance of sampling these parameters rather than running a single model realisation. We find a number of key differences between the ensemble kinematics and the $N$-body model. The azimuthal and in-plane radial velocity dispersions (panels \textit{d} and \textit{e} of Fig.~\ref{fig:nbody}) are up to 10-15~km~s$^{-1}$ higher than the $N$-body model, and the vertical velocity dispersion (panel \textit{f}) up to 5~km~s$^{-1}$ higher. This is by design, as the model suites are initialised with higher velocity dispersions to more closely match recent MagES measurements in the outer LMC disk \citepalias[see][]{C20}. However, the velocity dispersion in each component remains underestimated in field 11 (closest to the LMC disk) relative to observations. The ensemble azimuthal velocity (panel \textit{a}) is lower than the $N$-body model, and remains flat at approximately the value measured in the outer LMC disk in \citetalias{C20}. This is more consistent with the observations, particularly in the inner- and outermost fields along the feature where the $N$-body model is most discrepant. The vertical velocity (panel \textit{c}) is also more consistent with observations in the three outermost feature fields. However, in the three fields closest to the LMC disk, the vertical velocity is not significantly different from the $N$-body model and remains lower than the observations. The greatest difference in model kinematics is in the in-plane radial velocity (panel \textit{b}).
In the base-case ensemble, this remains almost flat along the length of the feature, with only a very mild (\textasciitilde5~km~s$^{-1}$) drop and subsequent increase in the mean radial velocity along the feature. This is significantly different from the very negative values seen in the observations and the inner fields of the $N$-body model. However, the absence of a steep increase in radial velocity along the feature means the model suite is less discrepant with the observations in the outermost feature fields than the $N$-body model. In order to understand the drivers of these kinematic differences, we now compare the different simple model ensembles to assess the impact of varying galaxy masses on the kinematics of the northern arm. Fig.~\ref{fig:modsuite} shows the three disk velocity components ($V_\theta$, $V_r$, and $V_z$), as well as dispersions in each of these components, for both the model ensembles and the observations within each field. Each point represents the median of the 12 realisations, with error bars showing the full range across each suite. We include the base-case ensemble in this comparison, but for consistency sample only the same 12 realisations as included for the other suites. Pale dashed extensions to the base-case ensemble results show the full range from all 100 realisations of the ensemble, for comparison. Fig.~\ref{fig:moddist} shows, in the same format, the out-of-plane distance ($z$) for each of the model ensembles. \begin{figure*} \includegraphics[width=\textwidth]{fig10} \caption{Model velocities and dispersions for MagES fields along the arm-like feature for the simpler model suites, calculated assuming a \protect\citetalias{choiSMASHingLMCTidally2018} disk geometry. Top panels show, in order, the azimuthal, radial, and vertical velocity component, with bottom panels showing the corresponding velocity dispersion in each component. Orange points show the observations and associated $1\sigma$ uncertainties.
Coloured model points show ensemble medians, and error bars show ensemble ranges, with each suite represented by a different colour and symbol. Points without error bars have ranges too small to be visible. For clarity, fields are artificially spaced equally along the x-axis, and each suite of model points is slightly offset. The top axis lists the LMC galactocentric radius of the fields. Dashed extensions of the error bars for the base-case ensemble (purple) show the full range of data from the 100 realisations relative to the range associated with the subsample of 12 realisations used for the other model suites.} \label{fig:modsuite} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{fig11} \caption{Model out-of-plane distance ($z$) for fields along the northern arm-like feature, calculated relative to a \protect\citetalias{choiSMASHingLMCTidally2018} disk geometry. Magenta diamonds show results from the $N$-body model. Coloured model points show ensemble medians, and error bars show ensemble ranges, with each suite represented by a different colour and symbol. Points without error bars have ranges too small to be visible. For clarity, fields are spaced equally along the x-axis, and each suite of model points is slightly offset. The top axis lists the LMC galactocentric radius of the fields. The dashed grey line indicates the $z$=0 assumption utilised for the observed data, with the shaded grey region indicating the distance range associated with the uncertainty in the median $G_0$ magnitude along the northern arm as in \S\ref{sec:phot}.} \label{fig:moddist} \end{figure} \subsubsection{General comments}\label{sec:genmod} In general, most of the model suites follow the same overall kinematic trends, and do not provide substantially better matches to the observations than the base-case model.
The largest differences between model suites typically occur in the outermost field, with the median response very similar for most of the length of the feature. This is as expected: at smaller radii, the LMC potential dominates, and any external perturbing potential, regardless of origin, does not significantly affect the kinematics. In contrast, at large galactocentric radii the perturbing potential can dominate, leading to different kinematics depending on the origin of the perturbation. We find the out-of-plane distances (Fig.~\ref{fig:moddist}) are typically small (<3~kpc) for each of the model suites, consistent with our findings in \S\ref{sec:phot} that the northern arm roughly follows the plane of the LMC disk. We have tested utilising the median out-of-plane distance for each field in each model suite as the assumed distance for the MagES fields when deriving the observed kinematics in the plane of the disk. However, we find the resulting differences in kinematics are typically less than \textasciitilde5~km~s$^{-1}$: well within the uncertainties of the measurements calculated assuming the fields exactly follow the LMC disk plane. We also find the median distance dispersion within each field is negligibly different (\textasciitilde0.15~kpc) between model suites, with a mild (\textasciitilde0.3~kpc) decrease in the median distance dispersion along the length of the feature. This is, within uncertainty, consistent with our measurements of constant $G_0$ dispersion along the first half of the feature. This further suggests that the increase in the $G_0$ dispersion measured in the outer regions of the feature is due to contamination as speculated in \S\ref{sec:phot}, and not a genuine thickening of the feature -- were the thickening genuine, we would expect the distance dispersion of the models to increase along the length of the northern arm. Notably, regardless of model suite, each component of velocity dispersion is underestimated in the innermost feature field.
In the case of the vertical velocity dispersion, this is due to the initial conditions of the model, which, at \textasciitilde10~km~s$^{-1}$ even within the outer LMC disk, is lower than the observations. This reflects the modest scale height of the disk used in our model, and the relatively small contribution of the disk to the gravitational field in its outer parts. It is possible that previous encounters with the SMC may be needed to inflate this dispersion. Alternatively, the outer disk of the LMC may be thicker than assumed in our model. We also note that whilst selection of the 12 realisations attempts to sample the full possible kinematic distribution, comparison of the set of 12 realisations to the full suite of 100 base-case models as in Fig.~\ref{fig:modsuite} reveals the full range is somewhat underestimated, with the upper limits of $V_\theta$, $V_r$, $V_z$, and $\sigma_z$ underestimated by up to 30\%. In general, this does not improve the consistency between the model kinematics and observations, with the possible exception of $V_z$ in the inner fields. Median values are not significantly different in the restricted set, and lower limits are typically well-sampled. The largest differences in median values are only on the order of \textasciitilde10\%, with overestimated $\sigma_z$ medians and underestimated $V_\theta$ medians along the length of the arm. \subsubsection{Effect of the MW mass}\label{sec:MWmass} The effect of the heavy MW (indicated by dark green points in Figs.~\ref{fig:modsuite} and~\ref{fig:moddist}) is most apparent in the azimuthal velocity (panel \textit{a} of Fig.~\ref{fig:modsuite}), with a mild decrease in the median $V_\theta$ (by \textasciitilde15~km~s$^{-1}$) along the length of the feature. However, we note this remains consistent with the observations within uncertainty. Further, the $V_\theta$ medians are typically underestimated as compared to a full suite of 100 realisations.
Consequently, this does not preclude a heavy MW from matching the observations. Comparing the in-plane radial and vertical velocities (panels \textit{b} and \textit{c} respectively), in the innermost fields the heavy MW suite is not significantly different from the base-case suite (indicated by purple points). A larger difference occurs in the outermost fields along the feature, with the heavy MW generating slightly higher vertical velocities, and slightly more negative in-plane radial velocities. It is not surprising the strongest effects are felt in the outermost fields: it is here that the MW gravitational potential is strongest relative to the LMC potential, and can induce the strongest perturbations. The heavy MW has a negligible effect on the azimuthal velocity dispersion (panel \textit{d}), and minimally (\textasciitilde2~km~s$^{-1}$) increases the in-plane radial velocity dispersion (panel \textit{e}) -- not sufficient to meet the large measured velocity dispersions in the innermost observed field. It does, however, produce an increased (\textasciitilde5~km~s$^{-1}$) vertical velocity dispersion (panel \textit{f}) in all but the innermost field, which provides a closer match to observations than any of the other model suites. The out-of-plane distance (Fig.~\ref{fig:moddist}) is also negligibly different from the no-SMC and $N$-body models, remaining within 1.5~kpc of the assumed \citetalias{choiSMASHingLMCTidally2018} inclined disk geometry. Out-of-plane distances of this magnitude can be accommodated within our photometric uncertainties. As discussed in \S\ref{sec:phot}, the typical uncertainty in our median $G_0$ magnitudes along the feature is \textasciitilde0.05~mag. At the distance of the feature, this corresponds to an \textasciitilde1.5~kpc uncertainty in the derived distance. We can understand the increased effects of the heavy MW in the $z$-direction when considering the LMC's orbit and inclination relative to the MW during its infall.
The orientation of the LMC is such that the northern half of the disk is inclined closer to the MW plane. As such, there is an increasingly strong gravitational force from the MW along the length of the northern arm. Further, the orbit of the LMC is such that it is approaching the MW from underneath the MW disk plane. The force of the MW thus pulls the LMC disk in the positive $z$-direction, increasing $V_z$ particularly in the outermost fields where this pull is strongest relative to the LMC potential. This may also explain the increased vertical velocity dispersion $\sigma_z$ as compared to the lighter MW models, and the increasingly positive out-of-plane distance along the feature. As the primary effect of the MW on the LMC is in the positive $z$-direction, we hypothesise the MW is also responsible for the asymmetric LOS velocity distributions observed in the northern LMC disk in \citetalias{C20}. The distributions of $V_{\text{LOS}}$ for likely-Magellanic stars in MagES fields 18 and 12 were found to have tails to low LOS velocities; the low inclination of the LMC disk implies stars in these tails have positive vertical velocities of up to \textasciitilde40~km~s$^{-1}$. This is similar to the positive $V_z$ velocities found along the northern arm. We therefore suggest stars in the northern LMC disk showing this perturbation signature are, like the northern arm, disturbed during the LMC's infall to the MW. \subsubsection{Effect of the SMC}\label{sec:smcmass} We now consider the effect of the SMC, comparing the no-SMC (indicated by dark blue points in Figs.~\ref{fig:modsuite} and~\ref{fig:moddist}) and heavy SMC (indicated by turquoise points) model suites to the regular SMC base-case ensemble (indicated by purple points).
The median azimuthal and vertical velocities (panels \textit{a} and \textit{c} of Fig.~\ref{fig:modsuite} respectively) are negligibly affected by the presence of the SMC, although some individual realisations have quite large differences from the median. In the case of the heavy SMC, certain individual realisations have \textasciitilde20~km~s$^{-1}$ higher azimuthal velocities in the outermost feature fields, and \textasciitilde20~km~s$^{-1}$ lower vertical velocities in the innermost feature fields. However, those realisations are very inconsistent with observations -- the associated negative vertical velocities being \textasciitilde$5\sigma$ inconsistent with the positive vertical velocities measured. Since these large discrepancies are observed only in model suites that include the SMC, we conclude the strong perturbations associated with these individual realisations -- although drawn from within the allowed uncertainties on the SMC's central position and systemic velocity -- are not realistic. The SMC also negligibly affects the median velocity dispersion in any of the three components (panels \textit{d-f}), with the only difference being an increased allowable range of azimuthal and radial velocity dispersions in the outermost fields compared to the no-SMC models. Instead, the SMC has the strongest effect on the in-plane radial velocities (panel \textit{b}), with model suites including the SMC having higher median radial velocities than the no-SMC suite, and the heavy SMC suite generating the largest increase of up to \textasciitilde20~km~s$^{-1}$. Notably, this perturbation is in the wrong direction: these median kinematics are further from the negative observed radial velocities than the no-SMC suite (although we note some individual realisations of the base-case and heavy SMC suites do overlap the range of the no-SMC suite).
This suggests recent interactions with the SMC, as captured by these models, are not the source of the perturbation generating the northern arm-like feature. In the innermost two fields along the northern arm, the base-case and heavy SMC suites have slightly smaller (\textasciitilde0.4~kpc) median out-of-plane distances than the no-SMC suite (Fig.~\ref{fig:moddist}). However, in fields further along the feature, these distances significantly increase, with the median out-of-plane distance \textasciitilde3~kpc in front of the LMC disk in the outermost field, and some individual realisations >5~kpc from the assumed disk plane. Notably, we find the realisations which produce the largest out-of-plane distances are the same realisations that produce very negative vertical velocities in the innermost fields, strongly inconsistent with observations. Even so, the median out-of-plane distances are moderately larger than the \textasciitilde1.5~kpc uncertainties in distance accommodated by our photometric uncertainties in \S\ref{sec:phot}. Whilst some individual realisations of these models do have out-of-plane distances within this range, a majority of these model realisations are ruled out as these geometries would result in brighter $G_0$ magnitudes along the arm, inconsistent with those measured. This provides further evidence that recent SMC interactions are not responsible for the formation of the northern arm. \subsubsection{Effect of the LMC mass}\label{sec:lmcmass} Whilst a number of recent studies have indicated the total LMC mass is large \citep[$\geq$10$^{11}$M$_\odot$: e.g. ][]{erkalTotalMassLarge2019,penarrubiaTimingConstraintTotal2016}, we additionally explore the formation of the northern arm assuming a factor-of-ten lighter LMC (indicated by light green points in Figs.~\ref{fig:modsuite} and~\ref{fig:moddist}) as used in traditional, tidally truncated models of the LMC \citep[see e.g. section 2 of][for a review]{garavito-camargoHuntingDarkMatter2019}.
The most significant kinematic difference this induces is in the azimuthal velocity (panel \textit{a} of Fig.~\ref{fig:modsuite}), which displays a strong drop from \textasciitilde60~km~s$^{-1}$ in the innermost field to only \textasciitilde20~km~s$^{-1}$ in the outermost field. This is a result of the model setup, rather than a physical perturbative effect from the MW. As the model suites are initialised to match the rotation curve of the LMC ($V_{\text{circ}}$\textasciitilde90~km~s$^{-1}$) at 10~kpc, this necessitates an enclosed mass of nearly $1.5\times10^{10}$M$_\odot$ at this radius. In order to facilitate this, and maintain the total mass of $1.5\times10^{10}$M$_\odot$ for the model, there is negligible dark (or baryonic) matter beyond this radius. As a result, the azimuthal velocity drops off with approximately 1/$R$ dependence as expected given the lack of matter beyond this radius. Given this is strongly inconsistent with the approximately flat azimuthal velocities measured, we can conclude the LMC is not this light, and must be at least $1.5\times10^{11}$M$_\odot$ in order to maintain the flat rotation curve observed across these large galactocentric radii. All other kinematic component medians do not differ significantly under the light-LMC case as compared to the other model suites. \subsection{Origin of the northern arm}\label{sec:origin} As discussed in \S\ref{sec:MWmass}, the increasingly positive vertical velocity observed along the northern arm is consistent with a MW origin, with qualitatively similar trends observed along the arm in all models, including those omitting the SMC. The heavy-MW model suite also produces the closest $\sigma_z$ to that observed, indicating the strength of the Milky Way's gravitational force in this direction. More difficult to understand is the strongly negative radial velocity observed along the arm, which none of our models replicate. 
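As a brief check of the rotation-curve argument in \S\ref{sec:lmcmass} above: the mass enclosed within radius $R$ implied by a circular velocity $V_{\text{circ}}$ is $M(<R)=V_{\text{circ}}^2 R/G$. A back-of-envelope sketch (Python; the spherical approximation is a simplification, and the numbers are indicative only):

```python
# Gravitational constant in units of kpc * (km/s)^2 / M_sun,
# convenient for rotation-curve arithmetic.
G = 4.301e-6

def enclosed_mass(v_circ_kms, radius_kpc):
    """Mass (M_sun) enclosed within radius_kpc implied by circular
    velocity v_circ_kms, assuming a spherical mass distribution."""
    return v_circ_kms**2 * radius_kpc / G

# Matching the LMC rotation curve (~90 km/s at 10 kpc) requires
# roughly 2e10 M_sun inside 10 kpc -- comparable to the entire
# 1.5e10 M_sun budget of the light-LMC model, which is why that
# model has essentially no mass remaining at larger radii.
m_enc = enclosed_mass(90.0, 10.0)
```

This is the origin of the steep decline in the light-LMC azimuthal velocities at larger radii.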
The model suite producing the closest match to these kinematics is the heavy MW suite (indicated by dark green points in Figs.~\ref{fig:modsuite} and~\ref{fig:moddist}); individual model realisations in this suite provide the most negative radial velocities along the length of the feature, albeit significantly weaker than those observed (approximately $-10$~km~s$^{-1}$, compared to approximately $-40$~km~s$^{-1}$). Also notable is the $N$-body model (magenta points in Figs.~\ref{fig:nbody} and~\ref{fig:moddist}), which does have a significant negative radial velocity in field 13 (approximately $-35$~km~s$^{-1}$), but which increases rapidly resulting in a strongly positive radial velocity in the outermost feature field. As discussed in \S\ref{sec:caveats}, we do not expect the lack of self-gravity between the LMC and MW in the simpler model suites to significantly affect the kinematics of the northern arm. However, we do note that the geometry of the feature differs significantly in the simpler models as compared to the $N$-body model. Fig.~\ref{fig:heavymw} shows the density of model particles for the same individual realisation of the base-case and heavy MW model suites, in addition to the $N$-body model. Notably, the debris forming the arm-like feature in the $N$-body model (panel \textit{a}) has a significantly different geometry to the simpler model realisations (panels \textit{b} and \textit{c}), with the structure increasing steeply in Y-position along the length of the feature. This means that when comparing measurements at the same X/Y positions as the observed MagES fields, the regions compared in the $N$-body model are not along the stream of debris actually forming the arm-like structure, and as a result there are fewer model particles within each field. 
This is in contrast to the simpler model realisations, where the northern overdensities are not significantly different from the observed feature track, albeit without the observed gap between the northern arm and the disk. As a result, it is perhaps not surprising that the kinematics of the $N$-body model are somewhat different to those in the simpler model ensembles, and the observations: different areas of the feature, under the influence of different gravitational forces, are being compared. Nonetheless, as discussed above, when a feature track is fitted to the $N$-body model and equivalent $\phi_1/\phi_2$ locations along the northern arm are compared, the resultant kinematics are not substantially different from those derived when equivalent X/Y positions are compared. \begin{figure*} \includegraphics[width=0.33\textwidth]{fig12a} \includegraphics[width=0.30\textwidth]{fig12b} \includegraphics[width=0.35\textwidth]{fig12c} \caption{Density plots of model particles for different model suites. Panels show the $N$-body model (\textit{a}), an individual realisation of the base-case suite (\textit{b}), and the same realisation in the heavy MW suite (\textit{c}). Dashed magenta lines in each panel show the observed feature track of the northern arm. The two realisations of the simpler model suites produce relatively flat northern overdensities in the Y direction, similar to that observed, while the debris track in the $N$-body model increases strongly in Y along the length of the feature, to larger values than those observed.} \label{fig:heavymw} \end{figure*} Given individual realisations of the heavy MW ensemble produce the closest kinematics to those observed, it might be inferred that an even heavier MW is necessary in order to produce the strong observed perturbations in $V_r$. However, we note there is an upper limit on the MW mass beyond which the LMC and SMC become bound, and have experienced multiple previous pericentric passages around the MW. 
That scenario is inconsistent with findings that the Clouds are only now on their first infall into the MW potential \citep{kallivayalilThirdEpochMagellanicCloud2013}. Given the 50\% increase in MW halo mass in our models has only a relatively small effect on $V_r$ (reducing it by \textasciitilde5~km~s$^{-1}$ compared to the no-SMC models with a regular-mass MW), the MW mass required to reproduce the observed kinematics would likely exceed that binding threshold. This implies a heavy MW likely contributes to, but cannot alone reproduce, the feature kinematics. Further, Fig.~\ref{fig:radius} shows the distribution of model particles for an individual heavy MW realisation, colour-coded by the ratio of each particle's current LMC galactocentric radius $R_{\text{final}}$ to its origin radius $R_{\text{initial}}$. Particles in the region of the northern arm generally move outwards over the course of the simulation, with $R_{\text{final}}/R_{\text{initial}}$\textasciitilde1.2 along most of the arm. This implies particles located at large distances along the arm originate at marginally larger galactocentric radii than particles at the base of the arm: consistent with the mild negative metallicity gradient observed along the arm. However, the fact that $R_{\text{final}}/R_{\text{initial}}$>1 along the length of the arm indicates the MW acts to push stars that form the northern arm outwards from the LMC disk. Notably, immediately below the observed feature track and crossing its base, model particles move strongly outwards: $R_{\text{final}}/R_{\text{initial}}$ reaches up to \textasciitilde3. This may contribute to forming an overdensity along the feature track, with particles immediately below the feature track pushed strongly outwards to form the feature and generate a gap between the feature and the observed LMC disk.
This scenario, however, does not explain the strongly negative radial velocities observed along the arm -- indicating models including only the Milky Way do not capture the full perturbation to the LMC. \begin{figure} \includegraphics[width=\columnwidth]{fig13} \caption{Binned map of particles within a single realisation of the heavy MW model, colour-coded by the mean ratio within each bin of the current LMC galactocentric radius ($R_{\text{final}}$) to the initial particle galactocentric radius 1~Gyr ago during model initialisation ($R_{\text{initial}}$). The dashed magenta line shows the observed feature track of the northern arm. The central 8$^\circ$ of the LMC disk is masked to emphasise the variation in $R_{\text{final}}/R_{\text{initial}}$ in the outskirts of the LMC.} \label{fig:radius} \end{figure} We next consider the potential effects of recent interactions with the SMC on the northern arm, discussing first the recent pericentric passage of the SMC around the LMC \textasciitilde150~Myr ago. As discussed in \S\ref{sec:simple}, while this is a close pericentre (with the SMC passing within $8.0^{+2.4}_{-2.0}$~kpc of the LMC centre), it is not coincident with a disk plane crossing: the SMC remains \textasciitilde7~kpc below the LMC disk plane during the encounter. As such, we find the SMC does not substantially affect the LMC disk during this interaction and, as discussed in \S\ref{sec:caveats}, the inclusion of self-gravity in the models is unlikely to significantly change this conclusion. In addition, we point out that for every model, the projected location of the pericentric passage is towards the southwest of the LMC: almost directly opposite to the northern arm. At this radius, the circular velocity of the LMC (which as seen in \S\ref{sec:kinematics}, remains constant even along the arm) implies a timescale of \textasciitilde300~Myr for the stars most strongly perturbed by this interaction to reach the north-eastern disk. 
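The \textasciitilde300~Myr figure is essentially a half-circumference travel time at the pericentre radius. A rough check (Python; the half-orbit approximation, the \textasciitilde8~kpc radius, and the flat 90~km~s$^{-1}$ rotation speed are taken from the discussion above):

```python
import math

# 1 kpc / (1 km/s), expressed in Gyr (a standard unit conversion).
KPC_PER_KMS_IN_GYR = 0.9778

def travel_time_gyr(radius_kpc, v_circ_kms, angle_rad=math.pi):
    """Time for material on a circular orbit of radius radius_kpc to
    sweep through angle_rad at circular speed v_circ_kms."""
    return (angle_rad * radius_kpc / v_circ_kms) * KPC_PER_KMS_IN_GYR

# Stars near the ~8 kpc pericentre location, rotating at ~90 km/s,
# need roughly half an orbit to move from the southwest to the
# north-eastern disk: ~0.27 Gyr, i.e. of order ~300 Myr.
t_half = travel_time_gyr(8.0, 90.0)
```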
This is approximately double the \textasciitilde150~Myr that has passed since the pericentric passage, further indicating this interaction is unlikely to be the origin of the northern arm. Interactions with a greater possibility of contributing to the formation of the northern arm are SMC crossings of the LMC disk plane, as these directly affect the nearby stars as the SMC passes through the disk. In the \textasciitilde50\% of our base-case and heavy SMC model realisations which experience disk crossings in the past 1~Gyr (beyond that which is currently occurring), we find the LMC disk is most strongly affected by the disk crossing \textasciitilde400~Myr ago. This crossing can occur across a broad range of distances ($28.8^{+11.4}_{-9.2}$~kpc: see \S\ref{sec:simple}) from the LMC centre, but those which occur at the smallest radii have the largest effect on the LMC disk -- and a much more significant effect than the recent SMC pericentric passage in the regions of interest. A handful of models (\textasciitilde9\%) have yet another SMC disk crossing \textasciitilde900~Myr ago, which can occur across a very wide distance range (1$\sigma$ limits of 7.5 and 66.9~kpc). While crossings which occur at distances toward the upper end of this range are unlikely to significantly affect the LMC, the \textasciitilde20\% which pass within 10~kpc of the LMC center could potentially affect the northern arm, and we consider these in addition to the \textasciitilde400~Myr crossing in the discussion that follows. Fig.~\ref{fig:smctrunc} presents results from two realisations of the base case model, as highlighted in Fig.~\ref{fig:orbits}, demonstrating the effect of these disk crossings on the LMC disk. 
The left panels present a realisation which has only experienced the most recent \textasciitilde400~Myr disk crossing -- which in this model occurs at a distance of \textasciitilde18~kpc from the LMC center -- and the centre panels present one of the few realisations which has experienced two SMC disk crossings (excluding that which is currently occurring) within the past 1~Gyr. These occur \textasciitilde360 and \textasciitilde980~Myr ago, at LMC distances of \textasciitilde14.5~kpc and \textasciitilde8~kpc respectively. The upper panels show the original locations of these disk plane crossings, and the present-day location of the crossing, computed by rotating the location of the original disk-crossing within the LMC's disk plane assuming a circular velocity of 90~km~s$^{-1}$. Note these are different to the present-day location of the SMC itself. Lower panels show the binned current LMC particle distribution, colour-coded by the distance of each particle from the SMC at the time of each disk crossing. \begin{figure*} \setlength\tabcolsep{0.5pt} \begin{tabular}{cccc} \includegraphics[height=0.3\textwidth]{fig14a.pdf} & \includegraphics[height=0.3\textwidth]{fig14b.pdf} & \includegraphics[height=0.3\textwidth]{fig14c.pdf} & \includegraphics[height=0.3\textwidth]{f14_cb1.pdf}\\ \includegraphics[height=0.332\textwidth]{fig14d.pdf} & \includegraphics[height=0.332\textwidth]{fig14e.pdf} & \includegraphics[height=0.332\textwidth]{fig14f.pdf} & \raisebox{15pt}{\includegraphics[height=0.3\textwidth]{f14_cb2.pdf}} \end{tabular} \caption{Upper panels: Density of particles for base-case model realisations having experienced one (left) or two (center) SMC crossings of the LMC disk plane, compared to the density of observed LMC stars selected using very similar criteria to \protect\cite{gaiacollaborationGaiaEarlyData2021a} (right). 
Locations of crossings are marked by coloured x-signs, with the present-day location of the crossing (different to the present-day location of the SMC itself) marked with circles of the corresponding colour. Lower panels: Current model particle distribution, colour-coded by the particle distance from the SMC at the time of each SMC crossing of the LMC disk plane. In order, panels show the realisation with a single crossing \textasciitilde400~Myr ago (left) corresponding to the density map in panel \textit{a}, the \textasciitilde400~Myr crossing (centre) in the two-crossing model corresponding to the density map in panel \textit{b}, and the \textasciitilde980~Myr crossing in the two-crossing model (right) corresponding to the density map in panel \textit{b}. Stars closely perturbed during the most recent SMC disk crossing \textasciitilde400~Myr ago, in both realisations, now comprise the western LMC disk (which appears truncated in Gaia maps of the periphery, as in panel \textit{c}).} \label{fig:smctrunc} \end{figure*} We first discuss the SMC crossing of the LMC disk plane \textasciitilde400~Myr ago. Considering the realisation which has experienced only this crossing, shown in the leftmost panels of Fig.~\ref{fig:smctrunc}, we find the geometry of the northern arm in this realisation is very similar to that in both the base-case and heavy MW model suite realisations in Fig.~\ref{fig:heavymw}. This indicates the crossing does not significantly impact the geometry of the northern arm. Further, as seen in panels \textit{d} and \textit{e} of Fig.~\ref{fig:smctrunc}, particles which today form the northern arm are not closely perturbed by the SMC during its crossing of the LMC disk plane. The fact that this crossing typically occurs at large LMC galactocentric radii (roughly double that of the recent pericentric passage) further indicates that, as discussed in \S\ref{sec:caveats}, the inclusion of self-gravity in the models is unlikely to change this result.
This fact, in conjunction with the fact that model realisations which experience this crossing (and indeed most realisations in both the base-case and heavy SMC model suites) do not produce negative in-plane radial velocities as observed along the northern arm, leads us to conclude the disk crossing \textasciitilde400~Myr ago is likely not the origin of the northern arm. We note, however, that particles most closely perturbed during this disk plane crossing have in fact moved clockwise with the LMC's rotation (from red cross to red circle in panels \textit{a} and \textit{b} of Fig.~\ref{fig:smctrunc}), and are now located in the western outskirts of the LMC disk: the same region as the observed apparent truncation in the western LMC disk at a radius of \textasciitilde10$^\circ$ in panel \textit{c} of Fig.~\ref{fig:smctrunc}. The MagES collaboration is currently investigating this truncation feature, and the potential role of the SMC in its formation, in more detail (Cullinane et al. in prep). We next consider the model realisation which experiences disk crossings both \textasciitilde400 and \textasciitilde900~Myr ago, focussing on the older disk crossing which occurs in this model \textasciitilde980~Myr ago. Panel \textit{f} reveals that some particles closely perturbed in this crossing are now located in the vicinity of the northern arm. This is evidence that historical interactions with the SMC can potentially influence stars which now form the northern arm. We do find that the few realisations in both the base-case and heavy-SMC models which have experienced this older disk crossing still do not produce the negative in-plane radial velocities as observed along the northern arm.
However, it is possible for these early disk crossings to occur at small LMC radii (see \S\ref{sec:simple}); and in such a case, as discussed in \S\ref{sec:caveats}, our relatively simple models would not capture the full effect of the interaction due to the lack of self-gravity incorporated in the models. Notably, \citet{beslaLowSurfaceBrightness2016} find that multiple LMC/SMC close passages over the course of 6~Gyr can produce significant overdensities and apparent spiral arms in the outer LMC disk, particularly in its northern outskirts at similar distances to the location of the arm today, though they do not report kinematics for these features. It is thus plausible that early interactions with the SMC may have perturbed stars which today form the northern arm, producing both the characteristic gap between the arm and the nearby northern LMC disk, and the strongly negative in-plane radial velocities observed, neither of which are replicated in our simpler models. More realistic models are thus required to confirm this possibility, and better constrain these early interactions between the Clouds. In summary, we posit the following scenario for the formation of the northern arm. Prior to the Clouds' infall into the MW potential and up to \textasciitilde1~Gyr ago, historical interactions between the LMC and SMC, potentially including disk crossings at small LMC galactocentric radii, perturb stars that, at the present day, comprise the northern outskirts of the LMC, imparting a strongly negative radial velocity to the stars which will eventually form the arm. Over the last \textasciitilde Gyr, the Clouds have fallen into a relatively massive MW potential, which acts to further perturb these stars -- particularly in the $z$-direction -- whilst they rotate around the LMC, producing the arm-like feature seen today.
Recent interactions between the LMC and SMC during the past Gyr, particularly the SMC's recent pericentric passage \textasciitilde150~Myr ago and an SMC crossing of the LMC disk plane \textasciitilde400~Myr ago, likely do not strongly affect the stars that form the northern arm, but do closely impact stars which today form a truncation in the western LMC disk. \section{Summary}\label{sec:concs} We have performed a detailed investigation of the arm-like feature in the extreme northern outskirts of the LMC first discovered by \citet{mackey10KpcStellar2016}. Our analysis utilises spectroscopic data for red clump and red giant branch stars from seven MagES fields located along the full length of the feature to obtain [Fe/H] abundances, and in conjunction with Gaia EDR3 data, the first 3D kinematics for individual stars within the arm. We also use Gaia photometry of the red clump to probe the structure of the arm. We find the northern arm generally follows the inclination of the LMC disk plane, and has a similar thickness to the outer LMC disk. The median metallicity near the base of the arm is consistent with that in the nearby outer LMC disk, and we find weak evidence for a mild negative gradient in [Fe/H], decreasing from approximately $-0.9$ at 11~kpc from the LMC centre, to approximately $-1.2$ at an LMC galactocentric radius of \textasciitilde22~kpc in the outermost MagES feature field. We therefore conclude the arm is comprised of LMC disk material. The kinematics of the northern arm also indicate it is comprised of perturbed LMC material. The azimuthal velocity remains reasonably constant along the feature, at \textasciitilde60~km~s$^{-1}$: similar to that measured in the outer LMC disk. In contrast, the in-plane radial velocity and out-of-plane vertical velocities are strongly perturbed. Both of these velocity components are near zero at the base of the arm, consistent with the equilibrium values in the outer LMC disk.
However, the in-plane radial velocity drops to approximately $-40$~km~s$^{-1}$ just two degrees from the base of the arm, remaining near this value along its length, and the vertical velocity steadily increases to \textasciitilde30~km~s$^{-1}$ along the length of the arm. The velocity dispersion in each component decreases along the length of the arm, from values comparable to those in the outer LMC disk near the base of the arm, to roughly half this in the outermost MagES field. In order to understand the formation of the northern arm, we develop a new suite of dynamical models, sampling from uncertainties in the LMC and SMC central locations and systemic motions, and investigating the effect of different LMC/SMC/MW masses on the structure and kinematics of the feature. Our models describe the LMC as a collection of \textasciitilde$2.5\times10^6$ tracer particles within a rigid two-component potential, and the SMC as a rigid Hernquist potential. The geometry of the LMC disk plane is aligned with that from \citetalias{choiSMASHingLMCTidally2018}. Both Clouds are initialised at their present-day locations, then rewound for 1~Gyr in the presence of each other and the Milky Way. The tracer particle distribution of the LMC disk is then generated, and the system is allowed to evolve to the present. In order to explore the large and complex parameter space of the Magellanic system, these models are necessarily somewhat simplified -- they lack self-gravity, as well as dynamical friction between the LMC and SMC, and the Hernquist potential used to describe the SMC does not capture its tidal disruption. Each of these simplifications can affect the orbits of, and thus interactions between, the two Clouds; we therefore perform only qualitative comparisons with observations.
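The rewind-then-replay scheme described above can be illustrated with a minimal sketch. This is our illustration, not the actual MagES/\textsc{agama} setup: a single test particle in a point-mass (Kepler) potential stands in for a satellite in the rigid multi-component Magellanic potentials, and the step size, orbit, and function names are chosen purely for demonstration. The key property exploited is that a kick-drift-kick leapfrog integrator is time-reversible, so integrating backwards from the present-day phase-space coordinates and then forwards again recovers the present-day state.

```python
import numpy as np

def accel(pos, GM=1.0):
    """Point-mass (Kepler) acceleration; a stand-in for a rigid galaxy potential."""
    r = np.linalg.norm(pos)
    return -GM * pos / r**3

def leapfrog(pos, vel, dt, n_steps, sign=+1):
    """Kick-drift-kick leapfrog; sign=-1 integrates backwards in time."""
    pos, vel = pos.copy(), vel.copy()
    h = sign * dt
    for _ in range(n_steps):
        vel += 0.5 * h * accel(pos)   # half kick
        pos += h * vel                # drift
        vel += 0.5 * h * accel(pos)   # half kick
    return pos, vel

# "present-day" phase-space coordinates of the test particle
x0 = np.array([1.0, 0.0, 0.0])
v0 = np.array([0.0, 1.0, 0.0])   # circular orbit for GM = 1

# rewind for a fixed lookback time, then evolve forward to the present
xb, vb = leapfrog(x0, v0, 1e-3, 5000, sign=-1)
x1, v1 = leapfrog(xb, vb, 1e-3, 5000, sign=+1)
```

Because the backward leapfrog step is the exact inverse of the forward step (up to floating-point roundoff), `x1` and `v1` match `x0` and `v0`; in the actual models the tracer particle distribution is generated at the rewound epoch before the forward integration.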
We do note, however, that potential LMC-SMC interactions of interest -- particularly the recent SMC pericentre \textasciitilde150~Myr ago and a possible crossing of the LMC disk plane by the SMC \textasciitilde400~Myr ago -- are relatively recent, and are suggested to occur at reasonable distances from the LMC COM. The resultant effects (or lack thereof) on the northern arm are thus unlikely to differ significantly from those predicted by the simple model suites, although full N-body simulations are required to confirm this. We find that models with a heavy MW ($1.2\times10^{12}$M$_\odot$) and without an SMC have the closest match to the observed kinematics, reproducing the same qualitative velocity trends as those observed. In these models, the LMC's infall to the Milky Way's gravitational potential produces the increasingly positive out-of-plane velocity along the arm. However, even this model is insufficient to fully reproduce the feature kinematics: most significantly, the observed in-plane radial velocity is \textasciitilde20--30~km~s$^{-1}$ more negative than in the model. Our models also suggest that, under the conditions explored, recent (i.e. within the past Gyr) interactions with the SMC do not strongly contribute to the formation of the northern arm. Model LMC particles most significantly disturbed in these interactions, including the recent SMC pericentre \textasciitilde150~Myr ago and a potential recent crossing of the LMC disk plane by the SMC \textasciitilde400~Myr ago, are today located predominantly in the southern and western LMC disk, and are far from the northern arm. Further, model realisations in which the SMC plays a more important dynamical role (particularly those including a heavy SMC) become increasingly inconsistent with observations, with positive in-plane radial velocities and negative vertical velocities (in contrast to the negative in-plane radial velocities and positive vertical velocities observed).
However, as it is likely the LMC and SMC are a long-lived binary pair, it is possible that historical interactions with the SMC prior to \textasciitilde900~Myr ago have perturbed LMC stars which now form the northern arm. Indeed, such interactions could be responsible for the strongly negative observed in-plane radial velocity, which is not replicated in any of our models. In summary, we suggest the following origin for the northern arm. Prior to the Clouds' infall into the MW potential \textasciitilde1~Gyr ago, interactions between the LMC and SMC perturbed the kinematics of stars in what is now the northern outskirts of the LMC, generating negative in-plane radial velocities. Over the last \textasciitilde Gyr, as the LMC has fallen into the Milky Way potential (where a higher Milky Way mass is preferred), these stars have been further perturbed, producing the characteristic shape of the northern arm, and the positive out-of-plane velocities observed along its length. Self-gravitating models that are able to more accurately trace the dynamical influence and evolution of the SMC over longer timescales will be required to quantitatively test this scenario. \section*{Acknowledgements} We thank Eugene Vasiliev for helpful advice on how to use \textsc{agama}. This work has made use of data from the European Space Agency (ESA) mission \textit{Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the \textit{Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the \textit{Gaia} Multilateral Agreement. Based on data acquired at the Anglo-Australian Observatory. We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past, present and emerging.
This research has been supported in part by the Australian Research Council (ARC) Discovery Projects grant DP150103294. ADM is supported by an ARC Future Fellowship (FT160100206). \section*{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} In \cite{mal} Malkin studied the bifurcation of $T$-periodic solutions in the $n$-dimensional $T$-periodic systems of the form \begin{equation}\label{mig} \dot x = f(t,x)+\varepsilon g(t,x,\varepsilon) \end{equation} where both functions $f$ and $g$ are sufficiently smooth. It is assumed in \cite{mal} that the unperturbed system (namely (\ref{mig}) with $\varepsilon=0$) has a family of $T$-periodic solutions, denoted $x(\cdot, \xi(h))$, whose initial conditions are given by a smooth function $\xi:\mathbb{R}^k\to\mathbb{R}^n$, $h\mapsto \xi(h)$. In these settings the adjoint linearized differential system \begin{equation}\label{nnn} \dot u=-\left(D f(t,x(t,\xi(h)))\right)^*u \end{equation} has $k$ linearly independent $T$-periodic solutions $u_1(\cdot,h), \dots , u_k(\cdot,h)$ and the geometric multiplicity of the multiplier $+1$ of (\ref{nnn}) is, therefore, $k.$ Assuming that the algebraic multiplicity of $+1$ is also $k$, Malkin proved \cite{mal} that if the bifurcation function $$ M(h)=\int\limits_0^T \left(\begin{array}{c} \left<u_1(\tau,h),g(\tau,x(\tau,\xi(h)),0)\right>\\ \vdots \\ \left<u_k(\tau,h),g(\tau,x(\tau,\xi(h)),0)\right>\end{array}\right) d\tau $$ has a simple zero $h_0\in\mathbb{R}^k$, then for any $\varepsilon>0$ sufficiently small system (\ref{mig}) has a unique $T$-periodic solution $x_\varepsilon$ such that $ x_\varepsilon(0)\to \xi(h_0)$ as $\varepsilon\to 0. $ Here simple zero means that $M(h_0)=0$ and the Jacobian determinant of $M$ at $h_0$ is nonzero. As usual $\left< \cdot ,\cdot \right>$ denotes the inner product in $\mathbb{R}^n$. Moreover, Malkin related the asymptotic stability of the solution $x_\varepsilon$ with the eigenvalues of the Jacobian matrix $DM(h_0).$ The same result has been proved independently by Loud \cite{loud}.
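To make the construction of $M$ concrete, the following numerical sketch (a worked example of ours, not taken from \cite{mal}) evaluates the bifurcation function by quadrature for the classical van der Pol perturbation $\ddot x + x = \varepsilon(1-x^2)\dot x$, written as a first-order system. Here the unperturbed system is the harmonic oscillator with $T=2\pi$, every initial condition $\xi(h)=h\in\mathbb{R}^2$ generates a $T$-periodic solution (so $k=n=2$), and the adjoint system coincides with the unperturbed one, giving $u_1(t)=(\cos t,-\sin t)^*$ and $u_2(t)=(\sin t,\cos t)^*$.

```python
import numpy as np

def malkin_M(h, n_tau=4096):
    """Evaluate Malkin's bifurcation function M(h) by quadrature for the
    van der Pol perturbation g(t, x, 0) = (0, (1 - x1^2) x2); the
    unperturbed system is the harmonic oscillator with period T = 2*pi."""
    h1, h2 = h
    # uniform grid without the right endpoint: for smooth periodic
    # integrands this Riemann sum is the (spectrally accurate) trapezoid rule
    tau = np.linspace(0.0, 2.0 * np.pi, n_tau, endpoint=False)
    # unperturbed T-periodic solution with initial condition xi(h) = (h1, h2)
    x1 = h1 * np.cos(tau) + h2 * np.sin(tau)
    x2 = -h1 * np.sin(tau) + h2 * np.cos(tau)
    g2 = (1.0 - x1**2) * x2       # second component of g; the first is zero
    w = 2.0 * np.pi / n_tau
    # inner products <u_i(tau), g(tau, x(tau, xi(h)), 0)>
    m1 = w * np.sum(-np.sin(tau) * g2)
    m2 = w * np.sum(np.cos(tau) * g2)
    return np.array([m1, m2])
```

For $h=(a,0)$ a direct computation gives $M(h)=\left(a\pi(1-a^2/4),\,0\right)$, so $M$ vanishes on the circle of radius $2$, recovering the amplitude of the van der Pol limit cycle. Since the perturbation is autonomous, these zeros form a curve and are therefore not simple, which is precisely the kind of degeneracy addressed by the degree-theoretic assumptions of this paper.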
Malkin's result has been developed by Kopnin \cite{kop} and Loud \cite{loud} who studied the case when the zero $h_0$ of $M$ is not necessarily simple, and both authors obtained conditions for the existence, uniqueness and asymptotic stability of the $T$-periodic solution $x_\varepsilon$ of (\ref{mig}) satisfying $ x_\varepsilon(0)\to \xi(h_0)$ as $\varepsilon\to 0.$ Other improvements relax the smoothness requirement for $g$ and are due to Fe\v{c}kan \cite{fec} and Kamenskii--Makarenkov--Nistri \cite{nach}, but only the existence and the calculation of the topological index of solutions $x_\varepsilon$ of (\ref{mig}) have been considered. The analysis of the situation when the algebraic multiplicity of $+1$ is $n$ goes back to Melnikov \cite{melnikov}, and stability for simple and singular zeros $h_0$ of $M$ has been established by Yagasaki \cite{yagasaki}. In this paper (Theorems~2 and~3 below are our main results) we only assume that (\ref{nnn}) has $k$ linearly independent $T$-periodic solutions (i.e. the geometric multiplicity of the multiplier $+1$ of (\ref{nnn}) is $k$). Also, we neither assume that the zero $h_0$ of $M$ is simple, nor that $g$ is differentiable. More precisely, we assume that in a small neighborhood of $h_0$ the topological degree of $M$ is nonzero and that $M$ is a so-called dilating mapping, while for $g$ we assume that it is Lipschitz and ``piecewise differentiable'' in a suitable sense. Both assumptions are explicit and weaker than Malkin's ones. Note that one of the conditions for $g$, denoted below by (A9), has its roots in \cite{bd,blm,blmS}. On the other hand we do not obtain asymptotic stability of the $T$-periodic solution $x_\varepsilon$ of (\ref{mig}), but we prove its existence and uniqueness; in particular we prove that it is isolated. In order to study the asymptotic stability one can use a result of Kolesov \cite{kol}.
This result is a generalization of the Lyapunov linearization stability criterion for Lipschitz systems and it requires the isolatedness of the $T$-periodic solution $x_\varepsilon.$ In order to prove our main result we need to generalize the Lyapunov--Schmidt reduction method (see \cite{chic}) to the case of nonsmooth finite dimensional functions. The application of the generalized Lyapunov--Schmidt reduction method for proving Theorem~2 is done by following the ideas of \cite{adriana}. The paper is organized as follows. In the next section we summarize our notations. In Section~\ref{secLS} we generalize the Lyapunov--Schmidt reduction method to nonsmooth finite dimensional functions. In Section~\ref{sec3} we prove Theorem~2 and the main result of the paper, Theorem~\ref{thm3}. An application of this theorem is given in Section~\ref{sec4}. \section{Notations} The following notations will be used throughout this paper. Let $n,m,k\in\mathbb{N},$ $k\leq n$, $i\in\mathbb{N}\cup\{0\}$. We denote the projection onto the first $k$ coordinates by $\pi:\mathbb{R}^n \to \mathbb{R}^k$, and the one onto the last $n-k$ coordinates by $\pi^\perp:\mathbb{R}^n \to \mathbb{R}^{n-k}$. We denote by $I_{n\times n}$ the identity $n\times n$ matrix, while $0_{n\times m}$ denotes the null $n\times m$ matrix. For an $n\times n$ matrix $A$ we denote by $A^*$ the adjoint of $A$, which in the case of a real matrix reduces to the transpose. We consider a norm in $\mathbb{R}^n$ denoted by $\|\cdot\|$. Let $\Psi$ be an $n\times n$ real matrix. Then $\|\Psi\|$ denotes the operator norm, i.e. $\|\Psi\|=\sup _{\|\xi\|=1}\|\Psi \xi\|$. Let $\xi\in\mathbb{R}^n$ and let $\mathcal{Z}\subset\mathbb{R}^n$ be compact; then we denote by $\rho(\xi,\mathcal{Z})=\min_{\zeta\in\mathcal{Z}}\|\xi-\zeta\|$ the distance between $\xi$ and $\mathcal{Z}$. For $\delta>0$ and $z\in \mathbb{R}^n$ the ball in $\mathbb{R}^n$ centered at $z$ of radius $\delta$ will be denoted by $B_\delta(z)$.
For a subset $\mathcal{U}\subset\mathbb{R}^n$ we denote by $\mbox{int}(\mathcal{U})$, $\overline{\mathcal{U}}$ and $\overline{\mbox{co}} \, \mathcal{U}$ its interior, closure and closure of the convex hull, respectively. We denote by $C^i(\mathbb{R}^n,\mathbb{R}^m)$ the set of all continuous and $i$ times continuously differentiable functions from $\mathbb{R}^n$ into $\mathbb{R}^m.$ Let $\mathcal{F}\in C^0(\mathbb{R}^n,\mathbb{R}^n)$ be a function that does not have zeros on the boundary of some open bounded set $\mathcal{U}\subset\mathbb{R}^n$. Then $d(\mathcal{F},\mathcal{U})$ denotes the Brouwer topological degree of $\mathcal{F}$ on $\mathcal{U}$ (see \cite{brouwer} or \cite[Ch.~1, \S~3]{krazab}). For $\mathcal{F}\in C^1(\mathbb{R}^n,\mathbb{R}^m)$, $D\mathcal{F}$ denotes the Jacobian matrix of $\mathcal{F}$. If $\mathbb{R}^n=\mathbb{R}^k\times \mathbb{R}^{n-k}$ and $\alpha \in \mathbb{R}^k,\beta\in \mathbb{R}^{n-k}$, then $D_\alpha\mathcal{F}(\cdot,\beta)$ denotes the Jacobian matrix of $\mathcal{F}(\cdot,\beta).$ For $\mathcal{F}\in C^2(\mathbb{R}^n,\mathbb{R})$, $H\mathcal{F}$ denotes the Hessian matrix of $\mathcal{F}$, i.e. the Jacobian matrix of the gradient of $\mathcal{F}$. Let $\delta>0$ be sufficiently small. With $o(\delta)$ we denote a function of variable $\delta$ such that $o(\delta)/\delta\to 0$ as $\delta \to 0$, while $O(\delta)$ denotes a function of $\delta$ such that $O(\delta)/\delta$ is bounded as $\delta \to 0$. Here the functions $o$ and $O$ can depend also on other variables, but the above properties hold uniformly when these variables lie in a fixed bounded region. We say that the function $Q:\mathbb{R}^n\times \mathbb{R}^m\to \mathbb{R}^m$ is {\it locally uniformly Lipschitz with respect to its first variable} if for each compact $K\subset \mathbb{R}^n\times \mathbb{R}^m$ there exists $L>0$ such that $\| Q(z_1,\lambda)-Q(z_2,\lambda)\|\leq L \| z_1-z_2\|$ for all $(z_1,\lambda), \, (z_2,\lambda)\in K$. 
For any Lebesgue measurable set $M\subset [0,T]$ we denote by mes($M$) the Lebesgue measure of $M$. \section{Lyapunov--Schmidt reduction method for nonsmooth finite dimensional functions}\label{secLS} If the continuously differentiable function $P:\mathbb{R}^n\to\mathbb{R}^n$ vanishes on some set $\mathcal{Z}\subset\mathbb{R}^n$, then sufficient conditions for the existence of zeros near $\mathcal{Z}$ of the perturbed function \begin{equation}\label{a} F(z,\varepsilon)=P(z)+\varepsilon Q(z,\varepsilon), \quad z\in \mathbb{R}^n, \,\,\, \varepsilon>0 \mbox{ small enough} \end{equation} can be expressed in terms of the restrictions to $\mathcal{Z}$ of the functions $z\mapsto DP(z)$ and $z\mapsto Q(z,0)$. Roughly speaking, this is what is known in the literature as the Lyapunov--Schmidt reduction method, as it is presented for instance in \cite{chic,adriana} or \cite[\S 24.8]{krazab}. In these references it is assumed that $Q$ is a continuously differentiable function. We show in this section that this last assumption can be weakened. The following theorem is the main result of this section and generalizes a theorem of \cite{adriana}. \begin{thm}\label{thm1} Let $P\in C^1(\mathbb{R}^n,\mathbb{R}^n)$, let $Q\in C^0(\mathbb{R}^n\times[0,1],\mathbb{R}^n)$ be locally uniformly Lipschitz with respect to its first variable, and let $F:\mathbb{R}^n\times[0,1]\to \mathbb{R}^n$ be given by \eqref{a}. Assume that $P$ satisfies the following hypotheses. \begin{itemize} \item[(A1)] \label{hyp:i} There exist an invertible $n\times n$ matrix $S,$ an open ball $V\subset\mathbb{R}^k$ with $k\le n$, and a function $\beta_0\in C^1(\overline{V},\mathbb{R}^{n-k})$ such that $P$ vanishes on the set $\mathcal{Z}=\bigcup\limits_{\alpha\in\overline{V}} \left\{S\left(\begin{array}{c}\alpha \\ \beta_0(\alpha)\end{array}\right)\right\}$.
\item[(A2)] \label{hyp:ii} For any $z\in \mathcal{Z}$ the matrix $DP(z)S$ has in its upper right corner the null $k\times(n-k)$ matrix and in the lower right corner the $(n-k)\times(n-k)$ matrix $\Delta(z)$ with ${\rm det}\left(\Delta(z)\right)\not=0.$ \end{itemize} For any $\alpha\in \overline V$ we define \begin{equation} \label{bif_fun} \widehat{Q}(\alpha)= \pi Q\left(S\left(\begin{array}{c}\alpha \\ \beta_0(\alpha)\end{array}\right),0\right). \end{equation} Then the following statements hold. \begin{itemize} \item[(C1)] For any sequences $(z_m)_{m\geq 1}$ from $\mathbb{R}^n$ and $(\varepsilon_m)_{m\geq 1}$ from $[0,1]$ such that $z_m\to z_0\in\mathcal{Z}$, $\varepsilon_m\to 0$ as $m\to\infty$ and $F(z_{_m},\varepsilon_m)=0$ for any $m\geq 1$, we have $\widehat{Q}\left(\pi S^{-1}z_0\right)=0$. \item[(C2)] If $\widehat{Q}:\overline V \to \mathbb{R}^k$ is such that $ \widehat{Q}(\alpha)\not=0\,\,{\rm for\ all\ }\alpha\in \partial V$ and $ d\left(\widehat{Q},V\right)\not=0$, then there exists $\varepsilon_1>0$ sufficiently small such that for each $\varepsilon\in (0,\varepsilon_1]$ there exists at least one $z_\varepsilon\in \mathbb{R}^n$ with $F(z_\varepsilon,\varepsilon)=0$ and $\rho(z_\varepsilon,\mathcal{Z})\to 0$ as $\varepsilon\to 0.$ \end{itemize} In addition we assume that there exists $\alpha_0\in V$ such that $\,\widehat{Q}(\alpha_0)=0$, $\,\widehat{Q}(\alpha)\neq 0$ for all $\alpha\in \overline V\setminus \{\alpha_0\}$ and $d\left(\widehat{Q},V\right)\not=0$, and we denote $z_0=S\left(\begin{array}{c}\alpha_0 \\\beta_0(\alpha_0)\end{array}\right) $. Moreover we also assume: \begin{itemize} \item[(A3)] $P$ is twice differentiable in the points of $\mathcal{Z}$, and for each $i\in \overline{1,k}$ and $z\in {\mathcal{Z}}$ the Hessian matrix $HP_i(z)$ is symmetric. 
\item[(A4)] There exist $\delta_1>0$ and $L_{\widehat Q}>0$ such that \[||\widehat Q (\alpha_1)-\widehat Q(\alpha _2)|| \geq L_{\widehat Q} ||\alpha_1-\alpha_2|| \quad \mbox{ for all } \alpha_1,\alpha_2\in B_{\delta_1}(\alpha_0). \] \item[(A5)] For $\delta>0$ sufficiently small we have that \begin{eqnarray*} \label{sd2} &&\left\|\pi{Q}\left(z_1+ \zeta,\varepsilon\right)-\pi{Q}\left(z_1,0\right)- \left.\pi{Q}\left(z_2+ \zeta,\varepsilon\right)+\pi{Q}\left(z_2,0\right)\right.\right\| \le\\ \nonumber && \frac{o(\delta)}{\delta} \|z_1-z_2\|, \end{eqnarray*} for all $z_1,z_2\in B_{\delta}(z_0)\cap \mathcal{Z}$, $\varepsilon\in [0,\delta]$ and $\zeta\in B_{\delta}(0)$. \end{itemize} Then the following conclusion holds. \begin{itemize} \item[(C3)] There exists $\delta_2>0$ such that for each $\varepsilon \in (0,\varepsilon_1]$ there is exactly one $z_\varepsilon \in B_{\delta_2}(z_0)$ with $F(z_\varepsilon,\varepsilon)=0$. Moreover $z_\varepsilon\to z_0$ as $\varepsilon\to 0.$ \end{itemize} \end{thm} We note that a map that satisfies (A4) is usually called a {\it dilating map} (cf. \cite{altman}). For proving Theorem~\ref{thm1} we shall use the following version of the Implicit Function Theorem. \begin{lem} \label{implicit} Let $P\in C^1(\mathbb{R}^n,\mathbb{R}^n)$ and let $Q\in C^0(\mathbb{R}^n\times[0,1],\mathbb{R}^n)$ be locally uniformly Lipschitz with respect to its first variable. Assume that $P$ satisfies the hypotheses {\em (A1)} and {\em (A2)} of Theorem \ref{thm1}. Then there exist $\delta_0>0,$ $\varepsilon_0>0$ and a function $\beta:\overline{V}\times[0,\varepsilon_0]\to\mathbb{R}^{n-k}$ such that \begin{itemize} \item[(C4)] $\pi^\bot F\left(S\left(\begin{array}{c}\alpha \\ \beta(\alpha,\varepsilon)\end{array}\right),\varepsilon\right)=0$ for all $\alpha\in \overline{V}$ and $\varepsilon\in [0,\varepsilon_0]$.
\item[(C5)] $\beta(\alpha,\varepsilon)=\beta_0(\alpha)+\varepsilon\mu(\alpha,\varepsilon)$ where $\mu:\overline{V}\times(0,\varepsilon_0]\to\mathbb{R}^{n-k}$ is bounded. Moreover for any $\alpha\in \overline V$ and $\varepsilon\in[0,\varepsilon_0]$, $\beta(\alpha,\varepsilon)$ is the only zero of\\ $\pi^\bot F\left(S\left(\begin{array}{c}\alpha \\ \cdot\end{array}\right),\varepsilon\right)$ in $B_{\delta_0}(\beta_0(\alpha))$ and $\beta$ is continuous in $\overline V\times [0,\varepsilon_0]$. \end{itemize} In addition if $P$ is twice differentiable in the points of $\mathcal{Z}$, then \begin{itemize} \item[(C6)] there exists $L_\mu>0$ such that $ \|\mu(\alpha_1,\varepsilon)-\mu(\alpha_2,\varepsilon)\|\le L_\mu\|\alpha_1-\alpha_2\|$ for all $\alpha_1,\alpha_2\in\overline{V}$ and $\varepsilon\in (0,\varepsilon_0]$. \end{itemize} \end{lem} \noindent{\bf Proof.} (C4) Let $\widetilde{F}:\mathbb{R}^k\times\mathbb{R}^{n-k}\times[0,1]\to\mathbb{R}^n$ be defined by $$ \widetilde{F}(\alpha,\beta,\varepsilon)=F\left(S\left(\begin{array}{c} \alpha\\ \beta \end{array}\right),\varepsilon\right), $$ and let $\widetilde{P}$, $\widetilde{Q}$ and $\widetilde{\Delta}$ be defined in a similar way. Now the assumptions (A1) and (A2) become $\widetilde{P}(\alpha,\beta_0(\alpha))=0$ and, respectively, the matrix $D\widetilde{P}(\alpha,\beta_0(\alpha))$ has in its upper right corner the null $k\times(n-k)$ matrix and in the lower right corner the $(n-k)\times(n-k)$ invertible matrix $\widetilde{\Delta}{(\alpha,\beta_0(\alpha))}$ for any $\alpha\in \overline V$. 
Then $$ \widetilde{F}(\alpha,\beta_0(\alpha),0)=0\quad{\rm for\ any\ }\alpha\in\overline{V}, $$ and \begin{equation} {\rm det}\left(D_{\beta}\left(\pi^\bot{\widetilde{F}}\right)(\alpha,\beta_0(\alpha),0)\right)={\rm det}\left(\widetilde{\Delta}{(\alpha,\beta_0(\alpha))}\right)\not=0\quad {\rm for\ any\ } \alpha\in\overline{V}.\label{g1} \end{equation} It follows from (\ref{g1}) that there exists a radius $\delta>0$ such that \begin{equation}\label{g2} \pi^\bot\widetilde{F}(\alpha,\beta,0)\not=0\quad{\rm for\ any\ } \beta\in \overline{B_{\delta}({\beta}_0(\alpha))}\backslash \left\{ {\beta}_0(\alpha)\right\},\ \alpha\in \overline{V}. \end{equation} The relations (\ref{g1}) and (\ref{g2}) give (see \cite[Theorem~6.3]{krazab}) $$ d(\pi^\bot\widetilde{F}(\alpha,\cdot,0),B_{\delta}({\beta}_0(\alpha)))={\rm sign}\left({\rm det}\left(\widetilde{\Delta}(\alpha,\beta_0(\alpha))\right)\right)\not=0,\quad\alpha\in\overline{V}. $$ Hence, by the continuity of the topological degree with respect to parameters (using the compactness of $\overline{V}$) there exists $\varepsilon(\delta)>0$ such that \begin{equation*} d(\pi^\bot\widetilde{F}(\alpha,\cdot,\varepsilon),B_\delta({\beta}_0(\alpha)))\not=0\quad{\rm for\ any\ }\varepsilon\in[0,\varepsilon(\delta)],\ \alpha\in\overline{V}. \end{equation*} This ensures the existence of $\beta(\alpha,\varepsilon)\in B_\delta({\beta}_0(\alpha))$ such that conclusion (C4) holds with $\delta_0=\delta$ and $\varepsilon_0=\varepsilon(\delta)$. Without loss of generality we can consider in the sequel that $\varepsilon(\delta)\to 0$ as $\delta\to 0.$ The value of the radius $\delta$ may decrease in a finite number of steps during this proof (consequently, also the value of $\varepsilon(\delta)$). Sometimes we decrease only the value of $\varepsilon(\delta)$, letting $\delta$ keep its value.
Without explicitly mentioning it, in the statement of the lemma we finally replace $\delta_0$ by the least value of the radius $\delta$ and $\varepsilon_0$ by $\varepsilon(\delta)$. (C5) Since $P$ and $\beta_0$ are $C^1$ and $\overline{V}$ is bounded, there exists $\eta >0$ such that the invertible matrix $\Delta$ defined by (A2) satisfies $||\widetilde{\Delta}({\alpha,\beta_0(\alpha)})||\geq 2\eta$ for all $\alpha\in \overline V$. Using again that $P$ is $C^1$ and $\widetilde{\Delta}(\alpha,\beta_0(\alpha))=D_{\beta}\left( \pi ^\bot \widetilde{P}\right)(\alpha,\beta_0(\alpha))$, we obtain that the radius $\delta>0$ found before at (C4) can be decreased, if necessary, in such a way that $||\widetilde{\Delta}({\alpha,\beta_0(\alpha)})-D_{\beta}\left( \pi ^\bot \widetilde{P}\right)(\alpha,\beta)||\leq \eta\,$ for all $\beta\in B_\delta\left( \beta_0(\alpha)\right)$ and $\alpha\in \overline V$. Then $||D_{\beta}\left( \pi ^\bot \widetilde{P}\right)(\alpha,\beta)||\geq \eta\,$ for all $\beta\in B_\delta\left( \beta_0(\alpha)\right)$, $\alpha\in \overline V$. Applying the generalized Mean Value Theorem (see \cite[Proposition 2.6.5]{clark}) to the function $\pi ^\bot \widetilde{P}(\alpha,\cdot)$, we obtain \begin{equation} \label{Pinv} ||\pi^\bot \widetilde{P}(\alpha,\beta_1)-\pi^\bot \widetilde{P}(\alpha,\beta_2)||\geq \eta ||\beta_1-\beta_2||,\quad \beta_1,\beta_2\in B_\delta(\beta_0(\alpha)),\,\,\alpha \in \overline V. \end{equation} We take $M_Q>0$ such that $||\widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)||\leq M_Q$ for all $\alpha\in \overline V$ and $\varepsilon\in[0,\varepsilon_0]$.
Using \eqref{Pinv} we obtain for all $\alpha\in \overline V$ and $\varepsilon\in[0,\varepsilon(\delta)]$ \begin{eqnarray*} 0&=&||\pi^\bot \widetilde{P}(\alpha,\beta(\alpha,\varepsilon))-\pi^\bot \widetilde{P}(\alpha,\beta_0(\alpha))+\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)||\\ &\geq& \eta ||\beta(\alpha,\varepsilon)-\beta_0(\alpha)||-\varepsilon M_Q\,. \end{eqnarray*} From these last relations, denoting $m=M_Q/\eta$, we deduce that \begin{equation} \label{mu_bdd} \|\mu(\alpha,\varepsilon)\|\leq m \quad \mbox{ for all }\alpha\in \overline V, \, \varepsilon\in (0,\varepsilon(\delta)]. \end{equation} We choose $L_Q>0$ such that \begin{equation} \label{Q_Lip} || \widetilde{Q}(\alpha_2,\beta_2,\varepsilon)- \widetilde{Q}(\alpha_1,\beta_1,\varepsilon)||\leq L_Q\left(||\alpha_2-\alpha_1||+ ||\beta_2-\beta_1||\right)\, , \end{equation} for all $\beta_1,\beta_2\in B_{\delta_0}(\beta_0(\overline V))$, $\alpha_1,\alpha_2\in \overline V,\varepsilon\in [0,\varepsilon_0].$ We decrease $\delta>0$ in such a way that $\eta-\varepsilon L_Q>0$ for any $\varepsilon\in[0,\varepsilon(\delta)].$ Let $\alpha\in \overline V$, $\varepsilon\in[0,\varepsilon(\delta)]$ and assume that $\beta(\alpha,\varepsilon)$ and $\beta_2$ are two zeros of $\pi^\bot F\left(S\left(\begin{array}{c}\alpha \\ \cdot\end{array}\right),\varepsilon\right)$ in $B_\delta(\beta_0(\alpha))$. Taking into account (\ref{Pinv}) and \eqref{Q_Lip}, we obtain \begin{eqnarray*} 0&=&||\pi^\bot \widetilde{P}(\alpha,\beta_2)-\pi^\bot \widetilde{P}(\alpha,\beta(\alpha,\varepsilon))+\\ &~& \varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta_2,\varepsilon)-\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)||\\ &\geq& (\eta-\varepsilon L_Q) ||\beta_2-\beta(\alpha,\varepsilon)||. \end{eqnarray*} Since $\eta-\varepsilon L_Q>0$ for any $\varepsilon\in[0,\varepsilon(\delta)]$ we deduce from this last relation that $\beta_2$ and $\beta(\alpha,\varepsilon)$ must coincide. 
\medskip We prove in the sequel the continuity of the function $\beta:\overline V \times [0,\varepsilon(\delta)]\to {\mathbb{R}}^{n-k}$. Let $(\alpha_1,\varepsilon_1)\in \overline V \times [0,\varepsilon(\delta)]$ be fixed and $(\alpha,\varepsilon)\in \overline V \times [0,\varepsilon(\delta)]$ be in a small neighborhood of $(\alpha_1,\varepsilon_1)$. Consider $L_P>0$ such that $||\widetilde{P}(\alpha_1,\beta)-\widetilde P (\alpha,\beta)||\leq L_P||\alpha_1-\alpha||$ for all $\alpha_1,\alpha\in \overline V$ and $\beta\in B_{\delta_0}(\beta_0(\overline V))$. We diminish $\varepsilon(\delta)>0$, if necessary, and we consider $\alpha$ so close to $\alpha_1$ that $\beta(\alpha,\varepsilon)\in B_\delta(\beta_0(\alpha_1))$. Then using (\ref{Pinv}) and (\ref{Q_Lip}) we obtain \begin{eqnarray*} 0&=&||\pi^\bot \widetilde{P}(\alpha_1,\beta(\alpha_1,\varepsilon_1))-\pi^\bot \widetilde{P}(\alpha,\beta(\alpha,\varepsilon))+\\ &~& \varepsilon_1 \pi^\bot \widetilde{Q}(\alpha_1,\beta(\alpha_1,\varepsilon_1),\varepsilon_1)-\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)|| \\ &\geq& \eta ||\beta(\alpha_1,\varepsilon_1)-\beta(\alpha,\varepsilon)||-L_P||\alpha_1-\alpha||-\\ &~& ||\varepsilon_1\pi^\bot \widetilde{Q}(\alpha_1,\beta(\alpha_1,\varepsilon_1),\varepsilon_1)-\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon) || \end{eqnarray*} and \begin{eqnarray*} && -||\varepsilon_1\pi^\bot \widetilde{Q}(\alpha_1,\beta(\alpha_1,\varepsilon_1),\varepsilon_1)-\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon) || \\ &\ge & -\varepsilon_1 L_Q ||\alpha_1-\alpha||-\varepsilon_1 L_Q||\beta(\alpha_1,\varepsilon_1)-\beta(\alpha,\varepsilon)||-\\ &~&||\varepsilon_1\pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon_1)-\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)||. 
\end{eqnarray*} Combining these last two relations we obtain \begin{eqnarray*} &&(\eta -\varepsilon_1 L_Q)||\beta(\alpha_1,\varepsilon_1)-\beta(\alpha,\varepsilon)||\leq (L_P+\varepsilon_1 L_Q) ||\alpha_1-\alpha||+\\ &~&||\varepsilon_1\pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon_1)-\varepsilon \pi^\bot \widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)||\, , \end{eqnarray*} from where it follows easily that $\beta(\alpha,\varepsilon)\to \beta (\alpha_1,\varepsilon_1)$ when $(\alpha,\varepsilon)\to (\alpha_1,\varepsilon_1)$. \\ (C6) We define $\Phi(\alpha,\xi)=\pi^\bot \widetilde{P}(\alpha,\beta_0(\alpha)+\xi)$ for all $\alpha\in\overline V$ and $\xi\in\mathbb{R}^{n-k}$. From (\ref{Pinv}) we have that \begin{equation} \label{Phiinv} ||\Phi(\alpha,\xi_1)-\Phi(\alpha,\xi_2)||\geq \eta ||\xi_1-\xi_2|| \mbox{ for all } \alpha\in \overline V, \,\, \xi_1,\xi_2\in B_\delta(0). \end{equation} Since $\widetilde P(\alpha,\beta_0(\alpha))=0$ for all $\alpha\in \overline V$, we have that $\Phi(\alpha,\xi)=\pi^\bot \widetilde{P}(\alpha,\beta_0(\alpha)+\xi)-\pi^\bot \widetilde{P}(\alpha,\beta_0(\alpha))$ and that \begin{eqnarray*} D_\alpha \Phi(\alpha,\xi)&=&D_\alpha \left(\pi^\bot \widetilde{P}\right)(\alpha,\beta_0(\alpha)+\xi) - D_\alpha \left(\pi^\bot \widetilde{P}\right)(\alpha,\beta_0(\alpha)) +\\ &&\left[ D_\beta \left( \pi^\bot \widetilde{P}\right)(\alpha,\beta_0(\alpha)+\xi)-D_\beta \left( \pi^\bot \widetilde{P}\right)(\alpha,\beta_0(\alpha))\right]D\beta_0(\alpha). 
\end{eqnarray*} From this expression, using that $ \widetilde{P}$ is twice differentiable in $(\alpha,\beta_0(\alpha))$ and $\beta_0$ is $C^1$, we obtain that, for some $L_\Phi>0$, the radius $\delta$ can be decreased further, if necessary, in such a way that $$||D_\alpha \Phi(\alpha,\xi)||\leq L_{\Phi}||\xi|| \,\mbox{ for all }\alpha\in \overline V,\, \xi\in B_\delta(0).$$ Hence using the Mean Value Inequality we have \begin{equation} \label{Philip} ||\Phi(\alpha_1,\xi)-\Phi(\alpha_2,\xi)||\leq L_{\Phi}||\xi||\cdot ||\alpha_1-\alpha_2|| \, \mbox{ for all }\alpha_1,\alpha_2\in \overline V,\, \xi\in B_\delta(0). \end{equation} Now we use (\ref{Phiinv}) with $\xi_1=\varepsilon \mu(\alpha_1,\varepsilon)$, $\xi_2=\varepsilon\mu(\alpha_2,\varepsilon)$, diminishing $\varepsilon(\delta)$, if necessary, in order that $\xi_1,\xi_2\in B_\delta(0)$ for all $\alpha_1,\alpha_2\in\overline V$ and $\varepsilon\in(0,\varepsilon(\delta)]$. Using also (C5), (\ref{mu_bdd}) and (\ref{Philip}) we obtain \begin{equation} \label{eq:despreP} \begin{array}{ll} &||\pi^\bot \widetilde{P}(\alpha_1,\beta(\alpha_1,\varepsilon))-\pi^\bot \widetilde{P}(\alpha_2,\beta(\alpha_2,\varepsilon))||= ||\Phi(\alpha_1,\xi_1)-\Phi(\alpha_2,\xi_2)|| \\ &\geq \eta ||\xi_1-\xi_2||- L_\Phi ||\xi_1||\cdot ||\alpha_1-\alpha_2|| \\ &\geq \eta\varepsilon||\mu(\alpha_1,\varepsilon)-\mu(\alpha_2,\varepsilon)||-L_\Phi m \varepsilon ||\alpha_1-\alpha_2||\ , \end{array} \end{equation} for all $\alpha_1,\alpha_2\in \overline V$ and $\varepsilon\in (0,\varepsilon(\delta)]$.
Also using (\ref{Q_Lip}) we have \begin{equation} \label{eq:despreQ} \begin{array}{ll} ||\pi^\bot \widetilde{Q}(\alpha_1,\beta(\alpha_1,\varepsilon),\varepsilon)- \pi^\bot \widetilde{Q}(\alpha_2,\beta(\alpha_2,\varepsilon),\varepsilon) || \leq \\ \leq \varepsilon L_Q||\mu(\alpha_1,\varepsilon)-\mu(\alpha_2,\varepsilon)||+ L_Q(1+L_{\beta_0}) ||\alpha_1-\alpha_2||\ , \end{array} \end{equation} for all $\alpha_1,\alpha_2\in \overline V$ and $\varepsilon\in (0,\varepsilon(\delta)]$, where $L_{\beta_0}$ is the Lipschitz constant of $\beta_0$ in $\overline V$. By definition of $\beta(\alpha,\varepsilon)$ we have $\,\pi^\bot \widetilde{P}(\alpha_i,\beta(\alpha_i,\varepsilon))+\varepsilon\pi^\bot \widetilde{Q}(\alpha_i,\beta(\alpha_i,\varepsilon),\varepsilon)=0$ for $i\in \overline{1,2}$. Using (\ref{eq:despreP}) and (\ref{eq:despreQ}) we obtain \[ 0\geq \varepsilon [\eta -\varepsilon L_Q ]\cdot ||\mu(\alpha_1,\varepsilon)-\mu(\alpha_2,\varepsilon)||-\varepsilon [L_\Phi m +L_Q(1+L_{\beta_0})]\cdot ||\alpha_1-\alpha_2||\ , \] for all $\alpha_1,\alpha_2\in \overline V$ and $\varepsilon\in (0,\varepsilon(\delta)]$. Therefore $\mu :\overline V\times(0,\varepsilon(\delta)]\to {\mathbb{R}}^{n-k}$ satisfies (C6) with $L_\mu =[L_\Phi m +L_Q(1+L_{\beta_0})]/[\eta -\varepsilon(\delta) L_Q ]$. Hence all the conclusions hold with $\delta_0=\delta$ and $\varepsilon_0=\varepsilon(\delta)$. \qed We remark that (C4) and the uniqueness part of (C5) can be obtained by means of the Lipschitz generalization of the Inverse Function Theorem (see e.g. \cite[Theorem 5.3.8]{krantz}), but we provide a different proof because the inequalities (\ref{Pinv}) and (\ref{mu_bdd}) are used for proving the rest of (C5) and (C6). \ \noindent{\bf Proof of Theorem~\ref{thm1}.} Let $\delta_0$, $\varepsilon_0$, $\beta(\alpha,\varepsilon)$ and $\mu(\alpha,\varepsilon)$ be as in Lemma~\ref{implicit}. We consider the notations $\widetilde F,\widetilde P$ and $\widetilde Q$ as in the proof of Lemma \ref{implicit}.
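For the reader's convenience we recall the splitting used throughout the proof below (these are relations already established in Lemma~\ref{implicit} and its proof, restated here, not new assumptions):

```latex
\widetilde F(\alpha,\beta,\varepsilon)
   =\widetilde P(\alpha,\beta)+\varepsilon\,\widetilde Q(\alpha,\beta,\varepsilon),
\qquad
\beta(\alpha,\varepsilon)=\beta_0(\alpha)+\varepsilon\,\mu(\alpha,\varepsilon),
```

so that, for $\varepsilon\neq 0$, $\frac{1}{\varepsilon}\,\pi\widetilde F(\alpha,\beta(\alpha,\varepsilon),\varepsilon)=\frac{1}{\varepsilon}\,\pi\widetilde P(\alpha,\beta(\alpha,\varepsilon))+\pi\widetilde Q(\alpha,\beta(\alpha,\varepsilon),\varepsilon)$.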
\medskip (C1) Let the sequences $(z_m)_{m\geq 1}$ from $\mathbb{R}^n$ and $(\varepsilon_m)_{m\geq 1}$ from $[0,1]$ be such that $z_m\to z_0\in {\mathcal{Z}}$, $\varepsilon_m\to 0$ as $m\to\infty$ and $F(z_m,\varepsilon_m)=0$ for any $m\geq 1$. We define $\alpha_0\in \mathbb{R}^k$, the sequences $(\alpha_m)_{m\geq 1}$ from $\mathbb{R}^k$ and $(\beta_m)_{m\geq 1}$ from $\mathbb{R}^{n-k}$ by $z_0=S\left(\begin{array}{c}\alpha_0\\ \beta_0(\alpha_0)\end{array}\right)$ and $z_m=S\left(\begin{array}{c}\alpha_m\\ \beta_m\end{array}\right)$. Then we have that $\alpha_0=\lim \limits_{m\to\infty}\alpha_m$, $\beta_0(\alpha_0)=\lim \limits_{m\to\infty}\beta_m$ and there exists $m_0\in\mathbb{N}$ such that $\beta_m\in B_{\delta_0}(\beta_0(\alpha_m))$ and $\varepsilon_m\in [0,\varepsilon_0]$ for all $m\geq m_0$. Therefore, since $F(z_m,\varepsilon_m)=0$, Lemma~\ref{implicit} implies $\beta_m=\beta(\alpha_m,\varepsilon_m)$ for any $m\geq m_0.$ Since $\pi\widetilde{P}(\alpha_m,\beta_0(\alpha_m))=0$ and $D_\beta (\pi\widetilde{P})(\alpha_m,\beta_0(\alpha_m))=0$, we obtain that $\lim \limits_{m\to \infty}\dfrac{1}{\varepsilon_m}\pi\widetilde{P}(\alpha_m,\beta(\alpha_m,\varepsilon_m))=0.$ Hence \begin{eqnarray*} 0&=&\lim \limits_{m\to \infty}\frac{1}{\varepsilon_m} \pi \widetilde{F}(\alpha_m,\beta(\alpha_m,\varepsilon_m),\varepsilon_m)\\ &=&\lim \limits_{m\to \infty}\left[\frac{1}{\varepsilon_m} \pi\widetilde{P}(\alpha_m,\beta(\alpha_m,\varepsilon_m))+\pi\widetilde{Q}(\alpha_m,\beta(\alpha_m,\varepsilon_m),\varepsilon_m)\right]= \widehat{Q}(\alpha_0) \end{eqnarray*} from which (C1) follows. \medskip (C2) Using (C4) of Lemma~\ref{implicit}, we note that it is enough to prove the existence of at least one zero in $V$ of the function $\alpha\mapsto \pi \widetilde{F}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)$ for each $\varepsilon\in(0,\varepsilon_1]$ where $\varepsilon_1$ with $0<\varepsilon_1\leq \varepsilon_0$ has to be found.
This will follow from the claim that the Brouwer topological degree $d\left(\frac{1}{\varepsilon} \pi \widetilde{F}(\cdot,\beta(\cdot,\varepsilon),\varepsilon),V\right)\neq 0$ for $\varepsilon\in (0,\varepsilon_1]$. Now we prove this claim. Since $\beta(\alpha,\varepsilon)=\beta_0(\alpha)+\varepsilon \mu(\alpha,\varepsilon)$ with $\mu:\overline V\times (0,\varepsilon_0]\to \mathbb{R}^{n-k}$ a bounded function, $\pi\widetilde{P}(\alpha,\beta_0(\alpha))=0$ and $D_\beta (\pi\widetilde{P})(\alpha,\beta_0(\alpha))=0$, we have \[ \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\pi\widetilde{P}(\alpha,\beta(\alpha,\varepsilon))=0.\] Therefore $$ \lim_{\varepsilon\to 0}\frac{1}{\varepsilon} \pi \widetilde{F}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)=\lim_{\varepsilon\to 0}\left[\frac{1}{\varepsilon} \pi\widetilde{P}(\alpha,\beta(\alpha,\varepsilon))+\pi\widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)\right]= \widehat{Q}(\alpha). $$ Using the continuity of the Brouwer degree with respect to the parameter $\varepsilon$, and taking into account that, by hypothesis, $d(\widehat{Q},V)\not=0$, there exists $\varepsilon_1>0$ sufficiently small such that, for each $\varepsilon\in (0,\varepsilon_1]$, $$ d\left(\frac{1}{\varepsilon}\pi\widetilde{F}(\cdot,\beta(\cdot,\varepsilon),\varepsilon),V\right)=d(\widehat{Q},V)\not=0. $$ Hence the claim is proved. Then for each $\varepsilon \in (0,\varepsilon_1]$ there exists $\alpha_\varepsilon \in V$ such that $\pi\widetilde{F}(\alpha_\varepsilon,\beta(\alpha_\varepsilon,\varepsilon),\varepsilon)=0$ and, moreover, using also (C4) of Lemma~\ref{implicit}, we have that $\widetilde{F}(\alpha_\varepsilon,\beta(\alpha_\varepsilon,\varepsilon),\varepsilon)=0$. Denoting $z_\varepsilon=S\left(\begin{array}{c} \alpha_\varepsilon \\ \beta(\alpha_\varepsilon,\varepsilon) \end{array} \right)$ we have that $F(z_\varepsilon,\varepsilon)=0$.
From the definitions of $z_\varepsilon$ and $\mathcal{Z}$, and the continuity of $\beta$, it follows easily that $\rho(z_\varepsilon,\mathcal{Z})\to 0$ as $\varepsilon\to 0$. \medskip (C3) Since $\alpha_0\in V$ is an isolated zero of $\widehat Q$, applying the topological degree argument of (C2) for $V$ shrinking to $\{\alpha_0\}$, we obtain the existence of $\alpha_{\varepsilon}$ such that $\alpha_{\varepsilon}\to \alpha_0$ as $\varepsilon\to 0$, and $\pi\widetilde{F}(\alpha_\varepsilon,\beta(\alpha_\varepsilon,\varepsilon),\varepsilon)=0$ for any $\varepsilon\in (0,\varepsilon_1]$. Hence $z_\varepsilon=S\left(\begin{array}{c} \alpha_\varepsilon \\ \beta(\alpha_\varepsilon,\varepsilon) \end{array} \right)$ and $z_0=S\left(\begin{array}{c} \alpha_0 \\ \beta_0(\alpha_0) \end{array} \right)\in \mathcal{Z}$ are such that $F(z_\varepsilon,\varepsilon)=0$ and $z_\varepsilon \to z_0$ as $\varepsilon\to 0$. In order to prove that $z_\varepsilon$ is the unique zero of $F(\cdot,\varepsilon)$ in a neighborhood of $z_0$, we define $$r_1(\alpha,\varepsilon)=\frac{1} {\varepsilon}\pi\widetilde{P}(\alpha,\beta(\alpha,\varepsilon)), \quad r_2(\alpha,\varepsilon)=\pi\widetilde{Q}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)-\pi\widetilde{Q}(\alpha,\beta_0(\alpha),0),$$ for all $\alpha\in\overline V$ and $\varepsilon\in(0,\varepsilon_1]$, and we study the Lipschitz properties with respect to $\alpha$ of these two functions. Since $\widetilde{P}(\alpha,\beta_0(\alpha))=0$ for all $\alpha\in \overline V$, by taking the derivative with respect to $\alpha$ we obtain \begin{equation}\label{tmp} D_\alpha \left(\pi \widetilde{P}\right)(\alpha,\beta_0(\alpha))+D_\beta \left(\pi \widetilde{P}\right) (\alpha,\beta_0(\alpha))D \beta_0(\alpha)=0\quad{\rm for\ all\ }\alpha\in \overline V. \end{equation} Assumption (A2) assures that $D_\beta \left(\pi \widetilde{P}\right)(\alpha,\beta_0(\alpha))=0$ for all $\alpha\in \overline V$.
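Together with \eqref{tmp}, this last identity gives, as an immediate consequence (recorded here for completeness):

```latex
D_\alpha \left(\pi \widetilde{P}\right)(\alpha,\beta_0(\alpha))=0
\quad\mbox{for all }\alpha\in \overline V.
```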
Taking the derivative with respect to $\alpha$, we have \begin{equation} \label{secondorder} D_{\beta\alpha}\left(\pi {\widetilde{P}}\right)(\alpha,\beta_0(\alpha))+D_{\beta\beta}\left(\pi {\widetilde{P}}\right)(\alpha,\beta_0(\alpha))D\beta_0(\alpha)=0\quad{\rm for\ any\ }\alpha\in \overline V. \end{equation} For any $\alpha\in\overline V$ and $\xi\in \mathbb{R}^{n-k}$ we define $\Phi(\alpha,\xi)=\pi \widetilde{P}(\alpha,\beta_0(\alpha)+\xi)$. Taking into account the relations (\ref{tmp}) and (\ref{secondorder}) and that, by hypothesis (A3) we have that\\ $D_{\beta\alpha}\left(\pi {\widetilde{P}}\right)(\alpha,\beta_0(\alpha))=D_{\alpha\beta}\left(\pi {\widetilde{P}}\right)(\alpha,\beta_0(\alpha))$, we obtain \begin{eqnarray*} D_{\alpha}\Phi (\alpha,\xi)&=& D_\alpha \left(\pi \widetilde{P}\right)(\alpha,\beta_0(\alpha)+\xi)+D_\beta \left(\pi \widetilde{P}\right) (\alpha,\beta_0(\alpha)+\xi)D \beta_0(\alpha)-\\ &~& D_\alpha \left(\pi \widetilde{P}\right)(\alpha,\beta_0(\alpha))-D_\beta \left(\pi \widetilde{P}\right) (\alpha,\beta_0(\alpha))D \beta_0(\alpha)-\\ &~&D_{\alpha\beta}\left(\pi {\widetilde{P}}\right)(\alpha,\beta_0(\alpha))\xi-D_{\beta\beta}\left(\pi {\widetilde{P}}\right)(\alpha,\beta_0(\alpha))D\beta_0(\alpha)\xi. \end{eqnarray*} From this last equality, using that $D_\alpha \left(\pi \widetilde{P}\right)$ and, respectively, $D_\beta \left(\pi \widetilde{P}\right)$ are differentiable at $(\alpha,\beta_0(\alpha))$, we deduce that $D_{\alpha}\Phi (\alpha,\xi)=o(\xi)$ for all $\alpha\in\overline V$ and $\xi\in \mathbb{R}^{n-k}$ with $||\xi||$ sufficiently small. Hence the mean value inequality assures that $$||\Phi(\alpha_1,\xi)-\Phi(\alpha_2,\xi)||\leq o(\xi)||\alpha_1-\alpha_2|| \quad \mbox{ for all } \alpha_1,\alpha_2\in\overline V.$$ In the last inequality we replace $\xi = \varepsilon \mu (\alpha_1,\varepsilon)$ (where $\mu$ is given by Lemma \ref{implicit}). 
We use that $D_\xi \Phi(\alpha,0)=D_\beta \left(\pi\widetilde{P}\right)(\alpha,\beta_0(\alpha))=0$ for any $\alpha\in\overline V,$ and that $\mu$ is Lipschitz with respect to $\alpha \in\overline V$. Then, provided that $\varepsilon_1$ is small enough, we obtain for all $\varepsilon\in (0,\varepsilon_1]$ $$||\Phi(\alpha_1,\varepsilon\mu(\alpha_1,\varepsilon))-\Phi(\alpha_2,\varepsilon \mu(\alpha_2,\varepsilon))||\leq o(\varepsilon)||\alpha_1-\alpha_2|| \quad \mbox{ for all } \alpha_1,\alpha_2\in V.$$ Now coming back to our notations and recalling that $\beta(\alpha,\varepsilon)=\beta_0(\alpha)+\varepsilon \mu (\alpha,\varepsilon)$, we obtain for $\varepsilon\in (0,\varepsilon_1]$ \begin{equation} \label{lr1} ||r_1(\alpha_1,\varepsilon)-r_1(\alpha_2,\varepsilon)||\leq \frac{o(\varepsilon)}{\varepsilon}||\alpha_1-\alpha_2|| \quad \mbox{ for all } \alpha_1,\alpha_2\in\overline V.\end{equation} We will prove that a similar relation holds for the function $r_2$. First we note that hypothesis (A5) and the fact that $Q$ is locally uniformly Lipschitz with respect to the first variable imply that \begin{equation}\label{sd3} \begin{array}{c} \left\|\pi{Q}\left(z_1+ \zeta_1,\varepsilon\right)-\pi{Q}\left(z_1,0\right)- \left.\pi{Q}\left(z_2+ \zeta_2,\varepsilon\right)+\pi{Q}\left(z_2,0\right)\right.\right\|\le\\ \frac{o(\delta)}{\delta} \|z_1-z_2\|+L_Q\|\zeta_1-\zeta_2\|, \end{array} \end{equation} for all $z_1,z_2\in B_{\delta}(z_0)\cap \mathcal{Z}$, $\varepsilon\in [0,\delta]$ and $\zeta_1,\zeta_2\in B_{\delta}(0)$.
We diminish $\delta_1>0$ given in (A4) and $\varepsilon_1>0$ in such a way that $\delta_1\leq \delta$, $\varepsilon_1\leq \delta$, $S\left(\begin{array}{c}\alpha\\ \beta_0(\alpha)\end{array}\right)\in B_{\delta}(z_0)$ and $S\left(\begin{array}{c}0_{k\times 1}\\ \varepsilon\mu(\alpha,\varepsilon)\end{array}\right)\in B_{\delta}(0)$ for any $\alpha\in B_{\delta_1}(\alpha_0),$ $\varepsilon\in(0,\varepsilon_1].$ Replacing $z_i=S\left(\begin{array}{c} \alpha_i\\ \beta_0(\alpha_i) \end{array}\right)$, $\zeta_i= S\left(\begin{array}{c} 0_{k\times1}\\ \varepsilon \mu(\alpha_i,\varepsilon) \end{array}\right)$, $i\in \overline{1,2}$ in \eqref{sd3} we obtain that \begin{eqnarray*} &&||r_2(\alpha_1,\varepsilon)-r_2(\alpha_2,\varepsilon)||\leq \\ && \leq \frac{o(\delta)}{\delta} \left(\, ||\alpha_1-\alpha_2|| + ||\beta_0(\alpha_1)-\beta_0(\alpha_2)|| \,\right) + \varepsilon L_Q ||\mu(\alpha_1,\varepsilon)-\mu (\alpha_2,\varepsilon)||\ , \end{eqnarray*} for all $\alpha_1,\alpha_2\in B_{\delta_1}(\alpha_0)$ and $\varepsilon\in(0,\varepsilon_1]$. By hypothesis, $\beta_0$ is $C^1$ in $\overline V$ and, by Lemma~\ref{implicit} (conclusion (C6)), $(\alpha,\varepsilon)\mapsto \mu(\alpha,\varepsilon)$ is Lipschitz with respect to $\alpha\in\overline V$ (with a Lipschitz constant that does not depend on $\varepsilon$). Hence for $\delta_1,\varepsilon_1\leq \delta$ small enough, \begin{equation} \label{lr2} ||r_2(\alpha_1,\varepsilon)-r_2(\alpha_2,\varepsilon)||\leq \frac{o(\delta)}{\delta}||\alpha_1-\alpha_2||, \quad \alpha_1,\alpha_2\in B_{\delta_1}(\alpha_0), \, \varepsilon\in (0,\varepsilon_1].\end{equation} Therefore we have proved that $r_1$ and $r_2$ satisfy the Lipschitz conditions \eqref{lr1} and, respectively, \eqref{lr2}. In what follows we define a constant $\delta_2>0$ and then prove that it satisfies the requirements of (C3).
We diminish $\delta_1>0$ in such a way that there exists $\delta_3>0$ such that $\delta_3\leq \delta_0$ and $B_{\delta_3}(\beta_0(\alpha_0))\subset \bigcap\limits_{\alpha\in B_{\delta_1}(\alpha_0)}B_{\delta_0}(\beta_0(\alpha))$. We choose $\delta_2>0$ so small that $S^{-1}(B_{\delta_2}(z_0))\subset B_{\delta_1}(\alpha_0)\times B_{\delta_3}(\beta_0(\alpha_0))$. We diminish $\varepsilon_1>0$, if necessary, such that $z_\varepsilon \in B_{\delta_2}(z_0)$ for any $\varepsilon\in(0,\varepsilon_1].$ For any $\varepsilon\in(0,\varepsilon_1]$ we claim that $z_\varepsilon$ is the only zero of $F(\cdot,\varepsilon)$ in $B_{\delta_2}(z_0).$ Assume by contradiction that there exists $\varepsilon_2\in(0,\varepsilon_1]$ such that $z_{\varepsilon_2}$ and $z_2$ are two different zeros of $F(\cdot,\varepsilon_2)$ in $B_{\delta_2}(z_0).$ Denoting $\alpha_2=\pi S^{-1}z_2$ and $\beta_2=\pi^{\bot} S^{-1}z_2$ we have that $\beta_2\in B_{\delta_0}(\beta_0(\alpha_2))$. By (C5) of Lemma~\ref{implicit}, since $\beta_2$ is a zero of $\pi ^{\bot} F \left( S \left( \begin{array}{c} \alpha_2 \\ \cdot \end{array}\right),\varepsilon_2 \right)$ (using the notations introduced before, $\pi ^{\bot}\widetilde{F}(\alpha_2,\cdot,\varepsilon_2)$), we must have that $\beta_2=\beta(\alpha_2,\varepsilon_2)$. Therefore $\alpha_{\varepsilon_2}$ and $\alpha_2$ are two different zeros of $\pi \widetilde{F}(\cdot,\beta(\cdot,\varepsilon_2),\varepsilon_2)$ in $B_{\delta_1}(\alpha_0).$ We have the identity $$ \frac{1}{\varepsilon}\pi\widetilde{F}(\alpha,\beta(\alpha,\varepsilon),\varepsilon)=\widehat{Q}(\alpha)+r_1(\alpha,\varepsilon)+r_2(\alpha,\varepsilon) \mbox{ for all } \alpha\in \overline V ,\,\,\, \varepsilon\in (0,\varepsilon_1].$$ We denote $r(\alpha,\varepsilon)=r_1(\alpha,\varepsilon)+r_2(\alpha,\varepsilon)$. 
Then assumption (A4), properties (\ref{lr1}) and (\ref{lr2}) give $$0=||\widehat{Q}(\alpha_{\varepsilon_2})-\widehat{Q}(\alpha_2)+r(\alpha_{\varepsilon_2},\varepsilon_2)-r(\alpha_2,\varepsilon_2)||\geq (L_{\widehat{Q}}-o(\varepsilon_2)/{\varepsilon_2}-o(\delta)/\delta)||\alpha_{\varepsilon_2}-\alpha_2||.$$ Since $\varepsilon_1>0$ and $\delta>0$ are sufficiently small and $0<\varepsilon_2\leq \varepsilon_1$, the constant $(L_{\widehat{Q}}-o(\varepsilon_2)/{\varepsilon_2}-o(\delta)/\delta)$ must be positive and, consequently, $\alpha_{\varepsilon_2}$ and $\alpha_2$ must coincide. Hence also $z_{\varepsilon_2}$ and $z_2$ must coincide, and we conclude the proof. \qed \section{A generalization of Malkin's result on the existence of $T$-periodic solutions for $T$-periodically perturbed differential equations when the perturbation is nonsmooth}\label{sec3} In this section we consider the problem of existence and uniqueness of $T$-periodic solutions for the $T$-periodic differential system \begin{equation} \label{ps} \dot{x}= f(t,x)+\varepsilon g(t,x,\varepsilon)\, , \end{equation} where $f\in C^2(\mathbb{R}\times\mathbb{R}^n, \mathbb{R}^n)$ and $g\in C^0(\mathbb{R}\times\mathbb{R}^n\times[0,1],\mathbb{R}^n)$ are $T$-periodic in the first variable and $g$ is locally uniformly Lipschitz with respect to its second variable. For $z\in {\mathbb{R}}^n$ we denote by $x(\cdot,z,\varepsilon)$ the solution of \eqref{ps} such that $x(0,z,\varepsilon)=z$. We consider the situation when the unperturbed system \begin{equation}\label{np} \dot{x}= f(t,x)\, , \end{equation} has a non-degenerate (in a sense that will be made precise below) family of $T$-periodic solutions. The main tool for the proof of our main result is Theorem \ref{thm1}. We will show that the assumptions of Theorem~\ref{thm1} can be expressed in terms of the function $g$ and of the solutions of the linear differential system \begin{equation} \label{ls} \dot{y}=D_xf(t,x(t,z,0))y.
\end{equation} Indeed we have the following theorem, which generalizes a related result by Roseau and improves it by establishing the uniqueness of the periodic solution. The above-mentioned result by Roseau is proved in a shorter way in \cite{adriana}. Here we will use the same main ideas from \cite{adriana} to prove the next result. \begin{thm}\label{thm2} Assume that $f\in C^2(\mathbb{R}\times\mathbb{R}^n, \mathbb{R}^n)$ and $g\in C^0(\mathbb{R}\times\mathbb{R}^n\times[0,1],\mathbb{R}^n)$ are $T$-periodic in the first variable, and that $g$ is locally uniformly Lipschitz with respect to the second variable. Suppose that the unperturbed system \eqref{np} satisfies the following conditions. \begin{itemize} \item[(A6)] There exist an invertible $n\times n$ real matrix $S$, an open ball $V\subset {\mathbb{R}^k}$ with $k\leq n$, and a $C^1$ function $\beta_0:\overline V\to {\mathbb{R}^{n-k}}$ such that any point of the set $\mathcal{Z}=\bigcup\limits_{\alpha\in \overline V}\left\{S\left(\begin{array}{c} \alpha\\ \beta_0(\alpha)\end{array}\right)\right\}$ is the initial condition of a $T$-periodic solution of \eqref{np}. \item[(A7)] For each $z\in \mathcal{Z}$ there exists a fundamental matrix solution $Y(\cdot,z)$ of (\ref{ls}) such that $Y(0,z)$ is $C^1$ with respect to $z$ and such that the matrix $\left(Y^{-1}(0,z)-Y^{-1}(T,z)\right)S$ has in the upper right corner the null $k\times (n-k)$ matrix, while in the lower right corner it has the $(n-k)\times (n-k)$ matrix $\Delta(z)$ with $\det(\Delta(z))\neq 0$. \end{itemize} \noindent We define the function $G:\overline V \to {\mathbb{R}^k}$ by \begin{equation} \nonumber G(\alpha)=\pi \int _0^T Y^{-1}\left(t,S\left(\begin{array}{c} \alpha\\ \beta_0(\alpha)\end{array}\right)\right)g\left(t,x\left(t,S\left(\begin{array}{c} \alpha\\ \beta_0(\alpha)\end{array}\right) ,0 \right),0\right) dt. \end{equation} Then the following statements hold.
\begin{itemize} \item[(C7)] For any sequences $(\varphi_m)_{m\geq 1}$ from $C^0(\mathbb{R}, \mathbb{R}^n)$ and $(\varepsilon_m)_{m\geq 1}$ from $[0,1]$ such that $\varphi_m(0)\to z_0\in \mathcal{Z}$, $\varepsilon_m\to 0$ as $m\to \infty$ and $\varphi_m$ is a $T$-periodic solution of \eqref{ps} with $\varepsilon=\varepsilon_m$ for any $m\geq 1$, we have that $G(\pi S^{-1} z_0)=0$. \item[(C8)] If $G(\alpha)\neq 0$ for any $\alpha\in \partial V$ and $d(G,V)\neq 0$, then there exists $\varepsilon_1>0$ sufficiently small such that for each $\varepsilon\in(0,\varepsilon_1]$ there is at least one $T$-periodic solution $\varphi_\varepsilon$ of system \eqref{ps} such that $\rho(\varphi_\varepsilon(0),\mathcal{Z})\to 0$ as $\varepsilon\to 0$. \end{itemize} In addition we assume that there exists $\alpha_0\in V$ such that $G(\alpha_0)=0$, $G(\alpha)\neq 0$ for all $\alpha\in \overline V \setminus \{\alpha_0\}$ and $d(G,V)\neq 0$, and we denote $z_0=S\left( \begin{array}{c}\alpha_0\\\beta_0(\alpha_0)\end{array}\right)$. Moreover we also assume: \begin{itemize} \item[(A8)] There exist $\delta_1>0$ and $L_G>0$ such that \[ ||G(\alpha_1)-G(\alpha_2)||\geq L_G ||\alpha_1-\alpha_2||, \, \mbox{ for all } \alpha_1,\alpha_2\in B_{\delta_1}(\alpha_0),\] \item[(A9)] For $\delta>0$ sufficiently small there exists $M_\delta\subset [0,T]$ Lebesgue measurable with $\mbox{mes}(M_\delta)=o(\delta)/\delta$ such that \begin{equation*} ||g(t,z_1+\zeta,\varepsilon)-g(t,z_1,0)-g(t,z_2+\zeta,\varepsilon)+g(t,z_2,0)||\leq o(\delta)/\delta ||z_1-z_2||\ , \end{equation*} for all $t\in [0,T]\setminus M_\delta$ and for all $z_1,z_2\in B_{\delta}(z_0)$, $\varepsilon\in[0,\delta]$ and $\zeta\in B_\delta(0)$. \end{itemize} Then the following conclusion holds. \begin{itemize} \item[(C9)] There exists $\delta_2>0$ such that for any $\varepsilon \in (0,\varepsilon_1]$, $\varphi_{\varepsilon}$ is the only $T$-periodic solution of \eqref{ps} with initial condition in $B_{\delta_2}(z_0)$.
Moreover $\varphi_{\varepsilon}(0)\to z_0$ as $\varepsilon\to 0$. \end{itemize} \end{thm} \noindent To prove the theorem we need three preliminary lemmas that are interesting in themselves. For example, in Lemma \ref{lem1} we prove the existence of the derivative (at $\varepsilon=0$) with respect to a parameter $\varepsilon$ of the solution of an initial value problem, without assuming that the system is $C^1$. We also study the properties of this derivative. \begin{lem} \label{lemlip} Let $f\in C^2(\mathbb{R}^n,\mathbb{R}^n)$ and $K_1,K_2$ be compact subsets of $\mathbb{R}^n.$ Then the following inequality holds for all $x^0_1,x^0_2\in K_1$, $y_1,y_2\in K_2$ and $\varepsilon\in[0,1]$. \begin{equation} \label{pty} ||f(x^0_1+\varepsilon y_1)-f(x^0_1)-f(x^0_2+\varepsilon y_2)+f(x^0_2)||\leq O(\varepsilon)||x^0_1-x^0_2||+O(\varepsilon) ||y_1-y_2||\, . \end{equation} In addition, for $m>0$ sufficiently small, $x^0_1,x^0_2\in K_1$, $y^0_1,y^0_2\in K_2$ and $ u_1,\, u_2,\, v_1,\, v_2\in B_m(0)\subset \mathbb{R}^n$ we have \begin{equation} \label{ptu} \begin{array}{ll} \,||f\left( x^0_1+v_1 +\varepsilon y^0_1+\varepsilon u_1 \right) - f\left( x^0_1+v_1 \right)- \varepsilon f'(x^0_1)y^0_1-\\ \medskip f\left( x^0_2+ v_2+\varepsilon y^0_2+\varepsilon u_2 \right)+f\left( x^0_2+ v_2 \right)+\varepsilon f'(x^0_2)y^0_2|| \leq\\ \medskip \left[ o(\varepsilon) + \varepsilon O(m) \right] ||x^0_1-x^0_2||+O(\varepsilon) || v_1-v_2||+\\ \medskip \left[ o(\varepsilon)+\varepsilon O(m)\right] || y^0_1-y^0_2||+O(\varepsilon)||u_1-u_2||\, . \end{array} \end{equation} \end{lem} \noindent {\bf Proof.} We define $\Phi(x^0,y,\varepsilon)=f(x^0+\varepsilon y)-f(x^0)$ for all $x^0\in \overline{\mbox{co}}K_1$, $y\in \overline{\mbox{co}}K_2$ and $\varepsilon\in[0,1]$. Relation (\ref{pty}) follows from the mean value inequality applied to $\Phi_i$ with $i\in\overline{1,n}$ and the following estimates.
\begin{eqnarray*} \frac{\partial \Phi_i}{\partial x^0}(x^0,y,\varepsilon)&=& (f_i)^{\prime}(x^0+\varepsilon y)- (f_i)^{\prime}(x^0)=O(\varepsilon)\quad{\rm and}\\ \frac{\partial \Phi_i}{\partial y}(x^0,y,\varepsilon)&=&\varepsilon (f_i)^{\prime}(x^0+\varepsilon y)=O(\varepsilon). \end{eqnarray*} In order to prove relation (\ref{ptu}) we define \[ \Phi(x^0,v,y^0,u,\varepsilon)=f( x^0+v+\varepsilon y^0+\varepsilon u ) - f( x^0+v)- \varepsilon f'(x^0)y^0\, ,\] for all $x^0\in \overline{\mbox{co}}K_1$, $y^0\in \overline{\mbox{co}}K_2$, $u,v\in B_m(0)$ and $\varepsilon\in[0,1]$. We apply again the mean value inequality to the components $\Phi_i,$ $i\in\overline{1,n},$ using the following estimates. \begin{eqnarray*} \frac{\partial \Phi_i}{\partial x^0}(x^0,v,y^0,u,\varepsilon)&&= (f_i)^{\prime}( x^0+v+\varepsilon y^0+\varepsilon u )- (f_i)^{\prime}(x^0+v)- \varepsilon (f_i)''(x^0)y^0\\ &&=o(\varepsilon)+\varepsilon (f_i)''(x^0+v)u\\&&+ \ \varepsilon \left[ (f_i)''(x^0+v)-(f_i)''(x^0)\right] y^0\\&&=o(\varepsilon)+\varepsilon O(m)+\varepsilon o(m)/m=o(\varepsilon)+\varepsilon O(m),\\ \frac{\partial \Phi_i}{\partial v}(x^0,v,y^0,u,\varepsilon)&&= (f_i)^{\prime}( x^0+v+\varepsilon y^0+\varepsilon u )- (f_i)^{\prime}(x^0+v)=O(\varepsilon),\\ \frac{\partial \Phi_i}{\partial y^0}(x^0,v,y^0,u,\varepsilon)&&=\varepsilon (f_i)^{\prime}( x^0+v+\varepsilon y^0+\varepsilon u )- \varepsilon (f_i)^{\prime}(x^0)\\&&=\varepsilon (f_i)^{\prime}( x^0+v+\varepsilon y^0+\varepsilon u )- \varepsilon (f_i)^{\prime}(x^0+v)\\ & & +\ \varepsilon (f_i)^{\prime}(x^0+v)-\varepsilon (f_i)^{\prime}(x^0)\\&&=o(\varepsilon)+\varepsilon O(m),\\ \frac{\partial \Phi_i}{\partial u}(x^0,v,y^0,u,\varepsilon)&&=\varepsilon (f_i)^{\prime}( x^0+v+\varepsilon y^0+\varepsilon u )=O(\varepsilon)\, .
\end{eqnarray*} $\Box$ \bigskip \begin{lem}\label{lem1} We consider $f\in C^2(\mathbb{R}\times \mathbb{R}^n,\mathbb{R}^n)$ and $g\in C^0(\mathbb{R}\times \mathbb{R}^n\times[0,1],\mathbb{R}^n)$ a locally uniformly Lipschitz function with respect to the second variable. For $z\in \mathbb{R}^n$ and $\varepsilon\in [0,1]$, we denote by $\,x(\cdot,z,\varepsilon)\,$ the unique solution of \[ \dot{x}=f(t,x)+\varepsilon g(t,x,\varepsilon)\, , \quad x(0)=z,\] and by $\,y(t,z,\varepsilon)= \left[x(t,z,\varepsilon)-x(t,z,0)\right]/\varepsilon\,$ (here $\varepsilon\neq 0$). We assume that for a given $T>0$ there exist a compact set $K\subset \mathbb{R}^n$ with nonempty interior and ${\delta}>0$ such that $x(t,z,\varepsilon)$ is well-defined for all $t\in [0,T]$, $z\in K$ and $\varepsilon\in[0,{\delta}]$. Then the following statements hold. \begin{itemize} \item[(C10)] The limit $y(t,z,0)=\lim \limits_{\varepsilon \to 0}y(t,z,\varepsilon)$ exists and is the solution of the initial value problem \begin{equation*} \dot{y}(t)=D_xf(t,x(t,z,0))y+g(t,x(t,z,0),0)\, ,\quad y(0)=0. \end{equation*} The above limit holds uniformly with respect to $(t,z)\in[0,T]\times K$. \item[(C11)] The functions $x,y:[0,T]\times K \times [0,\delta]\to \mathbb{R}^n$ are continuous and uniformly Lipschitz with respect to their second variable. \item[(C12)] In addition, if there exists $z_0\in \mbox{int}(K)$ such that assumption {\em (A9)} of Theorem \ref{thm2} holds with the same small $\delta>0$ as above, then \begin{equation*} ||y(t,z_1+\zeta,\varepsilon)-y(t,z_1,0)-y(t,z_2+\zeta,\varepsilon)+y(t,z_2,0)||\leq o(\delta)/\delta ||z_1-z_2||\, , \end{equation*} for all $t\in [0,T]$, $z_1,z_2\in B_\delta (z_0)$, $\varepsilon \in [0,\delta]$ and $\zeta \in B_\delta (0)$.
\end{itemize} \end{lem} \noindent {\bf Proof.} (C10) We define $\displaystyle{\tilde{f}(t,z,\varepsilon)=\int_0^1 D_xf\bigl(t,x(t,z,0)+s\left[x(t,z,\varepsilon)-x(t,z,0)\right]\bigr)\,ds}$, so that $\tilde{f}(t,z,\varepsilon)\left[x(t,z,\varepsilon)-x(t,z,0)\right]=f(t,x(t,z,\varepsilon))-f(t,x(t,z,0))$ and $\tilde{f}(t,z,0)=D_xf(t,x(t,z,0))$. In this way we obtain the continuous matrix-valued function $\tilde f:[0,T]\times K \times [0,\delta]\to \mathbb{R}^{n\times n}$. For $\varepsilon\neq 0$, using the definitions of $x(t,z,\varepsilon)$ and $y(t,z,\varepsilon)$ we deduce immediately that $y(0,z,\varepsilon)=0$ and also that \begin{equation} \label{eq:y} \dot{y}(t,z,\varepsilon)= \tilde{f}(t,z,\varepsilon)\, y(t,z,\varepsilon) + g(t,x(t,z,\varepsilon),\varepsilon).\end{equation} Passing to the limit as $\varepsilon \to 0$, we obtain that $y(\cdot,z,0)$ is the solution of the given initial value problem. Hence \eqref{eq:y} holds also for $\varepsilon=0$. Since the right hand side of \eqref{eq:y} is given by a continuous function, we have that the limit $y(t,z,0)=\lim \limits_{\varepsilon \to 0}y(t,z,\varepsilon)$ holds uniformly with respect to $(t,z)\in[0,T]\times K$. \medskip (C11) The facts that the functions $x,y:[0,T]\times K \times [0,\delta]\to \mathbb{R}^n$ are continuous, and that $x$ is Lipschitz with respect to its second variable, can be obtained as a corollary of the general theorem on the dependence of the solutions of an ordinary differential equation on the parameters (see \cite[Lemma~8.2]{ama}). It remains to prove that $y:[0,T]\times K \times [0,\delta]\to \mathbb{R}^n$ is uniformly Lipschitz with respect to its second variable. There exist compact subsets $K_1$ and $K_2$ of $\mathbb{R}^n$ such that $x(t,z,\varepsilon)\in K_1$ and $y(t,z,\varepsilon)\in K_2$ for all $(t,z,\varepsilon)\in [0,T]\times K \times [0,\delta]$.
Moreover the representation $x(s,z,\varepsilon)=x(s,z,0)+\varepsilon y(s,z,\varepsilon)$ allows us to use Lemma \ref{lemlip}, relation (\ref{pty}), with $x^0_1=x(s,z_1,0)$, $x^0_2=x(s,z_2,0),$ $y_1=y(s,z_1,\varepsilon),$ $y_2=y(s,z_2,\varepsilon)$ in order to obtain \begin{eqnarray*} & \|f(t,x(t,z_1,\varepsilon))-f(t,x(t,z_1,0))-f(t,x(t,z_2,\varepsilon))+ f(t,x(t,z_2,0))\|\le \\ & O(\varepsilon)\|x(t,z_1,0)-x(t,z_2,0)\|+O(\varepsilon)\|y(t,z_1,\varepsilon)-y(t,z_2,\varepsilon)\|, \end{eqnarray*} for all $t\in[0,T]$, $z_1,z_2\in K$ and $\varepsilon\in[0,\delta]$. This last inequality and the fact that $g$ is locally uniformly Lipschitz, used together with the representation \begin{equation*} y(t,z,\varepsilon)=\frac{1}{\varepsilon}\int\limits_0^t\left[f(s,x(s,z,\varepsilon))-f(s,x(s,z,0))\right]ds+\int\limits_0^t g(s,x(s,z,\varepsilon),\varepsilon)ds,\label{stst} \end{equation*} imply that \begin{eqnarray*} \left\|y(t,z_1,\varepsilon)-y(t,z_2,\varepsilon)\right\|&\leq& O(\delta)/\delta \int\limits_0^t\left\|y(s,z_1,\varepsilon)-y(s,z_2,\varepsilon)\right\|ds \\ && +O(\delta)/\delta \int\limits_0^t\left\|x(s,z_1,\varepsilon)-x(s,z_2,\varepsilon)\right\|ds, \end{eqnarray*} for all $t\in [0,T]$, $z_1,z_2\in K$ and $\varepsilon \in [0,\delta].$ \noindent We use now the fact that the function $x(t,z,\varepsilon)$ is Lipschitz with respect to $z$ and we deduce \[ \left\|y(t,z_1,\varepsilon)-y(t,z_2,\varepsilon)\right\|\le O(\delta)/\delta\left\|z_1-z_2\right\|+O(\delta)/\delta\int\limits_0^t\left\|y(s,z_1,\varepsilon)-y(s,z_2,\varepsilon)\right\|ds. \] Applying the Gr\"{o}nwall lemma (see \cite[Lemma~6.2]{hale} or \cite[Ch.~2, Lemma \S~11]{dem}) we finally have for all $t\in [0,T]$, $z_1,z_2\in K$, $\varepsilon \in [0,\delta]$, $||y(t,z_1,\varepsilon)-y(t,z_2,\varepsilon)||\leq O(\delta)/\delta\left\|z_1-z_2\right\|$.
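The form of the Gr\"{o}nwall lemma invoked here is the standard one (stated for completeness; see the cited references):

```latex
u(t)\leq a+b\int_0^t u(s)\,ds \ \mbox{ for all } t\in[0,T]
\quad\Longrightarrow\quad
u(t)\leq a\,e^{bt} \ \mbox{ for all } t\in[0,T],
```

applied with $u(t)=\left\|y(t,z_1,\varepsilon)-y(t,z_2,\varepsilon)\right\|$, $a=O(\delta)/\delta\left\|z_1-z_2\right\|$ and $b=O(\delta)/\delta$.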
\medskip (C12) First we note that assumption (A9) of Theorem \ref{thm2} and the fact that $g$ is locally uniformly Lipschitz with respect to the second variable assure that the following relation holds \begin{equation} \label{gunic} \begin{array}{ll} ||g(t,z_1+\zeta_1,\varepsilon)-g(t,z_1,0)-g(t,z_2+\zeta_2,\varepsilon)+g(t,z_2,0)||\leq\\ \leq o(\delta)/\delta||z_1-z_2||+O(\delta)/\delta ||\zeta_1-\zeta_2||, \end{array} \end{equation} for all $t\in [0,T]\setminus M_{\delta}$, $z_1,z_2\in B_{\delta} (z_0)$, $\varepsilon \in [0,\delta]$ and $\zeta_1,\zeta_2 \in B_\delta (0)$. We introduce the notations $ v(t,z,\zeta)= x(t,z+\zeta,0)-x(t,z,0)$, $\widetilde{\zeta}(s,z,\zeta,\varepsilon)=v(s,z,\zeta)+\varepsilon y(s,z+\zeta,\varepsilon)$ and $u(t,z,\zeta,\varepsilon)=y(t,z+\zeta,\varepsilon)-y(t,z,0)$. Since the function $x(\cdot,\cdot,0)$ is $C^1$, the function $v$ is Lipschitz with respect to $z$ on $[0,T]\times K \times B_\delta(0)$ with some constant $o(\delta)/\delta$. We have \begin{eqnarray*} &&u(t,z,\zeta,\varepsilon)=y(t,z+\zeta,\varepsilon)-y(t,z,0)\\&&=\frac 1{\varepsilon} \int _0^t \left[ f(s,x(s,z+\zeta,\varepsilon))-f(s,x(s,z+\zeta,0)) - \varepsilon D_x f(s,x(s,z,0))y(s,z,0) \right] ds \\ &&+ \int _0^t \left[ g(s,x(s,z+\zeta,\varepsilon),\varepsilon)-g(s,x(s,z,0),0) \right]ds. \end{eqnarray*} Our aim is to estimate a Lipschitz constant with respect to $z$ of the function $u$ on $[0,T]\times B_{\delta}(z_0)\times B_\delta(0)\times [0,\delta]$.
We apply Lemma \ref{lemlip}, relation (\ref{gunic}) and the fact that $g$ is locally uniformly Lipschitz, together with the following decompositions and estimates, valid for $(s,z,\zeta,\varepsilon)\in [0,T]\times B_{\delta}(z_0)\times B_\delta(0)\times [0,\delta]$, \begin{eqnarray*} x(s,z+\zeta,\varepsilon)&=&x(s,z,0)+v(s,z,\zeta)+\varepsilon y(s,z,0) +\varepsilon u(s,z,\zeta,\varepsilon),\\ x(s,z+\zeta,0)&=&x(s,z,0)+ v(s,z,\zeta),\\ x(s,z+\zeta,\varepsilon)&=&x(s,z,0)+\widetilde{\zeta}(s,z,\zeta,\varepsilon),\end{eqnarray*} \[ ||v(t,z,\zeta)||\leq o(\delta)/\delta, \quad ||u(t,z,\zeta,\varepsilon)||\leq o(\delta)/\delta, \quad ||\widetilde{\zeta}(s,z,\zeta,\varepsilon)||\leq \delta O(\delta)/\delta,\] and we obtain \begin{eqnarray*} &&||u(t,z_1,\zeta,\varepsilon) - u(t,z_2,\zeta,\varepsilon)||\leq \\ && \frac 1 {\varepsilon} \int_0^t \left[ o(\varepsilon)+\varepsilon o(\delta)/\delta \right] ||x(s,z_1,0)-x(s,z_2,0)|| +O(\varepsilon) ||v(s,z_1,\zeta)-v(s,z_2,\zeta) ||+ \\ && \left[ o(\varepsilon)+\varepsilon o(\delta)/\delta \right] ||y(s,z_1,0)-y(s,z_2,0)||+O(\varepsilon)|| u(s,z_1,\zeta,\varepsilon) - u(s,z_2,\zeta,\varepsilon)|| ds+\\ && \int_{(0,t)\setminus M_{\delta}} o(\delta)/\delta||x(s,z_1,0)-x(s,z_2,0)|| +O(\delta)/\delta||\widetilde{\zeta}(s,z_1,\zeta,\varepsilon)-\widetilde{\zeta} (s,z_2,\zeta,\varepsilon)||ds+\\&& o(\delta)/\delta ||z_1-z_2|| . \end{eqnarray*} Now we use that some Lipschitz constants with respect to $z$ for the functions $x$ and $y$ on $[0,T]\times B_{\delta}(z_0)\times [0,\delta]$ are $O(\delta)/\delta$, while for the function $v$ on $[0,T]\times B_{\delta}(z_0)\times [0,\delta]$ and for $\widetilde{\zeta}$ on $[0,T]\times B_{\delta}(z_0)\times B_\delta(0)\times [0,\delta]$ they are $o(\delta)/\delta$, and we finally obtain \begin{eqnarray*} &&||u(t,z_1,\zeta,\varepsilon) - u(t,z_2,\zeta,\varepsilon)||\leq\\&& o(\delta)/\delta ||z_1-z_2|| + O(\delta)/\delta \int _0^t ||u(s,z_1,\zeta,\varepsilon) - u(s,z_2,\zeta,\varepsilon)||ds\,.
\end{eqnarray*} The conclusion follows after applying the Gr\"{o}nwall inequality. \qed \begin{lem}\label{lem2} We consider a $C^1$ function $Y$ acting from $\mathbb{R}^n$ into the space of $n\times n$ matrices, a $C^2$ function $\widetilde{P}: \mathbb{R}^n\to\mathbb{R}^n$ and $z_*\in \mathbb{R}^n$ such that $\widetilde{P}(z_*)=0$. We denote by $P:\mathbb{R}^n\to\mathbb{R}^n$ the $C^1$ function given by $P(z)=Y(z)\widetilde{P}(z)$ for all $z\in \mathbb{R}^n$. Then $DP(z_*)=Y(z_*)D\widetilde{P}(z_*)$, $P$ is twice differentiable at $z_*$ and, for each $i\in \overline{1,n}$, the Hessian matrix $HP_i(z_*)$ is symmetric. \end{lem} \noindent {\bf Proof.} We have $\displaystyle{DP(z)=\left( \frac{\partial Y}{\partial z_1}(z)\widetilde{P}(z),..., \frac{\partial Y}{\partial z_n}(z)\widetilde{P}(z)\right)+Y(z)D\widetilde{P}(z)}$ for all $z\in \mathbb{R}^n$. From this the formula for $DP(z_*)$ follows, since $\widetilde{P}(z_*)=0$.\\ In order to prove that $P$ is twice differentiable at $z_*$, taking into account the above expression of $DP$, it is enough to prove that for each $i\in \overline{1,n}$ the maps $\displaystyle{z\mapsto \frac{\partial Y}{\partial z_i}(z)\widetilde{P}(z)}$ and $z\mapsto Y(z)D\widetilde{P}(z)$ are differentiable at $z_*$. The last map is $C^1$, hence it remains to prove the differentiability only for the first one. We fix $i\in \overline{1,n}$.
From the relation $$ \begin{array}{l} \displaystyle{ \frac{\partial Y}{\partial z_i}(z_*+h)\widetilde{P}(z_*+h)-\frac{\partial Y}{\partial z_i}(z_*)\widetilde{P}(z_*)}=\\ \displaystyle{ \frac{\partial Y}{\partial z_i}(z_*+h)\left(\widetilde{P}(z_*+h)-\widetilde{P}(z_*)\right) = \frac{\partial Y}{\partial z_i}(z_*)D\widetilde{P}(z_*)h+o(h)}, \end{array} $$ we deduce that $\displaystyle{z\mapsto \frac{\partial Y}{\partial z_i}(z)\widetilde{P}(z)}$ is differentiable at $z_*$ and that\\ $\displaystyle{D\left(\frac{\partial Y}{\partial z_i}\cdot \widetilde{P}\right)(z_*)=\frac{\partial Y}{\partial z_i}(z_*)D\widetilde{P}(z_*).}$ In order to prove that the Hessian matrix $HP_i(z_*)$ is symmetric, for every $j,\, k\in \{1,...,n\}$ we must prove that \[ \frac{\partial^2 P_i}{\partial z_j \partial z_k}(z_*)=\frac{\partial^2 P_i}{\partial z_k \partial z_j}(z_*).\] We denote by $Y_i(z)$ the $i$--th row of the $n\times n$ matrix $Y(z)$. For all $z\in \mathbb{R}^n$ we have \[ \frac{\partial P_i}{\partial z_j}(z)=Y_{i}(z)\frac{\partial \widetilde{P}}{\partial z_j}(z)+\frac{\partial Y_{i}}{\partial z_j}(z)\widetilde{P}(z).\] Then \begin{equation*} \frac{\partial^2 P_i}{\partial z_j \partial z_k}(z_*)= \frac{\partial Y_{i}}{\partial z_k}(z_*)\frac{\partial \widetilde{P}}{\partial z_j}(z_*)+Y_{i}(z_*)\frac{\partial ^2\widetilde{P}}{\partial z_j\partial z_k}(z_*)+\frac{\partial Y_{i}}{\partial z_j}(z_*)\frac{\partial \widetilde{P}}{\partial z_k}(z_*)\,. \end{equation*} Since $\widetilde{P}$ is $C^2$, the symmetry of this last expression with respect to $(j,k)$ is easily checked. \qed\\ \noindent {\bf Proof of Theorem~\ref{thm2}.} We need to study the zeros of the function $z\mapsto x(T,z,\varepsilon)-z$, or equivalently of $$F(z,\varepsilon)=Y^{-1}(T,z)(x(T,z,\varepsilon)-z).$$ The function $F$ is well defined at least for any $z$ in some small neighborhood of $\mathcal{Z}$ and any $\varepsilon\geq 0$ sufficiently small. We will apply Theorem~\ref{thm1}.
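As an aside, the conclusions of Lemma \ref{lem2} can be sanity-checked symbolically on a concrete low-dimensional example. The sketch below uses SymPy; the choices of $Y$ and $\widetilde{P}$ are ours and purely illustrative (here both are smooth, so the Hessian symmetry is automatic; the content of the lemma is that it survives when $Y$ is merely $C^1$).

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
z_star = {z1: 1, z2: -1}

# An arbitrary matrix function Y and a C^2 field P~ vanishing at z_* = (1, -1);
# both are our own choices, made only for illustration.
Y = sp.Matrix([[sp.exp(z1), z2], [sp.sin(z2), 1 + z1**2]])
Pt = sp.Matrix([(z1 - 1)*(z2 + 2), (z2 + 1)*sp.cos(z1)])  # P~(z_*) = 0
P = Y * Pt

DP = P.jacobian([z1, z2])
DPt = Pt.jacobian([z1, z2])

# DP(z_*) = Y(z_*) D P~(z_*): the extra terms vanish because P~(z_*) = 0
residual = (DP - Y*DPt).subs(z_star)
assert max(abs(float(v)) for v in residual) < 1e-12

# the Hessian of each component P_i is symmetric at z_*
for i in range(2):
    H = sp.hessian(P[i], (z1, z2)).subs(z_star)
    assert abs(float(H[0, 1] - H[1, 0])) < 1e-12
```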
We denote \[P(z)=Y^{-1}(T,z)\left( x(T,z,0)-z \right),\quad Q(z,\varepsilon)=Y^{-1}(T,z)y(T,z,\varepsilon), \] where $y(t,z,\varepsilon)=[x(t,z,\varepsilon)-x(t,z,0)]/\varepsilon$, as in Lemma \ref{lem1}. Hence $F(z,\varepsilon)=P(z)+\varepsilon Q(z,\varepsilon)$. The fact that $f$ is $C^2$ assures that the function $z\mapsto x(T,z,0)$ is also $C^2$ (see \cite[Ch.~4, \S~24]{pont}). Since (see \cite[Ch.~III, Lemma \S~12]{dem}) $\left(Y^{-1}(\cdot,z)\right)^*$ is a fundamental matrix solution of the system $$ \dot u=-(D_x f(t,x(t,z,0),0))^*u\, , $$ and $f$ is $C^2$, we have that the matrix function $(t,z)\mapsto \left(Y^{-1}(t,z)\right)^*$ is $C^1$. Therefore the matrix function $(t,z)\mapsto Y^{-1}(t,z)$, and consequently also the function $P$, are $C^1$. By Lemma~\ref{lem1} we now conclude that $Q$ is continuous, locally uniformly Lipschitz with respect to $z$, and \begin{equation} \label{eq:qz0} Q(z,0)=\int _0^T Y^{-1}(s,z)g(s,x(s,z,0),0) ds.\end{equation} Since, by our hypothesis (A6), $x(\cdot,z,0)$ is $T$-periodic for all $z\in\mathcal{Z}$, we have that $x(T,z,0)-z=0$ for all $z\in \mathcal{Z}$, and consequently $P\left( z\right)=0$ for all $z\in \mathcal{Z}$. This means that hypothesis (A1) of Theorem~\ref{thm1} holds. Moreover, applying Lemma \ref{lem2} we have that \begin{equation*} DP(z)=Y^{-1}(T,z)\left(\frac{\partial x}{\partial z}(T,z,0)-I_{n\times n}\right)\quad{\rm for \ any\ }z\in\mathcal{Z}, \end{equation*} and $P$ satisfies hypothesis (A3) of Theorem \ref{thm1}. But $\displaystyle{\left({\partial x}/{\partial z}\right)(\cdot,z,0)}$ is the normalized fundamental matrix of the linearized system (\ref{ls}) (see \cite[Theorem 2.1]{kraope}). Therefore $\displaystyle{\left({\partial x}/{\partial z}\right)(t,z,0)}=Y(t,z)Y^{-1}(0,z)$, and we can write \begin{equation}\label{zg} DP\left( z\right)=Y^{-1}(0,z)-Y^{-1}(T,z)\quad{\rm for \ any\ }z\in\mathcal{Z}. \end{equation} Using our hypothesis (A7) we see that assumption (A2) of Theorem~\ref{thm1} is also satisfied.
From the definition of $G$ and relation (\ref{eq:qz0}) we have that $$ G(\alpha)=\pi Q\left(S\left(\begin{array}{c}\alpha \\ \beta_0(\alpha)\end{array} \right),0\right)\,. $$ That is, the function denoted in Theorem~\ref{thm1} by $\widehat{Q}$ is here $G$, and it satisfies the hypotheses of Theorem~\ref{thm1}. Moreover, note that when $G$ satisfies (A8), assumption (A4) of Theorem \ref{thm1} is fulfilled. \medskip (C7) Follows from (C1) of Theorem~\ref{thm1}. \medskip (C8) Follows from (C2) of Theorem~\ref{thm1}. \medskip (C9) In order to prove the uniqueness of the $T$-periodic solution, it remains only to check (A5) of Theorem \ref{thm1}. To do this we show that the function $(z,\zeta,\varepsilon)\in B_{\delta}(z_0)\times B_\delta(0)\times [0,\delta]\mapsto Q(z+\zeta,\varepsilon)-Q(z,0)$ is Lipschitz with respect to $z$ with some constant $o(\delta)/\delta$. We write \begin{eqnarray*} Q(z+\zeta,\varepsilon)-Q(z,0)&=&Y^{-1}(T,z+\zeta)\left[ y(T,z+\zeta ,\varepsilon)-y(T,z,0)\right]+\\&&\left[ Y^{-1}(T,z+\zeta)-Y^{-1}(T,z)\right]y(T,z,0)\,. \end{eqnarray*} Recall that, in order to prove that a sum of two functions is Lipschitz with some constant $o(\delta)/\delta$, it is enough to prove that each of them is Lipschitz with such a constant; while in order to prove that a product of two functions is Lipschitz with some constant $o(\delta)/\delta$, it is sufficient to prove that both functions are bounded and Lipschitz and that one of them is bounded by some constant $o(\delta)/\delta$ and Lipschitz with respect to $z$ with some constant $o(\delta)/\delta$. By Lemma \ref{lem1} we know that the function $z\in B_{\delta}(z_0)\mapsto y(T,z,0)$ is Lipschitz. The fact that $z\mapsto Y^{-1}(T,z)$ is $C^1$ assures that $(z,\zeta)\in B_{\delta}(z_0)\times B_\delta(0)\mapsto Y^{-1}(T,z+\zeta)$ is Lipschitz with respect to $z$.
From Lemma \ref{lem1} we have that the function \[(z,\zeta,\varepsilon)\in B_\delta(z_0)\times B_\delta(0)\times [0,\delta]\mapsto y(T,z+\zeta ,\varepsilon)-y(T,z,0)\] is bounded by some constant $o(\delta)/\delta$ and Lipschitz with some constant $o(\delta)/\delta$. Since $z\mapsto Y^{-1}(T,z)$ is $C^1$, the same is true for the function \[(z,\zeta)\in B_{\delta}(z_0)\times B_\delta(0)\mapsto \left[ Y^{-1}(T,z+\zeta)-Y^{-1}(T,z)\right].\] Hence $Q$ satisfies (A5) of Theorem \ref{thm1} and the conclusion holds. \qed\\ Using Theorem~\ref{thm2} we can now provide a result which includes both the existence-uniqueness results of Malkin \cite{mal} and of Melnikov \cite{mel}. The main contribution of our result is that we do not impose any assumption on the algebraic multiplicity of the multiplier $+1$ of (\ref{nnn}). Also, the condition imposed on the function $z\mapsto g(t,z,\varepsilon)$ is weaker than the $C^1$ condition used by Malkin and Melnikov. In particular, our Theorem~\ref{thm3} covers the class of piecewise differentiable systems. \begin{thm} \label{thm3} Assume that $f\in C^2(\mathbb{R}\times\mathbb{R}^n, \mathbb{R}^n)$ and $g\in C^0(\mathbb{R}\times\mathbb{R}^n\times[0,1],\mathbb{R}^n)$ are $T$-periodic in the first variable, and that $g$ is locally uniformly Lipschitz with respect to the second variable. Assume that the unperturbed system \eqref{np} satisfies the following conditions. \begin{itemize} \item[(A10)] There exist an open ball $U\subset \mathbb{R}^k$ with $k\leq n$ and a function $\xi\in C^1(\overline U,\mathbb{R}^n)$ such that for any $h\in \overline U$ the $n\times k$ matrix $D\xi(h)$ has rank $k$ and $\xi(h)$ is the initial condition of a $T$-periodic solution of \eqref{np}. \item[(A11)] For each $h\in \overline U$ the linear system \eqref{ls} with $z=\xi(h)$ has the Floquet multiplier $+1$ with geometric multiplicity equal to $k$.
\end{itemize} Let $u_1(\cdot,h), \dots ,u_k(\cdot,h)$ be linearly independent $T$-periodic solutions of the adjoint linear system \begin{equation}\label{ss} \dot u=-\left(D_x f(t,x(t,\xi(h),0))\right)^*u, \end{equation} such that $u_1(0,h), \dots ,u_k(0,h)$ are $C^1$ with respect to $h$, and define the function $M:\overline U \to \mathbb{R}^k$ (called the Malkin bifurcation function) by $$ M(h)=\int\limits_0^T \left(\begin{array}{c} \left<u_1(s,h),g(s,x(s,\xi(h),0),0)\right>\\ ... \\ \left<u_k(s,h),g(s,x(s,\xi(h),0),0)\right>\end{array}\right) ds. $$ Then the following statements hold. \begin{itemize} \item[(C13)] For any sequences $(\varphi_m)_{m\geq 1}$ from $C^0(\mathbb{R}, \mathbb{R}^n)$ and $(\varepsilon_m)_{m\geq 1}$ from $[0,1]$ such that $\varphi_m(0)\to \xi(h_0)\in \xi(\overline U)$, $\varepsilon_m\to 0$ as $m\to \infty$ and $\varphi_m$ is a $T$-periodic solution of \eqref{ps} with $\varepsilon=\varepsilon_m$, we have that $M(h_0)=0$. \item[(C14)] If $M(h)\neq 0$ for any $h\in \partial U$ and $d(M,U)\neq 0$, then there exists $\varepsilon_1>0$ sufficiently small such that for each $\varepsilon\in (0,\varepsilon_1]$ there is at least one $T$-periodic solution $\varphi_\varepsilon$ of system \eqref{ps} such that $\rho(\varphi_\varepsilon(0),\xi(\overline U))\to 0$ as $\varepsilon\to 0$. \end{itemize} In addition, we assume that there exists $h_0\in U$ such that $M(h_0)=0$, $M(h)\neq 0$ for all $h\in \overline U \setminus \{h_0\}$ and $d(M,U)\neq 0$. Moreover, we assume that hypothesis {\em (A9)} of Theorem \ref{thm2} holds with $z_0=\xi(h_0)$ and that \begin{itemize} \item[(A12)] There exist $\delta_1>0$ and $L_M>0$ such that \[ ||M(h_1)-M(h_2)||\geq L_M ||h_1-h_2||, \mbox{ for all } h_1,h_2\in B_{\delta_1}(h_0).\] \end{itemize} Then the following conclusion holds.
\begin{itemize} \item[(C15)] There exists $\delta_2>0$ such that for any $\varepsilon\in (0,\varepsilon_1]$, $\varphi_\varepsilon$ is the only $T$-periodic solution of \eqref{ps} with initial condition in $B_{\delta_2}(z_0)$. Moreover $\varphi _{\varepsilon}(0)\to \xi(h_0)$ as $\varepsilon\to 0$. \end{itemize} \end{thm} \noindent {\bf Remark.} {\it The existence of $k$ linearly independent $T$-periodic solutions of the adjoint linear system \eqref{ss} follows from hypothesis {\em (A10)} (see e.g. \cite[Ch.~III, \S~23, Theorem~2]{dem}). Indeed, the functions $y_i(t,h)=D_zx(t,\xi(h),0)D_{h_i}\xi(h)$, $i\in\overline{1,k}$, are solutions of \eqref{ls} and they are linearly independent by {\em (A10)}. The assertion follows from the fact that a linear system and its adjoint have the same number of linearly independent $T$-periodic solutions. Moreover, hypothesis {\em (A11)} assures that there is no other $T$-periodic solution of \eqref{ls} linearly independent of these. }\\ \noindent {\bf Proof.} We apply Theorem \ref{thm2}. We describe the set $\mathcal{Z}$ that appears in hypothesis (A6) as $\mathcal{Z}=\bigcup\limits_{h\in\overline{U}} \left\{\xi(h)\right\}.$ First we find the matrix $S$ such that hypothesis (A7) holds. In order to achieve this, for each $z\in\mathcal{Z}$ we denote by $U(t,z)$ some fundamental matrix solution of \eqref{ss} that has in its first $k$ columns the $T$-periodic solutions $u_1, \dots ,u_k$ and such that $U(0,z)$ is $C^1$. Then the first $k$ columns of the matrix $U(0,z)-U(T,z)$ are null vectors. The matrix $Y(t,z)$ such that $Y^{-1}(t,z)=[U(t,z)]^*$ is a fundamental matrix solution of \eqref{ls}, i.e. of the system ($z=\xi(h)\in\mathcal{Z}$) \begin{equation}\label{lsh} \dot y=D_x f(t,x(t,\xi(h),0))y. \end{equation} Then the first $k$ rows of the matrix $Y^{-1}(0,z)-Y^{-1}(T,z)$ are null vectors.
Since the Floquet multiplier $1$ of \eqref{ls} has geometric multiplicity $k$, we have that the matrix $Y^{-1}(0,z)-Y^{-1}(T,z)$ has rank $n-k$. Hence this matrix has $n-k$ linearly independent columns. We claim that there exists an invertible matrix $S$ such that the matrix $\left(Y^{-1}(0,z)-Y^{-1}(T,z)\right)S$ has null vectors in its first $k$ rows and some $(n-k)\times (n-k)$ invertible matrix $\Delta(z)$ in its lower right corner. With this we prove that (A7) holds. In order to justify the claim we note first that, whatever the matrix $S$ is, the first $k$ rows of $\left(Y^{-1}(0,z)-Y^{-1}(T,z)\right)S$ are null vectors. Now we choose an invertible matrix $S$ whose last $(n-k)$ columns are vectors of the form $$ e_i=\left(\begin{array}{c} 0_{(i-1)\times 1}\\ 1\\ 0_{(n-i)\times 1}\end{array}\right),\quad i\in\overline{1,n}, $$ distributed in such a way that the $n-k$ linearly independent columns of $Y^{-1}(0,z)-Y^{-1}(T,z)$ become the last $n-k$ columns of $\left(Y^{-1}(0,z)-Y^{-1}(T,z)\right)S$. Now it is easy to see that the $(n-k)\times (n-k)$ matrix in the lower right corner of $\left(Y^{-1}(0,z)-Y^{-1}(T,z)\right)S$ is invertible. We now come back to prove (A6). By taking the derivative with respect to $h\in U$ of $ \dot{x}(t,\xi(h))=f(t,x(t,\xi(h)))$ we obtain that $D_\xi x(\cdot,\xi(h))\cdot D\xi(h)$ is a matrix solution of (\ref{lsh}). But $x(\cdot,\xi(h))$ is $T$-periodic for any $h\in U,$ therefore $D_\xi x(\cdot,\xi(h))\cdot D\xi(h)$ is $T$-periodic. This assures that each column of $D\xi(h)$ is the initial condition of some $T$-periodic solution of (\ref{lsh}), and these $T$-periodic solutions are the columns of $Y(t,\xi(h))Y^{-1}(0,\xi(h))D\xi(h)$. Then $Y(T,\xi(h))Y^{-1}(0,\xi(h))D\xi(h)=D\xi(h)$, which further gives $\left[ Y^{-1}(0,\xi(h)) -Y^{-1}(T,\xi(h))\right]S S^{-1}D\xi(h)=0$. Hence the columns of $S^{-1}D\xi(h)$ belong to the kernel of $\left[ Y^{-1}(0,\xi(h)) -Y^{-1}(T,\xi(h))\right]S$.
Since (A7) holds, we have that the kernel of $\left[ Y^{-1}(0,\xi(h)) -Y^{-1}(T,\xi(h))\right]S$ contains vectors whose last $n-k$ components are null. We deduce that there exists some $k\times k$ matrix, denoted by $\Psi$, such that \begin{equation}\label{nu1} S^{-1}D\xi(h)=\left(\begin{array}{c} \Psi \\ 0_{(n-k)\times k} \end{array}\right). \end{equation} Since by assumption (A10) the matrix $D\xi(h)$ has rank $k$ and $S^{-1}$ is invertible, the matrix $S^{-1}D\xi(h)$ must also have rank $k$, which is only possible if \begin{equation}\label{nu2} {\rm det} \Psi \not=0. \end{equation} We fix some $h_*\in U$ and denote $\alpha_*=\pi S^{-1}\xi(h_*).$ Using (\ref{nu1}) and (\ref{nu2}) and applying the Implicit Function Theorem, we have that there exist an open ball ${V}\subset\mathbb{R}^k$, neighborhood of $\alpha_*$, and a $C^1$ function $\widetilde{h}:\overline V\to {U}$ such that \begin{equation} \label{eq:implicit} \pi S^{-1}\xi (\widetilde{h}(\alpha))=\alpha\quad {\rm for\ any} \quad \alpha\in{\overline V}. \end{equation} Now we define the $C^1$ function $\beta_0:{\overline V}\to \mathbb{R}^{n-k}$ as $ \beta_0(\alpha)=\pi^\bot S^{-1}\xi(\widetilde{h}(\alpha)). $ Note that $S\left(\begin{array}{c}\alpha \\ \beta_0(\alpha)\end{array}\right)=\xi(\widetilde{h}(\alpha))$. Hence assumption (A6) of Theorem~\ref{thm2} is satisfied with $S$, $V$ and $\beta_0$ defined as above. The bifurcation function $G$ defined in Theorem~\ref{thm2} can be written using our notations as $$ G(\alpha)=\pi\int\limits_0^T Y^{-1}(s,\xi(\widetilde{h}(\alpha)))g(s,x(s,\xi(\widetilde{h}(\alpha)),0))ds. $$ Since $Y^{-1}(s,\xi(h))=[U(s,h)]^*$ (see the beginning of the proof), the first $k$ rows of $Y^{-1}(s,\xi(h))$ are the vectors $(u_1(s,h))^*$, ... , $(u_k(s,h))^*$, and so $$ G(\alpha)=\int\limits_0^T \left(\begin{array}{c} \left<u_1(s,{\widetilde{h}(\alpha)}),\,g(s,x(s,\xi(\widetilde{h}(\alpha)),0),0)\right>\\ ...
\\ \left<u_k(s,{\widetilde{h}(\alpha)}),\,g(s,x(s,\xi(\widetilde{h}(\alpha)),0),0)\right>\end{array}\right) ds. $$ From here one can see that there is the following relation between $G$ and the Malkin bifurcation function $M$, \begin{equation} \label{eq:G} G(\alpha)=M\left( \tilde{h}(\alpha)\right) \mbox{ for any } \alpha\in \overline V.\end{equation} (C13) Follows from (C7) of Theorem~\ref{thm2}. (C14) Without loss of generality (we can diminish $U$ if necessary) we can consider that $\widetilde{h}$ is a homeomorphism from $V$ onto $U$; taking into account that $C$ is an invertible matrix, by \cite[Theorem 26.4]{krazab} we have $$ {\rm d}(G,V)={\rm d}(M,U).$$ Thus (C14) follows by applying conclusion (C8) of Theorem~\ref{thm2}.\\ (C15) We only need to prove assumption (A8) of Theorem~\ref{thm2}, provided that our hypothesis (A12) holds. First, taking the derivative of (\ref{eq:implicit}) with respect to $\alpha$ and using (\ref{nu1}), we obtain that $D\widetilde{h}(\alpha_*)=\Psi^{-1}$, hence it is invertible and, moreover, $L_h=1/\left(2\|(D\widetilde{h}(\alpha_*))^{-1}\|\right)\neq 0$. There exists $\delta>0$ sufficiently small such that $\|D\widetilde{h}(\alpha)-D\widetilde{h}(\alpha_*)\|\leq L_h$ for all $\alpha\in B_\delta(\alpha_*)$. Using the generalized Mean Value Theorem (see \cite[Proposition 2.6.5]{clark}), we have that \[||\widetilde{h}(\alpha_1)-\widetilde{h}(\alpha_2)||\geq L_h ||\alpha_1-\alpha_2||\, \mbox{ for all } \alpha_1,\alpha_2\in B_\delta(\alpha_*).\] Since $C$ is invertible and $M$ satisfies (A12), using (\ref{eq:G}) we deduce that $G$ satisfies hypothesis (A8) of Theorem~\ref{thm2}.
\qed \section{An example} \label{sec4} In this section we illustrate Theorem~\ref{thm3} by studying the existence of $2\pi$-periodic solutions for the following four--dimensional nonsmooth system \begin{equation} \label{system} \begin{array}{lll} \dot{x_1}&=&\,\,\,\,\,x_2-x_1(x_1^2+x_2^2-1),\\ \dot{x_2}&=&-x_1-x_2(x_1^2+x_2^2-1)+\varepsilon \left(\sin t + \varphi (k_1x_3)\right),\\ \dot{x_3}&=&\,\,\,\,\,x_4-x_3(x_3^2+x_4^2-1),\\ \dot{x_4}&=&-x_3-x_4(x_3^2+x_4^2-1)+\varepsilon \varphi (k_2x_2), \end{array} \end{equation} where $\varepsilon>0$ is sufficiently small, $k_1$ and $k_2$ are arbitrary reals and $\varphi:\mathbb{R}\to \mathbb{R}$ is the piecewise linear function \begin{equation} \nonumber \varphi(x)=\left\{ \begin{array}{rl} -1, & \mbox{ for }x\in (-\infty,-1),\\ x, & \mbox{ for }x\in [-1,1],\\ 1, & \mbox{ for }x\in (1,\infty). \end{array} \right. \end{equation} Before proceeding to this study we write down a sufficient condition for the function $g$ to satisfy (A9). This is of general interest, not only for this example. It was also used in \cite{blm}. \begin{itemize} \item[(A13)] For any $\delta>0$ sufficiently small there exists $M_\delta\subset [0,T]$ Lebesgue measurable with mes$(M_\delta)=o(\delta)/\delta$ and such that for every $t\in [0,T]\setminus M_\delta$ and for all $z\in B_\delta(z_0)$, $\varepsilon\in[0,\delta]$, $ \, ||D_zg(t,z,\varepsilon)-D_zg(t,z_0,0)||\leq o(\delta)/\delta \,. $ \end{itemize} The fact that (A13) implies (A9) follows from the Mean Value Theorem.\\ Coming back to system (\ref{system}), we define $g:\mathbb{R}\times \mathbb{R}^4\to \mathbb{R}^4$ as $g(t,z_1,z_2,z_3,z_4)=(0,\,\sin t + \varphi (k_1z_3),\,0, \,\varphi (k_2z_2))$. This function is locally uniformly Lipschitz and satisfies (A13) (hence also (A9)) for any $z_0=(z_1^0,z_2^0,z_3^0,z_4^0)$ with $|k_2z_2^0|\neq 1$ and $|k_1z_3^0|\neq 1$.\\ We study now the unperturbed system.
For $\varepsilon=0$ system (\ref{system}) becomes \begin{eqnarray*} \dot{x_1}&=&x_2-x_1(x_1^2+x_2^2-1), \qquad \dot{x_2}=-x_1-x_2(x_1^2+x_2^2-1),\\ \dot{x_3}&=&x_4-x_3(x_3^2+x_4^2-1), \qquad \dot{x_4}=-x_3-x_4(x_3^2+x_4^2-1), \end{eqnarray*} and it has a family of $2\pi$-periodic orbits, whose initial conditions are given by \[\xi(\theta,\eta)=(\sin \theta, \cos \theta, \sin \eta, \cos \eta),\quad (\theta,\eta)\in \mathbb{R}^2.\] We have $\,x(t,\xi(\theta,\eta),0)=\left( \sin (t+\theta), \cos(t+\theta), \sin (t+\eta), \cos(t+\eta)\right).$ It is easy to see that the function $\xi:\mathbb{R}^2\to \mathbb{R}^4$ satisfies assumption (A10) of Theorem~\ref{thm3}. We consider now the linearized system, \begin{equation} \label{lin} \dot{y}=D_xf(t,x(t,\xi(\theta,\eta),0))y, \end{equation} where $f(x_1,x_2,x_3,x_4)=(x_2-x_1(x_1^2+x_2^2-1),-x_1-x_2(x_1^2+x_2^2-1),x_4-x_3(x_3^2+x_4^2-1),-x_3-x_4(x_3^2+x_4^2-1))$, and the following $2\pi$-periodic matrix \[ \Phi(t,(\theta,\eta))=\left( \begin{array}{cccc} -\cos(t+\theta) & 0 & \sin(t+\theta) & 0\\ \sin(t+\theta) & 0 & \cos(t+\theta) & 0\\ 0 & -\cos(t+\eta) & 0 & \sin(t+\eta)\\ 0 & \sin(t+\eta) & 0 & -\cos(t+\eta) \end{array} \right).\] Denoting $\hat{\Phi}(t,(\theta,\eta))=\Phi(t,(\theta,\eta))\Phi^{-1}(0,(\theta,\eta))$, it can be checked that the normalized fundamental matrix of system (\ref{lin}) is \[ \widehat{Y}(t,\xi(\theta,\eta))=\hat{\Phi}(t,(\theta,\eta))\left( \begin{array}{cc} I_{2\times 2} & 0_{2\times2}\\ 0_{2\times2} & e^{-2t}I_{2\times 2} \end{array} \right),\] and that it satisfies the hypothesis (A11) of Theorem~\ref{thm3}.
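The $2\pi$-periodicity of the unperturbed orbits through $\xi(\theta,\eta)$ can be confirmed numerically on one planar factor of the system. Below is a minimal sketch with a hand-rolled classical fourth-order Runge--Kutta integrator (the value of $\theta$ and the step count are our arbitrary choices):

```python
import math

def f2(x):
    # planar factor of the unperturbed system; the unit circle is a limit cycle
    x1, x2 = x
    r = x1*x1 + x2*x2 - 1.0
    return (x2 - x1*r, -x1 - x2*r)

def rk4(x, h, n):
    # classical RK4 with n steps of size h
    for _ in range(n):
        k1 = f2(x)
        k2 = f2((x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]))
        k3 = f2((x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]))
        k4 = f2((x[0] + h*k3[0], x[1] + h*k3[1]))
        x = (x[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
             x[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)
    return x

theta = 0.7
x0 = (math.sin(theta), math.cos(theta))   # xi(theta) on the unit circle
n = 4000
xT = rk4(x0, 2*math.pi/n, n)
# the orbit returns to its initial point after time T = 2*pi
assert abs(xT[0] - x0[0]) < 1e-8 and abs(xT[1] - x0[1]) < 1e-8
```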
A pair of linearly independent $2\pi$-periodic solutions of the adjoint of system (\ref{lin}) is \begin{eqnarray*} u_1(t)&=&\left( -\cos (t+\theta),\,\,\sin (t+\theta),\,\,0,\,\,0\right)^T,\\ u_2(t)&=&\left( 0,\,\,0,\,\,-\cos (t+\eta),\,\,\sin (t+\eta)\right)^T. \end{eqnarray*} Therefore the Malkin bifurcation function $M(\theta,\eta)$ takes the form $$ M(\theta,\eta)= \left(\begin{array}{c} M_1(\theta,\eta)\\ M_2(\theta,\eta)\end{array}\right)=\left(\begin{array}{c} \int\limits_0^{2\pi} \sin (s+\theta)\left[ \sin s+ \varphi\left( k_1\sin (s+\eta)\right)\right]ds\\ \int\limits_0^{2\pi} \sin (s+\eta) \varphi\left( k_2\cos (s+\theta)\right)ds\end{array}\right). $$ We denote \begin{eqnarray*} I(k)=\int_0^{2\pi} \sin t \, \varphi(k\sin t)dt, \quad i_1=I(k_1), \\ J(k)=\int_0^{2\pi} \cos t \, \varphi(k\cos t)dt, \quad j_2=J(k_2). \end{eqnarray*} Then \begin{eqnarray*} M_1(\theta,\eta)&=& \pi \cos \theta + i_1\cos(\theta-\eta),\\ M_2(\theta,\eta)&=& -j_2\sin (\theta-\eta). \end{eqnarray*} It is easy to see that the function $M$ is differentiable and that the determinant of its Jacobian at each point $(\theta,\eta)$ is \[\det\left(DM(\theta,\eta)\right)=-\pi j_2\sin \theta \cos(\theta-\eta).\] Studying the zeros of the function $M$, we obtain the following results. \medskip If $j_2\neq 0$ and $|i_1|< \pi$, $M$ has exactly $4$ zeros, namely \[(a_1,\pi+a_1),\quad (\pi+a_1,\pi+a_1), \quad (\pi-a_1,\pi-a_1), \quad (2\pi-a_1,\pi-a_1),\] and all have nonvanishing index (here $a_1=\arccos (i_1/\pi)$). If $j_2=0$, we have that $M_2\equiv 0$, hence the zeros are not isolated. If $j_2\neq 0$ and $|i_1|=\pi$, $M$ has exactly $2$ zeros, namely \[(a_1,\pi+a_1), \quad (\pi-a_1,\pi-a_1), \] and both have index $0$ (note that $a_1$ is $0$ or $\pi$). If $j_2\neq 0$ and $|i_1|>\pi$, $M$ has no zeros. \medskip In order to complete the study, we calculate \[ I(k)=J(k)=\left\{ \begin{array}{ll} k\pi,~~~~0\leq k \leq 1,\\ 2k\arcsin \frac 1k +2 \frac{\sqrt{k^2-1}}{k}, ~~~~k>1.
\end{array} \right.\] We note that $I$ and $J$ are odd functions. For $k>1$, $I^{\prime}(k)=2\arcsin \frac 1k -\frac 2k \sqrt{1-\frac 1{k^2}}>\frac 2k -\frac 2k \sqrt{1-\frac 1{k^2}}>0$. Then $I(k)>I(1)=\pi$ for all $k>1$. In fact $|I(k)|>\pi$ if and only if $|k|>1$. Also $|I(k)|<\pi$ if and only if $|k|<1$, $|I(k)|=\pi$ if and only if $|k|=1$, and $J(k)=0$ if and only if $k=0$.\\ One can see that when $k_2\neq 0$ and $|k_1|<1$ the bifurcation function $M$ has exactly four zeros. The values of the function $\xi$ at these zeros are of the form $(\pm \sin a_1, \, \pm \cos a_1,\,\pm \sin a_1, \, \pm \cos a_1)$, where $a_1=\arccos k_1$.\\ Using all these facts and applying Theorem~\ref{thm3}, we obtain the following result concerning the existence of $2\pi$-periodic solutions of system (\ref{system}) when $k_2\neq 0$. \begin{pro} Let $k_2\neq 0$ and $\varepsilon>0$ be sufficiently small. If $|k_1|>1$ system \eqref{system} has no $2\pi$-periodic solutions with initial conditions converging to some point of $\xi\left( \mathbb{R}\times \mathbb{R}\right)$ as $\varepsilon\to 0$. If $|k_1|<1$ system \eqref{system} has at least four $2\pi$-periodic solutions with initial conditions converging to exactly four points of $\xi\left( \mathbb{R}\times \mathbb{R}\right)$ as $\varepsilon\to 0$. If $|k_1k_2|\neq 1$ then the obtained $2\pi$-periodic solutions are isolated. \end{pro} For $k_2=0$ the behavior of system (\ref{system}) is the same as that of the uncoupled system whose last two equations remain the same and whose first two change to \begin{equation} \label{system2} \begin{array}{lll} \dot{x_1}&=&\,\,\,\,\,x_2-x_1(x_1^2+x_2^2-1),\\ \dot{x_2}&=&-x_1-x_2(x_1^2+x_2^2-1)+\varepsilon \left(\sin t + \varphi (k_1\sin(t+\eta))\right). \end{array} \end{equation} In the following we study the existence of $2\pi$-periodic solutions of the above planar system.
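Before proceeding, the closed form derived above for $I(k)=J(k)$, as well as the expressions for $M_1$ and $M_2$, can be cross-checked by direct numerical quadrature. A minimal sketch (the midpoint rule and all sample parameter values are our own choices):

```python
import math

def phi(x):
    # the piecewise linear saturation from the example
    return max(-1.0, min(1.0, x))

def quad(F, n=50000):
    # composite midpoint rule on [0, 2*pi]
    h = 2*math.pi/n
    return h*sum(F(h*(j + 0.5)) for j in range(n))

def I_closed(k):
    # closed form of I(k) = J(k) for k >= 0, as computed in the text
    if k <= 1:
        return k*math.pi
    return 2*k*math.asin(1/k) + 2*math.sqrt(k*k - 1)/k

# I(k) = int_0^{2pi} sin(t) phi(k sin t) dt matches the closed form
for k in (0.0, 0.5, 1.0, 1.7, 3.0):
    I_num = quad(lambda t: math.sin(t)*phi(k*math.sin(t)))
    assert abs(I_num - I_closed(k)) < 1e-6

# M_1, M_2 match their closed forms at a sample point (parameters ours)
k1, k2, th, et = 0.5, 2.0, 1.1, 0.4
i1, j2 = I_closed(k1), I_closed(k2)
M1 = quad(lambda s: math.sin(s + th)*(math.sin(s) + phi(k1*math.sin(s + et))))
M2 = quad(lambda s: math.sin(s + et)*phi(k2*math.cos(s + th)))
assert abs(M1 - (math.pi*math.cos(th) + i1*math.cos(th - et))) < 1e-6
assert abs(M2 - (-j2*math.sin(th - et))) < 1e-6
```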
We consider only the case $(k_1,\eta) \in \mathbb{R}\times \mathbb{R}\setminus \{ (1,\pi), (-1,0)\}$, since otherwise the perturbation term is identically zero. Note also that the perturbation term does not depend on the space variable. System (\ref{system2}) with $\varepsilon=0$ has a family of $2\pi$-periodic orbits whose initial conditions are given by \[\xi(\theta)=(\sin \theta,\cos\theta),\quad \theta\in \mathbb{R}.\] For each fixed $\eta$ the Malkin bifurcation function is \[M^\eta(\theta)=\pi \cos \theta + i_1\cos(\theta-\eta),\] and it can easily be seen that it has exactly $2$ zeros, both with nonvanishing index. If $k_1=0$ or $\eta\in \{ 0,\pi\}$, the zeros of $M^\eta$ are $\pi/2$ and $3\pi/2$. Otherwise the zeros are $\pi-\theta^*$ and $2\pi-\theta^*$, where $\displaystyle{\theta^*=\arctan \frac{\pi+i_1\cos \eta}{i_1\sin\eta}.}$ Hence, applying Theorem \ref{thm3}, we obtain the following result. \begin{pro} Let $(k_1,\eta) \in \mathbb{R}\times \mathbb{R}\setminus \{ (1,\pi), (-1,0)\}$ and $\varepsilon>0$ be sufficiently small. Then system \eqref{system2} has two isolated $2\pi$-periodic solutions with initial conditions converging to exactly two points of $\xi\left( \mathbb{R}\right)$ as $\varepsilon\to 0$. \end{pro}
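The two zeros of $M^\eta$ given by the formula above can be checked directly. A minimal numerical sketch (the sample values $k_1=0.5$ and $\eta=1$ are our arbitrary choice, with $k_1\neq 0$ and $\eta\notin\{0,\pi\}$):

```python
import math

k1, eta = 0.5, 1.0          # arbitrary sample parameters
i1 = k1*math.pi             # i1 = I(k1) = k1*pi, valid since |k1| <= 1

def M_eta(theta):
    # Malkin bifurcation function of the planar system for fixed eta
    return math.pi*math.cos(theta) + i1*math.cos(theta - eta)

theta_star = math.atan((math.pi + i1*math.cos(eta))/(i1*math.sin(eta)))
# the two zeros predicted by the formula
for root in (math.pi - theta_star, 2*math.pi - theta_star):
    assert abs(M_eta(root)) < 1e-9
```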
\section{Introduction} In the past decade, an abundance of magnetic materials has been discovered featuring skyrmions, i.e., topologically nontrivial spin textures.\cite{2013:Nagaosa:NN} An incomplete list of bulk systems includes cubic chiral magnets such as the B20 transition metal compounds \cite{2009:Muhlbauer:Science, 2010:Munzer:PhysRevB, 2011:Yu:NatureMater, Kanazawa:2016fd}, rhombohedral lacunar spinels \cite{2015:Kezsmarki:NM,2017:Bordacs:SR}, hexagonal M-type ferrites \cite{2011:Yu:PNAS}, perovskites \cite{2011:Ishiwata:PRB}, and tetragonal Heusler compounds.\cite{2017:Nayak:Nature} The formation of skyrmions has also been reported in epitaxial thin films of these bulk systems, as well as in nanowhiskers and nanoparticles.\cite{2013:Yu:NanoLett,2017:Meynell:PRB} Further, tailored magnetic bubbles featuring skyrmion characteristics have been detected in a wide range of heterostructures \cite{2017:Fert:NatRevMat,2017:Jiang:PhysRep}, as well as in carefully selected atomically thin films.\cite{2016:Wiesendanger:NatRevMat} While the microscopic interactions across this exceptionally wide range of materials are inherently different, all of the known systems were believed to exhibit a single temperature and field regime of the skyrmion lattice order. Thus, even though different mechanisms may stabilize skyrmions, there has been no example in which two different mechanisms are sufficiently strong to cause the formation of skyrmion lattice order in the same material in disconnected temperature and field regimes. A first example for two such independent skyrmion phases was recently reported in {Cu$_{2}$OSeO$_{3}$}.\cite{2018:Chacon:NatPhys} Using small-angle neutron scattering, two disconnected phases, driven by different mechanisms, could be identified.
On the one hand, {Cu$_{2}$OSeO$_{3}$} displays a well-known skyrmion phase near the onset of helimagnetic order in a small applied magnetic field.\cite{2012:Seki:Science,2012:Adams:PhysRevLett} In the following this state will be referred to as high-temperature skyrmion (HTS) phase. As a key characteristic, the temperature versus field range of the HTS phase varies only very weakly under changes of field direction. Phenomenological considerations as well as Monte-Carlo calculations establish that the HTS phase is stabilized by entropic contributions of thermal Gaussian fluctuations.\cite{2009:Muhlbauer:Science,2013:Buhrandt:PRB} In addition, a second skyrmion phase was identified in {Cu$_{2}$OSeO$_{3}$} at the border between the conical phase and the field polarized phase in the low-temperature limit. In the following this state will be referred to as low-temperature skyrmion (LTS) phase. Experimentally, the LTS phase differs from the HTS phase in two distinct ways. First, as it stabilizes at low temperatures, it does not require the effects of thermal fluctuations. Second, it exists for magnetic fields close to the crystallographic {$\langle100\rangle$} axis only but not for fields along {$\langle110\rangle$} and {$\langle111\rangle$}. In addition, the LTS phase is accompanied by another new phase in which, with increasing field strength, the propagation direction of the conical state tilts increasingly away from the field direction. The magnetic phase diagram of {Cu$_{2}$OSeO$_{3}$} including the two new phases may be fully accounted for in terms of the Ginzburg-Landau theory developed for the class of cubic chiral magnets (cf. Refs.\onlinecite{2009:Muhlbauer:Science, 2010:Munzer:PhysRevB, 2016:Bauer:Book, 2018:Chacon:NatPhys} and references therein).
At the heart of this model is the notion of a hierarchical set of energy scales, comprising, in decreasing strength, the ferromagnetic exchange, Dzyaloshinsky-Moriya (DM) spin-orbit terms, dipolar interactions, and higher-order crystal-field contributions \cite{Landau}. Even though {Cu$_{2}$OSeO$_{3}$} exhibits ferri- instead of ferromagnetic order with a long-wavelength modulation driven by DM interactions, this Ginzburg-Landau model appears to be in excellent agreement with experiment. Indeed, taking into account conventional cubic magnetic anisotropies with a fairly strong $\langle100\rangle$ easy magnetic axis, the LTS and tilted conical phases may be explained in excellent qualitative and semi-quantitative agreement with experiment.\cite{2018:Chacon:NatPhys} The search for several topologically non-trivial phases in the cubic chiral magnets has a long history. For instance, data recorded in FeGe were interpreted in terms of complex mesophases.\cite{2006:Rossler:Nature,wilhelm:PRL:11,2013:Cevey:pssb} In contrast, comprehensive measurements in MnSi unambiguously established a single skyrmion phase.\cite{2012:Bauer:PRB} This motivated a reinvestigation of FeGe, where an analysis using the same set of criteria as established for MnSi provided a phase diagram identical to that observed in MnSi.\cite{2016:Bauer:Book} Further, a splitting of the skyrmion phase reported in {Cu$_{2}$OSeO$_{3}$} under Zn doping could be attributed to chemical phase segregation.
\cite{2015:Wu:SciRep,Stefanici:arXiv} A strong dependence on the temperature and field history of the HTS phase was found in cubic chiral magnets subject to disorder, notably {Fe$_{1-x}$Co$_{x}$Si} \cite{2010:Munzer:PhysRevB,2016:Bauer:PRB} and the series of {Co-Zn-Mn} alloys.\cite{2015:Tokunaga:NatCommun,2016:Karube:NatMater} This history dependence has been exploited to obtain detailed information on the processes of nucleation and decay of skyrmions.\cite{2013:Milde:Science,2017:Poellath:PRL} Following the discovery of two phases in {Cu$_{2}$OSeO$_{3}$}, a similar observation was reported in Co$_7$Zn$_7$Mn$_6$ with a high-temperature phase as observed in {Cu$_{2}$OSeO$_{3}$} and the B20 systems, and a three-dimensionally disordered skyrmion phase at low temperatures.\cite{2018:Karube:SciAdv} However, in contrast to {Cu$_{2}$OSeO$_{3}$}, the stability of the low-temperature skyrmion phase is attributed to the interplay of DM interactions with the effects of frustration as opposed to magnetocrystalline anisotropies. As the observations of the LTS and tilted conical phases in {Cu$_{2}$OSeO$_{3}$} are so far based on small-angle neutron scattering (SANS), several pressing questions remain concerning the thermodynamic signatures and the sensitivity to the temperature and field history; these questions are addressed in this paper. Beginning with an account of the experimental methods in section \ref{methods}, our paper proceeds in sections \ref{designations} and \ref{diagrams} with an account of the designations of the phase transitions and the magnetic phase diagrams, respectively. Typical magnetization and ac susceptibility data as recorded under zero-field cooling (ZFC) and high-field cooling (HFC) are presented in sections \ref{zfc-data} and \ref{hfc-data}, respectively. This is followed by a microscopic justification of the transition fields in section \ref{sans}, where the intensity patterns observed in neutron scattering are compared with the magnetization and susceptibility.
Evidence underscoring the presence of dissipation and hysteresis is presented in section \ref{harmonics}, followed by information on different sample shapes and thus demagnetizing fields in section \ref{demag}, which are compared to calculations in a Ginzburg-Landau model. Finally, a comparison of the magnetic field dependence of the specific heat and magnetization at low temperatures is presented in section \ref{specific_heat}. The discussion of the experimental data in section \ref{discussion} begins with an account of the anisotropies observed for different crystallographic orientations in section \ref{anisotropy}. The second part of the discussion presented in section \ref{energy} addresses the temperature dependence of the anisotropy energy. The field dependence of the magnetic anisotropy as reflected in the magnetization is finally discussed in section \ref{potential}. The paper closes with a brief summary in section \ref{conclusions}. \section{Experimental Methods} \label{methods} Several {Cu$_{2}$OSeO$_{3}$} single crystals were prepared from ingots grown by vapor transport. Numerous studies on samples from the same ingots have been reported in the literature.\cite{2015:Schwarze:NatMater,2016:Milde:NanoLett,2016:Zhang:NanoLett,2016:Zhang:PhysRevB,2017:Poellath:PRL,2017:Stasinopoulos:APL} This concerns in particular the small-angle neutron scattering reported in Refs.\,\cite{2012:Adams:PhysRevLett,2018:Chacon:NatPhys} as summarized below. All single crystals investigated here displayed the same high sample quality in terms of their optical appearance, the crystallinity as inferred from the lattice mosaic spread and sharpness of diffraction spots observed in x-ray and neutron diffraction, as well as their physical properties, for instance, the value of the paramagnetic-to-helimagnetic transition temperature. In our study we focused on five specimens, all of which were of cuboid shape.
The designations, dimensions, orientations, and demagnetizing factors of these samples are summarized in Table\,\ref{table:samples}. \begin{table}[t!] \vspace{-1mm} \caption{\label{table:samples} Designation, field direction, dimensions ($a$, $b$, and $c$), and demagnetization factor $N$ of the {Cu$_{2}$OSeO$_{3}$} single-crystal samples investigated as part of this study. The magnetic field was applied along the direction denoted $a$. The value of $N$ corresponds to this field direction. Samples are listed twice when measured for different directions of the field as stated in the table. } \vspace{2mm} \begin{tabular}[t]{llllc} \hline\hline sample & $B \parallel $ \hspace{0mm} & $a$ $\times$ $b$ $\times$ $c$ ($\rm {mm}^3$) \hspace{1mm} & $N$ \hspace{3mm} & data in Figs. \\ \hline\hline VTG1-19 & $\langle 100 \rangle$ & $1.87 \times 1.28 \times 1.53$ & 0.28 & \ref{fig:phasediagrams} to \ref{work_temp},\ref{fig:magnetic_work} \\ VTG1-20 & $\langle 110 \rangle$ & $1.76 \times 1.27 \times 1.74$ & 0.29 & \ref{fig:phasediagrams},\ref{fig:zfc},\ref{fig:hfc},\ref{fig:harmonics},\ref{fig:cu2seo4:heat_capacity},\ref{fig:materials},\ref{work_temp},\ref{fig:magnetic_work} \\ VTG1-20 & $\langle 111 \rangle$ & $1.74 \times 1.27 \times 1.76$ & 0.30 & \ref{fig:phasediagrams},\ref{fig:zfc},\ref{fig:hfc},\ref{fig:cu2seo4:heat_capacity},\ref{fig:materials},\ref{work_temp},\ref{fig:magnetic_work} \\ \hline VTG1-10-1 & $\langle 100 \rangle$ & $3.00 \times 0.50 \times 0.50$ & 0.07 & \ref{fig:demag} and \ref{fig:demag-N} \\ VTG1-18-2 & $\langle 100 \rangle$ & $1.85 \times 0.86 \times 0.78$ & 0.18 & \ref{fig:demag} and \ref{fig:demag-N} \\ VTG1-18-2 & $\langle 100 \rangle$ & $0.86 \times 1.85 \times 0.78$ & 0.39 & \ref{fig:demag} and \ref{fig:demag-N} \\ VTG1-18-3 & $\langle 100 \rangle$ & $0.15 \times 0.85 \times 1.88$ & 0.77 & \ref{fig:demag} and \ref{fig:demag-N} \\ \hline\hline \end{tabular} \end{table} Essentially all data for the field along $\langle100\rangle$ reported in this
paper were recorded in sample VTG1-19, which was closest to a cubic shape with a demagnetization factor close to $\sim\!1/3$. In this way, the sample shape and the associated demagnetization effects were close to those of the spherical sample studied in SANS \cite{2018:Chacon:NatPhys}, rendering the differences in the corrections of demagnetizing fields tiny. For the same reasons, data for different crystallographic orientations were recorded in sample VTG1-20, which was also close to cubic. All other samples served to study the effects of different demagnetization factors under a field along $\langle100\rangle$. The magnetization, ac susceptibility, and specific heat were measured in a Quantum Design physical properties measurement system (PPMS). The magnetization was determined with an extraction technique. All ac susceptibility measurements were performed at an excitation frequency $f = 911\,\mathrm{Hz}$ and an excitation field of $H_{ac}=0.1\,\mathrm{mT}$. It is important to note that the susceptibilities presented in our paper essentially represent the coefficients of a Fourier expansion of the response to the ac excitation. Technically speaking, these coefficients are typically referred to as \textit{harmonic} susceptibilities. In comparison, when the response to a magnetic field is expanded in a Taylor series, the expansion coefficients are referred to as \textit{nonlinear} susceptibilities. In principle, it is possible to convert the harmonic into the nonlinear susceptibilities as discussed in Sec.\,\ref{harmonics} and Ref.\,\onlinecite{mydosh:1993}. For the comparison of the magnetization and ac susceptibility with small-angle neutron scattering, a data set was recorded at the beamline SANS-1 \cite{muhlbauer:NIaMiPRSAASDaAE:16} at MLZ following the technical procedure described in Ref.\,\onlinecite{2018:Chacon:NatPhys}.
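The relation between the two sets of coefficients may be sketched explicitly. The following expressions are a textbook-style illustration in the spirit of Ref.\,\onlinecite{mydosh:1993}, not a reproduction of the analysis used for the measured data: for an excitation $H(t) = H_0 + H_{\rm ac}\sin\omega t$, the response may be expanded in harmonics of the excitation frequency,

```latex
% harmonic expansion of the response to H(t) = H_0 + H_{\rm ac}\sin\omega t
M(t) = M_0 + H_{\rm ac} \sum_{n \geq 1}
       \left[ \chi_n' \sin(n\omega t) + \chi_n'' \cos(n\omega t) \right]
% Taylor expansion of the same response defines the nonlinear susceptibilities
M(H) = \chi_1 H + \chi_2 H^2 + \chi_3 H^3 + \dots
% inserting H = H_{\rm ac}\sin\omega t and collecting terms in \sin\omega t
% yields, to leading order (H_0 = 0),
\chi_1' = \chi_1 + \tfrac{3}{4}\, \chi_3 H_{\rm ac}^2 + \dots
```

The last line follows from $\sin^3\omega t = \tfrac{1}{4}(3\sin\omega t - \sin 3\omega t)$ and illustrates why the first harmonic susceptibility coincides with the linear susceptibility only in the limit of small excitation amplitudes.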
The experimental data are either shown as a function of applied field, without correction of demagnetizing effects, or as a function of internal field, where the effects of demagnetizing fields were corrected by means of an analytical expression as reported in Ref.\,\onlinecite{1998:Aharoni:JApplPhys}, taking into account the cuboid sample shape. For those figures in which data are shown as a function of internal field, the susceptibility calculated from the magnetization, ${\rm d}M/{\rm d}H$, as well as the real and imaginary parts of the ac susceptibility, $\chi'$ and $\chi''$, are consistently presented with respect to the internal field. That is, ${\rm d}M/{\rm d}H$ was computed with respect to the internal field, whereas measured values of $\chi'$ and $\chi''$ were converted following convention (for details see, e.g., Ref.\,\onlinecite{2000:youssif}). The heat capacity was measured by means of a conventional heat pulse method using the standard PPMS setup. Data were recorded in a sequence of increasing field values, where typical heat pulses generated a temperature increase of $\sim\!1\,{\%}$. All specific heat data shown in this paper represent the average over 15 measurements recorded at the same applied field and reference temperature. For the scientific questions addressed in our study, the temperature and field history proves to be very important. The protocols used in our study correspond to those used in the small-angle neutron scattering study reported in Ref.\,\onlinecite{2018:Chacon:NatPhys}. However, whereas data in the SANS study were dominantly recorded as a function of temperature, essentially all of the data reported here were recorded as a function of field. Focusing on the formation of two new phases at low temperatures, we have confirmed that our experimental data in {Cu$_{2}$OSeO$_{3}$} may be classified in terms of data recorded under increasing and decreasing magnitude of the magnetic field, denoted ``up" and ``down", respectively.
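The demagnetization correction amounts to two elementary operations, namely the correction of the field, $H_{\rm int} = H_{\rm app} - NM$, and the corresponding conversion of the susceptibility, which follows from ${\rm d}H_{\rm int} = {\rm d}H_{\rm app} - N{\rm d}M$. A minimal numpy sketch illustrating both steps; the numeric arrays are hypothetical, and only $N = 0.28$ is taken from Table\,\ref{table:samples} (sample VTG1-19):

```python
import numpy as np

def internal_field(h_app, m, n_demag):
    """Demagnetization correction of the applied field: H_int = H_app - N*M.
    h_app and m must be given in the same units; n_demag is dimensionless."""
    return h_app - n_demag * m

def chi_internal(chi_ext, n_demag):
    """Convert the susceptibility measured versus applied field,
    chi_ext = dM/dH_app, into dM/dH_int. Follows from
    dH_int = dH_app - N*dM, hence dM/dH_int = chi_ext / (1 - N*chi_ext)."""
    return chi_ext / (1.0 - n_demag * chi_ext)

# hypothetical illustration for a sample with N = 0.28
h_app = np.array([0.0, 10.0, 20.0])  # applied field
m = np.array([0.0, 5.0, 10.0])       # magnetization, same units as h_app
h_int = internal_field(h_app, m, 0.28)
```

Note that the same conversion formula is applied to the measured ac susceptibilities when plotting them against the internal field.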
In addition, we distinguish two different temperature versus field protocols, namely zero-field cooling (ZFC) and high-field cooling (HFC). For ZFC, the sample was cooled from a starting temperature well above $T_c$ in zero magnetic field with a cooling rate of $\sim\!10\,\mathrm{K\,min^{-1}}$ to the desired temperature of the field sweep. It is important to note that for the pristine helical state created this way, the domain populations for the different $\langle100\rangle$ directions are the same. Subsequently, data were recorded while the magnitude of the magnetic field was increased in steps of $0.5\,\mathrm{mT}$. For HFC, the sample was cooled from a starting temperature well above $T_c$ in a large negative applied field of $-140\,\mathrm{mT}$, i.e., below $-H_{c2}$, with a cooling rate of $\sim 10\,\mathrm{K\,min^{-1}}$ until the desired temperature of the field sweep was reached. Subsequently, data were recorded in a single field sweep in which the magnetic field was stepped from the field value at which the sample was cooled up to positive values exceeding $+H_{c2}$. Note that this field sweep comprises data under decreasing magnitude of the magnetic field (down) in the range $-H_{c2} < H < 0$ and data under increasing magnitude of the magnetic field (up) in the range $0 < H < +H_{c2}$, respectively. For fields exceeding $H_{c1}$, we find precisely the same behavior under increasing magnitude of the magnetic field, no matter whether the sweep was initiated following ZFC or HFC. Discrepancies in the range $0 < H \leq +H_{c1}$ may be attributed to different domain populations in the helical state; notably, for field along $\langle 100\rangle$, i.e., the easy magnetic axis, only domains along the field direction are populated after HFC. In the following, such differences below $H_{c1}$ will be pointed out where relevant.
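The two protocols translate into simple field sequences. The following sketch encodes them; the step size and the HFC starting field are taken from the text, whereas the upper sweep limits are placeholders chosen to exceed $+H_{c2}$:

```python
import numpy as np

def zfc_sweep(h_max, step=0.5):
    """ZFC protocol: after cooling in zero field, the field magnitude is
    increased from zero to h_max in increments of `step` (mT)."""
    return np.arange(0.0, h_max + step, step)

def hfc_sweep(h_start=-140.0, h_max=60.0, step=0.5):
    """HFC protocol: after cooling at h_start (below -Hc2), a single sweep
    runs up to beyond +Hc2. Negative fields correspond to decreasing field
    magnitude ('down'), positive fields to increasing magnitude ('up')."""
    return np.arange(h_start, h_max + step, step)
```

Such a representation makes explicit that the positive-field half of an HFC sweep traverses the same sequence of field values as a ZFC sweep, which is why the two protocols yield the same behavior above $H_{c1}$.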
\section{Results} \label{results} The presentation of the experimental results begins in section \ref{designations} with a summary of the designations and terminology used to describe the transitions between the different magnetic phases. Next, the magnetic phase diagrams are summarized in section \ref{diagrams} before typical data used to infer these phase diagrams are presented in section \ref{data}. Higher-harmonic contributions in the ac susceptibility, providing first-hand information on the presence and nature of hysteresis, are reported in section \ref{harmonics}. The presentation of the experimental results continues in section \ref{demag} with data for different sample shapes and demagnetizing fields and closes in section \ref{specific_heat}, where typical specific heat data are shown. \subsection{Designations and terminology} \label{designations} The temperature versus field diagram of {Cu$_{2}$OSeO$_{3}$} as determined in SANS displays five different noncollinear magnetic phases. In addition, the paramagnetic state at high temperatures and low fields and the field-polarized (ferromagnetic) state at low temperatures and high fields may be distinguished. This implies considerable complexity when inferring the magnetic phase diagram from bulk properties such as the magnetization, ac susceptibility, and specific heat. It therefore proves helpful to start with the designations and terminology used to address the phase boundaries before presenting the experimental data. Among the five noncollinear magnetic phases three are well known, namely, (i) the helical order, (ii) the conical order, and (iii) the high-temperature skyrmion (HTS) phase at the border between the paramagnetic phase and the conical state. Recent SANS studies identified in addition (iv) the tilted conical (TC) state and (v) a low-temperature skyrmion (LTS) phase at the border between the conical and field-polarized state.
Accordingly, we distinguish in the following eight transition fields as summarized in Table\,\ref{table:transition fields}. The boundaries between the helical and the conical, and the conical and field-polarized phases are denoted $H_{\rm c1}$ and $H_{\rm c2}$, respectively. The boundaries of the high-temperature skyrmion phase are denoted $H_{\rm a1}$ and $H_{\rm a2}$. Likewise, the boundaries of the low-temperature skyrmion phase and the tilted conical phase, are denoted $H_{\rm s1}$/$H_{\rm s2}$ and $H_{\rm t1}$/$H_{\rm t2}$, respectively. Values determined under increasing (up) and decreasing (down) magnetic field are labeled by the superscript ``u" and ``d", respectively. We note that for the high-temperature and low-temperature skyrmion phases as well as the tilted conical phase the subscripts 1 or 2 denote boundaries at lower and higher absolute field values, respectively. \begin{table}[t] \caption{\label{table:transition fields} Definitions of the transition fields between the different phases observed in our study. The labels ``u" and ``d" denote increasing (up) and decreasing (down) direction of the field sweep, respectively. The subscripts ``1" and ``2" denote values at lower and higher absolute field values, respectively. } \vspace{2mm} \begin{tabular}[t]{ll c} \hline\hline fields \hspace{6mm} & sweep dir. \hspace{3mm} & phase boundary \\ \hline\hline \vspace{1mm} $H_{\rm c1}^{\rm u}$, $H_{\rm c1}^{\rm d}$ & up, down & helical to conical\\ \vspace{1mm} $H_{\rm c2}^{\rm u}$, $H_{\rm c2}^{\rm d}$ & up, down & conical to field-polarized \\ \vspace{1mm} $H_{\rm a1}^{\rm u}$, $H_{\rm a1}^{\rm d}$ & up, down & high-temp. skyrmion, low field \\ \vspace{1mm} $H_{\rm a2}^{\rm u}$, $H_{\rm a2}^{\rm d}$ & up, down & high-temp. skyrmion, high field \\ \hline \vspace{1mm} $H_{\rm s1}^{\rm u}$, $H_{\rm s1}^{\rm d}$ & up, down & low-temp. skyrmion, low field \\ \vspace{1mm} $H_{\rm s2}^{\rm u}$, $H_{\rm s2}^{\rm d}$ & up, down & low-temp. 
skyrmion, high field \\ \vspace{1mm} $H_{\rm t1}^{\rm u}$, $H_{\rm t1}^{\rm d}$ & up, down & tilted conical, low field \\ \vspace{1mm} $H_{\rm t2}^{\rm u}$, $H_{\rm t2}^{\rm d}$ & up, down & tilted conical, high field \\ \hline\hline \end{tabular} \end{table} \subsection{Magnetic phase diagrams} \label{diagrams} Shown in Fig.~\ref{fig:phasediagrams} are the magnetic phase diagrams for field along the major crystallographic directions, namely $\langle 111\rangle$, $\langle 110\rangle$, and $\langle 100\rangle$. All diagrams are shown as a function of internal magnetic field, following correction of demagnetizing fields. The phase boundaries, as marked here by circles, were inferred from the susceptibility data, where the definitions for the transitions are given below. The color coding shown in the background of the phase diagrams reflects the size of the susceptibility, $\mathrm{d}M/\mathrm{d}H$, calculated from the magnetization. The temperature versus field protocols used, namely zero-field cooled and high-field cooled, are stated in each panel, where the direction of the field sweeps is marked by an arrow. For magnetic fields along $\langle 111\rangle$ and $\langle 110\rangle$, shown in Figs.~\ref{fig:phasediagrams}(a) and \ref{fig:phasediagrams}(b), the behavior reported in the literature is observed.\cite{2012:Adams:PhysRevLett} Here the phase diagrams comprise helical (H), conical (C), and HTS lattice order. With decreasing temperature both $H_{\rm c1}$ and $H_{\rm c2}$ increase monotonically. Only the ZFC phase diagrams are shown, since $H_{\rm c1}$ and $H_{\rm c2}$ exhibit very little and no hysteresis between increasing and decreasing field, respectively. It is also helpful to note that the magnetization displays a clear signature of the transition from the conical to the helical state under decreasing field (not shown). 
Taken together, the magnetic phase diagrams display all the characteristics that are well known from stoichiometric cubic chiral magnets, such as MnSi and FeGe.\cite{2016:Bauer:Book} The phase diagrams for a field parallel to $\langle 100\rangle$, as shown in Figs.~\ref{fig:phasediagrams}(c) and \ref{fig:phasediagrams}(d), display the following similarities with $\langle 111\rangle$ and $\langle 110\rangle$. The HTS phase emerges in a temperature and field range that varies somewhat with crystallographic orientation.\cite{2012:Adams:PhysRevLett} Further, the phase boundary between the conical and field-polarized state at $H_{\rm c2}$ increases with decreasing temperature down to about 40\,K. Also, under ZFC, the phase boundary between the helical and the conical state at $H_{\rm c1}$ increases, albeit with a smaller absolute value than for a field along $\langle 111\rangle$ and $\langle 110\rangle$. \begin{figure}[ht] \includegraphics[width=\linewidth]{fig-1} \caption{\label{fig:phasediagrams} Magnetic phase diagrams inferred from magnetization and susceptibility data measured in field sweeps after zero-field cooling (a)--(c) and high-field cooling (d) with fields along $\langle 111 \rangle$ (a), $\langle 110 \rangle$ (b), and $\langle 100 \rangle$ (c), (d). The following phases may be distinguished: helical (H), conical (C), field polarized (FP), tilted conical (TC), high-temperature skyrmion (HTS) and low-temperature skyrmion phase (LTS). } \end{figure} In contrast to the behavior for field parallel to $\langle 111\rangle$ and $\langle 110\rangle$, $H_{\rm c2}$ for a field parallel to $\langle 100\rangle$ reaches its highest value of $\sim\!40\,{\rm mT}$ around 40\,K, followed by a gentle decrease. Along with this decrease in $H_{\rm c2}$, clear signatures in the magnetization of two new phases, identified microscopically in neutron scattering \cite{2018:Chacon:NatPhys}, may be distinguished.
Namely, below $\sim\!\SI{30}{\kelvin}$ the tilted conical (TC) phase emerges. This is followed by the LTS phase, which emerges below $\sim \SI{15}{\kelvin}$. The absence of the tilted conical and LTS phases for field directions other than $\langle 100\rangle$ provides compelling evidence that magnetocrystalline anisotropies must play a decisive role in their formation. We note that for all temperature versus field protocols, our data are consistent with the tilted conical state appearing first, followed by the LTS phase, as described in the SANS study reported in Ref.\,\onlinecite{2018:Chacon:NatPhys}. Moreover, once the tilted conical state and LTS phase have formed, the tilted conical phase disappears before the LTS phase. The precise temperature and field ranges thereby depend on the temperature versus field protocol. Namely, under increasing fields [positive field values in Figs.~\ref{fig:phasediagrams}(c), \ref{fig:phasediagrams}(d)] the LTS phase extends to larger field values, whereas for decreasing fields [negative field values in Fig.~\ref{fig:phasediagrams}(d)] it persists down to lower magnetic field values. A final difference between field along the $\langle 100\rangle$ direction and fields along $\langle 111\rangle$ and $\langle 110\rangle$ concerns the conical-to-helical transition at $H_{\rm c1}$. While a clear signature is observed after ZFC under increasing field, no evidence for the existence of $H_{\rm c1}$ is observed under HFC. This originates from differences in domain populations as explained above and was previously observed in other cubic chiral magnets.\cite{2016:Bauer:PRB,2017:Bauer:PRB} Namely, in this configuration, one easy axis is parallel to the magnetic field while the other easy axes are perpendicular to the field direction.
Therefore, when the magnetic field is reduced from the conical into the helical state, only the domain parallel to the field remains populated, leading to a single-domain helical state that is indistinguishable from the conical state. \subsection{Experimental data} \label{data} The presentation of the experimental data is organized in three parts. First, typical magnetization and ac susceptibility data for different field orientations recorded under ZFC will be reported in section \ref{zfc-data}. This serves to illustrate the key features and associated definitions of the different phase transitions. It is followed by related data recorded under HFC in section \ref{hfc-data}, emphasizing the role of the temperature versus field protocol. Finally, selected susceptibility data are compared with neutron scattering data to justify the interpretation and definitions of the transition fields in section \ref{sans}. \subsubsection{Magnetization and susceptibility under ZFC} \label{zfc-data} Shown in Figs.~\ref{fig:zfc}\,(a)--\ref{fig:zfc}\,(d) are typical data of the magnetization, $M$, the susceptibility calculated from the magnetization, ${\rm d}M/{\rm d}H$, as well as the real and imaginary parts of the ac susceptibility, $\chi'$ and $\chi''$, respectively. All quantities are determined with respect to the internal field as described above. Note that the real part of the susceptibility is presented on a logarithmic scale, whereas the imaginary part is presented on a linear scale. Data are shown as a function of internal magnetic field, $H_{\rm int}$, across the high-temperature skyrmion phase at a temperature of 57.5\,K after ZFC, where the helimagnetic transition temperature is $T_{c}=58.5\,{\rm K}$. For fields along $\langle 111 \rangle$, $\langle 110 \rangle$, and $\langle 100 \rangle$ data are presented in blue, green, and red, respectively.
In perfect agreement with the literature, the magnetization increases at low fields quasilinearly, with a distinct change of slope at $H_{\rm c2}$. Small changes of slope at low and intermediate fields are related to the helical-to-conical transition and the HTS phase, respectively. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig-2} \caption{\label{fig:zfc} Typical magnetization and susceptibility data as a function of internal field following zero-field cooling with fields aligned along $\langle 111 \rangle$ (blue), $\langle 110 \rangle$ (green), and $\langle 100 \rangle$ (red). Data were recorded at $T=57.5\,\mathrm{K}$ (left column) and $T=2\,\mathrm{K}$ (right column). [(a) and (e)]~Magnetization data. [(b)--(d) and (f)--(h)]~Susceptibility calculated from the magnetization, $\mathrm{d}M/\mathrm{d}H$ (symbols), the real part of ac susceptibility $\chi'$ (black line) and the imaginary part of ac susceptibility $\chi''$ (colored line) for a field along $\langle 111 \rangle$ [second row: (b), (f)], field along $\langle 110 \rangle$ [third row: (c), (g)] and a field along $\langle 100 \rangle$ [fourth row: (d), (h)]. } \end{figure} The detailed features associated with the transitions between the different phases may be seen in $\mathrm{d}M/\mathrm{d}H$ shown in Figs.~\ref{fig:zfc}\,(b)--\ref{fig:zfc}\,(d), depicted as discrete data points. Related features are also reflected in the ac susceptibility, $\chi'$, shown as a line. With increasing field, three distinct maxima may be distinguished corresponding to the helical-to-conical transition at $H_\mathrm{c1}^\mathrm{u}$, the conical-to-skyrmion transition at $H_\mathrm{a1}^\mathrm{u}$, and the skyrmion-to-conical transition at $H_\mathrm{a2}^\mathrm{u}$. The change of slope at high fields, denoted as $H_\mathrm{c2}^\mathrm{u}$, marks the transition between the conical and the field-polarized state. 
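Reading off the transition fields amounts to locating the maxima of $\mathrm{d}M/\mathrm{d}H$. The following minimal numpy sketch illustrates the procedure on a synthetic magnetization curve, not on the measured data:

```python
import numpy as np

def dmdh(h, m):
    """Susceptibility dM/dH computed numerically from magnetization data M(H)."""
    return np.gradient(m, h)

def local_maxima(chi):
    """Indices of interior local maxima of dM/dH; in the analysis such
    maxima mark transition fields like Hc1, Ha1, and Ha2."""
    return [i for i in range(1, len(chi) - 1)
            if chi[i] > chi[i - 1] and chi[i] > chi[i + 1]]
```

On measured data, an additional prominence threshold (e.g. via `scipy.signal.find_peaks`) would be required to reject noise-induced maxima.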
Quantitatively, the lowest value of $H_\mathrm{c2}^\mathrm{u}$ is observed for $\mathbf{H}\!\parallel\!\langle 100 \rangle$, whereas the largest value is observed for $\mathbf{H}\!\parallel\!\langle 111 \rangle$. In comparison with the conical state, the susceptibility is reduced both in the helical and the HTS phase. Further, the real part of the ac susceptibility, $\chi'$, tracks $\mathrm{d}M/\mathrm{d}H$ except for the transition regions around $H_\mathrm{c1}^\mathrm{u}$, $H_\mathrm{a1}^\mathrm{u}$, and $H_\mathrm{a2}^\mathrm{u}$, suggesting the presence of reorientations of long-range textures and viscous processes occurring on timescales that are slow compared with the oscillation period of the ac field.\cite{2012:Bauer:PRB,2016:Bauer:PRB} The underlying dissipative processes may also account for the presence of nonvanishing contributions to the imaginary part of the ac susceptibility, $\chi''$, at $H_\mathrm{a1}^\mathrm{u}$ and $H_\mathrm{a2}^\mathrm{u}$. With decreasing temperature, the well-known signatures of the HTS phase disappear before the characteristics shown for \SI{2}{\kelvin} in Figs.\,\ref{fig:zfc}\,(e)--\ref{fig:zfc}\,(h) emerge. For magnetic field along $\langle 111\rangle$ and $\langle 110\rangle$ two transitions may be distinguished, which may be attributed to the helical-to-conical transition at $H_{\rm c1}^\mathrm{u}$ and the conical-to-field-polarized transition at $H_{\rm c2}^\mathrm{u}$. In contrast, for $\langle 100\rangle$, the field dependence is qualitatively different and four transition fields may be discerned in the magnetization and ac susceptibility. As shown below, comparison with SANS identifies these as the helical-to-conical transition at $H_{\rm c1}^\mathrm{u}$, the transitions of the tilted conical state at $H_{\rm t1}^\mathrm{u}$ and $H_{\rm t2}^\mathrm{u}$, and the high-field transition of the LTS phase at $H_{\rm s2}^\mathrm{u}$.
As the signatures of the onset of the tilted conical state are rather pronounced, it is not possible to identify a well-defined signature of the onset of the LTS phase at $H_{\rm s1}^\mathrm{u}$. For magnetic field between $H_{\rm c1}^\mathrm{u}$ and $H_{\rm c2}^\mathrm{u}$ distinctly anisotropic behavior may be observed between two crossing points in the field-dependence of the magnetization. Namely, at low fields the magnetization for all three directions is at first essentially identical. As a side note we remark that the signature in $M$ seen for $\langle 100\rangle$ at $H_{\rm c1}^\mathrm{u}$ is tiny. Above $H_{\rm c1}^\mathrm{u}$ the magnetization for field along $\langle 100 \rangle$ (red curve) is slightly larger. This situation changes at $H_{\rm c1}^\mathrm{u}$ for $\langle 111 \rangle$, where a pronounced jump in the magnetization for $\langle 111 \rangle$ (blue curve) defines the first crossing point. For fields between this crossing point and the onset of the tilted conical phase at $H_{\rm t1}^\mathrm{u}$, the magnetization for $\langle 111 \rangle$ (blue curve) is largest. At $H_{\rm t1}^\mathrm{u}$ the magnetization for field along $\langle 100 \rangle$ (red curve) displays a jump that defines the second crossing point. For field values above this second crossing point the magnetization for field along $\langle 100 \rangle$ (red curve) is again largest. Thus, between the two crossing points the easy magnetic axis appears to have changed from $\langle 100 \rangle$ to $\langle 111 \rangle$. Interestingly, closer inspection of the crossing points reveals that the magnetization is not exactly identical for all three directions. However, when calculating the magnetic work $W(M)$ considered below, a sharp crossing point is observed for all three directions. Thus, at the crossing points, the system appears to be perfectly isotropic. We return to an account of the mechanism driving all of this behavior in Sec.\,\ref{discussion}. 
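For reference, we assume here that the magnetic work $W(M)$ is the internal field integrated over the magnetization; this is a sketch of the convention, and the precise definition is introduced in the discussion:

```latex
W(M) = \int_0^{M} \mu_0 H_{\rm int}(M')\, \mathrm{d}M'
```

With such a definition, a sharp crossing point in $W(M)$ for all three field directions implies that the work required to magnetize the sample to that value of $M$ is direction-independent, i.e., the system responds isotropically at the crossing point.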
Further, for a field along $\langle 111\rangle$ and $\langle 110\rangle$ the jump in the magnetization at the helical-to-conical transition at $H_\mathrm{c1}$ results in a clear peak in $\mathrm{d}M/\mathrm{d}H$. This peak is not seen in the real part of the ac susceptibility $\chi'$, suggesting slow domain reorientations and viscous relaxation, as observed in previous studies of MnSi and {Fe$_{1-x}$Co$_{x}$Si}.\cite{2012:Bauer:PRB,2016:Bauer:PRB,2017:Bauer:PRB} Consistent with this evidence for dissipative processes, we find a small amount of hysteresis in $H_{\rm c1}$. In contrast, at $H_\mathrm{c2}$ we find excellent agreement between $\mathrm{d}M/\mathrm{d}H$ and the real part of the ac susceptibility $\chi'$. Also, there is essentially no hysteresis at $H_{\rm c2}$ (cf. heat capacity shown below). On a related note, at low temperatures, the imaginary part of the ac susceptibility, $\chi''$, displays a tiny contribution at $H_\mathrm{c1}$ for fields along $\langle 111 \rangle$ and $\langle 110 \rangle$ only. In stark contrast, we find a pronounced contribution in $\chi''$ for a field along $\langle 100 \rangle$ in the regime of the tilted conical phase. Comparison with the SANS data, presented below, reveals that $H_{\rm t1}^\mathrm{u}$ and $H_{\rm t2}^\mathrm{u}$ correspond to the points of inflection in $\chi''$. We note that $H_{\rm t2}^\mathrm{u}$ is always smaller than $H_{\rm s2}^\mathrm{u}$. Moreover, in contrast to the high-temperature skyrmion phase, where $H_{\rm a1}$ and $H_{\rm a2}$ [cf. Figs.\,\ref{fig:zfc}\,(b)--\ref{fig:zfc}\,(d)] are accompanied by strong dissipation processes causing a large finite value of $\chi''$, no such dissipation is observed for the LTS phase in the frequency range studied. The evidence for dissipation may reflect the underlying microscopic processes characteristic of the different magnetic phases and/or their creation.
Concerning the tilted conical phase, a change of field strength results in a change of propagation direction. In turn, it seems plausible that the strong dissipation within the tilted conical phase arises from changes of the propagation direction with the ac field. In contrast, the dissipation at the boundary of the HTS phase reflects processes of creation and decay of skyrmions, which originate in the creation and motion of Bloch points.\cite{2013:Milde:Science,2017:Poellath:PRL} The absence of such dissipation at the phase boundaries of the LTS phase may be interpreted as evidence for a large energy barrier that inhibits the creation and destruction of the skyrmions by means of the ac field. This might also hint at a different nucleation path of the LTS phase as compared to the HTS phase, notably by virtue of the tilted conical phase as an intermediate step. More generally, the absence or presence of dissipation might be used to distinguish between two different scenarios for first-order transitions. The presence of dissipation indicates the coexistence of two phases and slow dynamics of phase boundaries driven by oscillating fields. The absence of dissipation is expected both for second-order transitions and for those first-order transitions where phase coexistence is suppressed and an oscillating field is not able to induce oscillations in the volume occupied by each phase. This may occur if there is a transition from a high-energy metastable state A to a state B with lower free energy. In this case, only transitions from A to B but not from B to A are induced. Thus, an oscillating field will not induce oscillations between the two phases. This picture naturally explains the absence of dissipative effects when leaving the low-temperature skyrmion phase, which is expected to be metastable due to its topological protection.
\subsubsection{Magnetization and susceptibility under HFC} \label{hfc-data} Shown in Figs.~\ref{fig:hfc}(a)--\ref{fig:hfc}(d) are the magnetization and the susceptibilities under HFC for different crystallographic directions as recorded at a temperature of \SI{2}{\kelvin}. Again the real part of the susceptibility is presented on a logarithmic scale, whereas the imaginary part is presented on a linear scale. Data are presented for field sweeps from negative to positive field values as indicated by the black arrows. Thus, data recorded at negative fields (left-hand side) were determined under \textit{decreasing} absolute field strength (down) starting at $H<-H_{c2}$, whereas data observed at positive fields (right-hand side) were recorded under \textit{increasing} absolute field strength (up) starting effectively at $H=0$. As emphasized above, we have confirmed in a large number of careful tests that the data observed for positive field values (the right-hand side of Fig.~\ref{fig:hfc}), which start at $H=0$, agree very well with the data observed under ZFC, shown in Fig.~\ref{fig:zfc}, with the exception of the signature at $H_{c1}$. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig-3} \caption{\label{fig:hfc} Typical magnetization and susceptibility data at $T = \SI{2}{\kelvin}$ following high-field cooling. Data were recorded in a field sweep from left to right. Data are shown as a function of internal field along $\langle 111 \rangle$ (blue), $\langle 110 \rangle$ (green), and $\langle 100 \rangle$ (red). (a) Magnetization as a function of magnetic field under HFC. (b)--(d)~Susceptibility calculated from the magnetization, $\mathrm{d}M/\mathrm{d}H$ (dots), the real part of the ac susceptibility $\chi'$ (black line), and the imaginary part of the ac susceptibility $\chi''$ (colored line) for fields along $\langle 111 \rangle$, $\langle 110 \rangle$, and $\langle 100 \rangle$.
} \end{figure} It is helpful again to begin with an account of the data for fields along $\langle 111 \rangle$ and $\langle 110 \rangle$, shown in Figs.~\ref{fig:hfc}(a)--\ref{fig:hfc}(c), which correspond to the well-known properties of cubic chiral magnets reported in the literature. Essentially the same qualitative field dependence is observed under ZFC and HFC, where the magnetization as a function of field is almost point symmetric with respect to $H=0$ and $M=0$. Starting at a large negative field and reducing the field strength, a transition from the field-polarized state to the conical state at $H_{\rm c2}^{\rm d}$ is followed by the transition to the helical state at $H_{\rm c1}^{\rm d}$. Increasing the field through zero towards positive field values, the system transitions from the helical into the conical state at $H_{\rm c1}^{\rm u}$ and to the field-polarized state at $H_{\rm c2}^{\rm u}$. The susceptibilities associated with the different phases display a plateau in the conical phase, a reduced value in the helical state, and a rapid decrease when entering the field-polarized regime. Concerning the evidence of magnetic anisotropy, we note that $H_\mathrm{c1}$ is quantitatively almost identical for $\langle111\rangle$ and $\langle110\rangle$, whereas $H_\mathrm{c2}$ is highest for field parallel to $\langle111\rangle$ and visibly smaller for field parallel to $\langle110\rangle$. In comparison to fields along $\langle 111 \rangle$ and $\langle 110 \rangle$, the magnetization as a function of field for a field along $\langle 100 \rangle$ displays a sizable asymmetry with respect to $H=0$ and $M=0$. Data recorded under increasing fields (positive field values) are equivalent to the data recorded under ZFC with the exception of the tiny signature at $H_{\rm c1}^{\rm u}$ marked in Fig.\,\ref{fig:zfc}(h). As explained above, this reflects differences in domain population arising from the different temperature versus field histories.
Starting at large negative fields as a function of decreasing field strength, the system undergoes a transition from the field-polarized to the tilted conical phase at $H^\mathrm{d}_\mathrm{t2}$ and exits the tilted conical phase at $H^\mathrm{d}_\mathrm{t1}$. The transition from the LTS phase to the conical state takes place at $H^\mathrm{d}_\mathrm{s1}$, much below $H^\mathrm{d}_\mathrm{t1}$. Continuing this field sweep through zero, the transition from the conical to the tilted conical phase is observed at $H^\mathrm{u}_\mathrm{t1}$; the tilted conical phase vanishes at $H^\mathrm{u}_\mathrm{t2}$, followed by the transition of the LTS phase to the field-polarized state at $H^\mathrm{u}_\mathrm{s2}$. A key result of our study concerns the detailed behavior and field range of the tilted conical and LTS phases. Correcting for the effects of demagnetizing fields, we find no hysteresis between increasing (u) and decreasing (d) field strength for the transition fields $H_\mathrm{t1}$ and $H_\mathrm{t2}$ of the tilted conical phase. In contrast, under decreasing magnetic fields (negative field values), the LTS phase appears to emerge essentially together with the tilted conical phase around $H^\mathrm{d}_\mathrm{t2}$. However, when decreasing the field further, the LTS phase survives down to $H^\mathrm{d}_\mathrm{s1}$, i.e., well below the regime of the tilted conical state, which vanishes at $H^\mathrm{d}_\mathrm{t1}$. Strictly speaking, we infer the formation of the LTS phase from the observation of $H^\mathrm{d}_\mathrm{s1}$ and $H^\mathrm{u}_\mathrm{s2}$ in combination with what is known from the SANS study.
Interestingly, the small maximum of $\mathrm{d}M/\mathrm{d}H$ at $H^\mathrm{d}_\mathrm{s1}$ is related to a change of the magnetization, where the magnitude of $M$ is larger in the presence of the low-temperature skyrmion phase (cf.\ just above $H^\mathrm{d}_\mathrm{s1}$), while $M$ collapses onto the value of the magnetization observed for the other field directions just below $H^\mathrm{d}_\mathrm{s1}$. Regardless of whether the data are recorded under increasing or decreasing field, the tilted conical phase always emerges first while the low-temperature skyrmion phase vanishes last. Concerning the agreement and discrepancies between $\mathrm{d}M/\mathrm{d}H$ and $\chi'$ under HFC, we find essentially the same properties as observed under ZFC. Namely, analogous to the behavior at $H^\mathrm{u}_\mathrm{s2}$ described above, where $\chi'$ does not track $\mathrm{d}M/\mathrm{d}H$ and $\chi''$ remains vanishingly small, $\chi'$ does not track $\mathrm{d}M/\mathrm{d}H$ at $H^\mathrm{d}_\mathrm{s1}$ either, and there is also no evidence for dissipation in the imaginary part of the ac susceptibility. This indicates potentially important differences regarding the process of nucleation of the low-temperature skyrmion phase as compared with the high-temperature skyrmion phase that are beyond the scope of our study. \subsubsection{Comparison with small-angle neutron scattering} \label{sans} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{fig-4} \caption{\label{fig:SANS} Comparison of the small-angle neutron scattering intensity with the susceptibility as a function of internal magnetic field. Data recorded under HFC and ZFC are shown on the left- and right-hand side, respectively. (a1)--(a5) Typical neutron scattering intensity distribution in reciprocal space for the five different modulated phases as denoted in each panel. The scattering plane is marked in gray shading. [(b1) and (c1)] Neutron scattering intensity of the helical and conical state.
[(b2) and (c2)] Neutron scattering intensity of the tilted conical and LTS phase. [(b3) and (c3)] Susceptibility as calculated from the magnetization, $\mathrm{d}M/\mathrm{d}H$ (dots), and the real part of the ac susceptibility $\chi'$ (black line). [(b4) and (c4)] Imaginary part of the ac susceptibility $\chi''$.} \end{figure*} The definitions of the transition fields introduced above may be justified by means of a direct comparison with the magnetic field dependence of the scattering intensities observed in small-angle neutron scattering, shown in Fig.\,\ref{fig:SANS}. For this comparison, a data set was recorded under the same conditions reported in Ref.\,\onlinecite{2018:Chacon:NatPhys}. We note that the intensities shown here represent peak intensities and not an integration over rocking scans. Therefore, a quantitative comparison between different phases is not possible. For further technical details, we refer to Ref.\,\onlinecite{2018:Chacon:NatPhys}. Great care was exercised in order to avoid systematic errors. First, the neutron scattering data were recorded for a spherical {Cu$_{2}$OSeO$_{3}$} sample prepared from the same batch of material used to prepare the samples for the magnetization and ac susceptibility measurements reported here. Second, the magnetization and susceptibility data were recorded on a sample of almost cubic shape as listed in Table\,\ref{table:samples}. This way, the demagnetization factors were similar and the differences in the corrections tiny. In turn, data are shown as a function of internal field, where the demagnetization correction for the SANS data was calculated from the magnetization data measured on a different sample. Third, the same ZFC and HFC protocols were followed when recording the neutron scattering data, magnetization, and susceptibility to minimize systematic differences.
For what follows, it is helpful to review briefly the intensity patterns associated with the different magnetic phases shown schematically in Figs.\,\ref{fig:SANS}\,(a1)--\ref{fig:SANS}\,(a5). All magnetic phases are characterized by scattering intensities on the surface of a small sphere in reciprocal space, depicted in blue shading. The radius of this sphere reflects the characteristic length scale $|\bm{Q}|$ arising from the competition of ferromagnetic exchange and Dzyaloshinsky-Moriya interactions. The illustrations shown in Figs.\,\ref{fig:SANS}\,(a1)--\ref{fig:SANS}\,(a5) correspond to the helical phase (green), conical phase (gray), high-temperature skyrmion phase (light red), tilted conical phase (dark gray), and low-temperature skyrmion phase (dark red). In these depictions, the crystallographic $\langle100\rangle$ directions are shown as black arrows, while the detector plane is indicated by gray shading. It is important to note the orientations of the applied magnetic field with respect to the detector plane and the crystallographic axes. In the helical state, illustrated in Fig.~\ref{fig:SANS}\,(a1), the $\langle 100 \rangle$ easy magnetic axes result in three energetically equivalent domain populations as indicated by green-shaded dots. Under ZFC, these domains are populated equally. For sufficiently large fields, the conical state is stabilized. The associated scattering pattern corresponds to intensities for $\pm \mathbf{Q} \parallel \mathbf{H}$ as illustrated in Fig.~\ref{fig:SANS}\,(a2). Note that for $\mathbf{H} \parallel \langle 100 \rangle$, the conical intensity is indistinguishable from that of the helical domain oriented along the field direction. Further, shown in Fig.~\ref{fig:SANS}\,(a3) is the scattering pattern of the high-temperature skyrmion phase, which is located in a plane perpendicular to the applied magnetic field.
Shown here is the sixfold scattering pattern of a single skyrmion lattice domain population; an energetically degenerate second domain population exists in principle under a rotation of $30^{\circ}$ around the field direction. In addition to these well-known scattering patterns, the tilted conical phase and the LTS phase are characterized by the scattering patterns illustrated in Figs.~\ref{fig:SANS}\,(a4) and \ref{fig:SANS}\,(a5). For the tilted conical phase, the diffraction spots along the magnetic field split into four contributions at an angle with respect to the field direction. This tilting angle increases under increasing magnetic field. In the LTS phase, scattering intensity emerges in the plane perpendicular to the applied magnetic field. While this pattern assumes the shape of a uniform ring in most experiments, characteristic of a glassy appearance of the skyrmion lattice, it was demonstrated that the ring represents an average of randomly oriented sixfold diffraction patterns.\cite{2018:Chacon:NatPhys} The magnetic field dependence of the different scattering intensities is shown in Figs.~\ref{fig:SANS}\,(b1) and \ref{fig:SANS}\,(b2) for HFC, as well as Figs.~\ref{fig:SANS}\,(c1) and \ref{fig:SANS}\,(c2) for ZFC. Here, the color shading corresponds to the shading used in Figs.~\ref{fig:SANS}\,(a1)--\ref{fig:SANS}\,(a5). Under HFC and decreasing field, the tilted conical phase emerges at $H^\mathrm{d}_\mathrm{t2}$, defined at the point of inflection of the scattering intensity, which corresponds to the point of inflection of $\chi''$. The LTS phase, which in neutron scattering clearly appears at a field value smaller than $H^\mathrm{d}_\mathrm{t2}$, does not exhibit a clear signature of its emergence in the magnetization or susceptibility. The transition field at which the tilted conical phase vanishes, $H^\mathrm{d}_\mathrm{t1}$, may be defined at the point of inflection in $\chi''$ as marked.
Finally, when decreasing the field further, the disappearance of the LTS phase at $H^\mathrm{d}_\mathrm{s1}$ is clearly connected with a distinct maximum in the susceptibility inferred from the magnetization, $\mathrm{d}M/\mathrm{d}H$, where neither a peak in $\chi'$ nor a contribution in $\chi''$ is seen. Increasing the magnetic field further after HFC [the right-hand side of panels (b1)--(b4)], the conical-to-helical transition may be seen at $H^\mathrm{d}_\mathrm{c1}$ (not marked for clarity), which coincides with $H^\mathrm{d}_\mathrm{s1}$, as well as at $H^\mathrm{u}_\mathrm{c1}$. Yet, no signatures may be seen in the magnetization and susceptibility at $H^\mathrm{d}_\mathrm{c1}$, which is why only $H^\mathrm{d}_\mathrm{s1}$ is labeled. However, when increasing the field further, the magnetic field dependence of the scattering intensities of the tilted conical phase again displays excellent agreement with $\chi''$ at the points of inflection of both quantities, defining $H^\mathrm{u}_\mathrm{t1}$ and $H^\mathrm{u}_\mathrm{t2}$. Again, the LTS phase emerges at a field value $H^\mathrm{u}_\mathrm{s1}$, slightly larger than $H^\mathrm{u}_\mathrm{t1}$, and vanishes at a field $H^\mathrm{u}_\mathrm{s2}$ well above $H^\mathrm{u}_\mathrm{t2}$. Here $H^\mathrm{u}_\mathrm{s2}$ corresponds accurately to the location of an additional small peak in $\mathrm{d}M/\mathrm{d}H$ that is neither tracked in $\chi'$ nor visible in $\chi''$, where the latter is vanishingly small. The characteristics observed in the neutron scattering intensity, the magnetization, and the susceptibilities under HFC are reproduced very well under ZFC, as shown in Figs.~\ref{fig:SANS}\,(c1)--\ref{fig:SANS}\,(c4). The only difference that may be noticed here concerns the tiny signature in $\mathrm{d}M/\mathrm{d}H$ at $H^\mathrm{u}_\mathrm{c1}$, which is not present under HFC.
\subsection{Higher harmonics in the susceptibility} \label{harmonics} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{fig-5.pdf} \caption{\label{fig:harmonics} Higher-harmonic contributions in the ac susceptibility as a function of field at selected temperatures for fields along the $\langle 110 \rangle$ and $\langle 100 \rangle$ directions. The set of panels on the left corresponds to $\mathbf{H}\parallel\langle 110 \rangle$; the set of panels on the right corresponds to $\mathbf{H}\parallel\langle 100 \rangle$. Rows display the first, second, and third harmonic. Columns correspond to different temperatures as marked in the plots. Data were recorded following high-field cooling in a field sweep from negative to positive field values. The negative branch of the $\langle110\rangle$ data is not shown for clarity, as it provides no further information.} \end{figure*} Motivated by the observation of clear differences between $\mathrm{d}M/\mathrm{d}H$ and $\chi'$, as well as substantial contributions in $\chi''$, we explored higher-harmonic contributions to the ac susceptibility up to third order for fields along $\langle 110 \rangle$ and $\langle 100 \rangle$ at selected temperatures. Data were recorded in a single field sweep from negative to positive fields after HFC, i.e., data recorded for positive field values during these sweeps essentially correspond to ZFC conditions. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{fig-6.pdf} \caption{\label{fig:demag} Magnetization $M$, calculated susceptibility $\mathrm{d}M/\mathrm{d}H$, as well as real and imaginary parts of the ac susceptibility, $\chi'$ and $\chi''$, respectively, for different sample shapes with different demagnetization factors and field along $\langle100\rangle$. The demagnetization factors are denoted above each column (see Table\,\ref{table:samples} for further details).
Data were recorded following high-field cooling in a single field sweep, where data recorded at negative fields are mirrored to positive field values. Data branches recorded under decreasing and increasing magnitude of the field are shown in blue and red shading, respectively. } \end{figure*} In general, higher harmonics reflect the presence of strong nonlinearities in $M(H)$ even for the tiny excitation amplitudes of the ac probing field (of the order of 0.1\,mT). In the spirit of a brief introduction, we note that the magnetic response of a sample to an external magnetic field \begin{equation} H(\omega, t) = H_\mathrm{dc} + H_\mathrm{ac} \cos{\omega t} \end{equation} may be expressed in terms of the Fourier expansion \begin{eqnarray} M(\omega, t) &=& M_0 + \sum\limits_{n=1}^{\infty} \left(M_n' \cos{n \omega t} + M_n''\sin{n \omega t}\right)\\ &=& \chi_\mathrm{dc} H_\mathrm{dc} + H_\mathrm{ac} \sum\limits_{n=1}^{\infty} \left(\theta_n' \cos{n \omega t} + \theta_n''\sin{n \omega t}\right), \end{eqnarray} where the Fourier coefficients represent unitless quantities given by \begin{equation} \theta_n' = \frac{1}{\pi H_\mathrm{ac}} \int\limits_0^{2\pi} M(\omega, t) \cos{n \omega t} \D\omega t \end{equation} and \begin{equation} \theta_n'' = \frac{1}{\pi H_\mathrm{ac}} \int\limits_0^{2\pi} M(\omega, t) \sin{n \omega t} \D\omega t. \end{equation} As mentioned in Sec.\,\ref{methods}, in the literature the coefficients $\theta_n'$ and $\theta_n''$ are also known as \textit{harmonic} susceptibilities. In comparison, when developing the response to an excitation in a Taylor series, the expansion coefficients, denoted $\chi_n'$ and $\chi_n''$, are referred to as \textit{nonlinear} susceptibilities. For mathematically well-behaved properties it is possible to infer the real and imaginary parts of the nonlinear susceptibilities, $\chi'_n$ and $\chi''_n$, from the Fourier coefficients $\theta_n'$ and $\theta_n''$ (see, e.g., Ref.\,\onlinecite{mydosh:1993} for details).
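These definitions may be made concrete by a short numerical sketch. The snippet below is our own illustration with a toy polynomial $M(H)$ (not measured data); it extracts $\theta_n'$ and $\theta_n''$ for $H_\mathrm{dc}=0$ and confirms the leading-order relations between the harmonic and nonlinear susceptibilities used in the following.

```python
import numpy as np

def harmonic_susceptibilities(M_of_H, H_dc, H_ac, n_max=3, n_t=4096):
    """Fourier (harmonic) susceptibilities theta_n' and theta_n'' of the
    response M(t) to H(t) = H_dc + H_ac cos(wt), following the definitions
    in the text. Integrals over one period are evaluated with the
    rectangle rule, which is spectrally accurate for periodic integrands."""
    wt = 2.0 * np.pi * np.arange(n_t) / n_t
    M = M_of_H(H_dc + H_ac * np.cos(wt))
    # (1 / (pi * H_ac)) * int_0^{2pi} M cos(n wt) d(wt)  =  2 <M cos(n wt)> / H_ac
    tp = np.array([2.0 * np.mean(M * np.cos(n * wt)) / H_ac
                   for n in range(1, n_max + 1)])
    tpp = np.array([2.0 * np.mean(M * np.sin(n * wt)) / H_ac
                    for n in range(1, n_max + 1)])
    return tp, tpp

# Toy lossless nonlinear magnetization: M = x1*H + x2*H^2 + x3*H^3
x1, x2, x3 = 1.0, 0.5, 0.2
M_of_H = lambda H: x1 * H + x2 * H**2 + x3 * H**3

H_ac = 0.1
tp, tpp = harmonic_susceptibilities(M_of_H, 0.0, H_ac)

# Leading-order relations quoted in the text:
# chi ~ theta_1,  chi_2 ~ 2 theta_2 / H_ac,  chi_3 ~ 4 theta_3 / H_ac^2
print(tp[0], 2.0 * tp[1] / H_ac, 4.0 * tp[2] / H_ac**2)
```

For this lossless toy response all $\theta_n''$ vanish, and the printed combinations recover $\chi_1$, $\chi_2$, and $\chi_3$ to leading order in $H_\mathrm{ac}$.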
However, since higher-order contributions rapidly decrease in strength, we approximate the nonlinear susceptibilities by the leading-order contributions of the harmonic susceptibilities, namely, $\chi=\chi_1\approx\theta_1$, $\chi_2\approx2\,H_\mathrm{ac}^{-1}\theta_2$, and $\chi_3\approx4\,H_\mathrm{ac}^{-2}\theta_3$. Given that the magnetization curve for field along {$\langle100\rangle$} is not point symmetric with respect to $M=0$ and $H=0$, the second harmonic may not be point symmetric either. Further, at sharp changes of slope of the magnetization curve, the second harmonic may be expected to be finite, subject to the precise excitation amplitude. Shown in Fig.\,\ref{fig:harmonics} are typical higher-order contributions of the harmonic susceptibility, notably $\chi$, $\theta_2$, and $\theta_3$. In our measurements, we recorded the properties up to the third harmonic at selected temperatures following HFC. Data are shown as a function of internal field for field along $\langle 110 \rangle$ (left panels) and field along $\langle 100 \rangle$ (right panels), where rows correspond to the first, second, and third harmonic susceptibility, while columns correspond to different temperatures. Data at negative fields for {$\langle110\rangle$} are not shown, as they are symmetric to the data at positive field values and thus do not add any information. For a field along $\langle 110 \rangle$ and $T=\SI{2}{\kelvin}$, the second harmonic is essentially zero except for a negative peak of the real part, $\chi'_2$, at $H_\mathrm{c2}$. At $T=\SI{57.5}{\kelvin}$, a tiny additional peak is present at $H_\mathrm{a2}$, as well as two very weak peaks at $H_\mathrm{a1}$ and $H_\mathrm{c1}$ (hardly visible on the scale chosen here). Moreover, the third harmonic, $\chi_3$, is essentially zero at both $T = \SI{2}{\kelvin}$ and $\SI{57.5}{\kelvin}$.
In comparison, for a field along $\langle 100 \rangle$, the behavior at high temperatures is similar to that for field along $\langle 110\rangle$. Namely, at $T=\SI{57.5}{\kelvin}$, the second harmonic, $\chi'_2$, displays a negative peak associated with $H_\mathrm{c2}$ as well as additional tiny peaks at $H_\mathrm{a2}$, $H_\mathrm{a1}$, and $H_\mathrm{c1}$ (barely resolved on the scale chosen here). Moreover, the third harmonic, $\chi_3$, is essentially zero. Similarly, at $T=\SI{45}{\kelvin}$ only the peak associated with $H_\mathrm{c2}$ is observed in $\chi_2$, where the data are point symmetric with respect to $H=0$ and $\chi_2=0$. Again, the third harmonic, $\chi_3$, is essentially zero. In contrast, at $T=\SI{2}{\kelvin}$, a huge second harmonic is observed in the field range of the tilted conical phase for $\mathbf{H} \parallel \langle 100 \rangle$. Interestingly, in comparison with the second-harmonic signal at high temperatures associated with $H_\mathrm{c2}$, the new contribution has the opposite sign. Also, a huge third harmonic is observed in the field region of the tilted conical phase. In fact, key features of the second and third harmonics are already present at $T=\SI{20}{\kelvin}$, where they are weaker and exhibit additional fine structure (double peak). The highly nonlinear response we observe in {Cu$_{2}$OSeO$_{3}$} represents a key characteristic of the tilted conical state rather than the LTS phase. We speculate that the strong nonlinear and dissipative response arises from domain walls and other inhomogeneous textures generated when the tilted phase is nucleated. In the literature, similar properties are typically reported for superconductors as well as molecular and low-dimensional magnets\cite{2013:balanda}, where specific models are required for the interpretation. Unfortunately, it is at present not clear how the nonlinearities are generated.
In the future, a Cole-Cole plot of the interdependence of the real and imaginary parts of $\chi$ for different frequencies may, for instance, provide information on the characteristic activation energies.\cite{2013:balanda} However, this level of experimental exploration and analysis is beyond the scope of the study presented here. \subsection{Demagnetizing effects} \label{demag} In view of the importance of the temperature versus field protocol and the rather subtle features in the magnetization and susceptibilities associated with the various phase transitions, we have recorded the magnetization and ac susceptibility for a series of samples with different demagnetization factors. For samples with large demagnetization factors, we find the reconstruction of the intrinsic behavior to be prohibitively difficult, as small effects are irreversibly smeared out. Also, for an irregular sample shape, the nucleation processes may actually be altered. Moreover, theoretical modeling reported in Ref.\,\onlinecite{2018:Chacon:NatPhys} revealed that dipolar interactions play an important role for the details of the magnetic phase diagram, as discussed below. Shown in Fig.\,\ref{fig:demag} are typical magnetization and susceptibility data as a function of applied field for various samples with different demagnetizing factors (see also Table\,\ref{table:samples}). Each column corresponds to a different demagnetization factor as stated at the top of the figure. Data were measured in a single field sweep corresponding to HFC, i.e., analogous to the data shown in Fig.\,\ref{fig:hfc}. For lack of space, data at negative fields are shown as a function of positive field values. Therefore, data shown in rows (b) and (c) were recorded under increasing field, corresponding to the right-hand side of Fig.\,\ref{fig:hfc}, i.e., positive field values. They are essentially equivalent to the behavior observed under ZFC.
In comparison, data shown in rows (d) and (e) were recorded under decreasing field, corresponding to the left-hand side of Fig.\,\ref{fig:hfc}, i.e., negative field values. \begin{figure*} \centering \includegraphics[width=1.0\linewidth]{fig-7.pdf} \caption{\label{fig:demag-N} Magnetic phase diagrams as a function of demagnetization factor $N$ and magnetic field $H$ applied along $\langle 100\rangle$. Note that different units are used for $H$ in the theoretical calculations as compared with experiment. (a) Theoretical phase diagram obtained from a Ginzburg-Landau model in the presence of the cubic anisotropy of Eq.~\eqref{CubicAniso}, as described in Ref.~\onlinecite{2018:Chacon:NatPhys}. Here we assumed that only single-domain states exist. We also display the metastable tilted conical state in the regime where its energy is higher than that of the LTS phase but lower than that of both the conical and ferromagnetic phases. It vanishes for small $N_z$. Parameters and units correspond to those used in Ref.~\onlinecite{2018:Chacon:NatPhys}. (b) Phase diagram of (a) when including the exchange anisotropy $c\sum_i(\nabla_iM_i)^2$ with $c=-0.06$, as described in the main text. The metastable tilted conical state exists even for $N_z \to 0$. (c) Thermodynamic phase diagram [parameters as in (a)] including regions where demagnetization effects stabilize phase coexistence; see main text. Note that the experimental regions of phase coexistence can be much larger due to hysteresis effects in the region where phases are metastable. (d) Summary of the experimental critical fields for samples with different demagnetization factors $N$, as detailed in Fig.\,\ref{fig:demag}, with data for negative fields mirrored to positive fields. } \end{figure*} Assuming uniform demagnetization fields across the sample volume, a shearing of the data towards larger field values with increasing demagnetization factor is expected.
Moreover, the bar-shaped sample geometry results in a distribution of internal field directions that generates a smearing out of all features. As shown in the first column of Fig.\,\ref{fig:demag} for a tiny demagnetizing factor, $N=0.07$, all features observed under ZFC and HFC described in detail above may be readily identified. The two most important characteristics concern the presence of hysteresis in the magnetization due to the LTS phase, and the observation of dissipation in the imaginary part of the susceptibility $\chi''$, associated with the tilted conical phase. For increasing demagnetization factor, the steep rise (jump) in the magnetization associated with the onset of the tilted conical phase turns into a shallow rise. Nonetheless, a nonvanishing contribution in $\chi''$ is observed in a well-defined field range. This suggests that the tilted conical phase continues to form in a finite field range. In contrast, the signatures of the LTS phase are smeared out and vanish. Without microscopic information, it is unfortunately not possible to infer any further information on the LTS phase. In order to illustrate the effects of demagnetizing fields on a theoretical level, we consider the Ginzburg-Landau model described in Ref.~\onlinecite{2018:Chacon:NatPhys}, taking into account the cubic magnetocrystalline anisotropy, cf. Eq.~\eqref{CubicAniso} below. For a given applied field $\mathbf{H}$, the magnetic moments are subject to an internal field $\mathbf{H}_{\rm int} = \mathbf{H} - {\bf N} \mathbf{M}$ that depends on the magnetization $\mathbf{M}$ as well as the demagnetization tensor ${\bf N}$. This results in three main physical effects: (i) a shift of phase boundaries due to the change of the internal fields, (ii) a possible tilt of the internal field in the tilted conical phase, and (iii) the stabilization of coexistence regimes.
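The magnitude of effect (i), i.e., the shift and shearing of signatures in applied-field units, may be illustrated by a minimal numerical sketch. This is our own toy model (scalar demagnetization factor, linear intrinsic susceptibility with rigid saturation, arbitrary units), not the Ginzburg-Landau calculation of Ref.~\onlinecite{2018:Chacon:NatPhys}.

```python
import numpy as np

# Demagnetization correction H_int = H_applied - N * M for a uniformly
# magnetized sample. Toy model: linear conical susceptibility chi_con
# up to a saturation magnetization M_s (arbitrary units, single domain,
# scalar demagnetization factor N along the field direction).
chi_con, M_s, N = 2.0, 1.0, 0.3
H_app = np.linspace(0.0, 1.5, 301)

# Self-consistent solution of M = chi_con * (H_app - N * M), capped at M_s
M = np.minimum(chi_con * H_app / (1.0 + chi_con * N), M_s)
H_int = H_app - N * M

# The measured (sheared) slope dM/dH_app is reduced relative to the
# intrinsic susceptibility: chi_app = chi_con / (1 + N * chi_con)
chi_app = np.gradient(M, H_app)[0]

# The saturation field appears shifted from M_s / chi_con (internal)
# to M_s / chi_con + N * M_s (applied), i.e., upward by N * M_s
i_sat = int(np.argmax(M >= M_s))
print(chi_app, H_app[i_sat], H_int[i_sat])
```

In this sketch the apparent susceptibility is reduced by the factor $1 + N\chi$ and the saturation field in applied-field units is shifted upward by $N M_{\rm s}$, which is precisely the shearing referred to above; plotting versus $H_{\rm int}$ removes the shift but cannot undo the smearing caused by nonuniform internal fields in non-ellipsoidal samples.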
Shown in Figs.\,\ref{fig:demag-N}\,(a) and \ref{fig:demag-N}\,(b) are phase diagrams under an applied field $\mathbf{H}$ along $\langle 100\rangle$ as a function of demagnetization factor $N_z$. The two phase diagrams only consider uniform phases, ignoring the possibility of phase coexistence discussed below. The thermodynamically stable phases are the conical, the LTS, and the ferromagnetic state. We also show the metastable tilted conical phase in the regime where it has lower energy than both the ferromagnetic and conical phases. For the chosen parameters, its energy is, however, always higher than the energy of the LTS state. Note that we do not show a possible skyrmion square lattice phase. In some part of the phase diagram (see Ref.~\onlinecite{2018:Chacon:NatPhys}), its energy is very close (identical within numerical errors and model uncertainties) to that of the triangular skyrmion lattice. \begin{figure*} \centering \includegraphics{fig-8.pdf} \caption{\label{fig:cu2seo4:heat_capacity} Magnetization (first row) and specific heat (second row) as a function of internal field at \SI{2}{\kelvin} for all major directions. Data were recorded following high-field cooling in a single field sweep from negative to positive field values, where data recorded at negative fields are mirrored to positive field values. Data branches recorded under decreasing and increasing field strength are indicated in blue and red shading, respectively. } \end{figure*} For increasing $N_z$, all phase boundaries shift to larger magnetic fields, as the demagnetization factor reduces the internal magnetic field. The energetics of the metastable tilted conical phase is affected in more subtle ways, as the tilting can induce internal field components perpendicular to the external magnetic field (the calculation assumes a single domain). Similar to the pitch of the modulation, the uniform magnetization is also tilted away from the applied magnetic field, notably $\mathbf{H} \parallel \langle 100\rangle$.
Thus, for the single domain state assumed here, the internal field, comprising the applied field, the uniform magnetization of the tilted conical state and the demagnetization field, increasingly deviates from $\langle 100\rangle$. For $N_z\to 0$ this effect is enhanced such that the tilted conical phase eventually becomes energetically unfavorable compared to the field-polarized state, as the general condition $N_x+N_y+N_z=1$ implies that the demagnetizing field of the transverse field components is boosted. For the parameters used in Ref.~\onlinecite{2018:Chacon:NatPhys} the tilted conical phase vanishes for $N_z \to 0$, see Fig.\,\ref{fig:demag-N}\,(a). Our experimental data display, in contrast, the signatures of the tilted conical phase even for tiny values of $N$, as summarized in Fig.\,\ref{fig:demag-N}\,(d). However, there are different possibilities (or combinations thereof) to account for this apparent discrepancy. First, additional anisotropies, such as exchange contributions $c[(\nabla_x M_x)^2+(\nabla_y M_y)^2+(\nabla_z M_z)^2]$, could stabilize the tilted conical phase further such that it remains metastable even for $N_z \to 0$. This is illustrated in Fig.\,\ref{fig:demag-N}\,(b), where this additional exchange anisotropy term has been taken into account on top of the model evaluated in Fig.\,\ref{fig:demag-N}\,(a). Furthermore, the suppression of the tilted conical phase by internal field components perpendicular to the external field can be reduced when several domains with different tilting directions exist [cf.\,Fig.\,\ref{fig:SANS}\,(a4)], such that the total transverse magnetization averages out. We have checked that this effect indeed enlarges the metastable tilted phase, but not up to $N_z=0$. Both phase diagrams, Figs.\,\ref{fig:demag-N}\,(a) and (b), were generated by finding the uniform phase with minimal energy for a given external field $\mathbf{H}$.
At the first order phase transitions, however, the two states in question have different magnetizations. In this case demagnetization effects stabilize regions of phase coexistence. Boundaries of the regions of phase coexistence (ignoring hysteresis effects and domain-wall formation, assuming a perfectly ``soft'' magnet and smooth transitions into the coexistence regimes) are determined from the condition that the internal magnetic field has to match the critical field at $N_z=0$. The resulting phase diagram (without the metastable tilted phase) is shown in Fig.\,\ref{fig:demag-N}\,(c). The experimentally observed regions of phase coexistence are much larger, see Fig.\,\ref{fig:demag-N}\,(d). This can be attributed to strong hysteresis effects: after their formation, topologically protected skyrmions remain locally stable over a large field range.\cite{2017:Wild:SciAdv} \subsection{Specific heat} \label{specific_heat} For temperatures exceeding $\sim10\,{\rm K}$, the specific heat $C$ is dominated by the contribution of phonons. A detailed investigation of the temperature dependence of $C$ at low temperatures was beyond the scope of the work presented here. In order to gain some insight into possible signatures of the two new magnetic phases, we focused instead on the magnetic field dependence of $C$ at a single low temperature. Shown in Fig.\,\ref{fig:cu2seo4:heat_capacity} is a comparison of the magnetization, $M$, and the specific heat $C$ as a function of magnetic field along {$\langle111\rangle$}, {$\langle110\rangle$} and {$\langle100\rangle$} at a constant temperature of $\SI{2}{\kelvin}$. For clarity only the change of the specific heat, $\Delta C = C(B)-C(H=0)$, is shown. Data were recorded following high-field cooling in a single field sweep from negative to positive field values, where data recorded at negative fields are mirrored to positive field values.
Data branches recorded under decreasing and increasing field are indicated in blue and red shading, respectively. Data recorded under increasing and decreasing field, as marked by the arrows, correspond essentially to ZFC and HFC conditions, respectively. The magnetization, shown in Figs.\,\ref{fig:cu2seo4:heat_capacity}(a), \ref{fig:cu2seo4:heat_capacity}(b) and \ref{fig:cu2seo4:heat_capacity}(c) for field along {$\langle111\rangle$}, {$\langle110\rangle$} and {$\langle100\rangle$}, respectively, displays the key characteristics associated with the helical phase, the conical phase, the tilted conical phase, the low-temperature skyrmion phase and the field-polarized phase. The specific heat for field along {$\langle111\rangle$} and {$\langle110\rangle$} as a function of magnetic field, shown in Figs.\,\ref{fig:cu2seo4:heat_capacity}(d) and \ref{fig:cu2seo4:heat_capacity}(e), exhibits three regimes of different slope, $\mathrm{d}C/\mathrm{d}H$. In the helical state, the specific heat is essentially constant, followed by a linear decrease in the conical state, which becomes steeper in the field-polarized state. Apart from a small amount of hysteresis in $M$ at $H_\mathrm{c1}$, there is no evidence for further hysteresis. In contrast, there are clear differences between increasing and decreasing field for field along {$\langle100\rangle$}, shown in Fig.\,\ref{fig:cu2seo4:heat_capacity}\,(f). Namely, under increasing magnetic field (red), three distinct kinks corresponding to the transitions at $H_\mathrm{t1}^\mathrm{u}$, $H_\mathrm{t2}^\mathrm{u}$, and $H_\mathrm{s2}^\mathrm{u}$ are visible. At low magnetic fields up to $H_\mathrm{t1}^\mathrm{u}$, the specific heat is essentially unchanged, $\Delta C(B)\approx0$, where we note that this includes the helical and the conical phase. Between $H_\mathrm{t1}^\mathrm{u}$ and $H_\mathrm{t2}^\mathrm{u}$, $\Delta C(B)$ drops sharply, coinciding with the steep increase in $M$.
Between $H_\mathrm{t2}^\mathrm{u}$ and $H_\mathrm{s2}^\mathrm{u}$, the rate of change, $\mathrm{d}C/\mathrm{d}H$, decreases, followed by another decrease of $\mathrm{d}C/\mathrm{d}H$ above $H_\mathrm{s2}^\mathrm{u}$. In comparison, under decreasing field (blue), only two changes of slope, corresponding to $H_\mathrm{t1}^\mathrm{d}$ and $H_\mathrm{t2}^\mathrm{d}$, may be observed. Thus, there is clearly hysteresis between $H_\mathrm{t1}^\mathrm{d}$ and $H_\mathrm{s2}^\mathrm{u}$. When further decreasing the field, the specific heat coincides with the data recorded under increasing field. In contrast, there is only very faint evidence in the specific heat of the transition at $H_\mathrm{s1}^\mathrm{d}$, which is clearly present in the magnetization. Since the tilted conical phase and the low-temperature skyrmion phase emerge as a function of magnetic field, the changes of entropy expected between the different phases are rather small. Nonetheless, we observe a suppression of the specific heat with increasing magnetic field for all three field directions. The clear signatures in the specific heat at the transition fields of the tilted conical phase and low-temperature skyrmion phase, as well as the observation of hysteresis, establish thermodynamically distinct phases. However, some caution is necessary, as the strongly hysteretic behavior observed in neutron scattering, the magnetization and the specific heat is characteristic of strong first-order transitions. Both the small size of the effects and the experimental method used here, notably small-heat-pulse relaxation calorimetry, make the detection of a latent heat for a weakly field-dependent phase transition line essentially impossible.
\section{Discussion} \label{discussion} The changes of the magnetic phase diagram of {Cu$_{2}$OSeO$_{3}$} as a function of crystallographic direction inferred from our magnetization, susceptibility and specific heat data reflect the presence of distinct magnetic anisotropies. The discussion of these anisotropies is organized in three parts. In section \ref{anisotropy}, the anisotropies of the magnetization of {Cu$_{2}$OSeO$_{3}$} are compared with those observed in the related compounds {Fe$_{1-x}$Co$_{x}$Si} ($x=0.2$) and MnSi. This is followed by a quantitative estimate of the strength of the cubic anisotropies, presented in section \ref{energy}. The discussion closes in section \ref{potential} with qualitative considerations of the effective anisotropy potential as a function of applied magnetic field, and how these connect with the anisotropy of the magnetization and the different magnetic phases. \subsection{Anisotropy of the magnetization} \label{anisotropy} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{fig-9.pdf} \caption{\label{fig:materials} Magnetization of {Cu$_{2}$OSeO$_{3}$}, {Fe$_{1-x}$Co$_{x}$Si} ($x=0.2$), and MnSi, illustrating the different strengths of the cubic magnetic anisotropies. Data are shown as a function of internal magnetic field in units of the upper critical field, $H_{\rm c2}$, determined in the direction exhibiting the largest value of $H_{\rm c2}$. The magnetization is presented in units of the magnetization at high fields. (a) Magnetization of {Cu$_{2}$OSeO$_{3}$}, exhibiting strong $\langle 100\rangle$ easy-axes anisotropy. (b) Magnetization of {Fe$_{1-x}$Co$_{x}$Si} ($x=0.2$), exhibiting moderate $\langle 100\rangle$ easy-axes anisotropy. (c) Magnetization of MnSi, exhibiting very weak $\langle 111\rangle$ easy-axes anisotropy.
} \end{figure} Shown in Fig.\,\ref{fig:materials} is a comparison of the magnetization of {Cu$_{2}$OSeO$_{3}$}, {Fe$_{1-x}$Co$_{x}$Si} ($x=0.2$), and MnSi for all major crystallographic directions. Data are shown as a function of internal magnetic field in units of the upper critical field $H_{\rm c2}$, as determined in the direction exhibiting the largest value of $H_{\rm c2}$. The magnetization is presented in units of the magnetization at high fields. As emphasized in section \ref{data}, data for {Cu$_{2}$OSeO$_{3}$}, shown in Fig.\,\ref{fig:materials}\,(a), exhibit pronounced magnetic anisotropy. While the magnetization appears to be essentially isotropic up to $H_{\rm c1}$, we find that the magnetization for {$\langle111\rangle$} is larger than for any other direction between $H_{\rm c1}$ for {$\langle111\rangle$} and $H_{\rm t1}$ for {$\langle100\rangle$}. Above $H_{\rm t1}$, the magnetization for {$\langle100\rangle$} is largest. The behavior at intermediate fields may be interpreted in terms of a field-induced inversion of the magnetic anisotropy, where the {$\langle111\rangle$} axes appear to be the easy axes for a small field range. However, as shown in further detail below, the cubic anisotropy (the sign of $K$) does not change as a function of field and the {$\langle100\rangle$} axes remain the easy axes, consistent with $H_{\rm c2}$ being smallest and largest for {$\langle100\rangle$} and {$\langle111\rangle$}, respectively. Instead, the behavior observed here reflects the property of a modulated spin structure that remains essentially rigid in a cubic anisotropy potential. The magnetization of {Fe$_{1-x}$Co$_{x}$Si} ($x=0.2$) shown in Fig.\,\ref{fig:materials}\,(b) is reminiscent of {Cu$_{2}$OSeO$_{3}$} and characteristic of a weak $\langle 100\rangle$ easy-axes anisotropy, even though the variations in {Fe$_{1-x}$Co$_{x}$Si} are not as pronounced. In comparison to {Cu$_{2}$OSeO$_{3}$}, the lower critical field $H_\mathrm{c1}$ seems essentially isotropic.
It is important to emphasize that in particular the helical-to-conical transition in {Fe$_{1-x}$Co$_{x}$Si} may be dominated by disorder and defect-related pinning, accounting for the large value of $H_{\rm c1}$ as compared to $H_{\rm c2}$.\cite{2016:Bauer:PRB} The fairly small changes of $H_{\rm c2}$ as a function of orientation reflect a weak magnetic anisotropy. Also, the magnetization for fields above $H_{\rm c1}$ is qualitatively different for different field directions, where the magnetization for {$\langle100\rangle$} is steeper than for the other directions up to $H_{\rm c2}$. For fields above $H_{\rm c1}$, the slope is initially larger than just below $H_{\rm c2}$. This corresponds to the jump in $M$ at $H_{\rm c1}$ in {Cu$_{2}$OSeO$_{3}$}, characteristic of a broadened first-order transition. Thus, in comparison to {Cu$_{2}$OSeO$_{3}$}, the magnetic anisotropy as analyzed in terms of the magnetic work presented below is by far not as pronounced. An exciting unresolved question for {Fe$_{1-x}$Co$_{x}$Si} concerns whether a tilted conical phase and an LTS phase exist at least for some $x$. The field dependence of the magnetization of MnSi shown in Fig.\,\ref{fig:materials}\,(c) exhibits, finally, the signatures of a very weak $\langle 111\rangle$ easy-axes anisotropy. Here differences between the different crystallographic directions are difficult to discern. In comparison to {Cu$_{2}$OSeO$_{3}$} and {Fe$_{1-x}$Co$_{x}$Si} ($x=0.2$), the ratio of $H_{\rm c2}$ to $H_{\rm c1}$ is largest, whereas the variation of $H_{\rm c2}$ with orientation is smallest (see Ref.\,\onlinecite{2017:Bauer:PRB} for further details). Moreover, the changes of slope between the different phases are tiny for all crystallographic directions.
Thus, taken together, the comparison of the magnetization shown here underscores that the anisotropy is clearly strongest in {Cu$_{2}$OSeO$_{3}$} with {$\langle100\rangle$} easy axes, followed by {Fe$_{1-x}$Co$_{x}$Si}, which also displays {$\langle100\rangle$} easy axes, whereas MnSi exhibits the well-known, very weak {$\langle111\rangle$} easy-axes anisotropy. \subsection{Magnetocrystalline Anisotropy} \label{energy} The LTS and tilted conical phases arise in the presence of cubic magnetocrystalline anisotropies provided that they are sufficiently strong.\cite{2018:Chacon:NatPhys,2018:Qian:arxiv} The energy density associated with the leading anisotropy may be expressed in terms of the unit vector $\hat M$ describing the orientation of the magnetization \begin{equation} \label{CubicAniso} \mathcal{F}_a = K (\hat M_x^4 + \hat M_y^4 + \hat M_z^4) \end{equation} where $K$ is the anisotropy constant. The definition of $\mathcal{F}_a$ used here is identical to that of a recent ferromagnetic resonance study \cite{2017:Stasinopoulos:APL} and differs from the SANS study \cite{2018:Chacon:NatPhys} in terms of the sign of $K$ and the units of the magnetization. As shown in Ref.~\onlinecite{2018:Chacon:NatPhys}, the cubic anisotropy is sufficient to give rise to a stable LTS and a metastable tilted conical phase for a magnetic field along $\langle 100\rangle$, provided that $K$ is negative and exceeds a critical value $K_{c}$ in magnitude. An estimate of the critical anisotropy value is given by $K_{c} \approx - 0.07 \mu_0 H^{\rm int}_{c2} M_s$, where $H^{\rm int}_{c2}$ is the internal critical field of the conical to field-polarized transition (evaluated for $K = 0$). (Note that we use a different notation as compared to Ref.~\onlinecite{2018:Chacon:NatPhys}.) However, the stability of the LTS and tilted conical phases may be reduced or enhanced by additional contributions to the free energy arising, for example, from other anisotropy terms.
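The relative anisotropy energies entering this discussion follow directly from Eq.~\eqref{CubicAniso}. A short sketch (illustrative only, with $K$ set to unity) evaluates $\mathcal{F}_a$ for a unit magnetization along the high-symmetry directions:

```python
# Sketch: cubic anisotropy density F_a = K * (mx^4 + my^4 + mz^4) of
# Eq. (CubicAniso), evaluated for a normalized magnetization; K = 1 here.
import math

def f_a(m, K=1.0):
    norm = math.sqrt(sum(c * c for c in m))
    return K * sum((c / norm) ** 4 for c in m)

# High-symmetry directions give the factors 1, 1/2, and 1/3:
print(f_a([1, 0, 0]))  # <100>
print(f_a([1, 1, 0]))  # <110>
print(f_a([1, 1, 1]))  # <111>
```

For $K<0$ the $\langle100\rangle$ directions thus carry the lowest energy, consistent with the easy axes identified above; the same factors $1$, $1/2$, $1/3$ reappear in the field-polarized expression below.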
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{fig-10.pdf} \caption{\label{work_temp} Magnetic work $W$, transition fields $H_{\rm c1}$ and $H_{\rm c2}$, and anisotropy constant $K$ as inferred from the magnetization and the transition fields as a function of temperature. (a) Temperature dependence of the magnetic work $W$ for all major crystallographic orientations under a magnetic field saturating the magnetization. (b) Difference of the magnetic work $\Delta W$ with respect to field along {$\langle100\rangle$}. (c) Transition fields $H_{\rm c1}$ and $H_{\rm c2}$ for field along {$\langle111\rangle$} (blue), {$\langle110\rangle$} (green), and {$\langle100\rangle$} (red) as a function of temperature. (d) Anisotropy constant $K$ as a function of temperature, as extracted from the magnetic work (green, blue), and the upper critical field $H_\mathrm{c2}$ (orange). Shown in gray shading is the regime in which an LTS phase becomes favorable ($K\leq-0.07$ in dimensionless units). } \end{figure} Following Ref.~\onlinecite{1937:Akulov}, the magnetocrystalline anisotropy may be estimated by considering the magnetic work \begin{equation} W_{\hat H} = \mu_0 \int_{0}^{M_s} dM\, H \end{equation} required to magnetize the system to the saturated state $M_s$ with a field pointing along the direction determined by the unit vector $\hat H$. The subscript $\hat H$ serves to indicate the dependence on the specific crystallographic direction. From the experimental magnetization data, we may determine $W_{\hat H}$ for fields along the high-symmetry directions, i.e., $\langle 100 \rangle$, $\langle 110 \rangle$, and $\langle 111 \rangle$. Values as a function of temperature determined this way are shown in Fig.\,\ref{work_temp}\,(a). With decreasing temperature, $W$ increases monotonically below $T_{\rm c}$. At high temperatures, $W$ does not change as a function of field direction. However, towards lower temperatures a small but distinct anisotropy emerges below $\sim\SI{30}{\kelvin}$.
In the zero temperature limit, we find $W\approx \SI{20}{\nano\electronvolt\angstrom^{-3}}$, which is in quantitative agreement with density functional calculations predicting the value $\SI{23.8}{\nano\electronvolt\angstrom^{-3}}$.\cite{2012:Yang:PRL} In the field-polarized state the magnetocrystalline anisotropy simplifies for magnetic fields along high symmetry directions \begin{equation} \mathcal{F}_a = K \cdot \begin{cases} 1 & \mathbf{M} \parallel \langle 100 \rangle\\ 1/2 & \mathbf{M} \parallel \langle 110 \rangle\\ 1/3 & \mathbf{M} \parallel \langle 111 \rangle. \end{cases} \end{equation} One may, for example, estimate the anisotropy constant by considering the differences of the magnetic work between different orientations: \cite{1937:Akulov} \begin{eqnarray} \label{diff-1} W_{\langle 110 \rangle} - W_{\langle 100 \rangle} & = -\frac{1}{2} K\\ \label{diff-2} W_{\langle 111 \rangle} - W_{\langle 100 \rangle} & = -\frac{2}{3} K. \end{eqnarray} These differences are shown in Fig.\,\ref{work_temp}(b). While the curves are guides to the eye, it is interesting to note the quantitative consistency of the different directions with Eqs.\,\ref{diff-1} and \ref{diff-2}. The resulting anisotropy constant $K$ as a function of temperature derived from these differences is shown by the blue and green curves in Fig.\ref{work_temp}\,(d). In fact, the critical field $H^{\rm int}_{c2}$ also displays a directional dependence in the presence of the anisotropy $K$ as reported in Ref.\,\onlinecite{2015:Grigoriev:PRB}. The difference of critical fields along $\langle 110 \rangle$ and $\langle 111 \rangle$ is proportional to $K$ and may be expressed as \begin{equation} \begin{split} H^{\rm int}_{c2, {\langle 110\rangle}} &- H^{\rm int}_{c2, {\langle 111\rangle}} = \\ & -\frac{5}{3} \frac{K}{\mu_0 M_s} \left[1+ \mathcal{O}\left(\frac{ K_{\sigma}}{\mu_0 H_{c2} M_s} \right) \right] \end{split}, \end{equation} where the correction, $ \mathcal{O}(...)$, may be neglected for small $K$. 
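As a numerical illustration of how Eqs.~(\ref{diff-1}) and (\ref{diff-2}) are used, the sketch below computes the magnetic work by trapezoidal integration for synthetic linear $M(H)$ curves whose saturation fields differ slightly between directions (all numbers are invented for illustration, not the measured data):

```python
# Sketch: anisotropy constant K from magnetic-work differences,
# Eqs. (diff-1)/(diff-2). The M(H) curves are synthetic linear ramps
# with direction-dependent saturation fields (illustrative only).
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, SI units

def magnetic_work(H, M):
    """W = mu0 * integral of H dM, trapezoidal rule over parallel lists."""
    return MU0 * sum(0.5 * (H[i] + H[i + 1]) * (M[i + 1] - M[i])
                     for i in range(len(M) - 1))

Ms = 1.0e5                                            # A/m, assumed
Hc2 = {"100": 8.00e4, "110": 8.40e4, "111": 8.53e4}   # A/m, assumed
n = 2000
M = [Ms * i / n for i in range(n + 1)]
W = {d: magnetic_work([m * Hc2[d] / Ms for m in M], M) for d in Hc2}

K_110 = -2.0 * (W["110"] - W["100"])   # Eq. (diff-1): W_110 - W_100 = -K/2
K_111 = -1.5 * (W["111"] - W["100"])   # Eq. (diff-2): W_111 - W_100 = -2K/3
```

With these invented numbers both routes give $K \approx -5\times10^{2}\,$J/m$^3$; the two estimates agree when the work differences obey the ratio $4\!:\!3$ implied by the two equations.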
This allows us to determine the value of $K$ with the help of the upper critical fields $H_{c2}$ for the three crystallographic directions {$\langle100\rangle$}, {$\langle110\rangle$}, and {$\langle111\rangle$}, as shown in Figs.\,\ref{work_temp}(c) and \ref{work_temp}(d). At high temperatures, just below $T_{\rm c}$, essentially no anisotropy may be observed. However, with decreasing temperature, the values of $H_{\rm c2}$ for different directions separate below $\sim\!\SI{40}{\kelvin}$, becoming distinctly different. Values of the anisotropy constant $K$ inferred from $H_{c2}$ are shown as the orange curve in Fig.\,\ref{work_temp}(d). With decreasing temperature $K$ deviates from zero and decreases below $\sim\!\SI{35}{\kelvin}$. Within the accuracy of the estimates carried out here, it is not possible to rule out a change of sign and a small positive value of $K$ between $T_c$ and $\sim\SI{35}{\kelvin}$. This might hint at the existence of further, weaker anisotropy terms. The putative existence of such terms may be clarified with the help of the anisotropy of $H_{c1}$. However, it is important to emphasize that, despite a possible change of sign of $K$ as a function of temperature, the easy magnetic axes remain the {$\langle100\rangle$} axes throughout. In the limit of low temperatures, the values of $K$ as calculated from the magnetization and the upper critical fields are, finally, quantitatively consistent with the anisotropy constant determined at $T = 5$\,K with the help of ferromagnetic resonance measurements, $K_{\rm FMR} = - 0.6 \times 10^3$ J/m$^3 \approx - 3.7$ neV/\AA$^3$ (see supplement of Ref.~\onlinecite{2017:Stasinopoulos:APL} for details). Also shown in Fig.\,\ref{work_temp}(d) are the anisotropy constants in dimensionless units, $K/(\mu_0 M_s H^{\rm int}_{c2,\langle111 \rangle}) \approx K/(\mu_0 M_s H^{\rm int}_{c2}|_{K=0})$ (ordinate on the right-hand side), where $M_s$ and $H^{\rm int}_{c2}$ are temperature dependent.
As reported in Ref.~\onlinecite{2018:Chacon:NatPhys}, the cubic anisotropy may be expected to stabilize the LTS as a metastable state without the need for further exchange anisotropies when the ratio $K/(\mu_0 M_s H^{\rm int}_{c2,\langle111 \rangle})$ falls below a threshold of $-0.07$. This condition is indicated by gray shading in Fig.\,\ref{work_temp}(d). Indeed, below $\sim\!\SI{15}{\kelvin}$, where the LTS phase is observed for $\mathbf{H} \parallel \langle 100\rangle$, the ratio $K/(\mu_0 M_s H^{\rm int}_{c2,\langle111 \rangle})$ reaches this threshold, suggesting that the cubic anisotropy $K$ represents the main stabilization mechanism for the additional phases. \subsection{Effective energy landscape} \label{potential} For weak magnetic anisotropies it is very well established experimentally and theoretically that the direction of the conical helix aligns with the magnetic field. While this alignment comes at the expense of magnetic anisotropy energy, it minimizes the Zeeman energy, as the magnetic moments then cant most effectively towards the field direction. This raises the question of why a sufficiently strong anisotropy stabilizes a skyrmion phase and a tilted conical phase in which the moments twist around propagation directions that are not aligned with the magnetic field. In the following, we present a qualitative discussion in order to provide an intuitive picture of the competition between Zeeman energy and magnetocrystalline anisotropy for modulated structures. A quantitative numerical analysis has been reported in Ref.~\onlinecite{2018:Chacon:NatPhys}.
For what follows, it is helpful to consider the magnetization of a right-handed conical modulation, \begin{equation} \label{ConHelix} \begin{split} \mathbf{M}(\mathbf{r})/M_s = & \cos \alpha\, \hat e_3 \\\nonumber &+ \sin \alpha \,\, (\hat e_1 \cos(k \hat e_3 \cdot \mathbf{r}) + \hat e_2 \sin(k \hat e_3 \cdot \mathbf{r}) ) \end{split}, \end{equation} where $M_s$ is the saturated magnetization, $\alpha$ is the cone angle with $\cos \alpha \approx H/H_{\rm c2}$, and $k$ is the magnitude of the pitch vector that is oriented along the unit vector $\hat k = \hat e_3$ with the orthonormal right-handed basis $\hat e_1 \times \hat e_2 = \hat e_3$. The uniform magnetization component of the conical helix points in the same direction as the pitch vector, namely $\hat e_3$. Using the pristine conical helix as the starting point of a variational calculation in the magnetocrystalline potential of Eq.~\eqref{CubicAniso}, an effective anisotropy potential for the orientation of the pitch vector $\hat k$ may be obtained, given by \begin{equation} \mathcal{V}_{\rm a}(\hat k) = K_{\rm eff}(\alpha) \left(\hat k_x^4 + \hat k_y^4 + \hat k_z^4 \right). \end{equation} We note that this approximation neglects distortions of the conical helix, which are of the order of a few \% as observed in the SANS study (see supplement of Ref.~\onlinecite{2018:Chacon:NatPhys}; see also the discussion in Ref.~\onlinecite{2017:Bauer:PRB}). The effective anisotropy, $K_{\rm eff}$, depends on the cone angle $\alpha$, where \begin{equation} \label{EffAniso} K_{\rm eff}(\alpha) = \frac{K}{64} \left(9 + 20 \cos(2\alpha) + 35 \cos(4\alpha) \right). \end{equation} As a key observation, $K_{\rm eff}$ changes sign as a function of $\alpha$. Whereas the signs of $K$ and $K_{\rm eff}$ are the same for small $\alpha$ and for cone angles close to $\pi/2$, they possess opposite signs at intermediate cone angles, $30^\circ \lesssim \alpha \lesssim 70^\circ$.
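The sign change of Eq.~\eqref{EffAniso} can be checked directly; the short sketch below evaluates $K_{\rm eff}(\alpha)$ for $K=-1$ (arbitrary units) at the cone angles quoted in the text:

```python
# Sketch: effective anisotropy K_eff(alpha) of Eq. (EffAniso) for K = -1
# (arbitrary units); the sign flips at intermediate cone angles.
import math

def k_eff(alpha_deg, K=-1.0):
    a = math.radians(alpha_deg)
    return (K / 64.0) * (9.0 + 20.0 * math.cos(2 * a) + 35.0 * math.cos(4 * a))

for alpha in (10, 55, 80):
    print(alpha, k_eff(alpha))
# K_eff < 0 at 10 deg and 80 deg (same sign as K), but K_eff > 0 at 55 deg.
```

At $\alpha=0$ one recovers $K_{\rm eff}=K$, and the interval with the opposite sign roughly spans $30^\circ \lesssim \alpha \lesssim 70^\circ$, matching panels (m)--(o) of Fig.\,\ref{fig:energy_surface}.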
The effective anisotropy potential is illustrated in Fig.\,\ref{fig:energy_surface}. Large energies are shown in red shading, whereas low energies are shown in blue shading. The first three panels represent a cubic potential (a) with $K = 0$, (b) with $K<0$ favoring the cubic $\langle 100\rangle$ axes, and (c) with $K>0$ favoring the cubic $\langle 111\rangle$ axes. The panels in the next three rows, (d)--(l), indicate the energy landscape of the magnetocrystalline potential, Eq.~\eqref{CubicAniso} with $K<0$, that is scanned by the conical helix state of Eq.~\eqref{ConHelix} for different cone angles $\alpha$ and different orientations of the pitch vector $\hat k$. Finally, the last three panels represent the effective anisotropy potential for the pitch orientation $\hat k$, i.e., the function $\mathcal{V}_{\rm a}(\hat k)$ for three different cone angles. Whereas $K_{\rm eff}(\alpha) < 0$ for $\alpha = 10^\circ$ in panel (m) and for $\alpha = 80^\circ$ in panel (o), it is positive, $K_{\rm eff}(\alpha) > 0$, for $\alpha = 55^\circ$ in panel (n). \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig-11.pdf} \caption{\label{fig:energy_surface} Illustration of the effective anisotropy that arises when the conical helix, Eq.\,\eqref{ConHelix}, is embedded in the magnetocrystalline energy landscape of Eq.~\eqref{CubicAniso}. Large energies are shown in red shading, whereas low energies are shown in blue shading. (a)--(c) explain the representation of the orientational dependence of the energetic cost with $K=0$, $K<0$ and $K>0$, respectively. For $K<0$ the magnetic easy axes are $\langle 100\rangle$, whereas they are the hard axes for $K>0$. (d)--(l) illustrate the orientations that are covered by the magnetic moments of the conical helix with a certain cone angle $\alpha$ for different field orientations of the pitch vector $\mathbf{k}$.
(m)--(o) display the effective anisotropy potential of Eq.~\eqref{EffAniso} that changes sign as a function of the cone angle $\alpha$. } \end{figure} When the magnetic field is applied along a $\langle 100 \rangle$ direction, both the Zeeman energy and the effective anisotropy energy are minimized for $K_{\rm eff}(\alpha) < 0$. However, they compete for intermediate cone angles where $K_{\rm eff}(\alpha) > 0$. Whereas the Zeeman energy favors aligning the helix with the field, the effective anisotropy supports a pitch vector that is oriented away from the field, since, otherwise, the magnetic moments of the helical state would dominantly point along the hard $\langle111\rangle$\ axes, resulting in a large energy penalty, see Fig.~\ref{fig:energy_surface}(e). If the cubic anisotropy exceeds the threshold value, $K < K_c$ with $K_c \approx - 0.07 \mu_0 M_s H^{\rm int}_{c2}$, the anisotropy dominates such that even an LTS phase becomes favorable. A numerical assessment \cite{2018:Chacon:NatPhys} reveals, moreover, that the LTS phase, comprising modulations perpendicular to the field, even forms the ground state at intermediate magnetic fields. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig-12.pdf} \caption{\label{fig:magnetic_work} Magnetization and magnetic work illustrating the field dependence of the effective anisotropy. (a)~Internal magnetic field as a function of magnetization for different directions. (b)~Magnetic work, inferred from magnetization data, relative to the $\langle100\rangle$\ orientation as a function of the magnetization. } \end{figure} The notion of an effective magnetocrystalline anisotropy, Eq.~\eqref{EffAniso}, may also be confirmed by considering the magnetization process. Fig.~\ref{fig:magnetic_work}(a) displays typical magnetization data in terms of the internal field $H_{\rm int}$ versus $M$ for three crystallographic directions.
The data shown here were measured at a temperature of \SI{2}{\kelvin} following ZFC with field along $\langle111\rangle$\ (blue), $\langle110\rangle$\ (green), and $\langle100\rangle$\ (red). As first pointed out in Sec.\,\ref{zfc-data}, the magnetization displays two intersections of the $\langle111\rangle$\ and $\langle110\rangle$\ directions. The magnetic work $W_{\hat H}(M) = \mu_0 \int_{0}^{M} dM' H(M')$ for the $\langle 111\rangle$ and $\langle 110 \rangle$ directions as inferred from the magnetization data is shown in Fig.~\ref{fig:magnetic_work}(b) relative to the magnetic work for the $\langle100\rangle$\ orientation. It reflects the behavior of the effective anisotropy shown in Figs.\,\ref{fig:energy_surface}(m) through \ref{fig:energy_surface}(o). Close to saturation, as well as for small values of $M$, both $\langle111\rangle$\ and $\langle110\rangle$\ are energetically larger than $\langle100\rangle$, identifying the latter as the easy axes. For intermediate magnetization, however, the energy of the $\langle111\rangle$\ and $\langle110\rangle$\ orientations drops below the $\langle100\rangle$\ orientation, equivalent to a change of the effective anisotropy, i.e., a sign change of $K_\mathrm{eff}$. The magnetization interval where the inversion occurs indeed corresponds to the magnetization range in which the tilted conical phase is observed. \section{Conclusions} \label{conclusions} In conclusion, we reported a comprehensive study of the magnetization and ac susceptibility of the magnetic phase diagram of {Cu$_{2}$OSeO$_{3}$}. For magnetic field parallel to the $\langle 100\rangle$ axis of the cubic crystal structure and low temperatures, we found clear evidence of the formation of two new phases, a tilted conical state and an LTS state, identified recently in a SANS study. The magnetization and susceptibility are thereby in remarkable agreement with the SANS data, providing clear thermodynamic signatures of these two new phases.
Complementary selected specific heat data support these results. A detailed analysis of the strength of the magnetic anisotropy establishes that the conventional quartic contribution to the free energy by itself is sufficient to stabilize the LTS phase. Detailed measurements exploring the role of different sample shapes shed new light on the role of demagnetizing fields in the stabilization of the tilted conical state. Taken together, we find that the LTS phase in {Cu$_{2}$OSeO$_{3}$} represents a thermodynamically stable ground state driven by cubic magnetocrystalline anisotropies, whereas the tilted conical state exists as a metastable phase even for tiny demagnetization factors. It is finally instructive to speculate on the more general importance of our observations in {Cu$_{2}$OSeO$_{3}$}. As discussed in our paper, the magnetocrystalline anisotropies in the B20 compounds MnSi and {Fe$_{1-x}$Co$_{x}$Si} do not appear to be sufficient to drive the formation of an LTS phase. In contrast, recent reports on Co$_7$Zn$_7$Mn$_6$ identified a second skyrmion lattice phase at low temperatures, interpreted as a three-dimensional order arising from the interplay of DM interactions with the effects of frustration.\cite{2018:Karube:SciAdv} Judging from the literature, this material also exhibits a pronounced magnetocrystalline anisotropy for the $\langle 100 \rangle$ axes similar to {Cu$_{2}$OSeO$_{3}$}. It is therefore tempting to speculate whether the low-temperature skyrmion phase in Co$_7$Zn$_7$Mn$_6$ originates, in fact, in the same mechanism we identify in {Cu$_{2}$OSeO$_{3}$}, however in the presence of large amounts of disorder. This speculation finds further support in the observation that the effects of frustration, which would have to be present in {Cu$_{2}$OSeO$_{3}$} for the analogy to hold, do not appear to be essential for stabilizing the LTS in {Cu$_{2}$OSeO$_{3}$}. \\\\ \begin{acknowledgments} We wish to thank S. Mayr for support.
AC, MH, and WS acknowledge financial support through the TUM Graduate School. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 788031). LH and AR acknowledge financial support through DFG CRC1238 (project C02). MG acknowledges financial support through DFG CRC 1143 and DFG Grant GR1072/5. MH, AC, AB, and CP acknowledge support through DFG TRR80 (projects E1, F2 and F7) and ERC-AdG (291079 TOPFIT). MG, AR and CP acknowledge support through DFG SPP2137 (Skyrmionics). \end{acknowledgments}
train/arxiv
BkiUdaU5qoTAooKPOcv_
5
1
\section{Introduction} BiI$_3$ is a semiconducting atomic layered material in a rhombohedral structure\cite{Trotter1966,Schluter1976,Ruck1995} and belong to a family of layered metal trihalides same as CrI$_3$, a ferromagetic insulator\cite{Zhang2015,Seyler2018,Wu2019}. This atomic layered material has the energy gap corresponding to the visible light frequency and thus it has been applied to optical devices: solar cell\cite{Tiwari2018,Johansson2019,Ma2019}, photo detector\cite{Han2014,Chang2018,Qi2019}, and photo galvanic device\cite{Lehner2015}. In Ref.\ \onlinecite{Watanabe1986,Kaifu1988}, the authors investigated the optical absorption property of BiI$_3$ with stacking faults which are errors of stacking sequence from the rhombohedral structure. They had discovered that excitons are formed in a quasi two-dimensional space at the stacking faults and observed as resonant peaks slightly below the absorption edge. In experiments for a single crystal of bulk BiI$_3$, on the other hand, such exciton peaks are absent in the absence of the stacking fault\cite{Jellison1999,Podraza2013}. The experimental results indicate that the stacking structure of layers is responsible to the exciton formation. The relation between the optical property and the stacking structure is one of the fundamental and fascinating issues in atomic layered materials. Recently, the methods to control the number of layers and to fabricate hetero structure of different layered materials have been established and applied to the study of the optical property of atomic layered materials. For instance, in MoS$_2$ and the family, two types of exciton, the intra-layer exciton and inter-layer one, have been found by controlling the stacking structure. Optical measurements reveals that the former appears independently of the number of stacking but the latter emerges only in the stacked crystals including van der Waals heterostructures with a proper band alignment\cite{Yu2015,Carrascoso2019,Leisgang2020}. 
These experiments indicate the control of the number of stacked layers could be a simple method to change the optical property including the exciton formation even in BiI$_3$. In this paper, we theoretically investigate the electronic and optical properties of BiI$_3$ by changing the number of layers. The electric structure and the optical property are evaluated by use of first-principles calculation. To investigate the optical property, we calculate the dielectric function and adopt time-dependent density functional theory (TDDFT) to compute the quantity. TDDFT is a method to obtain the excited states in the presence of a time-dependent electric field including the electromagnetic wave and enables to calculate the spectrum of dielectric function including excitonic states by adopting the proper exchange-correlation (XC) kernel. \begin{figure}[htbp] \begin{center} \includegraphics[width=85mm]{./Schematic.eps} \caption{ The schematics of crystal structure of (a) monolayer and (b) bulk BiI$_3$. In (a), the I atoms in the top and bottom sublayers are represented by I(1) and I(2), respectively. In (b), the crystal structure is depicted by using the atomic positions of Bi. }\label{fig_crystal_structure} \end{center} \end{figure} \section{Electronic structure of BiI$_3$}\label{Sec_structure} BiI$_3$ is an atomic layered material consisting of atomically thin crystals in which Bi and I atoms are strongly bounded by forming bonding orbitals as shown in Fig.\ \ref{fig_crystal_structure}(a). The layers are weakly bounded by van der Waals interaction and stacked in the rhombohedral sequence as shown in Fig.\ \ref{fig_crystal_structure}(b). The monolayer is classified into the hexagonal crystal and has a sublayer-structure, where one sublayer of Bi is sandwiched by two sublayers of I(1) and I(2) as shown in Fig.\ \ref{fig_crystal_structure}. 
The lattice constant $a$ of hexagonal structure, the inter-sublayer distance ( the distance between I(1) and I(2) atoms), and the interlayer distance (the distance between two adjacent Bi sublayers) are 7.520\AA\ , 1.514 \AA , and 6.911\AA , respectively, where the parameters are determined by referring to the experimental data\cite{Ruck1995}. The lattice vectors are given by $(\sqrt{3}a/2,-a/2)$ and $(0,a)$ in Fig.\ \ref{fig_crystal_structure}(a). The electronic structures are investigated by using the first-principles calculation. The energy dispersion is obtained by using quantum-ESPRESSO\cite{quantum-ESPRESSO}, a numerical code in density functional theory (DFT). We adopt the generalized gradient approximation using the projector augmented wave method, the energy convergence criterion of 10$^{-8}$Ry, and the cut-off energy of 50Ry for the plane wave basis and 500Ry for the charge density. The wave number mesh is generated by using Monkhorst-Pack method with the resolution of $11\times11\times1$ in the first Brillouin zone. The electronic states are described by a multi-orbital tight-binding model where the hopping parameters and the maximally localized Wannier orbitals as the basis are computed by using Wannier90\cite{wannier90}. We adopt the $s$- and $p$-orbitals in Bi and I atoms as the Wannier functions. We use a superposition of atomic orbitals $|p_\mu^{\pm}\rangle=(|p_\mu(\boldsymbol{R}_1)\rangle\pm|p_\mu(\boldsymbol{R}_2)\rangle)/\sqrt{2}$ as a basis where two atoms at $\boldsymbol{R}$ and $-\boldsymbol{R}$ are the counterpart to each other under inversion for introducing the parity operator. In the tight-binding model, the parity operator can be defined as a diagonal matrix where the elements are the products of the parity of atomic orbital $P_o$ and the sign change due to the exchange of atomic positions $P_w$ at $\boldsymbol{R}_1$ and $\boldsymbol{R}_2$ in the superposition. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=90mm]{./Orbital_BiI3_parity.eps} \caption{ The electronic band structure and the parity are presented in (a) and (b), respectively, in monolayer BiI$_3$. In (b), the parity expectation value is depicted by the line width. The parity eigenvalue is shown at the $\Gamma$ point for two valence bands and high energy bands. The sign of parity expectation value is represented by red for positive values and blue for negative values. }\label{fig_band_structure_mono} \end{center} \end{figure} In Fig.\ \ref{fig_band_structure_mono}, we show the band structure of monolayer BiI$_3$. The electronic excitation with the smallest excitation energy occurs at the $\Gamma$ point. At the $\Gamma$ point, the electronic states have the parity eigenvalues because of the inversion symmetry in the crystal structure in Fig.\ \ref{fig_crystal_structure}(a). In Fig.\ \ref{fig_band_structure_mono}(b), we present the expectation value of parity. Around the $\Gamma$ point, the expectation value is mostly unity in amplitude and has the same sign in the top valence and bottom conduction bands. Therefore, the optical excitation between the two bands are strongly suppressed around the $\Gamma$ point because the optical excitation is prohibited between the same parity states. At the other high symmetry points, the odd-parity and even-parity components are mixed with each other. Therefore, the optical excitation is not restricted by the parity at these wave numbers. \begin{figure}[htbp] \begin{center} \includegraphics[width=90mm]{./Orbital_BiI3-bi_band_parity.eps} \caption{ The electronic band structure and the parity are presented in (a) and (b), respectively, in bilayer BiI$_3$. In (b), the parity expectation value is depicted by the line width. The parity eigenvalue is shown at the $\Gamma$ point for two valence bands and some high energy bands. 
The sign of parity expectation value is represented by red for positive values and blue for negative values. }\label{fig_band_structure_bi} \end{center} \end{figure} In bilayer BiI$_3$, the optical excitation energy at the $\Gamma$ point is expected to be decreased although the energy dispersion is not drastically changed from that in monolayer crystal. In Fig.\ \ref{fig_band_structure_bi}(a) and (b), we present the band structure and the parity expectation value of states, respectively. In the presence of inter-layer coupling, each band in the monolayer crystal splits into two branches with a slight splitting energy as shown in Fig.\ \ref{fig_band_structure_bi}(a). At the $\Gamma$ point, the two branches must have the opposite parity eigenvalues because the electronic states can be represented by $(|n,k,t\rangle\pm|n,k,b\rangle)/2$ at $k=0$, where $|n,k,t\rangle$ ($|n,k,b\rangle$) is the electronic states in the top (bottom) monolayer crystal with the band index of $n$ and the wave vector of $k$. Therefore, electrons on the top valence band can be excited with a smaller excitation energy as compared with the monolayer. This stacking-induced decrease of excitation energy at the $\Gamma$ point can be observed in the other layered BiI$_3$: the trilayer and bulk. \section{Dielectric function} In this section, we consider the dielectric function in bulk, trilayer, bilayer, and monolayer BiI$_3$. The dielectric function is an fundamental quantity associated with the optical property of the material and directly determined by using the spectrum of optical absorption, differential reflectance, or spectroscopic ellipsometry. For instance, it had been computed from the experimental data of spectroscopic ellipsometry in the case of bulk Bil$_3$\cite{Podraza2013}. We calculate the dielectric function by using TDDFT with two types of XC kernels. 
One is the kernel in the random phase approximation (RPA) and the other is the bootstrap kernel\cite{Sharma2011,Sharma2015}, which enables to reproduce the long-range behavior as $1/q^2$ for a small wave number $q$. RPA underestimates the XC kernel for a small $q$ which is relevant to the exciton formation. The bootstrap kernel enables to provide the excitation spectrum including the exciton formation. Therefore, the exciton peaks can be captured by comparing the two numerical results from the RPA and the bootstrap kernel method. \subsection{Time-dependent density functional theory} In this section, we briefly review TDDFT to clarify the microscopic effects incorporated in the calculation of dielectric function $\varepsilon(\boldsymbol{q},\omega)$ in a weak irradiation. The dielectric function is defined by using the response function $\chi$ as \begin{align} \varepsilon^{-1}(\boldsymbol{q},\omega)=1+\nu(\boldsymbol{q})\chi(\boldsymbol{q},\omega), \end{align} where $\nu(\boldsymbol{q})$ is the bare Coulomb potential and the atomic unit is adopted. In TDDFT, electronic states $\psi_j(\boldsymbol{r},t)$ are assumed to be described by time-dependent Kohn-Sham equation as \begin{align} i\frac{d\psi_j}{dt}(\boldsymbol{r},t)=\left(-\frac{\nabla^2}{2}+u[\rho](\boldsymbol{r},t)\right)\psi_j(\boldsymbol{r},t), \end{align} where the time-dependent potential $u[\rho](\boldsymbol{r},t)$ includes a functional of charge density and the time-dependent external potential $u_{\mathrm{ext}}(\boldsymbol{r},t)$, \begin{align} u[\rho](\boldsymbol{r},t)=u_{\mathrm{ext}}(\boldsymbol{r},t)+\int d^3\boldsymbol{r}'\frac{\rho(\boldsymbol{r}',t)}{|\boldsymbol{r}-\boldsymbol{r}'|}+u_{\mathrm{xc}}[\rho](\boldsymbol{r},t). \end{align} Here, the last term is the XC potential and it is also a functional of $\rho(\boldsymbol{r},t)$. 
In the case of weak external field, the linear term of the deviation of charge density $\delta\rho(\boldsymbol{r},t)=\rho(\boldsymbol{r},t)-\rho_0(\boldsymbol{r})$ from the ground-state density $\rho_0$ is dominant and it can be approximated by \begin{align} u_{\mathrm{xc}}[\rho](\boldsymbol{r},t)=\int{dt'}\int{d^3\boldsymbol{r}'}f_{\mathrm{xc}}(\boldsymbol{r},\boldsymbol{r}',t-t')\delta \rho(\boldsymbol{r}',t'), \end{align} where $f_{\mathrm{xc}}$ is the XC kernel defined by the functional derivative of $f_{\mathrm{xc}}={\delta u_{\mathrm{xc}}}/{\delta\rho}[\rho_0]$. Thus, the potential term in time-dependent Kohn-Sham equation can be rewritten as \begin{align} u[\rho]=u_0[\rho_0]+u_{\mathrm{ext}}+\int dt'\int d^3\boldsymbol{r}'V\delta\rho \end{align} where $u_0[\rho_0]$ is the Coulomb potential due to $\rho_0(\boldsymbol{r})$ and $V$ represents the potential due to $\delta\rho(\boldsymbol{r},t)$ as \begin{align} V(\boldsymbol{r},\boldsymbol{r}',t)=\delta(t)v_0(\boldsymbol{r}-\boldsymbol{r}')+f_{\mathrm{xc}}(\boldsymbol{r},\boldsymbol{r}',t). \end{align} with the Coulomb potential $v_0(\boldsymbol{r})=1/|\boldsymbol{r}|$. Here, the first term describes the change of the classical potential, i.e., the Coulomb interaction, due to the deviation of charge density and the second term represents the other microscopic effects. The potential $V$ introduces the effect of excited electrons and the form of XC kernel $f_{\mathrm{xc}}$ is related to the considerable microscopic correlation effects. The dielectric function $\varepsilon(\boldsymbol{q},\omega)$ is a fundamental quantity related to the optical property, e.g., the optical absorption coefficient is given by $\alpha=(\omega/c)\varepsilon_2$ with $\varepsilon_2=\mathrm{Im}[\varepsilon]$. 
The representation of $\varepsilon$ including the correlation effects in a periodic solid is given as \begin{align} \varepsilon^{-1}(\boldsymbol{q},\omega)=&1+v_0(\boldsymbol{q})\chi(\boldsymbol{q},\omega)\nonumber\\ =&1+\chi_0(\boldsymbol{q},\omega)v_0(\boldsymbol{q})\nonumber\\ &\times\frac{1}{1-V(\boldsymbol{q},\omega)\chi_0(\boldsymbol{q},\omega)}, \end{align} where $\chi$ and $\chi_0$ represent the response function of the interacting and non-interacting Kohn-Sham systems, respectively. In this paper, we consider two approximations to the XC kernel, the RPA $f_{\mathrm{xc}}=0$ and the bootstrap kernel, \begin{align} f_{\mathrm{xc}}=\frac{\varepsilon^{-1}(\boldsymbol{q},\omega=0)}{\chi_0(\boldsymbol{q},\omega=0)}. \end{align} The RPA includes only the Coulomb potential and ignores any microscopic effect. The bootstrap kernel reproduces the behavior in the long wavelength limit $f_{xc}=\alpha_{xc}/q^2$ and enables the dielectric function to have poles attributed to bound excitons.\cite{Sharma2011} In this section, we present the numerical results of band structure and dielectric function. The band structure and the dielectric function are calculated by using DFT and TDDFT, respectively. Moreover, TDDFT calculation is performed by RPA and use of bootstrap kernel. Both the calculations are performed by using the linearized augmented plane wave code of Elk\cite{elk}. We adopt the local density approximation and include the effect of spin-orbit coupling (SOC). The cutoff of basis is set by $R^{\mathrm{MT}}|\boldsymbol{G}|\leq7$ with the averaged muffin-tin radius $R^{\mathrm{MT}}$ and the reciprocal vector $\boldsymbol{G}$. In TDDFT calculations, two empty orbitals are included per atom and spin, and the Fermi-Dirac type smearing is applied with the energy width of 13meV. The wave number sampling is 72$\times$72$\times$1 for the monolayer and bilayer and 30$\times$30$\times$12 for bulk. 
Here, 30$\times$30$\times$1 can achieve the convergence even in the monolayer and bilayer cases but the finer mesh only eliminate small noises. \begin{figure}[htbp] \begin{center} \includegraphics[width=90mm]{./BiI3-bulk_band_and_epsilon_data.eps} \caption{ The band structure and energy spectrum of $\varepsilon_2$ of bulk BiI$_3$. The inset is the experimental data in Ref.\ \onlinecite{Podraza2013}. }\label{fig_epsilon_bulk} \end{center} \end{figure} \begin{figure*}[htbp] \begin{center} \includegraphics[width=180mm]{./BiI3-epsilon.eps} \caption{ The dielectric function calculated by use of the bootstrap kernel (upper panels) and RPA (lower panels). }\label{fig_epsilon_thin_layers} \end{center} \end{figure*} \subsection{Bulk} We start with the numerical result of dielectric function in bulk BiI$_3$ and discuss the accuracy of TDDFT by comparing with the experiment\cite{Podraza2013}. In Fig.\ \ref{fig_epsilon_bulk}, we show the band structure and the imaginary part of dielectric function $\varepsilon_2(\omega)$, where the real part can be obtained by using Kramers-Kronig relation. The band structure indicates the indirect semiconducting property of bulk BiI$_3$. The minimum energy of the conduction band occurs at the $\Gamma$ point but the valence band has the maximum energy between the $\Gamma$ and $K$ points. The DFT calculation provides an indirect gap of 1.59 eV We also present the excitation energy at several high-symmetry wave numbers: 1.63 eV at the $\Gamma$ point, 2.36eV at the $K$ point, and 2.28eV at the $M$ point. In Fig.\ \ref{fig_crystal_structure}(b), we show the energy spectrum of $\varepsilon_2$ calculated by RPA and use of the bootstrap kernel, and that computed from the experimental data in the inset\cite{Podraza2013}. Although the spectrum indicates the slight blue shift, it is qualitatively in good agreement with the experiment. 
The two calculation methods also provide the qualitatively same result with a slight enhancement of $\varepsilon_2$ for the bootstrap kernel. The qualitative agreement indicates that the microscopic correlation effect does not produce the excitonic bound state. The lowest and small peak at $\omega\simeq2$ eV is associated with the direct excitation at the $\Gamma$ point with $\Delta E_\Gamma$ according to the previous work by use of DFT\cite{Shen2017}. The mismatch of the DFT gap and the excitation energy is attributed to the underestimation in the DFT calculation. A large peak occurs in both the RPA and bootstrap cases at 2.32eV and it corresponds to the excitation at the $K$ and $M$ points. The peak is attributed to the conventional direct excitation between the valence and conduction bands by the photon absorption. The agreement among the experiment and the two numerical results implies the validity of TDDFT in the optical property of BiI$_3$. \subsection{Monolayer, bilayer, and trilayer BiI$_3$} In Fig.\ \ref{fig_epsilon_thin_layers}, we present the numerical results of dielectric function in monolayer, bilayer, and trilayer BiI$_3$ by using TDDFT. In these results, the lowest edge of the spectrum is larger than the energy gap at the $\Gamma$ point in the DFT band structure. This is attributed to the underestimation of energy gaps caused by the DFT calculation. Since the band structures are similar to each other among these layers, these spectra have peaks at the same frequencies indicated by vertical lines. These peaks also appear in the spectrum for bulk BiI$_3$ as shown in Fig.\ \ref{fig_epsilon_bulk}. Especially in the case of bilayer and trilayer, $\varepsilon_2$ shows the same profile as the spectrum in the bulk crystal even though the amplitude of $\varepsilon_2$ increases with the number of layers due to the increase in the number of electronic states. 
In bilayer and trilayer BiI$_3$, the RPA and the bootstrap kernel provide similar results which have the same characteristics, e.g., the edge of spectrum and the peak positions, except for the amplitude. This is consistent with the similar electronic band structure in bilayer, trilayer, and bulk. The enhancement of amplitude by the bootstrap kernel has been reported in some previous papers\cite{Sharma2011,Suzuki2020}. The result about bilayer BiI$_3$ is also in a good agreement with the previous theoretical work\cite{Shen2017}. The numerical calculation indicates that BiI$_3$ shows a similar optical property regardless of the number of layers except the monolayer crystal. \begin{figure}[htbp] \begin{center} \includegraphics[width=85mm]{./BiI3_im_dielectric_function_7x7.eps} \caption{ The dielectric function in monolayer BiI$_3$ calculated by use of the bootstrap kernel in two $k$-meshes. Here $\Delta k_x$ is the shift of the origin from the $\Gamma$ point in the meshes. }\label{fig_epsilon_k_shift} \end{center} \end{figure} In the monolayer system, on the other hand, the two calculation methods derive qualitatively different results. The RPA result is in good agreement with the previous work by use of DFT calculation\cite{Ma2015}. However, by using TDDFT, we find two sharp peaks indicated by $E_1$ and $E_2$ below the peak at 2.32eV in Fig.\ \ref{fig_epsilon_thin_layers}. These large peaks are strongly suppressed in the RPA result and thus they indicate the exciton formation because the suppression of peak is a characteristic feature of excitons\cite{Botti2004}. The peaks are not changed with the size of the supercell. Actually, they remain in the spectrum calculated by using twice the unit cell as shown in Fig.\ \ref{fig_crystal_structure}(a). 
To analyze the electronic states associated to the exciton peaks, we calculate $\varepsilon_2$ in the $k$-mesh of $7\times7\times1$ with the wave number shift of $\Delta k_x$ which is the shift of origin from the $\Gamma$ point in the unit of mesh as shown in Fig.\ \ref{fig_epsilon_k_shift}. The sparse $k$-mesh is insufficient for the convergence and enables to avoid electronic states around a specific high symmetry point by using the $k$ shift for confirming the effect of these states to $\varepsilon_2$. Here, the $K$ point is not included in both meshes, the $\Gamma$ point is referred to only in the mesh with $\Delta k_x=0$, and the $M$ point is included only in the shifted mesh as a reference wave number. Since the peaks can be found only in the absence of the shift, the electronic states at the $\Gamma$ point are responsible for the resonant peaks. The frequencies of exciton peaks correspond to the excitation energy $\Delta E_\Gamma$ in the case of bulk BiI$_3$. Thus the absence of such excitation due to the parity (see Sec.\ \ref{Sec_structure}) enables the excitonic states to be stabilized only in the monolayer crystal. \section{Conclusion} We studied the optical property of BiI$_3$ by use of TDDFT with two types of XC kernel: RPA and bootstrap kernel, to identify the exciton peak. In the case of bulk BiI$_3$, TDDFT provides good agreement with the experimental data about the dielectric function. In the bilayer and trilayer crystals, the dielectric function $\varepsilon_2$ shows a similar energy spectrum to that in the bulk crystal regardless of the kernel. Thus, the electronic property of Bil$_3$ dose not change with the number of layers except the monolayer crystal. In the monolayer BiI$_3$, on the other hand, we found a different optical property from the stacked crystals. TDDFT reveals the presence of large exciton peaks which are absent in stacked materials. 
The unique spectrum of $\varepsilon_2$ suggests the drastic change of optical properties in the monolayer BiI$_3$ in comparison with the stacked crystals. \begin{acknowledgements} This work was supported by a JSPS KAKENHI Grant No. JP20K05274. \end{acknowledgements}
train/arxiv
BkiUdxA25V5hYDkHHgdd
5
1
\section{Introduction} The idea of embedding our Universe in a higher dimensional space has attracted a considerable interest recently, due to the proposal by Randall and Sundrum that our four-dimensional (4D) spacetime is a three-brane, embedded in a 5D spacetime (the bulk) ~\cite{Randall,Randall2}. This proposal is based on early studies on superstring theory and M-theory, which have suggested that our four dimensional world is embedded into a higher dimensional spacetime. Particularly, the 10 dimensional $E_8 \otimes E_8$ heterotic superstring theory is a low-energy limit of the 11 dimensional supergravity, under the compactification scheme $M^{10}\times S_1 / Z_2$ \cite{Witten,Witten2}. Thus, the 10 dimensional spacetime is compactified as $M^4 \times CY^6 \times S_1 / Z_2$, implying that our Universe (a brane) is embedded into a higher dimensional bulk. In this paradigm, the standard model particles are open strings, confined on the braneworld, whilst the gravitons and the closed strings can freely propagate into the bulk \cite {Polchinski}. The Randall-Sundrum Type II model has the virtue of providing a new type of compactification of gravity \cite{Randall,Randall2}. Standard 4D gravity can be recovered in the low-energy limit of the model, with a 3-brane of positive tension embedded in 5D anti-de Sitter bulk. The covariant formulation of the braneworld models has been formulated in \cite{Shiromizu , leading to the modification of the standard Friedmann equations on the brane. It turns out that the dynamics of the early Universe is altered by the quadratic terms in the energy density and by the contribution of the components of the bulk Weyl tensor, which both give a contribution in the energy momentum tensor. This implies a modification of the basic equations describing the cosmological and astrophysical dynamics, which has been extensively considered recently \cite{all2}. 
The recent observations of the CMB anisotropy by WMAP \cite{Spergel} have provided convincing evidence for the inflationary paradigm \cite{Guth}, according to which in its very early stages the Universe experienced an accelerated (de Sitter) expansionary phase (for recent reviews on inflation see \cite{infl}). At the end of inflation, the Universe is in a cold and low-entropy phase, which is utterly different from the present hot high-entropy Universe. Therefore the Universe should be reheated, or defrosted, to a high enough temperature, in order to recover the standard Hot Big Bang \cite{reh}. The reheating process may be envisioned as follows: the energy density in zero-momentum mode of the scalar field decays into normal particles with decay rate $\Gamma$. The decay products then scattered and thermalize to form a plasma \cite{infl}. Apart from the behavior of the inflaton field, the evolutions of dark energy and dark matter in reheating stage were also considered. In \cite{Susperregi , dark energy and dark matter were originated from a scalar field in different stages of the inflation, according to a special form of potential. Meanwhile, the conditions for unifying the description of inflation, dark matter and dark energy were considered in \cite{Liddle3}. A specific model was later proposed in \cite{Cardenas}, by using a modified quadratic scalar potential. The candidates of dark matter in \cite{Liddle3} and \cite {Cardenas} were oscillations of a scalar field. However, it may be possible that dark matter existed on its own without originating from the scalar field. This may pose less stringent constraint on the scalar field, so that dark matter can be included in inflation paradigm in a easier way. On the other hand, it was proposed that the decay products of scalar field acquired thermal mass \cite{Kolb}. The reheating in the braneworld models has also been considered recently. 
In the context of the braneworld inflation driven by a bulk scalar field, the energy dissipation from the bulk scalar field into the matter on the brane was studied in \cite{HiTa03}. The obtained results supports the idea that the brane inflation model, caused by a bulk scalar field, may be a viable alternative scenario of the early Universe. The inflation and reheating in a braneworld model derived from Type IIA string theory was studied in \cite {BrDa03}. In this model the inflaton can decay into scalar and spinor particles, thus reheating the Universe. A model in which high energy brane corrections allow a single scalar field to describe inflation at early epochs and quintessence at late times was discussed in \cite{SaDaSh03}. The reheating mechanism in the model originates from Born-Infeld matter, whose energy density mimics cosmological constant at very early times and manifests itself as radiation subsequently. The particle production at the collision of two domain walls in a 5-dimensional Minkowski spacetime was studied in \cite{TaMa04}. This may provide the reheating mechanism of an ekpyrotic (or cyclic) brane Universe, in which two BPS branes collide and evolve into a hot big bang Universe. The reheating temperature $T_{\rm RH}$ in models in which the Universe exits reheating at temperatures in the MeV regime was studied in \cite{Hannestad}, and a minimum bound on $T_{\rm RH}$ was obtained. The derived lower bound on the reheating temperature also leads to very stringent bounds on the compactification scale in models with $n$ large extra dimensions. The dark matter problem in the Randall-Sundrum type II braneworld scenario was discussed in \cite{Pa05}, by assuming that the lightest supersymmetric particle is the axino. The axinos can play the role of cold dark matter in the Universe, due to the higher reheating temperatures in the braneworld model, as compared to the conventional four-dimensional cosmology. 
The impact of the non-conventional brane cosmology on the relic abundance of non-relativistic stable particles in high and low reheating scenarios was investigated in \cite{DaKh06}. In the case of high reheating temperatures, the brane cosmology may enhance the dark matter relic density by many order of magnitudes, and a stringent lower bound on the five dimensional scale may be obtained. In the non-equilibrium case, the resulting relic density is very small. The curvaton dynamics in brane-world cosmologies was studied in \cite{PaZa06}. Brane-worlds with non-constant tension, based on the analogy with fluid membranes, which exhibit a temperature-dependence according to the empirical law established by E\"otv\"os, were introduced in \cite{Ger1}. This new degree of freedom allows for evolving gravitational and cosmological constants, the latter being a natural candidate for dark energy. The covariant dynamics on a brane with variable tension was studied in its full generality, by considering asymmetrically embedded branes, and allowing for non-standard model fields in the 5-dimensional space-time. This formalism was applied for a perfect fluid on a Friedmann brane, which is embedded in a 5-dimensional charged Vaidya-Anti de Sitter space-time. For cosmological branes a variable brane tension leads to several important consequences. A variable brane tension may remove the initial singularity of the Universe, since the brane Universe was created at a finite temperature $T_{c}$ and scale factor $a_{\min}$ \cite{Ger2}. Both the brane tension and the 4-dimensional gravitational coupling 'constant' increase with the scale factor from zero to asymptotic values. The 4-dimensional cosmological constant is dynamical, evolving with $a$, starting with a huge negative value, passing through zero, and finally reaching a small positive value. 
Such a scale--factor dependent cosmological constant has the potential to generate additional attraction at small $a$ (as dark matter does) and late-time repulsion at large $a$ (dark energy). The evolution of the brane tension is compensated by energy interchange between the brane and the fifth dimension, such that the continuity equation holds for the cosmological fluid \cite{Ger2}. The resulting cosmology closely mimics the standard model at late times, a decelerated phase being followed by an accelerated expansion. The energy absorption of the brane drives the 5D space-time towards maximal symmetry, thus becoming Anti de Sitter. Other physical and cosmological implications of a varying brane tension have been considered in \cite{yun}. It is the purpose of the present paper to further investigate the cosmological implications of a varying brane tension. As a first step in our study, we consider a thermodynamic interpretation of the varying brane tension models, by showing that the field equations with variable $\lambda $ can be interpreted as describing matter creation in a cosmological framework. The particle creation rate is determined by the variation rate of the brane tension, as well as by the brane-bulk energy-matter transfer rate. In particular, by adopting a theoretical model in which the brane tension is a simple function of the scale factor of the Universe, we consider the possibility that the early inflationary era in the evolution of the brane Universe was driven by a varying brane tension. A varying brane tension may also be responsible for the generation of the matter after reheating, as well as for the late time acceleration of the Universe. The present paper is organized as follows. In Section~\ref{geo} we present the field equations of the brane world models with varying brane tension and we write down the basic equations describing the cosmological dynamics of a flat Friedmann-Robertson-Walker Universe. 
The thermodynamic interpretation of the brane-world models with varying brane tension and brane-bulk matter-energy exchange is considered in Section~\ref{therm}. A power-law inflationary brane-world model with varying brane tension and non-zero bulk pressure is obtained in Section~\ref{pow}. The analytical behavior of the cosmological model with varying brane tension is considered in Section~\ref{scale}, by using the small and large time approximations for the brane tension. The numerical analysis of the model is performed in Section~\ref{numerical}. We discuss our results and conclude in Section~\ref{conclusion}.

\section{Geometry and field equations in the variable brane tension models}\label{geo}

In the present Section we present the field equations for brane world models with varying brane tension, and the corresponding cosmological field equations for a flat Robertson-Walker space-time.

\subsection{Gravitational field equations}

We start by considering a five-dimensional ($5D$) spacetime (the bulk), with a large negative $5D$ cosmological constant ${}^{(5)}\Lambda$ and a single four-dimensional ($4D$) brane, on which the usual (baryonic) matter and physical fields are confined. The $4D$ braneworld $({}^{(4)}M,{}^{(4)}g_{\mu \nu })$ is located at a hypersurface $\left(B\left( X^{A}\right) =0\right)$ in the $5D$ bulk spacetime $({}^{(5)}M,{}^{(5)}g_{AB})$ with mirror symmetry, and with coordinates $X^{A}$, $A=0,1,...,4$. The induced $4D$ coordinates on the brane are $x^{\mu }$, $\mu =0,1,2,3$. We choose normal Gaussian coordinates, and therefore the $5D$ metric is related to the $4D$ metric by the relation ${}^{(5)}g_{MN}={}^{(4)}g_{MN}+n_{M}n_{N}$, where $n^{M}$ is the normal vector. The induced $4D$ metric is $g_{IJ}={}^{(5)}g_{IJ}-n_{I}n_{J}$, where $n_{I}$ is the space-like unit vector field normal to the brane hypersurface ${}^{(4)}M$.
The basic equations on the brane are obtained by projecting onto the brane world with the help of the Gauss and Codazzi equations and of the Israel junction condition. The projected Einstein equation is given by
\begin{equation}
G_{\mu \nu }=-\Lambda g_{\mu \nu }+k^{2}T_{\mu \nu }+\bar{k}^{4}S_{\mu \nu }-\bar{\epsilon}_{\mu \nu }+\bar{L}_{\mu \nu }^{TF}+\bar{P}_{\mu \nu }+F_{\mu \nu },
\end{equation}
where
\begin{equation}
S_{\mu \nu }=\frac{1}{2}TT_{\mu \nu }-\frac{1}{4}T_{\mu \alpha }T_{\nu }^{\alpha }+\frac{3T_{\alpha \beta }T^{\alpha \beta }-T^{2}}{24}g_{\mu \nu },
\end{equation}
\begin{equation}\label{eps}
\varepsilon _{\mu \nu }=C_{ABCD}n^{C}n^{D}g_{\mu }^{A}g_{\nu }^{B},
\end{equation}
and
\begin{equation}
F_{\mu \nu }={}^{(5)}T_{AB}g_{\mu }^{A}g_{\nu }^{B}+\left( {}^{(5)}T_{AB}n^{A}n^{B}-\frac{1}{4}{}^{(5)}T\right) g_{\mu \nu },
\end{equation}
respectively. Apart from the terms quadratic in the brane energy-momentum tensor, the field equations on the brane contain two supplementary terms, corresponding to the projection $\varepsilon _{\mu \nu }$ of the $5D$ Weyl tensor and to the projected tensor $F_{\mu \nu }$, which contains the bulk matter contribution. Both terms induce bulk effects on the brane. Also, the possible asymmetric embedding is characterized by the tensor
\begin{equation}
\bar{L}_{\mu \nu }=\bar{K}_{\mu \nu }\bar{K}-\bar{K}_{\mu \sigma }\bar{K}_{\nu }^{\sigma }-\frac{g_{\mu \nu }}{2}\left( \bar{K}^{2}-\bar{K}_{\alpha \beta }\bar{K}^{\alpha \beta }\right) ,
\end{equation}
with trace $\bar{L}=\bar{K}_{\alpha \beta }\bar{K}^{\alpha \beta }-\bar{K}^{2}$ and trace-free part $\bar{L}_{\mu \nu }^{TF}=\bar{K}_{\mu \nu }\bar{K}-\bar{K}_{\mu \sigma }\bar{K}_{\nu }^{\sigma }+\bar{L}g_{\mu \nu }/4$, respectively. For a $Z_2$ symmetric embedding $\bar{K}_{\mu \nu }=0$, and thus $\bar{L}_{\mu \nu }=0$. $\bar{P}_{\mu \nu }$ is given by the pull-back to the brane of the energy-momentum tensor characterizing possible non-standard model fields (e.g.,
scalar, dilaton, moduli, radiation of quantum origin) living in $5D$,
\begin{equation}
\bar{P}_{\mu \nu }=\frac{2\tilde{k}^{2}}{3}\overline{\left( g_{\mu }^{\alpha }g_{\nu }^{\beta }{}^{(5)}T_{\alpha \beta }\right) ^{TF}},
\end{equation}
which is traceless by definition. Another projection of the $5D$ sources appears in the brane cosmological constant $\Lambda $, which is defined as
\begin{equation}
\Lambda =\Lambda _{0}-\frac{\bar{L}}{4}-\frac{2\tilde{k}^{2}}{3}\overline{\left( n^{\alpha }n^{\beta }{}^{(5)}T_{\alpha \beta }\right)},
\end{equation}
where $2\Lambda _{0}=k_{5}^{2}\Lambda _{5}+k_{5}^{4}\lambda ^{2}/6$. In the case of a variable brane tension, the projected gravitational field equations on the brane have the same form as in the general case,
\begin{equation}
G_{\mu \nu }=-\Lambda g_{\mu \nu }+k^{2}T_{\mu \nu }+\bar{k}^{4}S_{\mu \nu }-\bar{\epsilon}_{\mu \nu }+\bar{L}_{\mu \nu }^{TF}+\bar{P}_{\mu \nu }+F_{\mu \nu }.
\end{equation}
However, the evolution of the brane tension appears in the Codazzi equation, and in the differential Bianchi identity. The Codazzi equation is
\begin{equation}
\nabla_{\mu}\bar{K}^{\mu}_{\nu}-\nabla_{\nu}\bar{K}=k_5^2\overline{(g^{\rho}_{\nu}n^{\sigma}{}^{(5)}T_{\rho\sigma})},
\end{equation}
and it gives the conservation equation of the matter on the brane as
\begin{equation}
\label{codazzi}
\nabla_{\mu}T^{\mu}_{\nu}=\nabla_{\nu}\lambda-\Delta(g^{\rho}_{\nu}n^{\sigma}{}^{(5)}T_{\rho\sigma}).
\end{equation}
The differential Bianchi identity, written as $\nabla^{\mu}R_{\rho\mu}=\frac{1}{2}\nabla_{\rho}R$, gives
\begin{eqnarray}
\label{2bi}
\nabla^{\mu}(\bar{\epsilon}_{\mu\nu}-\overline{L}^{TF}_{\mu\nu}-\bar{\mathcal{P}}_{\mu\nu})&=&\frac{\nabla_{\nu}\bar{L}}{4}+\frac{k_5^2}{2}\nabla_{\nu}\overline{(n^{\rho}n^{\sigma}{}^{(5)}T_{\rho\sigma})}-\frac{k_5^4\lambda}{6}\Delta(g^{\sigma}_{\nu}n^{\rho}{}^{(5)}T_{\sigma\rho}) \nonumber\\
&&+\frac{k_5^4}{4}\left(T^{\mu}_{\nu}-\frac{T}{3}g^{\mu}_{\nu}\right)\Delta(g^{\sigma}_{\mu}n^{\rho}{}^{(5)}T_{\sigma\rho})+\frac{k_5^4}{4}\Big[2T^{\mu\sigma}\nabla_{[\nu}T_{\mu]\sigma} \nonumber\\
&&+\frac{1}{3}(T_{\mu\nu}\nabla^{\mu}T-T\nabla_{\nu}T)\Big]-\frac{k_5^4}{12}(T^{\mu}_{\nu}-Tg^{\mu}_{\nu})\nabla_{\mu}\lambda.
\end{eqnarray}
From Eq.~(\ref{eps}), one can introduce an effective non-local energy density $U$, which can be obtained by assuming that $\varepsilon_{\mu\nu}$ in the projected Einstein equation behaves as an effective radiation fluid,
\begin{equation}
-\varepsilon _{\mu \nu }=\frac{k_{5}^{4}}{6}\lambda U\left(u_{\mu }u_{\nu }+\frac{a^{2}}{3}h_{\mu \nu }\right),
\end{equation}
where $u_{\mu }$ is the matter four-velocity, and $h_{\mu \nu}=g_{\mu \nu}+u_{\mu}u_{\nu }$ is the projection tensor orthogonal to it.

\subsection{Cosmological models with dynamic brane tension}\label{cosmo}

We assume that the metric on the brane is given by the flat Friedmann-Robertson-Walker metric,
\begin{equation}
{}^{(4)}g_{\mu\nu}dx^{\mu}dx^{\nu}=-dt^2+a^2(t)(dx^2+dy^2+dz^2),
\end{equation}
where $a$ is the scale factor. The matter on the brane is assumed to consist of a perfect fluid, with energy density $\rho$ and pressure $p$.
The gravitational field equations, governing the evolution of the brane Universe with variable brane tension, in the presence of brane-bulk energy transfer, and with a non-zero bulk pressure, are then given by \cite{Ger1, Ger2}
\begin{eqnarray}\label{feq}
\left( \frac{\dot{a}}{a}\right) ^{2} &=&\frac{{}^{(4)}\Lambda }{3}+\frac{k_{5}^{4}\lambda }{18}\left[ \rho +\frac{\rho ^{2}}{2\lambda }+U\right] ,\label{1} \\
\frac{\ddot{a}}{a} &=&\frac{{}^{(4)}\Lambda }{3}-\frac{k_{5}^{4}\lambda }{36}\left[ \rho \left( 1+\frac{2\rho }{\lambda }\right) +3p\left( 1+\frac{\rho }{\lambda }\right) +2U\right] , \label{2}\\
\dot{\rho}+3H\left( \rho +p\right) &=&-\dot{\lambda}-2P_{5}, \label{3}\\
\frac{k_{5}^{4}\lambda }{6}\left( \dot{U}+4U\frac{\dot{a}}{a}+U\frac{\dot{\lambda}}{\lambda }\right) &=&\frac{k_{5}^{2}}{2}\dot{\bar{P}}_{B}+\frac{k_{5}^{4}\lambda }{3}\left( 1+\frac{\rho }{\lambda }\right) P_{5}, \label{4}\\
{}^{(4)}\Lambda &=&\frac{k_{5}^{2}}{2}\Lambda _{5}+\frac{k_{5}^{4}}{12}\lambda ^{2}-\frac{k_{5}^{2}}{2}\bar{P}_{B},\label{5}
\end{eqnarray}
where $P_5$ describes the bulk-brane matter-energy transfer, while $\bar{P}_{B}$ is the bulk pressure. An important observational parameter, which is an indicator of the rate of expansion of the Universe, is the deceleration parameter $q$, defined as
\begin{equation}
q=\frac{d}{dt}\left( \frac{1}{H}\right) -1=-\frac{a\ddot{a}}{\dot{a}^{2}}=-\frac{\ddot{a}/a}{\left(\dot{a}/a\right)^2}. \label{q0}
\end{equation}
If $q<0$, the expansion of the Universe is accelerating, while $q>0$ indicates a decelerating phase.
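As a quick numerical illustration of Eq.~(\ref{q0}) (a sketch, not part of the model itself): for a power-law expansion $a(t)=t^{\alpha }$ the definition gives $q=\left( 1-\alpha \right) /\alpha $, which can be verified by finite differencing an assumed scale factor.

```python
import numpy as np

# Numerical check of q = -a*(d^2a/dt^2)/(da/dt)^2 for a power-law
# scale factor a(t) = t**alpha (illustrative value alpha = 2).
alpha = 2.0
t = np.linspace(1.0, 2.0, 10001)
a = t**alpha

adot = np.gradient(a, t)        # first time derivative
addot = np.gradient(adot, t)    # second time derivative

mid = len(t) // 2               # evaluate away from the boundaries
q_numeric = -a[mid] * addot[mid] / adot[mid]**2
q_analytic = (1.0 - alpha) / alpha   # -1/2: accelerating, since alpha > 1
```

For $\alpha >1$ the numerical value is negative, corresponding to accelerated expansion, in agreement with the criterion stated above.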
\section{Thermodynamic interpretation of the varying tension in brane-world models}\label{therm}

For the sake of generality we also assume that there is an effective energy-matter transfer between the brane and the bulk, and that the brane-bulk matter-energy exchange can be described as
\begin{equation}\label{bbtrans}
P_{5}=-\frac{\alpha _{bb}}{2}\rho _{cr}\left( \frac{a_{0}}{a}\right) ^{3w}H,
\end{equation}
where $\alpha _{bb}$ is a constant, $\rho _{cr}$ is the present day critical density of the Universe, and $a_{0}$ is the present day value of the scale factor. In the presence of a varying brane tension and of the bulk-brane matter and energy exchange, the energy conservation equation on the brane can be written as
\begin{equation}
\dot{\rho}+3(\rho +p)H=-\rho \left( \frac{\dot{\lambda}}{\rho }-\alpha _{bb}H\right) , \label{partcreat}
\end{equation}
where we have used Eq.~(\ref{bbtrans}) for the description of the brane-bulk energy transfer, by taking into account that $\rho =\rho _{cr}\left( a_{0}/a\right) ^{3w}$. We suppose that the matter content of the early Universe is formed from $m$ non-interacting comoving relativistic fluids with energy densities and thermodynamic pressures $\rho _{i}(t)$ and $p_{i}(t)$, $i=1,2,...,m$, respectively, with each fluid formed from particles having a particle number density $n_{i}(t)$, $i=1,2,...,m$, and obeying equations of state of the form $\rho _{i}(t)=k_{i}n_{i}^{\gamma _{i}}$, $p_{i}(t)=\left( \gamma _{i}-1\right) \rho _{i}$, $i=1,2,...,m$, where $k_{i}=\rho _{0i}/n_{0i}^{\gamma _{i}}\geq 0$, $i=1,2,...,m$, are constants, and $1\leq \gamma _{i}\leq 2$, $i=1,2,...,m$. For example, we can consider that the particle content of the early Universe is determined by pure radiation (i.e., different types of massless particles), or by massive matter (baryonic and dark) in equilibrium with electromagnetic radiation, together with decoupled massive particles.
The total energy density and pressure of the cosmological fluid result from summing the contributions of the $m$ simple fluid components, and are given by $\rho \left( t\right) =\sum_{i=1}^{m}\rho _{i}(t)$ and $p\left( t\right) =\sum_{i=1}^{m}p_{i}(t)$, respectively. For a multicomponent comoving cosmological fluid, and in the presence of a variable brane tension and of bulk-brane energy exchange, Eq.~(\ref{partcreat}) becomes
\begin{equation}
\sum_{i=1}^{m}\left[ \dot{\rho}_{i}+3(\rho _{i}+p_{i})H\right] =-\sum_{i=1}^{m}\rho _{i}(t)\left[ \frac{\dot{\lambda}}{\sum_{i=1}^{m}\rho _{i}(t)}-\alpha _{bb}H\right] . \label{partcreat1}
\end{equation}
Eq.~(\ref{partcreat1}) can be recast into the form of $m$ particle balance equations,
\begin{equation}
\dot{n}_{i}(t)+3n_{i}(t)H=\Gamma _{i}(t)n_{i}(t),\qquad i=1,2,...,m, \label{partcreat2}
\end{equation}
where $\Gamma _{i}(t)$, $i=1,2,...,m$, are the particle production rates, given by
\begin{equation}
\Gamma _{i}(t)=-\frac{1}{\gamma _{i}}\left[ \frac{\dot{\lambda}}{m\rho _{i}(t)}-\alpha _{bb}H\right] ,\qquad i=1,2,...,m. \label{Gamma}
\end{equation}
In order for Eq.~(\ref{partcreat2}) to describe particle production, the condition $\Gamma _{i}(t)\geq 0$, $i=1,2,...,m$, is required to be satisfied, leading to the following restriction imposed on the time variation rate of the brane tension,
\begin{equation}
\dot{\lambda}\leq \alpha _{bb}m\rho _{i}(t)H,\qquad i=1,2,...,m.
\end{equation}
Note that if $\Gamma _{i}(t)=0$, $i=1,2,...,m$, we obtain the usual particle conservation law of the standard cosmology. Of course, the casting of Eq.~(\ref{partcreat1}) into this form is not unique. In Eqs.~(\ref{partcreat2}) and (\ref{Gamma}), we consider the simultaneous creation of a multicomponent comoving cosmological fluid, but other possibilities can be formulated in the same way (for example, the creation of a single component in a mixture of fluids).
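The meaning of the balance equation Eq.~(\ref{partcreat2}) can be illustrated numerically (a sketch with arbitrarily chosen constant rates, not values derived from the model): for $\Gamma _{i}>0$ the comoving particle number $n_{i}a^{3}$ grows as $\exp \left( \int \Gamma _{i}\,dt\right) $, i.e. particles are indeed created.

```python
import math

# Integrate the particle balance equation dn/dt + 3*H*n = Gamma*n
# for constant H and Gamma (illustrative values only).
H, Gamma, n0 = 1.0, 0.3, 1.0
dt, steps = 1.0e-5, 100000       # integrate from t = 0 to t = 1

n, a = n0, 1.0
for _ in range(steps):
    n += dt * (Gamma - 3.0 * H) * n   # particle balance equation
    a += dt * H * a                   # da/dt = H*a

# The comoving particle number N = n*a^3 should grow as exp(Gamma*t):
N_numeric = n * a**3
N_analytic = n0 * math.exp(Gamma * 1.0)
```

For $\Gamma =0$ the same integration reproduces the standard conservation law $na^{3}={\rm constant}$.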
The entropy $S_{i}$, generated during particle creation at temperature $T_{i}$, $i=1,2,...,m$, can be obtained from Eq.~(\ref{partcreat2}), and for each species of particles has the expression
\begin{equation}
T_{i}\frac{dS_{i}}{dt}=-\left[ \frac{\dot{\lambda}}{m\rho _{i}(t)}-\alpha _{bb}H\right] \rho _{i}\left( t\right) V,\qquad i=1,2,...,m,
\end{equation}
where $V$ is the volume of the Universe, or, equivalently,
\begin{equation}
\frac{dS_{i}}{dt}=\frac{\gamma _{i}\rho _{i}(t)V}{T_{i}}\Gamma _{i}(t),\qquad i=1,2,...,m.
\end{equation}
In a cosmological fluid in which the density and pressure are functions of the temperature only, $\rho =\rho \left( T\right) $, $p=p\left( T\right) $, the entropy of the fluid is given by $S=\left( \rho +p\right) V/T=\gamma \rho \left( t\right) V/T$. Therefore we can express the total entropy $S(t)$ of the brane Universe filled with a multicomponent cosmological fluid as a function of the particle production rates only,
\begin{equation}
S(t)=\sum_{i=1}^{m}S_{0i}\exp \left[ \int_{t_{0}}^{t}\Gamma _{i}\left( t^{\prime }\right) dt^{\prime }\right] ,
\end{equation}
where $S_{0i}\geq 0$, $i=1,2,...,m$, are constants of integration. In the case of a general perfect comoving multicomponent cosmological fluid with two essential thermodynamical variables, the particle number densities $n_{i}$, $i=1,2,...,m$, and the temperatures $T_{i}$, $i=1,2,...,m$, it is conventional to express $\rho _{i}$ and $p_{i}$ in terms of $n_{i}$ and $T_{i}$ by means of the equilibrium equations of state $\rho _{i}=\rho _{i}\left( n_{i},T_{i}\right) $, $p_{i}=p_{i}\left( n_{i},T_{i}\right) $, $i=1,2,...,m$.
By using the general thermodynamic relation
\begin{equation}
\frac{\partial \rho _{i}}{\partial n_{i}}=\frac{\rho _{i}+p_{i}}{n_{i}}-\frac{T_{i}}{n_{i}}\frac{\partial p_{i}}{\partial T_{i}},\qquad i=1,2,...,m,
\end{equation}
in the case of a general comoving multicomponent cosmological fluid Eq.~(\ref{partcreat}) can also be rewritten in the form of $m$ particle balance equations,
\begin{equation}
\dot{n}_{i}(t)+3n_{i}(t)H=\Gamma _{i}(t)n_{i}(t),\qquad i=1,2,...,m, \label{partcreat3}
\end{equation}
with the particle production rates $\Gamma _{i}(t)$ given by more complicated functions of the thermodynamical parameters, of the brane tension, and of the brane-bulk energy exchange rate,
\begin{equation}
\Gamma _{i}(t)=-\frac{\rho _{i}}{\rho _{i}+p_{i}}\left[ \frac{\dot{\lambda}}{m\rho _{i}(t)}-\alpha _{bb}H+T_{i}\frac{\partial \ln \rho _{i}}{\partial T_{i}}\left( \frac{\dot{T}_{i}}{T_{i}}-C_{i}^{2}\frac{\dot{n}_{i}}{n_{i}}\right) \right] ,\qquad i=1,2,...,m,
\end{equation}
where $C_{i}^{2}=\left( \partial p_{i}/\partial T_{i}\right) /\left( \partial \rho _{i}/\partial T_{i}\right) $. The requirement that the particle balance equation Eq.~(\ref{partcreat3}) describes particle production, $\Gamma _{i}(t)\geq 0$, $i=1,2,...,m$, imposes in this case the following constraint on the time variation of the brane tension,
\begin{equation}
\dot{\lambda}<\alpha _{bb}m\rho _{i}(t)H-m\rho _{i}(t)T_{i}\frac{\partial \ln \rho _{i}}{\partial T_{i}}\left( \frac{\dot{T}_{i}}{T_{i}}-C_{i}^{2}\frac{\dot{n}_{i}}{n_{i}}\right), \qquad i=1,2,...,m .
\end{equation}
In the general case the entropy generated during the reheating period due to the variation of the brane tension and to the bulk-brane energy exchange can be obtained, for each component of the cosmological fluid, from the equations
\begin{equation}
\frac{dS_{i}}{dt}=-\frac{\left( \rho _{i}+p_{i}\right) V}{T_{i}}\left[ \frac{\dot{\lambda}}{m\rho _{i}(t)}-\alpha _{bb}H+T_{i}\frac{\partial \ln \rho _{i}}{\partial T_{i}}\left( \frac{\dot{T}_{i}}{T_{i}}-C_{i}^{2}\frac{\dot{n}_{i}}{n_{i}}\right) \right] ,\qquad i=1,2,...,m,
\end{equation}
while the total entropy of the Universe is given by $S(t)=\sum_{i=1}^{m}S_{i}(t)$. The entropy flux vector of the $k$th component of the cosmological fluid is given by
\begin{equation}
S^{(k)\alpha }=n_{k}\sigma _{k}u^{\alpha },\qquad k=1,2,...,m,
\end{equation}
where $\sigma _{k}$, $k=1,2,...,m$, is the specific entropy (per particle) of the corresponding cosmological fluid component, and $u^{\alpha }$ is the four-velocity of the fluid. By using the Gibbs equation $nTd\sigma =d\rho -\left[ \left( \rho +p\right) /n\right] dn$ for each component of the fluid, and assuming that the entropy density $\sigma $ does not depend on the brane tension, we obtain
\begin{equation}
S_{;\alpha }^{(k)\alpha }=-\frac{1}{T_{k}}\left( \dot{\lambda}-\alpha _{bb}H\rho \right) -\frac{\mu _{k}\Gamma _{k}n_{k}}{T_{k}},\qquad k=1,2,...,m,
\end{equation}
where $\mu _{k}$ is the chemical potential, defined by $\mu _{k}=\left[ \left( \rho _{k}+p_{k}\right) /n_{k}\right] -T_{k}\sigma _{k}$. The chemical potential is zero for radiation. For each component of the cosmological fluid the second law of thermodynamics requires that the condition
\begin{equation}
S_{;\alpha }^{(k)\alpha }\geq 0,\qquad k=1,2,...,m,
\end{equation}
has to be satisfied.
\section{Power law inflation in brane world models with varying brane tension and bulk pressure}\label{pow}

For a vacuum Universe with $\rho =p=0$, in the presence of a non-zero bulk pressure and of matter-energy exchange between the brane and the bulk, the field equations Eqs.~(\ref{feq}) take the form
\begin{equation}\label{infl1}
3\left( \frac{1}{a}\frac{da}{d\tau }\right) ^{2}=\frac{l^{2}}{2}-p_{B}+lu,
\end{equation}
\begin{equation}\label{infl2}
3\frac{1}{a}\frac{d^{2}a}{d\tau ^{2}}=\frac{l^{2}}{2}-p_{B}-lu,
\end{equation}
\begin{equation}\label{infl3}
\frac{dl}{d\tau }=-2\sqrt{\frac{2}{3}}p_{5},
\end{equation}
and
\begin{equation}\label{infl4}
\frac{d}{d\tau }\left( lua^{4}\right) =-a^{4}\frac{d}{d\tau }\left( \frac{l^{2}}{2}-p_{B}\right) ,
\end{equation}
where we have introduced the set of rescaled variables $\left( \tau ,l,p_{B},u,p_{5}\right) $, defined as
\begin{equation}
\tau =\sqrt{\frac{3}{2}}t,\quad \lambda =\frac{3}{k_{5}^{2}}l,\quad P_{B}=\frac{3}{k_{5}^{2}}p_{B},\quad U=\frac{3}{k_{5}^{2}}u,\quad P_{5}=\frac{3}{k_{5}^{2}}p_{5}.
\end{equation}
Moreover, we consider that the five-dimensional cosmological constant vanishes, $\Lambda _5=0$. We assume that the inflationary evolution is of the power law type, and therefore $a=\tau ^{\alpha }$, where $\alpha $ is a constant. Then Eqs.~(\ref{infl1}) and (\ref{infl2}) give
\begin{equation}
2\left(\frac{l^{2}}{2}-p_{B}\right)=\frac{3\alpha \left( 2\alpha -1\right) }{\tau ^{2}},
\end{equation}
and
\begin{equation}
2lu=\frac{3\alpha }{\tau ^{2}},
\end{equation}
respectively, while Eq.~(\ref{infl4}) is identically satisfied. In order to completely solve the problem, we need to specify the form of the energy-matter transfer from the bulk to the brane.
By assuming a functional form given by $p_{5}=p_{05}\tau ^{-\beta }$, where $\beta >0$ and $p_{05}>0$ are constants, we immediately obtain
\begin{eqnarray}
&&l\left( \tau \right)=\sqrt{\frac{8}{3}}\frac{p_{05}}{\beta -1}\tau ^{-\beta +1},\quad p_{B}\left( \tau \right) =\frac{4}{3}\frac{p_{05}^{2}}{\left( \beta -1\right) ^{2}}\tau ^{-2(\beta -1)}-\frac{3\alpha \left( 2\alpha -1\right) }{2\tau ^{2}},\nonumber\\
&&u\left( \tau \right) =\sqrt{\frac{27}{32}}\frac{\alpha \left( \beta -1\right) }{p_{05}}\tau ^{\beta -3}.
\end{eqnarray}
The Hubble parameter of the Universe during the inflationary phase is given by $H=\alpha /t$. The deceleration parameter is obtained as $q=d\left(1/H\right)/dt-1=\left(1-\alpha\right)/\alpha $. Therefore, if $\alpha >1$, then $q<0$, and the brane world Universe experiences an inflationary expansion.

\section{Scale factor dependent brane tension models}\label{scale}

In the following we assume that there is no matter-energy exchange between the bulk and the brane, $P_5=0$, and that the bulk pressure is also zero, $P_B=0$. For the matter on the brane we adopt as equation of state a linear barotropic relation between density and pressure, given by
\begin{equation}
p=\left( w-1\right) \rho ,
\end{equation}
where $w={\rm constant}$ and $w\in \lbrack 1,2]$. Therefore Eq.~(\ref{3}) gives
\begin{equation}
\dot{\rho}+3Hw\rho =-\dot{\lambda}, \label{3w}
\end{equation}
while Eq.~(\ref{4}) immediately gives
\begin{equation}
\lambda U=\frac{U_{0}}{a^{4}},
\end{equation}
where $U_{0}$ is an arbitrary constant of integration. In the following, in order to simplify the analysis, we assume that $U_0=0$.
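The first integral $\lambda U=U_{0}/a^{4}$ follows because, for $P_{5}=\bar{P}_{B}=0$, Eq.~(\ref{4}) reduces to $\dot{U}+4HU+U\dot{\lambda}/\lambda =0$, i.e. $d\left( \lambda Ua^{4}\right) /dt=0$. A quick numerical sketch (with arbitrary smooth test functions $\lambda (t)$ and $a(t)$, chosen only for illustration) confirms that $\lambda Ua^{4}$ is conserved along the integration.

```python
import math

# Check that lambda*U*a^4 is constant when
# dU/dt = -4*H*U - U*(dlambda/dt)/lambda   (Eq. (4) with P_5 = P_B = 0).
lam  = lambda t: 2.0 + math.exp(-t)       # arbitrary test brane tension
dlam = lambda t: -math.exp(-t)            # its time derivative
a_of = lambda t: (1.0 + t)**0.5           # arbitrary test scale factor
H_of = lambda t: 0.5 / (1.0 + t)          # H = (da/dt)/a

U, t, dt = 1.0, 0.0, 1.0e-5
I0 = lam(t) * U * a_of(t)**4              # initial value of lambda*U*a^4
for _ in range(100000):                   # integrate up to t = 1
    U += dt * (-4.0 * H_of(t) * U - U * dlam(t) / lam(t))
    t += dt
I1 = lam(t) * U * a_of(t)**4              # should still equal I0
```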
In order to explain the main observational features of modern cosmology (inflation, reheating, a decelerating period, and late time acceleration, respectively), we assume that the brane tension varies as a function of the scale factor $a$ according to the equation
\begin{equation}
\lambda^2=\lambda_0^2e^{-2\beta a^2}-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2, \label{vbrane}
\end{equation}
where $\beta $, $\lambda _0$ and $\lambda _1$ are constants. Suppose $t_{\rm in}$, $a_{\rm in}$ and $\rho_{\rm in}$ are the values of the time, of the scale factor, and of the energy density before inflation. Generally, in the present paper we use the subscript ``${\rm in}$'' to denote the values of the cosmological parameters before inflation, and the subscript ``${\rm en}$'' to denote values after inflation. Thus, for example, $N=\ln\left(a_{\rm en}/a_{\rm in}\right)$ is the e-folding number. The basic physical parameters of our model are $t_{\rm in}$, $a_{\rm in}$, $\rho_{\rm in}$, $N$, $k_5$, ${}^{(5)}\Lambda$, $\lambda_0$, $\lambda_1 $, and $\beta$, respectively. The coupling constant $k_5$ and the five-dimensional cosmological constant ${}^{(5)}\Lambda$ are constrained by the present value of the gravitational constant,
\begin{equation}
\frac{k_5^4}{6}\sqrt{-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2}\approx k_5^3\sqrt{-\frac{{}^{(5)}\Lambda}{6}}=8\pi G\approx 1.68\times10^{-55}\;{\rm eV}^{-2}, \label{gconstant}
\end{equation}
and by the constraints on the $5D$ cosmological constant \cite{Maartens},
\begin{equation}
\frac{k_5^2}{2}{}^{(5)}\Lambda\approx -\frac{6}{(0.1\;{\rm mm})^2}\approx -2.3\times 10^{-5}\;{\rm eV}^{2},
\end{equation}
where we have used the natural system of units with $\hbar=c=1$. From these two conditions, we obtain $k_5^4\approx 3.6\times 10^{-105}\;{\rm eV}^{-6}$ and ${}^{(5)}\Lambda\approx -7.7\times 10^{47}\;{\rm eV}^5$.
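These numbers can be reproduced directly from the two conditions above (a consistency sketch in natural units; the only external input is the conversion $\hbar c\approx 1.97\times 10^{-7}\;{\rm eV\,m}$):

```python
import math

# Solve  k5^3*sqrt(-Lambda5/6) = 8*pi*G  together with
#        (k5^2/2)*Lambda5 = -6/(0.1 mm)^2  for k5 and Lambda5.
eight_pi_G = 1.68e-55            # eV^-2
hbar_c = 1.97e-7                 # eV*m, conversion factor
inv_len = hbar_c / 1.0e-4        # (0.1 mm)^-1 expressed in eV
c2 = 6.0 * inv_len**2            # |k5^2*Lambda5/2| in eV^2

# Eliminating Lambda5 = -2*c2/k5^2 gives k5^2 = 8*pi*G/sqrt(c2/3):
k5_sq = eight_pi_G / math.sqrt(c2 / 3.0)
k5_4 = k5_sq**2                  # expected ~ 3.6e-105 eV^-6
Lambda5 = -2.0 * c2 / k5_sq      # eV^5
```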
Besides, the value of $\lambda_1$ can be obtained from the value of the present day dark energy density $\rho_{\rm dark}\approx 10^{-12}\;{\rm eV}^4$ \cite{Carroll},
\begin{equation}
\frac{k_{5}^{4}}{12}\lambda_1 ^{2}=8\pi G \rho_{\rm dark}\approx 1.6\times10^{-67}\;{\rm eV}^2, \label{matchdark}
\end{equation}
which gives $\lambda_1\approx 1.6\times10^{19}\;{\rm eV}^4$. As a consistency check on Eq.~(\ref{gconstant}), it follows that the condition $\lambda_1^2\ll -6{}^{(5)}\Lambda/k_5^2$ is indeed satisfied. We also choose $\lambda_0$ to be of the same order of magnitude as the vacuum energy $\rho_{\rm vac}\backsim 10^{100}\;{\rm eV}^4$ at the GUT scale \cite{Carroll,Ishak},
\begin{equation}
\frac{k_{5}^{4}}{12}\lambda_0 ^{2}=8\pi G \rho_{\rm vac}\backsim 10^{45}\;{\rm eV}^2, \label{matchvac}
\end{equation}
which gives $\lambda_0\backsim 1.3\times10^{75}\;{\rm eV}^4$. The scale difference between $\lambda_0$ and $\sqrt{-6{}^{(5)}\Lambda/k_5^2+\lambda_1^2}$ is $\lambda_0/\sqrt{-6{}^{(5)}\Lambda/k_5^2}\backsim 10^{25}$, while $\lambda_0$ and $\lambda_1$ differ by a factor of the order of $\backsim 10^{56}$. The remaining model parameters, $t_{\rm in}$, $a_{\rm in}$, $\rho_{\rm in}$, $N$, and $\beta$, are constrained in the next Section. When $a$ is very small, the brane tension $\lambda\approx\lambda_0$ dominates the early Universe at the time of inflation. Due to the exponential expansion of the Universe, the brane tension quickly decays to a constant just after inflation. The decay of the brane tension generates the matter content of the Universe, according to Eq.~(\ref{3}). This happens also during the accelerated expansion period of the Universe. Matter is created during all periods of the expansion of the Universe, but the most important epoch for matter creation is near the end of inflation.
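The orders of magnitude quoted above can be checked in a few lines (a sketch; $\rho_{\rm dark}$ and $\rho_{\rm vac}$ are the round values used in the text):

```python
import math

# Brane tension parameters from (k5^4/12)*lambda^2 = 8*pi*G*rho.
k5_4 = 3.6e-105          # eV^-6, from the previous constraints
eight_pi_G = 1.68e-55    # eV^-2
rho_dark = 1.0e-12       # eV^4, present day dark energy density
rho_vac = 1.0e100        # eV^4, GUT-scale vacuum energy

lam1 = math.sqrt(12.0 * eight_pi_G * rho_dark / k5_4)   # ~ 1e19 eV^4
lam0 = math.sqrt(12.0 * eight_pi_G * rho_vac / k5_4)    # ~ 1e75 eV^4
ratio = lam0 / lam1                                     # ~ 1e56
```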
In the evolution of the Universe there is one moment when $\ddot{a}=0$, which corresponds to the moment when the Universe switches from the accelerating expansion to a decelerating phase. After the matter (which is mainly in the form of radiation) energy density reaches its maximum, the Universe enters into a radiation dominated phase, in which the quadratic term in Eq.~(\ref{1}) is dominant at first. The matter energy density continues to decrease due to the expansion. When the linear term in the matter density equals the quadratic term, the Universe switches back to the standard $\rm\Lambda CDM$ behavior. The Universe then enters the matter dominated epoch at about $4.7\times10^4\;{\rm yr}$ \cite{Ryden}. The matter term equals the residual $\lambda_1$ term in Eq.~(\ref{1}) at about $10\;{\rm Gyr}$ \cite{Ryden}. This is the second moment in the evolution of the Universe when $\ddot{a}=0$. After this moment, the Universe enters an accelerating expansion phase again, and its dynamics is controlled by the term $\lambda_1$.

\section{Qualitative analysis of the model}

In the present Section we consider the approximate behavior of the cosmological model with varying brane tension during the different cosmological epochs.

\subsection{Early inflationary phase: $2\beta a^2\ll 1$}

When the scale factor $a$ is very small, the exponential factor $e^{-2\beta a^2}$ in Eq.~(\ref{vbrane}) can be approximated by $1$. Therefore, the brane tension is given by $\lambda^{2}\approx \lambda_{0}^{2}$, and physically it corresponds to the vacuum energy necessary to give an exponential inflation. Since $k^2_5\lambda_{0}^2/6\gg\left|{}^{(5)}\Lambda\right|$, from Eq.~(\ref{1}) the scale factor evolves as an exponential function of time, given by
\begin{equation}
a=a_{\rm in}e^{(k_5^2\lambda_0/6)\,t}, \label{approxa}
\end{equation}
where $a_{{\rm in}}$ is the value of the scale factor prior to inflation.
The e-folding number is given by $N=\ln(a/a_{{\rm in}})$, which should be roughly of the order of $N\gtrsim 60$ in order to solve the flatness and horizon problems. In the present paper we adopt for $N$ the value $N=70$. Since $2\beta a_{\rm en}^2\backsim 1$ at the end of inflation, we can roughly estimate the value of $a_{\rm en}$ to be
\begin{equation}
a_{\rm en}\backsim \frac{1}{\sqrt{2\beta}}. \label{aen}
\end{equation}
From the adopted value of $N$, we obtain the value of $a_{\rm in}$ as
\begin{equation}
a_{\rm in}=a_{\rm en}e^{-N}.
\end{equation}
According to Eq.~(\ref{approxa}), the end time of the inflation $t_{\rm en}$ can be estimated as
\begin{equation}
k_5^2\lambda_0(t_{\rm en}-t_{\rm in})/6\approx k_5^2\lambda_0 t_{\rm en}/6\approx N,
\end{equation}
which implies that $t_{\rm en}\approx 10^{-36}$ s. The value of $t_{\rm in}$ is insensitive to the variation of the initial conditions, provided that $t_{\rm in}$ is at least one order of magnitude smaller than $t_{\rm en}$. With the adopted values of the e-folding number $N$ and of $\beta$, we can fix the values of $a_{\rm in}$ and of $t_{\rm in}$, respectively. Since at the beginning of the inflationary stage the matter has not yet been generated, we have $\rho_{\rm in}= 0$. For the deceleration parameter, from Eq.~(\ref{q0}) we find
\begin{equation}
q\approx\frac{-\lambda_{0}^2e^{-2\beta a^2}-\lambda_1^2}{\lambda_{0}^{2}e^{-2\beta a^2}+\lambda_1^2}=-1.
\end{equation}
By substituting the brane tension we can rewrite Eq.~(\ref{3w}) as
\begin{equation}
\frac{d\rho}{dt }+3w\frac{1}{a}\frac{da}{dt }\rho=-\frac{d\lambda}{dt}=\frac{2\beta \lambda_0^2a\dot{a}e^{-2\beta a^2}}{\sqrt{\lambda_0^2e^{-2\beta a^2}-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2}}.
\end{equation}
For $2\beta a^2\ll 1$, the exponential factor does not change much as $a$ increases.
Therefore,
\begin{equation}
\frac{d\rho}{dt }+3w\frac{1}{a}\frac{da}{dt }\rho=\frac{2\beta \lambda_0^2a\dot{a}}{\sqrt{\lambda_0^2-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2}}\approx \frac{2\beta \lambda_0^2a\dot{a}}{\sqrt{\lambda_0^2}}=\beta k_5^2\lambda_0^2 a^2/3,
\end{equation}
where we have also used Eq.~(\ref{approxa}). The conversion of the brane tension energy into matter begins already during the inflationary stage, and the rate of the conversion is proportional to $a^2$ during inflation. Therefore, the matter density also rises exponentially in the late stages of the inflationary phase. The matter generation rate becomes most important at the end of inflation.

\subsection{Reheating period: $2\beta a^2 \approx 1$}

During the reheating period the evolution of the matter density gradually changes from an exponential increase to a power law decrease, $\rho\propto a^{-3w}$. In our model the matter density is a smooth function, which is strictly increasing from the beginning of inflation, and then strictly decreasing after the end of the reheating phase. Therefore there must be a maximum value of the density, $\rho_{\rm max}$, reached at a time $t_{\rm max}$. After $t_{\rm max}$, the Universe is dominated by matter in the form of radiation, and almost all the energy of the brane tension has been converted into matter. The temperature of the matter, corresponding to a radiation dominated Universe at $t_{\rm max}$, is denoted $T_{\rm RH}$, and is given by
\begin{equation}
\rho_{\rm max}=\frac{\pi^2}{15}(kT_{\rm RH})^4, \label{rmax}
\end{equation}
where $k$ is Boltzmann's constant. Current theories of gravitino production constrain $T_{\rm RH}$ to be $T_{\rm RH}< 10^9-10^{10}\;{\rm GeV}$ \cite{Ellis, Kawasaki,Moroi}. Therefore the maximum density of the Universe must satisfy the condition $\rho_{\rm max}<10^{36}-10^{40}\;{\rm GeV}^4$.
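The quoted bound on $\rho_{\rm max}$ follows directly from Eq.~(\ref{rmax}) (a sketch in units with $k=1$ and $T_{\rm RH}$ in GeV):

```python
import math

# rho_max = (pi^2/15) * T_RH^4 for the two limiting reheating temperatures.
def rho_max(T_GeV):
    return (math.pi**2 / 15.0) * T_GeV**4   # GeV^4

low  = rho_max(1.0e9)    # T_RH = 10^9  GeV  ->  ~ 10^36 GeV^4
high = rho_max(1.0e10)   # T_RH = 10^10 GeV  ->  ~ 10^40 GeV^4
```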
The maximum density can be obtained from the condition $\dot{\rho}|_{t=t_{\rm max}}=0$, and, with the use of Eq.~(\ref{3w}), it is given as a solution of the equation
\begin{equation}\label{rhom}
3w\frac{1}{a}\frac{da}{dt}\rho_{\rm max}=-\frac{d\lambda}{dt }=\left.\frac{2\beta \lambda_0^2a\dot{a}e^{-2\beta a^2}}{\sqrt{\lambda_0^2e^{-2\beta a^2}-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2}}\right|_{t=t_{\rm max}}.
\end{equation}
With the rough approximation $\beta a^2|_{t=t_{\rm max}}\approx \beta a_{\rm en}^2\backsim 1$, Eq.~(\ref{rhom}) can be written as
\begin{equation}
3w\rho_{\rm max}=\left.\frac{2\beta \lambda_0^2a^2e^{-2\beta a^2}}{\sqrt{\lambda_0^2e^{-2\beta a^2}-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2}}\right\vert_{t=t_{\rm max}}\approx\left.\frac{2\lambda_0^2e^{-2\beta a^2}}{\sqrt{\lambda_0^2e^{-2\beta a^2}-\frac{6{}^{(5)}\Lambda}{k_5^2}+\lambda_1^2}}\right\vert_{t=t_{\rm max}}\approx 0.74\lambda_0.
\end{equation}
This relation gives the maximum matter density of the Universe, and the value of $\lambda_0\backsim 10^{39}\;{\rm GeV}^4$ that we have chosen is consistent with the corresponding bound on the maximum matter energy density.

\subsection{Matter domination period: $2\beta a^2 \gg 1$ and $2\lambda\rho \gg \lambda_1^2 $}

At this stage, the Universe is dominated by matter, and the brane tension is roughly a constant. During this period, the key difference with respect to the conventional cosmological models is the presence of the term quadratic in the matter density, which dominates the dynamics of the Universe at the beginning of this period. The evolution equation of the scale factor is
\begin{equation}
\frac{da}{dt}=a\frac{k_5^2}{6}\sqrt{2\lambda\rho+\rho^2}\approx a\frac{k_5^2}{6}\rho, \label{dadt2}
\end{equation}
and the evolution of the matter density is given by
\begin{equation}
\frac{d\rho}{dt }+4\frac{1}{a}\frac{da}{dt }\rho=0,
\end{equation}
where we have assumed that immediately after the matter energy density reaches its maximum the matter is in the form of radiation.
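The numerical coefficient $0.74$ above is simply $2e^{-1}$ (evaluated at $2\beta a^2\approx 2$), and the GeV$^4$ value of $\lambda_0$ follows from elementary unit conversion (a sketch; $1\;{\rm GeV}=10^{9}\;{\rm eV}$):

```python
import math

# With 2*beta*a^2 ~ 2 at t_max, the right-hand side reduces to
# 2*lambda0*exp(-1), giving the coefficient quoted in the text:
coeff = 2.0 * math.exp(-1.0)          # ~ 0.74

# lambda0 ~ 1.3e75 eV^4 expressed in GeV^4:
lam0_GeV4 = 1.3e75 / (1.0e9)**4       # ~ 1.3e39 GeV^4
```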
The solution is $\rho={\rm constant}/a^4\equiv \rho_{\rm max}a^4(t_{\rm max})/a^4\approx \lambda_0/\beta^2a^4$. Therefore, the scale factor and the deceleration parameter evolve as
\begin{equation}
a(t)=\left(\frac{2k_5^2}{3}\rho_{\rm max}a^4(t_{\rm max})t\right)^{\frac{1}{4}}\approx\left(\frac{k_5^2\lambda_0}{6\beta^2}t\right)^{\frac{1}{4}},
\end{equation}
\begin{equation}
q=-\frac{a\ddot{a}}{\dot{a}^2}=\frac{\frac{k_5^4\lambda}{36}\left[\rho(1+\frac{2\rho}{\lambda})+3(w-1)\rho(1+\frac{\rho}{\lambda})\right]}{\frac{k_5^4\lambda}{18}\left(\rho+\frac{\rho^2}{2\lambda}\right)}\approx 3w-1=3.
\end{equation}
After the equality of the quadratic and linear density terms, $2\lambda\rho= \rho^2$, the linear term in the matter density takes over, i.e. $2\lambda\rho\gg \rho^2$. The Universe enters the $\rm \Lambda CDM$ regime in this radiation dominated phase, and its dynamics is described by the equations
\begin{equation}
\frac{da}{dt}=a\sqrt{\frac{k_5^4}{18}\lambda\rho}, \label{dadt3}
\end{equation}
and
\begin{equation}
\frac{d\rho}{dt }+4\frac{1}{a}\frac{da}{dt }\rho=0,
\end{equation}
respectively. The solution for the scale factor is
\begin{equation}
\frac{a^2(t)}{2}\approx\left(\frac{k_5^3\sqrt{-6{}^{(5)}\Lambda}\lambda_0}{18\beta^2}\right)^{1/2}t, \label{alinear}
\end{equation}
with $\rho=\rho_{\rm max}a^4(t_{\rm max})/a^4$. During this period the deceleration parameter evolves as
\begin{equation}
q\approx\frac{\frac{k_5^4\lambda}{36}\left[\rho+3(w-1)\rho\right]}{\frac{k_5^4}{18}\lambda\rho}=\frac{3w-2}{2}=1. \label{qlinear}
\end{equation}
After this stage, the Universe switches from radiation domination to non-relativistic matter domination at $t_{\rm rm}=1.5\times 10^{12}\;{\rm s}$ \cite{Ryden}. To simulate the transition from the radiation dominated period to the baryonic matter dominated period one can introduce a time varying $w$, given by \cite{Harko08}
\begin{equation}
w=\frac{4t_{\rm rm}/3+t}{t_{\rm rm}+t}.
\label{rmsim} \end{equation} In the matter dominated era ($w=1$), the deceleration parameter shifts to $q=0.5$, according to Eq.~(\ref{qlinear}). Recall that $\beta$ is still free, and we can use it to match the scale factor at the radiation-matter equality. With the use of Eq.~(\ref{alinear}), and by taking $a_{\rm rm}=2.8\times10^{-4}$ \cite{Ryden}, we obtain \begin{equation} 2.6\times10^{-20}\;{\rm s}^{-1}\sim\left(\frac{k_5^3\sqrt{-6{}^{(5)}\Lambda}\lambda_0}{18\beta^2}\right)^{1/2}, \label{matchrm} \end{equation} which gives $\lambda_0\approx 10^{-15}\times \beta^2$. This gives the estimate $\beta\sim 10^{45}$. \subsection{Dark Energy Domination era ($2\lambda\rho<\lambda_1^2 $)} With the increase of the cosmological time, the constant term $\lambda_1$ in Eq.~(\ref{1}) will dominate over matter. Thus, this term plays the role of the dark energy of the standard $\rm \Lambda CDM$ model. In its late stages of evolution the Universe becomes ``dark energy" dominated. The Universe turns to exponential acceleration again, with \begin{equation} a\propto e^{(k_5^2/6)\lambda_1\, t}, \end{equation} but with a time scale much longer than the inflationary time scale. The deceleration parameter will converge to \begin{equation} q\rightarrow\frac{-k_5^4\lambda_1^2}{k_5^4\lambda_1^2}=-1, \end{equation} showing that the expansion of the Universe is accelerating. \section{Numerical analysis of the model}\label{numerical} The field equations can be rewritten in a simpler form by introducing a set of dimensionless variables $\tau$, $r$, and $l$, defined as \begin{equation} \tau =k_{5}\sqrt{\frac{(-\Lambda _{5})}{2}}\,t,\quad \rho =k_{5}^{-1}\sqrt{3\times (-\Lambda _{5})}\,r,\quad \lambda =k_{5}^{-1}\sqrt{3\times (-\Lambda _{5})}\,l, \label{dim} \end{equation} respectively.
Then the field equations Eqs.~(\ref{1})-(\ref{5}) can be written in a dimensionless form as \begin{equation} 3\left( \frac{1}{a}\frac{da}{d\tau }\right) ^{2}=-1+\frac{l^{2}}{2}+lr+\frac{r^{2}}{2}, \label{f1} \end{equation} \begin{equation} \frac{1}{a}\frac{d^{2}a}{d\tau ^{2}}=-\frac{1}{3}+\frac{l^{2}}{6}-\frac{1}{6}l\left[ r\left( 1+\frac{2r}{l}\right) +3(w-1)r\left( 1+\frac{r}{l}\right) \right] , \label{f2} \end{equation} \begin{equation} \frac{dr}{d\tau }+3w\frac{1}{a}\frac{da}{d\tau }r=-\frac{dl}{d\tau }. \label{f3} \end{equation} By rescaling the four-dimensional cosmological constant so that ${}^{(4)}\Lambda =\left[ k_{5}^{2}(-\Lambda _{5})/2\right] l_{\rm eff}$, it follows that $l_{\rm eff}=-1+l^{2}/2$. In the dimensionless variables, the deceleration parameter is given by $q=-\left( a\,d^{2}a/d\tau ^{2}\right) /\left( da/d\tau \right) ^{2}$, and can be explicitly expressed as a function of the physical parameters of the model as \begin{equation} q=\frac{1-l^{2}/2+l\left[ r\left( 1+2r/l\right) +3(w-1)r\left( 1+r/l\right) \right] /2}{-1+l^{2}/2+lr+r^{2}/2}. \label{q} \end{equation} To obtain the numerical solution, one should also consider the rescaled form of $\lambda$, \begin{equation} l^{2}=2(l_{0}^{2}e^{-2\beta a^2}+1+l_1^2), \label{exptension} \end{equation} where $l_{0}$ and $l_1$ are constants, with $l_0 \approx 10^{25} \gg l_1\approx 10^{-31}$. Eq.~(\ref{rmsim}) becomes \begin{equation} w=\frac{4\tau_{\rm rm}/3+\tau}{\tau_{\rm rm}+\tau}, \label{scalermsim} \end{equation} with $\tau_{\rm rm}=(4.8\times10^{-3}\;{\rm eV})t_{\rm rm}=10^{25}$. Note that the conversion in Eq.~(\ref{dim}) can be written out numerically as \begin{equation} t=(208\;{\rm eV}^{-1})\tau=(1.4\times10^{-13}\;{\rm s})\tau,\quad \rho =(2.0\times10^{50}\;{\rm eV})r,\quad \lambda =(2.0\times10^{50}\;{\rm eV})l.
\label{dim2} \end{equation} The time variations of the scale factor, of the energy density, of the brane tension, and of the deceleration parameter of the Universe are presented, for different scales of the vacuum energy $l_0$, in Fig.~\ref{fig1}. From the plots of $a$ and $q$, we find that there are five stages in the evolution of the Universe: the inflation and reheating stage, the quadratic density stage, the radiation domination stage, the non-relativistic matter stage, and the late time acceleration stage. Comparing the plots with different $l_0$'s shows the effect of the vacuum energy scale on the evolution of the Universe: a larger vacuum energy provides faster inflation and also generates more matter, and the matter energy density reaches its maximum at earlier times. Although changing $l_0$ modifies many details of the evolution of the Universe, these characteristics are difficult to constrain observationally. On the other hand, $l_0$ cannot be arbitrary, due to theoretical constraints such as the vacuum energy density and gravitino production. \begin{figure}[th] \centering \includegraphics[width=.450\textwidth]{f1fan.eps} \includegraphics[width=.450\textwidth]{f2fan.eps} \includegraphics[width=.450\textwidth]{f3fan.eps} \caption{Time variations of the scale factor $a$, deceleration parameter $q$, and energy density $r$ of the Universe in braneworld models with varying scale factor dependent brane tension. Different $l_0$ are plotted: $l_0=10^{24}$ (dotted curve), $l_0=10^{25}$ (solid curve), and $l_0=10^{26}$ (dashed curve).} \label{fig1} \end{figure} The time variations of the scale factor and of the deceleration parameter of the Universe are presented, for different values of $\beta$, in Fig.~\ref{fig2}. If we assume that $\lambda_0$ is fixed by the vacuum energy scale, the value of $\beta$ is well constrained.
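The transition between the radiation-dominated and non-relativistic matter stages is implemented through Eq.~(\ref{scalermsim}); a quick check of its limits (in the shifted convention used here, $w$ runs from $4/3$ down to $1$, and the linear-regime deceleration parameter $q=(3w-2)/2$ of Eq.~(\ref{qlinear}) correspondingly runs from $1$ down to $0.5$; $\tau_{\rm rm}=10^{25}$ as quoted in the text):

```python
tau_rm = 1e25  # dimensionless radiation-matter equality time (from the text)

def w(tau):
    """Equation-of-state interpolation of Eq. (scalermsim)."""
    return (4.0 * tau_rm / 3.0 + tau) / (tau_rm + tau)

def q_linear(wval):
    """Linear-regime deceleration parameter q = (3w - 2)/2."""
    return (3.0 * wval - 2.0) / 2.0

print(w(1e20), q_linear(w(1e20)))  # early times: w ~ 4/3, q ~ 1
print(w(1e30), q_linear(w(1e30)))  # late times:  w ~ 1,   q ~ 0.5
```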
Varying $\beta$ affects how well $a$ matches the observational constraints. From the graph of $r$, we can cross-check that the value of $\rho_{\rm max}$ indeed fulfills the requirement of Eq.~(\ref{rmax}). Besides $\beta $, we could examine the effect of adopting a different e-folding number $N$. Since $a_{\rm en}$ is determined by $\beta$ through the condition Eq.~(\ref{aen}), it too is fixed by the observational constraints. Different e-foldings could be a result of differences in $a_{\rm in}$. However, we find from Fig.~\ref{fig3} that $a_{\rm in}$ does not affect the post-inflationary epochs. Therefore $a_{\rm in}$ is not a robustly determined parameter, and we cannot fully constrain it. \begin{figure}[th] \centering \includegraphics[width=.450\textwidth]{f4fan.eps} \includegraphics[width=.450\textwidth]{f5fan.eps} \includegraphics[width=.450\textwidth]{f6fan.eps} \caption{Time variations of the scale factor $a$, deceleration parameter $q$, and energy density $r$ of the Universe in braneworld models with varying scale factor dependent brane tension. Different $\beta$ are plotted: $\beta=10^{43}$ (dotted curve), $\beta=10^{45}$ (solid curve), and $\beta=10^{47}$ (dashed curve).} \label{fig2} \end{figure} \begin{figure}[th] \centering \includegraphics[width=.63\textwidth]{f7fan.eps} \caption{Time variations of the scale factor $a$ of the Universe in braneworld models with varying scale factor dependent brane tension. Different $a_{\rm in}$ are plotted: $a_{\rm in}=10^{-51}$ (dotted curve), $a_{\rm in}=10^{-53}$ (solid curve), and $a_{\rm in}=10^{-55}$ (dashed curve).} \label{fig3} \end{figure} \section{Discussions and final remarks}\label{conclusion} In the present paper we have considered some cosmological implications of braneworld models with variable brane tension.
We have considered a thermodynamic interpretation of the model, and we have shown that from a thermodynamic point of view a variable brane tension can describe particle production processes in the early Universe, as well as the entropy production in the early stages of the cosmological evolution. A simple power-law inflationary cosmological model has also been obtained. By adopting a simple analytical form for $\lambda$, we have obtained a complete description of the dynamics and evolution of the Universe from the stage of the inflation to the phase of the late acceleration. Moreover, we have studied the difference between the vacuum energy scales at the inflationary epoch and during the late-time acceleration. A variable brane tension can drive the inflationary evolution of the Universe, and can also be responsible for matter creation in the post-inflationary phase (the reheating of the Universe). In the model we have adopted, the brane tension converges to a constant that sets the gravitational constant, and the residue after cancellation with the $\Lambda_5$ term gives the dark energy. It is interesting to note that, as opposed to the standard cosmological scenarios, in the present model matter creation takes place during the entire inflationary phase, but reaches a maximum after the exponential expansion of the Universe ends. Therefore, in braneworld models with varying brane tension there can be no clear distinction between inflation and reheating. By using numerical analysis as well as approximate analytical methods, we have obtained the result that in this variable tension model there are five phases in the cosmological evolution of the Universe. At the beginning, there are the inflationary and the reheating phases. During these phases, the brane tension is the dominant energy in the Universe, and its magnitude is of the order of the vacuum energy at the GUT scale.
As the brane tension decays, matter is created, and the Universe enters the hot phase of the Big Bang picture. The third phase is the quadratic density domination phase. This is a unique characteristic of the braneworld scenario, during which the Universe is dominated by the quadratic term in the energy density of the radiation. The deceleration parameter increases from $-1$ during inflation to $3$ in this phase. To study the cosmological dynamics we have introduced a set of dimensionless quantities, which describe the evolution of the scaled densities in terms of a dimensionless time parameter $\tau$. By using the results of the numerical simulations for the rescaled variables, one can obtain some constraints on the physical parameters of the model. \section*{Acknowledgments} This work is supported by the RGC grant HKU 701808P of the Government of the Hong Kong SAR.
\subsection{Capacity-Achieving Inputs for the Deletion Channel} In~\cite{Dobrushin_1967}, Dobrushin proves a capacity result for a class of synchronization error channels that includes the binary deletion channel. That paper also shows that the capacity can be approached by a sequence of finite-order Markov input distributions. Unfortunately, the Markov input distribution in Dobrushin's construction is not irreducible~\cite[Lemma 4]{Dobrushin_1967}. Thus, Dobrushin's result falls slightly short of what is required by the polar coding construction in this paper. In~\cite{Li_2019}, Li and Tan study the capacity of the concatenation of a deletion channel and a finite-state channel. For this setup, they prove a capacity result and show that the capacity can be approached by a sequence of finite-order Markov input distributions that are irreducible and aperiodic. As they note in their paper, their result is sufficient to prove that the polar coding scheme in this paper can achieve capacity. In this section, we describe a regular hidden-Markov input distribution that also achieves capacity on the deletion channel. Though this is not required, given~\cite{Li_2019}, we include it for completeness and because the argument is somewhat different. Denote by $P_{X^N}$ an input distribution over binary vectors of length $N$, which we will shortly optimize over. Let $\obX \triangleq(X_{1},\ldots,X_{N})$ be a random binary vector of length $N$ drawn according to $P_{X^N}$. Take $\obX$ as the input sequence to a binary deletion channel with deletion probability $\delta\in(0,1)$ and let $\underline{Y} \triangleq(Y_{1},\ldots,Y_{M})$ be the corresponding output sequence where the random variable $M$ is the output length. The maximum mutual information for a length-$N$ input is denoted by \begin{equation} \label{eq:delcapn} C_{N}\triangleq\max_{P_{X^{N}}}\frac{1}{N}I(\obX;\underline{Y}). 
\end{equation} It is well-known \cite[proof of Theorem II.1]{Kanoria_2013} that $NC_{N}$ is a subadditive sequence and this implies \cite[Lemma 1.2.1, page 3]{Steele:97b} that \[ C=\lim_{N\to\infty}C_{N}=\inf_{N\geq1}C_{N} \] exists and satisfies $C\leq C_{N}$ for $N\geq1$. Thus, for the optimal $P_{X^N}$ we have \begin{equation} \label{eq:IobXobYBoundsC} \frac{1}{N}I(\obX;\underline{Y}) \geq C \; . \end{equation} We begin with the standard approach~\cite{Chen_2008} of using an optimal $P_{X^{N}}$ from~\eqref{eq:delcapn} to generate a length-$kN$ random input $\mathbf{X} = \mathbf{X} (1) \odot \cdots \odot \mathbf{X} (k)$ where each $\mathbf{X}(i)$ is a length-$N$ block drawn independently from $P_{X^{N}}$ and using $\odot$ to represent vector concatenation. For this input, we denote the output by $\mathbf{Y} = \mathbf{Y}(1) \odot \cdots \odot \mathbf{Y}(k)$ where $\mathbf{Y}(i)$ contains the output symbols associated with the input $\mathbf{X}(i)$. Thus, for each $i$, the pair $\mathbf{X}(i),\mathbf{Y}(i)$ has the same distribution as the pair $\obX,\underline{Y}$. The random variable $\Ybl_{i} = |\mathbf{Y}(i)|$, for $i\in[k]$, equals the number of output symbols generated by the input block $\mathbf{X}(i)$. Using the chain rule for mutual information, we note that \begin{align*} I\big(\mathbf{X};\mathbf{Y},\Ybl_1^{k}\big)&=I(\mathbf{X};\mathbf{Y})+I\big(\mathbf{X};\Ybl_1^{k}|\mathbf{Y}\big) \\ &\leq I(\mathbf{X};\mathbf{Y}) + k\log_{2}(N+1), \end{align*} where the inequality follows from $I\big(\mathbf{X};\Ybl_1^{k}|\mathbf{Y}\big) \leq \sum_{i=1}^k H\big(\Ybl_{i}\big)$ and $0\leq \Ybl_i \leq N$.
Thus, it follows that \begin{align*} I& (\mathbf{X};\mathbf{Y}) \geq -k\log_{2}(N+1) + I\big(\mathbf{X};\mathbf{Y},\Ybl_1^{k}\big)\\ &\stackrel{\!\mathrm{(a)}\!}{=} -k\log_{2}(N+1) + I\big(\mathbf{X};\mathbf{Y}(1),\ldots,\mathbf{Y}(k)\big)\\ &= -k\log_{2}(N+1) + \sum_{i=1}^{k}I\big(\mathbf{X};\mathbf{Y}(i)|\mathbf{Y}(1),\ldots,\mathbf{Y}(i-1)\big)\\ &\stackrel{\!\mathrm{(b)}\!}{=} -k\log_{2}(N+1) +\sum_{i=1}^{k}I\big(\mathbf{X}(i);\mathbf{Y}(i)\big)\\ &= -k\log_{2}(N+1)+kI(\obX;\underline{Y})\\ & =kN\left(\frac{1}{N}I(\obX;\underline{Y})-\frac{\log_{2}(N+1)}{N}\right)\\ & \stackrel{\!\mathrm{(c)}\!}{\geq} kN\left(C-\frac{\log_{2}(N+1)}{N}\right), \end{align*} where $\mathrm{(a)}$ holds because there is an invertible mapping from $\mathbf{Y},\Ybl_1^{k}$ to $\mathbf{Y}(1),\ldots,\mathbf{Y}(k)$, $\mathrm{(b)}$ follows from the pairs $(\mathbf{X}(i),\mathbf{Y}(i))_{i=1}^k$ being i.i.d., and $\mathrm{(c)}$ follows from (\ref{eq:IobXobYBoundsC}). After normalizing by the input length, this gives \[ \frac{1}{kN} I(\mathbf{X};\mathbf{Y}) \geq C - \frac{\log_2 (N+1)}{N}. \] Thus, the information rate can be made arbitrarily close to $C$ by choosing $N$ large enough. However, the infinite input distribution formed by concatenating length-$N$ blocks cannot be generated by a regular hidden-Markov process. In order to explain how to overcome this, we will first describe this input distribution as a hidden-Markov process with state set \[ \mathcal{S} \triangleq \bigcup_{j=0}^{N-1} \big\{ x \in \{0,1\}^j \, \big| \, P_{X^j}(x) \neq 0 \big\} , \] where the set $\{0,1\}^j$ represents all possible states after $j$ input symbols from the length-$N$ input distribution $P_{X^N}$. We denote the initial state by the empty string $\varepsilon$, the unique element of $\{0,1\}^0$, and let $P_{X^0}(\varepsilon) = 1$ by convention.
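For illustration, the state set $\mathcal{S}$ can be enumerated directly from a block distribution by discarding zero-probability prefixes. A small sketch (the length-$3$ block distribution below is hypothetical, chosen so that one prefix has probability zero):

```python
import itertools

def state_set(P, N):
    """States of the hidden-Markov description: all prefixes
    x in {0,1}^j, j < N, with positive probability under the
    block distribution P (a dict over length-N tuples)."""
    S = [()]  # the empty string eps
    for j in range(1, N):
        for prefix in itertools.product((0, 1), repeat=j):
            if sum(p for x, p in P.items() if x[:j] == prefix) > 0:
                S.append(prefix)
    return S

# A toy length-3 block distribution (hypothetical, for illustration).
P = {(0, 0, 0): 0.5, (0, 1, 1): 0.3, (1, 0, 1): 0.2}
S = state_set(P, 3)
print(S)        # the prefix (1, 1) is excluded: it has probability 0
print(len(S))   # 6, which is < 2^3 - 1 = 7
```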
To generate multiple blocks, we define the underlying Markov chain to start in the $\varepsilon$ state and return to the $\varepsilon$ state with probability 1 after generating $N$ outputs. Thus, the underlying Markov chain is irreducible because we have only included states with positive probability and there is a path with positive probability from $\varepsilon$ to any $x \in \mathcal{S}$. Notice that the state implicitly encodes the current input position in the length-$N$ block distribution. For example, if $s\in \{0,1\}^j$, then the next symbol is drawn according to $P_{X^{j+1}|X^j}(x|s)$. Thus, the underlying Markov chain is periodic with period $N$. To make it aperiodic, we will introduce one additional state, which we denote by $\tau$, that is used to dither the input block between length-$N$ and length-$(N+1)$. State $\tau$ always outputs a dither bit whose value is $0$ and then transitions to state $\varepsilon$. The idea is that, after a length-$N$ input block, a fair coin is used to determine if the next block will start immediately (e.g., the underlying Markov chain transitions to state $\varepsilon$) or be delayed by one symbol (e.g., the underlying Markov chain transitions to state $\tau$). After this, the modified Markov chain will be aperiodic because the transition graph has loops of length $N$ and $N+1$. The period of an irreducible Markov chain is the greatest common divisor of the lengths of all loops in the transition graph. Since $N$ and $N+1$ are relatively prime, the period is 1 and the chain is aperiodic. We also note that the new Markov chain is still irreducible because there is still a path with positive probability between any two states. Let $S_0$ be the initial state of the underlying Markov chain. In the current formulation, we have $S_0 = \varepsilon$ with probability 1, so the Markov chain is not stationary. One can make this Markov chain stationary by drawing the initial state $S_0$ from the stationary distribution of the underlying Markov chain.
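For a uniform $P_{X^2}$ block distribution, for instance, the dithered chain has four states $\{\varepsilon, 0, 1, \tau\}$, and the loops through $\varepsilon$ have lengths $2$ (no dither) and $3$ (dither), so the period is $\gcd(2,3)=1$. A sketch checking irreducibility and aperiodicity via primitivity of the transition matrix (the explicit matrix below is an assumption of this sketch):

```python
import numpy as np
from math import gcd

# Dithered chain for a uniform P_{X^2} block: states [eps, "0", "1", tau].
# After each length-2 block, a fair coin decides whether to visit tau.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],  # eps -> "0" or "1" (first block symbol)
    [0.5, 0.0, 0.0, 0.5],  # "0" -> eps or tau (second block symbol)
    [0.5, 0.0, 0.0, 0.5],  # "1" -> eps or tau (second block symbol)
    [1.0, 0.0, 0.0, 0.0],  # tau -> eps (emits the dither bit 0)
])

# Loops through eps have lengths 2 and 3, so the period divides gcd(2, 3) = 1.
print(gcd(2, 3))  # 1

# Primitivity: some power of P is strictly positive, i.e., the chain is
# irreducible and aperiodic (Wielandt: (4-1)^2 + 1 = 10 steps suffice).
print((np.linalg.matrix_power(P, 12) > 0).all())  # True
```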
After this change, we have constructed a regular hidden-Markov input derived from our original $P_{X^N}$ block distribution. Now, let $\mathbf{X}$ be a length-$k(N+1)$ input drawn from the constructed hidden-Markov process. This input can be broken into segments by adding commas before the inputs generated by the state $\varepsilon$. A complete segment is delimited by commas on both sides, and thus has length either $N$ or $N+1$. Note that $\mathbf{X}$ contains at least $k$ segments, and by discarding the first segment we get at least $k-1$ complete segments. We call the length-$N$ prefix of a complete segment a block. Thus, we have at least $k-1$ blocks, $\mathbf{X}(2),\ldots,\mathbf{X}(k)$, where each block can be associated with an independent draw from $P_{X^N}$. Let $T_i \in \{0,1\}$ be the side-information random variable that indicates, for the $i$-th (possibly incomplete) segment, whether or not state $\tau$ was visited during that segment. Given $S_0$ and $T_1^k$, it is always possible to compute the locations of the commas described above and separate $\mathbf{X}$ into the $k-1$ blocks $\mathbf{X}(2),\ldots,\mathbf{X}(k)$. This is because $S_0$ gives the initial offset into the first segment and $T_i$ indicates whether or not each segment has the additional dither bit. Similarly, the output $\mathbf{Y}$ can be separated into subvectors associated with the above blocks by adding commas to separate outputs generated by different segments and removing any outputs caused by dither bits. Namely, we let $M_i \in \{0,\ldots,N+1\}$ be the side-information random variable that indicates the number of outputs generated by the $i$-th \emph{segment} and $R_i \in \{0,1\}$ be the side-information random variable that indicates whether the last output in a subvector is due to a dither bit. 
Given $M_1^k$ and $R_1^k$, it is always possible to separate $\mathbf{Y}$ into $\mathbf{Y}(2),\ldots,\mathbf{Y}(k)$ where each $\mathbf{Y}(i)$ is the output associated with the block $\mathbf{X}(i)$. Thus, each pair $(\mathbf{X}(i),\mathbf{Y}(i))$ has the same distribution as $(\obX,\underline{Y})$. Using this setup, the chain rule of mutual information and cardinality upper bounds imply that% \begin{align} I &\big(\mathbf{X},T_1^k;\mathbf{Y},M_1^{k},R_1^{k}|S_0\big) = I \big(\mathbf{X},T_1^k;\mathbf{Y},M_1^{k},R_1^{k}\big) \nonumber \\ & \quad + I \big(\mathbf{X},T_1^k;S_0|\mathbf{Y},M_1^{k},R_1^{k} \big) - I \big(\mathbf{X},T_1^k;S_0\big) \nonumber \\ &\leq I \big(\mathbf{X},T_1^k;\mathbf{Y},M_1^{k},R_1^{k}\big) + I \big(\mathbf{X},T_1^k;S_0|\mathbf{Y},M_1^{k},R_1^{k} \big) \nonumber \\ &\stackrel{\!\mathrm{(a)}\!}{\leq} I \big(\mathbf{X},T_1^k;\mathbf{Y},M_1^{k},R_1^{k}\big) + N \nonumber \\ &= I \big(\mathbf{X};\mathbf{Y},M_1^{k},R_1^{k}\big) + I \big(T_1^{k};\mathbf{Y},M_1^{k},R_1^{k} | \mathbf{X}\big) + N \nonumber \\ &\stackrel{\!\mathrm{(b)}\!}{\leq} I \big(\mathbf{X};\mathbf{Y},M_1^{k},R_1^{k}\big) + k + N \nonumber \\ &= I \big(\mathbf{X};\mathbf{Y}\big) + I \big(\mathbf{X};M_1^{k},R_1^{k}|\mathbf{Y}\big) + k + N \nonumber \\ &\stackrel{\!\mathrm{(c)}\!}{\leq} I \big(\mathbf{X};\mathbf{Y}\big) + k \log_2 (N+2) + k + k + N,\label{eq:capmi1} \end{align} where $\mathrm{(a)}$ follows from $\log_2 |\mathcal{S}| = \log_2 \left(1+\sum_{j=0}^{N-1} 2^j \right) = N$, $\mathrm{(b)}$ holds because $T_i \in \{0,1\}$, and $\mathrm{(c)}$ follows from $0\leq M_i \leq N+1$ and $R_i \in \{0,1\}$. 
Based on the decompositions described above, the data processing inequality implies that \begin{align} I &\big(\mathbf{X},T_1^k;\mathbf{Y},M_1^{k},R_1^k|S_0\big) \nonumber \\ &\geq I \big(\mathbf{X}(2),\ldots,\mathbf{X}(k);\mathbf{Y}(2),\ldots,\mathbf{Y}(k)\big | S_0) \nonumber \\ &= I \big(\mathbf{X}(2),\ldots,\mathbf{X}(k);\mathbf{Y}(2),\ldots,\mathbf{Y}(k)\big) \nonumber \\ &= \sum_{i=2}^{k} I\big(\mathbf{X}(i);\mathbf{Y}(i)\big) = (k-1) I(\obX;\underline{Y}).\label{eq:capmi2} \end{align} Combining~\eqref{eq:IobXobYBoundsC}--\eqref{eq:capmi2}, we have \begin{align*} I& (\mathbf{X};\mathbf{Y}) \geq -k\log_{2}(N+2) - 2k - N + (k-1)CN. \end{align*} To lower bound the information rate, we can normalize by the input length to see that \begin{multlinecc*} \frac{1}{k(N\!+\!1)} I(\mathbf{X};\mathbf{Y}) \\ \geq \frac{(k-1)N}{k(N\!+\!1)} \left( C - \frac{1}{k\!-\!1} \right) - \frac{2+ \log_2 (N\!+\!2)}{N\!+\!1}. \end{multlinecc*} By choosing $k$ and $N$ large enough, the information rate can be made arbitrarily close to $C$. Thus, we have constructed a sequence of regular hidden-Markov input distributions that achieve capacity on the binary deletion channel. In closing, we note that this argument works without change for channels with independent insertions, deletions, and substitutions. \section*{Structure of this paper} Section \ref{} sets up notation and defines the deletion channel. We further define the family of input processes over which our scheme can be applied, as well as the information rate of the joint input-output process. In Section~\ref{} we define the trellis corresponding to a given received word and deletion channel. We then show how the `$-$' and `$+$' operations can be applied to such a trellis in order to yield a new trellis. The section also explains how SC can be applied to this setting. Section~\ref{} proves that weak polarization occurs. Strong polarization is proved in Section~\ref{}. 
In order to show strong polarization, we first introduce the concept of a guard band --- a sequence of `$0$' symbols inserted into the middle of a word, meant to easily distinguish between the two halves after deletions have occurred. Such guard bands are inserted during the last third of the polarization process. They aid both the polarization rate and the decoding complexity. \fi \section{Background}\label{sec:background} \subsection{Notation} The natural numbers are denoted by $\mathbb{N} \triangleq \{1,2,\ldots \}$. We also define $[m] \triangleq \{ 1,2,\ldots,m \}$ for $m \in \mathbb{N}$. Let $\mathcal{X}$ denote a finite set (e.g., the input alphabet of a channel). In this paper, we fix $\mathcal{X} = \{0,1\}$ as the binary alphabet. Extensions to non-binary alphabets are straightforward; see, for example, \cite[Chapter 3]{sasoglu_thesis} and \cite[Appendix A]{Shuval_Tal_Memory_2017}. Let $\mathbf{x} = (x_1,\ldots,x_N) \in \mathcal{X}^N$ be a vector of length $N = 2^n$. We use $[statement]$ to denote the Iverson bracket, which evaluates to $1$ if $statement$ is true and $0$ otherwise. The concatenation of vectors $\mathbf{y} \in \mathcal{X}^{N_1}$ and $\mathbf{y}' \in \mathcal{X}^{N_2}$ lives in $\mathcal{X}^{N_1+N_2}$ and is denoted by $\mathbf{y} \odot \mathbf{y}'$. The length of a vector $\mathbf{y}$ is denoted by $|\mathbf{y}|$. Random variables will typically be denoted by uppercase letters. In this paper, we use the standard Ar\i{}kan transform presented in the seminal paper \cite{Arikan_2009}.
The Ar\i{}kan transform of $\mathbf{x} \in \mathcal{X}^N$, $N = 2^n$, is defined recursively using length-$N/2$ binary vectors, $\mathbf{x}^{[0]}$ and $\mathbf{x}^{[1]}$: \begin{IEEEeqnarray}{rCl} \vecxbr{0} & \triangleq & (x_1 \oplus x_2,x_3 \oplus x_4,\ldots,x_{N-1} \oplus x_{N}) \; , \label{eq:vecxZero} \\ \vecxbr{1} & \triangleq & (\phantom{x_1 \oplus{}} x_2,\phantom{x_3\oplus{}} x_4,\ldots,\phantom{x_{N-1}\oplus{}} x_{N}) \label{eq:vecxOne} \; , \end{IEEEeqnarray} where $\oplus$ denotes modulo-2 addition. Then, for any sequence $b_1,b_2,\ldots,b_\lambda\in \{0,1\}$ with $\lambda \leq n$, we extend this notation to define the vector $\vecxbr{b_1,b_2,\ldots,b_\lambda} \in \mathcal{X}^{2^{n-\lambda}}$ recursively via \begin{equation} \label{eq:recursiveTransformDefinition} \vecxbr{b_1,b_2,\ldots,b_{\lambda}} = \left( \vecxbr{b_1,b_2,\ldots,b_{\lambda-1}} \right)^{[b_\lambda]} \; . \end{equation} Specifically, if $\lambda=n$, then the vector $\mathbf{x}^{[b_1,b_2,\ldots,b_\lambda]}$ is a scalar. This scalar is denoted by $u_{i(\mathbf{b})}$, where $\mathbf{b}$ defines the index \begin{equation} \label{eq:bitReversedI} i(\mathbf{b}) \triangleq 1+\sum_{j=1}^{n} b_j 2^{n-j} \; . \end{equation} The transformed length-$N$ vector is given by \begin{equation} \label{eq:vecuIsvecxTransformed} \mathbf{u}= (u_1,\ldots,u_N) = \mathcal{A}_n(\mathbf{x}) \; , \end{equation} where $\mathcal{A}_n \colon \mathcal{X}^{2^n} \to \mathcal{X}^{2^n}$ is called the Ar\i{}kan transform of order $n$. Its inverse is denoted $\mathcal{A}^{-1}_n$ and satisfies $\mathcal{A}^{-1}_n=\mathcal{A}_n$. Let $\mathbf{b} = (b_1,b_2,\ldots,b_n)$ and $\mathbf{x} \in \mathcal{X}^N$ be given, where $N = 2^n$ and $i = i(\mathbf{b})$. As before, let $\mathbf{u} = \mathcal{A}_n(\mathbf{x})$. Since the vector $u^{i-1} = (u_1,u_2,\ldots,u_{i-1})$ will play an important role later on, we introduce additional notation.
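In code, the recursion (\ref{eq:vecxZero})--(\ref{eq:recursiveTransformDefinition}), together with the ordering induced by $i(\mathbf{b})$ (all indices with $b_1=0$ precede those with $b_1=1$), gives the following sketch of $\mathcal{A}_n$, including a check of the involution property $\mathcal{A}^{-1}_n=\mathcal{A}_n$:

```python
def arikan(x):
    """Arikan transform of a length-2^n binary tuple: x^[0] collects the
    pairwise XORs, x^[1] the even-position entries; since b_1 is the most
    significant bit of i(b), the b_1 = 0 branch is emitted first."""
    if len(x) == 1:
        return x
    x0 = tuple(x[i] ^ x[i + 1] for i in range(0, len(x), 2))
    x1 = tuple(x[i + 1] for i in range(0, len(x), 2))
    return arikan(x0) + arikan(x1)

x = (1, 1, 0, 0)
u = arikan(x)
print(u)               # (0, 0, 1, 0)
print(arikan(u) == x)  # True: the transform is its own inverse
```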
First, note that the vectors $\mathbf{b}' \in \{0,1\}^n$ can be totally ordered according to $i(\mathbf{b}')$, which is equivalent to the standard lexicographic ordering. Recalling the notation $\mathbf{x}^{[\mathbf{b}]}$, we now define the related notation $\mathbf{x}^{(\mathbf{b})} = \mathbf{x}^{(b_1,b_2,\ldots,b_{n})}$. Namely, $\mathbf{x}^{(\mathbf{b})}$ is the concatenation of $\mathbf{x}^{[\mathbf{b}']}$, over all vectors $\mathbf{b}'\in \{0,1\}^n$ satisfying $i(\mathbf{b}') < i(\mathbf{b})$. For $i(\mathbf{b}') = i(\mathbf{b})-1$, this gives \begin{equation} \label{eq:xRoundBracket} \mathbf{x}^{(\mathbf{b})} \triangleq \mathbf{x}^{[0,0,\ldots,0]} \odot \mathbf{x}^{[0,0,\ldots,0,1]} \odot \mathbf{x}^{[0,0,\ldots,0,1,0]} \odot \cdots \odot \mathbf{x}^{[\mathbf{b}']} . \end{equation} If $\mathbf{b}$ is the all-zero vector, then $\mathbf{x}^{(\mathbf{b})}$ is the null vector. From these definitions it follows that $\mathbf{x}^{(\mathbf{b})} = u^{i-1}$, where $i = i(\mathbf{b})$ and $\mathbf{u} = \mathcal{A}_n(\mathbf{x})$. \subsection{Deletion Channel} Let $W(\mathbf{y} | \mathbf{x})$ denote the transition probability of $N$ uses of the deletion channel with constant deletion rate $\delta$. The input is denoted by $\mathbf{x} \in \mathcal{X}^N$ and the output $\mathbf{y}$ has a random length $M = |\mathbf{y}|$ supported on $\{0,1,\ldots,N\}$. This channel is equivalent to a BEC with erasure probability $\delta$ followed by a device that removes all erasures from the output. Thus, $W(\mathbf{y} | \mathbf{x})$ equals the probability that $N - M$ deletions have occurred, which is $(1-\delta)^M \cdot \delta^{N-M}$, multiplied by the number of distinct deletion patterns that produce $\mathbf{y}$ from $\mathbf{x}$; see \cite[Section 2]{Mitzenmacher_2009}. We will also consider a trimmed deletion channel whose output is given by removing all leading and trailing zeros from the output of the standard deletion channel. See Section~\ref{sec:strong} for details.
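The combinatorial factor in $W(\mathbf{y}|\mathbf{x})$ is the number of embeddings of $\mathbf{y}$ as a subsequence of $\mathbf{x}$, which a standard dynamic program counts in $O(NM)$ time. A sketch, with a check that $W(\cdot|\mathbf{x})$ sums to one:

```python
import itertools
from math import isclose

def num_deletion_patterns(x, y):
    """Number of distinct keep-sets of positions of x whose kept
    symbols spell y (embeddings of y as a subsequence of x)."""
    m = len(y)
    dp = [0] * (m + 1)  # dp[j] = embeddings of y[:j] in the scanned prefix of x
    dp[0] = 1
    for xi in x:
        for j in range(m, 0, -1):  # reverse order: each xi used at most once
            if xi == y[j - 1]:
                dp[j] += dp[j - 1]
    return dp[m]

def W(y, x, delta):
    """Deletion channel law: #patterns * (1-delta)^M * delta^(N-M)."""
    return (num_deletion_patterns(x, y)
            * (1 - delta) ** len(y) * delta ** (len(x) - len(y)))

x = (0, 1, 0)
total = sum(W(y, x, 0.3)
            for m in range(len(x) + 1)
            for y in itertools.product((0, 1), repeat=m))
print(num_deletion_patterns(x, (0,)))  # 2
print(isclose(total, 1.0))             # True: W(. | x) is a distribution
```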
\subsection{Trellis Definition} An $N$-section \emph{trellis} $\mathcal{T}$ is a labeled weighted directed graph $(\mathcal{V},\mathcal{E})$. We assume that $\mathcal{V}$ can be partitioned into $N+1$ disjoint sets $\mathcal{V}_0,\ldots,\mathcal{V}_N$: \[ \mathcal{V} = \mathcal{V}_0 \mathbin{\mathaccent\cdot\cup} \mathcal{V}_1 \mathbin{\mathaccent\cdot\cup} \cdots \mathbin{\mathaccent\cdot\cup} \mathcal{V}_{N-1} \mathbin{\mathaccent\cdot\cup} \mathcal{V}_N \; , \] where $\mathbin{\mathaccent\cdot\cup}$ denotes a disjoint union. For channels with memory, $\mathcal{V}_j$ represents the set of possible channel states after $j$ channel inputs. Similarly, the edge set $\mathcal{E}$ is arranged into a sequence of $N$ disjoint sets: \[ \mathcal{E} = \mathcal{E}_1 \mathbin{\mathaccent\cdot\cup} \mathcal{E}_2 \mathbin{\mathaccent\cdot\cup} \cdots \mathbin{\mathaccent\cdot\cup} \mathcal{E}_{N-1} \mathbin{\mathaccent\cdot\cup} \mathcal{E}_N \; . \] An edge in $\mathcal{E}_j$ connects a vertex in $\mathcal{V}_{j-1}$ to a vertex in $\mathcal{V}_j$. We define $\sigma(e)$ and $\tau(e)$ to be the starting and terminating vertices of edge $e$. Thus, for $e = u \to v$, we have $\sigma(e) = u$ and $\tau(e) = v$. Then, \[ \mbox{$e \in \mathcal{E}_j$ implies $\sigma(e) \in \mathcal{V}_{j-1}$ and $\tau(e) \in \mathcal{V}_j$} \; . \] A \emph{trellis section} comprises two adjacent sets of vertices along with the edges that connect them. That is, for $1 \leq j \leq N$, section $j$ comprises vertex sets $\mathcal{V}_{j-1}$ and $\mathcal{V}_j$, as well as edge set $\mathcal{E}_j$. See Fig.~\ref{fig:deletion_trellis} for an example of a trellis with $4$ sections. Each edge $e\in \mathcal{E}$ has a weight $w(e)\in [0,1]$ and a label $\ell (e)\in\mathcal{X}$.
We also assume that $\mathcal{V}_0$ and $\mathcal{V}_N$ have weight functions, \[ q:\mathcal{V}_0 \to [0,1] \quad \mbox{and} \quad r : \mathcal{V}_N \to [0,1] \; , \] that are associated with the initial and final states. A path through a trellis is a sequence of $N$ edges, $e_1,e_2,\ldots,e_N$, which starts at a vertex in $\mathcal{V}_0$ and ends at a vertex in $\mathcal{V}_N$. Namely, $\sigma(e_1) \in \mathcal{V}_0$, $\tau(e_N) \in \mathcal{V}_N$, and for each $1 \leq j \leq N-1$, we have $\tau(e_j) = \sigma(e_{j+1})$. The weight of a path through the trellis is defined as the product of the weights of the edges on the path times the weights of the initial and final vertices. Namely, the weight of the above path is \[ q(\sigma(e_1)) \cdot r(\tau(e_N)) \times \prod_{j=1}^N w(e_j) \; . \] Thus, an $N$-section trellis naturally defines a \emph{path-sum} function $T\colon \mathcal{X}^N \to \mathbb{R}$, where $T(\mathbf{x})$ equals the sum of the path weights over all paths whose length-$N$ label sequences match $\mathbf{x}$. That is, \begin{multlinecc} \label{eq:T} T(\mathbf{x}) \triangleq \sum_{\substack{e_1 \in \mathcal{E}_1, \\ \ell(e_1) = x_1}} \; \sum_{\substack{e_2 \in \mathcal{E}_2, \\ \ell(e_2) = x_2}} \cdots \sum_{\substack{e_N \in \mathcal{E}_N, \\ \ell(e_N) = x_N }} q(\sigma(e_1)) \; r(\tau(e_N)) \\ \times \prod_{j=1}^N w(e_j) \times \prod_{j=1}^{N-1} [\tau(e_j)=\sigma(e_{j+1})] \; . \end{multlinecc} \subsection{FAIM processes} \label{subsec:FAIM} In later parts of this paper, for simplicity, we will often introduce key ideas by first framing them in the context of the uniform input distribution. That is, we first consider the case in which the input distribution is i.i.d.\ Bernoulli $1/2$.
However, the uniform input distribution, or indeed any i.i.d.\ input distribution, is known to generally be sub-optimal with respect to the information rate between input and output when transmitting over a deletion channel~\cite{Mitzenmacher_2009,Rahmati_2015,Castiglione_2015,Cheraghchi_2019}. Thus, we stand to benefit by considering a larger class of input distributions. To this end, let $\mathcal{S}$ be a given finite set. Each element of $\mathcal{S}$ is a state of an input process. In the following\footnote{The definition of FAIM and FAIM-derived processes here is a specialization of the definition given in \cite{Shuval_Tal_Memory_2017}. Here, we are interested in FAIM-derived (i.e., hidden-Markov) input processes. However, the input-output process of a deletion channel is neither FAIM nor FAIM-derived.} definition, we have for all $j \in \mathbb{Z}$ that $S_j \in \mathcal{S}$ and $X_j \in \mathcal{X}$. \begin{defi}[FAIM process] \label{def:FAIM} A strictly stationary process $(S_j,X_j)$, $j \in \mathbb{Z}$, is called a \emph{finite-state, aperiodic, irreducible, Markov} (FAIM) process if, for all $j$, \begin{equation} \label{eq_markov property of FAIM} P_{S_j, X_j| S_{-\infty}^{j-1}, X_{-\infty}^{j-1}} = P_{S_j, X_j | S_{j-1}} \end{equation} is independent of $j$, and the sequence $(S_j)_{j \in \mathbb{Z}}$ is a finite-state Markov chain that is stationary, irreducible, and aperiodic. \end{defi} For a FAIM process, consider the sequence $X_j$, for $j \in \mathbb{Z}$. In principle, the distribution of this sequence can be computed by marginalizing the states of the FAIM process $(S_j,X_j)$. Such a sequence is typically called a \emph{hidden-Markov process}. In this paper, we sometimes add the term \emph{regular} to emphasize that the hidden state process is a regular Markov chain. Let us now connect the concept of a FAIM process to that of a trellis. Let a FAIM process $(X_j,S_j)$ be given, and fix $N \geq 1$.
We now define the corresponding trellis, having $N$ stages. The vertex set is $\mathcal{V} = \mathcal{V}_0 \mathbin{\mathaccent\cdot\cup} \mathcal{V}_1 \mathbin{\mathaccent\cdot\cup} \cdots \mathbin{\mathaccent\cdot\cup} \mathcal{V}_N$, where we define \[ \mathcal{V}_j = \{s_j : s \in \mathcal{S} \} \] for $0 \leq j \leq N$ so that each $\mathcal{V}_j$ contains a distinct copy of $\mathcal{S}$. For each $x \in \mathcal{X}$, $1 \leq j \leq N$, $\alpha_{j-1} \in \mathcal{V}_{j-1}$, and $\beta_{j} \in \mathcal{V}_j$, define an edge $e$ from $\alpha_{j-1}$ to $\beta_{j}$ with label $\ell(e) = x$ and weight $w(e) = P_{S_j,X_j|S_{j-1}}(\beta,x|\alpha)$. Lastly, for all $\alpha_{0} \in \mathcal{V}_0$ define $q(\alpha_{0}) = \pi(\alpha)$, where $\pi(\alpha)$ is the stationary probability of state $\alpha$ in the Markov process $(S_j)_{j \in \mathbb{Z}}$, and define $r(\beta_{N}) = 1$ for all $\beta_{N} \in \mathcal{V}_N$. It follows that the probability of $(X_1,X_2,\ldots,X_N) = (x_1,x_2,\ldots,x_N) = \mathbf{x}$ equals $T(\mathbf{x})$, where $T$ was defined in (\ref{eq:T}). \section{Trellis representation of joint probability}\label{sec:TrellisBasics} We have just seen that a trellis is instrumental in compactly representing a hidden-Markov input distribution. In fact, it is much more versatile than this. Namely, we will now show how a trellis can be used to represent the \emph{joint} distribution of a hidden-Markov input process and the channel output. \subsection{Trellis for uniform input} This trellis representation for the deletion channel can also be found in~\cite{Davey_2001}. As previously explained, it is generally beneficial to use an input distribution with memory. However, for the sake of easy exposition, we will first consider the simplest possible input distribution, a uniform input distribution (i.e., i.i.d.\ and Bernoulli $1/2$). The trellis representation will be used on the decoder side.
Thus, when building the trellis, we will have already received the output vector $\mathbf{y}$. Hence, the primary role of the trellis is to evaluate the probabilities associated with possible input vectors $\mathbf{x}$, of length $N$. That is, the trellis will be used to calculate the joint probability of $\mathbf{x}$ and $\mathbf{y}$, denoted $P_\mathbf{X} (\mathbf{x}) \cdot W(\mathbf{y}|\mathbf{x})$, for $\mathbf{y}$ fixed. Recall that $W(\mathbf{y}|\mathbf{x})$ is the deletion channel law, and in this subsection $P_\mathbf{X}$ is the uniform input distribution. We will shortly define the concept of a valid path in the trellis. Each valid path will correspond to a specific transmitted $\mathbf{x}$ and a specific deletion pattern that is compatible with the received $\mathbf{y}$ (see Fig.~\ref{fig:deletion_trellis}). We term this trellis the \emph{base trellis}, as we will ultimately construct other trellises derived from it. \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.25,dot/.style={draw,circle,minimum size=1mm,inner sep=0pt,outer sep=0pt,fill=black}, >=latex] \pgfmathsetmacro{\xlim}{5} \pgfmathsetmacro{\ylim}{4} \pgfmathsetmacro{\xxlim}{\xlim-1} \pgfmathsetmacro{\yylim}{\ylim-1} \node[draw,circle,minimum size=2.5mm,inner sep=0pt,outer sep=0pt] (a) at (1,\ylim) {}; \node[draw,circle,minimum size=2.5mm,inner sep=0pt,outer sep=0pt] (a) at (\xlim,1) {}; \foreach \i in {0,...,\yylim} \node (i\i) at (0.65,\ylim-\i) {\scriptsize$i\!=\!\i$}; \foreach \i in {1,...,\yylim} \node (y\i) at (0.25,\ylim-\i+0.5) {$y_\i \!=\!
\StrChar{011}{\i}$}; \foreach \j in {0,...,\xxlim} \node (j\j) at (\j+1,\ylim+0.30) {\scriptsize$j\!=\!\j$}; \foreach \j in {1,...,\xxlim} \node (x\j) at (\j+0.5,\ylim+.7) {$x_\j$}; \node (cx) at (0.5,\ylim+0.7) {$x_j$}; \node (cy) at (0,\ylim+0.25) {$y_i$}; \draw (0,\ylim+0.7) -- (0.5, \ylim+0.25); \foreach \i in {0,...,\yylim} \foreach \j in {0,...,\xxlim} \node [dot] (\j-\i) at (\j+1,\ylim-\i) {}; \pgfmathsetmacro{\xxxlim}{\xlim-2} \pgfmathsetmacro{\yyylim}{\ylim-2} \foreach \i in {0,...,\yylim} \foreach \j in {0,...,\xxxlim} { \pgfmathtruncatemacro{\ii}{\i+1} \pgfmathtruncatemacro{\jj}{\j+1} \ifnum \i > \j \draw[->,blue,dashed,thick] (\j-\i) to [in=150,out=30] (\jj-\i); \draw[->,red,dashed,thick] (\j-\i) to [in=210,out=330] (\jj-\i); \else \ifnum \j > \i+\xlim-\ylim \draw[->,blue,dashed,thick] (\j-\i) to [in=150,out=30] (\jj-\i); \draw[->,red,dashed,thick] (\j-\i) to [in=210,out=330] (\jj-\i); \else \draw[->,blue,thick] (\j-\i) to [in=150,out=30] (\jj-\i); \draw[->,red,thick] (\j-\i) to [in=210,out=330] (\jj-\i); \fi \fi } \pgfmathsetmacro{\xxxlim}{\xlim-2} \pgfmathsetmacro{\yyylim}{\ylim-2} \foreach \i in {0,...,\yyylim} \foreach \j in {0,...,\xxxlim} { \pgfmathtruncatemacro{\ii}{\i+1} \pgfmathtruncatemacro{\jj}{\j+1} \pgfmathtruncatemacro{\iit}{\i+\xlim-\ylim} \ifnum \i > \j \StrChar{011}{\ii}[\temp] \ifthenelse{\equal{\temp}{0}} {\draw[->,thick,dashed,blue] (\j-\i) -- (\jj-\ii);} {\draw[->,thick,dashed,red] (\j-\i) -- (\jj-\ii);} \else \ifnum \j > \iit \StrChar{011}{\ii}[\temp] \ifthenelse{\equal{\temp}{0}} {\draw[->,thick,dashed,blue] (\j-\i) -- (\jj-\ii);} {\draw[->,thick,dashed,red] (\j-\i) -- (\jj-\ii);} \else \StrChar{011}{\ii}[\temp] \ifthenelse{\equal{\temp}{0}} {\draw[->,thick,blue] (\j-\i) -- (\jj-\ii);} {\draw[->,thick,red] (\j-\i) -- (\jj-\ii);} \fi \fi } \end{tikzpicture} \end{center} \vspace{-2.5mm} \caption{A trellis for the binary deletion channel with uniform input, a codeword length of $N=4$, and a received word $\mathbf{y}=(011)$ of length 
$M=3$. Vertices are denoted $v_{i,j}$ with $0 \leq i \leq M$ and $0 \leq j \leq N$. All blue edges have label `$0$' while all red edges have label `$1$'. The horizontal edges are weighted by the probability $\delta/2$. Diagonal edges are weighted by the probability $(1-\delta)/2$. The two circled vertices have $q(v_{0,0}) = r(v_{M,N}) = 1$, while all other vertices in $\mathcal{V}_0$ and $\mathcal{V}_N$ have $q$ and $r$ values equal to $0$, respectively. Edges that can be pruned without changing the function $T$ in (\ref{eq:T}) are dashed. \label{fig:deletion_trellis}} \end{figure} Recalling our notation, we have $\mathbf{x}$ as the unknown input vector, of known length $N$. The vector $\mathbf{y}$ is the known output, having known length $M = |\mathbf{y}|$. The deletion probability is $\delta$. The base trellis is defined as follows. \begin{defi}[Base Trellis for Uniform Input]\label{defi:symmetricTrellis} For $N$, $\delta$, $M$, and $\mathbf{y} \in \mathcal{X}^M$: \begin{enumerate} \item The vertex set $\mathcal{V}$ equals the disjoint union \[ \mathcal{V} = \mathcal{V}_0 \mathbin{\mathaccent\cdot\cup} \mathcal{V}_1 \mathbin{\mathaccent\cdot\cup} \cdots \mathbin{\mathaccent\cdot\cup} \mathcal{V}_N \; , \] where, for $0 \leq j \leq N$, \begin{equation} \label{eq:VjSymmetric} \mathcal{V}_j = \{ v_{i,j} : 0 \leq i \leq M\} \; . \end{equation} \item \label{it:vijMeaning_uniform} A path passing through vertex $v_{i,j}$ corresponds to the event where only $i$ of the first $j$ transmitted symbols were received. That is, from $x_1,x_2,\ldots,x_j$, the channel has deleted $j-i$ symbols\footnote{Note that we could have optimized our definition of $\mathcal{V}_j$. Namely, only $i$ in the range $\max\{0,M-N+j\} \leq i \leq \min\{j,M\}$ are actually consistent with the described event (i.e., only the solid edges in Figure~\ref{fig:deletion_trellis}). We leave such optimization to the practitioner and settle for the simpler description in (\ref{eq:VjSymmetric}).}. 
\item Vertices $v_{i,j}$ with $0 \leq i \leq M$ and $0 \leq j < N$ each have up to three outgoing edges: two `horizontal' edges, each corresponding to a deletion, and one `diagonal' edge, corresponding to a non-deletion. \item For $0 \leq i \leq M$ and $0 \leq j < N$, there are two edges $e,e'$ from $v_{i,j}$ to $v_{i,j+1}$. From \ref{it:vijMeaning_uniform}) above, we deduce that these two `horizontal' edges are associated with $x_{j+1}$ being deleted by the channel. The first is associated with $x_{j+1} = 0$ and has $\ell(e) = 0$, while the second is associated with $x_{j+1} = 1$ and has $\ell(e') = 1$. Since the probability of deletion is $\delta$, and in the uniform distribution $x_{j+1} = 0$ and $x_{j+1} = 1$ each occur with probability $1/2$, we set $w(e) = w(e') = \delta/2$. \item For $0 \leq i < M$ and $0 \leq j < N$, there is a single edge $e$ from $v_{i,j}$ to $v_{i+1,j+1}$. Recalling \ref{it:vijMeaning_uniform}) above, we deduce that this `diagonal' edge represents $x_{j+1}$ not being deleted, and being observed as $y_{i+1}$. Thus, $\ell(e) = y_{i+1}$. Since the probability of sending $x_{j+1}$ in the uniform case is $1/2$, regardless of its value, and the probability of a non-deletion is $1-\delta$, we set $w(e) = (1-\delta)/2$. \item We set $q(v_{0,0}) = 1$. All other vertices $v \in \mathcal{V}_0$ have $q(v) = 0$. Thus, with respect to (\ref{eq:T}), we effectively force all paths to start at $v_{0,0}$. Namely, when starting a path, no symbols have yet been transmitted, and hence no symbols have yet been received. \item We set $r(v_{M,N}) = 1$. All other vertices $v \in \mathcal{V}_N$ have $r(v) = 0$. Thus, with respect to (\ref{eq:T}), we effectively force all paths to end at $v_{M,N}$. That is, at the end of a path, $N$ symbols have been transmitted, and of these, $M$ have been received. \end{enumerate} \end{defi} In line with the definitions above, let us call a path \emph{valid} if it starts at $v_{0,0}$ and ends at $v_{M,N}$. 
For example, in Figure~\ref{fig:deletion_trellis}, valid paths are those that start at the circled vertex on the top left, end at the circled vertex on the bottom right, and hence contain only solid edges. Clearly, such a path consists of $N$ edges, $e_1,e_2,\ldots,e_N$. Denote by $\mathbf{x} = (x_1,x_2,\ldots,x_N)$ the input vector corresponding to the above path, where $x_j = \ell(e_j)$. Each such $\mathbf{x}$ is consistent with our received $\mathbf{y}$. Indeed, tracing the path, the type of each edge (horizontal or diagonal) shows exactly which of the $x_j$ to delete and which to keep in order to arrive at $\mathbf{y}$. Also, the probability of the input sequence $\mathbf{x}$ being transmitted and experiencing the above chain of deletion/no-deletion events is exactly equal to the product of the $w(e_j)$, times $q(v_{0,0}) \cdot r(v_{M,N}) = 1$. From the above discussion, one has the following key lemma. \begin{lemm} \label{lemm:baseTrellisProb} Let $\mathcal{T}$ be a trellis as described in Definition~\ref{defi:symmetricTrellis}. Then, for $\mathbf{x} \in \mathcal{X}^N$ and $T(\mathbf{x})$ as defined in (\ref{eq:T}), we have \[ T(\mathbf{x}) = P_{\mathbf{X}}(\mathbf{x}) \cdot W(\mathbf{y} | \mathbf{x}) \; , \] where $P_\mathbf{X}$ is the uniform input distribution and $W$ is the deletion channel law. \end{lemm} \begin{IEEEproof} First, we observe that the weight of a trellis path equals the joint probability of $(\mathbf{x},\mathbf{y})$ and the deletion pattern. Then, the claim follows from the fact that $T(\mathbf{x})$ sums the path weight over all paths through the trellis (i.e., all deletion patterns) consistent with the given $(\mathbf{x},\mathbf{y})$ pair. \end{IEEEproof} \subsection{Trellises for hidden-Markov inputs} As explained earlier, a trellis is used on the decoding side, in order to capture the joint probability of $\mathbf{x}$ and $\mathbf{y}$.
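Before turning to the general case, we note that Lemma~\ref{lemm:baseTrellisProb} is easy to check numerically: one can build the base trellis of Definition~\ref{defi:symmetricTrellis} and compare its path-sum against a brute-force sum over deletion patterns. The sketch below is illustrative only; the edge-list representation and all function names are ours, not part of the formal development.

```python
from itertools import combinations

def base_trellis(y, N, delta):
    """Base trellis of Definition (defi:symmetricTrellis), uniform input.

    Vertices are pairs (i, j): i of the first j sent symbols received.
    """
    M = len(y)
    sections = []
    for j in range(1, N + 1):
        edges = []  # (src, dst, label, weight)
        for i in range(M + 1):
            for b in (0, 1):
                # 'horizontal' edge: x_j = b was deleted by the channel
                edges.append(((i, j - 1), (i, j), b, delta / 2))
            if i < M:
                # 'diagonal' edge: x_j was received as y_{i+1}
                edges.append(((i, j - 1), (i + 1, j), y[i], (1 - delta) / 2))
        sections.append(edges)
    return sections, {(0, 0): 1.0}, {(M, N): 1.0}

def evaluate(sections, q, r, x):
    """Forward pass computing the path-sum T(x) of (eq:T)."""
    f = dict(q)
    for edges, xj in zip(sections, x):
        g = {}
        for s, d, lab, w in edges:
            if lab == xj and s in f:
                g[d] = g.get(d, 0.0) + f[s] * w
        f = g
    return sum(f.get(v, 0.0) * rv for v, rv in r.items())

def joint_prob(x, y, delta):
    """Brute force: P_X(x) * W(y|x), summing over all deletion patterns."""
    N, M = len(x), len(y)
    w = sum(delta ** (N - M) * (1 - delta) ** M
            for kept in combinations(range(N), M)
            if tuple(x[k] for k in kept) == tuple(y))
    return w / 2 ** N
```

On small examples, `evaluate` on this trellis agrees with `joint_prob` for every $\mathbf{x} \in \{0,1\}^N$, in line with the statement of the lemma.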
We now show how such a trellis is built for the more general case in which $\mathbf{x}$ is drawn from a regular hidden-Markov input process. Intuitively, this is done by simply ``multiplying'' the trellis corresponding to the input distribution, as described at the end of Section~\ref{sec:background}, with the trellis defined for the uniform case (with the correction that the edge weights $\delta/2$ and $(1-\delta)/2$ are replaced by $\delta$ and $1-\delta$, respectively). A formal definition follows. \begin{defi}[Base Trellis for Hidden-Markov Input]\label{defi:FAIMTrellis} For $N$, $\delta$, $M$, $\mathcal{S}$, $P_{S_j,X_j|S_{j-1}}$, $\pi$, and $\mathbf{y} \in \mathcal{X}^M$: \begin{enumerate} \item The vertex set $\mathcal{V}$ equals the disjoint union \[ \mathcal{V} = \mathcal{V}_0 \mathbin{\mathaccent\cdot\cup} \mathcal{V}_1 \mathbin{\mathaccent\cdot\cup} \cdots \mathbin{\mathaccent\cdot\cup} \mathcal{V}_N \; , \] where, for $0 \leq j \leq N$, \begin{equation} \label{eq:sijFAIM} \mathcal{V}_j = \{ s_{i,j} : 0 \leq i \leq M \; , \; s \in \mathcal{S} \} \; . \end{equation} Thus, $|\mathcal{V}_j| = (M+1)\cdot |\mathcal{S}|$. \item \label{it:vijMeaning} A path passes through vertex $s_{i,j}$ if exactly $i$ of the first $j$ transmitted symbols are not deleted and the state of the input process is $s\in \mathcal{S}$ after the $j$-th input (i.e., $S_j = s$). \item Vertices $s_{i,j}$ with $0 \leq i \leq M$, $0 \leq j < N$, and $s \in \mathcal{S}$ each have up to $3 \cdot |\mathcal{S}|$ outgoing edges. \item For $0 \leq i \leq M$, $0 \leq j < N$, and $\alpha, \beta \in \mathcal{S}$, there are two edges $e,e'$ from $\alpha_{i,j}$ to $\beta_{i,j+1}$. From item \ref{it:vijMeaning}, we deduce that these two `horizontal' edges are associated with $x_{j+1}$ being deleted by the channel. The first is associated with $x_{j+1} = 0$ and has $\ell(e) = 0$, while the second is associated with $x_{j+1} = 1$ and has $\ell(e') = 1$. 
Recalling that by stationarity $P_{S_{j+1},X_{j+1}|S_j} = P_{S_j,X_j|S_{j-1}}$, we set \begin{equation} \label{eq:horizontalZero} w(e) = \delta \cdot P_{S_j,X_j|S_{j-1}}(\beta,0|\alpha) \end{equation} and \begin{equation} \label{eq:horizontalOne} w(e') = \delta \cdot P_{S_j,X_j|S_{j-1}}(\beta,1|\alpha) \; . \end{equation} That is, the probability of a deletion, times the probability implied by the underlying FAIM distribution. \item For $0 \leq i < M$, $0 \leq j < N$, and $\alpha,\beta \in \mathcal{S}$, there is a single edge $e$ from $\alpha_{i,j}$ to $\beta_{i+1,j+1}$. Recalling item \ref{it:vijMeaning} above, we deduce that this `diagonal' edge represents $x_{j+1}$ being observed (i.e., not deleted) as $y_{i+1}$. Thus, $\ell(e) = y_{i+1}$. We set \[ w(e) = (1-\delta) \cdot P_{S_j,X_j|S_{j-1}}(\beta,y_{i+1}|\alpha) \; . \] That is, the probability of a non-deletion, times the probability implied by the underlying FAIM distribution\footnote{As in the uniform case, we have opted for simplicity of exposition over reduced algorithmic complexity. That is, as in the uniform case, we can take the index $i$ in (\ref{eq:sijFAIM}) to have range $\max\{0,M-N+j\} \leq i \leq \min\{j,M\}$. Also, edges $e$ with probability $w(e)=0$ can be removed from the trellis.}. \item For all $s_{0,0} \in \mathcal{V}_0$, where $s \in \mathcal{S}$, we set $q(s_{0,0}) = \pi(s)$. All other vertices $v \in \mathcal{V}_0$ have $q(v)=0$. Thus, with respect to (\ref{eq:T}), we effectively force all paths to start at a vertex $s_{0,0}$, where $s \in \mathcal{S}$. Namely, when starting a path, no symbols have yet been transmitted, and hence no symbols have yet been received. Moreover, the probability of starting the path at $s_{0,0}$ is $\pi(s)$, the stationary probability of $s$ in the FAIM input process. \item For all $s_{M,N} \in \mathcal{V}_N$, we set $r(s_{M,N}) = 1$. All other vertices $v \in \mathcal{V}_N$ have $r(v) = 0$.
Thus, with respect to (\ref{eq:T}), we effectively force all paths to end at a vertex $s_{M,N}$. That is, at the end of a path, $N$ symbols have been transmitted, and of these, $M$ have been received. \end{enumerate} \end{defi} As in the uniform case, we have the following lemma, which is easily proved. \begin{lemm} \label{lemm:baseFAIMTrellisProb} Let $\mathcal{T}$ be a trellis as per Definition~\ref{defi:FAIMTrellis}. Then, for $\mathbf{x} \in \mathcal{X}^N$ and $T(\mathbf{x})$ as defined in (\ref{eq:T}), \[ T(\mathbf{x}) = P_{\mathbf{X}}(\mathbf{x}) \cdot W(\mathbf{y} | \mathbf{x}) \; , \] where $P_\mathbf{X}$ is the hidden-Markov input distribution and $W$ is the deletion channel law. \end{lemm} \begin{IEEEproof} First, we observe that the weight of a trellis path equals the joint probability of $(\mathbf{x},\mathbf{y})$ and the deletion pattern. Then, the claim follows from the fact that $T(\mathbf{x})$ sums the path weight over all paths through the trellis (i.e., all deletion patterns) consistent with the given $(\mathbf{x},\mathbf{y})$ pair. \end{IEEEproof} \subsection{Trellis for the trimmed deletion channel} \label{subsec:TrellisTDC} For reasons that will shortly become clear, we will now consider a slight variation of the deletion channel. Namely, we now define the trimmed deletion channel (TDC). A TDC is a deletion channel that, after the deletion process, trims its output of leading and trailing `$0$' symbols. Thus, by definition, the output of a TDC is either an empty string, or a string that starts and ends with a `$1$' symbol. We now show how to alter Definition~\ref{defi:FAIMTrellis} in order to account for this variation. The change turns out to be minimal. 
\begin{defi}[Base Trellis for Hidden-Markov Input and TDC]\label{defi:FAIMTDCTrellis} For $N$, $\delta$, $M$, $\mathcal{S}$, $P_{S_j,X_j|S_{j-1}}$, $\pi$, and trimmed output $\mathbf{y}^* \in \mathcal{X}^M$, define the trellis $\mathcal{T}$ as in Definition~\ref{defi:FAIMTrellis}, but with the following changes. \begin{itemize} \item The probability of an edge $e$ from $\alpha_{0,j}$ to $\beta_{0,j+1}$ with $\ell(e) = 0$ must be changed to $w(e) = P_{S_j,X_j|S_{j-1}}(\beta,0|\alpha)$. Namely, the $\delta$ factor in (\ref{eq:horizontalZero}) is removed. In short, if the path is currently at vertex $\alpha_{0,j}$, then none of the $j$ symbols $x_1,x_2,\ldots,x_j$ have made it to the output of the channel (they have either been deleted or trimmed). Thus, if $x_{j+1} = 0$, it will surely be either deleted, or else trimmed. \item The probability of an edge $e$ from $\alpha_{M,j}$ to $\beta_{M,j+1}$ with $\ell(e) = 0$ must be changed to $w(e) = P_{S_j,X_j|S_{j-1}}(\beta,0|\alpha)$. Namely, the $\delta$ factor in (\ref{eq:horizontalZero}) is removed. Note that the exact same reasoning from the previous point applies; the only difference is that now we are correcting for the trimming of the trailing `$0$' symbols. \end{itemize} \end{defi} The result of the above altered trellis definition is the following lemma. \begin{lemm} \label{lemm:baseFAIMTDCTrellisProb} Let $\mathcal{T}$ be a trellis as described in Definition~\ref{defi:FAIMTDCTrellis}. Then, for $\mathbf{x} \in \mathcal{X}^N$ and $T(\mathbf{x})$ as defined in (\ref{eq:T}), \[ T(\mathbf{x}) = P_{\mathbf{X}}(\mathbf{x}) \cdot W^*(\mathbf{y}^* | \mathbf{x}) \; , \] where $P_\mathbf{X}$ is the hidden-Markov input distribution and $W^*$ is the law of the TDC. \end{lemm} \begin{IEEEproof} First, we observe that the weight of a trellis path equals the joint probability of $(\mathbf{x},\mathbf{y}^*)$ and the deletion/trimming event associated with that path. 
Then, the claim follows from the fact that $T(\mathbf{x})$ sums the path weight over all paths through the trellis (i.e., all deletion/trimming events) consistent with the given $(\mathbf{x},\mathbf{y}^*)$ pair. \end{IEEEproof} \section{Polarization operations on a trellis}\label{sec:TrellisPolarization} Polar plus and minus transforms for channels with memory were first presented in~\cite{Wang_2014,Wang_2015}. Let an input distribution on $\mathcal{X}^N$ be given, for $N$ even. For this input distribution and a vector channel with input $\mathbf{x} \in \mathcal{X}^N$ and output $\mathbf{y}$, let $\mathcal{T}$ be a trellis with $N$ sections whose path-sum function satisfies \begin{equation} \label{eq:generalPathSumFunction} T(\mathbf{x}) = \Pr(\mathbf{Y} = \mathbf{y} , \mathbf{X}=\mathbf{x}) \; . \end{equation} \subsection{Minus transform} For a given path-sum function $T(\mathbf{x})$, where $\mathbf{x} \in \mathcal{X}^N$, the polar \emph{minus transform} defines a new path-sum function $T^{[0]} (\mathbf{z})$, $\mathbf{z} \in \mathcal{X}^{N/2}$. Specifically, $T^{[0]}(\mathbf{z})$ is the marginalization of $T(\mathbf{x})$ over all $\mathbf{x}$ vectors satisfying \[ \mathbf{z} = \mathbf{x}^{[0]} = (x_1 \oplus x_2,\ldots,x_{N-1}\oplus x_N) \; . \] That is, \begin{IEEEeqnarray}{rCl} T^{[0]} (\mathbf{z}) & \triangleq & \sum_{\mathbf{x} \in \mathcal{X}^N: \mathbf{x}^{[0]} = \mathbf{z}} T(\mathbf{x}) \label{eq:minusTransformPathSum} \\ &=& \sum_{\mathbf{x} \in \mathcal{X}^{N} } T(\mathbf{x}) \prod_{j=1}^{N/2} [x_{2j-1}\oplus x_{2j} = z_j] \IEEEnonumber \\ & = & \Pr(\mathbf{Y} = \mathbf{y} , \mathbf{X}^{[0]} = \mathbf{z}) \IEEEnonumber \; , \end{IEEEeqnarray} where the last equality follows under the assumption of (\ref{eq:generalPathSumFunction}). Due to the local nature of this reparameterization, there is a modified trellis $\mathcal{T}^{[0]}$ with $N/2$ sections that represents the new path-sum function.
\begin{defi}[Minus Transform] \label{def:minus_transform} Let $\mathcal{T} = \mathcal{T}(\mathcal{V}, \mathcal{E}, w, \ell, q, r)$ be a length-$N$ trellis, where $N$ is even. The trellis $\tilde{\mathcal{T}} = \tilde{\mathcal{T}}(\tilde{\mathcal{V}}, \tilde{\mathcal{E}}, \tilde{w}, \tilde{\ell},\tilde{q}, \tilde{r}) = \mathcal{T}^{[0]}$ is defined as follows. \begin{itemize} \item The vertex set of $\tilde{\mathcal{T}}$ is \[ \tilde{\mathcal{V}} = \tilde{\mathcal{V}}_0 \mathbin{\mathaccent\cdot\cup} \tilde{\mathcal{V}}_1 \mathbin{\mathaccent\cdot\cup} \cdots \mathbin{\mathaccent\cdot\cup} \tilde{\mathcal{V}}_{N/2} \; , \] where \[ \tilde{\mathcal{V}}_j = \mathcal{V}_{2j} \; . \] \item We next define the edge set $\tilde{\mathcal{E}}$ implicitly. Consider an edge $\tilde{e} = \alpha \to \gamma \in \tilde{\mathcal{E}}$ in section $j$ of $\tilde{\mathcal{T}}$ with label $\tilde{\ell}(\tilde{e}) = z$. Then, \[ \alpha \in \tilde{\mathcal{V}}_{j-1} = \mathcal{V}_{2j-2} \quad \mbox{and} \quad \gamma \in \tilde{\mathcal{V}}_j = \mathcal{V}_{2j} \; . \] The weight $\tilde{w}(\tilde{e})$ of this edge equals the sum of the product of the edge weights along each two-step path $\alpha \xrightarrow{e_1} \beta \xrightarrow{e_2} \gamma$ in $\mathcal{T}$ with $\ell(e_1) \oplus \ell(e_2) = z$. That is, \begin{aligncc*} \tilde{w} (\tilde{e}) = \onlydouble{&} \sum_{\substack{e_1 \in \mathcal{E}_{2j-1} : \\ \sigma(e_1)=\alpha}} \;\; \sum_{\substack{e_2 \in \mathcal{E}_{2j} : \\ \tau(e_2)=\gamma} } w(e_1) \, w(e_2) \\ \onlydouble{& \quad \quad} \times [\tau(e_1)=\sigma(e_2)] \cdot [\ell(e_1)\oplus \ell(e_2) = z]. \end{aligncc*} Edges with weight $0$ may be removed from $\tilde{\mathcal{T}}$. \item The minus transform does not alter the initial and final vertex sets, since $\tilde{\mathcal{V}}_0 = \mathcal{V}_0$ and $\tilde{\mathcal{V}}_{N/2} = \mathcal{V}_N$. Accordingly, we set $\tilde{q}(s) = q(s)$ and $\tilde{r}(s) = r(s)$.
\end{itemize} \end{defi} The following lemma states that applying a minus transform to a trellis indeed results in a trellis whose corresponding path-sum function is the minus transform of the path-sum function of the initial trellis. \begin{lemm} \label{lemm:minusTransform} Let $\mathcal{T}$ be a trellis with $N$ sections, where $N$ is even. Denote the minus transform of $\mathcal{T}$ by $\mathcal{T}' = \mathcal{T}^{[0]}$ per Definition~\ref{def:minus_transform}. Let $T$ and $T'$ be the path-sum functions corresponding to $\mathcal{T}$ and $\mathcal{T}'$, respectively, as defined in (\ref{eq:T}). Then, $T'$ equals $T^{[0]}$ as defined in (\ref{eq:minusTransformPathSum}). \end{lemm} \begin{IEEEproof} This follows from the fact that the minus trellis is constructed by merging adjacent trellis stages and then combining paths according to their $\mathbf{x}^{[0]}$ values. Finally, the new paths are relabeled by their $\mathbf{x}^{[0]}$ values. \end{IEEEproof} \subsection{Plus transform} For a given path-sum function $T(\mathbf{x})$, where $\mathbf{x} \in \mathcal{X}^N$, the polar \emph{plus transform} defines a new path-sum function $T^{[1]} (\mathbf{z}')$, $\mathbf{z}' \in \mathcal{X}^{N/2}$. This definition is always with respect to a vector $\mathbf{z} \in \mathcal{X}^{N/2}$, which is assumed to be fixed. Specifically, $T^{[1]}(\mathbf{z}')$ equals $T(\mathbf{x})$, where $\mathbf{x}$ is the unique vector satisfying \begin{IEEEeqnarray*}{rCl} \mathbf{z} &=& \mathbf{x}^{[0]} = (x_1 \oplus x_2,\ldots,x_{N-1}\oplus x_N) \quad \mbox{and} \\ \mathbf{z}' &=& \mathbf{x}^{[1]}=(x_2,x_4,\ldots,x_N) \; .
\end{IEEEeqnarray*} That is, \begin{IEEEeqnarray}{rCl} T^{[1]} (\mathbf{z}') & \triangleq & T(\mathbf{x}) \big|_{\mathbf{x}:\mathbf{x}^{[0]} = \mathbf{z}, \mathbf{x}^{[1]} = \mathbf{z}'} \label{eq:plusTransformPathSum} \\ & = & \sum_{\mathbf{x} \in \mathcal{X}^{N} } T(\mathbf{x}) \prod_{j=1}^{N/2} [x_{2j-1} \oplus x_{2j} = z_j] \cdot [x_{2j} = z'_j] \IEEEnonumber \\ & = &\Pr(\mathbf{Y} = \mathbf{y} , \mathbf{X}^{[0]} = \mathbf{z}, \mathbf{X}^{[1]} = \mathbf{z}') \; , \IEEEnonumber \end{IEEEeqnarray} where the last equality follows under the assumption of (\ref{eq:generalPathSumFunction}). As with the minus transform, there is a corresponding operation one can apply to the underlying trellis, which we now detail. Note that the plus transform of a trellis is defined with respect to a fixed vector $\mathbf{z}$, which may be left implicit when it is clear from the context. \begin{defi}[Plus Transform] \label{def:plus_transform} Let $\mathcal{T} = \mathcal{T}(\mathcal{V}, \mathcal{E}, w, \ell, q, r)$ be a length-$N$ trellis, where $N$ is even, and let $\mathbf{z} \in \mathcal{X}^{N/2}$ be given. The trellis $\tilde{\mathcal{T}} = \tilde{\mathcal{T}}(\tilde{\mathcal{V}}, \tilde{\mathcal{E}}, \tilde{w}, \tilde{\ell},\tilde{q}, \tilde{r}) = \mathcal{T}^{[1]}$ is defined as follows. \begin{itemize} \item The vertex set of $\tilde{\mathcal{T}}$ is the same as that of the minus trellis $\mathcal{T}^{[0]}$. This is also the case for the functions $\tilde{q}$ and $\tilde{r}$. \item We next define the edge set $\tilde{\mathcal{E}}$ implicitly. Consider an edge $\tilde{e} = \alpha \to \gamma \in \tilde{\mathcal{E}}$ in section $j$ of $\tilde{\mathcal{T}}$ with label $\tilde{\ell}(\tilde{e}) = z'$. Then, \[ \alpha \in \tilde{\mathcal{V}}_{j-1} = \mathcal{V}_{2j-2} \quad \mbox{and} \quad \gamma \in \tilde{\mathcal{V}}_j = \mathcal{V}_{2j} \; .
\] The weight $\tilde{w}(\tilde{e})$ of this edge equals the sum of the product of the edge weights along each two-step path $\alpha \xrightarrow{e_1} \beta \xrightarrow{e_2} \gamma$ in $\mathcal{T}$ with $\ell(e_1) \oplus \ell(e_2) = z_j$ and $\ell(e_2)=z'$. That is, \begin{aligncc*} \tilde{w} (\tilde{e}) = \onlydouble{&} \sum_{\substack{e_1 \in \mathcal{E}_{2j-1} : \\ \sigma(e_1)=\alpha}} \;\; \sum_{\substack{e_2 \in \mathcal{E}_{2j} : \\ \tau(e_2)=\gamma} } w(e_1) \, w(e_2) \\ \onlydouble{& \!\!\!} \times [\tau(e_1)=\sigma(e_2)] \cdot [\ell(e_1)\oplus z' = z_j] \cdot [\ell(e_2) = z'] \; . \end{aligncc*} Edges with weight $0$ may be removed from $\tilde{\mathcal{T}}$. \end{itemize} \end{defi} The following lemma states the key property of the plus transform. \begin{lemm} \label{lemm:plusTransform} Let $\mathcal{T}$ be a trellis with $N$ sections, where $N$ is even, and let $\mathbf{z} \in \mathcal{X}^{N/2}$ be given. Denote the plus transform of $\mathcal{T}$ by $\mathcal{T}' = \mathcal{T}^{[1]}$ per Definition~\ref{def:plus_transform}. Let $T$ and $T'$ be the path-sum functions corresponding to $\mathcal{T}$ and $\mathcal{T}'$, respectively, as defined in (\ref{eq:T}). Then, $T'$ equals $T^{[1]}$ as defined in (\ref{eq:plusTransformPathSum}). \end{lemm} \begin{IEEEproof} This follows from the fact that the plus trellis is constructed by merging adjacent trellis stages and then pruning paths that do not satisfy $\mathbf{x}^{[0]}=\mathbf{z}$. Finally, the remaining paths are relabeled with their $\mathbf{x}^{[1]}$ values. \end{IEEEproof} \subsection{Successive cancellation decoding} As in Ar\i{}kan's seminal paper~\cite{Arikan_2009}, the transforms defined above lead to an SC decoding algorithm. In brief, given $\mathbf{y}$ we first construct a base trellis $\mathcal{T}$. Then, there is a recursive decoder that, given $\mathcal{T}^{[b_1, b_2, \ldots, b_\lambda]}$, constructs $\mathcal{T}^{[b_1, b_2, \ldots, b_\lambda, 0]}$ and calls itself with that argument.
When this returns the decoded $\vecxbrbr{b_1, b_2, \ldots, b_\lambda, 0}$, it then builds $\mathcal{T}^{[b_1, b_2, \ldots, b_\lambda, 1]}$ with respect to those hard decisions and calls itself to decode $\vecxbrbr{b_1, b_2, \ldots, b_\lambda, 1}$. Then, the two decoded vectors are combined to form $\vecxbrbr{b_1, b_2, \ldots, b_\lambda}$ and the function returns. The following lemma makes this precise. \begin{lemm} \label{lemm:recursiveT} Let $\mathcal{T}$ be a base trellis with $N= 2^n$ sections corresponding to a received word $\mathbf{y}$ such that (\ref{eq:generalPathSumFunction}) holds for the corresponding path-sum function. For each $i \in [N]$ in order, let $\hat{u}_1^{i-1}$ be a vector of past decisions and $b_1,b_2,\ldots,b_n \in \{0,1\}$ satisfy $i(\mathbf{b}) = i$. Construct $\mathcal{T}^{[b_1,b_2,\ldots,b_{n}]}$ iteratively as follows. For $\lambda = 1,2, \ldots, n$, let us define \vspace{0.25cm} \begin{equation*} \mathcal{T}^{[b_1,b_2,\ldots,b_{\lambda}]} \triangleq \begin{cases} (\mathcal{T}^{[b_1,b_2,\ldots,b_{\lambda-1}]})^{[b_\lambda]} & \mbox{if $\lambda \geq 2$} \; , \\ \mathcal{T}^{[b_1]} & \mbox{if } \lambda = 1. \end{cases} \vspace{0.25cm} \end{equation*} If $b_\lambda = 1$, then we apply the plus transform with respect to the fixed vector \begin{equation} \label{eq:veczRecursiveT} \mathbf{z} = \mathcal{A}_{n-\lambda}^{-1}\left(\hat{u}_{\tau}^{\theta}\right) \; , \end{equation} where $\hat{u}_{\tau}^{\theta} \triangleq \left(\hat{u}_{\tau},\hat{u}_{\tau+1},\ldots,\hat{u}_{\theta}\right)$ and \begin{equation} \label{eq:thetaTau} \theta = \sum_{j=1}^\lambda b_j 2^{n-j} \; , \quad \tau = \theta - 2^{n-\lambda} + 1 \; . \end{equation} Then, for $\mathbf{U} = \mathcal{A}_n(\mathbf{X})\in \mathcal{X}^N$, we have \[ T^{[b_1,b_2,\ldots,b_{n}]}(u) = \Pr(U_i = u, U_1^{i-1} = \hat{u}_1^{i-1}, \mathbf{Y} = \mathbf{y}) \; . \] \end{lemm} \begin{IEEEproof} To facilitate a proof by induction, we actually prove a stronger claim. 
Namely, let $0 \leq \lambda \leq n$ be given. Define $\mathbf{b}_\lambda$ as the vector in $\{0,1\}^n$ whose first $\lambda$ entries equal those of $\mathbf{b}$, while the remaining entries are all-zero. That is, \begin{equation} \label{eq:blambda} \mathbf{b}_\lambda = (b_1,b_2,\ldots,b_\lambda,0,0,\ldots,0) \; . \end{equation} Recalling the notation in (\ref{eq:vecxZero})--(\ref{eq:bitReversedI}) and (\ref{eq:xRoundBracket}), we will prove that for all $\boldsymbol{\mu} \in \mathcal{X}^{2^{n-\lambda}}$, \begin{multlinecc} \label{eq:TPlusMinusHalfway} T^{[b_1,b_2,\ldots,b_\lambda]}(\boldsymbol{\mu}) \\ = P(\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda]} = \boldsymbol{\mu}, \mathbf{X}^{(\mathbf{b}_\lambda)} = \hat{u}_1^{i(\mathbf{b}_\lambda) - 1}, \mathbf{Y} = \mathbf{y} ) \; . \end{multlinecc} Clearly, for $\lambda=n$, this reduces to the claimed lemma. The proof of (\ref{eq:TPlusMinusHalfway}) proceeds by induction on $\lambda$. For the base case, take $\lambda = 0$, and note that (\ref{eq:TPlusMinusHalfway}) holds by assumption: the LHS is by definition $T(\boldsymbol{\mu})$ while the RHS is simply $P(\mathbf{X} = \boldsymbol{\mu}, \mathbf{Y} = \mathbf{y})$, and the two are equal by (\ref{eq:generalPathSumFunction}). For the induction step, we assume that (\ref{eq:TPlusMinusHalfway}) is true for $\lambda$, and prove it to be true for $\lambda + 1$. Assume first that $b_{\lambda+1} = 0$. In this case, $\mathbf{b}_\lambda = \mathbf{b}_{\lambda+1}$. Recall that since $b_{\lambda+1} = 0$, we get the trellis $\mathcal{T}^{[b_1,b_2,\ldots,b_\lambda,b_{\lambda+1}]}$ by applying a minus transform (Definition~\ref{def:minus_transform}) on $\mathcal{T}^{[b_1,b_2,\ldots,b_\lambda]}$. We must prove that (\ref{eq:TPlusMinusHalfway}) holds with $\lambda+1$ in place of $\lambda$, and this is indeed the case by Lemma~\ref{lemm:minusTransform}.
Indeed, recall that by our recursive definition, $\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,b_{\lambda+1}]} = \left(\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda]}\right)^{[0]}$, and apply Lemma~\ref{lemm:minusTransform}, where in (\ref{eq:generalPathSumFunction}) and (\ref{eq:minusTransformPathSum}) we replace $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{y}$ with $\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda]}$, $(\mathbf{Y},\mathbf{X}^{(\mathbf{b}_\lambda)})$, and $(\mathbf{y},\hat{u}_1^{i(\mathbf{b}_\lambda) - 1})$, respectively. Now, let us assume that $b_{\lambda+1} = 1$. In this case, note that $\mathbf{b}_\lambda \neq \mathbf{b}_{\lambda+1}$. As before, we assume that (\ref{eq:TPlusMinusHalfway}) is true for $\lambda$, and prove it to be true for $\lambda + 1$. By definition, we get the trellis $\mathcal{T}^{[b_1,b_2,\ldots,b_\lambda,b_{\lambda+1}]}$ by applying a plus transform (Definition~\ref{def:plus_transform}) on $\mathcal{T}^{[b_1,b_2,\ldots,b_\lambda]}$, with respect to the vector $\mathbf{z}$ defined in (\ref{eq:veczRecursiveT}) and (\ref{eq:thetaTau}), with $\lambda$ replaced by $\lambda+1$. Thus, if we denote by $T$ the probability function associated with $\mathcal{T}^{[b_1,b_2,\ldots,b_\lambda]}$, we get by Lemma~\ref{lemm:plusTransform} that the probability function associated with $\mathcal{T}^{[b_1,b_2,\ldots,b_\lambda,b_{\lambda+1}]}$, which we denote by $T'$, satisfies \begin{IEEEeqnarray*}{rCl} T'(\mathbf{z}') & = & T(\boldsymbol{\mu}) \\ & = & P(\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda]} = \boldsymbol{\mu}, \mathbf{X}^{(\mathbf{b}_\lambda)} = \hat{u}_1^{i(\mathbf{b}_\lambda) - 1}, \mathbf{Y} = \mathbf{y} ) \; , \end{IEEEeqnarray*} where $\boldsymbol{\mu}$ is the unique vector for which $\boldsymbol{\mu}^{[0]} = \mathbf{z}$ and $\boldsymbol{\mu}^{[1]} = \mathbf{z}'$.
The condition $\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda]} = \boldsymbol{\mu}$ is equivalent to the pair of conditions \[ \mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,0]} = \boldsymbol{\mu}^{[0]} \quad \mbox{and} \quad \mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,1]} = \boldsymbol{\mu}^{[1]} \; . \] That is, to the pair of conditions \[ \mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,0]} = \mathbf{z} \quad \mbox{and} \quad \mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,b_{\lambda+1}]} = \mathbf{z}' \; . \] We will shortly prove that the pair of conditions \begin{equation} \label{eq:lastStepOfRecursiveT_A} \mathbf{X}^{(\mathbf{b}_\lambda)} = \hat{u}_1^{i(\mathbf{b}_\lambda) - 1} \quad \mbox{and} \quad \mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,0]} = \mathbf{z} \end{equation} can be simplified to \begin{equation} \label{eq:lastStepOfRecursiveT_B} \mathbf{X}^{(\mathbf{b}_{\lambda+1})} = \hat{u}_1^{i(\mathbf{b}_{\lambda+1}) - 1} \; . \end{equation} Once this is proved, the lemma follows, since the above implies that \begin{aligncc*} T'\onlydouble{&}(\mathbf{z}') = \\ \onlydouble{&} P(\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,b_{\lambda+1}]} = \mathbf{z}', \mathbf{X}^{(\mathbf{b}_{\lambda+1})} = \hat{u}_1^{i(\mathbf{b}_{\lambda+1}) - 1}, \mathbf{Y} = \mathbf{y} ). \end{aligncc*} Let us now show that (\ref{eq:lastStepOfRecursiveT_A}) is equivalent to (\ref{eq:lastStepOfRecursiveT_B}). Since $b_{\lambda+1}=1$, the set of transforms we need to add to $\mathbf{X}^{(\mathbf{b}_\lambda)}$ in order to get $\mathbf{X}^{(\mathbf{b}_{\lambda+1})}$ are those with prefix $(b_1,b_2,\ldots,b_\lambda,0)$. That is, we are missing the $\mathcal{A}_{n-(\lambda+1)}$ transform of $\mathbf{X}^{[b_1,b_2,\ldots,b_\lambda,0]}$, and this transform must equal $\hat{u}_{i(\mathbf{b}_{\lambda})}^{i(\mathbf{b}_{\lambda+1}) - 1}$. To see that this indeed is the case, we observe that $\mathbf{z}$ is defined by (\ref{eq:veczRecursiveT}) and (\ref{eq:thetaTau}) with $\lambda+1$ in place of $\lambda$. 
Recalling (\ref{eq:bitReversedI}) and (\ref{eq:blambda}), and keeping in mind that in (\ref{eq:thetaTau}) we replace $\lambda$ by $\lambda+1$, we see that $\theta = i(\mathbf{b}_{\lambda+1})-1$ while $\tau = i(\mathbf{b}_{\lambda})$. \end{IEEEproof} In fact, the above lemma is not specific to the deletion channel: it applies to any base trellis for which (\ref{eq:generalPathSumFunction}) holds. The above lemma also gives an efficient method for deciding the value of $\hat{u}_i$ at stage $i$, since \begin{multlinecc} \label{eq:trellisConditionalProbCalc} \Pr(U_i = u | U_1^{i-1} = \hat{u}_1^{i-1}, \mathbf{Y} = \mathbf{y}) \\ = \frac{ T^{[b_1,b_2,\ldots,b_{n}]}(u)}{\displaystyle \sum_{u' \in \cal X} T^{[b_1,b_2,\ldots,b_{n}]}(u') } \end{multlinecc} when $\Pr(U_1^{i-1} = \hat{u}_1^{i-1}, \mathbf{Y} = \mathbf{y}) > 0$. \subsection{Complexity} \label{subsec:complexity} In~\cite{Wang_2014}, SC trellis decoding is generalized to finite-state channels with memory. For a finite-state channel with $A$ states, the decoding complexity of a length-$N$ code is shown to be $O(A^3 N \log N)$. While there are some connections between finite-state channels and deletion channels~\cite{Castiglione_2015}, it is not clear whether this complexity result can be applied directly to the deletion channel. Using a different formulation, an SC decoder for polar codes on the deletion channel is defined in~\cite{Tian_2017}. Its complexity is $O(N^4 \log N)$ for a constant deletion rate and a uniform input distribution\footnote{As noted earlier, the complexity of the decoding algorithm in~\cite{Tian_2017} is misstated as $O(d^2 N \log N)$ for $d$ deletions but is actually $O(d^3 N \log N)$.}. In this section, we bound the complexity of computing the plus and minus transformations of a trellis.
For a trellis $\mathcal{T}$ with $N$ sections, let $P_2 (j)$ be the number of distinct 2-step paths from states in $\mathcal{V}_{2j}$ to states in $\mathcal{V}_{2j+2}$ and define \[ C(\mathcal{T}) \triangleq \sum_{j=0}^{N/2 - 1} P_2 (j). \] From Definition~\ref{def:minus_transform}, one can verify that the minus transform requires $C(\mathcal{T})$ multiplications and additions to compute $\mathcal{T}^{[0]}$. Similarly, from Definition~\ref{def:plus_transform}, it follows that the plus transform requires at most $C(\mathcal{T})$ multiplications and additions to compute $\mathcal{T}^{[1]}$. Consider a trellis $\mathcal{T}_\lambda$ at depth $\lambda$ in the decoding process. Such a trellis will have $2^{n-\lambda}$ sections, each corresponding to $2^\lambda$ channel uses. For the deletion channel, we observe that each state in $\mathcal{V}_{2j}$ has at most $2(2^\lambda+1)|\mathcal{S}|$ outgoing edges. This is because each edge can be labeled by 0 or 1, the number of deletions (between $0$ and $2^{\lambda}$) determines the change in the channel state, and the input state can change to any of $|\mathcal{S}|$ possibilities. Combining these observations, and noting that the number of vertices in each segment is at most $2^n|\mathcal{S}|$, we see that \begin{align*} C(\mathcal{T}_{\lambda}) &\leq 2^n |\mathcal{S}| \cdot \left( 2(2^{\lambda}+1) |\mathcal{S}| \right)^2 2^{n-\lambda} \leq 2^{2n+2} (2^\lambda+3) |\mathcal{S}|^3. \end{align*} Since the full decoder uses $2^\lambda$ plus and minus operations at depth $\lambda$, the overall decoding complexity is \[ \sum_{\lambda=0}^{n-1} 2^\lambda 2^{2n+2} (2^\lambda+3) |\mathcal{S}|^3 = O(|\mathcal{S}|^3 N^4), \] which is a factor of $\log N$ lower than the previous methods. This occurs because the $\lambda=n-1$ decoding step dominates the calculation and has $O(|\mathcal{S}|^3 N^4)$ complexity by itself. Note, however, that the above quartic growth in $N$ is \emph{not} present in Theorem~\ref{theo:main}.
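To make the bound above concrete, the following short Python sketch (our own illustration; the function names and the choice $|\mathcal{S}|=2$ are ours, and the sketch is not part of the decoding algorithm) tabulates the per-depth work bound $2^\lambda \cdot 2^{2n+2}(2^\lambda+3)|\mathcal{S}|^3$ and its sum over all depths:

```python
# Numerical sanity check -- our own illustration, not part of the decoding
# algorithm.  We tabulate the upper bound on the total decoding work,
#     sum_{lambda=0}^{n-1} 2^lambda * 2^(2n+2) * (2^lambda + 3) * |S|^3,
# and observe that it is Theta(|S|^3 * N^4), with the lambda = n-1 term
# contributing a constant fraction of the total.

def depth_term(n: int, lam: int, num_states: int) -> int:
    """Work bound contributed by the 2^lam transform operations at depth lam."""
    return 2**lam * 2**(2 * n + 2) * (2**lam + 3) * num_states**3

def total_bound(n: int, num_states: int) -> int:
    """Work bound summed over depths lam = 0, ..., n-1."""
    return sum(depth_term(n, lam, num_states) for lam in range(n))

if __name__ == "__main__":
    S = 2  # a small, arbitrary number of input states
    for n in range(6, 12):
        N = 2**n
        total = total_bound(n, S)
        print(f"n={n:2d}  N={N:5d}  "
              f"bound/(|S|^3 N^4) = {total / (S**3 * N**4):.3f}  "
              f"share of depth n-1 = {depth_term(n, n - 1, S) / total:.3f}")
```

As $n$ grows, the normalized bound settles near a constant (about $4/3$), while the depth $\lambda = n-1$ alone accounts for roughly three quarters of the total, consistent with the observation that the last decoding step dominates.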
The overall complexity of our scheme is much smaller because the guard bands allow the codeword to be separated into many smaller blocks whose trellises can be processed separately. \section{Information rates} \label{sec:informationRates} In this section, we will introduce and analyze various information rates related to polar codes on the deletion channel. For a given regular hidden-Markov input distribution, let $\mathbf{X}$ be an input vector of length $N$ and let $\mathbf{Y}$ be the corresponding output vector (i.e., the observation of $\mathbf{X}$ through the deletion channel). The main goal of this paper is to show that our polar coding scheme achieves the information rate \begin{equation} \label{eq:informationRate} \mathcal{I} = \lim_{N \to \infty} \frac{I(\mathbf{X};\mathbf{Y})}{N} \; , \end{equation} where $\mathbf{X}$ and $\mathbf{Y}$ depend implicitly on $N$. The existence of this limit is well known~\cite{Dobrushin_1967}, but we revisit it here because the same argument will be used later with slight variations. \begin{lemm} \label{lemm:twoLimitsExist} Fix a hidden-Markov input distribution. For a given $N$, let $\mathbf{X} = (X_1,X_2,\ldots,X_N)$ be a random vector with the above distribution. Let $\mathbf{Y}$ be the result of passing $\mathbf{X}$ through a deletion channel with deletion probability $\delta$. Then, the following two limits exist: \begin{equation} \label{eq:twoLimits} \lim_{N \to \infty} \frac{H(\mathbf{X})}{N} \quad \mbox{and} \quad \lim_{N \to \infty} \frac{H(\mathbf{X}|\mathbf{Y})}{N} \; . \end{equation} \end{lemm} \begin{IEEEproof} The proof of this lemma is detailed below for uniform inputs in Section~\ref{sec:info_rate_lemma_uniform} and hidden-Markov inputs in Section~\ref{sec:info_rate_lemma_hm}. \end{IEEEproof} Once the limits in~\eqref{eq:twoLimits} are established, the limit in~(\ref{eq:informationRate}) follows because \[ \frac{I(\mathbf{X};\mathbf{Y})}{N} = \frac{H(\mathbf{X})}{N} - \frac{H(\mathbf{X}|\mathbf{Y})}{N} \; .
\] \subsection{Uniform input} \label{sec:info_rate_lemma_uniform} In this subsection, we prove Lemma~\ref{lemm:twoLimitsExist} for the restricted case in which the input distribution is i.i.d.\ and uniform. \begin{IEEEproof}[Proof of Lemma~\ref{lemm:twoLimitsExist} for Uniform Inputs] In such a setting, the first limit in (\ref{eq:twoLimits}) clearly exists and equals $1$. To prove that the second limit in (\ref{eq:twoLimits}) exists, let us first define \begin{equation} \label{eq:calH} \mathcal{H}_N = H(\mathbf{X}|\mathbf{Y}) \; , \quad |\mathbf{X}| = N \; . \end{equation} Our plan is to show that the sequence $\mathcal{H}_N$ is superadditive, implying \cite[Lemma 1.2.1, page 3]{Steele:97b} the existence of the second limit in (\ref{eq:twoLimits}). Indeed, let $N_1$ and $N_2$ be given, and let $\mathbf{X}$ and $\mathbf{X}'$ be distributed according to the input distribution, and having lengths $N_1$ and $N_2$, respectively. Denote the outputs corresponding to $\mathbf{X}$ and $\mathbf{X}'$ by $\mathbf{Y}$ and $\mathbf{Y}'$, respectively.
We have \begin{IEEEeqnarray*}{rCl} \mathcal{H}_{N_1 + N_2} &=& H(\mathbf{X} \odot \mathbf{X}'| \mathbf{Y} \odot \mathbf{Y}') \\ &\eqann[=]{a}& H(\mathbf{X}, \mathbf{X}'| \mathbf{Y} \odot \mathbf{Y}') \\ &\geq& H( \mathbf{X}, \mathbf{X}'| \mathbf{Y} \odot \mathbf{Y}', \mathbf{Y}, \mathbf{Y}') \\ & \eqann{b} & H(\mathbf{X},\mathbf{X}'| \mathbf{Y}, \mathbf{Y}') \\ & \eqann{c} & H(\mathbf{X}| \mathbf{Y}, \mathbf{Y}') + H(\mathbf{X}'|\mathbf{X}, \mathbf{Y}, \mathbf{Y}') \\ & \eqann{d} & H(\mathbf{X}| \mathbf{Y}) + H(\mathbf{X}'|\mathbf{Y}') \\ & = & \mathcal{H}_{N_1} + \mathcal{H}_{N_2} \; , \end{IEEEeqnarray*} where \eqannref{a} holds because $N_1$ and $N_2$, the lengths of $\mathbf{X}$ and $\mathbf{X}'$, respectively, are constant parameters; \eqannref{b} holds because $ \mathbf{Y} \odot \mathbf{Y}'$ is a function of $\mathbf{Y}$ and $\mathbf{Y}'$; \eqannref{c} follows by the chain rule; \eqannref{d} holds because, for the i.i.d.\ uniform input distribution, the pair $(\mathbf{X},\mathbf{Y})$ is independent of the pair $(\mathbf{X}',\mathbf{Y}')$. Hence, the sequence $\mathcal{H}_N$ is indeed superadditive. \end{IEEEproof} \subsection{Hidden-Markov input} \label{sec:info_rate_lemma_hm} We now prove Lemma~\ref{lemm:twoLimitsExist} for the case where the input distribution is a regular hidden-Markov process. Since now $\mathcal{H}_N$ is not generally superadditive, we will take an indirect route to prove Lemma~\ref{lemm:twoLimitsExist}. Indeed, the following lemma is proved by defining a related quantity, $\hat{\mathcal{H}}_N$, which is superadditive. \begin{lemm} \label{lemm:HHatLimitExists} Fix a regular hidden-Markov input distribution. For a given $N$, let $\mathbf{X} = (X_1,X_2,\ldots,X_N)$ be a random vector with the above distribution. Let $\mathbf{Y}$ be the result of passing $\mathbf{X}$ through a deletion channel with deletion probability $\delta$. 
Then, the following limit exists: \begin{equation} \label{eq:HHatLimit} \lim_{N \to \infty} \frac{H(\mathbf{X}|\mathbf{Y},S_0,S_N)}{N} \; . \end{equation} \end{lemm} \begin{IEEEproof} Define \begin{equation} \hat{\mathcal{H}}_N = H(\mathbf{X}|\mathbf{Y},S_0,S_N) \; , \quad |\mathbf{X}| = N \; . \label{eq:Hhat} \end{equation} To borrow the terminology of \cite{Shuval_Tal_Memory_2017}, the above defines the \emph{boundary-state-aware entropy}. Note that $S_0$ and $S_N$ are the states just before transmission has started, and just after transmission has ended, respectively. We now show that $\hat{\mathcal{H}}_N$ is superadditive. Indeed, let $\mathbf{X}$ and $\mathbf{X}'$ be consecutive input vectors of length $N_1$ and $N_2$, respectively. That is, $\mathbf{X} \odot \mathbf{X}'$ is a vector of length $N_1+N_2$ drawn from the input distribution. Denote by $\mathbf{Y}$ and $\mathbf{Y}'$ the output vectors corresponding to $\mathbf{X}$ and $\mathbf{X}'$, respectively. Then, \begin{IEEEeqnarray*}{rCl} \hat{\mathcal{H}}_{N_1 + N_2} &=& H(\mathbf{X} \odot \mathbf{X}'| \mathbf{Y} \odot \mathbf{Y}', S_0, S_{N_1+N_2}) \\ &\eqann[=]{a}& H(\mathbf{X},\mathbf{X}'| \mathbf{Y} \odot \mathbf{Y}', S_0, S_{N_1+N_2}) \\ & \eqann[\geq]{b} & H(\mathbf{X},\mathbf{X}'| \mathbf{Y}, \mathbf{Y}', S_0, S_{N_1+N_2}) \\ & \geq & H(\mathbf{X},\mathbf{X}'| \mathbf{Y}, \mathbf{Y}', S_0, S_{N_1}, S_{N_1+N_2}) \\ & \eqann{c} & H(\mathbf{X}| \mathbf{Y}, \mathbf{Y}', S_0, S_{N_1}, S_{N_1+N_2}) \\ & & + H(\mathbf{X}'|\mathbf{X}, \mathbf{Y}, \mathbf{Y}', S_0, S_{N_1}, S_{N_1+N_2}) \\ & \eqann{d} & H(\mathbf{X}| \mathbf{Y},S_0,S_{N_1}) + H(\mathbf{X}'|\mathbf{Y}',S_{N_1},S_{N_1+N_2}) \\ & = & \hat{\mathcal{H}}_{N_1} + \hat{\mathcal{H}}_{N_2} \; , \end{IEEEeqnarray*} where \eqannref{a} holds because $N_1$ and $N_2$, the lengths of $\mathbf{X}$ and $\mathbf{X}'$, respectively, are constant parameters; \eqannref{b} holds because $ \mathbf{Y} \odot \mathbf{Y}'$ is a function of $\mathbf{Y}$ and 
$\mathbf{Y}'$; \eqannref{c} follows by the chain rule; \eqannref{d} holds because of conditional independence: given $S_{N_1}$, $(\mathbf{X}, \mathbf{Y},S_0)$ is independent of $(\mathbf{X}',\mathbf{Y}',S_{N_1+N_2})$. Hence, the sequence $\hat{\mathcal{H}}_N$ is indeed superadditive, and the following limit exists by \cite[Lemma 1.2.1, page 3]{Steele:97b}, \[ \lim_{N \to \infty} \frac{\hat{\mathcal{H}}_N}{N} \; . \] \end{IEEEproof} All that remains now is to account for the difference between $\mathcal{H}_N$ and $\hat{\mathcal{H}}_N$, incurred by conditioning on $S_0$ and $S_N$. As will be made clear in the following proof, this difference can be bounded by a constant, and hence vanishes when we divide by $N$. \begin{IEEEproof}[Proof of Lemma~\ref{lemm:twoLimitsExist} for hidden-Markov inputs] We first note that the existence of the second limit in (\ref{eq:twoLimits}) implies the existence of the first limit. Indeed, taking the deletion probability $\delta$ equal to $1$ makes the second limit equal the first. Hence, all that remains is to prove the existence of the second limit. To show that the second limit in (\ref{eq:twoLimits}) exists, note that, for $|\mathbf{X}| = N$, we have on the one hand that \begin{IEEEeqnarray*}{rCl} H(\mathbf{X},S_0,S_N|\mathbf{Y}) & = & H(\mathbf{X}|\mathbf{Y}) + H(S_0,S_N|\mathbf{X},\mathbf{Y}) \\ & \geq & H(\mathbf{X}|\mathbf{Y}) \\ & = & \mathcal{H}_N \; , \end{IEEEeqnarray*} and on the other hand that \begin{IEEEeqnarray*}{rCl} H(\mathbf{X},S_0,S_N|\mathbf{Y}) & = & H(S_0,S_N|\mathbf{Y}) + H(\mathbf{X}|\mathbf{Y},S_0,S_N) \\ & \leq & 2 \log_2 |\mathcal{S}| + H(\mathbf{X}|\mathbf{Y},S_0,S_N) \\ & = & 2 \log_2 |\mathcal{S}| + \hat{\mathcal{H}}_N \; . \end{IEEEeqnarray*} Thus, \[ \mathcal{H}_N \leq \hat{\mathcal{H}}_N + 2 \log_2 |\mathcal{S}| \; .
\] Since it is easily seen that $\hat{\mathcal{H}}_N \leq \mathcal{H}_N$, we have that \begin{equation} \label{eq:preLimit} \frac{\hat{\mathcal{H}}_N}{N} \leq \frac{\mathcal{H}_N}{N} \leq \frac{\hat{\mathcal{H}}_N}{N} + \frac{2 \log_2 |\mathcal{S}|}{N} \; . \end{equation} We have already proved, in Lemma~\ref{lemm:HHatLimitExists}, that the limit of the LHS of (\ref{eq:preLimit}) exists. Since the limit of $(2 \log_2 |\mathcal{S}|)/N$ is $0$, the limit of the RHS of (\ref{eq:preLimit}) exists and equals that of the LHS. By the sandwich property, the limit of the middle term exists as well, which is the desired result. \end{IEEEproof} We finish by restating the last part of the proof as a lemma. \begin{lemm} \label{lemm:HatNoHatLimitsEqual} Fix a hidden-Markov input distribution. For a given $N$, let $\mathbf{X} = (X_1,X_2,\ldots,X_N)$ be a random vector with the above distribution. Let $\mathbf{Y}$ be the result of passing $\mathbf{X}$ through a deletion channel with deletion probability $\delta$. Then, \begin{equation} \label{eq:HatNoHatLimitsEqual} \lim_{N \to \infty} \frac{H(\mathbf{X}|\mathbf{Y},S_0,S_N)}{N} = \lim_{N \to \infty} \frac{H(\mathbf{X}|\mathbf{Y})}{N} \; . \end{equation} \end{lemm} \section{Weak polarization} \label{sec:weak} In this section, we prove weak polarization for both the deletion channel and the trimmed deletion channel, as defined in Subsection~\ref{subsec:TrellisTDC}. As in \cite{Arikan_2009}, we will first prove that a certain process is a submartingale, and then prove that it converges to either $0$ or $1$. As a first step, we will shortly define three entropies. These are defined with respect to an input $\mathbf{X}$ of length $N=2^n$, which has a regular hidden-Markov input distribution, and $\mathbf{U}= \mathcal{A}_n (\mathbf{X})$. The corresponding output is denoted $\mathbf{Y}$.
Recall that $S_0$ and $S_N$ are the (hidden) states of the input process, just before $\mathbf{X}$ is transmitted and right after $\mathbf{X}$ is transmitted, respectively. Lastly, denote by $\mathbf{Y}^*$ the result of trimming all leading and trailing `$0$' symbols from $\mathbf{Y}$. Then, for a given $n$ and $1 \leq i \leq N = 2^n$, define the following (deterministic) entropies: \begin{IEEEeqnarray}{rCl} h_i &=& H(U_i|U_1^{i-1},\mathbf{Y}) \; , \label{eq:hi} \\ \hat{h}_i &=& H(U_i|U_1^{i-1},S_0,S_N,\mathbf{Y}) \; , \label{eq:hathi}\\ h_i^* &=& H(U_i|U_1^{i-1},\mathbf{Y}^*) \label{eq:hiTDC} \; . \end{IEEEeqnarray} Clearly, \[ h_i^* \geq h_i \geq \hat{h}_i \; . \] Note that in the case of a uniform input distribution, there is only one state, and hence $h_i$ and $\hat{h}_i$ are equal. Following \cite{Arikan_2009}, we show weak polarization by considering a sequence $B_1,B_2,\ldots$ of i.i.d.\ $\mathrm{Ber}(1/2)$ random variables. For any $n\in \mathbb{N}$, let $J_n = i(B_1,B_2,\ldots,B_{n})$ be the random index defined by (\ref{eq:bitReversedI}), with $B_t$ in place of $b_t$. We will study the three related random processes defined for $n\in \mathbb{N}$ by \begin{IEEEeqnarray}{rCl} H_n &=& h_{J_n} \; , \label{eq:Hn} \\ \hat{H}_n &=& \hat{h}_{J_n} \; , \label{eq:hatHn} \\ H^*_{n} &=& h_{J_n}^* \; . \label{eq:HnTDC} \end{IEEEeqnarray} The arguments below will show that $\hat{H}_n$ is a submartingale, converging to either $0$ or $1$. From this we will infer that $H_n$ and $H^*_{n}$ must converge to either $0$ or $1$ as well, even though neither $H_n$ nor $H^*_{n}$ is necessarily a submartingale. \begin{theo} \label{thm:hatH_polarizes} The sequence $\hat{H}_n$ converges (almost surely and in $L^1$) to a well-defined random variable $\hat{H}_\infty \in \{0,1\}$ and, for any $\epsilon > 0$, it follows that \begin{align} \frac{1}{N} \left| \left\{ i \in [N] \, | \, H(U_i|U_1^{i-1},S_0,S_N,\mathbf{Y}) \in [\epsilon, 1-\epsilon] \right\} \right| & \to 0.
\end{align} \end{theo} \begin{IEEEproof} Lemma~\ref{lemm:subMartingale} below shows that $\hat{H}_1,\hat{H}_2,\hat{H}_3,\ldots \in [0,1]$ is a bounded submartingale with respect to $J_n$. This implies that the sequence $\hat{H}_n$ converges (almost surely and in $L^1$) to a limit that is denoted by $\hat{H}_\infty$~\cite[p.~236]{Durrett-2019}. Lemma~\ref{lemm:weakPolarizationFAIM} below shows that, for any $\epsilon >0$, there is a $\Delta > 0$ such that $\hat{H}_n \in [\epsilon,1-\epsilon]$ implies $\hat{H}_{n+1} > \hat{H}_n + \Delta$ with probability $\frac{1}{2}$. Thus, the sequence $\hat{H}_n$ cannot converge to a point in the open interval $(0,1)$, and hence $\hat{H}_\infty \in \{0,1\}$. From~\eqref{eq:hathi} and~\eqref{eq:hatHn}, we see that $\Pr \left( \hat{H}_n \in [\epsilon,1-\epsilon] \right)$ equals \[ \frac{1}{N} \left| \left\{ i \in [N] \, | \, H(U_i|U_1^{i-1},S_0,S_N,\mathbf{Y}) \in [\epsilon, 1-\epsilon] \right\} \right|. \] Since $\hat{H}_n$ converges almost surely to $\hat{H}_\infty$ and $\epsilon$ and $1-\epsilon$ are continuity points of $\Pr(\hat{H}_\infty \leq x)$~\cite[Ch.~4]{Durrett-2019}, it follows that \[ \lim_{n\to \infty} \Pr \left(\hat{H}_n \in [\epsilon,1-\epsilon] \right) = \Pr \left( \hat{H}_\infty \in [\epsilon,1-\epsilon] \right) = 0. \] This completes the proof. \end{IEEEproof} \begin{lemm} \label{lemm:subMartingale} For a hidden-Markov input distribution and a deletion channel with deletion probability $\delta$, let $\hat{H}_n$ and $J_n$ be as defined above. Then, the sequence $\hat{H}_1,\hat{H}_2,\hat{H}_3,\ldots$ is a bounded submartingale with respect to the $J_1,J_2,J_3,\ldots$ sequence. \end{lemm} \begin{IEEEproof} Since $\hat{H}_n$ is clearly bounded between $0$ and $1$, it remains to show that $E(\hat{H}_{n+1}|J_1,J_2,\ldots,J_n) \geq \hat{H}_n$. Let $\mathbf{X} \odot \mathbf{X}'$ be a length-$2N$ input to the channel.
Denote by $\mathbf{Y} \odot \mathbf{Y}'$ the corresponding output, where $\mathbf{Y}$ only contains inputs from $\mathbf{X}$ and $\mathbf{Y}'$ only contains inputs from $\mathbf{X}'$. Recall that $\mathbf{U} = \mathcal{A}_{n}(\mathbf{X})$ and define $\mathbf{V} = \mathcal{A}_n(\mathbf{X}')$ and \[ \mathbf{F} = (U_1 \oplus V_1, V_1,U_2 \oplus V_2, V_2,\ldots,U_N \oplus V_N,V_N). \] By (\ref{eq:bitReversedI}), we have that $J_{n+1} = 2J_n -1$ with probability $1/2$ and $J_{n+1} = 2J_n$ with probability $1/2$. Thus, \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{E(\hat{H}_{n+1} |J_1^n)} \\ \quad & =& E \big( H(F_{J_{n+1}} | F_1^{J_{n+1}-1},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) | J_1^n \big) \\ & = & \frac{1}{2} H(F_{2 J_{n}-1} | F_1^{2 J_{n}-2},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \\ & & \quad + \frac{1}{2} H(F_{2 J_{n}} | F_1^{2 J_{n}-1},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \\ & = &\frac{1}{2} H(F_{2 J_{n}-1},F_{2 J_{n}} | F_1^{2 J_{n}-2},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \\ & =& \frac{1}{2} H(U_{J_n} \oplus V_{J_n},V_{J_n} | F_1^{2 J_{n}-2},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \\ & = & \frac{1}{2} H(U_{J_n},V_{J_n} | U_1^{J_{n}-1},V_1^{J_{n}-1},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \\ & \eqann[\geq]{a} & \frac{1}{2} H(U_{J_n},V_{J_n} | U_1^{J_{n}-1},V_1^{J_{n}-1},\mathbf{Y},\mathbf{Y}',S_0,S_{2N}) \\ & \eqann[\geq]{b} & \frac{1}{2} H(U_{J_n},V_{J_n} | U_1^{J_{n}-1},V_1^{J_{n}-1},\mathbf{Y},\mathbf{Y}',S_0,S_N,S_{2N}) \\ & \eqann{c} & \frac{1}{2} H(U_{J_n} | U_1^{J_{n}-1},\mathbf{Y},S_0,S_N) \\ && \quad + \frac{1}{2} H(V_{J_n} | V_1^{J_{n}-1},\mathbf{Y}',S_N,S_{2N}) \\ & \eqann{d} & \hat{H}_{n}. \end{IEEEeqnarray*} The inequality \eqannref{a} follows from the fact that $\mathbf{Y} \odot \mathbf{Y}'$ is a deterministic function of $\mathbf{Y},\mathbf{Y}'$. Inequality \eqannref{b} follows since conditioning reduces entropy. Step \eqannref{c} holds by the Markov property. 
Finally, \eqannref{d} is due to stationarity: $\hat{H}_n = H(U_{J_n} | U_1^{J_{n}-1},\mathbf{Y},S_0,S_N) = H(V_{J_n} | V_1^{J_{n}-1},\mathbf{Y}',S_N,S_{2N})$. \end{IEEEproof} \vspace{1mm} Since the sequence $\hat{H}_n$ is a bounded submartingale, it converges almost surely and in $L^1$ to a random variable $\hat{H}_\infty \in [0,1]$. To show that $\hat{H}_\infty \in \{0,1\}$ with probability 1, one can show that, if $\epsilon \leq \hat{H}_n \leq 1-\epsilon$, then there is a $\Delta = \Delta(\epsilon) > 0$ such that $\hat{H}_n^- - \hat{H}_n > \Delta(\epsilon)$, where \begin{equation} \label{hat_n_plus} \hat{H}_n^- \triangleq H(U_{J_n} \oplus V_{J_n} |U_1^{J_{n}-1},V_1^{J_{n}-1},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \; . \end{equation} That is, a `minus' operation applied to a non-polarized entropy increases the entropy by at least $\Delta$. Such a result indeed establishes the above, since it dictates that $\hat{H}_n$ cannot converge to anything other than either $0$ or $1$. As before, we first prove the above for the simple case of i.i.d.\ uniform input, and then generalize to a hidden-Markov input. \subsection{Uniform input} \begin{lemm} \label{lemm:weakPolarizationSymmetric} Let $\mathbf{X}$ and $\mathbf{X}'$ be independent vectors of length $N = 2^n$, both drawn from an i.i.d.\ uniform distribution. Let $\hat{H}_n$ and $\hat{H}_n^-$ be as defined in (\ref{eq:hathi}), (\ref{eq:hatHn}), and (\ref{hat_n_plus}), with $S_0$, $S_N$ and $S_{2N}$ being degenerate random variables always taking the value $1$. Then, for every $\epsilon > 0$ there exists $\Delta(\epsilon) > 0$ such that if $\epsilon \leq \hat{H}_n \leq 1- \epsilon$, then $\hat{H}_n^- - \hat{H}_n > \Delta(\epsilon)$. \end{lemm} \begin{IEEEproof} Denote $i=J_n$, and assume a fixed $\epsilon$ for which $\epsilon \leq \hat{H}_n \leq 1- \epsilon$.
Then, since $S_0$, $S_N$, and $S_{2N}$ are degenerate, we observe that $(U_i,U_1^{i-1},\mathbf{Y})$ is independent of $(V_i,V_1^{i-1},\mathbf{Y}')$. It follows that \[ H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y}, \mathbf{Y}') \] is the conditional entropy of the modulo-2 sum of the conditionally independent binary random variables $U_i$ and $V_i$. Thus, Mrs.\ Gerber's Lemma~\cite[Lemma 2.2]{sasoglu:12b} implies that, for every $\epsilon >0$, there is $\Delta >0$ such that \[ H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y}, \mathbf{Y}') - H(U_i|U_1^{i-1},\mathbf{Y}) \geq \Delta \; . \] Since \begin{align*} \hat{H}_{n}^- &= H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y} \odot \mathbf{Y}') \\ &\geq H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y}, \mathbf{Y}'), \end{align*} the result follows. \end{IEEEproof} \subsection{Hidden-Markov input} The proof of Lemma~\ref{lemm:weakPolarizationSymmetric} above relied on the mutual independence of $(U_i,U_1^{i-1},\mathbf{Y})$ and $(V_i,V_1^{i-1},\mathbf{Y}')$. To emulate\footnote{For independence, it is sufficient to condition on the event $S_N = s_N$. Conditioning on the more specific event $S_0 = s_0$, $S_N = s_N$, $S_{2N} = s_{2N}$ is needed for later parts.} this property in a FAIM setting, we note that, for fixed $s_0$, $s_N$, and $s_{2N}$, $(U_i,U_1^{i-1},\mathbf{Y})$ and $(V_i,V_1^{i-1},\mathbf{Y}')$ are indeed independent when conditioning on the event $S_0 = s_0$, $S_N = s_N$, $S_{2N} = s_{2N}$. Towards this end, for $s_0,s_N,s_{2N} \in \mathcal{S}$, we denote the probability of these three states occurring as \begin{equation} \label{eq:ps0sNs2N} p(s_0,s_N,s_{2N}) = \Pr(S_0 = s_0, S_N = s_N, S_{2N} = s_{2N} ) \; . \end{equation} In the remainder of this subsection, we will assume that $N$ is large enough that the above probability is always positive. This is indeed possible, by the following lemma.
\begin{lemm} \label{lemm:allTripletsProbable} For $s \in \mathcal{S}$, denote by $\pi(s)$ the stationary probability of $s$. That is, the probability that $S_0=s$. Let \[ \pi_{\mathrm{min}} = \min_{s \in \mathcal{S}} \pi(s) \; . \] Then, $\pi_{\mathrm{min}} > 0$, and there exists a $\nu$ such that for all $N \geq 2^{\nu}$ and all $s_0,s_N, s_{2N} \in \mathcal{S}$ we have \begin{equation} \label{eq:allStatesProbable} \Pr(S_0 = s_0, S_N = s_N, S_{2N} = s_{2N} ) > \frac{(\pi_{\mathrm{min}})^3}{2} \; . \end{equation} \end{lemm} \begin{IEEEproof} Since the underlying Markov chain is regular (i.e., finite-state, irreducible, and aperiodic), some power of the transition matrix must be strictly positive, and this implies that $\pi_{\mathrm{min}}>0$. Regularity further implies that $S_0,S_N,S_{2N}$ become asymptotically independent as $N$ increases. Thus, there must be an $N_0 =2^{n_0}$ such that (\ref{eq:allStatesProbable}) holds for all $N\geq N_0$; taking $\nu = n_0$ completes the proof. \end{IEEEproof} For $(s_0,s_N,s_{2N})$, we define the quantities $\alpha(s_0,s_N,s_{2N})$ and $\beta(s_0,s_N,s_{2N})$ as follows. \begin{IEEEeqnarray}{l} \alpha(s_0,s_N,s_{2N}) \triangleq \label{eq:alpha} \\ H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y}, \mathbf{Y}',S_0=s_0,S_N=s_N,S_{2N}=s_{2N}) \nonumber \end{IEEEeqnarray} and \begin{equation} \label{eq:beta} \beta(s_0,s_N,s_{2N}) \triangleq \frac{\gamma(s_0,s_N) + \gamma(s_N,s_{2N})}{2} \; , \end{equation} where \begin{equation} \label{eq:gamma} \gamma(s_0,s_N) \triangleq H(U_i|U_1^{i-1},\mathbf{Y},S_0=s_0,S_N=s_N) \; . \end{equation} Note that by stationarity, \[ \gamma(s_N,s_{2N}) = H(V_i|V_1^{i-1},\mathbf{Y}',S_N=s_N,S_{2N}=s_{2N}) \; . \] The following lemma states how $\alpha$ and $\beta$ are related to our quantities of interest, $\hat{H}_n$ and $\hat{H}_n^-$. \begin{lemm} \label{lemm:alphaBetaConnection} Let $N = 2^n > 2^{\nu}$, where $\nu$ was promised in Lemma~\ref{lemm:allTripletsProbable}.
Then, for $\alpha$ and $\beta$ as defined above, we have that \begin{equation} \label{eq:hatH_alpha} \hat{H}_n^- \geq \sum_{s_0,s_N,s_{2N} \in \mathcal{S}} p(s_0,s_N,s_{2N}) \cdot \alpha(s_0,s_N,s_{2N}) \; , \end{equation} and \begin{equation} \label{eq:hatH_beta} \hat{H}_n = \sum_{s_0,s_N,s_{2N} \in \mathcal{S}} p(s_0,s_N,s_{2N}) \cdot \beta(s_0,s_N,s_{2N}) \; . \end{equation} Furthermore, for all $s_0,s_N,s_{2N} \in \mathcal{S}$, \begin{equation} \label{eq:alphaGeqBeta} \alpha(s_0,s_N,s_{2N}) \geq \beta(s_0,s_N,s_{2N}) \; . \end{equation} \end{lemm} \begin{IEEEproof} Define $i = J_n$. To prove (\ref{eq:hatH_alpha}), we proceed similarly to the proof of Lemma~\ref{lemm:subMartingale} and deduce that \begin{IEEEeqnarray*}{rCl} \hat{H}_n^- & = & H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y} \odot \mathbf{Y}',S_0,S_{2N}) \\ & \geq & H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y}, \mathbf{Y}',S_0,S_{2N}) \\ & \geq & H(U_i \oplus V_i|U_1^{i-1},V_1^{i-1},\mathbf{Y}, \mathbf{Y}',S_0,S_N,S_{2N}) \\ & = & \sum_{s_0,s_N,s_{2N} \in \mathcal{S}} p(s_0,s_N,s_{2N}) \cdot \alpha(s_0,s_N,s_{2N}) \; . \end{IEEEeqnarray*} The proof of (\ref{eq:hatH_beta}) follows by stationarity.
That is, \begin{IEEEeqnarray*}{rCl} \hat{H}_n & = & H(U_i |U_1^{i-1},\mathbf{Y}, S_0,S_N) \\ & = & \frac{H(U_i |U_1^{i-1},\mathbf{Y}, S_0,S_N) + H(V_i |V_1^{i-1},\mathbf{Y}', S_N,S_{2N})}{2} \\ & = & \sum_{s_0,s_N,s_{2N} \in \mathcal{S}} p(s_0,s_N,s_{2N}) \cdot \frac{\gamma(s_0,s_N) + \gamma(s_N,s_{2N})}{2} \\ & = & \sum_{s_0,s_N,s_{2N} \in \mathcal{S}} p(s_0,s_N,s_{2N}) \cdot \beta(s_0,s_N,s_{2N}) \; . \end{IEEEeqnarray*} By (\ref{eq:beta}), we deduce that (\ref{eq:alphaGeqBeta}) will follow from proving that \begin{equation} \label{eq:alphaGeqGammaOne} \alpha(s_0,s_N,s_{2N}) \geq \gamma(s_0,s_N) \end{equation} and \begin{equation} \label{eq:alphaGeqGammaTwo} \alpha(s_0,s_N,s_{2N}) \geq \gamma(s_N,s_{2N}) \; . \end{equation} W.l.o.g., we prove (\ref{eq:alphaGeqGammaOne}). Indeed, given that $S_N = s_N$, we have by the Markov property that $(S_0, U_1^{i-1}, U_i, \mathbf{Y})$ and $(V_1^{i-1}, V_i,\mathbf{Y}',S_{2N})$ are independent. Hence, for any $s_{2N}$ we may also write $\gamma$, defined in (\ref{eq:gamma}), as \begin{aligncc*} \gamma(s_0,s_N) = H( U_i| \onlydouble{&} U_1^{i-1},V_1^{i-1},V_i,\mathbf{Y},\mathbf{Y}', \\ \onlydouble{&} S_0=s_0,S_N=s_N,S_{2N} = s_{2N}) \; . \end{aligncc*} Lastly, note that in the above expression for $\gamma$, since we condition on $V_i$, we could have written $U_i \oplus V_i$ in place of $U_i$. This would give us the expression for $\alpha$ in (\ref{eq:alpha}), up to a further conditioning on $V_i$. Since conditioning reduces entropy, (\ref{eq:alphaGeqGammaOne}) follows. As noted, the proof of (\ref{eq:alphaGeqGammaTwo}) is similar. Hence, we deduce (\ref{eq:alphaGeqBeta}). \end{IEEEproof} In light of Lemma~\ref{lemm:alphaBetaConnection}, our plan is to show the existence of a triplet $(s_0,s_N,s_{2N})$ for which $\alpha(s_0,s_N,s_{2N})$ is substantially greater than $\beta(s_0,s_N,s_{2N})$. The next lemma assures us that such a triplet indeed exists.
\begin{lemm} \label{lemm:alphaSubstantiallyLargerThanBeta} For every $\epsilon > 0$ there exists a $\Delta' = \Delta'(\epsilon)$ for which the following holds. Let $N = 2^n > 2^{\nu}$, where $\nu$ was promised in Lemma~\ref{lemm:allTripletsProbable}. Then, if $\epsilon \leq \hat{H}_n \leq 1- \epsilon$, there exists a triplet $(s_0,s_N,s_{2N})$ such that \begin{equation} \label{eq:alphaSubstantiallyLargerThanBeta} \alpha(s_0,s_N,s_{2N}) > \beta(s_0,s_N,s_{2N}) + \Delta' \; . \end{equation} \end{lemm} \begin{IEEEproof} By definition of $\gamma$ in (\ref{eq:gamma}), we have that \begin{IEEEeqnarray}{rCl} \hat{H}_n &=& \sum_{s_0,s_N \in \mathcal{S}} \Pr(S_0 = s_0, S_N = s_N) \cdot \gamma(s_0,s_N) \label{eq:HnGamma} \\ &=& \sum_{s_N,s_{2N} \in \mathcal{S}} \Pr(S_N = s_N, S_{2N} = s_{2N}) \cdot \gamma(s_N,s_{2N}) \; , \IEEEnonumber \end{IEEEeqnarray} where the second equality follows by stationarity. A crucial point will be to show the existence of a triplet $(s_0,s_N,s_{2N})$ for which $(\hat{H}_n - \gamma(s_0,s_N))\cdot(\hat{H}_n - \gamma(s_N,s_{2N})) \leq 0$. In other words, either \begin{equation} \label{eq:gammaMixing} \gamma(s_0,s_N) \leq \hat{H}_n \quad \mbox{and} \quad \gamma(s_N,s_{2N}) \geq \hat{H}_n \; , \end{equation} or \begin{equation} \label{eq:gammaMixing2} \gamma(s_0,s_N) \geq \hat{H}_n \quad \mbox{and} \quad \gamma(s_N,s_{2N}) \leq \hat{H}_n \; . \end{equation} We prove this by contradiction. Suppose that this is not the case. Then, for all $s_0,s_N,s_{2N} \in \mathcal{S}$, it must be that \begin{equation} \label{eq:bridge} (\hat{H}_n - \gamma(s_0,s_N))\cdot(\hat{H}_n - \gamma(s_N,s_{2N})) > 0 \; . \end{equation} Fix some arbitrary $a,b \in \mathcal{S}$. By specializing $s_0$ to $a$ and $s_N$ to $b$ in (\ref{eq:bridge}), we deduce that $\hat{H}_n \neq \gamma(a,b)$. Assume w.l.o.g.\ that $\gamma(a,b) < \hat{H}_n$. We now claim that for all $c,d \in \mathcal{S}$, \begin{equation} \label{eq:gammacd} \gamma(c,d) < \hat{H}_n \; .
\end{equation} Indeed, let $c,d \in \mathcal{S}$ be given. By setting $s_0 = a$, $s_N = b$, $s_{2N} = c$, we deduce from (\ref{eq:bridge}) that $\gamma(b,c) < \hat{H}_n$. Hence, if we set $s_0 = b$, $s_N = c$, $s_{2N} = d$ in (\ref{eq:bridge}), we deduce (\ref{eq:gammacd}). From the above paragraph, we conclude that for all $s_0, s_N \in \mathcal{S}$, we must have that $\gamma(s_0,s_N) < \hat{H}_n$. However, recalling from (\ref{eq:HnGamma}) that $\hat{H}_n$ is a weighted average of such $\gamma$ terms, we arrive at a contradiction. Hence, there exists a triplet $(s_0,s_N,s_{2N})$ for which either (\ref{eq:gammaMixing}) or (\ref{eq:gammaMixing2}) holds. This is the triplet we are searching for. Indeed, since we have assumed that $\epsilon \leq \hat{H}_n \leq 1- \epsilon$, the above triplet satisfies \[ \min \{ \gamma(s_0,s_N), \gamma(s_N,s_{2N}) \} \leq 1-\epsilon \] and \[ \max \{ \gamma(s_0,s_N), \gamma(s_N,s_{2N}) \} \geq \epsilon \; . \] Our result now follows by combining part (i) of \cite[Lemma 2.2]{sasoglu:12b} with\footnote{The first two strict inequalities in the statement of \cite[Lemma 11]{SasogluTal:18a} are essentially typos: they should both be replaced by weak inequalities, as is evident from reading the beginning of the proof.} \cite[Lemma 11]{SasogluTal:18a}. \end{IEEEproof} Combining Lemmas~\ref{lemm:alphaBetaConnection} and~\ref{lemm:alphaSubstantiallyLargerThanBeta} gives the following key result. \begin{lemm} \label{lemm:weakPolarizationFAIM} For every $\epsilon > 0$ there exists $\Delta = \Delta(\epsilon)$ for which the following holds. Let $N = 2^n > 2^{\nu}$, where $\nu$ was promised in Lemma~\ref{lemm:allTripletsProbable}. If $\epsilon < \hat{H}_n \leq 1- \epsilon$, then \[ \hat{H}_n^- - \hat{H}_n > \Delta(\epsilon) \; . \] \end{lemm} \begin{IEEEproof} Take \[ \Delta = \frac{\Delta' \cdot (\pi_{\mathrm{min}})^3}{2} \; , \] where $\Delta'$ is as defined in Lemma~\ref{lemm:alphaSubstantiallyLargerThanBeta}.
Now, simply combine (\ref{eq:allStatesProbable}), (\ref{eq:hatH_alpha}), (\ref{eq:hatH_beta}), (\ref{eq:alphaGeqBeta}) and the existence of a triplet $(s_0,s_N,s_{2N})$ for which (\ref{eq:alphaSubstantiallyLargerThanBeta}) holds, to yield the claim. \end{IEEEproof} The following lemma will be useful. \begin{lemm} \label{lem:order_rv_convergence} For $n\in \mathbb{N}$, let $A_n$ and $B_n$ be real random variables defined on a common probability space. Suppose $B_n$ converges in $L^1$ to $B_\infty$ and $E(A_n)$ converges to $E(B_\infty)$. If $A_n \geq B_n$ for all $n\in \mathbb{N}$, then $A_n$ converges in $L^1$ to $B_\infty$. \end{lemm} \begin{IEEEproof} By definition, $B_n$ converges to $B_\infty$ in $L^1$ if and only if $E(|B_n-B_\infty|) \to 0$. Thus, by the triangle inequality, \begin{align*} E(|A_n - B_\infty|) &\leq E(|A_n - B_n|) + E(|B_n - B_\infty|) \\ & = E(A_n - B_n) + E(|B_n - B_\infty |) \\ & = E(A_n) - E(B_n) + E(|B_n - B_\infty|), \end{align*} where the first equality uses $A_n \geq B_n$. In the limit, $E(A_n)$ and $E(B_n)$ both converge to $E(B_\infty)$, so their difference vanishes, while the last term converges to $0$. Thus, $E(|A_n - B_\infty|) \to 0$. \end{IEEEproof} The following theorem claims weak polarization for the three cases discussed earlier. \begin{theo} \label{theo:slowPolarization} Fix $\epsilon \in (0,1)$ and let $N = 2^n$. For a given hidden-Markov input distribution, let $\mathbf{X} = (X_1,X_2,\ldots,X_N)$ be a random vector drawn according to this distribution. Let $\mathbf{Y}$ be the result of passing $\mathbf{X}$ through a deletion channel with deletion probability $\delta$. Denote $\mathbf{U} = \mathcal{A}(\mathbf{X})$. Let $S_0$ and $S_N$ be as in Definition~\ref{def:FAIM}.
Then, \begin{IEEEeqnarray}{rCl} \IEEEyesnumber\label{eq:SlowLowEntropy}\IEEEyessubnumber* && \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) < \epsilon}}}{N} \label{eq:slowLowEntropy_stateInformed} \\ & = & \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}) < \epsilon}}}{N} \label{eq:slowLowEntropy_regular} \\ & = & \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) < \epsilon}}}{N} \label{eq:slowLowEntropy_TDC}\\ & = & 1 - \lim_{n \to \infty} \frac{H(\mathbf{X}|\mathbf{Y})}{N} \label{eq:slowLowEntropy_target} \end{IEEEeqnarray} and \begin{IEEEeqnarray}{rCl} \IEEEyesnumber\label{eq:SlowHighEntropy}\IEEEyessubnumber* && \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) > 1- \epsilon}}}{N} \label{eq:slowHighEntropy_stateInformed}\\ & = & \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}) > 1- \epsilon}}}{N} \label{eq:slowHighEntropy_regular} \\ & = & \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) > 1- \epsilon}}}{N} \label{eq:slowHighEntropy_TDC} \\ & = & \lim_{n \to \infty} \frac{H(\mathbf{X}|\mathbf{Y})}{N} \label{eq:slowHighEntropy_target} \end{IEEEeqnarray} \end{theo} \begin{IEEEproof} For simplicity, the proof is split into 4 parts. \paragraph*{Part I: (\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_target}) are well defined} Recall from Lemma~\ref{lemm:twoLimitsExist} that $\lim_{n \to \infty} H(\mathbf{X}|\mathbf{Y})/N$ exists. Thus, the right hand sides of both (\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_target}) are well defined. 
\paragraph*{Part II: (\ref{eq:slowLowEntropy_stateInformed})$=$(\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_stateInformed})$=$(\ref{eq:slowHighEntropy_target})} Since the Ar\i{}kan\xspace\ transform is invertible, it follows that $\hat{\mathcal{H}}_N = H(\mathbf{X}|\mathbf{Y},S_0,S_N) = H(\mathbf{U}|\mathbf{Y},S_0,S_N)$, where $\hat{\mathcal{H}}_N$ is defined in (\ref{eq:Hhat}). Thus, from the chain rule for entropy, we observe that \begin{align*} E(\hat{H}_n) &= \frac{1}{N} \sum_{i=1}^N H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) \\ &= \frac{1}{N} H(\mathbf{U}|\mathbf{Y},S_0,S_N) \\ &= \frac{1}{N} \hat{\mathcal{H}}_N . \end{align*} From Theorem~\ref{thm:hatH_polarizes}, we see that $\hat{H}_n$ converges in $L^1$ to $\hat{H}_\infty \in \{0,1\}$. This implies that $E(\hat{H}_\infty) = \lim_{n\to\infty} E(\hat{H}_n)$ which exists and equals $\lim_{N \to \infty} \hat{\mathcal{H}}_N/N$ by Lemma~\ref{lemm:HHatLimitExists}. Since $\hat{H}_\infty \in \{0,1\}$, observing that $E(\hat{H}_\infty) = \Pr (\hat{H}_\infty = 1) $ shows that \begin{align*} \eqref{eq:slowHighEntropy_stateInformed} &= \lim_{n \to \infty} \Pr (\hat{H}_n > 1-\epsilon) = \Pr (\hat{H}_\infty = 1) = \lim_{n \to \infty} \frac{1}{N} \hat{\mathcal{H}}_N, \end{align*} where the second equality holds because convergence in $L^1$ implies convergence in distribution and $1-\epsilon$ is a continuity point of $\Pr(\hat{H}_\infty \leq x)$~\cite[Ch.~4]{Durrett-2019}. Since Lemma~\ref{lemm:HatNoHatLimitsEqual} shows that $\lim_{N \to \infty} \hat{\mathcal{H}}_N/N$ equals (\ref{eq:slowHighEntropy_target}), it follows that (\ref{eq:slowHighEntropy_stateInformed}) equals (\ref{eq:slowHighEntropy_target}). 
The last step is observing that \begin{align*} \eqref{eq:slowLowEntropy_stateInformed} &= \lim_{n \to \infty} \Pr (\hat{H}_n < \epsilon) = \Pr (\hat{H}_\infty = 0)=1-\Pr (\hat{H}_\infty = 1) \end{align*} holds because convergence in $L^1$ implies convergence in distribution and $\epsilon$ is a continuity point of $\Pr(\hat{H}_\infty \leq x)$. Thus, (\ref{eq:slowLowEntropy_stateInformed}) equals (\ref{eq:slowLowEntropy_target}). \paragraph*{Part III: (\ref{eq:slowLowEntropy_TDC})$=$(\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_TDC})$=$(\ref{eq:slowHighEntropy_target})} To prove these equalities, we will apply Lemma~\ref{lem:order_rv_convergence} to the sequences $A_n = H_n^*$ and $B_n = \hat{H}_n$. Theorem~\ref{thm:hatH_polarizes} shows that $\hat{H}_n$ converges in $L^1$ to $\hat{H}_\infty$ and we established in the previous part that $E(\hat{H}_\infty)$ equals (\ref{eq:slowHighEntropy_target}). From the definitions in~\eqref{eq:hatHn} and~\eqref{eq:HnTDC}, it follows that $H_n^* \geq \hat{H}_n$ for all $n \in \mathbb{N}$. The only other element required for Lemma~\ref{lem:order_rv_convergence} is that $E(H_n^*) \to E(\hat{H}_\infty)$, and this will be shown below. Assuming this for now, we observe that Lemma~\ref{lem:order_rv_convergence} implies that $H_n^*$ converges in $L^1$ to $\hat{H}_\infty$ and gives the desired result \begin{align*} \eqref{eq:slowLowEntropy_TDC} &= \lim_{n\to\infty} \Pr(H_n^* < \epsilon) = \Pr(\hat{H}_{\infty} < \epsilon) = \eqref{eq:slowLowEntropy_target} \\ \eqref{eq:slowHighEntropy_TDC} &= \lim_{n\to\infty} \Pr(H_n^* > 1- \epsilon) = \Pr(\hat{H}_{\infty} > 1- \epsilon) = \eqref{eq:slowHighEntropy_target}, \end{align*} where the second equality on each line holds because convergence in $L^1$ implies convergence in distribution and $\epsilon,1-\epsilon$ are continuity points of $\Pr(\hat{H}_\infty \leq x)$~\cite[Ch.~4]{Durrett-2019}.
To show that $E(H_n^*) \to E(\hat{H}_\infty)$, we will use the fact that \begin{multlinecc} \label{eq:HUGivenYTDC_sandwich} H(\mathbf{U} | \mathbf{Y},S_0,S_N) \leq H(\mathbf{U} | \mathbf{Y}^*) \leq \\ H( \mathbf{U} | \mathbf{Y},S_0,S_N) + 2 \log_2 |\mathcal{S}| + 2 \log_2 (N+1) \; . \end{multlinecc} Indeed, the first inequality holds because $\mathbf{Y}^*$ is a function of $\mathbf{Y}$. The second inequality follows by first noting that \[ H(\mathbf{U}|\mathbf{Y}^*) \leq H(\mathbf{Y},S_0,S_N,\mathbf{U} |\mathbf{Y}^*) \; , \] and then observing that \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{H(\mathbf{Y},S_0,S_N,\mathbf{U} |\mathbf{Y}^*)} \\ \quad &=& H(\mathbf{Y} | \mathbf{Y}^*) + H(S_0,S_N|\mathbf{Y},\mathbf{Y}^* ) + H(\mathbf{U}|\mathbf{Y},\mathbf{Y}^*,S_0,S_N) \\ \quad &\eqann{a}& H(\mathbf{Y} | \mathbf{Y}^*) + H(S_0,S_N|\mathbf{Y},\mathbf{Y}^* ) + H(\mathbf{U}|\mathbf{Y},S_0,S_N) \\ \quad & \eqann[\leq]{b} & H(\mathbf{Y} | \mathbf{Y}^*) + 2 \log_2 |\mathcal{S}| + H(\mathbf{U}|\mathbf{Y},S_0,S_N) \\ \quad & \eqann[\leq]{c} & 2 \log_2(N+1) + 2 \log_2 |\mathcal{S}| + H(\mathbf{U}|\mathbf{Y},S_0,S_N) \; , \end{IEEEeqnarray*} where \eqannref{a} follows from $\mathbf{Y}^*$ being a function of $\mathbf{Y}$, \eqannref{b} follows by $S_0$ and $S_N$ each having a support of size $|\mathcal{S}|$, and \eqannref{c} follows since in order to construct $\mathbf{Y}$ from $\mathbf{Y}^*$, it suffices to be told how many `$0$' symbols have been trimmed from each side of $\mathbf{Y}$, and both numbers are always between $0$ and $N$. Combining the above two displayed equations yields the RHS of (\ref{eq:HUGivenYTDC_sandwich}). Finally, we divide both sides of (\ref{eq:HUGivenYTDC_sandwich}) by $N$ and take the limit as $N\to\infty$. Since the left-most and right-most terms converge to $E(\hat{H}_\infty)$, the sandwich property implies that the center term, $E(H_n^*)$, also converges to this quantity.
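The side-information argument in \eqannref{c} can be made concrete with a small sketch (the function names are ours): recovering $\mathbf{Y}$ from $\mathbf{Y}^*$ requires only the two trim counts, each an integer between $0$ and $N$.

```python
def trim(y):
    """Trim leading/trailing '0' symbols; return (y_star, n_lead, n_trail)."""
    n_lead = len(y) - len(y.lstrip("0"))
    if n_lead == len(y):          # an all-zero word trims to the empty string
        return "", len(y), 0
    n_trail = len(y) - len(y.rstrip("0"))
    return y[n_lead:len(y) - n_trail], n_lead, n_trail

def untrim(y_star, n_lead, n_trail):
    """Reconstruct y from its trimmed version and the two counts."""
    return "0" * n_lead + y_star + "0" * n_trail

y = "0011010100"
y_star, a, b = trim(y)
assert y_star == "110101" and untrim(y_star, a, b) == y
```

Each count can be described with at most $\log_2(N+1)$ bits, which is exactly the $2 \log_2(N+1)$ term in (\ref{eq:HUGivenYTDC_sandwich}).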
\iffalse Since $\mathbf{Y}^*$ is a function of $\mathbf{Y}$, we have for all $1 \leq i \leq N$ that \begin{equation} \label{eq:HleqHTDC} H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) \leq H(U_i | U_1^{i-1},\mathbf{Y}^*) \; . \end{equation} Thus, \begin{IEEEeqnarray}{rCl} \IEEEeqnarraymulticol{3}{l}{\liminf_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) > 1- \epsilon}}}{N}} \IEEEnonumber \\ \quad &\geq & \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) > 1- \epsilon}}}{N} \IEEEnonumber \\ & = & \lim_{n \to \infty} \frac{H(\mathbf{X}|\mathbf{Y})}{N} \label{eq:liminfHTDC} \; , \end{IEEEeqnarray} where the equality in the above as well as the existence of the limits have already been established. We will now prove that \begin{IEEEeqnarray}{rCl} \IEEEeqnarraymulticol{3}{l}{\liminf_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) < \epsilon}}}{N}} \IEEEnonumber \\ \quad &\geq & \lim_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) < \epsilon}}}{N} \IEEEnonumber \\ & = & 1 - \lim_{n \to \infty} \frac{H(\mathbf{X}|\mathbf{Y})}{N} \label{eq:limsupHTDC} \; . \end{IEEEeqnarray} As before, we note that the equality and the existence of the limits in (\ref{eq:limsupHTDC}) have already been established. Fix $\epsilon > 0$, and denote the RHS of (\ref{eq:limsupHTDC}) by $\tau$. For every $\delta > 0$, we must show an $n_0$ large enough such that for all $n \geq n_0$, taking $N=2^n$ gives \begin{equation} \label{eq:HRegularTauPlusDelta} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) < \epsilon}}}{N} > \tau - \delta \; . \end{equation} Fix $\delta$, and assume w.l.o.g.\ that $\delta \leq \epsilon$. Take $n_0$ large enough such that for all $n \geq n_0$, taking $N=2^n$ gives \begin{equation} \label{eq:HInformedTauPlusHalfDelta} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) < \epsilon/2}}}{N} > \tau - \delta/2 \; . 
\end{equation} This is indeed possible, since we have already established that (\ref{eq:slowLowEntropy_stateInformed}) equals (\ref{eq:slowLowEntropy_target}), also when plugging in $\epsilon/2$ in place of $\epsilon$. We will further require that $n_0$ is large enough so that for all $n \geq n_0$ we have for $N=2^n$ that \[ \frac{2 \log_2 |\mathcal{S}| + 2 \log_2 (N+1)}{N} \leq \frac{\delta^2}{4} \; . \] By the above, (\ref{eq:HUGivenYTDC_sandwich}), the chain rule, and Markov's inequality, we deduce that the number of indices $1 \leq i \leq N$ for which \[ H(U_i | U_1^{i-1},\mathbf{Y}^*) \geq H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) + \frac{\delta}{2} \] is at most $N \delta / 2$. Clearly, since we have assumed that $\delta \leq \epsilon$, the number of indices for which \[ H(U_i | U_1^{i-1},\mathbf{Y}^*) \geq H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) + \frac{\epsilon}{2} \] is at most $N \delta / 2$ as well. Thus, the number of indices for which the condition in (\ref{eq:HInformedTauPlusHalfDelta}) holds but the condition in (\ref{eq:HRegularTauPlusDelta}) does not hold is at most $N\delta/2$. We deduce that since the LHS of (\ref{eq:HInformedTauPlusHalfDelta}) is greater than $\tau - \delta/2$, the LHS of (\ref{eq:HRegularTauPlusDelta}) must be greater that $\tau - \delta/2 - \delta/2 = \tau - \delta$. That is we deduce that (\ref{eq:HRegularTauPlusDelta}) indeed holds, proving (\ref{eq:limsupHTDC}). We now claim that the weak inequalities in both (\ref{eq:liminfHTDC}) and (\ref{eq:limsupHTDC}) can be replaced by equalities. Indeed, this follows since the RHS of (\ref{eq:liminfHTDC}) plus the RHS of (\ref{eq:limsupHTDC}) sum to $1$, while the LHS of (\ref{eq:liminfHTDC}) plus the LHS of (\ref{eq:limsupHTDC}) can sum to at most $1$. In fact, the last claim in our argument can be strengthened, in two ways. 
Namely, note that \begin{multline*} \underbrace{\liminf_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) < \epsilon}}}{N}}_{\underline{A}} \\ + \underbrace{\limsup_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) > 1- \epsilon}}}{N}}_{\overline{B}} \leq 1 \end{multline*} and that \begin{multline*} \underbrace{\limsup_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) < \epsilon}}}{N}}_{\overline{A}} \\ + \underbrace{\liminf_{n \to \infty} \frac{\mysize{\myset{i : H(U_i | U_1^{i-1},\mathbf{Y}^*) > 1- \epsilon}}}{N}}_{\underline{B}} \leq 1 \end{multline*} Denote the right hand sides of (\ref{eq:liminfHTDC}) and (\ref{eq:limsupHTDC}) by $\alpha$ and $\beta$, respectively. We have previously established that $\underline{A} = \alpha$ and $\underline{B} = \beta$. From the above, we deduce that \[ \overline{B} \leq 1 - \underline{A} = 1 - \alpha = \beta = \underline{B} \] and \[ \overline{A} \leq 1 - \underline{B} = 1 - \beta = \alpha = \underline{A} \; . \] Conversely, by definition of $\limsup$ and $\liminf$ we have that $\overline{A} \geq \underline{A}$ and $\overline{B} \geq \underline{B}$. We deduce that $\overline{B} = \underline{B}$ and $\overline{A} = \underline{A}$. Namely, in both cases, $\liminf = \limsup$. Thus, the limits in (\ref{eq:slowLowEntropy_TDC}) and (\ref{eq:slowHighEntropy_TDC}) exist, and we have that both (\ref{eq:slowLowEntropy_TDC})$=$(\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_TDC})$=$(\ref{eq:slowHighEntropy_target}). \fi \paragraph*{Part IV: (\ref{eq:slowLowEntropy_stateInformed})$=$(\ref{eq:slowLowEntropy_regular})$=$(\ref{eq:slowLowEntropy_TDC}) and (\ref{eq:slowHighEntropy_stateInformed})$=$(\ref{eq:slowHighEntropy_regular})$=$(\ref{eq:slowHighEntropy_TDC})} Note that, for $1 \leq i \leq N$, we have \[ H(U_i | U_1^{i-1},\mathbf{Y},S_0,S_N) \leq H(U_i | U_1^{i-1},\mathbf{Y}) \leq H(U_i | U_1^{i-1},\mathbf{Y}^*) \; . 
\] We have already proved that (\ref{eq:slowLowEntropy_stateInformed})$=$(\ref{eq:slowLowEntropy_TDC}) and (\ref{eq:slowHighEntropy_stateInformed})$=$(\ref{eq:slowHighEntropy_TDC}). Thus, by the sandwich property, (\ref{eq:slowLowEntropy_stateInformed})$=$(\ref{eq:slowLowEntropy_regular})$=$(\ref{eq:slowLowEntropy_TDC}) and (\ref{eq:slowHighEntropy_stateInformed})$=$(\ref{eq:slowHighEntropy_regular})$=$(\ref{eq:slowHighEntropy_TDC}). \end{IEEEproof} \section{Strong polarization} \label{sec:strong} To rigorously justify a coding scheme for the deletion channel, one must also show strong polarization. For this, Theorem~\ref{theo:slowPolarization} is not sufficient and, so far, we have been unable to prove strong polarization for the \emph{standard} polar code construction. Thus, we will modify the standard coding scheme to proceed. \subsection{Overview of Coding Scheme} \label{sec:strong_setup} Fix a deletion probability $\delta$ and a regular hidden Markov input distribution. Recall that our goal is to achieve the information rate $\mathcal{I}$ given in (\ref{eq:informationRate}). For didactic reasons, we first consider a simplified setting in which this goal is easily attained. Specifically, let $N_0$ be a given parameter, and consider a block-TDC with block length $N_0$ and deletion probability $\delta$. That is, for each input block $\mathbf{X}(\phi)$ of length $N_0$, where $\phi = 1, 2, \ldots$, the channel outputs $\mathbf{Y}^*(\phi)$, which is the result of passing $\mathbf{X}(\phi)$ through a TDC with deletion probability $\delta$. The crucial point to note is that, in contrast to a deletion channel, the output of a block-TDC \emph{contains commas between segments}. That is, we know exactly which output segment corresponds to which input block. How would one code for such a channel and achieve a rate approaching $\mathcal{I}$?
For this, we will assume that \begin{equation} \label{eq:N0} N_0=2^{n_0} \; , \end{equation} and that we can choose $N_0$ to be arbitrarily large. Let \begin{equation} \label{eq:Phi} \Phi = 2^{n_1} \end{equation} be the number of blocks we will transmit through the channel. Consider the following input distribution: each block $\mathbf{X}(\phi)$ will be distributed according to the input distribution that we have fixed at the start of this subsection, and the input blocks $\mathbf{X}(1), \mathbf{X}(2), \ldots, \mathbf{X}(\Phi)$ will be \emph{i.i.d.} In a nutshell, this suffices to achieve a coding rate of $\mathcal{I}$ with vanishing probability of error for the following two reasons. First, Theorem~\ref{theo:slowPolarization} shows weak polarization for each block and, in each block, we have the required fractions of high-entropy/low-entropy indices. Second, the independence between blocks implies that strong polarization will occur. We now back the above claim with a few more details. We denote the output of the encoder --- the concatenation of the above blocks --- by \begin{equation} \label{eq:vecx} \mathbf{X} = \mathbf{X}(1) \odot \mathbf{X}(2) \odot \cdots \odot \mathbf{X}(\Phi) \; . \end{equation} This output has length \begin{equation} \label{eq:NN0Phi} N = N_0 \cdot \Phi = 2^{n_0+n_1} = 2^n \; . \end{equation} We will use a sans-serif font to denote a vector whose elements are `blocks'. Thus, we will denote the partitioning of the above $\mathbf{X}$ into blocks of length $N_0$ by \begin{equation} \label{eq:bigvecx} \mathsf{X} = (\mathbf{X}(1), \mathbf{X}(2), \ldots, \mathbf{X}(\Phi)) \; . \end{equation} The corresponding output of the block-TDC is denoted \begin{equation} \label{eq:bigVecYDPT} \mathsf{Y}^* = (\mathbf{Y}^*(1),\mathbf{Y}^*(2),\ldots,\mathbf{Y}^*(\Phi)) \; . \end{equation} That is, $\mathsf{Y}^*$ is comprised of $\Phi$ distinguishable blocks --- it is \emph{not} simply the concatenation of the $\mathbf{Y}^*(\phi)$. 
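A block-TDC is straightforward to simulate: each symbol of each block is deleted independently with probability $\delta$, each surviving block is trimmed, and the block boundaries (the `commas') are retained. A minimal sketch, with hypothetical function names of our own choosing:

```python
import random

def tdc(block, delta, rng):
    """Pass one block through a deletion channel, then trim it."""
    kept = [s for s in block if rng.random() >= delta]
    out = "".join(kept)
    return out.strip("0")  # trimming: drop leading/trailing '0' symbols

def block_tdc(blocks, delta, seed=0):
    """Block-TDC: one trimmed output segment per input block,
    with the block boundaries preserved (a list, not a concatenation)."""
    rng = random.Random(seed)
    return [tdc(b, delta, rng) for b in blocks]

blocks = ["0110", "1001", "0000", "1111"]
segments = block_tdc(blocks, delta=0.3)
assert len(segments) == len(blocks)  # the 'commas' are kept
```

Returning a list of segments, rather than their concatenation, is what models the commas: the decoder knows which segment came from which block.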
The superscript `$*$' in $\mathsf{Y}^*$ suggests that the trimming operation is applied \emph{blockwise}. We first consider the polar transform of $\mathbf{X}(\phi)$, denoted\footnote{We reserve the letter $U$, commonly used to denote the result of a polar transform, for a related yet distinct definition that is yet to appear.} \begin{equation} \label{eq:Vphi} \mathbf{V}(\phi) = \mathcal{A}(\mathbf{X}(\phi)) \; , \end{equation} where $1 \leq \phi \leq \Phi$. Note that $\mathbf{V}(\phi)$ is a binary vector of length $N_0$, \[ \mathbf{V}(\phi) = (V_1(\phi),V_2(\phi),\ldots,V_{N_0}(\phi)) \; . \] Recall that $\mathbf{Y}^*(\phi)$ is the output corresponding to $\mathbf{X}(\phi)$, and note that since we have assumed that the $\mathbf{X}(\phi)$ are i.i.d., this must also hold for the triplets $(\mathbf{X}(\phi),\mathbf{V}(\phi),\mathbf{Y}^*(\phi))$, when ranging over $1 \leq \phi \leq \Phi$. For a fixed $1 \leq \phi \leq \Phi$ and a given $1 \leq i_0 \leq N_0$, consider the pair of entropies \begin{multlinecc} \label{eq:twoIdealizedEntropies} H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi) ,\mathbf{Y}^*(\phi)) \;\; \mbox{and} \onlysingle{\;\;\quad} \\ H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi))\; . \end{multlinecc} We now make two important observations. First, since we have already established that the $(\mathbf{X}(\phi),\mathbf{V}(\phi),\mathbf{Y}^*(\phi))$ are i.i.d.\ over $\phi$, we deduce that (\ref{eq:twoIdealizedEntropies}) is independent of $\phi$. Second, both entropies in (\ref{eq:twoIdealizedEntropies}) exhibit slow polarization, in the sense of Theorem~\ref{theo:slowPolarization}. That is, on one hand, we deduce that (\ref{eq:slowLowEntropy_TDC})$=$(\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_TDC})$=$(\ref{eq:slowHighEntropy_target}), if in both (\ref{eq:slowLowEntropy_TDC}) and (\ref{eq:slowHighEntropy_TDC}) we replace $U_i$, $U_1^{i-1}$, $\mathbf{Y}^*$, $n$ and $N$ by $V_{i_0}(\phi)$, $V_1^{i_0-1}(\phi)$, $\mathbf{Y}^*(\phi)$, $n_0$ and $N_0$, respectively.
These statements hold for all $\delta\in [0,1]$. For the special case of $\delta=1$, one gets a degenerate channel where $\mathbf{Y}^*(\phi)$ always equals the empty string. Thus, on the other hand, the same claim of (\ref{eq:slowLowEntropy_TDC})$=$(\ref{eq:slowLowEntropy_target}) and (\ref{eq:slowHighEntropy_TDC})$=$(\ref{eq:slowHighEntropy_target}), under the above substitutions, continues to hold, with $\mathbf{Y}$ and $\mathbf{Y}^*$ removed from these equations. Since the first entropy in (\ref{eq:twoIdealizedEntropies}) is always less than or equal to the second, we deduce from the above paragraph and the first half of Theorem~\ref{theo:slowPolarization} that for $\epsilon \in (0,1)$ fixed, the fraction of indices $i_0$ for which \begin{multlinecc*} H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi) ,\mathbf{Y}^*(\phi)) < \epsilon \quad \mbox{and} \onlysingle{\quad\quad} \\ H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi)) \geq \epsilon \end{multlinecc*} tends to \begin{multlinecc*} \left( 1 - \lim_{n_0 \to \infty} \frac{H(\mathbf{X}(\phi)|\mathbf{Y}(\phi))}{N_0} \right) \\ - \left( 1 - \lim_{n_0 \to \infty} \frac{H(\mathbf{X}(\phi))}{N_0} \right) = \mathcal{I} \; , \end{multlinecc*} as $n_0 \to \infty$. For simplicity of exposition, let us further restrict $\epsilon$ to $\epsilon \in (0,1/2)$. By both halves of Theorem~\ref{theo:slowPolarization}, we deduce that the fraction of indices $i_0$ for which \[ \epsilon \leq H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi)) \leq 1 - \epsilon \] vanishes. The conclusion is stated as a lemma, for future reference. \begin{lemm} \label{lemm:slowPolarizationStar} For $\epsilon \in (0,1/2)$ fixed, the fraction of indices $1 \leq i_0 \leq N_0$ for which \begin{multlinecc} \label{eq:goodIndexI} H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi) ,\mathbf{Y}^*(\phi)) < \epsilon \quad \mbox{and} \onlysingle{\quad\quad} \\ H(V_{i_0}(\phi)|V_1^{i_0-1}(\phi)) > 1 - \epsilon \end{multlinecc} tends to $\mathcal{I}$, as $n_0 \to \infty$, and is the same for every $1 \leq \phi \leq \Phi$.
\end{lemm} We now note that for a given $\phi$ and $i_0$, we have an efficient method of calculating the probabilities corresponding to (\ref{eq:goodIndexI}). Namely, this is achieved by using the base trellis defined for a TDC in Subsection~\ref{subsec:TrellisTDC}, applying a series of plus and minus polarization operations on it, according to the binary representation of $i_0-1$, and then invoking (\ref{eq:trellisConditionalProbCalc}). That is, the only thing stopping us from applying the Honda-Yamamoto scheme \cite{Honda_Yamamoto_2013} at this point is the fact that the above $\epsilon$ is fixed. Informally, we overcome the above problem as follows. Take $\epsilon$ `small' and $n_0$ as well as $n_1$ `large'. Consider a `good' index $i_0$. That is, an index $i_0$ for which (\ref{eq:goodIndexI}) holds. This will be the case for a fraction of indices `very close' to $\mathcal{I}$. Next, recall the definition of $\mathbf{X}$ in (\ref{eq:vecx}), and denote its polar transform as \[ \mathbf{U} = \mathcal{A}(\mathbf{X}) \; . \] Consider the subvector $U_{(i_0-1) \cdot \Phi + 1}^{i_0 \cdot \Phi}$. It is not hard to prove that \begin{equation} \label{eq:vecVSlice} U_{(i_0-1) \cdot \Phi + 1}^{i_0 \cdot \Phi} = \mathcal{A}((V_{i_0}(1),V_{i_0}(2),\ldots,V_{i_0}(\Phi))) \; . \end{equation} That is, the LHS of (\ref{eq:vecVSlice}) is gotten by applying the Ar\i{}kan\xspace\ transform to the vector $(V_{i_0}(1),V_{i_0}(2),\ldots,V_{i_0}(\Phi))$. Since each entry of this vector satisfies (\ref{eq:goodIndexI}), `almost all' indices $i$ of $\mathbf{U}$, where $(i_0-1) \cdot \Phi + 1 \leq i \leq i_0 \cdot \Phi$ are strongly polarized. 
That is, they satisfy \begin{multlinecc} \label{eq:goodIndexJ} Z(U_i|U_1^{i-1} ,\mathsf{Y}^*) < 2^{-n_1 \beta} \quad \mbox{and} \onlysingle{\quad\quad} \\ K(U_i|U_1^{i-1}) < 2^{-n_1 \beta} \end{multlinecc} where $Z$ and $K$ are the conditional Bhattacharyya parameter and the conditional total variation (see Definitions~\ref{defi:Z} and \ref{defi:K} in Appendix~\ref{sec:ZK}), $\beta < 1/2$ is some fixed constant, and $\mathsf{Y}^*$ is the block-TDC output vector defined in (\ref{eq:bigVecYDPT}). That is, the overall fraction of useful indices $1 \leq i \leq N_0 \Phi$ with respect to the Honda-Yamamoto scheme will be `very close' to $\mathcal{I}$, and the error of the scheme will approach $0$ at a rate of roughly $2^{-\sqrt{N_1}}$. The reader may not be surprised to learn that the above informal statements can be made rigorous and proven\footnote{Such a proof is not a straightforward adaptation of the ideas in \cite{Arikan_2009} and \cite{Arikan_Telatar_2009}. Namely, it requires the use of \cite[Lemma 40]{ShuvalTal:18a}, which we indeed invoke in the proof of Theorem~\ref{theo:main}.}. Indeed, this will be done as part of the proof of Theorem~\ref{theo:main}. However, one important point remains to be addressed. That is, the channel we will in fact be coding for is the deletion channel, and \emph{not} the block-TDC. Hence, in the above description, we have implicitly assumed a genie which has manufactured the punctuated vector $\mathsf{Y}^*$ for us. The purpose of the guard-bands, defined shortly, is to approximate such a genie in practice. Our actual coding scheme will be as follows. For the encoding step, we will first use the Honda-Yamamoto scheme with respect to the block-TDC. I.e., the information bits will be placed in indices $i$ of $\mathbf{U}$ for which (\ref{eq:goodIndexJ}) holds. The resulting codeword will be $\mathbf{X}$.
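As an aside, the identity (\ref{eq:vecVSlice}) is easy to check numerically. The sketch below is our own illustration and assumes one common recursive convention for the Ar\i{}kan\xspace\ transform (XOR of adjacent pairs, followed by the even-indexed entries); under a different but equivalent indexing convention the slice would be permuted accordingly.

```python
import random

def arikan(x):
    """Recursive polar transform over GF(2), one convention of several:
    A(x) = A(x_odd XOR x_even) concatenated with A(x_even)."""
    if len(x) == 1:
        return list(x)
    s = [x[2 * i] ^ x[2 * i + 1] for i in range(len(x) // 2)]
    t = [x[2 * i + 1] for i in range(len(x) // 2)]
    return arikan(s) + arikan(t)

random.seed(1)
N0, Phi = 8, 4                               # block length and block count
X = [[random.randint(0, 1) for _ in range(N0)] for _ in range(Phi)]
V = [arikan(b) for b in X]                   # blockwise transforms V(phi)
U = arikan([bit for blk in X for bit in blk])  # transform of the concatenation

for i0 in range(N0):                         # the slice identity, 0-indexed
    assert U[i0 * Phi:(i0 + 1) * Phi] == arikan([V[phi][i0] for phi in range(Phi)])
```

In words: the $i_0$-th slice of length $\Phi$ in $\mathbf{U}$ equals the transform of the vector collecting the $i_0$-th entry of each blockwise transform, which is exactly (\ref{eq:vecVSlice}).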
Then, we will add to $\mathbf{X}$ runs of `$0$' symbols in key locations, and transmit the resulting word (which will be longer than $\mathbf{X}$) on the deletion channel. On the decoder side, a preliminary step will be to deduce the punctuated vector $\mathsf{Y}^*$ from the received vector $\mathbf{Y}$. That is, we will remove the guard bands (and trim the $\mathbf{Y}(\phi)$ into $\mathbf{Y}^*(\phi)$ in the process), thus producing $\mathsf{Y}^*$. Then, the decoder will be applied on $\mathsf{Y}^*$ to yield $\mathbf{U}$, and thus the information bits. \subsection{Guard bands} In this subsection, we first describe how the guard bands are added to $\mathbf{X}$ on the encoder side. We then explain how the decoder deduces the punctuated vector $\mathsf{Y}^*$ from the received vector $\mathbf{Y}$. \begin{figure} \small \begin{equation*} \begin{array}{cccccccc} \mathbf{V}(1) & & \mathbf{V}(2) & & \mathbf{V}(\phi) & \cdots & \mathbf{V}(\Phi) \\ \xuparrow{0.7cm} \mathcal{A} & & \xuparrow{0.7cm} \mathcal{A} & & \xuparrow{0.7cm} \mathcal{A} && \xuparrow{0.7cm} \mathcal{A} \\[5mm] \mathbf{X}(1) & 00\ldots 0 & \mathbf{X}(2) & 00\ldots 0 & \mathbf{X}(\phi) & \ldots & \mathbf{X}(\Phi) \end{array} \end{equation*} \normalsize \caption{The $\Phi = N/N_0$ blocks, denoted $\mathbf{X}(1), \mathbf{X}(2), \ldots, \mathbf{X}(\Phi)$, have length $N_0=2^{n_0}$, are i.i.d., and each is distributed according to the regular hidden-Markov input distribution. Their polar transforms are $\mathbf{V}(1), \mathbf{V}(2), \ldots, \mathbf{V}(\Phi)$. An additional $n-n_0$ polarization steps (not shown) will be applied to $\mathbf{V}(1),\mathbf{V}(2),\ldots,\mathbf{V}(\Phi)$, resulting in $\mathbf{U}$. The transmitted codeword is gotten by separating consecutive $\mathbf{X}(\cdot)$ vectors by a `guard band'. That is, by a string of `$0$' symbols. The length of the guard bands is not constant. 
For example, the middle guard band is always the longest, while the first and last guard bands are always the shortest.} \label{fig:manyGuardBands} \end{figure} We start by defining how guard bands are added between the blocks $\mathbf{X}(1),\mathbf{X}(2),\ldots,\mathbf{X}(\Phi)$, see Figure~\ref{fig:manyGuardBands}. That is, we define how $\mathbf{X}$ is transformed into $g(\mathbf{X})$. This is done in a simple recursive manner. Informally, let $\mathbf{x}$ be a vector of length $2^n$. If this length is greater than the designated block-length $N_0$, we halve $\mathbf{x}$, add $\ell_n$ `$0$' symbols in the middle, and then apply $g$ recursively to each original half. Namely, for $\mathbf{x} = \vecx_{\mathrm{I}} \odot \vecx_{\mathrm{II}} \in \mathcal{X}^{2^n}$ with \[ \vecx_{\mathrm{I}} = x_1^{2^{n-1}} \in \mathcal{X}^{2^{n-1}} \; , \quad \vecx_{\mathrm{II}} = x_{2^{n-1}+1}^{2^n} \in \mathcal{X}^{2^{n-1}} \] being the first and second halves of $\mathbf{x}$, respectively, we define \begin{IEEEeqnarray}{rCl} \label{eq:guardBand} g(\mathbf{x}) &\triangleq & \begin{cases} \mathbf{x} & \text{if }n \leq n_0 \\ g(\vecx_{\mathrm{I}}) \odot \overbrace{00 \ldots 0}^{\ell_n} \odot g(\vecx_{\mathrm{II}}) & \text{if }n> n_0, \\ \end{cases} \\ \noalign{\noindent and\vspace{2\jot}} \ell_n & \triangleq & \floor{2^{(1-\xi) (n-1)}}, \label{eq:ln} \end{IEEEeqnarray} where $\xi \in (0,1/2)$ is a yet-to-be-specified `small' constant. The parameter $\xi$ controls the rate penalty of adding guard bands, on one hand, and the probability of the decoder successfully removing the guard bands, on the other hand. We will require that $n_0 > 1$, so that the inequality \begin{equation} \label{eq:lnSimpleInequality} \ell_n > 2^{(n-1)(1-\xi)-1} \end{equation} used later on will hold for all relevant $n$, i.e., for $n > n_0$. Note the above specifically implies that $\ell_n > 0$. 
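The recursion (\ref{eq:guardBand}) and the guard-band lengths (\ref{eq:ln}) translate directly into code. The following sketch (variable names are ours) builds the transmitted word from $\mathbf{X}$:

```python
import math

def ell(n, xi):
    """Guard-band length: floor(2^((1 - xi)(n - 1))), as in the text."""
    return int(math.floor(2 ** ((1 - xi) * (n - 1))))

def g(x, n0, xi):
    """Recursively insert guard bands: halve, pad the middle with ell(n, xi)
    '0' symbols, and recurse on each half until blocks of length 2^n0 remain."""
    n = len(x).bit_length() - 1          # len(x) = 2^n
    if n <= n0:
        return x
    half = len(x) // 2
    return g(x[:half], n0, xi) + "0" * ell(n, xi) + g(x[half:], n0, xi)

word = g("1" * 16, n0=2, xi=0.25)        # four blocks of length N0 = 4
# the middle guard band (n = 4) is longer than the two inner ones (n = 3)
assert word == "1111" + "0" * ell(3, 0.25) + "1111" \
    + "0" * ell(4, 0.25) + "1111" + "0" * ell(3, 0.25) + "1111"
```

Note how the guard-band lengths grow with the recursion level: the single middle band is the longest, matching the figure's description.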
We now explain how the guard bands are removed, from the received word $\mathbf{Y}$, in order to produce the punctuated sequence $\mathsf{Y}^*$ defined in (\ref{eq:bigVecYDPT}). Equivalently, we now show a procedure with the following outcome: for each block index $1 \leq \phi < \Phi$, we will produce the trimmed vector $\mathbf{Y}^*(\phi)$ corresponding to the block $\mathbf{X}(\phi)$. Before explaining how this is done, we first mention that our method has a small yet non-zero probability of failing. That is, there is a non-zero probability that our method will fail to produce $\mathsf{Y}^*$. This probability will be analyzed at a later stage. \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.3] \node at (-3.15,4.75) {$\mathbf{X}$}; \draw [very thick,draw=black] (-1.3,5) rectangle (0,4.5) node[pos=.5] {$\vecX_{\mathrm{I}}$}; \draw [very thick,draw=black] (0,5) rectangle (1.3,4.5) node[pos=.5] {$\vecX_{\mathrm{II}}$}; \node at (-3.15,3.75) {$\mathbf{G}$}; \draw [very thick,draw=black] (-2.5,4) rectangle (-0.5,3.5) node[pos=.5] {$\vecG_{\mathrm{I}}$}; \draw [very thick,draw=black] (-0.5,4) rectangle (0.5,3.5) node[pos=.5] {$\vecG_{\triangle}$}; \draw [very thick,draw=black] (0.5,4) rectangle (2.5,3.5) node[pos=.5] {$\vecG_{\mathrm{II}}$}; \draw[dashed] (-1.3,4.5) -- (-2.5,4); \draw[dashed] (1.3,4.5) -- (2.5,4); \draw[dashed] (0,4.5) -- (-0.5,4); \draw[dashed] (0,4.5) -- (0.5,4); \node at (-3.15,2.75) {$\mathbf{Y}$}; \draw [very thick,draw=black] (-2.2,3) rectangle (-0.60,2.5) node[pos=.5] {$\vecY_{\mathrm{I}}$}; \draw [very thick,draw=black] (-0.60,3) rectangle (0.30,2.5) node[pos=.5] {$\vecY_{\triangle}$}; \draw [very thick,draw=black] (0.30,3) rectangle (2.2,2.5) node[pos=.5] {$\vecY_{\mathrm{II}}$}; \draw[dashed] (-2.5,3.5) -- (-2.2,3); \draw[dashed] (2.5,3.5) -- (2.2,3); \draw[dashed] (-0.5,3.5) -- (-0.60,3); \draw[dashed] (0.5,3.5) -- (0.30,3); \node at (-3.15,1.75) {$\mathbf{Z} =\mathbf{Y}^*$}; \draw [very thick,draw=black] (-2.0,2) rectangle 
(-0.60,1.5) node[pos=.5] {$\vecZ_{\mathrm{I}}$}; \draw [very thick,draw=black] (-0.60,2) rectangle (0.30,1.5) node[pos=.5] {$\vecZ_{\triangle}$}; \draw [very thick,draw=black] (0.30,2) rectangle (2.0,1.5) node[pos=.5] {$\vecZ_{\mathrm{II}}$}; \draw[dashed] (-2.2,2.5) -- (-2.0,2); \draw[dashed] (2.2,2.5) -- (2.0,2); \draw[dashed] (-0.60,2.5) -- (-0.6,2); \draw[dashed] (0.30,2.5) -- (0.3,2); \end{tikzpicture} \end{center} \caption{The random variables $\mathbf{X}$, $\mathbf{G}$, $\mathbf{Y}$, and $\mathbf{Z}$.} \label{fig:XGYZ} \end{figure} Our procedure for producing $\mathsf{Y}^*$ will have a preliminary step, and will then involve a recursion. The preliminary step is simple: we trim the received vector $\mathbf{Y}$ of leading and trailing zeros to produce $\mathbf{Y}^*$. We stress that, generally, $\mathbf{Y}^*$ does \emph{not} equal the punctuated sequence $\mathsf{Y}^*$ defined in (\ref{eq:bigVecYDPT}). In order to introduce notation required later on, let us now define the above operation more verbosely. Let $\vecX_{\mathrm{I}}$ and $\vecX_{\mathrm{II}}$ be the left and right halves of $\mathbf{X}$, see Figure~\ref{fig:XGYZ}. Thus, the transmitted word is $g(\mathbf{X}) = \vecG_{\mathrm{I}} \odot \vecG_{\triangle} \odot \vecG_{\mathrm{II}}$, where $\vecG_{\mathrm{I}} = g(\vecX_{\mathrm{I}})$, $\vecG_{\mathrm{II}} = g(\vecX_{\mathrm{II}})$, and $\vecG_{\triangle}$ is the middle guard band of length $\ell_n$, where $n$ is $\log_2$ of the length of $\mathbf{X}$. Clearly, $\vecG_{\mathrm{I}}$ and $\vecG_{\mathrm{II}}$ are of equal length. Denote the parts of $\mathbf{Y}$ corresponding to $\vecG_{\mathrm{I}}$, $\vecG_{\triangle}$ and $\vecG_{\mathrm{II}}$ by $\vecY_{\mathrm{I}}$, $\vecY_{\triangle}$, and $\vecY_{\mathrm{II}}$, respectively. Note that at this stage, the decoder sees $\mathbf{Y}$, but can only make an informed guess as to what parts of $\mathbf{Y}$ constitute $\vecY_{\mathrm{I}}$, $\vecY_{\triangle}$, and $\vecY_{\mathrm{II}}$. 
We remove from the received word $\mathbf{Y}$ all leading and trailing `$0$' symbols and denote the resulting vector $\mathbf{Z} = \mathbf{Y}^*$. We denote the parts of $\mathbf{Z}$ corresponding to $\vecY_{\mathrm{I}}$, $\vecY_{\triangle}$, and $\vecY_{\mathrm{II}}$ by $\vecZ_{\mathrm{I}}$, $\vecZ_{\triangle}$, and $\vecZ_{\mathrm{II}}$, respectively. In order to build up the reader's intuition, we note that in a `typical case', $\vecZ_{\mathrm{I}}$ is $\vecY_{\mathrm{I}}$ after the leading zeros have been removed, $\vecZ_{\mathrm{II}}$ is $\vecY_{\mathrm{II}}$ after the trailing zeros have been removed, and $\vecZ_{\triangle}$ is simply $\vecY_{\triangle}$. As explained, the production of $\mathbf{Z}$ from $\mathbf{Y}$ constitutes the preliminary step of our method. We will now specify how the punctuated vector $\mathsf{Y}^*$ is recursively produced from $\mathbf{Z}$. For the base case, note that if $\Phi=1$, then $\mathsf{Y}^*$ is simply $\mathbf{Z}$. Our procedure hinges on the assumption that the middle symbol of $\mathbf{Z}$ originated from the guard band. Specifically, we will assume that the middle index of $\mathbf{Z}$ (rounding down) belongs to $\vecZ_{\triangle}$. As explained, there is a probability of this assumption being false, and this will be analyzed at a later stage. For now, consider the case in which the assumption holds. In this case, the crucial observation is that $\vecY_{\mathrm{I}}^*$ equals the first half of $\mathbf{Z}$, trimmed, while $\vecY_{\mathrm{II}}^*$ equals the second half of $\mathbf{Z}$, trimmed. Namely, denoting by $|\mathbf{Z}|$ the length of $\mathbf{Z}$, we have \begin{IEEEeqnarray}{rCl} \vecY_{\mathrm{I}}^* &=& (Z_1,Z_2,\ldots, Z_{\lfloor{|\mathbf{Z}|/2}\rfloor})^* \; , \label{eq:YIFromZ} \\ \vecY_{\mathrm{II}}^* &=& (Z_{\lfloor{|\mathbf{Z}|/2}\rfloor+1} , Z_{\lfloor{|\mathbf{Z}|/2}\rfloor+2}, \ldots, Z_{|\mathbf{Z}|})^* \label{eq:YIIFromZ}\; , \end{IEEEeqnarray} since the guard band $\vecZ_{\triangle}$ has been `trimmed out'.
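In code, the preliminary trimming and the recursive split in (\ref{eq:YIFromZ}) and (\ref{eq:YIIFromZ}) may be sketched as follows (a hypothetical Python helper of our own, with $\Phi$ assumed to be a power of two; as discussed, it can return wrong blocks when the middle-index assumption fails):

```python
def trim(y):
    # The * operation: remove leading and trailing '0' symbols.
    lo, hi = 0, len(y)
    while lo < hi and y[lo] == 0:
        lo += 1
    while hi > lo and y[hi - 1] == 0:
        hi -= 1
    return y[lo:hi]

def split_blocks(y, num_blocks):
    # num_blocks plays the role of Phi; assumed to be a power of two.
    z = trim(y)
    if num_blocks == 1:
        return [z]
    mid = len(z) // 2  # assumed to fall inside the middle guard band
    return (split_blocks(z[:mid], num_blocks // 2)
            + split_blocks(z[mid:], num_blocks // 2))
```

The recursion depth is $\log_2 \Phi = n - n_0$, matching the discussion below.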
Thus, we have reduced our original problem of producing $\mathsf{Y}^*$ from $\mathbf{Y}^*$ to two equivalent problems, each half the size of the original: find the first half of $\mathsf{Y}^*$, namely $\mathbf{Y}^*(1), \mathbf{Y}^*(2),\ldots,\mathbf{Y}^*(\Phi/2)$, from $\vecY_{\mathrm{I}}^*$ and the second half of $\mathsf{Y}^*$ from $\vecY_{\mathrm{II}}^*$. Hence, we continue recursively: we apply our method first to the RHS of (\ref{eq:YIFromZ}) and then to the RHS of (\ref{eq:YIIFromZ}). If, during all these recursive invocations, our assumptions that the middle index belongs to the middle guard band were indeed correct, then we will have succeeded in producing $\mathsf{Y}^*$. Note that the recursion depth is $n-n_0$. There are two points that must be addressed. First, recall that adding guard bands makes the transmitted word longer. We must show that this has a vanishingly small effect on the rate of our scheme. Second, we must show that our scheme of producing $\mathsf{Y}^*$ from $\mathbf{Y}$ has a vanishingly small probability of failing. Once this is done, the proof of Theorem~\ref{theo:main} will follow easily. \subsection{Auxiliary lemmas} \label{sec:auxiliaryLemmas} In this section, we state and prove a number of lemmas key to the proof of Theorem~\ref{theo:main}. In the sequel, we will choose a fixed $\nu \in (0,\frac{1}{3}]$ and set $n_0 = \lfloor \nu n \rfloor$. The parameter $\nu$ will trade off reliability against decoding complexity (e.g., see Theorem~\ref{theo:main}). Recall that both $\xi$, the parameter through which $\ell_n$ is defined in (\ref{eq:ln}), and $\nu$ are positive and fixed (not functions of $n$). Thus, the following lemma ensures that the rate penalty of adding guard bands is negligible as $n \to \infty$. \begin{lemm} \label{lemm:blockLength} Let $\mathbf{x}$ be a vector of length $|\mathbf{x}| = 2^n$.
Then, \begin{equation} \label{eq:vecgLengthBounds} |\mathbf{x}| \leq |g(\mathbf{x})| < \left( 1 + \frac{2^{-(\xi \cdot n_0 + 1)}}{1-2^{-\xi}} \right) \cdot |\mathbf{x}| \; . \end{equation} \end{lemm} \begin{IEEEproof} From the definition of $g(\mathbf{x})$, induction shows \begin{equation} \label{eq:vecgLengthRecursion} |g(\mathbf{x})| = \begin{cases} 2^n & \mbox{if $n \leq n_0$} \\ 2^{n} + \sum_{t = n_0 + 1}^{n} 2^{n - t} \cdot \ell_t & \mbox{otherwise}. \end{cases} \end{equation} Thus, the lower bound in (\ref{eq:vecgLengthBounds}) is trivial, since $|\mathbf{x}| = 2^n$, and every term in the sum in (\ref{eq:vecgLengthRecursion}) is non-negative, by (\ref{eq:ln}). The upper bound in (\ref{eq:vecgLengthBounds}) is trivially true for $n \leq n_0$. For the case $n > n_0$, we have that \begin{IEEEeqnarray*}{rCl} |g(\mathbf{x})|/|\mathbf{x}| & \eqann{a} & 1 + \sum_{t = n_0 + 1}^{n} 2^{-t} \cdot \ell_t \\ & \eqann[\leq]{b} & 1 + \sum_{t = n_0 + 1}^{n} 2^{-t} \cdot 2^{(1-\xi) \cdot (t-1)} \\ & = & 1 + \sum_{t = n_0 + 1}^{n} 2^{-\xi \cdot (t-1)-1} \\ & < & 1 + \sum_{t = n_0 + 1}^{\infty} 2^{-\xi \cdot (t-1)-1} \\ & \eqann{c} & 1 + \frac{2^{-(\xi \cdot n_0 + 1)}}{1-2^{-\xi}} \; , \end{IEEEeqnarray*} where \eqannref{a} follows from $|\mathbf{x}| = 2^n$ and (\ref{eq:vecgLengthRecursion}); \eqannref{b} follows from (\ref{eq:ln}); \eqannref{c} is simply the sum of a geometric series. \end{IEEEproof} A key idea enabling the `genie' described earlier is the recursive processing of each half of the received sequence. This processing will be successful if the middle symbol of the received sequence is a `$0$' originating from the outermost guard band, as per the recursive definition in (\ref{eq:guardBand}). The following lemma shows that this is indeed the case, with very high probability. \begin{lemm} \label{lemm:guardBandErrorBound} Let the guard-band length $\ell_n$ in (\ref{eq:ln}) use a fixed $\xi \in (0,1/2)$.
Fix the channel deletion probability $\delta$ and a regular hidden-Markov input distribution. Let $n > n_0 > 1$ and let $\mathbf{X}$ be a random vector of length $N = 2^n$ distributed according to the modified input distribution described above: i.i.d.\ blocks of length $N_0 = 2^{n_0}$, each drawn from the specified input distribution. Denote by $\mathbf{Y}$ the result of transmitting $g(\mathbf{X})$ through the deletion channel. Then, there exists a constant $\theta > 0$, dependent only on the input distribution and the deletion probability, such that, for $n_0$ large enough, the probability that the middle symbol of $\mathbf{Y}^*$ (rounding down) is not a `$0$' from the outer guard band of length $\ell_n$ is at most $ 2^{-\theta \cdot 2^{(1-2\xi) n_0}}. $ \end{lemm} \begin{IEEEproof} Let $\mathbf{G} = g(\mathbf{X})$ (see Fig.~\ref{fig:XGYZ}). Recall that we denote the first and second halves of $\mathbf{X}$ by $\vecX_{\mathrm{I}}$ and $\vecX_{\mathrm{II}}$, respectively. Let $\vecG_{\mathrm{I}} = g(\vecX_{\mathrm{I}})$ and $\vecG_{\mathrm{II}} = g(\vecX_{\mathrm{II}})$, and denote by $\vecG_{\triangle}$ the guard band consisting of $\ell_n$ `$0$' symbols between $\vecG_{\mathrm{I}}$ and $\vecG_{\mathrm{II}}$. Hence, by (\ref{eq:guardBand}), \[ \mathbf{G} = \vecG_{\mathrm{I}} \odot \vecG_{\triangle} \odot \vecG_{\mathrm{II}} \; . \] Denote by $\mathbf{Y}$ the (untrimmed) result of passing $\mathbf{G}$ through the deletion channel. Let $\vecY_{\mathrm{I}}$, $\vecY_{\mathrm{II}}$, and $\vecY_{\triangle}$ be the parts of $\mathbf{Y}$ corresponding to $\vecG_{\mathrm{I}}$, $\vecG_{\mathrm{II}}$, and $\vecG_{\triangle}$, respectively. Let $\mathbf{Z} = \mathbf{Y}^*$ be the trimmed $\mathbf{Y}$. Define $\vecZ_{\mathrm{I}}$, $\vecZ_{\mathrm{II}}$, and $\vecZ_{\triangle}$ as the parts of $\mathbf{Z}$ corresponding to $\vecG_{\mathrm{I}}$, $\vecG_{\mathrm{II}}$, and $\vecG_{\triangle}$, respectively.
For $\mathbf{Z} = (Z_1,Z_2,\ldots,Z_t)$ with $t\geq 1$, the middle index of $\mathbf{Z}$ (rounding down) is $s=\floor{(t+1)/2}$. A sufficient condition for $Z_s$ belonging to $\vecZ_{\triangle}$ is \begin{equation} \label{eq:threePartsCondition} |\vecZ_{\mathrm{I}}| < |\vecZ_{\triangle}|+|\vecZ_{\mathrm{II}}| \; , \quad |\vecZ_{\mathrm{II}}| < |\vecZ_{\mathrm{I}}| + |\vecZ_{\triangle}| \; . \end{equation} To see that this is sufficient, we observe that $|\vecZ_{\mathrm{I}}| < |\vecZ_{\triangle}|+|\vecZ_{\mathrm{II}}|$ implies that the middle index does not fall in $\vecZ_{\mathrm{I}}$ because then \begin{align*} \floor{(|\mathbf{Z}|+1)/2} &= \floor{(|\vecZ_{\mathrm{I}}|+ |\vecZ_{\triangle}|+|\vecZ_{\mathrm{II}}|+1)/2} \\ &\geq \floor{(|\vecZ_{\mathrm{I}}|+ |\vecZ_{\mathrm{I}}|+2)/2} = |\vecZ_{\mathrm{I}}| + 1. \end{align*} Similarly, if $|\vecZ_{\mathrm{II}}| < |\vecZ_{\mathrm{I}}| + |\vecZ_{\triangle}|$, then the middle index does not fall in $\vecZ_{\mathrm{II}}$ because then \begin{align*} \lfloor (|\mathbf{Z}| &+1)/2 \rfloor = \floor{(|\vecZ_{\mathrm{I}}|+ |\vecZ_{\triangle}|+|\vecZ_{\mathrm{II}}|+1)/2} \\ &\leq \floor{(|\vecZ_{\mathrm{I}}|+ |\vecZ_{\triangle}|+|\vecZ_{\mathrm{I}}| + |\vecZ_{\triangle}|)/2} = |\vecZ_{\mathrm{I}}|+ |\vecZ_{\triangle}|. \end{align*} Now, we will analyze the probability of (\ref{eq:threePartsCondition}). Denote by $\alpha$, $\beta$, and $\gamma$ the following length differences between the three parts of $\mathbf{G}$ and the three corresponding parts of $\mathbf{Y}$, \begin{IEEEeqnarray*}{rCl} \alpha & = &|\vecG_{\mathrm{I}}| - |\vecY_{\mathrm{I}}| \; , \\ \beta & = & |\vecG_{\triangle}| - |\vecY_{\triangle}| \; , \\ \gamma & = & |\vecG_{\mathrm{II}}| - |\vecY_{\mathrm{II}}| \; . 
\end{IEEEeqnarray*} Also, denote by $\alpha'$, $\beta'$, and $\gamma'$ the length differences resulting from trimming, \begin{IEEEeqnarray*}{rCl} \alpha' & = &|\vecY_{\mathrm{I}}| - |\vecZ_{\mathrm{I}}| \; , \\ \beta' & = & |\vecY_{\triangle}| - |\vecZ_{\triangle}| \; ,\\ \gamma' & = & |\vecY_{\mathrm{II}}| - |\vecZ_{\mathrm{II}}| \; . \end{IEEEeqnarray*} Suppose that the trimming on both sides stopped short of the guard band. In this case, $\beta'=0$. Since $|\vecG_{\mathrm{I}}| = |\vecG_{\mathrm{II}}|$ and $|\vecG_{\triangle}| = \ell_n$, condition (\ref{eq:threePartsCondition}) would reduce to \begin{IEEEeqnarray}{rCl} \alpha + \alpha' &<& \gamma + \gamma' + \ell_n - \beta \; , \label{eq:firstAlphaBetaGammaCondition} \\ \gamma + \gamma' &<& \alpha + \alpha' + \ell_n - \beta \; . \label{eq:secondAlphaBetaGammaCondition} \end{IEEEeqnarray} Our aim is to show that, with very high probability, both (\ref{eq:firstAlphaBetaGammaCondition}) and (\ref{eq:secondAlphaBetaGammaCondition}) hold, as well as the assumption leading to their formulation. Recall that $\delta$ is the channel deletion probability and let \begin{equation} \label{eq:ellHat} \hat{\ell} = \ell_n \cdot (1-\delta)/2 \; . 
\end{equation} We define the following `good' events on the random variables $\alpha$, $\alpha'$, $\beta$, $\beta'$, $\gamma$, and $\gamma'$: \begin{IEEEeqnarray}{rCl} A\phantom{'}: & \quad & \delta |\vecG_{\mathrm{I}}| - \hat{\ell}/4 < \alpha < \delta |\vecG_{\mathrm{I}}| + \hat{\ell}/4 \label{eq:A} \\ A': & \quad & 0 \leq \alpha' < \hat{\ell}/4 \label{eq:APrime} \\ B\phantom{'}: & \quad & 0 \leq \beta < \delta \cdot \ell_n + \hat{\ell} \\ B': & \quad & \beta' = 0 \label{eq:BPrime}\\ C\phantom{'}: & \quad & \delta |\vecG_{\mathrm{II}}| - \hat{\ell}/4 < \gamma < \delta |\vecG_{\mathrm{II}}| + \hat{\ell}/4 \label{eq:C}\\ C': & \quad & 0 \leq \gamma' < \hat{\ell}/4 \label{eq:CPrime} \end{IEEEeqnarray} First, we note that the total number of symbols deleted or trimmed from $\vecG_{\mathrm{I}}$ is given by $|\vecG_{\mathrm{I}}|-|\vecZ_{\mathrm{I}}| = \alpha+\alpha'$. If $A$ and $A'$ hold, then this is bounded by \begin{IEEEeqnarray}{rCl} \label{eq:alpha_plus_alphap} \alpha + \alpha' & < & \delta |\vecG_{\mathrm{I}}| + \hat{\ell}/4 + \hat{\ell}/4 \IEEEnonumber \\ & = & \delta |\vecG_{\mathrm{I}}| + \hat{\ell}/2 \label{eq:alphaPlusAlphaPrimeBound} \;. \end{IEEEeqnarray} By (\ref{eq:vecgLengthBounds}), $|\vecG_{\mathrm{I}}| = 2^{n-1} + t$, where $t \geq 0$. We now show that if $A$ and $A'$ hold, then $\alpha+\alpha' < |\vecG_{\mathrm{I}}|$. Indeed, by (\ref{eq:ln}) and (\ref{eq:ellHat}), \begin{IEEEeqnarray*}{rCl} \delta |\vecG_{\mathrm{I}}| + \hat{\ell}/2 & < & \delta |\vecG_{\mathrm{I}}| + \hat{\ell} \\ &= & \delta (2^{n-1}+t) + 2^{-1} (1-\delta) \floor{2^{(1-\xi)(n-1)}} \\ & < & \delta (2^{n-1}+t) + (1-\delta) 2^{n-2} \\ & = & \delta 2^{n-1} + (1-\delta) 2^{n-2} + \delta t \\ & < & 2^{n-1} + \delta t \\ & \leq & 2^{n-1} + t = |\vecG_{\mathrm{I}}|\; . \end{IEEEeqnarray*} The analogous claim also holds for $C$, $C'$, and $\vecG_{\mathrm{II}}$.
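Incidentally, the sufficiency of (\ref{eq:threePartsCondition}) established earlier is easy to confirm exhaustively for small part lengths; a quick illustrative check of our own:

```python
# (a, b, c) stand for |Z_I|, |Z_middle|, |Z_II|; the middle index
# s = floor((t+1)/2) of a vector of length t = a + b + c must fall
# strictly inside the middle part whenever both inequalities of
# (eq:threePartsCondition) hold.
for a in range(0, 30):
    for b in range(1, 30):
        for c in range(0, 30):
            if a < b + c and c < a + b:
                s = (a + b + c + 1) // 2
                assert a < s <= a + b, (a, b, c)
print("middle index always falls inside the middle part")
```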
Thus, if $A$, $A'$, $C$, and $C'$ hold, then some parts of $\vecG_{\mathrm{I}}$ and $\vecG_{\mathrm{II}}$ must remain in $\vecZ_{\mathrm{I}}$ and $\vecZ_{\mathrm{II}}$ after deletion and trimming. Hence, the trimming has stopped short of the guard band, which implies $B'$. If, in addition, $B$ occurs, then both (\ref{eq:firstAlphaBetaGammaCondition}) and (\ref{eq:secondAlphaBetaGammaCondition}) must also hold. To verify that (\ref{eq:firstAlphaBetaGammaCondition}) holds, note that \begin{IEEEeqnarray*}{rCl} \gamma + \gamma' + \ell_n - \beta & \eqann[>]{a} & \delta |\vecG_{\mathrm{II}}| - \hat{\ell}/4 + \ell_n - \delta \cdot \ell_n - \hat{\ell} \\ & = & \delta |\vecG_{\mathrm{II}}| - \hat{\ell}/4 + (1-\delta)\ell_n - \hat{\ell}\\ & \eqann{b} & \delta |\vecG_{\mathrm{II}}| - \hat{\ell}/4 + 2\hat{\ell} - \hat{\ell} \\ & = & \delta |\vecG_{\mathrm{II}}| + 3\hat{\ell}/4 \\ & \eqann[>]{c} & \delta |\vecG_{\mathrm{II}}| + \hat{\ell}/2 \; , \end{IEEEeqnarray*} where \eqannref{a} follows from the bound on $\beta$ in event $B$, together with (\ref{eq:C}) and (\ref{eq:CPrime}); \eqannref{b} follows from (\ref{eq:ellHat}); \eqannref{c} follows since $\ell_n$ is positive, by (\ref{eq:lnSimpleInequality}), and thus so is $\hat{\ell}$, by (\ref{eq:ellHat}). Next, observe that $|\vecG_{\mathrm{I}}| = |\vecG_{\mathrm{II}}|$, and apply (\ref{eq:alphaPlusAlphaPrimeBound}). The proof of (\ref{eq:secondAlphaBetaGammaCondition}) is the same except that the upper and lower bounds are swapped for $\alpha+\alpha'$ and $\gamma+\gamma'$. To recap, the occurrence of all the `good' events in (\ref{eq:A})--(\ref{eq:CPrime}) implies that the middle index falls inside $\vecZ_{\triangle}$. Hence, the next step is to show that each of the above events occurs with very high probability, if $n$ is large enough. We now recall Hoeffding's bound \cite[Theorem 2]{Hoeffding:63p}\cite[proof of Lemma 4.13]{MitzenmacherUpfal:17b} and apply it to the deletion channel with deletion probability $\delta$.
Namely, let $D$ be a random variable equal to the number of deletions after $N$ channel uses. Hence, $E[D] = \delta N$, and for $t \geq 0$ we have by Hoeffding's bound that \begin{IEEEeqnarray}{rCl} \Pr ( D \geq \delta N + t ) &\leq& e^{-2t^2/N} \; , \label{eq:HoefPlus} \\ \Pr ( D \leq \delta N - t ) &\leq& e^{-2t^2/N} \; . \label{eq:HoefMinus} \end{IEEEeqnarray} Recalling that $\xi > 0$, we now require that $n_0$ be large enough that the bracketed term in (\ref{eq:vecgLengthBounds}) is at most $2$. That is, we assume that $n_0$ is large enough such that, for $n > n_0$, we have \begin{equation} \label{eq:GINotBig} |\vecG_{\mathrm{I}}| \leq 2 \cdot 2^{n-1} \; . \end{equation} Applying both (\ref{eq:HoefPlus}) and (\ref{eq:HoefMinus}), we deduce that, for $n > n_0$, we have \begin{IEEEeqnarray}{rCl} 1- \Pr (A) &\leq& 2e^{-2(\hat{\ell}/4)^2/|\vecG_{\mathrm{I}}|} \nonumber \\ & = & 2e^{-2(\ell_n(1-\delta)/8)^2/|\vecG_{\mathrm{I}}|} \nonumber \\ & \eqann[<]{a} & 2e^{-2(2^{(n-1)\cdot(1-\xi)-1}(1-\delta)/8)^2/|\vecG_{\mathrm{I}}|} \nonumber \\ & \eqann[\leq]{b} & 2e^{-2(2^{(n-1)\cdot(1-\xi)-1}(1-\delta)/8)^2/(2 \cdot 2^{n-1})} \nonumber \\ & = & 2e^{-\left(\frac{(1-\delta)^2}{256}\right) \cdot 2^{(n-1)(1-2\xi)}} \nonumber \\ & \eqann[\leq]{c} & 2e^{-\left(\frac{(1-\delta)^2}{256}\right) \cdot 2^{n_0 \cdot (1-2\xi)}} \; , \label{eq:PA} \end{IEEEeqnarray} where \eqannref{a} follows from (\ref{eq:lnSimpleInequality}); \eqannref{b} holds by (\ref{eq:GINotBig}); and \eqannref{c} follows from $n > n_0$. Exactly the same bound applies to $1-\Pr (C)$. 
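As a small numerical sanity check of (\ref{eq:HoefPlus}), one can simulate the deletion count directly; the parameters below are arbitrary choices of ours.

```python
import math
import random

random.seed(0)
N, delta, t = 1000, 0.3, 100
# Hoeffding upper bound on Pr(D >= delta*N + t)
bound = math.exp(-2 * t * t / N)

trials = 2000
exceed = sum(
    sum(random.random() < delta for _ in range(N)) >= delta * N + t
    for _ in range(trials)
)
print(exceed / trials, "empirical vs. Hoeffding bound", bound)
```

Here the bound is $e^{-20}$, and indeed none of the simulated trials deviates by $t$ deletions above the mean.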
For $\Pr (B)$, we again use (\ref{eq:HoefPlus}) to deduce that \begin{IEEEeqnarray}{rCl} 1-\Pr (B) & \leq & e^{-2 \hat{\ell}^2/\ell_n} \nonumber \\ & \eqann{a} & e^{-2\left( \frac{\ell_n(1-\delta)}{2} \right)^2/\ell_n} \nonumber \\ & = & e^{-2 \left( \frac{(1-\delta)}{2} \right)^2 \cdot \ell_n } \nonumber \\ & \eqann[<]{b} & e^{-2 \left( \frac{(1-\delta)}{2} \right)^2 \cdot 2^{(n-1)(1-\xi) - 1 } } \nonumber \\ & = & e^{- \left( \frac{(1-\delta)^2}{4} \right) \cdot 2^{(n-1)(1-\xi) }} \nonumber \\ & \eqann[\leq]{c} & e^{- \left( \frac{(1-\delta)^2}{4} \right) \cdot 2^{n_0 \cdot (1-\xi) }} \; , \label{eq:PB} \end{IEEEeqnarray} where \eqannref{a} follows from (\ref{eq:ellHat}); \eqannref{b} follows from (\ref{eq:lnSimpleInequality}); and \eqannref{c} holds because $n > n_0$. We now bound $1-\Pr (A' \cap C')$ from above. Consider $\vecG_{\mathrm{I}}$ and $\vecY_{\mathrm{I}}$ first. Recall that by the recursive definition of $g$ in (\ref{eq:guardBand}), the prefix of length $N_0 = 2^{n_0}$ of $\vecG_{\mathrm{I}}$ is distributed according to the underlying regular hidden-Markov input distribution (it does not contain a guard band). Denote this prefix as $X_1,X_2,\ldots,X_{N_0}$, and denote the state of the process at time $0$ as $S_0$. Since our input distribution is not degenerate, there exists an integer $\tau > 0$ and a probability $0 < p < 1$ such that for any $s \in \mathcal{S}$, \begin{equation} \label{eq:pnonzero} \Pr \big((X_1,X_2,\ldots,X_\tau) = (0,0,\ldots,0) | S_0 = s\big) < p \; . \end{equation} Let \begin{equation} \label{eq:ltilde} \tilde{\ell} = \ell_{n_0+1} \cdot (1-\delta)/2 \; . \end{equation} Since $n > n_0$, we have by (\ref{eq:ln}) and (\ref{eq:ellHat}) that $\tilde{\ell} \leq \hat{\ell}$ and that \[ \tilde{\ell}/4 < 2^{n_0} \; . \] Let \[ \rho = \tau \cdot \left\lfloor \frac{ \tilde{\ell}/4}{\tau} \right\rfloor \; , \] and partition $X_1,X_2,\ldots, X_{\rho}$ into consecutive segments of length $\tau$.
Then, we define event $A''$ to occur if there exists a segment that is not an all-zero vector of length $\tau$, and its first non-zero entry has not been deleted. We define $C''$ as the analogous event, with respect to $\vecG_{\mathrm{II}}$ and $\vecY_{\mathrm{II}}$, the only difference being that we are now considering the length $\rho$ suffix of $\vecX_{\mathrm{II}}$, and considering the last non-zero entry of a segment. By construction, if $A''$ and $C''$ hold, then $A'$ and $C'$ must hold. That is, if event $A''$ occurs, then the number of symbols trimmed from the left of $\vecG_{\mathrm{I}}$ is strictly less than $\tilde{\ell}/4$, since the above non-zero non-deleted symbol is not trimmed, and this assures that the ``trimming from the left'' stops before it. A similar claim holds with respect to $C''$. Thus, $1-\Pr (A' \cap C') \leq 1 - \Pr(A'' \cap C'')$. Since (\ref{eq:pnonzero}) holds for all $s \in \mathcal{S}$, we have by the Markov property that \begin{equation} \label{eq:PAdoublePrime} 1-\Pr (A'') < \big( 1 - (1-p)(1-\delta) \big)^{\rho/\tau} \; . \end{equation} Indeed, if $A''$ does not hold, this means that we have ``failed'' on each of the $\rho/\tau$ blocks, in the sense that each such block was either all-zero, or its first non-zero symbol was deleted. Since the probability of ``success'' conditioned on any given string of past failures is always greater than $(1-p)(1-\delta)$, the above follows. Define \[ \zeta = -\log_e \big( 1 - (1-p)(1-\delta) \big) \; , \] and note that $\zeta > 0$. Next, we bound $\rho$ as \begin{IEEEeqnarray*}{rCl} \rho & > & \tilde{\ell}/4 - \tau \\ & = & \ell_{n_0+1} \cdot (1-\delta)/8 - \tau\\ & > & \left( 2^{(1-\xi) \cdot n_0 - 1}\right) \cdot (1-\delta)/8 - \tau , \end{IEEEeqnarray*} where the second inequality follows from (\ref{eq:lnSimpleInequality}). Thus, \[ 1-\Pr (A'') < e^{- \frac{\zeta}{\tau} \left( \left(2^{(1-\xi) \cdot n_0 - 1}\right) \cdot(1-\delta)/8 - \tau\right)} \; . 
\] Of course, exactly the same bound holds for $1-\Pr (C'')$. Hence, by the union bound, and recalling that $A'' \cap C''$ implies $A' \cap C'$, we have that \begin{equation} \label{eq:PACprime} 1-\Pr (A' \cap C') < 2 e^{- \frac{\zeta}{\tau} \left( \left(2^{(1-\xi) \cdot n_0 - 1}\right) \cdot(1-\delta)/8 - \tau\right)} \; . \end{equation} Putting (\ref{eq:PA}), (\ref{eq:PB}), and (\ref{eq:PACprime}) together and applying the union bound proves the lemma. \end{IEEEproof} We conclude this section with the proof of our main theorem. Note that both the encoding and decoding schemes are specified in the proof. \begin{IEEEproof}[Proof of Theorem~\ref{theo:main}] Our proof is divided into two parts. In the first part, we consider the `idealized' random vectors $\mathbf{X}$ and $\mathbf{Y}$. That is, $\mathbf{X}$ is drawn from the probability distribution defined in Lemma~\ref{lemm:guardBandErrorBound} (there is no encoding of data) and $\mathbf{Y}$ is the result of transmitting $g(\mathbf{X})$ through our deletion channel. We will show, using the previously proven lemmas, that the rate penalty of expanding $\mathbf{X}$ to $g(\mathbf{X})$ is negligible and the probability of deducing $\mathsf{Y}^*$ from $\mathbf{Y}$ is very high. We conclude the first part by discussing the polarization of $\mathbf{U} = \mathcal{A}(\mathbf{X})$. In the second part of the proof, we consider the actual case at hand. That is, we show how encoding and decoding are carried out, discuss the encoding and decoding complexity, prove that the rate of our coding scheme approaches the information rate $\mathcal{I}$, and prove that the probability of misdecoding tends to $0$. Recall that $0 < \nu' < \nu \leq 1/3$ are fixed parameters. We let \begin{equation} \label{eq:n0} n_0 = \floor{\nu n} \end{equation} and \begin{equation} \label{eq:nuDoublPrime} \nu'' = \frac{\nu + \nu'}{2} \; , \end{equation} implying that \begin{equation} \label{eq:trainOfNuInequalites} 0 < \nu' < \nu'' < \nu \leq \frac{1}{3}
\end{equation} Then, set $\xi$ for the guard-band length $\ell_n$ defined in (\ref{eq:ln}) to \begin{equation} \label{eq:epsilonFromGamma} \xi = \frac{1-\frac{1+\nu'' / \nu}{2}}{2} = \frac{1-\nu''/\nu}{4} \; . \end{equation} Note that by (\ref{eq:NN0Phi}), \begin{equation} \label{eq:n1} n_1 = n - \floor{\nu n} = \ceiling{(1-\nu) n} \; . \end{equation} We start with the first part of the proof: let $\mathbf{X}$ and $\mathbf{Y}$ be defined as in Lemma~\ref{lemm:guardBandErrorBound} (as yet, no coding of information). \begin{subclaim} \label{subclaim:ratePenalty} The rate penalty incurred by adding guard bands becomes negligible as $n \to \infty$. Namely, $ |g(\mathbf{X})|/|\mathbf{X}|$ tends to $1$ as $n \to \infty$. \end{subclaim} This follows by Lemma~\ref{lemm:blockLength}, which shows that the rate penalty incurred by adding guard bands becomes negligible as $n_0 \to \infty$, and the connection between $n_0$ and $n$ given in (\ref{eq:n0}). \begin{subclaim} \label{subclaim:trimming} The probability of making a mistake during the partitioning of $\mathbf{Y}$ into the $\Phi = 2^{n-n_0}$ trimmed blocks $\mathbf{Y}(1)^*$, $\mathbf{Y}(2)^*$,\ldots,$\mathbf{Y}(\Phi)^*$ is less than $\frac{1}{3} \cdot 2^{-2^{\nu'' n}}$, for $N = 2^n$ large enough. \end{subclaim} This follows from Lemma~\ref{lemm:guardBandErrorBound} and the union bound. Specifically, recalling the recursive nature of our algorithm to produce $\mathsf{Y}^*$, we note that an error is made only if the relevant portion of the received vector $\mathbf{Y}$, after that portion has been trimmed, is such that the middle symbol (rounding down) does not belong to the outermost guard band. Each such probability can be bounded by using Lemma~\ref{lemm:guardBandErrorBound}. Since we produce $\Phi$ blocks, our recursion is applied $\Phi-1$ times. 
Hence, for $n_0$ large enough, the probability of failing to produce $\mathsf{Y}^*$ is at most \begin{multlinecc} \label{eq:genieFailureUpperBound} (\Phi-1) \cdot 2^{-\theta \cdot 2^{(1-2\xi) n_0}} \\ = (2^{n-\floor{\nu n}}-1) \cdot 2^{-\theta \cdot 2^{ \floor{\nu n} \cdot((1+\nu'' / \nu)/2)}} \; , \end{multlinecc} where the equality follows from (\ref{eq:Phi}) and (\ref{eq:n0})--(\ref{eq:n1}). Recalling (\ref{eq:n0}), we may take $n$ large enough such that $n_0$ is indeed large enough for the above to hold. Moreover, since $0 < \nu'' < \nu$, it is straightforward to show that the RHS of (\ref{eq:genieFailureUpperBound}) is less than $\frac{1}{3} \cdot 2^{-2^{\nu'' n}}$ for large enough $n$, as required. \begin{subclaim} \label{subclaim:calIInformationBits} For $\mathbf{U} = \mathcal{A}(\mathbf{X})$, the fraction of indices $1 \leq i \leq N$ for which the Bhattacharyya parameter satisfies \begin{equation} \label{eq:subclaim_zsmall} Z(U_i|U_1^{i-1},\mathbf{Y}(1)^*, \mathbf{Y}(2)^*,\ldots,\mathbf{Y}(\Phi)^*) < \frac{1}{3N} \cdot 2^{-2^{\nu'' n}} \end{equation} and the total variation parameter (see Definition~\ref{defi:K} in the appendix) satisfies \begin{equation} \label{eq:subclaim_ksmall} K(U_i|U_1^{i-1}) < \frac{1}{3N} \cdot 2^{-2^{\nu'' n}} \end{equation} tends to $\mathcal{I}$, as $n \to \infty$. \end{subclaim} Informally, $H \approx 0$ iff $Z \approx 0$ and $H \approx 1$ iff $K \approx 0$. For a formal statement, see e.g.\ \cite[Lemma 1]{Shuval_Tal_Memory_2017}. Thus, Lemma~\ref{lemm:slowPolarizationStar} continues to hold if we replace (\ref{eq:goodIndexI}) by the condition \begin{multlinecc} \label{eq:goodIndexIZK} Z(V_{i_0}(\phi)|V_1^{i_0-1}(\phi) ,\mathbf{Y}^*(\phi)) < \epsilon \quad \mbox{and} \onlysingle{\quad\quad} \\ K(V_{i_0}(\phi)|V_1^{i_0-1}(\phi)) < \epsilon \; . 
\end{multlinecc} That is, at the end of $n_0$ polarization stages, the fraction of indices $1 \leq i_0 \leq N_0$ satisfying the `weak polarization' in (\ref{eq:goodIndexIZK}) tends to $\mathcal{I}$ for any $\epsilon > 0$. To get from the `weak polarization' implied by (\ref{eq:goodIndexIZK}) to the `strong polarization' implied by (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}), we employ \cite[Lemma 40]{ShuvalTal:18a}, as follows. For $\mathbf{b} = (b_1,b_2,\ldots,b_n)$, recall from (\ref{eq:bitReversedI}) the definition of $i(\mathbf{b})$, and denote \[ i_0(\mathbf{b}) \triangleq 1+\sum_{j=1}^{n_0} b_j 2^{n_0-j} \; . \] Thus, we may think of the random process by which $i(B_1,B_2,\ldots,B_n)$ is chosen as first selecting $i_0$, which is in fact a function of $B_1,B_2,\ldots,B_{n_0}$, and then completing the choice of $i$ according to a new process $\tilde{B}_1,\tilde{B}_2, \ldots, \tilde{B}_{n_1}$, where \begin{equation} \label{eq:Btildeprocess} \tilde{B}_1 = B_{n_0+1} , \tilde{B}_2 = B_{n_0+2}, \ldots, \tilde{B}_{n_1} = B_{n_0 + n_1} \; , \end{equation} recalling that $n_0 + n_1 = n$, by (\ref{eq:n0}) and (\ref{eq:n1}). Fix $\epsilon > 0$ to a value that will shortly be specified. Next, for now, let us fix an index $i_0$ for which (\ref{eq:goodIndexIZK}) holds. We define two processes related to (\ref{eq:Btildeprocess}), denoted $\tilde{Z}_1,\tilde{Z}_2,\ldots,\tilde{Z}_{n_1}$ and $\tilde{K}_1,\tilde{K}_2,\ldots,\tilde{K}_{n_1}$. Recall that by definition, the $(\mathbf{X}(\phi), \mathbf{Y}(\phi))$ are i.i.d.\ over $1 \leq \phi \leq \Phi$. Hence, this must also be the case for $(V_{i_0}(\phi), V_1^{i_0-1}(\phi),\mathbf{Y}^*(\phi))$, by (\ref{eq:Vphi}). 
The first process is the evolution of the conditional Bhattacharyya parameter as we apply the $n_1$ polar transforms implied by (\ref{eq:Btildeprocess}), to $(\tilde{X}_\phi,\tilde{Y}_\phi)_{\phi=1}^\Phi$, where \[ \tilde{X}_\phi = V_{i_0}(\phi) \; \; \mbox{and} \;\; \tilde{Y}_\phi = (V_1^{i_0-1}(\phi),\mathbf{Y}^*(\phi)) \; . \] The second process is defined similarly, but now we consider the evolution of the conditional total variation parameter as we apply $n_1$ polar transforms to $(\tilde{X}_\phi,\dbtilde{Y}_\phi)_{\phi=1}^\Phi$, where \[ \dbtilde{Y}_\phi = V_1^{i_0-1}(\phi) \; . \] By our assumption of $i_0$ satisfying (\ref{eq:goodIndexIZK}), \[ \tilde{Z}_1 = Z(\tilde{X}_1|\tilde{Y}_1) < \epsilon \;\; \mbox{and} \;\; \tilde{K}_1 = K(\tilde{X}_1|\dbtilde{Y}_1) < \epsilon \; . \] Since $(\tilde{X}_\phi,\tilde{Y}_\phi)$ are i.i.d.\ over $\phi$, and the same holds for $(\tilde{X}_\phi,\dbtilde{Y}_\phi)$, we have by \cite[Proposition 5]{Arikan_2009} that \[ \tilde{Z}_{t+1} \leq \begin{cases} 2\tilde{Z}_t & \mbox{if $\tilde{B}_t = 0$} \\ \tilde{Z}_t^2 & \mbox{if $\tilde{B}_t = 1$} \end{cases} \] and by \cite[Proposition 4]{Shuval_Tal_Memory_2017} that \[ \tilde{K}_{t+1} \leq \begin{cases} \tilde{K}_t^2 & \mbox{if $\tilde{B}_t = 0$} \\ 2\tilde{K}_t & \mbox{if $\tilde{B}_t = 1$} . \end{cases} \] Lastly, it follows from (\ref{eq:vecVSlice}) that $\tilde{Z}_{n_1}$ equals the LHS of (\ref{eq:subclaim_zsmall}) while $\tilde{K}_{n_1}$ equals the LHS of (\ref{eq:subclaim_ksmall}), where $i$ is defined in (\ref{eq:bitReversedI}), with $B_j$ instead of $b_j$. To prove the sub-claim, we must show that, for every $\xi > 0$, there exists a threshold such that, if $n$ is larger than the threshold, then the fraction of indices $i$ satisfying both (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}) is at least $\mathcal{I} - \xi$. 
We will do this by choosing an $\epsilon$ and $n_0$ such that the fraction of indices satisfying (\ref{eq:goodIndexIZK}) is at least $\mathcal{I} - \xi/3$. Of these weakly polarized indices, we will choose $n_1$ such that at least a fraction $1- 2\xi/3$ satisfy both (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}). This is sufficient because $(\mathcal{I} - \xi/3)(1-2\xi/3) \geq \mathcal{I}-\xi$. To make a proper argument, however, we will work in reverse. First, we will set the parameters for strong polarization assuming sufficient weak polarization. In particular, we define \begin{equation} \label{eq:betafixed} \beta = 3(\nu + \nu'')/4 \end{equation} and observe that (\ref{eq:trainOfNuInequalites}) implies $0 < \beta < 1/2$. Then, we let $\psi = \xi/3$ be the maximum fraction of weakly polarized indices that can fail to strongly polarize and apply \cite[Lemma 40]{ShuvalTal:18a} to determine a valid maximum for $\epsilon$ and minimum for $n_1$ (in \cite{ShuvalTal:18a}, $\psi$, $\epsilon$, and $n_1$ are denoted $\delta$, $\eta$, and $n$, respectively). This lemma implies the existence of an $\epsilon > 0$ such that if (\ref{eq:goodIndexIZK}) holds for an index $i_0$, then the fraction of $i$ values ($(i_0-1) \cdot \Phi + 1 \leq i \leq i_0 \cdot \Phi$) for which both $\tilde{Z}_i < 2^{-2^{\beta n_1}}$ and $\tilde{K}_i < 2^{- 2^{\beta n_1}}$ is at least $1-2\xi/3$, for all $n_1$ large enough\footnote{Crucially, $\epsilon$ and the $n_1$ threshold do not depend on the choice of $i_0$.}. Conceptually, we need to apply the lemma twice -- once for (\ref{eq:subclaim_zsmall}) and once for (\ref{eq:subclaim_ksmall}). Thus, the fraction of weakly polarized indices that fail to satisfy both (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}) is at most $2\psi = 2\xi/3$. Next, for the $\epsilon$ determined above, we find the minimum $n_0$ to guarantee that (\ref{eq:goodIndexIZK}) holds for at least a fraction $\mathcal{I}-\xi/3$ of the $i_0$ indices. 
Lastly, we recall that $n_0$ and $n_1$ are monotonically increasing functions of $n$, by (\ref{eq:n0}) and (\ref{eq:n1}). Hence, for all large enough $n$, the parameters $n_0$ and $n_1$ will exceed the bounds computed earlier, and the fraction of indices satisfying (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}), where in both cases we replace the RHS by $2^{- 2^{\beta n_1}}$, will be at least $\mathcal{I} - \xi$. In order to prove the sub-claim, all that remains is to show that, for all large enough $n$, we have \begin{equation} \label{eq:betaAndNuDoublePrime} 2^{-2^{\beta n_1}} < \frac{1}{3N} \cdot 2^{-2^{\nu'' n}} \; , \end{equation} the latter being the RHS of (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}). Indeed, by (\ref{eq:n1}) we have that $n_1 \geq (1-\nu)n$, and recalling from (\ref{eq:trainOfNuInequalites}) that $\nu \leq 1/3$, we deduce that $n_1 \geq 2n/3$. Hence, to prove (\ref{eq:betaAndNuDoublePrime}), it suffices to show that \begin{equation} \label{eq:betaAndNuDoublePrimePlusPlus} 2^{-2^{2\beta n/3}} < \frac{1}{3N} \cdot 2^{-2^{\nu'' n}} \; . \end{equation} Indeed, by (\ref{eq:trainOfNuInequalites}) and (\ref{eq:betafixed}) we have that $2 \beta/3 > \nu''$. Thus, recalling that $N=2^n$, we deduce that (\ref{eq:betaAndNuDoublePrimePlusPlus}) holds for all $n$ large enough. We now move to the second part of our proof. Let us first discuss how data is encoded. We produce $\mathbf{u} = u_1^{N}$ successively, starting from $u_1$ and ending in $u_N$. If the current index $i$ satisfies (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}), then $u_i$ is set to an information bit, where the information bits are assumed i.i.d.\ and Bernoulli$(1/2)$. Otherwise, $u_i$ is randomly picked according to the distribution $P(U_i = u_i | U_1^{i-1} = u_1^{i-1})$, where $u_1^{i-1}$ are the realizations occurring in previous stages. The random picks in this case are assumed to be from a random source common to both the encoder and the decoder.
Typically, this is implemented using a pseudo-random number generator, common to both sides: if the pseudo-random number $0 \leq r_i \leq 1$ drawn for this stage is such that $r_i \leq P(U_i = 0 | U_1^{i-1} = u_1^{i-1})$, we set $u_i = 0$. Otherwise, we set $u_i = 1$. These are essentially the `frozen bits' from the seminal paper \cite{Arikan_2009}. Transforming $\mathbf{u}$ to $\mathbf{x} = \mathcal{A}^{-1}_n(\mathbf{u})$ and adding guard bands to $\mathbf{x}$ is as described before. The following sub-claim proves a key part of our theorem and is an immediate consequence of Subclaims \ref{subclaim:ratePenalty} and \ref{subclaim:calIInformationBits}. \begin{subclaim} The rate of our coding scheme approaches $\mathcal{I}$, as $n \to \infty$. \end{subclaim} Note that the probability distribution of our encoded $\mathbf{u}$ does \emph{not} generally equal that of the random variable $\mathbf{U}$ used throughout this paper. Namely, denote by $\tilde{p}$ the probability distribution corresponding to the above encoding process: the probability of the encoder producing the vector $\mathbf{u}$ is $\tilde{p}(\mathbf{u})$. Next, denote by $p$ the probability distribution of $\mathbf{U}$. That is, the probability we would get if we were to set $u_i$ to $0$ with probability $P(U_i = 0 | U_1^{i-1} = u_1^{i-1})$, irrespective of whether $i$ satisfies (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}) or not. Our plan is to show that the difference between $p$ and $\tilde{p}$ is `small'. However, we must first address a subtle point stemming from this difference in distributions. Specifically, the probability $P(U_i = 0 | U_1^{i-1} = u_1^{i-1})$ used at stage $i$ might be undefined, since we might be conditioning on an event with probability $0$. In this case, we define the above probability to be $1/2$. We decode as previously explained: we first recursively partition the received vector into $\mathbf{y}(1)^*, \mathbf{y}(2)^*,\ldots,\mathbf{y}(\Phi)^*$.
Then, we employ successive cancellation decoding. That is, we produce our estimate $\hat{\mathbf{u}} = \hat{u}_1^N$ of $\mathbf{u}$ by first producing $\hat{u}_1$, then $\hat{u}_2$, etc., up to $\hat{u}_N$. If index $i$ is such that both (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}) hold, then we set $\hat{u}_i$ to the value maximizing \begin{multlinecc} \label{eq:decodeNonTrivialI} P(U_i = \hat{u}_i |U_1^{i-1}=\hat{u}_1^{i-1},\mathbf{Y}(1)^*=\mathbf{y}(1)^*,\\ \mathbf{Y}(2)^*=\mathbf{y}(2)^*, \ldots,\mathbf{Y}(\Phi)^* = \mathbf{y}(\Phi)^*) \; . \end{multlinecc} Otherwise, if $i$ does not satisfy both (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}), we set $\hat{u}_i$ in accordance with the common randomness. That is, in the pseudo-random number implementation, we set $\hat{u}_i = 0$ if \begin{equation} \label{eq:decodeTrivialI} r_i \leq P(U_i = 0 | U_1^{i-1} = \hat{u}_1^{i-1}) \; . \end{equation} Otherwise, we set $\hat{u}_i = 1$. We stress that the probabilities in (\ref{eq:decodeNonTrivialI}) and (\ref{eq:decodeTrivialI}) are calculated according to the probability distribution of the random vector $\mathbf{U}$ used throughout this paper. That is, although $\mathbf{u}$ has been encoded according to the probability $\tilde{p}$, we decode it `as if' it had been encoded using $p$. This discrepancy will shortly be addressed. However, as a first step, the following sub-claim considers the case in which there is no discrepancy. \begin{subclaim} If $\mathbf{u}$ were chosen according to the probability distribution $p$, then the probability of misdecoding would be less than $\frac{2}{3} \cdot 2^{-2^{\nu'' n}}$, for large enough $n$. \end{subclaim} To see this, note that if the above were the case, then $\mathbf{u}$ and $\mathbf{U}$ would have the same probability distribution.
Thus, Subclaim~\ref{subclaim:trimming} would apply, and would imply that the probability of our partitioning algorithm failing to produce the correct $\mathbf{y}(1)^*,\mathbf{y}(2)^*,\ldots,\mathbf{y}(\Phi)^*$ from the received vector would be less than $\frac{1}{3} \cdot 2^{-2^{\nu'' n}}$, for large enough $n$. Also, if a `genie' were to give us the correct $\mathbf{y}(1)^*,\mathbf{y}(2)^*,\ldots,\mathbf{y}(\Phi)^*$, we have from (\ref{eq:subclaim_zsmall}) that the probability of misdecoding $\mathbf{u}$ would be less than $\frac{1}{3} \cdot 2^{-2^{\nu'' n}}$ for large enough $n$, using exactly\footnote{Since \cite{Arikan_2009} considers the Bhattacharyya parameter for the case of a channel with uniform input, we also need to claim that our $Z$ upper bounds the probability of maximum a posteriori misdecoding in the more general setting where the channel input is non-uniform. This is well known; see e.g.\ \cite[Remark 1]{Shuval_Tal_Memory_2017} for a proof of a slightly stronger claim.} the same arguments as given in \cite[Proof of Theorem 2]{Arikan_2009} to bound the probability of the successive cancellation decoder failing. The result follows by applying the union bound. For $\mathbf{u}$ such that $p(\mathbf{u}) > 0$, denote by $P_\mathrm{e}(\mathbf{u})$ the probability that our decoder fails, given that $\mathbf{u}$ was encoded. Otherwise, if $p(\mathbf{u})=0$, define\footnote{Note that we are being conservative. We could have simply defined $P_\mathrm{e}(\mathbf{u})$ as the probability that our decoder fails, given that $\mathbf{u}$ was encoded. However, if our input distribution is such that some vectors $\mathbf{u}$ are given a probability of $0$, say in order to satisfy a constraint on the input, we should treat the event of the encoder producing a $\mathbf{u}$ not satisfying this constraint as an error.} $P_\mathrm{e}(\mathbf{u})=1$.
We have just shown that for large enough $n$, \begin{equation} \sum_{\mathbf{u} \in \mathcal{X}^N} p(\mathbf{u}) P_\mathrm{e}(\mathbf{u}) < \frac{2}{3} \cdot 2^{-2^{\nu'' n}} \; . \end{equation} However, recall that our ultimate goal is to upper bound the LHS, after $p(\mathbf{u})$ is replaced by $\tilde{p}(\mathbf{u})$. Informally, a similar bound holds for this case as well, since $p$ and $\tilde{p}$ are `close'. The following two sub-claims make this statement precise. \begin{subclaim} \[ \sum_{\mathbf{u} \in \mathcal{X}^N} | \tilde{p}(\mathbf{u}) - p(\mathbf{u}) | < \frac{1}{3} \cdot 2^{-2^{\nu'' n}} \] \end{subclaim} To see this, we use the following result from \cite[Lemma 3.5]{Korada:09z}: \[ A_1^N - B_1^N = \sum_{i=1}^N B_1^{i-1} (A_i - B_i) A_{i+1}^N \] where, here, $A_i^j$ denotes the product $A_i^j= A_i \cdot A_{i+1} \cdots A_j$, and $A_1^0 = A_{N+1}^N \triangleq 1$. We now take \[ A_i = A_i(\mathbf{u}) = \tilde{p}(u_i|u_1^{i-1}) \quad \mbox{and} \quad B_i = B_i(\mathbf{u}) = p(u_i|u_1^{i-1}) \; . \] Recall that we have defined $B_i$ to be $1/2$ if $p(u_1^{i-1}) = 0$. Similarly, we define $A_i$ to be $1/2$ if $\tilde{p}(u_1^{i-1}) = 0$. We deduce that \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{\sum_{\mathbf{u} \in \mathcal{X}^N} |\tilde{p}(\mathbf{u}) - p(\mathbf{u}) |} \\ \quad & = & \sum_{\mathbf{u} \in \mathcal{X}^N} |A_1^N - B_1^N| \\ & = & \sum_{\mathbf{u} \in \mathcal{X}^N} \left|\sum_{i=1}^N B_1^{i-1} (A_i - B_i)A_{i+1}^N \right| \\ & \leq & \sum_{\mathbf{u} \in \mathcal{X}^N} \sum_{i=1}^N |B_1^{i-1} (A_i - B_i)A_{i+1}^N | \\ & = & \sum_{i=1}^N \sum_{\mathbf{u} \in \mathcal{X}^N} |B_1^{i-1} (A_i - B_i)A_{i+1}^N | \; , \IEEEyesnumber \label{eq:KoradaInternalExternal} \end{IEEEeqnarray*} where the first equality follows by the chain rule and the first inequality follows from the triangle inequality.
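The telescoping identity above is easy to check numerically; the following sketch (our own illustration, with random factors standing in for the conditional probabilities $A_i$ and $B_i$) verifies it:

```python
import math
import random

def telescoping_sum(A, B):
    # sum_{i=1}^{N} B_1^{i-1} * (A_i - B_i) * A_{i+1}^{N},
    # with empty products equal to 1 (math.prod of an empty list is 1).
    return sum(
        math.prod(B[:i]) * (A[i] - B[i]) * math.prod(A[i + 1:])
        for i in range(len(A))
    )

rng = random.Random(0)
A = [rng.random() for _ in range(10)]
B = [rng.random() for _ in range(10)]

# The sum telescopes to prod(A) - prod(B), up to floating-point rounding.
assert abs((math.prod(A) - math.prod(B)) - telescoping_sum(A, B)) < 1e-12
```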
Next, fix $i$, and consider the internal sum in (\ref{eq:KoradaInternalExternal}), \begin{equation} \label{eq:KoradaInternal} \sum_{\mathbf{u} \in \mathcal{X}^N} |B_1^{i-1} (A_i - B_i)A_{i+1}^N | \; . \end{equation} If $i$ is an index for which both (\ref{eq:subclaim_zsmall}) and (\ref{eq:subclaim_ksmall}) hold, then $A_i = A_i(\mathbf{u}) = 1/2$ for all $\mathbf{u}$. For this case, we get from (\ref{eq:subclaim_ksmall}) and Lemma~\ref{lemm:KsmallCloseToBerHalf} in Appendix~\ref{sec:ZK} that \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{\sum_{\mathbf{u} \in \mathcal{X}^N} |B_1^{i-1} (A_i - B_i)A_{i+1}^N |} \\ \quad &=& \sum_{\mathbf{u} \in \mathcal{X}^N} B_1^{i-1} \cdot |A_i - B_i| \cdot A_{i+1}^N \\ & = & \sum_{\mathbf{u} \in \mathcal{X}^N} p(u_1^{i-1}) \cdot \left|\frac{1}{2} - p(u_i|u_1^{i-1}) \right| \cdot \tilde{p}(u_{i+1}^N | u_1^i) \\ & = & \sum_{u_1^i \in \mathcal{X}^i} p(u_1^{i-1}) \cdot \left|\frac{1}{2} - p(u_i|u_1^{i-1}) \right| \sum_{u_{i+1}^N \in \mathcal{X}^{N-i}} \tilde{p}(u_{i+1}^N | u_1^i) \\ & = & \sum_{u_1^i \in \mathcal{X}^i} p(u_1^{i-1}) \cdot \left|\frac{1}{2} - p(u_i|u_1^{i-1}) \right| \\ & = & K(U_i|U_1^{i-1}) < \frac{1}{3N} \cdot 2^{-2^{\nu'' n}} \; . \end{IEEEeqnarray*} Otherwise, if $i$ is an index for which (\ref{eq:subclaim_zsmall}) or (\ref{eq:subclaim_ksmall}) fails to hold, then $A_i = B_i$ for all $\mathbf{u}$, and thus (\ref{eq:KoradaInternal}) equals $0$. The sub-claim follows. We are now ready to state our bound on the probability of misdecoding. \begin{subclaim} \label{subclaim:proberrorN} For large enough $n$, \[ \sum_{\mathbf{u} \in \mathcal{X}^N} \tilde{p}(\mathbf{u}) P_\mathrm{e}(\mathbf{u}) < 2^{-2^{\nu'' n}} \; .
\] \end{subclaim} To show this, we use the two previous sub-claims as follows: \begin{IEEEeqnarray*}{rCl} \sum_{\mathbf{u} \in \mathcal{X}^N} \tilde{p}(\mathbf{u}) P_\mathrm{e}(\mathbf{u}) & = & \sum_{\mathbf{u} \in \mathcal{X}^N} (p(\mathbf{u}) + \tilde{p}(\mathbf{u}) - p(\mathbf{u}) ) P_\mathrm{e}(\mathbf{u}) \\ & \leq & \sum_{\mathbf{u} \in \mathcal{X}^N} (p(\mathbf{u}) + |\tilde{p}(\mathbf{u}) - p(\mathbf{u})| ) P_\mathrm{e}(\mathbf{u}) \\ & \leq & \sum_{\mathbf{u} \in \mathcal{X}^N} p(\mathbf{u}) P_\mathrm{e}(\mathbf{u}) + \sum_{\mathbf{u} \in \mathcal{X}^N} |\tilde{p}(\mathbf{u}) - p(\mathbf{u})| \\ & < & \frac{2}{3} \cdot 2^{-2^{\nu'' n}} + \frac{1}{3} \cdot 2^{-2^{\nu'' n}} \; , \end{IEEEeqnarray*} which holds for large enough $n$. Recall that in the statement of our theorem, we have denoted the length of our codeword (after adding the guard bands) as $\Lambda$. The following subclaim proves another key part of our theorem. \begin{subclaim} For large enough $n$, the probability of misdecoding is less than $2^{-\Lambda^{\nu'}}$. \end{subclaim} The proof follows by (\ref{eq:nuDoublPrime}), Subclaim~\ref{subclaim:ratePenalty}, and Subclaim~\ref{subclaim:proberrorN}. All that remains now is to discuss the encoding and decoding complexity of our algorithms. \begin{subclaim} The encoding complexity is $O(\Lambda \log \Lambda)$. \end{subclaim} As with successive cancellation decoding, the complexity of producing $\mathbf{u}$, and from it $\mathbf{x}$, is $O(N \log N)$. Adding the guard bands is a simple recursive process whose total time is $O(\Lambda)$. Since $\Lambda \geq N$, the result follows. \begin{subclaim} The decoding complexity is $O(\Lambda^{1+3\nu})$. \end{subclaim} The complexity of partitioning the received vector $\mathbf{y}$ into the $\Phi$ trimmed blocks $\mathbf{y}(1)^*, \mathbf{y}(2)^*,\ldots,\mathbf{y}(\Phi)^*$ is $O(\Lambda)$. Next, consider step $i$ of the decoding algorithm, in which we decide on the value of $\hat{u}_i$.
The key step is to calculate the probability \begin{multlinecc*} P(U_i = 0| U_1^{i-1} = \hat{u}_1^{i-1},\mathbf{Y}(1)^* = \mathbf{y}(1)^*,\\ \mathbf{Y}(2)^* = \mathbf{y}(2)^*,\ldots,\mathbf{Y}(\Phi)^* = \mathbf{y}(\Phi)^*) \; . \end{multlinecc*} This is done in two stages. Recall (\ref{eq:vecVSlice}) and the discussion below it. First, for each $1 \leq \phi \leq \Phi$, we calculate the probabilities \[ P(V_{i_0}(\phi) = 0| V_1^{i_0-1}(\phi) = \hat{v}_1^{i_0-1}(\phi),\mathbf{Y}(\phi)^* = \mathbf{y}(\phi)^*) \; , \] where $i_0$ is the unique integer for which \[ (i_0-1) \Phi + 1 \leq i \leq i_0 \Phi \] and $\hat{v}_1^{i_0-1}(\phi)$ is related to $\hat{u}_1^{i-1}$ through (\ref{eq:vecVSlice}). That is, we have just calculated the probabilities corresponding to the first $n_0$ polarization stages. Recall that by Subsection~\ref{subsec:TrellisTDC}, this can be done using $\Phi$ trellises. Next, we apply the remaining $n-n_0$ polarization steps to these probabilities. That is, the standard SC decoder is run for the last $n-n_0$ stages, and can be thought of as effectively operating on a code of length $N_1 = 2^{n_1} = 2^{n-n_0}$. The total running time of the second stage is well known to be $O(N_1 \log N_1)$, which is indeed $O(\Lambda^{1+3\nu})$. Recalling the discussion in Subsection~\ref{subsec:complexity}, the total running time of the first stage is \[ O(\Phi \cdot|\mathcal{S}|^3 N_0^4) \; , \] where $|\mathcal{S}|$ is the number of states in the Markov chain through which the input distribution is defined (and which we treat as a constant), $N_0 = 2^{n_0} = 2^{\floor{n \nu}}$ and $\Phi = 2^{n - n_0} = 2^{n - \floor{n \nu}}$. Since $N = 2^n \leq \Lambda$, the result follows. \end{IEEEproof} \section{Introduction} In many communications systems, symbol-timing errors may result in insertion and deletion errors.
For example, a \emph{deletion channel} with constant deletion rate maps a length-$N$ input string to a subsequence using an i.i.d.\ process that deletes each input symbol with probability $\delta$. These types of channels were first studied in the 1960s~\cite{Gallager_1961,Dobrushin_1967} and modern coding techniques were first applied to them in~\cite{Davey_2001}. Over the past 15 years, numerical bounds on the capacity of the deletion channel have been significantly improved, but a closed-form expression for the capacity remains elusive~\cite{Mitzenmacher_2009,Fertonani_2010,Mercier_2012,Iyengar_2011,Iyengar_2015,Rahmati_2015,Castiglione_2015,Cheraghchi_2019}. Recently, polar codes were applied to the deletion channel in a series of papers, but the question of polarization for non-vanishing deletion rates remained open~\cite{Thomas_2017,Tian_2017,Tian_2018,Tian_it2018}. In this work, we show that polar codes can be used to efficiently approach the mutual information rate between a regular (i.e., finite-state, irreducible, and aperiodic) hidden-Markov input process and the output of the deletion channel with constant deletion rate. \looseness=-1 In~\cite{Thomas_2017}, a polar code is designed for the binary erasure channel (BEC) and evaluated on a BEC that also introduces a single deletion. An outer cyclic-redundancy check (CRC) code is used and decoding is performed by running the successive cancellation list (SCL) decoder \cite{Tal_2015} exhaustively over all compatible erasure locations. The results show that one can recover a single deletion in this setting. Extensions to a finite number of deletions are also discussed but the decoding complexity grows faster than $N^{d+1}$, where $N$ is the code length and $d$ is the number of deletions. In \cite{Tian_2017}, a low-complexity decoder is proposed for the same setup.
Its complexity, for a length-$N$ polar code, is roughly $d^3 N \log N$ when $d$ deletions occur\footnote{In~\cite{Tian_2017}, this complexity is misstated as $O(d^2 N \log N)$.}. The paper also presents simulation results for polar codes with lengths ranging from 256 to 2048 on two deletion channels. The first channel has a fixed deletion rate of 0.002 and the second introduces exactly $4$ deletions. Based on their results, the authors of \cite{Tian_2017} conjecture that polarization occurs when $N \to \infty$ while the total number of deletions, $d$, is fixed. The final papers~\cite{Tian_2018,Tian_it2018} in this series extend the previous results by proving that weak polarization occurs when $N\to \infty$ and $d=o(N)$. While this result is quite interesting, its proof does not extend to the case of constant deletion rate. For the case where $N\to \infty$ with $d$ fixed, these papers also show strong polarization for the deletion channel and weak polarization for the cascade of the deletion channel and a discrete memoryless channel (DMC). In this paper, we combine the well-known trellis representation for channels with synchronization errors~\cite{Davey_2001} with low-complexity successive-cancellation (SC) trellis decoding for channels with memory~\cite{Wang_2014,Wang_2015}. In particular, \cite{Davey_2001} describes how the joint input-output probability of the deletion channel (and other synchronization-error channels) can be represented using a trellis. This is closely related to fast algorithms for the edit distance between strings based on dynamic programming~\cite{Wagner_1974}. The main advantage of the trellis perspective is that it naturally generalizes to other channels with synchronization errors (e.g., with insertions, deletions, and errors). The papers~\cite{Wang_2014,Wang_2015} describe how the plus and minus polar-decoding operations can be efficiently applied to a channel whose input-output mapping is represented by a trellis. 
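To make the connection concrete: for the i.i.d.\ deletion channel, $P(\mathbf{y}\,|\,\mathbf{x})$ equals the number of ways $\mathbf{y}$ embeds in $\mathbf{x}$ as a subsequence, times $\delta^{N-M}(1-\delta)^{M}$, and the embedding count satisfies an edit-distance-style recursion. The sketch below is our own illustration of this dynamic program, not the decoder of \cite{Wang_2014}:

```python
from itertools import combinations

def deletion_channel_prob(x, y, delta):
    # P(output = y | input = x) when each symbol of x is deleted
    # independently with probability delta. This equals
    # (#subsequence embeddings of y in x) * delta^(n-m) * (1-delta)^m.
    n, m = len(x), len(y)
    if m > n:
        return 0.0
    # E[j] = number of embeddings of y[:j] in the prefix of x processed so far.
    E = [1] + [0] * m
    for i in range(1, n + 1):
        # j descends so that x[i-1] is used at most once per embedding
        for j in range(min(i, m), 0, -1):
            if x[i - 1] == y[j - 1]:
                E[j] += E[j - 1]
    return E[m] * delta ** (n - m) * (1 - delta) ** m

# Sanity check: the probabilities of all distinct outputs sum to one.
x = '0110'
outputs = {''.join(x[i] for i in c)
           for r in range(len(x) + 1)
           for c in combinations(range(len(x)), r)}
total = sum(deletion_channel_prob(x, y, 0.3) for y in outputs)
```

The table $E$, indexed by how many output symbols have been produced so far, plays the role of the trellis state; the plus and minus operations of \cite{Wang_2014} act on such trellises.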
Putting these ideas together defines a low-complexity SC decoder for polar codes on the deletion channel that is essentially equivalent to the decoder defined in~\cite{Tian_2017}. Building on previous proofs of polarization for channels with memory \cite{SasogluTal:18a,Shuval_Tal_Memory_2017}, this paper proves weak and strong polarization for the deletion channel. In order to prove strong polarization, guard bands of `$0$' symbols are embedded in the codewords of Ar\i{}kan\xspace's standard polar codes. Effectively, these guard bands allow the decoder to work on independent blocks and enable our proof of strong polarization. The primary results of this research are summarized in Theorem~\ref{theo:main}. Conceptually, it provides a polynomial-time method to achieve the mutual information rate between a fixed regular hidden-Markov input process and the binary deletion channel. \begin{theo}\label{theo:main} Fix a regular hidden-Markov input process and a parameter $\nu \in (0,1/3]$. The rate of our coding scheme approaches the mutual information rate between the input process and the binary deletion channel output. The encoding and decoding complexities of our scheme are $O(\Lambda \log \Lambda)$ and $O(\Lambda^{1+3\nu})$, respectively, where $\Lambda$ is the blocklength. For any $0 < \nu' < \nu$ and sufficiently large blocklength $\Lambda$, the probability of decoding error is at most $2^{-\Lambda^{\nu'}}$. \end{theo} The family of allowed input distributions is defined in Subsection~\ref{subsec:FAIM} and the structure of the codeword is defined in Section~\ref{sec:strong_setup}. Its proof can be found in Section~\ref{sec:strong}. While the theorem is stated for a fixed input process, we note that the encoding and decoding complexities scale cubically with the number of states in the input process. Theorem~\ref{theo:cap} establishes a sequence of regular hidden-Markov input processes whose mutual information rates approach the deletion channel capacity. 
\begin{theo}\label{theo:cap} Let $C$ be the capacity of the binary deletion channel with deletion probability $\delta$. For any $\epsilon>0$, there is a regular hidden-Markov input process whose mutual information rate with the binary deletion channel output is at least $C-\epsilon$. \end{theo} \vspace{1mm} Together, the two theorems imply that the first scheme can be used to achieve capacity on the binary deletion channel. We should note, however, that we do not provide an efficient method to optimize the input distribution or to bound its complexity in terms of the gap to capacity. Also, Theorem~\ref{theo:cap} is weaker than a recent result by Li and Tan which proves that the capacity can be approached by a sequence of finite-order Markov input distributions that are both irreducible and aperiodic~\cite{Li_2019}. Both results are predated by an earlier proof of Dobrushin showing that a sequence of periodic finite-state Markov input distributions can approach capacity on the deletion channel~\cite{Dobrushin_1967}. Here is an outline of the structure of this paper. Section~\ref{sec:background} sets up the basic notation and definitions used in this paper. Section~\ref{sec:TrellisBasics} defines the concept of a trellis and shows how it can be used to compactly represent various deletion patterns and their corresponding probabilities. In Section~\ref{sec:TrellisPolarization}, we describe how plus and minus polarization operations are applied to trellises to yield new trellises. This provides a more detailed description of the SC trellis decoding method introduced in~\cite{Wang_2014}. It is our hope that all sections up to and including Section~\ref{sec:TrellisPolarization} will be accessible to practitioners who are primarily interested in the implementation details. Section~\ref{sec:informationRates} discusses information rates and Section~\ref{sec:weak} proves that, in our setting, weak polarization occurs. Section~\ref{sec:strong} focuses on strong polarization.
The practitioner is advised to read Section~\ref{sec:strong_setup} which defines the structure and operation of an encoder with guard bands. The proof of the main theorem is presented in Section~\ref{sec:strong}.
\section{Introduction}\label{sec:intro} The Russia-Ukraine war has sent a seismic wave to the energy market and brought soaring gas prices under the spotlight. Ideally, the optimal retail fuel price at the pump needs to take into account many factors, including the crude oil price, the transportation cost, the brand value, and the local competition. Because of the complexity of the pricing problem, in practice, it is not surprising that the station managers would rely on some heuristics or simple rules (such as a constant markup over the cost) to set prices instead of using a sophisticated pricing algorithm. This is no longer the case for many gas stations, especially those owned by a large corporation. PDI Fuel Pricing\footnote{\url{https://www.pdisoftware.com/fuel-pricing-solutions/}} sells software to gas station managers that helps them set fuel prices more intelligently using data analytics and machine learning. It uses a wide range of data, including historical prices and demand, as well as competitors' prices and claims to ``fine-tune your pricing strategy with live competitive insights allowing [managers] to react quickly to market conditions.'' Needless to say, machine learning algorithms can significantly improve profitability over the heuristic approach. However, it is not hard to imagine scenarios when a human analyst or the station manager may not be fully convinced by the price prescribed by the algorithm, especially when the algorithm is a black box (typical for many machine learning algorithms) and the prescribed price deviates from human intuition significantly. For example, when the algorithm recommends a price that is much higher than what would have been charged by the station manager, should the algorithmic decision be trusted over human knowledge? 
On the one hand, the algorithm takes much more quantitative information as input than the human analyst, and the higher price could reflect rising demand, a pattern in the data missed by the human analyst. On the other hand, human knowledge may have relied on simple rules such as matching the price of another station around the corner. Such price-matching heuristics may have worked well in the past. When the decisions from the algorithm and human knowledge are in conflict, it can be hard for the analyst to make a call. The problem faced by the station manager in the motivating example is prevalent. Most business owners have realized the importance of the AI revolution and are willing to invest in it to improve business decision-making. However, as AI algorithms become increasingly sophisticated, many firms have no choice but to outsource the standardized components in the decision-making process to commercial AI solutions. These decisions, such as pricing and inventory management, have historically been made through human instincts and experiences. When human knowledge and the decision output by AI algorithms deviate significantly, firms face a similar dilemma to the station manager. In this paper, we provide a general analytical framework to study practical problems in which humans and AI interact in the decision-making process. Motivated by the gas station example, we consider an AI system prescribing a decision based on past data and some machine learning algorithms. Based on the prescribed decision, the human analyst may set a guardrail using simple rules from accumulated knowledge, experience, or expertise. More precisely, human knowledge is translated into a cap, a floor, or both on the decision. That is, if the algorithmic decision violates the bounds, the human analyst may override it by clipping it to the imposed cap or floor. For example, the algorithm may recommend a retail price of \$5.10 per gallon.
At the same time, human knowledge indicates, ``the price can't be higher than \$5.00 per gallon because the station around the corner is only charging \$4.80.'' As a result, the human analyst may set the final price to \$5.00. Otherwise, if the recommended price is lower than \$5.00, then the algorithmic decision is followed. In this interaction, AI is the main force behind the decision-making, while human knowledge serves as an auxiliary, safeguarding the algorithmic decision from prescribing unreasonably high prices. It is a fair representation of a considerable fraction of human-AI interaction in practice. With the framework, we aim to answer the following research question: When does human knowledge add value to AI decision-making? Our first result is \emph{negative}: human knowledge does not provide any benefit if (1) the algorithmic decision improves with more data, for example, when the mean squared error with respect to the optimal decision is diminishing, and (2) human knowledge is not improving with more data. This result is somewhat expected: The guardrail prescribed by human knowledge can itself be treated as the pattern extrapolation of past data, albeit a simple and heuristic one. If the algorithm can efficiently recognize and extrapolate the pattern better than the human, as many machine learning algorithms do, then it is unnecessary to augment the algorithmic decision with human knowledge under sufficient data. The above result may sound intuitive, but it is derived in an ideal situation. While it may be reasonable to assume that human knowledge does not improve constantly with more data, as human brains are generally unable to recognize complex patterns hidden in a large dataset, there are many caveats in applying commercial off-the-shelf AI systems to real-world applications as those algorithms may fail to satisfy condition (1) above. In these cases, human knowledge can be used to augment the algorithmic decision. 
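In this framework, the human override is simply a clipping operation on the algorithmic decision. A minimal sketch (our illustration; not the interface of any commercial system):

```python
def guardrail(p_alg, floor=None, cap=None):
    # Final decision: the algorithmic price, clipped to the human-imposed
    # floor and/or cap (either bound may be absent).
    if cap is not None and p_alg > cap:
        p_alg = cap
    if floor is not None and p_alg < floor:
        p_alg = floor
    return p_alg

# The gas-station example: the algorithm recommends $5.10, but the human
# caps the price at $5.00 because a nearby station charges $4.80.
assert guardrail(5.10, cap=5.00) == 5.00
assert guardrail(4.70, cap=5.00) == 4.70  # below the cap, the algorithm is followed
```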
In this study, we identify three such use cases within our framework and argue that, in these cases, the gas station manager should not completely delegate the pricing decision to the algorithm of PDI Fuel Pricing. The three cases summarize common pitfalls of trusting the algorithm blindly when making business decisions. \begin{itemize} \item When the firm is in a \emph{competitive market}, and the algorithm fails to fully take into account the competitors' decision (due to incomplete data or algorithmic design), simple decision rules based on human knowledge, such as price matching, can improve the algorithmic decision. This is not an uncommon setting. For example, PDI Fuel Pricing may not have direct access to the pricing data of other competing local stations unless they subscribe to some service as well. We show that when a competitor sets a price near the Nash equilibrium, using the algorithmic price and matching it to the competitor's price when the algorithmic price is higher can improve the algorithmic decision. \item The algorithm may be susceptible to \emph{model misspecification}. In the pricing context, the algorithm may mistakenly treat the demand function as a linear function and recommend the optimal price based on the misspecified linear demand model. On the other hand, the human analyst may simply observe which price generates the highest profit empirically in the past data without fitting or optimizing a model. This heuristic turns out to be quite robust to model misspecification. We show that human knowledge, when used to safeguard the algorithm, can help mitigate the model misspecification and improve the profitability of the algorithmic decision. \item When the data fed into the algorithm are \emph{contaminated}, possibly due to reporting or measurement error, the relative insensitivity of human knowledge to specific data points turns out to be a robust mechanism.
Not surprisingly, the combination of human knowledge and the algorithmic decision can prevent the latter from being misguided by the contaminated data. We provide an analytical condition that characterizes the contamination level for human knowledge to prevail. \end{itemize} In all three cases, instead of an abstract AI system, we materialize the algorithmic decision and study linear regression, which allows us to concretely analyze the trade-off of safeguarding the regression output using simple rules. Linear regression is widely used and is representative of a more complex machine learning algorithm. Such treatment allows us to provide technical conditions under which augmentation by human knowledge can improve the algorithmic decision. This study contributes to the growing literature on human-AI collaboration. In some applications, it has been shown that AI lacks crucial human strengths such as domain knowledge and common-sense reasoning \citep{holstein2021designing,lake2017building,miller2019explanation}, which motivates the collaboration between AI and human experts on subjects including chess \citep{case2018become,das2020leveraging}, healthcare \citep{patel2019human,irvin2019chexpert,dai2021artificial}, criminal justice \citep{kleinberg2018human,grgic2019human}, education \citep{smith2012predictive,cheng2019explaining}, and public services \citep{chouldechova2018case,binns2018s}. This study is motivated by business problems, and the human-AI interaction is uniquely defined by the context. Below we review the literature closely related to this study. \section{Related Literature}\label{sec:literature} This research is broadly related to two streams of literature: those papers providing conceptual or theoretical frameworks for human-AI collaboration and empirical papers documenting real-world interactions between AI and human analysts. 
In the first stream, recent literature in computer science aims at the optimal integration of human and AI decisions \citep{madras2018predict,wilder2020learning,gao2021human,mozannar2020consistent,keswani2021towards,bansal2021most,bansal2019updates,donahue2022human,rastogi2022unifying,raghu2019algorithmic}. On the one hand, \citet{madras2018predict} propose a learning-to-defer framework in which the AI can choose to make the decision on its own or pass the task to the downstream human expert. The expert has information unavailable to the AI and may make better decisions. Follow-up papers extend the framework to more complex settings, such as multiple experts \citep{keswani2021towards}, bandit feedback \citep{gao2021human}, and the joint optimization of the prediction algorithm and the pass function \citep{wilder2020learning,mozannar2020consistent}. On the other hand, \citet{donahue2022human,rastogi2022unifying} consider a \emph{weighted average} aggregation of human and AI decisions and show conditions for human-AI complementarity under which the aggregated decision outperforms both individual decisions. Recently, \citet{grand2022best} show that in the setting of sequential decision-making, the AI algorithm should be trained differently when a human analyst is involved. Motivated by business applications, our paper differs from this stream of work in that we study a particular (not necessarily optimal) way to integrate the algorithmic and human decisions tailored to the application. In the motivating example, the manager does not have access to the internal structure of the algorithm and cannot design a meta-algorithm to optimally instill her own knowledge into the algorithm. Some recent studies in Operations Management analyze the human-AI interaction in a theoretical framework \citep{boyaci2020human,agrawal2018prediction,agrawal2019exploring,devericourt2022your,ibrahim2021eliciting,dai2021artificial}.
They focus on modeling the impact of AI-based predictions on the human decision-making process. \citet{boyaci2020human} study the impact of AI predictions on human decision errors and the cognitive effort humans put into their decisions. The human has the cognitive flexibility to attend to information from diverse sources but under limited cognitive capacity, while the AI processes only incomplete information but with great accuracy and efficiency. Through a rational inattention model, the authors show that AI prediction improves the overall accuracy of human decisions and reduces cognitive effort. \citet{agrawal2018prediction} consider a human analyst aiming to maximize a utility that depends on her decision and an uncertain state. The state can be predicted accurately by the AI algorithm, but the human needs to learn the utility function. The authors show that AI prediction generally complements the human effort but could be a substitute in some cases. \citet{devericourt2022your} consider the human-AI interaction in a sequential setting in which the analyst gradually learns the accuracy of the AI algorithm through a sequence of tasks. Because the analyst can override the AI and never actively explores its accuracy, she may never know whether the AI outperforms her at the end of the day. The authors provide explanations for the coexistence of AI and humans, even if one actually outperforms the other. \citet{dai2021artificial} use a theoretical framework to analyze a physician's decision about whether to use AI when prescribing a treatment. They find that physicians may intentionally avoid using AI, even when AI can help mitigate clinical uncertainty, because doing so increases their liability when adverse patient outcomes occur. Our paper differs from these papers in modeling the human decision-making process. In our model, we assume the human analyst aims to directly safeguard the AI decisions using intuition and expertise.
We focus on whether the integration improves the raw AI output. Empirical evidence shows that human knowledge can still improve AI systems, even though the latter have access to big data and computational resources \citep{van2010ordering,campbell2011market,phillips2015effectiveness,karlinsky2019automating,kesavan2020field,Liu2022Alibaba,sun2022predicting}. For example, in the context of inventory replenishment, \citet{van2010ordering} find that store managers often modify the algorithmic recommendation from an automated replenishment system. \citet{kesavan2020field} use the data from a field experiment to investigate the merchant's modification of the advice from a data-driven central-planning system. They find that the merchant's modification reduces the overall profitability but improves the profit for growth-stage products whose historical data are limited. \citet{Liu2022Alibaba} conduct a field experiment to compare the inventory replenishment strategies of human buyers and AI algorithms. They find the algorithm outperforms human buyers in terms of reducing out-of-stock rates and inventory levels. The empirical works most related to our study are \citet{ibrahim2021eliciting, fogliato2022case}. \citet{ibrahim2021eliciting} show how to exploit human domain knowledge to improve AI predictions for surgery duration. In particular, they suggest inputting the human adjustment (the so-called private information adjustment in the paper), instead of the direct human forecast, into the prediction algorithm. Their work conveys a message similar to ours: even if human predictions are less accurate than the AI's, they can still help boost AI performance. \citet{fogliato2022case} investigate human-AI collaboration in the context of child maltreatment hotline screening. Due to technical glitches caused by incorrect inputs, the AI may incorrectly predict the risk score in some cases.
They find that human analysts are more likely to override AI recommendations when the AI makes a mistake. The work shows that humans can augment the algorithmic decision when the algorithm exhibits defects in real-world applications. Our work provides a theoretical framework to complement the empirical evidence in the above papers and analyzes the situations in which the human augmentation of algorithmic outputs is beneficial. Another stream of the related empirical literature is ``judgmental adjustment of statistical forecasts'' (see \citealt{arvan2019integrating,lawrence2006judgmental} for a review). These studies consider the demand forecasting problem in supply chain management. The human analyst is allowed to adjust the forecasts generated by an algorithm. The adjustment can improve accuracy when the algorithmic forecast is deficient or the human has important domain knowledge that is unavailable to the AI \citep{lawrence2006judgmental}. Several empirical studies investigate the effect of the direction and magnitude of the adjustment on accuracy \citep{fildes2009effective,davydenko2013measuring,baker2021maximizing}. However, the benefit of such adjustments may be highly context-dependent \citep{khosrowabadi2022evaluating}. Although we consider a general decision-making problem, our work also contributes to this literature by providing an analytical framework that characterizes when the adjustment adds value to the algorithmic forecast. Finally, we note some recent works on designing user-friendly AI algorithms that the human analyst can easily understand and follow. \citet{bastani2018interpreting} construct extracted decision trees to interpret complex, black-box AI models and summarize their reasoning process. Applied to the diabetes risk prediction problem, the proposed algorithm produces more accurate interpretations than baseline algorithms.
\citet{bastani2021learning} propose a reinforcement-learning algorithm for inferring interpretable tips to help workers improve their performance in sequential decision-making tasks. Through a virtual kitchen-management game, they show that the algorithm improves workers' performance. \citet{dietvorst2018overcoming} find that giving the human analyst some control over the AI output can reduce humans' aversion to algorithms. \section{An Analytical Framework for Human-Safeguarded Algorithmic Decisions}\label{sec:general-loss} In the retail fuel example, the gas station manager intends to set prices to maximize profit. The objective can be viewed more generally as minimizing the loss relative to the optimal price. In this section, we consider a general problem in which an analyst intends to minimize a loss function $l(\cdot): \mathbb R\to\mathbb R$ that measures the loss due to the deviation from the optimal decision $\xx$, e.g., the profit loss due to suboptimal operations or pricing decisions. The loss function may represent the operational cost or the expected negative profit. The analyst may not know the form of the loss function exactly and seeks help from AI algorithms. We do not impose any structure on the loss function but make the following mild assumptions. \begin{assumption}\label{ass:general-loss} Suppose the loss function $l(\cdot)$ satisfies: (\romannumeral1) $l(x)$ is nonnegative; (\romannumeral2) $l(x)$ is quasiconvex with minimizer $\xx$. \end{assumption} Part (\romannumeral1) of Assumption \ref{ass:general-loss} is without loss of generality, as the loss function can be shifted up by the constant $|l(\xx)|$. We first give two examples that will serve as running examples throughout the rest of the paper. The two examples are intended to give the context of the loss function and demonstrate the generality of Assumption~\ref*{ass:general-loss}.
\begin{example}[Predictive Analytics: Prediction]\label{exp:prediction} If a firm intends to forecast a quantity, for example, the demand in the next season, then the firm's problem can be cast as a prediction problem: the goal is to minimize the loss function $l(x)=(x-\xx)^2$, where $\xx$ is the actual value of the quantity of interest. \end{example} \begin{example}[Prescriptive Analytics: Pricing]\label{exp:pricing} Sophisticated algorithms such as online learning have been widely used in pricing (see, e.g., \citealt{den2022dynamic,keskin2022data}). When a new product is launched to the market, the retailer needs to set its price $x$. The goal of the retailer is to maximize the profit, which is the product of the profit margin $x-c$, where $c$ is the marginal cost, and the demand, i.e., $\pi(x)=(x-c) f(x)$. Denote by $\xx$ the optimal price. The retailer knows the marginal cost but knows neither the demand function $f(x)$ nor the optimal price. The loss function can be written as $l(x)=\pi(\xx)-\pi(x)$. If the profit function $\pi(\cdot)$ is unimodal, then the loss function satisfies Assumption~\ref{ass:general-loss}. \end{example} We next introduce the algorithmic decision and human knowledge into the framework. \textbf{Algorithmic decision.} To accommodate a wide range of algorithms, we simply use a generic random variable $\xa$ to represent the decision. The randomness may come from the randomness in the historical data or the randomization of the algorithm itself. The performance of the algorithmic decision is thus evaluated by $\mathbb E[l(\xa)]$. \textbf{Human knowledge.} We focus on human knowledge in the form of a guardrail. That is, the human analyst forms a belief about an upper bound on the optimal decision, based on her domain knowledge and experience. We use a random variable $\xh$ to denote the upper bound. In Example~\ref*{exp:pricing}, $\xh$ could be interpreted as a price cap manually imposed by the retailer.
Note that unlike the algorithmic decision, $\xh$ is usually not data-dependent and tends to be stable, although we allow it to be random and correlated with $\xa$. We use the upper bound as a form of domain knowledge for two reasons. First, it is common for human brains to perceive uncertainty in terms of intervals and worst-case scenarios. The notion is closely related to confidence intervals in statistics, which have shaped how human beliefs are formed. Second, compared to point estimators, the notion we propose is more flexible and allows for different confidence levels. To keep the framework general, we do not specify how $\xa$ and $\xh$ are generated. For $\xa$, it may be the output of a machine learning algorithm deployed by the analyst or of black-box commercial software, as mentioned in the fuel-pricing example in the introduction. The complexity of the algorithm may vary, e.g., linear regression versus neural networks. The random variable $\xa$ can fully capture this wide range of scenarios. For $\xh$, although it may not itself represent a sensible decision, it may serve as a safeguard distilled from the accumulated knowledge of the human analyst. Depending on the conservativeness and the risk preference of the analyst, $\xh$ may take different values. For instance, in Example~\ref*{exp:prediction}, $\xh$ may roughly be the upper confidence bound of the targeted quantity at various confidence levels. \textbf{Human-safeguarded algorithmic decision.} We consider a simple yet pervasive approach to integrating the algorithmic decision and human knowledge. The human analyst safeguards the algorithmic decision by using \begin{equation}\label{eq:integration} \xin \triangleq \min\{\xa, \xh\}. \end{equation} This is a rather natural step: the analyst follows the algorithmic decision if the upper bound is not violated; otherwise, the upper bound is used.
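As a minimal numerical sketch of the safeguard in \eqref{eq:integration} (the squared loss of Example~\ref{exp:prediction}, the Gaussian algorithmic decision, and the particular bound are illustrative assumptions of ours, not part of the framework):

```python
import numpy as np

rng = np.random.default_rng(0)

x_star = 10.0                    # true optimal decision (unknown to analyst and AI)

def loss(x):
    # squared loss of Example 1: quasiconvex, nonnegative, minimized at x_star
    return (x - x_star) ** 2

x_a = rng.normal(x_star, 2.0, 100_000)   # algorithmic decision: unbiased but noisy
x_h = 12.0                               # human upper bound, here with x_h >= x_star

x_in = np.minimum(x_a, x_h)              # safeguarded decision of eq. (1)

# When x_h >= x_star almost surely, safeguarding can only reduce the expected loss
assert loss(x_in).mean() <= loss(x_a).mean()
```

The assertion mirrors the inequality $\mathbb E[l(\xin)]\le \mathbb E[l(\xa)]$ established below for the case $\xh\ge\xx$ almost surely; with an aggressive bound (e.g., `x_h = 8.0`), the comparison can reverse.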
Consider the example mentioned in the introduction (a special case of Example~\ref*{exp:pricing}): $\xa$ is the price output by PDI Fuel Pricing, and $\xh$ is the price cap imposed by the station manager. The safeguarded algorithmic decision takes the minimum of the two, guaranteeing that the price output by the algorithm does not exceed the price cap. This type of augmentation also captures the interaction between autonomous drones and vehicles and their human overseers \citep{berger2022}, in which the human overseer needs to step in and override the algorithm when the system encounters an unexpected situation. Note that neither the algorithm nor the human analyst has access to the optimal decision $\xx$. If $\xh\ge \xx$ almost surely, i.e., the upper bound provided by the human belief is indeed always larger than the \emph{true} optimal decision, then we can show that the safeguarded decision $\xin$ outperforms the raw algorithmic decision $\xa$, i.e., $\mathbb E[l(\xin)]\le \mathbb E[l(\xa)]$. To see this, note that \begin{equation*} \mathbb E[l(\xin)]-\mathbb E[l(\xa)]=\int_{\xx}^{\infty} \int_{x_h}^{\infty} (l(x_h)-l(x_a)) f(x_a,x_h) dx_a dx_h \leq 0, \end{equation*} where $f(\cdot,\cdot)$ represents the joint PDF and the inequality follows from $l(x_a) \ge l(x_h)$ for $x_a \ge x_h \ge \xx$. The condition $\xh\ge \xx$, however, cannot be guaranteed, because the human analyst does not have precise information about $\xx$. On the one hand, when an unnecessary guardrail $\xh < \xx$ is imposed, the performance of $\xa$ is hurt for $\xa \in [\xh,\xx]$. In other words, if the bound $\xh$ suggested by the analyst is too aggressive, then $\xh< \xx$ is likely to happen and the human belief ends up clipping the algorithmic output $\xa$ in too many scenarios, even though the latter may accurately achieve the true optimal decision $\xx$.
Such an unnecessary guardrail inevitably introduces a significant downward bias and may make the safeguarded algorithmic decision worse. The faulty human knowledge imposes an additional cost on the AI decision. On the other hand, one may argue that $\xh$ can be set to a sufficiently large number so that $\xh\ge \xx$ always holds. In this case, however, the human knowledge is almost useless in the process, as it does not provide a meaningful upper bound, and the improvement from the human augmentation, if any, is minimal. This is the result of an overly conservative human belief. The observation highlights the trade-off between aggressive and conservative human augmentation. In the next proposition, we quantify the benefit of human augmentation. \begin{proposition}[Conditions for beneficial human augmentation] \label{prop:loss-one-side-up} Suppose Assumption~\ref{ass:general-loss} holds. \begin{enumerate}[label={(\roman*)}] \item The benefit of human augmentation can be quantified as \begin{equation}\label{equ:loss-closed} \mathbb E[l(\xa)]-\mathbb E[l(\xin)]=\mathbb E[(l(\xa)-l(\xh)) \mathbb{I} (\xh \leq \xa)]. \end{equation} \item A sufficient condition for beneficial augmentation $\mathbb E[l(\xin)] \le \mathbb E[l(\xa)]$ is \begin{equation}\label{equ:loss-sufficient-low} \mathbb E[l(\xa) \mathbb{I}(\xa > \xx,\xh \le \xx)] \ge \mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)]. \end{equation} \item A necessary condition for beneficial augmentation $\mathbb E[l(\xin)] \le \mathbb E[l(\xa)]$ is \begin{equation}\label{equ:loss-sufficient-high} \mathbb E[l(\xa) \mathbb{I}(\xa \ge \xx)] \ge \mathbb E[l(\xh)\mathbb{I}(\xh \le \xx,\xa > \xx)]. \end{equation} \end{enumerate} \end{proposition} Next, we interpret the result of Proposition~\ref{prop:loss-one-side-up}.
In \eqref{equ:loss-closed}, the benefit of human augmentation depends on the performance of the algorithmic decision $l(\xa)$ and human knowledge $l(\xh)$ on the event that the guardrail takes effect, i.e., $\mathbb{I} (\xh \leq \xa)$. This is intuitive because the analyst counts on her knowledge to improve the algorithmic decision when it looks ``unreasonable.'' Conditions \eqref{equ:loss-sufficient-low} and \eqref{equ:loss-sufficient-high} are easier to interpret when $\xa$ and $\xh$ are independent, although we allow them to be dependent. For example, suppose the human knowledge $\xh$ is independent of the data and hence of $\xa$. Then, \eqref{equ:loss-sufficient-low} and \eqref{equ:loss-sufficient-high} reduce to, respectively, \begin{align} \mathbb E[l(\xa) \mathbb{I}(\xa > \xx)] &\ge \mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)]/\mathbb P(\xh \le \xx), \label{equ:loss-sufficient-low-in}\\ \mathbb E[l(\xa) \mathbb{I}(\xa \ge \xx)] &\ge \mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)] \mathbb P(\xa > \xx). \label{equ:loss-sufficient-high-in} \end{align} The left-hand sides of \eqref{equ:loss-sufficient-low-in} and \eqref{equ:loss-sufficient-high-in} measure the performance of the algorithmic decision. In particular, \eqref{equ:loss-sufficient-low-in} says that it is beneficial to safeguard the algorithmic decision when the right-hand side $\mathbb E[l(\xh)\mid\xh\le \xx]$ is small enough. In other words, the conditional expected loss does not explode when the human makes a mistake and imposes an overly aggressive bound ($\xh\le \xx$). Moreover, the necessary condition \eqref{equ:loss-sufficient-high-in}, which is weaker than \eqref{equ:loss-sufficient-low-in}, implies that one should not safeguard the algorithmic decision when the expected loss incurred by the human belief, $\mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)]$, exceeds a certain amount.
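The reduced conditions can be checked by simulation. The sketch below (all parameter values are illustrative choices of ours) takes $\xa\sim N(\xx,\sigma^2/n)$, i.e., the sample-mean estimator of Example~\ref{exp:prediction}, for which $\mathbb E[l(\xa)\mathbb{I}(\xa>\xx)]=\sigma^2/(2n)$ under the squared loss; with a deterministic bound $x_h<\xx$, the sufficient condition becomes $(x_h-\xx)^2\le \sigma^2/(2n)$ and the necessary condition fails once $(x_h-\xx)^2\ge \sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
x_star, sigma, n = 0.0, 1.0, 25          # illustrative values
sigma_n = sigma / np.sqrt(n)             # std. dev. of the sample mean

def loss(x):
    return (x - x_star) ** 2             # squared loss of Example 1

# algorithmic decision: sample mean of n observations, distributed N(x*, sigma^2/n)
x_a = rng.normal(x_star, sigma_n, 500_000)

def benefit(x_h):
    """Monte Carlo estimate of E[l(x_a)] - E[l(min(x_a, x_h))]; positive iff safeguarding helps."""
    return loss(x_a).mean() - loss(np.minimum(x_a, x_h)).mean()

# bound below x* with (x_h - x*)^2 = sigma_n^2 / 3 <= sigma_n^2 / 2: sufficient condition holds
assert benefit(x_star - sigma_n / np.sqrt(3)) > 0
# bound with (x_h - x*)^2 = 4 * sigma_n^2 >= sigma_n^2: necessary condition violated
assert benefit(x_star - 2 * sigma_n) < 0
```

The two assertions reproduce, numerically, the two regimes separated by the conditions: a tight-but-aggressive bound helps, while an overly aggressive one hurts.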
It is easier to check whether the human augmentation is beneficial using \eqref{equ:loss-sufficient-low-in} and \eqref{equ:loss-sufficient-high-in} than by directly comparing \eqref{equ:loss-closed} with zero, because \eqref{equ:loss-closed} depends on the joint distribution of $(\xh,\xa)$ and the values of $l(\xh)$ and $l(\xa)$. In practice, the analyst may have collected data from past decision epochs during which either the algorithmic or the human decision was applied and its realized loss observed; the realized $l(\xh)$ and $l(\xa)$ may never be observed simultaneously. The conditions \eqref{equ:loss-sufficient-low-in} and \eqref{equ:loss-sufficient-high-in}, in contrast, only require the marginal distributions of $\xh$ and $\xa$, which allows the analyst to evaluate whether human augmentation is effective in a data-driven manner. Furthermore, we show the tightness of the sufficient condition \eqref{equ:loss-sufficient-low-in} relative to the necessary condition \eqref{equ:loss-sufficient-high-in}. The right-hand sides of \eqref{equ:loss-sufficient-low-in} and \eqref{equ:loss-sufficient-high-in} share the common term $\mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)]$. The residual multipliers $1/\mathbb P(\xh \le \xx)$ and $\mathbb P(\xa \ge \xx)$ in \eqref{equ:loss-sufficient-low-in} and \eqref{equ:loss-sufficient-high-in} tend to remain constant even as the data size grows: in the former, human knowledge usually does not scale with big data, while in the latter, for unbiased algorithmic decisions, $\mathbb P(\xa \ge \xx)\approx 1/2$. So the sufficient and necessary conditions tend to differ only by a constant factor.
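To illustrate the marginal-distribution point, the following stylized sketch (the distributions are assumptions of ours; in the simulation $\xx$ is known so the indicator events can be evaluated exactly, whereas in practice they would be approximated from past realized losses) estimates the two sides of \eqref{equ:loss-sufficient-low-in} from separate samples of $\xa$ and $\xh$:

```python
import numpy as np

rng = np.random.default_rng(2)
x_star = 5.0                             # optimal decision (known only to the simulation)

def loss(x):
    return (x - x_star) ** 2

# losses of the algorithmic decision and the human bound are observed in *separate*
# past epochs, so only the two marginal distributions are available to the analyst
x_a = rng.normal(x_star, 1.0, 200_000)           # algorithmic decision
x_h = rng.normal(x_star - 0.1, 0.05, 200_000)    # aggressive but tight human bound

lhs = (loss(x_a) * (x_a > x_star)).mean()                        # LHS of the sufficient condition
rhs = (loss(x_h) * (x_h <= x_star)).mean() / (x_h <= x_star).mean()

assert lhs >= rhs                                                # sufficient condition holds ...
assert loss(np.minimum(x_a, x_h)).mean() <= loss(x_a).mean()     # ... and safeguarding indeed helps
```

Here the bound is slightly aggressive ($\xh<\xx$ with high probability) yet tight, so the conditional loss on the right-hand side stays small and the sufficient condition certifies the benefit using marginal information only.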
In the next example, we show that the sufficient condition in Proposition~\ref{prop:loss-one-side-up} cannot be improved even when the likelihood $\mathbb P(\xh \le \xx)$ diminishes: the right-hand side of \eqref{equ:loss-sufficient-low-in} cannot be relaxed to a constant multiple of $\mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)]$. As a result, the condition tends to be tight. \begin{example}[Tightness of the sufficient condition \eqref{equ:loss-sufficient-low-in}]\label{prop:loss-counter-example} Suppose $l(x)=(x-\xx)^2$, $\xa \sim N(\xx,\sigma^2)$, and $\xh$ satisfies \begin{equation}\label{equ:loss-counter-example} \mathbb P(\xh=x)=\left\{ \begin{array}{ll} 1-\epsilon & \text{if} \ x=\infty,\\ \frac{3 \epsilon}{(x-\xx)^4} & \text{if} \ x \le \xx-1.\\ \end{array} \right. \end{equation} Then for any $a \geq 1/4$, if $\epsilon \in (0, \sigma^2/(6a))$, $\xx<1$, and $\sigma^2<3/2$, we have $\mathbb E[l(\xa) \mathbb{I}(\xa \ge \xx)] \ge a\,\mathbb E[l(\xh)\mathbb{I}(\xh \le \xx)]$, but $\mathbb E[l(\xin)] > \mathbb E[l(\xa)]$. \end{example} Example~\ref{prop:loss-counter-example} shows that the sufficient condition in Proposition~\ref{prop:loss-one-side-up} no longer guarantees beneficial augmentation if $\mathbb P(\xh \le \xx)$ in \eqref{equ:loss-sufficient-low-in} is replaced by a constant. To better understand \eqref{equ:loss-sufficient-low-in} and \eqref{equ:loss-sufficient-high-in}, we instantiate the conditions for Example~\ref{exp:prediction} (predictive analytics: prediction). \begin{example}[Conditions for beneficial augmentation for the prediction problem] Consider historical samples $Z_1,\ldots, Z_n \sim N(\xx,\sigma^2)$. The AI algorithm estimates $\xx$ by the sample mean $\xa=\frac{1}{n} \sum_{i=1}^n Z_i$, which follows the distribution $N(\xx,\sigma^2/n)$. By Proposition~\ref{prop:loss-one-side-up}, if the upper bound derived from the human belief satisfies $\mathbb E[(\xh-\xx)^2\mid \xh \le \xx] \le \sigma^2/(2n)$, then the augmentation improves the algorithmic decision.
On the other hand, if $\mathbb E[(\xh-\xx)^2 \mathbb{I}(\xh \le \xx)] \ge \sigma^2/n$, then the augmentation is not beneficial. \end{example} Proposition~\ref{prop:loss-one-side-up} provides an analytical framework to analyze the benefit of augmentation. Based on the framework, we can show that there is an optimal level of safeguard when the human belief is deterministic. \begin{corollary}[Optimal safeguard]\label{cor:loss-reduction} Suppose $\xh=x_h$ is a constant. Then the benefit of augmentation $\mathbb E[l(\xa)]-\mathbb E[l(\xin)]$ is unimodal in $x_h$, i.e., it increases when $x_h \le \xx$ and decreases when $x_h \ge \xx$. \end{corollary} Corollary \ref{cor:loss-reduction} holds under the condition that the human belief is deterministic. In this case, although the human analyst imposes an upper bound, it is best to set the bound equal to the true optimal decision $\xx$; a buffer is not necessary. Of course, the corollary cannot provide guidance for the human analyst to select the optimal bound, because $\xx$ is not accessible. It does, however, illustrate the trade-off between conservative and aggressive guardrails. Symmetrically, we can derive similar results when the guardrail derived from the human domain knowledge takes the form of a lower bound. \begin{corollary}[Safeguarded by a lower bound] \label{prop:loss-one-side-low} Consider $\xin=\max\{\xa,\xh\}$. We have (\romannumeral1) $ \mathbb E[l(\xa)]-\mathbb E[l(\xin)]=\mathbb E[(l(\xa)-l(\xh)) \mathbb{I} (\xh \geq \xa)]$; (\romannumeral2) a sufficient condition for $\mathbb E[l(\xin)] \le \mathbb E[l(\xa)]$ is \begin{equation}\label{equ:loss-sufficient-low-low} \mathbb E[l(\xa) \mathbb{I}(\xa < \xx,\xh \ge \xx)] \ge \mathbb E[l(\xh)\mathbb{I}(\xh \ge \xx)]; \end{equation} and (\romannumeral3) a necessary condition for $\mathbb E[l(\xin)] \le \mathbb E[l(\xa)]$ is \begin{equation} \mathbb E[l(\xa) \mathbb{I}(\xa \le \xx)] \ge \mathbb E[l(\xh)\mathbb{I}(\xh \ge \xx, \xa < \xx)].
\end{equation} \end{corollary} Next, we extend the results to two-sided bounds. In particular, suppose the human analyst imposes both lower and upper bounds on the algorithmic decision. For example, in the motivating example in the introduction, the station manager may propose a range for the retail price: the markup has to be between $\$0.10/L$ and $\$0.30/L$, regardless of the recommendation of the algorithm. Mathematically, the human belief translates to an interval $[\xhl,\xhu]$. The algorithmic decision $\xa$ is then projected onto the interval, i.e., $\xin=\min\{\max\{\xa,\xhl\},\xhu\}$. Proposition~\ref{prop:loss-two-side} characterizes the benefit of such augmentation. \begin{proposition}[Benefit of safeguarding using a two-sided bound] \label{prop:loss-two-side} Suppose Assumption \ref{ass:general-loss} holds. \begin{enumerate}[label={(\roman*)}] \item The benefit of the human safeguard by a two-sided bound is \begin{equation*} \mathbb E[l(\xa)]-\mathbb E[l(\xin)]=\mathbb E[(l(\xa)-l(\xhl)) \mathbb{I} (\xa \leq \xhl)]+\mathbb E[(l(\xa)-l(\xhu)) \mathbb{I} (\xa \geq \xhu)]. \end{equation*} \item A sufficient condition for $\mathbb E[l(\xin)] \le \mathbb E[l(\xa)]$ is \begin{align}\label{equ:loss-sufficient-low-two} \mathbb E[l(\xa) \mathbb{I}(\xa \ge \xx,\xhu \le \xx)] &+\mathbb E[l(\xa) \mathbb{I}(\xa \le \xx,\xhl \ge \xx)] \notag\\ &\ge \mathbb E[l(\xhu)\mathbb{I}(\xhu \le \xx)]+\mathbb E[l(\xhl)\mathbb{I}(\xhl \ge \xx)]. \end{align} \item A necessary condition for $\mathbb E[l(\xin)] \le \mathbb E[l(\xa)]$ is \begin{align}\label{equ:loss-sufficient-high-two} \mathbb E[l(\xa)] \ge \mathbb E[l(\xhu)\mathbb{I}(\xhu \le \xx \le \xa)] +\mathbb E[l(\xhl)\mathbb{I}(\xa \le \xx \le \xhl)]. \end{align} \end{enumerate} \end{proposition} If the bounds satisfy $\mathbb P(\xhl \le \xx \le \xhu)=1$, i.e., they always enclose the actual optimal decision, then the safeguard always improves the algorithmic decision.
When this condition fails, \eqref{equ:loss-sufficient-low-two} and \eqref{equ:loss-sufficient-high-two} provide conditions to check whether to safeguard the algorithmic decision. Intuitively, the safeguard is beneficial when the loss incurred by the interval failing to cover $\xx$ is relatively small compared to the loss of the algorithmic decision. One can see that Proposition~\ref{prop:loss-two-side} reduces to Proposition~\ref{prop:loss-one-side-up} and Corollary~\ref{prop:loss-one-side-low} when $\xhl=-\infty$ or $\xhu=\infty$. We point out that the conditions for two-sided bounds are weaker than those for one-sided bounds: if $\xhu$ satisfies \eqref{equ:loss-sufficient-low} and $\xhl$ satisfies \eqref{equ:loss-sufficient-low-low}, then $[\xhl,\xhu]$ satisfies \eqref{equ:loss-sufficient-low-two}, but the reverse is not true. Thus the two-sided conditions allow the human to make larger mistakes on one side as long as the loss can be compensated by the other side. \subsection{Covariate Information}\label{sec:covariate} So far, we have considered a simple model in which the environment does not provide any covariate information at the specific decision epoch. However, in many data-driven decision-making problems, the analyst may observe additional covariate information $W$, and hence the optimal decision $\xx$ can depend on such covariate information. Upon observing the covariates, the algorithm outputs a decision $\xa(W)$. In the prediction problem (Example~\ref{exp:prediction}), one can think of $W$ as a new input to the prediction algorithm, such as weather conditions. In the pricing problem (Example~\ref{exp:pricing}), $W$ may represent the available side information about the market that assists the choice of the optimal price. For example, PDI Fuel Pricing would take into account the crude oil price, which is a major cost component, to determine the retail gas price.
In this case, the crude oil price changes over time and can be considered part of the covariate information. After receiving the algorithmic recommendation, the human analyst comes up with a bound $[\xhl(W),\xhu(W)]$ to safeguard it. That is, $\xin(W)=\min\{\max\{\xa(W),\xhl(W)\},\xhu(W)\}$. Note that in many cases the human domain knowledge may not be sophisticated enough to adapt to a specific covariate $W$. In such cases, $\xhl$ and $\xhu$ do not depend on $W$, which is also covered by our framework. When the covariate information is available, the loss function $l(x,w)$ depends on both the decision and the covariate. We impose the following assumption, parallel to Assumption~\ref{ass:general-loss}. \begin{assumption}\label{ass:general-loss-covariate} Assume the loss function $l(x,w)$ satisfies the following conditions. \begin{enumerate}[label={(\roman*)}] \item For any decision $x \in \mathbb R$ and any covariate $w \in \mathbb R^d$, $l(x,w) \ge 0$. \item For any covariate $w \in \mathbb R^d$, $l(\cdot,w)$ is quasiconvex with minimizer $\xx(w)$. \end{enumerate} \end{assumption} Next, we characterize the benefit of human augmentation in the presence of covariate information by generalizing Proposition~\ref{prop:loss-two-side}. Note that $\xhl$, $\xhu$, $\xa$, and $\xx$ all depend on (and are correlated with) $W$; we omit the dependence for readability. \begin{proposition}[Benefit of human augmentation with covariate information] \label{prop:loss-two-side-cov} Suppose Assumption \ref{ass:general-loss-covariate} holds. \begin{enumerate}[label={(\roman*)}] \item The benefit of human augmentation is \begin{align}\label{equ:loss-two-closed-cov} \mathbb E[l(\xa,W)]-\mathbb E[l(\xin,W)] &=\mathbb E[(l(\xa,W)-l(\xhl,W)) \mathbb{I} (\xa \leq \xhl)] \notag\\ &\quad+\mathbb E[(l(\xa,W)-l(\xhu,W)) \mathbb{I} (\xa \geq \xhu)].
\end{align} \item A sufficient condition for $\mathbb E[l(\xin,W)] \le \mathbb E[l(\xa,W)]$ is \begin{align}\label{equ:loss-sufficient-low-two-cov} \mathbb E[l(\xa,W) \mathbb{I}(\xa \ge \xx,\xhu \le \xx)] &+\mathbb E[l(\xa,W) \mathbb{I}(\xa \le \xx, \xhl \ge \xx)] \notag\\ &\ge \mathbb E[l(\xhu,W)\mathbb{I}(\xhu \le \xx)]+\mathbb E[l(\xhl,W)\mathbb{I}(\xhl \ge \xx)]. \end{align} \item A necessary condition for $\mathbb E[l(\xin,W)] \le \mathbb E[l(\xa,W)]$ is \begin{align}\label{equ:loss-sufficient-high-two-cov} \mathbb E[l(\xa,W) ] \ge \mathbb E[l(\xhu,W)\mathbb{I}(\xhu \le \xx,\xa \ge \xx)]+\mathbb E[l(\xhl,W)\mathbb{I}(\xhl \ge \xx,\xa \le \xx)] . \end{align} \end{enumerate} \end{proposition} When $W$ is a constant, Proposition~\ref{prop:loss-two-side-cov} reduces to Proposition~\ref{prop:loss-two-side}. As expected, the conditions in Proposition~\ref{prop:loss-two-side-cov} are more involved, although they take a form similar to those in Proposition~\ref{prop:loss-two-side}. To explain the intuition, we consider the following example. \begin{example}[Linear regression]\label{example:MSE-LR} Linear regression is a special case of the prediction problem (Example~\ref{exp:prediction}) with covariates. Suppose the loss function is $l(x,w) = (x-w^\top\beta)^2$ for some unknown coefficient vector $\beta$. As a result, the optimal decision is $\xx(W)=W^\top \beta$. Using the least squares estimator $\hat\beta$, the algorithmic output is $\xa(W) = W^\top \hat\beta$. In the necessary condition \eqref{equ:loss-sufficient-high-two-cov}, the left-hand side is the mean squared error (MSE) of the least squares estimator, which typically converges to zero at the rate $1/n$, where $n$ is the sample size. In this case, for the human augmentation to outperform the algorithm, the right-hand side of \eqref{equ:loss-sufficient-high-two-cov} should diminish at the same or a faster rate.
This is possible only if the bounds derived from human belief, $\xhl$ and $\xhu$, have diminishing MSEs $l(\xhl,W)$ and $l(\xhu,W)$, or they almost always sandwich the optimal $x^*$, i.e., $ \xhl \le x^*\le \xhu$. Both requirements set an impractically high bar for the human domain knowledge. \end{example} From the example, we see that AI incurs diminishing loss as it gathers more data. In a data-rich environment, it appears that the human domain knowledge is not likely to improve AI. However, Example~\ref{example:MSE-LR} does not reflect one of the major reasons why human domain knowledge may be helpful: algorithms designed for general purposes sometimes ignore practical factors in the training dataset, such as contamination, model misspecification, and data errors. In the following sections, we provide examples to show that even if AI has a large amount of data, the human knowledge can still play an important role and contribute to decision-making. \section{Three Use Cases on Beneficial Human Augmentation} In this section, we provide three concrete use cases in which the safeguard derived from human knowledge can indeed improve algorithmic decisions even with large data, despite the potential limitation of human knowledge illustrated in Example~\ref{example:MSE-LR}. We first provide a summary of the three use cases as follows: \begin{itemize} \item In Section~\ref{sec:compe-pricing}, we consider a pricing problem (Example~\ref{exp:pricing}) under competition. We show that when the algorithm fails to take into account the competitive environment the pricing problem resides in, simple human augmentation like price matching can improve the algorithmic decision. \item In Section~\ref{sec:mis}, we consider a pricing problem when the algorithm misspecifies the demand function.
We show that using empirical observations (in particular, setting a price interval using the historical price range that contains the highest profit in the past) can improve the performance of the algorithm. \item In Section~\ref{sec:conta-LR}, we show that in prediction problems (Example~\ref{exp:prediction}), the human knowledge can serve as a robust mechanism to limit the damage due to data contamination and thus improve the algorithm's performance. \end{itemize} \subsection{Pricing Algorithm under Competition}\label{sec:compe-pricing} In this section, we consider the pricing problem of a focal firm when there is a competitor in the market. The loss function of the firm under price $p$, given the price of the competitor $\ph$, is the negative revenue under a linear demand function: \begin{equation*} l(p)=\mathbb E[-p d(p, \ph)] \coloneqq -p(\alpha-\beta p+ \gamma \ph). \end{equation*} We assume $\beta>\gamma>0$, which is standard in the literature and means that the firm's demand is more sensitive to its own price than to its competitor's price. The best response of the firm, given the competitor's price, can be easily solved as: \begin{equation*} p^*=\argmin_{p} l(p)=\frac{\alpha+\gamma \ph}{2 \beta}. \end{equation*} However, this best response requires the knowledge of $\alpha,\beta,\gamma$, which are typically unavailable to the firm. Next we specify how an algorithm may recommend a price based on the historical data. \textbf{Algorithmic price.} The algorithm attempts to learn the demand function from the historical data. However, the algorithm may not be aware of the presence of the competitor (see, e.g., \citealt{cooper2015learning}). Consider the gas station example in the introduction. To provide the competitors' prices as inputs to PDI Fuel Pricing, the station manager needs to check the prices of nearby gas stations periodically.
Even when this is done, the resolution of the competitors' prices may be lower than that of the historical prices of the focal station, which have constantly been recorded in its system. To accommodate this realistic setting with data unavailability, we assume that the algorithm attempts to learn a monopolistic demand function \begin{equation}\label{equ:compe-demand-AI} \hat d(p)=\hat\alpha-\hat\beta p. \end{equation} This setting of learning a monopolistic demand function under competition has also been studied in \citet{cooper2015learning} and \citet{hansen2021frontiers}. The algorithm has access to the historical prices $p_1,\ldots,p_n$ and realized demand $d_1,\ldots,d_n$. We assume that the demand is generated by \begin{equation}\label{compe-true-demand} d_t =\alpha-\beta p_t+ \gamma p_t'+\epsilon_t, \end{equation} for some independent and identically distributed (i.i.d.) noise $\epsilon_t$ and $t=1,\dots,n$. The algorithm uses the ordinary least squares (OLS) to estimate $\hat{\alpha}$ and $\hat{\beta}$. Finally, the algorithm recommends a price maximizing the estimated revenue $p \hat d(p)$ implied by \eqref{equ:compe-demand-AI}, i.e., $\pa={\hat{\alpha}}/{(2 \hat{\beta})}$. Note that the historical demand and the algorithmic price $\pa$ depend on the unobserved competitor's prices $p_1',\ldots,p_n'$. To analyze $\pa$, we impose the following assumption. \begin{assumption}\label{asp:comp-pricing} The prices $(p_n, p_n')$ are i.i.d.\ for $n=1,2,\dots$. Moreover, $\mathbb E[p_n]=\mathbb E[ p_n']=\mu$, $\Var(p_n) = \Var(p_n')=\sigma^2$, and the correlation of $(p_n, p_n')$ is $\rho\in [0,1]$. \end{assumption} Assumption~\ref{asp:comp-pricing} is rather mild if we consider a symmetric duopoly in which the past samples are all independent. The condition $\rho\ge 0$ implies that the prices of competing firms are positively correlated. The next result characterizes the asymptotic behavior of $\pa$. \begin{lemma}\label{lem:compe-AI-price} Suppose Assumption~\ref{asp:comp-pricing} holds.
The algorithmic price $\pa$ converges in probability to \begin{equation}\label{equ:compe-AI-converge} \plim_{n\to\infty}\pa = \frac{\alpha+ \gamma \mu (1-\rho)}{2(\beta-\gamma \rho)}. \end{equation} \end{lemma} To understand the algorithmic price with large data ($n\to \infty$), note that the symmetric Nash equilibrium price satisfies $p_{NE}=\argmax_p\{pd(p, p_{NE})\}=\alpha/(2 \beta-\gamma)$. If $\rho=1$, i.e., the historical prices of the firm and the competitor are perfectly correlated, then $\pa$ converges to ${\alpha}/{(2(\beta-\gamma))}$, the collusive price that maximizes the joint revenue of both parties. Such a collusive price is clearly higher than $p_{NE}$. On the other hand, if $\rho=0$ and $\mu=p_{NE}$, i.e., the historical prices of the firm and the competitor are uncorrelated and centered around the Nash equilibrium, then $\pa$ converges to $p_{NE}$. \textbf{Human safeguard --- price matching.} We consider price matching, a common competitive strategy for analysts. In particular, after receiving the price recommended by the algorithm, the analyst may check the competitor's price. If the competitor's price is lower than the algorithmic price, then the analyst lowers the algorithmic price to match the competitor's price. That is, the human-safeguarded price is $\prin=\min\{\pa,\ph\}$. Such an augmentation strategy is highly relevant for the human analyst: (i) it does not depend on the historical data or the unknown parameters, (ii) it is easy to process and explain to human managers, (iii) it takes into account the competitive environment, and (iv) it can be used to complement the algorithmic price. Moreover, since price matching specifies an upper bound, it also fits into the general framework in Section~\ref{sec:general-loss}. In the next theorem, we characterize the condition under which the human augmentation improves the algorithmic price.
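The limit in \eqref{equ:compe-AI-converge} is easy to verify numerically. The following Python sketch (all parameter values are illustrative assumptions, not part of the model) regresses simulated duopoly demand on the firm's own price only and compares the resulting algorithmic price with the limit in Lemma~\ref{lem:compe-AI-price}:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, gamma = 10.0, 2.0, 1.0   # illustrative true demand parameters
mu, sigma, rho = 3.0, 0.5, 0.6        # price moments as in the assumption above
n = 200_000

# Correlated historical prices of the firm and the competitor.
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
prices = rng.multivariate_normal([mu, mu], cov, size=n)
p, p_comp = prices[:, 0], prices[:, 1]

# Demand generated by the true duopoly model with i.i.d. noise.
d = alpha - beta * p + gamma * p_comp + rng.normal(0.0, 0.1, size=n)

# The algorithm regresses demand on its own price only (monopolistic model).
A = np.column_stack([np.ones(n), p])
(a0, a1), *_ = np.linalg.lstsq(A, d, rcond=None)
alpha_hat, beta_hat = a0, -a1         # fitted model: d_hat = alpha_hat - beta_hat * p
p_a = alpha_hat / (2.0 * beta_hat)

# Limit predicted by the lemma: (alpha + gamma*mu*(1-rho)) / (2*(beta - gamma*rho)).
p_limit = (alpha + gamma * mu * (1.0 - rho)) / (2.0 * (beta - gamma * rho))
print(p_a, p_limit)
```

Setting $\rho$ closer to one in the same sketch pushes the simulated algorithmic price toward the collusive level, consistent with the discussion above.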
\begin{theorem}\label{thm:compe-pricing} Suppose Assumption~\ref{asp:comp-pricing} holds and the algorithmic price is given by the limit in \eqref{equ:compe-AI-converge}. Assume $\mu \ge p_{NE}$. If the competitor's price satisfies \begin{equation*} \ph \ge p_L\coloneqq \frac{\alpha \beta - 2\alpha \gamma \rho - \beta \gamma (1-\rho) \mu}{2(\beta-\rho \gamma)(\beta-\gamma)}\in (0, p_{NE}), \end{equation*} then the revenue of the safeguarded price is higher than that of the algorithmic price, i.e., $ \prin d(\prin,\ph)\ge \pa d(\pa,\ph)$. In addition, if $\ph \in (p_L,\pa)$, then the revenue improvement is strictly positive. \end{theorem} To understand this theorem, first note that the assumption $\mu\ge p_{NE}$ is mild. It merely states that the mean historical price lies between the equilibrium price and the collusive price, since the latter is higher. The expression for $p_L$ is complicated, but we can show that $p_L \le p_{NE}$. As long as the competitor's price is not significantly lower than the equilibrium price, the augmentation by price matching improves the algorithmic decision. Price matching is particularly useful when the competitor undercuts the algorithmic price ($\ph<\pa$) while not setting a price much lower than the equilibrium price ($\ph>p_L$). \subsection{Misspecified Algorithms}\label{sec:mis} Consider the pricing problem in Example~\ref{exp:pricing} in a monopolistic market. The demand function is assumed to be a general non-increasing function $f(p)$, and the unit cost of the product is $c$. As a result, the loss function faced by the analyst is $l(p) = -(p-c)f(p)$. The optimal price $p^*$ satisfies $p^*=\argmin_{p} l(p)$. Since the demand function $f$ is unknown to the analyst, she sets up price experiments to collect data and uses AI algorithms to learn the demand function. \textbf{Price experiments.} In practice, the analyst cannot charge prices arbitrarily.
For fear of consumer backlash, the price experimentation usually takes the form of promotions, such as \$10-off coupons. The analyst has done price experimentation at a grid of prices and observed realized demand at those price points. In particular, we consider the following uniform price grid between $[c,\bar p]$: \begin{equation}\label{equ:mis-discrete-prices} p_j=c+j\frac{\bar{p}-c}{n}, \quad j=0,1,\dots, n, \end{equation} where $\bar p$ represents the nominal price without any promotion. For each price on the grid, we suppose $K$ noisy demand observations have been collected: \begin{equation*} f(p_j)+\epsilon_{jk}, \ \forall \, k=1,2,\ldots,K, \end{equation*} where $\epsilon_{jk}$ is an independent $\sigma$-sub-Gaussian noise. For example, the firm may have set the price $p_j$ for $K$ hours and recorded the hourly demand, whose mean is $f(p_j)$ with noise $\epsilon_{jk}$. \textbf{Algorithmic decision.} In order to find the optimal price $p^*$, the algorithm needs to learn the demand function $f(p)$. However, because the price experiments are only conducted on a grid, the algorithm typically postulates a model for the demand function and estimates the model parameters. One of the most common models is the linear demand function. That is, \begin{equation}\label{eq:mis-linear} \hat f(p)=\hat \alpha-\hat \beta p, \end{equation} where $\hat\alpha$ and $\hat\beta$ are chosen so that $\hat f$ is the best linear fit to the points $\{(p_j, f(p_j)+\epsilon_{jk})\}$ for $j=0,\dots, n$ and $k=1,\dots,K$ in terms of the $\ell_2$ error: \begin{equation}\label{equ:mis-fit-MSE} (\hat \alpha, \hat \beta)=\argmin_{\alpha,\beta} \left\{\sum_{j=0}^n \sum_{k=1}^K\left(f(p_j)+\epsilon_{jk}-\alpha+\beta p_j\right)^2\right\}. \end{equation} Eventually, the algorithm outputs an optimized price $\pa$ based on the estimated demand function $\hat{f}(p)$, i.e., $\pa=\argmax_p\, (p-c)\hat f(p)= {\hat \alpha}/{(2\hat\beta)}+c/2$.
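The algorithmic pipeline above can be sketched in a few lines of Python (the exponential demand and all constants are assumptions for illustration): fit the linear model \eqref{eq:mis-linear} on the price grid and optimize the fitted profit.

```python
import numpy as np

rng = np.random.default_rng(1)
c, p_bar, n, K = 1.0, 10.0, 10, 200         # illustrative values; K is large here
f = lambda p: 10.0 * np.exp(-p / 3.0)       # assumed true (nonlinear) demand

# Uniform price grid (as in the grid equation above) and noisy demand observations.
p_grid = c + np.arange(n + 1) * (p_bar - c) / n
obs_p = np.repeat(p_grid, K)
obs_d = f(obs_p) + rng.normal(0.0, 0.5, size=obs_p.size)

# Best linear fit f_hat(p) = alpha_hat - beta_hat * p in the l2 sense.
A = np.column_stack([np.ones(obs_p.size), obs_p])
(a0, a1), *_ = np.linalg.lstsq(A, obs_d, rcond=None)
alpha_hat, beta_hat = a0, -a1

# Algorithmic price p_a = alpha_hat / (2 beta_hat) + c / 2.
p_a = alpha_hat / (2.0 * beta_hat) + c / 2.0
print(round(p_a, 2))    # around 5.0, while the true optimum is p* = 4
```

Even with many observations per grid point, the fitted price stays away from the true optimum: the gap comes from the misspecified model, not from sampling noise.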
Although linear demand models have been shown to perform well under misspecification \citep{cohen2021simple}, we do not claim that the linear model is necessarily the best algorithmic choice in our setup. In fact, given the data points $\{(p_j, f(p_j)+\epsilon_{jk})\}$, there may be other choices such as a linear interpolation that can fit the demand function better. Our main goal is to demonstrate a salient feature of a wide range of algorithms based on parametric statistical models---the chosen model may be misspecified and hence the resulting algorithm may be built on a shaky foundation. That is, the relationship between price and demand may not be accurately captured by the postulated class of models. In this case, even with sufficient data, the misspecification cannot be fully remedied. As the left panel of Figure~\ref{fig:mis-linear} shows, such misspecification can be quite pronounced for the linear model. \begin{figure}[t] \centering \pgfplotsset{ every axis/.append style={ line width=1pt, tick style={line width=0.8pt}}, scaled x ticks=false, tick label style={font=\normalsize}, major grid style={dotted,color=black}, legend style={font=\normalsize}, tick align=center, width=8.8cm} \begin{tikzpicture} \begin{axis} [xtick={1,2,3,4,5,6,7,8,9,10}, ytick={0,1,2,3,4,5,6}, xmin=1, xmax=10.5, ymin = 0, ymax = 6.5, xlabel=$p_j$, legend style ={at={(0.5,1.03)},anchor=south }, grid=major ] \addplot+ [only marks, blue, mark options={fill=blue}, mark=*] table[x = xpoint, y = fy, col sep=comma] {misfigure.csv}; \addlegendentry{Observed demand $f(p)+\epsilon$} \addplot [blue,domain=1:9,samples=10] {6.3087-0.694*x}; \addlegendentry{Misspecified demand $\hat{f}(p)$} \addplot [red,domain=1:10,samples=30, dashed] {10*exp(-x/3)}; \addlegendentry{Actual demand $f(p)$} \end{axis} \end{tikzpicture} \hfill \begin{tikzpicture} \begin{axis}[xtick={1,2,3,4,5,6,7,8,9,10}, ytick={0,1,2,3,4,5,6,7,8,9}, xmin=0.5, xmax=10.5, ymin = 0, ymax = 10,
xlabel=$p_j$, legend style ={at={(0.5,1.03)},anchor=south}, grid=major, extra x tick style={ x tick label as interval=false } ] \addplot+ [only marks, blue, mark options={fill=blue}, mark=*] table[x = xpoint, y = xfy, col sep=comma] {misfigure.csv}; \addlegendentry{Observed profit $(p-c) (f(p)+\epsilon)$} \addplot[line width=1.1pt,ybar interval, draw=blue ] coordinates { (1.5,5.0045) (2.5,7.0755) (3.5,8.3929) (4.5,8.84) (5.5,7.0635) (6.5,5.7788) (7.5,4.1711) (8.5,3.5318) (9.5,2.926) (10.5,2.926) }; \addlegendentry{Average profit $(p-c) \tilde f(p)$} \addplot [red,domain=1:10,samples=30, dashed] {10*(x-1)*exp(-x/3)}; \addlegendentry{Actual profit $(p-c)f(p)$} \end{axis} \end{tikzpicture} \caption{The left figure shows the misspecified linear model fitted by the algorithm. The actual demand function is $f(p)=10 \exp(-p/3)$ with $c=1$, $\bar{p}=10$, $n=10$, and $K=3$. The OLS estimator is $\hat{\alpha}=6.309$, $\hat{\beta}=0.694$. The right figure shows the human analyst identifying the price that earns the highest empirical profit.} \label{fig:mis-linear} \end{figure} \textbf{Human knowledge.} How can human knowledge help with the misspecified algorithm? Due to the limitations of the human brain, the human analyst typically does not form a model to process the historical data, and it would be impossible for her to judge whether the algorithmic price suffers from misspecification. We consider a rather natural and straightforward approach: since the analyst observes the noisy demand on the price grid, she first uses the average to form an estimate of the demand at each price as \begin{equation} \tilde{f}(p_j) \coloneqq f(p_j)+\frac{1}{K} \sum_{k=1}^K \epsilon_{jk}. \end{equation} Using this estimate, the optimal price on the grid that generates the highest empirical profit $(p_j-c) \tilde f(p_j)$ can be easily calculated.
Suppose $j^*$ is the index of one of the optimal prices on the grid: \begin{equation*} (p_{j^*}-c) \tilde f(p_{j^*})\ge (p_{j}-c) \tilde f(p_{j})\quad \forall j=0,\dots, n. \end{equation*} Taking the demand function in Figure~\ref{fig:mis-linear} as an example, the human analyst observes the noisy demand on the price grid $\{1,2,\ldots,10\}$. Then, she chooses the price point $p_{j^*}=5$ that gives the highest empirical profit. In this example, the chosen price $p_{j^*}=5$ does not equal the true optimal price $p^*=4$, but the neighborhood $[p_{j^*-1},p_{j^*+1}]=[4,6]$ includes $p^*$. It is easy to see the complementary effects of human knowledge and the algorithm in this example. The optimal price from the human knowledge is empirically validated without any statistical model. However, while the algorithm may suffer from misspecification, it has two strengths unmatched by the human analyst. First, the algorithm aggregates all $(n+1)K$ demand observations while the human analyst takes the average of $K$ demand observations locally. It is well known that more samples improve the statistical prediction power. Second, the human analyst does not attempt to specify a model to extrapolate the demand function. As a result, only the prices on the grid can be selected, and the price picked by the analyst always carries a discretization error. For example, if the prices $\{\bar p-20, \bar p-10, \bar p\}$ have been experimented, i.e., two types of promotions, $\$20$ off and $\$10$ off, in addition to the nominal price $\bar p$, have been offered in the past, then $p_{j^*}$ can be suboptimal if the actual optimal price is $\bar p-15$. In this case, the algorithm learns a model that interpolates the price gaps on the grid and remedies the discretization error. The following result characterizes the performances of the two approaches, allowing us to further understand the benefit of human augmentation to the algorithm.
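The human selection rule above can be sketched as follows (the exponential demand, the integer grid, and the noise level are illustrative assumptions): average the $K$ observations at each grid price and pick the price with the highest empirical profit.

```python
import numpy as np

rng = np.random.default_rng(2)
c, p_bar, n, K = 1.0, 10.0, 9, 200      # n = 9 yields the integer grid {1,...,10}
f = lambda p: 10.0 * np.exp(-p / 3.0)   # assumed true demand; true optimum p* = 4

p_grid = c + np.arange(n + 1) * (p_bar - c) / n
demand_obs = f(p_grid)[:, None] + rng.normal(0.0, 0.5, size=(n + 1, K))

# Human estimate: average the K observations at each grid price ...
f_tilde = demand_obs.mean(axis=1)
# ... and pick the grid price with the highest empirical profit.
j_star = int(np.argmax((p_grid - c) * f_tilde))
lo = p_grid[max(j_star - 1, 0)]
hi = p_grid[min(j_star + 1, n)]
print(p_grid[j_star], (lo, hi))         # the interval should bracket p* = 4
```

No model is fitted here; the selected price is only as fine as the grid, which is exactly the discretization error discussed above.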
\begin{proposition}\label{prop:mis-finite-human} Assume that the loss function $l(p)$ is strongly convex with parameter $\lambda$ (or equivalently, the profit function is $\lambda$-concave), i.e., \begin{equation*} l(p) \ge l(p')+l'(p') (p-p')+\frac{\lambda}{2} (p-p')^2, \quad \forall \, p,p' \in [c,\bar{p}]. \end{equation*} We then have \begin{enumerate} \item [(\romannumeral1)][Algorithmic decision] Let $p_a^*$ denote the optimal price for the misspecified linear demand. Given $n$ and $K$, we have \begin{equation}\label{equ:mis-sample-AI} \mathbb P(|\pa-p_a^*| \ge \delta) \le 4 \exp(-b nK), \end{equation} where $b$ is a constant independent of $n$ and $K$. \item [(\romannumeral2)][Human knowledge] The probability of the true optimal price not falling into the neighborhood of the human's estimated price $p_{j^*}$ satisfies \begin{equation}\label{equ:mis-sample-human} \mathbb P\left(p^* \notin [p_{j^*-1},p_{j^*+1}] \right) \le 2(n+1) \exp\left(-\frac{K \lambda^2 (\bar{p}-c)^4}{32 \sigma^2 \bar{p}^2 n^4}\right). \end{equation} \end{enumerate} \end{proposition} Note that for large samples ($K\to\infty$ or $n\to\infty$), the algorithmic decision does not converge to the true optimal price $p^*$. Instead, it converges to the optimal price for the misspecified linear demand. For the human knowledge, without misspecification, the price neighborhood $[p_{j^*-1},p_{j^*+1}]$ on the grid around $p_{j^*}$ will eventually include the true optimal price as $K\to\infty$. However, compared to the algorithmic decision, the human's error probability can be significantly inflated, and it even increases in the number $n$ of price points, reflecting a lack of data efficiency, while the algorithm's error probability decreases in $n$.
\textbf{Augmentation by safeguarding.} Because of the weaknesses revealed by the finite-sample results in Proposition~\ref{prop:mis-finite-human} (i.e., the convergence to the wrong target in \eqref{equ:mis-sample-AI} by the algorithm and the inefficient sample use in \eqref{equ:mis-sample-human} by the human), the analyst may decide to integrate both approaches. Based on $p_{j^*}$, the human analyst imposes a guardrail $[p_{j^*-1}, p_{j^*+1}]$ as in Proposition~\ref{prop:loss-two-side}. In other words, the human analyst uses the two neighboring prices of $p_{j^*}$ on the grid to form an interval to regulate the algorithmic output. As a result, the safeguarded algorithmic price is \begin{equation*} \prin=\max\{\min\{\pa,p_{j^*+1}\},p_{j^*-1}\}. \end{equation*} The following result characterizes the condition under which such an augmentation is beneficial. \begin{theorem}\label{thm:misspec} Assume the profit function $(p-c)f(p)$ is unimodal. The augmentation improves the algorithmic price, i.e., $(\prin-c)f(\prin) \ge (\pa-c)f(\pa)$, if the true optimal price $p^* \in [p_{j^*-1},p_{j^*+1}]$. In particular, the latter condition always holds when $K\to\infty$. \end{theorem} Theorem~\ref{thm:misspec} requires the profit function to be unimodal, which is satisfied by most demand functions $f(\cdot)$ (see, e.g., \citealt{ziya2004relationships}). As a result, the regime that sees the most benefit of human augmentation is when $n$ is fixed but $K\to\infty$, i.e., the price experimentation is conducted on a few prices for an extended period. This is arguably a common scenario in retailing, due to the infeasibility of frequent price changes. In this case, by augmenting the algorithmic price with the bounds distilled from the human knowledge, the safeguarded price $\prin$ enjoys the best of both worlds.
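A minimal sketch of the full safeguarding pipeline, under the same illustrative demand assumptions as before, combines the two approaches and clips the algorithmic price into the human interval:

```python
import numpy as np

rng = np.random.default_rng(3)
c, p_bar, n, K = 1.0, 10.0, 9, 200
f = lambda p: 10.0 * np.exp(-p / 3.0)            # assumed true demand, p* = 4
profit = lambda p: (p - c) * f(p)

p_grid = c + np.arange(n + 1) * (p_bar - c) / n
obs = f(p_grid)[:, None] + rng.normal(0.0, 0.5, size=(n + 1, K))

# Algorithm: OLS linear demand on all (n+1)K observations, then optimize.
X = np.column_stack([np.ones((n + 1) * K), np.repeat(p_grid, K)])
(a0, a1), *_ = np.linalg.lstsq(X, obs.reshape(-1), rcond=None)
p_a = a0 / (-2.0 * a1) + c / 2.0                 # alpha_hat/(2 beta_hat) + c/2

# Human: grid price with the highest empirical profit, and its neighbors.
j_star = int(np.argmax((p_grid - c) * obs.mean(axis=1)))
lo, hi = p_grid[max(j_star - 1, 0)], p_grid[min(j_star + 1, n)]

# Safeguard: clip the algorithmic price into the human interval [lo, hi].
p_in = min(max(p_a, lo), hi)
print(p_a, (lo, hi), p_in)
```

Whenever the interval contains the true optimum, the unimodality of the profit function guarantees that the clipped price earns at least as much as the raw algorithmic price, mirroring the theorem above.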
Intuitively, when $p^*$ falls in the interval $[p_{j^*-1}, p_{j^*+1}]$ and the algorithmic price is outside the interval, the human safeguard always pulls the algorithmic price toward the actual optimal price and improves the algorithmic recommendation due to the unimodality of the profit function. On the other hand, when the algorithmic price falls into the same interval, indicating that the discretization error may exceed the misspecification error (imagine $[p_{j^*-1}, p_{j^*+1}]$ being a wide interval), the guardrail does not take effect and the analyst follows the algorithmic decision. This result confirms the complementary effects of algorithms and human knowledge, in particular, the robustness of simple heuristics against model misspecification. We next study two commonly used forms of the demand function $f(\cdot)$ and characterize the conditions under which the algorithmic price falls outside the interval $[p_{j^*-1}, p_{j^*+1}]$, i.e., when it is strictly improved by the human augmentation. We consider $K\to\infty$ in both examples. \begin{example}[Isoelastic demand] Consider the demand function $f(p)=bp^{-a}$ where $a>1$ and $b>0$. It can be shown that the profit function is unimodal, and the optimal price is $p^*=\frac{ac}{a-1}$. We can show that when the nominal price \begin{equation}\label{equ:mis-poly} \bar{p} > \dfrac{\frac{a}{a-1}-\frac{1}{2}-\frac{2}{n}}{\frac{1}{3}-\frac{2}{n}} c, \end{equation} the algorithmic price $\pa$ is outside the interval $[p_{j^*-1}, p_{j^*+1}]$ and thus the human augmentation strictly improves the algorithm. For example, if $a=2$, $n=10$, then \eqref{equ:mis-poly} is equivalent to $\bar{p}>2.25c$, i.e., when the nominal price is more than 125\% higher than the production cost. \end{example} \begin{example}[Exponential demand] Consider the demand function $f(p)=b e^{-ap}$ where $a>0$. It can be shown that the optimal price is $p^*=1/a+c$.
We can show that when \begin{equation}\label{equ:mis-exp} \bar{p} > \dfrac{\frac{1}{a}+(\frac{1}{2}-\frac{2}{n})c}{\frac{1}{3}-\frac{2}{n}}, \end{equation} the human augmentation strictly improves the algorithm. For example, if $a=2$, $n=10$, then \eqref{equ:mis-exp} translates to $\bar{p}>3.75+2.25c$. \end{example} From both examples, we see that when $\bar p$ is much larger than $c$, i.e., the interval for the price experiments is wider, the misspecification error of the algorithm grows relative to the discretization error of the guardrail, and hence the human augmentation becomes more beneficial. \subsection{Data Contamination}\label{sec:conta-LR} In this section, we consider the case when the data can be contaminated, due to outliers, reporting errors, etc. When the possibly contaminated data are fed into algorithms, the errors in the data propagate to the output decisions. For this reason, \cite{fogliato2022case} advocate humans-in-the-loop to mitigate the data contamination. Human analysts are less susceptible to data contamination because the human brain cannot process large data sets, which turns out to be a blessing rather than a curse in this case, as it makes the human knowledge robust to minor data contamination. Next we provide a formal analysis of human augmentation to the algorithms for this use case. The application we consider is the linear regression problem in Example \ref{example:MSE-LR}. Without contamination, the historical data $\{(X_i, W_i)\}_{i=1}^n$ is generated by $X_i=W_i^\top \beta+ \epsilon_i$ for some unknown coefficients $\beta$ and noise $\epsilon_i$. We consider two contamination mechanisms that affect a fraction of samples: contamination in the response \citep{bhatia2017consistent} and in the covariates \citep{mcwilliams2014fast,loh2011high}. Before formally introducing the mechanisms, we first explain how the algorithm and the human knowledge play their roles in the process.
Unable to tell whether the data is contaminated, the algorithm simply applies the OLS estimator to the data\footnote{We acknowledge that there are statistical tests to identify outliers and robust estimators to mitigate data contamination. We do not consider them in the model because they usually require some information about the contamination, such as whether the data is contaminated or the contamination mechanism, while in practice, the algorithm is agnostic to such knowledge.}, as stated in Example~\ref{example:MSE-LR}. For a new covariate $W$, we denote the prediction from the OLS estimator as $X_a(W)$. For the human analyst, we consider a generic two-sided bounded range $[\xhl,\xhu]$. As shown in Section~\ref{sec:general-loss}, the corresponding safeguarded decision is $\xin(W)=\max\{\min\{\xa(W),\xhu\},\xhl\}$ for the two-sided guardrail. Also recall that the loss function is $l(x,w)=(x-w^\top \beta)^2$. \subsubsection{Contamination in Response}\label{sec:conta-resp} We first consider the contamination in the response of the samples. In particular, the observed response $X_i$ is not generated from $W_i^\top \beta+ \epsilon_i$, but \begin{equation}\label{eq:contam-response} X_i=W_i^\top \beta +B_i+ \epsilon_i \end{equation} for some random variable $B_i$. Here $B_i$ controls the degree of contamination: with a high probability, it is zero and the sample is not contaminated. When $B_i\neq 0$, the response of the sample, $X_i$, deviates from the uncontaminated observation $W_i^\top \beta + \epsilon_i$. Note that $B$ is not to be confused with $\epsilon$, which has a zero mean. We assume $\mathbb E[B]$ to be nonzero, which means that the contamination has a systematic influence on the estimation. Contamination in the response is studied in the computer science community; see, e.g., \cite{wright2008robust} and \cite{nguyen2012robust} for applications to image recognition.
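A small simulation illustrates the systematic bias induced by \eqref{eq:contam-response}. All coefficient values and contamination parameters below are illustrative assumptions, and an intercept column is included in the design so that the bias shifts every prediction by $\mathbb E[B]$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
beta = np.array([1.0, -2.0])        # assumed true coefficients (intercept, slope)
b, p_c = 3.0, 0.2                   # contamination magnitude b and propensity p

W = np.column_stack([np.ones(n), rng.normal(size=n)])   # design with an intercept
B = b * (rng.random(n) < p_c)                           # B = b w.p. p, else 0
X = W @ beta + B + rng.normal(0.0, 0.5, size=n)         # contaminated responses

beta_hat, *_ = np.linalg.lstsq(W, X, rcond=None)
w0 = np.array([1.0, 1.0])
bias = w0 @ beta_hat - w0 @ beta
print(bias)                          # close to E[B] = p * b = 0.6
```

The bias does not shrink as $n$ grows, which is exactly the persistence highlighted in the discussion that follows.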
For management applications, this type of contamination may occur for various reasons: the historical response $X_i$ may be subject to reporting errors, or may be censored in certain periods so that some ad hoc imputation method replaces the missing data with estimated values. Our contamination model can capture both cases. Not surprisingly, when the historical data is contaminated, the algorithmic output that uses OLS is biased. More precisely, given the i.i.d. historical samples $\{(X_i, W_i)\}_{i=1}^N$ that are generated by \eqref{eq:contam-response} and a new covariate $w$, suppose $\xa(w)$ is the prediction of the OLS estimator at $w$. We have: \begin{lemma}\label{lem:conta-res} Assume the covariance matrix $\Sigma=\mathbb E[W W^\top]$ is positive definite. Then the OLS predictor $\xa(w)=w^\top\hat{\beta}$ converges to $w^\top \beta +\mathbb E[B]$ in probability for any $w$ as $N\to\infty$. Therefore, in the limit, $l(\xa(w),w)=(\mathbb E[B])^2$. \end{lemma} In other words, even with a sufficiently large dataset, the bias caused by the contamination persists. To analyze how the human augmentation may improve the algorithmic result, we consider a simple form of contamination. Let $B=b>0$ with probability $p$ and $B=0$ with probability $1-p$. Therefore, the contamination always leads to an upward bias (the downward bias can be formulated similarly); the parameter $b$ represents the magnitude of the contamination and $p$ represents its propensity. To adjust for the upward bias, the human analyst can impose an upper bound $\xhu$ and cap the output of the algorithm, leading to the safeguarded prediction $\xin(W)=\min\{\xa(W),\xhu\}$. However, because $\xhu$ is not data-driven, the safeguard runs the risk of overcorrection. The following proposition provides a condition that rules out overcorrection. \begin{proposition}\label{prop:conta-response} Assume the domain of $W$ is a closed and bounded set $\mathcal{W} \subseteq \mathbb{R}^d$, and we take $N\to\infty$.
If $\xhu$ satisfies \begin{equation}\label{equ:conta-LR-response} \xhu \ge \max_{w\in \mathcal{W}}\{w^\top \beta\}-p b, \end{equation} then, for all $w\in \mathcal {W}$, we have $l(\xin(w),w ) \le l(\xa(w),w ) $. \end{proposition} To interpret the result, on the one hand, in the extreme case of $\xhu \ge\max_{w \in \mathcal{W}} \{w^\top \beta\}+pb $, we always have $\xhu \ge \xa$ and $\xin=\xa$. The bound $\xhu$ is too conservative, and the safeguard provides no benefit. On the other hand, one can show that the loss function for the algorithmic prediction is simply $l(\xa(w),w) = p^2 b^2$ due to the contamination. If $\xhu < \max_{w \in \mathcal{W}} \{w^\top \beta\}-pb$, then there exists $w$ such that $l(\xin({w}),{w}) > p^2 b^2$ because the upper bound imposed by the human analyst is too aggressive and outweighs the bias introduced by the contamination. Clearly, condition~\eqref{equ:conta-LR-response} is easier to satisfy when the contamination gets more severe due to an increased value of $p$ or $b$. When the contamination can lead to a bias of either direction, i.e., it is possible that $B>0$ or $B<0$, it is safer for the human analyst to set up both bounds $\xhl$ and $\xhu$. That is, once the algorithmic prediction $\xa(w)$ is given, the safeguarded decision is $\xin(w)=\max\{\min\{\xa(w),\xhu\},\xhl\}$. Generalizing Proposition~\ref{prop:conta-response}, we have: \begin{theorem}\label{thm:conta-respon} Suppose the domain of $W$ is a closed and bounded set $\mathcal{W} \subseteq \mathbb{R}^d$, and we take $N\to\infty$. If the lower and upper bounds $ \xhl,\xhu$ satisfy \begin{equation}\label{equ:conta-LR-response-two-side} \xhu \ge \max_{w\in \mathcal{W}}\{w^\top \beta\}- \big|\mathbb E[B]\big|, \ \ \xhl \le \min_{w\in \mathcal{W}}\{w^\top \beta\}+ \big|\mathbb E[B]\big|, \end{equation} then for all $w\in \mathcal {W}$, we have $ l(\xin(w),w ) \le l(\xa(w),w )$ and $\mathbb E[l(\xin(W),W)] \le \mathbb E[l(\xa(W),W)]$.
\end{theorem} Note that when the absolute bias $\big|\mathbb E[B]\big|$ is large, the human augmentation of imposing upper and lower bounds tends to be helpful, regardless of the sign of the bias. For example, even when $B>0$, i.e., the bias is always upward, Theorem \ref{thm:conta-respon} states that imposing a lower bound $\xhl$ is more likely to be beneficial when the bias is large because \eqref{equ:conta-LR-response-two-side} is more likely to be satisfied. To see the intuition, as the contamination becomes more severe, the algorithmic decision is subject to a larger bias. Hence, it is easier for the human augmentation to outperform the raw algorithmic decision. \subsubsection{Contamination in Covariates} In this section, we consider contamination in the covariates $W_i$. This type of contamination is sometimes referred to as errors-in-variables \citep{loh2011high}, which occur in voting, surveys, and sensor networks. In business applications, there may exist measurement errors in the historical samples of the covariate. For example, when a firm is running a survey to learn consumer sentiment, the design of the survey may lead to biased measurement of the quantity of interest. We consider the following contamination model: the observed covariate is generated by $W_i=Z_i+U_i$, where $Z_i$ is the actual covariate, and $U_i\in \mathbb{R}^d$ is an error that contaminates the observation, independent of $Z_i$. The response is generated from $X_i=Z_i^\top \beta+\epsilon_i$. For a new covariate $W_0$, the algorithm outputs $\xa(W_0)$ using the OLS estimator from the data $\{(X_i,W_i)\}_{i=1}^\infty$. (We consider infinite samples in this analysis.) What differentiates the contamination in covariates from Section~\ref{sec:conta-resp} is that even the new covariate $W_0$ itself may be contaminated.
Therefore, the human safeguard serves two purposes: it helps control the contamination in the training data, and it curtails the potential error in the new covariate on which the prediction is based. We impose the following technical assumption. \begin{assumption}\label{ass:conta-cov-design} The matrix $\Sigma_1 \coloneqq \mathbb E[ZZ^\top]$ is positive definite and $\Sigma_2 \coloneqq \mathbb E[U U^\top]$ is positive semi-definite. \end{assumption} Next, we show that the contamination usually leads to an inconsistent OLS estimator. \begin{lemma}\label{lem:conta-cov} Suppose Assumption \ref{ass:conta-cov-design} holds. The OLS estimator $\hat{\beta}$ for \eqref{eq:contam-response} converges to $\left(\mathcal{I}-(\Sigma_1+ \Sigma_2)^{-1} \Sigma_2\right) \beta$ in probability. Furthermore, $\hat{\beta}$ converges to $\beta$ in probability if and only if $\Sigma_2 \beta=\bm{0}$. \end{lemma} From Lemma~\ref{lem:conta-cov}, we know that the OLS estimator $\hat{\beta}$ does not converge to the true parameter $\beta$ unless $\beta$ is in the null space of $\Sigma_2$. As a result, the predicted response $\xa(w)=w^\top \hat{\beta}$ given $w$ is usually biased. Note that in this case the bias can be translated to the contamination in response as in Section~\ref{sec:conta-resp}, and the conditions in Theorem~\ref{thm:conta-respon} can be similarly applied. In this section, we instead focus on a different angle: even when $\Sigma_2 \beta=\bm{0}$ holds and $\hat{\beta}$ converges to $\beta$, the algorithm is still not bias-free. This is because the new covariate $W_0$ may be contaminated. The ideal prediction is $Z_0^\top \beta = (W_0-U_0)^\top\beta$, while under contamination, even with $\hat\beta = \beta$, the prediction is $W_0^\top\beta$. We consider the two-sided guardrail $[\xhl,\xhu]$ for the human augmentation.
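The limit in Lemma~\ref{lem:conta-cov} can be reproduced numerically. The following sketch (all parameter values are purely illustrative and not taken from the application) fits OLS on contaminated covariates and compares the estimate with the attenuated limit $\left(\mathcal{I}-(\Sigma_1+\Sigma_2)^{-1}\Sigma_2\right)\beta$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative instance of the errors-in-variables model W = Z + U.
# All parameter values below are hypothetical, chosen only for the demo.
n = 200_000
beta = np.array([1.0, -2.0])
Sigma1 = np.array([[2.0, 0.5], [0.5, 1.0]])  # E[Z Z^T], positive definite
Sigma2 = np.diag([0.5, 0.25])                # E[U U^T], positive semi-definite

Z = rng.multivariate_normal(np.zeros(2), Sigma1, size=n)
U = rng.multivariate_normal(np.zeros(2), Sigma2, size=n)
W = Z + U                                    # observed, contaminated covariates
X = Z @ beta + rng.normal(0.0, 0.1, size=n)  # responses from the true covariates

beta_hat = np.linalg.lstsq(W, X, rcond=None)[0]  # OLS on the observed data
beta_limit = (np.eye(2) - np.linalg.inv(Sigma1 + Sigma2) @ Sigma2) @ beta
```

Here $\hat\beta$ lands close to the attenuated limit rather than the true $\beta$; since $\Sigma_2\beta\neq\bm 0$ in this instance, the estimator is inconsistent, in line with the lemma.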
Note that in this case, the loss functions for the algorithmic and safeguarded outcomes are $l(\xa(W), Z)$ and $l(\xin(W),Z)$, respectively, because the loss only depends on the actual covariate $Z$, not the observed but potentially contaminated $W$. We impose the following technical assumption. \begin{assumption}\label{ass:conta-cov-VW} Assume the domain of $Z$ is a closed and bounded set $\mathcal{Z} \subseteq \mathbb{R}^d$. Assume $\Sigma_2 \beta=\bm{0}$ and there exist constants $b,p \in (0,0.5)$ such that \begin{equation}\label{equ:LR-conta-cov-v} \mathbb P\left(U^\top\beta \ge b\right) \ge p, \ \mathbb P\left(U^\top\beta \le -b\right) \ge p. \end{equation} \end{assumption} The compactness assumption on the covariate $Z$ parallels the assumption in Theorem~\ref{thm:conta-respon}. Equation~\eqref{equ:LR-conta-cov-v} states that the contamination $U^\top \beta$ is not concentrated at zero, which makes the two-sided guardrail more likely to be beneficial. Next we state our main result. \begin{theorem}\label{thm:conta-covariates} Suppose Assumptions \ref{ass:conta-cov-design} and \ref{ass:conta-cov-VW} hold. If the upper and lower bounds $\xhu,\xhl$ satisfy \begin{equation}\label{equ:conta-cov-discrete} \xhl \le \min_{z\in \mathcal{Z}}\{z^\top \beta\}+\sqrt{\frac{p}{1-p}}b, \ \ \xhu \ge \max_{z\in \mathcal{Z}}\{z^\top \beta\}-\sqrt{\frac{p}{1-p}}b, \end{equation} then we have $\mathbb E[l(\xin(W),Z)] \le \mathbb E[l(\xa(W),Z)]$. \end{theorem} Compared with Theorem~\ref{thm:conta-respon}, it is worth pointing out that although the contamination mechanisms differ, the conditions for beneficial human augmentation are strikingly similar. In particular, when $b$ or $p$ is larger, i.e., the magnitude of the contamination increases, there is more room for human augmentation to be helpful (as conditions (\ref{equ:conta-cov-discrete}) are more likely to hold).
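The clipping mechanism behind both theorems can also be illustrated by simulation. The sketch below uses the one-dimensional response-contamination setting of Theorem~\ref{thm:conta-respon}, where the guardrail takes the same form $\xin(w)=\max\{\min\{\xa(w),\xhu\},\xhl\}$; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-d response-contamination instance: the true response is
# w*beta, and the recorded response is inflated by b with probability p,
# so E[B] = p*b. (All numbers are illustrative.)
beta, b, p = 2.0, 5.0, 0.3
w = rng.uniform(-1.0, 1.0, size=100_000)  # covariate domain W = [-1, 1]

# With infinitely many samples, the fitted model absorbs the mean bias
# E[B] = p*b, so the algorithmic prediction is x_alg(w) = w*beta + p*b.
x_alg = w * beta + p * b

# Two-sided guardrail x_in(w) = max(min(x_alg(w), xhu), xhl), with bounds
# set at the edge of the sufficient condition of the theorem.
xhu = w.max() * beta - p * b  # >= max_w {w*beta} - |E[B]|
xhl = w.min() * beta + p * b  # <= min_w {w*beta} + |E[B]|
x_in = np.clip(x_alg, xhl, xhu)

loss_alg = np.mean((x_alg - w * beta) ** 2)  # E[l(x_alg(W), W)]
loss_in = np.mean((x_in - w * beta) ** 2)    # E[l(x_in(W), W)]
```

Every clipped prediction moves weakly closer to the truth, so the empirical loss of the safeguarded decision is no larger than $p^2b^2$, the loss of the raw algorithmic decision.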
\section{Conclusion}\label{sec:conclusion} Motivated by a consulting project on retail fuel pricing, we propose a framework to study human-AI interaction in which an algorithm first recommends a decision to the human analyst, and the analyst can then augment it based on domain knowledge and experience. As far as we know, this is the first study to investigate this type of interaction. With the framework, we investigate when human knowledge adds value to algorithmic decision-making. We demonstrate three common and practical situations in which human knowledge may play a critical role in harnessing and correcting algorithmic decisions, even with large data. We conclude by discussing potential future directions. First, we may consider more sophisticated yet realistic human augmentation. For example, AI is known to suffer from out-of-distribution issues, in which the algorithmic decision learned from the training data does not provide much value; for these instances, human knowledge is particularly useful for correction. It is desirable to extend our framework and incorporate simple rules to identify such instances and override the algorithmic decision. Second, another important reason for humans to intervene is the consideration of fairness or ethical issues associated with the algorithmic decision. It is a fruitful direction to extend our framework by incorporating these considerations as rules of thumb to guardrail algorithmic outputs. Lastly, in some applications, the adoption choice between the algorithm and human knowledge needs to be made before the computation of algorithmic decisions, because significant delays may be incurred if human correction is conducted only after observing the algorithmic decisions. In this case, the human analyst needs to design and commit to a simple rule based on the observed covariate. Our framework may be extended to study this kind of human-AI interaction. \bibliographystyle{informs2014}
\section{Introduction} It was recently suggested \cite {PC09,PCC09} that the binding energy of Cooper pairs is not constant. Breaking one Cooper pair has an effect on all other Cooper pairs, due to the Pauli blocking of the composite bosons. In the present note we introduce a simple microscopic model in which the binding energy of the paired electrons is partly cancelled by an increasing kinetic energy, in such a way that the effective binding energy is indeed not constant. The sole interaction term of the model is of fourth order in the electron operators and quadratic in the phonon operators. It describes scattering between phonons and electron pairs. One would expect such a term in the context of the bipolaronic Hamiltonian \cite {AR81,AD09}. But here, the term is responsible for the formation of electron pairs, rather than modelling interactions between existing pairs. \section{The model} The motion of the electrons is described by the hopping term. In ${\bf k}$-space it can be written as \begin{eqnarray} H_{\rm el}=-\sum_{{\bf k}}\hbar\lambda_{\bf k} \left(B_{{\bf k},\uparrow}^\dagger B_{{\bf k},\uparrow}+B_{{\bf k},\downarrow}^\dagger B_{{\bf k},\downarrow}\right). \end{eqnarray} The coefficients $\lambda_{\bf k}$ are real and satisfy $\lambda_{-{\bf k}}=\lambda_{\bf k}$ and $\sum_{\bf k} \lambda_{\bf k}=0$. The model also requires lattice phonons with two different polarisations. For the sake of convenience they are denoted with up and down arrows. Their contribution to the Hamiltonian is \begin{eqnarray} H_{\rm ph}=\sum_{{\bf k}}\hbar\mu_{\bf k}\left(c_{{\bf k},\uparrow}^\dagger c_{{\bf k},\uparrow}+c_{{\bf k},\downarrow}^\dagger c_{{\bf k},\downarrow}\right). \end{eqnarray} The operators $c_{{\bf k},\uparrow}^\dagger,c_{{\bf k},\uparrow},c_{{\bf k},\downarrow}^\dagger, c_{{\bf k},\downarrow}$ satisfy the canonical commutation relations.
Introduce pseudospin operators \begin{eqnarray} J_{\bf k}^{-} &=&B_{-{\bf k},\uparrow}^\dagger B_{{\bf k},\downarrow}^\dagger B_{-{\bf k},\downarrow} B_{{\bf k},\uparrow}\crcr J_{\bf k}^{+} &=&B_{{\bf k},\uparrow}^\dagger B_{-{\bf k},\downarrow}^\dagger B_{{\bf k},\downarrow} B_{-{\bf k},\uparrow}\crcr J_{\bf k}^z&\equiv&\frac 12[J_{\bf k}^{+},J_{\bf k}^{-}]\cr &=&\frac 12n_{{\bf k},\uparrow}(1-n_{-{\bf k},\uparrow})n_{-{\bf k},\downarrow}(1-n_{{\bf k},\downarrow})\crcr & & -\frac 12n_{{\bf k},\downarrow}(1-n_{-{\bf k},\downarrow})n_{-{\bf k},\uparrow}(1-n_{{\bf k},\uparrow}) \end{eqnarray} with $n_{{\bf k},\tau}=B^\dagger_{{\bf k},\tau}B_{{\bf k},\tau}$. They satisfy the su(2) relations \begin{eqnarray} [J_{\bf k}^z,J_{\bf k}^\pm]=\pm J_{\bf k}^\pm. \end{eqnarray} The operator $J_{{\bf k}}^{+}$ flips a spin current from the direction $-{\bf k}$ into the direction ${\bf k}$. Introduce also new bosonic operators by \begin{eqnarray} a_{\bf k} =c_{{\bf k},\uparrow}c_{-{\bf k},\downarrow}^\dagger, \quad\mbox{ and }\quad a_{\bf k}^\dagger =c_{-{\bf k},\downarrow}c_{{\bf k},\uparrow}^\dagger. \end{eqnarray} Some care is needed when doing calculations with these operators because they do not satisfy the canonical commutation relations. The model Hamiltonian is now \begin{eqnarray} H=H_{\rm el}+H_{\rm ph}+\hbar\xi\sum_{\bf k} \left(a_{\bf k}^\dagger J_{\bf k}^-+a_{\bf k} J_{\bf k}^+\right). \label {modham} \end{eqnarray} Other terms can be added without spoiling the integrability of the model. However, they are not needed for understanding the model. \section{Eigenstates} The interaction term of (\ref {modham}) involves only electrons and phon\-ons with the same or opposite momentum. Hence, the determination of eigenstates reduces to an easy problem involving at most 4 electrons. The electronic subsystem with momenta $\pm{\bf k}$ has 16 possible states, 14 of which do not interact with the phonons. 
The only two states that do interact are two-electron states with total momentum zero and total spin zero. If the coupling constant $\xi$ is large enough, then the ground state of this two-particle subsystem involves a single phonon, which itself is a superposition of two phonons with opposite momenta and polarisation. This ground state describes an electron pair which is kept together by a deformation of the lattice. \section{Ground state} For the sake of simplicity the following argument is made for a one-dimensional model. Let us start from the situation in which all states of the free electron model are occupied up to the Fermi level. That is, $n_{\bf k}=2$ if $|{\bf k}|\le k_F$, and $n_{\bf k}=0$ otherwise. Introduce the notations $\lambda_F\equiv\lambda_{k_F}$ and $\mu_F\equiv\mu_{k_F}$. Similarly, the derivatives are denoted $\lambda_F'\equiv\lambda_{k_F}'$ and $\mu_F'\equiv\mu_{k_F}'$. In the following it is assumed that $\lambda'_F<0$. Make the following expansions \begin{eqnarray} \lambda_k=\lambda_F+\lambda_F'(k-k_F)+O\left((k-k_F)^2\right),\cr \mu_k=\mu_F+\mu_F'(k-k_F)+O\left((k-k_F)^2\right). \end{eqnarray} Take the 4 electrons with ${\bf k}=\pm k_F$ and move two of them to the pair state at ${\bf k}=\pm k_F$, the other two to the pair state at ${\bf k}=\pm(k_F+\delta k)$. Before moving, the 4 electrons together have the energy $-4\lambda_F$. After moving, the pair at ${\bf k}=\pm k_F$ has energy $\mu_F-2\lambda_F-\xi$. The pair at ${\bf k}=\pm(k_F+\delta k)$ has energy $\mu_F-2\lambda_F-\xi+(\mu_F'-2\lambda_F')\delta k$. The gain in energy is \begin{eqnarray} 2\xi-2\mu_F-(\mu_F'-2\lambda_F')\delta k. \end{eqnarray} Repeat the operation now removing 4 electrons from the free states at ${\bf k}=\pm (k_F-\delta k)$ and making one pair with unchanged momenta, and one pair at ${\bf k}=\pm (k_F+2\delta k)$. Now, the gain in energy is only \begin{eqnarray} 2\xi-2\mu_F-(\mu_F'-6\lambda_F')\delta k.
\end{eqnarray} This procedure is repeated $n$ times, after which no more energy can be gained by creating electron pairs. The condition for $n$ reads \begin{eqnarray} 0=2\xi-2\mu_F-\left[\mu'_F-2(2n-1)\lambda'_F\right]\delta k. \end{eqnarray} The ground state has now been obtained. For $|{\bf k}|<k_F-n\delta k$ all free electron states are occupied. For $k_F-n\delta k <|{\bf k}|<k_F+n\delta k$ the paired states are occupied. The picture is not so different from that of a layer of Cooper pairs above a quiescent Fermi sea \cite {CLN56}. The main difference is that in the present derivation the paired electrons do not all have the same energies. See the figure. \begin{figure} \onefigure[width=7cm]{kspace.eps} \caption{Number of electrons as a function of the $k$-vector in a one-dimensional ground state.} \label {kspace} \end{figure} \section{Elementary excitations} From the above construction of the ground state it is clear that the lowest-energy excitations are the collision of two electron pairs, one at $\pm(k_F-(n-1)\delta k)$, the other at $\pm(k_F+n\delta k)$, and the inverse process of 4 electrons at $\pm(k_F-n\delta k)$ forming two electron pairs. During these processes the two electrons of one of the pairs lose or gain a momentum of approximately $2n\delta k$. This means that there is a gap in the spectrum of the kinetic energy of the electrons. \section{Discussion} This note explores a novel mechanism for electron pair formation due to electron-phonon interaction. The interaction term is unusual in that it is fourth order in the electron operators and quadratic in the phonon operators. With a suitable choice of parameters the ground state consists of a non-interacting Fermi sea with 4 electrons per pair of wave vectors $({\bf k},-{\bf k})$, and above this, a region with one electron pair per pair of wave vectors $({\bf k},-{\bf k})$. This results in a gap in the kinetic energy spectrum of the electrons, but not in the spectrum of the total Hamiltonian.
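For completeness, the stopping condition determining the number $n$ of pair-creation steps can be solved explicitly. Assuming $\lambda_F'<0$ and $\xi>\mu_F$, and keeping only the linear orders in $\delta k$ as in the expansions above, one finds \begin{eqnarray} n=\frac 12+\frac {\mu_F'\,\delta k-2(\xi-\mu_F)}{4\lambda_F'\,\delta k}, \end{eqnarray} so that the width $2n\,\delta k$ of the paired region grows with the coupling $\xi$.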
More details about the present model and its extensions will be published elsewhere.
\section{Introduction} Counting problems in graphs can be very difficult, i.e. $\#P$-hard in the general case, even for simple objects such as trees and independent sets. Research on graph classes has been motivated by such ``hard'' decision or optimization problems, and restricting the input to given graph classes has led to numerous polynomial-time algorithms. Despite this, only a few useful algorithms for counting problems exist, and these are relatively recent.\\ In this paper, we focus on maximal matching counting and path matching (linear forest) counting problems. Matching counting and all extensions considered in this paper have been proved $\#P$-complete in the general case. Some sparse graph classes such as planar graphs or graphs of bounded tree-width allow polynomial-time algorithms for perfect matching counting (see \cite{bib11} and \cite{bib1}); on the negative side, Valiant, when introducing the class $\#P$, proved that counting perfect matchings as well as general matchings in bipartite graphs is $\#P$-complete \cite{bib17,bib18}. Valiant's proof concerning matchings has since been extended to 3-regular bipartite graphs \cite{bib8}, bipartite graphs of maximum degree 4 and bipartite planar graphs of maximum degree 6 \cite{bib16}. The problem of counting perfect matchings in chordal and chordal bipartite graphs is also $\#P$-complete \cite{bib14}, but good results on independent sets \cite{bib13} give the impression that the chordal structure could nevertheless be interesting for matching counting. This led us to focus on a related graph class, the $(5,2)$-crossing-chordal graphs. We especially make use of the bounded clique-width of this graph class. Courcelle et al.~introduced clique-width in \cite{bib5} as a generalization of tree-width, and it attracted attention mainly for two reasons.
On the one hand, as with tree-width, putting a bound on the clique-width makes many difficult problems solvable in polynomial time (see for example \cite{bib6}). On the other hand, this class contains dense graphs as well as sparse graphs, which makes for more general results. Makowsky et al.~already proved, as a consequence of a result in \cite{bib12}, that matching counting on graphs of bounded clique-width is polynomial. In this paper, we will extend this result by adapting their method to maximal matchings and path matchings. Our algorithms are polynomial in the graph size, but exponential in the clique-width $k$, i.e., they run in $O(n^{poly(k)})$ time. It might be hard to develop a fixed parameter tractable algorithm such as an $O(c^{poly(k)}poly(n))$ time algorithm, since many graph problems, e.g. vertex coloring, require $O(n^{poly(k)})$ time unless FPT $=$ $W[1]$ \cite{bib9a}. The existing matching counting algorithms cannot be used to count maximal matchings directly. The algorithms in \cite{bib12} classify matchings of local graphs according to their sizes and the colors of the endpoints, and then get information about larger graphs by merging the matchings. However, in this way, each classified group may contain both matchings included in maximal matchings and those not included in any maximal matching. Actually, it seems to be difficult to characterize the number of matchings included in some maximal matching by using only their sizes and their endpoints. In this paper, we introduce matching-cover pairs for this task. When we restrict a maximal matching to a subgraph, it can be decomposed into the matching edges belonging to the subgraph and the end vertices of matching edges not included in the subgraph. By maximality, these end vertices form a vertex cover of the edges of the subgraph. Thus, we count such pairs of matching and vertex cover according to their sizes and colors, and obtain a polynomial time algorithm for the problem.
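This decomposition can be verified by brute force on a toy example. The following sketch (a naive enumeration for illustration only, not the algorithm developed in this paper; the graph is chosen arbitrarily) restricts each maximal matching of a graph to an edge-subgraph and checks that the endpoints of the outside matching edges cover every subgraph edge missed by the restricted matching:

```python
from itertools import combinations

# Toy graph G and an edge-subgraph G' (both chosen arbitrarily for the demo).
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
E_sub = [(0, 1), (1, 2), (2, 3)]

def is_matching(m):
    verts = [v for e in m for v in e]
    return len(verts) == len(set(verts))

# Enumerate all matchings of G, then keep the maximal ones.
matchings = [set(m) for k in range(len(E) + 1)
             for m in combinations(E, k) if is_matching(m)]
maximal = [m for m in matchings
           if all(not is_matching(m | {e}) for e in set(E) - m)]

for M in maximal:
    m_prime = M & set(E_sub)                        # matching restricted to G'
    cover = {v for e in M - set(E_sub) for v in e}  # endpoints of outside edges
    covered = {v for e in m_prime for v in e}
    # By maximality of M, every G'-edge missed by m_prime is hit by `cover`.
    for e in E_sub:
        assert (set(e) & covered) or (set(e) & cover)
```

If some edge of $G'$ avoided both sets, it could be added to the maximal matching, a contradiction; the assertions above confirm this on every maximal matching of the toy graph.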
For the problem of counting paths and path matchings, we need a way to handle the connectivity of edge sets. Connectivity is not easy to handle; for example, checking for the existence of a Hamiltonian path is equivalent to checking whether the number of paths of length $n-1$ is nonzero. Gimenez et al. devised an algorithm based on Tutte polynomial computation to count the number of forests in bounded-clique-width graphs in sub-exponential time, running in $2^{O(n^c)}$ time for a constant $c<1$ \cite{GmHlNy05}. We use the properties of bounded-clique-width graphs to classify the path matchings into a polynomial number of groups of equivalent path matchings, and thereby compute the number of paths and path matchings in polynomial time. \section{Clique Width} We shall introduce clique-width on undirected, non-empty labeled graphs by a construction method. Let $G_i$ be the subgraph induced by the vertices labeled $i$ in a graph $G$. We define the singleton $S_i$ as the labeled graph with one vertex of label $i$ and no edge, and the following construction operations: \begin{itemize} \renewcommand{\labelitemi}{-} \item Renaming: $\rho_{i\rightarrow j}(G)$ is $G$ where all labels $i$ are replaced by labels $j$; \item Disjoint union: $(V_1,E_1)\oplus(V_2,E_2) = (V_1\cup V_2,E_1\cup E_2)$; \item Edge creation: $\eta_{i,j}((V,E)) = (V,E\cup \{(v_1,v_2)\ |\ v_1 \in G_i, v_2\in G_j\})$. \end{itemize} The class of graphs with clique-width $\leq k$ is the smallest class containing the singletons $S_i$, closed under $\rho_{i\rightarrow j}, \oplus$ and $\eta_{i,j}$ ($1\leq i,j \leq k$). In other words, the {\em clique-width} of a graph $G$, denoted as $cwd(G)$, is the minimal number of labels necessary to construct $G$ by using singletons and the renaming, disjoint union and edge creation operations. \\ For an unlabeled graph $G$, we define its clique-width by labeling all vertices with label 1.
This is necessarily the best labeling, since any labeling can be renamed into a monochromatic labeling. Note that the clique-width of a graph of order $n$ is at most $n$. $(5,2)$-crossing-chordal graphs are known to have clique-width $\leq 3$ \cite{bib3} (we recall that a $(5,2)$-crossing-chordal graph is a graph where any cycle of length $\geq 5$ has a pair of crossing diagonals). Other interesting results include: cographs are exactly the graphs with $cwd(G)\leq 2$, planar graphs of bounded diameter have bounded clique-width, and any graph class of tree-width $\leq k$ also has bounded clique-width $\leq 3\cdot 2^{k-1}$ \cite{bib4}. A complete review can be found in \cite{bib10}.\\ An {\em $l$-expression} is a term using $S_i, \rho_{i\rightarrow j}, \eta_{i,j}$ and $\oplus$ (with $i,j \leq l$) that respects the arity of each operation. It can be represented more conveniently in a tree structure, and we can inductively associate the current state of the construction with each node. If $G$ is the graph associated with the root, we say that this term is an $l$-expression for $G$, and it is a certificate that $G$ is of clique-width $\leq l$. An example is given in Fig.1.
\begin{figure}[t] \centering \begin{tikzpicture} [style/.style={circle,draw = black, inner sep=1pt,minimum size=3.5mm}, small/.style={circle,draw = black, inner sep=0.5pt,minimum size=1.5mm}] \node (1) at (0,3) [style] {}; \node (2) at (0,5) [style] {}; \node (3) at (1,4) [style] {}; \node (4) at (2,3) [style] {}; \node (5) at (2,5) [style] {}; \draw (4) -- (1) -- (2) -- (3) -- (4) -- (5) -- (2) (3) -- (5); \node (0) at (6,7) [style] {\scriptsize$\eta_{1,3}$}; \node (1) at (6,6) [style] {\scriptsize{$\oplus$}}; \node (10) at (5,5) [style] {\scriptsize$S_3$}; \node (11) at (7,5) [style] {\scriptsize$\eta_{1,2}$}; \node (12) at (7,4) [style] {\scriptsize{$\oplus$}}; \node (120) at (6,3) [style] {\scriptsize{$\oplus$}}; \node (121) at (8,3) [style] {\scriptsize$\rho_{2\to1}$}; \node (1200) at (5,2) [style] {\scriptsize$S_2$}; \node (1201) at (7,2) [style] {\scriptsize$S_2$}; \node (122) at (8,2) [style] {\scriptsize$\eta_{1,2}$}; \node (123) at (8,1) [style] {\scriptsize{$\oplus$}}; \node (1230) at (7,0) [style] {\scriptsize$S_1$}; \node (1231) at (9,0) [style] {\scriptsize$S_2$}; \draw (0) -- (1) -- (10) (1) -- (11) -- (12) -- (120) -- (1200) (120) -- (1201) (12) -- (121) -- (122) -- (123) -- (1230) (123) -- (1231); \node (1) at (9,6.5) [small] {\tiny3}; \node (2) at (9,7.5) [small] {\tiny2}; \node (3) at (9.5,7) [small] {\tiny1}; \node (4) at (10,6.5) [small] {\tiny2}; \node (5) at (10,7.5) [small] {\tiny1}; \draw (4) -- (1) -- (2) -- (3) -- (4) -- (5) -- (2) (3) -- (5); \draw [->, thick, >=stealth] (8.4,7) -- (6.7, 7); \node (2) at (9,5.5) [small] {\tiny2}; \node (3) at (9.5,5) [small] {\tiny1}; \node (4) at (10,4.5) [small] {\tiny2}; \node (5) at (10,5.5) [small] {\tiny1}; \draw (2) -- (3) -- (4) -- (5) -- (2) (3) -- (5); \draw [->, thick, >=stealth] (9,5) -- (7.6, 5); \node (1) at (9.5,2.75) [small] {\tiny1}; \node (2) at (10,3.25) [small] {\tiny1}; \draw (1) -- (2); \draw [->, thick, >=stealth] (9.3,3) -- (8.5, 3); \node (1) at (9.5,1.75) [small] {\tiny2}; \node 
(2) at (10,2.25) [small] {\tiny1}; \draw (1) -- (2); \draw [->, thick, >=stealth] (9.3,2) -- (8.5, 2); \node (1) at (4,2.75) [small] {\tiny1}; \node (2) at (4.5,3.25) [small] {\tiny1}; \draw (1) -- (2); \draw [->, thick, >=stealth] (4.7,3) -- (5.5, 3); \end{tikzpicture} \caption {Graph of clique-width 3, and a possible 3-expression tree (the last renaming operations are omitted).} \end{figure} Fellows et al.~proved the NP-hardness of computing the minimum clique-width for general graphs \cite{bib9}. The current best approximation is due to Oum and Seymour \cite{bib15}, who provided an algorithm in linear time that, given a graph $G$ and an integer $c$ as input, returns a $2^{3c+2}$-expression for $G$ or certifies that the graph has a clique-width larger than $c$. This implies that we can compute in quadratic time a $2^{3k+2}$-expression for a graph of clique-width $k$ by applying this algorithm for $c=1,2\dots$. As the bound is independent of $n$, algorithms requiring expressions as input will still run in polynomial time, although the time complexity will usually be extremely poor. For $(5,2)$-crossing-chordal graphs, though, this is not a concern since it is possible to compute a 3-expression in linear time \cite{bib3}. An $l$-expression is called {\em irredundant} if every edge-creation operation $\eta_{i,j}$ is applied to a graph where no pair of vertices in $G_i$ and $G_j$ are adjacent. Any $l$-expression can be turned into an irredundant $l$-expression in linear time \cite{bib7}. Therefore, we can assume w.l.o.g. that the input expression is irredundant. \section{Framework of Our Algorithms} The input to our algorithms is a graph $G$ on $n$ vertices and an $l$-expression for $G$, and the output is the number of objects (e.g. matchings, paths) in $G$. The procedure works by counting these objects at each step of the construction, using the expression tree: we start from the leaves and process a node once all its children have been processed.
Finally, the value at the root of the tree is the output of the algorithm. Instead of working directly with the objects considered, we introduce appropriate intermediate objects, and we compute tables of values at each step. To avoid tedious case analysis, we shall assume that requesting the value of any vector outside of the range $\{0\dots n\}$ returns the value 0. Also, $\Delta_r(l)$ is the vector $(\delta_{i,r})_{1\leq i\leq l}$, and $\Delta_{r,s}(l)$ is the vector $(\delta_{i,r}\cdot\delta_{j,s})_{\substack{0\leq i\leq j\leq l\\(i,j)\not = (0,0)}}$, where $\delta_{i,j}$ is the {\em Kronecker delta}:\[\delta_{i,j} = \left\{\begin{array}{rl}1&\mbox{if }i=j\\0&\mbox{otherwise}\end{array}\right.\]We will omit the $l$ when it is obvious in context. \section{Counting Maximal Matchings} \begin{theorem} Computing the number of maximal matchings of a graph with $n$ vertices with a corresponding $l$-expression can be done in polynomial time in $n$ (but exponential w.r.t. $l$). \end{theorem} We cannot directly use the previous framework on maximal matchings. Indeed, consider a maximal matching $M$ of $G = \eta_{i,j}(G')$ and the induced matching $M'$ in $G'$: $M'$ is not necessarily maximal. However, we can keep track of the vertices of $G'$ that are covered in $M$, and those vertices must form a vertex cover of the subgraph left uncovered by $M'$.
See Fig.2 for an example.\\ \begin{figure}[t] \centering \begin{tikzpicture} [style/.style={circle,draw = black, inner sep=1pt,minimum size=3.5mm}, tvick/.style={circle, very thick, draw = black, inner sep=1pt,minimum size=3.5mm}] \node (0) at (0,6) [style] {\small3}; \node (10) at (2,6) [style] {\small2}; \node (11) at (2,4.5) [style] {\small2}; \node (12) at (2,3) [style] {\small2}; \node (13) at (2,1.5) [style] {\small2}; \node (14) at (2,0) [style] {\small2}; \node (21) at (4,4.5) [style] {\small1}; \node (22) at (4,3) [style] {\small1}; \node (23) at (4,1.5) [style] {\small1}; \node (24) at (4,0) [style] {\small1}; \draw (0) -- (10) -- (21) -- (11) -- (22) -- (12) -- (23) -- (14) -- (21) -- (13) -- (24) -- (12) -- (11) -- (24) -- (10) -- (23) -- (22) -- (13) -- (12) -- (21) (10) -- (22) -- (14) (23) -- (11); \draw [very thick] (14) -- (24) (13) -- (23) (10) -- (11) (21) -- (22); \draw (11) to [bend right = 20] (13); \node (0) at (7,6) [style] {\small3}; \node (10) at (9,6) [style] {\small2}; \node (11) at (9,4.5) [style] {\small2}; \node (12) at (9,3) [style] {\small2}; \node (13) at (9,1.5) [tvick] {\small2}; \node (14) at (9,0) [tvick] {\small2}; \node (21) at (11,4.5) [style] {\small1}; \node (22) at (11,3) [style] {\small1}; \node (23) at (11,1.5) [tvick] {\small1}; \node (24) at (11,0) [tvick] {\small1}; \draw (0) -- (10) (11) -- (12) -- (13) (22) -- (23); \draw [very thick] (10) -- (11) (21) -- (22); \draw (11) to [bend right = 20] (13); \end{tikzpicture} \caption{Maximal matching of $\eta_{1,2}(G')$, and the corresponding matching-cover pair of $G'$.} \end{figure} A {\em matching-cover} pair of a graph $G = (V,E)$ is a pair $(m,c)$ such that: \begin{itemize} \item $m\subseteq E$ is a matching of $G$ (i.e. no vertex is covered more than once); \item $c\subseteq V$ is a vertex cover of the subgraph left uncovered by $m$ (i.e. every {\em edge} is covered at least once). 
\end{itemize} We show that computing the number of matching-cover pairs of a graph with $n$ vertices with a corresponding $l$-expression can be done in polynomial time in $n$.\\ Let $M = (m_i)_{1\leq i \leq l}$ and $C= (c_i)_{1\leq i\leq l}$ be two vectors of non-negative integers. For a graph $G$, we say that a pair $(m,c)$ satisfies the condition $\varphi_{M,C}(G)$ if $m$ covers $m_i$ vertices in $G_i$ and $c$ uses $c_i$ vertices in $G_i$ for all $i$, and we denote by $mc_{M,C}(G)$ the number of pairs that satisfy $\varphi_{M,C}(G)$. Note that maximal matchings are exactly the pairs with an empty cover; therefore, the number of maximal matchings of $G$ is $\sum_{k\leq n} mc_{k\cdot\Delta_1,0}(G)$. Now we will follow the framework described above and compute $mc_{M,C}$ for all possible $M$ and $C$, at each step of the construction. We associate with each node of the tree a table of size $n^{2l}$ corresponding to the values of $mc_{M,C}$ on this graph for $M$ and $C$ ranging from $(0,..,0)$ to $(n,..,n)$. For a singleton $S_i$, we can easily see that: \[ mc_{M,C}(S_i) =\left\{\begin{array}{rl} 1 & \mbox{if } M = 0\mbox{ and }C \in \{0, \Delta_i\} \\ 0 & \mbox{otherwise} \end{array}\right. \] For the renaming operation $G = \rho_{i\rightarrow j}(G')$, the graph is not modified, but all vertices of label $i$ are set to label $j$. Hence, we modify the entries $i$ and $j$ accordingly. \[mc_{M,C}(G)=\sum_{\substack{M':(M,M')\vdash \phi_{i,j}\\C':(C,C')\vdash \phi_{i,j}}} mc_{M',C'}(G')\] \[ \mbox{where }(X,X')\vdash\phi_{i,j} \Leftrightarrow \left(\begin{array}{l} x_j = x'_i+x'_j\\ x_i=0\\ \forall k \not\in \{i,j\}, x_k=x'_k \end{array}\right)\] For the disjoint union of two graphs $G=G_1\oplus G_2$, we have a bijection between matching-cover pairs $(m,c)$ in $G$ and pairs $(m_1,c_1),(m_2, c_2)$ of matching-cover pairs in $G_1$ and $G_2$, respectively.
Moreover, if $(m,c)$ satisfies $\varphi_{M,C}$, $(m_1,c_1)$ satisfies $\varphi_{M_1,C_1}$ and $(m_2, c_2)$ satisfies $\varphi_{M_2,C_2}$, we have $M=M_1+M_2$ and $C=C_1+C_2$. \[mc_{M,C}(G)=\sum_{\substack{M_1+M_2=M\\C_1+C_2=C}} mc_{M_1,C_1}(G_1)\cdot mc_{M_2,C_2}(G_2)\] For the edge creation operation $G=\eta_{i,j}(G')$, we have to choose the extremities of the edges added to the matching among the vertices in the vertex cover. If $q$ is the number of new edges, we have: \[mc_{M,C}(G)=\sum_{q=0}^n mc_{M',C'}(G')\cdot \binom{c'_i}{q} \cdot \binom{c'_j}{q}\cdot q! \] \[\mbox{where } M'=M-q\Delta_i-q\Delta_j,\ C'=C+q\Delta_i+q\Delta_j\] Once the maximal matchings of all sizes are computed, it is straightforward to count the number of perfect matchings and the number of minimum maximal matchings in polynomial time. Note that counting perfect matchings can be achieved in $O(n^{2l+1})$ time by adapting the matching counting algorithm presented in \cite{bib12} in a similar fashion.\\ {\bf Complexity study:} Obviously, there are exactly $n$ singleton operations, and each operation requires a constant amount of time. Every other operation requires one to compute $n^{2l}$ values. As the expression is irredundant, every edge creation operation adds at least one edge, so there are at most $n^2$ edge creation operations, processed in linear time. As a disjoint union operation has two children in the tree, and there are $n$ leaves, there are $n-1$ disjoint union operations, and they require $O(n^{2l})$ time. For the renaming operation, consider the number of different labels at each step of the construction. This number is one for a singleton, the edge creation operation has no effect, the disjoint union is an addition in the worst case (no shared label) and the renaming operation diminishes this number by one. Therefore, there are at most $n$ renaming operations, and they are done in $O(n^4)$ time.
The final sum requires $O(n^l)$ operations.\\ Therefore, the overall complexity of the algorithm is \[O(n)+O(n^{2l}) \cdot \left(O(n^5)+O(n^{2l+1})+ O(n^3)\right)+ O(n^l)= O(n^{4l+1})~(l\geq 2).\] For $(5,2)$-crossing-chordal graphs, we can compute an expression of width $l=3$ in linear time and the algorithm runs in time $O(n^{13})$. \section{Counting Paths and Path Matchings} A {\em path matching} (or {\em linear forest}) is a disjoint union of paths, in other words, a cycle-free set of edges such that no vertex is covered more than twice. \begin{theorem} Computing the number of paths $pth(G)$ and the number of path matchings $pm(G)$ of a graph of clique-width $\leq k$ can be done in polynomial time (but exponential w.r.t. $k$). \end{theorem} \begin{proof} Let $K = (k_{i,j})_{\substack{0\leq i\leq j\leq l\\(i,j)\not = (0,0)}}$ be a vector of non-negative integers. We say that a path matching $P$ of $G$ satisfies the condition $\psi_K$ if: \begin{itemize} \renewcommand{\labelitemi}{-} \item $\forall i>0, k_{0,i}$ vertices in $G_i$ are left uncovered by $P$; \item $\forall (i,j),i\leq j$, $k_{i,j}$ paths in $P$ have extremities in $G_i$ and $G_j$. \end{itemize} We denote the number of path matchings in $G$ satisfying $\psi_K$ by $pm_K(G)$. If $i>j$, we denote $k_{i,j}= k_{j,i}$. As $K$ is of size $\frac{l(l+3)}{2}$, we compute tables of size $n^{\frac{l(l+3)}{2}}$ at each step. For a singleton $S_i$, the only possible path matching is the empty one, which leaves the vertex uncovered. \[\forall K, pm_K(S_i) =\left\{\begin{array}{rl} 1 & \mbox{ if }K =\Delta_{0,i}\\ 0 & \mbox{otherwise} \end{array}\right. \] For the renaming operation $G= \rho_{i\rightarrow{} j}(G')$, the method is the same as for maximal matchings.
\[pm_K(G) = \sum_{K':(K,K')\vdash \phi} pm_{K'}(G')\] \[\mbox{where }(K,K')\vdash \phi \Leftrightarrow \left(\begin{array}{l} k_{j,j}=k'_{j,j}+k'_{i,j}+k'_{i,i}\\ \forall a \not\in \{i,j\}, k_{a,j} = k'_{a,i}+k'_{a,j}\\ \forall a, k_{a,i} = 0\\ \forall a \not\in \{i,j\}, b \not\in \{0,i,j\}, k_{a,b}=k'_{a,b} \end{array}\right)\] For the disjoint union operation $G = G_1 \oplus G_2$, we have a bijection between path matchings $p$ in $G$ and pairs $(p_1, p_2)$ of path matchings in $G_1$ and $G_2$, respectively. Moreover, if $p_1$ satisfies $\psi_{K_1}$, $p_2$ satisfies $\psi_{K_2}$ and $p$ satisfies $\psi_K$, we have $K=K_1+K_2$. \[pm_K(G) = \sum_{K_1+K_2=K} pm_{K_1}(G_1)\cdot pm_{K_2}(G_2)\] Consider now the edge creation operation $G=\eta_{i,j}(G')$. We say a path matching $P$ in $G$ is an {\em extension} of a path matching $P'$ in $G'$ if $P\cap G'=P'$, so that $P=P'\cup E_{i,j}$ where $E_{i,j}$ is a subset of the edges added by the operation. Now, if we consider a path matching $P'$ in $G'$ that satisfies $\psi_{K'}$, we claim that the number of extensions of $P'$ in $G$ that satisfy $\psi_K$ depends only on $i,j,K'$ and $K$ (and not on $P'$ or $G'$), and we denote it by $N_{i,j}(K',K)$. Since every path matching of $G$ is an extension of a unique path matching of $G'$, we have: \[ pm_K(G)=\sum_{K'} pm_{K'}(G')\cdot N_{i,j}(K', K) \] Moreover, we can compute all the $N_{i,j}(K',K)$ beforehand in $O(n^{l(l+4)})$ time. The proof of these claims is given in the appendix. We can then compute the number of paths $pth(G)$ and the number of path matchings $pm(G)$ with the formulas: \[\begin{array}{rrlrl} pth(G) =&\displaystyle \sum_{0\leq a \leq n}&pm_{K(a)}(G) &\mbox{where}&K(a)=a\cdot \Delta_{0,1}+\Delta_{1,1}\\ pm(G) =&\displaystyle \sum_{1\leq a+2b \leq n} &pm_{K(a,b)}(G) &\mbox{where}&K(a,b)=a\cdot \Delta_{0,1}+b\cdot \Delta_{1,1} \end{array}\] \end{proof} {\bf Complexity study:} A singleton operation requires constant time.
Every other operation requires us to compute $n^{\frac{l(l+3)}{2}}$ values. For each value, the renaming operation is processed in linear time, the disjoint union operation in $O(n^{l^2})$ time and the edge creation operation in $O(n^{\frac{l(l+3)}{2}})$ time. The overall complexity of the algorithm is: \[\left\{\begin{array}{rl} O(n^{l^2+4l})&\mbox{for }l\leq 5 \\ O(n^{\frac{3}{2}(l^2+l)+1}) & \mbox{for } l>5 \end{array}\right.\] For $(5,2)$-crossing-chordal graphs, we can compute in linear time an expression of width $l=3$, which yields an algorithm running in $O(n^{21})$ time. \section{Conclusion} These results seem to confirm the intuition that bounding clique-width is an efficient restriction on the input of $\#P$-hard problems in order to allow the use of polynomial algorithms. Notably, being able to count paths and path matchings in polynomial time is interesting because connected structures are usually very difficult to count. In that sense, the next logical step was to study the tree (or, equivalently, forest) counting problem. However, our attempts to do so with a method similar to the one used in this paper only produced algorithms running in exponential time. Our feeling is that the tree counting problem remains $\#P$-complete for graphs of bounded clique-width; it remains an open problem for now.
\section{Introduction} \label{intro} Coupled-cluster (CC) theory \cite{coester58_421,coester60_477,cizek66_4256,paldus72_50,purvis82_1910,paldus1999critical,RevModPhys.79.291} has evolved into one of the most accurate formulations for capturing the complex correlation effects behind a broad class of quantum phenomena that drive chemical transformations, usually identified with bond-forming and bond-breaking processes.\cite{RevModPhys.79.291,Bartlett2009} Over the last few decades, theoretical efforts, closely followed by the development of sophisticated computational models that take advantage of the ever-growing computational power of parallel architectures, have enabled the establishment of a hierarchy of approximations and novel formulations for closed/open-shell molecules, strong correlation, and large-scale systems. Special attention has been paid to single-reference CC formulations that can adequately describe chemical reactions and topological features of the corresponding ground-state potential energy surfaces, including barriers, avoided crossings, and multiple minima. This effort includes an impressive array of formulations based on the inclusion of high-rank collective effects contributing to the expansion of the ground-state wave function. This progress was made possible by cornerstone implementations of the now ubiquitous models such as CCSD,\cite{purvis82_1910} CCSDT, \cite{ccsdt_noga,ccsdt_noga_err,scuseria_ccsdt} CCSDTQ \cite{Kucharski1991,ccsdtq_nevin} and their perturbative counterparts, including CCSD[T],\cite{urban1985towards,noga1987towards} CCSD(T),\cite{raghavachari89_479} CCSD(TQ) and CCSDT(Q) type approaches. \cite{bartlett1990non,kucharski1998efficient} Thanks to its simplicity, the CCSD(T) method has assumed a unique position in high-accuracy chemical simulations of equilibrium properties and chemical reactions.
However, a significant effort has been expended to alleviate serious issues associated with the perturbative nature of the (T) correction for molecular configurations away from the equilibrium geometry.\cite{crawford_t,mmcc1,crccx,deustua2017converging,gwal1,gwal2,sohir1,ybom1,robkn,ugur1,mauss1} Different design principles drive a large class of these developments in perturbative and non-perturbative formulations. Structural changes in molecular systems, such as bond breaking/forming, can be characterized by analyzing the potential energy surface (PES) or by scrutinizing processes/states in various energy regimes sensitive to the geometry changes induced by chemical transformations. Excited-state extensions of the CC formalism play an essential role in these studies, either through equation-of-motion CC methodologies (EOMCC) \cite{bartlett89_57,bartlett93_414,Stanton1993} or CC linear response theory (LR-CC),\cite{monkhorst77_421,Koch1990} which are usually formulated in frequency space. These methods have been widely applied in studies of excitation energies, excited-state PESs, excited-state non-adiabatic dynamics, frequency-dependent optical properties, and, more recently, multi-component polaritonic systems.\cite{PhysRevX.10.041043} Green's function extensions of the CC formalism (CCGF) \cite{nooijen92_55,nooijen93_15,nooijen95_1681,meissner93_67,kkgfcc1,kkgfcc2,kkgfcc3,peng2021coupled,mcclain2016spectral,zhu2019coupled,shee2019coupled,lange2018relation} can also capture excited-state correlation effects needed to describe quasiparticles (QP) and satellite peaks observed in X-ray photoemission spectra (XPS).
It is worth mentioning that the development of CC methods for the description of satellite peaks was paralleled by advances in vertex-corrected GW$\Gamma$ approaches \cite{mejuto2021multi} and adaptive sampling configuration interaction (ASCI) algorithms \cite{tubman2020modern} capable of attaining high accuracy in describing multi-configurational electronic states at relatively little computational cost. Time-dependent CC formulations, which provide an alternative way of describing excited-state processes, have attracted significant attention in the last decade. \cite{kvaal2012ab,pedersen2019symplectic,sato2018communication,nascimento2016linear,nascimento2017simulation,cooper2021short,li2020real} In this context, we have recently developed a real-time equation-of-motion CC (RT-EOM-CC) method\cite{RVKNKP,vila2021eom,vila2020real,vila2022real} to compute the one-electron GF based on a CCSD cumulant approach, which provides several advantages compared to other excited-state formulations. For example, the approach leads to an explicit exponential cumulant representation of the Green's function\cite{Hedin99review} and, at the same time, allows one to formulate a non-perturbative expression for the cumulant in terms of the solutions to a set of coupled, first-order nonlinear differential equations for the CC amplitudes. Additionally, the RT-EOM-CC formulation provides a theoretical platform for including nonlinear corrections to the traditional cumulant approximations, which are usually linear in the self-energy. In this paper, we compare the performance of the CCGF and RT-EOM-CC formulations when applied to core and valence binding energies of the water and water dimer molecules. In particular, we focus our analysis on identifying unique features of the spectral functions, i.e., the locations of quasiparticle and satellite peaks (for the core and valence regions) as functions of geometry changes in bond-breaking processes.
In this regard, we focus on the O-H stretch in H$_2$O and the bridging proton transfer process in (H$_2$O)$_2$. We demonstrate that the impact of the geometry changes is especially strong in the satellite region of the core spectral function. We also investigate the effect of the basis set on the accuracy of ionization potentials obtained with the CCGF and RT-EOM-CC models with singles and doubles (CCSDGF/RT-EOM-CCSD) and compare them with large CCSD(T) calculations. The paper is organized as follows: In Sec.~\ref{theory}, we provide a brief overview of the CCGF, IP-EOMCC, and RT-EOM-CC formulations and the approximations used in the calculations for the H$_2$O and (H$_2$O)$_2$ systems. In Sec.~\ref{results}, we discuss the ionization potentials and spectral functions in the valence and core regions. Using RT-EOM-CCSD, we investigate how the bond-breaking process affects the spectral functions calculated for the core and valence energy regimes. Special focus is placed on evaluating the performance of the RT-EOM-CCSD formalism in comparison with other CC formulations. In Sec.~\ref{results} we also discuss basis set effects on the accuracy of the calculated ionization potentials. Finally, Sec.~\ref{conclusion} briefly summarizes our results. \section{Theory} \label{theory} In this section, we discuss basic aspects of the various CC approaches used here to calculate correlated binding energies in various energy windows. In particular, we focus on the coupled-cluster Green's function formalism, the closely related ionization-potential equation-of-motion coupled-cluster (IP-EOMCC) approach, and the real-time EOM-CC cumulant formulation.
\subsection{CC Green's function} The retarded part of the Green's function (the advanced part can be developed analogously) is defined by the following matrix elements $G^R_{pq}(\omega)$: \begin{equation} G^R_{pq}(\omega) = \langle \Psi_g | a_q^\dagger (\omega + ( H - E_0 ) - i \eta)^{-1} a_p | \Psi_g \rangle \;, \label{gf0} \end{equation} where $E_0$ is the corresponding ground-state energy for the $N$-electron system, $\eta$ is a broadening factor, $a_p$/$a^{\dagger}_p$ are the annihilation/creation operators for an electron in the $p$-th spin-orbital, and $|\Psi_g\rangle$ is the ground-state wave function of the $N$-electron system. The bi-variational CC \cite{arponen1983variational,Stanton1993} formalism utilizes distinct parametrizations for the bra ($\langle\Psi_g|$) and ket ($|\Psi_g\rangle$) ground-state wave functions, i.e., \begin{eqnarray} \langle \Psi_g | &=& \langle \Phi | (1+\Lambda)e^{-T} \label{biv1} \\ | \Psi_g \rangle &=& e^T | \Phi \rangle, \label{biv2} \end{eqnarray} which leads to the following form of the retarded part of the CC Green's function (CCGF) \cite{nooijen92_55,nooijen93_15,nooijen95_1681,meissner93_67,kkgfcc1,mcclain2016spectral,kkgfcc3, PengKowalski2018,doi:10.1063/1.5138658,doi:10.1021/acs.jctc.9b00172} \begin{eqnarray} G^R_{pq}(\omega) = \langle\Phi|(1+\Lambda) \overline{a_q^{\dagger}} (\omega+\bar{H}_N- \text{i} \eta)^{-1} \overline{a_p} |\Phi\rangle. \label{gf1} \end{eqnarray} The similarity-transformed operators $\overline{A}$ ($A = H, a_p, a_q^{\dagger}$) are defined as $\overline{A} = e^{-T} A ~e^{T}$ ($\overline{H}_N$ stands for the normal-product form of $\overline{H}$). The numerical algorithms for calculating Eq.
(\ref{gf1}) employ auxiliary operators $X_p(\omega)$ \begin{eqnarray} \hspace*{-0.4cm} X_p(\omega) &=& X_{p,1}(\omega)+X_{p,2}(\omega) + \ldots \notag \\ &=& \sum_{i} x_i(\omega)_p a_i + \sum_{i<j,a} x_{ij}^a(\omega)_p a_a^{\dagger} a_j a_i +\ldots , \;\; \forall_p \label{xp} \end{eqnarray} that satisfy the equations \begin{eqnarray} (\omega+\overline{H}_N - \text{i} \eta )X_p(\omega)|\Phi\rangle = \overline{a_p} |\Phi\rangle. \label{eq:xplin} \end{eqnarray} In Eq. (\ref{xp}), the $i,j,\ldots$ ($a,\ldots$) labels refer to spin-orbital labels occupied (unoccupied) in the $N$-electron reference function $|\Phi\rangle$. Using these operators, the matrix elements can be expressed in a simple form \begin{eqnarray} G^R_{pq}(\omega) = \langle\Phi|(1+\Lambda) \overline{a_q^{\dagger}} X_p(\omega) |\Phi\rangle. \label{gfxn2} \end{eqnarray} The tensors $x_i(\omega)_p$, $x_{ij}^a(\omega)_p$, etc. in Eq.~(\ref{xp}) represent the sought-after amplitudes. In our implementation of CCGF, we approximate the de-excitation $\Lambda$ operator by the cluster operator $T^{\dagger}$. The main numerical effort associated with constructing the retarded CC Green's function stems from the need to solve a large number of independent linear equations, which in turn lends itself to efficient parallel schemes utilizing multiple levels of parallelism. We implemented the CCGF model with single and double excitations (CCSDGF)\cite{peng2021GFCCLib} using the parallel tensor library TAMM (Tensor Algebra for Many-body Methods).\cite{TAMM} The current version of the CCGF code utilizes Cholesky-decomposed two-electron integrals that significantly reduce the memory requirements and inter-node communication.
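Since each pair $(\omega, p)$ in Eq.~(\ref{eq:xplin}) defines an independent linear system, the structure of the frequency sweep can be illustrated with small dense matrices standing in for $\bar{H}_N$ and the $\overline{a_p}|\Phi\rangle$ vectors. The sketch below is a toy illustration with assumed names and dense linear algebra, not the TAMM implementation:

```python
import numpy as np

def ccgf_retarded(Hbar, rhs, lhs, omegas, eta=0.01):
    """Toy CCGF frequency sweep: for each omega, solve
    (omega + Hbar - i*eta) x_p(omega) = b_p, then form G_pq = l_q . x_p.
    Each (omega, p) system is independent, which is the source of the
    multiple levels of parallelism mentioned in the text."""
    dim = Hbar.shape[0]
    G = np.zeros((len(omegas), len(rhs), len(lhs)), dtype=complex)
    for wi, w in enumerate(omegas):
        A = (w - 1j * eta) * np.eye(dim) + Hbar
        for p, b in enumerate(rhs):
            x = np.linalg.solve(A, b)   # grows large as omega nears -eig(Hbar)
            for q, l in enumerate(lhs):
                G[wi, p, q] = l @ x
    return G
```

For a diagonal toy $\bar{H}_N=\mathrm{diag}(\epsilon_i)$ this reproduces the expected simple-pole form $1/(\omega+\epsilon_i-i\eta)$ for the diagonal matrix elements.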
\subsection{Ionization potential EOMCC formalism} The ionization potential EOMCC (IP-EOMCC) formulation \cite{stanton1995perturbative} utilizes the EOMCC-type wave function expansion for the $k$-th electronic state $|\Psi_k\rangle$ of the $(N-1)$-electron system: \begin{equation} |\Psi_k\rangle=R_k e^T |\Phi\rangle, \label{ipcc1} \end{equation} where $|\Phi\rangle$ and $T$ are the reference function and cluster operator of the $N$-electron system, respectively, and the $R_k$ operator is given by the expansion \begin{equation} R_k=\sum_i r_i(k)a_i + \sum_{i<j;a} r(k)_{ij}^a a_a^{\dagger} a_j a_i + \ldots ~~ . \label{ipcc2} \end{equation} The $R_k$ operator can be determined by solving a non-Hermitian eigenvalue problem in the $(N-1)$-electron Hilbert space \begin{equation} \bar{H}_N R_k |\Phi\rangle = \omega^{\rm IP}_k R_k |\Phi\rangle \;, \label{ipcc3} \end{equation} where $\omega_k^{\rm IP}$ stands for the $k$-th ionization potential. It should be noted that the eigenvalues of the IP-EOMCC equation (\ref{ipcc3}) are identical to the singularities (as functions of the $\omega$ parameter for $\eta=0$) of the solutions of Eq. (\ref{eq:xplin}). For the present studies we use the TCE implementation of the IP-EOMCC model with singles and doubles (IP-EOMCCSD).\cite{bhaskaran2014equation} \subsection{Real-time EOM-CC cumulant approach} The cumulant approximation, in which the core-hole GF is written as the product of the free-particle GF and the exponential of a cumulant, has been shown to provide accurate spectral functions for extended systems.\cite{PhysRevB.90.085112,PhysRevB.91.121112,PhysRevB.94.035156,doi:10.1116/6.0001173} We have recently combined this form of the GF with the CC method to develop\cite{doi:10.1063/5.0004865,vila2021eom,vila2020real,vila2022real} a real-time equation-of-motion CC (RT-EOM-CC) approach to compute the core and valence one-electron GF based on a CC approximation to the cumulant.
In this approximation, the retarded GF can be expressed as \begin{equation} \label{eq:cum_gf} G_{c}^{R}(t) = -i \Theta(t) e^{-i (\epsilon_c + E_N^{corr}) t} e^{C_c^{R}(t)}, \end{equation} where $c$ is the index of the excited hole spin-orbital (which in our previous studies was restricted to be a core spin-orbital, but here can be any occupied spin-orbital), $\epsilon_c$ is its bare or Fock energy, $E_N^{corr}$ is the correlation energy of the $N$-electron ground state, and $C_c^{R}(t)$ is the retarded cumulant associated with $c$. As shown in Eq.~(\ref{eq:cum_gf}), the CC approximation naturally results in a GF with an explicit exponential cumulant form,\cite{langreth70,SG1978,Hedin99review,sky} which, unlike that obtained in self-energy formulations, is non-perturbative. The expression for the time derivative of the cumulant in terms of the time-dependent CC amplitudes takes the simple form: \begin{equation} \label{eq:dcdt} \begin{split} -i\frac{d C_c^R(t)}{dt} =& \sum_{ia} f_{ia} t_i^a(t) + \frac{1}{2} \sum_{ijab} v_{ij}^{ab} t_j^b(t) t_i^a(t)\\ +& \frac{1}{4} \sum_{ijab} v_{ij}^{ab} t_{ij}^{ab}(t), \end{split} \end{equation} where $f_{pq} = \epsilon_p \delta_{pq} - v_{pc}^{qc}$ are the matrix elements of the $(N-1)$-electron Fock operator, $\epsilon_p$ is the energy of spin-orbital $p$, and $v_{pq}^{rs} = \left< pq \left| \right| rs \right>$ are the antisymmetrized two-particle Coulomb integrals over the generic spin-orbitals $p, q, r, s$. The time-dependent amplitudes $t_{ij...}^{ab...}(t)$ of the CC expansion are obtained from a set of coupled, first-order non-linear differential equations analogous to those used in the solution of the static CCSD equations (see, for instance, Ref. \citenum{vila2022real}). These equations have the initial conditions $t_{ij...}^{ab...}(0)=0$, which result in the initial condition $C_c^R(0)=0$ for the cumulant in Eq.~(\ref{eq:cum_gf}). As shown in Eq.
\ref{eq:dcdt}, the RT-EOM-CC cumulant also naturally includes non-linear contributions, in contrast to traditional cumulant approximations that usually depend linearly on the self-energy. RT-EOM-CC relies on two main approximations: 1) The separable approximation $\left| 0 \right> \simeq a_c^\dagger \left| N-1 \right>$, where $\left| N-1 \right>$ is the fully correlated $(N-1)$-electron portion of the $N$-electron exact ground state wave function $\left| 0 \right>$. 2) A time-dependent (TD) CC ansatz for this $(N-1)$-electron state, i.e., $\left| N-1, t \right> = \tilde N(t) e^{T(t)} \left| \phi \right>$, where $\tilde N(t)$ is a normalization factor and $T(t)$ is the TD cluster operator. In contrast with traditional CC formulations, $T(t)$ is defined in the $(N-1)$-electron Fock space. Therefore, the reference determinant $\left| \phi \right>$ has a hole in level $c$, i.e., $\left| \phi \right> = a_c \left| \Phi \right>$, where $\left| \Phi \right>$ is the traditional ground-state $N$-electron Hartree--Fock (HF) Slater determinant. We have shown\cite{vila2022real} that, at the CCSD level, the RT-EOM-CC method gives accurate core quasiparticle (QP) binding energies, with a mean absolute error (MAE) from experiment of about 0.3 eV for the CH$_4$, NH$_3$, H$_2$O, HF and Ne systems. The method also gives a good treatment of the many-body satellites, with errors in the QP-satellite gap of less than 1 eV. Finally, despite the use of the separable approximation, RT-EOM-CCSD provides accurate valence ionization energies, as shown below and elsewhere.\cite{vilartvalence2022} \section{Results and Discussion} \label{results} \subsection{Geometries, basis sets, and computational details} The calculations of the water monomer use the experimental geometry,\cite{cccbdb} with R(OH) = 0.958 \AA\ and $\theta$(HOH) = 104.48\degree. The molecule has $C_{2v}$ symmetry and is oriented in the traditional way, with the atoms lying in the $yz$ plane.
This results in the molecular orbital configuration $(1$a$_1)^2(2$a$_1)^2(1$b$_2)^2(3$a$_1)^2(1$b$_1)^2$. For the water dimer structure we use the best estimate reported by Lane\cite{lane2013ccsdtq} which was obtained using the counterpoise corrected CCSD(T)-F12b method at the complete basis set limit, with corrections for higher-order excitations, core correlation, relativistic effects, and diagonal Born-Oppenheimer effects. For reference, this structure has the following parameters: R(OH)$_f$ = 0.95685 \AA, R(OH)$_b$ = 0.96414 \AA, R(OH)$_a$ = 0.95843 \AA, $\theta$(HOH)$_d$ = 104.854\degree, $\theta$(HOH)$_a$ = 104.945\degree, R(O$\cdots$O) = 2.90916 \AA, $\alpha$ = 5.686\degree, $\beta$ = 123.458\degree (for an explanation of these labels refer to Fig. 3 of Ref. \citenum{lane2013ccsdtq}). The proton transfer coordinate $\Delta$R(OH)$_b$ used here is defined as the otherwise rigid displacement of the bridging proton, around its optimal position, following the direction parallel to the axis defined by the O$\cdots$O bond. The RT-EOM-CCSD simulations were performed using the Tensor Contraction Engine (TCE) \cite{TCE2} implementation of RT-EOM-CCSD.\cite{vila2022real} All the simulations used a time step of 0.025 au and frozen virtual orbitals above 10 au, when present. The total length of the simulations was 90 au for the valence states of the monomer and dimer, and 400 au for the core state of the monomer. The valence simulations of the monomer used the aug-cc-pVTZ basis set,\cite{augccpvdz} while the core ones used the Sapporo-TZP\cite{sapporotzp} basis set. The RT-EOM-CC simulations of the dimer used the aug-cc-pVTZ basis set. 
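To illustrate how such time-domain runs are turned into spectra, the sketch below damps and Fourier transforms a model retarded GF of the cumulant form of Eq.~(\ref{eq:cum_gf}). The grid defaults mirror the time step and core-state simulation length quoted above; the cumulant callable and the broadening $\eta$ are toy assumptions of ours, not the actual RT-EOM-CCSD amplitudes:

```python
import numpy as np

def spectral_function(eps, cumulant, tmax=400.0, dt=0.025, eta=0.05):
    """A(w) = -(1/pi) Im \int_0^tmax dt e^{i w t} G(t) for the model
    retarded GF G(t) = -i exp(-i*eps*t) exp(C(t)), damped by exp(-eta*t)."""
    t = np.arange(0.0, tmax, dt)
    G = -1j * np.exp(-1j * eps * t) * np.exp(cumulant(t) - eta * t)
    G[0] *= 0.5                           # trapezoidal half-weight at t = 0
    w = 2.0 * np.pi * np.fft.fftfreq(len(t), d=dt)
    Gw = np.fft.ifft(G) * len(t) * dt     # = sum_n e^{+i w t_n} G(t_n) dt
    A = -Gw.imag / np.pi
    return np.fft.fftshift(w), np.fft.fftshift(A)
```

With a vanishing cumulant this yields a single Lorentzian QP peak at $\epsilon$ carrying unit integrated weight; a boson-like toy cumulant such as $C(t)=a\,(e^{-i\omega_s t}-1)$ redistributes weight from the QP peak into satellites spaced by $\omega_s$.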
The CCGF spectral functions were calculated using the highly scalable parallel tensor library TAMM (Tensor Algebra for Many-body Methods) \cite{mutlu2022tamm, mutlu2019toward} implementation of the CCSDGF formalism.\cite{peng2021GFCCLib, doi:10.1063/1.5138658} The valence vertical ionization potentials for the water monomer and dimer were calculated using the IP-EOMCCSD code available in NWChem.\cite{nwchem, apra2020nwchem} All CCSD(T) calculations for the water dimer in the cc-pVXZ and aug-cc-pVXZ (X=D, T, Q, 5, 6) basis sets were obtained with the TAMM implementation of the CCSD(T) formalism,\cite{kim2020scalable,mutlu2022tamm, mutlu2019toward} with a linear dependence threshold for the basis sets of 10$^{-5}$, an SCF convergence cutoff of 10$^{-8}$ for the energy, and a CCSD convergence cutoff of 10$^{-8}$. In the largest CCSD(T) calculations, corresponding to the aug-cc-pV6Z basis, more than 1,100 orbitals were correlated. \begin{center} \begin{table}[t] \centering \caption{The low-lying ionization potentials of the H$_2$O molecule at the experimental geometry for various basis sets, obtained with the IP-EOMCCSD, relativistic IP-EOMCCSD, and RT-EOM-CCSD approaches.} \begin{tabular}{lccc} \hline \hline\\ Basis set &\, $1^2B_1$ &\, $1^2A_1$ &\, $1^2B_2$ \\
\hline\\ & \multicolumn{3}{c}{IP-EOMCCSD} \\
\hline cc-pVDZ &\, 11.807 &\, 14.131 &\, 18.470 \\
cc-pVTZ &\, 12.423 &\, 14.651 &\, 18.848 \\
cc-pVQZ &\, 12.707 &\, 14.912 &\, 19.072 \\
cc-pV5Z &\, 12.721 &\, 14.920 &\, 19.079 \\
aug-cc-pVDZ &\, 12.384 &\, 14.673 &\, 18.906 \\
aug-cc-pVTZ &\, 12.617 &\, 14.832 &\, 19.002 \\
aug-cc-pVQZ &\, 12.707 &\, 14.912 &\, 19.072 \\
aug-cc-pV5Z &\, 12.740 &\, 14.938 &\, 19.095 \\
\hline \\ & \multicolumn{3}{c}{Relativistic IP-EOMCCSD} \\
\hline dyall.ae4z &\, 12.627 &\, 14.834 &\, 18.871 \\
\hline \\ & \multicolumn{3}{c}{RT-EOM-CCSD} \\
\hline aug-cc-pVDZ &\, 12.477 &\, 14.774 &\, 18.981 \\
aug-cc-pVTZ &\, 12.587 &\, 14.817 &\, 18.977 \\
\hline \\ & \multicolumn{3}{c}{Expt.} \\
\hline Ref. \citenum{h2oIPnist} &\, 12.621\,$\pm$\,0.002 &\, &\, \\
Ref. \citenum{doi:10.1021/jp030263q} &\, &\, 14.84\,$\pm$\,0.02 &\, 18.78\,$\pm$\,0.02 \\
\hline \hline \end{tabular} \label{table_h2o_ipeom} \end{table} \end{center} \begin{figure}[t] \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{Fig1_h2o2_iP_accpvdz.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{Fig1_h2o2_iP_accpvdz_v2.pdf} \caption{Top: Ionization potential of the $1^2A''$ state of (H$_2$O)$_2$ as a function of the proton transfer coordinate calculated with the IP-EOMCCSD and RT-EOM-CCSD approaches, and with the $\Delta$CCSD, $\Delta$CCSD(T) and $\Delta$CCSDT methods as energy differences between the $N$ and $N-1$ electron systems. All calculations used the aug-cc-pVDZ basis and correlated all the molecular orbitals. Bottom: Ionization potentials relative to the $\Delta$CCSDT values.} \label{fig:methcomp} \end{figure} \subsection{Valence region} Table \ref{table_h2o_ipeom} collects the IP-EOMCCSD, relativistic IP-EOMCCSD, and RT-EOM-CCSD results for the ionization potentials corresponding to the $1^2B_1$, $1^2B_2$ and $1^2A_1$ states of H$_2$O$^+$ (at the equilibrium structure of H$_2$O) calculated with various basis sets of the cc-pVXZ \cite{ccpvdz} and aug-cc-pVXZ (X=D,T,Q,5) \cite{augccpvdz} type. The observed basis set effects can vary with the targeted state. For example, the difference between the cc-pVDZ and cc-pV5Z results for the lowest ionization potential ($1^2B_1$) amounts to 0.917 eV. The analogous difference for the $1^2B_2$ state is equal to 0.609 eV.
The utilization of the larger aug-cc-pVXZ basis sets significantly reduces the dependence of the calculated IP-EOMCCSD ionization energies on the basis set employed, which is well illustrated by the $1^2B_1$ ionization potential, where the difference between the IP-EOMCCSD results for the aug-cc-pVDZ and aug-cc-pV5Z basis sets is reduced to 0.356 eV. It should also be noted that increasing the basis set size in the IP-EOMCCSD simulations of the $1^2B_1$ state results in overshooting its experimental value (12.621\,$\pm$\,0.002 eV). The aug-cc-pVTZ basis set provides the best IP-EOMCCSD estimate for this state. In analogy to the IP-EOMCCSD method, the RT-EOM-CCSD simulations using the same aug-cc-pVTZ basis set provide an accurate estimate (12.587 eV) of the experimental value. The RT-EOM-CCSD results for the remaining states also agree with the IP-EOMCCSD results and with experiment for the aug-cc-pVTZ basis set. Table \ref{table_h2o_ipeom} also shows results obtained with the fully 4-component relativistic IP-EOMCCSD.\cite{pathak2014atom,pathak2014molecule,pathak2016open} The effects of relativity have been included by using a Dirac--Coulomb--Gaunt Hamiltonian together with a Gaussian charge-distribution nuclear model to mimic the finite nucleus. The calculations use the dyall.ae4z basis for both H and O, in which the large- and small-component basis functions are left uncontracted. All electrons are correlated, and none of the virtual spinors are frozen. The computed IPs are in close agreement with experiment; the effect of relativity, however, is negligible. \renewcommand{\tabcolsep}{0.55cm} \begin{center} \begin{table} \centering \caption{Low-lying ionization potentials of (H$_2$O)$_2$ at the equilibrium geometry, obtained with the IP-EOMCCSD and RT-EOM-CCSD approaches for various basis sets.
In all calculations, all molecular orbitals were correlated.} \begin{tabular}{lcc} \hline \hline\\ Basis set &\, $1^2A''$ &\, $1^2A'$ \\
\hline \\ & \multicolumn{2}{c}{IP-EOMCCSD} \\ \hline cc-pVDZ &\, 10.889 &\, 12.369 \\
cc-pVTZ &\, 11.546 &\, 12.938 \\
cc-pVQZ &\, 11.774 &\, 13.136 \\
cc-pV5Z &\, 11.868 &\, 13.216 \\
aug-cc-pVDZ &\, 11.537 &\, 12.873 \\
aug-cc-pVTZ &\, 11.768 &\, 13.107 \\
aug-cc-pVQZ &\, 11.857 &\, 13.200 \\
\hline \\ & \multicolumn{2}{c}{RT-EOM-CCSD} \\ \hline aug-cc-pVDZ &\, 11.589 &\, 13.043 \\
\hline \\ & \multicolumn{2}{c}{Expt.} \\ \hline Ref. \citenum{tomoda1982photoelectron} &\, 12.1\,$\pm$\,0.1 &\, 13.2\,$\pm$\,0.2 \\ Ref. \citenum{stephen_leone_water_dimer} &\, 11.74\,$\pm$\,0.05 &\, \\ \hline \hline \end{tabular} \label{table_h2o2_ipeomccsd_b} \end{table} \end{center} Fig.~\ref{fig:methcomp} (top) shows a comparison of the ionization potential of the $1^2A''$ state of (H$_2$O)$_2$ as a function of the proton transfer coordinate calculated with the IP-EOMCCSD and RT-EOM-CCSD approaches, and with the $\Delta$CCSD, $\Delta$CCSD(T) and $\Delta$CCSDT methods as energy differences between the $N$ and $N-1$ electron systems. Fig.~\ref{fig:methcomp} (bottom) shows the same ionization potentials relative to those obtained with the $\Delta$CCSDT method. For all geometries considered, the $\Delta {\rm CCSD(T)}$ results are in excellent agreement with the $\Delta {\rm CCSDT}$ ones. At the same time, the $\Delta {\rm CCSD(T)}$ results systematically improve on the $\Delta {\rm CCSD}$ ionization potentials. It is also worth noting that the IP-EOMCCSD results are consistently closer to the $\Delta {\rm CCSDT}$ reference at larger separations than the $\Delta {\rm CCSD}$ ones.
The effect of the basis set quality on the low-lying ionization energies calculated with the IP-EOMCCSD formalism for the equilibrium structure of the water dimer is shown in Table \ref{table_h2o2_ipeomccsd_b}. As in the case of the water monomer (see Table~\ref{table_h2o_ipeom}), one can observe a significant impact of the basis set size on the calculated ionization energies. In particular, the same trend of increasing IP values with growing basis set size can be observed. For example, within the aug-cc-pVXZ (X=D,T,Q) series, the aug-cc-pVQZ basis is needed to reach satisfactory agreement with experiment for the IP corresponding to the $1^2A''$ state. In the case of the $1^2A'$ state, the IP-EOMCCSD results align with the experimental values. The RT-EOM-CCSD result in the aug-cc-pVDZ basis for the $1^2A''$ state is closer to experiment than the IP-EOMCCSD ionization potential obtained with the same basis set. \begin{table}[t] \caption{\label{tab:gs}{Ionization potential of the $1^2A''$ state of the water dimer as a function of basis set, computed as the energy difference between the CCSD(T) ground state energies of (H$_2$O)$_2$ and (H$_2$O)$_2^+$.}} \begin{ruledtabular} \begin{tabular}{lcc} \multicolumn{1}{l}{Basis set}&\multicolumn{1}{c}{\# of basis functions}& \multicolumn{1}{r}{IP (eV)}\\
\hline cc-pVDZ &\, 50 &\, 10.998 \\
cc-pVTZ &\, 130 &\, 11.596 \\
cc-pVQZ &\, 280 &\, 11.787 \\
cc-pV5Z &\, 525 &\, 11.865 \\
cc-pV6Z &\, 848 &\, 11.886 \\
\hline aug-cc-pVDZ &\, 86 &\, 11.643 \\
aug-cc-pVTZ &\, 210 &\, 11.804 \\
aug-cc-pVQZ &\, 428 &\, 11.862 \\
aug-cc-pV5Z &\, 741 &\, 11.885 \\
aug-cc-pV6Z &\, 1120 &\, 11.893 \\
\end{tabular} \end{ruledtabular} \label{ccsd_t} \end{table} In order to assess basis set size effects, we have performed a series of calculations tracking how the lowest ionization potential of the water dimer changes with the basis set.
These results are compiled in Table \ref{ccsd_t}. As described above, the ionization potential is calculated as the difference between the CCSD(T) ground state energies of the water dimer and its cation at the optimal geometry. The basis set sizes vary from the smallest set (cc-pVDZ), with 50 basis functions for the water dimer, to the largest (aug-cc-pV6Z), with 1120 basis functions. Interestingly, the augmentation functions significantly improve the convergence of the ionization potential towards its basis set limit, as demonstrated by the smaller spread of IP values for the augmented sets (0.25 eV) compared to the unaugmented ones (about 1 eV). From the table, it is evident that even a modest augmented basis set like aug-cc-pVDZ does a reasonable job compared to much more saturated basis sets. \begin{figure}[t] \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.45\textwidth]{H2O_Core_IP_QPW.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.45\textwidth]{H2O_Val_IP_QPW.pdf} \caption{Ionization potential (IP) and quasi-particle strength (QPS) as a function of OH bond length for the core $^2A_1$ (top), and HOMO-1 $1^2A_1$ and HOMO $1^2B_1$ (bottom) states of H$_2$O, calculated with the RT-EOM-CCSD method (RT) using the aug-cc-pVTZ basis set for the valence states and the Sapporo-TZP\cite{sapporotzp} basis set for the core. The blue circles and diamonds indicate the experimental (Expt.) ionization energies at equilibrium.\cite{h2ospf,doi:10.1021/jp030263q}} \label{fig:h2oIP} \end{figure} Fig.~\ref{fig:h2oIP} (bottom) shows the ionization energies and QP strengths for the HOMO $1^2B_1$ and HOMO-1 $1^2A_1$ states of H$_2$O, computed with the RT-EOM-CCSD method and the aug-cc-pVTZ basis set. For these states the RT-EOM-CCSD method gives good results, with errors relative to experiment of only 0.02 eV and 0.03 eV, respectively.
Both the $1^2B_1$ and $1^2A_1$ states are sensitive to bond stretching, with a decrease in ionization energy of $\sim$1 eV along the bond dissociation. The ionization energies show very similar behavior over the whole range; the QP strengths, however, behave very differently. While the QP strength of the $1^2B_1$ state decreases monotonically by $\sim$6\%, that of the $1^2A_1$ state varies by only $\sim$2\% but goes through a minimum at 1.1 \AA. This can be clearly seen in the middle panel of Fig.~\ref{fig:h2oSPF}, which shows that the spectral function of the $1^2A_1$ state is nearly featureless in the satellite region, consistent with the small weight transfer from the QP. For the $1^2B_1$ state (Fig.~\ref{fig:h2oSPF}, bottom), the $\sim$6\% weight transfer is manifested in the small increase in satellite weight in the $-50$ to $-30$ eV region. \begin{figure} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.45\textwidth]{H2O_Core_A_c.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.45\textwidth]{H2O_HOMOm1_A_c.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.45\textwidth]{H2O_HOMO_A_c.pdf} \caption{Quasi-particle peak and satellite region of the spectral function, as a function of OH bond length (R$_\mathrm{OH}$) for the core $^2A_1$ (top), HOMO-1 $1^2A_1$ (middle), and HOMO $1^2B_1$ (bottom) states of H$_2$O, calculated with the RT-EOM-CCSD method using the aug-cc-pVTZ basis set.} \label{fig:h2oSPF} \end{figure} \begin{figure} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{H2O_2_PES.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{H2O_2_Val_IP_QPW.pdf} \caption{Top: Potential energy surface for neutral and ionized (H$_2$O)$_2$ as a function of proton transfer coordinate $\Delta$R(OH)$_\mathrm{b}$ calculated at the CCSD level.
Bottom: Ionization potentials (IP) and quasi-particle strengths (QPS) as a function of proton transfer coordinate $\Delta$R(OH)$_\mathrm{b}$ for the HOMO $1^2A''$ and HOMO-1 $1^2A'$ states of (H$_2$O)$_2$, calculated with the RT-EOM-CCSD method (RT). All calculations used the aug-cc-pVDZ basis set. The blue circles and diamonds indicate the experimental (Expt.) ionization energies at equilibrium.\cite{tomoda1982photoelectron,stephen_leone_water_dimer}} \label{fig:h2o2IP} \end{figure} Fig.~\ref{fig:h2o2IP} (bottom) shows results for the ionization of the HOMO $1^2A''$ and HOMO-1 $1^2A'$ states of (H$_2$O)$_2$ as a function of the proton transfer coordinate. For reference, the top panel shows the potential energy surfaces for the ground and first ionized ($1^2A''$) states computed at the standard CCSD level. Unlike the monomer case, which shows little variation as a function of the coordinate, the two lowest ionized states of the dimer vary significantly. The predicted decrease in IP associated with the proton transfer is expected given the formation of an (OH)$^-$(H$_3$O)$^+$ cluster. The IP of this proton-transferred system should be dominated by the ionization of the (OH)$^-$ fragment, which has a low IP despite being ``solvated'' by the (H$_3$O)$^+$ fragment.\cite{winter2006electron} Moreover, the first two ionized states of (OH)$^-$(H$_3$O)$^+$ are expected to become nearly degenerate, given that the HOMO of isolated OH$^-$ is doubly degenerate (1$\pi$). The CCSD PES also predicts an adiabatic ionization energy from (H$_2$O)$_2$ to the disproportionated ion (OH)(H$_3$O)$^+$ of 10.83 eV, in reasonable agreement with the previously estimated value of 10.64 eV.\cite{barnett1995pathways} The QP strength (Fig.~\ref{fig:h2o2IP}) shows a small ($\sim$4\%) variation over the proton transfer range for the $1^2A''$ state, and goes through a minimum in the region 0.4--0.6 \AA. The $1^2A'$ state shows very similar behavior.
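For reference, the quasi-particle strength discussed here and below is the weight carried by the QP peak in the one-particle removal spectral function; in standard many-body notation (our notation, a textbook definition rather than one quoted from the cited works),
\begin{equation*}
A_k(\omega) = \sum_{n} \left|\langle \Psi^{N-1}_{n} | a_{k} | \Psi^{N}_{0} \rangle\right|^{2} \delta\!\left(\omega - E^{N}_{0} + E^{N-1}_{n}\right),
\end{equation*}
so that the QP strength is the squared overlap $|\langle \Psi^{N-1}_{\mathrm{QP}} | a_{k} | \Psi^{N}_{0} \rangle|^{2}$ associated with the QP peak, and any weight it loses is transferred to the satellite structures.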
The QP strength minima coincide with the transition-state region of the ionized-state potential energy surface and result from an increase in many-body electron correlation in the bond breaking/reforming region, which manifests itself in the spectral function as a transfer of weight to the satellites (Fig.~\ref{fig:h2oSPF}). Aside from the overall shift of the QP position as a function of proton transfer coordinate, the spectral function shows very little change, as seen from Fig.~\ref{fig:h2o2SPF}. \begin{figure}[t] \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{H2O_2_HOMO_A_c.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{H2O_2_HOMOm1_A_c.pdf} \caption{Quasi-particle peak and satellite region of the spectral function as a function of proton transfer coordinate $\Delta$R(OH)$_\mathrm{b}$ for the HOMO $1^2A''$ (top) and HOMO-1 $1^2A'$ (bottom) states of (H$_2$O)$_2$, calculated with the RT-EOM-CCSD method. All calculations used the aug-cc-pVDZ basis set.} \label{fig:h2o2SPF} \end{figure} \begin{figure}[t] \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{H2O_Dimer_GFCCSD_aug_ccpvdz_IP_QPW.pdf} \includegraphics[clip,trim=0.5cm 1.0cm 0.5cm 2.8cm, width=0.50\textwidth]{H2O_Dimer_GFCCSD_aug_ccpvdz_A_c.pdf} \caption{Ionization potential (IP) and quasi-particle strength (QPS) (top) and quasi-particle peak and satellite region of the spectral function (bottom) as a function of proton transfer coordinate $\Delta$R(OH)$_\mathrm{b}$ for the HOMO $1^2A''$ state of (H$_2$O)$_2$, calculated with the CCSDGF method. All calculations used the aug-cc-pVDZ basis set. The blue circle indicates the experimental (Expt.) ionization energy at equilibrium.\cite{stephen_leone_water_dimer}} \label{fig:h2o2GFCCSD} \end{figure} We are interested in the change in satellite features as a function of proton transfer coordinate, $\Delta$R(OH)$_\mathrm{b}$.
From Fig.~\ref{fig:h2o2SPF}, a shoulder satellite peak can be observed on the l.h.s. of the main QP peak at all the studied values of $\Delta$R(OH)$_\mathrm{b}$, and the satellite peak position red-shifts as $\Delta$R(OH)$_\mathrm{b}$ increases. This observation is confirmed by independent static CCSDGF calculations using the same basis set for the same (H$_2$O)$_2$ systems at the same values of $\Delta$R(OH)$_\mathrm{b}$ and over the same energy regime; the corresponding results are shown in Fig.~\ref{fig:h2o2GFCCSD}. The only difference between Figs.~\ref{fig:h2o2SPF} and \ref{fig:h2o2GFCCSD} is that in the CCSDGF results the satellites are more separated from the QP and a more refined satellite structure can be observed. At least two shoulder satellites are resolved on the l.h.s. of the QP peak, while in the RT-EOM-CCSD results the finer satellite structure is embedded in the background of the QP peak. These peaks would likely be better resolved with longer RT-EOM-CCSD propagation times. Further wave function analysis of the satellite features shows a $\pi\rightarrow \sigma^*$ transition (e.g. from HOMO-1 to LUMO) accompanying the main ionization process from the HOMO. However, it is worth noting that the satellites predicted by CCSDGF are usually only qualitative, and systematic improvement towards a quantitative evaluation would require including higher-order excitations in the CCGF framework, as already demonstrated in previous CCGF studies (see for example Ref.~\citenum{peng2018green}). On the other hand, previous studies for small and medium-size molecules (e.g. H$_2$O, NH$_3$, CH$_4$, etc.)\cite{RVKNKP,vila2021eom,vila2020real,vila2022real} suggested that RT-EOM-CCSD is capable of providing better predictions for the satellite positions, owing to its explicit treatment of the time evolution of the correlation in the $N-1$ electron state.
Given that only singles and doubles are included in both the CCSDGF and RT-EOM-CCSD calculations, and that the satellite peaks are close to each other, we conjecture that the satellite peak on the l.h.s. of the main QP peak likewise features a $\pi\rightarrow \sigma^*$ transition along with the main ionization. It is worth mentioning that satellite assignment in the RT-EOM-CCSD method is challenging, and new methods in this regard are under active development by the present authors. \subsection{Core binding energies} Fig.~\ref{fig:h2oIP} (top) shows the ionization energy and QP strength for the core state of the water monomer, computed with the RT-EOM-CCSD method and the Sapporo-TZP basis set. Unlike the valence states, the core ionization energy is nearly insensitive to the deformation of the bond, changing by only $\sim$0.7 eV. At the experimental bond length, the error with respect to the experimental core ionization energy, about 0.5 eV, is larger than for the valence states yet still reasonable. On the other hand, the core ionization QP strength is much more sensitive to bond length than that of the valence states, decreasing by about 10\% over the same range. This implies a significant transfer of weight from the QP to the satellite region. This transfer can be seen in Fig.~\ref{fig:h2oSPF} (top), which focuses on the satellite region of the spectral function as a function of bond length. Unlike the QP, which barely changes position, the satellite region extending to 60 eV below it shows remarkable changes in weight distribution. As the bond dissociates, the main feature, 26 eV below the QP, becomes less pronounced and is replaced by a pair of features about 9 and 12 eV below the QP.
\section{Conclusions} \label{conclusion} In the present study, we have compared the performance of several high-accuracy coupled-cluster formulations in calculating the ionization potentials and spectral functions of the water monomer and dimer for various geometries corresponding to bond-breaking and proton transfer processes. We established that the ionization potentials calculated with the CCSDGF, IP-EOMCCSD, and RT-EOM-CCSD methods reproduce those calculated with the CCSD(T) and CCSDT approaches for these systems. When compared to experiment, these methods provide accurate results, with average errors on the order of 0.1 eV. The direct comparison and cross-check between the CCSDGF and RT-EOM-CCSD results also provides insight into the possible many-body excitations responsible for the satellite region below the quasiparticle peak. The peak assignment can be further improved by including higher-order excitations in the CCGF and RT-EOM-CC frameworks, and new methods are currently being explored. We find that the satellite region of the water monomer is particularly sensitive to bond breaking. We have also demonstrated that the CCSD(T) formalism can almost perfectly reproduce the CCSDT ionization potentials in the cases where CCSDT calculations were feasible for both the neutral and charged systems. This prompted us to study the effect of basis set quality on the accuracy of the CCSD(T) ionization potentials. We find that for the water dimer, the aug-cc-pVTZ and aug-cc-pV5Z basis sets provide results converged to 0.1 eV and 0.01 eV, respectively. The largest CCSD(T)/aug-cc-pV6Z calculations, for the (H$_2$O)$_2$ and (H$_2$O)$_2^+$ systems, involved 1120 orbitals.
While the CCSD(T) IPs obtained with the cc-pV6Z and aug-cc-pV6Z basis sets are almost identical, the discrepancy between the calculated IPs obtained with the smallest and the largest basis sets of a given type is significantly larger for the cc-pVXZ series, where it amounts to 0.89 eV, showing that the augmented basis sets provide faster convergence. Finally, we also reported results with the fully 4-component relativistic IP-EOMCCSD method for the lowest three states of the water monomer and found its predictions to be accurate in comparison with the available experimental values. However, we found the effects of relativity to be almost negligible for the valence ionization potentials. \section{Acknowledgements} This work was supported by the Computational Chemical Sciences Program of the U.S. Department of Energy, Office of Science, BES, Chemical Sciences, Geosciences and Biosciences Division in the Center for Scalable and Predictive methods for Excitations and Correlated phenomena (SPEC) at PNNL, with computational support from NERSC, a DOE Office of Science User Facility, under contract no. DE-AC02-05CH11231. B.P. also acknowledges support from the Laboratory Directed Research and Development (LDRD) Program at PNNL. \section*{AUTHOR DECLARATIONS} \subsection*{Conflict of Interest} The authors have no conflicts of interest to declare. \section*{DATA AVAILABILITY} The data that support the findings of this study are available from the corresponding authors upon reasonable request. \section*{REFERENCES} \bibliographystyle{h-physrev}
\section{Applications} \label{sec:applications} Let us now briefly discuss some possible future applications of the NNPDF fitting framework presented in this work. As representative examples, we consider the inclusion of new experimental data, producing fits with varied theory settings, and going beyond the determination of unpolarised collider PDFs. We discuss each of these applications in turn; for a more comprehensive list we refer the interested user to the online documentation. \paragraph{Adding new experimental data to the global fit.} A typical application of the open-source NNPDF framework would be to assess the impact of a new measurement on the global PDF fit. Carrying out a full-fledged fit has several advantages as compared to approximate methods such as Bayesian reweighting~\cite{Ball:2011gg,Ball:2010gb}: in particular, one does not rely on the availability of prior fits composed of a very large number of replicas. Also, in this way the user can easily vary the input dataset or the theoretical settings of the baseline fit. Furthermore, it is possible to add a large number of new datasets to the global fit simultaneously, while the reliability of the reweighting procedure is typically limited to a single dataset. To implement a new dataset in the NNPDF code, one should start by adding the new measurement to the {\tt buildmaster} suite. This will parse the new data points into the common format suitable for use in the PDF fits. Such an implementation will in general include information regarding the data central values, the specification of the kinematic variables, the statistical uncertainties, and any relevant correlated systematic uncertainties that may exist in the dataset.
In particular, the systematic uncertainties must be accompanied by metadata specifying their type (i.e. whether they are multiplicative or additive) as well as any possible correlations they may have with other systematic uncertainties (for example, the luminosity uncertainty will often be correlated across a given experiment). These uncertainties are then used to construct the covariance matrix as well as the Monte Carlo replicas used to train the neural networks parametrising the PDFs. Furthermore, in order to run the fit, the user has to produce the corresponding {\tt FK}-tables for this new dataset, which implies evaluating the fast NLO grids via {\tt APPLgrid}, {\tt FastNLO}, or {\tt PineAPPL}~\cite{Carrazza:2020gss} and then combining them with the DGLAP evolution kernels via {\tt APFELcomb}. Depending on the perturbative order and electroweak settings of the fit, one needs to complement these {\tt FK}-tables with bin-by-bin NNLO QCD and/or NLO electroweak $K$-factors. With these ingredients, it is then possible to add the data to an NNPDF fit and gauge its impact by comparing to a baseline from which this same dataset is excluded. If the impact of the dataset on the PDFs is moderate, one can adopt the same hyperparameters as in the baseline reference; however, it is recommended practice to verify the stability of the fit results by means of a dedicated round of hyperoptimisation. Note also that new theory constraints, e.g. those that could be imposed by lattice QCD calculations, can be accounted for in the same manner as the addition of a new dataset.
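To illustrate how this metadata is typically used, the sketch below builds an experimental covariance matrix from statistical and correlated systematic uncertainties in the standard way (a generic textbook construction with illustrative names, not the actual NNPDF implementation):

```python
import numpy as np

def build_covmat(central, stat, sys_add, sys_mult):
    """Toy experimental covariance matrix.

    central : (n,) central values
    stat    : (n,) uncorrelated statistical uncertainties
    sys_add : (n, k1) additive correlated systematics (absolute units)
    sys_mult: (n, k2) multiplicative correlated systematics (fractions)
    """
    # Multiplicative uncertainties (e.g. luminosity) scale with the data.
    beta = np.hstack([sys_add, sys_mult * central[:, None]])
    # cov_ij = delta_ij * stat_i^2 + sum_k beta_ik * beta_jk
    return np.diag(stat**2) + beta @ beta.T

# Two data points sharing a fully correlated 2% luminosity systematic.
central = np.array([100.0, 50.0])
stat = np.array([1.0, 0.5])
sys_add = np.zeros((2, 0))
sys_mult = np.array([[0.02], [0.02]])
cov = build_covmat(central, stat, sys_add, sys_mult)
```

Here the shared multiplicative systematic is what generates the off-diagonal correlation between the two points; the same information also drives the generation of correlated Monte Carlo replicas.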
As a consequence of adding the new dataset to those already packaged within the {\sc\small NNPDF} code, the user gains access to the \texttt{validphys} tools described in Sect.~\ref{sec:analysis}, and hence can easily characterise the goodness of the fit and quantify the agreement with the theory predictions, as well as assess the impact of this new dataset on the PDFs, partonic luminosities, and physical cross-sections. \paragraph{NNPDF fits with different theory settings.} Another foreseeable application of the open-source fitting code is to produce variants of the NNPDF global analyses with modified settings for the theoretical calculations. For instance, in determinations of the strong coupling $\alpha_s(m_Z)$ from collider data, one typically needs a series of PDF fits with a wide range and fine spacing of $\alpha_s$ values. These dedicated fits can be produced with the NNPDF code; in addition, while producing such PDF fits the user can also choose to tune the input dataset, e.g. by excluding specific types of processes, and the theory settings, e.g. by adopting a different variable-flavour-number scheme. As emphasised in~\cite{Forte:2020pyp}, when extracting SM parameters such as $\alpha_s(m_Z)$ from datasets sensitive to PDFs, it is necessary to simultaneously account for the impact of such datasets on the PDFs themselves (and not only on $\alpha_s$) to avoid biasing the determination. Hence, these varying-$\alpha_s$ fits should already include the dataset from which $\alpha_s$ will be extracted, which is only possible thanks to the availability of the NNPDF open-source code. The same caveats apply to determinations of the heavy quark (charm, bottom, and top) masses from collider processes in which PDFs also enter the theory calculations.
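Downstream of such an $\alpha_s$ scan, the extraction itself typically reduces to profiling the total $\chi^2$ as a function of $\alpha_s$. A minimal sketch of this final step (toy numbers, and a plain parabolic fit rather than the correlated-replica procedure of~\cite{Ball:2018iqk}) is:

```python
import numpy as np

# Toy chi^2 values from a hypothetical series of fixed-alpha_s PDF fits.
alphas = np.array([0.116, 0.117, 0.118, 0.119, 0.120])
chi2 = np.array([4505.0, 4496.0, 4493.0, 4496.5, 4506.0])

# Fit chi2(a) ~ c2*a^2 + c1*a + c0 and locate the minimum.
c2, c1, c0 = np.polyfit(alphas, chi2, 2)
best = -c1 / (2.0 * c2)        # best-fit alpha_s at the parabola minimum
sigma = 1.0 / np.sqrt(c2)      # Delta chi^2 = 1 uncertainty

print(f"alpha_s(mZ) = {best:.4f} +/- {sigma:.4f}")
```

The $\Delta\chi^2=1$ criterion follows from $\chi^2(a)\simeq\chi^2_{\rm min}+c_2(a-a_{\rm best})^2$, so the uncertainty is $1/\sqrt{c_2}$.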
Other possible examples of NNPDF fits with varying theory settings are fits with different flavour assumptions, different DGLAP evolution settings, or approximations for unknown higher-order perturbative corrections such as those evaluated from resummation. One may also be interested in tailored PDF sets for specific cross-section calculations, such as the doped PDFs~\cite{Bertone:2015gba}, where the running with the active number of flavours $n_f$ is different for $\alpha_s(Q)$ and for the PDF evolution. In order to run a variant of the NNPDF fit with different theory settings, the user needs to verify whether the corresponding sought-for {\tt theory-id} already exists in the {\tt theory} database. If this is the case, the fit with the new theory settings can easily be produced by adjusting the {\tt theory-id} parameter in the run card. If, however, the {\tt FK}-tables with the required theory settings are not available in the database, the user first needs to produce them using {\tt APFELcomb}. We note that this is a relatively inexpensive step from the computational point of view, provided the corresponding NLO fast grids and the associated $K$-factors have already been produced. The user can follow the instructions in \begin{center} {\tt \url{https://docs.nnpdf.science/tutorials/apfelcomb.html}} \end{center} to produce {\tt FK}-tables with the desired settings and assign them to a new {\tt theory-id} in the theory database. By means of the {\tt validphys} tooling, this new set of {\tt FK}-tables can also be uploaded to the theory server, where it becomes available to other users. \paragraph{Beyond unpolarised collinear PDFs.} The current version of the NNPDF code focuses on unpolarised parton distributions. However, its flexible and modular infrastructure can be extended to the determination of related non-perturbative QCD quantities by means of the same methodology.
While the NNPDF approach has also been used for the determination of polarised PDFs~\cite{Nocera:2014gqa,Ball:2013lla}, fragmentation functions~\cite{Bertone:2017tyb,Bertone:2018ecm}, and nuclear PDFs~\cite{AbdulKhalek:2019mzd,AbdulKhalek:2020yuc}, in all these cases the code infrastructure only partially overlaps with that underlying NNPDF4.0. For instance, the polarised PDF determinations rely on the {\sc\small Fortran} predecessor of the NNPDF code, while the nuclear PDF fits adopt the {\tt FK}-table approach for theoretical calculations but are based on a stand-alone machine learning framework. The availability of the NNPDF framework as open-source code should hence foster progress in its extension to other quantities beyond unpolarised collinear PDFs, as well as to the determination of the collinear PDFs of other hadronic species such as pions or kaons. These studies are especially interesting in light of future experiments focused on probing nucleon, nuclear, and mesonic structure, from the Electron Ion Colliders~\cite{Anderle:2021wcy,AbdulKhalek:2021gbh} to AMBER at the CERN-SPS~\cite{Adams:2018pwt}. A closely related application of the NNPDF fitting code would be the simultaneous determination of non-perturbative QCD quantities exhibiting non-trivial cross-talk, such as nucleon and nuclear PDFs~\cite{Khalek:2021ulf}, (un)polarised PDFs together with fragmentation functions~\cite{Moffat:2021dji}, or collinear and transverse-momentum-dependent PDFs.
Such integrated global PDF determinations have many attractive features: for instance, in the proton global analysis it would no longer be necessary to treat the deuteron and heavy nuclear datasets in a special manner (since the $A$ dependence would be extracted directly from the data), and the interpretation of processes such as semi-inclusive DIS (SIDIS) would not rely on assumptions about the behaviour of either the nucleon PDFs (for the initial state) or the fragmentation functions (for the final state). Clearly, a prerequisite for such integrated fits is the availability of the code infrastructure for the determination of the individual non-perturbative QCD quantities within the public NNPDF framework. \section{Code structure} \label{sec:code} \label{sec:config} The open-source {\tt NNPDF} framework enables global QCD analyses of lepton-proton(nucleus) and proton-(anti)proton scattering data in terms of the NNPDF4.0 methodology described in~\cite{nnpdf40}. The code is publicly available from its {\sc\small GitHub} repository \begin{center} {\tt \url{https://github.com/NNPDF/}} \end{center} and is accompanied by an extensive, continuously updated, online documentation \begin{center} {\tt \url{https://docs.nnpdf.science/}} \end{center} In this section, we describe the structure of the code and present a high-level description of its functionalities. We invite the reader to consult the documentation for details on its usage.\\[-0.3cm] \begin{figure} \centering \includegraphics[scale=.5]{diagram.png} \caption{\label{fig:wf} Schematic of the NNPDF code. % The three main inputs are the theoretical calculations, encoded in terms of the precomputed {\tt FK}-tables, the methodological settings as determined by the hyperopt procedure, and the experimental data in the common {\tt buildmaster} format. % The PDFs are fitted using {\tt n3fit}, and following a postfit selection the outcome is stored in the {\sc\small LHAPDF} grid format.
% Finally, a thorough characterisation of the results is carried out by the {\tt validphys} framework. } \end{figure} The workflow of the {\sc\small NNPDF} code is illustrated in Fig.~\ref{fig:wf}. The {\sc\small NNPDF} code is composed of the following main packages: \begin{description} \item[\textbf{The {\tt buildmaster} experimental data formatter}] A {\tt C++} code which transforms the original measurements provided by the experimental collaborations, e.g. via {\sc\small HepData}~\cite{Maguire:2017ypu}, into a standard format tailored for PDF fitting. % In particular, the code allows for flexible handling of experimental systematic uncertainties, supporting different treatments of the correlated systematic uncertainties~\cite{Ball:2009qv,Ball:2012wy}. \vspace*{0.3cm} \item[\textbf{The {\tt APFELcomb} interpolation table generator}] This code takes hard-scattering partonic matrix element interpolators from {\tt APPLgrid}~\cite{Carli:2010rw} and {\tt FastNLO}~\cite{Wobisch:2011ij} (for hadronic processes) and {\tt APFEL}~\cite{Bertone:2013vaa} (for DIS structure functions) and combines them with the QCD evolution kernels provided by {\tt APFEL} to construct the fast interpolation grids called {\tt FK}-tables~\cite{Bertone:2016lga}. In this way, physical observables can be evaluated in a highly efficient manner as a tensor sum of {\tt FK}-tables with a grid of PDFs at an initial parametrisation scale $Q_0$. {\tt APFELcomb} also handles NNLO QCD and/or NLO electroweak $K$-factors when needed.\\ Theory predictions can be generated with a variety of configurable options, such as the perturbative order (currently up to NNLO), the values of the heavy quark masses, the electroweak parameters, the maximum number of active flavours, and the variable-flavour-number scheme used to account for the effects of the heavy quark masses in the DIS structure functions.
The {\tt FK}-tables resulting from each choice are associated with a database entry through a theory ID, which allows one to quickly identify them. \vspace*{0.3cm} \item[\textbf{The {\tt n3fit} fitting code}] This code implements the fitting methodology described in~\cite{refId0,nnpdf40} using the {\tt TensorFlow} framework~\cite{abadi2016tensorflow}. The \texttt{n3fit} library allows for a flexible specification of the neural network model adopted to parametrise the PDFs, whose settings can be selected automatically via the built-in hyperoptimisation tooling~\cite{2015CS&D....8a4008B}. These include the neural network type and architecture, the activation functions, and the initialisation strategy; the choice of optimiser and of its corresponding parameters; and hyperparameters related to the implementation in the fit of theoretical constraints such as PDF positivity~\cite{Candido:2020yat} and integrability. The settings for a PDF fit are input via a declarative run card. Using these settings, {\tt n3fit} finds the values of the neural network parameters, corresponding to the PDF at the initial scale, which best describe the input data. Following a post-fit selection and PDF evolution step, the final output consists of an LHAPDF grid corresponding to the best-fit PDF as well as metadata on the fit performance. \vspace*{0.3cm} \item[\textbf{The {\tt libnnpdf} {\tt C++} legacy code}] A {\tt C++} library which contains common data structures together with the fitting code used to produce the NNPDF3.0 and NNPDF3.1 analyses~\cite{Ball:2014uwa,Ball:2016neh,Ball:2017nwa,Bertone:2017bmex,Ball:2017otu}. % The availability of {\tt libnnpdf} guarantees strict backwards compatibility of the NNPDF framework and the ability to benchmark the current methodology against the previous one. % To facilitate the interaction between the NNPDF {\tt C++} and {\tt Python} codebases, we have developed {\tt Python} wrappers using the {\tt SWIG}~\cite{10.5555/1267498.1267513} library.
\vspace*{0.3cm} \item[\textbf{The {\tt validphys} analysis framework}] A package for analysing and plotting data related to the NNPDF fit structures, with I/O capabilities to the other elements of the code base. The {\tt validphys} framework is discussed in detail in Sect.~\ref{sec:analysis}. \end{description} Complementing these main components, the {\sc\small NNPDF} framework also contains a number of additional, ever-evolving tools which are described in the online documentation. \paragraph{Development workflow.} The {\sc\small NNPDF} code adopts a development workflow compliant with best practices in professionally developed software projects. Specifically, every code modification undergoes code review and is subjected to a suite of automated continuous-integration tests. Moreover, before merging into the main release branch, all relevant documentation is added alongside any new tests that may be relevant to the incoming feature. This ensures that broad code coverage within the test suite is maintained. \paragraph{Installation.} The various software packages that compose the NNPDF fitting code can be installed via the binary packages provided by the {\tt conda} interface, as described in \begin{center} {\tt \url{https://docs.nnpdf.science/get-started/installation.html} } \end{center} The binary distribution allows users to easily install the entire code suite alongside all relevant dependencies within an isolated environment, which is moreover compatible with the one tested automatically. Consequently, PDF fits can be produced with a known, fixed version of the code and all its dependencies, regardless of the machine where it is run, hence ensuring the reproducibility of the results. For the purposes of code development, it is also possible to set up an environment where the dependencies are the same but the code can be edited, allowing users to contribute to the open-source framework.
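The {\tt FK}-table mechanism described above amounts, for a DIS-like observable, to a single tensor contraction of precomputed coefficients with the PDF values on a (flavour, $x$) grid at the scale $Q_0$; hadronic observables involve two PDFs and a double contraction. A schematic NumPy illustration (toy shapes and random values, not the actual NNPDF data layout) is:

```python
import numpy as np

rng = np.random.default_rng(0)

n_dat, n_flav, n_x = 3, 14, 50            # toy dimensions
fk = rng.random((n_dat, n_flav, n_x))     # precomputed "FK-table"
pdf_q0 = rng.random((n_flav, n_x))        # PDF values on the x-grid at Q0

# Prediction for each data point d: O_d = sum_{a,i} FK[d,a,i] * f_a(x_i, Q0).
# All DGLAP evolution is pre-folded into the FK-table, so no evolution
# needs to be performed at fit time.
predictions = np.einsum("dai,ai->d", fk, pdf_q0)
```

Because the predictions are linear (or bilinear) in the PDF grid, they map naturally onto the batched tensor operations that the {\tt TensorFlow}-based {\tt n3fit} optimiser evaluates at every training step.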
\paragraph{Input configuration.} The settings that define the outcome of an NNPDF fit are specified by means of a run card written in {\tt YAML}, a common human-readable data-serialisation language. The main elements of a fit run card are: \begin{description} \item[\textbf{Input dataset:}] for each dataset, the user has to specify the NNPDF-internal string associated with it, the fraction of the data that goes into the training and validation subsets, and the inclusion of $K$-factors in the corresponding theoretical predictions. % The latter are assigned different naming conventions depending on their nature: NNLO QCD, NLO electroweak, heavy-quark mass corrections for neutrino DIS~\cite{Gao:2017kkx}, or overall normalisation rescaling. % Correlations between systematic uncertainties common to different datasets are automatically taken into account. \vspace*{0.2cm} \item[\textbf{Kinematical cuts:}] a declarative format that specifies the cuts applied to the experimental data, based on the kinematics of each data point and depending on the corresponding theory settings. The cuts can be based on simple relations between the kinematics of each data point, such as the usual $Q^2_{\rm min}$ and $W^2_{\rm min}$ cuts applied to DIS structure functions, on some derived quantity such as the value of the lepton charge asymmetry in $W$ decay data, or on more complex conditions such as retaining only points where the relative difference between NLO and NNLO predictions is below some threshold. % These kinematical cut configurations can either be specified directly in the run card or via the built-in defaults, and can be applied to individual datasets or to entire types of processes.
% \vspace*{0.2cm} \item[\textbf{Theory settings:}] the settings for the theory predictions to be used in the fit, such as the perturbative order and the values of the coupling constants and of the quark masses, are specified via an entry in the theory database, which in turn selects the set of {\tt FK}-tables to be used during the fit. % A wide range of {\tt FK}-tables for the most commonly used theory settings is already available and can be installed using the NNPDF code, while tables corresponding to different settings can also be assembled by the user whenever required. The settings for the available entries of the theory database are specified in the online documentation. \vspace*{0.2cm} \item[\textbf{Fitting strategy and hyperparameters:}] the user can specify via the run card a number of methodological settings that affect the optimisation, such as the minimisation algorithm with its corresponding parameters, the maximum training length, the neural network architecture and activation functions, and the choice of PDF fitting basis (e.g. the evolution or the flavour basis). % These methodological settings can either be set by hand or taken from the result of a previous {\tt hyperopt} run. Furthermore, random seeds can be configured to achieve different levels of correlation between Monte Carlo replicas across fits, as required e.g. for the correlated replica method used in the $\alpha_s(m_Z)$ extraction of~\cite{Ball:2018iqk}. % The user can additionally decide whether or not to save the weights of the neural networks during the fit, and whether to fit the Monte Carlo replicas or instead the central values of the experimental data. % Another choice accessible via the run card is whether to use real data or instead fit pseudo-data generated from a known underlying PDF, as required during a closure test~\cite{Ball:2014uwa, closure40}.
\vspace*{0.2cm} \item[\textbf{PDF positivity and integrability:}] as described in~\cite{nnpdf40}, in the NNPDF4.0 determination one imposes theoretical requirements on the positivity and integrability of the fitted PDFs by means of the Lagrange multiplier method. % The user can then decide via the run card whether to impose these constraints on the PDFs fully, partially, or not at all, and if so define the initial values of the Lagrange multiplier weights. % Note that some of the parameters governing the implementation of these theory requirements can also be adjusted by means of the hyperoptimisation procedure. \vspace*{0.2cm} \item[\textbf{Weighted fits:}] % the user can choose to give additional weight to specific datasets when computing the total $\chi^2$. % This feature can be useful to investigate in more detail the relative impact that such datasets have in the global fit, and to explore possible tensions with other datasets or groups of processes following the strategy laid out in~\cite{nnpdf40}. \end{description} The run cards required for producing the main NNPDF4.0 fits are stored under \begin{center} {\tt \url{https://github.com/NNPDF/nnpdf/tree/master/n3fit/runcards/reproduce_nnpdf40/}} \end{center} These enable users to readily reproduce the results, and also to generate variations of the dataset selection, methodology or theory choices by suitably tweaking a run card. \paragraph{Performance.} One of the main advantages introduced by the new methodology underlying NNPDF4.0 in comparison to its predecessors based on genetic algorithms is the significant fitting speed-up achieved. As an illustration of this improvement in performance, we note that the NNPDF4.0 NNLO global fit takes less than 6 hours per replica on a single CPU core, as compared to \(\simeq 36\) hours using the NNPDF3.1-like methodology. 
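As an aside, the Lagrange-multiplier positivity option listed among the run-card settings above can be sketched as a one-sided penalty added to the fit loss. The following is a minimal, purely illustrative sketch (hypothetical names and a simplified penalty form, not the actual NNPDF implementation):

```python
import numpy as np

# Minimal sketch of a Lagrange-multiplier positivity penalty: PDF values
# sampled at a set of test points are penalised when negative, with a
# multiplier weight that, in the run card, would be user-configurable.
# Names and the penalty form are illustrative, not the NNPDF implementation.

def positivity_penalty(pdf_values, multiplier=1e3):
    """One-sided penalty: only negative PDF values contribute."""
    negative = np.minimum(pdf_values, 0.0)
    return multiplier * np.sum(-negative)

def total_loss(chi2, pdf_values, multiplier=1e3):
    """Fit loss = data chi2 plus the positivity penalty."""
    return chi2 + positivity_penalty(pdf_values, multiplier)

pdf_values = np.array([0.2, 0.05, -0.01, 0.3])  # one negative test point
print(total_loss(chi2=10.0, pdf_values=pdf_values, multiplier=100.0))
```

In an actual fit the multiplier would typically be increased over the training so that the constraint becomes progressively harder, while setting it to zero disables the constraint altogether, mirroring the "fully, partially, or not at all" choice exposed in the run card.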
This significant reduction of the CPU footprint of the global PDF fits leads to a faster production rate of fit variants, and also enables the prototyping of new approaches to PDF fitting based on deep learning. Indeed, techniques such as hyperoptimisation were previously impractical, but with the improved computational performance of the NNPDF code they can now be used routinely in the fits. % Furthermore, with the use of \texttt{TensorFlow} in the fitting toolkit, the ability to conveniently perform fits on the Graphics Processing Unit (GPU) might allow for further improvements in performance, as suggested by the study in Ref.~\cite{Carrazza:2020qwu}. Such an implementation in the main {\tt NNPDF} code is reserved for a future release. \section{Conclusions} \label{sec:conclusions} In this work we have presented the public release, as an open-source code, of the software framework underlying the recent NNPDF4.0 global determination of parton distributions. The flexible and robust NNPDF code exploits state-of-the-art developments in machine learning to realise a comprehensive determination of the proton structure from a wealth of experimental data. The availability of this framework as open source should encourage the broader high-energy and nuclear physics communities to deploy machine learning methods in the context of PDF studies. Among the wide range of possible use cases of the NNPDF code, one can list assessing the impact of new data, producing tailored fits with variations of SM parameters such as $\alpha_s(m_Z)$ or $m_c$ for their simultaneous extraction together with the PDFs, and studying the possible presence of beyond-the-SM physics in precision LHC measurements of the high-$p_T$ tails of kinematic distributions using effective field theories. 
Furthermore, the determination of related non-perturbative QCD quantities, from nuclear PDFs and polarised PDFs to fragmentation functions, represents another potential application of the NNPDF framework. In order to facilitate these various applications, the NNPDF codebase is now almost entirely written in {\sc\small Python}, currently the \textit{de facto} standard choice of programming language in both the data science and scientific communities. Since the majority of the underlying libraries are highly efficient wrappers around faster compiled languages, {\sc\small Python} no longer imposes a performance bottleneck, and its relatively low barrier to entry should make the NNPDF code straightforward to modify and extend. With this motivation, we have discussed how the user may configure a run card for their PDF fit, detailed the parameters that are exposed to the user, and presented the \texttt{validphys} library, an in-house analysis suite designed not only to be reproducible, but also to allow complex tasks to be achieved using transparent run-card-based inputs. We reiterate that we have restricted ourselves to a succinct high-level summary of the main functionalities of the {\sc\small NNPDF} code. The main reference for the interested user is the online documentation which accompanies this release, featuring technical commentary as well as example use cases. The documentation is kept continuously up-to-date following the ongoing development of the code. \section*{Acknowledgements} S.~C., S.~F., J.~C.-M., R.~S. and C.~S. are supported by the European Research Council under the European Union's Horizon 2020 research and innovation Programme (grant agreement n.740006). M.~U. and Z.~K. are supported by the European Research Council under the European Union's Horizon 2020 research and innovation Programme (grant agreement n.950246). M.~U. and S.~I. are partially supported by the Royal Society grant RGF/EA/180148. The work of M.~U. 
is also funded by the Royal Society grant DH150088. The work of M.~U., S.~I., C.~V. and Z.~K. is partially supported by the STFC consolidated grant ST/L000385/1. The work of Z.~K. was partly supported by the European Research Council Consolidator Grant ``NNLOforLHC2''. J.~R. is partially supported by NWO, the Dutch Research Council. C.~V. is supported by the STFC grant ST/R504671/1. T.~G. is supported by The Scottish Funding Council, grant H14027. R.~L.~P. and M.~W. are supported by the STFC grant ST/R504737/1. R.~D.~B., L.~D.~D. and E.~R.~N. are supported by the STFC grant ST/P000630/1. E.~R.~N. was also supported by the European Commission through the Marie Sk{\l}odowska-Curie Action ParDHonS (grant number 752748). \section{Introduction} \label{sec:introduction} The success of the ambitious programme of the upcoming Run III at the LHC and its subsequent High-Luminosity upgrade~\cite{Cepeda:2019klc,Azzi:2019yne} relies on achieving the highest possible accuracy not only in the experimental measurements but also in the corresponding theoretical predictions. A key component of the latter are the parton distribution functions (PDFs), which parametrize the quark and gluon substructure of the colliding protons~\cite{Gao:2017yyd,Ethier:2020way}. PDFs are dictated by non-perturbative QCD dynamics and hence must be phenomenologically extracted by matching a wide range of experimental data with the corresponding theoretical predictions. The determination of PDFs and their uncertainties requires a robust statistical framework which minimises unnecessary assumptions while implementing known theoretical constraints such as QCD evolution, sum rules, positivity, and integrability. Recently, a new family of global PDF analyses has been presented by the NNPDF Collaboration: NNPDF4.0~\cite{nnpdf40}. 
This updated PDF determination framework supersedes its predecessor NNPDF3.1~\cite{Ball:2017nwa} by improving on all relevant aspects, from the experimental input and theoretical constraints to the optimisation methodology and the validation of results. As with previous NNPDF releases, the NNPDF4.0 PDFs are made publicly available via the standard {\sc\small LHAPDF} interface~\cite{Buckley:2014ana}. However, until now only the outcome of the NNPDF fits (the {\sc\small LHAPDF} interpolation grid files) was released, while the code itself remained private. This situation implied that the only option to obtain tailored variants of the NNPDF analyses was to request them from the developers, and further that results were not reproducible by external parties. Another limitation of private PDF codes is that benchmarking studies, such as those carried out by the PDF4LHC working group~\cite{Butterworth:2015oua,Rojo:2015acz}, become more convoluted due to the challenge of disentangling the various components that determine the final outcome. Motivated by this state of affairs, as well as by the principles of Open and FAIR~\cite{FAIR} (findable, accessible, interoperable and reusable) Science, in this work we describe the public release of the complete software framework~\cite{nnpdfcode} underlying the NNPDF4.0 global determination, together with user-friendly examples and extensive documentation. In addition to the fitting code itself, this release includes the original and filtered experimental data, the fast NLO interpolation grids relevant for the computation of hadronic observables, and, whenever available, the bin-by-bin next-to-next-to-leading order (NNLO) QCD and next-to-leading order (NLO) electroweak $K$-factors for all processes entering the fit. Furthermore, the code comes accompanied by a battery of plotting, statistical, and diagnosis tools providing the user with an extensive characterisation of the PDF fit output. 
The availability of the NNPDF open-source code, along with its detailed online documentation, will enable users to perform new PDF analyses based on the NNPDF methodology and modifications thereof. Some examples of potential applications include assessing the impact of new measurements in the global fit; producing variants based on reduced datasets; carrying out PDF determinations with different theory settings, e.g.\ as required for studies of $\alpha_s$ or heavy quark mass sensitivity, or with different electroweak parameters; estimating the impact on the PDFs of theoretical constraints and calculations, e.g.\ from non-perturbative QCD models~\cite{Ball:2016spl} or lattice calculations~\cite{Lin:2017snn,Cichy:2019ebf}; and quantifying the role of theoretical uncertainties, from missing higher orders to nuclear effects. One could also deploy the NNPDF code as a toolbox to pin down the possible effects of beyond the Standard Model physics at the LHC, such as Effective Field Theory corrections in high-$p_T$ tails~\cite{Carrazza:2019sec, Greljo:2021kvv} or modified DGLAP evolution from new BSM light degrees of freedom~\cite{Berger:2010rj}. Furthermore, while the current version of the NNPDF code focuses on unpolarised parton distributions, its modular and flexible infrastructure makes it amenable to the determination of closely related non-perturbative collinear QCD quantities such as polarised PDFs, nuclear PDFs, fragmentation functions, or even the parton distributions of mesons like pions and kaons~\cite{Adams:2018pwt}. It should be noted that some of the functionalities described above are already available within the open-source QCD fit framework {\tt xFitter}~\cite{Alekhin:2014irh,Zenaiev:2016jnq}. 
The NNPDF code offers complementary functionalities as compared to those of {\tt xFitter}, in particular by means of state-of-the-art machine learning tools for the PDF parametrisation, robust methods for uncertainty estimation and propagation, a wider experimental dataset, an extensive suite of statistical validation and plotting tools, the possibility to account for generic theoretical uncertainties, and an excellent computational performance which makes possible full-fledged global PDF fits in less than one hour. The main goal of this paper is to summarise the key features of the NNPDF code and to point the interested reader to the online documentation, in which the code is presented in detail and which, importantly, is kept up-to-date as the code continues to be developed and improved. First, in Sect.~\ref{sec:code} we describe the structure of the code and its main functionalities, including the relevant options. The framework used to analyse the outcome of a PDF fit is described in Sect.~\ref{sec:analysis}, while in Sect.~\ref{sec:applications} we describe a few examples of possible applications of the code. We conclude and summarise some possible directions of future development in Sect.~\ref{sec:conclusions}. \section{The NNPDF analysis code: {\tt validphys}} \label{sec:analysis} The {\tt validphys} toolkit is at the heart of the NNPDF code base, bridging together the other components and providing basic data structures, compatibility interfaces, I/O operations and algorithms. These are used to assemble a suite of statistical analysis and plotting tools. We describe it here, and refer the reader to the publications mentioned in the code structure description of Sec.~\ref{sec:code}, as well as to the online documentation of the NNPDF framework, for further details on the other parts of the code. 
The {\tt validphys} code is in turn built on top of {\tt reportengine}~\cite{zahari_kassabov_2019_2571601}, a data analysis framework which seeks to achieve the following goals: \begin{itemize} \item To aid structuring data science code bases so as to make them understandable and to lower the entry barrier for new users and developers. % \item To provide a declarative interface that allows the user to specify the required analysis by providing a minimal amount of information in the form of a run card, making the analysis reproducible given said run card. % \item To provide a robust environment for the execution of data analysis pipelines, including robust error checking, automatic documentation, command-line tools and interactive applications. \end{itemize} The key observation underpinning the design of {\tt reportengine} is that most programming tasks in data science correspond to codes that are fully deterministic given their input. Every such program can be seen as a directed acyclic graph (DAG), see for example the one shown in Fig.~\ref{fig:graph}, with links representing the dependencies between a given step of the computation and the subsequent ones. Specifically, the nodes in such a graph (resources) correspond to the results of executing functions (providers), which are usually implemented as functions in the {\sc\small Python} programming language. These functions are required to be pure, that is, their outputs are deterministic functions of the inputs and they produce no side effects that alter the state of the program.\footnote{Note that the concept of pure function is used here somewhat more loosely than in programming languages such as {\tt Haskell}~\cite{mena2019practical}, since side effects such as logging information or writing files to disk are allowed as long as they are idempotent.} These side effects are typically managed by the {\tt reportengine} framework itself, with tools to, for example, save image files to a suitably unique filesystem location. 
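The provider mechanism can be illustrated with a toy resolver that, like {\tt reportengine}, deduces the dependency graph of pure functions from their parameter names. All names here are hypothetical; this is a sketch of the idea, not the actual {\tt reportengine} API:

```python
import inspect

# Toy providers: pure functions whose parameter names refer either to
# run-card inputs or to the outputs of other providers.

def covariance_matrix(dataset):
    # stand-in for the real computation: a unit covariance matrix
    return [[1.0 if i == j else 0.0 for j in range(len(dataset))]
            for i in range(len(dataset))]

def chi2(dataset, covariance_matrix):
    # toy chi2 against zero data; with unit covariance this is a sum of
    # squared residuals (the matrix argument only exercises the resolver)
    return sum(d * d for d in dataset)

def resolve(target, providers, inputs, cache=None):
    """Recursively build and evaluate the dependency graph for `target`."""
    cache = {} if cache is None else cache
    if target in cache:
        return cache[target]
    if target in inputs:  # leaf node: value taken directly from the run card
        return inputs[target]
    func = providers[target]
    kwargs = {name: resolve(name, providers, inputs, cache)
              for name in inspect.signature(func).parameters}
    cache[target] = func(**kwargs)
    return cache[target]

providers = {"covariance_matrix": covariance_matrix, "chi2": chi2}
inputs = {"dataset": [0.5, -1.0, 2.0]}
print(resolve("chi2", providers, inputs))  # -> 5.25
```

In {\tt reportengine} the same signature introspection additionally drives input validation and automatic documentation, and the resolved graph is only executed after it has been fully constructed and checked.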
The goal of simplifying the programming structure is then achieved by decomposing the program in terms of pure functions. Code developers are required to reason about the inputs of each individual function as well as its code, but not about any global state of the program or the order of execution, with the problem of putting the program together being delegated to the framework. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{graph.pdf} \caption{\label{fig:graph} Directed acyclic graph corresponding to the run card provided in Fig.~\ref{fig:runcard}. The graph shows the inputs extracted from the run card, such as {\tt pdf} (the PDF set) and the {\tt dataset}, the intermediate steps required for the $\chi^2$ computation (such as evaluating the {\tt covariance\_matrix}), and the final target requested by the user, in this case a training {\tt report} containing a histogram and a table with the $\chi^2$ values obtained for this dataset for the indicated input PDF and choice of theory settings.} \end{figure} The {\tt reportengine} framework has extensive facilities for automatically building the computation graph from the provided input. Users are only required to specify the ultimate target of the analysis (such as a figure, table, or report), with the intermediate steps being deduced thanks to a set of conventions in the program structure and a collection of utilities provided by the framework (for example, tools to implement the map-reduce pattern). This allows complex analyses to be specified by purely declarative run cards without the need to write custom code for each of them. In turn, the run cards allow any user to precisely reproduce the results based on them and the corresponding version of the code. A simple {\tt validphys} run card, illustrating a minimal analysis of a dataset, is shown in Fig.~\ref{fig:runcard}, with the DAG it spawns in Fig.~\ref{fig:graph}. 
As an example of the meta-programming features of {\tt reportengine}, the {\tt template\_text} input in the runcard displayed in Fig.~\ref{fig:runcard} illustrates how it is possible to spawn arbitrary other actions, with their corresponding dependencies, based on the user input as shown in Fig.~\ref{fig:graph}. The framework allows the implementation of similarly complex workflows through its programming interface. Users are referred to the online documentation for further details, code references, and specific examples. \begin{figure} \centering \begin{minipage}{0.5\textwidth} \begin{framed} \begin{large} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{dataset\_input}\KeywordTok{:} \AttributeTok{ }\FunctionTok{dataset}\KeywordTok{:}\AttributeTok{ ATLAS\_WP\_JET\_8TEV\_PT} \AttributeTok{ }\FunctionTok{cfac}\KeywordTok{:}\AttributeTok{ }\KeywordTok{[}\AttributeTok{QCD}\KeywordTok{]} \FunctionTok{theoryid}\KeywordTok{:}\AttributeTok{ }\DecValTok{200} \FunctionTok{use\_cuts}\KeywordTok{:}\AttributeTok{ }\StringTok{"nocuts"} \FunctionTok{pdf}\KeywordTok{:}\AttributeTok{ NNPDF31\_nnlo\_as\_0118} \FunctionTok{template\_text}\KeywordTok{: }\CharTok{|} \NormalTok{ \# Histogram} \NormalTok{ \{@plot\_chi2dist@\}} \NormalTok{ \# Table} \NormalTok{ \{@dataset\_chi2\_table@\}} \FunctionTok{actions\_}\KeywordTok{:} \AttributeTok{ }\KeywordTok{{-}}\AttributeTok{ report(main=True)} \end{Highlighting} \end{Shaded} \end{large} \end{framed} \end{minipage} \caption{\label{fig:runcard} A {\tt validphys} runcard which produces a report containing a table and a histogram with the $\chi^2$ values obtained for the ATLAS $W^++{\rm jets}$ 8 TeV differential distributions when using the $N_{\rm rep}=100$ replicas of NNPDF3.1 NNLO as input PDF set and the theory settings specified by the entry {\tt theoryid: 200} of the theory database. 
% In particular, the runcard specifies the string for the {\tt dataset}, the use of QCD $K$-factors, and the requirement that no kinematic cuts should be applied to the input dataset. Possible input options are referenced in Sec.~\ref{sec:config}. % The DAG corresponding to the execution of this runcard is represented in Fig.~\ref{fig:graph}. } \end{figure} The introspection capabilities of {\tt reportengine} enable it to provide a robust and convenient environment for carrying out analyses. Most notably, they enable specifying powerful checks on the user input. Basic constraints are implemented by instrumenting type annotations of {\sc\small Python} functions, which are used to verify that data types in the run cards match those expected by the code, but in addition arbitrary checks can also be attached to both input values and provider functions. This is commonly known as contract programming, but differs from many implementations in that checks are executed at the time the DAG is being built instead of when functions are executed. Thus, the DAG construction phase can be seen as a compilation phase, where developers have the ability to write arbitrary compiler checks. This feature eliminates large classes of runtime errors, thereby increasing the chances that the analysis runs to completion once the DAG has been constructed and checked. Another introspection feature is the capability of tracing the required inputs for a given provider and displaying them as automatically generated command-line documentation. As an implementation of the {\tt reportengine} framework, the {\tt validphys} code features a workflow focused on declarative and reproducible run cards. 
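The build-time checking described above can be sketched as follows: type annotations and attached checks are verified while the graph is assembled, so errors surface before any provider runs. All names below are hypothetical; this is a simplified sketch, not the actual {\tt reportengine} machinery:

```python
import inspect

# Sketch of reportengine-style "compile-time" checks: annotations and
# custom checks attached to a provider are verified when the graph is
# built, not when the function finally runs. Names are illustrative.

def make_check(condition, message):
    def check(inputs):
        if not condition(inputs):
            raise ValueError(message)
    return check

def checked(*checks):
    def decorate(func):
        func._checks = checks  # checks travel with the provider
        return func
    return decorate

@checked(make_check(lambda ns: ns["nreplicas"] > 0,
                    "nreplicas must be positive"))
def replica_histogram(nreplicas: int, pdf: str):
    return f"histogram of {nreplicas} replicas for {pdf}"

def build_node(func, runcard):
    """Validate types and custom checks before scheduling execution."""
    sig = inspect.signature(func)
    for name, par in sig.parameters.items():
        value = runcard[name]
        if par.annotation is not inspect.Parameter.empty:
            if not isinstance(value, par.annotation):
                raise TypeError(f"{name} must be {par.annotation.__name__}")
    for check in getattr(func, "_checks", ()):
        check(runcard)
    return lambda: func(**{n: runcard[n] for n in sig.parameters})

node = build_node(replica_histogram,
                  {"nreplicas": 100, "pdf": "NNPDF31_nnlo_as_0118"})
print(node())  # any input errors would have surfaced before this call
```

The design choice mirrored here is that an ill-formed run card fails during the "compilation" of the graph, rather than hours into an analysis.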
The code relies on common {\tt Python} data science libraries such as {\tt NumPy}~\cite{harris2020array}, {\tt SciPy}~\cite{2020SciPy-NMeth}, {\tt Matplotlib}~\cite{4160265} and {\tt Pandas}~\cite{mckinney-proc-scipy-2010}, uses {\tt Pandoc}~\cite{macfarlane2013pandoc} for report generation, and implements data structures that can interact with those of {\tt libnnpdf}, as well as with analogues written in pure {\sc\small Python}. These include NNPDF fits, {\tt LHAPDF} grids, and {\tt FK}-tables. In addition, the code allows the user to quickly acquire relevant data and theory inputs by automatically downloading them from remote sources whenever they are required in a runcard. It also contains tooling to upload analysis results to an online server, to share them with other users or developers, and to allow them to be reproduced by other parties. Some common data analysis actions that can be realised within the {\tt validphys} framework include: \begin{itemize} \item Evaluating the convolutions between {\tt FK}-tables and PDF sets, computing in a highly efficient manner the theoretical predictions for the cross-sections of the implemented datasets and theory settings. % Note that here any input PDF set can be used, not only NNPDF sets. \item Producing data versus theory comparison plots, allowing for the graphical visualisation of the wealth of experimental measurements implemented in the NNPDF framework matched against the theory predictions. % Again, predictions for arbitrary PDF sets can be used as input. \item Computing statistical estimators based on such data versus theory comparisons, such as the various types of $\chi^2$~\cite{Ball:2009qv}, together with many plotting and grouping options. \item A large variety of plotting tools and options for the PDFs and partonic luminosities, including plots in arbitrary PDF bases. % Some of these functionalities are related to those provided by the {\tt APFEL-Web} online PDF plotter~\cite{Carrazza:2014gfa}. 
\item Manipulating LHAPDF grids, implementing operations such as Hessian conversions~\cite{Carrazza:2015aoa,Carrazza:2016htc}. \end{itemize} \begin{figure} \centering \includegraphics[width=.85\textwidth]{vpreport.png} \caption{\label{fig:report} The output of executing the runcard in Fig.~\ref{fig:runcard} with {\tt validphys} is an HTML report consisting of a histogram and the corresponding table indicating the distribution of $\chi^2$ values over the $N_{\rm rep}=100$ replicas of NNPDF3.1 NNLO for the ATLAS $W^++{\rm jets}$ 8 TeV differential distributions~\cite{Aaboud:2017soa} and the {\tt theoryid:200} theory settings.} \end{figure} The typical output of {\tt validphys} is an HTML report containing the results requested by the user via the runcard. Fig.~\ref{fig:report} displays the report obtained after executing the runcard in Fig.~\ref{fig:runcard}, consisting of a histogram displaying the distribution of $\chi^2$ values for the $N_{\rm rep}=100$ replicas of the NNPDF3.1 NNLO set when its predictions, based on the {\tt theoryid:200} theory settings, are compared to the ATLAS $W^++{\rm jets}$ 8 TeV differential distributions. In order to highlight the potential of {\tt validphys}, we have collected at this link \begin{center} {\tt \url{https://data.nnpdf.science/nnpdf40-reports/}} \end{center} representative training reports corresponding to the NNPDF4.0 analysis, such as comparisons between fits at different perturbative orders and between fits based on different datasets. Additional features of the current release of the {\tt validphys} framework include tools that make possible: \begin{itemize} \item Comparing two PDF fits by means of the {\tt vp-comparefits} tool, which generates a report composed of almost 2000 figures and 12 tables, displaying fit quality estimators, PDF comparisons, data-theory comparisons and positivity observables. \item Carrying out and characterising closure tests~\cite{closure40} and future tests~\cite{Cruz-Martinez:2021rgy}. 
\item Performing simultaneous fits of the PDFs together with the strong coupling constant~\cite{Ball:2018iqk}. \item Evaluating the theory covariance matrix constructed from scale variations, which can then be used as input for PDF fits accounting for missing higher order uncertainties (MHOUs) following the strategy of~\cite{AbdulKhalek:2019ihb,AbdulKhalek:2019bux}. \item Studying Hessian PDF tolerances. \item Determining Wilson coefficients in the Effective Field Theory (EFT) framework together with PDFs following the strategy presented in~\cite{Carrazza:2019sec, Greljo:2021kvv}. \item Analysing theoretical predictions with matched scale variations. \end{itemize} In conclusion, it is worth emphasising that many of the {\tt validphys} features described here can be deployed outside the framework of the NNPDF fits. For instance, the tooling to evaluate the theory covariance matrix could also be relevant in the context of Hessian PDF fits, and comparisons between PDF sets can be carried out for fits beyond NNPDF, provided one is careful to adopt consistent theoretical settings for each of the inputs.
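As a closing illustration, the {\tt FK}-table convolutions underlying the data--theory comparison tools described in this section amount to a tensor contraction of precomputed coefficients with the PDF sampled on a fixed (flavour, $x$) grid. The following is a schematic sketch with illustrative shapes and random values, not the actual NNPDF data structures:

```python
import numpy as np

# Schematic FK-table convolution: for each data point d,
#   T_d = sum_{i,a} FK[d, i, a] * f_i(x_a),
# where i runs over parton flavours and a over the x-grid points.
# Shapes and values below are purely illustrative.

ndata, nflav, nx = 2, 3, 4
rng = np.random.default_rng(0)
fk_table = rng.random((ndata, nflav, nx))  # precomputed hard-scattering weights
pdf_grid = rng.random((nflav, nx))         # PDF values on the (flavour, x) grid

predictions = np.einsum("dia,ia->d", fk_table, pdf_grid)
print(predictions.shape)  # (2,)
```

Because the hard-scattering weights are precomputed once per theory setting, evaluating predictions for a new PDF set reduces to this linear contraction, which is what makes the data versus theory comparisons listed above fast enough to run over the full global dataset.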