Conclusion
Concluding the research
In the previous chapters I first explained photon bunching and photon statistics qualitatively and then explored their impact on the DESHIMA spectrometer quantitatively. I presented a model for the probability theory behind photon bunching, which shows that it is not the detection itself that triggers bunching, but rather a change in the underlying photon probability distribution. I also discussed the coherence time, defined as the timescale below which the non-stochastic effects of photon bunching take hold. The coherence time is related to the bandwidth by
$$\begin{equation} \tau_\mathrm{coh}\approx\frac{1}{\Delta\nu} \end{equation}$$After exploring photon statistics, I discussed photon noise induced by both Poisson statistics and photon bunching. The total noise equivalent power induced by photon noise is given by[1]:
$$\begin{equation} \mathrm{NEP}_{\tau,\mathrm{ph}}^2=\frac{1}{\tau}\int_0^\infty \left[h\nu\,\eta\left(\nu\right)\mathrm{PSD}\left(\nu\right) + \eta^2\left(\nu\right)\mathrm{PSD}^2\left(\nu\right)\right]d\nu\label{NEP_int} \end{equation}$$The approximation given by [1], which was used previously in calculating the sensitivity of the DESHIMA system [2], is obtained by assuming a very narrow bandwidth ($\nu\gg\Delta\nu$) and approximating the integral by
$$\begin{equation} \mathrm{NEP}_{\tau,\mathrm{ph}}^2=\frac{1}{\tau}\left(h\nu\eta_0\mathrm{PSD}\Delta\nu+\eta_0^2\mathrm{PSD}^2\Delta\nu\right) \end{equation}$$This approximation overestimates the photon bunching effects for a filter $\eta\left(\nu\right)$ with a Lorentzian shape. I have shown that, in the case of a Lorentzian filter and a flat $\mathrm{PSD}$, the integral in eq. \eqref{NEP_int} collapses to
$$\begin{equation} \mathrm{NEP}_{\tau,\mathrm{ph}}^2=\frac{1}{\tau}\left(h\nu\eta_0\mathrm{PSD}\Delta\nu+\frac{2}{\pi}\eta_0^2\mathrm{PSD}^2\Delta\nu\right) \end{equation}$$with $\Delta\nu$ the $\mathrm{FWHM}$ of the filter. The factor of $2/\pi$ means that the earlier approximation overestimates the bunching term by a factor of $\pi/2$, which is explained by the finite width of the Lorentzian filter: previously the bandwidth of the filters was assumed to be negligible, resulting in an overestimation of the bunching. Because the photons impinging on the detector span a wider bandwidth, they bunch less.
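To make these quantities concrete, the short sketch below writes down one common parameterization of a Lorentzian filter transmission with peak $\eta_0$ and $\mathrm{FWHM}$ $\Delta\nu$, and evaluates the corresponding coherence time $\tau_\mathrm{coh}\approx 1/\Delta\nu$. The parameterization and the numerical values are illustrative assumptions, not necessarily the exact conventions or filter parameters used in this work.

```python
import numpy as np

# One common parameterization of a Lorentzian filter transmission with peak
# eta_0 and full width at half maximum delta_nu; this normalization and the
# numerical values below are illustrative assumptions only.
def lorentzian_filter(nu, nu_0, delta_nu, eta_0=1.0):
    return eta_0 / (1.0 + (2.0 * (nu - nu_0) / delta_nu) ** 2)

nu_0 = 350e9      # hypothetical filter centre frequency [Hz]
delta_nu = 0.7e9  # hypothetical filter FWHM [Hz]

# The transmission drops to half its peak one half-width from the centre.
assert np.isclose(lorentzian_filter(nu_0 + delta_nu / 2, nu_0, delta_nu), 0.5)

# Coherence time of the filtered radiation, tau_coh ~ 1 / delta_nu.
tau_coh = 1.0 / delta_nu
print(f"FWHM = {delta_nu / 1e9:.2f} GHz  ->  tau_coh = {tau_coh * 1e9:.2f} ns")
```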
The noise equivalent power $\mathrm{NEP}_{\tau=0.5\mathrm{s}}$ is defined at an integration time of $\tau=0.5\:\mathrm{s}$. For different integration times the $\mathrm{NEP}_\tau$ is given by:
$$\begin{equation} \mathrm{NEP}_{\tau} = \frac{1}{\sqrt{2\tau}}\mathrm{NEP}_{\tau=0.5\mathrm{s}} \end{equation}$$However, this assumes that the integration time is much larger than the coherence time of the detected photons ($\tau\gg \tau_\mathrm{coh}$). Due to the correlation of photons within the coherence time, the $\mathrm{NEP}_{\tau}$ drops when the integration time approaches and falls below the coherence time.
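As a minimal illustration of this scaling in the regime $\tau\gg\tau_\mathrm{coh}$, the helper below rescales a $\mathrm{NEP}$ quoted at $\tau=0.5\:\mathrm{s}$ to another integration time; the function name and the example value are hypothetical.

```python
import numpy as np

def nep_at_tau(nep_half_second, tau):
    """Scale a NEP quoted at tau = 0.5 s to an integration time tau [s].

    Only valid when tau is much larger than the coherence time tau_coh,
    so that the detected photons are uncorrelated between samples.
    """
    return nep_half_second / np.sqrt(2.0 * tau)

# Hypothetical example: quadrupling the integration time from 0.5 s to 2 s
# halves the noise equivalent power.
print(nep_at_tau(3e-19, tau=2.0))  # -> 1.5e-19
```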
Besides this algebraic result for a Lorentzian filter, I have also modified the existing deshima-sensitivity model [3] to calculate the integral in eq. \eqref{NEP_int} not just for analytical filter shapes, but for arbitrary filter shapes loaded from a file. This allows researchers to compare the sensitivity of various filter designs in software.
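To sketch what such a calculation can look like, the snippet below numerically integrates the photon-noise expression of eq. \eqref{NEP_int} over a tabulated transmission curve. The file name, column layout, flat $\mathrm{PSD}$ and variable names are assumptions chosen for illustration; they do not reflect the actual deshima-sensitivity interface.

```python
import numpy as np
from scipy.constants import h          # Planck constant [J s]
from scipy.integrate import trapezoid  # numerical integration over nu

# Hypothetical two-column file: frequency [Hz] and filter transmission eta(nu).
nu, eta = np.loadtxt("filter_transmission.txt", unpack=True)

# Power spectral density of the incoming radiation [W/Hz], taken as flat here
# purely for illustration; in general it varies with frequency.
psd = np.full_like(nu, 1e-20)

tau = 0.5  # integration time [s]

# Photon-noise NEP: Poisson term plus bunching term, integrated numerically
# over the tabulated filter curve instead of using a closed-form expression.
integrand = h * nu * eta * psd + eta**2 * psd**2
nep = np.sqrt(trapezoid(integrand, nu) / tau)
print(f"Photon-noise NEP over {tau} s: {nep:.3e} W")
```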
To verify the changes to the model, I compared it with the old model and confirmed that, in the case of a perfect Lorentzian filter, the latter overestimates the bunching noise by a factor of $\pi/2$, even on average for a non-flat $\mathrm{PSD}$. Beyond this, the changes affect the power and noise at local extrema of the $\mathrm{PSD}$, where the old model did not integrate over the full range of the filter and therefore treated the $\mathrm{PSD}$ as locally flat.
The future of this research
Because the model now more closely resembles the physics occurring inside the filter section of a DESHIMA spectrometer, it can be used to compare different filter designs within the deshima-sensitivity package itself. This makes it an important tool for comparing filter topologies that deviate from the Lorentzian shape. Paired with methods to accurately describe the coupling of a source to the detector, it can be used to calculate the signal-to-noise ratio of a specific filter profile with high accuracy, aiding in the experimental design of different filter profiles. Such research is in progress, so the model will immediately be put to use.
The improved accuracy of the approximation of the bunching term for a Lorentzian filter can also prove useful when designing the DESHIMA spectrometer, as it gives a more physically rigorous target for the maximum sensitivity the instrument can strive towards.
Finally, this thesis gives a thorough overview of photon statistics in astronomical measurements, and of photon bunching in particular, and can therefore serve as a teaching tool. My thesis supervisor, Dr. Akira Endo, has expressed interest in using it as teaching material for courses he gives on the subject, and I look forward to helping other students better understand photon statistics and photon bunching.
Bibliography
- [1] J. Zmuidzinas, “Thermal noise and correlations in photon detection,” Applied Optics, vol. 42, no. 25, p. 4989, 2003, doi: 10.1364/ao.42.004989.
- [2] A. Endo et al., “First light demonstration of the integrated superconducting spectrometer,” Nature Astronomy, vol. 3, no. 11, pp. 989–996, 2019, doi: 10.1038/s41550-019-0850-8.
- [3] A. Endo and A. Taniguchi, “deshima-sensitivity v0.3.0,” pypi.org, Jun. 2021. [Online]. Available: https://pypi.org/project/deshima-sensitivity/