Ghosh and Mukhopadhyay (1975: Calcutta Statistical Association Bulletin) introduced a purely sequential minimum risk point estimation procedure for the unknown parameter θ (>0) in a U(0,θ) population, developed under a squared error loss plus a linear cost of sampling. Mukhopadhyay et al. (1983: Sequential Analysis) broadened that earlier methodology considerably. In both papers, θ was estimated by the largest sample order statistic (S), whose randomly stopped version appeared in both the loss function and the stopping rule.
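As a point of reference, the following is a sketch of the standard fixed-sample calculation behind such a procedure; the constants here are illustrative and need not match the exact formulation in the cited papers. With n observations from U(0,θ), loss A(Sₙ − θ)² + cn, weight A > 0, and unit sampling cost c > 0, where Sₙ denotes the sample maximum,

\[
E_\theta (S_n - \theta)^2 = \frac{2\theta^2}{(n+1)(n+2)} \approx \frac{2\theta^2}{n^2},
\qquad
R_n(c) \approx \frac{2A\theta^2}{n^2} + cn,
\qquad
n^*_S \approx \left(\frac{4A\theta^2}{c}\right)^{1/3}.
\]

A purely sequential rule of this general type then stops, roughly, at the first n ≥ m (a pilot size) for which n ≥ (4A Sₙ²/c)^{1/3}, with Sₙ standing in for the unknown θ.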
Subsequently, Mukhopadhyay (1987: South African Statistical Journal) proposed a slightly different idea of sequential minimum risk point estimation for θ. He used randomly stopped versions of S or of T, where T stands for twice the sample mean, in either the loss function or the stopping rule. The performances of such procedures were compared with those of the earlier sequential estimators of θ based on S.
Clearly, however, using a randomly stopped version of T entails some loss of information compared with using the corresponding randomly stopped largest sample order statistic in both the loss function and the stopping rule. In this paper, we explore some novel approaches for recovering any such lost information by fine-tuning the loss function and then properly tailoring the associated sequential methodologies.
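A quick variance comparison, again with illustrative constants, makes this loss of information concrete. Writing Tₙ = 2X̄ₙ, one has E(Tₙ) = θ and Var(Tₙ) = θ²/(3n), so under a loss of the form A(Tₙ − θ)² + cn the risk is of order n⁻¹ rather than n⁻²:

\[
R_n^{T}(c) = \frac{A\theta^2}{3n} + cn,
\qquad
n^*_T = \theta\sqrt{\frac{A}{3c}}.
\]

The T-based optimal sample size therefore grows like c^{-1/2} as c → 0, against the c^{-1/3} growth of the S-based size, and the attained minimum risk is correspondingly larger. This is the gap that the fine-tuned loss functions mentioned above aim to recover.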
We will examine how the sequential risks of our newly proposed methodologies compare with those of the existing sequential estimators. We will also present small-, moderate-, and large-sample-size performances of the new randomly stopped versions of T and explore some selected second-order properties.
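For readers who wish to experiment numerically, a minimal Monte Carlo sketch along these lines follows. The stopping boundaries simply mimic the fixed-sample optimal sizes sketched above; they are not the exact rules of the cited papers, and the function names and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_once(theta, A, c, m=5, rule="S"):
    """One purely sequential run for a U(0, theta) sample.

    Illustrative stopping boundaries (not the rules of the cited papers):
      S-based: stop at the first n >= m with n >= (4*A*S_n**2 / c)**(1/3),
      T-based: stop at the first n >= m with n >= (A*T_n**2 / (3*c))**0.5,
    where S_n is the sample maximum and T_n is twice the sample mean.
    Returns the stopping time N and the realized loss A*(est - theta)**2 + c*N.
    """
    x = list(rng.uniform(0.0, theta, size=m))
    while True:
        n = len(x)
        if rule == "S":
            est = max(x)                               # S_n
            boundary = (4.0 * A * est**2 / c) ** (1.0 / 3.0)
        else:
            est = 2.0 * float(np.mean(x))              # T_n
            boundary = (A * est**2 / (3.0 * c)) ** 0.5
        if n >= boundary:
            return n, A * (est - theta) ** 2 + c * n
        x.append(rng.uniform(0.0, theta))              # take one more observation

def simulate(theta=1.0, A=100.0, c=0.01, reps=2000):
    """Estimate the average stopping time and sequential risk for both rules."""
    for rule in ("S", "T"):
        out = np.array([run_once(theta, A, c, rule=rule) for _ in range(reps)])
        print(f"rule {rule}: mean N = {out[:, 0].mean():.2f}, "
              f"mean risk = {out[:, 1].mean():.4f}")

if __name__ == "__main__":
    simulate()
```

Under these illustrative boundaries, the S-based runs should stop roughly near (4Aθ²/c)^{1/3} observations with a visibly smaller average risk than the T-based runs, in line with the orders of magnitude noted above.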
Keywords: Information; Lost information; Randomly stopped sample max; Randomly stopped sample mean
Biography: Debanjan Bhattacharjee is from India. He has just completed his PhD in Statistics at the University of Connecticut, USA, under the guidance of Dr. Nitis Mukhopadhyay.