On Monday 26th November 2018, the Department of Mathematical Sciences at Durham University will hold a Lecture Day on the occasion of the visit of Prof. Narayanaswamy Balakrishnan (McMaster University, Canada) to the UK. The programme for the meeting is shown below; the venue will be the Department of Mathematical Sciences, Durham University (marked 15 on the map). Further travel information can be found here.
There is no registration fee, but for catering purposes please let us know by Friday 16th November if you would like to attend. To confirm attendance, or in case of any questions, please contact Tahani Coolen-Maturi (tahani.maturi@durham.ac.uk).
Nonparametric predictive inference (NPI) is a frequentist statistical method based on few assumptions, with uncertainty quantified by imprecise probabilities. We show how NPI can be applied to investigate the reproducibility of some basic statistical hypothesis tests, including a precedence test. Reproducibility here concerns the question of whether the final test result, with regard to rejection of the null hypothesis, would be the same if the test were repeated. This is an important practical issue which has received relatively little attention, and about which there is considerable confusion.
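To make the reproducibility question concrete, the following minimal sketch (not NPI itself, and using simulated data as an illustrative assumption) estimates, by bootstrap resampling, how often a repeated two-sample t-test would reach the same reject/do-not-reject decision as the original data.

```python
# Illustration of the reproducibility question for a standard test (not NPI):
# bootstrap an estimate of how often a repeated two-sample t-test would reach
# the same reject/do-not-reject decision as the original data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)   # hypothetical sample from group 1
y = rng.normal(0.5, 1.0, size=30)   # hypothetical sample from group 2

alpha = 0.05
reject_original = stats.ttest_ind(x, y).pvalue < alpha

same_decision = 0
B = 2000
for _ in range(B):
    xb = rng.choice(x, size=x.size, replace=True)
    yb = rng.choice(y, size=y.size, replace=True)
    reject_b = stats.ttest_ind(xb, yb).pvalue < alpha
    same_decision += (reject_b == reject_original)

print(f"original decision: {'reject' if reject_original else 'do not reject'} H0")
print(f"estimated reproducibility: {same_decision / B:.2f}")
```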
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool for assessing the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often biomarker measurements are undetectable below or above so-called limits of detection (LoD). Nonparametric predictive inference (NPI) for the best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to be maximized is the area under the ROC curve (AUC).
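As a rough empirical counterpart of the idea (not the NPI lower and upper bounds themselves), the sketch below computes the empirical AUC of a linear combination c*X1 + (1-c)*X2 of two simulated biomarkers, with values below an assumed lower limit of detection set to that limit, and picks c by a simple grid search; the data, the LoD value and the handling of censored values are illustrative assumptions.

```python
# Empirical (non-NPI) sketch: AUC of a linear combination c*X1 + (1-c)*X2 of two
# biomarkers, maximised over c by a grid search; values below an assumed lower
# limit of detection are set to the limit, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 100, 100
healthy = rng.normal([0.0, 0.0], 1.0, size=(n0, 2))   # hypothetical non-diseased group
diseased = rng.normal([1.0, 0.8], 1.0, size=(n1, 2))  # hypothetical diseased group

lod = -0.5                                  # assumed lower limit of detection
healthy = np.maximum(healthy, lod)          # censor undetectable measurements
diseased = np.maximum(diseased, lod)

def empirical_auc(u, v):
    """P(V > U) + 0.5 * P(V == U), estimated from all pairs (Mann-Whitney form)."""
    diff = v[:, None] - u[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

best_c, best_auc = max(
    ((c, empirical_auc(c * healthy[:, 0] + (1 - c) * healthy[:, 1],
                       c * diseased[:, 0] + (1 - c) * diseased[:, 1]))
     for c in np.linspace(0, 1, 101)),
    key=lambda pair: pair[1],
)
print(f"best coefficient c = {best_c:.2f}, empirical AUC = {best_auc:.3f}")
```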
In aerospace, conjunction analysis uses uncertainty quantification to estimate the risk of satellite collisions in orbit. A troubling phenomenon has emerged in conjunction analysis: lower quality data paradoxically appear to make a satellite safer. It turns out that this effect is a special case of a much broader structural deficiency in additive belief functions. In such representations of statistical inference, there are always false propositions that will consistently be assigned a high degree of belief over repeated draws of the data. In other words, the problem is false confidence, and it is ubiquitous in such applications. False confidence can be found in even the simplest problems of statistical inference. This finding reinforces the position, already held in some circles, that epistemic uncertainty can only be appropriately represented using non-probabilistic mathematics.
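The paradox can be seen in a toy one-dimensional calculation (the numbers are illustrative assumptions, not an operational conjunction analysis): with the estimated miss distance held fixed, inflating the measurement uncertainty eventually drives the computed collision probability towards zero, so arbitrarily poor data yield apparent safety.

```python
# Toy 1-D illustration of "probability dilution": for a fixed estimated miss
# distance, noisier data eventually drive the computed collision probability
# towards zero, making the satellite look "safer". All numbers are assumptions.
import numpy as np
from scipy import stats

mu = 200.0       # estimated miss distance (m)
radius = 50.0    # combined hard-body radius (m): collision if |miss| < radius

for sigma in [20, 100, 500, 2000, 10000]:   # increasing measurement uncertainty (m)
    p_collision = (stats.norm.cdf(radius, loc=mu, scale=sigma)
                   - stats.norm.cdf(-radius, loc=mu, scale=sigma))
    print(f"sigma = {sigma:6d} m   P(collision) = {p_collision:.4f}")
```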
In this talk, I will first introduce the mixture cure rate model in its original form. After that, I will formulate cure rate models in the context of competing risks and present some flexible families of cure rate models. I will then describe various inferential results for these models. Next, as a two-stage model, I will present destructive cure rate models and discuss inferential methods for them. In the final part of the talk, I will discuss various other extensions and generalizations of these models and the associated inferential methods. All the models and inferential methods will be illustrated with simulation studies as well as some well-known melanoma data sets.
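For readers unfamiliar with the basic mixture cure idea (only the basic idea; the competing-risks and destructive extensions discussed in the talk are not modelled here), the following sketch with illustrative parameter values simulates a population in which a fraction is cured and never experiences the event, so the population survival function levels off at the cured fraction.

```python
# Minimal sketch of the basic mixture cure rate idea: a fraction pi of subjects
# is cured (never experiences the event), the rest follow an ordinary survival
# distribution, here exponential. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
pi_cure = 0.3    # assumed cured fraction
rate = 0.1       # assumed event rate for the susceptible subjects
n = 10_000

cured = rng.random(n) < pi_cure
event_time = np.where(cured, np.inf, rng.exponential(1.0 / rate, size=n))

# Population survival: S_pop(t) = pi + (1 - pi) * exp(-rate * t)
for t in [5.0, 10.0, 30.0]:
    empirical = np.mean(event_time > t)
    model = pi_cure + (1 - pi_cure) * np.exp(-rate * t)
    print(f"t = {t:5.1f}   empirical S(t) = {empirical:.3f}   model S(t) = {model:.3f}")
```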
Extreme Value Theory provides a rigorous mathematical justification for extrapolating outside the range of the sampled observations. The primary assumption is that the observations are independent and identically distributed. Although the celebrated extreme value theorem still holds under several forms of weak dependence, relaxing the stationarity assumption, for example by considering a trend in extremes, leads to a challenging problem of inference based around the frequency of extreme events. Some studies have argued that climate change is not so much about startling magnitudes of extreme phenomena as about how the frequency of extreme events can contribute to the worst-case scenarios that could play out on the planet. For instance, the average rainfall may not be changing much, but heavy rainfall may become significantly more or less frequent, meaning that different observations must be allowed different underlying distributions. In this talk, I will present statistical tools for semi-parametric modelling of the evolution of extreme values over time and/or space by considering a trend in the frequency of high exceedances. The methodology is illustrated with an application to daily rainfall data from several stations across Germany and The Netherlands.
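A much simpler sketch than the semi-parametric methodology of the talk, but in the same spirit, is to look at a trend in the frequency of exceedances directly: regress the indicator of exceeding a high threshold on (rescaled) time with logistic regression. The data and the true trend below are simulated assumptions.

```python
# Sketch: estimate a trend in the frequency of high-threshold exceedances by
# logistic regression of the exceedance indicator on rescaled time.
# Simulated data; not the speaker's semi-parametric methodology.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 3650)                   # ten hypothetical years, rescaled time
p_exceed = 1 / (1 + np.exp(-(-4.0 + 1.5 * t)))    # true, slowly increasing exceedance rate
exceed = rng.random(t.size) < p_exceed            # indicators of exceeding a high threshold

def neg_log_lik(beta):
    """Negative Bernoulli log-likelihood with logit link eta = b0 + b1 * t."""
    eta = beta[0] + beta[1] * t
    return np.sum(np.log1p(np.exp(eta)) - exceed * eta)

fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print("estimated intercept and slope on rescaled time:", fit.x)
```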
Count data are usually fitted through Poisson models or simple two-parameter models such as the Negative Binomial distribution. Whilst many numerical methods, such as AIC and deviance, exist for assessing or comparing model fit, diagrammatic methods are few. We present here a diagnostic plot, to which we refer as the 'Christmas tree plot' due to its characteristic shape, that may be used to visually assess the suitability of a given count data model. The construction of the plot is based on identifying, for any non-negative integer, a range of plausible counts of that integer, given the hypothesized count distribution. The plot hence allows identification, at a glance, of inflation or deflation of any integer with respect to the count data model, including the detection of zero-inflation as a prominent special case.
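The construction can be sketched as follows (details of the plot itself may differ from the authors'): under the hypothesized model, the number of observations equal to k in a sample of size n is Binomial(n, p_k), with p_k the hypothesized probability of k, so central binomial quantiles give a plausible range of counts for each k, against which the observed counts are checked. The Poisson data and the 95% range below are illustrative assumptions.

```python
# Sketch of the plausible-count construction for a hypothesised count model:
# the count of k's in n observations is Binomial(n, p_k), so central binomial
# quantiles give a plausible range per k. Data and 95% level are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.poisson(2.0, size=500)       # hypothetical count data
n = data.size
lam = data.mean()                       # fitted Poisson mean

for k in range(int(data.max()) + 1):
    p_k = stats.poisson.pmf(k, lam)
    lo, hi = stats.binom.ppf([0.025, 0.975], n, p_k)   # plausible range of counts of k
    observed = np.sum(data == k)
    flag = "" if lo <= observed <= hi else "  <-- outside range"
    print(f"k = {k:2d}   observed {observed:3d}   plausible [{int(lo)}, {int(hi)}]{flag}")
```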
One of the strategies that might be considered to enhance the reliability and resilience of a system is swapping components when a component fails, that is, replacing the failed component by another component from the system which is still functioning. This presentation considers cost-effective component swapping to increase system reliability. The cost is discussed in two scenarios, namely a fixed cost and a time-dependent cost of system failure.
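As a toy illustration of the reliability gain from swapping (the system structure and exponential lifetimes are assumptions, and the cost aspects of the talk are not modelled), consider component A in series with a parallel pair (B, C): if A fails while B and C both still work, one of them can be moved into A's position, so the system only fails once a second component fails.

```python
# Toy Monte Carlo comparison of system lifetime with and without swapping for an
# assumed structure: A in series with a parallel pair (B, C), exponential lifetimes.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
ta, tb, tc = rng.exponential(1.0, size=(3, n))    # component failure times

# Without swapping the system fails when A fails or both B and C have failed.
no_swap = np.minimum(ta, np.maximum(tb, tc))

# With swapping: if A fails first while B and C both work, one of them replaces A,
# and the system then fails when the next of the two remaining components fails.
a_first = ta < np.minimum(tb, tc)
with_swap = np.where(a_first, np.minimum(tb, tc), no_swap)

print(f"mean time to failure without swapping: {no_swap.mean():.3f}")
print(f"mean time to failure with swapping:    {with_swap.mean():.3f}")
```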
Recently, several inferential methods based on random set theory have been proposed in the literature. Among these, we focus on Confidence Structures, which can be seen as a generalisation of the inferential approach based on Confidence Distributions. In the latter, the result of the inference is a probability distribution over the range of the parameter of interest, which can be used to construct confidence intervals and test hypotheses at any level of significance. Using random set models allows us to seamlessly derive approaches for analysing censored observations without any assumptions about the underlying censoring model, whilst retaining the coverage properties of confidence distributions. We will show the basic ideas behind the concept of confidence structures and demonstrate their use on reliability analysis of a simple system based on a set of censored observations of the lifetimes of its components.
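The confidence-distribution starting point (not the random-set generalisation, and without censoring) can be illustrated with a simple, standard example: for a normal mean with known standard deviation, C(mu) = Phi((mu - xbar)/(sigma/sqrt(n))) is a distribution over the parameter whose quantiles reproduce the usual confidence intervals and whose value at mu0 is the one-sided p-value for H0: mu <= mu0. The data below are simulated assumptions.

```python
# Confidence distribution for a normal mean with known sigma (illustration of the
# confidence-distribution idea only, not confidence structures or censoring).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sigma = 2.0
x = rng.normal(5.0, sigma, size=40)     # hypothetical sample
xbar, n = x.mean(), x.size
se = sigma / np.sqrt(n)

def confidence_dist(mu):
    """C(mu) = Phi((mu - xbar) / se); its quantiles give confidence limits."""
    return stats.norm.cdf((mu - xbar) / se)

# 95% confidence interval from the 2.5% and 97.5% quantiles of C
lower = xbar + se * stats.norm.ppf(0.025)
upper = xbar + se * stats.norm.ppf(0.975)
print(f"95% CI from the confidence distribution: ({lower:.3f}, {upper:.3f})")

# C(mu0) is the one-sided p-value for testing H0: mu <= mu0 against mu > mu0
print(f"C(4.0) = {confidence_dist(4.0):.3f}")
```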
Notes: Prof. Balakrishnan's visit to the UK is supported by Durham University Global Engagement Travel Grants.