Mathematics Faculty Publications
Recent Submissions

Essential Norms of Weighted Composition Operators Between Bloch Type Spaces
We compute the essential norm of a weighted composition operator uC_φ acting from an analytic Lipschitz space into a weighted Bloch-type space on the disk, and give estimates for the essential norm of uC_φ when it maps the standard Bloch space into a weighted Bloch-type space. We also study boundedness and compactness of weighted composition operators on analytic Lipschitz spaces from a geometric perspective.

Generalized Schwarz-Pick Estimates
We obtain higher-derivative generalizations of the Schwarz-Pick inequality for analytic self-maps of the unit disk as a consequence of recent characterizations of boundedness and compactness of weighted composition operators between Bloch-type spaces.

Toeplitz Operators on Bloch-Type Spaces
We characterize the complex measures μ on the unit disk for which the Toeplitz operator T_μ^α, α > 0, is bounded or compact on the Bloch-type spaces B_α.

A Volterra Type Operator on Spaces of Analytic Functions
The main results are conditions on the symbol g under which the Volterra-type operator J_g is bounded or compact on BMOA. We also point out what can be said when J_g is considered as an operator on a general space X of analytic functions on the disc.

Weighted Composition Operators between Bloch Type Spaces
We discuss boundedness and compactness of composition operators followed by multiplication as operators between Bloch-type spaces of analytic functions on the unit disk.

Graph-Based Upper Bounds for the Probability of the Union of Events
We consider the problem of generating upper bounds for the probability of the union of events when the individual probabilities of the events, as well as the probabilities of pairs of these events, are known. By formulating the problem as a linear program, we can obtain bounds as objective function values corresponding to dual basic feasible solutions. The new upper bounds are based on underlying bipartite and threshold-type graph structures.
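A well-known bound of this graph-based flavor (illustrative only; not necessarily the construction derived in the paper) is the Hunter-Worsley bound, which subtracts from the sum of the individual probabilities the weight of a maximum spanning tree of the pairwise-intersection graph. A minimal sketch with invented probabilities:

```python
def hunter_worsley_bound(p, p2):
    """Hunter-Worsley upper bound on P(union of events):
    sum(p_i) minus the weight of a maximum spanning tree of the
    graph whose edge (i, j) carries P(A_i and A_j)."""
    n = len(p)
    parent = list(range(n))  # union-find for Kruskal's algorithm

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    tree_weight = 0.0
    # Greedily add the heaviest edges that do not create a cycle.
    for (i, j), pij in sorted(p2.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree_weight += pij
    return sum(p) - tree_weight

# Three events with invented individual and pairwise probabilities.
p = [0.3, 0.3, 0.3]
p2 = {(0, 1): 0.10, (0, 2): 0.05, (1, 2): 0.10}
bound = hunter_worsley_bound(p, p2)  # 0.9 - (0.10 + 0.10) ≈ 0.70
```

The spanning-tree structure is one instance of the general idea the abstract describes: restricting the dual feasible solutions of the linear program to those supported on a graph yields a closed-form bound.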

Permutation Reconstruction
In this paper, we consider the problem of permutation reconstruction, an analogue of graph reconstruction, a famous question in graph theory. In the case of permutations, the problem can be stated as follows: in all possible ways, delete k entries of the permutation p = p1 p2 p3 ... pn and renumber accordingly, creating C(n, k) substrings. How large must n be in order for us to be able to reconstruct p from this multiset of substrings? That is, how large must n be to guarantee that this multiset is unique to p? Alternatively, one can look at the sets of substrings created this way. We show that in the case k = 1, regardless of whether we consider sets or multisets of these substrings, a permutation must have length at least five to guarantee reconstruction. This in turn yields an interesting result about the symmetries of the poset of permutations. We also give some partial results in the cases k = 2 and k = 3, and finally we give a lower bound on the length of a permutation for general k.
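The deletion-and-renumbering procedure is easy to make concrete. The sketch below generates the multiset of substrings for a given k; the witness pair 2413 and 3142 (an illustration consistent with the length-five threshold, not a pair taken from the paper) shows that length four is not enough when k = 1:

```python
from itertools import combinations
from collections import Counter

def patterns_after_deleting(p, k):
    """Delete k entries of the permutation p in all C(n, k) ways and
    renumber the survivors to 1..n-k, returning the multiset of patterns."""
    n = len(p)
    result = Counter()
    for kept in combinations(range(n), n - k):
        sub = [p[i] for i in kept]
        # Renumber: replace each surviving value by its rank among survivors.
        ranks = {v: r + 1 for r, v in enumerate(sorted(sub))}
        result[tuple(ranks[v] for v in sub)] += 1
    return result

# 2413 and 3142 yield the same multiset {312, 213, 132, 231} of
# one-deletion patterns, so they cannot be told apart when k = 1.
same = patterns_after_deleting((2, 4, 1, 3), 1) == patterns_after_deleting((3, 1, 4, 2), 1)
```

Each deleted entry of 2413 and of 3142 leaves a length-3 pattern, and the two multisets coincide, so no length-4 guarantee is possible; the paper's result is that length five always suffices for k = 1.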

Derivation and Validation of Clinical Phenotypes for COPD: A Systematic Review
Background: The traditional classification of COPD, which relies solely on spirometry, fails to account for the complexity and heterogeneity of the disease. Phenotyping is a method that attempts to derive a single disease attribute, or a combination of attributes, associated with clinically meaningful outcomes. Deriving phenotypes entails the use of cluster analyses, and helps individualize patient management by identifying groups of individuals with similar characteristics. We aimed to systematically review the literature for studies that had derived such phenotypes using unsupervised methods. Methods: Two independent reviewers systematically searched multiple databases for studies that performed validated statistical analyses, free of definitive predetermined hypotheses, to derive phenotypes among patients with COPD. Data were extracted independently. Results: 9156 citations were retrieved, of which 8 studies were included. The number of subjects ranged from 213 to 1543. Most studies appeared to be biased: patients were more likely to be male, to have severe disease, and to have been recruited in tertiary care settings. Statistical methods used to derive phenotypes varied by study. The number of phenotypes identified ranged from 2 to 5. Two phenotypes, both with poor longitudinal health outcomes, were common across multiple studies: young patients with severe respiratory disease, few cardiovascular comorbidities, poor nutritional status, and poor health status; and older patients with moderate respiratory disease, obesity, and cardiovascular and metabolic comorbidities. Conclusions: The recognition that two phenotypes of COPD were often reported may have clinical implications for altering the course of the disease. This review also provided important information on limitations of phenotype studies in COPD and the need for improvement in future studies.
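The unsupervised phenotyping the review describes can be illustrated with a toy cluster analysis. The feature names and numbers below are hypothetical and are not data from any of the reviewed studies; the sketch only shows the mechanics of k-means, one of the clustering methods such studies use:

```python
def kmeans(points, k, iters=50):
    """Plain k-means (k >= 2): assign each point to its nearest centroid,
    then recompute centroids, repeating until assignments stabilize.
    Initial centroids are spread across the input order (deterministic)."""
    n = len(points)
    centroids = [points[i * (n - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for pt in points:
            d = [sum((a - b) ** 2 for a, b in zip(pt, c)) for c in centroids]
            clusters[d.index(min(d))].append(pt)
        new = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # assignments (and hence centroids) stabilized
            break
        centroids = new
    return centroids, clusters

# Hypothetical patients as (FEV1 % predicted, BMI) pairs: a severe,
# underweight group and a moderate, obese group.
patients = [(35, 19), (38, 20), (33, 18), (62, 31), (65, 33), (60, 30)]
centroids, clusters = kmeans(patients, k=2)
```

In a real phenotyping study the inputs are many clinical variables rather than two, and the number of clusters is itself validated, which is part of what the review assesses.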

Essential Norm Estimates of Weighted Composition Operators Between Bergman Spaces on Strongly Pseudoconvex Domains
We give estimates of the essential norms of weighted composition operators acting between Bergman spaces on strongly pseudoconvex domains. We also characterize boundedness and compactness of these operators.

Limits of Weakly Hypercyclic and Supercyclic Operators
We give a spectral characterization of the norm closure of the class of all weakly hypercyclic operators on a Hilbert space. Analogous results are obtained for weakly supercyclic operators.

A Simulation Based Evaluation of the Bootstrap Bias Corrected Percentile Interval Estimators of the Local False Discovery Rates
Large-scale data, such as those collected in microarray, proteomics, and MRI imaging studies and in massive social science surveys, often require simultaneous consideration of hundreds or thousands of hypothesis tests, which leads to an inflated type I error rate. A popular way to account for this is to use the local false discovery rate (LFDR), the probability that a gene is truly not differentially expressed given the observed test statistic. The purpose of this report is to evaluate the Bootstrap Bias Corrected Percentile (BBCP) method proposed by Shao and Tu (1995) for estimating the lower bound for the LFDR. The method did not perform as expected: the overall coverage probability, for null genes as well as non-null genes, was far from the nominal coverage level of 50%.
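For reference, a generic sketch of a bias-corrected (BC) bootstrap percentile interval for an arbitrary statistic is below; the report's BBCP estimator targets the LFDR specifically and involves machinery not reproduced here, so this is only the percentile-shifting idea in its simplest form:

```python
import random
from statistics import NormalDist, mean

def bc_percentile_interval(data, stat, level=0.90, B=2000, seed=1):
    """Efron's bias-corrected percentile interval: shift the percentile
    points by z0, the normal quantile of the fraction of bootstrap
    replicates falling below the observed statistic."""
    rng = random.Random(seed)
    n = len(data)
    theta_hat = stat(data)
    boots = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(B))
    nd = NormalDist()
    # z0 measures the median bias of the bootstrap distribution.
    z0 = nd.inv_cdf(sum(b < theta_hat for b in boots) / B)
    alpha = (1 - level) / 2
    lo_q = nd.cdf(2 * z0 + nd.inv_cdf(alpha))
    hi_q = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha))
    lo = boots[min(B - 1, max(0, int(lo_q * B)))]
    hi = boots[min(B - 1, max(0, int(hi_q * B)))]
    return lo, hi

# Toy data: a 90% BC interval for the sample mean.
sample = [0.2, 0.5, 0.9, 1.4, 1.7, 2.3, 2.8, 3.1, 3.9, 4.4]
lo, hi = bc_percentile_interval(sample, mean)
```

When z0 = 0 (no median bias) the interval reduces to the plain percentile interval; the coverage shortfall the report describes concerns the behavior of this construction when the target is an LFDR lower bound rather than a smooth statistic like the mean.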

Interval Estimation of Some Epidemiological Measures of Association
In epidemiological cohort studies, the probability of developing a disease for individuals in a treatment/intervention group is compared with that of a control group. The groups involve varying cluster sizes, and the binary responses within each cluster cannot be assumed to be independent. Three major measures of association used to report the efficacy of treatments or the effectiveness of public health intervention programs in prospective studies are the Risk Difference (RD), the Risk Ratio (RR), and the Relative Risk Difference (RED). The preference for one measure of association over another in drawing statistical inference depends on the design of the study. Lui (2004) discusses a number of methods of constructing confidence intervals for each of these measures: four methods for RD, four methods for RR, and three methods for RED. For the construction of confidence intervals for RD, Paul and Zaihra (2008) compare, using extensive simulations, the four methods discussed by Lui (2004) with a method based on an estimator of the variance of a ratio estimator by Cochran (1977) and a method based on a sandwich estimator of the variance of the regression estimator using the generalized estimating equations approach of Zeger and Liang (1986). Paul and Zaihra (2008) conclude that the method based on an estimate of the variance of a ratio estimator performs best overall. In this paper, we extend the two methodologies introduced in Paul and Zaihra (2008) to confidence interval construction for the risk measures RR and RED. Extensive simulations show that the method based on an estimate of the variance of a ratio estimator also performs best overall for constructing confidence intervals for RR and RED. This method involves a very simple variance expression which can be implemented in a few lines of code, so it can be considered an easily implementable alternative for all three measures of association.
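The "very simple variance expression" in question is the standard ratio-estimator approximation. The sketch below applies it to the risk difference for clustered binary data; the cluster counts are invented for illustration, and the exact formula the paper recommends may differ in detail (e.g., finite-population corrections):

```python
from math import sqrt

def ratio_est_and_var(x, n):
    """Per-group risk estimated as a ratio of cluster totals, with the
    approximate variance of a ratio estimator (Cochran, 1977):
    var(p) ~ sum((x_i - p*n_i)^2) / ((m-1) * m * nbar^2)."""
    m = len(n)
    nbar = sum(n) / m
    p = sum(x) / sum(n)
    s2 = sum((xi - p * ni) ** 2 for xi, ni in zip(x, n)) / (m - 1)
    return p, s2 / (m * nbar ** 2)

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald-type 95% confidence interval for RD = p1 - p2, combining the
    two groups' ratio-estimator variances."""
    p1, v1 = ratio_est_and_var(x1, n1)
    p2, v2 = ratio_est_and_var(x2, n2)
    rd = p1 - p2
    half = z * sqrt(v1 + v2)
    return rd - half, rd + half

# Invented cluster data: per-cluster (events, cluster size).
x1, n1 = [3, 5, 2, 6], [10, 12, 8, 15]   # treatment group
x2, n2 = [6, 8, 5, 9], [11, 13, 9, 14]   # control group
lo, hi = risk_difference_ci(x1, n1, x2, n2)
```

The same variance expression, applied on the log scale or to the appropriate transformation, extends to RR and RED, which is the extension the paper carries out.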