902 results for Eigenvalue Bounds
Abstract:
For executing the activities of a project, one or several resources are required, which are in general scarce. Many resource-allocation methods assume that the usage of these resources by an activity is constant during execution; in practice, however, the project manager may vary resource usage by individual activities over time within prescribed bounds. This variation gives rise to the project scheduling problem which consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated equals the prescribed work content of each activity, and precedence and various work-content-related constraints are met.
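To make the work-content constraint concrete, here is a minimal sketch (an illustration, not a method from the paper): for a single activity, the prescribed work content together with the per-period usage bounds already determines the feasible range of durations. The helper name and the numbers are hypothetical.

```python
import math

def duration_bounds(work_content: float, min_usage: float, max_usage: float):
    """Feasible duration range (in periods) for one activity whose total
    allocation must equal `work_content`, with per-period usage kept
    within [min_usage, max_usage]."""
    shortest = math.ceil(work_content / max_usage)   # run at the upper bound
    longest = math.floor(work_content / min_usage)   # run at the lower bound
    return shortest, longest

# e.g. work content of 20 resource-periods, usage between 2 and 8 units/period:
print(duration_bounds(20, 2, 8))  # -> (3, 10)
```

The scheduling problem then couples such per-activity ranges through the precedence and resource constraints mentioned above.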
Abstract:
Very recently, the ATLAS and CMS Collaborations reported diboson and dijet excesses above standard model expectations in the invariant mass region of 1.8–2.0 TeV. Interpreting the diboson excess of events in a model-independent fashion suggests that the vector boson pair production searches are best described by WZ or ZZ topologies, because states decaying into W+W− pairs are strongly constrained by semileptonic searches. Under the assumption of a low string scale, we show that both the diboson and dijet excesses can be steered by an anomalous U(1) field with very small coupling to leptons. The Drell–Yan bounds are then readily avoided because of the leptophobic nature of the massive Z′ gauge boson. The non-negligible decay into ZZ required to accommodate the data is a characteristic footprint of intersecting D-brane models, wherein the Landau–Yang theorem can be evaded by anomaly-induced operators involving a longitudinal Z. The model presented herein can be viewed purely field-theoretically, although it is particularly well motivated from string theory. Should the excesses become statistically significant at the LHC13, the associated Zγ topology would become a signature consistent only with a stringy origin.
Abstract:
We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is typically of order (log(n)/n)^{2/5}. This entails rates of convergence for the estimator's derivative. Moreover, we discuss the impact of additional constraints on F, such as monotonicity and pointwise bounds. We then apply these results to the analysis of current status data, where the distribution function of the event times is assumed to be concave.
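As a small illustration of the estimator in question (a sketch assuming the cvxpy package and an equally spaced grid; the data and constants are made up), concave least squares can be computed by constraining second differences to be nonpositive:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sqrt(x) + rng.normal(scale=0.1, size=n)   # concave truth plus noise

f = cp.Variable(n)
# Concavity on an equally spaced grid: second differences <= 0.
concavity = [f[i - 1] - 2 * f[i] + f[i + 1] <= 0 for i in range(1, n - 1)]
cp.Problem(cp.Minimize(cp.sum_squares(y - f)), concavity).solve()

sup_dist = np.max(np.abs(f.value - np.sqrt(x)))
print(f"sup distance: {sup_dist:.4f}; (log(n)/n)^(2/5): {(np.log(n)/n)**0.4:.4f}")
```

A single run only hints at the (log(n)/n)^{2/5} scale; the rate in the abstract is an asymptotic statement.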
Abstract:
During the last decade, wireless mobile communications have progressively become part of people’s daily lives, leading users to expect to be “always best-connected” to the Internet, regardless of their location or the time of day. This expectation is driven by the growing ubiquity of wireless access networks, offered by different types of service providers, together with the proliferation of highly portable devices such as laptops, tablets, and mobile phones. The “anytime and anywhere” connectivity requirement raises new challenges for managing device battery lifetime, as energy becomes the most significant constraint on end-user satisfaction. This wireless access context has also stimulated the development of novel multimedia applications with high network demands but little energy-aware design. Therefore, the relationship between energy consumption and the quality of multimedia applications as perceived by end-users should be carefully investigated.

This dissertation addresses energy-efficient multimedia communications in the IEEE 802.11 standard, the most widely used wireless access technology. It advances the literature by proposing a unique empirical assessment methodology and new power-saving algorithms, always bearing in mind end-users’ feedback and evaluating perceived quality. The new EViTEQ framework proposed in this thesis, which measures video transmission quality and energy consumption simultaneously and in an integrated way, reveals the importance of an empirical, high-accuracy methodology for assessing the trade-off between quality and energy consumption raised by new end-user requirements. Extensive evaluations conducted with the EViTEQ framework revealed its flexibility and its ability to accurately report both video transmission quality and energy consumption, as well as to be employed in rigorous investigations of network interface energy consumption patterns, regardless of the wireless access technology.

Following the need to improve the trade-off between energy consumption and application quality, this thesis proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA). By using end-users’ feedback to establish a proper trade-off between energy consumption and application performance, OPAMA aims at enhancing the energy efficiency of end-user devices accessing the network through IEEE 802.11. OPAMA's performance has been thoroughly analyzed in different scenarios and with different application types, including a simulation study and a real deployment in an Android testbed. When compared with the most popular standard power-saving mechanisms defined in the IEEE 802.11 standard, the results revealed OPAMA’s ability to improve energy efficiency while keeping end-users’ Quality of Experience within the defined bounds. Furthermore, OPAMA was optimized to enable superior energy savings in multiple-station environments, resulting in a new proposal called Enhanced Power Saving Mechanism for Multiple station Environments (OPAMA-EPS4ME).

The results of this thesis highlight the relevance of a highly accurate methodology for assessing energy consumption and application quality when aiming to optimize the trade-off between energy and quality. Additionally, the results obtained from both simulation and testbed evaluations show clear benefits of employing user-driven power-saving techniques, such as OPAMA, instead of the IEEE 802.11 standard power-saving approaches.
Abstract:
We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L²-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
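For orientation, exponential convergence bounds of this kind typically take the following generic shape in three dimensions (a template only; the precise norm, the constants C, b > 0, and their dependence on the analytic regularity, σ, and μ are as in the paper):

```latex
\[
  \| u - u_{\mathrm{dG}} \|_{\mathrm{dG}}
    \;\le\; C \exp\!\bigl(-b\,N^{1/5}\bigr),
\]
% N = total number of degrees of freedom of the hp-dG space
```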
Abstract:
The logic PJ is a probabilistic logic defined by adding (noniterated) probability operators to the basic justification logic J. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic PJ. The main result of the paper is that the complexity of the derivability problem in PJ remains the same as the complexity of the derivability problem in the underlying logic J, which is Π₂ᵖ-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.
Abstract:
The modulus method introduced by H. Grötzsch yields bounds for a mean distortion functional of quasiconformal maps between two annuli mapping the respective boundary components onto each other. P. P. Belinskiĭ studied these inequalities in the plane and identified the family of all minimisers. Beyond the Euclidean framework, a Grötzsch-Belinskiĭ-type inequality has previously been considered for quasiconformal maps between annuli in the Heisenberg group whose boundaries are Korányi spheres. In this note we show that, in contrast to the planar situation, the minimiser in this setting is essentially unique.
Abstract:
We apply the theory of Peres and Schlag to obtain generic lower bounds for the Hausdorff dimension of images of sets under orthogonal projections on simply connected two-dimensional Riemannian manifolds of constant curvature. As a consequence, we obtain appropriate versions of Marstrand's theorem, Kaufman's theorem, and Falconer's theorem in these geometric settings.
Abstract:
When choosing among models to describe categorical data, the need to consider interactions makes selection more difficult. With just four variables, considering all interactions, there are 166 different hierarchical models and many more non-hierarchical models. Two procedures have been developed for categorical data which will produce the "best" subset or subsets of each model size, where size refers to the number of effects in the model. Both procedures are patterned after the leaps-and-bounds approach used by Furnival and Wilson for continuous data and do not generally require fitting all models. For hierarchical models, likelihood ratio statistics (G²) are computed using iterative proportional fitting, and "best" is determined by comparing, among models with the same number of effects, Pr(χ²_k ≥ G²_ij), where k is the degrees of freedom for the ith model of size j. To fit non-hierarchical as well as hierarchical models, a weighted least squares procedure has been developed.

The procedures are applied to published occupational data relating to the occurrence of byssinosis, and the results are compared to previously published analyses of the same data. The procedures are also applied to published data on symptoms in psychiatric patients and again compared to previously published analyses.

These procedures will make categorical data analysis more accessible to researchers who are not statisticians. They should also encourage more complex exploratory analyses of epidemiologic data and contribute to the development of new hypotheses for study.
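For readers unfamiliar with the ingredients, the following sketch (hypothetical counts, not the dissertation's procedure) fits one hierarchical log-linear model, [XY][XZ] on a 2×2×2 table, by iterative proportional fitting and computes the likelihood ratio statistic G²:

```python
import numpy as np

obs = np.array([[[20., 12.], [8., 15.]],
                [[10., 22.], [16., 9.]]])  # axes: X, Y, Z (made-up counts)

fit = np.ones_like(obs)
for _ in range(50):                        # alternate margin-matching steps
    fit *= (obs.sum(axis=2) / fit.sum(axis=2))[:, :, None]  # match XY margin
    fit *= (obs.sum(axis=1) / fit.sum(axis=1))[:, None, :]  # match XZ margin

g2 = 2.0 * np.sum(obs * np.log(obs / fit))
df = 2  # 8 cells minus 6 parameters of [XY][XZ] for binary variables
print(f"G^2 = {g2:.3f} on {df} df")
```

The tail probability Pr(χ²_df ≥ G²) is then used to rank models of the same size.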
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate a possible non-monotonic pattern in the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During trial conduct, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment at the end of the trial while allocating substantially more patients to efficacious treatments. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop a trial early for futility, avoiding the assignment of an unacceptable number of patients to inferior treatments.
We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies, showing that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate under different true response rates.
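As a rough illustration of continuous monitoring for futility (a minimal Beta-Binomial sketch under a uniform prior, not the proposed piecewise exponential design; all thresholds are hypothetical):

```python
from scipy.stats import beta

def futility_stop(responses: int, enrolled: int,
                  p0: float = 0.2,       # physician-specified lower bound
                  cutoff: float = 0.05,  # futility threshold
                  a: float = 1.0, b: float = 1.0) -> bool:
    """Posterior for the response rate p is Beta(a + responses,
    b + enrolled - responses); stop when Pr(p > p0 | data) < cutoff."""
    post_prob = beta.sf(p0, a + responses, b + enrolled - responses)
    return post_prob < cutoff

# e.g. no responses among the first 20 patients:
print(futility_stop(responses=0, enrolled=20))  # True: Pr(p > 0.2) ~ 0.009
```

The dissertation's design additionally handles pending (late-onset) responses by multiple imputation before such interim decisions.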
Abstract:
'Tis the season of the National Basketball Association finals and the beginning of the Professional Women's Basketball Association. The skills of collaboration and teamwork required to achieve the ballet of basketball are learned by players over a number of years. On school grounds everywhere, children are learning the techniques and skills necessary to play the game of basketball. Recently, I saw a coach on the sidelines screaming at a young player to make her free throws; if she missed, she would have to run laps. This reminded me of traditional services to families, which threaten, or at best demand, a certain level of performance from parents without providing any true "coaching". I often watch our college coach work from a strengths perspective with the team on minute techniques such as the match-up defense and in-bounds plays. This is the approach that family preservation must employ with families, programs, and their communities.
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and to assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification under best- and worst-case assumptions about the missing data. Most variables (17/33 = 52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, estimating correct classification under upper (best-case) and lower (worst-case) bounds, may be more informative than multiple imputation, which provided results similar to complete case analysis.
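The best-case/worst-case logic of the sensitivity analysis can be illustrated with hypothetical counts (not the study's data):

```python
n_total = 200     # patients to be classified
n_complete = 150  # cases with all predictors observed
n_correct = 120   # correctly classified among the complete cases

# Worst case: every case lost to missing predictors would have been wrong.
lower = n_correct / n_total
# Best case: every such case would have been classified correctly.
upper = (n_correct + (n_total - n_complete)) / n_total
print(f"correct classification between {lower:.0%} and {upper:.0%}")
# -> correct classification between 60% and 85%
```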
Abstract:
Downhole temperature and thermal conductivity measurements in core samples recovered during Legs 127 and 128 in the Japan Sea resulted in five accurate determinations of heat flow through the seafloor and accurate estimates of temperature vs. depth over the drilled sections. The heat flows measured at these sites are in excellent agreement with nearby seafloor measurements. Drilling sampled basaltic rocks that form the acoustic basement in the Yamato and Japan basins and provided biostratigraphic and isotopic estimates of the age of these basins. The preliminary age estimates are compared with predicted heat flow values for two different thermal models of the lithosphere. A heat flow determination from the crest of the Okushiri Ridge yielded an anomalously high value of 156 mW/m². This excessive heat flow may have resulted from frictional heating on an active reverse fault that bounds the eastern side of the ridge. Accurate estimates of sedimentation rates and temperatures in the sedimentary section, combined with models of basin formation, provide an opportunity to test thermochemical models of silica diagenesis. The current location of the opal-A/opal-CT transition in the sedimentary section is determined primarily by the thermal history of the layer in which the transition is now found. Comparison of the ages and temperatures of the layer where the opal-A/opal-CT transition is found today is compatible with an activation energy of 14 to 17 kcal/mol.
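The activation-energy estimate presupposes a temperature-dependent transformation rate; a standard Arrhenius form (a generic template, not necessarily the exact kinetic model used in the study) is:

```latex
\[
  k(T) = A \, \exp\!\left(-\frac{E_a}{R T}\right),
\]
% E_a: activation energy (here estimated at 14-17 kcal/mol),
% R: gas constant, T: absolute temperature, A: pre-exponential factor
```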