863 results for Interval Linear Systems


Relevance:

30.00%

Publisher:

Abstract:

This paper deals with an event-bus tour booked by Bollywood film fans. During the tour, the participants visit selected locations of famous Bollywood films at various sites in Switzerland. Moreover, the tour includes stops for lunch and shopping. Each day, up to five buses operate the tour; for organizational reasons, two or more buses cannot stay at the same location simultaneously. The planning problem is how to compute a feasible schedule for each bus such that the total waiting time (primary objective) and the total travel time (secondary objective) are minimized. We formulate this problem as a mixed-integer linear program, and we report on computational results obtained with the Gurobi solver.
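The core scheduling constraint above (no two buses may occupy a location simultaneously) can be illustrated with a toy brute-force search. This is a hedged sketch with assumed uniform travel and visit times and first-come-first-served conflict resolution, not the authors' mixed-integer formulation or their Gurobi model:

```python
import heapq
from itertools import permutations

# Toy illustration of the paper's core constraint -- NOT the authors' model.
# Travel and visit durations are assumed uniform for simplicity.
VISIT, TRAVEL = 2, 1

def simulate(orders):
    """Return total waiting time for one visit order per bus (FCFS conflicts)."""
    free_at = {}                                    # location -> time it frees up
    total_wait = 0
    events = [(TRAVEL, b, 0) for b in range(len(orders))]
    heapq.heapify(events)                           # (arrival, bus, stop index)
    while events:
        arrival, b, i = heapq.heappop(events)
        loc = orders[b][i]
        start = max(arrival, free_at.get(loc, 0))   # wait if location occupied
        total_wait += start - arrival
        free_at[loc] = start + VISIT
        if i + 1 < len(orders[b]):
            heapq.heappush(events, (start + VISIT + TRAVEL, b, i + 1))
    return total_wait

locs = ("A", "B", "C")
best = min(simulate((o1, o2))
           for o1 in permutations(locs) for o2 in permutations(locs))
print(best)  # → 0: staggered visit orders remove all waiting
```

Even this toy shows why an exact solver is attractive: the search space grows factorially with buses and locations, which brute force cannot handle at realistic sizes.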

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE To compare the initial stability and the stability after fatigue of three different locking systems (Synthes(®), Stryker(®) and Medartis(®)) for mandibular fixation and reconstruction. METHOD Standard mandible locking plates with identical profile height (1.5 mm), comparable length, and screws of identical diameter (2.0 mm) were used. Plates were fixed with six screws according to a preparation protocol. Four-point bending tests were then performed on artificial bone material to compare initial stability and failure limit under realistic loading conditions. The plates were loaded using a servo-hydraulic testing machine. The stiffness of the implant/bone construct was calculated by linear regression on the experimental data within the range of applied moments between 2 Nm and 6 Nm. RESULTS No statistical difference in elastic stiffness was observed between the three types of plate. However, differences were observed between the systems in the maximal load supported: the Stryker and Synthes systems were able to support a significantly higher moment. CONCLUSION For clinical application, all systems show good and reliable results. Practical aspects such as handling, possible angulation of screw fixation, and the possibility of screw/plate removal may favour one or the other plating system.
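The stiffness calculation described above (a linear regression restricted to the 2-6 Nm moment window) can be sketched as follows. The data points are synthetic and the function name is hypothetical; only the windowed-regression idea comes from the abstract:

```python
# Hedged sketch: construct stiffness as the inverse slope of a least-squares
# fit restricted to applied moments between 2 Nm and 6 Nm.
def stiffness(moments, deflections, lo=2.0, hi=6.0):
    pts = [(m, d) for m, d in zip(moments, deflections) if lo <= m <= hi]
    n = len(pts)
    sx = sum(m for m, _ in pts)
    sy = sum(d for _, d in pts)
    sxx = sum(m * m for m, _ in pts)
    sxy = sum(m * d for m, d in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # deflection per Nm
    return 1.0 / slope                                 # moment per unit deflection

moments = [1, 2, 3, 4, 5, 6, 7]
deflections = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.6]  # synthetic; linear in 2-6 Nm
print(stiffness(moments, deflections))  # → 2.0
```

Restricting the fit to the elastic window keeps nonlinear behaviour near the failure limit (here mimicked by the last point) from biasing the stiffness estimate.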

Relevance:

30.00%

Publisher:

Abstract:

The general goal of this thesis is to correlate observable properties of organic and metal-organic materials with their ground-state electron density distribution. In the long term, we expect to develop empirical or semi-empirical approaches to predict materials properties from the electron density of their building blocks, thus allowing molecular materials to be rationally engineered from their constituent subunits, such as their functional groups. In particular, we have focused on the linear optical properties of naturally occurring amino acids and their organic and metal-organic derivatives, and on the magnetic properties of metal-organic frameworks. For analysing the optical properties and the magnetic behaviour of the molecular or sub-molecular building blocks in materials, we mostly used the traditional QTAIM partitioning scheme of the molecular or crystalline electron densities; however, we have also investigated a new approach, X-ray Constrained Extremely Localized Molecular Orbitals (XC-ELMO), that can be used in the future to extract the electron densities of crystal subunits. With the purpose of rationally engineering linear optical materials, we have calculated atomic and functional-group polarizabilities of amino acid molecules, their hydrogen-bonded aggregates and their metal-organic frameworks. This has enabled the identification of the functional groups most able to build up large electric susceptibilities in crystals, as well as the quantification of the role played by intermolecular interactions and coordinative bonds in modifying the polarizability of the isolated building blocks. Furthermore, we analysed the dependence of the polarizabilities on the one-electron basis set and the many-electron Hamiltonian, which is useful for selecting the most efficient level of theory for estimating susceptibilities of molecular-based materials.
With the purpose of rationally designing molecular magnetic materials, we have investigated the electron density distributions and the magnetism of two copper(II) pyrazine nitrate metal-organic polymers. High-resolution X-ray diffraction and DFT calculations were used to characterize the magnetic exchange pathways and to establish relationships between the electron densities and the exchange-coupling constants. Moreover, molecular orbital and spin-density analyses were employed to understand the role of different magnetic exchange mechanisms in determining the bulk magnetic behaviour of these materials. As anticipated, we have finally investigated a modified version of the X-ray constrained wavefunction technique, XC-ELMO, which is not only a useful tool for the determination and analysis of experimental electron densities, but also enables one to derive transferable molecular orbitals strictly localized on atoms, bonds or functional groups. In the future, we expect to use XC-ELMOs to predict materials properties of large systems that are currently challenging to calculate from first principles, such as macromolecules or polymers. Here, we point out the advantages, requirements and pitfalls of the technique. This work fulfils, at least partially, the prerequisites for understanding the materials properties of organic and metal-organic materials from the perspective of the electron density distribution of their building blocks. Empirical or semi-empirical evaluation of optical or magnetic properties from a preconceived assembly of building blocks could be extremely important for rationally designing new materials, a field where accurate but expensive first-principles calculations are generally not used. This research could impact the communities working in crystal engineering, supramolecular chemistry and, of course, electron density analysis.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), lateral cephalograms from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4-week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than for the other modalities. LIMITATIONS Dry human skulls without soft tissues were used; therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and on 1.5 m SMD cephalograms was better than that of 3 m SMD cephalograms. These findings demonstrate the accuracy and reliability of linear 3D measurements based on CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
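Accuracy against a gold standard, as used above, reduces to comparing each modality's measurements with the calliper values. A minimal sketch using mean absolute error; all numbers are synthetic and the variable names are mine, not the study's data:

```python
# Hedged sketch: mean absolute error of each modality vs. calliper gold standard.
def mae(measured, gold):
    return sum(abs(m - g) for m, g in zip(measured, gold)) / len(gold)

gold    = [50.0, 42.3, 61.7]    # hypothetical calliper values (mm)
ceph_3m = [51.2, 43.5, 60.1]    # hypothetical 3 m SMD cephalogram values
cbct_3d = [50.2, 42.5, 61.5]    # hypothetical CBCT 3D model values
print(round(mae(ceph_3m, gold), 2), round(mae(cbct_3d, gold), 2))  # → 1.33 0.2
```

A full replication would instead use paired significance tests per landmark and intraclass correlation for observer agreement, as the abstract implies.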

Relevance:

30.00%

Publisher:

Abstract:

Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas. Design. This is a secondary data analysis utilizing the publicly available Toxics Release Inventory maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and childhood cancer rates (ages 0-14 years) by county for the years 1995-2003 were taken from the Texas Cancer Registry, available on the Texas Department of State Health Services website. Setting. This study was limited to the child population of the State of Texas. Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Maps were created using MapInfo Professional version 8.0. Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in the number of facilities and total disposal was observed except for the highest category based on cancer rate quartiles. Linear regression using log transformations of the number of facilities and total disposal to predict cancer rates was performed; however, neither variable was a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were identified. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities. Conclusion.
We have used a unique methodology combining GIS and spatial clustering techniques with existing statistical approaches to examine the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a definitive association was not found, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal and other releases, and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
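The reported odds ratio of 1.58 (CI 1.127, 2.222) is the standard transformation of a logistic regression coefficient. The sketch below recovers that form with β = ln(1.58) and an assumed, illustrative standard error; it is not the study's actual computation:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta = ln(1.58) by construction; se = 0.173 is an assumed value chosen
# only to illustrate how a CI of roughly (1.13, 2.22) arises.
or_, lo, hi = odds_ratio_ci(beta=math.log(1.58), se=0.173)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.58 1.13 2.22
```

Because the predictor is the natural log of the facility count, the odds ratio applies per unit increase in log(facilities), not per additional facility.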

Relevance:

30.00%

Publisher:

Abstract:

Introduction Gene expression is an important process whereby the genotype controls an individual cell’s phenotype. However, even genetically identical cells display a variety of phenotypes, which may be attributed to differences in their environment. Yet, even after controlling for these two factors, individual phenotypes still diverge due to noisy gene expression. Synthetic gene expression systems allow investigators to isolate, control, and measure the effects of noise on cell phenotypes. I used mathematical and computational methods to design, study, and predict the behavior of synthetic gene expression systems in S. cerevisiae, which were affected by noise. Methods I created probabilistic biochemical reaction models from known behaviors of the tetR and rtTA genes, gene products, and their gene architectures. I then simplified these models to account for essential behaviors of gene expression systems. Finally, I used these models to predict behaviors of modified gene expression systems, which were experimentally verified. Results Cell growth, which is often ignored when formulating chemical kinetics models, was essential for understanding gene expression behavior. Models incorporating growth effects were used to explain unexpected reductions in gene expression noise, design a set of gene expression systems with “linear” dose-responses, and quantify the speed with which cells explored their fitness landscapes due to noisy gene expression. Conclusions Models incorporating noisy gene expression and cell division were necessary to design, understand, and predict the behaviors of synthetic gene expression systems. The methods and models developed here will allow investigators to more efficiently design new gene expression systems, and infer gene expression properties of TetR based systems.
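The "noisy gene expression" central to the thesis is commonly modeled with the Gillespie stochastic simulation algorithm for probabilistic biochemical reactions. The following is a generic birth-death sketch with assumed rate constants, not the thesis's tetR/rtTA models (which also include growth and division effects):

```python
import random

# Hedged sketch: Gillespie simulation of a birth-death expression process.
# Rates and parameter names are illustrative assumptions.
def gillespie(k_make=10.0, k_deg=1.0, t_end=50.0, seed=1):
    rng = random.Random(seed)
    t, n, trace = 0.0, 0, []
    while t < t_end:
        rates = [k_make, k_deg * n]          # production, degradation
        total = sum(rates)
        t += rng.expovariate(total)          # exponential time to next event
        n += 1 if rng.random() < rates[0] / total else -1
        trace.append(n)
    return trace

trace = gillespie()
mean = sum(trace) / len(trace)
print(round(mean, 1))  # fluctuates around k_make / k_deg = 10
```

Even this minimal model shows the key point of the abstract: identical parameters produce fluctuating, run-to-run-variable molecule counts, i.e. phenotypic noise without genetic or environmental differences.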

Relevance:

30.00%

Publisher:

Abstract:

A population-based ecological study was conducted to identify areas with a high number of TB and HIV new diagnoses in Harris County, Texas from 2009 through 2010, by applying Geographic Information Systems and exploratory mapping to determine whether distinct spatial patterns exist at the census tract level. As of 2010, Texas had the fourth-highest occurrence of new diagnoses of HIV/AIDS and TB.[31] The Texas Department of State Health Services (DSHS) has identified HIV-infected persons as a high-risk population for TB in Harris County.[29] To explore this relationship further, GIS was utilized to identify spatial trends. The specific aims were to map TB and HIV new-diagnosis rates and to spatially identify hotspots and high-value clusters at the census tract level. The potential association between HIV and TB was analyzed using spatial autocorrelation and linear regression. The spatial statistics used were ArcGIS 9.3 Hotspot Analysis and Cluster and Outlier Analysis; spatial autocorrelation was assessed with Global Moran's I. Hotspots and clusters of TB and HIV are located within the same spatial areas of Harris County. The areas with high-value clusters and hotspots for each infection are located within the central downtown area of the city of Houston. There is an additional hotspot of TB located directly north of I-10 and a hotspot of HIV northeast of Interstate 610. The Moran's I index of 0.17 (Z score = 3.6 standard deviations, p-value = 0.01) suggests that TB is statistically clustered, with a less than 1% chance that this pattern is due to random chance. However, there was a high number of features with no neighbors, which may invalidate the statistical properties of the test. Linear regression indicated that HIV new-diagnosis rates (β=−0.006, SE=0.147, p=0.970) and census tracts (β=0.000, SE=0.000, p=0.866) were not significant predictors of TB new-diagnosis rates.
Mapping products indicate that census tracts with overlapping hotspots and high-value clusters of TB and HIV should be a targeted focus for prevention efforts, particularly within central Harris County. While the statistical association was not confirmed, the evidence suggests that there is a relationship between HIV and TB within this two-year period.
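Global Moran's I, the clustering statistic used above, has a simple closed form: I = (n/S0) · Σᵢⱼ wᵢⱼ(xᵢ−x̄)(xⱼ−x̄) / Σᵢ(xᵢ−x̄)². A sketch on a hypothetical four-region example with a binary contiguity matrix (not the study's census-tract data):

```python
# Hedged sketch: Global Moran's I from rates x and spatial weights w.
def morans_i(x, w):
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    s0 = sum(sum(row) for row in w)          # total weight
    return (n / s0) * (num / den)

rates = [80.0, 75.0, 20.0, 15.0]             # hypothetical rates, clustered
w = [[0, 1, 0, 0],                           # regions 0-1 adjacent, 2-3 adjacent
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(rates, w), 3))          # → 0.986 (near +1: strong clustering)
```

The "features with no neighbors" caveat in the abstract corresponds to all-zero rows in w: those regions contribute nothing to the numerator, which distorts the statistic's null distribution.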

Relevance:

30.00%

Publisher:

Abstract:

Ever since its discovery, Eocene Thermal Maximum 2 (ETM2; ~53.7 Ma) has been considered one of the "little brothers" of the Paleocene-Eocene Thermal Maximum (PETM; ~56 Ma), as it displays similar characteristics including abrupt warming, ocean acidification, and biotic shifts. One of the remaining key questions is what effect these lesser climate perturbations had on ocean circulation and ventilation and, ultimately, on biotic disruptions. Here we characterize ETM2 sections of the NE Atlantic (Deep Sea Drilling Project Sites 401 and 550) using multispecies benthic foraminiferal stable isotopes, grain size analysis, XRF core scanning, and carbonate content. The magnitude of the carbon isotope excursion (0.85-1.10 per mil) and of bottom water warming (2-2.5°C) during ETM2 appears slightly smaller than in South Atlantic records. Comparison of the lateral δ13C gradient between the North and South Atlantic reveals that a transient circulation switch took place during ETM2, a pattern similar to that observed for the PETM. New grain size data and published faunal data support this hypothesis by indicating a reduction in deepwater current velocity. Following ETM2, we record a distinct intensification of bottom water currents influencing Atlantic carbonate accumulation and biotic communities, while a dramatic and persistent clay reduction hints at a weakening of the regional hydrological cycle. Our findings highlight the similarities and differences between the PETM and ETM2. Moreover, the heterogeneity of hyperthermal expression emphasizes the need to characterize each hyperthermal event and its background conditions specifically, to minimize artifacts in global climate and carbonate burial models for the early Paleogene.

Relevance:

30.00%

Publisher:

Abstract:

A 560-meter-thick sequence of Cenomanian through Pleistocene sediments cored at DSDP Site 462 in the Nauru Basin overlies a 500-meter-thick complex unit of altered basalt flows, diabase sills, and thin intercalated volcaniclastic sediments. The Upper Cretaceous and Cenozoic sediments contain a high proportion of calcareous fossils, although the site has apparently been below the calcite compensation depth (CCD) from the late Mesozoic to the Pleistocene. This fact and the contemporaneous fluctuations of the calcite and opal accumulation rates suggest an irregular influx of displaced pelagic sediments from the shallow margins of the basin to its center, resulting in unusually high overall sedimentation rates for such a deep (5190 m) site. Shallow-water benthic fossils and planktonic foraminifers both occur as reworked materials, but usually are not found in the same intervals of the sediment section. We interpret this as recording separate erosional interludes in the shallow-water and intermediate-water regimes. Lower and upper Cenozoic hiatuses also are believed to have resulted from mid-water events. High accumulation rates of volcanogenic material during Santonian time suggest a corresponding significant volcanic episode. The coincidence of increased carbonate accumulation rates during the Campanian and displacement of shallow-water fossils during the late Campanian-early Maestrichtian with the volcanic event implies that this early event resulted in formation of the island chains around the Nauru Basin, which then served as platforms for initial carbonate deposition.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi-Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used during the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on weighting of parameters and an iterative algorithm applying the extended Kalman filter, based on the same parameter-weighting idea. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy-controller-based linear quadratic regulator is proposed in order to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and remarkable performance of the proposed method locally and globally in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm. In this paper, we prove that these algorithms converge very fast, thereby making them very practical to use.
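A T-S fuzzy model's output is a membership-weighted blend of local linear consequents, which is the structure the identification methods above estimate. The sketch below shows that blending with two overlapping Gaussian memberships; all parameter values are illustrative, and this is not the paper's identification algorithm:

```python
import math

# Hedged sketch: evaluate a Takagi-Sugeno model with Gaussian memberships.
def ts_output(x, rules):
    """rules: list of (center, width, a, b); local consequent y = a*x + b."""
    weights = [math.exp(-((x - c) / w) ** 2) for c, w, _, _ in rules]
    locals_ = [a * x + b for _, _, a, b in rules]
    s = sum(weights)
    return sum(wi * yi for wi, yi in zip(weights, locals_)) / s

# Two overlapping rules, each carrying a local linear model.
rules = [(-1.0, 1.0, 2.0, 0.0),   # around x=-1: y ≈ 2x
         (1.0, 1.0, -1.0, 3.0)]   # around x=+1: y ≈ -x + 3
print(round(ts_output(-1.0, rules), 2), ts_output(1.0, rules))  # → -1.89 2.0
```

The overlap is exactly why identification is hard: at x = −1 the second rule still contributes (weight e⁻⁴), so the blended output is not simply the nearest local model, and the local parameters cannot be read off independently.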

Relevance:

30.00%

Publisher:

Abstract:

Because the metro network market is very cost sensitive, directly modulated schemes appear attractive. In this paper a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of optical communication system design software; a detailed study of the modulating current shape (exponential, sine and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion than conventional Gaussian pulses. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient-chirp-dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that the system performance depends strongly on the chosen DML-fiber pair.
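The three drive-current shapes compared above can be written as simple normalised pulses over one bit slot. The parameterisation below (widths, decay constant) is an assumption for illustration only; the study's actual current waveforms and laser model live inside the simulation software:

```python
import math

# Hedged sketch: normalised drive-current pulse shapes over one bit slot [0, T].
def pulse(shape, t, T=1.0):
    if shape == "sine":
        return math.sin(math.pi * t / T)
    if shape == "gaussian":
        return math.exp(-((t - T / 2) ** 2) / (2 * (T / 6) ** 2))
    if shape == "exponential":
        return math.exp(-5 * abs(t - T / 2) / T)   # assumed decay constant
    raise ValueError(shape)

samples = {s: [round(pulse(s, i / 10), 3) for i in range(11)]
           for s in ("sine", "gaussian", "exponential")}
print(samples["sine"][5], samples["gaussian"][5])  # → 1.0 1.0 (mid-slot peaks)
```

The shapes differ mainly in their rise/fall steepness, which is what couples to laser chirp and hence to dispersion tolerance in the study.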

Relevance:

30.00%

Publisher:

Abstract:

Linear regression is a technique widely used in digital signal processing. It consists of finding the linear function that best fits a given set of samples. This paper proposes different hardware architectures for implementing the linear regression method on FPGAs, especially targeting area-restricted systems. The proposed architectures save area at the cost of constraining the input signal lengths to a set of fixed values. We have implemented the proposed scheme in an Automatic Modulation Classifier, meeting the hard real-time constraints this kind of system has.
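The area-saving idea above (fix the sample count N so regression constants become hardware constants) can be sketched in software. The factory name and the uniform-sampling assumption are mine, not the paper's:

```python
# Hedged sketch: when N is fixed, every sum over the sample indices
# 0..N-1 is a constant, so runtime work reduces to multiply-accumulates
# over y alone -- the property the FPGA architectures exploit.
def make_fixed_n_regressor(n):
    sx = n * (n - 1) // 2                    # sum of indices (constant)
    sxx = (n - 1) * n * (2 * n - 1) // 6     # sum of squared indices (constant)
    denom = n * sxx - sx * sx                # regression denominator (constant)

    def fit(y):
        sy = sum(y)
        sxy = sum(i * yi for i, yi in enumerate(y))
        slope = (n * sxy - sx * sy) / denom
        intercept = (sy - slope * sx) / n
        return slope, intercept

    return fit

fit8 = make_fixed_n_regressor(8)
print(fit8([1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]))  # → (2.0, 1.0)
```

In hardware the division by the constant `denom` can likewise be replaced by a precomputed reciprocal multiply, which is presumably part of the area saving.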