983 results for Minkowski sum
Abstract:
Objectives: To determine whether adjusting the denominator of the common hospital antibiotic use measurement unit (defined daily doses/100 bed-days) by including age-adjusted comorbidity score (100 bed-days/age-adjusted comorbidity score) would result in more accurate and meaningful assessment of hospital antibiotic use.
Methods: The association between the monthly sum of age-adjusted comorbidity and monthly antibiotic use was measured using time-series analysis (January 2008 to June 2012). For the purposes of conducting internal benchmarking, two antibiotic usage datasets were constructed, i.e. 2004-07 (first study period) and 2008-11 (second study period). Monthly antibiotic use was normalized per 100 bed-days and per 100 bed-days/age-adjusted comorbidity score.
Results: Results showed that antibiotic use had a significant positive relationship with the sum of age-adjusted comorbidity scores (P = 0.0004). The results also showed that there was a negative relationship between antibiotic use and (i) alcohol-based hand rub use (P = 0.0370) and (ii) clinical pharmacist activity (P = 0.0031). Normalizing antibiotic use per 100 bed-days contributed to a comparative usage rate of 1.31, i.e. the average antibiotic use during the second period was 31% higher than during the first period. However, normalizing antibiotic use per 100 bed-days per age-adjusted comorbidity score resulted in a comparative usage rate of 0.98, i.e. the average antibiotic use was 2% lower in the second study period. Importantly, the latter comparative usage rate is independent of differences in patient density and case mix characteristics between the two studied populations.
Conclusions: The proposed modified antibiotic measure provides an innovative approach to compare variations in antibiotic prescribing while taking account of patient case mix effects.
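The normalization and period comparison described above amount to simple arithmetic; the sketch below is illustrative only, with made-up monthly figures and hypothetical function names, and is not the study's actual calculation. It shows how antibiotic use might be expressed per 100 bed-days per age-adjusted comorbidity score and how a comparative usage rate between two periods could be derived.

```python
# Hypothetical sketch (not the authors' code): normalizing monthly antibiotic use
# per 100 bed-days and per 100 bed-days/age-adjusted comorbidity score, then
# comparing two study periods. Variable names and figures are illustrative only.

def normalize_use(ddd, bed_days, comorbidity_sum=None):
    """Return DDD per 100 bed-days, optionally also divided by the
    monthly sum of age-adjusted comorbidity scores."""
    per_100_bed_days = ddd / (bed_days / 100.0)
    if comorbidity_sum is None:
        return per_100_bed_days
    return per_100_bed_days / comorbidity_sum

def comparative_usage_rate(monthly_use_period2, monthly_use_period1):
    """Ratio of mean normalized use in the second period to the first."""
    mean2 = sum(monthly_use_period2) / len(monthly_use_period2)
    mean1 = sum(monthly_use_period1) / len(monthly_use_period1)
    return mean2 / mean1

# Example with made-up monthly figures:
period1 = [normalize_use(ddd=1200, bed_days=3000, comorbidity_sum=450),
           normalize_use(ddd=1100, bed_days=2900, comorbidity_sum=430)]
period2 = [normalize_use(ddd=1500, bed_days=3100, comorbidity_sum=600),
           normalize_use(ddd=1450, bed_days=3050, comorbidity_sum=590)]
print(round(comparative_usage_rate(period2, period1), 2))
```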
Abstract:
Samples of fine-grained channel bed sediment and overbank floodplain deposits were collected along the main channels of the Rivers Aire (and its main tributary, the River Calder) and Swale, in Yorkshire, UK, in order to investigate downstream changes in the storage and deposition of heavy metals (Cr, Cu, Pb, Zn), total P and the sum of selected PCB congeners, and to estimate the total storage of these contaminants within the main channels and floodplains of these river systems. Downstream trends in the contaminant content of the <63 μm fraction of channel bed and floodplain sediment in the study rivers are controlled mainly by the location of the main sources of the contaminants, which varies between rivers. In the Rivers Aire and Calder, the contaminant content of the <63 μm fraction of channel bed and floodplain sediment generally increases in a downstream direction, reflecting the location of the main urban and industrialized areas in the middle and lower parts of the basin. In the River Swale, the concentrations of most of the contaminants examined are approximately constant along the length of the river, due to the relatively unpolluted nature of this river. However, the Pb and Zn content of fine channel bed sediment decreases downstream, due to the location of historic metal mines in the headwaters of this river, and the effect of downstream dilution with uncontaminated sediment. The magnitude and spatial variation of contaminant storage and deposition on channel beds and floodplains are also controlled by the amount of <63 μm sediment stored on the channel bed and deposited on the floodplain during overbank events. Consequently, contaminant deposition and storage are strongly influenced by the surface area of the floodplain and channel bed. Contaminant storage on the channel beds of the study rivers is, therefore, generally greatest in the middle and lower reaches of the rivers, since channel width increases downstream. Comparisons of the estimates of total storage of specific contaminants on the channel beds of the main channel systems of the study rivers with the annual contaminant flux at the catchment outlets indicate that channel storage represents <3% of the outlet flux and is, therefore, of limited importance in regulating that flux. Similar comparisons between the annual deposition flux of specific contaminants to the floodplains of the study rivers and the annual contaminant flux at the catchment outlet emphasise the potential importance of floodplain deposition as a conveyance loss. In the case of the River Aire the floodplain deposition flux is equivalent to between ca. 2% (PCBs) and 36% (Pb) of the outlet flux. With the exception of PCBs, for which the value is ≅0, the equivalent values for the River Swale range between 18% (P) and 95% (Pb). The study emphasises that knowledge of the fine-grained sediment delivery system operating in a river basin is an essential prerequisite for understanding the transport and storage of sediment-associated contaminants in river systems and that conveyance losses associated with floodplain deposition exert an important control on downstream contaminant fluxes and the fate of such contaminants.
Abstract:
BACKGROUND: Offspring of women with diabetes mellitus (DM) during pregnancy have a risk of developing metabolic disease in adulthood greater than that conferred by genetics alone. The mechanisms responsible are unknown, but likely involve fetal exposure to the in utero milieu, including glucose and circulating adipokines. The purpose of this study was to assess the impact of maternal DM on fetal adipokines and anthropometry in infants of Hispanic and Native American women.
METHODS: We conducted a prospective study of offspring of mothers with normoglycemic (Con-O; n = 79) or type 2 or gestational DM (DM-O; n = 45) pregnancies. Infant anthropometrics were measured at birth and 1 month of age. Cord leptin, high-molecular-weight adiponectin (HMWA), pigment epithelium-derived factor (PEDF) and C-peptide were measured by ELISA. Differences between groups were assessed using the Generalized Linear Model framework. Correlations were calculated as standardized regression coefficients and adjusted for significant covariates.
RESULTS: DM-O infants were heavier at birth than Con-O (3.7 ± 0.6 vs. 3.4 ± 0.4 kg, p = 0.024), but the sum of skinfolds (SSF) was not different. At 1 month, there was no difference in weight, SSF, % body fat or postnatal growth between groups. Leptin was higher in DM-O (20.1 ± 14.9 vs. 9.5 ± 9.9 ng/ml in Con-O, p < 0.0001). Leptin was positively associated with birth weight (p = 0.0007) and SSF (p = 0.002) in Con-O and with maternal hemoglobin A1c in both groups (Con-O, p = 0.023; DM-O, p = 0.006). PEDF was positively associated with birth weight in all infants (p = 0.004). Leptin was positively associated with PEDF in both groups, with a stronger correlation in DM-O (p = 0.009). At 1 month, HMWA was positively associated with body weight (p = 0.004), SSF (p = 0.025) and % body fat (p = 0.004) across the cohort.
CONCLUSIONS: Maternal DM results in fetal hyperleptinemia independent of adiposity. HMWA appears to influence postnatal growth. Thus, in utero exposure to DM imparts hormonal differences on infants even without aberrant growth.
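As a rough illustration of the group comparison described in the methods above, a generalized linear model with a Gaussian family could be fit in Python with statsmodels; the data frame, column names and covariate set below are invented for the sketch and are not the study's actual analysis.

```python
# Illustrative sketch only: a generalized linear model comparison of cord leptin
# between DM-exposed (DM-O) and control (Con-O) infants, adjusting for covariates.
# Column names, values and covariates are assumptions, not the study's data or code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "leptin":       [22.0, 18.5, 8.9, 10.2, 25.3, 7.4],   # ng/ml
    "group":        ["DM-O", "DM-O", "Con-O", "Con-O", "DM-O", "Con-O"],
    "birth_weight": [3.8, 3.6, 3.3, 3.5, 3.9, 3.2],        # kg
    "maternal_a1c": [6.4, 6.1, 5.2, 5.4, 6.8, 5.1],        # %
})

# Gaussian GLM: leptin ~ group, adjusted for birth weight and maternal HbA1c
model = smf.glm("leptin ~ C(group) + birth_weight + maternal_a1c",
                data=df, family=sm.families.Gaussian()).fit()
print(model.summary())
```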
Abstract:
A 94 GHz waveguide Rotman lens is described which can be used to implement an amplitude comparison monopulse RADAR. In transmit mode, adjacent dual beam ports are excited with equal amplitude and phase to form a sum radiation pattern, and in receive mode, the outputs of the beam port pairs are combined using magic tees to provide a sum and a difference signal which can be used to calculate an angular error estimate for target acquisition and tracking. This approach provides an amplitude comparison monopulse system which can be scanned in azimuth and which has a low component count, with no requirement for phase shift circuitry in the array feed lines, making it suitable for mm-wave frequencies. A 12 input (beam ports), 12 output (array ports) lens is designed using CST Microwave Studio, and the predicted results are presented.
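The sum/difference processing described above follows the standard amplitude comparison monopulse principle; the short sketch below uses illustrative beam voltages and an assumed monopulse slope constant to show how an angular error estimate could be formed from two adjacent beam outputs.

```python
# Minimal sketch of the amplitude comparison monopulse principle described above:
# two adjacent beam outputs are combined into sum and difference channels and the
# angle error is estimated from the monopulse ratio. The beam voltages and the
# slope constant k_m below are made-up illustrative values, not measured data.
import numpy as np

def monopulse_error(v_beam_a, v_beam_b, k_m=1.6):
    """Estimate angular error (in beamwidths) from two adjacent beam voltages."""
    sigma = v_beam_a + v_beam_b          # sum channel (as formed by the magic tee)
    delta = v_beam_a - v_beam_b          # difference channel
    ratio = np.real(delta / sigma)       # monopulse ratio
    return ratio / k_m                   # error estimate, scaled by the slope

# Target slightly off boresight: beam A receives a stronger echo than beam B.
v_a = 1.00 * np.exp(1j * 0.1)
v_b = 0.85 * np.exp(1j * 0.1)
print(f"estimated off-boresight error ≈ {monopulse_error(v_a, v_b):.3f} beamwidths")
```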
Abstract:
A periodic monitoring of the pavement condition facilitates a cost-effective distribution of the resources available for maintenance of the road infrastructure network. The task can be accurately carried out using profilometers, but such an approach is generally expensive. This paper presents a method to collect information on the road profile via accelerometers mounted in a fleet of non-specialist vehicles, such as police cars, that are in use for other purposes. It proposes an optimisation algorithm, based on Cross Entropy theory, to predict road irregularities. The Cross Entropy algorithm estimates the height of the road irregularities from vehicle accelerations at each point in time. To test the algorithm, the crossing of a half-car roll model is simulated over a range of road profiles to obtain accelerations of the vehicle sprung and unsprung masses. Then, the simulated vehicle accelerations are used as input in an iterative procedure that searches for the best solution to the inverse problem of finding road irregularities. In each iteration, a sample of road profiles is generated and an objective function defined as the sum of squares of differences between the ‘measured’ and predicted accelerations is minimized until convergence is reached. The reconstructed profile is classified according to ISO and IRI recommendations and compared to its original class. Results demonstrate that the approach is feasible and that a good estimate of the short-wavelength features of the road profile can be detected, despite the variability between the vehicles used to collect the data.
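A minimal sketch of the cross-entropy search loop described above follows; a simple moving-average filter stands in for the half-car vehicle model, and the profile, noise level and algorithm parameters are illustrative assumptions rather than the paper's actual implementation.

```python
# Toy sketch of the cross-entropy optimisation described above: sample candidate
# road profiles, score them by the sum of squared differences between 'measured'
# and predicted accelerations, keep the elite samples, and refit the sampler.
# A moving-average filter replaces the half-car roll model used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def vehicle_response(profile):
    """Stand-in forward model: maps road heights to 'accelerations'."""
    kernel = np.array([0.2, 0.6, 0.2])
    return np.convolve(profile, kernel, mode="same")

# "Measured" accelerations from a hidden true profile (illustrative values)
true_profile = 0.02 * np.sin(np.linspace(0, 4 * np.pi, 50))
measured = vehicle_response(true_profile) + rng.normal(0, 1e-4, 50)

n_points, n_samples, n_elite = 50, 200, 20
mu, sigma = np.zeros(n_points), 0.05 * np.ones(n_points)
for _ in range(60):
    samples = rng.normal(mu, sigma, size=(n_samples, n_points))
    predicted = np.array([vehicle_response(s) for s in samples])
    # Objective: sum of squares of differences between measured and predicted accelerations
    costs = ((predicted - measured) ** 2).sum(axis=1)
    elite = samples[np.argsort(costs)[:n_elite]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    if sigma.max() < 1e-4:            # convergence: the sampler has collapsed
        break

print("RMS reconstruction error:", np.sqrt(np.mean((mu - true_profile) ** 2)))
```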
Abstract:
In this paper, we study the achievable ergodic sum-rate of multiuser multiple-input multiple-output downlink systems in Rician fading channels. We first derive a lower bound on the average signal-to-leakage-and-noise ratio by using Mullen's inequality, and then use it to analyze the effect of channel mean information on the achievable ergodic sum-rate. A novel statistical-eigenmode space-division multiple-access (SE-SDMA) downlink transmission scheme is then proposed. For this scheme, we derive an exact analytical closed-form expression for the achievable ergodic rate and present tractable tight upper and lower bounds. Based on our analysis, we gain valuable insights into the effect of system parameters, such as the number of transmit antennas, the signal-to-noise ratio (SNR) and the Rician K-factor, on the system sum-rate. Results show that the sum-rate converges to a saturation value in the high-SNR regime and tends to a lower limit for the low Rician K-factor case. In addition, we compare the achievable ergodic sum-rate between SE-SDMA and zero-forcing beamforming with perfect channel state information at the base station. Our results reveal that the rate gap tends to zero in the high Rician K-factor regime. Finally, numerical results are presented to validate our analysis.
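For intuition, the zero-forcing baseline with perfect CSI mentioned above can be evaluated by Monte Carlo simulation over a Rician channel; the sketch below is a generic illustration (antenna counts, user angles, K-factor and SNR values are arbitrary choices) and is not the paper's SE-SDMA analysis.

```python
# Illustrative Monte Carlo estimate of an ergodic sum-rate under Rician fading with
# zero-forcing beamforming and perfect CSI (the baseline compared against above).
# Antenna counts, user angles, K-factor and SNR are arbitrary sketch parameters.
import numpy as np

rng = np.random.default_rng(1)

def ergodic_sum_rate_zf(n_tx=8, n_users=4, k_factor=5.0, snr_db=10.0, trials=2000):
    """Monte Carlo average of the ZF sum-rate over a Rician MU-MISO downlink."""
    snr = 10.0 ** (snr_db / 10.0)
    # Deterministic LOS part: ULA steering vectors at assumed distinct user angles
    angles = np.linspace(-np.pi / 3, np.pi / 3, n_users)
    m = np.arange(n_tx)
    h_los = np.exp(1j * np.pi * np.outer(np.sin(angles), m))   # (n_users, n_tx)
    rates = np.empty(trials)
    for t in range(trials):
        h_nlos = (rng.standard_normal((n_users, n_tx))
                  + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2.0)
        h = (np.sqrt(k_factor / (k_factor + 1.0)) * h_los
             + np.sqrt(1.0 / (k_factor + 1.0)) * h_nlos)
        w = np.linalg.pinv(h)                          # zero-forcing directions
        w /= np.linalg.norm(w, axis=0, keepdims=True)  # unit-norm beams
        effective_gain = np.abs(np.einsum("km,mk->k", h, w)) ** 2
        sinr = (snr / n_users) * effective_gain        # interference is nulled by ZF
        rates[t] = np.sum(np.log2(1.0 + sinr))
    return rates.mean()

# Vary snr_db and k_factor to explore the trends discussed above.
print(f"{ergodic_sum_rate_zf(snr_db=10):.2f} bit/s/Hz at 10 dB")
print(f"{ergodic_sum_rate_zf(snr_db=20):.2f} bit/s/Hz at 20 dB")
```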
Abstract:
Single component geochemical maps are the most basic representation of spatial elemental distributions and are commonly used in environmental and exploration geochemistry. However, the compositional nature of geochemical data imposes several limitations on how the data should be presented. The problems relate to the constant sum problem (closure), and the inherently multivariate relative information conveyed by compositional data. Well known is, for instance, the tendency of all heavy metals to show lower values in soils with significant contributions of diluting elements (e.g., the quartz dilution effect); or the contrary effect, apparent enrichment in many elements due to removal of potassium during weathering. The validity of classical single component maps is thus investigated, and reasonable alternatives that honour the compositional character of geochemical concentrations are presented. The first recommended method relies on knowledge-driven log-ratios, chosen to highlight certain geochemical relations or to filter known artefacts (e.g. dilution with SiO2 or volatiles). This is similar to the classical normalisation approach to a single element. The second approach uses so-called log-contrasts, which employ suitable statistical methods (such as classification techniques, regression analysis, principal component analysis, clustering of variables, etc.) to extract potentially interesting geochemical summaries. The caution from this work is that if a compositional approach is not used, it becomes difficult to guarantee that any identified pattern, trend or anomaly is not an artefact of the constant sum constraint. In summary, the authors recommend a chain of enquiry that involves searching for the appropriate statistical method that can answer the required geological or geochemical question whilst maintaining the integrity of the compositional nature of the data. The required log-ratio transformations should be applied, followed by the chosen statistical method. Interpreting the results may require a closer working relationship between statisticians, data analysts and geochemists.
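A minimal sketch of the first recommended approach, a knowledge-driven log-ratio that filters the quartz-dilution artefact, is shown below; the element concentrations are invented purely for illustration.

```python
# Minimal illustration of a knowledge-driven log-ratio as recommended above:
# mapping ln(Cu/SiO2) instead of raw Cu removes the apparent depletion caused by
# quartz dilution. The three-sample composition below is entirely made up; the
# different units only shift the log-ratio by a constant and do not affect patterns.
import numpy as np
import pandas as pd

samples = pd.DataFrame({
    "SiO2": [55.0, 75.0, 62.0],      # wt%  (high SiO2 = quartz-diluted sample)
    "Cu":   [120.0, 80.0, 110.0],    # mg/kg
}, index=["soil_A", "soil_B", "soil_C"])

raw_cu = samples["Cu"]                                 # classical single-component map value
logratio_cu = np.log(samples["Cu"] / samples["SiO2"])  # knowledge-driven log-ratio

print(pd.DataFrame({"raw Cu": raw_cu, "ln(Cu/SiO2)": logratio_cu}))
```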
Abstract:
Possibilistic answer set programming (PASP) unites answer set programming (ASP) and possibilistic logic (PL) by associating certainty values with rules. The resulting framework allows non-monotonic reasoning and reasoning under uncertainty to be combined in a single formalism. While PASP has been well studied for possibilistic definite and possibilistic normal programs, we argue that the current semantics of possibilistic disjunctive programs are not entirely satisfactory. The problem is twofold. First, the treatment of negation-as-failure in existing approaches follows an all-or-nothing scheme that is hard to match with the graded notion of proof underlying PASP. Second, we advocate that the notion of disjunction can be interpreted in several ways. In particular, in addition to the view of ordinary ASP, where disjunctions are used to induce a non-deterministic choice, the possibilistic setting naturally leads to a more epistemic view of disjunction. In this paper, we propose a semantics for possibilistic disjunctive programs, discussing both views on disjunction. Extending our earlier work, we interpret such programs as sets of constraints on possibility distributions, whose least specific solutions correspond to answer sets.
Abstract:
Statistics are regularly used to make some form of comparison between trace evidence or to deploy the exclusionary principle (Morgan and Bull, 2007) in forensic investigations. Trace evidence routinely comprises the results of particle size, chemical or modal analyses and as such constitutes compositional data. The issue is that compositional data, including percentages, parts per million, etc., only carry relative information. This may be problematic where a comparison of percentages and other constrained/closed data is deemed a statistically valid and appropriate way to present trace evidence in a court of law. Notwithstanding an awareness of the existence of the constant sum problem since the seminal works of Pearson (1896) and Chayes (1960), and the introduction of the application of log-ratio techniques (Aitchison, 1986; Pawlowsky-Glahn and Egozcue, 2001; Pawlowsky-Glahn and Buccianti, 2011; Tolosana-Delgado and van den Boogaart, 2013), the problem that a constant sum destroys the potential independence of variances and covariances required for correlation and regression analysis and empirical multivariate methods (principal component analysis, cluster analysis, discriminant analysis, canonical correlation) is all too often not acknowledged in the statistical treatment of trace evidence. Yet the need for a robust treatment of forensic trace evidence analyses is obvious. This research examines the issues and potential pitfalls for forensic investigators if the constant sum constraint is ignored in the analysis and presentation of forensic trace evidence. Forensic case studies involving particle size and mineral analyses as trace evidence are used to demonstrate the use of a compositional data approach using a centred log-ratio (clr) transformation and multivariate statistical analyses.
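The compositional workflow advocated above can be sketched as follows: closure, a centred log-ratio (clr) transformation, and then a standard multivariate method (PCA here). The mineral compositions are hypothetical and the code is illustrative rather than the analysis used in the case studies.

```python
# Sketch of the compositional workflow advocated above: close the data, apply a
# centred log-ratio (clr) transformation, then run an empirical multivariate method
# (PCA here). The four-part mineral compositions are invented for illustration.
import numpy as np

def clr(compositions):
    """Centred log-ratio transform of row-wise compositions (all parts > 0)."""
    x = np.asarray(compositions, dtype=float)
    x = x / x.sum(axis=1, keepdims=True)          # closure to a constant sum of 1
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Hypothetical modal analyses (quartz, feldspar, clay, carbonate) for five traces
traces = np.array([
    [62, 20, 12,  6],
    [55, 25, 14,  6],
    [40, 18, 30, 12],
    [38, 20, 32, 10],
    [60, 22, 11,  7],
], dtype=float)

z = clr(traces)
z_centred = z - z.mean(axis=0)
# PCA via singular value decomposition of the clr-transformed data
u, s, vt = np.linalg.svd(z_centred, full_matrices=False)
scores = u * s                                     # sample scores on principal components
explained = s**2 / np.sum(s**2)
print("variance explained:", np.round(explained, 3))
print("PC1 scores:", np.round(scores[:, 0], 3))
```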
Abstract:
We consider a multi-pair two-way amplify-and-forward relaying system with a massive antenna array at the relay and estimated channel state information, assuming maximum-ratio combining/transmission processing. Closed-form approximations of the sum spectral efficiency are developed and simple analytical power scaling laws are presented, which reveal a fundamental trade-off between the transmit powers of each user/the relay and of each pilot symbol. Finally, the optimal power allocation problem is studied.
Abstract:
We study multicarrier multiuser multiple-input multiple-output (MU-MIMO) systems, in which the base station employs an asymptotically large number of antennas. We analyze a fully correlated channel matrix and provide a beam domain channel model, where the channel gains are independent of sub-carriers. For this model, we first derive a closed-form upper bound on the achievable ergodic sum-rate, based on which we develop asymptotically necessary and sufficient conditions for optimal downlink transmission that require only statistical channel state information at the transmitter. Furthermore, we propose a beam division multiple access (BDMA) transmission scheme that simultaneously serves multiple users via different beams. By selecting users within non-overlapping beams, the MU-MIMO channels can be equivalently decomposed into multiple single-user MIMO channels; this scheme significantly reduces the overhead of channel estimation as well as the processing complexity at transceivers. For BDMA transmission, we work out an optimal pilot design criterion to minimize the mean square error (MSE) and provide optimal pilot sequences by utilizing Zadoff-Chu sequences. Simulations demonstrate the near-optimal performance of BDMA transmission and the advantages of the proposed pilot sequences.
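For reference, Zadoff-Chu sequences of odd length have constant amplitude and ideal cyclic autocorrelation, which is what makes them attractive as pilots; the short sketch below (with an arbitrary root and length) illustrates these two properties and is not the pilot design of the paper.

```python
# Small sketch of the Zadoff-Chu pilot sequences mentioned above: for an odd length
# N and a root u coprime with N, the sequence has constant amplitude and zero cyclic
# autocorrelation at all non-zero lags. The length and root below are arbitrary.
import numpy as np

def zadoff_chu(u, n_zc):
    """Root-u Zadoff-Chu sequence of odd length n_zc (u coprime with n_zc)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

zc = zadoff_chu(u=5, n_zc=63)

# Constant envelope
print("max |x[n]| - min |x[n]| =", np.ptp(np.abs(zc)))    # ~0

# Cyclic autocorrelation: an impulse at lag 0, (near) zero elsewhere
autocorr = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))
print("peak at lag 0:", abs(autocorr[0]))                 # equals the length, 63
print("largest sidelobe:", np.max(np.abs(autocorr[1:])))  # ~0 (numerical precision)
```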
Abstract:
The bacterial community composition and biomass abundance from a depositional mud belt in the western Irish Sea and regional sands were investigated by phospholipid ester-linked fatty acid profiling, denaturing gradient gel electrophoresis and barcoded pyrosequencing of 16S rRNA genes. The study area varied by water depth (12-111 m), organic carbon content (0.09-1.57% TOC), grain size, hydrographic regime (well-mixed vs. stratified), and water column phytodetrital input (represented by algal polyunsaturated PLFA). The relative abundance of bacterial-derived PLFA (sum of methyl-branched, cyclopropyl and odd-carbon number PLFA) was positively correlated with fine-grained sediment, and was highest in the depositional mud belt. A strong association between bacterial biomass and eukaryote primary production was suggested based on observed positive correlations with total nitrogen and algal polyunsaturated fatty acids. In addition, 16S rRNA genes affiliated to the classes Clostridia and Flavobacteria represented a major proportion of total 16S rRNA gene sequences. This suggests that benthic bacterial communities are also important degraders of phytodetrital organic matter and closely coupled to water column productivity in the western Irish Sea.
Abstract:
Gel aspiration-ejection (GAE) has recently been introduced as an effective technique for the rapid production of injectable dense collagen (IDC) gel scaffolds with tunable collagen fibrillar densities (CFDs) and microstructures. Herein, a GAE system was applied for the advanced production and delivery of IDC and IDC-Bioglass® (IDC-BG) hybrid gel scaffolds for potential bone tissue engineering applications. The efficacy of GAE in generating mineralizable IDC-BG gels (from an initial 75-25 collagen-BG ratio) produced through needle gauge numbers 8G (3.4 mm diameter and 6 wt% CFD) and 14G (1.6 mm diameter and 14 wt% CFD) was investigated. Second harmonic generation (SHG) imaging of as-made gels revealed an increase in collagen fibril alignment with needle gauge number. In vitro mineralization of IDC-BG gels was confirmed, where carbonated hydroxyapatite was detected as early as day 1 in simulated body fluid and progressively increased up to day 14. In vivo mineralization of, and host response to, acellular IDC and IDC-BG gel scaffolds were further investigated following subcutaneous injection in adult rats. Mineralization, neovascularization and cell infiltration into the scaffolds were enhanced by the addition of BG, and at day 21 post-injection there was evidence of remodelling of granulation tissue into woven bone-like tissue in IDC-BG. SHG imaging of explanted scaffolds indicated collagen fibril remodelling through cell infiltration and mineralization over time. In sum, the results suggest that IDC-BG hybrid gels have osteoinductive properties and potentially offer a novel therapeutic approach for procedures requiring the injectable delivery of a malleable and dynamic bone graft that mineralizes under physiological conditions.
Abstract:
Where either the seller or buyer of landed property fails to complete a contract to sell land, the non-breaching party has a right to seek specific performance of the contract. This remedy would compel the party in default to perform the contract on pain of being held in contempt of court if the court's order is not obeyed. The defaulting party would not be able to satisfy its obligations under the law by paying a sum of money as damages for breach of contract. This paper considers the impecuniosity defence to specific performance as recognised by courts in Northern Ireland, the Republic of Ireland, Australia and New Zealand. Where the buyer demonstrates that he or she simply cannot raise the funds to buy the property, specific performance will not be decreed and the court will make an award of damages for breach of contract, measured by the difference between the contract price and the market price of the property at the time of default. The paper considers the nature and parameters of this defence and how it differs (if at all) from the alternative defence of extreme hardship. The paper addresses the question of whether it might be better to move to a position where sellers of land in all cases no longer enjoy a presumption of specific performance but have to demonstrate that the alternative remedy of damages is clearly inadequate. If this should be so, the paper goes on to consider whether abolition of the presumption in favour of specific performance for sellers should lead to abolition of the presumption of specific performance for buyers, as is the position in Canada following the Supreme Court's decision in Semelhago v Paramadevan [1996] 2 SCR 415.
Abstract:
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.