Abstract:
Background: As part of the second-generation surveillance system for HIV/AIDS in Switzerland, repeated cross-sectional surveys were conducted in 1993, 1994, 1996, 2000, 2006 and 2011 among attenders of all low-threshold facilities (LTFs) with needle exchange programmes and/or supervised drug consumption rooms for injection or inhalation. The number of syringes distributed to injectors has also been measured annually since 2000. Distribution in other settings, such as pharmacies, is also monitored nationally. Methods: Periodic surveys of LTFs have been conducted using an interviewer/self-administered questionnaire structured along four themes: socio-demographic characteristics, drug consumption, risk/preventive behaviour and health. Analysis is restricted to attenders who had injected drugs during their lifetime (IDUs). Pearson's chi-square test and trend analysis were conducted on annual aggregated data. Trend significance was assessed using Stata's non-parametric trend test, nptrend. Results: The median age of IDUs increased from 26 years in 1993 to 40 in 2011; most are men (78%). The total yearly number of syringes distributed by LTFs has decreased by 44% in 10 years. Use of cocaine has increased (Table 1). Injection, regular use of heroin and borrowing of syringes/needles have decreased, while sharing of other material remains stable. There are fewer new injectors; more IDUs report substitution treatment. Most attenders had ever been tested for HIV (90% in 1993, 94% in 2011). Reported prevalence of HIV remained stable at around 10%; that of HCV decreased from 62% in 2000 to 42% in 2011. Conclusions: Overall, findings indicate a decrease in injection as a means of drug consumption in this population. This interpretation is supported by data from other sources, such as a national decrease in distribution from other delivery points.
Switzerland's behavioural surveillance system is sustainable and allows the HIV epidemic to be monitored among this hard-to-reach population, providing information for planning and evaluation.
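As a rough illustration of the kind of trend analysis described above, the sketch below applies a Mann–Kendall test to a hypothetical series of annual proportions. Stata's nptrend implements a related rank-based trend test; the data here are invented, not the survey's.

```python
from itertools import combinations
import math

def mann_kendall(values):
    """Mann-Kendall trend test on an ordered sequence (e.g. annual proportions).

    Returns the S statistic and a normal-approximation two-sided p-value
    (no-ties variance formula)."""
    n = len(values)
    s = sum((b > a) - (b < a) for a, b in combinations(values, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var_s)  # continuity correction
    p = 1 - math.erf(abs(z) / math.sqrt(2))         # = 2 * (1 - Phi(|z|))
    return s, p

# Hypothetical share of LTF attenders reporting recent injection, by survey year
proportions = [0.62, 0.58, 0.55, 0.47, 0.40, 0.33]
s, p = mann_kendall(proportions)   # s < 0 indicates a downward trend
```

A negative S with a small p-value is read as a significant downward trend, matching the kind of decline in injection the abstract reports.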
Abstract:
Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements obtained through characteristic point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that shows high correlation with the Complior® characteristic point D1 (CC = 0.99), increases noise robustness, and reduces the number of heartbeats required to obtain reliable PAT measurements by a factor of five.
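A minimal sketch of the idea behind a parametric PAT estimator: fit a hyperbolic-tangent model to the rising (anacrotic) edge of a simulated noisy pulse and read PAT off a fitted parameter. The model form, parameters and synthetic signal are illustrative assumptions, not the paper's exact TANH estimator.

```python
import numpy as np
from scipy.optimize import curve_fit

def anacrotic_tanh(t, amp, t0, tau):
    """Parametric model of the rising (anacrotic) edge of a pressure pulse."""
    return 0.5 * amp * (1.0 + np.tanh((t - t0) / tau))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 500)                  # seconds
true_t0 = 0.22                                  # hypothetical pulse arrival time
clean = anacrotic_tanh(t, 1.0, true_t0, 0.02)
noisy = clean + rng.normal(0.0, 0.05, t.size)   # additive measurement noise

# Fit the whole rising edge instead of locating a single characteristic point
popt, _ = curve_fit(anacrotic_tanh, t, noisy, p0=[1.0, 0.25, 0.05])
pat_estimate = popt[1]                          # fitted t0 plays the role of PAT
```

Because the fit pools information from the entire rising edge, the estimate is far less sensitive to noise than identifying a single waveform point.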
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the total score, which each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline for when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
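To make the compositional nature of ipsative data concrete, the sketch below shows hypothetical forced-choice scores with a constant row total, and a centred log-ratio (clr) transform, a standard compositional-data device; all numbers are invented.

```python
import numpy as np

# Hypothetical ipsative data: 4 respondents each distribute 30 forced-choice
# points over 3 scales, so every row has the same total score.
ipsative = np.array([
    [18.0,  7.0,  5.0],
    [10.0, 12.0,  8.0],
    [ 6.0,  9.0, 15.0],
    [14.0, 11.0,  5.0],
])

def clr(x):
    """Centred log-ratio transform: maps compositions into real space,
    where conventional parametric techniques can be applied."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

totals = ipsative.sum(axis=1)   # the constant-sum constraint of ipsative scores
z = clr(ipsative)               # each clr row sums to zero by construction
```

The constant row total is exactly what makes raw ipsative scores statistically awkward; the log-ratio view is one way compositional data analysis sidesteps it.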
Abstract:
INTRODUCTION Functional imaging studies of addiction following protracted abstinence have not been systematically conducted to look at the associations between severity of use of different drugs and brain dysfunction. Findings from such studies may be relevant to implement specific interventions for treatment. The aim of this study was to examine the association between resting-state regional brain metabolism (measured with 18F-fluorodeoxyglucose positron emission tomography, FDG-PET) and the severity of use of cocaine, heroin, alcohol, MDMA and cannabis in a sample of polysubstance users with prolonged abstinence from all drugs used. METHODS Our sample consisted of 49 polysubstance users enrolled in residential treatment. We conducted correlation analyses between estimates of use of cocaine, heroin, alcohol, MDMA and cannabis and brain metabolism (BM), using Statistical Parametric Mapping voxel-based (VB) whole-brain analyses. In all correlation analyses conducted for each of the drugs we controlled for the co-abuse of the other drugs used. RESULTS The analysis showed significant negative correlations between severity of heroin, alcohol, MDMA and cannabis use and BM in the dorsolateral prefrontal cortex (DLPFC) and temporal cortex. Alcohol use was further associated with lower metabolism in the frontal premotor cortex and putamen, and stimulant use with the parietal cortex. CONCLUSIONS Duration of use of the different drugs negatively correlated with overlapping regions in the DLPFC, whereas severity of cocaine, heroin and alcohol use selectively impacted parietal, temporal, and frontal-premotor/basal ganglia regions respectively. Knowledge of these associations could be useful in clinical practice, since different brain alterations have been associated with different patterns of execution that may affect the rehabilitation of these patients.
Abstract:
INTRODUCTION No definitive data are available regarding the value of switching to an alternative TNF antagonist in rheumatoid arthritis patients who fail to respond to the first one. The aim of this study was to evaluate treatment response in a clinical setting, based on HAQ improvement and EULAR response criteria, in RA patients who were switched to a second or a third TNF antagonist due to failure with the first one. METHODS This was an observational, prospective study of a cohort of 417 RA patients treated with TNF antagonists in three university hospitals in Spain between January 1999 and December 2005. A database was created at the participating centres, with well-defined operational instructions. The main outcome variables were analyzed using parametric or non-parametric tests depending on the level of measurement and distribution of each variable. RESULTS Mean (+/- SD) DAS-28 on starting the first, second and third TNF antagonist was 5.9 (+/- 2.0), 5.1 (+/- 1.5) and 6.1 (+/- 1.1). At the end of follow-up, it decreased to 3.3 (+/- 1.6; Delta = -2.6; p < 0.0001), 4.2 (+/- 1.5; Delta = -1.1; p = 0.0001) and 5.4 (+/- 1.7; Delta = -0.7; p = 0.06). For the first TNF antagonist, the DAS-28-based EULAR response level was good in 42% and moderate in 33% of patients. The second TNF antagonist yielded a good response in 20% and no response in 53% of patients, while the third one yielded a good response in 28% and no response in 72%. Mean baseline HAQ on starting the first, second and third TNF antagonist was 1.61, 1.52 and 1.87, respectively. At the end of follow-up, it decreased to 1.12 (Delta = -0.49; p < 0.0001), 1.31 (Delta = -0.21; p = 0.004) and 1.75 (Delta = -0.12; p = 0.1), respectively. Sixty-four percent of patients had a clinically important improvement in HAQ (defined as an improvement of at least 0.22) with the first TNF antagonist and 46% with the second. CONCLUSION A clinically significant effect size was seen in less than half of RA patients cycling to a second TNF antagonist.
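The Methods' rule of choosing parametric or non-parametric tests by the distribution of each variable can be sketched as follows, on hypothetical paired DAS-28 scores (all numbers invented; the study's actual decision rules are not reproduced).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical paired DAS-28 scores: baseline and end of follow-up, 60 patients
baseline = rng.normal(5.9, 2.0, 60)
followup = baseline - rng.normal(2.6, 1.0, 60)   # built-in mean improvement
diff = followup - baseline

# Choose the test according to whether the paired differences look normal
if stats.shapiro(diff).pvalue > 0.05:
    stat, p = stats.ttest_rel(followup, baseline)    # parametric
else:
    stat, p = stats.wilcoxon(followup, baseline)     # non-parametric
```

With a mean improvement this large relative to its spread, either branch yields a highly significant p-value, as in the study's first-antagonist result.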
Abstract:
INTRODUCTION The human host immune response following infection with the new variant of A/H1N1 pandemic influenza virus (nvH1N1) is poorly understood. Here we use systemic cytokine and antibody levels to evaluate differences in early immune response between mild and severe patients infected with nvH1N1. METHODS We profiled 29 cytokines and chemokines and evaluated haemagglutination inhibition activity as quantitative and qualitative measurements of host immune responses in serum obtained during the first five days after symptom onset, in two cohorts of nvH1N1-infected patients. Severe patients required hospitalization (n = 20) due to respiratory insufficiency (10 of them were admitted to the intensive care unit), while mild patients had exclusively flu-like symptoms (n = 15). A group of healthy donors was included as control (n = 15). Differences in levels of mediators between groups were assessed using the non-parametric Mann–Whitney U test. Association between variables was determined by calculating the Spearman correlation coefficient. Viral load was measured in serum using real-time PCR targeting the neuraminidase gene. RESULTS Increased levels of innate-immunity mediators (IP-10, MCP-1, MIP-1beta), and the absence of anti-nvH1N1 antibodies, characterized the early response to nvH1N1 infection in both hospitalized and mild patients. High systemic levels of type-II interferon (IFN-gamma) and also of a group of mediators involved in the development of T-helper 17 (IL-8, IL-9, IL-17, IL-6) and T-helper 1 (TNF-alpha, IL-15, IL-12p70) responses were exclusively found in hospitalized patients. IL-15, IL-12p70 and IL-6 constituted a hallmark of critical illness in our study. A significant inverse association was found between IL-6, IL-8 and PaO2 in critical patients.
CONCLUSIONS While infection with the nvH1N1 induces a typical innate response in both mild and severe patients, severe disease with respiratory involvement is characterized by early secretion of Th17 and Th1 cytokines usually associated with cell mediated immunity but also commonly linked to the pathogenesis of autoimmune/inflammatory diseases. The exact role of Th1 and Th17 mediators in the evolution of nvH1N1 mild and severe disease merits further investigation as to the detrimental or beneficial role these cytokines play in severe illness.
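A minimal sketch of the two analyses named in the Methods, a Mann–Whitney U comparison between groups and a Spearman correlation, on hypothetical mediator data (the values are constructed for illustration, not taken from the study; pao2 is deliberately built to fall as IL-6 rises).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical serum IL-6 levels (pg/mL) in hospitalized vs mild patients
hospitalized = rng.lognormal(3.0, 0.4, 20)
mild = rng.lognormal(1.5, 0.4, 15)

# Non-parametric between-group comparison of mediator levels
u_stat, p_group = stats.mannwhitneyu(hospitalized, mild, alternative='two-sided')

# Spearman correlation for the mediator/oxygenation association;
# pao2 is constructed here to decrease monotonically as IL-6 rises.
il6 = np.sort(hospitalized)
pao2 = np.sort(rng.normal(70.0, 10.0, 20))[::-1]
rho, p_rho = stats.spearmanr(il6, pao2)
```

A negative rho here mirrors the inverse IL-6/PaO2 association reported in critical patients.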
Abstract:
In this paper, the robustness of parametric systems is analyzed using a new approach to interval mathematics called Modal Interval Analysis. Modal intervals are an extension of classic intervals that recovers some of the properties required of a numerical system. Modal Interval Analysis not only simplifies the computation of interval functions but also allows a semantic interpretation of their results. Necessary, sufficient and, in some cases, necessary and sufficient conditions for robust performance are presented.
Abstract:
OBJECTIVES: To assess the extent to which stage at diagnosis and adherence to treatment guidelines may explain the persistent differences in colorectal cancer survival between the USA and Europe. DESIGN: A high-resolution study using detailed clinical data on Dukes' stage, diagnostic procedures, treatment and follow-up, collected directly from medical records by trained abstractors under a single protocol, with standardised quality control and central statistical analysis. SETTING AND PARTICIPANTS: 21 population-based registries in seven US states and nine European countries provided data for random samples comprising 12 523 adults (15-99 years) diagnosed with colorectal cancer during 1996-1998. OUTCOME MEASURES: Logistic regression models were used to compare adherence to 'standard care' in the USA and Europe. Net survival and excess risk of death were estimated with flexible parametric models. RESULTS: The proportion of Dukes' A and B tumours was similar in the USA and Europe, while Dukes' C tumours were more frequent in the USA (38% vs 21%) and Dukes' D tumours more frequent in Europe (22% vs 10%). Resection with curative intent was more frequent in the USA (85% vs 75%). Elderly patients (75-99 years) were 70-90% less likely to receive radiotherapy and chemotherapy. Age-standardised 5-year net survival was similar in the USA (58%) and Northern and Western Europe (54-56%) and lowest in Eastern Europe (42%). The mean excess hazard up to 5 years after diagnosis was highest in Eastern Europe, especially among elderly patients and those with Dukes' D tumours. CONCLUSIONS: The wide differences in colorectal cancer survival between Europe and the USA in the late 1990s are probably attributable to earlier stage at diagnosis and more extensive use of surgery and adjuvant treatment in the USA. Elderly patients with colorectal cancer received surgery, chemotherapy or radiotherapy less often than younger patients, despite evidence that they could also have benefited.
Abstract:
Our project aims to analyze the relevance of economic factors (mainly income and other socioeconomic characteristics of Spanish households, and market prices) to the prevalence of obesity in Spain, and to establish to what extent market intervention prices are effective in reducing obesity and improving the quality of the diet, and under what circumstances. In relation to the existing literature worldwide, this project is the first attempt in Spain to build an overall picture of the effectiveness of public policies on food consumption and the quality of the diet, on the one hand, and on the prevalence of obesity on the other. The project consists of four main parts. The first part is a critical review of the literature on the economic approach to obesity prevalence, diet quality and public intervention policies. Although another important body of the obesity literature deals with physical exercise, in this paper we limit our attention to studies related to food consumption, in keeping with the scope of our study and because many published reviews already cover physical exercise and its effect on obesity prevalence. The second part consists of a parametric and non-parametric analysis of the role of economic factors in obesity prevalence in Spain. The third part seeks to overcome the shortcomings of many diet quality indices developed during recent decades, such as the Healthy Eating Index, the Diet Quality Index, the Healthy Diet Indicator and the Mediterranean Diet Score, through the development of a new obesity-specific diet quality index. The last part of our project concentrates on assessing the effectiveness of market intervention policies to improve the healthiness of the Spanish diet, using the new Exact Affine Stone Index (EASI) demand system.
Abstract:
BACKGROUND Type 2 diabetes mellitus (T2DM) is an emerging risk factor for cognitive impairment. Whether this impairment is a direct effect of this metabolic disorder on brain function, a consequence of vascular disease, or both, remains unknown. Structural and functional neuroimaging studies in patients with T2DM could help to elucidate this question. OBJECTIVE We designed a cross-sectional study comparing 25 T2DM patients with 25 age- and gender-matched healthy control participants. Clinical information, APOE genotype, lipid and glucose analysis, structural cerebral magnetic resonance imaging including voxel-based morphometry, and F-18 fluorodeoxyglucose positron emission tomography were obtained in all subjects. METHODS Gray matter densities and metabolic differences between groups were analyzed using statistical parametric mapping. In addition to comparing the neuroimaging profiles of both groups, we correlated neuroimaging findings with HbA1c levels, duration of T2DM, and insulin resistance measurement (HOMA-IR) in the diabetic patients group. RESULTS Patients with T2DM presented reduced gray matter densities and reduced cerebral glucose metabolism in several fronto-temporal brain regions after controlling for various vascular risk factors. Furthermore, within the T2DM group, longer disease duration, and higher HbA1c levels and HOMA-IR were associated with lower gray matter density and reduced cerebral glucose metabolism in fronto-temporal regions. CONCLUSION In agreement with previous reports, our findings indicate that T2DM leads to structural and metabolic abnormalities in fronto-temporal areas. Furthermore, they suggest that these abnormalities are not entirely explained by the role of T2DM as a cardiovascular risk factor.
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero for the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the limit of detection of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper values and molybdenum values, but while copper will always be above the limit of detection, many of the molybdenum values will be "rounded zeros".
So, we take the lower quartile of the real molybdenum values and establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.
Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
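A sketch of the proposed imputation on assumed synthetic data: fit a log-log regression of molybdenum on copper using the lower quartile of the detected molybdenum values, then predict each rounded zero from its own copper value.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic porphyry-style data: Cu (ppm) always detected; Mo correlates
# with Cu but is reported as 0 (a "rounded zero") below the detection limit.
cu = rng.lognormal(6.0, 0.6, 200)
mo = 0.01 * cu * rng.lognormal(0.0, 0.2, 200)
detection_limit = 2.0
observed_mo = np.where(mo >= detection_limit, mo, 0.0)

detected = observed_mo > 0
# Regression built on the lower quartile of the detected Mo values, the part
# of the distribution closest to the censored region (log-log fit).
q1 = np.quantile(observed_mo[detected], 0.25)
low = detected & (observed_mo <= q1)
slope, intercept = np.polyfit(np.log(cu[low]), np.log(observed_mo[low]), 1)

# Each rounded zero gets a Mo estimate driven by its own Cu value,
# so the imputed values are not a single fixed constant.
imputed = observed_mo.copy()
imputed[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
```

Unlike replacement by a constant fraction of the detection limit, the imputed values vary with copper, which is the advantage the abstract highlights.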
Abstract:
Initial topography and inherited structural discontinuities are known to play a dominant role in rock slope stability. Previous 2-D physical modeling results demonstrated that, even if few preexisting fractures are activated/propagated during gravitational failure, all of those heterogeneities have a great influence on the mobilized volume and its kinematics. The question we address in the present study is whether such a result is also observed in 3-D. As in the previous 2-D models, we examine a geologically stable model configuration based upon the well-documented landslide at Randa, Switzerland. The 3-D models consisted of a homogeneous material in which several fracture zones were introduced in order to study simplified but realistic configurations of discontinuities (i.e. based on a natural example rather than a parametric study). Results showed that the type of gravitational failure (deep-seated landslide or sequential failure) and the resulting evolution of slope morphology arise from the interplay of the initial topography and the inherited preexisting fractures (orientation and density). The three main results are: i) the initial topography exerts a strong control on gravitational slope failure; indeed, in each tested configuration (even in the isotropic one without fractures) the model is affected by a rock slide; ii) the number of simulated fracture sets greatly influences the volume mobilized and its kinematics; and iii) the failure zone involved in the 1991 event is smaller than that produced by the analog modeling. This may indicate that the zone mobilized in 1991 is potentially only a part of a larger deep-seated landslide and/or of a wider deep-seated gravitational slope deformation.
Abstract:
Copula theory was used to analyze contagion between the BRIC (Brazil, Russia, India and China) and European Union stock markets and the U.S. equity market. The market indexes used for the period between January 1, 2005 and February 27, 2010 are: MXBRIC (BRIC), MXEU (European Union) and MXUS (United States). This article evaluated the adequacy of the main copulas found in the financial literature using the log-likelihood, Akaike information and Bayesian information criteria. The article provides a groundbreaking study in the area of contagion through its use of conditional copulas, allowing the increase in correlation between indexes to be calculated with a non-parametric approach. The conditional Symmetrized Joe-Clayton copula provided the best fit to the pairs of returns considered. Results indicate evidence of a contagion effect in both markets, the European Union and the BRIC members, at a 5% significance level. Furthermore, there is also evidence that the contagion of the U.S. financial crisis was more pronounced in the European Union than in the BRIC markets, at a 5% significance level. Therefore, stock portfolios formed by equities from the BRIC countries were able to offer greater protection during the subprime crisis. These results are aligned with recent papers that report an increase in correlation between stock markets, especially in bear markets.
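As a simplified illustration of likelihood-based copula fitting and AIC comparison, the sketch below fits a plain Clayton copula (not the study's conditional Symmetrized Joe-Clayton) to invented, positively dependent "returns" via rank-based pseudo-observations.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_loglik(theta, u, v):
    """Log-likelihood of the Clayton copula at pseudo-observations u, v."""
    s = u**-theta + v**-theta - 1.0
    return np.sum(np.log1p(theta)
                  - (theta + 1.0) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(s))

rng = np.random.default_rng(4)
n = 500
# Invented dependent "returns": a shared factor plus idiosyncratic noise
z = rng.normal(size=n)
x = z + rng.normal(scale=0.8, size=n)
y = z + rng.normal(scale=0.8, size=n)

# Rank-based pseudo-observations in (0, 1), as in semiparametric copula fitting
u = rankdata(x) / (n + 1)
v = rankdata(y) / (n + 1)

res = minimize_scalar(lambda th: -clayton_loglik(th, u, v),
                      bounds=(0.01, 20.0), method='bounded')
theta_hat = res.x
log_lik = -res.fun
aic = 2 * 1 - 2 * log_lik   # one parameter; lower AIC indicates a better fit
```

Repeating this for each candidate copula and comparing AIC/BIC is the model-selection step the abstract describes.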
Abstract:
This paper describes a maximum likelihood method using historical weather data to estimate a parametric model of daily precipitation and maximum and minimum air temperatures. Parameter estimates are reported for Brookings, SD, and Boone, IA, to illustrate the procedure. The use of this parametric model to generate stochastic time series of daily weather is then summarized. A soil temperature model is described that determines daily average, maximum, and minimum soil temperatures based on air temperatures and precipitation, following a lagged process due to soil heat storage and other factors.
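A minimal sketch of a stochastic daily weather generator in the spirit described above, with invented parameters: precipitation occurrence as a first-order Markov chain, amounts as exponential draws, and maximum temperature shifted on wet days (the paper's actual parametric model and its soil-temperature component are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented parameters for the sketch:
P_WET_GIVEN_DRY = 0.25   # P(wet today | dry yesterday)
P_WET_GIVEN_WET = 0.55   # P(wet today | wet yesterday)
MEAN_PRECIP_MM = 6.0     # mean precipitation on wet days
TMAX_DRY, TMAX_WET, TMAX_SD = 24.0, 19.0, 3.0   # deg C

def simulate(n_days):
    """Generate (precip_mm, tmax_c) pairs: Markov-chain wet/dry occurrence,
    exponential amounts, and normal max temperature shifted on wet days."""
    wet = False
    days = []
    for _ in range(n_days):
        p_wet = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
        wet = rng.random() < p_wet
        precip = rng.exponential(MEAN_PRECIP_MM) if wet else 0.0
        tmax = rng.normal(TMAX_WET if wet else TMAX_DRY, TMAX_SD)
        days.append((precip, tmax))
    return days

series = simulate(3650)   # ten synthetic years of daily weather
wet_fraction = sum(1 for p, _ in series if p > 0) / len(series)
```

In a full generator the parameters would be estimated from historical records by maximum likelihood, as the paper does for Brookings, SD, and Boone, IA.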
Abstract:
Aim Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study.
Location World-wide.
Methods Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years.
Results Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information; and (2) in the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node.
Main conclusions By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.