938 results for "Non-parametric regression methods"


Relevance: 100.00%

Abstract:

The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation should be computed for a particular generated model. SVM is particularly well suited to classification problems in high-dimensional spaces, which it tackles in a non-parametric and non-linear way. The SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost-function classification and therefore provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response due to the combination of parameter values.
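To make the workflow concrete, here is a minimal sketch (not the authors' code) of SVM-guided screening of a hypothetical 2-D parameter space: a cheap surrogate cost labels sampled models as good or bad fits, an RBF-kernel SVM learns the split, and only candidates near the decision boundary, where the classification is most uncertain, would be passed to the expensive forward simulation. The cost function, thresholds, and dimensionality are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical 2-D model parameter space; the "cost" is a cheap
# stand-in for an expensive forward-model misfit (assumption).
params = rng.uniform(-1, 1, size=(400, 2))
cost = (params ** 2).sum(axis=1)
good = (cost < 0.5).astype(int)            # label for "good-fitting" models

# Non-parametric, non-linear classifier over the parameter space.
clf = SVC(kernel="rbf", gamma=2.0).fit(params, good)

# Candidates near the decision boundary are the most uncertain;
# only those would be sent to the expensive forward simulation.
cand = rng.uniform(-1, 1, size=(1000, 2))
margin = np.abs(clf.decision_function(cand))
uncertain = cand[margin < 0.2]
print(len(uncertain), "of", len(cand), "candidates flagged for simulation")
```

In an iterative scheme, the flagged candidates would be simulated, relabelled, and used to refit the classifier, progressively sharpening the boundary of the good-fitting region.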

Relevance: 100.00%

Abstract:

The sample size, the types of variables, the measurement format, and the construction of instruments to collect valid and reliable data must all be considered during the research process. In the social and health sciences, and more specifically in nursing, data-collection instruments are usually composed of latent variables, i.e. variables that cannot be directly observed. This underlines the importance of deciding how to measure study variables (using an ordinal scale or a Likert or Likert-type scale). Psychometric scales are examples of instruments that are affected by the type of variables that comprise them, which can cause problems with measurement and statistical analysis (parametric versus non-parametric tests). Hence, investigators using these variables must rely on assumptions grounded in simulation studies, or on recommendations based on scientific evidence, in order to make the best decisions.
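As a hedged illustration of the parametric-versus-non-parametric choice the abstract raises, the sketch below simulates 5-point Likert responses for two hypothetical groups and runs both a t-test (which treats the scores as interval data) and a Mann-Whitney U test (which uses only the ordering). The response distributions are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 5-point Likert responses from two groups (ordinal data).
group_a = rng.choice([1, 2, 3, 4, 5], size=60, p=[0.05, 0.10, 0.25, 0.35, 0.25])
group_b = rng.choice([1, 2, 3, 4, 5], size=60, p=[0.20, 0.30, 0.25, 0.15, 0.10])

# Parametric choice: the t-test treats the scores as interval data.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric choice: Mann-Whitney U uses only the rank ordering.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p = {t_p:.4g}, Mann-Whitney p = {u_p:.4g}")
```

With clearly separated groups both tests agree; the choice matters most for small samples and strongly skewed ordinal responses, which is exactly the situation the simulation studies cited above address.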

Relevance: 100.00%

Abstract:

Cases of fraud occur frequently in the world market and involve professionals from several fields, including accounting. Accounting scandals, especially the most notorious ones, such as those involving Enron and WorldCom, have raised concern about the ethical conduct of accounting professionals. As a consequence, there is greater demand for transparency and reliability in the information these professionals provide. This concern aims, above all, to preserve the confidence of companies, investors, suppliers, and society at large in the ethical responsibility of the accountant, which has been tarnished by involvement in detected frauds. This study therefore aimed to examine the ethical conduct of accountants when, in the exercise of their profession, they are confronted with issues related to fraud. To that end, it considered factors that may influence an individual's ethical decision-making process, as represented in the decision-making model developed by Alves, and factors that may motivate an individual to commit fraud, as represented in the model developed by Cressey. To answer the guiding question of this research, descriptive and statistical analyses of the data were performed. For the descriptive analysis, frequency tables were produced; for the statistical analysis, Spearman's non-parametric test was used. The results showed that most of the accountants in the sample recognize the moral issue embedded in the scenarios, disagree with the acts of the agents in each scenario, and classify those acts as serious or very serious. The research also revealed that these professionals lean towards the teleological current, since the intention to act is influenced mainly by factors such as opportunity, rationalization and, above all, pressure.
Some individual factors also influenced the ethical position of the accountants interviewed in this research.
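For readers unfamiliar with the Spearman test mentioned above, here is a minimal sketch on invented ordinal data (hypothetical 1-5 ratings of perceived pressure and intention to act, not the study's data). Spearman's rho ranks both variables, which makes it suitable for the ordinal scales used in this kind of questionnaire.

```python
import numpy as np
from scipy import stats

# Hypothetical ordinal survey answers from the same respondents:
# perceived pressure (1-5) and intention to act unethically (1-5).
pressure  = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 4, 2, 1, 3])
intention = np.array([1, 1, 2, 2, 3, 4, 4, 5, 4, 5, 5, 3, 2, 1, 3])

# Spearman's rho works on ranks, so it captures monotonic
# (not necessarily linear) association between ordinal variables.
rho, p_value = stats.spearmanr(pressure, intention)
print(f"rho = {rho:.3f}, p = {p_value:.4g}")
```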

Relevance: 100.00%

Abstract:

AIM: To describe the incidence of retinocytomas, their variability at presentation, and their growth patterns both before and after regression. METHODS: The medical notes of the 525 patients of the Jules-Gonin Eye Hospital Retinoblastoma Clinic between 1964 and 2008 were reviewed, and the charts of 36 patients with retinocytomas and/or phthisis bulbi were selected. RESULTS: The proportion of patients with retinocytomas and/or phthisis bulbi was 3.2%. The mean age at diagnosis was 28.7±17 years. Five tumours presented a cystic pattern (5.8%). Evidence of aggressive exophytic disease prior to spontaneous regression was documented in two eyes, and of invasive endophytic disease (regressed vitreous seeding or internal limiting membrane disruption) in three eyes. Twenty patients were followed, with a mean follow-up of 44±60 months. Tumour growth was observed in 16% of cases, benign cystic enlargement in 4%, and malignant transformation in 12%. CONCLUSION: This large study substantially expands the published features of retinocytoma by describing the cystic nature of some retinocytomas as well as the clinical characteristics of the endophytic and exophytic pre-regression growth patterns. The authors report two different patterns of reactivation: benign cystic enlargement, and malignant transformation with or without cystic growth. The higher-than-previously-reported frequency of growth and the possible life-threatening complications impose close lifetime follow-up of retinocytoma patients.

Relevance: 100.00%

Abstract:

How much would output increase if underdeveloped economies were to increase their levels of schooling? We contribute to the development accounting literature by describing a non-parametric upper bound on the increase in output that can be generated by more schooling. The advantage of our approach is that the upper bound is valid for any number of schooling levels with arbitrary patterns of substitution/complementarity. Another advantage is that the upper bound is robust to certain forms of endogenous technology response to changes in schooling. We also quantify the upper bound for all economies with the necessary data, compare our results with the standard development accounting approach, and provide an update on the results using the standard approach for a large sample of countries.

Relevance: 100.00%

Abstract:

BACKGROUND: Different studies have shown circadian variation of ischemic burden among patients with ST-Elevation Myocardial Infarction (STEMI), but with controversial results. The aim of this study was to analyze circadian variation of myocardial infarction size and in-hospital mortality in a large multicenter registry. METHODS: This retrospective, registry-based study was based on data from AMIS Plus, a large multicenter Swiss registry of patients who suffered myocardial infarction between 1999 and 2013. Peak creatine kinase (CK) was used as a proxy measure for myocardial infarction size. Associations between peak CK, in-hospital mortality, and the time of day at symptom onset were modelled using polynomial-harmonic regression methods. RESULTS: 6,223 STEMI patients were admitted to 82 acute-care hospitals in Switzerland and treated with primary angioplasty within six hours of symptom onset. Only the 24-hour harmonic was significantly associated with peak CK (p = 0.0001). The maximum average peak CK value (2,315 U/L) was for patients with symptom onset at 23:00, whereas the minimum average (2,017 U/L) was for onset at 11:00. The amplitude of variation was 298 U/L. In addition, no correlation was observed between ischemic time and circadian peak CK variation. Of the 6,223 patients, 223 (3.58%) died during index hospitalization. Remarkably, only the 24-hour harmonic was significantly associated with in-hospital mortality. The risk of death from STEMI was highest for patients with symptom onset at 00:00 and lowest for those with onset at 12:00. DISCUSSION: As part of this first large study of STEMI patients treated with primary angioplasty in Swiss hospitals, investigations confirmed a circadian pattern in both peak CK and in-hospital mortality that was independent of total ischemic time. Accordingly, this study proposes that symptom onset time be incorporated as a prognostic factor in patients with myocardial infarction.
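A minimal sketch of the polynomial-harmonic idea, on simulated data only: fitting an intercept plus a 24-hour cosine/sine pair by ordinary least squares recovers the mesor, amplitude, and peak hour of a circadian signal. The generated values loosely mimic the reported 2,017-2,315 U/L peak CK range and 23:00 peak, but they are illustrative assumptions, not the registry data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical symptom-onset hours and peak CK values with a 24-h rhythm.
hour = rng.uniform(0, 24, size=500)
ck = 2166 + 149 * np.cos(2 * np.pi * (hour - 23) / 24) + rng.normal(0, 80, 500)

# Harmonic regression: intercept + one 24-h sine/cosine pair, fitted by OLS.
w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(hour), np.cos(w * hour), np.sin(w * hour)])
beta, *_ = np.linalg.lstsq(X, ck, rcond=None)

amplitude = np.hypot(beta[1], beta[2])                # half peak-to-trough swing
acrophase = (np.arctan2(beta[2], beta[1]) / w) % 24   # hour of the fitted peak
print(f"mesor={beta[0]:.0f} U/L, amplitude={amplitude:.0f} U/L, peak at {acrophase:.1f} h")
```

Higher harmonics (12-h, 8-h, ...) would simply add further column pairs to `X`; the abstract's finding is that only the 24-hour pair was significant.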

Relevance: 100.00%

Abstract:

Background: To enhance the induction of insert-specific immune responses, a new generation of replication-competent poxvirus vectors was designed and evaluated against non-replicating poxvirus vectors in an HIV vaccine study in non-human primates. Methods: Rhesus macaques were immunized with either the non-replicating variant NYVAC-GagPolNef HIV-1 clade C or the replicating NYVAC-GagPolNef-C-KC, boosted with HIVGagPolEnv-SLP, and immune responses were monitored. Results: Gag-specific T-cell responses were only detected in animals immunized with the replicating NYVAC-GagPolNef-C-KC variant. Further enhancement and broadening of the immune response were studied by boosting the animals with novel T-cell immunogens, HIVconsv synthetic long peptides (SLP), which direct vaccine-induced responses to the most conserved regions of HIV and contain both CD4 T-helper and CD8 CTL epitopes. The adjuvanted (Montanide ISA-720) SLP, divided into subpools and delivered into anatomically separate sites, enhanced the Gag-specific T-cell responses in 4 out of 6 animals, to more than 1,000 SFC/10^6 PBMC in some animals. Furthermore, the SLP immunization broadened the immune response in 4 out of 6 animals to multiple Pol epitopes. Even Env-specific responses, to which the animals had not been primed, were induced by SLP in 2 out of 6 animals. Conclusion: This new immunization strategy of priming with the replication-competent poxvirus NYVAC-HIVGagPolNef and boosting with HIVGagPolEnv-SLP induced strong and broad T-cell responses and provides a promising new HIV vaccine approach. This study was performed within the Poxvirus T-cell Vaccine Discovery Consortium (PTVDC), which is part of the CAVD program.

Relevance: 100.00%

Abstract:

Invasive fungal infections are an increasingly frequent etiology of sepsis in critically ill patients, causing substantial morbidity and mortality. Candida species are by far the predominant agents of fungal sepsis, accounting for 10% to 15% of healthcare-associated infections and about 5% of all cases of severe sepsis and septic shock, and are the fourth most common bloodstream isolates in the United States. One-third of all episodes of candidemia occur in the intensive care setting. Early diagnosis of invasive candidiasis is critical in order to initiate antifungal agents promptly, as delay in the administration of appropriate therapy increases mortality. Unfortunately, risk factors and clinical and radiological manifestations are quite unspecific, and conventional culture methods are suboptimal. Non-culture-based methods (such as mannan, anti-mannan, β-d-glucan, and polymerase chain reaction assays) have emerged but remain investigational or require additional testing in the ICU setting. Few prophylactic or pre-emptive studies have been performed in critically ill patients; they tended to be underpowered, and their clinical usefulness remains to be established under most circumstances. The antifungal armamentarium has expanded considerably with the advent of lipid formulations of amphotericin B, the newest triazoles, and the echinocandins. Clinical trials have shown that the triazoles and echinocandins are efficacious and well-tolerated antifungal therapies. Clinical practice guidelines for the management of invasive candidiasis have been published by the European Society of Clinical Microbiology and Infectious Diseases and the Infectious Diseases Society of America.

Relevance: 100.00%

Abstract:

We revisit the debt overhang question. We first use non-parametric techniques to isolate a panel of countries on the downward-sloping section of a debt Laffer curve. In particular, overhang countries are ones where a threshold level of debt is reached in sample, beyond which (initial) debt ends up lowering (subsequent) growth. On average, significantly negative coefficients appear when debt face value reaches 60 percent of GDP or 200 percent of exports, and when its present value reaches 40 percent of GDP or 140 percent of exports. Second, we depart from reduced-form growth regressions and perform direct tests of the theory on the thus selected sample of overhang countries. In the spirit of event studies, we ask whether, as the overhang level of debt is reached: (i) investment falls precipitously, as it should when it becomes optimal to default; (ii) economic policy deteriorates observably, as it should when debt contracts become unable to elicit effort on the part of the debtor; and (iii) the terms of borrowing worsen noticeably, as they should when it becomes optimal for creditors to pre-empt default and exact punitive interest rates. We find a systematic response of investment, particularly when property rights are weakly enforced, some worsening of the policy environment, and a fall in interest rates. This easing of borrowing conditions happens because lending by the private sector virtually disappears in overhang situations and multilateral agencies step in with concessional rates. Thus, while debt relief is likely to improve economic policy (and especially investment) in overhang countries, it is doubtful that it would ease their terms of borrowing, or the burden of debt.
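As a rough sketch of how such an in-sample debt threshold can be located (a simplified stand-in, not the paper's estimator), the code below generates a hypothetical panel in which growth declines only beyond 60% debt/GDP, then scans candidate thresholds and keeps the split that minimises the residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical panel: initial debt/GDP (percent) and subsequent growth,
# built so growth declines only beyond a 60% threshold (illustrative).
debt = rng.uniform(10, 150, size=300)
growth = 3.0 - 0.04 * np.maximum(debt - 60, 0) + rng.normal(0, 0.5, 300)

def sse_at(tau):
    """Fit growth = b0 + b1 * max(debt - tau, 0); return (SSE, coefficients)."""
    x = np.column_stack([np.ones_like(debt), np.maximum(debt - tau, 0)])
    beta, res, *_ = np.linalg.lstsq(x, growth, rcond=None)
    return res[0], beta

# Grid-search the threshold that best fits the kinked relationship.
taus = np.arange(30, 120, 1.0)
best_tau = min(taus, key=lambda t: sse_at(t)[0])
_, beta = sse_at(best_tau)
print(f"estimated threshold ~ {best_tau:.0f}% of GDP, slope beyond it = {beta[1]:.3f}")
```

The negative slope beyond the recovered threshold is the "downward-sloping section of the debt Laffer curve" in miniature; the paper's actual approach is non-parametric and works on a cross-country panel.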

Relevance: 100.00%

Abstract:

The goal of this paper is to present an optimal resource allocation model for the regional allocation of public service inputs. The proposed solution maximises the relative public service availability in regions located below the best-availability frontier, subject to exogenous budget restrictions and to equality-of-access-for-equal-need criteria (an equity-based notion of regional needs). The construction of non-parametric deficit indicators for public service availability is proposed through a novel application of Data Envelopment Analysis (DEA) models, whose results offer advantages for the evaluation and improvement of decentralised public resource allocation systems. The method introduced in this paper is relevant as a resource allocation guide for the majority of services centrally funded by the public sector in a given country, such as health care, basic and higher education, citizen safety, justice, transportation, environmental protection, leisure, culture, housing, and city planning.
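To illustrate the DEA machinery (a generic input-oriented CCR model, not necessarily the paper's exact formulation), the sketch below scores hypothetical regions with one input (budget) and one output (service availability) by solving the standard envelopment linear program; regions scoring 1.0 sit on the best-availability frontier, and the shortfall of the others acts as a deficit indicator.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical regions: one input (budget) and one output (coverage).
inputs  = np.array([[20.0], [40.0], [40.0], [60.0]])   # budget per region
outputs = np.array([[10.0], [30.0], [20.0], [40.0]])   # availability per region

def dea_efficiency(k):
    """Input-oriented CCR efficiency of region k (1.0 = on the frontier)."""
    n = len(inputs)
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    a_ub, b_ub = [], []
    for i in range(inputs.shape[1]):              # inputs: sum(l*x) <= theta*x_k
        a_ub.append(np.r_[-inputs[k, i], inputs[:, i]])
        b_ub.append(0.0)
    for r in range(outputs.shape[1]):             # outputs: sum(l*y) >= y_k
        a_ub.append(np.r_[0.0, -outputs[:, r]])
        b_ub.append(-outputs[k, r])
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(len(inputs)):
    print(f"region {k}: efficiency = {dea_efficiency(k):.2f}")
```

With several inputs and outputs the same loop structure applies; the paper's contribution is turning such scores into deficit indicators under budget and equity constraints.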

Relevance: 100.00%

Abstract:

Structural equation models (SEM) are commonly used to analyze the relationship between variables, some of which may be latent, such as an individual's "attitude" to and "behavior" concerning specific issues. A number of difficulties arise when we want to compare a large number of groups, each with a large sample size, and the manifest variables are distinctly non-normally distributed. Using a specific data set, we evaluate the appropriateness of the following alternative SEM approaches: multiple-group versus MIMIC models, continuous versus ordinal variable estimation methods, and normal-theory versus non-normal estimation methods. The approaches are applied to the ISSP-1993 Environmental data set, with the purpose of exploring variation in the mean level of the "attitude" and "behavior" variables concerning environmental issues, and their mutual relationship, across countries. Issues of both theoretical and practical relevance arise in the course of this application.

Relevance: 100.00%

Abstract:

Site-specific regression coefficient values are essential for erosion prediction with empirical models. With the objective of investigating the surface-soil consolidation factor, Cf, linked to the RUSLE prior-land-use subfactor, PLU, an erosion experiment using simulated rainfall on a 0.075 m m-1 slope, sandy loam Paleudult soil, was conducted at the Agriculture Experimental Station of the Federal University of Rio Grande do Sul (EEA/UFRGS), in Eldorado do Sul, State of Rio Grande do Sul, Brazil. First, a row-cropped area was excluded from cultivation (March 1995), the existing crop residue was removed from the field, and the soil was kept clean-tilled for the rest of the year (to obtain a degraded soil condition for the purpose of this research). The soil was then conventionally tilled for the last time (except for a standard plot which was kept continuously clean-tilled for comparison purposes) in January 1996, and the following treatments were established and evaluated for soil reconsolidation and soil erosion until May 1998, on duplicated 3.5 x 11.0 m erosion plots: (a) fresh-tilled soil, continuously in clean-tilled fallow (unit plot); (b) reconsolidating soil without cultivation; and (c) reconsolidating soil with cultivation (a crop sequence of three corn and two black-oat cycles, continuously in no-till, removing the crop residues after each harvest for rainfall application and redistributing them on the site afterwards). Simulated rainfall was applied with a Swanson-type rotating-boom rainfall simulator, at 63.5 mm h-1 intensity and 90 min duration, six times during the two-and-a-half-year experimental period (at the beginning of the study and after each crop harvest, with the soil in the unit plot being retilled before each rainfall test). The soil-surface-consolidation factor, Cf, was calculated by dividing the soil loss values from the reconsolidating soil treatments by the average value from the fresh-tilled soil treatment (unit plot).
Non-linear regression was used to fit the model Cf = e^(b t) to the calculated Cf data, where t is the time in days since the last tillage. Values for b were -0.0020 for the reconsolidating soil without cultivation and -0.0031 for the one with cultivation, yielding Cf values equal to 0.16 and 0.06, respectively, after two and a half years of tillage discontinuation, compared to 1.0 for fresh-tilled soil. These estimated Cf values correspond, respectively, to soil loss reductions of 84 and 94% relative to the soil loss from the fresh-tilled soil, showing that the soil surface reconsolidated more intensely with cultivation than without it. Two distinct treatment-inherent soil surface conditions probably influenced the rapid decay rate of the Cf values in this study, but they were, in fact, part of the real field conditions. The Cf-factor curves presented in this paper are therefore useful for predicting erosion with RUSLE, but their application is restricted to situations where both the soil type and the particular soil surface condition are similar to the ones investigated in this study.
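The reported fit can be reproduced in spirit with a short non-linear regression sketch; the Cf values below are illustrative points consistent with the b = -0.0020 curve, not the original measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Days since last tillage and soil-consolidation factor Cf (illustrative
# values consistent with the reported fit, not the original data set).
t  = np.array([0, 150, 300, 450, 600, 760, 912])
cf = np.array([1.00, 0.72, 0.55, 0.38, 0.28, 0.21, 0.16])

def model(t, b):
    return np.exp(b * t)          # Cf = e^(b t), so Cf(0) = 1 by construction

(b,), _ = curve_fit(model, t, cf, p0=[-0.001])
print(f"b = {b:.4f}, Cf after 912 days = {model(912, b):.2f}")
```

The single parameter b controls how quickly soil loss falls off after tillage stops; fitting the "with cultivation" series the same way would recover a steeper value near -0.0031.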

Relevance: 100.00%

Abstract:

Erosion is deleterious because it reduces the soil's productivity capacity for growing crops and causes sedimentation and water pollution problems. Surface and buried crop residue, as well as live and dead plant roots, play an important role in erosion control. An efficient way to assess the effectiveness of such materials in erosion reduction is by means of decomposition constants as used within the Revised Universal Soil Loss Equation - RUSLE's prior-land-use subfactor - PLU. This was investigated using simulated rainfall on a 0.12 m m-1 slope, sandy loam Paleudult soil, at the Agriculture Experimental Station of the Federal University of Rio Grande do Sul, in Eldorado do Sul, State of Rio Grande do Sul, Brazil. The study area had been covered by native grass pasture for about fifteen years. By the middle of March 1996, the sod was mechanically mowed and the crop residue removed from the field. Late in April 1996, the sod was chemically desiccated with herbicide and, about one month later, the following treatments were established and evaluated for sod biomass decomposition and soil erosion, from June 1996 to May 1998, on duplicated 3.5 x 11.0 m erosion plots: (a) and (b) soil without tillage, with surface residue and dead roots; (c) soil without tillage, with dead roots only; (d) soil tilled conventionally every two-and-half months, with dead roots plus incorporated residue; and (e) soil tilled conventionally every six months, with dead roots plus incorporated residue. Simulated rainfall was applied with a rotating-boom rainfall simulator, at an intensity of 63.5 mm h-1 for 90 min, eight to nine times during the experimental period (about every two-and-half months). Surface and subsurface sod biomass amounts were measured before each rainfall test along with the erosion measurements of runoff rate, sediment concentration in runoff, soil loss rate, and total soil loss. Non-linear regression analysis was performed using an exponential and a power model. 
Surface sod biomass decomposition was better depicted by the exponential model, while subsurface sod biomass decomposition was better depicted by the power model. Subsurface sod biomass decomposed faster and more completely than surface sod biomass, with dead roots in untilled soil without surface residue decomposing more than dead roots in untilled soil with surface residue. Tillage type and frequency did not appreciably influence subsurface sod biomass decomposition. Soil loss rates increased greatly with both surface sod biomass decomposition and the decomposition of subsurface sod biomass in the conventionally tilled soil, but they were minimally affected by subsurface sod biomass decomposition in the untilled soil. Runoff rates were little affected by the studied treatments. Dead roots plus incorporated residues were effective in reducing erosion in the conventionally tilled soil, while consolidation of the soil surface was important in no-till. The residual effect of the turned soil on erosion diminished gradually with time and ceased after two years.

Relevance: 100.00%

Abstract:

BACKGROUND: In a previous study we demonstrated that mild metabolic alkalosis resulting from standard bicarbonate haemodialysis induces hypotension. In this study, we have further investigated the changes in systemic haemodynamics induced by bicarbonate and calcium, using non-invasive procedures. METHODS: In a randomized controlled trial with a single-blind, crossover design, we sequentially changed the dialysate bicarbonate and calcium concentrations (between 26 and 35 mmol/l for bicarbonate and either 1.25 or 1.50 mmol/l for calcium). Twenty-one patients were enrolled for a total of 756 dialysis sessions. Systemic haemodynamics was evaluated using pulse wave analysers. Bioimpedance and BNP were used to compare the fluid status pattern. RESULTS: The haemodynamic parameters and the pre-dialysis BNP using either a high calcium or a high bicarbonate concentration were as follows: systolic blood pressure (+5.6 and -4.7 mmHg; P < 0.05 for both), stroke volume (+12.3 and +5.2 ml; P < 0.05 and ns), peripheral resistances (-190 and -171 dyne s cm(-5); P < 0.05 for both), central augmentation index (+1.1% and -2.9%; ns and P < 0.05) and BNP (-5 and -170 ng/l; ns and P < 0.05). The need for staff intervention was similar in all modalities. CONCLUSIONS: Both high bicarbonate and high calcium concentrations in the dialysate improve the haemodynamic pattern during dialysis. Bicarbonate reduces arterial stiffness and improves the heart's tolerance for volume overload in the interdialytic phase, whereas calcium directly increases stroke volume. The slight hypotensive effect of alkalaemia should motivate a trial reduction of the bicarbonate concentration in the dialysis fluid, for haemodynamic reasons, only in the event of failure of classical tools to prevent intradialytic hypotension.

Relevance: 100.00%

Abstract:

Our procedure to detect moving groups in the solar neighbourhood (Chen et al., 1997) in the four-dimensional space of the stellar velocity components and age has been improved. The method, which takes advantage of non-parametric estimators of the density distribution to avoid any a priori knowledge of the kinematic properties of these stellar groups, now includes the effect of observational errors in the process of selecting moving-group stars, uses a better estimation of the density distribution of the total sample and of the field stars, and classifies moving-group stars using all the available information. It is applied here to a carefully selected sample of early-type stars with known radial velocities and Strömgren photometry. Astrometric data are taken from the HIPPARCOS catalogue (ESA, 1997), which results in an important decrease in the observational errors with respect to ground-based data and ensures the uniformity of the observed data. Both the improvement of our method and the use of precise astrometric data have allowed us not only to confirm the existence of the classical moving groups, but also to detect finer structures that in several cases can be related to the kinematic properties of nearby open clusters or associations.
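A minimal sketch of the non-parametric density-estimation step (illustrative only; the actual method works in four dimensions with error modelling): a Gaussian kernel density estimate over simulated (U, V) velocities picks out an overdensity standing in for a moving group without assuming any kinematic model.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Hypothetical (U, V) velocity components: a smooth field-star background
# plus a compact clump standing in for a moving group (km/s; illustrative).
field = rng.normal([0, -15], [25, 15], size=(900, 2))
group = rng.normal([-40, -20], [3, 3], size=(100, 2))
sample = np.vstack([field, group])

# Non-parametric density estimate: no a priori model of the kinematics.
kde = gaussian_kde(sample.T)

# Density at the clump centre vs. a point far from any structure.
print(kde([[-40], [-20]]), kde([[80], [40]]))
```

In the full procedure, stars in such overdensities relative to a smooth field-star model, after accounting for observational errors, are the moving-group candidates.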