973 results for "Error in substance"


Relevance: 80.00%

Publisher:

Abstract:

Aims: Sex on premises venues (SOPV) have been given considerable attention, in particular when exploring the relationship between SOPV attendance and risk behaviours such as drug use and unsafe sex. Little attention has been given to the perspectives of those who work in these venues.

Methods: Semi-structured interviews were conducted with staff recruited from four SOPV in Sydney, Australia. Content analysis was performed to identify common themes.

Findings: Several themes emerged from the staff interviews. These themes concentrated on particular drugs of concern; interventions in place to deal with substance use and patrons under the influence; and the observation that drug use among SOPV patrons occurred at other venues prior to their attending the SOPV.

Conclusions: Interviews with SOPV workers showed that these venues face unique challenges that may not be encountered in other settings. SOPV staff have a detailed understanding of their clientele, and their perspectives may be valuable not only for tracking trends in substance use but also for the development and distribution of harm reduction materials.

In audio watermarking, robustness against pitch-scaling attacks is one of the most challenging problems. In this paper, we propose an algorithm, based on traditional time-spread (TS) echo hiding based audio watermarking, to solve this problem. In TS echo hiding based watermarking, a pitch-scaling attack shifts the location of the pseudonoise (PN) sequence that appears in the cepstrum domain. Thus, the position of the peak that occurs after correlating with the PN sequence changes by an unknown amount, and that causes the error. In the proposed scheme, we replace the PN sequence with a unit-sample sequence and modify the decoding algorithm so that it does not depend on a particular point in the cepstrum domain for extraction of the watermark. Moreover, the proposed algorithm is applied to stereo audio signals to further improve robustness. Experimental results illustrate the effectiveness of the proposed algorithm against pitch-scaling attacks compared to existing methods. In addition, the proposed algorithm also gives better robustness against other conventional signal-processing attacks.
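
To make the echo-hiding idea concrete, here is a deliberately simplified sketch of single-echo embedding and detection using autocorrelation, not the paper's TS/cepstrum-domain decoder; all delays, the echo gain, and the carrier are illustrative assumptions.

```python
# Simplified echo hiding: embed one bit per frame as an echo at delay
# d0 (bit 0) or d1 (bit 1); detect by comparing autocorrelation at the
# two candidate lags. Parameters here are assumed, not from the paper.
import random

def embed_bit(frame, bit, d0=50, d1=80, alpha=0.4):
    """Add a scaled echo at delay d0 or d1 depending on the bit."""
    d = d1 if bit else d0
    return [s + (alpha * frame[i - d] if i >= d else 0.0)
            for i, s in enumerate(frame)]

def autocorr(frame, lag):
    """Raw autocorrelation of the frame at the given lag."""
    return sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))

def detect_bit(frame, d0=50, d1=80):
    """Decide the bit from whichever echo lag correlates more strongly."""
    return 1 if autocorr(frame, d1) > autocorr(frame, d0) else 0

random.seed(1)
carrier = [random.uniform(-1, 1) for _ in range(2000)]  # toy "audio"
for b in (0, 1):
    assert detect_bit(embed_bit(carrier, b)) == b
```

A pitch-scaling attack would stretch the echo delay itself, which is precisely why the paper moves away from fixed-lag PN correlation; this sketch only shows the baseline mechanism being hardened.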

Compressed sensing (CS) is a new information-sampling theory for acquiring sparse or compressible data with far fewer measurements than those otherwise required by the Nyquist/Shannon counterpart. This is particularly important for imaging applications such as magnetic resonance imaging or astronomy. However, in the existing CS formulation, the use of the ℓ2 norm on the residuals is not particularly efficient when the noise is impulsive, and this can increase the upper bound of the recovery error. To address this problem, we consider a robust formulation for CS that suppresses outliers in the residuals. We propose an iterative algorithm for solving the robust CS problem that exploits the power of existing CS solvers. We also show that the upper bound on the recovery error in the case of non-Gaussian noise is reduced, and we then demonstrate the efficacy of the method through numerical studies.
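
The core idea of down-weighting outlying residuals can be illustrated with iteratively reweighted least squares (IRLS) using Huber-style weights on a toy scalar model; this is an assumed stand-in for intuition, not the paper's robust CS algorithm.

```python
# IRLS sketch on y_i ≈ a_i * x with one gross outlier: large residuals
# get weight delta/|r| instead of 1, so the outlier barely influences
# the fit. The model, delta, and data are illustrative assumptions.

def irls_scalar(a, y, delta=0.5, iters=20):
    """Estimate x in y ≈ a*x, down-weighting large (outlier) residuals."""
    x = sum(ai * yi for ai, yi in zip(a, y)) / sum(ai * ai for ai in a)
    for _ in range(iters):
        r = [yi - ai * x for ai, yi in zip(a, y)]
        w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        x = (sum(wi * ai * yi for wi, ai, yi in zip(w, a, y))
             / sum(wi * ai * ai for wi, ai in zip(w, a)))
    return x

a = [1.0] * 10
y = [2.0] * 9 + [50.0]        # nine clean measurements, one outlier
ls = sum(y) / len(y)          # plain least squares is pulled to 6.8
assert abs(irls_scalar(a, y) - 2.0) < 0.1   # robust fit stays near 2
assert abs(ls - 2.0) > 4.0
```

The same principle, applied to the residuals inside a CS solver loop, is what keeps impulsive noise from inflating the recovery-error bound.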

There is currently no universally recommended and accepted method of data processing within the science of indirect calorimetry for either mixing-chamber or breath-by-breath systems of expired gas analysis. Exercise physiologists were first surveyed to determine the methods used to process oxygen consumption (V̇O2) data and current attitudes to data processing within the science of indirect calorimetry. Breath-by-breath datasets obtained from indirect calorimetry during incremental exercise were then used to demonstrate the consequences of commonly used time-based, breath-based and digital-filter post-acquisition data-processing strategies. Variability in the breath-by-breath data was assessed using multiple regression based on the independent variables ventilation (VE) and the expired gas fractions for oxygen and carbon dioxide, FEO2 and FECO2, respectively. Based on the explained variance of the breath-by-breath V̇O2 data, processing methods to remove variability were proposed for time-averaged, breath-averaged and digital-filter applications. Among exercise physiologists, the strategy used to remove the variability in sequential V̇O2 measurements varied widely and consisted of time averages (30 sec [38%], 60 sec [18%], 20 sec [11%], 15 sec [8%]), a moving average of five to 11 breaths (10%), and the middle five of seven breaths (7%). Most respondents indicated that they used multiple criteria to establish maximum V̇O2 (V̇O2max), including the attainment of age-predicted maximum heart rate (HRmax) [53%], a respiratory exchange ratio (RER) >1.10 (49%) or RER >1.15 (27%), and a rating of perceived exertion (RPE) of >17, 18 or 19 (20%). The reasons stated for these strategies included their own beliefs (32%), what they were taught (26%), what they read in research articles (22%), tradition (13%) and the influence of their colleagues (7%).
The combination of VE, FEO2 and FECO2 removed 96-98% of the breath-by-breath V̇O2 variability in incremental and steady-state exercise V̇O2 datasets, respectively. Reduction of the residual error in V̇O2 datasets to 10% of the raw variability results from application of a 30-second time average, a 15-breath running average, or a digital filter with a 0.04 Hz cut-off. Thus, we recommend that once these data-processing strategies are used, the peak or maximal value becomes the highest processed datapoint. Exercise physiologists need to agree on, and continually refine through empirical research, a consistent process for analysing data from indirect calorimetry.
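
The breath-averaging recommendation can be sketched in a few lines: smooth the breath-by-breath series with a running average and take the highest processed datapoint as the peak. The synthetic ramp and noise values below are my own illustrative assumptions, not the study's data.

```python
# Apply a 15-breath running average to noisy breath-by-breath VO2
# values and take the highest processed datapoint as the "peak".
import random

def running_average(values, window):
    """Running average over `window` consecutive samples."""
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

random.seed(2)
# Synthetic incremental test: VO2 ramps from 1.0 to 4.0 L/min with
# +/-0.3 L/min breath-to-breath noise (assumed values).
raw = [1.0 + 3.0 * i / 299 + random.uniform(-0.3, 0.3) for i in range(300)]
smoothed = running_average(raw, 15)
vo2peak = max(smoothed)
assert max(raw) >= vo2peak      # smoothing lowers the apparent peak
assert 3.5 < vo2peak < 4.3      # peak lands near the true 4.0 L/min
```

This is exactly why the choice of processing strategy matters: the raw maximum overstates V̇O2max by up to the noise amplitude, while the processed peak tracks the underlying physiology.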

Precise and reliable modelling of a polymerization reactor is challenging due to its complex reaction mechanism and non-linear nature. Researchers often make several assumptions when deriving theories and developing models for the polymerization reactor, so traditional models suffer from high prediction error. In contrast, data-driven modelling techniques provide a powerful framework for describing the dynamic behaviour of the polymerization reactor. However, traditional neural network (NN) prediction performance drops significantly in the presence of polymerization process disturbances. Moreover, the uncertainty caused by disturbances in reactor operation can be properly quantified through the construction of prediction intervals (PIs) for model outputs. In this study, we propose and apply a PI-based neural network (PI-NN) model for the free-radical polymerization system. This strategy avoids the assumptions made in traditional modelling techniques for the polymerization reactor system. The lower upper bound estimation (LUBE) method is used to develop the PI-NN model for uncertainty quantification. To further improve the quality of the model, a new method is proposed for aggregating the upper and lower PI bounds obtained from individual PI-NN models. Simulation results reveal that the combined PI-NN is superior to the individual PI-NN models in terms of PI quality. Moreover, the constructed PIs properly quantify the effects of uncertainties in reactor operation and can later be used as part of the control process. © 2014 Taiwan Institute of Chemical Engineers.
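
One simple way to aggregate prediction intervals from several models is to take the per-point median of the lower bounds and of the upper bounds, then check coverage. This is an assumed illustration of the aggregation idea, not the combination scheme proposed in the paper.

```python
# Combine PIs from several models by taking the median lower and
# median upper bound at each point; `coverage` is the fraction of
# targets inside their interval (PICP). All data are toy values.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def aggregate_pis(pis):
    """pis: list of per-model [(lo, hi), ...] lists -> combined PI list."""
    combined = []
    for point in zip(*pis):
        los = [lo for lo, _ in point]
        his = [hi for _, hi in point]
        combined.append((median(los), median(his)))
    return combined

def coverage(pi, targets):
    """Fraction of targets falling inside their interval."""
    inside = sum(lo <= t <= hi for (lo, hi), t in zip(pi, targets))
    return inside / len(targets)

model_a = [(0.0, 2.0), (1.0, 3.0)]
model_b = [(0.5, 1.5), (0.5, 2.5)]
model_c = [(0.2, 1.8), (1.2, 2.8)]
combined = aggregate_pis([model_a, model_b, model_c])
assert combined == [(0.2, 1.8), (1.0, 2.8)]
assert coverage(combined, [1.0, 2.0]) == 1.0
```

In practice PI quality trades off coverage against mean interval width, which is why combining several PI-NN models can beat any individual one.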

Some illustrative examples are provided to identify the ineffective and unrealistic characteristics of existing approaches to solving fuzzy linear programming (FLP) problems (with single or multiple objectives). We point out the error in existing methods concerning the ranking of fuzzy numbers and then suggest an effective method to solve the FLP. Based on a consistent centroid-based ranking of fuzzy numbers, the FLP problems are transformed into non-fuzzy single- (or multiple-) objective linear programming problems. The resulting crisp single- or multiple-objective programming problems can then be solved by conventional methods.
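
For a triangular fuzzy number (a, b, c) with peak b, the x-coordinate of the centroid of its membership triangle is (a + b + c) / 3, and fuzzy numbers can be ranked by that value. The sketch below is an illustrative instance of centroid-based ranking; the paper's exact ranking function may differ.

```python
# Centroid-based ranking of triangular fuzzy numbers (a, b, c):
# rank by the centroid x-coordinate (a + b + c) / 3.

def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank(fuzzy_numbers):
    """Sort triangular fuzzy numbers ascending by centroid."""
    return sorted(fuzzy_numbers, key=centroid)

x = (1.0, 2.0, 3.0)   # centroid 2.0
y = (0.0, 2.0, 7.0)   # centroid 3.0
assert centroid(y) == 3.0
assert rank([y, x]) == [x, y]
```

In the transformation to a crisp problem, each fuzzy coefficient would be replaced by its centroid, after which the resulting linear program is solved by any conventional LP method.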

Audit committees (ACs) are expected to play a key role in improving financial statement integrity and, as a consequence, to reduce audit risk. Companies reporting conformity with regulations can have an AC that appears effective but is not actually effective in substance. We surveyed audit partners and managers to identify their indicators of actual AC effectiveness (an auditor-chosen list). We hypothesize a negative association between AC effectiveness and audit risk only when the auditor-chosen list, rather than the extent of conformity with regulations, is used to measure effectiveness. Results support our expectations.

Heterogeneous deformation developed during "static recrystallization (SRX) tests" poses serious questions about the validity of the conventional methods for measuring the softening fraction. The challenges of measuring SRX and verifying a proposed kinetic model of SRX are discussed, and a least-squares technique is utilized to quantify the error in a proposed SRX kinetic model. This technique relies on an existing computational-experimental multi-layer formulation to account for the heterogeneity during the post-interruption hot torsion deformation. The kinetics of static recrystallization for a type 304 austenitic stainless steel deformed at 900 °C and a strain rate of 0.01 s⁻¹ are characterized by implementing the formulation. By minimizing the error between the measured and calculated torque-twist data, the parameters of the kinetic model and the flow behavior during the second hit are evaluated and compared with those obtained with a conventional technique. Typical static recrystallization distributions in the test sample are presented. It has been found that the major differences between the results of the conventional and the presented techniques are due to heterogeneous recrystallization in the cylindrical core of the specimen, where the material is still only partially recrystallized at the onset of the second-hit deformation. For the investigated experimental conditions, the core is confined to the first two-thirds of the gauge radius when the holding time is shorter than 50 s and the maximum pre-strain is about 0.5.
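
SRX kinetics are commonly described by a JMAK-type law, X(t) = 1 − exp(−k tⁿ), and the least-squares idea can be illustrated by recovering (k, n) from softening-fraction data with a crude grid search. The data below are synthetic and the search is deliberately naive; this is not the paper's multi-layer torque-twist formulation.

```python
# Fit a JMAK kinetic law X(t) = 1 - exp(-k * t**n) to softening data
# by least-squares grid search over (k, n). Synthetic data only.
import math

def jmak(t, k, n):
    return 1.0 - math.exp(-k * t ** n)

def fit_jmak(times, fractions):
    """Grid-search (k, n) minimizing squared error to the data."""
    best = None
    for k in [i / 1000.0 for i in range(1, 201)]:
        for n in [j / 10.0 for j in range(5, 31)]:
            err = sum((jmak(t, k, n) - x) ** 2
                      for t, x in zip(times, fractions))
            if best is None or err < best[0]:
                best = (err, k, n)
    return best[1], best[2]

times = [5.0, 10.0, 20.0, 40.0, 80.0]     # holding times, s (assumed)
true_k, true_n = 0.01, 1.5
data = [jmak(t, true_k, true_n) for t in times]
k, n = fit_jmak(times, data)
assert abs(k - true_k) < 0.005 and abs(n - true_n) < 0.2
```

The paper's technique minimizes the same kind of squared error, but between measured and computed torque-twist curves across layers of the torsion specimen rather than directly on softening fractions.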

An analytic solution to the multi-target Bayes recursion, known as the δ-Generalized Labeled Multi-Bernoulli (δ-GLMB) filter, was recently proposed by Vo and Vo in ["Labeled Random Finite Sets and Multi-Object Conjugate Priors," IEEE Trans. Signal Process., vol. 61, no. 13, pp. 3460-3475, 2013]. As a sequel to that paper, the present paper details efficient implementations of the δ-GLMB multi-target tracking filter. Each iteration of this filter involves an update operation and a prediction operation, both of which result in weighted sums of multi-target exponentials with an intractably large number of terms. To truncate these sums, the ranked-assignment and K-shortest-paths algorithms are used in the update and prediction, respectively, to determine the most significant terms without exhaustively computing all of them. In addition, using tools derived from the same framework, such as probability hypothesis density filtering, we present inexpensive (relative to the δ-GLMB filter) look-ahead strategies to reduce the number of computations. A characterization of the L1-error in the multi-target density arising from the truncation is presented.
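
The truncation idea can be sketched in miniature: keep the K highest-weight terms of a normalized weighted sum, renormalize, and measure the L1 error introduced. (The actual filter selects significant terms with ranked-assignment and K-shortest-paths algorithms rather than full sorting; the weights below are toy values.)

```python
# Truncate a normalized weighted mixture to its k largest terms and
# report the L1 distance to the original weight vector.

def truncate(weights, k):
    """Keep the k largest weights, renormalize, return (kept, l1_error)."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    kept_idx = set(order[:k])
    kept_mass = sum(weights[i] for i in kept_idx)
    truncated = [w / kept_mass if i in kept_idx else 0.0
                 for i, w in enumerate(weights)]
    l1 = sum(abs(t - w) for t, w in zip(truncated, weights))
    return truncated, l1

weights = [0.4, 0.3, 0.2, 0.06, 0.04]
trunc, l1 = truncate(weights, 3)
assert trunc[3] == 0.0 and trunc[4] == 0.0
assert abs(sum(trunc) - 1.0) < 1e-12
# discarded mass bounds the L1 error after renormalization
assert l1 <= 2 * (0.06 + 0.04) + 1e-12
```

The last assertion mirrors the kind of result the paper formalizes: the L1 error of the truncated multi-target density is controlled by the total weight of the discarded terms.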

A total of 106 patients who underwent stereotactic localization are analysed. The procedures ranged from brain biopsies, craniotomy guidance and catheter placement in tumour cavities to drainage of intracerebral haematomas and drainage of cerebral abscesses. Craniotomy guidance was performed for AVMs, tumours and inflammatory processes in 21 patients. Stereotactic brain biopsies for anatomopathological diagnosis showed a positivity rate of 87.50%, with complications in 1.20%, across 82 cases. Variables such as age, sex, procedure performed, anatomopathological diagnosis and lesion volume are analysed statistically. The imprecision in the acquisition and calculation of stereotactic coordinates with brain CT is discussed, and the accuracy of the stereotomographic method is verified using a phantom. The largest coordinate error was 6.8 mm.

Given the relevance that the credit market has been gaining in the economy, this work provides a conceptual review of credit risk. With expected loss as the main component of credit risk, the work explores this topic in depth and proposes a new way to calculate it. As usually modelled, expected loss presupposes that the PD and LGD parameters are independent. Some authors question this assumption and argue that, if this dependence is not taken into account, the calculated expected loss and the capital that should be allocated will be incorrect. One alternative for handling the correlation is to model the two components jointly. Comparing the results of the usual model with the joint model, we conclude that the expected-loss estimation error of the joint model was smaller. It cannot be claimed that the smaller estimation error is due to the correlation between PD and LGD, but modelling the parameters jointly removes this strong assumption.
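
A toy numeric scenario (my own assumed values, not the thesis's model) shows why the independence assumption matters: with two equally likely economic states in which PD and LGD move together, E[PD × LGD] differs from E[PD] × E[LGD].

```python
# Expected loss under independence vs joint modelling of PD and LGD.
# Scenario probabilities, PDs, LGDs and EAD are illustrative only.

scenarios = [          # (probability, PD, LGD)
    (0.5, 0.02, 0.30), # benign state: low default rate, low severity
    (0.5, 0.10, 0.60), # stressed state: high default rate, high severity
]
ead = 1_000_000.0      # exposure at default

mean_pd = sum(p * pd for p, pd, _ in scenarios)
mean_lgd = sum(p * lgd for p, _, lgd in scenarios)
el_independent = mean_pd * mean_lgd * ead          # E[PD]*E[LGD]*EAD
el_joint = sum(p * pd * lgd for p, pd, lgd in scenarios) * ead

assert abs(el_independent - 27_000) < 1e-6
assert abs(el_joint - 33_000) < 1e-6
assert el_joint > el_independent   # positive PD-LGD dependence raises EL
```

Here the independence assumption understates expected loss by 6,000 per million of exposure, which is the kind of misallocation of capital the work warns about.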

An economical solution for cementing oil wells is the use of pre-prepared dry mixtures containing cement and additives. The mixtures can be formulated, prepared and transported to the well, where water is added before pumping. With this method, it becomes unnecessary to prepare cement mixes containing additives during the cementing operation, reducing the possibility of error. The aim of this work is therefore to study formulations of cement slurries containing solid additives for the primary cementing of onshore oil wells at typical depths of 400, 800 and 1,200 metres. The formulations comprise Special Class Portland cement, mineral additions and solid chemical additives. The formulated mixtures have a density of 1.67 g/cm³ (14.0 lb/gal). They were optimized through analysis of rheological parameters, fluid-loss results, free water, thickening time, stability tests and mechanical properties. The results showed that the mixtures conform to the specifications for cementing onshore oil wells at the studied depths.

Control of the mite Brevipalpus phoenicis (Geijskes, 1939), the vector of the citrus leprosis virus, should be carried out when its population reaches the action threshold, which is obtained by monitoring the population through sampling. The objective was to determine the acceptable sample size for estimating the mite population for subsequent decision-making. The experiment was carried out at Fazenda Cambuhy, Matão, SP, Brazil, in the 2003-2004 growing season. A block of the Valência variety, eight years old, planted at 7 x 3.5 m spacing with 2,480 plants, was chosen at random. In this block, 1, 2, 3, 5, 10 and 100% of the plants were inspected, corresponding to 25, 50, 74, 124, 248 and 2,480 plants, respectively, walking along the planting rows. Three fruits were sampled per plant or, in their absence, branches were analysed. According to the results, the percentage error in estimating the mean percentage of fruits with mites when only 1% of the plants (25 plants) are sampled is 50%; that is, for a 10% infestation, the estimated percentage of infested fruits would vary between 5 and 15%, leading the grower to underestimate or overestimate the infestation level and increasing costs through unnecessary spraying or ineffective mite control. For the sampling error to stay within the acceptable range of 20 to 30% (on average 25%), 105 plants should be sampled. For the percentage of fruits with more than 10 mites, 540 plants must be inspected to reach the acceptable range (20 to 30%).
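
The trade-off between sample size and relative error can be illustrated with the standard sample-size formula for a proportion with finite-population correction. This is my own textbook-style sketch under assumed inputs; its numbers will not reproduce the study's, which rest on the study's own variance estimates.

```python
# Plants to inspect to estimate an infestation proportion p within a
# target relative error at ~95% confidence (z ≈ 1.96), with
# finite-population correction. Inputs below are assumed.
import math

def sample_size(p, rel_error, n_population, z=1.96):
    """Required sample size for proportion p within rel_error * p."""
    d = rel_error * p                                # absolute half-width
    n0 = (z * z * p * (1.0 - p)) / (d * d)           # infinite-population n
    n = n0 / (1.0 + (n0 - 1.0) / n_population)       # finite-pop. correction
    return math.ceil(n)

# 10% infestation, block of 2,480 plants, 25% relative error target
n = sample_size(0.10, 0.25, 2480)
assert 0 < n <= 2480
# tighter error targets require larger samples
assert sample_size(0.10, 0.20, 2480) > sample_size(0.10, 0.30, 2480)
```

The qualitative behaviour matches the study: halving the tolerated relative error roughly quadruples the number of plants that must be inspected.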

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Determining the number of sample units that compose a composite sample optimizes the workforce and reduces the errors inherent in soil-fertility evaluation and recommendation reports. This study aimed to determine, in three systems of soil use and management, the number of sample units needed to form the composite sample for the evaluation of soil fertility. It was concluded that the number of sample units needed to compose the composite sample for determination of organic matter, pH, P, K, Ca, Mg, Al, H+Al and base saturation of the soil varies with the use and management of the soil and with the acceptable error in the mean estimate. For the same sampling depth, increasing the number of sample units reduced the percentage error in estimating the mean, allowing the recommendation of 14, 14 and 11 sample units under native vegetation, pasture and maize cultivation, respectively, for a 20% error in the mean estimate.
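
A standard way to arrive at such recommendations (not necessarily the authors' exact procedure) is n = (t · CV / E)², where CV is the coefficient of variation of the attribute, E the acceptable error around the mean, and t Student's t. The CVs below are assumed for illustration.

```python
# Number of sample units needed for a soil attribute with coefficient
# of variation cv_percent, acceptable mean error error_percent, and
# Student's t (here t ≈ 2.0 for ~95% confidence). Illustrative only.
import math

def n_sample_units(cv_percent, error_percent, t=2.0):
    """Sample units needed: n = (t * CV / E)^2, rounded up."""
    return math.ceil((t * cv_percent / error_percent) ** 2)

assert n_sample_units(30.0, 20.0) == 9     # (2*30/20)^2 = 9
assert n_sample_units(40.0, 20.0) == 16    # more variable attribute
assert n_sample_units(40.0, 10.0) == 64    # tighter error target
```

This reproduces the pattern in the abstract: land uses with more variable soil attributes (higher CV) demand more sample units for the same 20% error.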