917 results for pure error
Abstract:
The objective of this work was to evaluate the incidence of width and length in a single pure seed fraction of oat [Avena sativa (L.)] cv. Cristal. Seeds were selected with a mechanical divider and by hand, and their dimensions were compared with radiographic images of seeds with glumes and of their caryopses. The width and length of the seeds with glumes and of their caryopses were measured with an electronic calliper, and their weight with a precision balance. Radiographic images of seeds with glumes were taken with experimental X-ray equipment. The fraction of seeds with glumes selected by the analyst, using the previously determined width and length, was heavier, whereas the fraction obtained by hand selection was slightly narrower, larger and lighter. The presence of the glumes masked the real dimensions (width and length) of the caryopses and led the analyst to select seeds that differed more in width than in length. The radiographic images showed whether a caryopsis was present inside the seed and revealed its real dimensions. The mechanical partition method proved more efficient because the analyst's subjectivity was removed from the selection based on seed dimensions. X-ray analysis was a useful tool that complements the selection of the pure seed fraction as an additional indicator of seed quality.
Abstract:
Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal choice between using individual learning and/or social learning may be dictated by the life-stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs to learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium.
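The abstract above describes the model only qualitatively. The following toy Monte Carlo is a hypothetical illustration, not the authors' analytical model: it assumes a mean-reverting environmental optimum, a simple partial-adjustment learning rule, and noisier individual learning than social learning, and compares the mature-phenotype error of a "bang-bang" schedule (social then individual learning) against a "flat" schedule (individual learning at both stages).

```python
# Hypothetical toy simulation, not the paper's model: compare the mean squared
# error of the mature phenotype under a "bang-bang" schedule (social learning
# first, individual learning second) and a "flat" schedule (individual learning
# at both stages).  Update rule, noise levels and environment dynamics are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_GEN = 50_000      # number of discrete generations
RHO = 0.98          # persistence of the environmental optimum between generations
SIGMA_ENV = 0.1     # size of between-generation environmental fluctuations
SIGMA_IND = 1.0     # noise of individual (trial-and-error) learning
SIGMA_SOC = 0.3     # noise of oblique social learning (copying an adult)
W = 0.5             # weight put on the learning target at each stage


def learn(x, target, sigma):
    """One learning stage: move part-way towards a noisy view of the target."""
    return (1.0 - W) * x + W * (target + rng.normal(0.0, sigma))


def mse(schedule):
    """Mean squared deviation of the mature phenotype from the optimum."""
    theta, parent, err = 0.0, 0.0, 0.0
    for _ in range(N_GEN):
        theta = RHO * theta + rng.normal(0.0, SIGMA_ENV)  # environment fluctuates
        x = 0.0                                           # naive phenotype at birth
        for stage in schedule:                            # 'S' social, 'I' individual
            x = learn(x, parent if stage == "S" else theta,
                      SIGMA_SOC if stage == "S" else SIGMA_IND)
        err += (x - theta) ** 2
        parent = x      # the next generation observes this mature phenotype
    return err / N_GEN


print("bang-bang (social, then individual):  ", mse("SI"))
print("flat      (individual at both stages):", mse("II"))
```

With these illustrative settings the social-then-individual schedule tends to track the optimum more closely; making individual learning less noisy than social learning reverses the ranking, which is the kind of trade-off the evolutionary analysis in the abstract formalises.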
Abstract:
PURPOSE: To review, retrospectively, the possible causes of sub- or intertrochanteric fractures after screw fixation of intracapsular fractures of the proximal femur. METHODS: Eighty-four patients with an intracapsular fracture of the proximal femur were operated on between 1995 and 1998 using three cannulated 6.25 mm screws. The screws were inserted in a triangular configuration, with one screw in the upper part of the femoral neck and two screws in the inferior part. Between 1999 and 2001, we used two screws proximally and one screw distally. RESULTS: In the first series, two patients died within one week after the operation. Sixty-four fractures healed without problems. Four patients developed an atrophic non-union; avascular necrosis of the femoral head was found in 11 patients. Three patients (3.6%) suffered a sub- and/or intertrochanteric fracture after a mean postoperative time of 30 days, in one case without obvious trauma. In all three cases surgical revision was necessary. Between 1999 and 2001 we did not observe any fracture after screw fixation. CONCLUSION: Two screws in the inferior part of the femoral neck create a stress riser in the subtrochanteric region, potentially inducing a fracture in the weakened bone. For internal fixation of proximal intracapsular femoral fractures, only one screw must be inserted in the inferior part of the neck.
Abstract:
Context. White dwarfs can be used to study the structure and evolution of the Galaxy by analysing their luminosity function and initial mass function. Among them, the very cool white dwarfs provide information about the early ages of each population. Because white dwarfs are intrinsically faint, only the nearby (~ 20 pc) sample is reasonably complete. The Gaia space mission will drastically increase the sample of known white dwarfs through its 5-6 year survey of the whole sky up to magnitude V = 20-25. Aims. We provide a characterisation of Gaia photometry for white dwarfs to better prepare for the analysis of the scientific output of the mission. Transformations between some of the most common photometric systems and Gaia passbands are derived. We also give estimates of the number of white dwarfs of the different galactic populations that will be observed. Methods. Using synthetic spectral energy distributions and the most recent Gaia transmission curves, we computed colours of three different types of white dwarfs (pure hydrogen, pure helium, and mixed composition with H/He = 0.1). With these colours we derived transformations to other common photometric systems (Johnson-Cousins, Sloan Digital Sky Survey, and 2MASS). We also present numbers of white dwarfs predicted to be observed by Gaia. Results. We provide relationships and colour-colour diagrams among different photometric systems to allow the prediction and/or study of the Gaia white dwarf colours. We also include estimates of the number of sources expected in every galactic population and within a maximum parallax error. Gaia will increase the sample of known white dwarfs tenfold to about 200 000. Gaia will be able to observe thousands of very cool white dwarfs for the first time, which will greatly improve our understanding of these stars and of the early phases of star formation in our Galaxy.
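As an illustration of the synthetic-photometry step mentioned in the Methods, the sketch below shows how a magnitude can be computed by integrating a model spectral energy distribution through a passband, and how a simple colour transformation can then be fitted over a grid of models. The function names, the photon-weighted integral, the placeholder variables and the polynomial form of the fit are assumptions made for illustration; they are not the passbands, zero points or fits used in the paper.

```python
# Minimal sketch of synthetic photometry and colour-transformation fitting.
# Passbands, zero points and the polynomial fit below are placeholders, not
# the actual Gaia/Johnson-Cousins/SDSS/2MASS calibration of the paper.
import numpy as np

def synthetic_mag(wl, flux, band_wl, band_t, zero_point=0.0):
    """Photon-weighted synthetic magnitude of an SED through one passband.

    wl, flux        : model wavelength grid [nm] and flux density
    band_wl, band_t : passband wavelength grid [nm] and transmission
    """
    t = np.interp(wl, band_wl, band_t, left=0.0, right=0.0)
    mean_flux = np.trapz(flux * t * wl, wl) / np.trapz(t * wl, wl)
    return -2.5 * np.log10(mean_flux) + zero_point

def fit_colour_transform(colour_src, colour_dst, deg=3):
    """Least-squares polynomial mapping one colour index onto another."""
    return np.polynomial.Polynomial.fit(colour_src, colour_dst, deg)

# Usage sketch (hypothetical variable names): loop over a grid of pure-H,
# pure-He and mixed H/He white-dwarf model spectra, compute e.g. BP, RP and
# V, I magnitudes for each model, then fit V-I as a function of BP-RP:
#   bp_rp = [synthetic_mag(wl, f, bp_wl, bp_t) - synthetic_mag(wl, f, rp_wl, rp_t)
#            for wl, f in models]
#   v_i   = [...]                      # same models through Johnson V and I
#   transform = fit_colour_transform(bp_rp, v_i)
```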
Abstract:
Real-time glycemia is a cornerstone of metabolic research, particularly when performing oral glucose tolerance tests (OGTT) or glucose clamps. From 1965 to 2009, the gold-standard device for real-time plasma glucose assessment was the Beckman glucose analyzer 2 (Beckman Instruments, Fullerton, CA), whose technology couples a glucose oxidase enzymatic assay with oxygen sensors. Since its discontinuation in 2009, researchers have been left with few choices that use glucose oxidase technology. The first is the YSI 2300 (Yellow Springs Instruments Corp., Yellow Springs, OH), known to be as accurate as the Beckman(1). The YSI has been used extensively in clinical research studies and is used to validate other glucose monitoring devices(2). Its major drawback is that it is relatively slow and requires high maintenance. The Analox GM9 (Analox Instruments, London), more recent and faster, is increasingly used in clinical research(3) as well as in basic sciences(4) (e.g. 23 papers in Diabetes and 21 in Diabetologia).
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ the method to compute definition scripts for the JPEG standard progressive operation mode using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under the JPEG standard subject to a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts can be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
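As a rough sketch of the Laplacian quantization model on which such a method rests, the code below estimates by Monte Carlo the mean squared error introduced by uniform quantization of Laplacian-distributed DCT coefficients and converts the aggregated MSE into a PSNR; because the 8x8 DCT is orthonormal, the coefficient-domain MSE carries over to the pixel domain. The per-coefficient Laplacian scales and the quantization matrix used here are placeholders, not the paper's fitted model or optimised matrices.

```python
# Illustrative Monte Carlo estimate of the PSNR obtained when Laplacian DCT
# coefficients are uniformly quantized with a given 8x8 step matrix.  The
# Laplacian scales and the step matrix are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)

def quant_mse_laplacian(scale, step, n=200_000):
    """Monte Carlo MSE of uniform (mid-tread) quantization with step 'step'
    applied to a zero-mean Laplacian source with scale parameter 'scale'."""
    x = rng.laplace(loc=0.0, scale=scale, size=n)
    x_hat = step * np.round(x / step)
    return np.mean((x - x_hat) ** 2)

def predicted_psnr(scales, q_matrix, peak=255.0):
    """PSNR predicted from per-coefficient MSEs.  Since the 8x8 DCT is
    orthonormal, the mean coefficient-domain MSE equals the pixel-domain MSE."""
    mse = np.mean([quant_mse_laplacian(b, q)
                   for b, q in zip(scales.ravel(), q_matrix.ravel())])
    return 10.0 * np.log10(peak ** 2 / mse)

# Placeholder inputs: Laplacian scales that decay with spatial frequency and a
# flat quantization matrix; the paper instead derives the matrix from a target
# MSE/PSNR plus local constraints.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
scales = 40.0 / (1.0 + i + j)
q_matrix = np.full((8, 8), 16.0)

print(f"predicted PSNR ~ {predicted_psnr(scales, q_matrix):.1f} dB")
```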
Abstract:
Pure seminoma is a rare pathology of the young adult, often discovered in the early stages. Its prognosis is generally excellent and many therapeutic options are available, especially in stage I tumors. High cure rates can be achieved in several ways: standard treatment with radiotherapy is challenged by surveillance and chemotherapy. Toxicity issues and the patients' preferences should be considered when management decisions are made. This paper describes firstly the management of primary seminoma and its nodal involvement and, secondly, the various therapeutic options according to stage.
Abstract:
The purpose of this bachelor's thesis was to chart scientific research articles in order to present factors contributing to medication errors made by nurses in a hospital setting, and to introduce methods for preventing medication errors. Additionally, international and Finnish research was combined and the findings were reflected on in relation to the Finnish health care system. A literature review was conducted on 23 scientific articles. Data were searched systematically in the CINAHL, MEDIC and MEDLINE databases, as well as manually. The literature was analysed and the findings combined using inductive content analysis. The findings revealed that both organisational and individual factors contributed to medication errors. High workload, communication breakdowns, an unsuitable working environment, distractions and interruptions, and similar medication products were identified as organisational factors. Individual factors included nurses' inability to follow protocol, inadequate knowledge of medications and personal qualities of the nurse. Developing and improving the physical environment, error reporting, and medication management protocols were emphasised as methods for preventing medication errors. Investing in staff competence and well-being was also identified as a prevention method. The number of Finnish articles was small, and therefore the applicability of the findings to Finland is difficult to assess. However, the findings seem to fit the Finnish health care system relatively well. Further research is needed to identify the factors that contribute to medication errors in Finland, as this is necessary for developing prevention methods that fit the Finnish health care system.
Abstract:
Voltage fluctuations caused by parasitic impedances in the power supply rails are a major concern in modern ICs. These fluctuations spread to the various nodes of the internal sections and cause two effects: a degradation of performance, mainly impacting gate delays, and a noisy contamination of the quiescent levels of the logic that drives the node. Both effects are presented together in this paper, showing that both are a cause of errors in modern and future digital circuits. The paper groups the two error mechanisms and shows how the global error rate is related to the voltage deviation and to the clock period of the digital system.
Abstract:
This paper presents a probabilistic approach to modelling the problem of power supply voltage fluctuations. Error probability calculations are shown for some 90-nm technology digital circuits. The analysis considered here gives the timing-violation error probability as a new design quality factor, in contrast to conventional techniques that assume the full perfection of the circuit. The evaluation of the error bound can be useful for new design paradigms where retry and self-recovering techniques are applied to the design of high-performance processors. The method described here allows the performance of these techniques to be evaluated by calculating the expected error probability in terms of power supply distribution quality.
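A minimal sketch of the kind of calculation the abstract refers to, under assumptions introduced here for illustration: the critical-path delay follows an alpha-power-law-like dependence on the instantaneous supply voltage, the supply-voltage deviation is Gaussian, and a timing-violation error occurs whenever the delay exceeds the clock period. The model parameters below are illustrative, not the paper's 90-nm characterisation.

```python
# Illustrative timing-violation probability under Gaussian supply-voltage noise.
# Delay model and parameters are assumptions, not the paper's characterisation.
import math

def delay_at_voltage(vdd, d_nom, v_nom=1.0, alpha=1.3, vth=0.3):
    """Alpha-power-law style critical-path delay model (illustrative):
    delay scales as vdd / (vdd - vth)**alpha, normalised to d_nom at v_nom."""
    f = lambda v: v / (v - vth) ** alpha
    return d_nom * f(vdd) / f(v_nom)

def timing_error_probability(d_nom, t_clk, sigma_v, v_nom=1.0, n_sigma=6, steps=2001):
    """P(delay > clock period) when the supply-voltage deviation is Gaussian
    with standard deviation sigma_v (simple numerical integration)."""
    p = 0.0
    lo, hi = v_nom - n_sigma * sigma_v, v_nom + n_sigma * sigma_v
    dv = (hi - lo) / (steps - 1)
    for k in range(steps):
        v = lo + k * dv
        pdf = math.exp(-0.5 * ((v - v_nom) / sigma_v) ** 2) / (sigma_v * math.sqrt(2 * math.pi))
        if delay_at_voltage(v, d_nom, v_nom) > t_clk:   # droop slows the path down
            p += pdf * dv
    return p

# Example: nominal critical path of 0.90 ns, 1.0 ns clock, 30 mV supply noise.
print(timing_error_probability(d_nom=0.90e-9, t_clk=1.0e-9, sigma_v=0.03))
```

The error probability grows with the standard deviation of the supply voltage and shrinks as the clock period leaves more timing slack, which is the qualitative dependence the abstract describes.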
Abstract:
The legal and social problems that have arisen in recent years around financial swaps and preference shares raise the question of whether an error in contractual consent has occurred with this type of financial product. Starting from the content of the Spanish Civil Code and legal doctrine, the essential elements of the contract have been analysed, as well as the legislation applicable to financial instruments. With the help of case law it has been possible to verify that, in most of the cases brought before the courts concerning these contracts in which contractual annulment is requested, the main ground is the breach by credit institutions of their legal duties. This work highlights the importance of linking the contractual element of consent with the obligation of credit institutions to inform their clients. Thus, the flawed understanding of the contractual reality that clients express through their consent undoubtedly stems from the need to obtain all the relevant information about the contract. The duty to inform is closely tied to the duty to classify clients; both are legal obligations that the institutions bear as part of their duty of business loyalty. Financial institutions must therefore classify their clients and provide them with information, with even greater rigour in the case of retail clients. For all these reasons, in those cases involving retail clients in which the credit institutions could not prove that all the necessary information was provided, an error in consent has occurred. The clients did not know the true scope of the commitment or the costs to which they had bound themselves; there is no doubt that, in many of these cases, had they known the reality, they would not have contracted.
Abstract:
Location information is becoming increasingly necessary, as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various applications based on it. However, it is not possible to properly receive the GPS signal in indoor environments. For this reason, new indoor positioning systems are being developed. As the indoor environment is a very challenging scenario, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
Abstract:
Contrast enhancement is an image processing technique whose objective is to preprocess the image so that relevant information can be either seen or further processed more reliably. These techniques are typically applied when the image itself, or the device used for image reproduction, provides poor visibility and distinguishability of different regions of interest in the image. In most studies, the emphasis is on the visualization of image data, but this human-observer-biased goal often results in images that are not optimal for automated processing. The main contribution of this study is to express contrast enhancement as a mapping from N-channel image data to a 1-channel gray-level image, and to devise a projection method that yields an image with minimal error with respect to the correct contrast image. The projection, the minimum-error contrast image, possesses the optimal contrast between the regions of interest in the image. The method is based on estimating the probability density distributions of the region values, and it employs Bayesian inference to establish the minimum-error projection.
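The abstract states the idea of the method without details; the sketch below shows one simplified way such a minimum-error projection could be set up: multichannel samples from two regions of interest, a linear projection to a single gray-level axis, histogram estimates of the projected class densities, and a search for the direction with the smallest resulting Bayes error. The Gaussian test data, the restriction to linear projections and the random search are simplifications assumed here, not the authors' estimator.

```python
# Simplified illustration of a minimum-error 1-channel projection of N-channel
# data.  Test data, linear projection and random search are assumptions only.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical N-channel (here 3-channel) samples from two regions of interest.
n, dim = 2000, 3
region_a = rng.multivariate_normal([0.2, 0.5, 0.4], 0.01 * np.eye(dim), n)
region_b = rng.multivariate_normal([0.4, 0.6, 0.3], 0.02 * np.eye(dim), n)

def bayes_error_1d(x_a, x_b, bins=64):
    """Estimate the Bayes (minimum classification) error of two 1-D samples
    from histogram density estimates, assuming equal priors."""
    lo, hi = min(x_a.min(), x_b.min()), max(x_a.max(), x_b.max())
    pa, _ = np.histogram(x_a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(x_b, bins=bins, range=(lo, hi), density=True)
    width = (hi - lo) / bins
    return 0.5 * np.sum(np.minimum(pa, pb)) * width

def minimum_error_projection(a, b, n_candidates=5000):
    """Search random unit directions for the linear projection to one channel
    that minimises the estimated Bayes error between the two regions."""
    best_w, best_err = None, np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=a.shape[1])
        w /= np.linalg.norm(w)
        err = bayes_error_1d(a @ w, b @ w)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

w, err = minimum_error_projection(region_a, region_b)
gray_a = region_a @ w      # 1-channel values for region A under the projection
print("estimated minimum Bayes error between regions:", round(err, 4))
```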