961 results for "approximate calculation of sums"


Relevance: 100.00%

Abstract:

Currently available molecular biology tools allow forensic scientists to characterize DNA evidence found at crime scenes for a large variety of samples, including those of limited quantity and quality, and to achieve high levels of individualization. Yet standard forensic markers provide limited or no results when applied to mixed DNA samples in which the contributors are present in very different proportions (unbalanced DNA mixtures). This is an issue mostly for the analysis of trace samples collected on the victim or from touched objects. To address it, we recently proposed an innovative type of genetic marker, named DIP-STR, which pairs deletion/insertion polymorphisms (DIPs) with standard short tandem repeats (STRs). This compound marker allows detection of the minor DNA contributor in a DNA mixture of any gender and cellular origin with unprecedented resolution (beyond a DNA ratio of 1:1000). To provide an analytical tool useful in practice to forensic laboratories, this article describes the first set of 10 DIP-STR markers selected according to forensic technical standards. The novel DIP-STR regions are short (between 146 and 271 bp), include only highly polymorphic tri-, tetra- and pentanucleotide tandem repeats, and are located on different chromosomes or chromosomal arms to provide statistically independent results. This set of DIP-STRs can target the amplification of 0.03-0.1 ng of DNA mixed with a 1000-fold excess of major DNA. DIP-STR relative allele frequencies are estimated based on a survey of 103 Swiss individuals. Finally, this study provides an estimate of the occurrence of informative alleles and a calculation of the corresponding random match probability of the detected minor DIP-STR genotype, assessed across 10,506 pairwise conceptual mixtures.
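The random match probability mentioned in the abstract is conventionally the product of per-locus genotype frequencies across statistically independent markers, under Hardy-Weinberg assumptions. A minimal sketch of that arithmetic is below; the allele frequencies are illustrative placeholders, not the published Swiss data, and the function names are my own.

```python
def locus_genotype_frequency(p, q=None):
    """HWE genotype frequency: p^2 for a homozygote, 2pq for a heterozygote."""
    return p * p if q is None else 2 * p * q

def random_match_probability(genotypes):
    """Multiply per-locus genotype frequencies (statistically independent loci)."""
    rmp = 1.0
    for g in genotypes:
        rmp *= locus_genotype_frequency(*g)
    return rmp

# Three illustrative loci: two heterozygous, one homozygous.
profile = [(0.12, 0.08),  # heterozygote, allele frequencies 12% and 8%
           (0.25, 0.10),  # heterozygote
           (0.30,)]       # homozygote, allele frequency 30%
rmp = random_match_probability(profile)
```

With more loci of comparable polymorphism, the product quickly reaches the very small match probabilities that make a minor-contributor genotype evidentially useful.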

Relevance: 100.00%

Abstract:

We have investigated the behavior of bistable cells made up of four quantum dots and occupied by two electrons, in the presence of realistic confinement potentials produced by depletion gates on top of a GaAs/AlGaAs heterostructure. Such a cell represents the basic building block for logic architectures based on the concept of quantum cellular automata (QCA) and of ground state computation, which have been proposed as an alternative to traditional transistor-based logic circuits. We have focused on the robustness of the operation of such cells with respect to asymmetries derived from fabrication tolerances. We have developed a two-dimensional model for the calculation of the electron density in a driven cell in response to the polarization state of a driver cell. Our method is based on the one-shot configuration-interaction technique, adapted from molecular chemistry. From the results of our simulations, we conclude that an implementation of QCA logic based on simple "hole arrays" is not feasible, because of the extreme sensitivity to fabrication tolerances. As an alternative, we propose cells defined by multiple gates, where geometrical asymmetries can be compensated for by adjusting the bias voltages. Even though not immediately applicable to the implementation of logic gates and not suitable for large-scale integration, the proposed cell layout should allow an experimental demonstration of a chain of QCA cells.

Relevance: 100.00%

Abstract:

This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
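The propagation scheme described above follows the GUM Supplement 1 pattern: draw each input quantity from its assigned distribution, push the draws through the measurement model, and take the mean and standard deviation of the output as estimators of the measurand and its standard uncertainty. The sketch below uses an invented activity-like model and illustrative numbers, not the 103Pd calibration data.

```python
# Monte Carlo propagation of input uncertainties through a measurement
# model (GUM Supplement 1 style). Model and values are illustrative.
import random

def measurement_model(counts, efficiency, mass):
    """Hypothetical activity model: A = counts / (efficiency * mass)."""
    return counts / (efficiency * mass)

random.seed(0)
N = 100_000
samples = []
for _ in range(N):
    counts = random.gauss(12000.0, 110.0)  # counting statistics (normal)
    eff = random.uniform(0.93, 0.97)       # rectangular, type B evaluation
    mass = random.gauss(0.1000, 0.0002)    # weighing uncertainty (normal)
    samples.append(measurement_model(counts, eff, mass))

mean = sum(samples) / N                    # estimator of the measurand
std = (sum((x - mean) ** 2 for x in samples) / (N - 1)) ** 0.5  # std uncertainty
```

The same loop structure accommodates any measurement model and any of the input distributions generated by the algorithms in the electronic supplement.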

Relevance: 100.00%

Abstract:

We compared different approaches to analyzing running mechanics alterations during repeated treadmill sprints. Thirteen active male athletes performed five 5-second sprints with 25 seconds of recovery on an instrumented treadmill. This approach allowed continuous measurement of running kinetics/kinematics and calculation of vertical and leg stiffness variables, which were subsequently averaged over 3 distinct sections of the 5-second sprint (steps 2-5, 7-10, and 12-15) and over all steps (steps 2-15). Independently of the analyzed section, propulsive power and step frequency decreased with fatigue, while contact time and step length increased (P < .05). Except for step frequency, all mechanical variables varied (P < .05) across sprint sections. The only parameters that depend strongly on running velocity (propulsive power and vertical stiffness) showed a significant interaction (P < .05) between the analyzed sections, with a smaller magnitude of fatigue-induced change observed for steps 2-5. Considering all steps or only a few steps during the early, middle, or late phases of 5-second sprints provides similar mechanical outcomes during repeated treadmill sprinting, although acceleration induces noticeable differences between the sections studied. Furthermore, mechanical alterations may not be readily detectable during the early acceleration phase, so quantifying them from that phase alone is not recommended.
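The stiffness variables referred to above are commonly derived from a spring-mass model of running: vertical stiffness as peak vertical force over centre-of-mass displacement, and leg stiffness as peak force over leg-length change during contact. A minimal sketch follows; the force and displacement values are illustrative, not the study's data.

```python
def vertical_stiffness(f_max_n, com_displacement_m):
    """Vertical stiffness: peak vertical force / downward CoM displacement."""
    return f_max_n / com_displacement_m

def leg_stiffness(f_max_n, leg_compression_m):
    """Leg stiffness: peak force / peak leg-length change during contact."""
    return f_max_n / leg_compression_m

# Illustrative sprint-step values: 2200 N peak force, 4.5 cm CoM
# displacement, 12 cm leg compression.
k_vert = vertical_stiffness(2200.0, 0.045)   # N/m
k_leg = leg_stiffness(2200.0, 0.12)          # N/m
```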

Relevance: 100.00%

Abstract:

Case: An 11-year-old girl with Marfan syndrome was referred for cardiac MR (CMR) to measure the size of her thoracic aorta. She had a typical phenotype with arachnodactyly and abnormally long arms, and was tall and slim (156 cm, 28 kg, body mass index 11.5 kg/m2). She reported no symptoms. Cardiac auscultation revealed a prominent mid-systolic click and an end-systolic murmur at the apex. A recent echocardiogram showed a moderately dilated left ventricle with normal function and a mitral valve prolapse with moderate mitral regurgitation. CMR showed dilatation of the aortic root (38 mm, Z-score 8.9) and a severe prolapse of the mitral valve with regurgitation. The ventricular cavity was moderately dilated (116 ml/m2) and its contraction was hyperdynamic (stroke volume (SV) 97 ml; LVEF 72%, with LV volumes measured by the modified Simpson method from the apex to the mitral annulus). In this patient, however, the mitral prolapse was characterized by a severe backward movement of the valve toward the left atrium (LA) in systole, and the dyskinetic movement of the atrioventricular plane caused a ventricularisation of part of the LA in systole (Figure). This resulted in a significant reduction of the LVEF: more than ¼ of the apparent SV was displaced backwards into the ventricularized LA volume, reducing the effective LVEF to 51% (effective SV 69 ml). Moreover, by flow measurement, the SV across the ascending aorta was 30 ml (cardiac index 2.0 l/min/m2), allowing the calculation of a regurgitant fraction across the mitral valve of 56%, which was diagnostic of severe mitral valve insufficiency. Conclusion: This case illustrates the phenomenon of ventricularisation of the LA, where the severe prolapse gives the illusion of a higher attachment of the mitral leaflets within the atrial wall.
Besides the severe mitral regurgitation, this paradoxical backward movement of the valve causes intraventricular unloading during systole, reducing the apparent LVEF of 72% to an effective LVEF of only 51%. In addition, after accounting for the regurgitant volume, the forward flow fraction is only 22%. This combined involvement of the mitral valve could explain the discrepancy between a low-output state and an apparently hyperdynamic LV contraction. Due to its ability to precisely measure flows and volumes, CMR is particularly suited to detecting this phenomenon and quantifying its impact on LV pump function.
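The reported fractions follow directly from the volumes quoted in the case, as the short check below shows. Only the apparent SV, apparent LVEF, effective SV, and aortic SV come from the report; the end-diastolic volume is back-calculated from the first two.

```python
# Arithmetic behind the case's reported fractions, using the quoted volumes.
apparent_sv = 97.0       # ml, apparent stroke volume
apparent_lvef = 0.72
edv = apparent_sv / apparent_lvef      # end-diastolic volume, ~134.7 ml

effective_sv = 69.0      # ml, after subtracting the ventricularized LA volume
effective_lvef = effective_sv / edv    # ~0.51, as reported

aortic_sv = 30.0         # ml, forward flow across the ascending aorta
regurgitant_fraction = (effective_sv - aortic_sv) / effective_sv  # ~0.56
forward_flow_fraction = aortic_sv / edv                           # ~0.22
```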

Relevance: 100.00%

Abstract:

The northwestern margin of the Valencia trough is an area of low strain characterized by slow normal faults and low to moderate seismicity. Since the mid-1990s this area has been the subject of a number of studies on active tectonics, which have proposed different approaches to locating active faults and to calculating the parameters that describe their seismic cycle. Fifty-six active faults have been found and classified according to their characteristics: a) faults with clear evidence of large paleo-, historic or instrumental earthquakes (2/56); b) faults with evidence of accumulated activity during the Plio-Quaternary and with associated instrumental seismicity (7/56); c) faults with evidence of accumulated activity during the Plio-Quaternary and without associated instrumental seismicity (17/56); d) faults with associated instrumental seismicity and without evidence of accumulated activity during the Plio-Quaternary (30/56); and e) faults without evidence of activity, or inactive faults. The parameters that describe the seismic cycle of these faults have been evaluated by different methods that use the geological data obtained for each fault, except where paleoseismological studies were available. This classification can be applied to other areas with low-slip faults because of the simplicity of the approaches adopted. This study reviews the different approaches proposed and describes the active faults located, highlighting the need a) to better understand active faults in slow-strain zones through paleoseismological studies, and b) to include them in seismic hazard studies.
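As an illustration of the kind of seismic-cycle parameters that can be evaluated from geological fault data when paleoseismological studies are unavailable, the sketch below computes a seismic moment from fault area and mean slip per event, converts it to moment magnitude via the standard Hanks-Kanamori relation, and estimates a recurrence interval from the slip rate. All numerical values (shear modulus, fault dimensions, slip, slip rate) are generic placeholders, not Valencia trough data.

```python
import math

MU = 3.0e10  # Pa, typical crustal shear modulus (assumed)

def moment_magnitude(area_m2, slip_m):
    """Mw from seismic moment M0 = mu*A*D via Hanks-Kanamori (M0 in N m)."""
    m0 = MU * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def recurrence_years(slip_per_event_m, slip_rate_mm_per_yr):
    """Mean recurrence interval: slip per event divided by slip rate."""
    return slip_per_event_m * 1000.0 / slip_rate_mm_per_yr

# Illustrative slow normal fault: 15 km x 8 km rupture, 0.5 m slip per
# event, 0.05 mm/yr slip rate.
mw = moment_magnitude(area_m2=15e3 * 8e3, slip_m=0.5)
t = recurrence_years(slip_per_event_m=0.5, slip_rate_mm_per_yr=0.05)
```

The very long recurrence interval that falls out of such slip rates is precisely why instrumental seismicity alone under-represents these faults in hazard studies.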

Relevance: 100.00%

Abstract:

Coronary bypass grafting remains the best option for patients suffering from multivessel coronary artery disease, and the saphenous vein is used as an additional conduit for multiple complete revascularizations. However, long-term vein graft durability is poor, with almost 75% of grafts occluded after 10 years. To improve durability, the concept of an external supportive structure was successfully developed in recent years: the eSVS Mesh device (Kips Bay Medical) is an external support for vein grafts made of nitinol wire weft-knitted into a tubular form, with an approximate length of 24 cm and available in three diameters (3.5, 4.0 and 4.5 mm). The device is placed over the outer wall of the vein and carefully deployed to cover the full length of the graft. The mesh is flexible for full adaptability to the heart anatomy and is intended to prevent kinking and dilatation of the vein, in addition to suppressing the intima hyperplasia induced by the systemic blood pressure. The device is designed to reduce the vein diameter by about 15-20% at most, to prevent the radial expansion of the vein induced by the arterial blood pressure and the intima hyperplasia leading to graft failure. We describe the surgical technique for preparing the vein graft with the external saphenous vein graft support (eSVS Mesh) and share our preliminary clinical results.

Relevance: 100.00%

Abstract:

Objective To evaluate the association of conventional angiography (AG) with computed tomography angiography (CTA), as compared with CTA only, in the preoperative planning of the treatment of aortic diseases. Materials and Methods Retrospective study involving patients submitted to endovascular treatment of aortic diseases between January 2009 and July 2010, with preoperative CTA + conventional AG or CTA only. The patients were divided into two groups: G1, thoracic aortic diseases; and G2, abdominal aortic diseases. G1 was subdivided into 1A (preoperative AG + CTA) and 1B (preoperative CTA only). G2 was subdivided into 2C (CTA + AG) and 2D (CTA only). Results The authors evaluated 156 patients. In subgroups 1A and 1B, the rate of technical success was, respectively, 100% and 94.7% (p = 1.0), and the rate of therapeutic success was, respectively, 81% and 58% (p = 0.13). A higher number of complications was observed in subgroup 1B (p = 0.057). The accuracy of the prosthesis-size calculation was higher in subgroup 1A (p = 0.065). In turn, the rate of technical success in subgroups 2C and 2D was, respectively, 92.3% and 98.6% (p = 0.17), and the rate of therapeutic success was 73% and 98.6% (p = 0.79). Conclusion Preoperative conventional AG should be reserved for cases where CTA cannot provide all the information needed to plan the therapeutic intervention.

Relevance: 100.00%

Abstract:

Objective To suggest a national value for the diagnostic reference level (DRL), in terms of activity in MBq.kg–1, for nuclear medicine procedures with fluorodeoxyglucose (18F-FDG) in whole-body positron emission tomography (PET) scans of adult patients. Materials and Methods A survey on the values of 18F-FDG activity administered in Brazilian clinics was undertaken by means of a questionnaire including questions about the number and manufacturer of the installed equipment, model and detector type. The suggested DRL value was based on the calculation of the third quartile of the distribution of activity values reported by the clinics. Results Among the surveyed Brazilian clinics, 58% responded completely or partially to the questionnaire, and the results demonstrated variation of up to 100% in the reported radiopharmaceutical activity. The suggested DRL for 18F-FDG/PET activity was 5.54 MBq.kg–1 (0.149 mCi.kg–1). Conclusion The present study has demonstrated the lack of standardization of administered radiopharmaceutical activities for PET procedures in Brazil, corroborating the necessity of an official DRL value to be adopted in the country. The suggested DRL value demonstrates that there is room for optimization of the procedures and of the 18F-FDG/PET activities administered in Brazilian clinics, in order to reduce the doses delivered to patients. It is important to highlight that this value should be continually revised and optimized, at least every five years.
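The DRL derivation described above is a third-quartile calculation over the distribution of reported activities. A minimal sketch using the standard library follows; the activity values are invented for illustration, not the actual survey data.

```python
# DRL as the third quartile (75th percentile) of reported administered
# activities. Values below are illustrative placeholders, in MBq/kg.
import statistics

reported_activities = [3.2, 4.1, 4.4, 4.8, 5.0, 5.2, 5.5, 5.6, 6.0, 6.4]

# statistics.quantiles with n=4 returns the three quartile cut points
# (default "exclusive" method).
q1, q2, q3 = statistics.quantiles(reported_activities, n=4)
suggested_drl = q3  # MBq/kg
```

Clinics routinely administering more than the third-quartile value would then be candidates for optimization of their protocols.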

Relevance: 100.00%

Abstract:

Objective: To compare the accuracy of computer-aided ultrasound (US) and magnetic resonance imaging (MRI), by means of hepatorenal gradient analysis, in the evaluation of nonalcoholic fatty liver disease (NAFLD) in adolescents. Materials and Methods: This prospective, cross-sectional study evaluated 50 adolescents (aged 11–17 years), including 24 obese and 26 eutrophic individuals. All adolescents underwent computer-aided US, MRI, laboratory tests, and anthropometric evaluation. Sensitivity, specificity, positive and negative predictive values, and accuracy were evaluated for both imaging methods, with subsequent generation of the receiver operating characteristic (ROC) curve and calculation of the area under the ROC curve to determine the most appropriate cutoff point for the hepatorenal gradient in order to predict the degree of steatosis, utilizing the MRI results as the gold standard. Results: The obese group included 29.2% girls and 70.8% boys, and the eutrophic group, 69.2% girls and 30.8% boys. The prevalence of NAFLD was 19.2% in the eutrophic group and 83% in the obese group. The ROC curve generated for the hepatorenal gradient with a cutoff point of 13 presented 100% sensitivity and 100% specificity. When the same cutoff point was applied to the eutrophic group, false-positive results were observed in 9.5% of cases (90.5% specificity) and false-negative results in 0% (100% sensitivity). Conclusion: Computer-aided US with hepatorenal gradient calculation is a simple and noninvasive technique for the semiquantitative evaluation of hepatic echogenicity, and could be useful in the follow-up of adolescents with NAFLD, in population screening for this disease, and in clinical studies.
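Selecting a cutoff from an ROC analysis, as described above, amounts to scanning candidate thresholds and scoring each by sensitivity and specificity against the gold standard; one common criterion is Youden's J (sensitivity + specificity − 1). The sketch below uses invented gradients and labels, chosen so that the optimal cutoff happens to echo the reported value of 13; it does not reproduce the study's data or its exact selection criterion.

```python
def best_cutoff(values, labels):
    """Scan candidate cutoffs; return the one maximizing Youden's J."""
    best, best_j = None, -1.0
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < c and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= c and y == 0)
        sens = tp / (tp + fn)          # sensitivity at this cutoff
        spec = tn / (tn + fp)          # specificity at this cutoff
        j = sens + spec - 1.0          # Youden's J
        if j > best_j:
            best, best_j = c, j
    return best, best_j

gradients = [5, 7, 8, 10, 11, 13, 14, 16, 18, 20]   # hepatorenal gradient
steatosis = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]          # 1 = NAFLD on MRI
cutoff, youden = best_cutoff(gradients, steatosis)
```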

Relevance: 100.00%

Abstract:

Changes in the angle of illumination incident upon a 3D surface texture can significantly alter its appearance, implying variations in the image texture. These texture variations produce displacements of class members in the feature space, increasing the failure rates of texture classifiers. To avoid this problem, this paper presents a model-based texture recognition system that classifies textures seen from different distances and under different illumination directions. The system works on the basis of a surface model obtained by means of 4-source colour photometric stereo, used to generate 2D image textures under different illumination directions. The recognition system combines co-occurrence matrices for feature extraction with a nearest-neighbour classifier. Moreover, the recognition allows one to estimate the approximate direction of the illumination used to capture the test image.
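The feature-extraction step described above rests on gray-level co-occurrence matrices: counts of how often pairs of gray levels occur at a fixed pixel offset, from which Haralick-style features such as contrast are derived. A pure-Python sketch follows; the tiny image and the choice of contrast as the single feature are illustrative, not the paper's exact feature set.

```python
def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix: counts of level pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """Haralick contrast: (i-j)^2 weighted by normalized co-occurrence counts."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
c = contrast(glcm(img, 1, 0, 4))  # horizontal offset (dx=1, dy=0)
```

In the recognition system, a vector of such features per training image would feed the nearest-neighbour classifier.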

Relevance: 100.00%

Abstract:

Coastal birds are an integral part of coastal ecosystems, which nowadays are subject to severe environmental pressures. Effective measures for the management and conservation of seabirds and their habitats call for insight into their population processes and the factors affecting their distribution and abundance. Central to national and international management and conservation measures is the availability of accurate data and information on bird populations, as well as on environmental trends and on measures taken to solve environmental problems. In this thesis I address different aspects of the occurrence, abundance, population trends and breeding success of waterbirds breeding on the Finnish coast of the Baltic Sea, and discuss the implications of the results for seabird monitoring, management and conservation. In addition, I assess the position and prospects of coastal bird monitoring data in the processing and dissemination of biodiversity data and information in accordance with the Convention on Biological Diversity (CBD) and other national and international commitments. I show that important factors for seabird habitat selection are island area and elevation, water depth, shore openness, and the composition of island cover habitats. Habitat preferences are species-specific, with certain similarities within species groups. The occurrence of the colonial Arctic Tern (Sterna paradisaea) is partly affected by different habitat characteristics than its abundance. Using long-term bird monitoring data, I show that eutrophication and winter severity have reduced the populations of several Finnish seabird species. A major demographic factor through which environmental changes influence bird populations is breeding success. Breeding success can function as a more rapid indicator of sublethal environmental impacts than population trends, particularly for long-lived and slow-breeding species, and should therefore be included in coastal bird monitoring schemes.
Among my target species, local breeding success can be shown to affect the populations of the Mallard (Anas platyrhynchos), the Eider (Somateria mollissima) and the Goosander (Mergus merganser) after a time lag corresponding to their species-specific recruitment age. For some of the target species, the number of individuals in late summer can be used as an easier and more cost-effective indicator of breeding success than brood counts. My results highlight that the interpretation and application of habitat and population studies require solid background knowledge of the ecology of the target species. In addition, the special characteristics of coastal birds, their habitats, and coastal bird monitoring data have to be considered in the assessment of their distribution and population trends. According to the results, the relationships between the occurrence, abundance and population trends of coastal birds and environmental factors can be quantitatively assessed using multivariate modelling and model selection. Spatial data sets widely available in Finland can be utilised in the calculation of several variables that are relevant to the habitat selection of Finnish coastal species. For some habitat characteristics, field work is still required due to a lack of remotely sensed data or the low resolution of readily available data in relation to the fine scale of the habitat patches in the archipelago. While long-term data sets exist for water quality and weather, the lack of data concerning, for instance, the food resources of birds hampers more detailed studies of environmental effects on bird populations. Intensive studies of coastal bird species in different archipelago areas should be encouraged. The provision and free delivery of high-quality coastal data concerning bird populations and their habitats would greatly increase the capability of ecological modelling, as well as the management and conservation of coastal environments and communities.
International initiatives that promote open spatial data infrastructures and sharing are therefore highly regarded. To function effectively, international information networks, such as the biodiversity Clearing House Mechanism (CHM) under the CBD, need to be rooted at regional and local levels. Attention should also be paid to the processing of data for higher levels of the information hierarchy, so that data are synthesized and developed into high-quality knowledge applicable to management and conservation.

Relevance: 100.00%

Abstract:

Approximations are part of everyday practice in physical chemistry. In many chemistry textbooks, approximations are justified qualitatively rather than quantitatively. We developed examples that allow the quantitative impact of an approximation to be evaluated, taking into account the error tolerated in the approximate calculation. The estimated error should serve as a guide for establishing the validity of calculations that rely on the approximation. In this way, the shortcut of replacing an exact calculation with an approximate one can be used without loss of quality in the results, while also indicating when the adopted criteria are valid.
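A worked example of the kind of quantitative check the abstract advocates: for a weak acid, [H+] is often approximated as sqrt(Ka·C), while the exact value solves the quadratic x² + Ka·x − Ka·C = 0. Comparing the two gives the relative error of the shortcut. The acid below is illustrative (an acetic-acid-like Ka); it is my example, not one from the paper.

```python
import math

def h_approx(ka, c):
    """Shortcut: [H+] ~ sqrt(Ka*C), valid when dissociation is small."""
    return math.sqrt(ka * c)

def h_exact(ka, c):
    """Exact positive root of x^2 + Ka*x - Ka*C = 0."""
    return (-ka + math.sqrt(ka * ka + 4 * ka * c)) / 2

ka, c = 1.8e-5, 0.10   # illustrative Ka (mol/L) and concentration (mol/L)
approx, exact = h_approx(ka, c), h_exact(ka, c)
relative_error = abs(approx - exact) / exact   # ~0.7% here
```

At this Ka and concentration the shortcut errs by well under 1%, so the approximation is safely within a typical tolerance; diluting the acid or raising Ka grows the error, and the same calculation flags when the shortcut stops being valid.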

Relevance: 100.00%

Abstract:

During the last few years, the discussion on the marginal social costs of transportation has been active. Applying the externalities as a tool to control transport would fulfil the polluter-pays principle and simultaneously create a fair control method between the transport modes. This report presents the results of two calculation algorithms developed to estimate the marginal social costs based on the externalities of air pollution. The first algorithm calculates future scenarios of the externalities of sea transport traffic in the Gulf of Finland until 2015. The second algorithm calculates the externalities of Russian passenger car transit traffic via Finland, taking into account both sea and road transport. The algorithm estimates the ship-originated emissions of carbon dioxide (CO2), nitrogen oxides (NOx), sulphur oxides (SOx) and particulates (PM), and the externalities for each year from 2007 to 2015. The total NOx emissions in the Gulf of Finland from the six ship types were almost 75.7 kilotons (Table 5.2) in 2007. The ship types are: passenger (including cruisers and ROPAX vessels), tanker, general cargo, Ro-Ro, container and bulk vessels. Due to the increase of traffic, the estimate for NOx emissions for 2015 is 112 kilotons. The NOx emission estimate for the whole of Baltic Sea shipping is 370 kilotons in 2006 (Stipa & al, 2007). The total marginal social costs due to ship-originated CO2, NOx, SOx and PM emissions in the GOF were calculated at almost 175 million Euros in 2007. The costs will increase to nearly 214 million Euros in 2015 due to the traffic growth. The major part of the externalities is due to CO2 emissions. If we neglect the CO2 emissions by subtracting the CO2 externalities from the results, we get total externalities of 57 million Euros in 2007. After eight years (2015), the externalities would be 28% lower, at 41 million Euros (Table 8.1). This results from the regulation reducing the sulphur content of marine fuels.
The majority of the new car transit goes through Finland to Russia due to the lack of port capacity in Russia. The number of cars was 339,620 vehicles (Statistics of Finnish Customs 2008) in 2005. The externalities are calculated for the transportation of passenger vehicles as follows: by ship to a Finnish port and, after that, by trucks to the Russian border checkpoint. The externalities are between 2 and 3 million Euros (year 2000 cost level) for each route. The ports included in the calculations are Hamina, Hanko, Kotka and Turku. With Euro-3 standard trucks, the port of Hanko would be the best choice for transporting the vehicles, because of the lower emissions of new trucks and the shorter shipping distance. If the trucks are more-polluting Euro-1 trucks, the port of Kotka would be the best choice. This indicates that truck emissions have a considerable effect on the externalities and that the transportation of light cargo, such as passenger cars, by ship produces considerably high emission externalities. The emission externalities approach offers a new insight for valuing the multiple traffic modes. However, the calculation of marginal social costs based on air emission externalities should not be regarded as a ready-made calculation system. The system is clearly in need of improvement, but it can already be considered a potential tool for political decision making.
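The core bookkeeping behind such externality algorithms is simple: multiply the emitted mass of each pollutant by a unit damage cost and sum over pollutants. The sketch below shows that structure; the unit costs and emission tonnages are invented placeholders, not the report's figures.

```python
# Externality = sum over pollutants of (emitted mass x unit damage cost).
# All numbers below are hypothetical, for illustration only.
UNIT_COST_EUR_PER_TONNE = {
    "CO2": 20.0,
    "NOx": 4000.0,
    "SOx": 6000.0,
    "PM": 10000.0,
}

def externality_eur(emissions_tonnes):
    """Total externality in euros for a dict of pollutant -> tonnes emitted."""
    return sum(mass * UNIT_COST_EUR_PER_TONNE[p]
               for p, mass in emissions_tonnes.items())

# Illustrative annual ship emissions for one route, in tonnes.
ship_emissions = {"CO2": 500_000.0, "NOx": 12_000.0, "SOx": 3_000.0, "PM": 400.0}
total = externality_eur(ship_emissions)
```

Running the same calculation per year with traffic-growth and fuel-regulation scenarios applied to the emission inventory yields the kind of 2007-2015 cost trajectories the report presents.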

Relevance: 100.00%

Abstract:

It is often reasonable to convert an old boiler to a bubbling fluidized bed boiler instead of building a new one. A converted boiler consists of old and new heat surfaces, which must be fitted to operate together. Predicting heat transfer in such non-ideal conditions poses challenges for designers. Two converted boilers located in Poland were studied on the basis of acceptance tests and further measurements. The boiler process was calculated with a boiler design program, the main interest being heat transfer in the superheaters and the factors affecting it. The theory of heat transfer is presented according to information found in the literature. Results obtained from the experimental studies and the calculations were compared. With correct definitions, the calculated parameters corresponded well to the measured data at the boiler's maximum design load. However, overload situations proved difficult to model, at least without considering changes in the combustion process, which requires readjusting the input values of the design program.
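A basic building block of the superheater heat-transfer calculations a boiler design program performs is the rating equation Q = U·A·ΔT_lm for a heat-exchange surface. The sketch below evaluates it for a counter-flow arrangement; the heat transfer coefficient, area, and temperatures are illustrative, not data from the studied boilers.

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-flow surface."""
    dt1 = t_hot_in - t_cold_out   # terminal difference at the hot inlet
    dt2 = t_hot_out - t_cold_in   # terminal difference at the hot outlet
    if abs(dt1 - dt2) < 1e-9:     # equal differences: LMTD equals either one
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Illustrative superheater: flue gas 900 -> 600 C, steam 350 -> 480 C.
U = 60.0    # W/(m^2 K), assumed overall heat transfer coefficient
A = 800.0   # m^2, assumed superheater surface area
dT = lmtd(900.0, 600.0, 350.0, 480.0)
Q = U * A * dT   # transferred heat, W (~15.7 MW here)
```

In a conversion, the mismatch between the design program's assumed U (fouling, gas-side velocity, radiation share) and the converted furnace's actual conditions is exactly where calculated and measured superheater duties diverge, particularly at overload.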