965 results for Global test


Relevance: 30.00%

Abstract:

DNA barcoding is a recently proposed global standard in taxonomy based on DNA sequences. The two main goals of DNA barcoding methodology are the assignment of specimens to a species and the discovery of new species. There are two main underlying assumptions: i) species are reciprocally monophyletic, and ii) intraspecific divergence is always less than interspecific divergence. Here we present a phylogenetic analysis of the family Potamotrygonidae based on the mitochondrial cytochrome c oxidase I gene, sampling 10 of the 18 to 20 valid species, including two undescribed species. Potamotrygonidae systematics is still not fully resolved: several species remain to be described, while others are difficult to delimit because of overlapping morphological characters and shared complex color patterns. Our results suggest that the family passed through a process of rapid speciation and that the species Potamotrygon motoro, P. scobina, and P. orbignyi share haplotypes extensively. Our results suggest that systems of specimen identification based on DNA sequences, together with morphological and/or ecological characters, can aid taxonomic studies, but that delimitation of new species based on threshold values of genetic distances is overly simplistic and misleading.
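For illustration, here is a minimal sketch of the threshold-based identification scheme the abstract argues against. The species names come from the abstract, but the distances and the commonly cited ~2% cutoff are illustrative assumptions, not data from the study:

```python
# Threshold-based DNA barcoding assignment: the approach the abstract
# criticizes as overly simplistic. All numbers are hypothetical.

# Pairwise genetic distances (e.g., K2P) from a query sequence to
# reference barcodes (assumed values):
reference_distances = {
    "Potamotrygon motoro": 0.004,
    "Potamotrygon scobina": 0.006,   # nearly as close: haplotype sharing
    "Potamotrygon orbignyi": 0.021,
}

THRESHOLD = 0.02  # a commonly cited ~2% cutoff; an assumption here

def assign_species(distances, threshold):
    """Assign the query to the nearest reference species if the distance
    falls below the threshold; otherwise flag a putative new species."""
    species, d = min(distances.items(), key=lambda kv: kv[1])
    return species if d <= threshold else "putative new species"

print(assign_species(reference_distances, THRESHOLD))
# With extensive haplotype sharing, several species sit below the
# threshold at once, so the nearest-neighbor call is unreliable.
```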

Relevance: 30.00%

Abstract:

This paper applies a genetic algorithm with a hierarchically structured population to unconstrained optimization problems. The population consists of individuals distributed over several overlapping clusters, each with a leader and a variable number of support individuals. The hierarchy establishes that leaders must be fitter than their supporters, with the topological organization of the clusters following a tree. Computational tests evaluate different population structures, population sizes, and crossover operators for better algorithm performance. A set of known benchmark test problems is solved, and the results are compared with those obtained by other methods described in the literature, namely two genetic algorithms, simulated annealing, differential evolution, and particle swarm optimization. The results indicate that the method achieves better performance than the previous approaches with regard to the two criteria usually employed for comparison: the number of function evaluations and the rate of success. The method also performs better when the number of problems solved is taken into account. (C) 2013 Elsevier B.V. All rights reserved.
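As an illustration of the population structure, here is a minimal sketch of the hierarchy-maintenance step, assuming a ternary tree and the sphere benchmark; the paper's exact tree arity, operators, and parameters are not given in the abstract:

```python
import random

# Hierarchically structured population: individuals sit on a tree, each
# internal node leads the cluster formed with its children, and a leader
# must stay fitter than its supporters.

ARITY = 3        # children per cluster leader (assumed)
POP_SIZE = 13    # complete ternary tree: 1 + 3 + 9 nodes

def fitness(x):
    """Sphere function, a standard unconstrained minimization benchmark."""
    return sum(v * v for v in x)

def children(i):
    """Indices of the supporters led by node i."""
    return [ARITY * i + k for k in range(1, ARITY + 1) if ARITY * i + k < POP_SIZE]

def enforce_hierarchy(pop):
    """Swap a leader with its fittest supporter whenever the supporter is
    better; one top-down sweep moves improvements up one level, so
    repeated sweeps carry the best solutions toward the root."""
    for i in range(POP_SIZE):
        kids = children(i)
        if kids:
            best = min(kids, key=lambda j: fitness(pop[j]))
            if fitness(pop[best]) < fitness(pop[i]):
                pop[i], pop[best] = pop[best], pop[i]

pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(POP_SIZE)]
enforce_hierarchy(pop)
print("root leader fitness:", fitness(pop[0]))
```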

Relevance: 30.00%

Abstract:

Graduate Program in Physical Therapy (Pós-graduação em Fisioterapia) - FCT

Relevance: 30.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 30.00%

Abstract:

Distributed Software Development (DSD) is a development strategy that meets globalization needs for increased productivity and reduced cost. However, temporal distance, geographical dispersion, and socio-cultural differences raise new challenges and, above all, add new requirements for the communication, coordination, and control of projects. Among these new demands is the need for a software process that adequately supports distributed software development. This paper presents an integrated development-and-test approach that takes the peculiarities of distributed teams into account. The approach aims to support DSD by providing better project visibility, improving communication between the development and test teams, and reducing the ambiguity and difficulty of understanding artifacts and activities. The integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities relevant to development and test processes; (ii) defining the elements needed to compose an integrated development-and-test approach that supports distributed teams; (iii) describing and specifying the workflows, artifacts, and roles of the approach; and (iv) representing the approach appropriately to enable its effective communication and understanding.

Relevance: 30.00%

Abstract:

The effects of three types of global ischemia induced by carotid artery occlusion on the motor and exploratory behaviors of gerbils were evaluated with the activity cage and rota-rod tests. Animals were divided according to two surgical criteria: unilateral (UNI) or bilateral (BIL) carotid occlusion, with (REP) or without (OCL) reperfusion; their behavior was evaluated on the fourth (4) or sixth (6) day. There was a reduction in cell number in the striatum, the M1 area of the motor cortex, and the hippocampal CA1 area in all groups in comparison to control animals. For the M1 area and the striatum, the largest reduction was observed in the UNI6, UNI4, and BIL4 groups. Neuronal loss was also observed in the CA1 area of BIL4 rodents. In the activity cage test, there was a decrease in crossings and rearings in all groups compared to control. The reperfusion, unilateral, and bilateral occlusion groups showed a decrease in crossings; only BIL4 showed a decrease in rearing. In the rota-rod test, all groups except UNIOCL6 showed a decrease in balance in comparison to control, and both REP4 groups showed the largest decrease. These findings suggest that both unilateral and bilateral carotid occlusion with reperfusion impair motor and exploratory behavior. (C) 2011 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

The objective of this study was to translate and culturally adapt the Global Appraisal of Individual Needs - Initial instrument and to calculate its content validity index. This is a methodological study designed for the cultural adaptation of the instrument. The instrument was translated into Portuguese in two versions, which gave rise to a synthesis of the translations; this synthesis was then submitted to the evaluation of four judges, experts in the field of alcohol and other drugs. After the suggested changes were made, the instrument was back-translated and resubmitted to the judges and to the authors of the original instrument, resulting in the final version, Avaliação Global das Necessidades Individuais - Inicial. The content validity index of the instrument was 0.91, considered valid according to the literature. The instrument Avaliação Global das Necessidades Individuais - Inicial was thus culturally adapted to the Portuguese spoken in Brazil; however, it was not tested with the target population, so further studies should be performed to assess its reliability and validity.
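A minimal sketch of a content validity index computation of the kind reported above, assuming the common convention of a 4-point relevance scale with ratings of 3 or 4 counting as agreement; the ratings matrix is hypothetical, since the study's item-level scores are not given in the abstract:

```python
# Content validity index (CVI) from judges' relevance ratings.

ratings = [  # rows: items, columns: the four judges (hypothetical values)
    [4, 4, 3, 4],
    [3, 4, 4, 4],
    [4, 2, 4, 3],
    [4, 4, 4, 4],
]

def item_cvi(item_ratings):
    """Proportion of judges rating the item 3 or 4 (relevant)."""
    return sum(1 for r in item_ratings if r >= 3) / len(item_ratings)

# Scale-level CVI as the average of the item-level indices (S-CVI/Ave).
scale_cvi = sum(item_cvi(item) for item in ratings) / len(ratings)
print(f"S-CVI/Ave = {scale_cvi:.2f}")  # values near or above 0.90 are typically judged acceptable
```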

Relevance: 30.00%

Abstract:

The regional monsoons of the world have long been viewed as seasonal atmospheric circulation reversals, analogous to a thermally driven land-sea breeze on a continental scale. This conventional view is now being integrated at a global scale, and a new paradigm has emerged that considers regional monsoons to be manifestations of global-scale seasonal changes in response to overturning of the atmospheric circulation in the tropics and subtropics, and hence interactive components of a single Global Monsoon (GM) system. The paleoclimate community, however, tends to view the 'paleomonsoon' (PM) largely in terms of regional circulation phenomena. In the past decade, many high-quality speleothem oxygen isotope (delta O-18) records have been established from the Asian Monsoon and South American Monsoon regions that primarily reflect changes in the integrated intensity of monsoons on orbital-to-decadal timescales. With the emergence of these high-resolution, absolutely dated records from both sides of the Equator, it is now possible to test the concept of a 'Global Paleo-Monsoon' (GPM) on a wide range of timescales. Here we present a comprehensive synthesis of globally distributed speleothem delta O-18 records and highlight three aspects of the GPM that are comparable to the modern GM: (1) GPM intensity swings on different timescales; (2) their global extent; and (3) an anti-phased inter-hemispheric relationship between the Asian and South American monsoon systems on a wide range of timescales.

Relevance: 30.00%

Abstract:

The reduction of friction and wear in systems with metal-to-metal contact, as in several mechanical components, is a traditional challenge in tribology. In this context, this work presents a computational study based on the linear Archard wear law and finite element modeling (FEM) to analyze the unlubricated sliding wear observed in typical pin-on-disc tests. The model was developed in the finite element software Abaqus® with 3-D deformable geometries and elastic-plastic material behavior for the contact surfaces. Archard's wear model was implemented in a FORTRAN user subroutine (UMESHMOTION) to describe sliding wear. Debris and oxide formation mechanisms were taken into account through a global wear coefficient obtained from experimental measurements. The implementation computes surface wear incrementally from the nodal displacements, using adaptive mesh tools that rearrange local nodal positions. In this way, the worn track is obtained and the new surface profile is integrated for mass-loss assessment. This work also presents experimental pin-on-disc tests with AISI 4140 pins on rotating AISI H13 discs under normal loads of 10, 35, 70, and 140 N, which represent, respectively, the mild, transition, and severe wear regimes, at a sliding speed of 0.1 m/s. Numerical and experimental results were compared in terms of wear rate and friction coefficient. Furthermore, the numerical simulation was used to analyze the stress field distribution and the changes in the surface profile across the worn track of the disc. The numerical formulation proved more appropriate for predicting the mild wear regime than the severe regime, especially because of the shorter running-in period observed at lower loads, which characterizes the mild regime.
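A minimal sketch of the incremental Archard update that a UMESHMOTION-style routine performs at contact nodes inside the FE loop; the wear coefficient, pressures, and time step below are illustrative assumptions, not the study's measured values:

```python
# Local form of the linear Archard law: dh = k * p * ds, with dh the
# wear-depth increment, k a wear coefficient already divided by hardness
# (1/Pa), p the contact pressure, and ds the sliding increment.

K_WEAR = 1.0e-12            # global wear coefficient, 1/Pa (assumed)
DS = 0.1 * 1.0e-3           # sliding increment: 0.1 m/s times a 1 ms step, in m

contact_pressure = [35.0e6, 42.0e6, 28.0e6]   # Pa at three contact nodes (assumed)
wear_depth = [0.0, 0.0, 0.0]                  # accumulated nodal wear, m

for step in range(10_000):                    # 10 s of sliding
    for n, p in enumerate(contact_pressure):
        wear_depth[n] += K_WEAR * p * DS
        # In the FE model, each node is then moved inward along the local
        # surface normal by the adaptive-meshing machinery, reshaping the
        # worn track so the profile can be integrated for mass loss.

print([f"{h * 1e6:.2f} um" for h in wear_depth])
```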

Relevance: 30.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step toward discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test.

This PhD thesis deals with the development of two different seismic location techniques. The first, known as double-difference joint hypocenter determination (DDJHD), is aimed at locating closely spaced events at a global scale. The locations obtained by this method have high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we used cross-correlation among digital waveforms to minimize the errors linked to incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station (at the global scale) and on the similarity between the waveforms of the same event at two different sensors of the tripartite array (at the local scale).

After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. The large geometrical spreading noticeable in the standard locations (namely, the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can assume real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times are removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements: the two sets of locations (with and without cross-correlation) are very similar, suggesting that cross-correlation did not substantially improve the precision of the manual picks. Probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area.

At the local scale, in addition to cross-correlation, we performed signal interpolation to improve the time resolution. The algorithm so developed was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results point to the following conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly under poor SNR conditions). Another remarkable feature of our procedure is that it does not require long processing times, so the user can check the results immediately. During a field survey, this makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if the early results so suggest.
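For the local-scale processing, here is a minimal sketch of the cross-correlation delay estimate with sub-sample (parabolic) refinement, in the spirit of the signal interpolation mentioned above; the synthetic Ricker-wavelet traces and sampling rate are illustrative assumptions:

```python
import numpy as np

FS = 100.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 5, 1 / FS)

def ricker(t, t0, f0=4.0):
    """A Ricker wavelet centered at t0, standing in for a seismic phase."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

trace_a = ricker(t, 2.000) + 0.05 * np.random.randn(t.size)
trace_b = ricker(t, 2.034) + 0.05 * np.random.randn(t.size)  # 34 ms later

cc = np.correlate(trace_b, trace_a, mode="full")
k = int(np.argmax(cc))                       # integer-lag peak
# Parabolic (3-point) interpolation refines the peak below one sample.
num = cc[k - 1] - cc[k + 1]
den = 2.0 * (cc[k - 1] - 2.0 * cc[k] + cc[k + 1])
delta = num / den if den != 0 else 0.0
lag = (k - (trace_a.size - 1) + delta) / FS
print(f"estimated delay: {lag * 1000:.1f} ms")  # close to the 34 ms offset
```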

Relevance: 30.00%

Abstract:

This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions, and strategic implications. Alliances, networks, and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures with which to analyze the phenomenon. The interplay between the three channels, together with the richness of available research methods, has allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinguished features as a high-tech cluster and for its steady double-digit yearly growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of four domestic companies and two MNCs active in the cluster, according to a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence and is further confirmed by quantitative analyses at tie level. At firm level, the degree of specialization seems to influence the kind of knowledge shared, while at tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers inter-firm networks; focusing particularly on the role of the cluster boundary, internal and external networks are analyzed in their size, long-term orientation, and degree of exploration. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to those of the internal network. The final paper addresses the softer, more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed, which considers both the acquisition and the loss of employees as building inter-firm ties, thus enhancing a company's overall social capital. Negative binomial regression analyses at dyad level test the significant impact of cluster affiliation (cluster firms vs. non-cluster firms), industry affiliation (IT firms vs. non-IT firms), and foreign affiliation (MNCs vs. domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
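A minimal sketch of a dyad-level negative binomial regression of this kind, using simulated counts and the three affiliation dummies named in the abstract; the data, effect sizes, and dispersion value are illustrative assumptions, not the study's results:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_dyads = 500

cluster = rng.integers(0, 2, n_dyads)   # 1 if both firms are in the cluster
it_firm = rng.integers(0, 2, n_dyads)   # 1 if both firms are IT firms
foreign = rng.integers(0, 2, n_dyads)   # 1 if the dyad involves an MNC

# Simulate overdispersed counts of inter-firm personnel moves.
mu = np.exp(-0.5 + 0.8 * cluster + 0.5 * it_firm - 0.3 * foreign)
moves = rng.negative_binomial(n=2, p=2 / (2 + mu))

X = sm.add_constant(np.column_stack([cluster, it_firm, foreign]))
model = sm.GLM(moves, X, family=sm.families.NegativeBinomial(alpha=0.5))
result = model.fit()
print(result.summary())  # exponentiated coefficients read as rate ratios
```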

Relevance: 30.00%

Abstract:

The topic chosen concerns the adoption of private standards by agri-food companies and its consequences for the overall management of the firm. In particular, the aim of this work is to assess the implications of the adoption of the BRC Global Standard for Food Safety by Italian agri-food companies. The assessment of this impact is based on the perceptions of company managers regarding economic, managerial, commercial, quality-related, and organizational aspects. The research followed two main steps. First, seven in-depth interviews were conducted with the Quality Managers of BRC Food-certified Italian agri-food companies. The variables extracted from the qualitative content analysis of the interviews were included, together with those identified in the literature, in the questionnaire created for the subsequent survey. The questionnaire was sent by e-mail, with telephone support, to a sample of companies selected by random sampling. After a predefined collection period, 192 completed questionnaires had been returned. Descriptive analysis of the data shows that the Quality Managers largely agree with the statements concerning the elements of impact. The most widely shared statements concern: the efficiency of the HACCP system, the efficiency of the traceability system, control procedures, staff training, better management of emergencies and non-conformities, and better implementation and understanding of other certified management systems. ANOVA between the qualitative and quantitative variables, with the corresponding F test, shows that company characteristics such as geographical area, company size, product category, and status with respect to ISO 9001 can influence the respondents' opinions in different ways. Subsequently, a factor analysis extracted 8 factors from an initial set of 28 variables. Based on these factors, a hierarchical cluster analysis segmented the sample into 5 distinct groups, each interpreted through a profile determined by its positioning with respect to the various factors. The results were validated through focus groups with researchers and industry practitioners, and were further supported by a subsequent qualitative survey of 4 large British retailers. The purpose of this follow-up survey was to assess whether divergent opinions toward suppliers exist, which would support the hypothesis of an information asymmetry problem that still persists in the main contractual relationships despite the presence of private standards. Further research could estimate whether assessing the impact of the BRC standard can help processing companies implement other quality standards, and which variables instead influence perceptions of the costs of adopting the standard.
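A minimal sketch of the factor-extraction and clustering pipeline described above (8 factors from 28 variables, 5 hierarchical clusters over 192 respondents), with simulated responses standing in for the survey data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(192, 28)).astype(float)  # 192 questionnaires, 5-point scale (simulated)

# Extract 8 factors from the 28 variables.
fa = FactorAnalysis(n_components=8, random_state=0)
scores = fa.fit_transform(responses)        # factor scores per respondent

# Hierarchical (Ward) clustering on the factor scores, cut at 5 groups.
Z = linkage(scores, method="ward")
groups = fcluster(Z, t=5, criterion="maxclust")

# Profile each group by its mean position on the factors.
for g in range(1, 6):
    profile = scores[groups == g].mean(axis=0)
    print(f"group {g}: n={np.sum(groups == g)}, factor means={np.round(profile, 2)}")
```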

Relevance: 30.00%

Abstract:

Background: Decreased exercise capacity and a reduced peak oxygen uptake are present in most patients affected by hypertrophic cardiomyopathy (HCM). In addition, an abnormal blood pressure response during a maximal exercise test has been associated with a high risk of sudden cardiac death in adult patients with HCM. Cardiopulmonary exercise testing (CPET) has therefore become an important part of the evaluation of HCM patients, but data on its role in pediatric patients with HCM are quite limited. Methods and results: Between 2004 and 2010, using CPET and echocardiography, we studied 68 children (mean age 13.9 ± 2 years) with HCM. The exercise test was completed by all patients without adverse complications. The mean achieved VO2 max was 31.4 ± 8.3 mL/kg/min, corresponding to 77.5 ± 16.9% of the predicted value; 51 patients (75%) reached a subnormal VO2 max. On univariate analysis, the achieved VO2 as a percentage of predicted and the peak exercise systolic blood pressure (BP) Z score were inversely associated with maximum left ventricular (LV) wall thickness and with the E/Ea ratio, and directly related to the Ea and Sa wave velocities. No association was found with the LV outflow tract gradient. During a mean follow-up of 2.16 ± 1.7 years, 9 patients reached the defined clinical end point of death, transplantation, implanted cardioverter-defibrillator (ICD) shock, ICD implantation for secondary prevention, or myectomy. Patients with a peak VO2 < 52% of predicted or a peak systolic BP Z score < -5.8 had lower event-free survival at follow-up. Conclusions: Exercise capacity is decreased in pediatric patients with HCM, and global ventricular function seems to be the most important determinant of exercise capacity in these patients. CPET seems to play an important role in the prognostic stratification of children affected by HCM.
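An event-free-survival comparison of this kind could be illustrated with Kaplan-Meier estimates and a log-rank test; the abstract does not state which survival method the authors used, and the follow-up data below are simulated placeholders, not patient data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 68
low_vo2 = rng.integers(0, 2, n).astype(bool)                        # peak VO2 < 52% of predicted
time = rng.exponential(scale=np.where(low_vo2, 2.0, 6.0), size=n)   # follow-up, years (simulated)
event = rng.random(n) < np.where(low_vo2, 0.4, 0.1)                 # end point reached (simulated)

kmf = KaplanMeierFitter()
for group, label in [(low_vo2, "peak VO2 < 52%"), (~low_vo2, "peak VO2 >= 52%")]:
    kmf.fit(time[group], event_observed=event[group], label=label)
    print(label, "median event-free time:", kmf.median_survival_time_)

res = logrank_test(time[low_vo2], time[~low_vo2],
                   event_observed_A=event[low_vo2],
                   event_observed_B=event[~low_vo2])
print(f"log-rank p = {res.p_value:.3f}")
```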

Relevance: 30.00%

Abstract:

A year of satellite-borne lidar CALIOP data is analyzed, and statistics on the occurrence and distribution of the bulk properties of cirri are provided. The relationship between environmental and cloud physical parameters and the shape of the backscatter profile (BSP) is investigated. It is found that the CALIOP BSP is mainly affected by cloud geometrical thickness, while only minor effects can be attributed to other quantities such as optical depth or temperature. Polynomial functions are provided to fit mean BSPs as functions of geometrical thickness and position within the cloud layer. It is demonstrated that, under realistic hypotheses, the mean BSP is linearly proportional to the IWC profile. The IWC parameterization is included in the RT-RET retrieval algorithm, which is exploited to analyze infrared radiance measurements in the presence of cirrus clouds during the ECOWAR field campaign. Retrieved microphysical and optical properties of the observed cloud are used as input parameters in a forward RT simulation over the 100-1100 cm-1 spectral interval and compared with interferometric data to test the ability of the current database of ice crystal single-scattering properties to reproduce realistic optical features. Finally, a global-scale investigation of cirrus clouds is performed by developing a collocation algorithm that exploits satellite data from multiple sensors (AIRS, CALIOP, MODIS). The resulting data set is used to test a new infrared hyperspectral retrieval algorithm, whose products are compared to the data; in particular, the cloud top height (CTH) product is considered for this purpose. The retrieval agrees better with the CALIOP CTH than with the MODIS CTH, even if some cases of underestimation and overestimation are observed.
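A minimal sketch of the polynomial BSP parameterization, assuming a synthetic mean profile and an arbitrary fit degree; the paper's actual coefficients and degree are not given in the abstract:

```python
import numpy as np

# Fit a mean backscatter profile (BSP) as a function of normalized
# position within the cloud layer, for one geometrical-thickness class.

z = np.linspace(0.0, 1.0, 50)          # normalized depth into the cloud (top = 0)
mean_bsp = np.exp(-((z - 0.4) / 0.25) ** 2) + 0.05 * np.random.randn(z.size)  # synthetic profile

coeffs = np.polyfit(z, mean_bsp, deg=4)   # 4th-degree fit (assumed degree)
bsp_model = np.polyval(coeffs, z)

rmse = np.sqrt(np.mean((bsp_model - mean_bsp) ** 2))
print("fit coefficients:", np.round(coeffs, 3), f"RMSE = {rmse:.3f}")
```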

Relevance: 30.00%

Abstract:

Rock-pocket and honeycomb defects impair overall stiffness, accelerate aging, reduce service life, and cause structural problems in hardened concrete members. Traditional methods for detecting such deficient volumes involve visual observation or localized nondestructive methods, which are labor-intensive, time-consuming, highly sensitive to test conditions, and require knowledge of, and access to, the defect locations. The authors propose a vibration-response-based nondestructive technique that combines experimental and numerical methodologies to identify the location and severity of internal defects in concrete members. The experimental component entails collecting mode-shape curvatures from laboratory beam specimens with size-controlled rock-pocket and honeycomb defects, and the numerical component entails simulating the beam vibration response with a finite element (FE) model parameterized by three defect-identifying variables indicating the location (x, the coordinate along the beam length) and the severity of damage (alpha, the stiffness reduction, and beta, the mass reduction). Defects are detected by comparing the FE model predictions to the experimental measurements and inferring the small number of defect-identifying variables. The method is particularly well suited to rapid and cost-effective quality assurance for precast concrete members and to the inspection of concrete members with simple geometric forms.
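A minimal sketch of the inverse identification step: the variables (x, alpha, beta) follow the abstract, while the curvature model below is a toy surrogate standing in for the FE solver, and all numerical values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.0, 1.0, 41)      # normalized positions along the beam

def predicted_curvature(x, alpha, beta):
    """Toy surrogate for the FE model: a baseline first-mode curvature
    plus a localized bump whose size grows with the stiffness loss."""
    baseline = np.sin(np.pi * z) * np.pi ** 2
    bump = alpha * np.exp(-((z - x) / 0.05) ** 2) * (1.0 + 0.2 * beta)
    return baseline + bump

# "Measured" curvatures from a synthetic defect at x=0.3, alpha=0.4, beta=0.1.
measured = predicted_curvature(0.3, 0.4, 0.1) + 0.01 * np.random.randn(z.size)

def misfit(params):
    x, alpha, beta = params
    return float(np.sum((predicted_curvature(x, alpha, beta) - measured) ** 2))

res = minimize(misfit, x0=[0.5, 0.1, 0.0],
               bounds=[(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)],
               method="L-BFGS-B")
print("identified (x, alpha, beta):", np.round(res.x, 3))
```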