971 results for identity-based (ID-based) signatures
Abstract:
This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault-tolerance mechanisms for Real-Time Systems, using the Ada language (Ada 95) and Commercial Off-The-Shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance through software-based active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work being done and essential future developments.
Abstract:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometrically based algorithms, is far from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to the statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely, nonnegativity and sum-to-one. A cyclic minimization algorithm is developed in which: 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors.
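As a rough illustration of the mixing model and the abundance constraints mentioned above (not the authors' algorithm), the following numpy sketch generates linear mixtures whose abundance fractions are drawn from a two-mode mixture of Dirichlet densities, so nonnegativity and sum-to-one hold by construction; all names, dimensions and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    L, p, n = 200, 3, 1000                     # bands, endmembers, pixels (illustrative)
    M = rng.uniform(0.0, 1.0, (L, p))          # placeholder endmember signatures

    # Two-mode Dirichlet mixture prior over the abundance fractions
    weights = np.array([0.6, 0.4])
    alphas = np.array([[8.0, 1.0, 1.0],
                       [1.0, 4.0, 4.0]])
    modes = rng.choice(len(weights), size=n, p=weights)
    A = np.vstack([rng.dirichlet(alphas[k]) for k in modes])    # n x p, rows sum to one

    Y = A @ M.T + 0.01 * rng.standard_normal((n, L))            # linear mixing plus noise
    assert np.all(A >= 0) and np.allclose(A.sum(axis=1), 1.0)   # constraints hold by construction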
Abstract:
Project work carried out to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool to unmix hyperspectral data. VCA is a geometrically based method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near-real time. Thus, to meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact on the complexity and on the accuracy of the proposed parallel implementation of VCA is examined using both simulated and real hyperspectral datasets.
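The dominant cost in this kind of geometrical endmember extraction is projecting every spectral vector onto a direction and locating the extreme, which is precisely the data-parallel step that makes a GPU implementation attractive. A minimal CPU sketch of that projection step is given below, with numpy standing in for the GPU kernel; array shapes and names are illustrative and not taken from the paper.

    import numpy as np

    def extreme_projection(Y, d):
        # Project every spectral vector (rows of Y) onto direction d and
        # return the index of the extreme, i.e. the purest-pixel candidate.
        scores = Y @ d                          # one dense matrix-vector product
        return int(np.argmax(np.abs(scores)))

    rng = np.random.default_rng(1)
    Y = rng.random((100000, 224))               # pixels x bands (illustrative sizes)
    d = rng.standard_normal(224)
    print(extreme_projection(Y, d))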
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. Linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in the last years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the correspondent abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24,25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known and, then, hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case of hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. Minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. 
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are complex from a computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
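To make the iterative projection just described concrete, here is a small numpy sketch in the spirit of VCA: at each step the data are projected onto a direction orthogonal to the subspace spanned by the endmembers found so far, and the extreme of the projection is taken as the next endmember. It omits the signal-subspace identification, noise handling and affine-projection details of the actual algorithm, so it should be read as an illustration under those simplifying assumptions rather than the published method.

    import numpy as np

    def extract_endmembers(Y, p, seed=0):
        # Greedy, VCA-like extraction of p candidate endmembers from the rows
        # of Y (pixels x bands). Illustrative sketch only.
        rng = np.random.default_rng(seed)
        n, L = Y.shape
        E = np.zeros((L, p))                        # endmember signatures as columns
        indices = []
        for k in range(p):
            f = rng.standard_normal(L)              # random direction
            if k > 0:
                # make f orthogonal to the subspace spanned by the endmembers found so far
                U, _, _ = np.linalg.svd(E[:, :k], full_matrices=False)
                f -= U @ (U.T @ f)
            f /= np.linalg.norm(f)
            idx = int(np.argmax(np.abs(Y @ f)))     # extreme of the projection
            E[:, k] = Y[idx]
            indices.append(idx)
        return E, indices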
Abstract:
This chapter appears in Encyclopaedia of Human Resources Information Systems: Challenges in e-HRM, edited by Torres-Coronas, T. and Arias-Oliva, M. Copyright 2009, IGI Global, www.igi-global.com. Posted by permission of the publisher. URL: http://www.igi-pub.com/reference/details.asp?id=7737
Abstract:
This chapter appears in Encyclopaedia of Distance Learning, 2nd Edition, edited by Rogers, P.; Berg, Gary; Boettecher, Judith V.; Howard, Caroline; Justice, Lorraine; Schenk, Karen D. Copyright 2009, IGI Global, www.igi-global.com. Posted by permission of the publisher. URL: http://www.igi-global.com/reference/details.asp?ID=9703&v=tableOfContents
Abstract:
DNA amplification techniques are being used increasingly in clinical laboratories to confirm the identity of medically important bacteria. A PCR-based identification method has been in use in our centre for 10 years for Burkholderia pseudomallei and was used to confirm the identity of bacteria isolated from cases of melioidosis in Ceará since 2003. This particular method has been used as a reference standard for less discriminatory methods. In this study we evaluated three PCR-based methods of B. pseudomallei identification and used DNA sequencing to resolve discrepancies between PCR-based results and phenotypic identification methods. The established semi-nested PCR protocol for the B. pseudomallei 16S-23S spacer region produced a consistent negative result for one of our 100 test isolates (BCC #99), but correctly identified all 71 other B. pseudomallei isolates tested. Anomalous sequence variation was detected at the inner, reverse primer binding site for this method. PCR methods were developed for detection of two other B. pseudomallei bacterial metabolic genes. The conventional lpxO PCR protocol had a sensitivity of 0.89 and a specificity of 1.00, while a real-time lpxO protocol performed even better, with a sensitivity and specificity of 1.00 and 1.00. This method identified all B. pseudomallei isolates, including the PCR-negative discrepant isolate. The phaC PCR protocol detected the gene in all B. pseudomallei and all but three B. cepacia isolates, making this method unsuitable for PCR-based identification of B. pseudomallei. This experience with PCR-based B. pseudomallei identification methods indicates that single PCR targets should be used with caution for identification of these bacteria and need to be interpreted alongside phenotypic and alternative molecular methods such as gene sequencing.
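For reference, the sensitivity and specificity figures quoted above are simple ratios over a protocol's confusion matrix against the reference identification; the short sketch below shows that arithmetic on made-up counts, since the per-isolate tallies are not given in the abstract.

    def sensitivity_specificity(tp, fn, tn, fp):
        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts for illustration only (not the study's data)
    sens, spec = sensitivity_specificity(tp=64, fn=8, tn=28, fp=0)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")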
Abstract:
The life of humans and of most living beings depends on sensation and perception for the best assessment of the surrounding world. Sensorial organs acquire a variety of stimuli that are interpreted and integrated in our brain for immediate use or stored in memory for later recall. Among the reasoning aspects, a person has to decide what to do with the available information. Emotions are classifiers of collected information, assigning a personal meaning to objects, events and individuals, and forming part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision-making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new forms of acquiring and integrating information. But before data can be assessed for its usefulness, systems must capture it and ensure that it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information, from the environment and from our body, using cloud-based services and Internet connections. The limitations of such systems in handling sensorial data, compared with our own sensorial capabilities, constitute an identified problem. Another problem is the lack of interoperability between humans and devices, as devices do not properly understand humans' emotional states and needs. Addressing these problems is the motivation for the present research work. The mission hereby assumed is to incorporate sensorial and physiological data into a Framework able to manage collected data towards human cognitive functions, supported by a new data model. By learning from selected human functional and behavioural models and reasoning over collected data, the Framework aims to provide an evaluation of a person's emotional state, empowering human-centric applications, along with the capability of storing episodic information on a person's life with physiological indicators of emotional states, to be used by new-generation applications.
Abstract:
When a pregnant woman is guided to a hospital for obstetrics purposes, many outcomes are possible, depending on her current conditions. An improved understanding of these conditions could support a more direct medical approach by categorizing the different types of patients, enabling a faster response to risk situations and therefore increasing the quality of services. In this case study, the characteristics of the patients admitted to the maternity care unit of Centro Hospitalar of Porto are examined, allowing the pregnant women to be categorized through clustering techniques. The main goal is to predict the patients' route through the maternity care unit, adapting the services according to their conditions and providing the best clinical decisions and a cost-effective treatment to patients. The models developed presented very interesting results, with the best clustering evaluation index being 0.65. The evaluation of the clustering algorithms proved the viability of using clustering-based data mining models to characterize pregnant patients, identifying which conditions can be used as an alert to prevent the occurrence of medical complications.
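The abstract does not name the clustering algorithm or the evaluation index used; as a hedged illustration of the general approach, the scikit-learn sketch below clusters placeholder admission features with k-means and scores the result with a silhouette-type index (the assumption that the reported 0.65 is a silhouette-like score is ours, not the authors').

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    X = rng.random((500, 6))        # placeholder admission features (age, gestational week, ...)

    X_scaled = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
    print("clustering evaluation index:", silhouette_score(X_scaled, labels))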
Abstract:
Determining the timing, identity and direction of migrations in the Mediterranean Basin, the role of "migratory routes" in and among regions of Africa, Europe and Asia, and the effects of sex-specific behaviors of population movements have important implications for our understanding of present human genetic diversity. A crucial component of the Mediterranean world is its westernmost region. Clear features of transcontinental ancient contacts between North African and Iberian populations surrounding the maritime region of the Gibraltar Strait have been identified from archeological data. The attempt to discern the origin and dates of migration between geographically close regions has been a challenge in the field of uniparental-based population genetics. Mitochondrial DNA (mtDNA) studies have focused on surveying the H1, H3 and V lineages when trying to ascertain north-south migrations, and U6 and L in the opposite direction, assuming that those lineages are good proxies for the ancestry of each side of the Mediterranean. To this end, in the present work we have screened entire mtDNA sequences belonging to the U6, M1 and L haplogroups in Andalusians (from the Huelva and Granada provinces) and Moroccan Berbers. We present here pioneering data and interpretations on the role of NW Africa and the Iberian Peninsula regarding the time of origin, number of founders and expansion directions of these specific markers. The estimated entrance of the North African U6 lineages into Iberia at 10 ky correlates well with other African L clades, indicating that U6 and some L lineages moved together from Africa to Iberia in the Early Holocene. Still, founder analysis highlights that the high sharing of lineages between North Africa and Iberia results from a complex process continued through time, impairing simplistic interpretations. In particular, our work supports the existence of an ancient, frequently denied, bridge connecting the Maghreb and Andalusia.
Abstract:
Background: Gender can influence post-infarction cardiac remodeling. Objective: To evaluate whether gender influences left ventricular (LV) remodeling and integrin-linked kinase (ILK) after myocardial infarction (MI). Methods: Female and male Wistar rats were assigned to one of three groups: sham, moderate MI (size: 20-39% of LV area), and large MI (size: ≥40% of LV area). MI was induced by coronary occlusion, and echocardiographic analysis was performed after six weeks to evaluate MI size as well as LV morphology and function. Real-time RT-PCR and Western blot were used to quantify ILK in the myocardium. Results: MI size was similar between genders. MI resulted in systolic dysfunction and enlargement of end-diastolic as well as end-systolic dimension of LV as a function of necrotic area size in both genders. Female rats with large MI showed a lower diastolic and systolic dilatation than the respective male rats; however, LV dysfunction was similar between genders. Gene and protein levels of ILK were increased in female rats with moderate and large infarctions, but only male rats with large infarctions showed an altered ILK mRNA level. A negative linear correlation was evident between LV dimensions and ILK expression in female rats with large MI. Conclusions: Post-MI ILK expression is altered in a gender-specific manner, and higher ILK levels found in females may be sufficient to improve LV geometry but not LV function.
Abstract:
BACKGROUND: Healthy lifestyle including sufficient physical activity may mitigate or prevent adverse long-term effects of childhood cancer. We described daily physical activities and sports in childhood cancer survivors and controls, and assessed determinants of both activity patterns. METHODOLOGY/PRINCIPAL FINDINGS: The Swiss Childhood Cancer Survivor Study is a questionnaire survey including all children diagnosed with cancer 1976-2003 at age 0-15 years, registered in the Swiss Childhood Cancer Registry, who survived ≥5 years and reached adulthood (≥20 years). Controls came from the population-based Swiss Health Survey. We compared the two populations and determined risk factors for both outcomes in separate multivariable logistic regression models. The sample included 1058 survivors and 5593 controls (response rates 78% and 66%). Sufficient daily physical activities were reported by 52% (n = 521) of survivors and 37% (n = 2069) of controls (p<0.001). In contrast, 62% (n = 640) of survivors and 65% (n = 3635) of controls reported engaging in sports (p = 0.067). Risk factors for insufficient daily activities in both populations were: older age (OR for ≥35 years: 1.5, 95CI 1.2-2.0), female gender (OR 1.6, 95CI 1.3-1.9), French/Italian Speaking (OR 1.4, 95CI 1.1-1.7), and higher education (OR for university education: 2.0, 95CI 1.5-2.6). Risk factors for no sports were: being a survivor (OR 1.3, 95CI 1.1-1.6), older age (OR for ≥35 years: 1.4, 95CI 1.1-1.8), migration background (OR 1.5, 95CI 1.3-1.8), French/Italian speaking (OR 1.4, 95CI 1.2-1.7), lower education (OR for compulsory schooling only: 1.6, 95CI 1.2-2.2), being married (OR 1.7, 95CI 1.5-2.0), having children (OR 1.3, 95CI 1.4-1.9), obesity (OR 2.4, 95CI 1.7-3.3), and smoking (OR 1.7, 95CI 1.5-2.1). Type of diagnosis was only associated with sports. CONCLUSIONS/SIGNIFICANCE: Physical activity levels in survivors were lower than recommended, but comparable to controls and mainly determined by socio-demographic and cultural factors. Strategies to improve physical activity levels could be similar as for the general population.
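The odds ratios and 95% confidence intervals quoted above come from multivariable logistic regression; the statsmodels sketch below shows, on placeholder data rather than the survey dataset, how such estimates are typically obtained by exponentiating the fitted coefficients and their confidence bounds.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 2000
    # Placeholder covariates standing in for the survey variables
    df = pd.DataFrame({
        "age_35_plus": rng.integers(0, 2, n),
        "female": rng.integers(0, 2, n),
        "obese": rng.integers(0, 2, n),
    })
    logit = -1.0 + 0.4 * df["age_35_plus"] + 0.5 * df["female"] + 0.9 * df["obese"]
    df["insufficient_activity"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["age_35_plus", "female", "obese"]].astype(float))
    fit = sm.Logit(df["insufficient_activity"], X).fit(disp=0)

    or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    or_ci.columns = ["OR", "2.5%", "97.5%"]
    print(or_ci)    # odds ratios with 95% confidence intervals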
Abstract:
Synchronization of data coming from different sources is of high importance in biomechanics to ensure reliable analyses. This synchronization can either be performed through hardware to obtain perfect matching of data, or post-processed digitally. Hardware synchronization can be achieved using trigger cables connecting different devices in many situations; however, this is often impractical, and sometimes impossible, in outdoor situations. The aim of this paper is to describe a wireless system for outdoor use, allowing synchronization of different types of devices, potentially embedded and moving. In this system, each synchronization device is composed of: (i) a GPS receiver (used as the time reference), (ii) a radio transmitter, and (iii) a microcontroller. These components are used to provide synchronized trigger signals at the desired frequency to the connected measurement device. The synchronization devices communicate wirelessly, are very lightweight, battery-operated and thus very easy to set up. They are adaptable to every measurement device equipped with either a trigger input or a recording channel. The accuracy of the system was validated using an oscilloscope. The mean synchronization error was found to be 0.39 μs, and pulses are generated with an accuracy of <2 μs. The system provides synchronization accuracy about two orders of magnitude better than commonly used post-processing methods, and does not suffer from any drift in trigger generation.
Abstract:
Species distribution models (SDMs) are increasingly used to predict environmentally induced range shifts of the habitats of plant and animal species. Consequently, SDMs are valuable tools for scientifically based conservation decisions. The aims of this paper are (1) to identify important drivers of butterfly species persistence or extinction, and (2) to analyse the responses of endangered butterfly species of dry grasslands and wetlands to likely future landscape changes in Switzerland. Future land use was represented by four scenarios describing: (1) ongoing land use changes as observed at the end of the last century; (2) a liberalisation of the agricultural markets; (3) a slightly lowered agricultural production; and (4) a strongly lowered agricultural production. Two model approaches have been applied. The first (logistic regression with principal components) explains which environmental variables have a significant impact on species presence (and absence). The second (predictive SDM) is used to project species distributions under current and likely future land uses. The results of the explanatory analyses reveal that four principal components related to urbanisation, abandonment of open land and intensive agricultural practices, as well as two climate parameters, are primary drivers of species occurrence (decline). The scenario analyses show that lowered agricultural production is likely to favour dry grassland species due to an increase of non-intensively used land, open-canopy forests, and overgrown areas. In the liberalisation scenario, dry grassland species show a decrease in abundance due to a strong increase of forested patches. Wetland butterfly species would decrease under all four scenarios as their habitats become overgrown.
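The first modelling step described above, logistic regression on principal components of the environmental variables, can be sketched generically as follows; the predictors, response and dataset are placeholders and not the Swiss data, so this is only an outline of the technique.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    X = rng.random((800, 12))       # placeholder land-use and climate predictors per grid cell
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.standard_normal(800) > 0.9).astype(int)

    # PCA reduces the correlated environmental variables to a few components,
    # which then enter an ordinary logistic regression on presence/absence.
    sdm = make_pipeline(StandardScaler(), PCA(n_components=4),
                        LogisticRegression(max_iter=1000))
    sdm.fit(X, y)
    print("predicted presence probability, first cell:", sdm.predict_proba(X[:1])[0, 1])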