951 results for Multiple-Time Scale Problem


Relevance: 40.00%

Abstract:

Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type-k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times, when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but a job is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the guarantee that if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines with the same restriction on job migration, if given processors 4×(1 + MAXP×⌈(|P|×MAXP) / min{m_1, m_2, …, m_t}⌉) times as fast. (Here MAXP and |P| are computed based on the resource sets that tasks request.) For the special case where each task requests at most one resource, the bound of LP-EE-vpr collapses to 4×(1 + ⌈|R| / min{m_1, m_2, …, m_t}⌉). To the best of our knowledge, LP-EE-vpr is the first algorithm with a proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
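The speedup factor is easy to evaluate numerically. Below is a minimal sketch in Python, assuming the reconstructed reading of the bound above (the fraction inside the ceiling is an interpretation of the garbled source); MAXP, |P|, and the processor counts are placeholder inputs, not values from the paper.

```python
import math

def lp_ee_vpr_speedup(maxp: int, p_size: int, proc_counts: list[int]) -> int:
    """General bound 4*(1 + MAXP * ceil((|P| * MAXP) / min_k m_k)),
    as reconstructed from the abstract."""
    return 4 * (1 + maxp * math.ceil((p_size * maxp) / min(proc_counts)))

def single_resource_speedup(num_resources: int, proc_counts: list[int]) -> int:
    """Special case 4*(1 + ceil(|R| / min_k m_k)) when each task
    requests at most one resource."""
    return 4 * (1 + math.ceil(num_resources / min(proc_counts)))

# Hypothetical platform: two processor types with 4 and 8 processors,
# three shared resources, each task requesting at most one of them.
print(single_resource_speedup(3, [4, 8]))  # 4 * (1 + ceil(3/4)) = 8
```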

Relevance: 40.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold for some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 briefly reviews the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
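To make the linear mixing model and the constant-sum constraint concrete, here is a minimal sketch that generates synthetic mixed pixels y = M a + n with Dirichlet-distributed abundances; the dimensions, signatures, and noise level are made-up placeholders, not values from the chapter. The final check shows the full-additivity constraint that breaks the independence assumption behind ICA.

```python
import numpy as np

rng = np.random.default_rng(0)

L, p, N = 50, 3, 1000               # bands, endmembers, pixels (hypothetical)
M = rng.uniform(0.0, 1.0, (L, p))   # endmember signature matrix (one per column)
a = rng.dirichlet(np.ones(p), N).T  # abundances: nonnegative, columns sum to 1
noise = 0.01 * rng.standard_normal((L, N))

Y = M @ a + noise                   # observed pixels under the linear mixing model

# The constant-sum (full additivity) constraint that makes abundances dependent:
print(np.allclose(a.sum(axis=0), 1.0))  # True
```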

Relevance: 40.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], the spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold for some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
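The projection step just described can be sketched in a few lines. The following is a toy illustration of that iteration only (random directions, no SNR-dependent preprocessing, so not the published VCA), assuming a data matrix Y with one spectral vector per column:

```python
import numpy as np

def vca_like_extraction(Y: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """Toy endmember extraction in the spirit of VCA, as described above:
    repeatedly project the data onto a direction orthogonal to the subspace
    spanned by the endmembers found so far and keep the extreme pixel.
    Y: (L, N) matrix of N spectral vectors; returns indices of p pixels."""
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    E = np.zeros((L, 0))          # endmembers found so far (as columns)
    idx = []
    for _ in range(p):
        w = rng.standard_normal(L)
        if E.shape[1] > 0:
            # Remove the component of w lying in span(E): w <- (I - E E^+) w
            w = w - E @ np.linalg.pinv(E) @ w
        proj = np.abs(w @ Y)      # projection of every pixel onto the direction
        k = int(np.argmax(proj))  # extreme of the projection = new endmember
        idx.append(k)
        E = np.column_stack([E, Y[:, k]])
    return np.array(idx)
```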

Relevance: 40.00%

Abstract:

20th International Conference on Reliable Software Technologies (Ada-Europe 2015), Madrid, Spain.

Relevance: 40.00%

Abstract:

Demo presented at the 12th Workshop on Models and Algorithms for Planning and Scheduling Problems (MAPSP 2015), 8–12 June 2015, La Roche-en-Ardenne, Belgium. Extended abstract.

Relevance: 40.00%

Abstract:

The vision of the Internet of Things (IoT) includes large and dense deployments of interconnected smart sensing and monitoring devices. This vast deployment necessitates the collection and processing of large volumes of measurement data. However, collecting all the measured data from individual devices on such a scale may be impractical and time consuming. Moreover, processing these measurements requires complex algorithms to extract useful information. Thus, it becomes imperative to devise distributed information processing mechanisms that identify application-specific features in a timely manner and with a low overhead. In this article, we present a feature extraction mechanism for dense networks that takes advantage of dominance-based medium access control (MAC) protocols to (i) efficiently obtain the global extrema of the sensed quantities, (ii) extract local extrema, and (iii) detect the boundaries of events, using simple transforms that nodes apply to their local data. We extend our results to a large dense network with multiple broadcast domains (MBD). We discuss and compare two approaches for addressing the challenges of MBD and show through extensive evaluations that our proposed distributed MBD approach is fast and efficient at retrieving the most valuable measurements, independent of the number of sensor nodes in the network.
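The global-extrema step relies on the key property of dominance-based MAC protocols (as in CAN bus arbitration): when every node transmits its reading bit-serially as a message priority, dominant bits overwrite recessive ones, and the maximum survives after as many bus rounds as there are bits, independent of the number of contenders. A minimal simulation of that idea, with hypothetical 8-bit readings (real protocols add framing, timing, and priority encoding):

```python
def bus_arbitration_max(values, bits=8):
    """Simulate dominance-based MAC arbitration: every node writes its value
    MSB-first; a '1' is dominant (wired-OR bus). Nodes that sent a recessive
    '0' while the bus carried a dominant '1' withdraw. The value surviving
    arbitration is the global maximum, found in `bits` bus rounds, no matter
    how many nodes contend."""
    contenders = list(values)
    result = 0
    for i in reversed(range(bits)):
        bus = max((v >> i) & 1 for v in contenders)  # wired-OR of sent bits
        result = (result << 1) | bus
        if bus == 1:
            # Nodes whose bit was recessive lose arbitration and back off.
            contenders = [v for v in contenders if (v >> i) & 1 == 1]
    return result

readings = [17, 203, 45, 150, 99]   # hypothetical sensor readings (0..255)
assert bus_arbitration_max(readings) == max(readings)
```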

Relevance: 40.00%

Abstract:

Dissertation submitted in fulfilment of the requirements for the degree of Master in Computational Logic.

Relevance: 40.00%

Abstract:

ABSTRACT - Tobacco consumption was responsible for 100 million deaths in the twentieth century. Despite the major advances achieved in controlling this problem worldwide, under the auspices of the WHO and within the WHO Framework Convention on Tobacco Control, the associated morbidity and mortality will continue to rise throughout the present century unless consistent and effective public health measures are adopted. Promoting smoking cessation is the population-level strategy that can deliver health gains in the shortest term. Although the large majority of smokers make several unaided quit attempts over their lifetime, only a small minority manage to remain abstinent in the long run. Among all health professionals, general practitioners (family physicians) are the ones who can intervene most consistently and effectively in this field and who obtain the best smoking cessation results with their smoking patients, given the therapeutic bond and the frequent, continued interaction they establish with patients throughout the life cycle. Brief counselling, based on a patient-centred motivational communication style adapted to the stages of behavioural change, has proved effective in supporting changes in health-related behaviours and in resolving the ambivalence that characterises this process. The literature review showed that physicians do not always intervene in preventive and health promotion areas, particularly smoking cessation, with the desirable investment and continuity. Moreover, many smoking patients report never having been advised by their physician to quit smoking. No national studies are known that describe this reality, the factors associated with best intervention practices, or the barriers felt by general practitioners to acting in this area. The present work aimed to: (i) test the hypothesis that physicians who reported adopting the patient-centred clinical method would hold more favourable attitudes towards smoking cessation and be more likely to advise their patients to stop smoking; (ii) study the relationship between attitudes, perceived self-efficacy, expected effectiveness, and self-reported smoking cessation counselling practices; (iii) identify the variables predicting the adoption of brief counselling interventions adapted to the smoking patient's stage of behavioural change; (iv) identify the barriers and incentives to the adoption of good counselling practices in this area. The study population consisted of all general practitioners registered with the Associação Portuguesa de Médicos de Clínica Geral and residing in Portugal. Data were collected with an anonymous, self-administered questionnaire mailed to 2942 physicians in two waves. The questionnaire included closed and semi-closed questions, Likert-type scales, and visual analogue scales. The Patient-Practitioner Orientation Scale (PPOS) was used to assess adoption of the patient-centred clinical method. Statistical analysis was performed with PASW Statistics (formerly SPSS), version 18, using Cronbach's α, several non-parametric tests, and binary logistic regression. A response rate of 22.4% was obtained.
A total of 639 responses were analysed (67.4% women, 32.6% men). Being a smoker was reported by 23% of the men and 14% of the women. A large training gap in smoking cessation was identified, with only 4% of physicians stating they needed no training in this area: 66% reported needing training in motivational interviewing, 59% in relapse prevention, 55% in running an intensive support consultation, 54% in brief interventions, and 55% in pharmacological therapy. About 92% of respondents considered smoking cessation counselling part of their duties, but only 76% fully agreed with raising the subject opportunistically in every contact with their patients. As their most frequent practices with a patient preparing to quit, 85% of physicians said they take the initiative to advise, 79% assess motivation, 67% assess the degree of dependence, 60% set a quit date, and 50% propose pharmacological therapy. Only 21% reported frequently delivering a brief intervention with patients in the preparation stage (the "5 As"), 13% a motivational intervention with patients not motivated to change (the "5 Rs"), and 20% an intervention following the principles of motivational interviewing with patients ambivalent about change. Multivariate logistic regression showed that the variables with the greatest influence on the decision to counsel patients about smoking cessation were perceived self-efficacy, the level of negative attitudes, habitual adoption of the DGS standard smoking cessation programme, specific training in this area, and not identifying barriers to counselling, particularly organisational barriers or barriers linked to communication during the consultation. Although an association was confirmed between adoption of the patient-centred clinical method and attitudes towards smoking cessation, the association between adoption of this method and self-reported counselling practices could not be fully confirmed. Physicians with a low or moderate level of negative attitudes, a high perception of self-efficacy, who had never smoked, who reported adopting the standard cessation programme, and who identified no organisational barriers were more likely to deliver a brief "5 As" intervention to smoking patients preparing to quit. Never having smoked was associated with a higher probability of frequently delivering a brief "5 As" intervention than that observed among physicians who reported being smokers (adjusted odds ratio = 2.6; 95% CI: 1.1–5.7). Physicians with the highest level of counselling self-efficacy were more likely than those with the lowest level to frequently deliver a brief intervention covering all five components of the "5 As" (adjusted odds ratio = 2.6; 95% CI: 1.3–5.3), a brief motivational intervention with smokers reluctant to quit (adjusted odds ratio = 3.1; 95% CI: 1.4–6.5), or a motivational intervention with patients in the ambivalence stage (odds ratio = 8.8; 95% CI: 3.8–19.9). Lack of time, lack of specific training, and lack of a support team were the most frequently cited barriers to counselling.
As factors that would facilitate greater investment in this area, about 60% of physicians mentioned a practical training placement, 57% the availability of support from other professionals, and about half better theoretical training. Around 25% of physicians would invest more in smoking cessation given a financial incentive, and 20% if patients showed more interest in discussing the subject or if colleagues and management bodies valued this area more. The limited representativeness of the sample, a consequence of the response rate obtained, demands caution in extrapolating these results to the study population; respondents may well be the physicians most interested in this topic and those who choose not to smoke. Another important limitation is that the patients' side was not studied, namely their attitudes, perceptions, and expectations regarding the physician's role in this field. These limitations notwithstanding, the results revealed a great loss of opportunities for disease prevention and health promotion. The important influence that attitudes, especially negative ones, and perceptions, particularly perceived self-efficacy, can exert on self-reported counselling practices appears to have been demonstrated. However, the present findings should be deepened with qualitative studies that allow a better understanding of, on the one hand, patients' perceptions, expectations, and needs and, on the other, the communication strategies physicians should adopt, given the complexity of the problem and the time available in the consultation, with a view to increasing patients' literacy for better self-management of their health. The great training gap in this domain also seems clear. The adoption of the biomedical model as the paradigm of undergraduate and postgraduate medical education, proposed exactly one hundred years ago by Flexner, has contributed to the devaluation of the psycho-emotional and social components of health and disease, and to creating cleavages between curative and preventive care and between general practice and public health. However, the current health/disease pattern of developed societies, characterised by "pandemics" of chronic, disabling diseases determined by sociocultural and behavioural factors, will certainly force a revision of that paradigm and a (re)adoption of the great Hippocratic principles for understanding health/disease processes and the role of medicine.

Relevance: 40.00%

Abstract:

This study analyses financial data using the result characterization of a self-organized neural network model. The goal was to prototype a tool that may help an economist or a market analyst to analyse stock market series. To reach this goal, the tool shows economic dependencies and statistical measures over stock market series. The SOM (self-organizing map) neural network model was used to extract behavioural patterns from the data analysed. Based on this model, an application was developed to analyse financial data. This application takes a portfolio of correlated or inversely correlated markets as input. After the analysis with SOM, the result is represented by micro-clusters organized by their behavioural tendency. During the study, the need arose for a better analysis of the SOM algorithm's results. This problem was solved with a clustering technique that groups the micro-clusters from the SOM U-Matrix analysis. The study showed that correlated and inversely correlated markets project multiple clusters of data. These clusters represent multiple trend states that may be useful for technical professionals.
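For readers unfamiliar with the technique, a self-organizing map can be implemented in a few lines. This is a from-scratch sketch, not the study's application: the grid size, decay schedules, and synthetic return series are placeholder assumptions, and the U-Matrix micro-cluster grouping step is omitted.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: each grid node holds a weight vector;
    for every sample, the best-matching unit (BMU) and its neighbours are
    pulled towards the sample, with the learning rate and neighbourhood
    radius decaying over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 1e-3
            d = np.linalg.norm(weights - x, axis=2)           # distance to BMU
            bmu = np.unravel_index(np.argmin(d), (h, w))
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)      # pull neighbourhood
            step += 1
    return weights

# Hypothetical input: windows of returns from four correlated random walks.
series = np.cumsum(np.random.default_rng(1).standard_normal((500, 4)), axis=0)
returns = np.diff(series, axis=0)
som = train_som(returns, grid=(6, 6), epochs=50)
```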

Relevance: 40.00%

Abstract:

We say the endomorphism problem is solvable for an element W in a free group F if it can be decided effectively whether, given U in F, there is an endomorphism Φ of F sending W to U. This work analyzes an approach due to C. Edmunds and improved by C. Sims. We prove that when W is a two-generator word, this approach yields an efficient algorithm that solves the endomorphism problem in time polynomial in the length of U. This result gives a polynomial-time algorithm for solving, in free groups, two-variable equations in which all the variables occur on one side of the equality and all the constants on the other side.
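For intuition only (this illustrates the underlying notion, not the Edmunds–Sims decision procedure): an endomorphism of a free group is determined by the images of the generators, so verifying that a candidate Φ sends W to U reduces to substitution followed by free reduction. A sketch over the free group on {a, b}, writing inverses as upper-case letters:

```python
def free_reduce(word: str) -> str:
    """Cancel adjacent inverse pairs (aA, Aa, bB, Bb) until none remain."""
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()          # cancel x x^{-1}
        else:
            out.append(c)
    return "".join(out)

def apply_endo(word: str, images: dict[str, str]) -> str:
    """Apply the endomorphism determined by the images of the generators a, b;
    the image of an inverse letter is the reversed, case-swapped image."""
    pieces = []
    for c in word:
        img = images[c.lower()]
        pieces.append(img if c.islower() else img.swapcase()[::-1])
    return free_reduce("".join(pieces))

# Phi(a) = ab, Phi(b) = b^{-1} sends W = a b a^{-1} (written "abA") to:
print(apply_endo("abA", {"a": "ab", "b": "B"}))  # 'aBA', i.e. a b^{-1} a^{-1}
```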

Relevance: 40.00%

Abstract:

This study presents a first attempt to extend the "Multi-scale integrated analysis of societal and ecosystem metabolism (MuSIASEM)" approach to a spatial dimension using GIS techniques in the metropolitan area of Barcelona. We use a combination of census and commercial databases along with a detailed land cover map to create a layer of Common Geographic Units, which we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time with regard to the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the work supply) and the places where the working hours actually take place. We found a strong trimodal pattern of clumps of areas with different combinations of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, as the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput to the one obtained from conventional methods.

Relevance: 40.00%

Abstract:

SUMMARY - In the context of the biodiversity crisis, amphibians are experiencing the most severe worldwide decline of all vertebrates and are in urgent need of better management. Efficient conservation strategies rely on sound knowledge of the species' biology and of the genetic and demographic processes that might impair their welfare. Nonetheless, these processes are poorly understood in amphibians. Delineating population boundaries consequently remains problematic for these species, while it is of critical importance for defining adequate management units for conservation. In this study, our attention focused on the alpine salamander (Salamandra atra), a species that deserves much interest in terms of both conservation biology and evolution. This endemic alpine species shows peculiar life-history traits (viviparity, reduced activity period, slow maturation) and has a slow population turnover, which might be problematic for its persistence in a changing environment. Due to its elusive behaviour (individuals spend most of their time underground and are unavailable for sampling), the dynamics of genes and individuals were poorly understood in this species. Consequently, its conservation status could hardly be reliably assessed. Similarly, the fire salamander (Salamandra salamandra) also poses special challenges for conservation, as no clear demarcation of geographical populations exists and dispersal patterns are poorly known. Through a phylogeographic analysis, we first studied the evolutionary history of the alpine salamander to better document the distribution of genetic diversity along its geographical range. This study highlighted the presence of multiple divergent lineages in Italy together with a clear genetic divergence between populations from the Northern and Dinaric Alps. These signs of cryptic genetic differentiation, which are not accounted for by the current taxonomy of the species, should not be neglected in further definition of conservation units. In addition, our data supported glacial survival of the species in northern peripheral glacial refugia and nunataks, a pattern rarely documented for long-lived species. We then evaluated the level of gene flow between populations at the local scale and tested for asymmetries in male versus female dispersal using both field-based (mark-recapture) and genetic approaches. This study revealed a high level of gene flow between populations, which stems mainly from male dispersal. This corroborated the idea that salamanders are much better dispersers than hitherto thought and provided a well-supported example of male-biased dispersal in amphibians. In a third step, based on a mark-recapture survey, we addressed the problem of sampling unavailability in alpine salamanders and evaluated its impact on two monitoring methods. We showed that about three quarters of individuals were unavailable for sampling during sampling sessions, a proportion that can vary with climatic conditions. If not taken into account, these complexities would result in false assumptions about population trends and misdirect conservation efforts. Finally, regarding the daunting task of delineating management units, our attention was drawn to the fire salamander. We conducted a local population genetic study that revealed high levels of gene flow among sampling sites. Management units for this species should consequently be large. Interestingly, despite the presence of several landscape features often reported to act as barriers, genetic breaks occurred at unexpected places. This suggests that landscape features may have idiosyncratic effects on population structure. In conclusion, this work brought new insights into both the genetic and demographic processes occurring in salamanders. The results suggest that some biological paradigms should be taken with caution when particular species are in focus. Species-specific studies thus remain fundamental for a better understanding of species evolution and conservation, particularly in the context of current global changes.

Relevance: 40.00%

Abstract:

BACKGROUND: The purpose of this study was to assess decision making in patients with multiple sclerosis (MS) at the earliest clinically detectable time point of the disease. METHODS: Patients with definite MS (n = 109) or with clinically isolated syndrome (CIS, n = 56), a disease duration of 3 months to 5 years, and no or only minor neurological impairment (Expanded Disability Status Scale [EDSS] score 0-2.5) were compared to 50 healthy controls using the Iowa Gambling Task (IGT). RESULTS: The performance of definite MS patients, CIS patients, and controls was comparable for the two main outcomes of the IGT (learning index: p = 0.7; total score: p = 0.6). The IGT learning index was influenced by educational level and the co-occurrence of minor depression. CIS and MS patients who developed a relapse during an observation period of 15 months from IGT testing demonstrated a lower IGT learning index than patients who had no exacerbation (p = 0.02). When controlling for age, gender, and education, the difference between relapsing and non-relapsing patients was at the limit of statistical significance (p = 0.06). CONCLUSION: Decision making in a task mimicking real-life decisions is generally preserved in early MS patients as compared to controls. A possible effect of MS relapse activity on decision-making ability is also suspected in the early phase of MS.

Relevance: 40.00%

Abstract:

Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR, and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR, and 32/46 were positive by the immunochromatographic test. For the assays performed on pooled samples, 13/15 samples were positive by real-time PCR and nested PCR, and 11/15 were positive by the immunochromatographic test. The molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Given the advantages of real-time PCR, such as fast processing and a closed system, this method should be indicated as the first choice for large-scale diagnosis, with nested PCR used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites, and the serological test should only be adopted as a complementary method for malaria diagnosis.
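The cost rationale for pooling can be made concrete with the classic two-stage (Dorfman) screening arithmetic; this is a generic illustration of why pooled testing saves assays, not the protocol evaluated in this study. A pool of k samples is tested once, and only positive pools are retested sample by sample, so the expected number of tests per sample at prevalence p is 1/k + 1 − (1−p)^k.

```python
def expected_tests_per_sample(p: float, k: int) -> float:
    """Two-stage (Dorfman) pooled screening: one test per pool of k samples,
    plus k individual retests whenever the pool is positive, which happens
    with probability 1 - (1 - p)**k."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

# At a hypothetical 2% prevalence, pools of 10 need ~0.28 tests per sample,
# versus 1.0 when every sample is tested individually.
for k in (5, 10, 20):
    print(k, round(expected_tests_per_sample(0.02, k), 3))
```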