823 results for Image-Intuitive Modes of Perception
Abstract:
Organ and/or tissue transplantation is considered a viable therapeutic option for the treatment both of chronic or end-stage diseases and of non-vital conditions that nevertheless reduce the patient's perceived quality of life. This multidimensional procedure involves three main actors: the donor, the organ/tissue, and the recipient. Although a significant share of research and intervention programmes has focused on the biological dimension of transplantation and on the promotion of donation, interest in the psychosocial experience and quality of life of recipients has grown over the last decade. In this connection, the general objective of this monograph is to explore the experience and the meanings constructed by transplant patients, through a systematic review of the literature on this topic. To that end, specific objectives were derived from the general one, key terms were selected for each of them, and searches were carried out in five databases of indexed journals: Ebsco Host (Academic Search; and Psychology and Behavioral Sciences Collection); Proquest; Pubmed; and Science Direct. From the results, it is established that although the recipients' experience has begun to be investigated, further exploration of these patients' experience is still needed; an exploration that would lack purpose if it were not conducted through the narratives and testimonies of the recipients themselves.
Abstract:
The concept of security has now expanded to include non-traditional threats. In this context, international migration has begun to enter the agenda of some governments, understood as a matter that threatens state security. This monograph examines the United Kingdom's securitising discourse on Romanian immigration between 2007 and 2014, in order to determine the influence it has had on the perception of international migration as a security issue in the EU. Reading the United Kingdom's discourse in the light of securitisation theory and including an analysis of European public opinion shows that, although the discourse has influenced the domestic context, it has had little influence on the perception of international migration as a security issue in the EU.
Abstract:
A new practical method to generate a subspace of active coordinates for quantum dynamics calculations is presented. These reduced coordinates are obtained as the normal modes of an analytical quadratic representation of the energy difference between excited and ground states within the complete active space self-consistent field method. At the Franck-Condon point, the largest negative eigenvalues of this Hessian correspond to the photoactive modes: those that reduce the energy difference and lead to the conical intersection; eigenvalues close to 0 correspond to bath modes, while modes with large positive eigenvalues are photoinactive vibrations, which increase the energy difference. The efficacy of quantum dynamics run in the subspace of the photoactive modes is illustrated with the photochemistry of benzene, where theoretical simulations are designed to assist optimal control experiments.
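The eigenvalue-based classification described in this abstract can be sketched numerically. The Hessian below is a toy diagonal matrix, not a CASSCF result, and `classify_modes` with its tolerance is an illustrative construction of mine, not the authors' code:

```python
import numpy as np

def classify_modes(hessian, tol=1e-3):
    """Diagonalize a symmetric Hessian of the excited/ground-state
    energy difference and sort its normal modes into three groups:
    photoactive (negative eigenvalues, toward the conical intersection),
    bath (eigenvalues near zero) and photoinactive (positive)."""
    evals, evecs = np.linalg.eigh(hessian)
    groups = {"photoactive": [], "bath": [], "photoinactive": []}
    for i, lam in enumerate(evals):
        if lam < -tol:
            groups["photoactive"].append((lam, evecs[:, i]))
        elif lam > tol:
            groups["photoinactive"].append((lam, evecs[:, i]))
        else:
            groups["bath"].append((lam, evecs[:, i]))
    return groups

# Toy "difference Hessian": one strongly negative direction (photoactive),
# one near-zero bath mode and one stiff positive (photoinactive) mode.
H = np.diag([-0.8, 1e-6, 2.5])
g = classify_modes(H)
```

Quantum dynamics would then be propagated only in the span of the `photoactive` eigenvectors, discarding the bath and photoinactive directions.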
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the learning of the most common objects that we acquire through experience. Nowadays, modelling the behaviour of our brain is still out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A great deal of research in robot vision aims to obtain 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done in the future, which allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step, known as camera calibration, is described in detail in the thesis. Perhaps the most important problem in stereo vision is determining the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many considerations have to be taken into account; for example, points may have no correspondence because of a surface occlusion, or simply because they project outside the camera's field of view.
The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and an image sensor: the deformations between the pattern projected onto the scene and the one captured by the camera make it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. It is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light is imaged by the camera, and we have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, and a survey of coded structured light, are presented and discussed. The work carried out within the frame of this thesis has led to a new coded structured light pattern which solves the correspondence problem uniquely and robustly: uniquely, since each token of light is coded by a different word, which removes the problem of multiple matching; robustly, since the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of the 3D measurement of static objects, as well as the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded in a single projection shot, so it can be used in several robot vision applications. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
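The stereo principle the abstract describes, recovering a 3D point from its two projections once both camera models are calibrated, can be sketched with linear (DLT) triangulation. The projection matrices and the test point below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose pixel
    projections are x1 and x2 under the 3x4 camera matrices P1, P2.
    Each observation contributes two rows of a homogeneous system
    A X = 0; the solution is the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenise

# Toy calibrated stereo rig: identical intrinsics, second camera
# displaced along the x axis (hypothetical numbers).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)        # recovers X_true
```

With coded structured light, one camera is replaced by the pattern projector (modelled as an inverse camera), and the decoded token label supplies the correspondence that `x1`/`x2` represent here.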
Abstract:
The dynamics of silence and remembrance in Australian writer Lily Brett’s autobiographic fiction Things Could Be Worse reflects the crisis of memory and understanding experienced by both first and second-generation Holocaust survivors within the diasporic space of contemporary Australia. It leads to issues of handling traumatic and transgenerational memory, the latter also known as postmemory (M. Hirsch), in the long aftermath of atrocities, and problematises the role of forgetting in shielding displaced identities against total dissolution of the self. This paper explores the mechanisms of remembrance and forgetting in L. Brett’s narrative by mainly focusing on two female characters, mother and daughter, whose coming to terms with (the necessary) silence, on the one hand, and articulated memories, on the other, reflects different modes of comprehending and eventually coping with individual trauma. By differentiating between several types of silence encountered in Brett’s prose (that of the voiceless victims, of survivors and their offspring, respectively), I argue that silence can equally voice and hush traumatic experience, that it is never empty, but invested with individual and collective meaning. Essentially, I contend that beside the (self-)damaging effects of silence, there are also beneficial consequences of it, in that it plays a crucial role in emplacing the displaced, rebuilding their shattered self, and contributing to their reintegration, survival and even partial healing.
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation and hence the diabatic heating field over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind shear based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic-to-continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation, the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Nino and La Nina years suggest that El Nino predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
Abstract:
The impact of systematic model errors on a coupled simulation of the Asian Summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Nino. In part this is related to changes in the characteristics of El Nino, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
Abstract:
The sensitivity of the UK Universities Global Atmospheric Modelling Programme (UGAMP) General Circulation Model (UGCM) to two very different approaches to convective parametrization is described. Comparison is made between a Kuo scheme, which is constrained by large-scale moisture convergence, and a convective-adjustment scheme, which relaxes to observed thermodynamic states. Results from 360-day integrations with perpetual January conditions are used to describe the model's tropical time-mean climate and its variability. Both convection schemes give reasonable simulations of the time-mean climate, but the representation of the main modes of tropical variability is markedly different. The Kuo scheme has much weaker variance, confined to synoptic frequencies near 4 days, and a poor simulation of intraseasonal variability. In contrast, the convective-adjustment scheme has much more transient activity at all time-scales. The various aspects of the two schemes which might explain this difference are discussed. The particular closure on moisture convergence used in this version of the Kuo scheme is identified as being inappropriate.
Abstract:
Background and purpose: Carisbamate is being developed for the adjuvant treatment of partial onset epilepsy. Carisbamate produces anticonvulsant effects in primary generalized, complex partial and absence-type seizure models, and exhibits neuroprotective and antiepileptogenic properties in rodent epilepsy models. Phase IIb clinical trials of carisbamate demonstrated efficacy against partial onset seizures; however, its mechanisms of action remain unknown. Here, we report the effects of carisbamate on membrane properties, evoked and spontaneous synaptic transmission, and induced epileptiform discharges in layer II-III neurones in piriform cortical brain slices. Experimental approach: Effects of carisbamate were investigated in rat piriform cortical neurones by using intracellular electrophysiological recordings. Key results: Carisbamate (50–400 µmol·L-1) reversibly decreased the amplitude, duration and rise-time of evoked action potentials and inhibited repetitive firing, consistent with use-dependent Na+ channel block; 150–400 µmol·L-1 carisbamate reduced neuronal input resistance, without altering membrane potential. After microelectrode intracellular Cl- loading, carisbamate depolarized cells, an effect reversed by picrotoxin. Carisbamate (100–400 µmol·L-1) also selectively depressed lateral olfactory tract-afferent evoked excitatory synaptic transmission (opposed by picrotoxin), consistent with activation of a presynaptic Cl- conductance. Lidocaine (40–320 µmol·L-1) mimicked carisbamate, implying similar modes of action. Carisbamate (300–600 µmol·L-1) had no effect on spontaneous GABAA miniature inhibitory postsynaptic currents, and at lower concentrations (50–200 µmol·L-1) inhibited Mg2+-free or 4-aminopyridine-induced seizure-like discharges.
Conclusions and implications: Carisbamate blocked evoked action potentials use-dependently, consistent with a primary action on Na+ channels and increased Cl- conductances presynaptically and, under certain conditions, postsynaptically to selectively depress excitatory neurotransmission in piriform cortical layer Ia-afferent terminals.
Abstract:
The acute hippocampal brain slice preparation is an important in vitro screening tool for potential anticonvulsants. Application of 4-aminopyridine (4-AP) or removal of external Mg2+ ions induces epileptiform bursting in slices which is analogous to the electrical brain activity seen in status epilepticus states. We have developed these epileptiform models for use with multi-electrode arrays (MEAs), allowing recording across the hippocampal slice surface from 59 points. We present validation of this novel approach and analyses using two anticonvulsants, felbamate and phenobarbital, whose effects have already been assessed in these models using conventional extracellular recordings. In addition to assessing drug effects on commonly described parameters (duration, amplitude and frequency), we describe novel methods using the MEA to assess burst propagation speeds and the underlying frequencies that contribute to the epileptiform activity seen. Contour plots are also used as a method of illustrating burst activity. Finally, we describe hitherto unreported properties of epileptiform bursting induced by 100 µM 4-AP or removal of external Mg2+ ions. Specifically, we observed decreases over time in burst amplitude and increases over time in burst frequency in the absence of additional pharmacological interventions. These MEA methods enhance the depth, quality and range of data that can be derived from the hippocampal slice preparation compared to conventional extracellular recordings. They may also uncover additional modes of action that contribute to anti-epileptiform drug effects.
Abstract:
The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to the researcher to gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods to analyze weather/climate modes of variability and to reduce the dimensionality of the system. Simple structure rotation of EOFs can enhance interpretability of the obtained patterns but cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. The ICA is viewed here as a method of EOF rotation. Starting from an initial EOF solution, rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence between the components in the time domain. If the underlying climate signals have an independent forcing, one can expect to find loadings with interpretable patterns whose time coefficients have properties that go beyond the simple noncorrelation observed in EOFs. The methodology is presented and an application to the monthly mean sea level pressure (SLP) field is discussed. Among the rotated (to independence) EOFs, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation–like pattern, and a Scandinavian-like pattern have been identified. There is the suggestion that the NAO is an intrinsic mode of variability independent of the Pacific.
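The EOF step that this rotation method starts from can be sketched with a plain SVD of an anomaly field. The "SLP" data below are synthetic (one dominant standing oscillation plus noise), invented purely to exercise the decomposition:

```python
import numpy as np

def eof_analysis(field, n_modes=2):
    """Compute EOFs of an (n_time, n_space) anomaly field via SVD.
    Returns spatial patterns, principal-component time series and the
    fraction of variance explained by each retained mode."""
    anom = field - field.mean(axis=0)          # remove the time mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    pcs = U[:, :n_modes] * s[:n_modes]         # time coefficients
    eofs = Vt[:n_modes]                        # spatial loadings
    return eofs, pcs, var_frac[:n_modes]

# Synthetic field: 120 "monthly means" on 50 grid points, dominated by
# a single annual-cycle standing pattern plus weak noise.
rng = np.random.default_rng(0)
t = np.arange(120)
pattern = np.sin(np.linspace(0, np.pi, 50))
field = np.outer(np.sin(2 * np.pi * t / 12), pattern)
field += 0.05 * rng.standard_normal((120, 50))
eofs, pcs, var = eof_analysis(field)           # var[0] dominates
```

An ICA rotation in the sense of the paper would then act on the leading `pcs`/`eofs` as a post-processing step (e.g. scikit-learn's `FastICA` applied to the retained principal components), seeking temporal independence rather than spatial simplicity.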
Abstract:
The East Asian Winter Monsoon (EAWM) and Siberian High (SH) are inherently related, based on prior studies of instrumental data available for recent decades (since 1958). Here we develop an extended instrumental EAWM index since 1871 that correlates significantly with the SH. These two indices show common modes of variation on the biennial (2-3 year) time scale. We also develop an index of the pressure gradient between the SH and the Aleutian Low, a gradient which critically impacts EAWM variability. This difference series, based on tree-ring reconstructions of the SH and the North Pacific Index (NPI) over the past 400 years, shows that the weakening of this gradient in recent decades has not been unusual in a long-term context. Correlations between the SH series and a tree-ring reconstruction of the El Nino-Southern Oscillation (ENSO) suggest a variable tropical-higher latitude teleconnection.
Abstract:
1. Jerdon's courser Rhinoptilus bitorquatus is a nocturnally active cursorial bird that is only known to occur in a small area of scrub jungle in Andhra Pradesh, India, and is listed as critically endangered by the IUCN. Information on its habitat requirements is needed urgently to underpin conservation measures. We quantified the habitat features that correlated with the use of different areas of scrub jungle by Jerdon's coursers, and developed a model to map potentially suitable habitat over large areas from satellite imagery and facilitate the design of surveys of Jerdon's courser distribution. 2. We used 11 arrays of 5-m long tracking strips consisting of smoothed fine soil to detect the footprints of Jerdon's coursers, and measured tracking rates (tracking events per strip night). We counted the number of bushes and trees, and described other attributes of vegetation and substrate in a 10-m square plot centred on each strip. We obtained reflectance data from Landsat 7 satellite imagery for the pixel within which each strip lay. 3. We used logistic regression models to describe the relationship between tracking rate by Jerdon's coursers and characteristics of the habitat around the strips, using ground-based survey data and satellite imagery. 4. Jerdon's coursers were most likely to occur where the density of large (>2 m tall) bushes was in the range 300-700 ha(-1) and where the density of smaller bushes was less than 1000 ha(-1). This habitat was detectable using satellite imagery. 5. Synthesis and applications. The occurrence of Jerdon's courser is strongly correlated with the density of bushes and trees, and is in turn affected by grazing with domestic livestock, woodcutting and mechanical clearance of bushes to create pasture, orchards and farmland. It is likely that there is an optimal level of grazing and woodcutting that would maintain or create suitable conditions for the species. 
Knowledge of the species' distribution is incomplete and there is considerable pressure from human use of apparently suitable habitats. Hence, distribution mapping is a high conservation priority. A two-step procedure is proposed, involving the use of ground surveys of bush density to calibrate satellite image-based mapping of potential habitat. These maps could then be used to select priority areas for Jerdon's courser surveys. The use of tracking strips to study habitat selection and distribution has potential in studies of other scarce and secretive species.
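The habitat model described in point 3 above can be sketched with a minimal logistic regression of detection against bush density. The data here are synthetic and the fitted relationship is monotone for simplicity; the study's actual model would presumably need additional (e.g. quadratic) terms to capture the optimal 300-700 bushes ha(-1) range:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Minimal logistic regression by gradient descent: model the
    probability of a tracking event at a strip from habitat covariates
    such as large-bush density."""
    Xb = np.hstack([np.ones((len(X), 1)), X])      # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))              # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)          # gradient step
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1 / (1 + np.exp(-Xb @ w))

# Synthetic strips: detection probability rises with bush density
# (density in hundreds per hectare; numbers are invented).
rng = np.random.default_rng(1)
density = rng.uniform(0, 10, size=(200, 1))
p_true = 1 / (1 + np.exp(-(density[:, 0] - 5)))
y = (rng.random(200) < p_true).astype(float)
w = fit_logistic(density, y)
```

Applied to reflectance covariates instead of ground-measured density, the same fitted model is what lets predictions be mapped from satellite imagery over unsurveyed areas.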
Abstract:
Mature (clitellate) Eisenia andrei Bouché (ultra-epigeic), Lumbricus rubellus Hoffmeister (epigeic), and Aporrectodea caliginosa (Savigny) (endogeic) earthworms were placed in soils treated with Pb(NO3)2 to give concentrations in the range 1000 to 10 000 mg Pb kg(-1). After 28 days, LC50 (-95% confidence limit)(+95% confidence limit) values were E. andrei 5824(-361)(+898) mg Pb kg(-1), L. rubellus 2867(-193)(+145) mg Pb kg(-1) and A. caliginosa 2747(-304)(+239) mg Pb kg(-1), and EC50s for weight change were E. andrei 2841(-68)(+150) mg Pb kg(-1), L. rubellus 1303(-201)(+204) mg Pb kg(-1) and A. caliginosa 1208(-206)(+212) mg Pb kg(-1). At any given soil Pb concentration, Pb tissue concentrations after 28 days were the same for all three earthworm species. In a soil avoidance test there was no difference between the behaviour of the different species. The lower sensitivity to Pb exhibited by E. andrei is most likely due to physiological adaptations associated with the modes of life of the earthworms, and could have serious implications for the use of this earthworm as the species of choice in standard toxicological testing.
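An LC50 of the kind reported above is the concentration killing 50% of test animals. Formal estimates with confidence limits come from probit or logit regression; as a crude stand-in, the sketch below interpolates mortality against log concentration. The dose-mortality numbers are invented, not the study's data:

```python
import numpy as np

def lc50_interpolate(conc, mortality):
    """Estimate LC50 by linear interpolation of the mortality fraction
    against log10 concentration. `mortality` must be monotonically
    increasing for np.interp; a probit/logit fit would be used for a
    publishable estimate with confidence limits."""
    logc = np.log10(conc)
    return 10 ** np.interp(0.5, mortality, logc)

# Hypothetical 28-day mortality fractions for one species.
conc = np.array([1000, 2000, 4000, 6000, 8000, 10000])  # mg Pb / kg soil
mort = np.array([0.0, 0.1, 0.3, 0.55, 0.85, 1.0])
lc50 = lc50_interpolate(conc, mort)   # falls between 4000 and 6000
```

The same routine applied to weight-change fractions instead of mortality would give the EC50 analogue.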
Abstract:
Ecological risk assessments must increasingly consider the effects of chemical mixtures on the environment as anthropogenic pollution continues to grow in complexity. Yet testing every possible mixture combination is impractical and unfeasible; thus, there is an urgent need for models that can accurately predict mixture toxicity from single-compound data. Currently, two models are frequently used to predict mixture toxicity from single-compound data: concentration addition and independent action (IA). The accuracy of the predictions generated by these models is currently debated and needs to be resolved before their use in risk assessments can be fully justified. The present study addresses this issue by determining whether the IA model adequately described the toxicity of binary mixtures of five pesticides and other environmental contaminants (cadmium, chlorpyrifos, diuron, nickel, and prochloraz), each with dissimilar modes of action, on the reproduction of the nematode Caenorhabditis elegans. In three out of 10 cases, the IA model failed to describe mixture toxicity adequately, with significant synergism or antagonism being observed. In a further three cases, there was an indication of synergy, antagonism, and effect-level-dependent deviations, respectively, but these were not statistically significant. The extent of the significant deviations varied, but all were such that the predicted percentage effect on reproductive output would have been wrong by 18 to 35% (i.e., the effect concentration expected to cause a 50% effect led to an 85% effect). The presence of such a high number and variety of deviations has important implications for the use of existing mixture toxicity models in risk assessments, especially where all or part of the deviation is synergistic.
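The independent action model tested above has a simple closed form for dissimilarly acting toxicants: the mixture effect is E_mix = 1 - prod(1 - E_i), where E_i is the fractional effect of compound i alone. A minimal sketch:

```python
def independent_action(effects):
    """Independent-action (IA) prediction for a mixture of dissimilarly
    acting toxicants: the probability of escaping the combined effect is
    the product of the probabilities of escaping each single-compound
    effect, so E_mix = 1 - prod(1 - E_i)."""
    escape = 1.0
    for e in effects:
        escape *= (1.0 - e)        # fraction unaffected by compound i
    return 1.0 - escape

# Two compounds each causing a 50% reduction in reproduction alone:
# IA predicts 1 - 0.5 * 0.5 = a 75% combined effect.
pred = independent_action([0.5, 0.5])
```

Observed mixture effects significantly above this prediction indicate synergism, and effects below it indicate antagonism, which is the comparison underlying the deviations reported in the abstract.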