856 results for ability to suspect phishing emails
Abstract:
Many plant strengtheners (PS) are promoted for their supposed effects on nutrient uptake and/or induced resistance (IR). In addition, many organic fertilisers are supposed to enhance plant health, and several studies have shown that tomatoes grown organically are more resistant to late blight, caused by Phytophthora infestans, than tomatoes grown conventionally. Much is known about the mechanisms underlying IR. In contrast, there is no systematic knowledge about genetic variation for IR. Therefore, the following questions were addressed in the presented dissertation: (i) Is there genetic variation among tomato genotypes for inducibility of resistance to P. infestans? (ii) How do different PS compare with the chemical inducer BABA in their ability to induce resistance? (iii) Does IR interact with the inducer used and with different organic fertilisers? A varietal screening showed that, contrary to the commonly held belief, IR in tomatoes is genotype- and isolate-specific. These results indicate that it should be possible to select for inducibility of resistance in tomato breeding. However, isolate specificity also suggests that there could be pathogen adaptation. The three tested PS, as well as two of the three tested organic fertilisers, all induced resistance in the tomatoes. Depending on the PS or BABA used, variety and isolate effects varied. In contrast, there were no variety- or isolate-specific effects of the fertilisers and no interactions between the PS and the fertilisers. This suggests that the different PS should work independently of the soil substrate used. In contrast, the results were markedly different when isolate mixtures were used for challenge inoculations. Plants were generally less susceptible to isolate mixtures than to single isolates. In addition, the effectiveness of the PS was greater and more similar to BABA when isolate mixtures were used. The fact that the different PS and BABA differed in their ability to induce resistance in different host genotype-pathogen isolate combinations puts the usefulness of IR as a breeding goal in question, as it would result in varieties depending on specific inducers. The results with the isolate mixtures are highly relevant. On the one hand, they increase the effectiveness of the resistance inducers. On the other hand, measures that increase pathogen diversity, such as the use of diversified host populations, will also increase the overall resistance of the hosts. For organic tomato production, the results indicate that it is possible to enhance the tomato growing system with respect to plant health management by using optimal fertilisers, plant strengtheners and any measures that increase system diversity.
Abstract:
Many approaches to force control have assumed the ability to command torques accurately. Concurrently, much research has been devoted to developing accurate torque actuation schemes. Often, torque sensors have been utilized to close a feedback loop around output torque. In this paper, the torque control of a brushless motor is investigated through: the design, construction, and utilization of a joint torque sensor for feedback control; and the development and implementation of techniques for phase-current-based feedforward torque control. It is concluded that simply closing a torque loop is no longer necessarily the best alternative, since reasonably accurate current-based torque control is achievable.
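Neither the paper's controller structure nor its motor parameters are reproduced here; the sketch below is only a minimal Python illustration of how a phase-current-based feedforward command might be combined with a torque-sensor feedback correction. The torque constant `kt`, the gains `kp` and `ki`, and the toy plant response are all assumptions introduced for illustration.

```python
# Minimal sketch of combined feedforward/feedback torque control.
# All parameters are illustrative, not taken from the paper.

class TorqueController:
    def __init__(self, kt=0.1, kp=2.0, ki=50.0):
        self.kt = kt          # assumed motor torque constant [N*m/A]
        self.kp = kp          # proportional gain on torque error
        self.ki = ki          # integral gain on torque error
        self.integral = 0.0

    def update(self, tau_des, tau_meas, dt):
        """Return a phase-current command for one control step.

        tau_des  -- desired joint torque [N*m]
        tau_meas -- torque measured by the joint torque sensor [N*m]
        dt       -- control period [s]
        """
        # Feedforward: current needed for the desired torque, assuming an
        # ideal linear torque/current relation.
        i_ff = tau_des / self.kt

        # Feedback: PI correction driven by the sensed torque error.
        err = tau_des - tau_meas
        self.integral += err * dt
        i_fb = (self.kp * err + self.ki * self.integral) / self.kt

        return i_ff + i_fb


# Example: track a 1 N*m step against a crude first-order toy plant.
ctrl = TorqueController()
tau_meas, dt = 0.0, 0.001
for _ in range(100):
    i_cmd = ctrl.update(1.0, tau_meas, dt)
    tau_meas += (ctrl.kt * i_cmd - tau_meas) * 0.2   # toy plant response
print(round(tau_meas, 3))
```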
Abstract:
The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.
Abstract:
A common problem in video surveys in very shallow waters is the presence of strong light fluctuations due to sunlight refraction. Refracted sunlight casts fast-moving patterns, which can significantly degrade the quality of the acquired data. Motivated by the growing need to improve the quality of shallow-water imagery, we propose a method to remove sunlight patterns in video sequences. The method exploits the fact that video sequences allow several observations of the same area of the sea floor over time. It is based on computing the image difference between a given reference frame and the temporal median of a registered set of neighboring images. A key observation is that this difference has two components with separable spectral content: one related to the illumination field (lower spatial frequencies) and the other to the registration error (higher frequencies). The illumination field, recovered by lowpass filtering, is used to correct the reference image. In addition to removing the sunflicker patterns, an important advantage of the approach is its ability to preserve the sharpness of the corrected image, even in the presence of registration inaccuracies. The effectiveness of the method is illustrated on image sets acquired under strong camera motion and containing non-rigid benthic structures. The results testify to the good performance and generality of the approach.
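The abstract describes the pipeline concretely enough to sketch it. The Python/OpenCV outline below is a hedged illustration, not the authors' implementation: it assumes the neighboring frames have already been registered to the reference frame (registration is not shown), and the Gaussian sigma is an arbitrary stand-in for the paper's lowpass filter.

```python
import numpy as np
import cv2

def remove_sunflicker(reference, registered_neighbors, blur_sigma=15):
    """Correct sunlight flicker in `reference` (grayscale float32 in [0, 1]).

    registered_neighbors -- list of neighboring frames already registered
                            (aligned) to the reference frame.
    """
    stack = np.stack(registered_neighbors, axis=0)

    # Temporal median of the registered neighbors: flicker largely cancels out.
    median = np.median(stack, axis=0).astype(np.float32)

    # Difference between reference and median contains the illumination
    # field (low spatial frequencies) plus registration error (high ones).
    diff = reference - median

    # Keep only the low-frequency part: the illumination field.
    illumination = cv2.GaussianBlur(diff, (0, 0), blur_sigma)

    # Subtract the illumination field from the reference frame, so sharp
    # image content is preserved even with imperfect registration.
    corrected = np.clip(reference - illumination, 0.0, 1.0)
    return corrected
```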
Abstract:
Conscientious objection is defined as the ability to depart from statutory mandates on the basis of deeply held ethical or religious convictions. A discussion of this issue presents the conflict between the idea of a State concerned with the promotion of individual rights or with the protection of general interests, and a conception of law based on the maintenance of order as against a view of the law as a means of claiming protection for the minimum conditions of the person. From this conflict arises the question of whether conscientious objection should be guaranteed as a fundamental right derived from freedom of conscience or as a statutory authority legislatively conferred upon persons. This paper sets out a discussion of the two views in order to develop a position that is more consistent with the context of social and constitutional law.
Abstract:
This dissertation has as its goal the quantitative evaluation of the application of coupled hydrodynamic, ecological and clarity models to the deterministic prediction of water clarity in lakes and reservoirs. Prediction of water clarity is somewhat unique, insofar as it represents the integrated and coupled effects of a broad range of individual water quality components. These include biological components such as phytoplankton, together with the associated cycles of nutrients that are needed to sustain their populations, and abiotic components such as suspended particles that may be introduced by streams, atmospheric deposition or sediment resuspension. Changes in clarity induced by either component will feed back on the phytoplankton dynamics, as incident light also affects biological growth. Thus, the ability to successfully model changes in clarity necessarily requires correct modeling of these other water quality parameters. Water clarity is also unique in that it may be one of the earliest and most easily detected warnings of the acceleration of the process of eutrophication in a water body.
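As one concrete illustration of the coupling the abstract describes, water-clarity models commonly express the diffuse light attenuation coefficient as a sum of contributions from the water itself, phytoplankton, and suspended particles, so that any change in those constituents alters the light available for growth. The partition below is a standard textbook form, not necessarily the exact formulation used in the dissertation.

```latex
% Light attenuation as a sum of constituent contributions (illustrative form)
\[
  K_d = k_w + k_{\mathrm{chl}}\,[\mathrm{Chl}] + k_{\mathrm{ss}}\,[\mathrm{SS}],
  \qquad
  I(z) = I_0 \, e^{-K_d z},
\]
% where [Chl] is phytoplankton chlorophyll, [SS] suspended solids, and I(z)
% the irradiance at depth z that in turn drives phytoplankton growth.
```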
Abstract:
In the quantum mechanics literature it is common to find descriptors based on the pair density or the electron density, with varying success depending on the applications they address. For a descriptor to have chemical meaning, it must provide a definition of an atom in a molecule, or be able to identify regions of molecular space associated with some chemical concept (such as a lone pair or a bonding region, among others). Along these lines, several partitioning schemes have been proposed: the theory of atoms in molecules (AIM), the electron localization function (ELF), Voronoi cells, Hirshfeld atoms, fuzzy atoms, etc. The aim of this thesis is to explore density-based descriptors built on partitions of molecular space of the AIM, ELF or fuzzy-atom type, to analyze the existing descriptors at different levels of theory, to propose new aromaticity descriptors, and to study the ability of all these tools to discern between different reaction mechanisms.
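To make the idea of partition-based descriptors concrete: in schemes such as Hirshfeld or fuzzy atoms, each atom A is assigned a weight function that sums to one everywhere in space, and atomic populations (and most derived descriptors) are obtained by weighting the electron density with it. The expression below is the generic form of such a partition, not a specific descriptor proposed in the thesis.

```latex
% Generic atomic partition of the electron density (Hirshfeld / fuzzy atoms)
\[
  \sum_{A} w_A(\mathbf{r}) = 1 \quad \forall\,\mathbf{r},
  \qquad
  N_A = \int w_A(\mathbf{r})\,\rho(\mathbf{r})\,d\mathbf{r},
\]
% so that descriptors built from N_A (or from two-electron analogues using
% the pair density) inherit the chosen definition of an atom in the molecule.
```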
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through experience. Nowadays, modelling the behaviour of our brain is a fiction; that is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projected points in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologue points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many considerations have to be taken into account; for example, points may have no correspondence because of a surface occlusion or simply because they project outside the scope of one camera. The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern, its projection onto the scene, and an image sensor: the deformations between the pattern projected onto the scene and the one captured by the camera permit three-dimensional information about the illuminated scene to be obtained. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally demanding algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. It is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light is imaged by the camera, and its label has to be read (the pattern decoded) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has permitted the presentation of a new coded structured light pattern which solves the correspondence problem uniquely and robustly.
Unique, as each token of light is coded by a different word, which removes the problem of multiple matching. Robust, since the pattern has been coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects and of the more complicated measurement of moving objects; the technique can be used in both cases, as the pattern is coded within a single projection shot, so it is applicable to several robot vision tasks. Our interest is focused on the mathematical study of the camera and pattern projector models, on how these models can be obtained by calibration, and on how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
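The final step referred to above, recovering a 3D point from two corresponding image points once both camera (or camera/projector) models are calibrated, can be sketched with standard linear triangulation. The Python snippet below uses OpenCV's implementation and assumes the 3x4 projection matrices P1 and P2 are already known from calibration; the matrices and image points shown are placeholders, not values from the thesis.

```python
import numpy as np
import cv2

# Placeholder 3x4 projection matrices; in practice these come from the
# camera (and pattern projector) calibration step described in the thesis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

# A pair of corresponding (homologue) image points, one per view,
# given as 2xN arrays (toy values for illustration).
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[300.0], [240.0]])

# Linear triangulation: returns homogeneous 4xN coordinates.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()   # back to Euclidean 3D coordinates
print(X)
```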
Abstract:
This paper presents a study investigating how informed pediatricians are about hearing loss and their ability to assist and refer parents of children with hearing loss.
Abstract:
Detailed knowledge of waterfowl abundance and distribution across Canada is lacking, which limits our ability to effectively conserve and manage their populations. We used 15 years of data from an aerial transect survey to model the abundance of 17 species or species groups of ducks within southern and boreal Canada. We included 78 climatic, hydrological, and landscape variables in Boosted Regression Tree models, allowing flexible response curves and multiway interactions among variables. We assessed predictive performance of the models using four metrics and calculated uncertainty as the coefficient of variation of predictions across 20 replicate models. Maps of predicted relative abundance were generated from the resulting models, and they largely match spatial patterns evident in the transect data. We observed two main distribution patterns: a concentrated prairie-parkland distribution and a more dispersed pan-Canadian distribution. These patterns were congruent with the relative importance of predictor variables and with the model evaluation statistics between the two groups of distributions. Most species had a hydrological variable as the most important predictor, although the specific hydrological variable differed somewhat among species. In some cases, important variables had clear ecological interpretations, but in other instances, e.g., topographic roughness, they may simply reflect chance correlations between species distributions and environmental variables identified by the model-building process. Given the performance of our models, we suggest that the resulting prediction maps can be used in future research and to guide conservation activities, particularly within the bounds of the survey area.
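The original analysis was presumably done with dedicated Boosted Regression Tree software, but the workflow the abstract describes (flexible tree-based response curves, replicate models, coefficient of variation of predictions as an uncertainty measure) can be sketched with a generic gradient-boosting implementation. Everything below, including the use of scikit-learn, the toy covariates, and the bootstrap replicates, is illustrative rather than the authors' actual setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy stand-ins for the survey data: rows are transect segments, columns are
# climatic / hydrological / landscape covariates; y is a duck abundance index.
X = rng.normal(size=(500, 10))
y = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1]) + rng.normal(scale=0.1, size=500)

# Fit several replicate boosted-tree models (here via bootstrap resampling)
# and use the spread of their predictions as an uncertainty measure.
preds = []
for seed in range(20):
    idx = rng.integers(0, len(y), size=len(y))
    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                      max_depth=3, random_state=seed)
    model.fit(X[idx], y[idx])
    preds.append(model.predict(X))

preds = np.array(preds)
mean_pred = preds.mean(axis=0)
cv_pred = preds.std(axis=0) / np.abs(mean_pred)   # coefficient of variation

print("mean CV of predictions across replicates:", round(cv_pred.mean(), 3))
```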
Abstract:
The Global Ocean Data Assimilation Experiment (GODAE [http://www.godae.org]) has spanned a decade of rapid technological development. The ever-increasing volume and diversity of oceanographic data produced by in situ instruments, remote-sensing platforms, and computer simulations have driven the development of a number of innovative technologies that are essential for connecting scientists with the data that they need. This paper gives an overview of the technologies that have been developed and applied in the course of GODAE, which now provide users of oceanographic data with the capability to discover, evaluate, visualize, download, and analyze data from all over the world. The key to this capability is the ability to reduce the inherent complexity of oceanographic data by providing a consistent, harmonized view of the various data products. The challenges of data serving have been addressed over the last 10 years through the cooperative skills and energies of many individuals.
Abstract:
A generic Nutrient Export Risk Matrix (NERM) approach is presented. This provides advice to farmers and policy makers on good practice for reducing nutrient loss and is intended to persuade them to implement such measures. Combined with a range of nutrient transport modelling tools and field experiments, NERMs can play an important role in reducing nutrient export from agricultural land. The Phosphorus Export Risk Matrix (PERM) is presented as an example NERM. The PERM integrates hydrological understanding of runoff with a number of agronomic and policy factors into a clear problem-solving framework. This allows farmers and policy makers to visualise strategies for reducing phosphorus loss through proactive land management. The risk of pollution is assessed by a series of informed questions relating to farming intensity and practice. This information is combined with the concept of runoff management to point towards simple, practical remedial strategies which do not compromise farmers' ability to obtain sound economic returns from their crop and livestock.
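The abstract does not give the actual PERM question set or scoring, but the general idea of a risk matrix, answers about farming intensity crossed with runoff potential pointing to a remedial message, can be illustrated with a toy lookup. The categories and advice strings below are hypothetical, not PERM content.

```python
# Toy illustration of a risk-matrix lookup; categories and advice are
# hypothetical, not the actual PERM content.
ADVICE = {
    ("low", "low"):   "Maintain current practice.",
    ("low", "high"):  "Establish buffer strips along runoff flow pathways.",
    ("high", "low"):  "Review fertiliser/manure application rates and timing.",
    ("high", "high"): "Combine rate/timing changes with runoff interception.",
}

def perm_advice(farming_intensity: str, runoff_risk: str) -> str:
    """Return remedial advice for a (farming intensity, runoff risk) pair."""
    return ADVICE[(farming_intensity, runoff_risk)]

print(perm_advice("high", "high"))
```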
Abstract:
The response to painful stimulation depends not only on peripheral nociceptive input but also on the cognitive and affective context in which pain occurs. One contextual variable that affects the neural and behavioral response to nociceptive stimulation is the degree to which pain is perceived to be controllable. Previous studies indicate that perceived controllability affects pain tolerance, learning and motivation, and the ability to cope with intractable pain, suggesting that it has profound effects on neural pain processing. To date, however, no neuroimaging studies have assessed these effects. We manipulated the subjects' belief that they had control over a nociceptive stimulus, while the stimulus itself was held constant. Using functional magnetic resonance imaging, we found that pain that was perceived to be controllable resulted in attenuated activation in the three neural areas most consistently linked with pain processing: the anterior cingulate, insular, and secondary somatosensory cortices. This suggests that activation at these sites is modulated by cognitive variables, such as perceived controllability, and that pain imaging studies may therefore overestimate the degree to which these responses are stimulus driven and generalizable across cognitive contexts.
Abstract:
The history of using vesicular systems for drug delivery to and through skin started nearly three decades ago with a study utilizing phospholipid liposomes to improve skin deposition and reduce systemic effects of triamcinolone acetonide. Subsequently, many researchers evaluated liposomes with respect to skin delivery, with the majority of them recording localized effects and relatively few studies showing transdermal delivery effects. Shortly after this, Transfersomes were developed with claims about their ability to deliver their payload into and through the skin with efficiencies similar to subcutaneous administration. Since these vesicles are ultradeformable, they were thought to penetrate intact skin deep enough to reach the systemic circulation. Their mechanisms of action remain controversial, with diverse processes being reported. Parallel to this development, other classes of vesicles were produced, with ethanol being included in the vesicles to provide flexibility (as in ethosomes) and vesicles being constructed from surfactants and cholesterol (as in niosomes). These ultradeformable vesicles showed variable efficiency in delivering low molecular weight and macromolecular drugs. This article will critically evaluate vesicular systems for dermal and transdermal delivery of drugs, considering both their efficacy and potential mechanisms of action.
Abstract:
This study investigates the response of wintertime North Atlantic Oscillation (NAO) to increasing concentrations of atmospheric carbon dioxide (CO2) as simulated by 18 global coupled general circulation models that participated in phase 2 of the Coupled Model Intercomparison Project (CMIP2). NAO has been assessed in control and transient 80-year simulations produced by each model under constant forcing, and 1% per year increasing concentrations of CO2, respectively. Although generally able to simulate the main features of NAO, the majority of models overestimate the observed mean wintertime NAO index of 8 hPa by 5-10 hPa. Furthermore, none of the models, in either the control or perturbed simulations, are able to reproduce decadal trends as strong as that seen in the observed NAO index from 1970-1995. Of the 15 models able to simulate the NAO pressure dipole, 13 predict a positive increase in NAO with increasing CO2 concentrations. The magnitude of the response is generally small and highly model-dependent, which leads to large uncertainty in multi-model estimates such as the median estimate of 0.0061 +/- 0.0036 hPa per %CO2. Although an increase of 0.61 hPa in NAO for a doubling in CO2 represents only a relatively small shift of 0.18 standard deviations in the probability distribution of winter mean NAO, this can cause large relative increases in the probabilities of extreme values of NAO associated with damaging impacts. Despite the large differences in NAO responses, the models robustly predict similar statistically significant changes in winter mean temperature (warmer over most of Europe) and precipitation (an increase over Northern Europe). Although these changes present a pattern similar to that expected due to an increase in the NAO index, linear regression is used to show that the response is much greater than can be attributed to small increases in NAO. NAO trends are not the key contributor to model-predicted climate change in wintertime mean temperature and precipitation over Europe and the Mediterranean region. However, the models' inability to capture the observed decadal variability in NAO might also signify a major deficiency in their ability to simulate the NAO-related responses to climate change.
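The figures quoted in the abstract fit together as follows; this is a reconstruction of the stated arithmetic, with the implied standard deviation of winter mean NAO inferred from the two numbers given rather than taken from the paper.

```latex
% Reconstructing the quoted NAO response figures
\[
  \Delta \mathrm{NAO}_{2\times\mathrm{CO}_2}
    \approx 0.0061~\mathrm{hPa}/\%\mathrm{CO}_2 \times 100\,\% = 0.61~\mathrm{hPa},
  \qquad
  \frac{0.61~\mathrm{hPa}}{\sigma_{\mathrm{NAO}}} \approx 0.18
  \;\Rightarrow\; \sigma_{\mathrm{NAO}} \approx 3.4~\mathrm{hPa}.
\]
```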