885 results for fuzzy based evaluation method
Abstract:
Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on traffic detector data collected by radar traffic detectors installed along a freeway corridor. DNNs comprise a class of neural networks that are particularly suitable for predicting variables such as travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods for data imputation to account for the missing data usually encountered when collecting data with traffic detectors. It was also necessary to identify a method to estimate travel time on the freeway corridor from data collected by point traffic detectors. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) works as well as the PCAB method, and both outperform the other methods. This study also compared the travel time prediction performance of three different DNN topologies with different memory setups. The results show that one DNN topology (the time-delay neural network) outperforms the other two DNN topologies for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) neural network topology that has been used in a number of previous studies for travel time prediction.
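As a point of reference for the "average speed method" mentioned above, the following is a minimal sketch assuming one spot-speed reading per corridor segment; the function name, segment lengths, and speeds are hypothetical and this is not the dissertation's code.

```python
# Minimal sketch (not the dissertation's code) of the "average speed" travel time
# estimation idea: each point detector is assumed to represent one segment of the
# corridor, and segment travel time is segment length divided by detector speed.
# Segment lengths and speeds below are hypothetical.

def corridor_travel_time(segment_lengths_km, detector_speeds_kmh):
    """Estimate corridor travel time (in minutes) from point-detector speeds."""
    if len(segment_lengths_km) != len(detector_speeds_kmh):
        raise ValueError("one speed reading is expected per segment")
    hours = sum(
        length / speed
        for length, speed in zip(segment_lengths_km, detector_speeds_kmh)
        if speed > 0  # skip detectors reporting zero speed (e.g., missing data)
    )
    return hours * 60.0

# Example: a three-segment corridor with radar-detector spot speeds (km/h).
print(corridor_travel_time([1.2, 0.8, 1.5], [95.0, 60.0, 82.0]))
```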
Abstract:
New designer drugs are constantly emerging onto the illicit drug market, and it is often difficult to validate and maintain comprehensive analytical methods for accurate detection of these compounds. Generally, toxicology laboratories utilize a screening method, such as immunoassay, for the presumptive identification of drugs of abuse. When a positive result occurs, confirmatory methods, such as gas chromatography (GC) or liquid chromatography (LC) coupled with mass spectrometry (MS), are required for more sensitive and specific analyses. In recent years, the need to study the activities of these compounds in screening assays, as well as to develop confirmatory techniques to detect them in biological specimens, has been recognized. Severe intoxications and fatalities have been encountered with emerging designer drugs, presenting analytical challenges for the detection and identification of such novel compounds. The first major task of this research was to evaluate the performance of commercially available immunoassays to determine whether designer drugs were cross-reactive. The second major task was to develop and validate a confirmatory method, using LC-MS, to identify and quantify these designer drugs in biological specimens. Cross-reactivity towards the cathinone derivatives was found to be minimal. Several other phenethylamines demonstrated cross-reactivity at low concentrations, but results were consistent with those published by the assay manufacturer or reported in the literature. Current immunoassay-based screening methods may not be ideal for presumptively identifying most designer drugs, including the "bath salts." For this reason, an LC-MS based confirmatory method was developed for 32 compounds, including eight cathinone derivatives, with limits of quantification in the range of 1-10 ng/mL. The method was fully validated for selectivity, matrix effects, stability, recovery, precision, and accuracy. To compare the screening and confirmatory techniques, several human specimens were analyzed to demonstrate the importance of using a specific analytical method, such as LC-MS, to detect designer drugs in serum, since immunoassays lack cross-reactivity with the novel compounds. Overall, minimal cross-reactivity was observed, highlighting the conclusion that these presumptive screens cannot detect many of the designer drugs and that a confirmatory technique, such as LC-MS, is required for the comprehensive forensic toxicological analysis of designer drugs.
Abstract:
The advent of smart TVs has reshaped the TV-consumer interaction by combining TVs with mobile-like applications and access to the Internet. However, consumers are still unable to seamlessly interact with the content being streamed. An example of this limitation is TV shopping, in which a consumer purchases a product or item displayed in the current TV show. Currently, consumers can only stop the current show and attempt to find a similar item on the Web or in an actual store. It would be more convenient if the consumer could interact with the TV to purchase interesting items. Towards the realization of TV shopping, this dissertation proposes a scalable multimedia content processing framework. Two main challenges in TV shopping are addressed: the efficient detection of products in the content stream, and the retrieval of similar products given a consumer-selected product. The proposed framework consists of three components. The first component performs computationally and temporally aware multimedia abstraction to select a reduced number of frames that summarize the important information in the video stream. By both reducing the number of frames and taking into account the computational cost of the subsequent detection phase, this component allows the efficient detection of products in the stream. The second component realizes the detection phase. It executes scalable product detection using multi-cue optimization. Additional information cues are formulated into an optimization problem that allows the detection of complex products, i.e., those that do not have a rigid form and can appear in various poses. After the second component identifies products in the video stream, the consumer can select an interesting one for which similar ones must be located in a product database. To this end, the third component of the framework consists of an efficient, multi-dimensional, tree-based indexing method for multimedia databases. The proposed index mechanism serves as the backbone of the search. Moreover, it is able to efficiently bridge the semantic gap and perception subjectivity issues during the retrieval process to provide more relevant results.
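The tree-based retrieval backbone described above can be pictured with a minimal sketch; the KD-tree below (scipy) is a generic stand-in, not the dissertation's own indexing method, and the 32-dimensional feature vectors are hypothetical.

```python
# Minimal sketch of tree-based similarity retrieval over product feature vectors.
# A generic KD-tree (scipy) stands in for the dissertation's own multi-dimensional
# indexing method, which is not reproduced here.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
product_features = rng.random((10_000, 32))   # hypothetical 32-d product descriptors
index = cKDTree(product_features)

# Consumer-selected product, slightly perturbed to mimic a query descriptor.
query = product_features[42] + 0.01 * rng.standard_normal(32)
distances, ids = index.query(query, k=5)      # five most similar products
print(ids, distances)
```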
Abstract:
An experimental setup to measure the three-dimensional phase-intensity distribution of an infrared laser beam in the focal region is presented. It is based on the knife-edge method to perform a tomographic reconstruction and on a numerical method based on the transport of intensity equation to obtain the propagating wavefront. This experimental approach allows us to characterize a focused laser beam when the use of imaging or interferometric arrangements is not possible. Thus, we have recovered the intensity and phase of an aberrated beam dominated by astigmatism. The phase evolution is fully consistent with that of the beam intensity along the optical axis. Moreover, the method relies on an expansion of both the irradiance and the phase information in a series of Zernike polynomials. We describe guidelines for choosing a proper set of these polynomials depending on the experimental conditions and show that, by abiding by these criteria, numerical errors can be reduced.
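For reference, the transport of intensity equation invoked above has the following standard form, quoted from the general literature rather than from the paper, whose exact notation may differ.

```latex
% Standard form of the transport of intensity equation (TIE), for reference only;
% I(x,y;z) is the irradiance, \phi the phase, k the wavenumber, and \nabla_\perp
% the gradient in the plane transverse to the optical axis z.
\[
  -k \, \frac{\partial I(x,y;z)}{\partial z}
  = \nabla_{\!\perp} \cdot \bigl( I(x,y;z) \, \nabla_{\!\perp} \phi(x,y;z) \bigr)
\]
```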
Abstract:
This paper presents an image-processing-based method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out in this regard to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. The pitting corrosion of a steel sample in a marine environment is successfully detected using the proposed methodology. It is observed that the proposed method has definite potential to be applied to a wider range of applications.
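A minimal sketch of the general idea, assuming bracketed exposures are merged into an HDR image and dark, pit-like regions are then segmented by a global threshold (OpenCV); the file names, exposure times, and threshold value are hypothetical, and this is not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's pipeline) of HDR-assisted pit detection:
# merge bracketed exposures into an HDR image, tone-map it, and threshold dark,
# pit-like regions. File names, exposure times, and thresholds are hypothetical.
import cv2
import numpy as np

exposures = [cv2.imread(f) for f in ("steel_1_60s.jpg", "steel_1_15s.jpg", "steel_1_4s.jpg")]
times = np.array([1 / 60, 1 / 15, 1 / 4], dtype=np.float32)

hdr = cv2.createMergeDebevec().process(exposures, times)
ldr = np.clip(cv2.createTonemap(gamma=2.2).process(hdr) * 255, 0, 255).astype(np.uint8)

gray = cv2.cvtColor(ldr, cv2.COLOR_BGR2GRAY)
_, pits = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark regions as candidate pits
contours, _ = cv2.findContours(pits, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate pit regions detected")
```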
Abstract:
Permanent water bodies not only store dissolved CO2 but are essential for the maintenance of wetlands in their proximity. From the viewpoint of greenhouse gas (GHG) accounting, wetland functions comprise carbon sequestration under anaerobic conditions and methane release. The investigated area in central Siberia covers boreal and sub-arctic environments. Small inundated basins are abundant on the sub-arctic Taymir lowlands, but also in parts of the severe boreal climate zone where permafrost ice content is high, and they constitute important freshwater ecosystems. Satellite radar imagery (ENVISAT ScanSAR), acquired in summer 2003 and 2004, has been used to derive open water surfaces at 150 m resolution, covering an area of approximately 3 Mkm². The open water surface maps were derived using a simple threshold-based classification method. The results were assessed with Russian forest inventory data, which include detailed information about water bodies. The resulting classification has been further used to estimate the extent of tundra wetlands and to determine their importance for methane emissions. Tundra wetlands cover 7% (400,000 km²) of the study region, and methane emissions from hydromorphic soils are estimated to be 45,000 t/d for the Taymir peninsula.
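The threshold-based classification mentioned above can be illustrated with a minimal sketch, assuming a calibrated SAR backscatter image in dB in which open water appears dark; the threshold value and the toy scene are hypothetical, not the study's calibration.

```python
# Minimal sketch of a threshold-based open-water classification, assuming a
# calibrated SAR backscatter image in dB (open water scatters little and appears
# dark). The threshold and the input array are hypothetical, not the study's values.
import numpy as np

def classify_open_water(backscatter_db, threshold_db=-14.0):
    """Return a boolean water mask: True where backscatter is below the threshold."""
    return backscatter_db < threshold_db

# Toy 150 m resolution scene: mostly land (~-8 dB) with a dark lake (~-18 dB).
scene = np.full((100, 100), -8.0)
scene[40:60, 30:70] = -18.0
mask = classify_open_water(scene)
print(f"open water covers {mask.mean():.1%} of the scene")
```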
Abstract:
Several north temperate marine species were recorded on subtidal hard-substratum reef sites selected to produce a gradient of structural complexity. The study employed an established scuba-based census method, the belt transect. The three types of reef examined, with a measured gradient of increasing structural complexity, were natural rocky reef, artificial reef constructed of solid concrete blocks, and artificial reef made of concrete blocks with voids. Surveys were undertaken monthly over a calendar year using randomly placed fixed rope transects. For a number of conspicuous species of fish and invertebrates, significant differences were found between the levels of habitat complexity and abundance. Overall abundance for many of the species examined was 2-3 times higher on the complex artificial habitats than on simple artificial or natural reef habitats. The enhanced habitat availability produced by the increased structural complexity delivered through specifically designed artificial reefs may have the potential to augment faunal abundance while promoting species diversity.
Abstract:
In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many of the studies focus on methods that generate the reference model simultaneously with the tracking and allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages. In such methods, the measurement errors are not accumulated into the model, they are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame and find the incremental change in the camera pose by aligning the point clouds. We utilize a GPGPU-based implementation of the ICP algorithm which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
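To make the alignment step concrete, here is a minimal sketch of one point-to-point ICP iteration (nearest-neighbour correspondences followed by an SVD/Kabsch rigid fit); it is a plain CPU stand-in for the paper's GPGPU implementation, and the function name and toy point clouds are hypothetical.

```python
# Minimal sketch of one point-to-point ICP step: match each point of the current
# depth frame to its nearest neighbour among the points rendered from the CAD
# model, then solve the best rigid transform with an SVD (Kabsch) fit.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(frame_pts, model_pts):
    """One ICP iteration: returns (R, t) aligning frame_pts toward model_pts."""
    matches = cKDTree(model_pts).query(frame_pts, k=1)[1]
    target = model_pts[matches]

    src_c, tgt_c = frame_pts.mean(0), target.mean(0)
    H = (frame_pts - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy usage with random point clouds standing in for depth data (units: metres).
rng = np.random.default_rng(1)
model = rng.random((500, 3))
frame = model + 0.05                     # the same cloud shifted by 5 cm
R, t = icp_step(frame, model)
print(np.round(t, 3))                    # roughly recovers the -5 cm shift
```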
Abstract:
Purpose: The purpose of this study was to describe the evidence-based research recommendations on injury prevention methods against hamstring injuries among Swedish men's elite football teams. These research-based recommendations were then compared with the way Swedish elite football teams work to prevent hamstring injuries. Method: First, a literature search of PubMed and SPORTDiscus was conducted to find the training methods against hamstring injuries with the strongest evidence. Then an Internet questionnaire regarding injury prevention training methods against hamstring injuries was sent to all Swedish elite football teams. The answers to the questionnaire were then compared with the training methods identified by the research as having the most evidence. Results: Research shows that the method with the most evidence is eccentric strength training. Flexibility, static stretching, and core stability training can also be used to prevent hamstring injuries, but these methods lack a large validated research basis. Eight of 32 teams (25%) answered the questionnaire. All teams indicated that they were working with injury prevention methods, but the methods varied from eccentric strength training to periodization and flexibility training. Two of the eight teams indicated that they worked with eccentric strength training, which is recommended by the research as the most evidence-based training method. Conclusion: The study shows that the teams only partly follow what the research recommends as the most evidence-based training methods against hamstring injuries. However, the study lacks validity and further research is needed before definitive conclusions can be drawn.
Abstract:
The description of terms in traditional terminological resources is limited to certain information, such as the term (mainly nominal), its definition, and its equivalent in a foreign language. This description rarely provides other information that can be very useful to the user, especially if the resource is consulted in order to deepen one's knowledge of a specialized domain, master professional writing, or find contexts in which the term of interest is realized. Information that can be useful in this respect includes the description of the actantial structure of terms, contexts drawn from authentic sources, and the inclusion of other parts of speech such as verbs. Verbs and deverbal nouns, or predicative terminological units (PTUs), often ignored by classical terminology, are of great importance when it comes to expressing an action, a process, or an event. However, the description of these units requires a terminological description model that accounts for their particularities. A number of terminologists (Condamines 1993, Mathieu-Colas 2002, Gross and Mathieu-Colas 2001, and L'Homme 2012, 2015) have indeed proposed description models based on different theoretical frameworks. Our research consists in proposing a methodology for the terminological description of PTUs in Arabic, specifically Modern Standard Arabic (MSA), according to the theory of Frame Semantics of Fillmore (1976, 1977, 1982, 1985) and its application, the FrameNet project (Ruppenhofer et al. 2010). The specialized domain of interest is computing. In our research, we rely on a corpus collected from the web and draw on an existing terminological resource, the DiCoInfo (L'Homme 2008), to compile our own resource. Our objectives can be summarized as follows. First, we wish to lay the foundations of an MSA version of this resource. This version has its own particularities: 1) we target very specific units, namely verbal and deverbal PTUs; 2) the methodology developed for compiling the original DiCoInfo has to be adapted to take a Semitic language into account. We then wish to create a framed version of this resource, in which the PTUs are grouped into semantic frames, following the FrameNet model. To this resource we add English and French PTUs, since this part of the work has a multilingual scope. The methodology consists in automatically extracting verbal and nominal terminological units (VTUs and NTUs), such as Ham~ala (حمل) (to download) and taHmiyl (تحميل) (downloading). To do so, we adapted an existing automatic term extractor, TermoStat (Drouin 2004). Then, using terminological validation criteria (L'Homme 2004), we validate the terminological status of part of the candidates. After validation, we create terminological records, using an XML editor, for each VTU and NTU retained. These records include elements such as the actantial structure of the PTUs and up to twenty annotated contexts. The last step consists in creating semantic frames from the MSA PTUs. We also associate English and French PTUs with the frames created.
This association led to the creation of a terminological resource called "DiCoInfo: A Framed Version". In this resource, PTUs that share the same semantic properties and actantial structures are grouped into semantic frames. For example, the semantic frame Product_development groups PTUs such as Taw~ara (طور) (to develop), to develop, and développer. Following these steps, we obtained a total of 106 MSA PTUs compiled in the MSA version of the DiCoInfo and 57 semantic frames associated with these units in the framed version of the DiCoInfo. Our research shows that MSA can be described with the methodology we have developed.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This paper deals with the measure of Aspect Ratio for mesh partitioning and gives hints as to why, for certain solvers, the Aspect Ratio of partitions plays an important role. We define and rate different kinds of Aspect Ratio, present a new center-based partitioning method that optimizes this measure implicitly, and rate several existing partitioning methods and tools under the criterion of Aspect Ratio.
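As an illustration only (the paper defines and rates several Aspect Ratio measures, none of which is reproduced here), a minimal sketch of one common choice for a 2D partition is the squared boundary length over the area, normalized so that a disc scores 1; the polygons below are hypothetical.

```python
# Minimal sketch of one common Aspect Ratio measure for a 2D partition: squared
# boundary length over area, normalized so a disc scores 1 (larger values mean a
# more elongated, "worse" partition shape). Only one of several possible measures.
import math

def aspect_ratio(vertices):
    """Aspect Ratio of a simple polygon given as a list of (x, y) vertices."""
    n = len(vertices)
    perimeter = sum(
        math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n)
    )
    area = 0.5 * abs(sum(
        vertices[i][0] * vertices[(i + 1) % n][1]
        - vertices[(i + 1) % n][0] * vertices[i][1]
        for i in range(n)
    ))
    return perimeter ** 2 / (4.0 * math.pi * area)

print(aspect_ratio([(0, 0), (1, 0), (1, 1), (0, 1)]))   # unit square: ~1.27
print(aspect_ratio([(0, 0), (4, 0), (4, 1), (0, 1)]))   # 4x1 strip: ~1.99
```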
Abstract:
Background: Community participation has become an integral part of many areas of public policy over the last two decades. For a variety of reasons, ranging from concerns about social cohesion and unrest to perceived failings in public services, governments in the UK and elsewhere have turned to communities as both a site of intervention and a potential solution. In contemporary policy, the shift to community is exemplified by the UK Government's Big Society/Localism agenda and the Scottish Government's emphasis on Community Empowerment. Through such policies, communities have been increasingly encouraged to help themselves in various ways, to work with public agencies in reshaping services, and to become more engaged in the democratic process. These developments have led some theorists to argue that responsibilities are being shifted from the state onto communities, representing a new form of 'government through community' (Rose, 1996; Imrie and Raco, 2003). Despite this policy development, there is surprisingly little evidence that demonstrates the outcomes of the different forms of community participation. This study attempts to address this gap in two ways. Firstly, it explores the ways in which community participation policy in Scotland and England is playing out in practice. Secondly, it assesses the outcomes of different forms of community participation taking place within these broad policy contexts. Methodology: The study employs an innovative combination of the two main theory-based evaluation methodologies, Theories of Change (ToC) and Realist Evaluation (RE), building on ideas generated by earlier applications of each approach (Blamey and Mackenzie, 2007). ToC methodology is used to analyse the national policy frameworks and the general approach of community organisations in six case studies, three in Scotland and three in England. The local evidence from the community organisations' theories of change is then used to analyse and critique the assumptions which underlie the Localism and Community Empowerment policies. Alongside this, across the six case studies, an RE approach is utilised to examine the specific mechanisms which operate to deliver outcomes from community participation processes, and to explore the contextual factors which influence their operation. Given the innovative methodological approach, the study also engages in some focused reflection on the practicality and usefulness of combining ToC and RE approaches. Findings: The case studies provide significant evidence of the outcomes that community organisations can deliver through directly providing services or facilities, and through influencing public services. Important contextual factors in both countries include particular strengths within communities and positive relationships with at least part of the local state, although this often exists in parallel with elements of conflict. Notably, this evidence suggests that the idea of responsibilisation needs to be examined in a more nuanced fashion, incorporating issues of risk and power, as well as the active agency of communities and the local state. Thus communities may sometimes willingly take on responsibility in return for power, although this may also engender significant risk, with the balance between these three elements being significantly mediated by local government.
The evidence also highlights the impacts of austerity on community participation, with cuts to local government budgets in particular increasing the degree of risk and responsibility for communities and reducing opportunities for power. Furthermore, the case studies demonstrate the importance of inequalities within and between communities, operating through a socio-economic gradient in community capacity. This has the potential to make community participation policy regressive, as more affluent communities are more able to take advantage of additional powers and local authorities have fewer resources to support the capacity of more disadvantaged communities. For Localism in particular, the findings suggest that some of the 'new community rights' may provide opportunities for communities to gain power and generate positive social outcomes. However, the English case studies also highlight the substantial risks involved and the extent to which such opportunities are being undermined by austerity. The case studies suggest that cuts to local government budgets have the potential to undermine some aspects of Localism almost entirely, and that the very limited interest in inequalities means that Localism may be both 'empowering the powerful' (Hastings and Matthews, 2014) and further disempowering the powerless. For Community Empowerment, the study demonstrates the ways in which community organisations can gain power and deliver positive social outcomes within the broad policy framework. However, whilst Community Empowerment is ostensibly less regressive, there are still significant challenges to be addressed. In particular, the case studies highlight significant constraints on the notion that communities can 'choose their own level of empowerment', and the assumption of partnership working between communities and the local state needs to take into account the evidence of very mixed relationships in practice. Most importantly, whilst austerity has had more limited impacts on local government in Scotland so far, the projected cuts in this area may leave Community Empowerment vulnerable to the dangers of regressive impact highlighted for Localism. Methodologically, the study shows that ToC and RE can be practically applied together and that there may be significant benefits to the combination. ToC offers a productive framework for policy analysis, and combining this with data derived from local ToCs provides a powerful lens through which to examine and critique the aims and assumptions of national policy. ToC models also provide a useful framework within which to identify specific causal mechanisms, using RE methodology, and, again, the data from local ToC work can enable significant learning about 'what works for whom in what circumstances' (Pawson and Tilley, 1997).