994 results for Computer Experiments
Abstract:
In this investigation, high-resolution (1 × 1 × 1 mm³) functional magnetic resonance imaging (fMRI) at 7 T is performed using a multichannel array head coil and a surface coil approach. Scan geometry was optimized for each coil separately to exploit the strengths of both coils. Acquisitions with the surface coil focused on partial brain coverage, while whole-brain coverage fMRI experiments were performed with the array head coil. BOLD sensitivity in the occipital lobe was found to be higher with the surface coil than with the head array, suggesting that restriction of signal detection to the area of interest may be beneficial for localized activation studies. Performing independent component analysis (ICA) decomposition of the fMRI data, we consistently detected BOLD signal changes and resting state networks. In the surface coil data, a small negative BOLD response could be detected in these resting state network areas. Also in the data acquired with the surface coil, two distinct components of the positive BOLD signal were consistently observed. These two components were tentatively assigned to tissue and venous signal changes.
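The ICA decomposition described above can be reproduced with standard tools; below is a minimal sketch using scikit-learn's FastICA, assuming the fMRI volumes have already been masked and flattened into a time-by-voxel matrix (the dimensions and component count are illustrative, not taken from the study):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Illustrative dimensions only: 200 volumes (time points), 5000 voxels.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 5000))  # stand-in for masked, detrended fMRI data

# Spatial ICA: treat each voxel's time course as a mixture of source signals.
ica = FastICA(n_components=20, random_state=0, max_iter=500)
time_courses = ica.fit_transform(data)   # (200, 20) component time courses
spatial_maps = ica.components_           # (20, 5000) component spatial maps

# Components whose time course follows the task paradigm would be candidate
# BOLD responses; others may capture resting-state networks or noise.
print(time_courses.shape, spatial_maps.shape)
```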
Abstract:
BACKGROUND: Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. The current literature proposes manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for classification of major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases (ICD-10) codes was programmed, further developed, and implemented for one year's data (2004) from 25 registries. The group of cases classified as potential multiple congenital anomalies was manually reviewed by three geneticists to reach a final agreed classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as "potentially multiple congenital anomalies". After manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching 35% for cases with bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve classification of congenital anomalies for surveillance and research.
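For illustration, a minimal sketch of what an ICD-10-based classification rule of this kind might look like; the code ranges and the two-rubric heuristic below are simplified assumptions, not the actual EUROCAT specification (which, as the abstract notes, also separates monogenic syndromes and excludes sequences):

```python
# ICD-10 Q90-Q99 covers chromosomal abnormalities; the list is illustrative.
CHROMOSOMAL = ("Q90", "Q91", "Q92", "Q93", "Q95", "Q96", "Q97", "Q98", "Q99")

def classify_case(icd10_codes):
    """Classify one case from its list of major-anomaly ICD-10 codes."""
    if any(code.startswith(CHROMOSOMAL) for code in icd10_codes):
        return "chromosomal"
    # Distinct three-character rubrics as a crude proxy for distinct anomalies.
    rubrics = {code[:3] for code in icd10_codes}
    if len(rubrics) >= 2:
        return "potential multiple"   # flagged for manual review by geneticists
    return "isolated"

print(classify_case(["Q21.0"]))            # isolated
print(classify_case(["Q21.0", "Q60.1"]))   # potential multiple
print(classify_case(["Q90.0", "Q21.0"]))   # chromosomal
```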
Abstract:
Planning with partial observability can be formulated as a non-deterministic search problem in belief space. The problem is harder than classical planning, as keeping track of beliefs is harder than keeping track of states, and searching for action policies is harder than searching for action sequences. In this work, we develop a framework for partial observability that avoids these sources of complexity and leads to a planner that scales up to larger problems. For this, the class of problems is restricted to those in which 1) the non-unary clauses representing the uncertainty about the initial situation are invariant, and 2) variables that are hidden in the initial situation do not appear in the body of conditional effects, which are all assumed to be deterministic. We show that such problems can be translated in linear time into equivalent fully observable non-deterministic planning problems, and that a slight extension of this translation renders the problem solvable by means of classical planners. The whole approach is sound and complete provided that, in addition, the state space is connected. Experiments are also reported.
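As a point of reference for the belief-space formulation the paper starts from (not the paper's translation itself), here is a toy sketch in which a belief is a set of possible states, deterministic actions progress every state in the belief, and sensing refines it; all names and the example domain are illustrative:

```python
# A toy belief-space formulation: a belief is a frozenset of possible states.
# Deterministic actions are functions state -> state; sensing filters a belief.
# Searching over such beliefs is the costly baseline the paper's translation avoids.

def apply_action(belief, action):
    """Progress every state in the belief through a deterministic action."""
    return frozenset(action(s) for s in belief)

def sense(belief, observation):
    """Refine the belief with an observation (a predicate over states)."""
    return frozenset(s for s in belief if observation(s))

# Example: two possible initial positions on a line, hidden from the agent.
belief0 = frozenset({1, 3})
move_right = lambda s: s + 1
at_4 = lambda s: s == 4

b1 = apply_action(belief0, move_right)   # {2, 4}
b2 = sense(b1, at_4)                     # {4} if "at 4" is observed
print(sorted(b1), sorted(b2))
```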
Abstract:
The Learning Affect Monitor (LAM) is a new computer-based assessment system integrating basic dimensional evaluation and discrete description of affective states in daily life, based on an autonomously adapting system. Subjects evaluate their affective states in a three-dimensional space (a valence and activation circumplex plus global intensity) and then qualify them using up to 30 adjective descriptors chosen from a list. The system gradually adapts to the user, enabling the affect descriptors it presents to be increasingly relevant. An initial study with 51 subjects, using one week of time-sampling with 8 to 10 randomized signals per day, produced n = 2,813 records with good reliability measures (e.g., a response rate of 88.8% and a mean split-half reliability of .86), user acceptance, and usability. Multilevel analyses show circadian and weekly patterns, and significant individual and situational variance components of the basic dimension evaluations. Validity analyses indicate sound assignment of qualitative affect descriptors in the bidimensional semantic space according to the circumplex model of basic affect dimensions. The LAM assessment module can be implemented on different platforms (handheld, desktop, mobile phone) and provides very rapid and meaningful data collection, preserving complex and interindividually comparable information in the domain of emotion and well-being.
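The abstract does not specify the adaptation rule, so purely for illustration, here is a minimal sketch in which the descriptor shortlist is re-ranked by each user's past usage frequency, one plausible way such a system could "gradually adapt to the user":

```python
from collections import Counter

# Usage-frequency ranking is an assumption made here for illustration only;
# the LAM system's actual adaptation mechanism is not described in the abstract.

class DescriptorList:
    def __init__(self, descriptors, shortlist_size=10):
        self.counts = Counter({d: 0 for d in descriptors})
        self.shortlist_size = shortlist_size

    def shortlist(self):
        """Present the user's most-used descriptors first."""
        ranked = [d for d, _ in self.counts.most_common()]
        return ranked[: self.shortlist_size]

    def record(self, chosen):
        for d in chosen:
            self.counts[d] += 1

lam = DescriptorList(["calm", "tense", "joyful", "tired", "angry"], 3)
lam.record(["tired", "calm"])
lam.record(["tired"])
print(lam.shortlist())  # 'tired' now ranks first
```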
Abstract:
Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyse the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagiarism detection systems. With this aim in mind, we created the P4P corpus, a new resource which uses a paraphrase typology to annotate a subset of the PAN-PC-10 corpus for automatic plagiarism detection. The results of the Second International Competition on Plagiarism Detection were analysed in the light of this annotation. The presented experiments show that (i) more complex paraphrase phenomena and a high density of paraphrase mechanisms make plagiarism detection more difficult, (ii) lexical substitutions are the paraphrase mechanisms used the most when plagiarising, and (iii) paraphrase mechanisms tend to shorten the plagiarized text. For the first time, the paraphrase mechanisms behind plagiarism have been analysed, providing critical insights for the improvement of automatic plagiarism detection systems.
Abstract:
A computer program to adjust roadway profiles has been developed to serve as an aid to the county engineers of the State of Iowa. Many hours are spent reducing field notes and calculating adjusted roadway profiles to prepare an existing roadway for paving that will produce a high-quality ride and be as maintenance-free as possible. Since the computer is very well adapted to performing long, tedious tasks, programming this work for a computer frees the engineer of these tasks. Freed from manual calculations, the engineer is able to spend more time solving engineering problems. The type of roadway that this computer program is designed to adjust is a road that at some time in its history was graded to a finished subgrade. After a period of time, this road is to receive a finished paved surface. The problem then arises whether to bring the existing roadway up to the designed grade or to make profile adjustments and compromise between the existing and the design profiles. In order to achieve the latter condition using this program, the engineer needs to give the computer only a minimum amount of information.
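The report does not detail the adjustment algorithm; as an illustration of the "compromise" idea, here is a minimal sketch that blends existing and design elevations and smooths the result, with the weighting and smoothing choices being assumptions made here:

```python
import numpy as np

# Blend the two elevation series, then smooth with a moving average so the
# ride stays even. Illustrative only, not the Iowa program's actual method.

def compromise_profile(existing, design, weight=0.5, window=5):
    existing = np.asarray(existing, dtype=float)
    design = np.asarray(design, dtype=float)
    blended = weight * design + (1.0 - weight) * existing
    kernel = np.ones(window) / window
    # mode="same" keeps one elevation per station; the ends are mildly biased.
    return np.convolve(blended, kernel, mode="same")

stations = np.arange(0, 10)                         # station index along the road
existing = 100 + 0.3 * stations + np.sin(stations)  # bumpy existing grade
design = 100 + 0.3 * stations                       # smooth design grade
print(np.round(compromise_profile(existing, design), 2))
```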
Abstract:
Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when people need help, they will assess the probability of getting it (DePaulo, 1982, cited in Flynn & Lake, 2008), tend to estimate one that is lower than the real chance, and so may not even consider it worth asking. Existing explanations attribute this phenomenon to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of saying "yes" while ignoring that the potential helper must also take into account the social cost of saying "no". Especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend to fail to realize that it might be more costly to refuse a help request than to accept it.

A similar effect has been observed when estimating the trustworthiness of people. Fetchenhauer and Dunning (2010) showed that people also tend to underestimate it. This bias is reduced when, instead of asymmetric feedback (received only when deciding to trust the other person), symmetric feedback (always given) is provided. This explanation could apply to help seeking as well, since people only receive feedback when they actually make their request, but not otherwise.

Fazio, Shook, and Eiser (2004) studied something that could be reinforcing these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from "information gain being contingent on approach behavior" (p. 293), which can be identified with what Fetchenhauer and Dunning call 'asymmetric feedback', and hence also with help requests. Fazio et al. also found a generalization asymmetry in favor of negative attitudes over positive ones. They attributed it to a negativity bias that "weights resemblance to a known negative more heavily than resemblance to a positive" (p. 300). Applied to help-seeking scenarios, this would mean that when facing an unknown situation, people tend to generalize and infer that a negative outcome is more likely than a positive one; together with the above, people will be more inclined to think that they will get a "no" when requesting help.

Denrell and Le Mens (2011) present a different perspective on judgment biases in general. They deviate from the classical account of inappropriate information processing (described, among others, by Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain these biases in terms of 'adaptive sampling'. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to safeguard themselves from experiences that once yielded negative outcomes. However, it also prevents them from giving those experiences a second chance to produce an updated outcome that could turn out positive, more positive, or simply regress to the mean, whatever direction that implies.

As Denrell and Le Mens (2011) explained, this makes sense: if you go to a restaurant and do not like the food, you do not choose that restaurant again. This is what we think could be happening when asking for help: when we get a "no", we stop asking. Here, we want to provide a complementary explanation, based on adaptive sampling, for the underestimation of the probability that others will comply with our direct help requests. First, we will develop and explain a model that represents the theory. Later on, we will test it empirically by means of experiments and elaborate on the analysis of the results.
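To make the adaptive-sampling account concrete, here is a minimal simulation sketch (parameters are illustrative, not from the article): an agent asks for help only while its running estimate of compliance stays above a threshold, and because a "no" can freeze the estimate at a low value, the long-run average estimate falls below the true compliance rate:

```python
import random

# Asking reveals "yes"/"no"; not asking reveals nothing (asymmetric feedback).
# The agent stops asking once its estimate drops below the threshold, so low
# estimates never get corrected: the hot-stove effect.

def simulate(p_yes=0.7, threshold=0.5, trials=200, seed=None):
    rng = random.Random(seed)
    estimate, n_obs = 0.5, 0            # neutral prior estimate of P(yes)
    for _ in range(trials):
        if estimate < threshold:
            continue                    # avoids asking: estimate never updates
        outcome = 1.0 if rng.random() < p_yes else 0.0
        n_obs += 1
        estimate += (outcome - estimate) / n_obs   # incremental mean
    return estimate

runs = [simulate(seed=s) for s in range(2000)]
print(sum(runs) / len(runs))  # averages well below the true p_yes = 0.7
```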
Abstract:
Report on selected computer systems operated by the State of Iowa for the period July 1, 1999 through June 30, 2014
Abstract:
This project was undertaken in coordination with the Environmental Assessment process on the Mt. Vernon Road Improvements project in Cedar Rapids, Iowa. The goal of the research was to determine the cost effectiveness of combined photo-imaging and computer animation as a presentation tool for describing public road improvements. The Public Hearing, in combination with the involvement of a Citizen's Resource Group, afforded an opportunity for the process to be evaluated by interested citizens who were not familiar with engineering drawings or the construction industry. After the initial viewing of a draft version of the video, the Resource Group made recommendations to the staff developing the video. Discussion of these recommendations led to the development of an animated composite section that showed a combination of situations typically encountered throughout the project corridor, as well as critical considerations. The composite section did not show specific locations; therefore, individuals were not distracted by looking for details pertaining to their own properties. Concentrating on the concepts involved rather than on specifics gave the citizens the opportunity for a more thorough understanding. The development of the composite concept was the primary discovery of the research.
Abstract:
Two portable Radio Frequency IDentification (RFID) systems (made by Texas Instruments and HiTAG) were developed and tested for bridge scour monitoring by the Department of Civil and Environmental Engineering at the University of Iowa (UI). Both systems consist of three similar components: 1) a passive cylindrical transponder (a term derived from transmitter/responder) 2.2 cm in length; 2) a low-frequency reader (~134.2 kHz); and 3) an antenna (with a rectangular or hexagonal loop). The Texas Instruments system can only read one smart particle at a time, while the HiTAG system was successfully modified at UI by adding an anti-collision feature. The HiTAG system was equipped with four antennas and could simultaneously detect thousands of smart particles located in close proximity. A computer code was written in C++ at UI for the HiTAG system to allow simultaneous, multiple readouts of smart particles under different flow conditions. The code was written for the Windows XP operating system and has a user-friendly Windows interface that provides detailed information on each smart particle, including its identification number, location (orientation in x, y, z), and the instant the particle was detected. These systems were examined within the context of this innovative research in order to identify the RFID system best suited to performing autonomous bridge scour monitoring. A comprehensive laboratory study that included 142 experimental runs and limited field testing was performed to test the code and determine the performance of each system in terms of transponder orientation, transponder housing material, maximum antenna-transponder detection distance, minimum inter-particle distance, and antenna sweep angle. The two RFID systems' capabilities to predict scour depth were also examined using pier models. The findings can be summarized as follows: 1) The first system (Texas Instruments) read one smart particle at a time, and its effective read range was about 3 ft (~1 m). The second system (HiTAG) had similar detection ranges but permitted the addition of an anti-collision system to facilitate the simultaneous identification of multiple smart particles (transponders placed into marbles). It was therefore concluded that the HiTAG system with the anti-collision feature (or a system with similar features) would be preferable to a single-readout system for bridge scour monitoring, as the former can provide repetitive readings at multiple locations, which helps in predicting the scour-hole bathymetry along with the maximum scour depth. 2) The HiTAG system provided reliable measures of the scour depth (z-direction) and the locations of the smart particles on the x-y plane within a distance of about 3 ft (~1 m) from the four antennas. A Multiplexer HTM4-I allowed the simultaneous use of four antennas with the HiTAG system. The four hexagonal-loop antennas permitted the complete identification of the smart particles in an x, y, z orthogonal system as a function of time. The HiTAG system can also be used to measure the rate of sediment movement (in kg/s or tonnes/hr). 3) The maximum detection distance of the antenna did not change significantly for buried particles compared to particles tested in the air. Thus, low-frequency RFID systems (~134.2 kHz) are appropriate for monitoring bridge scour because their waves can penetrate water and sand bodies without significant loss of signal strength.

4) The pier model experiments in a flume with the first RFID system showed that the system was able to successfully predict the maximum scour depth when used with a single particle in the vicinity of the pier model where the scour hole was expected. The pier model experiments with the second RFID system, performed in a sandbox, showed that the system was able to successfully predict the maximum scour depth when two scour balls were used in the vicinity of the pier model where the scour hole developed. 5) The preliminary field experiments with the second RFID system at the Raccoon River, IA, near the Railroad Bridge (located upstream of the 360th Street Bridge, near Booneville), showed that the RFID technology is transferable to the field. A practical method should be developed for facilitating the placement of the smart particles within the river bed. This method needs to be straightforward enough for Department of Transportation (DOT) and county road crews to implement easily at different locations. 6) Since the inception of this project, further research has shown significant progress in RFID technology. This includes the availability of waterproof RFID systems with passive or active transponders and detection ranges of up to 60 ft (~20 m) within the water-sediment column. These systems have anti-collision features and can accommodate up to 8 powerful antennas, which can significantly increase the detection range. Such systems need to be further considered and modified for performing automatic bridge scour monitoring. The knowledge gained from the two systems, including the software, needs to be adapted to the new systems.
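As an illustration of how tracked particle positions translate into a scour estimate, here is a minimal sketch; the coordinate conventions and the initial bed elevation are assumptions made here, not details from the study:

```python
from dataclasses import dataclass

# Each detected particle reports (id, x, y, z). As the bed erodes, particles
# settle into the scour hole, so the lowest elevation relative to the initial
# bed approximates the maximum scour depth.

@dataclass
class Detection:
    tag_id: str
    x: float   # streamwise position (m)
    y: float   # cross-stream position (m)
    z: float   # elevation (m), negative below the initial bed

def max_scour_depth(detections, initial_bed_z=0.0):
    """Maximum scour depth as the largest drop below the initial bed."""
    return max(initial_bed_z - d.z for d in detections)

reads = [Detection("A1", 0.2, 0.1, -0.15),
         Detection("B7", 0.3, -0.2, -0.42),
         Detection("C3", 0.1, 0.0, -0.30)]
print(f"max scour depth: {max_scour_depth(reads):.2f} m")  # 0.42 m
```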
Abstract:
It is well established that at ambient and supercooled conditions water can be described as a percolating network of H bonds. This work is aimed at identifying, by neutron diffraction experiments combined with computer simulations, a percolation line in supercritical water, where the extent of the H-bond network is in question. It is found that in real supercritical water, liquidlike states are observed at or above the percolation threshold, while below this threshold gaslike water forms small, sheetlike configurations. Inspection of the three-dimensional arrangement of water molecules suggests that crossing this percolation line is accompanied by a change of symmetry in the first neighboring shell of molecules, from trigonal below the line to tetrahedral above it.
Abstract:
Positron emission tomography (PET) is a functional, noninvasive method for imaging regional metabolic processes that is nowadays most often combined with morphological imaging by computed tomography (CT). Its use is based on the well-founded assumption that metabolic changes occur earlier in tumors than morphologic changes, adding another dimension to imaging. This article reviews the established and investigational indications and radiopharmaceuticals for PET/CT imaging of prostate cancer, bladder cancer, and testicular cancer, before presenting upcoming applications in radiation therapy.
Abstract:
This paper proposes an automatic hand detection system that combines the Fourier-Mellin Transform (FMT) with other computer vision techniques to achieve hand detection in cluttered-scene color images. The proposed system uses the Fourier-Mellin Transform as an invariant feature extractor to perform rotation-, scale-, and translation- (RST-) invariant hand detection. In the first stage of the system, a simple non-adaptive skin-color-based image segmentation and a corner-based interest point detector are used to identify regions of interest that contain possible matches. A sliding-window algorithm then scans the image at different scales, performing the FMT calculations only in the previously detected regions of interest and comparing the extracted Fourier-Mellin descriptor of each window with a hand descriptor database obtained from a training image set. The results of the performed experiments suggest that Fourier-Mellin invariant features are a promising approach for automatic hand detection.
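A minimal sketch of the RST-invariant Fourier-Mellin descriptor construction (FFT magnitude for translation invariance, log-polar resampling to turn rotation and scale into shifts, then a second FFT magnitude); the sampling densities and normalization below are illustrative choices, and the paper's exact pipeline may differ:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(image, n_angles=64, n_radii=64):
    """Resample an image onto a log-polar grid around its center."""
    cy, cx = (np.asarray(image.shape) - 1) / 2.0
    max_r = min(cy, cx)
    radii = np.exp(np.linspace(0, np.log(max_r), n_radii))
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, t = np.meshgrid(radii, angles)
    coords = np.array([cy + r * np.sin(t), cx + r * np.cos(t)])
    return map_coordinates(image, coords, order=1, mode="constant")

def fourier_mellin_descriptor(window):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(window)))  # translation-invariant
    lp = log_polar(np.log1p(spectrum))   # rotation/scale become shifts
    desc = np.abs(np.fft.fft2(lp))       # shift-invariant magnitude
    return desc / (np.linalg.norm(desc) + 1e-9)

window = np.random.rand(128, 128)        # stand-in for a detected window
print(fourier_mellin_descriptor(window).shape)
```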
Abstract:
Computed tomography angiography (CTA) images are the standard for assessing peripheral artery disease (PAD). This paper presents a Computer Aided Detection (CAD) and Computer Aided Measurement (CAM) system for PAD. The CAD stage detects the arterial network using a 3D region growing method and a fast 3D morphology operation. The CAM stage aims to accurately measure artery diameters from the detected vessel centerline, compensating for the partial volume effect using Expectation Maximization (EM) and a Markov random field (MRF). The system has been evaluated on phantom data and applied to fifteen (15) CTA datasets, where the detection accuracy for stenosis was 88% and the measurement error was 8%.
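For illustration, a minimal sketch of 3D region growing over an intensity-thresholded volume; the 6-connectivity, seed placement, and acceptance interval are simplified assumptions, not the paper's exact method:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo, hi):
    """Grow a 6-connected region from `seed`, keeping voxels in [lo, hi]."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and lo <= volume[nz, ny, nx] <= hi):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

vol = np.random.randint(0, 400, size=(32, 32, 32))   # stand-in for HU values
mask = region_grow_3d(vol, seed=(16, 16, 16), lo=150, hi=400)
print(mask.sum(), "voxels grown")
```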