Abstract:
The ENABLE project, which is partly funded by the European Commission, aims to assist elderly people to live well, independently and at ease. In this project a wrist unit with both integrated and external sensors, and with a radio frequency link to a mobile phone, will be developed. ENABLE will provide a number of services for elderly people, among them a remote control service for the home environment. This paper briefly describes the project in general and then focuses on the initial user needs investigation which was carried out in early 2007 in six different European countries. The provisional findings are discussed and an outlook on the ongoing and future project work is given. A special focus of this paper is on the environmental control service.
Abstract:
Conflation of academic copyright issues with respect to books (whether textbooks, research monographs or popularisations) and research articles is rife in the academic publishing industry. A charitable interpretation is that this is because, to publishers, they are all effectively the same: a product produced for commercial benefit. An uncharitable interpretation is that this is a classic Fear, Uncertainty and Doubt approach, in an attempt to delay the inevitable move to Open Access (OA) to research articles. To authors, however, research articles and books are generally very different things. Research articles are produced without the expectation of direct financial return, whereas books generally include some consideration of financial return. Taylor’s “Copyright and research: an academic publisher’s perspective” (SCRIPT-ed 4:2) falls wholesale into this mental trap, and in particular his lauding of the position paper of the Association of American Professional and Scholarly Publishers shows a lack of understanding of the continuing huge loss to scholarship from the lack of OA to research articles. It should be regarded as a categorical imperative for scholars to embrace OA to research articles.
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
Abstract:
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. 
This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
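The behaviour described above is easy to reproduce numerically. The sketch below is not from the paper: the standard ETS (Gilbert skill score) formula on a 2x2 contingency table is assumed, and the trial counts and base rate are illustrative. It estimates by Monte Carlo the expected ETS of an unbiased random forecaster, which is positive for small samples and shrinks towards zero as the sample grows:

```python
import random

def ets(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score (Gilbert skill score) from a 2x2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0

def expected_ets(n_samples, base_rate, trials=2000, seed=1):
    """Mean ETS of an unbiased random forecaster, estimated by Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = b = c = d = 0
        for _ in range(n_samples):
            obs = rng.random() < base_rate    # event occurs
            fcst = rng.random() < base_rate   # random, unbiased forecast
            if fcst and obs:
                a += 1                        # hit
            elif fcst:
                b += 1                        # false alarm
            elif obs:
                c += 1                        # miss
            else:
                d += 1                        # correct negative
        total += ets(a, c, b, d)
    return total / trials

print(expected_ets(10, 0.3))    # clearly positive for small samples
print(expected_ets(500, 0.3))   # approaches zero as the sample grows
```

A perfect forecast gives ETS = 1 and a forecast of all correct negatives gives 0, so the positive expected score of a purely random system for small samples is exactly the inequitability discussed above.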
Abstract:
This paper assesses the potential for using building integrated photovoltaic (BIPV) roof shingles made from triple-junction amorphous silicon (3a-Si) for electrification and as a roofing material in tropical locations such as Accra, Ghana. A model roof was constructed using triple-junction amorphous silicon (3a-Si) PV on one section and conventional roofing tiles on the other. The performance of the PV module and tiles was measured over a range of ambient temperatures and solar irradiances. PVSyst (a computer design software package) was used to determine the most appropriate angle of tilt. It was observed that 3a-Si performs well in conditions such as Accra's, because it is insensitive to high temperatures. Building integration gives security benefits, and reduces construction costs and embodied energy, compared to freestanding PV systems. It also serves as a means of protection from ocean salt spray and works well even when shaded. However, compared to conventional roofing materials, 3a-Si would increase the indoor temperature by 1-2 °C depending on the surface area of the roof covered with the PV modules. The results presented in this research enhance the understanding of the varying factors involved in the selection of an appropriate method of PV installation to offset the shortfalls of conventional roofing materials in Ghana.
Abstract:
This paper addresses the nature and cause of Specific Language Impairment (SLI) by reviewing recent research in sentence processing of children with SLI compared to typically developing (TD) children and research in infant speech perception. These studies have revealed that children with SLI are sensitive to syntactic, semantic, and real-world information, but do not show sensitivity to grammatical morphemes with low phonetic saliency, and they show longer reaction times than age-matched controls. TD children from the age of 4 show trace reactivation, but some children with SLI fail to show this effect, which resembles the pattern of adults and TD children with low working memory. Finally, findings from the German Language Development (GLAD) Project have revealed that a group of children at risk for SLI had a history of an auditory delay and impaired processing of prosodic information in the first months of their life, which is not detectable later in life. Although this is a single project that needs to be replicated with a larger group of children, it provides preliminary support for accounts of SLI which make an explicit link between an early deficit in the processing of phonology and later language deficits, and the Computational Complexity Hypothesis that argues that the language deficit in children with SLI lies in difficulties integrating different types of information at the interfaces.
Abstract:
A new autonomous ship collision-free (ASCF) trajectory navigation and control system has been introduced, with a new recursive navigation algorithm based on analytic geometry and convex set theory for collision-free ship guidance. The underlying assumption is that the geometric information of the ship's environment is available in the form of a polygon-shaped free space, which may easily be generated from a 2D image or plots relating to physical hazards or other constraints such as collision avoidance regulations. The navigation command is given as a heading command sequence based on generating a waypoint which falls within a small neighborhood of the current position, and the sequence of waypoints along the trajectory is guaranteed to lie within a bounded obstacle-free region using convex set theory. A neurofuzzy network predictor, which in practice uses only observed input/output data generated by on-board sensors or external sensors (or a sensor fusion algorithm), based on using rudder deflection angle to control the ship heading angle, is utilised in the simulation of an ESSO 190000 dwt tanker model to demonstrate the effectiveness of the system.
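The key convex-set property used above is that a point inside a convex free region is separated from every obstacle by the region's edges. A minimal sketch of that membership test follows; it is an illustration of the geometric idea, not the authors' implementation, and the example free region is a made-up convex polygon:

```python
def inside_convex(polygon, point):
    """True if `point` lies inside (or on the boundary of) a convex polygon
    whose vertices are listed in counter-clockwise order."""
    px, py = point
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Cross product of the edge vector with the vector to the point;
        # a negative value puts the point to the right of the edge, i.e. outside.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# A candidate waypoint is accepted only if it stays within the free region.
free_region = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
print(inside_convex(free_region, (4.0, 3.0)))   # candidate waypoint inside
print(inside_convex(free_region, (12.0, 3.0)))  # rejected: outside free space
```

In a scheme like the one described, each proposed waypoint near the current position would be screened with such a test before being issued as a heading command.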
Abstract:
Within the context of active vision, scant attention has been paid to the execution of motion saccades—rapid re-adjustments of the direction of gaze to attend to moving objects. In this paper we first develop a methodology for, and give real-time demonstrations of, the use of motion detection and segmentation processes to initiate capture saccades towards a moving object. The saccade is driven by both position and velocity of the moving target under the assumption of constant target velocity, using prediction to overcome the delay introduced by visual processing. We next demonstrate the use of a first order approximation to the segmented motion field to compute bounds on the time-to-contact in the presence of looming motion. If the bound falls below a safe limit, a panic saccade is fired, moving the camera away from the approaching object. We then describe the use of image motion to realize smooth pursuit, tracking using velocity information alone, where the camera is moved so as to null a single constant image motion fitted within a central image region. Finally, we glue together capture saccades with smooth pursuit, thus effecting changes in both what is being attended to and how it is being attended to. To couple the different visual activities of waiting, saccading, pursuing and panicking, we use a finite state machine which provides inherent robustness outside of visual processing and provides a means of making repeated exploration. We demonstrate in repeated trials that the transition from saccadic motion to tracking is more likely to succeed using position and velocity control, than when using position alone.
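The panic-saccade trigger described above rests on a classical relation: for pure translation towards a frontoparallel surface the image motion field is v = (x/tau, y/tau), so its divergence is 2/tau and the time-to-contact is tau = 2/div. A minimal sketch of that threshold test follows; the `safe_limit` value is an illustrative assumption, not a figure from the paper:

```python
def time_to_contact(div):
    """Time to contact from the divergence of the image motion field.

    For pure translation towards a frontoparallel surface, v = (x/tau, y/tau),
    hence div = dvx/dx + dvy/dy = 2/tau and tau = 2/div.
    """
    if div <= 0:
        return float('inf')  # receding or stationary: no impending contact
    return 2.0 / div

def panic_needed(div, safe_limit=1.5):
    """Fire a panic saccade when the time-to-contact bound falls below safe_limit (s)."""
    return time_to_contact(div) < safe_limit

print(panic_needed(2.0))   # tau = 1.0 s: below the safe limit, panic
print(panic_needed(0.5))   # tau = 4.0 s: no panic
```

In the system described, `div` would come from the first-order approximation fitted to the segmented motion field, and a True result would trigger the state-machine transition into the panic saccade.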
Abstract:
Svalgaard and Cliver (2010) recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development because, individually, each has uncertainties introduced by instrument calibration drifts, limited numbers of observatories, and the strength of the correlations employed. However, taken collectively, a consistent picture is emerging. We here show that this consensus extends to more data sets and methods than reported by Svalgaard and Cliver, including that used by Lockwood et al. (1999), when their algorithm is used to predict the heliospheric field rather than the open solar flux. One area where there is still some debate relates to the existence and meaning of a floor value to the heliospheric field. From cosmogenic isotope abundances, Steinhilber et al. (2010) have recently deduced that the near-Earth IMF at the end of the Maunder minimum was 1.80 ± 0.59 nT, which is considerably lower than the revised floor of 4 nT proposed by Svalgaard and Cliver. We here combine cosmogenic and geomagnetic reconstructions and modern observations (with allowance for the effect of solar wind speed and structure on the near-Earth data) to derive an estimate for the open solar flux of (0.48 ± 0.29) × 10^14 Wb at the end of the Maunder minimum. By way of comparison, the largest and smallest annual means recorded by instruments in space between 1965 and 2010 are 5.75 × 10^14 Wb and 1.37 × 10^14 Wb, respectively, set in 1982 and 2009, and the maximum of the 11-year running means was 4.38 × 10^14 Wb in 1986. Hence the average open solar flux during the Maunder minimum is found to have been 11% of its peak value during the recent grand solar maximum.
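The closing 11% figure follows directly from the quoted flux values. A one-line arithmetic check, using the abstract's numbers verbatim:

```python
# Figures quoted in the abstract.
maunder_flux = 0.48e14  # Wb, open solar flux at the end of the Maunder minimum
peak_flux = 4.38e14     # Wb, maximum 11-year running mean of open solar flux (1986)

ratio = maunder_flux / peak_flux
print(f"{ratio:.0%}")   # -> 11%
```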