934 results for Running-based anaerobic sprint test
Abstract:
This paper presents an alternative approach to image segmentation that uses the spatial distribution of edge pixels rather than pixel intensities. The segmentation is achieved by a multi-layered approach and is intended to find suitable landing areas for an aircraft emergency landing. We combine standard techniques (edge detectors) with newly developed algorithms (line expansion and geometry test) to design an original segmentation algorithm. Our approach removes the dependency on environmental factors that traditionally influence lighting conditions, which in turn have a negative impact on pixel-based segmentation techniques. We present test outcomes on realistic visual data collected from an aircraft, reporting preliminary results on detection performance, and demonstrate a consistent detection rate of over 97%.
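As an illustration of the general idea (segmenting on the spatial density of edge pixels rather than on intensities), the following is a minimal sketch assuming OpenCV is available; the window size and threshold are illustrative, and this is not the paper's line-expansion or geometry-test algorithm.

```python
import cv2
import numpy as np

def candidate_landing_regions(gray, win=64, max_edge_density=0.02):
    """Flag low-edge-density windows as candidate landing areas.

    Edge pixels, not raw intensities, drive the decision, so the result is
    less sensitive to lighting than an intensity threshold would be.
    """
    edges = cv2.Canny(gray, 50, 150)           # binary edge map
    h, w = edges.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            density = np.count_nonzero(edges[y:y+win, x:x+win]) / float(win * win)
            if density <= max_edge_density:    # few edges -> flat, uncluttered area
                mask[y:y+win, x:x+win] = True
    return mask

# Usage: mask = candidate_landing_regions(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```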
Abstract:
To ensure better concrete quality and long-term durability, there has been an increasing focus in recent years on the development of test methods for quality control of concrete. This paper presents a study to evaluate the effect of water-accessible porosity and oven-dry unit weight on the resistance of concrete to chloride-ion penetration. Based on the experimental results and regression analyses, empirical relationships of the charge passed (ASTM C 1202) and chloride migration coefficient (NT Build 492) versus the water-accessible porosity and oven-dry unit weight of the concrete are established. Using the water-accessible porosity and oven-dry unit weight, two basic physical properties that can be easily determined, the total charge passed and the migration coefficient of the concrete can be estimated for quality control and for estimating the durability of concrete.
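A minimal sketch of how such an empirical relationship can be fitted by regression; the measurements below are hypothetical, and the coefficients and functional form reported in the paper come from its own experimental data, not from this example.

```python
import numpy as np

# Hypothetical measurements: water-accessible porosity (%), oven-dry unit
# weight (kg/m^3), and total charge passed (coulombs, ASTM C 1202).
porosity    = np.array([10.5, 12.0, 13.8, 15.2, 16.9])
unit_weight = np.array([2350, 2310, 2280, 2240, 2200])
charge      = np.array([1800, 2400, 3100, 3900, 4700])

# Fit charge = a*porosity + b*unit_weight + c by ordinary least squares.
X = np.column_stack([porosity, unit_weight, np.ones_like(porosity)])
coeffs, *_ = np.linalg.lstsq(X, charge, rcond=None)
a, b, c = coeffs

# Estimate the charge passed for a new mix from its basic physical properties.
print(a * 14.0 + b * 2260 + c)
```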
Abstract:
In this paper we use the SeqSLAM algorithm to address the question of how little visual information, and of what quality, is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1- to 512-pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road by using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
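A minimal sketch of the sequence-matching idea SeqSLAM builds on: heavily downsampled, contrast-normalized thumbnails compared by mean absolute difference and accumulated over a short aligned sequence. The thumbnail size and matching scheme here are simplified illustrations, not the SeqSLAM implementation.

```python
import numpy as np

def preprocess(img, size=(32, 16)):
    """Downsample an image to a tiny, contrast-normalized thumbnail."""
    h, w = img.shape
    ys = np.linspace(0, h, size[1] + 1, dtype=int)
    xs = np.linspace(0, w, size[0] + 1, dtype=int)
    small = np.array([[img[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
                       for j in range(size[0])] for i in range(size[1])])
    return (small - small.mean()) / (small.std() + 1e-6)

def sequence_match(query_seq, ref_seq, ds=10):
    """Score each reference index by accumulating frame differences over a
    short aligned sequence; longer ds tends to give more reliable matches."""
    scores = []
    for r in range(len(ref_seq) - ds):
        d = sum(np.abs(preprocess(q) - preprocess(ref_seq[r + k])).mean()
                for k, q in enumerate(query_seq[:ds]))
        scores.append(d)
    return int(np.argmin(scores))   # best-matching start index along the reference route
```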
Abstract:
In recent years face recognition systems have been applied in various useful applications, such as surveillance, access control, criminal investigations, law enforcement, and others. However, face biometric systems can be highly vulnerable to spoofing attacks, where an impostor tries to bypass the face recognition system using a photo or video sequence. In this paper, a novel liveness detection method based on the 3D structure of the face is proposed. By processing the 3D curvature of the acquired data, the proposed approach allows a biometric system to distinguish a real face from a photo, increasing the overall performance of the system and reducing its vulnerability. In order to test the real capability of the methodology, a 3D face database simulating spoofing attacks was collected, using photographs instead of real faces. The experimental results show the effectiveness of the proposed approach.
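To illustrate why 3D structure discriminates a real face from a printed photo, here is a hedged sketch that uses a plane-fit residual as a simple flatness proxy; the paper's method analyses 3D curvature, and the threshold below is hypothetical.

```python
import numpy as np

def flatness_score(points):
    """RMS distance of 3D face points (N x 3) from their best-fit plane.

    A photograph held in front of the sensor is nearly planar (score ~ 0),
    whereas a real face has pronounced depth structure (nose, eye sockets).
    """
    pts = points - points.mean(axis=0)
    # The smallest right-singular vector of the centred cloud is the plane normal.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    normal = vt[-1]
    dist = pts @ normal
    return float(np.sqrt((dist ** 2).mean()))

def is_live(points, threshold_mm=5.0):
    """Hypothetical decision rule: reject acquisitions that are too flat."""
    return flatness_score(points) > threshold_mm
```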
Abstract:
OBJECTIVE: The objective of this study was to describe the distribution of conjunctival ultraviolet autofluorescence (UVAF) in an adult population. METHODS: We conducted a cross-sectional, population-based study in the genetic isolate of Norfolk Island, South Pacific Ocean. In all, 641 people, aged 15 to 89 years, were recruited. UVAF and standard (control) photographs were taken of the nasal and temporal interpalpebral regions bilaterally. Differences between the groups for non-normally distributed continuous variables were assessed using the Wilcoxon-Mann-Whitney rank-sum test. Trends across categories were assessed using Cuzick's non-parametric test for trend or Kendall's rank correlation τ. RESULTS: Conjunctival UVAF is a non-normally distributed trait with a positively skewed distribution. The median amount of conjunctival UVAF per person (sum of four measurements: right nasal/temporal and left nasal/temporal) was 28.2 mm² (interquartile range 14.5-48.2). There was an inverse, linear relationship between UVAF and advancing age (P<0.001). Males had a higher sum of UVAF than females (34.4 mm² vs 23.2 mm², P<0.0001). There were no statistically significant differences in area of UVAF between right and left eyes or between nasal and temporal regions. CONCLUSION: We have provided the first quantifiable estimates of conjunctival UVAF in an adult population. Further data are required to provide information about the natural history of UVAF and to characterise other potential disease associations with UVAF. UVR-protective strategies should be emphasised at an early age to prevent the long-term adverse effects on health associated with excess UVR.
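A brief sketch of the two non-parametric analyses named in the methods, the Wilcoxon-Mann-Whitney rank-sum comparison between sexes and Kendall's τ for the age trend, assuming scipy and using simulated data; Cuzick's trend test is omitted as it has no standard scipy implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-person UVAF areas (mm^2) and ages.
uvaf_male   = rng.gamma(shape=2.0, scale=17.0, size=300)
uvaf_female = rng.gamma(shape=2.0, scale=12.0, size=300)
uvaf_all    = np.concatenate([uvaf_male, uvaf_female])
age         = rng.integers(15, 90, size=600)

# Wilcoxon-Mann-Whitney rank-sum test between sexes.
u_stat, p_sex = stats.mannwhitneyu(uvaf_male, uvaf_female, alternative="two-sided")

# Kendall's rank correlation tau for the trend of UVAF with age.
tau, p_trend = stats.kendalltau(age, uvaf_all)
print(p_sex, tau, p_trend)
```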
Abstract:
Geographical market expansion is included in various definitions of entrepreneurship as it entails the opening up of new markets (for example, Davidsson 2003). Expansion into new international markets and the launch of new products in international markets are also consistent with definitions of entrepreneurship which center on the pursuit of opportunities (e.g., Stevenson, 1983; Gartner, 1993). Accordingly, the decision by managers of small, internationally active businesses to continue to internationalize can be viewed as an entrepreneurial act. Although both start-ups and existing firms can behave entrepreneurially by expanding into new international markets, the attention of entrepreneurship researchers interested in international activities has largely focused on international new ventures (INVs), that is, business organizations that internationalize from inception (Oviatt and McDougall 1994; Oviatt and McDougall 1997). Consequently, the pursuit of international opportunities by established small and medium-sized enterprises (SMEs) lacks theoretical understanding and empirical investigation through an entrepreneurship lens. This paper contributes to the body of knowledge at the entrepreneurship-internationalization interface by testing whether Stevenson’s opportunity-based conceptualization of entrepreneurial management (Stevenson 1983; Stevenson and Gumpert 1985; Stevenson and Jarillo 1990) can explain the attainment of continued entrepreneurial outcomes by SMEs operating in foreign markets. We choose Stevenson’s conceptualization because it gauges firm-level characteristics that are theorized to facilitate the pursuit of entrepreneurial opportunities, which arguably is at the heart of SMEs’ continued venturing into international markets.
Abstract:
The inconsistent findings of past board diversity research demand a test of competing linear and curvilinear diversity–performance predictions. This research focuses on board age and gender diversity, and presents a positive linear prediction based on resource dependence theory, a negative linear prediction based on social identity theory, and an inverted U-shaped curvilinear prediction based on the integration of resource dependence theory with social identity theory. The predictions were tested using archival data on 288 large organizations listed on the Australian Securities Exchange, with a 1-year time lag between diversity (age and gender) and performance (employee productivity and return on assets). The results indicate a positive linear relationship between gender diversity and employee productivity, a negative linear relationship between age diversity and return on assets, and an inverted U-shaped curvilinear relationship between age diversity and return on assets. The findings provide additional evidence on the business case for board gender diversity and refine the business case for board age diversity.
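A minimal sketch of how the competing linear and inverted U-shaped predictions can be compared by adding a squared diversity term to a lagged performance regression, assuming statsmodels; the data and effect sizes are simulated for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 288
age_diversity = rng.uniform(0, 1, n)         # board age diversity at year t
roa_next_year = (0.8 * age_diversity - 0.9 * age_diversity**2
                 + rng.normal(0, 0.1, n))    # hypothetical return on assets at t+1

# Linear model vs. curvilinear model with a squared term; a significant
# negative coefficient on the squared term supports the inverted U shape.
X_lin  = sm.add_constant(age_diversity)
X_curv = sm.add_constant(np.column_stack([age_diversity, age_diversity**2]))
print(sm.OLS(roa_next_year, X_lin).fit().summary())
print(sm.OLS(roa_next_year, X_curv).fit().summary())
```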
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and Exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow, and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediate and end points of the inbound process, are found to have an $R^2$ goodness of fit of 0.9994 and 0.9982 respectively over a 10-hour test period. The utility of the framework is demonstrated on a number of usage scenarios, including real-time monitoring and 'what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
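A minimal sketch of the queueing component such a hybrid model relies on: Poisson arrivals and exponential service times at a multi-server stage, with the arrival rate treated as a quantity that an upstream BN node could condition. This is an illustrative discrete-event simulation, not the HQBN implementation, and all parameter values are hypothetical.

```python
import heapq
import random

def simulate_queue(arrival_rate, service_rate, servers, horizon):
    """Single-stage queue with Poisson arrivals and exponential service times
    (the distributional assumptions that let queueing results plug into a BN
    node). Returns the cumulative number of served passengers over time."""
    random.seed(0)
    busy, waiting, done = 0, 0, 0
    events = [(random.expovariate(arrival_rate), "arrival")]
    throughput = []
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if busy < servers:
                busy += 1
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                waiting += 1
        else:  # departure
            done += 1
            if waiting > 0:
                waiting -= 1
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                busy -= 1
        throughput.append((t, done))
    return throughput

# e.g. an immigration counter bank: arrival rate (per minute) set by a BN node, 10 desks.
flow = simulate_queue(arrival_rate=2.0, service_rate=0.25, servers=10, horizon=600)
```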
Abstract:
Incorporating engineering concepts into the middle school curriculum is seen as an effective way to improve students’ problem-solving skills. A selection of findings is reported from a science, technology, engineering and mathematics (STEM)-based unit in which students in the second year (grade 8) of a three-year longitudinal study explored engineering concepts and principles pertaining to the functioning of simple machines. The culminating activity, the focus of this paper, required the students to design, construct, test, and evaluate a trebuchet catapult. We consider findings from one of the schools, a co-educational school, where we traced the design process developments of four student groups from two classes. The students’ descriptions and explanations of the simple machines used in their catapult design are examined, together with how they rated various aspects of their engineering designs. Included in the findings are students’ understanding of how their simple machines were simulated by the resources supplied and how the machines interacted in forming a complex machine. An ability to link physical materials with abstract concepts and an awareness of design constraints on their constructions were apparent, although the students showed a desire to create a “perfect” catapult despite limitations in the physical materials, rather than a prototype for testing concepts. Feedback from teacher interviews added further insights into the students’ developments as well as the teachers’ professional learning. An evolving framework for introducing engineering education in the pre-secondary years is proposed.
Abstract:
The pull-through/local dimpling failure strength of screwed connections is very important in the design of profiled steel cladding systems to help them resist storms and hurricanes. The current American and European provisions recommend four different test methods for screwed connections in tension, but the accuracy of these methods in determining the connection strength is not known. It is unlikely that the four test methods are equivalent in all cases, and thus it is necessary to reduce the number of methods recommended. This paper presents a review of these test methods based on laboratory tests on crest- and valley-fixed claddings and then recommends alternative test methods that reproduce the real behavior of the connections, including the bending and membrane deformations of the cladding around the screw fasteners and the tension load in the fastener.
Abstract:
This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications of multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. Finally, we present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host level.
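A conceptual, user-space sketch of taint-label propagation and per-group policy checking; the actual system is a kernel Linux Security Module, so the object and policy names below are purely hypothetical.

```python
class TaintedObject:
    """A system object (file, socket, IPC channel, ...) carrying taint labels."""
    def __init__(self, name, labels=None):
        self.name = name
        self.labels = set(labels or [])

def read_flow(source, sink, policy):
    """Propagate labels when `sink` reads from `source`, and report a
    violation if the resulting label set is not allowed by the policy."""
    sink.labels |= source.labels
    illegal = sink.labels - policy.get(sink.name, sink.labels)
    if illegal:
        print(f"ALERT: {sink.name} now carries disallowed labels {illegal}")

# Per-group information flow policy: which labels each object may legally hold.
policy = {"webapp_process": {"public"}, "db_file": {"public", "customer_data"}}

db = TaintedObject("db_file", {"customer_data"})
app = TaintedObject("webapp_process", {"public"})
read_flow(db, app, policy)   # customer_data flows into the web app -> alert
```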
Abstract:
The coupling of kurtosis-based indexes and envelope analysis represents one of the most successful and widespread procedures for the diagnostics of incipient faults on rolling element bearings. Kurtosis-based indexes are often used to select the proper demodulation band for the application of envelope-based techniques. Kurtosis itself, in slightly different formulations, is applied for the prognostics and condition monitoring of rolling element bearings, as a standalone tool for a fast indication of the development of faults. This paper shows for the first time the strong analytical connection which holds for these two families of indexes. In particular, analytical identities are shown for the squared envelope spectrum (SES) and the kurtosis of the corresponding band-pass filtered analytic signal. Specifically, it is demonstrated how the sum of the peaks in the SES corresponds to the raw fourth-order moment. The analytical results also show a link with another signal processing technique, cepstrum pre-whitening, recently used in bearing diagnostics. The analytical results are the basis for a discussion of an optimal indicator for the choice of the demodulation band, the ratio of cyclic content (RCC), which endows the kurtosis with selectivity in the cyclic frequency domain and whose performance is compared with more traditional kurtosis-based indicators such as the protrugram. A benchmark, performed on numerical simulations and experimental data coming from two different test rigs, proves the superior effectiveness of such an indicator. Finally, a short introduction to the potential offered by the newly proposed index in the field of prognostics is given in an additional experimental example. In particular, the RCC is tested on experimental data collected on an endurance bearing test rig, showing its ability to follow the development of the damage with a single numerical index.
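A minimal sketch, assuming scipy, of the two quantities the identities relate for a chosen demodulation band: a kurtosis-type indicator of the band-pass filtered analytic signal and the squared envelope spectrum (SES). It only computes the quantities on a simulated signal; it is neither the paper's derivation nor the RCC index.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

def band_quantities(x, fs, band):
    """Return a kurtosis indicator for the band-passed analytic signal and the
    squared envelope spectrum (SES) of the same demodulation band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    xf = filtfilt(b, a, x)                          # band-pass filtered signal
    analytic = hilbert(xf)                          # analytic signal of the band
    env2 = np.abs(analytic) ** 2                    # squared envelope
    ses = np.abs(np.fft.rfft(env2 - env2.mean())) / len(env2)
    k = kurtosis(np.abs(analytic), fisher=False)    # kurtosis-type indicator
    return k, ses

# Hypothetical vibration signal: a fault tone amplitude-modulating a carrier, plus noise.
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = (1 + 0.8 * np.sin(2 * np.pi * 107 * t)) * np.sin(2 * np.pi * 3_000 * t)
x += 0.3 * np.random.default_rng(0).standard_normal(len(t))
k, ses = band_quantities(x, fs, band=(2_500, 3_500))
```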
Abstract:
Delirium is a significant problem for older hospitalized people and is associated with poor outcomes. It is poorly recognized, and evidence suggests that a major reason is lack of education. Nurses who are educated about delirium can play a significant role in improving delirium recognition. This study evaluated the impact of a delirium-specific educational website. A cluster randomized controlled trial, with a pretest/post-test time series design, was conducted to measure delirium knowledge (DK) and delirium recognition (DR) over three time points. Statistically significant differences were found between the intervention and non-intervention groups. The intervention groups' DK scores were higher, and the changes over time were statistically significant [between T3 and T1 (t=3.78, p<0.001) and between T2 and T1 (baseline) (t=5.83, p<0.001)]. Statistically significant improvements were also seen for DR when comparing T2 and T1 results (t=2.56, p=0.011) between both groups, but not for changes in DR scores between T3 and T1 (t=1.80, p=0.074). Participants rated the website highly on the visual, functional and content elements. This study supports the concept that web-based delirium learning is an effective and satisfying method of information delivery for registered nurses. Future research is required to investigate clinical outcomes as a result of this web-based education.
Abstract:
Prolonged intermittent-sprint exercise (i.e., team sports) induces disturbances in skeletal muscle structure and function that are associated with reduced contractile function, a cascade of inflammatory responses, perceptual soreness, and a delayed return to optimal physical performance. In this context, recovery from exercise-induced fatigue is traditionally treated from a peripheral viewpoint, with the regeneration of muscle physiology and other peripheral factors the target of recovery strategies. The direction of this research narrative on post-exercise recovery differs from the increasing emphasis on the complex interaction between both central and peripheral factors regulating exercise intensity during exercise performance. Given the role of the central nervous system (CNS) in motor-unit recruitment during exercise, it too may have an integral role in post-exercise recovery. Indeed, this hypothesis is indirectly supported by an apparent disconnect between the time course of changes in physiological and biochemical markers resulting from exercise and the ensuing recovery of exercise performance. Equally, improvements in perceptual recovery, notwithstanding the physiological state of recovery, may interact with both feed-forward and feedback mechanisms to influence subsequent efforts. Considering the research interest afforded to recovery methodologies designed to hasten the return of homeostasis within the muscle, the limited focus on contributors to post-exercise recovery of CNS origin is somewhat surprising. In this context, the current review aims to outline the potential contributions of the brain to performance recovery after strenuous exercise.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to answer. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems.

One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.

Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into (a) assets, which describe the entities' physical characteristics, and (b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g., capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond (e.g., weather for solar panels), or to describe the assets and their relation to one another (e.g., the network assets).
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, either sequentially or in parallel for faster runs. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g., installation of decentralised generators) or behaviours (e.g., response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
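The MODAM core itself is built on OSGi and Eclipse plugins; the sketch below only illustrates the described separation of assets (physical characteristics) from agents (behaviour) and their composition through a factory, written in Python with hypothetical class names rather than the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    """Physical characteristics only: reusable across different simulations."""
    capacity_kwh: float
    depth_of_discharge: float

class PeakShavingAgent:
    """Behaviour only: decides how the asset is used, given its goal."""
    def __init__(self, asset: BatteryAsset):
        self.asset = asset
        self.stored = 0.0

    def step(self, demand_kw, threshold_kw):
        # Discharge the battery when demand exceeds the threshold, else recharge.
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        if demand_kw > threshold_kw and self.stored > 0:
            self.stored = max(0.0, self.stored - (demand_kw - threshold_kw))
        else:
            self.stored = min(usable, self.stored + 1.0)

def agent_factory(asset_data):
    """Factory composing agents from asset descriptions, as a module manager might."""
    return [PeakShavingAgent(BatteryAsset(**d)) for d in asset_data]

agents = agent_factory([{"capacity_kwh": 13.5, "depth_of_discharge": 0.9}])
```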