939 results for Continuously Stirred Bioreactor
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches apply only to initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioural biometrics, where keystroke dynamics based on free text is used to continuously verify the identity of a user in real time. We improve existing keystroke-dynamics-based verification schemes in four aspects. First, we improve scalability by using a constant number of users, rather than the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behaviour into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set collected from users while they were interacting with their mail-boxes during their daily activities.
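As a concrete illustration of free-text keystroke verification, the sketch below builds a digraph-latency profile for a user and compares samples with a simple mean-absolute-difference distance. The input format and the distance itself are illustrative stand-ins, not the paper's actual data layout or its proposed measure:

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    """Average latency (ms) for each two-character digraph.

    `keystrokes` is a list of (char, press_time_ms) tuples -- a
    hypothetical input format, not the paper's actual data layout."""
    acc = defaultdict(list)
    for (c1, t1), (c2, t2) in zip(keystrokes, keystrokes[1:]):
        acc[c1 + c2].append(t2 - t1)
    return {dg: sum(v) / len(v) for dg, v in acc.items()}

def profile_distance(profile, sample):
    """Mean absolute latency difference over shared digraphs.

    A simple stand-in distance; the paper's proposed measure is
    not reproduced here."""
    shared = profile.keys() & sample.keys()
    if not shared:
        return float("inf")
    return sum(abs(profile[dg] - sample[dg]) for dg in shared) / len(shared)
```

A verification decision would then threshold this distance against distances to a small, constant-size set of other users' profiles, in the spirit of the scalability improvement described above.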
Abstract:
Purpose: Within the context of high global competitiveness, knowledge management (KM) has proven to be one of the major factors contributing to enhanced business outcomes. Furthermore, knowledge sharing (KS) is one of the most critical of all KM activities. From a manufacturing industry perspective, supply chain management (SCM) and product development process (PDP) activities require a high proportion of company resources such as budget and manpower. Therefore, manufacturing companies are striving to strengthen SCM, PDP and KS activities in order to accelerate rates of manufacturing process improvement, ultimately resulting in higher levels of business performance (BP). A theoretical framework, along with a number of hypotheses, is proposed and empirically tested through correlation, factor and path analyses. Design/methodology/approach: A questionnaire survey was administered to a sample of electronic manufacturing companies operating in Taiwan to facilitate testing the proposed relationships. More than 170 respondents from 83 organisations responded to the survey. The study identified top management commitment and employee empowerment, supplier evaluation and selection, and design simplification and modular design as the key business activities that are strongly associated with business performance. Findings: The empirical study supports that the key manufacturing business activities (i.e., SCM, PDP, and KS) are positively associated with BP. The findings also revealed that some specific business activities, such as SCMF1, PDPF2, and KSF1, have the strongest influencing power on particular business outcomes (i.e., BPF1 and BPF2) within the context of electronic manufacturing companies operating in Taiwan. Practical implications: The finding regarding the relationship between SCM and BP identified the essential role of supplier evaluation and selection in improving business competitiveness and long-term performance.
The processes of forming knowledge in companies, such as creation, storage/retrieval, and transfer, do not necessarily lead to enhanced business performance; only the effective application of knowledge, to the right person at the right time, does. Originality/value: Based on this finding, it is recommended that companies involve suppliers in partnerships to continuously improve operations and enhance product design efforts, which would ultimately enhance business performance. Business performance depends more on an employee's ability to turn knowledge into effective action.
Abstract:
Purpose – This chapter examines an episode of pretend play amongst a group of young girls in an elementary school in Australia, highlighting how they interact within the membership categorization device ‘family’ to manage their social and power relationships. Approach – Using conversation analysis and membership categorization analysis, an episode of video-recorded interaction that occurs amongst a group of four young girls is analyzed. Findings – As disputes arise amongst the girls, the mother category is produced as authoritative through authoritative actions by the girl in the category of mother, and displays of subordination on the part of the other children, in the categories of sister, dog and cat. Value of paper – Examining play as a social practice provides insight into the social worlds of children. The analysis shows how the children draw upon and co-construct family-style relationships in a pretend play context, in ways that enable them to build and organize peer interaction. Authority is highlighted as a joint accomplishment that is part of the social and moral order continuously being negotiated by the children. The authority of the mother category is produced and oriented to as a means of managing the disputes within the pretend frame of play.
Abstract:
A satellite-based observation system can continuously or repeatedly generate a user state vector time series that may contain useful information. One typical example is the collection of International GNSS Service (IGS) station daily and weekly combined solutions. Another example is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real-time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After a review of the hypothesis test statistics frequently used in univariate analysis of GNSS state time series, the paper presents a number of T-squared multivariate test statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take the correlation between coordinate components into account, which is neglected in univariate analysis. Numerical analysis was conducted with the multi-year time series of an IGS station to schematically demonstrate the results of multivariate hypothesis testing in comparison with the univariate results. The results demonstrate that, in general, testing for multivariate mean shifts and outliers tends to reject fewer data samples than testing for univariate mean shifts and outliers at the same confidence level. It is noted that neither univariate nor multivariate data analysis methods are intended to replace physical analysis. Instead, they should be treated as complementary statistical methods for a priori or a posteriori investigation. Physical analysis is necessary subsequently to refine and interpret the results.
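The classical one-sample Hotelling T² test illustrates how a multivariate mean-shift test accounts for the correlation between coordinate components; a minimal sketch (using NumPy and SciPy, and not reproducing the paper's own, more elaborate statistics) might look like:

```python
import numpy as np
from scipy import stats

def hotelling_t2_mean_test(X, mu0, alpha=0.05):
    """One-sample Hotelling T^2 test: does the multivariate mean of the
    n x p sample X (e.g. daily north/east/up coordinates) differ from mu0?

    Returns (T2, critical_value, reject)."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)  # sample covariance: retains the
                                 # correlation between components that
                                 # univariate tests neglect
    d = xbar - mu0
    T2 = n * d @ np.linalg.solve(S, d)
    # (n - p) / (p * (n - 1)) * T2 follows an F(p, n - p) distribution
    crit = p * (n - 1) / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
    return T2, crit, T2 > crit
```

Because the full covariance matrix enters the statistic, a coordinated shift across components can be detected (or tolerated) differently than three separate univariate tests would suggest, which is consistent with the rejection behaviour reported above.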
Abstract:
Qualitative research methods are widely accepted in Information Systems and multiple approaches have been successfully used in IS qualitative studies over the years. These approaches include narrative analysis, discourse analysis, grounded theory, case study, ethnography and phenomenological analysis. Guided by critical, interpretive and positivist epistemologies (Myers 1997), qualitative methods are continuously growing in importance in our research community. In this special issue, we adopt Van Maanen's (1979: 520) definition of qualitative research as an umbrella term to cover an “array of interpretive techniques that can describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world”. In the call for papers, we stated that the aim of the special issue was to provide a forum within which we can present and debate the significant number of issues, results and questions arising from the pluralistic approach to qualitative research in Information Systems. We recognise both the potential and the challenges that qualitative approaches offer for accessing the different layers and dimensions of a complex and constructed social reality (Orlikowski, 1993). The special issue is also a response to the need to showcase the current state of the art in IS qualitative research and highlight advances and issues encountered in the process of continuous learning that includes questions about its ontology, epistemological tenets, theoretical contributions and practical applications.
Abstract:
Atmospheric ultrafine particles play an important role in affecting human health, altering climate and degrading visibility. Numerous studies have been conducted to better understand the formation process of these particles, including field measurements, laboratory chamber studies and mathematical modeling approaches. Field studies on new particle formation found that formation processes were significantly affected by atmospheric conditions, such as the availability of particle precursors and meteorological conditions. However, those studies were mainly carried out in rural areas of the northern hemisphere and information on new particle formation in urban areas, especially those in subtropical regions, is limited. In general, subtropical regions display a higher level of solar radiation, along with stronger photochemical reactivity, than those regions investigated in previous studies. However, based on the results of these studies, the mechanisms involved in the new particle formation process remain unclear, particularly in the Southern Hemisphere. Therefore, in order to fill this gap in knowledge, a new particle formation study was conducted in a subtropical urban area in the Southern Hemisphere during 2009, which measured particle size distribution in different locations in Brisbane, Australia. Characterisation of nucleation events was conducted at the campus building of the Queensland University of Technology (QUT), located in an urban area of Brisbane. Overall, the annual average number concentrations of ultrafine, Aitken and nucleation mode particles were found to be 9.3 × 10³, 3.7 × 10³ and 5.6 × 10³ cm⁻³, respectively. This was comparable to levels measured in urban areas of northern Europe, but lower than those from polluted urban areas such as the Yangtze River Delta, China and Huelva and Santa Cruz de Tenerife, Spain.
Average particle number concentration (PNC) in the Brisbane region did not show significant seasonal variation, however a relatively large variation was observed during the warmer season. Diurnal variation of Aitken and nucleation mode particles displayed different patterns, which suggested that direct vehicle exhaust emissions were a major contributor of Aitken mode particles, while nucleation mode particles originated from vehicle exhaust emissions in the morning and photochemical production at around noon. A total of 65 nucleation events were observed during 2009, in which 40 events were classified as nucleation growth events and the remainder were nucleation burst events. An interesting observation in this study was that all nucleation growth events were associated with vehicle exhaust emission plumes, while the nucleation burst events were associated with industrial emission plumes from an industrial area. The average particle growth rate for nucleation events was found to be 4.6 nm hr⁻¹ (ranging from 1.79–7.78 nm hr⁻¹), which is comparable to other urban studies conducted in the United States, while monthly particle growth rates were found to be positively related to monthly solar radiation (r = 0.76, p < 0.05). The particle growth rate values reported in this work are the first of their kind to be reported for the subtropical urban area of Australia. Furthermore, the influence of nucleation events on PNC within the urban airshed was also investigated. PNC was simultaneously measured at urban (QUT), roadside (Woolloongabba) and semi-urban (Rocklea) sites in Brisbane during 2009. Total PNC at these sites was found to be significantly affected by regional nucleation events. The relative fractions of PNC to total daily PNC observed at QUT, Woolloongabba and Rocklea were found to be 12%, 9% and 14%, respectively, during regional nucleation events.
These values were higher than those observed as a result of vehicle exhaust emissions during weekday mornings, which ranged from 5.1–5.5% at QUT and Woolloongabba. In addition, PNC in the semi-urban area of Rocklea increased by a factor of 15.4 when it was upwind from urban pollution sources under the influence of nucleation burst events. Finally, we investigated the influence of sulfuric acid on new particle formation in the study region. A H2SO4 proxy was calculated by using [SO2], solar radiation and particle condensation sink data to represent the new particle production strength for the urban, roadside and semi-urban areas of Brisbane during the period June–July 2009. The temporal variations of the H2SO4 proxies and the nucleation mode particle concentration were found to be in phase during nucleation events in the urban and roadside areas. In contrast, the peak of proxy concentration occurred 1–2 hr prior to the observed peak in nucleation mode particle concentration at the downwind semi-urban area of Brisbane. A moderate to strong linear relationship was found between the proxy and the freshly formed particles, with r² values of 0.26–0.77 during the nucleation events. In addition, the log[H2SO4 proxy] required to produce new particles was found to be ~1.0 ppb W m⁻² s and below 0.5 ppb W m⁻² s for the urban and semi-urban areas, respectively. The particle growth rates were similar during nucleation events at the three study locations, with an average value of 2.7 ± 0.5 nm hr⁻¹. This result suggested that a similar nucleation mechanism dominated in the study region, which was strongly related to sulphuric acid concentration, however the relationship between the proxy and PNC was poor in the semi-urban area of Rocklea. This can be explained by the fact that the nucleation process was initiated upwind of the site and the resultant particles were transported via the wind to Rocklea.
This explanation is also supported by the higher geometric mean diameter value observed for particles during the nucleation event and the time lag relationship between the H2SO4 proxy and PNC observed at Rocklea. In summary, particle size distribution was continuously measured in a subtropical urban area of the Southern Hemisphere during 2009, the findings from which formed the first particle size distribution dataset in the study region. The characteristics of nucleation events in the Brisbane region were quantified and the properties of the nucleation growth and burst events are discussed in detail using a case-study approach. To further investigate the influence of nucleation events on PNC in the study region, PNC was simultaneously measured at three locations to examine the spatial variation of PNC during the regional nucleation events. In addition, the impact of upwind urban pollution on the downwind semi-urban area was quantified during these nucleation events. Sulphuric acid was found to be an important factor influencing new particle formation in the urban and roadside areas of the study region, however, a direct relationship with nucleation events at the semi-urban site was not observed. This study provided an overview of new particle formation in the Brisbane region, and its influence on PNC in the surrounding area. The findings of this work are the first of their kind for an urban area in the Southern Hemisphere.
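A commonly used sulphuric-acid proxy takes the form [SO2] × global radiation / condensation sink, which matches the units quoted above (ppb W m⁻² s). A minimal sketch under that assumption, with the study's exact formulation and any scaling coefficient not reproduced:

```python
def h2so4_proxy(so2_ppb, radiation_w_m2, cs_per_s):
    """Sulphuric-acid proxy of the assumed form [SO2] * GlobRad / CS.

    With [SO2] in ppb, global radiation in W m^-2 and the particle
    condensation sink in s^-1, the proxy carries units of ppb W m^-2 s,
    matching the units quoted in the abstract. Any coefficient used in
    the study itself is omitted here."""
    if cs_per_s <= 0:
        raise ValueError("condensation sink must be positive")
    return so2_ppb * radiation_w_m2 / cs_per_s
```

Comparing the diurnal curve of this proxy against nucleation mode particle counts is what reveals the in-phase behaviour at the urban sites and the 1-2 hr lag at the downwind semi-urban site.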
Abstract:
The 2011 floods in Southeast Queensland had a devastating impact on many sectors including transport. Road and rail systems across all flooded areas of Queensland were severely affected and significant economic losses occurred as a result of roadway and railway closures. Travellers were compelled to take alternative routes because of road closures or deteriorated traffic conditions on their regular route. Extreme changes in traffic volume can occur under such scenarios which disrupts the network re-equilibrium and re-stabilisation in the recovery phase as travellers continuously adjust their travel options. This study explores how travellers respond to such a major network disruption. A comprehensive study was undertaken focusing on how bus riders reacted to the floods in Southeast Queensland by comparing the ridership patterns before, during and after the floods. The study outcomes revealed the evolving reactions of transit users to direct and indirect impacts of a natural disaster. A good understanding of this process is crucial for developing appropriate strategies to encourage modal shift of automobile users to public transit and also for modelling of travel behaviours during and after a major network disruption caused by natural disasters.
Abstract:
The work presented in this thesis investigates the mathematical modelling of charge transport in electrolyte solutions, within the nanoporous structures of electrochemical devices. We compare two approaches found in the literature, by developing one-dimensional transport models based on the Nernst-Planck and Maxwell-Stefan equations. The development of the Nernst-Planck equations relies on the assumption that the solution is infinitely dilute. However, this is typically not the case for the electrolyte solutions found within electrochemical devices. Furthermore, ionic concentrations much higher than those of the bulk concentrations can be obtained near the electrode/electrolyte interfaces due to the development of an electric double layer. Hence, multicomponent interactions which are neglected by the Nernst-Planck equations may become important. The Maxwell-Stefan equations account for these multicomponent interactions, and thus they should provide a more accurate representation of transport in electrolyte solutions. To allow for the effects of the electric double layer in both the Nernst-Planck and Maxwell-Stefan equations, we do not assume local electroneutrality in the solution. Instead, we model the electrostatic potential as a continuously varying function, by way of Poisson’s equation. Importantly, we show that for a ternary electrolyte solution at high interfacial concentrations, the Maxwell-Stefan equations predict behaviour that is not recovered from the Nernst-Planck equations. The main difficulty in the application of the Maxwell-Stefan equations to charge transport in electrolyte solutions is knowledge of the transport parameters. In this work, we apply molecular dynamics simulations to obtain the required diffusivities, and thus we are able to incorporate microscopic behaviour into a continuum scale model.
This is important due to the small size scales we are concerned with, as we are still able to retain the computational efficiency of continuum modelling. This approach provides an avenue by which the microscopic behaviour may ultimately be incorporated into a full device-scale model. The one-dimensional Maxwell-Stefan model is extended to two dimensions, representing an important first step for developing a fully-coupled interfacial charge transport model for electrochemical devices. It allows us to begin investigation into ambipolar diffusion effects, where the motion of the ions in the electrolyte is affected by the transport of electrons in the electrode. As we do not consider modelling in the solid phase in this work, this is simulated by applying a time-varying potential to one interface of our two-dimensional computational domain, thus allowing a flow field to develop in the electrolyte. Our model facilitates the observation of the transport of ions near the electrode/electrolyte interface. For the simulations considered in this work, we show that while there is some motion in the direction parallel to the interface, the interfacial coupling is not sufficient for the ions in solution to be "dragged" along the interface for long distances.
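For a single species, the dilute-solution Nernst-Planck flux combines diffusion and electromigration, N = -D(dc/dx + z c (F/RT) dφ/dx). A finite-difference sketch of this expression follows; the grid and differencing choices are illustrative, and the cross-species coupling terms that distinguish the Maxwell-Stefan equations are deliberately absent:

```python
import numpy as np

F = 96485.33212    # Faraday constant, C mol^-1
R = 8.314462618    # molar gas constant, J mol^-1 K^-1

def nernst_planck_flux(c, phi, D, z, dx, T=298.15):
    """1D Nernst-Planck flux of a single ionic species on a uniform grid.

    N = -D * (dc/dx + z * c * (F/(R*T)) * dphi/dx): a diffusion term plus
    an electromigration term. Cross-species interactions (the
    Maxwell-Stefan contribution) are neglected, per the dilute-solution
    assumption the thesis contrasts against."""
    dcdx = np.gradient(c, dx)        # centred differences in the interior
    dphidx = np.gradient(phi, dx)
    return -D * (dcdx + z * c * (F / (R * T)) * dphidx)
```

In a full Poisson-Nernst-Planck model, the potential φ fed into this flux would itself be solved from Poisson's equation with the local (not electroneutral) charge density, as described above.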
Abstract:
Universities are increasingly challenged by the emerging global higher education market, facilitated by advances in Information and Communication Technologies (ICT). This requires them to reconsider their mission and direction in order to function effectively and efficiently, and to be responsive to changes in their environment. In the face of increasing demands and competitive pressures, universities, like other companies, seek to continuously innovate and improve their performance. Universities are considering co-operating or sharing, both internally and externally, in a wide range of areas to achieve cost effectiveness and improvements in performance. Shared services are an effective model for re-organizing to reduce costs, increase quality and create new capabilities. Shared services are not limited to the Higher Education (HE) sector. Organizations across different sectors are adopting shared services, in particular for support functions such as Finance, Accounting, Human Resources and Information Technology. While shared services have been around for more than three decades, commencing in the 1970s in the banking sector and then being adopted by other sectors, it remains an under-researched domain, with little consensus on even the most fundamental issues, such as defining what shared services are. Moreover, the interest in shared services within Higher Education is a global phenomenon. This study on shared services is situated within the Higher Education sector of Malaysia, and originated as an outcome of a national project (2005–2007) conducted by the Ministry of Higher Education (MOHE) entitled "Knowledge, Information Communication Technology Strategic Plan (KICTSP) for Malaysian Public Higher Education", where progress towards more collaboration via shared services was a key recommendation.
The study’s primary objective was to understand the nature and potential of ICT shared services, in particular in the Malaysian HE sector, by laying a foundation in terms of definition, typologies and research agenda, and by deriving theoretically based conceptualisations of the potential benefits of shared services, success factors, and issues of pursuing shared services. The study embarked on this objective with a literature review and a pilot case study as a means to further define the context of the study, given the current under-researched status of ICT shared services and of shared services in Higher Education. This context-definition phase illustrated a range of unaddressed issues, including a lack of common understanding of what shared services are, how they are formed, what objectives they fulfil, and who is involved. The study thus embarked on a further investigation of a more foundational nature, with an exploratory phase that aimed to address these gaps, in which a detailed archival analysis of shared services literature within the IS context was conducted to better understand shared services from an IS perspective. The IS literature on shared services was analysed in depth to report on the current status of shared services research in the IS domain; in particular, definitions, objectives, stakeholders, the notion of sharing, theories used, and research methods applied were analysed, which provided a firmer base for this study’s design. The study also conducted a detailed content analysis of 36 cases (globally) of shared services implementations in the HE sector to better understand how shared services are structured within the HE sector and what is being shared. The results of the context-definition and exploratory phases formed a firm basis for the multiple case studies phase, which was designed to address the primary goals of this study (as presented above).
Three case sites within the Malaysian HE sector were included in this analysis, resulting in empirically supported theoretical conceptualizations of shared services success factors, issues and benefits. A range of contributions are made through this study. First, the detailed archival analysis of shared services in Information Systems (IS) demonstrated the dearth of research on shared services within Information Systems. While the existing literature was synthesised to contribute towards an improved understanding of shared services in the IS domain, the areas that are as yet under-developed and require further exploration are identified and presented as a proposed research agenda for the field. This study also provides theoretical considerations and methodological guidelines to support the research agenda and to enable better empirical research in this domain. A number of literature-based a priori frameworks (i.e. on the forms of sharing, shared services stakeholders, etc.) are derived in this phase, contributing to practice and research with early conceptualisations of critical aspects of shared services. Furthermore, the comprehensive archival analysis design presented and executed here is an exemplary approach: a systematic, pre-defined and tool-supported method to extract, analyse and report literature, documented as guidelines that can be applied to other similar literature analyses, with particular attention to supporting novice researchers. Second, the content analysis of 36 shared services initiatives in the Higher Education sector presented eight different types of structural arrangements for shared services, as observed in practice, and the salient dimensions along which those types can be usefully differentiated. Each of the eight structural arrangement types is defined and demonstrated through case examples, with further descriptive details and insights into what is shared and how the sharing occurs.
This typology, grounded in secondary empirical evidence, can serve as a useful analytical tool for researchers investigating the shared services phenomenon further, and for practitioners considering the introduction or further development of shared services. Finally, the multiple case studies conducted in the Malaysian Higher Education sector provided a further empirical basis to instantiate the conceptual frameworks and typology derived from the prior phases, and developed empirically supported: (i) a framework of issues and challenges, (ii) a preliminary theory of shared services success, and (iii) a benefits framework, for shared services in the Higher Education sector.
Abstract:
Proper functioning of Insulated Rail Joints (IRJs) is essential for the safe operation of railway signalling systems and broken-rail identification circuitry. The Conventional IRJ (CIRJ) resembles a structural butt joint consisting of two pieces of rail connected through two joint bars on either side of their web, with the assembly held together by pre-tensioned bolts. As IRJs must maintain electrical insulation between the two rails, a gap between the rail ends must be retained at all times and all metal contacting surfaces must be electrically isolated from each other using non-conductive material. At the gap, the rail ends lose longitudinal continuity, and hence the vertical sections of the rail ends are often severely damaged by the passage of wheels, especially at the railhead, compared to continuously welded rail sections. Fundamentally, the severe damage can be attributed to the singularities of the wheel-rail contact pressure and the railhead stress. No new-generation design that has emerged in the market to date has focussed on this fundamental cause; they have only paid attention to higher-strength materials or to the thickness of the sections of the various IRJ components. In this thesis a novel method of shape optimisation of the railhead is developed to eliminate the pressure and stress singularities by changing the original sharp-cornered railhead into an arc profile in the longitudinal direction. The optimal shape of the longitudinal railhead profile has been determined using three non-gradient methods, chosen for accuracy and efficiency: (1) the Grid Search Method; (2) the Genetic Algorithm Method; and (3) the Hybrid Genetic Algorithm Method. All these methods have been coupled with a parametric finite element formulation for the evaluation of the objective function at each iteration or generation, depending on the search algorithm employed.
The optimal shape derived from these optimisation methods is termed the Stress Minimised Railhead (SMRH) in this thesis. This optimal SMRH design exhibits significantly reduced stress concentration, which remains well below the yield strength of head-hardened rail steels, and shifts the stress concentration location away from the critical zone of the railhead end. The reduction in magnitude and the relocation of the stress concentration in the SMRH design have been validated through a full-scale wheel-railhead interaction test rig; railhead strains under the loaded wheels were recorded using a non-contact digital image correlation method. The experimental study confirmed the accuracy of the numerical predictions. Although SMRH-shaped IRJs eliminate stress singularities, they can still fail due to joint bar or bolt hole cracking; therefore, another conceptual design, termed the Embedded IRJ (EIRJ) in this thesis, with no joint bars and no pre-tensioned bolts, has been developed using a multi-objective optimisation formulation based on the coupled genetic algorithm - parametric finite element method. To achieve the required structural stiffness for the safe passage of the loaded wheels, the rails were embedded into the concrete of post-tensioned sleepers; the optimal solutions for the design of the EIRJ are shown to simplify the design through the elimination of the complex interactions and failure modes of the various structural components of the CIRJ. The practical applicability of the optimal SMRH and EIRJ shapes is demonstrated through two illustrative examples, termed improved designs (IMD1 & IMD2) in this thesis; IMD1 is a combination of the CIRJ and SMRH designs, whilst IMD2 is a combination of the EIRJ and SMRH designs.
These two improved designs have been simulated for two key operating parameters (speed and wagon load) and a design parameter (wheel diameter) that affect wheel-rail contact; the effect of these parameters on the performance of the two improved designs has been found to be negligible, and the improved designs are in turn found to be far superior to the current CIRJ designs in terms of stress singularities and deformation under the passage of loaded wheels. Therefore, these improved designs are expected to provide a longer service life than the CIRJs.
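The coupling of a genetic algorithm with a parametric finite element evaluation can be sketched generically as below. The objective here is a toy analytic bowl standing in for the FE computation of peak railhead stress, and the operator choices (truncation selection, blend crossover, bounded Gaussian mutation) are generic illustrations rather than the thesis's exact scheme:

```python
import random

def genetic_minimise(objective, bounds, pop_size=30, generations=60,
                     mutation=0.1, seed=0):
    """Minimal real-coded genetic algorithm. `objective` is any callable
    scoring a parameter vector (a stand-in for an FE stress evaluation);
    `bounds` is a list of (low, high) pairs, one per design parameter."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)       # one objective call per
        elite = scored[: pop_size // 2]           # candidate (the FE solve)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            child = [min(h, max(l, g + rng.gauss(0, mutation * (h - l))))
                     for g, l, h in zip(child, lo, hi)]   # bounded mutation
            children.append(child)
        pop = elite + children                    # elitism preserves the best
    return min(pop, key=objective)

# Toy stand-in objective: a smooth bowl whose minimiser plays the role of
# the stress-minimised railhead profile parameters.
best = genetic_minimise(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

In the thesis's setting each objective call is expensive (a parametric FE solve), which is why non-gradient searches with small populations, and the hybrid variant that refines GA results, matter for efficiency.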
Abstract:
The legitimate resolution of disputes in online environments requires a complex understanding of the social norms of the community. The conventional legal approach to resolving disputes through literal interpretation of the contractual terms of service is highly problematic because it does not take into account potential conflict with community expectations. In this paper we examine the importance of consent to community governance and argue that a purely formal evaluation of consent is insufficient to legitimately resolve disputes. As online communities continue to grow in importance to the lives of their participants, the importance of resolving disputes legitimately, with reference to the consent of the community, will also continue to grow. Real consent, however, is difficult to identify. We present a case study of botting and real money trading in EVE Online that highlights the dynamic interaction of community norms and private governance processes. Through this case study, we argue that the major challenge facing regulators of online environments is that community norms are complex, contested, and continuously evolving. Developing legitimate regulatory frameworks then depends on the ability of regulators to create efficient and acceptable modes of dispute resolution that can take into account (and acceptably resolve) the tension between formal contractual rules and complex and conflicting community understandings of acceptable behaviour.
Resumo:
This is the essay prepared for the exhibition titled 'Hot Chocolate' held at the SASA Gallery, Adelaide, South Australia, 24 October - 29 November 2012. Below are the words that start the essay and which provide a glimpse of the artworks in the exhibition. By agreeing to work together in this exhibition, the artists in Hot Chocolate delivered across an eclectic assortment of academic enquiry:
• the politics of identity
• the politics of desire
• fetishisation of racial and othered bodies
• origin and place
• the politics of skin
• events, moments, and ephemerality
• need
We too talked, laughed, cried and worked through these issues in relation to the artworks submitted, including Pamela's work, and to the theory and literature we have read and utilised in our words with each other and communities. We begin this piece by reflecting on the writings of bell hooks, whose words kissed us awake and stirred us at the start of our respective formal research journeys. We align her words with some of our activism, advocacy, academic and community work. We weave the magical lyrics of the 1970s iconic band Hot Chocolate throughout this essay.
Resumo:
Meal-induced thermogenesis (MIT) research findings are highly inconsistent, in part due to the variety of durations and protocols used to measure MIT. We aimed to determine: 1) the proportion of a 6 h MIT response completed at 3, 4 and 5 h; 2) the associations between the shorter durations and the 6 h measure; and 3) whether shorter durations improved the reproducibility of the measurement. MIT was measured in response to a 2410 kJ mixed-composition meal in ten individuals (5 male, 5 female) on two occasions. Energy expenditure was measured continuously for 6 h post-meal using indirect calorimetry, and MIT was calculated as the increase in energy expenditure above the pre-meal resting metabolic rate (RMR). On average, 76%, 89% and 96% of the 6 h MIT response was completed within 3, 4 and 5 h respectively, and the MIT at each of these time points was strongly correlated with the 6 h MIT (range for correlations, r = 0.990 to 0.998; p < 0.01). The between-day CV for the 6 h measurement was 33%, but was significantly lower after 3 h of measurement (CV = 26%, p = 0.02). Despite variability in total MIT between days, the proportion of the MIT completed at 3, 4 and 5 h was reproducible (mean CV: 5%). While 6 h is typically required to measure the complete MIT response, a 3 h measure provides sufficient information about the magnitude of the MIT response and may be applicable for measuring individuals on repeated occasions.
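The MIT calculation described above (energy expenditure integrated above the pre-meal RMR, with the proportion complete at interim time points) can be sketched as follows. This is a minimal illustration, not the study's analysis code; the function name, sampling interval, and all numeric values are hypothetical.

```python
# Hypothetical sketch of the MIT calculation: MIT is the cumulative energy
# expenditure (EE) above the pre-meal resting metabolic rate (RMR),
# accumulated over the post-meal measurement period.

def mit_profile(ee_kj_per_min, rmr_kj_per_min, minutes_per_sample=1.0):
    """Cumulative MIT (kJ) at each sample, given post-meal EE readings."""
    total = 0.0
    cumulative = []
    for ee in ee_kj_per_min:
        # Only EE above the resting baseline counts toward MIT.
        total += max(ee - rmr_kj_per_min, 0.0) * minutes_per_sample
        cumulative.append(total)
    return cumulative

# Illustrative 6 h trace: EE peaks after the meal and decays toward RMR.
rmr = 5.0  # kJ/min, assumed resting metabolic rate (not from the study)
ee = [rmr + 1.5 * (0.99 ** t) for t in range(360)]  # 360 one-minute samples

profile = mit_profile(ee, rmr)
total_6h = profile[-1]
for hours in (3, 4, 5):
    frac = profile[hours * 60 - 1] / total_6h
    print(f"{hours} h: {frac:.0%} of the 6 h MIT response complete")
```

With a decaying post-meal EE curve like this toy one, most of the response accumulates early, which is the pattern the abstract reports (76%, 89% and 96% complete at 3, 4 and 5 h).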
Resumo:
Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods, including random, stratified, and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared to 34% of total species detected by traditional area-search methods. Our results demonstrate that targeted sampling methods can provide an effective means of analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data. Read More: http://www.esajournals.org/doi/abs/10.1890/12-2088.1
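The best-performing strategy above, randomly drawing 120 one-minute samples from the three hours after dawn across five days and counting the species detected, can be sketched as follows. The function name and per-minute species tags are hypothetical stand-ins for the manually analysed data.

```python
# A minimal sketch of the random dawn-sampling strategy: draw n one-minute
# samples from the post-dawn window over several days, then count the unique
# species detected in those samples.
import random

def sample_dawn_minutes(tags, days=5, dawn_window_min=180, n_samples=120, seed=0):
    """tags: dict mapping (day, minute_after_dawn) -> set of species codes.
    Returns the set of species detected in the randomly chosen minutes."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    pool = [(d, m) for d in range(days) for m in range(dawn_window_min)]
    chosen = rng.sample(pool, n_samples)  # sample without replacement
    detected = set()
    for key in chosen:
        detected |= tags.get(key, set())
    return detected

# Toy data: species "A" calls every minute; "B" only in the first 10 minutes
# after dawn, so it may or may not land in the random sample.
tags = {(d, m): {"A"} | ({"B"} if m < 10 else set())
        for d in range(5) for m in range(180)}
print(sorted(sample_dawn_minutes(tags)))
```

Sampling without replacement from the dawn window mirrors the study design: 120 minutes out of a 900-minute pool (5 days x 180 minutes), so frequent callers are detected almost surely while rare, narrowly timed callers may be missed.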
Resumo:
Recently, vision-based systems have been deployed in professional sports to track the ball and players to enhance analysis of matches. Due to their unobtrusive nature, vision-based approaches are preferred to wearable sensors (e.g. GPS or RFID sensors) as they do not require players or balls to be instrumented prior to matches. Unfortunately, in continuous team sports where players need to be tracked over long periods of time (e.g. 35 minutes in field hockey or 45 minutes in soccer), current vision-based tracking approaches are not reliable enough to provide fully automatic solutions. As such, human intervention is required to fix missed or false detections. However, when a human cannot intervene due to the sheer amount of data being generated, that data cannot be used because of the missing or noisy detections. In this paper, we investigate two representations based on raw player detections (rather than tracking) which are robust to missed and false detections. Specifically, we show that both team occupancy maps and team centroids can be used to detect team activities, while the occupancy maps can also be used to retrieve specific team activities. An evaluation on over 8 hours of field hockey data captured at a recent international tournament demonstrates the validity of the proposed approach.
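A team occupancy map of the kind described above can be sketched as follows: raw per-frame detections (x, y, team) are binned into a coarse grid per team, so an occasional missed or false detection only perturbs individual cell counts rather than breaking an identity track. The grid resolution, field dimensions, and function name are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of a team occupancy map built from raw player detections.
# Field dimensions default to a field hockey pitch (91.4 m x 55.0 m);
# grid size (6 x 4) is an arbitrary illustrative choice.

def occupancy_map(detections, field_w=91.4, field_h=55.0, nx=6, ny=4):
    """detections: iterable of (x, y, team) with team in {0, 1}.
    Returns two ny-by-nx grids of normalized occupancy, one per team."""
    grids = [[[0.0] * nx for _ in range(ny)] for _ in range(2)]
    counts = [0, 0]
    for x, y, team in detections:
        # Quantize the position into a grid cell, clamping to the field edge.
        i = min(int(x / field_w * nx), nx - 1)
        j = min(int(y / field_h * ny), ny - 1)
        grids[team][j][i] += 1.0
        counts[team] += 1
    # Normalize each team's map so cells sum to 1 (making the map
    # insensitive to how many detections were returned that frame).
    for team in range(2):
        if counts[team]:
            for row in grids[team]:
                row[:] = [v / counts[team] for v in row]
    return grids

# Toy frame: team 0 clustered on the left, team 1 on the right.
dets = [(10.0, 27.0, 0), (12.0, 30.0, 0), (80.0, 27.0, 1)]
maps = occupancy_map(dets)
```

Because the map is a normalized spatial histogram rather than a set of per-player tracks, a dropped detection shifts a small amount of mass between cells instead of corrupting a trajectory, which is the robustness property the abstract highlights.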