16 results for data science
in Greenwich Academic Literature Archive - UK
Abstract:
In this article, the buildingEXODUS (V1.1) evacuation model is described and discussed, and attempts at qualitative and quantitative model validation are presented. The data set used for the validation is the Tsukuba pavilion evacuation data. This data set is of particular interest as the evacuation was influenced by external conditions, namely inclement weather. As part of the validation exercise, the sensitivity of the buildingEXODUS predictions to a range of variables and conditions is examined, including exit flow capacity, occupant response times, and the impact of external conditions on the developing evacuation. The buildingEXODUS evacuation model was found to produce good qualitative and quantitative agreement with the experimental data.
Abstract:
Network analysis is distinguished from traditional social science by the dyadic nature of the standard data set. Whereas in traditional social science we study monadic attributes of individuals, in network analysis we study dyadic attributes of pairs of individuals. These dyadic attributes (e.g. social relations) may be represented in matrix form by a square 1-mode matrix. In contrast, the data in traditional social science are represented as 2-mode matrices. However, network analysis is not completely divorced from traditional social science, and often has occasion to collect and analyze 2-mode matrices. Furthermore, some of the methods developed in network analysis have uses in analysing non-network data. This paper presents and discusses ways of applying and interpreting traditional network analytic techniques to 2-mode data, as well as developing new techniques. Three areas are covered in detail: displaying 2-mode data as networks, detecting clusters and measuring centrality.
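One routine operation behind the techniques listed above is projecting a 2-mode (actor-by-event) matrix into 1-mode co-affiliation matrices so that standard network tools can be applied. The following Python sketch illustrates this with made-up data; it is not code from the paper.

import numpy as np

# Hypothetical 2-mode data: rows are actors, columns are events;
# a 1 means the actor took part in the event.
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
])

actor_by_actor = A @ A.T   # events shared by each pair of actors (1-mode)
event_by_event = A.T @ A   # actors shared by each pair of events (1-mode)

print(actor_by_actor)
print(event_by_event)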
Abstract:
The Symposium, “Towards the sustainable use of Europe’s forests”, with the sub-title “Forest ecosystem and landscape research: scientific challenges and opportunities”, lists three fundamental substantive areas of research that are involved: Forest management and practices, Ecosystem processes and functional ecology, and Environmental economics and sociology. This paper argues that there are essential catalytic elements missing! Without these elements there is great danger that the aimed-for world leadership in the forest sciences will not materialize. What are the missing elements? All the sciences, and in particular biology, environmental sciences, sociology, economics, and forestry, have evolved so that they include good scientific methodology. Good methodology is imperative in the design and analysis of research studies, the management of research data, and the interpretation of research findings. The methodological disciplines of Statistics, Modelling and Informatics (“SMI”) are crucial elements in a proposed Centre for European Forest Science, and the full involvement of professionals in these methodological disciplines is needed if the research of the Centre is to be world-class. The Distributed Virtual Institute (DVI) for Statistics, Modelling and Informatics in Forestry and the Environment (SMIFE) is a consortium with the aim of providing world-class methodological support and collaboration to European research in the areas of Forestry and the Environment. It is suggested that DVI: SMIFE should be a formal partner in the proposed Centre for European Forest Science.
Abstract:
This paper presents a generic framework that can be used to describe study plans using meta-data. The context of this research and the associated technologies and standards is presented. The approach adopted here has been developed within the mENU project, which aims to provide a model for a European Networked University. The methodology for the design of the generic framework is discussed and the main design requirements are presented. The approach adopted was based on a set of templates containing the meta-data required for the description of programs of study and consisting of generic building elements annotated appropriately. The process followed to develop the templates is presented together with a set of evaluation criteria to test the suitability of the approach. The template structure is presented and example templates are shown. A first evaluation of the approach has shown that the proposed framework can provide a flexible and competent means for the generic description of study plans for the purposes of a networked university.
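To make the idea of meta-data templates concrete, the sketch below shows how the building elements of a study plan might be annotated; the field names and values are purely illustrative and are not those defined by the mENU project.

# Hypothetical study-plan template; field names are illustrative only.
study_plan_template = {
    "programme": "MSc Example Programme",
    "building_elements": [
        {"type": "module", "title": "Statistics 1", "credits": 15},
        {"type": "module", "title": "Databases", "credits": 15},
        {"type": "project", "title": "Dissertation", "credits": 60},
    ],
    "annotations": {"language": "en", "level": "postgraduate"},
}

total_credits = sum(e["credits"] for e in study_plan_template["building_elements"])
print(total_credits)  # 90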
Abstract:
Evacuation analysis of passenger and commercial shipping can be undertaken using computer-based simulation tools such as maritimeEXODUS. These tools emulate human shipboard behaviour during emergency scenarios; however, they are largely based on the behaviour of civilian passengers and the fixtures and fittings of merchant vessels. If these tools and procedures are to be applied to naval vessels, there is a clear requirement to understand the behaviour of well-trained naval personnel interacting with the fixtures and fittings that are exclusive to warships. Human factors trials using Royal Navy training facilities were recently undertaken to collect data to improve our understanding of the performance of naval personnel in warship environments. The trials were designed and conducted by staff from the Fire Safety Engineering Group (FSEG) of the University of Greenwich on behalf of the Sea Technology Group (STG), Defence Procurement Agency. The trials involved a selection of RN volunteers with sea-going experience in warships, operating and traversing structural components under different angles of heel. This paper describes the trials and some of the collected data.
Abstract:
Serial Analysis of Gene Expression (SAGE) is a relatively new method for monitoring gene expression levels and is expected to contribute significantly to the progress in cancer treatment by enabling a precise and early diagnosis. A promising application of SAGE gene expression data is classification of tumors. In this paper, we build three event models (the multivariate Bernoulli model, the multinomial model and the normalized multinomial model) for SAGE data classification. Both binary classification and multicategory classification are investigated. Experiments on two SAGE datasets show that the multivariate Bernoulli model performs well with small feature sizes, the multinomial model performs better at large feature sizes, and the normalized multinomial model performs well at medium feature sizes. The multinomial model achieves the highest overall accuracy.
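The multivariate Bernoulli and multinomial event models are commonly realised as naive Bayes classifiers over count features (presence/absence versus raw counts). The sketch below shows that contrast on synthetic data; it is not the SAGE data or the exact models of the paper.

import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.poisson(lam=2.0, size=(200, 50))   # hypothetical tag counts per gene
y = rng.integers(0, 2, size=200)           # hypothetical tumour labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bernoulli = BernoulliNB(binarize=0.5).fit(X_tr, y_tr)   # uses presence/absence of tags
multinomial = MultinomialNB().fit(X_tr, y_tr)           # uses raw tag counts

print("Bernoulli accuracy:", bernoulli.score(X_te, y_te))
print("Multinomial accuracy:", multinomial.score(X_te, y_te))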
Abstract:
The passenger response time distributions adopted by the International Maritime Organisation (IMO) in their assessment of the assembly time for passenger ships involve two key assumptions. The first is that the response time distribution assumes the form of a uniform random distribution and the second concerns the actual response times. These two assumptions are core to the validity of the IMO analysis but are not based on real data, being the recommendations of an IMO committee. In this paper, response time data collected from assembly trials conducted at sea on a real passenger vessel using actual passengers are presented and discussed. Unlike the IMO specified response time distributions, the data collected from these trials display a log-normal distribution, similar to that found in land-based environments. Based on these data, response time distributions for use in the IMO assembly analysis for the day and night scenarios are suggested.
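The distinction between the two distributional assumptions can be illustrated numerically. The sketch below samples response times from a uniform and from a log-normal distribution; the parameter values are assumptions for illustration, not the trial data.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

uniform_rt = rng.uniform(0, 300, size=n)                           # seconds, assumed bounds
lognormal_rt = rng.lognormal(mean=np.log(60), sigma=0.8, size=n)   # assumed parameters

for name, rt in [("uniform", uniform_rt), ("log-normal", lognormal_rt)]:
    print(name, "mean:", round(rt.mean(), 1), "95th percentile:", round(np.percentile(rt, 95), 1))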
Abstract:
The Logit-Logistic (LL), Johnson's SB, and the Beta (GBD) are flexible four-parameter probability distribution models in terms of the skewness-kurtosis region covered, and each has been used for modeling tree diameter distributions in forest stands. This article compares bivariate forms of these models in terms of their adequacy in representing empirical diameter-height distributions from 102 sample plots. Four bivariate models are compared: SBB, the natural, well-known, and much-used bivariate generalization of SB; and the bivariate distributions with LL, SB, and Beta as marginals, constructed using Plackett's method (LL-2P, etc.). All models are fitted using maximum likelihood, and their goodness-of-fit is compared using minus log-likelihood (equivalent to Akaike's Information Criterion, the AIC). The performance ranking in this case study was SBB, LL-2P, GBD-2P, and SB-2P.
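The fitting-and-comparison machinery itself is straightforward to sketch. The example below fits two candidate distributions by maximum likelihood to synthetic univariate data and compares them by AIC; the paper's actual comparison is of bivariate diameter-height models, so this shows only the general pattern, not a reproduction of the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
diameters = rng.lognormal(mean=3.0, sigma=0.3, size=500)    # hypothetical diameters (cm)

# Rescale into (0, 1) so both two-shape-parameter families apply directly.
x = (diameters - diameters.min() + 0.5) / (diameters.max() - diameters.min() + 1.0)

candidates = {"Johnson SB": stats.johnsonsb, "Beta": stats.beta}

for name, dist in candidates.items():
    a, b, loc, scale = dist.fit(x, floc=0, fscale=1)        # maximum-likelihood fit of the shapes
    loglik = np.sum(dist.logpdf(x, a, b, loc=loc, scale=scale))
    aic = 2 * 2 - 2 * loglik                                # AIC = 2k - 2 ln L, with k = 2 shapes
    print(f"{name}: log-likelihood = {loglik:.1f}, AIC = {aic:.1f}")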
Abstract:
This paper describes the methodologies employed in the collection and storage of first-hand accounts of evacuation experiences derived from face-to-face interviews with evacuees from the World Trade Center (WTC) Twin Towers complex on 11 September 2001. In particular, the paper describes the development of the High-rise Evacuation Evaluation Database (HEED). This is a flexible qualitative research tool which contains the full transcribed interview accounts and the coded evacuee experiences extracted from those transcripts. The data and information captured and stored in the HEED database are not only unique but also provide a means to address current and emerging issues relating to human factors associated with the evacuation of high-rise buildings.
Abstract:
This study investigates the use of computer-modelled versus directly experimentally determined fire hazard data for assessing survivability within buildings, using evacuation models incorporating Fractional Effective Dose (FED) models. The objective is to establish a link between effluent toxicity, measured using a variety of small- and large-scale tests, and building evacuation. For the scenarios under consideration, fire simulation is typically used to determine the time at which non-survivable conditions develop within the enclosure, for example, when the smoke or toxic effluent layer falls below a critical height which is deemed detrimental to evacuation, or when the radiative fluxes reach a critical value leading to the onset of flashover. The evacuation calculation would then be used to determine whether people within the structure could evacuate before these critical conditions develop.
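The FED bookkeeping referred to above accumulates dose as the ratio of exposure to an assumed critical dose. The sketch below shows the arithmetic for a single toxicant; the concentrations and the critical dose are illustrative assumptions, not values from the study.

import numpy as np

dt = 5.0                                                           # time step (s)
co_ppm = np.array([0, 200, 800, 1500, 2500, 4000, 6000], float)   # hypothetical CO exposure
critical_dose = 35_000.0 * 60.0                                    # assumed critical Ct product (ppm*s)

fed = np.cumsum(co_ppm * dt) / critical_dose   # fractional dose accumulated so far
incapacitated = (fed >= 1.0).any()             # FED = 1 marks the critical condition

print("cumulative FED:", fed.round(3))
print("critical dose reached:", incapacitated)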
Abstract:
Two evacuation trials were conducted within Brazilian library facilities by FSEG staff in January 2005. These represent some of the first such trials conducted in Brazil. The purpose of these evacuation trials was to collect pre-evacuation time data from a population with a cultural background different to that found in western Europe. In total, some 34 pre-evacuation times were collected from the experiments; these ranged from 5 to 98 seconds, with a mean pre-evacuation time of 46.7 seconds.
Abstract:
This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel and instead to reuse approaches to distributed heterogeneous data architectures, together with the lessons learned from that work, and apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
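Purely as an illustration of the data flow outlined above (local recording, verification, sign-off into a central outcome store), the following sketch names its components hypothetically; none of the identifiers come from the DECADE project.

from dataclasses import dataclass, field

@dataclass
class LocalStore:                      # stands in for local data storage on the recording device
    events: list = field(default_factory=list)
    def record(self, event: dict) -> None:
        self.events.append(event)

def verify(events: list) -> bool:      # stand-in for the semi-autonomous verification model
    return all("author" in e and "timestamp" in e for e in events)

@dataclass
class OutcomeDatabase:                 # stands in for the centrally verified outcome database
    signed_off: list = field(default_factory=list)
    def sign_off(self, events: list) -> None:
        if verify(events):             # sign-off only accepts verified records
            self.signed_off.extend(events)

store = LocalStore()
store.record({"author": "expert-1", "timestamp": 1700000000, "action": "edit"})
central = OutcomeDatabase()
central.sign_off(store.events)
print(len(central.signed_off))  # 1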
Abstract:
An analysis of generic attacks on, and countermeasures for, block-cipher-based message authentication code (MAC) algorithms in sensor applications is undertaken; the conclusions are used in the design of two new MAC constructs: Quicker Block Chaining MAC1 (QBC-MAC1) and Quicker Block Chaining MAC2 (QBC-MAC2). Using software simulation we show that our new constructs point to improvements in CPU instruction clock cycle usage and energy requirements when benchmarked against the de facto Cipher Block Chaining MAC (CBC-MAC) based construct used in the TinySec security protocol for wireless sensor networks.
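For readers unfamiliar with the baseline, the sketch below implements plain CBC-MAC (the construct the new designs are benchmarked against) for fixed-length, block-aligned messages, using AES in CBC mode with a zero IV and keeping the final ciphertext block as the tag. It is not an implementation of QBC-MAC1 or QBC-MAC2, which the paper introduces.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def cbc_mac(key: bytes, message: bytes) -> bytes:
    # Plain CBC-MAC is only secure for fixed-length messages; this sketch
    # assumes the caller supplies pre-padded, block-aligned input.
    assert len(message) % BLOCK == 0
    encryptor = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * BLOCK)).encryptor()
    ciphertext = encryptor.update(message) + encryptor.finalize()
    return ciphertext[-BLOCK:]          # the tag is the final chaining value

key = os.urandom(16)
tag = cbc_mac(key, b"sensor-reading-0001#temp=21.5C##")   # 32 bytes = 2 blocks
print(tag.hex())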
Abstract:
We consider the optimum design of pilot-symbol-assisted modulation (PSAM) schemes with feedback. The received signal is periodically fed back to the transmitter through a noiseless delayed link and the time-varying channel is modeled as a Gauss-Markov process. We optimize a lower bound on the channel capacity which incorporates the PSAM parameters and Kalman-based channel estimation and prediction. The parameters available for the capacity optimization are the data power adaptation strategy, pilot spacing and pilot power ratio, subject to an average power constraint. Compared to the optimized open-loop PSAM (i.e., the case where no feedback is provided from the receiver), our results show that even in the presence of feedback delay, the optimized power adaptation provides higher information rates at low signal-to-noise ratios (SNR) in medium-rate fading channels. However, in fast fading channels, even the presence of modest feedback delay dissipates the advantages of power adaptation.
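The channel model and estimator named in the abstract can be sketched compactly: a first-order Gauss-Markov fading process tracked by a scalar Kalman filter from noisy pilot observations. The parameter values below (correlation, pilot spacing, pilot SNR) are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(3)
alpha, pilot_snr, pilot_spacing, n = 0.98, 10.0, 5, 500   # assumed values
noise_var = 1.0 / pilot_snr

h = np.zeros(n, dtype=complex)         # true channel (unit-variance Gauss-Markov process)
h_hat = np.zeros(n, dtype=complex)     # Kalman estimate
p = 1.0                                # estimation-error variance

def cnoise(var: float) -> complex:     # circularly symmetric complex Gaussian sample
    return np.sqrt(var / 2) * (rng.standard_normal() + 1j * rng.standard_normal())

for k in range(1, n):
    h[k] = alpha * h[k - 1] + cnoise(1 - alpha**2)
    h_pred = alpha * h_hat[k - 1]                 # prediction step
    p_pred = alpha**2 * p + (1 - alpha**2)
    if k % pilot_spacing == 0:                    # update step only at pilot positions
        y = h[k] + cnoise(noise_var)
        gain = p_pred / (p_pred + noise_var)
        h_hat[k], p = h_pred + gain * (y - h_pred), (1 - gain) * p_pred
    else:
        h_hat[k], p = h_pred, p_pred

print("mean squared estimation error:", round(float(np.mean(np.abs(h - h_hat) ** 2)), 4))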