953 results for volume-time curve
Abstract:
In recent times, fire has become a major hazard in buildings due to the increase in fire loads resulting from modern furniture and lightweight construction. This has caused problems for safe evacuation and rescue activities, and in some instances has led to the collapse of buildings (Lewis, 2008; Nyman, 2002). Recent research has shown that the actual fire resistance of building elements exposed to building fires can be less than their specified fire resistance rating (Lennon and Moore, 2003; Jones, 2002; Nyman, 2002; Abecassis-Empis et al., 2008). Conventionally, the fire rating of building elements is determined using fire tests based on the standard fire time-temperature curve given in ISO 834. This ISO 834 curve was developed in the early 1900s, when wood was the basic fuel source. In reality, modern buildings make use of thermoplastic materials, synthetic foams and fabrics. These materials have high calorific values and increase both the speed of fire growth and the heat release rate, thus increasing the fire severity beyond that of the standard fire curve. This suggests the need to use realistic fire time-temperature curves in tests. Real building fire temperature profiles depend on the fuel load representing the combustible building contents, the ventilation openings and the thermal properties of wall lining materials. Fuel loads were selected based on a review, and suitable realistic fire time-temperature curves were developed. Fire tests were then performed on plasterboard-lined light gauge steel framed walls using the developed realistic fire curves. This paper presents the details of the development of suitable realistic building fire curves and the fire tests using them. It describes the fire performance of the tested walls in comparison to standard fire tests and highlights the differences between them. This research has shown the need to use realistic fire exposures in assessing the fire resistance rating of building elements.
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies, due to the unsatisfactory selection of first flush indicators and of how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected such that they explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded at stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load wash-off for every 10% of runoff volume interval from the beginning of the event (interval first flush indicators or LV) and the actual pollutant load wash-off during a 10% increment in runoff volume (section first flush indicators or P). First flush behaviour showed significant variation with pollutant type. TSS and TP showed consistent first flush behaviour.
However, the dissolved fraction of TN showed significant differences from TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids, which become exposed as the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall in the earlier part of the event. Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that the rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (relative to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume. This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of the runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and the availability of TN on the catchment also play an important role in first flush.
The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush, a low magnitude first flush, or not at all. Investigation of the influence of catchment characteristics on first flush found that the key factors influencing the phenomenon are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to the catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems due to their different wash-off behaviour. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and the end parts of a rainfall event. This is related to the composition of TP available for wash-off.
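The two indicator sets described above can be sketched numerically: given a dimensionless pollutant load distribution curve for one event, the interval indicators are the cumulative load fractions at each 10% runoff increment and the section indicators are the loads washed off within each 10% slice. The function name and the illustrative concave curve below are assumptions for demonstration, not data from the thesis.

```python
import numpy as np

def first_flush_indicators(runoff_frac, load_frac):
    """Derive first flush indicators from a dimensionless pollutant load
    distribution curve (cumulative load fraction vs cumulative runoff fraction).

    Returns:
      interval: cumulative load wash-off at each 10% runoff increment
                (interval first flush indicators)
      section:  load washed off within each individual 10% runoff slice
                (section first flush indicators)
    """
    grid = np.linspace(0.1, 1.0, 10)                      # 10%, 20%, ..., 100% runoff
    interval = np.interp(grid, runoff_frac, load_frac)    # cumulative indicators
    section = np.diff(np.concatenate([[0.0], interval]))  # per-slice indicators
    return interval, section

# Hypothetical event: a concave curve means load runs ahead of runoff (first flush).
runoff = np.linspace(0.0, 1.0, 101)
load = runoff ** 0.5
interval, section = first_flush_indicators(runoff, load)
```

A curve lying above the 1:1 line (interval indicator above 0.1 at 10% runoff) signals first flush; the section indicators then show how the wash-off load tapers over the event.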
Abstract:
The refereed papers contained in this set of conference proceedings were presented at the 2nd International Conference on Crime, Justice and Social Democracy, hosted by the Crime and Justice Research Centre, Faculty of Law, QUT. The conference attracted an impressive list of internationally distinguished keynote and panel speakers from the United Kingdom, United States, Australia, New Zealand, Canada and, this time, Latin America, as well as high quality paper submissions.
Abstract:
Objective: Modern series from high-volume esophageal centers report an approximate 40% 5-year survival in patients treated with curative intent and postoperative mortality rates of less than 4%. An objective analysis of factors that underpin current benchmarks within high-volume centers has not been performed. Methods: Three time periods were studied, 1990 to 1998 (period 1), 1999 to 2003 (period 2), and 2004 to 2008 (period 3), in which 471, 254, and 342 patients, respectively, with esophageal cancer were treated with curative intent. All data were prospectively recorded, and staging, pathology, treatment, operative, and oncologic outcomes were compared. Results: Five-year disease-specific survival was 28%, 35%, and 44%, and in-hospital postoperative mortality was 6.7%, 4.4%, and 1.7% for periods 1 to 3, respectively (P < .001). Period 3, compared with periods 1 and 2, respectively, was associated with significantly (P < .001) more early tumors (17% vs 4% and 6%), higher nodal yields (median 22 vs 11 and 18), and a higher R0 rate in surgically treated patients (81% vs 73% and 75%). The use of multimodal therapy increased (P < .05) across time periods. By multivariate analysis, age, T stage, N stage, vascular invasion, R status, and time period were significantly (P < .0001) associated with outcome. Conclusions: Improved survival with localized esophageal cancer in the modern era may reflect an increase of early tumors and optimized staging. Important surgical and pathologic standards, including a higher R0 resection rate and nodal yields, and lower postoperative mortality, were also observed. Copyright © 2012 by The American Association for Thoracic Surgery.
Abstract:
The application of Bluetooth (BT) technology to transportation has enabled researchers to make accurate travel time observations on freeway and arterial roads. Bluetooth traffic data are generally incomplete, as they relate only to those vehicles that are equipped with Bluetooth devices and that are detected by the Bluetooth sensors of the road network. The fraction of detected vehicles versus the total number of transiting vehicles is often referred to as the Bluetooth Penetration Rate (BTPR). The aim of this study is to precisely define the spatio-temporal relationship between the quantities that become available through the partial, noisy BT observations and the hidden variables that describe the actual dynamics of vehicular traffic. To do so, we propose to incorporate a multi-class traffic model into a sequential Monte Carlo estimation algorithm. Our framework has been applied to empirical travel time investigations in the Brisbane metropolitan region.
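The idea of recovering a hidden traffic variable from partial, noisy Bluetooth detections with sequential Monte Carlo estimation can be illustrated with a minimal bootstrap particle filter over a single hidden travel-time state. The random-walk dynamics, noise levels, and numbers here are assumptions for illustration only, not the paper's multi-class traffic model.

```python
import numpy as np

def particle_filter_travel_time(observations, n_particles=1000,
                                process_std=5.0, obs_std=10.0, seed=0):
    """Bootstrap particle filter estimating a hidden mean travel time (seconds)
    from sparse, noisy Bluetooth samples. None = interval with no detection,
    mimicking a low Bluetooth penetration rate."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(30.0, 300.0, n_particles)  # diffuse initial guess
    estimates = []
    for obs in observations:
        # Predict: travel time drifts as a random walk between intervals.
        particles += rng.normal(0.0, process_std, n_particles)
        if obs is not None:
            # Update: Gaussian likelihood of the Bluetooth measurement.
            w = np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
            w /= w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)  # resample
            particles = particles[idx]
        estimates.append(float(particles.mean()))
    return estimates

obs = [120.0, None, 125.0, None, None, 130.0]  # hypothetical BT travel times
est = particle_filter_travel_time(obs)
```

During intervals without detections the estimate coasts on the dynamics alone, which is exactly the gap a penetration-rate-aware model has to bridge.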
Abstract:
The rapid development of the World Wide Web has created massive amounts of information, leading to the information overload problem. Under this circumstance, personalization techniques have been developed to help users find content that meets their personalized interests or needs out of the massively increasing information. User profiling techniques play the core role in this research. Traditionally, most user profiling techniques create user representations in a static way. However, in real-world applications user interests may change over time. In this research we develop algorithms for mining user interests by integrating time decay mechanisms into topic-based user interest profiling. Time forgetting functions are integrated into the calculation of topic interest measurements at a detailed level. The experimental study shows that accounting for the temporal effects of user interests by integrating time forgetting mechanisms yields better recommendation performance.
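A common form of time forgetting function is exponential decay, where an interaction's contribution to a topic's interest score halves after a fixed period. The sketch below, with a hypothetical half-life and event format (the paper's exact function and measurements are not reproduced here), shows how such a function folds into topic interest aggregation.

```python
from collections import defaultdict

def decayed_topic_interest(events, now, half_life_days=30.0):
    """Aggregate per-topic interest scores with an exponential forgetting
    function: weight = 0.5 ** (age_in_days / half_life), so an interaction
    one half-life old counts half as much as one made today."""
    scores = defaultdict(float)
    for topic, timestamp, weight in events:  # timestamps in seconds
        age_days = (now - timestamp) / 86400.0
        scores[topic] += weight * 0.5 ** (age_days / half_life_days)
    return dict(scores)

now = 100 * 86400  # hypothetical "current" time, day 100 in seconds
events = [
    ("sports", 100 * 86400, 1.0),   # interaction today: full weight
    ("sports", 70 * 86400, 1.0),    # 30 days ago: half weight
    ("finance", 100 * 86400, 1.0),
]
profile = decayed_topic_interest(events, now)
```

With a static profile both topics would score equally per interaction; the decayed profile instead ranks recent engagement higher, which is the temporal effect the abstract describes.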
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source.
Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge as to the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created: in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that through the papers presented, and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
Abstract:
To enhance the performance of the k-nearest neighbors approach in forecasting short-term traffic volume, this paper proposes and tests a two-step approach with the ability to forecast multiple steps ahead. In selecting the k nearest neighbors, a time constraint window is introduced, and local minima of the distances between the state vectors are then ranked to avoid overlaps among candidates. Moreover, to control the undesirable impact of extreme values, a novel algorithm with attractive analytical features is developed based on the principal component. The enhanced KNN method has been evaluated using field data, and our comparison analysis shows that it outperformed the competing algorithms in most cases.
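The two selection ideas in the abstract can be sketched as follows: restrict candidate neighbors to historical states observed at a similar time of day (the time constraint window), and among those rank only local minima of the distance sequence so that overlapping, near-identical candidates are not double-counted. This is a minimal one-step illustration under assumed parameters, not the paper's full multi-step algorithm or its principal-component extreme-value control.

```python
import numpy as np

def knn_traffic_forecast(series, state_len=3, k=2, period=None, window=1):
    """One-step k-NN forecast of traffic volume.
    - period/window implement a time-constraint window: only historical states
      whose position in the daily cycle is within `window` steps are candidates.
    - Only local minima of the distance sequence are ranked, so clusters of
      overlapping candidates contribute a single representative."""
    x = np.asarray(series, dtype=float)
    target = x[-state_len:]
    last = len(x) - state_len                 # index of the target state itself
    idx = list(range(last))                   # states with a known successor
    if period is not None:                    # time-of-day constraint
        idx = [i for i in idx
               if min((i - last) % period, (last - i) % period) <= window]
    d = {i: np.linalg.norm(x[i:i + state_len] - target) for i in idx}
    # Keep only local minima of the distance sequence over the candidate set.
    keep = [i for i in idx
            if (i - 1 not in d or d[i] <= d[i - 1])
            and (i + 1 not in d or d[i] <= d[i + 1])]
    nn = sorted(keep, key=d.get)[:k]
    return float(np.mean([x[i + state_len] for i in nn]))

# A strictly periodic toy series: the value after [1, 2, 3] is always 4.
forecast = knn_traffic_forecast([1, 2, 3, 4] * 5 + [1, 2, 3])
forecast_tod = knn_traffic_forecast([1, 2, 3, 4] * 5 + [1, 2, 3], period=4, window=0)
```

Averaging the successors of the selected neighbors gives the forecast; iterating the procedure on the extended series would yield the multi-step forecasts the paper targets.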
Abstract:
The approach adopted for investigating the relationship between rainfall characteristics and the pollutant wash-off process is commonly based on the use of parameters which represent the entire rainfall event. This does not permit investigation of the influence of rainfall characteristics on different sectors of the wash-off process, such as first flush, where there is a high pollutant wash-off load at the initial stage of the runoff event. This research study analysed the influence of rainfall characteristics on the pollutant wash-off process using two sets of innovative parameters obtained by partitioning the wash-off and rainfall characteristics. It was found that the initial 10% of the wash-off process is closely linked to runoff volume related rainfall parameters, including rainfall depth and rainfall duration, while the remaining part of the wash-off process is primarily influenced by kinetic energy related rainfall parameters, namely rainfall intensity. These outcomes prove that different sectors of the wash-off process are influenced by different segments of a rainfall event.
Abstract:
Hypoxia and the development and remodeling of blood vessels and connective tissue in granulation tissue that forms in a wound gap following full-thickness skin incision in the rat were examined as a function of time. A 1.5 cm-long incisional wound was created in rat groin skin and the opposed edges sutured together. Wounds were harvested between 3 days and 16 weeks, and hypoxia, percent vascular volume, cell proliferation and apoptosis, and the expression of α-smooth muscle actin, vascular endothelial growth factor-A, vascular endothelial growth factor receptor-2, and transforming growth factor-β1 in granulation tissue were then assessed. Hypoxia was evident between 3 and 7 days, while maximal cell proliferation at 3 days (123.6 ± 22.2 cells/mm², p < 0.001 when compared with normal skin) preceded the peak percent vascular volume that occurred at 7 days (15.83 ± 1.10%, p < 0.001 when compared with normal skin). The peak in cell apoptosis occurred at 3 weeks (12.1 ± 1.3 cells/mm², p < 0.001 when compared with normal skin). Intense α-smooth muscle actin labeling in myofibroblasts was evident at 7 and 10 days. Vascular endothelial growth factor receptor-2 and vascular endothelial growth factor-A were detectable until 2 and 3 weeks, respectively, while transforming growth factor-β1 protein was detectable in endothelial cells and myofibroblasts until 3-4 weeks and in the extracellular matrix for 16 weeks. Incisional wound granulation tissue largely developed within 3-7 days in the presence of hypoxia. Remodeling, marked by a decline in the percent vascular volume and increased cellular apoptosis, occurred largely in the absence of detectable hypoxia. The expression of vascular endothelial growth factor-A, vascular endothelial growth factor receptor-2, and transforming growth factor-β1 was evident prior to, during, and after the peak of vascular volume, reflecting multiple roles for these factors during wound healing.
Abstract:
The refereed papers contained in this volume of conference proceedings were among those presented at the 2nd International Conference on Crime, Justice and Social Democracy, hosted by the Crime and Justice Research Centre, Faculty of Law, QUT, from 8 to 11 July 2013. The conference attracted an impressive list of speakers from Australasia, Europe, North America and Latin America. These seven papers can be viewed at the Crime and Justice Research Centre’s website at http://crimejusticeconference.com/publications/, as can Volume 1, which contains another 26 selected papers from the conference. As with the papers contained in the first volume, this set of papers raises important questions about the links between crime, justice and social democracy, and continues the contribution that the Crime and Justice Research Centre makes towards engaging with these topics. We thank all those who submitted papers for review for this second volume of proceedings, as well as the peer reviewers for taking the time to review the papers, often within very tight timelines.
Abstract:
A new test of hypothesis for classifying stationary time series based on the bias-adjusted estimators of the fitted autoregressive model is proposed. It is shown theoretically that the proposed test has desirable properties. Simulation results show that when time series are short, the size and power estimates of the proposed test are reasonably good, and thus this test is reliable in discriminating between short-length time series. As the length of the time series increases, the performance of the proposed test improves, but the benefit of bias-adjustment reduces. The proposed hypothesis test is applied to two real data sets: the annual real GDP per capita of six European countries, and quarterly real GDP per capita of five European countries. The application results demonstrate that the proposed test displays reasonably good performance in classifying relatively short time series.
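The motivation for bias adjustment is that least-squares autoregressive coefficient estimates are systematically biased toward zero in short series, which distorts any test built on them. The sketch below illustrates the idea with a Kendall-type first-order correction for an AR(1) coefficient; this is an illustrative correction under assumed settings, not necessarily the estimator or test statistic used in the paper.

```python
import numpy as np

def ar1_estimate(x):
    """Least-squares estimate of the AR(1) coefficient (series demeaned first)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1]))

def ar1_bias_adjusted(x):
    """Kendall-type first-order bias adjustment: E[phi_hat] ~ phi - (1 + 3*phi)/n,
    so add (1 + 3*phi_hat)/n back to the raw estimate."""
    phi = ar1_estimate(x)
    return phi + (1 + 3 * phi) / len(x)

# Monte Carlo check on short AR(1) series with true phi = 0.5, n = 60.
rng = np.random.default_rng(1)
raw, adj = [], []
for _ in range(2000):
    e = rng.normal(size=60)
    x = np.empty(60)
    x[0] = e[0]
    for t in range(1, 60):
        x[t] = 0.5 * x[t - 1] + e[t]
    raw.append(ar1_estimate(x))
    adj.append(ar1_bias_adjusted(x))
mean_raw, mean_adj = float(np.mean(raw)), float(np.mean(adj))
```

The raw estimates average noticeably below the true coefficient while the adjusted ones sit much closer to it, which is why the benefit of adjustment is largest for short series and fades as the length grows.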
Abstract:
Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
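A simple stand-in for the polarization step is the overlap coefficient, the integral of min(f, g) over a common grid, computed from kernel density estimates of two sets of forecast replicates: 1 for identical densities, 0 for disjoint ones. The Gaussian kernel, Silverman bandwidth, and normal samples below are illustrative assumptions, not the paper's exact measure or its bootstrap machinery.

```python
import numpy as np

def gaussian_kde(samples, grid):
    """Gaussian kernel density estimate on a fixed grid (Silverman bandwidth)."""
    s = np.asarray(samples, dtype=float)
    bw = 1.06 * s.std() * len(s) ** (-1 / 5)
    z = (grid[:, None] - s[None, :]) / bw
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (bw * np.sqrt(2.0 * np.pi))

def density_overlap(a, b, n_grid=512):
    """Overlap coefficient of two forecast densities: integral of min(f, g).
    Near 1 for replicates of similar forecasts, near 0 for well-separated ones."""
    lo = min(np.min(a), np.min(b)) - 3.0
    hi = max(np.max(a), np.max(b)) + 3.0
    grid = np.linspace(lo, hi, n_grid)
    f, g = gaussian_kde(a, grid), gaussian_kde(b, grid)
    return float(np.minimum(f, g).sum() * (grid[1] - grid[0]))  # Riemann sum

# Stand-ins for bootstrap forecast replicates of two pairs of series.
rng = np.random.default_rng(0)
same = density_overlap(rng.normal(0, 1, 400), rng.normal(0, 1, 400))
far = density_overlap(rng.normal(0, 1, 400), rng.normal(8, 1, 400))
```

Pairs with high overlap would be grouped together by a clustering rule, while a discriminant rule would assign a new series to the class whose forecast densities it overlaps most.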
Abstract:
This is the second volume of a five volume series that describes, assesses, and analyses football in Victoria during the nineteenth century. This volume looks at the cultural contexts of the sport in the late 1870s and early 1880s, describes the important matches played, and provides a full statistical account of this time period. This book is the first comprehensive discussion of the early period in Australian football's development.
Abstract:
As a result of India's extremely rapid economic growth, the scale and seriousness of its environmental problems are no longer in doubt. Whether pollution abatement technologies are utilized efficiently is crucial in the analysis of environmental management, because it influences the cost of alternative production and pollution abatement technologies. In this study, we use state-level industry data on sulfur dioxide, nitrogen dioxide, and suspended particulate matter over the period 1991-2003. Employing a recently developed productivity measurement technique, we show that overall environmental productivities decreased over time in India. Furthermore, we analyze the determinants of environmental productivities and find that an environmental Kuznets curve-type relationship exists between environmental productivity and income. Panel analysis results show that the scale effect dominates the technique effect. Therefore, the combined effect of income on environmental productivity is negative.