123 results for Viscous Dampers, Five Step Method, Equivalent Static Analysis Procedure, Yielding Frames, Passive Energy Dissipation Systems
Abstract:
This paper presents a nonlinear finite element (FE) model for the analysis of very high strength (VHS) steel hollow sections wrapped with high modulus carbon fibre reinforced polymer (CFRP) sheets. The bond strength of CFRP-wrapped VHS circular steel hollow sections under tension is investigated using the FE model. The three-dimensional model was built and the nonlinear static analysis carried out in the Strand7 finite element software. The model is validated against the experimental data obtained by Fawzia et al. [1]. A detailed parametric study has been performed to examine the effect of the number of CFRP layers, different diameters of VHS steel tube and different bond lengths of CFRP sheet. The analytical model developed by Fawzia et al. [1], which builds the load carrying capacity from the contribution of each layer of CFRP sheet, has been used to determine the capacity of CFRP strengthened VHS steel tubes of different diameters. The results from the FE model were found to be in reasonable agreement with this analytical model. This validation was necessary because the analytical model was developed using only one diameter of VHS steel tube and a fixed number (five) of CFRP layers. It can be concluded that the analytical model is valid for CFRP strengthened VHS steel tubes with diameters from 38 mm to 100 mm and three to five CFRP layers. Based on the results it can also be concluded that the effective bond length is consistent across different tube diameters and numbers of CFRP layers. Three layers of CFRP is considered the most effective wrapping scheme on cost-effectiveness grounds. Finally, the distributions of longitudinal and hoop stress have been determined by the finite element model for different diameters of CFRP strengthened VHS steel tube.
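The abstract notes that the analytical model of Fawzia et al. [1] builds the total capacity from per-layer CFRP contributions. The sketch below only illustrates that general idea of summing layer contributions over an effective bond length; the layer thickness, bond strength and the formula itself are placeholder assumptions, not the published model.

```python
# Hypothetical sketch only: total load capacity as the sum of per-layer CFRP
# contributions over an effective bond length. All parameter values below
# (layer thickness, bond strength, bond length) are placeholders, not values
# from Fawzia et al. [1].
import math

def layer_capacity_kn(diameter_mm, layer_index, t_layer_mm, bond_length_mm, tau_mpa):
    """Capacity contribution of one CFRP layer (illustrative only)."""
    # Each successive layer is bonded on a slightly larger circumference.
    d_layer = diameter_mm + 2 * layer_index * t_layer_mm
    return math.pi * d_layer * bond_length_mm * tau_mpa / 1000.0

def total_capacity_kn(diameter_mm, n_layers=3, t_layer_mm=0.176,
                      bond_length_mm=50.0, tau_mpa=9.0):
    return sum(layer_capacity_kn(diameter_mm, i, t_layer_mm, bond_length_mm, tau_mpa)
               for i in range(n_layers))

if __name__ == "__main__":
    for d in (38, 50, 75, 100):  # tube diameters in mm within the validated range
        print(f"{d} mm tube: {total_capacity_kn(d):.1f} kN (illustrative)")
```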
Abstract:
Findings from numerous quantitative studies suggest that spouses of patients undergoing Coronary Artery Bypass (CAB) surgery experience both physical and emotional stress before and after their partner's surgery. Such studies have contributed to our understanding of spouses' experiences; however, they have largely failed to capture the qualitative experience of being the spouse of a partner who has undergone CAB surgery. The objective of this study was to describe the experience of spouses of patients who had recently undergone CAB surgery. The study was guided by Husserl's phenomenological approach to qualitative research. In keeping with the nature of phenomenological research, the number of participants necessarily needs to be small because phenomenology values the unique experience of individuals. This study therefore gathered data from four participants using open-ended, in-depth interviews. The method of analysis was adapted from Amedeo Giorgi's five-step empirical phenomenological process, which brackets preconceived notions and reduces participants' accounts to their essential essence or meanings. Several themes common to each of the spouses emerged: seeking information; the necessity for rapid decision making; playing guardian; a desire to debrief with their partner; and, lastly, uncertainty about their future role. This study has attempted to understand the phenomenon of the spouse's experience and, in doing so, provides a better understanding of and insight into the needs of spouses of CAB surgery patients. This adds another dimension to the existing body of knowledge and further facilitates holistic patient care.
Abstract:
This study develops a life-cycle model in which investors make investment decisions in a realistic environment. Model results show that personal illiquid projects (housing and children), fixed costs (once-off/per-period participation costs plus variable/fixed transaction costs) and endogenous risky human capital (with permanent, transitory and disastrous shocks) together are able to address both the non-participation puzzle and the age-effects puzzle. Empirical implications of the model are examined using Heckman's two-step method with the five most recent Surveys of Consumer Finances (SCF). Regression results show that liquidity, informational cost and human capital are indeed the major determinants of participation and asset allocation decisions at different stages of an investor's life.
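For readers unfamiliar with the estimation strategy, a minimal sketch of Heckman's two-step method on synthetic data is shown below: a probit model for the participation decision, followed by an OLS regression for participants with the inverse Mills ratio as an added regressor. The covariates, coefficients and data are illustrative assumptions, not the paper's specification or the SCF data.

```python
# Heckman two-step sketch on synthetic data (illustrative assumptions only).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(25, 75, n)
wealth = rng.lognormal(10, 1, n)
info_cost = rng.normal(0, 1, n)  # stand-in for an informational cost proxy
X = sm.add_constant(np.column_stack([age, np.log(wealth), info_cost]))

# Synthetic participation decision and risky-asset share (not real data).
participate = (X @ np.array([-4.0, 0.02, 0.35, -0.5]) + rng.normal(0, 1, n)) > 0
share = np.clip(0.1 + 0.003 * age + 0.05 * np.log(wealth) + rng.normal(0, 0.1, n), 0, 1)

# Step 1: probit for the participation decision.
probit = sm.Probit(participate.astype(float), X).fit(disp=False)
xb = X @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)  # inverse Mills ratio

# Step 2: OLS on participants only, with the inverse Mills ratio as a control.
mask = participate
X2 = sm.add_constant(np.column_stack([age[mask], np.log(wealth[mask]), imr[mask]]))
ols = sm.OLS(share[mask], X2).fit()
print(ols.params)
```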
Abstract:
In this paper, we consider a modified anomalous subdiffusion equation with a nonlinear source term, which describes processes that become less anomalous as time progresses through the inclusion of a second fractional time derivative acting on the diffusion term. A new implicit difference method is constructed. Its stability and convergence are discussed using a new energy method. Finally, some numerical examples are given. The numerical results demonstrate the effectiveness of the theoretical analysis.
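For context, a commonly studied form of the modified anomalous subdiffusion equation with a nonlinear source term is written below; the paper's exact formulation and its implicit difference scheme may differ in detail.

```latex
\frac{\partial u(x,t)}{\partial t}
  = \left( A\, {}_{0}D_{t}^{1-\alpha} + B\, {}_{0}D_{t}^{1-\beta} \right)
    \frac{\partial^{2} u(x,t)}{\partial x^{2}} + f(u,x,t),
  \qquad 0 < \alpha,\ \beta < 1,
```

where the operators denote Riemann-Liouville fractional derivatives in time; the second fractional operator acting on the diffusion term is what makes the process less anomalous as time progresses.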
Abstract:
This paper explores a method of comparative analysis and classification of data through perceived design affordances. It includes a discussion of the musical potential of data forms derived through eco-structural analysis of musical features inherent in audio recordings of natural sounds. A system of classification of these forms is proposed based on their structural contours. The classification comprises four primitive types: steady, iterative, unstable and impulse. It extends previous taxonomies used to describe the gestural morphology of sound. The methods presented are used to provide compositional support for eco-structuralism.
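As a rough illustration of contour-based classification into the four primitive types named above, the heuristic sketch below assigns a 1-D feature contour to steady, iterative, unstable or impulse using simple variability, periodicity and onset measures; these features and thresholds are assumptions for illustration, not the authors' eco-structural algorithm.

```python
# Heuristic contour classifier (illustrative only).
import numpy as np

def classify_contour(contour):
    """Assign a 1-D feature contour to one of four primitive types."""
    x = np.asarray(contour, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)
    # Impulse: energy concentrated in a sharp early peak that quickly decays.
    peak_pos = np.argmax(np.abs(x)) / len(x)
    tail_level = np.mean(np.abs(x[len(x) // 2:]))
    if peak_pos < 0.1 and tail_level < 0.1:
        return "impulse"
    # Steady: very little sample-to-sample variation.
    if np.std(np.diff(x)) < 0.01:
        return "steady"
    # Iterative: strongly repetitive contour (normalised autocorrelation peak).
    centred = x - x.mean()
    ac = np.correlate(centred, centred, mode="full")[len(x) - 1:]
    periodicity = np.max(ac[1:]) / (ac[0] + 1e-12)
    return "iterative" if periodicity > 0.5 else "unstable"

print(classify_contour(np.sin(np.linspace(0, 20 * np.pi, 1000))))  # -> iterative
print(classify_contour(np.ones(1000)))                             # -> steady
```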
Abstract:
This paper presents the results of a study of information behaviors in the context of people's everyday lives, undertaken in order to develop an integrated model of information behavior (IB). Thirty-four participants from six countries maintained a daily information journal or diary, mainly through a secure web log, for two weeks, giving an aggregate of 468 participant days over five months. The text-rich diary data were analyzed using a multi-method qualitative-quantitative approach in the following order: Grounded Theory analysis with manual coding, automated concept analysis using thesaurus-based visualization, and finally a statistical analysis of the coding data. The findings indicate that people engage in several information behaviors simultaneously throughout their everyday lives (including home and work life) and that sense-making is entangled in all aspects of them. Participants engaged in many of the information behaviors in a parallel, distributed and concurrent fashion: many information behaviors for one information problem, one information behavior across many information problems, and many information behaviors concurrently across many information problems. The findings also indicate that information avoidance, both active and passive, is a common phenomenon, and that information organizing behaviors, or the lack thereof, caused the most problems for participants. An integrated model of information behaviors is presented based on the findings.
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
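A toy sketch of this kind of hierarchical aggregation is given below: class-level counts are combined into readability and writability scores and then into a single program-level index. The metric definitions, weights and formula are illustrative assumptions, not the authors' published metrics.

```python
# Toy hierarchical security-metric aggregation (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    classified_attrs: int          # attributes holding classified data
    public_classified_attrs: int   # of those, how many are publicly accessible
    accessor_methods: int          # methods able to read classified attributes
    mutator_methods: int           # methods able to write classified attributes

def readability(m: ClassMetrics) -> float:
    """Fraction of classified data readable outside the class (lower is better)."""
    if m.classified_attrs == 0:
        return 0.0
    return (m.public_classified_attrs + m.accessor_methods) / (
        m.classified_attrs + m.accessor_methods + 1)

def writability(m: ClassMetrics) -> float:
    """Fraction of classified data writable outside the class (lower is better)."""
    if m.classified_attrs == 0:
        return 0.0
    return (m.public_classified_attrs + m.mutator_methods) / (
        m.classified_attrs + m.mutator_methods + 1)

def security_index(classes: list) -> float:
    """Single program-level value in [0, 1]; higher means less exposure."""
    if not classes:
        return 1.0
    exposure = sum(0.5 * readability(c) + 0.5 * writability(c) for c in classes)
    return 1.0 - exposure / len(classes)

print(security_index([ClassMetrics("Account", 3, 1, 2, 1),
                      ClassMetrics("Logger", 0, 0, 0, 0)]))
```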
Abstract:
Sustainable transport has become a necessity rather than an option in addressing congestion and urban sprawl, whose effects include increased trip lengths and travel times. A more sustainable form of development, known as Transit Oriented Development (TOD), is presumed to offer sustainable travel choices with a reduced need to travel to access daily destinations, by providing a mixture of land uses together with good quality public transport services and infrastructure for walking and cycling. However, assessment of how these developments perform with respect to the travel characteristics of their inhabitants is required. This research proposes a five-step methodology for evaluating the transport impacts of TODs: pre-TOD assessment, traffic and travel data collection, determination of traffic impacts, determination of travel impacts, and drawing outcomes. TODs typically comprise various land uses and hence have various types of users, so assessing the characteristics of all user groups is essential for obtaining an accurate picture of transport impacts. A case study TOD, Kelvin Grove Urban Village (KGUV), located 2 km north-west of the Brisbane central business district in Australia, was selected for implementing the proposed methodology and for evaluating the transport impacts of a TOD from an Australian perspective. The analysis indicated that KGUV generated 27 to 48 percent less traffic than standard published rates specified for homogeneous uses. Further, all user groups of KGUV used more sustainable modes of transport than regional and similarly located suburban users, albeit with longer trip lengths for shopping and education trips. Although the results from this case study support the transport claims of reduced traffic generation and sustainable travel choices made for TODs, further investigation considering different styles, scales and locations of TODs is required. The proposed methodology may be further refined using results from new TODs, and a framework for TOD evaluation may then be developed.
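The reported 27 to 48 percent figures come from comparing observed trip generation against standard published rates for homogeneous uses. A minimal sketch of that comparison is shown below; the dwelling count, standard rate and observed trips are made-up numbers, not the KGUV survey data.

```python
# Minimal trip-generation comparison sketch (placeholder numbers only).
def percent_reduction(observed_trips: float, standard_rate: float, units: float) -> float:
    """Percent by which observed trips fall below the standard-rate estimate."""
    expected = standard_rate * units
    return 100.0 * (expected - observed_trips) / expected

# e.g. 300 dwelling units, a hypothetical published rate of 0.55 peak-hour
# vehicle trips per unit, and 100 observed peak-hour trips
print(round(percent_reduction(observed_trips=100, standard_rate=0.55, units=300), 1))
```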
Abstract:
Defence organisations perform information security evaluations to confirm that electronic communications devices are safe to use in security-critical situations. Such evaluations include tracing all possible dataflow paths through the device, but this process is tedious and error-prone, so automated reachability analysis tools are needed to make security evaluations faster and more accurate. Previous research produced a tool, SIFA, for dataflow analysis of basic digital circuitry, but it cannot analyse dataflow through microprocessors embedded within the circuit, since this depends on the software they run. We have developed a static analysis tool that produces SIFA-compatible dataflow graphs from embedded microcontroller programs written in C. In this paper we present a case study which shows how this new capability supports combined hardware and software dataflow analyses of a security-critical communications device.
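As a much-simplified illustration of extracting dataflow edges from program text, the sketch below records an edge from every identifier on the right-hand side of a simple assignment to the assigned variable. A real front end for C (for example, one built on pycparser) and the SIFA graph format are not reproduced here.

```python
# Simplified dataflow-edge extraction from assignment statements (illustrative).
import re
from collections import defaultdict

def dataflow_edges(source_lines):
    """Return edges (source identifier -> assigned variable) for 'dst = expr;' lines."""
    edges = defaultdict(set)
    assign = re.compile(r"^\s*(\w+)\s*=\s*(.+);")
    for line in source_lines:
        m = assign.match(line)
        if not m:
            continue
        dst, expr = m.group(1), m.group(2)
        for src in re.findall(r"[A-Za-z_]\w*", expr):
            edges[src].add(dst)
    return edges

code = [
    "key = read_keypad();",
    "buffer = key;",
    "tx_reg = buffer;",
]
for src, dsts in dataflow_edges(code).items():
    for dst in sorted(dsts):
        print(f"{src} -> {dst}")
```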
Abstract:
Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
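A minimal illustration of the blocking behaviour described above: for masking and shifting expressions, only some (or none) of the classified input bits can reach the output, which is how an expression can safely downgrade its input. The functions below are an illustrative bit-level model, not the component library presented in the paper.

```python
# Illustrative bit-level flow model for two expression elements.
def bits_propagated_by_and(tainted_bits: int, mask: int) -> int:
    """Which tainted input bits of x can reach y in 'y = x & mask'."""
    return tainted_bits & mask

def bits_propagated_by_shift_right(tainted_bits: int, n: int) -> int:
    """Which tainted bits survive 'y = x >> n' for an 8-bit value."""
    return (tainted_bits >> n) & 0xFF

# All 8 bits of x are classified; '& 0x0F' blocks the high nibble entirely.
tainted = 0b1111_1111
print(bin(bits_propagated_by_and(tainted, 0x0F)))       # 0b1111 -> low nibble only
print(bin(bits_propagated_by_shift_right(tainted, 8)))  # 0b0    -> fully blocked (safe downgrade)
```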
Abstract:
PURPOSE: To identify risk factors for developing complications following treatment of refractory glaucoma with transscleral diode laser cyclophotocoagulation (cyclodiode), in order to improve the safety profile of this treatment modality. METHOD: A retrospective analysis of 72 eyes of 70 patients treated with cyclodiode. RESULTS: The mean pre-treatment intraocular pressure (IOP) was 37.0 mmHg (SD 11.0), with a mean post-treatment reduction in IOP of 19.8 mmHg and a mean IOP at last follow-up of 17.1 mmHg (SD 9.7). The mean total power delivered during treatment was 156.8 Joules (SD 82.7) over a mean of 1.3 treatments (SD 0.6). Sixteen eyes (22.2%) developed complications from the treatment, the most common being hypotony, which occurred in 6 patients, including 4 with neovascular glaucoma. A higher pre-treatment IOP and a higher mean total power delivery were also associated with more complications. CONCLUSIONS: Cyclodiode is an effective treatment option for glaucoma that is refractory to other treatment options. By identifying risk factors for potential complications, cyclodiode treatment can be modified for each patient to improve safety and efficacy.
Abstract:
Static analysis is an approach of checking the source code or compiled code of applications before it is executed. Chess and McGraw state that static analysis promises to identify common coding problems automatically. While manual code checking is also a form of static analysis, software tools are used in most cases to perform the checks. Chess and McGraw additionally claim that good static checkers can help to spot and eradicate common security bugs.
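A hedged example of the kind of common security bug such checkers flag automatically: building an SQL query by string formatting from untrusted input, for which a parameterised query is the usual fix. The code is illustrative; static analysis tools for Python such as Bandit report this general pattern.

```python
# Example of a pattern static checkers commonly flag, and its usual fix.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typically flagged: possible SQL injection via string formatting.
    return conn.execute("SELECT * FROM users WHERE name = '%s'" % username)

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query; the database driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,))
```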
Abstract:
The deformation of rocks is commonly intimately associated with metamorphic reactions. This paper is a step towards understanding the behaviour of fully coupled, deforming, chemically reacting systems by considering a simple example of the problem comprising a single layer system with elastic-power law viscous constitutive behaviour where the deformation is controlled by the diffusion of a single chemical component that is produced during a metamorphic reaction. Analysis of the problem using the principles of non-equilibrium thermodynamics allows the energy dissipated by the chemical reaction-diffusion processes to be coupled with the energy dissipated during deformation of the layers. This leads to strain-rate softening behaviour and the resultant development of localised deformation which in turn nucleates buckles in the layer. All such diffusion processes, in leading to Herring-Nabarro, Coble or “pressure solution” behaviour, are capable of producing mechanical weakening through the development of a “chemical viscosity”, with the potential for instability in the deformation. For geologically realistic strain rates these chemical feed-back instabilities occur at the centimetre to micron scales, and so produce structures at these scales, as opposed to thermal feed-back instabilities that become important at the 100–1000 m scales.
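For reference, a generic uniaxial elastic-power law viscous (Maxwell-type) constitutive relation of the kind named above can be written as below; the paper's coupled chemical-mechanical formulation, which adds the reaction-diffusion contribution to the dissipation, is not reproduced here.

```latex
\dot{\varepsilon} = \frac{\dot{\sigma}}{E} + A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right)
```

where E is the elastic modulus, A and n are power-law creep parameters, Q an activation energy, R the gas constant and T temperature.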
Abstract:
In Australia, as in some other western nations, governments impose accountability measures on educational institutions (Earl, 2005). One such accountability measure is the National Assessment Program - Literacy and Numeracy (NAPLAN), from which high-stakes assessment data are generated. In this article, a practical method of data analysis known as the Over Time Assessment Data Analysis (OTADA) is offered as an analytical process by which schools can monitor their current and over-time performance. This analysis, developed by the author, is currently used extensively in schools throughout Queensland. By analysing in this way, teachers, and in particular principals, can obtain a quick and insightful performance overview. For those seeking to track the achievements and progress of year-level cohorts, the OTADA should be considered.
Abstract:
Reliability analysis is crucial to reducing unexpected downtime, severe failures and ever-tightening maintenance budgets for engineering assets. Hazard-based reliability methods are of particular interest as hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the assumed form of the covariate effects on the hazard holds. These two assumptions may be difficult to satisfy and can therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations arising from these two assumptions in statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in practice, due to inconsistent measuring frequencies of multiple covariates, sensor failure and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to include handling of incomplete covariates as an integral part. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump, and the results demonstrate that the new approach outperforms the typical approaches to handling incomplete covariates. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. Commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate results.
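A minimal sketch of the neural-network hazard idea is given below: a small feedforward network maps (scaled) condition covariates to a non-negative hazard value. The architecture, scaling and covariate names are illustrative assumptions and omit the thesis's training procedure and incomplete-covariate handling.

```python
# Minimal feedforward hazard-network sketch (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(42)

def softplus(z):
    """Numerically stable softplus, used to keep the hazard non-negative."""
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0)

class HazardNet:
    def __init__(self, n_covariates, hidden=8):
        self.W1 = rng.normal(0, 0.5, (n_covariates, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def hazard(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return softplus(h @ self.W2 + self.b2)  # non-negative hazard output

# Covariates, e.g. [vibration RMS, bearing temperature, operating hours] (hypothetical).
net = HazardNet(n_covariates=3)
x = np.array([[0.8, 65.0, 1.2e4]])
x_scaled = x / np.array([1.0, 100.0, 1e4])  # crude scaling for illustration
print(net.hazard(x_scaled).item())
```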