350 results for: Bezier curves, Bernstein polynomials, graphical user interface (GUI)
Abstract:
Fire resistance has become an important part of structural design due to the ever-increasing loss of property and lives every year. Conventionally, the fire rating of load bearing Light gauge Steel Frame (LSF) walls is determined using standard fire tests based on the time-temperature curve given in ISO 834 [1]. Full-scale fire testing based on this standard time-temperature curve originated from the application of wood burning furnaces in the early 1900s, and it is questionable whether it truly represents the fuel loads in modern buildings. Hence, a detailed fire research study into the performance of LSF walls was undertaken using real design fires based on Eurocode parametric curves [2] and Barnett’s ‘BFD’ curves [3]. This paper presents the development of these real fire curves and the results of a full-scale experimental study into the structural and fire behaviour of load bearing LSF stud wall systems.
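For reference, the ISO 834 standard time-temperature curve referred to above is a single fixed logarithmic curve, \( T(t) = T_0 + 345\,\log_{10}(8t + 1) \), with the furnace temperature \(T\) in °C, the elapsed time \(t\) in minutes, and the ambient temperature \(T_0\) usually taken as 20 °C. The Eurocode parametric and BFD curves instead define families of design fires that depend on fuel load and ventilation conditions.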
Abstract:
Objective: To use our Bayesian method of motor unit number estimation (MUNE) to evaluate lower motor neuron degeneration in ALS. Methods: In subjects with ALS, we performed serial MUNE studies. We examined the repeatability of the test and then determined whether the loss of MUs was better fitted by an exponential or a Weibull distribution. Results: The decline in motor unit (MU) numbers was well fitted by an exponential decay curve. We calculated the half-life of MUs in the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and/or extensor digitorum brevis (EDB) muscles. The mean half-life of the MUs of the ADM muscle was greater than those of the APB or EDB muscles. The half-life of MUs was less in the ADM muscle of subjects with upper limb onset than in those with lower limb onset. Conclusions: The rate of loss of lower motor neurons in ALS is exponential; the motor units of the APB decay more quickly than those of the ADM muscle, and the rate of loss of motor units is greater at the site of onset of disease. Significance: This shows that the Bayesian MUNE method is useful in following the course and exploring the clinical features of ALS.
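As an illustrative sketch (not the authors' Bayesian MUNE code), fitting an exponential decay to serial MU counts and converting the decay rate to a half-life could look like the following; the sample data are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical serial MUNE data: months since first test vs. estimated MU count.
t = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
mu_counts = np.array([120.0, 95.0, 78.0, 60.0, 49.0])

def exp_decay(t, n0, lam):
    """Exponential decay model N(t) = N0 * exp(-lambda * t)."""
    return n0 * np.exp(-lam * t)

(n0, lam), _ = curve_fit(exp_decay, t, mu_counts, p0=(mu_counts[0], 0.05))
half_life = np.log(2) / lam  # months for the MU count to halve
print(f"N0 = {n0:.1f}, lambda = {lam:.3f}/month, half-life = {half_life:.1f} months")
```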
Abstract:
Background: Extreme temperatures are associated with cardiovascular disease (CVD) deaths. Previous studies have investigated the relative CVD mortality risk of temperature, but this risk is heavily influenced by deaths in frail elderly persons. To better estimate the burden of extreme temperatures, we estimated their effects on years of life lost due to CVD. Methods and Results: The data were daily observations of weather and CVD mortality for Brisbane, Australia, between 1996 and 2004. We estimated the association between daily mean temperature and years of life lost due to CVD, after adjusting for trend, season, day of the week, and humidity. To examine the non-linear and delayed effects of temperature, a distributed lag non-linear model was used. The model’s residuals were examined to investigate whether there were any added effects due to cold spells and heat waves. The exposure-response curve between temperature and years of life lost was U-shaped, with the lowest years of life lost at 24 °C. The curve rose more sharply at extremes of heat than of cold. The effect of cold peaked two days after exposure, whereas the greatest effect of heat occurred on the day of exposure. There were significant added effects of heat waves on years of life lost. Conclusions: Increased years of life lost due to CVD are associated with both cold and hot temperatures. Research on specific interventions is needed to reduce temperature-related years of life lost from CVD deaths.
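As a drastically simplified stand-in for the distributed lag non-linear model (ignoring lags and confounders; data are synthetic), a U-shaped exposure-response curve and its minimum-loss temperature can be sketched like this:

```python
import numpy as np

# Fit a quadratic exposure-response curve to daily mean temperature vs. daily
# years of life lost (YLL), and locate the minimum-loss temperature.
rng = np.random.default_rng(1)
temp = rng.uniform(5, 35, 1000)                              # daily mean temperature (degC)
yll = 50 + 0.6 * (temp - 24) ** 2 + rng.normal(0, 5, 1000)   # synthetic U-shaped response

c2, c1, c0 = np.polyfit(temp, yll, deg=2)
t_min = -c1 / (2 * c2)                                       # vertex of the fitted parabola
print(f"Temperature of minimum YLL ~ {t_min:.1f} degC")
```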
Abstract:
This paper outlines a feasible scheme to extract the deck trend when a rotary-wing unmanned aerial vehicle (RUAV) approaches an oscillating deck. An extended Kalman filter (EKF) is developed to fuse measurements from multiple sensors for effective estimation of the unknown deck heave motion. Also, a recursive Prony Analysis (PA) procedure is proposed to implement online curve-fitting of the estimated heave motion. The proposed PA constructs an appropriate model with parameters identified using the forgetting factor recursive least square (FFRLS) method. The deck trend is then extracted by separating dominant modes. Performance of the proposed procedure is evaluated using real ship motion data, and simulation results justify the suitability of the proposed method for safe landing of RUAVs operating in a maritime environment.
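A minimal sketch of a forgetting factor recursive least squares (FFRLS) update of the kind used for the online parameter identification (generic textbook form, not the authors' Prony implementation; variable names and the test signal are illustrative):

```python
import numpy as np

class FFRLS:
    """Forgetting-factor recursive least squares for y_k = phi_k^T theta + e_k."""
    def __init__(self, n_params, forgetting=0.98, p0=1e3):
        self.theta = np.zeros(n_params)          # parameter estimate
        self.P = np.eye(n_params) * p0           # covariance matrix
        self.lam = forgetting                    # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)    # gain vector
        err = y - phi @ self.theta               # a priori prediction error
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Example: identify a 2-tap AR model of a noisy sinusoid (stand-in for heave motion).
rng = np.random.default_rng(0)
y = np.sin(0.2 * np.arange(300)) + 0.05 * rng.standard_normal(300)
est = FFRLS(n_params=2)
for k in range(2, len(y)):
    est.update(phi=[y[k - 1], y[k - 2]], y=y[k])
print("AR coefficients:", est.theta)
```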
Abstract:
Context: Postprandial dysmetabolism is emerging as an important cardiovascular risk factor. Augmentation index (AIx) is a measure of systemic arterial stiffness and independently predicts cardiovascular outcome. Objective: The objective of this study was to assess the effect of a standardized high-fat meal on metabolic parameters and AIx in 1) lean, 2) obese nondiabetic, and 3) subjects with type 2 diabetes mellitus (T2DM). Design and Setting: Male subjects (lean, n = 8; obese, n = 10; and T2DM, n = 10) were studied for 6 h after a high-fat meal and water control. Glucose, insulin, triglycerides, and AIx (radial applanation tonometry) were measured serially to determine the incremental area under the curve (iAUC). Results: AIx decreased in all three groups after a high-fat meal. A greater overall postprandial reduction in AIx was seen in lean and T2DM compared with obese subjects (iAUC, 2251 ± 1204, 2764 ± 1102, and 1187 ± 429 %·min, respectively; P < 0.05). The time to return to baseline AIx was significantly delayed in subjects with T2DM (297 ± 68 min) compared with lean subjects (161 ± 88 min; P < 0.05). There was a significant correlation between iAUC AIx and iAUC triglycerides (r = 0.50; P < 0.05). Conclusions: Obesity is associated with an attenuated overall postprandial decrease in AIx. Subjects with T2DM have a preserved, but significantly prolonged, reduction in AIx after a high-fat meal. The correlation between AIx and triglycerides suggests that postprandial dysmetabolism may impact on vascular dynamics. The markedly different response observed in the obese subjects compared with those with T2DM was unexpected and warrants additional evaluation.
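A hedged sketch of one common way an incremental area under the curve (iAUC) is computed, as the trapezoidal area between the response curve and its fasting baseline; the readings below are invented, not the study's data:

```python
import numpy as np

def incremental_auc(times_min, values, baseline=None):
    """Trapezoidal area between the response curve and its baseline value."""
    t = np.asarray(times_min, dtype=float)
    v = np.asarray(values, dtype=float)
    if baseline is None:
        baseline = v[0]                       # fasting (time-zero) value
    return np.trapz(v - baseline, t)

# Illustrative postprandial triglyceride readings over 6 h (mmol/L).
t = [0, 60, 120, 180, 240, 300, 360]
tg = [1.2, 1.6, 2.1, 2.4, 2.2, 1.9, 1.5]
print(f"iAUC = {incremental_auc(t, tg):.0f} mmol/L·min")
```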
Abstract:
Purpose – The purpose of this paper is to summarise a successfully defended doctoral thesis, outlining its scope and the main issues raised so that readers undertaking studies in the same or connected areas may be aware of current contributions to the topic. The secondary aims are to frame the completed thesis in the context of doctoral-level research in project management, as well as to offer ideas for further investigation that would serve to extend scientific knowledge on the topic. Design/methodology/approach – Research reported in this paper is based on a quantitative study using inferential statistics aimed at better understanding the actual and potential usage of earned value management (EVM) as applied to external projects under contract. Theories uncovered during the literature review were hypothesized and tested using experiential data collected from 145 EVM practitioners with direct experience on one or more external projects under contract that applied the methodology. Findings – The results of this research suggest that EVM is an effective project management methodology. The principles of EVM were shown to be significant positive predictors of project success on contracted efforts, and to be a relatively greater positive predictor of project success when using fixed-price versus cost-plus (CP) type contracts. Moreover, EVM's work-breakdown structure (WBS) utility was shown to contribute positively to the formation of project contracts. The contribution was not significantly different between fixed-price and CP contracted projects, with exceptions in the areas of schedule planning and payment planning. EVM's “S” curve benefited the administration of project contracts, and the contribution of the S-curve was not significantly different between fixed-price and CP contracted projects. Furthermore, EVM metrics were shown to be important contributors to the administration of project contracts. The relative contribution of EVM metrics to projects under fixed-price versus CP contracts was not significantly different, with one exception in the area of evaluating and processing payment requests. Practical implications – These results have important implications for project practitioners and EVM advocates, as well as corporate and governmental policy makers. EVM should be considered for all projects, not only for its positive contribution to project contract development and administration but also for its contribution to project success, regardless of contract type. Contract type should not be the sole determining factor in the decision whether or not to use EVM. More particularly, the more fixed the contracted project cost, the more the principles of EVM explain the success of the project. EVM mechanics should also be used in all projects regardless of contract type. Payment planning using a WBS should be emphasized in fixed-price contracts using EVM in order to help mitigate performance risk. Schedule planning using a WBS should be emphasized in CP contracts using EVM in order to help mitigate financial risk. Similarly, EVM metrics should be emphasized in fixed-price contracts in evaluating and processing payment requests. Originality/value – This paper provides a summary of cutting-edge research work and a link to the published thesis that researchers can use to help them understand how the research methodology was applied, as well as how it can be extended.
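For readers unfamiliar with the EVM metrics referenced above, a minimal sketch of the core calculations using the standard definitions (not taken from the thesis; the figures are invented):

```python
# Core earned value management (EVM) metrics from planned value (PV),
# earned value (EV) and actual cost (AC), using the standard definitions.
def evm_metrics(pv, ev, ac):
    return {
        "schedule_variance": ev - pv,   # SV > 0 means ahead of schedule
        "cost_variance": ev - ac,       # CV > 0 means under budget
        "spi": ev / pv,                 # schedule performance index
        "cpi": ev / ac,                 # cost performance index
    }

# Illustrative status at month 6 of a contracted project (values in $k).
print(evm_metrics(pv=480, ev=450, ac=500))
# -> SV = -30 (behind schedule), CV = -50 (over budget), SPI ~ 0.94, CPI = 0.90
```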
Abstract:
There is a growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning. To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, utilizing the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. The linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the range of EDs. Changes in MU number did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the MV CBCT ED calibration, as was the addition of solid water surrounding the phantom. Dose distributions from treatment plans calculated on simulated image data from a 15 MU head and neck reconstruction filter MV CBCT image showed small and clinically insignificant differences, whether the MV CBCT ED calibration curve matched the image data parameters or was derived from a 15 MU pelvis reconstruction filter. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to minimise uncertainties in dose reporting, parameter-specific MV CBCT ED calibration measurements could be carried out.
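A hedged sketch of constructing a pixel-value to electron density calibration curve by linear fit, as suggested by the reported linearity (the values below are invented, not the study's measurements):

```python
import numpy as np

# Illustrative mean MV CBCT pixel values measured in phantom inserts, paired
# with their known relative electron densities (numbers are invented).
pixel_values = np.array([-680, -480, -90, 0, 250, 480, 700])
electron_density = np.array([0.29, 0.50, 0.95, 1.00, 1.28, 1.47, 1.71])

# Linear calibration ED = a * pixel + b.
a, b = np.polyfit(pixel_values, electron_density, deg=1)

def pixel_to_ed(pixel):
    return a * pixel + b

print(f"ED = {a:.5f} * pixel + {b:.3f}; ED(100) = {pixel_to_ed(100):.3f}")
```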
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The substation Load Tap Changer tap is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function is composed of the distribution line loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple discrete levels. The constraints are the bus voltages and the feeder currents, which must be maintained within their standard ranges. For validation of the proposed method, two case studies are tested. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
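As a rough sketch of the loss-cost term described above, assuming the load duration curve is approximated by a few discrete levels and the base-case loss is scaled with the square of the loading (an assumption of this sketch, not the paper's formulation; the numbers are illustrative):

```python
# Approximate annual energy-loss cost from a discretised load duration curve.
# Loss at each level is scaled from a base-case power-flow loss by (load/base)^2,
# since series losses vary roughly with the square of the current.
BASE_LOSS_KW = 120.0      # loss from a power flow at peak (base) load, illustrative
ENERGY_PRICE = 0.06       # $/kWh, illustrative

# (load level as a fraction of peak, hours per year at that level)
load_levels = [(1.00, 1000), (0.80, 3000), (0.60, 3000), (0.40, 1760)]

annual_cost = sum(BASE_LOSS_KW * frac**2 * hours * ENERGY_PRICE
                  for frac, hours in load_levels)
print(f"Estimated annual loss cost: ${annual_cost:,.0f}")
```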
Abstract:
Cold-formed steel beams are increasingly used as floor joists and bearers in buildings, and their behaviour and moment capacities are often influenced by lateral-torsional buckling. With the increasing usage of cold-formed steel beams, their fire safety design has become an important issue. Fire design rules are commonly based on past research on hot-rolled steel beams. Hence, a detailed parametric study was undertaken using validated finite element models to investigate the lateral-torsional buckling behaviour of simply supported cold-formed steel lipped channel beams subjected to uniform bending at uniform elevated temperatures. The moment capacity results were compared with the predictions from the available ambient temperature and fire design rules, and suitable recommendations were made. European fire design rules were found to be over-conservative, while the ambient temperature design rules could not be applied on the basis of a single buckling curve. Hence, a new design method was proposed that includes the important non-linear stress-strain characteristics observed for cold-formed steels at elevated temperatures. Comparison with numerical moment capacities demonstrated the accuracy of the new design method. This paper presents the details of the parametric study, comparisons with current design rules and the new design rules proposed in this research for lateral-torsional buckling of cold-formed steel lipped channel beams at elevated temperatures.
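For background only (this is the ambient-temperature European check referred to above, not the new design method proposed in the paper), the general-case lateral-torsional buckling reduction of EN 1993-1-1 takes the form

\[ \Phi_{LT} = 0.5\left[1 + \alpha_{LT}\left(\bar{\lambda}_{LT} - 0.2\right) + \bar{\lambda}_{LT}^{2}\right], \qquad \chi_{LT} = \frac{1}{\Phi_{LT} + \sqrt{\Phi_{LT}^{2} - \bar{\lambda}_{LT}^{2}}} \le 1, \]

where \(\alpha_{LT}\) is the imperfection factor selecting the buckling curve and \(\bar{\lambda}_{LT} = \sqrt{W_y f_y / M_{cr}}\) is the non-dimensional slenderness.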
Abstract:
The ability to forecast machinery health is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models which attempt to forecast machinery health based on condition data such as vibration measurements. This paper demonstrates how the population characteristics and condition monitoring data (both complete and suspended) of historical items can be integrated for training an intelligent agent to predict asset health multiple steps ahead. The model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan–Meier estimator and a degradation-based failure probability density function estimator. The trained network is capable of estimating future survival probabilities when a series of asset condition readings is input. The output survival probabilities collectively form an estimated survival curve. Pump data from a pulp and paper mill were used for model validation and comparison. The results indicate that the proposed model can predict more accurately and further ahead than similar models which neglect population characteristics and suspended data. This work presents a compelling concept for longer-range fault prognosis utilising available information more fully and accurately.
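A minimal sketch of the Kaplan–Meier product-limit estimator used to generate such survival targets, in its generic form handling suspended (censored) histories; the pump running hours are invented:

```python
from collections import Counter

def kaplan_meier(failure_times, censored_times):
    """Product-limit survival estimate S(t) at each observed failure time."""
    events = Counter(failure_times)
    censored = Counter(censored_times)
    at_risk = len(failure_times) + len(censored_times)
    survival, s = {}, 1.0
    for t in sorted(set(failure_times) | set(censored_times)):
        d = events.get(t, 0)
        if d:
            s *= 1.0 - d / at_risk          # multiply by conditional survival
            survival[t] = s
        at_risk -= d + censored.get(t, 0)   # remove failures and suspensions
    return survival

# Pump running hours: failures vs. suspended (still running / removed) histories.
failures = [1200, 1500, 1500, 2100, 2600]
suspensions = [1300, 1800, 2400]
for t, s in kaplan_meier(failures, suspensions).items():
    print(f"S({t} h) = {s:.3f}")
```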
Abstract:
We evaluated the Minnesota Multiphasic Personality Inventory-Second Edition (MMPI-2) Response Bias Scale (RBS). Archival data from 83 individuals who were referred for neuropsychological assessment with no formal diagnosis (n = 10), following a known or suspected traumatic brain injury (n = 36), with a psychiatric diagnosis (n = 20), or with a history of both trauma and a psychiatric condition (n = 17) were retrieved. The criteria for malingered neurocognitive dysfunction (MNCD) were applied, and two groups of participants were formed: poor effort (n = 15) and genuine responders (n = 68). Consistent with previous studies, the difference in scores between groups was greatest for the RBS (d = 2.44), followed by two established MMPI-2 validity scales, F (d = 0.25) and K (d = 0.23), and strong significant correlations were found between RBS and F (rs = .48) and between RBS and K (r = −.41). When MNCD group membership was predicted using logistic regression, the RBS failed to add incrementally to F. In a separate regression to predict group membership, K added significantly to the RBS. Receiver operating characteristic (ROC) curve analysis revealed a nonsignificant area under the curve statistic, and at the ideal cutoff for this sample (>12), specificity was moderate (.79), sensitivity was low (.47), and positive and negative predictive power values at a 13% base rate were .25 and .91, respectively. Although the results of this study require replication because of a number of limitations, this study makes an important first attempt to report RBS classification accuracy statistics for predicting poor effort at a range of base rates.
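As a quick sketch of how the quoted predictive power values follow from sensitivity, specificity and base rate via Bayes' rule (this generic calculation reproduces the reported .25 and .91 at a 13% base rate):

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Positive and negative predictive value via Bayes' rule."""
    ppv = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate))
    npv = (specificity * (1 - base_rate)) / (
        specificity * (1 - base_rate) + (1 - sensitivity) * base_rate)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.47, specificity=0.79, base_rate=0.13)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # -> PPV = 0.25, NPV = 0.91
```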
Abstract:
This paper presents a novel evolutionary computation approach to three-dimensional path planning for unmanned aerial vehicles (UAVs) with tactical and kinematic constraints. A genetic algorithm (GA) is modified and extended for path planning. Two GAs are seeded at the initial and final positions with a common objective to minimise their distance apart under given UAV constraints. This is accomplished by the synchronous optimisation of subsequent control vectors. The proposed evolutionary computation approach is called the synchronous genetic algorithm (SGA). The sequence of control vectors generated by the SGA constitutes a near-optimal path plan. The resulting path plan exhibits no discontinuity when transitioning from curved to straight trajectories. Experiments and results show that the paths generated by the SGA are within 2% of the optimal solution. Such a path planner, when implemented on a hardware accelerator such as a field programmable gate array chip, can be used in the UAV as an on-board replanner, as well as in ground station systems for assisting in high-precision planning and modelling of mission scenarios.
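As a toy, single-population sketch of evolving a sequence of heading-change control commands toward a goal (illustrative only; it omits the paper's dual, synchronised populations and 3-D kinematic constraints, and all names and limits below are invented):

```python
import random, math

STEP, N_CMDS, MAX_TURN = 1.0, 20, math.radians(15)  # illustrative constraints

def rollout(cmds, start=(0.0, 0.0), heading=0.0):
    """Integrate heading-change commands into a 2-D endpoint."""
    x, y = start
    for d in cmds:
        heading += d
        x += STEP * math.cos(heading)
        y += STEP * math.sin(heading)
    return x, y

def fitness(cmds, goal=(15.0, 5.0)):
    """Gap between the path endpoint and the goal (smaller is better)."""
    x, y = rollout(cmds)
    return math.hypot(goal[0] - x, goal[1] - y)

def evolve(pop_size=60, generations=200):
    pop = [[random.uniform(-MAX_TURN, MAX_TURN) for _ in range(N_CMDS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_CMDS)
            child = a[:cut] + b[cut:]                    # one-point crossover
            i = random.randrange(N_CMDS)                 # single-gene mutation
            child[i] = random.uniform(-MAX_TURN, MAX_TURN)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("endpoint:", rollout(best), "gap:", fitness(best))
```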
Abstract:
In this study, natural convection heat transfer and buoyancy driven flows have been investigated in a right-angled triangular enclosure. The heater is located on the bottom wall, while the inclined wall is colder and the remaining walls are maintained adiabatic. The governing equations of natural convection are solved through the finite volume approach, in which buoyancy is modeled via the Boussinesq approximation. Effects of different parameters such as Rayleigh number, aspect ratio, Prandtl number and heater location are considered. Results show that heat transfer increases when the heater is moved toward the right corner of the enclosure. It is also revealed that increasing the Rayleigh number increases the strength of the free convection regime and consequently increases the heat transfer rate. Moreover, an enclosure with a larger aspect ratio has a larger Nusselt number. For better insight, streamlines and isotherms are shown.
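For reference, the dimensionless groups varied in the study are defined in the standard way (definitions only, not specific values from the paper):

\[ Ra = \frac{g\,\beta\,(T_h - T_c)\,L^{3}}{\nu\,\alpha}, \qquad Pr = \frac{\nu}{\alpha}, \qquad Nu = \frac{h\,L}{k}, \]

where \(g\) is gravitational acceleration, \(\beta\) the thermal expansion coefficient, \(T_h - T_c\) the temperature difference between the heated and cooled walls, \(L\) a characteristic length, \(\nu\) the kinematic viscosity, \(\alpha\) the thermal diffusivity, \(h\) the heat transfer coefficient, and \(k\) the fluid thermal conductivity.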
Abstract:
A numerical study is carried out using large eddy simulation to study the heat and toxic gases released from fires in real road tunnels. Due to tunnel fire disasters in the previous decade, creating safe and reliable ventilation designs has attracted increasing attention from researchers. In this research, a real tunnel with a 10 MW fire (approximately the heat release rate of a burning bus) at the middle of the tunnel is simulated using FDS (Fire Dynamics Simulator) for different ventilation velocities. Carbon monoxide concentration and temperature vertical profiles are shown for various locations to explore the flow field. It is found that, as the longitudinal ventilation velocity increases, the vertical profile gradients of both CO concentration and smoke temperature are reduced. However, a relatively large longitudinal ventilation velocity leads to a high similarity between the vertical profile of CO volume concentration and that of temperature rise.
Abstract:
The residence time distribution (RTD) is a crucial parameter when treating engine exhaust emissions with a Dielectric Barrier Discharge (DBD) reactor. In this paper, the residence time of such a reactor is investigated using finite element based software, COMSOL Multiphysics 4.3. Non-thermal plasma (NTP) discharge is being introduced as a promising method for pollutant emission reduction, and DBD is one of the most advantageous NTP technologies. In a two-cylinder co-axial DBD reactor, tubes are placed between two electrodes and flow passes through the annulus between these barrier tubes. If the mean residence time increases in a DBD reactor, there will be a corresponding increase in reaction time and, consequently, the pollutant removal efficiency can increase. However, pollutant formation can also occur during an increased mean residence time, and so the proportion of fluid that may remain for periods significantly longer than the mean residence time is of great importance. In this study, first, the residence time distribution is calculated for the standard reactor used by the authors for ultrafine particle (10-500 nm) removal. Then, different geometries and various inlet velocities are considered. Finally, for selected cases, roughness elements are added inside the reactor and the residence time is calculated. These results will form the basis for a COMSOL plasma and CFD module investigation.
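For reference, the residence time distribution and mean residence time obtained from a pulse-tracer concentration \(C(t)\) at the reactor outlet follow the standard definitions (generic relations, not COMSOL-specific output):

\[ E(t) = \frac{C(t)}{\int_{0}^{\infty} C(t)\,dt}, \qquad \bar{t} = \int_{0}^{\infty} t\,E(t)\,dt, \]

where \(E(t)\) is the fraction of fluid leaving the reactor per unit time after having spent time \(t\) inside, and \(\bar{t}\) is the mean residence time.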