933 results for Set of Weak Stationary Dynamic Actions
Abstract:
As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models were proposed by the Joint Working Group (JWG) as the result of a series of analytical and experimental studies. The crowd models are expected to yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might not accurately represent the occupants' effect on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. SAP2000 was used to create the finite element models and to run the analyses; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the effect on the structure of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
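To make the modeling idea concrete: treating a passive crowd as an added degree of freedom rather than added mass amounts to attaching a human mass-spring-damper to the structure's dominant mode. The sketch below shows such a two-degree-of-freedom idealization in Python; all numerical values are illustrative placeholders, not parameters from the JWG recommendations or the Bucknell tests.

```python
import numpy as np

# Illustrative 2-DOF idealization of an occupied structure: the crowd is modeled
# as a single degree of freedom (m_h, k_h, c_h) attached to the structure's
# dominant mode (m_s, k_s, c_s).  All values are placeholders, not study data.
m_s = 5000.0
k_s = m_s * (2 * np.pi * 6.0) ** 2            # empty-structure mode near 6 Hz
c_s = 2 * 0.01 * m_s * (2 * np.pi * 6.0)      # ~1% structural damping
m_h = 800.0
k_h = m_h * (2 * np.pi * 5.0) ** 2            # assumed crowd natural frequency 5 Hz
c_h = 2 * 0.30 * m_h * (2 * np.pi * 5.0)      # assumed crowd damping ratio 30%

M = np.diag([m_s, m_h])
K = np.array([[k_s + k_h, -k_h], [-k_h, k_h]])
C = np.array([[c_s + c_h, -c_h], [-c_h, c_h]])

# State-space eigenvalue problem gives the frequencies and damping ratios of
# the combined human-structure system (two modes instead of one).
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
lam = lam[np.imag(lam) > 0]                   # one eigenvalue per mode
freqs = np.abs(lam) / (2 * np.pi)             # natural frequencies [Hz]
zetas = -np.real(lam) / np.abs(lam)           # damping ratios [-]
for f, z in sorted(zip(freqs, zetas)):
    print(f"mode: f = {f:5.2f} Hz, zeta = {z:5.3f}")
```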
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a 'Simple Committee' technique that averaged predictions from a set of 10 input spaces pre-selected using the training data, and a 'Minimum Variance Committee' technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), the Simple Committee Technique and the Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
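As a rough illustration of the two committee ideas (not the dissertation's actual implementation), the sketch below averages predictions across transformed input spaces for the Simple Committee and, for the Minimum Variance Committee, keeps only the input spaces where the three modeling methods disagree least before averaging; the prediction array is placeholder data.

```python
import numpy as np

def simple_committee(predictions):
    """Average predictions over a pre-selected set of input spaces.

    `predictions` has shape (n_spaces, n_models): one smoke prediction per
    (transformed input space, modeling method) pair for one operating point.
    """
    return predictions.mean()

def min_variance_committee(predictions, n_keep=3):
    """Keep only the input spaces where the modeling methods agree most
    (smallest across-model variance), then average those predictions."""
    disagreement = predictions.var(axis=1)        # variance across models per space
    best = np.argsort(disagreement)[:n_keep]      # spaces with least disagreement
    return predictions[best].mean()

# Toy example: 10 transformed input spaces x 3 methods (regression, NN, k-NN).
# Values are placeholders, not GT-Power or engine data.
rng = np.random.default_rng(0)
preds = 1.5 + 0.2 * rng.standard_normal((10, 3))
print(simple_committee(preds), min_variance_committee(preds))
```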
Abstract:
Vascular endothelial growth factor (VEGF) has potent angiogenic and neuroprotective effects in the ischemic brain. Its effect on axonal plasticity and neurological recovery in the post-acute stroke phase was unknown. Using behavioral tests combined with anterograde tract tracing studies and with immunohistochemical and molecular biological experiments, we examined effects of a delayed i.c.v. delivery of recombinant human VEGF(165), starting 3 days after stroke, on functional neurological recovery, corticorubral plasticity and inflammatory brain responses in mice submitted to 30 min of middle cerebral artery occlusion. We herein show that the slowly progressive functional improvements of motor grip strength and coordination, which are induced by VEGF, are accompanied by enhanced sprouting of contralesional corticorubral fibres that branch off the pyramidal tract to cross the midline and innervate the ipsilesional parvocellular red nucleus. Infiltrates of CD45+ leukocytes were noticed in the ischemic striatum of vehicle-treated mice that closely corresponded to areas exhibiting Iba-1+ activated microglia. VEGF attenuated the CD45+ leukocyte infiltrates at 14 but not 30 days post ischemia and diminished the microglial activation. Notably, this anti-inflammatory effect of VEGF was associated with a downregulation of a broad set of inflammatory cytokines and chemokines in both brain hemispheres. These data suggest a link between VEGF's immunosuppressive and plasticity-promoting actions that may be important for successful brain remodeling. Accordingly, growth factors with anti-inflammatory action may be promising therapeutics in the post-acute stroke phase.
Experimental Evaluation of the Influence of Human-Structure Interaction for Vibration Serviceability
Abstract:
The effects of human-structure interaction on the dynamic performance of occupied structures have long been observed. The inclusion of the effects of human-structure interaction is important to ensure that the dynamic response of a structure is not overestimated. Previous observations, both in service and in the laboratory, have yielded results indicating that the effects depend on the natural frequency of the structure, the posture of the occupants, and the mass ratio of the occupants to the structure. These results are noteworthy, but are limited in their application because the data are sparse and are only pertinent to a specific set of characteristics identified in a given study. To examine these characteristics simultaneously and consistently, an experimental test structure was designed with variable properties to replicate a variety of configurations within a controlled setting, focusing on the effects of passive occupants. Experimental modal analysis techniques were applied to both the empty and occupied conditions of the structure, and the dynamic properties associated with each condition were compared. Results similar to previous investigations were observed, including both an increase and a decrease in the natural frequency of the occupied structure with respect to the empty structure, as well as the identification of a second mode of vibration. The damping of the combined system was higher for all configurations. Overall, this study provides a broad data set representing a wide array of configurations. The experimental results of this study were used to assess current recommendations for the dynamic properties of a crowd used to analytically predict the effects of human-structure interaction. The experimental results were also used to select a set of properties for passive, standing occupants and to develop a new model that more accurately represents the behavior of the human-structure system as experimentally measured in this study.
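For context on how such dynamic properties are typically extracted, the sketch below estimates a natural frequency and damping ratio from a frequency response function using the half-power bandwidth method, one common experimental modal analysis technique; the FRF here is synthetic and the method is shown only as an illustration, not necessarily the estimator used in this study.

```python
import numpy as np

def half_power_estimate(freq, frf_mag):
    """Estimate natural frequency and damping ratio from a single-mode FRF
    magnitude using the half-power (-3 dB) bandwidth method."""
    i_pk = np.argmax(frf_mag)
    f_n, peak = freq[i_pk], frf_mag[i_pk]
    half = peak / np.sqrt(2.0)
    lo = freq[:i_pk][frf_mag[:i_pk] <= half][-1]   # last point below half power, left of peak
    hi = freq[i_pk:][frf_mag[i_pk:] <= half][0]    # first point below half power, right of peak
    zeta = (hi - lo) / (2.0 * f_n)
    return f_n, zeta

# Synthetic single-DOF FRF (placeholder values) to exercise the function.
f = np.linspace(2, 12, 2001)
fn_true, z_true = 6.0, 0.03
H = 1.0 / np.sqrt((1 - (f / fn_true) ** 2) ** 2 + (2 * z_true * f / fn_true) ** 2)
print(half_power_estimate(f, H))
```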
Abstract:
'Weak senses' are a specific type of semantic information as opposed to assertions and presuppositions. The universal trait of weak senses is that they assume 'if' modality in negative contexts. In addition, they exhibit several other diagnostic properties: for example, they fill at least one of their valency places with a semantic element sensitive to negation (i.e. with an assertion or another weak sense), they normally do not fall within the scope of functors, do not play any role in causal relations, and resist intensification. As weak senses are widespread in lexical, grammatical and referential semantics, this notion holds the clue to phenomena as diverse as the oppositions little - a little, few - a few, edva ('hardly') - cut' ('slightly'), where a little, a few, and cut' convey 'weakly' approximately what little, few, and edva do in an assertive way, the semantics of the Russian perfect aspect, and the formation rules for conjunction strings. Zeldovich outlines a typology of weak senses, the main distinction being between weak senses unilaterally dependent upon the truthfulness of what they saturate their valency with, and weak senses exerting their own influence on the main situation. The latter, called non-trivial, are instantiated by existential quantifiers involved in the semantics of indefinite pronouns, iterative verbs, etc.
Abstract:
Concern over possible adverse effects of endocrine-disrupting compounds on fish has prompted the development of appropriate testing methods. In vitro screening assays may provide initial information on the endocrine activities of a test compound and thereby direct and optimize subsequent testing. Induction of vitellogenin (VTG) is used as a biomarker of exposure of fish to estrogen-active substances. Since VTG induction can be measured not only in vivo but also in fish hepatocytes in vitro, the use of the VTG induction response in isolated fish liver cells has been suggested as an in vitro screen for identifying estrogen-active substances. The main advantages of the hepatocyte VTG assay are considered to be its ability to detect effects of estrogenic metabolites, since hepatocytes in vitro remain metabolically competent, and its ability to detect both estrogenic and anti-estrogenic effects. In this article, we critically review the current knowledge on the VTG response of cultured fish hepatocytes to (anti)estrogenic substances. In particular, we discuss the sensitivity, specificity, and variability of the VTG hepatocyte assay. In addition, we review the available data on culture factors influencing basal and induced VTG production, the response to natural and synthetic estrogens as well as to xenoestrogens, the detection of indirect estrogens, and the sources of assay variability. VTG induction in cultured fish hepatocytes is clearly influenced by culture conditions (medium composition, temperature, etc.) and culture system (hepatocyte monolayers, aggregates, liver slices, etc.). The currently available database on estrogen-mediated VTG induction in cultured teleost hepatocytes is too small to support conclusive statements on whether there are systematic differences in the VTG response between in vitro culture systems, VTG analytical methods or fish species. The VTG hepatocyte assay sensitively detects natural and synthetic estrogens, whereas the response to xenoestrogens appears to be more variable. The detection of weak estrogens can be problematic because their effective concentrations may overlap with cytotoxic concentrations. Moreover, the VTG hepatocyte assay is able to detect antiestrogens as well as indirect estrogens, i.e. substances which require metabolic activation to induce an estrogenic response. Nevertheless, more chemicals need to be analysed to corroborate this statement. It will be necessary to establish standardized protocols to minimize assay variability, and to develop a set of pass-fail criteria as well as cut-offs for designating positive and negative responses.
Abstract:
OBJECTIVE: To assess the intra-reader and inter-reader reliabilities of interpreting ultrasonography by several experts using video clips. METHOD: 99 video clips of healthy and rheumatic joints were recorded and delivered to 17 physician sonographers in two rounds. The intra-reader and inter-reader reliabilities of interpreting the ultrasound results were calculated using a dichotomous system (normal/abnormal) and a graded semiquantitative scoring system. RESULTS: The video reading method worked well. 70% of the readers could classify at least 70% of the cases correctly as normal or abnormal. The distribution of readers answering correctly was wide. The most difficult joints to assess were the elbow, wrist, metacarpophalangeal (MCP) and knee joints. The intra-reader and inter-reader agreements on interpreting dynamic ultrasound images as normal or abnormal, as well as detecting and scoring a Doppler signal were moderate to good (kappa = 0.52-0.82). CONCLUSIONS: Dynamic image assessment (video clips) can be used as an alternative method in ultrasonography reliability studies. The intra-reader and inter-reader reliabilities of ultrasonography in dynamic image reading are acceptable, but more definitions and training are needed to improve sonographic reproducibility.
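For reference, the kappa statistics quoted above can be computed for a dichotomous (normal/abnormal) reading as in the sketch below; the ratings are toy data, not the study's clips.

```python
from collections import Counter

def cohens_kappa(reader_a, reader_b):
    """Cohen's kappa for two readers' dichotomous ratings ('N' = normal, 'A' = abnormal)."""
    n = len(reader_a)
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    counts_a, counts_b = Counter(reader_a), Counter(reader_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(reader_a) | set(reader_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy ratings for 10 video clips (placeholder data, not from the study).
a = list("NNAAANNAAN")
b = list("NNAANNNAAA")
print(round(cohens_kappa(a, b), 2))   # 0.6 for this toy example
```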
Abstract:
Our dynamic capillary electrophoresis model, which uses material-specific input data to estimate electroosmosis, was applied to investigate fundamental aspects of isoelectric focusing (IEF) in capillaries or microchannels made from bare fused-silica (FS), FS coated with a sulfonated polymer, polymethylmethacrylate (PMMA) and poly(dimethylsiloxane) (PDMS). Input data were generated by determining the electroosmotic flow (EOF) using buffers of varying pH and ionic strength. Two models are distinguished: one that neglects changes of ionic strength and one that includes the dependence of electroosmotic mobility on ionic strength. For each configuration, the models provide insight into the magnitude and dynamics of electroosmosis. The contribution of each electrophoretic zone to the net EOF is thereby visualized, and the amount of EOF required for the detection of the zone structures at a particular location along the capillary, including at its end for MS detection, is predicted. For bare FS, PDMS and PMMA, simulations reveal that the EOF decreases with time and that the entire IEF process is characterized by the asymptotic formation of a stationary steady-state zone configuration in which electrophoretic transport and electroosmotic zone displacement are opposite and of equal magnitude. The location at which the boundary between the anolyte and the most acidic carrier ampholyte becomes immobilized depends on the EOF, i.e. on the capillary material and the anolyte. The overall time intervals for reaching this state in microchannels made of PDMS and PMMA are predicted to be similar and about twice as long as in uncoated FS. Additional mobilization is required for the detection of the entire pH gradient at the capillary end. Concomitant electrophoretic mobilization with an acid as co-anion in the catholyte is shown to provide sufficient additional cathodic transport for that purpose. FS capillaries dynamically double-coated with polybrene and poly(vinylsulfonate) are predicted to provide sufficient electroosmotic pumping for detection of the entire IEF gradient at the cathodic column end.
Abstract:
Although assessment of asthma control is important to guide treatment, it is difficult since the temporal pattern and risk of exacerbations are often unpredictable. In this Review, we summarise the classic methods to assess control with unidimensional and multidimensional approaches. Next, we show how ideas from the science of complexity can explain the seemingly unpredictable nature of bronchial asthma and emphysema, with implications for chronic obstructive pulmonary disease. We show that fluctuation analysis, a method used in statistical physics, can be used to gain insight into asthma as a dynamic disease of the respiratory system, viewed as a set of interacting subsystems (eg, inflammatory, immunological, and mechanical). The basis of the fluctuation analysis methods is the quantification of the long-term temporal history of lung function parameters. We summarise how this analysis can be used to assess the risk of future asthma episodes, with implications for asthma severity and control both in children and adults.
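One widely used form of fluctuation analysis for such time series is detrended fluctuation analysis (DFA); the sketch below, using a synthetic record standing in for daily lung function measurements, shows how the fluctuation function and its scaling exponent are obtained. This is an illustration of the general method, not the specific analysis pipeline of the Review.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the fluctuation function F(n)
    for each window size n.  The slope alpha of log F vs log n characterizes
    long-range correlations in the series."""
    y = np.cumsum(x - np.mean(x))                     # integrated (profile) series
    F = []
    for n in scales:
        n_win = len(y) // n
        mse = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            mse.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(mse)))
    return np.array(F)

# Synthetic "daily peak expiratory flow" record (placeholder white noise).
x = np.random.default_rng(1).standard_normal(600)
scales = np.array([8, 16, 32, 64, 128])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 for uncorrelated data
print(f"DFA exponent alpha = {alpha:.2f}")
```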
Abstract:
The energy crisis and worldwide environmental problems make hydrogen a prospective energy carrier. However, storing and transporting hydrogen in large quantities at small volume is currently not practical. Many materials and devices have been developed for hydrogen storage, but to date none is able to meet the DOE targets. Activated carbon has been found to be a good hydrogen adsorbent due to its high surface area. However, the weak van der Waals force between hydrogen and the adsorbent limits the adsorption capacity. Previous studies have found that enhanced adsorption can be obtained with an applied electric field. Stronger interaction between the polarized hydrogen and the charged sorbent under high voltage is considered the reason. This study was initiated to investigate whether the adsorption can be further enhanced when the activated carbon particles are separated by a dielectric coating. Dielectric TiO2 nanoparticles were utilized first. Hydrogen adsorption measurements on the TiO2-coated carbon materials, with and without an external electric field, were made. The results showed that, under an applied electric field, the enhancement of adsorption capacity increased with the amount of TiO2 nanoparticles. Since the hydrogen adsorption capacity of TiO2 particles is very low and there is no hydrogen adsorption enhancement on TiO2 particles alone when an electric field is applied, the effect of the dielectric coating is demonstrated. Another set of experiments investigated the behavior of hydrogen adsorption over TiO2-coated activated carbon under various electric potentials. The results revealed that the hydrogen adsorption first increased and then decreased with increasing electric field. The improved storage was due to a stronger interaction between the charged carbon surface and the polarized hydrogen molecules caused by field-induced polarization of the TiO2 coating. When the electric field was sufficient to cause considerable ionization of hydrogen, the hydrogen adsorption decreased. The current leak detected at 3000 V was a sign of hydrogen ionization. Experiments were also carried out to examine the hydrogen adsorption performance over activated carbon separated by other dielectric materials: MgO, ZnO and BaTiO3. For the samples partitioned with MgO and ZnO, the measurements with and without an electric field showed negligible differences. Electric-field-enhanced adsorption was observed on the activated carbon separated with BaTiO3, a material with an unusually high dielectric constant. Corresponding computational calculations using density functional theory were performed on the interaction of hydrogen with a charged TiO2 molecule, as well as with a TiO2 molecule, coronene and TiO2-doped coronene in the presence of an electric field. The simulated results were consistent with the experimental observations, further confirming the proposed hypotheses.
Abstract:
The objective of this research was to develop a high-fidelity dynamic model of a parafoil-payload system with respect to its application in the Ship Launched Aerial Delivery System (SLADS). SLADS is a concept in which cargo can be transferred from ship to shore using a parafoil-payload system. It is accomplished in two phases: an initial towing phase, when the glider follows the towing vessel in a passive lift mode, and an autonomous gliding phase, when the system is guided to the desired point. While many previous researchers have analyzed the parafoil-payload system when it is released from another airborne vehicle, limited work has been done in the area of towing the system up from the ground or sea. One of the main contributions of this research was the development of a nonlinear dynamic model of a towed parafoil-payload system. After an extensive literature review of existing methods for modeling a parafoil-payload system, a five degree-of-freedom model was developed. The inertial and geometric properties of the system were investigated to predict accurate results in the simulation environment. Since extensive research has been done on determining the aerodynamic characteristics of a paraglider, an existing aerodynamic model was chosen to incorporate the effects of air flow around the flexible paraglider wing. During the towing phase, it is essential that the parafoil-payload system follow the line of the towing vessel's path to prevent an unstable flight condition called 'lockout'. A detailed study of the causes of lockout, its mathematical representation, and the flight conditions and parameters related to lockout constitutes another contribution of this work. A linearized model of the parafoil-payload system was developed and used to analyze the stability of the system about equilibrium conditions. The relationship between the control surface inputs and stability was investigated. In addition to stability of flight, another important objective of SLADS is to tow up the parafoil-payload system as fast as possible. The tension in the tow cable is directly proportional to the rate of ascent of the parafoil-payload system, and lockout is more likely to occur when tow tensions are large. Thus there is a tradeoff between susceptibility to lockout and rapid deployment. Control strategies were also developed for optimal tow-up and to maintain stability in the event of disturbances.
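The linearization-and-eigenvalue step described above can be sketched generically: given nonlinear dynamics x_dot = f(x, u), finite-difference Jacobians about an equilibrium yield a linearized system whose eigenvalues indicate stability. The placeholder dynamics below stand in for the five degree-of-freedom parafoil-payload model, which is not reproduced here.

```python
import numpy as np

def linearize(f, x_eq, u_eq, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at an equilibrium."""
    n, m = len(x_eq), len(u_eq)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x_eq + dx, u_eq) - f(x_eq - dx, u_eq)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x_eq, u_eq + du) - f(x_eq, u_eq - du)) / (2 * eps)
    return A, B

def f(x, u):
    # Placeholder dynamics standing in for the towed parafoil-payload model:
    # x = [pitch angle, pitch rate]; u = [brake deflection]  (illustrative only)
    return np.array([x[1], -4.0 * np.sin(x[0]) - 0.8 * x[1] + 1.5 * u[0]])

A, B = linearize(f, x_eq=np.zeros(2), u_eq=np.zeros(1))
eigs = np.linalg.eigvals(A)
print("stable about equilibrium:", bool(np.all(eigs.real < 0)))
```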
Abstract:
The use of conventional orifice-plate meters is typically restricted to measurements of steady flows. This study proposes a new and effective computational-experimental approach for measuring the time-varying (but steady-in-the-mean) flow rate of turbulent pulsatile gas flows. Low Mach number (effectively constant density) steady-in-the-mean gas flows with large-amplitude fluctuations (whose highest significant frequency is characterized by the value f_F) are termed pulsatile if the fluctuations have a direct correlation with the time-varying signature of the imposed dynamic pressure difference and, furthermore, have fluctuation amplitudes that are significantly larger than those associated with turbulence or random acoustic wave signatures. The experimental aspect of the proposed calibration approach is based on the use of Coriolis meters (whose oscillating-arm frequency f_Coriolis >> f_F), which are capable of effectively measuring the mean flow rate of the pulsatile flows. Together with the experimental measurements of the mean mass flow rate of these pulsatile flows, the computational approach presented here is shown to be effective in converting the dynamic pressure difference signal into the desired dynamic flow rate signal. The proposed approach is reliable because the time-varying flow rate predictions obtained for two different orifice-plate meters exhibit approximately the same qualitative, dominant features of the pulsatile flow.
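In the spirit of the approach described (though not the authors' exact computational procedure), the conversion can be sketched with the standard orifice relation, mdot = C_d * A * sqrt(2 * rho * dp), where the discharge coefficient is chosen so that the time-averaged prediction matches the mean mass flow rate from the Coriolis meter; all signal values below are synthetic.

```python
import numpy as np

def calibrate_cd(dp, rho, area, mdot_mean_coriolis):
    """Pick the discharge coefficient so the time-averaged orifice-equation
    prediction matches the mean mass flow rate measured by the Coriolis meter."""
    raw = area * np.sqrt(2.0 * rho * np.clip(dp, 0.0, None))   # mdot / C_d at each sample
    return mdot_mean_coriolis / raw.mean()

def dynamic_flow_rate(dp, rho, area, cd):
    """Convert the dynamic pressure-difference signal into a dynamic flow-rate signal."""
    return cd * area * np.sqrt(2.0 * rho * np.clip(dp, 0.0, None))

# Synthetic pulsatile pressure-difference signal (placeholder values).
t = np.linspace(0.0, 1.0, 1000)
dp = 800.0 + 400.0 * np.sin(2 * np.pi * 10 * t)     # Pa, fluctuating at f_F = 10 Hz
rho, area = 1.2, 2.0e-4                             # kg/m^3, m^2 (orifice bore area)
cd = calibrate_cd(dp, rho, area, mdot_mean_coriolis=0.012)   # 0.012 kg/s from Coriolis meter
mdot_t = dynamic_flow_rate(dp, rho, area, cd)
print(f"C_d = {cd:.3f}, mean mdot = {mdot_t.mean():.4f} kg/s")
```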
Abstract:
Self-stabilization is a property of a distributed system whereby, regardless of the legitimacy of its current state, the system eventually reaches a legitimate state and remains legitimate thereafter. The elegance of self-stabilization stems from the fact that it distinguishes distributed systems by a strong fault-tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most existing techniques for the verification and design of self-stabilization are either brute-force or adopt manual approaches that are not amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics exploit both the computational power of a single workstation and the available parallelism on computer clusters. We obtain existing and new stabilizing solutions for classical protocols such as maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., we study the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions – verifiable in the local state space of every process – for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, thereby enabling our assertions about deadlocks and livelocks to apply to rings of arbitrary size without worrying about state explosion.
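A classic small example of the kind of ring protocol this reasoning targets is Dijkstra's K-state token ring, which stabilizes from an arbitrary initial state to a configuration with exactly one enabled process; the simulation below (not taken from the dissertation) illustrates that convergence under an arbitrary scheduler.

```python
import random

def step(states, K):
    """One asynchronous step of Dijkstra's K-state token ring.
    Process 0 is enabled when its value equals that of its left neighbor (the
    last process); every other process is enabled when its value differs from
    its left neighbor's.  An enabled process 'holds a token'."""
    n = len(states)
    enabled = [i for i in range(n)
               if (i == 0 and states[0] == states[-1])
               or (i != 0 and states[i] != states[i - 1])]
    i = random.choice(enabled)                  # arbitrary (adversarial) scheduler
    states[i] = (states[0] + 1) % K if i == 0 else states[i - 1]
    return len(enabled)                         # number of tokens before the move

# Start from an arbitrary (possibly illegitimate) global state.
random.seed(3)
n = 6
K = n + 1                                       # K > n guarantees stabilization
states = [random.randrange(K) for _ in range(n)]
for t in range(500):
    tokens = step(states, K)
    if tokens == 1:
        print(f"stabilized after {t} steps: exactly one token circulates")
        break
```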
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types such as wood, forest residues, and agricultural residues has the potential to replace a substantial portion of the total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. The GIS-based methodology was used as a precursor to simulation and optimization modeling, preselecting potential facility locations for biofuel production from forest biomass by employing a series of decision factors; the resulting candidate sites served as inputs for the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, the railroad transportation network, the state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, population census data, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities including biomass harvesting/forwarding, transportation and storage. The onsite storage was built to serve the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and the inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to be consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the inventory level was tracked year-round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility is bounded between a lower limit of 30 MGY and an upper limit of 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
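The structure of such a facility-location optimization can be illustrated with a compact mixed-integer model: binary build decisions, biomass flows from supply counties to candidate sites, and facility throughput bounded between 30 and 50 MGY when built. The sketch below uses the PuLP library and toy data rather than the study's GIS-derived inputs or its MPL formulation.

```python
import pulp

# Toy data: 3 candidate biofuel facility sites, 4 biomass supply counties.
# Values are placeholders, not the study's GIS-derived inputs.
sites = ["S1", "S2", "S3"]
counties = ["C1", "C2", "C3", "C4"]
supply = {"C1": 40, "C2": 25, "C3": 35, "C4": 30}            # MGY-equivalent biomass
ship_cost = {(c, s): 1.0 + 0.1 * (i + j)                      # cost per unit shipped
             for i, c in enumerate(counties) for j, s in enumerate(sites)}
fixed_cost = {"S1": 90, "S2": 80, "S3": 85}                   # annualized facility cost
demand = 100                                                  # total biofuel demand (MGY)
lo, hi = 30, 50                                               # facility size bounds (MGY)

prob = pulp.LpProblem("biofuel_facility_location", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", sites, cat="Binary")
flow = pulp.LpVariable.dicts("flow", [(c, s) for c in counties for s in sites], lowBound=0)

# Objective: fixed facility costs plus transportation costs.
prob += (pulp.lpSum(fixed_cost[s] * build[s] for s in sites)
         + pulp.lpSum(ship_cost[c, s] * flow[c, s] for c in counties for s in sites))
for c in counties:                                            # cannot ship more than supply
    prob += pulp.lpSum(flow[c, s] for s in sites) <= supply[c]
for s in sites:                                               # size bounds if built, zero otherwise
    prob += pulp.lpSum(flow[c, s] for c in counties) <= hi * build[s]
    prob += pulp.lpSum(flow[c, s] for c in counties) >= lo * build[s]
prob += pulp.lpSum(flow[c, s] for c in counties for s in sites) >= demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if build[s].value() == 1])
```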
Abstract:
This dissertation established a standard foam index test: the absolute foam index test. The test characterizes coal fly ash by the absolute volume of air-entraining admixture (AEA), defined as the amount of undiluted AEA solution, necessary to produce a 15-second metastable foam on a coal fly ash-cement slurry at a 15-minute endpoint. The absolute foam index test was used to characterize fly ash samples having loss on ignition (LOI) values ranging from 0.17 to 23.3 wt.%. Results were compared from several foam index test time trials that used different initial test concentrations to reach termination at selected times; based on the coefficient of variation (CV), a 15-minute endpoint with limits of 12 to 18 minutes was chosen. Various initial test concentrations were used to achieve consistent contact times and concentration gradients for the 15-minute endpoint. A set of four standard concentrations for the absolute foam index test, together with a procedure simplifying the test process, was then defined by regression analysis of 80 tests on coal fly ashes with LOI values ranging from 0.39 to 23.3 wt.%. The regression analysis informed the selection of four concentrations (2, 6, 10, and 15 vol.% AEA) that are expected to accommodate fly ashes with 0.39 to 23.3 wt.% LOI, depending on the AEA type; higher concentrations should be used for high-LOI fly ash when necessary. The procedure developed using these standard concentrations is expected to require only 1-3 trials to meet the specified endpoint criteria for most fly ashes. The AEA solution concentration that achieved the metastable foam in the foam index test was compared with the AEA equilibrium concentration obtained from the direct adsorption isotherm test on the same fly ash. The concentration that satisfied the absolute foam index test was much less than the equilibrium concentration, indicating that the absolute foam index test is not at or near equilibrium; rather, it is a dynamic test in which the test duration plays an important role in the results. Equilibrium isotherm equations obtained from direct isotherm tests were also used to calculate the equilibrium concentrations and capacities of fly ash with 0.17 to 10.5% LOI; the calculated fly ash capacities were much less than the capacities obtained from isotherm tests conducted with higher initial concentrations, again indicating that the absolute foam index is not an equilibrium condition. Even so, a correlation was made between the absolute foam index and the adsorption isotherms for fly ash of 0.17 to 10.5% LOI. Several batches of mortars were mixed for the same fly ash type, increasing only the AEA concentration (dosage) in each subsequent batch.
Mortar air test results showed that, for each increase in AEA concentration, the air content increased until a point where the next increase in AEA concentration resulted in no further increase in air content. This was the maximum air content that could be achieved by the particular mortar system; the system reached its air capacity at the saturation limit. This concentration of AEA was compared to the critical micelle concentration (CMC) of the AEA and to the absolute foam index.
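The comparison with adsorption isotherms above relies on an isotherm equation fitted to direct adsorption data; the abstract does not state the functional form used, but a Freundlich fit, as sketched below with placeholder data, is one common choice and shows how an equilibrium concentration can be back-calculated from a measured capacity.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(C, Kf, n):
    """Freundlich isotherm: AEA adsorbed per unit fly ash, q = Kf * C**(1/n)."""
    return Kf * C ** (1.0 / n)

# Placeholder equilibrium data from a hypothetical direct adsorption test:
# C = equilibrium AEA concentration, q = AEA adsorbed per gram of fly ash.
C = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 15.0])
q = np.array([0.8, 1.1, 1.5, 2.1, 2.8, 3.7])

(Kf, n), _ = curve_fit(freundlich, C, q, p0=(1.0, 2.0))
print(f"Kf = {Kf:.2f}, n = {n:.2f}")
# The fitted equation can be inverted to estimate the equilibrium concentration
# corresponding to a measured fly ash capacity, for comparison with the
# concentration that satisfies the absolute foam index test.
print("C at q = 2.0:", (2.0 / Kf) ** n)
```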