Abstract:
There is an increasing global reliance on the Internet for retrieving information on health, illness, and recovery (Sillence et al., 2007; Laurent et al., 2009; Adams, 2010). People suffering from a vast array of illnesses, conditions, and complaints, as well as healthy travelers seeking advice about safe practices abroad and teens seeking information about safe sexual practices, are all now more likely to go to the Internet for information than to rely solely on a general practitioner or physician (Santor et al., 2007; Moreno et al., 2009; Bartlett et al., 2010). Women in particular seek advice and support online for a number of health-related concerns regarding issues such as puberty, conception, pregnancy, postnatal depression, mothering, breast-cancer recovery, and ageing healthily (van Zutphen, 2008; Raymond et al., 2005). In keeping with this increasing socio-technological trend, the Women’s Health Unit at the Queensland University of Technology (QUT), Brisbane, Australia, introduced the research, design, and development of online information resources for issues affecting the health of Australian women as an assessment item for students in the undergraduate Public Health curriculum. Students were required to research a particular health issue affecting Australian women, including pregnancy, pregnancy terminations, postnatal depression, returning to the workforce after having a baby, breast cancer recovery, chronic disease prevention, health and safety for sex workers, and ageing healthily. They were then required to design and develop websites that supported people living with these conditions or in these situations. The websites were designed for communicating effectively with both women seeking information about their health and their health practitioners. The pedagogical challenge inherent in this exercise was twofold: firstly, to encourage students to develop the skills to design and maintain software for online health forums; and secondly, to challenge public health students to go beyond generating ‘awareness’ and imparting health information to developing a nuanced understanding of the worlds and perspectives of their audiences, who require supportive networks and options that resonate with their restrictions, capabilities, and dispositions. This latter challenge spanned the realms of research, communication, and aesthetic design. This paper firstly discusses the increasing reliance on the Internet by women seeking health-related information and the potential health risks and benefits of this trend. Secondly, it applies a post-structural analysis of the de-centred and mobile female self, as online social ‘spaces’ and networks supersede geographical ‘places’ and hierarchies, with implications for democracy, equality, power, and ultimately women’s health. Thirdly, it depicts the processes (learning reflections) and products (developed websites) created within this Women’s Health Unit by the students. Finally, we review this development in the undergraduate curriculum in terms of the importance of providing students with skills in research, communication, and technology in order to share and implement improved health care and social marketing for women as both recipients and providers of health care in the Internet Age.
Abstract:
We have developed a bioreactor vessel design which has the advantages of simplicity and ease of assembly and disassembly and, with an appropriately determined flow rate, even allows a scaffold to be suspended freely regardless of its weight. This article reports our experimental and numerical investigations to evaluate the performance of the newly developed non-perfusion conical bioreactor by visualizing the flow through scaffolds with 45° and 90° fiber lay-down patterns. The experiments were conducted at Reynolds numbers (Re) of 121, 170, and 218, based on the local velocity and the width of the scaffolds. The flow fields were captured using short-time exposures of 60 µm particles suspended in the bioreactor and illuminated with a thin laser sheet. The effects of scaffold fiber lay-down pattern and Reynolds number were obtained and compared to results from a computational fluid dynamics (CFD) software package. The objectives of this article are twofold: first, to investigate the hypothesis that there may be an insufficient exchange of medium within the interior of the scaffold when using our non-perfusion bioreactor; and second, to compare the flows within and around scaffolds of 45° and 90° fiber lay-down patterns. Scaffold porosity was also found to influence flow patterns. It was therefore shown that fluidic transport could be achieved within scaffolds with our bioreactor design, despite its being a non-perfusion vessel. Fluid velocities were generally of the same order of magnitude as, or one order lower than, the inlet flow velocity. Additionally, the 90° fiber lay-down pattern scaffold was found to allow slightly higher fluid velocities within, as compared to the 45° fiber lay-down pattern scaffold. This was due to the architecture and pore arrangement of the 90° fiber lay-down pattern scaffold, which allows fluid to flow directly through (channel-like flow).
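For orientation, the sketch below evaluates the Reynolds-number definition quoted above (Re based on the local velocity and scaffold width). The fluid properties and the 10 mm scaffold width in the code are assumptions standing in for the article's actual values.

```python
# A minimal sketch of the Reynolds-number definition quoted above,
# Re = rho * U * L / mu, with U the local velocity and L the scaffold width.
# Fluid properties (water-like culture medium) and the scaffold width
# are assumptions, not values from the article.

RHO = 1000.0   # kg/m^3, assumed medium density
MU = 0.9e-3    # Pa*s, assumed dynamic viscosity
WIDTH = 0.010  # m, assumed scaffold width

def reynolds_number(local_velocity_m_s: float) -> float:
    """Re for flow past a scaffold of width WIDTH."""
    return RHO * local_velocity_m_s * WIDTH / MU

def velocity_for_re(re: float) -> float:
    """Invert the definition: local velocity implied by a target Re."""
    return re * MU / (RHO * WIDTH)

for re_target in (121, 170, 218):
    u = velocity_for_re(re_target)
    print(f"Re = {re_target:3d} -> local velocity ≈ {u * 1000:.1f} mm/s "
          f"(check: Re = {reynolds_number(u):.0f})")
```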
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and, ultimately, production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when one contrasts the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the questions of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').
Abstract:
In an open railway access market, the Infrastructure Provider (IP), upon receipt of service bids from the Train Service Providers (TSPs), assigns track access rights according to its own business objectives and the merits of the bids, and produces the train service timetable through negotiations. In practice, the IP chooses to negotiate with the TSPs one by one, in a sequence that optimizes the IP's objectives. The TSP bids are usually very complicated, containing a large number of parameters of different natures. It is a difficult task, even for an expert, to derive a priority sequence for negotiations from the contents of the bids. This study proposes the application of a fuzzy ranking method to compare and prioritize the TSP bids in order to produce a negotiation sequence. The results of this study allow investigation of the behaviors of the stakeholders in bid preparation and negotiation, as well as evaluation of service quality in the open railway market.
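The abstract does not reproduce the fuzzy ranking method itself; as a minimal sketch of the general idea, the code below scores bids whose attributes are expressed as triangular fuzzy numbers by a weighted centroid and sorts them into a negotiation sequence. The attributes, weights, and values are hypothetical, not those used in the study.

```python
# Illustrative sketch only: ranking bids whose attributes are triangular
# fuzzy numbers (a, b, c) by a weighted centroid score. All attribute
# names, weights, and values are hypothetical.

def centroid(tfn):
    """Centroid (defuzzified value) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def bid_score(bid, weights):
    return sum(w * centroid(bid[attr]) for attr, w in weights.items())

weights = {"revenue": 0.5, "track_occupancy": 0.3, "flexibility": 0.2}
bids = {
    "TSP-A": {"revenue": (6, 8, 9), "track_occupancy": (4, 5, 7), "flexibility": (5, 6, 8)},
    "TSP-B": {"revenue": (5, 7, 10), "track_occupancy": (6, 7, 8), "flexibility": (3, 5, 6)},
    "TSP-C": {"revenue": (7, 8, 8), "track_occupancy": (3, 4, 5), "flexibility": (6, 7, 9)},
}

# Negotiation sequence: highest-scoring bid first.
sequence = sorted(bids, key=lambda k: bid_score(bids[k], weights), reverse=True)
print("Suggested negotiation sequence:", " -> ".join(sequence))
```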
Abstract:
Red light cameras (RLCs) have been used in a number of US cities to yield a demonstrable reduction in red light violations; however, evaluating their impact on safety (crashes) has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, “spillover” effects caused by drivers reacting to non-RLC-equipped intersections and approaches can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities were examined in detail and conclusions drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of the results to analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are reduced in general, while rear-end crashes tend to increase as a result of RLCs.
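One widely used technique for handling the regression-to-the-mean problem mentioned above is the empirical Bayes before-after method; the sketch below illustrates it with hypothetical numbers. It is a generic illustration, not necessarily one of the four methodologies employed in this study.

```python
# Generic empirical Bayes (EB) sketch for before-after safety studies,
# which corrects for regression to the mean. Not necessarily one of the
# four methodologies used in the Phoenix/Scottsdale study; all numbers
# are hypothetical, and before/after periods are assumed equal in length.

def eb_expected_crashes(observed_before, spf_prediction, overdispersion):
    """EB estimate of expected crashes without treatment.

    Combines the site's observed count with a safety performance
    function (SPF) prediction, weighted by the SPF overdispersion k:
    w = 1 / (1 + k * mu).
    """
    w = 1.0 / (1.0 + overdispersion * spf_prediction)
    return w * spf_prediction + (1.0 - w) * observed_before

# Hypothetical intersection: 14 angle crashes observed before RLC
# installation, SPF predicts 9.0 for similar sites, k = 0.25.
expected = eb_expected_crashes(14, 9.0, 0.25)
observed_after = 8
print(f"EB-expected without treatment: {expected:.1f}, observed after: {observed_after}")
print(f"Estimated crash reduction: {expected - observed_after:.1f}")
```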
Abstract:
Routing trains within passenger stations in major cities is a common scheduling problem in railway operation. Various studies have been undertaken to derive and formulate solutions to this route allocation problem (RAP), which is particularly evident in mainland China nowadays because of growing traffic demand and limited station capacity. A reasonable solution must be selected from the set of available RAP solutions obtained in the planning stage to facilitate station operation. The selection is, however, based only on the experience of the operators, and objective evaluation of the solutions is rarely addressed. In order to maximise the utilisation of station capacity while maintaining service quality and allowing for service disturbance, quantitative evaluation of RAP solutions is highly desirable. In this study, quantitative evaluation of RAP solutions is proposed, enabled by a set of indices covering infrastructure utilisation, buffer times, and delay propagation. The proposed evaluation is carried out on a number of RAP solutions at a real-life busy railway station in mainland China, and the results highlight the effectiveness of the indices in pinpointing the strengths and weaknesses of the solutions. This study provides the necessary platform to improve RAP solutions in planning and to allow train re-routing upon service disturbances.
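As a rough illustration of indices of this kind (the study's exact definitions are not reproduced here), the sketch below computes a per-resource utilisation index and the minimum buffer time between consecutive occupations for a hypothetical station timetable.

```python
# Hedged sketch of utilisation and buffer-time indices of the kind
# described above; the timetable and planning horizon are hypothetical.

from collections import defaultdict

# (resource, arrival_min, departure_min) for each scheduled occupation
occupations = [
    ("platform_1", 0, 6), ("platform_1", 10, 18), ("platform_1", 21, 27),
    ("platform_2", 2, 9), ("platform_2", 16, 24),
]
horizon = 30.0  # minutes of planning horizon considered

by_resource = defaultdict(list)
for res, arr, dep in occupations:
    by_resource[res].append((arr, dep))

for res, occs in sorted(by_resource.items()):
    occs.sort()
    # Utilisation index: fraction of the horizon the resource is occupied.
    utilisation = sum(dep - arr for arr, dep in occs) / horizon
    # Buffer times between consecutive occupations of the same resource.
    buffers = [nxt_arr - dep for (_, dep), (nxt_arr, _) in zip(occs, occs[1:])]
    min_buffer = min(buffers) if buffers else float("inf")
    print(f"{res}: utilisation = {utilisation:.0%}, min buffer = {min_buffer} min")
```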
Abstract:
Purpose of study: Traffic conflicts occur when trains on different routes approach a converging junction in a railway network at the same time. To prevent collisions, a right-of-way assignment is needed to control the order in which the trains should pass the junction. Such control action inevitably requires the braking and/or stopping of trains, which lengthens their travelling times and leads to delays. Train delays cause a loss of punctuality and hence directly affect the quality of service. It is therefore important to minimise the delays by devising a suitable right-of-way assignment. One of the major difficulties in attaining the optimal right-of-way assignment is that the number of feasible assignments increases dramatically with the number of trains. Connected junctions further complicate the problem. Exhaustive search for the optimal solution is time-consuming and infeasible for area (multi-junction) control. Even with the more intelligent deterministic optimisation method revealed in [1], the computation demand is still considerable, which hinders real-time control. In practice, as suggested in [2], optimality may be traded off for shorter computation time, and heuristic searches provide alternatives for this optimisation problem.
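The combinatorial growth and the heuristic trade-off can be made concrete with a toy example: for n trains there are n! passing orders, so exhaustive search quickly becomes infeasible, while a greedy first-come-first-served heuristic produces one assignment almost instantly. The delay model, headway, and arrival times below are hypothetical.

```python
# Toy right-of-way assignment at a single converging junction: compare
# exhaustive search over all passing orders with a greedy heuristic.
# Headway, arrival times, and the delay model are all hypothetical.

from itertools import permutations

HEADWAY = 2.0  # min separation through the junction (assumed)
arrivals = {"T1": 0.0, "T2": 0.5, "T3": 1.0, "T4": 3.5}  # arrival times (min)

def total_delay(order):
    """Sum of delays if trains pass in the given order, HEADWAY apart."""
    t, delay = 0.0, 0.0
    for train in order:
        passage = max(arrivals[train], t)
        delay += passage - arrivals[train]
        t = passage + HEADWAY
    return delay

# Exhaustive search: n! orders (24 here, ~3.6 million for 10 trains).
best = min(permutations(arrivals), key=total_delay)
# Greedy heuristic: first-come, first-served.
fcfs = tuple(sorted(arrivals, key=arrivals.get))

print("optimal:", best, f"delay = {total_delay(best):.1f} min")
print("greedy :", fcfs, f"delay = {total_delay(fcfs):.1f} min")
```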
Abstract:
Electrostatic discharge is the sudden and brief electric current that flashes between two objects at different voltages. It is a serious issue in contexts ranging from solid-state electronics to spectacular and dangerous lightning strikes (arc flashes). The research herein presents work on the experimental simulation and measurement of the energy in an electrostatic discharge. The energy released in these discharges has been linked to ignitions and burning in a number of documented disasters and can be enormously hazardous in many other industrial scenarios. Simulations of electrostatic discharges were designed to the specifications of IEC standards. Energy estimation is typically based on the residual voltage/charge on the discharge capacitor, whereas this research examines the voltage and current in the actual spark in order to obtain a more precise comparative measurement of the energy dissipated.
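The two estimation approaches contrasted above can be sketched with synthetic waveforms standing in for measured data: the capacitor-based estimate uses E = ½C(V0² − V1²), while the direct approach integrates the spark's v·i product over time. The component values and the spark-voltage fraction below are assumptions, not the study's measurements.

```python
# Sketch contrasting the two energy estimates with an idealised RC
# discharge (the real study used measured spark waveforms).
# Capacitor method: E = 0.5 * C * (V0**2 - V1**2), from residual voltage.
# Direct method:    E = integral of v_spark(t) * i(t) dt.

import numpy as np

C = 150e-12   # discharge capacitance, F (typical IEC-style value, assumed)
R = 330.0     # series resistance, ohms (assumed)
V0 = 8000.0   # initial charge voltage, V (assumed)

t = np.linspace(0.0, 5 * R * C, 2000)
i = (V0 / R) * np.exp(-t / (R * C))    # idealised discharge current
v_cap = V0 * np.exp(-t / (R * C))      # capacitor voltage
V1 = v_cap[-1]                         # residual voltage at end of record

e_capacitor = 0.5 * C * (V0**2 - V1**2)  # energy lost by the capacitor

# Suppose only a fraction of the voltage appears across the spark gap
# itself (hypothetical fraction); integrate v*i by the trapezoidal rule.
v_spark = 0.3 * v_cap
p = v_spark * i
e_spark = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

print(f"capacitor-based estimate: {e_capacitor * 1e3:.2f} mJ")
print(f"spark v*i integration   : {e_spark * 1e3:.2f} mJ")
```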
Abstract:
Purpose: We compared subjective blur limits for defocus and the higher-order aberrations of coma, trefoil, and spherical aberration. Methods: Spherical aberration was presented in both Zernike and Seidel forms. Black letter targets (0.1, 0.35, and 0.6 logMAR) on white backgrounds were blurred using an adaptive optics system for six subjects under cycloplegia with 5 mm artificial pupils. Three blur criteria of just noticeable, just troublesome, and just objectionable were used. Results: When expressed as wave aberration coefficients, the just noticeable blur limits for coma and trefoil were similar to those for defocus, whereas the just noticeable limits for Zernike spherical aberration and Seidel spherical aberration (the latter given as an “rms equivalent”) were considerably smaller and larger, respectively, than defocus limits. Conclusions: Blur limits increased more quickly for the higher-order aberrations than for defocus as the criterion changed from just noticeable to just troublesome and then to just objectionable.
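For reference, the two representations differ in a standard way (this is textbook background, not a formulation taken from the paper): the Zernike spherical aberration polynomial balances its ρ⁴ term with a defocus-like ρ² term so as to minimise RMS wavefront error, whereas the Seidel form is a pure ρ⁴ term,

$$ Z_4^0(\rho) = c_4^0\,\sqrt{5}\,\bigl(6\rho^4 - 6\rho^2 + 1\bigr), \qquad W_{\mathrm{Seidel}}(\rho) = W_{040}\,\rho^4, $$

where ρ is the normalised pupil radius, c_4^0 is the Zernike coefficient, and W_040 the Seidel coefficient. This built-in defocus balancing is why equal nominal amplitudes of the two forms blur differently, and why the Seidel coefficient is quoted above as an “rms equivalent” for comparison.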
Abstract:
Purpose. To investigate the effect of various presbyopic vision corrections on nighttime driving performance on a closed-road driving circuit. Methods. Participants were 11 presbyopes (mean age, 57.3 ± 5.8 years), with a mean best-sphere distance refractive error of R +0.23 ± 1.53 DS and L +0.20 ± 1.50 DS, whose only experience of wearing a presbyopic vision correction was reading spectacles. The study involved a repeated-measures design in which a participant's nighttime driving performance was assessed on a closed-road circuit while wearing each of four power-matched vision corrections: single-vision distance lenses (SV), progressive-addition spectacle lenses (PAL), monovision contact lenses (MV), and multifocal contact lenses (MTF CL), worn in randomized order. Measures included low-contrast road hazard detection and avoidance, road sign and near target recognition, lane-keeping, driving time, and legibility distance for street signs. Eye movement data (fixation duration and number of fixations) were also recorded. Results. Street sign legibility distances were shorter when wearing MV and MTF CL than SV and PAL (P < 0.001), and participants drove more slowly with MTF CL than with PAL (P = 0.048). Wearing SV resulted in more errors (P < 0.001) and in more (P = 0.002) and longer (P < 0.001) fixations when responding to near targets. Fixation duration was also longer when viewing distant signs with MTF CL than with PAL (P = 0.031). Conclusions. Presbyopic vision corrections worn by naive, unadapted wearers affected nighttime driving. Overall, the spectacle corrections (PAL and SV) performed well for distance driving tasks, but SV negatively affected viewing near dashboard targets. MTF CL resulted in the shortest legibility distances for street signs and longer fixation times.
Abstract:
Balancing the provision of a high quality of service against running within a tight budget is one of the biggest challenges for most metro railway operators around the world. Conventionally, one possible approach for the operator to adjust the time schedule is to alter the stop time at stations, if other system constraints, such as traction equipment characteristics, are not taken into account. Yet this is not an effective, flexible, and economical method, because the run-time of a train simply cannot be extended without limit, and a balance between run-time and energy consumption has to be maintained. Modification or installation of a new signalling system not only increases the capital cost, but also affects normal train service. Therefore, in order to procure a more effective, flexible, and economical means of improving the quality of service, optimisation of train performance by coasting point identification has become more attractive and popular. However, identifying the necessary starting points for coasting under the constraints of current service conditions is no simple task, because train movement is governed by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of genetic algorithms (GA) to search for appropriate coasting points and investigates the possible improvements in computation time and in the fitness of the resulting solutions.
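A minimal GA sketch of the approach described is given below (not the paper's actual implementation): a chromosome encodes a candidate coasting position, and the fitness function penalises both energy use and lateness against a deliberately crude toy train model. All parameters are illustrative.

```python
# Minimal GA sketch for coasting-point search. The train model, route
# length, timetable, and GA parameters are hypothetical placeholders.

import random

ROUTE_LEN = 10_000.0   # m (assumed)
TARGET_TIME = 480.0    # s, scheduled run time (assumed)

def simulate(coast_point):
    """Toy train model: motoring up to the coasting point, coasting after.
    Returns (run_time_s, energy_units). Purely illustrative."""
    motor_frac = coast_point / ROUTE_LEN
    run_time = 420.0 + 140.0 * (1.0 - motor_frac)  # coasting lengthens run time
    energy = 100.0 * motor_frac                    # motoring consumes energy
    return run_time, energy

def fitness(coast_point):
    run_time, energy = simulate(coast_point)
    lateness = max(0.0, run_time - TARGET_TIME)
    return -(energy + 50.0 * lateness)  # higher is better

pop = [random.uniform(0, ROUTE_LEN) for _ in range(20)]
for _ in range(40):  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]          # elitist selection
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b) + random.gauss(0, 200.0)  # crossover + mutation
        children.append(min(max(child, 0.0), ROUTE_LEN))
    pop = parents + children

best = max(pop, key=fitness)
rt, en = simulate(best)
print(f"best coasting point ≈ {best:.0f} m (run time {rt:.0f} s, energy {en:.0f} units)")
```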
Abstract:
With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies, and simulation is now proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common; most applications focused on isolated parts of the railway system, and it is more appropriate to regard those applications as mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply, and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions, and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models: not only can the applicability of the simulators be greatly enhanced by advanced software design, but maintainability and modularity (for easy understanding and further development) and portability across various hardware platforms are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models, with attention given in particular to models for train movement, power supply systems, and traction drives. These models have been successfully used to enable various ‘what-if’ issues to be resolved effectively in a wide range of applications, such as speed profiles, energy consumption, and run times.
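As an example of the kind of train-movement model reviewed here, the sketch below advances a train's speed one time step at a time, balancing tractive effort against Davis-equation running resistance and gradient force. The coefficients and limits are illustrative, not drawn from any particular railway.

```python
# Sketch of a modular train-movement model: one time step integrates
# tractive effort against Davis-equation resistance and gradient.
# All coefficients, masses, and efforts are illustrative assumptions.

def davis_resistance(v, a=2000.0, b=30.0, c=4.0):
    """Running resistance in N at speed v (m/s); a, b, c assumed."""
    return a + b * v + c * v * v

def step(v, dt, tractive_effort, mass=400_000.0, grade_pct=0.0):
    """Advance speed one time step: F = m*a with resistance and gradient."""
    f_grade = mass * 9.81 * grade_pct / 100.0
    accel = (tractive_effort - davis_resistance(v) - f_grade) / mass
    return max(0.0, v + accel * dt)

# Simple 'what-if': time to reach 20 m/s under a constant 250 kN effort.
v, t, dt = 0.0, 0.0, 0.5
while v < 20.0:
    v = step(v, dt, 250_000.0)
    t += dt
print(f"reaches 20 m/s after ~{t:.0f} s")
```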
Abstract:
Recent studies have detected a dominant accumulation mode (~100 nm) in the Sea Spray Aerosol (SSA) number distribution, and there is evidence to suggest that particles in this mode are composed primarily of organics. To investigate this hypothesis we conducted experiments on NaCl, artificial SSA, and natural SSA particles with a Volatility-Hygroscopicity-Tandem-Differential-Mobility-Analyser (VH-TDMA). NaCl particles were atomiser-generated, and a bubble generator was constructed to produce artificial and natural SSA particles. Natural seawater samples for use in the bubble generator were collected from biologically active, terrestrially affected coastal water in Moreton Bay, Australia. Differences in the VH-TDMA-measured volatility curves of artificial and natural SSA particles were used to investigate and quantify the organic fraction of natural SSA particles. Hygroscopic Growth Factor (HGF) data, also obtained by the VH-TDMA, were used to confirm the conclusions drawn from the volatility data. Both datasets indicated that the organic fraction of our natural SSA particles evaporated in the VH-TDMA over the temperature range 170–200 °C. The organic volume fraction for 71–77 nm natural SSA particles was 8 ± 6%. The organic volume fraction did not vary significantly with water residence time (40 s to 24 h) in the bubble generator or with SSA particle diameter in the range 38–173 nm. At room temperature we measured shape- and Kelvin-corrected HGFs at 90% RH of 2.46 ± 0.02 for NaCl, 2.35 ± 0.02 for artificial SSA, and 2.26 ± 0.02 for natural SSA particles. Overall, these results suggest that the natural accumulation mode SSA particles produced in these experiments contained only a minor organic fraction, which had little effect on hygroscopic growth. Our measurement of 8 ± 6% is an order of magnitude below two previous measurements of the organic fraction in SSA particles of comparable sizes. We stress that our results were obtained using coastal seawater and cannot necessarily be applied on a regional or global ocean scale. Nevertheless, considering the order-of-magnitude discrepancy between this and previous studies, further research with independent measurement techniques and a variety of different seawaters is required to better quantify how much organic material is present in accumulation mode SSA.
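The conclusion that an 8% organic fraction has little effect on hygroscopic growth can be checked against a standard ZSR-type volume-mixing rule (not necessarily the exact analysis used in the paper); the sketch below, which assumes a nearly non-hygroscopic organic component, reproduces a growth factor close to the measured 2.26.

```python
# Hedged consistency check using a ZSR-type volume-mixing rule:
# HGF_mix^3 = eps_org * HGF_org^3 + eps_inorg * HGF_inorg^3.
# The assumption HGF_org ~ 1 (nearly non-hygroscopic organics) is ours,
# not a value reported in the paper.

def hgf_mixture(eps_org, hgf_inorg, hgf_org=1.0):
    """HGF of an organic/inorganic internal mixture (ZSR volume rule)."""
    eps_inorg = 1.0 - eps_org
    return (eps_org * hgf_org**3 + eps_inorg * hgf_inorg**3) ** (1.0 / 3.0)

# Use the measured artificial-SSA HGF (2.35) as the inorganic reference
# and the reported 8% organic volume fraction:
predicted = hgf_mixture(0.08, 2.35)
print(f"predicted natural-SSA HGF ≈ {predicted:.2f} (measured: 2.26 ± 0.02)")
```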
Abstract:
Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods, for different particle sizes, conducted in different parts of the world. The choice of the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65%, and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5, and PM10, respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals, and largest sample sizes, and on the explanatory model variables, which were vehicle type (all particle metrics), instrumentation (particle number and PM2.5), road type (PM10), and size range measured and speed limit on the road (particle volume). Discussion: A multiplicity of factors need to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data upon which to derive emission factors of their own. Recommendations and perspectives: In urban areas motor vehicles continue to be a major source of particulate matter pollution, and of ultrafine particles in particular. In order to manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for traffic modelling and health impact assessments.
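The statistical modelling described is essentially a regression of published emission factors on categorical study descriptors; the sketch below shows the shape of such a model with dummy-coded vehicle and road types. The data records are entirely hypothetical, not values from the compiled literature.

```python
# Illustrative regression of log emission factors on dummy-coded study
# descriptors (vehicle type, road type). All records are hypothetical.

import numpy as np

# Hypothetical published particle-number emission factors (#/km):
records = [
    ("passenger_car", "freeway", 2.2e14),
    ("passenger_car", "urban",   8.0e13),
    ("hdv",           "freeway", 6.1e15),
    ("hdv",           "urban",   2.3e15),
    ("bus",           "urban",   1.9e15),
    ("passenger_car", "freeway", 3.0e14),
]

vehicles = sorted({r[0] for r in records})
roads = sorted({r[1] for r in records})

def dummies(r):
    """Intercept plus dummy indicators (first category as baseline)."""
    return [1.0] + [1.0 if r[0] == v else 0.0 for v in vehicles[1:]] \
                 + [1.0 if r[1] == rd else 0.0 for rd in roads[1:]]

X = np.array([dummies(r) for r in records])
y = np.log10([r[2] for r in records])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("log10(EF) model coefficients:", np.round(coef, 2))
```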
Abstract:
Airborne measurements of particle number concentrations from biomass burning were conducted in the Northern Territory, Australia, during campaigns in June and September 2003, corresponding to the early and the late dry season in that region. The airborne measurements were performed along horizontal flight tracks at several heights, in order to gain insight into the particle concentration levels and their variation with height within the lower boundary layer (LBL), the upper boundary layer (UBL), and the free troposphere (FT). The measurements found that the concentration of particles during the early dry season was lower than that for the late dry season. For the June campaign, the concentrations of particles in the LBL, UBL, and FT were (685 ± 245) particles/cm³, (365 ± 183) particles/cm³, and (495 ± 45) particles/cm³, respectively. For the September campaign, the concentrations were (1233 ± 274) particles/cm³ in the LBL, (651 ± 68) particles/cm³ in the UBL, and (568 ± 70) particles/cm³ in the FT. The particle size distribution measurements indicate that during the late dry season there was no change in the particle size distribution between the lower boundary layer (LBL) and the upper boundary layer (UBL), which indicates that there was possibly some penetration of biomass burning particles into the upper boundary layer. In the free troposphere, the particle concentration and size measured during both campaigns were approximately the same.