90 results for Resolution in azimuth direction


Relevance:

100.00%

Publisher:

Abstract:

A number of recent books on ethics (Hirst and Patching, 2005; Tanner et al., 2005; Ward, 2006) have indicated that traditional understandings of journalism "objectivity" are in need of renovation if they are to sustain their claim as a guide to ethical action. Ward argues for recasting the notions of traditional objectivity to offer a "pragmatic objectivity" as an alternative and plausible underpinning for ethical journalism practice. He argues that a recast or "pragmatic objectivity" should respond to the changing rhetorical relationship between journalists and their audiences and, in so doing, should take inspiration from attempts to be objective in other domains, looking to professions such as law and public relations for models. This paper takes a step in that direction by illustrating how journalism interviews do "objectivity" through an adaptation of the principles of the "Fourth Estate" to political interviews. It turns this analysis to establishing the particular "pragmatic ethic" underpinning such practices, and to showing how journalism interviewing techniques have allowed proactive journalists to strike a workable balance between pursuing the public interest and observing the restraining protocols of modern journalistic practice.

Relevance:

100.00%

Publisher:

Abstract:

Objective: Because studies of crowding in long-term care settings are lacking, the authors sought to (1) generate initial estimates of crowding in nursing homes and assisted living facilities, and (2) evaluate two operational approaches to its measurement.

Background: Reactions to density and proximity are complex. Greater density intensifies people's reaction to a situation in the direction (positive or negative) in which they would react if the situation occurred under less dense conditions. People with dementia are especially reactive to the environment.

Methods: Using a cross-sectional correlational design in nursing homes and assisted living facilities involving 185 participants, multiple observations (N = 6,455) of crowding and other environmental variables were made. Crowding, location, and sound were measured three times per observation; ambiance was measured once. Data analyses consisted of descriptive statistics, t-tests, and one-way analysis of variance.

Results: Crowding estimates were higher for nursing homes and in dining and activity rooms. Crowding also varied across settings and locations by time of day. Overall, the interaction of location and time affected crowding significantly (N = 5,559, df [47, 511], F = 105.69, p < .0001); effects were greater within location-by-hour than between location-by-hour, but the interaction explained slightly less variance in Long-Term Care Crowding Index (LTC-CI) estimates (47.41%) than location alone. Crowding had small, direct, and highly significant correlations with sound and with the engaging subscale for ambiance; a similar, though inverse, correlation was seen with the soothing subscale for ambiance.

Conclusions: Crowding fluctuates in a manner consistent with routine activities, such as meals, in long-term care settings. Furthermore, a relationship between crowding and other physical characteristics of the environment was found. The LTC-CI is likely to be more sensitive than simple people counts when seeking to evaluate the effects of crowding on the behavior of elders, particularly those with dementia, in long-term care settings.
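
A minimal sketch of the kind of analysis described above (descriptive statistics and a one-way ANOVA across locations), using hypothetical location groups and crowding values rather than the study's data:

# Descriptive statistics and one-way ANOVA across locations.
# The location groups and crowding values are hypothetical placeholders.
import numpy as np
from scipy import stats

crowding_by_location = {
    "dining_room": np.array([4.1, 3.8, 5.0, 4.6, 4.9]),
    "activity_room": np.array([3.9, 4.4, 4.2, 4.8, 4.1]),
    "hallway": np.array([1.2, 0.9, 1.5, 1.1, 1.3]),
}

for name, values in crowding_by_location.items():
    print(f"{name}: mean={values.mean():.2f}, sd={values.std(ddof=1):.2f}")

# One-way analysis of variance comparing crowding across locations
f_stat, p_value = stats.f_oneway(*crowding_by_location.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")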

Relevance:

100.00%

Publisher:

Abstract:

One of the research focuses in the integer least squares problem is the decorrelation technique, which reduces the number of integer parameter search candidates and improves the efficiency of the integer parameter search method. It remains a challenging issue for determining carrier phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proven and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be used directly as a performance measure, since it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always correspond to a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of the decorrelated ambiguities was calculated for all of the methods to investigate ambiguity validation performance. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ an isotropic probabilistic model with a predetermined eigenvalue and no geometry or weighting system constraints. The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint, respectively. The real data scenarios involve both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method can significantly improve the efficiency of reducing the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases respectively. It also shows improvements in the number of search candidate points of 98.92% and 100% in the single-constellation and dual-constellation cases.
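
A minimal sketch of the decorrelation idea and the condition-number criterion discussed above, using pairwise integer Gaussian decorrelation on an illustrative ambiguity covariance matrix; this is not the MIICD method itself:

# Pairwise integer (Gaussian) decorrelation and the condition-number criterion.
# This is NOT the MIICD method; the covariance matrix is illustrative only.
import numpy as np

def integer_gauss_decorrelate(Q, max_iter=100):
    """Apply unimodular transformations Z so that Z.T @ Q @ Z has reduced
    off-diagonal correlations. Q must be symmetric positive definite."""
    n = Q.shape[0]
    Z = np.eye(n, dtype=int)
    Qd = Q.copy().astype(float)
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                mu = int(round(Qd[i, j] / Qd[j, j]))
                if mu != 0:
                    T = np.eye(n, dtype=int)
                    T[j, i] = -mu   # column i <- column i - mu * column j
                    Qd = T.T @ Qd @ T
                    Z = Z @ T
                    changed = True
        if not changed:
            break
    return Qd, Z

# Illustrative, highly correlated ambiguity covariance matrix
Q = np.array([[6.290, 5.978, 0.544],
              [5.978, 6.292, 2.340],
              [0.544, 2.340, 6.288]])

Qd, Z = integer_gauss_decorrelate(Q)
print("condition number before:", np.linalg.cond(Q))
print("condition number after: ", np.linalg.cond(Qd))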

Relevance:

100.00%

Publisher:

Abstract:

Competitive sailing is characterised by continuous interdependencies of decisions and actions. All actions imply a permanent monitoring of the environmental conditions, such as the intensity and direction of the wind, the characteristics of the sea, and the behaviour of the opposing sailors. These constraints on sailors' behaviour are in constant change, implying continuous adjustments in sailors' actions and decisions. Among the different parts of a regatta, tactics and strategy at the start are particularly relevant. Among coaches there is an adage that "the start is 50% of a regatta" (Houghton, 1984; Saltonstall, 1983/1986). Olympic sailing regattas are contested in boats of the same class, crewed by one, two or three sailors, depending on the boat class. Normally, before the start, sailors visit the racing venue and analyse the wind and sea characteristics in order to fine-tune their boats accordingly. Then, five minutes before the start, sailors begin their starting procedures in order to be in a favourable position at the starting line (at the "second zero"). This position is selected during the start period according to wind shift tendencies and the actions of the other boats (Figure 11.1). Only after the start signal can the boats cross the imaginary starting line between the race committee signal boat "A" and the pin-end boat. The start takes place against the wind (upwind), and the boats race towards mark 1. Based on their evaluation of the sea and wind characteristics (e.g. whether the wind is stronger at a particular place on the course), sailors re-adjust their strategy for the regatta. This strategy may change during the regatta, according to wind changes and the opponents' actions. More to the point, strategic decisions constrain, and are constrained by, on-line decisions during the regatta.

Relevance:

100.00%

Publisher:

Abstract:

This review examined socioeconomic inequalities in intakes of dietary factors associated with weight gain and overweight/obesity among adults in Europe. Literature searches were conducted of studies published between 1990 and 2007 examining socioeconomic position (SEP) and the consumption of energy, fat, fibre, fruit, vegetables and energy-rich drinks, as well as meal patterns. Forty-seven articles met the inclusion criteria. The direction of the associations between SEP and energy intake was inconsistent. Approximately half of the associations examined between SEP and fat intake showed higher total fat intakes among socioeconomically disadvantaged groups. There was some evidence that these groups consume a diet lower in fibre. The most consistent evidence of dietary inequalities was for fruit and vegetable consumption; lower socioeconomic groups were less likely to consume fruit and vegetables. Differences in energy, fat and fibre intakes (when found) were small to moderate in magnitude; however, differences were moderate to large for fruit and vegetable intakes. Socioeconomic inequalities in the consumption of energy-rich drinks and in meal patterns were relatively under-studied compared with other dietary factors. There were no regional or gender differences in the direction or magnitude of the inequalities in the dietary factors examined. The findings suggest that dietary behaviours may contribute to socioeconomic inequalities in overweight/obesity in Europe. However, there is consistent evidence only that fruit and vegetables may make an important contribution to inequalities in weight status across European regions.

Relevance:

100.00%

Publisher:

Abstract:

The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in its midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change in source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in the midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation, by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance between the two dimensions. Our results are therefore consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.

Relevance:

100.00%

Publisher:

Abstract:

Vehicle-emitted particles are of significant concern because of their potential to affect local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations than other environments, and people spend a substantial amount of time in these microenvironments when commuting. There is currently limited scientific knowledge of particle concentrations, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partly because the instrumentation required to conduct such measurements is not available in many research centres. Passenger waiting times and locations in such microenvironments have also not been investigated, which makes it difficult to evaluate a passenger's spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting the emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing typical transport microenvironments. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected for measurements of particle number size distributions and particle number and PM2.5 concentrations during two different seasons. Traffic and meteorological parameters were monitored simultaneously, with the aim of quantifying particle characteristics and investigating the impact of bus flow rate, station design and meteorological conditions on particle characteristics at the stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared with the station with a canyon structure (canyon station). During precipitation events, particle number concentration in the size range 25-250 nm decreased greatly, and the average daily reduction in PM2.5 concentration on rainy days compared with fine days was 44.2% and 22.6% at the open and canyon station, respectively. The effect of ambient wind speed on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed over the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day and under the same ambient wind speed and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
Passenger exposure to bus emissions on a platform was further assessed at another bus station in Brisbane, Australia. Sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. Over the whole day, the average PN13-800 concentration was 1.3 x 10^4 and 1.0 x 10^4 particles/cm3 at the centre and end of the platform, respectively, with PN50-100 accounting for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to overall daily exposure was assessed using two assumed scenarios, a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of daily exposure incurred at a location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, "exposure intensity" (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated for the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and to define the emission distribution for subsequent dispersion models of traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to the various driving conditions (cruise, decelerate, idle and accelerate), and it used multiple representative segments to capture an accurate emission distribution for real vehicle flow. The model not only helped to quantify the enhanced emissions at critical locations, but also to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, while no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalised pedestrian crossing, in order to assess the increase in particle number emissions from motor vehicles forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses: 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced while vehicles were stopped at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41, and 43 for the above six cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study of particle number and mass concentrations, together with particle size distributions, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as was the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also demonstrated its applicability and simplicity for use in a real-world transport microenvironment.
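
A rough sketch of the composite line source emission idea described above, in which emissions are summed over road segments and driving modes (cruise, decelerate, idle, accelerate); all emission rates, segments and vehicle counts below are hypothetical and do not reproduce the thesis's CLSE implementation:

# Composite line-source style calculation: emissions per road segment are the
# sum over driving modes of vehicles * mode emission rate * time in that mode.
# All emission rates, segment definitions and vehicle counts are hypothetical.
from dataclasses import dataclass

# Hypothetical particle number emission rates (particles per second per vehicle)
EMISSION_RATES = {"cruise": 1.0e12, "decelerate": 0.6e12, "idle": 0.4e12, "accelerate": 3.5e12}

@dataclass
class Segment:
    name: str
    vehicles: int          # vehicles queued in / passing through the segment
    seconds_in_mode: dict  # driving mode -> average dwell time (s)

def segment_emissions(seg: Segment) -> float:
    """Total particles emitted in one segment."""
    return sum(seg.vehicles * EMISSION_RATES[mode] * t
               for mode, t in seg.seconds_in_mode.items())

segments = [
    Segment("approach", vehicles=14, seconds_in_mode={"cruise": 5, "decelerate": 6}),
    Segment("stop line", vehicles=14, seconds_in_mode={"idle": 40}),
    Segment("departure", vehicles=14, seconds_in_mode={"accelerate": 8, "cruise": 4}),
]

for seg in segments:
    print(f"{seg.name}: {segment_emissions(seg):.2e} particles")
print("total:", f"{sum(segment_emissions(s) for s in segments):.2e}", "particles")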

Relevance:

100.00%

Publisher:

Abstract:

In spite of significant research into the development of efficient algorithms for three-carrier ambiguity resolution, the full performance potential of the additional frequency signals cannot be demonstrated effectively without actual triple-frequency data. In addition, all of the proposed algorithms have difficulty reliably resolving the medium-lane and narrow-lane ambiguities in various long-range scenarios. In this contribution, we investigate the effects of various distance-dependent biases, identifying the tropospheric delay as the key limitation for long-range three-carrier ambiguity resolution. In order to achieve reliable ambiguity resolution in regional networks with inter-station distances of hundreds of kilometers, a new geometry-free and ionosphere-free model is proposed to fix the integer ambiguities of the medium-lane or narrow-lane observables within just a few minutes and without a distance constraint. Finally, a semi-simulation method is introduced to generate the third-frequency signals from dual-frequency GPS data and to experimentally demonstrate the research findings of this paper.
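
For orientation, a geometry-free and ionosphere-free combination of triple-frequency phase observables (in metres) is one whose coefficients sum to zero and also cancel the first-order ionospheric term; the sketch below solves for such coefficients for the GPS L1/L2/L5 frequencies. It illustrates the standard constraints only, not the specific model proposed in the paper:

# Find coefficients (a1, a2, a3) for phase observables in metres such that
#   a1 + a2 + a3 = 0                              (geometry/clock/troposphere cancel)
#   a1*(f1/f1)^2 + a2*(f1/f2)^2 + a3*(f1/f3)^2 = 0 (first-order ionosphere cancels)
import numpy as np

f1, f2, f3 = 1575.42e6, 1227.60e6, 1176.45e6   # GPS L1, L2, L5 frequencies (Hz)
freqs = np.array([f1, f2, f3])

# Constraint matrix: row 0 -> geometry-free, row 1 -> ionosphere-free
A = np.vstack([np.ones(3), (f1 / freqs) ** 2])

# Admissible coefficients span the one-dimensional null space of A.
_, _, vt = np.linalg.svd(A)
coeffs = vt[-1] / np.max(np.abs(vt[-1]))   # arbitrary normalisation

print("coefficients (a1, a2, a3):", coeffs)
print("geometry-free check:  ", np.dot(coeffs, A[0]))
print("ionosphere-free check:", np.dot(coeffs, A[1]))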

Relevance:

100.00%

Publisher:

Abstract:

The low resolution of images has been one of the major limitations in recognising humans at a distance using their biometric traits, such as the face and iris. Super-resolution has been employed to improve both resolution and recognition performance simultaneously; however, the majority of techniques operate in the pixel domain, such that the biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris recognition, and has been shown to further improve recognition performance by directly super-resolving the features used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) features, which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including the face and iris. This paper proposes a framework for conducting super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
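
A generic sketch of Gabor-based feature extraction of the kind referred to above (a small filter bank whose response magnitudes are collected into a feature vector); the parameters are illustrative and this is not the proposed super-resolution framework:

# Build a small Gabor filter bank and concatenate down-sampled response
# magnitudes into a feature vector. Parameters and input are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor kernel of shape (size, size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_features(image, orientations=4, wavelength=8.0, sigma=4.0):
    """Concatenate coarsely down-sampled magnitude responses over orientations."""
    feats = []
    for k in range(orientations):
        theta = k * np.pi / orientations
        kern = gabor_kernel(31, wavelength, theta, sigma)
        response = fftconvolve(image, kern, mode="same")
        feats.append(np.abs(response)[::8, ::8].ravel())  # coarse down-sampling
    return np.concatenate(feats)

# Illustrative usage on a random patch standing in for a face/iris image
rng = np.random.default_rng(0)
image = rng.random((64, 64))
print(gabor_features(image).shape)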

Relevance:

100.00%

Publisher:

Abstract:

In the university education arena, it is becoming apparent that traditional methods of conducting classes are not the most effective way to achieve the desired learning outcomes. The traditional class involves the instructor verbalizing information for passive, note-taking students who are assumed to be empty receptacles waiting to be filled with knowledge. This method is limited in its effectiveness, as the flow of information is usually in one direction only. Furthermore, "it has been demonstrated that students in many cases can recite and apply formulas in numerical problems, but the actual meaning and understanding of the concept behind the formula is not acquired" (Crouch & Mazur). It is apparent that memorization is the main technique present in this approach. A more effective method of teaching involves increasing students' level of activity during, and hence their involvement in, the learning process. This technique stimulates self-learning and helps keep students' levels of concentration more uniform. In this work, I am therefore interested in studying the influence of a particular TLA on students' learning outcomes. I want to foster high-level understanding and critical thinking skills using active learning techniques (Silberman, 1996). The TLA in question aims to promote self-study by students and to expose them to a situation where their learning outcomes can be tested. The motivation behind this activity is based on studies which suggest that some sensory modalities are more effective than others. Using various instruments for data collection, and by means of a thorough analysis, I present evidence of the effectiveness of this action research project, which aims to improve my own teaching practices with the ultimate goal of enhancing students' learning.

Relevance:

100.00%

Publisher:

Abstract:

Cell migration is a highly complex process that requires the extension of the cell membrane in the direction of travel. This membrane is continuously remodeled to expand the leading edge and alter its membrane properties. It has long been known that there is a continual flow of polarized membrane traffic towards the leading edge during migration and that this trafficking is essential for cell migration. However, there is little information on how the cell coordinates exocytosis at the leading edge. It is also unclear whether these internal membranes are incorporated into the leading edge or merely deliver the proteins necessary for migration. We have shown that recycling endosome membrane is incorporated into the plasma membrane at the leading edge, expanding the membrane and at the same time delivering receptors to the leading edge to mediate migration. For this to happen, the surface Q-SNARE complex Stx4/SNAP23 translocates to the leading edge, where it binds the R-SNARE VAMP3 on the recycling endosome, allowing incorporation into the plasma membrane. Loss of any one of the components of this complex reduces efficient lamellipodia formation and restrains cell migration.

Relevance:

100.00%

Publisher:

Abstract:

In this study, the impact of message strategy on advertising performance will be examined in a business-to-business (B2B) context. From a theoretical standpoint, the study will explore differences in message type between symbolic and literal approaches in B2B advertisements. While there has been much discussion of the effects of symbolism (e.g. metaphors, abstract images and figurative language), an empirically tested scale that measures the degree of symbolism has not been developed. This research project focuses on the development of a methodological scale to accurately test differences in the direction of message appeals. Thus, insights into the role of message strategy in the B2B adoption process are anticipated, with contributions to future consumer and business advertising research.

Relevance:

100.00%

Publisher:

Abstract:

The work presented in this thesis investigates the mathematical modelling of charge transport in electrolyte solutions within the nanoporous structures of electrochemical devices. We compare two approaches found in the literature by developing one-dimensional transport models based on the Nernst-Planck and Maxwell-Stefan equations. The development of the Nernst-Planck equations relies on the assumption that the solution is infinitely dilute. However, this is typically not the case for the electrolyte solutions found within electrochemical devices. Furthermore, ionic concentrations much higher than the bulk concentrations can arise near the electrode/electrolyte interfaces due to the development of an electric double layer. Hence, the multicomponent interactions which are neglected by the Nernst-Planck equations may become important. The Maxwell-Stefan equations account for these multicomponent interactions, and thus they should provide a more accurate representation of transport in electrolyte solutions. To allow for the effects of the electric double layer in both the Nernst-Planck and Maxwell-Stefan equations, we do not assume local electroneutrality in the solution. Instead, we model the electrostatic potential as a continuously varying function by way of Poisson's equation. Importantly, we show that for a ternary electrolyte solution at high interfacial concentrations, the Maxwell-Stefan equations predict behaviour that is not recovered from the Nernst-Planck equations. The main difficulty in applying the Maxwell-Stefan equations to charge transport in electrolyte solutions is knowledge of the transport parameters. In this work, we apply molecular dynamics simulations to obtain the required diffusivities, and thus we are able to incorporate microscopic behaviour into a continuum-scale model. This is important given the small length scales we are concerned with, as we are still able to retain the computational efficiency of continuum modelling. This approach provides an avenue by which the microscopic behaviour may ultimately be incorporated into a full device-scale model. The one-dimensional Maxwell-Stefan model is extended to two dimensions, representing an important first step towards a fully coupled interfacial charge transport model for electrochemical devices. It allows us to begin investigating ambipolar diffusion effects, in which the motion of the ions in the electrolyte is affected by the transport of electrons in the electrode. As we do not consider modelling in the solid phase in this work, this is simulated by applying a time-varying potential to one interface of our two-dimensional computational domain, thus allowing a flow field to develop in the electrolyte. Our model facilitates the observation of the transport of ions near the electrode/electrolyte interface. For the simulations considered in this work, we show that while there is some motion in the direction parallel to the interface, the interfacial coupling is not sufficient for the ions in solution to be "dragged" along the interface for long distances.
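
For orientation, a standard dilute-solution form of the coupled system described above is the Nernst-Planck flux law together with a species balance and Poisson's equation for the potential (the Maxwell-Stefan model replaces the flux law with coupled multicomponent friction terms); the notation below is generic rather than taken from the thesis:

% Generic 1D Nernst--Planck--Poisson system (illustrative notation):
% N_i : molar flux of species i,  c_i : concentration,  z_i : charge number,
% D_i : diffusivity,  \phi : electrostatic potential,  \epsilon : permittivity.
\begin{align}
  N_i &= -D_i \frac{\partial c_i}{\partial x}
         - \frac{z_i F D_i}{R T}\, c_i \frac{\partial \phi}{\partial x}, \\
  \frac{\partial c_i}{\partial t} &= -\frac{\partial N_i}{\partial x}, \\
  \frac{\partial}{\partial x}\!\left( \epsilon \frac{\partial \phi}{\partial x} \right)
      &= -F \sum_i z_i c_i .
\end{align}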

Relevance:

100.00%

Publisher:

Abstract:

In this study, x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise reduction achieved using the 'zero-scan' method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner's x-ray tube, this study used a large slice thickness (1 cm) to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63) to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
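
A minimal sketch of the per-voxel zero-scan extrapolation described above: fit a straight line to each voxel's Hounsfield value as a function of scan number and keep the intercept, i.e. the value extrapolated to a hypothetical zeroth scan. The array sizes and random data below are purely illustrative:

# Zero-scan extrapolation sketch: for each voxel, fit HU = a*n + b over the
# repeated scans n = 1..N and keep the intercept b (the "zero-scan" value).
# The data array is random and stands in for 63 repeated CT scans of the gel.
import numpy as np

def zero_scan_extrapolate(scans):
    """scans: array of shape (n_scans, nz, ny, nx) of Hounsfield units.
    Returns the per-voxel intercept of a least-squares linear fit vs scan index."""
    n_scans = scans.shape[0]
    scan_idx = np.arange(1, n_scans + 1, dtype=float)
    flat = scans.reshape(n_scans, -1)                    # (n_scans, n_voxels)
    design = np.vstack([scan_idx, np.ones(n_scans)]).T   # (n_scans, 2)
    coeffs, *_ = np.linalg.lstsq(design, flat, rcond=None)
    intercepts = coeffs[1]                               # row 0: slope, row 1: intercept
    return intercepts.reshape(scans.shape[1:])

# Illustrative use with random data in place of real repeated scans
rng = np.random.default_rng(1)
scans = rng.normal(0.0, 5.0, size=(63, 4, 64, 64)) + 40.0   # fake HU volume
zero_scan_image = zero_scan_extrapolate(scans)
print(zero_scan_image.shape)   # (4, 64, 64)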