264 results for OPACITY CALCULATIONS


Relevance: 10.00%

Abstract:

Number lines are part of our everyday life (e.g., thermometers, kitchen scales) and are frequently used in primary mathematics as instructional aids, in texts and for assessment purposes on mathematics tests. There are two major types of number lines: structured number lines, which are the focus of this paper, and empty number lines. Structured number lines represent mathematical information by the placement of marks on a horizontal or vertical line which has been marked into proportional segments (Figure 1). Empty number lines are blank lines which students can use for calculations (Figure 2) and are not discussed further here (see van den Heuvel-Panhuizen, 2008, on the role of empty number lines). In this article, we will focus on how students’ knowledge of the structured number line develops and how they become successful users of this mathematical tool.
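
As a concrete illustration (not taken from the article itself), the proportional placement that a structured number line relies on can be written as a single formula: a value v on a line spanning the interval [a, b] and drawn with physical length L is marked at a distance

x(v) = L \cdot \frac{v - a}{b - a}

from the starting endpoint, so on a 0-100 line drawn 10 cm long the value 35 sits 3.5 cm from the left end.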

Relevance: 10.00%

Abstract:

Over the past century numerous waves of transnational media have washed across East Asia with cycles emanating from various centers of cultural production, such as Tokyo, Hong Kong, and Seoul. Most recently the People’s Republic of China (PRC) has begun to exert growing influence over the production and flow of screen media, a phenomenon tied to the increasing size and power of its overall economy. The country’s rising status achieved truly global recognition during the 2008 Beijing Olympics. In the seven years leading up to the event, the Chinese economy tripled in size, expanding from $1.3 trillion to almost $4 trillion, a figure that made it the world’s third largest economy, slightly behind Japan, but decisively ahead of its European counterparts, Germany, France, and the United Kingdom. The scale and speed of this transformation are stunning. Just as momentous are the changes in its film, television, and digital media markets, which now figure prominently in the calculations of producers throughout East Asia.

Relevance: 10.00%

Abstract:

This paper is concerned with the design and implementation of control strategies on a test-bed vehicle with six degrees of freedom. We design our trajectories to be efficient in both time and power consumption. We also consider cases in which actuator failure can arise, and discuss alternative control strategies for this situation. Our calculations are supplemented by experimental results.
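
One illustrative way to express the stated time/power trade-off (the abstract does not give the authors' exact performance criterion) is a weighted cost functional over the manoeuvre,

J(u) = \int_{t_0}^{t_f} \left( \alpha + \beta \, \lVert u(t) \rVert^{2} \right) dt, \qquad \alpha, \beta > 0,

where the \alpha term penalises duration and the \beta term penalises control effort; minimising J with \alpha \gg \beta yields a near time-optimal trajectory, while \beta \gg \alpha favours low power consumption.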

Relevance: 10.00%

Abstract:

Background: Untreated Chlamydia trachomatis infections in women can result in disease sequelae such as salpingitis and pelvic inflammatory disease (PID), ultimately culminating in tubal occlusion and infertility. Whilst nucleic acid amplification tests can effectively diagnose uncomplicated lower genital tract (LGT) infections, they are not suitable for diagnosing upper genital tract (UGT) pathological sequelae. As a consequence, this study aimed to identify serological markers that can, with a high degree of sensitivity and specificity, discriminate between LGT infections and UGT pathology. Methods: Plasma was collected from 73 women with a history of LGT infection, UGT pathology due to C. trachomatis or no serological evidence of C. trachomatis infection. Western blotting was used to analyse antibody reactivity against extracted chlamydial proteins. Sensitivity and specificity of differential markers were also calculated. Results: Four antigens (CT157, CT423, CT727 and CT396) were identified and found to be capable of discriminating between the infection and disease sequelae state. Sensitivity and specificity calculations showed that our assay for diagnosing LGT infection had a sensitivity and specificity of 75% and 76% respectively, whilst the assay for identifying UGT pathology demonstrated 80% sensitivity and 86% specificity. Conclusions: The use of these assays could potentially facilitate earlier diagnoses in women suffering UGT pathology due to C. trachomatis.
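
For readers unfamiliar with the quoted figures, the short sketch below shows how sensitivity and specificity are computed from a 2x2 confusion table; the counts used are hypothetical and are not the study's data.

# Illustrative only: sensitivity and specificity from a 2x2 confusion table.
# The counts below are hypothetical, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from true/false positive/negative counts."""
    sensitivity = tp / (tp + fn)   # proportion of true disease cases the assay flags
    specificity = tn / (tn + fp)   # proportion of disease-free cases the assay clears
    return sensitivity, specificity

# Hypothetical example: 20 women with UGT pathology, 25 without.
sens, spec = sensitivity_specificity(tp=16, fn=4, tn=21, fp=4)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 80%, 84%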

Relevance: 10.00%

Abstract:

The common approach to estimating bus dwell time at a BRT station is to apply the traditional dwell time methodology derived for suburban bus stops. Although sensitive to boarding and alighting passenger numbers and, to some extent, to the fare collection media, these traditional dwell time models do not account for platform crowding. Moreover, they fall short in accounting for the effect of passengers walking along a relatively long BRT platform. Using the experience from Brisbane busway (BRT) stations, a new variable, Bus Lost Time (LT), is introduced into the traditional dwell time model. The bus lost time variable captures the impact of passenger walking and platform crowding on bus dwell time, two characteristics that differentiate a BRT station from a bus stop. This paper reports the development of a methodology to estimate the bus lost time experienced by buses at a BRT platform. Results were compared with the Transit Capacity and Quality of Service Manual (TCQSM) approach to dwell time and station capacity estimation. When bus lost time was included in the dwell time calculations, the estimated BRT station platform capacity was reduced by 10.1%.
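
A minimal sketch of the idea follows, assuming a conventional additive dwell time model with a lost time term bolted on; the functional form and all coefficients are illustrative and are not the calibrated Brisbane model.

# Illustrative sketch of folding a "bus lost time" (LT) term into a
# conventional dwell time model. The additive form and the coefficients are
# assumptions for illustration, not the calibrated Brisbane model.

def dwell_time(boarding, alighting, lost_time_s,
               t_board=3.0, t_alight=2.0, dead_time=4.0):
    """Dwell time (s) = door dead time + passenger service time + bus lost time."""
    service = max(boarding * t_board, alighting * t_alight)  # busiest door channel governs
    return dead_time + service + lost_time_s

# Without lost time (suburban-stop assumption) vs. with lost time at a busy BRT platform.
print(dwell_time(boarding=10, alighting=5, lost_time_s=0.0))   # 34.0 s
print(dwell_time(boarding=10, alighting=5, lost_time_s=6.0))   # 40.0 s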

Relevance: 10.00%

Abstract:

This paper presents a comprehensive review of the scientific and grey literature on gross pollutant traps (GPTs). GPTs are designed with internal screens to capture gross pollutants (organic matter and anthropogenic litter). Their application involves professional societies, research organisations, local city councils, government agencies and the stormwater industry, often working in partnership. In view of this, the 113 references include unpublished manuscripts from these bodies along with scientific peer-reviewed conference papers and journal articles. The literature reviewed was organised into a matrix of six main devices and nine research areas (testing methodologies): design appraisal study, field monitoring/testing, experimental flow fields, gross pollutant capture/retention characteristics, residence time calculations, hydraulic head loss, screen blockages, flow visualisations and computational fluid dynamics (CFD). When the fifty-four-item matrix was analysed, twenty-eight research gaps were found in the tabulated literature. It was also found that the number of research gaps increased if only the scientific literature was considered. It is hoped that, in addition to informing the research community at QUT, this literature review will also be of use to other researchers in this field.

Relevance: 10.00%

Abstract:

The single crystal Raman spectra of the minerals brandholzite and bottinoite, formula M[Sb(OH)6]2•6H2O where M is Mg2+ or Ni2+ respectively, and the non-aligned Raman spectrum of mopungite, formula Na[Sb(OH)6], are presented for the first time. The mixed-metal minerals consist of alternating layers of [Sb(OH)6]- octahedra and mixed [M(H2O)6]2+/[Sb(OH)6]- octahedra. Mopungite comprises hydrogen-bonded layers of [Sb(OH)6]- octahedra linked within the layer by Na+ ions. The spectra of the three minerals were dominated by the Sb-O symmetric stretch of the [Sb(OH)6]- octahedron, which occurs at approximately 620 cm-1. The Raman spectrum of mopungite showed many similarities to the spectra of the di-octahedral minerals, supporting the view that the Sb octahedra gave rise to most of the Raman bands observed, particularly below 1200 cm-1. Assignments have been proposed based on spectral comparison between the minerals, prior literature and density functional theory calculations of the vibrational spectra of the free [Sb(OH)6]- and [M(H2O)6]2+ octahedra, using the B3LYP/6-31G(d) model chemistry with the LANL2DZ basis set on the Sb atom. The single crystal spectra showed good mode separation, allowing the majority of the bands to be assigned a symmetry species of A or E.

Relevance: 10.00%

Abstract:

The mechanism for the decomposition of hydrotalcite remains unresolved. Controlled rate thermal analysis (CRTA) enables this decomposition pathway to be explored. The thermal decomposition of hydrotalcites with hexacyanoferrite(II) and hexacyanoferrate(III) in the interlayer has been studied using CRTA technology. X-ray diffraction shows that the hydrotalcites studied have d(003) spacings of 11.1 and 10.9 Å, which compare with d-spacings of 7.9 and 7.98 Å for hydrotalcites with carbonate or sulphate in the interlayer. Calculations based upon the CRTA measurements show that 7 moles of water are lost, proving that the formula of the hexacyanoferrite(II)-intercalated hydrotalcite is Mg6Al2(OH)16[Fe(CN)6]0.5•7H2O and that of the hexacyanoferrate(III)-intercalated hydrotalcite is Mg6Al2(OH)16[Fe(CN)6]0.66•9H2O. Dehydroxylation combined with loss of CN units occurs in three steps, between (a) 310 and 367°C, (b) 367 and 390°C and (c) 390 and 428°C, for both the hexacyanoferrite(II)- and hexacyanoferrate(III)-intercalated hydrotalcites.
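
As a back-of-envelope check of the reported water stoichiometry (this is not the CRTA calculation itself), the theoretical mass fraction of the 7 moles of interlayer water in the proposed hexacyanoferrite(II) formula can be computed from standard atomic masses:

# Back-of-envelope check of the interlayer water content in the proposed
# formula Mg6Al2(OH)16[Fe(CN)6]0.5.7H2O. This is not the CRTA calculation in
# the study, just the theoretical mass fraction of the 7 moles of water.

M = {"Mg": 24.305, "Al": 26.982, "O": 15.999, "H": 1.008,
     "Fe": 55.845, "C": 12.011, "N": 14.007}

hydroxide = 16 * (M["O"] + M["H"])                      # (OH)16
ferrocyanide = 0.5 * (M["Fe"] + 6 * (M["C"] + M["N"]))  # [Fe(CN)6] x 0.5
water = 7 * (2 * M["H"] + M["O"])                       # 7 H2O
formula_mass = 6 * M["Mg"] + 2 * M["Al"] + hydroxide + ferrocyanide + water

print(f"formula mass ~ {formula_mass:.1f} g/mol")       # ~704 g/mol
print(f"water fraction ~ {water / formula_mass:.1%}")   # roughly 18% of the mass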

Relevance: 10.00%

Abstract:

This study uses dosimetry film measurements and Monte Carlo simulations to investigate the accuracy of type-a (pencil-beam) dose calculations for predicting the radiation doses delivered during stereotactic radiotherapy treatments of the brain. It is shown that when evaluating doses in a water phantom, the type-a algorithm provides dose predictions which are accurate to within clinically relevant criteria, gamma(3%,3mm), but these predictions are nonetheless subtly different from the results of evaluating doses from the same fields using radiochromic film and Monte Carlo simulations. An analysis of a clinical meningioma treatment suggests that when predicting stereotactic radiotherapy doses to the brain, the inaccuracies of the type-a algorithm can be exacerbated by inadequate evaluation of the effects of nearby bone or air, resulting in dose differences of up to 10% for individual fields. The results of this study indicate the possible advantage of using Monte Carlo calculations, as well as measurements with high-spatial resolution media, to verify type-a predictions of dose delivered in cranial treatments.
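
The sketch below is a minimal one-dimensional gamma-index calculation of the kind implied by the gamma(3%, 3 mm) criterion; the global dose normalisation, brute-force search and hypothetical dose profiles are simplifications for illustration only.

# Minimal 1-D gamma-index sketch for a 3%/3 mm criterion, of the kind used to
# compare type-a, film and Monte Carlo dose distributions. Global dose
# normalisation and a brute-force search are simplifications for illustration.

import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """Return the gamma value at each reference point (gamma <= 1 passes)."""
    d_max = d_ref.max()                              # global normalisation dose
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dd = (d_eval - dr) / (dose_tol * d_max)      # dose difference term
        dx = (x_eval - xr) / dist_tol_mm             # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dx**2).min())  # best match over evaluated curve
    return np.array(gammas)

# Hypothetical profiles: the evaluated curve is 2% hot and shifted by 1 mm.
x = np.linspace(-30, 30, 121)
ref = np.exp(-(x / 15.0) ** 2)
ev = 1.02 * np.exp(-((x - 1.0) / 15.0) ** 2)
gamma = gamma_1d(x, ref, x, ev)
print(f"pass rate: {(gamma <= 1.0).mean():.0%}")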

Relevance: 10.00%

Abstract:

With the rapid increase in electrical energy demand, power generation in the form of distributed generation is becoming more important. However, the connection of distributed generators (DGs) to a distribution network or a microgrid can create several protection issues. The protection of these networks using protective devices based only on current is a challenging task due to changes in fault current levels and fault current direction. The isolation of a faulted segment from such networks is difficult if converter-interfaced DGs are connected, as these DGs limit their output currents during a fault. Furthermore, if DG sources are intermittent, current-sensing protective relays are difficult to set, since the fault current changes with time depending on the availability of the DG sources. System restoration after a fault is also a challenging protection issue in a distribution network or microgrid with converter-interfaced DGs. Usually, all the DGs will be disconnected immediately after a fault in the network. The safety of personnel and equipment of the distribution network, reclosing with DGs and arc extinction are the major reasons for these DG disconnections.

In this thesis, an inverse time admittance (ITA) relay is proposed to protect a distribution network or a microgrid that has several converter-interfaced DG connections. The ITA relay is capable of detecting faults and isolating a faulted segment from the network, allowing the unfaulted segments to continue operating in either grid-connected or islanded mode. The relay does not base its tripping decision on the fault current alone; it also uses the voltage at the relay location. The ITA relay can therefore be used effectively in a DG-connected network in which the fault current level is low or changes with time. Different case studies are considered to evaluate the performance of the ITA relays in comparison with some of the existing protection schemes. The relay performance is evaluated in different types of distribution networks: a radial feeder, the IEEE 34 node test feeder and a mesh network. The results are validated through PSCAD simulations and MATLAB calculations. Several experimental tests are carried out in a laboratory test feeder, with the ITA relay implemented in LabVIEW, to validate the numerical results.

Furthermore, a novel control strategy based on fold-back current control is proposed for a converter-interfaced DG to overcome the problems associated with system restoration. The control strategy enables self-extinction of the arc if the fault is a temporary arc fault, and also supports self-restoration of the system if the DG capacity is sufficient to supply the load. Coordination with reclosers, without disconnecting the DGs from the network, is discussed. This results in increased reliability of the network through a reduction in customer outages.
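
A heavily hedged sketch of what an inverse-time admittance characteristic can look like follows: the trip time falls as the admittance measured at the relay (|I|/|V|) rises past a set value. The curve form and all constants are assumptions for illustration, not the characteristic developed in the thesis.

# Illustrative inverse-time admittance (ITA) style characteristic: trip time
# shrinks as the measured admittance at the relay (|I|/|V|) grows past a set
# value. The curve form and constants are assumptions for illustration only.

def ita_trip_time(v_measured, i_measured, y_set, k=0.14, alpha=0.02):
    """Return trip time in seconds, or None if the relay does not pick up."""
    y_measured = abs(i_measured) / abs(v_measured)   # admittance seen by the relay
    ratio = y_measured / y_set
    if ratio <= 1.0:
        return None                                  # below pickup: no trip
    return k / (ratio**alpha - 1.0)                  # IEC-style inverse-time shape

# A close-in fault raises the measured admittance and so trips faster than a remote one.
print(ita_trip_time(v_measured=0.4, i_measured=2.0, y_set=1.2))   # close-in fault
print(ita_trip_time(v_measured=0.9, i_measured=1.6, y_set=1.2))   # remote fault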

Relevance: 10.00%

Abstract:

A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator of safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure.

Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that, using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms.

A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods for testing path traversability, for losing excess altitude, and for the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft.

The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm.

A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds.
A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods specifically its own, to calculate the required pitch to fly. This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
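
For context on the guidance law being enhanced above, the sketch below implements the core lateral acceleration command of Park, Deyst, and How (2007), a = 2 V^2 sin(eta) / L1; the wind compensation and turn-direction logic contributed by the thesis are not shown, and the numbers used are hypothetical.

# Minimal sketch of the nonlinear lateral guidance law of Park, Deyst and How
# (2007): the lateral acceleration command is a = 2 V^2 sin(eta) / L1, where
# eta is the angle from the ground-velocity vector to a reference point a
# distance L1 ahead on the path. The ENG wind handling is not shown.

import math

def lateral_accel_cmd(ground_speed, heading_rad, pos_xy, ref_xy, l1_dist):
    """Lateral acceleration command (m/s^2) toward a reference point on the path."""
    to_ref = math.atan2(ref_xy[1] - pos_xy[1], ref_xy[0] - pos_xy[0])
    eta = (to_ref - heading_rad + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return 2.0 * ground_speed**2 * math.sin(eta) / l1_dist

# Aircraft flying east at 20 m/s; reference point 100 m ahead and 20 m to the left.
print(lateral_accel_cmd(20.0, 0.0, (0.0, 0.0), (100.0, 20.0), l1_dist=102.0))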

Relevance: 10.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
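
A toy illustration of the maximal-discrepancy penalty described above: the largest gap between a classifier's error on the first half of the sample and its error on the second half, maximised over the hypothesis class. Here a brute-force search over one-dimensional threshold classifiers on synthetic data stands in for the empirical risk minimisation step.

# Toy illustration of the maximal-discrepancy penalty: the largest gap between
# error on the first half of the sample and error on the second half,
# maximised over a simple class of threshold classifiers. Synthetic data only.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = (x > 0.1).astype(int) ^ (rng.uniform(size=200) < 0.1)  # noisy threshold labels

def errors(threshold, xs, ys):
    return np.mean((xs > threshold).astype(int) != ys)

first_x, first_y = x[:100], y[:100]
second_x, second_y = x[100:], y[100:]

# Maximal discrepancy over the class {x > t : t in a grid of thresholds}.
thresholds = np.linspace(-1, 1, 201)
discrepancy = max(abs(errors(t, first_x, first_y) - errors(t, second_x, second_y))
                  for t in thresholds)
print(f"maximal discrepancy penalty ~ {discrepancy:.3f}")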

Relevance: 10.00%

Abstract:

This special feature section of Journal of Management & Organization (Volume 17/1 - March 2011) sets out to widen understanding of the processes of stability and change in today's organizations, with a particular emphasis on the contribution of institutional approaches to organizational studies. Institutional perspectives on organization theory assume that rational, economic calculations, such as the maximization of profits or the optimization of resource allocation, are not sufficient to understand the behavior of organizations and their strategic choices. Institutionalists acknowledge the great uncertainty associated with the conduct of organizations and suggest that taken-for-granted values, beliefs and meanings within and outside organizations also play an important role in the determination of legitimate action.

Relevance: 10.00%

Abstract:

Eight new N-arylstilbazolium chromophores with electron donating –NR2 (R = Me or Ph) substituents have been synthesized via Knoevenagel condensations and isolated as their PF6− salts. These compounds have been characterized by using various techniques including 1H NMR and IR spectroscopies and electrospray mass spectrometry. UV–vis absorption spectra recorded in acetonitrile are dominated by intense, low energy π → π* intramolecular charge-transfer (ICT) bands, and replacing Me with Ph increases the ICT energies. Cyclic voltammetric studies show irreversible reduction processes, together with oxidation waves that are irreversible for R = Me, but reversible for R = Ph. Single crystal X-ray structures have been determined for three of the methyl ester-substituted stilbazolium salts and for the Cl− salts of their picolinium precursors. Time-dependent density functional theory calculations afford reasonable predictions of ICT energies, but greater rigour is necessary for –NPh2 derivatives. The four new acid-functionalized dyes give moderate sensitization efficiencies (ca. 0.2%) when using TiO2-based photoanodes, with relatively higher values for R = Ph vs Me, while larger efficiencies (up to 0.8%) are achieved with ZnO substrates.
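
Assuming the quoted sensitization efficiencies are the usual photovoltaic power-conversion efficiencies (the abstract does not define them), they would be obtained as

\eta = \frac{J_{sc}\, V_{oc}\, FF}{P_{in}},

where J_sc is the short-circuit current density, V_oc the open-circuit voltage, FF the fill factor and P_in the incident irradiance.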

Relevance: 10.00%

Abstract:

This study investigated how the interpretation of mathematical problems by Year 7 students impacted on their ability to demonstrate what they can do in NAPLAN numeracy testing. In the study, mathematics is viewed as a culturally and socially determined system of signs and signifiers that establish the meaning, origins and importance of mathematics. The study hypothesises that students are unable to succeed in NAPLAN numeracy tests because they cannot interpret the questions, even though they may be able to perform the necessary calculations. To investigate this, the study applied contemporary theories of literacy to the context of mathematical problem solving. A case study design with multiple methods was used. The study used a correlation design to explore the connections between NAPLAN literacy and numeracy outcomes of 198 Year 7 students in a Queensland school. Additionally, qualitative methods provided a rich description of the effect of the various forms of NAPLAN numeracy questions on the success of ten Year 7 students in the same school. The study argues that there is a quantitative link between reading and numeracy. It illustrates that interpretation (literacy) errors are the most common error type in the selected NAPLAN questions, made by students of all abilities. In contrast, conceptual (mathematical) errors are less frequent amongst more capable students. This has important implications in preparing students for NAPLAN numeracy tests. The study concluded by recommending that increased focus on the literacies of mathematics would be effective in improving NAPLAN results.