896 results for Polyhedral sets
Abstract:
PURPOSE. This study was conducted to determine the magnitude of pupil center shift between the illumination conditions provided by corneal topography measurement (photopic illuminance) and by Hartmann-Shack aberrometry (mesopic illuminance) and to investigate the importance of this shift when calculating corneal aberrations and for the success of wavefront-guided surgical procedures. METHODS. Sixty-two subjects with emmetropia underwent corneal topography and Hartmann-Shack aberrometry. Corneal limbus and pupil edges were detected, and the differences between their respective centers were determined for both procedures. Corneal aberrations were calculated using the pupil centers for corneal topography and for Hartmann-Shack aberrometry. Bland-Altman plots and paired t-tests were used to analyze the differences between corneal aberrations referenced to the two pupil centers. RESULTS. The mean magnitude (modulus) of the displacement of the pupil with the change of the illumination conditions was 0.21 ± 0.11 mm. The effect of this pupillary shift was manifest in coma corneal aberrations for 5-mm pupils, but the two sets of aberrations calculated with the two pupil positions were not significantly different. Sixty-eight percent of the population had differences in coma smaller than 0.05 µm, and only 4% had differences larger than 0.1 µm. Pupil displacement was not large enough to significantly affect other higher-order Zernike modes. CONCLUSIONS. Estimated corneal aberrations changed slightly between the photopic and mesopic illumination conditions of corneal topography and Hartmann-Shack aberrometry. However, according to published tolerance ranges, this systematic pupil shift is enough to degrade optical quality below the theoretically predicted diffraction limit of wavefront-guided corneal surgery.
Abstract:
This article explores two matrix methods for inducing the "shades of meaning" (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) each compute a set of vectors, with each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data. One set reflects concepts generally appearing in text, and the second set comprises words used for investigations into word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than that of word sense induction and hence relevant to thematic analysis of opinion, where nuances of opinion can arise.
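The contrast between the two factorisations can be sketched on a toy word-context matrix (the matrix, rank and update rule below are invented for illustration; this is not the paper's corpus or evaluation). NMF yields non-negative factors whose rows can be read directly as candidate shades of meaning, while SVD's singular vectors mix signs:

```python
import numpy as np

# Hypothetical word-context co-occurrence matrix for one ambiguous word
# (rows: context traces, columns: co-occurring terms).
X = np.array([
    [4., 3., 0., 0.],
    [3., 4., 1., 0.],
    [0., 1., 4., 3.],
    [0., 0., 3., 4.],
])

def nmf(X, k, iters=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: X ~ W @ H."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 1e-3
    H = rng.random((k, X.shape[1])) + 1e-3
    for _ in range(iters):
        # multiplicative updates keep every entry non-negative
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# NMF: each row of H is a non-negative candidate "shade of meaning".
W, H = nmf(X, k=2)

# SVD: the right-singular vectors play the analogous role, but with mixed signs.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

nmf_err = np.linalg.norm(X - W @ H)
svd_err = np.linalg.norm(X - U[:, :2] * s[:2] @ Vt[:2])
```

By the Eckart-Young theorem the truncated SVD always achieves the lower reconstruction error, but its mixed-sign vectors are harder to read as senses, which is one common motivation for preferring NMF in this kind of task.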
Abstract:
Collaborative tagging can help users organize, share and retrieve information in an easy and quick way. Because collaborative tagging information carries users' important personal preference information, it can be used to recommend personalized items to users. This paper proposes a novel tag-based collaborative filtering approach for recommending personalized items to users of online communities that are equipped with tagging facilities. Based on the distinctive three-dimensional relationships among users, tags and items, a new similarity measure is proposed to generate the neighborhood of users with similar tagging behavior instead of similar implicit ratings. Promising experimental results show that, by using the tagging information, the proposed approach outperforms the standard user-based and item-based collaborative filtering approaches.
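A rough sketch of the idea (the cosine similarity and all data below are illustrative stand-ins, not the paper's actual measure): neighbours are found by comparing users' tag-usage profiles rather than their ratings, and items tagged by those neighbours are recommended:

```python
from collections import Counter
from math import sqrt

# Hypothetical user -> tag-usage counts and user -> tagged items.
user_tags = {
    "alice": Counter({"python": 3, "ml": 2}),
    "bob":   Counter({"python": 2, "ml": 3, "web": 1}),
    "carol": Counter({"cooking": 4, "travel": 2}),
}
user_items = {
    "alice": {"item1", "item2"},
    "bob":   {"item2", "item3"},
    "carol": {"item4"},
}

def cosine(a, b):
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, k=1):
    """Recommend unseen items tagged by the k users with the most similar tag profiles."""
    neighbours = sorted(
        (u for u in user_tags if u != user),
        key=lambda u: cosine(user_tags[user], user_tags[u]),
        reverse=True,
    )[:k]
    seen = user_items[user]
    return {i for u in neighbours for i in user_items[u]} - seen
```

Here "alice" and "bob" share tags, so "bob" becomes the neighbour and his yet-unseen item is recommended, while "carol", whose tag vocabulary is disjoint, contributes nothing.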
Abstract:
An asset registry arguably forms the core system that needs to be in place before other systems can operate or interoperate. Most systems have rudimentary asset registry functionality that stores assets, relationships, or characteristics, and this leads to different asset management systems storing similar sets of data in multiple locations in an organisation. As organisations have been slowly moving their information architecture toward a service-oriented architecture, they have also been consolidating their multiple data stores to form a "single point of truth". As part of a strategy to integrate several asset management systems in an Australian railway organisation, a case study for developing a consolidated asset registry was conducted. A decision was made to use the MIMOSA OSA-EAI CRIS data model as well as the OSA-EAI Reference Data in building the platform due to the standard's relative maturity and completeness. A pilot study of electrical traction equipment was selected, and the data sources feeding into the asset registry were primarily diagram-based. This paper presents the pitfalls encountered, approaches taken, and lessons learned during the development of the asset registry.
Abstract:
Association rule mining is one technique that is widely used when querying databases, especially those that are transactional, in order to obtain useful associations or correlations among sets of items. Much work has been done focusing on efficiency, effectiveness and redundancy. There has also been a focus on the quality of rules from single-level datasets, with many interestingness measures proposed. However, with multi-level datasets now being common, there is a lack of interestingness measures developed for multi-level and cross-level rules. Single-level measures do not take into account the hierarchy found in a multi-level dataset. This leaves the Support-Confidence approach, which does not consider the hierarchy anyway and has other drawbacks, as one of the few measures available. In this paper we propose two approaches which measure multi-level association rules to help evaluate their interestingness. These measures of diversity and peculiarity can be used to help identify those rules from multi-level datasets that are potentially useful.
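Why hierarchy matters can be shown with a toy diversity score over a made-up taxonomy (a generic illustration only; the paper's actual diversity and peculiarity measures may be defined differently). Items that sit far apart in the concept tree make a rule more "diverse" than items under the same parent:

```python
# Hypothetical taxonomy paths: each item is a tuple from root to leaf.
ITEMS = {
    "skim_milk":   ("food", "dairy", "milk", "skim_milk"),
    "full_milk":   ("food", "dairy", "milk", "full_milk"),
    "cheddar":     ("food", "dairy", "cheese", "cheddar"),
    "white_bread": ("food", "bakery", "bread", "white_bread"),
}

def hier_distance(a, b):
    """Tree distance: steps up to the deepest common ancestor and back down."""
    pa, pb = ITEMS[a], ITEMS[b]
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)

def diversity(itemset):
    """Average pairwise hierarchy distance: higher means the rule mixes more distant concepts."""
    items = list(itemset)
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    if not pairs:
        return 0.0
    return sum(hier_distance(a, b) for a, b in pairs) / len(pairs)
```

A rule over two kinds of milk scores low, while one mixing milk and bread scores high; a plain Support-Confidence measure would see no such difference.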
Abstract:
Silverstone’s Why Study the Media? (hereafter WSM) is a difficult book to review, especially in such a short space. The content spans millennia of theoretical, analytical and historical perspectives on our media, but is none the less entirely contemporary and relevant in its focus. Silverstone’s perspective is at times elusive because the book sets out, successfully I think, to answer the question posed in the title. But it does so by raising major questions in media studies, important questions, in a way that does not imply quick and easy answers.
Abstract:
This study was designed to examine affective leader behaviours, and their impact on cognitive, affective and behavioural engagement. Researchers (e.g., Cropanzano & Mitchell, 2005; Moorman et al., 1998) have called for more research to be directed toward modelling and testing sets of relationships which better approximate the complexity associated with contemporary organisational experience. This research has attempted to do this by clarifying and defining the construct of engagement, and then by examining how each of the engagement dimensions is impacted by affective leader behaviours. Specifically, a model was tested that identifies leader behaviour antecedents of cognitive, affective and behavioural engagement. Data were collected from five public-sector organisations. Structural equation modelling was used to identify the relationships between the engagement dimensions and leader behaviours. The results suggested that affective leader behaviours had a substantial direct impact on cognitive engagement, which in turn influenced affective engagement, which then influenced intent to stay and extra-role performance. The results indicated a directional process for engagement, but particularly highlighted the significant impact of affective leader behaviours as an antecedent to engagement. In general terms, the findings will provide a platform from which to develop a robust measure of engagement, and will be helpful to human resource practitioners interested in understanding the directional process of engagement and the importance of affective leadership as an antecedent to engagement.
Abstract:
1. Species' distribution modelling relies on adequate data sets to build reliable statistical models with high predictive ability. However, the money spent collecting empirical data might be better spent on management. A less expensive source of species' distribution information is expert opinion. This study evaluates expert knowledge and its source. In particular, we determine whether models built on expert knowledge apply over multiple regions or only within the region where the knowledge was derived. 2. The case study focuses on the distribution of the brush-tailed rock-wallaby Petrogale penicillata in eastern Australia. We brought together substantial, well-designed field data and knowledge from nine experts across two biogeographically different regions. We used a novel elicitation tool within a geographical information system to systematically collect expert opinions. The tool utilized an indirect approach to elicitation, asking experts simpler questions about observable rather than abstract quantities, with measures in place to identify uncertainty and offer feedback. Bayesian analysis was used to combine field data and expert knowledge in each region to determine: (i) how expert opinion affected models based on field data and (ii) how similar expert-informed models were within regions and across regions. 3. The elicitation tool effectively captured the experts' opinions and their uncertainties. Experts were comfortable with the map-based elicitation approach used, especially with graphical feedback. Experts tended to predict lower values of species occurrence compared with field data. 4. Across experts, consensus on effect sizes occurred for several habitat variables. Expert opinion generally influenced predictions from field data. However, south-east Queensland and north-east New South Wales experts had different opinions on the influence of elevation and geology, with these differences attributable to geological differences between these regions. 5. Synthesis and applications. When formulated as priors in Bayesian analysis, expert opinion is useful for modifying or strengthening patterns exhibited by empirical data sets that are limited in size or scope. Nevertheless, the ability of an expert to extrapolate beyond their region of knowledge may be poor. Hence there is significant merit in obtaining information from local experts when compiling species' distribution models across several regions.
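The mechanics of treating an expert opinion as a prior can be illustrated with a deliberately simple conjugate model (the study's actual elicitation and Bayesian analysis are far richer; every number below is hypothetical):

```python
# An expert believes the species occurs at ~20% of sites, with confidence
# worth about 10 "pseudo-surveys": encode as a Beta(a, b) prior with
# a / (a + b) = 0.2 and a + b = 10.
a, b = 2.0, 8.0

# Field data: species detected at 11 of 20 surveyed sites.
detections, surveys = 11, 20

# The Beta prior is conjugate to the binomial likelihood,
# so the posterior is Beta(a + k, b + n - k).
post_a = a + detections
post_b = b + (surveys - detections)

prior_mean = a / (a + b)                      # expert alone
mle = detections / surveys                    # field data alone
posterior_mean = post_a / (post_a + post_b)   # expert + field data combined
```

The posterior mean lands between the expert's estimate and the field estimate, weighted by how much information each carries, which is exactly the "modifying or strengthening" role the abstract describes.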
Abstract:
Four sets of PM10 samples were collected at three sites in SEQ from December 2002 to August 2004. Three of these sets of samples were collected by QLD EPA as part of their regular air monitoring program at Woolloongabba, Rocklea and Eagle Farm. Half of the samples were used in this study for the analysis of water-soluble ions, namely Na+, K+, Mg2+, Ca2+, NH4+, Cl-, NO3-, SO42-, F-, Br-, NO2- and PO43-, and the other half was retained by QLD EPA. The fourth set of samples was collected at Rocklea, specifically for this study. A quarter of the samples obtained from this set were used to analyse water-soluble ions; a quarter was used to analyse Pb, Cu, Al, Fe, Mn and Zn; and the rest were used to analyse the US EPA 16 priority PAHs. The water-soluble ions were extracted ultrasonically with water, and the major water-soluble anions as well as NH4+ were analysed using IC. Na+, K+, Mg2+, Ca2+, Pb, Cu, Al, Fe, Mn and Zn were analysed using ICP-AES, while PAHs were extracted with acetonitrile and analysed using HPLC. Of the analysed water-soluble ions, Cl-, NO3-, SO42-, Na+, K+, Mg2+ and Ca2+ were high in concentration and determined in all the samples. F-, Br-, NO2-, PO43- and NH4+ ions were lower in concentration and determined only in some samples. Na+ and Cl- were high in all samples, indicating the importance of a marine source. Principal Component Analysis (PCA) was used to examine the temporal variations of the water-soluble ions at the three sites. The results indicated that there was no major difference between the three sites. However, comparing the average concentrations of ions and the Cl-/Na+ ratio, it was concluded that Woolloongabba had more marine influence than the other sites. Al, Fe and Zn were detected in all samples. Al and Fe were high in all samples, indicating the significance of a crustal matter source. Cu, Mn and Pb were present in low concentrations and were determined only in some samples.
The lower Pb concentrations observed in this study than in previous studies indicate that the phasing-out of leaded petrol had an appreciable impact on Pb levels in SEQ. This study reports, for the first time, simultaneous data on the water-soluble ion, metal and PAH levels of PM10 aerosols in Brisbane, and provides information on the most likely sources of these chemical species. Such information can be used alongside existing data to formulate PM10 pollution reduction strategies for SEQ in order to protect the community from the adverse effects of PM pollution.
Abstract:
Purpose: This study explored the spatial distribution of notified cryptosporidiosis cases and identified major socioeconomic factors associated with the transmission of cryptosporidiosis in Brisbane, Australia. Methods: We obtained the computerized data sets on the notified cryptosporidiosis cases and their key socioeconomic factors by statistical local area (SLA) in Brisbane for the period of 1996 to 2004 from the Queensland Department of Health and the Australian Bureau of Statistics, respectively. We used spatial empirical Bayes rate smoothing to estimate the spatial distribution of cryptosporidiosis cases. A spatial classification and regression tree (CART) model was developed to explore the relationship between socioeconomic factors and the incidence rates of cryptosporidiosis. Results: Spatial empirical Bayes analysis reveals that the cryptosporidiosis infections were primarily concentrated in the northwest and southeast of Brisbane. A spatial CART model shows that the relative risk for cryptosporidiosis transmission was 2.4 when the value of the Socio-Economic Index for Areas (SEIFA) was over 1028 and the proportion of residents with low educational attainment in an SLA exceeded 8.8%. Conclusions: There was remarkable variation in the spatial distribution of cryptosporidiosis infections in Brisbane. The spatial pattern of cryptosporidiosis appears to be associated with SEIFA and the proportion of residents with low educational attainment.
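The idea behind empirical Bayes rate smoothing can be sketched with Marshall's global smoother, a standard construction: each area's raw rate is shrunk toward the overall mean, with less shrinkage for areas with larger populations. The counts and populations below are invented, and the study's SLA-level (spatial) variant is more involved:

```python
# Hypothetical notified cases and populations for four small areas.
cases = [2, 0, 15, 4]
pops = [500, 800, 3000, 1200]

rates = [c / p for c, p in zip(cases, pops)]   # raw incidence rates
total_pop = sum(pops)
m = sum(cases) / total_pop                      # global mean rate

# Population-weighted variance of the raw rates around the mean,
# minus the expected Poisson noise, gives the prior (between-area) variance.
var = sum(p * (r - m) ** 2 for p, r in zip(pops, rates)) / total_pop
s2 = max(var - m / (total_pop / len(cases)), 0.0)

smoothed = []
for p, r in zip(pops, rates):
    w = s2 / (s2 + m / p)        # shrinkage weight: more population -> trust the raw rate
    smoothed.append(w * r + (1 - w) * m)
```

Small areas with zero counts are pulled up toward the global mean rather than reported as zero risk, which stabilises the map before patterns are interpreted.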
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debates. These difficulties present challenges with the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA) which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation of this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics is described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
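The q = 2 backbone of MF-DFA described in the thesis can be sketched in a few lines: a minimal standard DFA with linear detrending, not the full multifractal implementation, run here on a synthetic series (the data, seed and scales are made up for illustration):

```python
import numpy as np

def dfa(x, scales):
    """Standard detrended fluctuation analysis (the q = 2 case of MF-DFA).
    Returns the scaling exponent alpha: ~0.5 for uncorrelated noise,
    larger for persistent (long-memory-like) series."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        msq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # linear detrending per window
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(msq)))   # RMS fluctuation F(s)
    # the slope of log F(s) vs log s is the DFA exponent alpha
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(42)
noise = rng.standard_normal(4096)              # memoryless series
scales = [16, 32, 64, 128, 256]
alpha = dfa(noise, scales)                     # expected near 0.5
walk_alpha = dfa(np.cumsum(noise), scales)     # strongly persistent series
```

MF-DFA generalises this by raising the per-window fluctuations to a power q before averaging, yielding a spectrum of exponents rather than the single q = 2 slope computed here.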
Abstract:
Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine sized particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease, and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003-10 µm) and are measured as different subsets of particle mass concentrations or particle number count. However, there are scientific challenges in analysing and interpreting the large data sets on motor vehicle emission factors, and there is little understanding of how the different particle metrics can serve as a basis for air quality regulation. To date a comprehensive inventory covering the broad size range of particles emitted by motor vehicles, and which includes particle number, does not exist anywhere in the world. This thesis covers research related to four important and interrelated aspects pertaining to particulate matter generated by motor vehicle fleets. These include the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; quantification of motor vehicle particle emission inventories; investigation of the modality within particle size distributions as a potential basis for developing air quality regulation; review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles; and the application of these aspects to the quantification, control and management of motor vehicle particle emissions.
In order to quantify emissions in terms of a comprehensive inventory, which covers the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, PM1, PM2.5 and PM10 (mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions with a lack of measurement data to derive emission factors, or where experimental data are available but are of insufficient scope. The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet, and included the quantification of particle emissions covering the full size range of particles emitted by vehicles, based on measurement data. The inventory quantified particle emissions measured in terms of particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand. The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations; and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide.
The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically-generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10. The study also reviewed and synthesized existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardized measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of different instrumentation used to measure ultrafine particles; that few data are available on ultrafine particle chemistry and composition, long-term monitoring, and the characterization of their spatial and temporal distribution in urban areas; and that no inventories for particle number are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. Conclusions from this review included the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment. The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty relates to the combining of expertise from two distinctly separate disciplines: aerosol science and transport modelling. The new knowledge and concepts developed in this PhD research provide previously unavailable data and methods which can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
Abstract:
As a consequence of the increased incidence of collaborative arrangements between firms, the competitive environment characterising many industries has undergone profound change. It is suggested that rivalry is not necessarily enacted by individual firms according to the traditional mechanisms of direct confrontation in factor and product markets, but rather as collaborative orchestration between a number of participants or network members. Strategic networks are recognised as sets of firms within an industry that exhibit denser strategic linkages among themselves than other firms within the same industry. Based on this, strategic networks are determined according to evidence of strategic alliances between firms comprising the industry. As a result, a single strategic network represents a group of firms closely linked according to collaborative ties. Arguably, the collective outcome of these strategic relationships engineered between firms suggests that the collaborative benefits attributed to interorganisational relationships require closer examination with respect to their propensity to influence rivalry in intraindustry environments. Derived in large part from the social sciences, network theory allows for the micro and macro examination of the opportunities and constraints inherent in the structure of relationships in strategic networks, establishing a relational approach upon which the conduct and performance of firms can be more fully understood. Research to date has yet to empirically investigate the relationship between strategic networks and rivalry. The limited research that has been completed utilising a network rationale to investigate competitive patterns in contemporary industry environments has been characterised by a failure to directly measure rivalry. Further, this prior research has typically embedded investigation in industry settings dominated by technological or regulatory imperatives, such as the microprocessor and airline industries.
These industries, due to the presence of such imperatives, are arguably more inclined to support the realisation of network rivalry, through subscription to prescribed technological standards (e.g., the microprocessor industry) or by being bound by regulatory constraints dictating operation within particular market segments (the airline industry). In order to counter these weaknesses, the proposition guiding this research ("Are patterns of rivalry predicted by strategic network membership?") is embedded in the United States Light Vehicles Industry, an industry not dominated by technological or regulatory imperatives. Further, rivalry is directly measured and utilised in the research, thus distinguishing this investigation from prior research efforts. The timeframe of investigation is 1993-1999, with all research data derived from secondary sources. Strategic networks were defined within the United States Light Vehicles Industry based on evidence of horizontal strategic relationships between firms comprising the industry. The measure of rivalry used to directly ascertain the competitive patterns of industry participants was derived from the traditional Herfindahl Index, modified to account for patterns of rivalry observed at the market segment level. Statistical analyses of the strategic network and rivalry constructs found little evidence to support the contention of network rivalry; indeed, greater levels of rivalry were observed between firms comprising the same strategic network than between firms participating in opposing network structures. Based on these results, patterns of rivalry evidenced in the United States Light Vehicle Industry over the period 1993-1999 were not found to be predicted by strategic network membership. The findings generated by this research are in contrast to current theorising in the strategic network-rivalry realm. In this respect, these findings are surprising.
The relevance of industry type, in conjunction with prevailing network methodology, provides the basis upon which these findings are contemplated. Overall, this study raises some important questions in relation to the relevance of the network rivalry rationale, establishing a fruitful avenue for further research.
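The Herfindahl-based measurement can be illustrated generically: the classic index is the sum of squared market shares, and computing it per market segment before aggregating reflects concentration where rivalry is actually enacted (all shares below are invented, and the study's specific modification is not reproduced here):

```python
# Hypothetical unit shares by market segment (not the study's data).
segments = {
    "compact": {"FirmA": 0.50, "FirmB": 0.30, "FirmC": 0.20},
    "truck":   {"FirmA": 0.10, "FirmB": 0.10, "FirmC": 0.80},
}

def hhi(shares):
    """Herfindahl index: sum of squared market shares (1.0 = monopoly)."""
    return sum(s * s for s in shares.values())

# Segment-level HHI, then an (equally weighted, for simplicity) average
# gives an industry figure that preserves segment-level concentration.
segment_hhi = {seg: hhi(sh) for seg, sh in segments.items()}
industry = sum(segment_hhi.values()) / len(segment_hhi)
```

An industry-wide HHI computed from pooled shares would mask the fact that the truck segment is far more concentrated than the compact segment, which is why a segment-level treatment can matter for measuring rivalry.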
Abstract:
The aim of this exploratory study was to gain an insight into Asian and Western public relations practices by investigating them through job advertisements and thus reflecting on what organisations expect from the public relations professionals. Grunig's (1984) four models of public relations and the concept of relationships management were used as the foundation for this study. Australia was used to represent the Western region and India was used to represent the Asian region. Sample sets of public relations recruitment advertisements from both countries were examined against Grunig's one-way communication, two-way communication and relationship management attributes.
Abstract:
We introduce multiple-control fuzzy vaults allowing generalised threshold, compartmented and multilevel access structures. The presented schemes enable many useful applications employing multiple users and/or multiple locking sets. Revisiting the original single-control fuzzy vault of Juels and Sudan, we identify several similarities and differences between their vault and secret sharing schemes which influence how best to obtain working generalisations. We design multiple-control fuzzy vaults, suggesting applications that use biometric credentials as locking and unlocking values. Furthermore, we assess the security of the obtained generalisations against insider/outsider attacks and examine the access complexity for legitimate vault owners.
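The single-control vault of Juels and Sudan that these generalisations build on can be sketched over a toy prime field. The secret is the coefficient list of a polynomial; genuine points evaluate the polynomial on the locking set and are hidden among chaff points. All parameters below (field size, set sizes, the brute-force unlocking with a hash check) are simplifications for illustration; real constructions use Reed-Solomon decoding and far larger parameters:

```python
import hashlib
import random
from itertools import combinations

P = 97  # small prime field for the toy example

def poly_eval(coeffs, x):
    """Evaluate the polynomial with coefficients [a0, a1, ...] at x, mod P."""
    return sum(a * pow(x, i, P) for i, a in enumerate(coeffs)) % P

def lagrange(points):
    """Interpolate the unique polynomial through `points` over GF(P);
    returns its coefficient list."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            # multiply the basis polynomial by (x - xj)
            new = [0] * (len(basis) + 1)
            for d, b in enumerate(basis):
                new[d] += (-xj) * b
                new[d + 1] += b
            basis = [v % P for v in new]
            denom = (denom * (xi - xj)) % P
        inv = pow(denom, P - 2, P)  # modular inverse via Fermat
        for d in range(k):
            coeffs[d] = (coeffs[d] + yi * basis[d] * inv) % P
    return coeffs

def lock(secret_coeffs, locking_set, n_chaff, seed=1):
    """Lock: genuine points lie on the secret polynomial; chaff points do not."""
    rng = random.Random(seed)
    genuine = [(x, poly_eval(secret_coeffs, x)) for x in locking_set]
    vault, used = list(genuine), set(locking_set)
    while len(vault) < len(genuine) + n_chaff:
        x = rng.randrange(P)
        if x in used:
            continue
        y = rng.randrange(P)
        if y == poly_eval(secret_coeffs, x):
            continue  # chaff must not lie on the polynomial
        vault.append((x, y))
        used.add(x)
    rng.shuffle(vault)
    check = hashlib.sha256(bytes(secret_coeffs)).hexdigest()
    return vault, check

def unlock(vault, unlocking_set, degree, check):
    """Unlock: try degree+1 sized subsets of matching points; verify via hash."""
    candidates = [(x, y) for x, y in vault if x in unlocking_set]
    for combo in combinations(candidates, degree + 1):
        coeffs = lagrange(list(combo))
        if hashlib.sha256(bytes(coeffs)).hexdigest() == check:
            return coeffs
    return None

secret = [42, 7, 13]  # degree-2 polynomial, so 3 matching points suffice
vault, check = lock(secret, [3, 5, 11, 17], n_chaff=10)
recovered = unlock(vault, {5, 11, 17, 23}, degree=2, check=check)
```

An unlocking set that overlaps the locking set in at least degree+1 elements recovers the secret; with too little overlap, interpolation never reproduces the committed polynomial. The multiple-control schemes of the paper generalise who must contribute such overlapping sets.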