993 results for data auditing
Abstract:
This paper is concerned with the integration of voice and data on an experimental local area network used by the School of Automation of the Indian Institute of Science. SALAN (School of Automation Local Area Network) consists of a number of microprocessor-based communication nodes linked to a shared coaxial cable transmission medium. The communication nodes handle the various low-level functions associated with computer communication and interface user data equipment to the network. SALAN at present provides a file transfer facility between an Intel Series III microcomputer development system and a Texas Instruments Model 990/4 microcomputer system. Further, a packet voice communication system has also been implemented on SALAN. The various aspects of the design and implementation of these two utilities are discussed.
Abstract:
The mountain yellow-legged frog Rana muscosa sensu lato, once abundant in the Sierra Nevada of California and Nevada, and the disjunct Transverse Ranges of southern California, has declined precipitously throughout its range, even though most of its habitat is protected. The species is now extinct in Nevada and reduced to tiny remnants in southern California, where as a distinct population segment, it is classified as Endangered. Introduced predators (trout), air pollution and an infectious disease (chytridiomycosis) threaten remaining populations. A Bayesian analysis of 1901 base pairs of mitochondrial DNA confirms the presence of two deeply divergent clades that come into near contact in the Sierra Nevada. Morphological studies of museum specimens and analysis of acoustic data show that the two major mtDNA clades are readily differentiated phenotypically. Accordingly, we recognize two species, Rana sierrae, in the northern and central Sierra Nevada, and R. muscosa, in the southern Sierra Nevada and southern California. Existing data indicate no range overlap. These results have important implications for the conservation of these two species as they illuminate a profound mismatch between the current delineation of the distinct population segments (southern California vs. Sierra Nevada) and actual species boundaries. For example, our study finds that remnant populations of R. muscosa exist in both the southern Sierra Nevada and the mountains of southern California, which may broaden options for management. In addition, despite the fact that only the southern California populations are listed as Endangered, surveys conducted since 1995 at 225 historic (1899-1994) localities from museum collections show that 93.3% (n=146) of R. sierrae populations and 95.2% (n=79) of R. muscosa populations are extinct. Evidence presented here underscores the need for revision of protected population status to include both species throughout their ranges.
Abstract:
Background: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows and the distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the Kendall-Moran estimator; in the latter case a reduction in error may be gained for sample sizes of less than 25, although there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and they can in many cases reduce the workload in the field.
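To make the idea concrete, here is a minimal Python sketch of one commonly used basic distance estimator: density estimated from the distances between random sample points and their nearest events, together with the relative root mean square error used above to compare estimators. It is not the compound estimator evaluated in the study, and the simulated data set and closed-form estimate assume a roughly random spatial pattern.

```python
import numpy as np

def basic_distance_density(sample_points, events):
    """Basic point-to-nearest-event distance estimator (illustrative sketch).

    sample_points: (n, 2) array of random point locations.
    events: (m, 2) array of mapped event locations (e.g. burrow openings).
    Returns an estimate of events per unit area; this simple closed form
    assumes the events are close to completely spatially random.
    """
    # distance from each random point to its nearest event
    d = np.min(np.linalg.norm(sample_points[:, None, :] - events[None, :, :], axis=2), axis=1)
    n = len(sample_points)
    return (n - 1) / (np.pi * np.sum(d ** 2))

def relative_rmse(estimates, true_density):
    """Relative root mean square error, the error measure used above."""
    e = np.asarray(estimates, dtype=float)
    return np.sqrt(np.mean((e - true_density) ** 2)) / true_density

# Monte Carlo sampling of a fully mapped (here simulated) data set
rng = np.random.default_rng(0)
events = rng.uniform(0, 100, size=(500, 2))          # true density = 0.05 per unit area
estimates = [basic_distance_density(rng.uniform(0, 100, size=(25, 2)), events)
             for _ in range(200)]
print(relative_rmse(estimates, true_density=0.05))
```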
Abstract:
This paper presents the site classification of the Bangalore Mahanagar Palike (BMP) area using geophysical data and the evaluation of spectral acceleration at ground level using a probabilistic approach. Site classification has been carried out using experimental data from the shallow geophysical method of Multichannel Analysis of Surface Waves (MASW). One-dimensional (1-D) MASW surveys have been carried out at 58 locations and the respective velocity profiles obtained. The average shear wave velocity over 30 m depth (Vs30) has been calculated and used for the site classification of the BMP area as per NEHRP (National Earthquake Hazards Reduction Program). Based on the Vs30 values, the major part of the BMP area can be classified as "site class D" and "site class C". A smaller portion of the study area, in and around Lalbagh Park, is classified as "site class B". Further, probabilistic seismic hazard analysis has been carried out to map the seismic hazard in terms of spectral acceleration (Sa) at rock and ground level, considering the site classes and the six seismogenic sources identified. The mean annual rate of exceedance and cumulative probability hazard curves for Sa have been generated. The quantified hazard values in terms of spectral acceleration for short and long periods are mapped for rock, site class C and site class D with 10% probability of exceedance in 50 years on a grid size of 0.5 km. In addition, the Uniform Hazard Response Spectrum (UHRS) at surface level has been developed for 5% damping and 10% probability of exceedance in 50 years for rock, site class C and site class D. These spectral accelerations and uniform hazard spectra can be used to assess the design force for important structures and to develop the design spectrum.
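As a companion to the description above, the following Python sketch shows the standard Vs30 calculation from a 1-D layered shear-wave velocity profile (such as one obtained from a MASW survey) and the usual NEHRP class boundaries referred to above. It is an illustration of the classification step with hypothetical layer values, not the authors' processing code.

```python
def vs30(thicknesses_m, velocities_m_s):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile.

    The profile is given from the surface downward; if it is shallower than
    30 m, the deepest layer is assumed to extend to 30 m.
    """
    depth_used = 0.0
    travel_time = 0.0
    for h, v in zip(thicknesses_m, velocities_m_s):
        h_used = min(h, 30.0 - depth_used)
        travel_time += h_used / v
        depth_used += h_used
        if depth_used >= 30.0:
            break
    if depth_used < 30.0:                       # extend the last layer if needed
        travel_time += (30.0 - depth_used) / velocities_m_s[-1]
    return 30.0 / travel_time

def nehrp_site_class(vs30_value):
    """NEHRP site class from Vs30 (class boundaries in m/s)."""
    if vs30_value > 1500.0:
        return "A"
    if vs30_value > 760.0:
        return "B"
    if vs30_value > 360.0:
        return "C"
    if vs30_value > 180.0:
        return "D"
    return "E"

# Hypothetical three-layer profile: 5 m at 220 m/s, 10 m at 350 m/s, 20 m at 600 m/s
print(nehrp_site_class(vs30([5.0, 10.0, 20.0], [220.0, 350.0, 600.0])))   # prints "C"
```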
Abstract:
In this paper, we present an approach to estimate the fractal complexity of discrete-time signal waveforms based on the computation of the area bounded by the sample points of the signal at different time resolutions. The slope of the best straight-line fit to the graph of log(A(r_k)/r_k^2) versus log(1/r_k) is estimated, where A(r_k) is the area computed at time resolution r_k. The slope quantifies the complexity of the signal and is taken as an estimate of the fractal dimension (FD). The proposed approach is used to estimate the fractal dimension of parametric fractal signals with known fractal dimensions, and the method has given accurate results. The estimation accuracy of the method is compared with that of Higuchi's and Sevcik's methods. The proposed method has given more accurate results than Sevcik's method, and its results are comparable to those of Higuchi's method. The practical application of the complexity measure in detecting changes in signal complexity is discussed using real sleep electroencephalogram recordings from eight different subjects. The FD-based approach has shown good performance in discriminating different stages of sleep.
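The abstract states the fitting step but not the construction of the areas themselves, so the Python sketch below illustrates only that step: given areas A(r_k) already computed at time resolutions r_k, the fractal dimension estimate is the slope of the least-squares line through the points (log(1/r_k), log(A(r_k)/r_k^2)). The area computation is assumed to be supplied by the caller.

```python
import numpy as np

def fd_from_areas(areas, resolutions):
    """Estimate the fractal dimension as the slope of the best straight-line
    fit to log(A(r_k) / r_k**2) versus log(1 / r_k).

    areas: A(r_k), the areas computed from the waveform at each resolution.
    resolutions: the corresponding time resolutions r_k.
    """
    A = np.asarray(areas, dtype=float)
    r = np.asarray(resolutions, dtype=float)
    x = np.log(1.0 / r)
    y = np.log(A / r ** 2)
    slope, _intercept = np.polyfit(x, y, 1)   # least-squares straight-line fit
    return slope
```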
Abstract:
Values of Ko, the Flory constant related to unperturbed dimensions, are evaluated for methyl methacrylate-acrylonitrile random copolymers using the Flory-Fox, Kurata-Stockmayer and Inagaki-Ptitsyn methods and compared with the Ko values obtained by the Stockmayer-Fixman method. Ko values are seen to be lower in solvents with large values of a (the Mark-Houwink exponent). A correlation between Ko and a is developed to arrive at a more reliable estimate of Ko for this copolymer system.
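For reference, here is a minimal Python sketch of the Stockmayer-Fixman extrapolation named above: plot [eta]/M^(1/2) against M^(1/2) for a series of fractions and take the intercept as Ko. The inputs are hypothetical placeholders, and units follow whatever the intrinsic viscosities and molar masses are supplied in; this is an illustration of the extrapolation, not the authors' data treatment.

```python
import numpy as np

def stockmayer_fixman_Ko(intrinsic_viscosities, molar_masses):
    """Intercept of the Stockmayer-Fixman plot, [eta]/sqrt(M) versus sqrt(M),
    taken as Ko, the Flory constant related to unperturbed dimensions."""
    eta = np.asarray(intrinsic_viscosities, dtype=float)
    M = np.asarray(molar_masses, dtype=float)
    x = np.sqrt(M)
    y = eta / np.sqrt(M)
    _slope, intercept = np.polyfit(x, y, 1)   # straight-line fit; intercept at M -> 0
    return intercept
```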
Abstract:
- Purpose Communication of risk management practices is a critical component of good corporate governance. Research to date has been of little benefit in informing regulators internationally. This paper seeks to contribute to the literature by investigating how listed Australian companies, in a setting where disclosures are explicitly required by the ASX corporate governance framework, disclose risk management (RM) information in the corporate governance statements within annual reports. - Design/methodology/approach To address our study’s research questions and related hypotheses, we examine the top 300 ASX-listed companies by market capitalisation at 30 June 2010. For these firms, we identify, code and categorise RM disclosures made in the annual reports according to the disclosure categories specified in the Australian Stock Exchange Corporate Governance Principles and Recommendations (ASX CGPR). The derived data are then examined using a comprehensive approach comprising thematic content analysis and regression analysis. - Findings The results indicate widespread divergence in disclosure practices and low conformance with Principle 7 of the ASX CGPR. This result suggests that companies are not disclosing all ‘material business risks’, possibly due to ignorance at the board level or due to the intentional withholding of sensitive information from financial statement users. The findings also show mixed results across the factors expected to influence disclosure behaviour. Notably, the presence of a risk committee (RC) (in particular, a standalone RC) and a technology committee (TC) is found to be associated with improved levels of disclosure. We do not find evidence that company risk measures (as proxied by equity beta and the market-to-book ratio) are significantly associated with greater levels of RM disclosure. Also, contrary to common findings in the disclosure literature, factors such as board independence and expertise, audit committee independence, and the use of a Big-4 auditor do not seem to affect the level of RM disclosure in the Australian context. - Research limitations/implications The study is limited by the sample and study period selection, as the RM disclosures of only the largest (top 300) ASX firms are examined for the fiscal year 2010. Thus, the findings may not be generalisable to smaller firms or to earlier/later years. Also, the findings may have limited applicability in other jurisdictions with different regulatory environments. - Practical implications The study’s findings suggest that insufficient attention has been applied to RM disclosures by listed companies in Australia. These results suggest that the RM disclosure practices observed in the Australian setting may not be meeting the objectives of regulators and the needs of stakeholders. - Originality/value Despite the importance of risk management communication, it is unclear whether disclosures in annual financial reports achieve this communication. The Australian setting provides an ideal environment in which to examine the nature and extent of risk management communication, as the Australian Securities Exchange (ASX) has recommended that risk management disclosures follow Principle 7 of its principle-based governance rules since 2007.
Abstract:
The goal of this research was to establish the necessary conditions under which individuals are prepared to commit themselves to quality assurance work in the organisation of a Polytechnic. The conditions were studied using four main concepts: awareness of quality, commitment to the organisation, leadership and work welfare. First, individuals were asked to describe these four concepts. Then, relationships between the concepts were analysed in order to establish the conditions for the commitment of an individual towards quality assurance work (QA). The study group comprised the entire personnel of Helsinki Polytechnic, of which 341 (44.5%) individuals participated. Mixed methods were used as the methodological base. A questionnaire and interviews were used as the research methods. The data from the interviews were used for the validation of the results, as well as for completing the analysis. The results of these interviews and analyses were integrated using the concurrent nested design method. In addition, the questionnaire was used to separately analyse the impressions and meanings of the awareness of quality and leadership, because, according to the pre-understanding, impressions of phenomena expressed in terms of reality have an influence on the commitment to QA. In addition to statistical figures, principal component analysis was used as a descriptive method. For comparisons between groups, one-way analysis of variance and effect size analysis were used. As explanatory analysis methods, forward regression analysis and structural modelling were applied. As a result of the research it was found that 51% of the conditions necessary for a commitment to QA were explained by an individual’s experience/belief that QA was a method of development, that QA was possible to participate in and that the meaning of quality included both product and process qualities. If analysed separately, the other main concepts (commitment to the organisation, leadership and work welfare) played only a small part in explaining an individual’s commitment. In the context of this research, a structural path model of the main concepts was built. In the model, the concepts were interconnected by paths created as a result of a literature search covering the main concepts, as well as of an analysis of the empirical material of this thesis work. The path model explained 46% of the necessary conditions under which individuals are prepared to commit themselves to QA. The most important path for achieving a commitment stemmed from product and system quality emanating from the new goals of the Polytechnic, moved through the individual’s experience that QA is a method of the total development of quality and ended in a commitment to QA. The second most important path stemmed from the individual’s experience of belonging to a supportive work community, moved through the supportive value of the job and through affective commitment to the organisation and ended in a commitment to QA. The third path stemmed from an individual’s experiences in participating in QA, moved through collective system quality, then to the supportive value of the job and affective commitment to the organisation, and ended in a commitment to QA. The final path in the path model stemmed from leadership by empowerment, moved through collective system quality, the supportive value of the job and an affective commitment to the organisation, and again ended in a commitment to QA.
As a result of the research, it was found that the individual’s functional department was an important factor in explaining the differences between groups. Therefore, it was found that understanding the processes of the part cultures in the organisation is important when developing QA. Likewise, learning-teaching paradigms proved to be a differentiating factor. Individuals thinking according to the humanistic-constructivistic paradigm showed more commitment to QA than technological-rational thinkers. Also, it was found that the QA training program did not increase commitment, as the path model demonstrated that those who participated in training showed 34% commitment, whereas those who did not showed 55% commitment. As a summary of the results it can be said that the necessary conditions under which individuals are prepared to commit themselves to QA cannot be treated in a reductionist way. Instead, the conditions must be treated as one totality, with all the main concepts interacting simultaneously. Also, the theoretical framework of quality must include its dynamic aspect, which means the development of the work of the individual and learning through auditing. In addition, this dynamism includes reflection on the paradigm of the functions of the individual as well as of all parts of the organisation. It is important to understand and manage the various ways of thinking and the cultural differences produced by the fragmentation of the organisation. Finally, it seems possible that the path model can be generalised for use in any organisation development project in which the personnel need to be committed.
Abstract:
Making Sense of Mass Education provides an engaging and accessible analysis of traditional issues associated with mass education. The book challenges preconceptions about social class, gender and ethnicity discrimination; highlights the interplay between technology, media, popular culture and schooling; and inspects the relevance of ethics and philosophy in the modern classroom. This new edition has been comprehensively updated to provide current information regarding literature, statistics and legal policies, and significantly expands on the previous edition's structure of derailing traditional myths about education as a point of discussion. It also features two new chapters on Big Data and Globalisation and what they mean for the Australian classroom. Written for students, practising teachers and academics alike, Making Sense of Mass Education summarises the current educational landscape in Australia and looks at fundamental issues in society as they relate to education.
Abstract:
OBJECTIVE Corneal confocal microscopy is a novel diagnostic technique for the detection of nerve damage and repair in a range of peripheral neuropathies, in particular diabetic neuropathy. Normative reference values are required to enable clinical translation and wider use of this technique. We have therefore undertaken a multicenter collaboration to provide worldwide age-adjusted normative values of corneal nerve fiber parameters. RESEARCH DESIGN AND METHODS A total of 1,965 corneal nerve images from 343 healthy volunteers were pooled from six clinical academic centers. All subjects underwent examination with the Heidelberg Retina Tomograph corneal confocal microscope. Images of the central corneal subbasal nerve plexus were acquired by each center using a standard protocol and analyzed by three trained examiners using manual tracing and semiautomated software (CCMetrics). Age trends were established using simple linear regression, and normative corneal nerve fiber density (CNFD), corneal nerve fiber branch density (CNBD), corneal nerve fiber length (CNFL), and corneal nerve fiber tortuosity (CNFT) reference values were calculated using quantile regression analysis. RESULTS There was a significant linear age-dependent decrease in CNFD (-0.164 no./mm^2 per year for men, P < 0.01, and -0.161 no./mm^2 per year for women, P < 0.01). There was no change with age in CNBD (0.192 no./mm^2 per year for men, P = 0.26, and -0.050 no./mm^2 per year for women, P = 0.78). CNFL decreased in men (-0.045 mm/mm^2 per year, P = 0.07) and women (-0.060 mm/mm^2 per year, P = 0.02). CNFT increased with age in men (0.044 per year, P < 0.01) and women (0.046 per year, P < 0.01). Height, weight, and BMI did not influence the 5th percentile normative values for any corneal nerve parameter. CONCLUSIONS This study provides robust worldwide normative reference values for corneal nerve parameters to be used in research and clinical practice in the study of diabetic and other peripheral neuropathies.
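As a sketch of the statistical step described above (not the study's actual code), the Python fragment below fits an age trend with simple linear regression and an age-dependent percentile with quantile regression using statsmodels; the column names 'age' and 'cnfd' are illustrative placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

def age_trend(df: pd.DataFrame) -> float:
    """Slope of corneal nerve fiber density (CNFD) against age
    from a simple linear regression."""
    return smf.ols("cnfd ~ age", data=df).fit().params["age"]

def age_adjusted_percentile(df: pd.DataFrame, q: float = 0.05):
    """Age-dependent normative percentile of CNFD via quantile regression
    (e.g. q=0.05 for a 5th-percentile reference line).
    Returns the intercept and age coefficient for the chosen quantile."""
    return smf.quantreg("cnfd ~ age", data=df).fit(q=q).params
```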
Abstract:
To facilitate marketing and export, the Australian macadamia industry requires accurate crop forecasts. Each year, two levels of crop predictions are produced for this industry. The first is an overall longer-term forecast based on tree census data of growers in the Australian Macadamia Society (AMS). This data set currently accounts for around 70% of total production, and is supplemented by our best estimates of non-AMS orchards. Given these total tree numbers, average yields per tree are needed to complete the long-term forecasts. Yields from regional variety trials were initially used, but were found to be consistently higher than the average yields that growers were obtaining. Hence, a statistical model was developed using growers' historical yields, also taken from the AMS database. This model accounted for the effects of tree age, variety, year, region and tree spacing, and explained 65% of the total variation in the yield per tree data. The second level of crop prediction is an annual climate adjustment of these overall long-term estimates, taking into account the expected effects on production of the previous year's climate. This adjustment is based on relative historical yields, measured as the percentage deviance between expected and actual production. The dominant climatic variables are observed temperature, evaporation, solar radiation and modelled water stress. Initially, a number of alternate statistical models showed good agreement within the historical data, with jack-knife cross-validation R^2 values of 96% or better. However, forecasts varied quite widely between these alternate models. Exploratory multivariate analyses and nearest-neighbour methods were used to investigate these differences. For 2001-2003, the overall forecasts were in the right direction (when compared with the long-term expected values), but were over-estimates. In 2004 the forecast was well under the observed production, and in 2005 the revised models produced a forecast within 5.1% of the actual production. Over the first five years of forecasting, the absolute deviance for the climate-adjustment models averaged 10.1%, just outside the targeted objective of 10%.
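A minimal Python sketch (with assumed inputs) of the jack-knife (leave-one-out) cross-validation used above to judge the climate-adjustment models: fit the model on all historical years but one, predict the held-out year, and compute the cross-validated R^2 from those predictions. The linear model here is an illustrative stand-in, not the industry forecast model itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def jackknife_r2(X, y):
    """Leave-one-out cross-validated R^2 for a linear climate-adjustment model.

    X: one row per historical year of climate predictors (e.g. temperature,
       evaporation, solar radiation, modelled water stress).
    y: relative historical yield, i.e. the percentage deviance between
       expected and actual production.
    """
    predictions = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return r2_score(y, predictions)
```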
Abstract:
Cu K-edge EXAFS spectra of Cu-Ni/Al2O3 and Cu-ZnO catalysts, both of which contain more than one Cu species, have been analysed making use of an additive relation for the EXAFS function. The analysis, which also makes use of residual spectra for identifying the species, shows good agreement between experimental and calculated spectra.
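The abstract does not spell out the fitting procedure, so the following Python sketch only illustrates the additive relation in a generic way: the measured EXAFS function of the mixed-species catalyst is fitted as a non-negative linear combination of reference spectra, and the residual spectrum is inspected to help identify unmodelled species. The function names and inputs are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def fit_species_fractions(chi_mixture, chi_references):
    """Fit chi_mix(k) ~ sum_i x_i * chi_i(k) with non-negative weights.

    chi_mixture: 1-D array, EXAFS function of the catalyst on a common k grid.
    chi_references: 2-D array with one column per candidate Cu species.
    """
    weights, _ = nnls(chi_references, chi_mixture)
    fractions = weights / weights.sum()                # normalise to species fractions
    residual = chi_mixture - chi_references @ weights  # residual spectrum
    return fractions, residual
```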
Abstract:
Objective Vast amounts of injury narratives are collected daily, are available electronically in real time, and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. Methods This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. Results The range of applications and utility of narrative text has increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semi-automatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value and reduced the need for human coding to less than one-third of cases in one large occupational injury database. Conclusion The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of ‘big injury narrative data’ opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice.
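The pipeline used in the case study is not detailed in the abstract; the Python sketch below illustrates the general human-machine idea with off-the-shelf components: a bag-of-words classifier auto-codes narratives it is confident about and routes the remainder to human coders. The model choice and the 0.9 confidence threshold are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_classifier(narratives, codes):
    """Train a simple injury-narrative classifier (TF-IDF + logistic regression)."""
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                        LogisticRegression(max_iter=1000))
    clf.fit(narratives, codes)
    return clf

def triage(clf, new_narratives, threshold=0.9):
    """Auto-code high-confidence narratives; send the rest to human coders."""
    probabilities = clf.predict_proba(new_narratives)
    auto_coded, needs_human = {}, []
    for text, probs in zip(new_narratives, probabilities):
        if probs.max() >= threshold:
            auto_coded[text] = clf.classes_[probs.argmax()]
        else:
            needs_human.append(text)
    return auto_coded, needs_human
```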
Abstract:
[From Preface] The Consumer Expenditure Survey is among the oldest publications of the Bureau of Labor Statistics. With information on the expenditures, incomes, and demographic characteristics of households, the survey documents the spending patterns and economic status of American families. This report offers a new approach to the use of Consumer Expenditure Survey data. Normally, the survey presents an in-depth look at American households at a specific point in time, the reference period being a calendar year. Here, the authors use consumer expenditure data longitudinally and draw on information from decennial census reports to present a 100-year history of significant changes in consumer spending, economic status, and family demographics in the country as a whole, as well as in New York City and Boston.