Abstract:
Photochemistry has made significant contributions to our understanding of many important natural processes as well as scientific discoveries in the man-made world. The measurements from such studies are often complex and may require advanced data interpretation with the use of multivariate or chemometrics methods. In general, such methods have been applied successfully for data display, classification, multivariate curve resolution and prediction in analytical chemistry, environmental chemistry, engineering, medical research and industry. By comparison, applications of such multivariate approaches in photochemistry have been less frequent, although a variety of methods have been used, especially in spectroscopic photochemical applications. The methods include Principal Component Analysis (PCA; data display), Partial Least Squares (PLS; prediction), Artificial Neural Networks (ANN; prediction) and several models for multivariate curve resolution related to Parallel Factor Analysis (PARAFAC; decomposition of complex responses). Applications of such methods are discussed in this overview, and typical examples include photodegradation of herbicides, prediction of antibiotics in human fluids (fluorescence spectroscopy), non-destructive in-line and on-line monitoring (near-infrared spectroscopy) and fast time-resolution of spectroscopic signals from photochemical reactions. It is also quite clear from the literature that the scope of spectroscopic photochemistry has been enhanced by the application of chemometrics. To highlight and encourage further applications of chemometrics in photochemistry, several additional chemometrics approaches are discussed using data collected by the authors. The use of a PCA biplot is illustrated with an analysis of a matrix containing data on the performance of photocatalysts developed for water splitting and hydrogen production.
In addition, the applications of Multi-Criteria Decision Making (MCDM) ranking methods and Fuzzy Clustering are demonstrated with an analysis of a water quality data matrix. Other examples of topics include the application of simultaneous kinetic spectroscopic methods for prediction of pesticides, and the use of a response fingerprinting approach for classification of medicinal preparations. In general, the overview endeavours to emphasise the advantages of chemometric interpretation of multivariate photochemical data, and an Appendix of references and summaries of common and less usual chemometrics methods noted in this work is provided. Crown Copyright © 2010.
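To make the PCA biplot idea concrete, the sketch below computes biplot coordinates (object scores and variable loadings) via the singular value decomposition for a small performance matrix. The five-photocatalyst, three-criterion matrix is entirely hypothetical, invented for illustration; it is not the authors' dataset.

```python
import numpy as np

def pca_biplot_coords(X, n_components=2):
    """Compute PCA scores and loadings for a biplot.

    X: (samples x variables) matrix, e.g. photocatalysts x performance
    criteria. Scores position the objects in the biplot; loadings give
    the variable axes; 'explained' is the variance ratio per component.
    """
    Xc = X - X.mean(axis=0)              # column-centre (autoscaling is also common)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components].T
    explained = s**2 / np.sum(s**2)
    return scores, loadings, explained[:n_components]

# Hypothetical matrix: 5 photocatalysts x 3 performance measures
X = np.array([[2.1, 0.5, 7.0],
              [1.9, 0.6, 6.5],
              [0.3, 2.2, 1.1],
              [0.4, 2.0, 1.3],
              [1.2, 1.2, 4.0]])
scores, loadings, explained = pca_biplot_coords(X)
```

Plotting `scores` and `loadings` on the same pair of axes then yields the biplot; objects that load on the same variables cluster along the corresponding loading vectors.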
Abstract:
In this chapter we take a high-level view of social media, focusing not on specific applications, domains, websites, or technologies; instead, our interest is in the forms of engagement that social media engender. This is not to suggest that all social media are the same, or even that everyone’s experience with any particular medium or technology is the same. However, we argue that common issues arise that characterize social media in a broad sense, and that these provide a different analytic perspective than we would gain from looking at particular systems or applications. We do not take the perspective that social life merely happens “within” such systems, nor that social life “shapes” such systems, but rather that these systems provide a site for the production of social and cultural reality – that media are always already social, and that engagement with, in, and through media of all sorts is a thoroughly social phenomenon. Accordingly, in this chapter, we examine two phenomena concurrently: social life seen through the lens of social media, and social media seen through the lens of social life. In particular, we want to understand the ways that a set of broad phenomena concerning forms of participation in social life is articulated in the domain of social media. As a conceptual entry-point, we use the notion of the “moral economy” as a means to open up the domain of inquiry. We first discuss the notion of the “moral economy” as it has been used by a number of social theorists, and then identify a particular set of conceptual concerns that we suggest link it to the phenomena of social networking in general. We then discuss a series of examples drawn from a range of studies to elaborate and ground this conceptual framework in empirical data. This leads us to a broader consideration of audiences and publics in social media that, we suggest, holds important lessons for how we treat social media analytically.
Abstract:
The world’s increasing complexity, competitiveness, interconnectivity, and dependence on technology generate new challenges for nations and individuals that cannot be met by continuing education as usual (Katehi, Pearson, & Feder, 2009). With the proliferation of complex systems have come new technologies for communication, collaboration, and conceptualisation. These technologies have led to significant changes in the forms of mathematical and scientific thinking that are required beyond the classroom. Modelling, in its various forms, can develop and broaden children’s mathematical and scientific thinking beyond the standard curriculum. This paper first considers future competencies in the mathematical sciences within an increasingly complex world. Next, consideration is given to interdisciplinary problem solving and models and modelling. Examples of complex, interdisciplinary modelling activities across grades are presented, with data modelling in 1st grade, model-eliciting in 4th grade, and engineering-based modelling in 7th-9th grades.
Abstract:
Airports worldwide represent key forms of critical infrastructure in addition to serving as nodes in the international aviation network. While the continued operation of airports is critical to the functioning of reliable air passenger and freight transportation, these infrastructure systems face a number of sources of disturbance that threaten their operational viability. Recent examples of high-magnitude events include the eruption of Iceland’s Eyjafjallajokull volcano (Folattau and Schofield 2010), the failure of multiple systems at the opening of Heathrow’s Terminal 5 (Brady and Davies 2010) and the 2007 terrorist attack on Glasgow airport (Crichton 2008). While these newsworthy events do occur, a multitude of lower-level, more common disturbances also have the potential to cause significant discontinuity to airport operations. Regional airports face a unique set of challenges, particularly in a nation like Australia where they serve to link otherwise remote and isolated communities to metropolitan hubs (Wheeler 2005), often without the resources and political attention received by larger capital city airports. This paper discusses conceptual relationships between Business Continuity Management (BCM) and High Reliability Theory, and proposes BCM as an appropriate risk-based management process to ensure continued airport operation in the face of uncertainty. In addition, it argues that correctly implemented BCM can lead to highly reliable organisations. This is framed within the broader context of critical infrastructures and the need for adequate crisis management approaches suited to their unique requirements (Boin and McConnell 2007).
Abstract:
Web 2.0 technology and concepts are being used increasingly by organisations to enhance knowledge, efficiency, engagement and reputation. Understanding the concepts of Web 2.0, its characteristics, and how the technology and concepts can be adopted is essential to successfully reap the potential benefits. In fact, there is debate about using the Web 2.0 idiom to refer to the concept behind it; however, the term is widely used in the literature as well as in industry. In this paper, the definition of Web 2.0 technology and its characteristics and attributes will be presented. In addition, the adoption of such technology is further explored through the presentation of two separate case examples of Web 2.0 being used: to enhance an enterprise, and to enhance university teaching. The similarities between these implementations are identified and discussed, including how the findings point to generic principles of adoption.
Abstract:
The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently, and many still have stand-alone systems that are not integrated for management of information and decision-making. This shows there is a need for an effective system to capture, collate and distribute health data. Implementing the data warehouse concept in healthcare is therefore potentially one of the solutions for integrating health data. Data warehousing has been used to support business intelligence and decision-making in many other sectors, such as the engineering, defence and retail sectors. The research problem addressed is: "how can data warehousing assist the decision-making process in healthcare?" To address this problem, the researcher narrowed the investigation to focus on a cardiac surgery unit, using the cardiac surgery unit at the Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, much of the time, the interaction between the cardiac surgery unit information system and other units is minimal. There is only limited, basic two-way interaction with the other clinical and administrative databases at TPCH which support decision-making processes. The aims of this research are to investigate what decision-making issues are faced by healthcare professionals with the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models.
As a part of the research, the researcher will propose and develop a suitable data warehouse prototype based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)]. The goal is to improve the current decision-making processes. The main objectives of this research are to improve access to integrated clinical and financial data, potentially providing better information for decision-making. Based both on the questionnaire and on the literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide warehouse model or federated data warehouse model, as discussed in the many consulted publications. The data warehouse prototype was developed using SAS enterprise data integration studio 4.2 and the data was analysed using SAS enterprise edition 4.3. In the final stage, the data warehouse prototype was evaluated by collecting feedback from the end users. This was achieved by using output created from the data warehouse prototype as examples of the data desired and possible in a data warehouse environment. According to the feedback collected from the end users, implementation of a data warehouse was seen to be a useful tool to inform management options, provide a more complete representation of factors related to a decision scenario and potentially reduce information product development time. However, many constraints existed in this research.
For example, there were technical issues such as data incompatibilities and the integration of the cardiac surgery database and e-DS database servers, as well as Queensland Health information restrictions (Queensland Health information-related policies, patient data confidentiality and ethics requirements), limited availability of support from IT technical staff, and time restrictions. These factors influenced the process of warehouse model development, necessitating an incremental approach. This highlights the presence of many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents, and a single-site case report study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite this constraint, the research demonstrates that by implementing a data warehouse at the service level, decision-making is supported and data quality issues related to access and availability can be reduced, providing many benefits. Output reports produced from the data warehouse prototype demonstrated usefulness for the improvement of decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, however, the centralised model selected can be upgraded to an enterprise-wide architecture by integrating additional hospital units’ databases.
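To illustrate the kind of integration a clinical data warehouse enables, the following minimal sketch builds a star schema in an in-memory SQLite database: one fact table of surgical episodes joined to dimension tables, with a decision-support style summary query. All table names, columns and figures are invented for illustration and bear no relation to the actual TPCH databases.

```python
import sqlite3

# Star-schema sketch: a fact table (episodes) carrying measures drawn
# from separate source systems (ICU hours, costing), joined to
# dimension tables. Schema and data are hypothetical.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE dim_patient (patient_id INTEGER PRIMARY KEY, age INTEGER);
CREATE TABLE dim_procedure (proc_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_episode (
    episode_id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES dim_patient,
    proc_id INTEGER REFERENCES dim_procedure,
    icu_hours REAL,      -- sourced from the ICU system
    cost REAL            -- sourced from the clinical costing system
);
""")
cur.executemany("INSERT INTO dim_patient VALUES (?, ?)", [(1, 64), (2, 71)])
cur.executemany("INSERT INTO dim_procedure VALUES (?, ?)",
                [(1, "CABG"), (2, "Valve repair")])
cur.executemany("INSERT INTO fact_episode VALUES (?, ?, ?, ?, ?)",
                [(1, 1, 1, 48.0, 32000.0), (2, 2, 1, 60.0, 35000.0),
                 (3, 2, 2, 24.0, 28000.0)])

# Decision-support style query: average ICU hours and cost per procedure.
rows = cur.execute("""
SELECT p.name, AVG(f.icu_hours), AVG(f.cost)
FROM fact_episode f JOIN dim_procedure p ON f.proc_id = p.proc_id
GROUP BY p.name ORDER BY p.name
""").fetchall()
```

The point of the star layout is exactly the one made above: once clinical and financial measures sit in a single fact table, one query answers a management question that previously required reconciling stand-alone systems.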
Abstract:
The presence of arsenic in the environment is a hazard. The incorporation of arsenate with a range of cations in the formation of minerals provides a mechanism for its accumulation. The formation of the tsumcorite minerals is an example of a series of minerals which accumulate arsenate; there are about twelve examples in this mineral group. Raman spectroscopy offers a method for the analysis of these minerals. The structures of selected tsumcorite minerals with arsenate and sulphate anions were analysed by Raman spectroscopy. Isomorphic substitution of sulphate for arsenate is observed for gartrellite and thometzekite. A comparison is made with the sulphate-bearing mineral natrochalcite. The positions of the hydroxyl and water stretching vibrations are related to the strength of the hydrogen bond formed between the OH unit and the AsO4^3- anion. Characteristic Raman spectra of the minerals enable the assignment of the bands to specific vibrational modes.
Abstract:
With examples drawn from media coverage of the War on Terror, the 2003 invasion of Iraq, Hurricane Katrina and the London underground bombings, Cultural Chaos explores the changing relationship between journalism and power in an increasingly globalised news culture. In this new text, Brian McNair examines the processes of cultural, geographic and political dissolution in the post-Cold War era and the rapid evolution of information and communication technologies. He investigates the impact of these trends on domestic and international journalism and on political processes in democratic and authoritarian societies across the world. Written in a lively and accessible style, Cultural Chaos provides students with an overview of the evolution of the sociology of journalism, a critical review of current thinking within media studies and an argument for a revision and renewal of the paradigms that have dominated the field since the early twentieth century. Separate chapters are devoted to new developments such as the rise of the blogosphere and satellite television news and their impact on journalism more generally. Cultural Chaos will be essential reading for all those interested in the emerging globalised news culture of the twenty-first century.
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also present challenging problems such as network-induced delay and packet losses. This paper proposes an approach of predictive compensation for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses co-design of both network and controller. It also alleviates the requirements for precise process models and full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values. Then, it sends out those control values in a single packet to the actuator. Upon receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signals from the control packet. When packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
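The packet-based compensation idea above can be sketched in a few lines: the controller precomputes one control value per candidate delay and sends them all in one packet; the actuator indexes into the packet by the delay it actually measures, and falls back to the last received packet on dropout. The plant prediction and feedback gain below are simple placeholders, not the paper's controller design.

```python
def controller(x, delays, k=0.8):
    """Return a packet of redundant control values, one per delay step.

    A longer delay means the state has drifted further, so each entry
    applies feedback to a one-step-ahead predicted state (illustrative
    toy plant: x_next = 1.1 * x).
    """
    packet = []
    x_pred = x
    for _ in delays:
        packet.append(-k * x_pred)       # feedback on predicted state
        x_pred = x_pred + 0.1 * x_pred   # open-loop one-step prediction
    return packet

def actuator(packet, measured_delay, last_packet):
    """Pick the control value matching the measured delay; on dropout
    (packet is None) reuse the most recently received packet."""
    if packet is None:
        packet = last_packet
    idx = min(measured_delay, len(packet) - 1)
    return packet[idx]

delays = [0, 1, 2, 3]                        # candidate sensor-to-actuator delays
pkt = controller(1.0, delays)                # one packet, four redundant values
u = actuator(pkt, 2, last_packet=pkt)        # measured delay of 2 steps
u_drop = actuator(None, 1, last_packet=pkt)  # dropout: reuse last packet
```

Sending the whole sequence in a single packet is what lets the actuator, rather than the controller, resolve the delay after the fact, which is the co-design point the abstract makes.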
Abstract:
Barreto-Lynn-Scott (BLS) curves are a stand-out candidate for implementing high-security pairings. This paper shows that particular choices of the pairing-friendly search parameter give rise to four subfamilies of BLS curves, all of which offer highly efficient and implementation-friendly pairing instantiations. Curves from these particular subfamilies are defined over prime fields that support very efficient towering options for the full extension field. The coefficients for a specific curve and its correct twist are automatically determined without any computational effort. The choice of an extremely sparse search parameter is immediately reflected by a highly efficient optimal ate Miller loop and final exponentiation. As a resource for implementors, we give a list with examples of implementation-friendly BLS curves through several high-security levels.
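For the BLS12 family, the group order r and field characteristic p are fixed polynomials in the search parameter x, so a sparse x directly determines the curve sizes. The sketch below evaluates the standard BLS12 parameterisation at the widely used BLS12-381 value of x (a parameter of very low Hamming weight); it only illustrates the parameterisation, not this paper's subfamily search.

```python
# Standard BLS12 parameterisation:
#   r(x) = x^4 - x^2 + 1
#   p(x) = (x - 1)^2 * r(x) / 3 + x
def bls12_params(x):
    r = x**4 - x**2 + 1
    p = (x - 1) ** 2 * r // 3 + x
    return p, r

# BLS12-381 search parameter: sparse (low Hamming weight), which keeps
# the Miller loop and final exponentiation cheap.
x = -0xD201000000010000
p, r = bls12_params(x)
```

The sparsity of x shows up directly in the optimal ate Miller loop length and in the exponentiations by x inside the final exponentiation, which is why implementation-friendly curves are found by searching over sparse parameters.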
Abstract:
Concrete is commonly used as a primary construction material for tall building construction. Load-bearing components such as columns and walls in concrete buildings are subjected to instantaneous and long-term axial shortening caused by the time-dependent effects of "shrinkage", "creep" and "elastic" deformations. Reinforcing steel content, variable concrete modulus, volume-to-surface-area ratio of the elements and environmental conditions govern axial shortening. The impact of differential axial shortening among columns and core shear walls escalates with increasing building height. Differential axial shortening of gravity-loaded elements in geometrically complex and irregular buildings results in permanent distortion and deflection of the structural frame, which have a significant impact on building envelopes, building services, secondary systems and the lifetime serviceability and performance of a building. Existing numerical methods commonly used in design to quantify axial shortening are mainly based on elastic analytical techniques and are therefore unable to capture the complexity of non-linear time-dependent effects. Ambient measurement of axial shortening using vibrating wire, external mechanical strain, and electronic strain gauges are methods available to verify pre-estimated values from the design stage. Installing these gauges permanently, embedded in or on the surface of concrete components, for continuous measurement during and after construction with adequate protection is uneconomical, inconvenient and unreliable. Therefore, such methods are rarely if ever used in actual building construction practice. This research project has developed a rigorous numerical procedure that encompasses linear and non-linear time-dependent phenomena for prediction of axial shortening of reinforced concrete structural components at the design stage.
This procedure takes into consideration (i) construction sequence, (ii) time-varying values of Young's Modulus of reinforced concrete and (iii) creep and shrinkage models that account for variability resulting from environmental effects. The capabilities of the procedure are illustrated through examples. In order to update previous predictions of axial shortening during the construction and service stages of the building, this research has also developed a vibration-based procedure using ambient measurements. This procedure takes into consideration the changes in the vibration characteristics of the structure during and after construction. The application of this procedure is illustrated through numerical examples which also highlight its features. The vibration-based procedure can also be used as a tool to assess the structural health/performance of key structural components in the building during construction and service life.
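The three shortening components named above (elastic, creep and shrinkage strain) can be sketched for a single loaded column segment as follows. The hyperbolic creep-coefficient and shrinkage growth curves and all numerical values below are deliberately generic placeholders, not the design-code models or data used in the thesis.

```python
def axial_shortening(stress_mpa, e_mpa, length_mm, t_days,
                     phi_ult=2.0, eps_sh_ult=500e-6, tau=100.0):
    """Toy total shortening (mm) of a column segment at age t_days.

    Elastic strain is stress/E; creep strain grows as the elastic
    strain times a creep coefficient phi(t); shrinkage strain grows
    independently of stress. Both growth curves are generic
    hyperbolic forms t / (tau + t) scaled to ultimate values.
    """
    eps_elastic = stress_mpa / e_mpa
    phi_t = phi_ult * t_days / (tau + t_days)          # creep coefficient
    eps_creep = eps_elastic * phi_t
    eps_shrink = eps_sh_ult * t_days / (tau + t_days)  # shrinkage strain
    total_strain = eps_elastic + eps_creep + eps_shrink
    return total_strain * length_mm

# 3 m storey-height column, 10 MPa sustained stress, E = 30 GPa
d_28 = axial_shortening(10.0, 30000.0, 3000.0, 28.0)       # at 28 days
d_10000 = axial_shortening(10.0, 30000.0, 3000.0, 10000.0)  # long term
```

Even this toy model reproduces the qualitative behaviour the abstract describes: the time-dependent components keep growing long after the elastic shortening has occurred, which is why purely elastic design methods underestimate long-term differential shortening.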
Abstract:
Teacher professional development provided by education advisors as one-off, centrally offered sessions does not always result in change in teacher knowledge, beliefs, attitudes or practice in the classroom. As the mathematics education advisor in this study, I set out to investigate a particular method of professional development so as to influence change in a practising classroom teacher’s knowledge and practices. The particular method of professional development utilised in this study was based on several principles of effective teacher professional development and saw me working regularly in a classroom with the classroom teacher as well as providing ongoing support for her for a full school year. The intention was to document the effects of this particular method of professional development in terms of the classroom teacher’s and my professional growth to provide insights for others working as education advisors. The professional development for the classroom teacher consisted of two components. The first was the co-operative development and implementation of a mental computation instructional program for the Year 3 class. The second component was the provision of ongoing support for the classroom teacher by the education advisor. The design of the professional development and the mental computation instructional program were progressively refined throughout the year. The education advisor fulfilled multiple roles in the study as teacher in the classroom, teacher educator working with the classroom teacher and researcher. 
Examples of the professional growth of the classroom teacher and the education advisor, which occurred as sequences of changes (growth networks; Hollingsworth, 1999) in the domains of their professional worlds, were drawn from the large body of data collected through regular face-to-face and email communications between the classroom teacher and the education advisor, as well as from transcripts of a structured interview. The Interconnected Model of Professional Growth (Clarke & Hollingsworth, 2002; Hollingsworth, 1999) was used to summarise and represent each example of the classroom teacher’s professional growth. A modified version of this model was used to summarise and represent the professional growth of the education advisor. This study confirmed that the method of professional development utilised could lead to significant teacher professional growth related directly to the teacher's work in the classroom. Using the Interconnected Model of Professional Growth to summarise and represent the classroom teacher’s professional growth, and the modified version for my own professional growth, assisted with the recognition of examples of how we both changed. This model has the potential to be used more widely by education advisors when preparing, implementing, evaluating and following up on planned teacher professional development activities. The mental computation instructional program developed and trialled in the study was shown to be a successful way of sequencing and managing the teaching of mental computation strategies and related number sense understandings to Year 3 students. This study was conducted in one classroom, with one teacher, in one school. The strength of this study was the depth of teacher support made possible by the particular method of professional development, and the depth of analysis of the process. In another school, or with another teacher, this might not have been as successful.
While I set out to change my practice as an education advisor I did not expect the depth of learning I experienced in terms of my knowledge, beliefs, attitudes and practices as an educator of teachers. This study has changed the way in which I plan to work as an education advisor in the future.
Abstract:
“The process of innovation is often seen as being very linear, with research results, new technologies or user insights being channelled, often prematurely, into specific products and process” (Kyffin and Gardien 2009). It is precisely this perception of innovation-as-linear-process which this paper seeks to challenge. While there are many current theories and much contemporary literature available which discuss the management and catalysts of innovation, what is missing are examples of how innovation occurs from the application of these theories and literature (Wrigley & Bucolo 2010). This paper addresses both this gap and perceptions of the viability of linear innovation by presenting a case study for the commercialisation of a core technology (a cleantech, semi-portable mass-energy generator posited as a direct competitor to conventional energy provision systems), within an 18-month timeframe by the use of the Design-Led Innovation approach: “a process of creating a sustainable competitive advantage by radically changing the customer value proposition” (Bucolo & Matthews 2011).
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky’s total deformation theory is used to evaluate the effective shear modulus, Young’s modulus and Poisson’s ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape function at the field point, the radial basis function (RBF) in the local support domain is used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
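The RBF machinery mentioned above can be sketched in one dimension: build a multiquadric interpolant over a set of support nodes and evaluate its second derivative analytically at a field point, as a collocation method would. The node layout, shape parameter c and sample field are illustrative choices, not the paper's discretisation.

```python
import numpy as np

def mq(r2, c):
    """Multiquadric basis phi(r) = sqrt(r^2 + c^2), given squared distances."""
    return np.sqrt(r2 + c * c)

def mq_d2(x, xi, c):
    """Second derivative d^2/dx^2 of sqrt((x - xi)^2 + c^2),
    which works out to c^2 / ((x - xi)^2 + c^2)^(3/2)."""
    r2 = (x - xi) ** 2
    return c * c / (r2 + c * c) ** 1.5

nodes = np.linspace(0.0, 1.0, 11)       # support nodes (illustrative)
c = 0.2                                 # shape parameter (illustrative)
f = nodes ** 3                          # sample field u(x) = x^3, so u'' = 6x

# Interpolation: solve A @ alpha = f with A_ij = phi(|x_i - x_j|)
A = mq((nodes[:, None] - nodes[None, :]) ** 2, c)
alpha = np.linalg.solve(A, f)

# Second derivative of the interpolant at a field point
x_eval = 0.5
u_xx = np.sum(alpha * mq_d2(x_eval, nodes, c))  # approximates 6 * 0.5
```

Because the basis is smooth and its derivatives are available in closed form, the same coefficients `alpha` give both the interpolated field and its second derivatives, which is what makes RBF collocation attractive for the effective-modulus iteration described above.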
Abstract:
This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a measurement model for measuring information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple, yet generalisable, and yields results that are comparable across time, stakeholders, different systems and system contexts. The IS-Impact model is defined as “a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. The model represents four dimensions: ‘Individual Impact’, ‘Organizational Impact’, ‘Information Quality’ and ‘System Quality’. The two Impact dimensions measure the up-to-date impact of the evaluated system, while the remaining two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the ITPS goal “to develop the most widely employed model”, this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research aims to test the generalisability of the model by addressing its known limitations. One of these limitations relates to the extent of the model's external validity. In order to gain wide acceptance, a model should be consistent and work well in different contexts. The IS-Impact model, however, was only validated in the Australian context, with packaged software chosen as the IS under study. Thus, this study is concerned with whether the model can be applied in a different context. Aiming for a robust and standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument to public sector organisations in Malaysia.
The overarching research question (managerial question) of this research is “How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?” With two main objectives, the managerial question is broken down into two specific research questions. The first research question addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context, as well as the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that the selection of measures needs to be done purposely for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is as follows: “Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?” [RQ1]. The IS-Impact model is a multidimensional model that consists of four dimensions or constructs. Each dimension is represented by formative measures or indicators. Formative measures are known as composite variables because these measures make up, or form, the construct — in this case, the dimension in the IS-Impact model. These formative measures define different aspects of the dimension; thus, a measurement model of this kind needs to be tested not just on the structural relationships between the constructs but also on the validity of each measure. In a previous study, the IS-Impact model was validated using formative validation techniques, as proposed in the literature (i.e., Diamantopoulos and Winklhofer, 2001; Diamantopoulos and Siguaw, 2006; Petter, Straub and Rai, 2007). However, there is potential for improving the validation testing of the model by adding more criterion or dependent variables.
This includes identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research, whereby the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model: “Is the IS-Impact model valid as a multidimensional formative construct?” [RQ2]. This study employs two rounds of surveys, each having a different and specific aim. The first is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. This survey was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results from the qualitative analysis demonstrate the applicability of most of the IS-Impact measures. The analysis also shows that a significant new measure emerged from the context; this new measure was added as one of the System Quality measures. The second survey is a quantitative survey that aims to operationalise the measures identified from the qualitative analysis and rigorously validate the model. This survey was conducted in four state governments (including the state government involved in the first survey). A total of 254 valid responses were used in the data analysis. Data was analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This study is the first research that extends the complete IS-Impact model to a new context that differs in nationality, language and type of information system (IS). The main contribution of this research is to present a comprehensive, up-to-date IS-Impact model, which has been validated in the new context.
The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing IS evaluation research by extending it to the Malaysian context. A further contribution is a validated Malaysian-language IS-Impact measurement instrument. It is hoped that the validated Malaysian IS-Impact instrument will encourage related IS research in Malaysia, and that the demonstrated model validity and generalisability will encourage a cumulative tradition of research previously not possible. The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in ‘identification through measurement relations’; (2) a stronger, multi-item ‘Satisfaction’ construct, employed in ‘identification through structural relations’; (3) an alternative version of the main survey instrument in which items are randomized (rather than blocked) for comparison with the main survey data, to address possible common method variance (no significant differences between the two survey instruments were observed); (4) a demonstrated validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involved unidimensional constructs); (5) testing for suppressor effects that influence the significance of some measures and dimensions in the model; and (6) a demonstration of the effect of an imbalanced number of measures within a construct on the contribution power of each dimension in a multidimensional model.
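The idea behind validating formative indicators against a criterion ('identification through measurement relations') can be illustrated with a toy simulation: regress a global criterion item on the formative indicators to obtain weights, then check that the resulting composite strongly predicts the criterion. The data are simulated and the approach is a simplified OLS stand-in, not the thesis's PLS procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Four hypothetical formative indicators (e.g. quality measures) and a
# global single-item criterion generated from them with known weights.
indicators = rng.normal(size=(n, 4))
true_w = np.array([0.5, 0.3, 0.15, 0.05])
criterion = indicators @ true_w + 0.1 * rng.normal(size=n)

# Estimate indicator weights by regressing the criterion on the
# indicators (intercept in the first column), then score the composite.
X = np.column_stack([np.ones(n), indicators])
w = np.linalg.lstsq(X, criterion, rcond=None)[0][1:]
composite = indicators @ w

# Validity check: the formative composite should correlate strongly
# with the external criterion it is meant to form.
r = np.corrcoef(composite, criterion)[0, 1]
```

A weak correlation here would signal that the chosen indicators do not adequately form the construct — the same logic, in miniature, as adding criterion variables to strengthen formative construct validation.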