523 results for due process


Relevance:

20.00%

Publisher:

Abstract:

For millennia humans have sought, organized, and used information as they learned, evolving patterns of information behavior to solve problems and survive. However, despite the current focus on living in an "information age," we have a limited evolutionary understanding of human information behavior. In this article the authors examine three current interdisciplinary approaches to conceptualizing how humans have sought information: (a) the everyday life information seeking-sense-making approach, (b) the information foraging approach, and (c) the problem-solution perspective on information seeking. In addition, because the role of information use in information behavior remains unclear, a fourth approach is provided, based on a theory of information use. The proposed use theory starts from the evolutionary psychology notion that humans are able to adapt to their environment and survive because of their modular cognitive architecture. Finally, the authors begin conceptualizing these diverse approaches, and their various aspects and elements, within an integrated model that takes information use into account; an initial integrated model of these approaches with information use is proposed.

Relevance:

20.00%

Publisher:

Abstract:

Flow regime transition criteria are of practical importance for two-phase flow analyses at reduced-gravity conditions. Here, flow regime transition criteria that take the friction pressure loss effect into account were studied in detail. Criteria for reduced-gravity conditions were developed by extending an existing model, and comparison with various experimental datasets taken at microgravity conditions showed satisfactory agreement. Sample computations of the model were performed at various gravity levels, namely 0.196, 1.62, 3.71, and 9.81 m/s2, corresponding to micro-gravity and lunar, Martian, and Earth surface gravity, respectively. It was found that the effect of gravity on bubbly-slug and slug-annular (churn) transitions in a two-phase flow system was more pronounced at low liquid flow conditions, whereas the gravity effect could be ignored at high mixture volumetric flux conditions. For the annular flow transitions due to flow reversal and onset of droplet entrainment, however, higher superficial gas velocity was obtained at higher gravity levels.
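For concreteness, the sketch below evaluates a drift-flux bubbly-to-slug boundary (transition at void fraction of about 0.3, a standard Mishima-Ishii-style form used here only as a stand-in for the extended model in the abstract) at the four gravity levels mentioned; the air-water property values are illustrative assumptions.

```python
import numpy as np

# Hypothetical air-water properties; the criterion form is a textbook
# drift-flux stand-in, not the paper's extended model.
SIGMA, RHO_L, DRHO, C0, ALPHA_TR = 0.072, 998.0, 997.0, 1.2, 0.3

def bubbly_slug_jg(jf, g):
    """Superficial gas velocity at the bubbly-slug boundary for liquid flux jf."""
    vgj = np.sqrt(2.0) * (SIGMA * g * DRHO / RHO_L**2) ** 0.25  # drift velocity
    return (ALPHA_TR * C0 * jf + ALPHA_TR * vgj) / (1.0 - ALPHA_TR * C0)

for g in [0.196, 1.62, 3.71, 9.81]:   # micro-g, lunar, Martian, Earth gravity
    print(f"g={g:5.3f} m/s^2: jg_boundary = {bubbly_slug_jg(jf=0.1, g=g):.4f} m/s")
```

At this low liquid flux the boundary shifts visibly with g through the drift velocity, consistent with the gravity sensitivity the abstract reports.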

Relevance:

20.00%

Publisher:

Abstract:

Shattuckite, Cu5(SiO3)4(OH)2, is a copper hydroxy silicate commonly known as a 'healing' mineral. Three shattuckite mineral samples from three different origins were analysed by Raman spectroscopy. Some Raman bands are common to the spectra of all three minerals. Raman bands at around 890, 1058 and 1102 cm-1 are attributed to the ν3 SiO3 antisymmetric stretching vibrations. The Raman band at 670 cm-1 is assigned to the ν4 bending modes of the SiO3 units, and the band at around 785 cm-1 is due to the Si-O-Si chain stretching mode. Raman (and infrared) spectroscopy proves that water is present in the molecular structure of shattuckite; thus the formula is better written as Cu5(SiO3)4(OH)2•xH2O.

Relevance:

20.00%

Publisher:

Abstract:

The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs through early error detection. This is just as true from a software engineering point of view, where models facilitate stakeholder communication and software system design. Research has investigated several proposals for measures of business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds that can reliably indicate that a certain counter-action has to be taken. This cannot be achieved by providing measures alone; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
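The following is a sketch of this kind of ROC-based threshold derivation on synthetic data, using Youden's J statistic to pick the cut-off; the measure name, the labels, and the exact ROC adaptation used in the paper may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic stand-in data: `size` is a hypothetical structural measure per
# process model, `has_error` the error label for that model.
rng = np.random.default_rng(0)
size = np.concatenate([rng.normal(30, 10, 500),    # error-free models: smaller
                       rng.normal(60, 15, 200)])   # erroneous models: larger
has_error = np.concatenate([np.zeros(500), np.ones(200)])

fpr, tpr, thresholds = roc_curve(has_error, size)
best = np.argmax(tpr - fpr)                        # maximise Youden's J = TPR - FPR
print(f"suggested size threshold: {thresholds[best]:.1f}")
```

The chosen threshold is the point on the ROC curve that best separates erroneous from error-free models, which is the role the derived thresholds play in the refined guidelines.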

Relevance:

20.00%

Publisher:

Abstract:

Technologies and languages for integrated processes are a relatively recent innovation, and over that period many divergent waves of innovation have transformed process integration. Like sockets and distributed objects, early workflow systems offered programming interfaces that connected the process modelling layer to any middleware. BPM systems emerged later, connecting the modelling world to middleware through components. While BPM systems increased ease of use (modelling convenience), long-standing and complex interactions involving many process instances remained difficult to model. Enterprise Service Buses (ESBs) followed, connecting process models to heterogeneous forms of middleware. ESBs, however, generally forced modellers to choose a particular underlying middleware and to stick to it, despite their ability to connect with many forms of middleware. Furthermore, ESBs encourage process integrations to be modelled on their own, logically separate from the process model. This can lead to an inability to reason about long-standing conversations at the process layer. Technologies and languages for process integration generally lack formality, which has led to arbitrariness in the underlying language building blocks. Conceptual holes exist in a range of technologies and languages for process integration, and these can lead to customer dissatisfaction and to integration projects failing to reach their potential. Standards for process integration share fundamental flaws similar to those of the languages and technologies, and standards are in direct competition with one another, causing a lack of clarity. Thus the area of greatest risk in a BPM project remains process integration, despite major advancements in the technology base. This research examines some fundamental aspects of communication middleware and how these fundamental building blocks of integration can be brought to the process modelling layer in a technology-agnostic manner. In this way, process modelling can be conceptually complete without becoming stuck in a particular middleware technology. Coloured Petri nets are used to define a formal semantics for the fundamental aspects of communication middleware; they provide the means to define and model the dynamic aspects of various integration middleware. Process integration patterns are used as a tool to codify common problems to be solved. Object Role Modelling, a formal modelling technique, was used to define the syntax of a proposed process integration language. This thesis provides several contributions to the field of process integration. It proposes a framework defining the key notions of integration middleware, providing a conceptual foundation upon which a process integration language could be built. The thesis defines an architecture that allows various forms of middleware to be aggregated and reasoned about at the process layer. It provides a comprehensive set of process integration patterns, which constitute a benchmark for the kinds of problems a process integration language must support. The thesis also proposes a process integration modelling language and a partial implementation that is able to enact the language. A process integration pilot project in a German hospital, based on the ideas in this thesis, is briefly described at the end of the thesis.
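As an illustration of the kind of problem such integration patterns codify, here is a minimal, middleware-agnostic sketch of one classic pattern, request-reply with a correlation identifier; the in-memory queues and all names are invented for the example and are not taken from the thesis.

```python
import queue
import uuid

# The queues stand in for arbitrary messaging middleware.
requests, replies = queue.Queue(), queue.Queue()

def service():
    """The provider side of the conversation: consume a request, emit a reply."""
    corr_id, payload = requests.get()
    replies.put((corr_id, payload.upper()))

corr_id = str(uuid.uuid4())          # ties the reply back to this process instance
requests.put((corr_id, "order-123"))
service()
got_id, result = replies.get()
assert got_id == corr_id             # the process layer can reason about the conversation
print(result)
```

The correlation identifier is what lets a long-standing conversation be tracked at the process layer rather than being buried inside the middleware, which is the gap the thesis addresses.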

Relevance:

20.00%

Publisher:

Abstract:

This chapter describes current trends in the global media environment, with a focus on their implications for the management of public agendas and political processes. It assesses the extent to which trends such as the growth of the blogosphere, "citizen journalism," and other forms of user-generated content have complicated and problematized news and agenda management as engaged in by both media and political elites. It argues that, in large part due to the rise of the internet and the proliferation of online producers of information and commentary, alongside 24-hour news channels such as CNN and Al Jazeera, political and social actors today face a much more complex, chaotic communication environment than ever before, an environment characterized as one of cultural chaos. Having outlined the roots of this trend in the emergence of an expanded, globalized public sphere, the chapter goes on to ask if elite control over the political agenda has been eroded, and if it has, what the consequences for government and the exercise of power might be. Can authoritarian regimes in China, the Middle East, and elsewhere survive the onset of internet-fueled global journalism, for example? In a political environment where public opinion is driven and buffeted by news coverage of unprecedented speed and volume, can democratic governments retain sufficient control over decision- and policy-making processes to enable competent social administration and political management? Can the citizens of contemporary democracies use the emerging media environment to enhance elite accountability and strengthen the democratic process? The chapter concludes that the changing global media environment has the potential to strengthen democratic processes, though there is no single template for the impact of the internet and other new media on specific countries.

Relevance:

20.00%

Publisher:

Abstract:

Crisis holds the potential for profound change in organizations and industries. The past 50 years of crisis management highlight key shifts in crisis practice, creating opportunities for multiple theories and research tracks. Defining crises such as the Tylenol tampering case, the Exxon Valdez spill, and the September 11 terrorist attacks have influenced or challenged the principles of best practice in crisis communication in public relations. This study traces the development of crisis process and practice by identifying shifts in crisis research and models and mapping these against key management theories and practices. The findings define three crisis domains: crisis planning, building and testing predictive models, and mapping and measuring external environmental influences. These crisis domains mirror but lag the evolution of management theory, suggesting challenges for researchers to reshape the research agenda to close the gap and lead the next stage of development in the field of crisis communication for effective organizational outcomes.

Relevance:

20.00%

Publisher:

Abstract:

Providing effective IT support for business processes has become crucial for enterprises to stay competitive. In response to this need, numerous process support paradigms (e.g., workflow management, service flow management, case handling), process specification standards (e.g., WS-BPEL, BPML, BPMN), process tools (e.g., ARIS Toolset, Tibco Staffware, FLOWer), and supporting methods have emerged in recent years. Summarized under the term "Business Process Management" (BPM), these paradigms, standards, tools, and methods have become a success-critical instrument for improving process performance.

Relevance:

20.00%

Publisher:

Abstract:

Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) in an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to such decomposition processes, there have recently been demands for pulsed power in other areas, such as the combination of molecules (i.e. fusion, material joining), radiation generation (i.e. electron beams, laser, and radar), explosions (i.e. concrete recycling), and wastewater, exhaust gas, and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids), in some cases to form a plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas that need a flexible pulse modulator to vary pulse features in order to investigate the influence of these variations on the application. In addition, there is a need to prevent the waste of the considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators, which utilise different accumulation techniques including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as spark gaps and hydrogen thyratrons) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. Being bulky, heavy and expensive are further disadvantages associated with these devices. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of a pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which, in some cases, cannot satisfy the application's requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators have been the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under two principal objectives: • to innovate and develop novel solid-state based topologies for pulsed power generation; and • to improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structure and control algorithms. The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study suggested that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current-source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of the switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was used to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values were produced at the output; the evidence acquired from this examination thus rules out transient effects on pulse rise time. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so that it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently to save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. The determination of components and the calculation of exchanged energy are carried out with respect to the application's specifications and demands. Both topologies were modelled, and simulation studies were carried out with the simplified models. Experimental assessments were also executed on implemented hardware, and these approaches verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted by diode rectifiers to provide proper voltage sharing.
However, despite utilizing solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Among a number of charging techniques, the resonance phenomenon is adopted in one proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment at which the current conducted through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. In this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. The insulation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of fast switching devices required in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, which leads to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analysis and approaches for these topologies.
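As a rough check of the resonant-charging argument above, the sketch below (with illustrative component values, not the thesis hardware) confirms that an ideal L-C charging stage peaks at twice the input voltage exactly when the charging current returns to zero, which is the zero-current switching moment the text refers to.

```python
import numpy as np

# Ideal resonant charging: V_C(t) = V_in * (1 - cos(w0 * t)),
# i(t) ~ sin(w0 * t), so the peak voltage at t = pi / w0 coincides
# with zero current through the switch. Values are illustrative.
V_IN, L, C = 1000.0, 1e-3, 1e-6          # volts, henries, farads
w0 = 1.0 / np.sqrt(L * C)                # resonant angular frequency
t_peak = np.pi / w0                      # instant of peak capacitor voltage
v_peak = V_IN * (1 - np.cos(w0 * t_peak))  # = 2 * V_IN for an ideal circuit

n_stages = 10                            # hypothetical Marx stage count
print(f"peak stage voltage: {v_peak:.0f} V")
print(f"ideal Marx output: {n_stages * v_peak:.0f} V")
```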

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, business process management is an important approach for managing organizations from an operational perspective. As a consequence, it is common to see organizations develop collections of hundreds or even thousands of business process models. Such large collections of process models bring new challenges and provide new opportunities, as the knowledge that they encapsulate needs to be properly managed. Therefore, a variety of techniques for managing large collections of business process models is being developed. The goal of this paper is to provide an overview of the management techniques that currently exist, as well as the open research challenges that they pose.

Relevance:

20.00%

Publisher:

Abstract:

Quantum theory has recently been employed to further advance the theory of information retrieval (IR). A challenging research topic is to investigate the so-called quantum-like interference in users' relevance judgement process, where users judge the relevance degree of each document with respect to a given query. In this process, users' relevance judgement of the current document is often interfered with by the judgements of previous documents, due to interference on the users' cognitive status. Research in cognitive science has demonstrated some initial evidence of quantum-like cognitive interference in human decision making, which underpins the user's relevance judgement process. This motivates us to model such cognitive interference in the relevance judgement process, which we believe will lead to better modeling and explanation of user behaviors in the relevance judgement process for IR, and eventually to more user-centric IR models. In this paper, we propose to use probabilistic automata (PA) and quantum finite automata (QFA), which are suitable for representing the transition of user judgement states, to dynamically model the cognitive interference when the user is judging a list of documents.
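To make the automaton idea concrete, here is a minimal sketch of a measure-once quantum finite automaton over two judgement states; the basis states, the rotation unitaries, and the angles are invented for illustration and are not the authors' actual model.

```python
import numpy as np

# Two basis states |R> (relevant) and |N> (not relevant); each judged
# document applies a 2x2 unitary to the judgement state, and the final
# probability of a "relevant" judgement follows the Born rule.
def rotation(theta):
    """A 2x2 real unitary (rotation) acting on the judgement state."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

state = np.array([1.0, 0.0])          # start fully in |R>
for theta in [0.3, -0.5, 0.4]:        # one hypothetical unitary per document
    state = rotation(theta) @ state   # judging a document evolves the state

p_relevant = state[0] ** 2            # Born rule on the |R> amplitude
print(f"P(relevant) after three documents: {p_relevant:.3f}")
```

Because amplitudes (not probabilities) are composed, the outcome depends on the order and sign of the per-document rotations, which is the interference-like behaviour the paper sets out to model.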

Relevance:

20.00%

Publisher:

Abstract:

With the growing number of XML documents on the Web, it becomes essential to organise these XML documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information. The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilises the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval, using the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures to constrain the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that can be used in clustering.
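A minimal sketch of the implicit (VSM) combination idea: concatenate a tf-idf content vector with binary indicators for frequent subtrees, giving one joint feature space for clustering. The documents and the two frequent subtrees below are invented for illustration; the thesis's actual representation may weight or constrain these features differently.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical textual content of three XML documents.
contents = ["wireless sensor network", "sensor data stream", "opera music score"]

# Hypothetical occurrence of two mined frequent subtrees s1, s2 per document.
subtree_hits = np.array([[1, 0],      # doc 0 contains subtree s1
                         [1, 0],      # doc 1 contains subtree s1
                         [0, 1]])     # doc 2 contains subtree s2

tfidf = TfidfVectorizer().fit_transform(contents).toarray()
combined = np.hstack([tfidf, subtree_hits])   # structure + content features
print(combined.shape)                          # ready for any vector clusterer
```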

Relevance:

20.00%

Publisher:

Abstract:

The development of highway infrastructure typically requires major capital input over a long period. This often causes serious financial constraints for investors. The push for sustainability has added new dimensions to the complexity of evaluating highway projects, particularly on the cost front. This makes the determination of long-term viability an even more precarious exercise. Life-cycle costing analysis (LCCA) is generally recognised as a valuable tool for the assessment of financial decisions on construction works. However, to date, existing LCCA models are deficient in dealing with sustainability factors, particularly for infrastructure projects, due to their inherent focus on economic issues alone. This research probed the major challenges of implementing sustainability in highway infrastructure development in terms of financial concerns and obligations. Using the results of a literature review, a questionnaire survey of industry stakeholders, and semi-structured interviews with senior practitioners involved in highway infrastructure development, the research identified the relative importance of cost components relating to sustainability measures and, on that basis, developed ways of improving existing LCCA models to incorporate sustainability commitments into long-term financial management. On this platform, a decision support model incorporating the Fuzzy Analytical Hierarchy Process and LCCA was developed for the evaluation of the specific cost components of most concern to infrastructure stakeholders. Two real highway infrastructure projects in Australia were then used for testing, application and validation before the decision support model was finalised. Improved industry understanding, and tools such as the developed model, will lead to positive sustainability deliverables while ensuring financial viability over the life cycle of highway infrastructure projects.
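A toy sketch of the kind of computation such a decision support model rests on: discounting yearly cost components to present value and weighting the sustainability-related ones by AHP-style importance weights. All component names, cash flows, weights, and the discount rate are invented for illustration and are not from the surveyed projects.

```python
# Hypothetical life-cycle cost inputs for a 30-year analysis horizon.
DISCOUNT = 0.05
construction = 100.0                       # year-0 capital cost, $M
yearly = [("maintenance", 2.0, 0.5),       # (component, $M per year, AHP weight)
          ("emissions offset", 0.8, 0.3),
          ("community impact", 0.4, 0.2)]

npv = construction
for year in range(1, 31):
    for _, cost, weight in yearly:
        # Weight each discounted component by its stakeholder importance.
        npv += weight * cost / (1 + DISCOUNT) ** year
print(f"weighted life-cycle cost: ${npv:.1f}M")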

Relevance:

20.00%

Publisher:

Abstract:

The compressed gas industry and government agencies worldwide utilize "adiabatic compression" testing for qualifying high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. This test methodology is known by various terms, the most common being adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing. The test methodology is described in greater detail throughout this document but, in summary, it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, they are rapidly heated by the sudden increase in pressure to temperatures (thermal energies) sufficient to sometimes result in ignition of the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression process, the more "adiabatic" the pressure surge is presumed to be and the more closely it has been argued to simulate an isentropic process. Generally speaking, adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen and has been implicated in many fire investigations. Because many nonmetallic materials are easily ignited by this heating mechanism, many industry standards prescribe this testing. However, the results between the various laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile achieved (i.e., the temperature/time history of the gas) during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made. This research evaluated the following questions: (1) Can the rapid compression process required by the industry standards be thermodynamically and fluid-dynamically modeled so that predictions of the thermal profiles can be made? (2) Can the thermal profiles produced by the rapid compression process be measured, in order to validate the thermodynamic and fluid dynamic models and to estimate the severity of the test? (3) Can controlling parameters be recommended so that new guidelines may be established for the industry standards to resolve inconsistencies between the various test laboratories conducting tests according to the present standards?
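The heating this test relies on can be bounded with the standard ideal-gas isentropic relation, T2 = T1 * (P2/P1)^((gamma-1)/gamma). The sketch below uses illustrative pressures and is only an upper-bound estimate, not the full thermodynamic/fluid-dynamic model the research calls for.

```python
# Ideal isentropic end-temperature estimate for rapid oxygen compression.
# Values are illustrative: oxygen (gamma ~ 1.4), compressed from 0.1 MPa
# to 25 MPa, starting at room temperature.
GAMMA = 1.4
T1, P1, P2 = 293.0, 0.1e6, 25e6
T2 = T1 * (P2 / P1) ** ((GAMMA - 1) / GAMMA)
print(f"ideal isentropic end temperature: {T2:.0f} K")   # ~1420 K
```

Temperatures of this order comfortably exceed the autoignition temperatures of common polymer seals and seats, which is why the standards prescribe this test and why the actual thermal profile achieved matters so much.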

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an approach to building an observation likelihood function from a set of sparse, noisy training observations taken from known locations by a sensor with no obvious geometric model. The basic approach is to fit an interpolant to the training data, representing the expected observation, and to assume additive sensor noise. The paper takes a Bayesian view of the problem, maintaining a posterior over interpolants rather than simply the maximum-likelihood interpolant, giving a measure of uncertainty in the map at any point. This is done using a Gaussian process framework. To validate the approach experimentally, a model of an environment is built using observations from an omni-directional camera. After the model has been built from the training data, a particle filter is used to localise while traversing this environment.
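A minimal sketch of the Gaussian process machinery described above, with an assumed squared-exponential kernel and invented one-dimensional training data: the posterior mean plays the role of the expected observation, and the posterior variance gives the per-point uncertainty that the maximum-likelihood interpolant would not. The kernel hyperparameters and noise level are assumptions, not the paper's settings.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel between 1-D location arrays a and b."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

x_train = np.array([0.0, 1.0, 2.5, 4.0])   # known sensor locations
y_train = np.array([0.2, 0.8, -0.1, 0.4])  # noisy observations taken there
noise = 0.05                                # assumed additive sensor noise

K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
x_query = np.array([1.7])                   # location to evaluate the likelihood at
k_star = rbf(x_train, x_query)

alpha = np.linalg.solve(K, y_train)
mean = k_star.T @ alpha                     # expected observation at x_query
var = rbf(x_query, x_query) - k_star.T @ np.linalg.solve(K, k_star) + noise**2
print(f"predicted observation: {mean[0]:.2f} +/- {np.sqrt(var[0, 0]):.2f}")
```

In a localiser, the Gaussian with this mean and variance would score how likely an actual camera observation is at each particle's hypothesised location.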