917 results for System complexity
Abstract:
The gastric-derived orexigenic peptide ghrelin affects brain circuits involved in energy balance as well as in reward. Indeed, ghrelin activates an important reward circuit involved in natural as well as drug-induced reward, the cholinergic-dopaminergic reward link. It has been hypothesized that there is a common reward mechanism for alcohol and sweet substances in both animals and humans. Alcohol-dependent individuals have higher craving for sweets than do healthy controls, and the hedonic response to sweet taste may, at least in part, depend on genetic factors. Rats selectively bred for high sucrose intake have higher alcohol consumption than non-sucrose-preferring rats, and vice versa. In the present study, a group of alcohol-consuming individuals selected from a population cohort was investigated for genetic variants of the ghrelin signalling system in relation to both their alcohol and sucrose consumption. Moreover, the effects of GHS-R1A antagonism on voluntary sucrose intake and operant self-administration, as well as on saccharin intake, were investigated in preclinical studies using rodents. The effects of peripheral ghrelin administration on sucrose intake were also examined. We found associations between ghrelin gene haplotypes and increased sucrose consumption, and a trend for the same association was seen in the high alcohol consumers. The preclinical data show that a GHS-R1A antagonist reduces the intake and self-administration of sucrose in rats, as well as saccharin intake in mice. Further, ghrelin increases the intake of sucrose in rats. Collectively, our data provide a clear indication that GHS-R1A antagonists reduce, and ghrelin increases, the intake of rewarding substances; hence, the central ghrelin signalling system provides a novel target for the development of drug strategies to treat addictive behaviours. © 2011 Landgren et al.
Abstract:
Existing recommendation systems often recommend products to users by capturing item-to-item and user-to-user similarity measures. These systems become inefficient in people-to-people networks, where people-to-people recommendation requires a two-way relationship. Moreover, existing recommendation methods use traditional two-dimensional models to find interrelationships between alike users and items. Two-dimensional models cannot efficiently model a people-to-people network, as the latent correlations between people and their attributes are not utilized. In this paper, we propose a novel tensor decomposition-based method for people-to-people recommendation based on users' profiles and their interactions. People-to-people network data is multi-dimensional; when modeled using vector-based methods it tends to suffer information loss, as such methods capture either the interactions or the attributes of the users, but not both. This paper utilizes tensor models, which can correlate and find latent relationships between similar users based on both kinds of information, user interactions and user attributes, in order to generate recommendations. Empirical analysis is conducted on a real-life online dating dataset. As the results demonstrate, tensor modeling and decomposition enabled the identification of latent correlations between people based on their attributes and interactions in the network, and quality recommendations were derived using the 'alike' users concept.
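A minimal sketch of the idea, assuming a recent version of the open-source tensorly library: interactions are arranged as a user x user x interaction-type tensor, a CP (PARAFAC) decomposition extracts latent factors per mode, and candidate pairs are scored from those factors. The data layout and the scoring rule below are illustrative assumptions, not the paper's exact method (user attributes could be added as a further mode).

```python
# Sketch: people-to-people recommendation via CP (PARAFAC) decomposition.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_users, n_types = 50, 3  # hypothetical types: viewed, messaged, replied
X = (rng.random((n_users, n_users, n_types)) > 0.9).astype(float)

# Rank-5 CP decomposition: one latent vector per user per mode.
weights, factors = parafac(tl.tensor(X), rank=5, n_iter_max=200)
senders, receivers, types = factors

def score(u, v):
    """Predicted affinity of user u for user v, summed over interaction types."""
    return float(np.sum(weights * senders[u] * receivers[v] * types.sum(axis=0)))

# Recommend the top-3 candidates for user 0.
print(sorted(range(1, n_users), key=lambda v: -score(0, v))[:3])
```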
Abstract:
Recent surveys of information technology management professionals show that understanding business domains in terms of business productivity and cost-reduction potential, knowledge of different vertical industry segments and their information requirements, understanding of business processes, and client-facing skills are more critical for Information Systems personnel than ever before. In an attempt to restructure the information systems curriculum accordingly, our view is that information systems students need to develop an appreciation for organizational work systems in order to understand the operation and significance of information systems within such work systems.
Abstract:
This chapter argues that evolutionary economics should be founded upon complex systems theory rather than neo-Darwinian analogies concerning natural selection, which focus on supply-side considerations and competition amongst firms and technologies. It suggests that conceptions such as production and consumption functions should be replaced by network representations, in which the preferences or, more correctly, the aspirations of consumers are fundamental and, as such, the primary drivers of economic growth. Technological innovation is viewed as a process intermediate between these aspirational networks and the organizational networks in which goods and services are produced. Consumer knowledge becomes at least as important as producer knowledge in determining how economic value is generated. It becomes clear that the stability afforded by connective systems of rules is essential for economic flexibility to exist, but that too many rules result in inert and structurally unstable states. In contrast, too few rules result in a more stable state, but at a low level of ordered complexity. Economic evolution from this perspective is explored using random and scale-free network representations of complex systems.
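The closing sentence can be made concrete with a small sketch, assuming the open-source networkx library: the two network representations mentioned, a random (Erdős-Rényi) graph and a scale-free (Barabási-Albert) graph, differ sharply in their degree distributions even at the same size. The parameters below are illustrative only.

```python
# Compare a random network with a scale-free network of equal size.
import networkx as nx

n = 1000
random_net = nx.gnp_random_graph(n, p=0.006, seed=1)    # uniform random links
scale_free = nx.barabasi_albert_graph(n, m=3, seed=1)   # preferential attachment

for name, g in [("random", random_net), ("scale-free", scale_free)]:
    degs = [d for _, d in g.degree()]
    # Scale-free graphs show a few very high-degree hubs; random graphs do not.
    print(f"{name}: mean degree {sum(degs)/n:.1f}, max degree {max(degs)}")
```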
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs through early error detection. This is just as true from a software engineering point of view, where models facilitate stakeholder communication and software system design. Research has investigated several proposals for measures of business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds that can reliably indicate that a certain counter-action has to be taken. This cannot be achieved by providing measures alone; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice to determine thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
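To make the threshold-derivation step concrete, here is a minimal sketch assuming scikit-learn and synthetic data: a candidate cut-off for one structural measure is read off the ROC curve by maximising Youden's J statistic. The paper uses an adaptation of the ROC-curves method, so this only approximates the general idea, not the authors' exact procedure.

```python
# Derive an error-prediction threshold for one measure via ROC analysis.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Hypothetical data: a structural measure (e.g. model size) per model,
# and a flag for whether the model contained an error.
size = np.concatenate([rng.normal(30, 8, 200), rng.normal(55, 12, 100)])
has_error = np.concatenate([np.zeros(200), np.ones(100)])

fpr, tpr, thresholds = roc_curve(has_error, size)
best = thresholds[np.argmax(tpr - fpr)]   # maximise Youden's J = TPR - FPR
print(f"flag models with size > {best:.1f}")
```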
Abstract:
This paper investigates the use of visual artifacts to represent a complex adaptive system (CAS). The integrated master schedule (IMS) is one such visual artifact, widely used in complex projects for scheduling, budgeting, and project management. In this paper, we discuss how the IMS outperforms traditional timelines and acts as a ‘multi-level and poly-temporal boundary object’ that visually represents the CAS. We report the findings of a case study of the way the IMS mapped interactions, interdependencies, constraints and fractal patterns in a complex project. Finally, we discuss how the IMS was utilised as a complex boundary object, eliciting commitment, developing shared mental models, and facilitating negotiation through the layers of multiple interpretations from stakeholders.
Abstract:
The main aim of this thesis is to analyse and optimise a public hospital Emergency Department. The Emergency Department (ED) is a complex system with limited resources and high demand for those resources. Adding to the complexity is the stochastic nature of almost every element and characteristic in the ED. Interaction with other functional areas also complicates the system, as these areas have a huge impact on the ED while the ED is powerless to change them. It is therefore imperative that Operations Research (OR) be applied to the ED to improve performance within the constraints of the system. The main characteristics of the system to optimise included tardiness, adherence to waiting-time targets, access block and length of stay. A validated and verified simulation model was built to model the real-life system. This enabled detailed analysis of resources and flow without disruption to the actual ED, and allowed a wide range of ED policies and resource configurations to be investigated. Of particular interest were the number and type of beds in the ED and the shift times of physicians. Notably, neither of these resources works in isolation; to optimise the system, both need to be investigated in tandem. The ED was likened to a flow-shop scheduling problem, with patients and beds being synonymous with the jobs and machines typically found in manufacturing problems. This enabled an analytic scheduling approach. Constructive heuristics were developed to reactively schedule the system in real time, and these improved the performance of the system. Metaheuristics that optimised the system were also developed and analysed. An innovative hybrid Simulated Annealing and Tabu Search algorithm was developed that outperformed both the simulated annealing and tabu search algorithms by combining some of their features; the new algorithm achieves a better solution, and does so in a shorter time.
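A minimal sketch of the hybrid idea described above: simulated annealing's probabilistic acceptance combined with a tabu list that blocks recently visited solutions. The toy cost function, neighbourhood moves and cooling schedule are illustrative assumptions, not the thesis's ED scheduling model.

```python
# Hybrid Simulated Annealing / Tabu Search on a toy permutation problem.
import math
import random
from collections import deque

def hybrid_sa_tabu(cost, neighbours, x0, t0=10.0, alpha=0.995,
                   tabu_len=20, iters=5000, seed=0):
    rng = random.Random(seed)
    x = best = x0
    tabu = deque(maxlen=tabu_len)   # recently visited solutions are forbidden
    t = t0
    for _ in range(iters):
        cand = [n for n in neighbours(x, rng) if n not in tabu]
        if not cand:
            continue
        y = rng.choice(cand)
        d = cost(y) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / t):   # SA acceptance rule
            tabu.append(x)          # forbid moving straight back
            x = y
            if cost(x) < cost(best):
                best = x
        t *= alpha                  # geometric cooling
    return best

# Toy use: order 8 'patients' to minimise a tardiness-like weighted cost.
def cost(perm):
    return sum(i * p for i, p in enumerate(perm))

def neighbours(perm, rng):
    out = []
    for _ in range(10):             # sample ten random pairwise swaps
        i, j = rng.sample(range(len(perm)), 2)
        q = list(perm); q[i], q[j] = q[j], q[i]
        out.append(tuple(q))
    return out

print(hybrid_sa_tabu(cost, neighbours, tuple(range(1, 9))))
```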
Abstract:
DeLone and McLean (1992, p. 16) argue that the concept of “system use” has suffered from a “too simplistic definition.” Despite decades of substantial research on system use, the concept has yet to receive strong theoretical scrutiny. Many measures of system use, and the procedures for developing them, have often been idiosyncratic and lack credibility or comparability. This paper reviews various attempts at conceptualizing and measuring system use and then proposes a re-conceptualization of it as “the level of incorporation of an information system within a user’s processes.” The definition is supported by work systems theory and by system and key-user-group considerations. We then develop the concept of a Functional-Interface-Point (FIP) and four dimensions of system usage: extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; thoroughness, the level of use of the information/functionality provided by the system at an FIP; and attitude towards use, a set of measures that assess the level of comfort, degree of respect and the challenges set forth by the system. The paper argues that the automation level, the proportion of the business process encoded by the information system, has a mediating impact on system use. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.
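As a rough data sketch only, not the paper's operationalisation: the four proposed dimensions could be recorded per business process as below, with extent computed as the proportion of available FIPs actually used. All field names and formulas here are hypothetical.

```python
# Hypothetical record of the four system-usage dimensions for one process.
from dataclasses import dataclass

@dataclass
class SystemUse:
    fips_total: int        # FIPs the system offers to the process
    fips_used: set         # FIPs the process actually touches
    uses_per_day: float    # frequency: rate of FIP invocations
    thoroughness: float    # share of an FIP's functionality exercised, 0..1
    attitude: float        # survey score: comfort/respect/challenge, 0..1

    @property
    def extent(self) -> float:
        """Proportion of available FIPs used by the business process."""
        return len(self.fips_used) / self.fips_total

u = SystemUse(fips_total=12, fips_used={"order", "invoice", "ship"},
              uses_per_day=40.0, thoroughness=0.6, attitude=0.7)
print(f"extent = {u.extent:.2f}")   # 3 of 12 FIPs -> 0.25
```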
Abstract:
This article explores the way in which a major Australian radiology organization implemented a complex accounting information system and how workers in the 72 radiology practices that had to use it resisted the change. The study reports on the issues that led to the circumvention of the system by individuals and, after only three years, to the complete withdrawal of the accounting information system by the parent organization. This article has implications for firms in health care and other sectors considering implementing new accounting information systems. Organizations need to incorporate change management techniques and provide open communication to all stakeholders to minimize disruption and potential problems.
Abstract:
Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes and variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) in an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to such decomposition processes, there have recently been demands for pulsed power in other areas, such as the combination of molecules (i.e. fusion, material joining), radiation generation (i.e. electron beams, laser, and radar), explosions (i.e. concrete recycling), and wastewater, exhaust gas, and material-surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids), in some cases to form a plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the exigency of having more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas and need a flexible pulse modulator that can vary pulse features, in order to investigate the influence of these variations on the application. In addition, there is the need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators utilising different accumulation techniques, including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as the spark gap and the hydrogen thyratron) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. Being bulky, heavy and expensive are further disadvantages associated with these devices. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable, reliable and long-lived, its high-frequency switching capability allows repetitive operation of a pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which in some cases cannot satisfy the application’s requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high-voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators have been the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives:
• To innovate and develop novel solid-state-based topologies for pulsed power generation.
• To improve available technologies that have the potential to accommodate solid-state technology, by revising, reconfiguring and adjusting their structures and control algorithms.
The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. As a result of this study, it appears that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state-based configurations were considered and simulated in order to evaluate their potential for use in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current-source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of the switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was used to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values were produced at the output. The evidence acquired from this examination therefore rules out transient effects on pulse rise time. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state-based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so that it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. The determination of components and the exchanged-energy calculations are carried out with respect to application specifications and demands. Both topologies were modelled simply, and simulation studies were carried out with the simplified models. Experimental assessments were also executed on implemented hardware, and these verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated-gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted by diode rectifiers to adjust MGs for proper voltage sharing.
However, despite utilising solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. After a number of charging techniques were considered, the resonant phenomenon was adopted in one proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment the conducted current through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high-voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to that capacitor. Insulation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of required fast switching devices in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, which leads to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all relevant analysis and approaches for these topologies.
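As a worked illustration of the resonant-charging figures quoted above (each capacitor charged to roughly twice the DC input, with the switch triggered at the zero-current instant), here is a short numeric sketch for an ideal lossless LC loop; the component values and stage count are illustrative assumptions, not the thesis's design.

```python
# Resonant charging of a Marx-stage capacitor from a DC source through an
# inductor: V_C(t) = V_in * (1 - cos(w t)), i(t) = (V_in / Z) * sin(w t),
# with w = 1/sqrt(L*C). The current crosses zero at t = pi/w, exactly when
# the capacitor voltage peaks at 2 * V_in, so switching there is lossless.
import math

V_in, L, C, n_stages = 400.0, 10e-6, 1e-6, 10   # volts, henries, farads
w = 1.0 / math.sqrt(L * C)                      # resonant frequency (rad/s)
t_switch = math.pi / w                          # zero-current instant

v_cap = V_in * (1 - math.cos(w * t_switch))                   # = 2 * V_in
i_sw = (V_in / math.sqrt(L / C)) * math.sin(w * t_switch)     # ~ 0 A

print(f"capacitor voltage at switch-on: {v_cap:.0f} V")
print(f"current at switch-on: {i_sw:.2e} A")
print(f"ideal Marx output: {n_stages * v_cap / 1e3:.1f} kV")
```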
Abstract:
Despite promising benefits and advantages, there are reports of failures and low realisation of benefits in Enterprise System (ES) initiatives. Among the research on the factors that influence ES success, there is a dearth of studies on the knowledge implications of multiple end-user groups using the same ES application. An ES facilitates the work of several user groups, ranging from strategic management, through management, to operational staff, all using the same system for multiple objectives. Given the fundamental characteristics of ES (integration of modules, business-process views, and aspects of information transparency), it is necessary that all frequent end-users share a reasonable amount of common knowledge and integrate their knowledge to yield new knowledge. Recent literature on ES implementation highlights the importance of Knowledge Integration (KI) for implementation success. Unfortunately, the importance of KI is often overlooked and little is known about the role of KI in ES success. Many organisations do not achieve the potential benefits of their ES investment because they do not consider the need for, or their ability to achieve, integration of their employees’ knowledge. This study is designed to improve our understanding of the influence of KI among ES end-users on operational ES success. The three objectives of the study are: (I) to identify and validate the antecedents of KI effectiveness; (II) to investigate the impact of KI effectiveness on the goodness of individuals’ ES-knowledge base; and (III) to examine the impact of the goodness of individuals’ ES-knowledge base on operational ES success. For this purpose, we employ the KI factors identified by Grant (1996) and the IS-impact measurement model of Gable et al. (2008) to examine ES success. The study derives its findings from data gathered from six Malaysian companies in pursuit of the three-fold goal outlined above. The relationships between the antecedents of KI effectiveness and its consequences are tested using 188 survey responses representing the views of management and operational employment cohorts. Using statistical methods, we confirm three antecedents of KI effectiveness and validate their consequences for ES success. The findings demonstrate a statistically significant positive impact of KI effectiveness on ES success, with KI effectiveness contributing almost one-third of ES success. This research makes a number of contributions to the understanding of the influence of KI on ES success. First, based on empirical work using a complete nomological net model, the role of KI effectiveness in ES success is evidenced. Second, the model provides a theoretical lens for a more comprehensive understanding of the impact of KI on the level of ES success. Third, restructuring the dimensions of the knowledge-based theory to fit the context of ES extends its applicability and generalisability to contemporary Information Systems. Fourth, the study develops and validates measures for the antecedents of KI effectiveness. Fifth, the study demonstrates the statistically significant positive influence of the goodness of KI on ES success. From a practical viewpoint, this study emphasises the importance of KI effectiveness as a direct antecedent of ES success, and the critical factors identified among the antecedents of KI effectiveness indicate where practitioners should focus their attention.
Abstract:
Many modern business environments employ software to automate the delivery of workflows; however, workflow design and generation remain a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge, or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but they are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we define a composition operator and prove it associative, permitting crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
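As an illustration of what an associative composition operator buys, here is a deliberately simplified sketch in which a task is reduced to its ordered atomic work steps and composition is sequencing; the paper's operator works over formal deduction rules, so this mirrors only the associativity property, not the framework itself. All names are hypothetical.

```python
# Toy task algebra: composition is sequencing, which is associative, so
# hierarchical groupings of tasks can be rearranged without changing the
# overall work performed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    steps: tuple  # ordered atomic work steps

    def compose(self, other: "Task") -> "Task":
        """Hierarchical composition: perform self, then other."""
        return Task(f"({self.name};{other.name})", self.steps + other.steps)

a = Task("cut", ("position", "cut"))
b = Task("drill", ("position", "drill"))
c = Task("pack", ("wrap", "box"))

# Associativity: (a;b);c and a;(b;c) yield the same step sequence.
assert a.compose(b).compose(c).steps == a.compose(b.compose(c)).steps
print(a.compose(b).compose(c))
```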