945 results for black-box modelling
Abstract:
For the past three decades, the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond the current standards without increasing the price considerably. Homogeneous Charge Compression Ignition (HCCI) is one combustion technique with the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI works on much leaner mixtures than current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavourable operating conditions and exhaust emissions. This thesis contains two main parts. One part deals with developing an HCCI experimental setup and the other focuses on developing a grey-box modelling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run in HCCI mode. It also comprises details and specifications of all the sensors, actuators and other auxiliary parts attached to the conventional SI engine in order to run and monitor the engine in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups for two different engines are studied, and a grey-box model for emission prediction is developed. The grey-box model is trained on 75% of the data and the remaining data is used for validation. An average 70% increase in accuracy in predicting engine performance is found when using the grey-box model rather than an empirical (black-box) model. The grey-box model offers a solution to the difficulty of real-time control of an HCCI engine, and is the first control-oriented model in the literature for predicting HCCI engine emissions.
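The abstract does not specify the structure of the grey-box model, so the following is only a minimal sketch of the general idea it describes: a physics-inspired baseline combined with a data-driven correction, trained on 75% of the data and validated on the rest, and compared against a purely empirical (black-box) model. The feature names, baseline form and synthetic data are illustrative assumptions, not the thesis's actual model or measurements.

```python
# Illustrative grey-box emission model: a simplified physics-inspired baseline
# plus a data-driven correction fitted to its residuals. The baseline form,
# feature names and data are hypothetical, not those used in the thesis.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical operating points: equivalence ratio, intake temperature (K), engine speed (rpm)
X = np.column_stack([
    rng.uniform(0.25, 0.45, 600),   # equivalence ratio (lean HCCI range)
    rng.uniform(350, 450, 600),     # intake temperature
    rng.uniform(1000, 2500, 600),   # engine speed
])

def physics_baseline(X):
    """Toy Arrhenius-style trend standing in for the physical sub-model."""
    phi, T_in, _ = X.T
    return 1e3 * phi * np.exp(-2500.0 / T_in)

# Synthetic "measured" emissions = baseline trend + unmodelled effects + noise
y = physics_baseline(X) + 0.05 * X[:, 2] / 1000.0 + rng.normal(0, 0.05, 600)

# 75% of the points train the model, 25% are held out for validation (as in the thesis)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, train_size=0.75, random_state=0)

# Grey box: fit the black-box part only to what the physics cannot explain
residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
residual_model.fit(X_tr, y_tr - physics_baseline(X_tr))

def grey_box_predict(X):
    return physics_baseline(X) + residual_model.predict(X)

# Pure black-box benchmark for comparison
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("grey-box  R2:", r2_score(y_val, grey_box_predict(X_val)))
print("black-box R2:", r2_score(y_val, black_box.predict(X_val)))
```

The design choice illustrated is the one the abstract emphasises: the empirical component only has to learn the residual left over by the physical sub-model, which typically makes the combined model more accurate and better suited to real-time control than a purely empirical fit.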
Abstract:
Objectives. Considerable evidence suggests that enforcement efforts cannot fully explain the high degree of tax compliance. To resolve this puzzle of tax compliance, several researchers have argued that citizens' attitudes toward paying taxes, defined as tax morale, help to explain the high degree of tax compliance. However, most studies have treated tax morale as a black box, without discussing which factors shape it. Additionally, the tax compliance literature provides little empirical research investigating attitudes toward paying taxes in Europe. Methods. Thus, this article is unique in its examination of citizens' tax morale within three multicultural European countries (Switzerland, Belgium, and Spain), a choice that allows a far more detailed examination of the impact of culture and institutions, using data sets from the World Values Survey and the European Values Survey. Results. The results indicate that cultural and regional differences tend to affect tax morale. Conclusion. The findings suggest that higher legitimacy of political institutions leads to higher tax morale.
Abstract:
Charmed is a tangible interactive media artwork that explores aspects of daily life in urban environments. The work was commissioned by Experimenta Media Arts for the Experimenta Playground Biennial of Media Art (2007), held at Black Box, Melbourne, Victoria. The work was also shown in Play ++ at the International Symposium on Electronic Art, July to August 2008.
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, 'Evaluating Information Systems', is broad, and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and that those resources need to be invested in a way that provides the greatest benefit to the organization. IT expenditure represents a substantial portion of any organization's investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment. Evaluation is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis, a 'management' emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal, The Journal of Strategic Information Systems, to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area, achieved in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic 'Information Systems Success Measurement', then drills deeper to become even more focused in Part 4 on 'Benchmarking Information Systems'. In other words, the chapter drills down from Parts 1 and 2 (the Value of IS), to Part 3 (Measuring IS Success), to Part 4 (Benchmarking IS). While the commencing parts (1 and 2) are by definition broadly relevant to the chapter topic, the subsequent, more focused parts (3 and 4) admittedly reflect the author's more specific interests. Thus, the three chapter foci (the value of IS, measuring IS success, and benchmarking IS) are not mutually exclusive; rather, each subsequent focus is in most respects a subset of the former.
Parts 1 and 2, 'the Value of IS', take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, 'Information System Success Measurement', focuses more specifically on the measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs: System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use (later suggesting a seventh construct, Service Quality (DeLone and McLean 2003)). These six constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, 'Benchmarking Information Systems', drills deeper again, focusing more specifically on a measure of the IS that can be used as a benchmark. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems. Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and system contexts.
Abstract:
Many jurisdictions have developed mature infrastructures, both administrative and legislative, to promote competition. Substantial funds have been expended to monitor anticompetitive activities, and many jurisdictions have also adopted a form of "Cartel Leniency Program", first developed by the US Federal Trade Commission, to assist in cartel detection. Further, some jurisdictions are now criminalizing cartel behaviour so that cartel participants can be held criminally liable, with substantial custodial penalties imposed. Notwithstanding these multijurisdictional approaches, a new form of possibly anticompetitive behaviour is looming. Synergistic monopolies ('synopolies') involve not competitors within a horizontal market but complementors within separate vertical markets. Where two complementary corporations are monopolists in their own markets, they can, through various technologies, assist each other to expand their respective monopolies, thus creating a barrier to new entrants and/or blocking existing participants from further participation in that market. The nature of the technologies involved means that this potentially anticompetitive activity can easily enter and affect the global marketplace. Competition regulators need to be aware of this potential for abuse and ensure that their respective competition frameworks appropriately address this activity. This paper discusses how new technologies can be used to create a synopoly.
Abstract:
One of the earliest cryptographic applications of quantum information was to create quantum digital cash that could not be counterfeited. In this paper, we describe a new type of quantum money: quantum coins, where all coins of the same denomination are represented by identical quantum states. We state desirable security properties such as anonymity and unforgeability and propose two candidate quantum coin schemes: one using black-box operations, and another using blind quantum computation.
Abstract:
The law and popular opinion expect boards of directors to actively monitor their organisations. Further, public opinion is that boards should have a positive impact on organisational performance. However, the processes of board monitoring and judgment are poorly understood, and board influence on organisational performance needs to be better understood. This thesis responds to the repeated calls to open the 'black box' linking board practices and organisational performance by investigating the processual behaviours of boards. The work of four boards of micro and small nonprofit organisations was studied for periods of at least one year, using a processual research approach and drawing on observations of board meetings, interviews with directors, and the documents of the boards. The research shows that director turnover, the difficulty of recruiting and engaging directors, and the administration of reporting had strong impacts upon board monitoring, judging and/or influence. In addition, board monitoring of organisational performance was adversely affected by directors' limited awareness of their legal responsibilities and directors' limited financial literacy. Directors on average found all sources of information about their organisation's work useful. Board judgments about the financial aspects of organisational performance were regulated by the routines of financial reporting. However, there were no comparable routines facilitating judgments about non-financial performance, and such judgments tended to be limited to specific aspects of performance and were ad hoc, largely in response to new information or the repackaging of existing information in a new form. The thesis argues that Weick's theory of sensemaking offers insight into the way boards went about the task of understanding organisational performance. Board influence on organisational performance was demonstrated in the areas of compliance; instrumental influence through service and through discussion and decision-making; and symbolic, legitimating and protective means. The degree of instrumental influence achieved by boards depended on director competency, access to networks of influence, understandings of board roles, and the agency demonstrated by directors. The thesis concludes that there is a crowding-out effect whereby CEO competence and capability limit board influence. The thesis also suggests that there is a second 'agency problem', a problem of director volition. The research potentially has profound implications for the work of nonprofit boards. Rather than purporting to establish a general theory of board governance, the thesis embraces calls to build situation-specific mini-theories about board behaviour.
Abstract:
Rapidly developing information and telecommunication technologies and their platforms in the late 20th Century helped improve urban infrastructure management and influenced quality of life. Telecommunication technologies make it possible for people to deliver text, audio and video material using wired, wireless or fibre-optic networks. Technological convergence amongst these digital devices continues to create new ways in which information and telecommunication technologies are used. The 21st Century is an era of converged information, in which people are able to access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones. This chapter discusses the recent developments in telecommunication networks and trends in convergence technologies, and their implications for urban infrastructure planning and for the quality of life of urban residents.
Abstract:
Collaboration has been enacted as a core strategy by both the government and non-government sectors to address many of the intractable issues confronting contemporary society. The cult of collaboration has become so pervasive that it is now an elastic term referring generally to any form of 'working together'. The lack of specificity about collaboration and its practice means that it risks being reduced to mere rhetoric without sustained practice or action. Drawing on an extensive data set (qualitative and quantitative) of broadly collaborative endeavours gathered over ten years in Queensland, Australia, this paper aims to fill out the black box of collaboration. Specifically, it examines the drivers for collaboration, the dominant structures and mechanisms adopted, what has worked, and unintended consequences. In particular, it investigates the skills and competencies required in an embedded collaborative endeavour within and across organisations. Social network analysis is applied to isolate the structural properties that distinguish collaboration from other forms of integration, as well as to highlight key roles and tasks. Collaboration is found to be a distinctive form of working together, characterised by intense and interdependent relationships and exchanges, higher levels of cohesion (density), and requiring new ways of behaving, working, managing and leading. These elements are configured into a practice framework. Developing an empirical evidence base for collaboration structure, practice and strategy provides a useful foundation for theory extension. The paper concludes that, for collaboration to be successfully employed as a management strategy, it must move beyond rhetoric and develop a coherent model for action.
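The structural comparison described above rests on standard network measures such as density (cohesion) and centrality (key roles). As a minimal, hypothetical sketch, and not the paper's Queensland data or instrument, the following shows how such properties might be computed for a small inter-organisational network with networkx:

```python
# Minimal, hypothetical sketch of the structural measures mentioned above
# (density as a proxy for cohesion, centrality for key roles). The example
# network is invented for illustration only.
import networkx as nx

# Nodes are organisations; edges are reported working ties
ties = [
    ("AgencyA", "AgencyB"), ("AgencyA", "NGO1"), ("AgencyB", "NGO1"),
    ("NGO1", "NGO2"), ("NGO2", "AgencyC"), ("AgencyA", "AgencyC"),
]
G = nx.Graph(ties)

# Density: proportion of possible ties that are present (higher = more cohesive)
print("density:", round(nx.density(G), 3))

# Degree and betweenness centrality help flag brokers and key roles
print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```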
Abstract:
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called “omics” disciplines of the biological sciences. Such variability is uncovered through multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n≪p constraint and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach in which not only is the importance of each individual protein explicit, but the proteins are also combined into a readily interpretable classification rule, without relying on a black-box approach. Here we apply the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
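The workflow described above (reduce an n≪p data set with PCA or PLS, classify on the reduced scores, and benchmark against an SVM) can be sketched as follows. The data are synthetic and the specific components, classifiers and parameters are illustrative assumptions, not the study's actual pipeline or results.

```python
# Hypothetical sketch of the dimension-reduction-then-classify workflow, with
# an SVM as a black-box benchmark. Synthetic data; settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# n = 60 samples, p = 2000 "protein" features: a typical n << p proteomic shape
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# PCA scores followed by a simple, interpretable statistical classifier (LDA)
pca_lda = make_pipeline(StandardScaler(), PCA(n_components=10), LinearDiscriminantAnalysis())
pca_lda.fit(X_tr, y_tr)

# PLS: supervised projection, then threshold the predicted score at 0.5
pls = make_pipeline(StandardScaler(), PLSRegression(n_components=5)).fit(X_tr, y_tr)
pls_pred = (pls.predict(X_te).ravel() > 0.5).astype(int)

# Black-box benchmark: support vector machine on the full feature set
svm = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X_tr, y_tr)

print("PCA+LDA accuracy:", pca_lda.score(X_te, y_te))
print("PLS-DA accuracy :", (pls_pred == y_te).mean())
print("SVM accuracy    :", svm.score(X_te, y_te))
```

The contrast the abstract draws is visible in the sketch: the PCA/PLS routes yield loadings and scores that can be traced back to individual proteins, whereas the SVM gives a decision rule with no comparable functional model.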
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments, that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. The neurological disorder known as motor neuron disease (MND) progressively deprives motor neurons of the ability to innervate muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively 'dies'. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression because of the body's natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique, which developed a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
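To illustrate the likelihood-free idea described in Part I of the abstract, the sketch below shows generic rejection ABC: simulate from the model, summarise, and keep parameter draws whose simulated summaries lie close to the observed ones. The toy model (a normal with unknown mean), prior and tolerance are invented for illustration; the thesis itself develops more efficient SMC-based versions of this idea rather than plain rejection sampling.

```python
# Generic rejection-ABC sketch: likelihood-free posterior approximation via
# simulation and comparison of summary statistics. Toy model, not the thesis's.
import numpy as np

rng = np.random.default_rng(42)

# "Observed" data generated with a true mean of 3.0 (unknown to the sampler)
y_obs = rng.normal(3.0, 1.0, size=100)

def summary(y):
    # Low-dimensional summary statistics of a data set
    return np.array([y.mean(), y.std()])

def simulate(theta, n=100):
    # Simulator we can sample from even when its likelihood is intractable
    return rng.normal(theta, 1.0, size=n)

s_obs = summary(y_obs)
tolerance = 0.2
accepted = []

for _ in range(20000):
    theta = rng.uniform(-10, 10)        # draw a candidate from the prior
    s_sim = summary(simulate(theta))    # summarise data simulated under theta
    if np.linalg.norm(s_sim - s_obs) < tolerance:
        accepted.append(theta)          # keep draws whose simulations match the data

accepted = np.array(accepted)
print(f"{len(accepted)} accepted draws; approximate posterior mean = {accepted.mean():.2f}")
```

The inefficiency of this scheme, namely that most prior draws are rejected, is exactly what motivates the SMC-based ABC algorithms the thesis proposes, which reuse and reweight promising particles instead of sampling blindly from the prior.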
Abstract:
This study aims to open up the black box of the boardroom by directly observing directors' interactions during meetings to better understand board processes. Design/methodology/approach: We analyse videotaped observations of board meetings at two Australian companies to develop insights into what directors do in meetings and how they participate in decision-making processes. The direct observations are triangulated with semi-structured interviews, mini-surveys and document reviews. Findings: Our analyses lead to two key findings: (i) while board meetings appear similar at a surface level, boardroom interactions vary significantly at a deeper level (i.e. board members participate differently during different stages of discussions), and (ii) factors at multiple levels of analysis explain differences in interaction patterns, revealing the complex and nested nature of boardroom discussions. Research implications: By documenting significant intra- and inter-board meeting differences, our study (i) challenges the widespread notion of board meetings as rather homogeneous and monolithic, (ii) points towards agenda items as a new unit of analysis, and (iii) highlights the need for more multi-level analyses in a board setting. Practical implications: While policy makers have been largely occupied with the "right" board composition, our findings suggest that decision outcomes and the execution of board roles could be affected by interactions at the board level. Differences in board meeting styles might explain prior ambiguous board structure-performance results, reinforcing the need for greater normative consideration of how boards do their work. Originality/value: Our study complements existing research on boardroom dynamics and provides a systematic account of director interactions during board meetings.
Abstract:
Classical architecture has a long history of representing the idealized proportions of the human body, derived from the Vitruvian man. This association with the idealized human form has also made architecture symbiotic with prevailing power structures. Because architecture is always loaded with some signification, it creates a highly inscribed space. In the absence of architecture, space is not necessarily without inscription, for within the void there can exist an anti-architecture. Like the black box theatre, it is both empty and full at the same time; in the absence of architecture, the void of space and how it is occupied becomes much more profound. As Dorita Hannah writes, 'In denying a purely visual apprehension of built space, and suggesting a profound interiority, the black-box posits a new way of regarding the body in space.' This paper analyses the work of Harold Pinter and his use of the body to create an anti-architecture that subverts oppressors and power structures. Pinter's works are an important case study in this research due to their political nature. His works are also heavily tied to territory, which binds them in a dependent relationship with a simulated 'place'. The citation accompanying the playwright's Nobel Prize states that '...in his plays [he] uncovers the precipice under everyday prattle and forces entry into oppression's closed rooms.' In Pinter's work, oppression manifests itself in the representation of a room, the architecture, which is the cause of a power struggle when objectified and is defeated when subjectified. The following work examines how Pinter uses the body to subjectify and represent architecture as authority, first in his earlier works, which relied on detailed mimetic sets of domestic rooms, and then in his later political works, which were freed of representational scenography. This paper also looks at the adaptation of Pinter's work by the Belarus Free Theatre in their 2008 production of 'Being Harold Pinter'. The work of Pinter and the Belarus Free Theatre is concerned with authoritarian political structures, that is, political structures that work against ideas of individualism, ascribing to a mass-produced body as an artifact of dictatorship and conservatism. The focus on the body in space on an empty stage draws attention to the individual; the body amongst scenography can become merely another prop, lost in the borders and boundaries the scenery dictates. Through an analysis of selected works by Harold Pinter and their interpretations, this paper examines this paradox of emptiness and fullness through the body as anti-architecture in performance.
Abstract:
We study the natural problem of secure n-party computation (in the computationally unbounded attack model) of circuits over an arbitrary finite non-Abelian group (G,⋅), which we call G-circuits. Besides its intrinsic interest, this problem is also motivated by a completeness result of Barrington, stating that such protocols can be applied for general secure computation of arbitrary functions. For flexibility, we are interested in protocols which only require black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our investigations focus on the passive adversarial model, where up to t of the n participating parties are corrupted.
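As a minimal illustration of what "black-box access" to a group means, the sketch below exposes only the three operations the abstract lists (composition, inversion, uniform sampling) and uses them for a standard n-out-of-n multiplicative secret sharing over a non-Abelian group. The symmetric group S_5 stands in for G, and this sharing is a generic building block, not the specific protocol of the paper.

```python
# Black-box group access: only composition, inversion and uniform sampling are
# used. S_5 (permutations of 5 elements) stands in for the non-Abelian group G.
import random
from functools import reduce

N = 5  # permutations of {0, ..., 4}, i.e. elements of S_5

def op(a, b):
    """Group operation: composition of permutations (apply b, then a)."""
    return tuple(a[b[i]] for i in range(N))

def inv(a):
    """Group inverse of a permutation."""
    out = [0] * N
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def sample():
    """Uniformly random group element."""
    p = list(range(N))
    random.shuffle(p)
    return tuple(p)

def share(secret, n):
    """n-out-of-n sharing: secret = s_1 * ... * s_n with s_1..s_{n-1} uniform."""
    shares = [sample() for _ in range(n - 1)]
    prefix = reduce(op, shares)               # s_1 * ... * s_{n-1}
    shares.append(op(inv(prefix), secret))    # s_n = (s_1 ... s_{n-1})^{-1} * secret
    return shares

def reconstruct(shares):
    return reduce(op, shares)

secret = sample()
shares = share(secret, n=4)
print("secret reconstructed from all 4 shares:", reconstruct(shares) == secret)
```

Because every step goes through `op`, `inv` and `sample` alone, the same code works unchanged for any finite group supplied behind that interface, which is precisely the flexibility the black-box requirement is meant to capture.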