953 results for What-if Analysis
Abstract:
University of Buffalo New York Department of Art Gallery. The ancient philosopher Protagoras is most famous for his claim, “Of all things the measure is Man,” and today Western societies continue to promote anthropocentrism, an approach to the world that assumes humans are the principal species on the planet. We naturalize a scale of worth in which beings that most resemble our own forms, or that benefit us, are valued over those that do not. The philosophy of humanism has been trumpeted as the hallmark of a civilized society, founded on the unquestioned value of humankind and defining not only our economic, political, religious, and social systems but also our ethical code. However, artists have recently questioned whether humanism has actually lived up to its promises and made the world a better place for humankind. Are we better off privileging humans above all else, or could there be other, preferable ways to value life? With the continued prevalence of violent crime, even genocide, in the twentieth and twenty-first centuries, we see the ways in which the discourse of humanism falters, as groups are targeted through rhetoric that reduces them to the subhuman, and therefore disposable. But what if the subhuman, the nonhuman, and even the non-animal and the material were reconsidered as objects of worth, even if far removed from us?
Abstract:
What if the architectural process of making could incorporate time? All designers who impact the physical environment, consciously and unconsciously, are gatekeepers of the past, commentators on the present, and speculators about the future. This project proposes the creation of architecture and adaptive public space that looks to historical memories, fosters present-day cultural formation, and offers new alternative visions for the city of the future. The thesis asks what it means to design for stasis and change at a variety of scales (urban, architectural, and detail) and arrives at a speculated new neighborhood, institutional buildings, and landscape. Central to this project is the idea of the architect as archeologist, anthropologist, and artist. The project focuses on a rapidly changing part of the city of Fort Worth, Texas and assigns multipurpose institutional buildings and public space as a method of investigation. The thesis hopes to further architectural discourse on the role of architecture in the preservation of memory, the adaptive potential of public spaces, and the role of time in architecture.
Abstract:
The U.S. railroad companies spend billions of dollars every year on railroad track maintenance in order to ensure safety and operational efficiency of their railroad networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on the extensive railroad networks is a very complex problem with major cost implications. Currently, the decision making process for track maintenance planning is largely manual and primarily relies on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to develop solutions to the optimization problems on track maintenance. In this dissertation study, we propose a range of mathematical models and solution algorithms for three network-level scheduling problems on track maintenance: track inspection scheduling problem (TISP), production team scheduling problem (PTSP) and job-to-project clustering problem (JTPCP). TISP involves a set of inspection teams which travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem where thousands of tasks are to be scheduled subject to many difficult side constraints such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework. The proposed model and algorithm have been adopted by a Class I railroad in its decision making process. Real-world case studies show the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses to yield managerial insights. PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, augmented cost function and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show the model solutions are able to satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach is able to bring significant cost savings and operational efficiency improvement. JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem. Various side constraints such as mutual exclusion constraints and rounding constraints were considered. 
The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
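The abstract above names the heuristic only at a high level. The following minimal Python sketch illustrates, under assumed toy data structures (Task, Team) and a simple total-lateness objective, what an incremental scheduling-horizon loop combining a greedy constructive step with a local search might look like; it is an illustration of the general technique, not the dissertation's actual model or algorithm.

```python
# Minimal sketch of an incremental scheduling-horizon heuristic, assuming toy
# Task/Team structures and a total-lateness objective. It illustrates the
# "constructive heuristic + local search per horizon increment" pattern named
# in the abstract, not the dissertation's actual model.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: int
    segment: str          # track segment to inspect
    due_day: float        # latest day implied by the periodicity constraint
    duration: float       # working days required

@dataclass
class Team:
    id: int
    schedule: list = field(default_factory=list)
    busy_until: float = 0.0

def lateness(schedule):
    """Total lateness when a team's tasks run back to back from day 0."""
    t = total = 0.0
    for task in schedule:
        t += task.duration
        total += max(0.0, t - task.due_day)
    return total

def construct(tasks, teams, horizon_end):
    """Greedy step: give the most urgent task to the team that frees up first,
    as long as it finishes inside the current horizon."""
    deferred = []
    for task in sorted(tasks, key=lambda x: x.due_day):
        team = min(teams, key=lambda tm: tm.busy_until)
        finish = team.busy_until + task.duration
        if finish <= horizon_end:
            team.schedule.append(task)
            team.busy_until = finish
        else:
            deferred.append(task)
    return deferred

def local_search(team):
    """Adjacent-swap improvement of one team's sequence (toy improvement move)."""
    improved = True
    while improved:
        improved = False
        s = team.schedule
        for i in range(len(s) - 1):
            candidate = s[:i] + [s[i + 1], s[i]] + s[i + 2:]
            if lateness(candidate) < lateness(s):
                team.schedule = candidate
                improved = True
                break

def schedule_incrementally(tasks, teams, increment=30.0, steps=3):
    """Extend the horizon step by step: construct, then locally improve."""
    remaining = list(tasks)
    for k in range(1, steps + 1):
        remaining = construct(remaining, teams, horizon_end=k * increment)
        for team in teams:
            local_search(team)
    return remaining      # tasks that did not fit inside the planning horizon

teams = [Team(id=i) for i in range(2)]
tasks = [Task(id=j, segment=f"S{j}", due_day=10.0 + 5 * j, duration=3.0) for j in range(12)]
leftover = schedule_incrementally(tasks, teams)
print(len(leftover), [lateness(t.schedule) for t in teams])
```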
Abstract:
Due to their unique physicochemical properties, including superparamagnetism, iron oxide nanoparticles (ION) have a number of interesting applications, especially in the biomedical field, that make them one of the most fascinating nanomaterials. They are used as contrast agents for magnetic resonance imaging, in targeted drug delivery, and for induced-hyperthermia cancer treatments. Alongside these valuable uses, concerns regarding the onset of unexpected adverse health effects following exposure have also been raised. Nevertheless, despite the numerous ION applications being explored, currently available information on their potential toxicity is still scarce and the reported data are controversial. Although ION have traditionally been considered biocompatible - mainly on the basis of viability test results - the influence of nanoparticle surface coating, size, and dose, and of other experimental factors such as treatment time or cell type, has been shown to be important for the manifestation of ION toxicity in vitro. In vivo studies have shown distribution of ION to different tissues and organs, including the brain after crossing the blood-brain barrier; nevertheless, results from acute toxicity, genotoxicity, immunotoxicity, neurotoxicity, and reproductive toxicity investigations in different animal models do not yet provide a clear overview of ION safety, and epidemiological studies are almost nonexistent. Much work remains to be done to fully understand how these nanomaterials interact with cellular systems and what potential adverse health consequences, if any, can derive from ION exposure.
Abstract:
Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop floor and retail performance. Despite the fact that we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK’s top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested the first version of a retail branch agent-based simulation model, in which we have focused on how to simulate the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop-floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments we have conducted in order to validate our simulation model and to show its potential for answering “what-if” questions in a retail context. We also introduce a novel performance measure which we have created to quantify customers’ satisfaction with service, based on their individual shopping experiences.
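As an illustration of the kind of agent-based what-if experiment described above, the following toy Python sketch simulates a single cashier serving a stream of customer agents and computes an aggregate satisfaction index from each customer's individual shopping experience. All class names, distributions, and the satisfaction formula are assumptions made for illustration; they are not the authors' model.

```python
# Toy agent-based sketch (illustrative assumptions, not the authors' model):
# one cashier serves arriving customer agents; the cashier's training level
# shortens service times, and each customer records a satisfaction score
# based on her own waiting and service experience.
import random

class Staff:
    def __init__(self, training_level):
        self.training_level = training_level      # 0.0 (untrained) .. 1.0 (fully trained)

    def service_time(self):
        # Assumed effect: better-trained cashiers serve faster on average.
        return random.expovariate(1.0 + self.training_level)

class Customer:
    def __init__(self, patience):
        self.patience = patience                  # minutes the customer tolerates
        self.satisfaction = 0.0

    def shop(self, wait, service):
        # Satisfaction decays with time spent beyond the customer's patience.
        self.satisfaction = max(0.0, 1.0 - max(0.0, wait + service - self.patience))

def simulate(n_customers=200, training_level=0.5, seed=1):
    """Single-till simulation returning an aggregate satisfaction index."""
    random.seed(seed)
    cashier = Staff(training_level)
    till_free_at = 0.0
    scores, t = [], 0.0
    for _ in range(n_customers):
        t += random.expovariate(0.8)              # next customer arrival
        wait = max(0.0, till_free_at - t)
        service = cashier.service_time()
        till_free_at = max(till_free_at, t) + service
        customer = Customer(patience=random.uniform(2.0, 6.0))
        customer.shop(wait, service)
        scores.append(customer.satisfaction)
    return sum(scores) / len(scores)

# What-if comparison: low vs. high investment in employee development.
print(simulate(training_level=0.1), simulate(training_level=0.9))
```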
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
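The time-dilation idea that SVEET relies on can be sketched very simply: real hosts perceive time slowed by a time-dilation factor (TDF), so a simulator that runs slower than real time can still appear synchronized with them. The Python sketch below is only a conceptual illustration; SVEET's actual mechanism operates at the OS/virtual-machine level, and the DilatedClock class is hypothetical.

```python
# Conceptual illustration of time dilation (the DilatedClock class is
# hypothetical; SVEET implements dilation inside the OS/VMM, not in Python).
import time

class DilatedClock:
    """Maps wall-clock time to virtual time using a time-dilation factor (TDF)."""

    def __init__(self, tdf):
        self.tdf = tdf                        # tdf = 10: 10 s wall clock == 1 s virtual
        self._start = time.monotonic()

    def virtual_now(self):
        return (time.monotonic() - self._start) / self.tdf

    def sleep_virtual(self, dt):
        # Waiting dt virtual seconds costs tdf * dt wall-clock seconds,
        # giving a slow simulator tdf times more real time per virtual second.
        time.sleep(dt * self.tdf)

clock = DilatedClock(tdf=10)
clock.sleep_virtual(0.1)                      # blocks for ~1 s of wall-clock time
print(round(clock.virtual_now(), 2))          # ~0.1 virtual seconds have elapsed
```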
Abstract:
In the context of the Ukrainian crisis, the question of spheres of influence has returned to political discourse. This may be an awkward subject, but what if we simply deny the existence of such power constellations as spheres of influence? Do spheres of influence exist, or are they relics of history and mere rhetoric? And if they exist, where can we find them? The hypothesis of this article is that, instead of being a tangible reality, spheres of influence are obscure and contested political constructions, which can nevertheless have an impact on political behaviour. To demonstrate this, the article first introduces a few examples of the current use of the concept. Secondly, a few remarks follow concerning the different IR schools of thought and conceptual history as a method. Next, the article turns to a few dictionaries and the empirical material for the present inquiry, which consists of recent textbooks, i.e. the current political science curriculum at one particular university, the University of Tampere, Finland. Further empirical cases deal with the division of Africa, the post-WWII situation, and the Hungarian revolution of 1956.
Abstract:
In today's business landscape, the ability of a company or service provider to steer its innovation in a programmatic way, so as to remain competitive in the market, is of fundamental importance. In many cases this means investing a substantial amount of money in projects that will improve essential aspects of the product or service and that will have a significant impact on the company's digital transformation. The study proposed here concerns, in particular, two approaches that are typically in antithesis to each other precisely because they rest on two different types of data: Big Data and Thick Data. The two approaches are, respectively, Data Science and Design Thinking. In the following chapters, after the Design Thinking and Data Science approaches have been defined, the concept of blending is introduced together with the issues surrounding the intersection of the two innovation methods. To highlight the different aspects of the topic, cases of companies that have integrated the two approaches into their innovation processes, obtaining important results, are also reported. In particular, the author's research work on surveying, classifying, and analyzing the existing literature at the intersection of data-driven and design-thinking-driven innovation is presented. Finally, a business case is reported, conducted at the hospital and healthcare organization of Parma, in which, faced with a problem concerning the relationship between hospital clinicians and community clinicians, an innovative system was designed using Design Thinking. In addition, a critical "what-if" analysis is developed in order to outline a possible scenario in which methods or techniques drawn from the world of Data Science are integrated and applied to the case study in question.
Abstract:
With the advent of Industry 4.0, the use of Internet of Things (IoT) devices is growing continuously. Companies are pushing ever harder toward innovation, introducing new methods capable of renewing existing IoT systems and of creating new ones with state-of-the-art performance. One example of an emerging innovative technique is the use of Digital Twins (DT). These are logical entities able to simulate the real behaviour of a physical IoT device; they can be used in various scenarios: data monitoring, anomaly detection, what-if analysis, or predictive analytics. The integration of these technologies with new innovative paradigms is developing rapidly; one of them is the Web of Things (WoT). The Web of Things is a term used to describe a paradigm that allows real-world objects to be managed through interfaces on the World Wide Web, enabling communication between devices with different hardware and software characteristics. Although it is still a technology under development, the Web of Things is already beginning to be used in many companies today. The goal of this thesis is to define a framework capable of integrating a mechanism for the automatic generation of Digital Twins in a Web of Things context. By combining these technologies, the interoperability advantages of the Web of Things could be exploited to generate a Digital Twin regardless of the hardware and software characteristics of the objects to be replicated.
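To make the automatic-generation idea concrete, the following Python sketch reads a W3C WoT Thing Description (whose properties carry forms with an href) and builds a minimal digital twin that mirrors the Thing's state by reading each property over HTTP. The DigitalTwin class is an illustrative assumption, not the framework developed in the thesis.

```python
# Hypothetical sketch: build a minimal digital twin from a W3C WoT Thing
# Description (JSON). Field names follow the public TD vocabulary; the twin
# class itself is an illustrative assumption, not the thesis framework.
import json
import urllib.request

class DigitalTwin:
    def __init__(self, thing_description: dict):
        self.title = thing_description.get("title", "unnamed-thing")
        self.state = {}
        # Map each TD property to the href of its first form.
        self.property_urls = {
            name: prop["forms"][0]["href"]
            for name, prop in thing_description.get("properties", {}).items()
            if prop.get("forms")
        }

    def sync(self):
        """Refresh the twin's state by reading every property from the device."""
        for name, url in self.property_urls.items():
            with urllib.request.urlopen(url) as resp:
                self.state[name] = json.loads(resp.read())
        return self.state

# Usage: load a TD exposed by a WoT device and replicate its state.
# td = json.load(open("thing-description.json"))
# twin = DigitalTwin(td)
# print(twin.sync())   # e.g. {"temperature": 21.5, "humidity": 40}
```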
Abstract:
This paper analyzes two claims that have been made about the Target2 payment system. The first one is that this system has been used to support unsustainable current account deficits of Southern European countries. The second one is that the large accumulation of Target2 claims by the Bundesbank represents an unacceptable risk for Germany if the eurozone were to break up. We argue that these claims are unfounded. They also lead to unnecessary fears in Germany that make a solution of the eurozone crisis more difficult. Ultimately, this fear increases the risk of a break-up of the eurozone. Or to paraphrase Franklin Roosevelt, what Germany should fear most is simply its own fear.
Abstract:
This paper presents a neuroscientific study of aesthetic judgments of written texts. In an fMRI experiment, participants read a number of proverbs without explicitly evaluating them. In a post-scan session they rated each item for familiarity and beauty. These individual ratings were correlated with the functional data to investigate the neural correlates of implicit aesthetic judgments. We identified clusters in which BOLD activity was correlated with individual post-scan beauty ratings, indicating that some spontaneous aesthetic evaluation takes place during reading even when it is not required by the task. Positive correlations were found in the ventral striatum and in medial prefrontal cortex, likely reflecting the rewarding nature of sentences that are aesthetically pleasing. In contrast, negative correlations were observed in the classic left frontotemporal reading network. Midline structures and bilateral temporo-parietal regions correlated positively with familiarity, suggesting a shift from the task network towards the default network with increasing familiarity.
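As a toy illustration of the correlation analysis described above (not the study's actual fMRI pipeline, and using simulated numbers rather than real data), the following Python snippet correlates per-item BOLD responses from a hypothetical voxel cluster with post-scan beauty ratings.

```python
# Toy illustration only (simulated numbers, not the study's data or pipeline):
# correlate per-item BOLD responses from one cluster with beauty ratings.
import numpy as np

rng = np.random.default_rng(0)
beauty_ratings = rng.integers(1, 8, size=60)                     # 1-7 rating per proverb
bold_response = 0.3 * beauty_ratings + rng.normal(0.0, 1.0, 60)  # simulated cluster betas

r = np.corrcoef(beauty_ratings, bold_response)[0, 1]
print(f"rating-BOLD correlation: r = {r:.2f}")                   # positive => reward-like coding
```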
Abstract:
Although physician recommendation has been significantly associated with colorectal cancer screening (CRCS), it still does not motivate all patients to get CRCS. Although improved physician recommendation for CRCS has been shown to increase patient CRCS screening, questions remain about what elements of that discussion may lead to screening. The objective of this study is to describe patients' perceptions and interpretations about their physician's recommendation for CRCS during their annual wellness exam. A subset of patients (n=51) participating in a supplement study of a behavioral intervention trial designed to increase CRCS completed a follow-up, open-ended interview two to four weeks after their annual wellness visit. Using qualitative methods, transcripts of these interviews were analyzed. Findings suggest that most patients would follow their physician's recommendation for CRCS despite not engaging in much discussion. Patients may refrain from CRCS discussion because of a commitment to CRCS, awareness of screening guidelines, and trust in physician's honesty and beneficence. Yet many patients left their wellness exams with questions, refraining because of future plans to consult with their physicians, perceived time constraints or a lack of a patient-physician relationship. If patients are leaving their wellness exams with unanswered questions, interventions should prepare physicians for patient reticence, teaching physicians how to assure patients that CRCS is a primary care activity where all questions and concerns, including cost and scheduling, may be resolved.
Abstract:
This research examined the factors contributing to the performance of online grocers prior to, and following, the 2000 dot.com collapse. The primary goals were to assess the relationship between a company’s business model(s) and its performance in the online grocery channel and to determine if there were other company and/or market related factors that could account for company performance. To assess the primary goals, a case based theory building process was utilized. A three-way cross-case analysis comprising Peapod, GroceryWorks, and Tesco examined the common profit components, the structural category (e.g., pure-play, partnership, and hybrid) profit components, and the idiosyncratic profit components related to each specific company. Based on the analysis, it was determined that online grocery store business models could be represented at three distinct but hierarchically related levels. The first level was termed the core model and represented the basic profit structure that all online grocers needed in order to conduct operations. The next model level was termed the structural model and represented the profit structure associated with the specific business model configuration (i.e., pure-play, partnership, hybrid). The last model level was termed the augmented model and represented the company’s business model when idiosyncratic profit components were included. In relation to the five company related factors, scalability, rate of expansion, and the automation level were potential candidates for helping to explain online grocer performance. In addition, all the market structure related factors were deemed possible candidates for helping to explain online grocer performance. The study concluded by positing an alternative hypothesis concerning the performance of online grocers. Prior to this study, the prevailing wisdom was that the business models were the primary cause of online grocer performance. However, based on the core model analysis, it was hypothesized that the customer relationship activities (i.e., advertising, promotions, and loyalty program tie-ins) were the real drivers of online grocer performance.