Resumo:
Value chain collaboration has been a prevailing topic for research, and there is a constantly growing interest in developing collaborative models for improved efficiency in logistics. One area of collaboration is demand information management, which enables improved visibility and reduced inventories in the value chain. Outsourcing of non-core competencies has changed the nature of collaboration from an intra-enterprise to a cross-enterprise activity, and this, together with increasing competition in globalizing markets, has created a need for methods and tools for collaborative work. The retailer part of the value chain of consumer packaged goods (CPG) has been studied relatively widely, proven models have been defined, and several best-practice collaboration cases exist. Information and communications technology has developed rapidly, offering efficient solutions and applications for exchanging information between value chain partners. However, the majority of the CPG industry still works with traditional business models and practices. This concerns especially companies operating in the upstream of the CPG value chain. Demand information for consumer packaged goods originates at retailers' counters, based on consumers' buying decisions. As this information does not get transferred along the value chain towards the upstream parties, each player needs to optimize its own part, which leads to safety margins for inventories and speculation in purchasing decisions. The safety margins increase with each player, resulting in a phenomenon known as the bullwhip effect. The further a company is from the original source of demand information, the more distorted the information is. This thesis concentrates on the upstream parts of the value chain of consumer packaged goods, and more precisely on the packaging value chain. Packaging is becoming a part of the product, with informative and interactive features, and is therefore not just a cost item needed to protect the product. The upstream part of the CPG value chain is distinctive in that the product changes after each involved party, and therefore the original demand information from the retailers cannot be utilized as such – even if it were transferred seamlessly. The objective of this thesis is to examine the main drivers for collaboration, and the barriers causing the moderate adoption level of collaborative models. Another objective is to define a collaborative demand information management model and test it in a pilot business situation in order to see whether the barriers can be eliminated. The empirical part of this thesis contains three parts, all related to the research objective but involving different target groups, viewpoints and research approaches. The study shows evidence that the main barriers to collaboration are very similar to the barriers in the lower part of the same value chain: lack of trust, lack of a business case, and lack of senior management commitment. Eliminating one of them – the lack of a business case – is not enough to eliminate the other two barriers, as the operational model in this thesis shows. The uncertainty of the future, fear of losing an independent position in purchasing decision making, and lack of commitment remain barriers strong enough to prevent the implementation of the proposed collaborative business model.
The study proposes a new way of defining the value chain processes: it divides the contracting and planning process into two processes, one managing the commercial parts and the other managing quantity- and specification-related issues. This model can reduce the resistance to collaboration, as the commercial part of the contracting process would remain the same as in the traditional model. The quantity- and specification-related issues would be managed by the parties with the best capabilities and resources, as well as access to the original demand information. The parties in between would be involved in the planning process as well, as their impact on the next party upstream is significant. The study also highlights the future challenges for companies operating in the CPG value chain. The markets are becoming global, with toughening competition. In addition, technology development will most likely continue at a speed exceeding the adaptation capabilities of the industry. Value chains are also becoming increasingly dynamic, which means shorter and more agile business relationships, and at the same time the predictability of consumer demand is getting more difficult due to shorter product life cycles and trends. These changes will certainly have an effect on companies' operational models, but it is very difficult to estimate when and how the proven methods will gain wide enough adoption to become standards.
Resumo:
In my bachelor's thesis I studied corporate social responsibility communication in the 2004 annual reports of Finnish listed companies. The study showed that the level of communication varies considerably between companies. The most advanced corporate responsibility communicators used the three-pillar model to categorize their activities. Clear industry-specific differences were also observable. In this study I continue with the same topic by examining how the corporate social responsibility communication of companies listed on the Helsinki Stock Exchange has changed when the annual reports of 2004 and 2008 are compared. The research method is qualitative. Using discourse analysis, I examine how companies communicate about their responsibility. The study shows that corporate social responsibility communication is still not in every company's interest. About two out of three companies communicate something related to corporate social responsibility; this share has decreased slightly since 2004. Of these companies, in turn, about two out of three manage their corporate responsibility activities in a planned and goal-oriented way, which is reflected in high-quality corporate responsibility communication. In the annual reports, the recession was visible above all as increased reporting on economic responsibility.
Resumo:
This work is devoted to the development of a numerical method to deal with convection-diffusion dominated problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian-Lagrangian schemes (the particle transport method) under the framework of the operator splitting method. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion and reaction terms are solved separately by assuming that each phenomenon occurs separately in a sequential fashion. Moreover, adaptivity and projection techniques are used to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points, respectively. The numerical results show that the particle transport method improves the solutions of CDR problems. Nevertheless, the method is time consuming when compared with other classical techniques, e.g., the method of lines. Apart from this drawback, the particle transport method can be used to simulate problems that involve moving steep/smooth fronts, such as the separation of two or more elements in the system.
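To make the splitting idea concrete, the following is a minimal, illustrative Python sketch of first-order (Lie) operator splitting for a one-dimensional convection-diffusion-reaction equation u_t + a u_x = D u_xx + R(u). It uses a plain grid with a semi-Lagrangian shift for convection instead of the thesis's adaptive particle set, so it only shows the sequential convection, diffusion and reaction structure, not the particle/projection machinery; all parameter values and the reaction term are arbitrary assumptions.

```python
import numpy as np

# Illustrative Lie (sequential) operator splitting for
#   u_t + a u_x = D u_xx + R(u)   on a periodic 1-D grid.
# Parameters are placeholders; the thesis itself advects a particle set
# along characteristics rather than shifting grid values.

nx, L = 200, 1.0
dx = L / nx
x = np.arange(nx) * dx
a, D = 1.0, 1e-3                        # convective velocity, diffusivity
dt = 0.4 * dx / a                       # CFL-limited step for the advection part
reaction = lambda u: -5.0 * u * (1.0 - u)   # sample (non-stiff) reaction term

u = np.exp(-200 * (x - 0.3) ** 2)       # initial steep-ish front

def convect(u, a, dt, dx):
    """Semi-Lagrangian step: follow characteristics back to x - a*dt and interpolate."""
    departure = (x - a * dt) % L
    return np.interp(departure, x, u, period=L)

def diffuse(u, D, dt, dx):
    """Explicit central-difference diffusion step."""
    return u + D * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))

def react(u, dt):
    """Forward-Euler reaction step (a stiff ODE solver would go here instead)."""
    return u + dt * reaction(u)

for _ in range(int(0.5 / dt)):          # advance to t = 0.5, one phenomenon at a time
    u = convect(u, a, dt, dx)
    u = diffuse(u, D, dt, dx)
    u = react(u, dt)

print("approximate front position:", x[np.argmax(u)])
```

In a particle-based variant, the convection-reaction step would instead move particles along the same characteristics and the projection step would map the particle values back onto the grid.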
Resumo:
The Travel and Tourism field is undergoing changes due to the rapid development of information technology and digital services. Online travel has profoundly changed the way travel and tourism organizations interact with their customers. Mobile technology such as mobile services for pocket devices (e.g. mobile phones) has the potential to take this development even further. Nevertheless, many issues have been highlighted since the early days of mobile services development (e.g. the lack of relevance, ease of use of many services). However, the wide adoption of smartphones and the mobile Internet in many countries as well as the formation of so-called ecosystems between vendors of mobile technology indicate that many of these issues have been overcome. Also when looking at the numbers of downloaded applications related to travel in application stores like Google Play, it seems obvious that mobile travel and tourism services are adopted and used by many individuals. However, as business is expected to start booming in the mobile era, many issues have a tendency to be overlooked. Travelers are generally on the go and thus services that work effectively in mobile settings (e.g. during a trip) are essential. Hence, the individuals’ perceived drivers and barriers to use mobile travel and tourism services in on-site or during trip settings seem particularly valuable to understand; thus this is one primary aim of the thesis. We are, however, also interested in understanding different types of mobile travel service users. Individuals may indeed be very different in their propensity to adopt and use technology based innovations (services). Research is also switching more from investigating issues of mobile service development to understanding individuals’ usage patterns of mobile services. But designing new mobile services may be a complex matter from a service provider perspective. Hence, our secondary aim is to provide insights into drivers and barriers of mobile travel and tourism service development from a holistic business model perspective. To accomplish the research objectives seven different studies have been conducted over a time period from 2002 – 2013. The studies are founded on and contribute to theories within diffusion of innovations, technology acceptance, value creation, user experience and business model development. Several different research methods are utilized: surveys, field and laboratory experiments and action research. The findings suggest that a successful mobile travel and tourism service is a service which supports one or several mobile motives (needs) of individuals such as spontaneous needs, time-critical arrangements, efficiency ambitions, mobility related needs (location features) and entertainment needs. The service could be customized to support travelers’ style of traveling (e.g. organized travel or independent travel) and should be easy to use, especially easy to take into use (access, install and learn) during a trip, without causing security concerns and/or financial risks for the user. In fact, the findings suggest that the most prominent barrier to the use of mobile travel and tourism services during a trip is an individual’s perceived financial cost (entry costs and usage costs). It should, however, be noted that regulations are put in place in the EU regarding data roaming prices between European countries and national telecom operators are starting to see ‘international data subscriptions’ as a sales advantage (e.g. 
Finnish Sonera provides a data subscription in the Baltic and Nordic region at the same price as in Finland), which will enhance the adoption of mobile travel and tourism services also in international contexts. In order to speed up the adoption rate, travel service providers could consider e.g. more local initiatives of free Wi-Fi networks, development of services that can be used, at least to some extent, in an offline mode (i.e. that do not require costly network access during a trip) and cooperation with telecom operators (e.g. lower usage costs for travelers who use specific mobile services or travel with specific vendors). Furthermore, based on a developed framework for the user experience of mobile trip arrangements, the results show that a well-designed mobile site and/or native application, which preferably supports integration with other mobile services, is a must for true mobile presence. In fact, travel service providers who want to build a relationship with their customers need to consider a downloadable native application, but in order to be found through the mobile channel and make contact with potential new customers, a mobile website should be available. Moreover, we have made a first attempt with cluster analysis to identify user categories of mobile services in a travel and tourism context. The following four categories were identified: info-seekers, checkers, bookers and all-rounders. For example, the “all-rounders”, represented primarily by individuals who use their pocket device for almost any of the investigated mobile travel services, consisted mainly of 23- to 50-year-old males with a high travel frequency and great online experience. The results also indicate that travel service providers will increasingly become multi-channel providers. To manage multiple online channels, closely integrated and hybrid online platforms for different devices, supporting all steps in the traveler process, should be considered. It could be useful for travel service providers to focus more on developing browser-based mobile services (HTML5 solutions) than native applications that work only with specific operating systems and on specific devices. Based on an action research study and utilizing a holistic business model framework called STOF, we found that HTML5, as an emerging platform, at least for now has some limitations regarding the development of the user experience and monetizing the application. In fact, a native application store (e.g. Google Play) may be a key mediator in the adoption of mobile travel and tourism services, both from a traveler and a service provider perspective. Moreover, it must be remembered that many device and mobile operating system developers want service providers to create services specifically for their platforms and see native applications as a strategic advantage to sell more devices of a certain kind. The mobile telecom industry has moved into a battle of ecosystems where device makers, developers of operating systems and service developers are to some extent forced to choose their development platforms.
Resumo:
One of the main problems related to the transport and manipulation of multiphase fluids concerns the existence of characteristic flow patterns and their strong influence on important operation parameters. A good example of this occurs in gas-liquid chemical reactors, in which maximum efficiencies can be achieved by maintaining a finely dispersed bubbly flow to maximize the total interfacial area. Thus, the ability to automatically detect flow patterns is of crucial importance, especially for the adequate operation of multiphase systems. This work describes the application of a neural model to process the signals delivered by a direct imaging probe and produce a diagnostic of the corresponding flow pattern. The neural model consists of six independent neural modules, each trained to detect one of the main horizontal flow patterns, and a final winner-take-all layer responsible for resolving cases in which two or more patterns are detected simultaneously. Experimental signals representing different bubbly, intermittent, annular and stratified flow patterns were used to validate the neural model.
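As an illustration of this kind of architecture (not the authors' actual network), here is a minimal NumPy sketch in which six independently trained binary detectors each score one flow pattern and a winner-take-all layer resolves conflicts by keeping only the strongest activation; the pattern names, weights and probe-signal feature vector are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
PATTERNS = ["dispersed bubbly", "bubbly", "plug", "slug", "annular", "stratified"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PatternModule:
    """One independent detector: a tiny logistic unit scoring a single flow pattern.
    In practice each module would be trained separately on labeled probe signals."""
    def __init__(self, n_features):
        self.w = rng.normal(size=n_features)   # placeholder weights
        self.b = 0.0

    def score(self, x):
        return sigmoid(self.w @ x + self.b)

def winner_take_all(scores):
    """Final layer: when several modules fire, keep only the strongest one."""
    out = np.zeros_like(scores)
    out[np.argmax(scores)] = 1.0
    return out

modules = [PatternModule(n_features=16) for _ in PATTERNS]

x = rng.normal(size=16)                        # placeholder feature vector from the imaging probe
scores = np.array([m.score(x) for m in modules])
decision = winner_take_all(scores)
print("diagnosed pattern:", PATTERNS[int(decision.argmax())])
```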
Resumo:
This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid actions, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools. Techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory-scale unit and a 1.7 MWth pilot-scale unit, and it was used to design a conceptual 250 MWth industrial-scale unit. Valuable information was gathered on the behaviour of the small-scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial-sized unit, the selection of particle size and operability in different load scenarios.
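For readers unfamiliar with how such semi-empirical balance models are assembled, the sketch below integrates a deliberately simplified, lumped carbonator CO2 mass balance in Python: sorbent conversion follows an assumed first-order approach to a maximum carrying capacity, and the CO2 captured from the flue gas follows from the calcium circulation rate. The kinetic constant, carrying capacity and flow values are invented for illustration and do not come from the thesis.

```python
import numpy as np

# Lumped, illustrative carbonator CO2 balance:
#   CaO + CO2 -> CaCO3, with sorbent conversion X approaching a maximum
#   carrying capacity X_max at an assumed first-order rate k_s.
# All numbers are placeholders, not values from the thesis.

k_s   = 0.3        # 1/s, assumed apparent carbonation rate constant
X_max = 0.25       # -, assumed average sorbent carrying capacity
tau   = 120.0      # s, assumed mean residence time of solids in the carbonator
F_Ca  = 40.0       # mol/s, assumed molar circulation rate of CaO
F_CO2 = 12.0       # mol/s, assumed molar flow of CO2 entering with the flue gas

dt, t_end = 0.5, 600.0
t = np.arange(0.0, t_end, dt)
X = np.zeros_like(t)           # average conversion of the circulating sorbent

for i in range(1, len(t)):
    # first-order approach to X_max, with fresh/regenerated sorbent (X = 0)
    # replacing reacted sorbent at a rate 1/tau
    dX = k_s * (X_max - X[i-1]) - X[i-1] / tau
    X[i] = X[i-1] + dt * dX

co2_captured = F_Ca * X[-1]                    # mol/s of CO2 bound as CaCO3
efficiency = min(co2_captured / F_CO2, 1.0)    # capture efficiency, capped at 1
print(f"steady-state sorbent conversion: {X[-1]:.3f}")
print(f"approximate CO2 capture efficiency: {efficiency:.2f}")
```

A full one-dimensional model replaces this single balance with discretized mass and energy balances along the riser height, coupled to hydrodynamic and heat transfer submodels.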
Resumo:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. 
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
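To illustrate the execution model described above (actors communicating only through FIFO queues and firing when their firing rule is satisfied), here is a small, self-contained Python sketch with a naive round-robin dynamic scheduler. It is neither RVC-CAL nor the quasi-static scheduler developed in the thesis; it only shows the kind of per-firing rule evaluation that quasi-static scheduling tries to pre-compute away.

```python
from collections import deque

class Queue:
    """FIFO edge of the dataflow graph: the only way actors may communicate."""
    def __init__(self):
        self.tokens = deque()
    def __len__(self):
        return len(self.tokens)
    def push(self, v):
        self.tokens.append(v)
    def pop(self):
        return self.tokens.popleft()

class Actor:
    """A node: fires only when its firing rule (enough input tokens) holds."""
    def __init__(self, name, inputs, outputs, needed, action):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.needed = needed          # tokens required on each input to fire
        self.action = action          # consumes one token set, returns output tokens

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.needed))

    def fire(self):
        args = [q.pop() for q in self.inputs]
        for q, v in zip(self.outputs, self.action(*args)):
            q.push(v)

def run_round_robin(actors, max_passes=100):
    """Naive dynamic scheduler: evaluate every firing rule on every pass."""
    for _ in range(max_passes):
        fired = False
        for a in actors:
            if a.can_fire():
                a.fire()
                fired = True
        if not fired:          # no actor enabled: the network has stalled or finished
            break

# Tiny example network: pre-loaded source edge -> scale -> sink
src_out, scale_out = Queue(), Queue()
for v in range(5):
    src_out.push(v)

scale = Actor("scale", [src_out], [scale_out], [1], lambda x: (2 * x,))
sink  = Actor("sink",  [scale_out], [], [1], lambda x: print("sink got", x) or ())

run_round_robin([scale, sink])
```

A quasi-static scheduler would analyse such a network ahead of time and replace most of the `can_fire` checks with fixed firing sequences, leaving only the genuinely data-dependent decisions to run-time.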
Resumo:
A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.
Resumo:
Serine proteases are involved in vital processes in virtually all species. They are important targets for researchers studying the relationships between protein structure and activity, and for the rational design of new pharmaceuticals. Trypsin was used as a model to assess a possible differential contribution of hydration water to the binding of two synthetic inhibitors. Thermodynamic parameters for the association of bovine β-trypsin (homogeneous material, observed 23,294.4 ± 0.2 Da, theoretical 23,292.5 Da) with the inhibitors benzamidine and berenil at pH 8.0, 25°C and with 25 mM CaCl2 were determined using isothermal titration calorimetry and the osmotic stress method. The association constant for berenil was about 12 times higher than that for benzamidine (binding constants K = 596,599 ± 25,057 and 49,513 ± 2,732 M⁻¹, respectively; the number of binding sites is the same for both ligands, N = 0.99 ± 0.05). Apparently the driving force responsible for this large difference in affinity is not due to hydrophobic interactions, because the variation in heat capacity (ΔCp), a characteristic signature of these interactions, was similar in both systems tested (-464.7 ± 23.9 and -477.1 ± 86.8 J K⁻¹ mol⁻¹ for berenil and benzamidine, respectively). The results also indicated that the enzyme has a net gain of about 21 water molecules regardless of the inhibitor tested. Based on computational modeling, it was shown that the difference in affinity could be due to a larger number of interactions between berenil and the enzyme. The data support the view that pharmaceuticals derived from benzamidine that enable hydrogen bond formation outside the catalytic binding pocket of β-trypsin may result in more effective inhibitors.
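As a quick way to put the roughly 12-fold difference in binding constants into energetic terms, the hedged sketch below converts the reported association constants into standard binding free energies via ΔG° = −RT ln K. The constants are taken from the abstract, while the temperature (25°C) and the interpretation as a simple two-state association are assumptions of the illustration.

```python
import math

R = 8.314          # J K^-1 mol^-1
T = 298.15         # K (25 °C, as stated in the abstract)

K_berenil     = 596_599.0   # M^-1, reported association constant
K_benzamidine = 49_513.0    # M^-1

def dG(K):
    """Standard binding free energy, Delta G = -RT ln K, in kJ/mol."""
    return -R * T * math.log(K) / 1000.0

dG_ber, dG_ben = dG(K_berenil), dG(K_benzamidine)
print(f"Delta G (berenil):     {dG_ber:6.1f} kJ/mol")
print(f"Delta G (benzamidine): {dG_ben:6.1f} kJ/mol")
print(f"difference:            {dG_ber - dG_ben:6.1f} kJ/mol "
      f"(~RT ln {K_berenil / K_benzamidine:.0f})")
```

The roughly 6 kJ/mol gap is of the order of one or two additional hydrogen bonds, which is consistent with the computational-modeling interpretation given above.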
Resumo:
The aim of the present study was to determine the ventilation/perfusion ratio that contributes to hypoxemia in pulmonary embolism by analyzing blood gases and volumetric capnography in a model of experimental acute pulmonary embolism. Pulmonary embolization with autologous blood clots was induced in seven pigs weighing 24.00 ± 0.6 kg, anesthetized and mechanically ventilated. Significant changes occurred from baseline to 20 min after embolization, such as reduction in oxygen partial pressures in arterial blood (from 87.71 ± 8.64 to 39.14 ± 6.77 mmHg) and alveolar air (from 92.97 ± 2.14 to 63.91 ± 8.27 mmHg). The effective alveolar ventilation exhibited a significant reduction (from 199.62 ± 42.01 to 84.34 ± 44.13), consistent with the fall in the alveolar gas volume that effectively participated in gas exchange. The relation between the alveolar ventilation that effectively participated in gas exchange and cardiac output (VAeff/Q ratio) also presented a significant reduction after embolization (from 0.96 ± 0.34 to 0.33 ± 0.17 fraction). The carbon dioxide partial pressure increased significantly in arterial blood (from 37.51 ± 1.71 to 60.76 ± 6.62 mmHg), but decreased significantly in exhaled air at the end of the respiratory cycle (from 35.57 ± 1.22 to 23.15 ± 8.24 mmHg). Exhaled air at the end of the respiratory cycle returned to baseline values 40 min after embolism. The arterial to alveolar carbon dioxide gradient increased significantly (from 1.94 ± 1.36 to 37.61 ± 12.79 mmHg), as did the calculated alveolar (from 56.38 ± 22.47 to 178.09 ± 37.46 mL) and physiological (from 0.37 ± 0.05 to 0.75 ± 0.10 fraction) dead spaces. Based on our data, we conclude that the severe arterial hypoxemia observed in this experimental model may be attributed to the reduction of the VAeff/Q ratio. We were also able to demonstrate that VAeff/Q progressively improves after embolization, a fact attributed to the alveolar ventilation redistribution induced by hypocapnic bronchoconstriction.
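The physiological dead-space fraction reported above is the kind of quantity obtained from CO2 data with the Bohr-Enghoff relationship, VD/VT = (PaCO2 − PĒCO2)/PaCO2, where PĒCO2 is the mixed expired CO2 partial pressure derived here from volumetric capnography. The short sketch below only illustrates the formula: the PaCO2 values are the ones reported in the abstract, while the mixed expired values are placeholders chosen so that the formula reproduces the reported dead-space fractions, not measurements from the study.

```python
def bohr_enghoff_dead_space(pa_co2, pe_co2):
    """Physiological dead-space fraction VD/VT = (PaCO2 - PECO2) / PaCO2,
    where PECO2 is the mixed expired CO2 partial pressure obtained from
    volumetric capnography."""
    return (pa_co2 - pe_co2) / pa_co2

# PaCO2: baseline and 20-min post-embolization values from the abstract.
# pe_co2_assumed: illustrative placeholders, NOT data from the study.
examples = {
    "baseline":             {"pa_co2": 37.51, "pe_co2_assumed": 23.6},
    "20 min post-embolism": {"pa_co2": 60.76, "pe_co2_assumed": 15.2},
}

for label, v in examples.items():
    vd_vt = bohr_enghoff_dead_space(v["pa_co2"], v["pe_co2_assumed"])
    print(f"{label}: VD/VT = {vd_vt:.2f}")
```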
Resumo:
Experimental models of sepsis-induced pulmonary alterations are important for the study of pathogenesis and for potential intervention therapies. The objective of the present study was to characterize lung dysfunction (low PaO2 and high PaCO2, and increased cellular infiltration, protein extravasation, and malondialdehyde (MDA) production assessed in bronchoalveolar lavage) in a sepsis model consisting of intraperitoneal (ip) injection of Escherichia coli and the protective effects of pentoxifylline (PTX). Male Wistar rats (weighing between 270 and 350 g) were injected ip with 10⁷ or 10⁹ CFU/100 g body weight or saline and samples were collected 2, 6, 12, and 24 h later (N = 5 each group). PaO2, PaCO2 and pH were measured in blood, and cellular influx, protein extravasation and MDA concentration were measured in bronchoalveolar lavage. In a second set of experiments either PTX or saline was administered 1 h prior to E. coli ip injection (N = 5 each group) and the animals were observed for 6 h. Injection of 10⁷ or 10⁹ CFU/100 g body weight of E. coli induced acidosis, hypoxemia, and hypercapnia. An increased (P < 0.05) cell influx was observed in bronchoalveolar lavage, with a predominance of neutrophils. Total protein and MDA concentrations were also higher (P < 0.05) in the septic groups compared to control. A higher tumor necrosis factor-alpha (P < 0.05) concentration was also found in these animals. Changes in all parameters were more pronounced with the higher bacterial inoculum. PTX administered prior to sepsis reduced (P < 0.05) most functional alterations. These data show that an E. coli ip inoculum is a good model for the induction of lung dysfunction in sepsis, and suitable for studies of therapeutic interventions.
Resumo:
We investigated whether the hepatic artery endothelium may be the earliest site of injury consequent to liver ischemia and reperfusion. Twenty-four heartworm-free mongrel dogs of either sex exposed to liver ischemia/reperfusion in vivo were randomized into four experimental groups (N = 6): a) control, sham-operated dogs, b) dogs subjected to 60 min of ischemia, c) dogs subjected to 30 min of ischemia and 60 min of reperfusion, and d) animals subjected to 45 min of ischemia and 120 min of reperfusion. The nitric oxide endothelium-dependent relaxation of hepatic artery rings contracted with prostaglandin F2α and exposed to increasing concentrations of acetylcholine, calcium ionophore A23187, sodium fluoride, phospholipase C, poly-L-arginine, isoproterenol, and sodium nitroprusside was evaluated in organ-chamber experiments. Lipid peroxidation was estimated by malondialdehyde activity in liver tissue samples and by blood lactic dehydrogenase (LDH), serum aspartate aminotransferase (AST) and serum alanine aminotransferase (ALT) activities. No changes were observed in hepatic artery relaxation for any agonist tested. The group subjected to 45 min of ischemia and 120 min of reperfusion presented marked increases of serum aminotransferases (ALT = 2989 ± 1056 U/L and AST = 1268 ± 371 U/L; P < 0.01), LDH (2887 ± 1213 IU/L; P < 0.01) and malondialdehyde in liver samples (0.360 ± 0.020 nmol/mg PT; P < 0.05). Under the experimental conditions utilized, no abnormal changes in hepatic arterial vasoreactivity were observed: endothelium-dependent and independent hepatic artery vasodilation were not impaired in this canine model of ischemia/reperfusion injury. In contrast to other vital organs, in the ischemia/reperfusion injury environment the endothelium of the main artery is not the first site of reperfusion injury.
Resumo:
Wear particles are phagocytosed by macrophages and other inflammatory cells, resulting in cellular activation and the release of proinflammatory factors, which cause periprosthetic osteolysis and subsequent aseptic loosening, the most common causes of total joint arthroplasty failure. During this pathological process, tumor necrosis factor-alpha (TNF-α) plays an important role in wear-particle-induced osteolysis. In this study, recombinant adenovirus (Ad) vectors carrying both target genes [TNF-α small interfering RNA (TNF-α-siRNA) and bone morphogenetic protein 2 (BMP-2)] were synthesized and transfected into RAW264.7 macrophages and pro-osteoblastic MC3T3-E1 cells, respectively. Expression of the target gene BMP-2 in pro-osteoblastic MC3T3-E1 cells and silencing of TNF-α in cells treated with titanium (Ti) particles were assessed by real-time PCR and Western blot. We showed that the recombinant adenovirus (Ad-siTNFα-BMP-2) can induce osteoblast differentiation, assessed by alkaline phosphatase activity, when cells are treated with conditioned medium (CM) of RAW264.7 macrophages challenged with a combination of Ti particles and Ad-siTNFα-BMP-2 (Ti-ad CM). The receptor activator of nuclear factor-κB ligand was downregulated in pro-osteoblastic MC3T3-E1 cells treated with Ti-ad CM in comparison with conditioned medium of RAW264.7 macrophages challenged with Ti particles alone (Ti CM). We suggest that Ad-siTNFα-BMP-2 induced osteoblast differentiation and inhibited osteoclastogenesis in a cell model of the Ti particle-induced inflammatory response, which may provide a novel approach for the treatment of periprosthetic osteolysis.
Resumo:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices should function as expected but also that the software should be of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves the companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
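To give a concrete (and deliberately simplified) picture of what "generating tests from behavioral models" can mean, the sketch below encodes a tiny behavioral model as a labelled transition system in Python and derives test sequences that cover every transition; the example model, state names and coverage criterion are illustrative assumptions, not the UML-based tool chain developed in the thesis.

```python
from collections import deque

# A toy behavioral model of a media player, written as a labelled
# transition system: state -> {event: next_state}. Purely illustrative.
MODEL = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}
INITIAL = "stopped"

def generate_transition_cover(model, initial):
    """Breadth-first search producing one test sequence (list of events)
    per transition, so every transition of the model is exercised at
    least once (an 'all-transitions' coverage criterion)."""
    # shortest event sequence reaching each state
    reach = {initial: []}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for event, nxt in model[state].items():
            if nxt not in reach:
                reach[nxt] = reach[state] + [event]
                frontier.append(nxt)
    # one test per transition: prefix to the source state, then the event
    tests = []
    for state, edges in model.items():
        for event in edges:
            tests.append(reach[state] + [event])
    return tests

for i, test in enumerate(generate_transition_cover(MODEL, INITIAL), 1):
    print(f"test {i}: {' -> '.join(test)}")
```

In a full model-based testing tool chain, the same idea is applied to richer behavioral models (e.g. UML state machines), the generated sequences are mapped to executable test scripts, and each test is traced back to the requirements it exercises.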