978 results for Digital surface model (DSM)
Abstract:
Shadow Moiré fringe patterns are level lines of equal depth generated by interference between a master grid and its shadow projected on the surface. In a simplistic approach, the minimum error is on the order of the master grid pitch, that is, always larger than 0.1 mm, making this an experimental technique of low precision. The use of phase shifting increases the accuracy of the Shadow Moiré technique. The current work uses the phase shifting method to determine the three-dimensional shape of surfaces, using isothamic fringe patterns and digital image processing. The study presents the method and applies it to images obtained by simulation, for error evaluation, as well as to a buckled plate, obtaining excellent results. The method proves particularly useful for reducing errors in the interpretation of Moiré fringes, which can adversely affect the calculation of displacements in parts containing many concave and convex regions within relatively small areas.
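As a hedged illustration of the phase-shifting step, the sketch below implements the standard four-step algorithm in Python, assuming quarter-pitch (90 degree) shifts and the usual shadow Moiré depth relation w = Np/(tan α + tan β); it is a minimal sketch, not necessarily the exact variant used in the paper:

    # Minimal sketch of four-step phase shifting (assumed 90-degree steps);
    # not necessarily the exact variant used in the paper.
    import numpy as np

    def wrapped_phase(I1, I2, I3, I4):
        # Wrapped phase from four images shifted by 0, 90, 180, 270 degrees.
        return np.arctan2(I4 - I2, I1 - I3)

    def depth(phase_unwrapped, pitch, tan_alpha, tan_beta):
        # Shadow Moire relation w = N * p / (tan(alpha) + tan(beta)),
        # with the (fractional) fringe order N = phase / (2*pi).
        N = phase_unwrapped / (2.0 * np.pi)
        return N * pitch / (tan_alpha + tan_beta)

    # Synthetic check: three fringes across the field of view.
    x = np.linspace(0.0, 1.0, 256)
    true_phase = 6.0 * np.pi * x
    I1, I2, I3, I4 = (1.0 + np.cos(true_phase + k * np.pi / 2) for k in range(4))
    w = depth(np.unwrap(wrapped_phase(I1, I2, I3, I4)), pitch=0.1,
              tan_alpha=0.5, tan_beta=0.5)   # depth in the same units as pitch

Unwrapping the arctangent output is what removes the fringe-order ambiguity that makes plain fringe counting error-prone in regions with many concave and convex features.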
Abstract:
This work studies the forced convection problem in internal flow between concentric annular ducts with radial fins on the internal tube surface. The finned surface heat transfer is analyzed by two different approaches. In the first, one-dimensional heat conduction is assumed along the internal tube wall and fins, with the convection heat transfer coefficient treated as a known parameter determined by an uncoupled solution. In the second, named the conjugated approach, the mathematical model (continuity, momentum, energy and K-epsilon equations) applied to the tube annulus problem is solved numerically using the finite element technique in a coupled formulation. First, a comparison was made between results obtained for the conjugated problem and experimental data, showing good agreement. Then, the temperature profiles obtained under the two approaches were compared to each other to assess the validity of the one-dimensional classical formulation that has been used in heat exchanger design.
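To make the first, uncoupled approach concrete, the sketch below evaluates the classical one-dimensional efficiency of a straight fin with a prescribed convection coefficient; the geometry and property values are illustrative assumptions, not data from the study:

    # Classical 1-D fin model with a known, uniform h (uncoupled approach).
    # All numbers below are illustrative assumptions.
    import math

    def straight_fin_efficiency(h, k, L, t):
        # Adiabatic-tip straight rectangular fin:
        # eta = tanh(m*Lc) / (m*Lc), m = sqrt(2*h/(k*t)), Lc = L + t/2.
        m = math.sqrt(2.0 * h / (k * t))
        Lc = L + t / 2.0
        return math.tanh(m * Lc) / (m * Lc)

    eta = straight_fin_efficiency(h=500.0,  # W/(m^2 K), assumed known
                                  k=40.0,   # W/(m K), fin conductivity
                                  L=0.01,   # m, fin height
                                  t=0.001)  # m, fin thickness
    print(f"fin efficiency = {eta:.3f}")    # ~0.56 for these values

The conjugated approach drops exactly the two assumptions this formula relies on: a uniform, pre-known h and purely one-dimensional conduction along the fin.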
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters, for example, are typically described with boxes and arrows in textbooks. Dataflow is also becoming more interesting in other domains and, in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language which in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which can produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem, and it is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
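As a minimal sketch of the dataflow model described above (plain Python rather than RVC-CAL; all names are illustrative), the actor below communicates only through FIFO queues and fires only when its firing rule is satisfied:

    # Minimal sketch of a dataflow actor: communication only via FIFO queues,
    # firing only when enough tokens are available. Illustrative, not RVC-CAL.
    from collections import deque

    class Queue:
        def __init__(self):
            self.tokens = deque()

    class Adder:
        # Actor that consumes one token from each input and emits their sum.
        def __init__(self, a, b, out):
            self.a, self.b, self.out = a, b, out

        def can_fire(self):
            # Firing rule: at least one token on each input queue.
            return bool(self.a.tokens and self.b.tokens)

        def fire(self):
            self.out.tokens.append(self.a.tokens.popleft() +
                                   self.b.tokens.popleft())

    # A (quasi-)static schedule is a fixed firing sequence computed offline;
    # a dynamic scheduler instead tests can_fire() on every actor at run time.
    qa, qb, qs = Queue(), Queue(), Queue()
    add = Adder(qa, qb, qs)
    qa.tokens.extend([1, 2]); qb.tokens.extend([10, 20])
    while add.can_fire():
        add.fire()
    print(list(qs.tokens))   # [11, 22]

Because the only shared state is the queues, any two actors whose firing rules are satisfied may fire in parallel, which is the explicit parallelism the thesis exploits.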
Influence of surface functionalization on the behavior of silica nanoparticles in biological systems
Abstract:
Personalized nanomedicine has been shown to provide advantages over traditional clinical imaging, diagnosis, and conventional medical treatment. Nanoparticles can enhance and sharpen clinical targeting and imaging, guiding agents exactly to the place in the body that is the goal of treatment. At the same time, the side effects that usually occur in the parts of the body that are not targets of the treatment can be reduced. Nanoparticles are of a size that can penetrate into cells. Their surface functionalization offers a way to increase their sensitivity when detecting target molecules; in addition, it increases the potential for flexibility in particle design, in their therapeutic function, and in variation possibilities for diagnostics. Mesoporous nanoparticles of amorphous silica have attractive physical and chemical characteristics, such as particle morphology, controllable pore size, and high surface area and pore volume. Additionally, the surface functionalization of silica nanoparticles is relatively straightforward, which enables optimization of the interaction between the particles and the biological system. The main goal of this study was to prepare traceable and targetable silica nanoparticles for medical applications, with a special focus on particle dispersion stability, biocompatibility, and targeting capability. Nanoparticle properties are highly particle-size dependent, and good dispersion stability is a prerequisite for active therapeutic and diagnostic agents. The study showed that traceable streptavidin-conjugated silica nanoparticles exhibiting good dispersibility could be obtained by the choice of a proper surface functionalization route. Theranostic nanoparticles should exhibit sufficient hydrolytic stability to effectively carry the medicine to the target cells, after which they should disintegrate and dissolve. Furthermore, the surface groups should stay at the particle surface until the particle has been internalized by the cell, in order to optimize cell specificity. Model particles with fluorescently-labeled regions were tested in vitro using light microscopy and image processing, which allowed a detailed study of the disintegration and dissolution process. The study showed that nanoparticles degrade more slowly outside the cell than inside it. The main advantage of theranostic agents is their successful targeting in vitro and in vivo. Non-porous nanoparticles using monoclonal antibodies as guiding ligands were tested in vitro in order to follow their targeting ability and internalization. In addition to successful targeting, a specific internalization route for the particles could be detected. In the last part of the study, the objective was to clarify the feasibility of traceable mesoporous silica nanoparticles, loaded with a hydrophobic cancer drug, for targeted drug delivery in vitro and in vivo. The particles were provided with a small-molecule targeting ligand. A significantly higher therapeutic effect could be achieved with the nanoparticles than with the free drug. The nanoparticles were biocompatible and stayed in the tumor longer than the free drug did before being eliminated by renal excretion. Overall, the results showed that mesoporous silica nanoparticles are biocompatible, biodegradable drug carriers and that cell specificity can be achieved both in vitro and in vivo.
Abstract:
Since the most characteristic feature of paraquat poisoning is lung damage, a prospective controlled study was performed on excised rat lungs in order to estimate the intensity of the lesion after different doses. Twenty-five male, 2-3-month-old non-SPF Wistar rats, divided into 5 groups, received paraquat dichloride in a single intraperitoneal injection (0, 1, 5, 25, or 50 mg/kg body weight) 24 h before the experiment. Static pressure-volume (PV) curves were obtained in air- and saline-filled lungs; an estimator of surface tension and tissue work was computed by integrating the area under both curves and reported as work per ml of volume displacement. Paraquat induced a dose-dependent increase in inspiratory surface tension work that reached a significant two-fold increase at 25 and 50 mg/kg body weight (P<0.05, ANOVA), while sparing lung tissue. This kind of lesion was probably due to functional abnormalities of the surfactant system, as shown by the increase in hysteresis in the paraquat groups at the highest doses. Hence, paraquat poisoning provides a suitable model of acute lung injury with alveolar instability that can be easily used in experimental protocols of mechanical ventilation.
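A hedged sketch of the work estimator described above: integrate each inspiratory P-V curve, normalize by the volume displacement, and take the air minus saline difference as the surface tension component. The curves below are hypothetical, not study data:

    # Sketch of the P-V work estimator; the curves here are hypothetical.
    import numpy as np

    def work_per_ml(pressure, volume):
        # Area under the P-V curve per ml of volume displacement.
        area = np.trapz(pressure, volume)          # cmH2O * ml
        return area / (volume[-1] - volume[0])     # cmH2O, i.e. work/ml

    V = np.linspace(0.0, 10.0, 50)                 # ml
    P_air = 5.0 + 2.0 * V                          # hypothetical air curve
    P_saline = 2.0 + 0.8 * V                       # hypothetical saline curve
    tissue = work_per_ml(P_saline, V)              # tissue component
    surface = work_per_ml(P_air, V) - tissue       # surface tension component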
Abstract:
Electroacupuncture has been proposed as a low-cost and practical method that allows effective pain management with minimal collateral effects. In this study we examined the effect of electroacupuncture against the hyperalgesia developed in a model of post-incisional pain in rats. A 1-cm longitudinal incision was made through the skin and fascia of the plantar region of the animal's hind paw. Mechanical hyperalgesia at the incision was evaluated 135 min after surgery with von Frey filaments. The tension threshold was reduced from 75 g (upper limit of the test) to 1.36 ± 0.36 g (mean ± SEM) in control rats. A 15-min period of electroacupuncture applied 120 min after surgery to the Zusanli (ST36) and Sanyinjiao (SP6) points, but not to non-acupoints, produced a significant and long-lasting reduction of the mechanical hyperalgesia induced by the surgical incision of the plantar surface of the ipsilateral hind paw. In animals tested soon after the end of electroacupuncture, the tension threshold was reduced from 75 to 27.6 ± 4.2 g, that is, by about 64%, a much smaller reduction than in controls. Electroacupuncture was ineffective in rats treated 10 min earlier with naloxone (1 mg/kg, ip), confirming the involvement of opioid mechanisms in the antinociceptive effect of this procedure. The results indicate that post-incisional pain is a useful model for studying the anti-hyperalgesic properties of electroacupuncture in laboratory animals.
Abstract:
Digital business ecosystems (DBE) are becoming an increasingly popular concept for modelling and building distributed systems in heterogeneous, decentralized and open environments. Information and communication technology (ICT) enabled business solutions have created an opportunity for automated business relations and transactions. The deployment of ICT in business-to-business (B2B) integration seeks to improve competitiveness by establishing real-time information and offering better information visibility to business ecosystem actors. The flows of products, components and raw materials in supply chains are traditionally studied in logistics research. In this study, we expand the research to cover the processes parallel to the service and information flows, as information logistics integration. In this thesis, we show how better integration and automation of information flows enhance the speed of processes and thus provide cost savings and other benefits for organizations. Investments in DBE are intended to add value through business automation and are key decisions in building up information logistics integration. Business solutions that build on automation are important sources of value in networks that promote and support business relations and transactions. Value is created through improved productivity and effectiveness when new, more efficient collaboration methods are discovered and integrated into the DBE. Organizations, business networks and collaborations, even with competitors, form DBEs in which information logistics integration has a significant role as a value driver. However, traditional economic and computing theories do not treat digital business ecosystems as a separate form of organization, and they do not provide conceptual frameworks that can be used to explore digital business ecosystems as value drivers; combined internal management and external coordination mechanisms for information logistics integration are not current practice in a company's strategic process. In this thesis, we have developed and tested a framework to explore the digital business ecosystems developed, and a coordination model for digital business ecosystem integration; moreover, we have analysed the value of information logistics integration. The research is based on a case study and on mixed methods, in which we used the Delphi method and Internet-based tools for idea generation and development. We conducted many interviews with key experts, which we recorded, transcribed and coded to find success factors. Quantitative analyses were based on a Monte Carlo simulation, which sought cost savings, and on Real Option Valuation, which sought an optimal investment program at the ecosystem level. This study provides valuable knowledge regarding information logistics integration by utilizing a suitable business process information model for collaboration. The information model is based on business process scenarios and on detailed transactions for the mapping and automation of product, service and information flows. The research results illustrate the current gap in the understanding of information logistics integration in a digital business ecosystem. Based on the success factors, we were able to illustrate how specific coordination mechanisms related to network management and orchestration could be designed. We also pointed out the potential of information logistics integration in value creation. With the help of global standardization experts, we utilized the design of the core information model for B2B integration.
We built the quantitative analysis using a Monte Carlo-based simulation model and the Real Option Value model. This research covers relevant new research disciplines, such as information logistics integration and digital business ecosystems, in which the current literature needs to be improved. The research was carried out with high-level experts and managers responsible for global business network B2B integration. However, the research was dominated by one industry domain, and therefore a more comprehensive exploration should be undertaken to cover a larger population of business sectors. Based on this research, a new quantitative survey could provide new possibilities to examine information logistics integration in digital business ecosystems. The value activities indicate that further studies should continue, especially with regard to collaboration issues in integration, focusing on a user-centric approach. We should better understand how real-time information supports customer value creation by embedding the information into the lifetime value of products and services. The aim of this research was to build competitive advantage through B2B integration in support of a real-time economy. For practitioners, this research created several tools and concepts to improve value activities, information logistics integration design, and management and orchestration models. Based on the results, the companies were able to better understand the formation of the digital business ecosystem and the importance of joint efforts in collaboration. However, the challenge of incorporating this new knowledge into strategic processes in a multi-stakeholder environment remains. This challenge has been noted, and new projects have been established in pursuit of a real-time economy.
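As an illustration of the kind of Monte Carlo cost-savings simulation the thesis refers to, the sketch below samples hypothetical transaction volumes, unit costs and automation rates; every distribution and parameter is an assumption made for illustration, not a figure from the study:

    # Illustrative Monte Carlo estimate of annual savings from B2B automation.
    # All distributions and parameters below are hypothetical.
    import random

    def simulate_annual_savings(n_runs=100_000):
        samples = []
        for _ in range(n_runs):
            volume = random.triangular(50_000, 150_000, 90_000)  # transactions/yr
            manual_cost = random.uniform(2.0, 4.0)       # EUR per manual tx
            automated_cost = random.uniform(0.2, 0.6)    # EUR per automated tx
            automation_rate = random.betavariate(8, 2)   # share of tx automated
            samples.append(volume * automation_rate *
                           (manual_cost - automated_cost))
        samples.sort()
        return samples

    s = simulate_annual_savings()
    mean = sum(s) / len(s)
    p5, p95 = s[int(0.05 * len(s))], s[int(0.95 * len(s))]
    print(f"mean = {mean:,.0f} EUR, 90% interval = [{p5:,.0f}, {p95:,.0f}] EUR")

The output of such a simulation (a distribution of savings rather than a point estimate) is what a Real Option Valuation can then use to sequence the ecosystem-level investment program.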
Abstract:
Ventricular late potentials are low-amplitude signals originating from damaged myocardium and detected on the body surface by ECG filtering and averaging. Digital filters present in commercial equipment may interfere with the ability to stratify arrhythmia risk. We compared the 40-Hz BiSpec (BI) filter and the classical 40- to 250-Hz band-pass Butterworth bidirectional (BD) filter in terms of their impact on time domain variables and diagnostic properties. In a transverse retrospective age-adjusted case-control study, 221 subjects in sinus rhythm without bundle branch block were divided into three groups after signal-averaged ECG acquisition: GI (N = 40), clinically normal controls; GII (N = 158), subjects with coronary heart disease without sustained monomorphic ventricular tachycardia (SMVT); and GIII (N = 23), subjects with heart disease and documented SMVT. Conventional variables analyzed from vector magnitude data, after averaging to 0.3 µV final noise, were obtained by applying each filter to the averaged signal and were evaluated in pairs by numerical comparison and by diagnostic agreement assessment, using conventional and optimized thresholds of normality. Significant differences were found between BI and BD variables in all groups, with diagnostic results showing significant disagreement between the two filters [kappa value of 0.61 (P<0.05) for GII and 0.31 for GIII (P = NS)]. Sensitivity for SMVT was lower with BI than with BD (65.2 vs 91.3%, respectively, P<0.05). The filters provided significantly different numerical and diagnostic results, and the BI filter showed only limited clinical applicability to risk stratification of ventricular arrhythmia.
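A hedged sketch of the classical BD filtering step with SciPy: a 40-250 Hz band-pass Butterworth applied forward and backward for zero phase distortion. The filter order and sampling rate are assumptions; the commercial implementations compared in the study may differ in detail:

    # Sketch of a 40-250 Hz band-pass Butterworth applied bidirectionally
    # (forward-backward, zero phase). Order and sampling rate are assumed.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 1000.0   # Hz, assumed sampling rate of the averaged ECG
    sos = butter(4, [40.0, 250.0], btype="bandpass", fs=fs, output="sos")

    def bd_filter(signal_avg):
        # Bidirectional filtering: apply the filter forward, then backward.
        return sosfiltfilt(sos, signal_avg)

    # Example on a synthetic averaged beat with additive noise:
    t = np.arange(0.0, 0.6, 1.0 / fs)
    beat = np.exp(-((t - 0.3) ** 2) / 0.001) + 0.05 * np.random.randn(t.size)
    filtered = bd_filter(beat)

The forward-backward pass is what keeps the filtered QRS offset from being smeared in time, which matters because late-potential variables are measured at the terminal QRS.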
Abstract:
In this thesis, stepwise titration with hydrochloric acid was used to obtain the chemical reactivities and dissolution rates of ground limestones and dolostones of varying geological backgrounds (sedimentary, metamorphic or magmatic). Two different ways of conducting the calculations were used: 1) a first-order mathematical model was used to calculate extrapolated initial reactivities (and dissolution rates) at pH 4, and 2) a second-order mathematical model was used to acquire integrated mean specific chemical reaction constants (and dissolution rates) at pH 5. The calculations of the reactivities and dissolution rates were based on the rate of change of pH and on the particle size distributions of the sample powders obtained by laser diffraction. The initial dissolution rates at pH 4 were repeatedly higher than previously reported literature values, whereas the dissolution rates at pH 5 were consistent with former observations. Reactivities and dissolution rates varied substantially for dolostones, whereas for limestones and calcareous rocks the variation can be primarily explained by relatively large sample standard deviations. In decreasing order of initial reactivity at pH 4, the dolostone samples rank as follows:
1) metamorphic dolostones with a calcite/dolomite ratio higher than about 6%
2) sedimentary dolostones without calcite
3) metamorphic dolostones with a calcite/dolomite ratio lower than about 6%
The reactivity and dissolution rate measurements were accompanied by a wide range of experimental techniques to characterise the samples, to reveal how the different rocks changed during the dissolution process, and to find out which factors influenced their chemical reactivities. An emphasis was put on the chemical and morphological changes taking place at the particle surfaces, studied via X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Supporting chemical information was obtained with X-Ray Fluorescence (XRF) measurements of the samples, and with Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) and Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) measurements of the solutions used in the reactivity experiments. Information on the mineral (modal) compositions and their occurrence was provided by X-Ray Diffraction (XRD), Energy Dispersive X-ray analysis (EDX) and the study of thin sections with a petrographic microscope. BET (Brunauer, Emmett, Teller) surface areas were determined from nitrogen physisorption data. Factors increasing the chemical reactivity of dolostones and calcareous rocks were found to be sedimentary origin, higher calcite concentration and lower quartz concentration. It is also assumed that finer grain size and larger BET surface areas increase the reactivity, although no definite correlation was found in this thesis. Atomic concentrations did not correlate with the reactivities. Sedimentary dolostones, unlike metamorphic ones, were found to have porous surface structures after dissolution. In addition, conventional (XPS) and synchrotron-based (HRXPS) X-ray Photoelectron Spectroscopy were used to study bonding environments on calcite and dolomite surfaces. Both samples are insulators, which is why neutralisation measures such as an electron flood gun and a conductive mask were used. Surface core level shifts of 0.7 ± 0.1 eV for the Ca 2p spectrum of calcite and 0.75 ± 0.05 eV for the Mg 2p and Ca 3s spectra of dolomite were obtained. Some satellite features of the Ca 2p, C 1s and O 1s spectra are suggested to be bulk plasmons. The origin of carbide bonds was suggested to be beam-assisted interaction with hydrocarbons found on the surface. The results presented in this thesis are of particular importance for choosing raw materials for wet Flue Gas Desulphurisation (FGD) and for the construction industry. Wet FGD benefits from high reactivity, whereas the construction industry can take advantage of the slow reactivity of the carbonate rocks often used in the facades of fine buildings. Information on chemical bonding environments may help to create more accurate models for the water-rock interactions of carbonates.
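As a hedged sketch of the first-order treatment mentioned above, one can fit ln(rate) against time and extrapolate to t = 0 to obtain the initial rate; the model form and the data below are illustrative, since the thesis's exact calculation procedure is not reproduced here:

    # Hedged sketch: extrapolate an initial dissolution rate from a
    # first-order fit ln(rate) = ln(r0) - k*t. Data are synthetic.
    import numpy as np

    def extrapolate_initial_rate(t, rate):
        slope, intercept = np.polyfit(t, np.log(rate), 1)
        return np.exp(intercept), -slope     # r0 (at t -> 0) and constant k

    t = np.array([10.0, 30.0, 60.0, 120.0, 300.0])   # s
    rate = 2.0e-6 * np.exp(-0.004 * t)               # mol/(m^2 s), synthetic
    r0, k = extrapolate_initial_rate(t, rate)
    print(f"r0 = {r0:.2e} mol/(m^2 s), k = {k:.4f} 1/s")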
Abstract:
Permanent magnet synchronous machines (PMSM) have become widely used because of their high efficiency compared to synchronous machines with excitation windings or to induction motors. This feature of the PMSM is achieved through the use of permanent magnets (PM) as the main excitation source. The magnetic properties of the PM have a significant influence on all PMSM characteristics. Recent observations of PM material properties in rotating machines have revealed that the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes them prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short-circuit faults. The main task of this thesis is to derive a simple analytical tool which can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operation, and at the moment of a three-phase symmetrical short circuit that is worst from the PM's point of view. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model is based on the combination of two theories: magnetic circuit theory and space vector theory. For the normal operating mode, a comparison of results from finite element software with results calculated with the proposed model shows good accuracy in the parts of the PM that are most prone to hysteresis losses. The comparison of the results for the three-phase symmetrical short circuit revealed significant inaccuracy of the proposed model compared with the finite element results, and the reasons for this inaccuracy were analysed. The impact on the model of the Carter factor theory, and of the assumption that the PM has the same permeability as air, was analyzed. Propositions for further development of the model are presented.
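As a hedged sketch of the magnetic-circuit building block such a model rests on, the no-load air-gap flux density of a surface-mounted magnet can be estimated as below; leakage and armature reaction are neglected, and the numbers are illustrative assumptions, not values from the thesis:

    # No-load magnetic-circuit estimate for a surface-mounted PM; leakage and
    # armature reaction are neglected. Numbers are illustrative assumptions.

    def airgap_flux_density(B_r, mu_r, l_m, g, k_carter=1.0):
        # B_g = B_r / (1 + mu_r * g_eff / l_m) for equal magnet and gap areas,
        # with the Carter factor widening the effective gap: g_eff = kC * g.
        g_eff = k_carter * g
        return B_r / (1.0 + mu_r * g_eff / l_m)

    # Typical NdFeB magnet (Br = 1.2 T, mu_r = 1.05), 4 mm magnet, 1 mm gap:
    B_g = airgap_flux_density(B_r=1.2, mu_r=1.05, l_m=4e-3, g=1e-3, k_carter=1.1)
    print(f"air-gap flux density ~ {B_g:.2f} T")   # ~0.93 T for these values

Superimposing the armature reaction computed from space vector theory onto this operating point is, in outline, how such a combined model can track the flux density along the magnet during a short circuit.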
Abstract:
We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns controlled by a microcontroller. The proposed system improves on the traditional motorized running wheel by allowing rats to perform high-intensity training, enabling comparisons with the treadmill at the same exercise intensity, without any electric shock. A polyvinyl chloride runway with a rough rubber surface was fitted to the periphery of the wheel to permit automatic acceleration training, which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, middle cerebral artery occlusion was performed, and the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride staining were used to evaluate the effectiveness of the proposed platform. The improvement in motor function, mNSS, and infarct volume was significantly greater in the FNESRW group than in the control group (P<0.05) and similar to that in the treadmill group. The experimental data demonstrate that the proposed platform can be used to test the benefit of exercise-preconditioning-induced neuroprotection in the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence from subjective human adjustment, and ease of use.
Abstract:
The purpose of this thesis is to explore a different kind of digital content management model and to propose a process for properly managing the content on an organization's website. This process also briefly defines the roles and responsibilities of the different actors involved. In order to create this process, the thesis has been divided into two parts. First, the theoretical analysis identifies the two main content management models: content management standardization and content management adaptation, the latter also called the content management localization model. Each of these models has been analyzed through a SWOT analysis in order to identify its particularities and which of them is the best option for particular organizational objectives. In the empirical part, the thesis measures the organization's website performance by comparing two main sets of data. On the one hand, the international website is analyzed in order to identify the results of content management standardization. On the other hand, content management adaptation, also called the content management localization model, is analyzed by examining the key measures of the same organization's Dutch page. The resulting output is a process model for localization, as well as recommendations on how to proceed when creating a digital content management strategy. However, more research is recommended to provide more comprehensive managerial solutions.