996 results for Getting Real
Abstract:
This thesis examines the design of a social media service in a corporate environment. The work is based on a real project carried out during spring 2007 between Helsinki Polytechnic Stadia and Itella Oyj. The project was executed using a traditional multimedia production design process, which as such did not meet the project's specific needs. Social media consists of internet applications that enable collaborative ways of producing different kinds of content, such as texts, images, and videos, and of reacting to them. Because of these participatory and social features, it is difficult to design a social media application by following a traditional design process based on extensive up-front planning and scripting. The thesis examines an alternative design model, Getting Real, which is based on the working methods of the small software company 37signals. Getting Real applies a process built on iterative development cycles, favouring a small but versatile team and minimal documentation. The thesis compares the traditional design process and Getting Real against the realised project. The outcome of the analysis is a proposal for a design process that combines agile application development with a systematic process derived from the client's needs.
Abstract:
Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures in the development stage. At the same time, digital devices are getting smaller, and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed real-time fault injection in memory elements.
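For illustration, a minimal sketch of the single-bit-flip fault model commonly used in such injection campaigns; the function name and the byte-array memory model are hypothetical and do not represent the paper's OCD-FI interface:

```python
def inject_fault(memory: bytearray, address: int, bit: int) -> None:
    """Emulate a transient fault by flipping one bit of a memory element
    (the classic single-bit-flip model used in fault injection campaigns)."""
    assert 0 <= bit < 8, "bit index within one byte"
    memory[address] ^= 1 << bit  # XOR toggles exactly the selected bit

# Example: flip bit 3 of the byte at address 0x10 and observe the change.
mem = bytearray(64)
inject_fault(mem, 0x10, 3)
print(hex(mem[0x10]))  # -> 0x8
```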
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to prove theoretical results or to test hardware/products under development. Although this is an interesting solution (a low-cost, easy, and fast way to carry out some coursework), it has major disadvantages. As everything is currently done with/in a computer, students are losing the "feel" for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithms, mathematics, and physics. All of these areas can use numerical analysis software, simulation software, or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, although real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used by different courses in which computers are part of the teaching/learning process, in order to give students a more realistic feeling by using real data. The proposed framework is based on a set of low-cost sensors for different physical magnitudes (e.g., temperature, light, wind speed) that are either connected to a central server, which students can access over an Ethernet protocol, or connected directly to the student's computer/laptop. These sensors use the available communication ports, such as serial ports, parallel ports, Ethernet, or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor values in their different courses and, consequently, in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools; sensors that are not available at a given school can be used by getting the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can use courses that have some affinity with electronics development to build new sensor modules and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible with resources, since the same materials can be used in several courses, bringing real-world data into students' computer work.
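As a sketch of how a student program might pull one reading from such a central sensor server, the abstract does not specify a wire format, so the line-based request protocol, host name, and port below are assumptions for illustration only:

```python
import socket

def read_sensor(host: str, port: int, sensor_id: str) -> float:
    """Request a single reading from a central sensor server over TCP.

    Assumes a hypothetical line-based protocol where the client sends
    'GET <sensor_id>' and the server replies with a plain numeric value.
    """
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(f"GET {sensor_id}\n".encode())
        return float(conn.recv(64).decode().strip())

# Usage (hypothetical host/port): feed a spreadsheet or numerical
# exercise with real room temperatures instead of random numbers.
# temperature = read_sensor("sensors.school.example", 5000, "room-temp-1")
```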
Abstract:
This Working Project studies five portfolios of currency carry trades formed with the G10 currencies. Performance varies among strategies, and the most basic one presents the worst results. I also study the equity and Pure FX risk factors that can explain the portfolios' returns: equity factors do not explain these returns, while the Pure FX factors do for some of the strategies. Downside risk measures indicate the importance of using regime indicators to avoid losses. I conclude that although VAR and threshold regression models with a variety of regime indicators do not allow different regimes to be discerned, with a defined exogenous threshold on real exchange rates, an indicator of liquidity, and the volatilities of the spot exchange rates it is possible to increase the average returns and reduce the drawdowns of the carry trades.
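A stylised sketch of that regime-filtering idea follows; the variable names and the all-or-nothing position rule are illustrative simplifications, not the thesis's exact specification:

```python
import numpy as np

def filtered_carry_returns(interest_diff, spot_change, regime_indicator, threshold):
    """Carry trade P&L with an exogenous regime filter: hold the carry
    position only while the indicator (e.g. spot-rate volatility or an
    illiquidity measure) stays below the chosen threshold."""
    raw = np.asarray(interest_diff) - np.asarray(spot_change)  # per-period carry return
    in_trade = np.asarray(regime_indicator) < threshold        # True = stay in the trade
    return np.where(in_trade, raw, 0.0)

# Example: a volatility spike in the third period takes the strategy out
# of the trade, avoiding the drawdown from the adverse spot move.
print(filtered_carry_returns([0.03, 0.03, 0.03], [0.01, 0.01, 0.08],
                             [0.05, 0.06, 0.20], 0.10))  # -> [0.02 0.02 0.  ]
```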
Abstract:
Objective: The chance of obtaining a conclusive DNA profile strongly depends on the quantity of biological material that can be recovered from a crime scene sample. Optimizing the collection strategy is therefore of prime interest. A difference in the tightness of the cotton meshed around the shaft has been observed between manufacturers and is hypothesized to affect the collection and subsequent release capacity of cotton swabs. Consequently, we compared the performance of cotton swabs from two different suppliers: Applimed SA and Dryswab™. Methods: These swabs were used to recover 50 µl of blood, either pure or diluted (1:1000 and 1:5000), deposited on both smooth and absorbent surfaces. Performance was compared in terms of ease of use, concentration of extracted DNA, and quality of DNA profiles. DNA quantification was obtained by real-time PCR using the Quantifiler™ Human DNA Quantification Kit. Evaluation of DNA profiles was based on profiles obtained using the AmpFlSTR® NGM SElect™ PCR Amplification Kit. Results: On smooth surfaces, recovered DNA was more concentrated when using the Dryswab™ than the Applimed SA cotton swab. More precisely, DNA concentrations ranged from 15.7 to 28.8 ng/µl and from 6.7 to 21.2 ng/µl, respectively, for samples of pure blood. The same trend was observed for the absorbent surface, with 2.0 to 5.0 ng/µl and 0.9 to 1.4 ng/µl, respectively. Conclusion: Our results illustrate that different cotton swabs produce different results in terms of ease of use and quantity of recovered DNA, and this should be taken into consideration when choosing which swab to use at both the crime scene and the laboratory. More specifically, results from the present study suggest that looser meshing of the cotton fibres
Abstract:
Since 2000, the China-Africa relationship has been shaped mainly by a cooperation forum, the Forum on China-Africa Cooperation (FOCAC), a mechanism for collective dialogue and cooperation devised by China. This relationship has evolved in line with Chinese strategic interests. China is inserting itself into African economic and trade networks in a methodical and determined way. Unlike its competitors (the United States, the European Union, Canada, Japan, etc.), which engage with Africa selectively, China invests in all African countries without exception, regardless of their political regime, their economic and financial situation, or their geographical location. However, China's energy voracity has become a matter of concern at the United Nations Security Council, above all because of its drive to capture the African oil market. It can be argued that China's conduct in Africa is an expression of its economic and trade pragmatism, with negative side effects for African integration and development.
Abstract:
Past research has documented a substitution effect between real earnings management (RM) and accrual-based earnings management (AM), depending on relative costs. This study contributes to this research by examining whether levels of (and changes in) financial leverage have an impact on this empirically documented trade-off. We hypothesise that in the presence of high leverage, firms that engage in earnings manipulation tactics will exhibit a preference for RM due to a lower possibility—and subsequent costs—of getting caught. We show that leverage levels and increases positively and significantly affect upward RM, with no significant effect on income-increasing AM, while our findings point towards a complementarity effect between unexpected levels of RM and AM for firms with very high leverage levels and changes. This is interpreted as an indication that high leverage could attract heavy outsider scrutiny, making it necessary for firms to use both forms of earnings management in order to achieve earnings targets. Furthermore, we document that equity investors exhibit a significantly stronger penalising reaction to AM vs. RM, indicating that leverage-induced RM is not as easily detectable by market participants as debt-induced AM, despite the fact that the former could imply deviation from optimal business practices.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Getting a lower energy cost has always been a challenge for concentrated photovoltaics. The FK concentrator enhances the performance (efficiency, acceptance angle, and manufacturing tolerances) of the conventional CPV system based on a Fresnel primary stage and a secondary lens, while keeping its simplicity and potentially low-cost manufacturing. At the same time, the F-XTP (Fresnel lens + reflective prism) at first glance has better cost potential, but significantly higher sensitivity to manufacturing errors. This work presents a comparison of these two approaches applied to the two main technologies of Fresnel lens production (PMMA and Silicone on Glass) and the effect of standard deformations that occur under real operating conditions.
Abstract:
The objective of this Dissertation has been the achievement of real-time simulations of industrial vehicles modeled as complex multibody systems made up of rigid bodies.
For the development of a simulation program, four main aspects must be considered: the modeling of the multibody system (types of coordinates, joints that are ideal or imposed by means of forces), the formulation used to set up the differential equations of motion (dependent or independent coordinates, global or topological methods, ways of imposing the constraint equations), the numerical integration method used to solve these equations in time (explicit or implicit integrators), and the details of the implementation (programming language, mathematical libraries, parallelization techniques). These four stages are interrelated, and all of them are part of this work. They involve the generation of models for a van and a semi-trailer truck, the use of three different dynamic formulations, the integration of the differential equations of motion through explicit and implicit methods, the use of BLAS functions and sparse matrix techniques, and the introduction of parallelization to use the different processor cores. The work presented in this Dissertation has been structured in eight chapters, the first of them being the Introduction. In Chapter 2, two different semi-recursive formulations are presented, the first of which is based on a double velocity transformation, yielding the differential equations of motion as a function of the independent relative accelerations. The numerical integration of these equations has been carried out with the explicit fourth-order Runge-Kutta method. The second formulation is based on dependent relative coordinates, imposing the constraints by means of position penalty coefficients and correcting the velocities and accelerations by projection methods. In this second case, the integration of the equations of motion has been carried out by means of the implicit HHT integrator (Hilber, Hughes and Taylor), which belongs to the Newmark family of structural integrators. In Chapter 3, the third formulation used in this Dissertation is presented. In this case, the joints between the bodies of the system have been considered flexible, with forces used to impose the joint conditions. This kind of union prevents working with relative coordinates, so the positions of the system bodies and the equations of motion have been formulated using Cartesian coordinates and Euler parameters. In this global formulation, constraints are introduced through forces (with an approach similar to that of the penalty coefficients), and the numerical integration is again stabilized by velocity and acceleration projections. In Chapter 4, a review of the main computational tools and strategies used to increase the efficiency of the implementations of the different algorithms is presented. First, some basic considerations to increase the numerical efficiency of the implementations are included. Then the main characteristics of the code analyzers used are described, together with the mathematical libraries used to solve linear algebra problems (with both dense and sparse matrices). Finally, the topic of parallelization in current multicore processors is developed in detail, describing the pattern employed and the most important characteristics of the two tools proposed, OpenMP and Intel TBB.
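For reference, a minimal sketch of one step of the explicit fourth-order Runge-Kutta scheme used to integrate the first formulation (a generic textbook implementation, not the Dissertation's code):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One explicit fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: integrate y' = -y from y(0) = 1 over one step of h = 0.1.
print(rk4_step(lambda t, y: -y, 0.0, np.array([1.0]), 0.1))  # ≈ exp(-0.1)
```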
It should be highlighted that the characteristics of multibody systems (small problem sizes, frequent use of recursion, and intensive repetition of the calculations over time with strong dependencies on previous results) make the use of parallelization techniques considerably more difficult than in other areas of computational mechanics, such as finite element computation. Based on the concepts introduced in Chapter 4, Chapter 5 is divided into three sections, one for each formulation proposed in this Dissertation. Each section describes the details of how the different implementations have been carried out for each algorithm and which tools have been used. The first section shows the use of numerical libraries for dense and sparse matrices in the semi-recursive topological formulation based on the double velocity transformation. The second describes the use of parallelization by means of OpenMP and TBB in the semi-recursive formulation with penalty coefficients and projections. Finally, the use of sparse matrix and parallelization techniques is described for the global formulation with flexible joints and Euler parameters. Chapter 6 presents the results achieved with the formulations and implementations previously described. This chapter starts with a description of the modeling and topology of the two vehicles studied. The first model is a two-axle chassis-cab or van-type vehicle, belonging to the range of medium-duty cargo vehicles. The second is a five-axle tractor and semi-trailer truck, belonging to the heavy industrial vehicle category. This chapter also includes a comparative study of the simulations of these vehicles with each of the formulations used, and it quantifies the effects of the improvements achieved with the different strategies proposed in this Dissertation. In order to draw conclusions more easily and to evaluate the improvements introduced in the Dissertation more objectively, all the results in this chapter have been obtained with the same computer, which was the top of the Intel Xeon range in 2007 but is rather obsolete today. Finally, Chapters 7 and 8 are dedicated to the final conclusions and the future lines of research that may derive from the work carried out in this Dissertation. The objective of performing real-time simulations of highly complex industrial vehicles has been achieved with several of the formulations and implementations developed.
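A compact sketch of the velocity projection used to stabilise the integration, written here with dense NumPy linear algebra for a constraint Jacobian J of full row rank (illustrative only; the Dissertation's implementations use sparse and parallel variants):

```python
import numpy as np

def project_velocities(J, v):
    """Correct the velocities so they satisfy the constraints J v = 0:
    v* = v - J^T (J J^T)^{-1} J v  (least-squares projection)."""
    lam = np.linalg.solve(J @ J.T, J @ v)  # Lagrange-multiplier-like correction
    return v - J.T @ lam

# Example: one constraint v_x - v_y = 0; the projection enforces it.
J = np.array([[1.0, -1.0]])
print(project_velocities(J, np.array([1.0, 0.0])))  # -> [0.5 0.5]
```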
Abstract:
The development and maintenance of the seal of the root canal system is the key to the success of root canal treatment. Resin-based adhesive materials have the potential to reduce microleakage of the root canal because of their adhesive properties and penetration into the dentinal walls. Moreover, the irrigation protocol may influence the adhesiveness of resin-based sealers to root dentin. The objective of the present study was to evaluate the effect of different irrigation protocols on coronal bacterial microleakage of gutta-percha/AH Plus and Resilon/Real Seal Self-Etch systems. One hundred and ninety premolars were used. The teeth were divided into 18 experimental groups according to the irrigation protocols and filling materials used. The protocols were: distilled water; sodium hypochlorite (NaOCl) + EDTA; NaOCl + H3PO4; NaOCl + EDTA + chlorhexidine (CHX); NaOCl + H3PO4 + CHX; CHX + EDTA; CHX + H3PO4; CHX + EDTA + CHX; and CHX + H3PO4 + CHX. Gutta-percha/AH Plus or Resilon/Real Seal SE was used as the root-filling material. Coronal microleakage was evaluated for 90 days against Enterococcus faecalis. Data were statistically analyzed using the Kaplan-Meier survival test and the Kruskal-Wallis and Mann-Whitney tests. No significant difference was found among the groups using chlorhexidine or sodium hypochlorite during chemo-mechanical preparation followed by EDTA or phosphoric acid for smear layer removal. The same was true for the filling materials. However, the statistical analyses revealed that a final flush with 2% chlorhexidine significantly reduced coronal microleakage. A final flush with 2% chlorhexidine after smear layer removal reduces the coronal microleakage of teeth filled with gutta-percha/AH Plus or Resilon/Real Seal SE.
Abstract:
Fingolimod is a new and efficient treatment for multiple sclerosis (MS). Administration of the drug requires special attention to the first dose, since cardiovascular adverse events can be observed during the first six hours after fingolimod ingestion. The present study reviewed cardiovascular data on 180 patients with MS receiving the first dose of fingolimod. The rate of bradycardia in these patients was higher than that observed in clinical trials, which have very strict inclusion criteria for patients. Fewer than 10% of cases required special attention, and there were no fatal cases. All but one patient continued the treatment after this initial dose. This is the first report on real-life administration of fingolimod to Brazilian patients with MS, and one of the few studies of its kind in the world.
Abstract:
Using a desorption/ionization technique, easy ambient sonic-spray ionization coupled to mass spectrometry (EASI-MS), documents related to the 2nd generation of the Brazilian Real currency (R$) were screened in the positive ion mode for authenticity, based on chemical profiles obtained directly from the banknote surface. Characteristic profiles were observed for authentic banknotes, seized suspect counterfeits, and homemade counterfeits produced on inkjet and laser printers. The chemicals on the surface of the authentic banknotes were detected via a few minor sets of ions, namely from the plasticizers bis(2-ethylhexyl) phthalate (DEHP) and dibutyl phthalate (DBP), most likely related to the official offset printing process, and other common quaternary ammonium cations, presenting a chemical profile similar to that of the 1st-generation R$. The seized suspect counterfeit banknotes, however, displayed abundant diagnostic ions in the m/z 400-800 range due to the presence of oligomers. High-accuracy FT-ICR MS analysis enabled molecular formula assignment for each ion. The ions were spaced 44 m/z units apart (the mass of one ethylene oxide unit), which enabled their characterization as Surfynol® 4XX (S4XX, XX = 40, 65, and 85), wherein increasing XX values indicate increasing degrees of ethoxylation on a backbone of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (Surfynol® 104). Sodiated triethylene glycol monobutyl ether (TBG), of m/z 229 (C10H22O4Na), was also identified in the seized counterfeit banknotes via EASI(+) FT-ICR MS. Surfynol® and TBG are constituents of inks used for inkjet printing.
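As a quick check on that molecular formula assignment, a small sketch computing the m/z of the sodiated TBG cation from standard monoisotopic masses (generic arithmetic; no particular instrument software is implied):

```python
# Monoisotopic masses (u) of the most abundant isotopes
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "Na": 22.989770}

def mz(formula: dict, charge: int = 1) -> float:
    """m/z of a cation from its elemental composition, neglecting the electron mass."""
    return sum(MASS[el] * n for el, n in formula.items()) / charge

# Sodiated triethylene glycol monobutyl ether, [TBG + Na]+ = C10H22O4Na+
print(round(mz({"C": 10, "H": 22, "O": 4, "Na": 1}), 3))  # -> 229.142, i.e. nominal m/z 229
```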
Abstract:
Ammonium nitrate fuel oil (ANFO) is an explosive used in many civil applications. In Brazil, ANFO has unfortunately also been used in criminal attacks, mainly in automated teller machine (ATM) explosions. In this paper, we describe a detailed characterization of the ANFO composition and of its two main constituents (diesel and a nitrate explosive) using high-resolution, high-accuracy mass spectrometry performed on an FT-ICR mass spectrometer with electrospray ionization (ESI(±)-FTMS) in both the positive and negative ion modes. Via ESI(-)-MS, an ion marker for ANFO was characterized. Using a direct and simple ambient desorption/ionization technique, easy ambient sonic-spray ionization mass spectrometry (EASI-MS), in a simpler, lower-accuracy but robust single-quadrupole mass spectrometer, the ANFO ion marker was detected directly on the surface of banknotes collected from ATM explosion thefts.