921 results for pacs: C6170K knowledge engineering techniques
Abstract:
The reaction of living anionic polymers with 2,2,5,5-tetramethyl-1-(3-bromopropyl)-1-aza-2,5-disilacyclopentane (1) was investigated using coupled thin layer chromatography and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Structures of byproducts as well as the major product were determined. The anionic initiator having a protected primary amine functional group, 2,2,5,5-tetramethyl-1-(3-lithiopropyl)-1-aza-2,5-disilacyclopentane (2), was synthesized using all-glass high-vacuum techniques, which allows the long-term stability of this initiator to be maintained. The use of 2 in the preparation of well-defined aliphatic primary amine α-end-functionalized polystyrene and poly(methyl methacrylate) was investigated. Primary amino α-end-functionalized poly(methyl methacrylate) can be obtained near-quantitatively by reacting 2 with 1,1-diphenylethylene in tetrahydrofuran at room temperature prior to polymerizing methyl methacrylate at -78 °C. When 2 is used to initiate styrene at room temperature in benzene, an additive such as N,N,N',N'-tetramethylethylenediamine is necessary to activate the polymerization. However, although the resulting polymers have narrow molecular weight distributions and well-controlled molecular weights, our mass spectral data suggest that the yield of primary amine α-end-functionalized polystyrene from these syntheses is very low. The majority of the products are methyl α-end-functionalized polystyrene.
Abstract:
Product miniaturization for applications in fields such as biotechnology, medical devices, aerospace, optics, and communications has made the advancement of micromachining techniques essential. Machining hard and brittle materials such as ceramics, glass, and silicon is a formidable task. Rotary ultrasonic machining (RUM) is capable of machining these materials. RUM is a hybrid machining process that combines the material removal mechanisms of conventional grinding and ultrasonic machining. Downscaling RUM for microscale machining is essential to generate miniature features or parts from hard and brittle materials. The goal of this thesis is to conduct a feasibility study and to develop a knowledge base for micro rotary ultrasonic machining (MRUM). The positive outcome of the feasibility study led to a comprehensive investigation of the effect of process parameters. The effects of spindle speed, grit size, vibration amplitude, tool geometry, static load, and coolant on the material removal rate (MRR) of MRUM were studied. In general, MRR was found to increase with increasing spindle speed, vibration amplitude, and static load. MRR was also noted to depend on the abrasive grit size and tool geometry. The behavior of the cutting forces was modeled using time series analysis. Because MRUM is a vibration-assisted machining process, heat generation is low, which is essential for bone machining. The capability of the MRUM process for machining bone tissue was investigated. Finally, a predictive model was proposed to estimate the MRR. The experimental and theoretical results exhibited matching trends.
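As a rough illustration of the time-series force modeling mentioned above, the sketch below fits an autoregressive (AR) model to a synthetic cutting-force signal with plain least squares; the signal, sampling rate, and AR order are illustrative assumptions, not the thesis's data or model.

```python
# Minimal sketch: fitting an autoregressive (AR) time-series model to a
# cutting-force signal. The synthetic signal and the AR order are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "cutting force": periodic component plus noise, then demeaned.
t = np.arange(2000) * 1e-3
force = 5.0 + 0.8 * np.sin(2 * np.pi * 40 * t) + 0.2 * rng.standard_normal(t.size)
force = force - force.mean()

# Fit AR(p) by least squares: f[k] = a1*f[k-1] + ... + ap*f[k-p] + e[k].
p = 6
X = np.column_stack([force[p - i - 1:-i - 1] for i in range(p)])  # lagged values
y = force[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead predictions and residual check.
pred = X @ coeffs
rmse = np.sqrt(np.mean((y - pred) ** 2))
print("AR coefficients:", np.round(coeffs, 3), "RMSE:", round(rmse, 4))
```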
Abstract:
Observability measures how well computer systems support accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, which compromises their utility for today's increasingly complex software systems. New solutions are emerging for VM-based languages because of the full control language VMs have over program executions. Existing solutions of this kind, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies in existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful because they have access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
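To make the three capabilities above concrete, here is a hypothetical, minimal marker-tracing interface sketched in Python rather than inside a language VM; the names (MarkerTracer, define_event, attach, set_enabled) are assumptions for illustration and are not MTF's actual API.

```python
# Hypothetical sketch of a marker-tracing interface in the spirit of MTF.
import functools
from typing import Callable, Dict, List

class MarkerTracer:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable]] = {}
        self._enabled: Dict[str, bool] = {}

    def define_event(self, name: str, handler: Callable) -> None:
        """1) Register a custom event with client-defined semantics."""
        self._handlers.setdefault(name, []).append(handler)
        self._enabled.setdefault(name, True)

    def attach(self, name: str):
        """2) Specify a precise program location (here: a function entry)."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapped(*args, **kwargs):
                if self._enabled.get(name, False):       # 3) runtime toggle
                    for h in self._handlers.get(name, []):
                        h(fn.__qualname__, args, kwargs)  # rich event context
                return fn(*args, **kwargs)
            return wrapped
        return decorator

    def set_enabled(self, name: str, on: bool) -> None:
        """3) Adaptively enable/disable instrumentation while running."""
        self._enabled[name] = on

# Usage sketch: a client reacting to a marker placed at a method entry.
tracer = MarkerTracer()
tracer.define_event("file.read", lambda loc, a, kw: print("marker at", loc))

@tracer.attach("file.read")
def read_chunk(path: str) -> str:
    return f"contents of {path}"

read_chunk("a.txt")                     # marker fires
tracer.set_enabled("file.read", False)
read_chunk("b.txt")                     # instrumentation disabled at runtime
```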
Abstract:
Pulmonary arterial hypertension (PAH) is a disease of the pulmonary vasculature characterized by vasoconstriction and vascular remodeling leading to a progressive increase in pulmonary vascular resistance (PVR). It is becoming increasingly recognized that it is the response of the right ventricle (RV) to the increased afterload resulting from this increase in PVR that is the most important determinant of patient outcome. A range of hemodynamic, structural, and functional measures associated with the RV have been found to have prognostic importance in PAH and, therefore, have potential value as parameters for the evaluation and follow-up of patients. If such measures are to be used clinically, there is a need for simple, reproducible, accurate, easy-to-use, and noninvasive methods to assess them. Cardiac magnetic resonance imaging (CMRI) is regarded as the "gold standard" method for assessment of the RV, the complex structure of which makes accurate assessment by 2-dimensional methods, such as echocardiography, challenging. However, the majority of data concerning the use of CMRI in PAH have come from studies evaluating a variety of different measures and using different techniques and protocols, and there is a clear need for the development of standardized methodology if CMRI is to be established in the routine assessment of patients with PAH. Should such standards be developed, it seems likely that CMRI will become an important method for the noninvasive assessment and monitoring of patients with PAH. (C) 2012 Elsevier Inc. All rights reserved. (Am J Cardiol 2012;110[suppl]:25S-31S)
Abstract:
Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool for retrieving latent spaces that focus on the discriminability of the data. The method employs a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data are important for encoding the user's knowledge into the loop. However, this work also devises a strategy for calculating PLS-reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and it is capable of working with small and unbalanced training sets.
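As a minimal sketch of the general idea (not the authors' implementation), the following code uses PLS against class indicators from a small labeled training set to obtain a discriminative 2D projection, then applies it to a larger unlabeled set; the synthetic data and the scikit-learn usage are assumptions.

```python
# Minimal sketch: a PLS-based 2D projection for visual analysis.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Small labeled "training set" encoding the user's knowledge (2 classes).
X_train = np.vstack([rng.normal(0, 1, (30, 50)),
                     rng.normal(1, 1, (30, 50))])
y_train = np.repeat([0, 1], 30)

# Fit PLS against one-hot class indicators to obtain a discriminative
# latent space with two components (the 2D visual space).
Y = np.eye(2)[y_train]
pls = PLSRegression(n_components=2)
pls.fit(X_train, Y)

# The same projection is then applied to a much larger unlabeled set.
X_large = rng.normal(0.5, 1, (1000, 50))
coords_2d = pls.transform(X_large)   # (1000, 2) points ready to plot
print(coords_2d.shape)
```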
Abstract:
Purpose - The aim of this study is to investigate whether knowledge management (KM) contributes to the development of strategic orientation and to enhancing innovativeness, and whether these three factors contribute to improving business performance. Design/methodology/approach - A sample of 241 Brazilian companies was surveyed using Web-based questionnaires with 54 questions, using ten-point scales to measure the degree of agreement on each item of each construct. Structural equation modeling techniques were applied for model assessment and analysis of the relationships among constructs. Exploratory factor analysis, confirmatory factor analysis, and path analysis using structural equation modeling were applied to the data. Findings - Effective KM contributes positively to strategic orientation. Although there is no significant direct effect of KM on innovativeness, the relationship is significant when mediated by strategic orientation. Similarly, effective KM has no direct effect on business performance, but this relationship becomes statistically significant when mediated by strategic orientation and innovativeness. Research limitations/implications - The findings indicate that KM permeates all relationships among the constructs, corroborating the argument that knowledge is an essential organizational resource that leverages all value-creating activities. The results indicate that both KM and innovativeness produce significant impacts on performance when they are aligned with a strategic orientation that enables the organization to anticipate and respond to changing market conditions. Originality/value - There is a substantial body of research on several types of relationships involving KM, strategic orientation, innovativeness, and performance. This study offers an original contribution by analyzing all of those constructs simultaneously using established scales so that comparative studies are possible.
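As a simplified illustration of the mediation pattern reported in the findings (not the study's actual SEM analysis), the sketch below uses plain regressions on synthetic construct scores to show how a KM effect on performance can vanish once the mediators are controlled for; all data and coefficients are assumptions.

```python
# Simplified regression-based mediation check (KM -> strategic orientation ->
# innovativeness -> performance). Synthetic data; not full SEM.
import numpy as np

rng = np.random.default_rng(42)
n = 241                                   # sample size reported in the study

# Synthetic standardized construct scores with a mediated structure.
km = rng.standard_normal(n)
strategic = 0.6 * km + rng.standard_normal(n) * 0.8
innov = 0.5 * strategic + rng.standard_normal(n) * 0.8
perf = 0.5 * strategic + 0.4 * innov + rng.standard_normal(n) * 0.8

def beta(y, *xs):
    """OLS coefficients of y on the predictors xs (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

direct = beta(perf, km)[0]                        # effect of KM alone
controlled = beta(perf, km, strategic, innov)[0]  # KM effect with mediators
print(f"KM -> performance alone: {direct:.2f}, "
      f"with mediators controlled: {controlled:.2f}")
# A markedly smaller controlled coefficient is consistent with mediation.
```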
Abstract:
Based on the literature, this article presents the "participant observation" research protocol and its practical application in the industrial engineering field, more specifically in the area of design development and, in the case shown in this article, interior design. The main goal is to characterize the method, i.e., from its characteristics to build a general understanding of the subject, so that the protocol can be used in different areas of knowledge, especially those committed to scientific research that combines researchers' expertise with the subjective feelings and opinions of the users of an engineering product, and to show how this knowledge can benefit product design, contributing from the earliest stage of design.
Abstract:
The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which has now reached all levels of society. With such pressure for access to communication there is increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks. The topic introduces a huge number of problems, from the physical layer up to the transport layer. Here the subject is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been explored in these years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Several national and international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are intended to be implemented in a long-haul transport network that connects local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion, and node architecture. Characteristics that a network must guarantee, such as quality of service and resilience, are also explored at both the node and network levels. Results are mainly evaluated via simulation and through analysis.
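As a small illustration of one of the issues listed above, the sketch below simulates contention resolution by full wavelength conversion on a single bufferless output fiber with asynchronous, variable-length packets; the traffic model and parameters are assumptions, not the thesis's simulation setup.

```python
# Minimal sketch: contention resolution on one output fiber of an optical
# packet switch, using full wavelength conversion. Illustrative parameters.
import random

random.seed(1)
W = 8                      # wavelengths per output fiber
busy_until = [0.0] * W     # time at which each wavelength becomes free
t, lost, total = 0.0, 0, 20000

for _ in range(total):
    t += random.expovariate(6.0)           # asynchronous arrivals
    length = random.expovariate(1.0)       # variable packet length
    arriving_wl = random.randrange(W)      # wavelength the packet arrives on
    if busy_until[arriving_wl] <= t:
        busy_until[arriving_wl] = t + length
    else:
        # Contention: try to convert to any free wavelength.
        free = [w for w in range(W) if busy_until[w] <= t]
        if free:
            busy_until[free[0]] = t + length
        else:
            lost += 1                      # no buffer: packet dropped

print(f"packet loss probability ~ {lost / total:.4f}")
```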
Abstract:
This research investigated some of the main problems connected to the application of Tissue Engineering in the prosthetic field, in particular the characterization of scaffold materials and the biomimetic strategies adopted to promote implant integration. Spectroscopic and thermal analysis techniques were usefully applied to characterize the chemico-physical properties of the materials, such as crystallinity; relative composition in the case of composite materials; structure and conformation of polymeric and peptidic chains; degradation mechanism and rate; and intramolecular and intermolecular interactions (hydrogen bonds, aliphatic interactions). This kind of information is of great importance for understanding the interactions that a scaffold undergoes when it is in contact with biological tissues; it is fundamental for predicting biodegradation mechanisms and for understanding how chemico-physical properties change during the degradation process. In order to fully characterize biomaterials, these findings must be integrated with information on mechanical aspects and on in vitro and in vivo behavior, thanks to collaborations with biomedical engineers and biologists. This study focused on three different systems that correspond to three different strategies adopted in Tissue Engineering: a biomimetic replica of the fibrous 3-D structure of the extracellular matrix (PCL-PLLA), incorporation of an apatitic phase similar to the bone inorganic phase to promote biomineralization (PCL-HA), and surface modification with synthetic oligopeptides that elicit interaction with osteoblasts. The characterization of the PCL-PLLA composite showed that degradation starts along the PLLA fibres, which are more hydrophilic and serve as a guide for tissue regeneration. Moreover, it was found that some cell lines are more active in the colonization of the scaffold. In the PCL-HA composite, the weight ratio between the polymeric and the inorganic phase plays an essential role both in the degradation process and in the biomineralization of the material. The study of self-assembling peptides allowed us to clarify the influence of primary structure on intramolecular and intermolecular interactions, which lead to the formation of the secondary structure, and it was possible to find a new class of oligopeptides useful for functionalizing material surfaces. Among the analytical techniques used in this study, Raman vibrational spectroscopy played a major role, being non-destructive and non-invasive, two properties that make it suitable for degradation studies and for morphological characterization. Micro-IR spectroscopy was also useful for understanding peptide structure on oxidized titanium: to date, this study is one of the first to employ this relatively new technique in the biomedical field.
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all nodes should be able not only to request services but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, consisting in having each peer periodically try to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithm is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy points of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation-formation point of view. The final step is to apply our results to more realistic scenarios. We focused our efforts on studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and to tackle free-riding and cheating nodes.
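The sketch below illustrates the core SLAC-style evolutionary step described above, low-performing peers copying the strategy and links of better-performing ones with occasional mutation, on a toy Prisoner's Dilemma network; payoffs, parameters, and the rewiring rule are assumptions and do not reproduce the exact SLAC/SLACER specification.

```python
# Sketch of a SLAC-style evolutionary step on a toy Prisoner's Dilemma network.
import random

random.seed(0)
N, ROUNDS, MUT = 200, 20000, 0.01
T, R, P, S = 1.9, 1.0, 0.1, 0.0          # Prisoner's Dilemma payoffs

nodes = [{"coop": random.random() < 0.1, "links": set()} for _ in range(N)]
for i in range(N):                        # random initial topology
    nodes[i]["links"].update(random.sample(range(N), 4))
    nodes[i]["links"].discard(i)

def utility(i):
    """Average PD payoff of node i against its current neighbours."""
    u, links = 0.0, nodes[i]["links"]
    for j in links:
        mine, theirs = nodes[i]["coop"], nodes[j]["coop"]
        u += R if mine and theirs else S if mine else T if theirs else P
    return u / max(len(links), 1)

for _ in range(ROUNDS):
    i, j = random.sample(range(N), 2)
    if utility(j) > utility(i):           # copy the better-performing peer
        nodes[i]["coop"] = nodes[j]["coop"]
        nodes[i]["links"] = set(nodes[j]["links"]) | {j}
        nodes[i]["links"].discard(i)
    if random.random() < MUT:             # occasional strategy mutation
        nodes[i]["coop"] = not nodes[i]["coop"]

print("cooperators:", sum(n["coop"] for n in nodes), "of", N)
```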
Abstract:
Self-organisation is increasingly being regarded as an effective approach to tackle modern systems complexity. The self-organisation approach allows the development of systems exhibiting complex dynamics and adapting to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by different principles with respect to traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but produce predictable results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external perturbations, are quite sensitive to changes in inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multiagent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. Then, we describe an architectural pattern that has been extracted from a recurrent solution in designing self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in Nature and, hopefully, encoded as a design pattern. Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to exactly characterise the system behaviours. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis, we describe only three of them, i.e. the most representative ones for each of the three years of the PhD course. We analyse each case study using the presented method and describe the formal tools and techniques exploited.
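As an abstract illustration of the simulate-and-tune stages of the process described above, the sketch below repeatedly simulates a toy self-organising consensus model and tunes a working parameter (the noise level) until a target global property is met; the model, thresholds, and tuning rule are assumptions.

```python
# Toy simulate-and-tune loop for a self-organising consensus model.
import random

def simulate(noise: float, n_agents: int = 50, steps: int = 10000) -> bool:
    """Return True if the population reaches >90% agreement on one opinion."""
    opinions = [random.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if random.random() < noise:
            opinions[i] = random.randint(0, 1)      # random perturbation
        else:
            opinions[i] = opinions[j]               # local imitation
    share = sum(opinions) / n_agents
    return share > 0.9 or share < 0.1

def consensus_rate(noise: float, runs: int = 50) -> float:
    """Estimate, by simulation, how often the global property emerges."""
    return sum(simulate(noise) for _ in range(runs)) / runs

# Tuning stage: lower the noise until the simulated consensus rate meets the
# target quality property (here, 95% of runs reaching near-consensus).
noise = 0.20
while consensus_rate(noise) < 0.95 and noise > 0.001:
    noise *= 0.5
print(f"tuned noise level: {noise:.4f}")
```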
Abstract:
The meccano method is a novel and promising mesh generation technique for simultaneously creating adaptive tetrahedral meshes and volume parameterizations of a complex solid. The method combines several former procedures: a mapping from the meccano boundary to the solid surface, a 3-D local refinement algorithm, and simultaneous mesh untangling and smoothing. In this paper we present the main advantages of our method over other standard mesh generation techniques. We show that our method constructs meshes that can be locally refined using the Kossaczky bisection rule while maintaining high mesh quality. Finally, we generate a volume T-mesh for isogeometric analysis, based on the volume parameterization obtained by the method…
Abstract:
A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Hence knowledge is both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories. We think that the P2P paradigm fits well with organization problems related to all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: - Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers; - Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them; - Self-organization: it refers to the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order. These characteristics are also present in the kind of firm that we try to address, and that is why we have transferred the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
Abstract:
In the last decade, the demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can put the economic system of a region, as well as of a country, in serious jeopardy. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the security performance of civil structures after tragic events such as 9/11 or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes, and terrorist attacks. After events of such a kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it appears very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for a correct and immediate rehabilitation. System identification is a branch of the more general control theory. In civil engineering, this field addresses the techniques needed to find mechanical characteristics, such as stiffness or mass, starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to define, starting from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. The knowledge of these parameters is helpful in the model updating procedure, which permits the definition of corrected theoretical models through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data. Therefore, the new model becomes a very effective control practice when it comes to rehabilitation of structures or damage assessment. Instrumenting a whole structure is sometimes unfeasible because of the high cost involved or because it is not possible to physically reach every point of the structure. Therefore, numerous scholars have been trying to address this problem. In general, two main methods are involved. Given the limited number of sensors, in the first case it is possible to gather time histories only for some locations, then move the instruments to another location and repeat the procedure. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the first principal modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the required functions. The objective of this work is to present a general methodology to analyze large structures using a limited amount of instrumentation and, at the same time, to obtain as much information as possible about the identified structure without resorting to methodologies that are difficult to interpret. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab.
Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for a prolific use of substructuring results is developed and implemented.
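As a minimal illustration of the ERA step of the OKID/ERA procedure mentioned above (sketched here in Python, whereas the thesis implementation is in Matlab), the code below recovers the modal frequency and damping of a toy single-mode system from its impulse-response Markov parameters via Hankel-matrix SVD; the toy system and all parameter values are assumptions.

```python
# Minimal ERA sketch: identify modal parameters from Markov parameters.
import numpy as np

dt, wn, zeta = 0.01, 2 * np.pi * 3.0, 0.02      # 3 Hz mode, 2% damping
wd = wn * np.sqrt(1 - zeta**2)

# Markov parameters: sampled impulse response of the toy oscillator.
k = np.arange(1, 401)
Y = np.exp(-zeta * wn * k * dt) * np.sin(wd * k * dt) / wd

# Hankel matrices H0 and H1 (scalar input/output, so plain Hankel).
r = 100
H0 = np.array([[Y[i + j] for j in range(r)] for i in range(r)])
H1 = np.array([[Y[i + j + 1] for j in range(r)] for i in range(r)])

# Singular value decomposition and truncation to model order n.
U, s, Vt = np.linalg.svd(H0)
n = 2                                            # one vibration mode
Sr = np.diag(np.sqrt(s[:n]))
Ur, Vr = U[:, :n], Vt[:n, :].T

# Realized discrete-time state matrix and its continuous-time poles.
A = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr @ np.linalg.inv(Sr)
poles = np.log(np.linalg.eigvals(A)) / dt
freq_hz = np.abs(poles) / (2 * np.pi)
damping = -poles.real / np.abs(poles)
print(freq_hz, damping)                          # ≈ 3 Hz, ≈ 2% damping
```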
Abstract:
The aims of this research were: to identify the characteristics, properties, and provenance of the building and decorative materials found in three Hungarian Roman sites (Nagyharsány, Nemesvámos-Balácapuszta and Aquincum); to provide a database of information on the different sites; and to provide an overview of the main conservation strategies applied in Hungary. Geological studies, macroscopical and microscopical observations, XRD investigations, and physical and chemical analyses allowed us to define the characteristics and properties of the different kinds of collected materials. Building stones sampled from the Nagyharsány site showed two different kinds of massive limestone belonging to the areas surrounding the villa. Building stones sampled from the Nemesvámos-Balácapuszta Roman villa also proved to be compatible with limestone from local sources. Mural painting fragments show that all samples are units composed of multilayered structures. Mosaic tesserae can be classified as follows: pale yellow, blackish, and pink tesserae are comparable with local limestone; the white tessera, composed of marble, was probably imported from distant regions of the Empire, as was the usual practice of the Romans. Mortars present different characteristics according to the age, the site, and the function: building mortars are generally lime based, white or pale yellow in colour, and present a high percentage of aggregates represented by fine sand; supporting mortars from both mosaics and mural paintings are reddish or pinkish in colour, due to the presence of a high percentage of brick dust and tile fragments, and present a higher content of MgO. Despite the condition of the sites, the content of soluble salts is insignificant. The whole study has allowed us to provide worksheets for each sample, including all characteristics and properties. Furthermore, all sites included in the frame of the research have been described and illustrated on the basis of their floor plans, materials, and construction methodologies. It can be concluded that: 1. at the Nagyharsány archaeological site, it is possible to define a sequence of different construction phases on the basis of the study of building materials and mortars, and the results are comparable with the chronology of the site provided by the archaeologists; 2. the materials used for construction were of local origin, while the more precious ones, used for decorative elements, were probably imported from long distances; 3. construction techniques in Hungary mainly follow the usual Roman knowledge and practice (Vitruvius), with few differences found; 4. the database will represent an archive for archaeologists, historians, and conservators dealing with the Roman period in Hungary.