920 results for non-polluting systems


Relevance: 30.00%

Abstract:

OBJECTIVE: To describe and compare three alternative methods for controlling classical friction: self-ligating brackets (SLB), special brackets (SB) and special elastomeric ligatures (SEB). METHODS: The study compared the Damon MX, Smart Clip, In-Ovation and Easy Clip self-ligating bracket systems, the special Synergy brackets and Morelli's twin bracket with special 8-shaped elastomeric ligatures. New and used Morelli brackets with new and used elastomeric ligatures served as controls. All brackets had 0.022 x 0.028-in slots. 0.014-in nickel-titanium and 0.019 x 0.025-in stainless steel wires were tied to first premolar steel brackets using each archwire ligation method and pulled by an Instron machine at a speed of 0.5 mm/minute. Prior to the mechanical tests, binding in the device was ruled out. Statistical analysis consisted of the Kruskal-Wallis test and multiple non-parametric analyses at a 1% significance level. RESULTS: When a 0.014-in archwire was employed, all ligation methods exhibited classical friction forces close to zero, except the Morelli brackets with new and used elastomeric ligatures, which displayed 64 and 44 centinewtons (cN), respectively. When a 0.019 x 0.025-in archwire was employed, all ligation methods exhibited values close to zero, except the In-Ovation brackets, which yielded 45 cN, and the Morelli brackets with new and used elastomeric ligatures, which displayed 82 and 49 cN, respectively. CONCLUSIONS: The Damon MX, Easy Clip, Smart Clip and Synergy bracket systems and the 8-shaped ligatures proved to be equally effective alternatives for controlling classical friction with 0.014-in nickel-titanium and 0.019 x 0.025-in steel archwires, while the In-Ovation was efficient with 0.014-in archwires but, with 0.019 x 0.025-in archwires, exhibited friction similar to that of conventional brackets with used elastomeric ligatures.
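
The statistical comparison described above (Kruskal-Wallis followed by multiple non-parametric tests at a 1% significance level) can be reproduced on friction measurements with standard tools. The sketch below is only an illustration: the force values are invented, and pairwise Mann-Whitney tests with a Bonferroni correction are used as a stand-in, since the abstract does not name the specific post-hoc procedure.

    # Hypothetical sketch of the reported analysis: Kruskal-Wallis across ligation
    # methods, then Bonferroni-corrected pairwise Mann-Whitney tests (alpha = 0.01).
    # The force values (in cN) are invented placeholders, not the study's data.
    from itertools import combinations
    from scipy.stats import kruskal, mannwhitneyu

    friction_cN = {
        "Damon MX":        [0.2, 0.1, 0.3, 0.0, 0.2],
        "Synergy":         [0.4, 0.3, 0.5, 0.2, 0.3],
        "Morelli + new 8": [63.0, 66.1, 62.8, 65.4, 64.2],
    }

    groups = list(friction_cN.values())
    h_stat, p_global = kruskal(*groups)
    print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.4f}")

    pairs = list(combinations(friction_cN, 2))
    alpha = 0.01 / len(pairs)          # Bonferroni correction
    for a, b in pairs:
        _, p = mannwhitneyu(friction_cN[a], friction_cN[b], alternative="two-sided")
        print(f"{a} vs {b}: p = {p:.4f} -> {'different' if p < alpha else 'n.s.'}")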

Relevance: 30.00%

Abstract:

The aim of this work is to study the features of a simple replicator chemical model relating kinetic stability and entropy production under the action of external perturbations. We quantitatively explore the different paths leading to evolution in a toy model where two independent replicators compete for the same substrate. To do so, the scenario originally described by Pross (J Phys Org Chem 17:312–316, 2004) is revisited and new criteria for defining kinetic stability are proposed. Our results suggest that fast-replicator populations are continually favored by strong stochastic environmental fluctuations capable of determining the global population, these fluctuations being assumed to be the only evolutionary force at work. We demonstrate that the process is driven by strong perturbations only, and that population crashes may be useful proxies for such catastrophic environmental fluctuations. As expected, this behavior is particularly enhanced under very large-scale perturbations, suggesting a likely dynamical footprint in the recovery patterns of new species after mass extinction events in the Earth's geological past. Furthermore, the hypothesis that natural selection always favors the faster processes may give theoretical support to studies claiming the applicability of maximum principles such as Maximum Metabolic Flux (MMF) or the Maximum Entropy Production Principle (MEPP), seen as the main goal of biological evolution.
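
To illustrate the kind of toy model discussed here, the sketch below simulates two replicators consuming a shared substrate, with rare random population crashes standing in for strong environmental perturbations. All rate constants and the crash rule are invented for the example; they are not the parameters used in the study.

    # Toy competition between two replicators X1, X2 feeding on a common substrate S:
    # dXi/dt = ki*S*Xi - d*Xi, with the substrate replenished at a constant rate.
    # Rare "catastrophes" wipe out most of both populations; the faster replicator
    # (larger ki) dominates each recovery. Parameters are illustrative only.
    import random

    k1, k2, d = 1.0, 0.6, 0.1      # replication and decay rates (assumed)
    feed, dt = 1.0, 0.01
    S, X1, X2 = 10.0, 1.0, 1.0

    random.seed(0)
    for step in range(200_000):
        growth1, growth2 = k1 * S * X1, k2 * S * X2
        X1 += (growth1 - d * X1) * dt
        X2 += (growth2 - d * X2) * dt
        S += (feed - growth1 - growth2) * dt
        S = max(S, 0.0)
        if random.random() < 1e-4:          # rare catastrophic perturbation
            X1 *= 0.01
            X2 *= 0.01

    print(f"X1 = {X1:.3g}, X2 = {X2:.3g}  (the faster replicator X1 dominates)")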

Relevance: 30.00%

Abstract:

Methods: We conducted a phase I, multicenter, randomized, double-blind, placebo-controlled, multi-arm (10 arms) parallel study involving healthy adults to evaluate the safety and immunogenicity of non-adjuvanted and adjuvanted 2009 influenza A (H1N1) candidate vaccines. Subjects received two intramuscular injections of one of the candidate vaccines, administered 21 days apart. Antibody responses were measured by hemagglutination-inhibition assay before and 21 days after each vaccination. The three co-primary immunogenicity endpoints were a seroprotection rate >70%, a seroconversion rate >40%, and a factor increase in the geometric mean titer >2.5. Results: A total of 266 participants were enrolled in the study. No deaths or serious adverse events were reported. The most commonly solicited local and systemic adverse events were injection-site pain and headache, respectively. Only three subjects (1.1%) reported severe injection-site pain. Four inactivated monovalent 2009 influenza A (H1N1) candidate vaccines that met the three immunogenicity requirements for influenza protection after a single dose were identified: 15 μg of hemagglutinin antigen without adjuvant; 7.5 μg of hemagglutinin antigen with aluminum hydroxide, MPL and squalene; 3.75 μg of hemagglutinin antigen with aluminum hydroxide and MPL; and 3.75 μg of hemagglutinin antigen with aluminum hydroxide and squalene. Conclusions: Adjuvant systems can be safely used in influenza vaccines, including monophosphoryl lipid A (MPL) derived from Bordetella pertussis with squalene and aluminum hydroxide, MPL with aluminum hydroxide, and squalene with aluminum hydroxide.
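
The three co-primary endpoints can be computed directly from paired pre/post hemagglutination-inhibition titers. The sketch below applies thresholds commonly used for influenza vaccines (seroprotection as a titer ≥ 1:40, seroconversion as at least a four-fold rise reaching ≥ 1:40); the abstract does not state its exact definitions, and the titer values are invented.

    # Hypothetical endpoint calculation from paired HI titers (reciprocal dilutions).
    # The thresholds follow common practice for influenza immunogenicity and are
    # assumptions here, not values taken from the study.
    import numpy as np

    pre  = np.array([10, 20, 10, 40, 10, 20, 10, 10])     # invented pre-vaccination titers
    post = np.array([80, 160, 20, 320, 40, 80, 40, 160])  # invented day-21 titers

    seroprotection = np.mean(post >= 40)                        # fraction with titer >= 1:40
    seroconversion = np.mean((post / pre >= 4) & (post >= 40))  # >= 4-fold rise to >= 1:40
    gmt_ratio = np.exp(np.mean(np.log(post)) - np.mean(np.log(pre)))

    print(f"seroprotection rate : {seroprotection:.0%}  (criterion: > 70%)")
    print(f"seroconversion rate : {seroconversion:.0%}  (criterion: > 40%)")
    print(f"GMT fold increase   : {gmt_ratio:.2f} (criterion: > 2.5)")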

Relevance: 30.00%

Abstract:

Topologies of motor drive systems are studied with the aim of reducing common-mode (CM) currents. Initially, the aspects concerning CM current circulation are analysed: the origin of the common-mode voltages, the circulation paths for the resulting CM currents and their effects are discussed. Then, a non-conventional drive system configuration is proposed in order to reduce the CM currents and their effects. This configuration comprises a non-conventional inverter module wired to a motor with an unusual connection; the cable arrangement also differs from the standard solution. The proposed topology is compared with other solutions, such as an active circuit for common-mode voltage compensation. The contribution of the configuration to the reduction of CM voltages and currents, and of their related interference, is evaluated based on numerical simulations. Results are presented and discussed regarding the suitability of the proposed configuration as a potential solution for reducing CM current effects when the state of the art and the implementation cost of drives are taken into account.
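
For a standard two-level three-phase inverter, the common-mode voltage discussed above is simply the average of the three pole voltages; because each leg switches between the DC-rail potentials, this average is never held at zero and steps in increments of Vdc/3. The short sketch below illustrates that mechanism for the eight switching states of a generic inverter (a textbook illustration, not the specific circuit studied in the paper; the DC-link value is assumed).

    # Common-mode voltage of a generic two-level three-phase inverter, measured from
    # the DC-link midpoint: v_cm = (v_a + v_b + v_c) / 3, with each pole voltage at
    # +Vdc/2 or -Vdc/2 depending on the leg's switching state. Illustrative only.
    from itertools import product

    VDC = 400.0  # assumed DC-link voltage [V]

    for state in product((0, 1), repeat=3):            # (S_a, S_b, S_c)
        poles = [VDC / 2 if s else -VDC / 2 for s in state]
        v_cm = sum(poles) / 3
        print(f"switch state {state}: v_cm = {v_cm:7.1f} V")
    # v_cm jumps between +-Vdc/6 and +-Vdc/2; these voltage steps, applied across
    # parasitic capacitances (winding-to-frame, cable-to-ground), drive the CM currents.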

Relevance: 30.00%

Abstract:

In molecular and atomic devices the interaction between electrons and ionic vibrations plays an important role in electronic transport. The electron-phonon coupling can cause the loss of the electron's phase coherence, the opening of new conductance channels and the suppression of purely elastic ones. From the technological viewpoint, phonons may limit the efficiency of electronic devices through energy dissipation, causing heating, power loss and instability. The state of the art in electron transport calculations consists of combining ab initio calculations via Density Functional Theory (DFT) with the Non-Equilibrium Green's Function (NEGF) formalism. In order to include electron-phonon interactions, one needs in principle to include a scattering self-energy term in the open-system Hamiltonian which takes into account the effect of the phonons on the electrons and vice versa. In practice, this term can be obtained approximately by perturbative methods. In the First Born Approximation one considers only the first-order terms of the expansion of the electronic Green's function. In the Self-Consistent Born Approximation, the interaction self-energy is calculated with the perturbed electronic Green's function in a self-consistent way. In this work we describe how to incorporate the electron-phonon interaction into the SMEAGOL program (Spin and Molecular Electronics in Atomically Generated Orbital Landscapes), an ab initio code for electronic transport based on the combination of DFT and NEGF. This provides a tool for calculating the transport properties of material-specific systems, particularly in molecular electronics. Preliminary results are presented, showing the effects produced by the electron-phonon interaction in nanoscale devices.
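
The self-consistent structure described above can be illustrated on a toy tight-binding chain. The sketch below iterates a drastically simplified, elastic "dephasing-like" self-energy Σ = λ²G together with the device Green's function; it only shows the shape of the self-consistent loop. A real SCBA implementation, such as the one added to SMEAGOL, instead couples lesser/greater Green's functions evaluated at energies shifted by ±ħω and uses the DFT-derived electron-phonon matrices. The chain size, couplings and λ below are assumptions made for the illustration.

    # Toy self-consistent Born-style loop on a 1D tight-binding chain coupled to two
    # semi-infinite leads (wide-band approximation). sigma_ph = lam**2 * G is a crude
    # elastic stand-in for the phonon self-energy; it demonstrates the loop structure only.
    import numpy as np

    N, t, lam, E, eta = 6, -1.0, 0.3, 0.2, 1e-6
    H = t * (np.eye(N, k=1) + np.eye(N, k=-1))          # device Hamiltonian

    sigma_leads = np.zeros((N, N), dtype=complex)       # wide-band lead self-energies
    sigma_leads[0, 0] = sigma_leads[-1, -1] = -0.5j

    sigma_ph = np.zeros((N, N), dtype=complex)
    for it in range(200):
        G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - sigma_leads - sigma_ph)
        new_sigma = lam**2 * G                           # lowest-order (Born-like) term
        if np.max(np.abs(new_sigma - sigma_ph)) < 1e-10: # converged
            break
        sigma_ph = 0.5 * sigma_ph + 0.5 * new_sigma      # simple mixing for stability

    gamma_L = np.zeros((N, N)); gamma_L[0, 0] = 1.0      # lead broadening matrices
    gamma_R = np.zeros((N, N)); gamma_R[-1, -1] = 1.0
    T = np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real
    print(f"coherent transmission at E = {E}: T = {T:.4f} (after {it + 1} iterations)")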

Relevance: 30.00%

Abstract:

Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Two further challenges are faced here: designing SR plans for larger systems that are as good as those obtained for relatively smaller ones, and SR plans for multiple faults that are as good as those obtained for a single fault. In order to tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN and a new heuristic. This heuristic concentrates the application of the NDE operators on the network zones under alarm, according to the technical constraints. The method generates SR plans of similar quality for distribution systems of significantly different sizes (from 3860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
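
The Pareto-based selection at the core of both NSGA-N and MEAN rests on sorting candidate SR plans into non-dominated fronts under several minimization objectives (for instance, number of switching operations and amount of unsupplied load). The sketch below shows a plain non-dominated sort; it is a generic NSGA-II-style building block with invented objective values, not the NDE-specific operators or constraint handling used in the paper.

    # Generic non-dominated sorting for minimization objectives, as used in
    # NSGA-II-style algorithms. Each solution is a tuple of objective values
    # (e.g., (#switching operations, unsupplied load)); the values are invented.
    from typing import List, Tuple

    def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
        """a dominates b if it is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated_sort(points: List[Tuple[float, ...]]) -> List[List[int]]:
        remaining = set(range(len(points)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
            fronts.append(front)
            remaining -= set(front)
        return fronts

    plans = [(4, 120.0), (6, 80.0), (5, 90.0), (4, 130.0), (7, 80.0)]
    print(non_dominated_sort(plans))   # front 0 holds the non-dominated SR plans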

Relevance: 30.00%

Abstract:

The wide use of e-technologies represents a great opportunity for underserved segments of the population, especially with the aim of reintegrating excluded individuals back into society through education. This is particularly true for people with different types of disabilities who may have difficulties while attending traditional on-site learning programs, which are typically based on printed learning resources. The creation and provision of accessible e-learning contents may therefore become a key factor in enabling people with different access needs to enjoy quality learning experiences and services. Another e-learning challenge is represented by m-learning (mobile learning), which is emerging as a consequence of the diffusion of mobile terminals and provides the opportunity to browse didactical materials everywhere, outside the places traditionally devoted to education. Both situations share the need to access materials under limited conditions, and both collide with the growing use of rich media in didactical contents, which are designed to be enjoyed without any restriction.

Nowadays, Web-based teaching makes great use of multimedia technologies, ranging from Flash animations to prerecorded video-lectures. Rich media in e-learning can offer significant potential in enhancing the learning environment, by helping to increase access to education, enhancing the learning experience and supporting multiple learning styles. Moreover, they can often be used to improve the structure of Web-based courses. These highly variegated and structured contents may significantly improve the quality and the effectiveness of educational activities for learners. For example, rich media contents allow us to describe complex concepts and process flows. Audio and video elements may be utilized to add a “human touch” to distance-learning courses. Finally, real lectures may be recorded and distributed to integrate or enrich online materials. A confirmation of the advantages of these approaches can be seen in the exponential growth of video-lecture availability on the net, due to the ease of recording and delivering activities which take place in a traditional classroom.

Furthermore, the wide use of assistive technologies for learners with disabilities injects new life into e-learning systems. E-learning allows distance and flexible educational activities, thus helping disabled learners to access resources which would otherwise present significant barriers for them. For instance, students with visual impairments have difficulties in reading traditional visual materials, deaf learners have trouble in following traditional (spoken) lectures, and people with motion disabilities have problems in attending on-site programs. As already mentioned, the use of wireless technologies and pervasive computing may really enhance the educational learner experience by offering mobile e-learning services that can be accessed by handheld devices. This new paradigm of educational content distribution maximizes the benefits for learners, since it enables users to overcome constraints imposed by the surrounding environment. While certainly helpful for users without disabilities, we believe that the use of new mobile technologies may also become a fundamental tool for impaired learners, since it frees them from sitting in front of a PC. In this way, educational activities can be enjoyed by all users, without hindrance, thus increasing the social inclusion of non-typical learners.
While the provision of fully accessible and portable video-lectures may be extremely useful for students, it is widely recognized that structuring and managing rich media contents for mobile learning services are complex and expensive tasks. Indeed, major difficulties originate from the basic need to provide a textual equivalent for each media resource composing a rich media Learning Object (LO). Moreover, tests need to be carried out to establish whether a given LO is fully accessible to all kinds of learners. Unfortunately, both these tasks are truly time-consuming processes, depending on the type of contents the teacher is writing and on the authoring tool he/she is using. Due to these difficulties, online LOs are often distributed as partially accessible or totally inaccessible content. Bearing this in mind, this thesis aims to discuss the key issues of a system we have developed to deliver accessible, customized or nomadic learning experiences to learners with different access needs and skills. To reduce the risk of excluding users with particular access capabilities, our system exploits Learning Objects (LOs) which are dynamically adapted and transcoded based on the specific needs of non-typical users and on the barriers that they can encounter in the environment. The basic idea is to dynamically adapt contents, by selecting them from a set of media resources packaged in SCORM-compliant LOs and stored in a self-adapting format. The system schedules and orchestrates a set of transcoding processes based on specific learner needs, so as to produce a customized LO that can be fully enjoyed by any (impaired or mobile) student.
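
A minimal sketch of the adaptation step described above is given below: given a learner profile and the media resources packaged in a LO, it either delivers a resource as-is or schedules a transcoding so that the learner has a usable equivalent. All class names, resource kinds and the transcoding table are hypothetical; the actual system works on SCORM-compliant packages and real transcoding services.

    # Hypothetical sketch of profile-driven selection/transcoding of LO resources.
    # Resource kinds, profiles and the transcoder table are invented for illustration.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Resource:
        name: str
        kind: str          # "video", "audio", "text", "captions", ...

    # which resource kinds a given access profile can use directly (assumed)
    USABLE = {
        "blind":  {"audio", "text"},      # text is assumed to go to a screen reader
        "deaf":   {"video", "text", "captions"},
        "mobile": {"text", "audio"},      # assumed low-bandwidth condition
    }
    # assumed transcodings available to the scheduler: source kind -> produced kind
    TRANSCODERS = {("video", "audio"), ("video", "captions"), ("audio", "text")}

    def adapt(lo: List[Resource], profile: str) -> List[str]:
        """Return a plan: resources delivered as-is or scheduled for transcoding."""
        plan = []
        for res in lo:
            if res.kind in USABLE[profile]:
                plan.append(f"deliver {res.name} as {res.kind}")
            else:
                targets = [dst for src, dst in TRANSCODERS
                           if src == res.kind and dst in USABLE[profile]]
                plan.append(f"transcode {res.name}: {res.kind} -> {targets[0]}"
                            if targets else f"no accessible equivalent for {res.name}")
        return plan

    lecture = [Resource("lesson1.mp4", "video"), Resource("notes.html", "text")]
    print(adapt(lecture, "blind"))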

Relevance: 30.00%

Abstract:

Nandrolone and other anabolic androgenic steroids (AAS) at elevated concentrations can alter the expression and function of neurotransmitter systems and contribute to neuronal cell death. These effects may explain the behavioural changes, drug dependence and neurodegeneration observed in steroid abusers. Nandrolone treatment (10⁻⁸ M–10⁻⁵ M) caused a time- and concentration-dependent downregulation of mu opioid receptor (MOPr) transcripts in SH-SY5Y human neuroblastoma cells. This effect was prevented by the androgen receptor (AR) antagonist hydroxyflutamide. Receptor binding assays confirmed a decrease in MOPr of approximately 40% in nandrolone-treated cells. Treatment with actinomycin D (10⁻⁵ M), a transcription inhibitor, revealed that nandrolone may regulate MOPr mRNA stability. In SH-SY5Y cells transfected with a human MOPr luciferase promoter/reporter construct, nandrolone did not alter the rate of gene transcription. These results suggest that nandrolone may regulate MOPr expression through post-transcriptional mechanisms requiring the AR. Cytotoxicity assays demonstrated a time- and concentration-dependent decrease in cell viability in SH-SY5Y cells exposed to steroids (10⁻⁶ M–10⁻⁴ M). This toxic effect is independent of the activation of the AR and the sigma-2 receptor. An increase in caspase-3 activity was observed in cells treated with nandrolone 10⁻⁶ M for 48 h. Collectively, these data support the existence of two cellular mechanisms that might explain the neurological syndromes observed in steroid abusers.

Relevance: 30.00%

Abstract:

This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, the fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analyses and introducing parametric methods of estimation. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is of course non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we have been able to provide a characterization that is independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process and, in particular, that it generalizes Brownian motion and fractional Brownian motion as well. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which has been made non-local in time by the introduction of a suitably chosen memory kernel K(t).
The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t)=X(l(t)) where X(t) is a Markovian diffusion. The corresponding time-evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t). We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
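
As a concrete illustration of the non-Markovian Gaussian processes mentioned in the first part, the sketch below generates fractional Gaussian noise from its exact autocovariance (via a Cholesky factorization) and sums it into a fractional Brownian motion path. The Hurst exponent and path length are arbitrary choices for the example; this is not the estimation or simulation machinery used in the thesis.

    # Exact (Cholesky-based) simulation of fractional Gaussian noise (fGn) and of the
    # corresponding fractional Brownian motion (fBm) path. H > 0.5 gives long-range
    # dependence. Parameters are illustrative.
    import numpy as np

    def fgn_covariance(k: np.ndarray, H: float) -> np.ndarray:
        """Autocovariance of unit-variance fGn at (integer) lag k."""
        k = np.abs(k).astype(float)
        return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

    def simulate_fbm(n: int, H: float, rng: np.random.Generator) -> np.ndarray:
        lags = np.arange(n)
        cov = fgn_covariance(lags[None, :] - lags[:, None], H)   # Toeplitz covariance
        L = np.linalg.cholesky(cov)
        fgn = L @ rng.standard_normal(n)
        return np.concatenate(([0.0], np.cumsum(fgn)))           # B_H(0) = 0

    rng = np.random.default_rng(42)
    path = simulate_fbm(n=1024, H=0.8, rng=rng)
    print(f"fBm with H = 0.8: {path.shape[0]} points, B_H(T) = {path[-1]:.3f}")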

Relevance: 30.00%

Abstract:

Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is the lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and augmenting system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them under some conditions. What are the conditions under which a norm is binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events do not all occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
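
The defeasibility requirement can be made concrete with a very small rule engine in the style of defeasible logic: a defeasibly derived conclusion holds only as long as no superior conflicting rule becomes applicable. The rules, facts and superiority relation below are hypothetical and far simpler than the temporal modal defeasible logic developed in the dissertation.

    # Minimal defeasible-reasoning sketch: a conclusion derived from a defeasible rule
    # is retracted when a superior conflicting rule becomes applicable.
    # Rules: (name, antecedents, conclusion); "-p" denotes the negation of "p".
    RULES = [
        ("r1", {"adopted_norm"}, "obligatory_report"),
        ("r2", {"emergency"},    "-obligatory_report"),
    ]
    SUPERIORITY = {("r2", "r1")}      # r2 overrides r1 when both apply

    def neg(lit: str) -> str:
        return lit[1:] if lit.startswith("-") else "-" + lit

    def conclusions(facts: set) -> set:
        out = set()
        applicable = [r for r in RULES if r[1] <= facts]
        for name, _, concl in applicable:
            attackers = [n for n, _, c in applicable
                         if c == neg(concl) and (name, n) not in SUPERIORITY]
            if not attackers:
                out.add(concl)
        return out

    print(conclusions({"adopted_norm"}))               # {'obligatory_report'}
    print(conclusions({"adopted_norm", "emergency"}))  # new fact defeats the conclusion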

Relevance: 30.00%

Abstract:

Reasoning under uncertainty is a human capacity that in software systems is necessary and often hidden. Argumentation theory and logic make non-monotonic information explicit in order to enable automatic forms of reasoning under uncertainty. In human organizations, Distributed Cognition and Activity Theory explain how artifacts are fundamental in all cognitive processes. In this thesis we therefore seek to understand the use of cognitive artifacts in a new argumentation framework for an agent-based artificial society.
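
For readers unfamiliar with argumentation frameworks, the sketch below implements the basic notions of Dung-style abstract argumentation (conflict-freeness, defence, admissibility) on a tiny invented attack graph; it is background machinery only, not the extended framework with cognitive artifacts proposed in the thesis.

    # Dung-style abstract argumentation: arguments and an attack relation.
    # A set S is conflict-free if no member attacks another; S defends an argument a
    # if every attacker of a is attacked by some member of S; S is admissible if it
    # is conflict-free and defends all of its members. The example graph is invented.
    from itertools import chain, combinations

    ARGS = {"a", "b", "c"}
    ATTACKS = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

    def conflict_free(S):
        return not any((x, y) in ATTACKS for x in S for y in S)

    def defends(S, a):
        return all(any((d, x) in ATTACKS for d in S)
                   for x in ARGS if (x, a) in ATTACKS)

    def admissible(S):
        return conflict_free(S) and all(defends(S, a) for a in S)

    subsets = chain.from_iterable(combinations(sorted(ARGS), r) for r in range(len(ARGS) + 1))
    print([set(S) for S in subsets if admissible(set(S))])
    # admissible sets here: {}, {'a'}, {'a','c'} -- 'c' alone is undefended against 'b'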

Relevance: 30.00%

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing, at runtime, typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers, and as a result most of them are at an early stage and still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented only by focusing on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
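
A toy sketch of the two environment "ingredients" named above, written as plain classes: environment abstractions that encapsulate a function agents can use, and topology abstractions that give them a (here purely logical) spatial structure, with a second, coarser layer on top. All names and methods are invented for illustration and do not correspond to SODA's actual notation.

    # Hypothetical illustration of environment vs. topology abstractions in a MAS design.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class EnvironmentAbstraction:
        """An environment entity encapsulating a function agents can exploit."""
        name: str
        def serve(self, request: str) -> str:
            return f"{self.name} handled '{request}'"

    @dataclass
    class Zone:
        """A topology abstraction: a (logical) place hosting environment entities."""
        name: str
        resources: List[EnvironmentAbstraction] = field(default_factory=list)
        neighbours: List["Zone"] = field(default_factory=list)

    # A two-layer description: the detailed layer lists zones, while the abstract
    # layer collapses them into a single node, giving two levels of abstraction.
    warehouse = Zone("warehouse", [EnvironmentAbstraction("stock-db")])
    dock = Zone("loading-dock", [EnvironmentAbstraction("gate-controller")])
    warehouse.neighbours.append(dock)
    abstract_layer: Dict[str, List[str]] = {"site": [warehouse.name, dock.name]}

    print(warehouse.resources[0].serve("query item 42"))
    print(abstract_layer)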

Relevance: 30.00%

Abstract:

This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former type we first addressed the problem of capacity in the presence of several nuisances. Moreover, we extended the concept of the Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems we proposed a method to optimize constellation design in the presence of strong distortion, such as the non-linear distortion introduced by a satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate of a given constellation, achieving improved accuracy with respect to the traditional Union Bound at no additional complexity. Finally, we designed a low-complexity SNR estimator, which saves one half of the multiplications with respect to the ML estimator while achieving similar estimation accuracy.
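
The peak-to-average power ratio (PAPR) mentioned above is easy to measure on a synthetic OFDM symbol; the sketch below computes it for a plain QPSK-mapped symbol with numpy, as a generic baseline rather than the low-PAPR mapping proposed in the thesis. The number of subcarriers is an arbitrary choice.

    # PAPR of one OFDM symbol: PAPR = max|x[n]|^2 / mean|x[n]|^2 over the time samples
    # obtained by an IFFT of the (here QPSK) subcarrier symbols. Parameters are generic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_subcarriers = 256
    qpsk = (rng.integers(0, 2, n_subcarriers) * 2 - 1 +
            1j * (rng.integers(0, 2, n_subcarriers) * 2 - 1)) / np.sqrt(2)

    x = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)   # time-domain OFDM symbol
    papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
    print(f"PAPR of this symbol: {papr_db:.2f} dB")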

Relevance: 30.00%

Abstract:

We investigate the statics and dynamics of a glassy, non-entangled, short bead-spring polymer melt with molecular dynamics simulations. Temperature ranges from slightly above the mode-coupling critical temperature to the liquid regime where features of a glassy liquid are absent. Our aim is to work out the polymer-specific effects on the relaxation and particle correlation. We find the intra-chain static structure unaffected by temperature; it depends only on the distance of monomers along the backbone. In contrast, the distinct inter-chain structure shows pronounced site-dependence effects at the length scales of the chain and the nearest-neighbor distance. There, we also find the strongest temperature dependence, which drives the glass transition. Both the site-averaged coupling of the monomer and center of mass (CM) and the CM-CM coupling are weak and presumably not responsible for a peak in the coherent relaxation time at the chain's length scale. Chains rather emerge as soft, easily interpenetrating objects. Three-particle correlations are well reproduced by the convolution approximation, with the exception of model-dependent deviations. In the spatially heterogeneous dynamics of our system we identify highly mobile monomers which tend to follow each other in one-dimensional paths forming "strings". These strings have an exponential length distribution and are generally short compared to the chain length. Thus, a relaxation mechanism in which neighboring mobile monomers move along the backbone of the chain seems unlikely. However, the correlation of bonded neighbors is enhanced.

When liquids are confined between two surfaces in relative sliding motion, kinetic friction is observed. We study a generic model setup by molecular dynamics simulations for a wide range of sliding speeds, temperatures, loads, and lubricant coverings for simple and molecular fluids. Instabilities in the particle trajectories are identified as the origin of kinetic friction. They lead to high particle velocities of fluid atoms which are gradually dissipated, resulting in a friction force. In commensurate systems fluid atoms follow continuous trajectories for sub-monolayer coverings and, consequently, friction vanishes at low sliding speeds. For incommensurate systems the velocity probability distribution exhibits approximately exponential tails. We connect this velocity distribution to the kinetic friction force, which reaches a constant value at low sliding speeds. This approach agrees well with the friction obtained directly from simulations and explains Amontons' law on the microscopic level. Molecular bonds in commensurate systems lead to incommensurate behavior, but do not change the qualitative behavior of incommensurate systems. However, crossed chains form stable load-bearing asperities which strongly increase friction.

Relevance: 30.00%

Abstract:

In this work the growth and the magnetic properties of the transition metals molybdenum, niobium, and iron and of the highly magnetostrictive C15 Laves phases of the RFe2 compounds (R: rare-earth metal; here Tb, Dy, and Tb{0.3}Dy{0.7}) deposited on alpha-Al2O3 (sapphire) substrates are analyzed. Besides (11-20)-oriented (a-plane) sapphire substrates, mainly (10-10)-oriented (m-plane) substrates were used. These show a pronounced faceting after high-temperature annealing in air. Atomic force microscopy (AFM) measurements reveal a dependence of the height, width, and angle of the facets on the annealing temperature. The observed deviations of the facet angles with respect to the theoretical values of the sapphire (10-1-2) and (10-11) surfaces are explained by cross-sectional high-resolution transmission electron microscopy (HR-TEM) measurements. These show the plain formation of the (10-11) surface, while the second, energy-reduced (10-1-2) facet has a curved shape given by atomic steps of (10-1-2) layers and is formed completely only at the facet ridges and valleys. Thin films of Mo and Nb, respectively, deposited by means of molecular beam epitaxy (MBE) reveal non-twinned, (211)-oriented epitaxial growth on both non-faceted and faceted sapphire m-plane, as shown by X-ray and TEM evaluations. In the case of faceted sapphire the two bcc crystals overgrow the facets homogeneously. Here, the bcc (111) surface is nearly parallel to the sapphire (10-11) facet and the Mo/Nb (100) surface is nearly parallel to the sapphire (10-1-2) surface. (211)-oriented Nb templates on sapphire m-plane can be used for the non-twinned, (211)-oriented growth of RFe2 films by means of MBE. Again, the quality of the RFe2 films grown on faceted sapphire is almost equal to that of films on the non-faceted substrate. For comparison, thin RFe2 films of the established (110) and (111) orientations were prepared. Magnetic and magnetoelastic measurements performed in a self-designed setup reveal the high quality of the samples. No difference between samples with undulated and flat morphology can be observed. In addition to the preparation of covering, undulating thin films on faceted sapphire m-plane, nanoscopic structures of Nb and Fe were prepared by shallow-incidence MBE. The formation of the nanostructures can be explained by a shadowing of the atomic beam due to the facets, in addition to de-wetting effects of the metals on the heated sapphire surface. Accordingly, the nanostructures form at the facet ridges and overgrow them. The morphology of the structures can be varied by the deposition conditions, as shown for Fe. The shape of the structures varies from spherical nanodots strung like a pearl necklace, with diameters of a few tens of nanometres, to oval nanodots a few hundred nanometres long, to continuous nanowires. Magnetization measurements reveal uniaxial magnetic anisotropy with the easy axis of magnetization parallel to the facet ridges. The shape of the hysteresis depends on the morphology of the structures. The magnetization reversal processes of the spherical and oval nanodots were simulated by micromagnetic modelling and can be explained by the formation of magnetic vortices.