773 results for minimization


Relevance:

10.00%

Publisher:

Abstract:

The work of this thesis focuses on the implementation of microelectronic voltage-sensing devices for transmitting and extracting analog information between devices of different natures at short distances or upon contact. Initially, chip-to-chip communication was studied, and circuitry for 3D capacitive coupling was implemented. Such circuits allow communication between dies fabricated in different technologies. Due to their novelty, they are not standardized and currently not supported by standard CAD tools. To overcome this burden, a novel approach for the characterization of such communication links has been proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Today, wafer probing is a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason, wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 µm process. The number of wires has been minimized by sharing the analog outputs and the supply on a single wire, yielding electrodes that require only four wires for their operation. Minimizing the number of wires reduces the cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is the electrode itself.
As this is a crucial point for biopotential acquisition, a substantial effort has been devoted to investigating different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-to-skin interface.

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach.
Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
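A schematic form of the constrained minimization may help fix ideas; the functional below is a generic tracking-type example with assumed symbols (velocity v, target velocity v_d, boundary control g on boundary Γ, regularization weight α), not the exact functional of the thesis:

```latex
\min_{\mathbf{g}}\; J(\mathbf{v},\mathbf{g})
  = \frac{1}{2}\int_\Omega |\mathbf{v}-\mathbf{v}_d|^2\,d\Omega
  + \frac{\alpha}{2}\int_{\Gamma} |\mathbf{g}|^2\,d\Gamma
\qquad \text{subject to the incompressible MHD equations, with }
\mathbf{B}\big|_{\Gamma} = \mathbf{g}.
```

With a lifting function extending g from the boundary into the domain, the boundary control enters the state equations as a distributed term, which is the reformulation exploited above.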

Relevance:

10.00%

Publisher:

Abstract:

This doctoral thesis concerns fermentative hydrogen production exploiting the anaerobic metabolism of particular extremophilic bacteria of the genus Thermotoga. In this work, carried out within the Bio-Hydro project using 116 mL batch reactors, the best Thermotoga strain was selected among the four strains tested: T. neapolitana. Once the best bacterial candidate had been identified, the optimal pH value for hydrogen production was determined (8.5 at room temperature). Intensive work was carried out on the culture medium, allowing its minimization and thus making it economically sustainable for use in the 19 L reactor; in this case glucose was completely replaced with two previously identified agro-industrial by-products, beet molasses and whey. The costly micronutrients and vitamins were then eliminated. The ability of T. neapolitana to produce biofilm was exploited, and four different sintered-glass and ceramic supports were tested; these tests identified Biomax as the best support. Studies on the metabolism of T. neapolitana were performed to determine the inhibitory concentration of each substrate tested, product (hydrogen) inhibition, and oxygen inhibition. All these experiments provided the basic knowledge for conducting runs in the 19 L reactor. The innovative SPCSTR-type reactor was entirely designed, developed, and built at the DICMA of the University of Bologna. Batch experiments in the SPCSTR made it possible to verify the operation of the new type of plant. At Wageningen UR (NL), the best Caldicellulosiruptor strain among the three tested and the best support for hydrogen production were selected; the innovative CMTB reactor was then built, tested, and operated in continuous mode.

Relevance:

10.00%

Publisher:

Abstract:

The electric dipole response of neutron-rich nickel isotopes has been investigated using the LAND setup at GSI in Darmstadt (Germany). Relativistic secondary beams of 56−57Ni and 67−72Ni at approximately 500 AMeV have been generated using projectile fragmentation of stable ions on a 4 g/cm2 Be target and subsequent separation in the magnetic dipole fields of the FRagment Separator (FRS). After reaching the LAND setup in Cave C, the radioactive ions were excited electromagnetically in the electric field of a Pb target. The decay products have been measured in inverse kinematics using various detectors. Neutron-rich 67−69Ni isotopes decay by the emission of neutrons, which are detected in the LAND detector. The present analysis concentrates on the (gamma,n) and (gamma,2n) channels in these nuclei, since the proton and three-neutron thresholds are unlikely to be reached considering the virtual photon spectrum for nickel ions at 500 AMeV. A measurement of the stable 58Ni isotope is used as a benchmark to check the accuracy of the present results against previously published data. The measured (gamma,n) and (gamma,np) channels are compared with an inclusive photoneutron measurement by Fultz and coworkers; the results are consistent within the respective errors. The measured excitation energy distributions of 67−69Ni contain a large portion of the Giant Dipole Resonance (GDR) strength predicted by the Thomas-Reiche-Kuhn energy-weighted sum rule, as well as a significant amount of low-lying E1 strength that cannot be attributed to the GDR alone. The GDR distribution parameters are calculated using well-established semi-empirical systematic models, providing the peak energies and widths. The GDR strength is extracted from the chi-square minimization of the model GDR to the measured data of the (gamma,2n) channel, thereby excluding any influence of possible low-lying strength.
The subtraction of the obtained GDR distribution from the total measured E1 strength provides the low-lying E1 strength distribution, which is attributed to the Pygmy Dipole Resonance (PDR). The extraction of the peak energy, width, and strength is performed using a Gaussian function. The minimization of trial Gaussian distributions to the data does not converge towards a sharp minimum. Therefore, the results are presented as a chi-square distribution over all three Gaussian parameters. Various predictions of PDR distributions exist, as well as a recent measurement of the 68Ni Pygmy Dipole Resonance obtained by virtual photon scattering, to which the present PDR distribution is also compared.
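As a minimal illustration of the chi-square minimization step described above, the sketch below fits a trial Gaussian to a synthetic strength distribution by brute-force grid search; the data, grids, and parameter values are invented for the example and are not the measured Ni distributions:

```python
# Hypothetical sketch: chi-square minimization of a trial Gaussian to a
# measured strength distribution (synthetic data; not the thesis data).
import math

def gaussian(e, amp, mu, sigma):
    return amp * math.exp(-0.5 * ((e - mu) / sigma) ** 2)

def chi_square(params, energies, strengths, errors):
    amp, mu, sigma = params
    return sum(((s - gaussian(e, amp, mu, sigma)) / err) ** 2
               for e, s, err in zip(energies, strengths, errors))

# Synthetic "pygmy" strength data around 10 MeV with assumed errors
energies = [8.0 + 0.25 * i for i in range(17)]   # 8-12 MeV
strengths = [gaussian(e, 5.0, 10.0, 0.8) for e in energies]
errors = [0.2] * len(energies)

# Brute-force grid minimization over the three Gaussian parameters
best = min(
    ((amp, mu, sigma)
     for amp in [4.0 + 0.1 * i for i in range(21)]
     for mu in [9.0 + 0.1 * i for i in range(21)]
     for sigma in [0.5 + 0.05 * i for i in range(21)]),
    key=lambda p: chi_square(p, energies, strengths, errors),
)
print(best)
```

Reporting the full chi-square surface over (amplitude, centroid, width), rather than a single minimum, is exactly what the flat minimum described above calls for.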

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work is to implement an operational methodology for designing air-quality monitoring networks and measurement campaigns using a mobile laboratory, optimizing the positions of the sampling devices with respect to different objectives and selection criteria. A review and analysis of the approaches and guidance provided by the reference legislation and by the authors of scientific works made it possible to propose a methodological approach consisting of two main operational phases, which was applied to a case study represented by the territory of the province of Ravenna. The implemented methodology integrates numerous tools supporting the assessment of air quality and of the effects that atmospheric pollutants can have on specific sensitive receptors (resident population, vegetation, material assets). In particular, the methodology integrates approaches for disaggregating emission inventories through proxy variables, modeling tools for simulating pollutant dispersion in the atmosphere, and algorithms for allocating monitoring instruments by maximizing (or minimizing) specific objective functions. The allocation procedure was automated through the development of a software tool that, via a graphical query interface, identifies optimal areas for carrying out the various monitoring campaigns.
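The allocation step described above can be illustrated with a greedy sketch that places monitoring sites so as to maximize a coverage objective; the candidate sites, receptor weights, and coverage sets below are invented for the example and are not the thesis software:

```python
# Illustrative sketch (not the thesis tool): greedy allocation of
# monitoring sites maximizing a weighted-coverage objective function.
receptor_weight = {"r1": 5.0, "r2": 3.0, "r3": 2.0, "r4": 1.0}
covers = {  # candidate site -> receptors its measurements represent
    "s1": {"r1", "r2"},
    "s2": {"r2", "r3", "r4"},
    "s3": {"r4"},
}

def objective(chosen):
    # Total weight of receptors covered by the chosen sites
    covered = set().union(*(covers[s] for s in chosen)) if chosen else set()
    return sum(receptor_weight[r] for r in covered)

def greedy_allocate(n_sites):
    chosen = []
    for _ in range(n_sites):
        best = max((s for s in covers if s not in chosen),
                   key=lambda s: objective(chosen + [s]))
        chosen.append(best)
    return chosen

print(greedy_allocate(2))
```

Greedy selection is only one possible heuristic; the point is that once the objective function is defined over candidate positions, allocation becomes a search problem that software can automate.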

Relevance:

10.00%

Publisher:

Abstract:

Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures, and other biological and mechanical applications. Guided wave (GW) based inspections are an attractive means for structural health monitoring. In this thesis, the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures are presented. In guided wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material and is used to remove the dependence on the travelled distance from the acquired signals. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has a beneficial effect in enhancing the accuracy of structural defect localization. The algorithm uses a convolutive model of the propagation of ultrasonic guided waves that takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure that achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' times of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
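As a simplified stand-in for the time-of-flight estimation that feeds the localization procedures, the sketch below recovers a propagation delay by cross-correlation; the pulse shape, sampling rate, and delay are invented for the example and the sparse-recovery machinery described above is deliberately omitted:

```python
# Minimal sketch of time-of-flight estimation by cross-correlation
# (a simplification of the sparse-recovery approach described above).
import math

fs = 1_000_000  # sampling rate in Hz (assumed)
# Windowed tone burst standing in for a guided-wave pulse
pulse = [math.sin(2 * math.pi * 50_000 * t / fs) *
         math.exp(-((t - 50) / 20) ** 2) for t in range(100)]

delay = 240  # true delay in samples (known here; unknown in practice)
received = [0.0] * delay + pulse + [0.0] * 100

def cross_correlation_delay(sig, ref):
    # Lag at which the reference best aligns with the signal
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(sig) - len(ref) + 1):
        val = sum(sig[lag + i] * ref[i] for i in range(len(ref)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

tof_samples = cross_correlation_delay(received, pulse)
print(tof_samples / fs)  # estimated time of flight in seconds
```

Given several such time-of-flight estimates from spatially separated sensors, hyperbolic or elliptic localization reduces to intersecting the corresponding distance loci.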

Relevance:

10.00%

Publisher:

Abstract:

Material tracking is becoming ever more important in the metal industry: a metal strip must pass through a defined program during the production process; only then is the quality of the end product guaranteed. The practice so far has been to assign each metal strip a number with which the strip is labeled. When strips are stored for days between two production steps, this method proves error-prone: the labels can, for example, be lost, mixed up, misread, or become illegible. In 2007, iba AG filed a patent for identifying metal strips by their thickness profile (Anhaus [3]); with it, the identity of a metal strip can be established beyond doubt, and reliable material tracking became possible. It turned out, however, that the noisy measured thickness profiles, which can be regarded as long time series, cannot be compared successfully with existing methods (e.g. L2 distance minimization or Dynamic Time Warping). This work presents an efficient feature-based algorithm for comparing two time series. It is robust against noise and measurement dropouts as well as invariant under coordinate transformations of the time series such as scaling and translation. Comparisons with partial time series are also possible. Our framework is distinguished by both high accuracy and high speed: more than 99.5% of the queries against our test database of real profiles are answered correctly. At several hundred time-series comparisons per second, it is roughly a factor of 10 faster than the established methods in the field of time-series analysis, which, however, are not able to process more than 90% of the queries correctly. The algorithm has proven suitable for industrial use.
iba AG employs it in a globally unique thickness-profile-based monitoring system for material tracking, which is already in successful operation in the first steel and aluminum rolling mills.
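A much simpler stand-in for the feature-based comparison can illustrate the required invariances: z-normalization makes a correlation score invariant to scaling and translation of the profile values. All profile data below are invented for the example:

```python
# Sketch of a scaling/translation-invariant profile comparison via
# z-normalization plus correlation; a deliberately simple stand-in for
# the feature-based algorithm described above.
from statistics import mean, pstdev

def znorm(series):
    m, s = mean(series), pstdev(series)
    return [(x - m) / s for x in series]

def similarity(a, b):
    # Pearson-style correlation of the z-normalized series
    za, zb = znorm(a), znorm(b)
    return sum(x * y for x, y in zip(za, zb)) / len(za)

profile = [1.0, 1.2, 0.9, 1.1, 1.4, 1.0, 0.8, 1.05]
shifted_scaled = [3 * x + 5 for x in profile]     # same strip, re-scaled
other = [1.0, 0.7, 1.3, 0.8, 1.2, 1.5, 0.9, 1.1]  # a different strip

print(similarity(profile, shifted_scaled))  # close to 1.0
print(similarity(profile, other))
```

The real algorithm additionally handles noise, dropouts, and partial-profile queries at high speed, which a plain correlation of this kind does not.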

Relevance:

10.00%

Publisher:

Abstract:

In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections by moving a detector and an X-ray tube around the object within a limited angular range. Reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem with an objective function containing a data-fitting term and a regularization term. The contribution of this thesis is to use techniques from compressed sensing, in particular replacing the standard least-squares data-fitting problem with minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) adapted to the 1-norm. We performed experiments and analysed the performance of the two methods by comparing relative errors, iteration counts, run times, and the quality of the reconstructed images. We conclude that the 1-norm and Total Variation are valid tools in the formulation of the minimization problem for image reconstruction in Digital Tomosynthesis, and that the new ADM algorithm reaches a relative error comparable to that of the classic SGP algorithm while proving faster and showing earlier appearance of the structures representing the masses.
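A toy sketch of the 1-norm data-fit plus Total Variation objective may clarify the formulation; the crude subgradient descent below stands in for the ADM and SGP solvers, and the operator, data, and parameters are invented for a tiny 1-D example:

```python
# Toy sketch of the l1 data-fit + Total Variation objective on a 1-D
# signal (invented operator and data); crude subgradient descent stands
# in for the ADM / SGP solvers discussed above.
def sign(v):
    return (v > 0) - (v < 0)

def objective(x, A, b, lam):
    # ||A x - b||_1 + lam * TV(x)
    residual_l1 = sum(abs(sum(A[i][j] * x[j] for j in range(len(x))) - b[i])
                      for i in range(len(b)))
    tv = sum(abs(x[j + 1] - x[j]) for j in range(len(x) - 1))
    return residual_l1 + lam * tv

def subgradient_step(x, A, b, lam, step):
    n = len(x)
    g = [0.0] * n
    for i in range(len(b)):  # subgradient of the l1 residual term
        r = sum(A[i][j] * x[j] for j in range(n)) - b[i]
        for j in range(n):
            g[j] += sign(r) * A[i][j]
    for j in range(n - 1):   # subgradient of the TV term
        s = sign(x[j + 1] - x[j])
        g[j + 1] += lam * s
        g[j] -= lam * s
    return [x[j] - step * g[j] for j in range(n)]

A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
b = [1.0, 1.0, 0.0, 2.0]
x = [0.0, 0.0, 0.0]
before = objective(x, A, b, 0.1)
for _ in range(200):
    x = subgradient_step(x, A, b, 0.1, 0.01)
after = objective(x, A, b, 0.1)
print(before, after)
```

The non-smoothness of both terms is precisely why specialized solvers such as ADM or a 1-norm-adapted SGP are preferred over the plain subgradient iteration shown here.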

Relevance:

10.00%

Publisher:

Abstract:

We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model to simulate soft-tissue deformations caused by the tumor mass-effect. The tumor growth model, which is formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image, prior to the registration step. The method is non-parametric, simple and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively with promising results on eight datasets comprising simulated images and real patient data.
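The energy-minimization idea can be illustrated on a deliberately tiny problem: Iterated Conditional Modes (ICM) on a 1-D binary label chain, a simplified stand-in for the mesh-free MRF formulation mentioned above; the data term, smoothness weight, and observations are invented for the example:

```python
# Tiny illustration of Markov Random Field energy minimization via
# Iterated Conditional Modes (ICM) on a 1-D label chain; a deliberately
# simplified stand-in for the mesh-free tumor-growth MRF above.
def energy(labels, observed, beta):
    data_term = sum((l - o) ** 2 for l, o in zip(labels, observed))
    smooth_term = sum(l1 != l2 for l1, l2 in zip(labels, labels[1:]))
    return data_term + beta * smooth_term

def icm(observed, states, beta, sweeps=10):
    labels = list(observed)
    for _ in range(sweeps):
        for i in range(len(labels)):
            # Greedily pick the label minimizing the total energy
            labels[i] = min(states, key=lambda s: energy(
                labels[:i] + [s] + labels[i + 1:], observed, beta))
    return labels

observed = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1]  # noisy binary observations
print(icm(observed, states=[0, 1], beta=1.5))
```

With the smoothness weight beta large enough, isolated noisy labels are flipped to agree with their neighbors, which is the same trade-off the mesh-free MRF energy encodes in 3-D.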

Relevance:

10.00%

Publisher:

Abstract:

Hand transplantation has been indicated in selected patients after traumatic upper-extremity amputation and has been performed in only a few centers around the world over the last decade. In comparison to solid organ transplantation, the challenge is to overcome the host immunological barrier posed by the complex antigenicity of the different tissues included, the skin being the most susceptible to rejection. Patients require lifelong immunosuppression for a non-life-threatening condition. Minimization of maintenance immunosuppression represents the key step for promoting wider applicability of hand transplantation. Current research is working towards understanding the mechanisms of composite tissue allograft (CTA) rejection. Worldwide, since 1998, 72 hands have been successfully transplanted in 51 patients (including 21 double hand transplants), and in one case both arms.

Relevance:

10.00%

Publisher:

Abstract:

This thesis examines three questions regarding the content of Bucknell University's waste stream and the contributors to campus recycling and solid waste disposal. The first asks, "What does Bucknell's waste stream consist of?" To answer this question, I designed a campus-wide waste audit procedure that sampled one dumpster from each of the eleven 'activity' types on campus in order to better understand Bucknell's waste composition. The audit was implemented during the Fall semester of the 2011-2012 school year. The waste from each dumpster was sorted into several recyclable and non-recyclable categories and then weighed individually. Results showed the Bison and Carpenter Shop dumpsters to contain the highest percentage of divertible materials (through recycling and/or composting). When extrapolated, results also showed the Dining Services buildings and Facilities buildings to be the most waste dense in terms of pounds of waste generated per square foot. The Bison also generated the most overall waste by weight. The average composition of all dumpsters revealed that organic waste composed 24% of all waste, 23% was non-recyclable paper, and 20% was non-recyclable plastic. It will be important to move forward using these results to help create effective waste programs that target the appropriate areas of concern. My second question asks, "What influences waste behavior to contribute to this 'picture' of the waste stream?" To answer this question, I created a survey that was sent out to a randomly selected sub-group of each of the university's three constituencies: students, faculty, and staff. The survey sought responses regarding each constituency's solid waste disposal and recycling behavior, attitudes toward recycling, and motivating factors for solid waste disposal behaviors across different sectors of the university.
Using regression analysis, I found three statistically significant motivating factors that influence solid waste disposal behavior: knowledge and awareness, moral value, and social norms. I further examined how a person's characteristics associate with these motivating factors and found that one's position on campus proved a significant association. Consistently, faculty and staff were strongly influenced by the aforementioned motivating factors, while students' behavior was less influenced by them. This suggests that new waste programs should target students to help increase the influence of these motivators, improve the recycling rate, and lower overall solid waste disposal on campus. After making overall conclusions regarding the waste audit and survey, I ask my third question: "What actions can Bucknell take to increase recycling rates and decrease solid waste generation?" Bucknell currently features several recycling and waste minimization programs on campus. However, using results from the waste audit and campus survey, we can better understand what the issues of the waste stream are, how to go about addressing them, and who needs to be addressed. I propose several suggestions for projects that future students may take on for summer or thesis research. Suggestions include targeting the categories of waste that occur most frequently in the waste stream, as well as the building types that have the highest waste density and potential recovery rates. Additionally, certain groups on campus should be targeted more directly than others, namely the student body, which demonstrates the lowest influence by motivators of recycling and waste behavior. Several variables were identified as significant motivators of waste and recycling behavior, and could be used as program tactics to encourage more effective behavior.

Relevance:

10.00%

Publisher:

Abstract:

Anaerobic digestion of food scraps has the potential to accomplish waste minimization, energy production, and compost or humus production. At Bucknell University, removal of food scraps from the waste stream could reduce municipal solid waste transportation costs and landfill tipping fees, and provide methane and humus for use on campus. To determine the suitability of food waste produced at Bucknell for high-solids anaerobic digestion (HSAD), a year-long characterization study was conducted. Physical and chemical properties, waste biodegradability, and annual production of biodegradable waste were assessed. Bucknell University food and landscape waste was digested at pilot-scale for over a year to test performance at low and high loading rates, ease of operation at 20% solids, and benefits of codigestion of food and landscape waste, and to provide digestate for studies assessing the curing needs of HSAD digestate. A laboratory-scale curing study was conducted to assess the curing duration required to reduce microbial activity, phytotoxicity, and odors to acceptable levels for subsequent use of the humus. The characteristics of Bucknell University food and landscape waste were tested approximately weekly for one year to determine chemical oxygen demand (COD), total solids (TS), volatile solids (VS), and biodegradability (from batch digestion studies). Fats, oil, and grease and total Kjeldahl nitrogen were also tested for some food waste samples. Based on the characterization and biodegradability studies, Bucknell University dining hall food waste is a good candidate for HSAD. During batch digestion studies, Bucknell University food waste produced a mean of 288 mL CH4/g COD with a 95% confidence interval of 0.06 mL CH4/g COD.
The addition of landscape waste for digestion increased methane production from both food and landscape waste; however, because the landscape waste biodegradability was extremely low, the increase was small. Based on an informal waste audit, Bucknell could collect up to 100 tons of food waste from dining facilities each year. The pilot-scale high-solids anaerobic digestion study confirmed that digestion of Bucknell University food waste combined with landscape waste at a low organic loading rate (OLR) of 2 g COD/L reactor volume-day is feasible. During low OLR operation, stable reactor performance was demonstrated through monitoring of biogas production and composition, reactor total and volatile solids, total and soluble chemical oxygen demand, volatile fatty acid content, pH, and bicarbonate alkalinity. Low OLR HSAD of Bucknell University food waste and landscape waste combined produced 232 L CH4/kg COD and 229 L CH4/kg VS. When the OLR was increased to high loading (15 g COD/L reactor volume-day) to assess maximum loading conditions, reactor performance became unstable due to ammonia accumulation and subsequent inhibition. The methane production per unit COD also decreased (to 211 L CH4/kg COD fed), although methane production per unit VS increased (to 272 L CH4/kg VS fed). The degree of ammonia inhibition was investigated through respirometry, in which reactor digestate was diluted and exposed to varying concentrations of ammonia. Treatments with low ammonia concentrations recovered quickly from ammonia inhibition within the reactor. The post-digestion curing process was studied at laboratory-scale to provide a preliminary assessment of curing duration. Digestate was mixed with woodchips and incubated in an insulated container at 35 °C to simulate full-scale curing self-heating conditions. The degree of digestate stabilization was determined through oxygen uptake rates, percent O2, temperature, volatile solids, and the Solvita Maturity Index.
Phytotoxicity was determined through observation of volatile fatty acid and ammonia concentrations. Stabilization of organics and elimination of phytotoxic compounds (after 10–15 days of curing) preceded significant reductions of volatile sulfur compounds (hydrogen sulfide, methanethiol, and dimethyl sulfide) after 15–20 days of curing. Bucknell University food waste has high biodegradability and is suitable for high-solids anaerobic digestion; however, it has a low C:N ratio, which can result in ammonia accumulation under some operating conditions. The low biodegradability of Bucknell University landscape waste limits the amount of bioavailable carbon that it can contribute, making it unsuitable for use as a cosubstrate to increase the C:N ratio of food waste. Additional research is indicated to determine other cosubstrates with higher biodegradabilities that may allow successful HSAD of Bucknell University food waste at high OLRs. Some cosubstrates to investigate are office paper, field residues, or grease trap waste. A brief curing period of less than 3 weeks was sufficient to produce viable humus from digestate produced by low OLR HSAD of food and landscape waste.

Relevance:

10.00%

Publisher:

Abstract:

Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With a focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validation distance approach, along with AIC, BIC/MDL, and ICOMP, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
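A schematic sketch of the cross-validated distance approach with voting: the held-out average negative log-likelihood is a cross-validation estimate of the KL distance up to a constant, and the per-fold "fit" below is a crude moment-based stand-in for EM, kept simple on purpose; all data are synthetic:

```python
# Schematic sketch: choosing k by cross-validated distance with voting
# over B folds. The per-fold "fit" is a crude split-at-the-mean stand-in
# for EM; data are synthetic, not the trial data mentioned above.
import math, random
from statistics import mean, pstdev
from collections import Counter

def normal_pdf(x, mu, sigma):
    sigma = max(sigma, 1e-3)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit(train, k):
    if k == 1:
        return [(1.0, mean(train), pstdev(train))]
    cut = mean(train)  # crude two-component split instead of EM
    left = [x for x in train if x <= cut]
    right = [x for x in train if x > cut]
    return [(len(left) / len(train), mean(left), pstdev(left)),
            (len(right) / len(train), mean(right), pstdev(right))]

def neg_loglik(model, test):
    # Held-out average NLL: a CV estimate of KL distance up to a constant
    return -sum(math.log(sum(w * normal_pdf(x, m, s) for w, m, s in model))
                for x in test) / len(test)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(150)] + \
       [random.gauss(6, 1) for _ in range(150)]

votes = []
for fold in range(10):  # B = 10 random train/test splits
    random.shuffle(data)
    train, test = data[:200], data[200:]
    votes.append(min((1, 2), key=lambda k: neg_loglik(fit(train, k), test)))

k_hat = Counter(votes).most_common(1)[0][0]
print(k_hat)
```

The voting step is the essential idea: individual cross-validation splits are noisy, but the majority over B splits stabilizes the estimate of k.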

Relevance:

10.00%

Publisher:

Abstract:

EPON 862 is an epoxy resin that is cured with the hardening agent DETDA to form a crosslinked epoxy polymer and is used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass-transition range, which cause physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass transition temperature and elastic properties increase with increasing crosslink density, and that the thermal expansion coefficient decreases with crosslink density, both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can be realistically achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful in establishing a molecular model that resembles the physically-aged state for further use in predicting thermo-mechanical properties as a function of aging time.
An equation has been derived from the results that directly correlates aging time to the aged volume of the molecular model. This equation can be helpful for modelers who want to study properties of epoxy resins at different levels of aging but have little information about the volume shrinkage occurring during physical aging.
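One common way to extract a glass transition temperature from simulations of this kind is to intersect linear fits to the specific volume below and above the transition; the sketch below uses invented volume-temperature numbers, not the thesis data:

```python
# Illustrative sketch (invented numbers): estimating a glass transition
# temperature as the intersection of linear fits to simulated specific
# volume on the glassy and rubbery branches.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

# Synthetic volume-temperature points with a slope change at 400 K
temps_lo = [300, 320, 340, 360, 380]
vols_lo = [1.00 + 0.0002 * (t - 300) for t in temps_lo]   # glassy branch
temps_hi = [420, 440, 460, 480, 500]
vols_hi = [1.02 + 0.0006 * (t - 400) for t in temps_hi]   # rubbery branch

a1, b1 = linear_fit(temps_lo, vols_lo)
a2, b2 = linear_fit(temps_hi, vols_hi)
tg = (b1 - b2) / (a2 - a1)  # intersection of the two lines
print(tg)
```

The same two slopes also give the volumetric thermal expansion coefficients below and above the transition, which is why the abstract reports both quantities from one set of simulations.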

Relevance:

10.00%

Publisher:

Abstract:

Interest in the study of magnetic/non-magnetic multilayered structures took a giant leap once Grünberg and his group established that the interlayer exchange coupling (IEC) is a function of the non-magnetic spacer width. This interest was further fuelled by the discovery of the phenomenal Giant Magnetoresistance (GMR) effect. In fact, in 2007 Albert Fert and Peter Grünberg were awarded the Nobel Prize in Physics for their contribution to the discovery of GMR. GMR is the key property used in the read head of the present-day computer hard drive, as it provides the high sensitivity required for the detection of magnetic fields. The recent increase in demand for device miniaturization has encouraged researchers to look for GMR in nanoscale multilayered structures. In this context, the one-dimensional (1-D) multilayered nanowire structure has shown tremendous promise as a viable candidate for ultra-sensitive read-head sensors. In fact, the giant magnetoresistance (GMR) effect, which is the novel feature of currently used multilayered thin films, has already been observed in multilayered nanowire systems at ambient temperature. Geometrical confinement of the superlattice along two dimensions (2-D) to construct the 1-D multilayered nanowire prohibits the minimization of the magnetic interaction, offering a rich variety of magnetic properties in nanowires that can be exploited for novel functionality. In addition, the introduction of a non-magnetic spacer between the magnetic layers presents an additional advantage in controlling magnetic properties via tuning of the interlayer magnetic interaction. Despite a large volume of theoretical work devoted to the understanding of GMR and IEC in superlattice structures, only limited theoretical calculations have been reported for 1-D multilayered systems.
Thus, to gauge their potential application in new-generation magneto-electronic devices, in this thesis I discuss the use of first-principles density functional theory (DFT) to predict the equilibrium structure, stability, and electronic and magnetic properties of one-dimensional multilayered nanowires. In particular, I focus on the electronic and magnetic properties of Fe/Pt multilayered nanowire structures and the role of the non-magnetic Pt spacer in modulating the magnetic properties of the wire. It is found that the average magnetic moment per atom in the nanowire increases monotonically with an approximately 1/N(Fe) dependence, where N(Fe) is the number of iron layers in the nanowire. A simple model based upon the interfacial structure is given to explain the 1/N(Fe) trend in the magnetic moment obtained from the first-principles calculations. A new mechanism, based upon spin flip within the layer and multistep electron transfer between the layers, is proposed to elucidate the enhancement of the magnetic moment of the iron atoms at the platinum interface. The calculated IEC in the Fe/Pt multilayered nanowire is found to switch sign as the width of the non-magnetic spacer varies. The competition among short- and long-range direct exchange and the superexchange is found to play a key role in the non-monotonic sign of the IEC as a function of the width of the platinum spacer layer. The calculated magnetoresistance from Julliere's model also exhibits switching behavior similar to that of the IEC. The universality of the behavior of the exchange coupling has also been examined by introducing different non-magnetic spacers, namely palladium, copper, silver, and gold, between the magnetic iron layers. The nature of hybridization between Fe and the non-magnetic spacer is found to dictate the interlayer magnetic interaction.
For example, in the Fe/Pd nanowire the d-p hybridization in the two-spacer-layer case favors the anti-ferromagnetic (AFM) configuration over the ferromagnetic (FM) configuration, whereas the hybridization between the half-filled Fe(d) and filled Cu(p) states in the Fe/Cu nanowire favors FM coupling in the two-spacer system.
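The 1/N(Fe) trend can be illustrated by a least-squares fit of m(N) = a + b/N; the moment values below are invented for the example and are not the DFT results:

```python
# Sketch of the 1/N(Fe) trend: least-squares fit of m(N) = a + b/N to
# invented per-atom magnetic moments (illustrative, not the DFT data).
def fit_inverse_law(ns, ms):
    xs = [1.0 / n for n in ns]  # linearize: m is linear in 1/N
    n = len(xs)
    mx, my = sum(xs) / n, sum(ms) / n
    b = sum((x - mx) * (m - my) for x, m in zip(xs, ms)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # (a, b) in m(N) = a + b/N

ns = [1, 2, 3, 4, 6]                # number of Fe layers
ms = [2.2 + 0.9 / n for n in ns]    # synthetic data on an exact 1/N law
a, b = fit_inverse_law(ns, ms)
print(a, b)
```

In this parametrization, a is the bulk-like moment reached for many Fe layers and b captures the interface contribution, which is diluted as 1/N(Fe) when more iron layers are added.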