818 results for Hiking -- Tools and equipment
Abstract:
The thesis, developed in collaboration between the Systems and Equipment for Energy and Environment team of the University of Bologna and Chalmers University of Technology in Gothenburg, aims to study the benefits resulting from the adoption of a thermal storage system for marine applications. For that purpose, a cruise ship has been considered. The analysis was carried out with the software EGO (Energy Grid Optimization), developed by the University of Bologna.
Abstract:
Global climate change in recent decades has strongly influenced the Arctic, generating pronounced warming accompanied by a significant reduction of sea ice in seasonally ice-covered seas and a dramatic increase of open water regions exposed to wind [Stephenson et al., 2011]. By strongly scattering the wave energy, thick multiyear ice prevents swell from penetrating deeply into the Arctic pack ice. However, with the recent changes affecting Arctic sea ice, waves gain more energy from the extended fetch and can therefore penetrate further into the pack ice. Arctic sea ice also appears weaker during the melt season, extending the transition zone between thick multi-year ice and the open ocean. This region is called the Marginal Ice Zone (MIZ). In the Arctic, the MIZ is mainly encountered in the marginal seas, such as the Nordic Seas, the Barents Sea, the Beaufort Sea and the Labrador Sea. Formed by numerous blocks of sea ice of various diameters (floes), the MIZ, under certain conditions, allows maritime transportation, stimulating dreams of industrial and touristic exploitation of these regions and possibly allowing, in the near future, a maritime connection between the Atlantic and the Pacific. With the increasing human presence in the Arctic, waves pose security and safety issues. As marginal seas are targeted for oil and gas exploitation, understanding and predicting ocean waves and their effects on sea ice become crucial for structure design and for the real-time safety of operations. The juxtaposition of waves and sea ice represents a risk for personnel and equipment deployed on ice, and may complicate critical operations such as platform evacuations. The risk is difficult to evaluate because there are no long-term observations of waves in ice, swell events are difficult to predict from local conditions, ice breakup can occur on very short time-scales, and wave-ice interactions are beyond the scope of current forecasting models [Liu and Mollo-Christensen, 1988; Marko, 2003]. In this thesis, a newly developed Waves in Ice Model (WIM) [Williams et al., 2013a; Williams et al., 2013b] and its related Ocean and Sea Ice Model (OSIM) will be used to study the MIZ and the improvements of wave modeling in ice-infested waters. The following work has been conducted in collaboration with the Nansen Environmental and Remote Sensing Center and within the SWARP project, which aims to extend operational services supporting human activity in the Arctic by including forecasts of waves in ice-covered seas, forecasts of sea ice in the presence of waves, and remote sensing of both wave and sea ice conditions. The WIM will be included in the downstream forecasting services provided by the Copernicus marine environment monitoring service.
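As an illustration of the attenuation picture this abstract describes, the sketch below estimates how far swell might penetrate into pack ice under a simple exponential decay law, a minimal stand-in for the physics in a waves-in-ice model. The attenuation coefficient and the breaking threshold are illustrative assumptions, not values from the WIM.

```python
import numpy as np

def swell_penetration(H0, alpha, H_break):
    """Distance (km) at which wave height falls below an ice-breaking
    threshold, assuming exponential attenuation H(x) = H0 * exp(-alpha * x).

    H0      -- incident significant wave height (m)
    alpha   -- attenuation coefficient (1/km), assumed constant here
    H_break -- height below which waves no longer fracture the ice (m)
    """
    if H0 <= H_break:
        return 0.0
    return np.log(H0 / H_break) / alpha

# Illustrative numbers only: 4 m swell, alpha = 0.05 /km, 0.5 m threshold.
print(f"Penetration distance: {swell_penetration(4.0, 0.05, 0.5):.0f} km")
```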
Abstract:
Capuchin monkeys are notable among New World monkeys for their widespread use of tools. They use both hammer tools and insertion tools in the wild to acquire food that would otherwise be unobtainable. Evidence indicates that capuchins transport stones to anvil sites and use the most functionally efficient stones to crack nuts. We investigated capuchins' assessment of functionality by testing their ability to select a tool that was appropriate for two different tool-use tasks: a stone for a hammer task and a stick for an insertion task. To select the appropriate tools, the monkeys investigated a baited tool-use apparatus (insertion or hammer), traveled to a location in their enclosure where they could no longer see the apparatus, made a selection between two tools (stick or stone), and could then transport the tool back to the apparatus to obtain a walnut. Four capuchins were first trained to select and use the appropriate tool for each apparatus. After training, they were tested by being allowed to view a baited apparatus and then travel to a location 8 m distant where they could select a tool while out of view of the apparatus. All four monkeys chose the correct tool significantly more often than expected by chance and transported the tools back to the apparatus. The results confirm capuchins' propensity for transporting tools, demonstrate their capacity to select the functionally appropriate tool for two different tool-use tasks, and indicate that they can retain the memory of the correct choice over a travel time of several seconds.
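A sketch of the kind of significance check implied by "chose the correct tool significantly more often than expected": a one-sided binomial test of each monkey's choices against the 50% chance level. The trial counts below are made-up placeholders; the abstract does not report them.

```python
from scipy.stats import binomtest

# Hypothetical counts: correct choices out of 20 trials per monkey.
trials = {"monkey_A": 18, "monkey_B": 17, "monkey_C": 19, "monkey_D": 16}

for name, correct in trials.items():
    # One-sided test against chance performance (p = 0.5).
    result = binomtest(correct, n=20, p=0.5, alternative="greater")
    print(f"{name}: {correct}/20 correct, p = {result.pvalue:.4f}")
```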
Abstract:
The lack of effective tools has hampered our ability to assess the size, growth and ages of clonal plants. With Serenoa repens (saw palmetto) as a model, we introduce a novel analytical framework that integrates DNA fingerprinting and mathematical modelling to simulate growth and estimate ages of clonal plants. We also demonstrate the application of such life-history information of clonal plants to provide insight into management plans. Serenoa is an ecologically important foundation species in many Southeastern United States ecosystems; yet, many land managers consider Serenoa a troublesome invasive plant. Accordingly, management plans have been developed to reduce or eliminate Serenoa with little understanding of its life history. Using Amplified Fragment Length Polymorphisms, we genotyped 263 Serenoa and 134 Sabal etonia (a sympatric non-clonal palmetto) samples collected from a 20 x 20 m study plot in Florida scrub. Sabal samples were used to assign small field-unidentifiable palmettos to Serenoa or Sabal and also as a negative control for clone detection. We then mathematically modelled clonal networks to estimate genet ages. Our results suggest that Serenoa predominantly propagates via vegetative sprouts and 10,000-year-old genets may be common, while showing no evidence of clone formation by Sabal. The results of this and our previous studies suggest that: (i) Serenoa has been part of scrub associations for thousands of years, (ii) Serenoa invasions are unlikely and (iii) once Serenoa is eliminated from local communities, its restoration will be difficult. Reevaluation of the current management tools and plans is an urgent task.
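A minimal sketch of the age-estimation idea described above: if a genet spreads vegetatively at a roughly constant radial rate, its age can be back-calculated from the spatial extent of its genotyped ramets. The growth rate and coordinates below are placeholder assumptions; the study derives its estimates from a full simulation of clonal networks.

```python
import math

def genet_age_years(ramet_coords, radial_growth_m_per_yr):
    """Crude genet age: half the maximum pairwise distance between ramets
    (the genet 'radius'), divided by an assumed radial expansion rate."""
    max_span = max(
        math.dist(a, b) for a in ramet_coords for b in ramet_coords
    )
    return (max_span / 2) / radial_growth_m_per_yr

# Ramets of one genotype in a 20 x 20 m plot (coordinates in metres, made up).
ramets = [(1.0, 2.5), (4.2, 3.1), (6.8, 7.9), (2.3, 9.4)]
print(f"Estimated age: {genet_age_years(ramets, 0.0005):.0f} years")  # 0.5 mm/yr
```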
Abstract:
Research for Sustainable Development is based on the experiences of a decade of inter- and transdisciplinary research in partnership in nine regions of the world. It presents 29 articles in which interdisciplinary teams reflect on the foundations of sustainability-oriented research, propose and illustrate concrete concepts, tools, and approaches to overcome the challenges of such research, and show how research practice related to specific issues of sustainable development has led to new thematic and methodological insights. The book seeks to stimulate the advancement of research towards more relevant, scientifically sound, and concrete contributions to realising the vision of sustainable development.
Abstract:
The Simulation Automation Framework for Experiments (SAFE) is a project created to raise the level of abstraction in network simulation tools and thereby address issues that undermine credibility. SAFE incorporates best practices in network simulation to automate the experimental process and to guide users in the development of sound scientific studies using the popular ns-3 network simulator. My contributions to the SAFE project are the design of two XML-based languages called NEDL (ns-3 Experiment Description Language) and NSTL (ns-3 Script Templating Language), which facilitate the description of experiments and network simulation models, respectively. The languages provide a foundation for the construction of better interfaces between the user and the ns-3 simulator. They also provide input to a mechanism which automates the execution of network simulation experiments. Additionally, this thesis demonstrates that one can develop tools to generate ns-3 scripts in Python or C++ automatically from NSTL model descriptions.
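A minimal sketch of the templating idea behind NSTL-style script generation: an XML model description is parsed and expanded into the skeleton of an ns-3 Python script. The XML element names and the generated calls are illustrative guesses, not the actual NSTL schema or generator.

```python
import xml.etree.ElementTree as ET

# Hypothetical model description; the real NSTL schema differs.
MODEL = """
<model name="two_nodes">
  <node id="a"/>
  <node id="b"/>
  <link left="a" right="b" rate="5Mbps" delay="2ms"/>
</model>
"""

def generate_ns3_script(xml_text):
    """Expand a toy XML model description into an ns-3 Python script skeleton."""
    model = ET.fromstring(xml_text)
    out = ["import ns.core", "import ns.network", "import ns.point_to_point", ""]
    nodes = model.findall("node")
    out.append("nodes = ns.network.NodeContainer()")
    out.append(f"nodes.Create({len(nodes)})")
    for link in model.findall("link"):
        out.append("p2p = ns.point_to_point.PointToPointHelper()")
        out.append(f"p2p.SetDeviceAttribute(\"DataRate\", ns.core.StringValue(\"{link.get('rate')}\"))")
        out.append(f"p2p.SetChannelAttribute(\"Delay\", ns.core.StringValue(\"{link.get('delay')}\"))")
    return "\n".join(out)

print(generate_ns3_script(MODEL))
```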
Abstract:
Theoretical studies of the problems of the securities markets in the Russian Federation incline to one or other of two traditional approaches. The first consists of comparing the definition of "valuable paper" set forth in the current legislation of the Russian Federation with the theoretical model of "Wertpapiere" elaborated by German scholars more than 90 years ago. The problem with this approach is, in Mr. Pentsov's opinion, that any new features of the definition of "security" that do not coincide with the theoretical model of "Wertpapiere" (such as valuable papers existing in non-material, electronic form) are claimed to be incorrect and removed from the current legislation of the Russian Federation. The second approach works on the basis of the differentiation between the Common Law concept of "security" and the Civil Law concept of "valuable paper". Mr. Pentsov's research, presented in an article written in English, uses both methodological tools and involves, firstly, a historical study of the origin and development of certain legal phenomena (securities) as they evolved in different countries, and secondly, a comparative, synchronic study of equivalent legal phenomena as they exist in different countries today. Employing the first method, Mr. Pentsov divided the historical development of the conception of "valuable paper" in Russia into five major stages. He found that, despite the existence of a relatively wide circulation of valuable papers, especially in the second half of the 19th century, Russian legislation before 1917 (the first stage) did not have a unified definition of valuable paper. The term was used, in both theoretical studies and legislation, but it covered a broad range of financial instruments such as stocks, bonds, government bonds, promissory notes, bills of exchange, etc. During the second stage, also, the legislation of the USSR did not have a unified definition of "valuable paper". After the end of the "new economic policy" (1922-1930), the stock exchanges and the securities markets in the USSR, with very few exceptions, were abolished. Thus, during the third stage (up to 1985), the use of valuable papers in practice was reduced to foreign economic relations (bills of exchange, stocks in enterprises outside the USSR) and to state bonds. Not surprisingly, there was still no unified definition of "valuable paper". After the beginning of Gorbachev's perestroika, a securities market began to re-appear in the USSR. However, the successful development of securities markets in the USSR was retarded by the absence of an appropriate regulatory framework. The first effort to improve the situation was the adoption of the Regulations on Valuable Papers, approved by resolution No. 590 of the Council of Ministers of the USSR, dated June 19, 1990. Section 1 of the Regulations contained the first statutory definition of "valuable paper" in the history of Russia. At the very beginning of the period of transition to a market economy, a number of acts contained different definitions of "valuable paper". This diversity clearly undermined the stability of the Russian securities market and did not achieve the goal of protecting the investor. The lack of unified criteria for the consideration of such non-standard financial instruments as "valuable papers" significantly contributed to the appearance of numerous fraudulent "pyramid" schemes that were outside of the regulatory scheme of Russian legislation.
The situation was substantially improved by the adoption of the new Civil Code of the Russian Federation. According to Section 1 of Article 142 of the Civil Code, a valuable paper is a document that confirms, in compliance with an established form and mandatory requisites, certain material rights whose realisation or transfer are possible only upon its presentation. Finally, the recent Federal Law No. 39-FZ "On the Valuable Papers Market", dated April 22, 1996, has also introduced the term "emission valuable papers". According to Article 2 of this Law, an "emission valuable paper" is any valuable paper, including non-documentary, that simultaneously has the following features: it fixes the composition of material and non-material rights that are subject to confirmation, cession and unconditional realisation in compliance with the form and procedure established by this federal law; it is placed by issues; and it has equal amount and time of realisation of rights within the same issue regardless of when the valuable paper was purchased. Thus the introduction of the conception of "emission valuable paper" became the starting point in the Russian Federation's legislation for the differentiation between the legal regimes of "commercial papers" and "investment papers", similar to the Common Law approach. Moving now to the synchronic, comparative method of research, Mr. Pentsov notes that there are currently three major conceptions of "security" and, correspondingly, three approaches to its legal definition: the Common Law concept, the Continental Law concept, and the concept employed by Japanese law. Mr. Pentsov proceeds to analyse the differences and similarities of all three, concluding that though the concept of "security" in the Common Law system substantially differs from that of "valuable paper" in the Continental Law system, the two concepts are nevertheless developing in similar directions. He predicts that in the foreseeable future the existing differences between these two concepts will become less and less significant. On the basis of his research, Mr. Pentsov arrived at the conclusion that the concept of "security" (and its equivalents) is not a static one. On the contrary, it is in a process of permanent evolution that reflects the introduction of new financial instruments onto the capital markets. He believes that the scope of the statutory definition of "security" plays an extremely important role in the protection of investors. In passing the Securities Act of 1933, the United States Congress determined that the best way to achieve the goal of protecting investors was to define the term "security" in sufficiently broad and general terms so as to include within the definition the many types of instruments that in the commercial world fall within the ordinary concept of "security", and to cover the countless and various devices used by those who seek to use the money of others on the promise of profits. On the other hand, the very limited scope of the current definition of "emission valuable paper" in the Federal Law of the Russian Federation "On the Valuable Papers Market" does not allow the anti-fraud provisions of this law to be implemented in an efficient way. Consequently, there is no basis for the protection of investors. Mr. Pentsov proposes amendments which he believes would enable the Russian markets to become more efficient and attractive for both foreign and domestic investors.
Abstract:
Software systems need to change continuously to remain useful. Change appears in several forms and needs to be accommodated at different levels. We propose ChangeBoxes as a mechanism to encapsulate, manage, analyze and exploit changes to software systems. Our thesis is that only by making change explicit and manipulable can we enable the software developer to manage software change more effectively than is currently possible. Furthermore, we argue that we need new insights into assessing the impact of changes, and that we need to provide new tools and techniques to manage them. We report on the results of some initial prototyping efforts, and we outline a series of research activities that we have started in order to explore the potential of ChangeBoxes.
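A hypothetical sketch of what a ChangeBox-like abstraction could look like: a change is reified as a first-class object that can be applied, rolled back, and queried for impact analysis. The class and method names are invented for illustration; the paper's prototypes define their own model.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeBox:
    """Reifies one change to a software system so it can be managed,
    analyzed and exploited rather than applied destructively."""
    description: str
    # Maps entity name -> (old definition, new definition).
    deltas: dict = field(default_factory=dict)

    def affected_entities(self):
        """Support for impact analysis: which entities does this change touch?"""
        return set(self.deltas)

    def apply(self, system: dict):
        for entity, (_, new) in self.deltas.items():
            system[entity] = new

    def rollback(self, system: dict):
        for entity, (old, _) in self.deltas.items():
            system[entity] = old

# Toy 'system': a mapping from method names to their source versions.
system = {"Point.draw": "draw v1"}
cb = ChangeBox("refactor draw", {"Point.draw": ("draw v1", "draw v2")})
cb.apply(system)
print(system, cb.affected_entities())
cb.rollback(system)
```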
Abstract:
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled on the computer. However, the sense of touch is lost in the virtual approach. A haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models in order to feel minute injuries and the surface of tools, to reposition displaced bone parts and to compare an injury-causing instrument with an injury. The repositioning of 3D models in a reconstruction is executed more easily, quickly and precisely by using the sense of touch together with user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science, offering a new way of handling digital data in virtual 3D space.
Abstract:
Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk borne by either metals' producers or consumers. Thus, it has been taken into account by all participants in metal markets including metals' producers, consumers, merchants, banks, investment funds, speculators, traders and so on. Managing price risk provides stable income for both metals' producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, benefits and costs of risk management, and risks and rewards of positions in the derivative markets. The second part considers valuations of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

$$
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL(t)}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
$$

which solves for the price as

$$
P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL(t)}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717}
$$

where $Q_t^D$ and $Q_t^S$ are world demand for and supply of copper at time t, respectively. $P_{t-1}$ is the lagged price of copper, which is the focus of the analysis in this part. $GDP_t$ is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, denoted $IP_t$, is included in the model. $T_t$ is the time variable, which is a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, denoted $P_{OIL(t)}$. $USDI_t$ is the U.S. dollar index variable at time t, which is an important variable for explaining copper supply and copper prices. Finally, $LIBOR_{t-6}$ is the 6-month-lagged one-year London Interbank Offered Rate. Although the model can be applied to different base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are determined. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, which are created by using both call and put options in 2006 and 2007, are evaluated. Consequently, each risk management strategy in 2006 and 2007 is analyzed based on the day of data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
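A minimal sketch of the Monte Carlo step described above: simulate monthly average prices from the reduced-form equation by drawing the exogenous variables from assumed distributions, then count how often the simulated price exceeds a strike. Only the coefficients come from the abstract; every input distribution, the time-trend value and the (normalized) strike are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative distributions for the exogenous variables (placeholders).
gdp   = rng.normal(1.05, 0.02, N)   # world GDP index
ip    = rng.normal(3.0, 1.0, N)     # industrial production growth, %
t     = np.full(N, 30.0)            # time-trend value
p_oil = rng.normal(60.0, 10.0, N)   # oil price, USD/bbl
usdi  = rng.normal(0.85, 0.03, N)   # U.S. dollar index
libor = rng.normal(5.0, 0.5, N)     # 1-year LIBOR, %, lagged 6 months

# Reduced-form price equation with the coefficients quoted in the abstract.
price = (np.exp(-2.5165) * gdp**2.1910 * np.exp(0.0202 * ip)
         * t**-0.1799 * p_oil**0.1991 * usdi**-1.5881 * libor**0.0717)

strike = 1.0  # illustrative strike, in the same normalized units as `price`
print(f"P(monthly average price > strike) ~ {np.mean(price > strike):.3f}")
```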
Abstract:
The purpose of this study is to provide a procedure for including emissions to the atmosphere resulting from the combustion of diesel fuel during dredging operations in the decision-making process of dredging equipment selection. The proposed procedure is demonstrated for typical dredging methods and data from the Illinois Waterway as performed by the U.S. Army Corps of Engineers, Rock Island District. The equipment included in this study is a 16-inch cutterhead pipeline dredge and a mechanical bucket dredge used during the 2005 dredging season on the Illinois Waterway. Considerable effort has been put forth to identify and reduce environmental impacts from dredging operations. Though the environmental impacts of dredging have been studied, no effort has been applied to the evaluation of air emissions from comparable types of dredging equipment, as in this study. By identifying the type of dredging equipment with the lowest air emissions, when cost, site conditions, and equipment availability are comparable, adverse environmental impacts can be minimized without compromising the dredging project. A total of 48 scenarios were developed by varying the dredged material quantity, transport distance, and production rates. This produced an "envelope" of results applicable to a broad range of site conditions. Total diesel fuel consumed was calculated using standard cost estimating practices as defined in the U.S. Army Corps of Engineers Construction Equipment Ownership and Operating Expense Schedule (USACE, 2005). The diesel fuel usage was estimated for all equipment used to mobilize and/or operate each dredging crew in every scenario. A limited Life Cycle Assessment (LCA) was used to estimate the air emissions from the two comparable dredging operations utilizing SimaPro LCA software. An Environmental Impact Single Score (EISS) was the SimaPro output selected for comparison with the cost per cubic yard (CY) of dredging, potential production rates, and transport distances to identify possible decision points. The total dredging time was estimated for each dredging crew and scenario. An average hourly cost for both dredging crews was calculated based on Rock Island District 2005 dredging season records (Graham 2007/08). The results from this study confirm commonly used rules of thumb in the dredging industry by indicating that mechanical bucket dredges are better suited for long transport distances and have lower air emissions and cost per CY for smaller quantities of dredged material. In addition, the results show that a cutterhead pipeline dredge would be preferable for moderate and large volumes of dredged material when no additional booster pumps are required. Finally, the results indicate that production rates can be a significant factor when evaluating the air emissions from comparable dredging equipment.
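A back-of-the-envelope sketch of the fuel-to-emissions bookkeeping the study describes: estimate total diesel burned from dredging hours and hourly consumption, then convert to CO2. The hourly fuel rates and scenario values are invented placeholders, and the factor of roughly 10.2 kg CO2 per gallon of diesel is a commonly used approximation, not a number from the thesis.

```python
KG_CO2_PER_GAL_DIESEL = 10.21  # common approximation for diesel combustion

def scenario_emissions(quantity_cy, production_cy_per_hr, fuel_gal_per_hr):
    """Total diesel (gal) and CO2 (t) to dredge `quantity_cy` cubic yards."""
    hours = quantity_cy / production_cy_per_hr
    fuel = hours * fuel_gal_per_hr
    return fuel, fuel * KG_CO2_PER_GAL_DIESEL / 1000.0

# Placeholder scenarios: (quantity CY, production CY/hr, fuel gal/hr).
scenarios = {
    "cutterhead, small job": (20_000, 500, 120),
    "mechanical bucket, small job": (20_000, 300, 60),
}
for name, args in scenarios.items():
    fuel, co2 = scenario_emissions(*args)
    print(f"{name}: {fuel:,.0f} gal diesel, {co2:,.0f} t CO2")
```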
Abstract:
Chapter 1 is used to introduce the basic tools and mechanics used within this thesis. Most of the terms used in the thesis are defined there, and we provide a basic survey of topics in graph theory and design theory pertinent to the topics studied in this thesis. In Chapter 2, we are concerned with the study of fixed block configuration group divisible designs, GDD(n, m, k; λ1, λ2). We study those GDDs in which each block has configuration (s, t), that is, GDDs in which each block has exactly s points from one of the two groups and t points from the other. Chapter 2 begins with an overview of previous results and constructions for small group size and block sizes 3, 4 and 5. Chapter 2 is largely devoted to presenting constructions and results about GDDs with two groups and block size 6. We show that the necessary conditions are sufficient for the existence of GDD(n, 2, 6; λ1, λ2) with fixed block configuration (3, 3). For configuration (1, 5), we give minimal or near-minimal index constructions for all group sizes n ≥ 5 except n = 10, 15, 160, or 190. For configuration (2, 4), we provide constructions for several families of GDD(n, 2, 6; λ1, λ2)s. Chapter 3 addresses characterizing (3, r)-regular graphs. We begin by providing previous results on the well-studied class of (2, r)-regular graphs and some results on the structure of large (t, r)-regular graphs. In Chapter 3, we completely characterize all (3, 1)-regular and (3, 2)-regular graphs, as well as sharpen existing bounds on the order of large (3, r)-regular graphs of a certain form for r ≥ 3. Finally, the appendix gives computational data resulting from Sage and C programs used to generate (3, 3)-regular graphs on fewer than 10 vertices.
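A small sketch of the fixed-block-configuration condition just described: for a GDD with two groups, every block must meet one group in exactly s points and the other in exactly t. The representation of groups and blocks as integer sets is invented for illustration.

```python
def has_configuration(block, group1, group2, s, t):
    """True if `block` has exactly s points in one group and t in the other."""
    a = len(set(block) & set(group1))
    b = len(set(block) & set(group2))
    return {a, b} == {s, t} and a + b == len(block)

# Two groups of size 6 and one candidate block of size 6 (made-up points).
g1, g2 = range(0, 6), range(6, 12)
print(has_configuration([0, 1, 2, 6, 7, 8], g1, g2, 3, 3))   # True: (3, 3)
print(has_configuration([0, 6, 7, 8, 9, 10], g1, g2, 1, 5))  # True: (1, 5)
```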
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, considering effects of earth return currents. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and their implications. Additionally, the lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
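To make the frequency-dependence discussion concrete, here is a sketch of the per-unit-length self-impedance of a conductor over lossy earth using the Deri/Dubanton complex-depth approximation to Carson's integral, a closed form often quoted alongside Carson's series. It is included only to illustrate how the earth-return term varies with frequency; it is not the correction the thesis ultimately recommends.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def self_impedance(f, height, radius, sigma_earth):
    """Per-unit-length self-impedance (ohm/m) of a conductor at `height` m
    above earth of conductivity `sigma_earth` (S/m), via the complex-depth
    approximation: the earth return behaves like a perfect conductor at a
    complex depth p below the surface."""
    w = 2 * np.pi * f
    p = 1 / np.sqrt(1j * w * MU0 * sigma_earth)  # complex penetration depth
    return 1j * w * MU0 / (2 * np.pi) * np.log(2 * (height + p) / radius)

# Example: conductor 15 m high, 1.5 cm radius, earth conductivity 0.01 S/m.
for f in (60.0, 1e6, 30e6):  # power frequency vs. BPL-range frequencies
    z = self_impedance(f, 15.0, 0.015, 0.01)
    print(f"{f:>12.0f} Hz: Z' = {z.real:.4e} + j{z.imag:.4e} ohm/m")
```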
Abstract:
Chapter 1 is used to introduce the basic tools and mechanics used within this thesis. Some historical uses and background are touched upon, and the majority of the definitions are contained within this chapter. In Chapter 2 we consider the question of whether one can decompose λ copies of monochromatic Kv into copies of Kk such that each copy of Kk contains at most one edge from each Kv. This is called a proper edge coloring (Hurd, Sarvate, [29]). The majority of the content in this section is a wide variety of examples to explain the constructions used in Chapters 3 and 4. In Chapters 3 and 4 we investigate how to properly color BIBD(v, k, λ) for k = 4 and 5. Not only do we give direct constructions of relatively small BIBDs, we also prove some generalized constructions used within. In Chapter 5 we discuss an alternative solution to Chapters 3 and 4: a purely graph-theoretical solution using matchings, augmenting paths, and theorems about the edge-chromatic number is used to develop a theorem that then covers all possible cases. We also discuss how this method performs compared to the methods in Chapters 3 and 4. In Chapter 6, we switch topics to Latin rectangles that have the same number of symbols and an equivalently sized matrix to Latin squares. Suppose ab = n². We define an equitable Latin rectangle as an a × b matrix on a set of n symbols where each symbol appears either ⌊b/n⌋ or ⌈b/n⌉ times in each row of the matrix and either ⌊a/n⌋ or ⌈a/n⌉ times in each column of the matrix. Two equitable Latin rectangles are orthogonal in the usual way. Denote a set of k mutually orthogonal a × b equitable Latin rectangles as a k-MOELR(a, b; n). We show that there exists a k-MOELR(a, b; n) for all a, b, n where k is at least 3, with some exceptions.
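A small sketch of the equitability condition just defined: every symbol must occur ⌊b/n⌋ or ⌈b/n⌉ times in each row and ⌊a/n⌋ or ⌈a/n⌉ times in each column. The example rectangle is constructed ad hoc for illustration.

```python
from math import floor, ceil
from collections import Counter

def is_equitable(rect, n):
    """Check the row/column symbol-count condition for an a x b rectangle
    on n symbols (with a*b == n*n)."""
    a, b = len(rect), len(rect[0])
    rows_ok = all(
        count in (floor(b / n), ceil(b / n))
        for row in rect for count in Counter(row).values()
    )
    cols = list(zip(*rect))
    cols_ok = all(
        count in (floor(a / n), ceil(a / n))
        for col in cols for count in Counter(col).values()
    )
    return rows_ok and cols_ok

# A 2 x 8 rectangle on 4 symbols (ab = 16 = n^2): each symbol twice per row,
# at most once per column.
rect = [[0, 1, 2, 3, 0, 1, 2, 3],
        [2, 3, 0, 1, 3, 2, 1, 0]]
print(is_equitable(rect, 4))  # True
```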