34 results for Nuisance attribute projection
in Helda - Digital Repository of the University of Helsinki
Abstract:
Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the ratio of diagnostic benefit to radiation risk as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors that affect the dose, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability of late radiation effects, since their longer life expectancy is combined with the higher radiation sensitivity of developing organs. Experience with particular paediatric examinations may be very limited, and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Patient dose surveys revealed large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations. There were variations between different hospitals and examination rooms, between patients of different sizes, and between imaging techniques, emphasising the need to harmonise the examination protocols. For computed tomography, a correction coefficient that takes individual patient size into account in patient dosimetry was created. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient than multislice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicates a lack of optimisation.
For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information on patient data, exposure parameters and procedures provides tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
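As a rough sketch of how a thickness-dependent DRL curve of this kind could be constructed (a hypothetical illustration, not the graphical method of the thesis): entrance doses tend to grow roughly exponentially with patient thickness, so one can fit ln(dose) linearly against projection thickness in survey data and place the reference curve a fixed factor above the fit. All numbers and the scaling factor below are invented.

```python
import math

def drl_curve(thicknesses_cm, doses_mgy, scale=1.5):
    """Fit ln(dose) = a + b * thickness by least squares and return a
    function giving a DRL at any thickness (fitted curve times `scale`).
    Purely illustrative; the scaling factor is an assumption."""
    n = len(thicknesses_cm)
    xs = thicknesses_cm
    ys = [math.log(d) for d in doses_mgy]
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda t: scale * math.exp(a + b * t)

# Synthetic survey data: entrance dose doubling every 4 cm of thickness
thick = [8, 10, 12, 14, 16, 18]
dose = [0.05 * 2 ** ((t - 8) / 4) for t in thick]
drl = drl_curve(thick, dose)
```

With exact exponential input data the fitted curve reproduces the doses, so the DRL at 8 cm is simply 1.5 times 0.05 mGy.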
Abstract:
This study presents a population projection for Namibia for the years 2011–2020. In many countries of sub-Saharan Africa, including Namibia, population growth is still continuing even though fertility rates have declined. However, many of these countries suffer from a large HIV epidemic that is slowing down the population growth. In Namibia, the epidemic has been severe. It is therefore important to assess the future effect of HIV/AIDS on the population of Namibia. Demographic research on Namibia has not been very extensive, and population data are not widely available. Existing studies show fertility to be generally declining and mortality to be increasing significantly due to AIDS. Previous population projections predict population growth for Namibia in the near future, yet HIV/AIDS continues to shape future population developments. For the projection constructed in this study, population data are taken from the two most recent censuses, from 1991 and 2001. Data on HIV are available from the HIV Sentinel Surveys 1992–2008, which test pregnant women for HIV in antenatal clinics. Additional data were collected from different sources and recent studies. The projection is made with software (EPP and Spectrum) specially designed for developing countries with scarce data. The projection includes two main scenarios with different assumptions concerning the development of the HIV epidemic. In addition, two hypothetical scenarios are constructed: the first assuming the HIV epidemic had never existed, and the second assuming HIV treatment had never existed. The results indicate population growth for Namibia. The population in the 2001 census was 1.83 million and is projected to reach 2.38–2.39 million by 2020 in the two main scenarios. Without HIV, the population would be 2.61 million in 2020, and without treatment 2.30 million. The urban population is growing faster than the rural population.
Even though AIDS is increasing mortality, the past high fertility rates still keep the young adult age groups quite large. The HIV epidemic appears to be slowing down, but it is still increasing the mortality of the working-aged population. The initiation of HIV treatment in the public sector in 2004 seems to have affected many projected indicators, diminishing the impact of HIV on the population. For example, the rise in mortality is slowing down.
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases the only imaging techniques required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D and interactive imaging applications. This has been achieved first by the use of conventional computed tomography (CT) and more recently by cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years and its availability and use are increasing continuously. However, at present, only best practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant in dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. 
In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
Abstract:
The United States is the world's largest single market area, where demand for graphic papers increased by 80% during the last three decades. However, during the last two decades there have been major, unpredictable changes in the graphic paper markets. For example, the consumption of newsprint began to decline in the late 1980s, which was surprising in light of historical consumption and projections, and it has declined ever since. The aim of this study was to see how magazine paper consumption will develop in the United States until 2030. The long-term consumption projection was made using two main methods. The first was trend analysis, to see how and whether consumption has changed since 1980. The second was qualitative estimation. These estimates are then compared with the so-called classical model projections that are usually mentioned and used in the forestry literature. The purpose of the qualitative analysis is to study magazine paper end-use purposes and to analyze how, and with what intensity, changes in society will affect magazine paper consumption in the long term. The framework of this study covers theories such as technology adoption, electronic substitution, electronic publishing and Porter's threat of substitution. Because this study deals with markets that have shown signs of structural change, a substantial part of it covers recent developments and the newest available studies and statistics. The following were among the key findings. Different end-uses face very different futures. Electronic substitution is very likely in some end-use purposes, but not in all. Young people, i.e. future consumers, have very different habits and technological opportunities than their parents did. These will have substantial effects on magazine paper consumption in the long term.
This study concludes that the change in magazine paper consumption is more likely to be gradual (evolutionary) than a sudden collapse (revolutionary). It is also probable that the years of fast-growing magazine paper consumption are over. Beyond the decelerating growth, the consumption of magazine papers will decline slowly in the long term; the further into the future the projection is extended, the faster the decline.
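The trend-analysis step described above can be sketched as an ordinary least-squares line fitted to annual consumption and extrapolated to 2030; the consumption figures here are invented, not the study's data.

```python
def linear_trend(years, values):
    """Ordinary least-squares line value = a + b * year; returns (a, b)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    b = sum((x - my) * (y - mv) for x, y in zip(years, values)) / \
        sum((x - my) ** 2 for x in years)
    a = mv - b * my
    return a, b

# Hypothetical magazine-paper consumption (million tonnes)
years = [1980, 1990, 2000, 2010]
cons = [6.0, 8.0, 10.0, 9.0]
a, b = linear_trend(years, cons)
projection_2030 = a + b * 2030
```

A fitted straight line extrapolates the 1980–2010 average trend blindly; this is exactly why the study complements the trend with qualitative estimation, since a structural break (such as electronic substitution) invalidates the linear assumption.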
Abstract:
The safety of food has become an increasingly interesting issue to consumers and the media. It has also become a source of concern, as the amount of information on the risks related to food safety continues to expand. Today, risk and safety are permanent elements within the concept of food quality. Safety, in particular, is the attribute that consumers find very difficult to assess. The literature in this study consists of three main themes: traceability; consumer behaviour related to both quality and safety issues and perception of risk; and valuation methods. The empirical scope of the study was restricted to beef, because the beef labelling system enables reliable tracing of the origin of beef, as well as attributes related to safety, environmental friendliness and animal welfare. The purpose of this study was to examine what kind of information flows are required to ensure quality and safety in the food chain for beef, and who should produce that information. Studying the willingness to pay of consumers makes it possible to determine whether the consumers consider the quantity of information available on the safety and quality of beef sufficient. One of the main findings of this study was that the majority of Finnish consumers (73%) regard increased quality information as beneficial. These benefits were assessed using the contingent valuation method. The results showed that those who were willing to pay for increased information on the quality and safety of beef would accept an average price increase of 24% per kilogram. The results showed that certain risk factors impact consumer willingness to pay. If the respondents considered genetic modification of food or foodborne zoonotic diseases as harmful or extremely harmful risk factors in food, they were more likely to be willing to pay for quality information. The results produced by the models thus confirmed the premise that certain food-related risks affect willingness to pay for beef quality information. 
The results also showed that safety-related quality cues are significant to consumers. Above all, consumers would like to receive information on the control of zoonotic diseases that are contagious to humans. Similarly, other information related to process control ranked high among the top responses. Information on any potential genetic modification was also considered important, even though genetic modification was not regarded as a high risk factor.
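A minimal sketch of the descriptive side of such a contingent-valuation survey: tally the share of respondents willing to pay and the mean accepted price increase among them. The response data below are invented; the actual study additionally fitted models relating risk perceptions to willingness to pay.

```python
def cv_summary(responses):
    """responses: list of (willing: bool, accepted_increase_pct or None).
    Returns (share_willing, mean_increase_among_willing).
    A toy contingent-valuation tally on invented data."""
    willing = [r for r in responses if r[0]]
    share = len(willing) / len(responses)
    mean_inc = sum(r[1] for r in willing) / len(willing)
    return share, mean_inc

# Invented survey: three respondents willing to pay, one not
data = [(True, 20.0), (True, 30.0), (True, 22.0), (False, None)]
share, mean_inc = cv_summary(data)
```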
Abstract:
The topic of this dissertation is the geometric and isometric theory of Banach spaces. This work is motivated by the well-known Banach-Mazur rotation problem, which asks whether every transitive separable Banach space is isometrically a Hilbert space. A Banach space X is said to be transitive if the isometry group of X acts transitively on the unit sphere of X. In fact, some symmetry conditions weaker than transitivity are studied in the dissertation. One such condition is an almost isometric version of transitivity. Another condition investigated is convex-transitivity, which requires that the closed convex hull of the orbit of any point of the unit sphere under the rotation group is the whole unit ball. Following the tradition developed around the rotation problem, some contemporary problems are studied. Namely, we attempt to characterize Hilbert spaces by using convex-transitivity together with the existence of a 1-dimensional bicontractive projection on the space, and some mild geometric assumptions. The convex-transitivity of some vector-valued function spaces is studied as well. The thesis also touches on the convex-transitivity of Banach lattices and related geometric questions.
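The two symmetry conditions can be stated compactly. Writing S_X for the unit sphere, B_X for the closed unit ball, and Isom(X) for the group of surjective linear isometries (the notation is chosen here for illustration):

```latex
\text{transitivity:}\qquad
\forall\, x, y \in S_X \;\; \exists\, T \in \mathrm{Isom}(X) :\; Tx = y ,
\qquad
\text{convex-transitivity:}\qquad
\overline{\mathrm{conv}}\,\{\, Tx : T \in \mathrm{Isom}(X) \,\} \;=\; B_X
\quad \text{for every } x \in S_X .
```

Transitivity clearly implies convex-transitivity, since the orbit of x then already fills the whole sphere.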
Abstract:
The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane have an atomic decomposition. However, a decomposition obtained via a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain onto which the disk has been mapped. The lattice of points at which the atoms of the decomposition are evaluated usually follows the geometry of the original domain, but after mapping one domain onto another this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain, but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex space LV-infinity of measurable functions on the unit disk, together with its closed subspace HV-infinity of analytic functions. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, the space HV-infinity is the smallest possible substitute for the space H-infinity of bounded analytic functions. In the second article we extend the above result to a smoothly bounded strictly pseudoconvex domain. Here the relevant reproducing kernels are usually not known explicitly, and thus the proof of the continuity of the Bergman projection is based on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown by using peaking functions first constructed by Bell (1981).
Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. This result is generalised in the third article by constructing an atomic decomposition for the space HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be represented as a linear combination of atoms such that the coefficient sequence belongs to a suitable Köthe co-echelon space.
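Schematically, an atomic decomposition expresses every function f in the space as a series over normalized reproducing kernels ("atoms") evaluated at a lattice of points a_k, with the coefficient sequence lying in an appropriate sequence space (written here as l^p for the classical Bergman-space case; for HV-infinity the Köthe co-echelon space plays this role):

```latex
f(z) \;=\; \sum_{k} c_k \, k_{a_k}(z),
\qquad (c_k)_k \in \ell^{p},
```

where k_a denotes the normalized reproducing kernel at the point a. The lattice {a_k} must be sufficiently dense with respect to the geometry of the domain, which is precisely the connection the first article preserves by avoiding the conformal map.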
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and find cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is that of genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of the dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data, or whether the data still contains better but undiscovered dependencies.
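For illustration, the one-sided Fisher's exact p-value for a single rule X->A can be computed directly from the hypergeometric distribution; the thesis' contribution is pruning the exponential space of all such rules, which this sketch does not attempt.

```python
from math import comb

def fisher_one_sided(n, n_x, n_a, n_xa):
    """One-sided Fisher's exact p-value for the dependency rule X->A:
    the probability of observing at least n_xa rows containing both X
    and A, given that X occurs in n_x rows and A in n_a rows out of n.
    A small p-value supports a genuine positive dependency."""
    total = comb(n, n_a)
    p = 0.0
    for k in range(n_xa, min(n_x, n_a) + 1):
        p += comb(n_x, k) * comb(n - n_x, n_a - k) / total
    return p

# Toy binary data: 20 rows, X in 10 of them, A in 10, co-occurring 9 times
p = fisher_one_sided(20, 10, 10, 9)
```

For this strongly dependent toy table the p-value is about 5.5e-4, whereas a co-occurrence count of 5 (exactly what independence predicts) gives a p-value above 0.5.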
Abstract:
The paradigm of computational vision hypothesizes that any visual function - such as the recognition of your grandparent - can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which one attempts to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process. This thesis consists of an introduction and seven peer-reviewed publications; the purpose of the introduction is to present the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly review the primary challenges of visual processing and recall some current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast are processed separately in natural systems because of their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence.
Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable to priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
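A toy version of projection pursuit of the kind referred to above: scan projection directions of a two-dimensional mixture and keep the one maximizing kurtosis, which recovers the direction of the super-Gaussian (sparse) source. This is a generic textbook demo, not the algorithm of the publications.

```python
import math
import random

def kurtosis(xs):
    """Excess kurtosis of a sample (population formula; fine for a demo)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

random.seed(1)
n = 4000
# Two independent sources: s1 super-Gaussian (Laplace), s2 sub-Gaussian (uniform)
s1 = [math.copysign(math.log(1.0 - random.random()), random.random() - 0.5)
      for _ in range(n)]
s2 = [random.uniform(-1.0, 1.0) for _ in range(n)]

# Mix the sources by a 30-degree rotation
alpha = math.radians(30.0)
x1 = [math.cos(alpha) * a - math.sin(alpha) * b for a, b in zip(s1, s2)]
x2 = [math.sin(alpha) * a + math.cos(alpha) * b for a, b in zip(s1, s2)]

def proj_kurtosis(deg):
    """Kurtosis of the data projected onto the direction at angle `deg`."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return kurtosis([c * a + s * b for a, b in zip(x1, x2)])

# Projection pursuit: the kurtosis-maximizing direction recovers the mixing angle
best = max(range(180), key=proj_kurtosis)
```

Because the Laplace source has strongly positive excess kurtosis and the uniform source negative, the scan peaks near 30 degrees, i.e. the direction along which the projection is the sparse source alone.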
Abstract:
This thesis examines the mythology in and social reality behind a group of texts from the Nag Hammadi and related literature, to which certain leaders of the early church attached the label, Ophite, i.e., snake people. In the mythology, which essentially draws upon and rewrites the Genesis paradise story, the snake's advice to eat from the tree of knowledge is positive, the creator and his angels are demonic beasts and the true godhead is depicted as an androgynous heavenly projection of Adam and Eve. It will be argued that this unique mythology is attested in certain Coptic texts from the Nag Hammadi and Berlin 8502 Codices (On the Origin of the World, Hypostasis of the Archons, Apocryphon of John, Eugnostos, Sophia of Jesus Christ), as well as in reports by Irenaeus (Adversus Haereses 1.30), Origen (Contra Celsum 6.24-38) and Epiphanius (Panarion 26). It will also be argued that this so-called Ophite evidence is essential for a proper understanding of Sethian Gnosticism, often today considered one of the earliest forms of Gnosticism; there seems to have occurred a Sethianization of Ophite mythology. I propose that we replace the current Sethian Gnostic category by a new one that not only adds texts that draw upon the Ophite mythology alongside these Sethian texts, but also arranges the material in smaller typological units. I also propose we rename this remodelled and expanded Sethian corpus "Classic Gnostic." I have divided the thesis into four parts: (I) Introduction; (II) Myth and Innovation; (III) Ritual; and (IV) Conclusion. In Part I, the sources and previous research on Ophites and Sethians will be examined, and the new Classic Gnostic category will be introduced to provide a framework for the study of the Ophite evidence. 
Chapters in Part II explore key themes in the mythology of our texts, first by text comparison (to show that certain texts represent the Ophite mythology and that this mythology is different from Sethianism), and then by attempting to unveil social circumstances that may have given rise to such myths. Part III assesses heresiological claims of Ophite rituals, and Part IV is the conclusion.
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has made computed tomography perfusion (CTp) and angiography (CTA) feasible in the evaluation of acute ischemic stroke. This study focuses on new multidetector computed tomography techniques, namely whole-brain and first-pass CT perfusion, plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of the pCBV of contralateral normal brain, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11; R² was 0.73; and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on the peak enhancement of normal brain tissue, arteries and veins, stratified by region-of-interest (ROI) location, in 102 patients imaged within 3 hours of stroke onset.
A monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps obtained within 3 hours of stroke onset predict the final infarct volume, and whether all of these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). We also investigated the effect of IV-rtPA on the affected brain by measuring the salvaged tissue volume in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain tissue than did controls. Carotid CTA was compared with carotid DSA in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images, as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
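A probability-of-infarction curve of the kind described above can be illustrated with a logistic function of normalized pCBV; the midpoint and slope below are invented placeholders, not the parameters fitted in the study.

```python
import math

def p_infarct(pcbv_pct, midpoint=60.0, slope=8.0):
    """Logistic probability-of-infarction curve as a function of pCBV
    normalized to the contralateral side (percent of normal).
    The midpoint and slope are hypothetical illustration values."""
    return 1.0 / (1.0 + math.exp((pcbv_pct - midpoint) / slope))

# Lower normalized pCBV -> higher probability of infarction
```

The curve runs from near 1 in severely hypoperfused tissue down to near 0 at normal perfusion, matching the reported pattern of 0.99/0.96 inside the infarct boundary and 0.11 just outside it.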
Abstract:
Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow data assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. The observation errors of radar radial winds consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling, and the impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of the observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system.
Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in wind speed and/or direction. A bias estimation method designed for this observation type is therefore introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, enables the exploitation of radial wind observations also for NWP model validation. One-month experiments performed with HIRLAM model versions that differ only in a detail of the surface stress parameterization indicate that the use of radar wind observations in NWP model validation is very beneficial.
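The core of the radial wind observation operator, the projection of the model wind vector onto the beam direction, can be sketched as follows. Beam broadening and the atmospheric bending of the pulse path, which the thesis models, are ignored in this simplified geometric sketch.

```python
import math

def radial_wind(u, v, w, azimuth_deg, elevation_deg):
    """Project a model wind vector (u east, v north, w up; m/s) onto the
    radar beam direction given by azimuth (clockwise from north) and
    elevation above the horizon. Positive values point away from the radar."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (u * math.sin(az) + v * math.cos(az)) * math.cos(el) + w * math.sin(el)
```

For example, a pure 10 m/s westerly wind (u = 10) gives a 10 m/s radial wind when the beam points due east and zero when it points due north, which is exactly why a single radar measures only one component of the wind and why the conventional bias can vanish despite systematic speed or direction errors.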
Abstract:
This thesis is a study of the x-ray scattering properties of tissues and tumours of the breast. Clinical radiography is based on the absorption of x-rays passing through the human body and gives information about the densities of the tissues. Besides being absorbed, x-rays may change direction within the tissues due to elastic scattering or even refraction. Scattering is a nuisance to radiography in general, and to mammography in particular, because it reduces the quality of the images. However, scattered x-rays carry very useful information about the structure of the tissues at the supra-molecular level. Some pathologies, such as breast cancer, produce alterations in the structure of the tissues that are especially evident in collagen-rich tissues. On the other hand, the change of direction due to refraction of the x-rays at tissue boundaries can be mapped. The diffraction enhanced imaging (DEI) technique uses a perfect crystal to convert the angular deviations of the x-rays into intensity variations, which can be recorded as images. This technique is of special interest in cases where the densities of the tissues are very similar (as in mammography) and absorption images do not offer enough contrast. This thesis explores the structural differences between healthy and pathological collagen in breast tissue samples by the small-angle x-ray scattering (SAXS) technique and compares these differences with the morphological information found in the DEI images and the histopathology of the same samples. Several breast tissue samples were studied with the SAXS technique at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. Scattering patterns of the different tissues of the breast were acquired and compared with the histology of the samples. The scattering signals from adipose tissue (fat), connective tissue (collagen) and necrotic tissue were identified.
Moreover, a clear distinction could be made between the scattering signals from healthy collagen and from collagen in an invasive tumour. Scattering from collagen is very characteristic: it includes several scattering peaks and features that carry information about the size and the spacing of the collagen fibrils in the tissues. It was found that the collagen fibrils in invaded tumours were thinner and had a d-spacing 0.7% longer than fibrils from healthy tissues. The scattering signals from the breast tissues were compared with the histology by building colour-coded maps across the samples. The samples were also imaged with the DEI technique. There was total agreement between the scattering maps, the morphological features seen in the images and the findings of the histopathological examination. The thesis demonstrates that the x-ray scattering signal can be used to characterise tissues and that it carries important information about the pathological state of breast tissues, thus showing the potential of the SAXS technique as a possible diagnostic tool for breast cancer.
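The connection between peak position and d-spacing follows from Bragg's law in reciprocal space: the n-th order axial peak of collagen sits at q_n = 2πn/d, so a 0.7% longer d-spacing shifts the peaks to correspondingly smaller q. A minimal illustration (the numbers are typical textbook values for collagen, not measurements from the thesis):

```python
import math

# Axial d-spacing of collagen is recovered from the position q_n of the
# n-th order Bragg peak in the SAXS pattern: q_n = 2*pi*n / d.
def d_spacing(q_nm_inv, order):
    """d-spacing (nm) from a Bragg peak at q (nm^-1) of a given order."""
    return 2.0 * math.pi * order / q_nm_inv

# Illustrative numbers: healthy collagen with d ~ 65 nm puts the
# 3rd-order axial peak near q = 0.29 nm^-1.
d_healthy = 65.0                               # nm
q3_healthy = 2.0 * math.pi * 3 / d_healthy     # peak position, nm^-1

# A 0.7% longer d-spacing shifts the same peak to slightly smaller q.
d_tumour = d_healthy * 1.007
q3_tumour = 2.0 * math.pi * 3 / d_tumour

print(round(q3_healthy, 4), round(q3_tumour, 4))
print(round(100 * (d_spacing(q3_tumour, 3) / d_healthy - 1), 2))  # 0.7 % shift recovered
```

Small shifts like this are resolvable precisely because the higher-order peaks multiply the displacement in q, which is one reason the SAXS signature is sensitive enough to separate healthy from pathological collagen.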
Abstract:
The output of a laser is a high-frequency propagating electromagnetic field with superior coherence and brightness compared to that emitted by thermal sources. A multitude of different types of lasers exist, which also translates into large differences in the properties of their output. Moreover, the characteristics of the electromagnetic field emitted by a laser can be influenced from the outside, e.g., by injecting an external optical field or by optical feedback. In the case of free-running solitary class-B lasers, such as semiconductor and Nd:YVO4 solid-state lasers, the phase space is two-dimensional, the dynamical variables being the population inversion and the amplitude of the electromagnetic field. The two-dimensional structure of the phase space means that no complex dynamics can be found: if a class-B laser is perturbed from its steady state, the steady state is restored after a short transient. However, as discussed in part (i) of this Thesis, the static properties of class-B lasers, as well as their artificially or noise-induced dynamics around the steady state, can be studied experimentally in order to gain insight into laser behaviour and to determine model parameters that are not known ab initio. In this Thesis particular attention is given to the linewidth enhancement factor, which describes the coupling between the gain and the refractive index in the active material. A highly desirable attribute of an oscillator is stability, both in frequency and in amplitude. Nowadays, however, instabilities in coupled lasers have become an active area of research, motivated not only by the interesting complex nonlinear dynamics but also by potential applications. In part (ii) of this Thesis the complex dynamics of unidirectionally coupled, i.e., optically injected, class-B lasers is investigated. An injected optical field increases the dimensionality of the phase space to three by turning the phase of the electromagnetic field into an important variable.
This has a radical effect on laser behaviour, since very complex dynamics, including chaos, can be found in a nonlinear system with three degrees of freedom. The output of the injected laser can be controlled in experiments by varying the injection rate and the frequency of the injected light. In this Thesis the dynamics of unidirectionally coupled semiconductor and Nd:YVO4 solid-state lasers is studied numerically and experimentally.
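The two-dimensional, free-running case can be sketched numerically. Below is a minimal integration of dimensionless class-B rate equations for photon intensity and inversion (a standard textbook model; parameter values are illustrative assumptions, not taken from the thesis), showing that a perturbed laser returns to its steady state through damped relaxation oscillations, in contrast to the chaos that becomes possible once injection adds the field phase as a third variable.

```python
# Free-running class-B laser: two dynamical variables, the photon
# intensity I and the population inversion n (dimensionless model;
# parameter values are illustrative, not taken from the thesis).
T = 100.0   # ratio of inversion lifetime to photon lifetime (class-B: T >> 1)
P = 1.0     # pump parameter above threshold

def derivs(I, n):
    dI = n * I                         # field gain/loss set by the inversion
    dn = (P - n - (1.0 + n) * I) / T   # slow inversion dynamics
    return dI, dn

# Perturb the steady state (I = P, n = 0) and integrate with forward Euler.
I, n = P * 1.2, 0.0
dt = 0.01
trace = []
for step in range(100_000):
    dI, dn = derivs(I, n)
    I, n = I + dt * dI, n + dt * dn
    if step % 1000 == 0:
        trace.append(I)   # sample the intensity for inspection

# The perturbation decays through damped relaxation oscillations and the
# steady state is restored, as expected for a two-dimensional phase space.
print(I, n)   # close to (1.0, 0.0)
```

Because a two-dimensional continuous flow cannot be chaotic (Poincaré–Bendixson), any trajectory of this system ends on a fixed point; the injected-laser experiments in part (ii) are interesting precisely because the third dimension removes this restriction.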