900 results for LARGE-AREA TELESCOPE
Abstract:
Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two wavefront sensors for the mid- and high-altitude atmosphere analysis has been integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture.
On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. To compensate for this effect, a straightforward solution is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous LGS image position measurement in the presence of elongated spots, and the determination of the number of photons required to achieve a certain average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype makes it possible to simulate realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
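As a rough illustration of the Weighted Center of Gravity centroiding named in this abstract, the sketch below computes the centroid of a spot image multiplied pixel-wise by a weighting function. The spot model, grid size and uniform weight are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def weighted_center_of_gravity(spot, weight):
    """Instantaneous spot-position estimate: centroid of the image
    multiplied pixel-wise by a fixed weighting function."""
    w = spot * weight
    ny, nx = spot.shape
    y, x = np.mgrid[0:ny, 0:nx]
    return (x * w).sum() / w.sum(), (y * w).sum() / w.sum()

# Toy elongated spot: an anisotropic Gaussian centred at (3.5, 3.5)
# on an 8x8 subaperture grid, wider along x than along y.
ny = nx = 8
yy, xx = np.mgrid[0:ny, 0:nx]
spot = np.exp(-((xx - 3.5) ** 2 / (2 * 2.0 ** 2)
                + (yy - 3.5) ** 2 / (2 * 0.7 ** 2)))
weight = np.ones_like(spot)   # uniform weight -> plain centre of gravity
cx, cy = weighted_center_of_gravity(spot, weight)
```

In practice the weighting function is chosen to suppress the low-signal wings of the elongated spot, which is where photon noise degrades the plain centre of gravity most.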
Abstract:
The quality of astronomical sites is the first factor to be considered to obtain the best performance from telescopes. In particular, the efficiency of large telescopes in the UV, IR, radio etc. is critically dependent on atmospheric transparency. It is well known that the random optical effects induced on light propagation by the turbulent atmosphere also limit a telescope's performance. Nowadays the importance of correlating the main atmospheric physical parameters with the optical quality reachable by large-aperture telescopes is clear. Sky quality evaluation improved with the introduction of new techniques and new instrumentation, and with the understanding of the link between meteorological (or synoptic) parameters and the observational conditions, thanks to the application of the theories of electromagnetic wave propagation in turbulent media: what we now call astroclimatology. At present, site campaigns have evolved and are performed using the classical scheme of optical seeing properties, meteorological parameters, sky transparency, sky darkness and cloudiness. New concepts have been added, related to geophysical properties such as seismicity, microseismicity, local climate variability, atmospheric conditions related to ground optical turbulence and ground wind regimes, aerosol presence, and the use of satellite data. The purpose of this project is to provide reliable methods to analyze the atmospheric properties that affect ground-based optical astronomical observations and to correlate them with the main atmospheric parameters generating turbulence and affecting photometric accuracy.
The first part of the research concerns the analysis and interpretation of long- and short-time-scale meteorological data at two of the most important astronomical sites, located in very different environments: the Paranal Observatory in the Atacama Desert (Chile), and the Observatorio del Roque de Los Muchachos (ORM) in La Palma (Canary Islands, Spain). The optical properties of airborne dust at ORM have been investigated by collecting outdoor data with a ground-based dust monitor. Because of its dryness, Paranal is a suitable observatory for near-IR observations, so the extinction properties in the spectral range 1.00-2.30 um have been investigated using an empirical method. Furthermore, this PhD research made use of several turbulence profilers in the site selection for the European Extremely Large Telescope (E-ELT). During the campaigns, the properties of the turbulence at different heights at Paranal and at sites located in northern Chile and Argentina were studied. This made it possible to characterize the surface-layer turbulence at Paranal and its connection with local meteorological conditions.
Abstract:
In this thesis the use of wide-field imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today's imaging algorithms are to be used to image them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques. Errors have been found at the level of one part in 1.1 million; these are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13-gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI have been explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator.
Using existing software it was possible to characterise the network speed particularly for MPI applications. The capabilities of the DiFX software correlator, running on this cluster, were measured for a range of observation parameters and were shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and science which could be achieved. Results from a 22 GHz calibrator survey are also presented. 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
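The phase-centre shift described in this abstract amounts to a per-visibility phase rotation. The sketch below shows the standard small-field (two-term) approximation; the thesis's actual procedure includes further corrections (hence its quoted accuracy), and the baseline coordinates and source offset here are made-up test values.

```python
import numpy as np

def shift_phase_centre(vis, u, v, dl, dm):
    """Rotate visibilities so the phase centre moves by (dl, dm) in
    direction cosines; small-field approximation ignoring the w term."""
    phase = -2.0 * np.pi * (u * dl + v * dm)
    return vis * np.exp(1j * phase)

# A point source offset by (dl, dm) from the phase centre produces
# vis = exp(2*pi*i*(u*dl + v*dm)); after shifting the phase centre
# onto the source, all phases should collapse to ~0 (vis ~ 1).
rng = np.random.default_rng(0)
u = rng.uniform(-1e4, 1e4, 100)   # baseline coordinates in wavelengths
v = rng.uniform(-1e4, 1e4, 100)
dl, dm = 1e-4, -2e-4              # source offset in direction cosines
vis = np.exp(2j * np.pi * (u * dl + v * dm))
shifted = shift_phase_centre(vis, u, v, dl, dm)
```

Shifting many phase centres out of one correlated dataset is what makes the "over 700 smaller UV datasets" strategy above tractable: each sub-field is imaged around its own shifted centre.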
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool, consisting of data weighting and model scaling, has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information which can be used for a simple, rough qualitative and/or quantitative interpretation. This information also serves as a priori information to design models for the inversion and/or to assist the interpretation of inversion results. The results of theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results gave more insight into the genesis, physics and post-eruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization.
Smaller (weaker) secondary gravity and magnetic anomalies on the background of the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the post-eruptive development. Contrary to postulates referring to kimberlite pipes, there is no generalized systematic relation between diameter and height, nor between the geophysical anomaly and the dimensions of maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a long-lasting phreatomagmatic process and hence to the formation of large structures. Concerning the maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, while the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70 and 85°. Note that these numerical characteristics, especially the dip angle, hold for the maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
Abstract:
Arid regions are dominated, to a much larger degree than humid regions, by major catastrophic events. Although most of Egypt lies within the great hot desert belt, it experiences, especially in the north, some torrential rainfall, which causes flash floods all over the Sinai Peninsula. Flash floods in hot deserts are characterized by high velocity and short duration with a sharp discharge peak. Large sediment loads may be carried by floods, threatening fields and settlements in the wadis and even the people living there. The extreme spottiness of rare heavy rainfall, well known to desert people everywhere, precludes any efficient forecasting. Thus, since the limitation of data still reflects pre-satellite methods, the chances of developing a warning system for floods in the desert seem remote. The relatively short flood-to-peak interval, a characteristic of desert floods, presents an additional impediment to the efficient use of warning systems. The present thesis contains an introduction and five chapters. Chapter one points out the physical setting of the study area: the geological setting, such as the outcrop lithology of the study area and the deposits. The alluvial deposits of Wadi Moreikh have been analyzed using OSL dating to determine the age of the deposits and the palaeoclimatic conditions. The chapter also covers the stratigraphy and the structural geology, including the main faults and folds. In addition, it describes the present climate conditions, such as temperature, humidity, wind and evaporation, and presents the types of soil and the natural vegetation cover of the study area using unsupervised classification of ETM+ images. Chapter two presents the morphometric analysis of the main basins and their drainage networks in the study area. It is divided into three parts: the first part covers the morphometric analysis of the drainage networks, which were extracted from two main sources, topographic maps and DEM images.
Basins and drainage networks are considered major factors influencing flash floods. Most of the elements affecting the network were studied, such as stream order, bifurcation ratio, stream lengths, stream frequency, drainage density, and drainage patterns. The second part of this chapter presents the morphometric analysis of the basins, covering area, dimensions, shape and surface, while the third part covers the morphometric analysis of the alluvial fans which form most of the El-Qaá plain. Chapter three deals with surface runoff through rainfall and loss analysis. The main subject of this chapter is rainfall, which has been studied in detail as the main driver of runoff. Therefore, all rainfall characteristics are considered, such as rainfall types, distribution, intensity, duration, frequency, and the relationship between rainfall and runoff. The second part of this chapter deals with the estimation of water losses by evaporation and infiltration, which together are the main losses directly affecting the magnitude of runoff. Finally, chapter three points out the factors influencing desert runoff and the runoff generation mechanism. Chapter four is concerned with the assessment of flood hazard, for which it is important to estimate runoff and to create a map of affected areas. The chapter therefore consists of four main parts. The first part presents runoff estimation, the different methods of estimating runoff and its variables, such as the runoff coefficient, lag time, time of concentration, runoff volume, and frequency analysis of flash floods. The second part deals with extreme event analysis. The third part shows the map of affected areas for every basin and the flash flood degrees; here the DEM was used to extract the drainage networks and to determine the main streams, which are normally more dangerous than the others. Finally, the fourth part presents the risk zone map of the total study area, which is of high interest for planning activities.
Chapter five, the last chapter, is concerned with flash flood hazard mitigation. It consists of three main parts. The first covers flood prediction and the methods which can be used to predict and forecast floods. The second part aims to determine the best methods to mitigate flood hazard in the arid zone, and especially in the study area. The third part points out the development perspective for the study area, indicating the places in the El-Qaá plain suitable for economic activities.
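The runoff variables listed in chapter four (runoff coefficient, time of concentration, peak discharge) can be illustrated with two textbook formulas; this is a generic sketch, not necessarily the method used in the thesis, and all the input values are hypothetical.

```python
def rational_peak_discharge(c, intensity_mm_h, area_km2):
    """Rational method: Q [m^3/s] = C * i [mm/h] * A [km^2] / 3.6,
    where 1/3.6 converts mm/h * km^2 to m^3/s."""
    return c * intensity_mm_h * area_km2 / 3.6

def kirpich_tc_minutes(length_m, slope):
    """Kirpich time of concentration tc [min], with the longest flow
    path length in metres and the slope in m/m."""
    return 0.0195 * length_m ** 0.77 * slope ** -0.385

# Hypothetical desert-wadi values: C = 0.45, i = 20 mm/h, A = 50 km^2.
q_peak = rational_peak_discharge(0.45, 20.0, 50.0)   # -> 125 m^3/s
tc = kirpich_tc_minutes(5000.0, 0.02)                # ~1 hour
```

The high runoff coefficients typical of bare desert surfaces and the short times of concentration are exactly what produces the sharp discharge peaks described in the abstract.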
Abstract:
Cor-Ten is a particular kind of low-alloy steel; thanks to its aesthetic features and resistance to atmospheric corrosion, this material is widely used in architectural, artistic and infrastructural applications. After environmental exposure, Cor-Ten steel exhibits the characteristic ability to protect itself from corrosion through the development of a stable and adherent protective layer. However, some environmental factors can influence the formation and stability of this patina. In particular, exposure of Cor-Ten to a polluted atmosphere (NOx, SOx, O3) or to coastal areas (marine spray) may compromise the protective layer and, as a consequence, release alloying metals, which can accumulate near the structures. Some of these metals, such as Cr and Ni, can be very dangerous for soils and water because of their high toxicity. The aim of this work was to study the corrosion behavior of Cor-Ten exposed at an urban-coastal site (Rimini, Italy). Three different kinds of commercial surface finish (bare and pre-patinated, with or without a beeswax coating) were examined, in both sheltered and unsheltered exposure conditions. Wet depositions brushing the specimen surfaces (leaching solutions) were collected monthly and analyzed to evaluate the extent of metal release and the form in which the metals leave the surface, for example as water-soluble compounds or non-adherent corrosion products. Five alloying metals (Fe, Cu, Cr, Mn and Ni) and nine ions (Cl-, NO3-, NO2-, SO42-, Na+, Ca2+, K+, Mg2+, NH4+) were determined by Atomic Absorption Spectroscopy and Ion Chromatography, respectively. Furthermore, the evolution and behaviour of the patina were followed periodically by surface investigations (SEM-EDS and Raman Spectroscopy).
After two years of exposure, the results show that bare Cor-Ten, cheaper than the other analyzed specimens, undergoes the greatest mass variation, yet its metal release is comparable to that of the pre-patinated samples. The behavior of pre-patinated steel with and without the beeswax coating does not show significant differences. This exposure environment does not allow complete stabilization of the patina; nevertheless, an estimate of the metal release after 10 years of exposure indicates that the environmental impact of Cor-Ten is very low: for example, the release of chromium in the soluble fraction is less than 10 mg for an exposed wall of 10 m2.
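A 10-year release figure of this kind can be obtained by scaling the measured per-area release; the sketch below shows a deliberately crude linear extrapolation, with a made-up measured value, and is not the estimation procedure actually used in the study.

```python
def extrapolate_release(measured_mg_m2, measured_years, target_years, wall_m2):
    """Linear extrapolation of cumulative metal release to a longer
    exposure and a given wall area. This is an upper-bound sketch:
    real release rates decay as the patina stabilises."""
    return measured_mg_m2 * (target_years / measured_years) * wall_m2

# Hypothetical: 0.18 mg/m2 of soluble Cr measured over the 2-year exposure.
cr_10y = extrapolate_release(0.18, 2.0, 10.0, 10.0)   # mg over 10 years, 10 m2 wall
```

With these illustrative numbers the 10-year release stays below the 10 mg figure quoted above, and a stabilising patina would push the true value lower still.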
Abstract:
Concerns over global change and its effect on coral reef survivorship have highlighted the need for long-term datasets and proxy records to interpret environmental trends and inform policymakers. Citizen science programs have proved to be a valid method for collecting data, reducing financial and time costs for institutions. This study is based on the elaboration of data collected by recreational divers, and its main purpose is to evaluate changes in the state of coral reef biodiversity in the Red Sea over a long-term period and to validate the volunteer-based monitoring method. Volunteer recreational divers completed a questionnaire after each dive, recording the presence of 72 animal taxa and negative reef conditions. Comparisons were made between records from volunteers and independent records from a marine biologist who performed the same dive at the same time. A total of 500 volunteers were tested in 78 validation trials. The relative values of accuracy, reliability and similarity are comparable to those achieved by volunteer divers on precise transects in other projects, or in community-based terrestrial monitoring. In all, 9301 recreational divers participated in the monitoring program, completing 23,059 survey questionnaires over a 5-year period. The volunteer-sightings-based index showed significant differences between the geographical areas. The area of Hurghada is characterized by a medium-low biodiversity index, heavily damaged by uncontrolled anthropic exploitation. Coral reefs along the Ras Mohammed National Park at Sharm el Sheikh, conversely, showed a high biodiversity index. The detected pattern seems to be correlated with the conservation measures adopted. In our experience and that of other research institutes, citizen science can complement conventional methods and significantly reduce costs and time. By involving recreational divers we were able to build a large dataset covering a wide geographic area.
The main limitation remains the difficulty of obtaining a homogeneous spatial sampling distribution.
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are discussed first. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method which uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
Abstract:
Biotic and abiotic phenological observations can be collected from continental down to local spatial scales. Plant phenological observations can only be recorded wherever there is vegetation; fog, snow and ice are available as phenological parameters wherever they appear. The singularity of phenological observations is the possibility of spatial intensification to a microclimatic scale, where the equipment for meteorological measurements is too expensive for intensive campaigning. The omnipresence of region-specific phenological parameters allows monitoring for a spatially much more detailed assessment of climate change than is possible with weather data. We demonstrate this concept with phenological observations from a special network in the Canton of Berne, Switzerland, with up to 600 observation sites (more than 1 per 10 km² of the inhabited area). Classic cartography, gridding, integration into a Geographic Information System (GIS) and large-scale analysis are the steps towards a detailed knowledge of the topoclimatic conditions of a mountainous area. Examples of urban phenology provide other types of spatially detailed applications. Large potential for phenological mapping in future analyses lies in combining traditionally observed species-specific phenology with remotely sensed and modelled phenology, which provide strong spatial information. This is a long history, from cartographic intuition to algorithm-based representations of phenology.
Abstract:
BACKGROUND: In general, cantons regulate and control the Swiss health service system; patient flows within and between cantons are thereby partially disregarded. This paper develops an alternative spatial model, based upon the construction of orthopedic hospital service areas (HSAOs), and introduces indices for the analysis of patient streams in order to identify areas, irrespective of canton, with diverse characteristics, importance, needs, or demands. METHODS: HSAOs were constructed using orthopedic discharge data. Patient streams between the HSAOs were analysed by calculating three indices: the localization index (% of local residents discharged locally), the net index (the ratio of discharges of nonlocal incoming residents to outgoing local residents), and the market share index (% of local resident discharges among all discharges in local hospitals). RESULTS: The 85 orthopedic HSAOs show a median localization index of 60.8% and a market share index of 75.1%, and 30% of HSAOs have a positive net index. Insurance class of bed, admission type, and patient age are partially but significantly associated with these indicators. A trend towards more centrally provided health services can be observed not only in large urban HSAOs such as Geneva, Bern, Basel, and Zurich, but also in HSAOs in mountain sport areas such as Sion, Davos, or St. Moritz. Furthermore, elderly and emergency patients are more frequently treated locally than younger people or those having elective procedures. CONCLUSION: The division of Switzerland into HSAOs provides an alternative spatial model for analysing and describing patient streams in health service utilization. Because this small-area model allows more in-depth analysis of patient streams both within and between cantons, it may improve support and planning of resource allocation for in-patient care in the Swiss healthcare system.
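The three indices defined in the METHODS section reduce to simple ratios of discharge counts. The sketch below implements them directly from the abstract's definitions; the discharge counts are hypothetical, chosen only so that the localization index lands near the reported median of 60.8%.

```python
def localization_index(local_in_local, local_out):
    """% of an HSAO's residents who are discharged from local hospitals."""
    return 100.0 * local_in_local / (local_in_local + local_out)

def net_index(incoming_nonlocal, outgoing_local):
    """Ratio of incoming nonlocal discharges to outgoing local residents;
    > 1 means the HSAO attracts more patients than it loses."""
    return incoming_nonlocal / outgoing_local

def market_share(local_in_local, incoming_nonlocal):
    """% of local-hospital discharges that are the HSAO's own residents."""
    return 100.0 * local_in_local / (local_in_local + incoming_nonlocal)

# Hypothetical HSAO: 608 of 1000 residents treated locally, 392 treated
# elsewhere, 200 nonlocal residents treated in local hospitals.
li = localization_index(608, 392)   # 60.8 %
ni = net_index(200, 392)            # < 1: net exporter of patients
ms = market_share(608, 200)         # ~75.2 %
```

A "positive" net index in the RESULTS (reported for 30% of HSAOs) corresponds to a ratio above 1, i.e. more incoming than outgoing patients.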
Abstract:
We showed that when the CA3 pyramidal neurons in the caudal 80% of the dorsal hippocampus had almost completely disappeared, the efferent pathway of CA3 was rarely detectable. We used the mouse pilocarpine model of temporal lobe epilepsy (TLE), and iontophoretically injected the anterograde tracer Phaseolus vulgaris leucoagglutinin (PHA-L) into the gliotic CA3, the medial septum and the nucleus of the diagonal band of Broca, the median raphe, and the lateral supramammillary nuclei, or the retrograde tracer cholera toxin B subunit (CTB) into the gliotic CA3 area of the hippocampus. In the afferent pathway, the number of neurons projecting to CA3 from the medial septum and the nucleus of the diagonal band of Broca, the median raphe, and the lateral supramammillary nuclei increased significantly. In the hippocampus, where CA3 pyramidal neurons were partially lost, calbindin-, calretinin- and parvalbumin-immunopositive back-projection neurons from the CA1-CA3 area were observed. Sprouting of Schaffer collaterals, with an increased number of large boutons on both sides of the CA1 area, particularly in the stratum pyramidale, was found. When the CA3 pyramidal neurons in the caudal 80% of the dorsal hippocampus have almost completely disappeared, surviving CA3 neurons in the rostral 20% of the dorsal hippocampus may play an important role in transmitting the hyperactivity of granule cells to surviving CA1 neurons or to the dorsal part of the lateral septum. We conclude that reorganization of the CA3 area with its downstream or upstream nuclei may be involved in the occurrence of epilepsy.
Abstract:
Steers were sorted into four groups based on hip height and fat cover at the start of the finishing period. Each group of sorted steers was fed diets containing 0.59 or 0.64 Mcal NEg per lb. of diet dry matter. Steers with less initial fat cover (0.08 in.) had less carcass fat cover 103 days later than those with more (0.17 in.). The steers with less fat cover accumulated fat at a faster rate, but this was not apparent before 80 days. Accretion of fat was best predicted by an exponential growth equation and was not affected by the two concentrations of energy fed in this study. Steers with greater initial height accumulated fat cover at a slower rate than shorter steers. This difference was interpreted to mean that large-frame steers accumulate subcutaneous fat at a slower rate than medium-frame steers. The increase in ribeye area was best described by a linear equation; initial fat cover, hip height, and the concentration of energy in the diet did not affect the growth rate of this muscle. Predicting carcass fat cover from the initial ultrasound measurement of fat thickness identified 46 of the 51 carcasses with less than 0.4 in. of fat cover. Twelve carcasses predicted to have less than 0.4 in. of fat cover had more than 0.4 in., and five carcasses predicted to have more than 0.4 in. actually had less. Accurate initial ultrasound measurements of fat thickness might therefore be useful for sorting cattle for specific marketing grids.
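The exponential growth equation mentioned for fat accretion is commonly fitted by log-linear least squares; the sketch below shows that generic procedure on synthetic readings, not the study's actual data or fitted coefficients.

```python
import math

def fit_exponential(ts, ys):
    """Least-squares fit of y = a * exp(b * t) via log-linearisation:
    regress ln(y) on t, then back-transform the intercept."""
    n = len(ts)
    lys = [math.log(y) for y in ys]
    tbar = sum(ts) / n
    lbar = sum(lys) / n
    b = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, lys))
         / sum((t - tbar) ** 2 for t in ts))
    a = math.exp(lbar - b * tbar)
    return a, b

# Synthetic fat-cover readings (in.) at days 0, 40, 80 and 103,
# generated from an assumed a = 0.08 in., b = 0.01 per day.
ts = [0.0, 40.0, 80.0, 103.0]
ys = [0.08 * math.exp(0.01 * t) for t in ts]
a, b = fit_exponential(ts, ys)
```

The fitted rate constant b is what the study compares between frame sizes; the finding above corresponds to a smaller b for large-frame steers.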
Abstract:
1. The acceptance of reserves as a useful management strategy relies on evidence of their effectiveness in preserving stocks of harvested species and conserving biodiversity. A history of ad hoc decisions in terrestrial and marine protected area planning has meant that many of these areas are contributing inefficiently to conservation goals. The conservation value of existing protected areas should be assessed when planning the placement of additional areas in a reserve network. 2. This study tested (1) the effectiveness of protection for intertidal molluscs of a marine reserve (Bouddi Marine Extension, NSW, Australia) established in 1971, and (2) the contribution of the protected area to the conservation of regional species, assemblages, and habitats. 3. The shell length and population density of one harvested species (Cellana tramoserica) and three non-harvested species (Bembicium nanum, Morula marginalba, Nerita atramentosa) of intertidal molluscs were examined in the protected area and two reference locations over two seasons. 4. The heavily collected limpet C. tramoserica was significantly larger in the protected area and was the only species to exhibit a significant difference. No species differed significantly in population density between the protected area and the reference locations. 5. Temporally replicated surveys of macro-molluscs at 21 locations over 75 km of coastline identified that the existing protected area included 50% of species, two of five assemblage types and 19 of 20 intertidal rocky shore habitats surveyed in the study region. Reservation of a further three rocky reefs would protect a large proportion of species (71%), a representative of each assemblage and all habitat types. 6. Despite originally being selected in the absence of information on regional biodiversity, the protected area is today an effective starting point for expansion to a regional network of intertidal protected areas.
Abstract:
We studied the influence of surveyed area size on density estimates by means of camera-trapping in a low-density felid population (1-2 individuals/100 km²). We applied non-spatial capture-recapture (CR) and spatial CR (SCR) models for Eurasian lynx during winter 2005/2006 in the northwestern Swiss Alps, sampling an area divided into 5 nested plots ranging from 65 to 760 km². CR model density estimates (95% CI) for models M0 and Mh decreased from 2.61 (1.55-3.68) and 3.6 (1.62-5.57) independent lynx/100 km², respectively, in the smallest area surveyed to 1.20 (1.04-1.35) and 1.26 (0.89-1.63) independent lynx/100 km², respectively, in the largest. SCR model density estimates also decreased with increasing sampling area, but not significantly. High individual range overlap in relatively small areas (the edge effect) is the most plausible reason for this positive bias in the CR models. Our results confirm that SCR models are much more robust to changes in trap array size than CR models, thus avoiding the overestimation of density in smaller areas. However, when a study is concerned with monitoring population changes, large spatial efforts (area surveyed ≥760 km²) are required to obtain reliable and precise density estimates at these population densities and recapture rates.
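The bias mechanism described here is easy to see in the simplest non-spatial CR setting: abundance is estimated from capture histories and then divided by the surveyed area, so when individuals range across the plot boundary the effective area is larger than the nominal one and density is inflated. The sketch below uses the two-occasion Chapman estimator with made-up capture counts; the study itself used likelihood-based models M0 and Mh over many occasions.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    n1, n2 = animals caught on occasions 1 and 2; m2 = recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def density_per_100km2(n_hat, area_km2):
    """Naive non-spatial density: abundance over the nominal plot area."""
    return 100.0 * n_hat / area_km2

# Hypothetical counts: 6 lynx photographed in session 1, 5 in session 2,
# 4 of them recaptures, over the largest 760 km2 plot.
n_hat = chapman_estimate(6, 5, 4)          # ~7.4 lynx
d = density_per_100km2(n_hat, 760.0)       # ~1 lynx / 100 km2
```

Dividing the same abundance by a small nominal plot (say 65 km² instead of 760 km²) would inflate the density severalfold, which is exactly the edge effect the SCR models avoid by estimating animal activity centres explicitly.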
Abstract:
The reconstruction of past flash floods in ungauged basins involves a high level of uncertainty, which increases if other processes are involved, such as the transport of large wood material. An important flash flood occurred in 1997 in Venero Claro (Central Spain), causing significant economic losses. The wood material clogged bridge sections, raising the water level upstream. The aim of this study was to reconstruct this event, analysing the influence of woody debris transport on the flood hazard pattern. Because the reach in question was affected by backwater effects due to bridge clogging, using only high-water marks or palaeostage indicators may overestimate discharges, so other methods are required to estimate peak flows. The peak discharge was therefore estimated (123 ± 18 m³ s⁻¹) using indirect methods, but one-dimensional hydraulic simulation was also used to validate these indirect estimates through an iterative process (127 ± 33 m³ s⁻¹) and to reconstruct the bridge obstruction, yielding the blockage ratio during the 1997 event (~48%) and the bridge clogging curves. Rainfall-runoff modelling with stochastic simulation of different rainfall field configurations also helped to confirm that a peak discharge greater than 150 m³ s⁻¹ is very unlikely and that the estimated discharge range is consistent with the estimated rainfall amount (233 ± 27 mm). It was observed that the backwater effect due to the obstruction (water level ~7 m) made the 1997 flood (~35-year return period) equivalent to the 50-year flood. This allowed the equivalent return period to be defined as the recurrence interval of an event of specified magnitude which, where large woody debris is present, is equivalent in water depth and extent of flooded area to a more extreme event of greater magnitude. These results highlight the need to include obstruction phenomena in flood hazard analysis.
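The return-period arithmetic behind the "35-year flood behaving like a 50-year flood" statement can be sketched with a Gumbel flood-frequency fit: each return period T maps to a discharge quantile, and the obstruction makes the 35-year discharge reach the water level of the (larger) 50-year discharge. The Gumbel parameters below are hypothetical, not the fitted values for Venero Claro.

```python
import math

def gumbel_quantile(T, mu, beta):
    """Discharge with return period T years under a Gumbel fit to
    annual peak flows (location mu, scale beta)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def return_period(q, mu, beta):
    """Inverse mapping: return period of a given discharge q."""
    p_exceed = 1.0 - math.exp(-math.exp(-(q - mu) / beta))
    return 1.0 / p_exceed

mu, beta = 60.0, 25.0                 # hypothetical Gumbel parameters, m^3/s
q35 = gumbel_quantile(35.0, mu, beta) # ~35-year peak discharge
q50 = gumbel_quantile(50.0, mu, beta) # larger 50-year discharge
```

The equivalent return period defined in the abstract is then the T whose unobstructed water level matches the obstructed event's level: here the 1997 flood (T ≈ 35) acquires an equivalent return period of 50 because the ~48% blockage raised its water level to that of q50.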