935 results for Optical signal and image processing device
Abstract:
Tailoring the properties of materials by femtosecond laser processing has been proposed in the last decade as a powerful approach for technological applications, ranging from optics to biology. Although most of the research output in this field concerns femtosecond laser processing of single materials, either organic or inorganic, a similar approach has more recently been proposed to develop advanced hybrid nanomaterials. Here, we report results on the use of femtosecond lasers to process hybrid nanomaterials composed of polymeric and glassy matrices containing metal or semiconductor nanostructures. We present results on the use of femtosecond pulses to induce Cu and Ag nanoparticles in the bulk of borate and borosilicate glasses, which can be applied to a new generation of waveguides. We also report on 3D polymeric structures, fabricated by two-photon polymerization, containing Au and ZnO nanostructures with intense two-photon fluorescence. This femtosecond-laser-processing approach to fabricating hybrid materials containing metal or semiconductor nanostructures is promising for optical sensors and photonic devices.
Abstract:
This work reports on the construction and spectroscopic analysis of optical micro-cavities (OMCs) that emit efficiently at ~1535 nm. The emission wavelength matches the third transmission window of commercial optical fibers, and the OMCs were based entirely on silicon. The sputtering deposition method was adopted in the preparation of the OMCs, which comprised two Bragg reflectors and one spacer layer made of either Er- or ErYb-doped amorphous silicon nitride. The luminescence signal extracted from the OMCs originated from the 4I13/2→4I15/2 transition of Er3+ ions, and its intensity proved to be highly dependent on the presence of Yb3+ ions. According to the results, the Er3+-related light emission was improved by a factor of 48 when the Er3+ ions were combined with Yb3+ ions and inserted in the spacer layer of the OMC. The results also showed the effectiveness of the present experimental approach in producing Si-based light-emitting structures whose main characteristics are: (a) compatibility with the present microelectronics industry, (b) the deposition of optical-quality layers with accurate composition control, and (c) no need for uncommon elements or compounds nor for extensive thermal treatments. Along with the fundamental characteristics of the OMCs, this work also discusses the impact of the Er3+-Yb3+ ion interaction on the emission intensity as well as the potential of the present findings.
Abstract:
[EN] This article describes an implementation of the optical flow estimation method introduced by Zach, Pock and Bischof. This method is based on the minimization of a functional containing a data term using the L1 norm and a regularization term using the total variation of the flow. The main feature of this formulation is that it allows discontinuities in the flow field while being more robust to noise than the classical approach. The algorithm is an efficient numerical scheme, which solves a relaxed version of the problem by alternate minimization.
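For reference, the energy behind this TV-L1 formulation and its relaxed form are commonly written as below; the notation (lambda for the data weight, theta for the coupling parameter, u0 for the linearization point) follows the usual presentation of the method rather than any particular implementation:

E(u) = \int_\Omega \big( \lambda\,|\rho(u)| + |\nabla u_1| + |\nabla u_2| \big)\,dx, \qquad \rho(u) = I_1(x + u_0) + \nabla I_1(x + u_0)\cdot(u - u_0) - I_0(x)

E_\theta(u, v) = \int_\Omega \Big( |\nabla u_1| + |\nabla u_2| + \tfrac{1}{2\theta}\,|u - v|^2 + \lambda\,|\rho(v)| \Big)\,dx

The alternate minimization then iterates two steps: with v fixed, the u-update is a total-variation denoising (ROF) problem; with u fixed, the v-update reduces to a pointwise thresholding of the linearized data term.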
Abstract:
[EN] The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework in which the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow to the derivatives of the image. Since there are infinitely many vector fields that satisfy the optical flow constraint, the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions. In the presence of large displacements, the method fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. In order to tackle this nonlinear formulation, we linearize it and solve the method iteratively at each scale. Here there are two common approaches: one computes the motion increment in the iterations, while the other, which we follow, computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
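As a rough illustration of the single-scale core that such an implementation iterates at each pyramid level, the following sketch applies the classical Horn-Schunck update; the parameter values, derivative scheme and averaging kernel are illustrative choices, not those of the reference implementation.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I0, I1, alpha=15.0, n_iter=100):
    # I0, I1: consecutive grayscale frames as 2-D float arrays
    I0 = I0.astype(np.float64)
    I1 = I1.astype(np.float64)
    Iy, Ix = np.gradient(0.5 * (I0 + I1))   # spatial derivatives
    It = I1 - I0                            # temporal derivative
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)
    # Kernel computing the local average of the neighbouring flow values
    avg = np.array([[1., 2., 1.], [2., 0., 2.], [1., 2., 1.]]) / 12.0
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Jacobi-style update derived from the Euler-Lagrange equations
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v

A coarse-to-fine version would run this routine from the coarsest pyramid level to the finest, warping I1 towards I0 with the upsampled flow before each refinement.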
Abstract:
[EN] In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
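The spatial energy minimized by this type of method is commonly written as follows, with w = (u, v) the flow, \Psi(s^2) = \sqrt{s^2 + \epsilon^2} a robust penalizer, gamma weighting the gradient constancy term and alpha the smoothness weight; this is the usual presentation of the model rather than the exact notation of this implementation:

E(w) = \int_\Omega \Psi\!\big( |I(x + w) - I(x)|^2 + \gamma\,|\nabla I(x + w) - \nabla I(x)|^2 \big)\,dx + \alpha \int_\Omega \Psi\!\big( |\nabla u|^2 + |\nabla v|^2 \big)\,dx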
Abstract:
We analyse the influence of colour information in optical flow methods. Typically, most of these techniques compute their solutions using grayscale intensities, due to their simplicity and faster processing, and ignore the colour features. However, current processing systems have greatly reduced this computational cost and, on the other hand, it is reasonable to assume that a colour image offers more details of the scene, which should facilitate finding better flow fields. The aim of this work is to determine whether a multi-channel approach provides a large enough improvement to justify its use. In order to address this evaluation, we use a multi-channel implementation of a well-known TV-L1 method. Furthermore, we review the state of the art in colour optical flow methods. In the experiments, we study various solutions using grayscale and RGB images from recent evaluation datasets to verify the benefits of colour in motion estimation.
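One common way to obtain such a multi-channel data term, assuming the C channels are weighted equally, is to attach one linearized constancy residual rho_c per colour channel to a single shared flow field:

E(u) = \int_\Omega \Big( \lambda \sum_{c=1}^{C} |\rho_c(u)| + |\nabla u_1| + |\nabla u_2| \Big)\,dx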
Abstract:
[EN] The aim of this work is to study several strategies for the preservation of flow discontinuities in variational optical flow methods. We analyze the combination of robust functionals and diffusion tensors in the smoothness assumption. Our study includes the use of tensors based on decreasing functions, which have been shown to provide good results. However, this approach presents several limitations and usually does not perform better than other, more basic approaches. It typically introduces instabilities in the computed motion fields in the form of independent blobs of vectors with large magnitude...
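For orientation, an image-driven anisotropic smoothness term of the kind discussed is often written with a diffusion tensor whose eigenvalue across the edge is given by a decreasing function g(.) of the image gradient; the particular form below is a generic textbook example, not necessarily the tensor studied in this work:

E_s(u, v) = \int_\Omega \big( \nabla u^{\top} D(\nabla I)\,\nabla u + \nabla v^{\top} D(\nabla I)\,\nabla v \big)\,dx, \qquad D(\nabla I) = g(|\nabla I|^2)\,\frac{\nabla I\,\nabla I^{\top}}{|\nabla I|^2} + \frac{\nabla I^{\perp}\,\nabla I^{\perp\top}}{|\nabla I|^2}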
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is deployed into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference with the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, that have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging can play, and already plays, an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and to extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data-processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison with scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports, or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed-object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network and distributed surveillance communities. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power-management policies are essential for achieving long-term operation of standalone distributed cameras and for reducing power consumption. We have used an adaptive controller, namely Model Predictive Control (MPC), to improve system performance, outperforming naive power-management policies.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal results both from progressive short-pulse ablation and from classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
Abstract:
In this work, structure-property relationships of the conjugated model polymer MEH-PPV were investigated. Precipitation fractionation was used to obtain MEH-PPV with different molecular weights (Mw), in particular low-Mw MEH-PPV, since this is optimally suited for optical waveguide devices. We found that the preparation of a sufficient amount of low-Mw MEH-PPV with a narrow Mw distribution depends essentially on a suitable choice of the solvent and of the temperature during the addition of the precipitant. As an alternative, UV-induced chain-scission effects were investigated. From the comparison of both approaches we conclude that precipitation fractionation is better suited than UV treatment for preparing MEH-PPV with a specific Mw, since the UV light generates chain defects along the polymer backbone. 1H NMR and FTIR spectroscopy were used to investigate these chain defects. We also observed that the wavelengths of the absorption maxima of the MEH-PPV fractions increase with chain length until the number of repeat units reaches n = 110. This value is significantly larger than previously reported. Optical properties of MEH-PPV waveguides were investigated, and it was shown that the optical constants can be reproduced excellently. We studied the influence of solvent and temperature during spin coating on film thickness, surface roughness, refractive index, birefringence and waveguide attenuation loss. We found that with increasing boiling point of the solvent the film thickness and the roughness decrease, while the refractive index, the birefringence and the waveguide losses increase. We conclude that high solvent boiling points lead to low evaporation rates, which favours aggregate formation during spin coating. In contrast, an elevated temperature during film preparation increases film thickness and roughness, whereas the refractive index and the birefringence decrease. Dip coating was used for film preparation on glass substrates and fused-silica fibres. The film thickness depends on the solution concentration, the withdrawal speed and the immersion time. By dip coating we deposited MEH-PPV layers on bottle microresonators to study all-optical switching processes. This approach, particularly for low-Mw MEH-PPV, is promising for all-optical signal processing with large bandwidth. In addition, the morphology of thin films of other PPV derivatives was investigated by FTIR spectroscopy. We found that the degree of alkyl substitution has a strong influence on the average orientation of the polymer backbones in thin films.
Abstract:
This work is devoted to the investigation of the photophysical processes that occur in blends of electron donors and electron acceptors for application in organic solar cells. The electron donors used are the copolymer PBDTTT-C, consisting of benzodithiophene and thienothiophene units, and the small molecule p-DTS(FBTTh2)2, which contains silicon-bridged dithiophene as well as fluorinated benzothiadiazole and dithiophene. As electron acceptors, a planar perylene-3,4:9,10-tetracarboxylic diimide (PDI) derivative and various fullerene derivatives are used. PDI derivatives are considered promising alternatives to fullerenes owing to their structural, optical and electronic properties, which can be tuned by chemical synthesis. The strongest argument for PDI derivatives is their absorption in the visible range of the solar spectrum, which can improve the photocurrent. However, fullerene-based blends usually outperform donor-PDI blends in efficiency. To identify the disadvantage of the PDI-based blends compared with the corresponding fullerene-based blends, the different donor-acceptor combinations are investigated with respect to their optical, electronic and structural properties. Time-resolved spectroscopy, in particular transient absorption (TA) spectroscopy, is used to analyse charge generation, and the comparison of donor-PDI blend films with donor-fullerene blend films shows that the formation of charge-transfer states is one of the main loss channels. Furthermore, blends of PBDTTT-C and [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM) are studied by TA spectroscopy on time scales from ps to µs, and it is shown that the triplet state of the polymer is populated via non-geminate recombination of free charges on a sub-ns time scale. Advanced data-analysis methods, such as multivariate curve resolution (MCR), are applied to separate overlapping data signals. In addition, the regeneration of charge carriers by triplet-triplet annihilation on a ns-µs time scale is demonstrated. Moreover, the influence of the solvent additive 1,8-diiodooctane (DIO) on the performance of p-DTS(FBTTh2)2:PDI solar cells is investigated. The findings of morphological and photophysical experiments are combined to relate the structural properties and the photophysics to the relevant device parameters. Time-resolved photoluminescence (TRPL) measurements show that the use of DIO leads to a smaller reduction of the photoluminescence, which can be attributed to a larger phase separation. Moreover, TA spectroscopy shows that the use of DIO leads to an improved crystallinity of the active layer and promotes the generation of free charges. For a detailed analysis of the signal decay, a model is applied that accounts for the simultaneous decay of bound CT states and free charges, and optimized donor-acceptor blends show a larger fraction of non-geminate recombination of free charge carriers. In a further case study, the influence of the fullerene derivative, namely IC60BA and PC71BM, on the performance and photophysics of the solar cells is investigated.
A combination of thin-film structural characterization and time-resolved spectroscopy reveals that blends using ICBA as the electron acceptor show poorer splitting of charge-transfer states and suffer from stronger geminate recombination compared with PCBM-based blends. This can be attributed to the smaller driving force for charge separation as well as to the higher disorder of the ICBA-based blends, both of which hinder charge separation. In addition, the influence of neat fullerene domains on the performance of organic solar cells based on blends of the thienothiophene-based polymer pBTTT-C14 and PC61BM is investigated. To this end, the photophysics of films with donor-acceptor blend ratios of 1:1 and 1:4 is compared. While 1:1 blends exhibit only a co-crystalline phase in which fullerenes intercalate between the side chains of pBTTT, the excess of fullerene in the 1:4 samples results in the formation of neat fullerene domains in addition to the co-crystalline phase. Transient absorption spectroscopy demonstrates that charge-transfer states in 1:1 blends decay mainly by geminate recombination, whereas in 1:4 blends a considerable fraction of charges can overcome their mutual Coulomb attraction and form free charge carriers, which eventually recombine non-geminately.
Abstract:
This thesis aims to find a procedure for isolating specific features of the current signal from a plasma focus for medical applications. The structure of the current signal inside a plasma focus is exclusive to this class of machines, and a specific analysis procedure has to be developed. The hope is to find one or more features that show a correlation with the delivered dose. The study of the correlation between the current discharge signal and the dose delivered by a plasma focus could be of some importance not only for the practical application of dose prediction but also for expanding the knowledge about plasma focus physics. Various classes of time-frequency analysis techniques are implemented in order to solve the problem.
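As a minimal sketch of one common time-frequency representation (the STFT spectrogram) applied to a sampled discharge-current trace, the snippet below extracts two simple scalar features; the sampling rate, window length and feature choice are illustrative assumptions, not the procedure developed in the thesis.

import numpy as np
from scipy.signal import spectrogram

def current_signal_features(current, fs=1e8, nperseg=256):
    # current: 1-D array of the digitized discharge current; fs: sampling rate in Hz
    f, t, Sxx = spectrogram(current, fs=fs, nperseg=nperseg)
    # Example scalar features: total spectral energy and the frequency bin
    # that carries the most energy over the whole discharge
    total_energy = Sxx.sum()
    dominant_freq = f[np.argmax(Sxx.sum(axis=1))]
    return total_energy, dominant_freq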
Abstract:
Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen and has been reported to assist in the 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup. Calibration, patient registration, view direction, and projection obstruction remain unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, handheld, navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy yielded a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.
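The "virtual camera" idea can be sketched as follows: 3-D points of the patient-specific model, expressed in the tracked pose of the projector, are mapped through the projector's calibrated intrinsics exactly as a camera would project them. The variable names and the use of OpenCV here are illustrative, not the paper's implementation.

import numpy as np
import cv2

def render_overlay_points(model_pts, R, t, K, dist_coeffs):
    # model_pts: (N, 3) points of the patient-specific model
    # R, t: pose of the model in the projector frame (from the tracking system)
    # K, dist_coeffs: projector calibration, obtained as for a camera
    rvec, _ = cv2.Rodrigues(R)            # rotation matrix -> rotation vector
    img_pts, _ = cv2.projectPoints(model_pts.astype(np.float64),
                                   rvec, t.astype(np.float64),
                                   K, dist_coeffs)
    return img_pts.reshape(-1, 2)         # pixel coordinates in the projector image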
Abstract:
The task considered in this paper is performance evaluation of region segmentation algorithms in the ground-truth-based paradigm. Given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to consider the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning. By doing so, we obtain a variety of performance measures which have not been used before in image processing. In particular, some of these measures have the highly desired property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
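As a minimal illustration of treating two segmentations as clusterings of the pixel set and scoring them with standard clustering-comparison measures, the sketch below uses two common measures; these are typical examples of such measures, not necessarily the exact set studied in the paper.

import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def compare_segmentations(machine_labels, ground_truth_labels):
    # Both inputs: integer label images of identical shape; each pixel's label
    # is interpreted as its cluster assignment
    a = np.asarray(machine_labels).ravel()
    b = np.asarray(ground_truth_labels).ravel()
    return {
        "adjusted_rand": adjusted_rand_score(b, a),
        "normalized_mutual_info": normalized_mutual_info_score(b, a),
    }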
Abstract:
Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit of a few centimetres in diameter for objects in Low Earth Orbit (LEO) and of about two decimetres in diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces. Both provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled "Space-Based Optical Observation of Space Debris". The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept have been elaborated. The instrument system architecture covers the telescope, camera and onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on the use of either a frame-transfer charge coupled device (CCD) or a cooled hybrid sensor with fast read-out. A four-megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms. In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given. The architectural design of the instrument is discussed, and the main tradeoffs are outlined. An insight into the results of the performance simulations is provided.