931 results for Space-time codes (STCs)
Abstract:
This thesis aims to present a packet-level code with performance very close to optimum for satellite communication designs. Its second goal is to understand whether handling errors directly is still much harder than handling erasures. Satellite communication applications nowadays all use packet erasure coding to encode and decode information. The structure of erasure decoding is very simple, because only a Cyclic Redundancy Check (CRC) is needed to implement it. The problem arises with medium-sized or small packets (for example, smaller than 100 bits), where the overhead of the CRC becomes too expensive. A solution is to use Vector Symbol Decoding (VSD), which achieves the same performance as erasure codes without requiring a CRC. First, a brief introduction describes how packet-level coding was born and how it has evolved. The q-ary Symmetric Channel (qSC) is then introduced, together with the derivation of both its capacity and its Random Coding Bound (RCB). VSD is then proposed with the hope of outperforming Verification Based Decoding (VBD) on the qSC. Finally, the actual performance of VSD is estimated via numerical simulations. Possible performance improvements with respect to VBD are discussed, as well as possible future applications. We also answer the question of whether it is still so much harder to handle errors than erasures.
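The CRC-based erasure mechanism described above can be sketched in a few lines. This is a generic illustration (the function names are ours, not the thesis's): each packet carries a CRC-32 trailer, and a failed check marks the packet as an erasure rather than attempting error correction.

```python
import zlib

def encode_packet(payload: bytes) -> bytes:
    # Append a CRC-32 trailer so the receiver can detect corruption.
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def check_packet(packet: bytes):
    # Return the payload if the CRC matches, or None to mark an erasure.
    payload, crc = packet[:-4], packet[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") == crc:
        return payload
    return None
```

Note that the 4-byte trailer alone adds more than 30% overhead to a packet shorter than 100 bits, which is exactly the cost the thesis seeks to avoid with VSD.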
Abstract:
The radio communication system is one of the most critical systems of the overall satellite platform: it often represents the only means of communication between a spacecraft and the Ground Segment or among a constellation of satellites. This thesis focuses on specific innovative architectures for on-board and on-ground radio systems. In particular, this work is an integral part of a space program started in 2004 at the University of Bologna, Forlì campus, which led to the completion of the microsatellite ALMASat-1, successfully launched on board the VEGA maiden flight. The success of this program led to the development of a second microsatellite, named ALMASat-EO, a three-axis stabilized microsatellite able to capture images of the Earth surface. The first objective of this study was therefore the investigation of an innovative, efficient and low-cost architecture for on-board radio communication systems. The design and realization of the TT&C system and of the high data rate transmitter for image downlink are thoroughly described in this work, together with the development of the embedded hardware and the adopted antenna systems. Moreover, considering the increasing interest in the development of constellations of microsatellites, in particular those flying in close formation, a careful analysis has been carried out on the development of innovative communication protocols for inter-satellite links. Furthermore, in order to investigate the system aspects of space communications, a study was carried out at ESOC with the objective of designing, implementing and testing two experimental devices for the enhancement of the ESA Ground Segment (GS). Thus, a significant portion of this thesis is dedicated to the results of a method for improving the phase stability of GS radio frequency equipment by means of real-time phase compensation, and to a new way of performing two-antenna arraying tracking using existing ESA tracking station facilities.
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges. One of these was timekeeping. In this regard, a further contribution is a novel data structure, called addressable binary heap (ABH).
The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping of high-resolution timers.
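As an illustration of the timekeeping problem the ABH addresses, the following sketch uses Python's standard array-based `heapq` (not the thesis's pointer-based ABH; names are our own) to show tick-less timer management: the system sleeps until the earliest armed deadline instead of polling on a periodic tick.

```python
import heapq

class TimerQueue:
    """Tick-less timer queue: wake up only at the earliest deadline."""

    def __init__(self):
        self._heap = []  # entries are (deadline, timer_id)

    def arm(self, deadline, timer_id):
        # O(log n) insertion keeps the earliest deadline at the root.
        heapq.heappush(self._heap, (deadline, timer_id))

    def next_deadline(self):
        # The next wake-up instant; None means the CPU may sleep indefinitely.
        return self._heap[0][0] if self._heap else None

    def expire(self, now):
        # Pop and return the id of every timer whose deadline has passed.
        fired = []
        while self._heap and self._heap[0][0] <= now:
            fired.append(heapq.heappop(self._heap)[1])
        return fired
```

A pointer-based heap such as the ABH additionally makes individual timer nodes addressable, so cancellation does not require a search.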
Abstract:
Many psychophysical studies suggest that target depth and direction during reaches are processed independently, but the neurophysiological support for this view is so far limited. Here, we investigated the representation of reach depth and direction by single neurons in an area of the medial posterior parietal cortex (V6A). Single-unit activity was recorded from V6A in two Macaca fascicularis monkeys performing a fixation-to-reach task to targets at different depths and directions. We found that in a substantial percentage of V6A neurons, depth and direction signals jointly influenced fixation, planning and arm movement-related activity in 3D space. While target depth and direction were equally encoded during fixation, depth tuning became stronger during arm movement planning, execution and target holding. The spatial tuning of fixation activity was often maintained across epochs, and this occurred more frequently in depth. These findings support for the first time the existence of a common neural substrate for the encoding of target depth and direction during reaching movements in the posterior parietal cortex. The present results also highlight the presence in V6A of several types of cells that process eye position and arm movement planning and execution signals, independently or jointly, in order to control reaches in 3D space. It is possible that depth and direction also influence the metrics of the reach action and that this effect on the reach kinematic variables can account for the spatial tuning we found in V6A neural activity. For this reason, we recorded and analyzed behavioral data while one monkey performed reaching movements in 3D space. We evaluated how the spatial position of the target, in particular its depth and direction, affected the kinematic parameters and trajectories describing the properties of the motor action.
Abstract:
The thesis work concerns X-ray spectrometry for both medical and space applications and is divided into two sections. The first section addresses an X-ray spectrometric system designed to study radiological beams and is devoted to the optimization of diagnostic procedures in medicine. A parametric semi-empirical model capable of efficiently reconstructing diagnostic X-ray spectra on 'middle power' (mid-range) computers was developed and tested. In addition, different silicon diode detectors were tested as real-time detectors in order to provide a real-time evaluation of the spectrum during diagnostic procedures. This project contributes to the field by presenting an improved simulation of a realistic X-ray beam emerging from a common X-ray tube, with a complete and detailed spectrum that lends itself to further studies of added filtration, thus providing an optimized beam for different diagnostic applications in medicine. The second section describes the preliminary tests carried out on the first version of an Application Specific Integrated Circuit (ASIC), integrated with a large-area position-sensitive Silicon Drift Detector (SDD), to be used on board future space missions. This technology has been developed for the ESA project LOFT (Large Observatory for X-ray Timing), a new medium-class space mission that the European Space Agency has been assessing since February 2011. The LOFT project was proposed as part of the Cosmic Vision Program (2015-2025).
Abstract:
The membrane protein Cytochrome c Oxidase (CcO) is one of the most important functional biomolecules. It appears in almost every eukaryotic cell and in many bacteria. Although the different species differ in the number of subunits, the functional differences are merely marginal. CcO is the terminal link in the electron transfer pathway of the mitochondrial respiratory chain. Electrons transferred to the catalytic center of the enzyme lead to the reduction of molecular oxygen to water. Oxygen reduction is coupled to the pumping of protons into the inter-membrane space and hence generates a difference in the electrochemical potential of protons across the inner mitochondrial membrane. This potential difference drives the synthesis of adenosine triphosphate (ATP), which is the universal energy carrier within all biological cells.

The goal of the present work is to contribute to a better understanding of the functional mechanism of CcO by using time-resolved surface-enhanced resonance Raman spectroscopy (TR-SERRS). Despite intensive research effort within the last decades, the functional mechanism of CcO is still the subject of controversial discussions. The primary goal of this dissertation was to initiate electron transfer to the redox centers CuA, heme a, heme a3 and CuB electrochemically and to observe the corresponding redox transitions in situ, with a focus on the two heme structures, by using SERRS. A measuring cell was developed that allowed the combination of electrochemical excitation with Raman spectroscopy for the purpose of performing the corresponding measurements. Cytochrome c was used as a benchmark system to test the new measuring cell and to prove the feasibility of the Raman measurements. In contrast to CcO, the heme protein cytochrome c contains only a single heme structure. Nevertheless, characteristic Raman bands of the hemes can be observed for both proteins.

In order to investigate CcO, it was immobilized on top of a silver substrate and embedded into an artificial membrane. The catalytic activity of CcO, and therefore the complete functional capability of the enzyme within the biomimetic membrane architecture, was verified using cyclic voltammetry. Raman spectroscopy was performed using a special nano-structured silver surface, which was developed within the scope of the present work. This new substrate combines two fundamental properties: it facilitates the formation of a protein-tethered bilayer lipid membrane (ptBLM) and it allows Raman spectra to be obtained with sufficiently high signal-to-noise ratios. Spectro-electrochemical investigations showed that at open circuit potential the enzyme exists in a mixed-valence state, with heme a and heme a3 in the reduced and oxidized state, respectively. This was considered an intermediate state between the non-activated and the fully activated state of CcO. Time-resolved SERRS measurements revealed that a hampered electron transfer to the redox center heme a3 characterizes this intermediate state.
Abstract:
Cytochrome c Oxidase (CcO), complex IV of the respiratory chain, is one of the heme-copper oxidases and plays an important role in cell metabolism. The enzyme contains four prosthetic groups and is located in the inner membrane of mitochondria and in the cell membrane of some aerobic bacteria. CcO catalyzes the electron transfer (ET) from cytochrome c to O2, with the actual reaction taking place at the binuclear center (CuB-heme a3). The reduction of O2 to two H2O consumes four protons. In addition, four protons are transported across the membrane, creating an electrochemical potential difference of these ions between the matrix and the intermembrane space. Despite their importance, membrane proteins such as CcO are still poorly studied, which is why the mechanism of the respiratory chain has not yet been fully elucidated. The goal of this work is to contribute to the understanding of the function of CcO. To this end, CcO from Rhodobacter sphaeroides was bound in a defined orientation to a functionalized metal electrode via a His-tag attached to the C-terminus of subunit II. The first electron acceptor, CuA, thereby lies closest to the metal surface. A lipid bilayer was then inserted in situ between the bound proteins, yielding the so-called protein-tethered bilayer lipid membrane (ptBLM). For this, the optimal surface concentration of bound proteins had to be determined. Electrochemical impedance spectroscopy (EIS), surface plasmon resonance spectroscopy (SPR) and cyclic voltammetry (CV) were applied to characterize the activity of CcO as a function of packing density. The main part of the work concerns the investigation of direct ET to CcO under anaerobic conditions.
The combination of time-resolved surface-enhanced infrared absorption spectroscopy (tr-SEIRAS) and electrochemistry proved particularly suitable for this purpose. In a first study, the ET was investigated by means of fast-scan CV, recording CVs of non-activated as well as activated CcO at different scan rates. The activated form was obtained after catalytic turnover of the protein in the presence of O2. A four-ET model was developed to analyze the CVs. The method allows distinguishing between the mechanisms of sequential and independent ET to the four centers CuA, heme a, heme a3 and CuB. It also yields the standard redox potentials and the kinetic coefficients of the ET. In a second study, tr-SEIRAS was applied in step-scan mode. Square-wave potential pulses were applied to the CcO, and SEIRAS in ATR mode was used to record spectra at defined time slices. From these spectra, individual bands were isolated that show changes in the vibrational modes of amino acids and peptide groups as a function of the redox state of the centers. Based on assignments from the literature, obtained by potentiometric titration of CcO, the bands could be tentatively assigned to the redox centers. The band areas plotted against time then reflect the redox kinetics of the centers and were in turn evaluated with the four-ET model. The results of both studies allow the conclusion that the ET to CcO in a ptBLM most probably follows the sequential mechanism, corresponding to the natural ET from cytochrome c to CcO.
Abstract:
The space environment has always been one of the most challenging for communications, at both the physical and the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the normal TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols, such as TCP, reliability is obtained by means of an ARQ (Automatic Retransmission reQuest) method, which, however, performs poorly when the transmission channel has long delays. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, when single bits are flipped by channel noise, redundancy bits can be exploited to recover the original information. In the presence of binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information. FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to Delay-/Disruption-Tolerant Networking (DTN) networks is the Licklider Transmission Protocol (LTP), designed to obtain better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
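The idea of applying erasure coding to packets instead of bits can be illustrated with a toy example (our own sketch, not LTP's actual coding scheme): a single XOR parity packet over k equal-length data packets lets the receiver rebuild any one lost packet without a retransmission round-trip, which is exactly what makes ECs attractive on long-delay links.

```python
def xor_parity(packets):
    # Build one parity packet as the bytewise XOR of k equal-length packets.
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    # `received` maps packet index -> packet for the k-1 packets that arrived;
    # XORing them all with the parity reconstructs the single missing packet.
    missing = bytearray(parity)
    for p in received.values():
        for i, b in enumerate(p):
            missing[i] ^= b
    return bytes(missing)
```

Practical ECs (e.g. Reed-Solomon or LDPC-based codes) generalize this to recover several losses, but the principle of trading redundancy for round-trips is the same.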
Abstract:
The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs), from both an experimental and a numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experimental tests were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters, or were directly compared with numerical results in order to extend the experimental database. The results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities: the hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises device density and minimises the marine space required for the installation.
Abstract:
The PhD research activity took place in the field of space debris. In detail, it focused on the possibility of detecting space debris from a space-based platform, addressing both the software and the hardware of this detection system. On the software side, a program was developed to detect an object in space and locate it in the sky by solving the star field. On the hardware side, the possibility of adapting a ground telescope for space activity was considered and tested on a candidate electronic board.
Abstract:
The study of supermassive black hole (SMBH) accretion during the phase of activity (when these objects become active galactic nuclei, AGN), and of its relation to host-galaxy growth, requires large datasets of AGN, including a significant fraction of obscured sources. X-ray data are strategic in AGN selection, because at X-ray energies the contamination from non-active galaxies is far less significant than in optical/infrared surveys, and the selection of obscured AGN, including a fraction of heavily obscured AGN, is much more effective. In this thesis, I present the results of the Chandra COSMOS Legacy survey, a 4.6 Ms X-ray survey covering the equatorial COSMOS area. The COSMOS Legacy depth (flux limit f = 2x10^-16 erg s^-1 cm^-2 in the 0.5-2 keV band) is significantly better than that of other X-ray surveys of similar area, and it represents the path for surveys with future facilities, like Athena and X-ray Surveyor. The final Chandra COSMOS Legacy catalog contains 4016 point-like sources, 97% of which have a redshift; 65% of the sources are optically obscured and potentially caught in the phase of main BH growth. We used the sample of 174 Chandra COSMOS Legacy sources at z>3 to place constraints on the BH formation scenario. We found a significant disagreement between our space density and the predictions of a physical model of AGN activation through major mergers. This suggests that in our luminosity range BH triggering through secular accretion is likely preferred to a major-merger triggering scenario. Thanks to its large statistics, the Chandra COSMOS Legacy dataset, combined with the other multiwavelength COSMOS catalogs, will be used to answer questions related to a large number of astrophysical topics, with particular focus on SMBH accretion in different luminosity and redshift regimes.
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is the verification of safety distances between individual components, the so-called clearance analysis. Engineers determine for specific components whether, in their rest position as well as during a motion, they maintain a prescribed safety distance to the surrounding components. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call this the set of all tolerance-violating primitives. We present a holistic solution, which can be divided into the following three major topics.

In the first part of this work we examine algorithms that check, for two triangles, whether they are tolerance-violating. We present various approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are significantly more performant than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the most performant.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient runtimes, it is above all important to take the required safety distance sensibly into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Beyond that, we develop strategies for recognizing primitives as tolerance-violating without computing a costly primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects consisting of many hundreds of thousands of primitives each.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids. We call this data structure Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach is able to losslessly compress the cell contents of a uniform grid, exploiting the redundant cell contents, to a fifth of their previous size and to decompress them at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications for various path-planning problems.
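The broad-phase role played by the uniform grids described in the second part can be illustrated with a simplified sketch (points instead of triangle primitives; all names are our own): with a cell size no smaller than the safety distance, only pairs in the 27 surrounding cells can be tolerance-violating, so every other pair is discarded without any primitive-primitive test.

```python
from collections import defaultdict
from itertools import product

def build_grid(points, cell):
    # Bucket each point of object A by its integer grid cell.
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    return grid

def candidate_pairs(grid_a, points_b, cell):
    # For each point of object B, collect the points of A in the 27
    # neighbouring cells; only these pairs can violate a safety
    # distance <= cell, so all others are skipped in O(1).
    pairs = []
    for j, (x, y, z) in enumerate(points_b):
        cx, cy, cz = int(x // cell), int(y // cell), int(z // cell)
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for i in grid_a.get((cx + dx, cy + dy, cz + dz), []):
                pairs.append((i, j))
    return pairs
```

The surviving candidate pairs would then be handed to a narrow-phase tolerance test such as the dual-space triangle-triangle test the thesis develops.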
Abstract:
Polar codes are the first class of error-correcting codes proven to achieve capacity on every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "channel polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and for the Successive Cancellation List Decoder is compared against the results reported in the literature. In order to improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as inner code and a CRC as outer code. We also propose a new technique for analyzing channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
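The polar encoding mentioned above rests on the Arikan kernel F = [[1, 0], [1, 1]]: a length-N block is multiplied over GF(2) by the log2(N)-fold Kronecker power of F. A compact recursive sketch (without the bit-reversal permutation, frozen-bit selection, or any decoder):

```python
def polar_encode(u):
    # Recursive Arikan transform over GF(2): for N = 2 it maps
    # (u1, u2) -> (u1 XOR u2, u2); larger power-of-two blocks apply
    # the same butterfly across the two halves and recurse on each.
    n = len(u)  # must be a power of two
    if n == 1:
        return list(u)
    half = n // 2
    top = [u[i] ^ u[i + half] for i in range(half)]
    bottom = [u[i + half] for i in range(half)]
    return polar_encode(top) + polar_encode(bottom)
```

For N = 4 this reproduces x = uG with G = F kron F, i.e. x = (u0^u1^u2^u3, u1^u3, u2^u3, u3); a real encoder would additionally freeze the low-reliability input positions identified by channel polarization.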
Abstract:
Neglect is defined as the failure to attend and to orient to the contralesional side of space. A horizontal bias towards the right visual field is a classical finding in patients who have suffered a right-hemispheric stroke. The vertical dimension of spatial attention orienting has so far been only sparsely investigated. The aim of this study was to investigate the specificity of this vertical bias by means of a search task, which taps a more pronounced top-down attentional component. Eye movements and behavioural search performance were measured in thirteen patients with left-sided neglect after right-hemispheric stroke and in thirteen age-matched controls. Concerning behavioural performance, patients found significantly fewer targets than healthy controls in both the upper and the lower left quadrant. However, when targets were located in the lower left quadrant, patients needed more visual fixations (and therefore longer search times) to find them, suggesting a time-dependent vertical bias.
Abstract:
Identification of the subarachnoid space has traditionally been achieved by either a blind landmark-guided approach or prepuncture ultrasound assistance. To assess the feasibility of performing spinal anaesthesia under real-time ultrasound guidance in routine clinical practice, we conducted a single-centre prospective observational study among patients undergoing lower-limb orthopaedic surgery. A spinal needle was inserted unassisted within the ultrasound transducer imaging plane using a paramedian approach (i.e., the operator held the transducer in one hand and the spinal needle in the other). The primary outcome measure was the success rate of CSF acquisition under real-time ultrasound guidance; CSF was located in 97 out of 100 consecutive patients within a median of three needle passes (IQR 1-6). CSF was not acquired in three patients. Subsequent attempts combining landmark palpation and prepuncture ultrasound scanning resulted in successful spinal anaesthesia in two of these patients, with the third patient requiring general anaesthesia. The median time from spinal needle insertion to completion of intrathecal injection was 1.2 minutes (IQR 0.83-4.1), demonstrating the feasibility of this technique in routine clinical practice.