Fabio Roli and Giorgio Giacinto in the Top Italian Scientists (TIS) ranking, published by Via-Academy. Media Coverage.
Fabio Roli and Giorgio Giacinto are among the Top Italian Scientists, according to the ranking published by Via-Academy, a specialized site that has surveyed and compared over four thousand researchers who have uploaded their works to Google Scholar.
From UNICA's web magazine: Top Italian Scientists, good performance by UniCa faculty. The specialized site, which hosts researchers who have submitted and uploaded their works to Google Scholar, has surveyed over four thousand researchers. For the University of Cagliari, this means worldwide impact and visibility.
Twenty-two faculty members of the University of Cagliari appear in the ranking at www.topitalianscientists.org, and there may be more: the ranking is continuously updated. This is a significant achievement for the university from several points of view. First of all, a valuable result in terms of visibility, credibility and impact within the international academic community. It is also an excellent calling card for the quality of teaching and research, useful for a targeted promotion of the degree programmes and graduate schools.
Media Coverage: Ansa.it; Unica Notizie; unionesarda.it; sardegnaoggi.it; notizie.olbia24.it; notizie.portotorres24.it; gazzettadelsulcis.it.
Congratulations to Battista Biggio, appointed IEEE Senior Member!
Battista Biggio has been elevated to IEEE Senior Member, a coveted goal for scientists and researchers in Electrical and Electronic Engineering. From the IEEE website:
Senior member is the highest grade for which IEEE members can apply. IEEE members can self-nominate, or be nominated, for Senior Member grade.
To be eligible for application or nomination, candidates must:
- Be engineers, scientists, educators, technical executives, or originators in IEEE-designated fields
- Have experience reflecting professional maturity
- Have been in professional practice for at least ten years (with some credit for certain degrees)
- Show significant performance over a period of at least five of their years in professional practice
PRA Lab and Pluribus One among the partners of the challenging European project LETS CROWD, aimed at monitoring and protecting people during mass gatherings.
LETS CROWD (Law Enforcement agencies human factor methods and Toolkit for the Security and protection of CROWDs in mass gatherings) is a project funded by the European Commission under the HORIZON 2020 Programme. In the last few years, Europe has suffered many criminal actions and terrorist attacks during mass gatherings, with a great impact on citizens and society. LEAs must face this new scenario, considered a priority by the European Union, which imposes a multitude of heterogeneous challenges. Hence, the key is to deter, prevent, protect against, pursue and effectively respond to criminal and/or terrorist actions, achieving the best possible protection for people gathering in a specific area where particular events are taking place, thus also increasing the sense of security while keeping the necessary balance between protection and the rights of EU citizens. For all these reasons, novel methodologies and tools must be investigated for strategic and operational activities, involving strong cross-border cooperation and intelligence sharing, and planning solutions for all these issues, where the human and sociological factor is often the key driver. In fact, humans play a key role in every dimension of crowd protection against criminal and terrorist acts: as perpetrators, protectors and victims.
LETS CROWD will overcome the challenges preventing the effective implementation of the European Security Model (ESM) with regard to mass gatherings. This will be achieved by providing the following to security policy practitioners and, in particular, LEAs: (1) a dynamic risk assessment methodology for the protection of crowds during mass gatherings, centred on human factors, in order to effectively produce policies and deploy adequate solutions; (2) a policy-making toolkit for the long-term, strategic decision making of security policy makers, including a database of empirical data and statistics and an analytical tool for modelling security policies; and (3) a set of human-centred tools for Law Enforcement Agencies (LEAs), including real-time crowd behaviour forecasting, innovative communication procedures, semantic intelligence applied to social networks and the internet, and novel computer vision techniques.
LETS CROWD will be a security-practitioner-driven project, fostering communication and cooperation among LEAs, first responders, civil protection and citizens in the fight against crime and terrorism during mass gatherings through a set of cooperation actions. The project will put citizens at the centre of the research and will assess and evaluate how security measures affect them, and how they perceive them, while respecting EU fundamental rights. The impact of LETS CROWD will be measured in practical demonstrations involving seven LEAs and relevant emergency services units. In order to facilitate the assessment of the performance, transferability, scalability and large-scale deployment of these solutions, the demonstrations will be conducted following eleven use cases. The project, led by ETRA Investigación y Desarrollo S.A. (Spain), will be implemented by a consortium of 16 partners from 8 different countries (including SMEs, universities and LEAs), operating in the critical areas of government, security, energy, finance, transport and utilities.
ImageCLEF 2017 Lifelog Task, September 11-14, Dublin - Call for participation
This is a cordial invitation to participate in the 1st edition of the Lifelog Task. ImageCLEF is one of the labs of CLEF 2017, which will be held in Dublin, Ireland.
The availability of a large variety of personal devices, such as smartphones, video cameras and wearable devices that allow capturing pictures, videos and audio clips at every moment of our life, is creating vast archives of personal data, where the totality of an individual's experiences, captured multi-modally through digital sensors, is stored permanently as a personal multimedia archive. These unified digital records, commonly referred to as lifelogs, have been gathering increasing attention in recent years within the research community, due to the need for systems that can automatically analyse these huge amounts of data in order to categorize, summarize and query them to retrieve the information the user may need.
Despite the increasing number of successful related workshops and panels in recent years, lifelogging has seldom been the subject of a rigorous comparative benchmarking exercise. This task aims to bring lifelogging to the attention of as wide an audience as possible and to promote research into some of the key challenges of the coming years.
The task addresses the problems of lifelog data retrieval and summarization, and it is divided into two subtasks based on the same data: a large collection of wearable camera images, descriptions of the semantic locations, and the physical activities of the lifeloggers.
The objective of the first subtask is to analyse the lifelog data and, according to several specific queries (e.g., Find the moment(s) when I was shopping for wine in a supermarket), return the correct answers. The objective of the second subtask is to analyse all the images in the dataset and summarize them according to specific requirements.
SubTask 1: Lifelog retrieval (LRT)
The participants should analyse the lifelog data and, according to several specific queries, return the correct answers. For example:
Shopping for a Bottle of Wine: Find the moment(s) when I was shopping for wine in a supermarket.
Shopping For Fish: Find the moment(s) when I was shopping for fish in the supermarket.
The Metro: Find the moment(s) when I was riding a metro.
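To make the retrieval subtask concrete, here is a minimal, purely illustrative baseline, assuming each lifelog "moment" carries a set of visual-concept labels (e.g. from an off-the-shelf concept detector); the data format, labels and evaluation are defined on the task website, not here:

```python
# Illustrative baseline for the retrieval subtask (not part of the official
# task): each moment is assumed to carry a set of concept labels.

def score(moment_labels, query_terms):
    """Fraction of query terms found among a moment's concept labels."""
    labels = {label.lower() for label in moment_labels}
    return sum(term.lower() in labels for term in query_terms) / len(query_terms)

def retrieve(moments, query_terms, k=5):
    """Return the ids of the top-k moments with a non-zero query-term overlap."""
    scored = [(score(m["labels"], query_terms), m["id"]) for m in moments]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [moment_id for s, moment_id in scored[:k] if s > 0]

# Toy, made-up moments:
moments = [
    {"id": "m1", "labels": ["supermarket", "wine", "shopping"]},
    {"id": "m2", "labels": ["metro", "train", "commuting"]},
    {"id": "m3", "labels": ["office", "desk", "computer"]},
]
print(retrieve(moments, ["wine", "supermarket"]))  # prints ['m1']
```

Real systems would of course replace the label-overlap score with learned visual features and semantic matching; the sketch only shows the query-to-moment ranking shape of the problem.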
SubTask 2: Lifelog summarization (LST)
The participants should analyse all the images and summarize them according to specific requirements. The summary must consist of 50 images and is required to be both relevant and diverse. All of the topics in this subtask have more than 50 relevant images, so a submission with fewer than 50 images will be considered an incorrectly formatted result. The selected images are considered diverse if they depict different moments of the lifelogger, in terms of activity, location, time of day, viewpoint, etc., for the queried topic. For example:
Public Transport: Summarize the use of public transport by a user.
The participants should recognize the different means of transport depicted in the images of the dataset; if a particular means of transport appears at different times of day, this should also be recognized.
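The relevance-plus-diversity requirement can be sketched with a greedy selection in the spirit of maximal marginal relevance (MMR). The relevance scores and feature vectors below are assumed inputs (e.g. from a topic classifier and an image descriptor); the task itself does not prescribe any particular method:

```python
# Illustrative diversity-aware selection (MMR-style); all data are made up.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def mmr_select(images, k=50, lam=0.7):
    """images: list of (image_id, relevance, feature_vector) tuples.
    Greedily picks up to k images, trading off relevance (weight lam)
    against similarity to the images already selected."""
    selected, remaining = [], list(images)
    while remaining and len(selected) < k:
        def mmr_score(img):
            _, relevance, features = img
            max_sim = max((cosine(features, s[2]) for s in selected), default=0.0)
            return lam * relevance - (1 - lam) * max_sim
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return [img[0] for img in selected]

# Toy example: two near-duplicate bus shots and one metro shot.
images = [
    ("bus_morning", 0.90, [1.0, 0.0]),
    ("bus_morning_dup", 0.85, [1.0, 0.05]),
    ("metro_evening", 0.80, [0.0, 1.0]),
]
print(mmr_select(images, k=2))  # prints ['bus_morning', 'metro_evening']
```

Note how the near-duplicate bus image is skipped in favour of the less relevant but visually different metro image, which is exactly the behaviour the subtask rewards.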
The participants will have a chance to submit a paper describing their system, which will be published in the CLEF Labs Working Notes. Furthermore, the groups of the best performing systems will be invited to give an oral presentation at CLEF 2017 and others will be given the option of presenting a poster.
For more details on the task please visit http://www.imageclef.org/2017/lifelog
Important dates:
14.11.2016: Registration opens.
14.11.2016: Development data release.
20.03.2017: Test data release.
01.05.2017: Deadline for submission of runs by the participants 11:59:59 PM GMT.
15.05.2017: Release of processed results by the task organizers.
26.05.2017: Deadline for submission of working notes papers by the participants 11:59:59 PM GMT.
17.06.2017: Notification of acceptance of the working notes papers.
01.07.2017: Camera ready working notes papers.
11.-14.09.2017: CLEF 2017, Dublin, Ireland.
Organizers:
- Duc-Tien Dang-Nguyen, Dublin City University, Ireland (duc-tien.dang-nguyen(at)dcu.ie)
- Luca Piras, University of Cagliari, Italy (luca.piras(at)diee.unica.it)
- Michael Riegler, University of Oslo, Norway (michaari(at)student.matnat.uio.no)
- Cathal Gurrin, Dublin City University, Ireland (cgurrin(at)computing.dcu.ie)
- Giulia Boato, University of Trento, Italy (giulia.boato(at)unitn.it)