<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0" xml:lang="ja">
	<channel>
		<title>HASCA2025</title>
		<link>http://hasca2025.hasc.jp/</link>
		<atom:link href="http://hasca2025.hasc.jp/rss2.xml" rel="self" type="application/rss+xml" />
		<description></description>
		<language>ja</language>
		<copyright>Copyright (C) 2026 HASCA2025 All rights reserved.</copyright>
		<lastBuildDate>Thu, 09 Oct 2025 18:49:39 +0900</lastBuildDate>
		<generator>a-blog cms</generator>
		<docs>http://blogs.law.harvard.edu/tech/rss</docs>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Organizers &amp; Committee</title>
			<link>http://hasca2025.hasc.jp/pc/index.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h694">ORGANIZERS</h2>
				

				
			
				
				
				<ul >
<li>Kazuya MURAO (Ritsumeikan University, Japan)</li>
<li>Yu ENOKIBORI (Nagoya University, Japan)</li>
<li>Hristijan GJORESKI (Ss. Cyril and Methodius University, N. Macedonia)</li>
<li>Paula LAGO (Concordia University, Canada)</li>
<li>Tsuyoshi OKITA (Kyushu Institute of Technology, Japan)</li>
<li>Pekka SIIRTOLA (University of Oulu, Finland)</li>
<li>Kei HIROI (Kyoto University, Japan)</li>
<li>Philipp M. SCHOLL (University of Freiburg, Germany)</li>
<li>Mathias CILIBERTO (University of Sussex, UK)</li>
<li>Kenta URANO (Nagoya University, Japan)</li>
<li>Marius Bock (University of Siegen, Germany)</li>
</ul>
				

				
			
				
				
				<h2 id="h696">ADVISORY BOARDS</h2>
				

				
			
				
				
				<ul >
<li>Nobuo Kawaguchi (Nagoya University, Japan)</li>
<li>Nobuhiko Nishio (Ritsumeikan University, Japan)</li>
<li>Daniel Roggen (University of Sussex, UK)</li>
<li>Sozo Inoue (Kyushu Institute of Technology, Japan)</li>
<li>Susanna Pirttikangas (University of Oulu, Finland)</li>
<li>Kristof van Laerhoven (University of Freiburg, Germany)</li>
</ul>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>pc</category>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/pc/index.html</guid>
			<pubDate>Thu, 01 May 2025 16:34:22 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Call for Contributions</title>
			<link>http://hasca2025.hasc.jp/cfp/entry-64.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
<p >We are pleased to announce that the HASCA (Human Activity Sensing Corpus and Applications) Workshop will take place as part of <a href="https://www.ubicomp.org/ubicomp-iswc-2025/" target="_blank">UbiComp 2025</a>.<br />
HASCA is one of the largest workshops at UbiComp and has been held for over 13 years.</p>
				

				
			
				
				
				<h2 id="h705">Dates</h2>
				

				
			
				
				
<p >Submission Deadline: June <s>15 22 (ext.)</s><b>29 (ext.)</b><br />
* For submissions after the 23rd, please follow the announcement on the submission site.<br />
<font color="red">* (update at June 8th) Submissions are now open for papers rejected at ISWC. Please submit your paper, the ISWC review results, and a letter explaining your revision policy. Submission Deadline: July 13 (AoE)</font><br />
Acceptance Notification: July 16<br />
Camera-ready: July 31 <strong>HARD</strong><br />
Workshop: Oct. 12 (Room U3)<br />
<br />
For the SHL Challenge and the WEAR Challenge, please check each challenge's conditions, as dates may differ.</p>
				

				
			
				
				
				<h2 id="h707">SUMMARY</h2>
				

				
			
				
				
<p >The objective of this workshop is to share experiences among researchers regarding the current challenges of real-world activity recognition with newly developed datasets and tools, breaking through towards open-ended contextual intelligence.<br />
<br />
This workshop discusses the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world.<br />
<br />
As a special topic this year, we will reflect on the challenge of recognizing situations, events and/or activities beyond statically predefined pools, which are the current state of the art, and instead adopt an "open-ended view" on activity and context awareness. This may combine the automatic discovery of relevant patterns in sensor data, experience sampling and wearable technologies to unobtrusively discover the semantic meaning of such patterns, crowd-sourcing of dataset acquisition and annotation, and new "open-ended" human activity modeling techniques.</p>
				

				
			
				
				
				<h2 id="h709">CALL FOR CONTRIBUTIONS</h2>
				

				
			
				
				
<p ><strong>- *Data collection*, *Corpus construction*.</strong><br />
Experiences or reports from data collection and/or corpus construction projects, including papers that describe the formats, styles and/or methodologies for data collection. Crowd-sourcing data collection and participatory sensing could also be included in this topic.<br />
<br />
<strong>- *Effectiveness of Data*, *Data Centric Research*.</strong><br />
There is a field of research based on collected corpora, so-called "data centric research". We also call for experiences of using large-scale human activity sensing corpora. Using large-scale corpora with machine-learning analysis, there is large room for improving the performance of recognition results.<br />
<br />
<strong>- *Tools and Algorithms for Activity Recognition*.</strong><br />
If we had appropriate tools for the management of sensor data, activity recognition researchers could focus more on their actual research themes. However, developed tools and algorithms are often not shared among the research community. In this workshop, we solicit reports on developed tools and algorithms to forward to the community.<br />
<br />
<strong>- *Real World Application and Experiences*.</strong><br />
Activity recognition "in the lab" usually works well. However, it does not scale well to real-world data. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between "lab" and "real world" environments. Large-scale human activity sensing corpora will help to overcome this gap.<br />
<br />
<strong>- *Sensing Devices and Systems*.</strong><br />
Data collection is performed not only with "off-the-shelf" sensors but also with newly developed sensors that supply information which has not been investigated before. There is also a research area about the development of new platforms for data collection and evaluation tools for collected data.<br />
<br />
In light of this year's special emphasis on open-ended contextual awareness, we wish to cover these topics as well:<br />
<br />
<strong>- *Mobile Experience Sampling*, *Experience Sampling Strategies*.</strong><br />
Advances in experience sampling approaches, for instance intelligent user querying or approaches using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.<br />
<br />
<strong>- *Unsupervised Pattern Discovery*.</strong><br />
Discovering meaningful patterns in sensor data in an unsupervised manner may be needed to inform other elements of the system, for instance by inquiring the user or by triggering crowd-sourced annotation.<br />
<br />
<strong>- *Dataset Acquisition and Annotation*, *Crowd-Sourcing*, *Web-Mining*.</strong><br />
A wide abundance of sensor data is potentially within the reach of<br />
users instrumented with their mobile phones and other<br />
wearables. Capitalizing on crowd-sourcing to create larger datasets in<br />
a cost effective manner may be critical to open-ended activity<br />
recognition. Many online datasets are also available and could be used<br />
to bootstrap recognition models.<br />
<br />
<strong>- *Transfer Learning*, *Semi-Supervised Learning*, *Lifelong Learning*.</strong><br />
The ability to translate recognition models across modalities or to use minimal forms of supervision would make it possible to reuse datasets in a wider range of domains and reduce the costs of acquiring annotations.<br />
<br />
<strong>- *Deep Learning*.</strong><br />
Together with the big success of deep learning in other AI domains, deep learning models are gradually playing an important role in activity recognition as well.</p>
				

				
			
				
				
				<h2 id="h711">AREAS OF INTEREST</h2>
				

				
			
				
				
				<ul >
<li>Human Activity Sensing Corpus</li>
<li>Large Scale Data Collection</li>
<li>Data Validation</li>
<li>Data Tagging / Labeling</li>
<li>Efficient Data Collection</li>
<li>Data Mining from Corpus</li>
<li>Automatic Segmentation</li>
<li>Performance Evaluation</li>
<li>Man-machine Interaction</li>
<li>Noise Robustness</li>
<li>Unsupervised Machine Learning</li>
<li>Sensor Data Fusion</li>
<li>Tools for Human Activity Corpus/Sensing</li>
<li>Participatory Sensing</li>
<li>Feature Extraction and Selection</li>
<li>Context Awareness</li>
<li>Pedestrian Navigation</li>
<li>Social Activities Analysis/Detection</li>
<li>Compressive Sensing</li>
<li>Sensing Devices</li>
<li>Lifelog Systems</li>
<li>Route Recognition/Detection</li>
<li>Wearable Application</li>
<li>Gait Analysis</li>
<li>Health-care Monitoring/Recommendation</li>
<li>Daily-life Worker Support</li>
<li>Deep Learning</li>
</ul>
				

				
			
				
				
				<h2 id="h713">FORMAT & TEMPLATE</h2>
				

				
			
				
				
<p ><b>The paper must be within 6 pages <s>including references</s> in the 2-column format. References do not count toward the page limit, but all text and figures/tables must be within the first 6 pages.</b> Due to capacity reasons, some papers may be accepted as poster presentations during the workshop (not the UbiComp/ISWC poster sessions) instead of oral presentations. We also plan to open submissions for papers rejected from the ISWC Notes/Briefs track.<br />
(Update at Jun. 4: the page limitation has been changed to match ISWC Notes/Briefs)<br />
<br />
ACM requires UbiComp/ISWC 2025 workshop submissions to use the double-column template. Please note that the template for submission is double-column format and the template for publication (camera-ready) is in single-column.<br />
Please carefully read <a href="https://www.ubicomp.org/ubicomp-iswc-2025/authors/formatting/" target="_blank">Ubicomp website about the template</a>.<br />
<br />
<b>Submissions do not need to be anonymous</b>.<br />
All publications will be peer reviewed together with their contribution to the topic of the workshop.<br />
The accepted papers will be published in the UbiComp/ISWC 2025 adjunct proceedings, which will be included in the ACM Digital Library.<br />
</p>
				

				
			
				
				
				<h2 id="h715">SUBMISSION</h2>
				

				
			
				
				
<p ><s>As of May 13, the submission site is not open. Details below will be updated after the submission site is ready.</s><br />
As of June 4, the submission site is open!<br />
<br />
Please submit your papers from <a href="https://new.precisionconference.com/submissions" target="_blank" rel="noopener noreferrer">https://new.precisionconference.com/submissions</a><br />
Make a new submission as follows:</p>
				

				
			
				
				
				<ol >
<li>Society: SIGCHI</li>
<li>Conference/Journal: UbiComp/ISWC 2025</li>
<li>Track: UbiComp/ISWC 2025 13th Workshop on HASCA</li>
<li>"Go" button</li>
</ol>
				

				
			
				
				
				<h2 id="h718">IMPORTANT DATES </h2>
				

				
			
				
				
<p >HASCA session papers:<br />
* For the SHL Challenge and the WEAR Challenge, please check each challenge's conditions, as dates may differ from the HASCA paper dates.<br />
* For submissions after the 23rd, please follow the announcement on the submission site.<br />
<font color="red">* (update at June 8th) Submissions are now open for papers rejected at ISWC. Please submit your paper, the ISWC review results, and a letter explaining your revision policy. Submission Deadline: July 13 (AoE)</font></p>
				

				
			
				
				
				<ul >
<li>Submission Deadline: June <s>15 22 (ext.)</s><b>29 (ext.)</b><br></li>
<li>Acceptance Notification: July 16<br></li>
<li>Camera-ready: July 31 <strong>HARD</strong><br></li>
<li>Workshop: Oct. 12 (Room U3)<br></li>
</ul>
				

				
			
				
				
				<h2 id="h721">SPECIAL SESSION</h2>
				

				
			
				
				
				<p >This year, the following challenges are held with HASCA.<br />
<br />
Sussex-Huawei Locomotion (SHL) Challenge<br />
<a href="http://www.shl-dataset.org/challenges/" target="_blank">http://www.shl-dataset.org/challenges/</a><br />
<br />
WEAR Dataset Challenge<br />
<a href="https://mariusbock.github.io/wear/challenge.html" target="_blank">https://mariusbock.github.io/wear/challenge.html</a></p>
				

				
			
				
				
				<h2 id="h723">CONTACT<br />
hasca-organizer[at]ml.hasc.jp</h2>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>cfp</category>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/cfp/entry-64.html</guid>
			<pubDate>Thu, 01 May 2025 16:18:51 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Program</title>
			<link>http://hasca2025.hasc.jp/program/entry-62.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
<p >The HASCA Workshop will take place on Saturday, Oct. 12, in Room U3.<br />
<br />
Presentation time:<br />
HASCA oral presentation - 15 min (12-min talk + 3-min Q&A)<br />
Other presentations - follow the timetable<br />
<br />
(Note at Oct. 9: The timetable has been slightly modified to follow the official UbiComp timetable. Accordingly, FedFitTech... has been moved from the 4th session to the 1st.)</p>
				

				
			
				
				
				<table>
<tr>
	<td>08:00-09:00</td>
	<td>
		Registration<br>
	</td>
</tr>
<tr>
	<td>09:00-10:30</td>
	<td>
		Session 1: HASCA paper session 1 [90 min] (Chair: Kazuya Murao)<br>
		<ul>
			<li><em>Opening talk (10 min)</em></li>
			<li><em>Where Are the Best Positions of IMUs for HAR?- Investigation with four DNN models of different characteristics (15 min)</em><br>
			Yu Enokibori, Takahiro Sato, Kenji Mase (Nagoya University)</li>
			<li><em>Smartphone-Based Activity Recognition in a Logistics Warehouse Using Self-supervised Representation Learning (15 min)</em><br>
			Kisho Watanabe, Kazuma Kano, Tahera Hossain, Shin Katayama, Kenta Urano, Takuro Yonezawa, Nobuo Kawaguchi (Nagoya University)</li>
			<li><em>Identifying Routine from Sequences of Activities of Daily Living in Smart-homes (15 min)</em><br>
			Sayeda Shamma Alia, Paula Lago (Concordia University)</li>
			<li><em>One-Class Classifier-based Incremental Learning Method to Personalize Multi-Class Human Activity Recognition Models from Streaming Data (15 min)</em><br>
			Pekka Siirtola (University of Oulu)</li>
			<li><em>FedFitTech: A Baseline in Federated Learning for Fitness Tracking (15 min)</em><br>
			Zeyneddin Oz, Shreyas Korde, Marius Bock, Kristof Van Laerhoven (University of Siegen)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>10:30-11:00</td>
	<td>
		Coffee Break
	</td>
</tr>
<tr>
	<td>11:00-12:30</td>
	<td>
		Session 2: WEAR challenge session [90 min] (Chair: Marius Bock)<br>
		<ul>
			<li><em>Opening Talk (10 min)</em></li>
			<li><em>Winning Solutions (15 min each)</em>
				<p>Note that the order does not reflect final ranking. The result will be disclosed at the conference.</p>
				<ul>
					<li><em>FAME: Feature-Augmented Multi-View Ensemble Framework for Human Activity Recognition using Inertial Sensors</em><br>
					Francisco Calatrava (Örebro University), Lala Shakti Swarup Ray, Vitor Fortes Rey, Paul Lukowicz (DFKI), Oscar Mozos (Universidad Politécnica de Madrid)</li>
					<li><em>Challenging High-Performance Human Activity Recognition with a State-of-the-art Model and Simple Preprocessing</em><br>
					Atsuya Sumitou, Yu Enokibori (Nagoya University)</li>
					<li><em>Mitigating Null-Class Dominance in Multiclass Inertial-Based Activity Recognition</em><br>
					Ricarda Link, Heiner Stuckenschmidt (University of Mannheim)</li>
				</ul>
			</li>
			<li><em>Award Ceremony (10 min)</em></li>
			<li><em>Poster Sessions (25 min)</em></li>
		</ul>
	</td>
</tr>
<tr>
	<td>12:30-14:30</td>
	<td>
		Lunch Break
	</td>
</tr>
<tr>
	<td>14:30-16:00</td>
	<td>
		Session 3: SHL challenge session [90min] (Chair: Mathias Ciliberto)<br>
		<ul>
			<li><em>Opening Remarks (5 min)</em></li>
			<li>(Summary Task 1)<em>Summary of SHL Challenge 2025: Locomotion and Transportation Mode Recognition Using Foundation Models (12 min)</em><br>
			Lin Wang (Queen Mary University of London), Mathias Ciliberto (University of Cambridge), Hristijan Gjoreski (University in Skopje), Paula Lago (Concordia University), Kazuya Murao (Ritsumeikan University), Tsuyoshi Okita (Kyushu Institute of Technology), Daniel Roggen (University of Sussex)</li>
			<li>(Summary Task 2)<em>Foundation Models to Tackle Activity Recognition in Unknown Domain:  Sussex-Huawei Locomotion Challenge 2025 Task 2 (12 min)</em><br>
			Tsuyoshi Okita, Kosuke Ukita, Asahi Miyazaki, Daichi Kubota, Jukichi Ota, Naoki Kagiyama, Asahi Nishikawa, Daichi Nagayasu, Syunya Tomitaka, Daisuke Nozaki, Yuki Odo, Raku Yamashita, Xiaolong Ye, Huayu Gao, Kazuki Okahashi, Koki Matsuishi, Masaharu Kagiyama, Kodai Hirata, Haruki Kai (Kyushu Institute of Technology), Lin Wang (Queen Mary University of London), Hristijan Gjoreski (University in Skopje), Mathias Ciliberto (University of Cambridge), Paula Lago (Concordia University), Kazuya Murao (Ritsumeikan University), Daniel Roggen (University of Sussex)</li>
			<li>(Oral Presentation Task 2)<em>Robust Sensor-Based Activity Recognition under Domain Shift via Fine-Tuning the Time-Series Foundation Model (12 min)</em><br>
			Ryoichi Sekiguchi, Hiroshi Minowa, Masaki Kawakatsu (Tokyo Denki University)</li>
			<li><em>Video show - Task 1 (8 min)</em></li>
			<li>(Oral Presentation Task 1)<em>Revisiting Foundation Models for Human Activity Recognition: Multiresolution Sensor Fusion with TimesFM (12 min)</em><br>
			Takumi Hyugaji, Itsuki Theo Terashita, Masaki Kawakatsu (Tokyo Denki University)</li>
			<li>(Oral Presentation Task 1)<em>Ensemble of Foundation Models for Sensor-Based Locomotion and Transportation Mode Recognition (12 min)</em><br>
			Mohammad Foad Abdi, Yousef Alikhani, Mohammad Mahdi Azizi, Mohammad Saleh Azizikia, Bagher BabaAli, Mohammad Mahdi Mohebbizadeh, Arash Nasr Esfahani</li>
			<li>(Oral Presentation Task 1)<em>IMU2IMG: IMU in the Language of Vision Foundation Models (12 min)</em><br>
			Sunkyung Lee, Hyuntae Jeong, Seungeun Chung, Kyoung Ju Noh, Jeong Mook Lim, Gyuwon Jung, Se Won Oh</li>
			<li><em>Ceremony (5 min)</em></li>
		</ul>
	</td>
</tr>
<tr>
	<td>16:00-16:30</td>
	<td>
		Coffee Break
	</td>
</tr>
<tr>
	<td>16:30-17:45</td>
	<td>
		Session 4: HASCA paper session [75min] (Chair: Pekka Siirtola)<br>
		<ul>
			<li><em>Mouth Gesture Recognition Using PPG Sensors in Earbuds (15 min)</em><br>
			Taiki Yuma, Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Evaluating Rhythmic Representations in Mental Health from Wearable Devices Using the GLOBEM Datasets (15 min)</em><br>
			Melika Mirzaseyedi, Abdelwahab Hamou-lhadj, Paula Lago (Concordia University)</li>
			<li><em>Fingerprint Spoof Detection during Fingerprint Authentication Using Active Acoustic Sensing (15 min)</em><br>
			Koki Okeda, Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Controlling the Influence of Ranking Information on Preference Judgments by Information Presentation Across Perceptual Channels (15 min)</em><br>
			Sho Nakazawa, Kyosuke Futami, Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Silent Speech-Based Personal Authentication Using a Mask-Type Device with Infrared Sensors (15 min)</em><br>
			Takumi Sakamoto, Kyosuke Futami, Kazuya Murao (Ritsumeikan University)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>17:45-</td>
	<td>
		Closing
	</td>
</tr>
</table>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>program</category>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/program/entry-62.html</guid>
			<pubDate>Thu, 01 May 2025 16:18:43 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Welcome to HASCA2025</title>
			<link>http://hasca2025.hasc.jp/index.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h670">Welcome to HASCA2025 Web site!</h2>
				

				
			
				
				
<p>HASCA2025 is the 13th International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with <a href="https://www.ubicomp.org/ubicomp-iswc-2025/" target="_blank">UbiComp/ISWC2025</a>.</p>

<p><strong>Important Dates</strong><br>
Submission Deadline: June <s>15 22 (ext.)</s><b>29 (ext.)</b><br>
* For submissions after the 23rd, please follow the announcement on the submission site.<br>
Acceptance Notification: July 16<br>
<font color="red">* (update at June 8th) Submissions are now open for papers rejected at ISWC. Please submit your paper, the ISWC review results, and a letter explaining your revision policy.
Submission Deadline: July 13 (AoE)</font><br>
Camera-ready: July 31 <strong>HARD!!</strong><br>
Workshop: Oct. 12 (Room U3)<br></p>

<p>For SHL Challenge and WEAR challenge, please check each challenge's conditions as dates may differ.</p>

				

				
			
				
				
				<h2 id="h672">Challenges</h2>
				

				
			
				
				
<p >The following challenges are held with HASCA 2025.<br />
Please refer to each challenge's website for details, including rules and deadlines.<br />
<br />
Sussex-Huawei Locomotion (SHL) Challenge<br />
<a href="http://www.shl-dataset.org/challenges/" target="_blank">http://www.shl-dataset.org/challenges/</a><br />
<br />
WEAR Dataset Challenge<br />
<a href="https://mariusbock.github.io/wear/challenge.html" target="_blank">https://mariusbock.github.io/wear/challenge.html</a></p>
				

				
			
				
				
				<h2 id="h674">Abstract</h2>
				

				
			
				
				
<p>The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition.</p>

<p>The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence. We expect the following domains to be relevant contributions to this workshop (but not limited to):</p>

				

				
			
				
				
				<h2 id="h676">Data collection / Corpus construction</h2>
				

				
			
				
				
<p>Experiences or reports from data collection and/or corpus construction projects, such as papers describing the formats, styles or methodologies for data collection. Crowd-sourcing data collection or participatory sensing could also be included in this topic.</p>

				

				
			
				
				
				<h2 id="h678">Effectiveness of Data / Data Centric Research</h2>
				

				
			
				
				
<p>There is a field of research based on collected corpora, so-called "Data Centric Research". We also solicit experiences of using large-scale human activity sensing corpora. Using large-scale corpora with machine learning, there is large room for improving the performance of recognition results.</p>

				

				
			
				
				
				<h2 id="h680">Tools and Algorithms for Activity Recognition</h2>
				

				
			
				
				
<p>If we had appropriate and suitable tools for the management of sensor data, activity recognition researchers could focus more on their research themes. However, developing tools or algorithms to share with the research community is not much appreciated. In this workshop, we solicit reports on developed tools and algorithms to forward to the community.</p>

				

				
			
				
				
				<h2 id="h682">Real World Application and Experiences</h2>
				

				
			
				
				
<p>Activity recognition "in the lab" usually works well. However, this is not true in the real world. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment". Large-scale human activity sensing corpora will help to overcome this gap.</p>

				

				
			
				
				
				<h2 id="h684">Sensing Devices and Systems</h2>
				

				
			
				
				
<p>Data collection is not only performed by "off-the-shelf" sensors; special devices sometimes need to be developed to obtain certain kinds of information. There is also a research area concerning the development and evaluation of systems and technologies for data collection.</p>

				

				
			
				
				
<h2 id="h686">Mobile experience sampling, experience sampling strategies</h2>
				

				
			
				
				
<p >Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.</p>
				

				
			
				
				
				<h2 id="h688">Unsupervised pattern discovery</h2>
				

				
			
				
				
<p >Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as inquiring the user or triggering crowd-sourced annotation.</p>
				

				
			
				
				
				<h2 id="h690">Dataset acquisition and annotation through crowd-sourcing, web-mining</h2>
				

				
			
				
				
<p >A wide abundance of sensor data is potentially within reach of users instrumented with their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.</p>
				

				
			
				
				
				<h2 id="h692">Transfer learning, semi-supervised learning, lifelong learning</h2>
				

				
			
				
				
<p >The ability to translate recognition models across modalities or to use minimal supervision would make it possible to reuse datasets across domains and reduce the costs of acquiring annotations.</p>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/index.html</guid>
			<pubDate>Thu, 01 May 2025 16:18:37 +0900</pubDate>
		</item>
	</channel>
</rss>