Worldwide public holiday data is sourced from the PyPI holidays package and Wikipedia, covering 38 countries or regions from 1970 through 2099.
Each row indicates the holiday information for a specific date and country or region, and whether most people have paid time off.
Note
Microsoft provides Azure Open Datasets on an "as is" basis. Microsoft makes no warranties, express or implied, guarantees or conditions with respect to your use of the datasets. To the extent permitted under your local law, Microsoft disclaims all liability for any damages or losses, including direct, consequential, special, indirect, incidental or punitive, resulting from your use of the datasets.
This dataset is provided under the original terms that Microsoft received source data. The dataset may include data sourced from Microsoft.
Volume and retention
This dataset is stored in Parquet format. It's a snapshot with holiday information from January 1, 1970 to January 1, 2099. The data size is about 500 KB.
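To confirm what the snapshot contains, you can inspect its date range and row count after downloading it. The following is a minimal sketch that assumes the Parquet file has already been saved locally (the file name `holidays.parquet` is illustrative; see the Data access section below for how to download it):

```python
import pandas as pd

# Load the local snapshot (hypothetical file name) and check its coverage.
df = pd.read_parquet("holidays.parquet")
print(df["date"].min(), "to", df["date"].max())  # expected: 1970-01-01 through 2099-01-01
print(f"{len(df):,} rows")
```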
Storage location
This dataset is stored in the East US Azure region. Allocating compute resources in East US is recommended for affinity.
This dataset combines data sourced from Wikipedia (WikiMedia Foundation Inc.) and the PyPI holidays package.
The combined dataset is provided under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
Email aod@microsoft.com if you have any questions about the data source.
Columns

| Name | Data type | Unique | Values (sample) | Description |
|---|---|---|---|---|
| countryOrRegion | string | 38 | Sweden Norway | Full country or region name. |
| countryRegionCode | string | 35 | SE NO | Country or region code; it follows the format found here. |
| date | timestamp | 20,665 | 2074-01-01 00:00:00 2025-12-25 00:00:00 | Date of the holiday. |
| holidayName | string | 483 | Søndag Söndag | Full name of the holiday. |
| isPaidTimeOff | boolean | 3 | True | Indicates whether most people have paid time off on this date (currently available for the US, UK, and India only). If null, it means unknown. |
| normalizeHolidayName | string | 438 | Søndag Söndag | Normalized name of the holiday. |
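A typical query combines these columns, for example restricting to one country and to holidays that are paid time off. The sketch below assumes a pandas DataFrame `df` loaded from the snapshot as shown under Data access; since `isPaidTimeOff` is populated only for the US, UK, and India, nulls are treated as "not paid time off":

```python
# Paid-time-off holidays in the United States.
# Nulls mean unknown; fill them so the mask stays strictly boolean.
paid = df["isPaidTimeOff"].fillna(False).astype(bool)
us_paid = df[(df["countryRegionCode"] == "US") & paid]
print(us_paid[["date", "holidayName", "normalizeHolidayName"]].head())
```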
Preview

| countryOrRegion | holidayName | normalizeHolidayName | countryRegionCode | date |
|---|---|---|---|---|
| Norway | Søndag | Søndag | NO | 12/28/2098 12:00:00 AM |
| Sweden | Söndag | Söndag | SE | 12/28/2098 12:00:00 AM |
| Australia | Boxing Day | Boxing Day | AU | 12/26/2098 12:00:00 AM |
| Hungary | Karácsony másnapja | Karácsony másnapja | HU | 12/26/2098 12:00:00 AM |
| Austria | Stefanitag | Stefanitag | AT | 12/26/2098 12:00:00 AM |
| Canada | Boxing Day | Boxing Day | CA | 12/26/2098 12:00:00 AM |
| Croatia | Sveti Stjepan | Sveti Stjepan | HR | 12/26/2098 12:00:00 AM |
| Czechia | 2. svátek vánoční | 2. svátek vánoční | CZ | 12/26/2098 12:00:00 AM |
Data access
Azure Notebooks
```python
# This is a package in preview.
from azureml.opendatasets import PublicHolidays

from datetime import datetime
from dateutil import parser
from dateutil.relativedelta import relativedelta

end_date = datetime.today()
start_date = datetime.today() - relativedelta(months=1)

hol = PublicHolidays(start_date=start_date, end_date=end_date)
hol_df = hol.to_pandas_dataframe()

hol_df.info()
```
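`hol_df` is a regular pandas DataFrame, so the one-month window can be narrowed further with ordinary filters. For instance, a sketch that keeps only the Swedish rows (column names as documented above):

```python
# Holidays in the loaded window for a single country.
sweden = hol_df[hol_df["countryRegionCode"] == "SE"]
print(sweden[["date", "holidayName", "normalizeHolidayName"]])
```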
```python
# Pip install packages
import os, sys

!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas

# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "holidaydatacontainer"
folder_name = "Processed"

from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient

if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")

print('Looking for the first parquet under the folder ' +
      folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
    container_url, credential=azure_storage_sas_token if azure_storage_sas_token else None)

container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(name_starts_with=folder_name)

sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break

print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
# Download the blob contents into a local file.
with open(filename, 'wb') as local_file:
    blob_client.download_blob().readinto(local_file)

# Read the parquet file into a Pandas data frame
import pandas as pd

print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)

# You can add your own filters below
print('Loaded as a Pandas data frame: ')
df
```
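Because the downloaded snapshot spans 1970 through 2099, you'll usually want to slice it before analysis. As a sketch, reusing the `df` loaded above:

```python
# Restrict the snapshot to one calendar year, then count holidays per country.
df_2025 = df[(df["date"] >= "2025-01-01") & (df["date"] <= "2025-12-31")]
print(df_2025.groupby("countryRegionCode").size().sort_values(ascending=False).head())
```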
Azure Databricks
```python
# This is a package in preview.
# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import PublicHolidays

from datetime import datetime
from dateutil import parser
from dateutil.relativedelta import relativedelta

end_date = datetime.today()
start_date = datetime.today() - relativedelta(months=1)

hol = PublicHolidays(start_date=start_date, end_date=end_date)
hol_df = hol.to_spark_dataframe()

display(hol_df.limit(5))
```
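Since `to_spark_dataframe()` returns a Spark DataFrame, standard transformations can run before display. One possible sketch, using the documented column names:

```python
from pyspark.sql import functions as F

# Count holidays per country within the loaded one-month window.
display(hol_df.groupBy("countryRegionCode")
              .agg(F.count("*").alias("holiday_count"))
              .orderBy(F.desc("holiday_count")))
```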
```python
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "holidaydatacontainer"
blob_relative_path = "Processed"
blob_sas_token = r""

# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
    'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
    blob_sas_token)
print('Remote blob path: ' + wasbs_path)

# Read the parquet files with Spark; this is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')

# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
```
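With the DataFrame registered as the temporary view `source`, you can also query it in Spark SQL. For example, this sketch lists upcoming Boxing Day dates (the holiday name is taken from the preview above):

```python
display(spark.sql("""
    SELECT countryOrRegion, date
    FROM source
    WHERE normalizeHolidayName = 'Boxing Day'
    ORDER BY date
    LIMIT 10
"""))
```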
Azure Synapse
```python
# This is a package in preview.
from azureml.opendatasets import PublicHolidays

from datetime import datetime
from dateutil import parser
from dateutil.relativedelta import relativedelta

end_date = datetime.today()
start_date = datetime.today() - relativedelta(months=1)

hol = PublicHolidays(start_date=start_date, end_date=end_date)
hol_df = hol.to_spark_dataframe()

# Display top 5 rows
display(hol_df.limit(5))
```
```python
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "holidaydatacontainer"
blob_relative_path = "Processed"
blob_sas_token = r""

# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
    'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
    blob_sas_token)
print('Remote blob path: ' + wasbs_path)

# Read the parquet files with Spark; this is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')

# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
```
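The same `source` view works in a Synapse Spark pool. As one more sketch, this query counts the rows per country where `isPaidTimeOff` is known to be true (recall that the column is populated only for the US, UK, and India):

```python
display(spark.sql("""
    SELECT countryRegionCode, COUNT(*) AS paid_holidays
    FROM source
    WHERE isPaidTimeOff = TRUE
    GROUP BY countryRegionCode
"""))
```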
Next steps
View the rest of the datasets in the Open Datasets catalog.