Big Data Learning (133) - Hive Data Analysis 2
🍋🍋Big Data Learning🍋🍋
🔥Series column: 👑Motto: Do what is within your power to change the world.
💖If you find this post useful, please like👍, save⭐️, and comment📝 to support the author🤞
11. Deduplication Within Groups (Keep the Latest/Earliest Record per Group)
Core idea:
Use the window function ROW_NUMBER() to rank the rows of each group in the desired order, then keep only the first-ranked row.
Typical problem: find each user's most recent login record.
Code template:
WITH ranked_logins AS (
    SELECT
        user_id,
        login_date,
        ROW_NUMBER() OVER (
            PARTITION BY user_id
            ORDER BY login_date DESC  -- DESC for the latest record, ASC for the earliest
        ) AS rn
    FROM user_login
)
SELECT user_id, login_date
FROM ranked_logins
WHERE rn = 1;
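ROW_NUMBER() keeps exactly one row per group and breaks ties arbitrarily. If all rows tied on the latest login_date should survive, a common variation (a sketch against the same hypothetical user_login table) swaps in RANK():

WITH ranked_logins AS (
    SELECT
        user_id,
        login_date,
        -- RANK() assigns equal dates the same rank, so ties on the latest date are all kept
        RANK() OVER (PARTITION BY user_id ORDER BY login_date DESC) AS rnk
    FROM user_login
)
SELECT user_id, login_date
FROM ranked_logins
WHERE rnk = 1;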
12. Interval Calculation (Time Gaps Between Events)
Core idea:
Use the window functions LAG()/LEAD() to fetch the timestamp of the adjacent row, then compute the difference.
Typical problem: compute the interval, in minutes, between a user's consecutive logins.
Code template:
SELECT
    user_id,
    login_date,
    TIMESTAMPDIFF(
        MINUTE,
        LAG(login_date) OVER (PARTITION BY user_id ORDER BY login_date),  -- NULL for each user's first login
        login_date
    ) AS interval_minutes
FROM user_login
ORDER BY user_id, login_date;
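Note that TIMESTAMPDIFF is a MySQL function; Hive has no built-in by that name. A Hive equivalent (a sketch, assuming login_date is a timestamp or a string that unix_timestamp can parse) subtracts epoch seconds:

SELECT
    user_id,
    login_date,
    (unix_timestamp(login_date)
     - unix_timestamp(LAG(login_date) OVER (PARTITION BY user_id ORDER BY login_date))
    ) / 60 AS interval_minutes  -- seconds converted to minutes
FROM user_login
ORDER BY user_id, login_date;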
13. Row/Column Transposition (Pivot and Unpivot)
Core idea:
- Rows to columns (pivot): CASE WHEN combined with an aggregate function (e.g. MAX/SUM).
- Columns to rows (unpivot): expand columns with LATERAL VIEW or UNNEST (see the sketch after the template).
Typical problem: pivot each user's monthly active days into one column per month (rows to columns).
Code template:
SELECT
    user_id,
    MAX(CASE WHEN month = 1 THEN active_days ELSE 0 END) AS jan_active,
    MAX(CASE WHEN month = 2 THEN active_days ELSE 0 END) AS feb_active,
    ...
FROM (
    SELECT
        user_id,
        MONTH(login_date) AS month,
        COUNT(DISTINCT login_date) AS active_days
    FROM user_login
    GROUP BY user_id, MONTH(login_date)
) t
GROUP BY user_id;
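For the reverse direction (columns back to rows), Hive's LATERAL VIEW with explode over a map is the usual pattern. A minimal sketch, assuming a hypothetical pivoted table user_monthly holding the jan_active/feb_active columns produced above:

SELECT
    user_id,
    month_name,
    active_days
FROM user_monthly
LATERAL VIEW explode(
    map('jan', jan_active, 'feb', feb_active)  -- one map entry per pivoted column
) m AS month_name, active_days;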
14. Traffic Path Analysis (Funnel Model)
Core idea:
Reduce events to one row per user with a reached/not-reached flag for each stage, then count the distinct users at each stage and divide to get stage-to-stage conversion rates.
Typical problem: compute the register → browse → order funnel conversion rates.
Code template:
WITH funnel_stages AS (
    SELECT
        user_id,
        MAX(CASE WHEN event = 'register' THEN 1 ELSE 0 END) AS registered,
        MAX(CASE WHEN event = 'browse'   THEN 1 ELSE 0 END) AS browsed,
        MAX(CASE WHEN event = 'order'    THEN 1 ELSE 0 END) AS ordered
    FROM user_events
    GROUP BY user_id
)
SELECT
    SUM(registered) AS total_registrants,
    SUM(browsed) / SUM(registered) AS reg_to_browse_rate,
    SUM(ordered) / SUM(browsed)    AS browse_to_order_rate
FROM funnel_stages;
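This template only checks whether a user ever reached a stage, ignoring order: someone who browsed before registering still counts as converted. A stricter, time-ordered variant (a sketch, assuming user_events also has an event_time column) compares each stage's first occurrence:

WITH stage_times AS (
    SELECT
        user_id,
        MIN(CASE WHEN event = 'register' THEN event_time END) AS reg_time,
        MIN(CASE WHEN event = 'browse'   THEN event_time END) AS browse_time,
        MIN(CASE WHEN event = 'order'    THEN event_time END) AS order_time
    FROM user_events
    GROUP BY user_id
)
SELECT
    COUNT(reg_time) AS registrants,
    -- a stage counts only if its first occurrence follows the previous stage
    SUM(CASE WHEN browse_time > reg_time THEN 1 ELSE 0 END) AS browsed_after_register,
    SUM(CASE WHEN order_time > browse_time AND browse_time > reg_time THEN 1 ELSE 0 END) AS ordered_in_sequence
FROM stage_times;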
15. String Processing (Extract, Replace, Split)
Core idea:
Use string functions such as SUBSTRING, REGEXP_EXTRACT, and REPLACE to handle semi-structured data.
Typical problem: extract the domain from a URL (e.g. https://www.example.com/path → example.com).
Code template:
SELECT
    url,
    REGEXP_EXTRACT(url, 'https?://(www\\.)?([^/]+)', 2) AS domain  -- capture group 2 = host without the www. prefix
FROM urls;
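Hive also ships a dedicated parse_url function that avoids hand-written regex; note that it returns the full host, including any www. prefix:

SELECT
    url,
    parse_url(url, 'HOST') AS host  -- e.g. www.example.com
FROM urls;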
16. Recursive Queries (Hierarchical Data)
Core idea:
Use a WITH RECURSIVE CTE to walk tree-shaped data such as department or category hierarchies.
Typical problem: list every department together with the path to it from its ancestors.
Code template (MySQL example; a Hive workaround follows the block):
WITH RECURSIVE dept_hierarchy AS (
    -- anchor: the root departments
    SELECT dept_id, parent_id, dept_name,
           CAST(dept_name AS CHAR(100)) AS path
    FROM departments
    WHERE parent_id IS NULL  -- root-node condition
    UNION ALL
    -- recursive step: join in the child departments
    SELECT d.dept_id, d.parent_id, d.dept_name,
           CONCAT(h.path, ' → ', d.dept_name)
    FROM departments d
    JOIN dept_hierarchy h ON d.parent_id = h.dept_id
)
SELECT * FROM dept_hierarchy;
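Hive does not support recursive CTEs, so on Hive this pattern is usually unrolled into a fixed number of self-joins when the maximum depth is known. A minimal sketch for a hierarchy at most three levels deep, against the same hypothetical departments table:

SELECT
    d1.dept_name AS level1,
    d2.dept_name AS level2,
    d3.dept_name AS level3
FROM departments d1
LEFT JOIN departments d2 ON d2.parent_id = d1.dept_id
LEFT JOIN departments d3 ON d3.parent_id = d2.dept_id
WHERE d1.parent_id IS NULL;  -- start from the roots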
17. Session Analysis (Segmenting User Behavior into Sessions)
Core idea:
Split a user's events into sessions by the gap between consecutive events; a gap above a threshold (e.g. 30 minutes of inactivity) starts a new session.
Typical problem: compute each user's number of sessions and each session's duration.
Code template:
WITH session_data AS (
    SELECT
        user_id,
        event_time,
        -- session id: start a new session whenever the gap to the previous event exceeds 30 minutes
        SUM(
            CASE WHEN TIMESTAMPDIFF(
                     MINUTE,
                     LAG(event_time) OVER (PARTITION BY user_id ORDER BY event_time),
                     event_time
                 ) > 30 THEN 1 ELSE 0 END
        ) OVER (PARTITION BY user_id ORDER BY event_time) AS session_id
    FROM user_events
)
SELECT
    user_id,
    session_id,
    MIN(event_time) AS session_start,
    MAX(event_time) AS session_end,
    TIMESTAMPDIFF(MINUTE, MIN(event_time), MAX(event_time)) AS session_duration
FROM session_data
GROUP BY user_id, session_id;
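As in section 12, TIMESTAMPDIFF is MySQL-specific. A Hive version of the same session split (a sketch, again assuming event_time values that unix_timestamp can parse) expresses the 30-minute threshold in epoch seconds:

WITH session_data AS (
    SELECT
        user_id,
        event_time,
        SUM(
            CASE WHEN unix_timestamp(event_time)
                      - unix_timestamp(LAG(event_time) OVER (PARTITION BY user_id ORDER BY event_time))
                      > 30 * 60  -- 30 minutes, in seconds
                 THEN 1 ELSE 0 END
        ) OVER (PARTITION BY user_id ORDER BY event_time) AS session_id
    FROM user_events
)
SELECT
    user_id,
    session_id,
    MIN(event_time) AS session_start,
    MAX(event_time) AS session_end,
    (unix_timestamp(MAX(event_time)) - unix_timestamp(MIN(event_time))) / 60 AS session_minutes
FROM session_data
GROUP BY user_id, session_id;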
18. Advanced YoY/MoM (Cross-Year Comparison)
Core idea:
Use the window function LAG() to line up the same month of the previous year, then compute the year-over-year growth rate.
Typical problem: for each month of 2023, compute the sales growth rate versus the same month of 2022.
Code template:
WITH monthly_sales AS (
    SELECT
        YEAR(sale_date)  AS year,
        MONTH(sale_date) AS month,
        SUM(amount)      AS sales
    FROM sales
    GROUP BY YEAR(sale_date), MONTH(sale_date)
),
with_last_year AS (
    -- compute the window over ALL months first: filtering to 2023 before LAG()
    -- runs would leave it nothing from 2022 to look back at
    SELECT
        year,
        month,
        sales,
        LAG(sales, 12) OVER (ORDER BY year, month) AS sales_last_year
    FROM monthly_sales
)
SELECT
    year,
    month,
    sales,
    sales_last_year,
    (sales - sales_last_year) / sales_last_year AS yoy_growth
FROM with_last_year
WHERE year = 2023;
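LAG(sales, 12) silently misaligns months if any month has no sales row at all. A self-join on (year - 1, month) is more robust (a sketch reusing the monthly_sales CTE above):

SELECT
    cur.year,
    cur.month,
    cur.sales,
    prev.sales AS sales_last_year,
    (cur.sales - prev.sales) / prev.sales AS yoy_growth
FROM monthly_sales cur
LEFT JOIN monthly_sales prev
       ON prev.year = cur.year - 1 AND prev.month = cur.month
WHERE cur.year = 2023;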
19. Sampling (Random Data Samples)
Core idea:
Generate a random number with RAND() and filter on it in WHERE to draw a sample (e.g. 10% of the rows).
Typical problem: draw a random 5% sample of the orders table for analysis.
Code template:
SELECT *
FROM orders
WHERE RAND() < 0.05;  -- roughly a 5% sample
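Two practical notes: rand() without a seed gives a different sample on every run (Hive also has rand(seed)), and Hive provides TABLESAMPLE bucket sampling, which can prune whole buckets when the table is clustered on the sampling column:

-- roughly 1 bucket out of 20 ≈ a 5% sample; ON rand() also works for unbucketed tables
SELECT *
FROM orders TABLESAMPLE(BUCKET 1 OUT OF 20 ON rand()) t;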
20. Pivot Tables (Multi-Dimensional Cross Analysis)
Core idea:
GROUP BY several dimensions (e.g. time, region, category) combined with aggregate functions to build a cross-tabulation.
Typical problem: per year and region, compute each category's sales and its share of the total.
Code template:
WITH category_sales AS (
    SELECT
        YEAR(sale_date) AS year,
        region,
        category,
        SUM(amount) AS sales
    FROM sales
    GROUP BY YEAR(sale_date), region, category
)
SELECT
    year,
    region,
    SUM(CASE WHEN category = 'A' THEN sales ELSE 0 END) AS cat_A_sales,
    SUM(CASE WHEN category = 'B' THEN sales ELSE 0 END) AS cat_B_sales,
    SUM(sales) AS total_sales,
    -- per-category share of the (year, region) total; note that a windowed
    -- SUM(sales) / SUM(SUM(sales)) OVER (PARTITION BY year, region) would
    -- always return 1 here, since each partition holds a single grouped row
    SUM(CASE WHEN category = 'A' THEN sales ELSE 0 END) / SUM(sales) AS cat_A_ratio,
    SUM(CASE WHEN category = 'B' THEN sales ELSE 0 END) / SUM(sales) AS cat_B_ratio
FROM category_sales
GROUP BY year, region;
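If you want the shares in long (un-pivoted) form instead, the window-function version does work at the category grain, because each (year, region) partition then contains one row per category (a sketch reusing the category_sales CTE):

SELECT
    year,
    region,
    category,
    sales,
    sales / SUM(sales) OVER (PARTITION BY year, region) AS category_ratio
FROM category_sales;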