
feat(mercari_spider): add dynamic Mercari DPoP token generation and spider integration

- Add dpop_generator.py, which dynamically generates DPoP Proof JWTs per RFC 9449
- Generate separate dynamic DPoP tokens for the search-list and item-detail endpoints
- Replace the hardcoded DPoP constants in mercari_jp_spider.py with dynamically generated tokens
- Add dpop_analysis.md, a detailed analysis of the DPoP parameters, their structure, and the generation flow
- Add MySQL connection-pool support so the spider can store scraped data efficiently
- Improve logging and retry handling for more stable requests and better error tracing
charley 11 hours ago
parent
commit
c5637b9b96

+ 78 - 0
mercari_spider/YamlLoader.py

@@ -0,0 +1,78 @@
+# -*- coding: utf-8 -*-
+# Author : Charley
+# Python : 3.10.8
+# Date   : 2025/12/22 10:44
+import os
+import re
+
+import yaml
+
+# Matches "${ENV_VAR:default}" (the ENV group keeps its trailing colon) or "${literal}".
+regex = re.compile(r'^\$\{(?P<ENV>[A-Z_\-]+:)?(?P<VAL>[\w.]+)}$')
+
+
+class YamlConfig:
+    def __init__(self, config):
+        self.config = config
+    
+    def get(self, key: str):
+        return YamlConfig(self.config.get(key))
+    
+    def getValueAsString(self, key: str):
+        try:
+            match = regex.match(self.config[key])
+            group = match.groupdict()
+            if group['ENV'] is not None:
+                env = group['ENV'][:-1]  # strip the trailing ':'
+                return os.getenv(env, group['VAL'])
+            return group['VAL']
+        except Exception:
+            # Plain value without the ${...} wrapper
+            return self.config[key]
+    
+    def getValueAsInt(self, key: str):
+        try:
+            match = regex.match(self.config[key])
+            group = match.groupdict()
+            if group['ENV'] is not None:
+                env = group['ENV'][:-1]
+                return int(os.getenv(env, group['VAL']))
+            return int(group['VAL'])
+        except Exception:
+            return int(self.config[key])
+    
+    def getValueAsBool(self, key: str):
+        try:
+            match = regex.match(self.config[key])
+            group = match.groupdict()
+            if group['ENV'] is not None:
+                env = group['ENV'][:-1]
+                value = os.getenv(env, group['VAL'])
+            else:
+                value = group['VAL']
+            # bool("false") is True, so compare against known truthy strings instead
+            return str(value).lower() in ('1', 'true', 'yes', 'on')
+        except Exception:
+            return bool(self.config[key])
+
+
+def readYaml(path: str = 'application.yml', profile: str = None) -> YamlConfig:
+    """Read the YAML config, overlaying application-{profile}.yml when a profile is given."""
+    conf = {}
+    if os.path.exists(path):
+        with open(path) as fd:
+            conf = yaml.load(fd, Loader=yaml.FullLoader)
+
+    if profile is not None:
+        result = path.split('.')
+        profiledYaml = f'{result[0]}-{profile}.{result[1]}'
+        if os.path.exists(profiledYaml):
+            with open(profiledYaml) as fd:
+                conf.update(yaml.load(fd, Loader=yaml.FullLoader))
+
+    return YamlConfig(conf)
+
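+
+# Minimal usage sketch, assuming the application.yml shipped in this directory
+# (its "mysql" section defines host/port/username/password/db):
+if __name__ == '__main__':
+    config = readYaml()
+    mysql_conf = config.get('mysql')
+    print(mysql_conf.getValueAsString('host'))  # MYSQL_HOST env var, else the default
+    print(mysql_conf.getValueAsInt('port'))     # MYSQL_PORT env var, else 3306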

+ 6 - 0
mercari_spider/application.yml

@@ -0,0 +1,6 @@
+mysql:
+  host: ${MYSQL_HOST:100.64.0.21}
+  port: ${MYSQL_PORT:3306}
+  username: ${MYSQL_USERNAME:crawler}
+  password: ${MYSQL_PASSWORD:Pass2022}
+  db: ${MYSQL_DATABASE:crawler}

+ 163 - 0
mercari_spider/dpop_analysis.md

@@ -0,0 +1,163 @@
+# Mercari DPoP Parameter Analysis
+
+## 1. What is DPoP
+
+DPoP (Demonstrating Proof-of-Possession) is a standard protocol defined in **RFC 9449** that binds a request to a specific client's key pair. The Mercari API uses DPoP as part of its request authentication.
+
+A DPoP token is essentially a **JWT** (JSON Web Token) of the form `header.payload.signature`, with all three segments Base64URL-encoded.
+
+---
+
+## 2. Token Structure
+
+### 2.1 Header (identical for both endpoints)
+
+```json
+{
+  "typ": "dpop+jwt",
+  "alg": "ES256",
+  "jwk": {
+    "crv": "P-256",
+    "kty": "EC",
+    "x": "j9I6kLKekdSNdHyHsaZl5gkbbFhDaAP3DwsuvVjCrWg",
+    "y": "NLtDkddYVafFykQGbk-x6AaJzAjUnVepYt_jsvewpgI"
+  }
+}
+```
+
+| Field | Value | Description |
+|------|----|------|
+| `typ` | `dpop+jwt` | Marks this JWT as a DPoP proof |
+| `alg` | `ES256` | Signature algorithm: ECDSA with P-256 and SHA-256 |
+| `jwk` | EC public key | Embedded elliptic-curve public key the server uses to verify the signature |
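+
+A minimal sketch, mirroring what `dpop_generator.py` below does, of deriving such a `jwk` from a freshly generated P-256 key with the `cryptography` package:
+
+```python
+from base64 import urlsafe_b64encode
+from cryptography.hazmat.primitives.asymmetric import ec
+
+key = ec.generate_private_key(ec.SECP256R1())
+nums = key.public_key().public_numbers()
+
+def b64(n: int) -> str:
+    # 32-byte big-endian coordinate, Base64URL without padding
+    return urlsafe_b64encode(n.to_bytes(32, "big")).rstrip(b"=").decode()
+
+jwk = {"crv": "P-256", "kty": "EC", "x": b64(nums.x), "y": b64(nums.y)}
+```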
+
+### 2.2 Payload (differs between the two endpoints)
+
+**LIST_DPOP** (search-list endpoint):
+
+```json
+{
+  "iat": 1778046219,
+  "jti": "44bc836e-abaa-4258-a248-3e691153f665",
+  "htu": "https://api.mercari.jp/v2/entities:search",
+  "htm": "POST",
+  "uuid": "a00429c5-ad26-4be4-83ae-60b7239e14d5"
+}
+```
+
+**DETAIL_DPOP** (item-detail endpoint):
+
+```json
+{
+  "iat": 1778046009,
+  "jti": "1f4f049a-7f0f-4c4f-af71-20baad8a1784",
+  "htu": "https://api.mercari.jp/items/get",
+  "htm": "GET",
+  "uuid": "a00429c5-ad26-4be4-83ae-60b7239e14d5"
+}
+```
+
+### 2.3 Payload Fields
+
+| Field | Description | LIST value | DETAIL value |
+|------|------|---------|-----------|
+| `iat` | Issued-at time (Unix timestamp) | `1778046219` | `1778046009` |
+| `jti` | JWT ID (random UUID v4, prevents replay) | `44bc836e-...` | `1f4f049a-...` |
+| `htu` | HTTP target URI (the endpoint being called) | `https://api.mercari.jp/v2/entities:search` | `https://api.mercari.jp/items/get` |
+| `htm` | HTTP method | `POST` | `GET` |
+| `uuid` | Device identifier (same as `LAPLACE_DEVICE_UUID`) | `a00429c5-...` | `a00429c5-...` |
+
+### 2.4 Signature
+
+The EC private key signs `header_base64url.payload_base64url` with **ES256**; the 64-byte raw `r||s` output (32 bytes each) is then Base64URL-encoded.
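+
+Note that the `cryptography` library returns ECDSA signatures DER-encoded, so they must be converted to the raw JOSE form. A self-contained sketch of that conversion (the stand-in `signing_input` would really be the two Base64URL segments joined by a dot):
+
+```python
+from cryptography.hazmat.primitives import hashes
+from cryptography.hazmat.primitives.asymmetric import ec
+from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature
+
+private_key = ec.generate_private_key(ec.SECP256R1())
+signing_input = b"header_b64.payload_b64"  # stand-in for the real segments
+der_sig = private_key.sign(signing_input, ec.ECDSA(hashes.SHA256()))
+r, s = decode_dss_signature(der_sig)  # DER -> (r, s) integers
+raw_sig = r.to_bytes(32, "big") + s.to_bytes(32, "big")  # raw 64-byte r||s
+```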
+
+---
+
+## 3. Two Key Questions
+
+### 3.1 Why do the tokens expire?
+
+The `iat` (issued-at) field is a **fixed timestamp**. The server compares it against the current time and rejects the request once the difference exceeds the validity window (typically a few days). A hardcoded token's `iat` never changes, so it stops working after a few days.
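+
+For illustration, a token's age can be checked locally by decoding its payload segment (standard library only; the three-day threshold is an assumption, since Mercari's actual window is not documented):
+
+```python
+import base64
+import json
+import time
+
+def dpop_age_seconds(token: str) -> int:
+    payload_b64 = token.split(".")[1]
+    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore Base64 padding
+    payload = json.loads(base64.urlsafe_b64decode(padded))
+    return int(time.time()) - payload["iat"]
+
+# e.g. treat anything older than ~3 days as likely expired (assumed threshold)
+# is_stale = dpop_age_seconds(token) > 3 * 86400
+```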
+
+### 3.2 Why do LIST and DETAIL differ?
+
+The DPoP spec requires the token to be bound to a **specific endpoint and method**:
+
+| Differing field | LIST (search list) | DETAIL (item detail) |
+|----------|-----------------|-------------------|
+| `htu` | `/v2/entities:search` | `/items/get` |
+| `htm` | `POST` | `GET` |
+| `jti` | Its own random UUID | Its own random UUID |
+| `iat` | Its own issue time | Its own issue time |
+
+The server verifies that `htu` and `htm` match the actual request, so the two endpoints cannot share one DPoP token.
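+
+A rough sketch of that binding check from the verifier's side (Mercari's real server-side logic is not public; this only illustrates the rule):
+
+```python
+import base64
+import json
+
+def matches_request(token: str, method: str, url: str) -> bool:
+    seg = token.split(".")[1]
+    payload = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
+    return payload["htm"] == method and payload["htu"] == url
+```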
+
+---
+
+## 4. Generation Flow
+
+```
+1. Generate an EC P-256 key pair (private + public key)
+         ↓
+2. Build the JWT header (typ + alg + public-key jwk)
+         ↓
+3. Build the JWT payload (iat = current timestamp, jti = random UUID, htu = target URL, htm = method, uuid = device ID)
+         ↓
+4. Concatenate base64url(header).base64url(payload)
+         ↓
+5. Sign the concatenated string with the private key using ES256
+         ↓
+6. Output base64url(header).base64url(payload).base64url(signature)
+```
+
+---
+
+## 5. Python Implementation
+
+The flow above is implemented in `dpop_generator.py`, which depends on the `cryptography` library.
+
+### 5.1 Quick Start
+
+```python
+from dpop_generator import generate_list_dpop, generate_detail_dpop
+
+# Search-list endpoint
+list_dpop = generate_list_dpop()
+
+# Item-detail endpoint
+detail_dpop = generate_detail_dpop()
+```
+
+### 5.2 Custom Device UUID
+
+```python
+from dpop_generator import DPoPGenerator
+
+gen = DPoPGenerator(device_uuid="your-device-uuid")
+list_dpop = gen.generate(htu="https://api.mercari.jp/v2/entities:search", htm="POST")
+detail_dpop = gen.generate(htu="https://api.mercari.jp/items/get", htm="GET")
+```
+
+### 5.3 Spider Integration
+
+Replace the hardcoded constants in `mercari_jp_spider.py` with dynamic calls:
+
+```python
+from dpop_generator import generate_list_dpop, generate_detail_dpop
+
+# in build_headers():
+"dpop": generate_list_dpop(),
+
+# in get_detail_page():
+"dpop": generate_detail_dpop(),
+```
+
+Each call generates a fresh `iat` and `jti`, so the token no longer expires. A single `DPoPGenerator` instance reuses one key pair, keeping the signatures consistent with the embedded `jwk`.
+
+---
+
+## 6. Installing Dependencies
+
+```bash
+pip install cryptography
+```

+ 123 - 0
mercari_spider/dpop_generator.py

@@ -0,0 +1,123 @@
+# -*- coding: utf-8 -*-
+# DPoP (RFC 9449) token generator
+# Dynamically generates DPoP Proof JWTs for the Mercari API
+
+import json
+import time
+import uuid
+import base64
+from cryptography.hazmat.primitives.asymmetric import ec
+from cryptography.hazmat.primitives import hashes
+from cryptography.hazmat.backends import default_backend
+
+
+def _base64url_encode(data: bytes) -> str:
+    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")
+
+
+def _int_to_base64url(n: int, length: int) -> str:
+    return _base64url_encode(n.to_bytes(length, byteorder="big"))
+
+
+class DPoPGenerator:
+    """
+    Generate Mercari DPoP Proof JWTs (ES256 / P-256).
+
+    Usage:
+        gen = DPoPGenerator(device_uuid="a00429c5-ad26-4be4-83ae-60b7239e14d5")
+        list_dpop = gen.generate(
+            htu="https://api.mercari.jp/v2/entities:search",
+            htm="POST",
+        )
+        detail_dpop = gen.generate(
+            htu="https://api.mercari.jp/items/get",
+            htm="GET",
+        )
+    """
+
+    def __init__(self, device_uuid: str = "a00429c5-ad26-4be4-83ae-60b7239e14d5"):
+        self.device_uuid = device_uuid
+        self._private_key = ec.generate_private_key(ec.SECP256R1(), default_backend())
+        self._public_key = self._private_key.public_key()
+        pub_numbers = self._public_key.public_numbers()
+        self._jwk = {
+            "crv": "P-256",
+            "kty": "EC",
+            "x": _int_to_base64url(pub_numbers.x, 32),
+            "y": _int_to_base64url(pub_numbers.y, 32),
+        }
+        self._header_b64 = _base64url_encode(
+            json.dumps(
+                {"typ": "dpop+jwt", "alg": "ES256", "jwk": self._jwk},
+                separators=(",", ":"),
+            ).encode()
+        )
+
+    def generate(self, htu: str, htm: str) -> str:
+        payload = {
+            "iat": int(time.time()),
+            "jti": str(uuid.uuid4()),
+            "htu": htu,
+            "htm": htm,
+            "uuid": self.device_uuid,
+        }
+        payload_b64 = _base64url_encode(
+            json.dumps(payload, separators=(",", ":")).encode()
+        )
+        signing_input = f"{self._header_b64}.{payload_b64}".encode("ascii")
+
+        der_sig = self._private_key.sign(signing_input, ec.ECDSA(hashes.SHA256()))
+
+        # DER -> raw r||s (32 bytes each)
+        r, s = _decode_der_signature(der_sig)
+        raw_sig = r.to_bytes(32, "big") + s.to_bytes(32, "big")
+
+        return f"{self._header_b64}.{payload_b64}.{_base64url_encode(raw_sig)}"
+
+
+def _decode_der_signature(der_bytes: bytes) -> tuple[int, int]:
+    from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature
+    return decode_dss_signature(der_bytes)
+
+
+# ── Convenience helpers ──────────────────────────────────
+
+_default_gen = None
+
+
+def _get_generator(device_uuid: str) -> DPoPGenerator:
+    global _default_gen
+    if _default_gen is None or _default_gen.device_uuid != device_uuid:
+        _default_gen = DPoPGenerator(device_uuid=device_uuid)
+    return _default_gen
+
+
+def generate_list_dpop(
+    device_uuid: str = "a00429c5-ad26-4be4-83ae-60b7239e14d5",
+) -> str:
+    return _get_generator(device_uuid).generate(
+        htu="https://api.mercari.jp/v2/entities:search",
+        htm="POST",
+    )
+
+
+def generate_detail_dpop(
+    device_uuid: str = "a00429c5-ad26-4be4-83ae-60b7239e14d5",
+) -> str:
+    return _get_generator(device_uuid).generate(
+        htu="https://api.mercari.jp/items/get",
+        htm="GET",
+    )
+
+
+if __name__ == "__main__":
+    gen = DPoPGenerator()
+    list_token = gen.generate(
+        htu="https://api.mercari.jp/v2/entities:search", htm="POST"
+    )
+    detail_token = gen.generate(
+        htu="https://api.mercari.jp/items/get", htm="GET"
+    )
+    print("LIST_DPOP =", list_token)
+    print()
+    print("DETAIL_DPOP =", detail_token)

+ 386 - 0
mercari_spider/mercari_jp_spider.py

@@ -0,0 +1,386 @@
+# -*- coding: utf-8 -*-
+# Author : Charley
+# Python : 3.10.8
+# Date   : 2026/4/30 14:28
+import time
+import inspect
+import schedule
+import requests
+from loguru import logger
+from dpop_generator import generate_list_dpop, generate_detail_dpop
+from mysql_pool import MySQLConnectionPool
+from tenacity import retry, stop_after_attempt, wait_fixed
+
+"""
+Target URL:
+https://jp.mercari.com/search?category_id=82&page_token=v1%3A1&status=sold_out%7Ctrading
+"""
+
+SEARCH_URL = "https://api.mercari.jp/v2/entities:search"
+PAGE_SIZE = 120
+LAPLACE_DEVICE_UUID = "a00429c5-ad26-4be4-83ae-60b7239e14d5"
+SEARCH_SESSION_ID = "cfba38acec8cae78136c62441bbb267a"
+# DPoP proofs are generated per request by dpop_generator (see dpop_analysis.md),
+# replacing the previously hardcoded LIST_DPOP / DETAIL_DPOP constants that expired.
+
+logger.remove()
+logger.add("./logs/{time:YYYYMMDD}.log", encoding='utf-8', rotation="00:00",
+           format="[{time:YYYY-MM-DD HH:mm:ss.SSS}] {level} {message}",
+           level="DEBUG", retention="7 days")
+
+
+def after_log(retry_state):
+    """
+    Retry callback
+    :param retry_state: RetryCallState object
+    """
+    # Use the logger passed as the first positional arg, if any
+    if retry_state.args and len(retry_state.args) > 0:
+        log = retry_state.args[0]
+    else:
+        log = logger  # fall back to the global logger
+
+    if retry_state.outcome.failed:
+        log.warning(
+            f"Function '{retry_state.fn.__name__}', Attempt {retry_state.attempt_number} Times")
+    else:
+        log.info(f"Function '{retry_state.fn.__name__}', Attempt {retry_state.attempt_number} succeeded")
+
+
+@retry(stop=stop_after_attempt(5), wait=wait_fixed(1), after=after_log)
+def get_proxys(log):
+    """
+    Build the tunnel-proxy settings
+    :return: proxies dict for requests
+    """
+    tunnel = "x371.kdltps.com:15818"
+    kdl_username = "t13753103189895"
+    kdl_password = "o0yefv6z"
+    try:
+        proxies = {
+            "http": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": kdl_username, "pwd": kdl_password, "proxy": tunnel},
+            "https": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": kdl_username, "pwd": kdl_password, "proxy": tunnel}
+        }
+        return proxies
+    except Exception as e:
+        log.error(f"Error getting proxy: {e}")
+        raise e
+
+
+def build_headers() -> dict:
+    """Build the request headers for the Mercari search endpoint."""
+    return {
+        "accept": "application/json, text/plain, */*",
+        "accept-language": "ja",
+        "content-type": "application/json",
+        "dpop": generate_list_dpop(),
+        "origin": "https://jp.mercari.com",
+        "priority": "u=1, i",
+        "referer": "https://jp.mercari.com/",
+        "sec-ch-ua": "\"Google Chrome\";v=\"147\", \"Not.A/Brand\";v=\"8\", \"Chromium\";v=\"147\"",
+        "sec-ch-ua-mobile": "?0",
+        "sec-ch-ua-platform": "\"Windows\"",
+        "sec-fetch-dest": "empty",
+        "sec-fetch-mode": "cors",
+        "sec-fetch-site": "cross-site",
+        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/147.0.0.0 Safari/537.36",
+        "x-country-code": "HK",
+        "x-platform": "web",
+    }
+
+
+def build_payload(page_token: str = "v1:0", category_id: int = 1289) -> dict:
+    """Build the request payload for one page; the first page passes page_token "v1:0"."""
+    return {
+        "userId": "",
+        "config": {
+            "responseToggles": [
+                "QUERY_SUGGESTION_WEB_1",
+            ],
+        },
+        "pageSize": PAGE_SIZE,
+        "pageToken": page_token,
+        "searchSessionId": SEARCH_SESSION_ID,
+        "source": "BaseSerp",
+        "indexRouting": "INDEX_ROUTING_UNSPECIFIED",
+        "thumbnailTypes": [],
+        "searchCondition": {
+            "keyword": "",
+            "excludeKeyword": "",
+            "sort": "SORT_SCORE",
+            "order": "ORDER_DESC",
+            "status": [
+                "STATUS_SOLD_OUT",
+                "STATUS_TRADING",
+            ],
+            "sizeId": [],
+            "categoryId": [
+                category_id,
+            ],
+            "brandId": [],
+            "sellerId": [],
+            "priceMin": 0,
+            "priceMax": 0,
+            "itemConditionId": [],
+            "shippingPayerId": [],
+            "shippingFromArea": [],
+            "shippingMethod": [],
+            "colorId": [],
+            "hasCoupon": False,
+            "attributes": [],
+            "itemTypes": [],
+            "skuIds": [],
+            "shopIds": [],
+            "excludeShippingMethodIds": [],
+        },
+        "serviceFrom": "suruga",
+        "withItemBrand": True,
+        "withItemSize": False,
+        "withItemPromotions": True,
+        "withItemSizes": True,
+        "withShopname": False,
+        "useDynamicAttribute": True,
+        "withSuggestedItems": True,
+        "withOfferPricePromotion": True,
+        "withProductSuggest": True,
+        "withParentProducts": False,
+        "withProductArticles": True,
+        "withSearchConditionId": False,
+        "withAuction": True,
+        "laplaceDeviceUuid": LAPLACE_DEVICE_UUID,
+    }
+
+
+def build_page_token(page_number: int) -> str:
+    """
+    Convert a page number into the API pageToken: page 1 is v1:0, page 2 is v1:1.
+    :param page_number: page number
+    :return: pageToken
+    """
+    if page_number < 1:
+        raise ValueError("page_number must start at 1")
+    return f"v1:{page_number - 1}"
+
+
+@retry(stop=stop_after_attempt(5), wait=wait_fixed(1), after=after_log)
+def fetch_page(
+        log,
+        category_id: int,
+        page_number: int,
+        session: requests.Session | None = None,
+        timeout: int = 22,
+) -> requests.Response:
+    """
+    Fetch a single page of search results.
+    :param log: logger
+    :param category_id: category ID
+    :param page_number: page number
+    :param session: requests.Session to reuse
+    :param timeout: timeout in seconds
+    :return: requests.Response
+    """
+    log.info(f"Requesting page {page_number}............")
+    client = session or requests.Session()
+    page_token = build_page_token(page_number)
+    # print(page_token)
+    response = client.post(
+        SEARCH_URL,
+        headers=build_headers(),
+        json=build_payload(page_token=page_token, category_id=category_id),
+        timeout=timeout
+    )
+    response.raise_for_status()
+    return response
+
+
+@retry(stop=stop_after_attempt(5), wait=wait_fixed(1), after=after_log)
+def get_detail_page(log, pid):
+    """
+    Fetch item details.
+    :param log: logger
+    :param pid: item ID
+    """
+    log.info(f"Fetching item detail {pid}............")
+    headers = {
+        "accept": "application/json, text/plain, */*",
+        # "accept-language": "ja",
+        "dpop": generate_detail_dpop(),
+        # "referer": "https://jp.mercari.com/",
+        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/147.0.0.0 Safari/537.36",
+        "x-platform": "web"
+    }
+    url = "https://api.mercari.jp/items/get"
+    params = {
+        # "id": "m69042262006",
+        "id": pid,
+        "include_item_attributes": "true",
+        "include_product_page_component": "true",
+        "include_non_ui_item_attributes": "true",
+        "include_donation": "true",
+        "include_item_attributes_sections": "true",
+        "include_auction": "true",
+        "country_code": "JP"
+    }
+    response = requests.get(url, headers=headers, params=params, timeout=22)
+    response.raise_for_status()
+    resp_json = response.json()
+    data = resp_json.get("data", {})
+    tag_seller = data.get("seller", {})
+    seller_id = tag_seller.get("id")
+    seller_name = tag_seller.get("name")
+    photos = data.get("photos", [])
+    photos = ','.join(photos) if photos else None  # comma-separate the URLs so they stay splittable
+    # print(seller_id, seller_name, photos)
+    return seller_id, seller_name, photos
+
+
+def parse_list(log, resp_json, sql_pool, category_id, category_name):
+    """
+    Parse the item-list data and store each item.
+    :param log: logger
+    :param resp_json: response JSON
+    :param sql_pool: MySQL connection pool
+    :param category_id: category ID
+    :param category_name: category name
+    """
+    items = resp_json.get("items", [])
+    for item in items:
+        pid = item.get("id")
+        # sellerId = item.get("sellerId")
+        status = item.get("status")
+        product_name = item.get("name")
+        price = item.get("price")
+
+        created_at = item.get("created")  # Unix timestamp, e.g. 1777512645
+        created_at = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(created_at))) if created_at else None
+        updated_at = item.get("updated")  # Unix timestamp, e.g. 1777512645
+        updated_at = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(updated_at))) if updated_at else None
+
+        # thumbnails = item.get("thumbnails", [])
+        # img = thumbnails[0] if thumbnails else None
+
+        # categoryId = item.get("categoryId")
+
+        # 获取详情页多图
+        try:
+            seller_id, seller_name, photos = get_detail_page(log, pid)
+        except Exception as e:
+            log.error(f"Error getting detail page: {e}")
+            seller_id, seller_name, photos = None, None, None
+
+        data_dict = {
+            "pid": pid,
+            "seller_id": seller_id,
+            "seller_name": seller_name,
+            "photos": photos,
+            "status": status,
+            "product_name": product_name,
+            "price": price,
+            "created_at": created_at,
+            "updated_at": updated_at,
+            "category_id": category_id,
+            "category_name": category_name
+        }
+        # log.info(data_dict)
+
+        sql_pool.insert_one_or_dict(table="mercari_record", data=data_dict, ignore=True)
+
+
+def iter_pages(
+        log,
+        sql_pool,
+        category_id: int,
+        category_name: str,
+        start_page: int = 1,
+        end_page: int = 15000,
+):
+    """
+    Request pages in a loop until the API returns no more items.
+    :param log: logger
+    :param sql_pool: MySQL connection pool
+    :param category_id: category ID
+    :param category_name: category name
+    :param start_page: first page number
+    :param end_page: last page number
+    """
+    if category_id == 1289:
+        start_page = 42
+
+    if end_page < start_page:
+        raise ValueError("end_page must be >= start_page")
+
+    with requests.Session() as session:
+        for page_number in range(start_page, end_page + 1):
+            response = fetch_page(
+                log=log,
+                category_id=category_id,
+                page_number=page_number,
+                session=session,
+            )
+            # Parse the response
+            resp_json = response.json()
+            # print(resp_json)
+            parse_list(log, resp_json, sql_pool, category_id, category_name)
+
+            # The number of items per page varies, so don't assume a fixed 120
+            len_resp_json = len(resp_json.get("items", []))
+            log.info(f"Page {page_number} returned {len_resp_json} items...................")
+            if len_resp_json == 0:
+                log.info(f">>>>>>>>>>>>>>>>>>>>> Page {page_number} returned 0 items; stopping <<<<<<<<<<<<<<<<<<<<<<")
+                break
+
+            # if page_number < end_page and sleep_seconds > 0:
+            #     time.sleep(sleep_seconds)
+
+
+@retry(stop=stop_after_attempt(100), wait=wait_fixed(3600), after=after_log)
+def mercari_main(log):
+    """
+    Main entry point
+    :param log: logger
+    """
+    log.info(
+        f'Starting the {inspect.currentframe().f_code.co_name} spider task....................................................')
+
+    # Set up the MySQL connection pool; a pool object is always truthy, so check connectivity explicitly
+    sql_pool = MySQLConnectionPool(log=log)
+    if not sql_pool.check_pool_health():
+        log.error("Failed to connect to MySQL")
+        raise Exception("Failed to connect to MySQL")
+
+    # Categories to crawl
+    crawl_categories = [
+        {"category_id": 1289, "category_name": "Pokemon"},
+        {"category_id": 1409, "category_name": "One Piece"},
+        {"category_id": 7290, "category_name": "Sports"}
+    ]
+    try:
+        for category in crawl_categories:
+            try:
+                category_id = category["category_id"]
+                category_name = category["category_name"]
+                log.debug(f'Crawling category {category_name}............')
+                iter_pages(log, sql_pool, category_id, category_name, start_page=1, end_page=15000)
+            except Exception as e:
+                log.error(f'{inspect.currentframe().f_code.co_name} error: {e}')
+
+    except Exception as e:
+        log.error(f'{inspect.currentframe().f_code.co_name} error: {e}')
+    finally:
+        log.info(f'Spider {inspect.currentframe().f_code.co_name} finished; waiting for the next crawl cycle............')
+
+
+# def schedule_task():
+#     """
+#     Set up the daily scheduled run
+#     """
+#     mercari_main(log=logger)
+#
+#     schedule.every().day.at("05:00").do(mercari_main, log=logger)
+#     while True:
+#         schedule.run_pending()
+#         time.sleep(1)
+
+if __name__ == "__main__":
+    mercari_main(log=logger)
+    # get_detail_page(logger, "m69042262006")

+ 650 - 0
mercari_spider/mysql_pool.py

@@ -0,0 +1,650 @@
+# -*- coding: utf-8 -*-
+# Author : Charley
+# Python : 3.10.8
+# Date   : 2025/3/25 14:14
+import re
+import pymysql
+import YamlLoader
+from loguru import logger
+from dbutils.pooled_db import PooledDB
+
+# Load the YAML configuration
+yaml = YamlLoader.readYaml()
+mysqlYaml = yaml.get("mysql")
+sql_host = mysqlYaml.getValueAsString("host")
+sql_port = mysqlYaml.getValueAsInt("port")
+sql_user = mysqlYaml.getValueAsString("username")
+sql_password = mysqlYaml.getValueAsString("password")
+sql_db = mysqlYaml.getValueAsString("db")
+
+
+class MySQLConnectionPool:
+    """
+    MySQL connection pool
+    """
+
+    def __init__(self, mincached=1, maxcached=2, maxconnections=3, log=None):
+        """
+        Initialize the connection pool
+        :param mincached: minimum number of idle connections created at startup (0 = none)
+        :param maxcached: maximum number of idle connections kept in the pool (0 or None = unlimited)
+        :param maxconnections: maximum number of connections allowed (0 or None = unlimited)
+        :param log: custom logger
+        """
+        # Use loguru's logger unless a custom logger was passed in
+        self.log = log or logger
+        self.pool = PooledDB(
+            creator=pymysql,
+            mincached=mincached,
+            maxcached=maxcached,
+            maxconnections=maxconnections,
+            blocking=True,  # block and wait when no connection is available (False would raise instead)
+            host=sql_host,
+            port=sql_port,
+            user=sql_user,
+            password=sql_password,
+            database=sql_db,
+            charset="utf8mb4",
+            use_unicode=True,
+            init_command="SET NAMES utf8mb4",
+            ping=1,  # 0: never check (fastest); 1: check when a connection is fetched; 2: check before every execute, guarding against dead connections
+            connect_timeout=5,  # connection timeout (seconds)
+            # read_timeout=30,  # read timeout (seconds)
+            write_timeout=30  # write timeout (seconds)
+        )
+
+    def _execute(self, query, args=None, commit=False):
+        """
+        Execute a SQL statement
+        :param query: SQL statement
+        :param args: SQL parameters
+        :param commit: whether to commit the transaction
+        :return: cursor holding the result
+        """
+        conn = None  # keep a reference so the except block can roll back safely
+        try:
+            with self.pool.connection() as conn:
+                with conn.cursor() as cursor:
+                    cursor.execute(query, args)
+                    if commit:
+                        conn.commit()
+                    self.log.debug(f"sql _execute, Query: {query}, Rows: {cursor.rowcount}")
+                    return cursor
+        except Exception as e:
+            if commit and conn:
+                conn.rollback()
+            self.log.exception(f"Error executing query: {e}, Query: {query}, Args: {args}")
+            raise e
+
+    def select_one(self, query, args=None):
+        """
+        Run a query and return a single row
+        :param query: query statement
+        :param args: query parameters
+        :return: one row, or None
+        """
+        cursor = self._execute(query, args)
+        return cursor.fetchone()
+
+    def select_all(self, query, args=None):
+        """
+        Run a query and return all rows
+        :param query: query statement
+        :param args: query parameters
+        :return: all rows
+        """
+        cursor = self._execute(query, args)
+        return cursor.fetchall()
+
+    def insert_one(self, query, args):
+        """
+        Run a single INSERT statement
+        :param query: insert statement
+        :param args: insert parameters
+        """
+        self.log.info('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> insert_one: writing data >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
+        cursor = self._execute(query, args, commit=True)
+        return cursor.lastrowid  # ID of the inserted row
+
+    def insert_all(self, query, args_list):
+        """
+        Run a batch INSERT; on duplicate-key failure, fall back to row-by-row inserts
+        :param query: insert statement
+        :param args_list: list of parameter tuples
+        """
+        conn = None
+        cursor = None
+        try:
+            conn = self.pool.connection()
+            cursor = conn.cursor()
+            cursor.executemany(query, args_list)
+            conn.commit()
+            self.log.debug(f"sql insert_all, SQL: {query[:100]}..., Rows: {cursor.rowcount}")
+            self.log.info('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> insert_all: writing data >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
+        except pymysql.err.IntegrityError as e:
+            if "Duplicate entry" in str(e):
+                conn.rollback()
+                self.log.warning(f"Batch insert hit duplicates; falling back to row-by-row inserts. Error: {e}")
+                rowcount = 0
+                for args in args_list:
+                    try:
+                        self.insert_one(query, args)
+                        rowcount += 1
+                    except pymysql.err.IntegrityError as e2:
+                        if "Duplicate entry" in str(e2):
+                            self.log.debug(f"Skipping duplicate entry: {e2}")
+                        else:
+                            self.log.error(f"Insert failed: {e2}")
+                    except Exception as e2:
+                        self.log.error(f"Insert failed: {e2}")
+                self.log.info(f"Row-by-row insert finished: {rowcount}/{len(args_list)} rows")
+            else:
+                conn.rollback()
+                self.log.exception(f"Database integrity error: {e}")
+                raise e
+        except Exception as e:
+            conn.rollback()
+            self.log.exception(f"Batch insert failed: {e}")
+            raise e
+        finally:
+            if cursor:
+                cursor.close()
+            if conn:
+                conn.close()
+
+    def insert_one_or_dict(self, table=None, data=None, query=None, args=None, commit=True, ignore=False):
+        """
+        Insert a single row (dict-based or raw SQL)
+        :param table: table name (required for dict mode)
+        :param data: dict of {column: value}
+        :param query: raw SQL statement (mutually exclusive with data)
+        :param args: SQL parameters (required with query)
+        :param commit: whether to auto-commit
+        :param ignore: whether to use INSERT IGNORE
+        :return: last inserted ID
+        """
+        if data is not None:
+            if not isinstance(data, dict):
+                raise ValueError("Data must be a dictionary")
+
+            keys = ', '.join([self._safe_identifier(k) for k in data.keys()])
+            values = ', '.join(['%s'] * len(data))
+
+            # Build the (optionally INSERT IGNORE) statement
+            ignore_clause = "IGNORE" if ignore else ""
+            query = f"INSERT {ignore_clause} INTO {self._safe_identifier(table)} ({keys}) VALUES ({values})"
+            args = tuple(data.values())
+        elif query is None:
+            raise ValueError("Either data or query must be provided")
+
+        try:
+            cursor = self._execute(query, args, commit)
+            self.log.info(f"sql insert_one_or_dict, Table: {table}, Rows: {cursor.rowcount}")
+            self.log.info('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> insert_one_or_dict: writing data >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
+            return cursor.lastrowid
+        except pymysql.err.IntegrityError as e:
+            if "Duplicate entry" in str(e):
+                self.log.warning(f"Insert skipped: duplicate entry. Details: {e}")
+                return -1  # -1 signals that a duplicate entry was skipped
+            else:
+                self.log.exception(f"Database integrity error: {e}")
+                raise
+        except Exception as e:
+            self.log.exception(f"Unexpected error: {e}")
+            raise
+
+    def insert_many(self, table=None, data_list=None, query=None, args_list=None, batch_size=1000, commit=True,
+                    ignore=False):
+        """
+        Batch insert (dict list or raw SQL)
+        :param table: table name (required for dict mode)
+        :param data_list: list of dicts [{column: value}]
+        :param query: raw SQL statement (mutually exclusive with data_list)
+        :param args_list: list of parameter tuples (required with query)
+        :param batch_size: rows per batch
+        :param commit: whether to auto-commit
+        :param ignore: whether to use INSERT IGNORE
+        :return: number of affected rows
+        """
+        if data_list is not None:
+            if not data_list or not isinstance(data_list[0], dict):
+                raise ValueError("Data_list must be a non-empty list of dictionaries")
+
+            keys = ', '.join([self._safe_identifier(k) for k in data_list[0].keys()])
+            values = ', '.join(['%s'] * len(data_list[0]))
+
+            # Build the (optionally INSERT IGNORE) statement
+            ignore_clause = "IGNORE" if ignore else ""
+            query = f"INSERT {ignore_clause} INTO {self._safe_identifier(table)} ({keys}) VALUES ({values})"
+            args_list = [tuple(d.values()) for d in data_list]
+        elif query is None:
+            raise ValueError("Either data_list or query must be provided")
+
+        total = 0
+        for i in range(0, len(args_list), batch_size):
+            batch = args_list[i:i + batch_size]
+            try:
+                with self.pool.connection() as conn:
+                    with conn.cursor() as cursor:
+                        cursor.executemany(query, batch)
+                        if commit:
+                            conn.commit()
+                        total += cursor.rowcount
+            except pymysql.err.IntegrityError as e:
+                # Handle unique-index conflicts
+                if "Duplicate entry" in str(e):
+                    if ignore:
+                        # With INSERT IGNORE this should not happen, but guard anyway
+                        self.log.warning(f"Batch insert hit a duplicate (ignore mode): {e}")
+                    else:
+                        # Without IGNORE, fall back to row-by-row inserts
+                        self.log.warning(f"Batch insert hit duplicates; falling back to row-by-row inserts. Error: {e}")
+                        if commit:
+                            conn.rollback()
+                        
+                        rowcount = 0
+                        for j, args in enumerate(batch):
+                            try:
+                                if data_list:
+                                    # dict mode
+                                    self.insert_one_or_dict(
+                                        table=table,
+                                        data=dict(zip(data_list[0].keys(), args)),
+                                        commit=commit,
+                                        ignore=False  # catch duplicates manually for single inserts
+                                    )
+                                else:
+                                    # raw-SQL mode
+                                    self.insert_one(query, args)
+                                rowcount += 1
+                            except pymysql.err.IntegrityError as e2:
+                                if "Duplicate entry" in str(e2):
+                                    self.log.debug(f"Skipping duplicate entry [{i+j+1}]: {e2}")
+                                else:
+                                    self.log.error(f"Insert failed [{i+j+1}]: {e2}")
+                            except Exception as e2:
+                                self.log.error(f"Insert failed [{i+j+1}]: {e2}")
+                        total += rowcount
+                        self.log.info(f"Batch fallback finished: {rowcount}/{len(batch)} rows inserted")
+                else:
+                    # Other integrity errors
+                    self.log.exception(f"Database integrity error: {e}")
+                    if commit:
+                        conn.rollback()
+                    raise e
+            except Exception as e:
+                # Other database errors
+                self.log.exception(f"Batch insert failed: {e}")
+                if commit:
+                    conn.rollback()
+                raise e
+        if table:
+            self.log.info(f"sql insert_many, Table: {table}, Total Rows: {total}")
+        else:
+            self.log.info(f"sql insert_many, Query: {query}, Total Rows: {total}")
+        return total
+
+    def insert_many_two(self, table=None, data_list=None, query=None, args_list=None, batch_size=1000, commit=True,
+                        ignore=False):
+        """
+        Batch insert (dict list or raw SQL) - alternate implementation
+        :param table: table name (required for dict mode)
+        :param data_list: list of dicts [{column: value}]
+        :param query: raw SQL statement (mutually exclusive with data_list)
+        :param args_list: list of parameter tuples (required with query)
+        :param batch_size: rows per batch
+        :param commit: whether to auto-commit
+        :param ignore: whether to use INSERT IGNORE
+        :return: number of affected rows
+        """
+        if data_list is not None:
+            if not data_list or not isinstance(data_list[0], dict):
+                raise ValueError("Data_list must be a non-empty list of dictionaries")
+            keys = ', '.join([self._safe_identifier(k) for k in data_list[0].keys()])
+            values = ', '.join(['%s'] * len(data_list[0]))
+            ignore_clause = "IGNORE" if ignore else ""
+            query = f"INSERT {ignore_clause} INTO {self._safe_identifier(table)} ({keys}) VALUES ({values})"
+            args_list = [tuple(d.values()) for d in data_list]
+        elif query is None:
+            raise ValueError("Either data_list or query must be provided")
+    
+        total = 0
+        for i in range(0, len(args_list), batch_size):
+            batch = args_list[i:i + batch_size]
+            try:
+                with self.pool.connection() as conn:
+                    with conn.cursor() as cursor:
+                        cursor.executemany(query, batch)
+                        if commit:
+                            conn.commit()
+                        total += cursor.rowcount
+            except pymysql.err.IntegrityError as e:
+                if "Duplicate entry" in str(e) and not ignore:
+                    self.log.warning(f"Batch insert hit duplicates; falling back to row-by-row inserts: {e}")
+                    if commit:
+                        conn.rollback()
+                    rowcount = 0
+                    for args in batch:
+                        try:
+                            self.insert_one(query, args)
+                            rowcount += 1
+                        except pymysql.err.IntegrityError as e2:
+                            if "Duplicate entry" in str(e2):
+                                self.log.debug(f"Skipping duplicate entry: {e2}")
+                            else:
+                                self.log.error(f"Insert failed: {e2}")
+                        except Exception as e2:
+                            self.log.error(f"Insert failed: {e2}")
+                    total += rowcount
+                else:
+                    self.log.exception(f"Database integrity error: {e}")
+                    if commit:
+                        conn.rollback()
+                    raise e
+            except Exception as e:
+                self.log.exception(f"Batch insert failed: {e}")
+                if commit:
+                    conn.rollback()
+                raise e
+        self.log.info(f"sql insert_many_two, Table: {table}, Total Rows: {total}")
+        return total
+
+    def insert_too_many(self, query, args_list, batch_size=1000):
+        """
+        Batch insert with chunked commits, useful for 100k+ rows at once; falls back to row-by-row inserts on failure
+        :param query: insert statement
+        :param args_list: list of parameter tuples
+        :param batch_size: rows per chunk
+        """
+        self.log.info(f"sql insert_too_many, Query: {query}, Total Rows: {len(args_list)}")
+        for i in range(0, len(args_list), batch_size):
+            batch = args_list[i:i + batch_size]
+            try:
+                with self.pool.connection() as conn:
+                    with conn.cursor() as cursor:
+                        cursor.executemany(query, batch)
+                        conn.commit()
+                        self.log.debug(f"insert_too_many -> Total Rows: {len(batch)}")
+            except Exception as e:
+                self.log.error(f"insert_too_many error. Trying single insert. Error: {e}")
+                # Fall back to single inserts for this batch
+                for args in batch:
+                    self.insert_one(query, args)
+
+    def update_one(self, query, args):
+        """
+        Run a single UPDATE statement
+        :param query: update statement
+        :param args: update parameters
+        """
+        self.log.info('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> update_one: updating data >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
+        return self._execute(query, args, commit=True)
+
+    def update_all(self, query, args_list):
+        """
+        Run a batch UPDATE; on failure, fall back to row-by-row updates
+        :param query: update statement
+        :param args_list: list of parameter tuples
+        """
+        conn = None
+        cursor = None
+        try:
+            conn = self.pool.connection()
+            cursor = conn.cursor()
+            cursor.executemany(query, args_list)
+            conn.commit()
+            self.log.debug(f"sql update_all, SQL: {query}, Rows: {len(args_list)}")
+            self.log.info('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> update_all: updating data >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
+        except Exception as e:
+            conn.rollback()
+            self.log.error(f"Error executing query: {e}")
+            # Batch update failed; update row by row instead
+            rowcount = 0
+            for args in args_list:
+                self.update_one(query, args)
+                rowcount += 1
+            self.log.debug(f'Batch update failed. Updated {rowcount} rows individually.')
+        finally:
+            if cursor:
+                cursor.close()
+            if conn:
+                conn.close()
+
+    def update_one_or_dict(self, table=None, data=None, condition=None, query=None, args=None, commit=True):
+        """
+        Update a single row (dict-based or raw SQL)
+        :param table: table name (required for dict mode)
+        :param data: dict of {column: value} (mutually exclusive with query)
+        :param condition: update condition; supported forms:
+            - dict: {"id": 1} → "WHERE id = %s"
+            - str: "id = 1" → "WHERE id = 1" (caller must ensure it is safe)
+            - tuple: ("id = %s", [1]) → "WHERE id = %s" (parameterized)
+        :param query: raw SQL statement (mutually exclusive with data)
+        :param args: SQL parameters (required with query)
+        :param commit: whether to auto-commit
+        :return: number of affected rows
+        :raises: ValueError on invalid arguments
+        """
+        # Validate arguments
+        if data is not None:
+            if not isinstance(data, dict):
+                raise ValueError("Data must be a dictionary")
+            if table is None:
+                raise ValueError("Table name is required for dictionary update")
+            if condition is None:
+                raise ValueError("Condition is required for dictionary update")
+
+            # Build the SET clause
+            set_clause = ", ".join([f"{self._safe_identifier(k)} = %s" for k in data.keys()])
+            set_values = list(data.values())
+
+            # Parse the condition
+            condition_clause, condition_args = self._parse_condition(condition)
+            query = f"UPDATE {self._safe_identifier(table)} SET {set_clause} WHERE {condition_clause}"
+            args = set_values + condition_args
+
+        elif query is None:
+            raise ValueError("Either data or query must be provided")
+
+        # Run the update
+        cursor = self._execute(query, args, commit)
+        # self.log.debug(
+        #     f"Updated table={table}, rows={cursor.rowcount}, query={query[:100]}...",
+        #     extra={"table": table, "rows": cursor.rowcount}
+        # )
+        return cursor.rowcount
+
+    def _parse_condition(self, condition):
+        """
+        Parse a condition into (clause, args) form
+        :param condition: dict / str / tuple
+        :return: (str, list) SQL clause and parameter list
+        """
+        if isinstance(condition, dict):
+            clause = " AND ".join([f"{self._safe_identifier(k)} = %s" for k in condition.keys()])
+            args = list(condition.values())
+        elif isinstance(condition, str):
+            clause = condition  # NOTE: caller must ensure this string is safe
+            args = []
+        elif isinstance(condition, (tuple, list)) and len(condition) == 2:
+            clause, args = condition[0], condition[1]
+            if not isinstance(args, (list, tuple)):
+                args = [args]
+        else:
+            raise ValueError("Condition must be dict/str/(clause, args)")
+        return clause, args
+
+    def update_many(self, table=None, data_list=None, condition_list=None, query=None, args_list=None, batch_size=500,
+                    commit=True):
+        """
+        Batch update (dict list or raw SQL)
+        :param table: table name (required for dict mode)
+        :param data_list: list of dicts [{column: value}]
+        :param condition_list: list of conditions (dicts, same length as data_list)
+        :param query: raw SQL statement (mutually exclusive with data_list)
+        :param args_list: list of parameter tuples (required with query)
+        :param batch_size: rows per batch
+        :param commit: whether to auto-commit
+        :return: number of affected rows
+        """
+        if data_list is not None:
+            if not data_list or not isinstance(data_list[0], dict):
+                raise ValueError("Data_list must be a non-empty list of dictionaries")
+            if condition_list is None or len(data_list) != len(condition_list):
+                raise ValueError("Condition_list must be provided and match the length of data_list")
+            if not all(isinstance(cond, dict) for cond in condition_list):
+                raise ValueError("All elements in condition_list must be dictionaries")
+
+            # Keys of the first data item and condition item
+            first_data_keys = set(data_list[0].keys())
+            first_cond_keys = set(condition_list[0].keys())
+
+            # Build the base SQL
+            set_clause = ', '.join([self._safe_identifier(k) + ' = %s' for k in data_list[0].keys()])
+            condition_clause = ' AND '.join([self._safe_identifier(k) + ' = %s' for k in condition_list[0].keys()])
+            base_query = f"UPDATE {self._safe_identifier(table)} SET {set_clause} WHERE {condition_clause}"
+            total = 0
+
+            # Process in batches
+            for i in range(0, len(data_list), batch_size):
+                batch_data = data_list[i:i + batch_size]
+                batch_conds = condition_list[i:i + batch_size]
+                batch_args = []
+
+                # Check that every row in this batch has the same structure
+                can_batch = True
+                for data, cond in zip(batch_data, batch_conds):
+                    data_keys = set(data.keys())
+                    cond_keys = set(cond.keys())
+                    if data_keys != first_data_keys or cond_keys != first_cond_keys:
+                        can_batch = False
+                        break
+                    batch_args.append(tuple(data.values()) + tuple(cond.values()))
+
+                if not can_batch:
+                    # Structures differ; fall back to single-row updates
+                    for data, cond in zip(batch_data, batch_conds):
+                        self.update_one_or_dict(table=table, data=data, condition=cond, commit=commit)
+                        total += 1
+                    continue
+
+                # Run the batch update
+                try:
+                    with self.pool.connection() as conn:
+                        with conn.cursor() as cursor:
+                            cursor.executemany(base_query, batch_args)
+                            if commit:
+                                conn.commit()
+                            total += cursor.rowcount
+                            self.log.debug(f"Batch update succeeded. Rows: {cursor.rowcount}")
+                except Exception as e:
+                    if commit:
+                        conn.rollback()
+                    self.log.error(f"Batch update failed: {e}")
+                    # Fall back to single-row updates
+                    for args, data, cond in zip(batch_args, batch_data, batch_conds):
+                        try:
+                            self._execute(base_query, args, commit=commit)
+                            total += 1
+                        except Exception as e2:
+                            self.log.error(f"Single update failed: {e2}, Data: {data}, Condition: {cond}")
+            self.log.info(f"Total updated rows: {total}")
+            return total
+        elif query is not None:
+            # Raw SQL with a parameter list
+            if args_list is None:
+                raise ValueError("args_list must be provided when using query")
+
+            total = 0
+            for i in range(0, len(args_list), batch_size):
+                batch_args = args_list[i:i + batch_size]
+                try:
+                    with self.pool.connection() as conn:
+                        with conn.cursor() as cursor:
+                            cursor.executemany(query, batch_args)
+                            if commit:
+                                conn.commit()
+                            total += cursor.rowcount
+                            self.log.debug(f"Batch update succeeded. Rows: {cursor.rowcount}")
+                except Exception as e:
+                    if commit:
+                        conn.rollback()
+                    self.log.error(f"Batch update failed: {e}")
+                    # Fall back to single-row updates
+                    for args in batch_args:
+                        try:
+                            self._execute(query, args, commit=commit)
+                            total += 1
+                        except Exception as e2:
+                            self.log.error(f"Single update failed: {e2}, Args: {args}")
+            self.log.info(f"Total updated rows: {total}")
+            return total
+        else:
+            raise ValueError("Either data_list or query must be provided")
+
+    def check_pool_health(self):
+        """
+        Check that the pool can hand out a live connection
+
+        # Usage example
+        # Set up the MySQL connection pool
+        sql_pool = MySQLConnectionPool(log=log)
+        if not sql_pool.check_pool_health():
+            log.error("Connection pool is unhealthy")
+            raise RuntimeError("Connection pool is unhealthy")
+        """
+        try:
+            with self.pool.connection() as conn:
+                conn.ping(reconnect=True)
+                return True
+        except Exception as e:
+            self.log.error(f"Connection pool health check failed: {e}")
+            return False
+
+    def close(self):
+        """
+        Close the pool and release all connections
+        """
+        try:
+            if hasattr(self, 'pool') and self.pool:
+                self.pool.close()
+                self.log.info("Connection pool closed")
+        except Exception as e:
+            self.log.error(f"Failed to close the connection pool: {e}")
+
+    @staticmethod
+    def _safe_identifier(name):
+        """Validate that a SQL identifier (table/column name) is safe to interpolate"""
+        if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', name):
+            raise ValueError(f"Invalid SQL identifier: {name}")
+        return name
+
+
+if __name__ == '__main__':
+    sql_pool = MySQLConnectionPool()
+    # data_dic = {'card_type_id': 111, 'card_type_name': '补充包 继承的意志【OPC-13】', 'card_type_position': 964,
+    #             'card_id': 5284, 'card_name': '蒙奇·D·路飞', 'card_number': 'OP13-001', 'card_rarity': 'L',
+    #             'card_img': 'https://source.windoent.com/OnePiecePc/Picture/1757929283612OP13-001.png',
+    #             'card_life': '4', 'card_attribute': '打', 'card_power': '5000', 'card_attack': '-',
+    #             'card_color': '红/绿', 'subscript': 4, 'card_features': '超新星/草帽一伙',
+    #             'card_text_desc': '【咚!!×1】【对方的攻击时】我方处于活跃状态的咚!!不多于5张的场合,可以将我方任意张数的咚!!转为休息状态。每有1张转为休息状态的咚!!,本次战斗中,此领袖或我方最多1张拥有《草帽一伙》特征的角色力量+2000。',
+    #             'card_offer_type': '补充包 继承的意志【OPC-13】', 'crawler_language': '简中'}
+    # sql_pool.insert_one_or_dict(table="one_piece_record", data=data_dic)
+
+    sql_pool.insert_many(
+        table="jhs_product_record",
+        data_list=[
+            {
+                "product_id": 99999991,
+                "seller_username": "浣熊小助理(裸卡版)",
+                "auction_product_name": "2000 日文 无编号 #175 U 波克比 有瑕疵",
+            },
+            {
+                "product_id": 99999992,
+                "seller_username": "测试商家二号",
+                "auction_product_name": "中文批量插入测试",
+            },
+        ],
+        ignore=False
+    )
+
+

+ 9 - 0
mercari_spider/requirements.txt

@@ -0,0 +1,9 @@
+-i https://mirrors.aliyun.com/pypi/simple/
+cryptography==46.0.7
+DBUtils==3.1.2
+loguru==0.7.3
+PyMySQL==1.1.2
+PyYAML==6.0.3
+requests==2.33.1
+schedule==1.2.2
+tenacity==9.1.4