Conversation
Summary by CodeRabbit: Release Notes

Walkthrough

Introduces Redisson-based distributed-lock infrastructure with AOP support and migrates RedisService to Redisson. Applies locks and transaction isolation to the token, interview, and resume services, and adds a RedissonClient spy and concurrency tests to the test infrastructure.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant InterviewFacadeService
    participant RedisService
    participant InterviewProceedBedrockFlowAsyncService
    participant RedissonClient
    Client->>InterviewFacadeService: proceedInterview(memberId, qAndA, interviewId)
    InterviewFacadeService->>InterviewFacadeService: lockValue = UUID.randomUUID()
    InterviewFacadeService->>RedisService: acquireLockWithValue(lockKey, lockValue, ttl)
    RedisService->>RedissonClient: setIfAbsent(lockKey, lockValue, ttl)
    alt Lock Acquired
        RedissonClient-->>RedisService: success
        RedisService-->>InterviewFacadeService: lock acquired
        InterviewFacadeService->>InterviewProceedBedrockFlowAsyncService: proceedInterviewByBedrockFlowAsync(..., lockValue)
        InterviewProceedBedrockFlowAsyncService->>InterviewProceedBedrockFlowAsyncService: process with lockValue
        alt Success or Failure
            InterviewProceedBedrockFlowAsyncService->>RedisService: releaseLockSafely(lockKey, lockValue)
            RedisService->>RedissonClient: eval(luaScript, key, expectedValue)
            RedissonClient-->>RedisService: deleted if match
        end
    else Lock Not Acquired
        RedissonClient-->>RedisService: failed
        RedisService-->>InterviewFacadeService: lock not acquired
        InterviewFacadeService-->>Client: error / retry response
    end
```
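The acquire-and-release pair in the diagram above can be sketched in plain Java, with a ConcurrentHashMap standing in for Redis (a hypothetical stand-in; the real RedisService runs the compare-and-delete as a Lua script so the check and the delete happen atomically on the server):

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of value-matched lock release, assuming a map in place of Redis.
// putIfAbsent mirrors setIfAbsent; remove(key, value) mirrors the Lua
// compare-and-delete: only the caller holding the matching lockValue can delete.
public class SafeLockReleaseSketch {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    public boolean acquireLockWithValue(String key, String value) {
        // succeeds only when no one currently holds the key
        return store.putIfAbsent(key, value) == null;
    }

    public boolean releaseLockSafely(String key, String expectedValue) {
        // deletes only on a value match, so an owner whose lock already
        // expired cannot delete a lock since acquired by another request
        return store.remove(key, expectedValue);
    }
}
```

Because the release only deletes on a value match, a caller whose lock already expired cannot delete a lock now held by another request, which is the point of threading lockValue through the async flow.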
```mermaid
sequenceDiagram
    participant Client
    participant TokenFacadeService
    participant DistributedLockAspect
    participant RedissonClient
    participant TokenRepository
    participant Database
    Client->>TokenFacadeService: useToken(memberId)
    TokenFacadeService->>DistributedLockAspect: intercepted by @DistributedLock
    DistributedLockAspect->>RedissonClient: tryLock(resolvedKey, waitTime, leaseTime)
    alt Lock Acquired
        RedissonClient-->>DistributedLockAspect: locked
        DistributedLockAspect->>TokenFacadeService: proceed()
        TokenFacadeService->>TokenRepository: read/update token
        TokenRepository->>Database: SELECT/UPDATE
        Database-->>TokenRepository: result
        TokenRepository-->>TokenFacadeService: updated entity
        DistributedLockAspect->>RedissonClient: unlock()
        RedissonClient-->>DistributedLockAspect: unlocked
        TokenFacadeService-->>Client: success
    else Lock Not Acquired
        RedissonClient-->>DistributedLockAspect: lock denied
        DistributedLockAspect-->>TokenFacadeService: throw BadRequestException
        TokenFacadeService-->>Client: error
    end
```
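The resolvedKey step in the diagram above boils down to building a namespaced key from the annotation's prefix and a SpEL-resolved value. A minimal sketch, assuming the SpEL value has already been evaluated (the real aspect uses Spring's SpelExpressionParser; LockKeyBuilderSketch and its null guard are illustrative, not the project's actual code):

```java
// Hypothetical sketch of the aspect's lock-key construction. The prefix and the
// resolved SpEL value are concatenated under a shared namespace; a null resolved
// value is rejected so requests cannot all collapse onto "lock:...:null".
public class LockKeyBuilderSketch {
    private static final String LOCK_KEY_PREFIX = "lock:";

    public static String resolveLockKey(String prefix, Object resolvedKey) {
        if (resolvedKey == null) {
            throw new IllegalArgumentException("lock key resolved to null for prefix: " + prefix);
        }
        return LOCK_KEY_PREFIX + prefix + ":" + resolvedKey;
    }
}
```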
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Summary of Changes

Hello @unifolio0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This PR introduces Redisson-based distributed locks to address the application's concurrency issues and refactors the existing Redis-related logic. In particular, it applies distributed locks to several core features (social-login sign-up, interview progression, token purchase/use/refund, and resume-based question generation) to guarantee data consistency and prevent errors caused by concurrent requests.

Highlights
Changelog
Activity
Locks applied with care, concurrency woes fade, code flows on in peace.

Footnotes
Test Results: 41 files, 41 suites, 1m 7s ⏱️. Results for commit f9b88bf.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
api/src/main/java/com/samhap/kokomen/token/service/TokenFacadeService.java (1)
36-49: 🧹 Nitpick | 🔵 Trivial

No compensating transaction when DB work fails after the external payment is confirmed.

If paymentClient.confirmPayment() succeeds at line 44 and the DB operations at lines 45-46 (saveTokenPurchase, addPaidTokens) then fail, the transaction rolls back, but the payment has already been confirmed. This problem predates this PR, but with distributed locks being introduced, it is worth considering a compensation mechanism (a refund path or event-based retries).

api/src/main/java/com/samhap/kokomen/interview/service/InterviewProceedBedrockFlowAsyncService.java (2)
62-75: ⚠️ Potential issue | 🔴 Critical

The lock is not released when invokeFlow throws synchronously in proceedInterviewByBedrockFlowAsync.

If bedrockAgentRuntimeAsyncClient.invokeFlow() throws synchronously before the async dispatch (for example, on a validation failure or connection error), the lock is never released. proceedInterviewByGptFlowAsync (lines 84-94) guards this case with a try-catch, but the Bedrock path has no equivalent protection.

🐛 Suggested fix

```diff
 public void proceedInterviewByBedrockFlowAsync(Long memberId, QuestionAndAnswers questionAndAnswers,
         Long interviewId, String lockValue) {
     Map<String, String> mdcContext = MDC.getCopyOfContextMap();
     String lockKey = InterviewFacadeService.createInterviewProceedLockKey(memberId);
     String interviewProceedStateKey = InterviewFacadeService.createInterviewProceedStateKey(interviewId,
             questionAndAnswers.readCurQuestion().getId());
-    bedrockAgentRuntimeAsyncClient.invokeFlow(
-            InterviewInvokeFlowRequestFactory.createInterviewProceedInvokeFlowRequest(questionAndAnswers),
-            createInterviewProceedInvokeFlowResponseHandler(memberId, questionAndAnswers, interviewId, lockKey,
-                    lockValue, interviewProceedStateKey, mdcContext));
-    redisService.setValue(interviewProceedStateKey, InterviewProceedState.LLM_PENDING.name(),
-            Duration.ofSeconds(300));
+    try {
+        bedrockAgentRuntimeAsyncClient.invokeFlow(
+                InterviewInvokeFlowRequestFactory.createInterviewProceedInvokeFlowRequest(questionAndAnswers),
+                createInterviewProceedInvokeFlowResponseHandler(memberId, questionAndAnswers, interviewId, lockKey,
+                        lockValue, interviewProceedStateKey, mdcContext));
+        redisService.setValue(interviewProceedStateKey, InterviewProceedState.LLM_PENDING.name(),
+                Duration.ofSeconds(300));
+    } catch (Exception e) {
+        redisService.releaseLockSafely(lockKey, lockValue);
+        throw e;
+    }
 }
```
142-160: ⚠️ Potential issue | 🔴 Critical

Missing flow event-type handling risks leaking the lock.

In the callbackInterviewProceedBedrockFlow(FlowResponseStream, ...) method (lines 142-160), events other than FlowOutputEvent are returned without being processed and without releasing the lock.

AWS Bedrock's FlowResponseStream can emit several event types, including FlowOutputEvent, FlowCompletionEvent, FlowTraceEvent, and FlowMultiTurnInputRequestEvent. The current code handles only FlowOutputEvent; when a different event type arrives, the finally block (lines 157-159) only clears the MDC and does not release the lock.

The onError callback fires only on exceptions, so it does not cover this event-routing gap. As a result, when only a FlowCompletionEvent arrives, or only other non-output event types do, the lock stays held permanently and blocks the user's interview progress.

Either add proper handling and lock release for every FlowResponseStream event type, or release the lock unconditionally in the finally block. (The same pattern appears in ResumeEvaluationAsyncService and ResumeBasedQuestionBedrockService.)
🤖 Fix all issues with AI agents
In `@api/build.gradle`:
- Line 33: Both modules hard-code the same Redisson coordinate
("org.redisson:redisson:3.52.0"), risking drift; consolidate the version into a
single source of truth (e.g., add redissonVersion = '3.52.0' in the root
project's ext block or declare it in the version catalog) and update the
dependency declarations in both modules to reference that single variable
(replace literal "org.redisson:redisson:3.52.0" with the shared version variable
or catalog entry).
In `@api/src/main/java/com/samhap/kokomen/auth/service/AuthService.java`:
- Around line 37-46: Extract the duplicated try-catch social login block into a
single private helper (e.g., handleSocialLogin) that accepts the provider and
the identity fields (provider: SocialProvider, id: String, optional nickname:
String), then call memberService.saveSocialMember(...) inside the helper and
fall back to memberService.readBySocialLogin(...) in the catch, returning new
MemberResponse(...) from the helper; replace the current duplicated blocks
around memberService.saveSocialMember and memberService.readBySocialLogin (used
for SocialProvider.KAKAO and SocialProvider.GOOGLE) with calls to this new
helper to centralize the logic.
In
`@api/src/main/java/com/samhap/kokomen/interview/service/InterviewFacadeService.java`:
- Around line 129-134: Update the exception logging in InterviewFacadeService so
the full stack trace is logged by passing the Throwable as the last argument to
log.error (change log.error("Bedrock API 호출 실패, GPT 폴백에시 기록 - {}", e) to pass e
as the throwable and similarly change log.error("Gpt API 호출 실패 - {}", ex) to
pass ex as the throwable); also fix the typo "폴백에시" to "폴백 시". Target the
log.error calls in the try/catch block around
interviewProceedBedrockFlowAsyncService.proceedInterviewByGptFlowAsync(...) and
the subsequent catch that calls redisService.releaseLockSafely(...).
In
`@api/src/main/java/com/samhap/kokomen/interview/service/InterviewSchedulerService.java`:
- Around line 37-44: The scheduler currently only calls
redisService.releaseLock(INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK) inside the catch
block so the lock remains until TTL expiry on success; decide whether this is
intentional: if not, move the releaseLock call into a finally block (ensuring it
runs after scanKeys(...) and syncInterviewViewCounts(...)) to always release
INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK; if it is intentional, add a clear comment
above the try/catch referencing INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK and
explaining that the lock is intentionally held until TTL to prevent duplicate
daily runs.
In
`@api/src/test/java/com/samhap/kokomen/token/service/TokenFacadeServiceConcurrencyTest.java`:
- Around line 96-108: In TokenFacadeServiceConcurrencyTest, the concurrent
runnable only catches BadRequestException which hides other failures; change the
runnable that calls tokenFacadeService.useToken(member.getId()) to catch
Exception (or add a second catch(Exception e)) so any unexpected exception
increments failCount and is logged (include the exception and stack trace)
before countDown on latch; keep BadRequestException handling if you need
specific assertions but ensure all exceptions are recorded/logged and failCount
is incremented to make assertions deterministic.
- Around line 45-66: The concurrency test TokenFacadeServiceConcurrencyTest may
fail because the default `@DistributedLock` waitTime (3s) might be too short for
10 threads contending on the same memberId; update either the production method
annotation or the test to increase the lock waitTime: locate the useToken(...)
method (tokenFacadeService.useToken / the method annotated with
`@DistributedLock`) and set an explicit larger waitTime value sufficient for your
environment, or adjust the test to reduce contention (fewer threads or staggered
starts) so that all threads can acquire the lock within the configured waitTime.
In
`@common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java`:
- Around line 20-25: The annotation order on DistributedLockAspect violates the
guideline—Lombok annotations must come before Spring annotations; move Lombok
annotations (`@Slf4j` and `@RequiredArgsConstructor`) to the top of the annotation
block and place Spring annotations (`@Component`, `@Aspect`, `@Order`(0)) after them,
keeping the more significant Spring annotation lower as needed so the final
order is Lombok first then Spring.
- Around line 57-68: The resolveLockKey method currently concatenates the SpEL
result directly, which can produce "lock:prefix:null" when
PARSER.parseExpression(distributedLock.key()).getValue(context) returns null;
update resolveLockKey to validate the resolvedKey after evaluation (call to
PARSER.parseExpression(...).getValue(context)) and if it is null or empty throw
a clear IllegalStateException (or custom runtime exception) describing the
failing annotation (include distributedLock.key() and distributedLock.prefix()
and the target method from the MethodSignature) so the caller fails fast instead
of generating a meaningless lock key; ensure you still use LOCK_KEY_PREFIX and
distributedLock.prefix() when building the key only after validation.
In `@common/src/main/java/com/samhap/kokomen/global/config/RedissonConfig.java`:
- Around line 17-21: Update the RedissonConfig method that currently creates
Config and returns Redisson.create(config) (the block creating Config, using
useSingleServer(), setAddress and returning Redisson) to read and apply
production-relevant properties: set the Redis password from
spring.data.redis.password via useSingleServer().setPassword(...); choose the
address scheme based on an SSL flag or property (use "rediss://" when TLS is
required); and configure connectionPoolSize, connectionMinimumIdleSize, timeout
and retryAttempts via
useSingleServer().setConnectionPoolSize(...).setConnectionMinimumIdleSize(...).setTimeout(...).setRetryAttempts(...).
Ensure these values are pulled from injected configuration/properties and
applied before calling Redisson.create(config).
In `@common/src/main/java/com/samhap/kokomen/global/service/RedisService.java`:
- Around line 60-67: The Redis keys are accessed with inconsistent codecs
causing runtime deserialization errors: ensure the same codec is used for both
initial set and increments by updating incrementKey to obtain the atomic long
with StringCodec (use redissonClient.getAtomicLong(key, StringCodec.INSTANCE))
or change the initial write in InterviewViewCountService.setIfAbsent to use an
RAtomicLong for initialization; likewise make expireKey use a bucket with the
same codec (redissonClient.getBucket(key, StringCodec.INSTANCE)) so all
operations on viewCountKey share a consistent codec/serialization format.
- Around line 81-88: The null check in RedisService.multiGet is ineffective
because RBuckets.get(...) never returns null; replace the redundant null check
with a check for an empty result (result.isEmpty()) or simply remove the
defensive branch and handle empty maps appropriately; update the method in
RedisService.multiGet to either throw a RedisException when result.isEmpty() (to
preserve current error behavior) or return the empty map directly depending on
intended semantics.
```groovy
// PDF 텍스트 추출 (PDF text extraction)
implementation 'org.apache.pdfbox:pdfbox:3.0.3'

testImplementation 'org.redisson:redisson:3.52.0'
```
🧹 Nitpick | 🔵 Trivial
The Redisson version is hard-coded in two modules.

Both common/build.gradle (line 5) and api/build.gradle (line 33) declare org.redisson:redisson:3.52.0 independently, risking version drift. Managing the version in one place, such as the root build.gradle's ext block or a version catalog, is recommended.
🤖 Prompt for AI Agents
In `@api/build.gradle` at line 33, Both modules hard-code the same Redisson
coordinate ("org.redisson:redisson:3.52.0"), risking drift; consolidate the
version into a single source of truth (e.g., add redissonVersion = '3.52.0' in
the root project's ext block or declare it in the version catalog) and update
the dependency declarations in both modules to reference that single variable
(replace literal "org.redisson:redisson:3.52.0" with the shared version variable
or catalog entry).
```java
try {
    Member member = memberService.saveSocialMember(SocialProvider.KAKAO,
            String.valueOf(kakaoUserInfoResponse.id()),
            kakaoUserInfoResponse.kakaoAccount().profile().nickname());
    return new MemberResponse(member);
} catch (DataIntegrityViolationException exception) {
    Member member = memberService.readBySocialLogin(SocialProvider.KAKAO,
            String.valueOf(kakaoUserInfoResponse.id()));
    return new MemberResponse(member);
}
```
🧹 Nitpick | 🔵 Trivial
The Kakao/Google login concurrency handling is duplicated.

The two methods share an identical try-catch structure, differing only in the provider and how the ID is extracted. Extracting the common logic into a private method would improve maintainability.
♻️ Refactoring example

```diff
+private MemberResponse getOrCreateSocialMember(SocialProvider provider, String socialId, String nickname) {
+    return memberService.findBySocialLogin(provider, socialId)
+            .map(MemberResponse::new)
+            .orElseGet(() -> {
+                try {
+                    Member member = memberService.saveSocialMember(provider, socialId, nickname);
+                    return new MemberResponse(member);
+                } catch (DataIntegrityViolationException exception) {
+                    Member member = memberService.readBySocialLogin(provider, socialId);
+                    return new MemberResponse(member);
+                }
+            });
+}
```

Also applies to: 58-66
🤖 Prompt for AI Agents
In `@api/src/main/java/com/samhap/kokomen/auth/service/AuthService.java` around
lines 37 - 46, Extract the duplicated try-catch social login block into a single
private helper (e.g., handleSocialLogin) that accepts the provider and the
identity fields (provider: SocialProvider, id: String, optional nickname:
String), then call memberService.saveSocialMember(...) inside the helper and
fall back to memberService.readBySocialLogin(...) in the catch, returning new
MemberResponse(...) from the helper; replace the current duplicated blocks
around memberService.saveSocialMember and memberService.readBySocialLogin (used
for SocialProvider.KAKAO and SocialProvider.GOOGLE) with calls to this new
helper to centralize the logic.
```diff
 log.error("Bedrock API 호출 실패, GPT 폴백에시 기록 - {}", e);
 interviewProceedBedrockFlowAsyncService.proceedInterviewByGptFlowAsync(memberAuth.memberId(),
-        questionAndAnswers, interviewId);
+        questionAndAnswers, interviewId, lockValue);
 } catch (Exception ex) {
     log.error("Gpt API 호출 실패 - {}", ex);
-    redisService.releaseLock(lockKey);
+    redisService.releaseLockSafely(lockKey, lockValue);
```
Stack traces are lost when logging these exceptions.

The log.error("... - {}", e) form prints only the exception's toString() and drops the stack trace. With SLF4J, the exception object must be passed as the last argument for the stack trace to be included. Also, "폴백에시" on line 129 looks like a typo for "폴백 시".
🔧 Suggested fix

```diff
-log.error("Bedrock API 호출 실패, GPT 폴백에시 기록 - {}", e);
+log.error("Bedrock API 호출 실패, GPT 폴백 시 기록", e);
 interviewProceedBedrockFlowAsyncService.proceedInterviewByGptFlowAsync(memberAuth.memberId(),
         questionAndAnswers, interviewId, lockValue);
 } catch (Exception ex) {
-    log.error("Gpt API 호출 실패 - {}", ex);
+    log.error("Gpt API 호출 실패", ex);
     redisService.releaseLockSafely(lockKey, lockValue);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
log.error("Bedrock API 호출 실패, GPT 폴백 시 기록", e);
interviewProceedBedrockFlowAsyncService.proceedInterviewByGptFlowAsync(memberAuth.memberId(),
        questionAndAnswers, interviewId, lockValue);
} catch (Exception ex) {
    log.error("Gpt API 호출 실패", ex);
    redisService.releaseLockSafely(lockKey, lockValue);
```
🤖 Prompt for AI Agents
In
`@api/src/main/java/com/samhap/kokomen/interview/service/InterviewFacadeService.java`
around lines 129 - 134, Update the exception logging in InterviewFacadeService
so the full stack trace is logged by passing the Throwable as the last argument
to log.error (change log.error("Bedrock API 호출 실패, GPT 폴백에시 기록 - {}", e) to pass
e as the throwable and similarly change log.error("Gpt API 호출 실패 - {}", ex) to
pass ex as the throwable); also fix the typo "폴백에시" to "폴백 시". Target the
log.error calls in the try/catch block around
interviewProceedBedrockFlowAsyncService.proceedInterviewByGptFlowAsync(...) and
the subsequent catch that calls redisService.releaseLockSafely(...).
```java
try {
    Iterable<String> scannedKeys = redisService.scanKeys(INTERVIEW_VIEW_COUNT_KEY_PATTERN,
            REDIS_INTERVIEW_VIEW_COUNT_BATCH_SIZE);
    syncInterviewViewCounts(scannedKeys);
} catch (Exception e) {
    log.error("인터뷰 조회수를 DB에 반영하는 스케줄러 동작 중 에러 발생", e);
    redisService.releaseLock(INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK);
}
```
🧹 Nitpick | 🔵 Trivial
Confirm whether keeping the lock held after a successful scheduler run is intentional.

releaseLock is called only in the catch block, so on normal completion the lock stays held until its TTL (6 hours) expires. This looks like a deliberate design to prevent duplicate runs of a daily scheduler, but since the release is not in a finally block, the intent is not obvious.

If the behavior is intentional, adding a comment that makes the intent explicit is recommended.
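If the hold-until-TTL behavior turns out to be unintentional, the finally-based shape reads roughly as follows (a sketch with an AtomicBoolean standing in for the Redis lock; runOnce and the syncAll parameter are hypothetical names, not the service's actual API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the always-release alternative: the lock is released on success
// and on failure alike, because the release sits in finally rather than catch.
public class SchedulerLockSketch {
    final AtomicBoolean lock = new AtomicBoolean(false);
    int runs = 0;

    public boolean runOnce(Runnable syncAll) {
        if (!lock.compareAndSet(false, true)) {
            return false;              // another run currently holds the lock
        }
        try {
            syncAll.run();             // scan-and-sync work
            runs++;
            return true;
        } catch (Exception e) {
            return false;              // the real service logs here
        } finally {
            lock.set(false);           // released on every path
        }
    }
}
```

Note this trades away the TTL-based duplicate-run protection, which is why the review asks for the intent to be made explicit rather than prescribing one shape.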
🤖 Prompt for AI Agents
In
`@api/src/main/java/com/samhap/kokomen/interview/service/InterviewSchedulerService.java`
around lines 37 - 44, The scheduler currently only calls
redisService.releaseLock(INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK) inside the catch
block so the lock remains until TTL expiry on success; decide whether this is
intentional: if not, move the releaseLock call into a finally block (ensuring it
runs after scanKeys(...) and syncInterviewViewCounts(...)) to always release
INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK; if it is intentional, add a clear comment
above the try/catch referencing INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK and
explaining that the lock is intentionally held until TTL to prevent duplicate
daily runs.
```java
int threadCount = 10;
ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
CountDownLatch latch = new CountDownLatch(threadCount);
AtomicInteger successCount = new AtomicInteger(0);
AtomicInteger failCount = new AtomicInteger(0);

// when
for (int i = 0; i < threadCount; i++) {
    executorService.execute(() -> {
        try {
            tokenFacadeService.useToken(member.getId());
            successCount.incrementAndGet();
        } catch (Exception e) {
            failCount.incrementAndGet();
        } finally {
            latch.countDown();
        }
    });
}

latch.await();
executorService.shutdown();
```
🧩 Analysis chain
🏁 Scripts executed (verification):

```shell
fd "DistributedLock" --type f -e java
git ls-files | head -20
cat common/src/main/java/com/samhap/kokomen/global/annotation/DistributedLock.java
cat common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java
cat api/src/test/java/com/samhap/kokomen/token/service/TokenFacadeServiceConcurrencyTest.java
rg "def useToken|public.*useToken" api/src/main/java/com/samhap/kokomen/token/service/ -A 5
fd "TokenFacadeService.java" --type f | grep -v Test
cat api/src/main/java/com/samhap/kokomen/token/service/TokenFacadeService.java
```
Verify that the default waitTime of 3 seconds is sufficient.

Ten threads must acquire the lock sequentially for the same memberId, so @DistributedLock's default waitTime (3 seconds) has to be long enough for every thread to acquire the lock. If each thread's transaction takes a while, or lock-wait time accumulates, some threads may time out and fail, making the test flaky.

If needed, set an explicitly larger waitTime on the useToken() method's @DistributedLock annotation, or tune it for the test environment's database performance.
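A back-of-envelope check for this concern: with N threads serialized on one key and each critical section holding the lock for about holdMillis, the last thread waits roughly (N - 1) * holdMillis, which must fit within waitTime. The per-transaction times below are assumed for illustration, not measured:

```java
// Rough worst-case wait for the last of N threads contending on one lock key,
// assuming each thread holds the lock for holdMillis once acquired.
public class LockWaitEstimate {
    public static long worstCaseWaitMillis(int threads, long holdMillis) {
        return (threads - 1L) * holdMillis;
    }

    public static boolean fitsInWaitTime(int threads, long holdMillis, long waitTimeMillis) {
        return worstCaseWaitMillis(threads, holdMillis) <= waitTimeMillis;
    }
}
```

With 10 threads and an assumed 500 ms per transaction, the last waiter needs about 4500 ms, already over a 3000 ms waitTime; at 200 ms per transaction the default would suffice.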
🤖 Prompt for AI Agents
In
`@api/src/test/java/com/samhap/kokomen/token/service/TokenFacadeServiceConcurrencyTest.java`
around lines 45 - 66, The concurrency test TokenFacadeServiceConcurrencyTest may
fail because the default `@DistributedLock` waitTime (3s) might be too short for
10 threads contending on the same memberId; update either the production method
annotation or the test to increase the lock waitTime: locate the useToken(...)
method (tokenFacadeService.useToken / the method annotated with
`@DistributedLock`) and set an explicit larger waitTime value sufficient for your
environment, or adjust the test to reduce contention (fewer threads or staggered
starts) so that all threads can acquire the lock within the configured waitTime.
```java
@Slf4j
@Order(0)
@Aspect
@Component
@RequiredArgsConstructor
public class DistributedLockAspect {
```
🛠️ Refactor suggestion | 🟠 Major
The annotation order does not follow the coding guidelines.

Per the guidelines, Lombok annotations come first, then Spring annotations. Currently @RequiredArgsConstructor (Lombok) sits below the Spring annotations.
♻️ 제안하는 수정
`@Slf4j`
-@Order(0)
-@Aspect
-@Component
`@RequiredArgsConstructor`
+@Order(0)
+@Aspect
+@Component
public class DistributedLockAspect {As per coding guidelines, "Order annotations: Lombok annotations first, then Spring annotations (more important annotations go below)".
🤖 Prompt for AI Agents
In
`@common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java`
around lines 20 - 25, The annotation order on DistributedLockAspect violates the
guideline—Lombok annotations must come before Spring annotations; move Lombok
annotations (`@Slf4j` and `@RequiredArgsConstructor`) to the top of the annotation
block and place Spring annotations (`@Component`, `@Aspect`, `@Order`(0)) after them,
keeping the more significant Spring annotation lower as needed so the final
order is Lombok first then Spring.
```java
private String resolveLockKey(ProceedingJoinPoint joinPoint, DistributedLock distributedLock) {
    MethodSignature signature = (MethodSignature) joinPoint.getSignature();
    String[] parameterNames = signature.getParameterNames();
    Object[] args = joinPoint.getArgs();

    EvaluationContext context = new StandardEvaluationContext();
    for (int i = 0; i < parameterNames.length; i++) {
        context.setVariable(parameterNames[i], args[i]);
    }

    Object resolvedKey = PARSER.parseExpression(distributedLock.key()).getValue(context);
    return LOCK_KEY_PREFIX + distributedLock.prefix() + ":" + resolvedKey;
```
If the SpEL expression evaluates to null, the lock key becomes "lock:prefix:null".

When PARSER.parseExpression(distributedLock.key()).getValue(context) returns null, string concatenation converts it to "null", producing a meaningless lock key. Different requests would then share the same bogus key, which can cause serious concurrency problems.

🛡️ Suggested fix
```diff
 private String resolveLockKey(ProceedingJoinPoint joinPoint, DistributedLock distributedLock) {
     MethodSignature signature = (MethodSignature) joinPoint.getSignature();
     String[] parameterNames = signature.getParameterNames();
     Object[] args = joinPoint.getArgs();
     EvaluationContext context = new StandardEvaluationContext();
     for (int i = 0; i < parameterNames.length; i++) {
         context.setVariable(parameterNames[i], args[i]);
     }
     Object resolvedKey = PARSER.parseExpression(distributedLock.key()).getValue(context);
+    if (resolvedKey == null) {
+        throw new IllegalArgumentException("분산 락 키를 해석할 수 없습니다. key: " + distributedLock.key());
+    }
     return LOCK_KEY_PREFIX + distributedLock.prefix() + ":" + resolvedKey;
 }
```

📝 Committable suggestion
```java
private String resolveLockKey(ProceedingJoinPoint joinPoint, DistributedLock distributedLock) {
    MethodSignature signature = (MethodSignature) joinPoint.getSignature();
    String[] parameterNames = signature.getParameterNames();
    Object[] args = joinPoint.getArgs();
    EvaluationContext context = new StandardEvaluationContext();
    for (int i = 0; i < parameterNames.length; i++) {
        context.setVariable(parameterNames[i], args[i]);
    }
    Object resolvedKey = PARSER.parseExpression(distributedLock.key()).getValue(context);
    if (resolvedKey == null) {
        throw new IllegalArgumentException("분산 락 키를 해석할 수 없습니다. key: " + distributedLock.key());
    }
    return LOCK_KEY_PREFIX + distributedLock.prefix() + ":" + resolvedKey;
}
```
🤖 Prompt for AI Agents
In
`@common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java`
around lines 57 - 68, The resolveLockKey method currently concatenates the SpEL
result directly, which can produce "lock:prefix:null" when
PARSER.parseExpression(distributedLock.key()).getValue(context) returns null;
update resolveLockKey to validate the resolvedKey after evaluation (call to
PARSER.parseExpression(...).getValue(context)) and if it is null or empty throw
a clear IllegalStateException (or custom runtime exception) describing the
failing annotation (include distributedLock.key() and distributedLock.prefix()
and the target method from the MethodSignature) so the caller fails fast instead
of generating a meaningless lock key; ensure you still use LOCK_KEY_PREFIX and
distributedLock.prefix() when building the key only after validation.
```java
) {
    Config config = new Config();
    config.useSingleServer()
            .setAddress("redis://" + host + ":" + port);
    return Redisson.create(config);
```
🧹 Nitpick | 🔵 Trivial
Production-oriented Redis connection settings are missing.

The current configuration lacks:

- Password/authentication: configure via the spring.data.redis.password property
- SSL/TLS: production environments (e.g. AWS ElastiCache) may require the rediss:// protocol
- Connection pool/timeouts: stability settings such as connectionPoolSize, connectionMinimumIdleSize, timeout, and retryAttempts

This works in the current development environment, but these settings will likely be needed for a production deployment.

🔧 Password configuration example
```diff
 @Bean(destroyMethod = "shutdown")
 public RedissonClient redissonClient(
         @Value("${spring.data.redis.host}") String host,
-        @Value("${spring.data.redis.port}") int port
+        @Value("${spring.data.redis.port}") int port,
+        @Value("${spring.data.redis.password:}") String password
 ) {
     Config config = new Config();
-    config.useSingleServer()
-            .setAddress("redis://" + host + ":" + port);
+    SingleServerConfig serverConfig = config.useSingleServer()
+            .setAddress("redis://" + host + ":" + port);
+    if (!password.isEmpty()) {
+        serverConfig.setPassword(password);
+    }
     return Redisson.create(config);
 }
```

🤖 Prompt for AI Agents
In `@common/src/main/java/com/samhap/kokomen/global/config/RedissonConfig.java`
around lines 17 - 21, Update the RedissonConfig method that currently creates
Config and returns Redisson.create(config) (the block creating Config, using
useSingleServer(), setAddress and returning Redisson) to read and apply
production-relevant properties: set the Redis password from
spring.data.redis.password via useSingleServer().setPassword(...); choose the
address scheme based on an SSL flag or property (use "rediss://" when TLS is
required); and configure connectionPoolSize, connectionMinimumIdleSize, timeout
and retryAttempts via
useSingleServer().setConnectionPoolSize(...).setConnectionMinimumIdleSize(...).setTimeout(...).setRetryAttempts(...).
Ensure these values are pulled from injected configuration/properties and
applied before calling Redisson.create(config).
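As a rough sketch of what the prompt above describes, a hardened bean definition might look like the following. The property names beyond host/port, the SSL flag, and the pool/timeout values are assumptions chosen to illustrate the shape, not verified project settings; tune them per workload.

```java
@Bean(destroyMethod = "shutdown")
public RedissonClient redissonClient(
        @Value("${spring.data.redis.host}") String host,
        @Value("${spring.data.redis.port}") int port,
        @Value("${spring.data.redis.password:}") String password,
        @Value("${spring.data.redis.ssl.enabled:false}") boolean sslEnabled
) {
    Config config = new Config();
    // Use rediss:// when TLS is required (e.g., ElastiCache with in-transit encryption).
    String scheme = sslEnabled ? "rediss://" : "redis://";
    SingleServerConfig serverConfig = config.useSingleServer()
            .setAddress(scheme + host + ":" + port)
            .setConnectionPoolSize(64)          // example value
            .setConnectionMinimumIdleSize(10)   // example value
            .setTimeout(3000)                   // response timeout in ms
            .setRetryAttempts(3);
    if (!password.isEmpty()) {
        serverConfig.setPassword(password);
    }
    return Redisson.create(config);
}
```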
 public Long incrementKey(String key) {
-    Long count = redisTemplate.opsForValue().increment(key, 1);
-    if (count == null) {
-        throw new RedisException("Redis 카운트 증가 실패. key: " + key);
-    }
-
-    return count;
+    RAtomicLong atomicLong = redissonClient.getAtomicLong(key);
+    return atomicLong.incrementAndGet();
 }

 public boolean expireKey(String key, Duration ttl) {
-    Boolean expireSuccess = redisTemplate.expire(key, ttl);
-    if (expireSuccess == null) {
-        throw new RedisException("Redis 키 만료 설정 실패. key: " + key);
-    }
-
-    return expireSuccess;
+    return redissonClient.getBucket(key).expire(ttl);
 }
🧩 Analysis chain
🏁 Script executed:

# First, let's examine the RedisService.java file to understand the full context
fd RedisService.java --type f

Repository: samhap-soft/kokomen-backend
Length of output: 143

🏁 Script executed:

# Run the provided verification script
echo "=== incrementKey usages ==="
rg -n --type java 'incrementKey\(' -C 3
echo ""
echo "=== expireKey usages ==="
rg -n --type java 'expireKey\(' -C 3

Repository: samhap-soft/kokomen-backend
Length of output: 3431

🏁 Script executed:

# Also check the full RedisService.java file to see codec usage patterns
cat common/src/main/java/com/samhap/kokomen/global/service/RedisService.java

Repository: samhap-soft/kokomen-backend
Length of output: 3391
Risk of runtime errors from the codec mismatch between setIfAbsent and incrementKey
Code verification shows exactly this problem in InterviewViewCountService.java (lines 39-43):
- Line 41: setIfAbsent(viewCountKey, String.valueOf(interview.getViewCount()), ...) stores a string value using StringCodec.INSTANCE
- Line 43: incrementKey(viewCountKey) then tries to access the same key through RAtomicLong
RAtomicLong and StringCodec use different serialization formats, so data stored as a string value cannot be deserialized by RAtomicLong, causing a runtime error.
Possible fixes:
- Use StringCodec.INSTANCE explicitly in incrementKey, or
- Use RAtomicLong instead of setIfAbsent to set the initial value, or
- Unify all operations on a consistent codec.
expireKey also does not specify a codec and may behave unexpectedly, so fix it together with the above.
🤖 Prompt for AI Agents
In `@common/src/main/java/com/samhap/kokomen/global/service/RedisService.java`
around lines 60 - 67, The Redis keys are accessed with inconsistent codecs
causing runtime deserialization errors: ensure the same codec is used for both
initial set and increments by updating incrementKey to obtain the atomic long
with StringCodec (use redissonClient.getAtomicLong(key, StringCodec.INSTANCE))
or change the initial write in InterviewViewCountService.setIfAbsent to use an
RAtomicLong for initialization; likewise make expireKey use a bucket with the
same codec (redissonClient.getBucket(key, StringCodec.INSTANCE)) so all
operations on viewCountKey share a consistent codec/serialization format.
common/src/main/java/com/samhap/kokomen/global/service/RedisService.java
Code Review
This PR implements distributed locks using Redisson to address concurrency issues, particularly improving concurrency control for token-related operations and interview processing logic. However, critical security and logic issues were identified. The token deduction logic is incomplete, allowing users to bypass token costs in text-based interviews and GPT model paths. A distributed lock leak was also found in the interview scheduler, which could prevent it from running correctly. Further improvements include correcting an incorrect Redisson dependency version in build.gradle and addressing code duplication in AuthService.
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'org.redisson:redisson:3.52.0'
try {
    Iterable<String> scannedKeys = redisService.scanKeys(INTERVIEW_VIEW_COUNT_KEY_PATTERN,
            REDIS_INTERVIEW_VIEW_COUNT_BATCH_SIZE);
    syncInterviewViewCounts(scannedKeys);
} catch (Exception e) {
    log.error("인터뷰 조회수를 DB에 반영하는 스케줄러 동작 중 에러 발생", e);
    redisService.releaseLock(INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK);
}
The distributed lock INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK is leaked if the try block completes successfully. It is currently only released in the catch block. This prevents the lock from being freed, blocking the scheduler for the duration of the lock's TTL (6 hours). To ensure the lock is always released, it should be moved to a finally block.
try {
Iterable<String> scannedKeys = redisService.scanKeys(INTERVIEW_VIEW_COUNT_KEY_PATTERN,
REDIS_INTERVIEW_VIEW_COUNT_BATCH_SIZE);
syncInterviewViewCounts(scannedKeys);
} catch (Exception e) {
log.error("인터뷰 조회수를 DB에 반영하는 스케줄러 동작 중 에러 발생", e);
} finally {
redisService.releaseLock(INTERVIEW_VIEW_COUNT_SCHEDULER_LOCK);
}| .orElseGet(() -> { | ||
| Member member = memberService.saveSocialMember(SocialProvider.KAKAO, | ||
| String.valueOf(kakaoUserInfoResponse.id()), | ||
| kakaoUserInfoResponse.kakaoAccount().profile().nickname()); | ||
| tokenService.createTokensForNewMember(member.getId()); | ||
| return new MemberResponse(member); | ||
| try { | ||
| Member member = memberService.saveSocialMember(SocialProvider.KAKAO, | ||
| String.valueOf(kakaoUserInfoResponse.id()), | ||
| kakaoUserInfoResponse.kakaoAccount().profile().nickname()); | ||
| return new MemberResponse(member); | ||
| } catch (DataIntegrityViolationException exception) { | ||
| Member member = memberService.readBySocialLogin(SocialProvider.KAKAO, | ||
| String.valueOf(kakaoUserInfoResponse.id())); | ||
| return new MemberResponse(member); | ||
| } | ||
| }); |
The logic that creates a member and handles the potential race condition is duplicated in the kakaoLogin and googleLogin methods. To improve maintainability and reduce code duplication, consider extracting this logic into a separate private helper method.
Example:
private MemberResponse createSocialMember(SocialProvider provider, String socialId, String nickname) {
try {
Member member = memberService.saveSocialMember(provider, socialId, nickname);
return new MemberResponse(member);
} catch (DataIntegrityViolationException exception) {
Member member = memberService.readBySocialLogin(provider, socialId);
return new MemberResponse(member);
}
}

Then it can be called like this:
.orElseGet(() -> createSocialMember(SocialProvider.KAKAO, ...))
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In
`@common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java`:
- Around line 68-70: Replace the BadRequestException thrown when resolvedKey is
null with a more appropriate server-side exception (e.g.,
IllegalArgumentException or IllegalStateException) in DistributedLockAspect;
include the failed SpEL expression or key template in the exception message to
aid debugging (e.g., mention the original expression variable used to resolve
the key along with a short descriptive message).
- Around line 37-42: The tryLock calls inside DistributedLockAspect (where you
set boolean acquired using distributedLock.leaseTime() and lock.tryLock(...))
need InterruptedException handling: wrap the tryLock invocations in a try/catch
that catches InterruptedException, call Thread.currentThread().interrupt() to
restore the interrupt status, and then rethrow a suitable unchecked exception
(e.g., new RuntimeException("Interrupted while acquiring distributed lock", e)
or a custom LockAcquisitionException) so the interruption is preserved and the
caller is informed; ensure both branches (the two lock.tryLock(...) calls) use
the same handling pattern.
boolean acquired;
if (distributedLock.leaseTime() == -1) {
    acquired = lock.tryLock(distributedLock.waitTime(), distributedLock.timeUnit());
} else {
    acquired = lock.tryLock(distributedLock.waitTime(), distributedLock.leaseTime(), distributedLock.timeUnit());
}
🧹 Nitpick | 🔵 Trivial
Please consider handling InterruptedException from tryLock.
lock.tryLock() can throw InterruptedException. It currently propagates via throws Throwable, but when an interrupt occurs the thread's interrupt flag is cleared. Upstream callers may then be unable to check the interrupt status, so it is better to catch InterruptedException, restore the interrupt status with Thread.currentThread().interrupt(), and rethrow it wrapped in an appropriate exception.
♻️ Suggested fix
     boolean acquired;
-    if (distributedLock.leaseTime() == -1) {
-        acquired = lock.tryLock(distributedLock.waitTime(), distributedLock.timeUnit());
-    } else {
-        acquired = lock.tryLock(distributedLock.waitTime(), distributedLock.leaseTime(), distributedLock.timeUnit());
+    try {
+        if (distributedLock.leaseTime() == -1) {
+            acquired = lock.tryLock(distributedLock.waitTime(), distributedLock.timeUnit());
+        } else {
+            acquired = lock.tryLock(distributedLock.waitTime(), distributedLock.leaseTime(), distributedLock.timeUnit());
+        }
+    } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        throw new BadRequestException("락 획득 중 인터럽트가 발생했습니다.");
     }

🤖 Prompt for AI Agents
In
`@common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java`
around lines 37 - 42, The tryLock calls inside DistributedLockAspect (where you
set boolean acquired using distributedLock.leaseTime() and lock.tryLock(...))
need InterruptedException handling: wrap the tryLock invocations in a try/catch
that catches InterruptedException, call Thread.currentThread().interrupt() to
restore the interrupt status, and then rethrow a suitable unchecked exception
(e.g., new RuntimeException("Interrupted while acquiring distributed lock", e)
or a custom LockAcquisitionException) so the interruption is preserved and the
caller is informed; ensure both branches (the two lock.tryLock(...) calls) use
the same handling pattern.
if (resolvedKey == null) {
    throw new BadRequestException("분산 락 키를 생성할 수 없습니다.");
}
BadRequestException is not appropriate for this situation.
A SpEL key resolution failure is a server-side configuration/programming error, not a bad client request. Using IllegalArgumentException or IllegalStateException instead of BadRequestException (HTTP 400) is semantically more accurate. It would also help debugging to include the failed key expression in the message.
♻️ Suggested fix

 Object resolvedKey = PARSER.parseExpression(distributedLock.key()).getValue(context);
 if (resolvedKey == null) {
-    throw new BadRequestException("분산 락 키를 생성할 수 없습니다.");
+    throw new IllegalArgumentException(
+            "분산 락 키를 해석할 수 없습니다. key: " + distributedLock.key());
 }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Object resolvedKey = PARSER.parseExpression(distributedLock.key()).getValue(context);
if (resolvedKey == null) {
    throw new IllegalArgumentException(
            "분산 락 키를 해석할 수 없습니다. key: " + distributedLock.key());
}
return LOCK_KEY_PREFIX + distributedLock.prefix() + ":" + resolvedKey;
🤖 Prompt for AI Agents
In
`@common/src/main/java/com/samhap/kokomen/global/aop/DistributedLockAspect.java`
around lines 68 - 70, Replace the BadRequestException thrown when resolvedKey is
null with a more appropriate server-side exception (e.g.,
IllegalArgumentException or IllegalStateException) in DistributedLockAspect;
include the failed SpEL expression or key template in the exception message to
aid debugging (e.g., mention the original expression variable used to resolve
the key along with a short descriptive message).
closed #319
Work details
Screenshots
Notes