Type: Bug
Resolution: Unresolved
Priority: Blocker
Environment: AWS Red Hat 8, Jenkins 2.462.2 hosted on Tomcat 9.0.90
1. First, I tried the following, which didn't work:
- Ran "yum install redis" and started Redis on the same host where Jenkins runs (a quick connectivity check is sketched after the job script below)
- In Jenkins, under System > Results Cache, set the URL to "redis://localhost:6379"
- Created a freestyle parameterized job with the test script below. Also enabled "Enable results cache for job" in the job configuration, leaving the parameter field blank so all parameters are taken into account.
#!/bin/bash

# Parameters
FILE_SIZE=${FILE_SIZE:-"1G"}                   # File size default is 1GB
SORT_COMMAND=${SORT_COMMAND:-""}               # Sort command parameters (if any)
COMPRESSION_TYPE=${COMPRESSION_TYPE:-"gzip"}   # Compression type default is gzip
DELAY=${DELAY:-"120"}                          # Delay time default is 120 seconds
REPEAT_SORT=${REPEAT_SORT:-"2"}                # Number of times to repeat the sorting
REPEAT_COMPRESSION=${REPEAT_COMPRESSION:-"2"}  # Number of times to repeat compression

echo "Parameters:"
echo "File size: $FILE_SIZE"
echo "Sort command: $SORT_COMMAND"
echo "Compression type: $COMPRESSION_TYPE"
echo "Delay: $DELAY seconds"
echo "Repeat sort: $REPEAT_SORT times"
echo "Repeat compression: $REPEAT_COMPRESSION times"

echo "Starting complex task..."

# Step 1: Generate a large random file
echo "Generating large random file of size $FILE_SIZE..."
base64 /dev/urandom | head -c $FILE_SIZE > large_file.txt

# Step 2: Sort the file multiple times (this will be disk and CPU-intensive)
for ((i=1; i<=REPEAT_SORT; i++)); do
  echo "Sorting the large file, iteration $i with options: $SORT_COMMAND..."
  start_time=$(date +%s)
  sort $SORT_COMMAND large_file.txt -o sorted_file_$i.txt
  end_time=$(date +%s)
  elapsed_sort=$((end_time - start_time))
  echo "Sorting iteration $i completed. Time taken: $elapsed_sort seconds."
done

# Step 3: Count the number of words in the last sorted file
echo "Counting words in the sorted file (last iteration)..."
start_time=$(date +%s)
word_count=$(wc -w < sorted_file_$REPEAT_SORT.txt)
end_time=$(date +%s)
elapsed_wc=$((end_time - start_time))
echo "Word count completed. Time taken: $elapsed_wc seconds."
echo "Total words: $word_count"

# Step 4: Compress the sorted file multiple times (Disk I/O-intensive task)
for ((i=1; i<=REPEAT_COMPRESSION; i++)); do
  echo "Compressing the sorted file from iteration $i using $COMPRESSION_TYPE..."
  start_time=$(date +%s)
  if [ "$COMPRESSION_TYPE" = "gzip" ]; then
    gzip -k sorted_file_$i.txt
  elif [ "$COMPRESSION_TYPE" = "bzip2" ]; then
    bzip2 -k sorted_file_$i.txt
  else
    echo "Unknown compression type: $COMPRESSION_TYPE"
    exit 1
  fi
  end_time=$(date +%s)
  elapsed_compress=$((end_time - start_time))
  echo "Compression iteration $i completed. Time taken: $elapsed_compress seconds."
done

# Step 5: Simulate a network delay or other long operation
echo "Simulating a delay of $DELAY seconds..."
sleep $DELAY

# Calculate total elapsed time
total_elapsed=$((elapsed_sort + elapsed_wc + elapsed_compress + DELAY))
echo "Total time for the complex task: $total_elapsed seconds."

# Cleanup
rm large_file.txt sorted_file_*.txt.* 2>/dev/null

echo "Complex task completed."
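Before pointing the plugin at Redis, it is worth confirming that Redis itself is up and reachable from the Jenkins host. A minimal check, assuming redis-cli came along with the yum redis package:

# Confirm the Redis service is running and answering on the default port
systemctl status redis --no-pager
redis-cli -h localhost -p 6379 ping   # expected reply: PONG

# Confirm something is actually listening on 6379
ss -tlnp | grep 6379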
- When executing the job, I get this error:
[Results Cache] (Pre-Checkout) Checking cached result for this job (hash: 4fb4ada78b068b92fb4774)
[Results Cache][WARNING] (Pre-Checkout) Unable to get cached result for this job (hash: 4fb4ada78b068b92fb4774). Exception: unknown protocol: redis
<code execution>
[Results Cache] (Post Build) Sending build result for this job (result: SUCCESS :: build: 38 :: hash: 4fb4ada78b068b566d3d69dc92fb4774)
[Results Cache][WARNING] (Update status: FAILURE) Unable to connect with cache server. Exception: unknown protocol: redis
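For what it's worth, "unknown protocol: redis" is the wording of Java's MalformedURLException: java.net.URL only accepts schemes with a registered handler (http, https, file, ftp, jar), so a redis:// URL is rejected before any connection is even attempted. That suggests the plugin expects an HTTP(S) endpoint rather than a raw Redis address. If a JDK 9+ is on the box, the exception can be reproduced outside Jenkins (a sketch; jshell reads the snippet from stdin via "-"):

# Reproduce the plugin's failure with plain JDK URL parsing
echo 'new java.net.URL("redis://localhost:6379");' | jshell -q -
# jshell reports: java.net.MalformedURLException: unknown protocol: redis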
2. I thought the port was the issue
- Moved Redis from port 6379 to 8181, which is open in our security groups (sketched after this list)
- Changed the port in the cache URL to 8181
- Triggered the job and once again saw the same error as above
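For reference, a sketch of how to move Redis to another port; the config path is from the RHEL 8 redis package and may differ elsewhere:

# Change the listening port in the Redis config and restart
sudo sed -i 's/^port 6379/port 8181/' /etc/redis.conf
sudo systemctl restart redis

# Verify Redis answers on the new port
redis-cli -p 8181 ping   # expected reply: PONG

The same error either way makes sense in hindsight: the scheme, not the port, is what the plugin rejects.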
3. I tried another approach
- Added a Python file and ran it to expose an HTTP endpoint in front of Redis (a curl smoke test follows the script)
from flask import Flask, request, jsonify
import redis

app = Flask(__name__)
client = redis.StrictRedis(host='localhost', port=8181, db=0)

@app.route('/cache/<key>', methods=['GET'])
def get_cache(key):
    value = client.get(key)
    if value:
        return jsonify({key: value.decode('utf-8')}), 200
    else:
        return 'Key not found', 404

@app.route('/cache', methods=['POST'])
def set_cache():
    data = request.get_json()
    key = data.get('key')
    value = data.get('value')
    client.set(key, value)
    return 'Value cached', 201

@app.route('/cache/<key>', methods=['DELETE'])
def delete_cache(key):
    client.delete(key)
    return 'Cache entry deleted', 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8282)
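A quick smoke test of this proxy (note it listens on 8282, while Redis sits behind it on 8181):

# Store a value through the proxy, then read it back
curl -X POST http://localhost:8282/cache -H 'Content-Type: application/json' -d '{"key": "test", "value": "hello"}'
curl http://localhost:8282/cache/test   # expected: {"test": "hello"}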
- Set the URL under System > Results Cache to "http://localhost:8181"
- This seems to work, giving a different result:
[Results Cache] (Pre-Checkout) Checking cached result for this job (hash: 4fb4ada78b068b566d3d69dc92fb4774)
[Results Cache] (Pre-Checkout) Cached result for this job (hash: 4fb4ada78b068b566d3d69dc92fb4774) is NOT_BUILT; found on build number -1
<code execution>
[Results Cache] (Post Build) Sending build result for this job (result: SUCCESS :: build: 37 :: hash: 4fb4ada78b068b566d3d69dc92fb4774)
[Results Cache] (Update status: SUCCESS) Build result sent
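One thing stands out here: the Flask app above binds to 8282, while the configured cache URL uses 8181, the port Redis itself is now on, so the plugin may be speaking HTTP straight at Redis rather than at the proxy. A quick way to see which process owns each port:

# Show which process is listening on the Redis port vs. the Flask port
sudo ss -tlnp | grep -E ':(8181|8282)'

Even if the URL is switched to the proxy on 8282, whether the plugin's REST contract matches the /cache/<key> routes above is a separate question to check against the plugin's documentation.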
- But in reality it does not work: the job execution time before and after is still the same, i.e. the cached result never shortens the build.
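To confirm whether anything was ever cached, Redis and the proxy can be inspected directly, using the hash the plugin logged:

# List whatever keys ended up in Redis (now on port 8181)
redis-cli -p 8181 --scan

# Ask the Flask proxy for the hash the plugin reported for this job
curl http://localhost:8282/cache/4fb4ada78b068b566d3d69dc92fb4774

If both come back empty, the "Build result sent" message above never actually landed in the cache, which would explain the unchanged execution times.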