I asked DeepSeek the same question, and it explained iNEXT output results:
`f1-f10`: Number of singletons, doubletons, etc. (species represented by exactly 1, 2,...10 individuals).
So, it makes sense now. Additionally, the sample coverage results were also coherent with expectations.
I had to add QT += svg to the .pro file.
By default Playwright saves downloads with a unique filename, but you can get the original name using SuggestedFilename from the Download API. After the download finishes, use download.PathAsync() to get the temp path, then rename the file with File.Move() to match the original filename.
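A minimal C# sketch of that flow, assuming the download is triggered by clicking a link (the selector and target folder are placeholders):

var download = await page.RunAndWaitForDownloadAsync(async () =>
{
    await page.ClickAsync("#download-link"); // hypothetical trigger element
});

// Playwright's temporary path, then rename to the suggested (original) name
var tempPath = await download.PathAsync();
Directory.CreateDirectory("downloads");
File.Move(tempPath, Path.Combine("downloads", download.SuggestedFilename));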
Since Homebrew has disabled fetching [email protected], try downloading it and following the README from this repo:
https://github.com/nanto88/mac-openssl
Solution is here
https://learn.microsoft.com/en-us/answers/questions/1517358/cannot-import-microsoft-graph-modules-import-modul
and here
https://github.com/microsoftgraph/msgraph-sdk-powershell/issues/1488
Ensure you have installed the module using Install-Module Microsoft.Graph -Scope CurrentUser (or AllUsers, whichever scope you prefer), then verify it is installed using Get-InstalledModule Microsoft.Graph.
Please increase $MaximumFunctionCount to the maximum of 32768, then load the Graph modules that you need (see the reference for existing modules); importing only the necessary modules will free up capacity.
You can run Import-Module Microsoft.Graph to make the cmdlets available for the PowerShell session.
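A minimal sketch of that sequence (the two sub-modules are just examples; import only what you actually need):

$MaximumFunctionCount = 32768
Import-Module Microsoft.Graph.Authentication
Import-Module Microsoft.Graph.Users
Get-InstalledModule Microsoft.Graph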
Though it's not explicitly stated in the documentation, I have encountered a similar case and could only receive the sent template once the payment method had been added. I also faced failures when the payment method's validity expired and had to renew it.
You need to upgrade @mui/material to version ^7.0. Just run npm i @mui/material@latest and follow the migration guide from https://mui.com/material-ui/migration/upgrade-to-v7/
First of all, fair warning: it's been some time since I last used Laravel. From what I have seen online, the limit isn't specific to the tmp file, as the whole Lambda filesystem is temporary. I believe your problem can be solved with simple queueing: you should be able to queue chunks of data in Laravel and upload them consecutively so that you don't hit the 512 MB quota. Let me know how it goes if you try it!
I have a similar issue: in prod, each time the screen changes my RootLayout is re-rendered. It doesn't happen in dev mode, but it happens when running with npx expo start --no-dev --minify. I tried with a new project from npx create-expo-app and just added these lines in the RootLayout, and the problem still occurs:
const count = useRef(0);
count.current += 1;
alert(`RootLayout rendered ${count.current} times`);
I need to do the opposite: I have a GNU/Linux executable which I can't make SYSV due to its dependencies, and I have a SYSV shared object. The executable fails to load my SYSV .so file, so I want to try compiling it as GNU/Linux. How can I force g++ to create a GNU/Linux object file instead of a SYSV one?
"check and set the projectKey
according to what is listed in the sample file. For me it was a combination of organization and actual project key, not the name of the project in SonarQube!"
This did the trick for me. I was getting the error Could not find a default branch for project with key... when using the project key only for -Dsonar.projectKey. After changing this to -Dsonar.projectKey=${SONAR_ORGANIZATION}_${CI_PROJECT_NAME}, the error went away. Thank you!
Use mkl_sparse_d_mv() from oneAPI MKL for sparse matrix-vector multiplication. Ensure the matrix is in CSR format and handle the descriptors correctly for accurate scalar product results.
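A minimal sketch using MKL's inspector-executor API, with a small hard-coded CSR matrix (the values are illustrative only):

#include <stdio.h>
#include <mkl_spblas.h>

int main(void) {
    /* 2x2 CSR matrix [[1, 2], [0, 3]] */
    MKL_INT rows_start[] = {0, 2};
    MKL_INT rows_end[]   = {2, 3};
    MKL_INT col_idx[]    = {0, 1, 1};
    double  vals[]       = {1.0, 2.0, 3.0};
    double  x[] = {1.0, 1.0}, y[2];

    sparse_matrix_t A;
    mkl_sparse_d_create_csr(&A, SPARSE_INDEX_BASE_ZERO, 2, 2,
                            rows_start, rows_end, col_idx, vals);

    struct matrix_descr descr;
    descr.type = SPARSE_MATRIX_TYPE_GENERAL;

    /* y = 1.0 * A * x + 0.0 * y */
    mkl_sparse_d_mv(SPARSE_OPERATION_NON_TRANSPOSE, 1.0, A, descr, x, 0.0, y);

    printf("y = [%f, %f]\n", y[0], y[1]);  /* expect [3, 3] */
    mkl_sparse_destroy(A);
    return 0;
}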
It might be that you are trying to grant access to qq@'%', but you are connected as root@localhost. So, try granting to the user at 'localhost', not '%'.
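A sketch (the database name and privilege list are placeholders):

GRANT ALL PRIVILEGES ON mydb.* TO 'qq'@'localhost';
FLUSH PRIVILEGES;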
Fix useCallback:
const handleRender = useCallback(() => {
renderCountRef.current += 1;
console.log('ExpensiveComponent render count:', renderCountRef.current);
}, []);
How do we enable the functionality so that tapping on the accessory/tab bar opens a different view in a modal like the Music app?
There is no boolean null in kdb+. Only true 1b and false 0b.
https://code.kx.com/q/basics/datatypes/
The short datatype exists, which is 2 bytes in size.
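A quick q session sketch showing the two boolean values, plus the null short (using a short such as 0Nh to represent a missing flag is a workaround, not a built-in boolean null):

q)1b    / true
q)0b    / false
q)0Nh   / null short, 2 bytes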
Using PIVOT
SELECT *
FROM(
SELECT id_request, Alert_Outcome, ROW_NUMBER() OVER(PARTITION BY id_request ORDER BY Alert_Outcome) AS rn
FROM Test
)
PIVOT (MAX(Alert_Outcome) FOR rn IN (1 AS Alert1, 2 AS Alert2, 3 AS Alert3))
ORDER BY id_request;
Using Conditional Aggregation
SELECT
id_request,
MAX(CASE WHEN rn = 1 THEN Alert_Outcome END) AS Alert1,
MAX(CASE WHEN rn = 2 THEN Alert_Outcome END) AS Alert2,
MAX(CASE WHEN rn = 3 THEN Alert_Outcome END) AS Alert3
FROM(
SELECT id_request, Alert_Outcome, ROW_NUMBER() OVER(PARTITION BY id_request ORDER BY Alert_Outcome) AS rn
FROM Test
) AS D
GROUP BY id_request
ORDER BY id_request
The explanation is in the documentation: it is a serialization issue. The model classes you get in migrations are very basic.
https://docs.djangoproject.com/en/5.2/topics/migrations/#historical-models
I currently have a similar problem with my Python script.
I have a "script.py" file in which I parse the HTML code of a webpage and look for different elements, among which this one:
<div class="stock available" title="Disponibilité">
<span>
En stock
</span>
</div>
Here is the part of my code looking for this element in "script.py":
import requests
from bs4 import BeautifulSoup
headers = {
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.57'
}
page = requests.get("target_website.com", headers = headers, verify = False)
html = BeautifulSoup(page.text, "html.parser")
element_dispo = html.find('div', {'title':'Disponibilité'})
element_dispo = element_dispo.get('class') if element_dispo else []
dispo = 'Dispo' if 'available' in element_dispo else 'Non dispo'
When running the script by itself, everything works as expected, but if I try to execute the "script.py" file from the "main_script.py" file, with the code below, then the wanted element is not found.
with open("script.py") as file:
exec(file.read())
Does anyone have any idea of what's happening?
Here are several ways to optimize training performance:
What I’ve shared are just a few of the available solutions. The YOLO website offers more comprehensive and detailed strategies for improving training performance. You can refer to the following link: https://github.com/ultralytics/ultralytics/blob/main/docs/en/yolov5/tutorials/tips_for_best_training_results.md
I think it's because of the keyboard.
Have you tried edgecolors="face"?
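A minimal sketch, assuming the plot in question is a pcolormesh (the data is random filler):

import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(10, 10)
# edgecolors="face" paints each cell's edge in its own face color,
# hiding the faint seams some renderers draw between cells
plt.pcolormesh(data, edgecolors="face", linewidth=0.5)
plt.colorbar()
plt.show()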
The other price replacement modes only work when you have multiple subscriptions and you're switching between them, not ONE subscription with multiple base plans.
Reference: https://developer.android.com/google/play/billing/subscriptions#change
As of August 2022, Microsoft now allows the Visual Studio Build Tools to be used for compiling C and C++ open source projects without requiring a license, even for commercial/enterprise users:
Using the latest version of plotly=6.1.2 and plotly.express=1.30.0, I was able to just remove the to_pandas() and your code worked as is. This is because plotly natively supports polars now.
import plotly.express as px
import polars as pl
tidy_df_pl = pl.DataFrame(
{
"x": [10, 10, 10, 20, 20, 20, 30, 30, 30],
"y": [3, 4, 5, 3, 4, 5, 3, 4, 5],
"value": [5, 8, 2, 4, 10, 14, 10, 8, 9],
}
)
print(tidy_df_pl)
pivot_df_pl = (
tidy_df_pl.pivot(index="x", on="y", values="value")
)
print(pivot_df_pl)
fig = px.imshow(pivot_df_pl)
fig.show()
As an alternative, you can also plot the heatmap with seaborn=0.13.2, which also supports polars now.
import seaborn as sns
sns.heatmap(pivot_df_pl, annot=True)
I managed to resolve this, after contact with AWS support, by adding "--provenance=false" together with "--output type=docker" as arguments to the docker buildx build command. This made it build in the V2 format supported by SageMaker. In our case the build was done via the aws-ecr CircleCI orb, using "extra_build_args", but adding "--provenance=false" may help in other build environments too.
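A sketch of the resulting command for a plain docker setup (the image tag and build context are placeholders):

docker buildx build --provenance=false --output type=docker -t my-image:latest .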
That's a really helpful explanation from Answer 1... my error seems to include:
https://www.example.com/index.html
https://example.com/index.html
but there is only one index.html file to put a canonical tag in...
If I add the canonical tag to index.html, why does the console keep showing the 4 examples above? In my head it should just display all 4 as https://www.example.com/ and never see the others... but that doesn't seem to be the case...
You have to replace enable_lazy_ghost_objects: true with enable_native_lazy_objects: true in config/packages/doctrine.yaml:
doctrine:
orm:
auto_generate_proxy_classes: true
naming_strategy: doctrine.orm.naming_strategy.underscore_number_aware
enable_native_lazy_objects: true
auto_mapping: true
See https://github.com/doctrine/orm/issues/11950 and https://github.com/doctrine/orm/pull/11853
For me, using /wp-json/wp/v2/ at the end of the URL worked.
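For example (the domain is a placeholder), the API root and a collection route look like:

https://example.com/wp-json/wp/v2/
https://example.com/wp-json/wp/v2/posts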
It looks like we can set GRPC_SSL_TARGET_NAME_OVERRIDE_ARG using CreateCustomChannel, but it would also be good to have an option to completely skip name verification:
grpc::ChannelArguments args;
args.SetSslTargetNameOverride("alias.namespace.svc.cluster.local");
args.SetString(GRPC_SSL_TARGET_NAME_OVERRIDE_ARG, "alias.namespace.svc.cluster.local");
grpc::CreateCustomChannel(addr + ":" + port, channel_creds, args);
Same thing happened to me with 2.26.0; it was not a proxy-related issue.
I solved it by uninstalling the latest version while it said there were two versions available.
The latest update was from 12.06.2025; uninstall that one on Windows 11.
While agreeing with Kaushik's answer, giving a height might not always be desirable, even with MediaQuery. You might consider wrapping your widget with LayoutBuilder, or wrapping the whole ExpansionTile with SingleChildScrollView.
https://github.com/nextauthjs/next-auth/issues/11544#issuecomment-2538494101
I used this fix. Change your middleware to be
export default await auth((req) => {
// my custom logic here
})
Try setting .frame(maxWidth:) for the content in the swipe action.
These may help in the future:
This uses the logical OR operator with assignment to accomplish the same thing in one line. If calls[ev] exists and is truthy, it assigns that value to list. If it doesn't exist or is falsy, it creates a new empty object, assigns it to calls[ev], and then assigns that same object to list.
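A small sketch of that pattern (calls and ev stand for the object and key from the code being discussed):

// Classic form: reuse calls[ev] if truthy, otherwise create it and assign it
const list = calls[ev] || (calls[ev] = {});

// Modern equivalent using the logical OR assignment operator (ES2021)
calls[ev] ||= {};
const list2 = calls[ev];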
Using round:
function floorWithPrecision($value, $precision = 1) {
return round($value - ( 0.5 / ( 10 ** $precision ) ), $precision, PHP_ROUND_HALF_UP);
}
/*
As a function
*/
echo floorWithPrecision(49.955, 2);
/*
49.95
*/
echo PHP_EOL;
/*
Same but no function both are keeping float
*/
echo round(49.955-0.005, 2, PHP_ROUND_HALF_UP);
/*
49.95
*/
In our case, a Cognito WAF rule was blocking requests due to NoUserAgent.
To make it world-accessible you will also have to connect it to an internet connection, and it should have a publicly accessible address just like any other web server.
If you really need access control, aren't you better off handing out passes and attaching some sort of card reader to verify that the user has access, as is the common case with barriers?
Unfortunately, what you request can only be done with Navisworks Desktop and its API; there is no cloud version. You can refer to this sample from my colleague.
Change the ad unit id to:
ca-app-pub-3940256099942544/9257395921
PyCharm is built for Python projects, so you will definitely not get a good experience when you try using it for Flutter development.
Visual Studio Code, Android Studio, or IntelliJ IDEA would work fine after you install the Flutter and Dart plugins/extensions. Personally, I use VS Code.
We opened a case with Microsoft and need votes now:
Post adaptive card and wait for a response not returning Message ID if it fails with Action Time out · Community
If you're deciding between Django and React for full-stack development, seeing how a real project is structured might help. I’ve built my portfolio using React for the frontend and FastAPI for the backend—it's a practical example of separating frontend and backend concerns in a full-stack setup.
You can view it here:
https://jerophin-portfolio.vercel.app
It may give you clarity on how modern frontend frameworks like React integrate into full-stack workflows.
Wow, amazing. This case is finally solved. Thanks to Roy, who opened this case in this forum, and to Guscarr; the solution really solved it.
The error is literally telling you the problem is not the input dtype but the input formatting:
"--custom_input_op_name_np_data_path is not specified, all input OPs must assume 4D tensor image data. INPUT Name: image_repr INPUT Shape: [1, 768] INPUT dtype: float32"
Your model is currently not using 4D tensor image data as input, but an input with shape [1, 768]. I recommend testing the "--custom_input_op_name_np_data_path" argument first and seeing if that is enough to complete the quantization process. By the way, is your model using an image as input?
sizeof calculates the size in bytes of a variable, an array, and so on, depending on its type.
Your two functions are similar, but one takes unsigned char and the other plain char. The difference is that unsigned means the values are only >= 0; it doesn't make the variable larger or smaller in size.
This is the size, in C, of "unsigned char" and "char":
char: one symbol. Takes 1 byte (8 bits).
unsigned char: one symbol. Takes 1 byte (8 bits). Any value from 0 to 255.
If you have other questions, or I need to add something, ask in the comments!
Have you read the Firebase documentation on how to set up code to receive messages? Specifically, check out the Foreground Messages and the Background Messages sections. I followed those instructions and I was able to receive notifications.
Why not
constexpr void noop(std::string_view)
{
}
int main()
{
noop("just for demonstrating usage of this noop implementation");
return 0;
}
?
It also allows for a non-comment comment string, similar to static_assert.
Thanks for the solution, Guscarr; it's all clear now.
I faced the same problem and was stuck for 2 hours; the only solution is to recheck your files. Do not create a file with a name that ultralytics already uses internally, which is exactly what causes this error.
Here is the error:
ImportError: cannot import name 'YOLO' from partially initialized module 'ultralytics'
And this is my project structure: "machine learning/object detection/multiprocessing.py"
The resulting import chain is: ultralytics → torch → multiprocessing (your file) → ultralytics again...
The solution is: rename the file from multiprocessing.py to something else, like my_process.py.
Also, what is a circular import? A circular import is when:
File A imports File B
File B imports File A
I had the same issue. What I did was run yarn install without the --frozen-lockfile flag, so the yarn.lock file got updated; then I added the flag back into my CI/CD config and it worked.
That sounds like overfitting. Your model is "too complex" for your training data: it learns the few training examples by heart but cannot generalize. Think of a 9th-order polynomial that you fit through a handful of data points lying on a line: it's correct at your support points but does wild swings everywhere else.
That's why the training accuracy gets better and better (your support points), while the validation accuracy stays at a low level (everywhere else).
I think it'd be better if you create a simple demo to illustrate the problem. It'll make the solution easier to find.
Your API works on the emulator because it maps localhost to your PC, but on a physical device, use your PC’s local IP instead of localhost and make sure the firewall allows the port.
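For illustration (the addresses are placeholders; 10.0.2.2 is the Android emulator's built-in alias for the host machine):

Emulator:        http://10.0.2.2:8080
Physical device: http://192.168.1.50:8080  (your PC's LAN IP, from ipconfig/ifconfig)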
I went to the folder and did shift + right click.
Then pressed open with terminal.
Then wrote there "code ."
And a new window appeared and it worked for me.
This error usually happens when you're installing the web3 package (or a dependency like lru-dict) and Python is trying to build a C extension, but your system doesn't have the required C++ compiler properly set up, even if you think you have the Microsoft Build Tools installed.
Here’s how to solve it step by step on Windows:
1. Install the Required C++ Build Tools (Visual C++ 14.0+)
Even if you've already installed some build tools, you may still be missing the C++ workload specifically.
Steps:
Go to: https://visualstudio.microsoft.com/visual-cpp-build-tools/
Download and run the installer.
In the installer:
Select "C++ build tools"
Ensure these components are checked:
MSVC v14.x (choose the latest)
Windows 10 or 11 SDK
C++ CMake tools for Windows
Finish installation and restart your terminal.
2. Upgrade pip, setuptools, and wheel
pip install --upgrade pip setuptools wheel
This ensures you're using versions that handle modern pyproject.toml
builds better.
3. Try a prebuilt binary
If you want to avoid compiling lru-dict yourself:
pip install lru-dict==1.1.8 --only-binary :all:
Then install web3:
pip install web3
This may work if there's a prebuilt binary available for your system.
4. Use a fresh virtual environment
Sometimes dependencies clash. Run:
python -m venv web3env
web3env\Scripts\activate
pip install --upgrade pip
pip install web3
5. Use WSL
If nothing else works, using WSL (Ubuntu on Windows) will let you avoid these Windows-specific compilation issues.
6. Use a conda environment
If you use Anaconda or Miniconda:
conda create -n web3env python=3.11
conda activate web3env
pip install web3
Conda environments often resolve C dependency issues more smoothly.
If you still face problems, let me know your:
Python version (python --version)
Pip version (pip --version)
Exact OS version (e.g., Windows 10/11, x64?)
In Visual Studio there is already the option Debug -> Attach to Process; just click it and select your Revit instance.
So I also tried it with the sample Python implementation from the National Rail community, and it did not work either. Hence I am 100% sure the error lies in Zeep's parsing.
Workaround: I used the Zeep HistoryPlugin to receive the raw XML from the request, then manually parsed the type into my Zeep response using the lxml package. This works pretty well if you only need one or two additional types.
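A minimal sketch of that HistoryPlugin workaround (the WSDL URL and operation name are placeholders):

from lxml import etree
from zeep import Client
from zeep.plugins import HistoryPlugin

history = HistoryPlugin()
client = Client("https://example.com/service?wsdl", plugins=[history])
client.service.SomeOperation()  # hypothetical operation

# Raw XML of the last response, available for manual parsing with lxml
envelope = history.last_received["envelope"]
print(etree.tostring(envelope, pretty_print=True).decode())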
I am currently having the same problem. My system has 24 vCPUs + 48 GB of RAM, and upon reaching 70% CPU usage and a load average of 20, despite no visible saturation, it starts to show degradation and slowness. I have already ruled out the DB, since I verified in nginx that requests taking more than 2 seconds to respond do not show up in the DB slow query log, and I validated that the queries made by those requests are processed in under 500 ms, so I infer the slowness and saturation come directly from the backend. Could you tell me what improvement you applied and whether it worked for you, so as to make better use of the resources?
For now I have increased my server's capacity to 48 vCPUs + 96 GB of RAM, but it still reaches 25% and the same thing happens again at certain moments, when I see many requests hitting my backend with the same pattern I described above, and I can see it is not taking advantage of the available processing power.
Did you find an answer? I have the same issue.
public virtual static TSelf operator +(TSelf left, TSelf right) =>
TSelf.Create(left.Value + right.Value);
Thank you for your example code.
Here is the script modified from yours, but it did not work:
@ECHO OFF
SETLOCAL
rem The following settings for the directories and filenames are names
rem that I use for testing and deliberately includes spaces to make sure
rem that the process works using such names. These will need to be changed to suit your situation.
SET sourcedir=d:\test
SET destdir=d:\test\result
SET filename1=%sourcedir%\sample.log
SET "beginfence=========IPMI Sensor Info Begin========"
SET "endfence=========IPMI Sensor Info End========"
SET /a outfilenum=0
SET "output="
FOR /f "usebackqdelims=" %%y IN (%filename1%) DO (
SET "line=%%y"
CALL :outline
ECHO x%%y|FIND "%endfence%">NUL
IF ERRORLEVEL 1 (
ECHO x%%y|FIND "%beginfence%">NUL
IF NOT ERRORLEVEL 1 (
SET "output=Y"
SET /a outfilenum+=1
CALL :outline
)
) ELSE (
rem endfence found
SET "output="
)
)
GOTO :EOF
:outline
IF DEFINED output >>"%destdir%\filebase_%outfilenum%.log" ECHO %line%
GOTO :eof
I ran the following command in CMD:
netstat -ano | findstr :8088
And got this result:
TCP 192.168.1.14:8088 13.107.219.254:443 CLOSE_WAIT 10652
Fix Option 1: Kill the process holding the port
taskkill /PID 10652 /F
Fix Option 2: Change your Spring Boot port
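For example, in src/main/resources/application.properties (8090 is an arbitrary free port):

server.port=8090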
As the internet continues to evolve, professionals must adapt to new technologies to remain competitive. One technology that is becoming increasingly popular among professionals is Node.js development. Node.js is a platform that allows developers to build scalable, fast, and efficient applications. By hiring a Node.js development company, professionals can take advantage of this technology and achieve significant benefits. In this article, I will discuss the benefits of hiring a Node.js development company, how Node.js development can take your business to the next level, the services offered by Node.js development companies, case studies of successful Node.js projects, best practices for Node.js development, project management for Node.js projects, pricing models for Node.js development, and how to choose the right Node.js development company for your business.
for more information visit https://idealpost.co.uk/
[Observable] is no longer under active development. It is largely unused in the JDK, and has, for the most part, been superseded by the 1.1 Beans/AWT event model.
You have already replaced the URL and hosted the service on EC2 at AWS, but when calling it from the Lambda function the outcome has been null or no value.
It might be due to AWS service configuration; try running the function and the EC2 instance in the same VPC network.
For EC2, check the security group's inbound rules, as by default AWS restricts any outside request.
Check that the Lambda function has a role with permission to access EC2 for auth.
GitHub Actions "Artifact storage quota has been hit" – Fix (Short Guide)
Check Usage:
Go to:
https://github.com/<owner>/<repo>/settings/actions
Delete Artifacts:
Use GitHub CLI:
gh run list
gh run delete <run-id>
Clear Cache:
gh cache list
gh cache delete <cache-id>
Use API if needed:
List & delete:
curl -H "Authorization: token TOKEN" https://api.github.com/repos/<owner>/<repo>/actions/artifacts
curl -X DELETE https://api.github.com/repos/<owner>/<repo>/actions/artifacts/<id>
Wait a few hours:
GitHub may take time to update usage.
Still stuck?
👉 Contact GitHub Support
What is the issue here? I am trying to point to a local project.
Although this issue is from a long time ago, I still have some experience to share. Even when using the latest arm-none-eabi-gcc toolchain, fpv5-d16 and fpv5-sp-d16 can result in differences in the generated assembly code. When using fpv5-d16, some strangely constructed functions may use double-precision instructions such as vcvt.f64.s32. This instruction causes a HardFault on the MCU I chose, which only supports a single-precision FPU. According to ARM's documentation, the FPU carried by the Cortex-M7 core comes in single-precision and single+double-precision options, so it is necessary to confirm the FPU type from the reference document of the selected CPU.
Of course, I am also curious how this compilation option works on clang. It seems that clang only has options for fpv5-d16. I haven't completed the compilation process using clang yet, but I suspect that clang has optimized the code generation for the FPU.
Another solution could be:
SELECT
COUNT(DISTINCT products_never_sold)*100.0/ COUNT(DISTINCT product_category) AS pct_product_categories_never_sold
FROM (
SELECT
product_category
, (CASE WHEN SUM(units_sold) IS NULL THEN product_category END) AS products_never_sold
FROM product_classes pc
LEFT JOIN products p
ON pc.product_class_id = p.product_class_id
LEFT JOIN sales s
ON s.product_id = p.product_id
GROUP BY 1
)
We also have the same problem as you guys. Please reply if you find a fix!
Another note I have to add to this: The phones that fail to load symbols also fail to show input textures on renderdoc.
So I'll add to the list of phones:
Not working:
S25 ultra
S25
S23 Ultra
Red magic 10 p
Working:
Pixel fold
Pixel 9,8,7,6
OnePlus 11
I had the same issue where variable colours/Pylance would seem to just stop working randomly.
I found the issue was that Pyright was indexing too many files (I was working in many repos at the same time, plus some other podman-related folders). I added a pyrightconfig.json file at the root of my workspace, excluded the paths with loads of files that I wasn't using at the time, and it seemed to fix the issue straight away.
pyright info: https://github.com/microsoft/pyright/blob/main/docs/configuration.md
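A minimal pyrightconfig.json sketch; the excluded globs are placeholders for whatever is bloating the index:

{
  "exclude": [
    "**/node_modules",
    "**/.venv",
    "other-repos/**"
  ]
}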
In my case I had multiple build variants. So, I had to:
Go to View -> Tool Windows -> Build Variants
Press "Re-import with defaults" button
Looks like they changed it again. I couldn't find it for a while, so I am posting this in case it helps the next person.
I used Beautiful Soup https://www.crummy.com/software/BeautifulSoup/bs4/doc/
I'm doing a Python course, if I'm wrong, sorry (I'm learning)
# pip install beautifulsoup4
# the package is imported under the name bs4
from bs4 import BeautifulSoup
# Read the contents of the HTML file
with open("./website.html") as file:
contents = file.read()
# Parse the HTML content using BeautifulSoup
soup = BeautifulSoup(contents, "html.parser")
print(soup) # print the entire HTML document
# Print the title of the HTML document
print(soup.title) # <title>My Website</title>
print(soup.title.string) # My Website
I compiled with the Prod profile rather than the Dev profile, and it now works.
Presumption: the Dev profile does not package all files into the Docker image, so they are not found when you run in another environment. That is a guess; if you can enlighten me as to what actually happens, that would be appreciated.
(Not a solution) I'm also trying to find it.
Name of the ringtone I want to find: Beat Plucker.
I know this is an old thread, and already addressed by more than one answer ... but I wanted to add one more consideration.
I found this thread because my C# app was executing fairly simple queries, but very slowly. While I was reading through this great info, I started a new query in my app ... and it resolved nearly immediately.
What had changed? I'd closed the Discord app. Could be a memory usage issue, I suppose, but I'm willing to just blame Discord for being a resource hog. :)
Sharing this in case it helps anyone else in my situation...
Since this is C++, the calling parameters are significant in selecting the exact variation of the function to call. That is, function names can be overloaded and use different numbers of parameters, or parameters of different types.
Are you sure you are calling the function with the right number and types of parameters?
automatic_stock_export_conversion($stock_export_id)
The function was being called twice from somewhere else.
I resolved this issue by disconnecting the VPN. One more thing: go to the computer's network connection settings, find the proxy setting, turn it off, and set the internet connection to the automatic default.
Thank you very much for helping the community.
Your answer to that problem was very helpful, as I couldn't install Flask with the pip command (pip install Flask).
pip: ERROR: Could not install packages due to an OSError: Missing dependencies for SOCKS support.
Line: 1 Character: 1
+ pip install request -i http://pypi.douban.com/simple --trusted-host p ...
+ CategoryInfo: NotSpecified: (ERROR: Could no... SOCKS support.:String) [], RemoteException
+ FullyQualifiedErrorId: NativeCommandError
WARNING: There was an error checking the latest version of pip.
Take a look at this link; it's exactly what you need:
https://codemia.io/knowledge-hub/path/set_up_of_hyperledger_fabric_on_2_different_pcs
Using the following versions worked for me:
xgboost == 2.1.4
shap == 0.45.1
Hope that helps!
There are compiler plugins for gcc and clang that randomize struct layouts (Randstruct). This is primarily used in the Linux Kernel: https://lwn.net/Articles/722293/
The clang PR: https://reviews.llvm.org/D121556
I'm unsure if these plugins have support for C++ structs/classes but it might be possible.
This just matches anything that's not alphanumeric or a space and replaces it with nothing.
@search_query = @search_query.tr('^A-Za-z0-9 ', '')
RUBY:001 > "sdfl &8~~!".tr('^A-Za-z0-9 ','')
"sdfl 8"
You cannot send messages to a chat if you are using Application permissions. Configure your app with delegated permissions and enable "Public client flows". Then you will be able to send messages.
Remember to assign the following delegated permissions:
Chat.ReadWrite
Chat.Create
User.Read
User.ReadBasic.All
In my case, I wasn't able to edit the shortcode, so I added the wpcf7_form_id_attr filter in my functions.php file like the following:
add_filter('wpcf7_form_id_attr', fn() => 'wpcf7-' . WPCF7_ContactForm::get_current()->id());
I had the same issue. I just clicked on TBAddUser under C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\DotNetInstallFiles; it added the user and opened Word, which showed the BI Publisher ribbon.
You should type in a cell: =SUM(VALUE(A1:A2)) and then press Ctrl+Shift+Enter.
Maybe that works.
Welcome to stackoverflow!
Expected functionality as per CORS
Cross-origin resource sharing (CORS) is a mechanism to safely bypass the same-origin policy, that is, it allows a web page to access restricted resources from a server on a domain different than the domain that served the web page.
The origin webserver has not allowed sharing, and it's unlikely that it will if asked, so try another method to grab the .psd file:
an IFRAME,
or a proxy,
or your own file downloader script running on your own web server (then link to that local file instead),
as per another answer here.
I hope I got you right and this will help you or other people who face the same issue.
I was trying to figure out how to set the matrix dynamically, because I have to check which services' dirs have been changed, save the services' names, and then run the pipeline for each service. This is what I did:
I have a first job that defines which services I have to rerun (define_changed_service) and a second job (trigger_test_services) which triggers the generated artifact:
define_changed_service:
image: alpine:latest
stage: build-push
before_script:
- apk add --no-cache git grep sed coreutils
- which git grep sed sort
script:
- SERVICES=$(git diff --name-only origin/main...HEAD | grep -E '^([^/]+/){3}(src|\.docker)/' | sed -E 's@^([^/]+/[^/]+/[^/]+)/.*@\1@' | sort -u | sed 's@/@-@g' | paste -sd "," -)
- echo "Services $SERVICES"
- echo "test-services:" > matrix.yml
- echo " parallel:" >> matrix.yml
- echo " matrix:" >> matrix.yml
- |
echo "$SERVICES" | tr ',' '\n' | while read srv; do
echo " - SERVICE: $srv" >> matrix.yml
done
- echo " script:" >> matrix.yml
- echo " - echo \"Running tests for service \$SERVICE\"" >> matrix.yml
- cat matrix.yml
artifacts:
paths:
- matrix.yml
trigger_test_services:
stage: build-push
needs:
- job: define_changed_service
artifacts: true
trigger:
include:
- artifact: matrix.yml
job: define_changed_service
strategy: depend
AND IT WORKS! I can't believe I actually DID IT. I dug into my browser history and came here to post the answer. This is my first answer on Stack Overflow, lol. Enjoy, and let me know if it helps. I added a screenshot on the off-chance:
My pipeline
Do you have the ispac files from a previous good build? I exported my problem children from the SSISDB and used the "create new project from ispac file" option in Visual Studio. They converted just fine that way, and I could then replace the project that wasn't working.
Oh and I also needed to change the db connections to use the new oledb drivers. Since 2019 was the last change exit for that version.
You can try disabling Enable Enterprise Plus features under the Query insights part of the instance settings and check whether you are able to access query insights. Also ensure that your project has the following roles:
roles/databaseinsights.monitoringViewer
roles/cloudsql.viewer
roles/cloudsql.editor
roles/cloudsql.admin
If the issue still persists, consider filing a bug issue so that the engineering team can look into it. Note that there’s no specific timeline when the fix will be available.
@jezrael's solution is correct, but it uses a deprecated alias for seconds:
Deprecated since version 2.2.0: Aliases H, BH, CBH, T, S, L, U, and N are deprecated in favour of the aliases h, bh, cbh, min, s, ms, us, and ns.
df['Date'] = pd.to_datetime(df['Date']).dt.floor('s')
For whoever struggles with similar issues with Bash name references, the good starting point is Greg's wiki:
Name reference variables are the preferred method for performing variable indirection. Older versions of Bash could also use a ! prefix operator in parameter expansions for variable indirection. Namerefs should be used unless portability to older bash versions is required. No other shell uses ${!variable} for indirection and there are problems relating to use of that syntax for this purpose. It is also less flexible.
With that, it becomes important to understand the dynamic scoping of Bash (thanks @jhnc), which is not particularly emphasized, but (quoting the same):
Indirection can only be achieved by indirectly evaluating variable names. IOW, you can never have a real unambiguous reference to an object in memory; the best you can do is use the name of a variable to try simulating the effect. Therefore, you must control the value of the ref and ensure side-effects such as globbing, user-input, and conflicting local parameters can't affect parameter names. Names must either be deterministic or validated in a way that makes certain guarantees. If an end user can populate the ref variable with arbitrary strings, the result can be unexpected code injection.
And importantly on main Bash page of Greg's wiki:
Name references are created with declare -n, and they are local variables with local names. Any reference to the variable by its local name triggers a search for a variable with the name of its content. This uses the same dynamic scope rules as normal variables. So, the obvious issues apply: the local name and the referenced name must be different. The referenced name should also not be a local variable of the function in which the nameref is being used.
The workaround for this is to make every local variable in the function (not just the nameref) have a name that the caller is unlikely to use.
Further, from Bash manual:
Local variables "shadow" variables with the same name declared at previous scopes. For instance, a local variable declared in a function hides a global variable of the same name: references and assignments refer to the local variable, leaving the global variable unmodified. When the function returns, the global variable is once again visible.
The shell uses dynamic scoping to control a variable’s visibility within functions. With dynamic scoping, visible variables and their values are a result of the sequence of function calls that caused execution to reach the current function. The value of a variable that a function sees depends on its value within its caller, if any, whether that caller is the "global" scope or another shell function. This is also the value that a local variable declaration "shadows", and the value that is restored when the function returns.
For example, if a variable var is declared as local in function func1, and func1 calls another function func2, references to var made from within func2 will resolve to the local variable var from func1, shadowing any global variable named var.
And to top this all off, I had quite a roller coaster ride with declare and its switches.
The issue is that it appears it's setting or unsetting variable attributes, but not all switches are equal in that sense:
Using ‘+’ instead of ‘-’ turns off the attribute instead, with the exceptions that ‘+a’ and ‘+A’ may not be used to destroy array variables and ‘+r’ will not remove the readonly attribute. When used in a function, declare makes each name local, as with the local command, unless the -g option is used. If a variable name is followed by =value, the value of the variable is set to value.
One option that stands out is -g, which really is NOT an attribute and therefore has to be repeated every time other attributes are manipulated; otherwise one is operating on a differently scoped variable.
Another, less logical, one is -n, which has confusing documentation:
-n
Give each name the nameref attribute, making it a name reference to another variable. That other variable is defined by the value of name. All references, assignments, and attribute modifications to name, except for those using or changing the -n attribute itself, are performed on the variable referenced by name’s value. The nameref attribute cannot be applied to array variables.
This attribute also "disappears" when not reused; e.g., one cannot do global_array=(x y z); f() { declare -n refarray; refarray=global_array; declare -r refarray; echo "${refarray[*]}"; }; f, as the second declare (with -r alone) loses the array; it would need to use -rn instead.
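A small sketch of the naming workaround quoted above from Greg's wiki (the _pa_ prefix is an arbitrary convention chosen here to dodge collisions):

#!/usr/bin/env bash
global_array=(x y z)

print_array() {
    # prefix every local name so the caller's variable names can't collide
    local -n _pa_ref=$1
    echo "${_pa_ref[*]}"
}

print_array global_array   # -> x y z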
from threading import Thread

# Pass the callable itself, not the result of calling it:
Thread(target=Ui_MainWindow, daemon=True).start()
if __name__ == '__main__':
main()
Use a dict that contains the shared parameters of both applications:
shared_parameter = {
'....': ...
}
def find_closest_enemy(enemies, player_pos):
closest = None
min_dist = float('inf')
for enemy in enemies:
dist = ((enemy[0] - player_pos[0])**2 + (enemy[1] - player_pos[1])**2)**0.5
if dist < min_dist:
min_dist = dist
closest = enemy
return closest
def aim(player_pos, enemy_pos):
direction = (enemy_pos[0] - player_pos[0], enemy_pos[1] - player_pos[1])
print(f"Aiming at direction: {direction}")
enemies_positions = [(120, 150), (200, 180), (140, 160)]
player_position = (130, 140)
target = find_closest_enemy(enemies_positions, player_position)
aim(player_position, target)
Okay, I have another question.
I was trying to use your program on a different web page of the same website, but I could not generate the output.
This is the URL of the new page:
This is the url of the new page :
"https://www.cocorahs.org/ViewData/ListDailyPrecipReports.aspx"
Now, this new web page has checkboxes; I think that's where I'm messing up.
I did a view-page-source and looked up the HTML tag names, but it seems I am missing something very conceptual. I am wondering what it might be!
This is my code. What changes should I make to it?
Thank You !!
import requests
from bs4 import BeautifulSoup
from requests_html import HTMLSession
import pandas as pd
from io import StringIO
from datetime import datetime
session = requests.Session()
response = session.get('https://www.cocorahs.org/ViewData/ListDailyPrecipReports.aspx')
soup = BeautifulSoup(response.content, "html.parser")
view_state = soup.find("input", {"name": "__VIEWSTATE", "value": True})["value"]
view_state_generator = soup.find("input", {"name": "__VIEWSTATEGENERATOR", "value": True})["value"]
event_validation = soup.find("input", {"name": "__EVENTVALIDATION", "value": True})["value"]
response = session.post('https://www.cocorahs.org/ViewData/ListDailyPrecipReports.aspx', data={
"__EVENTTARGET": "",
"__EVENTARGUMENT": "",
"__LASTFOCUS": "",
"VAM_Group": "",
"__VIEWSTATE": view_state,
"VAM_JSE": "1",
"__VIEWSTATEGENERATOR": view_state_generator,
"__EVENTVALIDATION": event_validation,
"obsSwitcher:ddlObsUnits": "usunits",
"frmPrecipReportSearch:ucStationTextFieldsFilter:tbTextFieldValue": "FL-BV-163",
"frmPrecipReportSearch:ucStationTextFieldsFilter:cblTextFieldsToSearch:0": "checked",
"frmPrecipReportSearch:ucStationTextFieldsFilter:cblTextFieldsToSearch:1": "",
"frmPrecipReportSearch:ucStateCountyFilter:ddlCountry": "allcountries",
"frmPrecipReportSearch:ucDateRangeFilter:dcStartDate:di": "6/13/2025",
"frmPrecipReportSearch:ucDateRangeFilter:dcStartDate:hfDate": "2025-06-13",
"frmPrecipReportSearch:ucDateRangeFilter:dcEndDate:di": "6/16/2025",
"frmPrecipReportSearch:ucDateRangeFilter:dcEndDate:hfDate": "2025-06-16",
"frmPrecipReportSearch:ddlPrecipField": "GaugeCatch",
"frmPrecipReportSearch:ucPrecipValueFilter:ddlOperator": "LessEqual",
"frmPrecipReportSearch:ucPrecipValueFilter:tbPrecipValue:tbPrecip": "0.15",
"frmPrecipReportSearch:btnSearch": "Search",
})
table = BeautifulSoup(response.content, "html.parser").find("table", id="ucReportList_ReportGrid")
if table is None:
raise RuntimeError("table#ucReportList_ReportGrid not found")
df = pd.read_html(StringIO(str(table)))[0]
print(df)
The biggest benefit in my opinion is that ListItem renders as <li> by default, whereas ListItemButton does not. This makes it easier to navigate the page if you're using a screen reader, making the website more accessible.
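A small sketch of combining the two so you keep both the <li> semantics and the button behavior (the click handler is a placeholder):

import { List, ListItem, ListItemButton, ListItemText } from "@mui/material";

export function Demo() {
  return (
    <List>
      {/* ListItem renders the semantic <li>; ListItemButton adds the clickable surface */}
      <ListItem disablePadding>
        <ListItemButton onClick={() => console.log("clicked")}>
          <ListItemText primary="Accessible item" />
        </ListItemButton>
      </ListItem>
    </List>
  );
}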